Summary – FinOps X

In this conversation, Joe Daly and Rob Martin from the FinOps Foundation discuss the latest developments in the FinOps space and FinOps X. They talk about the evolution of FinOps practices, the growth of the FinOps community, and the importance of the FOCUS project, which aims to standardize billing data from different cloud providers. They also discuss the adoption of FinOps practices by SaaS companies and the future of the FinOps space. The conversation covers updates and changes in the FinOps framework, including the addition of allied personas and the simplification of domains and capabilities, as well as the upcoming FinOps X conference and the value it provides for attendees: deep, concrete content, networking opportunities, and career advancement.

Keywords

FinOps, FinOps Foundation, FinOps X conference, podcast, cloud providers, FOCUS project, billing data, cloud-agnostic, tool-agnostic, open source project, SaaS companies, FinOps framework, allied personas, domains and capabilities, deep content, networking, career advancement, FinOps X Europe

Takeaways

FinOps practices have evolved to focus on making processes more operational and improving decision-making in businesses.
The FinOps Foundation has seen significant growth, with over 100 members, including major cloud providers.
The FOCUS project, an open billing standard, aims to consolidate billing data from different cloud providers and enable more effective cost allocation.
The adoption of FinOps practices by SaaS companies is increasing, with a focus on consumption-based licensing management.
The future of the FinOps space includes expanding the FOCUS project to include sustainability data and additional usage-based data.
The FinOps framework has been updated to include allied personas and simplified domains and capabilities.
The FinOps X conference provides valuable content, networking opportunities, and career advancement for attendees.
The FinOps X Europe conference in Barcelona offers a focused event for the European market.
The conversation also touches on the importance of small businesses attending the conference and the success stories of past attendees.

Sound Bites

“How do I make these processes much more operational? How do I affect the broader decision-making going on in my business?”
“The Focus project… will consolidate or specify how billing data should come from the different cloud providers.”
“The Focus project… essentially handles the data ingestion problem that has plagued a lot of organizations early on.”
“The two big changes that happened this year were the addition of a lot of allied personas.”
“We’ve simplified those down into four key domains.”
“What other things are you guys excited about for FinOps X?”

About Joe Daly & Rob Martin
Joe Daly is a Director of Community for the FinOps Foundation, which is kind of like sitting at the largest lunch table in Middle School, but with less vaping. He’s had illustrious careers as a CPA (the statute of limitations has passed for all tax returns he prepared, and he has let his CPA expire), in corporate taxation, IT finance & accounting, and IT portfolio management, plus a regrettable stint as Manager of Server Operations, and he has started two teams that perform what has come to be known as FinOps. He lives in Columbus, OH and enjoys copying off Rob. Go Captains!
Rob Martin is a FinOps Principal at the FinOps Foundation, which is kind of like being a Middle School Principal, but with less vaping. He’s had illustrious careers at Accenture, the US Department of Justice, Amazon Web Services, and Cloudability, and less lustrious jobs at a few other places. He now spends his time collecting, developing, and distributing FinOps content among the huge global community of people who deliver value from cloud. He lives in Leesburg, VA, and enjoys games (including the FinOps Boardgame!), hiking, and announcing for his son’s high school soccer team. Go Captains!
Chapters

00:00 Introduction and Overview
02:32 The Evolution of FinOps Practices
05:19 The Growth of the FinOps Community
06:18 The Importance of the Focus Project
09:29 Adoption of FinOps Practices by SaaS Companies
12:35 The Future of the FinOps Space
24:29 The Value of the FinOps X Conference
28:29 FinOps X Europe: A Focused Event for the European Market
29:32 Success Stories and Career Advancement at FinOps X

Learn More:
FinOps Foundation
FinOps Foundation on Twitter
FinOps X
Subscribe to The Cloud Pod
-
For this special edition of TCP Talks, Justin and Jonathan are joined by Travis Runty, CTO of Public Cloud at Rackspace Technology. In today’s interview, they discuss being accidentally multi-cloud, public vs. private cloud, cloud migration, and best practices for assisting clients with their cloud journeys.
Background
Rackspace Technology, commonly known as Rackspace, is a leading multi-cloud solutions provider headquartered in San Antonio, Texas, United States. Founded in 1998, Rackspace has established itself as a trusted partner for businesses seeking expertise in managing and optimizing their cloud environments.
The company offers a wide range of services aimed at helping organizations navigate the complexities of cloud computing, including cloud migration, managed hosting, security, data analytics, and application modernization. Rackspace supports various cloud platforms, including AWS, Azure, and GCP, among others.
Rackspace prides itself on its “Fanatical Experience” approach, which emphasizes delivering exceptional customer support and service. This commitment to customer satisfaction has contributed to Rackspace’s reputation as a reliable and customer-centric provider in the cloud computing industry.
Meet Travis Runty, CTO of Public Cloud for Rackspace Technology
Beginning his career with Rackspace as a Linux engineer, Travis has spent the last 15 years working his way through multiple divisions of the company, including 10 years in senior and director level positions. Most recently, Travis served as VP of Technical Support of Global Cloud Operations from 2020-2022.
Travis is extremely passionate about building and leading high-performance engineering teams and delivering innovative solutions. As a member of Rackspace’s technology council, Travis wrote an article for Forbes – Building a Cloud-Savvy Workforce: Empowering Your Team for Success – in which he discussed best practices for prioritizing workforce enablement, especially when it comes to training and transformation initiatives.
Interview Notes:
In the main show, TCP has been talking a lot about Cloud / hybrid cloud / multi-cloud and repatriating data back to on prem, and today’s guest knows all about those topics.
Rackspace has had quite a few phases in their journey to public cloud – including building a data center in an unused mall, introducing managed services, creating partnerships with VMware, an attempt to go head to head with the hyperscalers, and then ultimately focusing on public cloud and instead partnering with the hyperscalers.
Rackspace focuses on both private and public cloud: on the private side, it works mainly with VMware and OpenStack, while on the public side, it partners with the hyperscalers to assist clients with their cloud journeys.
Quotes from today’s show
Travis: “We want to make sure that when a customer goes on their public cloud journey, that they actually have a robust strategy that is going to be effective. From there, we’re able to leverage our professional services teams to make sure that they can realize that transformation, and hopefully there *is* a transformation, and it’s not just a lift and shift.”
Travis: “A conflict that we continuously have to strike the balance of is when do we apply a cloud native solution, and where do we apply the Rackspace elements on top. The hyperscalers’ technology is the best there is, and we’re probably not going to create a better version of ‘x’ than AWS does – nor do we want to.”
Travis: “We favor cloud native. Every single time we’re going to favor the platform’s native solution, unless the customer has a really really strong opinion about being vendor locked. Which sometimes they do. And if that’s the case we can establish a solution that gives them that portability. But for right now, the customers are generally preferring cloud native solutions.”
-
A bonus episode of The Cloud Pod may be just what the doctor ordered, and this week Justin and Jonathan are here to bring you an interview with Sandy Bird of Sonrai Security. There’s so much going on in the IAM space, and we’re really happy to have an expert in the studio with us this week to talk about some of the specifics of least-privilege security.
Background

Sonrai (pronounced “son-ree”; the name comes from the Gaelic word for data) was founded in 2017. Sonrai provides Cloud Data Control and seeks to deliver a complete risk model of all identity and data relationships, including activity and movement across cloud accounts, providers, and third-party data stores.
Meet Sandy Bird, Co-founder of Sonrai Security

Sandy is the co-founder and CTO of Sonrai and has had a long career in the tech industry. He was the CTO and co-founder of Q1 Labs, which was acquired by IBM in 2011, and he helped drive IBM’s security growth as CTO for its global security business.
Interview Notes:

One of the big questions we start the interview with is how IAM has evolved – and what effect those changes have had on identity models. Enterprises have long wanted least privilege, but the logs needed to support it were hard to find. In the cloud, however, *most* things are logged – and so least privilege became a real option.
Sonrai offers the first cloud permissions firewall, which enables one-click least-privilege management – important in an environment where the platforms operate so differently from each other. With this solution, you gain better control of your cloud access, limit your permissions and attack surface, and automate least privilege – all without slowing down DevOps.
Is the perfect policy achievable? Sandy breaks it down between human identities and workload identities, which are definitely separate. For workload identities, he claims, the perfect policy is probably possible. Human identity is hugely sporadic; even so, it is important to at least try to get to that perfect policy, especially when dealing with sensitive information. One of the more interesting data points they found was that less than 10% of identities with sensitive permissions actually used them – information you can use to weigh standing permissions against one-time use cases.
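The sub-10% finding lends itself to a simple mechanic. As a minimal sketch (hypothetical identity and permission names, not Sonrai's actual engine), flagging sensitive grants that were never exercised might look like this:

```python
# Hypothetical data: which sensitive permissions each identity holds,
# and which permissions it has actually used (e.g. from access logs).
SENSITIVE = {"iam:CreateUser", "kms:ScheduleKeyDeletion", "s3:DeleteBucket"}

def unused_sensitive(granted: dict, used: dict) -> dict:
    """Map each identity to the sensitive permissions it holds but never used."""
    result = {}
    for identity, perms in granted.items():
        idle = (perms & SENSITIVE) - used.get(identity, set())
        if idle:
            result[identity] = idle
    return result

granted = {
    "ci-role":   {"s3:GetObject", "s3:DeleteBucket"},
    "dev-alice": {"iam:CreateUser", "s3:GetObject"},
    "auditor":   {"s3:GetObject"},
}
used = {
    "ci-role":   {"s3:GetObject", "s3:DeleteBucket"},  # exercises its sensitive grant
    "dev-alice": {"s3:GetObject"},                     # never touched iam:CreateUser
}

print(unused_sensitive(granted, used))  # {'dev-alice': {'iam:CreateUser'}}
```

Grants flagged this way are the candidates for the kind of removal (or short-circuiting) discussed later in the interview.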
Sonrai spent a lot of time looking at new solutions to problems with permissions; part of this includes purpose-built integration, offering a flexible open GraphQL API with prebuilt integrations.
Sonrai also offers continuous monitoring; providing ongoing intelligence on all the permission usage – including excess permissions – and enables the removal of unused permissions without any sort of disruptions. Policy automation automatically writes IAM policies tailored to access needs, and simplifies processes for teams.
On demand access is another tool that gives on demand requests for permissions that are restricted with a quick and efficient process.
Quotes from today’s show

Sandy: “The unbelievably powerful model in AWS can do amazing things, especially when you get into some of the advanced conditions – but man, for a human to understand what all this stuff is, is super hard. Then you go to the Azure model, which is very different. It’s an allow-first model. If you have an allow anywhere in the tree, you can do whatever is asked, but there’s this hierarchy to the whole thing, and so when you think you want to remove something, you may not even be removing it, because something above may have that permission anyway. It’s a whole different model to learn there.”
Sandy: “Only like 8% of those identities actually use the sensitive parts of them; the other 92% just sit in the cloud, never being used. And so most likely, during that break-glass scenario in the middle of the night, somebody’s troubleshooting, they have to create some stuff, and they overpermission it. If we control this centrally, the sprawl doesn’t happen.”
Sandy: “There is this fear that if I remove this identity, I may not be able to put it back the way it was if it was supposed to be important… We came up with a secondary concept for the things that you were worried about… where we basically short-circuit them, and say these things can’t log in and be used anymore; however, we don’t delete the key material, we don’t delete the permissions. We leave those all intact.”
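Sandy's point about the allow-first hierarchy can be illustrated with a toy model (a sketch of the behavior he describes, not Azure's actual RBAC engine): an allow granted at any ancestor scope still applies below it, so revoking a permission locally may change nothing.

```python
def is_allowed(scope: str, permission: str, grants: dict) -> bool:
    """grants maps a scope path like 'mg/sub/rg' to the permissions granted there.
    A permission applies at a scope if it is granted there or at any ancestor."""
    parts = scope.split("/")
    return any(
        permission in grants.get("/".join(parts[: i + 1]), set())
        for i in range(len(parts))
    )

# 'Reader' is granted high in the tree; 'Contributor' only at the leaf.
grants = {"mg": {"Reader"}, "mg/sub/rg": {"Contributor"}}

print(is_allowed("mg/sub/rg", "Reader", grants))  # True: inherited from 'mg'

# Removing 'Reader' at the leaf would change nothing -- there is no leaf
# grant of it to remove, and the ancestor grant still applies.
```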
-
Andrew Krug from Datadog
In this episode, Andrew Krug talks about Datadog as a security observability tool, shedding light on some of its applications as well as its benefits to engineers.
Andrew leads Datadog Security Advocacy and Datadog Security Labs. Also a cloud security consultant, he started the Threat Response Project, a toolkit for Amazon Web Services first responders. Andrew has also spoken at Black Hat USA, DEF CON, re:Invent, and other venues.
Datadog Product Overview

Datadog is focused on bringing security to engineering teams, not just security people. One of the biggest advantages of Datadog and similar vendors is how they ingest and normalize various log sources; it can be very challenging to maintain a reasonable data structure for logs ingested from cloud providers.
Vendors try to provide customers with enough signals that they feel they are getting value while trying not to flood them with unactionable alerts. Also, considering the cloud friendliness for the stack is crucial for clients evaluating a new product.
Datadog is active in the open-source community and gives back to groups like the Cloud Native Computing Foundation. One of its popular open-source security tools is Stratus Red Team, which simulates attacker techniques in a clean-room environment. The criticality of findings is becoming a major topic: when evaluating criticality, it should be based on how much risk applies to the business and what can be done about it.
One of the things that high-maturity DevOps teams struggle with is automating incident handling or response to critical alerts, since this can cause configuration drift – which is why there is a lot of hesitation to fully automate things. Having someone who can make the hard choices is at the heart of incident-handling processes.
Datadog Cloud SIEM was created for customers who were already sending Datadog their logs. It is also easy to use: the UI is simple enough that you do not need to be a security expert. It is quite difficult to deploy a SIEM on completely unstructured logs, so being able to extract and normalize data to a set of security attributes is highly beneficial. Interestingly, the typical boring hygiene issues that are easy to detect still cause major problems for very large companies; this is where posture management comes in, addressing issues in time and preventing large breaches.
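That extract-and-normalize step can be sketched in a few lines. The output attribute names here are invented for illustration (not Datadog's actual schema); the input keys mirror real CloudTrail and Azure activity-log fields:

```python
def normalize(record: dict) -> dict:
    """Map a provider-specific log record onto a common set of security attributes."""
    if "eventName" in record:  # CloudTrail-style record
        return {
            "actor": record["userIdentity"]["arn"],
            "action": record["eventName"],
            "source_ip": record.get("sourceIPAddress"),
        }
    if "operationName" in record:  # Azure activity-log-style record
        return {
            "actor": record["caller"],
            "action": record["operationName"],
            "source_ip": record.get("callerIpAddress"),
        }
    raise ValueError("unknown log format")

aws_rec = {
    "eventName": "DeleteTrail",
    "userIdentity": {"arn": "arn:aws:iam::123456789012:user/eve"},
    "sourceIPAddress": "198.51.100.7",
}
azure_rec = {
    "operationName": "Microsoft.Storage/storageAccounts/delete",
    "caller": "eve@example.com",
    "callerIpAddress": "203.0.113.9",
}

# One detection rule can now run over both sources.
print([normalize(r)["action"] for r in (aws_rec, azure_rec)])
```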
Generally, Datadog is inclined towards moving these detections closer to the data that they are securing, and examining the application run time in real-time to verify that there are no issues. Datadog would be helpful to solve IAM challenges through CSPM which evaluates policies. For engineering teams, the benefit is seen in how information surfaces in areas where they normally look, especially with Datadog Security products where Issues are sorted in order of importance.
Security Observability Day is coming up on the 18th of April when Datadog products will be highlighted; the link to sign up is available on the Datadog Twitter page and Datadog community Slack. To find out more, reach out to Andrew on Twitter @andrewkrug and on the Datadog Security Labs website.
Top Quotes
“I think that great security solutions…start with alerts that you are hundred percent confident as a customer that you would act on”
“When we talk about the context of ‘how critical is an alert?’ It is always nice to put that risk lens on it for the business”
“Humans are awesome unless you want really consistent results, and that’s where automating part of the process comes into play”
“More standardization always lends itself to better detection”
-
In this episode, Ravi Mayuram highlights the functionality of Couchbase as an evolutionary database platform, citing several simple day-to-day use cases and particular advantages of Couchbase.
Ravi Mayuram is CTO of Couchbase. He is an accomplished engineering executive with a passion for creating and delivering game-changing products for startups as well as Fortune-500 industry-leading companies.
Notes
Couchbase set out to build a next-generation database. Data has evolved greatly with advancements in IT, and the goal was to build a database that connects people to newer technologies, addressing problems that relational systems never had to solve. The fundamental shift is that earlier systems were internally focused, built for trained users, whereas systems are now built directly for consumers – a shift that also plays out in the vastly larger number of consumers interacting with these systems compared to the few trained users before.
One of the key factors that sets Couchbase apart is its NoSQL database, which evolved by combining five systems: a cache and key-value store, a document store, a relational document store, a search system, and an analytical system. Secondly, Couchbase performs well in a geo-distributed manner: with one click, data is made available across availability zones. Lastly, all of this can be done at large scale in seconds.
Regarding the global database concept that Google talks about, a globally consistent database may not be needed by most companies; performance would be the biggest problem, as transaction speeds would be considerably lower. Couchbase does these transactions locally within the data center and replicates them on the other side. The main issue with relational systems is that they make you pay the price of every transaction, no matter how minor, but with Couchbase it is possible to pay that cost only for certain crucial transactions.
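The local-commit-then-replicate pattern Ravi describes can be sketched as a toy model (an illustration of the idea, not Couchbase's actual XDCR implementation): writes commit in the local data center immediately and are shipped to the remote side asynchronously.

```python
import queue

class LocalDC:
    """A toy data center: a local document store plus a replication queue."""
    def __init__(self):
        self.store = {}
        self.outbound = queue.Queue()  # mutations awaiting cross-DC replication

    def write(self, key, doc):
        self.store[key] = doc          # commit locally: no cross-DC round trip
        self.outbound.put((key, doc))  # replicate asynchronously later

def replicate(src, dst):
    """Drain the outbound queue into the remote data center."""
    while not src.outbound.empty():
        key, doc = src.outbound.get()
        dst.store[key] = doc

east, west = LocalDC(), LocalDC()
east.write("user::42", {"name": "Ada"})
print("user::42" in west.store)  # False: the write did not wait for the remote DC
replicate(east, west)
print(west.store["user::42"])    # {'name': 'Ada'}
```

The write pays only local latency; consistency across data centers arrives when replication catches up, which is the trade-off described above.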
Edge has become part of enterprise architecture, to the point that people now have edge-based solutions. Two edges are emerging: the network edge and the tool edge, where people are interfacing. Couchbase has built a mobile database that is available on devices, with sync capability.
For a consumer, the primary advantage of bringing data closer is latency. Often, data has to go through firewalls and multiple hops, which delays it; with Couchbase, the user simply continues to have access to the data while Couchbase synchronizes it in the background.
One of the applications of Couchbase in healthcare is insulin tracking. Devices that monitor insulin must work everywhere you go; Couchbase Lite does the insulin tracking, keeps the data even in the absence of a network, and later syncs it for review by healthcare professionals. This is also useful in operating rooms where the network is not accessible. The real benefit is seen when the data eventually gets back to the server and can be interpreted to make decisions on patient care.
The Couchbase Capella service runs in the cloud and allows clients to specify what data should be sent to the edge and what should not be. This offers privacy and security measures, so that even if a device is lost or damaged, the data is secure and can be recovered. Effectively managing edge devices requires solving a lot of problems to make it easier.
One of the concerns for anyone coming to Couchbase Capella is the expense of extracting data from the cloud; however, Couchbase is available on all three major cloud providers. Also, with Couchbase there is no need to keep replicating data, as you can work on the data without moving it, which largely saves costs.
Other use cases for Couchbase include information for flight bookings, flight crew management systems, hotel reservations, and credit card payments. To learn more, visit the Couchbase website. There is also a free trial for the Couchbase Capella Service.
Top Quotes
“The modern database has to do more than what the old database did”
“Managing edge in devices is not an easy thing, and so you have to solve a lot of problems so it becomes easier”
-
Revolutionizing Observability with New Relic
In this episode, Daniel explains a new strategy towards observability aimed at contextualizing large volumes of data to make it easier for users to identify the root cause of problems with their systems.
Daniel Kim is a Principal Developer Relations Engineer at New Relic and the founder of Bit Project, a 501(c)(3) nonprofit dedicated to making tech accessible to under-served communities. His job is basically to get developers excited about Observability, and he hopes to inspire students to maximize their potential in tech through inclusive, accessible developer education. He is passionate about diversity and inclusion in tech, good food, and dad jokes.
Show Notes

First, it is important to differentiate between monitoring and observability. Monitoring is when code is instrumented to send data to a backend to answer preconceived questions. With observability, the goal is to instrument your system so you can later ask questions that were not in mind during instrumentation; if something new comes up, you can find the root cause without modifying the code. There are many levels of things to check when troubleshooting to find the cause of a problem, and this is where observability comes in.
There are different use cases for logs, metrics, and traces. Logs are files that record events, warnings, or errors; however, logs are ephemeral, which means there is an increased risk of losing a lot of data, so a system needs to be in place to move logs to a central source. Another issue is that logs are poorly structured data. Logs are good to have as the last step of observability; metrics and traces can help narrow down where to search in the logs to solve an issue.
Metrics are measurements that reflect the performance or health of your applications. They give an overview of how the systems are doing but tend to not be very specific in finding the root cause of a problem; other forms of data have to be adopted to get a clear picture. This is where Traces come in.
Traces are pieces of data that track a request as it goes through the system. Because of this, they can identify the root cause of an error or the bottlenecks slowing down the system. However, traces are very expensive, so sampling is used, which reduces their accuracy. Correlating information from logs, metrics, and traces gives a full, clear picture so debugging can be carried out successfully. A lot of New Relic customers strive to correlate more pieces of data to find errors faster.
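That correlation step can be sketched with a shared trace ID (hypothetical records, not New Relic's data model): a slow trace is joined with the log lines it produced to narrow down the root cause.

```python
logs = [
    {"trace_id": "t1", "level": "ERROR", "msg": "db timeout"},
    {"trace_id": "t2", "level": "INFO", "msg": "ok"},
]
traces = [
    {"trace_id": "t1", "duration_ms": 5200},  # well over any latency target
    {"trace_id": "t2", "duration_ms": 40},
]

def slow_trace_logs(threshold_ms):
    """Return the log lines emitted by traces slower than the threshold."""
    slow = {t["trace_id"] for t in traces if t["duration_ms"] > threshold_ms}
    return [line for line in logs if line["trace_id"] in slow]

print(slow_trace_logs(1000))  # [{'trace_id': 't1', 'level': 'ERROR', 'msg': 'db timeout'}]
```

The metric (duration) points at *which* requests are bad; the joined logs say *why* — the division of labor described above.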
To balance the right data at the right time with the right cost, the first step when collecting large amounts of data is to find out how your organization is leveraging that data. A quick audit to identify which data is useful helps, and can be done monthly or quarterly; unstructured logs in particular are difficult to aggregate.
In the cloud native space, being able to be compatible with as many people as possible will determine the winners because there are many projects people use in production. Projects that are compatible with many other projects are the way forward.
APM is still very useful for understanding application performance, and in the future, data from all sources will be correlated to figure out the cause of a problem. Getting value from the system early involves having solid infrastructure monitoring and installing APM. The real power of full-stack observability is getting data from different parts of your stack so you can diagnose which part of your system is going wrong. Leveraging AI to make sense of large amounts of data for engineers is going to be a huge plus.
A lot of vendors claim that their alert systems will automatically generate all alerts for you, but this is not true, because they cannot know your team’s needs. It is ultimately up to your team to set up the alerts that make up an observability strategy; those who invest time into setting this up get the most ROI from New Relic. Engineers need to figure out which metrics are important to them.
About New Relic One

New Relic One was made to be a singular observability platform where people can correlate various pieces of data to get more context, making the work easier for engineers. The goal was to help engineers find the information they need as fast as possible, especially during a crisis.
This kind of third-party solution is much more applicable for processing millions of logs or larger data, compared to native tools. It also provides a large amount of expertise around observability and curated experiences around machine-generated data.
The future seems to have customers tilting towards open-source observability solutions. OpenTelemetry is one example of this, as it brings together all observability offerings in open source in a whole stack observability experience.
Visit the New Relic website to learn more about it. To learn more about ways to use New Relic, check out the New Relic Blogs.
Top Quotes

“Having so much data and information about your system, you’re able to quickly figure and rule out issues that you may be having that’s causing the issue”
“A really good practice when we think about controlling cost is getting a really good idea of how you’re actually using the data that you’re collecting”
“Having structured logs is really helpful when we’re talking about observability”
“Something that I’ve realized in the tenure that I’ve been working in observability is that when something sounds too good to be true, it probably is”
-
Spatial Simulations with AWS SimSpace Weaver
In this episode, Peter sits with Rahul Thakkar to discuss the revolutionary AWS SimSpace Weaver, highlighting its unique function and applications across several industries.
Rahul Thakkar is the Director and General Manager of Simulation Technologies at Amazon Web Services. Before AWS, he held multiple executive roles at Boeing, Brivo, PIXIA, and DreamWorks Animation. He is an inventor and global technology executive with a background in cloud computing, distributed and high-performance computing, media and entertainment, film, television, defense and intelligence, aerospace, and access control.
His film credits include Shrek, Antz, and The Legend of Bagger Vance. In 2002, he was part of the team that won the Academy Award for Best Animated Feature for Shrek, and in 2016, at the 88th Annual Academy Awards, he received a Technical Achievement Award.
Notes

AWS SimSpace Weaver enables customers to run extremely large-scale spatial simulations without having to manage any of the underlying infrastructure. It also removes the complexity of managing entity state as entities move about the simulation. Previously, such simulations had to be run sequentially, in a cumbersome manner, over years; now they can be done in parallel, in weeks. Different organizations have tried out this functionality for several scenarios, and the results have been impressive. This value was largely made possible by working from customer feedback.
Rahul’s interest in the cloud came later in a career that started in the R&D department of the motion picture industry, where he created many of the complex graphics in movies. He later moved to a small start-up that was developing technologies for satellite imagery and mapping, and from there to aerospace. Along the way, he observed that it is very expensive for companies dealing with simulations to maintain their own infrastructure; it drains resources and distracts from the company’s main focus. Eventually, he knew he had to use AWS, and now he works there.
The service is built by consuming the other primitive tools within AWS. There is also the ability to write to S3 so that customers can save their simulations out, which lets them review how a simulation played out.
Relating this new service to the metaverse, Rahul believes each organization has its own vision of what the metaverse should be; AWS built the tools to empower these organizations to build their own metaverses. Despite the possibility of competition from Azure or GCP, the focus of AWS remains on customers and their needs, innovating on their behalf.
Identifying new problems that the service would be very applicable for is a great challenge that AWS relies on customers for, to help AWS envision where they want to go with the service. There are definitely many companies running simulations but it is hard to predict how many would migrate to the AWS SimSpace Weaver because it is still a new product. Nonetheless, a lot of industries are interested in this new service. These include smart cities, organizations ranging from local to federal or international, logistics and supply chains, large-scale event planning, or any situation where there is a need to simulate a large problem with digital replicas of the real world.
Top Quotes

“The fact that we worked from the customer backwards is something that allowed us to deliver the kind of value that they’re getting right now with AWS SimSpace Weaver”
“Each one of these organizations have their own vision of a metaverse”
-
Applying and Maximizing Observability
In this episode, Christine talks about her company, Honeycomb, which runs on AWS, with the goal of promoting observability for clients interested in the performance of their code or trying to identify problem areas that need to be corrected.
Christine Yen is the Co-Founder and CEO of Honeycomb. Before founding Honeycomb, she built analytics products at Parse/Facebook and loved writing software to separate signals from noise. Christine delights in being a developer in a room full of ops folks. Outside of work, Christine is kept busy by her two dogs and wants your sci-fi & fantasy book recommendations.
Notes

Honeycomb is an observability platform that helps customers understand why their code is behaving differently from what they expected. The inspiration for the software came after Christine’s previous company was acquired by Facebook, where they saw how good tooling made it easy to identify problems in large volumes of data within a short time. This encouraged them to build the tool and make it available to all engineers.
If the first wave of DevOps was Ops-people learning how to automate their working code, the second wave would be helping developers learn to operate their code. Honeycomb is designed intentionally to ensure that all types of engineers can make sense of the tool.
Honeycomb has always come up with ways for customers to use AWS products and get the data reflected in Honeycomb to be manipulated. Over the last few months, they have ensured that it is possible for clients to plug into CloudWatch Log and CloudWatch metrics, and redirect data directly from AWS products into Honeycomb instead. Clients can also use Honeycomb to extract data based on what their applications are doing. This applies to performance optimization, experimentation, or any situation where a company wants to try a code to see how it performs on production. The focus remains on the application layer. Before Honeycomb, no one was using observability in this context.
The pricing of Honeycomb is based on the volume of data, which makes it predictable and understandable. Unlike when the pricing scale is based on the fidelity of the data, which can be quite expensive.
Challenges within the observability space: how do you help new engineers learn from the seasoned engineers on the team through the paper trails those seasoned engineers leave behind? Solving this means enabling teams to orient new engineers on their systems without every question landing back on the senior engineers.
An off-the-shelf AI approach may not suit Honeycomb because of the context involved: training effective machine-learning models relies on vast amounts of easily classifiable data, and that does not exist in the world of software, where every engineering team’s systems are different from every other’s. Honeycomb is instead interested in using AI to help users know what questions to ask.
With Honeycomb, usage patterns are much more dependent on the curiosity and proficiency of the engineering team; while some engineers who are used to getting answers directly may just leave the software, those who have a culture of asking questions will benefit more from it.
Top Quotes
“Not having to predict ahead of time what matters is making such a difference in our ability as engineers to get ahead of issues, identify them quickly, resolve them.”
“We’re out of a world where any individual engineer holds the entire system in their head.”
“Observability is the only way forward as we make our worlds ever less predictable.”
-
In this TCP Talks episode, Justin Brodley and Jonathan Baker talk with Anthony Lye, Executive Vice President and General Manager of NetApp’s Public Cloud Services Business Unit. An industry veteran for over 25 years, Anthony has been at the forefront of cloud innovation for over half this time.
Anthony shares his insight on the importance of embracing disruption in the tech industry. He discusses how NetApp seized the right opportunities, got lucky, and came to dominate the cloud space, even while younger app developers may have no idea what it is.
“They don’t comprehend — nor should they — the complexities of infrastructure,” Anthony explains. “And I really love the fact that we’ve been able to democratize ONTAP, because it’s cool, but you’ve got to be really smart to get the best out of it. And so we just decided we would be the smart ones.”
What’s really behind innovation in tech? “The context is where you are. And people like to think that the world operates through evolution. And sometimes it’s revolution; sometimes, you have to do something radically different.”
Anthony also discusses cloud computing trends, the importance of customer focus, what NetApp does differently, and the multi-cloud.
Featured GuestName: Anthony Lye
What he does: Anthony is Executive Vice President and General Manager of the Public Cloud Services business for NetApp
Key quote: “You’ve got to put the customer in the middle of your business. And you’ve got to go where they want you to go. If you don’t, your hold may last a while, but it won’t last. And I still can’t believe that what we did we got away with, and we’ve gotten so much time to build so aggressively. It’s great.”
Where to find him: LinkedIn
Key Takeaways
There are two halves of the cloud space: the IT half and the app half. IT people see huge opportunities in extending data centers. App people want to, and can, build and run their own stacks, and Anthony took advantage of this. “They don’t have to wait for the IT people,” Anthony says. “And I wanted to build something for them — I didn’t want to just hang out on the IT side. I went and asked a whole bunch of application people: what do you need?”
NetApp sees huge business growth potential on the horizon with recurring revenues. “Recurring revenues are the best kinds of revenues you can get,” Anthony clarifies. But people don’t always consider this. “Because they’re different, they sort of ignore them — they don’t like them. And before they know it, they’re years behind and caught. And passed as if they’re standing still.”
The customer is, and always should be, front and center of any business. For NetApp, the software and implementation are the same everywhere, but the unique integrations are what make the service stand out.
With SaaS, it’s now the second “S” — the service — that matters most. “The rule of SaaS is the other Henry Ford thing: you can have it in any color you want, as long as it’s black,” Anthony says. “We’re going to run it for you as a service, and you’re going to love it,” NetApp tells customers, increasing developer productivity and providing a much higher release cadence.
Resources
Here’s what was mentioned in the episode:
ARM: the most widely used family of instruction set architectures, with over 200 billion ARM chips produced.
CloudCheckr: an end-to-end cloud management platform with cost, security, resource, and service functionality.
CI/CD (continuous integration/continuous delivery): software development approaches often used in tandem for rapid code delivery and deployment.
Databricks: a data warehouse and machine learning company.
Elastic Block Store (EBS): AWS’s scalable block storage service.
EMC: a storage and information infrastructure company, now part of Dell Technologies.
Filament: a cloud-native platform for data analysis.
Grafana: an open-source platform for visualizing metrics, logs, and traces.
HP Cloud: Hewlett-Packard (HP)’s discontinued set of cloud services.
ITIL: a library of best practices for IT service management.
Magic Quadrant: a Gartner research methodology.
Marketing Resource Management (MRM): software for marketers to manage their assets, with planning, budget, production, and program and campaign execution capabilities.
Managed Service Providers (MSPs): companies that remotely manage customers’ IT infrastructure and cloud services.
ONTAP: NetApp’s data management software.
Platform-as-a-Service (PaaS): a model for running applications without managing the underlying infrastructure.
Request for Proposal (RFP): a document a business issues to solicit bids from potential vendors.
Amazon S3: AWS’s cloud object storage service.
SAP: an enterprise software company best known for its ERP systems.
Apache Spark: a unified analytics engine for large-scale data processing.
Spot: NetApp’s cloud automation and cost optimization software.
VMware Cloud (VMC): a jointly engineered cloud service between AWS and VMware.
Top quotes in this episode
[8:43] “People talk about us now as a cloud native storage product. They don’t ever think now that it’s something we lifted and shifted, because we didn’t. We really reimagined and engineered it to be unlike any other cloud service.”
[12:02] “You’ve got to disrupt yourself, I think you’ve got to come at it from a very different perspective, it’s hard. Most people don’t really like change — they’d like to know that tomorrow will be like yesterday. And what makes this industry so good is how often it’s disrupted. The frequency of disruption in tech is just amazing … there’s been so much innovation … now every company is a software company — everybody. It’s been really, really fun.”
[31:45] “In the cloud, you have to show how good you are before you get to really take on some of these mega relationships that we had. And we had to show Amazon that we were good enough to be in the cloud — that we were fast enough. And we could prove to Amazon that we could release at cloud speed and operate and run a cloud business. And honestly, you kind of have to beat them a few times. And we did that. And we did something with Microsoft no one’s ever done. We did something with Google no one’s ever done. And now we’ve done something with Amazon that no one’s ever done. So we’ve got these three industry firsts. And it was probably more about just us proving ourselves time and time again, and pestering them time and time again, and proving to them time and time again, that we were a credible partner for a very core and fundamental service.”
-
In this TCP Talks episode, Justin Brodley and Jonathan Baker talk with Jonathan Heiliger, co-founder and partner at Vertex Ventures: an early-stage venture capital firm backing innovative technology entrepreneurs.
Earlier in his career, at just 19, Jonathan co-founded web hosting provider GlobalCenter and served as CTO. He went on to hold engineering roles at Walmart and Danger, Inc., the latter of which was acquired by Microsoft. He was also Vice President of Infrastructure and Operations at Facebook (now Meta), and a general partner at North Bridge Ventures, whose portfolio included Quora, Periscope, and Lytro (which was acquired by Google).
At Vertex Ventures, Jonathan has helped cutting-edge companies like LaunchDarkly and OpsLevel revolutionize the tech space with continuous delivery and IT service management solutions. Jonathan shares his insights into the shifting market of IT services and explains why decentralizing infrastructure management can help digitally native companies operate at a faster pace.
According to Jonathan, the question of IT service infrastructure isn’t being adequately addressed. Without properly defining service ownership, businesses looking to scale run the risk of siloing critical knowledge, and losing track of services networks.
Jonathan also discusses his own experiences running infrastructure at Facebook (oops, Meta), the merits of both centralized and decentralized IT services management, and how he and his partners at Vertex Ventures approach new investments.
Featured GuestName: Jonathan Heiliger
What he does: Jonathan is a co-founder and partner at Vertex Ventures, an early stage venture capital firm that backs B2B software entrepreneurs. He held his first CTO role at 19, and has previously worked for Walmart; Danger, Inc.; Facebook (soon to be known as Meta); and North Bridge Ventures.
Key quote: “We need systems to help us build bridges from the world of paper-based and in-memory to scaling to tens and then hundreds of microservices. It’s that pain point of tracking all the info about apps and their services, dependencies, ownership and versions that I think is this big problem lurking below the surface.”
Where to find him: LinkedIn | Twitter
Key Takeaways
As companies rely on an increasing number of IT services, Jonathan says it’s imperative that technology leaders establish ownership of IT service management and meticulously track their software and vendor partners.
According to Jonathan, this kind of IT management is still done in a fairly rudimentary way, even for larger companies. “Every engineering team — even the most well run engineering orgs — the majority of them use Excel spreadsheets to track who owns what service, and even what services may talk to one another,” he says. He sees this as a big problem that’s going to catch up with companies one day.
When considering whether a centralized or decentralized IT management service infrastructure is best for you, Jonathan suggests doing a deep dive on your business objectives.
For example, digitally native businesses, which rely on a vast network of microservices, might work better with a decentralized infrastructure. Non-digitally native brands, on the other hand, might benefit from a centralized system to ensure continuity in the technology.
Avoiding vendor lock-in — i.e. becoming too dependent on a single service provider — is critical in keeping your business flexible and agile, but it can be tricky. Jonathan describes a situation at Facebook where vendor lock-in became a problem. The solution the company settled on involved bringing data center creation and management in-house.
He argues, “the biggest way to manage vendor lock-in is through open source and through open communities.”
Resources
Here’s what was mentioned in the episode:
Vertex Ventures: an early-stage venture capital firm that backs enterprising founders.
OpsLevel: Microservice catalog software that helps companies track service ownership.
Kafka: an open-source distributed event streaming platform.
Kubernetes: an open-source system for managing containerized applications.
Cloudflare: an infrastructure company that provides content delivery and DDoS mitigation.
Amazon Lex: a service for building conversational interfaces using voice and text.
Kustomer: an omnichannel customer support platform for responses and requests.
Elasticsearch: a distributed search and analytics engine built on Apache Lucene.
MongoDB: a document-oriented NoSQL database.
Confluent: a stream data platform that enables users to organize and manage data.
HashiCorp: provides open-source tools that enable cloud-computing infrastructure management.
Prometheus: Open-source monitoring and alerting system.
Grafana: open-source analytics and monitoring solution.
LaunchDarkly: enables safe continuous software delivery and feature management for customers.
Gitpod: allows users to continuously build git branches without waiting for dependencies.
GitLab: a DevOps lifecycle tool with a Git repository manager
-
In this TCP Talks episode, Justin Brodley and Jonathan Baker talk with Josh Stella, co-founder and CEO of Fugue, a cloud security company that helps businesses run faster on the cloud without breaking any rules.
Josh shares insights from Fugue’s State of Cloud Security 2021 Report, and highlights key themes, including preventative security measures, automation, and engineering-first compliance.
According to the report, within the next two years, all but 1% of security breaches will be caused by misconfiguration of cloud resources. Josh and his team at Fugue aim to minimize these mistakes by simplifying cloud security through a systems-based approach.
One way to streamline security, Josh notes, is to take advantage of automation. With cloud environments becoming increasingly complex, relying on pure knowledge will soon be untenable. Josh urges business leaders to embrace automation to reduce the risk of human error in their security systems.
Josh also discusses how businesses can declutter security tech stacks, the “land grab” happening in the cloud, and trends he predicts will shape the future of cloud compliance.
Featured GuestName: Josh Stella
What he does: Josh is the co-founder and CEO at Fugue, a cloud security company on a mission to help businesses move faster by ensuring safe cloud environments. He has over a decade of experience in the cloud security space, including positions at Amazon Web Services and in national security.
Key quote: “If Fugue as a software vendor and as domain experts in cloud security can’t make your job a lot easier through tooling, then we’re not doing our job.”
Where to find him: LinkedIn | Twitter | YouTube
Key Takeaways
While compiling the State of Cloud Security 2021 Report, Josh and his team at Fugue interviewed over 300 organizations. They found that as cloud environments have grown and become more complex, organizations are seeing more instances of misconfigurations.
According to the report, 49% of respondents experienced over 50 misconfigurations per day. Another interesting detail: For the first time since Fugue started compiling its annual report, Identity and Access Management (IAM) was the number one concern regarding misconfigurations.
Josh argues that automation is the next step in making cloud environments more secure. Fugue aims to make security automation easy by providing pre-built rules and templates to automatically check code and monitor deployments.
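As an illustration of the kind of pre-built rule Josh describes, a misconfiguration check can be written as policy-as-code and run against an infrastructure configuration before deployment. This is a generic sketch of the idea, not Fugue’s actual rule engine or rule format:

```python
# Illustrative policy-as-code check: flag publicly readable storage buckets.
# A generic sketch of the concept, not Fugue's actual rules or schema.

def check_public_buckets(resources):
    """Return a list of violations found in a parsed infrastructure config."""
    violations = []
    for res in resources:
        if res.get("type") != "storage_bucket":
            continue
        acl = res.get("acl", "private")
        if acl in ("public-read", "public-read-write"):
            violations.append(
                f"{res['name']}: bucket ACL '{acl}' allows public access"
            )
    return violations

# A hypothetical parsed configuration with one safe and one risky bucket.
config = [
    {"type": "storage_bucket", "name": "logs", "acl": "private"},
    {"type": "storage_bucket", "name": "assets", "acl": "public-read"},
]

print(check_public_buckets(config))
```

Running a check like this in CI, before anything is deployed, is the "shift left" that makes misconfiguration a build failure rather than a breach.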
Looking forward, Josh is optimistic that automation will become a key piece in enterprise cloud security. “The thing I would like to see a change in is the attitude that security problems are because people are screwing up … [I would like to see people] thinking about how to actually solve these problems, which is through computer science and automation,” he says.
One way to enable automation is to put engineering departments in charge of compliance, as opposed to traditional security teams. According to the State of Cloud Security 2021 Report, more than 66% of businesses are delegating security policy to engineering teams — a trend Josh hopes to see continue.
He says that today, engineering and DevOps teams work so fast that security teams struggle to keep pace. Businesses that haven’t moved responsibility for security over to these teams are more likely to experience those potentially dangerous misconfigurations.
-
In this episode of TCP Talks, Justin Brodley and Jonathan Baker talk with Miles Ward, founder of Google Cloud’s Solutions Architecture practice. Currently, Miles leads cloud strategy and solutions capabilities as Chief Technology Officer for consulting and IT services company SADA.
Startups have helped increase the popularity of open source products among enterprise businesses. Changing systems can be a struggle for larger, more traditional companies. But legacy businesses also want to accomplish more in a shorter amount of time, which requires shedding clunky, legacy systems.
“Those building blocks make it so that companies operate at a certain rate of change. And I know zero companies asking me to slow down their rate of change,” he notes.
The evolution of product compatibility is also discussed. Product sellers need to help customers understand how much of their system fits and how much doesn’t fit in one solution compared to another, Miles says. Customers need to have a clear understanding of what’s involved and how much work it’s going to be.
In addition, Miles shares his thoughts on the role of the CTO as well as the benefits of rebranding a product everybody hates.
Featured GuestName: Miles Ward
What he does: As CTO of SADA, Miles leads the cloud strategy and solutions capabilities. His remit includes delivering next-generation solutions to challenges in big data and analytics, application migration, infrastructure automation, and cost optimization; and engaging with customers on their most complex and ambitious plans around Google Cloud.
Key quote: “There used to be big crunchy legacy impediments to adoption… But it’s 2021 — live in the future, that shit works. Now it’s more about making it easy enough and predictable enough to consume that folks can unlock the business justification.”
Where to find him: LinkedIn | Twitter
Key Takeaways
Gone are the days when products from different technology providers, like Oracle or SAP, couldn’t work together to solve a customer problem. These days, companies need to make products easy and predictable enough that customers can unlock the business justification straight away.
For Google Cloud, the next phase of growth will require investment in higher-level relationships with customers. Miles references his experience with current Google Cloud CEO Thomas Kurian (TK). “TK is super focused about spending the majority of his time face to face with customers,” he says. “He’s not doing it to be a glad-hand, he’s deal making and proposal pushing and thinking through the machinery of how to build higher level relationships.”
There’s a huge opportunity to help “the real world divisions inside of real world businesses” — not just serve the IT department. Miles says, “I think there’s a bunch of cloud providers that are working really hard now to facilitate the plumbing and governance and oversight and security controls and operational management of what is — not a hybrid between their data center, and a cloud — a hybrid between their SaaS fleet and the couple of things they still need to run on their own.”
Worried about leveraging a Google solution and then having them pull the plug on it? Miles doesn’t think you should be too concerned about deprecation. “I think they have heard this feedback really loud and clear,” he says. “There’s a whole bunch of people that have made it really obvious that if -
Note: This interview is part of a paid sponsorship between Open Raven and The Cloud Pod.
In this TCP Talks episode, Justin Brodley and Jonathan Baker talk with Mark Curphey, Chief Product Officer and Co-Founder of Open Raven, a fully integrated platform for security and privacy workflows.
Featured GuestName: Mark Curphey
What he does: Mark is Chief Product Officer and Co-Founder of Open Raven.
Where to find him: LinkedIn | Twitter
Listen to Mark discuss the Open Raven strategy for protecting your data, the use of serverless workflows to scale to enormous workloads, ensuring compliance using the Open Policy Agent, and more.
Key Points
Discover – Classify – Monitor – Protect
“The cloud has moved in incredibly fast; security has been moved off to the side and as a result companies don’t know where their data is, breaches are happening constantly, and these are the big things that get companies in the press.”
Macie“Every single customer that we spoke to in the early stages said, a) It doesn’t work b) It’s ridiculously expensive, and c) It’s only on s3 buckets. Well, whilst The Register is always reporting breaches of S3 buckets, my customer data is in RDS! That’s a real piece of the problem for me; sure, it’s popular, but I shouldn’t just be thinking about trying to protect myself from getting on The Register.”
Part of the challenge is that data is not one thing… I may have a name, I may have an address, I may have a card number. There are all sorts of different parameters, and the data could be stored in multiple ways. So you have the concept of data adjacency: if I have a CCV number, and an expiry date and name associated with it, that might be something which is real.
With Macie, even if you just use the straight matching techniques, you don’t have control over the adjacency thing, so that’s why a lot of the basic trivial cases get completely missed.
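The data-adjacency idea can be sketched in a few lines of code: a card-like number on its own is weak evidence, but the same number near an expiry date and a name is far more likely to be real cardholder data. This is an illustrative sketch of the concept only, not Open Raven’s classifier, and the regexes are deliberately crude:

```python
import re

# Crude patterns for illustration; real classifiers are far more careful.
CARD = re.compile(r"\b(?:\d[ -]?){15}\d\b")        # 16-digit card-like number
EXPIRY = re.compile(r"\b(0[1-9]|1[0-2])/\d{2}\b")  # MM/YY expiry date
NAME = re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b")  # naive "First Last"

def classify(record: str) -> str:
    """Score a record by how many adjacent signals co-occur with a card number."""
    hits = sum(bool(p.search(record)) for p in (CARD, EXPIRY, NAME))
    if CARD.search(record) and hits >= 2:
        return "likely-cardholder-data"   # card number plus adjacent context
    if CARD.search(record):
        return "possible-card-number"     # matches the pattern, no context
    return "no-match"

print(classify("4111 1111 1111 1111 exp 04/27 Jane Doe"))
print(classify("order id 4111111111111111"))
```

With straight pattern matching alone, both records above look identical; adjacency is what separates the likely real cardholder record from a stray 16-digit order ID.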
Security at the edge?“If you are protecting data in the cloud, you have to wire the tools into the cloud to understand which IAM has access, which routes, which security groups can give you access? That’s the only way to understand the context to protect it. You can’t do it in some sort of edge device.”
Getting started with Open Raven
Visit openraven.com to get a 15-day trial. Spin up a SaaS instance and go play.
“We already think we’re a better choice than Macie, but don’t think that’s the end goal. Come partner with us, work with us on the end goal, because those are things that we love; solving massive, complex, and interesting problems.”
https://www.openraven.com/thecloudpod
-
In this TCP Talks episode, Justin Brodley and Jonathan Baker talk with Forrest Brazeal, a Senior Manager at A Cloud Guru, a cloud education platform that has attracted more than two million students. A Cloud Guru offers full certification training and technical deep dives for Amazon Web Services, Microsoft Azure, Google Cloud Platform, and more.
Forrest talks about why companies need to invest in training to reap the benefits of “cloud fluency,” and how A Cloud Guru is contributing to cloud adoption success at Fortune 500 companies.
While discussing knowledge gaps, Forrest highlights how important it is to clearly identify which cloud services and knowledge areas you’re going to become certified in to avoid missing important high level areas.
“Going through the certification training and prep really helps you to avoid those blind spots that will keep you from speaking effectively to the other teams that you work with,” says Forrest.
Featured Guest
Name: Forrest Brazeal
What he does: Forrest is a Senior Manager at cloud learning platform A Cloud Guru.
Key quote: “When I look at people who are going from the data center to the cloud today, they are thinking about the cloud as something that’s going to take undifferentiated heavy lifting away from them.”
Where to find him: LinkedIn | Twitter | Personal Website
Key Takeaways
Be strategic with your cloud certifications. If you’re trying to reach a certain number of certifications, make sure you have a plan or you might end up with gaps in your knowledge. “It’s so easy to do, right?” Forrest says. “As I’m sitting on one team, and I’m touching one technology all the time, I could go two, three, four years and never know anything about networking because all I’m doing is databases, right? Or never know anything about compute, because all I’m doing is storage. Going through the certification training prep really helps you to avoid those blind spots that will keep you from speaking effectively to the other teams that you work with.”
College grads beware: just because you have a Computer Science degree doesn’t mean you’ll be writing algorithms all day. A day-to-day programming career includes negotiating with people and figuring out what the requirements of the business are, not just writing algorithms. Forrest says, “It’s figuring out requirements, and it’s writing the same line of code and then deleting it because it turns out the business requirement changed.”
Scaling to zero, where a function can be reduced to zero replicas when idle and brought back to the required number of replicas when needed, is one example of how underlying principles adopted by the serverless community, which might have been considered “radical” five or six years ago, are now seen as welcome wisdom in the broader cloud community.
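The scaling-to-zero principle from that last takeaway can be sketched as a toy autoscaler decision function. Real systems (Knative or KEDA, for example) are far more involved; this is purely illustrative, and the thresholds are made up:

```python
# Toy sketch of "scaling to zero": an autoscaler that drops a service to
# zero replicas once it has been idle long enough, and scales back up on
# demand. Illustrative only; the constants below are arbitrary assumptions.

IDLE_SECONDS_BEFORE_ZERO = 60   # how long to keep a warm replica around
REQUESTS_PER_REPLICA = 100      # assumed capacity of one replica

def desired_replicas(pending_requests: int, seconds_idle: int) -> int:
    if pending_requests == 0:
        # Keep the last replica warm briefly, then release it entirely.
        return 0 if seconds_idle >= IDLE_SECONDS_BEFORE_ZERO else 1
    # Ceiling division: enough replicas to absorb the queued requests.
    return -(-pending_requests // REQUESTS_PER_REPLICA)

print(desired_replicas(0, 300))   # long-idle service: scale to zero
print(desired_replicas(0, 10))    # recently active: keep one warm replica
print(desired_replicas(250, 0))   # burst of traffic: scale back out
```

The point of the pattern is the first branch: idle capacity costs nothing, which is exactly the economic promise the serverless community made years ago.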
The term “serverless” might be retired eventually, but the fundamental principles will remain and evolve into “cloud native.”
Here’s what was mentioned in the episode:
Microsoft Azure: Cloud Computing Services
AWS: Amazon Web Services
-
Note: This interview is part of a paid sponsorship between Protera and The Cloud Pod.
In this TCP Talks episode, Justin Brodley and Jonathan Baker talk with Patrick Osterhaus, CTO and Founder of Protera Technologies, a preeminent provider of SAP and cloud managed services.
Patrick discusses how the cloud, COVID-19, and work-from-home are influencing SAP and legacy enterprise software packages today, and Protera’s goal to provide the very best SAP services available on the cloud.
Covering issues around migration to SAP, Patrick takes the opportunity to reflect on Protera’s history, while also addressing corporate IT integration. “We call this the transformation journey-site assessment, specific to each client’s needs, looking beyond SAP to the SAP systems, we use a tool we call [Protera] FlexBridgeSM,” notes Patrick.
Featured GuestName: Patrick Osterhaus
What he does: Patrick is CTO and Founder of Protera Technologies.
Key quote: “The complexity of moving to public cloud is getting those non-cloud native applications into the cloud, and then looking at the transformation of those applications once they’re in the cloud.”
Where to find him: LinkedIn | Twitter
Key TakeawaysThe best way to prepare for cloud migration is what Patrick calls “the journey,” which involves a site assessment of the customer environment and understanding how everything on-premise, or in a hybrid environment, is working together.
COVID-19 has accelerated migration to the cloud and has forced companies to plan their disaster recovery systems. Patrick says businesses aren’t just thinking about their ERP systems — the thinking extends to ancillary systems like CRMs and web access systems — “all these systems to be connected and have it fully available in the cloud as a backup.” He adds, “We’ve seen a natural interest in what is good practice,” which is to have a protection plan for critical SAP applications.
Working with many compliance-heavy industries, such as financial or military and defense clients, Protera has learned the importance of not only application security but also the physical security necessary around data centers. Patrick says that discussing the real-world protection of data centers — “who owns the data, how it’s governed, how it’s protected” — is important to raise with the client.
Resources
Here’s what was mentioned in the episode:
SAP: Systems, Applications, and Products in Data Processing
“What is DevOps?“: An AWS blog post explaining the DevOps model
Microsoft Azure: Cloud Computing Services
-
In this TCP Talks episode, Justin Brodley and Jonathan Baker talk with Bart Castle, an AWS and cloud computing trainer and media personality. Bart works with IT training company CBT Nuggets and also does cloud-migration consulting projects.
Bart shares the patterns he sees based on training demand and also advises how to decide which certification to go for next. He discusses the importance of solving business problems that will help achieve the business’ goals while retooling and transforming systems.
“At this point in my career, every technical conversation that I have is always paired up with a business value conversation,” he notes.
But how should a data team shift focus to better solve business problems? He suggests looking for patterns. Uncovering patterns can help determine actionable steps to maximize efficiency and enable new business opportunities.
Bart also discusses cloud computing trends, CloudFormation stacking, hybrid deployments, and containers.
Featured Guest
Name: Bart Castle
What he does: Bart is a cloud computing and AWS expert and technical trainer, as well as a consultant.
Key quote: “In the end, we’re still looking for those tools that will bridge gaps. This is why, for me, being an integrations professional and getting what integration means is skill number one across all different arenas. Everywhere you look, it’s an integration problem.”
Where to find him: LinkedIn | Twitter | YouTube
Key Takeaways
When thinking about all the different training options, Bart suggests pursuing the certification that would help you land a specific job or role. If you’re not sure what your next job might be, look at SysOps administration first, since it is closest to traditional network help desk operations support roles.
Based on his training background, Bart sees a rising interest in network automation. Many teams are working with various vendors to address networking and connectivity and to make the transition from command-line administration to Python automation. “A lot of what I’m seeing here is the switch from real deep specialty to real broad generalization, and that can be an overwhelming bite to take when you look at how much information there is to consume,” says Bart.
Learning how the tools work is the easy part, but you have to dig deeper to make them work for your specific business use case. Bart recommends looking for white papers, as well as case studies and blog posts. Communities (like TCP!) can also point you in the right direction. Bart says, “Once you get those examples of how a piece of input data with the right transformation with this pairing of reporting can solve this problem — now, you’re putting tools in your belt that are going beyond just using the tools, and how to actually solve business problems with them.”
Here’s what was mentioned in the episode:
CBT Nuggets: provides in-demand training, primarily in IT, project management, and office productivity topics.
Amazon Simple Storage Service (S3): a cloud object storage service.
-
In this TCP Talks episode, Justin Brodley and Jonathan Baker chat with Liz Rice, VP of open source engineering for Aqua Security, which provides tools to secure cloud-native deployments.
Liz describes Aqua’s evolution over the years: From a provider of container security to its acquisition of CloudSploit and its development of open-source security solutions. Most customers are using cloud native software, and Aqua wants to secure those workloads and engage that community.
“As a business, we have to be where the discussions are. Having open-source tools that are genuinely useful gives us a good way to participate in that community,” Liz explains.
In addition to her role at Aqua Security, Liz chairs the Cloud Native Computing Foundation’s (CNCF) Technical Oversight Committee. During the conversation, she gives an overview of how the CNCF handles projects.
Key Takeaways
Open-source tools offer an entry point into communities. “As a business, we have to be there — we have to be where the discussions are. And having open source tools and solutions that are genuinely useful gives us a good way of participating in that community,” Liz says of the value of Aqua developing open-source tools. The company’s Starboard toolkit for finding risks in Kubernetes workloads and environments is a recent example.
Liz discusses Starboard’s comparative advantage: it integrates existing Kubernetes security tools, not just from Aqua but also from third parties, into the Kubernetes experience. “You can run Trivy through Starboard and your results are right there next to the workload you’re interested in,” she says.
Liz discusses CNCF’s role with Kubernetes and beyond. “Google today contributes tons of time, energy, and engineering hours into Kubernetes. If tomorrow they were to decide they were going to walk away, Kubernetes still exists, and it would do so because of the CNCF and its participants,” she explains.
Resources
Here’s what was mentioned in the episode:
“Container Security: Fundamental Technology Concepts that Protect Containerized Applications”: Liz Rice’s book.
Aqua Security: a company that delivers security solutions for cloud-native applications.
Cloud Native Computing Foundation: the vendor-neutral home for many of the fastest-growing open-source projects, including Kubernetes, Prometheus, and Envoy.
CloudSploit: a security scanner for cloud accounts.
Trivy: a vulnerability scanner for container images.
Starboard: makes security information available through the Kubernetes API in a native way.
Prometheus: an open-source metrics-based monitoring system.
Istio: an open-source service mesh originally developed at Google.
-
In this episode of TCP-Talks, we chat with Amiram Shachar, founder and CEO of Spot, which aims to help its customers reduce complexity and cut compute costs by up to 90% in the AWS, GCP, and Azure clouds.
We talk about Spot’s impact on the spot-pricing market, the differences between the AWS, GCP, and Azure approaches to spot pricing and delivery, and whether customers are asking for multi-cloud solutions.
Amiram discusses the problems Spot solves and why the company chose to partner with NetApp, reveals the mystery behind the rebrand from Spotinst, and then takes us on a deeper dive into Spot’s Ocean, a Serverless Infrastructure Engine for Containers.
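The “up to 90%” figure reflects the deep discount of spare (spot/preemptible) capacity versus on-demand pricing. A minimal sketch of how such a savings figure is computed, using hypothetical hourly prices (not actual AWS, GCP, or Azure rates):

```python
def savings_pct(on_demand: float, spot: float) -> float:
    """Percentage saved by running on spot capacity instead of on-demand."""
    return (on_demand - spot) / on_demand * 100

# Hypothetical hourly prices, for illustration only.
on_demand_price = 0.40
spot_price = 0.04

print(f"{savings_pct(on_demand_price, spot_price):.0f}% saved")
```

In practice spot prices fluctuate and instances can be reclaimed, which is exactly the operational complexity products like Spot’s Ocean aim to manage.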
-
This week, Chris Riley, DevOps Advocate for Splunk and host of the Developers Eating the World podcast, joins us. We ask the tough questions, like: what is observability, exactly? We touch on the risk of robots taking my job with AIOps, and whether it is a marketing buzzword or a real product. Plus the mad rush to SRE all the NOCs, because GOOGLE DOES IT, and more on TCP-Talks.
Twitter: https://twitter.com/hoardinginfo
Developers Eating the World Podcast
-
Josh Stella (Twitter: @joshstella) joins us to talk about the state of cloud security. We discuss Fugue’s new report, the complexity and challenges of IAM, and how the most common cloud misconfigurations aren’t always the ones you would expect.
Fugue ensures cloud infrastructure stays in continuous compliance with enterprise security policies. Its solution identifies cloud-infrastructure security risks and compliance violations and ensures they are never repeated, providing baseline drift detection and automated remediation to help prevent data breaches, along with visualization and reporting tools to easily demonstrate compliance.
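Fugue’s own implementation isn’t shown here, but the core idea behind baseline drift detection can be sketched as a diff between a recorded, approved baseline and the live configuration (all setting names below are hypothetical):

```python
def detect_drift(baseline: dict, current: dict) -> dict:
    """Return every key whose value differs between the recorded
    baseline and the live configuration, as (baseline, current) pairs."""
    keys = baseline.keys() | current.keys()
    return {k: (baseline.get(k), current.get(k))
            for k in keys
            if baseline.get(k) != current.get(k)}

# Hypothetical settings captured when the baseline was approved.
baseline = {"s3_bucket_public": False, "mfa_required": True}
# Live settings observed later; the bucket has drifted to public.
current = {"s3_bucket_public": True, "mfa_required": True}

print(detect_drift(baseline, current))  # {'s3_bucket_public': (False, True)}
```

Automated remediation is then a matter of writing each drifted key’s baseline value back to the live environment.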