Played

  • Summary

    The market for data warehouse platforms is large and varied, with options for every use case. ClickHouse is an open source, column-oriented database engine built for interactive analytics with linear scalability. In this episode Robert Hodges and Alexander Zaitsev explain how it is architected to provide these features, the various unique capabilities that it provides, and how to run it in production. It was interesting to learn about some of the custom data types and performance optimizations that are included.

    Announcements

    - Hello and welcome to the Data Engineering Podcast, the show about modern data management.
    - When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With 200Gbit private networking, scalable shared block storage, and a 40Gbit public network, you’ve got everything you need to run a fast, reliable, and bullet-proof data platform. If you need global distribution, they’ve got that covered too with world-wide datacenters, including new ones in Toronto and Mumbai. And for your machine learning workloads, they just announced dedicated CPU instances. Go to dataengineeringpodcast.com/linode today to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
    - Integrating data across the enterprise has been around for decades, and so have the techniques to do it. But a new way of integrating data and improving streams has evolved: by integrating each silo independently, data can be integrated without any direct relation. At CluedIn they call it “eventual connectivity”. If you want to learn more about how to deliver fast access to your data across the enterprise leveraging this new method, and the technologies that make it possible, get a demo or presentation of the CluedIn Data Hub by visiting dataengineeringpodcast.com/cluedin. And don’t forget to thank them for supporting the show!
    - You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data management.
    - For even more opportunities to meet, listen, and learn from your peers, you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Dataversity, and the Open Data Science Conference. Coming up this fall are the combined events of Graphorum and the Data Architecture Summit. The agendas have been announced, and super early bird registration for up to $300 off is available until July 26th, with early bird pricing for up to $200 off through August 30th. Use the code BNLLC to get an additional 10% off any pass when you register. Go to dataengineeringpodcast.com/conferences to learn more and take advantage of our partner discounts when you register.
    - Go to dataengineeringpodcast.com to subscribe to the show, sign up for the mailing list, read the show notes, and get in touch.
    - To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
    - Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.
    - Your host is Tobias Macey and today I’m interviewing Robert Hodges and Alexander Zaitsev about Clickhouse, an open source, column-oriented database for fast and scalable OLAP queries.

    Interview

    - Introduction
    - How did you get involved in the area of data management?
    - Can you start by explaining what Clickhouse is and how you each got involved with it?
    - What are the primary use cases that Clickhouse is targeting?
    - Where does it fit in the database market, and how does it compare to other column stores, both open source and commercial?
    - Can you describe how Clickhouse is architected?
    - Can you talk through the lifecycle of a given record or set of records from when they first get inserted into Clickhouse, through the engine and storage layer, and then the lookup process at query time?
    - I noticed that Clickhouse has a feature for implementing data safeguards (deletion protection, etc.). Can you talk through how that factors into different use cases for Clickhouse?
    - Aside from directly inserting a record via the client APIs, can you talk through the options for loading data into Clickhouse?
    - For the MySQL/Postgres replication functionality, how do you maintain schema evolution from the source DB to Clickhouse?
    - What are some of the advanced capabilities, such as SQL extensions, supported data types, etc., that are unique to Clickhouse?
    - For someone getting started with Clickhouse, can you describe how they should be thinking about data modeling?
    - Recent entrants to the data warehouse market are encouraging users to insert raw, unprocessed records and then do their transformations with the database engine, as opposed to using a data lake as the staging ground for transformations prior to loading into the warehouse. Where does Clickhouse fall along that spectrum?
    - How is scaling in Clickhouse implemented, and what are the edge cases that users should be aware of?
    - How are data replication and consistency managed?
    - What is involved in deploying and maintaining an installation of Clickhouse?
    - I noticed that Altinity is providing a Kubernetes operator for Clickhouse. What are the opportunities and tradeoffs presented by that platform for Clickhouse?
    - What are some of the most interesting/unexpected/innovative ways that you have seen Clickhouse used?
    - What are some of the most challenging aspects of working on Clickhouse itself, and/or implementing systems on top of it?
    - What are the shortcomings of Clickhouse, and how do you address them at Altinity?
    - When is Clickhouse the wrong choice?

    Contact Info

    - Robert: LinkedIn, hodgesrm on GitHub
    - Alexander: alex-zaitsev on GitHub, LinkedIn

    Parting Question

    - From your perspective, what is the biggest gap in the tooling or technology for data management today?

    Links

    Clickhouse, Altinity, OLAP, M204, Sybase, MySQL, Vertica, Yandex, Yandex Metrica, Google Analytics, SQL, Greenplum, InfoBright, InfiniDB, MariaDB, Spark, SIMD (Single Instruction, Multiple Data), Mergesort, ETL, Change Data Capture, MapReduce, KDB, OLTP, Cassandra, InfluxDB, Prometheus, SnowflakeDB, Hive, Hadoop

    The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

  • Summary

    In order to scale the use of data across an organization there are a number of challenges related to discovery, governance, and integration that need to be solved. The key to those solutions is a robust and flexible metadata management system. LinkedIn has gone through several iterations on the most maintainable and scalable approach to metadata, leading them to their current work on DataHub. In this episode Mars Lan and Pardhu Gunnam explain how they designed the platform, how it integrates into their data platforms, and how it is being used to power data discovery and analytics at LinkedIn.

    Announcements

    - Hello and welcome to the Data Engineering Podcast, the show about modern data management.
    - What are the pieces of advice that you wish you had received early in your data engineering career? If you hand a book to a new data engineer, what wisdom would you add to it? I’m working with O’Reilly on a project to collect the 97 things that every data engineer should know, and I need your help. Go to dataengineeringpodcast.com/97things to add your voice and share your hard-earned expertise.
    - When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $60 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
    - If you’ve been exploring scalable, cost-effective, and secure ways to collect and route data across your organization, RudderStack is the only solution that helps you turn your own warehouse into a state-of-the-art customer data platform. Their mission is to empower data engineers to fully own their customer data infrastructure and easily push value to other parts of the organization, like marketing and product management. With their open-source foundation, fixed pricing, and unlimited volume, they are enterprise-ready, but accessible to everyone. Go to dataengineeringpodcast.com/rudder to request a demo and get one free month of access to the hosted platform along with a free t-shirt.
    - You listen to this show to learn and stay up to date with what’s happening in databases, streaming platforms, big data, and everything else you need to know about modern data platforms. For more opportunities to stay up to date, gain new skills, and learn from your peers, there are a growing number of virtual events that you can attend from the comfort and safety of your home. Go to dataengineeringpodcast.com/conferences to check out the upcoming events being offered by our partners and get registered today!
    - Your host is Tobias Macey and today I’m interviewing Pardhu Gunnam and Mars Lan about DataHub, LinkedIn’s metadata management and data catalog platform.

    Interview

    - Introduction
    - How did you get involved in the area of data management?
    - Can you start by giving an overview of what DataHub is and some of its back story?
    - What were you using at LinkedIn for metadata management prior to the introduction of DataHub?
    - What was lacking in the previous solutions that motivated you to create a new platform?
    - There are a large number of other systems available for building data catalogs and tracking metadata, both open source and proprietary. What are the features of DataHub that would lead someone to use it in place of the other options?
    - Who is the target audience for DataHub?
    - How do the needs of those end users influence or constrain your approach to the design and interfaces provided by DataHub?
    - Can you describe how DataHub is architected?
    - How has it evolved since you first began working on it?
    - What was your motivation for releasing DataHub as an open source project?
    - What have been the benefits of that decision?
    - What are the challenges that you face in maintaining changes between the public repository and your internally deployed instance?
    - What is the workflow for populating metadata into DataHub?
    - What are the challenges that you see in managing the format of metadata and establishing consistent models for the information being stored?
    - How do you handle discovery of data assets for users of DataHub?
    - What are the integration and extension points of the platform?
    - What is involved in deploying and maintaining an instance of the DataHub platform?
    - What are some of the most interesting or unexpected ways that you have seen DataHub used inside or outside of LinkedIn?
    - What are some of the most interesting, unexpected, or challenging lessons that you learned while building and working with DataHub?
    - When is DataHub the wrong choice?
    - What do you have planned for the future of the project?

    Contact Info

    - Mars: LinkedIn, mars-lan on GitHub
    - Pardhu: LinkedIn

    Parting Question

    - From your perspective, what is the biggest gap in the tooling or technology for data management today?

    Closing Announcements

    - Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you’ve learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story.
    - To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
    - Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.

    Links

    DataHub, Map/Reduce, Apache Flume, LinkedIn Blog Post introducing DataHub, WhereHows, Hive Metastore, Kafka, CDC == Change Data Capture (Podcast Episode), PDL (LinkedIn language), GraphQL, Elasticsearch, Neo4J, Apache Pinot, Apache Gobblin, Apache Samza, Open Sourcing DataHub Blog Post

    The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

  • Summary

    There is a lot of attention on the database market and cloud data warehouses. While they provide a measure of convenience, they also require you to sacrifice a certain amount of control over your data. If you want to build a warehouse that gives you both control and flexibility, then you might consider building on top of the venerable PostgreSQL project. In this episode Thomas Richter and Joshua Drake share their advice on how to build a production ready data warehouse with Postgres.

    Announcements

    - Hello and welcome to the Data Engineering Podcast, the show about modern data management.
    - When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
    - Firebolt is the fastest cloud data warehouse. Visit dataengineeringpodcast.com/firebolt to get started. The first 25 visitors will receive a Firebolt t-shirt.
    - Atlan is a collaborative workspace for data-driven teams, like Github for engineering or Figma for design teams. By acting as a virtual hub for data assets ranging from tables and dashboards to SQL snippets & code, Atlan enables teams to create a single source of truth for all their data assets, and collaborate across the modern data stack through deep integrations with tools like Snowflake, Slack, Looker and more. Go to dataengineeringpodcast.com/atlan today and sign up for a free trial. If you’re a data engineering podcast listener, you get credits worth $3000 on an annual subscription.
    - Your host is Tobias Macey and today I’m interviewing Thomas Richter and Joshua Drake about using Postgres as your data warehouse.

    Interview

    - Introduction
    - How did you get involved in the area of data management?
    - Can you start by establishing a working definition of what constitutes a data warehouse for the purpose of this discussion?
    - What are the limitations for out-of-the-box Postgres when trying to use it for these workloads?
    - There are a large and growing number of options for data warehouse style workloads. How would you categorize the different systems and what is PostgreSQL’s position in that ecosystem?
    - What do you see as the motivating factors for a team or organization to select from among those categories?
    - Why would someone want to use Postgres as their data warehouse platform rather than using a purpose-built engine?
    - What is the cost/performance equation for Postgres as compared to other data warehouse solutions?
    - For someone who wants to turn Postgres into a data warehouse engine, what are their options?
    - What are the relative tradeoffs of the different open source and commercial offerings? (e.g. Citus, cstore_fdw, zedstore, Swarm64, Greenplum, etc.)
    - One of the biggest areas of growth right now is in the "cloud data warehouse" market where storage and compute are decoupled. What are the options for making that possible with Postgres? (e.g. using foreign data wrappers for interacting with data lake storage (S3, HDFS, Alluxio, etc.))
    - What areas of work are happening in the Postgres community for upcoming releases to make it more easily suited to data warehouse/analytical workloads?
    - What are some of the most interesting, innovative, or unexpected ways that you have seen Postgres used in analytical contexts?
    - What are the most interesting, unexpected, or challenging lessons that you have learned from your own experiences of building analytical systems with Postgres?
    - When is Postgres the wrong choice for a data warehouse?
    - What are you most excited for/what are you keeping an eye on in upcoming releases of Postgres and its ecosystem?

    Contact Info

    - Thomas: LinkedIn
    - JD: LinkedIn, @linuxhiker on Twitter

    Parting Question

    - From your perspective, what is the biggest gap in the tooling or technology for data management today?

    Links

    PostgreSQL (Podcast Episode), Swarm64 (Podcast Episode), Command Prompt Inc., IBM, Cognos, OLAP Cube, MariaDB, MySQL, Powell’s Books, DBase, Practical PostgreSQL, Netezza, Presto, Trino, Apache Drill, Parquet, Parquet Foreign Data Wrapper, Snowflake (Podcast Episode), Amazon RDS, Amazon Aurora, Hyperscale, Citus, TimescaleDB (Podcast Episode, Followup Podcast Episode), Greenplum, zedstore, Redshift, Microsoft SQL Server, Postgres Tablespaces, Debezium (Podcast Episode), EDI == Enterprise Data Integration, Change Data Capture (Podcast Episode)

    The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

  • Summary

    The data industry is changing rapidly, and one of the most active areas of growth is automation of data workflows. Taking cues from the DevOps movement of the past decade, data professionals are orienting around the concept of DataOps. More than just a collection of tools, there are a number of organizational and conceptual changes that a proper DataOps approach depends on. In this episode Kevin Stumpf, CTO of Tecton, Maxime Beauchemin, CEO of Preset, and Lior Gavish, CTO of Monte Carlo, discuss the grand vision and present realities of DataOps. They explain how to think about your data systems in a holistic and maintainable fashion, the security challenges that threaten to derail your efforts, and the power of using metadata as the foundation of everything that you do. If you are wondering how to get control of your data platforms and bring all of your stakeholders onto the same page then this conversation is for you.

    Announcements

    - Hello and welcome to the Data Engineering Podcast, the show about modern data management.
    - When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
    - Modern data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage, and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
    - RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
    - Your host is Tobias Macey and today I’m interviewing Max Beauchemin, Lior Gavish, and Kevin Stumpf about the real-world challenges of embracing DataOps practices and systems, and how to keep things secure as you scale.

    Interview

    - Introduction
    - How did you get involved in the area of data management?
    - Before we get started, can you each give your definition of what "DataOps" means to you?
    - How does this differ from "business as usual" in the data industry?
    - What are some of the things that DataOps isn’t (despite what marketers might say)?
    - What are the biggest difficulties that you have faced in going from concept to production with a workflow or system intended to power self-serve access to other members of the organization?
    - What are the weak points in the current state of the industry, whether technological or social, that contribute to your greatest sense of unease from a security perspective?
    - As founders of companies that aim to facilitate adoption of various aspects of DataOps, how are you applying the products that you are building to your own internal systems?
    - How does security factor into the design of robust DataOps systems? What are some of the biggest challenges related to security when it comes to putting these systems into production?
    - What are the biggest differences between DevOps and DataOps, particularly when it concerns designing distributed systems?
    - What areas of the DataOps landscape do you think are ripe for innovation?
    - Nowadays, it seems like new DataOps companies are cropping up every day to try and solve some of these problems. Why do you think DataOps is becoming such an important component of the modern data stack?
    - There’s been a lot of conversation recently around the "rise of the data engineer" versus other roles in the data ecosystem (i.e. data scientist or data analyst). Why do you think that is?
    - What are some of the most valuable lessons that you have learned from working with your customers about how to apply DataOps principles?
    - What are some of the most interesting, unexpected, or challenging lessons that you have learned while building your respective platforms and businesses?
    - What are the industry trends that you are each keeping an eye on to inform your future product direction?

    Contact Info

    - Kevin: LinkedIn, kevinstumpf on GitHub, @kevinstumpf on Twitter
    - Maxime: LinkedIn, @mistercrunch on Twitter, mistercrunch on GitHub
    - Lior: LinkedIn, @lgavish on Twitter

    Parting Question

    - From your perspective, what is the biggest gap in the tooling or technology for data management today?

    Closing Announcements

    - Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you’ve learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story.
    - To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
    - Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.

    Links

    Tecton, Monte Carlo, Superset, Preset, Barracuda Networks, Feature Store, DataOps, DevOps, Data Catalog, Amundsen, OpenLineage, The Downfall of the Data Engineer, Hashicorp Vault, Reverse ELT

    The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

  • Summary

    "Business as usual" is changing, with more companies investing in data as a first class concern. As a result, the data team is growing and introducing more specialized roles. In this episode Josh Benamram, CEO and co-founder of Databand, describes the motivations for these emerging roles, how these positions affect the team dynamics, and the types of visibility that they need into the data platform to do their jobs effectively. He also talks about how his experience working with these teams informs his work at Databand. If you are wondering how to apply your talents and interests to working with data then this episode is a must listen.

    Announcements

    - Hello and welcome to the Data Engineering Podcast, the show about modern data management.
    - When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
    - Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
    - RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
    - Your host is Tobias Macey and today I’m interviewing Josh Benamram about the continued evolution of roles and responsibilities in data teams and their varied requirements for visibility into the data stack.

    Interview

    - Introduction
    - How did you get involved in the area of data management?
    - Can you start by discussing the set of roles that you see in a majority of data teams?
    - What new roles do you see emerging, and what are the motivating factors?
    - Which of the more established positions are fracturing or merging to create these new responsibilities?
    - What are the contexts in which you are seeing these role definitions used? (e.g. small teams, large orgs, etc.)
    - How do the increased granularity/specialization of responsibilities across data teams change the ways that data and platform architects need to think about technology investment?
    - What are the organizational impacts of these new types of data work?
    - How do these shifts in role definition change the ways that the individuals in the position interact with the data platform?
    - What are the types of questions that practitioners in different roles are asking of the data that they are working with? (e.g. what is the lineage of this asset vs. what is the distribution of values in this column, etc.)
    - How can metrics and observability data about pipelines and data systems help to support these various roles?
    - What are the different ways of measuring data quality for the needs of these roles?
    - How is the work you are doing at Databand informed by these changing needs?
    - One of the big challenges caused by data systems is the varying modes of access and interaction across the different stakeholders and activities. How can data platform teams and vendors help to surface useful metrics and information across these various interfaces without forcing users into a new or unfamiliar workflow?
    - What are some of the long-term impacts that you foresee in the data ecosystem and ways of interacting with data as a result of the current trend toward more specialized tasks?
    - As a vendor working to provide useful context to these practitioners, what are some of the most interesting, unexpected, or challenging lessons that you have learned?
    - What do you have planned for the future of Databand?

    Contact Info

    - Email

    Parting Question

    - From your perspective, what is the biggest gap in the tooling or technology for data management today?

    Closing Announcements

    - Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you’ve learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story.
    - To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
    - Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.

    Links

    Databand (Website, Platform, Open Core, More data engineering stories & best practices), Atlassian, Chartio, Data Mesh Article (Podcast Episode), Grafana, Metabase, Superset (Podcast.__init__ Episode), Snowflake (Podcast Episode), Spark, Airflow (Podcast.__init__ Episode)

    The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

  • Summary

    One of the biggest obstacles to success in delivering data products is cross-team collaboration. Part of the problem is the difference in the information that each role requires to do their job and where they expect to find it. This introduces a barrier to communication that is difficult to overcome, particularly in teams that have not reached a significant level of maturity in their data journey. In this episode Prukalpa Sankar shares her experiences across multiple attempts at building a system that brings everyone onto the same page, ultimately bringing her to found Atlan. She explains how the design of the platform is informed by the needs of managing data projects for large and small teams across her previous roles, how it integrates with your existing systems, and how it can work to bring everyone onto the same page.

    Announcements

    - Hello and welcome to the Data Engineering Podcast, the show about modern data management.
    - When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
    - Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
    - RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
    - Your host is Tobias Macey and today I’m interviewing Prukalpa Sankar about Atlan, a modern data workspace that makes collaboration among data stakeholders easier, increasing efficiency and agility in data projects.

    Interview

    - Introduction
    - How did you get involved in the area of data management?
    - Can you start by giving an overview of what you are building at Atlan and some of the story behind it?
    - Who are the target users of Atlan?
    - What portions of the data workflow is Atlan responsible for?
    - What components of the data stack might Atlan replace?
    - How would you characterize Atlan’s position in the current data ecosystem?
    - What makes Atlan stand out from other systems for data cataloguing, metadata management, or data governance?
    - What types of data assets (e.g. structured vs unstructured, textual vs binary, etc.) is Atlan designed to understand?
    - Can you talk through how Atlan is implemented?
    - How have the goals and design of the platform changed or evolved since you first began working on it?
    - What are some of the early assumptions that you have had to revisit or reconsider?
    - What is involved in getting Atlan deployed and integrated into an existing data platform?
    - Beyond the technical aspects, what are the business processes that teams need to implement to be successful when incorporating Atlan into their systems?
    - Once Atlan is set up, what is a typical workflow for an individual and their team to collaborate on a set of data assets, or to build out a new processing pipeline?
    - What are some useful steps for introducing all of the stakeholders to the system and workflow?
    - What are the available extension points for managing data in systems that aren’t supported by Atlan out of the box?
    - What are some of the most interesting, innovative, or unexpected ways that you have seen Atlan used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while building Atlan?
    - When is Atlan the wrong choice?
    - What do you have planned for the future of the product?

    Contact Info

    - LinkedIn
    - @prukalpa on Twitter

    Parting Question

    - From your perspective, what is the biggest gap in the tooling or technology for data management today?

    Closing Announcements

    - Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you’ve learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story.
    - To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
    - Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.

    Links

    Atlan, India’s National Data Platform, World Economic Forum, UN, Gates Foundation, GitHub, Figma, Snowflake, Redshift, Databricks, DBT, Sisense, Looker, Apache Atlas, Immuta, DataHub, Datakin, Apache Ranger, Great Expectations, Trino, Airflow, Dagster, Privacera, Databand, Cloudformation, Grafana, Deequ, "We Failed to Set Up a Data Catalog 3x. Here’s Why.", Analyzing the Analyzers (book), OpenAPI

    The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

  • A body is found at the end of a path leading to three houses. In West Cork, the police, known in Ireland as ‘the guards,’ have little experience with serious crime, and the victim—a French woman with a holiday home in the area—is a mysterious figure in West Cork. That night, as news of the murder snakes through the community, everyone begins to question whether or not they were ever, truly, safe. To be first to listen to bonus episodes of West Cork and Sam and Jennifer's incredible next series, sign up at www.yarn.fm


  • Summary

    Data quality has been top of mind for everyone recently, but getting it right is as challenging as ever. One of the contributing factors is the number of people who are involved in the process and the potential impact on the business if something goes wrong. In this episode Maarten Masschelein and Tom Baeyens share the work they are doing at Soda to bring everyone on board to make your data clean and reliable. They explain how they started down the path of building a solution for managing data quality, their philosophy of how to empower data engineers with well engineered open source tools that integrate with the rest of the platform, and how to bring all of the stakeholders onto the same page to make your data great. There are many aspects of data quality management and it’s always a treat to learn from people who are dedicating their time and energy to solving it for everyone.

    Announcements

    - Hello and welcome to the Data Engineering Podcast, the show about modern data management.
    - When you’re ready to build your next pipeline, or want to test out the projects you hear about on the show, you’ll need somewhere to deploy it, so check out our friends at Linode. With their managed Kubernetes platform it’s now even easier to deploy and scale your workflows, or try out the latest Helm charts from tools like Pulsar and Pachyderm. With simple pricing, fast networking, object storage, and worldwide data centers, you’ve got everything you need to run a bulletproof data platform. Go to dataengineeringpodcast.com/linode today and get a $100 credit to try out a Kubernetes cluster of your own. And don’t forget to thank them for their continued support of this show!
    - Modern Data teams are dealing with a lot of complexity in their data pipelines and analytical code. Monitoring data quality, tracing incidents, and testing changes can be daunting and often takes hours to days. Datafold helps Data teams gain visibility and confidence in the quality of their analytical data through data profiling, column-level lineage and intelligent anomaly detection. Datafold also helps automate regression testing of ETL code with its Data Diff feature that instantly shows how a change in ETL or BI code affects the produced data, both on a statistical level and down to individual rows and values. Datafold integrates with all major data warehouses as well as frameworks such as Airflow & dbt and seamlessly plugs into CI workflows. Go to dataengineeringpodcast.com/datafold today to start a 30-day trial of Datafold. Once you sign up and create an alert in Datafold for your company data, they will send you a cool water flask.
    - RudderStack’s smart customer data pipeline is warehouse-first. It builds your customer data warehouse and your identity graph on your data warehouse, with support for Snowflake, Google BigQuery, Amazon Redshift, and more. Their SDKs and plugins make event streaming easy, and their integrations with cloud applications like Salesforce and ZenDesk help you go beyond event streaming. With RudderStack you can use all of your customer data to answer more difficult questions and then send those insights to your whole customer data stack. Sign up free at dataengineeringpodcast.com/rudder today.
    - Your host is Tobias Macey and today I’m interviewing Maarten Masschelein and Tom Baeyens about the work they are doing at Soda to power data quality management.

    Interview

    - Introduction
    - How did you get involved in the area of data management?
    - Can you start by giving an overview of what you are building at Soda?
    - What problem are you trying to solve?
    - And how are you solving that problem?
    - What motivated you to start a business focused on data monitoring and data quality?
    - The data monitoring and broader data quality space is a segment of the industry that is seeing a huge increase in attention recently. Can you share your perspective on the current state of the ecosystem and how your approach compares to other tools and products?
    - Who have you created Soda for (e.g. platform engineers, data engineers, data product owners, etc.), and what is a typical workflow for each of them?
    - How do you go about integrating Soda into your data infrastructure?
    - How has the Soda platform been architected?
    - Why is this architecture important?
    - How have the goals and design of the system changed or evolved as you worked with early customers and iterated toward your current state?
    - What are some of the challenges associated with the ongoing monitoring and testing of data?
    - What are some of the tools or techniques for data testing used in conjunction with Soda?
    - What are some of the most interesting, innovative, or unexpected ways that you have seen Soda being used?
    - What are the most interesting, unexpected, or challenging lessons that you have learned while building the technology and business for Soda?
    - When is Soda the wrong choice?
    - What do you have planned for the future?

    Contact Info

    - Maarten: LinkedIn, @masscheleinm on Twitter
    - Tom: LinkedIn, @tombaeyens on Twitter

    Parting Question

    - From your perspective, what is the biggest gap in the tooling or technology for data management today?

    Closing Announcements

    - Thank you for listening! Don’t forget to check out our other show, Podcast.__init__, to learn about the Python language, its community, and the innovative ways it is being used.
    - Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
    - If you’ve learned something or tried out a project from the show, then tell us about it! Email [email protected] with your story.
    - To help other people find the show, please leave a review on iTunes and tell your friends and co-workers.
    - Join the community in the new Zulip chat workspace at dataengineeringpodcast.com/chat.

    Links

    Soda Data, Soda SQL, RedHat, Collibra, Spark, Getting Things Done by David Allen (affiliate link), Slack, OpsGenie, DBT, Airflow

    The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA

  • West Cork, Ireland is an outpost at the edge of Europe, the jumping-off point for America. Rugged, windswept, and coastal, it was a place of farmers and fishermen until the 1960s, when it was discovered by the ‘blow-ins’: people who drove until the road ran out, artists and urban runaways – a haven for those ready to turn their backs on their old lives and start again. But then there was a murder in West Cork, and overnight, everything changed. To be first to listen to bonus episodes of West Cork and Sam and Jennifer's incredible next series, sign up at www.yarn.fm
