Episodes

  • On this episode of How We Made That App, join host Madhukar Kumar as he delves into the groundbreaking realm of AI in education with Dev Aditya, CEO and Co-Founder of the Otermans Institute. Discover the evolution from traditional teaching methods to the emergence of AI avatar educators, ushering in a new era of learning.

    Dev explores how pandemic-induced innovation spurred the development of AI models, revolutionizing the educational landscape. These digital teachers aren't just transforming classrooms and corporate training. They're also reshaping refugee education in collaboration with organizations like UNICEF.

    Dev takes a deep dive into the creation and refinement of culturally aware, pedagogically effective AI. He shares insights into the meticulous process behind AI model development, from the MVP's inception with 13,000 lines of Q&A to developing a robust seven-billion-parameter model, enriched by proprietary data from thousands of learners.

    We also discuss the broader implications of AI in data platforms and consumer businesses. Dev shares his journey from law to AI research, highlighting the importance of adaptability and logical thinking in this rapidly evolving field. Join us for an insightful conversation bridging the gap between inspiration and innovation in educational AI!

    Key Quotes:

    “People like web only and app only, right? They like it. But in about July this year, we are launching alpha versions of our products as Edge AI. Now that's going to be a very narrowed-down version of the language models we are working on right now, taking from these existing stacks. So that's going to be about 99 percent our stuff. And it's going to be running on people's devices. It's going to help with people's privacy. Your data stays in your device. And even as a business, it actually helps a lot because I am, hopefully, going to see a positive difference in our costs, because a lot of that cloud cost now rests in your device.”

    “My way of dealing with AI is narrow intelligence: break a problem down into as many narrow points as possible. Storyboard, storyboard color, as micro as possible. If you can break that down together, you can teach each agent and each model to do that phenomenally well. And then it's just an integration game. It will do better than a human being as, you know, a full director of a movie. Also if you, from the business logic standpoint, understand what a director does, it is possible, theoretically. I don't think people go deep enough to understand what a teacher does, or what a doctor does, not just a surgeon, right? How they are thinking, what is their mechanism? If you can break that down, you can easily make, like, probably there are 46, I'm just saying, 46 things that a doctor does, right? If you have 46 agents working together, each one knowing that, it'd be amazing. That's a different game. I think agents are coming.”
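    Dev's "break it down into narrow agents, then integrate" idea can be sketched very loosely in code. Everything below (the teacher roles, the stub agents, the `teach` helper) is a hypothetical illustration of the decomposition pattern, not anything from the episode:

```python
# Loose sketch of "narrow intelligence": split one broad job into narrow
# sub-tasks, give each to a specialist agent, then integrate the results.
# The agents are stubbed as plain functions standing in for narrow models.

def explain_concept(topic: str) -> str:
    return f"Explanation of {topic}"

def generate_quiz(topic: str) -> str:
    return f"Quiz on {topic}"

def give_feedback(topic: str) -> str:
    return f"Feedback for {topic}"

# Each narrow thing a "teacher" does gets its own agent.
TEACHER_AGENTS = {
    "explain": explain_concept,
    "quiz": generate_quiz,
    "feedback": give_feedback,
}

def teach(topic: str) -> dict:
    """Run every narrow agent and integrate the outputs into one lesson."""
    return {role: agent(topic) for role, agent in TEACHER_AGENTS.items()}

lesson = teach("momentum")
```

    In Dev's framing, the hard part is the decomposition itself (knowing the "46 things" the role involves); once each sub-task is narrow enough, the integration step is mechanical.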

    Timestamps

    (00:00) - AI Avatar Teachers in Education

    (09:29) - AI Teaching Model Development Challenges

    (13:27) - Model Fine-Tuning for Knowledge Augmentation

    (25:22) - Evolution of Data Platforms and AI

    (32:15) - Technology Trends in Consumer Business

    Links

    Connect with Dev

    Visit the Otermans Institute

    Connect with Madhukar

    Visit SingleStore

  • In this episode of How We Made That App, host Madhukar welcomes Jack Ellis, CTO and co-founder of Fathom Analytics, who shares the inside scoop on how their platform is revolutionizing the world of web analytics by putting user privacy at the forefront. With a privacy-first ethos that discards personal data like IP addresses post-processing, Fathom offers real-time analytics while ensuring user privacy, breaking away from traditional cookie-based tools like Google Analytics. Jack unpacks the technical challenges they faced in building a robust, privacy-centric analytics service, and he explains their commitment to privacy as a fundamental service feature rather than just a marketing strategy.

    Jack dives into the fascinating world of web development and software engineering practices, reflecting on Fathom's journey with MySQL and PHP and detailing the trials and tribulations of scaling in high-traffic scenarios. He contrasts the robustness of PHP and the rising popularity of frameworks like Laravel with the allure of Next.js among the younger developer community. Jack also explores the evolution from monolithic applications to serverless architecture and the implications for performance and scaling, particularly when efficiently serving millions of data points.

    Jack touches on the convergence of AI with database technology and its promising applications in healthcare, such as enhancing user insights and decision-making. He also shares intriguing thoughts on how AI can drive societal betterment, drawing examples from SingleStore's work with Thorn. You don’t want to miss this episode on how the world of analytics is changing!

    Key Quotes:

    “When we started selling analytics, people were a bit hesitant to pay for it, but over time people have started valuing privacy over everything. And so it's just compounded from there as people have become more aware of the issues. People absolutely still will only use Google Analytics, but the segment of the market that is moving towards solutions like us is growing.”

    “People became used to Google's opaque ways of processing data. They weren't sure what data was being stored, how long they were keeping the IP address for, all of these other personal things as well. And we came along and we basically said, we're not interested in tracking person A around multiple different websites. We're actually only interested in person A's experience on one website. We do not, under any circumstances, want to have a way to be able to profile an individual IP address across multiple entities. And so we invented this mechanism where the web traffic would come in and we'd process it and we'd work out whether they're unique and whatever else. And then we would discard the personal data.”

    “The bottleneck for most applications is not your web framework, it's always your database. I ran through Wikipedia's numbers, Facebook's numbers, and I said it doesn't matter, we can add compute, that's easy peasy. It's always the database, every single time. So stop worrying about what framework you're using and pick the right database that has proven that it can actually scale.”

    “If you're using an exclusively OLTP database, you might think you're fine. But when you're trying to make mass modifications, mass deletions, mass moving of data, OLTP databases seem to fall over. I had RDS side by side with SingleStore, the same cost for both of them, and I was showing people how quickly SingleStore can do stuff. That makes a huge difference, and it gives you confidence, and I think that you need a database that's going to be able to do that.”
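    A rough sketch of the kind of mechanism Jack describes: decide uniqueness from a salted hash of the request's identifying fields, keep only aggregate counters, and let the personal data fall away. The field names, rotating salt, and in-memory store below are illustrative assumptions, not Fathom's actual implementation:

```python
import hashlib

seen_hashes: set[str] = set()          # in-memory stand-in for a real store
pageviews = {"total": 0, "unique": 0}  # only aggregates are ever kept

def process_hit(ip: str, user_agent: str, site_id: str, daily_salt: str) -> None:
    """Count a pageview, deciding uniqueness from a salted hash.

    The salt rotates (e.g. daily) and is scoped to one site, so the same
    visitor cannot be profiled across days or across sites; the raw IP is
    never persisted anywhere.
    """
    digest = hashlib.sha256(
        f"{daily_salt}:{site_id}:{ip}:{user_agent}".encode()
    ).hexdigest()
    pageviews["total"] += 1
    if digest not in seen_hashes:
        seen_hashes.add(digest)
        pageviews["unique"] += 1
    # ip and user_agent go out of scope here -- nothing personal is stored

process_hit("203.0.113.7", "Mozilla/5.0", "site-1", "salt-2024-05-01")
process_hit("203.0.113.7", "Mozilla/5.0", "site-1", "salt-2024-05-01")
```

    The second hit increments only the total, not the unique count, and no record exists that could tie either hit back to the visitor's IP address.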

    Timestamps

    (00:55) Valuing consumer’s privacy

    (06:01) Creating Fathom Analytics' architecture

    (20:48) Compounding growth to scale

    (23:08) Structuring team functions

    (25:39) Developing features and product design

    (38:42) Advice for building applications

    Links

    Connect with Jack

    Visit Fathom Analytics

    Connect with Madhukar

    Visit SingleStore


  • On this episode of How We Made That App, host Madhukar Kumar welcomes Co-Founder and CEO of LlamaIndex, Jerry Liu! Jerry takes us from the humble beginnings of GPT Index to the impactful rise of LlamaIndex, a game-changer in the data frameworks landscape. Prepare to be enthralled by how LlamaIndex is spearheading retrieval-augmented generation (RAG) technology, setting a new paradigm for developers to harness private data sources in crafting groundbreaking applications. Moreover, the adoption of LlamaIndex by leading companies underscores its pivotal role in reshaping the AI industry.

    Touring the rapidly evolving world of language model providers, we discover the agility of model-agnostic platforms that cater to the ever-changing landscape of AI applications. As Jerry illuminates, the shift from GPT-4 to Claude 3 Opus signifies a broader trend towards efficiency and adaptability. Jerry explores the transformation of data processing, from vector databases to the advent of 'live RAG' systems, heralding a new era of real-time, user-facing applications that seamlessly integrate freshly assimilated information. This is a testament to how LlamaIndex is at the forefront of AI's evolution, offering a powerful suite of tools that revolutionize data interaction.

    Concluding our exploration, we turn to the orchestration of agents within AI frameworks, a domain teeming with complexity yet brimming with potential. Jerry delves into the multifaceted roles of agents, bridging simple LLM reasoning tasks with sophisticated query decomposition and stateful executions. We reflect on the future of software engineering as agent-oriented architectures redefine the sector and invite our community to contribute to the flourishing open-source initiative. Join the ranks of data enthusiasts and PDF parsing experts who are collectively sculpting the next chapter of AI interaction!

    Key Quotes:

    “If you're a fine-tuning API, you either have to cater to the ML researcher or the AI engineer. And to be honest, most AI engineers are not going to care about fine-tuning if they can just hack together some system initially that kind of works. And so I think for more AI engineers to do fine-tuning, it either has to be such a simple UX that it's basically just brainless, you might as well just do it, and the cost and latency have to come down. And then also there has to be guaranteed metrics improvements. Right now it's just unclear. You'd have to, like, take your data set, format it, and then actually send it to the LLM, and then hope that actually improves the metrics in some way. And I think that whole process could probably use some improvement right now.”

    “We realized the open source will always be an unopinionated toolkit that anybody can go and use and build their own applications. But what we really want with the cloud offering is something a bit more managed, where if you're an enterprise developer, we want to help solve that clean data problem for you, so that you're able to easily load in your different data sources and connect it to a vector store of your choice. And then we can help make decisions for you, so that you don't have to own and maintain that, and you can continue to write your application logic. So LlamaCloud, as it stands, is basically a managed parsing and ingestion platform that focuses on getting users clean data to build performant RAG and LLM applications.”

    “You have LLMs that do decision-making and tool calling, and typically, if you just take a look at a standard agent implementation, it's some sort of query decomposition plus tool use. And then you make a loop a little bit, so you run it multiple times, and by running it multiple times, that also means you need to make this overall thing stateful, as opposed to stateless, so you have some way of tracking state throughout this whole execution run. And this includes, like, conversation memory, this includes just using a dictionary, but basically some way of, like, tracking state. And then you complete execution, right? And then you get back a response. And so that actually is a roughly general interface that we have, like, a base abstraction for.”

    “A lot of LLMs, more and more of them, are supporting function calling nowadays. So under the hood, the LLM API gives you the ability to just specify a set of tools that the LLM can decide to call for you. So it's actually just a really nice abstraction: instead of the user having to manually prompt the LLM to coerce it, a lot of these LLM providers just have the ability for you to specify functions, and if you just do a while loop over that, that's basically an agent, right? Because you just do a while loop until that function calling process is done, and that's basically, honestly, what the OpenAI Assistants agent is. And then if you go into some of the more recent agent papers, you can start doing things beyond just the next-step chain of thought: at every stage, instead of just reasoning about what you're going to do next, reason about, like, an entire map of what you're going to do, roll out different scenarios, get the value functions of each of them, and then make the best decision. And so you can get pretty complicated with the actual reasoning process, which then feeds into tool use and everything else.”
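    Jerry's point that "a while loop over function calling is basically an agent" can be sketched roughly like this. The `fake_llm` stand-in and its response shape are hypothetical, not LlamaIndex or OpenAI code; a real function-calling API would take its place:

```python
# Sketch of the while-loop agent: keep asking the model what to do next,
# execute any tool call it requests, feed the result back as state, and
# stop when it answers directly.

def get_weather(city: str) -> str:          # an example tool
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def fake_llm(messages):
    """Stand-in for a function-calling LLM: requests one tool, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"answer": "It's sunny in Paris."}

def run_agent(question: str, llm=fake_llm) -> str:
    messages = [{"role": "user", "content": question}]  # tracked state
    while True:                                         # the agent loop
        step = llm(messages)
        if "answer" in step:                            # done: direct reply
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])    # execute the tool
        messages.append({"role": "tool", "content": result})

print(run_agent("What's the weather in Paris?"))
```

    The growing `messages` list is exactly the "some way of tracking state" Jerry mentions; everything else is the loop.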

    Timestamps

    (1:25) LlamaIndex origins

    (5:45) Building LLM Applications with LlamaIndex

    (10:35) Finding patterns and fine-tuning in LLM usage

    (18:50) Keeping LlamaIndex in the open-source community

    (23:46) LlamaCloud comprehensive evaluation capabilities

    (31:45) The future of the modern data stack

    (40:10) Best practices when building a new application

    Links

    Connect with Jerry

    Visit LlamaIndex

    Connect with Madhukar

    Visit SingleStore

  • In this engaging episode, host Madhukar Kumar dives deep into the world of data architecture, deployment processes, machine learning, and AI with special guest Premal Shah, the Co-Founder and Head of Engineering at 6sense. Join them as Premal traces the technological evolution of 6sense, from the early use of FTP to the current focus on streamlining features like GitHub Copilot and enhancing customer interactions with GenAI.

    Discover the journey through the adoption of Hive and Spark for big data processing, the implementation of microservice architecture, and massive-scale containerization. Learn about the team's cutting-edge projects and how they prioritize product development based on data value considerations.

    Premal also shares valuable advice for budding engineers looking to enter the field. Whether you're a tech enthusiast or an aspiring engineer, this episode provides fascinating insights into the ever-evolving landscape of technology!

    Key Quotes:

    “What is important for our customers is that 6sense gives them the right insight, and gives them the insight very quickly. So we have a lot of different products where people come in and they infer the data from what we're showing. Now it is our responsibility to help them do that faster. So now we are bringing in GenAI to give them the right summary, to help them ask questions of the data right from within the product, without having to think about it more, or, like, open a support ticket or ask their CSM.”

    “We had to basically build a platform that would get all of our customers' data on a daily or hourly basis, process it every day, and give them insights on top of it. So, we had some experience with Hadoop and Hive at that time. So we used that platform as, like, our big data platform, and then we used MySQL as our metadata layer to store things like who is the customer, what products are there, who are the users, et cetera. So there was a clear separation of small data and big data.”

    “Pretty soon we realized that the world is moving to microservices; we needed to make it easy for our developers to build and deploy stuff in a microservice environment. So we started investing in containerization and figuring out how we could deploy it, and at that same time Kubernetes was coming in, so using Docker and Kubernetes we were able to blow up our monolith into microservices, and a lot of them. Now each team is responsible for their own service and scaling and managing and building and deploying the service. So the confluence of technologies and what you can foresee as being a challenge has really helped in making the transition to microservices.”

    “We brought in, like, SingleStore to say, ‘let's just move all of our UIs to one data lake and everybody gets a consistent view.’ There's only one copy. So we process everything on our Hive and Spark ecosystem, and then we take the subset of the processed data, move it to SingleStore, and that's the customer's access point.”

    “We generally coordinate our releases around a particular time of the month, especially for the big features; things go behind feature flags. So not every customer immediately gets it. You know, some things go in beta, some things go direct to production. So there are different phases for different features. Then we have, like, test environments that we have set up, so we can simulate as much as possible for the different integrations. Somebody has Salesforce, somebody has Marketo, Eloqua, HubSpot. All those environments can be tested.”

    “A full-stack person is pretty important these days. You should be able to understand the concepts of data and storage, at least the basics. Have a backing database to build an application on top of, be able to write some backend APIs, backend code, and then build a decent-looking UI on top of it. That actually gives you an idea of what is involved end to end in building an application, versus being focused on ‘I only do X versus Y.’ You need the versatility. A lot of employers are looking for that.”
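    The phased-rollout pattern Premal describes (beta first, then production, gated by feature flags) can be sketched in a few lines. The flag names and phase labels here are illustrative assumptions, not 6sense's actual system:

```python
# Illustrative feature-flag gate: each flag lists which rollout phases may
# see the feature, so a big release can go to beta customers before it
# reaches everyone in production.

FLAGS = {
    "genai_summary": {"beta"},                 # beta only for now
    "new_dashboard": {"beta", "production"},   # fully rolled out
}

def is_enabled(feature: str, customer_phase: str) -> bool:
    """A customer sees a feature only if their phase is in the flag's set."""
    return customer_phase in FLAGS.get(feature, set())

beta_sees_it = is_enabled("genai_summary", "beta")        # beta customers do
prod_sees_it = is_enabled("genai_summary", "production")  # production does not yet
```

    Promoting a feature is then just adding `"production"` to its set, with no redeploy of the feature code itself, which is what lets different features sit in different phases at once.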

    Timestamps

    (00:23) Premal’s Background and Journey into Engineering

    (06:37) Introduction to 6sense: The Company and Its Mission

    (09:15) The Evolution of 6sense: From Idea to Reality

    (13:07) The Technical Aspects: Data Management and Infrastructure

    (18:03) Shifting to a micro-service-focused world

    (31:16) Challenges of Data Management and Scaling

    (38:26) Deployment Strategies in Large-Scale Systems

    (47:49) The Impact of Generative AI on Development and Deployment

    (55:18) The Future of AI in Engineering

    (01:01:07) Quick Hits

    Links

    Connect with Premal

    Visit 6sense

    Connect with Madhukar

    Visit SingleStore

  • On this episode of How We Made That App, embark on a captivating journey into STEM education with host Madhukar Kumar and the Co-Founder of the brilliant app Numerade Alex Lee!

    Alex gives an in-depth look at the fascinating fusion of philosophy and technology that propels Numerade's innovative learning platform. He unveils the intricate layers of AI and machine learning models that power their educational ecosystem. Beyond the present, he explores the promising future integration of Large Language Models (LLMs), offering a glimpse into the next frontier of education.

    Numerade is more than just AI and LLM enhancements; Alex emphasizes the human touch woven into Numerade's approach. Discover the impact of meaningful interactions on the learning experience and the deliberate efforts to maintain a personal connection in the digital realm. Alex envisions growth by seamlessly aligning Numerade's services with the dynamic advancements in AI, creating a bridge between cutting-edge technology and genuine human engagement.

    Tune in as this episode unravels the philosophy, technology, and human-centric approach that define Numerade's quest to revolutionize STEM education.

    Key Quotes:

    “Our thesis has always been that when it comes to learning this complex material, it's so much more effective to be able to get sight and sound. So, to be able to sit down and have in front of you an expert educator who's walking you through the video, speaking to you the high-level concepts, guiding you through the various skills that are required to be able to tackle that problem. That's what we found really, really constructive to the learning cycle for our students.”

    “One thing that we do also in the background, and this is where AI and machine learning come in, is being able to create holistic end-to-end experiences that really stitch together a lot of these different videos to provide something that takes the student through the whole journey of learning something first at a conceptual level. So really building that knowledge foundation. So, for example, understanding what momentum even is. And then gradually, as they're building that knowledge framework, we're giving them more discrete items and problems for them to solve. And that way we're doing more of that skill-building aspect. So really honing in on how do you solve momentum questions and equations.”

    “When we were experiencing growth, the one thing that we realized was, sitting students down and conducting these more qualitative feedback sessions with them, getting them into focus groups, that wasn't really all that scalable. It works really, really well, and it's still something that we do today, but as the amount of traffic that you get on the site grows, as the number of users that begin interacting with our content starts to grow, there needs to be better ways for us to gain these deeper insights. And that's where we started the exploration of how we best set a system up for ingesting all of the data that's being created by our users during their time interacting with our site.”

    “Right now, from all of the data that we're able to see, humans will, at least in the immediate future, still be very much a part of the learning experience. And I think the reason behind that is just because the learning experience is also inherently human. And you have to have some of that human element behind it to really effectuate great learning.”

    Timestamps

    (1:08) Numerade’s origins

    (5:00) Expanding education in the TikTok era

    (11:10) Building Numerade through existing technologies

    (17:05) Using product-led growth to expand Numerade

    (21:05) Using LLMs and AI to utilize Numerade

    (29:35) Quick Hits

    Links

    Connect with Alex

    Visit Numerade

    Connect with Madhukar

    Visit SingleStore

  • Discover the groundbreaking potential of a Squirrel Bot (SQrL) in transforming your interactions with JIRA users! Join us in this episode as we sit down with Dave Eyler, Senior Director of Product Management at SingleStore. From his journey as a software engineer to a product manager, Dave takes us through the evolution of his career.

    Tune in and explore the limitless possibilities of SingleStore's databases, delving into their incredible versatility across various use cases. Brace yourself for a mind-blowing exploration of vector analysis queries and their exceptional performance in AI use cases. We also delve into the future of application development, uncovering how AI technology can elevate applications and the potential of text interfaces in managing complex applications.

    In the final stretch, we navigate the impact of AI and machine learning on the industry, unraveling the dramatic shifts within software teams. Dave candidly shares insights on the allure of product management, shedding light on why it magnetizes engineers. Don't miss this remarkable episode filled with insights, experiences, and a touch of humor – a journey into the transformative power of AI and the future of databases!

    Key Quotes:

    “People rag on JIRA. JIRA is like the ultimate Swiss Army knife: it's not good at anything, but it can be made to do anything, and I think there is power in that. So people make fun of JIRA, but JIRA is actually, I think, a pretty impressive piece of software if you overlook the maddening nature of it sometimes.”

    “This is just not a thing that is out there. So we solve this need that is growing bigger and bigger and bigger: I want to have real-time analytics, I want to do real-time operations, I want to make smart decisions. As data grows and companies get smarter about data and their operations, this is only getting increased. And so the thing I love about SingleStore is it's actually an incredibly differentiated product and solves a real need for our customers. So that's definitely one of my favorite things about the job.”

    “We're converging now to where customers don't want to have a million databases. They want to have the smallest number of databases they can and serve all their use cases. And that's really the power of SingleStore. This vector analysis use case is not like we had to go build a ton of stuff to make it work. We already had it, because we're a super powerful database, shared-nothing, distributed, and purpose-built for speed.”

    Timestamps

    (00:38) Intro

    (09:47) The Future of Databases and AI Applications

    (13:20) Product Management and Application Evolution

    (20:57) Customer Feedback, AI, and Product Management

    (29:13) Parents' Reaction to Children's Career Choices

    Links

    Connect with David

    Connect with Madhukar

    Visit SingleStore

  • In this episode, we delve into a fascinating conversation with Marcus O'Brien, VP of Product, AutoCAD, at Autodesk, focusing on the evolution of AutoCAD. Marcus takes us on a journey, discussing how AutoCAD has evolved since the 1980s, establishing itself as a go-to tool for architects, engineers, and designers worldwide. From creating 2D and 3D objects to evolving into an extensible platform, Marcus shares insightful details about the product and its wide range of applications.

    We also have the opportunity to hear Marcus's extraordinary personal journey from Ireland to America, and his transition into product management. Marcus enlightens us on the evolution of product management, discussing the industry's macro shifts that have influenced products and the strategies to enter product management today. Additionally, he shares his thoughts on what makes a great product manager, how AI and ML are utilized in product management and his experience in building models for Autodesk's products.

    We conclude the episode by exploring the use of LLMs in 3D modeling and design, along with the capabilities of AutoCAD products in generative design. Marcus offers insights into onboarding customers and highlights the available tools for individuals interested in learning 3D modeling and design. Tune in to this insightful episode to learn from an industry expert and explore the world of AutoCAD and product management.

    Key Quotes:

    “I think going through the technical route and then getting into product management later is a really strong foundation: being able to understand some technical engineering concepts, and then you can kind of scale yourself, learn a bit about strategy, but be rooted in the technical side. I think that is one of the things that makes you really successful. But when I look at founders, if I look at all the VC investment that's happening at the moment, it's for a more technical founder base. So I think the wild west of, you can just go to a VC and you've got a business plan and you can talk the talk, I think those days might be over now, unless your company name ends in AI. But there tends to be more of a technical bias to these positions now, so I think anyone coming in with a technical background and then switching to PM, it's a good route.”

    “When I look at AutoCAD's journey, the first 20 years were about building automations on desktop software. Then for the next 10 years it was about acquiring or building vertical products and bringing them to market to target specific niches. From 2010, maybe to 2018, it was more about multi-platform, about creating an AutoCAD that is truly everywhere: desktop, web, mobile. We've got the AutoCAD design automation API in the cloud, so that if you want to run automations, if you don't want to use your GPU, if you want to do things online with servers, we've developed this full third-party ecosystem of developers who develop capabilities on top of AutoCAD. I think that was the kind of push, and certainly this last number of years for PMs, it's been about machine learning and AI.”

    “I think you need to learn it on the job, if I'm honest. Maybe I'm a bit old school like that. I would push back on the ego, and I actually think the most successful product managers are humble. And I think that is one of the qualities you look for. You want table stakes: you need the smartest person, you know, super smart people. My personal preference is a strong bias for action. So somebody who doesn't have to have the idea, but is the person who wants to get traction and make progress with the idea. Incredible communication skills, both written and verbal. You have to be one of those people who just enjoys it.”

    “I think if your company is solely reliant on LLMs to check your AI/ML capabilities, you're probably missing a beat. I think the companies that are looking more broadly beyond LLMs maybe have a little bit more strategic advantage and more value to offer to customers ultimately.”

    “I think the way that I raise my kids needs to be different now, because I need them to be comfortable with working with AIs. I think that's going to be their childhood. They're going to grow up with AIs. I think we have a role to play in teaching our kids how to get the best from AI, in the way that we had to learn how to use iPads. They're going to have to learn how to work with AIs.”

    Timestamps

    (1:45) - The journey of AutoCAD

    (7:56) - Marcus’ journey from Ireland to America

    (12:05) - Taking the technical route to product management

    (19:20) - Bringing GenAI and product management together

    (25:26) - LLMs in 3D Modeling and Design

    (29:56) - Goal Setting and Adapting in Product Management

    (37:15) - Quick hits

    Links

    Connect with Marcus

    Check out the AutoCAD podcast

    Check out the Figuring Things Out podcast

    Connect with Madhukar

    Visit SingleStore

  • In this riveting podcast episode, host Madhukar Kumar engages in a deep and enlightening conversation with Scott Booker, the Chief Customer Officer at EasyPark. Together, they embark on a journey to uncover the fascinating story behind EasyPark, from its inception and the intricate process of its creation to the breadth of its operational territories and its exciting plans for future expansion.

    Scott Booker generously shares his insights into the inner workings of EasyPark, shedding light on the innovative ways they harness data to fine-tune their services. He also elucidates how cutting-edge AI and mobile technology seamlessly integrate into their approach, paving the way for an exciting future in urban mobility.

    Scott also offers thought-provoking commentary on the evolution of parking management, highlighting the potential transformative impact of autonomous vehicles on parking requirements. He paints a vivid picture of the pivotal role that EasyPark envisions playing in the future of city planning, making this episode a must-listen for anyone interested in the ever-evolving urban landscape.

    Key Quotes:

    “Making it easy was a large part of what we're trying to do. You see it in our reviews, you see it in the feedback we get through customer care, and so forth. We make it so much easier for people to do what they need to do in terms of parking. So, we sort of gravitated in that direction.”

    “What we're trying to do is reduce that flow and congestion that happens on a regular basis in cities, so that they can focus on using their investment dollars and so forth to make their cities more livable, more usable for the citizens. And so, part of that is really understanding, using that data, what's happening on an hourly basis, on a daily basis, and a weekly basis.”

    “I have responsibilities for marketing, product, customer care, and analytics. So, [in] all of my areas, we'll push and lead in helping [cities] use data to make decisions. I think it's super important in this space. So if I'm on the product side, for instance, I need to understand what the data is telling me about how the app's being used, how many errors are being thrown, what the A-B test environment looks like when we're running multiple tests, because we're always in a constant kind of mode to improve the conversion of that app. To continue to simplify. There shouldn't be any roadblocks or hindrances; make it easy to park.”

    Time Stamps

    (02:45) - Introduction to EasyPark

    (03:18) - The Functionality and Benefits of EasyPark

    (06:06) - Real-Time Data and Analytics in EasyPark

    (21:50) - The Role of AI in EasyPark

    (28:39) - Quick Hits

    Links

    Connect with Scott

    Check out EasyPark

    Connect with Madhukar

    Visit SingleStore

  • Who wouldn't desire a second brain? Envision a transformative tool that serves as the custodian of your thoughts and files, liberating your mind for unbridled creativity. Today, we embark on a captivating exploration with Stan Girard, the visionary mind behind Quivr and the head of Gen AI at Theodo.

    Quivr, a groundbreaking generative AI application, is engineered to be your unrivaled memory repository, where no idea or data is ever misplaced. But that's just the beginning. Stan's vision for Quivr extends far beyond a mere storage solution; it's an interactive, ChatGPT-powered file system that will revolutionize the way you interact with your stored knowledge.

    Going beyond Quivr, Stan shares profound insights into the future of data storage and the limitless potential of generative AI applications. He discusses how autonomous agents are poised to reshape the landscape of publishing and marketing. Stan's infectious enthusiasm for AI, coupled with his visionary outlook, will leave you with much food for thought.

    Key Quotes:

    “I think that keeping it open source is very important when you are storing information, when you're the second brain of people. Keeping your product open source allows people to look into it, or at least know that they can look into it. And that gives confidence to people, especially when you store their information. They want to know how [you store it], they want to know what [you] do with it. They want to know if you use it. OpenAI, when they closed-sourced GPT-4, created a distrust in the generative AI world where people don't know what to do with the information. So the goal is to keep it open source as much as I want and I can.”

    “I really want it to be your second brain and allow you to forget things. So it's not only data, it's also things to do. So you can be more creative when you are using your first brain.”

    “There needs to be an ethical part of it, where maybe [we] keep the recommendation system open source so people can look into it, see if we are not pushing more kinds of information, a specific way or a specific view, and this is going to be interesting too in the future.”

    “If you want to create a product, use either LangChain or LlamaIndex for the abstraction of complexity with the interaction to vector stores, for creating agents, or for chatting with OpenAI. So they create an abstraction layer and, for example, if I recall correctly, in LlamaIndex you can even use LangChain LLM models.”

    Timestamps:

    (:35) - Intro

    (6:40) - Envisioning a new file system

    (13:24) - Quivr and Open-Source Frameworks Potential

    (25:39) - Data Stores and Gen AI Applications

    Links

    Connect with Stan

    Check out Quivr

    Visit Theodo

    Connect with Madhukar

    Visit SingleStore

  • Welcome to “How We Made That App”, where we explore the crazy, wild, and sometimes downright bizarre stories behind the creation of some of the world's most popular apps.

    I'm your host, the always charming and devastatingly handsome Madhukar Kumar. After starting my career as a developer and then as a product manager, I am now the Chief Marketing Officer at SingleStore. And I'm here to take you on a journey through the data challenges and obstacles that app developers face on the road to creating their masterpieces.

    On each episode, we'll dive deep into the origins of a different app and find out what went into making it the success it is today. We'll explore the highs and lows of development, the technical challenges that had to be overcome, and the personalities and egos that clashed along the way.

    With our signature blend of irreverent humor, snarky commentary and razor-sharp wit, we'll keep you entertained and informed as we explore the cutting edge of app development.

    So grab your favorite coding language, crank up the volume, and join us for

    “How We Made That App”

    Brought to you by the top app-building platform wizards at SingleStore.