Episodes
-
Today, I'm talking to Andy Sutton, GM of Data and AI at Endeavour Group, Australia's largest liquor and hospitality company. In this episode, Andy, who is also a member of the Data Product Leadership Community (DPLC), shares his journey from traditional, functional analytics to a product-led approach that drives their mission to leverage data and personalization to build the "Spotify for wines." This shift has greatly transformed how Endeavour's digital and data teams work together, and Andy explains how their advanced analytics work has paid off in terms of the company's value and profitability.
You'll learn about the often overlooked importance of relationships in a data-driven world, and why Andy puts so much weight on understanding how users do their jobs in the wild (with and without your product(s) in hand). Earlier this year, Andy also gave the DPLC community a deeper look at how they brew data products at EDG, and that recording is available to our members in the archive.
We covered:
What it was like at EDG before Andy started adopting a producty approach to data products, and how things have now changed (1:52)
The moment that caused Andy to change how his team was building analytics solutions (3:42)
The amount of financial value Andy's scaling team has unlocked as a result of their data product work (5:19)
How Andy and Endeavour use personalization to help build "the Spotify of wine" (9:15)
What the team under Andy required in order to make the transition to being product-led (10:27)
The successes seen by Endeavour through the digital and data teams' working relationship (14:04)
What data product management looks like for Andy's team (18:45)
How Andy and his team find solutions to "bridging the adoption gap" (20:53)
The importance of exposure time to end users for the adoption of a data product (23:43)
How talking to the pub staff at EDG's bars and restaurants helps his team build better data products (27:04)
What Andy loves about working for Endeavour Group (32:25)
What Andy would change if he could rewind back to 2022 and do it all over (34:55)
Final thoughts (38:25)

Quotes from Today's Episode
"I think the biggest thing is the value we unlock in terms of incremental dollars, right? I've not worked in an analytics team before where we've been able to deliver a measurable value… So, we're actually, in theory, becoming a profit center for the organization, not just a cost center. And so, there's kind of one key metric. The second one, we do measure the voice of the team and how engaged our team are, and that's on an upward trend since we moved to the new operating model, too. We also measure [a type of] 'voice of partner' score [and] get something like a 4.1 out of 5 on that scale. Those are probably the three biggest ones: we're putting value in, and we're delivering products, I guess, our internal team wants to use, and we are building an enthused team at the same time." - Andy Sutton (16:18)

"You can put an [unfinished] product in front of an end customer, and they will give you quality feedback that you can then iterate on quickly. You can do that with an internal team, but you'll lose credibility. Internal teams hold their analytics colleagues to a higher standard than the external customers. We're trying to change how people do their roles. People feel very passionate about the roles they do, and how they do them, and what they bring to that role. We're trying to build some of that into products. It requires probably more design consideration than I'd anticipated, and we're still bringing in more designers to help us move closer to the start line." - Andy Sutton (19:25)

"[Customer research] is becoming critical in terms of the products we're building. You're building a product, a set of products, or a process for an operations team. In our context, an operations team can mean a team of people who run a pub. It's not just about convincing me, my product managers, or my data scientists that you need research; we want to take some of the resources out of running that bar for a period of time because we want to spend time with [the pub staff] watching, understanding, and researching. We've learned some of these things along the way… we've earned the trust, we've earned that seat at the table, and so we can have those conversations. It's not trivial to get people to say, 'I'll give you a day-long workshop, or give you my team off of running a restaurant and a bar for the day so that they can spend time with you, and so you can understand our processes.'" - Andy Sutton (24:42)

"I think what is very particular to pubs is the importance of the interaction between the customer and the person serving the customer. [Pubs] are about the connections between the staff and the customer, and you don't get any of that if you're just looking at things from a pure data perspective… You don't see the [relationships between pub staff and customer] in the [data], so how do you capture some of that in your product? It's about understanding the context of the data, not just the data itself." - Andy Sutton (28:15)

"Every winery, every wine grower, every wine has got a story. These conversations [and relationships] are almost natural in our business. Our CEO started work on the shop floor in one of our stores 30 years ago. That kind of relationship stuff percolates through the organization. Having these conversations around the customer and internal stakeholders in the context of data feels a lot easier because storytelling and relationships are the way we get things done. An analytics team may get frustrated with people who can't understand data, but it's [the analytics team's job] to help bridge that gap." - Andy Sutton (32:34)

Links Referenced
LinkedIn: https://www.linkedin.com/in/andysutton/
Endeavour Group: https://www.endeavourgroup.com.au/
Data Product Leadership Community: https://designingforanalytics.com/community
-
After getting started in construction management, Anna Jacobson traded in the hard hat for the world of data products and operations at a VC company. Anna, who has a structural engineering undergrad and a master's in data science, is also a Founding Member of the Data Product Leadership Community (DPLC). However, her work with data products is more "accidental" and is just part of her responsibility at Operator Collective. Nonetheless, Anna had a lot to share about building data products, dashboards, and insights for users, including resistant ones!
That resistance is precisely what I wanted to talk to her about in this episode: how does Anna get somebody to adopt a data product to which they may be apathetic, if not completely resistant?
At the end of the episode, Anna gives us a sneak peek at what she's planning to talk about in our final 2024 live DPLC group discussion coming up on 12/18/2024.
We covered:
(1:17) Anna's background and how she got involved with data products
(3:32) The ways Anna applied her experiences working in construction management to her current work with data products at a VC firm
(5:32) Explaining one of the main data products she works on at Operator Collective
(9:55) How Anna defines success for her data products
(15:21) The process of designing data products for "non-believers"
(21:08) How to think about "super users" and their feedback on a data product
(27:11) How a company's cultural problems can be a blocker for product adoption
(38:21) A preview of what you can expect from Anna's talk and live group discussion in the DPLC
(40:24) Closing thoughts from Anna
(42:54) Where you can find more from Anna

Quotes from Today's Episode
"People working with data products are always thinking about how to [gain user adoption of their product]... I can't think of a single one where [all users] were immediately on board. There's a lot to unpack in what it takes to get non-believers on board, and it's something that none of us ever get any training on. You just learn through experience, and it's not something that most people took a class on in college. All of the social science around what we do gets really passed over for all the technical stuff. It takes thinking through and understanding where different [users] are coming from, and [understanding] that my perspective alone is not enough to make it happen." - Anna Jacobson (16:00)

"If you only bring together the super users and don't try to get feedback from the average user, you are missing the perspective of the person who isn't passionate about the product. A non-believer is someone who is just over capacity. They may be very hard-working, they may be very smart, but they just don't have the bandwidth for new things. That's something that has to be overcome when you're putting a new product into place." - Anna Jacobson (22:35)

"If a company can't find budget to support [a data product], that's a cultural decision. It's not a financial decision. They find the money for the things that they care about. Solving the technology challenge is pretty easy, but you have to have a company that's motivated to do that. If you want to implement something new, be it a data product or any change in an organization, identifying the cultural barriers and figuring out how to bring [people in an organization] on board is the crux of it. The money and the technology can be found." - Anna Jacobson (27:58)

"I think people are actually very bad at explaining what they want, and asking people what they want is not helpful. If you ask people what they want to do, then I think you have a shot at being able to build a product that does [what they want]. The executive sponsors typically have a very different perspective on what the product [should be] than the users do. If all of your information is getting filtered through the executive sponsor, you're probably not getting the full picture." - Anna Jacobson (31:45)

"You want to define what the opportunity is, the problem, the solution, and you want to talk about costs and benefits. You want to align [the data product] with corporate strategy, and those things are fairly easy to map out. But as you get down to the user, what they want to know is, 'How is this going to make my life easier? How is this going to make [my job] faster? How is it going to result in better outcomes?' They may have an interest in how it aligns with corporate strategy, but that's not what's going to motivate them. It's really just easier, faster, better." - Anna Jacobson (35:00)

Links Referenced
LinkedIn: https://www.linkedin.com/in/anna-ching-jacobson/
DPLC (Data Product Leadership Community): https://designingforanalytics.com/community
-
R&D for materials-based products can be expensive, because improving a product's materials takes a lot of experimentation that historically has been slow to execute. In traditional labs, you might change one variable, re-run your experiment, and see if the data shows improvements in your desired attributes (e.g. strength, shininess, texture/feel, power retention, temperature, stability, etc.). However, today, there is a way to leverage machine learning and AI to reduce the number of experiments a materials scientist needs to run to gain the improvements they seek. Materials scientists spend a lot of time in the lab, away from a computer screen, so how do you design a desirable informatics SAAS that actually works and fits into the workflow of these end users?
As the Chief Product Officer at MaterialsZone, Ori Yudilevich came on Experiencing Data with me to talk about this challenge, and about how his PM, UX, and data science teams work together to produce a SAAS product that makes materials informatics so valuable that materials scientists rely on it to make their R&D efforts time- and cost-efficient.
We covered:
(0:45) Explaining what Ori does at MaterialsZone and who their product serves
(2:28) How Ori and his team help make materials science testing more efficient through their SAAS product
(9:37) How they design a UX that can work across various scientific domains
(14:08) How "doing product" at MaterialsZone matured over the past five years
(17:01) Explaining the "Wizard of Oz" product development technique
(21:09) The importance of integrating UX designers into the "Wizard of Oz"
(23:52) The challenges MaterialsZone faces when trying to get users to adopt their product
(32:42) Advice Ori would've given himself five years ago
(33:53) Where you can find more from MaterialsZone and Ori

Quotes from Today's Episode
"The fascinating thing about materials science is that you have this variety of domains, but all of these things follow the same process. One of the problems [consumer goods companies] face is that they have to do lengthy testing of their products. This is something you can use machine learning to shorten. [Product research] is an iterative process that typically takes a long time. Using your data effectively and using machine learning to predict what can happen, what's better to try out, and what will reduce costs can accelerate time to market." - Ori Yudilevich (3:47)

"The difference [in time spent testing a product] can be up to 70% [i.e., you can run 70% fewer experiments using ML]. That [also] means 70% less resources you're using. Under the 'old system' of trial and error, you were just trying out a lot of things. The human mind cannot process a large number of parameters at once, so [a materials scientist] would just start playing only with [one parameter at a time]. You'll have many experiments where you just try to optimize [for] one parameter, but then you might have 20, 30, or 100 more [to test]. Using machine learning, you can change a lot of parameters at once. The model can learn what has the most effect, what has a positive effect, and what has a negative effect. The differences can be really huge." - Ori Yudilevich (5:50)

"Once you go deeper into a use case, you see that there are a lot of differences. The types of raw materials, the data structure, the quantity of data, etc. For example, with batteries, you have lots of data because you can test hundreds all at once. Whereas with something like ceramics, you don't try so many [experiments]. You just can't. It's much slower. You can't do so many [experiments] in parallel. You have much less data. Your models are different, and your data structure is different. But there's also quite a lot of commonality because you're storing the data. In the end, you have each domain, some raw materials, formulations, tests that you're doing, and different statistical plots that are very common." - Ori Yudilevich (11:24)

"We'll typically do what we call the 'Wizard of Oz' technique. You simulate as if you have a feature, but you're actually working for your client behind the scenes. You tell them [the simulated feature] is what you're doing, but then measure [the client's response] to understand if there's any point in further developing that feature. Once you validate it, have enough data, and know where the feature is going, then you'll start designing it and releasing it in incremental stages. We've made a lot of progress in how we discover opportunities and how we build something iteratively to make sure that we're always going in the right direction." - Ori Yudilevich (15:56)

"The main problem we're encountering is changing the mindset of users. Our users are not people who sit in front of a computer. These are researchers who work in [a materials science] lab. The challenge [we have] is getting people to use the platform more. To see it's worth [their time] to look at some insights, and run the machine learning models. We're always looking for ways to make that transition faster… and I think the key is making [the user experience] just fun, easy, and intuitive." - Ori Yudilevich (24:17)

"Even if you make [the user experience] extremely smooth, if [users] don't see what they get out of it, they're still not going to [adopt your product] just for the sake of doing it. What we find is if this [product] can actually make them work faster or develop better products, that gets them interested. If you're adopting these advanced tools, it makes you a better researcher and worker. People who [adopt those tools] grow faster. They become leaders in their team, and they slowly drag the others in." - Ori Yudilevich (26:55)

"Some of [MaterialsZone's] most valuable employees are the people who have been users. Our product manager is a materials scientist. I'm not a materials scientist, and it's hard to imagine being that person in the lab. What I think is correct turns out to be completely wrong because I just don't know what it's like. Having [materials scientists] who've made the transition to software and data science? You can't replace that." - Ori Yudilevich (31:32)

Links Referenced
Website: https://www.materials.zone
LinkedIn: https://www.linkedin.com/in/oriyudilevich/
Email: [email protected]
-
Jeremy Forman joins us to open up about the hurdles and successes that come with building data products for pharmaceutical companies. Although he's new to Pfizer, Jeremy has years of experience leading data teams at organizations like Seagen and the Bill and Melinda Gates Foundation. He currently serves in a more specialized role in Pfizer's R&D department, building AI and analytical data products for scientists and researchers.
Jeremy gave us a good look at his team makeup, and in particular, how his data product analysts and UX designers work with pharmaceutical scientists and domain experts to build data-driven solutions. We talked a good deal about how and when UX design plays a role in Pfizer's data products, including a GenAI-based application they recently launched internally.
Highlights/ Skip to:
(1:26) Jeremy's background in analytics and transition into working for Pfizer
(2:42) Building an effective AI analytics and data team for pharma R&D
(5:20) How Pfizer finds data product managers
(8:03) Jeremy's philosophy behind building data products and how he adapts it to Pfizer
(12:32) The moment Jeremy heard a Pfizer end-user use product management research language and why it mattered
(13:55) How Jeremy's technical team members work with UX designers
(18:00) The challenges that come with producing data products in the medical field
(23:02) How to justify spending the budget on UX design for data products
(24:59) The results we've seen having UX design work on AI / GenAI products
(25:53) What Jeremy learned at the Bill & Melinda Gates Foundation with regards to UX and its impact on him now
(28:22) Managing the "rough dance" between data science and UX
(33:22) Breaking down Jeremy's GenAI application demo from CDIOQ
(36:02) What would Jeremy prioritize right now if his team got additional funding
(38:48) Advice Jeremy would have given himself 10 years ago
(40:46) Where you can find more from Jeremy

Quotes from Today's Episode
"We have stream-aligned squads focused on specific areas such as regulatory, safety and quality, or oncology research. That's so we can create functional career pathing and limit context switching and fragmentation. They can become experts in their particular area and build a culture within that small team. It's difficult to build good [pharma] data products. You need to understand the domain you're supporting. You can't take somebody with a financial background and put them in an Omics situation. It just doesn't work. And we have a lot of the scars, and the failures to prove that." - Jeremy Forman (4:12)

"You have to have the product mindset to deliver the value and the promise of AI data analytics. I think small, independent, autonomous, empowered squads with a product leader is the only way that you can iterate fast enough with [pharma data products]." - Jeremy Forman (8:46)

"The biggest challenge is when we say data products. It means a lot of different things to a lot of different people, and it's difficult to articulate what a data product is. Is it a view in a database? Is it a table? Is it a query? We're all talking about it in different terms, and nobody's actually delivering data products." - Jeremy Forman (10:53)

"I think when we're talking about [data products] there's some type of data asset that has value to an end-user, versus a report or an algorithm. I think it's even hard for UX people to really understand how to think about an actual data product. I think it's hard for people to conceptualize, how do we do design around that? It's one of the areas I think I've seen the biggest challenges, and I think some of the areas we've learned the most. If you build a data product, it's not accurate, and people are getting results that are incomplete… people will abandon it quickly." - Jeremy Forman (15:56)

"I think that UX design and AI development or data science work is a magical partnership, but they often don't know how to work with each other. That's been a challenge, but I think investing in that has been critical to us. Even though we've had struggles… I think we've also done a good job of understanding the [user] experience and impact that we want to have. The prototype we shared [at CDIOQ] is driven by user experience and trying to get information in the hands of the research organization to understand some portfolio types of decisions that have been made in the past. And it's been really successful." - Jeremy Forman (24:59)

"If we're having technology conversations with our business users and we're focused only on the technology output, we're just building reports. [After we adopted a human-centered design approach], it was talking [with end-users] about outcomes, value, and adoption. Having that resource transformed the conversation, and I felt like our quality went up. I felt like our output went down, but our impact went up. [End-users] loved the tools, and that wasn't what was happening before… I credit a lot of that to the human-centered design team." - Jeremy Forman (26:39)

"When you're thinking about automation through machine learning or building algorithms for [clinical trial analysis], it becomes a harder dance between data scientists and human-centered design. I think there's a lack of appreciation and understanding of what UX can do. Human-centered design is an empathy-driven understanding of users' experience, their work, their workflow, and the challenges they have. I don't think there's an appreciation of that skill set." - Jeremy Forman (29:20)

"Are people excited about it? Is there value? Are we hearing positive things? Do they want us to continue? That's really how I've been judging success. Is it saving people time, and do they want to continue to use it? They want to continue to invest in it. They want to take their time as end-users, to help with testing, helping to refine it. Those are the indicators. We're not generating revenue, so what does the adoption look like? Are people excited about it? Are they telling friends? Do they want more? When I hear that the ten people [who were initial users] are happy and that they think it should be rolled out to the broader audience, I think that's a good sign." - Jeremy Forman (35:19)

Links Referenced
LinkedIn: https://www.linkedin.com/in/jeremy-forman-6b982710/
-
The relationship between AI and ethics is both developing and delicate. On one hand, the GenAI advancements to date are impressive. On the other, extreme care needs to be taken as this tech continues to quickly become more commonplace in our lives. In today's episode, Ovetta Sampson and I examine the crossroads ahead for designing AI and GenAI user experiences.
While professionals and the general public are eager to embrace new products, recent breakthroughs, etc., we still need to have some guardrails in place. If we don't, data can easily get mishandled, and people could get hurt. Ovetta possesses firsthand experience working on these issues as they sprout up. We look at who should be on a team designing an AI UX, the risks associated with GenAI, ethics, and what we need to be thinking about going forward.
Highlights/ Skip to:
(1:48) Ovetta's background and what she brings to Google's Core ML group
(6:03) How Ovetta and her team work with data scientists and engineers deep in the stack
(9:09) How AI is changing the front-end of applications
(12:46) The type of people you should seek out to design your AI and LLM UXs
(16:15) Explaining why we're only at the very start of major GenAI breakthroughs
(22:34) How GenAI tools will alter the roles and responsibilities of designers, developers, and product teams
(31:11) The potential harms of carelessly deploying GenAI technology
(42:09) Defining acceptable levels of risk when using GenAI in real-world applications
(53:16) Closing thoughts from Ovetta and where you can find her

Quotes from Today's Episode
"If artificial intelligence is just another technology, why would we build entire policies and frameworks around it? The reason why we do that is because we realize there are some real thorny ethical issues [surrounding AI]. Who owns that data? Where does it come from? Data is created by people, and all people create data. That's why companies have strong legal, compliance, and regulatory policies around [AI], how it's built, and how it engages with people. Think about having a toddler and then training the toddler on everything in the Library of Congress and on the internet. Do you release that toddler into the world without guardrails? Probably not." - Ovetta Sampson (10:03)

"[When building a team] you should look for a diverse thinker who focuses on the limitations of this technology, not its capability. You need someone who understands that the end destination of that technology is an engagement with a human being. You need somebody who understands how they engage with machines and digital products. You need that person to be passionate about testing various ways that relationships can evolve. When we go from execution on code to machine learning, we make a shift from [human] agency to a shared-agency relationship. The user and machine both have decision-making power. That's the paradigm shift that [designers] need to understand. You want somebody who can keep that duality in their head as they're testing product design." - Ovetta Sampson (13:45)

"We're in for a huge taxonomy change. There are words that mean very specific definitions today. Software engineer. Designer. Technically skilled. Digital. Art. Craft. AI is changing all that. It's changing what it means to be a software engineer. Machine learning used to be the purview of data scientists only, but with GenAI, all of that is baked into Gemini. So, now you start at a checkpoint, and you're like, all right, let's go make an API, right? So, the skills, the understanding, the knowledge, the taxonomy even, how we talk about these things, how do we talk about the machine who speaks to us, talks to us, who could create a podcast out of just voice memos?" - Ovetta Sampson (24:16)

"We have to be very intentional [when building AI tools], and that's the kind of folks you want on teams. [Designers] have to go and play scary scenarios. We have to do that. No designer wants to be 'Negative Nancy,' but this technology has huge potential to harm. It has harmed. If we don't have the skill sets to recognize, document, and minimize harm, that needs to be part of our skill set. If we're not looking out for the humans, then who actually is?" - Ovetta Sampson (32:10)

"[Research shows] things happen to our brain when we're exposed to artificial intelligence… there are real human engagement risks that are an opportunity for design. When you're designing a self-driving car, you can't just let the person go to sleep unless the car is fully [automated] and every other car on the road is self-driving. If there are humans behind the wheel, you need to have a feedback loop system, something that's going to happen [in case] the algorithm is wrong. If you don't have that designed, there's going to be a large human engagement risk that a car is going to run over somebody who's [for example] pushing a bike up a hill[...] Why? The car could not calculate the right speed and pace of a person pushing their bike. It had the speed and pace of a person walking, the speed and pace of a person on a bike, but not the two together. Algorithms will be wrong, right?" - Ovetta Sampson (39:42)

"Model goodness used to be the purview of companies and the data scientists. Think about the first search engines. Their model goodness was [about] 77%. That's good, right? And then people started seeing photos of apes when [they] typed in 'black people.' Companies have to get used to going to their customers in a wide spectrum and asking them when they're [models or apps are] right and wrong. They can't take on that burden themselves anymore. Having ethically sourced data input and variables is hard work. If you're going to use this technology, you need to put into place the governance that needs to be there." - Ovetta Sampson (44:08)
-
Sometimes DIY UI/UX design only gets you so far, and you know it's time for outside help. One thing prospects from SAAS analytics and data-related product companies often ask me is what things are like in the other guy/gal's backyard. They want to compare their situation to others like them. So, today, I want to share some of the common "themes" I see that usually are the root causes of what leads to a phone call with me.
By the time I am on the phone with most prospects who already have a product in market, they're usually having significant problems with one or more of the following: sales friction (product value is opaque); low adoption or renewal worries (user apathy); customer complaints about the UI/UX being hard to use; velocity (the team is doing tons of work, but leadership isn't seeing progress), and the like.
I'm hoping today's episode will explain some of the root causes that may lead to these issues, so you can avoid them in your data product building work!
Highlights/ Skip to:
(10:47) Design != "front-end development" or analyst work
(12:34) Liking doing UI/UX/viz design work vs. knowing
(15:04) When a leader sees lots of work being done, but the UX/design isn't progressing
(17:31) Your product's UX needs to convey some magic IP/special sauce… but it isn't
(20:25) Understanding the tradeoffs of using libraries, templates, and other solutions' designs as a foundation for your own
(25:28) The sunk cost bias associated with POCs and "we'll iterate on it"
(28:31) Relying on UI/UX "customization" to please all customers
(31:26) The hidden costs of abstraction of system objects, UI components, etc. to make life easier for engineering and technical teams
(32:32) Believing you'll know the design is good "when you see it" (and what you don't know you don't know)
(36:43) Believing that because the data science/AI/ML modeling under your solution was accurate, difficult, and/or expensive, it is automatically worth paying for

Quotes from Today's Episode
The challenge is often not knowing what you don't know about a project. We often end up focusing on building the tech [and rushing it out] so we can get some feedback on it… but product is not about getting it out there so we can get feedback. The goal of doing product well is to produce value, benefits, or outcomes. Learning is important, but that's not what the objective is. The objective is benefits creation. (5:47)

When we start doing design on a project that's not design-actionable, we build debt and sometimes can hurt the process of design. If you start designing your product with an entire green space, no direction, and no constraints, the chance of you shipping a good v1 is small. Your product strategy needs to be design-actionable for the team to properly execute against it. (19:19)

While you don't always need to start at zero with your UI/UX design, what are the parts of your product or application that do make sense to borrow, "steal," and cheat from? And when does it not? It takes skill to know when you should be breaking the rules or conventions. Shortcuts often don't produce outsized results, unless you know what a good shortcut looks like. (22:28)

A proof of concept is not a minimum valuable product. There's a difference between proving the tech can work and making it into a product that's so valuable, someone would exchange money for it because it's so useful to them. Whatever that value is, these are two different things. (26:40)

Trying to do a little bit for everybody [through excessive customization] can often result in nobody understanding the value or utility of your solution. Customization can hide the fact the team has decided not to make difficult choices. If you're coming into a crowded space… it's likely not going to be a compelling reason to [convince customers to switch to your solution]. Customization can be a tax, not a benefit. (29:26)

Watch for the sunk cost bias [in product development]. [Buyers] don't care how the sausage was made. Many don't understand how the AI stuff works; they probably don't need to understand how it works. They want the benefits downstream from technology wrapped up in something so invaluable they can't live without it. Watch out for technically right, effectively wrong. (39:27)
-
In today's episode, I'm joined by John Felushko, a product manager at LabStats who impressed me after we recently had a 1x1 call together. John and his team have developed a successful product that helps universities track and optimize their software and hardware usage so schools make smart investments. However, John also shares how culture and value are very tied together, and why their product isn't a fit for every school or every country. John shares how important customer relationships are, how his team designs great analytics user experiences, how they do user research, and what he learned making high-end winter sports products that's relevant to leading a SAAS analytics product. Combined with John's background in history and the political economy of finance, John paints some very colorful stories about what they're getting right, and how they've course-corrected over the years at LabStats.
Highlights/ Skip to:
(0:46) What is the LabStats product
(2:59) Orienting analytics around customer value instead of IT/data
(5:51) "Producer of Persistently Profitable Product Process"
(11:22) How they make product adjustments based on previous failures
(15:55) Why a lack of cultural understanding caused LabStats to fail internationally
(18:43) Quantifying value beyond dollars and cents
(25:23) How John is able to work so closely with his customers without barriers
(30:24) Who makes up the LabStats product research team
(35:04) How strong customer relationships help inform the UX design process
(38:29) Getting senior management to accept that you can't regularly and accurately predict when you'll be feature-complete and ship
(43:51) Where John learned his skills as a successful product manager
(47:20) Where you can go to cultivate the non-technical skills to help you become a better SAAS analytics product leader
(51:00) What advice would John Felushko have given himself 10 years ago?
(56:19) Where you can find more from John Felushko

Quotes from Today's Episode
"The product process is [essentially] really nothing more than the scientific method applied to business. Every product is an experiment - it has a hypothesis about a problem it solves. At LabStats [we have a process] where we go out and clearly articulate the problem. We clearly identify who the customers are, and who are [people at other colleges] having that problem. Incrementally and as inexpensively as possible, [we] test our solutions against those specific customers. The success rate [of testing solutions by cross-referencing with other customers] has been extremely high." - John Felushko (6:46)

"One of the failures I see in Americans is that we don't realize how much culture matters. Americans have this bias to believe that whatever is valuable in my culture is valuable in other cultures. Value is entirely culturally determined and subjective. Value isn't a number on a spreadsheet. [LabStats positioned our product] as something that helps you save money and be financially efficient. In French government culture, financial efficiency is not a top priority. Spending government money on things like education is seen as a positive good. The more money you can spend on it, the better. So, the whole message of financial efficiency wasn't going to work in that market." - John Felushko (16:35)

"What I'm really selling with data products is confidence. I'm selling assurance. I'm selling an emotion. Before I was a product manager, I spent about ten years in outdoor retail, selling backpacks and boots. What I learned from that is you're always selling emotion, at every level. If you can articulate the ROI, the real value is that the buyer has confidence they bought the right thing." - John Felushko (20:29)

"[LabStats] has three massive, multi-million dollar horror stories in our past where we [spent] millions of dollars in development work for no results. No ROI. Horror stories are what shape people's values more than anything else. Avoiding negative outcomes is what people avoid more than anything else. [It's important to] tell those stories and perpetuate those [lessons] through the culture of your organization. These are the times we screwed up, and this is what we learned from it; do you want to screw up like that again, because we learned not to do that." - John Felushko (38:45)

"There's an old description of a product manager, like, 'Oh, they come across as the smartest person in the room.' Well, how do you become that person? Expand your view, and expand the amount of information you consume as widely as possible. That's so important to UX design and thinking about what went wrong. Why are some customers super happy and some customers not? What is the difference between those two groups of people? Is it culture? Is it time? Is it mental ability? Is it the size of the screen they're looking at my product on? What variables can I define and rule out, and what data sources do I have to answer all those questions? It's just the normal product manager thing: constant curiosity." - John Felushko (48:04)
-
In today's episode, I'm going to perhaps work myself out of some consulting engagements, but hey, that's ok! True consulting is about service, not PPT decks with strategies and tiers of people attached to rate cards. Specifically today, I decided to reframe a topic and approach it from the opposite/negative side. So, instead of telling you when the right time is to get UX design help for your enterprise SAAS analytics or AI product(s), today I'm going to tell you when you should NOT get help!
Reframing this was really fun and made me think a lot as I recorded the episode. Some of these reasons aren't necessarily representative of what I believe, but rather what I've heard from clients and prospects over 25 years: what they believe. For each of these, I'm also giving a counterargument, so hopefully, you get both sides of the coin.
Finally, analytical thinkers, especially data product managers it seems, often want to quantify all forms of value they produce in hard monetary units. So, in this episode, I'm also going to talk about other forms of value that products can create that are worth paying for, and how mushy things like "feelings" might just come into play ;-) Ready?
Highlights/ Skip to:
(1:52) Going for short, easy wins
(4:29) When you think you have good design sense/taste
(7:09) The impending changes coming with GenAI
(11:27) Concerns about "dumbing down" or oversimplifying technical analytics solutions that need to be powerful and flexible
(15:36) Agile and process FTW?
(18:59) UX design for and with platform products
(21:14) The risk of involving designers who don't understand data, analytics, AI, or your complex domain considerations
(30:09) Designing after the ML models have been trained, when it's too late to go back
(34:59) Not tapping professional design help when your user base is small, and you have routine access and exposure to them
(40:01) Explaining the value of UX design investments to your stakeholders when you don't 100% control the budget or decisions

Quotes from Today's Episode
"It is true that most impactful design often creates more product and engineering work because humans are messy. While there sometimes are these magic, small GUI-type changes that have big impact downstream, the big-picture value of UX can be lost if you're simply assigning low-level GUI improvement tasks and hoping to see a big product win. It always comes back to the game you're playing inside your team: are you working to produce UX and business outcomes, or shipping outputs on time?" (3:18)

"If you're building something that needs to generate revenue, there has to be a sense of trust and belief in the solution. We've all seen the challenges of this with LLMs, [when] you're unable to get it to respond in a way that makes you feel confident that it understood the query to begin with. And then you start to have all these questions about, 'Is the answer not in there,' or 'Am I not prompting it correctly?' If you think that most of this is just a technical data science problem, then don't bother to invest in UX design work…" (9:52)

"Design is about, at a minimum, making it useful and usable, if not delightful. In order to do that, we need to understand the people that are going to use it. What would an improvement to this person's life look like? Simplifying and dumbing things down is not always the answer. There are tools and solutions that need to be complex, flexible, and/or provide a lot of power, especially in an enterprise context. Working with a designer who solely insists on simplifying everything at all costs, regardless of your stated business outcome goals, is a red flag, and a reason not to invest in UX design, at least with them!" (12:28)

"I think what an analytics product manager [or] an AI product manager needs to accept is there are other ways to measure the value of UX design's contribution to your product and to your organization. Let's say that you have a mission-critical internal data product, it's used by the most senior executives in the organization, and you and your team made their day, or their month, or their quarter. You saved their job. You made them feel like a hero. What is the value of giving them that experience and making them feel like those things… What is that worth when a key customer or colleague feels like you have their back with this solution you created? Ideas that spread, win, and if these people are spreading your idea, your product, or your solution… there's a lot of value in that." (43:33)

"Let's think about value in non-financial terms. Terms like feelings. We buy insurance all the time. We're spending money on something that most likely will have zero economic value this year because we're actually trying not to have to file claims. Yet this industry does very well because the feeling of security matters. That feeling is worth something to a lot of people. The value of feeling secure is something greater than whatever the cost of the insurance plan. If your solution can build feelings of confidence and security, what is that worth? Does 'hard to measure precisely' necessarily mean 'low value?'" (47:26)
-
Due to a technical glitch that ended up unpublishing this episode right after it was originally released, Episode 151 is a replay of my conversation with Zalak Trivedi from this past March. Please enjoy our chat if you missed it the first time around!
Thanks,
Brian
Links
Original Episode: https://designingforanalytics.com/resources/episodes/139-monetizing-saas-analytics-and-the-challenges-of-designing-a-successful-embedded-bi-product-promoted-episode/
Sigma Computing: https://sigmacomputing.com
Email: [email protected]
LinkedIn: https://www.linkedin.com/in/trivedizalak/
Sigma Computing Embedded: https://sigmacomputing.com/embedded
About Promoted Episodes on Experiencing Data: https://designingforanalytics.com/promoted
-
"Last week was a great year in GenAI," jokes Mark Ramsey, and it's a great philosophy to have as LLM tools in particular continue to evolve at such a rapid rate. This week, you'll get to hear my fun and insightful chat with Mark from Ramsey International about the world of large language models (LLMs) and how we make useful UXs out of them in the enterprise.
Mark shared some fascinating insights about using a company's website information (data) as a place to pilot an LLM project, avoiding privacy landmines, and how re-ranking of models leads to better LLM response accuracy. We also talked about the importance of real human testing to ensure LLM chatbots and AI tools truly delight users. From amusing anecdotes about the spinning beach ball on macOS to envisioning a future where AI-driven chat interfaces outshine traditional BI tools, this episode is packed with forward-looking ideas and a touch of humor.
Highlights/ Skip to:
(0:50) Why is the world of GenAI evolving so fast?
(4:20) How Mark thinks about UX in an LLM application
(8:11) How Mark defines "Specialized GenAI"
(12:42) Mark's consulting work with GenAI / LLMs these days
(17:29) How GenAI can help the healthcare industry
(30:23) Uncovering users' true feelings about LLM applications
(35:02) Are UIs moving backwards as models progress forward?
(40:53) How will GenAI impact data and analytics teams?
(44:51) Will LLMs be able to consistently leverage RAG and produce proper SQL?
(51:04) Where you can find more from Mark and Ramsey International

Quotes from Today's Episode
"With [GenAI], we have a solution that we've built to try to help organizations, and build workflows. We have a workflow that we can run and ask the same question [to a variety of GenAI models] and see how similar the answers are. Depending on the complexity of the question, you can see a lot of variability between the models… [and] we can also run the same question against the different versions of the model and see how it's improved. Folks want a human-like experience interacting with these models… [and] if the model can start responding in just a few seconds, that gives you much more of a conversational type of experience." - Mark Ramsey (2:38)

"[People] don't understand that when you interact [with GenAI tools] and it brings tokens back in that streaming fashion, you're actually seeing inside the brain of the model. Every token it produces is then displayed on the screen, and it gives you that typewriter experience back in the day. If someone has to wait, and all you're seeing is a logo spinning, from a UX experience standpoint… people feel like the model is much faster if it just starts to produce those results in that streaming fashion. I think in a design, it's extremely important to take advantage of that [...] as opposed to waiting to the end and delivering the results; some models support that, and other models don't." - Mark Ramsey (4:35)

"All of the data that's on the website is public information. We've done work with several organizations on quickly taking the data that's on their website, packaging it up into a vector database, and making that be the source for questions that their customers can ask. [Organizations] publish a lot of information on their websites, but people really struggle to get to it. We've seen a lot of interest in vectorizing website data, making it available, and having a chat interface for the customer. The customer can ask questions, and it will take them directly to the answer, and then they can use the website as the source information." - Mark Ramsey (14:04)

"I'm not skeptical at all. I've changed much of my [AI chatbot searches] to Perplexity, and I think it's doing a pretty fantastic job overall in terms of quality. It's returning an answer with citations, so you have a sense of where it's sourcing the information from. I think it's important from a user experience perspective. This is a replacement for broken search, as I really don't want to read all the web pages and PDFs you have that *might* be about my chiropractic care query to answer my actual [healthcare] question." - Brian O'Neill (19:22)

"We've all had great experiences with customer service, and we've all had situations where the customer service was quite poor, and we're going to have that same thing as we begin to [release more] chatbots. We need to make sure we try to alleviate having those bad experiences, and have an exit. If someone is running into a situation where they'd rather talk to a live person, have that ability to route them to someone else. That's why the robustness of the model is extremely important in the implementation… and right now, organizations like OpenAI and Anthropic are significantly better at that [human-like] experience." - Mark Ramsey (23:46)

"There's two aspects of these models: the training aspect and then using the model to answer questions. I recommend to organizations to always augment their content and don't just use the training data. You'll still get that human-like experience that's built into the model, but you'll eliminate the hallucinations. If you have a model that has been set up correctly, you shouldn't have to ask questions in a funky way to get answers." - Mark Ramsey (39:11)

"People need to understand GenAI is not a predictive algorithm. It is not able to run predictions, it struggles with some math, so that is not the focus for these models. What's interesting is that you can use the model as a step to get you [the answers]. A lot of the models now support functions… when you ask a question about something that is in a database, it actually uses its knowledge about the schema of the database. It can build the query, run the query to get the data back, and then once it has the data, it can reformat the data into something that is a good response back." - Mark Ramsey (42:02)

Links
Mark on LinkedIn
Ramsey International
Email: mark [at] ramsey.international
Ramsey International's YouTube Channel
-
Guess what? Data science and AI initiatives are still failing here in 2024, despite widespread awareness. Is that news? Candidly, you'll hear me share with Evan Shellshear (author of the new book Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics) how much I actually didn't want to talk about this story originally on my podcast, because it's not news! However, what is news is what the data says behind Evan's findings, and guess what? It's not the technology.
In our chat, Evan shares why he wanted to take a human approach to understanding the root cause of multiple organizations' failures, and how this approach highlighted the disconnect between data scientists and decision-makers. He explains the human factors at play, such as poor problem surfacing and organizational culture challenges, and how these human-centered design skills are rarely taught or offered to data scientists. The conversation delves into why these failures are more prevalent in data science compared to other fields, attributing it to the complexity and scale of data-related problems. We also discuss how analytically mature companies can mitigate these issues through strategic approaches and stakeholder buy-in. Join us as we dig into these critical insights for improving data science project outcomes.
Highlights/ Skip to:
(4:45) Why are data science projects still failing?
(9:17) Why is the disconnect between data scientists and decision-makers so pronounced relative to, say, engineering?
(13:08) Why are data scientists not getting enough training for real-world problems?
(16:18) What the data says about failure rates for mature data teams vs. immature data teams
(19:39) How to change people's opinions so they value data more
(25:16) What happens at the stage where the beneficiaries of data don't actually see the benefits?
(31:09) What are the skills needed to prevent a repeating pattern of creating data products that customers ignore?
(37:10) Where do more mature organizations find non-technical help to complement their data science and AI teams?
(41:44) Are executives and directors aware of the skills needed to level up their data science and AI teams?

Quotes from Today's Episode
"People know this stuff. It's not news anymore. And so, the reason why we needed this was really to dig in. And exactly like you did, like, keeping that list of articles is brilliant, and knowing what's causing the failures and what's leading to these issues still arising is really important. But at some point, we need to approach this in a scientific fashion, and we need to unpack this, and we need to really delve into the details beyond just the headlines and the articles themselves. And start collating and analyzing this to properly figure out what's going wrong, and what do we need to do about it to fix it once and for all so you can stop your endless collection, and the AI Incident Database that now has over 3500 entries. It can hang its hat and say, 'I've done my job. It's time to move on. We're not failing as we used to.'" - Evan Shellshear (3:01)

"What we did is we took a number of different studies, and we split companies into what we saw as being analytically mature (and this is a common, well-known thing; many maturity frameworks exist across data, across AI, across all different areas) and what we call analytically immature, so those companies that probably aren't there yet. And what we wanted to draw a distinction is okay, we say 80% of projects fail, or whatever the exact number is, but for who? And for what stage and for what capability? And so, what we then went and did is we were able to take our data and look at which failures are common for analytically immature organizations, and which failures are common for analytically mature organizations, and then we're able to understand, okay, in the market, how many organizations do we think are analytically mature versus analytically immature, and then we were able to take that 80% failure rate and establish it. For analytically mature companies, the failure rate is probably more like 40%. For analytically immature companies, it's over 90%, right? And so, you're exactly right: organizations can do something about it, and they can build capabilities in to mitigate this. So definitely, it can be reduced. Definitely, it can be brought down. You might say, 40% is still too high, but it proves that by bringing in these procedures, you're completely correct, that it can be reduced." - Evan Shellshear (14:28)

"What happens with the data science person, however, is typically they're seen as a cost center (typically, not always; nowadays, that dialog is changing), and what they need to do is find partners across the other parts of the business. So, they're going to go into the supply chain team, they'll go into the merchandising team, they'll go into the banking team, they'll go into the other teams, and they're going to find their supporters and winners there, and they're going to probably build out from there. So, the first step would likely be, if you're a big enough organization that you're not having that strategy at the executive level, is to find your friends (and there will be some of the organization who support this data strategy) and get some wins for them." - Evan Shellshear (24:38)

"It's not like there's this box you put one in the other in. Because, like success and failure, there's a continuum. And companies as they move along that continuum, just like you said, this year, we failed on the lack of executive buy-in, so let's fix that problem. Next year, we fail on not having the right resources, so we fix that problem. And you move along that continuum, and you build it up. And at some point as you're going on, that failure rate is dropping, and you're getting towards that end of the scale where you've got those really capable companies that live, eat, and breathe data science and analytics, and so have to have these to be able to survive, otherwise a simple company evolution would have wiped them out, and they wouldn't exist if they didn't have that capability, if that's their core thing." - Evan Shellshear (18:56)

"Nothing else could be correct, right? This subjective intuition and all this stuff, it's never going to be as good as the data. And so, what happens is, is you, often as a data scientist (and I've been subjected to this myself) come in with this arrogance, this kind of data-driven arrogance, right? And it's not a good thing. It puts up barriers, it creates issues, it separates you from the people." - Evan Shellshear (27:38)

"Knowing that you're going to have to go on that journey from day one, you can't jump from level zero to level five. That's what all these data maturity models are about, right? You can't jump from level zero data maturity to level five overnight. You really need to take those steps and build it up." - Evan Shellshear (45:21)

"What we're talking about, it's not new. It's just old wine in a new skin, and we're just presenting it for the data science age." - Evan Shellshear (48:15)

Links
Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics, without the Hype: https://www.routledge.com/Why-Data-Science-Projects-Fail-the-Harsh-Realities-of-Implementing-AI-and-Analytics-without-the-Hype/Gray-Shellshear/p/book/9781032660301
LinkedIn: https://www.linkedin.com/in/eshellshear/
Get the Book:
Get 20% off at Routledge.com w/ code dspf20
Get it at Amazon
Why do we still teach people to calculate? (People I Mostly Admire podcast)
-
Ready for more ideas about UX for AI and LLM applications in enterprise environments? In part 2 of my topic on UX considerations for LLMs, I explore how an LLM might be used for a fictitious use case at an insurance company – specifically, to help internal tools teams get rapid access to primary qualitative user research. (Yes, it's a little "meta", and I'm also trying to nudge you with this hypothetical example – no secret!) ;-) My goal with these episodes is to share questions you might want to ask yourself such that any use of an LLM is actually contributing to a positive UX outcome. Join me as I cover the implications for design, the importance of foundational data quality, the balance between creative inspiration and factual accuracy, and the never-ending discussion of how we might handle hallucinations and errors posing as "facts" – all with a UX angle. At the end, I also share a personal story where I used an LLM to help me do some shopping for my favorite product: TRIP INSURANCE! (NOT!)
Highlights/ Skip to:(1:05) I introduce a hypothetical internal LLM tool and what the goal of the tool is for the team who would use it (5:31) Improving access to primary research findings for better UX (10:19) What "quality data" means in a UX context (12:18) When LLM accuracy maybe doesn't matter as much (14:03) How AI and LLMs are opening the door for fresh visioning work (15:38) Brian's overall take on LLMs inside enterprise software as of right now (18:56) Final thoughts on UX design for LLMs, particularly in the enterprise (20:25) My inspiration for these 2 episodes – and how I had to use ChatGPT to help me complete a purchase on a website that could have integrated this capability right into their website
Quotes from Today's Episode
"If we accept that the goal of most product and user experience research is to accelerate the production of quality services, products, and experiences, the question is whether or not using an LLM for these types of questions is moving the needle in that direction at all. And secondly, are the potential downsides like hallucinations and occasional fabricated findings, is that all worth it? So, this is a design for AI problem." - Brian T. O'Neill (8:09)
"What's in our data? Can the right people change it when the LLM is wrong? The data product managers and AI leaders reading this or listening know that the not-so-secret path to the best AI is in the foundational data that the models are trained on. But what does the word *quality* mean from a product standpoint and a risk reduction one, as seen from an end-user's perspective? Somebody who's trying to get work done? This is a different type of quality measurement." - Brian T. O'Neill (10:40)
"When we think about fact retrieval use cases in particular, how easily can product teams – internal or otherwise – and end-users understand the confidence of responses? When responses are wrong, how easily, if at all, can users and product teams update the model's responses? Errors in large language models may be a significant design consideration when we design probabilistic solutions, and we no longer control what exactly our products and software are going to show to users. If bad UX can include leading people down the wrong path unknowingly, then AI is kind of like the team on the other side of the tug of war that we're playing." - Brian T. O'Neill (11:22)
"As somebody who writes a lot for my consulting business, and composes music in another, one of the hardest parts for creators can be the zero-to-one problem of getting started – the blank page – and this is a place where I think LLMs have great potential. But it also means we need to do the proper research to understand our audience, and when or where they're doing truly generative or creative work – such that we can take a generative UX to the next level that goes beyond delivering banal and obviously derivative content." - Brian T. O'Neill (13:31)
"One thing I actually like about the hype, investment, and excitement around GenAI and LLMs in the enterprise is that there is an opportunity for organizations here to do some fresh visioning work. And this is a place that designers and user experience professionals can help data teams as we bring design into the AI space." - Brian T. O'Neill (14:04)
"If there was ever a time to do some new visioning work, I think now is one of those times. However, we need highly skilled design leaders to help facilitate this in order for this to be effective. Part of that skill is knowing who to include in exercises like this, and in my perspective, one of those people, for sure, should be somebody who understands the data science side as well, not just the engineering perspective. And as I posited in the seminar that I teach, the AI and analytical data product teams probably need a fourth member. It's a quartet and not a trio. And that quartet includes a data expert, as well as that engineering lead." - Brian T. O'Neill (14:38)
Links
Perplexity.ai: https://perplexity.ai
Ideaflow: https://www.amazon.com/Ideaflow-Only-Business-Metric-Matters/dp/0593420586
My article that inspired this episode -
Let's talk about design for AI (which, more and more, I'm agreeing means GenAI to those outside the data space). The hype around GenAI and LLMs – particularly as it relates to dropping these in as features into a software application or product – seems to me, at this time, to largely be driven by FOMO rather than real value. In this "part 1" episode, I look at the importance of solid user experience design and outcome-oriented thinking when deploying LLMs into enterprise products. Challenges with immature AI UIs, the role of context, the constant game of understanding what accuracy means (and how much this matters), and the potential impact on human workers are also examined. Through a hypothetical scenario, I illustrate the complexities of using LLMs in practical applications, stressing the need for careful consideration of benchmarks and the acceptance of GenAI's risks.
I also want to note that LLMs are a very immature space in terms of UI/UX design – even if the foundation models continue to mature at a rapid pace. As such, this episode is more about the questions and mindset I would be considering when integrating LLMs into enterprise software than a suggestion of "best practices."
Highlights/ Skip to:
(1:15) Currently, many LLM feature initiatives seem to be mostly driven by FOMO (2:45) UX considerations for LLM-enhanced enterprise applications (5:14) Challenges with LLM UIs / user interfaces (7:24) Measuring improvement in UX outcomes with LLMs (10:36) Accuracy in LLMs and its relevance in enterprise software (11:28) Illustrating key considerations for implementing an LLM-based feature (19:00) Leadership and context in AI deployment (19:27) Determining UX benchmarks for using LLMs (20:14) The dynamic nature of LLM hallucinations and how we design for the unknown (21:16) Closing thoughts on Part 1 of designing for AI and LLMs
Quotes from Today's Episode
"While many product teams continue to race to deploy some sort of GenAI and especially LLMs into their products – particularly this is in the tech sector for commercial software companies – the general sense I'm getting is that this is still more about FOMO than anything else." - Brian T. O'Neill (2:07)
"No matter what the technology is, a good user experience design foundation starts with not doing any harm, and hopefully going beyond usable to be delightful. And adding LLM capabilities into a solution is really no different. So, we still need to have outcome-oriented thinking on both our product and design teams when deploying LLM capabilities into a solution. This is a cornerstone of good product work." - Brian T. O'Neill (3:03)
"So, challenges with LLM UIs and UXs, right, user interfaces and experiences: the most obvious challenge to me right now with large language model interfaces is that while we've given users tremendous flexibility in the form of a Google search-like interface, we've also, in many cases, limited the UX of these interactions to a text conversation with a machine. We're back to the CLI in some ways." - Brian T. O'Neill (5:14)
"Before and after we insert an LLM into a user's workflow, we need to know what an improvement in their life or work actually means." - Brian T. O'Neill (7:24)
"If it would take the machine a few seconds to process a result versus what might take a day for a worker, what's the role and purpose of that worker going forward? I think these are all considerations that need to be made, particularly if you're concerned about adoption, which a lot of data product leaders are." - Brian T. O'Neill (10:17)
"So, there's no right or wrong answer here. These are all range questions, and they're leadership questions, and context really matters. They are important to ask, particularly when we have this risk of reacting to incorrect information that looks plausible and believable because of how these LLMs tend to respond to us with a positive sheen much of the time." - Brian T. O'Neill (19:00)
Links
View Part 1 of my article on UI/UX design considerations for LLMs in enterprise applications: https://designingforanalytics.com/resources/ui-ux-design-for-enterprise-llms-use-cases-and-considerations-for-data-and-product-leaders-in-2024-part-1/ -
Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.
I'm so excited to welcome this expert from the field of UX and design to today's episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.
In our chat, we covered:
Ben's career studying human-computer interaction and computer science (0:30) 'Building a culture of safety': Creating and designing "safe, reliable and trustworthy" AI systems (3:55) 'Like zoning boards': Why Ben thinks we need independent oversight of privately created AI (12:56) 'There's no such thing as an autonomous device': Designing human control into AI systems (18:16) A/B testing, usability testing and controlled experiments: The power of research in designing good user experiences (21:08) Designing "comprehensible, predictable, and controllable" user interfaces for explainable AI systems and why explainable AI (XAI) matters (30:34) Ben's upcoming book on human-centered AI (35:55)
Resources and Links:
People-Centered Internet: https://peoplecentered.net/
Designing the User Interface (one of Ben's earlier books): https://www.amazon.com/Designing-User-Interface-Human-Computer-Interaction/dp/013438038X
Bridging the Gap Between Ethics and Practice: https://doi.org/10.1145/3419764
Partnership on AI: https://www.partnershiponai.org/
AI Incident Database: https://www.partnershiponai.org/aiincidentdatabase/
University of Maryland Human-Computer Interaction Lab: https://hcil.umd.edu/
ACM Conference on Intelligent User Interfaces: https://iui.acm.org/2021/hcai_tutorial.html
Human-Computer Interaction Lab, University of Maryland, Annual Symposium: https://hcil.umd.edu/tutorial-human-centered-ai/
Ben on Twitter: https://twitter.com/benbendc
Quotes from Today's Episode
The world of AI has certainly grown and blossomed – it's the hot topic everywhere you go. It's the hot topic among businesses around the world – governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they're not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that's where the action is. Of course, what we really want from AI is to make our world a better place, and that's a tall order, but we start by talking about the things that matter – the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person's sense of self-efficacy – they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that's where we want to go. - Ben (2:05)
The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it's not just programming, but it also involves the use of data that's used for training. The key distinction is that the data that drives the AI has to be the appropriate data: it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that.
This has become controversial, let's say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There's been bias in facial recognition algorithms, which were less accurate with people of color. That's led to some real problems in the real world. And that's where we have to make sure we do a much better job, and the tools of human-computer interaction are very effective in building these better systems, in testing and evaluating. - Ben (6:10)
Every company will tell you, "We do a really good job in checking out our AI systems." That's great. We want every company to do a really good job. But we also want independent oversight from somebody who's outside the company – someone who knows the field, who's looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment – there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that's where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)
There's no such thing as an autonomous device. Someone owns it; somebody's responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it's performing poorly. … Responsibility is a pretty key factor here. So, if there's something going on, if a manager is deciding to use some AI system, what they need is a control panel to let them know: what's happening? What's it doing? What's going wrong and what's going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that's hidden away and you never see it, because that's just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what's going on and make sure it gets better. Every quarter. - Ben (19:41)
Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies – and this is not through heavy research – probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people; they're at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they're doing. But even these largest companies, which probably have the biggest penetration into the most number of people out there, are getting some of this really important stuff wrong. - Brian (26:36)
Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what's usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result, and you say, "What happened?" Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I'm afraid I haven't seen too many success stories of that working. … I've been diving through this for years now, and I've been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA's XAI (Explainable AI) project, which has 11 projects within it, has not really grappled with this in a good way about designing what it's going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let's prevent the user from getting confused so they don't have to request an explanation. We walk them along, let the user walk through the steps – this is like the Amazon checkout process, a seven-step process – and you know what's happened in each step, you can go back, you can explore, you can change things in each part of it. It's also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
-
Wait, I'm talking to a head of data management at a tech company? Why!? Well, today I'm joined by Malcolm Hawker to get his perspective around data products and what he's seeing out in the wild as Head of Data Management at Profisee. Why Malcolm? Malcolm was a head of product in prior roles, and for several years, I've enjoyed Malcolm's musings on LinkedIn about the value of a product-oriented approach to ML and analytics. We had a chance to meet at CDOIQ in 2023 as well, and he went on my "need to do an episode" list!
According to Malcolm, empathy is the secret to addressing key UX questions that ensure adoption and business value. He also emphasizes the need for data experts to develop business skills so that they're seen as equals by their customers. During our chat, Malcolm stresses the benefits of a product- and customer-centric approach to data products and what data professionals can learn from approaching problem solving with a product orientation.
Highlights/ Skip to:Malcolm's definition of a data product (2:10) Understanding your customers' needs is the first step toward quantifying the benefits of your data product (6:34) How product makers can gain access to users to build more successful products (11:36) Answering the UX question to get past the adoption stage and provide business value (16:03) Data experts must develop business expertise if they want to be seen as equals by potential customers (20:07) What people really mean by "data culture" (23:02) Malcolm's data product journey and his changing perspective (32:05) Using empathy to provide a better UX in design and data (39:24) Avoiding the death of data science by becoming more product-driven (46:23) Where the majority of data professionals currently land on their view of product management for data products (48:15)
Quotes from Today's Episode
"My definition of a data product is something that is built by a data and analytics team that solves a specific customer problem that the customer would otherwise be willing to pay for. That's it." - Malcolm Hawker (3:42)
"You need to observe how your customer uses data to make better decisions, optimize a business process, or to mitigate business risk. You need to know how your customers operate at a very, very intimate level, arguably, as well as they know how their business processes operate." - Malcolm Hawker (7:36)
"So, be a problem solver. Be collaborative. Be somebody who is eager to help make your customers' lives easier. You hear 'no' when people think that you're a burden. You start to hear more 'yeses' when people think that you are actually invested in helping make their lives easier." - Malcolm Hawker (12:42)
"We [data professionals] put data on a pedestal. We develop this mindset that the data matters more – as much or maybe even more than the business processes – and that is not true. We would not exist if it were not for the business. Hard stop." - Malcolm Hawker (17:07)
"I hate to say it, I think a lot of this data stuff should kind of feel invisible in that way, too. It's like this invisible ally where you're not thinking about the dashboard; you just access the information as part of your natural workflow when you need insights on making a decision, or a status check that you're on track with whatever your goal was. You're not really going out of mode." - Brian O'Neill (24:59)
"But you know, data people are basically librarians. We want to put things into classifications that are logical and work forwards and backwards, right? And in the product world, sometimes they just don't, where you can have something be a product and be a material to a subsequent product." - Malcolm Hawker (37:57)
"So, the broader point here is just more of a mindset shift. And you know, maybe these things aren't necessarily a bad thing, but how do we become a little more product- and customer-driven so that we avoid situations where everybody thinks what we're doing is a time waster?" - Malcolm Hawker (48:00)
Links
Profisee: https://profisee.com/
LinkedIn: https://www.linkedin.com/in/malhawker/
CDO Matters: https://profisee.com/cdo-matters-live-with-malcolm-hawker/ -
Welcome to another curated, Promoted Episode of Experiencing Data!
In episode 144, Shashank Garg, Co-Founder and CEO of Infocepts, joins me to explore whether all this discussion of data products out on the web actually has substance and is worth the perceived extra effort. Do we always need to take a product approach for ML and analytics initiatives? Shashank dives into how Infocepts approaches the creation of data solutions that are designed to be actionable within specific business workflows – and as I often do, I started out by asking Shashank how he and Infocepts define the term "data product." We discuss a few real-world applications Infocepts has built, and the measurable impact of these data products – as well as some of the challenges they've faced that your team might face as well. Skill sets also came up: who does design? Who takes ownership of the product/value side? And of course, we touch a bit on GenAI.
Highlights/ Skip to
Shashank gives his definition of data products (01:24) We tackle the challenges of user adoption in data products (04:29) We discuss the crucial role of integrating actionable insights into data products for enhanced decision-making (05:47) Shashank shares insights on the evolution of data products from concept to practical integration (10:35) We explore the challenges and strategies in designing user-centric data products (12:30) I ask Shashank about typical environments and challenges when starting new data product consultations (15:57) Shashank explains how Infocepts incorporates AI into their data solutions (18:55) We discuss the importance of understanding user personas and engaging with actual users (25:06) Shashank describes the roles involved in data product development's ideation and brainstorming stages (32:20) The issue of proxy users not truly representing end-users in data product design is examined (35:47) We consider how organizations are adopting a product-oriented approach to their data strategies (39:48) Shashank and I delve into the implications of GenAI and other AI technologies on product orientation and user adoption (43:47) Closing thoughts (51:00)
Quotes from Today's Episode
"Data products, at least to us at Infocepts, refers to a way of thinking about and organizing your data in a way so that it drives consumption, and most importantly, actions." - Shashank Garg (1:44)
"The way I see it is [that] the role of a DPM (data product manager) – whether they have the title or not – is benefits creation. You need to be responsible for benefits, not for outputs. The outputs have to create benefits or it doesn't count. Game over." - Brian O'Neill (10:07)
"We talk about bridging the gap between the worlds of business and analytics... There's a huge gap between the perception of users and the tech leaders who are producing it." - Shashank Garg (17:37)
"IT leaders often limit their roles to provisioning their secure data, and then they rely on businesses to be able to generate insights and take actions. Sometimes this handoff works, and sometimes it doesn't because of quality governance." - Shashank Garg (23:02)
"Data is the kind of field where people can react very, very quickly to what's wrong." - Shashank Garg (29:44)
"It's much easier to get to a good prototype if we know what the inputs to a prototype are, which include data about the people who are going to use the solution, their usage scenarios, use cases, attitudes, beliefs… all these kinds of things." - Brian O'Neill (31:49)
"For data, you need a separate person, and then for designing, you need a separate person, and for analysis, you need a separate person – the more you can combine, I don't think you can create super-humans who can do all three, four disciplines, but at least two disciplines and can appreciate the third one, that makes it easier." - Shashank Garg (39:20)
"When we think of AI, we're all talking about multiple different delivery methods here. I think AI is starting to become GenAI to a lot of non-data people. It's like their – everything is GenAI." - Brian O'Neill (43:48)
Links
Infocepts website: https://www.infocepts.ai/
Shashank Garg on LinkedIn: https://www.linkedin.com/in/shashankgarg/
Top 5 Data & AI initiatives for business success: https://www.infocepts.ai/downloads/top-5-data-and-ai-initiatives-to-drive-business-growth-in-2024-beyond/ -
Welcome back! In today's solo episode, I share the top five struggles that enterprise SAAS leaders have in the analytics/insight/decision support space that most frequently lead them to think they have a UI/UX design problem that has to be addressed. A lot of today's episode will talk about "slow creep": unaddressed design problems that gradually build up over time and begin to negatively impact both UX and your revenue. I will also share 20 UI and UX design problems I often see (even if clients do not!) that, when left unaddressed, may create sales friction, adoption problems, churn, or unhappy end users. If you work at a software company or are directly monetizing an ML or analytical data product, this episode is for you!
Highlights/ Skip to
I discuss how specific UI/UX design problems can significantly impact business performance (02:51) I discuss five common reasons why enterprise software leaders typically reach out for help (04:39) The 20 common symptoms I've observed in client engagements that indicate the need for professional UI/UX intervention or training (13:22) The dangers of adding too many features or customization and how it can overwhelm users (16:00) The issues of integrating AI into user interfaces and UXs without proper design thinking (30:08) I encourage listeners to apply the insights shared to improve their data products (48:02)
Quotes from Today's Episode
"One of the problems with bad design is that some of it we can see and some of it we can't – unless you know what you're looking for." - Brian O'Neill (02:23)
"Design is usually not top of mind for an enterprise software product, especially one in the machine learning and analytics space. However, if you have human users, even enterprise ones, their tolerance for bad software is much lower today than in the past." - Brian O'Neill (13:04)
"Early on when you're trying to get product market fit, you can't be everything for everyone. You need to be an A+ experience for the person you're trying to satisfy." - Brian O'Neill (15:39)
"Often when I see customization, it is mostly used as a crutch for not making real product strategy and design decisions." - Brian O'Neill (16:04)
"Customization of data and dashboard products may be more of a tax than a benefit. In the marketing copy, customization sounds like a benefit... until you actually go in and try to do it. It puts the mental effort to design a good solution on the user." - Brian O'Neill (16:26)
"We need to think strategically when implementing GenAI or just AI in general into the product UX because it won't automatically help drive sales or increase business value." - Brian O'Neill (20:50)
"A lot of times our analytics and machine learning tools… are insight decision support products. They're supposed to be rooted in facts and data, but when it comes to designing these products, there's not a whole lot of data and facts that are actually informing the product design choices." - Brian O'Neill (30:37)
"If your IP is that special, but also complex, it needs the proper UI/UX design treatment so that the value can be surfaced in such a way someone is willing to pay for it, if not also find it indispensable and delightful." - Brian O'Neill (45:02)
Links
The (5) big reasons AI/ML and analytics product leaders invest in UI/UX design help: https://designingforanalytics.com/resources/the-5-big-reasons-ai-ml-and-analytics-product-leaders-invest-in-ui-ux-design-help/
Subscribe for free insights on designing useful, high-value enterprise ML and analytical data products: https://designingforanalytics.com/list
Access my free frameworks, guides, and additional reading for SAAS leaders on designing high-value ML and analytical data products: https://designingforanalytics.com/resources
Need help getting your product's design/UX on track – so you can see more sales, less churn, and higher user adoption? Schedule a free 60-minute Discovery Call with me and I'll give you my read on your situation and my recommendations to get ahead: https://designingforanalytics.com/services/ -
Welcome to a special edition of Experiencing Data. This episode is the audio capture from a live Crowdcast video webinar I gave on April 26th, 2024, where I conducted a mini UI/UX design audit of a new podcast analytics service that Chris Hill, CEO of Humblepod, is working on to help podcast hosts grow their shows. Humblepod is also the team behind the scenes of Experiencing Data, and Chris had asked me to take a look at his new "Listener Lifecycle" tool to see if we could find ways to improve the UX and visualizations in the tool, how we might productize this MVP in the future, and how improving the tool's design might help Chris help his prospective podcast clients learn how their listener data could help them grow their listenership and "true fans." On a personal note, it was fun to talk to Chris on the show given we speak every week: Humblepod has been my trusted resource for audio mixing, transcription, and show note summarizing for probably over 100 of the most recent episodes of Experiencing Data. It was also fun to do a "live recording" with an audience – and we did answer questions in the full video version. (If you missed the invite, join my Insights mailing list to get notified of future free webinars.)
To watch the full audio and video recording on Crowdcast, free, head over to: https://www.crowdcast.io/c/podcast-analytics-ui-ux-design
Highlights/ Skip to:Chris talks about using data to improve podcasts and his approach to podcast numbers (03:06) Chris introduces the Listener Lifecycle model which informed the dashboard design (08:17) Chris and I discuss the importance of labeling and terminology in analytics UIs (11:00) We discuss designing for practical use of analytics dashboards to provide actionable insights (17:05) We discuss the challenges podcast hosts face in understanding and utilizing data effectively and how design might help (21:44) I discuss how my CED UX framework for advanced analytics applications helps to facilitate actionable insights (24:37) I highlight the importance of presenting data effectively and in a way that centers on user needs (28:50) I express challenges users may have with podcast rankings and the reliability of data sources (34:24) Chris and I discuss tailoring data reports to meet the specific needs of clients (37:14)
Quotes from Today's Episode
"The irony for me as someone who has a podcast about machine learning and analytics and design is that I basically never look at my analytics." - Brian O'Neill (01:14)
"The problem that I have found in podcasting is that the number that everybody uses to gauge whether a podcast is good or not is the download number… But there's a lot of other factors in a podcast that can tell you how successful it's going to be… where you can pull levers to… grow your show, or engage more with an audience." - Chris Hill (03:20)
"I have a framework for user experience design for analytics called CED, which stands for Conclusions, Evidence, Data… The basic idea is really simple: lead your analytics service with conclusions." - Brian O'Neill (24:37)
"Where the eyes glaze over is when tools are mostly about evidence generators, and we just give everybody the evidence, but there's no actual analysis about how [this is] helping me improve my life or my business. It's just evidence. I need someone to put that together." - Brian O'Neill (25:23)
"Sometimes the data doesn't provide enough of a conclusion about what to do… This is where your opinion starts to matter." - Brian O'Neill (26:07)
"It sounds like a benefit, but drilling down for most people into analytics stuff is usually a tax unless you're an analyst." - Brian O'Neill (27:39)
"Where's the source of this data, and who decided what these numbers are? Because so much of this stuff… is not shared. As someone who's in this space, it's not even that it's confusing. It's more like, you've got to distill this down for me." - Brian O'Neill (34:57)
"Your clients are probably going to glaze over at this level of data because it's not helping them make any decision about what to change." - Brian O'Neill (37:53)
Links
Watch the original Crowdcast video recording of this episode
Brian's CED UX Framework for Advanced Analytics Solutions
Join Brian's Insights mailing list -
In this week's episode of Experiencing Data, I'm joined by Duncan Milne, a Director, Data Investment & Product Management at the Royal Bank of Canada (RBC). Today, Duncan (who is also a member of the DPLC) gives a preview of his upcoming webinar on April 24, 2024, entitled "Is that Data Product Worth Building? Estimating Economic Value… Before You Build It!" Duncan shares his experience of implementing a product mindset within RBC's Chief Data Office, and he explains some of the challenges, successes, and insights gained along the way. He emphasizes the critical role of understanding user needs and evaluating the economic impact of data products – before they are built. Duncan was gracious to let us peek inside and see a transformation that is currently in progress, and I'm excited to check out his webinar this month!
Highlights/ Skip to:
I introduce Duncan Milne from RBC (00:00) Duncan outlines the Chief Data Office's function at RBC (01:01) We discuss data products and how they are used to improve business processes (04:05) The genesis behind RBC's move towards a product-centric approach in handling data, highlighting initial challenges and strategies for fostering a product mindset (07:26) Duncan discusses developing a framework to guide the lifecycle of data products at RBC (09:29) Duncan addresses initial resistance and adaptation strategies for engaging teams in a new product-centric methodology (12:04) The scaling challenges of applying a product mindset across a large organization like RBC (22:02) Insights into the framework for evaluating and prioritizing data product ideas based on their desirability, usability, feasibility, and viability (26:30) Measuring success and value in data product management (30:45) Duncan explores process mapping challenges in banking (34:13) Duncan shares how he created specialized training for data product management at RBC (36:39) Duncan offers advice and closing thoughts on data product management (41:38)
Quotes from Today's Episode
"We think about data products as anything that solves a problem using data... it's helping someone do something they already do or want to do faster and better using data." - Duncan Milne (04:29)
"The transition to data product management involves overcoming initial resistance by demonstrating the tangible value of this approach." - Duncan Milne (08:38)
"You have to want to show up and do this kind of work [adopting a product mindset in data product management]… even if you do a product the right way, it doesn't always work, right? The thing you make may not be desirable, it may not be as usable as it needs to be. It can be technically right and still fail. It's not a guarantee, it's just a better way of working." - Brian T. O'Neill (15:03)
"[Product management]... it's like baking versus cooking. Baking is a science... cooking is much more flexible. It's about... did we produce a benefit for users? Did we produce an economic benefit? ... It's a multivariate problem... a lot of it is experimentation and figuring out what works." - Brian T. O'Neill (23:03)
"The easy thing to measure [in product management] is did you follow the process or not? That is not the point of product management at all. It's about delivering benefits to the stakeholders and to the customer." - Brian O'Neill (25:16)
"Data product is not something that is set in stone... You can leverage learnings from a more traditional product approach, but don't be afraid to improvise." - Duncan Milne (41:38)
"Data products are fundamentally different from digital products, so even the traditional approach to product management in that space doesn't necessarily work within the data products construct." - Duncan Milne (41:55)
"There is no textbook for data product management; the field is still being developed… don't be afraid to create your own answer if what exists out there doesn't necessarily work within your context." - Duncan Milne (42:17)
Links
Duncan's LinkedIn: https://www.linkedin.com/in/duncanwmilne/?originalSubdomain=ca -
This week on Experiencing Data, I chat with a new kindred spirit! Recently, I connected with Thabata Romanowski – better known as "T from Data Rocks NZ" – to discuss her experience applying UX design principles to modern analytical data products and dashboards. T walks us through her experience working as a data analyst in the mining sector, sharing the journey of how these experiences laid the foundation for her transition to data visualization. Now, she specializes in transforming complex, industry-specific data sets into intuitive, user-friendly visual representations, and addresses the challenges faced by the analytics teams she supports through her design business. T and I tackle common misconceptions about design in the analytics field, discuss how we communicate and educate non-designers on applying UX design principles to their dashboard and application design work, and address the problem with "pretty charts." We also explore some of the core ideas in T's Design Manifesto, including principles like being purposeful, context-sensitive, collaborative, and humanistic – all aimed at increasing user adoption and business value by improving UX.
Highlights/ Skip to:
I welcome T from Data Rocks NZ onto the show (00:00) T's transition from mining to leading an information design and data visualization consultancy (01:43) T discusses the critical role of clear communication in data design solutions (03:39) We address the misconceptions around the role of design in data analytics (06:54) T explains the importance of journey mapping in understanding users' needs (15:25) We discuss the challenges of accurately capturing end-user needs (19:00) T and I discuss the importance of talking directly to end-users when developing data products (25:56) T shares her 'I like, I wish, I wonder' method for eliciting genuine user feedback (33:03) T discusses her Data Design Manifesto for creating purposeful, context-aware, collaborative, and human-centered design principles in data (36:37) We wrap up the conversation and share ways to connect with T (40:49)
Quotes from Today's Episode
"It's not so much that people… don't know what design is, it's more that they understand it differently from what it can actually do..." - T from Data Rocks NZ (06:59)
"I think [the misconception about design in technology] is rooted mainly in the fact that data has been very tied to IT teams, to technology teams, and they're not always up to what design actually does." - T from Data Rocks NZ (07:42)
"If you strip design of function, it becomes art. So, it's not art… it's about being functional and being useful in helping people." - T from Data Rocks NZ (09:06)
"It's not that people don't know, really, that the word design exists, or that design applies to analytics and whatnot; it's more that they have this misunderstanding that it's about making things look a certain way, when in fact... it's about function. It's about helping people do stuff better." - T from Data Rocks NZ (09:19)
"Journey mapping means that you have to talk to people... Data is an inherently human thing. It is something that we create ourselves. So, it's biased from the start. You can't fully remove the human from the data." - T from Data Rocks NZ (15:36)
"The biggest part of your data product success… happens outside of your technology and outside of your actual analysis. It's defining who your audience is, what the context of this audience is, and to which purpose they need that product." - T from Data Rocks NZ (19:08)
"[In UX research], a tight, empowered product team needs regular exposure to end customers; there's nothing that can replace that." - Brian O'Neill (25:58)
"You have two sides [end-users and the data team] that are frustrated with the same thing. The side who asked wasn't really sure what to ask. And then the data team gets frustrated because the users don't know what they want… Nobody really understood what the problem is. There's a lot of assumptions happening there. And this is one of the hardest things to let go." - T from Data Rocks NZ (29:38)
"No piece of data product exists in isolation, so understanding what people do with it… is really important." - T from Data Rocks NZ (38:51)
Links
Design Matters Newsletter: https://buttondown.email/datarocksnz
Website: https://www.datarocks.co.nz/
LinkedIn: https://www.linkedin.com/company/datarocksnz/
BlueSky: https://bsky.app/profile/datarocksnz.bsky.social
Mastodon: https://me.dm/@datarocksnz -