Episodes
-
For developers, impersonation can be a powerful tool, but with great power comes great responsibility. In today’s episode, hosts Stephanie and Joël explore the complexities of implementing impersonation features in software development, giving you the ability to take over someone’s account and act as the user. They delve into the pros and cons of impersonation, from how it can help with debugging and customer support to its prime drawbacks regarding security and auditing issues. Discover why the need for impersonation is often a sign of poor admin tooling, alternative solutions to true impersonation, and the scenarios where impersonation might be the most pragmatic approach. You’ll also learn why they advocate for understanding the root problem and considering alternative solutions before implementing impersonation. Tune in today for a deep dive into impersonation and the best ways to use it (or not use it)!
Key Points From This Episode:
What’s new in Stephanie’s world: how Notion Calendar is helping her manage her schedule.
Joël’s quest to find a health plan: how he used a spreadsheet to compare his options.
A client request to build an impersonation feature, and why Joël has mixed feelings about it.
What an impersonation tool does: it allows you to take over someone’s account.
When it’s useful to use impersonation as a feature, like for debugging and support.
Potential risks and responsibilities associated with impersonation.
Why the need for impersonation often indicates poor admin tooling.
Technical and security implications of impersonation.
Solutions for logging the audit trail when you’re doing impersonation.
Differentiating between the logged-in user and the user you’re rendering views for (see the sketch after this list).
Building an app that isn’t as tightly coupled to the “current user.”
Suggested alternatives to true impersonation.
The value of cross-functional teams and collaborative problem-solving.
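The episode does not prescribe a particular implementation, but as a rough illustration, here is a minimal Ruby on Rails sketch of one common session-based approach, where the signed-in admin stays the "true" user for authorization and audit logging while views render for the impersonated account. The method names, session keys, and fallbacks are assumptions for illustration, not code from the show.
class ApplicationController < ActionController::Base
  helper_method :current_user, :true_user

  # The user whose data the views render; falls back to the real signed-in user.
  def current_user
    @current_user ||= impersonated_user || true_user
  end

  # The person who actually authenticated; use this for authorization checks
  # and for writing audit log entries while impersonating.
  def true_user
    @true_user ||= User.find_by(id: session[:user_id])
  end

  private

  def impersonated_user
    return unless session[:impersonated_user_id]

    User.find_by(id: session[:impersonated_user_id])
  end
end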
Links Mentioned in Today’s Episode:
Mailtrap (https://l.rw.rw/the_bike_shed)
Notion Calendar (https://www.notion.com/product/calendar)
'Implementing Impersonation' (https://jamie.ideasasylum.com/2018/09/29/implementing-impersonation)
Sustainable Web Development with Ruby on Rails (https://sustainable-rails.com/)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Joël Quenneville on X (https://x.com/joelquen)
Support The Bike Shed (https://github.com/sponsors/thoughtbot)
WorkOS (https://workos.com/) -
When is it time for a rewrite? How do you justify it? If you’re tasked with one, how do you approach it? In today’s episode of The Bike Shed, we dive into the tough question of software rewrites, sharing firsthand experiences that reveal why these projects are often more complicated and risky than they first appear. We unpack critical factors that make or break a rewrite, from balancing developer satisfaction with business value to managing stakeholder expectations when costs and timelines stretch unexpectedly. You’ll hear about real-world rewrite pitfalls like downtime and reintroducing bugs, as well as strategies for achieving similar improvements through incremental changes or refactoring instead. If you’re a developer or team lead considering a rewrite, this conversation offers a pragmatic perspective that could save your team time, effort, and potential setbacks. Tune in to learn how to make the best call for your codebase and find out when a rewrite might actually be necessary!
Key Points From This Episode:
Accessible selectors versus test IDs: best practices in Capybara and React Testing Library (see the short Capybara sketch after this list).
Balancing test coverage with pragmatism and risk tolerance with Good Enough Testing.
Software rewrites and the tough questions around deciding when they're necessary.
The importance of prioritizing business value over frustrations with the current codebase.
Drawbacks of rewrites, such as downtime, data loss, and reintroducing past bugs.
Risks of “grass is greener” thinking and using mocked data in demos.
Unrealistic expectations of full feature parity and why an MVP approach is better.
How incremental refactoring can achieve similar goals to a complete rewrite.
The appeal and hubris of a “fresh start” and why it’s much more complex than that.
Balancing innovation with practicality: ways to introduce new elements without rewriting.
An example that illustrates when a rewrite might actually be necessary.
Reasons that early prototypes and test builds are the best candidates for rewrites.
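As a tiny, hypothetical Capybara example (not from the episode) of the accessible-selectors idea, a system spec can drive the UI through labels and button text, the way a user or screen reader would, instead of reaching for data-test-id attributes. The route helper and copy below are made up.
RSpec.describe "Signing in", type: :system do
  it "finds elements by their accessible names rather than test ids" do
    visit new_session_path # hypothetical route helper

    fill_in "Email", with: "user@example.com"
    fill_in "Password", with: "secret"
    click_button "Sign in"

    expect(page).to have_content("Welcome back")
  end
end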
Links Mentioned in Today’s Episode:
Mailtrap (https://l.rw.rw/the_bike_shed)
WorkOS (http://workos.com/)
Matt Brictson: ‘Simplify your Capybara selectors’ (https://mattbrictson.com/blog/simplify-capybara-selectors)
React Testing Library Guidelines (https://testing-library.com/docs/queries/about/#priority)
Capybara Accessibility Selectors (https://github.com/citizensadvice/capybara_accessible_selectors)
Good Enough Testing (https://goodenoughtesting.com/)
‘RailsConf 2023: The Math Every Programmer Needs by Joël Quenneville’ (https://youtu.be/fMetBx77vKY)
‘Testing Your Edge Cases’ (https://thoughtbot.com/blog/testing-your-edge-cases)
'Working Iteratively' (https://thoughtbot.com/blog/working-iteratively)
'Technical Considerations to Help Scale Your Product' (https://thoughtbot.com/blog/technical-considerations-when-scaling-your-application)
Dan McKinley: ‘Choose Boring Technology' (https://mcfunley.com/choose-boring-technology)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Joël Quenneville on X (https://x.com/joelquen)
Support The Bike Shed (https://github.com/sponsors/thoughtbot) -
Does having smaller, more frequent iterations help to ease your cognitive load? During this episode, we discuss the benefits and challenges of working iteratively and whether or not it can prevent costly errors. You’ll hear about juggling individual pieces effectively, factors that incentivize and de-incentivize working iteratively, and how Joël gauges whether or not a project should be broken up into smaller tasks. It can be hard to adopt small iterations, and this conversation also touches on the idea of ‘good enough code’ and discusses how agility can reduce the cost of making changes. Tuning in, you’ll hear about some of the challenges of keeping up with changes as they evolve and why it is beneficial to do so. You will also be equipped with a thought experiment involving elephant carpaccio to build your understanding of working iteratively, explore the challenge of keeping up with evolving changes, and more. Thanks for listening.
Key Points From This Episode:
Stephanie shares a recent mishap that happened at work and what she learned from it.
Unpacking pressures and other aspects that may have contributed to the error.
Joël’s recent travels and his fresh appreciation for fall.
The cost of an incident occurring, how this increases, and the role of code review.
Benefits and pitfalls of more regular code review.
Why working with smaller chunks of work is helpful for Joël’s focus.
Juggling individual pieces effectively.
Factors that de-incentivize working iteratively such as waiting on 24-hour quality control processes.
How working iteratively can facilitate better communication.
Why Joël feels that work that spans a few days should be broken up into smaller chunks.
The idea of ‘good enough code’.
How agility can reduce the cost of making changes.
Using the elephant carpaccio exercise to bolster your understanding of working iteratively.
The challenge of keeping up with changes as they evolve and why it is beneficial to do so.
Involvement from the team and the capacity to change course.
Links Mentioned in Today’s Episode:
WorkOS (http://workos.com/)
Working Incrementally (https://bikeshed.thoughtbot.com/361)
Working Iteratively (https://thoughtbot.com/blog/working-iteratively)
Elephant Carpaccio Exercise (https://docs.google.com/document/u/1/d/1TCuuu-8Mm14oxsOnlk8DqfZAA1cvtYu9WGv67Yj_sSk/pub)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Joël Quenneville on X (https://x.com/joelquen)
Support The Bike Shed (https://github.com/sponsors/thoughtbot) -
What’s the difference between solving problems and recognizing patterns, and why does it matter for developers? In this episode, Stephanie and Joël discuss transitioning from collecting solutions to identifying patterns applicable to broader contexts in software development. They explore the role of heuristics, common misconceptions among junior and intermediate developers, and strategies for leveling up from a solution-focused mindset to thinking in patterns. They also discuss their experiences of moving through this transition during their careers and share advice for upcoming software developers to navigate it successfully. They explore how learning abstraction, engaging in code reviews, and developing a strong intuition for code quality help developers grow. Uncover the issue of over-applying patterns and gain insights into the benefits of broader, reusable approaches in code development. Join us to discover how to build your own set of coding heuristics, the pitfalls of pattern misuse, and how to become a more thoughtful developer. Tune in now!
Key Points From This Episode:
Stephanie unpacks the differences between patterns and solutions.
The role of software development experience in recognizing patterns.
Why transitioning from solving problems to recognizing patterns is crucial.
Joël and Stephanie talk about the challenges of learning abstraction.
Hear pragmatic strategies for implementing patterns effectively.
How junior developers can build their own set of heuristics for code quality.
Discover valuable tools and techniques to identify patterns in your work.
Find out about approaches to documenting, learning, and sharing patterns.
Gain insights into the process of refactoring a solution into a pattern.
Outlining the common mistakes developers make and the pitfalls to avoid.
Steps for navigating disagreements and feedback in a team environment.
Links Mentioned in Today’s Episode:
WorkOS (http://workos.com/)
RubyConf 2021 - The Intro to Abstraction I Wish I'd Received (https://www.youtube.com/watch?v=m0dC5RmxcFk)
'Ruby Science' (https://thoughtbot.com/ruby-science/introduction.html)
Refactoring.Guru (https://refactoring.guru/)
Thoughtbot code review guide (https://github.com/thoughtbot/guides/blob/main/code-review/README.md)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Joël Quenneville on X (https://x.com/joelquen)
Support The Bike Shed (https://github.com/sponsors/thoughtbot) -
Learning from other developers is an important ingredient to your success. During this episode, Joël Quenneville is joined by Stefanni Brasil, Senior Developer at thoughtbot and core maintainer of faker-ruby. To open our conversation, she shares the details of her experience at the Rails World conference in Toronto and the projects she enjoyed seeing most. Next, we explore the challenge of Mac versus Windows and how these operating systems interact with Ruby on Rails, and dive into Stefanni’s involvement in Open Source for thoughtbot and beyond; what she loves about it, and how she is working to educate others and push past the current limitations that people experience. This episode is also dedicated to the upcoming Open Source Summit that Stefanni is planning on 25 October 2024, what to expect, and how you can get involved. Thanks for listening!
Key Points From This Episode:
Introducing and catching up with Thoughtbot Senior Developer and maintainer of faker-ruby, Stefanni Brasil.
Her experience at the Rails World conference in Toronto and the projects she found most inspiring.
Why accessibility remains a key topic.
How Ruby on Rails translates on Mac and Windows.
Stefanni’s involvement in Open Source and why she enjoys it.
Her experience as core maintainer at faker-ruby.
Ideas she is exploring around Jeremy Evans’ book Polished Ruby Programming and the direction of Faker.
Involvement in Thoughtbot’s Open Source and how it drew her in initially.
The coaching series on Open Source that she participated in earlier this year.
What motivated her to create a public Google doc on Open Source maintenance.
An upcoming event: the Open Source Summit.
The time commitment expected from attendees.
How Stefanni intends to interact with guests and the talk that she will give at the event.
Why everyone is welcome to engage at any level they are comfortable with.
Links Mentioned in Today’s Episode:
Stefanni Brasil (https://www.stefannibrasil.me/)
Stefanni Brasil on X (https://x.com/stefannibrasil)
Thoughtbot Open Summit (https://thoughtbot.com/events/open-summit)
Open Source Issues doc (https://docs.google.com/document/d/1zok6snap6T6f4Z1H7mP9JomNczAvPEEqCEnIg42dkU4/edit#heading=h.rq72izdz9oh6)
Open Source at Thoughtbot (https://thoughtbot.com/open-source)
Polished Ruby Programming (https://www.packtpub.com/en-us/product/polished-ruby-programming-9781801072724)
Faker Gem (https://github.com/faker-ruby/faker)
Rails World (https://rubyonrails.org/world/)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/) -
What is a program? Your answer to this question will determine the paradigm through which you view programming. During this episode, you’ll come to understand how things change once you develop an awareness of your paradigm. To kick off this episode, Stephanie shares key insights she took from Planet Argon’s 2024 Ruby on Rails survey and dives deeper into her history with Ruby on Rails. Next, we dive into the definition of a paradigm and unpack three different paradigms you might hold as a developer: procedural, object-oriented, and functional, considering how each of these impacts the way you might approach your work as a developer, and what you can learn from the ones that are less familiar to you. Joël describes his scripting style and evaluates the concept of pure functions and their place in development, and we close by digging deeper into how your paradigm might impact the code that you write. Tune in to hear all this and more.
Key Points From This Episode:
The EPI feature that Joël has started to build out for his client.
Why Stephanie is excited about the results of Planet Argon’s 2024 Ruby on Rails community survey.
What a procedural program is: a paradigm that envisions a program as a series of instructions to a computer.
Defining an object-oriented paradigm: one that envisions a program as the behavior that emerges from objects talking to each other.
How a functional paradigm envisions a program as a series of data transformations (see the short Ruby sketch after this list).
Alan Turing and Alonzo Church’s approach to understanding this.
How a lot of the foundations of computer science came to be built before we had computers.
Using Ruby to make judgments and assessing whether or not this is a procedural habit.
Why Joël describes his scripting style as being very procedural.
Unpacking the meaning of functional programming.
Evaluating the concept of pure functions.
Considering how your paradigm may impact the Ruby code that you write.
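As a rough illustration of the procedural and functional styles contrasted in this episode (not code from the show, and with made-up numbers and names), the same small task can be written as step-by-step instructions that mutate local state or as a chain of data transformations.
scores = [72, 85, 90, 58]

# Procedural: a series of instructions that mutate local state.
passing = []
scores.each do |score|
  passing << score if score >= 70
end
total = 0
passing.each { |score| total += score }

# Functional: a pipeline of data transformations with no mutation.
total = scores.select { |score| score >= 70 }.sum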
Links Mentioned in Today’s Episode:
2024 Ruby on Rails Community Survey (https://railsdeveloper.com/survey/2024/)
Church-Turing Thesis (https://ocw.mit.edu/courses/24-242-logic-ii-spring-2004/489f7e42fb619645158d7c21a8fb83ad_chuh_trng_thesis.pdf)
Dynamic type systems are not inherently more open (https://lexi-lambda.github.io/blog/2020/01/19/no-dynamic-type-systems-are-not-inherently-more-open/)
What is Functional Programming? (http://blog.jenkster.com/2015/12/what-is-functional-programming.html)
Blocks as an abstraction vs for loops (https://thoughtbot.com/blog/special-cases)
Functional core imperative shell (https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell)
Testing objects with a functional mindset (https://thoughtbot.com/blog/functional-viewpoints-on-testing-objectoriented-code)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Support The Bike Shed (https://github.com/sponsors/thoughtbot) -
For a long time, Programming Ruby was the authoritative reference for Ruby developers. Now, a much-needed update has been published. During this conversation, we are joined by Noel Rappin, who shares how his frustration with the idea of static typing in Ruby motivated him to investigate why he felt this way and publish his findings, alongside his work updating the Pickaxe Book. We discuss how this book differs from previous material he has published, explore a recent blog post series that explored the idea of failing fast, and address the widespread opinion that developers should take a simpler approach that is more accessible. Noel also explores the responsibility of understanding how readers consume material and the importance of providing thorough context as an author, how Programming Ruby became the most significant programming reference, and the surprising journey that led Noel to realize he was able to provide an updated version of it. Next, we dive into some of the more opinionated blog posts Noel has published and the harshest feedback he has received in response to them. You’ll also hear about his research and learning during the act of writing the book. Join us today to hear all this and more.
Key Points From This Episode:
Noel Rappin’s recently published work, The Pickaxe Book, on current versions of Ruby.
The inception of the book during discussions about the collision of Sorbet and Ruby.
How his background made him comfortable with the idea that there are no static types.
A recent blog post series and how it answered a question about failing fast.
Considering whether developers pursue simpler things that are more accessible to a wider range of coders.
The problem of thoroughness and longevity in writing instructional material.
Developing awareness of how readers consume and contextualize theory and opinion.
How Programming Ruby became the most significant programming reference.
Noel’s updated version of this material in his latest book.
His blog posts on real-life applications of Ruby and the feedback he receives.
How he goes about framing blog posts as opinion or instruction.
Determining what community consensus is.
The bewilderment that often accompanies onboarding sessions.
Research and learning leading up to writing and publishing the book.
Feedback and reviews on the book.
Links Mentioned in Today’s Episode:
Noel Rappin (https://noelrappin.com/)
Noel Rappin on X (https://x.com/noelrap)
Programming Ruby (https://pragprog.com/titles/ruby5/programming-ruby-3-3-5th-edition/)
How Not to Use Static Typing in Ruby (https://noelrappin.com/blog/2024/09/how-not-to-use-static-typing-in-ruby/)
David Copeland Talk (https://www.youtube.com/watch?v=unpJ9qRjdMw)
Better Know a Ruby Thing (https://noelrappin.com/tags/better_know/)
How To Manage Duplicate Test Setup, Or Can I Interest You in Weird RSpec? (https://noelrappin.com/blog/2023/12/how-to-manage-duplicate-test-setup-or-can-i-interest-you-in-weird-rspec/)
Better Know a Ruby Thing: On The Use of Private Methods (https://noelrappin.com/blog/2024/06/better-know-access-control-part-2/)
Standardrb (https://github.com/standardrb/standard)
Rails Test Prescriptions (https://www.amazon.com/Rails-Test-Prescriptions-Pragmatic-Programmers/dp/1934356646)
Programming Ruby: A Pragmatic Programmer’s Guide (https://amazon.com/Programming-Ruby-Pragmatic-Programmers-Guide/dp/0201710897)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Support The Bike Shed (https://github.com/sponsors/thoughtbot) -
When does it make sense to step away from Rails conventions? What are the limits of convention over configuration? While Rails conventions provide a solid foundation, there are times when customization is necessary to meet specific project needs. In this episode, Joël and Stephanie dive into the tradeoffs of breaking away from Rails defaults. They explore the limits of convention over configuration and share their experiences with customizing beyond the typical Rails setup. Joël offers insights from a recent project where the client opted for all dry-rb objects, and they unpack the benefits and potential challenges of this approach. Stephanie talks about why people tend to shy away from certain Ruby features and her lessons regarding leveraging callbacks for code development. Explore different testing frameworks, the situations when following Ruby defaults is better, the benefits of the ActiveModel ecosystem, and more! Whether you are a Rails purist or looking to bend the rules, this episode will help you understand the pros and cons of stepping outside the Ruby on Rails box. Don’t miss it!
Key Points From This Episode:
Joël shares details about a large-scale refactoring initiative he has been working on.
Stephanie’s recent legacy-code production problem and lessons from her experience.
What Joël would have done differently when building his refactoring initiative.
The problems of renaming background applications during code development.
Why the open-closed principle is valuable for making class changes to a system.
Reasons that a migration strategy is vital for navigating new and legacy code.
Explore approaches for overcoming synchronization issues between systems.
Learn about the concept of connascence as a vocabulary for discussing coupling between systems.
Considerations for using asynchronous tools with a connascence approach.
Practical ways to maintain naming consistency during code development.
The importance of differentiating between web and business-logic layers (a small ActiveModel sketch follows this list).
Situations where relying on callbacks for connascence becomes problematic.
Other issues that callback problems can reveal during code development.
Joël unpacks the scenarios where he deviates from the Ruby on Rails standard.
Frameworks for testing code and final takeaways from Joël and Stephanie.
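For readers curious what staying close to the ActiveModel ecosystem can look like in practice, here is a small hypothetical sketch (not code from Joël's client project): a plain Ruby object that includes ActiveModel::Model to get validations and form-builder compatibility without reaching for ActiveRecord or a third-party library. The class and attribute names are made up.
require "active_model"

class SignupForm
  include ActiveModel::Model

  attr_accessor :email, :company_name

  validates :email, presence: true
end

form = SignupForm.new(company_name: "Acme")
form.valid?                # => false
form.errors.full_messages  # => ["Email can't be blank"]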
Links Mentioned in Today’s Episode:
'Refactoring Legacy Code with the Strangler Fig Pattern' (https://shopify.engineering/refactoring-legacy-code-strangler-fig-pattern)
Connascence of Name (CoN) (https://thoughtbot.com/blog/connascence-as-a-vocabulary-to-discuss-coupling#connascence-of-name-con)
ActiveModel docs (https://guides.rubyonrails.org/active_model_basics.html)
GitHub | activemodel (https://github.com/rails/rails/tree/main/activemodel)
'Vanilla Rails is plenty' (https://dev.37signals.com/vanilla-rails-is-plenty/)
GitHub | minitest (https://github.com/seattlerb/minitest)
GitHub | test-unit (https://github.com/test-unit/test-unit)
Episode 435: Cohesive Code with Jared Norman (https://bikeshed.thoughtbot.com/435)
Ruby on Rails (https://rubyonrails.org)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Support The Bike Shed (https://github.com/sponsors/thoughtbot) -
How can asynchronous programming transform your Ruby on Rails applications? Today, Stephanie sits down with Hello Weather co-creator Trevor Turk to unpack asynchronous programming in Ruby on Rails. Trevor Turk is a seasoned software developer known for his work on Hello Weather, a minimalist weather app that delivers essential weather data quickly and precisely. He’s also the creator of Weather Machine, an advanced weather data platform designed to serve reliable and highly accurate forecasts via API. With a background that includes work at innovative tech companies, Trevor brings years of experience in developing intuitive, user-friendly digital tools. Trevor talks about the focus of his API work, the complexity of web-based apps, and what makes Hello Weather unique. He explains the fundamentals of asynchronous programming within the Ruby on Rails framework and why it is an approach all programmers should consider. Explore the nuances of programming for different data sources, how he leverages fibers and threads for the Hello Weather platform, and why asynchronous programming is not a silver bullet for application development. Discover how to start using asynchronous methods, the various asynchronous tools available in Ruby, and why experimenting with concurrent programming is essential. Join us to gain insights into why including asynchronous tools is vital for the Ruby on Rails ecosystem, improving platforms through open-source development, how to help improve the adoption of asynchronous tools in Ruby, and more. Tune in now!
Key Points From This Episode:
Introduction to Trevor Turk and his background in Ruby on Rails.
Details about his companies Hello Weather and Weather Machine.
The innovative features that the Hello Weather platform offers.
Hear how Hello Weather transitioned from a web-based app to a native application.
Why he needed to alter his programming approach to scale the company.
How he came across the concept of asynchronous programming.
Discover how using fibers is different from using threads in Ruby.
Find out about the different use cases of asynchronous programming.
Learn about the benefits of implementing concurrent programming (a brief fiber-based sketch follows this list).
Trevor shares the challenges of working with different versions of Ruby.
His role in enhancing asynchronous methods within the Ruby framework.
Common misconceptions of working with Ruby on Rails.
Final takeaways for those interested in asynchronous programming.
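As a hedged illustration of the fiber-based concurrency discussed here (not Trevor's actual code), the async gem linked below can run two slow I/O-bound calls concurrently inside one reactor; fetch_provider is a hypothetical stand-in for an HTTP request to a weather provider.
require "async"

def fetch_provider(name)
  sleep 1 # stands in for a slow network call
  "#{name} forecast"
end

Async do |task|
  forecasts = [
    task.async { fetch_provider("provider_a") },
    task.async { fetch_provider("provider_b") }
  ].map(&:wait)

  # With async 2.x on Ruby 3, sleep inside the reactor is non-blocking,
  # so both tasks finish in roughly one second rather than two.
  puts forecasts.inspect
end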
Links Mentioned in Today’s Episode:
Trevor Turk on LinkedIn (https://www.linkedin.com/in/trevorturk/)
Trevor Turk on X (https://x.com/trevorturk)
Trevor Turk on Threads (https://www.threads.net/@trevorturk)
Hello Weather (https://helloweather.com/)
Weather Machine (https://weathermachine.io)
GitHub | async gem (https://github.com/socketry/async)
GitHub | falcon gem (https://github.com/socketry/falcon)
'Async Ruby on Rails' (https://thoughtbot.com/blog/async-ruby-on-rails)
load_async (https://api.rubyonrails.org/classes/ActiveRecord/Relation.html#method-i-load_async)
Episode 437: Contributing to Open Source in the Midst of Daily Work with Steve Polito (https://bikeshed.thoughtbot.com/437)
GitHub | Action Cable server adapter (https://github.com/rails/rails/pull/50979)
ActiveRecord connection checkout caching (https://github.com/rails/rails/pull/50793)
Ruby on Rails (https://rubyonrails.org/)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Support The Bike Shed (https://github.com/sponsors/thoughtbot) -
Writing abstractions in tests can be surprisingly similar to storytelling. The most masterful stories are those where the author has stripped away all of the extra information, and given you just enough knowledge to be immersed and aware of what is going on. But striking that balance can be tricky, both in storytelling and abstractions in tests. Too much information and you risk overwhelming the reader. Too little and they won’t understand why things are operating the way they are. Today, Stephanie and Joël get into some of the more controversial practices around testing, why people use them, and how to strike the right balance with your information. They discuss the most common motivations for introducing abstractions, from improved readability to simplifying the test’s purpose and the types of tests where they are most likely to introduce abstractions. Our hosts also reflect on how they feel about different abstractions in tests – like custom matchers and shared examples – outlining when they reach for them, and the tradeoffs and benefits that come with each. To learn more about how to find the perfect level of abstraction, be sure to tune in!
Key Points From This Episode:
What’s new in Joël’s world; mocking out screens for processes or a new bit of UI.
The new tool Stephanie’s using for reading on the web: Reader by Readwise.
Today’s topic: controversial practices around testing.
How Stephanie and Joël feel about looping through arrays and having IT blocks for each.
The most common motivations for introducing abstractions or helper methods into your tests.
Pros and cons of factories as abstractions in testing.
Types of tests where Joël and Stephanie are more likely to introduce abstractions.
Using page objects in system tests as a layer of abstraction.
Finding the balance between too little and too much information with abstraction in testing.
Why Stephanie has been enjoying fancier RSpec matchers.
Top uses of custom matchers, especially for specialized error messaging (see the sketch after this list).
Why Stephanie prefers custom matchers over shared examples.
Using helper methods as a lighter version of abstraction.
Differences and similarities between abstractions in tests versus application code.
A reminder to keep your goals in mind when using abstraction.
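As a small illustration of the custom matchers mentioned above, a matcher can package both the assertion and a specialized failure message. The matcher name, rule, and message here are hypothetical, but RSpec::Matchers.define is the standard API for this.
RSpec::Matchers.define :be_within_business_hours do
  match do |time|
    time.hour.between?(9, 16)
  end

  failure_message do |time|
    "expected #{time} to fall between 9:00 and 17:00"
  end
end

# Usage in a spec:
# expect(order.scheduled_at).to be_within_business_hours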
Links Mentioned in Today’s Episode:
Reader by Readwise (https://readwise.io/read)
Why factories (https://thoughtbot.com/blog/why-factories)
Why not factories (https://thoughtbot.com/blog/speed-up-tests-by-selectively-avoiding-factory-bot)
Capybara at a single level of abstraction (https://thoughtbot.com/blog/acceptance-tests-at-a-single-level-of-abstraction)
Writing custom RSpec matchers (https://thoughtbot.com/blog/acceptance-tests-at-a-single-level-of-abstraction)
Value objects shared examples (https://thoughtbot.com/blog/value-object-semantics-in-ruby)
'DRY is about knowledge' (https://verraes.net/2014/08/dry-is-about-knowledge/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/) -
Are you passionate about open source but struggling to find time amidst your daily work? Today on the podcast, Joël Quenneville sits down with Steve Polito to discuss practical strategies for making meaningful contributions to the open-source community, even when your schedule is packed. Steve is a developer with extensive experience in the open-source world. He’s known for his ability to integrate open-source contributions into his daily workflow, all while maintaining high productivity in his professional role. In our conversation, we explore balancing professional responsibilities with open-source contributions. Steve walks us through his process, from the importance of keeping notes to leveraging Rails issue templates. Discover strategies for contributing to open-source work during work hours, the benefits of utilizing existing processes, and why extending the success of your work to the larger developer community is essential. Join us to hear recommendations for handling pull requests with Ruby on Rails, tips for using reproduction scripts, why you should release reports early and often, and much more. Tune in and learn how to seamlessly integrate open-source contributions into your daily workflow with Steve Polito!
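As a rough sketch of the reproduction scripts mentioned above, modeled on the executable test case templates in the Rails contributing guide linked below, a single file can declare its gems, build an in-memory schema, and assert the behavior in question. The schema and passing expectation here are hypothetical placeholders, not a real Rails bug.
require "bundler/inline"

gemfile(true) do
  source "https://rubygems.org"
  gem "activerecord"
  gem "sqlite3"
end

require "active_record"
require "minitest/autorun"

# An in-memory database keeps the script self-contained and easy to share.
ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")

ActiveRecord::Schema.define do
  create_table :posts, force: true do |t|
    t.string :title
  end
end

class Post < ActiveRecord::Base; end

class BugTest < Minitest::Test
  def test_reproduces_the_issue
    post = Post.create!(title: "Hello")
    assert_equal "Hello", post.reload.title
  end
end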
Key Points From This Episode:
Joël and Steve catch up and share what they are currently working on.
Transitioning synchronous processing in a web request to the background.
An update on Steve’s “building in public” approach and its reception at thoughtbot.
How Steve chooses to document and track his development process.
Find out how he uses templates to enhance and increase productivity.
Why open-source work does not need to be done during your free time.
Ways you can contribute to open-source projects during normal work hours.
The benefits of sharing troubleshooting solutions with the open-source community.
Pull request lessons from his time working with Ruby on Rails.
Reasons why issues have a lower barrier to entry with Ruby on Rails.
His unique approach of using issues, pull requests, and Suspenders.
Identifying aspects of everyday work that are suitable for open-source projects.
Links Mentioned in Today’s Episode:
Steve Polito (https://stevepolito.design/)
Steve Polito on X (https://x.com/stevepolitodsgn)
Episode 351: Learning in Public (https://bikeshed.thoughtbot.com/351)
Rails issue templates (https://guides.rubyonrails.org/contributing_to_ruby_on_rails.html#create-an-executable-test-case)
Suspenders (https://github.com/thoughtbot/suspenders)
Mermaid (https://mermaid.js.org/)
Ruby on Rails (https://rubyonrails.org/)
WorkOS (https://workos.com/)
thoughtbot (https://thoughtbot.com)
Joel Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/) -
How can we optimize our time and environment to do our best work as developers? In today’s episode, we are joined by Stephanie Viccari, former co-host of The Bike Shed and Senior Developer at thoughtbot, to unpack the steps for creating work conditions that enhance productivity. In this conversation, we delve into her unique communication style and approach to optimizing productivity within a team. She explains why she decided to hang up her consulting hat and join the product team at Cisco Meraki, her new role there, and how her consulting skills benefit her new position. Tuning in, you’ll discover the key to empathetic communication, how to unblock yourself, tips to help you navigate different communication styles, and why you should advocate for your needs. Stephanie also shares strategies for effective communication and recommendations for managing ‘deep work’ when your time is limited. Gain valuable insights into how to uncover what makes your skillset unique, why it takes a team to manage complex software, benchmarking performance, keeping motivated during stressful times, and more. To learn how to create the conditions for your best work and unlock your full potential as a developer, don’t miss this episode with Stephanie Viccari!
Key Points From This Episode:
Catch up with Stephanie: what she’s been up to since leaving thoughtbot.
How she mastered optimizing workflows and enhancing productivity.
Similarities and differences between working as a consultant versus on a product team.
Ways Stephanie’s mindset shifted from individual thinking to team-oriented strategies.
Nuances of advocating for changes as a consultant versus within a product team.
What software developers need to achieve their best work.
The role of trust between managers and developers in effective problem-solving.
Tips and recommendations for identifying and delivering your best work.
Practical advice for doing your best work, even when you feel demotivated.
Why it's important not to steal from tomorrow's productivity.
Links Mentioned in Today’s Episode:
Stephanie Viccari (https://sviccari.github.io/)
Stephanie Viccari on LinkedIn (https://www.linkedin.com/in/sviccari/)
Stephanie Viccari on X (https://x.com/sviccari)
Stephanie Viccari on GitHub (https://github.com/sviccari)
Cisco Meraki (https://meraki.cisco.com/)
thoughtbot (https://thoughtbot.com/)
Stephanie Viccari’s The Bike Shed’s Episodes (https://bikeshed.thoughtbot.com/hosts/steph-viccari)
‘Generative AI is not going to build your engineering team for you’ (https://stackoverflow.blog/2024/06/10/generative-ai-is-not-going-to-build-your-engineering-team-for-you/)
Joel Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/) -
How easy is it for a layperson to understand your systems? Jared Norman is a software consultant, speaker, and host of the Dead Code Podcast who specializes in building e-commerce applications in Ruby on Rails. This episode follows two recent talks at RailsConf and covers a theme that emerged from both of them: coupling and cohesion. Tuning in, you’ll gain insights on how to create more cohesive components to allow for change and improve your understanding of value objects, systems, and more. You’ll also hear about navigating the complexity of domain-driven design and learn how to gauge if your code is easy to understand through a simple rule of thumb. We discuss what it might look like to improve the cohesion of individual objects, identify your systems’ seams to create simplicity, and the liminal space between inheritance and composition and the role of decorators in moving through it. Join us today to hear all this and more!
Key Points From This Episode:
Introducing Jared Norman, recent speaker at RailsConf and Ruby on Rails specialist.
Jared’s interests outside of coding: cycling.
Themes that emerged from Jared and Stephanie’s talks: coupling and cohesion.
A rule of thumb for achieving high cohesion.
How value objects tie into the idea of cohesion.
Creating more cohesive components in order to have code and systems that are easier to change.
The relationships between objects in increasing cohesion and how complex nestings of objects can hinder this.
Rearranging systems in order to find seams and create cohesion.
Simplifying code in order to facilitate it working independently to support functionality.
Improving systems by identifying opportunities for decoupling and other relationships.
Inheritance, composition, and decorators, and the liminal space between them (a small decorator sketch follows this list).
The complexity of domain-driven design.
A rule that indicates when a system is easy to understand.
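As a tiny illustration of the decorator idea raised above (the domain is hypothetical and this is not code from either talk), Ruby's standard-library SimpleDelegator lets you compose behavior around an existing object instead of inheriting from it.
require "delegate"

class Order
  def total
    100
  end
end

# A decorator: wraps an order, layers one behavior on top, and forwards
# every other message to the wrapped object.
class GiftWrappedOrder < SimpleDelegator
  def total
    __getobj__.total + 5
  end
end

order = GiftWrappedOrder.new(Order.new)
order.total # => 105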
Links Mentioned in Today’s Episode:
Jared Norman (https://jardo.dev/)
Jared Norman on X (https://x.com/jardonamron)
The Most Useful Design Pattern (https://www.youtube.com/watch?v=0bQVH2IM0Ao)
Dead Code (https://shows.acast.com/dead-code)
So Writing Tests Feel Painful. What Now? (https://www.youtube.com/watch?v=t5MnS20adG4)
Dungeons & Dragons & Rails by Joël Quenneville (https://www.youtube.com/watch?v=T7GdshXgQZE)
Building Reusable Object-Oriented Systems: Composition (https://thoughtbot.com/blog/reusable-oo-composition)
Debugging at the Boundaries (https://thoughtbot.com/blog/debugging-at-the-boundaries)
Working Effectively with Legacy Code (https://www.oreilly.com/library/view/working-effectively-with/0131177052/)
Growing Object-Oriented Software Guided by Tests (http://www.growing-object-oriented-software.com/)
What’s in a Name (https://www.youtube.com/watch?v=YOQYxgLu5ys)
Joel Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/) -
It's Calls for Proposals (CFP) season, and in the process of helping our friends and colleagues flesh out their CFPs, we came up with a few questions to help them frame their proposals for success. After learning about the importance of finding your audience and angle of approach for your CFP, we dive into today's main topic – our Git and GitHub workflows. Joel and Stephanie walk us through their current workflows before exploring the differences between main branch and future branch commits. Then, we explore commits editing and why it's okay to make mistakes, commit messages versus GitHub pull requests (PR), what you need to know if you're new to Git, and what you need to understand about PR sizes and Git merge strategies. To end, Joel shares the commit messages that satisfy him the most, and we discover how to make one's life easier when reviewing PRs.
Key Points From This Episode:
Our CFP framework of questions to help you build a winning proposal.
Why it's important to understand who your audience is and who you're speaking to.
Ascertaining your angle of approach - how will you tell your story?
The ins and outs of Stephanie's current work life.
How discipline and particularly, self-discipline relate to our Git and GitHub workflows.
Understanding Joel and Stephanie's workflows - how they're similar and how they differ.
The differences between main branch and future branch commits.
Editing commits and editing commits history, and why it's okay to make mistakes.
Commit messages versus GitHub pull requests (PR).
Some advice and strategies for those who are new to Git.
Discussing Git merge strategies, PR sizes, and online changes.
Joel details the types of commit messages that he finds most satisfying.
How to make your life easier when reviewing PRs.
Links Mentioned in Today’s Episode:
RubyConf Rubric
'Working Iteratively' (https://thoughtbot.com/blog/working-iteratively)
Good Commit Messages
Shotgun Surgery
'Episode 401: Making the Right Thing Easy’
Joel Quenneville on X (https://x.com/joelquen)
Joel Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/)
Support The Bike Shed (https://github.com/sponsors/thoughtbot) -
Have you ever wondered how improvisation can revolutionize coding? In today’s episode, Stephanie sits down with Kasper Timm Hansen to discuss his innovative “riffing” approach to code development. Kasper is a long-time Ruby developer and former member of the Rails core team. He focuses on Ruby and domain modeling, developing various Ruby gems, and providing consulting services in the developer space. He has become renowned for his approach of “riffing” to software development, particularly in the Ruby on Rails framework. In our conversation, we delve into his unique approach to coding, how it differs from traditional methods, and the benefits of improvisation to code development. Discover the “feeling” part of riffing, the steps to uncovering relationships between models, and why it is okay not to know how to do something. Explore how riffing enhances collaboration, improves communication with and between teams, identifies alternative code, why “clever code” does not make for good solutions, and much more! Tune in to learn how to take your coding skills to the next level and uncover the magic of riffing with Kasper Timm Hansen!
Key Points From This Episode:
Introduction to Kasper, his background in Ruby, and experience as a consultant.
An overview of his RailsConf 2024 presentation on domain modeling.
His motivation behind his presentation and the overall reception of the concept.
Unpack the concept of “riffing” with code as a developer.
Insights into his methodology and how it differs from traditional approaches.
Examples of “riffing" and how it benefits the development process.
How he determines the best code to implement during his process.
Kasper shares how he frames problems and builds solutions.
Ways riffing highlights gaps in skillsets early in the development process.
Hear about the various ways riffing fosters and improves collaboration.
Unpack how riffing can help developers communicate more effectively.
Balancing the demands of code review with the riffing approach.
Final takeaways for listeners and how to contact Kasper to begin riffing!
Links Mentioned in Today’s Episode:
Kasper on Github (https://github.com/kaspth), Mastodon (https://ruby.social/@kaspth), LinkedIn (https://www.linkedin.com/in/kasper-timm-hansen-33b151314/), and X (https://twitter.com/kaspth)
Riffing on Rails RailsConf talk (https://www.youtube.com/watch?v=vH-mNygyXs0) and slides (https://speakerdeck.com/kaspth/railsconf-2024-riffing-on-rails-sketch-your-way-to-better-designed-code)
Riffing on Spotify’s generated mixes (https://www.youtube.com/watch?v=i1MM2EOniPg) with Jeremy Smith
Modeling a Kanban board with riffing (https://buttondown.email/kaspth/archive/how-to-approach-modelling-a-kanban-board-in-rails/)
Some of Kasper's open source work:
* ActiveRecord Associated Object (https://github.com/kaspth/active_record-associated_object)
* ActiveJob Performs (https://github.com/kaspth/active_job-performs)
* Oaken (https://github.com/kaspth/oaken) -
The term ‘nil’ refers to the absence of value, but we often imbue it with much more meaning than just that. Today, hosts Joël and Stephanie discuss the various ways we tend to project extra semantics onto nil and the implications of this before unpacking potential alternatives and trade-offs.
Joël and Stephanie highlight some of the key ways programmers project additional meaning onto nil (and why), like when it’s used to create a guest session, and how this can lead to bugs, confusion, and poor user experiences. They discuss solutions to this problem, like introducing objects for improved readability, before taking a closer look at the implications of excessive guard clauses in code.
Our hosts also explore the three-state Boolean problem, illustrating the pitfalls of using nullable Booleans, and why you should use default values in your database. Joël then shares insights from the Elm community and how it encourages rigorous checks and structured data modeling to manage nil values effectively. They advocate for using nil only to represent truly optional data, cautioning against overloading nil with additional meanings that can compromise code clarity and reliability. Joël also shares a fun example of modeling a card deck, explaining why you might be tempted to add extra semantics onto nil, and why the joker always inevitably ends up causing chaos!
Key Points From This Episode:
The project Joël is working on and why he’s concerned about bugs and readability.
Potential solutions for a confusing constant definition in a nested module.
A client work update from Stephanie: cleaning up code and removing dead dependencies.
How she used Figjam to discover dependencies and navigate her work.
Today’s topic: how programmers project extra semantics onto nil.
What makes nil really tricky to use, like forcing you to go down a default path.
How nil sweeps the cases you don’t want to think too hard about under the rug.
Extra semantics that accompany nil (that you might not know about) like a guest session.
Examples of how these semantics mean different things in different contexts.
How these can lead to bugs, hard-to-find knowledge, confusion, and poor user experiences.
Introducing objects to replace extra nil semantics, improve readability, and other solutions (a minimal sketch follows this list).
Some of the reasons why programmers tend to project extra semantics onto nil.
How to notice that nil has additional meanings, and when to model it differently.
The implications of excessive guard clauses in code.
An overview of the three-state Boolean problem with nullable Booleans.
Connecting with the Elm community: how it can help you conduct more rigorous checks.
Some of the good reasons to have nil as a value in your database.
The benefits of using nil only to represent truly optional data.
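As a minimal sketch of introducing an object to replace extra nil semantics (the Guest class and method names are hypothetical, not code from the episode), a null object makes the guest-session case explicit instead of overloading nil:
class Guest
  def name
    "Guest"
  end

  def admin?
    false
  end
end

def current_user(signed_in_user)
  # Instead of returning nil and letting every caller guess what nil means here,
  # return an object that represents the signed-out case explicitly.
  signed_in_user || Guest.new
end

current_user(nil).name # => "Guest"
The same instinct applies to the three-state Boolean problem mentioned above: declaring a boolean column NOT NULL with a database default leaves only true and false as possible values.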
Links Mentioned in Today’s Episode:
Figjam (https://www.figma.com/figjam/)
Miro (https://miro.com/)
'Working Iteratively' blog post (https://thoughtbot.com/blog/working-iteratively)
Mermaid.js (https://mermaid.js.org/)
Draw.io (https://draw.io/)
Check your return values (web) (https://thoughtbot.com/blog/check-return-values-web)
Check your return values (API) (https://thoughtbot.com/blog/check-return-values-api)
Primitive obsession (https://wiki.c2.com/?PrimitiveObsession)
'Avoid the Three-state Boolean Problem' (https://thoughtbot.com/blog/avoid-the-threestate-boolean-problem)
Elm Community (https://elm-lang.org/community)
'The Shape of Data': Modeling a deck of cards (https://thoughtbot.com/blog/modeling-with-union-types#the-shape-of-data)
The Bike Shed (https://bikeshed.thoughtbot.com/)
Joël Quenneville on LinkedIn (https://www.linkedin.com/in/joel-quenneville-96b18b58/) -
Stephanie shares her newfound interest in naming conventions, highlighting a resource called "Classnames" that provides valuable names for programming and design. Joël, in turn, talks about using AI to generate names for D&D characters, emphasizing how AI can help provide inspiration and reasoning behind name suggestions. Then, they shift to Joël's interest in Roman history, where he discusses a blog by a Roman historian that explores distinctions between state and non-state peoples in the ancient Mediterranean.
Together, the hosts delve into the importance of asking questions as consultants and developers to understand workflows, question assumptions, and build trust for better onboarding. Stephanie categorizes questions by engagement stages and their social and technical aspects, while Joël highlights how questioning reveals implicit assumptions and speeds up learning. They stress maintaining a curious mindset, using questions during PR reviews, and working with junior developers to foster collaboration. They conclude with advice on documenting answers and using questions for continuous improvement and effective decision-making in development teams.
Class names inspiration (https://classnames.paulrobertlloyd.com/)
How to Raise a Tribal Army in Pre-Roman Europe, Part II: Government Without States (https://acoup.blog/2024/06/14/collections-how-to-raise-a-tribal-army-in-pre-roman-europe-part-ii-government-without-states/)
Diocletian, Constantine, Bedouin Sayings, and Network Defense (https://www.youtube.com/watch?v=qCUI5ryyMSE)
The Power of Being New: A Proven Recipe for High Impact (https://hazelweakly.me/blog/the-power-of-being-new--a-proven-recipe-for-high-impact/#the-power-of-being-new-a-proven-recipe-for-high-impact)
How to ask good questions (https://jvns.ca/blog/good-questions/)
Transcript:
JOËL: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, if it has not been clear about just kind of the things I'm mentioning on the podcast the past few weeks, I've been obsessed with naming things lately [chuckles] and just thinking about how to name things, and, yeah, just really excited about...or even just having fun with that more than I used to be as a dev. And I found a really cool resource called "Classnames." Well, it's like just a little website that a designer and developer shared from kind of as an offshoot from his personal website. I'll link it in the show notes. But it's basically just a list of common names that are very useful for programming or even design. It's just to help you find some inspiration when you're stuck trying to find a name for something. And they're general or abstract enough that, you know, it's almost like kind of like a design pattern but a naming pattern [laughs], I suppose.
JOËL: Ooh.
STEPHANIE: Yeah, right? And so, there's different categories. Like, here's a bunch of words that kind of describe collections. So, if you need to find the name for a containment or a group of things, here's a bunch of kind of words in the English language that might be inspiring. And then, there's also other categories like music for describing kind of the pace or arrangement of things. Fashion, words from fashion can describe, like, the size of things.
You know, we talk about T-shirt sizes when we are estimating work. And yeah, I thought it was really cool that there's both things that draw on, you know, domains that most people know in real life, and then also things that are a little more abstract. But yeah, "Classnames" by Paul Robert Lloyd — that's been a fun little resource for me lately.
JOËL: Very cool. Have you ever played around at all with using AI to help you come up with the naming?
STEPHANIE: I have not. But I know that you and other people in my world have been enjoying using AI for inspiration when they feel a little bit stuck on something and kind of asking like, "Oh, like, how could I name something that is, like, a group of things?" or, you know, a prompt like that. I suspect that that would also be very helpful.
JOËL: I've been having fun using that to help me come up with good names for D&D characters, and sometimes they're a little bit on the nose. But if I sort of describe my character, and what's their vibe, and a little bit of, like, what they do and their background, and, like, I've built this whole, like, persona, and then, I just ask the AI, "Hey, what might be some good names for this?" And the AI will give me a bunch of names along with some reasoning for why they think that would be a good match. So, it might be like, oh, you know, the person's name is, I don't know, Starfighter because it evokes their connection to the night sky or whatever because that was a thing that I put in the background. And so, it's really interesting. And sometimes they're, like, just a little too obvious. Like, you don't want, you know, Joe Fighter because he's a fighter.
STEPHANIE: And his name is Joe [laughs].
JOËL: Yeah, but some of them are pretty good.
STEPHANIE: Cool. Joël, what's new in your world?
JOËL: I guess in this episode of how often does Joël think about the Roman Empire...
STEPHANIE: Oh my gosh [laughs].
JOËL: Yes [laughs].
STEPHANIE: Spoiler: it's every day [laughs].
JOËL: Whaaat? There's a blog that I enjoy reading from a Roman historian. It's called "A Collection of Unmitigated Pedantry", acoup.blog. He's recently been doing an article series on not the Romans, but rather some of these different societies that are around them, and talking a little bit about a distinction that he calls sort of non-state peoples versus states in the ancient Mediterranean. And what exactly is that distinction? Why does it matter? And those are terms I've heard thrown around, but I've never really, like, understood them. And so, he's, like, digging into a thing that I've had a question about for a while that I've been really appreciating.
I've been reading a lot of medieval fantasy lately, so this is kind of tickling my brain in that way when I think about, like, what drives different characters to do things, and kind of what the consequences of those things are.
JOËL: Right. I think it would be really fascinating to sort of project this framework forwards and look at the European medieval period through that lens. It seems to me that, at least from a basic understanding, that the sort of feudal system seems to be very much in that sort of non-state category. So, I'd be really interested to see sort of a deeper analysis of that. And, you know, maybe he'll do an addendum to this series. Right now, he's mostly looking at the Gauls, the Celtiberians, and the Germanic tribes during the period of the Roman Republic.
STEPHANIE: Cool. Okay. Well, I also await the day when you somehow figure how this relates to software [laughter] and inevitably make some mind-blowing connection and do a talk about it [laughs].
JOËL: I mean, theming is always fun. There's a talk that I saw years ago at Strange Loop that was looking at the defense policy of the Roman Emperor Diocletian and the Roman Emperor Constantine, and the ways that they sort of defended the borders of the empire and how they're very different, and then related it to how you might handle network security.
STEPHANIE: Whaat?
JOËL: And sort of like a, hey, are we using more of a Diocletian approach here, or are we using more of a Constantine approach here? And all of a sudden, just, like, having those labels to put on there and those stories that went with it made, like, what could be a really, like, dry security talk into something that I still remember 10 years later.
STEPHANIE: Yeah. Yeah. We love stories. They're memorable.
JOËL: So, I'll make sure to link that in the show notes.
STEPHANIE: Very cool.
JOËL: We've been talking a lot recently about my personal note system, where I keep a bunch of, like, small atomic notes that are all usually based around a single thesis statement. And I was going through that recently, and I found one that was kind of a little bit juicy. So, the thesis is that consultants are professional question-askers. And I'm curious, as a consultant yourself, how do you feel about that idea?
STEPHANIE: Well, my first thought would be, how do I get paid to only ask in questions [laughs] or how to communicate in questions and not do anything else [laughs]? It's almost like I'm sure that there is some, like, fantasy character, you know, where it's like, there's some villain or just obstacle where you have this monster character who only talks in questions. And it's like a riddle that you have to solve [laughs] in order to get past.
JOËL: I think it's called a three-year-old.
STEPHANIE: Wow. Okay. Maybe a three-year-old can do my job then [laughter]. But I do think it's a juicy one, and it's very...I can't wait to hear how you got there, but I think my reaction is yes, like, I do be asking questions [laughs] when I join a project on a client team. And I was trying to separate, like, what kinds of questions I ask. And I kind of came away with a few different categories depending on, like, the stage of the engagement I'm in. But, you know, when I first join a team and when I'm first starting out consulting for a team, I feel like I just ask a lot of basic questions. Like, "Where's the Jira board [laughs]?" Like, "How do you do deployments here?" Like, "What kind of Git process do you use?" So, I don't know if those are necessarily the interesting ones.
But I think one thing that has been nice is being a consultant has kind of stripped the fear of asking those questions because, I don't know, these are just things I need to know to do my work. And, like, I'm not as worried about, like, looking dumb or anything like that [laughs].
JOËL: Yeah. I think there's often a fear that asking questions might make you look incompetent or maybe will sort of undermine your appearance of knowing what you're talking about, and I think I've found that to be sort of the opposite. Asking a lot of questions can build more trust, both because it forces people to think about things that maybe they didn't think about, bring to light sort of implicit assumptions that everyone has, and also because it helps you to ramp up much more quickly and to be productive in a way that people really appreciate.
STEPHANIE: Yeah. And I also think that putting those things in, like, a public and, like, documented space helps people in the future too, right? At least I am a power Slack searcher [laughs]. And whenever I am onboarding somewhere, one of the first places I go is just to search in Slack and see if someone has asked this question before. I think the next kind of category of question that I discerned was just, like, questions to understand how the team understands things. So, it's purely just to, like, absorb kind of like perspective or, like, a worldview this team has about their codebase, or their work, or whatever. So, I think those questions manifest as just like, "Oh, like, you know, I am curious, like, what do you think about how healthy your codebase is? Or what kinds of bugs is your team, like, dealing with?" Just trying to get a better understanding of like, what are the challenges that this team is facing in their own words, especially before I even start to form my own opinions. Well, okay, to be honest, I probably am forming my own opinions, like, on the side [laughs], but I really try hard to not let that be the driver of how I'm showing up and especially in the first month I'm starting on a new team.
JOËL: Would you say these sorts of questions are more around sort of social organization or, like, how a team approaches work, that sort of thing? Or do you classify more technical questions in this category? So, like, "Hey, tell me a little bit about your philosophy around testing." Or we talked in a recent episode "What value do you feel you get out of testing?" as a question to ask before even, like, digging into the implementation.
STEPHANIE: Yeah, I think these questions, for me, sit at, like, the intersection of both social organization and technical questions because, you know, asking something like, "What's the value of testing for your team?" That will probably give me information about how their test suite is like, right? Like, what kinds of tests they are writing and kind of the quality of them maybe. And it also tells me about, yeah, like, maybe the reasons why, like, they only have just unit tests or maybe, like, just [inaudible 12:31] test, or whatever. And I think all of that is helpful information. And then, that's actually a really...I like the distinction you made because I feel like then the last category of questions that I'll mention, for now, feels like more geared towards technical, especially the questions I ask to debunk assumptions that might be held by the team. And I feel like that's like kind of the last...the evolution of my question-asking.
Because I have, hopefully, like, really absorbed, like, why, you know, people think the way they do about some of these, you know, about their code and start to poke a little bit on being like, "Why do you think, you know, like, this problem space has to be modeled this way?" And that has served me well as a consultant because, you know, once you've been at an organization for a while, like, you start to take a lot of things for granted about just having to always be this way, you know, it's like, things just are the way they are. And part of the power of, you know, being this kind of, like, external observer is starting to kind of just like, yeah, be able to question that. And, you know, at the end of the day, like, maybe we choose not to change something, but I think it's very powerful to be able to at least, like, open up that conversation.
JOËL: Right. And sometimes you open up that conversation, and what you get is a link to a big PR discussion or a Wiki or something where that discussion has already been had. And then, that's good for you and probably good for anybody else who has that question as well.
STEPHANIE: I'm curious, for you, though, like, this thesis statement, atomic note, did you have notes around it, or was it just, like, you dropped it in there [laughs]?
JOËL: So, I have a few things, one is that when you come in as a consultant, and, you know, we're talking here about consultants because that's what we do. I think this is probably true for most people onboarding, especially for non-junior roles where you're coming in, and there's an assumption of expertise, but you need to onboard onto a project. This is just particularly relevant for us as consultants because we do this every six months instead of, you know, a senior developer who's doing this maybe every two to three years. So, the note that I have here is that when you're brought on, clients expect expertise in a technology, something like Ruby on Rails or, you know, just the web environment in general. They don't expect you as a consultant to be an expert in their domain or their practices. And so, when you really engage with these sorts of areas that are new by asking a lot of questions, that's the thing that's really valuable, especially if those questions are coming from a place of experience in other similar things. So, maybe asking some questions around testing strategies because you've seen three or four other ways that work or don't work or that have different trade-offs. Even asking about, "Hey, I see we went down a particular path, technically. Can you walk me through what were the trade-offs that we evaluated and why we decided this was the path that was valuable for us?" That's something that people really appreciate from outside experts. Because it shows that you've got experience in those trade-offs, that you've thought the deeper thoughts beyond just shipping the next ticket. And sometimes they've made the decisions without actually thinking through the trade-offs. And so, that can be an opening for a conversation of like, "Hey, well, we just went down this path because we saw a blog article that recommended this, or we just did this because it felt right. Talk us through the trade-offs." And now maybe you have a conversation on, "Hey, here are the trade-offs that you're doing. Let me know if this sounds right for your organization. If not, maybe you want to consider changing some things or tweaking your approach."
And I think that is valuable sort of at the big level where you're thinking about how the team is structured, how different parts of work are done, the technical architecture, but it also is valuable at the small level as well.
STEPHANIE: Yeah, 100%. There is a blog post I really like by Hazel Weakly, and it's called "The Power of Being New: A Proven Recipe for High Impact." And one thing that she says at the beginning that I really enjoy is that even though, like, whenever you start on a new team there's always that little bit of pressure of starting to deliver immediate value, right? But there's something really special about that period where no one expects you to do anything, like, super useful immediately [laughs]. And I feel like it is both a fleeting time and, you know, I'm excited to continue this conversation of, like, how to keep integrating that even after you're no longer new. But I like to use that time to just identify, while I have nothing really on my plate, like, things that might have just been overlooked or just people have gotten used to that sometimes is, honestly, like, can be a quick fix, right? Like, just, I don't know, deleting a piece of dead code that you're seeing is no longer used but just gets fallen off other people's plates. I really enjoy those first few weeks, and people are almost, like, always so appreciative, right? They're like, "Oh my gosh, I have been meaning to do that." Or like, "Great find." And these are things that, like I said, just get overlooked when you are, yeah, kind of busy with other things that now are your responsibility.
JOËL: You're talking about, like, that feeling of can you add value in the, like, initial time that you join. And I think that sometimes it can be easy to think that, oh, the only value you can add is by, like, shipping code. I think that being sort of noisy and asking a lot of questions in Slack is often a great way to add value, especially at first.
STEPHANIE: Yeah, agreed.
JOËL: Ideally, I think you come in, and you don't sort of slide in under the radar as, like, a new person on the team. Like, you come in, and everybody knows you're there because you are, like, spamming the channel with questions on all sorts of things and getting people to either link you to resources they have or explaining different topics, especially anything domain-related. You know, you're coming in with an outside expertise in a technology. You are a complete new person at the business and the problem domain. And so, that's an area where you need to ask a lot of questions and ramp up quickly.
STEPHANIE: Yes. I have a kind of side topic. I guess it's not a side topic. It's about asking questions, so it's relevant [laughs]. But one thing that I'm curious about is how do you approach kind of doing this in a place where question asking is not normalized and maybe other people are less comfortable with kind of people asking questions openly and in public? Like, how do you set yourself up to be able to ask questions in a way that doesn't lead to just, like, some just, like, suspicion or discomfort about, like, why you're asking those questions?
JOËL: I think that's the beauty of the consultant title. When an organization brings in outside experts, they kind of expect you to ask questions. Or maybe it's not an explicit expectation, but when they see you asking a lot of questions, it sort of, I think, validates a lot of things that they expect about what an outside expert should be.
So, asking a lot of questions of trying to understand your business, asking a lot of questions to try to understand the technical architecture, asking questions around, like, some subtle edge cases or trade-offs that were made in the technical architecture. These are all things that help clients feel like they're getting value for the money from an outside expert because that's what you want an outside expert to do is to help you question some of your assumptions, to be able to leverage their, like, general expertise in a technology by applying it to your specific situation.
I've had situations where I'll ask, like, a very nuanced, deep technical question about, like, "Hey, so there's, like, this one weird edge case that I think could potentially happen. How do we, like, think through about this?" And one of the, like, more senior people on the team who built the initial codebase responded, like, almost, like, proud that I've discovered this, like, weird edge case, and being like, "Oh yeah, that was a thing that we did think about, and here's why. And it's really cool that, like, day one you're, like, just while reading through the code and were like, 'Oh, this thing,' because it took us, like, a month of thinking about it before we stumbled across that." So, it was a weird kind of fun interaction where as a new person rolling on, one of the more experienced devs in the codebase almost felt, like, proud of me for having found that.
STEPHANIE: I like that, yeah. I feel like a lot of the time...it's like, it's so easy to ask questions to help people feel seen, to be like, "Oh yeah, like, I noticed this." And, you know, if you withhold any kind of, like, judgment about it when you ask the question, people are so willing to be like, yeah, like you said, like, "Oh, I'm glad you saw that." Or like, "Isn't that weird? Like, I was feeling, you know, I saw that, too." Or, like, it opens it up, I think, for building trust, which, again, like, I don't even think this is something that you necessarily need to be new to even do. But if at any point you feel like, you know, maybe your working relationship with someone could be better, right? To the point where you feel like you're, like, really on the same page, yeah, ask questions [laughs]. It can be that easy.
JOËL: And I think what can be really nice is, in an environment where question asking is not normalized, coming in and doing that can help sort of provide a little bit of cover to other people who are feeling less comfortable or less safe doing that. So, maybe there's a lot of junior members on the team who are feeling not super confident in themselves and are afraid that asking questions might undermine their position in the company. But me coming in as a sort of senior consultant and asking a lot of those questions can then help normalize that as a thing because then they can look and say, "Oh, well he's asking all these questions. Maybe I can ask my question, and it'll be okay."
STEPHANIE: I also wanted to talk about setting yourself up and asking questions to get a good answer, asking good questions to get useful answers. One thing that has worked really well for me in the past few months has been sharing why I'm asking the question. And I think this goes back to a little bit of what I was hinting at earlier. If the culture is not really used to people asking questions and that just being a thing that is normal, sharing a bit of intention can help, like, ease maybe some nervousness that people might feel.
Especially as consultants, we also are in a bit of a, I don't know, like, there is some power dynamics occasionally where it's like, oh, like, the consultants are here. Like, what are they going to come in and change or, like, start, you know, doing to, quote, unquote, "improve", whatever, I don't know [laughs].
JOËL: Right, right.
STEPHANIE: Yeah, that's the consultant archetype, I think. Anyway.
JOËL: Just coming in and being like, "Oh, this is bad, and this is bad, and you're doing it wrong."
STEPHANIE: [laughs]
JOËL: Ooh, I would be ashamed if I was the author of this code.
STEPHANIE: Yeah, my hot take is that that is a bad consultant [laughs]. But maybe I'll say, like, "I am looking for some examples of this pattern. Where can I find them [laughs]?" Or "I've noticed that the team is struggling with, like, this particular part of the codebase, and I am thinking about improving it. What are some of your biggest challenges, like, working with this, like, model?" something like that. And I think this also goes back to, like, proving value, right? Even if it's like, sometimes I know kind of what I want to do, and I'll try to be explicit about that. But even before I have, like, a clear action item, I might just say like, "I'm thinking about this," you know, to convey that, you know, I'm still in that information gathering stage, but the result of that will be useful to help me with whatever kind of comes out of it.
JOËL: A lot of it is about, like, genuine curiosity and an amount of empathetic listening. The existing team knows a lot about both the code and the business. And as a consultant coming on or maybe even a more senior person onboarding onto a team, the existing team has so much that they can give you to help you be better at your job.
STEPHANIE: I was also revisiting a really great blog post from Julia Evans about "How to Ask Good Questions." And this one is more geared towards asking technical questions that have, like, kind of a maybe more straightforward answer. But she included a few other strategies that I liked a lot. And, frankly, I feel like I want to be even better at finding the right time to ask questions [laughs] and finding the right person to ask those questions to.
I definitely get in the habit of just kind of like, I don't know, I'll just put it out there and [laughs], hopefully, get some answers. But there are definitely ways, I think, that you can be more strategic, right? About identifying who might be the best person to provide the answers you're looking for. And I think another thing that I often have to balance in the consulting position is when to know when to, like, stop kind of asking the really big questions because we just don't have time [laughs].
JOËL: Right. You don't want to be asking questions in a way that's sort of undermining the product, or the decisions that are being made, or the work that has to get done. Ideally, the questions that you're asking are helping move the project forward in a positive way. Nobody likes the, you know, just asking kind of person. That person's annoying.
STEPHANIE: Do you have an approach or any thoughts about like, once you get an answer, like, what do you do with that? Yeah, what happens then for you?
JOËL: I guess there's a lot of different ways it can go. A potential way if it's just, like, an answer explained in Slack, is maybe saying, "We should document this." Or maybe even like, "Is this documented anywhere? If not, can I add that documentation somewhere?"
And maybe that's, you know, a code comment that we want to add. Maybe that's an entry to the Wiki. Maybe that's updating the README. Maybe that's adding a test case. But converting that into something actionable can often be a really good follow-up.
STEPHANIE: Yeah, I think that mitigates the just asking [laughs] thing that you were saying earlier, where it's like, you know, the goal isn't to ask questions to then make more work for other people, right? It's to ask questions so, hopefully, you're able to take that information and do something valuable with it.
JOËL: Right. Sometimes it can be a sort of setup for follow-up questions. You get some information and you're like, okay, so, it looks like we do have a pattern for interacting with third-party APIs, but we're not using it consistently. Tell me a little bit about why that is. Is that a new pattern that we've introduced and we're trying to, like, get more buy-in from the team? Is this a pattern that we used to have, and we found out we didn't like it? So, we stopped using it, but we haven't found a replacement pattern that we like. And so, now we're just kind of...it's a free-for-all, and we're trying to figure it out.
Maybe there's two competing patterns, and there is this, like, weird politics within the tech team where they're sort of using one or the other, and that's something I'm going to have to be careful to navigate. So, asking some of those follow-up questions and once you have a technical answer can yield a lot of really interesting information and then help you think about how you can be impactful on the organization.
STEPHANIE: And that sounds like advice that's just true, you know, regardless of your role or how long you've been in it, don't you think?
JOËL: I would say yes. If you've been in the role a long time, though, you're the person who has that sort of institutional history in your mind. You know that in 2022, we switched over from one framework to another. You know that we used to have this, like, very opinionated architect who mandated a particular pattern, and then we moved away from it. You know that we were all in on this big feature last summer that we released and then nobody used it, and then the business pivoted, but there's still aspects of it that are left around. Those are things that someone new onboarding doesn't know and that, hopefully, they're asking questions that you can then answer.
STEPHANIE: Have you been in the position where you have all that, like, institutional knowledge? And then, like, how do you maintain that sense of curiosity or just that sense of kind of, like, what you're talking about, that superpower that you get when you're new of being able to just, you know, kind of question why things are the way they are?
JOËL: It's hard, right? We're talking about how do you keep that sort of almost like a beginner's mindset, in this case, maybe less of a, like, new coder mindset and more of a new hire mindset. It's something that I think is much more front of mind for me because I rotate onto new clients every, like, 6 to 12 months. And so, I don't have very long to get comfortable before I'm immediately thrown into, like, a new situation. But something that I like to do is to never sort of solely be in one role or the other, a sort of, like, experienced person helping others or the new person asking for help. Likely, you are not going to be the newest person on the team for long. Maybe you came on as a cohort and you've got a group of new people, all of whom are asking different questions.
And maybe somebody is asking a question that you've asked before, that you've asked in a different channel or on a call with someone. Or maybe someone joins two weeks after you; you don't have deep institutional knowledge. But if you've been asking a lot of questions, you've been building a lot of that for yourself, and you have a little bit that you can share to the next person who knows even less than you do. And that's an approach that I took even as an apprentice developer. When I was, like, brand new to Rails and I was doing an internship, and another intern joined me a couple of weeks after, and I was like, "You know what? I barely know anything. But I know what an instance variable is. And I can help you write a controller action. Let's pair on that. We'll figure it out. And, you know, ask me another question next week. I might have more answers for you." So, I guess a little bit of paying it forward.
STEPHANIE: Yeah, I really like that advice, though, of, like, switching up the role or, like, kind of what you're working on, just finding opportunities to practice that, you know, even if you have been somewhere for a long time. I think that is really interesting advice. And it's hard, too, right? Because that requires, like, doing something new, and doing something new can be hard [laughs]. But if you're, you know, aren't in a consultant role, where you're not rotating onto new projects every 6 to 12 months, that, I feel like, would be a good strategy to grow in that particular way.
JOËL: And even if you're not switching companies or in a consulting situation, it's not uncommon to have people switch from one team to another within an organization. And a new team might mean new dynamics. That team might be doing a slightly different approach to project management. Their part of the code might be structured slightly differently. They might be dealing with a part of the business domain that you're less familiar with. While that might not be entirely new to you because, you know, you know a little bit of the organization's DNA and you understand the organization's mission and their core product, there are definitely a lot of things that will be new to you, and asking those questions becomes important.
STEPHANIE: I also have another kind of, I don't know, it's not even a strategy. It's just a funny thing that I do where, like, my memory is so poor that, like, even code I wrote, you know, a month ago, I'm like, oh, what was past Stephanie thinking here [laughs]? You know, questioning myself a little bit, right? And being willing to do that and recognizing that, like, I have information now that I didn't have in the past. And, like, can that be useful somehow? You know, it's like, the code I wrote a month ago is not set in stone. And I think that's one way I almost, like, practice that skill with myself [laughs]. And yeah, it has helped me combat that, like, things are the way they are mentality, which, generally, I think is a very big blocker [laughs] when it comes to software development, but that's a topic for another day [laughs].
JOËL: I like the idea of questioning yourself, and I think that's something that is a really valuable skill for all developers. I think it can come up in things like documentation. Let's say you're leaving a comment on a method, especially one that's a bit weird, being able to answer that "Why was this weird technical decision made?"
Or maybe you do this in your PR description, or your commit message, or in any of the other places where you do this, not just sort of shipping the code as is, but trying to look at it from an outsider's eyes. And being like, what are the areas where they're going to, like, get a quizzical look and be like, "Why is this happening? Why did you make this choice?" Bonus points if you talked a little bit about the trade-offs that were decided on to say, "Hey, there were two different implementations available for this. I chose to take implementation A because I like this set of trade-offs better." That's gold. And, I guess, as a reviewer, if I'm seeing that in a PR, that's going to make my job a lot easier.
STEPHANIE: Yes. Yeah, I never thought about it that way, but yeah, I guess I do kind of apply, you know, the things that I would kind of ask to other team members to myself sometimes. And that is...it's cool to hear that you really appreciate that because I always kind of just did it for myself [laughs], but yeah, I'm sure that it, like, is helpful for other people as well.
JOËL: I guess you were asking what are ways that you can ask questions even when you are more established. And talking about these sorts of self-reflective questions in the context of review got me thinking that PRs are a great place to ask questions. They're great when you're a newcomer. One of the things I like to do when I'm new on a project is do a lot of PR reviews so I can just see the weird things that people are working on and ask a lot of questions about the patterns.
STEPHANIE: Yep. Same here.
JOËL: Do a lot of code reading. But that's a thing that you can keep doing and asking a lot of questions on PRs and not in a, like, trying to undermine what the person is doing, but, like, genuine questions, I think, is a great way to maintain that mindset.
STEPHANIE: Yeah, yeah, agreed. And I think when I've seen it done well, it's like, you get to be engaged and involved with the rest of your team, right? And you kind of have a bit of an idea about what people are working on. But you're also kind of entrusting them with ownership of that work. Like, you don't need to be totally in the weeds and know exactly how every method works. But, you know, you can be curious about like, "Oh, like, what were you thinking about this?" Or like, "What about this pattern appeals to you?" And all of that information, I think, helps you become a better, like, especially a senior developer, but also just, like, a leader on the team, I think.
JOËL: Yeah, especially the questions around like, "Oh, walk me through some of the trade-offs that you chose for this method." And, you know, for maybe a person who's more senior, that's great. They have an opportunity to, like, talk about the decisions they made and why. That's really useful information. For a more junior person, maybe they've never thought about it. They're like, "Oh, wait, there are trade-offs here?" and now that's a great learning opportunity for them.
And you don't want to come at it from a place of judgment of like, oh, well, clearly, you know, you're a terrible developer because you didn't think about the performance implications of this method. But if you come at it from a place of, like, genuine curiosity and sort of assuming the best of people on the team and being willing to work alongside them, help them discover some new concepts...maybe they've never, like, interacted so much with performance trade-offs, and now you get to have a conversation.
And they've learned a thing, and everybody wins.
STEPHANIE: Yeah. And also, I think seeing people ask questions that way helps more junior folks also learn when to ask those kinds of questions, even if they don't know the answer, right? But maybe they start kind of pattern matching. Like, oh, like, there might be some other trade-offs to consider with this kind of code, but I don't know what they are yet. But now I know to at least start asking and find someone who can help me determine that. And when I've seen that, that has been always, like, just so cool because it's upskilling happening [laughs] in practice.
JOËL: Exactly. I love that phrase that you said: "Asking questions where you don't know the answers," which I think is the opposite of what lawyers are taught to do. I think the mantra lawyers have is you never ask a witness a question that you don't know the answer to. But I like to flip that for developers. Ask a lot of questions on PRs where you don't know the answer, and you'll grow, and the author will grow. And this is true across experience levels.
STEPHANIE: That's one of my favorite parts about being a developer, and maybe that's why I will never be a lawyer [laughter].
JOËL: On that note, I have a question maybe I do know the answer to. Shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
-
Stephanie and Joël discuss the recent announcement of the call for proposals for RubyConf in November. Joël is working on his proposals and encouraging his colleagues at thoughtbot to participate, while Stephanie is excited about the conference being held in her hometown of Chicago!
The conversation shifts to Stephanie's recent work, including completing a significant client project and her upcoming two-week refactoring assignment. She shares her enthusiasm for refactoring code to improve its structure and stability, even when it's not her own. Joël and Stephanie also discuss the everyday challenges of maintaining a test suite, such as slowness, flakiness, and excessive database requests, and they share strategies for balancing the testing pyramid and adequately testing critical paths.
Finally, Joël emphasizes the importance of separating side effects from business logic to enhance testability and reduce complexity, and Stephanie highlights the need to address testing pain points and ensure tests add real value to the codebase.
RubyConf CFP (https://sessionize.com/rubyconf-2024/)
RubyConf CFP coaching (https://docs.google.com/forms/d/e/1FAIpQLScZxDFaHZg8ncQaOiq5tjX0IXvYmQrTfjzpKaM_Bnj5HHaNdw/viewform?pli=1)
Testing pyramid (https://thoughtbot.com/blog/rails-test-types-and-the-testing-pyramid)
Outside-in testing (https://thoughtbot.com/blog/testing-from-the-outsidein)
Writing fewer system specs with request specs (https://thoughtbot.com/blog/faster-tests-with-capybara-and-request-specs)
Unnecessary factories (https://thoughtbot.com/blog/speed-up-tests-by-selectively-avoiding-factory-bot)
Your Test Suite is Making Too Many Database Calls (https://www.youtube.com/watch?v=LOlG4kqfwcg)
Your flaky tests might be time dependent (https://thoughtbot.com/blog/your-flaky-tests-might-be-time-dependent)
The Secret Ingredient: How To Understand and Resolve Just About Any Flaky Test (https://www.youtube.com/watch?v=De3-v54jrQo)
Separating side effects to improve tests (https://thoughtbot.com/blog/simplify-tests-by-extracting-side-effects)
Functional core, imperative shell (https://www.destroyallsoftware.com/screencasts/catalog/functional-core-imperative-shell)
thoughtbot testing articles (https://thoughtbot.com/blog/tags/testing)
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: Something that's new in my world is that RubyConf just announced their call for proposals for RubyConf in November. They're open for...we're currently recording in June, and it's open through early July, and they're asking people everywhere to submit talk ideas. I have a few of my own that I'm working with. And then, I'm also trying to mobilize a lot of other colleagues at thoughtbot to get excited to submit.
STEPHANIE: Yes, I am personally very excited about this year's RubyConf in November because it's in Chicago, where I live, so I have very little of an excuse not to go [laughs]. I feel like so much of my conference experience is traveling to just kind of, like, other cities in the U.S. that I want to spend some time in and, you know, seeing all of my friends from...my long-distance friends. And it definitely does feel like just a bit of an immersive week, right? And so, I wonder how weird it will feel to be going to this conference and then going home at the end of the night. Yeah, that's just something that I'm a bit curious about. So, yeah, I mean, I am very excited. I hope everyone comes to Chicago. It's a great city.
JOËL: I think the pitch that I'm hearing is submit a proposal to the RubyConf CFP to get a chance to get a free ticket to go to RubyConf, where you get to meet Bike Shed co-host Stephanie Minn.
STEPHANIE: Yes. Ruby Central should hire me to market this conference [laughter] and that being the main value add of going [laughs], obviously. Jokes aside, I'm excited for you to be doing this initiative again because it was so successful for RailsConf kind of internally at thoughtbot. I think a lot of people submitted proposals for the first time with some of the programming you put on. Are you thinking about doing things any differently from last time, or any new thoughts about this conference cycle?
JOËL: I think I'm iterating on what we did last time but trying to keep more or less the same formula. Among other things, people don't always have ideas immediately of what they want to speak about. And so, I have a brainstorming session where we're just going to get together and brainstorm a bunch of topics that are free for anyone to take. And then, either someone can grab one of those topics and pitch a talk on it, or it can be, like, inspiration where they see that it jogs their mind, and they have an idea that then they go off and write a proposal.
And so, that allows, I think, a lot of colleagues as well, who are maybe not interested in speaking but might have a lot of great ideas, to participate and sort of really get a lot of that energy going. And then, from there, people who are excited to speak about something can go on to maybe draft a proposal. And then, I've got a couple of other events where we support people in drafting a proposal and reviewing and submitting, things like that.
STEPHANIE: Yes, I really love how you're just involving people with, you know, just different skills and interests to be able to support each other, even if, you know, there's someone on our team who's, like, not interested in speaking at all, but they're, like, an ideas person, right? And they would love to see their idea come to life in a talk from someone else. Like, I think that's really cool, and I certainly appreciate it as a not ideas person [laughs].
JOËL: Also, I want to shout out that Ruby Central is doing CFP coaching sessions on June 24th, June 25th, and June 26th, and those are open to anyone. You can sign up. We'll put a link to the signup form in the show notes. If you've never submitted something before and you'd like some tips on what makes for a good CFP, how can you up your chances of getting accepted, or maybe you've submitted before, you just want to get better at it; I recommend joining one of those slots. So, Stephanie, what's new in your world?
STEPHANIE: So, I just successfully delivered a big project on my client work last week. So, I'm kind of riding that wave and getting into the next bit of work that I have been assigned for this team, and I'm really excited to do this. But I also, I don't know, I've been just, like, thinking about it quite a bit. Basically, I'm getting to spend two dedicated weeks to just refactoring [laughs] some really, I guess, complicated code that has led to some bugs recently and just needing some love, especially because there's some whiffs of potentially, like, really investing in this area of the product, and people wanting to make sure that the foundation does feel very stable to build on top of for extending and changing that code.
And I think I, like, surprised myself by how excited I was to do this because it's not even code I wrote. You know, sometimes when you are the one who wrote code, you're like, oh, like, I would love time to just go back and clean up all these things that I kind of missed the first time around or just couldn't attend to for whatever reason. But yeah, I think I was just a little bit in the peripheries of that code, and I was like, oh, like, just seeing some weird stuff. And now to kind of have the time to be like, oh, this is all I'm going to be doing for two weeks, to, like, really dive into it and get my hands dirty [laughs], I'm very excited.
JOËL: I think that refactoring is a thing that can be really fun. And also, when you have a larger chunk of time, like two days, it's easy to sort of get lost in sort of grand visions or projects. How do you kind of balance the, I want to do a lot of refactoring; I want to take on some bigger things while maybe trying to keep some focus or have some prioritization?
STEPHANIE: Yeah, that's a great question. I was actually the one who said, like, "I want two weeks on this." And it also helped that, like, there was already some thoughts about, like, where they wanted to go with this area of the codebase and maybe what future features they were thinking about. And there are also a few bugs that I am fixing kind of related to this domain. So, I think that is actually what I started with.
And that was really helpful in just kind of orienting myself in, like, the higher impact areas and the places that the pain is felt and exploring there first to, like, get a sense of what is going on here. Because I think that information gathering is really important to be able to kind of start changing the code towards what it wants to be and what other devs want it to be.
I actually also started a thread in Slack for my team. I was, like, asking for input on what's the most confusing or, like, hard to reason about files or areas in this particular domain or feature set and got a lot of really good engagement. I was pleasantly surprised [laughs], you know, because sometimes you, like, ask for feedback and just crickets. But I think, for me, it was very affirming that I was, like, exploring something that a lot of people are like, oh, we would love for someone to, you know, have just time to get into this. And they all were really excited for me, too. So, that was pretty cool.
JOËL: Interesting. So, it sounds like you sort of budgeted some refactoring time and then, from there, broke it down into a series of a couple of debugging projects and then a couple of, like, more bounded refactoring projects, where, like, specifically, I want to restructure the way this object works or something like that.
STEPHANIE: Yeah. I think there was that feeling of wanting to clean up this area of the codebase, but you kind of caught on to that bit of, you know, it can go so many different ways. And, like, how do you balance your grand visions [laughs] of things with, I guess, a little bit of pragmatism? So, it was very much like, here's all these bugs that are causing our customers problems that are kind of, like, hard for the devs to troubleshoot. You know, that kind of prompts the question, like, why?
And so, if there can be, you know, the fixing of the bugs, and then the learning of, like, how that part of the system works, and then, hopefully, some improvements along the way, yeah, that just felt like a dream [laughs] for me. And two weeks felt about the right amount of time. I don't know if anyone kind of hears that and feels like it's too long or too little. I would be really curious. But I feel like it is complex enough that, like, context switching would, I think, make this work harder, and you kind of do have to just sit with it for a little bit to get your bearings.
JOËL: A scenario that we encounter on a pretty regular basis is a customer coming to us and telling us that they're feeling a lot of test pain and asking what are the ways that we can help them to make things better and that test pain can come under a lot of forms.
It might be a test suite that's really slow and that's hurting the team in terms of their velocity. It might be a test suite that is really flaky. It might be one that is really difficult to add to, or maybe one that has very low coverage, or one that is just really brittle. Anytime you try to make a change to it, a bunch of things break, and it takes forever to ship anything. So, there's a lot of different aspects of challenging test suites that clients come to us with.
I'm curious, Stephanie, what are some of the ones that you've encountered most frequently?
STEPHANIE: I definitely think that a slow test suite and a flaky test suite end up going hand in hand a lot, or just a brittle one, right? That is slowing down development and, like you said, causing a lot of pain. I think even if that's not something that a client is coming to us directly about, it maybe gets, like, surfaced a little bit, you know, sometime into the engagement as something that I like to keep an eye on as a consultant. And I actually think, yeah, that's one of kind of the coolest things, I think, about our consulting work is just getting to see so many different test suites [laughs]. I don't know. I'm a testing nerd, so I love that kind of stuff.
And then, I think you were also kind of touching on this idea of, like, maintaining a test suite and, yeah, making testing just a better experience. I have a theory [laughs], and I'd be curious to get your thoughts on it. But one thing that I really struggle with in the industry is when people talk about writing tests as if it's, like, the morally superior thing to do. And I struggle with this because I don't think that it is a very good strategy for helping people feel better or more confident and, like, upskill at writing tests.
I think it kind of shames people a little bit who maybe either just haven't gotten that experience or, you know, just like, yeah, like, for whatever reason, are still learning how to do this thing. And then, I think that mindset leads to bad tests [laughs] or tests that aren't really serving the purpose that you hope they would because people are doing it more out of obligation rather than because they truly, like, feel like it adds something to their work. Okay, I kind of just dropped that on you [laughs]. Do you have any reactions?
JOËL: Yeah, I guess the idea that you're just checking a box with your test rather than writing code that adds value to the codebase. They're two very different perspectives that, in the end, will generate more lines of code if you're just doing a checkbox but may or may not add a whole lot of value. So, maybe before even looking at actual, like, test practices, it's worth stepping back and asking more of a mindset question: Why does your team test? What is the value that your team feels they get out of testing?
STEPHANIE: Yeah. Yeah. I like that because I was about to say they go hand in hand. But I do think that maybe there is some, you know, question asking [laughs] to be done because I do think people like to kind of talk about the testing practices before they've really considered that. And I am, like, pretty certain from just kind of, at least what I've seen, and what I've heard, and what I've experienced on embedding into client teams, that if your team can't answer that question of, like, "What value does testing bring?" then they probably aren't following good testing practices [laughs]. Because I do think you kind of need to approach it from a perspective of like, okay, like, I want to do this because it helps me, and it helps my team, rather than, like you said, getting the check mark.
JOËL: So, once we've sort of established maybe a bit of a mindset or we've had a conversation with the team around what value they think they're getting out of tests, or maybe even you might need to sell the team a little bit on like, "Hey, here's, like, all these different ways that testing can bring value into your life, make your life as developers easier," but once you've done that sort of pre-work and you can start looking at what's actually the problem with a test suite, a common complaint from developers is that the test suite is too slow. How do you like to approach a slow test suite?
STEPHANIE: That's a good question. I actually...I think there's a lot of ways to answer that. But to kind of stay on the theme of stepping back a little bit, I wonder if assessing how well your test suite aligns with the testing pyramid would be a good place to start; at least, that could be where I start if I'm coming into a client team for the first time, right, and being asked to start assessing or just poking around. Because I think the slowness a lot of the time comes from a lot of quote, unquote, "integration tests" or, like, unit tests masquerading as integration tests, where you end up having, like, a lot of duplication of things that are being tested in ways that are integrating with some slow parts of the system like the database.
And yeah, I think even before getting into some of the more discrete reasons why you might be writing slow tests, just looking at the structure of your test suite and what kinds of things you're testing, and, again, even going back to your team and asking, like, "What kinds of things do you test?" Or like, "Do you try to test or wish to be testing more of, less of?" Like looking at the structure, I have found to be a good place to start.
JOËL: And for those who are not familiar, you used the term testing pyramid. This is a concept which says that you probably want to have a lot of small, fast unit tests, a medium amount of integration tests that test a few different components together, and then a few end-to-end tests. Because as you go up that pyramid, tests become more expensive. They take a lot longer to run, whereas the little unit tests are super cheap. You can create thousands of them, and they will barely impact your run time. Adding a dozen end-to-end tests is going to be noticeable. So, you want to balance sort of the coverage that you get from end to end with the sort of cheapness and ubiquity of the little unit tests, and then split the difference for tests that are in between.
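(A note for readers following along in a Rails codebase: these pyramid layers map roughly onto the standard rspec-rails spec types. The proportions below are only illustrative, not a prescription from the episode.)

    # A rough sketch of the testing pyramid in rspec-rails terms:
    #
    #   spec/system/    - a few slow end-to-end tests that drive a real browser
    #   spec/requests/  - a medium number of integration tests hitting the full Rack stack
    #   spec/models/    - many small, fast unit tests with little or no database use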
STEPHANIE: And I think that is challenging, even, you know, you're talking about how you want the peak of your pyramid to be end-to-end tests. So, you don't want a lot of them, but you do want some of them to really ensure that things are totally plumbed and working correctly. But that does require, I think, really looking at your application and kind of identifying what features are the most critical to it. And I think that doesn't get paid enough attention, at least from a lot of my client experiences. Like, sometimes teams just end up with a lot of feature bloat and can't say like, you know, they say, "Everything's important [chuckles]," but everything can't be equally important, you know?
JOËL: Right. I often like to develop using a sort of outside-in approach, where you start by writing an end-to-end test that describes the behavior that your new feature ticket is asking for and use that to drive the work that I'm doing. And that might lead to some lower-level unit tests as I'm building out different components, but the sort of high-level behavior that we're adding is driven by adding an end-to-end spec.
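(As a concrete illustration of that outside-in flow, the first test written for a new feature might look something like the Capybara system spec sketched below. The feature, form labels, and copy are invented for the example, not taken from the episode.)

    # spec/system/guest_subscribes_to_newsletter_spec.rb
    # Hypothetical happy-path system spec, written before any implementation exists.
    require "rails_helper"

    RSpec.describe "Guest subscribes to the newsletter", type: :system do
      it "shows a confirmation message after signing up" do
        visit root_path

        fill_in "Email", with: "[email protected]"
        click_button "Subscribe"

        expect(page).to have_content("Thanks for subscribing!")
      end
    end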
Do you feel that having one new end-to-end spec for every new feature ticket that you work on is a reasonable thing to do, or do you kind of pick and choose? Do you write some, but maybe start, like, coalescing or culling them, or something like that? How do you manage that idea that maybe you would or would not want one end-to-end spec for each feature ticket?
STEPHANIE: Yeah, it's a good question. Actually, as you were saying that, I was about to ask you, do you delete some afterwards [laughs]? Because I think that might be what I do sometimes, especially if I'm testing, you know, edge cases or writing, like, the end-to-end test for error states. Sometimes, not all of them make it into my, like, final, you know, commit. But they, you know, had their value, right? And at least it prompted me to make sure I thought about them and make sure that they were good error states, right? Like things that had visible UI to the user about what was going on in case of an error. So, I would say I will go back and kind of coalesce some of them, but they at least give me a place to start. Does that match your experience?
JOËL: Yeah, I tend to mostly write end-to-end tests for happy paths and then write kind of mid-level things to cover some of my edge cases, maybe a couple of end-to-end tests for particularly critical paths. But, at some point, there's just too many paths through the app that you can't have end-to-end coverage for every single branch on every single path that can happen.
STEPHANIE: Yeah, I like that because if you find yourself having a lot of different conditions that you need to test in an end-to-end situation, maybe there's room for that to, like, be better encapsulated in that, like, more, like, middle layer or, I don't know, even starting to ask questions about, like, does this make sense with the product? Like, having all of these different things going on, does that line up with kind of the vision of what this feature is trying to be or should be? Because I do think the complexity can start at that high of a level.
JOËL: How do you feel about the idea that adding more end-to-end tests, at some point, has diminishing returns?
STEPHANIE: I'm not quite sure I'm following [laughs].
JOËL: So, let's say you have an end-to-end test for the happy path for every core feature of the app. And you decide, you know what, I want to add maybe some, like, side features in, or maybe I want to have more error states. And you start, like, filling in more end-to-end tests for those. Is it fair to say that adding some of those is a bit of a diminishing return? Like, you're not getting as much value as you would from the original specs. And maybe as you keep finding more and more rare edge cases, you get less and less value for your test.
STEPHANIE: Oh, yeah, I see. And there's more of a cost, too, right? The cost of the time to run, maintain, whatever.
JOËL: Right. Let's say they're roughly all equally expensive in terms of cost to run. But as you stray further and further off of that happy path, you're getting less and less value from that integration test or that end-to-end test.
STEPHANIE: I'm actually a little conflicted about this because that sounds right in theory, but then in practice, I feel like I've seen error states not get enough love [laughs] that it's...I don't even want to say, like, you make any kind of claim [laughs] about it. But, you know, if you're going to start somewhere, if you have, like, a limited amount of time and you're like, okay, I'm only going to write a handful of end-to-end tests, yeah, like, write tests for your happy paths [laughs].
JOËL: I guess it's probably fair to say that error states just don't get as much love as they should throughout the entire testing stack: at the unit level, at the integration level, all the way up to end to end.
STEPHANIE: I'm curious if you were trying to get at some kind of conclusion, though, with the idea of diminishing returns.
JOËL: I guess I'm wondering if, from there, we can talk about maybe a breakdown of a particular testing pyramid for a particular test suite is being top heavy, and whether there's value in maybe pushing some of these tests, some of these edge cases, some of these maybe less important features down from that, like, top end-to-end layer into maybe more of an integration layer. So, in a Rails context, that might be moving system specs down to something like a request spec.
STEPHANIE: Yeah, I think that is what I tend to do. I'm trying to think of how I get there, and I'm not quite sure that I can explain it quite yet. Yeah, I don't know. Do you think you can help me out here? Like, how do you know it's time to start writing more tests for your unhappy paths lower on the pyramid?
JOËL: Ideally, I think a lot of your code should be unit-tested. And when you are unit testing it, those pieces all need coverage of the happy and unhappy paths. I think the way it may often happen naturally is if you're pushing logic out of your controllers because it's a little bit challenging sometimes to test Rails controllers.
And so, if you're moving things into domain objects, even service objects, depending on how you implement them, just doing that and then making sure you unit test them can give you a lot more coverage of all the different edge cases that can happen. Where things sometimes fall apart is getting out of that business layer into the web layer and saying, "Hey, if something raises an error or if the save fails or something like that, does the user get a good experience, or do we just crash and give them a 500 page?"
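(One hedged sketch of what "moving things into domain objects" can look like: a small plain-Ruby object whose happy and unhappy paths are cheap to unit test without a browser. The class and attribute names here are made up for the example.)

    # app/models/invoice_finalizer.rb -- hypothetical object extracted from a controller
    class InvoiceFinalizer
      def initialize(invoice)
        @invoice = invoice
      end

      # Returns true when the invoice gets finalized, false when it already was.
      def call
        return false if @invoice.finalized_at.present?

        @invoice.update!(finalized_at: Time.current)
        true
      end
    end

    # spec/models/invoice_finalizer_spec.rb -- both paths covered as fast unit tests
    require "rails_helper"

    RSpec.describe InvoiceFinalizer do
      it "finalizes an open invoice" do
        invoice = Invoice.new(finalized_at: nil)
        allow(invoice).to receive(:update!)

        expect(InvoiceFinalizer.new(invoice).call).to be(true)
      end

      it "declines to finalize an already-finalized invoice" do
        invoice = Invoice.new(finalized_at: Time.current)

        expect(InvoiceFinalizer.new(invoice).call).to be(false)
      end
    end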
STEPHANIE: Yeah, that matches with a lot of what I've seen, where if you then spend too much time in that business layer and only handling errors there, you don't really think too much about how it bubbles up. And, you know, then you are digging through, like, your error monitoring [laughs] service, trying to find out what happened so that you can tell, you know, your customer support team [laughs] to help them resolve, like, a bug report they got.
But I actually think...and you were talking about outside in, but, in some ways, in my experience, I also get feedback from the bottom up sometimes that then ends up helping me adjust some of those integration or end-to-end tests about kind of what errors are possible, like, down in the depths of the code [laughs], and then finding ways to, you know, abstract that or, like, kind of be like, "Oh, like, here are all these possible, like, exceptions that might be raised." Like, what HTTP status code do I want to be returned to capture all of these things? And what do I want to say to the user? So, yeah, I'm [laughs] kind of a little lost myself, but this idea that going both, you know, outside in and then maybe even going back up a little bit has served me well before.
JOËL: I think there can be a lot of value in sort of dropping down a level in the pyramid, and maybe instead of doing sort of end-to-end tests where you, like, trigger a scenario where something fails, you can just write a request spec against the controller and say, "Hey, if I go to this controller and something raises an error, expect that you get redirected to this other location." And that's really cheap to run compared to an end-to-end test. And so, I think that, for me, is often the right compromise is handling error states at sort of the next lowest level and also in slightly more atomic pieces. So, more like, if you hit this endpoint and things go wrong, here's how things happen.
And I use endpoint not so much in an API sense, although it could be, but just your, you know, maybe you've got a flow that's multiple steps where, you know, you can do a bunch of things. But I might have a test just for one controller action to say, "Hey, if things go wrong, it redirects you here, or it shows you this error page." Whereas the end-to-end test might say, "Oh, you're going to go through the entire flow that hits multiple different controllers, and the happy path is this nice chain." But each of the exit points off at where things fail would be covered by a more scoped request spec on that controller.
STEPHANIE: Yeah. Yeah. That makes sense. I like that.
JOËL: So, that's kind of how I've attempted to balance my pyramid in a way that balances complexity and time with coverage. You mentioned that another area that test suites get slow is making too many requests to the database. There's a lot of ways that that happens. Oftentimes, I think a classic is using a factory where you really don't need to, persisting data to the database when all you needed was some object in memory. So, there are different strategies for avoiding that.
It's also easy to be creating too much data. So, maybe you do need to persist some things in the database, but you're persisting a hundred objects into memory or into the database when you really meant to persist two, so that's an easy accident. A couple of years ago, I gave a talk at RailsConf titled "Your Test Suite is Making Too Many Database Requests" that went over a bunch of different ways that you can be doing a lot of expensive database requests you didn't plan on making and how that slows down your test suite. So, that is also another hot spot that I like to look at when dealing with a slow test suite.
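(For anyone unfamiliar with the distinction being made: assuming FactoryBot and a hypothetical `:order` factory with associations, the difference between persisting and building in memory looks roughly like this.)

```ruby
# Inside an RSpec example:

create(:order)        # INSERTs the order plus every associated record the factory defines
build(:order)         # builds the same object graph in memory; no SQL at all
build_stubbed(:order) # in memory, but acts persisted (has an id), still no SQL

# For pure calculations, no factory is needed at all.
order = Order.new(subtotal_cents: 1_000, tax_rate: 0.075)
expect(order.total_cents).to eq(1_075) # assumes a hypothetical total_cents method
```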
STEPHANIE: Yeah, I mentioned earlier the idea of unit tests really masquerading as integration tests [laughs]. And I think that happens especially if you're starting with a class that may already be a little bit too big than it should be or have more responsibilities than it should be. And then, you are, like, either just, like, starting with using the create build, like, strategy with factories, or you find yourself, like, not being able to fully run the code path you're trying to test without having stuff persisted.
Those are all, I think, like, test smells that, you know, are signaling a little bit of a testing anti-pattern that, yeah, like, is there a way to write, like, true unit tests for this stuff where you are only using objects in memory? And does that require breaking out some responsibilities? That is a lot of what I am kind of going through right now, actually, with my little refactoring project [laughs] is backfilling some tests, finding that I have to create a lot of records.
And you know what? Like, the first step will probably be to write those tests and commit them, and just have them live there for a little while while I figure out, you know, the right places to start breaking things up, and that's okay. But yeah, I did want to, like, just mention that if you are having to create a lot of records and then also noticing, like, your test is running kind of slow [laughs], that could be a good indicator to just give a good, hard look at what kind of style of test you think you're writing [laughs].
JOËL: Yeah, your tests speak to you, and when you're feeling pain, oftentimes, it can be a sign that you should consider refactoring your implementation. And I think that's doubly true if you're writing tests after the fact rather than test driving. Because sometimes you sort of...you came up with an implementation that you thought would be good, and then you're writing tests for it, and it's really painful. And that might be telling you something about the underlying implementation that you have that maybe it's...you thought it's well scoped, but maybe it actually has more responsibilities than you initially realized, or maybe it's just really tightly coupled in a way that you didn't realize. And so, learning to listen to your tests and not just sort of accepting the world for being the way it is, but being like, "No, I can make it better."
STEPHANIE: Yeah, I've been really curious why people have a hard time, like, recognizing that pain sometimes, or maybe believing that this is the way it is and that there's not a whole lot that you can do about it. But it's not true, like, testing really does not have to be painful. And I feel like, again, this is one of those things that's like, it's hard to believe until you really experience it, at least, that was the case for me.
But if you're having a hard time with tests, it's not because you're not smart enough. Like, that, I think, is a thing that I really want to debunk right now [laughs] for anyone who has ever had that thought cross their mind. Yeah, things are just complicated and complex somehow, or software entropy happens. That's, like, not how it should be, and we don't have to accept that [laughs]. So, I really like what you said about, oh, you can change it. And, you know, that is a bit of a callback to the whole mindset of testing that we mentioned earlier at the beginning.
JOËL: Speaking of test suites, something we have not covered yet is parallelizing them. That could probably be its own Bike Shed episode entirely, on parallelizing a test suite. We've done entire engagements where our job was to come in and help parallelize a test suite, make it faster. And there's a lot of, like, pros and cons. So, I think maybe we can save that for a different episode. And, instead, I'd like to quickly jump in a little bit to some other common pain points of test suites, and I would say probably top of that list is test flakiness. How do you tend to approach flakiness in a client project?
STEPHANIE: I am, like, laughing to myself a little bit because I know that I was dealing with test flakiness on my last client engagement, and that was, like, such a huge part of my day-to-day is, like, hitting that retry button. And now that I am on a project with, like, relatively low flakiness, I just haven't thought about it at all [laughs], which is such a privilege, I think [laughs].
But one of the first things to do is just start, like, capturing metrics around it. If you, you know, are hearing about flakiness or seeing that, like, start to plague your test suite or just, you know, cropping up in different ways, I have found it really useful to start, like, I don't know, just, like, maybe putting some of that information in a dashboard and seeing how, just to, like, even make sure that you are making improvements, that things are changing, and seeing if there's any, like, patterns around what's causing the flakiness because there are so many different causes of it.
And I think it is pretty important to figure out, like, what kind of code you're writing or just trying to wrangle. That's, you know, maybe more likely to crop up as flakiness for your particular domain or application. Yeah, I'm going to stop there and see, like, because I know you have a lot of thoughts about flakiness [laughs].
JOËL: I mean, you mentioned that there's a lot of different causes for flakiness. And I think, in my experience, they often sort of group into, let's say, like, three different buckets. Anytime you're testing code that's doing things that are non-deterministic, that's easy for tests to be flaky. And so, you might think, oh, well, you know, you have something that makes a call to random, and then you're going to assert on a particular outcome of that. Well, clearly, that's going to not be the same every time, and that might be flaky.
But there are, like, more subtle versions of that, so maybe you're relying on the system clock in some way. And, you know, depending on the time you run that test, it might give you a different value than you expect, and that might cause it to fail. And it doesn't have to be you're asserting on, like, oh, specifically a given millisecond. You might be doing math around, like, number of days, and when you get near to, let's say, the daylight savings boundary, all of a sudden, no, you're off by an hour, and your number of days...calculation breaks because relying on the clock is something that is inherently non-deterministic. Non-determinism is a bucket.
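(A small illustration of the clock-based flakiness Joël describes, using a made-up Trial model. The fix shown relies on `travel_to` from ActiveSupport::Testing::TimeHelpers.)

```ruby
RSpec.describe Trial do
  # Flaky version: "now" is different on every run, so this can fail near
  # midnight or across a daylight-saving boundary.
  it "treats the trial as 14 days long" do
    trial = Trial.new(starts_at: Time.current, ends_at: Time.current + 14.days)

    expect(trial.length_in_days).to eq(14)
  end

  # Deterministic version: pin the clock. travel_to comes from
  # ActiveSupport::Testing::TimeHelpers (include it in your RSpec config).
  it "treats the trial as 14 days long when the clock is pinned" do
    travel_to Time.zone.local(2024, 1, 15, 12, 0, 0) do
      trial = Trial.new(starts_at: Time.current, ends_at: Time.current + 14.days)

      expect(trial.length_in_days).to eq(14)
    end
  end
end
```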
Leaky tests is another bucket of failures that I see, things where one test might impact another that gets run after the fact, oftentimes by mutating some sort of global state. So, maybe you're both relying on some sort of, like, external file that you're both writing to or maybe a cache that one is writing to that the other one is reading from, something like that. It could even just be writing records into the database in a way that's not wrapped in a transaction, such that there's more data in the database when the next test runs than it expects.
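(A hypothetical example of that leaky-test bucket: the first example writes to a shared cache, and the second only fails when it happens to run after the first. The converter and fetcher classes are invented for illustration, and the sketch assumes a real cache store is enabled in the test environment.)

```ruby
RSpec.describe CurrencyConverter do
  it "reuses a cached exchange rate" do
    Rails.cache.write("usd_to_cad", 1.35)

    expect(CurrencyConverter.convert(100, to: "CAD")).to eq(135)
  end

  it "fetches the rate when nothing is cached" do
    allow(RateFetcher).to receive(:fetch).and_return(1.40)

    # Flaky: the value written by the previous example may still be cached.
    expect(CurrencyConverter.convert(100, to: "CAD")).to eq(140)
  end
end

# One fix: reset the shared resource around every example.
RSpec.configure do |config|
  config.before(:each) { Rails.cache.clear }
end
```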
And then, finally, if you are doing any form of parallelization, that can improve your test suite speed, but it also potentially leads to race conditions, where if your resources aren't entirely isolated between parallel test runners, maybe you're sharing a database, maybe you're sharing a Redis instance or whatever, then you can run into situations where you're both kind of fighting over the same resources or overwriting each other's data, or things like that, in a way that can cause tests to fail intermittently. And I think having a framework like that of categorization can then help you think about potential solutions because debugging approaches and then solutions tend to be a little bit different for each of these buckets.
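(One common shape of the fix for that last bucket, assuming the parallel_tests gem, which gives each worker a TEST_ENV_NUMBER: point every shared resource at a per-worker copy so runners stop fighting over the same data. The file paths and constant names here are just one possible arrangement.)

```ruby
# config/database.yml -- one test database per worker
# test:
#   database: myapp_test<%= ENV["TEST_ENV_NUMBER"] %>

# spec/support/redis.rb -- one Redis database per worker, flushed per example
worker = ENV.fetch("TEST_ENV_NUMBER", "").to_i
REDIS = Redis.new(db: worker)

RSpec.configure do |config|
  config.before(:each) { REDIS.flushdb }
end
```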
STEPHANIE: Yeah, the buckets of different causes of flaky tests you were talking about, I think, also reminded me that, you know, some flakiness is caused by, like, your testing environment and your infrastructure. And other kinds of flakiness are maybe caused more from just the way that you've decided how your code should work, especially that, like, non-deterministic bucket. So, yeah, I don't know, that was just, like, something that I noticed as you were going through the different categories. And yeah, like, certainly, the solutions for approaching each kind are very different.
JOËL: I would like to pitch a talk from RubyConf last year called "The Secret Ingredient: How To Resolve And Understand Just About Any Flaky Test" by Alan Ridlehoover. Just really excellent walkthrough of these different buckets and common debugging and solving approaches to each of them. And I think having that framework in mind is just a great way to approach different types of flaky tests.
STEPHANIE: Yes, I'll plus one that talk, lots of great pictures of delicious croissants as well.
JOËL: Very flaky pastry.
STEPHANIE: [laughs] Joël, do you have any last testing anti-pattern guidance for our audience who might be feeling some test pain out there?
JOËL: A quick list, I'm going to say tight coupling that has then led to having a lot of stubbing in your tests often leads to tests that are very brittle, so tests that maybe don't fail when they should when you've actually broken things, or maybe, alternatively, tests that are constantly failing for the wrong reasons. And so, that is a thing that you can fix by making your code less coupled.
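(A before/after sketch of that coupling-plus-stubbing smell, with made-up names. The general move shown is making dependencies explicit so tests only stub direct, visible collaborators instead of reaching through internals.)

```ruby
# Before (hypothetical): the test reaches through a chain of internals,
# so any refactor of that chain breaks it for the wrong reason.
#
#   allow(Billing::Gateway).to receive_message_chain(:client, :accounts, :find)
#     .and_return(double(owner_email: "owner@example.com"))

# After: the object asks for the collaborators it actually needs.
class OverdueNotifier
  def initialize(account_lookup:, mailer: InvoiceMailer)
    @account_lookup = account_lookup
    @mailer = mailer
  end

  def call(invoice)
    email = @account_lookup.owner_email(invoice.account_id)
    @mailer.overdue(email, invoice).deliver_later
  end
end

RSpec.describe OverdueNotifier do
  it "emails the account owner about the overdue invoice" do
    lookup = instance_double("AccountLookup", owner_email: "owner@example.com")
    delivery = instance_double(ActionMailer::MessageDelivery, deliver_later: true)
    mailer = class_double("InvoiceMailer", overdue: delivery)
    invoice = build(:invoice, account_id: 42)

    OverdueNotifier.new(account_lookup: lookup, mailer: mailer).call(invoice)

    expect(mailer).to have_received(:overdue).with("owner@example.com", invoice)
  end
end
```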
Tests that also require stubbing a lot of things because you do a lot of side effects. If you are making a lot of HTTP calls or things like that, that can both make a test more complex because it has to be aware of that. But also, it can make it more non-deterministic, more flaky, and it can just make it harder to change. And so, I have found that separating side effects from sort of business logic is often a great way to make your test suite much easier to work with.
I have a blog post on that that I'll link in the show notes. And I think this maybe also approaches the idea of a functional core and an imperative shell, which I believe was an idea pitched by Gary Bernhardt, like, over ten years ago. There's a famous video on that that we'll also link in the show notes. But that architecture for building an app can lead to a much nicer test to write. I guess the general idea being that testing code that does side effects is complicated and painful. Testing code that is more functional tends to be much more pleasant. And so, by not intermingling the two, you tend to get nicer tests that are easier to maintain.
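(A rough sketch of that functional core / imperative shell split, with invented names: the core is pure data-in, data-out and needs no stubbing to test; the shell is a thin layer that gathers inputs and performs the side effects.)

```ruby
# Functional core: no HTTP, no database, just a calculation.
class PriceQuote
  def initialize(items:, tax_rate:)
    @items = items
    @tax_rate = tax_rate
  end

  def total_cents
    subtotal = @items.sum { |item| item.fetch(:unit_price_cents) * item.fetch(:quantity) }
    (subtotal * (1 + @tax_rate)).round
  end
end

# Imperative shell: thin and boring, it calls the core and does the side effect.
class QuoteCheckout
  def initialize(payment_gateway:)
    @payment_gateway = payment_gateway
  end

  def call(cart)
    quote = PriceQuote.new(items: cart.items, tax_rate: cart.tax_rate)
    @payment_gateway.charge(amount_cents: quote.total_cents)
  end
end

# Most tests target the core and need no stubbing at all.
RSpec.describe PriceQuote do
  it "totals the items and applies tax" do
    quote = PriceQuote.new(
      items: [{ unit_price_cents: 1_000, quantity: 2 }],
      tax_rate: 0.05
    )

    expect(quote.total_cents).to eq(2_100)
  end
end
```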
STEPHANIE: That's really interesting. I've not heard that guidance before, but now I am intrigued. That reminded me of another thing that I had a conversation with someone about. Because after the RailsConf talk I gave, which was about testing pain, there was some stubbing involved in the examples that I was showing because I just see a lot of that stuff. And, you know, this audience member kind of had that question of, like, "How do you know that things are working correctly if you have to stub all this stuff out?"
And, you know, sometimes you just have to for the time being [chuckles]. And I wanted to just kind of call back to that idea of having those end-to-end tests testing your critical paths to at least make sure that those things work together in the happy way. Because I have seen, especially with apps that have a lot of service objects, for some reason, those being kind of the highest-level test sometimes. But oftentimes, they end up not being composed well, being quite coupled with other service objects. So, you end up with a lot of stubbing of those in your test for them. And I think that's kind of where you can see things start to break down.
JOËL: Yep. And when the RailsConf videos come out, I recommend seeing Stephanie's talk, some great gems in there for building a more maintainable test suite. Stephanie and I and, you know, most of us here at thoughtbot, we're testing nerds. We think about this a lot. We've also written a lot about this. There are a lot of resources in the show notes for this episode. Check them out. Also, just generally, check out the testing tag on the thoughtbot blog. There is a ton of content there that is worth looking into if you want to dig further into this topic.
STEPHANIE: Yeah, and if you are wanting some, like, dedicated, customized testing training, thoughtbot offers an RSpec workshop that's tailored to your team. And if you kind of are interested in the things we're sharing, we can definitely bring that to your company as well.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions. -
Stephanie has a newfound interest in urban foraging for serviceberries in Chicago. Joël discusses how he uses AI tools like ChatGPT to generate creative Dungeons & Dragons character concepts and backstories, which sparks a broader conversation with Stephanie about AI's role in enhancing the creative process.
Together, the hosts delve into professional growth and experience, specifically how to leverage everyday work to foster growth as a software developer. They discuss the importance of self-reflection, note-taking, and synthesizing information to enhance learning and professional development. Stephanie shares her strategies for capturing weekly learnings, while Joël talks about his experiences using tools like Obsidian's mind maps to process and synthesize new information. This leads to a broader conversation on the value of active learning and how structured reflection can turn routine work experiences into meaningful professional growth.
Obsidian (https://obsidian.md/)
Zettelkasten (https://en.wikipedia.org/wiki/Zettelkasten)
Mindmaps in Mermaid.js (https://mermaid.js.org/syntax/mindmap.html)
Module Docs episode (https://bikeshed.thoughtbot.com/417)
Writing Quality Method docs blog post (https://thoughtbot.com/blog/writing-quality-method-docs)
Notetaking for Developers episode (https://bikeshed.thoughtbot.com/357)
Learning by Helping blog post (https://thoughtbot.com/blog/learning-by-helping)
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, as of today, while we record this, it's early June, and I have started foraging a little bit for what's called serviceberries, which is a type of tree/shrub that is native to North America. And I feel like it's just one of those, like, things that more people should know about because it makes these little, tiny, you know, delicious fruit that you can just pick off of the tree and have a little snack. And what's really cool about this tree is that, like I said, it's native, at least to where I'm from, and it's a pretty common, like, landscaping tree. So, it has, like, really pretty white flowers in the spring and really beautiful, like, orange kind of foliage in the fall. So, they're everywhere, like, you can, at least where I'm at in Chicago, I see them a lot just out on the sidewalks. And whenever I'm taking a walk, I can just, yeah, like, grab a little fruit and have a little snack on them. It's such a delight. They are a really cool tree. They're great for birds. Birds love to eat the berries, too. And yeah, a lot of people ask my partner, who's an arborist, like, if they're kind of thinking about doing something new with the landscaping at their house, they're like, "Oh, like, what are some things that I should plant?" And serviceberry is his recommendation. And now I'm sharing it with all of our Bike Shed listeners. If you've ever wondered about [laughs] a cool and environmentally beneficial tree [laughs] to add to your front yard, highly recommend, yeah, looking out for them, looking up what they look like, and maybe you also can enjoy some June foraging.
JOËL: That's interesting because it sounds like you're foraging in an urban environment, which is typically not what I associate with the idea of foraging.STEPHANIE: Yeah, that's a great point because I live in a city. I don't know, I take what I can get [laughs]. And I forget that you can actually forage for real out in, you know, nature and where there's not raccoons and garbage [laughs]. But yeah, I think I should have prefaced by kind of sharing that this is a way if you do live in a city, to practice some urban foraging, but I'm sure that these trees are also out in the world, but yeah, have proved useful in an urban environment as well. JOËL: It's really fun that you don't have to, like, go out into the countryside to do this activity. It's a thing you can do in the environment that you live in. STEPHANIE: Yeah, that was one of the really cool things that I got into the past couple of years is seeing, even though I live in a city, there's little pieces of nature around me that I can engage with and picking fruit off of people's [inaudible 03:18] [laughs], like, not people's, but, like, parkway trees. Yeah, the serviceberry is also a pretty popular one here that's planted in the Chicago parks. So, yeah, it's just been like, I don't know, a little added delight to my days [laughs], especially, you know, just when you're least expecting it and you stumble upon it. It's very fun. JOËL: That is really fun. It's great to have a, I guess, a snack available wherever you go. STEPHANIE: Anyway, Joël, what is new in your world? JOËL: I've been intersecting two, I guess, hobbies of mine: D&D and AI. I've been playing a lot of one-shot games with friends, and that means that I need to constantly come up with new characters. And I've been exploring what AI can do to help me develop more interesting or compelling character concepts and backstories. And I've been pretty satisfied with the result. STEPHANIE: Cool. Yeah. I mean, if you're playing a lot and having to generate a lot of new ideas, it can be hard if you're, you know, just feeling a little empty [laughs] in terms of, you know, coming up with a whole character. And that reminds me of a conversation that you and I had in person, like, last month as we were talking about just how you've been, you know, experimenting with AI because you had used it to generate images for your RailsConf talk. And I think I connected it to the idea of, like, randomness [laughs] and how just injecting some of that can help spark some more, I think, creativity, or just help you think of things in a new way, especially if you're just, like, having a hard time coming up with stuff on your own. And even if you don't, like, take exactly what's kind of provided to you in a generative AI, it at least, I don't know, kind of presents you with something that you didn't see before, or yeah, it's just something to react to. JOËL: Yeah, it's a great tool for getting unstuck from that kind of writer's block or that, like, blank page feeling. And oftentimes, it'll give you a thing, and you're like, that's not really exactly what I wanted. But it sparks another idea, which is what I actually want. Or sometimes you can be like, "Hey, here's an idea I have. I'm not sure what direction to take it in. Give me a few options." 
And then, you see that, and you're like, "Oh, that's actually pretty interesting."One thing that I think is interesting is once I've come up with a little bit of the character concept, or maybe even, like, a backstory element...so, I'm using ChatGPT, and it has that concept of memory. And so, throughout the conversation, it keeps bringing it back. So, if I tell it, "Look, this is an element that's going to be core to the character," and then later on, I'm like, "Okay, help me brainstorm some potential character flaws for this character," it'll actually find things that connect back to my, like, core concept, or maybe an element of the backstory. And it'll give me like, you know, 5 or 10 different ideas, and some of them can be actually really good.So, I've really enjoyed doing that. It's not so much to just generate me a character so much as it is like a conversation back and forth of like, "Okay, help me come up with a vibe for it. Okay, now that I have a vibe or a backstory element or, like, a concept, help me workshop this thing. And what about that?" And if I want to say, "It's going to be this character class, what are maybe some ways I could develop it that are unusual?" and just sort of step by step kind of choose your own adventure. And it kind of walking me through the process has been really fun. STEPHANIE: Nice. Yeah, the way you're talking about it makes a lot of sense to me how asking it to help you, not necessarily do all of it, like, you know, kind of just spit out something that you're like, okay, like, that's what I'm going to use, approaching it as a tool, and yeah, that's really fun. Have you had good experiences then playing with those characters [chuckles]? JOËL: I have. I think it's also really great for sort of padding out some of the content. So, I had a character I played who was a washed-up politician. And at one point, I knew that I was going to have to make a campaign speech. And I asked ChatGPT, "Can you help me, like...here are the themes I want to hit. Give me a, like, classic, very politician-sounding speech that sounds inspiring but also says nothing at the same time." And it did a really good job of that. And you can tell it, "Oh, that's too long. That's too short. I want three sentences. I want five sentences." And that was great. So, I saved that, brought it to the table, and read out my campaign speech, and it was a hit. STEPHANIE: Amazing. That's really fun. I like that because, yeah, I don't think...I am so poor at just improvising things like that, even though, like, I want to really embody the character. So, that's cool that you found a way to help you be able to do that because that just feels like kind of what playing D&D can be about. JOËL: I've never DM'd, but I could imagine a situation where, because the DMs have to improv so much, and you know what the players do, I could imagine having a tool like that available behind the DM screen being really helpful. So, all of a sudden, someone's just like, "Oh, I went to a place," and, like, all of a sudden, you have to, like, sort of generate a village and, like, ten characters on the spot for people that you didn't expect, or an organization or something like that. I could imagine having a tool like that, especially if it's already primed with elements from your world that you've created, being something really helpful. That being said, I've never DM'd myself, so I have no idea what it actually is like to be on the other side of that screen.STEPHANIE: Cool. 
I mean, if you ever do try that or have a DM experience and you're like, hmm, I wonder kind of how I might be able to help me here, I bet that would be a very cool experience to share on the show. JOËL: I definitely have to report back here. Something that I've been thinking about a lot recently is the difference between sort of professional growth and experience, so the time that you put into doing work. Particularly maybe because, you know, we spend part of our week doing client work, and then we have part of the week that's dedicated to maybe more directly professional growth: our investment day. How do we grow from that, like, four days a week where we're doing client work? Because not all experience is created equal. Just because I put in the hours doesn't mean that I'm going to grow. And maybe I'm going to feel like I'm in a rut. So, how do I take those four days a week that I'm doing code and transform that into some sort of growth or expansion of my knowledge as a developer? Do you have any sort of tactics that you like to use or ways you try to be a little bit more mindful of that?STEPHANIE: Yeah, this is a fun question for me, and kind of reminds me of something we've talked a little bit about before. I can't remember if it was, like, on air or just separately, but, you know, we talk a lot about, like, different learning strategies on the show, I think, because that's just something you and I are very into. And we often, like, lean on, you know, our investment day, so our Fridays that we get to not do client work and kind of dedicate to professional development. But you and I also try to remember that, like, most people don't have that. And most people kind of are needing to maybe find ways to just grow from the day-to-day work that they do, and that is totally possible, I think. And some of the strategies that I have are, I guess, like, it is really...it can be really challenging to, like, you know, be like, okay, I spent 40 hours doing this, and like, what did I learn [chuckles]? Feeling like you have to have something to show for it or something to point to.And one thing that I've been really liking is these automated check-ins we have at the end of the week. And, you know, I suspect that this is not that uncommon for just, like, a workplace to be like, "Hey, like, how did your week go? Like, what are some ways that it was successful? Like, what are your challenges? Like, where do you need support or help?" And I think I've now started using that as both, like, space for giving an update on just, like, business-y things. Like, "Here's the status of this project," or, like, "Here's, you know, a roadblock that we faced that took some extra time," or whatever. Then also being like, oh, this is a great time to make this space for myself, especially because...I don't know about you, but whenever I have, like, performance review time and I have to write, like, a self-review, I'm just like, did I do anything in the last six months [laughs], or how have I grown in the last six months? It feels like such a big question, kind of like you were talking about that blank page syndrome a little bit.But if I have kind of just put in the 10 minutes during my Friday to be like, is there something that was kind of just for me that I can say in my check-in? 
I can go back and, yeah, just kind of start to see just, like, you know, pick out or just pay attention to how, like, my 40 hours is kind of serving me in growing in the ways that I want to and not just to deliver code [laughs].JOËL: What you're describing there, that sort of weekly check-in and taking notes, reminds me of the practice of journaling. Is that something that you've ever tried to do in your, like, regular life? STEPHANIE: Oh yeah, very much so. But I'm not nearly as, like, routine about it in my personal life. But I suspect that the routine is helpful in more of a, like, workplace setting, at least for me, because I do have, like, more clear pathways of growth that I'm interested in or just, like, something that, I don't know, not that it's, like, expected of everyone, but if that is part of your goals or, like, part of your company's culture, I feel like I benefit from that structure. And yeah, I mean, I guess maybe that's kind of my way of integrating something that I already do in my personal life to an environment where, like I said, maybe there is, like, that is just part of the work and part of your career progression. JOËL: I'm curious about the frequency. You mentioned that you sort of do this once a week, sort of a check-in at the end of the week. Do you find that once a week is about the right frequency versus maybe something like daily? I know a lot of these sort of more modern note-taking systems, Roam Research, or Obsidian, or whatever, have this concept of, like, a daily note that's supposed to encourage something that's kind of like journaling. Have you ever tried something more on a daily basis, or do you feel like a week is about...or once a week is about the right cadence for you? STEPHANIE: Listen, I have, like, complicated feelings about this because I think the daily note is so aspirational for me [laughs] and just not how I work. And I have finally begrudgingly come to accept this no matter how much, like, I don't know, like, bullet journal inspirational content I consume on the internet [laughs]. I have tried and failed many a time to have more frequency in that way. But, I don't know, I think it almost just, like, sets me up for failure [laughs] because I have these expectations. And that's, like, the other thing. It's like, you can't force learning necessarily. I don't know if this is, like, a strategy, but I think there is some amount of, like, making sure that I'm in the right headspace for it and, you know, like, my environment, too, kind of is conducive to it. Like, I have, like, the time, right? If I'm trying to squeeze in, I don't know, maybe, like, in between meetings, 20 minutes to be like, what did I learn from this experience? Nothing's coming out [laughs]. That was another thing that I was kind of mulling over when he had this topic proposed is this idea of, like, mindset and environment being really important because you know when you are saying, like, not all time is created equal, and I suspect that if, you know, either you or, like, the people around you and the environment you're in is not also facilitating growth, and, like, how much can you really expect for it to be happening? JOËL: I mean, that's really interesting, right? The impact of sort of a broader company culture. And I think that definitely can act as a catalyst for growth, either to kind of propel you forward or to pull you back. I want to dig into a little bit something you were saying about being in the right headspace to capture ideas. 
And I think that there's sort of almost, like, two distinct phases. There's the, like, capturing data, and information, and experiences, and then, there's synthesizing it, turning information into learning. STEPHANIE: Yes. JOËL: And it sounds like you're making a distinction between those two things, specifically that synthesis step is something that has to happen separately. STEPHANIE: Ooh, I don't even...I don't know if I would necessarily say that I'm only talking about synthesis, but I do like that you kind of separated those categories because I do think that they are really important. And they kind of remind me a lot about the scientific method a little bit where, you know, you have the gathering data and, like, observations, and you have, you know, maybe some...whatever is precipitating learning that you're doing maybe differently or new. And that also takes time, I think, or intention at least, to be like, oh, do I have what I need to, like, get information about how this is going? And then, yeah, that synthesis step that I think I was talking about a little bit more. But I don't think either is just automatic. There is, I think, quite a bit of intention involved. JOËL: I think maybe the way I think about this is colored by reading some material on the Zettelkasten method of note-taking, which splits up the idea of fleeting notes and literature notes, which are sort of just, like, jotting down ideas, or things you've seen, things that you've learned, maybe a thought you had when you read a particular paragraph in a blog post, something like that. And then, the permanent notes, which are more, like, fully formed thoughts that arise out of the more fleeting ones. And so, the idea is that the fleeting ones maybe you're taking those in a notebook if you're doing it pen and paper. You could be doing it in some sort of, like, daily note, or something like that. And then, those are temporary. They were there to just capture information. Later on, you process that, and then you can throw them out if you need to. STEPHANIE: Yeah, that makes a lot of sense. This has actually been a shift for me, where I used to rely a lot more on memory and perhaps, like, didn't have a great system for taking things like fleeting notes and, like, documenting kind of [inaudible 18:28] what I was saying earlier about how do I make sure that the information is recorded, you know, for me to synthesize later? And I have found a lot more success lately in that fleeting note style of operating. And thanks to Obsidian honestly, now it's so easy to be like, oh, I'm just going to open a quick new file. And I need as little friction as possible to, like, put stuff somewhere [laughs]. And, actually, I'm excited to talk a little bit more about this with you because I think you're a little bit different where you somehow find the time [laughs] and care to create your diagrams. I'm like, if I can, for some reason, even get an Obsidian file open, I'll tab to Slack. And I send myself a lot of notes in my just own personal DM space. In fact, it's actually kind of embarrassing because I use the Command+K shortcut to navigate to my own personal DMs, which you can get to by typing me, like, M-E. And sometimes I've accidentally just entered that into a channel chat [laughs], and then I have to delete it really quick later when I realize what I've done. So, yeah, like, I meant to navigate to my personal notes, and I just put in our team chat, "Me [laughs]." 
And, I don't know, I have no idea how that comes up [laughs], what people think is going on. But if anyone's listening to this podcast from thoughtbot and has seen that of me, that's what happened. JOËL: You may not be the only one who's done that. STEPHANIE: Thank you. Yeah [laughs], that's good to know. JOËL: I want to step back a little bit because we've been talking about, like, introspection, and synthesis, and finding moments to capture information. And I think we've sort of...there's an unspoken assumption here that a way to kind of turbocharge learning from day-to-day experience is some form of synthesis or self-reflection. Would you agree with that statement? STEPHANIE: Okay. This is another thing that I am perhaps, like, still trying to figure out, and we can figure it out together, which is separating, like, self-driven learning and, like, circumstance-driven learning. Because it's so much easier to want to reflect on something and find time to be, like, oh, like, how does this kind of help my goals or, like, what I want to be doing with my work? Versus when you are just asked to do something, and it could still be learning, right? It could still be new, and you need to go do some research or, you know, play around with a new tool. But there's less of that internal motivation or, like, kind of drive to integrate it. Like, do you have this distinction? JOËL: I've definitely noticed that when there is motivation, I get more out of every hour of work that I put in in terms of learning new things. The more interest, the more motivation, the more value I get per unit of effort I put in. STEPHANIE: Yeah. I think, for me, the other difference is, like, generative learning versus just kind of absorbing information that's already out there that someone else's...that is kind of, yeah, just absorbing rather than, like, creating something new from, like, those connections.JOËL: Ooh.STEPHANIE: Does that [chuckles] spark something for you? JOËL: The gears are turning in my head because I'm almost hearing that as, like, a passive versus active learning thing. But just sort of like, I'm going to let things happen to me, and I will come out of that with some experience, and something is going to happen. Versus an active, I am going to, like, try to move in a direction and learn from that and things like that. And I think this maybe connects back to the original question. Maybe this sort of, like, checking in at the end of the week, taking notes is a way to convert something that's a bit more of a passive experience, spending four days a week doing a project for a client, into something that's a little bit of a more active learning, where you say, "Okay, I did four weeks of this particular type of Rails work. What do I get out of it? What have I learned? What is something new that I've seen? What are some opinions I have formed, patterns I like or dislike?"STEPHANIE: Yeah, I like that distinction because, you know, a few weeks ago, we were at RailsConf. We had kind of recapped it in a previous episode. And I think we had talked about like, oh, do we, like, to sit in talks or participate in workshops? And I think that's also another example of, like, passive versus active, right? Because I 100%, like, don't have the same type of learning by just, you know, listening to a talk that I do with maybe then going to look up, like, other things this person has put out in the world, finding them to talk to them about it, like, doing something with the content, right? 
Otherwise, it's just like, oh yeah, I heard this talk. Maybe one day I'll remember it when the need arises [laughs]. I, like, have a pointer to it in my brain. But until then, it probably just kind of, like, sits there, and nothing's really happened with it. JOËL: I think maybe another thing that's interesting in that passive versus active distinction is that synthesis is inherently an act of creation. You are now creating new ideas of your own rather than just capturing information that is being thrown at you, either by sitting in a talk or by shipping tickets. The act of synthesizing and particularly, I think, making connections between ideas, either because something that, let's say you're in a talk, a speaker said that sparks an idea for yourself, or because you can connect something that speaker said with another idea that you already have or an idea that you've seen elsewhere.So, you're like, oh, the thing this person is saying connects to this thing I read in a book or something another speaker said in an earlier session, or something like that. All of a sudden, now you're creating these new bits of knowledge, new perspectives, maybe even new mental models. We talked about mental models last week. And so, knowledge is not just the facts that you absorb or memorize. A lot of it is building the connections between those facts. And those are things that are not always given to you. You have to create them yourself. STEPHANIE: Yeah, I am nodding my head a lot because that's resonating with, like, an experience that I'm having kind of coaching and mentoring a client developer on my team who is earlier in her career. And one thing that I've been really, like, working on with her is asking like, "Oh, like, what do you think of this?" Or like, "Have you seen this before? What are your reactions to this code, or, like this comment?" or whatever. And I get the sense that, like, not a lot of people have prompted her to, like, come up with answers for those kinds of questions. And I'm really, really hopeful that, like, that kind of will help her achieve some of the goals that she's, like, hoping for in terms of her technical growth, especially where she's felt like she's stagnated a little bit. And I think that calls back really well to what you said at the beginning of, like, you can spend years, right? Just kind of plugging away. But that's not the same as that really active growth. And, again, like, that's fine if that's where you're at or want to be at for a little while. But I suspect if anyone is kind of, like, wondering, like, where did that time go [laughs]...even for me, too, like, once someone started asking me those questions, I was like, oh, there's still so much to figure out or explore.And I think you're actually really good at doing that, asking questions of yourself. And then, another thing that I've picked up from you is you ask questions about, like, what are questions other people would have? And that's a skill that I feel like I still have yet to figure out. I'm [chuckles] curious what you think about that.JOËL: That's interesting because that kind of goes to another level. I often think of the questions other people would have from a more, like, pedagogical sense. So, I write a lot of blog posts. I write a lot of talks that I give. 
So, oftentimes when I'm creating that kind of material, there's a bit of an inner critic who's trying to, you know, sitting in the audience listening to myself speak, and who's going to maybe roll their eyes at certain points, or just get lost, or maybe raise their hand with a question. And that's who I try to address those things so that then when I go through it the next time, that inner critic is actually feeling engaged and paying attention. STEPHANIE: Do you find that you're able to do that because you've seen that happen enough times where you're like, oh, I can kind of predict maybe what someone might feel confused about? I'm curious, like, how you got from being, like, well, I know what I would be confused about to what would someone else be unsure or, like, want more information about. JOËL: Part of the answer there is that I'm a very harsh critic myself. STEPHANIE: [laughs] Yes.JOËL: So, I'm sitting in somebody else's talk, and there are probably parts where I'm rolling my eyes or being like, wait a minute, how did you get from this idea to this other thing? That doesn't follow. And so, I try to turn that back towards myself and use that as fuel to make my own work better. STEPHANIE: Yeah, that's cool. I like that. Even if it's just framed as, like, a missed opportunity for people to have better or more comprehensive understanding. I know that's something that you're, like, very motivated to help kind of spread more of [laughs]. Understanding and learning is just important to you and to me. So, I think that's really cool that you're able to find ways to do that. JOËL: Well, you definitely want to, I think, to keep a sort of beginner's mindset for a lot of these things, and one of the best ways to do that is to work with beginners. So, I spent a lot of time, back in the day, for example, in the Elm language chat room, just helping people answer basic questions, looking up documentation, explaining sort of basic concepts. And that, I think, helped me get a sense of like, where were newcomers to the language getting stuck? And what were the explanations of those concepts that really connected? Which I could then translate into my work. And I think that that made me a better developer and helped me build this, like, really deep understanding of the underlying concepts in a way that I wouldn't have had just writing code on my own.STEPHANIE: Wow, forum question answering hero. I have never thought to do that or felt compelled to do that. But I remember my friend was telling me, she was like, "Yeah, sometimes I just want to feel good about myself. And I remember that I know things that other people, like, are wanting to find out," and she just will answer some easy questions on Stack Overflow, you know, about, like, basic Rails stuff or something. And she is like, "Yeah, and that's doing my good deed [laughs]." And yeah, I think that it also, you know, has the same benefits that you were just saying earlier about...because you want to be helpful, you figure out how to actually be helpful, right? JOËL: There's maybe a sense as well that helping others, once more, forces you into more of an active mindset for growth in the same way that interrogating yourself does, except now it's a beginner who's interrogating you. And so, it forces you to think a little bit more about those whys or those places where people get stuck. And you've just sort of assumed it's a certain way, but now you have to, like, explain it and really get into some of the concepts. 
STEPHANIE: So, on the show, we've talked a lot about the fun things you share in the dev channel in our Slack workspace. But I recently discovered that someone (Was it you?) created an Obsidian MD channel for our favorite note-taking software. And in it, you shared a really cool tool that is available in Obsidian called mind maps. JOËL: Yeah, so mind maps are a type of diagram. They're effectively a tree structure, but they don't really look like that when you draw them out. You start with a sort of topic in the center, and then you just keep drawing branches off of that, going every direction. And then, maybe branches off branches and keep going as you add more content. Turns out that Mermaid.js supports mind maps as a graph type, and Obsidian embeds Mermaid diagrams. So, you can use Mermaid's little language to express a mind map. And now, all of a sudden, you have mind mapping as a tool available for you within Obsidian.STEPHANIE: And how have you been using that to kind of process and experience or maybe, like, end up with some artifacts from, like, something that you're just doing in regular day-to-day work? JOËL: So, kind of like you, I think I have the aspiration of doing some kind of, like, daily note journaling thing and turning that into bigger ideas. In practice, I do not do that. Maybe that's the thing that I will eventually incorporate into my practice, but that's not something that I'm currently doing. Instead, a thing that I've done is a little bit more like you, but it's a little bit more thematically chunked. So, for example, recently, I did several weeks of work that involved doing a lot of documentation for module-level documentation.You know, I'd invested a lot of time learning about YARD, which is Ruby's documentation system, and trying to figure out, like, what exactly are docs that are going to be helpful for people? And I wanted that to not just be a thing I did once and then I kind of, like, move on and forget it. I wanted to figure out how can I sort of grow from that experience maximally? And so, the approach I took is to say, let's take some time after I've completed that experience and actually sort of almost interrogate it, ask myself a bunch of questions about that experience, which will then turn into more broad ideas. And so, what I ended up doing is taking a mind-mapping approach. So, I start that center circle is just a circle that says, "My experience writing docs," and then I kind of ring it with a series of questions. So, what are questions that might be interesting to ask someone who just recently had experience writing documentation? And so, I come up with 4,5,6 questions that could be interesting to ask of someone who had experience. And here I'm trying to step away from myself a little bit. And then, maybe I can start answering those questions, or maybe there are sub-questions that branch off of that. And maybe there are answers, or maybe there are answers that are interesting but that then trigger follow-up questions. And so I'm almost having a conversation with myself and using the mind map as a tool to facilitate that. But the first step is putting that experience in the center and then ringing it with questions, and then kind of seeing where those lead. STEPHANIE: Cool. Yeah, I am, like, surprised that you're still following that thread because the module docs experience was quite a little bit a while ago now. We even, you know, had an episode on it that I'll link in the show notes. 
How do you manage, like, learning new things all the time and knowing what to, like, invest energy and attention into and what to kind of maybe, like, consider just like, oh, like, I don't know, that was just an experience that I had, and I might not get around to doing anything with it? JOËL: I don't know that I have a great system. I think sometimes when I do, especially a more prolonged chunk of time doing a thing, I find it really worthwhile to say, hey, I don't want that to sort of just be a thing that was in my memory, and then it moves out. I'd like to pull out some more maybe practical or long-term ideas from it. Part of that is capture, but some of that is also synthesis. I just spent two weeks or I just spent a month using a particular technology or doing a new kind of task. What do I have to show for it? Are there any, like, bigger ideas that I have here? Does this connect with any other technologies I've done or any other ideas or theories? Did I come up with any opinions? Did I like this technology? Did I not? Are there elements that were inspirational? And then capturing some of that eventually with the idea of...so I do a sort of Zettelkasten-style permanent note collection, the idea to create at least a few of those based off of the experience that I can then connect to other things. And maybe it eventually turns into other content. Maybe it's something I hold onto for a while. In the case of the module docs, it turned into a Bike Shed episode. It also turned into a blog post that was published this past week. And so, it does have a way of coming back. STEPHANIE: Yeah. Yeah. One thing that sparked for me was that, you know, you and I spend a lot of time thinking about, like, the practice of writing software, you know, in the work we do as consultants, too. But I find that, like, you can also apply this to the actual just your work that you are getting paid for [laughs]. This was, I think, a nascent thought in the talk that I had given. But there's something to the idea of, like, you know, if you are working in some code, especially legacy code, for a long time, and you learn so much about it, and then what do you have to show for it [chuckles], you know?I have really struggled with feeling like all of that work and learning was useful if it just, like, remains in my memory and not necessarily shared with the team or, I don't know, just, like, knowing that if I leave, especially since I am a contractor, like, just recognizing that there's value in being like, oh, I spent an hour or, like, half a day sifting through this complex legacy code just to make, like, a small change. But that small change is not the full value of all of the work that I did. And I suspect that, like, just the mind mapping stuff would be really interesting to apply to more. It's not, like, just practical work, but, like, more mundane, I don't know, like, labor [laughs], if you will. JOËL: I can think of, like, sort of two types of knowledge that you can take out of something like that. Some of it is just understanding how this legacy system works, saying, oh, well, they have this user model that's connected to this old persona table, which is kind of unused, but we sometimes rely for in this legacy case. And you've got to have this permission flag turned on and, like, all those things that you had to just discover by reading the code and exploring. And that's going to be useful to you as long as you work in that legacy codebase, as long as you work through that path. 
But when you move on to another project, that knowledge probably doesn't serve you a whole lot. There are things that you did throughout that journey, though, that you can probably pull out that are going to be useful to you on other projects. And that might be maybe you came up with a new way of navigating the code or a new way of, like, finding how different pieces were connected. Maybe it was a diagramming tool; maybe it was some sort of gem. Maybe it was just a, oh, a heuristic, like, when I see a model, I like to follow the associations first. And I always go for the hasmanys over the belongstos because those generally lead me in the right direction. Like, that's really interesting insight, and that's something that might serve you on a following project.You can also pull out bigger things like, are there refactoring techniques that you experimented with or that you learned on this project that you would use again elsewhere? Are there ways of maybe quarantining scary code on a legacy project that are a thing that you would want to make more consistent part of your practice? Those are all great things to pull out of, just a like, oh yeah, I did some work on a, like, old legacy part of an app. And what do I have to show for it? I think you can actually have a lot to show for it. STEPHANIE: Yeah, that's really cool. That sounds like a sure way of multiplying the learning. And I think I didn't really consider that when I was first talking about it, too. But yeah, there are, like, both of those things kind of available to you to, like, learn from. Yeah, it's like, that time is never just kind of, like, purely wasted. Oh, I don't know, sometimes it really feels like that [laughs] when you are debugging something really silly. But yeah, like, I would be interested in kind of thinking about it from both of those lenses because I think there's value in what you learn about that particular system in that moment of time, even if it might not translate to just future works or future projects. And, like, that's something that I think we would do better at kind of capturing, and also, there's so much stuff, too, kind of to that higher level growth that you were speaking to. JOËL: I think some of the distinctions we're talking about here is something that was explored in an older episode on note-taking with Amanda Beiner, where we sort of explored the difference between exploratory notes, debugging notes, idea notes, and how note-taking is not a single thing. It can serve many purposes, and they can have different lifespans. And those are all just ways to aid your thinking. But being maybe aware of the kind of thinking that you're trying to do, the kind of notes you're trying to take can help you make better use of that time. STEPHANIE: I have one last question for you before we wrap up, which is, do you find, like, the stuff we're talking about to be particularly true about software development, or it just happens to be the thing that you and I both do, and we also love to learn, and so, therefore, we are able to talk about this for, like, 50 minutes [laughs]? Are you able to make any kind of distinction there, or is it just kind of part of pedagogy in general? JOËL: I would say that that sort of active versus passive thing is a thing that's probably true, just about anything that you do. For example, I do a lot of bouldering. Just going spending a lot of time on the wall, climbing a lot; that's going to help me get better. 
But a classic way that people try to improve is filming themselves or having a friend film themselves, and then you can look at it, and then you evaluate, oh, that's what I did. This is where I was struggling to get the next hold. What if I try to do something different? So, building in an amount of, like, self-reflection into the loop all of a sudden catalyzes that learning and helps you grow at a rate that's much more than if you're just kind of mindlessly putting time into it. So, I would go so far as to say that self-reflection, synthesis—those are all things that are probably going to catalyze growth in most areas of your life if you're being a little bit more self-aware. But I've found that it's been particularly useful for me when it comes to trying to get better at the job that I do every week.STEPHANIE: Yeah, I think, for me, it's like, yeah, getting better at being a developer rather than being, you know, a software developer at X company. Like, not necessarily just getting better at working at that company but getting better at the skill itself. JOËL: And those two things have a way of sort of, like, folding back into themselves, right? If you're a better software developer in general, you will probably be a better developer at that company. Yes, you want domain knowledge and, like, a deep understanding of how the system works is going to make you a better developer at that company. But also, if you're able to find more generic approaches to onboard onto new things, or to debug more effectively, or to better read or understand unknown code of high complexity, those are all going to make you much better at being a developer at that company as well. And they're transferable skills, so they're all really good things to have. STEPHANIE: On that note. Shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm.JOËL: This show has been produced and edited by Mandy Moore.STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at [email protected] via email.JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeee!!!!!!AD:Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
-
Joël explains his note-taking system, which he uses to capture his beliefs and thoughts about software development. Stephanie recalls feedback from her recent RailsConf talk, where her confidence stemmed from deeply believing in her material despite limited rehearsal. This leads to a conversation about the value of mental models in building a comprehensive understanding of a topic, which can foster confidence and adaptability during presentations and discussions. The episode then shifts focus to the practical application of enumerators in Ruby, exploring various mental models to understand their functionality better. Joël introduces several metaphors, such as enumerators as cursors, lazy collections, and sequence generators, which help demystify their use cases.
Episode on note-taking (https://bikeshed.thoughtbot.com/357)
What we believe about software (https://bikeshed.thoughtbot.com/172)
Ruby Enumerators (https://ruby-doc.org/3.3.1/Enumerator.html)
Enumerator Lazy (https://ruby-doc.org/3.3.1/Enumerator/Lazy.html)
Modeling a Paginated API as a lazy stream (https://thoughtbot.com/blog/modeling-a-paginated-api-as-a-lazy-stream)
Solving a memory performance issue with enumerator (https://thoughtbot.com/blog/how-we-used-a-custom-enumerator-to-fix-a-production-problem)
Find in batches (https://api.rubyonrails.org/classes/ActiveRecord/Batches.html#method-i-find_in_batches)
Binary tree implementation with different traversals (https://gist.github.com/JoelQ/02f3ef9f61bebc7c8e5ea67d10ed92c6)
Teaching Ruby to Count (https://www.youtube.com/watch?v=PHMOsTK1jSE)
Transcript: STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. JOËL: And I'm Joël Quenneville, and together, we're here to share a bit of what we've learned along the way. STEPHANIE: So, Joël, what's new in your world? JOËL: So, what's new in my world isn't exactly a new thing. I've talked about it on the podcast here before, and it's my note-taking system. I have a system where I try to capture notes that are things I believe about software or things I think are probably true about software. They're chunked up in really small pieces, such that every note is effectively one small thesis statement and a paragraph of text, and maybe a diagram or a code snippet to support that. And then, it's highly hyperlinked to other notes. So, I sort of build out some thoughts on software that way. A thing that I've done recently that's been pretty exciting with that is introducing a sort of separate set of notes that connect to my sort of opinion notes. So, I create individual notes for public works that I've done, things like blog posts or conference talks. Because a lot of those are built on top of ideas that have been sitting in my note system for a while. Readers and listeners get to sort of see the final product, but often sort of built up over several months or even a couple of years as I added different notes that kind of circled a topic and then eventually got to a thing. What I did, though, was actually making those connections explicit. And so I use Obsidian. Obsidian has this cool graph view where it just sort of shows all of the notes, and it circles them with, like, connections between them where the notes connect.
So, I can now see in a visual format how my thoughts cluster in different topics, but then also which clusters have talks and blog posts hanging off of them and also which ones don't, which ones are like, oh, I have a lot of thoughts on this topic, and I've not yet written about it in a public forum; maybe that would be a thing to explore. So, seeing that visual got me really excited. I was having a good time. STEPHANIE: Yes, I have several thoughts coming to mind in response, which is, I know you love a visual. I really like the system of, even if you have created content for it, like, you have a space for, like, thoughts about it to evolve. Because you said, like, sometimes content comes out of notes that you've been...or, like, thoughts you've been having over years, but it's like, even afterwards, I'm sure there will still be new thoughts about it, too. I always have a hard time finding a place for that thing kind of once I, I don't know, it's like some of that stuff is never really considered done, right? So, that is really cool. And I also was just thinking about an old episode of The Bike Shed back when Chris Toomey and Steph Viccari hosted the podcast called "What We Believe About Software," I think, is the title. And I was just thinking about how, like, if only we could just dump all of your notes [laughs] into some, you know, stream [laughs], and that would be really cool. If we ever do, like, an episode like that, that would be really fun. And I'm sure, you know, you already have this, like, huge bank of ideas [laughs]. JOËL: Yes. It is really fun because I build up...the thoughts are often sort of interconnected, and so they might have a topic, but they are very focused. So, I might have, like, three or four things I believe about a particular topic that cluster together. So, we could...and, actually, I have used, in the past, some of those clusters as initial food for thought for a Bikeshed episode.STEPHANIE: Yeah, that's really neat. I like this idea of a kind of just, like, a repository for putting down what you believe about software as kind of, like, guiding principles for yourself as a developer a little bit. I remember a piece of feedback I got about my RailsConf talk that I gave a few weeks ago, and someone said like, "Oh, you sounded really confident in what you were talking about." And that surprised me because I, like, didn't practice rehearsing giving the talk all that much [laughs]. It's because they had asked like, "Oh, like, did you practice a lot?" or something like that. And I think I realized that I, like, really believed in what I was sharing and kind of that, I think, was perhaps what they were picking up on. And even though, like, maybe the rehearsal of the presentation itself was not where I had spent a lot of time on, I had spent a lot of time thinking about what I wanted to share and just building up my confidence around that. So, I thought that was an interesting connection. JOËL: Yeah, you fully developed the idea. You kind of explored all the side trails, maybe a little bit on your own as well. You're on very familiar terrain. And so, that is a way of building confidence separate from just sort of memorizing a talk. STEPHANIE: Yeah, yeah. Exactly.JOËL: In a sense, I almost feel like that's a better sense of confidence because then you can sort of...you can roll with the punches. You know, if a slide is out of order or something, sure, it maybe messes up a little bit of the narrative that you're trying to say. 
But you're not like, "Oh no, what is this content?" You're like, "Oh yeah, this thing," and you can dive right into it. Somebody asks you a question, and you're not like, "Oh no, that was not in the script," because, again, you've sort of mastered your topic. You know the area as a whole, even sort of the blurry edges beyond the talk, and can react in a way that is pretty confident. STEPHANIE: Yeah. I still definitely fear the open Q&A. I've never done it before, but maybe one day I will be able to because I just, you know, know my topic so well inside and out [laughs] that I can roll with the punches, as you say. JOËL: Open Q&A is just...it's a roll of the dice. Sometimes, you get some really good conversation topics there, and sometimes, it's just a waste of everyone's time. STEPHANIE: I like that take [laughs]. JOËL: Maybe that should go into the things I believe about software. So, other than receiving feedback about your RailsConf talk, what is new in your world?STEPHANIE: Yeah, so I am wrapping up a pretty large project on my client work that we're hoping to release soon. And, in fact, it's actually being released along with a big announcement from the client company to their customers. Essentially, at a conference, they're going to say like, "Hey, like, we now have this new feature." And so, I think there's some hype generated around it. And this past week, we've been doing a lot of internal testing of the feature because there are a lot of employees of my client company who are, like, pretty big users of the product, which is cool because I think we're getting, you know, we have easy access to people who can give us good feedback.But I am having a hard time with being on the receiving end of the feedback and figuring out, like, what is stuff I need to attend to now before, you know, this big release? And what is stuff that is just kind of, like, general feedback like, "Oh, like, I wish it did this," but, you know, it turns out that that's not really what we were building? And how do I just kind of, like, accept that? You know, it's coming from a good place, but I can't really help them there, at least right now. And that's hard for me because I like helping people, right? And so, if someone says something like, "Oh, like, I wish it did this," or like, "Oh, that's kind of weird," I'm like, "Oh, I want to just, like, fix that for you right now [laughs]." And I suspect that a lot of other devs can relate to this, especially if, like, you know, you've been working on something for a little bit, and it feels...I'm just going to say it: it feels a little precious to me. So, what I'm trying to do today, actually, is not look at any of the feedback at all [laughs] and come at it tomorrow with a bit of a calmer vibe and be able to separate out, like, you know, I think all feedback is informative, but not all of it is useful for you at any given moment. Like, if there are bugs, then those will be my immediate priority. If there's maybe some small tweaks that we can make the feature just a little bit more polished, then I also think those are good. But then we are discovering a few things, too, about, like, what this feature is or could be. And I think those are the things that, you know, need to be brought into a conversation with a broader group and think about, like, is this the direction we want to go? So, that's kind of how I'm bucketing that feedback right now. JOËL: How do you feel about receiving direct feedback versus having something filtered through something like a product team? 
STEPHANIE: Ooh, that's an interesting question. Because right now we're doing, I think, a mix of both that I'm not sure that I really like. On one hand, when it's filtered, it's hard to get to the root of what someone is asking for. And oftentimes, like, it may not even include enough information after the fact to be able to come at it from a dev perspective. But then direct feedback, I think, is just a little bit overwhelming sometimes. And it can be hard to figure out what to pay attention to if you don't have that, like, input from a product team about, like, what the roadmap is looking like or where, you know, strategically their heads are at. So, one thing that kind of has emerged from this is like, oh, I was getting, you know, notifications for the feedback coming in. And what we did was set up a meeting [laughs] so that we can...maybe all of us can, like, scan it together ahead of time and then come at it with a little bit of context about what's come in but then maybe coalesce around the things that we feel are important. JOËL: Well, you'll have to keep us updated on how that plays out, and we can kind of hear what is the balance that ends up working well for you. STEPHANIE: Yeah, I hope so. I think this is actually maybe something that's a bit underexplored from the dev perspective, you know, that in-between stage of you're not totally done because it's not shipped to the world yet, but, you know, you're starting to get a little bit of that input. And what you do with that? Because I think there is some value in being engaged in that process. JOËL: So, we were talking earlier about this note-taking system that I use and sort of a renewed excitement that I have about it. And one thing that I did when I was going through and finding clusters of things that hadn't been written about was I found that I had a cluster of notes on different mental models that I had for understanding Ruby enumerators, not the enumerable module, but the enumerator object. And I decided, you know what? This would probably make for a good blog post. So, I drafted a blog post, and I've been thinking about this a little bit more recently. So, I've been really hyped about digging into enumerators because of that experience. STEPHANIE: Yeah, that's very cool. I have to say that I feel like I did not know a lot about enumerators and the API for them kind of before you brought this topic up, and I did a bit of a deep dive in preparation for us to discuss it. I feel like most devs, you know, work with enumerators via methods on enumerable without totally knowing that they are. So, I think that this would be a really interesting episode for people to be like, oh, like, I've been using this stuff, you know, the whole time, and now I can have a different perspective or just more insight on what they can do.JOËL: Before we dig into individual mental models, though, I want to think a little bit about the concept of mental models as a whole. Years ago, someone gave me advice to sort of pay attention to mental models, ways I think about the world or different code structures, different code approaches, and that really stuck with me. So, I've since been, like, kind of, like, collecting mental models. And, in a way, they're like a, for me, a bit more of a concrete way to look at a particular topic. So, I can say I'm looking at this particular topic through the lens of a particular mental model that helps me build more clarity around it. 
And if I have three or four, then I can kind of look at it from three or four different perspectives. And now, all of a sudden, I feel like I'm seeing in three dimensions. STEPHANIE: Whoa, the Matrix even [laughs]. That's cool. Yeah, I really like that advice. I think I'm going to steal it and start kind of suggesting it to other people because I think, in a way, on this show, that has come through a lot. And talking about things on the podcast has helped me develop a lot of my mental models. And I think we've done a few, like, episodes in the past about various ones we have for just our work because it's like, that's infinite [laughs]. But what I really have been appreciating is that mental models just need to work for you. As long as you're able to understand something, then it's valuable. And that has really helped me also, like, just get on the same understanding with others because the goal is not necessarily to, like, explain it the way that I would think of it, but figure out what would help them kind of develop their own mental model for understanding something, and, you know, kind of as long as we both feel like we have that shared understanding, no matter what lens it's through. And, you know, sometimes it's even more effective when we are able to share it. But I feel like, you know, you can still find ways to collaborate on something with a diversity of mental models. JOËL: Yeah, they're a great way to build self-understanding. They're a great way to sort of build understanding between two people. So, I'm a huge fan of the concept. And part of what I've been doing with my note-taking system is trying to capture those as much as possible. If I'm ever, like, trying to understand a complex topic and I'm like, oh, I think I've got a breakthrough here; I understand it; it's kind of like this, or you can imagine it in this perspective, it's like, write that down. That's gold. STEPHANIE: Very cool. So, Joël, would you be able to share some of your mental models for enumerator? JOËL: So, one way that I look at it is the idea that an enumerator is effectively a cursor over a collection. So, you have an array and a regular array; you're either in the middle of iterating through it using something like each, or you're not. You just have a collection of items. Enumerator introduces the idea that you're actually sort of at a position in the array. So, you're sort of focused on, let's say, the third item or the fourth item. You have a cursor there, and you can move that cursor forward as you sort of step through. But the really cool thing is you can also kind of pause and just pass that cursor on to someone else, and someone else can move the cursor a few steps further down the collection, pause, pass it on to someone else. And it's totally fine. Nobody has to, like, go through an entire, like, each iteration. STEPHANIE: Yeah. So, when you were talking about cursors, that got me thinking a little bit because I actually have struggled with that concept, especially when it comes to, you know, things code-related. Like, when I've had to work with database things and stuff, like, the idea of a cursor was a little, like, difficult for me to wrap my head around. And I was looking at the methods on enumerator, like the instance methods on enumerator. And one of them actually is what helped me develop this mental model. And I'm excited to see what you think. But there is a rewind method that basically rewinds the sequence back to its beginning, right? 
And what that triggered for me was a VHS tape [laughs] and just those, like, car-shaped rewinders for tapes back in the '90s. I don't know if you ever had one in your house, but I did. And I just thought that was such a cool method name because it was very, I don't know, it was just like a word that we use in the English language, right? So, the idea of, like, tapes, you know, like, cassette tapes or VHS tape kind of also it sounds like it matches well with what you were sharing, too, where it's like, I could pass, I don't know, maybe I, like, listen to a few songs on my cassette tape, and then I give it to someone else, and they can pick up where I left off. And yeah, that was really helpful in understanding, like, a marker of a position a little more than cursor was able to for me.JOËL: That's really interesting because now I wonder, like, how far we could push that metaphor. So, musical data is encoded on magnetic tape. Cassette tapes typically there are sort of two spools. You start off with all of the tape wound up around one spool, and then as it sort of moves across the read head, it gets wound up on sort of the, I don't know, destination spool. I guess you can call them origin and destination. And because of that, you can sort of be in a, like, partly read state where, you know, half the tape is on the destination spool, half of it is on the origin spool, and you have that read head that's in the middle, and you're just kind of paused there. And you can kind of jump forward in that. So, I imagine something like that in your metaphor is like an enumerator. Contrast that to imagine just a single spool, which is just we have musical data encoded on magnetic tape, and we wrapped it up on a spool. I feel like that's almost more like a regular array because you don't have that concept of, like, position, or being able to read parts of it or anything like that. It's just, here's some data. STEPHANIE: Yeah. While you were talking about the two spools, I was thinking about, like, part of what is nice about enumerator is that you can go forward or backwards, right? And that feels a little more possible with that two-spool metaphor [laughs], rather than just unraveling something, where you are kind of discarding what has already been read. JOËL: The one caveat there is that enumerators can move forward one item at a time. They can only move backwards by jumping back to the beginning. So, you can't step forward or step back. STEPHANIE: Yeah, that's fair. JOËL: You step forward, or you, like, rewind to the beginning. I think, in my mind, I was thinking a little bit more about this metaphor. And I think it's also just a metaphor for what's called the External Iterator Pattern. It's one of the classic Gang of Four Patterns, which is what enumerator, the object in Ruby, is an implementation of. I feel like I always see that in the documentation, like, oh, enumerator is an implementation of the External Iterator Pattern. And I just kind of go, what? STEPHANIE: [laughs]JOËL: Or maybe I kind of understand the idea of, like, okay, it's a way to, like, be able to step through a collection. But thinking in terms of a cursor or even your model as a cassette tape, I think that gives me a model, not just for enumerators, but then for better understanding that external iterator pattern. 
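To put the cursor (or cassette) model into code, here is a minimal Ruby sketch; the songs array and the listen_to_a_few helper are hypothetical, invented purely for illustration. It shows an enumerator being stepped through, handed off mid-way, and rewound:

    # A plain array has no notion of position; wrapping it in an enumerator
    # gives us a cursor we can move, pause, and pass around.
    songs = ["Track 1", "Track 2", "Track 3", "Track 4", "Track 5"]
    tape = songs.each # calling each with no block returns an Enumerator

    # Hypothetical helper: advances the cursor a couple of steps, then stops.
    def listen_to_a_few(tape)
      2.times { puts "Listening to #{tape.next}" }
    end

    listen_to_a_few(tape) # plays "Track 1" and "Track 2"
    puts tape.peek        # => "Track 3" -- look at the next item without advancing
    listen_to_a_few(tape) # picks up where the last listener left off
    tape.rewind           # back to the start, like the cassette rewinder
    puts tape.next        # => "Track 1"

The forward-only caveat from the conversation holds here: next and peek only move or read the cursor going forward, and rewind is the only way back.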
Like, now, if I'm ever reading through the Gang of Four book, or some other language's docs say we're doing the External Iterator Pattern, I'll immediately be like, oh, that's a cursor, or that's a cassette tape. STEPHANIE: Yeah, very cool. I like it. JOËL: Another mental model that I have is thinking of enumerator in terms of a lazy collection. This is something that you tend to see more in functional programming languages, so the idea that you have a collection of potentially infinite length, or it could even be unknown length. But each element only sort of comes into being as you attempt to read it. So, it's kind of, like, a potentially infinite chain of Schrodinger's boxes. And you've got to open each of them to find out what's inside. STEPHANIE: Do you know what this reminded me of? Like elementary school math questions that were like, "What comes next in this pattern?" And it has, like, you know, the first, like, four or five values in a sequence or something. And then, you have to figure out, like, what the next value is. But then, in some ways, you know, I think it can depend on whether your enumerator is using the previous value to determine the next one. But yeah, it's like, you can't just jump ahead to figure out what the 10th, you know, value in this pattern is without kind of knowing what's come before it. JOËL: And sort of that needing to step through the entire collection, sort of one element at a time. STEPHANIE: Yeah, exactly. JOËL: I think a way that that concept is interesting, to me, is situations where a collection might be expensive, and you don't necessarily need all of it. So, you might have a bunch of calculations, but you can stop when you've hit the first one that succeeds or that matches a certain criteria. And so, it's not worth it to calculate the entire array of calculations if you're going to stop at the third one. And you could do that with some sort of, like, loop or something like that. But having it as a collection means you get to just treat it like an array, and you can call detect on it and do all the nice things that you're used to. It just happens to be a little bit more efficient in terms of not creating more data than you need to. STEPHANIE: Yeah. And I think there's some really cool stuff you can do when you start chaining enumerators with this concept of it being lazy evaluated. So, one of the things I learned in my deep dive is that when you are using the lazy method, you're able to chain enumerators. And they work a bit differently, where the default functionality is, like, everything in the collection gets evaluated through the first method, and then it gets iterated over in the second method. Whereas if you use lazy, I believe how it works is that, like, the first value gets kind of processed by all of the methods. And then, you get, you know, the output before moving on to the second, like, the next value. Does that sound right? JOËL: Yes. And I think that's where there's often a lot of confusion because there's sort of plain enumerator, and then there's a lazy enumerator that Ruby provides. A plain enumerator is a lazy list in the sense that items don't get evaluated unless you try to reach for them. So, if you have an enumerator and you say, "Just give me the first five items," it will do that. And even if the collection was 200 items long, the next 195 don't get evaluated. So, that's very efficient there. Where you would get into trouble is that plain enumerators are not lazy when it comes to traversals.
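As a rough sketch of that distinction, assuming a hand-rolled infinite enumerator (the number-squaring chain is made up for illustration): asking a plain enumerator for the first few items is already cheap, but chaining traversal methods like map and select is where Enumerator::Lazy comes in.

    # An "infinite collection": values only come into existence when asked for.
    numbers = Enumerator.new do |yielder|
      n = 1
      loop do
        yielder << n
        n += 1
      end
    end

    numbers.first(5) # => [1, 2, 3, 4, 5] -- only five values are ever generated

    # numbers.map { |n| n * n } would never return: map is a traversal and
    # tries to walk the whole (infinite) collection. A lazy enumerator instead
    # threads each value through the whole chain one at a time:
    numbers.lazy
           .map    { |n| n * n } # nothing evaluated yet
           .select(&:odd?)       # still nothing
           .first(3)             # => [1, 9, 25]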
So, any method that would traverse the entire collection, so something like a map or a select, is not going to be lazy because it's going to traverse the entire collection, therefore forcing us to evaluate each of the items in there. Whereas something like enumerable lazy will not actually traverse the collection when you do your map or you're selecting. It will wait for you to say, "Give me the first item," or "Give me the first ten items," or something like that. But you don't always need lazy. You really only need lazy when you're doing a traversal method. STEPHANIE: Okay. Cool, cool, cool. That makes a lot of sense. JOËL: I think a sort of spinoff metaphor that I have there is this idea of a lazy list. Another concept that, in my mind, is very adjacent to lazy lists is the concept of streams. And streams I typically think of them in terms of, like, files or networking, things like that. But a thing that you can do let's say you're working on data that's in a very large file, so big that you can't fit it into memory, a common solution there is streaming it. So, you don't load the entire file into memory and then operate on it. Instead, little chunks of it are loaded into memory. You operate on them, and then you release that memory and load the next chunk. So, you sort of work through that file in chunks, but you'd only have, you know, 1 line or ten lines or however big your chunk is in memory at a time. An enumerator allows you to do that with things that are not files. So, this could be a situation where, let's say, you're reading a lot of data from the database. You just have too many rows. You can't load them all into memory at once. But you do want to traverse through them. You could chunk that using enumerator so that every, you know, it loads 100 rows at a time or 1,000 rows at a time, or something like that. And your enumerator allows you to treat that as though it's a single array, even though, in the background, it's being chunked into pieces so that you never have more than a thousand rows at a time in memory. So, it allows you to do some, like, really nice sort of memory performance things. STEPHANIE: When would you want to use this over kind of something like batching queries? JOËL: So, I think ActiveRecord find_in_batches does something like this under the hood. STEPHANIE: Oh, cool. JOËL: I don't know if they use Ruby's enumerator or if they sort of build their own custom extension to it, but it's built on this idea. STEPHANIE: Okay, that's really neat. I have another mental model that I wanted to get your thoughts on. JOËL: Yeah! STEPHANIE: One of the ways that I looked up that you can construct an enumerator, an infinite enumerator like we were talking about a little bit earlier, was with the produce class method. And that actually got me thinking about a production line and this idea that, you know, you have this mechanism for, you know, producing some kind of material or, like, good or something like that. And it's just there and waiting and ready [laughs] for you to, like, kind of ask for it, like, what it needs to do. And you can do that, like, sometimes in batches, right? If you are asking for like, "Okay, I want a thousand units," and then the production line goes to work [laughs]. But yeah, that was another one of those things where I'm like, wow, they really, I think, came up with a cool method name that evoked, like, an image in my head. JOËL: That's the power of naming, right?
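Here is a small, self-contained sketch of that chunking idea, loosely in the spirit of what find_in_batches does; the fetch_page helper and the fake ALL_ROWS data are stand-ins for a real database or paginated API, and this is not the actual Rails implementation.

    # Stand-in for a paginated data source; a real app might hit the database
    # or a remote API here. Only one page of rows is fetched per call.
    ALL_ROWS = (1..2_500).map { |i| { id: i, email: "user#{i}@example.com" } }

    def fetch_page(page, per_page: 1_000)
      ALL_ROWS.slice(page * per_page, per_page) || []
    end

    # Presents the paginated source as one flat, lazily generated collection;
    # callers never see the page boundaries.
    def each_row
      Enumerator.new do |yielder|
        page = 0
        loop do
          batch = fetch_page(page)
          break if batch.empty?
          batch.each { |row| yielder << row }
          page += 1
        end
      end
    end

    # Feels like a single array, but only the first page is ever fetched:
    each_row.lazy.map { |row| row[:email] }.first(10)

The produce class method Stephanie mentions (Enumerator.produce, available since Ruby 2.7) is a quick way to build that kind of production line from a seed value and a rule, and it doubles as the "give me the next value" sequence generator idea that comes up next; here is a hypothetical Fibonacci example:

    # Each produced state is a [current, next] pair of Fibonacci numbers.
    fibs = Enumerator.produce([0, 1]) { |current, nxt| [nxt, current + nxt] }
    fibs.take(8).map(&:first) # => [0, 1, 1, 2, 3, 5, 8, 13]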
And I think it's interesting you've mentioned twice how going through the method names on enumerator and finding different method names all of a sudden, like, turned on a light bulb in your mind. So, if you're naming things well, it can be incredibly useful for users of your library to pick up on what you're trying to do. So, I want to circle back to something that you mentioned earlier, the idea of elementary school quizzes where you have to, like, figure out the next item in the sequence. Because that, for me, is very similar to my mental model: the idea that an enumerator is a sequence generator. So, instead of thinking of it as, oh, it's like an array or it's some kind of collection, instead, think of it as a robot that I can just ask it, hey, give me a value, and it will give me a value. And then, it will, like, keep doing that as long as I keep asking it for it. And those values, you know, they could be totally random. You can build one of those. But you can also have it so that the values sort of come from a sequence. It's not like an array where you're like, oh, I'm going to, like, predefine an array of, I don't know, the Fibonacci sequence, and when someone asks me for the third value, I'll just go and read that third value from the array. Instead, it knows the algorithm, and it just says, "Oh, you want the next value in the Fibonacci sequence? Let me calculate it. Here it is. Oh, you want the next value? Here it is." And so, thinking from that perspective helped me really come to terms with the concept that values really do get calculated just in time. It's not really a collection. It's an object that can give you new values if you ask it. STEPHANIE: Yeah, okay. That is making a lot more sense kind of in conjunction with the lazy list model that you shared earlier, and even a little bit with the production line that I was kind of sharing where it's like, you know, in this case, kind of, it's, like, the potential for a value, right?JOËL: Right, exactly. And, you know, these are all mental models that converge on the same ideas because they're all just slightly different perspectives on what the same object does. And so, there is going to be some overlap, some converging between all of them. I have another fun one. Can I throw it at you?STEPHANIE: Please. JOËL: This one's a little bit different, and it's the idea that enumerators are a tool to bring your own iteration to a collection. So, imagine a situation where you're building your own, let's say, binary tree implementation. And there are multiple ways to traverse through a binary tree. In particular, let's say you're doing depth-first search. There are sort of three classic ways to traverse that are called pre-order, post-order, and in-order traversals. And it really is just sort of what order do you visit all the children in your tree? Now, the point of a collection, oftentimes, is you need a way to iterate through it. And a classic solution would be to include enumerable, the module. In order to do that, you have to define a way to iterate through your collection. You call that each. And then, enumerable just gives you all the other nice things for free. The question is, though, for something like a tree where there are multiple valid ways to traverse, which one do you pick to make it the each that gets sort of all the enumerable goodies, and then the others are just, like, random methods you've defined? 
Because if you define, let's say, pre-order traversal as each, now your detect and select and all those are going to work in pre-order, but the others are not going to get that. So, if you map over a tree, you're forced to map over in pre-order because that's what the library author chose. But what if you want to map over a tree in post-order or in-order? STEPHANIE: Yeah, well, I'm guessing that here's where enumerator comes in handy [laughs]. JOËL: Yes. The approach here is instead of designating sort of one of those traversals as the sort of blessed traversal that gets to have enumerable, you build three of these, one for each of these traversals. And then, what's really nice is that because enumerators are themselves enumerable, they have map and select and all of these things built in. Now you can do something like my_tree.preorder.map or my_tree.postorder.map. And you get all the goodies for free, but the users of your library get to basically choose which traversal they want to have. As a library author, you're not forced to pick ahead of time and sort of choose: this is the one I'm going to have. You sort of bring your own traversal by providing an enumerator, and then everything else just kind of falls into place. STEPHANIE: Bring Your Own Traversal (BYOT) [laughter]. I like it. Yeah, that's cool. I can see how that would be really handy. I have not yet encountered a situation where I needed to get that deep into how my iteration is traversed, but that's really interesting. And, I mean, I can start even imagining, like, having an each method defined in these different ways, and then all of that being able to be composed with some of the other...just other methods. And now you have, like, so many different ways to perhaps, like, help, you know, different performance use cases. JOËL: Yeah, it can be performance. I often tend to think of enumerator as a performance thing because of its sort of lazy properties, because it allows you to sort of stream or chunk data that you're working with. But in the case of this mental model of the Bring Your Own Traversal, it actually is more about flexibility and having sort of the beauty of Ruby without having to compromise on, oh, I have to pick a single way to traverse a collection. STEPHANIE: But I really appreciate kind of this discussion about enumerator because this was previously, like, I don't think I have really ever used the class itself to solve a problem, but now I feel a lot more equipped to do so with a couple of the different kind of perspectives. And I think what they helped me do is just prime myself. If I see a problem that might benefit from something being iterated in a lazy way, like, being like, oh, I remember this thing, this mental model. Now I can go kind of look at the documentation for how to use it. And yeah, like, I don't know how I would have stumbled across, like, reaching for it otherwise. JOËL: That's a really interesting thing to notice because we've been talking a lot about how mental models can be a tool for understanding. But once you build an understanding, even though it's somewhat fuzzy, they're also a great tool for sort of recall. So, not only are you thinking, okay, well, this mental model says enumerators are kind of like this, or they function in this way. On the flip side of it, you can say, "Well, lazy evaluation problems are often enumerator problems. Like, streaming or chunked data problems are often enumerator problems. Multiple traversals are enumerator problems."
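A compressed sketch of that bring-your-own-traversal idea follows; this tiny Node struct is a simplification invented for illustration, not the binary tree implementation from the gist linked in the show notes.

    # A tiny binary tree node that exposes each depth-first traversal order
    # as its own enumerator instead of blessing one of them as #each.
    Node = Struct.new(:value, :left, :right) do
      def pre_order
        Enumerator.new do |y|
          y << value
          left&.pre_order&.each { |v| y << v }
          right&.pre_order&.each { |v| y << v }
        end
      end

      def in_order
        Enumerator.new do |y|
          left&.in_order&.each { |v| y << v }
          y << value
          right&.in_order&.each { |v| y << v }
        end
      end

      def post_order
        Enumerator.new do |y|
          left&.post_order&.each { |v| y << v }
          right&.post_order&.each { |v| y << v }
          y << value
        end
      end
    end

    tree = Node.new(1, Node.new(2, Node.new(4), Node.new(5)), Node.new(3))

    # Enumerators are themselves Enumerable, so callers pick the traversal
    # and still get map, select, detect, lazy, and friends for free.
    tree.pre_order.to_a                # => [1, 2, 4, 5, 3]
    tree.in_order.to_a                 # => [4, 2, 5, 1, 3]
    tree.post_order.map { |v| v * 10 } # => [40, 50, 20, 30, 10]

Because each traversal returns an enumerator rather than being hard-wired into each, the caller chooses the ordering and the library author doesn't have to.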
So, now, even though you don't, like, fully understand it in your mind, you've got that recall where you can enter it, where you can come across that problem, and immediately you're like, oh, I'm dealing with multiple traversals here. I don't remember exactly how, but somehow, in my mind, I've got a connection that says, "Enumerators are a solution for this. Let me dig into that." STEPHANIE: Yeah, especially as an alternative to where I would normally reach for something...a more kind of common enumerable method. Because I definitely know that feeling of like, oh, like, I wish it could just, like, do this a little bit differently, you know. And it turns out that, you know, something like that probably exists already. I just needed to know what it was [laughs]. JOËL: On that theme of I wish that I could have something that behaved just a little bit more...like, I'm doing something slightly weird, and I wish they would behave more, like, just plain Ruby does normally with my, like, collections I'm familiar with. I'm going to pitch a talk that I gave at RubyConf Mini called "Teaching Ruby to Count." Some of these mental models actually showed up there. But the whole idea is like, oh, if you're bringing in sort of more custom objects and all of that, how can you just tweak them a little bit so that they're just as joyful to use and interact with as arrays, and numbers, and ranges? And they just sort of fit into that beauty of Ruby that we get out of the box. STEPHANIE: Awesome. On that note, shall we wrap up? JOËL: Let's wrap up. STEPHANIE: Show notes for this episode can be found at bikeshed.fm.JOËL: This show has been produced and edited by Mandy Moore.STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter. STEPHANIE: Or reach both of us at [email protected] via email.JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week. ALL: Byeeeeeeee!!!!!!!AD:Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.