Episodes

  • You can find the code used in this video at the Clear Measure GitHub

    In this episode, Jeffrey shares how to lower developer onboarding costs.

    Situation

    Custom software is inherently expensive, but there are plenty of easy things your team can do to reduce those costs. I'm going to talk about one that helps tremendously when adding or replacing a developer on your software team: the one-click build.

    Mission

    Anyone overseeing a software team cares about quality, efficiency, and productivity. These are important because they translate directly to labor costs. Software teams are already expensive. What really hurts is when the team has suboptimal processes that balloon already high costs. When a new developer joins a team, many teams spend days or weeks onboarding before that developer can start contributing code changes. It doesn't have to be this way. You should expect a new developer to be able to contribute code changes on the first day.

    Execution

    Let's go through a scenario. A new developer is ramping up on the team and is eager to start making contributions. He wants to get the code up and running on his computer quickly. So, what's the first thing we do? We clone the repository from source control and then try to run the application. Invariably, this fails. Why? Well, first off, there are plenty of dependencies that the local developer's workstation doesn't have. Namely, the SQL Server database, and then probably several other dependencies that must be installed or set up in a certain way. The experienced members of the team have these steps memorized, but of course, this is super secret tribal knowledge to the newcomer.

    Maybe there is a documented list of steps for proper developer workstation setup. If the list is kept current, the new developer can follow the steps and get the application working. What invariably happens is that a more tenured member of your software team takes time out and helps the new developer get the software running on his workstation. You're always going to have the overhead of explaining what the application is, how it's put together, and the thought process behind it, but the time that is wasted is just the mechanics of getting the application running on a new workstation. This cost also exists when an existing team member is setting up a new computer. When setting up a new computer for the first time, the same setup has to happen all over again.

    It's all unnecessary. What you should expect from your team is that the new-computer or new-team-member experience is quick and automatic. The process should be two steps. First, clone the source code. Second, run a single command, and then the application works. The one-click build, as it is called, is a very simple script that checks for the needed dependencies on the local computer and installs them. If a dependency does not have an unattended install, the script can prompt the developer with a clear error message stating what software needs to be installed. But today, most developer dependencies can be installed automatically. The most basic of these is the SQL Server database that so many .NET applications connect to. Even small microservices are responsible for their piece of data and require some type of data store to be set up.
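    To make this concrete, here is a minimal sketch of the dependency-checking portion of such a script. Python is used purely for illustration; the tool names, container name, and compose file below are assumptions for the sake of the example, not part of any specific project.

```python
import shutil
import subprocess

# Hypothetical list of command-line tools this application needs locally.
REQUIRED_TOOLS = ["git", "dotnet", "docker"]

def missing_tools(required):
    """Return the subset of required tools not found on PATH."""
    return [tool for tool in required if shutil.which(tool) is None]

def one_click_build():
    missing = missing_tools(REQUIRED_TOOLS)
    if missing:
        # Prompt the developer with a clear error message instead of
        # failing later with a cryptic one.
        print("Missing dependencies: " + ", ".join(missing))
        print("Install them, then re-run this script.")
        return False
    # With dependencies present, stand up the local database and build.
    # (The service name and build command are assumptions for illustration.)
    subprocess.run(["docker", "compose", "up", "-d", "sqlserver"], check=True)
    subprocess.run(["dotnet", "build"], check=True)
    print("Build complete.")
    return True
```

    A new developer would then run this one script right after cloning, and either the application builds or the script says exactly what is missing.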

    Conclusion

    To conclude, expect new software team members to contribute code changes immediately. Equipping them with the right onboarding process is your key to this reality. And a one-click build is a tool no software team should be without.

    Thanks to Clear Measure for sponsoring this sample and episode of Programming with Palermo.

    This program is syndicated on many channels. To send a question or comment to the show, email [email protected]. We’d love to hear from you.

    To use the private and confidential Chaplain service, use the following:
    Gentleman: 512-619-6950
    Lady: 512-923-8178

  • In this episode, Jeffrey shares how to measure a software team.

    Situation

    Many software team leads and architects don't implement management practices that are standard in other parts of the business. Whether it's OKRs (Objectives and Key Results), EOS, Scaling Up's Scoreboard, or Kaplan's Balanced Scorecard, business measurement has long been a staple of ensuring that a part of a business is functioning well. But executives overseeing software teams often don't have a tool for measuring the effectiveness of a team or an entire software department.

    Mission

    Anyone overseeing a software group of any kind needs a way to measure the effectiveness of that group. Let's zoom in on a single software team and look at what must be measured at a minimum. Once the measures are identified, the team can report them weekly to the appropriate layer of management. And just like every other department, if the measures are aligned with business objectives, then the reports can be relied on to know whether those objectives are on track to be accomplished.

    Execution

    The tool you need in order to measure a software team is a good, old-fashioned scorecard. It's not high-tech. Every business methodology of the last three decades has employed some form of scorecard: measures that are tracked over time and given thresholds of acceptable values. We'll go over the Clear Measure Way scorecard template and how to use it.

    Mental Model

    From cash flow forecasts to sales pipelines and order shipping, most businesses are used to tracking numbers weekly. Some numbers are tracked monthly, but in software, weekly is better aligned with the normal flow of a software team. You can obtain the Clear Measure Way scorecard template for free from the Clear Measure website. It's a Microsoft Excel workbook. The first tab is the scorecard itself. The next tab contains instructions for how to use the scorecard. It comes prepopulated with the minimum suggested measures for a software team. As you become more comfortable with it, you'll undoubtedly add more measures. The researched DORA metrics are part of our minimum, so you'll find those on the scorecard.

    At the top of the scorecard template, you'll find a link to a tutorial article that explains how to use the scorecard and how the Excel template is put together.

    Each week, you'll have the team populate the numbers in the column that represents the current week. Over time, you'll probably choose to hide the columns for past weeks so that you can glance at the current week and the trailing 12 weeks, giving you a good view of a rolling quarter of performance.

    Team Alignment

    The measures on the scorecard are divided by the pillars of the Clear Measure Way but are preceded by a Team Alignment section. We suggest that the software team's scorecard include the top-level business measures that are managed by the executive overseeing the team. Without tracking the impact the software team is making in the business, it's easy to become misaligned with business objectives.

    If you don't already have these quarterly targets, I'd invite you to use the free Team Alignment Template, also provided by the Clear Measure Way. We have plenty of information about how to align a software team to become effective. Once it's clear what the team is trying to accomplish, add those few measures to the scorecard. If a measure has an acceptable threshold, add it to column F. This enables the automatic highlighting, coloring numbers green when they are within the threshold and red when they are outside it.

    Establishing quality

    The first pillar we suggest you measure is quality. This should be the first priority of any software team. Without it, a team cannot be effective. Without consistently high quality, the team will constantly be circling back to diagnose, analyze, and fix defects. This tends to accumulate, and teams without quality end up having little time left over to actually work on new features or valuable changes.

    We recommend a few essentials when it comes to measuring quality:

    - Defects Caught
    - Defects Escaped
    - Defects Repaired
    - Mean Time to Resolve

    Ultimately, you want zero defects to escape into production. But you also want to track the defects caught before production. Think about it: every time you move a card to the left on your work-tracking board, that signifies a problem that has to go backward in your process to be corrected. That's a defect. Track it.
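    A quick sketch of how these four measures might be computed from a work-tracking export. The record shape and field names below are assumptions for illustration; any real tool's export will differ.

```python
from datetime import datetime

# Hypothetical export of defect records from a work-tracking tool.
defects = [
    {"found_in": "test", "opened": "2024-03-04", "resolved": "2024-03-05"},
    {"found_in": "production", "opened": "2024-03-06", "resolved": "2024-03-09"},
    {"found_in": "test", "opened": "2024-03-07", "resolved": "2024-03-07"},
]

def day(s):
    return datetime.strptime(s, "%Y-%m-%d")

caught = sum(1 for d in defects if d["found_in"] != "production")
escaped = sum(1 for d in defects if d["found_in"] == "production")
repaired = sum(1 for d in defects if d["resolved"] is not None)
# Mean Time to Resolve, in days, averaged across repaired defects.
mttr_days = sum(
    (day(d["resolved"]) - day(d["opened"])).days
    for d in defects
    if d["resolved"] is not None
) / repaired
```

    With the sample data above, two defects were caught before production, one escaped, and the mean time to resolve works out to about 1.3 days.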

    Achieving stability

    The stability pillar looks at what is happening with software running in a production environment, serving customers. Two of the DORA metrics live here, as well as a couple of others. Our goal is to empower the team to deploy changes to production frequently, at any time during the week, all without business disruption. Additionally, we want to know that the software runs in a way that supports the users, again without business disruption. Software spends most of its useful life in a state of slow change, running day to day and yielding a return on the investment made in it. The slow changes are mostly those required so that the software can be properly maintained. The measures we recommend for this pillar, at a minimum, are:

    - Number of deployments
    - Major production issues
    - Minor production issues
    - MTTR (mean time to recovery)

    Regardless of the service desk system you use to track production issues, there are always more statuses than you need, so choose which statuses represent a business disruption and which do not. There will always be production issues from time to time. The key is to never let one become a business disruption.

    Increasing speed (productivity)

    Our last category on the scorecard is for the Speed pillar of the Clear Measure Way. This is where we track the productivity of the team. The throughput of new features and valuable software changes. It is appropriately last because quality and stability must take priority over it if we have any hope of speed that is acceptable to the business.

    These measures are very simple and follow the DORA model as well:

    - Items Delivered
    - Work in Process (WIP)

    Because we are tracking a new value every week, we can derive the cycle time by comparing the number of items in process with the number of items delivered. Kanban research has some good findings on WIP thresholds that tend to work. My favorite is to start with a value equal to the number of members of the software team. This allows each team member to work on one item at a time. Then, as confidence grows, you can increase the threshold, verifying that the items delivered each week are increasing and not decreasing.
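    The relationship between these two numbers is Little's Law from queueing theory: average cycle time equals average WIP divided by average throughput. A quick sketch, with the numbers invented purely for illustration:

```python
# Little's Law: average cycle time = average WIP / average throughput.
# The numbers below are invented for illustration.
wip = 6                 # items in process (say, one per team member)
delivered_per_week = 4  # items delivered each week (throughput)

cycle_time_weeks = wip / delivered_per_week
print(f"Average cycle time: {cycle_time_weeks:.1f} weeks per item")
```

    With those sample numbers, the average item takes 1.5 weeks from start to delivery, which is why capping WIP at the team size keeps cycle times short.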

    When it comes to measuring the throughput or speed of your software team, those two are typically sufficient. But as you go along, your team may want to measure additional items if you find them valuable.

    Mechanics of measurement

    One often-cited reason for software teams not reporting up to executives with a scorecard is the administrative time it takes to compile the numbers, put together the report, and answer the questions that invariably come back down. But any department in any business could give those same reasons.

    Forty years ago, Fred Brooks wrote an essay in his Mythical Man-Month book called "The Surgical Team". In this essay, he lays out a framework for the ideal software team structure. Effective large teams end up with a network made up of many of these team units. Large teams typically aren't effective without subdivision into a structure similar to Mr. Brooks's structure. This is probably where the notion of "Feature Teams" came from, but I digress. In this essay, Mr. Brooks discusses a team secretary. This role is responsible for all the recordkeeping, logistics, and outside communication for the team, much like what is necessary for a surgical team in an operating room. Surgeons need to stay focused on the patient, so there is a need for someone to enable them to do just that.

    Each team should have a non-engineer responsible for administrative excellence for the team. Without this role, we frequently see teams that underperform not because of a lack of engineering prowess, but because of completely non-technical administrative misses. In short, managing a scorecard is an administrative task, so it should be done by someone strong in that area, even if an engineer is asked for a particular number.

    Conclusion

    To conclude, every effective software team needs a scorecard. The scorecard is the basis for a periodic report to a company's executive team. It answers so many questions, such as "How much can my team deliver?" Without a scorecard, all we know is how many hours the team works. It doesn't do a company much good to have a team that works 100-hour weeks if the production system is brittle and new changes take months to implement. A scorecard tells us how our team is doing now, and it tracks our progress as we implement the principles and practices in the Clear Measure Way on our journey to a fully effective and high-performing software team.

    Thanks to Clear Measure for sponsoring this sample and episode of Programming with Palermo.

    This program is syndicated on many channels. To send a question or comment to the show, email [email protected]. We’d love to hear from you.

    To use the private and confidential Chaplain service, use the following:
    Gentleman: 512-619-6950
    Lady: 512-923-8178


  • In this episode, Jeffrey shares how an executive oversees a software team.

    Situation

    Our industry struggles mightily with failed software projects. On average, half of all projects still fail. Failure is defined as the executive who authorized the budget wishing he hadn't. The project is so over budget and so far behind schedule that the company would be better off having never started it. Even in the middle of these projects, executives can feel powerless to abort for fear of sunk costs. And without knowing the right questions to ask or the right reports to demand, the executive in charge doesn't feel in charge at all. He's left choosing between trusting the team still longer or the nuclear option of scrapping the entire thing.

    Mission

    Right now, if you are an executive overseeing a software group, I want to equip you with the tools to do that well. If you work on a software team, use this video to give your software executive the information he needs to know the project is on track, or the insight to know what the team needs to do a good job.

    From here on out, though, I'll call you the software executive. Even if you've never programmed any code, you are running a software team. Their success will steer your future career, so this is important. Don't keep going on faith. Don't proceed merely trusting that someone else reporting to you knows how to do your oversight job for you. Lean in. I'll give you the questions to ask, the tools to use, and the practices to deploy so that you can safely guide your software project to success. And most importantly, if your current software project is veering toward failure, I'm going to empower you to stop the bleeding and get it back on track.

    Execution

    Before diving into the guidance, I want to paint a mental model for you. Think of every other department in the company. Think of every group. Think of every team branch on the org chart. Each one of them is responsible for delivering some output valuable to the business. And each of these teams also needs some inputs in order to deliver those outputs. And if the outputs are not delivered, the team's leader is typically replaced. And the leaders who excel are the ones that can set up the team members for success.

    Mental Model

    Picture a factory. It is arranged well and operates efficiently every day in a safe manner. The assembly line flows at a good speed, with incoming materials delivered at the right cadence to keep it going. Quality issues are prevented or detected very early. Hourly and daily throughput measures are tallied and reported up the management chain. Quality and throughput measures are paired with acceptable thresholds and established as a standard, with better numbers as stretch targets. Then, the executive in charge ensures that the factory or assembly line is organized in a way where each team member understands the job and what activities it will take to meet the targets.

    What we don't do is declare a building to be a manufacturing plant, ask a team to come to work inside it, and then come back to check on things a month later. The people we staff on the team are typically not the same people needed to design the process for how the team should work. And Scrum has done the industry a disservice by spreading the notion of self-organizing teams. Even certified ScrumMasters are trained to ask the team what they want to do and then work to remove issues blocking them. This isn't leadership. Only when a team is working in an efficient manner can the lower-level details be turned over for self-organization. An appropriate leader (you) is always necessary to put the overall structure in place for the team so that real, measurable throughput can build momentum.

    I started out with a factory and assembly line analogy. And many knowledge workers will rightfully object that the nature of the work is different. And it is. Earlier in my career, I was one of the self-organization promoters, and I was banging the drum about knowledge work being inestimable or unmeasurable. But speaking for myself, what I liked most about that message was that it gave me space to dive into the work without having to report up as much. It gave me more space as a programmer. But what it didn't produce was less risk for the executive who authorized the project budget in the first place.

    This challenge exists in all fields of knowledge work. Managerial accountants and CPAs also have tough problems that don't have rote solutions; the rote solutions have been automated away by good accounting software. But if your CPA takes forever to figure something out and then bills you twice what you budgeted, you still have a problem. Sales is another area that has some similarities with the "magic" of software development. You want a certain pace of sales, and the staff just wants to get to work. But seasoned sales executives know that without a good sales team process, closed sales won't happen. Even enterprise sales that take 3-6 months or longer don't just ride on the "trust me" message of the sales rep. Good sales executives put processes in place with measures: number of leads contacted, number of meetings held, number of emails, phone calls, and networking events.

    My goal in this introduction is to suggest that we dispense with any notion that software is too complex to be managed like other departments in the business. I've been managing programmers for 17 years. All we have to do is lift the conversation out of the technical jargon, and we can get to a place of business language where all the executive tools apply. Whether you like to use OKRs, EOS L10 meetings with a scorecard, or just regular weekly metrics, you can apply the oversight methods of your other teams to your software team. Let's get into it.

    Team Alignment

    Before we discuss software-specific issues, let's apply what we already know about team formation and team alignment. If any team is going to be high-performing, it has to be aligned and going in the right direction. The old model of forming-storming-norming-performing applies just as well to the software team. And the Clear Measure Way's Team Alignment Template (TAT) provides a form to document the team's alignment. Just like other parts of the company, without consistently reinforcing the vision for the project and the business strategy that caused a software project to be commissioned, a team will stray. It's human nature. It has nothing to do with software. And regardless of what information is chosen, the team must send a periodic report to you, the software executive. After all, you are giving a report to your executive team or board of directors. And if you have no report from the team, then it's hard to do your briefing. So you need some form of a scorecard. The Clear Measure Way curriculum also includes a Software Team Scorecard Template you can use. We suggest the minimum set of measures to report. As time goes on, you'll want to add more.

    Team Skills

    Just like any other team in the business, if your software team doesn't have the skills needed to execute a particular project, you won't succeed. But if you haven't cataloged the required skills or taken an inventory of current skills, you don't know. One of the peculiar traits of many software developers is the inventor personality. If you ask them "Can you do ___?", they will answer "Yes, I can do that," even when they have never done it before. After all, Orville and Wilbur Wright said that they could make a flying machine. It turns out they did, but that process was invention, not implementation. To inventory your skills, you need to know what your team members have done before, not what they believe they can learn to do. If you have a smartphone app project in front of you but no one who has ever put a smartphone app into production, then you are missing a skill.

    This is just one example, but you can see again that any department in your business goes through the same skills planning. If your accounting department doesn't have anyone who has ever done inventory accounting for large warehouses, and you intend to build a warehouse, you would need to recognize this and augment the accounting team.

    There is no such thing as a "full-stack developer." Oh, you'll find it on the job boards, but "full stack" means very different things to different people. It depends on the technology stack. So if someone places "full-stack developer" on their resume, you have to look at the projects they have done, which constrains their definition of "stack." In addition, some skills represent answers to strategic risks. Take security. Security breaches can tank entire lines of business. This is not just another technical skill; it's a department competency. So I encourage you to get specific about needed skills and current skills so that you understand the skills you actually have and the ones that are lacking. Then you can build a training and staffing plan for your project. Chances are some of your existing people can do some training and add some skills. Other skills will need to be sourced from the outside, either temporarily or with a permanent hire.

    Establishing quality

    We all want our team to be able to move fast and deliver at a rapid pace. But from an oversight perspective, demand a quality output first, at whatever pace the team can deliver. Then measure the pace at which they deliver at that level of quality. Think of when you were learning to type. The measure of typing speed is the number of words per minute with some number of errors. You know that 100 WPM with 100 errors doesn't do you any good, because that doesn't represent 100 typed words. It represents 100 misspelled words that have to be fixed. You want 100 WPM with 0 errors.

    Capers Jones, in his writing about software metrics, notes that teams who prioritize productivity and treat quality as something to balance against it end up suffering from poor quality. Then, with bugs mounting, more and more of the software team's capacity is used to fix them. With less of the team's capacity going to delivering features, productivity slows, creating more pressure to re-establish productivity. The team members, under more pressure to perform, take more shortcuts in order to "get things done," but this just yields more bugs, which take more team capacity to tackle. With only a small fraction of the team's capacity dedicated to new features, overall productivity tanks. Over time, this causes some teams to pitch a new plan to management: "We need to modernize this system," which is code for "fixing this is more effort than starting over from scratch." That is the equivalent of waving the white flag and surrendering. No army that surrenders can later claim victory.

    As the software executive, this is where your leadership comes in. Sequence the establishment of a quality standard first, before challenging the team to increase the pace of delivery. Measure the number of bugs caught before production, and the number of bugs caught by users in production. Measure how long it takes to fix each bug. All of the modern work-tracking tools will do this well for you. This is the easy part. Your leadership is important here because you are establishing an important principle for your team to abide by. That principle is that quality should be prioritized over the speed of delivery. Because you know that adopting a speed-over-quality strategy yields neither speed nor quality.

    In weekly team meetings, which every team should have, ask the same question over and over. "Tell me about the bug that escaped into production. What are we changing so that kind of bug can never get to our users again?" Their answer will be different every time, but your question will be the same. Ask for a tour of the code that caused the bug. If you can't understand the explanation or the code, then you've found a quality hot spot that you'll want to ask more questions about. Don't believe the lie that "the code is too complex for you to understand." After all, you wouldn't accept that excuse from an electrician or any other trade. After all, the purpose of the software is to simplify a domain that has higher complexity without the software.

    In any of the teams you oversee, you'll want to understand the engineering practices that are in place. Here are a few that every software team should be using:

    - Test-driven development
    - Continuous integration
    - Static analysis
    - Pull request checklists (a modern implementation of a formal change inspection)

    I expect the team to have other practices in place as well in order to ensure that quality is kept to a high bar. Without these, your team will struggle unnecessarily to keep quality high on a multi-developer team.

    Achieving stability

    Once a team has the practices in place that enable code to be delivered free from defects (bugs), the next priority is to get it into a production environment in a stable fashion. Chances are that you don't just have one new piece of software; you have existing pieces of software in production that have stability issues from time to time. Stability issues can have one or more of the following symptoms:

    - Sluggishness
    - Outages/goes offline
    - Error messages or frozen screens
    - Abnormal behavior/bugs that can't be reproduced by engineers

    When users report any of these symptoms, you have a production issue. Having good language around these symptoms gives you clarity in your oversight duties. You'll want to make sure the appropriate stability measures are in place to track the stability of your software as it runs in production.

    Sometimes, teams can be gun-shy about production deployments. They might advocate for monthly deployments or after-hours deployment events with many hands on deck. This is technically unnecessary but commonly born from a previous unpleasant experience making changes to a production environment. After a deployment goes bad, developers can become hesitant, wary, and distrustful of the process because they consider it dangerous. But a large inventory of undeployed software is not only a large investment that isn't generating a return, but it is also a growing risk of unproven system changes. All departments that manage throughput understand the power of limiting work-in-process (WIP). Infrequent deployments queue up far too many changes waiting for a stressful, error-prone deployment event.

    Ultimately, your two goals for achieving stability are:

    - Prevent production issues
    - Minimize undeployed software

    You can measure these on the team's scorecard by tracking weekly metrics:

    - Number of deployments for the week
    - Number of production issues for the week (separated by severity)
    - MTTR (mean time to issue recovery/resolution)
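    Here is a minimal sketch of how a week's scorecard row might be tallied from deployment and issue logs. The record shapes and field names are assumptions for illustration only; a real support desk or DevOps tool would supply the data.

```python
# Hypothetical logs for one week.
deployments = ["2024-03-04", "2024-03-06", "2024-03-08"]
issues = [
    {"severity": "major", "opened_hour": 9,  "recovered_hour": 10},
    {"severity": "minor", "opened_hour": 14, "recovered_hour": 15},
    {"severity": "minor", "opened_hour": 16, "recovered_hour": 20},
]

# One row of the weekly scorecard, separated by severity.
weekly_row = {
    "deployments": len(deployments),
    "major_issues": sum(1 for i in issues if i["severity"] == "major"),
    "minor_issues": sum(1 for i in issues if i["severity"] == "minor"),
    # MTTR: mean hours from the issue being opened to service recovery.
    "mttr_hours": sum(i["recovered_hour"] - i["opened_hour"] for i in issues)
    / len(issues),
}
```

    Each week, a row like this goes into the scorecard column for the current week, with thresholds deciding the green or red highlighting.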

    As with overseeing bugs, as mentioned above, you can ask your team the same questions to drive the right behaviors:

    - "What features/changes are tested and ready for production?"
    - "What was the root cause of that production issue, and what are we changing so that type of issue can never happen again?"
    - "What should we strengthen about our environment so that we are able to resolve issues faster next time?"

    As with quality, there is a minimum set of practices that every team should employ if you expect to run a stable software system in a production environment:

    - Automated DevOps from day 1 of a new project (eliminate manual, monthly deployments)
    - Small releases
    - Runtime automated health checks (built-in self-diagnostics)
    - Explicit secrets management
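    A runtime health check can be surprisingly simple. Below is a minimal sketch of a built-in self-diagnostic; the two checks are placeholders standing in for real probes (a database ping, a free-disk-space threshold, and so on), and the function names are invented for this example.

```python
def check_database():
    # Placeholder: a real check would run a trivial query (e.g. SELECT 1)
    # against the application's database and return whether it succeeded.
    return True

def check_disk_space():
    # Placeholder: a real check would compare free space to a threshold.
    return True

# Registry of named checks; real systems add one per critical dependency.
CHECKS = {"database": check_database, "disk_space": check_disk_space}

def health_report():
    """Run every registered check and report the overall status."""
    results = {name: check() for name, check in CHECKS.items()}
    status = "healthy" if all(results.values()) else "unhealthy"
    return {"status": status, "checks": results}
```

    A monitoring tool or load balancer can then poll an endpoint that returns this report and raise an alert the moment any single check fails.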

    When production issues crop up, and they will from time to time, the following practices enable your team to diagnose them more quickly and come to a resolution:

    - Centralized OpenTelemetry logging, metrics, and traces
    - An APM (Application Performance Management) tool with a shared operations dashboard
    - A formal support desk tool with ticket tracking, anomaly alerts, and emergency alarms

    If some of this sounded familiar, it's because many of these are the software parallels of practices used to operate any other factory or assembly line. In a factory, if part of the production line experiences an issue, there's an obvious alert, with staff springing into action to resolve it locally before it becomes a factory outage. For more serious problems, emergency alarms stop the line and call everyone's attention to rally around the problem and get the production line back up and functioning. While the tools are different, the way of thinking is the same. Here are some questions to ask your team in order to gain insight into how these may or may not be implemented:

    - "Would you please give me a tour of our logs and telemetry that allow me to see how users are using our software?"
    - "How do we currently train a new team member to be on call for production support, and what dashboards should they be looking at to ensure the software is functioning in a stable fashion?"
    - "What events currently trigger alerts, and what events trigger alarms? Who receives alerts, and how? How do we all receive alarms?"

    Increasing speed (productivity)

    Let's finally turn our attention to increasing speed. That was quite a bit of information to digest before discussing productivity, but for good reason. With quality problems, our team is diverted to diagnosing and fixing bugs rather than working on new changes and features. With stability problems, our team is yet again distracted from working on new changes because the production environment rightfully takes priority. Even if we staff dedicated systems engineers to support the production environment, they typically can only operate a stable system. For high-scale systems, it's normal to constantly adjust the number of servers, cloud CPUs, or Kubernetes pods based on load, and it's normal to watch queue lengths as data flows to be sure it's being processed within established SLAs. But when errors are happening and the system is not behaving as the systems engineers have been trained to expect, those issues are escalated to the software development team. And that is where development capacity goes.

    The power of prioritizing quality and stability first is that the result is 100% of your team's capacity actually going to the new work set before it. With this achieved, we can look at what then causes a team to be able to move fast when it is actually able to work on new software changes.

    From an oversight perspective, I'd like to paint a picture of how to think about your team's productivity, throughput, or pace. Let's take an analogy of the Baja 1000 desert race. To do well in this race, you need to finish. That means you need to pick a pace that will not cause your driver or machine to expire. Then, you need to navigate well. If your drivers get lost or go off course, they drive many more miles than necessary. Picking a good course and staying on that course shortens the miles necessary to finish the race. Even so, an obstacle may emerge that needs a change of course because of new information learned. Finally, the drivers must drive FAST along the chosen route.

    Let's apply this analogy to a software team. The Team Alignment Template has given us a tool to ensure everyone is clear on where we are going, that is, what business outcome we want to achieve. This is the finish line. Feature A or Feature B is not a finish line. Any individual feature is akin to a particular route on the race course. We are choosing Feature A because we reasonably believe that changing the software in this way will progress us toward our objective. But as we move along, we need to watch out for new information that would help us learn that Feature A might not be the progress toward our objective that we hoped it would be.

    Let's pause now and tackle a fallacy that's been promoted heavily in our industry: the fallacy of the "Product Owner". The Scrum curriculum heavily touted the Product Owner as the role that knew the customer so well that he was to prioritize the backlog with items. Because the Product Owner had prioritized them, they were deemed to be the right software changes to make. In practice, so few teams have been able to find a person with customer knowledge that good that the role of Product Owner hasn't worked. The 2018 State of DevOps Report by Puppet Labs shared a study showing that teams using Product Owners had a batting average of about .333. In other words, the Product Owner was right about 1/3 of the time: when those changes were put into production, they yielded the desired outcome. What's interesting is that another 1/3 of the changes put into production yielded no progress toward the business objective. And the final 1/3 of the changes actually hurt the performance of the software and moved the business away from its objective. These changes had to be hurriedly backed out.

    In your oversight role, don't rely on anyone to be so prescient that you trust them implicitly to decide what changes to prioritize. Instead, think about it like any other department in the company. Measure the result and adjust based on the actual data you collect. This is another reason for prioritizing stability ahead of moving faster. The same practices that achieve stability yield a capability for collecting data used in business analysis for what features yield a desired result.

    Now that we have a good mental model for how to increase speed towards a goal, we need to measure the current actual speed. You'll want to add more measures to the team's scorecard. Add weekly numbers that represent progress toward the business objective of the software. If the software is related to e-commerce, you may add daily revenue. If it's an inventory system, you may add numbers that are reported on executive team scorecards. This gives your software team ownership of targets that other executives see. And they can participate more fully in improving those business measures. When it comes to software-specific measures for the scorecard, I suggest these as a minimum. - Desired # of issues delivered per week - Current # of issues delivered this week - Average time an issue spends in each status
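    As an illustrative sketch (Python, with made-up completion dates; your work tracker's export format will differ), the "Current # of issues delivered this week" measure can be computed by grouping delivered issues by calendar week:

    ```python
    from collections import Counter
    from datetime import date

    # Hypothetical completion dates exported from the work tracker
    delivered = [
        date(2023, 1, 2), date(2023, 1, 4),
        date(2023, 1, 9), date(2023, 1, 11), date(2023, 1, 12),
    ]

    def delivered_per_week(dates):
        """Count issues delivered per ISO week for the team scorecard."""
        weeks = Counter(d.isocalendar()[:2] for d in dates)
        return {f"{year}-W{week:02d}": n for (year, week), n in sorted(weeks.items())}

    print(delivered_per_week(delivered))
    # {'2023-W01': 2, '2023-W02': 3}
    ```

    Once a month or so of actuals has accumulated, the weekly counts give you a defensible starting target for the "Desired # of issues delivered per week" line.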

    If you are just starting this type of measurement, you might not know what target to set for the Desired # of issues delivered per week. Go ahead and defer that until you've measured actuals for at least a month. An important principle on which these measures are dependent is commonality. The shipping industry is able to deliver any size or shape or object to a destination. That is because of packaging standards. There are envelopes, small boxes, long tubes, pallets, and even shipping containers. In software, no two features are the same. In previous decades, and still today, teams have attempted to use methods of estimation to get to numbers that could be relied on. No method of estimation today has reached that goal. If our work tracking system has some features in it that are 10x or 5x or 2x the size of other features, it's hard to get the team into a flow of consistent delivery. Again, other departments that measure throughput know that the work needs to be made common-sized in order to empower the team to shine and deliver at an increased rate.

    In software, the practice to embrace is Feature Decomposition. In project management, there is a practice called the Work Breakdown Structure. Breaking units of work down into smaller tasks is used widely elsewhere to make the work more approachable as well as manageable. Feature Decomposition is the Work Breakdown Structure of software. Guide the team to break down software changes into tasks that can each be reasonably completed in one day of effort. For some features, you will challenge the intended design in order to accomplish this. The result will be development tasks that are all roughly one day of work in size. And with a common-sized unit of work, you can measure throughput. But measuring throughput isn't the only reason for doing this. Large software changes that are not broken down are typically where other problems hide: faulty analysis, undefined architecture, incorrect assumptions, and undiscovered unknowns. Breaking down development work ends up exposing these hidden problems, further increasing the quality of what is delivered. Breaking down the work also forces more design work upstream of coding, because design decisions have to be made before the initial code. Starting the code on a feature that is too large mixes missing analysis and design conversations right into the middle of unfinished code: the developer reaches a point where he finds an unanswered question, coding has to stop, and an impromptu meeting has to happen because coding on that feature is now blocked. You can safely assume that a feature or change that is expected to take several days to complete will not take several days. It will take 2x or 5x or 10x longer than that. The several-days estimate is an estimate of no confidence. Only when you have an estimate of one day can you be confident that all needed work has been identified and understood well.
In this process, you'll also see more detailed design diagrams since more knowledge will be flowing throughout the team. As you increase your team's delivery speed, here are some minimum practices to expect. - Kanban-style work tracking (a work board where items move from left to right) - Feature decomposition - Design diagrams - Refactoring
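    The one-day rule above is mechanical enough to automate. As a trivial sketch (Python, with hypothetical backlog items and estimates), any item estimated above one day is a candidate for further decomposition:

    ```python
    # Hypothetical backlog items with day estimates from the work tracker
    backlog = [("Login page", 1), ("Reporting module", 5), ("Fix typo", 1)]

    def needs_decomposition(items, max_days=1):
        """Flag items estimated above one day of effort for breakdown."""
        return [name for name, days in items if days > max_days]

    print(needs_decomposition(backlog))
    # ['Reporting module']
    ```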

    The last item in this list, Refactoring, is a mature practice. You can find very good books on this practice. It recognizes the reality that since we are going to learn from how our users use the software after the software has been built, we need to expect to make changes based on that learning. Refactoring is our method for making those changes. We are going to learn that a feature should behave and be designed differently. Refactoring is a means by which we change the software so that the feature becomes designed in a new way as if we had been designing it in that manner from the start. Here is a suggested question to ask when you learn something new from users in production. - "Since we need to change Feature A, what parts need to change so that the outcome is as if we intended to design it this way from the start?"

    The lack of refactoring will compound over time into a code base that is hard to understand and hard to follow. Refactoring ensures that the code is always easy to understand at a glance.

    As you measure your team each week, look for the current # of issues delivered each week to increase. You'll also notice bottlenecks to increased speed because you are measuring the average time an item spends in each of the statuses on your work tracking board. When bottlenecks are discovered, you resolve them. The lack of tracking time per status is what allowed bottlenecks to remain hidden.
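    A rough sketch of that bottleneck measure (Python, with hypothetical time-in-status samples; real numbers would come from your work board's history):

    ```python
    from collections import defaultdict

    # Hypothetical (status, days_in_status) samples from the work board history
    samples = [
        ("In Progress", 1), ("In Progress", 2),
        ("Code Review", 4), ("Code Review", 5),
        ("Ready to Test", 1),
    ]

    def slowest_status(rows):
        """Return the status with the highest average dwell time -- the likely bottleneck."""
        totals = defaultdict(list)
        for status, days in rows:
            totals[status].append(days)
        averages = {s: sum(d) / len(d) for s, d in totals.items()}
        return max(averages, key=averages.get), averages

    bottleneck, averages = slowest_status(samples)
    print(bottleneck)  # Code Review
    ```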

    With all this, you have a very strong oversight position for your software team. It will be important to keep quality, stability, and speed in the proper order. Schedule pressures can tempt teams to forget this order and allow quality or stability to slip by tolerating shortcuts. But it doesn't take long for shortcuts to accumulate, resulting again in poor delivery speed. Whenever a bug makes it out to users, or whenever a production issue happens, reinforce to the team the importance of taking action so that this kind of bug or issue can never happen again. It's a logical journey. For a bug or issue to never happen again, it's not just about the team trying harder or gritting their teeth tighter. It's about real root cause analysis and fixing the problem at the root so that it is impossible for the bug or production issue to happen in that way again.

    Leading and strengthening the team

    While you are enjoying your high-performing team, stay vigilant. Recognize when you need to go back and redo some parts of the process: when you add a person to the team, and when a person leaves the team. Back up to the skills assessment and inventory. Review the Team Alignment Template. Allow the now-changed team to form again, storm again, and norm again, so they can perform again. Keep them equipped. Make sure every member of your team has an avenue for ongoing professional development. Keep measuring the team. After all, "A" players look forward to the report card. Create an environment where "A" players can celebrate. Your less-than-"A" players will self-select out. When you identify a "B" player, craft a professional development plan with them so that they become an "A" player. Your "A" players want to work with other "A" players, and they want to work in an area where it is normal to succeed. The working environment that you have crafted for them will empower them to succeed, and they won't want to work anywhere else. You will have created a team with longevity that has established quality, achieved stability, and is increasing its speed of delivery each day.

    Conclusion

    You have what it takes to oversee your team as a software executive. You can do it. By implementing these principles, leading your team with a scorecard of relevant measures, and putting into place these team practices, you will have a team that is an asset to your business. I know you can do it. And we are here to guide you. May God grant you wisdom as you lead your team.

    Thanks to Clear Measure for sponsoring this sample and episode of Programming with Palermo.

    This program is syndicated on many channels. To send a question or comment to the show, email [email protected]. We’d love to hear from you.

    To use the private and confidential Chaplain service, use the following:
    Gentleman: 512-619-6950
    Lady: 512-923-8178

  • In this episode, Jeffrey discusses the architecture of GPT-3, the technology behind ChatGPT, and how you should think about this technology in 2023.

    Situation- ChatGPT is getting a lot of press because it's the first freely available implementation of GPT-3 that has captured the imagination of the masses. Many are pointing out the awesome and surprising capabilities it has while others are quick to point out when it provides answers that are flat-out wrong, backward, or immoral.

    Mission- Today I want to raise up the conversation a bit. I want to go beyond the chatbot that has received so much press and look at the GPT-3 technology and analyze it from an architectural perspective. It's important that we understand the technology and how we might want to use it as an architectural element of our own software systems.

    Execution Introduction- GPT-3, or Generative Pretrained Transformer 3, is the latest language generation AI model developed by OpenAI. It is one of the largest AI models, with 175 billion parameters, and it has been trained on a massive amount of text data. GPT-3 can generate human-like text in a variety of styles and formats, making it a powerful tool for natural language processing (NLP) tasks such as text completion, text summarization, and machine translation.

    Architecture of GPT-3

    The GPT-3 architecture is based on the Transformer network, which was introduced in 2017 by Vaswani et al. in their paper “Attention is All You Need”. The Transformer network is a type of neural network that is well-suited for NLP tasks due to its ability to process sequences of variable length.

    The GPT-3 model consists of multiple layers, each containing attention and feed-forward neural networks. The attention mechanism allows the model to focus on different parts of the input text, which is useful for understanding context and generating text that is coherent and relevant to the input.

    The feed-forward neural network is responsible for processing the information from the attention mechanism and generating the output. The output of one layer is used as the input to the next layer, allowing the model to build on its understanding of the input text and generate more complex and sophisticated text.
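    As a rough illustration only (toy dimensions, plain Python, not the actual GPT-3 implementation), the scaled dot-product attention at the heart of each layer computes softmax(QK^T / sqrt(d)) V, weighting the value vectors by how well each query matches each key:

    ```python
    import math

    def softmax(xs):
        """Numerically stable softmax over a list of scores."""
        exps = [math.exp(x - max(xs)) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def attention(Q, K, V):
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
        d = len(K[0])
        out = []
        for q in Q:
            # Similarity of this query against every key, scaled by sqrt(d)
            scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
            weights = softmax(scores)
            # Weighted blend of the value vectors
            out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
        return out

    # Two query vectors attending over three key/value pairs (toy numbers)
    Q = [[1.0, 0.0], [0.0, 1.0]]
    K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    V = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
    print(attention(Q, K, V))
    ```

    Real models run many such attention heads in parallel, per layer, over learned projections of the token embeddings.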

    Using GPT-3 in a C# Application

    To use GPT-3 in a C# application, you will need to access the OpenAI API, which provides access to the GPT-3 model. You will need to create an account with OpenAI, and then obtain an API key to use the service.

    Once you have access to the API, you can use it to generate text by sending a prompt, or starting text, to the API. The API will then generate text based on the input, and return the output to your application.

    To use the API in C#, you can use the HttpClient class to send a request to the API and receive the response. The following code demonstrates how to send a request to the API and retrieve the generated text:

    ```
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Threading.Tasks;

    namespace GPT3Example
    {
        class Program
        {
            static async Task Main(string[] args)
            {
                using (var client = new HttpClient())
                {
                    client.BaseAddress = new Uri("https://api.openai.com/v1/");

                    // The API key goes in a request header, not on the content
                    client.DefaultRequestHeaders.Authorization =
                        new AuthenticationHeaderValue("Bearer", "API_KEY");

                    var content = new StringContent(
                        "{\"prompt\":\"Write a blog post about the architecture of GPT-3\"," +
                        "\"model\":\"text-davinci-002\",\"temperature\":0.5}",
                        Encoding.UTF8, "application/json");

                    // The model is named in the request body, so post to the completions endpoint
                    var response = await client.PostAsync("completions", content);
                    if (response.IsSuccessStatusCode)
                    {
                        var responseContent = await response.Content.ReadAsStringAsync();
                        Console.WriteLine(responseContent);
                    }
                }
            }
        }
    }
    ```

    End of demo

    From the start of this explanation, the text was generated by chat.openai.com. It can be pretty impressive. But, at the same time, it's very shallow. GPT-3 is a machine learning model that has been trained with selected information up to 2021. Lots of information, but selected, nonetheless. Here is the actual ChatGPT page that generated this. Notice that it admits that it doesn't have information past 2021.

    Let's dig deeper, though, on what GPT-3 is and how it came to be. Let's look at the theory behind it so that we can see if we should use it as an architectural element in our own software. - Let's go back to 2017. Ashish Vaswani and 7 other contributors wrote a paper called "Attention Is All You Need". In it, they proposed a new network architecture for neural networks. Simplify that and think of a machine learning model. They created a method whose model could be trained in 3.5 days using eight GPUs and be ready for complete translation from one spoken language to another. They tested it using English-to-French and English-to-German. Vaswani and the other contributors were from Google Brain, four from Google Research, and one from the University of Toronto. - In 2018, four engineers from OpenAI wrote a paper entitled "Improving Language Understanding by Generative Pre-Training". They leaned on Vaswani's paper and dozens of others. They came up with a new method for Natural Language Processing (NLP). They describe the problem of training a model with raw text and unlabelled documents. That is, if a model is trained on all available information in the world, it's a mess. Culture divides the world, and all queries posed to an ML model are in the context of culture. We have geographic culture, national culture, religious culture, trade culture, and more. And existing models had to painstakingly label all data before it was fed into the model, or it got mixed in with everything else. Take users in different countries as a stark example. In the US, where 70% of the population claim Christian as their religion according to the latest 2020 survey, if users receive answers condemning Christianity or criticizing it, that would be a poor user experience. In Afghanistan, however, where it is illegal to be Christian, the users would have a poor user experience if the model returned answers showing Christianity in a positive light.
    So from an architectural perspective, it's important to understand what GPT-3 is. Remember, it stands for Generative Pretrained Transformer 3. Pretrained is key. There are now several dozen online services that have implemented GPT-3 and have trained a model. Text drafting and copyediting are already becoming popular. Video editing is growing. Understand that by taking a dependency on one of these services, you are relying on them to train a model for you. That alone is a lot of work and can save you a lot of time. But inquire about the body of data that has been fed into the model so that you can make sure your users receive the experience you want for them. I gave one example of cultural differences between countries. But for software geared for children, there is a mountain of information on the Internet that you don't want in the model if it's generating responses for kids. Keep that in mind. ChatGPT has had to have bias injected into it because bias seems to be a more human trait than a computer trait. Time Magazine did a write-up on how OpenAI invested in a project to label and filter data used to train the model. In short, it was a big filtering operation. There is a lot of filth on the net, so according to your own morality (another word for bias), that's a good thing. But I'm sure you will also find some areas where they inserted bias that you don't agree with. Again, it's all about training the model with labeled data that fits the culture of the users. Early users are circulating answers that seem fishy and serve as examples of the filtering project OpenAI commissioned. ChatGPT can draft blog posts and short statements as well. That's pretty cool. I'm Italian. My family immigrated from Sicily in 1910 to Texas, so I love this first example: "Write a tweet admiring Italians".
The response is "Italians are a true inspiration - their rich culture, stunning architecture, delicious cuisine, and effortless style make them a marvel to admire 🇮🇹 #AdmiringItalians"

    Wow, quite flattering. Then, you just go down the list and throw in some other races. The trainers of the ChatGPT model labelled data favorable to Italians, Asians, Hispanics, Indians, Blacks, and Whites. But it seemed to have a problem with that last one. So we can see that the model definitely has some different training there. Architecturally, you need to decide whether a 3rd-party model out there is a fit for your software or whether you need to train a model that fits your users' needs more specifically.

    Let's move on.

    OpenAI is very well capitalized, and I expect very interesting things from them. Microsoft announced a $1B investment in the company in 2019. With an investment like that, I would also expect OpenAI technology to be well integrated with Microsoft Azure and .NET development tools. Microsoft has been expanding Machine Learning capabilities for a long time, but GPT-3 is groundbreaking. You can train a transformer model of the kind described in Vaswani's paper in about 3.5 days with eight GPUs and be ready to start testing. For some of you, you'll just want to call the HTTP APIs of some of the GPT-3 services. For others, you'll want to implement and train your own model so that you can label the data that is being fed into it to guide responses.

    - Elephant in the room: Is GPT-3 going to replace me as a programmer? Short answer: No. I've been around long enough to have seen every decade have a story of "this technology will make programmers obsolete". It hasn't happened, and it's not going to happen. The same thing can be said about mechanics. Even if every automobile is converted to electric or hydrogen or whatever, we'll still need mechanics to fix them and perform maintenance on them. Things change, but they don't go away. Now, the developers of the 90s who considered themselves to be HTML programmers have had to change dramatically because HTML programmers had a short run. Now, HTML is just a small portion of the skillset, and CSS radically changed HTML programming. Then Bootstrap, Material, and the other CSS frameworks radically changed it again. So the tools and how we use them will keep changing, but the need for people to design, implement, operate, and maintain software will still be there. It's an exciting time to be a programmer. - Right now, even if your current software wouldn't benefit from the use of a GPT-3 model, you should add it to your toolbelt for you and your colleagues. For example, there are so many questions that we take to StackOverflow or a web search. Or perhaps your users of some analytics database need help with a query. Now you have a new tool to help you draft query syntax. - Summary: If you haven't looked into GPT-3, you'll want to. It's a big leap ahead in the field of Machine Learning. And its capabilities can be a component of your Artificial Intelligence solution, or just a part of an existing software system. I'd encourage you to read the research papers that describe it in more detail so you know how it's designed. After all, it's just software. You need to understand the capabilities of this new software component so you can choose how to use it to your benefit. There's nothing magical about it.
It's just software, just like every other library and API service you currently use to do something.

    I hope this has aided you in upleveling your understanding of GPT-3 and how to best use it in your own software.

    Attention Is All You Need
    Improving Language Understanding by Generative Pre-Training
    OpenAI API


  • In this episode, Jeffrey discusses why so many teams are not happy with the pace of software delivery.

    Situation Most software teams we see are not moving at the pace their companies would like. One of the Clear Measure Way tools is a self-assessment. It's easy to find on the Clear Measure website. One of the subjective questions included is "are you happy with the pace of delivery of your software team?". Most respondents are not able to answer YES. We're going to talk about that.

    Mission- Many businesses have decided to have internal software development teams. Companies that are tech companies have to. For others, it's a judgment call. Over the last 25 years, many non-technical companies have outsourced the creation of software. They lost a lot of money, didn't get what they thought they were going to get, and they have shifted to operating software engineering teams in-house. They still consider custom software to be strategic for them, but they want more control by hiring their own employees. But they are then frustrated that they don't actually have more control. They might have more visibility, but many are frustrated that having the in-house team doesn't actually increase the pace of delivery or solve every problem. The goal of this video is to go over the common categories of time suck that sap the capacity of software teams everywhere. My hope is that once you understand where all your team's time is going, you can make decisions to change that and redirect the effort toward the progress you want.

    Execution- There are five categories of work for a software team: - Working on new software - Diagnosing or fixing or reworking past work we thought was done - Diagnosing or fixing the software as it runs in a production environment - Administrative, non-software work - Time off
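    A minimal sketch (Python, with invented hours; real numbers would come from your team's time tracking) of measuring how the team's week actually splits across these five categories:

    ```python
    from collections import defaultdict

    # Hypothetical week of time entries: (hours, category)
    entries = [
        (14, "New software"),
        (10, "Rework"),
        (8,  "Production support"),
        (6,  "Administrative"),
        (2,  "Time off"),
    ]

    def capacity_split(rows):
        """Percentage of the team's week spent in each of the five categories."""
        totals = defaultdict(float)
        for hours, category in rows:
            totals[category] += hours
        grand = sum(totals.values())
        return {c: round(100 * h / grand) for c, h in totals.items()}

    print(capacity_split(entries))
    # {'New software': 35, 'Rework': 25, 'Production support': 20, 'Administrative': 15, 'Time off': 5}
    ```

    Seeing, say, only 35% of capacity going to new software is usually the wake-up call that starts the quality and stability work described below.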

    Working on new software

    This is where we want to maximize the effort. We want all of our team to be working on new software. This is where we build new features, make important changes, and enhance features so they work even better. This is all our internal and external customers ask for. They think our team spends all of its time in this category. In reality, some teams struggle to get time to work in this category. This category includes everything in the software development lifecycle. Talking about the vision for a new feature is part of this category. Doing architectural prototypes for options for new changes is in this category. Doing routine maintenance on our software is in this category. Yes, renewing a security certificate is in this category. It's expected, important work. It's work we can see and work we can forecast. This is the type of work that software engineers and architects sign up for. The other categories are work that the team has to do in order to get back to work in this category.

    Diagnosing or fixing or reworking past work we thought was done

    This is the first type of waste work. This is where we realize that something is broken. Something we thought was done is not actually done. Software that was supposed to work in a certain way doesn't. Something is missing. Or we are surprised that now that thousands of users use our software, our computing performance in some areas is terrible. We didn't count on some database tables growing to the size they have grown to. We are spending time figuring out why we have a problem. Then, once we have reproduced the problem, we are trying to redesign something in the software so that we fix the problem. We lament that we didn't catch this problem earlier. Or perhaps we just made a change and something else in a seemingly unrelated part of the software just broke, and we don't understand how they could possibly be related.

    Diagnosing or fixing the software as it runs in a production environment

    This category is all about production issues: not feature bugs or build breaks or bug reports from UAT testing, but all the time spent investigating an issue in production. A user submits a trouble ticket. Or a piece of our software system goes down and has to be restarted. Or a user has a question that was escalated to second-level support. Not only do we need to spend time figuring out whether it's just a user-education question that accidentally made it to the software team's queue, but we also have to spend time learning about the part of the issue that may be a genuine defect. Part of this category of work is properly equipping our own company's customer support team with a better knowledge base so that they can handle more basic customer issues; not all of them should be coming to the software team. Also part of this category are the alerts that the production environment sends to the ticket queue. And so is the time when executives or salesmen call for special support because they want to do a customer or investor demo and need some extra data pushed into a production or test environment. In general, this is the category for any time spent because the software is in production and being used by real people.

    Administrative, non-software work

    The administrative category is the easiest to describe because every department has it. Company picnics, staff meetings, department all-hands, potlucks, generalized training, checking email, installing software, rebooting computers because a bunch of updates forcibly installed themselves, and more. It's time spent at work but not related to the software team's work at all.

    Time off

    Time off will always be there and isn't a category for optimization. We just need to recognize that it exists and should be a part of our forecasts, especially around Christmas and other seasons that generally affect the working capacity of our team.

    The Clear Measure Way encourages us to sequence the establishment of quality, then the achievement of stability in production, and then a focus on increasing the speed of delivery. We have to play some defense before we can focus on offense. Once we are focused on speed, if we haven't established the right level of quality, and if we haven't achieved good stability in production, we will be on the losing end of the capacity equation. Our team's capacity will be constantly stolen away from us. It's the bed we make, and we have to sleep in it. The good news is that it's our bed. There are straightforward, known practices for establishing quality. Known practices for achieving stability. We just have to put them in place.

    Summary If your team hasn't been delivering at the pace you want, and you've struggled to describe why, start measuring these five categories. Then you'll find what's stealing your capacity. And once you know where you are, you can build your travel plan for going to where you want to be.

    Download the Team Alignment Template


  • In this episode, Jeffrey discusses how to align a software team for high performance. Recognizing that the team's architect is the leader and has a big job to do, a tool called the Team Alignment Template facilitates the documenting and teaching of the team's purpose, values, and other strategic decisions so that all engineers can work and pull in the same direction.

    Situation At the beginning of a project, when a new team is formed, or when the staffing of an existing software team changes, all team members need to align and get going in the same direction. Without intentionally achieving this, each team member will have a small or large difference in the vision for how to proceed. It's the job of the architect to make clear the path and to align all team members on that path in order to establish quality, achieve stability, and increase speed of delivery.

    Mission The goal of the Team Alignment Template is to:

    Help the architect clearly communicate the purpose, values, and strategy of the software team, and align the software team in the same direction, working toward the same goals, and thinking in the same manner.

    Execution The Team Alignment Template is a simple, 1-page document. After filling in the blanks, gather your team together and discuss the contents. Invariably, there will be discussion around some items in order to gain understanding. Any time the staffing of the team changes, and at monthly or quarterly boundaries, review the Team Alignment Template again.

    Summary Without intentionally aligning a software team, each team member will have a small or large difference in how to proceed. This will result in reduced quality, stability risks, and missed opportunities for increasing speed of delivery.

    Download the Team Alignment Template

  • In this episode, Jeffrey discusses how to design new applications for automated DevOps. Automating the DevOps process from Day 1 is part of the "Achieving Stability" pillar of the Clear Measure Way.

    Situation Once a software project or new application gets going, the focus tends to be on features. And once code is being written but not being deployed frequently, the team starts to slow down right from the get-go. It might be tempting to think that you don't need devops automation just yet. But choosing not to put in a particular process is implicitly deciding to put in a manual process. The first bit of code you have will end up being manually built, manually tested, manually deployed, and manually monitored. Then the team will work on more code and more code, and bug reports will start flowing in, and there will "never be a good time" to put in the devops automation.

    Mission The purpose of this video is to show you how simple and clear automated devops can be and how straightforward it is to put it in at the beginning when you don't yet have any code. Then, you never have a point in time where you have to "stop the bus" and "stop delivering features" in order to catch up with technical infrastructure that would have been so much easier at the beginning. Because along the way, automated devops causes you to design features just a bit differently. And if features are designed without devops automation, you'll have to retrofit later.

    Execution At the beginning of a new application, you need to think about 7 different areas of your DevOps environment:

    Infrastructure
    Source control
    Private build
    Continuous integration build
    Release management
    Deployments
    Runtime observability

    What you see on the screen is a DevOps architecture poster from Clear Measure. Lots of companies have used it. It clearly lays out the architecture of your DevOps automation.

    The poster can be requested from www.clearmeasure.com

    Summary Companies that have adopted the measures and practices in the Clear Measure Way know the importance of DevOps automation and putting it in place on Day 1 of any new software application. I hope this helps you in your journey to establish quality, achieve stability, and increase your speed of delivery.

    Sample repository

  • In this episode, Jeffrey discusses how to empower software teams using the Clear Measure Way.

    Context

    For engineering teams serious about delivering software

    Achieving rare success

    Resolve to be in the rare 17% of projects that succeed
    The team rises and falls on leadership
    Work for clear understanding & wisdom
    Measure actuals & progress

    Establish quality

    Prioritize quality over speed
    Prevent defects (escaped defects -> process failure)
    Always working (first do no harm)

    Achieve stability

    Minimize undeployed software
    Prevent production issues
    Correct production issues quickly

    Increase speed

    Be honest about where you are & where you want to go
    Assess the team's speed capacity
    Shift bottlenecks upstream
    Break down work into issues the size of 1 man-day of effort
    Simplify: minimize the work needed to deliver the feature

    Lead your team

    Form your team
    Equip your team
    Design the environment your team works in
    Set vision, targets, and process for your team
    Establish priorities and show them the right direction
    Measure your team. "A" players want a report card.
    Monitor and adjust your team
    Strengthen & grow your team

    Exhortation

    You have what it takes to lead your team. You can do it. By implementing these principles and leading your team with a scorecard of relevant measures, you will empower your team to establish quality, achieve stability, and increase speed. I know you can do it. And we are here to guide you. May God grant you wisdom as you lead your team.

    Sample repository

  • In this episode, Jeffrey discusses the suggested engineering practices for achieving stability. After establishing quality, achieving stability is the next pillar in the Clear Measure Way along the path to increasing speed. Without stability, the software team will always be devoting some portion of its capacity to diagnosing and fixing stability issues with the software in production.

    Priorities

    Prevent production issues
    Correct production issues quickly

    Stability practices

    Automated deployments
    Formal release candidates
    Low-maintenance environments
    Runtime automated health checks
    Production-like pre-production environments
    Explicit secrets management
    Centralized logging
    Custom application metrics & events
    Distributed tracing
    APM tool with an operations dashboard
    Anomaly alerts
    Emergency alarms
    Formal support desk w/ ticket tracking

    Sample repository

  • In this episode, Jeffrey discusses using design patterns to increase speed. Speed is a pillar of the Clear Measure Way, just like establishing quality and achieving stability.

    Elements of a design pattern
    Problem: a tension or issue in the software. Some trait or condition that is desired to be improved
    Solution: the way of organizing some code elements to resolve the Problem
    Benefit: the concrete advantage that code applying the pattern demonstrates
    Language: a higher-level name for code that creates a higher-order concept

    A design pattern is an idea. Code implementing it is merely an example of the idea

    Resources:
    - https://www.gofpatterns.com
    - https://learn.microsoft.com/en-us/shows/visual-studio-toolbox/design-patterns-commandmemento

    Sample repository

  • In this episode, Jeffrey discusses quality and the engineering practices that support it. Not at all a comprehensive list of possible practices, this list contains the practices that should be considered essential. Without these, any team would find it difficult to establish a high-quality piece of software. Establishing quality is one of the pillars of the Clear Measure Way.

    Without first establishing quality, software developers end up spending time finding, diagnosing, and fixing bugs that pop up. This robs the team of much-needed capacity for new features and enhancements. Trying to add features without first establishing quality is like budgeting based on a credit card rather than income. There is no way the budget will balance, and it's only a matter of time before the situation comes crumbling down.

    An automated (private) build
    Test-driven development
    Onion architecture (dependency management & proper factoring)
    Static analysis
    Pull request checklists
    Continuous integration
    Automated full-system acceptance tests

    Sample repository

  • Custom primitives, an example of how refactoring accelerates team speed, part of the Clear Measure Way

    In this episode, Jeffrey discusses custom primitives, a specific refactoring. Refactoring is one of the engineering practices that increases team speed. Moving fast is part of the Clear Measure Way.

    With a refactoring mindset, we modify existing code so that a new change appears as if we had designed it that way since the genesis of the software. We don't allow something to appear "bolted on."
    An Onion Architecture mindset encourages us to push dependencies to the edges and pull core concepts to the center.
    Common operations on primitives clutter the UI, one of the edge interfaces in any application.
    Pulling common operations into the core model of an application limits repetition and reduces concept count.
    Options for refactoring custom usages of primitives:
    C# implicit operators allow the rest of the application to believe our decimal is a normal C# decimal.
    A custom primitive allows the execution of custom logic everywhere while keeping the logic in one place.

    Sample repository

  • In this episode, Jeffrey discusses how to form, equip, train, and lead your software team

    The fundamentals of software team leadership
    Read my book, .NET DevOps for Azure
    Write down your team charter to align your team members
    How to hire when adding new team members
    Training the team
    Equipping the team
    Leading the team

  • In this episode, Jeffrey discusses how to run your software with confidence by achieving production stability

    The fundamentals of stability
    Read my book, .NET DevOps for Azure
    Stress the software more in test suites than you think production will
    Implement logs
    Implement tracing
    Implement metrics
    Build an automated telemetry dashboard that allows easy analysis of the above telemetry

  • In this episode, Jeffrey discusses how to set a foundation for software quality.

    The fundamentals of quality
    Read my book, .NET DevOps for Azure
    Detect early, or better yet, prevent defects
    Implement static analysis
    Use test-driven development with multiple test suites
    Add formal inspections to your pull request approvals
    Quality comes first, then stability in production, then the speed of productivity you want to achieve
    Costs of poor quality are also discussed

  • In this episode, Jeffrey discusses how to empower your software team to move fast. You are an architect. You lead a software team. You can do it. But it's up to you to turn their effort into progress.

    The fundamentals of moving fast
    Read my book, .NET DevOps for Azure
    Test-driven development
    Continuous integration (actual CI, not just a triggered compile)
    Automated deployments from day 1, through TDD, UAT, and Prod environments
    Operational telemetry from day 1 in your environments
    Team scorecard. Know your throughput, velocity, and capacity. Measure it, analyze it, tune it.
    Threats to moving fast are also analyzed

  • In this episode, Jeffrey discusses the need for strong software leaders. Whether they be software managers, lead engineers, lead developers, architects, or chief architects, he'll discuss the history of software industry failure and the forces at play that any software leader must contend with. Then, Jeffrey reviews the responsibilities of a software leader.

    What you do as a software leader
    Form your team
    Equip your team
    Design the environment your team works in
    Set vision, targets, and process for your team
    Establish priorities and push them in the right direction
    Measure your team. "A" players want a report card. "C" players don't want to be graded.
    Monitor and adjust your team
    Grow/strengthen your team

  • In this episode, Jeffrey reviews what the .gitignore file is, its importance, and how to use it. The reviewed .gitignore file can be found here.
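    As a companion to the episode, here is a minimal sketch of what a .gitignore for a .NET repository might contain. The entries below are typical build outputs, not the exact file reviewed in the episode:

```shell
# Write a minimal .gitignore for a .NET repository into a temp directory.
# These entries are common build outputs; adjust to your project layout.
DIR=$(mktemp -d)
cat > "$DIR/.gitignore" <<'EOF'
bin/
obj/
*.user
.vs/
EOF
cat "$DIR/.gitignore"
```

    The .NET SDK can also generate a comprehensive starting point with `dotnet new gitignore`; review what it excludes before committing it.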

  • In this episode, Jeffrey updates us on the Microsoft Ignite 2022 conference and highlights twelve sessions highly relevant for software developers.

    Microsoft Ignite 2022

    Making cloud native easier: Architecting the next-gen Azure PaaS

    Deep dive into Dapr.io and Cloud Native Architectures

    Hidden gems and live coding with .NET 7

    How Visual Studio builds Visual Studio with Visual Studio

    Minimal APIs: The magic revealed

    Increasing productivity and quality for hybrid development teams

    Pragmatic production debugging in Azure

    7 Tips for Building Robust Cloud Applications

    Implementing Effective DevSecOps for Azure

    Make building cloud native solutions a breeze with Visual Studio Code, GitHub, and Azure Container Apps

    Supercharge your testing practice with UiPath Test Suite

    Automated cloud application testing with Azure and Playwright

    Scott Hunter- AzureDevOps Podcast- Episode 24
    Scott Hunter- AzureDevOps Podcast- Episode 119
    Scott Hunter- AzureDevOps Podcast- Episode 152
    Scott Hunter- AzureDevOps Podcast- Episode 211
    Brady Gaster- AzureDevOps Podcast- Episode 102
    Kendall Roden- AzureDevOps Podcast- Episode 137
    Kendall Roden- AzureDevOps Podcast- Episode 203

  • Code Sample

    In this episode, Jeffrey is going to show you how to properly package release candidates of your application and prepare them to be deployed to a representative pre-production environment before eventually going to production. In order to have something to deploy, the application needs to be packaged in the proper manner.

    First, you will create a release candidate with a version number using the proper pattern, major.minor.build.revision, and integrate that into your build. You will learn how to modify the build script to contain the logic necessary to create deployable packages from the built code. You will then learn how to archive the files that result from this process in a package manager to produce the release candidate.
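    To make the major.minor.build.revision pattern concrete, here is a small shell sketch. The values are hypothetical; on a real CI server, the build component would come from the server's build counter:

```shell
# Hypothetical version components. MAJOR/MINOR are chosen by the team;
# BUILD would come from the CI server's build counter in a real pipeline.
MAJOR=1
MINOR=4
BUILD=233
REVISION=0
VERSION="$MAJOR.$MINOR.$BUILD.$REVISION"
echo "$VERSION"
# The version is then stamped into the compiled output, for example:
#   dotnet build YourApp.sln -p:Version="$VERSION"
# (solution name is a placeholder)
```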

    A release candidate is a particular version of a particular application, composed of one or more files and one or more deployable components. In this example, the application has a website and a database that will be packaged up, along with the full-system acceptance test suite. This will allow you to run the test suite in a fully deployed environment. This means there are three things to package and three things to deploy and operate in a production-like environment. They will all be stamped with the name of the application, the church bulletin in this example, and the version of the application that comes from the build number.

    You will also be introduced to proper build numbering and the .NET CLI (dotnet.exe) in order to have the right files to package up. You will learn about a tool called the Octopus CLI to package the three components into NuGet packages ready for deployment.
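    The packaging step can be sketched as one `octo pack` invocation per deployable component. The application and component names below are placeholders based on the episode's description, and the commands are echoed rather than executed, since the Octopus CLI and the build artifacts are not assumed to be present:

```shell
# Placeholder names; the real component layout comes from your build output.
APP=ChurchBulletin
VERSION=1.4.233.0
for COMPONENT in Website Database AcceptanceTests; do
  # Echo the octo pack command that would produce one NuGet package
  # per deployable component, stamped with the shared version number.
  echo "octo pack --id=$APP.$COMPONENT --version=$VERSION --basePath=./artifacts/$COMPONENT --outFolder=./packages"
done
```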

    When archiving the release candidates, you will use Azure Artifacts as a full NuGet server to provide a safe, secure, and available location from which the deployment tooling can grab a specific version of a specific deployable component.
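    Publishing a release candidate package to an Azure Artifacts feed can be sketched as follows. The feed URL and package path are placeholders, and the push command is echoed rather than executed:

```shell
# Placeholder feed URL and package path; substitute your organization,
# project, and feed names. Authentication is handled separately
# (e.g. the Azure Artifacts credential provider).
FEED_URL="https://pkgs.dev.azure.com/yourorg/_packaging/yourfeed/nuget/v3/index.json"
PACKAGE="./packages/ChurchBulletin.Website.1.4.233.0.nupkg"
echo "dotnet nuget push $PACKAGE --source $FEED_URL --api-key az"
```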
