Episodes

  • The paper’s abstract reads:

    Healthcare systems are under stress as never before. An aging population, increasing complexity and comorbidities, continual innovation, the ambition to allow unfettered access to care, and the demands on professionals contrast sharply with the limited capacity of healthcare systems and the realities of financial austerity. This tension inevitably brings new and potentially serious hazards for patients and means that the overall quality of care frequently falls short of the standard expected by both patients and professionals. The early ambition of achieving consistently safe and high-quality care for all has not been realised and patients continue to be placed at risk. In this paper, we ask what strategies we might adopt to protect patients when healthcare systems and organisations are under stress and simply cannot provide the standard of care they aspire to.

    Discussion Points:

Extrapolating out from the healthcare focus to other businesses
This paper was published pre-pandemic
Adaptations during times of extreme stress or lack of resources - team responses will vary
People under pressure adapt, and sometimes the new conditions become the new normal
Guided adaptability to maintain safety
Substandard care in French hospitals in the study
The dynamic adjustment for times of crisis vs. long-term solutions
Short-term adaptations can impede the development of long-term solutions
Four basic principles in the paper:
- Giving up hope of returning to normal
- We can never eliminate all risks and threats
- The principal focus should be on expected problems
- Management of risk requires engagement and action at all managerial levels
Griffith University’s rules on asking for an extension…expected surprises
Middle management liaising between frontlines and executives
Managing operations in “degraded mode” and minimum equipment lists
Absolute safety - we can’t aim for 100%; we need to write in what “second best” covers
Takeaways:
- Most industries are facing more pressure today than in the past; focus on the current risks
- All industries have constant risks and tradeoffs - how to address them at each level
- Understand how pressures are being faced by teams, and what adaptations are acceptable for the short and long term
- For expected conditions and hazards, what does “second best” look like?
- Research is needed around “degraded operations”
Answering our episode question: the wrong answer is to rely only on the highest standards, which may not be achievable in degraded operations

    Quotes:

“I think it’s a good reflection for professionals and organisations to say, ‘Oh, okay - what if the current state of stress is the “new normal”, or what if things become more stressed? Is what we’re doing now the right thing to be doing?’” - David

“There is also the moral injury when people in a ‘caring’ profession can’t provide the standard of care that they believe to be the right standard.” - Drew

    “None of these authors share how often these improvised solutions have been successful or unsuccessful, and these short-term fixes often impede the development of longer-term solutions.” - David

    “We tend to set safety up almost as a standard of perfection that we don’t expect people to achieve all the time, but we expect those deviations to be rare and correctable.” - Drew

    Resources:

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

  • The paper’s abstract reads:

    This paper reflects on the credibility of nuclear risk assessment in the wake of the 2011 Fukushima meltdown. In democratic states, policymaking around nuclear energy has long been premised on an understanding that experts can objectively and accurately calculate the probability of catastrophic accidents. Yet the Fukushima disaster lends credence to the substantial body of social science research that suggests such calculations are fundamentally unworkable. Nevertheless, the credibility of these assessments appears to have survived the disaster, just as it has resisted the evidence of previous nuclear accidents. This paper looks at why. It argues that public narratives of the Fukushima disaster invariably frame it in ways that allow risk-assessment experts to “disown” it. It concludes that although these narratives are both rhetorically compelling and highly consequential to the governance of nuclear power, they are not entirely credible.

    Discussion Points:

Following up on a topic in episode 100 - nuclear safety and risk assessment
The narrative around planes, trains, cars, and nuclear - risks vs. safety
Planning for disaster when you’ve promised there’s never going to be a nuclear disaster
The 1975 WASH-1400 Studies
Japanese disasters in the last 100 years
Four tenets of Downer’s paper:
- The risk assessments themselves did not fail
- Relevance Defense: The failure of one assessment is not relevant to the other assessments
- Compliance Defense: The assessments were sound, but people did not behave the way they were supposed to/did not obey the rules
- Redemption Defense: The assessments were flawed, but we fixed them
Theories such as: Fukushima did happen, but it was not an actual ‘accident/meltdown’ - the plant basically withstood a tsunami when the country was flattened
Residents of Fukushima - they were told the plant was ‘safe’
The relevance defense, Chernobyl, and Three Mile Island
Boeing disasters, their risk assessments, and blame
At the time of Fukushima, Japanese regulation and engineering were regarded as superior
This was not a Japanese reactor! It’s a U.S. design
The compliance defense and human error
The redemption defense - regulatory bodies taking all Fukushima elements into account
Downer quotes Peanuts comics in the paper - lessons - Lucy can’t be trusted!
This paper is not about what’s wrong with risk assessments - it’s about how we defend what we do
Takeaways:
- Uncertainty is always present in risk assessments
- You can never identify all failure modes
- Three things always missing: anticipating mistakes, anticipating how complex tech is always changing, anticipating all of the little plastic connectors that can break
- Assumptions - be wary; check all the what-if scenarios
- Just because a regulator declares something safe doesn’t mean it is
Answering our episode question: you must question risk assessments CONSTANTLY

    Quotes:

    “It’s a little bit surprising we don’t scrutinize the ‘control’ every time it fails.” - Drew

    “In the case of nuclear power, we’re in this awkward situation where, in order to prepare emergency plans, we have to contradict ourselves.” - Drew

“If systems have got billions of potential ‘billion to one’ accidents then it’s only expected that we’re going to see accidents from time to time.” - David

    “As the world gets more and more complex, then our parameters for these assessments need to become equally as complex.” - David

    “The mistakes that people make in these [risk assessments] are really quite consistent.” - Drew

    Resources:

    Disowning Fukushima Paper by John Downer

    WASH-1400 Studies

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]


The book explains Perrow’s theory that catastrophic accidents are inevitable in tightly coupled, complex systems: failures will interact in multiple and unforeseen ways that are virtually impossible to anticipate.

    Charles B. Perrow (1925 – 2019) was an emeritus professor of sociology at Yale University and visiting professor at Stanford University. He authored several books and many articles on organizations and their impact on society. One of his most cited works is Complex Organizations: A Critical Essay, first published in 1972.

    Discussion Points:

David and Drew reminisce about the podcast and achieving 100 episodes
Outsiders from sociology, management, and engineering entered the field in the 70s and 80s
Perrow was not a safety scientist, as he positioned himself against the academic establishment
Perrow’s strong bias against nuclear power weakens his writing
The 1979 near-disaster at Three Mile Island - Perrow was asked to write a report, which became the book “Normal Accidents…”
The main tenets of Perrow’s core arguments:
- Start with a ‘complex high-risk technology’ - aircraft, nuclear, etc.
- Two or more failures start the accident
- “Interactive complexity”
Boeing 787 failures - a failed system plus an unexpected operator response leads to disaster
There will always be separate individual failures, but can we predict or prevent the ‘perfect storm’ of multiple failures at once?
Better technology is not the answer
Perrow predicted complex high-risk technology would be a major part of future accidents
Perrow believed nuclear power/nuclear weapons should be abandoned - the risks outweigh the benefits
Three reasons people may see his theories as wrong:
- If you believe the risk assessments of nuclear are correct, then my theories are wrong
- If they are contrary to public opinion and values
- If safety requires more safe and error-free organizations
If there is a safer way to run the systems outside all of the above
The modern takeaway is a tradeoff between adding more controls and increased complexity
The hierarchy of designers vs. operators
We don’t think nearly enough about the role of power - who decides vs. who actually takes the risks?
There should be incentives to reduce the complexity of systems and the uncertainty it creates
To answer this show’s question - not entirely, and we are constantly asking why

    Quotes:

    “Perrow definitely wouldn’t consider himself a safety scientist, because he deliberately positioned himself against the academic establishment in safety.” - Drew

    “For an author whom I agree with an awful lot about, I absolutely HATE the way all of his writing is colored by…a bias against nuclear power.” - Drew

“[Perrow] has got a real skepticism of technological power.” - Drew

    "Small failures abound in big systems.” - David

    “So technology is both potentially a risk control, and a hazard itself, in [Perrow’s] simple language.” - David

    Resources:

The Book – Normal Accidents: Living with High-Risk Technologies

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

  • The paper’s abstract reads:

    The failure of 27 wildland firefighters to follow orders to drop their heavy tools so they could move faster and outrun an exploding fire led to their death within sight of safe areas. Possible explanations for this puzzling behavior are developed using guidelines proposed by James D. Thompson, the first editor of the Administrative Science Quarterly. These explanations are then used to show that scholars of organizations are in analogous threatened positions, and they too seem to be keeping their heavy tools and falling behind. ASQ's 40th anniversary provides a pretext to reexamine this potentially dysfunctional tendency and to modify it by reaffirming an updated version of Thompson's original guidelines.

    The Mann Gulch fire was a wildfire in Montana where 15 smokejumpers approached the fire to begin fighting it, and unexpected high winds caused the fire to suddenly expand. This "blow-up" of the fire covered 3,000 acres (1,200 ha) in ten minutes, claiming the lives of 13 firefighters, including 12 of the smokejumpers. Only three of the smokejumpers survived.

The South Canyon Fire was a wildfire that took the lives of 14 wildland firefighters on Storm King Mountain, near Glenwood Springs, Colorado, on July 6, 1994. It is often also referred to as the "Storm King" fire.

    Discussion Points:

Some details of the Mann Gulch fire deaths due to refusal to drop their tools
Weick lays out ten reasons why these firefighters may have refused to drop their tools:
- Couldn’t hear the order
- Lack of explanation for the order - unusual, counterintuitive
- You don’t trust the leader
- Control - if you lose your tools, you lose capability; you’re not a firefighter
- Skill at dropping tools - i.e., the survivor who leaned a shovel against a tree instead of dropping it
- Skill with the replacement activity - it’s an unfamiliar situation
- Failure - to drop your tools, as a firefighter, is to fail
- Social dynamics - why would I do it if others are not?
- Consequences - if people believe it won’t make a difference, they won’t drop. These men should have been shown the difference it would make
- Identity - being a firefighter; without tools, they are throwing away their identity. This was also shortly after WWII, when you were a coward if you threw away your weapons, and would be alienated from your group
Thompson had four principles necessary for research in his publication:
- Administrative science should focus on relationships - you can’t understand without structures, people, and variables
- Abstract concepts - not single concrete ideas, but theories that apply to the field
- Development of operational definitions that bridge concepts and raw experience - not vague, fluffy things with confirmation bias; sadly, we still don’t have all the definitions today
- Value of the problem - what do they mean? What is the service researchers are trying to provide?
How Weick applies these principles to the ten reasons, then looks at what it means for researchers
Weick’s list of ten - they are multiple, interdependent reasons; they can all be true at the same time
Thompson’s list of four, relating them to Weick’s ten, in today’s organizations
What are the heavy tools that we should get rid of? Weick links the heaviest tools with identity
Drew’s thought - getting rid of risk assessments would let us move faster, but people won’t drop them, relating to the ten reasons above
Takeaways:
1) Cognitive factors (did I hear that? do I know what to do?) vs. emotional factors (trust, failure, etc.) in individuals and teams
2) Understanding group dynamics - the first person gives others permission to follow: the pilot diversion story, the Piper Alpha oil rig jumpers, the first firefighter who drops tools
Next week is episode 100 - we’ve got a plan!

    Quotes:

    “Our attachment to our tools is not a simple, rational thing.” - Drew

    “It’s really hard to recognize that you’re well past that point where success is not an option at all.” - Drew

    “These firefighters were several years since they’d been in a really raging, high-risk fire situation…” - David

    “I encourage anyone to read Weick’s papers, they’re always well-written.” - David

    “Well, I think according to Weick, the moment you begin to think that dropping your tools is impossible and unthinkable, that might be the moment you actually have to start wondering why you’re not dropping your tools.” - Drew

    “The heavier the tool is, the harder it is to drop.” - Drew



    Resources:

    Karl Weick - Drop Your Tools Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

In 1939, Alfred Marrow, the managing director of the Harwood Manufacturing Corporation factory in Virginia, invited Kurt Lewin (a German-American psychologist, known as one of the modern pioneers of social, organizational, and applied psychology in the U.S.) to come to the textile factory to discuss significant problems with productivity and turnover of employees. The Harwood study is considered the first experiment in group decision-making and self-management in industry and the first example of applied organizational psychology. The Harwood Experiment was part of Lewin's continuing exploration of participatory action research.

    In this episode David and Drew discuss the main areas covered by this research:

Group decision-making
Self-management
Leadership training
Changing people’s thoughts about stereotypes
Overcoming resistance to change

It turns out that yes, Lewin identified many areas of the work environment that could be improved and changed with the participation of management and members of the workforce communicating with each other about their needs and wants. This was novel stuff in 1939, but it proved to be extremely insightful, and organizations still utilize many of this experiment’s tenets 80 years later.

    Discussion Points:

Similarities between this study and the Chicago Western Electric “Hawthorne experiments”
Organizational science - Lewin’s approach
How Lewin came to be invited to the Virginia factory, and the problems they needed to solve
Autocratic vs. democratic - studies of school children’s performance
The setup of the experiment - 30-minute discussions several times a week with four cohorts
The criticisms and nitpicks around the study participants
Group decision-making
Self-management and field theory
Harwood leaders were appointed for technical knowledge, not people skills
The experiment held “clinics” where leaders could bring up their issues to discuss
Changing stereotypes - the factory refused to hire women over 30, but experimented by hiring a group for this study
Presenting data does not work to change beliefs, but stories and discussions do
Resistance to change - changing workers’ tasks without consulting them on the changes created bitterness and lack of confidence
The illusion of choice lowers resistance
The four cohorts:
- The control group received changes as they normally would - just ‘being told’
- The second group received more detail about the changes, and members were asked to represent the group with management
- Groups C and D participated in voting for the changes; their productivity was the only one that increased - by 15%
This was an atypical factory/workforce to begin with, which already had a somewhat participatory approach
Takeaways:
- Involvement in the discussion of change vs. no involvement
- Self-management - setting your own goals
- Leadership needs more than technical competence
- Stereotypes - give people space to express views; they may join the group majority in voting the other way
- Resistance to change - if people can contribute and participate, confidence is increased
- Focus on group modifications, not individuals
- More collaborative, less autocratic
- Doing this kind of research is not that difficult; you don’t need university-trained researchers, just people with a good mind for research ideas/methods

    Quotes:

    “The experiments themselves were a series of applied research studies done in a single manufacturing facility in the U.S., starting in 1939.” - David

“Lewin’s principle for these studies was…‘no research without action, and no action without research,’ and that’s where the idea of action research came from…each study is going to lead to a change in the plant.” - Drew

    “It became clear that the same job was done very differently by different people.” - David

    “This is just a lesson we need to learn over and over and over again in our organizations, which is that you don’t get very far by telling your workers what to do without listening to them.” - Drew

    “With 80 years of hindsight it's really hard to untangle the different explanations for what was actually going on here.” - Drew

    “Their theory was that when you include workers in the design of new methods…it increases their confidence…it works by making them feel like they’re experts…they feel more confident in the change.” - Drew

    Resources:

    The Practical Theorist: Life and Work of Kurt Lewin by Alfred Marrow

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

This was very in-depth research within a single organization, and the survey questions it used were well-structured. With 48 interviews to pull from, it definitely generated enough solid data to inform the paper’s results and make it a valuable study.

We’ll be discussing the pros and cons of linking safety performance to monetary bonuses, which can often lead to misreporting, recategorizing, or other “perverse” behaviors regarding safety reporting and metrics, in order to capture that year-end dollar amount, especially among mid-level and senior management.

    Discussion Points:

Do these bonuses work as intended?
Oftentimes profit sharing within a company only targets senior management teams, at the expense of the front-line employees
If safety and other measures are tied monetarily to bonuses, organizations need to spend more than a few minutes determining what is being measured
Bonuses - do they really support safety? They don’t prevent accidents
“What gets measured gets managed” OR “What gets measured gets manipulated”
Supervisors and front-line survey respondents did not understand how metrics were used for bonuses
87% replied that the safety measures had limited or negative effect
Nearly half said the bonus structure tied to safety showed that the organization felt safety was a priority
Nothing negative was recorded by the respondents in senior management - did they believe this is a useful tool?
Most organizations have only 5% or less of performance tied to safety
David keeps giving examples in the hopes that Drew will agree that at least one of them is a good idea
Drew has “too much faith in humanity” around reporting and measuring safety in these organizations
Try this type of survey in your own organization and see what you find

    Quotes:

    “I’m really mixed, because I sort of agree on principle, but I disagree on any practical form.” - Drew

    “I think there’s a challenge between the ideals here and the practicalities.” - David

    “I think sometimes we can really put pretty high stakes on pretty poorly thought out things, we oversimplify what we’re going to measure and reward.” - Drew

    “If you look at the general literature on performance bonuses, you see that they cause trouble across the board…they don’t achieve their purposes…they cause senior executives to do behaviors that are quite perverse.” - Drew

“I don’t like the way they’ve written up the analysis. I think that there’s some lost opportunity due to a misguided desire to be too statistically methodical about something that doesn’t lend itself to statistical analysis.” - Drew

    “If you are rewarding anything, then my view is that you’ve got to have safety alongside that if you want to signal an importance there.” - David

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

Just because concepts, theories, and opinions are useful and make people feel comfortable doesn’t mean they are correct. No one so far has come up with an answer in the field of safety that proves, “this is the way we should do it,” and in the work of safety, we must constantly evaluate and update our practices, rules, and recommendations. This of course means we can never feel completely comfortable – and humans don’t like that feeling. We’ll dig into why we should be careful about feeling a sense of “clarity” and mental ease when we think that we understand things completely – because what happens if someone is deliberately making us feel that a problem is “solved”...?

    The paper we’re discussing deals with a number of interesting psychological constructs and theories. The abstract reads:

    The feeling of clarity can be dangerously seductive. It is the feeling associated with understanding things. And we use that feeling, in the rough-and-tumble of daily life, as a signal that we have investigated a matter sufficiently. The sense of clarity functions as a thought-terminating heuristic. In that case, our use of clarity creates significant cognitive vulnerability, which hostile forces can try to exploit. If an epistemic manipulator can imbue a belief system with an exaggerated sense of clarity, then they can induce us to terminate our inquiries too early — before we spot the flaws in the system. How might the sense of clarity be faked? Let’s first consider the object of imitation: genuine understanding. Genuine understanding grants cognitive facility. When we understand something, we categorize its aspects more easily; we see more connections between its disparate elements; we can generate new explanations; and we can communicate our understanding. In order to encourage us to accept a system of thought, then, an epistemic manipulator will want the system to provide its users with an exaggerated sensation of cognitive facility. The system should provide its users with the feeling that they can easily and powerfully create categorizations, generate explanations, and communicate their understanding. And manipulators have a significant advantage in imbuing their systems with a pleasurable sense of clarity, since they are freed from the burdens of accuracy and reliability. I offer two case studies of seductively clear systems: conspiracy theories; and the standardized, quantified value systems of bureaucracies.

    Discussion Points:

This has been our longest break from the podcast
David traveled to the US
Uncertainty can make us risk-averse
Organizations strive for more certainty in the workplace
SCImago for evaluating research papers
A well-written paper, but not peer-evaluated by psychologists
Focus on conspiracy theories and bureaucracy
The Studio C comedy sketch - bank robbers meet a philosopher
Academic evaluations - white men vs. minorities/women
Puzzles and pleasure spikes
Clarity as a thought terminator
Epistemic intimidation and epistemic seduction
Cognitive fluency, insight, and cognitive facility
Although fascinating, there is no evidence to support the paper’s claims
Echo chambers and thought bubbles
Rush Limbaugh and Fox News - buying into the belief system
Numbers, graphs, charts, grades, tables - all make us feel comfort and control
Takeaways:
- Just because it’s useful doesn’t mean it’s correct
- The world is not supposed to make sense; it’s important to live with some cognitive discomfort
- Be cautious about feeling safe and comfortable
- Constant evaluation of safety practices must be the norm

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

We discuss “Assessing the Influence of ‘Take 5’ Pre-Task Risk Assessments on Safety” by Jop Havinga, Mohammed Ibrahim Shire, and our own Andrew Rae. The paper was just published in Safety, an international, peer-reviewed, open-access journal of industrial and human health safety published quarterly online by MDPI.

    The paper’s abstract reads:

    This paper describes and analyses a particular safety practice, the written pre-task risk assessment commonly referred to as a “Take 5”. The paper draws on data from a trial at a major infrastructure construction project. We conducted interviews and field observations during alternating periods of enforced Take 5 usage, optional Take 5 usage, and banned Take 5 usage. These data, along with evidence from other field studies, were analysed using the method of Functional Interrogation. We found no evidence to support any of the purported mechanisms by which Take 5 might be effective in reducing the risk of workplace accidents. Take 5 does not improve the planning of work, enhance worker heedfulness while conducting work, educate workers about hazards, or assist with organisational awareness and management of hazards. Whilst some workers believe that Take 5 may sometimes be effective, this belief is subject to the “Not for Me” effect, where Take 5 is always believed to be helpful for someone else, at some other time. The adoption and use of Take 5 is most likely to be an adaptive response by individuals and organisations to existing structural pressures. Take 5 provides a social defence, creating an auditable trail of safety work that may reduce anxiety in the present, and deflect blame in the future. Take 5 also serves a signalling function, allowing workers and companies to appear diligent about safety.

    Discussion Points:

Drew, how are you feeling with just a week of comments and reactions coming in?
If people are complaining that the study is not big enough, great! That means people are interested
Introduction of Jop Havinga, and his top-level framing of the study
Why do we do the ‘on-off’ style of research?
We saw no difference in results when cards were mandatory, optional, or banned
Perplexingly, some cards are filled out before getting to the job, and some after the job is complete, when there is no need for the card
One way cards may be helpful is simply creating a mindfulness and heedfulness about procedures
The “Not for Me” effect - people believe the cards may be good for others, but not necessary for themselves
Research criticisms like, “how can you actually tell whether people are paying attention or not?”
The Take 5 cards serve as a protective layer for management and workers looking to avoid blame
Main takeaway: stop using Take 5s in accident investigations, as they provide no real data, and they may even be detrimental - as in “safety clutter”
Send us your suggestions for future episodes, we are actively looking!

    Quotes:

    “You always get taken by surprise when people find other ways to criticize [the research.] I think my favorite criticism is people who immediately hit back by trying to attack the integrity of the research.” - Dr. Drew

    “So this link between behavioral psychology and safety science is sometimes very weak, it’s sometimes just a general idea of applying incentives.” - Dr. Drew

    “When someone says, ‘we introduced Take 5’s and we reduced our number of accidents by 50%,’ that is nonsense. There is no [one] safety intervention in the world where you could have that level of change and be able to see it.” - Dr. Drew

    “It’s really hard to argue that these Take 5s lead to actual better planning of the work they’re conducting.” - Dr. Jop Havinga

    “What we saw is just a total disconnect – the behavior happens without the Take 5s, the Take 5s happen without the behavior. The two NEVER actually happened at the same time.” - Dr. Drew

    “Considering that Take 5 cards are very generic, they will rarely contain anything new for somebody.” - Dr. Jop Havinga

    “Often the people who are furthest removed from the work are most satisfied with Take 5s and most reluctant to get rid of them.” - Dr. Drew

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

  • The authors’ goal was to produce a scoring protocol for safety-focused leadership engagements that reflects the consensus of a panel of industry experts. Therefore, the authors adopted a multiphased focus group research protocol to address three fundamental questions:

    1. What are the characteristics of a high-quality leadership engagement?

    2. What is the relative importance of these characteristics?

    3. What is the reliability of the scorecard to assess the quality of leadership engagement?

    Just like the last episode’s paper, the research has merit, even though it was published in a trade journal and not an academic one. The researchers interviewed 11 safety experts and identified 37 safety protocols to rank. This is a good starting point, but it would be better to also find out what these activities look like when they’re “done well,” and what success looks like when the safety measures, protocols, or attributes “work well.”

    The Paper’s Main Research Takeaways:

Safety-focused leadership engagements are important because, if performed well, they can convey company priorities, demonstrate care, and reinforce positive safety culture.
A team of 11 safety experts representing the four construction industry sectors identified and prioritized the attributes of an effective leadership engagement.
A scorecard was created to assess the quality of a leadership engagement, and the scorecard was shown to be reliable in independent validation.

    Discussion Points:

Dr. Drew and Dr. David’s initial thoughts on the paper
Thoughts on quality vs. quantity
How do the researchers define “leadership safety engagements”?
The three key phases:
- Phase 1: Identification of key attributes of excellent engagements
- Phase 2: Determining the relative importance of potential predictors
- Phase 3: Reliability check
The 15 key indicators - some are just common sense, some are relatively creepy
The end product, the checklist, is actually quite useful
The next phase should be evaluating results - do employees actually feel engaged with this approach?
Our key takeaways:
- It is possible to design a process that may not actually be valid
- The 37 items identified are a good start, but what about asking the people involved what it looks like when “done well”?
- No matter what, purposeful safety engagement is very important
- Ask what the actual leaders and employees think!
- We look forward to the results in the next phase of this research
Send us your suggestions for future episodes, we are actively looking!

    Quotes:

    “If the measure itself drives a change to the practice, then I think that is helpful as well.” - Dr. David

    “I think just the exercise of trying to find those quality metrics gets us to think harder about what are we really trying to achieve by this activity.” - Dr. Drew

    “So I love the fact that they’ve said okay, we’re talking specifically about people who aren’t normally on-site, who are coming on-site, and the purpose is specifically a conversation about safety engagement. So it’s not to do an audit or some other activity.” - Dr. Drew

    “The goal of this research was to produce a scoring protocol for safety-focused leadership engagements, that reflects the common consensus of a panel of industry experts.” - Dr. David

    “We’ve been moving towards genuine physical disconnections between people doing work and the people trying to lead, and so it makes sense that over the next little while, companies are going to make very deliberate conscious efforts to reconnect, and to re-engage.” - Dr. Drew

    “I suspect people are going to be begging for tools like this in the next couple of years.” - Dr. Drew

    “At least the researchers have put a tentative idea out there now, which can be directly tested in the next phase, hopefully, of their research, or someone else’s research.” - Dr. Drew

    Resources:

    Link to the Research Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

  • We will discuss the pros and cons of “Golden Safety Rules” and a punitive safety culture vs. a critical risk management approach, and analyze the limitations of the methods used in this research.

    The paper’s abstract introduction reads:

    Golden safety rules (GSR) have been in existence for decades across multiple industry sectors – championed by oil and gas – and there is a belief that they have been effective in keeping workers safe. As safety programs advance in the oil and gas sector, can we be sure that GSR have a continued role? ERM surveyed companies across mining, power, rail, construction, manufacturing, chemicals and oil and gas, to examine the latest thinking about GSR challenges and successes. As we embarked on the survey, the level of interest was palpable; from power to mining it was apparent that companies were in the process of reviewing and overhauling their use of GSR. The paper will present key insights from the survey around the questions we postulated. Are GSR associated with a punitive safety culture, and have they outlived their usefulness as company safety cultures mature? Is the role of GSR being displaced as critical control management reaches new pinnacles? Do we comply with our GSR, and how do we know? Do our GSR continue to address the major hazards that our personnel are most at risk from? How do we apply our GSR with contractors, and to what extent do our contractors benefit from that? The paper concludes with some observations of how developments outside of the oil and gas sector provide meaningful considerations for the content and application of GSR for oil and gas companies.

    Discussion Points:

There isn’t a lot of good research out there on Golden Rules
Most of the research is statistics on accidents or incidents
Most Golden Rules are conceived without frontline or worker input
Golden Rules are viewed as either guidelines for actions or a resource for actions
Some scenarios where workers should not/could not follow absolute rules - David’s example of the seatbelt story in the AU outback
If rules cannot be followed, the work should be redesigned
Discussion of the paper from the APPEA trade journal
Answering seven questions:
- Are life-saving rules associated with punitive safety cultures?
- Have life-saving rules outlived their usefulness?
- Has the role of life-saving rules been replaced by more mature risk management programs?
- Do we actually comply with life-saving rules?
- How do we know there is compliance with life-saving rules?
- Do life-saving rules continue to address major hazards?
- How do we apply life-saving rules to our contractors?
There were 15 companies involved in the research, with a one-hour interview with a management team member from each company
Our conclusions for each of the questions asked
Key takeaways:
- If we’ve got rules that define key roles, they may continue to be relevant
- There are a lot of factors that influence the effectiveness of a rules program
- It’s difficult, if not impossible, to divorce a life-saving rule program from the development of a punitive safety culture
- Critical control management needs to be developed in partnership with your workforce
So the answer to this episode’s question is: this paper cannot answer it
Send us your suggestions for future episodes, we are actively looking!

    Quotes:

“People tend to think of rules as constraining. They’re like laws that you stick within, that you don’t step outside of.” - Dr. Drew

    “Often the type of things that are published in trade associations are much closer to the real-world concerns of people at work, and a lot of people working for consultancies are very academically-minded.” - Dr. Drew

“One way to get a name in safety is to be good at safety; another way to get a name in safety is to tell everyone how good you are at safety.” - Dr. Drew

    “They’re not just talking to people who love Golden Rules [in this paper]. We’ve got some companies that never even wanted them, some companies that tried them and don’t like them, some companies that love them. So that’s a fantastic sample when it comes to, ‘do we have a diverse range of opinions.’” - Dr. Drew

“In many organizations that have done life-saving rules, they saw this critical risk management framework as an evolution, an improvement, in what they’re doing.” - Dr. David

“I think that’s the danger of trying to make things too simple - it becomes either too generic or too vague, or just not applicable to so many circumstances.” - Dr. Drew

    Resources:

    Link to the Golden Safety Rules Paper by Fraser and Colgan

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

The paper’s results center on a survey sent to a wide range of French industries, and although the sample comes from a single country 15 years ago, the findings are very illustrative of common issues among safety professionals within their organizations. David used this paper as a reference for his PhD thesis, and we are going to dig into each section of it.

    The paper’s abstract introduction reads:

    What are the training needs of company preventionists? An apparently straightforward question, but one that will very quickly run into a number of difficulties. The first involves the extreme variability of situations and functions concealed behind the term preventionist and which stretch way beyond the term’s polysemous nature. Moreover, analysis of the literature reveals that very few research papers have endeavoured to analyse the activities associated with prevention practices, especially those of preventionists. This is a fact, even though prevention-related issues and preventionist responsibilities are becoming increasingly important.

    Discussion Points:

The paper, reported from French industries, focuses heavily on safety in areas like occupational therapies, ergonomics, pesticides, hygiene, etc.
The downside of any “survey” result is that we can only capture what the respondents “say” or self-report about their experiences
Most of the survey participants were not originally trained as safety professionals
There are three subgroups within the survey:
- High school grads with little safety training
- Post-high-school, with two-year tech training program paths to safety work
- University-educated levels, including engineers and managers
There were six main positions isolated within this study:
- Prevention Specialists - hold a degree in safety, high status in safety management
- Field Preventionists - lesser status, operations level, closer to front lines
- Prevention Managers - executive status, senior management, engineers/project managers
- Preventionist Proxies - may be establishing safety programs, in opposition to the organization, chaotic positions
- Basic Coordinators - mainly focused on training others
- Unstructured - no established safety procedures, may have been thrown into this role
So many of the respondents felt isolated and frustrated within their organizations - which continues to be true in the safety profession
There is evidence in this paper and others that a large portion of safety professionals “hate their bosses” and feel ‘great distress’ in their positions
Only 2.5% felt comfortable negotiating safety with management
Takeaways:
- Safety professionals come from widely diverse backgrounds
- Training and education are imperative
- These are complex jobs that often are not on site
- Role clarity is very low, leading to frustration and job dissatisfaction
Send us your suggestions for future episodes, we are actively looking!

    Quotes:

    “I think this study was quite a coordinated effort across the French industry that involved a lot of different professional associations.” - David

    “It might be interesting for our readers/listeners to sort of think about which of these six groups do you fit into and how well do you reckon that is a description of what you do.” - Drew

    “I thought it was worth highlighting just how much these different [job] categories are determined by the organization, not by the background or skill of the safety practitioner.” - Drew

    “[I read a paper that stated:] There is a significant proportion of safety professionals that hate their bosses …and it was one of the top five professions that hate their bosses and managers.” - David

    “You don’t have to go too far in the safety profession to find frustrated professionals.” - David

    “There’s a lot to think on and reflect on…it’s one sample in one country 15 years ago, but these are useful reflections as we get to the practical takeaways.” - David

    “The activity that I like safety professionals to do is to think about the really important parts of their role that add the most value to the safety of work, and then go and ask questions of their stakeholders of what they think are the most valuable parts of the role, …and work toward alignment.” - David

    “Getting that role clarity makes you feel that you’re doing better in your job.” - Drew

    Resources:

    Link to the Safety Science Article

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

  • We will go through each letter of the amusing and memorable acronym and give you our thoughts on ways to make sure each point is addressed, and different methodologies to consider when verifying or assuring that each element has been satisfied before you cite the source.

    Sarah Blakeslee writes (about her CRAAP guidelines): Sometimes a person needs an acronym that sticks. Take CRAAP for instance. CRAAP is an acronym that most students don’t expect a librarian to be using, let alone using to lead a class. Little do they know that librarians can be crude and/or rude, and do almost anything in order to penetrate their students’ deep memories and satisfy their instructional objectives. So what is CRAAP and how does it relate to libraries? Here begins a long story about a short acronym…

    Discussion Points:

The CRAAP guidelines were so named to make them memorable
The five CRAAP areas to consider when using sources for your work are:
- Currency - timeliness; how old is too old?
- Relevance - who is the audience? Does the info answer your questions?
- Authority - have you googled the author? What does that search show you?
- Accuracy - is it verifiable, supported by evidence, free of emotion?
- Purpose - is the point of view objective? Or does it seem colored by political, religious, or cultural biases?
Takeaways:
- You cannot fully evaluate a source without looking AT the source
- Be cautious about second-hand sources - is it the original article, or a press release about the article?
- Be cautious of broad categories; there are plenty of peer-reviewed, well-known university articles that aren’t credible
- To answer our title question: use the CRAAP guidelines as a basic guide to evaluating your sources; it is a useful tool
Send us your suggestions for future episodes, we are actively looking!

    Quotes:

    “The first thing I found out is there’s pretty good evidence that teaching students using the [CRAAP] guidelines doesn’t work.” - Dr. Drew

    “It turns out that even with the [CRAAP] guidelines right in front of them, students make some pretty glaring mistakes when it comes to evaluating sources.” - Dr. Drew

    “Until I was in my mid-twenties, I never swore at all.” - Dr. Drew

    “When you’re talking about what someone else said [in your paper], go read what that person said, no matter how old it is.” - Dr. Drew

    “The thing to look out for in qualitative research is, how much are the participants being led by the researchers.” - Dr. Drew

    “So what I really want to know when I’m reading a qualitative study is not what the participant answered. I want to know what the question was in the first place.” - Dr. Drew

    Resources:

    Link to the CRAAP Test

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

  • An excerpt from the paper’s abstract reads as follows: The proposition is based on theory about relationships between knowledge and power, complemented by organizational theory on standardization and accountability. We suggest that the increased reliance on self-regulation and international standards in safety management may be drivers for a shift in the distribution of power regarding safety, changing the conception of what is valid and useful knowledge. Case studies from two Norwegian transport sectors, the railway and the maritime sectors, are used to illustrate the proposition. In both sectors, we observe discourses based on generic approaches to safety management and an accompanying disempowerment of the practitioners and their perspectives.

Join us as we delve into the paper and endeavor to answer the question it poses. We will discuss these highlights:

Safety science may contribute to the marginalization of practical knowledge
How “paper trails” and specialists marginalize and devalue experience-based knowledge
An applied science needs to understand the effects it causes, also from a power perspective
Safety science should reflect on how our results interact with existing system-specific knowledge
Examples from their case studies in maritime transport and railways

    Discussion Points:

David has been traveling in the U.S. for much of January, seeing colleagues
This is one of David’s favorite papers
Discussion of the paper’s authors being academics, not scientists
How does an organization create “good safety,” and what does that look like?
The rise of homogenous international standards of safety
Can safety professionals transfer their knowledge and work in other industries?
The two case studies in this paper: the Norwegian railway and maritime systems/industries
The separation between top-down system safety and local, front-line practitioners
Our key takeaways from this paper
Send us your suggestions for future episodes, we are actively looking!

    Quotes:

    “If you understand safety, then it really shouldn’t matter which industry you’re applying it on.” - Dr. Drew Rae

    “I can’t imagine, as a safety professional, how you’re impactful in the first 12 months [on a new job] until you actually understand what it is you’re trying to influence.” - Dr. David Provan

    “It feels to me this is what happened here, that they formed this view of what was going on and then actually traced back through their data to try to make sense of it.” - Dr. David Provan

    “I have to say I think they genuinely use these case studies to really effectively illustrate and support the argument that they’re making.” - Dr. Drew Rae

    “Once we start thinking too hard about a function, we start formalizing it and once we start formalizing it, it starts to become detached from operations and sort of flows from that operational side into the management side.” - Dr. Drew Rae

    “I don’t think it's being driven by the academics at all and clearly it’s in the sociology of the profession's literature all the way back to the 1950s and 60s.” - Dr. David Provan

    “We’re fighting amongst ourselves as a non-working community about whose [safety] model should be the one to then impose on the genuine front line practitioners.” - Dr. Drew Rae

    Resources:

    Link to Paper in JSS

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

Wastell, who has a BSc and PhD from Durham University, is Emeritus Professor in Operations Management and Information Systems at Nottingham University in the UK. Professor Wastell began his academic career as a cognitive neuroscientist at Durham, studying the relationships between brain activity and psychological processes. His areas of expertise include neuroscience and social policy: critical perspectives; psychophysiological design of complex human-machine systems; information systems and public sector reform; design and innovation in the public services; management as design; and human factors design of safe systems in child protection.

    Join us as we delve into the statement (summarized so eloquently in Wastell’s well-crafted abstract): “Methodology, whilst masquerading as the epitome of rationality, may thus operate as an irrational ritual, the enactment of which provides designers with a feeling of security and efficiency at the expense of real engagement with the task at hand.”

    Discussion Points:

How and when Dr. Rae became aware of this paper
Why this paper has many structural similarities to our paper, “Safety work versus the safety of work,” published in 2019
Organizations’ reliance on top-heavy processes and rituals such as Gantt charts, milestones, gateways, checklists, etc.
Thoughts and reactions to Section I: A Cautionary Tale
Section II: Methodology: The Lionization of Technique
Section III: Methodology as a Social Defense
The three elements of social defense against anxiety:
- Basic assumption (fight or flight)
- Covert coalition (internal organization protection/family/mafia)
- Organizational ritual (the focus of this paper)
Section IV: The Psychodynamics of Learning: Teddy Bears and Transitional Objects
Paul Feyerabend and his book Against Method
Our key takeaways from this paper and our discussion

    Quotes:

    “Methodology may not actually drive outcomes.” - David Provan

    “A methodology can probably never give us, repeatably, exactly what we’re after.” - David Provan

    “We have this proliferation of solutions, but the mere fact that we have so many solutions to that problem suggests that none of the individual solutions actually solve it.” - Drew Rae

    “Wastell calls out this large lack of empirical evidence around the structured methods that organizations use, and concludes that they seem to have more qualities of ‘religious convictions’ than scientific truths.” - David Provan

    “I love the fact that he calls out the ‘journey’ metaphor, which we use all the time in safety.” - Drew Rae

    “You can have transitional objects that don’t serve any of the purposes that they are leading you to.” - Drew Rae

    “Turn up to seminars, and just read papers, that are totally outside of your own field.” - Drew Rae

    Resources:

    Wastell’s Paper: The Fetish of Technique

    Paul Feyerabend (1924-1994)

    Book: Against Method by Paul Feyerabend

    Our Paper Safety Work vs. The Safety of Work

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

While this paper was written over half a century ago, it is still relevant to us today - particularly in the safety management industry, where we are often responsible for offering solutions to problems, and implementing those solutions requires decisions to be made by top management.

    This is another fascinating piece of work that will broaden your understanding of why organisations often struggle with solving problems that involve making decisions.

    Topics:

Introduction to the research paper: A Garbage Can Model of Organizational Choice
Organised anarchies
Phenomena explained by this paper
Examples of the garbage can model:
- Standards committees
- The enforceable undertakings process
How to influence the process
Deciding on who makes decisions
Conclusion - most problems will get solved
Practical takeaways:
- Don’t get discouraged when your problem isn’t solved in a particular meeting
- Be mindful of where your decision-making energy is spent
- Problems vs. solutions vs. decision-making
- Have multiple solutions ready for problems that may come up - but don’t force them all the time

    Quotes:

    “Decisions aren’t made inside people’s heads, decisions are made in meetings, so we’ve got to understand the interplay between people in looking at how decisions are made.” - Dr. Drew Rae

    “Incident investigations are a great example of choice opportunities.” - Dr. Drew Rae

    “It’s probably a good reflection point for people to just think about how many decisions certain roles in the organization are being asked to be involved in.” - Dr. David Provan

    Resources:

    Griffith University Safety Science Innovation Lab

    The Safety of Work Podcast

    The Safety of Work LinkedIn

    [email protected]

    A Garbage Can Model of Organizational Choice (Wikipedia Page)

    Administrative Science Quarterly

We will review each section of Leveson’s paper and discuss how she sets each section up by stating a general assumption and then proceeds to break that assumption down. We will discuss her analysis of:

Safety vs. Reliability
Retrospective vs. Prospective Analysis
Three Levels of Accident Causes:
- Proximal event chain
- Conditions that allowed the event
- Systemic factors that contributed to both the conditions and the event

    Discussion Points:

Unlike some others, Leveson makes her work openly available on her website
Leveson’s books, Safeware: System Safety and Computers (1995) and Engineering a Safer World: Systems Thinking Applied to Safety (2011)
Drew describes Leveson as a “prickly character”; he once worked for her, and was eventually fired by her
Leveson came to engineering with a psychology background
Many safety professionals express concern about how major accidents keep happening, bemoaning “why can’t we learn enough to prevent them?”
The first section of Leveson’s paper: Safety vs. Reliability - sometimes these concepts are at odds, sometimes they are the same thing
How cybernetics used to be “the thing,” but the theory of simple feedback loops fell apart
Summing up this section: safety is not the sum of the reliability of components
The second section of the paper: Retrospective vs. Prospective Accident Analysis
Most safety experts rely on and agree that retrospective accident analysis is still the best way to learn
Example - where technology changes slowly, i.e. airplanes, it’s acceptable to run a two-year investigation into accident causes
Example - where technology changes quickly, i.e. the 1999 Mars Climate Orbiter crash vs. the Polar Lander crash, there is no way to use retrospective analysis to change the next iteration in time
The third section of the paper: Three Levels of Analysis
It’s easiest to find the causes that led to the proximal event chain and the conditions that allowed the event, but identifying the systemic factors is more difficult because it’s not as easy to draw a causal link - it’s too indirect
The “5 Whys” method of analyzing an event or failure
Practical takeaways from Leveson’s paper - STAMP (System-Theoretic Accident Model and Processes), using the accident causality model based on systems theory
Investigations should focus on fixing the part of the system that changes slowest
The exact front-line events of the accident often don’t matter that much in improving safety
Closing question: “What exactly is systems thinking?” It is the adoption of the Rasmussian causation model - that accidents arise from a change in risk over time - and analyzing what causes that change in risk

    Quotes:

    “Leveson says, ‘If we can get it right some of the time, why can’t we get it right all of the time?’” - Dr. David Provan

    “Leveson says, ‘the more complex your system gets, that sort of local autonomy becomes dangerous because the accidents don’t happen at that local level.’” - Dr. Drew Rae

    “In linear systems, if you try to model things as chains of events, you just end up in circles.” - Dr. Drew Rae

    “Never buy the first model of a new series [of new cars], wait for the subsequent models where the engineers had a chance to iron out all the bugs of that first model!” - Dr. David Provan

    “Leveson says the reason systemic factors don’t show up in accident reports is just because it’s so hard to draw a causal link.” - Dr. Drew Rae

    “A lot of what Leveson is doing is drawing on a deep well of cybernetics theory.” - Dr. Drew Rae

    Resources:

    Applying Systems Thinking Paper by Leveson

    Nancy Leveson - Full List of Publications

    Nancy Leveson of MIT

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

  • We will discuss how other safety science researchers have designed theories that use Rasmussen’s concepts, the major takeaways from Rasmussen’s article, and how safety professionals can use these theories to analyze and improve systems in their own organizations today.

    Discussion Points:

    Rasmussen’s history of influence, and the parallels to (Paul) Erdős numbers in research paper publishing
    How Rasmussen is the “grandfather” of safety science
    Rasmussen’s impact across disciplines and organizational categories through the years
    The basics of this paper
    Why risk management models must never be static
    How other theorists and scientists take Rasmussen’s concepts and translate them into their own models and diagrams
    The paper’s summary of the evolution of theoretical approaches up until ‘now’ (1997)
    Why accident models must take a holistic approach, including technology AND people
    How organizations are always going to face pressure to deliver required results with limited resources
    Employees vs. management - both push for results with minimal acceptable effort, creating accident risk (a toy drift simulation follows this list)
    Rasmussen identified that we need different models that reflect the real world
    Takeaways for our listeners from Rasmussen’s work
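
    Rasmussen’s dynamic model invites a simple simulation: an operating point wanders between an economic-failure boundary and a safety boundary, while the push for results with less effort drags it steadily toward the unsafe side until some counter-pressure (an audit, an incident, a safety campaign) pushes it back. Below is a minimal sketch of that drift - all numbers are invented for illustration, and this is not a model from the paper itself.

    ```python
    import random

    random.seed(7)

    # One-dimensional caricature of Rasmussen's migration model: the operating
    # point sits between the safety boundary (0.0) and the economic-failure
    # boundary (1.0). Every constant here is invented for illustration.
    position = 0.8
    PRESSURE = 0.02    # steady efficiency gradient pushing toward the boundary
    MARGIN = 0.2       # marked "perceived" boundary inside the real one
    excursions = 0

    for day in range(365):
        position -= PRESSURE                      # push for results with less effort
        position += random.uniform(-0.05, 0.05)  # everyday variability of work
        if position < MARGIN:
            # Crossing the marked margin triggers a counter-pressure that
            # pushes the operating point back toward the middle.
            excursions += 1
            position += 0.3
        position = max(0.0, min(position, 1.0))

    print(f"margin excursions in one simulated year: {excursions}")
    ```

    The point of the toy is the quote below: with a constant gradient and no structural change, migration toward the boundary is the steady state rather than an anomaly, and counter-pressures only reset the drift.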

    Quotes:

    “That’s the forever challenge in safety, is people have great ideas, but what do you do with them? Eventually, you’ve got to turn it into a method.” - Drew Rae

    “These accidental events are shaped by the activity of people. Safety, therefore, depends on the control of people’s work processes.” - David Provan

    “There’s always going to be this natural migration of activity towards the boundaries of acceptable performance.” - David Provan

    “This is like the most honest look at work I think I’ve seen in any safety paper.” - Drew Rae

    “If you’re a safety professional, just how much time are you spending understanding all of these ins and outs and nuances of work, and people’s experience of work? …You actually need to find out from the insiders inside the system.” - David Provan

    “‘You can’t just keep swatting at mosquitos, you actually have to drain the swamp.’ I think that’s the overarching conceptual framework that Rasmussen wanted us to have.” - David Provan

    Resources:

    Compute your Erdős Number

    Jens Rasmussen’s 1997 Paper

    David Woods LinkedIn

    Sidney Dekker Website

    Nancy Leveson of MIT

    Black Line/Blue Line Model

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    [email protected]

  • Find out our thoughts on this paper and our key takeaways for the ever-changing world of workplace safety.

    Topics:

    Introduction to the paper & the author
    “Adding more rules is not going to make your system safer.”
    The principles of safety in the paper
    Types of safety systems as broken down by the paper
    Problems in these “ultrasafe systems”
    The summary of developments in human error
    The psychology of making mistakes
    The efficiency trade-off element in safety
    Suggestions in Amalberti’s conclusion
    Takeaway messages
    Answering the question: Why does safety get harder as systems get safer?

    Quotes:

    “Systems are good - but they are bad because humans make mistakes” - Dr. Drew Rae

    “He doesn’t believe that zero is the optimal number of human errors” - Dr. Drew Rae

    “You can’t look at mistakes in isolation of the context” - Dr. Drew Rae

    “The context and the system drive the behavior.” - Dr. David Provan

    “It’s part of the human condition to accept mistakes. It is actually an important part of the way we learn and develop our understanding of things.” - Dr. David Provan

    Resources:

    Griffith University Safety Science Innovation Lab

    The Safety of Work Podcast

    The Safety of Work LinkedIn

    [email protected]

    The Paradoxes of Almost Totally Safe Transportation Systems by R. Amalberti

    Risk Management in a Dynamic Society: A Modelling Problem - Jens Rasmussen

    The ETTO Principle: Efficiency-Thoroughness Trade-Off: Why Things That Go Right Sometimes Go Wrong - Book by Erik Hollnagel

    Ep.81 How does simulation training develop Safety II capabilities?

    Navigating Safety: Necessary Compromises and Trade-Offs - Theory and Practice - Book by R. Amalberti

  • This paper by Daniel Katz was published in 1964 and, scarily, still has some very relevant takeaways for today’s safety procedures in organisations. We delve into this research and discover the ideas that Katz initiated all those years ago. The problem is that an organization cannot promote one of these behaviours (dependable role performance and spontaneous, innovative behaviour) without negatively affecting the other. So how are organizations meant to manage this?

    We share some personal thoughts on whether or not the world of safety research has since found an answer to dealing with these two contradictory concepts.

    Topics:

    Introduction to the paper
    Introduction to the author, Daniel Katz
    The history of the safety research industry
    Three basic behaviors required from employees in all organizations
    People’s willingness to stay in an organization
    Managing dependable role performance
    Spontaneous initiative
    Favourable attitude
    Creating this motivation in employees to follow rules
    Cultivating innovative behaviour
    How this paper remains relevant in current safety research
    No answer to this question of balancing these two behaviours

    Quotes:

    “Katz is really one of the founding fathers in the field of organizational psychology.” - Dr. Drew Rae

    “It’s not just that you’re physically getting people to stay, but getting them to stay and still be willing to be productive.” - Dr. Drew Rae

    “When we promote autonomy, we need to think about what that does to reliable role performance.” - Dr. Drew Rae

    “Complex situations clearly need complex solutions.” - Dr. David Provan

    Resources:

    Griffith University Safety Science Innovation Lab

    The Safety of Work Podcast

    [email protected]

    Episode 2

    The motivational basis of organizational behavior (Paper)

  • This paper reveals some really interesting findings, and it would be valuable for companies to take notice and possibly change the way they implement incident report recommendations.

    Topics:

    Introduction to the paper
    The general process of an investigation
    The hypothesis
    The differences between the reports and their language
    The results of the three reports
    Differences in the recommendations in each of the reports
    The different ways of interpreting the results
    Practical takeaways
    Rather than sharing “lessons learned” from incidents, share the report itself and let others draw the lessons for themselves
    Summary and answer to the question

    Quotes:

    “All of the information in every report is factual, all of the information is about the same real incident that happened.” - Drew Rae

    “These are plausibly three different reports that are written for that same incident but they’re in very different styles, they highlight different facts and they emphasize different things.” - Drew Rae

    “Incident reports could be doing so much more for us in terms of broader safety in the organization.” - David Provan

    “From the same basic facts, what you select to highlight in the report and what story you use to tell seems to be leading us toward a particular recommendation.” - Drew Rae

    Resources:

    Griffith University Safety Science Innovation Lab

    The Safety of Work Podcast

    [email protected]

    Accident Report Interpretation Paper

    Episode 18 - Do PowerPoint Slides count as a safety hazard?