Episodes

  • Lastly, we delve into the role of leadership in addressing psychosocial hazards, the importance of standardized guidance for remote work, and the challenges faced by line managers in managing remote workers. We wrap up the episode by providing a toolkit for managers to effectively navigate the challenges of remote work, and highlight the need for tailored safety strategies for different work arrangements.

    Discussion Points:

    Different work-from-home arrangements
    Safety needs of work from home
    Challenges of remote worker representation
    Understanding and managing psychosocial risks
    Leadership and managing technical risks
    Remote work challenges and physical presence
    Practical takeaways and general discussion
    Safety strategies for different work arrangements
    The answer to our episode’s question – the short answer is that there definitely isn't a short answer. But this paper comes from a larger project, and I know that the people who did the work have gathered together a list of existing resources and toolboxes, and they've even created a few prototype tools and training packages.

    Quotes:

    "There's a risk that we're missing important contributions from workers with different needs, neurodiverse workers, workers with mental health issues, workers with particular reasons for working at home and we’re not going to be able to comment on the framework and how it might affect them." - Drew

    “When organizations' number of incident reports go up and up and up and we struggle to understand, is that a sign of worsening safety or is that a sign of better reporting?” - David

    “They do highlight just how inconsistent organisations’ approaches are and perhaps the need for just some sort of standardised guidance on what is an organisation responsible for when you ask to work from home, or when they ask you to work from home.” - Drew

    “I think a lot of people's response to work from home is let's try to subtly discourage it because we're uncomfortable with it, at the same time as we recognise that it's probably inevitable.” - Drew

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • The conversation stems from a review of a noteworthy paper from the Academy of Management Review Journal titled "The Paradox of Stretch Goals: Organizations in Pursuit of the Seemingly Impossible," which offers invaluable insights into the world of goal setting in senior management.

    Discussion Points:

    The concept of seemingly impossible goals in organizations
    Controversial nature and impact of ‘zero harm’
    The role of stretch goals in promoting innovation
    Potential negative effects of setting stretch goals
    Psychological effects of ambitious organizational targets
    Paradoxical outcomes of setting seemingly impossible goals
    The role of emotions in achieving stretch goals
    Factors that contribute to the success of stretch goals
    Real-world examples of successful stretch goal implementation
    Cautions against blind imitation of successful stretch goal strategies
    The concept of zero harm in safety initiatives
    Need for long-term research on zero harm effectiveness
    The answer to our episode’s question – they're good when the organization is currently doing well enough, but stretch goals are not good when the organization is struggling and trying to turn a corner using that stretch goal.

    Quotes:

    "The basic idea [of ‘zero harm’] is that companies should adopt a visionary goal of having zero accidents. Often that comes along with commitment statements by managers, sometimes by workers as well that everyone is committed to the vision of having no accidents." - Drew

    “I think organizations are in this loop, where I know maybe I can't achieve zero, but I can't say anything other than zero because that wouldn't be moral or responsible, because I'd be saying it's okay to hurt people. So I set zero because it's the best thing for me to do.” - David

    “The ‘stretch goal’ was credited with the introduction of hybrid cars. You've got to have a whole new way of managing your car to get that seemingly impossible goal of doubling your efficiency.” - Drew

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • You’ll hear David and Drew delve into the often overlooked role of bias in accident investigations. They explore the potential pitfalls of data collection, particularly confirmation bias, and discuss the impacts of other biases such as anchoring bias and hindsight bias. Findings from the paper are examined, revealing insights into confirmation bias and its prevalence in interviews. Strategies for enhancing the quality of incident investigations are also discussed, emphasizing the need to shift focus from blaming individuals to investigating organizational causes. The episode concludes with the introduction of Safety Exchange, a platform for global safety community collaboration.

    Discussion Points:

    Exploring the role of bias in accident investigations
    Confirmation bias in data collection can validate initial assumptions
    Review of a study examining confirmation bias among industry practitioners
    Anchoring bias and hindsight bias on safety strategies
    Recognizing and confronting personal biases
    Counterfactuals in steering conversations towards preconceived solutions
    Strategies to enhance the quality of incident investigations
    Shifting focus from blaming individuals to investigating organizational causes
    Safety Exchange - a platform for the global safety community
    The challenges organizations face when conducting good quality investigations
    Standardization, trust, and managing time and production constraints
    Confirmation bias in shaping investigation outcomes
    Techniques to avoid bias in accident investigations and improve their quality
    Safety Exchange - a safe place for open discussion
    Six key questions
    The answer to our episode’s question – Very, and we all are as human beings. It does mean that we should probably worry more about the data collection phase of our investigations than about the causal analysis methodology and taxonomy that we concern ourselves with.

    Quotes:

    "If we actually don't understand how to get a good data collection process, then it really doesn't matter what happens after that." - David

    "The trick is recognizing our biases and separating ourselves from prior experiences to view each incident with fresh eyes." - Drew

    "I have heard people in the industry say this to me, that there's no new problems in safety, we've seen them all before." - David

    "In talking with people in the industry around this topic, incident investigation and incident investigation quality, 80% of the conversation is around that causal classification taxonomy." - David

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • The research paper discussed is by Anita Tucker and Sarah Singer, titled "The Effectiveness of Management by Walking Around: A Randomised Field Study," published in Production and Operations Management.

    Discussion Points:

    Understanding senior leadership safety visits and management walkarounds
    Best practices for safety management programs
    How management walkarounds influence staff perception
    Research findings comparing intervention and control groups
    Consequences of management inaction
    Effective implementation of changes
    Role of senior managers in prioritizing problems
    Impact of patchy implementation
    How leadership visits affect staff perception
    Investigating management inaction
    Effective implementation and consultation

    Key Takeaways:

    The same general initiative can have very different effectiveness depending on how it's implemented and who's implementing it
    When we do any sort of consultation effort, whether it's forums, walkarounds, reporting systems, or learning teams, what do we judge those on? Do we judge them on their success at consulting, or do we judge them on their success at generating actions that get taken?
    The answer to our episode’s question – Sometimes yes, sometimes no. It depends on the resulting actions.

    Quotes:

    "I've definitely lived and breathed this sort of a program a lot during my career." - David

    "The effectiveness of management walkarounds depends on the resulting actions." - David

    "The worst thing you can do is spend lots of time deciding what is a high-value problem." - Drew

    "Having the senior manager allocated really means that something serious has been done about it." - Drew

    "The individual who walks around with the leader and talks about safety with the leader, thinks a lot better about the organization." - David

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • The paper reviewed in this episode is from the Journal of Applied Psychology entitled, “A meta-analysis of personality and workplace safety: Addressing unanswered questions” by Beus, J. M., Dhanani, L. Y., & McCord, M. A. (2015).

    Discussion Points:

    Overview of the intersection between psychology and workplace safety
    How personality tests may predict safety performance
    Accident proneness theory to modern behaviorism
    Research on personality and safety performance
    Personality traits influencing work behaviors
    The influence of institutional logic
    Personality tests for safety performance
    The need for further research and standardized measurement methods
    Examining statistical evidence linking personality to safety performance
    Personality traits and their impact on work behavior
    Analysis of research findings on personality and safety performance
    The practical implications of the research findings
    The intriguing yet complex relationship between personality and safety

    Takeaways:

    While not total bunk, we definitely don't understand the impact of personality on safety nearly enough to use it as a tool to predict who will or won't make a safe employee
    There are lots of different ways that we could use personality to get some insights and to make some contributions
    We need people using those measurements to find out more about the relationship between personality and behavior in different situations, in different contexts, with different choices, under different organizational influences.
    The answer to our episode’s question – Maybe. It depends. Sometimes, in some places not yet. I don't want to say no, but it's not yes yet either.

    Quotes:

    “I have to admit, before I read this, I thought that the entire idea of personality testing for safety was total bunk. Coming out of it, I'm still not convinced, but it's much more mixed or nuanced than I was expecting.” - Drew

    “If there was a systemic trend where some people were genuinely more accident prone, we would expect to see much sharper differences between the number of times one person had an accident and all people who didn't have accidents.” - Drew

    “I think anything that lumps people into four or five categories downplays the uniqueness of each individual.” - David

    “There are good professionals in HR, there's good science in HR, but there is a huge amount of pseudo-science around recruiting practices and every country has its own pseudoscience.” - Drew

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • Show Notes - The Safety of Work - Ep. 109: Do safety performance indicators mean the same thing to different stakeholders? - Dr. Drew Rae and Dr. David Provan

    The abstract reads:

    Indicators are used by most organizations to track their safety performance. Research attention has been drawn to what makes for a good indicator (specific, proactive, etc.) and the sometimes perverse and unexpected consequences of their introduction. While previous research has demonstrated some of the complexity, uncertainties and debates that surround safety indicators in the scientific community, to date, little attention has been paid to how a safety indicator can act as a boundary object that bridges different social worlds despite being the social groups’ diverse conceptualization. We examine how a safety performance indicator is interpreted and negotiated by different social groups in the context of public procurement of critical services, specifically fixed-wing ambulance services. The different uses that the procurer and service providers have for performance data are investigated, to analyze how a safety performance indicator can act as a boundary object, and with what consequences. Moving beyond the functionality of indicators to explore the meanings ascribed by different actors, allows for greater understanding of how indicators function in and between social groups and organizations, and how safety is more fundamentally conceived and enacted. In some cases, safety has become a proxy for other risks (reputation and financial). Focusing on the symbolic equivocality of outcome indicators and even more tightly defined safety performance indicators ultimately allows a richer understanding of the priorities of each actor within a supply chain and indicates that the imposition of oversimplified indicators may disrupt important work in ways that could be detrimental to safety performance.

    Discussion Points:

    What we turn into numbers in an organization
    Background of how this paper came about
    Four main groups - procurement, incoming operator, outgoing operator, pilots
    Availability is key for air ambulances
    Incentivizing availability
    Outgoing operators/providers feel they lost the contract unfairly
    The point of view of the incoming operators/providers
    Military pilots fill in between providers
    Using numbers to show how good/bad the service is
    Pilots - caught in the middle
    Contracts always require a trade-off
    Boundary objects - what does availability mean to different people?
    Maximizing core deliverables safely
    Problems with measuring availability
    Pressure within the system
    Putting a number on performance

    Takeaways:

    Choice of a certain metric that isn’t what you need leads to perverse behavior
    Placing indicators on things can make other things invisible
    Financial penalties tied to indicators can be counterproductive
    The answer to our episode’s question – Yes, metrics on the boundaries can communicate in different directions.

    Quotes:

    “The way in which we turn things into numbers reveals a lot about the logic that is driving the way that we act and give meaning to our actions.” - Drew

    “You’ve got these different measures of the service that are vastly different, depending on what you’re counting, and what you’re looking for.” - David

    “The paper never draws a final conclusion - was the service good, was the service bad?” - Drew

    “The pilots are always in this sort of weird, negotiated situation, where ‘doing the right thing’ could be in either direction.” - Drew

    “If someone’s promising something better, bigger, faster and cheaper, make sure you take the effort to understand how that company is going to do that….” - David

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • This report details the full findings of the world’s largest four-day working week trial to date, comprising 61 companies and around 2,900 workers, that took place in the UK from June to December 2022. The design of the trial involved two months of preparation for participants, with workshops, coaching, mentoring and peer support, drawing on the experience of companies who had already moved to a shorter working week, as well as leading research and consultancy organisations. The report results draw on administrative data from companies, survey data from employees, alongside a range of interviews conducted over the pilot period, providing measurement points at the beginning, middle, and end of the trial.

    Discussion Points:

    Background on the five-day workweek
    We set out to review two central claims:
      Reduced hours worked can maintain the same productivity
      Reduced hours will provide benefits to the employees
    Digging into the Autonomy organization and the researchers and authors
    Says “trial” but it’s more like a pilot program
    61 companies, June to December 2022
    Issues with methodology - companies will change in the six months coming out of Covid; a controlled trial would have been better
    The pilot only includes white-collar jobs - no physical, operational, high-hazard businesses
    The revenue numbers
    Analysing the staff numbers - how many filled out the survey? What positions did the respondents hold in the company?
    Who experienced positive vs. negative changes in individual results
    Interviews from the “shop floor” were actually with CEOs and office staff
    Eliminating wasted time from the five-day week
    What different companies preferred employees to do with their ‘extra time’
    Assumption 1: there is a business use case benefit - not true
    Assumption 2: benefits for staff - mixed results

    Takeaways:

    Don’t use averages
    Finding shared goals can be good for everyone
    Be aware of burden-shifting
    The answer to our episode’s question – It’s a promising idea, but results are mixed, and it requires more controlled trial research.

    Quotes:

    “It’s important to note that this is a pre-Covid idea, this isn’t a response to Covid.” - Drew

    “...there's a reason why we like to do controlled trials. That reason is that things change in any company over six months.” - Drew

    “ …a lot of the qualitative data sample is very tiny. Only a third of the companies got spoken to, and only one senior representative who was already motivated to participate in the trial, would like to think that anything that their company does is successful.” - David

    “I'm pretty sure if you picked any company, you're taking into account things like government subsidies for Covid, grants, and things like that. Everyone had very different business in 2021-2022.” - Drew

    “We're not trying to accelerate the pace of work, we're trying to remove all of the unnecessary work.” - Drew

    “I think people who plan the battle don't battle the plan. I like collaborative decision-making in general, but I really like it in relation to goal setting and how to achieve those goals.” - David

    Resources:

    Link to the Pilot Study

    Autonomy

    The Harwood Experiment Episode

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • Summary:

    The purpose of the Australian Work Health and Safety (WHS) Strategy 2023–2033 (the Strategy) is to outline a national vision for WHS — Safe and healthy work for all — and set the platform for delivering on key WHS improvements. To do this, the Strategy articulates a primary goal supported by national targets, and the enablers, actions and system-wide shifts required to achieve this goal over the next ten years. This Strategy guides the work of Safe Work Australia and its Members, including representatives of governments, employers and workers – but should also contribute to the work and understanding of all in the WHS system including researchers, experts and practitioners who play a role in owning, contributing to and realising the national vision.

    Discussion Points:

    Background on Safe Work Australia
    The strategy includes six goals for reducing:
      Worker fatalities caused by traumatic injuries by 30%
      The frequency rate of serious claims resulting in one or more weeks off work by 20%
      The frequency rate of claims resulting in permanent impairment by 15%
      The overall incidence of work-related injury or illness among workers to below 3.5%
      The frequency rate of work-related respiratory disease by 20%
      No new cases of accelerated silicosis by 2033
    The strategy is a great opportunity to set a direction for research and education
    Five actions covered by the strategy:
      Information and raising awareness
      National coordination
      Data and intelligence gathering
      Health and safety leadership
      Compliance and enforcement
    When regulators fund research, they demand tangible results quickly
    Many safety documents and corporate safety systems never reach the most vulnerable workers, who don’t have ‘regular’ long-term jobs
    Standardization can increase unnecessary work
    When and where do organizations access safety information?
    Data - AI use for the future
    Strategy lacks milestones within the ten-year span
    Enforcement - we don’t have evidence-based data on the effects

    Takeaways:

    The idea of a national strategy? Good.
    Balancing safety with innovation and evidence
    Answering our episode question: We need research into specific workforces and into the evidence behind specific industry issues. “Lots of research is needed!”

    Quotes:

    “The fact is, that in Australia, traumatic injury fatalities - which are the main ones that they are counting - are really quite rare, even if you add the entire country together.” - Drew

    “I really see no point in these targets. They are not tangible, they’re not achievable, they’re not even measurable, with the exception of respiratory disease…” - Drew

    “These documents are not only an opportunity to set out a strategic direction for research and policy, and industry activity, but also an opportunity to educate.” - David

    “When regulators fund research, they tend to demand solutions. They want research that’s going to produce tangible results very quickly.” - Drew

    “I would have loved a concrete target for improving education and training- that is something that is really easy to quantify.” - Drew

    Resources:

    Link to the strategy document

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • Baron's work focuses primarily on judgment and decision-making, a multi-disciplinary area that applies psychology to problems of ethical decisions and resource allocation in economics, law, business, and public policy.

    The paper’s summary:

    Recent efforts to teach thinking could be unproductive without a theory of what needs to be taught and why. Analysis of where thinking goes wrong suggests that emphasis is needed on 'actively open-minded thinking', including the effort to search for reasons why an initial conclusion might be wrong, and on reflection about rules of inference, such as heuristics used for making decisions and judgments. Such instruction has two functions. First, it helps students to think on their own. Second, it helps them to understand the nature of expert knowledge, and, more generally, the nature of academic disciplines. The second function, largely neglected in discussions of thinking instruction, can serve as the basis for thinking instruction in the disciplines. Students should learn how knowledge is obtained through actively open-minded thinking. Such learning will also teach students to recognize false claims to systematic knowledge.

    Discussion Points:

    Critical thinking and Chat AI
    Teaching knowledge vs. critical thinking
    Section One: Introduction - critical thinking is a stated goal of many teaching institutions
    Section Two: The Current Rationale - What is thinking? Reading about thinking is quite difficult!
    Baron’s “myside bias” is today’s confirmation or selection bias
    Reflective learning - does it help with learning?
    Section Three: Abuses - misapplying thinking in schools and business
    Breaking down learning into sub-sections
    Section Four: The growth of knowledge - beginning in Medieval times
    Section Five: The basis of expertise - what is an ‘expert’? Every field has its own self-critiques
    Drew’s brain is hurting just getting through this discussion
    Section Six: What the educated person should know
    Studying accidents in safety science - student assignments

    Takeaways:

    Good thinking means being able to make good decisions regarding experts
    Precision is required around what is necessary for learning
    Well-informed self-criticism is necessary
    Answering our episode question: Can we teach critical thinking? It was never answered in this paper, but it gave us a lot to think about.

    Quotes:

    “It’s a real stereotype that old high schools were all about rote learning. I don’t think that was ever the case. The best teachers have always tried to inspire their students to do more than just learn the material.” - Drew

    “Part of the point he’s making is that not everyone who holds themself out to be an expert IS an expert… that’s when we have to have good thinking tools… who IS an expert and how do we know who to trust?” - Drew

    “Baron also says that even good thinking processes won’t necessarily help much when specific knowledge is lacking…” - David

    “The smarter students are, the better they are at using knowledge about cognitive biases to criticize other people’s beliefs, rather than to help themselves think more critically.” - Drew

    “Different fields advance by different sorts of criticism… to understand expertise in a field you need to understand how that field does its internal critique.” - Drew

    Resources:

    Link to the paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • You’ll hear a little about Schein’s early career at Harvard and MIT, including his Ph.D. work – a paper on the experience of POWs during wartime, contrasted against the indoctrination of individuals joining an organization for employment. We also discuss some of Schein’s 30-year-old concepts that are now common practice and theory in organizations, such as “psychological safety.”

    Discussion Points:

    A brief overview of Schein’s career at Harvard and MIT’s School of Management, and his fascinating Ph.D. on POWs during the Korean War
    A bit about the book, Humble Inquiry
    Digging into the paper
    Three types of learning:
      Knowledge acquisition and insight learning
      Habits and skills
      Emotional conditioning and learned anxiety
    Practical examples and the metaphor of Pavlov’s dog
    Countering Anxiety I with Anxiety II
    Three processes of ‘unfreezing’ an organization or individual to change:
      Disconfirmation
      Creation of guilt or anxiety
      Psychological safety
    Mistakes in organizations and how they respond
    There are so many useful nuggets in this paper
    Schein’s solutions: steering committees/change teams/groups to lead the organization and manage each other’s anxiety

    Takeaways:

    How an organization deals with mistakes will determine how change happens
    Assessing levels of fear and anxiety
    Know what stands in your way if you want progress
    Answering our episode question: How can organizations learn faster? 1) Don't make people afraid to enter the green room, or 2) make them more afraid to stand on the black platform.

    Quotes:

    “...a lot of people credit [Schein] with being the granddaddy of organizational culture.” - Drew

    “[Schein] says .. in order to learn skills, you've got to be willing to be temporarily incompetent, which is great if you're learning soccer and not so good if you're learning to run a nuclear power plant.” - Drew

    “Schein says quite clearly that punishment is very effective in eliminating certain kinds of behavior, but it's also very effective in inducing anxiety when in the presence of the person or the environment that taught you that lesson.” - Drew

    “We've said before that we think sometimes in safety, we're about three or four decades behind some of the other fields, and this might be another example of that.” - David

    “Though curiosity and innovation are values that are praised in our society, within organizations and particularly large organizations, they're not actually rewarded.” - Drew

    Resources:

    Link to the paper

    Humble Inquiry by Edgar Schein

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • You’ll hear some dismaying statistics around the validity of research papers in general, some comments regarding the peer review process, and then we’ll dissect each of six questions that should be asked BEFORE you design your research.

    The paper’s abstract reads:

    In this article, we define questionable measurement practices (QMPs) as decisions researchers make that raise doubts about the validity of the measures, and ultimately the validity of study conclusions. Doubts arise for a host of reasons, including a lack of transparency, ignorance, negligence, or misrepresentation of the evidence. We describe the scope of the problem and focus on how transparency is a part of the solution. A lack of measurement transparency makes it impossible to evaluate potential threats to internal, external, statistical-conclusion, and construct validity. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, and pose a serious threat to cumulative psychological science, but are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study’s inferences, and are necessary for meaningful replication studies.

    Discussion Points:

    The appeal of the foundational question, “Are we measuring what we think we’re measuring?”
    Citations of studies - 40-93% of studies lack evidence that the measurement is valid
    Psychological research and its lack of defining what measures are used, the validity of their measurement, etc.
    The peer review process - it helps, but can’t stop bad research being published
    Why care about this issue? With a lack of validity, the research answer may be the opposite
    Designing research - like choosing different paths through a garden
    The six main questions to avoid questionable measurement practices (QMPs):
      What is your construct?
      Why/how did you select your measure?
      What measure did you use to operationalize the construct?
      How did you quantify your measure?
      Did you modify the scale? How and why?
      Did you create a measure on the fly?

    Takeaways:

    Expand your methods section in research papers
    Ask these questions before you design your research
    As research consumers, we can’t take results at face value
    Answering our episode question: How can we get better? Transparency is the starting point.

    Resources:

    Link to the paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • In concert with the paper, we’ll focus on two major separate but related Boeing 737 accidents:

    Lion Air Flight 610 in October 2018 - The plane took off from Jakarta and crashed 13 minutes later, with one of the highest death tolls ever for a 737 crash - 189 souls.

    Ethiopian Airlines Flight 302 in March 2019 - This plane took off from Addis Ababa and crashed minutes into takeoff, killing 157.

    The paper’s abstract reads:

    Following other contributions about the MAX accidents to this journal, this paper explores the role of betrayal and moral injury in safety engineering related to the U.S. federal regulator’s role in approving the Boeing 737MAX—a plane involved in two crashes that together killed 346 people. It discusses the tension between humility and hubris when engineers are faced with complex systems that create ambiguity, uncertain judgements, and equivocal test results from unstructured situations. It considers the relationship between moral injury, principled outrage and rebuke when the technology ends up involved in disasters. It examines the corporate backdrop against which calls for enhanced employee voice are typically made, and argues that when engineers need to rely on various protections and moral inducements to ‘speak up,’ then the ethical essence of engineering—skepticism, testing, checking, and questioning—has already failed.

    Discussion Points:

Two separate but related air disasters
The MCAS (Maneuvering Characteristics Augmentation System) and the angle-of-attack sensor on the Boeing 737
Criticality rankings
The article - Joe Jacobsen, an engineer/whistleblower who came forward
The claim is that engineers need more moral courage/convictions and training in ethics
Defining moral injury
Engineers - the Challenger accident, the Hyatt collapse
Disaster literacy - check out the old Disastercast podcast
Humility and hubris
Regulatory bodies and their issues
Solutions and remedies
Risk assessments
Other examples outside of Boeing
Takeaways:
  Profit vs. risk, technical debt
  Don’t romanticize ethics
  Internal emails can be your downfall
  Rewards, accountability, incentives
  Look into the engineering resources
Answering our episode question: In this paper, it’s a sign that things are bad.

    Quotes:

    “When you develop a new system for an aircraft, one of the first safety things you do is you classify them according to their criticality.” - Drew

    “Just like we tend to blame accidents on human error, there’s a tendency to push ethics down to that front line.” - Drew

    “There’s this lasting psychological/biological behavioral, social or even spiritual impact of either perpetrating, or failing to prevent, or bearing witness to, these acts that transgress our deeply held moral beliefs and expectations.” - David

    “Engineers are sort of taught to think in these binaries, instead of complex tradeoffs, particularly when it comes to ethics.” - Drew

    “Whenever you have this whistleblower protection, you’re admitting that whistleblowers are vulnerable.” - Drew

    “Engineers see themselves as belonging to a company, not to a profession, when they’re working.” - Drew

    Resources:

    Link to the paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • The paper’s abstract reads:

    Healthcare systems are under stress as never before. An aging population, increasing complexity and comorbidities, continual innovation, the ambition to allow unfettered access to care, and the demands on professionals contrast sharply with the limited capacity of healthcare systems and the realities of financial austerity. This tension inevitably brings new and potentially serious hazards for patients and means that the overall quality of care frequently falls short of the standard expected by both patients and professionals. The early ambition of achieving consistently safe and high-quality care for all has not been realised and patients continue to be placed at risk. In this paper, we ask what strategies we might adopt to protect patients when healthcare systems and organisations are under stress and simply cannot provide the standard of care they aspire to.

    Discussion Points:

Extrapolating out from the healthcare focus to other businesses
This paper was published pre-pandemic
Adaptations during times of extreme stress or lack of resources - team responses will vary
People under pressure adapt, and sometimes the new conditions become the new normal
Guided adaptability to maintain safety
Substandard care in French hospitals in the study
The dynamic adjustment for times of crisis vs. long-term solutions
Short-term adaptations can impede the development of long-term solutions
Four basic principles in the paper:
  Giving up hope of returning to normal
  We can never eliminate all risks and threats
  Principal focus should be on expected problems
  Management of risk requires engagement and action at all managerial levels
Griffith University’s rules on asking for an extension…expected surprises
Middle management liaising between frontlines and executives
Managing operations in “degraded mode” and minimum equipment lists
Absolute safety - we can’t aim for 100% - we need to write in what “second best” covers
Takeaways:
  Most industries are facing more pressure today than in the past - focus on the current risks
  All industries have constant risks and tradeoffs - how to address them at each level
  Understand how pressures are being faced by teams - what adaptations are acceptable for the short and long term?
  For expected conditions and hazards, what does “second best” look like?
  Research is needed around “degraded operations”
Answering our episode question: The wrong answer is to rely only on the highest standards, which may not be achievable in degraded operations

    Quotes:

“I think it’s a good reflection for professionals and organisations to say, ‘Oh, okay - what if the current state of stress is the new normal, or what if things become more stressed? Is what we’re doing now the right thing to be doing?’” - David

“There is also the moral injury when people are in a ‘caring’ profession and they can’t provide the standard of care that they believe to be the right standard.” - Drew

    “None of these authors share how often these improvised solutions have been successful or unsuccessful, and these short-term fixes often impede the development of longer-term solutions.” - David

    “We tend to set safety up almost as a standard of perfection that we don’t expect people to achieve all the time, but we expect those deviations to be rare and correctable.” - Drew

    Resources:

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • The paper’s abstract reads:

    This paper reflects on the credibility of nuclear risk assessment in the wake of the 2011 Fukushima meltdown. In democratic states, policymaking around nuclear energy has long been premised on an understanding that experts can objectively and accurately calculate the probability of catastrophic accidents. Yet the Fukushima disaster lends credence to the substantial body of social science research that suggests such calculations are fundamentally unworkable. Nevertheless, the credibility of these assessments appears to have survived the disaster, just as it has resisted the evidence of previous nuclear accidents. This paper looks at why. It argues that public narratives of the Fukushima disaster invariably frame it in ways that allow risk-assessment experts to “disown” it. It concludes that although these narratives are both rhetorically compelling and highly consequential to the governance of nuclear power, they are not entirely credible.

    Discussion Points:

Following up on a topic in episode 100 - nuclear safety and risk assessment
The narrative around planes, trains, cars and nuclear - risks vs. safety
Planning for disaster when you’ve promised there’s never going to be a nuclear disaster
The 1975 WASH-1400 studies
Japanese disasters in the last 100 years
Four tenets of Downer’s paper:
  The risk assessments themselves did not fail
  Relevance Defense: The failure of one assessment is not relevant to the other assessments
  Compliance Defense: The assessments were sound, but people did not behave the way they were supposed to/did not obey the rules
  Redemption Defense: The assessments were flawed, but we fixed them
Theories such as: Fukushima did happen - but not an actual ‘accident/meltdown’ - it basically withstood a tsunami when the country was flattened
Residents of Fukushima - they were told the plant was ‘safe’
The relevance defense, Chernobyl, and Three Mile Island
Boeing disasters, their risk assessments, and blame
At the time of Fukushima, Japanese regulation and engineering was regarded as superior
This was not a Japanese reactor! It’s a U.S. design
The compliance defense, human error
The redemption defense, regulatory bodies taking all Fukushima elements into account
Downer quotes Peanuts comics in the paper - lessons - Lucy can’t be trusted!
This paper is not about what’s wrong with risk assessments - it’s about how we defend what we do
Takeaways:
  Uncertainty is always present in risk assessments
  You can never identify all failure modes
  Three things are always missing: anticipating mistakes, anticipating how complex tech is always changing, anticipating all of the little plastic connectors that can break
  Assumptions - be wary, check all the what-if scenarios
  Just because a regulator declares something safe doesn’t mean it is
Answering our episode question: You must question risk assessments CONSTANTLY

    Quotes:

    “It’s a little bit surprising we don’t scrutinize the ‘control’ every time it fails.” - Drew

    “In the case of nuclear power, we’re in this awkward situation where, in order to prepare emergency plans, we have to contradict ourselves.” - Drew

    “If systems have got billions of potential ’billion to one’ accidents then it’s only expected that we’re going to see accidents from time to time.” - David

    “As the world gets more and more complex, then our parameters for these assessments need to become equally as complex.” - David

    “The mistakes that people make in these [risk assessments] are really quite consistent.” - Drew

    Resources:

    Disowning Fukushima Paper by John Downer

    WASH-1400 Studies

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • The book explains Perrow’s theory that catastrophic accidents are inevitable in tightly coupled and complex systems. His theory predicts that failures will occur in multiple and unforeseen ways that are virtually impossible to predict.

    Charles B. Perrow (1925 – 2019) was an emeritus professor of sociology at Yale University and visiting professor at Stanford University. He authored several books and many articles on organizations and their impact on society. One of his most cited works is Complex Organizations: A Critical Essay, first published in 1972.

    Discussion Points:

David and Drew reminisce about the podcast and achieving 100 episodes
Outsiders from sociology, management, and engineering entered the field in the 70s and 80s
Perrow was not a safety scientist, as he positioned himself against the academic establishment
Perrow’s strong bias against nuclear power weakens his writing
The 1979 near-disaster at Three Mile Island - Perrow was asked to write a report, which became the book “Normal Accidents…”
The main tenets of Perrow’s core argument:
  Start with a ‘complex high-risk technology’ - aircraft, nuclear, etc.
  Two or more failures start the accident
  “Interactive complexity”
Boeing failures - a failed system plus an unexpected operator response lead to disaster
There will always be separate individual failures, but can we predict or prevent the ‘perfect storm’ of multiple failures at once?
Better technology is not the answer
Perrow predicted complex high-risk technology would be a major part of future accidents
Perrow believed nuclear power/nuclear weapons should be abandoned - the risks outweigh the benefits
Reasons people may see his theories as wrong:
  If you believe the risk assessments of nuclear are correct, then my theories are wrong
  If they are contrary to public opinion and values
  If safety requires more safe and error-free organizations
  If there is a safer way to run the systems outside all of the above
The modern takeaway is a tradeoff between adding more controls and increased complexity
The hierarchy of designers vs. operators
We don’t think nearly enough about the role of power - who decides vs. who actually takes the risks?
There should be incentives to reduce the complexity of systems and the uncertainty it creates
Answering our episode question: not entirely, and we are constantly asking why

    Quotes:

    “Perrow definitely wouldn’t consider himself a safety scientist, because he deliberately positioned himself against the academic establishment in safety.” - Drew

    “For an author whom I agree with an awful lot about, I absolutely HATE the way all of his writing is colored by…a bias against nuclear power.” - Drew

“[Perrow] has got a real skepticism of technological power.” - Drew

    "Small failures abound in big systems.” - David

    “So technology is both potentially a risk control, and a hazard itself, in [Perrow’s] simple language.” - David

    Resources:

    The Book – Normal accidents: Living with high-risk technologies

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • The paper’s abstract reads:

    The failure of 27 wildland firefighters to follow orders to drop their heavy tools so they could move faster and outrun an exploding fire led to their death within sight of safe areas. Possible explanations for this puzzling behavior are developed using guidelines proposed by James D. Thompson, the first editor of the Administrative Science Quarterly. These explanations are then used to show that scholars of organizations are in analogous threatened positions, and they too seem to be keeping their heavy tools and falling behind. ASQ's 40th anniversary provides a pretext to reexamine this potentially dysfunctional tendency and to modify it by reaffirming an updated version of Thompson's original guidelines.

    The Mann Gulch fire was a wildfire in Montana where 15 smokejumpers approached the fire to begin fighting it, and unexpected high winds caused the fire to suddenly expand. This "blow-up" of the fire covered 3,000 acres (1,200 ha) in ten minutes, claiming the lives of 13 firefighters, including 12 of the smokejumpers. Only three of the smokejumpers survived.

    The South Canyon Fire was a 1994 wildfire that took the lives of 14 wildland firefighters on Storm King Mountain, near Glenwood Springs, Colorado, on July 6, 1994. It is often also referred to as the "Storm King" fire.

    Discussion Points:

Some details of the Mann Gulch fire deaths due to the refusal to drop tools
Weick lays out ten reasons why these firefighters may have refused to drop their tools:
  1. Couldn’t hear the order
  2. Lack of explanation for the order - it was unusual and counterintuitive
  3. You don’t trust the leader
  4. Control - if you lose your tools, you lose capability; you’re not a firefighter
  5. Skill at dropping tools - e.g. the survivor who leaned his shovel against a tree instead of dropping it
  6. Skill with the replacement activity - it’s an unfamiliar situation
  7. Failure - to drop your tools, as a firefighter, is to fail
  8. Social dynamics - why would I do it if others are not?
  9. Consequences - if people believe it won’t make a difference, they won’t drop. These men should have been shown the difference it would make
  10. Identity - being a firefighter; without tools they are throwing away their identity. This was also shortly after WWII, when throwing away your weapons marked you as a coward and meant alienation from your group
Thompson laid out four principles necessary for research in his publication:
  Administrative science should focus on relationships - you can’t understand without structures and people and variables
  Abstract concepts - not single concrete ideas, but theories that apply to the field
  Development of operational definitions that bridge concepts and raw experience - not vague fluffy things with confirmation bias; sadly, we still don’t have all the definitions today
  Value of the problem - what do they mean? What is the service researchers are trying to provide?
How Weick applies these principles to the ten reasons, then looks at what it means for researchers
Weick’s list of ten - they are multiple, interdependent reasons; they can all be true at the same time
Thompson’s list of four, relating them to Weick’s ten, in today’s organizations
What are the heavy tools that we should get rid of? Weick links the heaviest tools with identity
Drew’s thought - getting rid of risk assessments would let us move faster, but people won’t drop them, relating to the ten reasons above
Takeaways:
  1) Emotional vs. cognitive reasons (cognitive: did I hear that, do I know what to do; emotional: trust, failure, etc.) in individuals and teams
  2) Understanding group dynamics - it takes a first person for others to follow: the pilot diversion story, the Piper Alpha oil rig jumpers, the first firefighter who drops tools
Next week is episode 100 - we’ve got a plan!

    Quotes:

    “Our attachment to our tools is not a simple, rational thing.” - Drew

    “It’s really hard to recognize that you’re well past that point where success is not an option at all.” - Drew

    “These firefighters were several years since they’d been in a really raging, high-risk fire situation…” - David

    “I encourage anyone to read Weick’s papers, they’re always well-written.” - David

    “Well, I think according to Weick, the moment you begin to think that dropping your tools is impossible and unthinkable, that might be the moment you actually have to start wondering why you’re not dropping your tools.” - Drew

    “The heavier the tool is, the harder it is to drop.” - Drew

    Resources:

    Karl Weick - Drop Your Tools Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

In 1939, Alfred Marrow, the managing director of the Harwood Manufacturing Corporation factory in Virginia, invited Kurt Lewin (a German-American psychologist, known as one of the modern pioneers of social, organizational, and applied psychology in the U.S.) to come to the textile factory to discuss significant problems with productivity and employee turnover. The Harwood study is considered the first experiment in group decision-making and self-management in industry, and the first example of applied organizational psychology. The Harwood Experiment was part of Lewin's continuing exploration of participatory action research.

    In this episode David and Drew discuss the main areas covered by this research:

Group decision-making
Self-management
Leadership training
Changing people’s thoughts about stereotypes
Overcoming resistance to change

It turns out that yes, Lewin identified many areas of the work environment that could be improved and changed with the participation of management and members of the workforce communicating with each other about their needs and wants. This was novel stuff in 1939, but it proved to be extremely insightful, and organizations still utilize many of this experiment’s tenets 80 years later.

    Discussion Points:

Similarities in this study compared to the Chicago Western Electric “Hawthorne experiments”
Organizational science - Lewin’s approach
How Lewin came to be invited to the Virginia factory, and the problems they needed to solve
Autocratic vs. democratic - studies of school children’s performance
The setup of the experiment - 30-minute discussions several times a week with four cohorts
The criticisms and nitpicks around the study participants
Group decision making
Self-management and field theory
Harwood leaders were appointed for technical knowledge, not people skills
The experiment held “clinics” where leaders could bring up their issues to discuss
Changing stereotypes - the factory refused to hire women over 30, but experimented by hiring a group for this study
Presenting data does not work to change beliefs, but stories and discussions do
Resistance to change - changing workers’ tasks without consulting them on the changes created bitterness and lack of confidence
The illusion of choice lowers resistance
The four cohorts:
  The control group received changes as they normally would - just ‘being told’
  The second group received more detail about the changes, and members were asked to represent the group with management
  Groups C and D participated in voting for the changes; their productivity was the only one that increased - by 15%
This was an atypical factory/workforce to begin with, which already had a somewhat participatory approach
Takeaways:
  Involvement in the discussion of change vs. no involvement
  Self-management - setting own goals
  Leadership needs more than technical competence
  Stereotypes - give people space to express views; they may join the group majority in voting the other way
  Resistance to change - if people can contribute and participate, confidence is increased
  Focus on group modifications, not individuals
  More collaborative, less autocratic
  Doing this kind of research is not that difficult - you don’t need university-trained researchers, just people with a good mind for research ideas/methods

    Quotes:

    “The experiments themselves were a series of applied research studies done in a single manufacturing facility in the U.S., starting in 1939.” - David

“Lewin’s principle for these studies was…’no research without action, and no action without research,’ and that’s where the idea of action research came from…each study is going to lead to a change in the plant.” - Drew

    “It became clear that the same job was done very differently by different people.” - David

    “This is just a lesson we need to learn over and over and over again in our organizations, which is that you don’t get very far by telling your workers what to do without listening to them.” - Drew

    “With 80 years of hindsight it's really hard to untangle the different explanations for what was actually going on here.” - Drew

    “Their theory was that when you include workers in the design of new methods…it increases their confidence…it works by making them feel like they’re experts…they feel more confident in the change.” - Drew

    Resources:

    The Practical Theorist: Life and Work of Kurt Lewin by Alfred Marrow

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

This was very in-depth research within a single organization, and the survey questions it used were well-structured. With 48 interviews to pull from, it definitely generated enough solid data to inform the paper’s results and make it a valuable study.

    We’ll be discussing the pros and cons of linking safety performance to monetary bonuses, which can often lead to misreporting, recategorizing, or other “perverse” behaviors regarding safety reporting and metrics, in order to capture that year-end dollar amount, especially among mid-level and senior management.

    Discussion Points:

Do these bonuses work as intended?
Oftentimes profit sharing within a company only targets senior management teams, at the expense of the front-line employees
If safety and other measures are tied monetarily to bonuses, organizations need to spend more than a few minutes determining what is being measured
Bonuses - do they really support safety? They don’t prevent accidents
“What gets measured gets managed” OR “What gets measured gets manipulated”
Supervisors and front-line survey respondents did not understand how metrics were used for bonuses
87% replied that the safety measures had limited or negative effect
Nearly half said the bonus structure tied to safety showed that the organization felt safety was a priority
Nothing negative was recorded by the respondents in senior management - did they believe this is a useful tool?
Most organizations have only 5% or less of performance pay tied to safety
David keeps giving examples in the hopes that Drew will agree that at least one of them is a good idea
Drew has “too much faith in humanity” around reporting and measuring safety in these organizations
Try this type of survey in your own organization and see what you find

    Quotes:

    “I’m really mixed, because I sort of agree on principle, but I disagree on any practical form.” - Drew

    “I think there’s a challenge between the ideals here and the practicalities.” - David

    “I think sometimes we can really put pretty high stakes on pretty poorly thought out things, we oversimplify what we’re going to measure and reward.” - Drew

    “If you look at the general literature on performance bonuses, you see that they cause trouble across the board…they don’t achieve their purposes…they cause senior executives to do behaviors that are quite perverse.” - Drew

“I don’t like the way they’ve written up the analysis. I think that there’s some lost opportunity due to a misguided desire to be too statistically methodical about something that doesn’t lend itself to statistical analysis.” - Drew

    “If you are rewarding anything, then my view is that you’ve got to have safety alongside that if you want to signal an importance there.” - David

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

  • Just because concepts, theories, and opinions are useful and make people feel comfortable, doesn’t mean they are correct. No one so far has come up with an answer in the field of safety that proves, “this is the way we should do it,” and in the work of safety, we must constantly evaluate and update our practices, rules, and recommendations. This of course means we can never feel completely comfortable – and humans don’t like that feeling. We’ll dig into why we should be careful about feeling a sense of “clarity” and mental ease when we think that we understand things completely- because what happens if someone is deliberately making us feel that a problem is “solved”...?

    The paper we’re discussing deals with a number of interesting psychological constructs and theories. The abstract reads:

    The feeling of clarity can be dangerously seductive. It is the feeling associated with understanding things. And we use that feeling, in the rough-and-tumble of daily life, as a signal that we have investigated a matter sufficiently. The sense of clarity functions as a thought-terminating heuristic. In that case, our use of clarity creates significant cognitive vulnerability, which hostile forces can try to exploit. If an epistemic manipulator can imbue a belief system with an exaggerated sense of clarity, then they can induce us to terminate our inquiries too early — before we spot the flaws in the system. How might the sense of clarity be faked? Let’s first consider the object of imitation: genuine understanding. Genuine understanding grants cognitive facility. When we understand something, we categorize its aspects more easily; we see more connections between its disparate elements; we can generate new explanations; and we can communicate our understanding. In order to encourage us to accept a system of thought, then, an epistemic manipulator will want the system to provide its users with an exaggerated sensation of cognitive facility. The system should provide its users with the feeling that they can easily and powerfully create categorizations, generate explanations, and communicate their understanding. And manipulators have a significant advantage in imbuing their systems with a pleasurable sense of clarity, since they are freed from the burdens of accuracy and reliability. I offer two case studies of seductively clear systems: conspiracy theories; and the standardized, quantified value systems of bureaucracies.

    Discussion Points:

This has been our longest break from the podcast
David traveled to the US
Uncertainty can make us risk-averse
Organizations strive for more certainty in the workplace
Scimago for evaluating research papers
A well-written paper, but not peer-reviewed by psychologists
Focus on conspiracy theories and bureaucracy
The Studio C comedy sketch - bank robbers meet a philosopher
Academic evaluations - white men vs. minorities/women
Puzzles and pleasure spikes
Clarity as a thought terminator
Epistemic intimidation and epistemic seduction
Cognitive fluency, insight, and cognitive facility
Although fascinating, there is no evidence to support the paper’s claims
Echo chambers and thought bubbles
Rush Limbaugh and Fox News - buying into the belief system
Numbers, graphs, charts, grades, tables - all make us feel comfort and control
Takeaways:
  Just because it’s useful doesn’t mean it’s correct
  The world is not supposed to make sense; it’s important to live with some cognitive discomfort
  Be cautious about feeling safe and comfortable
  Constant evaluation of safety practices must be the norm

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork

“Assessing the Influence of ‘Take 5’ Pre-Task Risk Assessments on Safety” by Jop Havinga, Mohammed Ibrahim Shire, and our own Andrew Rae. The paper was just published in Safety, an international, peer-reviewed, open-access journal of industrial and human health safety published quarterly online by MDPI.

    The paper’s abstract reads:

    This paper describes and analyses a particular safety practice, the written pre-task risk assessment commonly referred to as a “Take 5”. The paper draws on data from a trial at a major infrastructure construction project. We conducted interviews and field observations during alternating periods of enforced Take 5 usage, optional Take 5 usage, and banned Take 5 usage. These data, along with evidence from other field studies, were analysed using the method of Functional Interrogation. We found no evidence to support any of the purported mechanisms by which Take 5 might be effective in reducing the risk of workplace accidents. Take 5 does not improve the planning of work, enhance worker heedfulness while conducting work, educate workers about hazards, or assist with organisational awareness and management of hazards. Whilst some workers believe that Take 5 may sometimes be effective, this belief is subject to the “Not for Me” effect, where Take 5 is always believed to be helpful for someone else, at some other time. The adoption and use of Take 5 is most likely to be an adaptive response by individuals and organisations to existing structural pressures. Take 5 provides a social defence, creating an auditable trail of safety work that may reduce anxiety in the present, and deflect blame in the future. Take 5 also serves a signalling function, allowing workers and companies to appear diligent about safety.

    Discussion Points:

Drew, how are you feeling with just a week of comments and reactions coming in?
If people are complaining that the study is not big enough, great! That means people are interested
Introduction of Jop Havinga, and his top-level framing of the study
Why do we do the ‘on-off’ style of research?
We saw no difference in results whether cards were mandatory, optional, or banned
Perplexingly, some cards are filled out before getting to the job, and some after the job is complete, when there is no need for the card
One way cards may be helpful is simply creating mindfulness and heedfulness about procedures
The “Not for Me” effect - people believe the cards may be good for others, but not necessary for themselves
Research criticisms like, “how can you actually tell people are paying attention or not?”
The Take 5 cards serve as a protective layer for management and workers looking to avoid blame
Main takeaway: Stop using Take 5s in accident investigations, as they provide no real data, and they may even be detrimental - as in “safety clutter”
Send us your suggestions for future episodes - we are actively looking!

    Quotes:

    “You always get taken by surprise when people find other ways to criticize [the research.] I think my favorite criticism is people who immediately hit back by trying to attack the integrity of the research.” - Dr. Drew

    “So this link between behavioral psychology and safety science is sometimes very weak, it’s sometimes just a general idea of applying incentives.” - Dr. Drew

    “When someone says, ‘we introduced Take 5’s and we reduced our number of accidents by 50%,’ that is nonsense. There is no [one] safety intervention in the world where you could have that level of change and be able to see it.” - Dr. Drew

    “It’s really hard to argue that these Take 5s lead to actual better planning of the work they’re conducting.” - Dr. Jop Havinga

    “What we saw is just a total disconnect – the behavior happens without the Take 5s, the Take 5s happen without the behavior. The two NEVER actually happened at the same time.” - Dr. Drew

    “Considering that Take 5 cards are very generic, they will rarely contain anything new for somebody.” - Dr. Jop Havinga

    “Often the people who are furthest removed from the work are most satisfied with Take 5s and most reluctant to get rid of them.” - Dr. Drew

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork