by Terry McSween
A few months ago, I was surprised to hear that Chevron was moving away from behavior-based safety. I found this difficult to understand. With empirical studies supporting behavioral safety, why was the company abandoning this evidence-based practice?
Then I came across an article in the Journal of SH&E Research describing a recent study done within Chevron that reported no significant relationship between the frequency of behavioral observations and safety incident rates (Agraz-Boeneker, Groves, & Haight, 2007). I thought perhaps this study would shed some light on Chevron’s recent decision.
After reading the study carefully, let me say first that the study does not invalidate behavior-based safety. In fact, while the article was clear and well written, I was surprised that the study found its way to publication as it displays several methodological problems that prevent any firm conclusions. That said, I think the study does shed some light on why Chevron appears to be abandoning BBS. The BASIC process (the name of the company’s in-house behavior-based safety process) described in this article had several significant problems when compared with BBS as more typically practiced. In short, in spite of my concerns about methodology, I think the authors have provided some fine suggestions on how to design an ineffective BBS process!
First, the article describes a case study rather than a controlled experiment; thus, while it raises important questions, it cannot provide scientific proof. In an experimental study, researchers manipulate the independent variable (BBS, in this case) and assess its impact on the dependent variable (injuries, in this case). Experimental control is demonstrated either by systematic changes that clearly correspond to manipulation of the independent variable (in a within-subject design) or by maintaining a control group for comparison (in a between-group design).
Statistics can provide a level of confidence in the results attained in a study, but only in the presence of proper experimental control of both the independent variables and the potential confounding variables. Even with sophisticated statistical methods, researchers cannot draw scientific conclusions without definitive experimental controls.
BBS or Straw Man?
In describing the BASIC process, the authors correctly point out that a behavioral checklist, observations, and a Steering Committee are considered common elements of a behavior-based safety process. But let’s take a closer look. First, let’s consider their checklist. Formal research studies and leaders in the field recommend that the checklist specify behaviors that would have prevented catastrophic events and the majority of injuries that had occurred previously within the organization. The description and an examination of the checklist used in the study suggest that this was not the case. By the authors’ own admission, the checklist was created by a corporate group and based on Standard Operating Procedures rather than a careful analysis of previous incidents.
In fact, I would say the goals of BASIC differ significantly from the goals of a typical BBS process. The authors state, “The goal is to provide management with credible and sufficient information to make appropriate decisions regarding safety issues.” This statement is in marked contrast to the goal of most behavior-based safety, which is (primarily) to create a social community in which employees encourage one another to work safely and (secondarily) to help focus efforts to improve safety systems and the physical facilities.
Yet another way BASIC appears to differ from most BBS processes is in the frequency of observations. Most BBS research and leading practitioners typically encourage weekly observations. Employees participating in BASIC were encouraged to do frequent daily observations. The concern is that this high frequency of observations increases the odds that the process becomes a numbers game rather than an intervention to enhance safety. The numbers are telling. With 200 employees, the organization ranged between 89 and 349 observations per day, with a total of 64,643 observations in 2003. The image I have is of a small number of the 200 employees walking around conducting observations so frequently that their coworkers perceived them as trying to avoid work. Further, they didn’t really talk with their coworkers (who one might imagine were not particularly fond of the “observers”) but instead simply turned their observations in to supervision so that the “hot topic” of the week could be identified.
Several other questions about the observation process are not addressed in the article. For example, the authors provide no information regarding how frequently observers provided feedback beyond mentioning that “a feedback tailgate meeting is advisable if the conditions allow it,” which is in marked contrast to feedback being a critical element of almost every observation in typical BBS practice. While the study presented data on the number of observations, the authors presented no information about the level of participation, which we have always maintained is a critical metric. Research by John Austin and Alicia Alvero, among others, substantiates that conducting observations increases the consistency of safe work practices by those making the observations. So, the level of participation would seem a more critical measure than the frequency of observations.
An additional concern is the limited use of the data by the safety committee and management. The authors imply that observations were not used for anything other than to identify the “hot topic” for discussion in safety meetings. The “hot topic” was not trended over time to see whether it changed. Apparently the data were not analyzed or tracked in any systematic way (the authors note the difficulty of getting useable information out of their database), and no other action plans were routinely developed. More importantly, it appears that neither management nor the safety committee made any effort to identify and address facility issues based on the observation data.
In short, BASIC is an example of the kind of BBS process that our critics love to attack, one which attempts to fix behavior without addressing either the social or physical environment. It’s too bad Chevron didn’t know the difference.
Fifteen years ago, I heard Tom Krause give a keynote address at an early Behavioral Safety NOW conference and voice his concern that BBS would go down the path followed by many quality improvement initiatives. His concern was that well-meaning people would implement BBS based on a limited understanding of the concepts and key elements, and that the poor results would damage the reputation of a powerful methodology.
In fact, today we find many companies implementing BBS based on what they have seen other companies doing or what they believe is important, based on their own history and experience. Some will be successful, but those of us in the safety field need to be careful not to let the others define behavior-based safety in terms of opinion rather than research-based evidence. If you are going to do behavior-based safety, take the time to really study the literature and gain an in-depth understanding, hire the expertise into your organization, or bring in outside support on a contract basis – even if just for a day, to get input on the plans you developed internally. We are here to help.