Process Hazards Analysis (PHA) studies (including HAZOP, What-if?, HAZID, and more) are inherently subjective, and that subjectivity leads to inconsistencies with major safety and cost implications.
Previously on the blog, we explored the importance of complete risk discovery for safer, smarter operations. Equally important is consistency in how consequence severities and likelihoods are analyzed.
In the field of psychology, it is well established that people find risk extremely difficult to judge. Whether considering a benign, everyday risk like driving a car or evaluating a major accident hazard in a processing facility, risk assessment is a constant struggle, a frequent challenge and often a point of contention.
The High Cost of Inconsistent Assessments
What happens when facility A assigns a fatality severity ranking to a certain scenario while sister facilities B, C, D and E all assign low severity rankings? Who got it right? Did they all? Are the facilities different enough that a higher severity at one plant makes sense? These inconsistencies can drive significant costs, introduce risks and need to be understood.
In this short case study, we examine how a single scenario, analyzed incorrectly, can drive recommendations resulting in hundreds of thousands of dollars misspent.
The analysis of threats is based on the experience of the team in the room, guidance from the facilitator and industry standards, yet inconsistencies still happen all the time.
This is true even for the risk assessment of a small Management of Change (MOC) project, where the MOC severity can conflict with the baseline PHA for the exact same facility. In most cases, field engineering teams do not refer back to the most recent facility PHA when completing an MOC risk assessment; they start with a blank slate and a small team, or even a single person, who is expected to adequately assess the risk of the change. It is important to remember that everyone who participates in PHAs does their best to produce a complete and comprehensive analysis; however, they are limited and biased by their experience.
“That will never happen here.”
A common bias that comes into PHA is experience bias: if an incident has never manifested at a given site, the team may discount the scenario's likelihood and not consider it credible, based on their own observations and experience. It may simply be that the event hasn't occurred within the timeline and experience of the group in the study, yet it remains a real, credible threat.
Another very challenging bias to overcome is availability/recency bias, where recent events influence the group's decision making. For example, if an incident resulting in serious injury or loss of life has occurred recently, consequence severities will likely be overestimated. Conversely, if an award has just been given for a strong Lost Time Injury Frequency Rate and safety performance has been positive, consequence severities can be underestimated.
With experienced operations personnel and engineers nearing retirement, and with the staff turnover that naturally happens, it becomes increasingly difficult to assemble experienced PHA teams. One solution some companies use is to generate lists of pre-defined scenarios with corresponding risk rankings. These lists are intended to support PHA study groups; however, in the experience of Risk Alive® data engineers, they are often disregarded or fail to capture the full picture of all credible scenarios.
CASE STUDY – AMINE RECOVERY UNITS
A major operating company with a diverse set of assets, including upstream production (onshore and offshore) and a number of refining and upgrading facilities, faced all of the challenges outlined so far: bias-skewed analysis, missed recommendations, cost implications and safety risks.
Reviewing PHAs from across the company for inconsistencies was a major challenge: by their own estimate, they typically generate 15,000 pages of PHA worksheets across their enterprise every year! By the time they finished reviewing just one year's PHA data, they would already be several years behind. It seemed an impossible situation.
The company has 9 operating Amine Recovery Units (ARUs) that are all highly similar, using similar equipment and similar chemicals, so ARUs made a sensible test case for Risk Alive®. This was a great place to start exploring just how differently PHA teams across the enterprise were analyzing the same threats.
The project included an analysis of nearly 8,000 pages of PHA and Layers Of Protection Analysis (LOPA) worksheets and took the Risk Alive® team of data scientists and risk engineers approximately 10 weeks to complete. What was discovered built the case for this company to adopt Risk Alive® as an enterprise solution, analyzing nearly 100 PHA files every year.
From the table and charts below, we can see inconsistencies both in the number of scenarios identified (a topic for a future blog post – stay tuned!) and in the way those scenarios are analyzed.
[Table 1 – columns: Fatality Scenarios | Non-Fatality Scenarios | Higher Risk Scenarios | Lower Risk Scenarios]
Table 1 (above) – Scenario counts by facility and ranking
Chart 1 (above) – Bar chart representation of fatality vs. non-fatality scenarios by facility
Chart 2 (above) – Bar chart representation of higher risk vs. lower risk scenarios by facility
These differences demonstrate that gaps exist, but what is their impact? Focusing on the cost implications of a single scenario analyzed incorrectly, the Risk Alive® team found one scenario that had been analyzed at 3 facilities with significantly different results.
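The underlying check is conceptually simple: once the same scenario has been matched across sister facilities, any disagreement in severity ranking stands out from a group-and-compare. Here is a minimal sketch in Python; the matched-scenario label and the ranking scale are illustrative (Facility 1's "Low" and Facility 5's "Fatality" come from the case study narrative, while Facility 7's value is an invented example):

```python
from collections import defaultdict

# Severity rankings assigned to the *same* matched scenario at
# different sites. The ranking labels are a generic illustration,
# not the company's actual risk matrix.
rankings = [
    ("amine regenerator reflux pump damage", "Facility 1", "Low"),
    ("amine regenerator reflux pump damage", "Facility 5", "Fatality"),
    ("amine regenerator reflux pump damage", "Facility 7", "High"),
]

# Group rankings by scenario so each scenario maps to {facility: severity}.
by_scenario = defaultdict(dict)
for scenario, facility, severity in rankings:
    by_scenario[scenario][facility] = severity

# Flag any scenario where sister facilities disagree on severity.
for scenario, per_site in by_scenario.items():
    if len(set(per_site.values())) > 1:
        print(f"Inconsistent severity for '{scenario}': {per_site}")
```

The hard part in practice is not this comparison but the matching step: deciding that three differently worded PHA entries describe the same scenario in the first place.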
The Risk Alive® Results
Using a mind map visualization technique, Risk Alive® helped the company pinpoint where risk discovery overlapped and where it diverged, helping clients focus on areas of interest. What you see below is a study of the 9 ARUs, looking at the vulnerability of damage to the Amine Regenerator Reflux Pump. At a high level, you'll observe that the majority of sites reviewed that case, and 3 identified the initiating event of an automated valve failure (fail open).
The PHA Data
Compare this to the native PHA data, where the verbiage used at each of the three sites is quite different. This is a subtle but very important point. One of the most important capabilities of Risk Alive® is normalizing language: by automating the process with natural language processing software and machine learning tools, we are able to achieve higher levels of accuracy than human eyes alone. Even if a person had reviewed all 8,000 pages of PHA data in this study, there is no guarantee they would have found this scenario at all.
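Risk Alive®'s actual pipeline is proprietary, but the core idea of matching differently worded descriptions of the same scenario can be sketched with Python's standard-library difflib. All three descriptions below are invented examples of the kind of verbiage drift seen across sites, and the similarity threshold is chosen purely for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical scenario descriptions from three sites -- note the
# differing verbiage for what is essentially the same initiating event.
descriptions = {
    "Facility 1": "automated valve fails open, damaging the reflux pump",
    "Facility 5": "control valve failure (fail open) leads to pump damage",
    "Facility 7": "valve fails in open position; reflux pump is damaged",
}

def similarity(a: str, b: str) -> float:
    """Character-level similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Flag site pairs whose wording is similar enough that they may
# describe the same scenario (threshold is illustrative only).
THRESHOLD = 0.45
sites = list(descriptions)
for i, s1 in enumerate(sites):
    for s2 in sites[i + 1:]:
        score = similarity(descriptions[s1], descriptions[s2])
        if score >= THRESHOLD:
            print(f"{s1} ~ {s2}: similarity {score:.2f}")
```

A production system would use far richer techniques (tokenization, synonym handling, learned embeddings) than raw string similarity, but the sketch shows why normalized language matters: without it, these three entries look unrelated.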
Who Got It Right?
With this finding, the question left to the operating company was: who got it right? Did they all get it right because the scenario really was different at each facility? The path they chose was to complete a consequence study for each site to determine the truth. What did they find?
The modelling showed that for a rich amine pump seal failure, the amount of H2S released was negligible. This was a low severity event: Facility 1 had analyzed it correctly, while the others had overestimated the risk. It is important to remember that this could have gone either way. Had Facility 5's analysis proved correct, Facilities 1 and 7 would have underestimated the risks and been exposed.
What Are the Implications In This Case?
With the result of this case being an overestimation of risk, the implication is a cost savings opportunity. At Facilities 5 and 7, a total of 4 PHA recommendations were put forward that would likely have been implemented, incurring capital costs to install new equipment and operating costs to maintain it. This is especially true for Facility 5, where the severity was analyzed as leading to a fatality.
Looking at the cost of those recommendations, even with conservative estimates it is easy to see the value in analyzing these types of inconsistencies.
It's important to remember: this is one scenario out of tens of thousands in these PHA studies. Extrapolate the cost of these 4 recommendations to the hundreds of others and it begs the question: how much is being misspent on PHA recommendations that merely appear necessary?
What Are Inconsistencies Costing Your Company?
Find out. Send your data to Risk Alive® and, with our limited-time offer, you and your team can access no-charge, unlimited risk data hosting, unlimited user accounts, predictive analytics and more.