Discover Data-Driven Certified Lab PHAs Powered by Risk Alive

PHA Recommendations: Quantity Vs Quality

Written by Risk Alive

June 19, 2017

Imagine, if you will: you’ve just graduated, and it’s your first real full-time job. No, not a job… your first attempt at a career. There’s so much to absorb, and your thoughts alternate between “am I using anything I learned at school?” and “I’m so glad I went to school to learn this.” But as a young engineer working at an operational facility, I got tunnel vision about what I should focus on.

I learned to help operations.

I learned to track and troubleshoot process upsets.

I learned how to squeeze into a tight vessel.

More importantly, I learned which corporate metrics were used to measure my performance. What indicators did I have to satisfy to ensure I got a push in the direction I wanted? Or at the very least, what did I have to do to make sure I did not end up on management’s list of “personnel I must follow up with because they make my team’s metrics look bad”?

I learned to not let MOC action items go past their deadline.

I learned to provide monthly metrics in a timely manner.

And I learned, implicitly from team report cards, that the most important thing about the output of a HAZOP was the number of recommendations you implemented.

At that point in my career, I was not really the “questioning corporate metrics” type, but after being away from the plant for a couple of years, I’m able to be more critical of my past self. And now, an immediate question comes to mind: why is the number of recommendations we implement so important?

When is it beneficial to be motivated to focus on quantity?

Usually we focus on quantity when quality is not relevant, or cannot be changed or improved. So when we look at operating company metrics or a CCPS Industry Survey, and we see that the “Number of PHA Recommendations Completed” is considered a “New Leading Indicator”, how should we react?

To be blunt, we should not be satisfied.

With all operating companies focused on creating and distributing more product with the same or fewer resources, it’s no wonder that implementing recommendations is such a recurring issue. And although the concern of limited budget and resources is unlikely to disappear soon, we can, today, objectively improve our confidence that we are implementing the “most valuable recommendations” instead of simply “as many recommendations as quickly as we can”.

So how do we make this transition from focusing on “number of” to focusing on “minimizing opportunity cost”? Here are 3 key elements to help us make this change.

1) Determine an indicator that represents value of recommendations.

2) Determine the ideal order of implementing recommendations.

3) Compare your actual implementation to your ideal implementation. Begin to understand why there is a difference, and dive into closing that gap.

What represents the quality or value of a recommendation? The benefit of a recommendation is the risk reduction it provides to various consequence receptors, such as Health and Safety, Environment, and Financial. By looking at aspects of the recommendation, including its projected reliability and the severity and frequency of the scenarios it protects against, we can objectively calculate the value of every recommendation and rank them according to their initial benefit.
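As a rough sketch of that calculation: if each recommendation is scored by the annualized risk it removes (scenario frequency × severity, weighted by the safeguard’s projected reliability), ranking becomes a simple sort. The data model and scoring formula below are illustrative assumptions, not Risk Alive’s actual methodology.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    frequency: float  # expected events per year
    severity: float   # normalized consequence cost per event (any receptor)

@dataclass
class Recommendation:
    name: str
    reliability: float  # assumed probability the safeguard performs on demand
    scenarios: list     # Scenario objects this recommendation protects against

def risk_reduction(rec: Recommendation) -> float:
    """Annualized risk removed if this recommendation is implemented."""
    return rec.reliability * sum(s.frequency * s.severity for s in rec.scenarios)

def rank(recs: list) -> list:
    """Order recommendations by their initial (standalone) benefit."""
    return sorted(recs, key=risk_reduction, reverse=True)
```

The key point is that the score is objective and repeatable: two reviewers with the same scenario data produce the same ranking.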

Next, knowing our most risk-reducing recommendation, we can simulate what would happen to our risk if we were to implement it. For example: “Now that we’ve implemented this recommendation as a safeguard, how has the importance of our other recommendations changed?” What happens to our ranking if we then implement the next most important recommendation? Using an iterative methodology, we can determine the ideal order of implementation based on the risk reduction of each recommendation, and on how implementing one recommendation impacts the others.
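That iterative methodology can be sketched as a greedy loop: score every outstanding recommendation against the current residual risk, implement the best one, discount the scenarios it now safeguards, and repeat. The mitigation model here (multiplying residual scenario risk by 1 − reliability) and all names are assumed simplifications for illustration.

```python
def marginal_benefit(rec: dict, residual: dict) -> float:
    """Risk removed by this recommendation, given today's residual risk."""
    return rec["reliability"] * sum(residual[s] for s in rec["covers"])

def sequence(recs: dict, scenario_risk: dict) -> list:
    """Greedy ordering: re-score the remaining recommendations after each pick.

    recs: {name: {"reliability": float, "covers": [scenario names]}}
    scenario_risk: {scenario name: annualized risk (frequency * severity)}
    """
    residual = dict(scenario_risk)  # risk still unaddressed, per scenario
    remaining = dict(recs)
    order = []
    while remaining:
        best = max(remaining, key=lambda n: marginal_benefit(remaining[n], residual))
        # Implementing `best` discounts the scenarios it safeguards, which
        # changes the value of every recommendation sharing those scenarios.
        for s in remaining[best]["covers"]:
            residual[s] *= 1.0 - remaining[best]["reliability"]
        order.append(best)
        del remaining[best]
    return order
```

Note how the second-ranked recommendation’s value can collapse once the first is in place, if the two protect against the same scenarios; that interaction is exactly why a one-shot ranking is not enough.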

Businesses around the world are in the midst of a huge transition toward benchmarking and learning from the data they already have. If we don’t put the effort into predicting the “ideal”, we miss a huge opportunity to learn from the gaps. By objectively learning from our history, we can improve our philosophies around recommendation implementation, and reduce both our risk and our financial opportunity cost.

https://www.aiche.org/sites/default/files/docs/pages/leading-indicator-survey_0.pdf

Want to learn more about Recommendation Sequencing? Let’s Talk.
