Risk management is a continuous process that relies on accurate and complete data. But data is never perfect, so how do we account for missing and uncertain data? As with any good process, there are a few different approaches and steps you can take.
Determine and Categorize the Threats to your Pipeline
Help fill in data gaps by determining which threats can occur during each stage of the pipeline's service life, and consider the defaults and assumptions that can be made for each based on the threat's dependence on time. Useful sources include:
- Industry statistics and studies on frequency of defects and rates of failures
- Publicly available information on exposure
- Company experience
- Subject Matter Expert input
- Industry studies on decreasing pipe strength/corrosion rates/cracking over time
Overall, lean on industry studies whenever possible and document everything; doing so helps you prioritize these defaults and assumptions when creating a risk algorithm.
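One lightweight way to keep defaults and assumptions documented is to record each value alongside its basis and time dependency. The structure and values below are purely hypothetical, a sketch of the idea rather than recommended defaults; real entries should come from your own industry studies and SME input.

```python
# Each default records where it came from ("basis") and whether the
# threat it supports is time-dependent, so every risk input is traceable.
# All names and values here are hypothetical illustrations.
DEFAULT_ASSUMPTIONS = {
    "external_corrosion_rate_mpy": {
        "value": 3.0,               # mils per year (hypothetical)
        "basis": "industry study",  # source of the default
        "time_dependent": True,     # threat grows with service life
    },
    "third_party_hit_frequency": {
        "value": 1e-4,              # events per mile-year (hypothetical)
        "basis": "SME judgment",
        "time_dependent": False,    # roughly constant over time
    },
}

def get_assumption(name):
    """Return a documented default plus its basis for audit trails."""
    entry = DEFAULT_ASSUMPTIONS[name]
    return entry["value"], entry["basis"]
```

Keeping the basis next to the value makes it straightforward to answer, months later, why a particular number ended up in the risk algorithm.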
Good, Better, Best
No two operators are at the same point in their risk process, and whether your organization is just starting out or deciding to take your risk model in a new direction, there's an approach you can take that we like to call good, better, best:
- Good: Assume conservative values where you have no information.
- Better: Improve risk estimates by using SME input and industry literature to inform your risk approach where data is missing.
- Best: Periodically quantify, and be able to explain, the uncertainty and level of conservatism resulting from defaults and estimates; understand which data is most important to improve.
Working through these good, better, best levels for defaults and assumptions when building out your risk models and assessments can help highlight areas of uncertainty and variability within your data.
Probability Distributions
Probability distributions define the behavior of a variable by specifying its limits, central tendency, and nature. Common types include:
- Normal (continuous)
- Binomial (discrete)
- Triangular (continuous)
- Uniform (continuous or discrete)
Using these different distributions, you can approximate what your data actually looks like by gathering event probabilities from sources such as historical data and expert opinion, while still accounting for uncertainty.
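As a sketch of how these distributions map onto sparse pipeline data: a triangular distribution needs only a minimum, most-likely, and maximum value, which SMEs can often supply even when historical records are thin; a uniform distribution expresses maximum uncertainty between two bounds; and a normal distribution suits variables with a known mean and spread. The corrosion-rate numbers below are hypothetical.

```python
import random

random.seed(42)  # reproducible sampling

# Hypothetical corrosion-rate inputs, in mils per year.
low, mode, high = 1.0, 3.0, 8.0

# Triangular: bounded, with a most-likely value from SME judgment.
triangular_sample = [random.triangular(low, high, mode) for _ in range(10_000)]

# Uniform: every value between the bounds equally likely.
uniform_sample = [random.uniform(low, high) for _ in range(10_000)]

# Normal: central tendency with symmetric spread.
normal_sample = [random.gauss(3.0, 1.0) for _ in range(10_000)]

# The sample mean approximates the triangular mean, (low + mode + high) / 3.
mean_triangular = sum(triangular_sample) / len(triangular_sample)
```

Sampling like this is the first half of a Monte Carlo risk estimate: each draw represents one plausible state of the pipeline given what you do and do not know.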
Quantifying Model Sensitivity
On the back end of running distributions, you'll also want to quantify your model's sensitivity to help drive the data-gathering efforts that will improve your risk estimates. This sensitivity analysis can reveal biases in your modeling approach and let you adjust accordingly.
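One common way to quantify sensitivity is to correlate each sampled input against the model output: inputs with larger correlation magnitudes drive more of the output variance, so improving their data pays off most. The toy risk model, input ranges, and names below are hypothetical stand-ins, not an operator's actual model.

```python
import random

random.seed(7)

def failure_likelihood(corrosion_rate, wall_thickness):
    """Toy risk score (hypothetical): faster corrosion and thinner
    walls both raise the likelihood of failure."""
    return corrosion_rate / wall_thickness

# Monte Carlo sample of uncertain inputs (hypothetical ranges).
n = 5_000
rates = [random.triangular(1.0, 8.0, 3.0) for _ in range(n)]   # mils/year
walls = [random.uniform(0.20, 0.40) for _ in range(n)]          # inches
outputs = [failure_likelihood(r, w) for r, w in zip(rates, walls)]

def pearson(xs, ys):
    """Pearson correlation between one input and the model output."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rank inputs by how strongly they drive the output.
sensitivity = {
    "corrosion_rate": pearson(rates, outputs),
    "wall_thickness": pearson(walls, outputs),
}
```

In this sketch the corrosion rate correlates strongly and positively with the risk score while wall thickness correlates negatively, which would point data-gathering efforts toward better corrosion-rate measurements first.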
In the end, it all comes down to acquiring great data to create robust risk assessments that help maximize a pipeline's integrity. To learn more about these approaches to addressing missing and uncertain data, check out our webinar, Accounting for Uncertainty and Data Gaps in Probabilistic Risk Assessments, today.