Why do we have pressure calculations and how do they relate to In-Line Inspections (ILI)? Let’s start with why. It comes down to three main reasons: meeting PHMSA requirements, sorting through ILI report “noise”, and estimating response time.
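To make the “pressure calculation” piece concrete, here is a minimal sketch of a Modified B31G (0.85dL) Level 1 estimated failure pressure calculation, one common basis for the safe-pressure numbers in an IMP. The function name and example inputs are mine, and a real assessment would also apply a safety factor and follow the operator’s own procedures:

```python
import math

def modified_b31g_failure_pressure(D, t, d, L, smys):
    """Estimate failure pressure (psi) for a metal-loss anomaly using
    the Modified B31G (0.85dL) Level 1 method.

    D: outside diameter (in), t: wall thickness (in),
    d: maximum metal-loss depth (in), L: axial length (in),
    smys: specified minimum yield strength (psi).
    """
    s_flow = smys + 10_000                 # flow stress per Modified B31G
    z = L**2 / (D * t)
    if z <= 50:
        M = math.sqrt(1 + 0.6275 * z - 0.003375 * z**2)  # Folias factor
    else:
        M = 0.032 * z + 3.3
    s_fail = s_flow * (1 - 0.85 * (d / t)) / (1 - 0.85 * (d / t) / M)
    return 2 * s_fail * t / D              # Barlow: hoop stress -> pressure

# Hypothetical example: 40% deep, 3 in long anomaly on
# 12.75 in OD, 0.250 in wall, Grade X52 pipe
pf = modified_b31g_failure_pressure(D=12.75, t=0.250, d=0.10, L=3.0, smys=52_000)
# roughly 2,000 psi before any safety factor is applied
```

Comparing an estimate like this against MAOP (with the appropriate safety factor) is what turns a reported anomaly into a repair-now versus monitor decision.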
An In-Line Inspection (ILI) is really just a snapshot of the current condition of your pipeline on a specific date. Depending on the technology used and when the pipeline is inspected, different “views” of the pipe’s condition are created, which means no single ILI will give you the true condition of your pipeline.
This is where run comparisons come into play. Run comparisons help QC new data (building confidence in the results), increase the value of each run, and provide data for deeper, more accurate analytics built on the comparison results.
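As a toy illustration of the matching step at the heart of a run comparison, the sketch below pairs features from two runs purely by aligned log distance. Real comparisons also align on girth welds, clock position, and tool tolerances; the function name and sample data here are hypothetical:

```python
def match_features(run_a, run_b, tol_ft=1.0):
    """Pair features from two ILI runs whose aligned log distances fall
    within tol_ft of each other (greedy nearest-neighbor sketch).
    Each run is a list of (distance_ft, depth_pct) tuples, assumed
    pre-aligned to a common chainage. Returns the matched pairs.
    """
    matches, unmatched_b = [], list(run_b)
    for feat_a in run_a:
        # nearest remaining feature in the other run, by distance
        best = min(unmatched_b, key=lambda f: abs(f[0] - feat_a[0]), default=None)
        if best is not None and abs(best[0] - feat_a[0]) <= tol_ft:
            matches.append((feat_a, best))
            unmatched_b.remove(best)   # each feature matches at most once
    return matches

run_2018 = [(105.2, 18), (240.7, 32)]   # (distance_ft, depth_pct)
run_2023 = [(105.5, 24), (512.0, 12)]
pairs = match_features(run_2018, run_2023)
# one match near 105 ft: depth reported 18% in 2018, 24% in 2023
```

Once features are paired like this, the depth deltas feed directly into corrosion growth rate estimates, which is where the “deeper analytics” value shows up.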
There are a handful of options when it comes to doing an ILI run comparison: completing it in-house, outsourcing to an ILI vendor, outsourcing to a service provider, or not doing it at all. Trying to decide what is best for your organization? We’ve got the benefits and drawbacks of each.
I was speaking with a pipeline operator a while back who had just completed an anomaly dig that cost upwards of $1 million. The reason for the dig was to investigate a metal loss anomaly reported by a recent In-Line Inspection (ILI). The pipeline company’s business rules, in the form of their Integrity Management Program (IMP), dictate that metal loss anomalies meeting certain depth or safe-pressure criteria must be dug up and repaired. With that kind of money at stake for a dig, or any dig for that matter, the focus quickly turns to the accuracy of the ILI vendor’s call. Here we’ll look a little more under the hood to understand the genesis and inherent variability of ILI-reported features.
In my previous blog Boxes: The Origin of Your Pipe Listing, I covered some of the basics of where features reported by ILI come from. To recap, the ILI tool does not detect anomalies. It records data which is processed and interpreted by software and human beings. Pipeline features and anomalies are then annotated with boxes which contain attributes describing the feature. Each box becomes a reported item in the resultant pipe tally or report.
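The “box” idea can be pictured as a small record whose attributes flow straight into one row of the pipe tally. The field names below are illustrative only, not any vendor’s actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureBox:
    """One analyst-drawn box over the ILI signal data; each box
    becomes a single row in the final pipe tally."""
    log_distance_ft: float        # odometer distance from launch
    feature_type: str             # e.g. "metal loss", "dent", "girth weld"
    length_in: float              # axial extent of the box
    width_in: float               # circumferential extent of the box
    depth_pct: Optional[float]    # depth as % of wall, where applicable
    orientation: str              # clock position, e.g. "6:30"

# a hypothetical metal-loss call
box = FeatureBox(1523.4, "metal loss", 2.1, 1.4, 38.0, "6:30")
```

The key point is that every attribute in the record is an interpretation of signal data, drawn by software and analysts, not a direct measurement of the pipe.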
In a perfect world, these ILI-generated features would match exactly the features expected on the pipe. Of course, we don’t live in a perfect world, and real money is at stake, adding to the pressure on ILI vendors and operators to get it right.
If you’ve ever performed an In-Line Inspection (ILI) of a pipeline, you are likely familiar with the typical product of the survey: a spreadsheet. This spreadsheet is usually referred to as a pipeline listing, master list, pipe tally, or similar, and is a tabular dataset where each row is a feature reported by the ILI vendor. As seen in the example below, numerous columns are populated with data describing each feature, such as distance from launch, feature description, and dimensions, and much more depending on the operator’s reporting specifications.
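A pipe listing can be treated as plain tabular data. The column names and rows below are made up for illustration; real listings follow the vendor’s and the operator’s reporting spec:

```python
import csv
import io

# A tiny stand-in for a vendor pipe listing; real column names and
# units vary by vendor and by the operator's reporting specification.
listing_csv = """log_distance_ft,feature,length_in,width_in,depth_pct
1523.4,metal loss,2.1,1.4,38
2210.0,girth weld,,,
3057.8,dent,3.5,2.9,
"""

rows = list(csv.DictReader(io.StringIO(listing_csv)))

# Only metal-loss rows carry a depth; welds and dents leave it blank
metal_loss = [r for r in rows if r["feature"] == "metal loss"]
deepest = max(metal_loss, key=lambda r: float(r["depth_pct"]))
```

Because each row is just a record, the listing can be filtered, joined to prior runs, and fed into pressure calculations with everyday tabular tools.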
There is quite a lot of work that leads up to the generation of the listing, including prepping the line, running the ILI tool, processing the data, analyzing the data, and generating the report. In-line inspections can cost anywhere from thousands to hundreds of thousands of dollars (or more!), and the results ultimately culminate in one final deliverable: the pipe listing. I sometimes joke that an operator can say, “I spent tons of money on an ILI run and all I got was this spreadsheet.” Rather than review the entire process, for now I want to focus just on that (expensive) pipe listing.
Each row of a pipe listing contains a reported feature or anomaly, but where does that information come from? Is it accurate? Can I trust it? So many decisions are made based on the information reported in the pipe listing that I think it warrants a closer look at its building blocks.