If you’ve ever performed an Inline Inspection (ILI) on a pipeline, you are likely familiar with the typical product of the survey: a spreadsheet. This spreadsheet is usually referred to as a pipeline listing, master list, pipe tally, or similar, and is a tabular dataset where each row is a feature reported by the ILI vendor. As seen in the example below, numerous columns are populated with data describing each feature, such as distance from launch, feature description, dimensions, and much more, depending on the operator’s reporting specifications.
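To make this concrete, here is a minimal sketch of a pipe listing as structured data. The column names and values below are invented for illustration; they are not taken from any actual vendor report, and real listings carry many more columns.

```python
# A toy pipe listing: each row is one feature reported by the ILI vendor.
# Column names and values are illustrative assumptions, not a vendor spec.
import csv
import io

listing_csv = """\
log_distance_ft,feature_type,depth_pct,length_in,width_in,orientation
0.0,Launcher,,,,
1043.2,Metal Loss,23,1.4,0.9,5:30
1870.5,Girth Weld,,,,
2415.8,Dent,4,3.2,2.7,12:00
"""

# Parse the tally into one dict per reported feature.
rows = list(csv.DictReader(io.StringIO(listing_csv)))

# Typical integrity work slices the listing by feature type.
metal_loss = [r for r in rows if r["feature_type"] == "Metal Loss"]
```

Nothing fancy is happening here, and that is the point: the deliverable from a very expensive inspection is, structurally, just rows and columns like these.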
There is quite a lot of work leading up to the generation of the listing, including prepping the line, running the ILI tool, processing the data, analyzing the data, and generating the report. Inline inspections can cost anywhere from thousands to hundreds of thousands of dollars (or more!), and the results ultimately culminate in one final deliverable: the pipe listing. I sometimes joke that an operator can say, “I spent tons of money on an ILI run and all I got was this spreadsheet.” Rather than review the entire process, for now I want to focus just on that (expensive) pipe listing.
Each row of a pipe listing contains a reported feature or anomaly, but where does that information come from? Is it accurate? Can I trust it? So many decisions are made based on the information reported in the pipe listing that I think it warrants a closer look at its building blocks.
An ILI tool, sometimes called a smart pig, is self-contained and can travel many miles in harsh environments. It is expected to gather very detailed information about the condition of the pipe, with different technologies specifically utilized to detect corrosion, dents, and other pipeline threats. Most smart pig technologies have resolutions on the order of millimeters or less and can even detect cracks not visible to the naked eye. The main point is that large amounts of raw data are collected along a pipeline, which are eventually transformed into raster, image-like data. Each vendor has its own viewing and analysis tools designed to efficiently process and analyze the data, albeit with a human in the driver’s seat. I specifically mention this because there is a human element involved in the raw data analysis process, but more on that in a bit.
Effective analysis of raw tool data has three components. Every feature that is to appear in a pipe listing needs to be: 1) detected, 2) identified, and 3) sized, in that order. Without detecting a feature or anomaly, there is no point in identifying or sizing it. If a feature is detected, it is almost as important to correctly identify it; no operator wants a pipe listing full of “Unknown Features.” Conversely, it can and does happen that features are misidentified: something reported as a dent is only a dent if validated some other way (like a dig), and could very well be an intrusion at a tap. Detection and identification both depend heavily on the ILI tool technology and on the human analysts who use proprietary vendor analysis software.
Assume a feature in the raw data has been detected, identified, and sized. Now what? The industry standard, regardless of tool technology or vendor, is to record the feature with a “box” so that it can be reported. These are quite literally two-dimensional boxes rendered on top of the raw data layer to signify that a feature or anomaly was detected. The advantage of using boxes to mark indications in raw tool data is the efficiency of rendering in software, as well as capturing the length and width of the feature. Note that boxes can be generated automatically by the analysis software or manually by an analyst who deems something in the data worth reporting. These boxes have attributes stored in a database connected to the vendor’s analysis software, including length, width, ID, depth, orientation, description, and any other pertinent attribute. Why is this discussion of boxing important? Primarily because each box becomes a row in your pipe listing!
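One way to picture the box-to-row relationship is as a simple record type. The field names here are my own assumptions for illustration; every vendor’s actual database schema differs, but the idea is the same: one box record in the analysis database becomes one row in the pipe listing.

```python
# Hypothetical call box record. Field names are illustrative assumptions,
# not any vendor's actual schema.
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class CallBox:
    box_id: int
    log_distance_ft: float      # distance from launch to the box
    length_in: float            # axial extent of the box
    width_in: float             # circumferential extent of the box
    depth_pct: Optional[float]  # predicted depth, % wall (metal loss only)
    orientation: str            # clock position, e.g. "5:30"
    description: str            # feature type assigned by software or analyst

def to_listing_row(box: CallBox) -> dict:
    """Each box in the analysis database becomes one row in the pipe listing."""
    return asdict(box)

row = to_listing_row(
    CallBox(101, 1043.2, 1.4, 0.9, 23.0, "5:30", "Metal Loss")
)
```

The mapping is deliberately boring: the listing is a flattened export of these box records, which is exactly why the quality of the boxes determines the quality of the listing.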
For some features, figuring out the box attributes is pretty straightforward. As an example, in the screenshot above, a caliper tool has found two large holes in the pipeline. Holes in a pipeline, except the one down the middle, are generally not a good thing! However, most analysts will recognize the signatures in that screenshot as an offtake tap and a tee, and will ensure the boxes used for reporting represent exactly that, as the green call boxes shown. Recall the identification step and the human factor mentioned above, which in this case determined that potential threats (two holes) were simply known and expected features. Additionally, the resulting boxes would be rendered to capture the true length and width of the tap and tee so that the pipe listing could also report the diameter attribute.
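For geometrically simple features like these, the translation from a box’s extents in the raw data to physical dimensions is mostly arithmetic. The sketch below shows the idea under assumed conventions (odometer distance in feet, circumferential position in clock “hours,” and an assumed 12.75-inch outside diameter); real vendor sizing accounts for far more than this geometry.

```python
# Illustrative only: turning a box's raw-data extents into physical
# length and width. The pipe diameter and coordinate conventions are
# assumptions made up for this sketch.
import math

OD_IN = 12.75                       # assumed pipe outside diameter, inches
CIRCUMFERENCE_IN = math.pi * OD_IN  # circumference at the outside surface

def box_dimensions(odo_start_ft, odo_end_ft, clock_start_hr, clock_end_hr):
    """Axial length and circumferential width implied by a call box."""
    length_in = (odo_end_ft - odo_start_ft) * 12.0
    # Clock positions: 12 "hours" span the full circumference.
    width_in = ((clock_end_hr - clock_start_hr) % 12) * CIRCUMFERENCE_IN / 12.0
    return length_in, width_in

length, width = box_dimensions(1043.20, 1043.32, 5.0, 5.5)
```

For a tap or tee, a box drawn to the true edges of the signature lets this kind of geometry recover the fitting’s diameter; as we’ll see next, metal loss is nowhere near as cooperative.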
With Magnetic Flux Leakage (MFL) as an example, it now gets complicated. MFL is not a direct measurement and, to over-simplify, creates smooth signatures in the data when pipe wall thickness varies. Metal loss, since it is a localized variance in the pipe wall, appears as “bumps” in the data. See the screenshot above showing about 2.5 feet of corroded pipe as detected by MFL. The blue boxes are the call boxes, each one becoming a row in the pipe listing with predicted depth, length, width, etc. A common misconception about MFL is that the bump signatures look just like the actual corrosion pattern on the pipe wall. For the most part, this is not true. The point here is that boxing a bump is not as easy as it sounds. ILI vendors spend countless hours refining their detection and sizing tools to estimate dimensions that match what would be found on the pipe itself during field investigations. Length and width alone are difficult to nail down, and, in the interest of keeping this short, depth and other attributes are even more difficult.
This discussion of call boxes may seem very rudimentary, especially to anyone familiar with ILI analysis, but I’ve discovered that it is not always understood where ILI-reported features come from. Many important integrity decisions are made using features and their attributes called out in an ILI report, and I believe a closer look at their origins is essential. Hopefully this helps begin to build an understanding of the inherent variability and limitations of call boxes, which we’ll explore in future discussions.