October 2013


Fatal flaws in risk matrix

Reidar Bratvold’s SPE presentation pinpoints three flaws in widely-used, ‘scary’ industry best practices for risk management. Current approaches may contribute to arbitrary risk management decisions.

In his presentation1 at the Society of petroleum engineers’ (SPE) annual technical conference and exhibition held this month in New Orleans, Reidar Bratvold of the University of Stavanger began with an inventory of the published use of risk matrices to analyze risk and implement mitigation strategies. Risk matrices (RM) are widely used in oil and gas, frequently cited as best practices and embedded in national and international standards from ISO, Norsok and the American Petroleum Institute. But, asks Bratvold, ‘do they work?’

RMs rank risks so that mitigation efforts can be focused on higher likelihood and cost events. But, for Bratvold, risk matrices contain three irremediable flaws. First, risk matrices produce different results depending on how scoring is done. In one drilling example, using an ascending scoring scale, the risk category of ‘severe losses’ was prioritized over ‘blowout prevention.’ But simply reversing the scoring scale was enough to change the risk priority order. Bratvold asked, ‘Would such a technique withstand scrutiny in a court of law?’
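To see the mechanics of the first flaw, consider the following sketch (in Python, with made-up categories rather than Bratvold’s actual data). Risks are scored as the product of likelihood and consequence scores; simply reversing the direction of the scale flips the priority order:

```python
# Sketch of the scoring reversal flaw (made-up categories, not Bratvold's data).
# Risk score = likelihood score x consequence score on a 5x5 matrix. An
# ascending scale maps category i to score i; a descending scale maps it to
# 6 - i, so the smallest product then marks the top priority.

risks = {"severe losses": (3, 3),   # medium likelihood, medium consequence
         "blowout":       (1, 5)}   # rare but catastrophic

def priorities(ascending=True):
    score = (lambda i: i) if ascending else (lambda i: 6 - i)
    product = {name: score(l) * score(c) for name, (l, c) in risks.items()}
    # ascending scale: biggest product first; descending: smallest first
    return sorted(product, key=product.get, reverse=ascending)

print(priorities(True))   # ['severe losses', 'blowout']
print(priorities(False))  # ['blowout', 'severe losses'] -- the order flips
```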

The next flaw is the sensitivity of the risk matrix approach to small changes in cut-offs. A tiny difference here can change the ranking, making for instability in the analysis. None of the standards bodies cited above has anything to say about this flaw.
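A sketch of the second flaw, again with hypothetical cut-offs: a change of 0.002 in an estimated probability is enough to move an event into a different likelihood category, and hence a different matrix cell:

```python
# Sketch of the cut-off sensitivity flaw (hypothetical category boundaries).
# bisect finds which probability bin an estimate falls into.

import bisect

cutoffs = [0.01, 0.05, 0.10, 0.50]           # boundaries of five likelihood bins

def likelihood_category(p):
    return bisect.bisect_right(cutoffs, p) + 1   # category 1..5

print(likelihood_category(0.099))  # 3
print(likelihood_category(0.101))  # 4 -- a 0.002 shift changes the category,
                                   # the cell, and potentially the ranking
```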

Thirdly there is the ‘lie factor,’ a concept borrowed from visual display guru Edward Tufte. Here a graphical representation is used to ‘game’ the analysis by presenting information in a misleading way.
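Tufte defines the lie factor as the size of the effect shown in the graphic divided by the size of the effect in the data, so a faithful display scores one. A trivial illustration (figures invented):

```python
# Tufte's lie factor: effect size shown in the graphic over effect size in
# the data. A faithful display scores 1.0; the figures here are invented.

def lie_factor(shown_change, data_change):
    return shown_change / data_change

# data doubles (a 1.0, i.e. 100%, change) but the chart makes the bar grow 4x
print(lie_factor(shown_change=4.0, data_change=1.0))  # 4.0 -- a 'lying' graphic
```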

Bratvold’s team has analyzed some 1,300 SPE papers—all of which had lie factors greater than one on at least one axis. Such distortion means that risk matrices give the illusion of communicating information simply. But in reality the technique leads to arbitrary risk management decisions. The flaws are inherent to the technique and it is ‘scary that they are considered best practices.’

Bratvold was quizzed on what techniques he recommended other than risk matrices. He replied that risk management was just a subset of decision making and that there is a century or so of analysis and advice on the topic—as practiced at Stanford University’s management science department. Other industry practitioners queried Bratvold’s conclusions, observing that despite the flaws, RMs were good communication tools. Bratvold was unrepentant, ‘Risk management is important, why should the goal be to keep it simple? There is no need for over-sophisticated Monte Carlo analytics but using risk matrices means we are having the wrong conversation.’

1. SPE paper 166269— www.oilit.com/links/1310_0102.


CGI for Shell, RRC

CGI gets five-year global IT/CIO outsourcing deal from Shell and $14 million IT modernization contract from Railroad Commission of Texas.

CGI has seen two major contract wins this month for its IT outsourcing offering in oil and gas. In the UK, Shell has awarded CGI a five-year deal for key application services provision in support of Shell’s technical and competitive IT (Tacit) unit. Services cover Shell’s upstream and downstream businesses and IT for capital projects. The deal covers subsurface and wells, engineering and projects, and a technology chief information officer role. Projects and managed services will support Shell globally from primary locations in the Netherlands, North America, the UK and India. CGI has 15 years’ experience of working with Shell’s different businesses.

CGI was also awarded a two year, $14 million IT modernization contract by the Railroad Commission of Texas. Established in 1891, the RRC has been regulating onshore oil and gas in Texas for over ninety years. CGI is to help the RRC optimize its regulatory and reporting activity with automated processes, tools and data needed to keep pace with the ‘booming demand for oil and gas production while protecting public safety and the environment.’ The deal targets web-based permitting and more self-service opportunities for users. More from CGI and RRC.


On risk management, dumbing-up and the ‘cost of risk’ of a Macondo

Back from the 2013 New Orleans conference of the Society of Petroleum Engineers, editor Neil McNaughton debates Reidar Bratvold’s demolition job on industry best practice, the risk matrix. Inspired by ENI’s attempt to put a ‘cents per barrel’ figure on the cost of risk, he does his own analysis, using collateral from the Times-Picayune, with a surprising result.

This month’s lead is a critical analysis of a current ‘best practice’ in risk analysis from Reidar Bratvold (University of Stavanger) and Eric Bickel of the University of Texas at Austin. For those of you who don’t subscribe to the full text edition (cheapskates!), the analysis was presented1 at the Society of petroleum engineers’ annual technical conference and exhibition held this month in New Orleans. The authors’ critique strikes at the heart of a popular mechanism for ranking industrial risk, the risk matrix, and shows it to be pseudoscience.

Bratvold was quizzed for an alternative to the risk matrix approach and he pointed us to the body of work that has come out of Stanford University on how decisions are made. I googled around on the Stanford website and just about the first thing I found was a ‘breakfast briefing’ on, you guessed, risk matrices! A quick email exchange with the authors put me on the right track.

Bratvold pointed us at risk management specialists Elisabeth Paté-Cornell at Stanford and Terje Aven at Stavanger. Co-author Eric Bickel pointed us to a video where George Kirkland explains how Chevron uses decision analysis and also pointed us at the Decision analysis society. I confess that I have not had the time to follow up on all these leads myself but thought that they might be useful to Oil IT Journal readers.

I have a lot of sympathy for Bratvold’s iconoclasm. We have reported previously from conferences on risk and safety where ‘bow tie’ and ‘Swiss cheese’ models have been put forward as ‘best practices.’ Whether these rather poetic approaches are ‘best’ or not, they are hard to qualify as scientific.

Let’s now look at the alternative, mathematical modeling of the decision making process. Subject matter experts are tasked with analyzing small bits of the enterprise and making judgment calls on the likelihood of this and that happening and the associated costs. These are then combined, usually with Monte Carlo techniques, to provide a big picture of all options and risks.
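A minimal Monte Carlo sketch of this roll-up, with invented probabilities and costs standing in for the experts’ judgment calls:

```python
# Minimal Monte Carlo roll-up of expert judgments (all figures invented).
# Each trial samples whether each risk event occurs and, if so, its cost.

import random

def one_trial():
    cost = 0.0
    if random.random() < 0.10:           # expert: 10% chance of stuck pipe
        cost += random.uniform(1, 5)     # ...costing $1-5 million
    if random.random() < 0.001:          # expert: 0.1% chance of a blowout
        cost += random.uniform(100, 1000)
    return cost

trials = [one_trial() for _ in range(100_000)]
print(f"expected risk cost: ${sum(trials) / len(trials):.2f}m per well")
```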

One can imagine a situation where the whole enterprise, risks, outcomes and decisions are encompassed in some massive computer model. Value judgments (how much is a life worth, what is the risk of a 40-year-old pipeline exploding, how much will it cost to fix) are taken by the modelers. Numbers are duly crunched in a process that I call ‘dumbing-up,’ because the hard (not to say intractable) decisions are made by the experts, leaving the boss to rubber stamp the model.

I admit that this is a dystopian picture and I probably would not have painted it had it not been for another presentation2 made during the SPE session on safety management by ENI’s Annamaria Petrone on ‘Evaluating the HSE risks and costs of major accidents in the upstream.’ Petrone outlined the results of ENI’s Ergo project that seeks to put a monetary value on the cost of major accidents.

While recognizing that the exercise is not easy, Petrone puts forward a methodology that puts a monetary value on the ‘cost of risk’ associated with the production of a barrel of oil. The cost of risk integrates factors such as personnel safety, environmental risk and risk to the asset. The computation assumed inter alia that a lost life ‘costs’ $100 million, a figure drawn from the UK’s health and safety executive.

Petrone’s talk outlined the use of the ‘bow tie’ risk analysis approach and (you guessed it) risk matrices, along with accident statistics from the oil and gas producers’ association. All of which is rolled up using a ‘baseline risk assessment tool’ into a ‘cost of risk’ metric. The study found that the main contribution to the metric was from blowout risk. Not much of a surprise there. What was surprising was the monetary value of the overall risk—which came out to be ‘of the order of one eurocent per barrel.’

It so happened that while Petrone was making her presentation, I had a copy of the New Orleans daily, the Times-Picayune, hidden under my computer, which offered some additional data on the cost of a major accident. The Times informed me that BP was potentially liable for $18 billion (€14 billion) in damages in respect of the Macondo blowout.

So if you are putting aside one eurocent per barrel, how many barrels do you need to produce to offset this liability? By my reckoning that comes to 1.4 trillion barrels! That is more than the whole world has produced to date. So something is wrong here, with ENI’s sums or with how the legislator has figured the damages or perhaps both.
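The arithmetic, for anyone who wants to check my sums:

```python
# At one eurocent per barrel, barrels needed to cover a EUR 14 billion liability:
print(14e9 / 0.01)  # 1.4e12, i.e. 1.4 trillion barrels
```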

How does all this relate to dumbing-up in decision making? Well, I think that it is saying that the monetary/mathematical approach to decision making and risk management is tricky. While the modelers and statisticians undoubtedly have a role to play, I can sympathize with a management that shoos them all out of the door before taking a major decision. How will such decisions be made in reality? I don’t know. It does appear that current analyses are flawed. Bratvold, in a short discussion following his presentation, observed that management discourse can also be rather misleading, ‘Managers always claim that safety is the number one priority, but if that were true we would not drill at all!’

1. Bratvold et al. SPE 166269

2. Petrone et al. SPE 166245

@neilmcn


IBM cloud computing in oil and gas

New RedBook looks at oil and gas technical computing in the cloud. While it does a fair job of telling us what we know and do, it fails to make a compelling case for doing it in the cloud.

A new IBM RedBook draft, ‘IBM technical computing clouds’ (TCC) describes a flexible high performance compute infrastructure built around IBM’s SmartCloud solutions. TCC includes a chapter on oil and gas where it is claimed that new approaches are needed to ‘improve discovery, production and recovery rates.’ TCC enumerates many of the forces acting on the oil and gas vertical from a macroeconomic standpoint (we will spare you the details) to reason that a ‘demand for innovation is creating opportunities to push IT boundaries.’

Seismic data is ‘exploding, doubling every year in terms of the data footprint’ and is ‘expected to go up dramatically.’ Seismic imaging is a key focus area where resolution has been constrained by available compute power. Now oil and gas company R&D divisions are planning for ‘exascale’ projects in 2020. This represents a ‘clear roadmap’ to compute requirements ‘a thousand times greater than we have today.’ Reservoir simulation and economics use less-parallel algorithms than seismic imaging and can require large amounts of memory per node. Large shared memory machines and high bandwidth interconnects such as InfiniBand are the norm.

To date, while the cloud model has been of interest to oil and gas, there have been few deployments. This is because the systems used are so big that commercial cloud offerings do not have the necessary capacity. TCC’s authors claim that this is about to change, at least for some parts of the workflow such as ‘remote 3D visualization and collaboration.’ Most visualization software and tools used in seismic imaging and reservoir simulation can leverage 3D desktop virtualization. Enter the IBM platform application center (PAC) remote visualization offering that includes built-in application templates for oil and gas.

TCC also covers more business-oriented applications such as using InfoSphere BigInsights, IBM’s Hadoop/MapReduce implementation for ‘big data’ analytics and PureData/Netezza data warehouse appliances. A ‘basic’ BigInsights edition is available free of charge for data environments up to 10 TB. For spreadsheet aficionados short on horsepower, IBM’s ‘BigSheets’ is available, a browser-based analytic tool that enables business users and users with no programming knowledge to explore and analyze data in the distributed file system. Finally IBM’s text analytics (as featured in IBM Watson) is available as a cloud-based service for unstructured text data analysis.

All in all TCC is a bit of a jumble. It does a better job of telling us what we already know than making a compelling case for doing it in the cloud. Perhaps this reflects IBM’s problem as a vendor of big iron that is trying not to cannibalize its business with services in the cloud.


Dublin Core and fracture data

Conference presentation shows how to capture and share engineering fracture metadata.

You may have been following the debate on the semantic web and its use in technical data. One semantic standard that does appear to have legs is the Dublin core metadata initiative. DCMI’s origins are in textual metadata—but it also has technical application. One such was featured at the DC-2013 conference held last month in Lisbon, Portugal, where João Aguiar Castro (University of Porto) and others presented a paper on ‘Designing an application profile using qualified Dublin Core: a case study with fracture mechanics datasets.’ The authors observe that metadata production for research datasets is a non-trivial exercise and in general researchers are unconcerned with data preservation.

A standard means of sharing data descriptors should make for interoperable data sets—but attention is required to ‘guarantee metadata comprehensiveness and accuracy.’ The result is a domain-specific DCMI application profile, along with curation tools for researchers to manage and describe their datasets. Note that the fracture studies in question do not relate directly to rock mechanics or non-conventional exploration but rather to mechanical engineering. On the other hand, science is science and semantics is ... now what is it again?
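For the curious, here is a sketch of what a qualified Dublin Core description of such a dataset might look like, expressed as a simple Python mapping. The dc:/dcterms: names are genuine DCMI terms; the values and the ‘fract:’ domain extension are invented for illustration:

```python
# Sketch of a qualified Dublin Core description of a research dataset. The
# dc:/dcterms: names are genuine DCMI terms; the values and the 'fract:'
# domain extension are invented for illustration.

record = {
    "dc:title":        "Fatigue crack growth test, aluminium alloy specimen",
    "dc:creator":      "Fracture mechanics group, University of Porto",
    "dcterms:created": "2013-06-01",
    "dc:format":       "text/csv",
    "dc:subject":      "fracture mechanics",
    # domain-specific terms added by the application profile
    "fract:specimenGeometry": "compact tension",
    "fract:loadRatio":        0.1,
}
```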


DNV’s six critical levers for holistic safety framework

Position paper offers advice on risk management and analyzes safety statistics in US and EU.

A new position paper from DNV, ‘Enhancing offshore safety and environmental performance,’ proposes six ‘levers’ that should provide a ‘holistic safety framework for the oil and gas industry.’ A risk-based approach needs to be supplemented by ‘prescriptive regulations and standards’ along with independent verification. Parties involved in oil and gas operations work via multiple contracts and subcontracts. All must clearly understand their roles and responsibilities with regard to safety. DNV advocates a holistic risk approach while recognizing that it can be a challenge to maintain this over the lifetime of a field as the parties involved change over time. All should have access to a tool that records up-to-date risk identification and provides a complete view of exposures. A similar approach is needed for safety and environmental issues. Other levers are shared performance monitoring, advanced barrier management and a strong focus on ‘people and process’ management.

The DNV report includes an analysis of occupational safety data from oil and gas company reports and suggests a ten-fold improvement over the last couple of decades. On the other hand, financial losses from major accidents have hardly improved. There are also striking geographical differences. In the five years from 2004 to 2009, US offshore fatalities were over four times higher per person-hour worked than in European waters, even though many of the same companies work in both areas. The 16-page report is a free download from DNV.


History—the EU ISPDM pipeline data flop

How the data model war was lost and the model found—with help from the Wayback Machine.

As a journal of record, we try to avoid loose ends and offer this brief summary of a project, the ‘Industry standard pipeline data management’ project (ISPDM), that we last reported on back in our October 2002 issue when the pipeline data wars were raging between MJ Harden and PODS, with Esri chipping in with a pipeline geodatabase.

As we reported at the time, the EU Commission saw fit to launch a third competing model, the ISPDM, to provide a more ‘international’ footprint. The ISPDM project was endowed with a reported €1.8 million (with €1 million from the EU taxpayer) and produced an impressive if ephemeral set of deliverables that were the subject of a presentation1 to the IPC02 International pipeline conference held in Calgary. These included business object definitions, UML models and XML schemas.

The ISPDM model was briefly available on its own ispdm.org website. Unfortunately, after the millions had been spent, the parties involved (Thales, Andrew Palmer Associates, ETL Solutions, Rosen and POSC Caesar) decided that the PODS model had won the day and the ISPDM website was decommissioned, leaving practically nothing to show for the effort. The only online trace we could find was the Wayback machine’s recording of the ispdm.org website as it was in October 2002. Pipeline nostalgics can check it out on Wayback.

1. IPC02-27422 ISPDM, A 21st Century Data Hub for Pipeline Systems.


Rig-One and the HCC drilling training center

IADC’s workforce attraction program, Houston Community College initiative to boost recruitment.

President emeritus of the International association of drilling contractors (IADC), Lee Hunt, blogging on GE’s Change Forum, observed starkly that ‘between retirements and industry growth, we collectively face a chasm of insufficient talent needed for a sustainable industry.’ Hunt outlines some of the efforts currently being pursued by educational establishments to enhance recruitment, training and certification of oilfield entry-level staff. The IADC has launched a workforce attraction and development initiative (WADI), an outreach program that advocates competency-based training for drilling rig positions such as roustabout, roughneck, floorhand and driller.

Houston Community College1 has announced a global oil and gas drilling training center, run in cooperation with the University of Texas at Tyler. The program covers advanced skills in mechanics, electronics and hydraulics. HCC has also kicked off the Rig-One experience, a safety and skills laboratory that includes a mock-up of the working environment for offshore roustabouts along with a blended e-learning and classroom course. Visit Rig-One.

1. Hunt is a strategic advisor to the acting chancellor of the HCC on Rig-One.


WellAware launches SaaS-based oilfield communications

Secure, reliable machine to machine communications network to support Eagle Ford operators.

Startup WellAware of San Antonio, TX is to offer oil country situational awareness and asset tracking services. Its Eagle Ford network is already operational and the company has signed its first client, Welder E&P. WellAware has developed proprietary software and a machine-to-machine (M2M) data network that enables oilfield and pipeline monitoring for its customers. Using an iPhone, iPad, Android device or a Windows 8 platform, users can track what is happening in the field in real time.

WellAware CEO and co-founder Matt Harrison said ‘Well monitoring information, pipeline and safety data is often unreliable and difficult to obtain because companies still depend on legacy communication, automation and software technologies.’ WellAware offers a secure, reliable M2M data network along with interactive, map-based endpoints and bidirectional control of oilfield assets. The system is claimed to prevent theft and enable ‘rapid adjustment of production strategies to improve recovery.’ Communications leverage patented random phase multiple access (RPMA) technology from Chevron Technology Ventures-backed On-Ramp Wireless. More from WellAware.


Paradigm announces Epic data infrastructure

‘Open and integrated’ interpretation platform extends Epos database with third party plug-ins.

At the annual meeting and exposition of the Society of exploration geophysicists in Houston last month Paradigm introduced its new ‘open and integrated’ interpretation and data platform, Epic. Included in the Paradigm 2014 release, Epic introduces a programming interface to Paradigm applications and the Epos database.

Epic also adds a common user interface to Paradigm’s Epos database infrastructure along with functionality for the creation of custom workflows. Developers can integrate their own applications with Epic royalty-free, either via a plug-in or by writing directly to the Epos database.

Paradigm executive VP technology Duane Dopkin said, ‘Users can now select specific Paradigm applications to connect into their primary platform and achieve efficient product integration, multi-disciplinary collaboration and use our applications alongside third-party solutions.’

Epic will be released in stages, beginning with an infrastructure that unifies Paradigm’s application suite into a single integrated console. Connectors to third-party platforms such as Petrel, JavaSeis and ArcGIS will also become available in 2013 and early 2014, augmenting pre-existing connections into OpenWorks, GeoFrame and OpenSpirit. More from Paradigm.


Software, hardware short takes

Allegro, Aveva, Blueback Reservoir, DAP Technologies, Deloitte, FFA, Blue Marble, Kongsberg, Neuralog, GeoTools, Senergy, Geotrace/Tigress.

Allegro has announced V8.0 of its eponymous energy trading and risk management suite with updates to its credit, logistics, midstream and workflow modules.

Aveva has added new piping functionality to its 3D plant design package PDMS including an improved schematic 3D integrator for comparing P&IDs and 3D pipe design, integration with mechanical CAD systems and with Aveva’s own Bocad structural design and fab tool. PDMS is now also ‘Citrix-ready.’

V2.0 of Blueback Reservoir’s Geodata Investigator plug-in for Petrel adds a spatial dimension and new matrix and parallel coordinates plots. A new version of Blueback’s Project Tracker is also out, now integrated with Studio for Petrel, and including a spatial query function.

DAP Technologies has announced the DAP M9000 range of rugged mobile tablets incorporating active and passive radio frequency identification (RFID) reader capabilities.

Deloitte has unveiled PetroInsight, an interface to its data products tailored to new ventures that offers ‘data-rich’ views of global upstream oil and gas activity.

The 2013 release of FFA’s GeoTeric includes ‘Fault Expression,’ an example-driven approach to fault extraction from seismic data volumes.

Blue Marble has announced Global Energy Mapper V15.0 including a geographic calculator for coordinate transformation leveraging a direct connect to the online EPSG registry.

Kongsberg Oil & Gas Technologies has released LedaFlow 1.4, a new version of its transient multiphase simulator for wells and pipelines with a claimed 4X speedup in network calculations and multi-CPU capability. The new version includes Multiflash, a state-of-the-art multiphase equilibrium calculation package.

Neuralog has unveiled three new NeuraLabel 5000e high-speed GHS-compliant chemical drum label printers compatible with SAP and all major label authoring software.

The GeoTools community has announced GeoTools 10.0 with new NetCDF and raster data functionality, a new implementation of its shapefile datastore and new OGC modules for WCS 2.0 and WCS 2.0 EO models.

Senergy has announced Interactive Petrophysics (IP) V4.2 with new cement evaluation and production log analysis modules.

Geotrace announced Tigress 6 earlier this year, its ‘major new’ data management toolset. The release includes a Windows port, a SEGY Toolbox and a SQLite database option. A GeoBrowse for ArcGIS add-on allows users to interact with Tigress projects and data via the ArcGIS Desktop environment. The GTK platform was used to support Geotrace’s cross-platform development.


Salvo project to publish process guidebook

Systematic approach to strategic asset information management and best practices toolbox.

The strategic assets lifecycle value optimization (Salvo) project is to share the results of its deliberations in a Process Guidebook to be published later this year. Salvo is a cross-industry initiative (with one oil industry member, Sasol) that has addressed the ‘perfect storm’ of aging infrastructure, capital constraints, uncertain data and risks in the face of growing demands on performance.

Program director John Woodhouse of The Woodhouse Partnership said, ‘Techniques developed under the Salvo project for managing aging assets have been field-proven and are now delivering cost savings, increased transparency and consistency in critical risk-based decisions.’

Salvo describes a systematic approach of problem prioritization and understanding, identifying mitigation strategies and evaluating their value-for-money. The process rolls all this up into an actionable program along with total cost, risk, performance and resource implications.

The Salvo process is supported by an extensive toolbox of recommended best practices, training, decision-support software tools, guidance and templates along with a strategic asset management plan (Stamp) as required by the ISO 55001 standard for asset management. More from Salvo and watch the video.


INT rolls-out GeoToolkit with JavaScript/HTML5

Interactive Network Technologies and Intel agree, HTML5 is ‘the new Java.’

While some vendors seem to have stuck with Adobe Flash and Microsoft Silverlight well beyond their sell-by dates, Interactive Network Technologies (INT) has taken the standards route to endpoint graphics and now offers JavaScript and HTML5 in its GeoToolkit. The popular graphics-components package for display of seismic data, logs and scientific plots now offers cross platform compatibility, running on ‘anything from desktops to mobile devices.’

At the SPE in New Orleans, Olivier Lhemann told Oil IT Journal, ‘These are the tools of the future for developers. HTML5, JavaScript and Google’s webmaster development tools. Everything now can run in the browser, on tablets, iPhones, iPads or desktops, all with the same code base.’

Further endorsement of HTML5 came in an announcement from Intel in its Software Adrenaline magazine, where Intel ‘visionary’ Moh Haghighat argues that HTML5 is the new Java. ‘HTML5 is an attractive alternative to Java. You just write your application and run it on any kind of computing device, whether it’s a phone, tablet, netbook, desktop, or TV.’ More on HTML5 from Intel (with a good online discussion) and on the GeoToolkit from paul.schatz@int.com.


OSIsoft/PI System user group—Paris

Oil IT Journal offers its own PI System 101. Talisman/Sinopec’s PI System usage on mature asset. Mol Group SVP Béla Kelemen on the parlous state of EU refining. Moving from ‘tactical’ to ‘strategic’ use of PI. PI-based enterprise analytics help Columbia Pipeline adapt to non-conventional boom.

At the OSIsoft user conference in Paris last month we sat in on what was billed as the PI System 101 hoping to learn and report on the essentials of OSIsoft’s technology. Unfortunately this turned out to be less of a technology 101 and more of a reading of the OSIsoft marketing material. So we start with our own idea of what could be a PI 101. The PI System, OSIsoft’s flagship, is a data historian. That is what a database is called in the process world. PI records data from equipment ‘tags,’ i.e. streams of real-time information on temperatures, flow rates and so on from valves, rotating equipment and other assets. PI understands the plethoric data formats used in a plant, recording data as ‘time series,’ i.e. as a sequence of values and time stamps. PI does not care or know if an equipment item is functioning correctly (or at all). Moreover it can only record what is transmitted. While older kit may produce less than perfectly formed data, newer stuff may provide more context and fancier data formats. Making sense of what is captured to the historian will likely require a lot more attention, and a constellation of software from OSIsoft and third parties has evolved to turn the raw data into a more usable form.
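To make the time series idea concrete, here is a toy historian in a few lines of Python. Tag names and readings are invented; a real PI System adds buffering, compression and a lot more besides:

```python
# Toy historian: a tag is a named stream stored as (timestamp, value) pairs.
# Tag names and readings are invented; real historians add buffering,
# compression, interpolation and much more.

from datetime import datetime

historian = {}   # tag name -> list of (timestamp, value)

def record(tag, value):
    historian.setdefault(tag, []).append((datetime.now(), value))

record("PUMP-101.DischargePressure.psi", 1482.5)
record("PUMP-101.BearingTemp.degC", 71.3)
# note: the historian only stores what it is sent -- it has no notion of
# whether the pump is actually healthy
```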

A good real-world example of PI usage was presented by Sam Scott (Talisman/Sinopec Energy). Talisman’s average asset age is 29 years. Talisman started using PI System in 2001 with a 1,000-tag subsea system. Today it has 250k tags, 50 interfaces and 50 concurrent users. PI System provides a single source of offshore data and feeds hydrocarbon accounting, laboratory systems and corrosion monitoring. Talisman’s aging assets make for a challenge in maintaining safety and production-critical equipment. To combat falling production, Talisman has instigated a ‘rotating equipment excellence program’ (REEP) to improve equipment integrity and reliability. REEP covers performance monitoring, spares management, competency, contracts and audits. REEP contractor Ian Gore (CSE Controls) explained that older equipment is either poorly instrumented or unconnected, displays developed over years tend to have inconsistent interfaces and third party packages tend to focus ‘on a single piece of kit or use case.’ Enter Talisman’s ‘Spotlight’ REEP development that presents all equipment data in the same place and with the same look and feel. Spotlight calculates performance KPIs and alarm thresholds, notifying stakeholders by email. The result is better onshore/offshore collaboration, early detection of performance and integrity problems and a platform for condition-based maintenance. Currently REEP monitors nearly 3,000 critical rotating equipment items globally, using PI AF, PI ACE, PI Performance Equations and PI Notifications. A new Citrix environment replicates Process Book screens onto workers’ iPads.

Béla Kelemen (MOL Group) provided an entertaining exposé on the state of the industry and of ‘getting more from less’ in the face of industrial challenges in the EU. These include global competition, EU regulations and local traditions. Kelemen sees the EU as a ‘retiring old lady.’ The US, with its shale oil production, is a ‘returning rock star.’ And Asia is the hungry youngster following ‘10 years of 10% growth.’ The EU is working towards a 20% CO2 reduction, 20% more energy efficiency and 20% from renewables. ‘Who would want to be in the downstream?’

Downstream, which used to be one of the richest industries, has developed bad habits and failed to see the change that was coming. MOL is fighting back with a program to return to its ‘best in class’ status. PI is to be the backbone of MOL Group’s refining and marketing infrastructure.

Tibor Komroczki drilled down further into MOL’s usage of PI, which has evolved from tactical to strategic. MOL has built a PI-based common data model representing its asset hierarchy in time and space. MOL’s new downstream program addresses the difficult economic climate described by Kelemen. Third party applications such as Sigmafine, Semafor for KPI management and Opralog’s E-Logbook are all integrated through PI. While there is less use of PI in process automation, MOL is working to fix this by blending information and automation, again by better leverage of PI. A new energy use dashboard produced a surprise, ‘the first month was completely red!’

Columbia Pipeline’s Dave Johnson showed how high availability PI data feeds its own-brand CPG Enterprise Analytics. Columbia is busy adapting its distribution networks to the US non-conventional boom. Johnson prepared a video demo showing how the system could give early warning of a failing compressor bearing (a common issue at the time). The cost of a timely replacement was much less than that of a random failure. Columbia has also deployed Transpara’s Visual KPI to its mobile devices and is in the process of rolling out iPads, which are ‘really catching on.’

Marco Piantanida and Christina Bottani (ENI) also showed how PI (and SmartSignal) helped in predictive maintenance. ENI’s onshore production from the Val d’Agri field has wells scattered around mountains and valleys. Long pipelines cause irregular flow, with slugs of gas and oil causing varying water cut, liquid carry-over and multiple equipment problems. Maintenance to date has been at fixed intervals but ENI is moving to a condition-based paradigm. Again PI is the key and has been interfaced with everything, including a 1991-vintage DCS. SmartSignal catches most faults but additional investigations with Coresight and ProcessBook have provided warning of some imminent failures.

Reinaldo Jimenez described the PI System as Repsol’s single source of real-time data in its new production accounting system. PI is also enhancing the company’s maintenance work order management through real-time integration across multiple corporate systems. Repsol has also leveraged PI to manage bypass request approvals and renewals.

Lars Anton Mygland and Astri Hinna Fjermeros outlined how Statoil (with help from Amitec) has used OSIsoft technology extensively in its NoxTool emissions reporting tool. NoxTool was built around an existing PI system and uses PI web parts to provide a compelling view of Statoil’s assets. The authors observed that it is better to extend an existing PI system rather than build one from scratch.

A complex system of penalties and bonuses drives French utility TIGF’s gas storage business, as Christophe Cuyala described. In its first year of operations TIGF incurred a €500k penalty. A new PI System, along with PI ACE (advanced calculation engine), UFL (universal file and stream loading) and vCampus for third party data access means that TIGF is now getting bonuses for its reporting. More from OSIsoft.


Geogathering 2013, Colorado Springs

Shale gale generates renewed interest in pipeline information technology and GIS. ESRI keynote—mileage to rise 2.7 fold by 2035. Operators navigate complex regulatory environment. Software deployed from Earth Analytic, New Century, VoyagerGIS. GTI on ASTM F2897-11a standard.

The biennial Geogathering conference1, held at Colorado Springs in August, saw some 150 attendees from 75 companies. The ‘shale gale’ blowing across North America has sparked off a new round of pipeline construction and brought renewed attention from the regulator and environmental movements. ESRI’s Tom Coolidge cited figures from the US administration which suggested that the pipeline mileage in the US is set to grow from a current 240,000 miles to an estimated 650,000 miles by 2035. The hike will only be possible by deploying sophisticated tools and rigorous governance in the collection and management of data for ‘environmentally responsible routing.’

The National environmental policy act (Nepa) approval process documents issues such as wetland avoidance, river crossings, endangered species and archeologically sensitive areas. Combining such information is highly amenable to GIS processing, leveraging an ‘environmental resources data stack’ to combine data from various sources including public web services. Coolidge showed how this can be achieved using a pipeline routing tool developed by Willbros that generates a minimal impact route along with risk assessments and cost analysis.

A similar process was described by Erik Potter (M3 Midstream) and Wetherbee Dorshow (Earth Analytic Inc.) who stressed the need for smarter routing in the face of social media-fueled opposition to new pipelines. M3 deploys a PODS 4.2 pipeline and facilities relational database linked to a Coler and Colantonio Intrepid4 Esri geodatabase. The company was looking for a ‘more formalized and measurable’ routing process and turned to EAI which helped with a move from a traditional ‘pipeliner’ route reviewing process to an analytical approach using EAI’s ‘SmartFootprint’ and ESRI’s ArcToolbox to automate route selection and estimate construction costs. SmartFootprint produces ‘cost surfaces’ from all available data that can be combined into a ‘suitability surface’ for route selection and reporting.
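The cost-surface idea is simple enough to sketch. In the (invented) example below, rasters representing individual concerns are weighted and summed into a single suitability surface:

```python
# Sketch of the cost/suitability surface idea (random stand-in rasters and
# invented weights). Each concern is a grid; a weighted sum yields a single
# surface from which the most favorable cells can be picked.

import numpy as np

wetlands  = np.random.rand(100, 100)   # stand-ins for real environmental rasters
slope     = np.random.rand(100, 100)
land_cost = np.random.rand(100, 100)

suitability = 0.5 * wetlands + 0.3 * slope + 0.2 * land_cost
print(suitability.argmin())   # flat index of the single most favorable cell
```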

The growing use of digital data collection in the field means that synchronizing remote devices with HQ can be tricky. Garry Keener showed how DCP Midstream has transformed remote data synch ‘from vaporware to production.’ DCP uses Delorme XMap in the field and a PODS 4.02 Oracle/SDE server in the office. Field personnel can update and fix GIS data from the field while disconnected. New Century Software’s Spatial-Synchronizer keeps remote workers’ devices in synch with HQ.

The complexity of remote working was also a theme of Richard Couture’s presentation of Noble Energy’s ‘new world’ goals that target a single source of digital data and GIS-based integration of pipeline construction and facilities. Noble has developed a new data strategy to support its push into the prolific Niobrara shale. The approach involved an Esri geodatabase fed with data captured during construction from high accuracy GeoXH and Vivax Locator devices and in-field mapping with Delorme Xmap. Teaming with construction has meant better maps and a single data source across pipeline and facilities.

A radical approach to spatial data complexity was suggested by Jason Wilson (SM Energy) and Jon Polay (VoyagerGIS). SME currently has a large amount of disparate spatial data with no repository, making it hard to locate needed data sets. The company has embarked on a comprehensive revamp of its spatial data, starting with a major search and retrieve program that leverages VoyagerGIS’ spatial indexing technology. This is allowing SME to find and consolidate its GIS data to a combination of Esri geodatabase and ArcSDE. Safe Software’s FME and ArcGIS Online also ran. To date some 17,000 spatial files have been indexed, de-duplicated and captured to the geodatabase.

Kevin Miller explained how Summit Midstream has established a pipeline GIS data strategy including a roll-your-own data model. This, the ‘Summit 1.0 data model,’ incorporates alignment sheets, material test reports, as-built documents, operator knowledge and data dictionaries.

Mike Harris (Anadarko) with help from consultant Jan Van Sickle asked, ‘how do we keep pace with accelerating change?’ Their focus was the regulatory environment around mechanical integrity a.k.a. 40CFR68 that ensures that ‘process equipment is fabricated from the proper materials, is properly installed, maintained and, if needed, replaced to prevent failures and accidental releases.’ A comprehensive process was outlined for integrity management that included some novel technology. The Anoto digital pen was used for data entry into SharePoint forms, Documentum and SAP. Even more exciting is the potential for use of drones for data collection, such as the Quadcopter.

Monica Ferrer of the Gas Technology Institute unveiled new standards for mobile assets. ASTM F2897-11a, a standard encoding system for natural gas distribution components (pipe, tubing, valves and more), defines a unique identifier that encodes essential asset attribute information as a 16-digit alphanumeric code.

This allows manufacturers to barcode pipe and fittings with a unique identifier and allows operators to document the location of specific assets. The scope of the standard is being finalized and GIS data collection algorithm development will be complete in 2014 under the GTI’s Intelligent Pipeline Program. For Ferrer, the combination of tablet computers and mobile GIS is a ‘disruptive innovation’ for the pipeline industry. Visit GeoGathering and download the 2013 presentations.
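To illustrate the 16-character idea, here is a hypothetical encoder. The field layout and widths below are invented; the real standard defines its own attribute positions and encoding:

```python
# Hypothetical illustration of a 16-character component identifier. The field
# layout and widths here are invented; ASTM F2897 defines its own attribute
# positions and encoding.

def build_id(mfr, lot, date_code, material, size_code):
    code = f"{mfr:0>4}{lot:0>4}{date_code:0>4}{material:0>2}{size_code:0>2}"
    assert len(code) == 16
    return code

print(build_id("A1", "73", "1347", "PE", "04"))  # '00A100731347PE04'
```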

1. The GeoGathering conference is hosted by New Century Software every two years. The planning committee is chosen from the pipeline community and includes representatives from pipeline operators.

2. Adapted from Title 40 of the US Code of Federal Regulation Part 68.3.


Folks, facts, orgs ...

Commander Drilling Technologies, RSH Energy, Apache, Saudi Aramco, AspenTech, Chevron, Clariant, Ikon, Diversified Well Logging, Energen, Enventure, Exprodat, GE, Honeywell, Baker Hughes, Navigant, LMKR, IFPen, McLaren Software, PIDX, M2M Council, Graco, Skynet Labs, Enersight, International Energy Agency.

Startup Commander Drilling Technologies is led by Stephen Morrison, previously with Applied Drilling Technologies.

Jim Alexander is CEO of RSH Energy. He hails from Hatch Mott MacDonald.

Apache CTO Mike Bahorich has taken on responsibility for worldwide projects, horizontal drilling and completion, special projects and corporate purchasing.

Saudi Aramco, GE and Tata Consultancy Services have launched an all-female business process services center in Riyadh.

Mark Fusco has retired as CEO of AspenTech. His replacement is Antonio Pietri.

Joe Geagea is now senior VP, technology, projects and services at Chevron.

Clariant Oil and Mining Services has opened its global HQ in Houston, a campus and lab for oil and mining technology.

Ehsan Naeini of Ikon Science is a visiting scholar at The Center for Wave Phenomena at the Colorado School of Mines.

Aaron Swanson has joined Diversified Well Logging as COO.

Davis Richards is head of drilling and completion operations for Energen Corporation. He hails from EP Energy.

Eric Paulsen has joined Enventure as Country Manager for Norway.

Frauke Diehl has joined Exprodat as account manager. She hails from ESRI.

Lorenzo Simonelli is president and CEO of GE Oil and Gas, replacing Dan Heintzelman, now GE’s vice chairman. GE has opened an oil and gas and digital energy business unit in Cary, NC.

Honeywell has opened a safety training center in Houston with simulators of a catwalk, pipe track, climbing pole and more.

Melanie Kania has moved from Weatherford to Baker Hughes as enterprise media relations specialist.

Navigant has opened a new office in Doha, Qatar, headed up by Michael Kenyon.

Ali Ramady is EU sales manager for LMKR Geographix. He comes from Fugro.

Pierre-Henri Bigeard is now vice general manager of IFP Energies Nouvelles (IFPEN) and head of R&D.

David Brazier is VP marketing for McLaren Software’s new asset intensive division. He hails from IBM.

Fadi Kanafani is the new PIDX president and CEO. Oildex’s Michael Weiss is to serve a two-year term as an at-large member of the executive committee.

Jürgen Hase is now chairman of the international M2M Council (IMC).

Mark Eason is VP marketing with Graco Oilfield Services.

Steve Devereux is now senior drilling superintendent of Skynet Labs.

Enersight’s new Brisbane location is headed-up by Don Merritt, VP Australia.

The International Energy Agency has placed over 20 years of worldwide energy data online.


Done deals

DNV GL, Emerson, Geoforce, Hexagon, Norris Production Solutions, National Oilwell Varco, RSH Energy, Drillers.com, Skynet Labs, Viking Saatsea.

DNV and GL have merged into DNV GL.

Emerson has acquired safety and environmental equipment supplier Enardo.

Houston Ventures and Palmetto Partners are now minority shareholders in Geoforce.

Hexagon is to acquire airborne laser survey specialist Airborne Hydrography.

Dover unit Norris Production Solutions has acquired Spirit Global Energy Solutions of Midland, TX.

National Oilwell Varco is to spin off its distribution business. Credit Suisse is advising on the deal.

Following its acquisition by Oaktree’s GFI Energy Group, RSH Engineering has been renamed RSH Energy.

Drillers.com is now a shareholder in Skynet Labs.

Viking has acquired a ‘major stake’ in IT startup Saatsea, a provider of cloud-based onboard training and competence management systems. The unit has been renamed Viking Saatsea.


Kongsberg’s real-time well-monitoring and early-warning system

WellAdvisor addresses casing running, drilling, cementing and blowout prevention.

Kongsberg Oil & Gas Technologies has announced a real-time advisory system for improved well construction operations (a.k.a. drilling). WellAdvisor is a component of Kongsberg’s SiteCom platform that supports drilling and ‘life of well’ reliability. WellAdvisor resulted from a collaboration between Kongsberg and BP that centered on monitoring casing running operations. The casing running system is the first of several systems that BP is evaluating for potential development and deployment. Other target domains include drilling and cementing and blowout prevention.

SiteCom WellAdvisor organizes information in a standardized format on consoles to facilitate information sharing and collaboration. The consoles include an early warning system, alerting users to potential issues and providing ‘the right information to the right place at the right time, integrating recommended practices and expertise with real-time data.’ More from Kongsberg.


Kepware’s distributed comms heralds demise of host centricity

OPC-UA and peer-to-peer architecture reduces control system bandwidth hogging.

A white paper authored by Kepware Technologies’ CEO Tony Paine and Russel Treat, CEO of EnerSys, advocates a distributed architecture for control system communications. The authors observe that while control systems are deployed in many industries, the geographical spread of oil and gas production systems and pipelines make for different requirements.

While industrial control systems may be monolithic and leverage standard off-the-shelf communications, oil and gas deployments will likely be heterogeneous, loosely integrated systems using a mixture of wireless, fiber optic, and telephony.

Communication between applications and field devices requires the use of multiple wireless technologies, each with its own bandwidth and quality of service limitations. Currently such systems are managed by a central host. The trouble is that, in the absence of a universal protocol, the host-centric approach scales poorly and soon degenerates as it hogs bandwidth and delays transactions. Achieving an overall view of pump stations, compressor stations and processing plants is complicated by the proliferating data collectors.

Enter the new distributed communications architecture, a peer to peer system that spreads data collectors across multiple computers, each closer to field devices, and that handles issues such as intermittent connectivity. What magic underlies such a system? It is our old friend (Oil ITJ Jan 2012) the OPC Unified Architecture, ‘whose purpose is to allow vendors to solve these very problems.’ The authors conclude that ‘The new architecture provides oil and gas operations with an alternative to the current model. One that is more secure and cost effective, and that will scale to meet tomorrow’s needs.’ More from Kepware and EnerSys.
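By way of illustration, reading a single value from a field-located data collector over OPC UA might look like the following, using the open source python-opcua library (the endpoint URL and node id are invented):

```python
# Reading one value from a field-located OPC UA data collector with the open
# source python-opcua library. Endpoint URL and node id are invented.

from opcua import Client

client = Client("opc.tcp://compressor-station-7:4840")
client.connect()
try:
    node = client.get_node("ns=2;s=Pump101.DischargePressure")
    print(node.get_value())
finally:
    client.disconnect()
```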


Oil and gas big data meets cloud computing

Ogre Systems and TerraPetro team to offer hosted analytics for upstream and midstream.

Dallas-based Ogre Systems has announced Ogre Data Systems (ODS), ‘a big data and analytics cloud computing system for the upstream and midstream oil and gas industry.’ The ODS offering will be ‘exclusively distributed’ by TerraPetro, also of Dallas.

Ogre’s Petroleum reserves management system (PRMS) offers reserves and economics tools and is used by national and international oil and gas companies. With ODS, these tools can be accessed from tablets or a PC, along with bundled lease-specific production data and customizable decline curve analysis tools for forward modeling.

Ogre VP sales Damian Hutchins explained, ‘ODS will simplify how business gets done, particularly for underserved markets such as mineral owners, land managers, attorneys and investors. Anyone who needs to assess financial risk associated with oil and gas production.’

Greg Hibbard, CEO of TerraPetro added, ‘ODS avoids the requirement for third party data purchase and in-house setup of expensive software. Users just log on from anywhere and generate reserves and financial reports. Anyone with basic oil and gas knowledge can use this subscription-based service.’ Visit ODS.


New social network by RigZone founder

Oilpro.com to ‘bridge the boomer to gen Y gap.’

David Kent, creator of Rigzone.com, has launched Oilpro.com, a ‘professionally-driven’ social network for the oil and gas industry. Oilpro offers engineers, project managers and executives ‘a dedicated place for knowledge sharing and professional networking.’ Kent said, ‘Oilpro was created as a free and open exchange to facilitate the sharing of oilfield knowledge. An analysis by Schlumberger showed that the oil and gas industry skipped generation X and we now have the challenge of bridging the knowledge gap between boomers and gen Y. Oilpro is set to do this using social media strategies that have worked well in other industries.’ More from Oilpro.


Sales, deployments, partnerships …

P2 Energy Solutions, Paradigm, Petrofac, Sigma3, Skynet Labs, Aveva, Wireless Seismic, Deus Rescue, 3M Corp., Ameco, Mitsui, FMC Technologies, Geospace, Honeywell, TD Williamson, Jacobs.

Egyptian Komombo Petroleum has implemented P2 Energy Solutions Ideas joint venture accounting solution.

Premier Oil has chosen Paradigm’s SeisEarth as part of its interpretation toolset. The selection was based on SeisEarth’s ability to work with large 2D/3D regional datasets and its interconnectivity with third-party applications.

Repsol Sinopec has also adopted Paradigm software including SeisEarth, for its Campos Basin offshore Brazil projects.

Petrofac has signed a US$120m agreement with Malaysian Petronas, for the operation and management of two high-specification training facilities.

Sigma3 Integrated Reservoir Solutions has been selected by Fasken Oil & Ranch to provide real-time microseismic fracture mapping, processing and interpretation for key assets in the Permian Basin.

Skynet Labs reports sales of its Drilling Formula Sheet DFS product to Halliburton, Performance Drilling, NOV and BP.

Russian engineering contractor Volga Nipitek is deploying Aveva Plant engineering and design software, including Aveva Diagrams, Instrumentation and PDMS.

Wireless Seismic has announced its first sale of its 3-channel RT System 2 seismic data acquisition system to a ‘major oilfield service company’ for use on passive seismic monitoring projects in North America.

Deus Rescue has signed a deal that gives 3M Company exclusive distribution rights to its fall protection products.

Fluor unit Ameco has formed a joint venture with Mitsui to provide equipment and execution services in Colombia.

FMC Technologies has signed a $650 million deal with Petrobras for the supply of subsea manifolds for its pre-salt fields. FMC has also received an order from Shell to supply subsea systems for the Parque das Conchas Phase 3 development offshore Brazil. Statoil has ordered $90 million worth of subsea equipment for its Gullfaks Rimfaksdalen project.

Geospace Technologies is to deliver a $5.0 million permanent land data acquisition system under a contract with local partner Makamin Petroleum Services Company.

Honeywell’s UOP unit has teamed with Black & Veatch to develop integrated, small-scale LNG plants capable of processing from 50,000 to 500,000 gallons of LNG per day per train.

T.D. Williamson has performed an inline inspection of a pipeline using its Multiple data set inspection tool with SpirALL MFL technology in the UK for Valero Energy.

Jacobs Engineering Group was awarded a contract worth $200 million over four years from BP Exploration Operating Company to support its intervention project at the Sullom Voe Terminal, Shetland Islands, Scotland.


Standards stuff

Energistics, OGP, Fieldbus, Hart, Fiatech, POSC/Caesar, RDA, ICA-OSgeo, SEG, OGC.

The next Energistics-sponsored national data repository conference (NDR2014) will be hosted by Socar, the state oil company of Azerbaijan, in Baku in September 2014. Energistics’ RESQML team has finalized deliverables for V2.0 with a general purpose schema for grid exchange. V2.0 will be available for public review by February 2014.

The international association of Oil and gas producers (OGP) has set up an earth observation subcommittee to support industry projects aimed at improving emergency response.

The Fieldbus and HART foundations are in discussions on a potential merger to create a single foundation ‘dedicated to the needs of intelligent device communications in the world of process automation.’

Fiatech and POSC Caesar Association are requesting feedback from industry participants on key intelligent piping and instrumentation diagram (P&ID) and 3D deliverables.

The Research Data Alliance is calling for contributions on metadata standards tools and use cases.

ICA-OSGeo have established an open source geospatial laboratory at ETH Zurich, Switzerland.

A new SEGY format (SEGY rev 2) was discussed at the 2013 SEG conference last month. Rev 2 will support up to 2³¹ samples per trace, arbitrary sample intervals and microsecond time stamps. Possible changes concern alignment with OGP positional standards and support for encryption, encapsulation and compression.

The OGC Energy & Utilities Domain Working Group is asking for comments on location data standards in energy and utilities. OGC is also asking for comments on its ‘well known text’ standard for coordinate reference systems.


Honeywell’s UOP to help with Pertamina refinery revamp

$1,000,000 grant from US development agency to fund ‘bankable feasibility study.’

Honeywell’s UOP unit has obtained a $1 million grant from the US trade and development agency (USTDA) to partially fund a ‘bankable feasibility study’ for the revamp of five of Pertamina’s largest Indonesian refineries. Pertamina’s total refining assets have a capacity of nearly 1 million barrels per day. USTDA is a government foreign assistance agency that is funded by Congress. The USTDA’s mission is to help companies create US jobs through the export of goods and services to emerging economies.

Honeywell UOP CEO Rajeev Gautam said, ‘We have specialized in the design and modernization of refineries around the world for nearly 100 years and have worked with Pertamina for more than four decades and recently established an office in Jakarta to expand our presence in Indonesia.’ Pertamina president Karen Agustiawan added ‘We need to modernize our refining infrastructure to meet the rising demand for energy and petrochemical products in Indonesia and reduce our reliance on imports.’ More from Honeywell and USTDA.


Valerus teams with Enbase on Command performance monitor

New performance monitoring and predictive analytics for surface facilities.

Houston-based Valerus, a provider of oil and gas handling solutions, has announced a new performance monitoring and predictive analytics toolset for surface facilities. Valerus Command is already in service on the company’s contract compression fleet and operated facilities. Command is claimed to represent a ‘step change’ from traditional scada and monitoring systems, combining remote monitoring and analytics, enterprise asset management (EAM), standard operating procedures and technical expertise.

Pete Lane, Valerus CEO said, ‘Command will help us deliver the best service to our customers, leveraging our existing compression, production and processing expertise to drive performance improvements from the wellhead to the pipeline. We believe that this is the most comprehensive monitoring and analytics system available for surface facilities.’

Valerus partnered with Enbase Energy Technology to develop the performance monitoring and analytics toolset. This collects real time data into the Valerus EAM system and generates work orders which are routed to service technicians. Activity is managed from a new command center in Houston. More from Valerus and Enbase.


‘Breakthrough’ claim for integrated software dependent systems

DNV standard for offshore software systems adopted by Hyundai for Diamond Offshore newbuild.

Following the recent facelift (Oil ITJ April 2013) of its standard for integrated software dependent systems (ISDS), DNV reports a ‘breakthrough’ with (i.e. a sale to) American drilling contractors. The claim is made in respect of a deal done with Hyundai Heavy Industries for the classification of a newbuild semisub that is to be owned and operated by Diamond Offshore Drilling. Hitherto the ISDS standard has only been applied by Norwegian owners.

ISDS represents ‘a new way of thinking’ on offshore verification. The methodology ensures that software and integration problems are detected and resolved early in the project design stage, rather than during commissioning and acceptance testing. Instead of focusing on a final review of documents and installations to ensure they meet product requirements, ISDS reviews the work processes used to deliver software, ‘where complete verification of the final product is nearly impossible.’ More from DNV.


Intego Group simulator for oil and gas disaster prevention

Training simulator specialist signs up with National Center for Simulation trade body.

Lake Mary, Fla.-headquartered outsourcing and software engineering provider Intego Group has joined the National Center for Simulation (NCS), a not-for-profit trade association. Intego is to bring its expertise in 3D simulation to the US market. Intego claims a ‘deep understanding’ of the oil and gas industry and believes that major disasters such as Piper Alpha and Macondo could have been prevented ‘if technical solutions in the area of 3D simulation had been sufficiently evolved.’

Intego’s software engineers were recently involved in a 3D simulation of a heavy lift operation. To minimize the risk of an accident during installation, practice sessions were conducted on the simulator. Intego MD Sergey Glushakov said, ‘We import CAD models of real objects into the 3D simulators to recreate exact virtual copies of the platform itself. This allows an operator to identify optimum conditions such as wind strength, wave force and lifting crane position. Based on recommendations from the 3D simulation, the real-life installation was performed successfully.’ More from Intego.
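
Intego’s simulator is proprietary, but the underlying idea (running candidate environmental conditions through a simulation and retaining those that pass a safety criterion) can be illustrated with a minimal Python sketch. The lift_is_safe function and its coefficients are invented stand-ins for a full 3D simulation run.

    import itertools

    def lift_is_safe(wind_m_s, wave_m, crane_angle_deg):
        """Stand-in for a full 3D simulation run: a simple loading envelope check.
        The coefficients are invented for illustration only."""
        return 0.8 * wind_m_s + 2.0 * wave_m + 0.05 * abs(crane_angle_deg - 30) < 14.0

    # Sweep candidate conditions to map the safe operating window.
    winds = [5, 10, 15]        # wind speed, m/s
    waves = [0.5, 1.5, 2.5]    # significant wave height, m
    angles = [15, 30, 45]      # crane boom angle, degrees

    safe = [(w, h, a) for w, h, a in itertools.product(winds, waves, angles)
            if lift_is_safe(w, h, a)]
    print(f"{len(safe)} of 27 candidate conditions pass the safety check")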


Altair user group on oil and gas use cases

Baker Hughes, Chevron, MCS Kenny on HyperWorks and PBS Pro. SGI’s new CAE appliance.

Several presentations at the 2013 Altair technology conference, held in Garden Grove, CA this month, showed the engineering design and simulation toolset at work in various oil and gas contexts. Baker Hughes’ Yansong Wang described a ‘virtual design’ approach, including finite element analysis and reliability-based optimization with Altair HyperStudy, to design mechanical seals in drill bits for HPHT environments. Ganesh Nanaware (also with Baker Hughes) showed how a finite element model of an expandable liner hanger was built and validated in HyperWorks. Arindam Chakraborty of MCS Kenny described a probabilistic analysis of an ultra deepwater buckle arrestor for large diameter pipelines with HyperWorks. Chevron high performance computing Unix systems administrator Phil Crawford provided insights into the company’s experiences with, and requirements for, Altair and PBS Professional.

Altair also announced a partnership with SGI on a ‘private cloud appliance’ for computer-aided engineering (CAE). The ‘fully configured’ hardware and software bundle, dubbed ‘HyperWorks unlimited,’ offers use of all Altair apps and workload management tools running on an SGI/Intel cluster. It is curious that, for Altair, an ‘appliance’ is now a ‘private cloud,’ especially in the light of an earlier announcement of an Amazon web services edition of its CFDCalc computational fluid dynamics solution. More from Altair.


New weather monitoring offerings from EarthRisk, Gill

TempRisk Apollo quantifies weather risk. New ‘MetStream’ ruggedized data hub.

San Diego-based EarthRisk Technologies has released TempRisk Apollo (TRA), a weather forecasting tool that claims to ‘quantify weather risk’ up to five weeks out. TRA presents ‘objective’ weather data via an online interface that provides subscribers with daily odds on hot and cold events. TRA integrates numerical forecast guidance from external models, such as those of the European Centre for Medium-Range Weather Forecasts, into its algorithm to provide a ‘comprehensive view of weather risk.’

ERT CEO John Plavan said, ‘TRA provides decision makers with the probability of a range of outcomes along with forecast deviations from traditional models.’ Adam O’Shay, president and head of trading at Leeward Pointe Capital, added, ‘TRA provides probabilistic temperature risk-assessment across the North American natural gas market.’ TRA uses a neural network algorithm to power its medium-range forecasts. More from EarthRisk.
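
EarthRisk’s algorithm is proprietary, but the ‘daily odds’ it outputs are, at heart, exceedance probabilities. A minimal sketch, assuming a simple ensemble of temperature forecasts (the numbers are invented), shows how such odds might be computed:

    # Invented ensemble of day-25 temperature forecasts for one location (degF).
    ensemble = [68, 74, 71, 79, 66, 73, 77, 70, 75, 72]

    def odds_of_event(members, threshold, above=True):
        """Probability of a hot (or cold) event as the fraction of ensemble
        members beyond the threshold: the simplest probabilistic forecast."""
        hits = sum(1 for t in members if (t > threshold if above else t < threshold))
        return hits / len(members)

    print(f"P(hot event, T > 75F): {odds_of_event(ensemble, 75):.0%}")
    print(f"P(cold event, T < 70F): {odds_of_event(ensemble, 70, above=False):.0%}")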

In a separate announcement, Gill Instruments has unveiled ‘MetStream,’ a ruggedized meteorological data hub and supporting software that streams multi-channel data to endpoints such as remote tablets, smartphones and PCs. Product manager Richard McKay said, ‘MetStream provides an end-to-end solution to bridge, store and stream real time data to different devices.’ Sensors can be Gill Instruments’ own-brand devices or third party equipment. More from Gill Instruments.
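
Gill has not published the MetStream protocol. As a generic illustration of what a hub that ‘bridges, stores and streams’ multi-channel data does, here is a toy publish/subscribe hub in Python; channel names and endpoints are hypothetical.

    class DataHub:
        """Toy data hub: sensors publish named channels, endpoints subscribe."""
        def __init__(self):
            self.subscribers = {}  # channel name -> list of callbacks

        def subscribe(self, channel, callback):
            self.subscribers.setdefault(channel, []).append(callback)

        def publish(self, channel, value):
            for callback in self.subscribers.get(channel, []):
                callback(channel, value)

    hub = DataHub()
    hub.subscribe("wind_speed", lambda ch, v: print(f"tablet  {ch}={v}"))
    hub.subscribe("wind_speed", lambda ch, v: print(f"desktop {ch}={v}"))
    hub.publish("wind_speed", 12.4)  # one reading fans out to both endpoints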


ENGlobal deploys first universal master control station

Vendor-independent subsea control system deployed on Gulf of Mexico asset.

ENGlobal Corporation has announced the first deployment of its universal master control station (UMCS). The UMCS, a ‘vendor independent’ subsea control system, was installed on an offshore platform in the Gulf of Mexico for a major international oil and gas company. The UMCS provides a standardized interface between industry-available subsea production systems and topsides production facilities. ENGlobal CEO Bill Coskey said, ‘This is the foundation of future subsea controls integration projects, including hydraulic power and electrical systems. Our position as integrator means we also offer specialist execution skills to manage technically complex projects.’

The UMCS integrates multiple subsea equipment vendors’ hardware into a single master control station and leverages ‘scalable object-based’ software and off-the-shelf commercial hardware. The unit provides a standardized interface to subsea communication units, distributed control systems, and electrical and hydraulic power units. Graphics, security protection, interlocks and shutdown sequences are tailored via a configuration tool. ENGlobal acquired what was then the ‘Dolphin’ universal master control station from Control Dynamics International in 2010 (Oil IT Journal April 2010). More from ENGlobal.
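
ENGlobal has not disclosed the UMCS software design, but ‘vendor independent’ integration of heterogeneous hardware usually implies an adapter pattern: one common interface with a thin wrapper per vendor protocol. A minimal Python sketch, with all class and method names hypothetical:

    from abc import ABC, abstractmethod

    class SubseaControlUnit(ABC):
        """Common interface the master control station programs against."""
        @abstractmethod
        def open_valve(self, valve_id): ...

        @abstractmethod
        def read_pressure(self, sensor_id): ...

    class VendorAAdapter(SubseaControlUnit):
        """Wraps one vendor's native protocol behind the common interface."""
        def open_valve(self, valve_id):
            print(f"[vendor A] OPEN {valve_id}")  # would issue the vendor command

        def read_pressure(self, sensor_id):
            return 312.5  # would poll the vendor's telemetry

    class VendorBAdapter(SubseaControlUnit):
        def open_valve(self, valve_id):
            print(f"[vendor B] cmd=open id={valve_id}")

        def read_pressure(self, sensor_id):
            return 310.9

    # The control station treats both vendors identically.
    for unit in (VendorAAdapter(), VendorBAdapter()):
        unit.open_valve("XV-101")
        print(unit.read_pressure("PT-204"))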


More flesh on the bones of GE’s ‘Industrial Internet’

New Predix API to enable third party development of predictive analytics.

GE has put a bit more flesh on the bones of its ‘Industrial Internet’ (II) with the announcement of teamings with Amazon, Accenture and Pivotal and the development of ‘Predix,’ a ‘new’ prediction technology that targets multiple verticals including oil and gas. Predix provides a ‘standard and secure way of connecting machines, industrial big data and people.’

Reading between the lines, it would appear that ‘Predix’ is, or will be, an open (?) API for GE’s existing ‘Predictivity for Oil & Gas’ offering as deployed by BP in its Advanced collaboration environment (ACE). In 2014, GE is to launch a third-party developer program that will allow partners to integrate Predix platform technologies with their own solutions. Meanwhile, AT&T and GE are working on a secure wireless network for the II. Cisco and GE are to leverage ‘open standards’ to advance M2M analytics and Intel is collaborating on embedded ‘virtualization and cloud-based interfaces’ for Predix. More from GE and on the BP case study.
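
GE has released no Predix API details, so any code is necessarily speculative. Purely to illustrate the kind of predictive analytics a third-party developer might layer on streamed machine data, here is a toy rolling-statistics anomaly detector in Python. It is not Predix code and all names are invented.

    from collections import deque

    def anomaly_flags(stream, window=20, n_sigmas=3.0):
        """Yield readings that stray n_sigmas from a rolling mean: a toy
        stand-in for third-party predictive analytics on machine data."""
        buffer = deque(maxlen=window)
        for x in stream:
            if len(buffer) == window:
                mean = sum(buffer) / window
                std = (sum((v - mean) ** 2 for v in buffer) / window) ** 0.5
                if abs(x - mean) > n_sigmas * std:
                    yield x
            buffer.append(x)

    readings = [100.0 + 0.1 * i for i in range(50)] + [130.0]  # slow drift, then a spike
    print(list(anomaly_flags(readings)))  # flags only the spike, not the drift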

