June 2012


Autonomous agents

Multi-agent systems à la ‘The Sims’ trialed by ExxonMobil in production optimization. Pump, well and facility agents cooperate and conflict to mimic real world scenarios.

Speaking at the SPE/Reed Exhibitions Intelligent Energy event earlier this year, ExxonMobil’s Mike Romer reported on trials of ‘autonomous agents’ (AA) in oil and gas asset management. Autonomous agents, combined into a ‘multi-agent system’ (MAS), are software tools for modeling complex, multi-variable systems where an analytical approach is impossible. A popular example of such a tool is ‘The Sims’ computer game which uses the approach to model a virtual world. Romer cited AA guru Michael Wooldridge who defines an agent as being ‘capable of autonomous action in its environment with delegated goals.’ Wooldridge sees agents as a ‘next generation’ programming paradigm ‘beyond objects.’

An asset such as an electric submersible pump (ESP) can be represented as a pump management agent with certain ‘needs’ such as over-current shut off and other constraints. Agents may have conflicting goals. A well management agent may attempt to speed up a pump while the pump agent is trying to avoid emulsion creation. A surface facility agent may not have the capacity needed for the extra fluids.
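
By way of illustration only, the following Python sketch shows how a handful of agents with delegated goals might negotiate a pump set point, each trimming the well agent’s request to its own constraint. The classes, limits and negotiation rule are invented for this example and do not represent ExxonMobil’s implementation.

```python
# Minimal multi-agent sketch (hypothetical names and limits): pump, well and
# facility agents with delegated goals negotiate a common operating point.
class PumpAgent:
    """Protects the pump: caps speed to avoid over-current and emulsion creation."""
    def __init__(self, max_hz=55.0):
        self.max_hz = max_hz

    def propose(self, requested_hz):
        return min(requested_hz, self.max_hz)


class FacilityAgent:
    """Stays within surface handling capacity (barrels of fluid per day)."""
    def __init__(self, capacity_bfpd=12000.0, bfpd_per_hz=200.0):
        self.capacity_bfpd = capacity_bfpd
        self.bfpd_per_hz = bfpd_per_hz

    def propose(self, requested_hz):
        return min(requested_hz, self.capacity_bfpd / self.bfpd_per_hz)


class WellAgent:
    """Wants more production: asks for the highest speed the others will accept."""
    def __init__(self, target_hz=65.0):
        self.target_hz = target_hz

    def negotiate(self, others):
        hz = self.target_hz
        for agent in others:
            hz = agent.propose(hz)   # each agent trims the request to its own constraint
        return hz


well = WellAgent(target_hz=65.0)
agreed = well.negotiate([PumpAgent(max_hz=55.0), FacilityAgent(capacity_bfpd=10000.0)])
print(f"Agreed pump speed: {agreed:.1f} Hz")   # facility capacity wins: 50.0 Hz
```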

Exxon’s research includes workflow automation using the Prometheus methodology to inform agents with knowledge from subject matter experts with a wide range of skills. The approach involves scenario development, agent role definition and the development of an analytical model and knowledge-base.

The degree to which a workflow is automated can vary. Some processes will be under automated (closed loop) control. Others, particularly where objectives conflict, may involve human intervention—leveraging model-derived ‘low level’ tasks that can be performed unambiguously. The software agents are intrinsically modular and can be combined and re-configured for use in different circumstances.

Romer observed that ‘workflow modeling at the granularity needed for automation is challenging for real-world applications.’ The full paper is available from the SPE. For more on intelligent agents read the introduction by Ira Rudowsky. AA/MAS appears to be gaining traction in academia with a well attended conference in Valencia, Spain this month. Romer also cited early application of the technology for Statoil by researchers at Norway’s NTNU. Commercial software (Jack) is available.


Paradigm sold

Geophysical software house bought by Apax Partners and JMI Equity for $1 billion cash from Fox Paine which paid $100 million for Paradigm in 2002.

A consortium led by London-headquartered Apax Partners and JMI Equity is to acquire Paradigm, the independent upstream software house, from Fox Paine & Co. for approximately $1 billion in an all-cash transaction. Paradigm’s 700 customers use its software to process and interpret seismic and well log data acquired during oil and gas exploration. The company has a strong market presence in complex environments such as the Gulf of Mexico.

Apax partner Jason Wright said, ‘We believe that the intersection between energy and software is an exciting area for investment and is an opportunity we have been monitoring for some time.’ Apax’ previous investments include Sophos and Autonomy. JMI Equity was an investor in Seismic Micro-Technology.

Paradigm was acquired by Fox Paine in 2002 for $100 million. At the time, this represented a 38% premium on the stock price (Oil IT Journal May 2002). In 2007, an attempted initial public offering (Oil IT Journal November 2007) came unstuck when ‘improper payments’ to foreign intermediaries were disclosed. More from Paradigm, Apax and JMI.


On the devilry of commercials and the sanctity of the company presentation

Editor Neil McNaughton returns to an old subject—disclosure in technical presentations. He finds that while commercial presentations from vendors are generally deprecated, many presentations from oil and gas operators are ‘commercials’ too—for the company, its engineers and researchers. The open source community sees things differently and is calling for the publication of both method and code.

Please reflect on the following generalization: there are too many PowerPoint-type presentations at conferences and trade shows that are made by vendors with a purely commercial objective. The implication is that the world would be a better place if there were more presentations by oil and gas operators—they, after all, are the ones that hold the keys to the ‘truth.’

Well that may be so in some circumstances. But what about the following case: an oil company uses Schlumberger’s Eclipse reservoir simulator in a novel way that merits a paper. What is the relative contribution to the work from the operator and from Eclipse’s developers? While that is a reasonable subject for debate, I can see no circumstances where the contribution from Eclipse is zero. And I have a hard time understanding why, even if the contribution from the software is minimal, this entitles the author to regard Eclipse as a ‘commodity’ that is unworthy of a mention. In general, the likelihood is that the contribution from the software’s developers exceeds that of the authors.

Why have things got to this state? On the one hand a desire to avoid commercialism—some conference organizers have fairly arbitrary rules along the lines of ‘no product names.’ But another reason that such rules are respected is, I am afraid, a degree of vanity of the company man whose gut feeling is that he (or she) does the science or the engineering and that the commercial world simply provides ‘commodity’ software. I say this advisedly because, when I was a company man myself, I presented a paper at the SEG that was hugely dependent on the efforts of our seismic acquisition contractor. Even though I did give credit, I was a little surprised when the contractor failed to express great enthusiasm for my rather derivative oeuvre.

In recent years when I attend such talks, I usually try innocently to elicit some information in the Q&A as to what tools were used in the project. This mojo does not seem to be working so good. Recently I have been fobbed off with a response along the lines of, ‘The software we used is not really germane to the debate (read ‘it is commodity’) and we do not want to be seen endorsing one product over another.’ Endorse? Who said anything about endorsement? I would be happy with ‘we used XYZ—but actually it sucks.’

There are many reasons for fessing up as to what tools were used for the job, not least that the choice may be a significant factor in the research. There may be a bug in the software that is revealed at a later date. Its use may be inappropriate for the task in hand. Or it may be the best of breed. Any of which may be good to know if you are thinking about trying to do similar projects yourself.

I stayed on after the Copenhagen EAGE this month (report in next month’s edition) for the workshop on open source software in geosciences. In her keynote, on the central role of geophysics in the reproducible research movement, Columbia University’s Victoria Stodden referred to a ‘credibility crisis’ in computational science. Stodden paid tribute to Jon Claerbout who observed that, ‘An article about computational science is not the scholarship itself. It is just an advert for the scholarship.’

I pricked up my ears when I heard this because it seemed to capture the sentiments I expressed above. By not disclosing what tools were used for a project, a paper is effectively reduced to a commercial. Link that in with the large booths on the exhibition floor that hire ‘talent,’ and you understand how the scientific conference is in danger of being subverted. Not just to advertise vendors’ products but to promote oils’ research and operational ‘excellence.’

So what does the reproducibility movement have to say? On the positive side, Stodden suggests that computation may represent a new ‘third branch’ of the scientific method—alongside deductive reasoning and empirical techniques. On the other hand, the ‘third branch’ tends to produce less than perfect results—what has been called the ‘ubiquity of error’ in software.

A lot of loose thinking goes on in the computational world which tends to produce ‘breezy demos’ (we’ve seen a few of these!) rather than reliable knowledge. Some have even claimed that ‘most published research findings are false.’ Stodden and other open source hotheads, including representatives from BP, ConocoPhillips, Saudi Aramco, BG and more, are pushing for ‘reproducible research’—papers that bundle data, code and text so that they can be properly evaluated and their results used to further the greater scientific good. Stodden herself is promoting the reproducible research standard (RRS) to achieve this.

Of course there are other issues such as protecting intellectual property, copyright and so on. Mostly these are somewhat orthogonal to the publication issue. If a work is patented, then the IP is in the public domain already! In any event, if you want to keep stuff secret, you should not be presenting conference papers.

But this is often what happens as a presentation becomes a PowerPoint version of the dance of the seven veils—teasing its way around the interesting stuff while revealing little flesh. My own default position for analysing these is that what is not disclosed is more likely to fall into the category of ‘ubiquitous error’ than to represent a methodology so clever that ‘if I told you I’d have to shoot you.’

Stodden cited mathematician and aphorist Richard Hamming who in his 1968 Turing Award lecture observed that while Newton said, ‘If I have seen a little farther than others, it is because I have stood on the shoulders of giants,’ Hamming was forced to say, ‘Today we stand on each other’s feet.’

Follow @neilmcn

Semantic Days 2012, Stavanger

Bechtel update on PCA/Fiatech iRing project. TopQuadrant migrates EQHub to AllegroGraph triple store. Fluid Operations on Statoil’s ‘Optique’ big data project. Milan Polytechnic and the Large Knowledge Collider ‘LarKC’ and streaming RDF for complex event processing.

The Semantic Days conference, held in Stavanger, Norway earlier this month continues as a flagship event for the industrial use of the World Wide Web consortium’s ‘semantic’ technology. Semantic technology, notably the resource description framework (RDF), was conceived a decade ago as a way of ‘exposing’ machine readable data in web pages and other resources in an application-neutral way. Subsequently, the technology has failed to live up to its initial promise.

So how are semantics faring in oil and gas—and in Norway in particular? The flagship POSC/Caesar Association’s (PCA) ISO 15926 data standard has embedded semantic technology in part (its ‘canonical’ version is in the legacy Express data modeling language) and has been embraced by the US Fiatech standards body as a vehicle for data exchange around capital projects such as oil and gas offshore production platforms and FPSOs. Bechtel’s Darius Kanga provided a status update on the PCA/Fiatech iRING project—a semantic framework that interoperates with multiple engineering source data formats. iRing has support from Bentley, CH2M Hill, Emerson, Hatch, Worley Parsons, Bechtel and others.

David Price (TopQuadrant) presented recent developments around the EPIM Reporting Hub (ERH), a consolidation of production and drilling data reporting for Norway’s oil and gas industry. ERH has leveraged ‘ontologies’ (lists of accepted terminology) derived from the regulator’s NPD Fact Pages and the PCA upstream ontology. Under the hood these are stored in an RDF triple store, AllegroGraph from Franz, Inc. The database is expected to grow to 300 million triples over the next 4 years.

But neither the data providers nor the recipients are as yet tooled-up for semantic delivery or consumption. Incoming data is in XML from a variety of legacy formats. Delivery is in more XML and PDFs generated from the triple store using the Sparql inferencing notation (Spin). Price sees the NPD Facts as a candidate for ‘linked data,’ an open data sharing initiative that is being promoted by W3C guru Tim Berners-Lee. For the moment, data is delivered over the very ‘closed’ Norwegian SOIL network.
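
For readers unfamiliar with the technology, the following minimal Python sketch shows the triple store and Sparql idea using the open source rdflib package. The namespace and reporting terms are invented for illustration and are not the ERH or NPD ontologies; the hub itself runs on AllegroGraph.

```python
# Minimal RDF/SPARQL sketch with the open source rdflib library. The namespace,
# class and property names are hypothetical, not EPIM Reporting Hub terms.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/reporting#")
g = Graph()
g.bind("ex", EX)

# Assert a few triples: a daily production report for a fictitious well
g.add((EX.report42, RDF.type, EX.DailyProductionReport))
g.add((EX.report42, EX.forWell, EX.well_A7))
g.add((EX.report42, EX.oilVolumeSm3, Literal(1250.5)))

# Query the (tiny) triple store with SPARQL
q = """
PREFIX ex: <http://example.org/reporting#>
SELECT ?well ?vol WHERE {
  ?r a ex:DailyProductionReport ;
     ex:forWell ?well ;
     ex:oilVolumeSm3 ?vol .
}
"""
for well, vol in g.query(q):
    print(well, vol)
```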

Peter Haase’s company, Fluid Operations, has been working with Statoil on scalable end user access to ‘big data’ defined as ‘data sets so large that they are awkward to work with database management tools.’ Production and exploration data falls into this category—with petabytes of relational data described with diverse schema and spread over multiple individual databases. For Statoil’s 900 expert users, this can mean up to four days to develop and run a new data access query—requiring help from IT.

The ‘Optique’ project is investigating the use of ‘linked, open data’ as a solution to such issues. Using the RDF data model and Sparql for querying, data that is scattered across information silos can be linked into a ‘web of data ontologies.’ Optique components include real time stream processing and scalable query with ‘elastic clouds.’

Emanuele Della Valle of the Milan Polytechnic observed that RDF was not really designed for stream processing. Della Valle is involved in the EU-backed Large Knowledge Collider, LarKC, and is advocating a paradigm shift from ‘one time’ semantics to transient data that is to be consumed on the fly and leveraged in ‘stream reasoning.’ Here, data streams are unbounded sequences of time-varying data elements such as weather observations or oil and gas production data. Della Valle is working on ‘RDF Streams’ and SPARQL extensions for querying these. Applications overlap with current stream/complex event processing solutions. A streaming linked data framework is being prototyped. Here sensor networks provide data as ‘pairs’ where each pair is made of an RDF triple and a timestamp—i.e. they are neither ‘pairs’ nor ‘triples!’ More from Streamreasoning.
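
The timestamped-triple idea can be pictured with the short Python sketch below, which windows a stream of (triple, timestamp) pairs and answers a simple query over the last ten minutes. It illustrates the concept only and is not the LarKC or RDF Stream/Sparql extension API.

```python
# Sketch of 'stream reasoning' input: each stream element pairs an RDF-style
# (subject, predicate, object) triple with a timestamp; queries run over a
# sliding time window. Names and window length are illustrative only.
from collections import deque
import time

class TripleStream:
    def __init__(self, window_seconds=600):
        self.window = window_seconds
        self.items = deque()               # each item: ((s, p, o), timestamp)

    def push(self, triple, ts=None):
        ts = time.time() if ts is None else ts
        self.items.append((triple, ts))
        while self.items and ts - self.items[0][1] > self.window:
            self.items.popleft()           # expire elements older than the window

    def current(self, predicate):
        """All in-window triples with the given predicate."""
        return [t for t, _ in self.items if t[1] == predicate]

stream = TripleStream(window_seconds=600)
stream.push(("well:A-7", "hasOilRate", 1250.5), ts=1000.0)
stream.push(("well:A-7", "hasOilRate", 1190.0), ts=1400.0)
rates = [o for (_, _, o) in stream.current("hasOilRate")]
print(sum(rates) / len(rates))             # mean rate over the window: 1220.25
```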

Other contributions of note include Siemens’ Mikhail Roshchin on reasoning with real time data and Kongsberg’s Kaare Finbak (and others) on Statoil’s Integrated environmental monitoring project. These and other presentations are available from the Semantic Days website.


BGS rolls-out geochemical data model

Open Geoscience Knowledge Exchange releases second data model, a borehole geochemistry database developed by the British Geological Survey.

The British Geological Survey has just released its geochemistry data model (GDM), a simplified version of BGS’ in-house geochemistry database. The database provides a central location for millions of borehole records and supports batch load of large volumes of borehole data. Data can be viewed in a geographical information system.

The design of the database helps standardise information from a variety of sources. Controlled vocabularies and logical constraints make the information more re-usable and ‘discoverable.’ Data can be extracted and reformatted for a range of uses and clearly defined tables and columns remove ambiguity from the dataset, providing a ‘single version of the truth.’ Users can download either a logical data model or ready-to-run scripts for Microsoft Access or SQL Server, Oracle or PostgreSQL. The software is free for both commercial and non-commercial use but users are asked to acknowledge BGS’ copyright.
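
The controlled vocabulary pattern is easily illustrated. The Python/SQLite snippet below uses made-up table and column names; the real schema ships as the BGS scripts mentioned above.

```python
# Illustration of the controlled-vocabulary pattern: a dictionary table holds
# the allowed terms and a foreign key stops free-text variants creeping into
# the results table. Table and column names are invented, not the BGS schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("""CREATE TABLE dic_analyte (
                 code TEXT PRIMARY KEY,          -- controlled vocabulary
                 description TEXT NOT NULL)""")
con.execute("""CREATE TABLE geochem_result (
                 borehole_id TEXT NOT NULL,
                 depth_m     REAL NOT NULL CHECK (depth_m >= 0),
                 analyte     TEXT NOT NULL REFERENCES dic_analyte(code),
                 value_ppm   REAL NOT NULL)""")

con.execute("INSERT INTO dic_analyte VALUES ('CU', 'Copper')")
con.execute("INSERT INTO geochem_result VALUES ('BH001', 12.5, 'CU', 45.2)")

try:
    # 'COPPER' is not in the dictionary, so the constraint rejects it
    con.execute("INSERT INTO geochem_result VALUES ('BH001', 13.0, 'COPPER', 44.0)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```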

The geochemistry database is the second download available from the Open-Geoscience Knowledge Exchange project. The first was BGS’ Borehole Data Model released late in 2011. The borehole model provides a representation of boreholes linked to their geological interpretations and associated metadata. The design also makes use of corporate dictionaries or controlled vocabularies such as the BGS Lexicon. The borehole data model also derived from BGS’ in-house database that has been used to manage borehole records for over 15 years.


Logica teams with USC Viterbi on Chevron’s ‘Session’

CiSoft researchers to apply complex event processing to real time production data filtering.

Chevron, Logica and the CiSoft arm of the University of Southern California’s Viterbi School of Engineering are working on a novel approach to the analysis of real time data. The ‘Session’ program includes a new data framework to cleanse incoming data from production sensors in the field, to provide insight into the condition and productivity of reservoirs and to underpin field management. According to CiSoft, production sensor data is not currently ‘filtered’ and can result in ‘misinformed decisions.’ Moreover, today’s information systems process information in ‘chunks,’ meaning that the data is only filtered after its collection, ‘lowering efficiency.’
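
The general idea of filtering readings as they stream in, rather than after collection, can be sketched in a few lines of Python. This is a generic example with invented limits; it is not the Logica middleware or the Session framework itself.

```python
# Filter sensor readings on arrival rather than in post-collection batches:
# drop out-of-range values and transient spikes against a rolling median.
# Limits and the spike rule are hypothetical.
from collections import deque

def stream_filter(samples, low=0.0, high=5000.0, window=5, spike_factor=3.0):
    """Yield only plausible readings as they arrive."""
    recent = deque(maxlen=window)
    for value in samples:
        if not (low <= value <= high):
            continue                           # outside physical range: drop
        if recent:
            median = sorted(recent)[len(recent) // 2]
            if median > 0 and value > spike_factor * median:
                continue                       # transient spike: drop
        recent.append(value)
        yield value                            # clean value flows on immediately

raw = [1010, 1005, 998, 25000, 1002, -4, 1001]   # includes a spike and a bad reading
print(list(stream_filter(raw)))                   # -> [1010, 1005, 998, 1002, 1001]
```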

USC Research Associate, Farnoush Banaei-Kashani is developing the framework around Logica’s complex event processing middleware (Oil IT Journal, May 2012). A training session was held at USC for the CiSoft team last month. More from CiSoft.


Seismic novelties abound at EAGE

ION’s Calypso seabed seismics. Sercel’s new Sentinel. Schlumberger’s IsoMetrix MEMS system.

ION Geophysical just ‘launched’ its Calypso next generation seismic acquisition system which immediately sank. But this was OK because it is a seabed system! Calypso includes multicomponent VectorSeis broadband digital sensors and a 2,000 meter water depth capability. ION reports that the worldwide seabed seismic market has grown over 250% in the last five years, with $300 million in contracts awarded in Q1 2012 alone. More from ION.

CGGVeritas unit Sercel unveiled its Sentinel RD solid streamer. Sercel has sold over 4,000 km of streamers to 60 seismic vessels worldwide since the Sentinel’s introduction in 2005. The new system claims a 15% weight reduction and reduced cable drag. More from CGGVeritas.

Schlumberger’s new IsoMetrix cable uses multicomponent micro-electro-mechanical systems (MEMS) accelerometers to interpolate the pressure wave field between streamers. The technology is claimed to remove spatial aliasing induced by current survey geometries that sample the wave field at a much coarser (50m) interval between streamers than along them (3m).

The Nessie-6 system uses a ‘generalized matching pursuit’ algorithm to reconstitute the wave field from cross line and vertical measurement of the up and down going wave fields. A reconstructed dataset at a regular 6 x 6 meter sample interval is claimed. The huge multicomponent data sets (as much as 70 times more data than a conventional high resolution survey) are delivered on 4 terabyte capacity IBM 3592 cartridges. More from WesternGeco.
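
For the curious, the toy Python sketch below shows the basic matching pursuit idea, greedily fitting dictionary atoms to a residual. It is a one-dimensional illustration of the principle only, not WesternGeco’s ‘generalized matching pursuit’ wavefield reconstruction.

```python
# Toy 1D matching pursuit: repeatedly pick the dictionary atom best correlated
# with the residual, subtract its projection, and continue. Illustrative only.
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """dictionary: (n_atoms, n_samples) array of unit-norm atoms."""
    residual = signal.astype(float)
    coeffs = np.zeros(dictionary.shape[0])
    for _ in range(n_iter):
        corr = dictionary @ residual           # correlation of each atom with residual
        k = int(np.argmax(np.abs(corr)))       # best-matching atom
        coeffs[k] += corr[k]
        residual = residual - corr[k] * dictionary[k]
    return coeffs, residual

n = 256
t = np.arange(n)
atoms = np.array([np.cos(2 * np.pi * f * t / n) for f in range(1, 33)])
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)

signal = 3.0 * atoms[4] + 1.5 * atoms[11]      # sparse combination of two atoms
coeffs, residual = matching_pursuit(signal, atoms, n_iter=5)
print(np.argsort(-np.abs(coeffs))[:2])         # -> atoms 4 and 11 recovered
print(np.linalg.norm(residual))                # -> residual near zero
```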


Fuse rolls-out seismic data ‘appliance’

No wheels on this VMware/Amazon EC2 cloud image-based data management solution.

FUSE IM’s XSeis seismic data management system (Oil ITJ October 2010) saw its commercial launch at the EAGE this month. XSeis is now described as a ‘web-based seismic appliance.’ But this does not mean that it is necessarily a hardware and software bundle (as Fuse and Teradata showed at last year’s ECIM—Oil ITJ September 2011).

Fuse MD David Holmes told Oil IT Journal, ‘For us, an appliance is deliverable as a VMware or Amazon EC2 cloud image that is ready for deployment. The approach contrasts with the deployment of, say, an Oracle-based application which may take days or weeks to customize to its environment.’ XSeis is a component of Fuse’s XStreamline data management platform.

Users can browse data stored in third party seismic data products from a map or tabular interface, select and load to applications. Initial XStreamline data adaptors are available for OpenSpirit, Teradata EDW and Westheimer’s ISDS. Further XDAs for OpenWorks R5000, ArcGIS Server and PPDM are in development. An XSeis for Petrel is available for browsing and loading datasets to Petrel projects. An XSeis plug-in for Landmark’s DecisionSpace Desktop is also under development.

According to Fuse CTO Jamie Cruise, XSeis represents a ‘new class of cloud-based applications.’ The technology delivers the capabilities that petrotechnical desktop users need without the ‘cost, drudgery, and delay of traditional data management.’ More from Fuse.


Nvidia ‘virtualizes’ the GPU

Single PCI Express card provisions 100 ‘BYOD’ users with high-end graphics workstation experience.

Nvidia’s VGX allows IT managers to virtualize the desktop, moving graphics and GPU processing into the data center. Users can access a ‘true cloud PC’ from any device, laptop, tablet or smartphone, regardless of its operating system, and run any PC-based application. VGX’ low latency remote display capability makes it suitable for users of 3D design and simulation tools, ‘previously considered too intensive for a virtualized desktop.’ The solution provides ‘BYOD’ (bring your own device) workers with the same functionality as a traditional desktop. The system is said to reduce IT spend, improve data security and minimize data center complexity.

The first VGX board is equipped with four GPUs and 16 GB of memory, and has a PCI Express interface. GPU virtualization runs under control of the GPU hypervisor. User selectable machines allow enterprises to configure the graphics capabilities delivered to individual users—from a PC to a professional 3D design and engineering desktop with Nvidia Quadro or NVS GPUs. Up to 100 users can be served from a single VGX board. More from Nvidia.


Software, hardware short takes

Ikon, ffA, Paradigm, Hampson-Russell, Geovariances, Petris, Honeywell, ExxonMobil, Azbil, Wellstorm.

Ikon Science’s RokDocQED (V6) adds geomechanical calculators for 3D pressure volumes used in well planning and new rock physics-driven inversion modules. A customizable workflow manager guides geoscience teams and non-expert users. Links to Petrel have been developed in native Ocean code.

ffA’s GeoTeric geological image processing package has been certified to run on Nvidia Maximus-powered workstations, a combination of Quadro and Tesla units.

Paradigm’s Geolog 7 includes ‘Determin,’ a suite of deterministic calculation modules, and ‘Multimin,’ statistics for mineral and fluid characterization from logs and core data. Other enhancements cover electrofacies analysis, vendor-independent processing of borehole image logs and aids to hydraulic fracture placement.

Hampson-Russell’s HRS-9 introduces a streamlined, dashboard front end to all previous functionality, centralizing the well database and project directories. Customizable, preloaded workflows, spanning legacy applications offer a ‘solutions-based approach’ to reservoir characterization. Multi-threaded 64-bit computing speeds batch processing.

Geovariances’ Isatis 2012 adds automatic fitting of complex experimental variograms, advanced alluvial deposits modeling with Flumy and Reprise Software’s RLM license management system.

PetrisWINDS Enterprise now has a Petrel plug-in that allows Petrel users to access digital well log data, stored in their various application data stores, through one unified search without leaving the Petrel environment. Data stores include Techlog, Interactive Petrophysics, OpenWorks, GeoFrame, Geolog and Recall.

Honeywell’s new Experion process knowledge system, Orion, targets cost reduction at refineries, offshore oil and gas platforms and chemical plants by speeding installation time for controls and safety systems. Orion accommodates late design changes through a configuration manager. Virtualization technology reduces costs by simplifying management and reducing the amount of hardware required onsite.

ExxonMobil has released a free app for the iPad displaying ExxonMobil news, blog posts, stock price, videos and publications. An enhanced, interactive version of the Outlook for Energy: A View to 2040 is available.

Azbil has announced GasCVD, a high accuracy, lightweight natural gas calorimeter.

Wellstorm has released its LAS to WITSML converter as open source software under an Apache 2 license.


OSIsoft 2012 User Conference, San Francisco

Industrial Evolution’s proof of concept electric submersible pump optimization. MOL Group leverages PI System in strategic integration and application infrastructure.

Speaking at the 2012 OSIsoft User Conference in San Francisco earlier this year, Tom Hosea outlined Industrial Evolution’s approach to electric submersible pump (ESP) performance analysis and optimization. The proof of concept, performed for a US major, set out to implement PI System at assets in the San Joaquin Valley, California and in West Texas. Project focus was capturing trends and exception reporting for historic and real time ESP operations. A dozen or so real time measurements are captured at five minute intervals and compared with predicted reservoir performance curves. If they depart sufficiently from expected values, an email alert is sent to the operator for remedial action. A PI Asset Framework (AF) database holds inventory details of ESP specifications.
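
The exception logic can be pictured with the Python sketch below, which compares each reading with a predicted performance curve and raises an alert after a run of low values. The curve, tolerance and persistence rule are invented; the actual system implements such rules on the PI System and Asset Framework.

```python
# Generic exception-reporting sketch: compare each five-minute ESP reading with
# a predicted performance curve and flag sustained departures. The curve,
# tolerance and persistence values are hypothetical.
def expected_rate(frequency_hz):
    """Hypothetical predicted performance curve: fluid rate (bfpd) vs. pump frequency."""
    return 180.0 * frequency_hz - 2000.0

def check_esp(readings, tolerance=0.15, persistence=3):
    """readings: list of (frequency_hz, measured_rate_bfpd) tuples. Alert after
    'persistence' consecutive samples more than 'tolerance' below expectation."""
    below = 0
    for freq, rate in readings:
        if rate < (1.0 - tolerance) * expected_rate(freq):
            below += 1
            if below >= persistence:
                return f"ALERT: rate {rate:.0f} bfpd well below expected {expected_rate(freq):.0f} bfpd"
        else:
            below = 0
    return "OK"

samples = [(50, 7100), (50, 6900), (50, 5800), (50, 5700), (50, 5600)]
print(check_esp(samples))   # third consecutive low reading triggers the alert
```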

The system helped identify situations where imminent gas lock was preceded by an increase in motor temperature and a decrease in casing gas flow. Leading indicators like motor temperature can be used to derive business rules that reduce the risk of a shutdown. Excessive gas flow rates are also used to place wells on a ‘watch list.’ The system increased the mean time between ESP gas locks and reduced lost production days by 63%. Engineer productivity also rose significantly.

Tibor Komroczi explained how Hungarian MOL Group is leveraging the PI System as a ‘strategic integration and application infrastructure.’ MOL is a vertically integrated EU major with operations in several countries from upstream, through refining to midstream and gas transmission and storage. MOL’s ‘Refis’ refinery information system leverages the PI System to standardize systems and architectures at all five plants.

Previously, MOL suffered from poor knowledge sharing, paper-based logbooks, poor information handover between shift operators (potentially dangerous) and local key performance indicators and business rules. The current business improvement project seeks to ‘close the gap between process control and business governance’ with better real time information flows, a unified model of refining and minimal information loss.

Alongside PI System, MOL is using Sigmafine, Semafor for KPI management and Opralog’s E-Logbook. Laboratory information management systems have been consolidated to a standard PI Systems interface. PI WebParts and ProcessBook have been used to support knowledge sharing, system checking and training. A ‘Health monitor’ dashboard provides an overview of what is happening in the plant and who is using the system. The front end has been adapted to different user profiles. Opralog’s E-Logbook reporting has replaced a mass of reports and logbooks. Log data from rotating machinery is blended with data from the DCS to monitor running hours and equipment behavior—and to optimize maintenance schedules.

The introduction of a PI WebParts-based portal has made for effective information sharing, while an electronic logbook has improved search and reporting and made shift handover safer. Now all MOL units share KPIs and the same house-keeping rules. Komroczi concluded with a strong endorsement for PI System as enabling a ‘best of breed’ approach to technology integration, as a solution to cyber security and as an enabler of knowledge sharing. More from the OSIsoft UC.


PNEC Data Integration, Houston

Hadoop and ‘big’ oil and gas data. Factory data management for shale gas. Talisman’s data decade. W&T Offshore and 3Gig’s Prospect Director. Qatar Shell and the upstream taxonomy. BP’s new data organization. Schlumberger manages the managers. Shell’s Petrel reference projects. Exxon’s production data framework. Aramco improves BAD! Petris, Neuralog and Petrosys strut data stuff.

Some 500 delegates from 30 countries attended Phil Crouse’s PNEC Data Integration conference in Houston last month. What’s new this year in data management? There were quite a few companies climbing aboard the Hadoop bandwagon—but we are holding off on our Hadoop reporting until we can see beyond the buzzwords. The shale gas bonanza has caused a rapid evolution in data management to adapt to the new ‘factory drilling’ paradigm. Data management is itself maturing. The evaluation and execution of complex projects is now mainstream.

Talisman’s Lonnie Chin provided the keynote with a look back over ten years of information management. The data management landscape of a decade ago covered much of what is of concern today—master data management, centralized stewardship, federated data integration and spatial. Since then, data intensity has risen with novel data types and more sophisticated requirements—including shale gas, crucial to Talisman’s international development. This, mobile devices and performant users are calling for better information quality. Or as Chin puts it, ‘Stop giving smart users dumb apps!’ On the technology front, Chin highlighted ESRI’s move to Microsoft Silverlight, ETL technology for tying disparate geochemical data sources together and Spotfire and INT for data analysis and visualization. Software usage is evaluated with CrazyEgg’s ‘heat mapping’ technology. For Chin, today is an ‘exciting and evolutionary time to be in E&P data management!’

Brian Richardson presented (on behalf of Gerald Edwards) Qatar Petroleum’s data management effort on the North Field—the largest non-associated gas field in the world. North Field has several major joint venture stakeholders each with their own databases and special requirements. A multidisciplinary E&P database (MDDB) has been developed to allow individual JVs to use any application or database. QP itself focuses on operational and archive data for project oversight. Partners Qatargas and RasGas need more detail and are ‘mining the MDDB constantly.’ Sharing data means more scrutiny and a need for diligence. The North Field is a $40 billion capital project and the funds are there to do the MDDB right. Alongside G&G data the project includes well test, laboratory information systems and more. Data entry includes electronic sign-off.

Carol Hartupee presented W&T Offshore’s prospect information management system developed with help from co-presenter Kandy Lukats’ company, 3GiG. Small companies like W&T need prospects to survive, but best practices for prospect management can be hard to identify and capture. Enter ‘software-led design,’ a new way to build and customize information management systems. W&T has leveraged 3GIG’s Prospect Director to replace key decision-support information locked away in spreadsheets. A ‘parking lot’ concept evaluates why a prospect did not work and determines its fate. This could be ‘straight to the graveyard’ or into storage for ‘resuscitation’ if economics change and trigger renewed interest. Lukats says it is important to keep things simple and build an 80% solution that is usable by all—especially the CEO.

Andrew Lennon presented Qatar Shell’s use of Flare Consultants’ upstream taxonomy to improve subsurface and wells document and records management. In an internal study, Qatar Shell’s document management was found to suffer from uncertainty as to which versions were final and where they were located. QS appointed dedicated technical data and document managers and brought in Houston-based Certis to do a top down analysis and sell the project. Lennon observed that much information management is simple, it just needs to be done well, as in a hard copy library. Document management processes have been designed to be technology independent. A good thing, as the current LiveLink repository is being replaced by SharePoint. Folder structures have been simplified and made more consistent. Publishing means preparing documents for ‘consumption’ by thousands of potential users. Titles, authors and other tags are added and the granularity of publishing established as file, folder or a pointer to a hard copy location. The whole system runs under control of a ‘semantic map,’ an ontology and knowledge representation derived from Flare’s Catalog. This works on input terminology with automatic classification and for search. For instance, a search for ‘Eocene foraminifera’ will find ‘priabonian.’ ‘Sand prediction’ is recognized as a production technique and ‘failure’ as a problem. Certis is involved with publication of QS’ legacy data. Version control was rarely used before; it is now, along with e-signatures to speed up the process. As one reservoir engineer remarked, ‘We are finding stuff we never knew we had.’ Tag clouds and tree maps have proved useful—for instance to spot missing documents in a colored QC matrix.
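
The semantic map idea, where a search for a broad term also finds documents tagged with narrower terms, can be illustrated with a few lines of Python. The taxonomy fragment and document tags below are made up and are not Flare’s Catalog.

```python
# Toy query expansion over a small taxonomy: a search for 'Eocene' also finds
# documents tagged with the narrower term 'priabonian'. Hierarchy is invented.
NARROWER = {
    "eocene": ["ypresian", "lutetian", "bartonian", "priabonian"],
    "production problem": ["sand failure", "scale", "emulsion"],
}

def expand(term, taxonomy):
    """Return the term plus all narrower terms, recursively."""
    terms = {term}
    for child in taxonomy.get(term, []):
        terms |= expand(child, taxonomy)
    return terms

def search(query, documents, taxonomy):
    wanted = expand(query.lower(), taxonomy)
    return [doc for doc, tags in documents.items() if wanted & set(tags)]

docs = {
    "well_report_12.pdf": ["priabonian", "foraminifera"],
    "frac_design_3.docx": ["proppant", "sand failure"],
}
print(search("Eocene", docs, NARROWER))              # -> ['well_report_12.pdf']
print(search("production problem", docs, NARROWER))  # -> ['frac_design_3.docx']
```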

Jean Trosclair revealed how Shell manages Petrel reference projects. The popular Petrel has had reference projects since 2007—Shell was an early adopter and has been pushing for unit of measure and cartographic awareness. As new features are added, awkwardness and complexity also rise. Cartography in Petrel still requires oversight. Interoperability with OpenWorks requires OpenSpirit links. On the other hand, the reference project provides excellent audit features—‘CSI Petrel’ is great for data forensics. A reference project has clean and final data only and one owner. Assets such as complex fields, salt bodies and the like are especially amenable to the reference project approach. Upstream of the reference project, data is reviewed intensively to establish best well paths and curves are checked for ‘final’ status, loaded to Recall and populated ‘safely’ with OpenSpirit. Seismic attributes and perforation data likewise undergoes a thorough review prior to load. Drilling proved to be ‘an unexpected source of accurate gyro surveys,’ as the pre-sidetrack survey is likely to be much more accurate than the initial survey. Trosclair related an interesting Gulf of Mexico geodetic anecdote. Prior to 2000, many wells were drilled using the ‘Molodensky transform’ which gave bad coordinates. Shell resurveyed its platforms and changed data in its internal databases. The situation regarding vendor data was ‘a huge mess!’ The Macondo incident afforded a 14 month breather for Shell to fix the problem. Issues remain in reconciling engineering and geoscience databases. ‘Consistent, enforced well header data is critical. Verified auto synch is the ultimate goal.’

Nadia Ansari (ExxonMobil) presented the results of her University of Houston thesis on a ‘conceptual data framework for production surveillance.’ Forecasting needs data, but work practices and a myriad of tools deny a simple approach. Systems provide overlapping and mutually exclusive functionality. Most focus on presenting data to engineers for decision support and are ‘high maintenance’ solutions. Ansari’s data integration framework includes a ‘forecast’ metadata type and a single shared repository. The system will be in production this summer. The project is looking to confirm that the act of forecasting can be codified and to develop ‘organizational data mining’ and rollup to the corporate level. ‘Reservoir engineers spend too much time doing data management in today’s high maintenance environment—we need to speed up the process.’

ExxonMobil’s John Ossege and Scott Robinson have proposed a methodology for quantifying the cost of poor quality data and hence to evaluate the economics of data cleanup. Data cleanup frequently unearths lost proprietary data. Enter the data metric ‘X,’ the amount of retrieved lost data whose value equates to the cost of the cleanup. X is calculated for various projects and projects with low X are prioritized. The method has proved successful in identifying projects with large amounts of boxed paper records where frequently, data is lost ‘in situ.’
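
The paper’s exact procedure is not reproduced here, but one reading of the metric, sketched below in Python with invented figures, ranks cleanup candidates by the number of recovered items needed to break even.

```python
# Hedged reading of the 'X' metric: the volume of lost-but-recovered data whose
# replacement value would just pay for the cleanup. All figures are invented,
# purely to show the ranking logic; see the SPE paper for the real procedure.
projects = {
    # name: (cleanup_cost_usd, replacement_value_usd_per_recovered_item)
    "boxed paper logs, Field A": (120_000, 4_000),
    "legacy tapes, Field B":     (250_000, 1_000),
}
for name, (cost, value_per_item) in projects.items():
    x = cost / value_per_item          # items to be recovered to break even
    print(f"{name}: X = {x:.0f} items")
# Lower X means fewer recoveries justify the work, so Field A (X = 30)
# would be prioritized over Field B (X = 250).
```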

Rusty Foreman and Maja Stone presented BP’s new data management strategy ‘for the long haul.’ A 2009 study made BP decide that it had to ‘do something about data management.’ Too many good projects wilted and died from unsustainable data issues. The business case at its most simple is that BP spends around $1 billion per year buying and acquiring data. If data was a physical asset, nobody would think twice about spending the money. But data does not ‘corrode’ visibly. Earlier work by Gavid Goodland had established a data management operational model for upstream technical data. This is now being extended with governance, professional discipline and performance management. Change management specialist Stone outlined BP’s three year execution plan. BP’s renewed interest in data management has sparked off a major recruiting drive, a newly defined career path and training program. BP has also upped its PNEC attendance from ‘around one’ a couple of years back to the current 33!

Day two started out with Jim Pritchett’s (Petris) view from the vendor trenches. Data management has come a long way since PNEC began. It has ‘taken longer than we thought,’ but now ‘we have commercial products and interest in data management is at a peak.’ Integration and workflow management reduce errors and inefficiencies and are real money makers. But even today, projects are challenged by the lack of standard naming conventions. Against this is the need for speed, especially in shale plays where data quality issues and constant infrastructure changes abound. But the reality is that most failures are due to lack of budget support for the true cost of a project. Complex operations, the changing roles of stakeholders and systems, and the use of untested APIs lead to a situation where the data manager is both a ‘negotiator and technical policeman.’ In the face of which, management may tire and pull the plug.

John Berkin (Schlumberger) observed that ‘data managers don’t leave because of the pay, but because their job does not evolve and they have no prospects.’ Schlumberger now has a career development path for its data managers. The ‘business impact through local development and integrated training’ (Build-IT) program includes course work, on the job training and self study. The program gives IT folk a basic introduction to geosciences even though ‘explaining a deviation survey to an IT guy’ is a tough call. Schlumberger’s ‘Eureka’ technical careers program and the competency management initiative do the rest. Nirvana is the ‘by invitation only’ status of a Schlumberger Fellow.

Rodney Brown presented ExxonMobil’s open source library of Microsoft .NET components for handling Energistics’ WitsML and ProdML data standards. These are available under an Apache 2 license on Sourceforge.

Fast and furious is not only the preserve of shale gas players. Shell/Exxon joint venture Aera Energy drills and logs upwards of 50 wells per month. Robert Fairman advocates a simple solution to a complex situation, ‘get it right first time!’ Aera thinks of the field as a shop floor/factory. The POSC/Epicentre data model is still in use, now a data warehouse with 20 billion facts! Around 2 km of log curves and 1.3 million facts are added per month. Aera has evolved an elaborate data quality process with definitions of ‘quality’ and assigned roles. In fact data quality and data management are used interchangeably. In the Q&A, Fairman described an Aera data manager as ‘part petrophysicist, part geologist, part driller and part IT.’ The discipline itself is in its own ‘center for process excellence,’ outside of both IT and the business.

Muhammad Readean presented Saudi Aramco’s data quality improvement effort including the use of Jaro-Winkler distance estimates of metadata string similarity, data mining to predict missing values and rule-based classification of curve mnemonics.
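
For those unfamiliar with the measure, the Python sketch below implements Jaro-Winkler similarity from scratch and applies it to a pair of curve mnemonics. It is a textbook illustration, not Aramco’s implementation; in practice an off-the-shelf library would normally be used.

```python
# Jaro-Winkler string similarity, as cited above, applied to curve mnemonics.
def jaro(s1, s2):
    if s1 == s2:
        return 1.0
    l1, l2 = len(s1), len(s2)
    if l1 == 0 or l2 == 0:
        return 0.0
    window = max(l1, l2) // 2 - 1
    m1 = [False] * l1
    m2 = [False] * l2
    matches = 0
    for i, c in enumerate(s1):                        # find matching characters
        for j in range(max(0, i - window), min(l2, i + window + 1)):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    t, k = 0, 0                                       # count transpositions
    for i in range(l1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / l1 + matches / l2 + (matches - t) / matches) / 3.0

def jaro_winkler(s1, s2, p=0.1):
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):                  # common prefix, up to 4 chars
        if a != b:
            break
        prefix += 1
    return j + prefix * p * (1.0 - j)

print(round(jaro_winkler("GR_EDTC", "GR_EDTC_RAW"), 3))   # -> 0.927, likely duplicates
print(round(jaro_winkler("GR", "NPHI"), 3))               # -> 0.0, nothing in common
```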

Volker Hirsinger (Petrosys) outlined new ‘FracDB’ hydraulic fracturing data workflows developed for a major operator. Fracking involves a vast range of techniques and details to capture and there are ‘a lot of eyes on the fracking process.’ It is better to stay ahead of pressure from the regulator and optimize capture today.

Tarun Chandrasekhar presented Neuralog’s work for Continental Resources on data management for shale gas exploration. NeuraDB has proved ‘robust and easy to use.’ The project also saw the port from Oracle to SQL Server. PPDM’s versioning capability is used to cherry pick data from different sources according to Continental’s business rules. TechHit’s ‘EZDetach’ application is used to extract emailed partner data into staging folders by region and well name before loading to NeuraDB. The process was designed to be ‘non disruptive’ to existing workflows. LogiXML was used to develop custom web forms.

Jawad Al-Khalaf outlined Saudi Aramco’s ‘BAD’ approach of Business process, Applications and Data to mitigate data growth. Aramco is adding 12 petabytes of seismic data this year, and disk space is very expensive when the whole workflow is considered. Storage virtualization is being used, along with de-duplication and data compression. Making users pay for what they store proved effective—on one file system, 46 terabytes of disk was unused for over a year. Cleanup saved 54% and simple Linux compression another 14% for a total saving of 59%.

Finally a quote from a happy PNEC 2012 attendee, ‘Best conference yet—they keep getting better. Really well run. Relevant, useful topics.’ Scott Robinson, ExxonMobil. We couldn’t agree more! More from PNEC.


Folks, facts, orgs ...

Energistics, GITA, Fugro, Baker Hughes, Jee, Cal Dive, CGGVeritas, Spectraseis, SIGMA3, Emerson, EOC, ffA, Exprodat, SeisWare, Fiatech, IPL, Institute for Supply Management, Lazard, Panopticon, Knight Oil Tools, Oceaneering, Transzap, Oxford Princeton, Prometheus Energy Group.

Ben Williams, VP & CIO of Devon Energy is a new member of the Energistics board.

Following ‘declining conference attendance and membership’ the geospatial IT association, GITA, is transitioning to a volunteer organization.

H.L.J. Noy has been appointed to Fugro’s supervisory board.

Noam Lotan has joined the board of Acorn Energy unit US Seismic Systems.

Trey Clark is VP investor relations at Baker Hughes. Lynn Elsenhans has been appointed to the board.

Alan Jordan has joined Jee as a Principal Cables Engineer.

John Reed has joined the Cal Dive board. He was formerly with Global Industries.

CGGVeritas has appointed Olivier Gouirand as VP Finance. He was previously a French government advisor.

Spectraseis has appointed Dan Prairie senior sales engineer for seismic monitoring.

SIGMA³ has recruited Sean Spicer as chief software architect, Peter O’Conor as director, business development and Francois Lafferriere as director business development Latin America.

Mark Bulanda has been named executive VP industrial automation at Emerson.

EOC has appointed Yeo Keng Nien as company secretary.

Romario Lima is to head-up ffA’s new office in Rio de Janeiro, Brazil.

Exprodat is to offer Petroleum ArcGIS training in Houston with local partner Resolution Energy Services.

Database specialist Dick Miller has joined the SeisWare development team.

Peter Blake (Hatch Chile) has been elected chairman of Fiatech for a two-year term.

IPL has appointed Chris Bradley to its board as chief development officer.

Tom Derry is to head up the Institute for Supply Management.

David Cecil has joined Lazard as MD and head of North American E&P. He was formerly with Scotia Waterous.

Markus Roithmeier has joined Panopticon Software as Senior VP for the EMEA Region. He hails from QlikTech.

Chris Rosson has been promoted to global business manager of Knight Oil Tools, and Greg Zaunbrecher to assistant global business manager.

Mark E. Peterson has joined Oceaneering as Vice President, Corporate Development. He was formerly with McDermott International.

Oildex parent Transzap has appointed Richard Slack as president and CEO.

The Oxford Princeton Program is offering an online course on the fundamentals of and trends in hydrocarbon exploration and production.

Randall Hull has joined Prometheus Energy Group as Director of Business Development, based in Houston, Texas. He was previously President of Petrochem Outsourcing.


Done deals

Fugro, Aker Solutions, Axon Energy Products, Cameron, Hexagon, IFS, Proserv Group, Steel Excel, Technip, TGS Nopec Geophysical, Trimble.

Fugro is undertaking a review of options for its marine streamer seismic data acquisition business and associated activities.

Aker Solutions is to acquire Dubai-based NPS Energy, (part of National Petroleum Services) for $350 million and will assume approx. $110 million in debt.

Axon Energy Products is to acquire Doyle’s Valves through its subsidiary Axon Pressure Products.

Cameron has closed on its purchase of the drilling equipment business of TTS Energy division from TTS Group in an all cash transaction.

Visualization specialist Hexagon has acquired all the shares of Norwegian My Virtual Reality Software.

IFS is acquiring mobility and service management applications provider Metrix in an all cash deal.

Aberdeen-based Proserv Group has acquired the subsea controls business of Weatherford International. Proserv has backing from Intervale Capital. The acquisition includes 300 personnel and control systems operations in the UK, Norway, North America, the Middle East and the Far East.

Steel Excel has completed its acquisition of BNS Holding’s subsidiary Sun Well Service, a provider of well services in the Williston Basin in North Dakota and Montana. The paper and cash transaction is valued at approximately $86 million.

Technip is to acquire Stone & Webster process technologies, and the associated oil and gas engineering capabilities, from The Shaw Group for a cash consideration of approximately €225 million. Barclays is acting as financial advisor to Technip and Davis Polk & Wardwell is acting as legal advisor.

TGS-Nopec Geophysical Company has announced the acquisition of Calgary-based Arcis Seismic Solutions for a total of $51 million, reflecting an enterprise value of US $72 million, based on net debt at the date of acquisition.

Trimble has acquired GEOTrac Systems of Calgary, a provider of wireless fleet management and worker safety solutions. Financial terms were not disclosed.


Best practices for process control networks

Industrial Defender white paper offers seven steps to minimize cyber risk.

Industrial Defender has published a 12 page white paper titled, ‘Report from the field: seven best practices for automation system cyber security and compliance.’ Advanced persistent threats from industrial espionage and viruses like Stuxnet and Duqu are on the rise. At the same time there is a push for more open systems à la smart grid and for interconnection of business and control systems. The relationship between industrial operations and corporate IT is complex and responsibilities may not align with day-to-day activities.

Automation professionals’ responsibilities have extended to security and compliance and this has led to overlapping responsibilities and constrained resources. The recommendations cover security and compliance staffing, secure perimeter firewall and router configuration, proper software patch monitoring and updating, proper separation of corporate and plant networks and good password management. Third-party software with weak default configurations is to be avoided or mitigated. Good documentation of ports and services used is necessary to minimize penetration opportunities.


Don’t touch that dial!

ExperTune’s George Buckbee describes four situations where control loop tuning is not advised.

Despite increasing computer control, not all loop tuning actually optimizes. A new note from ExperTune’s George Buckbee describes four types of PID control loop that tuning cannot improve; in fact, it may make matters worse. The first case is when the loop is operating at a limit, if, say, a valve is fully open or closed. Here tuning is a waste of time. It is preferable to find out why the loop is limited and to get the controller back into a normal range.

Failing instrumentation likewise gives rise to situations where control will fail. Another ‘do not tune’ scenario occurs when a loop is already tuned. Different criteria can define ‘tuned’ and operators may need to study some process fundamentals to answer the question ‘what does good tuning really mean?’

Another common problem is trying to tune a loop when the root cause of the problem is somewhere else. ‘Process plants are complex places, full of dynamic interactions that spread through the plant. You need to find the root cause of process upsets and eliminate them at the source.’ Buckbee’s company has software tools to help in such cases. Cross-correlation of sensor data with historical data using PlantTriage’s process interaction map can pinpoint the root cause of a problem.
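
The cross-correlation idea can be sketched in a few lines of Python, estimating the lag at which one loop’s signal best explains another’s and hence which upset is likely upstream. This is a generic illustration only, not the PlantTriage interaction map.

```python
# Estimate the lag at which signal y is most correlated with signal x.
# A positive lag means x leads y, i.e. x is the more likely root cause.
import numpy as np

def best_lag(x, y, max_lag=50):
    lags = range(-max_lag, max_lag + 1)
    corrs = []
    for lag in lags:
        if lag >= 0:
            c = np.corrcoef(x[:len(x) - lag], y[lag:])[0, 1]
        else:
            c = np.corrcoef(x[-lag:], y[:lag])[0, 1]
        corrs.append(c)
    k = int(np.argmax(np.abs(corrs)))
    return list(lags)[k], corrs[k]

rng = np.random.default_rng(0)
upstream = rng.normal(size=500)                       # synthetic disturbance at the source loop
downstream = np.roll(upstream, 7) + 0.2 * rng.normal(size=500)  # same upset, seven samples later
lag, corr = best_lag(upstream, downstream)
print(lag, round(corr, 2))                            # -> lag of 7, correlation close to 1
```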


McElroy giant gets i-Field makeover at age 86

Workflow optimization by the i-Field team identifies 1500 bopd of incremental production.

Speaking at the SPE/Reed Exhibitions Intelligent Energy event earlier this year, Bill Taylor described Chevron’s hugely successful integrated production systems optimization program (IPSO) at its West Texas McElroy Field, discovered in 1926. Chevron currently drills some 50 wells per year and undertakes hundreds of workovers involving constant pattern realignment and moving huge quantities of fluid around.

Chevron’s i-Field specialists took a look at the project and determined that the automation systems and data were all in place, all that was needed was a workflow analysis—notably to identify areas where automation was feasible and to flag wells which required attention.

A collaboration center was installed and tools developed as follows. An injection management exception tool (IMET) shows above or below target injection. A well event surveillance tool (WEST) does exception reporting and provides daily alerts for producers and injectors. PEST—the ‘pattern event surveillance tool’ checks performance against established logic rules and displays alerts on ESRI ArcGIS via the PetroWeb gateway and Harvard Graphics.

The system helped identify a promising shallow zone that was drilled up and produced an incremental 1500 bopd. In the Q&A, Taylor revealed that the economics of such extra production meant that there was never any question about funding the IPSO/i-Field projects.


AspenTech wins process patent tussle. M3 to appeal judgment.

M3 Technology ordered to cease marketing software components.

The US Court for the Southern District of Texas this month issued a ‘final judgment and permanent injunction’ in favor of AspenTech in its trade secret misappropriation and copyright infringement action against M3 Technology. The injunction prohibits M3 Technology from using, marketing or offering for sale or license certain of its ‘SIMTO’ Scheduling and M-Blend tools for asset planning, process scheduling and optimization. M3 Technology was ordered to pay AspenTech $11 million in damages. The following day, M3 filed an appeal. Brian Wunder, lead trial counsel for M3 Technology said, ‘The judgment is based on a flawed and contradictory jury verdict that is at odds with both the facts of the case and the relevant law. M3 Technology will take all steps necessary in order to overturn the unjust jury verdict and judgment.’ More from AspenTech and M3 Technology.


Sales, contracts, partnerships and deployments

Aker Solutions, TGS, CGGVeritas, Saudi Aramco, eCorp, Advanced Drilling, Emerson, IFS, Setal, Inova, Tesla, Marathon Technologies, Invensys, Lundin, OptaSense, Shell, Cairn, Paradigm, Venari, Divestco, Petrosys, Kelman, ConocoPhillips, Pemex, Recon, Telogis, FleetCor, Yokogawa.

Aker Solutions has signed a six-year agreement with TGS for the supply of its hiQbe regional velocity cubes for the UK Central North Sea. Aker also announced an award from Statoil for early phase studies of recent Norwegian continental shelf discoveries.

CGGVeritas has signed a memorandum of understanding with Saudi Aramco for collaborative research and development of geophysical acquisition, processing and interpretation technologies.

eCORP and Advanced Drilling Solutions are to develop a slim hole evaluation drilling unit comprising a coring rig and a mobile geological laboratory.

BP has awarded Emerson a $23 million contract to supply control and safety systems for two new offshore oil platforms in the North Sea.

Rio-based SOG Óleo e Gás (Setal) is to expand its use of IFS Applications, implementing the projects and subcontracts modules.

Inova Geophysical has sold 30,000 channels of its cable-less ‘Hawk’ autonomous nodal system to Tesla Exploration.

Kepware has selected Allied Solutions as a distribution partner for Southeast Asia.

Intermap and Blue Marble Geographics are to integrate the NextMap dataset with Blue Marble’s Global Mapper software.

Marathon Technologies and Invensys are to develop virtualized fault tolerance and disaster recovery solutions for SCADA applications. Archestra and InTouch will be ported to Marathon’s everRun MX software-based fault tolerant solution for symmetric multiprocessing and multi-core servers.

Emerson has won a multi-million dollar contract to deliver its Roxar unit’s subsea instrumentation to Lundin Petroleum’s Brynhild field in the North Sea.

QinetiQ unit OptaSense has signed a framework agreement with Shell for use of its distributed acoustic sensing system, initially for hydraulic fracture monitoring.

Cairn Energy has chosen the Paradigm interpretation toolkit as its next generation 2D and 3D seismic interpretation solution. Venari Resources has also selected Paradigm as its preferred provider of E&P software. Divestco is to use Paradigm’s solutions as its primary integration platform.

Petrosys has released a link from its eponymous software to Kelman’s DMASS seismic data store. The link was a joint development between Petrosys and ConocoPhillips, leveraging the PPDM data model.

Pemex has awarded Rohde & Schwarz a contract for a communications solution for six off-shore drilling rigs. The solution includes the IP-based VCS-4G voice communications system and VoIP-capable 4200 series radios.

Recon Technology is to provide Emerson PCS and SIS systems to China National Petroleum Corp.’s South Yolotan Gas Field Project in Turkmenistan. Contract value exceeds $3 million.

Telogis and FleetCor Technologies have announced a partnership for the integration of telematics and fuel card reporting to help reduce unauthorized fuel card use.

Yokogawa is to supply control systems for the Ichthys LNG project in Australia including Centum production control, safety instrumentation and emergency shutdown equipment. The Exaquantum plant information management system, PRM integrated device management package and OmegaLand operator training system are also in the deal.


Standards stuff

Energistics, NDR11, Fiatech, Open Geospatial Consortium, OSGeo, PIDX, PPDM, W3C, USGIN.

Energistics is planning the national data repositories event, NDR 11, to take place in Kuala Lumpur in October. The agenda will cover recruitment, benchmarking, reporting standards, tendering, promotional activities, data quality and more.

The Fiatech iRING project is to publish an eNewsletter, ‘iRINGToday’ and is seeking ‘thought leading content’ to share with ISO 15926 practitioners.

The Open Geospatial Consortium is seeking comments on an OGC candidate standard, the network common data form (NetCDF). NetCDF was developed at the Unidata program center of the University Corporation for Atmospheric Research (UCAR).

OSGeo has announced the release of GeoMoose 2.6, an open source web application framework for displaying geographic data.

PIDX International and Energistics have signed a memorandum of understanding that will allow them to collaborate on industry events and to assist each other with joint standards adoption.

The PPDM 3.9 change management process is underway including an organic geochemistry analysis subject area. Members are invited to request modifications to the data model and vote on such.

The RDF web applications working group of the W3C has published several RDF related recommendations along with the RDFa 1.1 Primer as a working note. The documents outline the vision for RDFa in a variety of XML and HTML-based Web markup languages.

The USGIN metadata wizard is a format agnostic metadata creation web-tool that exports a generic metadata model into various standards such as FGDC XML and ISO 19139 XML files. The tool aims to ‘make metadata creation as simple as possible while still supporting several metadata standards.’


E-biz roundup

Trading and payment software updates from Updata, KSS, AJB, Allegro and CME Group.

Dynegy is to use Updata Professional, a suite of advanced analytics at its trading desks. UP provides technical analysis, strategy back testing and links to external and internal data sources.

MOL is implementing KSS Fuels pricing solution across its retail network of 300 sites in Hungary. PriceNet connects store personnel with the head office. PriceNet Mobile allows field personnel to view competitor prices in real time and approve changes. KPIs are displayed via Google Maps, helping assess local markets and the impact of changes in traffic flow.

AJB Software Design’s integrated payment system FIPay (now at 100 North American sites) enables EMV transactions on existing hardware.

Allegro Logistics automates routing of bulk and liquid commodities across multiple carrier modes including truck, rail, pipeline, barge and ship.

CME Group has launched CME Direct for online trading of multiple benchmark fuels on listed and OTC Markets. CME claims daily trading volumes of over 1.9 million contracts.


Digital seals and signatures

Fiatech publishes white paper on digital signing of engineering documents.

Fiatech has just published a 43 page white paper titled ‘A practical strategy for digital signatures and seals in electronic AEC* processes.’ Digital signature technology is a key element of business process automation and promises to solve many of the problems associated with current ‘wet-signing.’

The cost of current ‘wet-signing’ is put at around $80 per document and a project may contain thousands of documents. Psychological barriers to the adoption of digital processes are evaluated and the merits of going digital duly vaunted. The bulk of the white paper is a high-level ‘how to’ guide to implementing digital signatures, with discussions of PKI/RSA techniques and their approval by standards bodies. Help is offered navigating the standards smorgasbord of the Digital Signature Standard (DSS), the Secure Hash Standard (SHS, NIST FIPS PUB 180-3), the identity certificate standard X.509 v3 and more. Hashing and signing provide evidence of any tampering with a signed document. A discussion of APIs and commercial signing software is included.
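To make the mechanics concrete, here is a minimal signing and verification sketch using SHA-256 (one of the FIPS PUB 180-3 hash algorithms) and RSA via the Python ‘cryptography’ package, a library choice of ours rather than one recommended in the white paper. In production the keys would carry X.509 v3 certificates issued through a PKI.

# Minimal digital-signing sketch using the Python 'cryptography' package
# (our library choice, not one recommended in the Fiatech paper).
# SHA-256 is one of the FIPS PUB 180-3 Secure Hash Standard algorithms.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

document = b"Engineering drawing rev. C - approved"

# In practice the key pair lives in a PKI with an X.509 v3 certificate;
# here we simply generate one for illustration.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sign: hash the document and sign the digest with the private key.
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verify: any alteration of the document raises InvalidSignature,
# the evidence of tampering that a wet signature cannot give.
public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")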

Fiatech’s digital signature work dovetails with AutoCodes for checking of building information models (BIM). AutoCodes and BIM appear to exist in a parallel world to oil and gas industry construction standardization efforts centered on ISO 15926.

* Architecture engineering construction.


CSI Houston—Ernst & Young opens up forensic lab

Fraud investigation and dispute unit to target energy industry financial shenanigans.

Ernst & Young LLP has strengthened its fraud investigation and dispute services (FIDS) practice in Texas and the southwest with several key personnel changes. E&Y’s FIDS leader Brian Loughman observed, ‘We’ve seen a surge in the need for forensic accounting investigations in the energy industry and we are committed to helping our clients address fraud prevention and financial integrity issues.’

The practice targets, inter alia, Ponzi schemes, earnings ‘management’ and manipulation of financial results, infringements of the Foreign Corrupt Practices Act, asset misappropriation and other types of fraudulent activity. Techniques include data analysis and information gathering, corporate compliance and electronic discovery. E&Y’s fraud detection techniques are claimed to improve the effectiveness of corporate compliance programs and to assist clients in the prevention and detection of fraud and other financial malfeasance.

Ryan Pratt heads up the Houston FIDS practice, where Scott Witte has been promoted to director. Dallas-based staffers include Daniel Torpey and Jeff Ferguson, who hails from FTI Consulting. Some 80 forensic professionals now serve in the Southwest/energy industry unit.


Enough engineering to be dangerous (and useful)!

Workflow transformation looks beneath tip of data iceberg. AI identifies workover candidates.

Speaking at the SPE/Reed Exhibitions Intelligent Energy event earlier this year, Steve Cassidy described workflow transformation at Chevron’s San Joaquin Valley operations, involving a 100-year-old field with 17,000 wells. Steam injection involves a huge daily workload as most wells see some form of intervention at least once per year.

The complexity and scope of operations led to system unreliability and Chevron is looking to apply more automation and computer-based advisories. Developing these requires new skill sets, such as folks with an IT background combined with ‘enough reservoir engineering to be dangerous (and valuable)!’ Artificial intelligence is used to perform predictive analytics, looking beneath the ‘tip of the iceberg’ of the vast amount of data currently acquired, much of which is wasted.

All Chevron’s forecasts are currently data-driven (sans simulation). These leverage large numbers of low cost sensors to instrument a few wells in an intensive way. Data-driven techniques such as genetic algorithms for cyclic steam injection and neural nets to select workover candidates are being trialed. The problem is that ‘very few people understand this stuff.’ One answer is the collaborative environment, not necessarily with everyone in the same room, maybe just ‘well-connected individuals sharing the same data.’
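Chevron’s models and data are not public, so the following toy genetic algorithm merely illustrates the kind of parameter search involved, with an invented objective function standing in for a real cyclic-steam performance model.

# Toy genetic algorithm illustrating data-driven tuning of two cyclic-steam
# parameters (soak time, injected steam volume). The fitness function is a
# made-up stand-in for a real performance model - purely illustrative.
import random

def fitness(soak_days, steam_bbl):
    # Hypothetical response surface with a peak near 12 days and 2,500 bbl.
    return -((soak_days - 12.0) ** 2) - ((steam_bbl - 2500.0) / 100.0) ** 2

def random_individual():
    return (random.uniform(1, 30), random.uniform(500, 5000))

def crossover(a, b):
    # Each child parameter is taken from one parent at random.
    return tuple(random.choice(pair) for pair in zip(a, b))

def mutate(ind):
    soak, steam = ind
    return (soak + random.gauss(0, 1.0), steam + random.gauss(0, 100.0))

population = [random_individual() for _ in range(50)]
for _ in range(100):
    # Keep the fittest half, refill by breeding and mutating survivors.
    population.sort(key=lambda ind: fitness(*ind), reverse=True)
    survivors = population[:25]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(25)]
    population = survivors + children

best = max(population, key=lambda ind: fitness(*ind))
print("best: soak %.1f days, steam %.0f bbl" % best)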


Petrobras enters Top 500 supercomputer list

Petrobras and ENI highest entries in latest HPC list—but with rather modest machines.

Petrobras’ ‘Grifo 4’ cluster is now the top oil and gas performer in the latest issue of the Top500 ranking of worldwide high performance computing (HPC). The number 1 slot is held by IBM’s 16 petaflop ‘Sequoia,’ a BlueGene/Q 1.5 million core system at the Department of Energy’s Lawrence Livermore National Laboratory.

Grifo 4, built by Brazil’s Itautec, comes in at number 68 with a more modest 250 teraflops using a hybrid architecture of Nvidia Tesla 2050 graphics processing units and Intel Xeon X5670 6C CPUs.

Other oil and gas machines in the list include ENI’s 130 teraflop HPC ProLiant cluster, now at number 133, and Total’s SGI Altix ICE 8200EX, a 100 teraflop machine at number 173. But both are older machines that have been in the list for a couple of years. Few oil companies (or contractors) bother with the Top500 list. For an idea of where oil and gas HPC stands today, we have the announcement by BP (Oil ITJ April 2012) of an installed petaflop capacity in Houston (which would come in the top 20) and Total’s 2.3 petaflop SGI ICE-X machine (Oil ITJ February 2012), notionally at number 6.

Of the 500 machines on the list, 462 run some variant of Linux. Two run Windows HPC Server 2008, including the Chinese ‘Dawning’ machine, which continues its inexorable slide down the list, from number 10 in 2008 to number 94 today. Download the TOP500.


‘Mangrove’ frac package announced

Schlumberger’s Well Services unit offers Petrel-based software and services to clients.

Schlumberger’s Well Services (SWS) unit has released ‘Mangrove,’ an ‘industry-first’ integrated completion and stimulation design plug-in for its flagship Petrel geoscience platform. Mangrove models completions and hydraulic fracturing activity in the context of the Petrel shared earth model.

SWS VP Salvador Ayala said, ‘Developing unconventional reservoirs has been a challenge with a purely geometrical approach to staging and perforation placement and with empirical fracture treatment design. Mangrove applies a scientific approach that uses all the available data to complete the most productive parts of the reservoir.’

Mangrove includes a Completion Advisor for perforation picking and staging, along with predictive complex hydraulic fracture models as well as conventional planar fracture models. The Completion Advisor helped one Marcellus operator optimize staging and perforation design and resulted in a 50% hike in production in one well by ‘eliminating screenouts.’ Mangrove does not appear to be available on the Petrel/Ocean store. The combined software and services offering is currently only provided through Schlumberger engineering services.


Heterogeneous parallel processing solutions for the oil and gas industry

‘Sheaf’ abstract data model from Limit Point Systems bundled with MBA Sciences’ primitives.

Limit Point Systems and MBA Sciences have teamed to offer oil and gas clients a novel approach to parallelizing data and computing for exploration and production workflows. LPS’ ‘sheaf’ data model will be marketed along with MBA Sciences’ parallel programming toolset for applications such as seismic modelling and numerical simulation.

LPS’ sheaf is an abstract representation of gridded and/or time-variant data that is claimed to offer orders of magnitude of speedup when translating data between different gridding schemas. One use case might be mapping between the rectangular grids of reservoir simulators and the tetrahedral meshes of geomechanical packages. Such compute-intense transforms can benefit from the parallelization offered by multi-core servers and GPU-based devices.
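Limit Point has not published the sheaf API, so the sketch below only illustrates the underlying problem, sampling a rectangular reservoir property grid at the node locations of an unstructured mesh, and how a multi-core server can be applied to it with standard Python tooling. Grid sizes and property values are invented.

# Illustrative regridding: sample a rectangular property grid at the node
# locations of an unstructured (e.g. tetrahedral) mesh, in parallel.
# Generic code, not the Limit Point 'sheaf' API.
import numpy as np
from multiprocessing import Pool

# Rectangular reservoir grid: porosity on a 100 x 100 x 50 lattice of 10 m cells.
porosity = np.random.rand(100, 100, 50)
CELL = 10.0

def sample_at(points):
    """Nearest-cell lookup of porosity at an array of (x, y, z) points."""
    idx = np.clip((points / CELL).astype(int), 0, np.array(porosity.shape) - 1)
    return porosity[idx[:, 0], idx[:, 1], idx[:, 2]]

if __name__ == "__main__":
    # One million mesh node coordinates, split into chunks for the worker pool.
    nodes = np.random.rand(1_000_000, 3) * [1000.0, 1000.0, 500.0]
    chunks = np.array_split(nodes, 8)
    with Pool(processes=8) as pool:
        values = np.concatenate(pool.map(sample_at, chunks))
    print(values.shape)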

LPS CEO David Butler said, ‘By combining our expertise in representing and mapping meshes with MBA Sciences’ heterogeneous parallel processing, we can increase the speed and accuracy of geoscientists’ numerical models.’ More from d.m.butler@limitpoint.com.


Waterfall’s ‘unidirectional’ comms gateway at two upstream assets

Read-only link to process control network claimed to eliminate hacking.

Tel Aviv, Israel-based Waterfall Security Solutions, in collaboration with Houston-headquartered W-Industries, has installed Waterfall’s Unidirectional Security Gateways (USG) at two facilities of an unnamed ‘large oil and gas exploration and production company,’ one onshore and the other an offshore platform.

Waterfall’s USG links industrial control systems to the business network via a secure unidirectional communications link, implemented as a laser and photocell combination, that minimizes the risk of data interception.

The USG allows the control system’s server to be replicated in the business environment, but makes it impossible to write back and hack the control system. Real-time data is available to users on the business network while the risk of a cyber-attack affecting the facility is eliminated. Users include production engineers and operations personnel involved in monitoring and optimizing production.
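Waterfall enforces one-way flow in hardware. Purely to illustrate the replication pattern, the sketch below pushes process values to a business-side historian over a send-only UDP socket, so nothing on the control side ever listens for inbound connections. Addresses and tag names are hypothetical.

# Illustration of one-way replication (not Waterfall's product): the control
# network side only ever sends; it never listens, so there is nothing for an
# attacker on the business network to connect back to.
import json
import socket
import time

BUSINESS_HISTORIAN = ("10.1.1.50", 5005)   # hypothetical replica address

def read_process_values():
    # Stand-in for a read from the control-system historian.
    return {"timestamp": time.time(), "well_A_thp_psi": 1873.4, "pump_amps": 42.1}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # send-only datagram socket
while True:
    payload = json.dumps(read_process_values()).encode("utf-8")
    sock.sendto(payload, BUSINESS_HISTORIAN)
    time.sleep(1.0)   # replicate once a second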

Waterfall claims that its systems reduce management and monitoring costs over conventional firewalls and eliminate safety breaches due to errors and omissions when configuring a firewall. More from Waterfall and W-Industries.

