May 2009


Complex Event Processing for PI System

OSIsoft and Microsoft have announced a tie-up of their real-time offerings. PI System is to embed complex event processing technology originally developed for the financial services industry.

At the Microsoft TechEd in Los Angeles this month, the company announced that the next edition of its SQL Server database is to include complex event processing for real-time data streams. Hard on the heels of this announcement, OSIsoft unveiled plans to offer the new technology alongside its flagship PI System real-time database. PI System forms the core of many ‘digital oilfield’ initiatives—see for instance our report from the 2009 OSIsoft User Group on page 7.

Complex event processing (CEP) takes raw real-time data feeds and performs correlations and other rule-based processing to output actionable information. ‘Low latency’ CEP means that the processing takes place in near real time. CEP was pioneered by MIT professor Michael Stonebraker through his StreamBase company. Traditional CEP applications are in the financial services arena, for popular activities such as programmed stock trading.

OSIsoft product manager Laurent Garrigues told Oil IT Journal how CEP would benefit upstream users, ‘Raw production data is seldom suited for immediate use. Data needs to be QC’d and filtered—a perfect task for the PI-CEP engine. Quality rules can be implemented as a sequence of reusable logical blocks to support the overall cleansing and filtering process. Another use case is in information aggregation. The PI-CEP engine can be used to apply filtering and business rules in real time across multiple producing wells and under dynamically changing conditions. CEP can be used to provide production allocation information or to manage alarms, finding patterns in sudden, massive bursts of data generated by an abnormal situation. The low latency of the CEP engine means that information can be separated from noise in time to take meaningful action.’
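By way of illustration, a chain of reusable cleansing rules over a stream of tag readings might look like the following sketch. This is generic Python with hypothetical rule names and thresholds, not OSIsoft or Microsoft CEP code.

```python
# Illustrative sketch only: hypothetical reusable cleansing rules chained over a
# stream of (timestamp, value) readings. Not OSIsoft or Microsoft CEP code.
def drop_out_of_range(readings, lo, hi):
    """Discard physically impossible values."""
    return ((t, v) for t, v in readings if lo <= v <= hi)

def drop_flatlines(readings, max_repeat=10):
    """Drop readings once a value has repeated suspiciously often (stuck sensor)."""
    last, count = None, 0
    for t, v in readings:
        count = count + 1 if v == last else 1
        last = v
        if count <= max_repeat:
            yield t, v

def cleanse(readings):
    """Compose the reusable rules into a single filtering pipeline."""
    return drop_flatlines(drop_out_of_range(readings, lo=0.0, hi=50_000.0))

if __name__ == "__main__":
    raw = [(0, 1200.0), (1, -5.0), (2, 1210.0), (3, 1210.0)]
    print(list(cleanse(raw)))  # the -5.0 reading is filtered out
```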

Monitoring and maintenance of rotating equipment such as pumps and compressors will also benefit from the expressiveness of the CEP query language, making it easy to filter, sort and rank equipment based on operating data.

Ted Kummert, senior VP at Microsoft’s business platform division, added, ‘We chose to collaborate with OSIsoft because of its experience in creating real-time event-driven infrastructures. We are already engaged with customers who are interested in the benefits of this new platform.’

Commercial CEP-enabled editions of SQL Server are expected to be released in 2010. The close relationship between OSIsoft and Microsoft will make possible early adoption of the new CEP technology when it is made available next year. Read our interview with OSIsoft’s VP marketing, Jon Peterson on page 2 of this issue for a peek at what CEP will bring to OSIsoft’s technology line-up. More from www.osisoft.com.


Techsia sold

Schlumberger acquires petrophysics boutique Techsia. Techlog is to embed Ocean and provide well log analysis to Petrel users.

Schlumberger has acquired Techsia in a deal whose commercial terms were not disclosed. Techsia’s flagship Techlog well log analysis package is to be ported to Schlumberger’s Ocean development environment and coupled with the Petrel earth modeling solution. Tony Bowman, president of Schlumberger Information Solutions said that ‘Adding log and core petrophysical interpretation to Petrel will be particularly valuable in real-time well construction workflows.’

Techsia’s Montpellier, France HQ will become the Schlumberger Petrophysics Software Center of Excellence. Techsia founder and CEO Stephanie Gottlib-Zeh, said, ‘Accessing Schlumberger’s global organization will bring us greater market penetration and accelerate the realization of integrated workflows from the wellbore to the reservoir.’ Future product development will benefit from collaboration with Schlumberger R&D centers of expertise in geomechanics, well placement, production logging and fluid characterization. Techsia’s ‘Malcom’ solution for chromatographic analysis of fluid samples will also benefit from Schlumberger’s global reach. Techsia currently has 53 employees and will operate under its own name. More from christelle.guillermou@techsia.com.


Good news for subscribers and how Oil IT Journal is weathering the downturn

Editor Neil McNaughton explains how Oil IT Journal reconciles ad revenue and paid-for content.

A brief editorial as this month’s actualité has been prolific. I will take the opportunity to provide some housekeeping advice to readers and potential advertisers in Oil IT Journal. For some time now we have offered ‘sponsored links’ to companies mentioned in Oil IT Journal. I wrote about these in my February 2008 editorial. The links have proved moderately successful, as have our website home page sponsorship slots. But I want to emphasize that such ad revenue is totally secondary to our main revenue-generating activity, which is content provision to subscribers.

A few words on the mechanics of our production process will help you understand how this works. First we decide what articles are going into the newsletter. Then, about a week before publication, we email the companies we will be writing about to invite them to pony up a modest amount for a sponsored link on a ‘take-it-or-leave-it’ basis. The links appear in print and online, in both our full-text subscription editions and on the public headlines-only site.

For a long time I have been reflecting that the links business has actually detracted from our content provision in a small, but to my mind, irritating way. Before the paid-for links, we liberally sprinkled the Journal with hotlinks and website references. Once we started selling links, we cut back on the ‘free’ references, forgetting our pact with our subscribers.

I had my eureka moment this month—actually it was more of a Homer Simpson ‘duh’ moment—why should the advertising process deprive subscribers of useful links? We are changing tack. From now on, link sponsors will see their links added to the public online edition and into the RSS feeds, but we will provide links and/or contact information in the full text and print editions for all of our articles. A return to the status quo ante.

Will this mean a massive defection from our link sponsors? I think not, and anyhow, I don’t really care. We introduced our ads to take a little oxygen from our competitors’ air, a strategy that has seen modest success. But we compete by providing quality content, by being on the spot and now, with more useful links.


Oil IT Journal interview—Jon Peterson, VP marketing OSIsoft

Peterson discusses the relationship between the PI Historian and emerging Complex Event Processing technology.

What is Complex Event Processing (CEP)?

CEP takes multiple streams of real time data and applies standing queries to identify underlying root causes of events in the data stream.

How will the final PI/CEP combo look? CEP is an SQL Server-based time series solution, and PI is a stand-alone RT database. Does the CEP announcement imply a migration or merger of the PI RTDB with SQL Server? Or will the two run in parallel?

CEP is not a database, although databases are an important aspect of CEP. A database could be the source of data for CEP queries, and events, once identified, could be stored in a database. There are three main components of CEP: the input adaptor, the CEP engine and the output adaptor. The adaptors can be written by third parties such as OSIsoft; Microsoft will supply adaptors to and from SQL Server. The CEP engine supports the concept of a standing query, a query that runs continuously on the flow of data into the engine. Microsoft provides a rich query language and toolset that handles just about any conceivable CEP scenario.
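To make the standing-query idea concrete, here is a minimal sketch in generic Python of a query that stays resident and updates a rolling average per tag as each event arrives. The class, tag names and window size are illustrative assumptions, not Microsoft’s CEP query language or OSIsoft’s adaptor API.

```python
from collections import defaultdict, deque

# Minimal sketch of a 'standing query': it stays resident and re-evaluates its
# answer as each event flows in, here a rolling average per tag over 300 seconds.
# Class, tag names and window size are illustrative, not a vendor API.
class RollingAverageQuery:
    def __init__(self, window_s=300):
        self.window_s = window_s
        self.events = defaultdict(deque)  # tag -> deque of (timestamp, value)

    def on_event(self, tag, timestamp, value):
        q = self.events[tag]
        q.append((timestamp, value))
        # evict readings that have fallen out of the time window
        while q and q[0][0] < timestamp - self.window_s:
            q.popleft()
        return sum(v for _, v in q) / len(q)  # current answer for this tag

query = RollingAverageQuery()
print(query.on_event("WELL_01.PRESSURE", 0, 101.0))   # 101.0
print(query.on_event("WELL_01.PRESSURE", 60, 99.0))   # 100.0, updated as data arrives
```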

How is it going to impact PI System users?

We’ll be leveraging this technology to support high speed, sophisticated analysis of data as it flows into the PI System. Identified events will be stored in PI and can be leveraged by our notification or visualization products. We will do this by creating an input adaptor to CEP, a way to route data heading to the PI Server into the CEP engine. We will also build a query configuration tool tuned to specific customer scenarios at first, and gaining in sophistication over time. And there will be an output adaptor to write query results to the PI System.

It seems that CEP is a pretty good description of what is already done by PI/AF. Pat Kennedy used the CEP term back in 2006*. I am not quite sure if this story is a technical one or if it just reflects the ongoing ‘symbiosis’ between OSIsoft and Microsoft.

I’d say it is both. If you look at our existing analytics—PEs, ACE, Totalizer, there are some CEP aspects. But there are some limitations. Our syntax and queries are rather simple (with the exception of ACE, which is VB.Net) and are limited to thousands of events per second and thousands of simultaneous queries. Also, to support disparate data sources we must first get the data into PI. Microsoft’s CEP technology will address all of the above. We will likely first leverage the areas where better scaling is required.

Where did CEP start?

Stream processing and CEP started making noise in data processing about seven years ago. Michael Stonebraker (StreamBase) was one of the first. When we read his papers we realized it was similar to some of the things we had been doing for years. But we were different in that we could easily store all the data coming into our system and store the events we detected or calculated. Of course, CEP is much more generalized than we are—we stick to value/timestamp pairs. The most mainstream CEP scenario is around financial transactions such as stock trading.

This brings up another aspect—CEP will be mainstream technology produced by the big players such as Microsoft and Oracle. They are investing significant resources in this technology and in the long run we are better off leveraging this work. We will focus on the scenarios and issues around manufacturing, utilities and the process industries rather than the details of query languages, parsing, etc. Of course, being a strong Microsoft shop, we will leverage their technology.

More from www.osisoft.com.

* www.oilit.com/links/0905_5.


PetrisWINDS OneTouch rollout, Wipro support deal signed

SharePoint/ArcGIS front end to WINDS Enterprise provides iPod-like interface to E&P data.

At the PNEC Data and Information Management conference in Houston this month (full report in next month’s Journal), Houston-based Petris Technology introduced a new E&P ‘knowledge portal.’ Petris Winds OneTouch is a Microsoft SharePoint/Esri ArcGIS-based front end to Petris’ Winds Enterprise data access framework.

OneTouch builds on the SharePoint services-oriented architecture (SOA) paradigm with a hosted E&P web part gallery where users can access user interface or business intelligence components developed by Petris and other clients. Petris will provide technical guidelines to developers to assure compatibility and ease connection to in-house and remote data services. The idea is to offer a rich set of displays and analytics for better data access and collaboration.

While the gallery is hosted, OneTouch/Winds is deployed behind the corporate firewall, the configuration of choice for most clients. OneTouch is not exactly a ‘shrink-wrapped’ solution; client sites and requirements vary such that deployment requires customization. To achieve this, and also to provide 24x7, world-wide support, Petris has enlisted Indian outsourcing behemoth Wipro. A strategic partnership with Wipro Technologies, Wipro’s IT services unit, will add global manpower, expertise and services to Petris’ E&P data management offering. Wipro has previously helped BP, Shell and Saudi Aramco with SharePoint projects. According to Wipro, BP has a staggering 78 SharePoint projects underway!

Petris CEO Jim Pritchett said, ‘We have tried several outsourcing models. We found Wipro’s to be the best. Wipro embeds some developers in our Houston location and outsources the rest to its Indian location. This helps reduce costs and lets us work around the clock with a large resource base and an ‘agile’ development approach. A lot can be achieved overnight by Wipro’s teams in India.’

Pritchett also noted that many SharePoint deployments fail to use all of the advanced features available. This is not the case for OneTouch. At PNEC, the iPod-like interface to SharePoint document stores proved an eye-catcher, displaying E&P documents the same way the iPod displays playlists and album artwork. The ArcGIS front end, while more conventional, adds functionality that was previously lacking in Winds. More from www.petris.com and www.wipro.com.


ISS Group contests CygNet’s SCADA/GIS ‘first’ claim

ISS’ BabelFish puts SCADA data on a map—as do Iconics’ Geo Productivity portals.

ISS Group MD Shane Attwell took issue with our report on CygNet’s claim to be first in the field with a GIS/SCADA offering (OITJ April 09). Attwell points out that ISS’ BabelFish Aspect ‘goes a lot further than simply putting real-time SCADA information on a map.’ Aspect extends ISS’ BabelFish framework tool to integrate GIS with SCADA, relational and other data types like work orders, well logs and schematics. Users can drag and drop real-time data sources onto a map to create a spatially referenced ‘real-time layer.’ A similar process allows objects such as logs and drawings to be dropped onto a map and spatially referenced for re-use. Actually, we knew this, having reported on Santos’ use of BabelFish spatialization in our June 2008 edition!

In the same vein, we learn from Foxborough, MA-based Iconics of an OPC-based SCADA visualization system. Iconics’ new Geo Productivity portals provide a dashboard of productivity, alarm and local meteorological information. The current target market is wind farm management, but Iconics GeoScada is also deployed in oil and gas, notably by Russia’s Transneft, to visualize pipeline SCADA data streams in Virtual Earth (OITJ March 09). More from www.issgroup.com.au and www.iconics.com.


Sunoco awards Wipro $34 million IT outsourcing extension

Infocrossing unit gets four year extension to infrastructure outsourcing deal.

Sunoco has awarded Wipro’s Infocrossing unit a $34 million, four-year contract extension in an expansion of the companies’ 13-year relationship. Infocrossing provides Sunoco with managed infrastructure and outsourced servers, storage and network devices.

Sunoco CIO Peter Whatnell said, ‘Since 1996, we have achieved operational improvements and enhanced service delivery. Our IT roadmap includes an outsourcing strategy and we will continue to work with Infocrossing to manage our mainframe, UNIX and Windows environments as well as taking advantage of its shared infrastructure services such as storage arrays.’ More from www.infocrossing.com.


Enigma’s variation on PARS—speedy backup and recovery

‘Application aware’ asset-level backup automates capture of key project information.

Enigma Data Solutions has announced an extension to its flagship PARS E&P data archival solution. PARS Backup leverages PARS ‘application aware’ routines to provide multi-project, asset-level backup and recovery. PARS Backup automates the backup of diverse application data from geotechnical and office applications.

Traditionally, PARS archiving is intended for longer-term, multi-year data security. PARS Backup targets medium-term backups that would typically expire after six months, when storage media are recycled. Data managers can restore complete projects quickly after system failure or corruption—without the complexity and risk of rebuilding datasets from file system and database backups. PARS Backup captures key project data and metadata from mainstream E&P applications from Landmark, Schlumberger and Paradigm. More from www.enigmadata.com.


Venture reveals first results from Information Orientation study

Survey of 25 UK-based E&P shops reveals low IM satisfaction and strange bi-modal response.

UK-based Venture has released preliminary results* from its upstream information management survey, announced last year (OITJ October 2008). Venture quizzed 25 UK-based E&P companies on IT and IM practices, information behavior and values using the ‘information orientation’ (IO) categorization developed by researchers at the University of Stavanger. The main conclusion from the study is that satisfaction with current procedures is low. There is a recognized need for improvement in IM with emphasis on assessment, organization and sharing.

A striking observation was the common occurrence of bi-modal patterns in the query responses, delineating satisfied and dissatisfied groups. These could represent a lack of consensus as to best practices or failed implementations.

Trying to relate IM to business performance proved hard, but Venture’s researchers reject the idea that there is no underlying relationship between the two. The lack of such a correlation in the survey results likely reflects the fact that such surveys fail to take account of other effects on business performance. For example, although the accuracy of, and ease of access to, geological information will favor exploration success, such a correlation may be masked by the acquisition of other companies.

* www.oilit.com/links/0905_7.


Maxeler chairman claims ‘200 fold’ seismic speedup

Mike Flynn claims that Field Programmable Gate Array technology beats CPU hands-down.

Mike Flynn, professor of electrical engineering at Stanford University and chairman of Maxeler Technologies, speaking at a seminar at Stanford this month, claimed that FPGA* technology ‘wins out on performance’ for most applications. Flynn believes that Maxeler’s FPGA compiler toolkit now makes it possible to transform an application into a data flow graph that can be mapped to an ‘unconstrained systolic array.’ This allows the array structure to be matched to the application’s structure without the constraint of nearest-neighbor communications. Flynn has mapped a forward modeling seismic imaging algorithm to a 2,000-node systolic array on two FPGAs. Each node performs an operation every 4 nanoseconds, resulting in a 200-fold speedup ‘compared to an Intel CPU.’ More from www.maxeler.com.

* Field programmable gate arrays.
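As a back-of-envelope check, the quoted array size and cycle time are consistent with a speedup of this order. The arithmetic below is ours, and the 2.5 GFLOPS CPU baseline is an assumed figure for a circa-2009 core, not a number from the talk.

```python
# Back-of-envelope check of the quoted figures; the CPU baseline is an assumption.
nodes = 2000                    # systolic array nodes across the two FPGAs
op_period_s = 4e-9              # one operation per node every 4 nanoseconds
fpga_ops_per_s = nodes / op_period_s        # 5e11, i.e. ~500 G operations/s
assumed_cpu_ops_per_s = 2.5e9               # assumed sustained rate of one CPU core
print(fpga_ops_per_s / assumed_cpu_ops_per_s)  # ~200, the order of the claimed speedup
```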


High Performance Computing users meet at Rice University

Sandia Labs on bugs, ConocoPhillips’ HPC, ECLIPSE benchmark, GeoBenchmark, Solid State disks.

Curtis Ober presented work done at Sandia National Laboratories on verification of complex codes by exploratory simulation. Ober’s starting point was Les Hatton’s 1997 paper* titled ‘The T Experiments, errors in scientific software.’ This concluded that software accuracy is greatly undermined by software errors and that most packages are full of statistically detectable inconsistencies in programming language use.

Ober’s work investigates the impact of software bugs on numerical error and uncertainty. Numerical error can be broken down into discretization, round-off and conversion errors, plus ‘implementation coding errors,’ i.e. bugs. Defect detection leveraged statistical and other analysis of ‘model code.’ The study found that many bugs can be detected using simple tests. Sandia uses these techniques to validate its own codes.

A ConocoPhillips presentation by Bill Menger discussed the delicate balance of Bandwidth, Throughput, IOPS, and FLOPS. According to co-author Dave Glover, a supercomputer is a computer that is one order of magnitude slower than what scientists and engineers need to solve their current problems. This definition is timeless and assures job security!

Menger advocates task-driven design, adapting hardware to ensure that users get what they need, no more, no less. One use case called for a 12 million CPU-hour run to be completed in 50 days. The solution involved running each job on 8 cores, optimizing data loading, adding memory and disk and ‘buying enough computers to make it happen!’ The chosen hardware comprised 1,250 Rackable dual-socket, quad-core AMD Opteron nodes, each with a 10GigE iWARP card, and 16 shelves of Panasas storage, for a theoretical 46 teraflops.

Gilad Shainer (Mellanox Technologies) presented an investigation of how Schlumberger’s Eclipse reservoir simulator performance varies with network type. A 4 million cell, three-phase black oil model with 800 wells was used for the test, which was run on a 24-node Dell PowerEdge SC 1435 cluster with Quad-Core AMD Opteron 2382 processors. Tests showed that the InfiniBand interconnect provided the highest performance and scalability up to 8 nodes. Running eight jobs per node increased productivity by up to 142%. InfiniBand also reduced power consumption by 82% compared to 10GigE.

Evgeny Kurin of Russian GeoLab introduced a new seismic processing benchmark for HPC. GeoBenchmark includes a set of simple seismic processing modules. Tests target specific computer subsystems and are ‘reasonably portable.’ GeoBenchmark can be used to evaluate the relative contributions from code and compiler. Results are presented as a ‘tuning radar,’ a spider plot showing each subsystem’s contribution to performance. GeoBenchmark source code can be downloaded from www.oilit.com/links/0905_4.
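For readers unfamiliar with the format, a ‘tuning radar’ is simply a spider plot of per-subsystem scores. The sketch below, with made-up subsystem names and scores, shows the idea using matplotlib; it is not GeoBenchmark’s own plotting code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative 'tuning radar' with made-up subsystem scores; not GeoBenchmark code.
subsystems = ["CPU", "Memory", "Disk I/O", "Network", "Compiler"]
scores = [0.8, 0.6, 0.4, 0.7, 0.9]            # hypothetical relative contributions

angles = np.linspace(0, 2 * np.pi, len(subsystems), endpoint=False)
angles_closed = np.append(angles, angles[0])  # close the polygon
scores_closed = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.plot(angles_closed, scores_closed)
ax.fill(angles_closed, scores_closed, alpha=0.25)
ax.set_xticks(angles)
ax.set_xticklabels(subsystems)
ax.set_title("Tuning radar (illustrative)")
plt.show()
```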

Guillaume Thomas-Collignon (CGGVeritas) presented the results of a series of benchmark tests of new solid state drives from Intel. The Intel X25-E SLC SSD delivers a spectacular improvement in random read/write performance, producing around 20,000 IOPS compared with 400 for a high-end spinning SATA drive. The downside is smaller capacity and higher cost per GB. A trace sort test showed a ten-fold speedup with the SSD. For Collignon, the Intel X25-E SLC is the way to go, the main drawback being the limited capacity of current drives.
HPC papers available on www.oilit.com/links/0905_6.

* www.oilit.com/links/0905_3.


Software, hardware short takes

Encom, Geosoft, Petrosys, Geotrace, Geovariances, GeoStar, Meridium, P2ES, Paradigm, Petris...

Geosoft has announced its Geochemistry for ArcGIS extension for GIS-enabled geochemical QA and analysis.

Nvidia has teamed with Chinese geophysical services provider GeoStar on a ‘transformational’ hardware/software combo for seismic processing. Tests at the Chinese Academy of Sciences show a 600-fold speedup on a machine with 24 Tesla GPUs over a 66-CPU cluster.

An independent evaluation of algorithms used in Meridium’s asset performance management software by Joel Nachlas of Virginia Polytechnic Institute has determined them to be ‘accurate implementation of state-of-the-art methods for reliability estimation.’

Petrosys has integrated its mapping capability with the geophysical workflows of SeisWare International’s eponymous seismic interpretation package.

Blue Marble Geographics’s new GeoCore is an all-in-one geospatial data translation toolkit for manipulation of coordinate, geometry, vector, CAD, raster, and LIDAR data.

Confidex has launched ‘SteelByte,’ an RFID tag with 512 bits of user memory, a unique tag identifier number and a 13-foot read range on a metal surface.

Pitney Bowes has announced Encom PA 9.0 with advanced visualization, data analysis and a new focus on ease of use and management of potential field and raster data.

Energy Solutions has announced V 5.2 of its PipelineOptimizer solution with predefined ‘helper’ models, improved reporting and graphics and new equipment schematic elements.

Geotrace has released GeoBrowse 3.2, the latest version of its geographical integration system (GIS). The new release extends functionality and compatibility with Google Earth, Virtual Earth, Petrel, Petrosys and Kingdom Suite.

The V9.0 release of Geovariances’ Isatis geostatistical flagship supports georeferencing of PNG images, ‘point’ kriging and multithreaded simulations. The new release includes the latest developments in moving-geostatistics and multivariable recoverable resource estimation research.

Geographic Technologies Group is integrating its ESRI ArcGIS Server-based Geo Blade solution with CartoPac’s Field Server to speed field data capture.

Industrial Defender has announced its ‘Fourth Generation’ technology suite with enhanced SCADA/cyber security protection for several verticals including oil and gas and chemicals.

Intellog has released ‘Onramp,’ a free search engine for Canadian ERCB Directives and the full text of well licenses, drilling activity and pipeline data back to 2001. More from intellog.com/onramp.

New Century Software has been granted an exclusive license to sell the Nysearch cased pipe integrity assurance model to the oil and gas pipeline industry. The model lets operators prioritize pipeline integrity management actions, improve safety, save costs and comply with federal regulations.

P2 Energy Solutions has announced ‘Tobin All Access,’ a new licensing model offering wide access to state and county maps from the Tobin SuperBase map data at a national or regional level.

Paradigm Skua 2009 release now bridges interpretation and modeling in what is claimed as ‘the first, truly integrated workflow between seismic and geologic interpretation and modeling.’ New stratigraphic interpretation modeling allows for concurrent stratigraphic interpretation, geochronological modeling and 2D seismic paleo-restoration. Parallel processing reduces computation times by a factor of eight. A new ‘engineering modeling’ solution provides ‘unbiased’ flow simulation grids and a collaborative environment for geoscientists and reservoir engineers.

Version 6.2 of Petris’ WindsEnterprise heralds a move from ArcIMS to the ArcGIS Engine. New user role functionality improves administration and security. ‘Google-like’ global search pinpoints information in multiple databases. Petris’ DataVera solution now works with non-SQL data stores.


Unified Communications backed by Shell, Schlumberger

CERA Week Online interviews and Microsoft vaunt Office Communications Server and soft phones.

In a CERA Week Online interview, Microsoft’s head of oil and gas, Ali Ferling, plugged Microsoft’s Unified Communications (UC) offering as key to enabling upstream clients to react faster to a changing environment. Another key Microsoft value proposition is bridging the gap between ‘solid’ enterprise ERP solutions and the desktop. Microsoft is currently studying how the upstream manages data across multiple repositories. Ferling reported a BP manager complaining that information was not up to date. The solution was, surprisingly, to use Microsoft Outlook.

Microsoft has backed up its earlier announcements on Unified Communications (OITJ March 08) with a couple of video pronouncements from Shell and Schlumberger. Shell senior infrastructure consultant David Griffiths explains how Office Communications Server (OCS) 2007 is used to build virtual teams and to stay in touch with offshore workers. Group IT architect Johan Krebbers explains how the single interface for all communications integrates with the rest of Shell’s Microsoft-based working environment.

Last year, Krebbers told Oil IT Journal that the plan was to scale up the system in 2009. It now appears that Shell is waiting on full deployment of OCS Release 2 before finally replacing its traditional PBX environment. Already, ‘desk phones’ are a thing of the past—replaced with ‘soft phones’ that integrate with the desktop and allow calls from the office, home, hotel etc.

Schlumberger has likewise drunk the OCS Kool-Aid, which is leveraged in a new ‘Schlumberger Unified Communications’ architecture. Chief software architect Eric Schoen explained how a successful OCS pilot might be deployed across the enterprise. Schlumberger Information Solutions’ Keith Tushingham showed how a UC call made from inside Petrel could access the corporate directory to find people by expertise as well as by name. Forget ‘I’m on the train,’ interpreters will soon be calling home to say ‘Hi, I’m inside Petrel!’ More from www.oilit.com/links/0905_10.


SPE Digital Energy Conference 2009, Houston

Despite the downturn, R&D is not dead. Some researchers are now attached to assets and ‘there are no more glossy brochures!’ Most agree that ‘digital’ is only partly about technology. Exxon has developed a ‘Suitcase’ of tools and processes to ensure rapid startup and sustainability.

In her keynote, Melody Meyer, who heads up Chevron’s Energy Technology Company, noted the impact of current economic conditions, which have led to ‘cost management’ and other restrictions. Meyer asked, ‘Is a digital energy strategy important in the low cycle?’ Pundits expect 2009 to be hard on innovation. There are cutbacks on endowments to universities and questions as to where R&D fits in. Meyer believes that those who keep at it will come out ahead at the end of the recession.

Kicking off the panel discussion, Meyer suggested that digital energy means different things to different people. In the late 1990s a powerful idea came along: use existing technology to integrate differently, with the dream of optimized oilfield operations. But we forgot that some of our fields were ‘built’ 100 years ago! We had to step back and rethink things, starting with the existing data and field systems, and rebuild our ‘transformed operations’ to optimize performance. The i-Field is also about upstream business transformation—in fact it is ‘only 5% technology,’ and much more about workflow. It also represents a shift from a linear, silo-oriented process to an optimized, collaborative approach. Today, Chevron’s Asset Decision Environment (ADE) lets remote field teams plan and troubleshoot field operations, leading to safer and more efficient operations. While an ADE is dedicated to a particular field, Chevron’s integrated decision environment has a broader scope and is used to optimize technology development and to leverage subject matter experts. Meyer’s own definition of the digital oilfield is ‘a workflow transformation from well to sales meter.’ Chevron is using integration in these challenging times to reduce costs. Digital energy is transforming the way we work. Asked what the ROI of the i-Field might be, Meyer elegantly ducked the question.

According to David Latin, $40 oil is an opportunity for BP and is in fact ‘really good news’ for the digital oilfield, known in BP as the Field of the Future. Latin’s definition revolves around real-time workflows and making better decisions faster, about codifying knowledge and eliminating inefficiencies. Latin agreed with Meyer, recalling that mistakes were made with the early focus on tools; BP is now more aware of the importance of people. That said, Latin enumerated BP’s digital oilfield inventory comprising, inter alia, 2,000 km of fiber, 2 million equipment tags and advanced collaboration environments that today manage 40% of BP’s production base. Digital has added 85 million barrels to BP’s production and is ‘cheaper than well work.’ Examples include WITSML data feeds that speed recovery from well incidents, better slug control and the use of physics-based modeling to optimize multi-phase flow. Latin’s team has less money than before, but BP is not about to stop doing R&D. Team members have been redeployed to assets and there are ‘no more glossy brochures.’ The focus now is on extracting value from the technology through at-scale deployment. BP is on track for a billion-barrel reserve hike and 100,000 barrels per day of increased production over ten years.

Derek Mathieson (Baker Hughes) defined the digital oilfield as ‘technology and workflow solutions connecting in a spatially-distributed way and in process-consistent time.’ For instance, data from a single sensor might be seen by 100 people around the world. For Baker Hughes, commercial solutions have sped things up greatly. Remote control and intelligent completion are now maturing. In its report on the Digital Oilfield, CERA deemed automation to be ‘a commodity.’ Mathieson does not agree: ‘We are only just coming to terms with control systems theory and optimization—which are an essential prelude to optimization.’ Baker Hughes was a late entrant in the digital oilfield. In 2002 it had only three rigs in Norway that were ‘connected.’ Today, 100 rigs are operated from three Beacon centers around the world and 30% of its high-end MWD operations are run out of a Beacon center. Technology is no longer an obstacle, but the business case is ‘harder to see from the service side.’

Russ Spahr (ExxonMobil) described the digital ‘endgame’ as being about improved reliability, more uptime and working with partners of choice to improve recovery. Integrated operations for Exxon are about blending expertise, process and technology and, again, are only 5% about technology. Digital needs to be placed in the context of the bigger picture which, for Exxon, is about ‘applying the right technology to the right asset.’ This is the real challenge because ‘digital is chasing the same barrels as other asset management processes.’ Exxon has twenty production management best practices along with supporting change management processes. The long term is also important: ‘You need to think through how to staff up for the next decade, what will be needed by the business, back room, networks, etc.’ Exxon is also keeping a focus on some R&D activity and proprietary technology ‘where it makes sense.’

John Brutz described Shell’s goal of constant, routine surveillance to highlight production anomalies from its Gulf of Mexico operations. Shell’s ‘graying’ workforce is at odds with a portfolio of greenfield and ‘end of life’ projects that are people-intensive. Shell has deployed an advanced alarm tool from Matrikon to flag anomalies for technicians and engineers. The SharePoint-based Central Surveillance Center (CSC) is a service that Shell has carved out from its assets and centralized. A typical CSC use case is pressure transient analysis. While this does need local knowledge, a lot of preparatory work (populating software etc.) can be automated in the CSC. Other CSC uses include ‘operating envelope’ (well) surveillance and subsea flow line surveillance. These are backed up with rigorous ‘who does what’ documentation. Booz & Co also helped with change management.

Jim Hoffman (Occidental) reported on a trial of SharePoint TeamSites’ Wiki component. ‘OxyPedia’ is an employee knowledge base to which all have read/write access. A proof of concept used data from Oxy’s Elk Hills asset, which has 300 employees and 20 contributors to the Wiki. The results are promising; one user described the tool as ‘a godsend.’ OxyPedia is now going world-wide.

John Hudson outlined how Shell is using ‘Lagosa’ to improve production operations and gas marketing. Lagosa is a SharePoint-based system that rolls up components including Honeywell’s Unisim and Prosper from Petroleum Experts. Lagosa has been deployed at Shell’s Bacton (UK) terminal to mimic control system operators’ manual actions and on Sakhalin II to integrate information from well head to onshore processing facility. Exceptions drive Lagosa workflows. Shell gets subject matter experts to explain how they know when the model is not giving a correct result. The results are analyzed and embedded in the model. Operators try to ‘crash’ the plant in simulation mode, e.g. by running for a long time at a low rate, then ramping up fast and putting a pig in the line! Today Lagosa is used by the ‘rich kids,’ i.e. Shell’s major projects, but the toolset is being simplified for deployment across all assets.

Russ Spahr was back with an in-depth look at what digital oil technology means to an integrated oil company. Exxon currently runs ten advanced visualization centers, a large drilling information management facility in Houston and other digital stuff—for surveillance, downhole control, 4D seismic, gas lift and process optimization. Exxon has developed a systematic approach from hardware to automation, passing through standardized data management and collaboration. Exxon deploys a ‘Suitcase’ of real-time tools for HSE, volumetrics etc. At the Kome control room in Chad, the Suitcase enabled rapid startup and now supports operator training and best practices. In 2004, Exxon chartered a new subsurface work environment that includes data access and a portfolio of approved applications, all rolled into an integrated system which is now being deployed.

Spahr offered some metrics on the effectiveness of digital technologies in Exxon. Use of an advanced visualization center, a ‘shared earth’ environment for geoscientists and engineers, saved $10 million in drilling costs on one field and contributed to the decision to forego an additional platform, saving a further $100 million. Remote real-time monitoring now happens on 75% of Exxon’s wells—with 20-30 monitored daily. A ‘fast drill’ process has led to a 57% average increase in performance. Improved reservoir modeling allows for fewer well tests. In the North Sea, Exxon has leveraged communications and Petroleum Experts’ IPM model for debottlenecking and gas lift optimization. Elsewhere, fiber communications enable high-bandwidth connections between platforms and the FPSO. Platforms can be converted to remote operations and de-manned, reducing cost and risk. Surveillance is particularly important in the Canadian tar sands play, where data mining and surveillance have identified work process efficiencies and minimized downtime.

David Feinman described how BP is ‘realizing the value’ of real-time well monitoring in its greenfield assets. The year before and the first six months of production are critical in a field’s development. In 2006, BP rolled out its ‘ISIS’ real-time well monitoring program and found that adoption was easier in greenfield sites. Cross-cultural change management is a major concern. BP has encountered issues with authority, status and individualism (US and UK employees) vs. collectivism (Indonesia, Angola). Now, ‘systems thinking’ and peer learning are incorporated into rollouts—allowing for fine-tuning with regard to local differences. Angola was the first greenfield site to benefit, with Portuguese localization. Next, Indonesian greenfield gas fields were rolled out with Bahasa language support. Greenfields are generally more amenable to real-time well monitoring and can quickly ‘leapfrog’ brownfields in knowledge management. BP is now trying to understand why, and to figure out how to replicate this ease of rollout in brownfield sites.

This article is an abstract from The Data Room’s Technology Watch from the 2009 DEC. More from tw@oilit.com and www.oilit.com/tech.


2009 OSIsoft User Group, San Francisco

Chevron, Southwest Gas and Pemex place PI System at the center of real-time operations.

Jayanta Sharma presented Accenture’s integration work on Chevron’s digital oilfields. Chevron has integrated OSIsoft’s PI System data historian and Analytical Framework (AF2) with real-time SCADA feeds. Chevron’s MidContinent/Alaska (MCA) business unit deploys a growing number of business applications that require ‘robust access to real-time process data.’ The problem is that process data is stored in multiple local historians with no central repository and no single source of process data ‘truth.’ This means that deploying new applications is hampered by complex data configuration and performance issues—especially as Chevron has dozens of SCADA servers and tens of thousands of tags. The answer was a central aggregating PI historian and an AF2 store of operational meta-data, object hierarchies and geographical context for tag data.

Re-using existing SCADA object models makes it easier to bring new data into PI and AF. New data sources appear automatically in AF and PI. Object updates and deletions also propagate to PI and AF automatically. A web services access layer based on AF SDK provides platform-independent data access for applications. This hides the complexity of internal SCADA and PI/AF2 workings. The system has been successful in keeping SCADA configurations and PI/AF2 in sync. This has led to quicker deployment of i-Field applications in production monitoring, optimization and equipment performance monitoring. Chevron’s spill reduction and surveillance apps also use MCA Solutions tied in to the PI aggregator. AF functionality is used to model complex compound elements derived from multiple tags and to support modeling and mass balance calculations.

Jim Mlachnik and Jeremy Snider told a similar tale of how Southwest Gas (SWG) is ‘stitching together’ disparate data systems to create a ‘single version of the truth.’ SWG replaced its SCADA application’s historical subsystem with PI. Access to real-time SCADA data is now provided internally via DataLink and ProcessBook, while RtPortal provides gas usage information to external agents and customers. Over the next couple of years, PI data will support SWG’s compliance and maintenance system initiatives. SWG’s master plan is to use technology, rather than hiring full-time employees, to meet business needs.

Manuel Chávez provided an update on PI deployment in support of Pemex’ control, monitoring and emergency systems. PI holds pole position in Pemex’ real-time infrastructure, acting as a bridge between operational data sources and ERP. Transpara’s Visual KPI is used to ‘put PI in the pockets’ of field engineers and managers. A constellation of applications for custody transfer, laboratory systems, SCADA and ERP is channeled into PI. PI then acts as a feeder to high-level solutions for process integration, planning and scheduling, operations coordination and the emergency system. The latter uses ‘GEO-Pemex,’ a Virtual Earth-based GIS front end to data on Pemex facilities, Mexican cultural data, pipelines and more. Presentations available on www.oilit.com/links/0905_11.


Folks, facts, orgs ...

Baker Hughes, ConocoPhillips, DrillingInfo, ERF Wireless, Endress+Hauser, Flowserve, GE Oil & Gas, Geoservices, Geosoft, IDS, Intertek, Invensys, Kadme, P2ES, Paradigm, PPDM Association.

Baker Hughes has appointed Belgacem Chariag as VP and President Eastern Hemisphere Operations. Chariag was formerly with Schlumberger.

Gary Pope has been named to lead the Center for Frontiers of Subsurface Energy Security, one of 46 new Energy Frontier Research Centers announced by the US administration.

ConocoPhillips has made a number of changes: Ryan Lance is senior VP, E&P International, Kevin Meyers VP, E&P Americas, Kevin Mitchell VP, E&P Strategy, Administration.

Melinda Faust, former Director of Marketing, has returned to DrillingInfo as Director, Business Development.

Douglas Gibson, former CEO of Vibtech, has joined ERF Wireless.

Exprodat has launched two new Petroleum GIS training courses, ‘Play Fairway Mapping with ArcGIS’ and ‘Building and Managing an E&P GIS.’

Raimund Sommer, MD of Endress+Hauser Process Solutions, is chairman of the Fieldbus Foundation’s EMEA Executive Advisory Council.

Flowserve has named Dean Freeman VP and Treasurer. Paul Fehlman is now VP Financial Planning and Analysis and Investor Relations.

GE Oil & Gas has appointed Sam Aquillano as VP Drilling and Production Systems.

Geoservices is using Facebook to promote its brand with a game ‘The Oil Conquest Challenge.’

Geosoft has made new marine gravity data from the Scripps Institution of Oceanography available for download at no charge via its public DAP Server.

IDS has appointed Douwe Franssens as General Manager.

Intertek has expanded its services to the upstream with oilfield safety and certification, offshore infrastructure integrity, vendor compliance and more.

Nicholas Pomeroy is to head-up Invensys’ new North Caspian Service Centre at Atyrau, Kazakhstan.

Kjell-Arne Bjerkhaug has been appointed to Kadme’s Board of Directors. Bjerkhaug is also on the ECIM board.

The US Minerals Management Service has signed a memorandum of understanding with the Norwegian Petroleum Directorate to exchange resource management information.

Vornel Walker has been promoted to VP, Marketing at COADE.

P2 Energy Solutions has appointed Eric Thurston VP of Sales Operations. Thurston was previously with SAP Americas.

Paradigm’s Innocentive Challenge (OITJ November 2008) on fracture net representation has been ‘solved’ by Stuart Rosenberg from Washington University in Saint Louis.

Schlumberger has named product group presidents: Paal Kibsgaard, Reservoir Characterization, Doug Pferdehirt, Reservoir Production and Jeff Spath, Reservoir Management. All report to executive VP Chakib Sbiti.

Richard Kluth has been appointed general manager of Sensorsnet, where he was previously COO.

Victor Minor of Blue Marble Geographics has been named Chair of the Open Geospatial Consortium’s Data Quality Working Group. Matt Beare of 1Spatial was named the Vice Chair.

Correction

The PPDM Association took issue with our report from its Spring User Group meeting (OITJ March 09). PPDM points out that it has never been ‘near liquidation’ as we wrongly reported. PPDM chairman David Hood did use the ‘L’ word, but in the context of a potential situation that might have arisen without a change of course. PPDM also wants to make it clear that while it did receive support from ConocoPhillips and Chevron, many other member companies have provided similar support to PPDM in the recent past. Our apologies to PPDM for the mis-representation.


Done deals

Knowledge Reservoir MBO, SGI reborn, GE rebrands, Zokero merges with Blue Castle Corp.

Knowledge Reservoir’s management has completed its buyout with the acquisition of Ziebel AS’ remaining shareholding in the company. The MBO marks the repurchase by the original founders, Ivor Ellul and Robert Archer, of Ziebel’s stake, acquired in May 2007. Ziebel’s advisor in the transaction was Simmons & Co.

Rackable Systems has completed its $42.5 million cash acquisition of Silicon Graphics. Rackable is to adopt SGI as its global name and brand.

Cortex Business Solutions has closed an initial tranche of its private placement. Agent Wolverton Securities raised $CDN 2,200,000 in the initial closing. Proceeds will be used to accelerate Cortex’s network expansion program and for general working capital. Wolverton received a 10% fee plus options on 10% of the sale.

GE Oil & Gas has ‘streamlined’ its operating brands and will transition to a single ‘GE Oil & Gas’ moniker. The unit now embraces acquired companies including VetcoGray, Hydril Pressure Control and PII, and is to regroup into six categories: Drilling and Production; LNG & Pipeline; Industrial Power Generation; PII Pipeline Solutions; Refinery & Petrochemical; and Global Services.

Production Enhancement Group has announced the failure of its attempts to reach agreement with its senior lender to forbear from enforcing its indebtedness. The board of directors is to co-operate with the senior lender in the UCC foreclosure of the shares pledged by PEG in its principal operating subsidiary, Wise Well Services.

Terralliance Technologies has secured new funding to scale its Natural Resource Mapping (NRM) technology and address additional exploration opportunities beyond hydrocarbons, including gold and other minerals. Terralliance also announced the appointment of Michael Long as chairman of the board.

Zokero Inc., developer of SeisWare, and Blue Castle Corporation, a support and services provider, have merged. Ed VanWieren, former president of Zokero, is CEO, and Murray Brack, former president of Blue Castle, is COO of the new SeisWare International company.


Anadarko schedules rig fleet with Primavera P6 project management

Oracle’s project management toolset is component of Anadarko’s Capital Effectiveness System.

Anadarko reports use of Oracle’s Primavera project management toolset to support its deepwater drilling program, said to be one of the industry’s largest. In 2009, Anadarko will operate seven deepwater rigs: four dynamically positioned vessels and three tethered semisubs. Anadarko project management professional John Reno commented, ‘It’s unusual for an operator to have such a large fleet, but having so many rigs under contract in a tight market gives us a significant competitive advantage. The cost of operating a deepwater rig can approach $1 million per day, leading to considerable pressure to meet project deadlines and safely manage the impact of storms and underwater currents. We use Oracle’s Primavera P6 Enterprise Project Portfolio Management to integrate rig schedules within our development plans. Such integration is required to tie in with critical completion dates and to make requirements and progress visible.’

Drilling project management is a component of Anadarko’s Capital Effectiveness System (ACES). This stage-gate process leverages the Cost Manager application from UK-based Kildrummy, Primavera and other estimating tools. ACES is used by Anadarko’s project services group to facilitate the transition between exploration and development teams, delineating roles and responsibilities at each stage of the workflow.

Primavera also supports interactive planning sessions at Anadarko’s operations intelligence center. Reno explained, ‘We get everyone together to review and update schedules and project items. This lets us prioritize drilling activities on multiple rigs, building in our lease expiry dates. What-if analyses help prepare for unforeseen changes in rig availability.’ Primavera’s central database also helps in the handover from drilling to development. More from www.oilit.com/links/0905_12.


FIATECH Conference 2009, Las Vegas

ISO 15926 center stage, BP ‘safety from data,’ ISO X3D, Equipment Information Exchange, more...

The FIATECH Conference*, held last month, has become the showcase for the ISO 15926 construction data standard, which was cited in no fewer than nine presentations. Petronas Carigali reported enthusiastically on the use of the standard which, along with other ISO, NIST and IEC standards, underpins the company’s PCIM project (OITJ May 2007) and its Virtual Facilities Data Center (VFDC), an engineering data portal and simulation environment that leverages technology from AVEVA. For Petronas, the road to information interoperability is via standardization, metadata and data structures. The VFDC was deployed on Petronas’ Angsi field, a joint venture with ExxonMobil (also a FIATECH member).

As we reported earlier (OITJ January 2009), the Camelot/iRing ISO 15926 demonstrator showed how, with publicly available tools, legacy systems have been mapped to ISO 15926. Data exchange scenarios involved engineering companies located in Athens, Brisbane, Houston, Pune and elsewhere. ISO 15926-compliant applications in the exchange included SmartPlant, PlantSpace P&ID, OpenPlant PowerPID and even Excel. The public iRing should be up and running at iring.ids-adi.org by the time you read this.

Fiatech is also working on a standard for automated procurement and supply of process equipment. Current projects under this Automating Equipment Information Exchange (AEX) initiative include heat exchanger datasheets (KBR) and a global valve cross-reference e-Catalog (GVCC). The GVCC incorporates work done by the Process Industry Practices (PIP) organization. GVCC/PIP members include Aramco, AVEVA, Chevron, ConocoPhillips, FMC, Honeywell, Intergraph and Sunoco. Deliverables include a web-based valve selector (due in Q4 2009) and a migration path for valve manufacturers from existing Word/PDF files to the AEX schema.

Deborah Grubbe, VP Safety with BP, explained how Fiatech was helping companies ‘use data to enhance safety.’ Various initiatives are working to make it easier to do hazard and risk analyses through standard methodologies, i.e. guidance on when to use what tool. The workgroup is also investigating computer-assisted FMEA, FTA and HAZOP analyses and how to ‘build’ maintenance into a facility. Engineering applications can be extended for diagnostics—for instance, process hazard analysis changes can be tied to design documentation.

John Arthur, CTO of Norwegian high-end visualization specialist Octaga, outlined how standards and 3D gaming technology have been used in asset life cycle management. Octaga has built high-end virtual reality environments for use on major capital projects such as Chevron’s Agbami FPSO and StatoilHydro’s Ormen Lange. The ISO X3D standard provides a rich toolkit for industrial visualization. Octaga has combined X3D with domain-specific ISO standards, notably the ubiquitous ISO 15926, to create interactive 3D models coupled with engineering design data. The result is that asset data becomes more accessible to operations and maintenance, and development and prototyping are quicker. X3D provides ‘game quality’ visualization that can be used with CAD models. ECMA scripting provides logic, interactive 2D overlays and cinematic camera paths for fly-throughs. Asset information within the models is searchable by tag.

Don Jacob, VP Engineering with Bluebeam Software, suggested a pragmatic alternative to the full-blown business information model. Jacob described Adobe’s PDF format as the ‘overlooked stepchild’ of new technologies. PDF can ‘grease the wheels’ of engineering projects by providing a natural extension to current work practices. Moreover, few users today are aware of PDF’s rich feature set.

Kopin Corp. was showing its ‘Golden-i’ head-mounted display, which embeds a cell phone, wireless networking and ‘Nuance’ natural speech control. Golden-i displays interactive plant models to field engineers; these can be shared with remote subject matter experts. Target workflows include live daily reports, red-lining, monitoring field work and ‘instant access’ to maintenance data.

* Presentations available on www.oilit.com/links/0905_9.


Sales, contracts and deployments

Moblize, GE, Infosys, Oracle, Ingrain, Ion, Dyadem, Letton-Hall, Nutech, Quorum, Teradata, XWiki.

Moblize reports that Contango Oil and Gas has successfully deployed the new WITSML-based Control Module for its Darp Rig, an appliance/software rigsite WITSML server. The new Control Module enables remote, operations center-based control of drilling operations.

~

StatoilHydro has awarded GE Oil & Gas a five-year, $100 million contract for the provision of subsea operations services. The deal includes engineering, procurement, manufacturing and workshop services for corrective and planned maintenance in subsea drilling operations.

~

Infosys Technologies has teamed with Oracle to provide Weatherford International with a global ‘human capital management’ (HCM) system based on Oracle’s PeopleSoft Enterprise. The HCM deployment is a component of the ‘One Weatherford’ initiative, which aims to double the size of the company within four years. The 18-month contract has automated HR processes across 800 service bases around the world, resulting in multi-million dollar savings.

~

Algerian NOC Sonatrach has signed with Houston-based Ingrain for a license to its ‘digital rock physics’ services.

~

ION Geophysical has signed with Pemex’ Comesa unit for the deployment of an 8,000 station FireFly system. FireFly will be used on three ‘full wave’ projects with ION personnel involved in survey design, data processing and interpretation.

~

Absoft has signed a ‘six figure deal’ to modernize Downhole Products’ IT landscape and integrate its global finance, sales and manufacturing operations. Absoft will also provide ongoing support to the Portlethen, UK-based firm.

~

Safety consultants Kenexis have selected Dyadem’s ‘Stature’ package as the platform for its safety instrumented systems design basis toolkit.

The nonprofit Research Partnership to Secure Energy for America (RPSEA) has awarded the Letton-Hall Group an R&D project to investigate multi-phase, commingled production from multiple deepwater wells.

~

NuTech had a successful 2009 NAPE Expo, signing three wells from Dallas-based Petra Solidus and a ‘rush’ order from Kingwood Exploration of Shreveport for its ‘NuLook’ petrophysical analysis service.

AB Resources has selected P2 Energy Solutions’ Excalibur Energy Management System to handle its back-office needs. Excalibur covers finance, accounting, land and production management. P2ES claims over 200 Excalibur clients in the US.

~

Luminant has licensed Quorum Business Solutions’ PGAS solution which will be used as a validation tool for delivered gas. Quorum PGAS provides gas data management and a central repository for volumetrics and analytical data.

~

Xcel Energy announces that its Teradata Active Enterprise Data Warehouse (EDW) has doubled in size. Teradata is used to monitor business processes and maximize revenue by integrating billing, meter, inventory and other data.

~

Total reports at-scale deployment of XWiki Enterprise Manager. XWiki provides Total Group units with a centrally managed enterprise wiki. Over 6,000 wikis can be managed in a single XWiki instance. Now all Total employees can request a wiki service. XWiki provides a summary display showing available wikis along with maps showing georeferenced content.


Oracle Executive Forum, Houston

Business intelligence for Petrobras, Pride. Silver Creek deal on data quality. Paradigm on ROI.

About a hundred turned up for Oracle’s oil and gas event in Houston last month. Rich Clayton, Oracle’s VP business intelligence, believes that IT spend should be maintained during the downturn, with a focus on information visibility through a consistent, enterprise-wide approach. There is a need to rationalize analytical tools and link financial, upstream and operations information into an ‘oilfield performance management system.’ Clayton has been working with Petrobras where, four years ago, there was ‘no visibility and no single strategy.’ Since then Petrobras has rectified the situation with ‘phenomenal’ results. Time spent on report preparation is down from two days to four hours.

Clayton, who was with Hyperion before its takeover by Oracle, proselytizes in favor of using proper business intelligence tools rather than Excel. Oracle’s answer in this space is Essbase, an analytical tool that offers similar functionality to Excel, but with a database running in the background. Poster child for Essbase deployment is Pride International. Pride uses several Hyperion tools as follows*. Hyperion Financial Management is used for US GAAP consolidations and SEC reporting. Hyperion Planning provides Pride’s annual budget and rolling forecasts. Essbase is used for management and ad-hoc reporting. Excel use has not completely disappeared. Essbase OLAP cubes are accessible from Excel, allowing users to create their own reports and queries. Rather than replacing spreadsheets, Essbase has created a new productive use for them.

Oracle has entered the data quality arena through an OEM agreement with Silver Creek Systems. Oracle’s new Data Quality Cleansing and Matching Server, based on Silver Creek’s DataLens, uses patented semantic technology to analyze, enrich and correct product data from multiple sources. Silver Creek has one oil and gas reference, oil country tubular goods supplier McJunkin.
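By way of illustration only, the toy sketch below shows the kind of rule-based standardization a product-data cleansing and matching server has to perform on oil country tubular goods descriptions before records from different vendors can be matched. It does not represent Silver Creek’s patented semantic technology; the rules and sample strings are hypothetical.

```python
# Toy rule-based standardizer for free-text OCTG product descriptions.
# Illustrative only; not Silver Creek's DataLens semantic technology.
import re

# Hypothetical aliases mapping vendor shorthand to canonical casing grades
GRADE_ALIASES = {"N80": "N-80", "L80": "L-80", "P110": "P-110"}

def standardize(description):
    """Extract size, weight and grade attributes from a raw OCTG description."""
    text = description.upper()
    size = re.search(r'(\d+(?:[- ]\d+/\d+)?)\s*(?:IN|")', text)
    weight = re.search(r'(\d+(?:\.\d+)?)\s*(?:#|LB|PPF)', text)
    grade = next((canon for raw, canon in GRADE_ALIASES.items()
                  if raw in text.replace("-", "")), None)
    return {
        "size_in": size.group(1) if size else None,
        "weight_ppf": float(weight.group(1)) if weight else None,
        "grade": grade,
    }

# Two vendor records that should match once attributes are standardized
print(standardize('CSG 9-5/8" 47# L-80 BTC R3'))
print(standardize("CASING 9-5/8 IN 47 LB/FT L80 BUTTRESS"))
```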

Paradigm CFO Gary Morris described how the downturn was impacting the geophysical/software business. Paradigm is seeing competition for resources between various groups within the company, which is leading to some ‘tough choices.’ Morris notes that calling the shots is hard, as return on investment (ROI) estimates vary within the company and better ways of predicting ROI from digital technology are needed. Paradigm is taking a new approach, working to quantify the ROI of reprocessing seismic data. Compared with a reshoot, or with getting the wrong image, getting the velocity model right can be shown to provide a huge ROI. More from www.oilit.com/links/0905_8.
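The back-of-the-envelope comparison Morris alluded to might look something like the following sketch. All costs and probabilities are hypothetical illustrations, not Paradigm figures.

```python
# Hypothetical ROI comparison: reprocess legacy seismic versus reshoot the survey.
# All numbers are illustrative assumptions.
reshoot_cost = 12_000_000        # assumed cost of reacquiring the survey
reprocess_cost = 1_500_000       # assumed cost of depth reprocessing
dry_hole_cost = 30_000_000       # assumed cost of a well placed on a bad image
p_bad_image_legacy = 0.30        # assumed chance the legacy image misleads well placement
p_bad_image_reproc = 0.10        # assumed chance after the velocity model is rebuilt

expected_loss_legacy = p_bad_image_legacy * dry_hole_cost
expected_loss_reproc = p_bad_image_reproc * dry_hole_cost

benefit = expected_loss_legacy - expected_loss_reproc   # risk reduction from reprocessing
roi = (benefit - reprocess_cost) / reprocess_cost
print(f"Expected benefit of reprocessing: ${benefit:,.0f}, ROI approx {roi:.1f}x")
print(f"Reprocessing costs {reshoot_cost / reprocess_cost:.0f}x less than a reshoot")
```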

* Additional information kindly supplied by Pride International.


Norwegian study of ‘effective and usable’ control rooms

Institute for Energy Technology paper analyzes impact of computerization in new build and revamp.

A recent study* by the Norwegian Institute for Energy Technology (IET) has identified pros and cons associated with the introduction of modern computer-based systems into the control room. The study, of digital revamps of nuclear power plants, may well have implications for similar upgrades of brownfield oil and gas production sites.

A major driver for modernization is the move to advanced technology and computer-based systems in the control rooms. Most legacy instrument and control equipment in nuclear power plants today is analogue. Decreasing part availability and increasing maintenance costs are forcing operators to look to digital control. Digital systems are also expected to provide more cost-effective production. The major challenge of the computer-based control room is in the design of the human machine interface (HMI). The HMI is a prime factor in facilitating operator problem solving. The concern is that the introduction of a new HMI will require a ‘new style’ of operation and a modified form of interaction between team members.

The study investigates worker psychology and the need for information visibility in collaborative problem solving. Systems such as the Westinghouse Wall Panel Information System have shown the benefits of large scale collaborative environments. The IET is conducting similar tests at its own Halden Man Machine Laboratory.

* www.oilit.com/links/0905_1.


BP’s ‘refiners of the future’ report mega savings with PlantTriage

Frontiers Magazine article claims annual cost savings of $1-2 million per refinery.

The latest issue of BP’s online Frontiers magazine* reports that the company’s ‘Refinery of the Future’ team has been working with control systems specialist Expertune on the deployment of its PlantTriage performance supervision system. PlantTriage connects to existing control systems and monitors control loop and equipment health. Project lead Lakshman Natarajan was quoted as saying, ‘We have customized parts of PlantTriage to our particular needs. For example, the software’s communications protocol was strengthened to meet our IT security standards. This and other enhancements will be incorporated into future product releases.’
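As a rough illustration of what control-loop performance supervision involves, the sketch below scans historian data for one loop, looking for output saturation and sustained oscillation. It is a generic example under assumed thresholds and does not reflect PlantTriage’s actual algorithms.

```python
# Generic control-loop health indicators from historian samples.
# Not Expertune's PlantTriage algorithms; thresholds and data are illustrative.
from statistics import mean, stdev

def loop_health(setpoint, pv, output, out_hi=100.0, out_lo=0.0):
    """Return simple health indicators for one control loop."""
    error = [sp - p for sp, p in zip(setpoint, pv)]
    # Fraction of samples with the controller output pinned at a limit
    saturated = sum(1 for u in output if u >= out_hi or u <= out_lo) / len(output)
    # Crude oscillation index: rate of zero crossings of the mean-removed error
    e0 = [e - mean(error) for e in error]
    crossings = sum(1 for a, b in zip(e0, e0[1:]) if a * b < 0)
    return {
        "error_stdev": stdev(error),
        "pct_saturated": 100 * saturated,
        "zero_crossings_per_sample": crossings / len(e0),
    }

# Toy data: a loop hunting around its setpoint
sp = [50.0] * 20
pv = [50 + (3 if i % 2 else -3) for i in range(20)]
out = [60 + (20 if i % 2 else -20) for i in range(20)]
print(loop_health(sp, pv, out))
```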

With PlantTriage deployed in seven of its eleven refineries, BP is now a leading user of the technology. Benefits of $1-5 million per year per refinery are being realized. Oil IT Journal reported on another successful PlantTriage deployment, by Iberdrola, in our report from the 2008 OSIsoft User Group (OITJ November 2008).

* www.oilit.com/links/0905_2.


Emerson wireless for Gullfaks retrofit

Clip-on wireless temperature sensors provide real-time alerts to well pressure loss.

Emerson Process Management’s Smart Wireless network has been deployed by StatoilHydro to automate flow monitoring and increase production from 90 wells on the three platforms that constitute the Gullfaks field in the Norwegian North Sea. The WirelessHart-based network was chosen because of its ease of deployment and was installed without interrupting flow. Wireless devices now transmit real time temperature data that allows for quick reaction to loss of well pressure. The deployment follows a successful trial on StatoilHydro’s Grane platform.

StatoilHydro was intermittently losing flow from Gullfaks producers. Remedial action was dependent on early detection of flow losses. The solution was to deploy Emerson’s Rosemount 648 wireless temperature transmitters on individual well flow lines. The clamp-on sensors were deployed without interruption to production.

StatoilHydro’s project manager Anders Røyrøy said, ‘Wireless offers an inherent reduction in cabling infrastructure, complexity and weight, resulting in significantly lower installation costs. Emerson’s Smart Wireless mitigates the impact of radio interference and data reliability is 100%.’ Now, instead of once-per-shift manual recordings, StatoilHydro gets temperature readings every 30 seconds. The Smart Wireless Gateway is hardwired into the control room providing operators with the real time information they need to react quickly to any change in flow. Røyrøy commented, ‘Rich process information and plant diagnostics are essential for unmanned operation.’ More from www.emerson.com.
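The alerting logic such a deployment enables can be sketched as follows: flag a well when its flowline temperature trends toward ambient, a sign that flow has been lost. The ambient value, margin and sample data are hypothetical assumptions, not StatoilHydro’s configuration.

```python
# Generic flow-loss alert from clamp-on flowline temperature readings.
# Thresholds and samples are hypothetical; not StatoilHydro's settings.
AMBIENT_C = 8.0          # assumed North Sea ambient temperature
FLOWING_MARGIN_C = 15.0  # assumed margin above ambient for a flowing well

def flow_lost(readings_c, window=10):
    """Return True when the last `window` 30-second readings sit near ambient."""
    recent = readings_c[-window:]
    return len(recent) == window and all(t < AMBIENT_C + FLOWING_MARGIN_C for t in recent)

# 30-second samples from one transmitter: flowing steadily, then cooling off
samples = [62.0] * 20 + [40.0, 30.0, 22.0, 18.0, 15.0, 12.0,
                         10.0, 9.5, 9.0, 8.8, 8.6, 8.5]
for i in range(len(samples)):
    if flow_lost(samples[: i + 1]):
        print(f"Alert: possible loss of flow after sample {i} ({samples[i]} degC)")
        break
```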


RFID round-up, Merrick Systems, Mojix

Merrick’s kit includes high temperature/pressure ‘diamond’ tags. Mojix targets onshore inventory.

Two RFID-related announcements this month from Merrick Systems and Mojix. Merrick has unveiled its new RFID engineering design kits that let oil and gas companies incorporate Radio-Frequency Identification (RFID) technology into their production and drilling operations. Kits include RFID tags, equipment and installation instructions for easy implementation of asset-tracking for almost any oil and gas asset. The kits leverage Merrick’s ‘Diamond Tags’ that enable RFID deployment in high temperature and pressure environments. Diamond Tags can be attached to drill pipe, risers, BOPs, manifolds, valves, diving equipment and many other assets. The ATEX-certified tags operate at pressures up to 20,000 PSI and temperatures up to 1,210°C. More from merricksystems.com.

Mojix’ Insight is a passive RFID real-time location system (RTLS) that locates and tracks goods and assets in warehouses, storage yards and corporate facilities. Mojix Insight integrates with enterprise systems and provides dynamic business logic, real-time imaging and mapping of assets. More from www.mojix.com.


Marathon deploys BWise in global compliance and risk effort

Governance, risk and compliance toolset tracks and tests company’s internal controls.

Marathon Oil has selected a governance, risk and compliance (GRC) solution from New York-based BWise to ensure regulatory compliance and reduce costs. Marathon was seeking a solution to merge multiple compliance initiatives into a standard, transparent process. BWise tracks business and compliance processes and tests and reports on enterprise-wide controls and risks. Anne Hunt, Marathon’s director of IT compliance said, ‘Our objective was one Marathon compliance process, utilizing an enterprise tool to inventory controls, test their effectiveness, remediate issues, and report to management.’

BWise is addressing the convergence of disparate GRC solutions by integrating risk management and compliance initiatives into a single framework. In a converged solution, a control is tested once and used many times for different regulatory reports, and risks of diverse provenance can be rolled up rationally. BWise documents business processes at all levels, from strategic to transactional, along with risks and controls. BWise handles Sarbanes-Oxley, Solvency II, Basel II, MiFID, PCI, GLBA, NAIC MAR, HIPAA and others. More from www.bwise.com.
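The ‘test once, report many’ idea can be illustrated with the minimal sketch below, in which a single control test result feeds every regulation that cites the control. Control names and mappings are hypothetical, not BWise’s data model.

```python
# Sketch of 'test once, report many' in a converged GRC framework.
# Control names and regulation mappings are hypothetical illustrations.

# One control can satisfy requirements in several regulatory frameworks
CONTROL_TO_REGS = {
    "segregation-of-duties": ["Sarbanes-Oxley", "Basel II"],
    "access-review-quarterly": ["Sarbanes-Oxley", "HIPAA", "PCI"],
    "change-management": ["Sarbanes-Oxley", "PCI"],
}

def roll_up(test_results):
    """Aggregate pass/fail results of each control into per-regulation status."""
    status = {}
    for control, passed in test_results.items():
        for reg in CONTROL_TO_REGS.get(control, []):
            if not passed:
                status[reg] = "deficiency"
            else:
                status.setdefault(reg, "effective")
    return status

# Each control is tested exactly once...
results = {"segregation-of-duties": True,
           "access-review-quarterly": False,
           "change-management": True}
# ...and the single result feeds every applicable regulatory report
print(roll_up(results))
```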


Dresser Wayne announces new retail solution

Notebook sized appliance future-proofs the forecourt and improves business visibility.

Austin, TX-based Dresser Wayne has released the Fusion Forecourt System (FFS), a petroleum retail management appliance that decreases system complexity and ‘future-proofs’ the forecourt by providing improved visibility of key systems and inventory. FFS integrates fuel pumps, card terminals and tank gauges and interfaces with forecourt devices, fuel equipment and point-of-sale systems from multiple vendors. FFS reduces the number of controllers and appliances needed to operate a multi-device, multi-vendor forecourt.

Dresser Wayne CTO Dan Harrell said, ‘Less complexity and better visibility of business dynamics mean more profits. FFS ties disparate systems together, letting retailers maximize system capabilities and make the most of their investments.’ FFS is a notebook-sized appliance engineered to withstand the harsh temperatures, airborne particles, small spaces and 24/7 processing requirements of fuel sites around the world. FFS provides reporting capabilities for enterprise-level sales, operations, inventory and equipment management. More from www.dresserwayne.com.


GE Oil & Gas/VetcoGray teams with SPT Group

Companies aim for ‘digital oilfield’ solution for remote closed loop control and monitoring.

GE Oil & Gas unit VetcoGray is teaming with SPT Group to provide enhanced online flow assurance solutions. The partnership involves tighter integration of VetcoGray’s SmartCenter and SPT Group’s OLGA-based simulators. SmartCenter is a remote condition monitoring and diagnosis facility located at VetcoGray’s service hub in Nailsea, UK. In the longer term, the companies plan to develop common user interfaces and new technologies for closed loop control and condition monitoring to improve subsea operations.
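A minimal sketch of what ‘online’ flow assurance implies is a deviation check between a live measurement and the value an OLGA-type simulator predicts for the same conditions. The tolerance and pressure values below are assumptions for illustration, not SmartCenter logic.

```python
# Generic measured-versus-simulated deviation check for online flow assurance.
# Tolerance and example values are assumptions, not SmartCenter's logic.
def check_deviation(measured_bar, simulated_bar, tolerance_pct=5.0):
    """Return an alert string when measurement and model disagree beyond tolerance."""
    deviation = 100.0 * abs(measured_bar - simulated_bar) / simulated_bar
    if deviation > tolerance_pct:
        return f"ALERT: inlet pressure deviates {deviation:.1f}% from model"
    return f"OK: deviation {deviation:.1f}% within {tolerance_pct}% tolerance"

print(check_deviation(measured_bar=118.0, simulated_bar=110.0))  # hypothetical values
print(check_deviation(measured_bar=111.5, simulated_bar=110.0))
```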

VetcoGray CTO Dean Arnison said, ‘The oil and gas industry is increasingly focused on remote operations and digital oilfield solutions. This deal builds on our experience in subsea control systems and online flow assurance technology to provide a key component of the e-field.’

SPT Group CEO Tom Even Mortensen added, ‘Our aim is to develop our partners’ capabilities of integrating OLGA-based simulators and moving to real-time use.’ Located in Oslo, Norway, SPT Group also markets Drillbench and MEPO. More from www.ge.com/oilandgas and www.sptgroup.com.


SmartSignal announces ‘threat-based maintenance’ methodology

New approach frees staff from searching for ‘needles in the haystack’ of real-time condition monitoring data.

Lisle, Ill.-based SmartSignal has introduced a new concept in equipment maintenance dubbed ‘threat-based maintenance’ (TBM). According to SmartSignal CTO David Bell, ‘Current maintenance methods overwhelm a shrinking pool of maintenance engineers with volumes of data and alarms. These result in surprises, unplanned outages, and excessive maintenance costs.’

TBM delivers high level analytics based on existing investments in data infrastructure including sensors, DCS, historian, vibration and oil analyses. The analytics provide earlier detection, diagnosis and proactive interruption of the probability of failure curve. This in turn frees staff from looking for needles in the haystack, allowing them to act before damage is done. SmartSignal-enabled TBM reduces unplanned outages, unnecessary planned maintenance, and maintenance costs. Bell claims ‘TBM creates a paradigm shift that enables a company to gain significant value from implementing proactive maintenance.’
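As a rough illustration of threat ranking on top of existing condition data, the sketch below scores each asset by how far recent readings drift from an expected operating baseline and surfaces the worst offenders first. It is a generic example with hypothetical asset names and values, not SmartSignal’s proprietary analytics.

```python
# Generic threat ranking from condition-monitoring data.
# Asset names, baselines and readings are hypothetical; not SmartSignal's method.
from statistics import mean

def threat_score(recent, baseline_mean, baseline_tol):
    """Normalized drift of recent readings from the expected operating baseline."""
    return abs(mean(recent) - baseline_mean) / baseline_tol

# Hypothetical bearing-temperature baselines (mean, tolerance) and recent readings
fleet = {
    "export compressor A": ((75.0, 5.0), [74.8, 75.2, 76.0, 75.5]),
    "export compressor B": ((75.0, 5.0), [82.0, 83.5, 85.1, 86.0]),
    "injection pump 3":    ((60.0, 4.0), [61.0, 60.4, 59.8, 60.2]),
}

# Rank assets so the most threatened equipment is looked at first
ranked = sorted(
    ((name, threat_score(readings, *baseline)) for name, (baseline, readings) in fleet.items()),
    key=lambda item: item[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: threat score {score:.2f}")
```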

TBM was developed from SmartSignal’s experience gained on a worldwide customer base of over 10,000 assets across the power generation, oil and gas, and aerospace verticals. More from www.smartsignal.com.

