December 2009


Digital Oilfield in the Cloud

Petrotrek Online eschews Amazon to deploy on Microsoft Azure. Under the hood are Bing Maps, IHS data services and ‘Geneva’ claims-based access control. 90% on-site cost savings claimed.

Speaking at the Microsoft Professional Developers Conference last month, Thinh Pham, senior architect at The Information Store (iStore), fleshed out the new ‘Digital Oilfield (DO) in the Cloud’ offering, a.k.a. ‘Petrotrek Online,’ that we touched on last month. The Petrotrek DO offering is already deployed at larger client sites including BP, Chevron, Pemex, Shell Nigeria and RasGas, but the move to cloud computing is intended to offer digital oilfield functionality to midsize and smaller independents without major IT infrastructure.

Previously, iStore had a version of Petrotrek running on Microsoft Office SharePoint Server 2007 (MOSS—see our interview with iStore CEO Barry Irani and CTO Oscar Teoh, February 2009), which simplified the migration to Microsoft’s hosted Azure ‘cloud.’ Along with the data/software hosting service, Petrotrek’s developers have extended the solution with Microsoft’s ‘Silverlight’ rich browser GUI and Microsoft’s Bing Maps enterprise search and hosted GIS. Bing Maps is used to ‘mash up’ data sources including IHS’ production and well databases. Access to sensitive data of diverse ownership and provenance is controlled via Microsoft’s ‘Geneva Server.’ Geneva extends Microsoft’s Active Directory Domain Services to provide ‘claims-based’ access control, supporting the WS-Trust and SAML protocols. Geneva reduces the need for duplicate accounts and other credential management overhead by enabling federated single sign-on across organizations, platforms and applications. Geneva can also generate and manage Windows CardSpace virtual card identities.
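By way of illustration, the claims-based pattern replaces per-application user accounts with signed assertions from a trusted identity provider. The minimal Python sketch below shows the idea only; the names, the shared-secret trust and the claim string are our invention, not Geneva’s actual API or token format (Geneva issues WS-Trust/SAML tokens, not this toy JSON):

# Illustrative sketch of claims-based authorization (not Geneva's actual API).
# An identity provider issues a signed token of claims; the application
# grants access based on claims, not on locally managed accounts.
import hashlib, hmac, json

IDP_KEY = b"shared-secret-with-identity-provider"  # hypothetical trust relationship

def issue_token(subject, claims):
    """Identity provider side: sign a set of claims about a user."""
    payload = json.dumps({"sub": subject, "claims": claims}, sort_keys=True)
    signature = hmac.new(IDP_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def authorize(token, required_claim):
    """Relying application side: verify the signature, then check the claims."""
    expected = hmac.new(IDP_KEY, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False                      # token not issued by a trusted party
    claims = json.loads(token["payload"])["claims"]
    return required_claim in claims       # e.g. 'may-view-production-data'

token = issue_token("geologist@partner.example", ["may-view-production-data"])
print(authorize(token, "may-view-production-data"))  # True

The point is that the relying application never stores partner credentials; it only needs to trust the token issuer, which is what lets data of ‘diverse ownership and provenance’ be shared across organizational boundaries.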

Alongside Azure, iStore evaluated Amazon’s Elastic Compute Cloud (EC2) hosting offering. Pham said, ‘We felt that the Windows Azure platform had more to offer than Amazon EC2 because it is a total platform. In addition to the operating system, Microsoft SQL Azure extends storage to the cloud. With Azure we also have peace of mind knowing that Microsoft is maintaining the image. All we have to do is deploy the software and run it. It’s basically foolproof.’

CSC assisted iStore in the port of Petrotrek to the cloud, leveraging its participation in the Windows Azure Technology Adoption Program. CSC has optimized Petrotrek for Azure.

Azure provisions compute and storage resources on demand. Where iStore previously spent a couple of months on on-site infrastructure work, this is now cut down to ‘a matter of days.’ Pham concluded, ‘If we compare the upfront cost of the two deployment models, I would estimate that using Windows Azure could save customers as much as 90% of on-site costs.’ iStore CTO Oscar Teoh added, ‘With Azure, we know the data is in good hands because it’s hosted by Microsoft. Microsoft is our IT department—it doesn’t get much better than that.’ More from istore.com.


Free e-business

Amalto’s free OnGet client translates Excel to PIDX and offers peer-to-peer e-business transactions and trading hub connectivity.

Amalto Technologies has announced ‘OnGet,’ a ‘free’ business-to-business (B2B) e-commerce platform for oil and gas operators and suppliers. OnGet supports the secure exchange of business documents such as invoices, purchase orders and field tickets.

Users can download the client software from onget.net and be doing e-business ‘within minutes.’ OnGet digitally signs and encrypts all transactions and attached documents. The system also translates Excel-formatted documents to PIDX-formatted invoices.
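To make the translation step concrete, here is a minimal Python sketch of the spreadsheet-to-XML mapping involved. It is purely illustrative: the element names are invented rather than taken from the normative PIDX invoice schema, and it reads CSV-exported rows instead of a real Excel file:

# Illustrative sketch of spreadsheet-to-PIDX-style translation (element names
# are made up for illustration, not the normative PIDX invoice schema).
import csv, io
import xml.etree.ElementTree as ET

rows = io.StringIO(
    "line,description,quantity,unit_price\n"
    "1,Field ticket 4711,1,1250.00\n")

invoice = ET.Element("Invoice")                 # hypothetical root element
for row in csv.DictReader(rows):
    item = ET.SubElement(invoice, "LineItem")
    for field, value in row.items():
        # 'unit_price' becomes a <UnitPrice> element, and so on
        ET.SubElement(item, field.title().replace("_", "")).text = value

print(ET.tostring(invoice, encoding="unicode"))

A production translator would of course validate against the published PIDX schemas and add the signing and encryption steps described above.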

Business can be conducted on a peer-to-peer basis between trading partners using OnGet clients. Alternatively, OnGet can connect to trading hubs or directly to large organizations’ IT systems. This combined model provides full interoperability between trading partners regardless of their size.

OnGet technology was originally developed by Amalto to connect Chevron and GE Oil & Gas to their supplier networks. The Amalto-sponsored LinkedIn Oil & Gas e-Transaction Forum recently passed the 10,000 member mark. More from amalto.com.


Geology rears its ugly head in subsea engineering mishap

Oil IT Journal editor gets out and about—at SMi’s Production Optimization masterclass and GE Oil & Gas’ Smart Center. Researching a piece on subsea production, he stumbles upon the ‘Tordis incident,’ a fascinating tale of high tech engineering, ‘faulty’ geology and perceived CO2 injection risk. Reminiscing on his formative years as a seismic interpreter in the North Sea, he concludes that it’s the things we take for granted that are likely to surprise—especially when geology is involved.

In my early days as a seismic interpreter I was stuck on a completely intractable problem. A horizon—I think it was the top Palaeocene—refused to tie across to the next well—and after a few hours cussing and checking, I called in my boss. He cussed and checked some more and then, to my relief, agreed with me that ‘we had a problem.’ The problem was that we had a certain idea of the marker as a single volcanic event—and the seismic just refused to cooperate with this accepted view of the geology.

My boss was something of a renaissance man and got on the phone to call in some support. Over the next few weeks, well cuttings were retrieved and analyzed by company petrographers and palynologists (do these still exist?). After a couple of months, it was determined that the volcanic marker found in the well was a different event. In fact, contrary to our world view, there were two volcanic events in the Palaeocene. The seismics (and I) were ‘right.’

Of course, if I had been wrong, I would probably not be telling you this story. But I am not relating it just in a spirit of self-aggrandizement—it is also a great example of what we might call today a ‘holistic’ workflow. Although the silo boundaries crossed were all inside the ‘geo’ domain, there was integration and great organization in action. I was impressed by the company I was working for—rather unusual for a twenty-something.

Events this month made me remember this formative event. While attending the SMi/TNO ‘Masterclass’ on Production Monitoring (a report will appear in next month’s Oil IT Journal) we were set an exercise that involved retrofitting a monitoring system to a subsea development that was experiencing slugging problems (big bubbles of gas collecting in the gathering system and blasting dangerously large liquid slugs up the riser).

Not having an engineering background, I found this a tricky problem, involving musings as to how much data you might expect to monitor on a subsea well and how much control over it you might expect to have. I imagined there would be some measure of pressure, maybe temperature, maybe flow and choke position. Although my problem-solving colleague knew a bit more than I did about production monitoring, in the end our answer was more buzzword bingo than real-world solution.

Fortuitously, the next day I travelled west to Bristol to visit GE Oil and Gas’ Vetco Gray unit, which manufactures subsea control systems for the ‘digital oilfield.’ (A report from Nailsea will appear in a future edition of Oil IT Journal.) There I learned that a modern subsea control system streams thousands of data points per day to the topside and perhaps on to the shore. Interestingly, most operators currently use only a small subset of this information. I sensed a great story in development for a future issue on the potential to use much more of this data.

A good place to start seemed to be the ‘monster’ contract for subsea computers that, as we reported in our July 2009 issue, Statoil (the company dropped the ‘Hydro’ tag last month) awarded to GE for the Tordis Vigdis Controls Modification (TVCM) revamp. As I am sure many of you know, Tordis is a groundbreaking subsea development in that it uses a novel system of subsea separation of water from the oil. The world’s first full-scale commercial subsea separation, boosting and injection system was installed by FMC Technologies in 2007. The Tordis improved oil recovery (IOR) project was a state-of-the-art development that relied on high-end hardware such as multiphase meters, control systems and communications. Tordis IOR was estimated to increase recovery from 49% to 55%, around 35 million bbls of extra oil.

And yet there was a problem on Tordis. In May 2008 it emerged that the produced water, rather than being injected into the Tertiary Utsira formation, was leaking to the seabed. Sonar imagery showed a magnificent pockmark where the water reached the seabed. Statoil launched an internal investigation into the incident, which concluded that the Utsira reservoir was absent in the Tordis area. A report from the Norwegian Petroleum Directorate (NPD), ‘Faulty Geology Halts Project,’ published last month, confirmed the new interpretation.

Meanwhile, unfortunately, with the Copenhagen summit approaching, Greenpeace seized on the Tordis incident to warn of the potential danger of CO2 sequestration. Statoil has an ongoing carbon capture and storage (CCS) project on Sleipner East, some 300km south of Tordis.

NPD noted, however, that ‘the Tordis leak cannot be used as a general argument against storage in the Utsira since this formation is not actually present in the area.’ Statoil confirmed that ‘no leakage of CO2 is ongoing or is to be expected from the Sleipner CO2 injection project.’

I think this is a great story about the hard realities of oil and gas production and the obligatory nature of cross-silo integration. Who would have thought that a flagship subsea engineering project would come unstuck because of a mis-picked Tertiary horizon? While it is easy to find fault with hindsight, it is often the things we take for granted, like the nature of the top Palaeocene or the existence of the Utsira reservoir, that are most likely to surprise.

On the positive side, the Tordis incident demonstrates the merit of openness from the operator and an apparently good, if robust, relationship with the regulator.

Have a great holiday break!

More on the Tordis incident on www.oilit.com/links/0912_1. The NPD report is available at links/0912_2.


Oil IT Journal Interview—Ali Ferling, Paul Nguyen, Microsoft

Ali Ferling, MD Oil and Gas and Paul Nguyen, Oil & Gas CTO speak of their aim to make oil and gas a Microsoft ‘stronghold’—through a partner ‘ecosystem’ and an upstream ‘reference architecture.’

Ali Ferling—My mission is to take what Microsoft has done in oil and gas in the US to the rest of the world, driving standardization and innovation, and to make oil and gas a Microsoft stronghold. We are working with our alliance partners—systems integrators like Accenture, Wipro and Infosys, and software vendors like Schlumberger and Halliburton. We also have a significant downstream involvement. There are over 400 partner companies in our overall oil and gas ‘ecosystem.’ Oil and gas is one of Microsoft’s fastest growing industries, with activity in 70 countries; globally, around 700 Microsoft people spend a big part of their time on the vertical. Microsoft is not into acquisitions in this space. We prefer to work with partners—Schlumberger’s Merak, for instance, now includes proven Microsoft components: MOSS 2007, Communications Server, SQL Server.

Sure, but with a reported 20,000-license count for Petrel, this is not much of a market for Microsoft!

AF—OK but Schlumberger’s Petrel runs on Vista and soon on Windows 7—showing the power of the OS and integration of the desktop with high performance computing (HPC). It is driving mindshare more than revenue.

In the past we have criticized Microsoft’s claims to ‘HPC dominance.’ What exactly do you mean by HPC here?

AF—You are right that we are not in the seismic processing space, intentionally. But we are in cluster-based reservoir simulation—where we are addressing scalability. In this space our offering is comparable to our competitors’. Our sweet spot is in HPC collaboration and connectivity tools.

What of the E&P reference architecture?

AF—We joined Energistics with the clear objective of supporting the upstream standards, namely ProdML and WITSML. But Energistics is not our first standards body. We started collaborating with MIMOSA during a Statoil Integrated Operations project. This covered condition-based maintenance in refining and petrochemicals. We developed the Microsoft Manufacturing Toolkit to provide guidance on maintenance scenarios leveraging our technology.

Paul Nguyen—All this rolls into what is now our reference architecture approach. We have been working with our Microsoft Utility teams around a performance-oriented infrastructure—rolling in economics, cost effectiveness and a holistic live user experience. This architecture also aligns with our ‘three screens and a cloud’ vision, where software is seamlessly delivered across PCs, phones and TVs, all connected by cloud-based services. We are also bringing network optimization experience from manufacturing to oil and gas with our ‘sensor to server’ offering. This enables our partners to plug in to the architecture and build solutions around it. We are very much into interoperable data—hence our joining Energistics. We will be working with partners to tune our oil and gas reference architecture—which will be published.

AF—The plan is to offer the application platform—the ‘plumbing’—and let partners build applications such as Petrel as line-of-business solutions. We take key industry specs and add value by building them into an ‘out of the box’ integration framework for our partners. This lets our partners focus on their business—not on technology integration.

It is easier to diagram the standards landscape than to integrate with it! In the SERA position paper you mention ‘taxonomies’ but this kind of work has been attempted before in the upstream and proven rather intractable.

PN—Sure, but you should also check out what has been achieved with the Microsoft Manufacturing Toolkit. This architecture has evolved and can now ‘plug and play’ with other industry standards. It is a service model that leverages other industries’ expertise from our ISV partner ecosystem to deliver modules that plug into the ‘plumbing,’ which supports not just information exchange, but also business rules enforcement and context awareness for interactions between systems. The Manufacturing Toolkit was released in May 2009. It provides guidance documentation and working code samples for people who want to connect systems that use the MIMOSA and OPC/OPC-UA specifications using Microsoft technology. This is a freely available published spec—not a commercial offering.

AF—We need more oil and gas specific protocols and these are under development. We will publish what we come up with. You will be hearing more from us! Our ‘3 screens plus cloud’ paradigm will for instance allow equipment manufacturers to monitor their equipment remotely.

But the big rotating machinery folks do this already…

AF—Yes but this will democratize the approach—make it available to all.

So the future is the cloud?

AF—These architectures support our offering, which can be in the cloud, such as our Azure platform, or the traditional way on-site, on Windows Server, or a blend of both. The oil and gas industry is ready to move to the next level of optimization—where the whole industry value chain will be optimized, not just its components. This is our vision. And the key to this will be our software plus services strategy.

PN—We have some great examples from key technology providers. The early work on the Digital Oilfield focused on custom service-oriented solutions for larger customers. With partners like The Information Store (iStore) we have delivered on-premises solutions to our larger clients (majors). This left out the midsize independents to a large degree. But at the PDC* in Los Angeles this month iStore announced that it will be offering an online Digital Oilfield solution—all hosted on Windows Azure. This will provide a subscription-based solution for asset management in the cloud. We have also been working with a major customer on scenarios that keep all of its data inside the firewall—but still offer hosted applications to users around the world. This leads to a solution with data inside the firewall and applications in the Azure cloud.

AF—If you are interested in exploring the subject further, read the case study** and watch the video*** on iStore’s Petrotrek Online solution.

See also this month’s lead on Petrotrek Online.

* Professional Developers Conference.
** links/0912_3
*** links/0912_4


Total’s high performance computing—GPU accelerators

SGI’s ICE+ GPU-based supercomputers and CAPS Enterprise’s HMPP provide major speedup.

Speaking at the French ORAP high performance computing event earlier this year, and later at the SEG, Total’s HPC guru Henri Calandra revealed how GPU-based accelerators and SGI hardware are contributing to its seismic imaging effort. Calandra presented a roadmap of algorithmic complexity and compute horsepower. Today, state of the art for Total is a sub-petaflop machine running anisotropic reverse time migration. But before 2020 we can expect hundred-petaflop machines running full waveform visco-elastic inversion at higher frequencies.

Total has seen a huge increase in its compute power this year with the arrival of an SGI ICE+ MPP machine, which is being extended with Nvidia GPUs and currently offers 450 teraflops. The SGI ICE+ houses a low latency hypercube interconnect, necessary because, although seismic code is ‘embarrassingly parallel,’ Amdahl’s ‘law’ works the other way in that ‘only 99.99% parallel code is scalable.’
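Calandra’s 99.99% figure is worth unpacking. Amdahl’s law bounds the speedup S of a code whose parallel fraction is p when run on N cores:

\[ S(N) = \frac{1}{(1-p) + p/N} \;\le\; \frac{1}{1-p} \]

A back-of-envelope application to a machine of roughly the ICE+’s core count (our arithmetic, not Total’s): with p = 0.99 the speedup can never exceed 100 whatever the core count, and on some 16,000 cores S is about 99, under 1% parallel efficiency. With p = 0.9999 the same machine delivers S of roughly 6,200, a far more respectable 38% efficiency.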

The ICE+ has 16,284 Intel Harpertown cores with 2GB memory/core and 256 Nvidia GPUs. A ‘fat tree’ hypercube interconnect links all groups and can scale to 32,000 nodes. The interconnect is the key to efficient implementation of the 3D wave equation finite difference solver. According to Calandra, ‘SGI’s hypercube topology offers a scalable single system that is easy to manage.’ The GPU is considered a very promising route to RTM—with a 35x advantage over a single Intel core. As has been noted elsewhere (see our reports from the SEG HPC session and SC09 in this edition), all this comes at the expense of increased programming complexity. Here Total leverages CAPS Enterprise’s HMPP (Oil ITJ April 2008) to overcome the difficulty of adapting its code base to multi-core/GPU architectures. More from sgi.com and caps-enterprise.com.
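For readers unfamiliar with the kernel in question, below is a minimal NumPy sketch of one time step of a second-order 3D acoustic wave-equation finite-difference scheme, the sort of stencil computation that RTM codes port to GPUs. It is purely illustrative: grid size, velocity and time step are arbitrary (though CFL-stable), and production codes use high-order stencils, absorbing boundaries and anisotropic physics.

# Illustrative one-step 3D acoustic wave-equation finite-difference scheme.
import numpy as np

n, v, dt, dx = 64, 2000.0, 0.001, 10.0           # grid points, m/s, s, m
prev = np.zeros((n, n, n)); cur = np.zeros((n, n, n))
cur[n // 2, n // 2, n // 2] = 1.0                # point source at the center

def laplacian(p):
    """Second-order 7-point Laplacian on the interior of the grid."""
    lap = np.zeros_like(p)
    lap[1:-1, 1:-1, 1:-1] = (
        p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1] +
        p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1] +
        p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2] -
        6.0 * p[1:-1, 1:-1, 1:-1]) / dx**2
    return lap

for _ in range(10):                              # time-stepping loop
    nxt = 2.0 * cur - prev + (v * dt)**2 * laplacian(cur)
    prev, cur = cur, nxt

Each output point touches six neighbors, so the computation is bound by memory traffic as much as by flops, which is why interconnect and memory architecture dominate the discussion above.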


Open Geophysical—open source seismic processing

Startup’s OpenCPS offers an open source-based toolkit and API for JavaSeis-based processing.

Houston-based startup Open Geophysical is setting out to leverage open source software for seismic data processing. The company has installed alpha versions of its Bluefin Tools API and OpenCPS toolset at customer sites. Bluefin is an open-source processing engine and algorithm framework while OpenCPS is a commercially licensed product.

OpenCPS includes interactive processing; C++, Java and Fortran 90 APIs; 2D/3D visualization; and job replication for parallel processing.

OpenCPS leverages ConocoPhillips’ JavaSeis (OITJ October 2007) open source seismic processing environment, adding interactive processing capabilities and data set navigation. A ‘spreadsheet-like’ view of processing job templates shows key variables and job and host computer status. More from opengeophysical.com.


Statoil pilots SBED—Petrel plug-in announced

Geological modeling tool said to better capture fine-scale heterogeneities for fluid flow studies.

Statoil is piloting Geomodeling Technology’s SBED modeler with the aim of making it a component of its ‘multi-scale’ modeling workflow. According to Geomodeling, Statoil geologists and engineers have been able to capture the fluid flow impact of complex, fine-scale inter-layering within heterogeneous reservoir units, improving property simulations and reserve predictions.

Statoil principal researcher Alf Birger said, ‘We have successfully used SBED for several years to develop more accurate characterizations of our fields on the Norwegian Continental Shelf. Ongoing pilot studies with SBED on thin-bedded, heterogeneous reservoirs have shown excellent results—we expect to incorporate SBED into our standard workflows for reservoir characterization across all of our business units.’

SBED was developed by a consortium of major oil and gas companies. The package bridges the gap between pore-scale and full-field models. Statoil researchers have successfully generated effective porosity, absolute permeability and two-phase relative permeability values from SBED for input to large-scale reservoir simulations and reserve estimations. Results from SBED models are providing more realistic distributions of rock properties in reservoir intervals and more accurate reserve calculations and production profiles.
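To see why fine-scale layering matters to upscaled properties, consider the textbook averaging rules for equal-thickness layers (a standard illustration, not SBED’s actual algorithm): effective permeability is the arithmetic mean for flow along the layers and the harmonic mean for flow across them.

\[ k_{\parallel} = \tfrac{1}{2}\,(k_1 + k_2), \qquad k_{\perp} = \frac{2}{1/k_1 + 1/k_2} \]

For laminae of 1,000 mD sand and 10 mD silt, k parallel is about 505 mD while k perpendicular is under 20 mD, a 25:1 anisotropy that a single averaged block value would miss entirely.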

This month Geomodeling released SBED 4.0 and SBED 4.0 for Petrel. SBED 4 is a new, cross-platform, Windows and Linux version. SBED for Petrel leverages the Ocean API to transfer facies and well data from Petrel to SBED. More from geomodeling.com.


Exprodat announces ESRI ‘bridge’ for SMT Kingdom Suite

Bi-directional ArcMap link lets interpreters ‘take control’ of connectivity, audits data provenance.

Exprodat has released ‘Team-GIS KBridge,’ (TGKB) a GIS-enabler for Seismic Micro-Technology’s Kingdom interpretation suite. TGKB transfers Kingdom well, seismic and interpretation data to and from ESRI’s ArcMap. TGKB is a ‘low cost’ alternative to heavyweight data integration, enabling end-users to ‘take control of data connectivity rather than rely on enterprise level connections.’

Exprodat CTO Chris Jepps said, ‘Getting data from E&P interpretation suites into GIS can be challenging. TGKB lets users import project data into ArcMap as a shapefile or geodatabase. Data is automatically tagged with ArcGIS metadata, providing an audit trail of data provenance.’

TGKB is tightly integrated with Exprodat’s KWeb, allowing GIS users to drill-down from ArcMap to an overview of Kingdom project data via a web-browser. Team-GIS is a suite of ArcGIS Desktop extensions that provide ‘out-of-the-box’ functionality designed specifically for the petroleum sector. More from exprodat.com.


Software, hardware short takes

IHS, INT, Midland Valley, Petrosys, SeisWare, Tecplot RS, SmartSignal, Dexter Magnetic, Parallel.

IHS has announced GeoSyn, a seismic modeling tool for synthetic generation, AVO analysis and 2D impedance modeling.

INT has released INTViewer 4.0 now based on the Netbeans Java integrated development environment. INTViewer offers a cross platform API for plugin development, GIS functionality including support for ESRI shape files and EPSG projections and GoCad support.

Midland Valley is shipping Move 2010 with a new toolbox for 2D/3D/4D integration, enhanced workflows, a ‘MovePetrel’ link and integration with Google Maps in 4DMove.

Petrosys v16.7 sees the introduction of checkshot data for depth conversion, WMS image display and direct access to well data in Petrel, Petra and SeisWare.

SeisWare (formerly Zokero) has released SeisWare 7.1 with PETRA data import and export, a link to ESRI shapefiles or SDE layers, time-to-depth conversion and custom coordinates. SeisWare’s interpretation package now includes crossplot, a 3D Visualizer and basemap generation.

Tecplot RS 2010, a cross-platform reservoir engineering pre/post processor, now offers faster data loading, drag-and-drop file management, and multiple simulation comparison and management.

SmartSignal’s EPI Center 3.0 claims ‘game changer’ status in predictive diagnostics, with technology to manage, control and optimize industrial maintenance and operations. EPI Center predicts impending equipment and process failures—adding advanced analysis, reporting, incident management and knowledge capture and recall. A web services API allows integration with enterprise applications.

Dexter Magnetic has announced a downhole electricity generator that delivers 200 watts in ‘extreme’ HPHT conditions—up to 250 °C and 20,000 psi.

Parallel’s SatManage V claims a 5x performance improvement. SatManage integrates and automates the management of network operations centers and is used by many of the world’s biggest oil and gas and maritime VSAT providers.


SEG High Performance Computing special session

Hess trials GPUs, Barcelona SC on ‘million core’ computers, Stanford—’industry is running scared!’

An audience of 150-plus attended the SEG Special Session on high performance computing in Houston last month. Chairman Masoud Nikravesh (CITRIS) noted that ‘15 years of exponential growth has ended’ as microprocessor clock speeds reach their limits. Today, the deal is how to take advantage of multi-core architectures—both inside the microprocessor and on the graphics card.

Scott Morton reported on Hess’ seismic imaging effort—which has strong backing from John Hess himself. Hess has investigated co-processors from PeakStream and Nvidia along with more ‘esoteric’ hardware such as digital signal processors, FPGAs and the IBM Cell BE. Early FPGA tests with SRC Computers showed a 10x speedup for wave equation migration (WEM), albeit at a 10x system cost. FPGAs proved hard to program and hard to tune for performance.

Hess has now moved to Nvidia’s CUDA, which runs on a CPU host and spawns a multitude of threads on GPUs. Memory management remains a big headache; CUDA is easy to learn and write but harder to optimize. Hess reports a 24x WEM speedup over a CPU. For reverse time migration (RTM), a single GPU is equivalent to 20 Intel Harpertown cores. As CPU and CUDA codes ‘diverge,’ one solution may be OpenCL. Hess’ system is now moving up to 1,200 GPUs, a large percentage of its compute power. Hess uses Tesla boxes with PCI Express interconnect to dual Harpertown CPUs.

José Cela outlined an evaluation of 3D RTM on hardware accelerators performed at the Barcelona Supercomputing Center. Echoing Nikravesh’s introductory remarks on the growing core count, Cela wondered how we will program the million-core computers that should arrive within the next five years. The key is to reduce power consumption without impacting performance. IBM’s Blue Gene system was built for energy saving from the ground up. But this meant a limited amount of memory per core—good for some apps, but not for seismic processing. Accelerators (GPU, Cell) break the memory linkage, but now memory movement needs to be done in code, increasing program complexity by ‘one to two orders of magnitude.’ Cela noted the standard-less programming landscape, which may converge on OpenCL in the future. This is a critical issue, ‘because code outlives hardware.’ In the Q&A, Robert Clapp (Stanford) remarked, ‘We have a working FPGA RTM demonstrator—don’t write them off yet!’ Clapp later presented work done on stream programming with FPGAs, concluding that ‘comparing vanilla implementations does not give an accurate measure of different architectures.’

Ryan Schneider from Nvidia’s Acceleware unit was ‘sceptical that FPGAs will have much impact in HPC,’ particularly because of the relatively low level of R&D compared with Cell/GPU. Schneider claimed that it should be possible to get a 100x improvement with the GPU vs. the CPU. Nvidia’s ‘Fermi’ architecture is coming soon—with 3.5 billion transistors and a promise of 1400/770 GFlops (single and double precision). Schneider concluded by suggesting that we ‘look out a few years—how are your algorithms going to deal with hundreds of cores? The cost of computing is tending to zero while the cost of coding is rising steadily.’ This touched a nerve with the audience—Clapp remarked that ‘industry is running scared—we are moving to massively parallel with no software to run on it!’ Another curious issue is ‘code taint’—what happens when a computer guy optimizes a program, after which the scientist refuses to touch it!

John Shalf (Berkeley Lab NERSC) questioned exaflop projections for 2020, noting the implied 100 MW power requirement. For Shalf, the issue is ‘how to get 1000x without locating next to a nuclear power plant!’ Performance improvement in the next decade will be harder to achieve and to program for. We need a 100x energy improvement over the ‘mainstream COTS’ approach. Some suggestions: self-optimizing hardware and software, ‘co-tuning’ and leveraging low-power embedded COTS technology such as that used in the iPhone or MP3 players.

Finally, Bill Menger (ConocoPhillips) announced the creation of the Society of HPC Professionals. More (but not much more!) from hpcsociety.org.


PPDM AGM and Fall User Meet, Calgary

PPDM CEO Trudy Curtis describes a growing organization. Talisman and Hess report on enterprise-scale deployment of the Public Petroleum Data Model. geoLOGIC Systems offers ‘PPDM in a box.’ IHS and P2ES report on PPDM’s capture and revamp of the venerable API well numbering system.

Trudy Curtis reported on a growing PPDM organization, which is hiring for three new positions and has introduced a new ‘individual’ membership category. While Canadian membership is down, US and international membership is up and PPDM is now a CDN$1.2 million organization (up from $800k).

Lonnie Chin described Talisman’s strategy for integrating unstructured E&P data. Talisman has deployed MetaCarta’s GIS/Lexicon application as a component of its well master data management solution; Schlumberger, as MetaCarta’s upstream integrator, was involved in the project. Talisman opted for a home-brew master data management solution using a PPDM data store. A two-tier strategy has been implemented: key documents are subjected to manual tagging and review, while less critical material is classified automatically by MetaCarta.

Pat Rhynes (geoLOGIC Systems—GLS) described the ‘PPDM in a Box’ project—a standard approach to a ‘full service,’ pre-populated PPDM 3.8 implementation. PPDM in a Box addresses use cases such as master data management, asset lifecycle data management, business process management and more. Potential users are start-ups, ‘green field’ developments, agencies and national oil companies. GLS, along with Noah Consulting, was commissioned by PPDM to study the possibility of a standard implementation. The plan is to continue development along the lines of an open business model, developing an ‘open’ data model, open rules and an open meta-model. Expressions of interest are sought for an ‘at scale’ project that will kick off in 2010.

James Stolle (P2ES) and Bruce Smith (IHS) reported that the American Petroleum Institute is working to hand over control of the API well numbering standard to PPDM. The venerable standard was last revised in 1985, before widespread horizontal drilling. The plan is to review and incorporate similar standards from organizations such as the MMS and to extend the standard from its original regulatory role to one that enables data integration. Results will be out next year and available online to all.

A further strong endorsement of PPDM came from Hess’ Fred Kunzinger, who described a Technically Validated Database (again built with Noah). Hess first attempted to implement a vendor off-the-shelf solution before developing its own database using the PPDM data model and Volant’s Enerconnect middleware. A presentation from systems integration behemoth Infosys underscored PPDM’s maturing role in the upstream.


SC09, Microsoft in HPC, Nvidia and AMD/ATI...

Cray at number 1. Microsoft down 10. ATI (not Nvidia) tops out GPU accelerators. Brown Deer Technology’s David Ritchie compares CUDA, OpenCL and AMD’s Cypress gigaflop boards.

At the SuperComputing event held last month in Portland, Oregon, Cray’s ‘Jaguar’ system at Oak Ridge National Lab took the top spot with 1.75 petaflops of Linpack performance. Also of note was Microsoft’s slide down the rating scales—from number 10 last year to number 20. The problem for Microsoft is that the Windows HPC 2008-based ‘Dawning’ cluster at the Shanghai Supercomputer Center has not been upgraded in the interim—and in HPC, standing still is not an option.

How does this affect the upstream? Microsoft HPC solution specialist Mark Ades, speaking at an IBM-sponsored event at last month’s SEG, acknowledged that ‘Linux is very dominant in this [HPC] space—especially in seismic processing.’ Microsoft is now pushing its agreement with Novell/Suse and its Linux management platform for pure-play HPC, while its own Windows HPC 2008 is to focus on ‘high requirement workstation jobs’ like reservoir engineering and the Excel Runner for humongous spreadsheets.

Intriguingly, the fastest GPU-accelerated machine in the TOP500 is at the National Supercomputer Center in Tianjin. This uses ATI’s Radeon HD 4870 accelerators to achieve 563 teraflops and took the number 5 slot. We were curious to see how ATI was shaping up against Nvidia in number crunching and quizzed Brown Deer Technology’s* David Ritchie, who provided the following.

’NVIDIA’s CUDA has certainly done well in the GPGPU community and you will have heard a good deal about ‘Fermi,’ due to launch next year, promising 520-630 GFLOPS double and 1.0-1.25 TFLOPS single precision peak performance. On the other side of the fence, AMD/ATI is now shipping Cypress boards providing peak performance of 544 GFLOPS double, 2.7 TFLOPS single, along with a dual GPU board that provides 928 GFLOPS/4.6 TFLOPS! Cypress also provides many of the hardware ‘advances’ that Fermi promises such as fused multiply-add instructions. Paper specs are nice but meaningless if you cannot program the boards. For this reason, OpenCL is a welcome introduction to GPGPU, with its industry-wide support.

From my point of view, the industry standardization of the programming API for GPGPU puts AMD/ATI in a good position with their superior hardware specs, since we are likely headed for a time of code portability more akin to what we find with multicore, where the battle is between hardware, not SDKs. This should be good for the industry since, from a programmer’s point of view, applications will be portable, putting real competition in place.

Unlike the situation with multicore, getting good performance with GPGPU will still require a good deal of algorithm tuning since the compilers are far less mature than GCC or the Intel compilers. We have had good success with the last generation of AMD/ATI hardware in several projects and are focusing now on exploiting the increased performance of the Cypress boards to accelerate existing algorithms, while transitioning those algorithms to OpenCL.’
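For readers wondering where such peak figures come from, they are straightforward arithmetic on the published board configuration. Our own back-of-envelope for Cypress, assuming the published 1,600 stream cores at 850 MHz, each capable of a fused multiply-add (two flops per cycle):

\[ 1600 \times 0.85\,\text{GHz} \times 2\,\text{flops/cycle} \approx 2.7\,\text{TFLOPS single precision} \]

with double precision running at one fifth of that rate, or roughly the 544 GFLOPS Ritchie quotes. Sustained application throughput is, of course, another matter, hence his caveat about algorithm tuning.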

* Brown Deer provides HPC/GPU optimizing services to clients including Exxon Mobil and Shell. More from browndeertechnology.com.

Lexicon: GPU—graphics processing unit. GPGPU—general purpose GPU (for computing rather than graphics). CUDA—Nvidia’s GPU programming language. OpenCL—an embryonic cross-platform accelerator language.


Invensys User Group

Use cases of DynSim and Olga include topside modeling, from design through training and operations. PipePhase and NetOpt used to optimize ESP performance. Stan de Vries explains why ‘most integrated asset management projects fail.’ Production optimization on Petronas’ Baronia.

Cal Depew (Invensys) described how DynSim and Olga are used across the facility lifecycle from integrated subsea pipeline and topside modeling, through DCS checkout against dynamic models and on to training and operations. This is a rich field for front end design and optimization—investigating issues such as how topside operations impact pipeline design in terms of slug formation, produced water handling and flow assurance.

Alexander Chamorro showed how electrical submersible pump (ESP) operations are optimized using PipePhase and NetOpt. Chamorro noted that 60% of the world’s oil wells require artificial lift and of these, 14% use ESPs. The ESP is perhaps ‘the most versatile and profitable piece of equipment in a petroleum company’s arsenal’ but ESPs can become an expensive nightmare if not properly operated. Hence the need to optimize ESP performance to maximize run life. ESP sizing is key at design time and many factors such as drawdown and motor load must remain in an optimal range—in the face of changing conditions over the pump’s lifetime. Chamorro noted that ‘most complaints regarding pump performance stem from placing a pump in an application that requires it to operate outside its optimal flow or pressure ratings—causing for example gas lock or pump-off.’ ESP optimization is achieved by detecting deviations from established trends—and acting on them a.s.a.p. to reduce risk of pump failure. Invensys’ NetOpt was demoed on a gathering system, using the objective function to maximize flow rate through the network, while keeping pump rate, head and motor horsepower in range.
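Schematically, and in our notation rather than Invensys’, the kind of problem NetOpt solves here is a constrained optimization over the well rates q_i:

\[ \max_{q} \sum_i q_i \quad \text{subject to} \quad q_i^{\min} \le q_i \le q_i^{\max}, \quad H_i(q_i) \in [H_i^{\min}, H_i^{\max}], \quad P_i(q_i) \le P_i^{\max}, \]

plus the pressure-balance equations of the gathering network. The constraints encode each pump’s operating envelope (rate, head and motor horsepower) so that the optimizer cannot buy extra flow by driving a pump into the gas lock or pump-off regions described above.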

CMG VP Jim Erdle showed how the STARS reservoir simulator, coupled with the PipePhase surface network simulator is used to optimize steam assisted gravity drainage (SAGD). This high cost, low margin exercise mandates optimization. The objective function is designed to maximize NPV in the face of the ‘competing’ objectives of producing more oil and reducing steam consumption.

Invensys’ Stan de Vries claimed that most attempts at integrated asset models (IAM) fail because there is no automated application and data management. Global optimization is different from local optimization and requires a move up the data/information/knowledge stack. Unstructured control system data needs processing for event recognition and transforming into actionable information. Along the way, we need to automate data quality management, adding in virtual metering and data reconciliation. Fortunately, moving up from local to global optimization, taking account of well interactions, results in a 4-5x speedup in convergence. de Vries illustrated this with a case study of hydrate formation in a gathering system. Here the software computes phase conditions, triggered by temperature difference and flow reduction—and suggests a fix. The spin-off is that this can be used to put a monetary value on the digital oilfield approach.
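As a purely illustrative sketch of the ‘event recognition’ de Vries describes (thresholds, names and logic are ours, not Invensys’), a rule that turns raw temperature and flow readings into an actionable hydrate-risk event might look like this:

# Purely illustrative event-recognition rule: flag possible hydrate formation
# when a line cools below a hydrate-stability threshold while flow drops.
# Thresholds are invented for the sketch.
def hydrate_alert(samples, t_hydrate=18.0, flow_drop=0.2):
    """samples: list of (temperature_degC, flow_rate) tuples, oldest first."""
    (t0, q0), (t1, q1) = samples[0], samples[-1]
    cooled = t1 < t_hydrate and t1 < t0
    flow_reduced = q0 > 0 and (q0 - q1) / q0 > flow_drop
    return cooled and flow_reduced          # both symptoms present: raise the event

print(hydrate_alert([(25.0, 100.0), (15.0, 70.0)]))  # True, so advise remediation

The point is not the few lines of logic but the surrounding discipline: reconciled, quality-managed inputs, so that the event fires on process reality rather than on bad data.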

Brian Dickson offered a more prosaic digital oilfield example using Invensys/Foxboro digital Coriolis dual-phase measurements for well test, CO2 and water injection management. This hardware produces a massive data set, which was plugged into a neural net solution for wet gas monitoring, allocation and reservoir optimization.

Harpreet Gulati showed how production optimization is used to address problems such as facilities bottlenecks. Such techniques can be used even in the face of incomplete information from the field. Gulati showed how a combination of PipePhase and Romeo in an automated asset model can provide key performance indicators and even generate new setpoints for enhanced operations. PipePhase is used to model complex and extensive networks, including constraints, and network models can be combined. A study by Genesis Consulting validated the Romeo/PipePhase approach. Invensys’ Romeo online optimizer is used by ‘most major oils.’

Gulati then turned to a case history of a process optimization advisory developed for Petronas’ Baronia platforms. This was used to optimize production and minimize gas venting, leveraging installed IT and the Foxboro/Invensys DCS. Again, a ‘rigorous’ PipePhase/Romeo IAM was developed for Baronia’s 21 wells and 4 production manifolds. The Romeo facilities model downloads data in real time and performs data reconciliation and model tuning. The model then back-calculates individual well production, which is used for optimization. The IAM considers all process interactions to optimize choke settings, gas lift rates, separator pressures and compressor suction—all while respecting constraints such as gas venting, gas water dew point, and gas and oil export pressures. Sub-components of the extensive Baronia IAM include a real time system model, a steady state detection model and a model sequence activation controller.

The Baronia IAM performs extensive data management (as proposed by Stan de Vries above), leveraging Invensys’ InSQL historian to capture operating data, well status, price data and so on. Operators and engineers review and then implement the advisory set points. Petronas uses the IAM as a motor for a continuous improvement loop around data reconciliation, optimization, execution (set points) and so on.

Gulati concluded that information—especially oil and gas compositional data—is a prerequisite for developing a reliable model. The IAM resulted in a $40,000 per day revenue hike, reduced gas venting, improved production allocation and compressor performance monitoring.

Hesh Kagan’s presentation (with Motorola’s June Ruby) described a ‘wireless win’ in the refit of a refinery, where wireless was deployed to connect remote tank farms. PLCs in each farm converted gauge data to Modbus over Ethernet, with dual wireless paths for redundancy. At the pump house, data was converted back for consumption by the legacy I/O modules.

Larry Balcom presented more work performed with CMG, on SimSci/Stars integration—again in a heavy oil context. In a horizontal well, PipePhase models what happens inside the liner while CMG’s Stars models what happens from the liner to the reservoir. A new interface is under construction to map from PipePhase to Stars, integrating with Sim4Me. The idea is to be able to run SAGD simulations across heating, water flood, steam injection, bottom hole reactions and refining capacity. A simulation executive allows models to be scripted and produces AVI movies of simulator results. More from invensys.com.


Folks, facts, orgs ...

Aker, AGR, Numerical Rocks, Energy Ventures, ENGlobal, Enventure, FMC, Senergy, GE, ICIS, Iron Mountain, OHM, P2ES, PODS, Reality Mobile, Satyam, Schlumberger, SMT, Total, MMS ...

Aker Solutions has opened a new subsea facility at its yard in Egersund, Norway, to accommodate large structures such as the Ormen Lange subsea compression station.

Johan Warmedal will be Executive VP of the Drilling Services division of AGR Group from early 2010. Warmedal was formerly with Kongsberg.

ResLab founder Odd Hjelmeland is now CEO of Numerical Rocks and Ivar Erdal has moved to sales and marketing.

EAGE and SPE have agreed to cooperate on workshops and conferences including Offshore Europe.

Energy Ventures’ new Brazilian operation will be led by Roberto Paschoalin and Erik Hannisdal. Jorge Camargo (Petrobras), Vik Rao (Halliburton) and Foo Kok Seng (Keppel Offshore) are on the company’s Advisory Board.

ENGlobal announced a search for a new CEO—the position is currently held by Bill Coskey who is expected to continue to serve the company.

SET specialist Enventure Global Tech. has announced that its online technical library is now live on EnventureGT.com.

Brad Beitler is now VP Technology of FMC Technologies. He was previously Director of Technology.

The US Department of Energy has appointed Chris Smith as Deputy Assistant Secretary for Oil and Natural Gas.

John Fraser is to head up Senergy’s new software division, with Nigel Blott in the role of global operations manager. Senergy claims over 500 clients for its Interactive Petrophysics and Oilfield Data Manager products.

GE Oil & Gas has opened a new global services facility in Montrose, Scotland.

Steve Finneran has been appointed Sr. GIS Programmer/Analyst with GeoNorth.

ICIS Heren has launched a new website providing subscribers with price assessments, news of developments and analysis of the global energy markets on icis.com/heren.

Ramana Venkata is now president of Iron Mountain Digital, the company’s technology business unit.

Chris Heaver has joined Calgary-based MicroSeismic as Canadian sales manager.

Arthur Cheng has joined Offshore Hydrocarbon Mapping (OHM) as VP Research based in Houston. Cheng hails from Baker Hughes.

Andrew Hicks is CFO of P2 Energy Solutions.

Sheila Wilson has resigned as Executive Director of the Pipeline Open Data Standard (PODS). The Association is now recruiting to fill the position.

Mike Odell is to head up Reality Mobile’s new Oil & Gas arm in Houston. Odell was formerly with Geomodeling. He is joined by Bob Blair, formerly of SkyBitz, as CFO/VP Operations.

Mahindra Satyam (formerly Satyam Computer Services) has appointed Vineet Nayyar as chairman and M. Damodaran and Gautam S Kaji as directors.

Schlumberger’s management consulting unit has announced the appointment of five new VPs: Herve Wilczynski (ex Booz Allen Hamilton) in Houston, André Olinto (McKinsey) in Rio de Janeiro, Jerome Luciat-Labry (McKinsey) in Abu Dhabi and Aileen Chang (Deloitte) in Singapore. Olivier Perrin has been promoted to Vice President.

Sensorlink has signed with Kentron Systems of Calgary for sales and distribution of its corrosion monitoring products.

Andrew Burr heads up SMT’s new office in Abu Dhabi.

Technip and SaudConsult have created a new joint venture for an engineering center in Al Khobar, Saudi Arabia.

Olivier Cleret de Langavant has been appointed Senior VP, Finance Economics Information Systems in Total E&P, replacing Philippe Chalon.

Neil Carpenter has joined WellPoint Systems as Senior VP worldwide sales. Carpenter was formerly with AspenTech.

Corrections

Earthworks points out that the latest MPSI software release is V1.3, not V2.0 (OITJ Oct 09).

MMS CTO is Vonia Ashton-Grigsby, not Robert Prael as we wrongly stated last month. Prael works for the Minerals Revenue Management Program within the MMS. Our apologies for the errors.


Done Deals

Drillinginfo, HPDI, Geokinetics, PGS, Global Geophysical, McDermott, SemGroup.

Drillinginfo has acquired HPDI, an energy industry software and information services company. HPDI provides production data and web-enabled tools to clients in finance and marketing sectors. HPDI’s production database will be merged with Drillinginfo’s historical production database.

Geokinetics is acquiring the global onshore seismic business of Petroleum Geo-Services in a cash and stock transaction valued at approximately $210 million on a ‘cash free, debt free basis’ which includes net working capital of $37.5 million. The combined unit has expected pro-forma 2009 revenues of over $700 million and the new company will be the ‘second largest provider of onshore seismic data acquisition services in the world in terms of crew count.’ Following the transaction, PGS will become Geokinetics’ second-largest shareholder (with 20%) after Avista Capital Partners. Geokinetics received bridge financing from RBC Capital Markets.

Global Geophysical Services has filed with the SEC to raise up to $150 million in an initial public offering. The Missouri City, TX-based company, which booked $327 million sales over the last 12 months, plans to list on the NYSE under the symbol GGS. Credit Suisse and Barclays Capital are the lead underwriters on the deal. No pricing terms were disclosed.

McDermott International is to split into two independent companies: Babcock & Wilcox (B&W) and J. Ray McDermott (J. Ray). B&W is to focus on power generation and nuclear, while J. Ray will continue as an EPC serving the offshore upstream oil and gas market. Between 2006 and 2008, J. Ray generated an average of $2.4 billion in annual revenues with approximately $245 million in average annual operating income. J. Ray employs approximately 16,000 people worldwide.

Midstream service company SemGroup has emerged from Chapter 11 restructuring and expects to be listed on a national exchange by mid-2010. Norm Szydlowski replaces Terry Ronan as president and CEO.


US National Geoinformatics update

GEON announces open LIDAR topography network, Open Earth framework for online geology.

The US National Geoinformatics System (GEON) provided an update on its activities at the American Geophysical Union’s fall meet in San Francisco. GEON’s OpenTopography network is hosted at the San Diego Supercomputer Center and will operate an internet-based national data facility for high-resolution LIDAR topographic data. The facility will also provide online processing tools and act as a community repository for information, software and training materials.

Another project, the OpenEarth Framework (OEF), is to provide visual analytics for multidimensional geoscience data. OEF includes a suite of software libraries and applications for the analysis, visualization, and integration of large multi-dimensional multi-disciplinary geophysical and geologic data sets.

Finally GEON is helping prototype standardized interfaces to geology metadata in the online USGIN catalog. USGIN is working on standard services to make data resources of the state and federal geological surveys accessible online in a distributed network using a few standards and protocols. The network is open to all providers and users. Existing data formats such as GeoSciML, ChemML, and Open Geospatial Consortium sensor, observation and measurement markup languages will provide the necessary interchange formats. More from opentopography.org, oef.geongrid.org and usgin.org.


OpenSpirit User Meet, Ocean-Petrel app adapter unveiled

OpenSpirit’s connectivity ecosystem grows as applications extend to quality, taxonomy and more.

In the last year OpenSpirit (OSP) has released WITSML 2009 support, a new devkit (with sample code for C++, .Net, and Java) and an ‘application adapter’ toolkit. There is a new SEGY viewer for OSP Web Server 2008. The OSP ‘ecosystem’ is growing—with some 15 new partners joining in 2009.

CTO Clay Harter introduced the Ocean-Petrel application adapter (OPAA). The previous OSP/Petrel plugin was embedded in Petrel and supported by Schlumberger. The new version will be a ‘true’ Ocean-based application adapter supported by OSP. An XML file controls what to use for data matching, and the plugin creates extended attributes from external data sources, along with an audit trail for tracking changes.

OSP partner manager Brian Boulmay highlighted application adapters for SMT’s Kingdom Suite and IHS’ Petra. The PPDM datastore connector now underpins workflows involving Neuralog, Fugro/Trango, Geolog and Petrosys. ESRI is to build an application adapter for ArcGIS Explorer 900. Petris, DataVera and SAS’ DataFlux unit are building OSP connectors to access more data stores for quality assurance workflows. Other newcomers included the Orchestra ontology engine from PointCross, ISS’ ‘BabelFish’ dashboard for real-time production data, Fusion for pre-stack seismic data, Roxar and Recon. All in all, an impressive array of potential connections. Harter wound up his presentation with a pet project—an impressive ‘mash-up’ of maps, logs and seismic served from OSP web services. The finished screen looked much like a Millennial Gen-Y desktop!

Jess Kozman from CLTech Consulting reported that mid continent shale players are turning to PPDM as a back end data store, both for third party applications and for newly implemented data management solutions. Kozman expects this trend to continue and sees OSP connectivity as vital to these companies as their data requirements grow through mergers and acquisitions.

Lynn Babec demoed the new WITSML interface, which can take data change events from a WITSML server and update an OSP-enabled application. The tool supports WITSML 1.3.1 and has been tested in the Gulf of Mexico.

Gimmal Group’s Lisa Derentahl presented a Chevron case study using OSP to ‘harvest’ G&G data repositories. Gimmal provides search results based on fixed taxonomies and spatial metadata, faceted by type and source, and displayed in a tag cloud.

Clay Harter wound up the proceedings, demonstrating OSP REST web services with some examples of real world searches, highlighting implementations from Eni and Chevron. He also showed solutions from Idea Integration, for embedding ESRI mapping technology in SharePoint and from InfoSys, for an IM intranet portal and the ISS/Babelfish real-time data visualization application.

Looking forward, Harter unveiled new features in the upcoming 3.2 release: data connectors for EPOS 3 and OpenWorks 2003.12, a 64-bit Kingdom connector, GeoFrame 4.5 support, Petra 3.2 support, application adapter toolkit updates and API enhancements to events, sessions and security.

2010 will see new OSP tools including an ArcGIS extension and updates to the scan utility, copy manager and job scheduler, along with new ‘Desktop’ plug-ins. The OpenSpirit Framework 3.3 release will include new-style data connectors for OpenWorks R5000, GeoFrame 4.5 and Seabed, an EPOS 4 data connector and a read/write connector for PPDM. More from openspirit.com.


Sales, contracts and deployments

ABB, SPT Group, Transpara, InSource, EnergySolutions, DESFA, ERF Wireless, The Computer Works, Central Petroleum, Fugro, Seawell, IFS, ISS, KBC, GASCO, Technip, Schlumberger, WellPoint.

Automation specialist ABB is teaming with SPT Group to provide ‘Integrated Operations’ (IO) solutions to the oil and gas industry. The move expands ABB’s IO portfolio and offers SPT Group’s products, notably Olga and Mepo, access to a larger market. Initially, the companies will collaborate on the control and monitoring of subsea installations such as wells and production networks. This is a growing market according to Per-Erik Holsten, manager of ABB’s Oil, Gas and Petrochemicals business in Norway, who said, ‘We hope to be able to introduce pioneering technology faster on the Norwegian continental shelf, where the potential is vast. The industry uses more and more seabed facilities connected to topsides and land-based facilities, where effective control and monitoring are vital success factors.’

~

Transpara Corp. has signed with InSource Solutions to deliver real-time data visualization to the oil and gas vertical. Transpara’s Visual KPI will be offered as an option to InSource’s continuous improvement and productivity solutions.

~

Energy Solutions International (ESI) reports an upgrade of its PipelineManager solution at Greece’s Hellenic Gas Transmission System Operator (DESFA). DESFA’s system is the first pipeline network to connect Caspian and Middle East gas to Europe. DESFA will deploy PipelineManager to provide real time monitoring and prediction. The upgrade includes VisualPipeline, PipelineManager’s multilingual GUI. The new solution will enable DESFA to monitor its entire pipeline network, minimize inventory losses via better leak detection, and improve security with user audit trails.

~

ERF Wireless has teamed with The Computer Works to offer broadband wireless connectivity in North Central Arkansas. The deal adds 3,000 sq. mi. of coverage in the Fayetteville shale area. The Computer Works’ enterprise-class wireless broadband network currently serves more than 5,000 customers from 42 towers.

~

Central Petroleum has awarded Fugro Data Solutions a contract to implement Fugro’s ‘Trango’ data management solution. The deal includes the design and implementation of data management and disaster recovery policies prior to the installation of the Trango Manager suite. This now comprises Trango Seismic Manager, Trango Well Manager and a newly developed connector to the M-Files electronic document management system. The latter promises a ‘seamless’ workflow between managed source data and Central’s interpretation environment.

~

Drilling and well services specialist Seawell is to expand its use of IFS’ component-based ERP suite, IFS Applications. Following its initial roll out of purchasing, logistics and maintenance components, Seawell is now adding engineering, project management, material management, financials and HR. Seawell reports 2,500 users of the system.

~

Following its deal last year with Schlumberger for upstream distribution of its Babelfish production integration toolset, ISS Group has signed a non-exclusive downstream deal with KBC Advanced Technologies. The multi-year global agreement envisages the joint provision of performance management solutions to refining and petrochemicals, with Babelfish integrated into the joint offering.

~

Abu Dhabi Gas Industries (GASCO) has awarded Technip a $415 million lump sum turnkey EPC contract for its ASAB 3 project, a revamp of existing facilities to handle associated gas from the Asab, Shah and Sahil oil fields.

~

Technip has also signed a global cooperation agreement with Schlumberger for the joint development of monitoring and integrity solutions for subsea flexibles. The deal targets novel applications of flexible tubing in areas such as deepwater and subsalt targets in Brazil and the Gulf of Mexico. One application involves fiber optic monitoring of flexible behavior and production data.

~

Plains Midstream Canada has selected WellPoint Systems’ Energy Broker commodity trading and risk management solution for its crude oil marketing business. Energy Broker is an integrated solution for oil and gas powered by Microsoft Dynamics AX.


Standards Stuff

PPDM announces V3.9. Field device standards converge. CEN releases EU e-invoicing guidelines.

PPDM has announced that version 3.9 of its petroleum data model will be released ‘mid to late’ 2010. Drafts of the model will be posted for member review starting in Q1 2010. Sample collection and preparation and organic geochemical analysis are currently being added to the model. PPDM is calling for data modelers to join the modeling committee. More from ppdm.org.
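PPDM is a relational model, conventionally keyed on the UWI (unique well identifier). A minimal sketch, using SQLite from Python with entirely hypothetical, simplified table and column names (not the actual PPDM 3.9 DDL), of how a new geochemistry subject area might hang off a well table:

import sqlite3

# Illustrative only: hypothetical tables in the spirit of a PPDM-style
# relational model; not the actual PPDM 3.9 schema.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE well (
    uwi        TEXT PRIMARY KEY,   -- unique well identifier
    well_name  TEXT
);
CREATE TABLE geochem_analysis (    -- hypothetical new 3.9-era subject area
    uwi         TEXT REFERENCES well(uwi),
    sample_id   TEXT,
    top_depth_m REAL,
    toc_pct     REAL,              -- total organic carbon
    PRIMARY KEY (uwi, sample_id)
);
""")
con.execute("INSERT INTO well VALUES ('100/01-01-001-01W1/0', 'Example 1')")
con.execute("INSERT INTO geochem_analysis VALUES "
            "('100/01-01-001-01W1/0', 'S-1', 1250.0, 2.4)")
for row in con.execute("""SELECT w.well_name, g.sample_id, g.toc_pct
                          FROM well w JOIN geochem_analysis g USING (uwi)"""):
    print(row)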

The standards bodies behind EDDL, FDT, Fieldbus, HART, OPC and PROFIBUS are to cooperate, along with major device vendors, on the Field Device Integration (FDI) standard. The intent is to assure a uniform device integration solution for the process industries across all host systems, devices and protocols. The FDI spec is due out in 2010 and will provide design and test tools, a common binary format and an EDDL interpreter. More from fdt.org and eddl.org.
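The promise here is that a single device description, rendered by an interpreter, behaves the same way in any host. A toy illustration in Python, with a hypothetical dictionary-based description standing in for real EDDL (which is a text language with its own grammar):

# Toy device-description interpreter (illustrative only; not real EDDL).
description = {
    "device": "PressureTransmitter-X",     # hypothetical device
    "parameters": [
        {"name": "pv",   "label": "Pressure",    "units": "bar",
         "lo": 0.0, "hi": 400.0},
        {"name": "temp", "label": "Temperature", "units": "degC",
         "lo": -40.0, "hi": 85.0},
    ],
}

def render(desc, values):
    """Render live values against the description, flagging range
    violations. The same description drives the display in every 'host'."""
    for p in desc["parameters"]:
        v = values.get(p["name"])
        status = "OK" if p["lo"] <= v <= p["hi"] else "OUT OF RANGE"
        print(f'{p["label"]}: {v} {p["units"]} [{status}]')

render(description, {"pv": 123.4, "temp": 91.0})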

EU standards body CEN has published the results of its workshop on ‘Electronic Invoices and Compliance.’ The document covers e-invoicing solutions that meet the needs of tax authorities across member states. The e-invoice standardization initiative sets out to implement EU directives and national legislation on electronic invoicing. CEN provides an e-invoicing portal at e-invoice-gateway.net along with guidelines for achieving compliance. More from cen.eu/isss.


Shell rolls-out mobile workforce solution from Invensys

Wonderware IntelaTrac mobile solution supports Shell’s ‘Ensure Safe Production’ initiative.

Royal Dutch Shell’s Downstream Manufacturing Division is to deploy a mobile workforce solution from Invensys Operations Management as a component of its Ensure Safe Production (ESP) initiative at 29 of its global manufacturing facilities. Shell has signed a multi-year partnership agreement with Invensys covering the ‘Wonderware IntelaTrac’ mobile workforce and decision-support solution. IntelaTrac will provide downstream operations with configurable software and rugged, intrinsically-safe mobile hardware to manage equipment surveillance, process and regulatory compliance tasks. Field operators will be able to optimize work processes, improving asset reliability and availability and reducing overall maintenance costs.

IntelaTrac lets operators, field engineers and supervisors create, define and execute equipment surveillance procedures following best practices, corporate policies and regulatory mandates. When executing in-the-field procedures, operators are made aware of potential equipment issues in real time via their mobile devices. More from invensys.com.
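A surveillance round of this sort boils down to a sequence of checks with acceptance limits, evaluated as readings are taken. A minimal sketch in Python, with hypothetical tags and limits (not Invensys’ data model):

# Hypothetical operator-round sketch (not the IntelaTrac data model):
# each step carries acceptance limits; a breach raises an immediate alert.
ROUND = [
    {"tag": "P-101 seal pot level", "lo": 20.0, "hi": 80.0, "units": "%"},
    {"tag": "E-204 outlet temp",    "lo": 60.0, "hi": 95.0, "units": "degC"},
]

def execute_round(round_steps, take_reading):
    """take_reading(tag) -> float; returns a list of alert strings."""
    alerts = []
    for step in round_steps:
        value = take_reading(step["tag"])
        if not step["lo"] <= value <= step["hi"]:
            alerts.append(f'{step["tag"]}: {value} {step["units"]} '
                          f'outside {step["lo"]}-{step["hi"]}')
    return alerts

readings = {"P-101 seal pot level": 12.0, "E-204 outlet temp": 82.0}
print(execute_round(ROUND, readings.get))  # flags the low seal pot level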


Knowledge Support Systems—fuel pricing ‘command & control’

New solution promises integration of fuel pricing process, from price decision to signage.

New Jersey-based Knowledge Support Systems (KSS) has just announced ‘Fuel Pricing Command & Control’ (FPCC), heralding the ‘integration of the fuels pricing process from price decision to price sign.’ FPCC gives retailers control and visibility over each stage of the retail pricing process along with instant price change capabilities from the corporate office, store or remote mobile device. ‘Closed-loop’ integration provides feedback on successful sign changes and generates alerts when price change issues are encountered.
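In a closed loop of this kind, the pricing system does not just transmit a price; it reads back what the sign actually displays and alerts when confirmation fails. A minimal sketch of that handshake in Python, with a hypothetical sign interface (not KSS’ or its partners’ APIs):

# Closed-loop price change sketch (hypothetical interface, not KSS's API).
def push_price(sign, grade, new_price, alert):
    """Send a price to a sign, verify what it actually displays,
    and raise an alert on any mismatch or failure."""
    try:
        sign.set_price(grade, new_price)
        displayed = sign.read_back(grade)   # feedback leg of the loop
    except IOError as exc:
        alert(f"{grade}: sign unreachable ({exc})")
        return False
    if displayed != new_price:
        alert(f"{grade}: sent {new_price}, sign shows {displayed}")
        return False
    return True

class DemoSign:                             # stand-in electronic price sign
    def __init__(self): self.prices = {}
    def set_price(self, grade, price): self.prices[grade] = price
    def read_back(self, grade): return self.prices[grade]

print(push_price(DemoSign(), "unleaded", 1.299, print))  # -> True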

KSS president and CEO Bob Stein said, ‘Fuel retailers are seeking ways to gain more control over their fuel margins by automating the setting and implementation of prices and quickly dealing with any issues that might impact volume, margin or brand image. KSS and its partners offer certified integration of our respective solutions, giving retailers the most responsive, consistent and error-free management of fuels prices from corporate office to price sign.’

KSS also announced a ‘preferred integration partner’ program with FutureMedia Displays, PWM Electronic Price Signs and SunShine Electronic Display Corp. These companies will integrate FPCC with their electronic price displays. More from kssg.com.


Dresser Wayne and Johnson Controls for Shell

Multi-year agreement adds 1,600 new stations to Dresser Wayne’s global network.

Dresser Wayne has teamed with Johnson Controls to provide maintenance services to Royal Dutch Shell’s network of petrol stations throughout Germany. The multi-year agreement will add more than 1,600 new stations to the Dresser Wayne global network of retail fuels maintenance locations.

Phillip Casburn, global supply chain lead for the Shell account at Johnson Controls said, ‘Dresser Wayne is one of our strategic suppliers for the Shell account and its global presence has allowed our organizations to clearly align short-term and long-term objectives.’

Dresser Wayne president Neil Thomas added, ‘This agreement enhances customer support in Germany and complements our recent acquisition of the former Rohe subsidiaries in Switzerland, Poland, Hungary, Czech Republic, and Slovakia.’ Johnson Controls currently manages services for some 15,000 Shell retail outlets worldwide. More from johnsoncontrols.com and dresser.com.


Statoil sponsors ‘mega project’ R&D program at Berkeley

UC Berkeley and Norwegian University of Science and Technology get $580,000 R&D stimulus.

Statoil (previously StatoilHydro) has sponsored a new academic research program at the University of California at Berkeley and the Norwegian University of Science and Technology (NTNU). Statoil has been working with Berkeley’s Center for IT Research in the Interest of Society (CITRIS) since 2006 to develop and run a program for managing its large and complex development projects. The $580,000 agreement kicks off a new R&D initiative, ‘Understanding Success and Developing Management Leadership on International Mega Projects,’ to be headed by CITRIS professor Iris Tommelein and NTNU professor Asbjørn Rolstadås.

Statoil Projects & Procurement VP Gunnar Myrebøe said, ‘We wish to develop the best project and functional managers in the industry and we believe that collaboration between academics and industry is crucial to develop our skills in project management.’ The ‘P2SL Mega-projects’ initiative will develop educational material to strengthen Statoil’s supplier companies’ capabilities in managing mega-projects in geographically and geopolitically diverse settings. Myrebøe concluded, ‘Large-scale projects may cost billions of dollars to perform and deliver. As our projects become more geographically diverse and complex, they also become increasingly challenging to lead and manage.’ More from statoil.com.


Hosted production reporting for Ithaca’s Beatrice field

Production allocation and reporting from EnergySys configured by partner Kelton.

Ithaca Energy is to deploy a production reporting and hydrocarbon allocation solution from EnergySys on its UK North Sea Beatrice field. The hosted solution is to consolidate existing IT systems, software, security and data management into a ‘single, low cost online service.’ The standard production reporting application will be configured to Ithaca’s requirements by EnergySys partner, Aberdeen-based Kelton, a specialist in oil and gas flow and quality measurement.

EnergySys 4 is configured and operated from a web browser. New applications can be built without the need for programming. Users can build their business rules in a familiar Microsoft Excel environment and deploy the resulting spreadsheet to EnergySys, where versioning, audit trails and security are managed to company standards. Kelton’s operations manager Iain Pirie said, ‘We will support Ithaca through all the phases of their project, from definition of the allocation and metering philosophy through to operation of the system. EnergySys allows us to maintain quality of service while delivering cost-effective solutions rapidly.’ More from sales@energysys.com.
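Allocation rules of the kind users encode in such spreadsheets are typically pro-rata: metered export is shared among wells in proportion to their theoretical (well-test) rates. A minimal sketch of that classic business rule in Python, with made-up figures (not Ithaca’s actual allocation philosophy):

# Pro-rata allocation sketch: the classic spreadsheet business rule,
# shown here in Python with made-up well-test rates (bopd).
def allocate(export_total, theoretical_rates):
    """Share a metered export quantity among wells in proportion
    to their theoretical rates."""
    total_theoretical = sum(theoretical_rates.values())
    return {well: export_total * rate / total_theoretical
            for well, rate in theoretical_rates.items()}

wells = {"B-01": 4_000.0, "B-02": 2_500.0, "B-03": 1_500.0}
allocated = allocate(7_600.0, wells)     # metered export for the day
for well, volume in sorted(allocated.items()):
    print(f"{well}: {volume:,.0f} bbl")
# B-01: 3,800  B-02: 2,375  B-03: 1,425  (sums back to 7,600)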


Microsoft solution for Medco Energi’s mobile workforce

As workers rely ever more on e-mail, continuous replication and tighter security become essential.

Jakarta-based Medco Energi International has migrated its mobile workforce’s communications infrastructure to a combination of Microsoft Exchange Server (MES) 2010 and Microsoft Office Communications Server (MOCS) 2007 R2. Medco’s upstream through downstream operations are spread across seven countries.

The company first deployed MOCS in 2008 and consolidated voice mail and e-mail into one inbox for unified messaging with Microsoft Exchange Server 2007 Enterprise Edition. As Medco’s reliance on e-mail and voice communications increased, the company installed failover clusters for increased reliability. But this was insufficient to guarantee service levels. MES 2010, installed in June 2009, allows for continuous replication using defined database availability groups. Following a successful pilot involving 150 users, Medco is now scaling up to its full 2,800-user population.

Medco will implement information rights management (IRM) policies to protect company information. The company is replacing its BlackBerry smart phones with Windows Mobile-based devices (that’ll be popular!). Medco also deploys Microsoft Forefront Security for Exchange Server, which integrates multiple scan engines into a multilayered defense against viruses, worms, spam, and inappropriate content. More from microsoft.com/oilandgas.


Mustang evaluates emissions monitoring at Cherry Point

Project aims to increase BP’s continuous emissions monitoring system’s reliability.

Wood Group unit Mustang has completed a study of the continuous emissions monitoring system (CEMS) at BP’s Cherry Point refinery, Washington State. The CEMS reliability project included the automation of BP’s emissions monitoring to meet environmental agency compliance requirements and provided recommendations for improving environmental data management. Mustang’s Automation and Control business unit leveraged its environmental consulting expertise in process operations and the data management of air emissions from fired sources.

The study covered programmable logic controller (PLC) function evaluation, data management and custom applications. The project aims to increase CEMS reliability across the refinery and to make recommendations on network design and data flow from field instrumentation to corporate business applications. The Cherry Point refinery has a 225,000 bopd capacity. More from mustangeng.com.
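CEMS ‘reliability’ is commonly tracked as monitor data availability, the percentage of operating hours with valid emissions data, which agencies require to stay above a set figure. A minimal sketch of that bookkeeping in Python, with hypothetical data and a hypothetical threshold (permit requirements vary):

# CEMS data-availability sketch (illustrative; thresholds vary by permit).
AVAILABILITY_TARGET = 95.0  # hypothetical required % of valid hours

def availability(hourly_values):
    """hourly_values: list of NOx readings, None where the monitor was down."""
    valid = [v for v in hourly_values if v is not None]
    pct = 100.0 * len(valid) / len(hourly_values)
    return pct, pct >= AVAILABILITY_TARGET

# 24 hours of data with two missing hours (monitor fault).
day = [42.0] * 10 + [None, None] + [41.5] * 12
pct, ok = availability(day)
print(f"availability {pct:.1f}% - {'OK' if ok else 'BELOW TARGET'}")
# -> availability 91.7% - BELOW TARGET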


Statoil migrates to Windows 7/Server 2008 R2

Following a successful pilot, Statoil is to deploy Windows 7 on 40,000 desktops.

Statoil, currently on Windows XP, has trialed 64-bit Windows 7 on desktops under Microsoft’s early adopter program. The company also implemented a pre-release version of Windows Server 2008 R2. The trial demonstrated the usefulness of Windows 7’s BranchCache and DirectAccess functions, which improve replication to remote sites and communication with a mobile workforce, especially in areas with poor connectivity. Statoil’s IT advisor Petter Wersland said, ‘We wanted all of our employees, regardless of location, to be able to seamlessly use their collaboration tools. Whether they were sending or receiving e-mail or accessing documents on the SharePoint site, we wanted them to be able to use all of these services without a VPN*.’

With the growing number of portable computers, Statoil was concerned about data security and is leveraging Windows 7’s enhanced access control and ‘BitLocker’ encryption to protect data on USB drives. This alone will save some $330,000 per year on the third-party encryption software currently in use. Statoil is also to extend its Microsoft FAST ESP-based enterprise search with Windows 7’s native search capability and improved offline file cache.

Company-wide Windows 7 deployment is scheduled for 2010 and will leverage Microsoft System Center Configuration Manager 2007 to automate rollout to some 12,000 existing desktops. A further 28,000 new machines will ship with Windows 7 preloaded. More from microsoft.com/oilandgas.

* Virtual private network.

