April 2012


Total value of ownership

The theme of the Netherlands’ WIB mini seminar was the added value that timely replacement of control systems brings. We report on The Woodhouse Partnership’s (TWP) use of APT’s Lifespan to optimize the process.

Speakers at the WIB-NL* mini seminar in The Hague last month emphasized that control system replacement should move from ‘cost’ to ‘value.’ John Woodhouse (CEO of asset management specialist The Woodhouse Partnership) outlined the methodology developed by his company to optimize the timing of digital control systems’ replacement. TWP advocates using the British Standards Institution’s PAS 55:2008 guidelines, which it helped develop and which provide a publicly available specification for the ‘optimized management of physical assets.’

Plant operators are frequently influenced by KPI pressures, lack of data, departmental conflicts and boundaries, and ‘uncertain’ equipment life. A lifecycle view of an asset is hampered by hard-to-evaluate risk, tax and accounting issues and, in general, short-term reasoning. Such issues were the focus of the EU ‘Macro’ project, a five year, $2.5 million R&D programme exploring best practices in risk-based industrial decision-making. Macro members, who included Shell and PDVSA, deployed software from Asset Performance Tools (APT).

The main finding was that the ‘true’ optimum time to replacement may differ from a simplistic balance of planned replacement cost and failure risk. The big problem in risk assessment is poor data and assumptions. There can easily be a fourfold variation in risk estimates and ‘you need to be careful who and what you ask, and you need to tell people what the data will be used for.’

A proper risk-optimized strategy allows assets’ life to be extended beyond a rule-based useful life determination, while staying short of ‘high risk reactive decisions.’ The process includes a mathematical evaluation of immediate and future cash flows into an ‘equivalent annual cost.’
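
To make the ‘equivalent annual cost’ idea concrete, here is a minimal Python sketch (our own illustration with invented figures, not Macro project or Lifespan output) that discounts a strategy’s cash flows to a net present value and spreads that value evenly over the strategy’s horizon, so that options with different timings can be compared on a like-for-like annual basis.

    # Minimal sketch: 'equivalent annual cost' of a replacement strategy.
    # Cash flows and discount rate are illustrative assumptions only.

    def npv(cashflows, rate):
        """Discount a list of annual cash flows (year 0 first) to present value."""
        return sum(cf / (1.0 + rate) ** year for year, cf in enumerate(cashflows))

    def equivalent_annual_cost(cashflows, rate):
        """Spread the NPV of the cash flows evenly over the strategy horizon."""
        n = len(cashflows)
        annuity_factor = (1.0 - (1.0 + rate) ** -n) / rate
        return npv(cashflows, rate) / annuity_factor

    # Replace the DCS now and pay modest support costs for five years...
    replace_now = [500_000, 20_000, 20_000, 20_000, 20_000, 20_000]
    # ...or run the old system on, accepting rising maintenance and risk costs.
    run_on = [0, 60_000, 80_000, 110_000, 150_000, 500_000]

    for name, flows in [('replace now', replace_now), ('run on', run_on)]:
        print(name, round(equivalent_annual_cost(flows, rate=0.08)))

Whichever strategy shows the lower equivalent annual cost wins; the real optimization replaces these point estimates with risk distributions.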

APT’s Lifespan is the tool of choice, allowing for the combination, at a very granular level, of the present value and time to replacement of DCS components. The approach is said to avoid errors due to interacting failure modes, and deprecates models (FMEA, Weibull) that treat risks separately. Key to the process is the ‘structured use of tacit knowledge.’ Range estimating is better than ‘lots of dubious quality hard data.’ Business impact can be ten times that of simple efficiency improvements. Flagship use of the approach was an analysis of Saudi Basic Industries Corporation’s (SABIC) DCS.

The approach is now being taken further with another EU-backed project, ‘Strategic assets life cycle optimization,’ which extends the Macro results to an asset portfolio. Participants in the three-year project include Cambridge University, Sasol, and Centrica. More from the WIB-NL on page 9 of this issue.

* The international instrument users association.


Shell’s Goal Zero

Tibco Spotfire powers Shell’s ‘Goal Zero’ health, safety, security and environment (HSSE) effort. Shift from lagging to leading indicator monitoring.

Speaking at the 2012 Spotfire Energy Forum, Holly Soepono described Shell Upstream Americas’ HSSE goal as ‘zero harm to people and the environment.’ Not so easy when you consider that alongside its operations on offshore platforms and critical plants, Shell employees drive nearly 1.5 billion kilometers per year. In a drive to move from the analysis of lagging to leading indicators, Shell has abandoned its spreadsheet-based safety reporting system. A Spotfire Server has been linked to Shell’s Fountain incident management safety database. A Spotfire Web Player provides scheduled updates and an interactive analysis of safety trends.

The result is faster access to important safety data and a move from monthly reviews to continuous analysis. Safety data is now part of an ongoing dialog with staff. The system has been in widespread use within Shell since 2010. It is now easy to see the impact that the safety program has had, helping to justify ongoing investment and changing the way Shell leaders view safety data. This is an ‘active tool to manage safety process, not just another report to review.’ Soepono reports that Spotfire’s ease of use and the Web Player were key to take-up. More from mds@tibco.com.


More musings on Microsoft’s MURA

Last month, Oil IT Journal noted the conspicuous absence of the Microsoft Upstream Reference Architecture from the 2012 Global Energy Forum. But we failed to spot a new MURA white paper that appeared shortly after the GEF. Editor Neil McNaughton checks out the new material and wonders if the ‘Global Energy Forum’ will ever evolve into the ‘MURA User Group.’

In Microsoft’s latest whitepaper, ‘Microsoft Upstream Reference Architecture, looking back on 2011 and forward to 2012,’ we are offered a refresher of what a reference architecture is as follows... ‘a reference architecture is an architecture that identifies elements generically in the context of a generic system […] used to develop specific architectures that conform to the reference architecture by constraining the reference architecture to the unique characteristics of specific systems, which are specializations of the original generic system.’

For those who are still scratching their heads, ‘Looking Back’ uses the analogy of a railroad system (a rather anachronistic choice for IT?), explaining at length how ‘interconnection between systems with different rail gauges used variable gauge rail cars, replacing rail car wheels and axles, using adapter flatcars, or simply transferred the passengers or freight.’

We have here a great example of a common trait in technical literature. You are reading along happily—say about black holes, quantum mechanics or indeed a ‘reference architecture.’ Things are getting interesting. Your curiosity is reaching a state of heightened arousal. You are about to understand something that was previously obscure. Maybe you will even be able to hold forth at the next dinner party on the curvature of space, particle duality or what have you.

But just as you get to the interesting bit, when all will be revealed, the author changes tack with, da da!, the completely useless analogy. Instead of telling you how the universe was created, the author begins a laborious explanation of something blindingly obvious.

Several explanations for this come to mind. We may have a savvy author who invokes an accurate analogy that the dumb reader fails to grasp. As this possibility is not very flattering, I will dismiss it out of hand. (Gee it’s good to be an editor!)

We can also envisage the case where the author has a good grasp of the subject at hand, but whose capacity to translate it into a suitable analogy is poor.

Alternatively, it could be that the analogy is as hard to grasp as the original subject. This is a particularly pernicious turn of events for the reader as he or she now has two hard-to-grasp concepts to puzzle over instead of one.

Other possibilities are a) that the author does not really understand the subject matter and explains instead an analogy that he or she is more comfortable with or b) that the subject itself does not really make any sense. Hence the need to explain an analogy that does.

I submit that MURA falls into the last of these categories—that it does not make any sense beyond the statement that involvement in MURA equates to the use of any Microsoft product—especially SharePoint. In fact this is made a lot clearer on the Microsoft oil and gas website where the products in the MURA ‘solution’ are enumerated as follows ... Microsoft SQL Server, SharePoint, Project, Office, Silverlight, Visual Studio, Windows 7, ASP.NET, BizTalk and .NET. That is pretty well the whole shebang and most IT departments likely spend a good part of their waking hours trying to get these components to interact already, without even thinking of oil and gas specifics.

But there is another side to this. How can it be that Microsoft thinks it can get away with publishing such a lot of drivel masquerading as ‘technical’ information?

IT has done a great job over the years of carving out a space for itself apart from the real world of either science or business. It wasn’t meant to be like this. Early computer languages like Fortran and Cobol were (and actually still are) close to their domains and use terminology that their user/developers understood. There was no need to ‘explain’ a program with an analogy—it was all in the code.

Some 30 years or so ago, there was a feeling that this trend would continue with the arrival of ‘fourth generation languages,’ which would enable programs to be written in something even closer to natural language. But what actually happened was the opposite. IT got the abstraction religion and saw no need for code that was tied in any way to a silly old ‘domain.’ This has created a big divide between users and coders and a vast amount of pretty impenetrable computer literature.

I am always surprised to hear folks in the ‘digital oilfield/intelligent energy’ space talk of the ‘IT department’ as something different from their own activity. You want to hook up some real time production data with pricing information from the ERP system? The digital oilfield folks do the design and specs, but the donkey work, which involves the black arts of database wizardry, object-relational mapping and reformatting—all that is up to ‘IT.’

The irony in all this is that probably the closest thing we have got today to a fourth generation language is the facility that a generic tool—like Microsoft’s SharePoint—offers to the end user. This exposes a whole host of information in the form of drop down lists and ‘web parts’ that users can customize into an application without writing code. OK—you may have to roll up your sleeves and delve into some Visual Basic to do the extra smarts. And OK again—the end result may or may not actually work ‘at scale’ as folks like to say.

What to make of all this? On the one hand, listing your software line up does not make a framework. This would involve quite a lot of plumbing that MURA has as yet failed to publish. In our 2009 interview with Microsoft’s Paul Nguyen and Ali Ferling, one stated aim was ‘freely available documentation and working code samples’ as in the Microsoft Manufacturing Toolkit.

I guess that the proof of the pudding will be whether the GEF eventually becomes the MURA User Group or, and this is my bet, the SharePoint Oil and Gas User Group. Time will tell.


Interview—Steve Roberts and John Foot, BP

Oil IT Journal hears from BP Field of the Future evangelists Steve Roberts and John Foot on ISIS, Data2Desk, patented well test technology and high-end infrastructure for remote operations.

Does BP now build or buy technology?

Steve Roberts—The Field of the Future (FotF) program generally includes about 80% of bought-in technology and 20% of BP’s own R&D. Our ISIS* program is the foundation for activities such as production optimization and, at Tangguh, remote well monitoring from 3,000 km away.

John Foot—ISIS is built on a third party platform for well test analysis. We have automated workflows to compare multiphase flow meter and well test data—the technology is now patented by BP.

Why patent your technology?

SR—So we have the right to use it! It need not necessarily be exclusive to BP—we may hand over to a third party...

The FotF sounds great for a new development—but what about older sites?

SR—All this is great for a greenfield. But we are also working on brownfield retrofits e.g. Valhall and Skarv. Here we are using the collaboration environment to move people ashore—linked with high bandwidth fiber.

Isn’t there a risk that the link is compromised?

SR—All the control systems remain offshore. But generally the fiber has proved rather reliable—there is a 1,300 km loop in the Gulf of Mexico. The digital oilfield is deployed on around 80% of our most significant wells, representing nearly a million barrels a day of oil equivalent.

How can you distinguish barrels added by ‘digital’ from a base case?

JF—A decade ago, only two fields had downhole sensors; they were novel, untested devices. Today our ‘Data2Desk’ infrastructure provides readings every fifteen seconds. This is revealing stuff that we have never seen before. In one field we added an acoustic sand detector to pinpoint sanding issues—and found that the crew had turned all the alarms off and were not aware of the problem. Since then we have not had sanding issues and have far less lost production.

SR—Likewise for slug control, digital algorithms allow faster control than can be performed by a human operator. Digital is the key to production optimization through pattern recognition in large data sets.

JF—While ISIS covers wells and ‘smart wells,’ Data2Desk is about facilities, performing, for instance, slow loop, model-based optimization over a facility or for gas lift optimization.

Is BP still a big Microsoft fan?

SR—BP’s IT is aligned with Microsoft .NET. The FotF collaborates with the IT department which provides infrastructure, communications et cetera. All our applications run on our internal compute environment with browser-based access to applications. We also help with prototypes, at-scale deployment and management of change.

But how is it that we hear of all this great stuff at Intelligent Energy while elsewhere, folks are struggling with data issues?

JF—Maybe it’s because we deal with real time data—where ‘management’ is not such an issue.

What is BP’s take on standards?

SR—We deploy many—OPC (UA and DA), PRODML, WITSML, RESQML and ISO 15926. I myself am on the Energistics board.

What does the digital oilfield have to offer in the face of catastrophes such as Macondo?

SR—Digital technology is widely used to reduce risk and monitor safety valve performance.

Why doesn’t the SPE talk about safety issues? Surely digital has a lot to say about alarm filtering etc?

SR—We do know that folks don’t want yet another screen of data in front of them. We need to synthesize what is displayed, as is done in an airplane cockpit. We are working on these issues but the safety issues are mostly dealt with in other forums than the SPE.

To what extent do you buy into the idea of the next generation engineer? What about all this ‘Facebook’ stuff?

SR—It is not so much about Facebook, more about a data rich environment and reliable software. In fact part of our role as ‘older’ evangelists is trying to share our digital know how in a world that is somewhat at threat from the Facebook/Wikipedia generation!

* Integrated Subsurface Information Systems.


More from the 2012 SMi E&P data conference

An ‘overflow’ from last month’s conference—Troika on tape and CDA on certification.

Jill Lewis (Troika) is on a ‘one woman mission’ to improve seismic data management so that we don’t run into the same problems time and time again. She notes that many data managers have a limited knowledge of older tape formats, and a lot of data is still on very old tapes. This is not necessarily a bad thing: you don’t get stiction on pre-1980 9-track tapes, which were high quality. Lewis recommends a 10 year life cycle for any technology. The future is the IBM TS1140, a 4TB tape capable of holding around 1 million 2D lines. 10TB tapes will be here real soon now and robots already carry 2.5 exabytes of data. Lewis wonders why more QC isn’t done in the field. More from Troika.
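
A quick back-of-the-envelope check on those capacity figures (our own arithmetic, in Python, not Troika’s):

    # Rough arithmetic on the tape figures quoted above (our own estimates).
    tape_tb = 4                    # IBM TS1140 cartridge capacity, TB
    lines_per_tape = 1_000_000     # 2D lines per cartridge, as quoted
    robot_eb = 2.5                 # robot library capacity, exabytes

    mb_per_line = tape_tb * 1_000_000 / lines_per_tape    # ~4 MB per 2D line
    cartridges = robot_eb * 1_000_000 / tape_tb           # ~625,000 cartridges
    print(round(mb_per_line), 'MB per 2D line;', int(cartridges), 'cartridges per 2.5 EB robot')

So the quoted figures imply a few megabytes per stacked 2D line and robot libraries of several hundred thousand cartridges.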

CDA’s Malcolm Fleming observed that while oil companies have a hard time recognising the value of data, this is not true for spec seismic companies: PGS has $344 million worth of data on its balance sheet. Fleming thinks we need to ‘professionalize’ data management and CDA, along with ECIM, OGP and PPDM, is setting out to do just that. These organizations are developing a competency map as the basis of a data management training program spanning well, seismic, drilling, production, geospatial and reservoir data. The draft competency framework for well data management is the first deliverable. The idea is to be application neutral, i.e. no ‘load to Petrel,’ and to develop a ‘Certified E&P Data Manager’ curriculum. BG, BP, Schlumberger, Shell and Total are on board. More from CDA.


BP’s high performance computer hits the petaflop

300 GB dataset free to academia. Houston HPC cluster now among fastest in commercial world.

A few fascinating facts from the latest issue of the BP Magazine. First, for those in academia who complain that they never have any real data to work with, BP’s upstream innovation board has released some 300 gigabytes of high-resolution geophysical data covering the Atlantis, Holstein, Mad Dog and Thunder Horse developments in the Gulf of Mexico. The idea is to ‘encourage the development of geotechnical, geological and engineering concepts in the deep water.’ Elsewhere, BP’s head of technology, David Eyton, revealed that the company’s advanced seismic computing centre in Houston now has a capacity of a little more than one petaflop, ‘making it one of the world’s fastest civil supercomputers.’

Another program involves the installation of hundreds of corrosion sensors, co-developed with Imperial College London, at all of its refineries worldwide. The sensors help refinery teams understand the impact of acidic crude oils in real-time. Eyton observes, ‘Investment in technology needs balance, the danger of doing everything in house is that you become insular and miss an important development. But if you do too much outside, then you might not be able to generate as much value from the intellectual property generated.’


ULTRA Consortium kicks off for shale gas flow prediction

Spectraseis’ UltraSense three-component array passes field test. CSM and CMG join consortium.

Business is good at Zurich, Switzerland-headquartered Spectraseis, with ‘high customer demand’ for its fracture monitoring solutions in North American unconventional plays. A new ‘UltraSense’ three-component recording array has been successfully used to record fracture data at 2,000 meters below the surface.

A research partnership hosted by the Universities of Calgary and Alberta, the ‘Microseismicity Industry Consortium,’ is currently investigating low frequency, long-duration events. Spectraseis’ software accelerates full elastic wave-equation imaging and leverages Nvidia Fermi GPUs which provide ‘over an order of magnitude’ speed-up.

Spectraseis has also announced its ‘unconventional leading technology reservoir analysis’ (Ultra) joint industry project, a two-year program to develop tools and software for fluid flow predictions from fracture completions along with new standards for microseismic data. Initial partners are Chevron, the Colorado School of Mines and Calgary-based Computer Modelling Group.

Spectraseis reported a strong backlog of monitoring projects and a high utilization of equipment projected through Q4 2012, including a recently awarded $4 million surface survey for a major North American customer, the third contract with this client. Spectraseis has now engaged Simmons & Co. to advise on ‘strategic options to position the company for continued strong growth.’ More from Spectraseis.


Roxar rolls out RMS 2012

Added seismic inversion, conditioned ‘facies probability cubes’ and planning for ‘factory’ drilling.

Emerson Process Management has launched Roxar RMS 2012, the latest release of its reservoir modeling solution. RMS 2012 continues to extend modeling into the geophysical domain with a new modeling workflow including seismic interpretation, reservoir simulation, reservoir behavior predictions and uncertainty management. Roxar Software Solutions MD Kjetil Fagervik said, ‘Accurate, predictive reservoir models that realistically represent the underlying seismic data and that offer a seamless route from seismic to simulation are central to current efforts to improve oil and gas recovery. These are the underlying goals behind Roxar RMS 2012.’

The new release adds seismic inversion to blend high frequency well log data with band limited seismics. The resulting elastic parameters are used to condition facies and petrophysical properties which are displayed as ‘facies probability cubes.’ A new visualization toolkit enables modelers to create attributes that define reservoir structure and guide the facies modeling process. Opacity control and color manipulation capabilities are used to interpret rock properties, structural features and hydrocarbon accumulations.

A new field planning module optimizes well and pad location planning across multiple targets. These can include user-defined constraints such as those required for SAGD and shale gas ‘factory’ drilling. RMS 2012 operates on 64-bit Linux, Windows XP, Vista (32 and 64-bit) and 64-bit Windows 7. More from Roxar.


TerraSpark bundles unconventional seismic workflows

New Insight Earth release adds shale resource toolkits for Bakken, Eagle Ford and other plays.

The new 1.7 release of Boulder, CO-based TerraSpark Geosciences’ Insight Earth seismic interpretation package includes shale resource play toolkits for North American shale plays including the Bakken, Niobrara, Eagle Ford and others.

TerraSpark CEO Geoff Dorn explained, ‘The combined use of microseismics, well and seismic data in our toolkits helps interpreters identify areas of enhanced fractures and permeability and reduces drilling risk. Localizing our shale play solutions to these distinct regions enhances ease of use. No two plays are alike. By addressing the unique characteristics of each, we help customers achieve more with faster and better decision making.’ TerraSpark’s 3D seismic interpretation platform undergoes continuous refinement thanks to the industry-funded geoscience interpretation visualization consortium (GIVC) whose members include Chevron, BP, ConocoPhillips, BHP Billiton, Repsol and Stone Energy. More from TerraSpark.


Software, hardware short takes

Austin Geo, Pegasus, Fugro, LMKR, OpenSpirit, Geomodeling, Petris, Neuralog, INT, Invensys.

Austin GeoModeling’s 4.2 Recon release adds horizontal well interpretation and an enhanced 2D seismic viewer.

The 2.1 release of Pegasus VertexCTEMP virtual circulating temperature gauge estimates wellbore circulating temperature in HPHT wells.

Fugro’s Van Oord unit is developing a ‘walking’ jack-up drilling rig that operates either in conventional, 4-legged mode, or as an 8-legged mobile platform for site investigations.

LMKR’s GeoGraphix 2012 release adds integration between engineering and geosciences, 3D visualization of well, seismics and modeled surfaces and support for novel data types.

V4.0 of the Tibco/OpenSpirit runtime adds data connectors for Kingdom 8.7, EPOS 4.0, Petra 3.7.0 and OpenWorks R5000.3. Other enhancements include a process monitor, an installation configurator and .NET 4 support.

Geomodeling’s ReservoirStudio 5.0, available stand-alone or as a Petrel plug-in, uses built-in sedimentological rules to produce realistic geological models for complex depositional environments.

PetrisWINDS ZEH Plot Express now supports continuous log printing on Neuralog’s NeuraJet17 printer.

The 4.4 release of INTViewer supports deviated well log tracks in cross-section view, Matlab integration and grid data display.

The 2.1 release of Invensys Off-Sites’ tank farm and terminal operations management system enhances blend optimization and movement management and adds support for industrial handheld devices.


Oil and Gas High Performance Computing at Rice

Intel’s non-revolutionary path to exascale. Rice’s IWAVE finite difference framework ported to TI’s Shannon DSP and AMD Fusion. Co-array Fortran parallelizer. IT in the Hess Tower.

Attendees at the Oil and Gas High Performance Computing event held at Rice University last month heard from Intel’s Rajeeb Hazra, who summed up current thinking on the path to ‘exascale’ computing thus: ‘There are those that believe that virtually everything we use today, hardware and software, cannot evolve to work at the exascale. They would have it that a revolution is needed.’ There are many reasons for such a world view, with considerations in both the hardware and software fields. Except that, for Hazra, these are mostly myths!

Intel is getting HPC back on the path to exascale with the general purpose many core architecture a.k.a. Knights Corner and the Intel MIC. Along with the new hardware comes a scalable ‘ecosystem.’ ‘Today’s architectures and applications can evolve to exascale. A complete revolution is both unnecessary and unaffordable.’

The MIC was the subject of a presentation by Lars Koesterke of the Texas Advanced Computing Center, which should be running at 10 petaflops next year—‘80% down to the MIC.’

Perhaps not revolutionary (Texas Instruments has been making digital signal processors for decades), but certainly different, was Murtaza Ali’s presentation on the use of TI’s ‘Shannon’ multi-core DSPs for seismic imaging. The Shannon is based on the KeyStone multicore architecture integrated with eight C66x CorePac DSPs per chip. A 1 teraflop card from Advantech should be available real soon now. Ali is working with Rice University’s Jan Odegard to implement IWAVE, a framework for scalable finite difference seismic modeling. Early results are promising.

Another IWAVE port was presented by AMD’s Ted Barragy, who teamed with Rice’s Bill Symes to translate IWAVE into OpenCL and run it on AMD’s ‘Fusion’ processor. According to AMD, the GPU is passé; what’s hot is combo CPU/GPU technology that is taking off ‘exponentially.’ Such a beast is AMD’s Fusion, combining the CPU and GPU into a single device. Will Fusion be good for seismic processing? Yes, according to Barragy, at least when the ‘full’ Fusion arrives. On the coding front, the OpenCL port went well thanks to IWAVE’s clean design and the OpenCL helper libraries. The existence of a strong tablet-to-desktop market should help commoditize the technology.

University of Houston researcher Deepak Eachempati, with backing from Total, presented an evaluation of Co-array Fortran (CAF). CAF is intended to simplify porting of the substantial Fortran code base to modern parallel compilers. The idea is to effect the smallest change required to make Fortran an effective parallel language. Early results show CAF beating Intel MPI and OpenMPI. CAF is ‘very promising as a programming model for oil and gas HPC applications,’ and is now available in the downloadable OpenUH compiler.

Jeff Davis gave a more down to earth presentation of technical computing in the new Hess tower, which has gained ‘leadership in energy and environmental design’ (LEED) certification for its general greenness. The Tower contains 944 miles of fiber optic cabling and ‘even the window shades have IP addresses.’ Hess’ HPC cluster includes 2,256 Nvidia GPUs, 5,160 CPU cores and a NetApp client server system for interpretation. Geoscience workstations come with 192GB RAM, 12 cores, 10 Gig networking, SSD disks and more. The Tower also houses 50 high-end visualization rooms linked with a connection broker supporting both Microsoft and RedHat operating systems; the broker was developed with Mechdyne using HP’s Remote Graphics. More from O&G HPC at Rice.


SPE Intelligent Energy, Utrecht

Aramco moots open source ‘training sandbox.’ Schlumberger warns of unpreparedness for new technology. BP—’oil and gas is technology laggard!’ Shell calls for a process automation system for the upstream. Total’s ‘smart meter’ project with Qatar Petroleum. AI in managed pressure drilling.

After the bizarre warm-up session (see last month’s editorial) the SPE/Reed Expo Intelligent Energy event recovered its composure with a plenary session on ‘preparing to meet the grand challenge.’ Nabeel Al-Afaleg sees the I-Field as an answer to the graying workforce. Saudi Aramco is bridging the experience gap with bespoke education delivered from its upstream development center, where immersive technology is used to run virtual build-up tests. Elsewhere Aramco is automating and integrating business processes, leveraging AI, expert systems, real-time data and remote control. ‘Nano robots,’ gigacell simulation and instrumented wells also got a mention.

Satish Pai observed that the industry is getting younger and smarter with intelligent technology, centralized operations, better knowledge management and remote operations. Schlumberger isn’t scared of the big crew change which will largely be mitigated by technology. The regulatory/environmental scene is a much bigger headache. Pai noted that today, both operators and service companies have their own remote operating centers. Coordinating workflows across them is a challenge. The digital oilfield has been a success but has not yet brought fundamental change—we still have a way to go and ‘most on the rig are not ready for the technology that is coming.’

Gerald Schotman enumerated Shell’s intelligent technology—with its million channel seismic system under development with HP, another low frequency fiber optic based system (with PGS) and ‘flying nodes,’ hundreds or thousands of sensors in an ‘adaptive mobile grid.’ Shell’s GeoSigns (a single Shell platform for processing and interpretation), the Bridge (exception-based surveillance) all got a plug as did ‘autonomous drilling’ which ‘provides consistency and reduces dangerous human intervention.’

Ellen Williams (BP) observed that while the digital revolution has transformed society, ‘oil and gas has lagged behind.’ This is notwithstanding BP’s ten-year-old Field of the Future program, which has brought ‘demonstrable benefits on a business scale.’ Current point solutions must now be grouped at the systems level, and choices must be made as to the degree of automation: either actuation with a ‘man in the loop,’ or systems that override the operator. We are faced with ‘a daunting at-scale infrastructure transformation.’

Kjell Pedersen (Petoro) thinks we may have turned the corner in integrated operations—at least for green fields. Mature fields will take more time, drilling and prepping wells takes longer and uncertainty is perceived as high. IT advances are used to manage risk and justify investment.

In the debate, Al-Afaleg observed that you can’t let inexperienced folk loose on a reservoir. We need a sandbox for training, perhaps ‘an open source platform to share across industry.’ On standards, Schotman observed that industry could move faster by leveraging packages across suppliers. A questioner raised the issue of safety and the digital oilfield. Williams believes that digital will make safety part of the culture. Digital lets more people have better access to information. Pedersen thinks intelligent operations is really about culture and safety and that automation will move people away from dangerous offshore platforms. But how do you persuade management to pay for sensors and databases when their focus is on production? As a recent digital convert, Pedersen observed that knowledge is the key and digital—data, communications—means that we can ‘turn around quickly and tell people what to do.’ Cost is an issue, and we can’t continue with current huge day rates for sophisticated equipment.

Keith Killian described ExxonMobil’s upstream digital infrastructure—largely inspired by the company’s downstream business—which has been using digital technology for over a decade to ‘squeeze out every bit of margin.’ One early example of downstream to upstream technology transfer is Esso Australia’s Longford plant where, with help from the downstream, several multivariable constraint controllers have improved recovery and optimized plant capacity. But their use in the upstream remains relatively rare. Exxon’s digital technology in asset management (DTAM) is currently addressing surveillance by exception and predictive monitoring of rotating equipment health.

Shell’s Ron Cramer offered a slightly different take on the upstream/downstream divide. Cramer observed that process automation systems (PAS) have evolved over time with little attention to oil and gas needs. PAS solutions, designed for refineries and chemical plants, are ‘fragmented’ and a ‘force fit’ to the upstream, requiring lots of system integration. This is in part because the upstream ‘never really told vendors what its requirements were!’ Which is just what Cramer is attempting to do now. A PAS for the upstream needs to handle ever increasing data volumes—Shell is to drill 20,000 wells by the end of the decade. We need to ensure data is ‘owned,’ otherwise instrumentation is not maintained. Data needs to be grouped into objects such as a ‘well,’ along with context and intelligence. The upstream also needs WAN/LAN integration—unlike a refinery, which is all on a LAN. Autonomous systems such as beam pumps need to be integrated with intelligence at the central facility. Cramer came up with more specs than you could shake a stick at.

BP’s Steve Roberts described some digital oilfield challenges. This is a broad, complex and cross functional activity that impacts many technologies and workflows; prioritizing activities and making solutions sustainable can be hard. BP is on a path to add 100 million barrels of oil equivalent through digital technology by 2017 and has already delivered a (remarkably precise) 73 million barrels, thanks to its proprietary technology. BP is building ‘deep capability’ to speed decision making, provide early warning of risks and suggest mitigation and optimization strategies. BP is also planning to make the client ‘irrelevant’ and provide engineers with access to information everywhere. Roberts elaborated on the use of mobile devices in the Q&A, observing that most control room operators don’t want another screen; what is needed is more information in a smaller footprint. BP is working on security issues to facilitate access to information from devices such as the iPad, while ‘future proofing’ the technology.

One of BP’s digital flagships is its work with dynamic modeling to control slugging. The P22 well on BP’s West of Shetlands Foinaven field could not initially be flowed because it upset the whole gathering network. Fortunately, BP has seen a ‘revolution’ in dynamic modeling, which is now coming into its own thanks to data visualization and analysis. Patrick Calvert described how BP now color codes data to distinguish between stable operations and slugging and so pinpoint the safe operating range. Modeling can be used to select an optimum rate as an ‘advisory’ for operators. But true slug control takes this further to automate actuation of production control valves. One 23 km tieback experienced 3,000 bbl/d of deferred production due to high back pressure; this ‘very complex system’ was modeled in SPT’s Olga and Matlab. Dynamic modeling initiatives led to 17.5 million barrels of annualized incremental production in 2011.
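
The color-coding idea amounts to classifying operating points by the variability of a measured signal. The Python sketch below is our own toy illustration of that principle, with synthetic data; it is not BP’s or SPT’s method.

    # Toy slugging classifier: label operating points 'stable' or 'slugging'
    # from the variability of separator pressure over a short rolling window,
    # then find the widest choke opening that still sits in the stable region.
    # All data is synthetic; the thresholds are arbitrary.
    import numpy as np

    rng = np.random.default_rng(1)
    choke = np.linspace(10, 100, 400)                         # % opening
    pressure = 30 - 0.1 * choke + rng.normal(0, 0.2, 400)     # barg, quiet baseline
    pressure[choke > 70] += 3.0 * np.sin(np.arange((choke > 70).sum()))  # slug-like cycling

    window = 20
    rolling_std = np.array([pressure[max(0, i - window):i + 1].std() for i in range(400)])
    label = np.where(rolling_std > 1.0, 'slugging', 'stable')

    print('max stable choke opening ~', round(float(choke[label == 'stable'].max()), 1), '%')

In the real application the ‘advisory’ comes from a transient multiphase model rather than a variance threshold, but the visualization principle, color by stability and read off the safe envelope, is the same.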

Total’s Mohamed Haouche described a ‘smart meter’ pilot on a joint venture with Qatar Petroleum. Modeling technology from Belsim was used to perform online data validation and reconciliation (DVR), a.k.a. an ‘advanced virtual flow meter.’ DVR, a technology transferred from the downstream, is used for production optimization, energy management, condition-based metering and production allocation. The idea is to measure a range of accessible parameters and compare them with models of the field to infer an unknown quantity such as flow in an unmetered well. In this study, the virtual meter provided better consistency than a multiphase flow meter. This is likely due to the fact that MPFMs are also partly software based and present the same kind of problems.
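
For readers unfamiliar with DVR, the sketch below (our own Python illustration, not Belsim’s software) shows the core idea: adjust redundant measurements as little as possible, weighted by how much each is trusted, so that they satisfy a balance constraint; a poorly metered or unmetered quantity then falls out of the reconciliation.

    # Minimal data validation and reconciliation (DVR) sketch; illustration only.
    # Three well rates and an export rate are measured but do not balance.
    # Reconcile by weighted least squares subject to q1 + q2 + q3 - q_export = 0.
    import numpy as np

    m     = np.array([1200.0, 850.0, 400.0, 2550.0])   # measured rates, bbl/d (invented)
    sigma = np.array([  60.0,  40.0, 200.0,   25.0])   # 1-sigma uncertainties; well 3's
                                                        # meter is poor and barely trusted
    a = np.array([1.0, 1.0, 1.0, -1.0])                 # balance: wells sum to export

    # Constrained weighted least squares has a closed form via a Lagrange multiplier.
    V = np.diag(sigma ** 2)
    x = m - V @ a * (a @ m) / (a @ V @ a)

    print('reconciled rates:', np.round(x, 1))   # well 3 is effectively 'virtually metered'
    print('balance residual:', round(float(a @ x), 6))

Most of the correction lands on the least trusted measurement, which is exactly how a virtual flow meter infers the rate of a badly metered or unmetered well.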

Giulio Gola of Norway’s Institute for Energy Technology has applied artificial intelligence to managed pressure drilling, which relies on a model-based approach to bottom hole pressure (BHP) estimation. Models can be physical (first principles) or empirical (data driven); the latter can capture unknown processes at work. Gola used a combination of both to analyze a 450,000-sample dataset from four days of North Sea drilling. Four models (a flow model from Sintef, an ensemble Kalman filter, a virtual sensor and a support vector machine) were combined, and the median of their predictions was a good approximation to measured BHP. AI appears to work and improves predictions.
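
The combination step described is essentially a per-sample median over the individual estimators. Here is a minimal Python sketch of that logic (our own illustration with random stand-in ‘models,’ not Gola’s code or data).

    # Combine several bottom hole pressure (BHP) estimators by taking their
    # median at each sample. The four 'models' here are random stand-ins.
    import numpy as np

    rng = np.random.default_rng(0)
    true_bhp = 350.0 + 5.0 * np.sin(np.linspace(0, 20, 1000))   # bar, synthetic 'truth'

    # Four imperfect estimators, each with its own bias and noise level.
    models = np.vstack([
        true_bhp + rng.normal(bias, noise, true_bhp.size)
        for bias, noise in [(2.0, 1.0), (-3.0, 2.0), (0.5, 4.0), (8.0, 1.5)]
    ])

    ensemble = np.median(models, axis=0)                         # robust to one bad model
    worst = np.abs(models - true_bhp).mean(axis=1).max()
    print('mean abs error, worst single model:', round(float(worst), 2))
    print('mean abs error, ensemble median:   ',
          round(float(np.abs(ensemble - true_bhp).mean()), 2))

The median is robust to a single badly biased estimator, which is why it beats the worst individual model in this toy and, per Gola, tracked measured BHP well in the field data.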

Attending Intelligent Energy is a bit like drinking from the fire hose for us at Oil IT Journal—there will be more in next month’s issue when we’ll report on Exxon’s intelligent agents, Chevron’s McElroy i-field and more artificial intelligence applications.


Shell unveils Smart Apps, Smart Solutions at GEF

Siemens XHQ, OpenSpirit middleware and ‘EDaM’ enterprise data model spanning upstream and downstream make up Shell’s data foundation—as revealed at 2012 Microsoft Global Energy Forum.

Shell’s ‘Smart Apps’ program which spans upstream and downstream began with the realization that every application had to supply the complete stack of middleware and connectivity to all relevant data sources. The result was multiple connections, poor application integration and ‘siloed’ workflows. Much of Shell’s development effort started from scratch because there was no single underlying data model. Enter the single version of the truth with a master location and data owner for every piece of data in Shell—housed in a global data model spanning the upstream and downstream.

Shell’s Smart Solutions Platform (SSP) sits atop the data foundation adding common elements for visualization, reporting, alarm and event data services and more. The SSP V1.0 is up and running, providing real time analytics a.k.a. complex event processing. Data from production data sources and the process control systems are captured to Shell’s PI historians and fed on to a Microsoft SQL Server StreamInsight instance for processing. This can include real time analytics augmented by comparison with historical data, trending and the training of AI-type models.
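
As a rough illustration of the kind of exception-based logic such a pipeline supports, the Python sketch below compares a stream of historian readings against a rolling baseline and raises exceptions when live values drift outside it. It is our own toy, not Shell’s StreamInsight configuration; the tag behavior and thresholds are invented.

    # Toy complex event processing pass: flag readings that sit more than three
    # standard deviations away from a rolling baseline. Illustration only.
    from collections import deque
    from statistics import mean, stdev

    def exceptions(readings, window=60, threshold=3.0):
        """Yield (index, value) for readings outside baseline +/- threshold*sigma."""
        history = deque(maxlen=window)
        for i, value in enumerate(readings):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and abs(value - mu) > threshold * sigma:
                    yield i, value
            history.append(value)

    # Example: a discharge pressure reading that suddenly steps up.
    stream = [102.0 + 0.2 * (i % 5) for i in range(200)]
    stream[150:] = [v + 8.0 for v in stream[150:]]
    for i, v in exceptions(stream):
        print(f'exception at sample {i}: {v:.1f}')

A production system would add the historical-comparison and trending layers mentioned above, but the principle, continuous evaluation of incoming data against learned normal behavior, is the same.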

Output is captured to SharePoint and to visualization tools, including tree maps and heat maps from Siemens. The system provides equipment health and performance monitoring. End to end (user to data source) monitoring allows Shell to track performance from an end-user perspective.

Shell’s data services layer is a federation of data brokers that includes Siemens XHQ data services, which hook into Energy Components, SAP, LIMS, and various PI data stores. XHQ provides a data abstraction layer and exposes data as services through a common information model, allowing for the ‘rapid deployment of strategic apps.’

Moving up a level, we see that XHQ is actually one of several data brokers that make up the Shell enterprise data services layer. Landmark’s PowerHub and Tibco/OpenSpirit provide access to geoscience data and Shell’s own web services platform is also in the mix. All four brokers are combined thanks to a common data model and a web service mediation layer.

The Enterprise Data Model (EDaM) provides a unified way of addressing data types, irrespective of the format of the original data source. Is the EDaM slideware? Not according to Shell: the EDaM is ‘not a paper exercise.’ Already, Smart Solution and enterprise data warehouse solutions are being delivered in what is described as a ‘true enterprise effort.’

The system is being enhanced and V1.1 is scheduled for release in Q2 2012 with a single enterprise logical data model spanning equipment, facility, materials, events, alarms, field, reservoir and geopolitical entities. The EDaM was based on a logical data model deployed at the Californian Shell/Exxon AERA joint venture. Read the GEF presentations.


Folks, facts, orgs ...

Belsim, Chevron, CiDRA, Cortex, Glori Energy, Jee, Kadme, Kalido, KBR, Latitude, MVE, TD Williamson, OGP, Pervasive, TGS, PNEC, Panopticon, SAIC, SCA, Senergy, SGI, Wood Group, WSP Environment, Wipro, Pearson-Harper, Odin, Scandicorp.

Pierre-Boris Kalitventzeff has returned as CEO of Belsim, and Pierre Talmasse has joined the development team.

Charles Moorman has been nominated for election to Chevron’s board of directors.

Ryan Houston has been appointed asset integrity manager at CiDRA Oilsands. He hails from Acuren Group.

Cortex Business Solutions has appointed Bill Evelyn to its Advisory Committee. He was previously with Microsoft.

Oil recovery specialist Glori Energy has appointed Bob Button as president of Glori Holdings. He hails from BP.

Roozbeh Ganjvar has joined Jee as principal structural engineer. Subsea controls specialist Barrie Horsburgh has also joined the team.

IT consultant and former journalist Adriana Arcila is the new Kadme rep in Colombia.

Kalido has promoted Darren Peirce to the role of CTO, reporting directly to Bill Hewitt, President and CEO. Peirce was previously with Shell.

Richard Ambrose has joined KBR as President, North American Government & Logistics, reporting to Mark Williams, Group President for Infrastructure, Government and Power.

Latitude Solutions has appointed Bill Brennan of Summit Global Management to its Board of Directors.

Structural Geologist Irene Mannino and software engineer Alistair Baxter have joined Midland Valley.

Bruce Thames has been promoted to Sr. VP and COO of T.D. Williamson.

OGP secretariat member Lucyna Kryla-Straszewka is now manager of the Metocean and Geomatics Committees.

The OPC Foundation has named Jane Gerold as director of marketing.

Daryl Fullerton has taken up a new role as oil and gas principal at Pervasive Software. He retains his current roles with PIDX.

Marla Wunderlich has moved from Petris to TGS as Marketing Manager.

This year’s PNEC Cornerstone awardees are ExxonMobil’s Madelyn Bell and Jess Kozman of Westheimer Energy Consultants.

Måns Hultman, former CEO of business intelligence software provider QlikTech, and Charles Kane, board member at Progress Software have signed as advisors to Panopticon’s board.

SAIC has announced a regional cyber-security R&D center in Melbourne, Australia.

Subsurface Consultants & Associates has named Bob Shoup to its consulting division.

Senergy has appointed Alasdair Buchanan as COO and MD of Energy Services. He was with Halliburton. He takes over from current COO, Mike Bowyer.

SGI’s new president and CEO is Jorge Luis Titinger. He was previously CEO of Verigy.

Wood Group directors Les Thomas and Mark Papworth are to be replaced by Cinzia De Santis, and Mark Dobler. Bill Vicary is now director of business development at the company’s Mustang unit.

WSP Environment & Energy has appointed John Romano as director, focusing on the unconventional gas sector. He was previously with SCE Environmental Group.

Andrew Zolnai is now geospatial service lead upstream oil and gas with Wipro.

Co-founder of Pearson-Harper, Steve Pearson, is now executive chairman and Alex Hayward has moved up to MD. Martyn Pellew continues as non-executive director.

Following sales agreements between Kadme, Odin Offshore Solutions and Scandicorp, Odin’s Gunnar Ekmann will lead North American sales and Scandicorp’s Harald Riise heads-up the Nordic region.


Done deals

Schlumberger, National Oilwell, Recon Technology, Intertek, ATI, Absoft, Hexarus, Bureau Veritas, TH Hill, Acorn Energy, US Seismic Systems, Pansoft, Pelican, Fugro, BB Visual, SIGMA3.

Schlumberger has entered into an agreement with National Oilwell Varco to sell its Wilson distribution business. Schlumberger acquired Wilson along with Smith International in 2010.

Chinese non-state-owned oil and gas automation services provider Recon Technology has returned to compliance with all Nasdaq listing rules.

Intertek has acquired California-based ATI, a provider of asset integrity management software.

Absoft has completed its ‘seven-figure’ purchase of UK-based Hexarus Consulting, provider of performance management and business intelligence solutions based on SAP BusinessObjects.

Bureau Veritas is to acquire TH Hill, a provider of oil and gas drilling failure prevention and analysis services.

Acorn Energy has now ‘fully funded’ an additional $5 million investment in US Seismic Systems.

Chinese ERP software house Pansoft has acquired, at no cost, the remaining 20% stake in its Japanese unit from joint venture partners Management Information Center and Seven Colors Corp. following their ‘failure to meet the terms of the joint-venture agreement.’

SIGMA3 Integrated Reservoir Solutions has acquired APEX Petroleum Engineering, APEXHiPoint and HiPoint Reservoir Imaging.

Private equity fund Pelican Energy Partners has raised $103.6 million. Partners include Mike Scott, Philip Burguières and John Huff.

Fugro has acquired EMU, an independent marine survey and environmental consultancy specialist with annual revenues of over € 20 million.

BB Visual Group has acquired OilTeams SRL, a provider of WITSML-based tools and services for well monitoring.


DSM—not total cost, but total value of ownership

WIB-NL presentation weighs-up technology cycles and expected lifetime of plant equipment.

The Netherlands-based WIB, a.k.a. the international instrument users association, held a ‘mini-seminar’ in The Hague last month to debate how to get the best out of obsolescence management in plant instrumentation and equipment. WIB members include ExxonMobil, BP, Shell, Total, Wintershall, DSM, Saudi Aramco and many others.

Frank Pijnenburg explained how DSM evaluated obsolescence risk as the product of the likelihood of equipment failure and the impact of such an event.

DSM weighs up technology cycles and the expected life of plant with an evaluation matrix of business horizon (years) vs. the risk of a breakdown. A second matrix shows the effect of different strategies—‘reactive’ (hoping it will not fail), adaptive (can be updated/parts are available), and proactive (upgrade system). Each has its place in the plant strategy—the trick is to stay on the straight line with an appropriate level of action over time. DSM uses an in-house developed Excel tool to rank its plants’ controllers, consoles and network modules. The tool combines mean time between failure (MTBF) and effectiveness of repair into a ‘years of functionality’ scorecard.
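
A minimal sketch of that kind of scoring follows (our own Python illustration; DSM’s actual Excel tool, weightings and data are not public). It combines failure likelihood and business impact into a ranked obsolescence list.

    # Toy obsolescence ranking in the spirit of the approach described above.
    # Component names, MTBF figures and impact scores are invented.

    components = [
        # (name, MTBF in years, repair effectiveness 0-1, business impact 1-5)
        ('controller A',     12.0, 0.9, 5),
        ('console B',         8.0, 0.6, 3),
        ('network module C', 15.0, 0.3, 4),
    ]

    def years_of_functionality(mtbf, repair_effectiveness):
        # Crude proxy: an effectively repairable component outlives its raw MTBF.
        return mtbf * (1.0 + repair_effectiveness)

    def obsolescence_risk(mtbf, repair_effectiveness, impact, horizon=10.0):
        likelihood = min(1.0, horizon / years_of_functionality(mtbf, repair_effectiveness))
        return likelihood * impact        # risk = likelihood x impact

    ranked = sorted(components, key=lambda c: obsolescence_risk(*c[1:]), reverse=True)
    for name, mtbf, eff, impact in ranked:
        print(f'{name}: risk score {obsolescence_risk(mtbf, eff, impact):.2f}')

The highest scoring items are the first candidates for proactive upgrade; the rest can stay in the adaptive or reactive columns a while longer.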

DSM considers its plants are now ‘risk positioned,’ i.e. the overall migration effort is understood, allowing planning and costing, and mitigating risk by positioning spares appropriately. This has allowed DSM to translate its costs into ‘value creation opportunities’ and to think in terms of ‘total value of ownership.’ Evaluating how components contribute to the business baseline helps evaluate suppliers’ efficiency and differentiate offerings. Read the WIB presentations.


Chemical Safety Board to investigate Macondo blow-out

Federal body calls for more rigorous accident prevention. Computer modeling of BOP underway.

Two years on, the US Chemical Safety Board’s (CSB) investigation into the 2010 Macondo well blowout is progressing, despite delays due to legal issues surrounding ongoing court actions. The CSB, an independent federal agency, has so far found a need for companies and regulators to institute better major accident prevention offshore; by comparison, US onshore process safety requirements are more rigorous and apply to both operators and key contractors.

One issue under investigation is the regulation of ‘human factors.’ CSB investigator Cheryl MacKenzie observed ‘There are no human factors standards or regulations in US offshore drilling that focus on major accident prevention. As an example, we are investigating whether fatigue was a factor in this accident. Transocean’s rig workers, originally working 14-day shifts, had been required to go to 21-day shifts on board.’

The CSB investigation is also using computer modeling of the BOP to evaluate deficiencies such as lack of safety barrier reliability requirements, inadequate hazard analysis for evaluating BOP design, and insufficient management of change requirements for hazard control. Recommendations for reforms are due for release in August. CSB Western Regional Office manager Don Holmstrom heads-up the investigation. More from the Chemical Safety Board.


Honeywell’s upstream workflows

Microsoft Workflow Foundation extends Honeywell’s ‘Intuition’ to well test, performance monitoring.

Honeywell’s principal architect, Jay Funnell, has followed up on his introduction to the ‘Intuition’ semantic model (Oil ITJ February 2012) with another whitepaper on the Intuition ‘Executive,’ a workflow engine built on Microsoft’s Workflow Foundation (WF), the workflow component used by SharePoint. Funnell begins with a rather lengthy explanation of what a workflow is, in general terms, and how the technique has been built into the Executive. The default SharePoint engine triggers a workflow when an item is added to a list, when a list entry is changed, or when it is manually invoked. The Intuition Executive event notifier takes this a step further, linking workflows to OPC UA events; Honeywell’s stream processor can register a data pattern which is used to generate an event. Honeywell has developed canned workflows for common tasks such as well test and well model validation, production surveillance, pressure transient analysis, pump performance monitoring and operations and maintenance activity. Read the whitepaper.
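
The pattern Funnell describes, events generated from data conditions and routed to pre-built workflows, can be illustrated in a few lines of Python. This is our sketch of the general idea only, not Honeywell’s Intuition or Microsoft WF APIs; the tags, thresholds and workflow names are invented.

    # Generic event-to-workflow dispatch illustrating the pattern described above.
    # Not Honeywell code: patterns, tags and workflows are made up.

    # Registered 'data patterns': a predicate over a reading raises a named event.
    patterns = {
        'well_test_drift': lambda r: r['tag'] == 'MPFM_RATE'
                                     and abs(r['value'] - r['test_rate']) > 0.1 * r['test_rate'],
        'pump_vibration':  lambda r: r['tag'] == 'ESP_VIB' and r['value'] > 4.5,
    }

    # Canned workflows triggered by each event type.
    workflows = {
        'well_test_drift': lambda r: print(f"validate well test for {r['well']}"),
        'pump_vibration':  lambda r: print(f"schedule pump inspection on {r['well']}"),
    }

    def on_reading(reading):
        """Evaluate every registered pattern; run the workflow for any that fires."""
        for event, matches in patterns.items():
            if matches(reading):
                workflows[event](reading)

    # Two incoming readings, e.g. forwarded from an OPC UA subscription.
    on_reading({'tag': 'MPFM_RATE', 'well': 'W-12', 'value': 820.0, 'test_rate': 700.0})
    on_reading({'tag': 'ESP_VIB',   'well': 'W-07', 'value': 5.1})

In Honeywell’s description, the patterns live in the stream processor and the workflows are the canned well test validation, surveillance and maintenance activities listed above; linking the two is what the Executive’s event notifier adds on top of the stock SharePoint triggers.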


New Intel chip shines in tNavigator benchmark

Rock Flow Dynamics’ simulator sees performance double.

Rock Flow Dynamics (RFD) reports trials of Intel’s new Xeon E5-2600 series of processors with its tNavigator fluid flow simulator. The new platform replaces the previous Intel Xeon 5600 series. RFD CTO Kirill Bogachev was invited to present tNavigator performance results on the new processors at the Intel launch forum in Moscow on March 15. The tests showed a doubling of performance with the new processors over the previous platform, comparing the new 8-core Xeon E5-2600 chips with a 12-core, dual-5650 server. On test was a three phase black oil model with 2.4 million blocks and 10 years of history from 40 wells. The performance hike is attributed to fast cache, the NUMA architecture and tNavigator’s ‘parallel throughout’ implementation. More from tom.robinson@rfdyn.com.


Sales, contracts, partnerships and deployments

Aveva, Belsim, Hyperion, CartoPac, SynerGIS, Cemtrex, Sensor Technologies, EOR Alliance, Pearson-Harper, ControlPanel, Expro, Fluor, Inova, Intertek, Asset Answers, Optimization Petroleum Technologies, Deep Draft Energy, Oyo Geospace, Baker Hughes, RigNet, Siemens, TGS, Geo Webworks, Triple Point Technology.

Engineering consultant Pöyry has selected Aveva PDMS as its preferred 3D engineering design application. Centroprojekt do Brasil also selected Aveva as sole provider of 3D design software for complex engineering projects.

Belsim has announced a partnership with Hyperion Systems Engineering.

IFP Energies Nouvelles affiliate Beicip-Franlab is now the exclusive reseller of Transform Software in the EAME, Central and South America.

CartoPac’s WebOffice is available in the American market in partnership with SynerGIS.

Emissions monitoring systems provider Cemtrex has signed a representation agreement with Sensor Technologies.

The Chemical EOR alliance (Rhodia/Beicip-Franlab/IFP Energies Nouvelles) is to team with Champion Technologies in the USA and Canada on water and produced fluids treatment.

Chevron Australia has extended Pearson-Harper’s engineering information management contract on the Gorgon LNG development to over £8m.

Provider of GRC compliance automation solutions Control Panel has announced that PPG Europe and Nyrstar have implemented modules of the solution as part of their compliance automation efforts.

Expro has announced the success of a project measuring oil and gas rates for land production wells in Brazil’s Amazonas state in collaboration with Petrobras.

Fluor Corporation has won a contract with PETRONAS Gas Berhad, to provide FEED services for a new LNG regasification terminal in Malaysia. The value of the contract was undisclosed.

Inova Geophysical has announced the first sale and delivery of G3i, its newly released mega channel recording system, to BGP for use for a project in Western China.

Intertek has won more than £5 million of new contracts for oil and gas technical inspection, training and staffing in the last six months through its North Sea Moody businesses.

Dow Chemical has chosen Asset Answers for enterprise asset performance benchmarking.

Optimization Petroleum Technologies has named Deep Draft Energy as agent for its PE Office suite throughout Africa, with an ‘exclusive focus’ on Nigeria and Ghana.

OYO Geospace has received a $14.0 million order from TGC Industries to purchase 13,000 additional single-channel wireless data acquisition units and related equipment.

Recon Technology unit Beijing BHD Petroleum Technology Co has introduced the Baker Hughes Frac-Point System to China Petroleum and Chemical Corporation’s (Sinopec) Zhongyuan oilfield, to help complete fracturing of a dense sandstone horizontal well. BHD has signed several contracts with a total value of $4.75 million.

RigNet has deployed an iDirect Evolution hub to enhance VSAT services for the oil and gas sector in the Middle East.

Siemens Industry Automation division has signed a five year Enterprise Framework Agreement to deliver process gas chromatographs to Shell, its subsidiaries and joint ventures worldwide.

TGS has entered into an agreement with Geo Webworks of Calgary to provide access to TGS’ online library of digital well log data across Western Canada, with data available for 450,000 Canadian wells.

China National Offshore Oil Corporation has selected Triple Point Technology’s oil trading software to manage trading, risk management and logistics.


Standards stuff

Energistics—EIP V1.0. OGP—EPSG polygons and Life Saving Rules. ISACA—COBIT 5. OASIS—Energy Interoperation 1.0 and Interoperability Guidelines. PPDM—Global Well ID Framework.

Energistics expects V1.0 of the energy industry profile of the ISO/DIS 19115-1 geographic metadata standard will be released real soon now.

The International Association of Oil & Gas Producers (OGP) has announced the release of area polygons for the EPSG geodetic parameter dataset. OGP has also published its Life-Saving Rules, some 18 ‘reminders’ that can mean the difference between life and death for people involved in upstream oil and gas activities. The rules have been compiled by the OGP’s safety data subcommittee, and are meant to modify worker and supervisor behaviors by raising awareness of the activities that are most likely to result in fatalities. The report is a free download.

ISACA has released COBIT 5, a framework for IT governance and management. The new release ‘promotes continuity between enterprise IT and its business goals.’ The framework is a free download.

OASIS has approved its Energy Interoperation 1.0 as a committee specification. EI describes information and communication models for energy transactions. OASIS has also released Interoperability Guidelines for specification writers.

PPDM is calling for participants for its global well identification framework committee.


Pipeline Open Data Standard member survey

Frank analysis of PODS usage provides insights into deployment, model strengths and weaknesses.

The Pipeline Open Data Standard (PODS) association has just published its 2012 member survey, a remarkably frank analysis of the pros and cons of this member-supported initiative. 50% of operators deploy PODS on Oracle, 40% on SQL server and 5% on an Esri geodatabase. For service providers, the picture is similar except for the popularity of the geodatabase—supported by 30% of the sample. For GIS/visualization, Esri leads with 85% although 60% of operators report use of ‘other’ tools—MapInfo, Delorme X-Map and Google Earth inter alia.

The 70-page survey is replete with statistics and insightful comments. One service provider reported serious performance issues with inline inspection (ILI) data and is considering a ‘totally different model outside of PODS’ for ILI data. Lack of history and a ‘cumbersome’ Esri spatial model were also reported as issues by service providers.

ExxonMobil is interested in an extension of the PODS model to design and construction information; this work has already been initiated by Eagle Information Mapping. Other respondents reported more fundamental problems: querying PODS is ‘very expensive,’ data could be more modular and (again) spatialization is poor. Another service provider complained of the lack of guidance as to real world use of the model.

Notwithstanding the niggles, PODS is being used. Operators have tens of thousands of miles of pipeline data in PODS—often with millions of events and stations. The model is linked to other systems such as risk assessment (30%), SCADA and work order management (29%) and corrosion (23%). Many operators have developed sub models (e.g. for HCA reporting, environmental, facility data and an extended PI table) that they would like to be considered for inclusion in a future PODS release. Highest on the wish list for operators is better documentation of the model—considered ‘critical’ or ‘important’ by some 88%. Download the survey.


MURA—Looking back, looking forward...

Microsoft’s Upstream Reference Architecture fails to shake the earth.

Microsoft has just released a whitepaper, ‘MURA, Looking Back to 2011 and Forward to 2012.’ The 2011 recap covers two whitepapers, one by MURA participants Esri, OSIsoft and PointCross on ‘Declarative integration for composite solutions leveraging an upstream reference architecture.’

Declarative integration is said to be a ‘pragmatic approach’ for achieving workable interfaces between applications at run time. It is an alternative to point-to-point interfaces between applications. The solution is a ‘composite user interface’ approach to integration, contrasting with approaches that focus on data exchange.

‘Looking Back’ describes ‘in detail’ the process for ‘defining and creating a composite solution from diverse solution components.’ The technologies under the hood include SharePoint Web Parts, AJAX and Silverlight. The other proof of concept, on complex event processing (CEP), was penned by Logica and shows how CEP is put into practice in a remote operations center. Here the technologies used were SharePoint, StreamInsight and Bing Maps.
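
Neither proof of concept publishes code, but the CEP pattern Logica describes, standing queries running continuously over a live measurement stream, can be sketched in a few lines. The following is an illustrative Python analogue of a sliding-window alert such as a remote operations center might run; it is not StreamInsight and the pressure figures are invented.

```python
# Illustrative analogue of a CEP standing query (not StreamInsight):
# raise an alert when the rolling mean of the last N readings breaches a limit.
from collections import deque

def pressure_alerts(readings, window=5, limit=250.0):
    """Yield (sample_index, rolling_mean) whenever the windowed average exceeds limit."""
    buf = deque(maxlen=window)
    for i, value in enumerate(readings):
        buf.append(value)
        if len(buf) == window:
            avg = sum(buf) / window
            if avg > limit:
                yield i, avg

# Hypothetical wellhead pressure stream (bar), for illustration only.
stream = [230, 240, 245, 252, 260, 265, 270, 255, 248, 240]
for idx, avg in pressure_alerts(stream):
    print(f"alert at sample {idx}: rolling mean {avg:.1f} bar")
```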

We have expressed skepticism before as to MURA’s reality. The fact that a systems integrator managed to make a composite application from two vendors’ SharePoint Web Parts is hardly earth shaking. No more is the fact that Logica can deploy StreamInsight and Bing Maps!

Two years since its launch, the effort has, as the French say, given birth to a mouse. But the MURA headcount is on the up: around 20 companies had signed up in 2011; this has now grown to some 35. This means either that folks are keen to sign up for the (free?) marketing clout that Microsoft is offering through the nebulous MURA, or that by 2013 we will see a major shift in upstream software as apps all plug and play with each other, with data and all! Read this month’s editorial for more on MURA and watch this space.


Accelrys Enterprise Platform for science R&D

Scientifically ‘aware’ service-oriented architecture targets data management, LIMS and simulation.

San Diego-based Accelrys has announced the Accelrys Enterprise Platform (AEP), a ‘scientifically aware, service-oriented architecture for the integration and deployment of scientific solutions spanning data management and informatics, enterprise lab management, modeling, simulation and workflow automation.’ The AEP’s ‘science-enabled’ infrastructure has its origins in the 2010 acquisition of Symyx, a provider of scientific information management tools for scientists working in life sciences, chemicals and energy. The system targets organizations using atomic or molecular-scale modeling where a ‘productivity gap’ exists between innovation and commercialization.

The AEP blends Accelrys’ expertise in chemistry, biology and materials science, informatics, and electronic laboratory management. Both structured and unstructured information from ERP and product lifecycle management (PLM) systems are covered. Compliance with REACH and EPA regulations is also claimed as a driver for AEP deployment. One user is Halliburton researcher Jim Weaver, who uses computational chemistry alongside experimentation to accelerate product development at Halliburton’s Duncan Research Center in Houston. Accelrys/Symyx’ chemicals and petroleum client base includes BP, Sinopec, and ExxonMobil. More from Accelrys.


Kongsberg, PGS develop Ramform training simulator

Kinect interface tracks trainee actions as streamers deploy from a moving vessel.

Kongsberg Maritime has teamed with Petroleum Geo-Services (PGS) to develop a seismic streamer deck operations training system (SSDOT), installed at Vestfold University College in Norway. The SSDOT is built around Kongsberg’s offshore vessel simulator engine, which has been interfaced to Microsoft’s Kinect motion sensing device. Kinect tracks students’ movements and displays them in a realistic depiction of the streamer deck. Another device, worn around the waist, lets students operate the streamer winch. The simulator also models the motion of the PGS Ramform Viking with an accurate hydrodynamic model and 3D hull design. Accurate modeling of vessel behavior is important for realistic winch operation in different sea conditions.

PGS VP Einar Nielsen explained, ‘Back-deck operations have been increasing in complexity over the years and personnel are getting less exposure to these critical operations, so we decided that simulator training was a natural step to ensure safety and efficiency.’ Each student station consists of three 65” TFT-LCD screens mounted vertically showing their simulated position on the streamer deck and the actions they are carrying out. More from PGS.


Lynx’ Resource Portal ready for international operations

10 years in development, UKOGL portal now marketed to NOCs for licensing rounds.

UK-based Lynx Information Systems has announced the commercial release of its web portal and database management system, the Lynx Resource Portal (LRP). The LRP was originally developed as the interface to the UK onshore geophysical library (UKOGL). Lynx designed and built the UKOGL portal, which it has been operating since 1994. UKOGL is said to have contributed to the revival of UK onshore exploration by making oil and gas data accessible. Geoscientists can view seismic, geological and cultural data and make a selection of SEG-Y data for purchase. The LRP, developed over a 10 year period, is based on industry-standard Oracle and ArcGIS Server technologies.

The LRP targets National Oil Companies wishing to launch exploration licensing rounds. The system can either be installed in-country on the NOC’s own servers, or on secure web servers operated from Lynx’s offices in London and Houston. All the appropriate data (seismic, well and block data, plus background geology, documentation on licensing conditions, petroleum laws and contracts) can be hosted and served to approved users in real time. More from Lynx.


Genscape provides fundamental Forties production data

Benchmark crude flows under scrutiny from sophisticated real-time tracking and analysis.

Louisville, KY-headquartered energy supply monitoring specialist Genscape has launched a real-time ‘fundamentals’ oil flow data service targeting the UK Forties benchmark supply. Genscape uses infrared camera systems, EM monitors and high resolution aerial photography to track oil inventories and flow rates. The acquired data is processed by Genscape’s meteorologists, economists and analysts using mathematical models and presented as maps and graphs of delivery and price information. Cargo delays and production declines at the Forties complex (actually a network of 70 fields and 169 kilometers of pipeline) have accentuated the need for greater transparency of oil fundamentals.

Subscribers get half-hourly updates on flows in the Forties pipeline and the system tracks oceangoing tankers loaded at Hound Point. It also measures oil storage at Dalmeny daily and yields a complete and accurate picture of oil available for export. The system tracks operations of the fractionators at Kinneil and the Ineos Grangemouth refinery. More from Genscape.
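
Genscape’s models are proprietary, but the accounting behind ‘oil available for export’ is at heart a simple mass balance: opening storage plus pipeline inflow, minus refinery and fractionator offtake, minus cargoes already lifted. A sketch with purely illustrative figures:

```python
# Illustrative mass balance for 'oil available for export' at a loading terminal.
# All figures are placeholders; Genscape's actual models and data are proprietary.
def available_for_export(opening_storage_bbl, pipeline_inflow_bbl,
                         refinery_offtake_bbl, cargoes_loaded_bbl):
    """Barrels notionally available for lifting over the reporting period."""
    return (opening_storage_bbl + pipeline_inflow_bbl
            - refinery_offtake_bbl - cargoes_loaded_bbl)

print(available_for_export(
    opening_storage_bbl=3_500_000,   # terminal storage estimate (placeholder)
    pipeline_inflow_bbl=520_000,     # pipeline flow over the period (placeholder)
    refinery_offtake_bbl=150_000,    # refinery/fractionator offtake (placeholder)
    cargoes_loaded_bbl=600_000,      # tanker liftings (placeholder)
))   # -> 3270000 bbl
```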


‘Advantage’ special edition celebrates oil and gas simulation

ANSYS CFD and simulation management tools see take-up from Baker Hughes, Cognity, ODS.

Ansys has just published a special issue of its in-house magazine ‘Advantage’ dedicated to its simulation activity in the oil and gas vertical. Ansys advocates simulation-driven product development to evaluate multiple designs and predict real-life performance. Ansys sees simulation in oil and gas as extending from its current use in reservoir engineering into the fields of drilling and completion.

One case history covers Baker Hughes’ use of a systems engineering approach to design ‘drone tools,’ unmanned rigs and sensors for high temperature formation evaluation. Baker Hughes is planning to deploy Ansys’ computational fluid dynamics (CFD) tool in the Microsoft Azure cloud. Ansys’ Engineering Knowledge Manager (EKM) is used to index, manage and track engineering models. Another client, Cognity, is using the Ansys Workbench platform to design a steerable conductor that provides real-time positioning information as it is pounded into the ground with 600 tons of force.

Lloyd’s Register’s ODS unit used Ansys to determine the cause of damaging vibrations and to assess new designs for offshore oil and gas equipment. Again, EKM is used to manage simulation results. Swift Technology Group uses the toolset to design its separators, with full physics simulation of liquid sloshing behavior in separators secured to moving platforms. Simulation of complex fluid behavior in a vertical cyclone vessel is now possible with high performance extensions to the base product.
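
Full-physics CFD is needed for real separator geometries, internals and platform motion, but a first-order sanity check on sloshing resonance can come from the classical linear result for a rectangular tank, omega_n^2 = (n*pi*g/L) * tanh(n*pi*h/L). The sketch below applies it with illustrative dimensions; it is a textbook estimate, not Swift’s or Ansys’ method.

```python
# First sloshing-mode natural frequency of liquid in a rectangular tank (linear theory).
# Dimensions are illustrative placeholders, not Swift or Ansys data.
from math import pi, sqrt, tanh

def sloshing_frequency_hz(length_m, fill_depth_m, mode=1, g=9.81):
    """omega_n^2 = (n*pi*g/L) * tanh(n*pi*h/L); returns f = omega / (2*pi)."""
    omega_sq = (mode * pi * g / length_m) * tanh(mode * pi * fill_depth_m / length_m)
    return sqrt(omega_sq) / (2 * pi)

# A 3 m long vessel, half full (1.5 m liquid depth):
print(f"{sloshing_frequency_hz(3.0, 1.5):.2f} Hz")   # ~0.49 Hz, about a 2 s period
```

CFD then takes over for the nonlinear, baffled, moving-platform case that such a hand estimate cannot capture.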

