When it comes to the digital oilfield, High-Performance Computing (HPC) is an asset to be relied upon. From upstream discovery and the mapping of ocean floors to production, optimization and design, powerful HPC systems analyze, interpret and manipulate unfathomable amounts of data.
FileCatalyst has been helping a number of natural resources companies accelerate their workflows by moving data to and from HPC facilities at multi-gigabit speeds. Let’s take a look at some use cases where oil & gas companies leverage HPC, and how FileCatalyst helps them streamline these workflows.
Discovery
Upstream discovery, the process of locating and extracting oil, has relied on the processing power of HPC for some time. There are many discovery and extraction methods, from robots exploring the ocean to geophysicists studying the earth's surface, and each one generates large amounts of data. A single exploration initiative may produce multiple petabytes of raw data, and processing it is resource-intensive, not to mention time-consuming. HPC facilities provide high-power compute clusters that do the heavy lifting and process this data so insights can be gained from it.
Simulation
Evaluating oil reservoirs and deposits is a complex, potentially dangerous and expensive endeavour. To ensure success, drive decision-making and manage risk, oil and gas companies build highly detailed 3D simulations of a reservoir before embarking on the real thing, ironing out the details and evaluating the possible outcomes in advance. Rendering these live simulations demands massive parallel processing power, a hallmark of HPC.
Seismic
Seismic readings are created using sound waves that reflect off the ocean floor to build a "sonar" map of the earth's surface. Turning that data into a visual representation is, once again, a job for HPC. Once the readings are gathered and sent to an HPC facility, they can be visualized, like the simulations above, as complex renderings that yield insights.
Design
The environments where oil and gas are extracted are extreme, to say the least. Radiation and extreme temperatures are just some of the challenges standing in the way of extraction. New materials that can withstand these conditions need to be created, and doing so can be a very time-consuming, expensive and resource-intensive process. Initiatives like the HPC for Materials Program leverage the power of HPC to help create these new materials.
The Common Thread – Data
From spreadsheets of data to highly durable materials, the use cases above are unique and diverse. What they all have in common (aside from the use of HPC) is that they generate massive amounts of data. These datasets are often multiple terabytes in size, and they keep growing in volume, variety and velocity. All of this data, coming from many different sources, has to make it to HPC facilities, and the distances involved can span the globe.
The Common Challenge – Moving This Data
HPC workflows are challenging in themselves, but the challenge that sits on top of that complexity is efficiency, or a lack thereof. Organizations leveraging HPC to analyze data are global and far-reaching, and in many cases (especially with upstream exploration and discovery) the data is generated in remote locations.
Since these locations are remote, they rely on wireless internet connections to move data. These transfers are typically based on TCP/FTP, and they travel over a wide variety of links with very different speeds. Because TCP/FTP relies on two-way communication of packets and acknowledgements, time is spent waiting for acknowledgements before more data can be sent. That waiting time is compounded as transfer distance (and therefore latency) increases, so even in the best of circumstances, long-range file transfers are bottlenecked by TCP/FTP.
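To see why latency dominates, a quick back-of-the-envelope calculation helps. The sketch below (plain Python, assuming a classic 64 KB TCP window with no window scaling or parallel streams) shows how the achievable throughput of a single TCP stream collapses as round-trip time grows, no matter how much bandwidth the link offers.

```python
# Rough illustration: a single TCP stream is capped by window size / round-trip
# time, regardless of link speed. The 64 KB window is an assumption (classic
# TCP without window scaling); tuned stacks can do better, but the trend holds.

WINDOW_BYTES = 64 * 1024          # assumed TCP receive window

def tcp_throughput_mbps(rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput for a given RTT."""
    rtt_s = rtt_ms / 1000.0
    return (WINDOW_BYTES * 8) / rtt_s / 1_000_000  # megabits per second

for rtt in (5, 50, 150, 638):     # LAN, regional, intercontinental, satellite
    print(f"RTT {rtt:>4} ms -> at most {tcp_throughput_mbps(rtt):7.2f} Mbps")
```

Even on a multi-gigabit link, the 638 ms satellite round trip in this sketch limits a single stream to well under 1 Mbps.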
And once the data arrives at HQ, it may need to be transferred again to the HPC facility. Long-distance transfers via TCP/FTP are a challenge even for moderate amounts of data; at petabyte scale they become nearly impossible, with multi-day transfers that may fail altogether.
The Common Solution – FileCatalyst’s Fast Transfers
While we may not be able to perform HPC tasks for your oil & gas use cases, we can help you get your data to the HPC facility at unprecedented speeds. We do this with our patented, built-in-house, UDP-based protocol. The protocol sends larger packets of data and doesn't wait on per-packet acknowledgements, which lets transfers saturate your connection at speeds of up to 10 Gbps.
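To make the idea concrete, here is a deliberately simplified sketch of the general UDP principle, not FileCatalyst's actual protocol, which is proprietary and adds reliability, congestion control and retransmission on top. The destination address, port and chunk size are all made-up values for illustration.

```python
# Toy illustration of UDP-style "keep the pipe full" sending: datagrams go out
# back to back instead of pausing for an acknowledgement per window. This is
# NOT FileCatalyst's protocol; it only shows the underlying transport idea.
import socket

DEST = ("198.51.100.10", 9000)    # hypothetical receiver address and port
CHUNK = 1400                      # payload sized to fit a typical MTU

def blast_file(path: str) -> None:
    """Stream a file as UDP datagrams without waiting for per-packet ACKs."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(path, "rb") as f:
        seq = 0
        while chunk := f.read(CHUNK):
            # A sequence number lets the receiver detect loss and request
            # retransmission out of band (not shown in this sketch).
            sock.sendto(seq.to_bytes(8, "big") + chunk, DEST)
            seq += 1
    sock.close()
```

Because the sender never idles waiting for a round trip, throughput is governed by the link rather than by latency, which is why this class of protocol holds up over long distances.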
FileCatalyst in Action
Let’s take a look at a real-world example. Using our trusty Bandwidth Calculator, I set up the following scenario: sending 1 TB of data over a satellite connection with an average latency of 638 ms (common for a remote drilling site) to London, England, across a 10 Gbps link.
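For readers without the calculator to hand, the short Python sketch below approximates the same scenario. It again assumes a single TCP stream with a 64 KB window for the conventional-transfer estimate, so treat the output as a rough order-of-magnitude comparison rather than a benchmark.

```python
# Approximate the scenario: 1 TB of data, 638 ms RTT, 10 Gbps link.
# Assumes a 64 KB TCP window for the conventional single-stream estimate.

DATA_BITS = 1e12 * 8              # 1 TB expressed in bits
LINK_BPS = 10e9                   # 10 Gbps link
RTT_S = 0.638                     # satellite round-trip time
WINDOW_BITS = 64 * 1024 * 8       # assumed TCP window

tcp_bps = min(LINK_BPS, WINDOW_BITS / RTT_S)   # latency-limited throughput
ideal_s = DATA_BITS / LINK_BPS                  # link-limited transfer time
tcp_s = DATA_BITS / tcp_bps                     # TCP-limited transfer time

print(f"Link-limited transfer  : {ideal_s / 60:6.1f} minutes")
print(f"Single TCP stream      : {tcp_s / 86400:6.1f} days")
```

Under these assumptions the link itself could move 1 TB in roughly a quarter of an hour, while a single latency-bound TCP stream would need months.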
Now, let's compare the results:
The numbers speak for themselves, don’t they? With results like these, it’s clear that FileCatalyst can dramatically accelerate your HPC workflows by removing the limits that conventional file transfer methods place on your productivity.
If you would like more details on how our solutions work, visit our How it Works page. Also, read our case study on NICE Software to see how we enabled them to drastically reduce run times for CPU-intensive applications such as seismic analysis and reservoir simulation software.