Data models and their interpretation

The data sets we gather from our seismic campaigns and from our geochemical and geophysical surveys are voluminous and complex. They come from different survey methods, vary in resolution (i.e. level of detail) and can easily exceed 100 GB – the equivalent of an MP3 player holding 20,000 songs! Our specialists now have the job of ensuring that all the data are comparable, irrespective of which method was used to collect them, and of bundling them in a single model. Only an integrated geo-model can give us some idea of how the subsurface in a particular area has developed over millions of years, what geological configuration it presents, and what characterises the underlying geological formations.
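One common way to make surveys of differing resolution comparable is to resample them onto a shared reference grid before combining them. The sketch below illustrates the idea with NumPy; the function name, the depth grids and the measurement values are all invented for illustration and do not reflect DEA's actual tools or data.

```python
import numpy as np

def resample_to_common_grid(depth, values, common_depth):
    """Linearly interpolate one survey's measurements onto a shared depth axis."""
    return np.interp(common_depth, depth, values)

# Two hypothetical surveys sampled at different resolutions (depth in metres).
coarse_depth = np.array([0.0, 100.0, 200.0, 300.0])
coarse_vals = np.array([2.1, 2.4, 2.8, 3.0])      # e.g. rock density, g/cm^3
fine_depth = np.arange(0.0, 301.0, 10.0)
fine_vals = 2.0 + fine_depth / 150.0              # synthetic stand-in data

# Resample both surveys onto one common 50 m grid so they become comparable.
common = np.arange(0.0, 301.0, 50.0)
coarse_on_common = resample_to_common_grid(coarse_depth, coarse_vals, common)
fine_on_common = resample_to_common_grid(fine_depth, fine_vals, common)

# Bundle them into a single model: one array with one row per survey.
model = np.vstack([coarse_on_common, fine_on_common])
print(model.shape)  # (2, 7): two surveys, now on the same 7-point grid
```

Once every data set lives on the same grid, the rows can be compared, combined and fed into a single integrated model, regardless of how each survey was originally sampled.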

Despite high data density, there is still scope for interpretation

When drawing up such a model, we are always aware that there are ambiguities in nearly all the data we gather. In other words, even when the data density is extremely high, there is still plenty of room for interpretation. Ultimately, all we have is a model and not an absolutely faithful picture of the subsurface. Our aim is to keep the uncertainties as low as possible, continually improve the accuracy of our reservoir forecasts, and minimise the risks for our drilling and production operations.
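One standard way to make that interpretive uncertainty explicit is to run many model realisations and report a range rather than a single number, for example with a Monte Carlo estimate of reservoir volume. The input ranges below are invented for illustration only and are not DEA figures.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of model realisations

# Hypothetical uncertain inputs for a simple volumetric estimate.
area = rng.uniform(8.0, 12.0, n)         # reservoir area, km^2
thickness = rng.uniform(20.0, 40.0, n)   # net thickness, m
porosity = rng.uniform(0.10, 0.25, n)    # pore-space fraction
saturation = rng.uniform(0.5, 0.8, n)    # hydrocarbon saturation fraction

# Hydrocarbon pore volume in m^3 (km^2 -> m^2 is a factor of 1e6).
volume = area * 1e6 * thickness * porosity * saturation

# Instead of one answer we get a distribution; the quantile spread
# expresses how much room for interpretation the data still leave.
p10, p50, p90 = np.percentile(volume, [10, 50, 90])
print(f"P10 {p10:.3e}  P50 {p50:.3e}  P90 {p90:.3e} m^3")
```

The gap between the P10 and P90 estimates is a direct measure of the remaining uncertainty: as more and better data narrow the input ranges, the forecast tightens and the drilling risk falls.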

To come up with such high-quality interpretations we need state-of-the-art systems combining interactive visualisation with enormous computing power. Such systems enable us to carry out our model calculations faster and more accurately. Speed and precision are important for DEA because they give us a time advantage over our competitors in applying for concessions, and increase our chances of actually striking oil or gas.