
July - November 2006

Seismic Field School Report
Revised version, February 2007. Utrecht University, the Netherlands. Supervisor: Kabir Roy Chowdhury

Pablo Ortega 0500941 Wijb Sommer 0217158

Table of contents

1 Introduction
2 Processing
   2.1 Loading of data
   2.2 Geometry setup
   2.3 Cross-correlation
   2.4 True Amplitude Recovery
   2.5 Bandpass
   2.6 FK filter
   2.7 Trace killing
   2.8 Sorting
   2.9 Velocity analysis
   2.10 Stack
   2.11 Time to depth
3 Special project: GPR
   3.1 GPR processing
   3.2 True Amplitude Recovery
   3.3 Velocity analysis
4 Results and conclusions
   4.1 Seismic results
   4.2 GPR results
   4.3 Combining seismic and GPR
   4.4 Evaluation
5 Word of gratitude
6 References

1 Introduction
In July 2006 a seismic survey was carried out to image the shallow subsurface near Reppel. The objective was to locate the fault south of the Peelrandblock. The fault is part of a graben system formed by the collision of the Eurasian plate with the African plate in the south and by stresses caused by the spreading ridge in the North Atlantic Ocean.

Figure 1: Seismic survey area. Reppel, Belgium.

In this report we deal mainly with the steps taken to obtain the best processed data and describe the problems encountered during processing. The processing ultimately yields information on the shallow structure and may give an indication of the presence of faults. Measurements were taken by fellow students during a three-day field trip. A P-wave vibroseis source (Figure 2) and a string of 48 geophones were used. The vibroseis produces a 5000 ms sweep with frequencies increasing linearly from 20 to 300 Hz. The acquisition was conducted as a push survey, with the following geometry.

Record Length: 6000 ms
Sampling Rate: 0.5 ms
Geophone spacing: 1 m
Number of channels: 48
Minimum source-receiver offset: 20 m
Number of shots: 169
Survey length: 403 m
Geophone type: 10 Hz horizontal

Figure 2: Vibroseis.

Each shot is repeated four times with exactly the same setup to improve the signal-to-noise ratio; the data are stacked in situ. As a special project, a GPR study was conducted over the same line, the main goal being to compare the results of both surveys. The survey was carried out as part of the course Seismic Fieldschool, given at Utrecht University by Dr. K. Roy Chowdhury.

2 Processing
Seismic data processing converts raw, uncorrelated data into useful information. The process is a sequential, reiterative one, as different steps are readjusted several times to optimize the output. As we now know: garbage in means garbage out. The following stages can be identified:
A. Loading of raw, uncorrelated seismic data.
B. Geometry definition.
C. Signal processing.
D. Velocity picking.
E. Time-to-depth conversion.

In the following pages the complete process is described in depth. All processing of our data was done using Landmark's interactive seismic processing system, ProMax.

2.1 Loading of data


Using a simple workflow in ProMax, the seismic data on disk were read in and converted to ProMax format.

2.2 Geometry Setup


We defined the geometry in order to correctly place the seismic survey parameters. The parameters include the source-receiver interval (1 m), the source-source interval (2 m) and the total number of live stations (404). No UTM coordinates were defined; instead a simple coordinate system was used in which the first shot point was given coordinate 0.0, and the following source and receiver points were defined relative to it. Even though some slight hills (half a meter) were observed in the survey area, it was decided that they would not affect the final result of the seismic survey, so no elevation or static corrections were defined in the geometry. The survey consisted of an array of 48 live channels per shot. The first CMP is located 10.0 meters from the first shot point and the CMP interval is given by the formula:

CMP interval = minimum [shot interval, receiver interval] / 2


so for this survey the interval is 0.5 meters. The last CMP is located at 369.5 meters from the first source. The maximum nominal fold can be computed using the following formula:

in-line fold = receiver line length / (2 × source interval)

For our specific case this results in a maximum fold of 12 traces; this fold is reached after 12 shots (34 meters from the first shot point). When the geometry was applied to the seismic shot gathers, the two auxiliary traces (traces 50 and 51), which contained the source signature, were ruled out of the output gathers. Although it may seem a trivial step, a correct geometry is vital because it has a direct influence on the velocity picking process.
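The two geometry formulas above can be checked with a short sketch using the stated acquisition numbers (variable names are ours, not ProMax's):

```python
# Check the CMP interval and maximum nominal fold for the stated geometry.
receiver_interval = 1.0   # m (geophone spacing)
source_interval = 2.0     # m (shot spacing)
n_channels = 48           # live channels per shot

# CMP interval = min(shot interval, receiver interval) / 2
cmp_interval = min(source_interval, receiver_interval) / 2.0

# in-line fold = receiver line length / (2 x source interval)
receiver_line_length = n_channels * receiver_interval
fold = receiver_line_length / (2.0 * source_interval)

print(cmp_interval, fold)   # 0.5 m and a fold of 12, as in the text
```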

2.3 Cross-Correlation
In order to resolve the reflectivity of the beds, it is necessary to remove the sweep from the seismic traces. Cross-correlation is the standard method to do this: the source signature, contained in the two auxiliary traces, is cross-correlated with the seismic data. After this process the trace theoretically contains only information related to the convolution of an impulsive (delta) source with the reflectivity series. Figure 3 shows the effect of the cross-correlation. Note the difference in time scales: before the cross-correlation, the information about any individual reflection is distributed over the whole time duration of the sweep.

Figure 3: Left: uncorrelated shotgather, Right: correlated shotgather.
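The sweep removal described above can be illustrated with a toy example: a synthetic two-spike reflectivity (our own invented numbers) is convolved with a linear 20-300 Hz sweep and then cross-correlated with it, collapsing each sweep-length arrival back to a compact wavelet at its true time:

```python
import numpy as np
from scipy.signal import chirp, correlate

fs = 2000.0                                  # 0.5 ms sampling, as in the survey
t_sweep = np.arange(0, 5.0, 1 / fs)          # 5000 ms linear sweep
sweep = chirp(t_sweep, f0=20, t1=5.0, f1=300, method='linear')

# Hypothetical earth response: two reflectors (illustrative values only)
n = int(6.0 * fs)                            # 6000 ms record length
reflectivity = np.zeros(n)
reflectivity[[400, 1500]] = [1.0, -0.5]
trace = np.convolve(reflectivity, sweep)[:n]  # uncorrelated vibroseis trace

# Cross-correlate with the sweep; keep zero and positive lags only
corr = correlate(trace, sweep, mode='full')[len(sweep) - 1:]
# The strongest peak sits at sample 400, the first reflector
print(np.argmax(np.abs(corr)))
```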

2.4 True Amplitude Recovery


The goal of TAR is to bring the data to a state where reflector amplitudes relate directly to the changes in rock properties giving rise to them. To recover the true amplitude, a correction for geometrical spreading is applied: a spherical spreading correction (1/distance). To correct for absorption of wave energy by the medium, an inelastic attenuation correction (constant 0.0002) is applied. Figure 4 shows a resulting gather.

Figure 4: CMP gather resulting from TAR.
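The exact ProMax gain law is not reproduced here, but the two corrections can be sketched minimally, assuming the spherical term is proportional to traveltime (distance grows roughly linearly with time at near-constant velocity) and the attenuation term is exponential with the report's constant, taken here to be per millisecond:

```python
import numpy as np

def tar(trace, dt, alpha=0.0002):
    """Minimal TAR sketch: multiply each sample by its traveltime
    (compensating ~1/distance spherical divergence) and by an
    exponential term undoing inelastic attenuation. alpha mirrors
    the report's constant 0.0002, assumed here to be per ms."""
    t_ms = np.arange(len(trace)) * dt * 1000.0   # traveltime in ms
    gain = t_ms * np.exp(alpha * t_ms)
    return trace * gain
```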

2.5 Bandpass
The vibroseis source function contains frequencies from 20 to 300 Hz. As can be seen in figure 5, the recorded spectrum also contains higher frequencies. These are not useful signal; therefore a filter is designed to reject them (NB some low frequencies are filtered out as well). Figure 6 shows the characteristics of the bandpass filter (black).

Figure 5: Left: Frequency spectrum of original data, Right: Frequency content after applying bandpass filter.

Figure 6: Bandpassfilter black: [8-20-250-300], blue: [8-20-70-90].
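A zero-phase trapezoidal (Ormsby) bandpass like the black [8-20-250-300] Hz filter in figure 6 can be sketched as a frequency-domain taper; this is our own minimal implementation, not the ProMax tool:

```python
import numpy as np

def ormsby_filter(trace, dt, f1, f2, f3, f4):
    """Zero-phase Ormsby bandpass applied in the frequency domain:
    0 below f1 and above f4, 1 between f2 and f3, and linear ramps
    on the corners, e.g. the [8-20-250-300] Hz filter in the text."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    taper = np.interp(freqs, [f1, f2, f3, f4], [0.0, 1.0, 1.0, 0.0],
                      left=0.0, right=0.0)
    return np.fft.irfft(np.fft.rfft(trace) * taper, n)
```

Multiplying the spectrum by a real, symmetric taper leaves the phase untouched, which is why the filter is zero-phase.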

2.6 FK filter
The next step is to improve the signal-to-noise ratio with a filter in the frequency-wavenumber domain that removes random and coherent noise without diminishing the useful signal. By Fourier-transforming the seismic traces in both time and offset, a frequency-wavenumber (FK) plot can be generated. Noise and signal should ideally have different slopes in this plot, so noise can be removed without losing useful signal. We removed the ground-roll cone, which has low velocity and low frequency and overshadowed the shallow events in our seismic data. Figures 7 and 8 show the original and FK-filtered data for one CMP gather: the left panel shows the traces and the right panel the FK plot with the designed filter polygon. All energy inside the polygon is removed.

Figure 7: CMP gather and FK transform.
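The hand-drawn polygon used in ProMax cannot be reproduced here, but the underlying idea, rejecting by apparent velocity f/k in the f-k plane, can be sketched as a simple fan (dip) filter; the cut-off velocity below is an illustrative number, not the one used for this survey:

```python
import numpy as np

def fk_fan_filter(gather, dt, dx, v_cut=500.0):
    """Simplified f-k dip filter: zero all components whose apparent
    velocity |f/k| falls below v_cut (m/s), a stand-in for the
    polygon rejection of the ground-roll cone described in the text."""
    nt, nx = gather.shape
    fk = np.fft.fft2(gather)
    f = np.fft.fftfreq(nt, d=dt)[:, None]   # temporal frequency (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]   # wavenumber (1/m)
    v_app = np.abs(f) / np.maximum(np.abs(k), 1e-12)
    mask = v_app >= v_cut                   # keep fast (signal) dips
    return np.real(np.fft.ifft2(fk * mask))
```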


Figure 8: CMP gather and FK transform after FK filtering.

In the intermediate unstacked data much high-frequency noise was observed; therefore another zero-phase Ormsby bandpass filter was applied in the frequency domain (blue line in figure 6). Figure 9 shows the result of the second bandpass.


Figure 9: Filtered data without the 2nd bandpass (left) and with it (right).

A top and a bottom mute are applied (figure 10) to remove instrument and field noise and to remove the deep part of the record, which has a low signal-to-noise ratio. This also reduces the processing time in later steps.


Figure 10: Data with top/ bottom mute.

2.7 Trace killing


After scanning the seismic gathers, it was noticed that some traces behaved out of the norm. A trace-killing flow was run to remove them, eliminating the effect they might have had on the stack. As can be seen in figure 11, trace number 39 behaves completely differently from its neighbors. This behavior is common to all gathers, so this trace was killed in all of them. Some other traces with anomalous frequency content or a time shift were hand-picked and removed from the gathers.
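The hand-picking above could be partly automated by flagging traces whose RMS amplitude deviates wildly from the median; this is our own crude sketch, not the flow used in ProMax:

```python
import numpy as np

def kill_anomalous_traces(gather, n_mad=5.0):
    """Zero traces whose RMS amplitude lies more than n_mad median
    absolute deviations from the median trace RMS. A crude automated
    stand-in for the hand-picked trace kill described in the text."""
    rms = np.sqrt(np.mean(gather**2, axis=0))
    med = np.median(rms)
    mad = np.median(np.abs(rms - med)) + 1e-12
    dead = np.abs(rms - med) > n_mad * mad
    out = gather.copy()
    out[:, dead] = 0.0                      # kill the flagged traces
    return out, np.where(dead)[0]
```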


Figure 11: shotgather with killed traces.

2.8 Sorting
In this step common-midpoint (CMP) gathers were generated. CMP sorting is the process by which traces that correspond to the same reflection point in the subsurface are placed sequentially from small to large source-receiver offset. This is a basic pre-step prior to velocity analysis and stacking. The sorting was done using the CDP bin number and the signed source-receiver offset.
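The two-key sort described above amounts to ordering trace headers by CDP bin and then by signed offset; a minimal sketch with invented header values (the field names are ours, not ProMax's):

```python
# CMP sorting sketch: order trace headers by CDP bin number, then by
# signed source-receiver offset (hypothetical headers for illustration).
traces = [
    {"cdp": 12, "offset": -21.0},
    {"cdp": 11, "offset": 24.0},
    {"cdp": 11, "offset": -20.0},
    {"cdp": 12, "offset": 22.0},
]
cmp_sorted = sorted(traces, key=lambda h: (h["cdp"], h["offset"]))
print([(h["cdp"], h["offset"]) for h in cmp_sorted])
# → [(11, -20.0), (11, 24.0), (12, -21.0), (12, 22.0)]
```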

2.9 Velocity analysis


The main goal of all processing is to increase the signal-to-noise ratio. Velocities are picked in common-midpoint gathers to correct for a geometrical effect known as normal moveout. The velocity that results in the best line-up of seismic events corresponding to the same reflection is assigned to a particular time. This process is repeated for every 11th CDP; in this way a 2D velocity model is built. To pick the velocities we used an environment as shown in figure 12. Five areas can be seen in the figure; from left to right: velocity spectrum, CMP gather, dynamic stack, flipped stack, and stacks with an applied increasing NMO velocity correction. In the last panel the picked velocity function in time is shown in red. The methodology was to choose the velocity that produced the best CMP gather without resorting to steep velocity changes in time. The NMO correction hyperbola is given by:

t^2(h) = t0^2 + h^2 / V_NMO^2

where t is the traveltime at offset h, t0 is the zero-offset (normal-incidence) traveltime and V_NMO is the normal-moveout velocity (Dix, 1955). Because of the NMO correction, the farther the offset, the more the trace is stretched. This is known as NMO stretch; it changes the waveform and frequency content, decreasing the stack efficiency. To limit this effect a maximum percentage of stretch is allowed: in our case, data with stretch larger than 45% are muted.
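The NMO correction with stretch mute can be sketched as follows, assuming a single trace and a constant V_NMO (a per-sample velocity function would work the same way):

```python
import numpy as np

def nmo_correct(trace, dt, offset, v_nmo, stretch_max=0.45):
    """Apply the hyperbolic NMO correction t(h)^2 = t0^2 + h^2/V^2 to
    one trace, muting samples where the stretch t/t0 - 1 exceeds
    stretch_max (45%, as in the text). v_nmo is a constant in m/s."""
    n = len(trace)
    t0 = np.arange(n) * dt
    t = np.sqrt(t0**2 + (offset / v_nmo)**2)      # moveout time for each t0
    out = np.interp(t, t0, trace, left=0.0, right=0.0)
    stretch = np.full(n, np.inf)                  # t0 = 0 is always muted
    stretch[1:] = t[1:] / t0[1:] - 1.0            # NMO stretch
    out[stretch > stretch_max] = 0.0              # 45% stretch mute
    return out
```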

Figure 12: Screenshot of the velocity picking environment. The resulting velocity model for the complete seismic section is shown below in figure 13.


Figure 13: RMS velocity model.

2.10 Stack

Stacking is the process by which NMO-corrected seismic traces belonging to the same CMP are summed. This greatly improves the signal-to-noise ratio (which is proportional to the square root of the fold) and reduces the data volume by a factor equal to the fold. Figure 14 shows the NMO correction graphically and the effect of stacking. Events that do not have hyperbolic moveout (e.g. random noise) will not line up after correction and therefore have low amplitude in the stacked section.
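The square-root-of-fold gain can be demonstrated on synthetic numbers (our own toy gather, not the survey data):

```python
import numpy as np

# Demonstrate the sqrt(fold) signal-to-noise gain of stacking.
rng = np.random.default_rng(0)
fold = 12                                   # nominal fold of this survey
nt = 4000
signal = np.sin(2 * np.pi * 50 * np.arange(nt) * 0.0005)
noise = rng.normal(0.0, 1.0, (nt, fold))
gather = signal[:, None] + noise            # toy NMO-corrected CMP gather

stack = gather.mean(axis=1)                 # the stack itself

# Noise amplitude drops by ~sqrt(fold) ≈ 3.46 for fold 12
print(noise[:, 0].std() / noise.mean(axis=1).std())
```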


Figure 14: Schematic representation of the NMO correction and stack.

A stack section was first generated using a linear velocity model. Another stack (figure 15) was generated using the velocities picked in the velocity analysis. Comparing these two sections serves as a QC for the picking. The RMS velocities were re-picked several times until a consistent stack was produced: one with lateral continuity, in which deeper events are easier to observe. Erroneous picks (e.g. the same reflection picked with two different velocities in one gather) were also caught during this quality control of the velocity model.


Figure 15: Stacked section with picked velocity model.

2.11 Time to depth

The stack section is still in the time domain. With our velocity model we can convert this to depth. To make sure we have realistic velocities for this conversion, first a smoothing filter is applied to the RMS velocities. This 1) improves consistency and 2) reduces picking noise. Figure 16 shows the smoothed velocity model.


Figure 16: Smoothed velocity model. Red square corresponds to the subcrop shown in Figure 25. From these smoothed RMS velocities, interval velocities are derived using a 50 ms vertical interval. An unsmoothed version of the interval velocity model is shown in figure 17. For the depth conversion we used a smoothed version of the model shown in figure 17.
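The RMS-to-interval conversion mentioned above follows the Dix (1955) formula, v_int^2 = (v2^2·t2 − v1^2·t1) / (t2 − t1) over each interval; a minimal sketch:

```python
import numpy as np

def dix_interval(v_rms, t0):
    """Dix (1955) conversion from RMS to interval velocities over
    successive zero-offset time intervals (e.g. the 50 ms steps
    used in the report). v_rms in m/s, t0 in s, both increasing."""
    v_rms = np.asarray(v_rms, dtype=float)
    t0 = np.asarray(t0, dtype=float)
    num = v_rms[1:]**2 * t0[1:] - v_rms[:-1]**2 * t0[:-1]
    return np.sqrt(num / (t0[1:] - t0[:-1]))
```

For a constant RMS velocity the interval velocity is that same constant, a quick sanity check on the formula.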


Figure 17: Unsmoothed interval velocity model.

Figure 18 shows a plot of our final depth converted stacked section.

Figure 18: Depth converted stacked section. Red square corresponds to the subcrop shown in Figure 26.


3 Special project: GPR
As a special project for this course, GPR data acquired along the same line are processed and compared with the seismic reflection data. The GPR data were acquired in two frequency ranges: a high frequency (100 MHz) and a lower one (50 MHz). The 100 MHz data have 0.5 m trace spacing, the 50 MHz data 1 m. There is a trade-off between frequency and penetration depth: higher frequencies give higher vertical and horizontal resolution but less penetration depth (because of dispersive attenuation). GPR sections can be treated as zero-offset stacked data, which makes processing less extensive.

3.1 GPR processing


The processing of the data is divided into the following steps: 1) loading of the data, 2) velocity analysis and 3) depth conversion. Loading the data initially gave rise to several problems, due to a format-conversion problem in the original dataset. Once this was corrected, the raw data were loaded using the acquisition parameters stated earlier. In this case it was not necessary to define the geometry through libraries; a simple remapping of the header values was sufficient. Even though this is a simple procedure, it is important for comparing the data with the seismic data from the previous section. Specifically we defined the CDP-X and CDP-Y coordinates, the offset (set to zero) and the CDP number. These headers were needed to use the velocity picking tool within ProMax. The raw data are plotted in figures 19 and 20.

Figure 19: raw GPR data 50 MHz.


Figure 20: raw GPR data 100 MHz.

3.2 True Amplitude Recovery


To correct for geometrical spreading and attenuation we applied a true amplitude recovery similar to the TAR applied to the seismic data. In figures 21 and 22 (50 MHz and 100 MHz respectively) it can be seen that this procedure improves the visibility of the reflections significantly.


Figure 21: GPR data 50 MHz with TAR applied.

Figure 22: GPR data 100 MHz with TAR applied.


3.3 Velocity analysis


GPR uses reflections of electromagnetic waves, which are generated by a change in the dielectric constant of the medium. To do a time-to-depth conversion a velocity model is needed (analogous to the seismic velocity model). The velocity of electromagnetic waves in a medium depends on the dielectric constant of that medium as:

v = c / √K'

where c is the speed of electromagnetic waves in vacuum (2.99792458 × 10^8 m/s) and K' is the real part of the dielectric constant. We assume the dielectric constant to have an average value in our survey, equal to that of glass. This results in a speed of 0.2 m/ns. We also derived the velocity by analyzing a reflection hyperbola in the data with the ProMax velocity picking tool, which gave about 0.21 m/ns. Because the GPR survey penetrates only the shallow subsurface (about 50 m), no significant change in vertical velocity is expected. For the depth conversion a constant velocity of 0.2 m/ns is used. The resulting zero-offset GPR sections are shown in figures 23 and 24. Note that figures 19, 20, 21 and 23 are flipped because of the way the data were acquired in the field, so to compare them they should first be mirrored.
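The velocity and depth conversion above can be sketched in a few lines; K' = 2.25 is our assumption for the value that reproduces the 0.2 m/ns used in the conversion:

```python
import numpy as np

c = 2.99792458e8                         # speed of light in vacuum, m/s

def gpr_velocity(k_real):
    """EM wave speed v = c / sqrt(K') for relative permittivity K'."""
    return c / np.sqrt(k_real)

def twt_to_depth(t_ns, k_real):
    """Convert two-way traveltime (ns) to depth (m) at constant velocity."""
    v = gpr_velocity(k_real) * 1e-9      # m/ns
    return v * t_ns / 2.0

# Assuming K' = 2.25 gives v ≈ 0.2 m/ns, so 100 ns of two-way time
# corresponds to roughly 10 m depth.
print(gpr_velocity(2.25) * 1e-9, twt_to_depth(100.0, 2.25))
```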

Figure 23: Depth converted GPR data 50 MHz (with TAR).


Figure 24: Depth converted GPR data 100 MHz (with TAR).


4 Results and Conclusions
The goal of this project was twofold. The main goal was to get hands-on experience with seismic processing. As a special project the seismic data are compared with GPR data, to investigate whether the two complement each other and reflect the same geological structure. This is done by investigating the presence and location of a fault expected in the study area.

4.1 Seismic results


Velocities can be used as a preliminary attribute for structural definition: the presence of a fault could cause a rapid lateral change in velocities. In our velocity model (figure 16) an area with lateral change can be spotted between CDP 120 and 270, down to 250 ms. Lateral change is also observed in other parts of the model (e.g. in the deeper parts, around 600 ms, under CDP 100), but we concentrate on the shallow part, since this will later be tied to the (shallow) GPR data. Figure 25 shows a subset with this effect.

Figure 25: Subset of the smoothed velocity model, CDP vs. time [ms].

If this behavior truly corresponds to a geological structure, it should manifest itself in our final depth stack. The final stack in depth shows good continuity of some seismic events. The signal-to-noise ratio was greatly improved by the processing, as reflectors are now more evident and continuous. The vertical seismic resolution at greater depths (below ~80 m) limits the range where competent seismic reflections can be interpreted. As the velocity model suggested, a fault may be present in the area, and in the same area of the seismic section fault-like behavior can indeed be observed. Figure 26 shows that the coherent package of seismic reflectors terminates at what could possibly be a fault; a likely interpretation is drawn in red. The lower end of the possible fault is not clear, as there is a decrease in resolution with depth that does not allow a good interpretation.

Figure 26: Subcrop of the depth stack. In red: fault interpretation.

It can be seen in figure 18 that the top 10 meters of the section are muted; this area has a very low signal-to-noise ratio due to the presence of guided waves.

4.2 GPR results


As expected, the two (50/100 MHz) sections have different vertical resolution. This can clearly be observed in figure 27: the layering in the 100 MHz data is much finer, although both show the same structural pattern. The 50 MHz data were expected to have a greater exploration depth, but a comparison of figures 23 and 24 does not show this behavior. In the 50 MHz data some signal is observed below 16 m, but it is merely noise and cannot correspond to a real structure.

Figure 27: Comparison of two subsets of GPR data. Left 50 MHz, Right 100 MHz.


4.3 Combining seismic and GPR


In this section a comparison is made between the seismic and GPR data. Ideally this would involve overlaying both final sections, but this is not possible because the two have different depth ranges, with only a small overlap: the GPR data range from 6 to 14 m depth while the seismic data start at 10 m. Instead a figure is made showing the GPR data plotted above the seismic data. It should be noted that GPR and seismic data are by nature sensitive to different physical properties: GPR reflections are caused by changes in dielectric constant, while seismic reflections respond to changes in acoustic impedance (velocity times density). Because faulting offsets the medium, a jump in reflections is expected in both the GPR and the seismic data. For the comparison we use the 50 MHz GPR data, as its wavelength is comparable to that of the dominant seismic signal. Figure 28 shows the GPR data superposed on the seismic data.

Figure 28: Overlap of the seismic stack in time with the 50 MHz GPR stack.

Continuity can be observed between the two stacks; e.g. under CMP 140 it is easy to follow the anticline from the GPR section at the top into the seismic data. It is important to point out that the effect of the possible fault is also observed in the GPR data, at a similar location, as can be seen in figure 27.

4.4 Evaluation
This study gave us a really good grasp of how to conduct a seismic study, from acquiring data in the field to processing and producing a final section in depth. Our fellow classmates Gijs and Jelle worked in parallel on the same data set, and it would be interesting to compare both results. During processing some crude comparison was done; although the results were not completely different, interesting differences were observed. This is not unexpected, as seismic processing is a subjective procedure.


5 Word of gratitude
We would like to thank Henk van den Meer, Kabir Roy Chowdhury and Stefan Carpentier for their time and support.

6 References
Ashton, C.P., et al., 3D Seismic Survey Design, April 1994 (source of the NMO-correction figure).
Dix, C.H., Seismic velocities from surface measurements, Geophysics 20, 1955.
Google Maps.
Schlumberger Oilfield Glossary, http://www.glossary.oilfield.slb.com/
Sheriff, R.E. and Geldart, L.P., Exploration Seismology, Cambridge University Press, 1995.

