Seismic Field School Report

Revised version, February 2007
Utrecht University, Netherlands
Supervisor: Kabir Roy Chowdhury
Table of contents

1 Introduction
2 Processing
  2.1 Loading of data
  2.2 Geometry setup
  2.3 Cross-correlation
  2.4 True Amplitude Recovery
  2.5 Bandpass
  2.6 FK filter
  2.7 Trace killing
  2.8 Sorting
  2.9 Velocity analysis
  2.10 Stack
  2.11 Time to depth
3 Special project: GPR
  3.1 GPR processing
  3.2 True Amplitude Recovery
  3.3 Velocity analysis
4 Results and conclusions
  4.1 Seismic results
  4.2 GPR results
  4.3 Combining seismic and GPR
  4.4 Evaluation
5 Word of gratitude
6 References
1 Introduction
In July 2006 a seismic survey was carried out to image the shallow subsurface near Reppel. The objective was to try to locate the fault south of the Peelrandblock. The fault is part of a graben system formed by the collision of the Eurasian plate with the African plate in the south and by stresses caused by the spreading ridge in the North Atlantic Ocean.
Figure 1: Seismic survey area, Reppel, Belgium.

In this report we deal mainly with the steps taken to obtain the best processed data, describing the problems encountered during processing. The processing yields information on the shallow structure and may give an indication of the presence of faults. Measurements were taken by fellow students during a 3-day field trip. A P-wave vibroseis source was used (Figure 2) together with a string of 48 geophones. The vibroseis produces a 5000 ms sweep with frequencies going up linearly from 20 to 300 Hz. The acquisition was conducted as a push survey, using the following geometry.
The acquisition parameters recorded were: record length, sampling rate, geophone spacing, number of channels, minimum source-receiver offset, number of shots, survey length and geophone type.
Figure 2: Vibroseis source.

Each shot is repeated four times with exactly the same setup to improve the signal to noise ratio; the data are stacked in situ. As a special project a GPR study was conducted over the same line, with the main goal of comparing the results of both surveys. The survey was carried out as part of the course Seismic Fieldschool, given at Utrecht University by Dr. K. Roy Chowdhury.
2 Processing
Seismic data processing is the process by which raw, uncorrelated data are converted into useful information. The process is a sequential, reiterative one, as different steps are readjusted several times to optimize the output. As we now know: garbage in means garbage out. The following stages can be identified:

A. Loading of raw, uncorrelated seismic data.
B. Geometry definition.
C. Signal processing.
D. Velocity picking.
E. Time to depth conversion.
In the following pages the complete process will be described in depth. All processing of our data was done using Landmark's interactive seismic processing system ProMax.
in-line fold = (number of channels × geophone spacing) / (2 × shot spacing)
For our specific case this results in a maximum fold of 12 traces; this fold is reached after 12 shots (34 meters from the first shotpoint). When the geometry was finally applied to the seismic shot gathers, the two auxiliary traces (traces 50 and 51), which contained the source signature, were ruled out of the output gathers. Although it may seem a trivial step, a correct geometry is vital because it has a direct influence on the velocity picking process.
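The fold bookkeeping can be sketched in a few lines. The spacings below are hypothetical values chosen only to reproduce a maximum fold of 12 with 48 channels; they are not taken from the survey log.

```python
# Sketch of the in-line fold formula for an end-on 2D line.
# The spacings are illustrative assumptions, not the survey's values.
def inline_fold(n_channels, receiver_spacing, shot_spacing):
    """Maximum CMP fold = (channels x receiver spacing) / (2 x shot spacing)."""
    return n_channels * receiver_spacing / (2.0 * shot_spacing)

# 48 channels; hypothetical 1.5 m geophone and 3 m shot spacing
print(inline_fold(48, 1.5, 3.0))  # 12.0
```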
2.3 Cross-Correlation
In order to resolve the reflectivities of the beds, it is necessary to remove the sweep from the seismic traces. Cross-correlation is the standard method to do this. The source signature, which is contained in the two auxiliary traces, is then cross-correlated with the seismic data. After this process, theoretically, the trace will only contain information that is related to the convolution of an impulsive seismic source (delta source) with the reflectivity series. Figure 3 shows the effect of the cross-correlation. Note the difference in time scales as before applying the cross-correlation the information about any individual reflection is distributed over the whole time duration of the sweep.
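A minimal numpy sketch of this step, using a synthetic 20-300 Hz linear sweep and a made-up two-spike reflectivity, shows how the correlation collapses the long sweep back into impulses at the reflection times:

```python
import numpy as np

# Synthetic vibroseis correlation sketch; reflector times are invented.
dt = 0.001                                # 1 ms sampling
t_sweep = np.arange(0, 5.0, dt)           # 5000 ms sweep, as in the survey
f0, f1 = 20.0, 300.0                      # linear 20-300 Hz sweep
sweep = np.sin(2 * np.pi * (f0 + 0.5 * (f1 - f0) / t_sweep[-1] * t_sweep) * t_sweep)

refl = np.zeros(1000)                     # 1 s of reflectivity
refl[200], refl[500] = 1.0, -0.5          # two hypothetical reflectors

trace = np.convolve(refl, sweep)          # uncorrelated field trace
corr = np.correlate(trace, sweep, 'full')
corr = corr[len(sweep) - 1 : len(sweep) - 1 + len(refl)]  # keep positive lags

peak = int(np.argmax(np.abs(corr[:400])))
print(peak)                               # peak near sample 200
```

After correlation each reflector is represented by a compact (Klauder) wavelet, while in the raw trace its energy was spread over the full 5 s sweep duration, exactly the effect visible in Figure 3.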
2.5 Bandpass
The vibroseis source function contains frequencies from 20 to 300 Hz. As can be seen in figure 5, the frequency spectrum also contains higher frequencies. These frequencies are not useful signal; therefore a filter is designed to reject them (NB some low frequencies are filtered out as well). Figure 6 shows the characteristics of the bandpass filter (black).
Figure 5: Left: Frequency spectrum of original data, Right: Frequency content after applying bandpass filter.
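As an illustration of this kind of zero-phase filtering (the report uses an Ormsby filter in ProMax; a Butterworth design stands in for it here), a scipy sketch with the 20-300 Hz band could look like:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(traces, low=20.0, high=300.0, fs=1000.0, order=4):
    """Zero-phase Butterworth bandpass along the time (last) axis."""
    sos = butter(order, [low, high], btype='bandpass', fs=fs, output='sos')
    return sosfiltfilt(sos, traces, axis=-1)

# Toy trace: 50 Hz signal plus 450 Hz noise, sampled at 1 ms
t = np.arange(0, 1.0, 0.001)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 450 * t)
y = bandpass(x)   # the 450 Hz component is strongly attenuated
```

Applying the filter forward and backward (sosfiltfilt) keeps the phase of the reflections unchanged, which matters for later stacking.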
2.6 FK filter
The next step in processing is to improve the signal to noise ratio by designing a filter in the frequency domain which removes random and coherent noise without diminishing the useful signal. By taking a Fourier transform of the seismic traces in both time and offset, a frequency-wavenumber (FK) plot can be generated. Noise and signal should ideally have different slopes in this graph. It is then possible to remove noise without losing useful signal, improving the signal to noise ratio. We removed the ground roll cone, which has a low velocity and low frequency and overshadowed the shallow events in our seismic data. In figures 7 and 8 the original and FK filtered data are shown for one CMP gather. The left panel shows the traces and the right panel shows the FK plot with the designed filter polygon. All frequencies inside the polygon will be removed.
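A simplified version of such an FK filter can be written as a fan mask in the 2D Fourier domain: instead of a hand-drawn polygon, everything with an apparent velocity below a cutoff (the ground-roll cone) is rejected. The cutoff value below is an arbitrary placeholder, not the one used in ProMax.

```python
import numpy as np

# Simplified FK fan filter: rejects energy whose apparent velocity is
# below v_cut (the ground-roll cone). v_cut is a placeholder value.
def fk_fan_filter(gather, dt, dx, v_cut=500.0):
    """gather: array of shape (n_traces, n_samples); returns filtered gather."""
    nx, nt = gather.shape
    F = np.fft.fft2(gather)                      # (offset, time) -> (k, f)
    k = np.fft.fftfreq(nx, d=dx)                 # wavenumber axis (1/m)
    f = np.fft.fftfreq(nt, d=dt)                 # frequency axis (Hz)
    K, Fq = np.meshgrid(k, f, indexing='ij')
    with np.errstate(divide='ignore', invalid='ignore'):
        v_app = np.abs(Fq) / np.abs(K)           # apparent velocity |f/k|
    keep = (np.abs(K) == 0) | (v_app >= v_cut)   # keep fast (reflected) energy
    return np.real(np.fft.ifft2(F * keep))
```

Energy that is constant across offset (infinite apparent velocity, k = 0) passes through unchanged, while steeply dipping, slow events inside the fan are zeroed out.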
Figure 8: CMP gather and FK transform after FK filtering.

In the intermediate unstacked data much high-frequency noise was observed. Therefore another zero-phase Ormsby bandpass filter was applied in the frequency domain (blue line in figure 6). Figure 9 shows the result of this 2nd bandpass.
Figure 9: Filtered data without the 2nd bandpass (left) and with the 2nd bandpass (right).

A top and a bottom mute are applied (figure 10) to remove instrument and field noise and to remove the deep structure, which has a low signal to noise ratio. This reduces the processing time in later steps.
2.8 Sorting
In this step common midpoint (CMP) gathers were generated. CMP sorting is the process by which traces that correspond to the same reflection point in the subsurface are placed sequentially from small to large source-receiver offset. This was done as a basic pre-step prior to the stacking velocity analysis. The sorting was done using the CDP bin number and the signed source-receiver offset.
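The sort itself amounts to ordering trace headers by bin number and then by offset; the header field names and values below are hypothetical, for illustration only:

```python
# Minimal illustration of CMP sorting: each trace carries (cdp, offset)
# headers (field names are hypothetical) and is ordered by bin, then offset.
traces = [
    {"cdp": 101, "offset": 24.0},
    {"cdp": 100, "offset": 12.0},
    {"cdp": 100, "offset": 4.0},
    {"cdp": 101, "offset": 8.0},
]
cmp_sorted = sorted(traces, key=lambda tr: (tr["cdp"], tr["offset"]))
print([(tr["cdp"], tr["offset"]) for tr in cmp_sorted])
# [(100, 4.0), (100, 12.0), (101, 8.0), (101, 24.0)]
```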
applied with increasing NMO velocity correction. In the last panel the picked velocity function in time is shown in red. The methodology followed was to choose the velocity which produced the best CMP gather without resorting to steep velocity changes in time. The NMO correction hyperbola is given by

t^2(h) = t0^2 + h^2 / V_NMO^2,

where t is the traveltime at offset h, t0 is the zero-offset (normal incidence) traveltime and V_NMO is the normal moveout velocity (Dix, 1955). Because of the NMO correction, the larger the offset, the more the trace is stretched. This is known as NMO stretch. It changes the waveform and frequency content, thus decreasing the stack efficiency. To remove this effect a maximum percentage of stretch is allowed; in our specific case, data with a stretch larger than 45% are muted.
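The hyperbola and the stretch criterion are easy to check numerically; the t0, offset and velocity values below are made up for illustration.

```python
import numpy as np

# Sketch of the NMO equation and the stretch mute (45% as in the text).
def nmo_time(t0, h, v_nmo):
    """Reflection traveltime from t^2 = t0^2 + h^2 / V_NMO^2."""
    return np.sqrt(t0**2 + (h / v_nmo)**2)

def stretch_percent(t0, h, v_nmo):
    """NMO stretch 100 * (t - t0) / t0."""
    return 100.0 * (nmo_time(t0, h, v_nmo) - t0) / t0

# A shallow event (t0 = 100 ms) at 100 m offset with V_NMO = 800 m/s
s = stretch_percent(0.1, 100.0, 800.0)
print(round(s, 1))   # about 60% stretch: this sample would be muted
```

Shallow events at far offsets suffer the most stretch, which is why the mute removes the far-offset corner of each gather.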
Figure 12: Screenshot of the velocity picking environment. The resulting velocity model for the complete seismic section is shown below in figure 13.
2.10 Stack
Stacking is the process by which NMO-corrected seismic traces that correspond to the same CMP are summed. This greatly improves the signal to noise ratio (the ratio is proportional to the square root of the fold) and reduces the data volume by a factor equal to the fold. Figure 14 shows the NMO correction graphically and the effect of stacking. Events that do not have a hyperbolic moveout (random noise) will not line up after correction and therefore have low amplitude in the stacked section.
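The sqrt(fold) improvement can be demonstrated with a toy gather of twelve noisy copies of the same trace (all numbers synthetic):

```python
import numpy as np

# Toy demonstration that stacking improves SNR by about sqrt(fold).
rng = np.random.default_rng(0)
fold, nt = 12, 500
t = np.arange(nt) * 0.001
signal = np.sin(2 * np.pi * 30.0 * t)                # common reflection signal
gather = signal + rng.normal(0.0, 1.0, (fold, nt))   # fold noisy copies
stack = gather.mean(axis=0)                          # the stacked trace

noise_in = np.std(gather[0] - signal)                # noise before stacking
noise_out = np.std(stack - signal)                   # noise after stacking
print(round(noise_in / noise_out, 1))                # close to sqrt(12) ~ 3.5
```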
Figure 14: Schematic representation of applying the NMO correction and stack.

A stacked section was first generated using a linear velocity model. Another stack (figure 15) was generated using the velocities picked in the velocity analysis.
Comparing these two sections serves as a quality control (QC) of the picking. The RMS velocities were re-picked several times until a consistent stack was produced. A consistent stack has lateral continuity and deeper events are easier to observe. Erroneous picks (e.g. the same reflection picked with two different velocities in the same gather) were also found during the quality control of the velocity model.
2.11 Time to depth
The stacked section is still in the time domain. With our velocity model we can convert it to depth. To make sure we have realistic velocities for this conversion, a smoothing filter is first applied to the RMS velocities. This 1) improves consistency and 2) reduces picking noise. Figure 16 shows the smoothed velocity model.
Figure 16: Smoothed velocity model. The red square corresponds to the subcrop shown in Figure 25.

From these smoothed RMS velocities, interval velocities are derived using a 50 ms vertical interval. An unsmoothed version of the interval velocity model is shown in figure 17. For the depth conversion we used a smoothed version of the model shown in figure 17.
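The conversion from RMS to interval velocities follows the Dix (1955) equation; a sketch with invented velocity/time pairs:

```python
import numpy as np

def dix_interval(v_rms, t0):
    """Interval velocities from RMS velocities (Dix, 1955).
    v_rms in m/s, t0 zero-offset times in s; returns one value for each
    layer between consecutive t0 samples."""
    v_rms, t0 = np.asarray(v_rms, float), np.asarray(t0, float)
    return np.sqrt(np.diff(v_rms**2 * t0) / np.diff(t0))

# Invented picks: RMS velocities at two times 50 ms apart
print(dix_interval([1500.0, 1600.0], [0.10, 0.15]))   # ~1783 m/s
```

Because the equation differences squared velocities, small picking noise is amplified, which is why the RMS field is smoothed before this step.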
Figure 18: Depth converted stacked section. Red square corresponds to the subcrop shown in Figure 26.
3 Special project: GPR
As a special project for this course, GPR data acquired along the same line are processed and compared with the seismic reflection data. The GPR data were acquired in two frequency ranges: one high frequency (100 MHz) and one lower frequency (50 MHz). The 100 MHz data have a 0.5 m trace spacing, while the 50 MHz data have a 1 m spacing. There is a trade-off between frequency and penetration depth: higher frequencies give higher vertical and horizontal resolution, but less penetration depth (because of dispersive attenuation). GPR sections can be treated as zero-offset stacked data, which makes the processing less extensive.
v = c / √K',

where c is the speed of electromagnetic waves in vacuum (2.99792458 × 10^8 m/s) and K' is the real part of the dielectric constant. We assume the dielectric constant to have an average value in our survey area. This results in a speed of about 0.2 m/ns. We also derived the velocity by analyzing a reflection hyperbola from the data using the velocity picking tool in ProMax, which resulted in a velocity of about 0.21 m/ns. Because the GPR survey penetrates only the shallow subsurface (about 50 m), no significant change in vertical velocity is expected. For the depth conversion a constant velocity of 0.2 m/ns is used. The resulting zero-offset GPR sections are shown in figures 23 and 24. Note that figures 19, 20, 21 and 23 are flipped because of the way the data were acquired in the field, so for comparison they should first be mirrored.
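The velocity relation is easy to verify numerically; K' = 2.25 is an illustrative value that gives about the 0.2 m/ns used for the depth conversion.

```python
# Radar wave speed from the real part of the dielectric constant:
# v = c / sqrt(K').  K' = 2.25 is an illustrative value.
C_VACUUM_M_PER_NS = 0.299792458   # speed of light, metres per nanosecond

def radar_velocity(k_real):
    return C_VACUUM_M_PER_NS / k_real ** 0.5

print(round(radar_velocity(2.25), 3))   # 0.2 m/ns
```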
Figure 24: Depth converted GPR data 100 MHz (with TAR).
4 Results and conclusions
The goal of this project was twofold. The main goal was to get hands-on experience with seismic processing. As a special project the seismic data are compared with GPR data, to investigate whether the two complement each other and reflect the same geologic structure. This is done by investigating the presence and location of a fault expected in the study area.
Figure 25: Subset of the smoothed velocity model, CDP vs. time [ms].

If this behavior truly corresponds to a geological structure, it should manifest itself in our final depth stack. The final seismic stack in depth shows good continuity of some seismic events. The signal to noise ratio was greatly improved by the processing, as reflectors are now more evident and continuous. The vertical seismic resolution at greater depths (more than ~80 m) limits the range over which competent seismic reflections can be interpreted. As was shown in our velocity model, there is a possibility of a fault in the area. Analysis of the seismic section shows that, in the same area as in the velocity model, fault-like behavior can be observed in the seismic data. Figure 26 shows that the coherent package of seismic reflectors terminates at what could possibly be a fault; a likely interpretation of the fault is shown in red. The ending of the possible fault is not clear as
the resolution decreases with depth, which does not allow a definite interpretation.

Figure 26: Subcrop of the depth stack; the fault interpretation is in red.

It can be seen in figure 18 that the top 10 meters of the section are muted, because this area has a very low signal to noise ratio due to the presence of guided waves.
Figure 27: Comparison of two subsets of GPR data. Left 50 MHz, Right 100 MHz.
Figure 28: Overlap of the seismic stack in time with the 50 MHz GPR stack.

Continuity can be observed between both stacks; e.g. below CMP 140 it is easy to follow the anticline going from the GPR section at the top into the seismic data. It is important to point out that the effect of the possible fault is also observed in the GPR data at a similar location, as can be seen in Figure 27.
4.4 Evaluation
Doing this study gave us a really good grasp of how to conduct a seismic study, from acquiring data in the field to processing and producing a final section in depth. Our fellow classmates Gijs and Jelle worked in parallel on the same data set; it would be interesting to compare both results. During processing some crude comparisons were made. Although the results were not completely different, interesting differences were observed. This is not unexpected, as seismic processing is a subjective procedure.
5 Word of gratitude
We would like to thank Henk van den Meer, Kabir Roy Chowdhury and Stefan Carpentier for their time and support.
6 References
Ashton, C. P., et al., 3D Seismic Survey Design, April 1994 (picture of the NMO correction).
Dix, C. H., Seismic velocities from surface measurements, Geophysics, 20, 68-86, 1955.
Sheriff, R. E., and Geldart, L. P., Exploration Seismology, Cambridge University Press, 1995.
Schlumberger Oilfield Glossary, http://www.glossary.oilfield.slb.com/
Google Maps.