Directional Surveying
Objectives for Surveying
– Directional Sensors measure:
  • Survey Data (Static or Dynamic)
    – Inclination
    – Hole Direction (Azimuth)
  • Steering Data (Dynamic)
    – Magnetic Toolface
    – Gravity Toolface
What is Survey Data?
• A survey, or more appropriately a survey station, consists of the following components:
  – Inclination
  – Hole Direction (Azimuth)
  – Measured Depth
• The highest quality survey data is best achieved as a static measurement
• Survey data tells the directional driller where the hole has been
• Inclination and hole direction are downhole directional sensor measurements
• Measured depth is a surface-derived depth monitoring system measurement
Inclination
• At the magnetic poles, Bh = 0 and Bv = Btotal
• Bh = Btotal(cos Dip): Bh is the projection (using the dip angle) of Btotal into the horizontal plane
• Bv = Btotal(sin Dip): Bv is the vertical component of Btotal
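The two projections above can be written as a small sketch (a direct transcription of the relations; units are whatever Btotal is given in):

```python
import math

def field_components(b_total, dip_deg):
    """Split the total magnetic field into its horizontal projection
    Bh = Btotal*cos(Dip) and its vertical component Bv = Btotal*sin(Dip)."""
    bh = b_total * math.cos(math.radians(dip_deg))
    bv = b_total * math.sin(math.radians(dip_deg))
    return bh, bv
```

At the magnetic poles (Dip = 90°) this gives Bh = 0 and Bv = Btotal, as stated above.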
Magnetic Pole Movement (1945 – 2000)
(Figure: tracks of the North Pole and South Pole magnetic pole positions, 1945–2000)
Magnetic Declination
• For rectangular coordinates, arbitrary values have been established within each grid
Comparing Grid Projections
Gtotal = (Gx² + Gy² + Gz²)^1/2
Sources of Real-time Azimuth Errors
• These factors can introduce error into the hole direction value presented to the directional driller:
  – Magnetic Interference (axial or cross-axial)
  – Magnetometer or associated hardware failure
  – Calibration out of specification
  – “Bad” accelerometer input (inclination and highside toolface are part of the calculation!)
  – Mathematical Error (at 0° and 90° inclination)
  – Sensor measurement accuracy
  – Real-time Data resolution
  – Latitude, Inclination, Hole direction
  – Wrong Declination and/or Convergence
Azimuth Quality Checks
• Is the calculated Magnetic Dip value within +/- 0.3° of the Local Magnetic Dip value?
• MDIP utilizes inputs from the accelerometers and magnetometers but is not as sensitive a quality check as Gtotal and Btotal
• It is possible for the MDIP to be out of specification even if the Gtotal and Btotal are not
• NOTE: MDIP should not be used as the sole criterion to disqualify a survey if Gtotal and Btotal are within specification
Survey Quality Checks
• Gtotal = (Gx² + Gy² + Gz²)^1/2
• Btotal = (Bx² + By² + Bz²)^1/2
Given the following survey data, decide whether each quality check is within limits.
Local References: Gtotal = 1.000 g, Btotal = 58355 nT, Mdip = 75.20°
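The three checks can be collected into a small routine. The tolerances (±0.003 g, ±350 nT, ±0.3°) are the guidelines quoted later in this module, and the defaults are the local reference values above; the dip formula is the standard one derived from the dot product of the gravity and magnetic vectors:

```python
import math

def survey_quality(gx, gy, gz, bx, by, bz,
                   ref_g=1.000, ref_b=58355.0, ref_dip=75.20):
    """Return pass/fail for the Gtotal, Btotal, and MDIP quality checks."""
    gtotal = math.sqrt(gx**2 + gy**2 + gz**2)
    btotal = math.sqrt(bx**2 + by**2 + bz**2)
    # Dip is the angle of B below horizontal; since G points down,
    # sin(dip) is the normalized dot product of G and B.
    mdip = math.degrees(math.asin((gx*bx + gy*by + gz*bz) / (gtotal * btotal)))
    return {
        "Gtotal_ok": abs(gtotal - ref_g) <= 0.003,
        "Btotal_ok": abs(btotal - ref_b) <= 350.0,
        "MDIP_ok": abs(mdip - ref_dip) <= 0.3,
    }
```

For a vertical hole the Z accelerometer reads 1 g and the magnetometers see the field split by the dip angle, so a clean survey passes all three checks.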
MWD Survey Measurements
Introduction
The Directional Sensor is made up of electronic printed circuit boards, a Tensor Tri-Axial Magnetometer, a Tensor Tri-Axial Accelerometer, and a temperature sensor. These modules are configured into a directional probe and are run in the field mounted in a nonmagnetic drill collar. The Directional Sensor provides measurements that are used to determine the orientation of the drill string at the location of the sensor assembly.
The Directional Sensor measures three orthogonal axes of magnetic bearing, three orthogonal axes of inclination, and instrument temperature. These measurements are processed and transmitted by the pulser to the surface. The surface computer then uses this data to calculate parameters such as inclination, azimuth, highside toolface, and magnetic toolface.
The sensor axes are not perfectly orthogonal and are not perfectly aligned; therefore, compensation of the measured values for known misalignments is required in order to provide perfectly orthogonal values. The exact electronic sensitivity, scale factor, and bias for each sensor axis is uniquely a function of the local sensor temperature. Therefore, the raw sensor outputs must be adjusted for thermal effects on bias and scale factor. Orthogonal misalignment angles are used with the thermally compensated bias and scale factors to determine the compensated sensor values required for computation of precise directional parameters.
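The compensation chain described above can be sketched as follows. The layout (third-order polynomials in temperature per axis, then a misalignment matrix) matches the calibration discussion later in this module, but the specific structure is an illustrative assumption, not the tool's actual firmware:

```python
import numpy as np

def thermal_poly(coeffs, temp_c):
    """Evaluate a third-order polynomial in temperature (deg C)."""
    return coeffs[0] + coeffs[1]*temp_c + coeffs[2]*temp_c**2 + coeffs[3]*temp_c**3

def compensate(raw, temp_c, bias_coeffs, scale_coeffs, misalign):
    """Apply thermally modelled bias and scale-factor corrections to a raw
    tri-axial reading, then map it through the misalignment matrix onto
    ideal orthogonal probe axes. All coefficient values are placeholders."""
    bias = np.array([thermal_poly(c, temp_c) for c in bias_coeffs])
    scale = np.array([thermal_poly(c, temp_c) for c in scale_coeffs])
    corrected = (np.asarray(raw) - bias) / scale
    return misalign @ corrected
```

With zero bias, unit scale factor, and an identity misalignment matrix, the raw reading passes through unchanged, which is a convenient sanity check.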
2 4/24/2002
The microprocessor provides the control and timing interface, and the logic circuit controls the analog power switch. With the analog power switch off, only the 5-volt circuits are active and the current drain from the sub bus is approximately 8 milliamps. When the logic board switches on the analog power switch, battery power is directed to the 12-volt regulator on the analog circuit. The current drain with the analog power switch on and the sensors off is approximately 80 milliamps; with the accelerometers powered up it is approximately 120 milliamps, and with the magnetometer also powered up it is approximately 140 milliamps.
Analog Circuit
The Analog Circuit provides an interface with the inclinometer, magnetometer, and pressure
transducer sensors. The 16 channel multiplexer on the analog circuit takes input from various
sensor outputs and sends the data to the logic circuit for transmission. A sensor power switch
takes power from the 12 volt regulator and selectively powers up the accelerometers and
magnetometers. A 5 volt excitation supply from the 12 volt regulator is used to power the
pressure transducer. The status voltages appear on the surface probe test and are defined as
follows:
2. 5 Volt Supply - the 5 volt excitation supply from the 12 volt regulator that powers the pressure transducer.
Tensor Inclinometer
The TENSOR Tri-axial Accelerometer measures three orthogonal axes of inclination (Gx, Gy,
and Gz) and also includes a temperature sensor. The inclinometer has a 1g full scale output in
survey mode and a 7 g full scale output in steering mode. The sensor operates within the
following parameters:
The accelerometer's servo electronics senses the change in capacitance and applies current to the torquer coil to restore the proof mass to its original position.
The amount of current required to restore the proof mass to its original position is a function
of the amount of force applied to the accelerometer. Force is related to acceleration by F = ma.
We measure the acceleration of gravity in g's (gravity units) in three orthogonal directions
relative to the Directional Sensor probe. This allows us to calculate the inclination of the tool
relative to vertical.
The scaling of the X and Y accelerometer channels depends on the operational mode
(survey or steering), while the Z channel and the temperature sensor have the same
scaling for both modes. The full scale output voltage sensitivity for each mode is as
follows:
Tensor Magnetometer
The Tensor Tri-axial Magnetometer measures three orthogonal axes of magnetic bearing (Bx,
By, and Bz) as well as temperature. The Tensor Model 7002MK Magnetometer has an output
operating range of plus and minus 100,000 Nanotesla (the earth's field is about 50,000
Nanotesla) and operates within these parameters:
The Tensor magnetometer is a saturable core device. It consists of two coils with a core between them, which has a certain magnetic permeability. A magnetic field produced by one coil travels through the core and induces a current in the other coil. The core will only transmit a certain amount of magnetic field; that is, when the level of magnetic flux reaches a certain point, the core becomes saturated and greater amounts of flux will not pass through the core. The point at which a substance becomes saturated is a property of that substance, i.e., certain metals will saturate sooner than others. The magnetometer continually drives the core to saturation. In the presence of an external magnetic field, the point at which the core saturates is shifted. The signal shift is detected, amplified, and fed back as a bucking magnetic field to keep the core balanced around zero magnetizing force. The servo amplifier offset caused by the signal shift is further amplified and presented as the output of the magnetometer.
In the tri-axial set of magnetometers, the three flux gate channels and the temperature channel are supplied with power conditioned by a common pair of internal regulators. The individual magnetometer transducers come in biaxial sets; the magnetometer package contains two biaxial magnetometers, of which only three axes are used. The sub bus around the magnetometer requires particular attention: the current through the sub bus is alternating current, and any change in that current will produce a magnetic field that can affect the magnetometer.
The measurements that we make with the DIRECTIONAL SENSOR are made relative to the tool's physical axes. The X-axis is perpendicular to the tool's long axis and is in the direction of the scribe line etched on the DIRECTIONAL SENSOR nonmag sub. The Y-axis is also perpendicular to the long axis. The Z-axis is along the long axis of the DIRECTIONAL SENSOR, in the direction the hole is being drilled.

(Figure: tool physical axes X, Y, and Z)

The scribe line on the DIRECTIONAL SENSOR sub allows measurement of the relationship between the tool's axis and the bent sub or mud motor scribe line. This measurement is called the toolface offset. The toolface offset is measured by extending the bent sub scribe line to the DIRECTIONAL SENSOR scribe line and measuring the degrees offset with a compass. The measurement is made from _________ scribe line to _____________ scribe line using the right-hand rule: with the thumb pointing in the direction of the hole, measure in the direction the fingers of your right hand are pointing. Running a highside orientation program in the MWD software can also make the measurement. The main parameters that we calculate with the raw data from the DIRECTIONAL SENSOR are as follows:
Highside Toolface is the angle between the deflection tool scribe line and the top or highside
of the hole. This is calculated using the X-axis and Y-axis inclinometer measurements.
Magnetic Toolface is the direction that the deflection tool scribe line is pointing relative to
true or grid north. This is calculated using the X-axis and Y-axis magnetometer measurements.
Inclination is the angle between vertical and the wellbore in the vertical plane. We measure
this angle by measuring the direction that gravity acts relative to the tool. Gravity acts in a
vertical direction and has a magnitude of 1 g at sea level at the equator.
Azimuth is the direction of the wellbore relative to true or grid north in the horizontal plane.
We measure this angle by measuring the direction of the earth's magnetic field relative to the
tool.
Magnetic Declination is the difference in degrees between magnetic north and true north or grid north for a particular location on the earth. This value changes with time and location and must be determined using the software program. On a directional well it is important that the value for magnetic declination that we use is the same one that the directional driller is using. There will usually be a difference between the value that the software calculates and the one that the directional driller provides; however, always use the value provided by the directional driller.
Magnetic Field Strength is the total magnitude of the earth's magnetic field in Nanotesla for
a particular location on the earth. This value also changes with time and location and can be
determined using the software program.
Highside Toolface
The X-axis and Y-axis inclinometer measurements are required to calculate highside toolface. The figure below is a vector diagram showing the highside toolface measurement. On the left is a diagram of the tool and its relationship to the X-Y plane and the gravity vector, along with the components of gravity in the X-Y plane and on the Z-axis. Gxy is the vector sum of the X and Y components of the gravity vector measured by the tool. On the right is a diagram of the X-Y plane showing the X and Y components of the gravity vector and the sum Gxy. Highside toolface is the angle between the X-axis and the highside of the hole and is calculated as follows:
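The equation itself did not survive reproduction. A common convention (an assumption here — sign and axis conventions vary between tools) computes highside toolface from the X and Y accelerometer readings as:

```python
import math

def highside_toolface(gx, gy):
    """Highside toolface in degrees from the X and Y accelerometer readings.
    Convention assumed: HSTF = atan2(-Gy, -Gx), so HSTF = 0 when the X-axis
    points to the highside of the hole (Gx then senses -g). Actual tool
    conventions vary."""
    return math.degrees(math.atan2(-gy, -gx)) % 360.0
```

Under this convention, Gx = -g, Gy = 0 (scribe line on highside) gives 0°, and Gx = 0, Gy = -g gives 90°.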
Magnetic Toolface
The X-axis and Y-axis magnetometer measurements are required to calculate magnetic
toolface. Bxy is the vector sum of the X and Y components of the magnetic vector measured
by the tool. Magnetic toolface is the direction the scribeline is pointing and is calculated as
follows:
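The equation did not survive reproduction; a common form (again, reference and sign conventions vary between tools, so treat this as an assumption) is:

```python
import math

def magnetic_toolface(bx, by):
    """Magnetic toolface in degrees from the X and Y magnetometer readings.
    Convention assumed: MTF = atan2(By, Bx), measured from the horizontal
    component of the magnetic field; actual tool conventions vary."""
    return math.degrees(math.atan2(by, bx)) % 360.0
```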
Inclination
To calculate inclination we use the X, Y, and Z inclinometer measurements. The figure shows a diagram of the tool and the relevant axes. Again, Gxy is the vector sum of the X and Y components of the gravity vector as calculated above. Gz is the Z component of the gravity vector as measured by the tool. Gtotal is the magnitude of the total gravity vector, the vector sum of the X, Y, and Z components. This should be equal to 1 g, as long as your elevation is relatively close to sea level. Inclination is the angle between the Z-axis and vertical and is calculated as follows:
Note that since we know that Gtotal is 1 g, we can calculate inclination from only the X and Y measurements, or from only the Z measurement if one of the accelerometers fails. However, if only Gz is known, the accuracy at low angles is worse because the Z accelerometer is near full scale. For Gz only:
• Not accurate for inclination less than 15°
• +/- 1/2° accuracy for inclination greater than 15° and less than 30°
• +/- 1/4° accuracy for inclination greater than 30° and less than 45°
• +/- 1/8° accuracy for inclination greater than 45°
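The inclination relations described above can be sketched as follows (standard forms consistent with the text, assumed to match the lost equations):

```python
import math

def inclination_full(gx, gy, gz):
    """Inclination in degrees from all three accelerometer axes:
    the angle between the Z-axis and vertical."""
    gxy = math.hypot(gx, gy)          # vector sum of X and Y components
    return math.degrees(math.atan2(gxy, gz))

def inclination_gz_only(gz, gtotal=1.0):
    """Fallback inclination (deg) from Gz alone, assuming Gtotal = 1 g.
    Poor accuracy at low angles, where the Z accelerometer is near full scale."""
    ratio = max(-1.0, min(1.0, gz / gtotal))
    return math.degrees(math.acos(ratio))
```

A vertical hole (Gz = 1 g) gives 0°; a horizontal hole (Gz = 0) gives 90° from either form.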
Azimuth
Azimuth is referenced in the horizontal plane to true or grid north. The magnetic field that we measure, however, is at some angle from the horizontal; that angle is the magnetic dip angle. Therefore, to reference our measurement to true north in the horizontal plane, we must project the magnetic vector to the horizontal. This is why HSTF and inclination are needed to calculate azimuth.
Combining the above equations for raw azimuth yields the following:
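The combined expression itself was lost in reproduction. The standard long-collar raw azimuth equation — assumed here to be the one intended — is:

```python
import math

def raw_azimuth(gx, gy, gz, bx, by, bz):
    """Raw magnetic azimuth (deg) from tri-axial accelerometer and
    magnetometer readings: the standard long-collar equation
    Az = atan2((Gx*By - Gy*Bx)*Gtotal,
               Bz*(Gx^2 + Gy^2) - Gz*(Gx*Bx + Gy*By))."""
    gtotal = math.sqrt(gx*gx + gy*gy + gz*gz)
    num = (gx*by - gy*bx) * gtotal
    den = bz*(gx*gx + gy*gy) - gz*(gx*bx + gy*by)
    return math.degrees(math.atan2(num, den)) % 360.0
```

Note that at 0° inclination both numerator and denominator collapse toward zero, which is the mathematical-error condition listed earlier among the azimuth error sources.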
Survey Quality
The following items will be used to validate an MWD survey:
Gtotal - this value is equal to (Gx² + Gy² + Gz²)^1/2 and should be within ±0.003 g of the local gravity, which is 1.000 g in most locations. A Gtotal value outside of these limits may indicate that the Directional Sensor did not achieve stability during accelerometer polling, that there was a hardware failure or BHA movement, or that improper misalignment and/or scale/bias values were used.
Btotal is equal to (Bx² + By² + Bz²)^1/2 and should trend consistently over the interval of a bit run. Under ideal conditions, i.e., no cross-axial or axial magnetic interference, Btotal should read the earth's local magnetic field strength. Abrupt variations in Btotal during a bit run can be caused by a "fish", a nearby cased well bore, certain mineral deposits, solar events, localized magnetic anomalies, or a hardware failure. Since all of the above will typically affect all three magnetometer responses, magnetic interference will be detectable by tracking the Btotal value.
As a general guideline, Btotal should not vary by more than ±350 nanotesla from the local magnetic field strength or from survey to survey during a bit run. The local magnetic field strength is determined by using magnetic modeling software or by measuring it directly through in-field referencing. Surveys that do not conform to this guideline should alert the field engineer that some magnetic interference is probable or that there was a hardware failure. Btotal may also change abruptly from bit run to bit run due to a change to a BHA configuration that does not have the correct Monel spacing.
Magnetic Dip Angle should trend consistently over the interval of a bit run. Under ideal conditions (i.e., no cross-axial or axial magnetic interference or pipe movement), MDIP should read the earth's local magnetic dip angle. Abrupt variations in MDIP during a bit run can be caused by a "fish", a nearby cased well bore, certain mineral deposits, solar events, localized magnetic anomalies, pipe movement, or a hardware failure.
As a general guideline, MDIP should not vary by more than ±0.3 degrees from the local magnetic dip angle or from survey to survey during a bit run. The local magnetic dip angle is determined by using magnetic modeling software or by measuring it directly through in-field referencing. Surveys that do not conform to this guideline should alert the field engineer that some magnetic interference or pipe movement is probable, or that there was a hardware failure. MDIP may also change abruptly from bit run to bit run due to a change to a BHA configuration that does not have the correct Monel spacing.
Magnetic Interference
Magnetic interference problems when surveying a well are usually due to casing or a fish that
has been left in the hole. Unfortunately, the majority of the magnetic interference problems
occur when the accuracy of our azimuth is very critical. A well is usually kicked off just
below a casing shoe or through a window in the casing. The casing is a large concentration of
magnetic material, the ends of which act like magnetic poles from which the curving flux lines
cause magnetic interference. On production platforms or pads nearby wells can cause
interference as well.
The magnetic interference that we are primarily concerned with is in the X and Y direction.
This is due to the fact that magnetic toolface uses the X and Y magnetometers to calculate
toolface. Also with the Short collar method of surveying, only the X and Y magnetometers are
used. A good way of determining how much magnetic interference we are getting on the Z-
axis with the Short collar method is to compare Btotal measured with Btotal calculated.
The X and Y magnetometers will react to magnetic interference in the same manner as the Z
magnetometer. This would mean that a perpendicular distance of about 30' would be required
when kicking off near casing. The orientation of the casing with respect to the magnetometers
may have some effect on how much azimuth is affected. As the tool is rotated, X and Y
interference changes, but Btotal should stay the same.
Non-Mag Spacing
When kicking off a well below casing, it is necessary to have at least 10 diameters of
clearance between the shoe and the DIRECTIONAL SENSOR. When kicking off next to
another well or a fish, where the magnetic interference is perpendicular to the tool, up to 30'
clearance may be required to obtain good magnetic toolface or surveys.
Take special care when running a magnetic survey to prevent the effects of magnetic
interference. Such interference can be caused by proximity to steel collars and by adjacent
casing, hot spots in nonmagnetic collars, magnetic storms, and formation with diagenetic
minerals.
Nonmagnetic drill collars are used to separate the electronic survey instrumentation from the magnetic fields of the drill string both above and below, and to prevent the distortion of the earth's magnetic field at the sensor. The collars are of four basic compositions: (1) K Monel 500, an alloy containing 30% copper and 65% nickel; (2) chrome/nickel steels (approximately 18% chrome, 13% nickel); (3) austenitic steels based on chromium and manganese (over 18% manganese); and (4) copper beryllium bronzes.
Currently, austenitic steels are used to make most nonmagnetic drill collars. The disadvantage of the austenitic steel is its susceptibility to stress corrosion in a salt mud environment. The K Monel and copper beryllium alloys are too expensive for most drilling operations; both, however, are considerably more resistant to mud corrosion than austenitic steels. The chrome/nickel steel tends to gall, causing premature damage to the threads.
When the electronic survey instrumentation is located in a nonmagnetic collar between the bit and steel collars, the distortion of the earth's magnetic field is minimized and the instrumentation is isolated from drill string interference generated both above and below it. The number of required nonmagnetic collars depends on the location of the well bore on the earth and on the inclination and direction of the well bore. The figure above is a compilation of empirical data that is fairly reliable in selecting the number of nonmagnetic drill collars. First, a zone is picked where the well bore is located: either zone 1, 2, or 3. Then the expected inclination and direction are used to locate the curve, either A, B, or C.
Example: on the north slope of Alaska a well plan calls for an inclination of 60 degrees and a magnetic north azimuth of 50 degrees.
Solution: the north slope of Alaska is in zone 3. From the chart for zone 3, at 60 degrees inclination and 50 degrees magnetic north azimuth, the point falls in Area B, indicating the need for two 30' nonmagnetic collars with the electronic survey instrumentation unit 8-10 feet below the center.
This is just a recommendation, and the survey should always be checked to make sure it is within acceptable tolerances of the (non-corrupted) earth's magnetic field.
(Chart: nonmagnetic drill collar selection for zones 1, 2, and 3; each panel plots inclination against direction angle from magnetic N or S, 10°-90°)
Survey Accuracy
Survey accuracy is a function of both instrument related uncertainties and systematic
uncertainties. Instrument related uncertainties include such things as sensor performance,
calibration tolerances, digitizer accuracy, and resolution. This is defined as the baseline
uncertainty and it is present in all survey sensors. Systematic uncertainties are a function of
magnetic interference from the drill string and can be reduced by housing the instrument in a
longer nonmagnetic drill collar. The total uncertainty is equal to the baseline uncertainty plus
the systematic uncertainty.
The long collar azimuth, when measured in an environment free from magnetic interference,
will always provide the most accurate azimuth, the only uncertainty being the baseline
uncertainty. The Short collar algorithm corrects for systematic uncertainties due to the
presence of magnetic interference along the Z axis of the magnetometer. For the Short collar
method, the systematic uncertainty is in the values that we obtain for the magnetic field
strength and dip angle. Due to the fact that this uncertainty is along the Z axis, survey
accuracy will be a function of inclination and azimuth, as well as dip angle and magnetic field
strength.
If we consider only the baseline uncertainty, in the absence of magnetic interference, survey accuracy will be a function of inclination and magnetic dip angle. This relationship is shown in the figures below, where Bn (Bnorth) is defined as the projection of the magnetic field vector in the horizontal plane, Berror is defined as the baseline uncertainty and has a constant value, and Bref is defined as the measured magnetic field vector (Bref = Bn + Berror).
As shown in the figure below, as the inclination increases, the horizontal projection of Berror
is a larger percentage of Bref resulting in a decrease in survey accuracy. In the figure below,
the effect of magnetic dip angle on survey accuracy is shown. As the magnetic dip angle
increases, the size of the horizontal projection of Bn decreases, resulting in a larger percentage
of Berror in Bref. Thus anything that causes the horizontal projection of Berror to increase or
Bn to decrease results in decreased survey accuracy.
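The geometry above supports a simple order-of-magnitude estimate (an illustrative sketch, not a tool specification): the worst-case angular error is roughly the angle that the horizontal part of Berror subtends at the tip of Bn:

```python
import math

def azimuth_uncertainty_deg(btotal_nt, dip_deg, berror_nt):
    """Approximate worst-case azimuth uncertainty in degrees.
    Bn = Btotal*cos(dip) is the horizontal projection of the field; the
    angular error is roughly atan(Berror / Bn), which grows as the dip
    angle (and hence latitude) increases."""
    bn = btotal_nt * math.cos(math.radians(dip_deg))
    return math.degrees(math.atan2(berror_nt, bn))
```

For a 50,000 nT field and a 350 nT error, the estimate grows from about 0.4° at zero dip to about 1.2° at a 70° dip angle, matching the trend described above.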
For systematic uncertainty, the uncertainty is along the Z-axis. This will result in decreased
survey accuracy when drilling east or west as opposed to drilling north or south. This is due to
the fact that Berror will tend to pull Bref in the direction of the Z-axis, away from Bn. This
relationship is shown in the figure below.
alignment corrections over a range of temperatures from room temperature to the upper operating limit. The data is fit to a third-order polynomial so that the correction factors can be applied at any given temperature within the operating range of the tool. To be certain that a calibration technique will meet performance as well as maintenance objectives, it must meet the following:
2. Repeatability
The calibration is performed at the highest level of assembly, through the instrument's data acquisition system and final housing. This allows a total package model to be built so that errors do not accumulate as separate modules are incorporated into each other. Repeatability and tolerance to positioning errors during calibration are achieved by establishing specific performance standards for each sensor and through the methodology of the calibration itself. Reliability under downhole conditions is addressed at the Materials Testing Laboratory by exposing each sensor to vibration and thermal cycling while monitoring its output. Reliability is also achieved through failure analysis and design modification of the sensor package.
Calibration Methodology
The calibration procedure consists of rotating the sensor through the field of investigation for each axis and comparing the output with known values. Consider the ideal response of a single-axis rotation: at 0 degrees the sensor axis is aligned with the field and the output voltage is at a maximum. As the sensor is rotated counterclockwise, the voltage decreases until, at 90 degrees, the output reaches 0 volts. Continuing counterclockwise, the output voltage goes negative above 90 degrees and reaches a maximum negative value at 180 degrees. The response from 180 to 360 degrees is similar. Note that this response applies to both accelerometers and magnetometers when rotated through the gravity or magnetic field.
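The ideal response just described is simply a cosine of the rotation angle; as a sketch:

```python
import math

def ideal_sensor_output(v_max, angle_deg):
    """Ideal single-axis sensor response: maximum output at 0 deg (axis
    aligned with the field), zero at 90 deg, maximum negative at 180 deg."""
    return v_max * math.cos(math.radians(angle_deg))
```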
Scale factor corrections scale the output of the sensor to a given standard so that all sensors
will have the same voltage response to a given field. Alignment errors are positioning errors
between the individual transducers and the DIRECTIONAL SENSOR probe true physical
axis.
The computation of bias, scale factor, and alignment corrections based on the examination of a single axis would put considerable accuracy requirements upon both the calibration fixtures and the personnel that operate them. Performing an analysis using data simultaneously obtained from multiple axes greatly reduces sensitivity to positioning errors and improves repeatability.
References
1. Estes, R. A., and Walters, P. A., "Improvement of MWD Azimuth Accuracy by use
of Iterative Total Field Calibration Technique and Compensation for System
Environmental Effects", SPE paper presented at the 1986 MWD Seminar, May 16.
2. Russell, A. W., and Roesler, R. F., "Reduction of Nonmagnetic Drill Collar Length
Through Magnetic Azimuth Correction Technique", paper SPE / IADC 13476 presented
at the 1985 Drilling Conference, New Orleans
SONATRACH MWD MODULE
Gamma Ray
(Figure: gamma ray log response through shale, a reservoir containing gas, oil, and salt water, shale, and salt)
Gamma Ray Sensor Theory
• Natural Gamma Ray devices are “passive” detectors of radioactive gamma ray decay occurring within formations
• The three most common gamma emitting isotopes found in the earth’s crust are Potassium-40, Thorium-232, and Uranium-238
• High gamma counts measured by the sensor indicate a high concentration of radioactive material
• Natural gamma devices cannot distinguish the origin of the gamma radiation because of the type of detector they employ (Geiger-Mueller tubes)
Gamma Ray Sensor Theory
• Potassium and Thorium are typically associated with clay minerals, which are a large component in SHALE
• Log analysts generally infer that high gamma count formations are shale and low gamma count formations are “non-shales” (sandstone, limestone, halite, gypsum, coal, etc.)
• Gamma count values higher than the shale baseline are uncommon and are typically seen in rock of volcanic origin or in permeable reservoir rock where uranium has precipitated out in the pore space
• Gamma Ray sensors indicate matrix clay content, but DO NOT directly reveal fluid contents (i.e., gas, oil, water)
• Can be run in any environment – air, any salinity fluid, oil-based fluids, open hole or cased hole wells
Gamma Ray Sensor Theory
• Spectral Gamma Ray devices are also “passive” detectors of radioactive gamma ray decay
(Figure: decay spectrums of Potassium (1.46 MeV peak), Thorium, & Uranium)
• Lithology Identification
• Formation Thickness
• Stratigraphic Correlation
• Geosteering
• Shale Volume Estimation
Gamma Ray Sensor Applications
• Lithology Identification
  – Shale versus “non-shale” indicator
  – Low gamma response can indicate potential reservoir rock
• Formation Thickness
  – Differences in the radioactivity level between formations allow log analysts to use gamma data to determine formation thickness
  – The thick sandstone interval in the example is well defined on the gamma curve
Gamma Ray Sensor Applications
• Stratigraphic Correlation
  – Gamma data can be used to correlate formation tops and “marker beds” between nearby wells to help determine geologic structure and the areal extent of the reservoir
  – Marker beds generally show responses that are very different from the surrounding beds
Gamma Ray Sensor Applications
• Geosteering
  – The intentional directional control of a well based on the results of downhole geological logging measurements rather than three-dimensional targets in space, usually to keep a directional wellbore within a pay zone
  – In mature areas, geosteering may be used to keep a near horizontal wellbore in a particular section of a reservoir
  – Azimuthal Gamma Ray sensors were designed specifically for geosteering applications
Gamma Ray Sensor Applications
(Figure: gamma log with clean line at 25 api and shale baseline at 75 api)
Depth of Investigation
• The maximum radial distance from which the detectors can measure gamma counts
• Dependent upon the travel distance of a gamma ray
• Typically, 50% of the measured gamma rays come from a radius of 4” (10 cm)
• The stated depth of investigation of the gamma module is 9 - 12” (23 - 30 cm)
Vertical Resolution
• Dependent upon the length of the detector (4”)
• Gamma module vertical resolution is stated to be 12” (31 cm)
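Shale Volume Estimation, listed earlier among the applications, follows directly from the clean line (25 api) and shale baseline (75 api): the standard first-pass estimate is a linear interpolation between the two (nonlinear corrections such as Larionov's exist but are not shown):

```python
def shale_volume(gr, gr_clean=25.0, gr_shale=75.0):
    """Linear gamma ray shale volume: 0 at the clean line, 1 at the shale
    baseline. Defaults use the 25/75 api lines from the example log."""
    vsh = (gr - gr_clean) / (gr_shale - gr_clean)
    return min(1.0, max(0.0, vsh))  # clamp to [0, 1]
```

A reading of 50 api on this scale would be interpreted as roughly 50% shale.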
Gamma Ray Data Interpretation
(Figure: gamma log through shale, a sandstone containing gas, oil, and salt water, and shale below)
• Lithology response is different between shale and sandstone due to the varying amounts of radioactivity within the matrix of each
• No change in gamma response in the sandstone despite the change in fluid type through the formation
• Gamma data can NOT be used to identify the presence or type of hydrocarbon in the formation
Gamma Ray Data Interpretation
(Figure: gamma response through a halite interval)
Gamma Ray Sensor Theory
2 4/8/2002
A gamma sensor counts the gamma rays emitted by the formation. The relationship between
counts and gamma ray energy from potassium, thorium, and uranium is shown in the figure
below.
Figure 2 - Complex Spectrum Observed from a Radioactive Source Containing K, Th, and U
For energy to be conserved, the total kinetic energy of the two balls after the collision cannot
be greater than the total kinetic energy of the balls before the collision. If the second ball is at
rest before the collision, as we have said, then the total kinetic energy before the collision is
simply that which the first ball had before the collision. To conserve momentum, the vector
sum of the momentum of the balls following the collision must be the same as the vector sum
of the momentum of the balls preceding the collision.
In the interaction between a gamma ray and an electron, the gamma ray is the incoming
particle with both energy and momentum. In the collision, some energy and momentum are
transferred to the electron from the gamma ray. The Compton interaction is perfectly “elastic”,
in that all of the energy lost by the gamma ray appears as kinetic energy gained by the
electron. This is different from the billiard ball case in that the sum of the kinetic energies of
the balls after the collision is always slightly less than the kinetic energy of the cue ball before
the collision because some energy is transformed into heat and sound during the collision. The
billiard ball collision is always an “inelastic” collision. The directions of travel of the electron
and gamma ray after the collision are such that their combined momentum after the collision is
the same as before the collision.
The specific case of head-on collisions is of particular interest in both billiards and Compton
scattering, because in such a collision the maximum amount of energy is transferred. When two
billiard balls collide head-on, the energy and momentum of the first ball are completely
transferred to the second, with the result that the first stops completely at the interaction site
and the second continues on with the velocity and energy that the first ball had before the
collision. The laws of physics allow this complete transfer of energy and momentum to occur
only in head-on collisions between balls -- or particles -- of equal mass.
For Compton scattering, however, the gamma ray has no mass; as a consequence, it is
impossible for it to completely transfer its energy to the electron. So a head-on collision
between a gamma ray and electron may occur, but the maximum amount of energy that can be
transferred from the gamma ray to the electron in the Compton interaction will always be less
than the total energy of the gamma ray, and the gamma ray will always still exist after a
Compton scattering event.
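The kinematics described above can be illustrated with the standard textbook Compton formula (not taken from this manual); the sketch below shows that even in a head-on, 180-degree scatter the gamma ray retains some energy, so it can never hand all of its energy to the electron:

```python
import math

# Standard Compton scattering formula (an assumption; not from this manual).
M_E_C2_KEV = 511.0  # electron rest energy, keV

def scattered_gamma_energy(e_gamma_kev, theta_deg):
    """Energy of the gamma ray after scattering through angle theta."""
    cos_t = math.cos(math.radians(theta_deg))
    return e_gamma_kev / (1.0 + (e_gamma_kev / M_E_C2_KEV) * (1.0 - cos_t))

def max_energy_to_electron(e_gamma_kev):
    """Maximum kinetic energy the electron can receive (head-on, 180-degree scatter)."""
    return e_gamma_kev - scattered_gamma_energy(e_gamma_kev, 180.0)

# Example with the 1460 keV potassium-40 line: the scattered gamma ray survives
# with a reduced but nonzero energy, so the electron gets less than the full amount.
e0 = 1460.0
assert scattered_gamma_energy(e0, 180.0) > 0.0
assert max_energy_to_electron(e0) < e0
```

For any incident energy the scattered gamma ray keeps a positive remainder, which is the point the text makes about the massless gamma ray surviving every Compton event.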
Photoelectric Effect
While gamma rays always emerge from the Compton interaction, albeit with a reduced
energy, they do completely disappear as a result of the third form of gamma ray interaction
with matter, the photoelectric interaction, or photoelectric effect. In this case, a gamma ray
encounters an electron bound in an atom and the energy of the gamma ray is completely
absorbed by the electron - atom system. The energy of the gamma ray is distributed between
the electron, which is completely ejected from the atom as a result of the interaction, and the
remainder of the atom (now a positive ion after the ejection of the electron). The reason the
gamma ray can completely disappear in this case, and not in the case of the Compton
interaction, is the fact that after the interaction there are two particles -- the electron and the
positive ion -- which in combination can always allow momentum to be conserved in the
interaction.
It may be useful at this point to imagine a cloud of gamma rays in the vicinity of the source.
Near the source, the cloud has its highest density. The gamma rays have their highest energies
near the source and are moving primarily in a direction radially away from it in this region. As
the distance from the source increases, gamma rays are likely to have scattered at least once,
with the result that the average energy of the gamma rays in the cloud is decreased.
With greater distances from the source, the probability of additional scatterings is larger and
the average energy of the gamma rays in the cloud is even lower. The density of gamma rays
in the cloud decreases with increasing distance from the source. This is true for two reasons:
first, simple geometrical spreading occurs; second, the decreased energy of the gamma rays
with increasing distance renders their disappearance due to the photoelectric effect
progressively more likely. At sufficiently great distances from the source, the gamma rays will
all have been reduced in energy through multiple scatterings until they are removed altogether
by the photoelectric interaction.
Gamma Ray Sensor Hardware
Geiger-Mueller Tubes
The G-M tube has a crude detection capability. The tube consists of a metal cylinder, which
functions electrically as a cathode, and a single anode wire down its center. The tube contains a
fill gas (e.g., neon) together with a halogen quench gas (e.g., bromine) in which ionization takes
place. A high voltage (usually >1000 volts) is placed between the wire and the cylinder, with
the wire functioning as the (positive) anode.
Operationally, a gamma ray is detected when it interacts with an electron in the wall of the tube
(either through Compton scattering or the photoelectric effect) causing the electron to be ejected
into the gas of the tube. Interactions in the gas are much less likely than in the wall because of the
much lower density of the gas.
Once the negatively charged electron is in the gas it is attracted to the positive anode wire. As it
approaches the wire, it gains energy from the electric field, and collides with atoms of gas in the
tube causing further ionization in the tube gas. The free electrons created in these collisions are
also attracted to the anode, and as they gain energy, they ionize more atoms themselves. A
multiplicative effect ensues, which eventually results in a momentary electrical breakdown of the
tube. This is observed as a voltage pulse across the cathode and anode of the tube, and may be
counted electronically.
The G-M tube is a simple, rugged device. However, one of its disadvantages is its low counting
efficiency. The most likely place for an interaction causing a breakdown is in the small volume
constituting the inner surface of the tube. The gas itself is of low enough density that ionizing
events are seldom initiated in it. Another disadvantage is the fact that the G-M tube gives no
information about the energy of the gamma ray it detected. Low and high energy gamma rays will
produce pulses that are, for all practical purposes, indistinguishable.
Scintillation Detectors
Detection of gamma rays using scintillators represents a significant improvement over G-M
tubes, addressing both their efficiency and spectral deficiencies. The most common scintillation
detector, and the one used in MWD/LWD sensors, is the sodium iodide (NaI) crystal.
Again, the detection occurs as the gamma ray interacts with the electrons in the NaI crystal, either
through Compton scattering or the photoelectric event. However, the effective volume of the
detector is that of the crystal (to be compared with the small volume defined by the inner surface of
the G-M tube). That is, an interaction occurring anywhere in the crystal will be detected. This
means that an NaI crystal will be as effective at detecting gamma rays as a G-M tube(s) occupying
many times the NaI volume.
A second desirable feature of the NaI detector is its energy, or spectral, sensitivity. Spectral
sensitivity follows from the way detection is actually accomplished. When a gamma ray interacts
with an electron in the NaI crystal through Compton or photoelectric mechanisms, all
(photoelectric interaction) or part (Compton) of the gamma energy will be imparted to the electron.
In its turn the electron interacts with atoms in the NaI crystal, and it is a property of the crystal that
a repeatable fraction of the energy of the electron is converted to light within the crystal. Hence the
term “scintillation detection”. The amount, or intensity, or brightness of the light pulse is
proportional to the energy of the electron.
The light that is produced by even the highest energy gamma ray is still much too dim to be seen
with the unaided eye; however, it may be detected by converting it to an electronic signal and
amplifying that signal using a photomultiplier tube (PMT). Such a tube is connected optically to
the crystal, and light entering the tube from the crystal is converted to an electrical pulse. The
electrical pulse amplitude is proportional to the brightness of the light pulse and, therefore,
proportional to the energy of the electron in turn. Thus, the NaI crystal/PMT combination
constitutes a detector capable of measuring the energy given to the electron by the gamma ray.
Note again that the energy obtained by the electron as a result of Compton scattering is not that of
the gamma ray but some lesser amount. However, if the scintillation crystal is large enough, it is
possible that successive multiple scatterings of an individual gamma ray can occur within the
crystal until all of the gamma energy is converted to light. Since the gamma ray travels with the
speed of light, such multiple events are all simultaneous within the resolution of the electronics.
Thus, the energies of the electrons created by all the interactions all add together to produce an
optical pulse, and in its turn an electronic pulse that is proportional to the energy of the fully
absorbed gamma ray. In this case then, a relationship may be drawn between electronic pulse
amplitude and gamma ray energy.
Borehole Effects
On the other hand, the opposite effect is caused by the use of radioactive drilling fluid additives,
such as KCl. The Gamma Ray sensor detects the radioactivity of the potassium, which results in
an increase in absolute values, particularly in the area of a washout. Introduction of KCl into the mud
system causes a positive baseline shift, as shown in the figure below.
Reservoir Rocks
Reservoir rocks are generally sandstone (SiO2), limestone (CaCO3), or dolomite (CaMg(CO3)2). In
“clean”, clay-free reservoirs the gamma ray response will be fairly low, but not zero, due to the
presence of some radioactive impurities. However, if clay is present in a reservoir rock, the gamma
ray response will be somewhere between the clean zone and a shale response.
Salt
Halite is crystalline NaCl, with no radioactive impurities. The gamma response will be very close
to zero.
Note: Coal, anhydrite, and gypsum appear similar to salt on a log.
Hot Streaks
High radiation zones, or hot streaks, are usually thin layers deposited during time periods
associated with volcanic activity. These streaks typically have a much higher gamma response
than the shale baseline.
Uranium
Uranium is soluble in water and can migrate into permeable beds. This can mask the response of a
reservoir rock if uranium is present.
Overpressured Zones
In an overpressured zone, excess bound water replaces matrix thereby reducing the overall gamma
response. This trend is very slight and not very noticeable over a short interval on the Gamma
curve.
Depth of Investigation
Generally speaking, the gamma ray sensor measures naturally occurring radiation from within 30
cm (both horizontally and vertically) of the sensor detector. The relationship between the percent
of contribution to the radiation signal measured by the detector and the distance from the borehole
wall is shown in the figure below.
Important: Data density and vertical resolution are frequently confused. Vertical resolution is
defined by the sensor design, not by the data rate. Effective resolution can be decreased, however,
for any sensor if the sample rate is insufficient.
In order to resolve these two requirements, we must develop guidelines based on minimum
acceptable precision and minimum acceptable data density, and then choose the appropriate
sampling intervals and data densities to meet these minimum requirements.
Gamma sensors count gamma rays during the entire sampling period. This means that the
distance the drill pipe moves during the sampling interval is significant. Although we plot
only one point at the end of the sampling interval, this value represents an average of the
formation encountered during the entire sampling interval. Nuclear sensors will provide a
legitimate average of the formations traversed, and will merely tend to smear the beds rather
than miss them entirely.
However, as we decrease the sampling interval in an attempt to increase data density, we know
that the precision of the measurement decreases. The statistical error in a 16 second
measurement will be exactly twice the error of a 64 second measurement. At 4 seconds, the
error will be double the 16 second error, and so on. There is a point (different for each sensor)
where the error (noise) becomes very large compared with the measurement itself, and it is at
this point that the data becomes essentially meaningless.
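The scaling described above can be sketched assuming Poisson counting statistics (a standard assumption for nuclear counting; the count rate below is hypothetical): relative error goes as one over the square root of the counts, and counts scale with the sampling interval, so quartering the interval doubles the error.

```python
import math

# Poisson counting statistics sketch: relative (1-sigma) error ~ 1/sqrt(counts).
def relative_error(count_rate_cps, interval_s):
    """Fractional statistical error of a counting measurement."""
    counts = count_rate_cps * interval_s
    return 1.0 / math.sqrt(counts)

rate = 100.0  # hypothetical count rate, counts per second
err_64 = relative_error(rate, 64.0)
err_16 = relative_error(rate, 16.0)
err_4 = relative_error(rate, 4.0)
assert abs(err_16 / err_64 - 2.0) < 1e-9  # 16 s error is twice the 64 s error
assert abs(err_4 / err_16 - 2.0) < 1e-9   # 4 s error is twice the 16 s error
```

The same relation explains why averaging several short samples recovers the precision of one long sample: the total counts are what matter.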
Important: Any time there is a question about sample rates, it is better to err on the side of
choosing a faster sampling rate. In the event the statistics are poor, we can always
average multiple measurements and achieve the same statistical precision as would have been
achieved with the longer sampling interval. Going in the reverse direction is impossible. There
is no way to improve the effective resolution of a log made with long sampling intervals.
A full understanding of the effects of nuclear statistics on log quality is imperative for
individuals working with such logs. Accuracy and precision are not interchangeable terms.
Accuracy is the extent to which a measuring device is capable of determining the true value of
the parameter being measured. Precision is the extent to which repeated measurements made
by the device agree with one another.
This is significant because a very precise measurement will have very little variation from one
measurement to the next, but may or may not be accurate. An accurate device may yield
individual measurements that differ greatly from the true value, but the average of multiple
measurements will be equal to the true value.
Since gamma sensors rely on measurements of the rate at which nuclear particles enter a
detector, they are, by definition, statistical in nature. All purely statistical measurements can
be assessed for accuracy only through probability theory. This means that unless there is some
non-statistical method for measuring the parameter of interest, one can never measure the
absolute accuracy of the parameter. The only way to measure gamma values in any zone is to
make repeat measurements with a sensor of known precision, and average them to determine
the true value.
When designing a gamma ray sensor, trade-offs between precision, accuracy, and sampling
interval must be considered. Accuracy of gamma ray sensors is adjusted through appropriate
calibration procedures. Precision, however, is a function of the detector efficiency. In nuclear
statistics, the number of counts per second detected is generally directly related to the
measurement precision. Thus, a sensor with 90 percent efficiency is likely to have better
precision than a sensor with 10 percent efficiency, since an error of 10 counts in 100 is much
greater than an error of 10 counts in 1000.
To calculate the shale volume (Vsh) of any zone, determine the denominator of the equation
by subtracting the clean sand reading (GRclean) from the shale baseline reading (GRsh).
Then, determine the numerator of the equation by subtracting the clean sand reading
(GRclean) from the reading in the zone of interest (GRzone). Calculate the shale volume
(Vsh) by dividing the numerator by the denominator.
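The steps above amount to the standard linear gamma ray shale index, Vsh = (GRzone − GRclean) / (GRsh − GRclean); a minimal sketch, using hypothetical readings on the 25/75 API clean/shale scale from the slides:

```python
# Linear shale volume estimate from gamma ray readings (values hypothetical).
def shale_volume(gr_zone, gr_clean, gr_shale):
    """Vsh = (GRzone - GRclean) / (GRsh - GRclean)."""
    return (gr_zone - gr_clean) / (gr_shale - gr_clean)

vsh = shale_volume(gr_zone=45.0, gr_clean=25.0, gr_shale=75.0)
assert abs(vsh - 0.4) < 1e-9  # 45 API reads 40% of the way from clean to shale
```

A zone reading at the clean line gives Vsh = 0, and one at the shale baseline gives Vsh = 1.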
• Well offset correlation - The gamma ray sensor provides excellent correlation in field
appraisal and development drilling, particularly if initially used on exploratory and delineation
wells for development well correlation. In many fields, the gamma ray sensor alone is suitable
for real-time casing and core point selection.
• Safety - The gamma ray sensor will accurately chart bed stratification. In combination with
the resistivity sensor, the gamma ray sensor enables pore pressure prediction, leading to faster,
safer exploratory drilling and operations in difficult fields.
• Log of record – In combination with the resistivity sensor, the gamma ray sensor will
provide intermediate logs of definitive quality for archive uses while providing information to
improve drilling operational efficiency.
• Enhanced interpretation of wireline logs - Higher data sampling rates (recorded logs) give
greater definition and more exact bed delineation, which aids in identifying the
smoothing/averaging effects of high wireline traverse speeds. Because at typical drilling rates
the gamma ray sensor passes formations slower than typical wireline gamma ray sensors, the
resulting logs have higher definition and less statistical uncertainty.
• Directional control - The gamma ray sensor allows for improved trajectory monitoring. By
removing one bank of detectors and replacing them with shielding, azimuthal readings may be
taken. In its azimuthal configuration, the gamma ray may be used not only to differentiate
between shale and reservoir rock, but also to determine whether the wellbore has exited
through the top or bottom of the reservoir.
SONATRACH MWD MODULE
Resistivity
[Figure: typical resistivity log response through shale, gas, oil, salt water, shale, and salt]
Resistivity Sensor Theory
• Physical Principles
• Electromagnetic wave resistivity sensors respond to the way radio
frequency (RF) waves propagate (move) through the formation
• The propagation of an RF wave is controlled by the following
physical properties of the material through which the wave is
moving:
• Electrical Conductivity, which is the ability of a material to conduct an
electrical current
• Dielectric Permittivity, which is the ability of a material to store electrical
charge
• Magnetic Permeability, which is the ability of a material to become
magnetized
• At transmission frequencies below 10 MHz, the formation
conductivity is the dominant factor
• If reasonable assumptions are made for the dielectric permittivity
and magnetic permeability, measured wave parameters can be
related to the formation resistivity
Resistivity Sensor Theory
• What does the
Electromagnetic Resistivity
sensor measure?
• Phase Shift - the time
difference of arrival of the RF
wave between the two receivers
• Attenuation - the difference in
intensity of the RF wave signal
at each of the receivers
• Both the phase shift and
attenuation data can be used to
compute a formation resistivity
value
Resistivity Sensor Theory
• The dynamic range of the phase measurement is between 0.1 and 1000 ohm-m
[Figure: phase shift (°) vs. resistivity (ohm-m) on log-log scales]
Resistivity Sensor Theory
• The dynamic range of the attenuation measurement is between 0.1 and 100 ohm-m
[Figure: attenuation (-dB) vs. resistivity (ohm-m) on log-log scales]
Resistivity Sensor Theory
• Determine Rt in
Invaded Zones
• MWD data is less
affected by mud
invasion than
wireline data
• Typical MWD
exposure time is less
than one hour,
whereas wireline
exposure time is
generally from one to
seven days
Resistivity Sensor Applications
• Quantitative Petrophysical
Evaluation to calculate
formation porosity, water
saturation, and in-situ
reserves
• Archie’s equations provide
a quick-look estimation
• Other calculation methods
are much more rigorous
and take into account
many more parameters
Resistivity Sensor Applications
• Time-Lapse Logging aids
in identifying movable
fluids
• Re-logging a potential pay zone and comparing
the resistivity values from
each pass can
qualitatively indicate
formation permeability
• Multiple spacing
resistivity sensors can
provide similar
information in a single
pass
Resistivity Sensor Applications
• Predict Abnormal
Formation Pore Pressure
• By monitoring shale
resistivity values, the
presence of an overpressure
transition zone can be seen
• Drilling into formation
pressure that is higher than
borehole pressure can cause
a “kick” and if uncontrolled
can result in a “blowout”
Resistivity Sensor Applications
• Geosteering
• Objective: Keep wellbore in oil zone (avoid shale, gas, and water)
[Figure: example geosteering log display with tracks for water saturation (SW, %), neutron porosity (CNφ®, LS pu), resistivity (EWR®, ohm-m), gamma ray (DGR™, API) with rate of penetration (ROP, ft/hr), and well path TVD (ft) through Zones A and B, plotted against measured depth (ft)]
Resistivity Sensor Data Interpretation
• General Resistivity Response
• Shale response is typically low due to the high amount of water associated with clays
• The hydrocarbon response (gas and oil) is generally high, and very different from the salt water zone (low)
• Salt has no fluid associated with it; therefore its response is infinite (off scale high)
[Figure: resistivity response through shale, gas, oil, salt water, shale, and salt]
Resistivity Theory and Application
Physical Principles
The Resistivity sensor responds to the way in which RF electromagnetic waves propagate
through the formation. The propagation of an RF wave is controlled by the following physical
properties of the material through which the wave is propagating:
• Electrical conductivity (σ), which is the ability of a material to conduct an electrical
current.
• Dielectric permittivity (ε), which is the ability of a material to store an electrical charge.
• Magnetic permeability (µ), which is the ability of a material to become magnetized.
At frequencies below about 10 MHz, the formation conductivity is the dominant factor
affecting RF wave propagation. Thus, by making reasonable assumptions for the dielectric
permittivity and magnetic permeability values, measured wave propagation parameters (phase
shift and attenuation) can be related to the formation conductivity or Resistivity.
Resistivity Sensor Theory and Application
The wavelength, frequency, and velocity of a propagating wave are related by the following
equations:
V = f · λ

where λ is the wavelength, V is the velocity of the propagating wave, and f is the frequency;
equivalently, V = (ω / 2π) · λ, where ω = 2πf is the angular frequency. The wave travels at
higher speeds in resistive formations than it
does in conductive formations. Thus, the transmitted Resistivity signal will have a longer
wavelength in higher-Resistivity formations and a shorter wavelength in more conductive,
lower-Resistivity formations. The velocities of propagating electromagnetic waves can be
expressed in the traditional units of velocity (length/time), or alternatively, in what electrical
engineers call phase shift, which has units of degrees. Historically, people have preferred to
use the concept of phase shift when describing the velocity measurement made by an MWD
propagation Resistivity tool.
The shift in phase that occurs between two receivers is basically a measurement of the fraction
of one wavelength that occurs over the 8-inch spacing that separates the two receivers
(see Figure 1). For example, if the wavelength (one complete 360-degree cycle) were 16
inches, then we would expect to measure a phase shift of 180 degrees, or one-half of a
complete cycle, between two receivers spaced 8 inches apart. In a more resistive formation, in
which the wavelength was 80 inches, we would expect to measure a phase shift of (8/80) ×
360, or 36, degrees between the two receivers. Finally, as the conductivity approaches zero
(Resistivity approaches infinity), the wavelength becomes many meters in length and the
measured phase shift that occurs over the Resistivity tool’s 8-inch spacing becomes very
small. Table 1 summarizes the interrelationships among these parameters.
4/24/2002
Table 1 Velocity, Phase Shift, and Wavelength, as a Function of Resistivity and Conductivity
The three phase shift-to-Resistivity transforms that we use are graphically depicted in Figure 2
below. These transforms are based on well established theoretical models that have been
bench-marked with laboratory data. The lab data were acquired in very large fiberglass tanks
filled with salt water of known Resistivity.
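The phase shift examples above can be sketched numerically (assuming the 8-inch receiver spacing; the function name is illustrative):

```python
# Phase shift expected across the receiver pair for a given wavelength.
def phase_shift_deg(wavelength_in, spacing_in=8.0):
    """Fraction of one 360-degree cycle spanned by the receiver spacing."""
    return (spacing_in / wavelength_in) * 360.0

assert abs(phase_shift_deg(16.0) - 180.0) < 1e-9  # half a cycle over 8 inches
assert abs(phase_shift_deg(80.0) - 36.0) < 1e-9   # more resistive, longer wavelength
assert phase_shift_deg(8000.0) < 1.0              # near-infinite resistivity: tiny shift
```

As the wavelength grows with resistivity, the measured phase shift shrinks toward zero, which is why the phase measurement loses sensitivity at high resistivity.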
Attenuation Measurement
An EM wave will decay exponentially as it propagates through a conductive formation. The
rate of decay, or attenuation, is directly proportional to the formation conductivity.
Attenuation (sometimes referred to as amplitude ratio) is calculated from the “ratio” of the
amplitudes of the signals detected at the two receivers, which are at different distances from
the transmitter. A root-mean-squared (RMS) circuit in the receiver electronics is used to
measure these signal amplitudes. This circuit outputs an RMS DC voltage, which is
proportional to the amplitude of the detected AC signal.
The most common unit for quantifying the level of attenuation is the decibel (dB).
This “amplitude ratio” unit is defined as:

Amplitude Ratio (dB) = 20 · log10 (Vfar / Vnear)

where Vnear and Vfar are the signal amplitudes at the near and far receivers.
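A minimal sketch of the decibel amplitude ratio, written in the voltage-ratio form (an assumption consistent with the later statement that the far signal is weaker than the near signal, so the ratio in dB comes out negative; voltages are hypothetical):

```python
import math

# Decibel amplitude ratio between the two receiver signals.
def amplitude_ratio_db(v_far, v_near):
    """Attenuation between the two receivers, in decibels."""
    return 20.0 * math.log10(v_far / v_near)

ratio = amplitude_ratio_db(v_far=0.5, v_near=1.0)
assert ratio < 0.0                  # weaker far signal gives a negative dB value
assert abs(ratio + 6.0206) < 1e-3   # halving the amplitude is about -6 dB
```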
CPA Resistivity
Because the measured values of both phase shift and attenuation are inversely proportional to
Resistivity, a formation Resistivity measurement can be computed from any one of the
following:
1. phase shift measurement
2. attenuation measurement
3. a combination of the phase shift and attenuation measurements
The Resistivity service provides Resistivity values computed from the phase shift
measurement, and from a mathematical combination of phase shift and attenuation
measurements; the latter is known as the combined phase and attenuation (CPA) Resistivity.
To produce the CPA Resistivity value, the measured phase shift and attenuation values are
mathematically combined to produce a new computed parameter, known as the CPA value. A
transform is then used to compute the CPA Resistivity from the computed CPA value for each
of the three transmitter-receiver spacings. These transforms are depicted in Figure 4 below.
Calibration Theory
The Resistivity tool measures basic physical parameters (phase shift and attenuation) of
electromagnetic waves. Thus, it is not necessary to calibrate each tool in a simulated formation
or “calibration pit”. The fundamental mathematical transforms between Resistivity and the
measured values of phase shift and attenuation are based on well-established electromagnetic
principles that are described by Maxwell’s equations, and have been experimentally verified
by logging large bodies of water of known Resistivity with the tool.
For the phase shift measurement, we assume that the velocity of the EM wave in air is the
speed of light, and consequently, the observed phase shift in air or a vacuum should be a very
small constant value. When we “air hang” a tool and observe its reading in air, we assume that
ALL of the observed offset from zero degrees of phase shift is due to tool “imperfection.” This
airhang value, in degrees of phase shift, is subsequently used to calibrate the raw
measurements made by the tool. In the Resistivity tool software this value is subtracted from
each raw phase shift measurement made, before it is pulsed to the surface and/or recorded in
memory. Of course, for the Resistivity tool, we have a unique airhang calibration factor for
each of the four spacings. Furthermore, since these hardware dependent “imperfections” can
be temperature sensitive, we characterize them as a function of temperature from 20°C to
150°C.
A propagating EM wave is attenuated by the conductivity of the medium through which it
passes, and when its amplitude is measured at two different distances from a transmitter,
additional attenuation, due to what is commonly referred to as geometrical spreading, is also
observed. This geometrical spreading loss term is significant and must be accounted for;
however, it is not Resistivity dependent. Consequently, we determine its magnitude in air, and
similar to the phase shift data downhole processing, it is subtracted from each raw
measurement of attenuation that is made. Other hardware-induced “measurement
imperfections” are implicitly included in these airhang attenuation values. These
“imperfections” can also be temperature dependent; therefore, we characterize them as a
function of temperature from 20°C to 150°C.
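One way the temperature characterization described above could be applied is by interpolating an airhang table at the current tool temperature; a sketch under stated assumptions (the table values and function names are hypothetical, not the tool's actual characterization data):

```python
# Hypothetical airhang characterization: (temperature C, phase shift offset in degrees)
AIRHANG_TABLE = [
    (20.0, 0.30),
    (85.0, 0.42),
    (150.0, 0.61),
]

def airhang_offset(temp_c):
    """Linearly interpolate the airhang offset at the current tool temperature."""
    pts = AIRHANG_TABLE
    temp_c = min(max(temp_c, pts[0][0]), pts[-1][0])  # clamp to characterized range
    for (t0, v0), (t1, v1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            return v0 + (v1 - v0) * (temp_c - t0) / (t1 - t0)

def calibrated_phase_shift(raw_deg, temp_c):
    """Subtract the airhang value from each raw measurement, as the text describes."""
    return raw_deg - airhang_offset(temp_c)

assert abs(airhang_offset(85.0) - 0.42) < 1e-9
assert abs(calibrated_phase_shift(36.0, 20.0) - 35.70) < 1e-9
```

The same pattern would apply per spacing, and to the attenuation airhang values as well.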
NOTE: The output of each receiver insert’s RMS circuit is also extensively calibrated against
precision lab equipment to ensure that the amplitude measured by any insert will be the same
for a particular received signal. These calibration data are used by the tool’s software to
compute calibrated amplitudes for both the near and far receiver signals, before the attenuation
is calculated and the airhang correction is applied.
The raw amplitude of the signal detected at each receiver, V (in volts), is converted into a
calibrated value of amplitude in the receiver insert. During the calibration step, the unit used to
quantify the amplitude of the signal is also changed from the volt to the decibel-milliwatt, or
dBm. The equation defining this unit of relative (to 1 milliwatt) power is:

P (dBm) = 10 · log10 [ (V² / R) / 0.001 ]

The reference power for the dBm unit is one milliwatt, or 0.001 watt. The impedance, R, of
the receiver circuit is nominally 50 ohms, and the receiver insert amplitude calibration
performed in the lab uses a 50-ohm load. Substituting 50 for R in the above equation yields:

P (dBm) = 10 · log10 (V² / 0.05) = 20 · log10 (V) + 13.01
The above equation is used in the Resistivity 4 tool’s software and the surface software to
convert the raw, uncalibrated receiver voltages into calibrated values of relative power.
Using the units of dBm for the two signal amplitudes (relative powers) in the amplitude ratio,
or attenuation, equation results in the following simple relationship:

Amplitude Ratio (dB) = Far (dBm) − Near (dBm)
Therefore, the amplitude of the FAR receiver’s signal, in dBm, can be easily calculated, since
we record the near receiver signal’s amplitude (in dBm) and the downhole calculated value of
the amplitude ratio (in dB). Since the far receiver’s signal amplitude is typically lower than the
near receiver’s, the amplitude ratio (in dB) is typically a negative quantity.
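The dBm conversion and the far-amplitude recovery described above can be sketched as follows (hypothetical voltages; 50-ohm receiver impedance as stated in the text):

```python
import math

# Convert a receiver voltage to relative power in dBm.
def volts_to_dbm(v, impedance_ohms=50.0):
    """P (dBm) = 10*log10((V^2 / R) / 1 mW)."""
    return 10.0 * math.log10((v * v / impedance_ohms) / 0.001)

near_dbm = volts_to_dbm(1.0)
far_dbm = volts_to_dbm(0.5)
ratio_db = far_dbm - near_dbm        # negative, since the far signal is weaker
far_recovered = near_dbm + ratio_db  # far amplitude from near amplitude plus ratio
assert ratio_db < 0.0
assert abs(far_recovered - far_dbm) < 1e-9
assert abs(volts_to_dbm(1.0) - 13.0103) < 1e-3  # 20*log10(1) + 13.01
```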
Accuracy
The accuracy of the Resistivity measurements varies with the formation Resistivity. Like
wireline induction tools, propagation Resistivity sensors are most accurate at low Resistivity,
where small changes in Resistivity correspond to large, easily-measured changes in phase shift
and/or attenuation. The measurements become less accurate at higher Resistivity where small
changes in phase shift and/or attenuation correspond to large changes in Resistivity.
Since phase shift-derived Resistivity measurements are useful over a wider range of
Resistivity than either attenuation-derived or CPA-derived Resistivity measurements, their
accuracies will be discussed in more detail. Table 3 illustrates the relationships between
measured phase shifts and derived Resistivity.
Table 3 Resistivity vs. Phase Shift
The accuracy of the Resistivity raw measurements should be constant in terms of degrees of
phase shift and decibels of attenuation. However, the non-linear relationship between
Resistivity and these measured parameters results in an accuracy, expressed in ohm-m, which
varies with formation Resistivity. As the Resistivity increases, a given error in the phase shift
and attenuation measurements translates into a larger error in the derived phase shift
Resistivity and CPA Resistivity values.
Table 4 illustrates the effect of a constant error of +0.1 degree in phase shift on the percentage
error in the medium phase shift-based Resistivity value in ohm-meters. In terms of ohm-
meters, the percentage error increases dramatically as the Resistivity increases from 0.1 to
1000 ohm-m.
Table 4 Resistivity Error Resulting from a +0.10 Degree Medium Phase Shift Error
Based on reviewing many sets of Resistivity tool airhang data, a reasonable measurement
uncertainty to assume for a stable, well-calibrated tool is ±0.05 degrees of phase shift. This
amount of uncertainty should be considered “best case” for this tool design. Figure 5 through
Figure 8 illustrate the effect this small constant error in phase shift has on the inferred
Resistivity. In Figure 5, an example depicts the situation in which the X-shallow phase shift-
based Resistivity log reading (Ra) is 300 ohm-m. For this assumed uncertainty, the true
Resistivity (Rt) could be as low as 210 ohm-m or as high as 520 ohm-m, depending on the
sign of the error.
NOTE: The effect of this constant error in phase shift varies dramatically with Resistivity.
Also note how the effect varies with transmitter-receiver spacing and frequency, remembering
that the deep transmitter operates at 400 kHz.
Antenna Configurations
The first Resistivity tool had a single transmitter antenna and two receiver antennas. The
receivers are 6 inches apart and the transmitter is 24 inches from the near receiver. The current
Resistivity design begins with this basic configuration and then adds three more transmitters:
two shorter-spaced transmitters located 20 and 30 inches from the near receiver, and one
longer-spaced transmitter located 48 inches from the near receiver. The purpose of employing
multiple transmitter-receiver spacings is to provide multiple formation Resistivity
measurements with different depths of investigation. Generally, Resistivity measurements
acquired from longer transmitter-receiver spacings will “read deeper” into the formation than
measurements from shorter transmitter-receiver spacings.
Depth of Investigation
The depth of investigation (DOI) of a particular Resistivity measurement is dependent on
several parameters, including: the transmitter-to-receiver spacing, the transmitter frequency,
and the measured parameter from which Resistivity is computed, i.e., phase shift, attenuation,
or CPA. However, the parameter with the greatest effect on the DOI of the Resistivity
measurement is the formation Resistivity itself.
Transmitter-to-Receiver Spacing
The depth of investigation of Resistivity measurements increases with increasing transmitter-
to-receiver spacing. This principle is illustrated by the generalized diagrams below, which
show the lines of constant phase of the shallow and medium-spaced transmitters in an
isotropic medium. The shaded area indicates the region that will influence the phase shift
measured between the near and far receivers. Note that for a longer transmitter-receiver
spacing, this area of investigation extends farther laterally into the formation, providing a
greater depth of investigation. Also note, however, that this increase in DOI is accompanied
by a decrease in the vertical resolution for longer spacing measurements, as the area of
investigation also extends farther “up and down” in the direction of the tool axis.
The depth of investigation is also a function of which measured parameter (phase shift,
attenuation, or CPA) is used to compute Resistivity. This difference in depth of investigation
results from a difference in the spatial distribution of the phase and amplitude fields, as
illustrated below.
For medium measurements, these are the lines of constant phase and amplitude. Note the
different depths of investigation of phase shift and amplitude attenuation Resistivity
measurements.
In an isotropic medium, the electromagnetic field will radiate from the transmitter at the same
velocity in all directions. Thus, lines of constant phase will form spheres around the transmitter
(see the left side of Figure 11). However, the field is radiated perpendicular to the tool axis with
greater intensity than in directions closer to the tool axis. Thus, the lines of constant amplitude are
not spherical and are as shown on the right side of Figure 11. Therefore, the shaded area affecting
the attenuation measurement is different (and deeper into the formation) than the area which
affects the phase shift measurement.
This difference in the spatial distribution of the phase and amplitude fields may seem difficult
to understand, but consider the simple acoustic analogy of a trumpet. Sound waves emanate
from the horn of the trumpet and travel at the speed of sound in all directions. Thus, a listener
standing 10 meters to the side of the trumpet would hear notes at exactly the same time as a
listener standing 10 meters in front of the trumpet. However, the person standing directly in
front of the trumpet would hear a louder volume than the person standing off to the side. Both
the Resistivity transmitters and the trumpet are directional transmitters, radiating a stronger
signal in one direction, even though the signal will travel at the same velocity in all directions.
Thus, lines of constant phase and amplitude of the acoustic energy around a trumpet would
show a different spatial distribution, not unlike the phase and amplitude fields surrounding a
Resistivity transmitter.
CPA
A Resistivity measurement derived from a combination of the phase shift and attenuation
measurements can have a depth of investigation which is either deeper than the attenuation
Resistivity, shallower than the phase shift Resistivity, or intermediate between the phase shift
and attenuation Resistivity values, depending on the way in which the phase shift and
attenuation values are mathematically combined. Although the exact formula for combining
the phase and attenuation values to produce the CPA value (from which the CPA Resistivity is
computed) is rather complex, consider the simple case illustrated below.
This diagram depicts the radial response of two overlapping measurements. The vertical Y-
axis represents the relative sensitivity, or the amount of signal, coming from various
diameters. The shallower measurement is defined by areas A and B, whereas the deeper
measurement is defined by areas B and C. These two raw measurements could be combined in
several different ways to give different effective depths of investigation. For example, if the
shallower measurement were subtracted from the deeper measurement, we would be left with
the response illustrated by area C. By subtracting the shallower component, B, of the deeper
measurement, the resultant combined measurement (represented by area C) would have a
deeper depth of investigation than either of the two basic measurements. In other words, by
using the shallower phase shift measurement to cancel-out the shallower component of the
deeper attenuation measurement, the resultant CPA value yields an effective depth of
investigation that is deeper than either the phase shift or the attenuation measurements.
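The subtraction argument above can be sketched with synthetic radial sensitivity curves. The bell-shaped responses and the 0.5 weighting below are purely illustrative assumptions, not tool data; the point is that cancelling the overlapping (shallow) component leaves a residual response whose median radius lies deeper than either input.

```python
import numpy as np

# Radii in inches; purely illustrative radial sensitivity curves (not tool data)
r = np.linspace(0.0, 60.0, 601)

def sensitivity(radii, peak):
    """Hypothetical bell-shaped radial sensitivity centred at `peak` inches."""
    return np.exp(-((radii - peak) / 10.0) ** 2)

shallow = sensitivity(r, 12.0)   # areas A + B in the diagram
deep = sensitivity(r, 22.0)      # areas B + C

# Scale the shallow response and subtract it to cancel the overlap (area B);
# the residual weight (area C) sits deeper than either input measurement.
combined = np.clip(deep - 0.5 * shallow, 0.0, None)

def median_radius(weights):
    """Radius inside which 50% of the total sensitivity lies."""
    cum = np.cumsum(weights) / weights.sum()
    return r[np.searchsorted(cum, 0.5)]

print(median_radius(shallow), median_radius(deep), median_radius(combined))
```

The combined response's median radius exceeds that of the deep input alone, which is the CPA principle described above.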
Frequency Effects
The depth of investigation of a propagation Resistivity measurement is also a function of the
frequency of the transmitted signal. Generally speaking, lower frequency measurements will
have a greater depth of investigation than otherwise-equivalent higher frequency
measurements. Aside from depth of investigation issues, lower frequency measurements will
also be less sensitive to dielectric effects, but will exhibit poorer precision at high Resistivity.
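One hedged way to quantify this frequency dependence is the electromagnetic skin depth, delta = sqrt(rho / (pi * f * mu0)), the distance over which the transmitted field amplitude falls to 1/e. This is standard EM theory, not the tool's published depth-of-investigation model, but it shows why lower frequency and higher resistivity both "read deeper."

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space, H/m

def skin_depth_m(rho_ohmm, freq_hz):
    """Electromagnetic skin depth: depth at which the field amplitude
    falls to 1/e. A first-order guide, not a tool response model."""
    return math.sqrt(rho_ohmm / (math.pi * freq_hz * MU0))

for f in (400e3, 2e6):
    for rho in (1.0, 10.0, 100.0):
        print(f"{f/1e3:6.0f} kHz, {rho:5.1f} ohm-m -> {skin_depth_m(rho, f):6.2f} m")
```

At 1 ohm-m the skin depth at 400 kHz is more than twice that at 2 MHz, consistent with the deeper investigation of the lower-frequency measurement.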
These radial response functions are modeled under the following idealized assumptions:
• There is no borehole.
• The formation is isotropic.
• The formation is infinitely thick.
• In the step invasion model, the difference between Rt and Rxo is infinitesimally small
(uninvaded, for all practical purposes).
The historical definition for depth of investigation for Resistivity tools is the 50% point on the
IRPGF curve, meaning that half of the total signal sensed comes from within this diameter and
half of the total signal comes from outside this diameter. In other words, when the diameter of
invasion reaches the IRPGF 50% point, the tool reading will be mid-way between the
conductivity of the flushed zone (Cxo) and the conductivity of the uninvaded formation (Ct) in
the step invasion model.
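In the step invasion model this 50% definition can be written directly: the apparent conductivity is the IRPGF-weighted mix of the flushed-zone and uninvaded conductivities. The Rxo and Rt values below are illustrative, not from any published chart.

```python
# Step-invasion response: apparent conductivity is a weighted mix of the
# flushed-zone and uninvaded conductivities, weighted by the IRPGF at the
# invasion diameter. Input values are hypothetical.

def apparent_conductivity(g, c_xo, c_t):
    """g = integrated radial pseudo-geometrical factor at the invasion diameter."""
    return g * c_xo + (1.0 - g) * c_t

c_xo = 1 / 0.5   # flushed zone:  Rxo = 0.5 ohm-m -> 2.0 S/m
c_t = 1 / 10.0   # uninvaded:     Rt  = 10  ohm-m -> 0.1 S/m

# At the DOI (IRPGF = 0.5) the reading is exactly midway in conductivity:
print(apparent_conductivity(0.5, c_xo, c_t))  # 1.05 S/m
```

Note that "midway" is midway in conductivity, not resistivity, which is why invasion effects look asymmetric on a logarithmic resistivity scale.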
A few very important insights into the nature of the response of a propagating wave
Resistivity tool can be gleaned by close examination of these IRPGF curves:
1 For low values of Rt, all four phase shift-based Resistivity measurements (and, to a lesser
degree, the four CPA-derived resistivities) exhibit a diametrical area, between
approximately 25 and 55 inches, in which the IRPGF significantly exceeds a value of 1.0.
Consequently, for the case of a low Resistivity zone that is invaded to this depth range with
mud filtrate, the various Resistivity measurements provided by the Resistivity tool can
overshoot (either higher or lower, depending on the values of Rt and Rxo) the theoretical limit
(calculated assuming 100% flushing) of Rxo.
2 For low values of Rt, all four CPA-based Resistivity measurements exhibit a diametrical
area, between approximately 10 and 45 inches, in which the IRPGF is significantly less
than zero, or negative. This region of negative IRPGFs is not unusual for tools, such as
this Resistivity tool, that use the difference or the ratio of measurements made at two
receivers to compute apparent Resistivity. This phenomenon provides inherent borehole
compensation by rejecting (subtracting) the conductivity contributions from nearby
regions, e.g., the borehole.
Therefore, for the case of a low Resistivity zone that is invaded to this depth range with mud
filtrate, the CPA-based Resistivity measurements provided by the Resistivity service can
overshoot (either higher or lower, depending on the values of Rt and Rxo) the true formation
Resistivity in what may be considered the counterintuitive direction. Shallow resistive (Rxo >
Rt) invasion will cause the apparent CPA Resistivity to be too low, while shallow conductive
(Rxo < Rt) invasion will cause the apparent CPA Resistivity to be too high. This also applies to
plain attenuation measurements.
Alternatively, IRPGF data can be presented as a family of curves, one for each spacing, for a
specific apparent Resistivity (Ra). Figure 21 through Figure 24 describe the depths of
investigation of both the phase shift- and CPA-derived measurements for apparent resistivities
of 0.5 and 50 ohm-m.
The depths of investigation (using the IRPGF 50% point as the definition for DOI) for the
Resistivity measurements at these two different resistivities (0.5 and 50 ohm-m) are listed in
Table 5. Note the dramatic increase in depth of investigation as the formation resistivity
changes, such that the phase shift-based resistivity measurements at 50 ohm-m “read deeper”
than the CPA Resistivity measurements at 0.5 ohm-m.
An interesting point to note is that at high resistivities the shallow, medium, and deep phase
shift resistivity curves have DOI values which are very similar to wireline SFL, medium
induction, and deep induction curves, respectively.
Dielectric Effects
The dielectric permittivity of a material is quantified by what is called the dielectric constant.
A material's dielectric constant, ε, is conventionally normalized by the dielectric constant of
free space (vacuum), εo, to give a convenient range of values. This ratio is referred to as the
relative dielectric constant, εr, and is defined as follows:
εr = ε/εo
where,
εo = 8.8542 × 10⁻¹² farads/m
All the resistivity transforms we use assume a relative dielectric constant (εr) of 10. For a
given formation resistivity, if the actual formation εr value is higher than this assumed value of
10, the measured phase shift will be slightly higher than expected and the measured
attenuation will be significantly lower than expected. Consequently, the inferred resistivities
will be in error (see Table 6). Thus, because the attenuation is more sensitive to changes in the
relative dielectric constant than is the phase shift, the CPA-derived resistivity is more likely to
suffer significant dielectric-related errors than is the phase shift-derived resistivity.
Table 6 Effect of (εr) on Measured Phase Shift & Attenuation and Phase Shift & CPA Resistivities
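A rough way to see why dielectric effects grow with resistivity is to compare displacement current to conduction current; in a uniform medium their ratio is omega * eps0 * eps_r * R. The sketch below uses the assumed eps_r = 10 and a 2 MHz operating frequency; it is a back-of-envelope indicator, not the tool's correction model.

```python
import math

EPS0 = 8.8542e-12  # permittivity of free space, F/m

def displacement_to_conduction(r_ohmm, eps_r=10.0, freq_hz=2e6):
    """Ratio of displacement current (dielectric response) to conduction
    current (resistive response). Dielectric effects on the measurement
    become significant as this ratio approaches 1."""
    omega = 2.0 * math.pi * freq_hz
    return omega * EPS0 * eps_r * r_ohmm

for r in (1.0, 10.0, 100.0, 1000.0):
    print(f"R = {r:6.1f} ohm-m -> ratio = {displacement_to_conduction(r):.4f}")
```

At 1 ohm-m the ratio is of order 0.001 and dielectric effects are negligible; near 1000 ohm-m it approaches unity, which is why an incorrect assumed εr produces significant errors only at high resistivity.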
Anisotropy Effects
Because of their layered nature, many sedimentary formations are electrically anisotropic.
This means that the electrical resistivity measured parallel to the bedding planes is different
from the resistivity measured perpendicular to the bedding plane. This phenomenon can be
caused by either of two distinctly different geologic situations. However, to the Resistivity
tool, these two different geologic settings appear the same. One type of anisotropy is
commonly called microscopic anisotropy. This refers to a single sedimentary rock that
exhibits intrinsic anisotropy due to its structure. Shale, with its flat clay platelet structure, is a
classic example of an electrically anisotropic rock. During deposition and subsequent
overburden loading, these platelets tend to align themselves horizontally. Consequently,
electricity flows more easily parallel to these flat platelets than perpendicular to them (Figure 37).
Another anisotropic phenomenon is caused by the layering of formations and is commonly
referred to as macroscopic anisotropy. This can occur when the scale of the layering is much
less than the resolution of the measuring device. A laminated sand-shale sequence is a classic
example of this type of electrical anisotropy (Figure 38). Note that each layer (bed) can be
isotropic while the unit as a whole is still anisotropic.
If we think of the different rock layers as being electrical resistors in a circuit, they act as
resistors wired in parallel to current flowing parallel to the bedding planes, and as resistors
wired in series to current flowing perpendicular to the bedding planes. Like wireline induction
tools, the Resistivity tool induces an eddy current which circles the tool in a plane
perpendicular to the tool axis. Thus, the Resistivity tool measures the resistivity in a plane
perpendicular to the borehole. Therefore, in a vertical well with horizontal beds, the
Resistivity will read the horizontal resistivity of the formation. However, in a horizontal well,
the induced eddy current will circle the borehole in a vertical plane, encountering both the
horizontal and vertical components of the formation resistivity. In this latter situation, the
Resistivity tool response will be a complex function of both the horizontal and vertical
components of the formation resistivity, as well as the angle between the tool axis and the
formation bedding planes. Because this function varies with transmitter-receiver spacing and
frequency, the different Resistivity curves respond differently to anisotropic formations
encountered at high angles.
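The parallel/series resistor analogy above can be made concrete. The 50/50 sand-shale split and the 10 and 1 ohm-m layer resistivities below are illustrative values, chosen only to show that a stack of isotropic beds is anisotropic as a unit.

```python
def laminated_rh_rv(layers):
    """layers: list of (thickness, resistivity) pairs for one repeating unit.
    Horizontal current sees the layers as resistors in parallel
    (thickness-weighted conductivity average); vertical current sees
    them in series (thickness-weighted resistivity average)."""
    total_h = sum(h for h, _ in layers)
    rh = total_h / sum(h / r for h, r in layers)   # parallel combination
    rv = sum(h * r for h, r in layers) / total_h   # series combination
    return rh, rv

# 50/50 laminated sand (10 ohm-m) and shale (1 ohm-m), each layer isotropic
rh, rv = laminated_rh_rv([(0.5, 10.0), (0.5, 1.0)])
print(rh, rv)  # Rh ≈ 1.82 ohm-m, Rv = 5.5 ohm-m: anisotropic as a unit
```

Since Rv > Rh whenever the layer resistivities differ, a high-angle well reading toward Rv will see elevated apparent resistivity, as described in the text.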
Figure 39 illustrates the modeled responses of the phase shift-based Resistivity measurements
in a formation having a horizontal resistivity, Rh, of 1 ohm-m and a vertical resistivity, Rv, of
4 ohm-m. The first letter, M, in each curve name means it is modeled or calculated (not
measured) data.
For relative dip angles (see Figure 40 for definition) less than about 30 degrees, all four curves
read close to the horizontal resistivity value of 1 ohm-m (this would be typical of a vertical
well with horizontal or slightly dipping beds). However, as the relative dip angle increases
above about 50 degrees (as would be the case in high-angle wells) the phase shift resistivity
values depart from the Rh value and increase toward Rv. In fact, in some cases, the measured
(apparent) resistivities will exceed the value of Rv. Also notice that the shallow, and medium
curves are affected to differing degrees by the anisotropy, resulting in significant curve
separation with RSP < RMP . The medium and deep Resistivity curves read almost the same
value because the effect of the different transmitter spacing is almost exactly offset by the
effect of different frequencies (1 MHz for the deep transmitter; 2 MHz for the medium and
shallow transmitters). Thus, the characteristic log signature of formation anisotropy is
RSHALLOW < RMEDIUM < RDEEP .
A computer program has been developed which computes Rv, Rh, and the relative dip angle
based on the shallow and medium phase shift-based Resistivity log values. Note that this
inversion calculation yields only the magnitude of the relative dip angle, not the azimuth
direction. The computed Rh value can be compared with induction or 2MHz type logs
recorded in vertical wells. Also, the Rh value can be used to compute water saturation in a
“laminated sand” or parallel resistor interpretation model.
The major limitation of this program is that it assumes that all of the separation between the
three curves is due only to anisotropic effects. While this may be the case in impermeable
shales or hydrocarbon-bearing reservoirs drilled with oil-based mud, a number of other effects
such as invasion, dielectric constant differences, residual borehole effects, and/or shoulder bed
effects may also contribute to the Resistivity curve separation. If present, these other effects
will induce error in, or perhaps completely invalidate, the anisotropy inversion calculation.
Thus, this model must be applied cautiously, with careful consideration given to these other
potentially disturbing effects.
References:
1. Ball, S., and Hendricks, W.E.: “Formation Evaluation Utilizing a New MWD
Multiple Depth of Investigation Resistivity Sensor,” paper presented at the 15th
European Formation Evaluation Symposium, May 5-7, 1993.
2. Woodhouse, R., Opstad, E. A., and Cunningham, A.B.: “Vertical Migration of
Invaded Fluids in Horizontal Wells,” paper A presented at the SPWLA 32nd Annual
Logging Symposium, Midland, Texas, U.S.A, June 16-19, 1991.
3. Beck, G. F., Oberkircher, J., and Mack, S.: “Measurement of Invasion Using an
MWD Multiple Depth of Investigation Resistivity Tool,” paper SPE 24674 presented
at the 67th Annual Technical Conference and Exhibition, Washington, DC, U.S.A,
October 4-7, 1992.
Log example 3 North Sea Oil Zone Logged After Coring Operation
The phase shift resistivity curves displayed in Log ex. 3 (Track II) were run in a MAD
(measurement-after-drilling) mode due to a prior coring operation through this interval.
Formation exposure time was 45 to 60
hours. With an oil-based mud and low formation water saturation (i.e., low relative
permeability to water), significant differences among the various resistivity curves, due to
filtrate invasion, would not be anticipated. Slight curve separations seen in the higher
resistivity intervals are due to small differences in the vertical resolutions of the various
spacings in this resistivity range.
Track III is a conventional 2 MHz log presentation, containing the medium phase shift and
medium attenuation resistivity curves only. An inexperienced user might wrongly interpret the
separation of these two curves as an indication of mud filtrate invasion. However, the correct
explanation for the observed differences is simply the different vertical resolutions of these
two measurements.
Attenuation resistivity curves are plotted in Track IV. Comparison of these curves with their
phase shift counterparts in Track II highlights the much sharper vertical resolution of the
phase shift-derived resistivity measurements. The magnitude of this difference in vertical
resolution is a function of resistivity and it increases with increasing resistivity.
In contrast, Zone A in Log ex. 4 exhibits the effects of invasion by the resistive oil-based mud
filtrate within three hours after drilling. All four phase shift resistivity values plotted in Track
II are affected to varying degrees in this permeable water-bearing sandstone. In comparison,
the attenuation resistivity values shown in Track III exhibit invasion effects on the shallow
curve only. The medium and deep attenuation resistivity curves overlay, confirming the
measurement of Ro. Note that in this range of resistivities, the attenuation-based measurement
has good vertical resolution.
Zone B in Log ex. 4 is similar to Zone A, except that the formation exposure time is only 1.4
hours. Increases in the shallow phase shift resistivity values above the expected water zone
resistivity values indicate invasion of the oil filtrate. The medium and deep phase resistivity
curves provide a direct measurement of Ro. Note that the four attenuation resistivity curves are
in good agreement with the medium and deep phase resistivity curves, thereby confirming
measurement of Ro.
The sand shown in Log example 5 is usually deeply invaded at the time of wireline logging,
thereby complicating accurate evaluation of its fluid saturations. It is also well known for
producing commercial quantities of hydrocarbons when calculated values of water saturation
are relatively high. The connate water resistivity, calculated in a nearby water sand, is 0.02
ohm-m at formation temperature. The mud filtrate resistivity is 0.11 ohm-m at formation
temperature. After several unsuccessful attempts to record wireline logs through this sand, it
was logged with the Resistivity tool approximately seven days after drilling.
The separations of the phase resistivity curves (Track II) and the attenuation resistivity curves
(Track III) are indicative of a deeply invaded formation. Utilizing a step-profile model of
invasion and the phase shift resistivity data as inputs, Di-P (Track I), Rxo-P (Track IV), and
Rt-P (Track IV) have been calculated continuously through this sand. The fact that the shallow
phase curves overlay one another from ×400 to ×410 feet is a good indication that they are
providing a measurement of Rxo. This supposition is confirmed by their very close agreement
with the computed Rxo curve through this interval.
The deep attenuation measurement is the only one of the eight measurements that is providing
accurate values of Rt in this sand. It is consistent with, though slightly lower than, the value of
Rt-P computed from the phase resistivity data. Using a step-profile invasion model and the
attenuation resistivity data as inputs, an Rt curve (not shown) that is similar to the Rt-P curve,
but with less character, was computed. Invasion of fresh mud filtrate has caused the other
seven curves to read too high. Correct analysis of this sand’s water saturation requires a
multiple depth of investigation resistivity tool that includes a relatively deep-reading
measurement and modeling software to confirm that its readings have not been compromised
by very deep invasion.
North Sea Horizontal Well Project: Oil-Based Mud vs. KCl Mud
Log ex. 6 displays a section of the pilot hole drilled in conjunction with a horizontal well in
the North Sea. The pilot well, drilled with an 8.5 inch diameter bit and at an inclination of 60
degrees through the target formation, utilized a KCl/polymer mud system. At the circulating
temperature of 77°C, Rm was 0.05 ohm-m and Rmf was 0.036 ohm-m. Formation water
resistivity was 0.075 ohm-m at 77°C.
The phase shift-derived resistivity curves exhibit a slightly erratic character in the ××58 to
××61 meter interval, due to the combination of thin resistive coal streaks, irregular hole size,
and the very conductive mud. The attenuation measurements are similarly affected, but to a
lesser degree.
A whole core was cut from ××77 to ×142 meters, which included an oil/water transition zone
from ××96 to ×104 meters. The long formation exposure time, due to the two-day coring
operation, created the potential for deep invasion in the hydrocarbon-bearing interval from
××61 to ×104 meters. The effect of the very conductive invading mud filtrate on the eight
Resistivity curves is apparent in Log ex. 6. The attenuation-derived resistivity curves plotted
in Track III exhibit much less separation than do the phase shift-derived resistivity curves
plotted in Track II, because of the apparent deeper depths of investigation of the attenuation
measurements in this environment. On the other hand, the vertical resolutions of the phase
shift resistivity measurements are superior to their attenuation counterparts.
Some values of Rxo, Di, and Rt, calculated using a step-profile model of invasion and the
phase resistivity data as inputs, are annotated on Log ex. 6. From ××65 to ×125 meters,
invasion diameters vary from 16 inches to 43 inches. The computed values of Rt are consistent
with the medium and deep attenuation resistivity values in zones where these two
measurements agree. When the computed value of Di is less than approximately 30 inches, the
deep phase resistivity measurement agrees with the modeled value of Rt. When Di exceeds 30
inches, the computed Rt value is slightly greater than the deep phase resistivity value.
Log example 6 Phase vs. Attenuation Resistivities in North Sea Pilot Hole
Figure 7 illustrates a section of the horizontal well through the hydrocarbon zone shown in
Figure 4. The attenuation resistivity data are not presented because they add minimal
additional information in this particular environment. In contrast to the pilot hole, this
horizontal borehole was drilled with an oil-based mud. The coal interval at ×221 to ×227
meters is more easily identified because borehole effects are much less pronounced in oil-
based mud than in the very conductive mud used in the pilot hole.
The resistivity peak at ×256.5 meters is a “polarization horn.” This phenomenon is caused by
a discontinuity in the propagating electrical field as the tool crosses the boundary between
beds having different resistivities. The size of the polarization horn depends on the contrast
between the resistivities of the adjacent beds and the relative dip angle between the borehole
and the formation. The higher the relative dip angle and greater the resistivity contrast, the
larger the “horn.”4 Also, note how the magnitude of this horn diminishes with decreasing
transmitter-receiver spacing for the four phase resistivity curves plotted.
The different depths of investigation of the Resistivity tool are apparent in the varying degrees
of anticipation of the formation top at ×256.5 meters. The deepest reading curve plotted in
Figure 7, D-RES, “sees” the approaching bed first, followed by the M-RES, S-RES, and X-
RES curves respectively.
Reference:
4. Anderson, B., Bonner, S., Luling, M.G., and Rosthal, R.: “Response of 2-MHz LWD
Resistivity and Wireline Induction Tools in Dipping Beds and Laminated
Formations,” paper A presented at the SPWLA 31st Annual Logging Symposium,
Lafayette, Louisiana, U.S.A., June 24-27, 1990.
Log example 7 North Sea Horizontal Well Phase Resistivities in Oil-Based Mud
One of the customer log presentations is the triple-combo log. This log includes data from the
gamma, resistivity, density, and neutron sensors on the same log.
In Zone 1 of Figure 8, the bulk density decreases and the neutron porosity increases, both
indicating an increase in porosity. The gamma ray moves from the shale baseline towards a
less shaley formation. The resistivity decreases, indicating either a decrease in the resistivity
of the pore fluids or an increase in porosity. A qualitative interpretation of this zone based on
these curves is a porous wet zone.
In Zone 2 the bulk density increases and the neutron porosity decreases, both indicating a
decrease in porosity. The gamma ray moves from the shale baseline towards a less shaley
formation. The resistivity increases, indicating either an increase in the resistivity of the pore
fluids or a decrease in porosity. A qualitative interpretation of this zone based on these curves
is a low porosity zone.
In Zone 3 the bulk density decreases, indicating an increase in porosity; however, the neutron
SS porosity shows a decrease in porosity. The gamma ray moves from the shale baseline
towards a less shaley formation. The resistivity increases, indicating either an increase in the
resistivity of the pore fluids or a decrease in porosity. The density-neutron cross-over is a
typical response in a gas zone. It is caused by the differing effect of the gas’s decreased
hydrogen content on the Neutron and Density sensors. The Neutron primarily measures the
formation’s hydrogen content, whereas the Density measures the formation’s electron density.
When a formation’s pore space fluid, such as water or oil, is replaced by gas there is a
reduction in both the hydrogen content and the electron density. The Neutron responds to this
reduction in hydrogen content by calculating a lower porosity, whereas the Density responds
to the decreased electron density by calculating a higher porosity. This density-neutron cross-
over is a result of these responses. A qualitative interpretation of this zone based on these
curves is a porous gas zone.
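The qualitative reasoning for the three zones can be condensed into a toy rule set. This is only a sketch of the logic in the text, with sign conventions chosen here (+1 for an increasing curve, -1 for a decreasing curve), not a real interpretation program.

```python
def qualitative_zone(d_density, d_neutron, d_resistivity):
    """Toy qualitative interpreter for the three zones discussed above.
    Inputs are the direction of change of each curve relative to the
    shale baseline: +1 increasing, -1 decreasing."""
    if d_density < 0 and d_neutron > 0 and d_resistivity < 0:
        # porosity up on both porosity curves, resistivity down
        return "porous wet zone"
    if d_density > 0 and d_neutron < 0 and d_resistivity > 0:
        # porosity down on both porosity curves
        return "low porosity zone"
    if d_density < 0 and d_neutron < 0 and d_resistivity > 0:
        # density-neutron cross-over: the classic gas signature
        return "porous gas zone"
    return "indeterminate"

print(qualitative_zone(-1, +1, -1))  # Zone 1
print(qualitative_zone(+1, -1, +1))  # Zone 2
print(qualitative_zone(-1, -1, +1))  # Zone 3
```

As the text stresses, these are qualitative calls only; quantitative saturation work needs the full log suite and shale corrections.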
SONATRACH MWD MODULE
Pressure
[Slide figure: pressure profile through a lithologic column: shale, gas, oil, salt water, shale, salt]
Pressure Sensor Theory
• Downhole pressure
sensors are drilling
performance tools
that provide
continuous and
direct downhole
measurement of
absolute bore and
annular pressure
Pressure Sensor Theory
• Real-time LWD pressure measurements
provide information on downhole hydraulics
and fluid performance that help the driller
avoid drilling problems and optimize the
drilling process
• Safe Operating Envelope
• For safe drilling the equivalent mud density must
remain between
• the Maximum Pore Pressure (lower limit), and
• the Minimum Fracture Pressure (upper limit)
Pressure Sensor Theory
• Equivalent Circulating Density (ECD)
• Static (pumps off) density is equal to the average
density of the static mud column
• Dynamic (pumps on) density is equal to the average
density of the static mud column plus annular
frictional pressure losses
• ECD = Pressure / (TVD × K)
where K = 0.052 (English), 0.00981 (metric)
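Rearranged for equivalent mud weight in English units, the relation is EMW (ppg) = P (psi) / (0.052 × TVD (ft)). A minimal sketch, with an illustrative pressure and depth:

```python
def equivalent_mud_weight(pressure_psi, tvd_ft, k=0.052):
    """EMW (ppg) from annular pressure (psi) and true vertical depth (ft).
    k = 0.052 converts ppg * ft to psi in English units."""
    return pressure_psi / (k * tvd_ft)

# Example: 5,460 psi annular pressure measured at 10,000 ft TVD
print(round(equivalent_mud_weight(5460.0, 10_000.0), 3))  # 10.5 ppg
```

If the surface mud weight were 10.0 ppg, this 10.5 ppg ECD would represent 0.5 ppg of combined frictional loss and cuttings load.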
Pressure Sensor Theory
• Mud Weight
• The measured surface mud weight is the
primary factor controlling downhole
pressure
• The mud weight sets the baseline around
which all other factors vary
Pressure Sensor Theory
• Breaking a Gel
• The gel strength of a drilling mud
determines its ability to hold solids in
suspension during non-flowing conditions
• The force needed to break the gel and return
the mud to a fluid state adds to the annulus
pressure losses until the mud is fluid again
Pressure Sensor Theory
• Cuttings Load
• Suspended solids in the mud increase the mud
weight, which increases the pressure of the mud
column
Pressure Sensor Theory
• Flow Rate
• The pressure necessary at the bit to push the
mud up the annulus increases with increasing
flow rate
Pressure Sensor Theory
• Rig Heave
• When the bit is off bottom
and there is no active rig
heave compensation, rig
heave causes low
frequency reciprocal
surge-swab variations in
the ECD
• In this North Sea
example, rig heave during
high seas in winter caused
enough swab to collapse
the hole and induce
packoff, lost circulation,
and hole fill
Pressure Sensor Data Interpretation
• Poor Hole
Cleaning
• Uneven ECD during
drilling is an indication
of poor hole conditions
and varying
restrictions to
circulation
Pressure Sensor Data Interpretation
• Gas Influx
• Gas influx into the
annulus appears as a
rapid and sometimes
dramatic decrease in the
ECD
Pressure Sensor Data Interpretation
• Gel Strength
Pressure Spikes
• Gellation produces an initial resistance to circulation which may require significant pressure to overcome
• In this example, the gel pressure spikes initiated fractures in the formation which caused lost circulation
(Figure annotation: Fracture Pressure)
Pressure Sensor Data Interpretation
Detecting Formation Fluid Influx when drilling with mud: A decrease in ECD that does not correlate to rig operations. The size of the decrease is relative to the mud densities and the volume of influx.
Detecting Hole Instability: Hole collapse causes a sudden increase in ECD as solids load up in the annulus. Often correlates with high rotary torque.
Downhole Pressure Theory & Equivalent Mud Weight
Downhole Pressure
The actual downhole pressure has three primary components.
1. Hydrostatic pressure
2. Dynamic pressure components
3. Shut in pressure
Hydrostatic Pressure
The density of the fluid and the true vertical depth of the fluid column determine the
hydrostatic pressure. The density of the mud can change due to:
• Changes in the mud density going in the well
• Changes in solids load (barite, sand, cuttings. etc.)
• Influx of formation fluids or gas.
These events show up as changes in the annular pressure and therefore, a change in the EMW.
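As a minimal sketch of this relationship (the function name and values are illustrative; the constant K = 0.052 is the common field-unit factor for psi, lb/gal, and feet):

```python
def hydrostatic_psi(mud_weight_ppg, tvd_ft, k=0.052):
    """Hydrostatic pressure (psi) of a static mud column.

    mud_weight_ppg : mud density in lb/gal
    tvd_ft         : true vertical depth of the fluid column in feet
    k              : unit constant, 0.052 for psi / lb/gal / ft
    """
    return k * mud_weight_ppg * tvd_ft

# A 12 lb/gal mud at 15,000 ft TVD exerts:
print(hydrostatic_psi(12.0, 15_000))  # about 9360 psi
```

Any change in mud density (solids load, influx) shifts this baseline pressure directly.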
Dynamic Pressure
Dynamic pressure is the pressure required to move the fluid. This pressure must overcome a
combination of five factors that resist the fluid flow.
• Frictional effects from the surfaces in contact with the flow
• Fluid resistance to movement (viscosity, solids loading, gel properties, etc.)
• Type of fluid flow (turbulent, laminar, etc.)
• Annular geometry (size of the annulus, restrictions, washouts, etc.)
• Rate of flow
Any changes to these factors cause a change in the downhole pressure. This is most clearly
seen when the pressure is expressed as EMW.
There are four prominent rig activities that affect these factors.
1. Circulation rate
2. Rotation of the drill pipe
3. Axial movement of the drill pipe
4. Mud properties
In the imperial system with pressure in psi, mud weight in lb/gal and depth in feet,
K = 0.052.
In the metric system with pressure in kPa, mud weight in g/cc or specific gravity, and depth in meters,
K = 9.81.
The presence of the TVD in the equivalent mud weight calculation makes this value sensitive
to depth and survey errors. Accurate depth is vital to report accurate Pressure data. This
applies during all drilling conditions (tripping, reaming, etc.) because the Pressure sensor
collects data during all these operations.
EMW is the total pressure exerted on the formation by the drilling fluid, expressed as though all
pressures were from mud weight alone. This includes the ECD plus all other factors (such as surge
pressure or choke setting) that affect pressure. EMW is a dynamic value that is constantly changing
as the circulating conditions change.
2 4/8/2002
Example:
EMW = (200 psi + 400 psi + 9360 psi) / (0.052 × 15,000 ft) = 12.8 ppg
Even though the annulus has 12 ppg mud in it, the formation sees pressure equivalent to that
of a 12.8 ppg mud during the reaming down surge.
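The arithmetic above can be checked with a short sketch (function and variable names are illustrative):

```python
def emw_ppg(total_pressure_psi, tvd_ft, k=0.052):
    """Equivalent mud weight (lb/gal) seen by the formation."""
    return total_pressure_psi / (k * tvd_ft)

# Surge (200 psi) + annular friction (400 psi) + hydrostatic (9360 psi)
total_psi = 200 + 400 + 9360
print(round(emw_ppg(total_psi, 15_000), 1))  # 12.8
```

The same function applies to any pressure component mix; only the total pressure and TVD matter.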
Engineering models cannot accurately calculate the effect of factors such as ECD, cuttings
load, poor hole cleaning, swab and surge pressures, or gel pressure spikes. The most accurate
measure of the pressure exerted on the formation, and therefore most accurate EMW, comes
from a direct downhole measurement.
ECD is the annulus pressure loss plus the hydrostatic pressure. ECDs normally range from 0.2
to about 1.0 ppg above the original mud weight. They may be as high as 3.0 ppg above the
original mud weight under some conditions.
Equivalent mud weight includes all factors that affect the pressure exerted on the formation.
These may be long term or very short duration.
Breaking a Gel: Mud sets up in a thixotropic time-dependent gel when it is not flowing. The
force needed to break the gel and return the mud to a fluid state adds to the annulus pressure
losses until the mud is again fluid.
Cuttings Load: Suspended solids in the mud increase the mud weight. This increases the
hydrostatic pressure of the mud column.
Downhole Pressure: Downhole pressure causes a small compression of the drilling fluid and
therefore, a small change in density.
Flow Rate: The pressure necessary at the bit to push the mud up the annulus increases with
increasing flow rate.
Formation Fluid Influx: When formation fluid flows into the annulus, it changes the mud
properties and, therefore, the hydrostatic pressure exerted by the mud. The extreme case is a
gas kick. The gas bubble displaces mud and expands as it rises in the annulus. This drastically
reduces the hydrostatic pressure.
Original Mud Weight: The measured surface mud weight is the primary factor controlling
downhole pressure. The mud weight sets the baseline around which all other factors vary.
Swab and Surge Pressures: Moving the drillstring axially in the hole displaces the drilling
fluid like a piston in a cylinder. This adds to or subtracts from the existing pressure.
Type of Fluid Flow Pattern and Regime: The annular fluid flow path when sliding is
different from the flow path when rotating the drill pipe. This changes the resistance to the
flow, the type of flow regime, and the path length of the flow. These factors change the
pressure loss in the annulus and therefore, change the EMW felt by the formation.
Recorded data usually has a much higher data density than real-time or pumps-off data and is
recorded regardless of the status of the pumps. This allows recorded data to capture short term
pressure transients. Real-time data may miss these events because of the slower transmit
frequency. Figure 1 displays both real-time and recorded data for comparison.
EMW measured downhole can differ by more than ±0.5 ppg from that predicted by hydraulic
modeling or surface measurements. The many possible causes for the surface-to-downhole
difference include:
• Uncertainties in the hydraulic model (pipe rotation, drillstring eccentricity)
• Changes in the mud at downhole temperature
• Fluid compression due to confining pressure
• The presence of solids in the mud (cuttings, sand, cavings, etc.) These significantly
increase the EMW.
• Error in surface measurements.
Shut-In Pressure
Shut-in pressures are the pressure components present when the well is not an open system.
Circulation
The dynamic pressures are present only when the fluid is circulating. Therefore, a pumps-
on/off change causes a prominent immediate shift in the EMW. Figure 1 shows this effect
clearly. The amplitude of this shift is determined by the rate of circulation, well geometry, and
the mud properties.
Rotation
The presence of rotation of the drill pipe increases the EMW in several ways.
• When steering in a non-vertical well, the drill pipe lies against the low side of the annulus. With rotation, the drill pipe is more centralized in the annulus but it constantly moves about laterally. This change in eccentricity changes the flow path and flow regime of the fluid. This adds turbulence that requires more pressure to move the fluid. These changes increase the pressure drop in the annulus.
• This effect is increased with increasing hole inclination and rotation speed.
• Rotation keeps solids suspended, which increases mud weight.
The amplitude of the increase caused by rotation is determined by the rate of rotation, mud properties, and the well geometry: size of the annulus, concentricity of the pipe in the annulus, etc. (see Figure 4).
Axial Movement of the Drill Pipe
Movement of the drillstring in or out of the well causes a movement of fluid displaced by the
pipe (surge) or a movement of fluid to fill the space vacated by the pipe (swab). In a surge, the
fluid is moving up the annulus with increased velocity. The pressure for this increased velocity
comes from the pipe acting like a piston in the annulus. This shows up as an increase in EMW.
In a swab, the fluid flows down the annulus. This subtracts from the pressure felt at the sensor
and lowers the EMW (see Figure 5). The speed of pipe movement, mud properties, and the
annulus geometry determine the amplitude of the change in EMW.
Rig Heave
When the bit is off bottom and there is no active heave compensation, rig heave causes
reciprocal surge-swab variations in EMW. These variations have a shorter period than the
real-time sample interval, so they show up as scatter in the real-time data and a very noisy
trace in the recorded data (see Figure 6). The severity of this effect depends more on the speed
of the rig movement than on the height of the heave.
In this example from the North Sea in winter, swab was sufficient to collapse the hole and
induce packoff, lost circulation, and hole fill.
Hole Cleaning
Circulating cuttings out of the hole decreases the mud density. As the cuttings are removed,
the density of the annular fluid drops and the EMW reflects this decrease (see Figure 7).
Formation Influx
Formation fluids are usually less dense than drilling muds. Therefore, any formation fluid in
the mud decreases the mud density. This shows up in the EMW as a rapid and sometimes
dramatic decrease (see Figure 8). The exception to this is in riser-less drilling using sea water
as the drilling fluid. A water kick from an unconsolidated formation can carry sand into the
annulus and increase the density of the annular fluid. This causes an increase in the EMW (see
Figure 9).
• The wellbore geometry and the size of the drill pipe determine the size of the annulus. A
larger annulus allows a slower movement of displaced fluid past the moving drill pipe.
This reduces the swab or surge pressure effect. Any restrictions such as packoff or
swelling of the well bore reduces the annulus size and increases the swab or surge effect.
See Figure 11.
• The mud properties (viscosity, gel properties, solids load, etc.) determine the resistance to
movement and, therefore, the pressure required to move the fluid. The more viscous or
gelled the fluid, the greater the swab or surge effect.
• Pipe running speed determines how fast the fluid must move as it is displaced and
therefore, how much pressure is necessary to move it. This is the main factor in
controlling surge and swab pressures.
• Circulation pressure adds to surge pressure because the displaced fluid adds to the
annulus flow. Circulation reduces the swab effect because the circulating fluid is filling
the space vacated by the pipe.
Surge pressure is maximum at the bit because this is the point of maximum annular pressure
loss by the moving fluid. Below the bit, there is no fluid flow, and the fluid below the bit acts
as a closed hydraulic system. In a non-gelled mud, surge or swab pressure at the bit is felt
equally throughout the fluid below the bit. A gelled mud reduces the pressure felt below the
bit by an unknown amount.
Above the bit the surge is felt throughout the annulus. The swab or surge is the result of fluid
flow, not hydrostatic pressure. Therefore, it is not affected by the inclination of the well.
Figure 11 shows the effect of a tight hole on swab pressure. Between 12:15 and 12:25, the
string was pulled through a tight section of the hole. The EMW dropped to 1.20 gm/cc.
Between 12:35 and 12:40 a slightly higher running speed produced a swab of 1.23 gm/cc.
The LOT or FIT is normally performed after drilling a few meters past the casing shoe to
expose the formation. The well is shut-in using the BOPs, and the cement pumping unit
applies pressure, usually through the choke line. The pump is then stopped and the pressure is
observed for at least 10 minutes to determine the rate of pressure decline. In a FIT, pressure is
applied to a predetermined level that will not fracture the formation (usually the level needed
to drill to the next casing point). Figure 12 is an example of a FIT plot.
During a LOT, pressure is applied to the open formation until a fracture develops.
Figure 12 and Figure 13 show the results of a typical leak-off test.
The straight-line pressure increase continues to point A, where the formation starts to fracture.
The leak-off pressure, point B, where the formation starts to take whole mud, is used to
calculate the fracture gradient. Pumping stops at point B to observe the pressure decline. The
rate of pressure decline indicates the rate at which mud is being lost.
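One way to picture this interpretation is a sketch that fits a line to the early straight-line build-up and flags the first pressure falling below it (the tolerance and the data are invented; real LOT interpretation is an engineering judgment, not a simple threshold):

```python
def leak_off_index(volumes, pressures, n_fit=5, tol=25.0):
    """Return the index of the first point falling more than `tol` (psi)
    below the straight line fitted to the first n_fit points."""
    xs, ys = volumes[:n_fit], pressures[:n_fit]
    mx, my = sum(xs) / n_fit, sum(ys) / n_fit
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    for i in range(n_fit, len(volumes)):
        if pressures[i] < slope * volumes[i] + intercept - tol:
            return i
    return None

# Synthetic test: linear build-up to 500 psi, then the curve rolls over
vols = list(range(10))
press = [0, 100, 200, 300, 400, 500, 560, 590, 600, 605]
print(leak_off_index(vols, press))  # 6
```

The flagged index corresponds to point A, where the pressure-versus-volume trend departs from the initial straight line.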
LOT Errors
When measured from the surface, hydrostatic pressure is added to the surface pressure
readings to obtain the actual pressure exerted on the formation. Hydrostatic pressure
calculations use the surface mud weight. Cuttings in the annulus or poorly conditioned mud
can introduce errors into this hydrostatic measurement. This is why it is standard practice to
circulate uniform mud for 1 to 2 hours before performing the LOT or FIT.
There are also pressure transmission losses associated with surface pressure measurements
which lead to errors in LOT interpretation.
• Gelled muds may not transmit pressure effectively. This results in a difference between
surface and downhole pressure readings.
• Surface lines and valves may have pressure drops that introduce error in the pressure
readings.
The direct measurement of downhole pressure eliminates these sources of error and reduces
the need for the circulation time.
Figure 14 shows data from a FIT in a high-pressure, high-temperature North Sea well. The
pumps-off maximum in Figure 14 gave the actual pressure of the FIT as soon as the pumps
came back on.
In this example the surface measurement was significantly lower than the pumps-off
maximum pressure. The main source of error was an out-of-calibration surface gauge. If the
erroneous surface measurement had not been detected, the well would have developed severe
problems and been unable to reach TD in the subsequent section.
A large amount of Pressure data shows this phenomenon to be a rare occurrence, but when it
does occur it can be significant. The pressure required to break the gel is currently not
modeled due to the uncertainty in the time-dependent gel properties of the mud. When drilling
at pressures close to the fracture pressure, gel pressure spikes can initiate fractures in the
formation that propagate at drilling pressures and initiate lost circulation. Figure 16 is an
example of fractured formation and lost circulation due to a large gel pressure spike.
Cuttings Settling while Sliding in High-Angle Wells
The effect of sliding on hole cleaning depends on the inclination of the hole. In vertical
sections the effect of cuttings settling out of the mud is minimal. However, as the angle
increases, so does the tendency of cuttings to settle on the low side of the wellbore. See
Figure 18.
As the cuttings accumulate on the low side of the well, they form into small dunes under the
force of the circulation. Unstable hole conditions, such as cave-in and washout, increase this
tendency in two ways. They provide places for the cuttings to accumulate, and they reduce the
velocity of the fluid in the area of the enlarged hole allowing more cuttings to settle.
This produces an uneven EMW as the piles build and are washed away. As the size of these
piles increases, they make it harder to slide the pipe and keep weight on the bit. If they are not
removed before the pipe is pulled up, they can cause stuck pipe.
A packoff can result from hole collapse or from allowing cuttings to accumulate in the
annulus. Pulling up the drillstring without sufficient cleaning often causes a packoff as the
cuttings pile up around the stabilizers or bit. In severe cases, this leads to stuck pipe.
The real-time sample rate may not catch the short duration packoff spikes. The pumps-on
maximum pressure captures the highest pressure experienced during the pumps-on cycle. This
may be a packoff pressure spike, if it is higher than any gel or surge pressures generated.
Recorded data captures all packoff spikes and displays them in the log.
The recorded pressure log can also help determine the fracture propagation pressure
more accurately. This fracture propagation pressure is a useful measure of the formation
strength, perhaps more so than the LOT, which normally includes the tensile strength of the
formation and so may be anomalously high.
The fracture propagation pressure is necessary to calculate the EMW that will prevent lost
circulation.
If the Static weight is correct but the ECD is different than expected, the viscosity may be out
of specification.
When these conditions exist, mud flows into these fractures while the pumps are on, then flows
back into the annulus when the pumps are stopped. The returns are more noticeable than the
losses because the return occurs rapidly and during a period when no flow is expected. This is
often confused with a formation influx.
The traditional cure for formation influx is to increase mud weight to ensure an adequate
overbalance in the absence of circulation. However, if the flow is mud returns from fractures
that are closing, not an influx, increasing the mud weight will increase the problem. Mud
losses continue, and eventually the fracture propagation pressure is exceeded, resulting in total
losses.
The characteristic pressure signature for loss/gain occurs during the pumps-off cycle, when
real-time data cannot be transmitted. Only severe cases are detected in real-time data. The
recorded pressure data does show the characteristics of loss/gain.
Figure 24 shows the EMW for a pumps cycle at a depth of 16,679 ft. When the pumps were
turned off, the EMW fell sharply to 16.16 lb/gal and then fell more gradually over the next 20
minutes to a near static level of about 16.12 lb/gal. When the pumps were restarted, the EMW
rose quickly back to 16.47 lb/gal, essentially the level before the connection. No flow was
reported at this time, although the well was giving back 45 bbls, 10 bbls more than previously.
Figure 25 shows a connection and flow check at 17,230 ft where loss/gain was reported.
When pumps were stopped, the EMW fell rapidly from 16.42 lb/gal to 16.37 lb/gal and then
gradually to 16.12 lb/gal just before the pumps were restarted. Loss/gain was reported and
flow checks had returns of +85 bbls.
The gradual fall in pressure to the static level, shown in Figure 25, indicates returning mud
flow when the pumps stop. The return mud flow prevents the pressure from falling rapidly to
the static level as shown in Figure 23.
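The contrast between a rapid fall to static (Figure 23) and a gradual decline (Figure 25) can be sketched as a simple classifier (the 90% heuristic is an invented illustration, not a field rule):

```python
def pumps_off_signature(emw_series, quick_fraction=0.9):
    """Classify a pumps-off EMW decline.

    emw_series : EMW samples from pump shutdown until just before restart.
    Returns 'normal' if at least quick_fraction of the total drop happens
    at the first sample after shutdown (rapid fall to static); otherwise
    'possible loss/gain' (gradual decline, mud returning from fractures).
    """
    start, first, static = emw_series[0], emw_series[1], emw_series[-1]
    total_drop = start - static
    if total_drop <= 0:
        return 'normal'
    return ('normal' if (start - first) / total_drop >= quick_fraction
            else 'possible loss/gain')

# Values patterned on the connection at 17,230 ft described above:
print(pumps_off_signature([16.42, 16.37, 16.25, 16.12]))  # possible loss/gain
```

With a normal connection the first sample already sits near the static level, so almost the whole drop is immediate.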
After restarting the pumps, the EMW eventually increased to 16.40 lb/gal, or nearly the same
as before the connection, but took more than 15 minutes even with a slightly higher circulation
rate.
In addition, on the decline curve there is typically a break in the EMW slope which is
interpreted to be the fracture closure pressure similar to that interpreted in many leak-off tests
when pumping is halted.
When drilling with weighted mud, the formation fluid is less dense than the mud.
Therefore, an influx dilutes the mud and lowers the EMW. However, if the formation fluid
carries a lot of solids, the reduction in EMW may not be apparent.
A kick in a weighted mud is detected on the EMW log by a decrease in EMW that cannot be
explained by a change in operating conditions. The rate of change in EMW depends on the
amount of the influx and the relative difference in the weight of the mud and the weight of the
formation fluid. A slow flow into the annulus causes a gradual decrease as the formation fluid
rises in the annulus and dilutes more of the mud. A large flow causes a rapid decrease in
EMW. Figure 26 shows a large water kick in a weighted mud.
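A crude screen for such a decrease might look like this (the window and threshold are arbitrary illustrative values; a real kick decision also weighs pit gain, flow checks, and operating conditions):

```python
def flag_emw_drops(emw, window=5, threshold=0.2):
    """Indices where EMW has fallen by more than `threshold` (ppg)
    over the previous `window` samples."""
    return [i for i in range(window, len(emw))
            if emw[i - window] - emw[i] > threshold]

# Steady 16.0 ppg log followed by a rapid decrease (made-up samples)
emw_log = [16.0, 16.0, 16.0, 16.0, 16.0, 16.0, 15.9, 15.7, 15.5]
print(flag_emw_drops(emw_log))  # [7, 8]
```

A slow influx would instead produce a gradual drift that only trips the flag once the cumulative change exceeds the threshold.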
Figure 27 Water and Sand Influx in Salt Water Fluid (Riserless Drilling)
Figure 28 Water and Sand Influx into Salt Water Mud, Time Depth Log
SONATRACH MWD MODULE
Neutron
(Title figure: a log section across shale, gas, oil, salt water, shale, and salt intervals)
Neutron Porosity Sensor Theory
• The Life of a Neutron
• A chemical source (AmBe: americium-241 and beryllium) generates neutrons which scatter into the formation (free neutrons do not occur naturally)
• These epithermal, or "fast," neutrons are slowed by collisions with nuclei in the formation
• Hydrogen nuclei are the most efficient at slowing down neutrons because their atomic mass is nearly the same as that of a neutron
(Figure: near and far detector crystals, with count rate versus porosity, 0 – 100 p.u.)
• If there is high hydrogen content in the formation in the vicinity of the source, the
emitted neutrons will be slowed rapidly, resulting in a short travel distance from the
source and low count rates at the detectors
• If there is low hydrogen content in the formation in the vicinity of the source, the
emitted neutrons will not be slowed rapidly, resulting in a long travel distance from
the source and high count rates at the detectors
• The measurement relationship is as follows:
• High Hydrogen Content = High Porosity = Low Counts
• Low Hydrogen Content = Low Porosity = High Counts
Neutron Porosity Sensor Theory
• Most LWD Neutron tools are thermal neutron devices
• Thermal Neutron devices utilize He3 tubes as detectors
• The He3 gas in the tube is very efficient in capturing
thermal neutrons
• Despite the apparently simple nature of the
measurement and the counts-to-porosity relationship,
the initial derived porosity value is typically far from
correct
• Neutron data requires significant environmental
correction because of the limited depth of investigation
of the sensor (4 – 6”)
Neutron Porosity Data Interpretation
• “Bound” Water
• The overall negative
charge of clays
combined with their
large surface area
means that a relatively
high volume of water
can be associated with
each clay grain
• This “bound water” is
not free to move; it is
tightly held to the grain
by strong adsorption
forces
Neutron Porosity Data Interpretation
• The characteristic crossover of the neutron and density curves is an indication of the presence of gas
(Figure: Neutron Porosity and Bulk Density curve overlay)
Neutron Porosity Sensor Applications
Neutron-Density Crossplot
Neutron Theory of Measurement
Alpha particles, which are emitted from the americium, consist of two protons and two
neutrons. They are also the nuclei of helium-4 atoms, and are one of the products of the
radioactive decay of heavy elements such as radium and uranium. Alpha particles carry two
units of positive charge, and interact very strongly with other charged particles such as
electrons. As a result they experience a tremendous amount of “drag” from electrons in nearby
atoms, and are slowed very rapidly as they pass through solid matter. For example, the 5.5-
MeV alpha particles emitted by americium will not penetrate an ordinary sheet of paper or the
dead outer layer of human skin. It follows that none of the alpha radiation escapes from the
source.
Most alpha particles simply slow down, gaining two electrons and thereby becoming helium gas
inside the source (the volume of gas that is produced is never a problem for the structural
integrity of the source). Occasionally, however, an alpha particle will interact with a beryllium
nucleus in the AmBe source. This is a very rare event, which occurs only once for about every
17,000 alpha particles. When such an interaction does occur, a neutron is emitted with an energy
of up to approximately 10 MeV [1]. Neutrons are emitted from the source with equal probability
in all directions.
The nominal activity of the neutron source refers to the americium material within the source.
The stated activity of 8 Curies corresponds to the emission of about 8 × 3.7 × 10¹⁰ alpha
particles per second. Since only one in 17,000 alpha particles causes emission of a neutron,
about 17 million neutrons are emitted each second from such sources.
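The arithmetic is easy to verify (a sketch; 1 Ci = 3.7 × 10¹⁰ decays per second):

```python
CURIE = 3.7e10                # decays per second per Curie
activity_ci = 8.0             # stated AmBe source activity (Curies)
alphas_per_neutron = 17_000   # roughly one neutron per 17,000 alphas

alphas_per_second = activity_ci * CURIE
neutrons_per_second = alphas_per_second / alphas_per_neutron
print(f"{neutrons_per_second:.3g}")  # about 1.74e7, i.e. ~17 million per second
```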
1. "MeV" stands for 10⁶ electron volts. The electron volt is the unit of energy typically used to
refer to phenomena on the atomic and nuclear level. It is the energy gained by an electron
when it falls through an electric potential of one volt, and is equivalent to 1.6 × 10⁻¹⁹ joules, or
3.8 × 10⁻²⁰ calories.
When the laws of conservation of energy and momentum are applied to such an event, it can
be shown that the maximum energy that can be lost by the neutron is:
ΔEmax = E₀ [4mM/(M + m)²]
where m is the neutron mass and M the mass of the struck nucleus. The term in brackets is the
fraction of the initial energy that can be lost in a head-on collision, and it can be evaluated for
the most common elements in formation rocks: hydrogen, carbon, oxygen, silicon, and
calcium. We do this in the table below:
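As a sketch, the entries of that table can be computed from the standard elastic-collision result for the maximum fractional energy loss, f = 4mM/(M + m)², taking m = 1 for the neutron and integer mass numbers for M (an approximation):

```python
targets = {"hydrogen": 1, "carbon": 12, "oxygen": 16,
           "silicon": 28, "calcium": 40}

for element, mass_number in targets.items():
    # Maximum fraction of the neutron's energy lost in one head-on collision
    f = 4 * mass_number / (mass_number + 1) ** 2
    print(f"{element:9s} A={mass_number:2d}  max fraction lost = {f:.2f}")
```

Hydrogen, with a mass nearly equal to the neutron's, can absorb essentially all of the neutron's energy in a single collision, which is why it dominates moderation.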
It is clear from the entries in the third column of the table that while the presence of various
elements will affect the moderation of neutrons (limestone, consisting of calcium, oxygen, and
carbon, moderates somewhat differently from sandstone, consisting of silicon and oxygen),
hydrogen in any appreciable amounts will dominate. Typical reservoir rocks, such as
sandstone, dolomite, and limestone, have no hydrogen in their basic matrix material. Thus
hydrogen will be present only in the pore space in either water, oil or gas. In such rocks, then,
the moderation of neutrons is governed primarily by hydrogen in the pore space and only
secondarily by the type of rock itself.
A thermal neutron detector (exactly how thermal neutrons are detected is discussed at length
below) placed at an appropriate distance from the source will detect neutrons at a rate
determined by their density in the thermal neutron cloud. If the moderating properties of the
formation are changed, say by increasing the hydrogen-filled porosity of the formation, the
radius of the neutron cloud will shrink, and the counting rate of the detector will decrease as
well. This simple picture is useful for typical neutron tool design – higher counts are
associated with low porosity, lower counts with high porosity.
Detection of Neutrons
Thermal neutrons, that is neutrons whose energies are less than one electron-volt, are detected
by the Neutron sensor. How this occurs within the detector, and how other radiation such as
gamma rays is discriminated against, is discussed in this subsection.
Radiation is detected through the ionization that occurs when it interacts with matter. One
often hears the phrase “ionizing radiation” as a consequence. The detection technology,
The pulses from proportional counters are smaller than those from GM tubes, but the benefit is
the additional information that is related to the amount of energy causing the ionization in the
tube. This property renders proportional counters energy sensitive as are scintillation crystals.
Proportional counters can be rendered sensitive to neutrons by the inclusion of appropriate
materials. He3 gas is one such material. Each time an absorption occurs, a tritium (H3) nucleus
and a proton (H1) are released with a combined energy of 765 keV. The strong ionizing ability
of these particles means that this amount of energy will be dissipated in the gas of the tube
through ionization. Because only a few eV are required to produce an electron-ion pair, many
thousands of electron-ion pairs will result. Since the same amount of energy is released with
each absorption, the amplitude of each anode pulse resulting from neutron absorption will be
essentially the same.
Other radiations in the environment – primarily gamma rays – can also produce pulses in the
He3 proportional counters. However, the efficiency of He3 proportional counters for counting
neutrons can approach 100%, whereas for gamma rays it is a fraction of one percent. In
addition, under normal conditions there are simply very few gamma rays downhole in the
energy range that could interfere with neutron counting in the proportional counter. As a
consequence, the neutron count rate is normally very high relative to the gamma rays detected
by a He3 proportional counter, and most gamma-ray counts that do occur are of much lower
amplitude than the pulses produced by neutrons, which are of relatively high amplitude. The
relatively low pulse amplitude produced by gamma rays allows them to be discriminated
against by setting suitable electronic thresholds. With an appropriate proportional counter and
electronic design, the only pulses exceeding the threshold will be those produced by neutrons.
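The threshold discrimination described here can be sketched as follows (the threshold value and pulse amplitudes are invented for illustration):

```python
def count_neutrons(pulse_amplitudes_kev, threshold_kev=200.0):
    """Count pulses whose amplitude exceeds the electronic threshold.

    Neutron capture in He3 deposits a fixed ~765 keV per event, while
    gamma-ray pulses are mostly much smaller, so a single amplitude
    threshold separates the two populations.
    """
    return sum(1 for a in pulse_amplitudes_kev if a >= threshold_kev)

# Mixed pulse train: three neutron-like events among small gamma pulses
pulses = [40, 765, 15, 758, 80, 770, 25]
print(count_neutrons(pulses))  # 3
```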
Sensor Calibration
Master Calibration
The master calibration is defined using the tool response in three limestone formations and a
water tank. The approximate limestone porosities used are zero, 17 and 25 p.u. The porosity of
water is, of course, 100 p.u. The master calibration borehole diameter is specific for each
neutron tool diameter as well: for slim tools it is 8.5 inches.
The master calibration blocks are kept and maintained in the AES facility. The primary
function of the master calibration is to establish the tool response, which except for relatively
minor manufacturing variations, and normal detector efficiency variations, is expected to be
uniform from tool to tool.
The purpose of the calibration is to normalize the response of each tool to that of a “master
tool”. The “master tool” is simply a relationship of tool response to porosity that is determined
for an early, representative Neutron device. Since the basic manufacture and design of each
sensor is nearly identical (the variations are due to manufacturing tolerances and differences in
counting efficiency and tolerances of the He3 tubes), the response of each tool to formation
porosity will be similar. The principal variances in tools of exactly the same design will be due
to differences in efficiency of the He3 tubes themselves.
The response of the master tool is characterized as follows. First the response of each detector
bank in each of the master calibration formations is recorded. These responses then become
the standards to which all other tools of the same manufacturing design are normalized during
their own calibration. The ratio of the average count rate per He3 tube for each bank is
computed, and a transform for computing porosity from the ratio is determined.
Mathematically this works as follows: if we let the average near and far count rates be Navg
and Favg, respectively, the ratio r is:
r = Navg / Favg
and the transform to apparent limestone porosity in a 6-inch diameter borehole is:

r = ratio (near/far)
bh = borehole size (in)
porcal = apparent porosity (p.u.)
Environmental corrections, including corrections for actual hole size, can be computed relative
to this apparent porosity value.
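As a sketch of this calibration step (the actual ratio-to-porosity transform is tool-specific and not reproduced here; the polynomial coefficients below are hypothetical placeholders):

```python
def near_far_ratio(navg, favg):
    """r = Navg / Favg: average near over average far count rate per He3 tube."""
    return navg / favg

# Hypothetical placeholder coefficients -- the real master-tool transform is a
# calibrated, borehole-size-specific fit, not this quadratic.
def apparent_porosity(r, coeffs=(-10.0, 25.0, 5.0)):
    """Map the near/far ratio to apparent limestone porosity (p.u.)."""
    c0, c1, c2 = coeffs
    return c0 + c1 * r + c2 * r * r
```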
Environmental Effects
While the moderating effect of a formation is dominated by its hydrogen content, it is affected
to a lesser extent by the other elements that are in the formation. The effect of the rock type,
formally called the lithology effect, can amount to several porosity units (p.u.). The lithology
effect is compensated for through prior knowledge of the formation type coupled with
corrections based on experiment and mathematical modeling.
A number of other factors can also affect the moderating capability of the downhole
environment, and will be discussed briefly here. Details specific to the Neutron sensor will be
covered below in the context of tool calibration.
The borehole itself can be a significant environmental factor in neutron porosity logging.
Since drilling mud, whether oil or water based, is significant in its hydrogen content, borehole
enlargement can make the porosity appear too high. Correction for borehole diameter requires
either knowledge of or assumption of the true borehole diameter. Of lesser, but non-negligible
importance, is the effect of various mud weights as well.
Another important effect can be the presence of certain elements which have an unusually
high affinity for capture of thermal neutrons. The presence of such elements will reduce the
population of thermal neutrons faster than would otherwise be the case, and increase the
apparent porosity as determined by the tool. One common element producing this effect is
chlorine which is present in formation water, and drilling mud. Corrections for the presence of
chlorine require knowledge of the salinity of the formation water or drilling mud, and are
formation and borehole diameter dependent.
Downhole temperature and pressure can also affect the apparent porosity. However, since
increasing temperature, which tends to lower fluid density, occurs at increasing depths, its
effects are usually largely compensated by the increasing pressure at those depths.
Compensation for these two variables usually occurs in tandem as a result.
Temperature/Pressure Correction
As the temperature increases, the borehole and formation fluids expand, causing a reduction in
both density and hydrogen content of the fluids. However, as the pressure increases the fluids
compress slightly, causing an increase in density and consequently an increase in hydrogen
content of the fluids. With increases in hole depth it is normal to see both temperature and
pressure increasing, consequently these are competing effects. Change in temperature has a
greater effect than change in pressure, as most liquids are nearly incompressible.
The inputs are the Temperature and Pressure Records found in the Neutron Information Page.
The records can be any temperature or pressure files. The effect of the Temperature Correction
is a positive correction, less than +1.0 p.u. per 50°F above 70°F. The effect of the Pressure
Correction is a negative correction, roughly -0.1 p.u. per 2,000 psi increase.
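Linearizing the rule-of-thumb rates quoted above gives a rough sense of the two competing corrections; this is an approximation of the stated magnitudes, not the tool's actual correction algorithm:

```python
# Rule-of-thumb magnitudes from the text: the temperature correction is
# positive, up to about +1.0 p.u. per 50 degF above 70 degF; the pressure
# correction is negative, roughly -0.1 p.u. per 2,000 psi. These linear
# approximations stand in for the actual correction charts.

def temperature_correction_pu(temp_f):
    """Approximate positive porosity correction for temperature above 70 degF."""
    return max(0.0, (temp_f - 70.0) / 50.0) * 1.0

def pressure_correction_pu(pressure_psi):
    """Approximate negative porosity correction for borehole pressure."""
    return -0.1 * (pressure_psi / 2000.0)

# At 170 degF and 4,000 psi the two effects partially cancel:
net = temperature_correction_pu(170.0) + pressure_correction_pu(4000.0)
print(round(net, 2))  # 1.8 p.u.: +2.0 (temperature) - 0.2 (pressure)
```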
Lithology Correction
Corrections are required for lithologies other than limestone. In addition to limestone the main
lithologies that are encountered in logging are sandstone and dolomite. Although the presence
of hydrogen in the pore space of the rock is the primary variable in neutron porosity
measurements, the moderating effects of the rock type can be significant, amounting to several
porosity units above or below the porosity that would be computed were the rock a limestone.
The sandstone correction is positive relative to limestone, and the dolomite correction is
negative. For sandstone, this is a consequence of the fact that limestone has a greater
percentage of very light atoms than does pure sandstone. In particular, it contains carbon,
which is 25% lighter than oxygen, the lightest element in pure sandstone. As a result,
limestone is more effective at moderating neutrons than sandstone, and the same Near/Far
ratio in a sandstone as in a limestone implies a higher porosity for the sandstone.
The reverse is true in dolomite. This is because in dolomite many calcium atoms, which are
very low in their moderating effect, are replaced by magnesium atoms which have almost half
the mass of calcium atoms. This increases the moderating effect of the rock matrix relative to
pure limestone. Thus a dolomite rock of the same porosity as a limestone rock will have a
greater apparent porosity and the correction will have to be negative.
SONATRACH MWD MODULE
[Figure: typical density log response through a sequence of shale, gas, oil, salt water, shale, and salt]
Density Porosity Sensor Theory
• The Life of a Gamma Ray
• A Cs-137 source emits gamma radiation
that is focused into the formation at an
energy of 0.662 MeV
• Gamma rays collide with electrons in
the formation and are scattered, losing
energy, not speed, in the process
(Compton Scattering)
• Eventually, when the gamma ray is at
very low energy (< 100 keV) it is
absorbed by an electron-atom system
(Photoelectric Effect) and completely
disappears
• The sensor measures the number of
gamma rays that are scattered back to
the detectors
Density Porosity Sensor Theory
• The objective of the density porosity
measurement is to infer the bulk density
of the formation by measuring the
attenuating effect that the matrix and
pore fluids have on emitted gamma rays
(function of bulk electron density)
• As gamma radiation interacts with
materials of high electron density, it
loses energy more rapidly
• For example, a 5-cm thick piece of lead
would attenuate a gamma ray more
efficiently than the human body
• Another by-product of the measurement
is the Photoelectric Effect (Pe) which
allows the log analyst to determine
mineralogy
Density Porosity Sensor Theory
• The matrix will attenuate gamma radiation more than the pore fluid since it is denser
(Matrix Bulk Density: Sandstone 2.65 g/cc)
• Correction Available (Far - Near Density): a correction which compensates for mud
density and blade distance from the formation
Density Porosity Data Interpretation
Gamma rays that have traveled through the formation are sensed by two sodium iodide (NaI)-
based detection systems, which are mounted in the drill collar underneath one of the stabilizer
blades. Electrical pulses produced by the systems indicate not only that detection of a gamma
ray has occurred, but also provide information about the energy of the gamma ray (this is the
so-called “spectral” capability of such detectors). It is this energy sensitivity that enhances the
density measurement and provides information about the lithological characteristics of the
formation being logged.
The electronics store the information available in the pulses from the detectors. They perform
the additional functions of gain stabilization and data storage. Optional capabilities also
include downhole calculation of density and correction for extremely large borehole
enlargements through what is termed a “fast sampling” correction.
Density Sensor Theory of Operation
ρe = ( 2Z / A ) * ρb
where:
ρe = electron density
Z = atomic number of the material
A = atomic weight of the material
ρb = bulk density of the material
The electron density index, often referred to simply as the electron density, is proportional to
the product of the ratio Z/A and the bulk density; it is therefore directly proportional to the
bulk density whenever this ratio remains constant. In fact this ratio is very nearly 1/2 for most elements in the surface of the earth, with
the element hydrogen being a significant exception. Thus, the three main sedimentary rock
matrices in the earth--calcite, dolomite, and quartz--have very nearly identical bulk densities
and electron density indices. The other common downhole material, water, differs
significantly from this rule by virtue of its hydrogen content.
To take account of the effect of water on the electron density index of porous rocks, density
tools are calibrated to electron density rather than to bulk density. Furthermore, the density log
is adjusted to give the bulk density of water-filled limestone. The relationship between the
bulk density ρb and the electron density index ρe is:
ρb = 1.0704 ρe - 0.188
Thus, in practice, the density tools produce a direct measure of the electron density index,
which in turn is converted back into a bulk density using this transform.
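The two relations above can be checked numerically. The Z and A values below are standard atomic data; note that applying the calibration transform to fresh water (2Z/A = 1.11) returns very nearly its true bulk density of 1.00 g/cc:

```python
def electron_density(z, a, rho_b):
    """rho_e = (2Z / A) * rho_b."""
    return (2.0 * z / a) * rho_b

def bulk_from_electron(rho_e):
    """Calibration transform: rho_b = 1.0704 * rho_e - 0.188."""
    return 1.0704 * rho_e - 0.188

# Water (H2O): Z = 10, A = 18.015, true density 1.000 g/cc -> 2Z/A = 1.11
rho_e_water = electron_density(10, 18.015, 1.000)
# Quartz (SiO2): Z = 30, A = 60.08, density 2.654 g/cc -> 2Z/A = 0.999
rho_e_quartz = electron_density(30, 60.08, 2.654)

print(round(bulk_from_electron(rho_e_water), 3))   # ~1.000: water comes back out
print(round(bulk_from_electron(rho_e_quartz), 3))  # ~2.649: quartz nearly unchanged
```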
scattered out of the path to the detector and even fewer are detected. A density meter can be
constructed for a particular source and detector configuration, using standard sample
geometry. Calibration of such a device can be accomplished with as few as two standards,
since the response is an exponential function of the product of the density of the material and
its thickness. The density of unknown materials can be determined by a device such as this
when the source and detector can be placed on either side of the unknown material. For
example, density of mud in the flowline may be measured by such a technique.
In the logging situation it is impractical for a portion of the formation to be placed between the
source and detector. What is done instead is to look at the flux of scattered gamma rays. In this
case high density shielding material, usually lead or tungsten, is placed between the detector
and source. This material must be thick enough to prevent gamma rays traveling directly from
the source to the detector. Were such a tool to be placed in a vacuum no gamma rays from the
source would be registered in the detector, since none could travel directly to the detector
because of the shielding, and those not on a path to the detector would not be scattered into it.
However, if the tool is placed against a material of some density (Figure 4), then some gamma
rays, which otherwise would miss the detector completely, scatter randomly in the material
and enter and are detected in the scintillating crystal. The number of gamma rays that enter the
crystal is determined, once the source strength, the crystal volume, and the source to crystal
spacing are set, by the electron density of the unknown material. Again, the tool may be
calibrated using known standards.
Density tools, then, allow us to infer the density of material by observing how gamma rays
from a source are scattered by the material. Logging tools detect gamma rays that are scattered
from the material whose density is being measured. The significance of the photoelectric
effect for the density measurement is that it can interfere by affecting the number of gamma
rays detected. For example, imagine two materials of the same density, say aluminum, and a
lead/water mixture adjusted to have the same density as the aluminum (i.e., small amount of
lead, but lots of water). The path in the lead/water mixture might be identical by virtue of the
identical electron densities; however, it would probably be shortened by a photoelectric event
occurring earlier along the path. All other things being equal, materials with heavier elements
will appear denser since the gamma ray flux will decrease more rapidly because of the greater
photoelectric absorption.
To produce the pulse height distribution the pulses produced by the scintillation detector and
PMT are digitized so that they may be analyzed and stored in the sensor electronics. The
digitized pulses are then sorted according to pulse height, or voltage, and recorded in the pulse
height distribution (PHD). For example, it may be that a pulse height of 10 volts corresponds
to the gamma ray of 662 KeV energy. Each time a pulse of 10 volts amplitude is observed,
channel 200 in the pulse height distribution is incremented by one. Correspondingly, each
time a gamma ray of 3.31 KeV is detected, the first channel in the distribution is incremented.
The application of this process to a large number of gamma rays results in a statistical
distribution as a function of their energy.
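The binning described above amounts to a simple histogram at roughly 3.31 KeV per channel; the channel count below is illustrative:

```python
KEV_PER_CHANNEL = 3.31   # spectral gain maintained by the tool software
N_CHANNELS = 256         # illustrative channel count

def build_phd(energies_kev):
    """Histogram detected gamma-ray energies into pulse-height channels."""
    phd = [0] * N_CHANNELS
    for e in energies_kev:
        channel = min(N_CHANNELS - 1, round(e / KEV_PER_CHANNEL))
        phd[channel] += 1
    return phd

# Two full-energy 662 KeV events land in channel 200; a 3.31 KeV event lands
# in the first channel above zero.
phd = build_phd([662.0, 662.0, 3.31, 150.0])
print(phd[200], phd[1])  # 2 1
```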
Two important examples of spectral distributions will be discussed in the next sections: the
cesium reference source spectrum and the logging source spectrum.
The first thing to notice is that the overall structure of both the near and far reference source
spectra is very similar. The dissimilarities that are evident are due to the differing strengths of
the individual cesium sources and to differences in the geometries of the two detector systems
(the far detector has more volume than the near, the materials in the vicinity of the detectors
are different, etc.).
At the extreme right of each spectrum is a feature that is designated “Cs137 Photopeak”. The
counts in this portion of the spectrum correspond to pulses that were produced by the
absorption of all the energy of the 662 KeV gamma rays from the source. For total absorption
to occur, a 662 KeV gamma ray must either undergo a single photoelectric interaction (this is
more likely in relatively high density NaI than in lower density materials such as limestone or
sandstone), or it must Compton scatter one or more times, finally disappearing through a
photoelectric interaction, all within the crystal.
The location of the photopeak is important because it is used to calibrate the spectrum. The
tool software locates this peak and assigns to the channel position of its center the energy
value of 662 KeV. Throughout the normal operation of the tool, the software continually
locates this peak, and then adjusts the electronic parameters (the high voltage to the PMTs,
and the gains of the near and far detectors) so as to place and maintain the centroid of the peak
in channel 200. This has the desirable effect of using most of the available channels in tool
memory and maintaining a spectral gain of approximately 3.31 KeV per channel.
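A minimal sketch of that stabilization loop, assuming a count-weighted centroid and a simple multiplicative gain step (the real firmware also steers the PMT high voltage):

```python
TARGET_CHANNEL = 200  # where the software keeps the Cs137 photopeak centroid

def peak_centroid(phd, lo, hi):
    """Count-weighted centroid of the photopeak over channels lo..hi."""
    counts = phd[lo:hi + 1]
    total = sum(counts)
    return sum((lo + i) * c for i, c in enumerate(counts)) / total

def gain_correction(phd, lo, hi):
    """Multiplicative gain factor that would move the centroid to channel 200."""
    return TARGET_CHANNEL / peak_centroid(phd, lo, hi)

# A drifted photopeak centered on channel 195 calls for a gain increase:
phd = [0] * 256
phd[194], phd[195], phd[196] = 10, 20, 10
print(round(gain_correction(phd, 190, 199), 4))  # 1.0256
```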
It is evident that the total number of counts in the lower energy regions of the reference source
spectrum is much greater than the total in the photopeak. While it is possible that some of
these may be produced by other sources of background radiation, most of them are from 662
KeV reference source gamma rays that did not deposit all their energy in the crystal. This
occurs when a gamma ray scatters once in the crystal, depositing part of its energy, and then
exits the crystal before depositing the rest of its energy. For such an event, a count is
registered in the channel corresponding to the energy that was lost by the gamma ray when it
interacted, rather than to the full amount of energy the gamma ray possessed on entering the
crystal. Since 662 KeV gamma rays can only produce these counts through Compton
scattering in the crystal, this portion of the spectrum is sometimes referred to as the “Compton
background”.
The Compton background has an upper bound that forms another significant feature of the
spectrum. This is labeled the “Compton edge”. It is separated from the photopeak by a low
count valley. The valley is present because a single collision cannot transfer an energy in the
range between the edge and the maximum corresponding to the photopeak. A gamma ray
may lose an energy in that range, but only by scattering two or more times before escaping
from the crystal. The probability of this event is low, but it does occur, explaining the
presence of the small number of counts in the region.
Below energies corresponding to the Compton edge, any energy value between zero and that
of the Compton edge may be transferred. This accounts for the continuous Compton
background down to zero. In the actual spectrum, an electronic cutoff is imposed to prevent
saturation of the counting system by low energy noise from the photomultiplier. Were the
cutoff not applied, the noise, which increases exponentially at very low energies, could
saturate the counting circuits. If the cutoff is not high enough to cut out all of this low energy
noise, some of the noise remaining in the spectrum may appear as an artifact resembling a
peak on the very left most portion of the spectrum.
A true low energy peak may be seen in some spectra, particularly in that of the near detector.
This peak is due to x-rays from the element tungsten. There is an extra amount of tungsten
shielding around the near detector, and these x-rays are created in this material. Just as gamma
rays are absorbed in the detector, some may be absorbed in this tungsten. In the process, they
ionize and/or excite atoms inside the tungsten. As these atoms return to their ground, or
unexcited, states, they do so by emitting x-rays whose energies correspond to the energy levels
that were excited in the electronic structure of the tungsten. Some of these x-rays, which are
typically at lower energy than gamma rays, are detected in the crystal and appear at this
position in the spectrum.
To extract the density information, the reference source spectrum must be subtracted from the
acquired spectrum. To make this possible, a copy of the reference source spectrum is acquired
without the logging source in the tool prior to the logging run, usually during the sensor
calibration process. The spectrum from the logging source can be obtained by subtracting,
channel for channel, the reference source spectrum from the spectrum acquired downhole.
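The subtraction itself is straightforward; clipping negative channels to zero is an assumption here, guarding against statistical noise:

```python
def logging_spectrum(downhole, reference):
    """Subtract the reference-source spectrum channel for channel; clip at
    zero so statistical noise cannot leave negative counts (an assumption)."""
    return [max(0, d - r) for d, r in zip(downhole, reference)]

# Photopeak channels are identical in both spectra, so they subtract to zero:
print(logging_spectrum([50, 30, 12], [5, 2, 12]))  # [45, 28, 0]
```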
Field examples of such spectra for each detector are illustrated in Figure 7. On the left side of
the figure are the cesium spectra acquired with no logging source in the tool, plotted on the
same scale with examples of logging spectra acquired later with the source in the tool. It
should be noted that the vertical scales for the near and far detectors are different, since the
downhole count rates in the near detector are typically higher than those in the far. This
explains the relative magnitudes of the cesium reference spectra, which can be confusing,
since the count rates for the cesium sources in the near and far detectors are similar (see Figure
6 in the previous section).
The computed logging spectra are shown on the right side of the figure. It can be seen that
upon subtraction of the reference source spectrum from the downhole spectrum the Cs137
photopeak disappears. This is because essentially no gamma rays at the full 662 KeV energy
reach the detectors from the 2-curie logging source, even though it emits 662 KeV gamma
rays just as the much weaker reference source does. This is a consequence of the tool design,
which makes the direct path through the tool nearly impossible for the logging source gamma
rays to negotiate. Thus, since the photopeaks in the reference source spectrum and the
acquired spectrum are identical, both coming from the reference source, they disappear on subtraction.
Gamma rays that take the indirect path from the logging source through the formation to the
detector necessarily scatter at least once (most scatter more than once, particularly those
detected in the far detector), and consequently have substantially less energy than the initial
value of 662 KeV. Thus in the computed logging source spectrum for the near detector there
are no counts above about 450 KeV, and above about 400 KeV in the far detector. The
maximum energy of the gamma rays in the far detector is lower than in the near detector,
because the gamma rays have more opportunities to scatter and hence are more likely to be
lower in energy by the time they reach the far detector.
Both computed spectra are peaked, with the peak of the far spectrum being a bit lower in
energy than the near. If we look down in energy in either spectrum we find that from about
400 to 200 KeV the count rate increases. This is because the scattering increases the
population of low energy gamma rays, at the expense of the population of the highest energy
gamma rays. However at even lower energies the population again decreases. This is because
at sufficiently low energies the photoelectric effect becomes significant and instead of
scattering to even lower energies, gamma rays disappear.
There are many different materials in the earth that may have the same electron density, but
different chemical constituents. While an accurate density tool should produce the same result
in two such materials, the different chemical composition will have an effect on the spectra
obtained by the detectors. Consider as an example the case of water filled 3.6 p.u. limestone,
which has the same electron density as zero porosity sandstone. Density readings taken in
these two materials should be identical. However, thanks to their different chemical
constituencies, the logging source spectra obtained in these two materials will be somewhat
different, primarily in the lower energy range in which photoelectric absorption occurs. The
highest atomic number element in sandstone is silicon (Z=14), while in limestone the highest
is calcium (Z=20). Consequently, the photoelectric effect in limestone is greater than in
sandstone, with the result that there will be fewer counts in the low energy region of the
limestone spectrum than in the sandstone. In the upper portions of the spectra, where the
Compton scattering interaction is dominant, the counts are more nearly alike.
Although the case of 3.6 p.u. limestone compared to zero porosity sandstone is not available,
Figure 10 shows a similar comparison between spectra produced in aluminum and marble.
The photoelectric properties of aluminum are similar to sandstone, while the density of
aluminum and marble are almost the same. Using the tool calibration data, the marble
spectrum has been normalized to the aluminum spectrum, transforming it to what would be
obtained if the marble and aluminum had the same densities. It can be seen that the higher
energy portions of the spectra are identical. However, at lower energies the spectra begin to
diverge, building to a substantial difference in the 50-200 KeV range.
Since the photoelectric effects on the logging spectra are determined by the elemental
constituents of the formation rather than the electron density, spectral density tools use only
the upper portion of the spectrum to compute the formation density. The shading labeled
“Density Window” in Figure 9 illustrates the locations of these energy intervals, or windows,
in the near and far detectors. The window shown is actually something of a compromise. In
order to minimize the photoelectric influence it is desirable to choose the lower limit of the
window as high as possible. As the window energy is increased to reduce the photoelectric
effect on the measurement, the count rates in the window decrease also. Since the statistical
accuracy of nuclear measurements depends on the number of counts acquired, the statistical
uncertainty of a measurement made during a given sample period will increase as the lower
limit of the window is raised. The window values are set too high if too much time is required
to obtain a sufficient number of counts to make a statistically meaningful measurement of
density. On the other hand, if the low end value of the window is set too low, the photoelectric
effect on the density reading can be unacceptably high. Typically, then, the actual energy of
the low end of the window is set at an “optimum” value, which produces a statistically
accurate density measurement, accompanied by a small but tolerable photoelectric effect on
that measurement.
An additional window is usually set in the lower portion of the spectrum for the purposes of
computing the “photoelectric index.” This is done over the energy interval where the
photoelectric effect is the most prominent in downhole formations, which turns out to be the
interval between 50 and 120 KeV or so. Since this region of the spectrum is affected also by
the density (more so than the very high energy region is affected by the photoelectric
properties of the formation), the ratio of the data in this window to that from the density
window is used to compute a photoelectric index. Since the photoelectric factor, Pe, varies
significantly with lithology, the photoelectric index can be used as a lithology indicator -- that
is, it may provide the log analyst with information about the type of rock (i.e., sandstone,
shale, limestone, or dolomite) through which the hole is drilled.
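A sketch of the window-ratio computation, assuming the nominal 3.31 KeV/channel gain; the window bounds are illustrative, and the mapping from this raw ratio to a calibrated Pe value is tool-specific and omitted:

```python
KEV_PER_CHANNEL = 3.31

def window_counts(phd, kev_lo, kev_hi):
    """Total counts in the channels spanning an energy window."""
    lo = round(kev_lo / KEV_PER_CHANNEL)
    hi = round(kev_hi / KEV_PER_CHANNEL)
    return sum(phd[lo:hi + 1])

def pe_ratio(phd):
    """Raw ratio of Pe-window counts (50-120 KeV) to density-window counts
    (here 200-450 KeV; both windows are illustrative, not tool settings)."""
    return window_counts(phd, 50.0, 120.0) / window_counts(phd, 200.0, 450.0)
```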
The first of these properties is the rate of change of the detector count rates with changes in
formation density. Count rates decrease exponentially with increasing density. Thus, a plot of
the logarithm of the count rate versus the density results in a straight line (Figure 8).
Mathematically, this dependence can be expressed by the equation:
N = N0e^(-αρx)

where N is the detector count rate at some effective distance (x) from the source, N0 is a
reference count rate (the intercept of the straight line on the above-mentioned plot), ρ is the
formation density, and α is a constant that depends on the probability that gamma rays and
electrons of a given material will interact.
The slope is a measure of the sensitivity of the detector to density. That is, the larger the slope,
the greater will be the change in count rate for a given density change. The sensitivity depends
on the spacing of the detectors from the source. This dependence appears in the “x” term in
the expression for the slope. Thus, the greater the spacing, the greater is the slope, and the
greater is the sensitivity of the detector to formation density. In general, it is desirable to
maximize this formation sensitivity of the far detector, which is the primary density
measurement.
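Rearranging N = N0e^(-αρx) for density shows how a measured count rate maps to a formation density; the numbers below are illustrative only:

```python
import math

def density_from_counts(n, n0, alpha, x):
    """Invert N = N0 * exp(-alpha * rho * x) for the formation density rho."""
    return math.log(n0 / n) / (alpha * x)

# Illustrative check: choose alpha * x = 1, and a count rate that has fallen
# by a factor of e^2 relative to N0 -- the implied density is 2.0 g/cc.
print(round(density_from_counts(n=100.0, n0=100.0 * math.e ** 2,
                                alpha=0.5, x=2.0), 6))  # 2.0
```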
Unfortunately, just as there is a tradeoff between count rate and sensitivity to photoelectric
effects in the setting of the lower bound of the energy window in the logging source spectrum,
so there is a tradeoff with count rate in the establishment of the source to detector spacing. For
a given density, while the formation sensitivity increases with increasing source to detector
distance, the count rate decreases. The time required to acquire sufficient counts for a
statistically accurate measurement thus
places an effective upper limit on the spacing. A significant issue in sensor design is the
process of optimizing the competing effects of increased formation sensitivity and decreased
count rate with distance.
The second property of the detectors that is affected by their spacing from the source is the
average distance of penetration of the gamma rays into the formation. This determines the
effective distance from the tool at which the formation density can be determined, a distance
that is often called the “depth of investigation”. In general, the greater the spacing of the
detector from the source, the greater will be this depth of investigation. It follows also that the
more sensitive a detector is to material farther away from the tool, the less sensitive it is to
material nearer the tool (i.e., in the borehole). The near and far detectors behave accordingly: the
far detector is more sensitive to the formation and less sensitive to the borehole; whereas, the
opposite is true for the near detector.
The result of the computation of electron density for each detector is referred to as the
apparent density. The term “apparent” is used because while the density that is computed
depends on the combined effects of the borehole fluid density and the formation density, it is
not the mathematical average of their densities. The measurement result depends on both these
densities as well as the amount of standoff, and can vary quite widely from true formation
density. In any event, the apparent density will only be equal to the true formation density if
the stabilizer blade is against the borehole wall during the complete sampling period (i.e.,
there is no standoff). The near detector, being more dependent than the far detector on the
density of the borehole material near the tool, is more severely affected than the far by any
standoff. The resulting difference in detector readings is used to generate a correction to the
far detector reading.
In the downhole logging situation, environmental corrections must often be made. In LWD
density logging, the most common is a correction for the effect of standoff from the borehole
wall. The correction that must be made is dependent on both the amount of standoff and on the
density of the drilling fluid that occupies the standoff space between the tool and wall. The
spine and rib crossplot, in which the apparent density of the far detector is plotted against the
apparent density of the near detector, is a common method for displaying the response of a
density logging tool to these effects (Figure 10). The “spine” of the cross-plot corresponds to
the calibration condition where the stabilizer blade is pressed against the borehole wall. In this
condition there is zero to minimal interference from the borehole fluid, and the apparent
densities indicated by the two detectors agree.
In the presence of standoff, however, the apparent densities indicated by the two detectors
disagree. The near detector is more severely affected than the far by the standoff from the
borehole wall. The “ribs” of the spine and rib plot indicate the trends in the detector responses
at various standoff distances from the formation. In Figure 10, the ribs for an 8 1/ 2-inch
diameter tool are shown relative to the magnesium and aluminum calibration standards. These
were determined experimentally in the 9 7/8-inch boreholes using standoff intervals ranging
from zero to 1 1/2-inch in 1/4-inch increments. The distance between the points on the ribs is a
function of the contrast between borehole fluid density and formation density. For large
contrasts produced with low density fluids (e.g., water) successive points on the rib are much
farther apart than they are on the same rib for the heavier fluids. Or, considering the case of
the same fluid in differing formation densities, note that with water in the magnesium
formation, the intervals are more closely spaced than they are with water in aluminum. This is
because there is a greater contrast in the densities of aluminum and water than there is in those
of magnesium and water.
Because of its greater formation sensitivity, the far detector is the primary density reading, and
it is to its value that a correction must be applied. For a given amount of standoff, the
correction that is required is the value that must be added to the far detector reading to obtain
the actual value of the formation density. Put another way, the near and far apparent density
values, when cross-plotted on the spine and rib picture, will fall on a given rib. The required
correction is the difference between the far detector reading and the density value
corresponding to the intersection of the rib and the spine.
The appearance of the crossplot can indicate how well the tool may perform in the logging
situation. For an accurate correction to be possible, the near detector must change much more
rapidly with standoff than the far detector. The degree of deviation of the ribs from the spine is
indicative of the degree to which the detector readings separate for a given standoff from
the borehole wall. For comparison, the published data for the rib of a competitor's first-generation density tool is shown as a broken rib on the crossplot as well (the second-generation spine and rib response has been improved somewhat). The greater separation of this tool's ribs from the spine indicates its superior ability to correct out to 1 inch of borehole standoff.
A second feature of note is that the response of the tool is very nearly independent of mud weight out to nearly 1 inch of standoff. The behavior of the rib varies significantly with mud weight only at the larger standoffs. This property allows first-order corrections for standoff to be independent of mud weight.
The actual method by which the correction for standoff is derived is illustrated in Figure 11.
This is a crossplot of the difference between the far and near apparent densities (Pfar - Pnear) on the horizontal axis against the difference between the actual density of the formation and the far apparent density (Ptrue - Pfar) on the vertical axis. The latter is called the standoff correction
(SOC) and is the amount of correction that must be added to the far detector to obtain the
correct, or true, density reading. Mathematically the SOC is the same as what is known as the "delta-rho" correction in wireline logging. However, in the wireline case, it is assumed that
the tool is always against the borehole wall, with the correction accounting for intervening
mudcake. The SOC, on the other hand, is designed assuming that standoff will be more
common than mudcake.
The correction algorithm is obtained by fitting a polynomial to the data in Figure 11. The
polynomial has the form:
SOC = a + bx + cx^2 + dx^3
where x = Pfar - Pnear.
Those who are somewhat familiar with mathematical functions can appreciate from the
symmetry of the data that the coefficients a and c will be small in comparison with b and d. In
practice, the “a” value may be forced to zero. This is the case for the results shown in Figure
11, where the equation for the fit contains no constant term. In the field software, the
implementation of the SOC is as follows. First, the apparent electron densities of the near and
far detectors are computed from the basic calibration data. Then the difference in the far and
near densities is computed. This difference is the value “x” in the equation for SOC. The
constants a, b, c, and d determined from calibration data are then used to compute the SOC to
be applied to the far detector. The corrected electron density is then:
Pe = Pfar + SOC
Finally, once this corrected value is computed the electron density is converted to bulk density
using the equation:
Pb = 1.0704 Pe - 0.188
As an example, assume a tool has the SOC calibration constants shown in Figure 11, and that
the far and near measured densities are 2.2 and 1.8 g/cm3, respectively. The difference in the
two detectors is 0.4 g/cm3, resulting in a computed SOC of approximately 0.170 g/cm3. The
corrected electron density will then be 2.37 g/cm3, and the resulting bulk density will be 2.348
g/cm3.
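The sequence of operations described above can be sketched in a few lines of code. The calibration coefficients below are hypothetical stand-ins chosen so that the worked example is reproduced to within rounding; actual values of a, b, c, and d come from each tool's calibration data.

```python
# Hypothetical SOC calibration coefficients (real values come from
# the tool's calibration data; "a" is forced to zero here, as in the
# fit shown in Figure 11).
A, B, C, D = 0.0, 0.33, 0.0, 0.60

def standoff_correction(p_far, p_near):
    """SOC = a + bx + cx^2 + dx^3, where x = Pfar - Pnear."""
    x = p_far - p_near
    return A + B * x + C * x**2 + D * x**3

def bulk_density(p_far, p_near):
    """Apply the SOC to the far detector, then convert to bulk density."""
    pe = p_far + standoff_correction(p_far, p_near)  # Pe = Pfar + SOC
    return 1.0704 * pe - 0.188                       # Pb = 1.0704 Pe - 0.188

# Worked example: far = 2.2, near = 1.8 g/cm3
soc = standoff_correction(2.2, 1.8)   # ~0.170 g/cm3
pb = bulk_density(2.2, 1.8)           # ~2.349 g/cm3
```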
Note that the sequence in which these mathematical operations are performed is somewhat
arbitrary. That is, the conversion to bulk density could be performed following the
determination of the apparent electron density for each detector, before the SOC is invoked.
The SOCs would then be made from an appropriately modified algorithm.
Figure 12 enables a rough prediction of the circumstances under which the switch from
positive to negative corrections will occur. The two lines in the figure are the locus of points
in a water-based mud and an oil-based mud (the oil is assumed to have a density of 0.7 g/cm3)
for which the formation density and the borehole fluid density as measured by the sensor
appear to be the same. The densities at which this occurs are not equivalent in g/cm3 because
of the differences in the photoelectric properties of the typical formation and barite-loaded
mud.
To see how this figure might be used, for example, consider a mud density of 13 ppg. This
cross plots with a formation density of 1.95 g/cm3. For formation densities above 1.95 g/cm3,
the SOC should be positive, while for formation densities lower than 1.95 g/cm3, it should be
negative. In the case of an oil-based mud with a fluid base of 0.7 g/cm3, the oil-based curve
can be used. If the fluid portion of the oil-based mud is significantly different, the values for
the two curves shown should be obtained and then an interpolation or, if appropriate,
extrapolation should be done to estimate the formation density “equivalent” to the mud
density.
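The interpolation suggested above can be sketched as follows. The two curve readings are assumed to have already been taken from Figure 12 at the mud weight of interest; the water-based reading of 1.95 g/cm3 comes from the 13-ppg example, while the oil-based reading of 1.80 g/cm3 is purely hypothetical.

```python
def equivalent_density(rho_base, eq_water, eq_oil):
    """Estimate the 'equivalent' formation density for an arbitrary
    base-fluid density by interpolating (or extrapolating) between the
    water-based (1.0 g/cm3) and oil-based (0.7 g/cm3) curve readings."""
    frac = (rho_base - 0.7) / (1.0 - 0.7)
    return eq_oil + frac * (eq_water - eq_oil)

# A base fluid of 0.85 g/cm3 falls midway between the two curves:
mid = equivalent_density(0.85, eq_water=1.95, eq_oil=1.80)  # 1.875 g/cm3
```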
In concluding this discussion, it is important to note that the curves in Figure 12, while derived from both model and experimental data, are themselves the results of interpolations.
There is thus some uncertainty inherent in the plot. Furthermore, the results that are obtained
in real downhole situations will depend on the standoff, particularly as the hole size becomes
much greater than the tool diameter. That is, the interpolations that were used in determining
the curves of Figure 12 depend on the “good behavior” of the ribs, and thus the results can be
expected to deteriorate substantially in large-standoff, highly enlarged borehole situations.
Depending on the statistical accuracy required, the tool sample times are from 10 to 30 seconds
in duration. Typical rotation speeds are such that a rotating tool can go through several
complete revolutions during such a sample period. The resulting count rate will be the average
for all the various standoffs experienced during that period. One might assume that the
apparent density obtained from these average counts would correspond to the appropriate
average density value for a proper correction (that is, that the apparent densities would lie on
the appropriate rib allowing an accurate SOC to be made). Unfortunately, this is not the case,
and the reason is the non-linear (logarithmic) dependence of density on the count rate. This
can be a significant factor for which an accounting must be made in LWD density
measurements.
The best way to consider the example introduced above is to modify it slightly into a "worst case". Specifically, let us assume that the blade spends half of its time in contact with
the borehole wall, and half of its time at the maximum standoff from the wall. This happens to
be a good approximation to the case of a slotted or elliptical borehole (Figure 13). Thus the
average count rate obtained while rotating in such a borehole will be midway between the
count rates measured on the spine and at the 1-inch standoff. We will assume further that the
formation has the density of aluminum, and use the aluminum rib in our spine and rib plot to
illustrate.
The discrete points plotted in Figure 14 illustrate the effect of averaging widely varying count rates as the tool rotates. The solid circular points represent the near and far crossplotted
densities that would be obtained with a motionless tool in the cases of zero and 1 inch
standoff. The open triangle is the cross-plot of the apparent densities computed from the
average count rates for these two cases, and the open box is the corrected density value that is
obtained from the SOC algorithm using the apparent densities obtained from the average count
rates. The crossplot for the average count rates does not fall on the rib defined by the non-
rotating measurements, and hence the standoff corrected density is incorrect.
There are two reasons for the error produced by rotation. One is that the rib itself
(solid line in the figure) is non-linear, particularly at higher standoffs in high contrast
situations. The average zero and maximum standoff densities for the near and far detectors
would cross plot on this rib only if it were a straight line. However, if one looks closely at the
triangle, it is evident that it is not even on a straight line drawn between the zero and 1-inch
standoff points. This happens for a mathematical reason. The tool samples two different count
rates, R1 and R2, to obtain the apparent densities for the zero and 1-inch standoff cases. The
densities computed from these count rates are proportional to the natural logarithms of each of
these count rates, ln(R1) and ln(R2). Averages of these densities would then be proportional to
the average of the logarithms, or [ln(R1) + ln(R2)]/2. However, if the tool is rotating the count
rates that are used to obtain the rotating near and far densities are the averages of the count
rates, (R1+R2)/2, and the computed apparent density is proportional to the natural logarithm of
this value. If the two values R1 and R2 are different, it is a mathematical fact that:

[ln(R1) + ln(R2)]/2 < ln[(R1 + R2)/2]

The right side of this equation is proportional to the apparent density that is measured by the
rotating tool, and the left side is proportional to the average of the individual densities. Thus, even
if the rib were linear, the crossplot of the near and far densities computed from the count rates
averages would not fall on the rib and there would be rotation induced error.
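This inequality is easy to verify numerically; the two count rates used below are hypothetical.

```python
import math

# Hypothetical count rates for the zero-standoff and maximum-standoff
# halves of the rotation.
R1, R2 = 1000.0, 400.0

# Proportional to the average of the two individual (static) densities:
avg_of_logs = (math.log(R1) + math.log(R2)) / 2
# Proportional to the apparent density computed from the averaged counts
# of the rotating tool:
log_of_avg = math.log((R1 + R2) / 2)

# Because the logarithm is concave, the log of the average exceeds the
# average of the logs whenever R1 != R2, so the rotating-tool density
# differs from the average of the static densities.
assert log_of_avg > avg_of_logs
```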
It is possible to determine a rib based on the assumption that the count rates used to compute
density are the averages over maximum and minimum standoffs. Included in Figure 14 is such a
“rib” which corresponds to the apparent densities that would be expected for “averaged” 50% zero
offset data in aluminum and 50% maximum offset data. As the maximum offset increases the
difference between this rib and the rib determined with individual, non-rotating measurements by
the tool increases. At 1/2-inch standoff with water in the borehole (interpolating in the figure) the
difference is very small; however, at 1 inch, it is about 0.1 g/cm3. This example is close to the
“worst case” for standoff ranges between zero and 1 inch, since the error is computed using water
as a borehole fluid. For lower density contrasts, either due to lower formation densities, or higher
mud weight densities, the discrepancy between this “rotating” rib and the static rib decreases.
A question one might ask is why we don’t use the rib represented by the dashed line in Figure 14
since it takes into account rotation, whereas the one that we do use is derived from static
measurements. There are several reasons that could be mentioned but the primary one is that the
fast sampling technique, which is discussed in the next section, produces count rates while rotating
that are very close to those that correspond to non-rotating conditions. It would be inappropriate to apply a SOC derived from a rotating rib to this sort of data. Thus, were such a rib to be used, it would be appropriate only for rotationally averaged data, and not for fast sample data.
Figure 15 and Figure 16 have been generated to illustrate the effect of the contrast between the drilling mud and formation densities on the errors in density introduced by tool rotation. In each figure, the standoff-corrected (SOC) density is plotted versus increasing standoff for the worst case, that is, assuming the tool spends 50% of its rotation in contact with the borehole wall and 50% in the situation of maximum standoff. In very light formations (Figure 15) the computed density remains within the
accuracy of the tool (± 0.015 g/cm3) as indicated by the shaded region for standoffs out to 1 inch
over a wide range of mud densities. However, in Figure 16, corresponding to a dense formation, the
accuracy is a strong function of mud weight and standoff. For very light muds corresponding to
high contrasts (see the water curve) the effect of rotation is significant (outside the shaded area) for
standoffs of 1/2 inch and more, but as the contrast decreases with heavier muds the effects of
standoff and rotation decrease in severity.
Before concluding this section, two points should be reiterated. First, the errors in density due
to rotation in an oversized hole as shown in the latter part of this section are estimates, and are
intended to be worst case. Second, the errors are the result of averaging the counts as the tool
is rotating in an oversized hole. These errors are not applicable when the fast sampling
technique, which is discussed in the next section, is invoked. Indeed, the fast sampling
technique removes the adverse effect of rotation on the log.
Since the spine and rib picture can be used to illustrate the behavior of the rib at large
standoffs, one might ask if a more detailed mathematical description of the rib, which would
take into account the mud weight dependence at higher standoffs, might be used to correct for
larger standoffs. In Figure 17, we sketch the behavior of a low density rib from the spine
through several inches of standoff. Moving away from the spine, the rib is initially well
behaved and nearly linear; this part is described well by the SOC algorithm. However, at
approximately 1 inch of standoff the rib begins to curve substantially, and for even larger
standoffs it actually turns around and moves back to the spine. The shape and extent of the rib
for large standoffs is a function of the mud density. Describing it mathematically requires
more complexity than is needed for smaller standoffs. More importantly, the curvature of one
rib at large standoffs can result in the actual crossing of another rib at smaller standoffs. Such
a crossing is circled in Figure 17. Thus even if ribs were correctly described, for such cross-
plot points multiple values of the corrected density would be possible -- both a high formation
density solution and a low formation density solution would be available, and some means
would have to be found to choose between them. These facts together make use of the spine
and rib approach to density correction both ambiguous and inaccurate for correcting standoffs
of more than 1 inch.
In the measurement of density when standoffs are greater than 1 inch, the basic principle that
will be used is that the best quality result is obtained when the standoff is small. Of course,
this is done all the time in wireline logging, where the tool is pressed against the borehole
wall, and an accurate log is obtained for a wide range of hole sizes. Although in LWD logging
the tool is not pressed against the borehole wall, wireline quality results may be obtained by
logging with the blade of the tool oriented to the low side of the hole. This is possible while
steering, or in measurement-after-drilling (MAD) wiper runs.
It is possible, in principle, to obtain accurate logs in large holes while rotary drilling as well,
since it is usually true that the stabilizer blade is in contact with the wall during at least part of
the rotation. However, some method must be used to discriminate data acquired when the
standoff is small from data acquired when it is large. Several approaches to this problem are
currently in use in the industry. One is to use a tool face indicator when drilling in a deviated
hole. The indicator discriminates between high and low side orientations of the stabilizer
blade. If the hole is deviated, the tool will tend to lie on the low side of the hole. Data acquired
when the orientation is to the low side will then correspond to zero or small standoffs and will
be superior for the density determination. Alternatively, the standoff may be continuously
monitored with an acoustic device. Counts acquired during small standoff would be preferred
for the density computation. Both of these approaches have some possible failure modes. The
tool face method will fail if the tool moves around the hole, which it will be likely to do in a
vertical hole, and may do in a deviated hole. The acoustic method of determining standoff
becomes more difficult with increasing mud weight, and it can fail due to cuttings loading in
the hole or because of skewed reflections of the acoustic signal from the borehole wall.
The purpose of the method described in this section is to discriminate small standoff data from large standoff data; since it is purely statistical, it has the advantage of not requiring input from additional sensors.
The discussion below begins with some general principles behind the method. The actual steps in the method are then summarized, followed by the software implementation and several examples of its application. The discussion concludes by examining features of the method in the light of competitor techniques.
General Principles
The method for correcting density measurements for large standoff is referred to as the “Fast
Sampling Technique”. The first step in the method is to divide the conventional sampling
period, which is usually between 10 and 30 seconds in duration, into a large number of shorter
(“fast”) samples. Most of the processing which then follows is associated with the task of
deciding which of these “fast samples” were acquired when the tool was near the borehole
wall, and which were acquired when the tool was further away. Once this determination is
made, average near and far count rates are determined for the samples taken near the borehole
wall, and a corrected density is computed using the SOC method as before. This method uses
the fact that the count rates of the detectors change with standoff. Count rate changes on the
time scale of the rotation of the tool are detected, and these changes are presumed to be due to
varying standoff.
To understand how discrimination is made among the samples taken at large and small
standoffs, we consider the data that are shown in Figure 18. These data were acquired in an
oversized hole in a limestone test pit using a Density tool programmed to acquire “fast
sample” data in increments of small fractions of a second. The tool was eccentered, similar to
what would be the case in a deviated or horizontal hole. Data taken with the tool stationary are
shown in the upper half of the figure. The tool stabilizer blade was oriented in the direction of
minimum standoff, which is the optimum orientation for measuring formation density. In the
lower portion of the figure similar data are shown, but with the tool rotating.
The stationary and rotating data are each presented in two ways. First the data are plotted as a
time series of counts in each successive fast sample versus increasing time. In the stationary
case, since neither the density nor the standoff are changing, the numbers of counts should be
the same for each sample. The noise that is evident in the data reflects the random nature of
the nuclear counting process. Nuclear measurements are necessarily of average count rates, computed by dividing the counts acquired in a sample by its duration (in seconds). The accuracy of such measurements increases with the total number of counts
acquired. The statistical noise of such measurements is mathematically predictable, as can be
seen in the second presentation of the data on the right side of the figure.
On the right side two curves are displayed: first, a histogram, which is the distribution of the
number of samples as a function of the counts per sample, and second, a plot of the predicted
distribution of the samples based on statistical theory. The theory of nuclear counting statistics
says that the statistical noise is dependent solely on the count rate, and as a result the
mathematical description of the distribution is a function of only one variable, the square root
of the mean counts per sample. The agreement that is evident between the histogram and the
predicted distribution is a good indication that the statistics of nuclear counting is the source of
the noise in the distribution. This would be the situation when the tool is motionless, when the
tool is not rotating and merely sliding along a uniform borehole wall, or when the tool is
rotating in a gauge hole surrounded by a formation of uniform density.
The effect of rotating the tool is clearly seen in the time series in the lower portion of Figure
18. While the statistical noise is still obvious, there is also a clear oscillation imposed on the
time series as well, resulting in changes between a minimum value, which is the same as the
average for the non-rotating situation discussed above, and a maximum value, which
corresponds to the orientation of the tool in the opposite direction, where the gamma rays pass
through the maximum amount of low density borehole fluid (water). This increased difference
between the minimum and maximum count rates results in a higher mean count rate than in
the non-rotating case. The resulting prediction of the distribution, superimposed once again on
a histogram of the distribution of the samples, is peaked at the new average and is somewhat
broader than in the case of the non-rotating prediction. However, nuclear statistics are no longer the only factor contributing to the width of the distribution of the samples. The
periodically changing count rate, which further broadens the actual distribution, produces a
substantial disagreement between the predicted and actual distributions.
It should be clear from the foregoing that a reasonable determination of whether the standoff is
or is not changing can be made downhole by comparing predicted distributions to the actual
distributions. Actually, the comparison can be made even more simply by comparing the
widths of the predicted and measured distributions. The width in counts of the predicted
distribution, measured at the point of half the maximum peak value, is simply the square root of the mean counts per sample. When counting statistics are the only cause of the noise in
the measured distribution, this is the same as the standard deviation of the measured
distribution. Both of these quantities are straightforward to compute downhole, and a
comparison of them provides a useful indication of whether fast sampling is needed.
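The width comparison can be illustrated with a small simulation. A normal approximation to Poisson counting statistics is used (reasonable at the large counts per fast sample involved), and the mean counts, modulation amplitude, and number of samples are all hypothetical.

```python
import math
import random
import statistics

random.seed(1234)

MEAN = 2500.0  # hypothetical mean counts per fast sample

def width_ratio(samples):
    """Measured width (standard deviation) divided by the predicted
    width (square root of the mean counts per sample)."""
    return statistics.pstdev(samples) / math.sqrt(statistics.fmean(samples))

# Stationary tool: counting statistics only (normal approx. to Poisson).
static = [random.gauss(MEAN, math.sqrt(MEAN)) for _ in range(4000)]

# Rotating tool in an enlarged hole: a sinusoidal standoff modulation
# is superimposed on the counting noise (period of 40 fast samples).
rotating = [random.gauss(MEAN, math.sqrt(MEAN))
            + 100.0 * math.sin(2 * math.pi * i / 40)
            for i in range(4000)]

# Counting noise alone gives a ratio near 1; the modulated counts give
# a distribution substantially broader than the prediction.
```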
The first task then, in the downhole software is to determine whether a variation in standoff is
occurring. This can be done by monitoring the ratio of the widths of the actual and predicted distributions as
discussed above. Once it is known that the standoff is varying, the low standoff data must be
isolated so that it alone may be used in the density computation. It follows that the correct
counts, that is those for which the standoff is a minimum, are those that are on one extreme of
the distribution or the other. Which extreme is a function of the relative apparent densities of
the borehole fluid and the formation. If the density of the borehole fluid is low compared to
that of the formation, such as was the case in Figure 18 where the borehole fluid was water,
then the minimum count rates should be used. If the apparent density of the borehole fluid is
higher than that of the formation, then the counts on the high count side of the distribution are
preferred since the counts can be expected to decrease with increasing standoff.
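A minimal sketch of this sample-selection logic follows; the fixed 25% fraction of samples kept is a hypothetical choice, and the actual selection criteria in the tool software may differ.

```python
import statistics

def min_standoff_counts(samples, mud_lighter_than_formation, keep_frac=0.25):
    """Average the fast-sample counts taken nearest the borehole wall.

    With a light mud (apparent density below the formation's), counts
    rise with standoff, so the low extreme of the distribution is kept;
    with an apparently denser mud, counts fall with standoff, so the
    high extreme is kept instead.
    """
    ordered = sorted(samples)
    n = max(1, int(len(ordered) * keep_frac))
    kept = ordered[:n] if mud_lighter_than_formation else ordered[-n:]
    return statistics.fmean(kept)
```

The averaged counts from the selected extreme would then feed the SOC computation in place of the conventional full-period averages.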
SONATRACH MWD MODULE
Data Acquisition Methods
• No real-time feedback
• Recorded data is not as useful for drilling mechanics data such as pressure and vibration (historical only)
• Difficult to use for pore pressure prediction and casing and coring point selection
• Impractical and very expensive to use recorded data for directional drilling and geosteering applications
Real-time Data Measurement Process
• MWD/LWD real-time data is obtained by sampling the
downhole sensors, encoding the data into a binary
format, and transmitting the data through some medium
to the surface
• The transmission is decoded at the surface, processed
into a sensor data value and associated with depth to
create real-time logs
• The process sounds simple, but it is extremely complex
and requires a combination of events to happen perfectly
for a data point to be processed
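As a much-simplified illustration of the encode/decode round trip described above, the sketch below quantizes a single sensor value into a fixed-width binary word and recovers it at the "surface". The 12-bit word size and density range are hypothetical; real telemetry frames are considerably more elaborate.

```python
# Hypothetical telemetry word: 12 bits spanning a 1.0-3.0 g/cm3 range.
BITS = 12
LO, HI = 1.0, 3.0
STEP = (HI - LO) / (2**BITS - 1)  # data resolution, ~0.0005 g/cm3

def encode(value):
    """Quantize a downhole sensor value into a BITS-wide binary word."""
    clamped = min(max(value, LO), HI)
    return round((clamped - LO) / STEP)

def decode(word):
    """Recover the (quantized) sensor value at the surface."""
    return LO + word * STEP

rho = 2.348                 # example bulk density, g/cm3
word = encode(rho)          # integer word sent up the telemetry channel
# decode(word) differs from rho only by the quantization step,
# illustrating why real-time data resolution is a source of error.
```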
Real-time Telemetry Methods
Diameter
  Pulser . . . . . . . . . . . . . 2.64 in. (67 mm)
  Pressure housing . . . . . . . . 2.24 in. (57 mm)
Weight . . . . . . . . . . . . . . 180 lb (81.6 kg)
Tool configuration . . . . . . . . complete MWD tool fits in pulser sub/NMDC/orienting sub combination
Adjustment to NMDC length . . . . by means of MWD extension bars