
3D Speckle-Tracking

Sub-voxel Techniques and Tracking Accuracy

Ingvild Haraldsen Bostad

Master of Science in Communication Technology


Submission date: June 2006
Supervisor: Tor Audun Ramstad, IET
Co-supervisors: Hans Torp, ISB; Jonas Crosby, ISB; Svein Arne Aase, ISB

Norwegian University of Science and Technology
Department of Electronics and Telecommunications

Problem Description
A number of sub-pixel and sub-voxel techniques have been proposed in the literature, in order to overcome the limited lateral resolution of 2D and 3D speckle-tracking in medical ultrasound images. The aim of this project is to investigate how the different techniques affect the accuracy of 3D speckle-tracking. In this project, the student should implement a selection of these techniques, and quantify the resulting tracking errors.

Assignment given: 16 January 2006
Supervisor: Tor Audun Ramstad, IET

Abstract

Speckle tracking is a useful method for examining myocardial function. It allows the cardiologist to examine left ventricular wall function when there are indications of a myocardial infarction, and it is well developed and in diagnostic use for 2D cardiac ultrasound imaging. In this work, different techniques for sub-sample interpolation are investigated and tested. Sub-sample methods can be useful because they allow the tracking algorithm to track smaller displacements. The assessment is based on four-dimensional ultrasound images of a tissue phantom, with time being the fourth dimension. Tracking in 3D ultrasound images makes it possible to follow points that move in and out of a 2D imaging plane during a heart cycle. Good tracking in several dimensions demands ultrasound images with sufficiently good resolution in both the lateral and axial directions; this is needed to track sub-pixel displacements. To accomplish this, one needs techniques for sub-pixel resolution, and one needs to know how these affect the tracking quality. The results indicate that interpolation improves the tracking accuracy. This holds for all the interpolation methods tested, but is not consistent across the different depths in the ultrasound images, nor across the different ultrasound images.


Preface
This report is the result of the Master's thesis project at the Department of Electronics and Telecommunications at the Norwegian University of Science and Technology. The work was carried out at the Department of Circulation and Medical Imaging under the supervision of Professor Hans Torp and PhD student Jonas Crosby. I would like to thank my supervisors Jonas Crosby and Professor Hans Torp for sharing their knowledge and guiding me through this process. I would also like to thank Svein Arne Aase for helping me with the laboratory work. Last, but not least, I would like to express my gratitude to my fellow master students for creating a positive and inspiring environment.

Trondheim, 26. June 2006

Ingvild Haraldsen Bostad


Contents
1 Introduction 1

2 Theory 3
2.1 The heart 3
2.1.1 Anatomy and Physiology 3
2.1.2 Myocardial Infarction 4
2.2 Ultrasound Imaging 5
2.2.1 The Ultrasound Imaging System 5
2.2.2 2D Ultrasound Imaging 7
2.2.3 3D Ultrasound Imaging 9
2.3 Image Quality 12
2.3.1 Spatial Resolution 13
2.3.2 Contrast resolution 14
2.4 Speckle Tracking 15
2.4.1 Speckle 16
2.4.2 3D Speckle Tracking 17
2.4.3 Quality Measurements 18
2.5 Spherical/Cartesian Coordinates 19
2.6 Interpolation 20

3 Sub Pixel Methods 23

4 Methods 29
4.1 Tools 29
4.1.1 Matlab 29
4.1.2 GcMat 29
4.1.3 Vivid7 31
4.1.4 Robot Arm 32
4.1.5 Ultrasound Phantom 32
4.2 Experimental Setup 33
4.3 Speckle Tracking Algorithm 36
4.4 Quality Measurements 38

5 Results 41
5.1 Kernel size and Search area 41
5.2 Interpolation of kernel and search area 43
5.2.1 Nearest Neighbour Interpolation 43
5.2.2 Linear Interpolation 45
5.2.3 Spline Interpolation 47

6 Discussion 49
6.1 Size of Search Area and Kernel Region 49
6.2 Robot 49
6.3 Ultrasound Image Recordings 50

7 Conclusion 51
7.1 Conclusion 51
7.2 Further work 51

References 53

A Robot Arm Code 57

B beam2cart 61


List of Figures
2.1 Figure of the heart 4
2.2 Myocardial Infarction 5
2.3 Image data storage and display format 6
2.4 Beamspace - Scan Converted 7
2.5 Scan Conversion 8
2.6 Transducer array formats 9
2.7 3D ultrasound probe 10
2.8 3D Full volume 12
2.9 Parameters for resolution in the lateral and elevation direction 14
2.10 2D speckle tracking example 15
2.11 Speckle 16
2.12 Interference pattern 17
2.13 3D speckle tracking 19
2.14 The spherical coordinates 20
2.15 Interpolation visualization 22
3.1 Optical Flow Visualization 24
4.1 The GcMat user interface, GcViewer 30
4.2 The GE Vingmed Vivid7 ultrasound scanner 31
4.3 The ultrasound phantom 32
4.4 The experimental setup 34
4.5 The experimental phantom setup 34
4.6 Azimuth Image 35
4.7 Elevation Image 36
4.8 Angular Image 37
5.1 Small kernel size 41
5.2 Medium kernel size 42
5.3 Large kernel size 42
5.4 Azimuth, nearest neighbour 43
5.5 Elevation, nearest neighbour 44
5.6 Angular, nearest neighbour 44
5.7 Azimuth, linear 45
5.8 Elevation, linear 46
5.9 Angular, linear 46
5.10 Azimuth, spline 47
5.11 Elevation, spline 48
5.12 Angular, spline 48

List of Tables
4.1 Ultrasound images 33
4.2 Interpolation parameters 38
4.3 Kernel and search area parameters 39
4.4 Interpolation parameters 39

Abbreviations
1D        One dimensional
2D        Two dimensional
3D        Three dimensional
4D        Four dimensional
A-mode    Amplitude mode
B-mode    Brightness mode
A/D       Analog to Digital
CW        Continuous Wave Doppler
DTI       Doppler Tissue Imaging
ECG       ElectroCardioGram
FR        Frame Rate
GE        General Electric
GUI       Graphical User Interface
M-mode    Motion mode
MLA       Multi Line Acquisition
pixel     Picture Element
psf       Point Spread Function
RF        Radio Frequency
SAD       Sum of Absolute Differences
SP        Service Pack
SSD       Sum of Squared Differences
subpixel  small part of a Picture Element
subvoxel  small part of a Volume Element
US        Ultrasound
voxel     Volume Element

Chapter 1

Introduction
Ultrasound has become a standard clinical tool in the diagnosis and treatment of illness and injury. It is a non-invasive and non-radiative method for creating images of different organs, and it has no known negative side effects. The best known uses of ultrasound are probably cardiac and foetal imaging. Ultrasound has the advantage of being relatively easy to use, and it has many areas of application in addition to the two mentioned above.

Ultrasound in cardiac imaging has many applications; among other things, it is possible to measure blood flow velocity and tissue movement, for example with Doppler techniques. The response of the tissue is a function of its elasticity, which is directly related to the healthiness of the tissue. If a person experiences a myocardial infarction, a part of the myocardium dies. This part of the myocardium will not be able to contract with the same efficiency as before the ischemia occurred, so after a myocardial infarction the response of the tissue is deteriorated. Over the past years there have been several preferred methods for assessing myocardial function, including wall-motion analysis and quantitative echocardiography techniques such as integrated backscatter, automatic border detection and Doppler tissue imaging (Leitman et al. 2004). After a myocardial infarction it is important to examine the patient's heart function so that the affected area of the heart can be identified. The fact that tissue elasticity is related to the healthiness of the tissue is exploited in speckle tracking. Speckle tracking follows the unique speckle pattern in the patient's heart from frame to frame to find how large an area of the heart is affected by the ischemia. The speckle pattern remains relatively constant from one frame to the next, and hence it is possible to track these patterns using a pattern matching technique (Bohs et al. 2000).
Over the last few years, different pattern matching techniques have been described and tested, among them different correlation methods, the sum of squared differences (SSD) and the sum of absolute differences (SAD) (Ramamurthy & Trahey 1991).

The purpose of this paper is to present different techniques for sub-pixel resolution and to investigate how different interpolation techniques affect the performance of the speckle tracking algorithm. Different methods for sub-pixel resolution have been implemented and tested. This paper is a continuation of the work done in the 9th semester of the Master of Science study at NTNU, which was a performance analysis of an existing algorithm for 3D speckle tracking on simulated ultrasound images (Bostad 2005).

This paper starts out with a literature review of different sub-pixel methods. Then an introduction to the physiology and anatomy of the heart follows, before some background theory on medical ultrasound imaging and speckle tracking is presented. This is included to give the reader a better understanding of the following contents of this paper. The algorithm is then described, before it is tested with the different interpolation techniques. In the last sections, the results are presented, analysed and discussed.
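The block-matching idea behind the SAD and SSD criteria can be sketched in a few lines. The thesis implementation works on 3D volumes in Matlab; the following is a minimal 2D Python/NumPy illustration, where the function name, kernel size and search range are chosen for the example, not taken from the thesis:

```python
import numpy as np

def track_block_sad(frame_a, frame_b, center, kernel=(9, 9), search=4):
    """Estimate the displacement of the speckle pattern around `center`
    by exhaustive block matching with the sum of absolute differences (SAD)."""
    ky, kx = kernel[0] // 2, kernel[1] // 2
    cy, cx = center
    # reference kernel region from the first frame
    ref = frame_a[cy - ky:cy + ky + 1, cx - kx:cx + kx + 1].astype(float)

    best_offset, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # candidate block at the same location shifted by (dy, dx)
            cand = frame_b[cy + dy - ky:cy + dy + ky + 1,
                           cx + dx - kx:cx + dx + kx + 1].astype(float)
            sad = np.abs(ref - cand).sum()
            if sad < best_sad:
                best_sad, best_offset = sad, (dy, dx)
    return best_offset
```

With SSD, the absolute difference is simply replaced by the squared difference; without sub-pixel interpolation, the estimate is limited to whole-pixel offsets, which is the limitation the sub-sample methods in this work address.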

Chapter 2

Theory
This chapter provides some background theory for the reader who is unfamiliar with the anatomy and physiology of the human heart, ultrasound imaging, speckle tracking and interpolation. First, a short introduction to the anatomy and physiology of the heart is included; this is required to understand the need for the speckle tracking method and its importance in heart diagnostics. Next, some basic ultrasound imaging theory is described. Then, the speckle tracking method and algorithm are introduced. Finally, some interpolation theory is included. Unless otherwise specified, the information in this chapter is from either Menneskekroppen (Bjålie et al. 2001) or Human Physiology (Vander et al. 2001).

2.1

The heart

In this section I will provide some background information about the anatomy and physiology of the heart, and also explain what happens during a myocardial infarction (Vander et al. 2001).

2.1.1

Anatomy and Physiology

The heart is mainly composed of cardiac muscle cells. These are striated muscle cells, and collectively they form the myocardium. The cells of the myocardium are organized in layers that are tightly bound together. The pericardium is the fibrous sac that encloses the heart in the thorax. In an adult human the heart is about the size of a fist and weighs about 300 grams. The human heart is divided into two halves, the left and the right, and each half can be divided into two chambers: the atrium and the ventricle, see Figure 2.1.


Figure 2.1: The interior of the heart (Patton & Thibodeau 2003).

The myocardium has varying thickness depending on which part of the heart one looks at. This is because the pressures inside the different chambers vary: higher pressure is needed to transport the blood into the systemic circulation than into the pulmonary circulation. Hence, the left ventricle has the thickest myocardium, because it pumps the blood into the aorta, which distributes the oxygen-rich blood to the body. The atria are the thinnest chambers because they have the lowest pressure. The myocardium does not extract oxygen from the blood within the atria and ventricles; it relies on its own blood supply, which it gets from the coronary arteries that branch from the aorta. The inner surface of the heart is called the endocardium, and the pericardium is the sac in which the heart is enclosed.

2.1.2

Myocardial Infarction

During a myocardial infarction the blood flow through the coronary arteries is insufficient. This can cause severe damage to the myocardium in the affected region, and it can cause parts of the myocardium to die. The severity of the myocardial infarction depends on how large a part of the ventricle dies. When a part of the myocardium dies, the thickness of the myocardium is reduced, and so is the heart's ability to pump blood to the body. If too large a part of the left ventricle dies, the patient will not survive the myocardial infarction.


Figure 2.2: Myocardial Infarction (HeartPoint 2005).

2.2

Ultrasound Imaging

Ultrasound imaging is a non-invasive and non-radiative method for producing images of different parts of the human body. The method is real-time and inexpensive. It is used in many applications, such as obstetrics, heart diagnostics, vascular imaging and abdominal imaging. The earliest known attempt to scan a human organ with ultrasound was made by two Austrian brothers, Karl Theo Dussik and Friedrich Dussik, in the early 1940s (Woo 1998-2002). The early version of ultrasound imaging of the heart, also known as echocardiography, was first developed by Dr I. Edler and Professor C. H. Hertz in Sweden in 1953 (Szabo 2004). Ultrasound images are made from high-frequency sound waves that are sent into the body by a transducer and reflected back to it. The reflected signals are then processed into the appropriate image mode. The frequencies used are usually in the range 2-10 MHz; in echocardiography the typical frequency is 2.5 MHz (Angelsen & Torp 2000).

2.2.1

The Ultrasound Imaging System

The ultrasound imaging system is divided into four main modules: the front end, the mid processors, the display and control unit, and the user interface (Vingmed 2001). The front end is the gateway for signals going in and out of the selected transducer through the beamformers. This is where the steering and focusing of the ultrasound beam is made possible by adding and varying time delays to the transducer elements. The reflected signals from the different elements in the transducer are added in the beamformer, and the final summed ultrasound data is called RF data (Vingmed 2001). The mid processors consist of digital signal processing modules which perform the appropriate signal processing for the different data types. This is where the RF data is converted to scanline data through IQ demodulation. The display and control module performs the scan conversion of the data so that it is in a presentable form for display: the scanline data, stored in a rectangular format, is converted to the sector (polar) geometry in which it was acquired, for proper display on the monitor. The user interface consists of the keyboard, switches, printer and monitor, i.e., all in/out devices; this is the interface between the machine and the user.

Scan Conversion

Scan conversion is the process that converts the RF data to a format that is suitable for display. The image data is stored in a rectangular format, but for display it is preferable to use the same geometrical form in which it was acquired, e.g., a sector shape (Vingmed 2001), see Figure 2.3.

Figure 2.3: Rectangular storage format vs sector display format (Vingmed 2001).

This is done by converting the x and y coordinates of each display pixel to r and theta coordinates in the angular format. The first step in the scan conversion process is to reduce the dynamic range, from as much as 120 dB down to 55-60 dB (Szabo 2004). Next comes the IQ demodulation process, in which the beamformed digitized signals are converted to IQ components, I being the in-phase (real) component and Q the quadrature component. The following step is envelope detection, in which the I and Q components are combined to obtain the analytic envelope of the signal through the operation sqrt(I^2 + Q^2) (Szabo 2004). Then follows an amplification stage that achieves further dynamic range compression. To make the detected, amplified and resampled data presentable on a TV screen or PC monitor, it is necessary to spatially remap the data points. The sector scan format is a difficult format to convert to TV format: the scan lines rarely intersect the pixel locations, so one has to find a way to align them, see Figure 2.5. The value of a pixel depends on the values of several surrounding samples. This is accounted for by interpolation: four neighbouring samples, two in the r direction and two in the theta direction, are simultaneously read out of memory and weighted to calculate the pixel value (Vingmed 2001, Szabo 2004). The near field of a sector scan image contains far too many stored horizontal samples, while the far field does not have enough. This problem is solved by discarding samples during interpolation in the near field and generating more samples through interpolation in the far field.

Figure 2.4: Left: beam space image. Right: beam space data converted to scan format (Løvstakken 2005). A visualization of Figure 2.3.
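The four-neighbour weighting described above can be sketched in code. The following Python/NumPy sketch is a simplified illustration, not the scanner's actual implementation; the function name, output grid and sector geometry are chosen for the example. For each display pixel, its (r, theta) coordinates are computed, and the four surrounding beam-space samples, two in r and two in theta, are combined with bilinear weights:

```python
import numpy as np

def scan_convert(beam_data, r_max, theta_max, out_shape=(200, 200)):
    """Convert beam-space data beam_data[r_index, theta_index] to a Cartesian
    image by bilinear interpolation of the four surrounding samples."""
    n_r, n_t = beam_data.shape
    ny, nx = out_shape
    # Cartesian grid covering the sector (probe at origin, centre beam along +y)
    x = np.linspace(-r_max * np.sin(theta_max), r_max * np.sin(theta_max), nx)
    y = np.linspace(0.0, r_max, ny)
    X, Y = np.meshgrid(x, y)
    R = np.hypot(X, Y)
    T = np.arctan2(X, Y)                       # angle from the centre beam

    # fractional sample indices in beam space
    ri = R / r_max * (n_r - 1)
    ti = (T + theta_max) / (2 * theta_max) * (n_t - 1)
    inside = (ri <= n_r - 1) & (ti >= 0) & (ti <= n_t - 1)

    # indices of the four neighbours: two in r, two in theta
    r0 = np.clip(np.floor(ri).astype(int), 0, n_r - 2)
    t0 = np.clip(np.floor(ti).astype(int), 0, n_t - 2)
    fr, ft = ri - r0, ti - t0
    img = ((1 - fr) * (1 - ft) * beam_data[r0, t0]
           + fr * (1 - ft) * beam_data[r0 + 1, t0]
           + (1 - fr) * ft * beam_data[r0, t0 + 1]
           + fr * ft * beam_data[r0 + 1, t0 + 1])
    return np.where(inside, img, 0.0)          # black outside the sector
```

Note that the loop over pixels is replaced by vectorized index arrays; the per-pixel logic is exactly the lookup-and-weight scheme described in the text.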

2.2.2

2D Ultrasound Imaging

Two-dimensional ultrasound images are the kind of images most people are familiar with today. 2D ultrasound images are acquired by sweeping the ultrasound beam transversely over the image field (Angelsen & Torp 2000): the transducer sends out a pulse and waits for the echo to return, and then a new pulse is sent out to generate the next line in the image. There are different transducer designs for acquiring 2D ultrasound images: linear array transducers, curved linear transducers and phased array transducers. Linear transducers are used for imaging a narrow sector in the near field, such as the carotid arteries. Curved linear transducers are suited for imaging the abdominal area because they create a wide sector. Phased array transducers are typically used for cardiac imaging because they can create a wide field of view at depth with a small-footprint transducer; the transducer needs to image between the ribs, and the opening between two ribs is approximately 20 mm (Vingmed 2001). 2D ultrasound imaging is widely used to assess cardiac function.

Figure 2.5: A: Sector scan display, image vectors. B: Magnified view comparing vector data in polar coordinates to rectangular pixel positions (Szabo 2004).

Figure 2.6: The figure shows the three scan formats for the different transducer types described above: a phased array format (left), a linear array format (middle) and a curved linear array format (right).

2.2.3

3D Ultrasound Imaging

Three-dimensional ultrasound imaging was proposed as early as the fifties (Angelsen & Torp 2000). In spite of this, it is still a relatively new feature of medical ultrasound imaging, due to the need for powerful computers to do the computations. The development of high-capacity computers has been rapid over the last decade, and hence 3D ultrasound has become a clinical feature. The main goal of 3D ultrasound imaging is the ability to present anatomical information in real time and in a more user-friendly format. 3D ultrasound is mostly used in obstetrics and for early detection of tumours, i.e., for applications that do not require true real-time 3D. For foetal imaging, surface-rendered imaging is used (Szabo 2004). True real-time 3D imaging is much more challenging than 2D imaging because it involves 2D arrays with thousands of elements. It also requires an adequate number of channels to process and beamform the data. The acquisition time required and the amount of data processed are serious challenges (Szabo 2004). The use of 3D ultrasound for real-time imaging of the heart is a complex task. To produce 3D ultrasound images of the heart one needs a stable heart rate, and one has to take the heart's position in the chest into account as well. The consequence is that it might take several minutes to acquire good enough 3D data from the heart (Angelsen & Torp 2000).

The first 3D images were made by joining several 2D images acquired by rotating or moving the probe linearly (Holm 1999). Nowadays, 3D imaging requires the use of special probes and software to acquire and render the images, and with the use of high-capacity computers the processing can be done in fractions of a second rather than minutes.

Figure 2.7: Transducer for 3D imaging.

The 3D imaging process consists of three steps: acquisition, volume rendering and visualization. During acquisition the transducer array creates a scan plane at every time instant, and the position data must be saved in memory at all times in order to reconstruct the image into a sector format for display on the monitor. To create a volume image of the heart one needs enough frames from the same point in the cardiac cycle, which makes it necessary to synchronize the data acquisition to the ECG, M-mode or Doppler signals. The data then needs to be converted into a format usable for display on a monitor; this process is in essence the same as the scan conversion process described in Section 2.2.1. The data is interpolated into a volume, and the voxel positions are mapped onto the spatial domain. Volume rendering is a technique that produces surface-like images of the internal anatomy. It is most commonly used to display images of foetuses, because these are the easiest to distinguish from the surrounding environment. The visualization step consists of a software package that presents the data in an interactive way (Szabo 2004).

Signal Processing Challenges

There are some signal processing challenges in real-time 3D medical ultrasound, which need to be overcome for 3D to replace 2D as the most common type of ultrasound imaging. One problem is that of having a transducer with 2000-10000 elements: ordinary 2D ultrasound images are acquired using 1D transducer arrays with 48-192 elements, while acquiring 3D images in near real time requires 2D arrays with close to the square of that number of elements (Holm 1997). Another problem is the frame rate (FR). In 3D the FR is so low that real-time acquisition is impossible unless some form of parallelism is exploited (Holm 1997). There are various ways of achieving parallelism: multiple receive beams, coded transmit excitation and limited-diffraction beams. Holm (1997) discusses these methods in his paper. If the transmit beam is made a little wider than usual, one can use a receive beamformer with several parallel beams to acquire beams at slightly offset angles. For cardiac imaging this would still give an unsatisfactory volume frame rate, but for stationary organs the method is satisfactory. The coded transmit pulse method sends out one individually beamformed signal for each unique direction. This allows 2D data acquisition at the same rate as 1D data acquisition. It would, however, need a new transducer design with the ability to send coded pulses in many directions simultaneously, and it would also need to be validated in a medium which is aberrating and attenuating. Limited-diffraction beams can be expressed as single-frequency beams with a Bessel transverse profile (Cheng & Lu 2006). They were first described as non-diffracting, or diffraction-free, beams, but as every beam eventually diffracts, they were renamed limited-diffraction beams by Cheng & Lu (2006). The limited-diffraction beams method sums different limited-diffraction beams to give array beams. These can be used together with transmission of plane waves, where the array beams are used for reception. Since a single plane wave is used, it is not possible to steer the beam, which is the main disadvantage of this method and limits its application to linear arrays. The advantage of limited-diffraction beams is that they have a very large depth of field even though they are produced with finite aperture and energy.

Full Volume

The full volume data acquisition method is a way to obtain images of a larger volume without having to reduce the resolution. The data acquisition is triggered by the electrocardiogram (ECG), at the QRS complex. This activates ECG-triggered sub-volume acquisition, which enables the acquisition of a larger volume without compromising the resolution by combining several sub-volumes acquired over two to six heart cycles, see Figure 2.8. The process then repeats, replacing the oldest sub-volume, once acquisition has run for the number of heart cycles initially decided. ECG-triggered acquisition may introduce artifacts caused by motion of the probe, movement of the patient during acquisition, or an irregular heart rate. The best images are recorded if the patient is able to hold his or her breath during the data acquisition (GE Vingmed 2005).


Figure 2.8: Full volume ultrasound data acquisition.

Multiple Line Acquisition

The multiple line acquisition (MLA) technique is another method to increase the temporal resolution. The MLA technique receives multiple beams for every beam that is sent out; in 3D imaging one can, for instance, receive two beams in the azimuth direction and two beams in the elevation direction. The MLA technique can be used to increase the frame rate (FR) while preserving the resolution, or to increase the resolution while maintaining the FR. It is also possible to combine these two properties to obtain a compromise.
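The frame rate constraint behind MLA can be illustrated with a small calculation: each transmitted beam must travel to the maximum depth and back before the next transmit, so the volume rate is fixed by the depth and the number of transmit events. The numbers below (15 cm depth, a 64 x 64 beam grid, 4-MLA) are illustrative, not values from the thesis:

```python
C = 1540.0  # speed of sound in soft tissue [m/s]

def volume_rate(depth_m, n_azimuth, n_elevation, mla=1):
    """Volumes per second for a sector acquisition: each transmit needs a
    round trip of 2*depth/c, and MLA receives `mla` beams per transmit."""
    t_line = 2 * depth_m / C                      # round-trip time per transmit
    n_transmits = n_azimuth * n_elevation / mla   # transmit events per volume
    return 1.0 / (n_transmits * t_line)

rate_1 = volume_rate(0.15, 64, 64, mla=1)   # about 1.25 volumes per second
rate_4 = volume_rate(0.15, 64, 64, mla=4)   # about 5.0 volumes per second
```

This shows why plain sequential scanning is far too slow for real-time cardiac 3D, and why receiving four beams per transmit quadruples the volume rate (at the cost of a wider transmit beam).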

2.3

Image Quality

To acquire images from deep inside the body one uses low frequencies, while to improve the resolution of the images one uses high frequencies. Hence, a wide range of frequencies is used to acquire the best possible image of the chosen organ. Image resolution increases with frequency, but there is a simultaneous decrease in the penetration of the ultrasound into tissue (Webb 2003). The resolution in an ultrasound image is a function of the F-number used during transmission and reception (GE Healthcare 2003). Image quality can be described by two main factors: spatial and contrast resolution. Spatial resolution covers the lateral, radial and elevation resolution. Contrast resolution describes the ability to detect small variations in the intensity of the back-scattered signal from targets that are close to each other (Angelsen & Torp 2000). The sector width of an ultrasound image affects the lateral resolution and is also related to the FR. The lateral resolution is reduced as the number of lines in the image is reduced. This also affects the time needed to acquire the image: the fewer the lines, the less time it takes (Støylen 2005a). The temporal resolution in an ultrasound image is limited by the sweep of the beam. To increase the temporal resolution one can reduce the number of beams in a sector, or one can decrease the sector angle. Both of these methods have a drawback: decreased lateral resolution and decreased image field, respectively (Støylen 2005a).

2.3.1

Spatial Resolution

Spatial resolution can be described as a spatial smearing in the image of a small target. It is given by the minimum scatterer spacing at which the system is able to distinguish between two closely spaced scatterers. Laterally it is determined by the width of the main lobe of the beam, and radially it is determined by the length of the transmitted pulse.

Radial resolution

Radial resolution is the resolution along the beam, also known as axial resolution. It is proportional to the length of the transmitted pulse and inversely proportional to the frequency (Angelsen & Torp 2000):

R = c * Ts / 2    (2.1)

In the above equation c is the speed of sound, assumed to be 1540 cm/s in soft tissue, and Ts is the length of the transmit pulse. The radial resolution will improve if the pulse length is short, but a short transmit pulse will also result in a greater bandwidth in the frequency plane. Azimuth- and elevation resolution/Lateral Resolution The azimuth- and elevation resolutions are resolutions transverse to the beam. These are both determined by the beam width. The lateral resolution decreases with depth with the use of sector probes. It could be expressed by the Rayleigh criteria (Angelsen & Torp 2000): L= Fc = f# Df (2.2)

F is the focal length, i.e., the distance from the transducer to the focal point, c is the speed of sound, f is the transmit frequency, D is the size of the aperture and λ is the wavelength. See Figure 2.9 for an illustration of the different parameters. The ratio between the focal length and the size of the aperture is known as the f-number:

f# = F / D    (2.3)

The Rayleigh criterion is often referred to as the measure of the resolving power of the ultrasound system. It is given by f#·λ, and is often used as the measure of the width of the main lobe.

Figure 2.9: This figure shows the parameters described in equation 2.2.
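As a concrete illustration of equations 2.1-2.3, the resolutions can be computed numerically. The following is a small Python sketch (the thesis itself uses MatLab); the example parameters, a 2.5 MHz transmit frequency, a two-cycle pulse, an 8 cm focus and a 2 cm aperture, are assumed values chosen only for illustration.

```python
# Illustrative sketch (not from the thesis): axial and lateral resolution
# from equations 2.1-2.3, using assumed example parameters.

C = 1540.0  # speed of sound in soft tissue [m/s]

def axial_resolution(pulse_length_s):
    """Equation 2.1: R = c*Ts/2."""
    return C * pulse_length_s / 2.0

def lateral_resolution(focal_length_m, aperture_m, frequency_hz):
    """Equation 2.2: L = F*c/(D*f) = f_number * wavelength."""
    f_number = focal_length_m / aperture_m   # equation 2.3
    wavelength = C / frequency_hz
    return f_number * wavelength

# Example: 2.5 MHz transmit, two-cycle pulse, 8 cm focus, 2 cm aperture.
f0 = 2.5e6
Ts = 2.0 / f0                            # two periods of the carrier
R = axial_resolution(Ts)                 # 0.616 mm
L = lateral_resolution(0.08, 0.02, f0)   # f# = 4, lambda = 0.616 mm -> 2.464 mm
print(f"axial {R*1e3:.3f} mm, lateral {L*1e3:.3f} mm")
```

Note how the same wavelength term appears in both numbers: halving the frequency doubles both the axial and the lateral cell size, which is the trade-off against penetration described above.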

2.3.2 Contrast resolution

Contrast resolution is sometimes referred to as the dynamic range of the ultrasound image. It refers to the ability to show echo-free areas in the body, e.g., a fluid-filled cyst, as black spots in the image, and can be described as the ratio between the amplitude of the signal around the cyst and the signal from sidelobes and reverberations inside the cyst. Signal-generated noise, caused by side lobes and reverberations, is the main limitation of the contrast resolution. Electronic noise in the receiver is also a limitation (Angelsen & Torp 2000). Contrast resolution can also be described as intensity resolution. It can be improved by increasing the magnitude of the transmitted sound wave, allowing smaller fractional changes in received pulses to be detected. Intensity resolution is also limited by the storage space and speed of the ultrasound device that processes all the information.

Reverberations are multiple reflections between the transducer and the tissue interface as well as between tissue interfaces. The transducer will receive a signal that corresponds to a location beyond the farthest edge of the tissue layout. In the ultrasound images this appears as artifacts, noise or ghosts (Vingmed 2001). Reverberations can cause stripe-like artifacts in the image. When imaging the heart, the body wall might give a reverberation that can be seen as a bright, non-moving area during the heart cycle (Støylen 2005b).

Side lobes are skirt-shaped around the main lobe and will pick up signals from different directions, including directions that are outside the image plane. A low sidelobe level is important for good contrast resolution. If the sidelobe level is too high, the image will show signals from other parts of the image inside a cyst, for instance. The sidelobes can be diminished by apodization, which is achieved by reducing the signal from the outer elements of the probe (Angelsen & Torp 2000).

2.4 Speckle Tracking

Speckle tracking is a method used by cardiologists to visualize the movement of the left ventricular wall during a heart cycle. By examining the wall movement in the ultrasound image, it is possible to determine how large an area is affected by myocardial infarction. The affected area is the part of the myocardial muscle that has died.

Figure 2.10: An example of speckle tracking in a 2D ultrasound image of the left ventricle (Støylen 2005a).


2.4.1 Speckle

Speckle is a natural characteristic of ultrasound images, seen as local variations in intensity. The texture of the speckle pattern does not correspond to the underlying structure being imaged. Speckle is a deterministic interference pattern, random in appearance, generated by the reflected ultrasound signal: the reflected ultrasound beams create irregular interference patterns in the ultrasound image. The interference pattern depends on the frequency and shape of the transmitted pulse, and on its beam width. As the frequency of the transmitted pulse increases, the speckle pattern becomes more finely grained (Angelsen & Torp 2000). Speckle accounts for a decrease in ultrasound image quality.

Figure 2.11: Interference between the signals from many close point targets creates speckle in the image, as seen to the right (Angelsen & Torp 2000).

The speckle pattern has a random distribution caused by the fact that the myocardium reflects the ultrasound beams differently from point to point. This gives rise to interference between the reflected ultrasound beams from neighbouring points, producing both increased and decreased signal intensity. This in turn gives the different regions of the myocardium a unique pattern, which shifts slightly from one frame to another. These patterns can be tracked using speckle tracking (Støylen 2005a). Because the heart contracts, twists and expands during a heart beat, the speckle pattern will move in and out of the 2D plane. This movement out of the 2D plane causes the speckle pattern not to repeat perfectly in a 2D image. Hence, there is a need for 3D speckle tracking, so that the tracking algorithm can track with higher accuracy.


Figure 2.12: Left: Interference pattern from two points that reflect the waves. As the amplitude increases, the reflected waves create a speckled pattern. Right: Irregular interference pattern created by randomly distributed scatterers (Støylen 2005a).

2.4.2 3D Speckle Tracking

As mentioned above, speckle tracking is a method to assess myocardial function after an infarction. It is used both as a visual aid for evaluating myocardial function and to actually measure it. The method, as it is today, is insufficient now that 3D full-volume ultrasound data is available. 2D speckle tracking cannot follow points in the heart muscle during deformation when the speckle points move in and out of the 2D plane. This is a feature that 3D speckle tracking will have.

The essence of the speckle tracking technique is to compare two consecutive frames of the image and find out where the speckle elements of interest have moved. The speckle pattern from one specific area will ideally be unique and, hence, it is possible to track this area using a matching method such as SAD. The speckle tracking method is FR sensitive (Støylen 2005a). Too low an FR will give too large a variation in the speckle pattern from frame to frame, making the tracking unsatisfactory. Then again, too high an FR will reduce the lateral resolution and, thus, result in poorer tracking in the transverse direction (Støylen 2005a). According to Støylen (2005a), the optimal FR for speckle tracking in 2D with today's equipment seems to be 50-70 frames/sec.

The speckle tracking method can be used on different data sets. It is possible to track the speckle in RF data, beam space data and scan converted data. From the reflected ultrasound signal one can extract both grey scale information and Doppler data. The disadvantage of extracting and using the RF data is that it requires large storage space and heavy computational power. Tracking in RF data has been shown feasible with use of different methods like SAD, SSD, cross correlation and normalised cross correlation (Langeland et al. 2003). Ramamurthy & Trahey (1991) conducted an experiment indicating that tracking in RF data provided improved performance. Other experiments have, however, provided conflicting results, showing inconsistency caused by an insufficient axial sampling rate relative to the transmitted frequency and the resulting quantization of the tracking grid (Bohs et al. 1995). In this work the speckle tracking has been performed on beam space data, which was then scan converted. It is possible to do speckle tracking on scan converted data as well; this is done in 2D on the GE Vingmed Vivid 7 ultrasound machine.

2.4.3 Quality Measurements

Block matching techniques can be used to track the displacement through a sequence of frames. The block matching technique uses a kernel in one frame and a larger search region in the subsequent frame, then compares the kernel to all possible displacements within the search region to find the most similar area. The block matching technique utilizes different mathematical methods to calculate the value used to compare image blocks. These methods include the sum of absolute differences (SAD), the sum of squared differences (SSD), and the normalized and non-normalized correlation coefficient methods. The geometry of the block matching technique for a 3D image is shown in Figure 2.13. The different methods have different qualities, with the SAD method being the most widely used. The SAD method gives results of the same quality as the normalized correlation method, but needs fewer calculation steps. The SAD and the SSD give the same result, but the SSD has a larger amplitude and needs more calculation steps than the SAD. The correlation method uses a statistical comparison of the speckle pattern in two consecutive frames (Bashford & Ramm 2000).

SAD = Σ |A(i) − B(i)|    (2.4)

where A(i) are the sample values of the kernel, B(i) are the corresponding sample values of a candidate block in the following frame, and the sum runs over all samples i in the kernel. The SAD algorithm selects the candidate position with the least total difference in pixel value from frame to frame. The SAD method has been found to perform as well as normalized correlation, with the difference that the SAD method is much easier to implement and requires only a single absolute difference operation (Bohs et al. 2000).


Figure 2.13: Visualization of the block matching technique.
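The block matching procedure described above can be sketched as follows. This is an illustrative Python implementation of an integer-voxel SAD search over a 3D search region, not the thesis code (which is in MatLab); the function name and the test volume are invented for the example, and the kernel is assumed to lie well inside the volume.

```python
# Illustrative sketch (not the thesis implementation): integer-voxel SAD
# block matching in 3D, as described in section 2.4.3.
import numpy as np

def sad_track(prev_vol, next_vol, corner, kernel_size, search):
    """Find the displacement of a kernel between two volumes.

    corner      : (z, y, x) index of the kernel corner in prev_vol
    kernel_size : (kz, ky, kx) kernel extent in voxels
    search      : maximum displacement tried in each direction
    Returns the displacement (dz, dy, dx) with the smallest SAD.
    """
    z, y, x = corner
    kz, ky, kx = kernel_size
    kernel = prev_vol[z:z+kz, y:y+ky, x:x+kx].astype(np.float64)

    best, best_d = np.inf, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = next_vol[z+dz:z+dz+kz, y+dy:y+dy+ky, x+dx:x+dx+kx]
                if cand.shape != kernel.shape:   # candidate outside the volume
                    continue
                sad = np.abs(kernel - cand).sum()  # equation 2.4
                if sad < best:
                    best, best_d = sad, (dz, dy, dx)
    return best_d

# Usage: shift a random volume by one voxel and recover the displacement.
rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))
shifted = np.roll(vol, shift=(1, 0, 1), axis=(0, 1, 2))
print(sad_track(vol, shifted, corner=(6, 6, 6), kernel_size=(4, 4, 4), search=2))
```

The triple loop makes the cost of enlarging the search region cubic in 3D, which is why the search region is bounded by a maximum-velocity parameter in practice.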

2.5 Spherical/Cartesian Coordinates

Cartesian coordinates are rectilinear two-dimensional or three-dimensional coordinates, also called rectangular coordinates. The three axes of three-dimensional Cartesian coordinates, the x-, y- and z-axes, are chosen to be linear and orthogonal. The symbols r, θ and φ denote the radial, azimuth and elevation coordinates respectively. The following equations give the relation between the spherical and the Cartesian coordinates; they are used to calculate the translation in the x, y and z directions:

x = r cos θ sin φ    (2.5)

y = r sin θ sin φ    (2.6)

z = r cos φ    (2.7)

r = √(x² + y² + z²)    (2.8)


Figure 2.14: A visualization of the spherical coordinates (Weisstein 2005).
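Equations 2.5-2.8 translate directly into code. The sketch below is illustrative Python only (the thesis work uses a MatLab routine, beam2cart, for the actual beam-space conversion): it maps a spherical sample position to Cartesian coordinates and checks the radius against equation 2.8.

```python
# Illustrative sketch (not the thesis' beam2cart routine): converting a
# spherical sample position (r, theta, phi) to Cartesian coordinates with
# equations 2.5-2.7, then recovering the radius with equation 2.8.
import math

def spherical_to_cartesian(r, theta, phi):
    """theta: azimuth angle, phi: elevation angle (both in radians)."""
    x = r * math.cos(theta) * math.sin(phi)   # equation 2.5
    y = r * math.sin(theta) * math.sin(phi)   # equation 2.6
    z = r * math.cos(phi)                     # equation 2.7
    return x, y, z

# Round trip: the recovered radius (equation 2.8) matches the input radius.
x, y, z = spherical_to_cartesian(50.0, math.radians(10), math.radians(80))
r = math.sqrt(x * x + y * y + z * z)
print(round(r, 6))  # 50.0
```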

2.6 Interpolation

Interpolation can be described as a combination of two or more data values to produce intermediate data values by various kinds of rules (Angelsen & Torp 2000). Spatial interpolation of the ultrasound data is needed to investigate whether the tracking algorithm will follow the speckle pattern more accurately than without interpolation. Classical interpolation can be expressed as

x(t) = Σ_k x_k · φ(t − k)    (2.9)

where the x_k are the samples of the function and φ(t) is the basis function. The interpolation is done in two steps: first the coefficients are calculated, then the interpolation itself is performed.


Nearest
Nearest neighbour interpolation determines the grey level from the pixel closest to the specified input coordinates, and assigns that value to the output coordinates. This method is considered the most efficient in terms of computation time. Because it does not alter the grey level values, nearest neighbour interpolation is preferred if subtle variations of the grey levels need to be maintained. A disadvantage of nearest neighbour interpolation is that it introduces a small error into the new image: the image may be offset spatially by up to half a pixel, causing a jagged or blocky appearance if there is much rotation or scale change.

Linear
When the signals are slowly varying, or have short sampling intervals, the cubic spline method might not be necessary; in these cases linear interpolation may be sufficient. Bilinear interpolation determines the grey level from a weighted average of the pixels closest to the specified input coordinates, and assigns that value to the output coordinates. This method generates an image of smoother appearance than the nearest neighbour method. The disadvantage is that the grey level values are altered in the process, resulting in blurring or loss of image resolution. Bilinear interpolation requires three to four times the computation time of the nearest neighbour method.

Spline
The idea of spline interpolation is to model the signal as a piecewise polynomial. There are two main approaches to this interpolation method: the cubic spline method, which is the one MatLab uses, and piecewise polynomial splines. Spline interpolation of 0th order is the same as nearest neighbour interpolation, while spline interpolation of 1st order is equal to linear interpolation. As the order goes to infinity, the form of the spline basis function approaches a Gaussian. The resulting image using this method is slightly sharper than that produced by bilinear interpolation, and it does not have the incoherent appearance produced by nearest neighbour interpolation. Cubic spline interpolation requires longer computation time than the nearest neighbour method.
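The techniques can be compared on a small 1D example. The sketch below is illustrative Python (the thesis work used MatLab's interp2 on 2D data, not this code); only nearest and linear interpolation are implemented here, since spline interpolation is normally delegated to a library routine such as MatLab's 'spline' option.

```python
# Illustrative sketch (not the thesis code): upsampling a sampled signal
# by an integer factor with nearest-neighbour and linear interpolation.
import numpy as np

def upsample(samples, factor, kind="linear"):
    """Return the signal resampled on a grid `factor` times as dense."""
    n = len(samples)
    coarse = np.arange(n, dtype=float)
    fine = np.linspace(0.0, n - 1, (n - 1) * factor + 1)
    if kind == "nearest":
        # copy the closest sample; grey levels are never altered
        return samples[np.round(fine).astype(int)]
    if kind == "linear":
        # weighted average of the two closest samples; values are altered
        return np.interp(fine, coarse, samples)
    raise ValueError(kind)

x = np.array([0.0, 2.0, 1.0])
print(upsample(x, 2, "nearest").tolist())  # [0.0, 0.0, 2.0, 1.0, 1.0]
print(upsample(x, 2, "linear").tolist())   # [0.0, 1.0, 2.0, 1.5, 1.0]
```

The output shows the trade-off described above: nearest preserves the original sample values exactly but jumps in steps, while linear produces smooth intermediate values that did not exist in the input.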



Figure 2.15: A 2D visualization of the different interpolation techniques used.


Chapter 3

Sub Pixel Methods


The purpose of this chapter is to present an overview of some of the sub-pixel methods for assessing tissue motion that have appeared in the literature over the past few years. Several different techniques for the assessment of tissue motion exist. Hein & O'Brien (1993) have divided 2D motion estimation algorithms into two main groups: optical flow and block-matching algorithms. The optical flow algorithms estimate tissue motion from spatial and temporal changes in the brightness patterns of images, while the block-matching algorithms search for displacement in the brightness patterns by matching techniques. Muscle contraction has a rather complex movement pattern, with rotation, deformation and compression, and, hence, more complex optical flow techniques can be used to extract information.

Optical flow is the velocity field which warps one image into another image. It is visualized as vectors originating or terminating in pixels in a digital image sequence. The first known presentation of optical flow calculation was by Horn & Schunck (1981). Their paper gave rise to what is known as the Horn-Schunck method of estimating optical flow. They state that optical flow is the distribution of apparent velocities of movement of brightness patterns in an image. Optical flow can give important information about the spatial arrangement of the objects in the image and the rate of change of this arrangement. According to Horn & Schunck (1981), the optical flow cannot be computed at a point in the image independently of the neighbouring points without introducing additional constraints. This is because the velocity field at each image point has two components, whereas the change in the brightness pattern at a point in the image plane provides only one constraint. Figure 3.1 shows a visualization of the vector field derived from a moving image of a sphere. There is not necessarily any obvious connection between the optical flow in the image plane and the velocities of objects in 3D (Horn & Schunck 1981).


Figure 3.1: A 2D visualization of the optical flow technique (McCane et al. 2001).

Motion is easily perceived when projected onto a 2D screen, whereas motion in 3D is not as easily detected. This could be due to the fact that motion in 2D is perceived as a change in the brightness pattern, while motion in 3D can possess a constant brightness pattern. Take the case of a uniform sphere which exhibits shading because of its rugged surface; when it is rotated, the optical flow is still zero at all points in the image since the shading does not move with the surface. Also, specular reflections move with a velocity characteristic of the virtual image, not of the surface in which the light is reflected (Horn & Schunck 1981).

One application of optical flow in echocardiography is to compute the interframe velocity field of the heart (Mailloux et al. 1987). This is a method that shows the direction of motion of local areas of the cardiac muscle. The analysis of cardiac wall thickness and motion can be used to diagnose conditions such as ischemia and infarction. This method is, however, very time consuming; processing the images takes approximately 4 hours (Mailloux et al. 1987). Use of optical flow methods has both advantages and disadvantages compared to block matching methods (Hein & O'Brien 1993, de Jong et al. 1990). One advantage is that optical flow methods can obtain a more complete image of the cardiac motion, while a disadvantage is that they require strong computational capacity and for that reason are generally restricted to off-line analysis. Bertrand et al. (1989) have described a faster method of calculating motion by optical flow. This method calculates the optical flow by assuming that the brightness of a point is constant even when it moves, and that the vector of the same point is linear. By using these constraints, and assuming that the brightness changes are negligible for small interframe deformations, the method described by Bertrand et al. (1989) is faster than the one described by Mailloux et al. (1987).

Optical flow algorithms can be used to calculate blood flow velocity from the changing speckle pattern in B-scan ultrasound images. The problem here is again the extreme calculation time required to accomplish satisfactory results. For this reason, one would rather make use of block-matching algorithms to make the method more clinically useful. For an analysis to be clinically useful it is preferred to have the analysis done and the results ready as close to real time as possible. To detect motion in an image it is sufficient to subtract one B-scan image from another. The disadvantage of this method is that it is only qualitative; it gives an indication of motion, but no information about its direction. Speckle tracking methods will, however, provide some information about both the direction and the velocity of the motion. There exist different block-matching algorithms that can track the scatterers as they move from frame to frame. The block-matching methods track a kernel region over a search region and compute the similarity from one frame to the next. The mathematics in these methods are diverse. One can compute the difference between the matching kernel from one frame to the other by using methods such as correlation, sum of absolute differences, sum of squared differences or mean-squared error. Some of these methods are discussed in section 2.4.3.

To achieve as good tracking as possible, one would assume it beneficial to explore the possibilities of improving the resolution of the ultrasound image. Improved resolution would especially be useful in 3D images, where the elevation resolution tends to be limited by few beams. In general, both lateral directions in 3D ultrasound imaging could benefit from improved resolution.
Numerous techniques have been developed to improve the resolution of ultrasonic velocity estimation. Geiman et al. (2000) present a novel method for estimation of lateral sub-sample speckle motion and compare it with traditional interpolation techniques. To achieve an acceptable spatial resolution and velocity quantification it is necessary to interpolate the laterally sampled data. Multiple-dimensional velocity estimation techniques in ultrasound have been explored because of their potentially increased diagnostic value over 1D Doppler techniques. 2D techniques estimate both lateral and axial movement components. Measuring the lateral component of movement is a challenge because the lateral sampling is much coarser than the axial sampling. Axial sampling depends on the A/D conversion rate, while the lateral sampling depends on the distance between neighbouring beams.

According to Geiman et al. (2000) there are different techniques that can be used to measure the lateral component of velocity: spatial quadrature (Anderson 1998), transverse modulation (Jensen 1998), phase-sensitive tracking (Cohn, Emelianov, Lubinski & O'Donnell 1997, Cohn, Emelianov & O'Donnell 1997), weighted interpolation (Konofagou & Ophir 1998), a triple beam lens transducer method (Hein & O'Brien 1993), spectral broadening (Newhouse et al. 1987), dual-angle Doppler (Fox 1978) and speckle tracking (Trahey & Allison 1987). These are all methods mainly explored in the blood flow velocity area. In the following I will focus on different sub-sample speckle tracking methods to estimate the velocities.

Most speckle tracking methods estimate velocity by using the block matching technique SAD. This algorithm is the most common because it is easy to implement and requires only a single absolute difference operation; see section 2.4.3 for a more detailed explanation of the SAD method. To increase the lateral velocity estimation accuracy, one solution could be to record the ultrasound images using the MLA technique. This improves the lateral resolution in the ultrasound image because multiple beams are received for each transmitted beam. The MLA technique could also increase the FR, which could lead to improved velocity estimation. Disadvantages of the MLA technique are increased ultrasound scanner cost, because of increased production costs for transducers, and degraded spatial resolution because of the increased transmit beam width. Other suggested solutions to the problem of too coarse resolution and insufficient tracking include various interpolation methods for sub-sample resolution. Several different interpolation techniques have been presented for this purpose.

Many of these explore different interpolation techniques for the one-dimensional (1D) axial cross-correlation function. Foster et al. (1990) perform interpolation of the 1D axial cross-correlation function by fitting a parabola to the maximum point of the correlation function and the two surrounding points. The results from this study showed that the resulting tracking bias was directly related to the size of the resolution cell. Other 1D axial cross-correlation interpolation techniques include cosine fit (de Jong et al. 1990, Céspedes et al. 1995), parabolic fit and reconstructive methods (Céspedes et al. 1995). The results for the cosine fit and the parabolic fit showed a significant error for sampling frequencies with much stricter limits than the Nyquist limit. The reconstructive methods showed that the performance depended on the use of the maximum correlation sample point and neighbouring points. These methods are all tested for 1D axial cross-correlation functions. For multiple dimensions like 3D the before-mentioned methods can also be applied, but there might be other methods with a more favourable outcome.

Geiman et al. (2000) investigate a novel method for measuring lateral sub-sample speckle motion. This technique, called grid slopes, is tested on multi-dimensional data. The method performs interpolation on the SAD grid. The purpose is to be able to choose a point between two SAD coefficients as the most likely point to which the speckle has moved. If two neighbouring points that could be a possible match have the same value, Geiman et al. (2000) assume that the speckle pattern has moved to a point exactly midway between these two points. If one of the values is greater, one can assume that the correct new position is closer to the point with the smallest SAD value. The interpolation methods tested are the cubic spline method and the parabolic fit method. The results show that the cubic spline method performed best. Neither of the two methods performed especially well for sub-sample translations greater than 0.5 samples.

Lu et al. (2003) describe a new algorithm that makes use of the phase of the 2D complex cross-correlation for robust and efficient estimation of displacement from speckle data. The method originates from the fact that the gradient vectors produced from the magnitude of the 2D cross-correlation approach the true peak along the orthogonal to the zero-phase contour. It is possible to find the true peak from the knowledge that the zero-phase contour passes through this peak. Assuming that the vectors originate from a grid point sufficiently close to the peak makes it possible to find this true peak by finding the point where the magnitude gradients are orthogonal. The algorithm is experimentally validated, and results show that this method produces more accurate displacement estimates than other methods for 2D displacement estimation.
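The parabolic-fit idea discussed above can be stated compactly: fit a parabola through the best integer-lag match value and its two neighbours, and take the parabola's extremum as the sub-sample displacement. The Python function below is an illustrative sketch of this standard three-point fit, not code from any of the cited papers.

```python
# Illustrative sketch (assumed, not from the cited papers): sub-sample peak
# localisation by fitting a parabola through a match-function extremum and
# its two neighbours, in the spirit of the parabolic-fit methods above.

def parabolic_subsample(m_left, m_center, m_right):
    """Return the sub-sample offset (in samples) of the extremum of a
    parabola through three equally spaced match values at lags -1, 0, +1."""
    denom = m_left - 2.0 * m_center + m_right
    if denom == 0.0:           # degenerate (flat): no refinement possible
        return 0.0
    return 0.5 * (m_left - m_right) / denom

# Usage: SAD values around the best integer lag. The true minimum lies
# slightly towards the right neighbour, since it has the smaller SAD.
offset = parabolic_subsample(4.0, 1.0, 2.0)
print(offset)  # 0.25
```

When the two neighbours are equal the offset is zero, and when one neighbour is smaller the estimate shifts towards it, which matches the midway/closer-point reasoning of the grid-slopes discussion above.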
There are three relevant areas to look into when it comes to sub-pixel methods for speckle tracking in 3D ultrasound imaging:

- calculating the mean from tracked neighbouring points
- interpolation of the SAD function across the search area
- interpolation of the beam space data in the two lateral directions


Chapter 4

Methods
This chapter describes the tools used and the algorithms utilized throughout this project.

4.1 Tools

4.1.1 Matlab

MatLab version 7.0.1 SP1 from The MathWorks was used to carry out the interpolation methods and to test the algorithm on the recorded data. MatLab was installed on a PC with a 1.67 GHz AMD Athlon XP 2000+ processor and 736 MB RAM, running the Microsoft Windows XP Professional SP2 operating system.

4.1.2 GcMat

GcMat is a tool developed by the Department of Circulation and Medical Imaging at NTNU in collaboration with GE Vingmed. GcMat enables post-processing of ultrasound raw data, and use of the GE Vingmed software component GcViewer through MatLab. The version used in this thesis work was GcMat v6.0.11. The GcMat GUI is shown in Figure 4.1. The GcViewer is the display unit of the GE Vingmed ultrasound scanners. The reading, storing and display of ultrasound data in GcMat is controlled by a set of dll-files written in C++. The same dll-files are used in the GE Vingmed Vivid 7 Dimension ultrasound scanner.


Figure 4.1: The GcMat user interface, GcViewer.



4.1.3 Vivid7

The ultrasound scanner used for recording the ultrasound images used in this report was the GE Vingmed Vivid7, see Figure 4.2. The scanner is especially designed for cardiac and vascular imaging and has 128 channels that scan electronically.

Figure 4.2: The GE Vingmed Vivid7 ultrasound scanner.

V3S Probe

The probe used when recording the images was the V3S probe. This recently commercialized probe is a flat array 3D probe that enables real-time acquisition of volume ultrasound data. The probe enables improved spatial understanding of the anatomical structures and functions of the heart through free rotation of 3D images combined with zooming and 4D imaging. Two different display modes are available: volume rendering mode for 3D scanning and slice mode for measurements and volume reconstruction purposes. The 4D imaging mode is only available from B-mode imaging. The footprint of the probe is 21 x 26 mm and the frequency range is 1.5-4 MHz (Vingmed 2005).


4.1.4 Robot Arm

The robot arm used in these experiments is manufactured by Physik Instrumente, Germany. It is an M-5x1.5i IntelliStage, a linear positioner with an integrated motor controller that allows the probe to be moved in three directions, making it possible to record 4D ultrasound images of a phantom. The IntelliStage is based on a 5-phase stepping motor operated by a high-resolution micro-stepping controller. It has a linear resolution of 1 µm. The stepping motor moves at high enough speed that the movement is approximately continuous (Physik Instrumente 2001).

4.1.5 Ultrasound Phantom

The ultrasound images are images of the General Purpose Multi-Tissue Ultrasound Phantom (CIRS 2006). The advantage of using a phantom when recording images is that the resulting image is known, see Figure 4.3. Knowing the structure of what is being imaged makes it easier to analyse the results. At the same time, the homogeneous areas in the phantom make it easier to study the characteristics of the speckle pattern.

Figure 4.3: The ultrasound phantom. Left: An image of the phantom. Right: A visualization of the targets in the phantom. (CIRS 2006)

The General Purpose Multi-Tissue Ultrasound Phantom from CIRS Tissue Simulation Technology (CIRS 2006) offers a dependable medium with specific, known test objects for repeatable qualitative assessment of ultrasound scanner performance. The phantom is made of a material that is not affected by temperature changes; it can sustain both boiling and freezing without being damaged. At normal room temperature, the phantom accurately simulates liver tissue characteristics. The model used (Model 040) is designed for assessment of uniformity, axial and lateral resolution, depth calibration, dead zone measurement and registration, with two different attenuation coefficients. Figure 4.3 shows a visualisation of the different targets in the phantom.

4.2 Experimental Setup

In order to test the effect of different sub-pixel techniques the following strategy was adopted: a test set of three B-mode ultrasound images was recorded using the V3S probe on the Vivid 7 ultrasound scanner. The experimental setup is shown in Figure 4.4. The probe was attached to the robot arm and the robot arm was programmed to move in certain directions. The code used to program the robot can be seen in the appendix. The sector width for the recordings was 75 degrees in the azimuth plane (θ) and 15 degrees in the elevation plane (φ). Three 3D ultrasound images were recorded: one with translation only in the azimuth direction, one with translation in the elevation direction, and one with the phantom angled so that the image was recorded in an angular direction, see Table 4.1. In the azimuth direction the velocity was programmed to be 200 000 (20 mm/s). In the elevation direction the velocity was programmed to be 100 000 (10 mm/s). In the angular recording the phantom was angled, see Figure 4.5, while the probe was programmed to move in the azimuth direction. This resulted in an absolute velocity of 200 000 (20 mm/s).

filename       direction of translation   velocity
Image03.dcm    azimuth                    20 mm/s
Image05.dcm    elevation                  10 mm/s
Image06.dcm    angular                    20 mm/s

Table 4.1: This table shows the direction of translation in the recorded ultrasound images.

To be able to measure the effect of interpolation it is necessary to calculate the relationship between beam space and Cartesian coordinates. This was done using the MatLab function beam2cart. This code can be seen in the appendix.



Figure 4.4: The figure shows the experimental setup. The V3S probe is attached to one of the robot arms and the phantom is placed according to the setup parameters. The top of the phantom is covered with a layer of water to ensure good contact between the probe and the phantom. The red arrows indicate the directions of motion the robot arm can perform.

Figure 4.5: The figure shows the experimental setup of the phantom versus the probe. Left: Setup for recording of Image03. Middle: Setup for recording of Image05. Right: Setup for recording of Image06.



Figure 4.6: Image03.dcm, movement in azimuth direction.



Figure 4.7: Image05.dcm, movement in elevation direction.

4.3 Speckle Tracking Algorithm

Professor Hans Torp at the Department of Circulation and Medical Imaging at NTNU has implemented the speckle tracking algorithm. It is a further development of an existing 2D speckle tracking algorithm. The algorithm estimates displacement in three dimensions and imports the ultrasound images through the GcMat interface. I have further developed this algorithm in an attempt to improve its tracking accuracy. The kernel and search area have been interpolated using different interpolation factors and different interpolation types. This was done in order to be able to detect sub-sample displacement of the speckle pattern.

Figure 4.8: Image06.dcm, movement in angular direction.

The kernel and search area were interpolated in the azimuth and elevation directions. It was assumed that the resolution in range was sufficient for good tracking; in addition, the movement in the range direction was minimal. The method described above uses the built-in MatLab function interp2 to interpolate in the azimuth and elevation directions. Three different interpolation types were tested: spline, linear and nearest. The tracking was performed at three depths of the image: at 40% of maximal depth, at 55% and at 70%.

The size of the kernel defines the spatial velocity resolution, while the size of the search region relative to the kernel defines the velocity range. The sizes of the kernel and the search region are factors that have to be taken into consideration when performing the tracking. When tracking in the heart, the kernel size must not exceed the thickness of the myocardial wall. In this case the kernel size was set to 10x4x4 and the search region to 6x5x5. The search region is limited by a maximum velocity parameter, and hence the size of the search region in mm or voxels will depend on the frame rate of the ultrasound image. The interpolation factors and interpolation types used are shown in table 4.2.

filename      interpolation type   interpolation factor
Image03.dcm   spline               1-3-5-7-9-11
              nearest              1-3-5-7-9-11
              linear               1-3-5-7-9-11
Image05.dcm   spline               1-3-5-7-9-11
              nearest              1-3-5-7-9-11
              linear               1-3-5-7-9-11
Image06.dcm   spline               1-3-5-7-9-11
              nearest              1-3-5-7-9-11
              linear               1-3-5-7-9-11

Table 4.2: This table shows the interpolation parameters used.
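As an illustration of the upsampling step that interp2 performs on the kernel and search area, the following Python sketch implements the nearest and linear variants on a 2-D patch; the spline variant is omitted for brevity. The function name and the pure-NumPy formulation are my own — the thesis itself uses MatLab's interp2.

```python
import numpy as np

def upsample2d(img, factor, method="linear"):
    """Upsample a 2-D patch on a grid refined by an integer factor,
    analogous to interpolating the kernel/search area with interp2."""
    m, n = img.shape
    # query coordinates on the refined grid, spanning the original index range
    yi = np.linspace(0, m - 1, (m - 1) * factor + 1)
    xi = np.linspace(0, n - 1, (n - 1) * factor + 1)
    if method == "nearest":
        # pick the closest original sample in each direction
        return img[np.rint(yi).astype(int)[:, None],
                   np.rint(xi).astype(int)[None, :]]
    if method == "linear":
        # separable bilinear interpolation: rows first, then columns
        tmp = np.array([np.interp(yi, np.arange(m), img[:, j])
                        for j in range(n)]).T
        return np.array([np.interp(xi, np.arange(n), tmp[i, :])
                         for i in range(tmp.shape[0])])
    raise ValueError("only 'nearest' and 'linear' are sketched here")
```

With factor 3, a 2x2 patch becomes a 4x4 patch whose interior values are weighted averages of the four original samples.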

4.4 Quality Measurements

The kernel size used in the tracking algorithm was in this case 10x4x4 in the range, azimuth and elevation directions respectively. Increasing the kernel size does not necessarily improve the precision of the tracking algorithm. On the contrary, one might actually degrade the performance of the algorithm by doing so. A larger search area will improve the chance of finding a SAD value close to zero. However, it does not necessarily improve the tracking. The larger the search area, the bigger the trade-off between computing time and accuracy.

Nkx   Nky   Nkz   Nsx   Nsy   Nsz
10    4     4     6     5     5

Table 4.3: This table shows the kernel size and search area parameters used.

The kernel size and search area are calculated as follows:

kernel = [2Nkx, 2Nky, 2Nkz]
search = [2(Nkx+Nsx), 2(Nky+Nsy), 2(Nkz+Nsz)]

Myocardial tissue velocity is up to about 10 cm/s. When imaging with a frame rate of 20 frames/s, this gives a displacement between frames equal to 0.5 cm/frame. To test the performance of the speckle tracking algorithm when further interpolation is implemented, the ultrasound images were recorded with the robot arm moving at a velocity of 2 cm/s.

name   Nkx   Nky   Nkz   Nsx   Nsy   Nsz   total time
k1     3     2     2     2     2     2     <30 s
k2     5     3     3     4     4     4     30 min
k3     5     4     4     5     5     5     1 h 50 min

Table 4.4: This table shows the kernel and search area parameters used, and the computation time for interpolating with factors 1-3-5-7-9.
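The sizing formulas above, and the SAD block matching they parameterise, can be sketched as follows. This is a hypothetical Python illustration of integer-voxel SAD matching under the half-size convention used in the tables — it is not the thesis MatLab implementation, and the function names and the brute-force search loop are my own.

```python
import numpy as np

def region_sizes(Nk, Ns):
    """Kernel and search-region extents from the half-size parameters,
    following kernel = 2*Nk and search = 2*(Nk+Ns)."""
    kernel = [2 * k for k in Nk]
    search = [2 * (k + s) for k, s in zip(Nk, Ns)]
    return kernel, search

def sad_displacement(prev, curr, centre, Nk, Ns):
    """Take the kernel around `centre` in `prev` and find the integer
    offset in `curr` (within +/-Ns voxels) minimising the sum of
    absolute differences (SAD)."""
    c, k = np.asarray(centre), np.asarray(Nk)
    kernel = prev[tuple(slice(ci - ki, ci + ki) for ci, ki in zip(c, k))]
    best, best_off = np.inf, (0, 0, 0)
    for dx in range(-Ns[0], Ns[0] + 1):
        for dy in range(-Ns[1], Ns[1] + 1):
            for dz in range(-Ns[2], Ns[2] + 1):
                off = c + np.array([dx, dy, dz])
                cand = curr[tuple(slice(oi - ki, oi + ki)
                                  for oi, ki in zip(off, k))]
                sad = np.abs(kernel - cand).sum()
                if sad < best:
                    best, best_off = sad, (dx, dy, dz)
    return best_off
```

Applied to two volumes where the second is a pure integer-voxel shift of the first, the function recovers that shift exactly; sub-voxel shifts are what motivate the interpolation studied in this chapter.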




Chapter 5

Results
This chapter will present the results as MatLab plots. The results are accompanied by some observations and comments.

5.1 Kernel size and Search area

The interpolation type used in this section is the nearest neighbour method. It was tested on the image with movement in the azimuth direction (Image03). The interpolation technique was tested using different kernel sizes and search areas, see Table 4.4. The first figure shows the smallest kernel size and search area, while the last figure shows the largest. The settings used in the last figure are approximately the same as the settings used in the following experiments.

Figure 5.1: Tracking using a small kernel size and search area, see table 4.4



Figure 5.2: Tracking using a medium kernel size and search area, see table 4.4

Figure 5.3: Tracking using a large kernel size and search area, see table 4.4



5.2 Interpolation of kernel and search area

5.2.1 Nearest Neighbour Interpolation

This section shows the figures obtained when nearest neighbour interpolation is performed on the three ultrasound images. The figures show that the velocities vary for the different depths in the image, with the middle depth going from being the worst estimate in the azimuth image to the best in the angular image. For the nearest neighbour interpolation one can see that the velocity in the elevation image seems to be zero throughout. This is an error that I have not been able to find the cause of.

Figure 5.4: Tracking of azimuth image using the nearest neighbour interpolation method.



Figure 5.5: Tracking of elevation image using the nearest neighbour interpolation method.

Figure 5.6: Tracking of angular image using the nearest neighbour interpolation method.



5.2.2 Linear Interpolation

In this section linear interpolation is performed on the three ultrasound images. The figures show that the velocities vary for the different depths in the image. For the linear interpolation one can see that the velocity decreases the deeper in the image the tracking is performed. It can be observed that for the elevation image, the velocity seems to be zero when no interpolation at all is performed.

Figure 5.7: Tracking of azimuth image using the linear interpolation method.



Figure 5.8: Tracking of elevation image using the linear interpolation method.

Figure 5.9: Tracking of angular image using the linear interpolation method.



5.2.3 Spline Interpolation

Last, the figures for the spline interpolation are shown. Again, the figures show that the velocities vary for the different depths in the image. It can be observed that for the elevation image, the velocity seems to be zero when no interpolation at all is performed, just as for the linear interpolation method. This method seems to perform equally well, and gives approximately the same velocity estimate for each recorded image.

Figure 5.10: Tracking of azimuth image using the spline interpolation method.



Figure 5.11: Tracking of elevation image using the spline interpolation method.

Figure 5.12: Tracking of angular image using the spline interpolation method.


Chapter 6

Discussion
In this chapter I will discuss the results presented in chapter 5 and state the reason for my choice of interpolation method.

6.1 Size of Search Area and Kernel Region

The choice of the kernel region plays a critical role in tracking tissue motion. In the case of tracking cardiac motion, the motion is periodic. This should result in the scatterers, if tracked correctly, returning to the same point at the end of each cycle. In experiments with varying search and kernel sizes it has been found that there exists an optimal size for these regions at which the motion of the cardiac muscle cells is correctly tracked. The optimal size is not necessarily constant across different regions. The kernel size should be less than the thickness of the left ventricular wall, which is about 0.8 cm. The physical size of the kernel and the search area grows the deeper in the image one gets. This is caused by the fact that the beams are narrower in the shallow parts of the image and wider deeper down. The performance will improve as the kernel and the search area are made larger, because the areas will be more unique and less sensitive to noise. The disadvantage of a large kernel size is that it degrades the spatial velocity resolution.

6.2 Robot

The robot is based on a stepping motor, which could cause the movement of the probe to be less continuous. In this case it is assumed that the step motor has no effect on the movement and that the movement appears continuous. It can be discussed whether the acceleration has any effect on the velocity of the robot arm. This should not be a problem, as the tracking algorithm is only tested on a few frames in the middle of the recording. There is no radial movement of the robot arm; hence, the probe depth is stationary during the recordings. The fact that we forgot to set the acceleration parameter may have an effect on how fast the robot reaches maximum velocity. This will, as mentioned above, probably not be an issue, since only the three middle frames of each image are used; the robot has most likely reached maximum velocity by the middle of the recording.

6.3 Ultrasound Image Recordings

All three ultrasound images were recorded using B-mode imaging. The full volume application was not applied to the recordings. This could, however, improve the resolution and, hence, improve the effect of the interpolation. In the ultrasound images one can observe some reflections from the water at the surface. These are known as reverberations and may affect the tracking at shallow depths. The reverberations can be observed in the image shown in Figure 4.1.

From the plots in chapter 5 it can be observed that there does not seem to be any significant effect when interpolating with a factor larger than three. A consequence of reducing the interpolation factor is that the required computation time decreases drastically. The computing time for the experiments conducted in this work increased almost exponentially for every interpolation step. To calculate the interpolation for one image and one interpolation type, the elapsed time was almost 8 hours. Of these 8 hours, five were spent calculating the interpolation with factor 11. The time-consuming part was not the interpolation function itself, but the calculation of the SAD values. This is because the interpolation increases the number of points for which the SAD is to be calculated. If, however, one interpolated the SAD instead of the search and kernel areas, one could probably reduce the computation time drastically.

There are different actions one can take to reduce the computation time. One can reduce the interpolation factor, as discussed above; another possibility is to reduce the search area. The search area size is discussed in section 6.1. The tracking seems to be poor at depth C of the images (70%). This could be caused by the fact that the beam density is lower deeper in the image. As mentioned earlier, the beams are wider deeper in an ultrasound image than at shallow depths. The results presented also indicate that a certain density of beams is needed for the interpolation to have any effect on the speckle tracking.

Chapter 7

Conclusion
7.1 Conclusion

A literature review was conducted to look into what has previously been done in the area of sub-pixel resolution and sub-sample motion estimation. As a result, a method was implemented to test the effect one of these sub-pixel methods would have on the 3D speckle tracking algorithm. The kernel and search area were interpolated with increasing interpolation factors to explore whether this would make the speckle tracking algorithm track the tissue velocities with improved accuracy. The performance analysis conducted on the speckle tracking algorithm last semester indicated that the algorithm performed best if the displacement from frame to frame was either a whole sample or half a sample. This could indicate that interpolation of the search area would have a positive effect on the speckle tracking algorithm. The results show that there is no significant effect with interpolation factors higher than 3. This is a somewhat surprising finding.

7.2 Further work

The results indicate that a natural next step in this work would be to test other methods for sub-pixel resolution. One method that could be interesting to investigate would be to interpolate the SAD function in the search area. That could give more accurate and satisfactory results than the method presented here. Another method could be to track neighbouring points in the image, find the average SAD for those points, and determine the corresponding point in the next frame from that. This method was


briefly looked into in this work, but not investigated enough to be included in the report. The preliminary results were, however, promising.

In this report the algorithms and interpolation methods are implemented in MatLab. MatLab programming generally gives a longer running time than other programming languages. When interpolating the search and kernel areas, it turned out that the SAD calculation needed very long computing time. This could be improved by implementing the method in a more efficient programming language.
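The idea of interpolating the SAD function instead of the image data can be illustrated with the common three-point parabolic fit around the integer SAD minimum. This is a sketch of the general technique, not the thesis code; it assumes the SAD values at offsets -1, 0 and +1 around the best integer match, and returns a sub-sample correction to that match.

```python
def subsample_peak(sad_m1, sad_0, sad_p1):
    """Fit a parabola through the SAD values at integer offsets
    -1, 0, +1 around the minimum and return the sub-sample offset
    of the parabola's vertex."""
    denom = sad_m1 - 2.0 * sad_0 + sad_p1
    if denom == 0:
        return 0.0  # flat neighbourhood: keep the integer estimate
    delta = 0.5 * (sad_m1 - sad_p1) / denom
    # the true vertex lies within half a sample of the integer minimum
    return max(-0.5, min(0.5, delta))
```

Because only three SAD values per tracked point are needed, this avoids upsampling the kernel and search area altogether, which is why it could reduce the computation time noted above.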



Appendix A

Robot Arm Code


function r = move(r, newpos, axis)
% R = MOVE(R, NEWPOS)
%   Move the robot to the position specified by NEWPOS.
%   The new position is an array of three numbers [X Y Z].
%
% R = MOVE(R, NEWPOS, AXIS)
%   Move the axis specified by AXIS to the position
%   specified by NEWPOS. The new position is a number,
%   and the axis is one of {'x' 'y' 'z' '1' '2' '3'}.
%
% $Id: move.m,v 1.10 2002/09/12 14:15:19 joha Exp $

% if called with two args, we call ourselves.
if nargin == 2
    r = move(r, newpos(1), 1);
    r = move(r, newpos(2), 2);
    r = move(r, newpos(3), 3);
    return
end

% find axis number
axis = findstr(lower(axis), 'xyz123');
axis = mod(axis,3);
if axis == 0
    axis = 3;
end

% check boundaries
maxpos = r.maxpos(axis);
minpos = r.minpos(axis);
if newpos > maxpos
    newpos = maxpos;
elseif newpos < minpos
    newpos = minpos;
end

% invert z axis
if axis == 3
    newpos = -newpos;
end

% send command to robot
fprintf(r, sprintf('%dMA%d', axis, floor(newpos*1e7)));


function r = physik(port)
% r = physik(port) - PI robot "constructor"
%
% input arguments:
%   port (string): {'COM1','COM2',...}
%     specify which serial port scanny is connected to
% returns:
%   robot object
% functions:
%   move    - move to a specific position
%   fprintf - send commands to the robot
%   fscanf  - read replies from the robot
%   get     - get properties
%   relmove - move to a position relative to the current position
%   set     - set properties
%   wait    - wait for robot to stop
%
% TODO: check if serial port is available
% $Id: physik.m,v 1.9 2002/09/12 14:15:20 joha Exp $

r.comm = [];
r.maxpos = [0.02;0.02;0.04];
r.minpos = [-0.02;-0.02;0];
r = class(r,'physik');

r.comm = serial(port,'BaudRate',9600,'TimeOut',60);
fopen(r.comm)

% Stop robot, if moving
fprintf(r, '123AB');


Appendix B

beam2cart
%% Function to convert from beam space to Cartesian coordinates
function [x,y,z] = beam2cart(r,b1,b2)

h = gcglobalget('h');
iGridSize = gcudtparam(h,'Tissue','GridSize');
nSamples = iGridSize(1);
nBeams = iGridSize(2);
nElevBeams = iGridSize(3);

% Depth information:
fDepthStart = gcudtparam(h,'Tissue','DepthStart');
fDepthEnd = gcudtparam(h,'Tissue','DepthEnd');
dr = (fDepthEnd-fDepthStart)/(nSamples-1);

% Scan sector (azimuth):
fAzWidth = gcudtparam(h,'Tissue','Width');
fAzTilt = gcudtparam(h,'Tissue','Tilt');
fAzAngleStart = (fAzTilt-fAzWidth)/2;
fAzAngleEnd = (fAzTilt+fAzWidth)/2;
db1 = (fAzAngleEnd-fAzAngleStart)/(nBeams-1);

% Scan sector (elevation):
fElWidth = gcudtparam(h,'Tissue','ElevationWidth');
fElTilt = gcudtparam(h,'Tissue','ElevationTilt');
fElAngleStart = (fElTilt-fElWidth)/2;
fElAngleEnd = (fElTilt+fElWidth)/2;
db2 = (fElAngleEnd-fElAngleStart)/(nElevBeams-1);

% Find depth and angles for the beam-space point:
R = fDepthStart + dr * (r-1);
THETA = fAzAngleStart + db1 * (b1-1);
PHI = fElAngleStart + db2 * (b2-1);

% Convert from spherical to Cartesian coordinates:
[x,y,z] = sph2cart(THETA,PHI,R);
end

