April Moore
DOS 711 Research Methods Med Dos I
April 1, 2015
Trade Publication Article & Peer Reviewed Article Comparison
With so many medical journals and articles available today, it is important to be educated about the differences between them in order to obtain accurate and valid information when researching. I have selected two articles: one is from a trade publication and the other is from a peer-reviewed journal. I will compare and contrast these two types of articles in general as well as the information in each specific article. While comparing these articles I will discuss the usefulness of each article to professionals in the field as well as the strengths and weaknesses of the articles.
A trade publication is a periodical covering popular and interesting subjects. Trade publications are meant to be informative and entertaining; they are informal and usually written for a broad audience.1 The writers are usually paid journalists who use an informal writing style. Several trade journals offer free access online because they carry job postings and advertisements throughout the readings. Since trade publications are not peer reviewed, their credibility is not certain. I will review an article from Radiology Today magazine entitled Decision Support: New Real-Time Tools Seek to Help Radiologists Create Consistent, Evidence-Based Reports, written by Orenstein. This article addressed an increasingly common issue: incidental findings on imaging. Due to advances in imaging and the growing use of CT, radiologists are discovering incidental findings for which they feel medically and legally obligated to recommend follow-up imaging and tests. These follow-ups can lead to unnecessary imaging, possible risk to patients, and an increase in costs. The article noted that in the era of the Affordable Care Act, some physicians face pressure to contain health costs.2 About half of the radiologists in the United States use Nuance PowerScribe 360 to dictate reports. Nuance is developing a tool that works with PowerScribe so that when the physician enters some descriptors of an incidental finding, the exact language and recommendations are generated for the radiologist to include in the report. The recommendations offered are based on published papers from the American College of Radiology, the Fleischner Society, and the Society of Radiologists in Ultrasound.2 If no data are available for a recommendation, the company has organized a group of expert radiologists who come up with conclusions and provide expert advice. The article also notes that radiologists have the discretion not to use the guidelines in specific situations.
This article, like any other trade publication piece, was meant to be engaging and was based on a topic directed at a broad audience. I enjoyed reading it and feel that anyone who works in a radiologic field, as well as anyone (in a healthcare career or not) who is aware of the rising costs of medical exams, can benefit from reading this article. The article seemed particularly biased toward Nuance; it did not mention the other dictation systems that radiologists might use or whether their makers were also developing similar software. The article also neglected to discuss the possible risks associated with using the wrong recommendation guidelines. Since the new PowerScribe tool will use guidelines based on published, peer-reviewed papers, I feel those guidelines are credible. Where no guidelines are in place for certain findings, however, I am not sure that a panel of experts giving advice is the most research-based source of information.
Overall, this article provided an interesting look into tools to help radiologists who come across incidental findings. It reviewed benefits to both the physician and the patient. It did, however, neglect to discuss more than one company. The information is helpful to healthcare providers as well as the general public. I did not find the article truly credible, since it was biased toward a specific company and the author is a freelance medical writer. While she might be extremely familiar with medical subjects, I do not feel she is an expert in them, and her work for a trade publication is not required to be peer reviewed. I would recommend this article to my peers as an enjoyable read; I would not recommend it as any sort of research-based data. For further data on incidental findings and follow-up recommendations, I would direct someone to the published papers mentioned in this article.

Evaluation of a Peer-Reviewed Article


Peer-reviewed articles are geared toward professionals, academics, and researchers.1 They are scientific and formal in writing, and they are reviewed by a panel of peers who can challenge the information to ensure that it is highly accurate. These scholarly sources include books, journals, and published reports. They are written by experts and are highly credible, offering the sources from which the information was collected.3 The peer-reviewed article I chose to evaluate is from the Journal of Applied Clinical Medical Physics, entitled CT Image Quality Over Time: Comparison of Image Quality for Six Different CT Scanners Over a Six-Year Period, by Roa et al. I will provide the same critique as I did for the trade publication, as well as thoroughly explore the study, the information reported, and the conclusions.
The aim of the study was to evaluate and analyze various image quality parameters and their variability measured over time for six different CT scanners, and to evaluate the current methodology for QA control of CT systems.4 As CT scanners develop and are able to offer new technology such as organ perfusion, 3D angiography, and virtual colonoscopy, attention to radiation exposure also increases. When radiation exposure is reduced in CT scanners, there tends to be a magnification of noise and a loss of signal.4 The study took six CT scanners from four vendors (GE Medical Systems, Toshiba Medical Systems, Philips Medical Systems, and Siemens Medical Solutions) and performed QA tests on uniformity and noise, linearity and CT number in different materials, spatial resolution, and slice thickness. The QA tests performed were based on recommendations from the International Electrotechnical Commission (IEC), the Institute of Physics and Engineering in Medicine (IPEM), and the American Association of Physicists in Medicine (AAPM).4 The study was controlled by using six scanners with equal workload in the same hospital, as well as the same QA phantoms for testing.
This article explained in depth the specific phantoms used to test for uniformity and noise, linearity and CT numbers, spatial resolution, and slice thickness. Once the QA phantoms were scanned, all images were burned to CDs as DICOM files and were accessed and selected using an in-house MATLAB (MathWorks Inc, Natick, MA) program, which allows users to determine whether the images were correctly positioned and of adequate quality.4 All noise calculations were also done in this software. Additional software, AutoQALite, was used to measure uniformity, CT numbers, linearity, and slice thickness.

The results of the QA tests were as follows. For uniformity, the values were within the advised limit of ±4 HU, with two exceptions, both in the Toshiba scanner: the first was a failure of unknown cause in an image that generated a reading of 4.38 HU, and the second was a ring artifact that generated a reading of 4.19 HU.4 Service engineers were called to check the CT scanner and performed calibrations. Generally, the results showed improved uniformity and small variations when using a helical protocol.4 Since acquisition protocols, which vary from scanner to scanner, have a strong influence on noise, the data for each CT scanner were presented separately in graph form. In order to compare the data, the noise values were normalized with respect to the slice thickness (a simplified sketch of such a calculation follows this paragraph). The lowest amount of noise was seen in the GE LightSpeed and the highest in the Siemens.4 The linearity over time was appropriate for all CT scanners. The CT number QA test revealed that the Toshiba registered higher values for positive sensitometric inserts and lower values for negative CT value materials compared with the other scanners; the GE LightSpeed showed the opposite pattern. The example given to show this discrepancy was that air, with an expected value of -1000 HU, had average differences of ~65 and ~72 HU.4 It was noted that the average percentage deviation from the expected values yielded a very large range for all protocols. In determining spatial resolution, only three of the six scanners were used because of significant differences in the frequency of tests and utilization methods for the other scanners.4 Both GE scanners performed to standards, except that in one of the GE scanners the spatial resolution was reported ~21% lower than expected when using the standard reconstruction algorithm. In the Toshiba, 50% of the data obtained were lower than for the GE scanners.4 When testing QA for slice thickness, it was noted that variability was large for the Philips scanners and relatively small for the GE and Toshiba scanners.
The conclusion of the study suggested that the uniformity of the scanners remained the same over time, so it might not be necessary to measure uniformity more often than once a year. There was, however, minor drifting over time in noise, CT number, and spatial resolution.4 In the case of image noise, the smaller detectors were more stable. The HU values had a wide range of variation between scanners, and it was suggested that HU values be used carefully for diagnostic purposes. There is no reason, based on this study, to suspect spatial resolution deviations over time unless there is a tube upgrade or the algorithms have changed.4 This study demonstrated that some QA tests should be performed more regularly than others.

As in other peer-reviewed articles, the abstract should appropriately define the goal as well as briefly describe the study process. This article's abstract did describe the goal of evaluating different recommended image QA tests over time. The information gathered and analyzed in this study was scientific and highly supported by references. Several graphs and tables were shown throughout the article to give the reader a clear understanding of the data collected and the results as they pertained to the specific QA testing. The methodology for analyzing the data was slightly confusing for me to follow; however, the writing was formal. The article was geared toward professionals in the radiology field, more specifically medical physicists. I feel that these authors were credible and, based on the research done over the six-year time frame, were experts on QA testing. This peer-reviewed article provided the information one would need when researching subjects related to CT scanner quality assurance. I would refer to this article as needed, as well as recommend it to a colleague.

References
1. Lenards N, Weege M. Radiation Therapy and Medical Dosimetry Reading. [PowerPoint]. La Crosse, WI: UW-L Medical Dosimetry Program; 2015.
2. Orenstein B. Decision support: new real-time tools seek to help radiologists create consistent, evidence-based reports. Radiology Today. 2015;16(3):22. http://www.radiologytoday.net/archive/rt0315p22.shtml
3. Carbonneau K, Ragsdale L. Research fundamentals: evaluating sources. [PowerPoint]. American Society of Radiologic Technologists; 2013.
4. Roa AA, Andersen HK, Martinsen AT. CT image quality over time: comparison of image quality for six different CT scanners over a six-year period. J Appl Clin Med Phys. 2015;16(2):350-365. http://dx.doi.org/10.1120/jacmp.v16i2.4972
