
MTA JOURNAL
Summer-Fall 2000

Issue 54

A Publication of
MARKET TECHNICIANS ASSOCIATION, INC.
A Not-For-Profit Professional Organization, Incorporated 1973
One World Trade Center, Suite 4447, New York, NY 10048
Phone: 212/912-0995, Fax: 212/912-1064
E-mail: shelley@mta.com
www.mta.org

THE MTA JOURNAL TABLE OF CONTENTS
SUMMER-FALL 2000, ISSUE 54

MTA JOURNAL EDITORIAL STAFF
ABOUT THE MTA JOURNAL
MTA MEMBER AND AFFILIATE INFORMATION
2000-2001 BOARD OF DIRECTORS AND MANAGEMENT COMMITTEE
EDITORIAL COMMENTARY
ADDRESS TO MTA 25TH ANNIVERSARY SEMINAR MAY 2000: LIVING LEGENDS PANEL
  Robert J. Farrell
EXPLOITING VOLATILITY TO ACHIEVE A TRADING EDGE: MARKET-NEUTRAL/DELTA-NEUTRAL TRADING USING THE PRISM TRADING SYSTEMS
  Jeff Morton, M.D., CMT
MECHANICAL TRADING SYSTEM VS. THE S&P 100 INDEX: CAN A MECHANICAL TRADING SYSTEM BASED ON THE FOUR-WEEK RULE BEAT THE S&P 100 INDEX?
  Art Ruszkowski, CMT, M.Sc.
SCIENCE IS REVEALING THE MECHANISM OF THE WAVE PRINCIPLE
  Robert R. Prechter, Jr., CMT
TESTING THE EFFICACY OF THE NEW HIGH/NEW LOW INDEX USING PROPRIETARY DATA
  Richard T. Williams, CFA, CMT
BIRTH OF A CANDLESTICK - USING GENETIC ALGORITHM TO IDENTIFY USEFUL CANDLESTICK REVERSAL PATTERNS
  Jonathan T. Lin, CMT

MTA JOURNAL EDITORIAL STAFF

EDITOR
Henry O. Pruden, Ph.D.
Golden Gate University
San Francisco, California

ASSOCIATE EDITORS
David L. Upshaw, CFA, CMT
Lake Quivira, Kansas

Jeffrey Morton, M.D.


PRISM Trading Advisors
Missouri City, Texas

MANUSCRIPT REVIEWERS
Connie Brown, CMT
Aerodynamic Investments Inc.
Pawley's Island, South Carolina

Charles D. Kirkpatrick, II, CMT


Kirkpatrick and Company, Inc.
Chatham, Massachusetts

Michael J. Moody, CMT


Dorsey, Wright & Associates
Pasadena, California

John A. Carder, CMT


Topline Investment Graphics
Boulder, Colorado

John McGinley, CMT


Technical Trends
Wilton, Connecticut

Richard C. Orr, Ph.D.


ROME Partners
Marblehead, Massachusetts

Ann F. Cody, CFA


Hilliard Lyons
Louisville, Kentucky

Cornelius Luca
Bridge Information Systems
New York, New York

Kenneth G. Tower, CMT


UST Securities
Princeton, New Jersey

Robert B. Peirce
Cookson, Peirce & Co., Inc.
Pittsburgh, Pennsylvania

Theodore E. Loud, CMT


Tel Advisor Inc. of Virginia
Charlottesville, Virginia

J. Adrian Trezise, M. App. Sc. (II)


Consultant to J.P. Morgan
London, England

PRODUCTION COORDINATOR
Barbara I. Gomperts
Financial & Investment Graphic Design
Marblehead, Massachusetts

PUBLISHER
Market Technicians Association, Inc.
One World Trade Center, Suite 4447
New York, New York 10048


ABOUT THE MTA JOURNAL


DESCRIPTION OF THE MTA JOURNAL
The Market Technicians Association Journal is published by the Market Technicians Association, Inc. (MTA), One World Trade Center, Suite 4447, New York, NY 10048. Its purpose is to promote the investigation and analysis of the price and volume activities of the world's financial markets. The MTA Journal is distributed to individuals (both academic and practitioner) and libraries in the United States, Canada, Europe and several other countries. The MTA Journal is copyrighted by the Market Technicians Association and registered with the Library of Congress. All rights are reserved.

A NOTE TO AUTHORS ABOUT STYLE


You want your article to be published. The staff of the MTA Journal wants to help you. Our common
goal can be achieved efficiently if you will observe the following conventions. You'll also earn the
thanks of our reviewers, editors, and production people.
1. Send your article on a disk. When you send typewritten work, please use 8-1/2" x 11" paper. DOUBLE-SPACE YOUR TEXT. If you use both sides of the paper, take care that it is heavy enough to avoid reverse-side images. Footnotes and references should appear at the end of your article.
2. Submit two copies of your article.
3. All charts should be provided in camera-ready form and be properly labeled for text reference. Avoid the words "above" and "below"; instead refer to your graphics as Chart A, Table II, etc.
4. Greek characters should be avoided in the text and in all formulae.
5. Include a short (one paragraph) biography. We will place this at the end of your article upon
publication. Your name will appear beneath the title of your article.
We will consider any article you send us, regardless of style, but upon acceptance, we will ask you to
make your article conform to the above conventions.
For a more detailed style sheet, please contact the MTA Office, One World Trade Center, Suite 4447,
New York, NY 10048.
Mail your manuscripts to:
Dr. Henry O. Pruden
Golden Gate University
536 Mission Street
San Francisco, CA 94105-2968


MARKET TECHNICIANS ASSOCIATION, INC.


MEMBER AND AFFILIATE INFORMATION
MEMBER
Member category is available to those "whose professional efforts are spent practicing financial technical
analysis that is either made available to the investing public or becomes a primary input into an active
portfolio management process or for whom technical analysis is a primary basis of their investment decision-making process." Applicants for Member must have been engaged in the above capacity for five years and must be sponsored by three MTA Members familiar with the applicant's work.

AFFILIATE
Affiliate status is available to individuals who are interested in technical analysis, but who do not fully
meet the requirements for Member, as stated above; or who currently do not know three MTA members for
sponsorship. Privileges are noted below.

DUES
Dues for Members and Affiliates are $200 per year and are payable when joining the MTA and thereafter
upon receipt of annual dues notice mailed on July 1. College students may join at a reduced rate of $50 with
the endorsement of a professor.

APPLICATION FEES
Applicants for Member will be charged a one-time, nonrefundable application fee of $25; there is no fee for Affiliates.

BENEFITS OF THE MTA

Invitation to MTA educational meetings
Receive monthly MTA newsletter
Receive MTA Journal
Use of MTA library
Participate on various committees
Colleague of IFTA
Eligible to chair a committee
Eligible to vote

Annual subscription to the MTA Journal for nonmembers: $50 (minimum two issues). Single issues of the MTA Journal (including back issues): $20 each for Members and Affiliates, and $30 for nonmembers.


2000-2001 BOARD OF DIRECTORS AND MANAGEMENT COMMITTEE
OF THE MARKET TECHNICIANS ASSOCIATION, INC.

Board of Directors (4 Officers, 4 Directors & Past President)
Management Committee (4 Officers, Past President and Committee Chairs)

Director: President
Philip B. Erlanger, CMT, Phil Erlanger Research Co. Inc.
978/263-2536, Fax: 978/266-1104, E-mail: phil@erlanger.com

Accreditation
David L. Upshaw, CFA, CMT
913/268-4708, Fax: 913/268-7675, E-mail: upshawd@juno.com

Journal
Henry (Hank) O. Pruden, Golden Gate University
415/442-6583, Fax: 415/442-6579, E-mail: hpruden@ggu.edu

Director: Vice President
Neal Genda, CMT, City National Bank
310/888-6416, Fax: 310/888-6388, E-mail: ngenda@cityntl.com

Admissions
Daniel L. Chesler, CTA, CMT
561/793-6867, Fax: 561/791-3379, E-mail: dan@crowd-control.com

Body of Knowledge
Membership
Richard A. Dickson, Scott & Stringfellow Inc.
804/780-3292, Fax: 804/643-9327, E-mail: DDickson@ScottStringfellow.com

Director: Secretary
Bruno DiGiorgi, Lowry's Reports Inc.
561/842-3514, Fax: 561/842-1523, E-mail: bdigiorgi@lowrysreports.com

Director: Treasurer
Andrew Bekoff, Bloomberg Financial Markets
212/495-0558, Fax: 212/809-9143, E-mail: abekoff@bloomberg.net

Director: Past President
Dodge Dorland, CMT, LANDOR Investment Management
212/737-1254, Fax: 212/861-0027, E-mail: dodge@dorlandgroup.com

Director
Bruce M. Kamich, CMT, wallstreetREALITY.com, Inc.
732/463-8438, Fax: 732/463-2078, E-mail: Barcharts@aol.com

Director
John C. Brooks, CMT, Yelton Fiscal Inc.
770/645-0095, Fax: 770/645-0098, E-mail: jbrooksgcm@aol.com

Computer
TBA

Distance Learning
Richard A. Dickson, Scott & Stringfellow Inc.
804/780-3292, Fax: 804/643-9327, E-mail: DDickson@ScottStringfellow.com

Education
Philip J. Roth, CMT, Morgan Stanley Dean Witter
212/761-6603, Fax: 212/761-0471, E-mail: Philip.Roth@msdw.com

Ethics & Standards
Lisa M. Kinne, CMT, Salomon Smith Barney
212/816-3796, Fax: 212/816-3590, E-mail: lisa.kinne@ssmb.com

Foundation
Charles Kirkpatrick II, CMT, Kirkpatrick & Co.
508/945-3222, Fax: 508/945-8064, E-mail: kirkco@capecod.net

Bruce M. Kamich, CMT
732/463-8438, Fax: 732/463-2078, E-mail: Barcharts@aol.com

Director
Mike Epstein, NDB Capital Markets Corp.
617/753-9910, Fax: 617/753-9914, E-mail: mepstein@ndbcap.com

Philip J. Roth, CMT, Morgan Stanley Dean Witter
212/761-6603, Fax: 212/761-0471, E-mail: Philip.Roth@msdw.com

Director
Kenneth G. Tower, CMT, UST Securities Corp.
609/734-7747, Fax: 609/520-1635, E-mail: kenneth_tower@ustrust.com

IFTA Liaison
Internship Committee
John Kosar, CMT, Bridge Information Services
312/930-1511, Fax: 312/454-3465, E-mail: jkosar@bridge.com

Library
Larry Katz, Market Summary & Forecast
805/370-1919, Fax: 805/777-0044, E-mail: lpk1618@aol.com

Newsletter
Michael N. Kahn, Bridge Information Systems
212/372-7541, E-mail: mkahn@bridge.com

Placement
Rick Bensignor, Morgan Stanley Dean Witter
212/761-6148, Fax: 212/761-0471, E-mail: rick.bensignor@msdw.com

Programs (NY)
Bernard Prebor, MCM MoneyWatch
212/908-4323, Fax: 212/908-4331, E-mail: bprebor@mcmwatch.com

Regions
M. Frederick Meissner
404/875-3733, E-mail: fmeissner@mta.org

Rules
George A. Schade, Jr., CMT
602/542-9841, Fax: 602/542-9827, E-mail: aljschade@aol.com

Seminar
Nina G. Cooper, Pendragon Research, Inc.
815/244-4451, Fax: 815/244-4452, E-mail: ngcooper@grics.net

ADDRESS TO MTA 25TH ANNIVERSARY SEMINAR MAY 2000


LIVING LEGENDS PANEL
Robert J. Farrell
[Editor's Note: At the 25th Anniversary Seminar in May 2000 in Atlanta, Georgia, Bob Farrell was a member of a panel called The Living Legends: a tribute to, and remarks by, the eight winners of the MTA Annual Award. The winners were Art Merrill, Hiroshi Okamoto, Ralph Acampora, Bob Farrell, Don Worden, Dick Arms, Alan Shaw, and John Brooks, and all were in attendance. The panel was hosted by your editor, Henry Pruden. The following is the text of Bob Farrell's presentation:]

I appreciate being included in this 25th anniversary year for MTA Seminars. I also appreciate being one of the living recipients of the annual award. I remember participating in the first seminar as part of what was called the 1-23 panel for the first time. Institutional Investor magazine had included a market timing category in its annual All-American Research Team poll, and Don Hahn, Stan Berge and I were those chosen. That was an indication of greater institutional recognition of market analysis and timing. Before the great Bear Market of 1972-74, technical analysis was mostly regarded with suspicion by professionals. Institutional portfolio managers generally denigrated its importance even though, in every meeting I had with them, they all carried chart books. The big breakthrough came, however, after so many of them got hurt in the 1972-74 Bear Market. They started asking how they could have anticipated the collapse of the nifty-fifty and most other stocks. They then began to notice that many market analysts and technicians had issued warnings about the coming debacle. From then on, they started paying more attention. But just as they did not care about technical timing at the top of the bull market in the late 1960s, by the mid-1970s they wanted to hear more about how to avoid the next bear market. In fact, the Financial Analysts Federation asked me to speak at their annual conference in New York in 1975 on using technical tools to avoid the next bear market. What I chose to speak about was how to use market timing tools to help identify where to be invested for the coming long-term bull market. It seemed clear to me that bear markets of the 1974 intensity did not come along often and set the stage for new long bull runs. They wanted me to talk about the past instead of the future.

When I chose to be a market analyst instead of a security analyst in the early 1960s, I soon realized that what I needed as a goal was professional recognition. I also realized that it could only come from institutions, as their dominance was growing in the market. But I knew most portfolio managers' eyes glazed over when I spoke of technical indicators, or they were outright hostile to technicians.

So, I came up with a plan. I incorporated more long-term trend and cycle work in my analysis so portfolio managers could look at my analysis as something beyond short-term trading. I also realized I could get their attention by giving them fundamental reasons for the conclusions I had arrived at using market indicators. Then I figured out that if I wanted to have impact, effective communication was everything. Of course, I had to be right a good percentage of the time and make sense, but the ability to write and speak in a common sense style without arrogance was crucial to getting their attention.

I also realized, as I am sure many of you have figured out, that most professional money managers have strong views that you are not going to change in a single meeting or with a single report. When I got a conviction about a sector or a market change, I knew it had to offer more than a conclusion or opinion. We had to supply information and present it logically to prove a point. Today, there is more information available more quickly than ever before but, interestingly, the results of most managers are still worse than a passive index. Most want and need to be told which information is important. One of the things I capitalized on was the idea that I had information not available elsewhere, i.e., Merrill Lynch internal transaction figures. We, in fact, applied the term sentiment analysis to our figures back in the mid-1960s and used them to advantage as contrary indicators. Even though they were only one tool, they gave us an edge in supplying unique information to clients. Today, of course, many firms have such data and it is less unique.

I don't believe in us versus them when it comes to technical analysis and fundamental analysis. The goal is to come up with profitable ideas, not whose tools are best. Nevertheless, I had one chance to turn the tables on fundamental security analysts which I enjoyed immensely. When I went to Columbia Business School in 1955 to get a Master's in Investment Finance, I had both Ben Graham and David Dodd as professors. As you know, they were the original value investors who wrote the bible of fundamental analysis, "Security Analysis." It was published in 1934, and there was a 50th anniversary seminar in 1984 at Columbia to which I was invited as a speaker. When the Dean first invited me, I asked him incredulously, "Do you know what I do?" Even though he understood that most technical analysis was poles apart from the fundamental value training of Graham & Dodd, he said, "Just tell us how they influenced you." I was the last speaker on the all-day program, which included Warren Buffett, Mario Gabelli and others, and I felt very intimidated. But I decided to try a different approach and gave a speech entitled, Why Ben Graham Was A Closet Technician. Surprisingly, it was well received. I cited many references he made to the characteristics of a market top and his references to measures of speculation.

The fact that I was rated number one in 16 of the 17 years I competed in the Institutional Investor All-Star Research poll as Chief Market Analyst was not because I was more right than anybody else. I did have a good platform at Merrill Lynch, but not everybody at Merrill was ranked #1 either. I think it was my ability to communicate what was happening or changing in the markets with an historical perspective in a form that mostly fundamental clients could understand. I never talked down to them and always had a sector
opinion that I emphasized where my conviction level was high. I
thought they usually took away something useful from my presentation even if they disagreed with some of the general conclusions.
As a result of the integration of fundamental reasoning to back
up technical conclusions, I became less regarded as a technician
and more as a market strategist who used historical precedent and
technical tools. I have never liked the term technician because it is
too limiting and am very much in favor of finding another way to
describe what we do. We study so many things such as price trends,
momentum, money flows, cycles and waves, investor behavior and
sentiment, supply-demand changes, volume relationships, insider
activity, monetary policy and historical precedent. We have a broad
field of study that has grown more inclusive with time and the computer age. It is just not adequately summed up in the term technician. At Merrill Lynch, we use the broader term of market analyst
to avoid the limiting label of technician. Despite all the attempts
at upgrading and professionalizing our craft by our association, we
still have the press calling technicians sorcerers, elves, entrail readers and other denigrating terms. We have come a long way, but we
have not shaken the negative image of the past that goes with the
term technician, particularly with the press. You may disagree or
even not care, but experience tells me to emphasize our broader
range of skills.
I am impressed with the advanced techniques being used to analyze the market data and the progress made in working with the
Financial Analysts Federation and the academic community. There
is much more substance in our craft as a result of your efforts. Nevertheless, the world at large needs to be educated. Investor's Business Daily does an excellent job of explaining how to use technical
tools and integrate technical and fundamental information on an
ongoing, real-time basis. We should use this model as an organization and have our members publish regular educational articles in
the mainstream press or on an Internet website. We have created excellent professional credentials over the years. Now we need to market our profession, if not as technicians, perhaps as market behavioral strategists or as market timing and behavioral strategists. You
deserve recognition for your broader range of skills as well as your
ability to provide profitable market and stock conclusions.
Thank you for inviting me.

Bob Farrell is Senior Investment Advisor of Merrill Lynch, Pierce, Fenner & Smith, Inc., the nation's largest securities firm, and one of Wall Street's most highly respected stock market analysts. He had been named Number One in the Market Timing category of Institutional Investor's annual All-American Research Team poll for 16 years prior to assuming his new role.

Bob has spent his entire business career with Merrill Lynch. As Manager of Market Analysis, he pioneered the use of sentiment figures using Merrill Lynch internal data. His Weekly Market Commentary, published since 1970, was followed by thousands of professional money managers in this country and abroad.

In his current role as Senior Investment Advisor, he has been writing quarterly on longer-term theme changes in the market. He will continue advising clients on market strategies implementing themes.

Bob was a charter member of the Market Technicians Association and its first president, from 1972 to 1974. He was also the recipient of the MTA Annual Award in 1988, and in 1993 he was inducted into the Wall Street Week Hall of Fame.

He was graduated from Manhattan College in 1954 with a BBA in Economics & Finance, and received an MS in investment finance from the Columbia Graduate School of Business in 1955.

EXPLOITING VOLATILITY TO ACHIEVE A TRADING EDGE:


Market-Neutral/Delta-Neutral Trading Using
the PRISM Trading Systems

Jeff Morton, MD, CMT


Purpose
This study was designed to evaluate the theoretical returns for a
simple non-directional option strategy initiated after a sudden and
significant volatility implosion of an underlying stock.

INTRODUCTION
For options-based trading, the price action of any freely-traded
asset (e.g., stocks, futures, index futures, etc.) can be grouped into
three generic categories (however defined by the trader): (a) bullish price action; (b) bearish price action; (c) congestion/trading
range price action. Specific options-based strategies can be implemented which result in profits if any two out of the three outcomes
unfold. For example, the purchase of both call and put options on
the same underlying asset for the same strike price and same expiration date is termed a straddle position (e.g., buying XYZ $100
strike March 1999 call and put options = XYZ $100 March 1999
straddle). This straddle position can be profitable if either (a) or
(b) quickly occur with significant magnitude (i.e., price volatility)
prior to option expiration. In this sense, a straddle trade is nondirectional since it can profit in both bull and bear moves.
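As a concrete illustration of the payoff profile, the sketch below computes the per-share profit or loss of a long straddle held to expiration; the $100 strike and $4 premiums are hypothetical numbers for illustration, not values from this study.

```python
def straddle_pnl_at_expiration(stock_price, strike, call_premium, put_premium):
    """P&L per share of a long straddle held to expiration: the call pays off
    above the strike, the put below it, against the total premium paid."""
    intrinsic = max(stock_price - strike, 0.0) + max(strike - stock_price, 0.0)
    return intrinsic - (call_premium + put_premium)

# A hypothetical XYZ $100 straddle costing $8 in total premium breaks even
# at $92 and $108; it profits on a large move in either direction:
for price in (85.0, 100.0, 115.0):
    print(price, straddle_pnl_at_expiration(price, 100.0, 4.0, 4.0))
```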
Price volatility can be described by several common technical
indicators including ADX, average-true-range, standard deviation,
and statistical volatility (also called historical volatility). Volatility
has been observed to be mean-reverting. Periods of abnormally
high or low short-term price volatility are followed by price volatility that is closer to the long-term price volatility of the underlying
asset.(1,3) A short-term drop in price volatility (volatility implosion)
can be reliably expected to be followed by a sudden volatility increase (volatility explosion). Connors et al. have shown that multiple days of short-term volatility implosion are a predictor of a strong price move.(1,2)
The volatility implosion does not predict the direction of the
impending price move, but only that there is a high probability
that the underlying asset is going to move away from its current
price and by a significant amount. In addition, the volatility implosion does not predict when (how quickly) the explosion price move
will develop. We can predict which direction the price of the stock,
commodity, or market is not going to move with a high degree of
probability. It most likely will not move side-ways indefinitely. Knowing this, one can devise a trading strategy that is able to profit, or at
least not lose money, if the stock moves quickly higher or lower, such as the straddle strategy described above.
In the option straddle strategy described above (e.g., XYZ $100
March 1999 straddle), as the price of the underlying asset moves
away from the option's strike price in either direction, the option
that is gaining in value will increase at a greater rate than the opposing option that is losing value. The position is said to be gamma
positive in both directions. The straddle will lose if the price of the
asset stays at or near the strike prices of the options, i.e., the stock moves sideways. The straddle position deteriorates because of the continued decrease in the volatility of the underlying asset, plus the time-decay of the option's value as it approaches expiration.
This study was designed to explore the potential investment returns that could be obtained using the basic option straddle strategy. At PRISM Trading Advisors, Inc., this strategy has been successfully implemented to generate superior returns at lower risk
than traditional investment portfolio benchmarks.

Methods and Materials


The 30 Dow Jones Industrial stocks from November 1, 1993,
through May 30, 1998, were chosen for this study. Delta neutral/
gamma positive straddle positions were initiated on the opening
price of the stock after the near-term historical volatility of the stock
had significantly imploded relative to its longer-term historical volatility. Any signals generated in the same stock before the 6-week termination date of a prior trade were ignored. On the date of calculation, the options prices were determined with the actual implied volatility using the Black-Scholes model, assuming moderate slippage. All trades were equally weighted. The values of the options positions were calculated based on the closing stock price at the 2-, 4-, and 6-week periods respectively. Two trading systems
were evaluated. In the first system (time-based system), time was
the sole determinant used to determine when the option positions
would be closed out. In the second trading system (money management system), simple money management rules were added to
reduce draw-downs and to lock-in profits in profitable trades.
Given the wide variability of brokerage fees, the results are presented without commission costs deducted.
Results
A total of 280 trades were generated between November 1, 1993,
and May 30, 1998. For the time-based trading system (trading system
1), the 2-week, 4-week, and 6-week cumulative returns were -191.9%, +334.7%, and -84.3%, and the average returns per trade were -0.69%, +1.20%, and -0.30% respectively. For the money management
trading system (trading system 2), the 4-week and 6-week cumulative
returns were +993.4%, and +1188.6% and the average return per
trade was +3.55% and +4.25% respectively. The use of a simple
money management system significantly reduced the draw-downs
of the system.
Conclusions
The simple time-based volatility trading strategy produced a
positive return holding the options for four weeks. This simple
straddle-based options strategy had significant draw-downs that
preclude it as a viable trading strategy without modifications. The
addition of some very simple money management rules significantly
improved the returns while simultaneously decreasing the drawdowns. This volatility-based, market-neutral, delta-neutral (gamma
positive) trading strategy yielded a very substantial positive return
across a large number of large-cap stocks and across a broad five
year period. These results demonstrate the potential positive returns that can be obtained from a market-neutral/delta-neutral
strategy. The benefit of a market-neutral strategy as demonstrated
here is of significant importance to institutional portfolio managers in search of non-correlated asset classes.


METHODS AND MATERIALS

System 1 (Time-Based Strategy)
To test the robustness of this trading strategy, the Dow 30 Industrial stocks from November 1, 1993, through May 31, 1998, were chosen for this study. They were chosen because they are a well-known group of stocks that have been designed to represent the
market at large. Volatility is defined by the statistical price volatility formula: s.v. = s.d.{log(c/c[1]), n} * sqrt(365), where c is today's close, c[1] is the previous day's close, and n is the lookback in days. Statistical (or historical) price volatility can be descriptively defined as the standard deviation of the day-to-day price change using a log-normal distribution, stated as an annualized percentage. Detailed information on statistical volatility is available from the references.(1,2,3)
Rule 1: The 6-day s.v. is 50% or less of the 90-day s.v.
Rule 2: The 10-day s.v. is 50% or less of the 90-day s.v.
Rule 3: Both Rule 1 and Rule 2 must be satisfied to initiate the trade.

Thus, in this study a volatility implosion was defined as the 6-day and 10-day historical volatilities both being 50% or less of the 90-day historical volatility. When this condition was met, a signal to initiate a straddle position was taken the following trading day.
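A minimal sketch of this entry screen, assuming a simple list of daily closing prices (the study's actual implementation is not published):

```python
import numpy as np

def statistical_volatility(closes, n):
    """Annualized statistical (historical) volatility over the last n days:
    the standard deviation of the daily log price changes, log(c/c[1]),
    scaled by the square root of 365."""
    log_changes = np.diff(np.log(closes))[-n:]
    return float(np.std(log_changes)) * np.sqrt(365.0)

def volatility_implosion(closes):
    """Rules 1-3: true when both the 6-day and 10-day volatilities are 50%
    or less of the 90-day volatility; the straddle would then be initiated
    on the next trading day's open."""
    sv90 = statistical_volatility(closes, 90)
    return (statistical_volatility(closes, 6) <= 0.5 * sv90
            and statistical_volatility(closes, 10) <= 0.5 * sv90)
```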
The Black-Scholes model was used to calculate the options prices
that were used to establish the straddle positions. The opening
price of the stock, the actual implied volatility, and the yield of the
90-day U.S. Treasury Bill were used to calculate the price of the
options. The professional software package OpVue 5 version 1.12
(OpVue Systems International) was used to calculate the prices of
the options assuming a moderate amount of slippage. For the purposes of this analysis, it was assumed that each trade was equally
weighted and that an equal dollar amount was invested into each
trade. Based on the closing stock price, the value of the option
straddle positions were then calculated using the same method
described above after 2 weeks, 4 weeks, and 6 weeks respectively.
Any trading signals generated in a stock with a current open option straddle position before the end of the 6-week open trade period were ignored. To minimize the effect of time decay and volatility, options with greater than 75 days to expiration were used to
establish the straddle positions. The positions were closed out at
the end of the 6-week time period with more than 30 days left until
expiration. To further minimize the effect of volatility, options were
purchased at or near the money. Given the current large variability of brokerage fees, the results were calculated without deducting commission costs.

System 2 (Money Management Strategy)

A second trading strategy was explored. It was identical to the first trading strategy except that a set of simple money management rules was added. The rules were designed to (1) cut losses short, (2) allow profits to run, and (3) lock in profits.

Rule #1: A position was closed immediately if a 10% loss occurred.
Rule #2: If a 5% profit (or greater) was generated, then a trailing stop of one-half (50%) of the maximum open profit achieved by the position was placed, and the position was closed if the 50% trailing stop was violated.
Rule #3: If neither Rule #1 nor Rule #2 was triggered, the position was closed out after either 4 weeks or 6 weeks.
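A minimal sketch of these three exit rules, assuming the straddle is marked to market as a sequence of position values (the study evaluated positions weekly); the function and parameter names are illustrative:

```python
def manage_straddle(position_values, entry_value, stop_loss=0.10,
                    profit_trigger=0.05, trailing_fraction=0.5):
    """Walk a series of marked-to-market straddle values and apply the exit
    rules: Rule #1, a 10% stop loss; Rule #2, a 50% trailing stop on the
    maximum open profit once open profit exceeds 5%; Rule #3, a time stop
    when the series (4 or 6 weeks of values) runs out."""
    profit = 0.0
    peak_profit = 0.0
    for value in position_values:
        profit = (value - entry_value) / entry_value
        peak_profit = max(peak_profit, profit)
        if profit <= -stop_loss:                    # Rule #1: cut losses
            return profit
        if peak_profit >= profit_trigger and profit <= trailing_fraction * peak_profit:
            return profit                           # Rule #2: trailing stop
    return profit                                   # Rule #3: time stop
```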


RESULTS

System 1 (Time-Based Strategy)

A total of 280 trades were generated between November 1, 1993 and May 30, 1998. Numerous parameters of the 280 trades were analyzed. The results are summarized in Table 1. The 2-week, 4-week, and 6-week cumulative returns were -191.9%, +334.7%, and -84.3% respectively and are shown in Figure 1. The return of the DJIA over the same time period was +241.8% (3,680.59 to 8,899.95). The maximum draw-downs for the 2-week, 4-week, and 6-week series were -424.3% (November 12, 1993 - April 28, 1995), -450.8% (November 8, 1993 - May 17, 1995), and -763.3% (December 6, 1993 - May 19, 1995). The maximum draw-ups for the 2-week, 4-week, and 6-week series were +373.9% (April 7, 1995 - July 1, 1997), +933.2% (April 18, 1995 - November 11, 1997), and +948.2% (April 7, 1995 - November 17, 1997).
System 2 (Money Management Strategy)
A total of 280 trades were generated between November 1, 1993
and May 30, 1998. Numerous parameters of the 280 trades were
analyzed. The results are summarized in Table 2. The 4-week, and
6-week cumulative returns were +993.4%, and +1188.6% respectively, and are shown in Figure 2. The return of the DJIA over the
same time period was +241.8% (3,680.59 to 8,899.95). The maximum draw-downs for the 4-week and 6-week series were -188.1% (August 5, 1994 - February 23, 1995) and -246.2% (August 5, 1994 - February 23, 1995). The maximum draw-ups for the 4-week and 6-week series were +641.4% (September 20, 1996 - October 20, 1997) and +704.1% (September 20, 1996 - October 20, 1997).

DISCUSSION
It has been observed that short-term volatility will have a tendency to revert back to its longer-term mean.(1,3) Connors et.al.(1)
have published the Connors-Hayward Historical Volatility System
and showed that when the ratio of the 10-day versus the 100-day
historical volatilities was 0.5 or less, there was a tendency for strong
stock price moves to follow.
In this study, PRISM Trading Advisors, Inc., have confirmed the
phenomenon of volatility mean reversion by presenting the first
large scale option-based analysis while maintaining a strict marketneutral/delta-neutral (gamma positive) trading program. We have
shown that a significant price move occurs 75% of the time following a short-term volatility implosion (as defined in the Methods
and Materials section).
For this analysis we chose a relatively straightforward strategy:
to purchase a straddle. A straddle is the proper balance of put and call options that produces a trade with no directional bias. A straddle is said to be delta neutral and will generate the same profit whether the underlying asset's price moves higher or lower. As the asset
price moves away from its initial price one option will increase in
value while the other opposing option will decrease in value. A
profit is generated because the option that is increasing in value
will increase in value at a faster rate than the opposing option is
decreasing in value. The straddle is said to be gamma positive in
both directions.
This option strategy has a defined maximum risk that is known at the initiation of the trade. This maximum risk of
loss is limited to the initial purchase costs of the straddle (premium
costs of both put and call options). There is no margin call with
this straddle strategy. There is an additional way that this strategy
can profit. Since the options are purchased at the time there has


been an acute rapid decrease in volatility, one should theoretically


be purchasing undervalued options. As the price of the asset
subsequently experiences a sharp price move, there will be an associated increase in volatility which will increase the value of all the
options that make up the straddle position. The side of the straddle
which is increasing in value will increase at an even faster rate, while
the opposite side of the straddle which is decreasing in value will
decrease in value at a slower rate. So as not to further complicate the analysis, the exit strategy for the first system (time-based strategy) was even more basic, using a time-stop exit criterion.
Prior to the study, it was our impression that a 4-week time period would be the optimal choice of the three. This is what was seen.
The 4-week exit produced a positive return over the study period
(334.7%). However, the use of a 2-week time-stop was frequently
not sufficient time to allow for the anticipated price move. Note
that in Figure 1, the 2-week maximum open-profit draw-up was significantly less than the draw-ups for both the 4-week and 6-week
time-stops (373.9% vs. 933.2% and 948.2% respectively). The 6-week strategy was too long, allowing for a substantially greater maximum draw-down secondary to the adverse effects of time decay, volatility, and price regression back toward the stock's initial starting price, which eroded the value of the straddle position when compared to the 2-week and 4-week strategies. All other aspects of the
trades of the three exit strategies were similar. There were no significant differences in the percentage of winning/losing trades or in the number of consecutive winning or losing trades.
A second system using a simple set of money management rules was tested (the money management system). These rules were designed to close out non-performing trades early, before they could turn into large losses, and to keep performing positions open as long as they continued to generate profits. These goals were accomplished by closing out any position if its value decreased to 90% of its initial value (a 10% loss). A position with open profits had a 50% trailing stop of the maximum open profit achieved by the position any time open profits exceeded 5%. If neither of these two conditions occurred, the position was closed out at the end of six weeks.

As predicted, the 6-week money management strategy produced both a greater total return (+1,188.6% versus +993.4%) and a slightly greater maximum draw-down than the 4-week money management strategy. By closing positions when a loss of 10% had occurred, we were able to significantly decrease the amount of losses incurred. This is evidenced by the maximum draw-down for the 6-week positions decreasing significantly from -763.3% with no money management to -246.2% with the above money management rules implemented. The total returns were also markedly improved, increasing from -84.3% to +1,188.6%.
While the first trading system (time-based strategy) study demonstrated that this trading strategy with a 4-week time-stop exit produced a positive return, it is not sufficient as a stand-alone system for real-time trading. It does, however, indicate that this strategy can be used as the foundation to design a viable trading system that can capture the majority of the gains while simultaneously eliminating the majority of the losses. There are almost an infinite number of possibilities one could explore to achieve this goal. One such method, and the one explored in this paper, was the application of a simple set of money management rules. As
discussed above, this dramatically improved the overall returns while
simultaneously decreasing the draw-downs experienced in the first
strategy (time-based strategy). Other possibilities include the addition of a second entry filter such as a momentum indicator like


the RSI, ROC or MACD indicator. One could design a more sophisticated exit strategy such as exiting the position if the stock
price exceeds a predetermined price objective as defined by price
channels, parabolic functions, etc. An additional possibility would
be to re-establish a nondirectional options position at a predetermined price objective, thereby locking in all the profits generated up to that point. The myriad of options-based strategies available to adjust back to a delta neutral position based on technical
indicators and predetermined price objectives are beyond the scope
of this paper.
Although both systems had positive expectations based on 280
trades, there are several limitations of the study design. Although
moderate slippage was used in all the calculations, the robustness
of this study might have been improved if access to real-time stock
option bid-ask prices were available for all of the trades investigated. Unfortunately, such a large, detailed database is not readily
available. Given that the real-time bid-ask prices were not available, the use of the Black-Scholes formula with the known historical inputs (stock price, implied volatility, 90-day T-Bill yield) is an
acceptable alternative thereby minimizing any pricing differences
between the actual and theoretical option prices systematically
throughout the time period used in the study.
The current study revealed that a simple straddle options-based strategy designed to exploit a sudden implosion of a stock's volatility, with time as the only exit criterion, produced draw-downs that preclude it as a viable trading strategy in its own right. However, this simple strategy had a positive expectation of generating superior returns, and therefore can be used as the basis to develop trading strategies capable of producing superior returns without the need to correctly predict the direction of a given stock, commodity, or market being traded. The addition of some simple money management rules dramatically improved the overall returns while simultaneously decreasing the excessive draw-downs that plagued the original trading strategy, thereby transforming it into an applicable trading system for everyday use. This volatility-based, delta-neutral strategy is also independent of market direction. A market-neutral strategy and portfolio may be considered as a separate asset class by portfolio managers in the efficient allocation of their clients' investment portfolios to boost returns while simultaneously decreasing their clients' risk exposure.
In conclusion, this is the first large-scale trading research study
to be shared with the trading public that clearly demonstrated how
the phenomenon of price volatility mean-reversion can be exploited
by using an options-based delta-neutral approach. Options-based strategies using price, time and volatility factors to further maximize positive expectancy represent active areas of real-time trading research at PRISM Trading Advisors, Inc. These results will be the
subject of future articles.

REFERENCES
1. Connors, L.A., and Hayward, B.E., Investment Secrets of a Hedge Fund Manager, Probus Publishing, 1995.
2. Connors, L.A., Professional Traders Journal, Oceanview Financial Research, Malibu, CA, March 1996, Volume 1, Issue 1.
3. Natenberg, S., Option Volatility and Pricing: Advanced Trading Strategies and Techniques, McGraw-Hill, 1994.


TABLE 1: System 1 (Time-Based System)

                                  2 Week     4 Week     6 Week
Total Return                     -191.9%    +334.7%    -84.3%
Average Return per Trade          -0.69%     +1.20%    -0.30%
Maximum Draw-Up                  +373.9%    +933.2%   +948.2%
Maximum Draw-Down                -424.3%    -450.8%   -763.3%
Total # Winning Trades            91         106        100
Total # Losing Trades             185        174        177
Greatest Gain in One Trade       +87.8%     +132.1%   +109.0%
Greatest Loss in One Trade       -48.0%     -51.8%     -59.0%

TABLE 2: System 2 (Money Management System)

                                  4 Week     6 Week
Total Return                     +993.4%   +1188.6%
Average Return per Trade          +3.55%     +4.25%
Maximum Draw-Up                  +641.4%    +704.1%
Maximum Draw-Down                -188.1%    -246.2%
Total # Winning Trades            120        117
Total # Losing Trades             160        161
Max. # Consecutive Losses         14         13
Greatest Gain in One Trade       +132.1%   +109.0%
Greatest Loss in One Trade       -10.0%     -10.0%

Figure 1: System 1 (Time-Based System)
Figure 2: System 2 (Money Management System)

JEFF MORTON, MD, CMT

Jeff Morton is Chief Technical Analyst & Executive Vice President at PRISM Trading Advisors. He received his bachelor's degree from Stanford University in 1981 and his medical degree from the Yale School of Medicine in 1985. He began his career as a technical analyst in 1992 as a consultant for Schea Capital Management. In 1995, he helped start PRISM Trading Advisors, Inc., a Houston, Texas-based proprietary trading firm. His major areas of expertise include options strategies, volatility trading, and trading strategies based on ADX, ATR, and point & figure charting. His other duties at PRISM Trading Advisors, Inc. include compliance/due diligence, trader education, and journal publications. Dr. Morton is very active in the MTA. He currently serves as an associate editor of the MTA Journal and on the accreditation committee.


MECHANICAL TRADING SYSTEM VS. THE S&P 100 INDEX


Can a Mechanical Trading System Based on the
Four-Week Rule Beat the S&P 100 Index?

Art Ruszkowski, CMT, M.Sc.

PREFACE

In their quest to outperform the Index, equity fund managers must solve a four-piece puzzle: which stocks should they buy, when should they buy them, when should they sell them, and how much capital should they allocate to each stock. The performance of different fund managers varies greatly. Some are able to outperform the Index, and others cannot. This paper investigates the question of whether technical analysis in its most simplistic form, along with simple money management, can be used to outperform the Index.

THE FOUR-WEEK RULE

Most market technicians will agree that the simplest technical market analysis rule is the Four-Week Rule. The Four-Week Rule (4WR) was originally developed for application to futures markets by Richard Donchian, and can be expressed as follows:

Cover shorts and go long when the price exceeds the highs of the four preceding full calendar weeks and, conversely, liquidate longs and go short when the price falls below the lows of the four preceding full calendar weeks.

The rationale behind this rule is that the four-week or 20-day trading cycle is a dominant cycle that influences all markets. For the purpose of further discussion, let's modify the Four-Week Rule system as follows:

Buy if the price exceeds the highs of the four preceding full calendar weeks and liquidate open positions when the price falls below the lows of the four preceding full calendar weeks.

With this modification, the 4WR (no shorts) system can be easily applied by many equity fund managers, because very few of them can go short. Let us formally define our modified mechanical system:

System Code: NS-20BS-EQ (No Shorts, 20 Days for Buy and Sell Rules, Equally Allocate Capital)

1. Money management rule - Equal allocation rule: Use $100,000 of capital in one hundred S&P 100 Index stocks, allocating an equal amount of money to each stock ($1,000).
2. Technical analysis rule - Buy: Buy a stock if its closing price is higher than the high of the last 20 trading days.
3. Technical analysis rule - Sell: Sell a stock if its closing price is lower than the low of the last 20 trading days.
4. Money management rule - Redistribute profits equally: If the profit from the sale of a stock is greater than the initial allocation of capital to this stock, then that profit is equally distributed among all stocks which are in a potential Buy position.
5. Money management rule - Earn interest on cash: All cash on hand earns fixed-rate interest at 5% per annum.
6. Money management rule - Transaction costs: A fixed transaction cost of $50 is applied to each transaction (this cost represents a fair average of commissions and slippage).
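A minimal sketch of the Buy and Sell rules (Rules 2 and 3), parameterized by the number of days x; it assumes the "high" and "low" of the last x trading days are taken over closing prices, which the rules do not specify:

```python
def breakout_signal(closes, x=20):
    """Modified Four-Week Rule: 'buy' when today's close is higher than the
    high of the last x trading days, 'sell' when it is lower than the low
    of the last x trading days, otherwise 'hold' (no short positions)."""
    today = closes[-1]
    window = closes[-(x + 1):-1]   # the x closes preceding today
    if today > max(window):
        return "buy"
    if today < min(window):
        return "sell"
    return "hold"
```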

A custom computer software program was designed and created to test this system over the time frame from January 1, 1984 to January 1, 1989. Over this time frame the following performance statistics were calculated for the NS-20BS-EQ system and were compared to the performance statistics of the Index (S&P 100) with a Buy-and-Hold Strategy.

Performance Statistics Measured

The following performance statistics were measured for each case (definitions are included in Appendix 1):
Average Annual Compounded Return (R)
Sharpe Ratio (SR)
Return Retracement Ratio (RRR)
Maximum Loss (ML)

Results: System NS-20BS-EQ
Time Frame: 01/01/1984 - 01/01/1989; Money Allocation: Equal; Days used for Buy and Sell rules: 20

                                     System    Index
Average Annual Compounded Return      7.58%    9.00%
Sharpe Ratio                          4.15%    3.85%
Return Retracement Ratio              0.45     0.34
Maximum Loss                          0.36     0.58

It is clear that the above system performs worse than the Buy-and-Hold Strategy of the S&P 100 Index. There are several choices to improve the performance of the system by modifying system parameters. The most natural change is to search for better performance by modifying the number of days used for the Buy and Sell rules. The performance of the NS-xBS-EQ system, where x is the number of days for the Buy and Sell rules, was tested for x between 10 days and 90 days. Results of the test are provided in Appendix 2.1. Testing proved that the best performing system was the one with 50 days used for the Buy and Sell rules.

Results: System NS-50BS-EQ
Time Frame: 01/01/1984 - 01/01/1989; Money Allocation: Equal; Days used for Buy and Sell rules: 50

                                     System    Index
Average Annual Compounded Return      7.70%    9.00%
Sharpe Ratio                          4.49%    3.85%
Return Retracement Ratio              0.42     0.34
Maximum Loss                          0.40     0.58

Still the performance of the above system is not very impressive, so let's consider further research. Let's modify Rule 1 from the system definition and replace it with the following rule:

1A. Money management rule - Proportional allocation rule: Use $100,000 of capital in one hundred S&P 100 Index stocks, allocating money to each stock according to its percentage participation in the index at the starting date of the testing period (January 1, 1984).

So we consider the new system:

System Code: NS-xBS-P (No Shorts, x Days for Buy and Sell Rules, Proportionally Allocate Capital)

The new system consists of Rule 1A and Rules 2-6. The performance of the NS-xBS-P system, where x is the number of days for the Buy and Sell rules, was tested for x between 10 days and 90 days. Results of the test are provided in Appendix 2.2. Testing proved that the best-performing system was the one with 50 days used for the Buy and Sell rules.

Results: System NS-50BS-P
Time Frame: 01/01/1984 - 01/01/1989; Money Allocation: Proportional; Days used for Buy and Sell rules: 50

                                     System    Index
Average Annual Compounded Return     10.57%    9.00%
Sharpe Ratio                          4.44%    3.85%
Return Retracement Ratio              0.51     0.34
Maximum Loss                          0.48     0.58
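Rule 1A changes only how the starting capital is split across the one hundred stocks. A small sketch of the two allocation schemes; the tickers and weights below are hypothetical illustrations, not the actual January 1, 1984 index weights:

```python
def allocate(capital, index_weights, proportional=True):
    """Initial capital allocation: equal dollar amounts per stock for the
    -EQ systems, or amounts proportional to each stock's percentage
    participation in the index for the -P systems."""
    if proportional:
        return {stock: capital * weight for stock, weight in index_weights.items()}
    return {stock: capital / len(index_weights) for stock in index_weights}

# Hypothetical weights (a real S&P 100 weight vector would have 100 entries
# summing to 1.0, taken as of the start of the testing period):
weights = {"IBM": 0.08, "GE": 0.05, "XON": 0.06}
print(allocate(100_000, weights, proportional=True))
```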

The last system outperforms the S&P 100 Buy-and-Hold Strategy, but let's consider further research. So far, modifications were limited to systems with different numbers of days for the Buy and Sell rules and to using different initial allocations of the capital, equal and proportional. Let's consider the following hybrid of the original NS-20BS-EQ system, replacing Rule 3 with the following new rule:

3B. Money management Stop Loss Rule - Sell losing positions: Sell a stock if it is losing more than y% of its buy price, where y is a system parameter.

Let's name this system:

System Code: NS-xB-P-Sy (No Shorts, x Days for Buy Rule, Proportionally Allocate Capital, Sell When Drops y%)

Only systems with proportionally allocated capital are analyzed, due to the fact that they perform better than equally allocated ones over the period considered. The performance of the NS-xB-P-Sy system, where x is the number of days for the Buy rule, was tested for x between 10 days and 90 days and for y between 10% and 70%. Results of the test are provided in Appendix 3. Testing of the NS-xB-P-Sy systems proved that the best performing system was the one with 50 days used for the Buy rule and a 25% money management stop loss.

Results: System NS-50B-P-S25
Time Frame: 01/01/1984 - 01/01/1989; Money Allocation: Proportional; Days used for Buy rule: 50; % Loss used in Rule 3B: 25%

                                     System    Index
Average Annual Compounded Return     14.78%    9.00%
Sharpe Ratio                          4.48%    3.85%
Return Retracement Ratio              0.52     0.34
Maximum Loss                          0.64     0.58
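A minimal sketch of the NS-xB-P-Sy rules for a single stock, assuming a list of daily closing prices; portfolio accounting, interest on cash and the $50 transaction cost are omitted:

```python
def ns_xb_p_sy_trades(closes, x=50, y=0.25):
    """Long-only: buy when the close exceeds the high of the last x trading
    days (Rule 2); sell when the position loses more than y of its buy
    price (Rule 3B). Returns the list of per-trade returns."""
    buy_price = None
    trades = []
    for i in range(x, len(closes)):
        close = closes[i]
        if buy_price is None:
            if close > max(closes[i - x:i]):      # x-day breakout entry
                buy_price = close
        elif close < buy_price * (1.0 - y):       # y% money management stop
            trades.append(close / buy_price - 1.0)
            buy_price = None
    return trades
```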

GRAPHS

The following graph presents the performance of one of the optimum combinations of parameters (50 days and 25%) between January 1, 1984 and January 1, 1997.

[Graph: System NS-50B-P-S25 (50 days, 25%) vs. Index, January 1984 - January 1997]

Some may argue that the good performance of the NS-50B-P-S25 system is the result of a continuous Bull market between 1984 and 1997: under such conditions, buying low and not selling unless the stock loses a significant percent of its value will always result in a winning strategy, but such a system will perform very poorly in a bear market. To investigate this claim, let's test the performance of the NS-50B-P-S25 system over the time period from August 25, 1987 to August 25, 1992. August 25, 1987 was chosen as the new start date because it is the high of the S&P 100 market before the 1987 crash. The next graph presents the performance of this test.

[Graph: System 4WR_NS_B50_P_M25 (50 days, 25%) vs. Index, August 1987 - August 1992]

The last system, which is the result of several cycles of modifications to the initial 4WR, outperforms the S&P 100 Buy-and-Hold Strategy by a good margin. To find out how time-stable the above system was, a blind test was conducted.

Blind Test Results: System (NS-50B-P-S25)

The system was tested in a new time period, between January 1, 1989 and January 1, 1994. Here is the comparison of market statistics between the Index and the NS-50B-P-S25 system in the time frame from January 1, 1989 to January 1, 1994:

                                     System    Index
Average Annual Compounded Return     13.88%    10.00%
Sharpe Ratio                          5.47%     6.92%
Return Retracement Ratio              0.59      0.49
Maximum Loss                          0.68      0.40

Comparing these values, we see that the system still outperformed the Index with respect to the Average Annual Compounded Return (by 40%) and the Return Retracement Ratio, but marginally under-performed on the other two statistics. This can be explained by comparing the monthly DMI readings in the time periods 1984-1989 and 1989-1994. In the first time period, the standard 14-month Directional Movement Index (DMI) was well above 25 (a strong trending market); however, in the second time period, the DMI was only marginally above 25 (a weak trending market). In such a time period, a trend-following system doesn't display as impressive results.

One very interesting observation worth further study is the fact that the large difference in performance was affected by the amount of money allocated to each stock. This is due to the fact that the S&P 100 Index is a capitalization-weighted index of 100 stocks. The component stocks are weighted according to the total market value of their outstanding shares. The impact of a component's price change is proportional to the stock's total market value, which is the share price times the number of shares outstanding. In other words, the S&P 100 Index can be considered a relative-strength based index. An index-based capital allocation system (NS-50B-P-S25) performs best and gives an objective measure of the validity of its trading rules, as well as its money management rules, when its performance is compared to the performance of the Index. Systems with sound trading and money management rules, as well as capital allocation based on relative strength, should in general outperform both the index and equally-allocated systems. It is worth observing that the proportionally-allocated system outperformed the equally-allocated system and the Index during the tested time periods, regardless of whether Large Cap stocks outperformed Small Cap stocks or vice versa during those periods.

CONCLUSION

It is clear that when applying simple, proven rules of technical analysis (like the 4WR), any modification to the rule (in this case, removal of the short selling option) can significantly affect the profitability of a system. It was also demonstrated that mechanical trading systems can be transformed by parameter and rule modifications so that their performance improves.

BIBLIOGRAPHY

i. John J. Murphy, Technical Analysis of the Futures Markets, New York Institute of Finance, 1986.
ii. Jack D. Schwager, Schwager on Futures - Technical Analysis, John Wiley & Sons, Inc., 1996.
iii. Carla Cavaletti, "Trading Style Wars," Futures, July 1997.

APPENDIX 1
Glossary of Terms:
Mechanical Trading System: A set of rules that can be used to generate trade signals, with trading performed according to the rules of the mechanical system. The primary benefits of mechanical trading systems are the elimination of emotions from trading, and consistency of approach and risk management. Mechanical trading systems can be classified as Trend-Following (initiating a position with the trend) and Counter-Trend (initiating a position in the opposite direction to the trend). Trend-following systems can be divided into fast and slow. A fast, more sensitive system responds quickly to signs of trend reversal and will tend to maximize profit on valid signals, but will also generate far more false signals. A good trend-following system should be neither too fast nor too slow. ii
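The x-day breakout logic behind the rules tested in this article (the Four Week Rule uses roughly 20 trading days) can be sketched as follows. This is an illustrative reading of the definition above, not the author's test code.

```python
# Minimal sketch of an x-day breakout rule: buy when today's close
# exceeds the highest close of the prior x days; exit when it falls
# below the lowest close of the prior x days. Illustrative only.

def breakout_signals(closes, x=20):
    signals = []                      # list of (index, "buy" | "sell")
    long_position = False
    for i in range(x, len(closes)):
        window = closes[i - x:i]      # the prior x days, excluding today
        if not long_position and closes[i] > max(window):
            signals.append((i, "buy"))
            long_position = True
        elif long_position and closes[i] < min(window):
            signals.append((i, "sell"))
            long_position = False
    return signals
```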
Trading according to signals generated by a mechanical trading system is called systematic trading, the opposite of discretionary trading. Discretionary traders claim that emotions, which are excluded from systems trading, offer an edge; systems traders claim, on the contrary, that the exclusion of emotion is the advantage.

The RRR represents a better return/risk measure than the Sharpe ratio. ii


Sharpe Ratio: SR = E/sdv
E - expected return
sdv - standard deviation of returns ii
Expected Net Profit Per Trade: ENPPT = P*AP - L*AL
P - percent of total trades that are profitable
L - percent of total trades that are at a net loss
AP - average net profit of profitable trades
AL - average net loss of losing trades ii
Maximum Loss: ML = max(MRSLi), i <= n; this represents the worst-case possibility ii
Trade-Based Profit/Loss Ratio: TBPLR = (P*AP)/(L*AL) ii
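A small sketch of these two trade-based statistics, assuming a plain list of per-trade net profits in dollars. The numbers are invented; this illustrates the definitions, it is not the article's code.

```python
def trade_stats(trade_profits):
    """ENPPT = P*AP - L*AL and TBPLR = (P*AP)/(L*AL), where P and L
    are the fractions of winning and losing trades, and AP and AL are
    the average win and average loss (AL taken as a positive number)."""
    wins = [t for t in trade_profits if t > 0]
    losses = [-t for t in trade_profits if t < 0]
    p = len(wins) / len(trade_profits)
    l = len(losses) / len(trade_profits)
    ap = sum(wins) / len(wins)
    al = sum(losses) / len(losses)
    return p * ap - l * al, (p * ap) / (l * al)

# wins average 160, losses average 62.5 -> ENPPT 71.0, TBPLR 3.84
print(trade_stats([120.0, -45.0, 300.0, -80.0, 60.0]))
```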


APPENDIX 2.1
Results for NS-xBS-EQ, where x represents the number of days used in the Buy and Sell rules, 10 <= x <= 90.
To find the optimal number of days used in the Buy and Sell rules, follow this procedure:
1. In each column select the five best-performing results according to the column definition (for example, in the case of the average annual compounded return we select the five highest numbers, but in the case of maximum loss the five lowest numbers). Mark the results in bold typeface.
2. Find the rows that are marked in every column and mark those rows in italic typeface.
3. From the selected rows, choose the one with the optimal results.

APPENDIX 3
To find the best-performing system, we follow this procedure:
1. For each column in the first table, select the five best-performing rows (see the sketch below).
2. Select the best-performing rows from each column in the second table only if the row was also selected in the first table.
3. Repeat step two for each subsequent table.
4. Select the optimal cell from the remaining cells.
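The column-selection step can be expressed compactly. The following sketch assumes the results are held in a dictionary keyed by the number of days; it illustrates the procedure and is not the author's implementation.

```python
# Keep only the rows (parameter values) that rank in the top k of
# every metric column: highest for return-type metrics, lowest for
# loss-type metrics such as Maximum Loss. Illustrative only.

def best_rows(results, higher_is_better, lower_is_better, k=5):
    days = list(results)
    surviving = set(days)
    for metric in higher_is_better + lower_is_better:
        reverse = metric in higher_is_better
        ranked = sorted(days, key=lambda d: results[d][metric], reverse=reverse)
        surviving &= set(ranked[:k])
    return sorted(surviving)

results = {
    20: {"R": 7.58, "ML": 36}, 30: {"R": 8.46, "ML": 40},
    40: {"R": 8.04, "ML": 40}, 50: {"R": 7.70, "ML": 40},
}
print(best_rows(results, higher_is_better=["R"], lower_is_better=["ML"], k=3))
```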

Results for NS-xBS-EQ:

Days   R (%)   Sharpe   RRR    ENPPT ($)   TBPLR   ML (%)
10     3.93    2.82     0.30      96.20     ...     ...
20     7.58    4.15     0.45     296.53     1.81     36
30     8.46    4.27     0.48     471.96     2.05     40
40     8.04    4.36     0.44     559.21     2.09     40
50     7.70    4.49     ...      638.58     1.34     40
60     7.84    4.50     0.43     770.62     2.13     40
70     7.76    4.60     0.42     859.30     2.41     41
80     7.59    4.64     0.40     930.93     2.49     41
90     7.56    4.64     0.39    1011.86     2.59     42

Columns: number of days for the Buy and Sell rules; Average Annual Compounded Return (%); Sharpe Ratio in %; Return Retracement Ratio; Expected Net Profit Per Trade ($); Trade-Based Profit/Loss Ratio; Maximum Loss (%).
The optimal is the system with 50 days used for the Buy and Sell rules.

Start: 01/01/1984   End: 01/01/1989

Days   10%    15%    20%    25%    30%    35%    40%    50%    60%    70%
10    14.47  14.43  14.84  14.82  14.95  15.04  15.05  15.11  15.17  15.16
20    13.79  14.40  14.76  14.92  14.95  15.09  15.14  15.22  15.26  15.25
30    13.96  14.57  14.68  14.81  14.94  15.08  15.13  15.20  15.26  15.25
40    13.83  14.03  14.36  14.56  14.68  14.70  14.78  14.87  14.92  14.94
50    13.74  14.00  14.23  14.56  14.68  14.67  14.78  14.88  14.92  14.94
60    13.32  13.94  14.24  14.38  14.33  14.52  14.51  14.73  14.73  14.50
70    13.59  14.08  14.38  14.35  14.39  14.42  14.40  14.63  14.62  14.65
80    13.30  14.08  14.18  14.16  14.22  14.21  14.21  14.42  14.41  14.43
90    13.23  14.00  14.10  14.07  14.17  14.12  14.16  14.36  14.35  14.37

This table shows the Average Annual Compounded Return (R) in %. The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule.

APPENDIX 2.2
Results for NS-xBS-P, where x represents the number of days used in the Buy and Sell rules, 10 <= x <= 90.

System: NS-xB-P-Sy (Sell Rule with Proportional Money Allocation):

Days   R (%)   Sharpe   RRR    ENPPT ($)   TBPLR   ML (%)
10     4.48    3.07     0.30     109.72     1.40     29
20     8.85    4.18     0.48     361.66     2.04     40
30    10.05    4.37     0.52     594.16     2.44     44
40    10.15    4.37     0.52     763.29     2.73     46
50    10.57    4.44     0.53     979.46     2.95     48
60    10.90    4.38     ...     1206.84     3.43     48
70    10.96    4.55     0.51    1206.84     3.85     49
80    10.62    4.60     0.48    1479.84     3.91     50
90    10.30    4.61     0.47    1585.23     4.19     50

Columns: number of days for the Buy and Sell rules; Average Annual Compounded Return (%); Sharpe Ratio in %; Return Retracement Ratio; Expected Net Profit Per Trade ($); Trade-Based Profit/Loss Ratio; Maximum Loss (%).

The optimal is the system with 50 days used for the Buy and Sell rules.

[Table: Sharpe Ratio in %. The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule; the cells lie between roughly 4.2 and 4.6.]

[Table: Return Retracement Ratio. The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule; the cells lie between roughly 0.48 and 0.54.]

This table shows the Expected Net Profit Per Trade (ENPPT) in $. The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule.

Days    10%      15%      20%      25%      30%      35%      40%      50%       60%       70%
10    3511.64  4507.22  5905.14  6597.08  7739.21  8520.50  8823.50  9802.04  10245.87  10447.28
20    3385.28  4695.31  5949.73  7252.84  7716.12  8677.11  9264.20  9869.52  10502.81  10498.79
30    3709.77  5067.45  5923.62  7009.01  7807.08  8492.72  9224.62  9817.10  10472.16  10465.79
40    3805.97  4834.92  5955.58  7140.84  7725.92  8265.74  8924.55  9607.69  10041.16  10156.11
50    3925.73  5107.18  6188.47  7345.42  7969.82  8372.83  9164.75  9608.21  10030.17  10149.77
60    3710.81  5241.00  6428.99  7492.27  7966.95  8540.02  8931.42  9557.56   9840.63   9961.49
70    4053.34  5541.77  6717.81  7384.87  8043.70  8427.56  8805.56  9528.22   9711.08   9830.62
80    4078.14  5771.52  6622.01  7355.29  8109.93  8316.63  8708.21  9420.13   9502.17   9619.90
90    4020.47  5821.46  6611.41  7275.61  8127.05  8230.48  8736.40  9347.20   9432.68   9544.71

This table shows the Trade-Based Profit/Loss Ratio (TBPLR). The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule. (The remaining cells in rows 50 through 90 fall between 21.31 and 111.06.)

Days   10%    15%    20%    25%    30%    35%    40%    50%    60%    70%
10     9.76  10.04  14.62  16.25  19.87  25.01  27.07  39.85  62.18  81.81
20     8.95  11.17  15.42  21.44  22.65  26.89  32.67  41.55  95.37  93.95
30    10.31  13.94  15.41  19.61  22.47  26.39  32.00  40.02  93.59  92.20
40    10.24  12.44  15.63  20.79  23.31  25.28  31.02  37.71  60.18  90.27
50    10.15  12.93  16.78   ...    ...    ...    ...    ...    ...    ...
60     9.11  13.88  19.17   ...    ...    ...    ...    ...    ...    ...
70    11.45  16.57  23.84   ...    ...    ...    ...    ...    ...    ...
80    11.08  19.83  23.73   ...    ...    ...    ...    ...    ...    ...
90    11.36  20.05  24.62   ...    ...    ...    ...    ...    ...    ...
This table shows the Maximum Loss (ML) in %. The columns are the percentages used in the Sell Losing Positions Rule and the rows are the number of days used in the Buy rule.

Days   10%   15%   20%   25%   30%   35%   40%   50%   60%   70%
10     0.65  0.66  0.66  0.66  0.66  0.66  0.67  0.67  0.67  0.67
20     0.63  0.65  0.65  0.66  0.66  0.66  0.66  0.66  0.67  0.67
30     0.62  0.64  0.64  0.65  0.65  0.65  0.66  0.66  0.66  0.66
40     0.61  0.62  0.63  0.64  0.64  0.64  0.65  0.65  0.65  0.65
50     0.61  0.62  0.63  0.63  0.64  0.64  0.64  0.64  0.64  0.64
60     0.60  0.61  0.62  0.63  0.63  0.63  0.63  0.64  0.64  0.64
70     0.60  0.61  0.61  0.62  0.62  0.62  0.62  0.62  0.62  0.62
80     0.59  0.61  0.61  0.61  0.61  0.61  0.61  0.61  0.61  0.61
90     0.59  0.60  0.61  0.61  0.61  0.61  0.61  0.61  0.61  0.61

The best-performing system is the one with 50 days used for the Buy rule and a 25% Sell Losing Positions rule.
ART RUSZKOWSKI, CMT, M.SC.
Art Ruszkowski combines his strong scientific background with knowledge and practice of technical analysis, specializing in quantitative analysis and mechanical trading system design and testing. He is currently a partner in a private investment fund, and is responsible for the development of models, studies, portfolio selections and money management strategies. Art is a member of the MTA and the CSTA.



SCIENCE IS REVEALING THE MECHANISM OF THE WAVE PRINCIPLE
Robert R. Prechter, Jr., CMT
It is one thing to say that the Wave Principle makes sense in the context of nature and its growth forms. It is another to postulate a hypothesis about its mechanism. The biological and behavioral sciences have produced enough relevant work to make a case that unconscious paleomentational processes produce a herding impulse with Fibonacci-related tendencies in both individuals and collectives. Man's unconscious mind, in conjunction with others, is thus disposed toward producing a pattern having the properties of the Wave Principle.

THE PALEOMENTATIONAL HERDING IMPULSE
Over a lifetime of work, Paul MacLean, former head of the Laboratory for Brain Evolution at the National Institute of Mental Health, has developed a mass of evidence supporting the concept of a "triune" brain, i.e., one that is divided into three basic parts. The primitive brain stem, called the basal ganglia, which we share with animal forms as low as reptiles, controls impulses essential to survival. The limbic system, which we share with mammals, controls emotions. The neocortex, which is significantly developed only in humans, is the seat of reason. Thus, we actually have three connected minds: primal, emotional and rational. Figure 1, from MacLean's book, The Triune Brain in Evolution,1 roughly shows their physical locations.
Figure 1
The Three Sections of the Triune Brain
Source: The Triune Brain in Evolution
The neocortex is involved in the preservation of the individual by processing ideas using reason. It derives its information from the external world, and its convictions are malleable thereby. In contrast, the styles of mentation outside the cerebral cortex are unreasoning, impulsive and very rigid. The thinking done by the brain stem and limbic system is primitive and pre-rational, exactly as in animals that rely upon them.
The basal ganglia control brain functions that are often termed instinctive: the desire for security, the reaction to fear, the desire to acquire, the desire for pleasure, fighting, fleeing, territorialism, migration, hoarding, grooming, choosing a mate, breeding, the establishment of social hierarchy and the selection of leaders. More pertinent to our discussion, this bunch of nerves also controls coordinated behavior such as flocking, schooling and herding. All

these brain functions insure lifesaving or life-enhancing action under most circumstances and are fundamental to animal motivation. Due to our evolutionary background, they are integral to human motivation as well. In effect, then, portions of the brain are hardwired for certain emotional and physical patterns of reaction2 to insure survival of the species. Presumably, herding behavior, which derives from the same primitive portion of the brain, is similarly hardwired and impulsive. As one of its primitive tools of survival, then, emotional impulses from the limbic system impel a desire among individuals to seek signals from others in matters of knowledge and behavior, and therefore to align their feelings and convictions with those of the group.
There is not only a physical distinction between the neocortex and the primitive brain but a functional dissociation between them. The intellect of the neocortex and the emotional mentation of the limbic system are so independent that the limbic system has the capacity to generate out-of-context, affective feelings of conviction that we attach to our beliefs regardless of whether they are true or false.3 Feelings of certainty can be so overwhelming that they stand fast in the face of logic and contradiction. They can attach themselves to a political doctrine, a social plan, the verity of a religion, the surety of winning on the next spin of the roulette wheel, the presumed path of a financial market or any other idea.4 This tendency is so powerful that Robert Thatcher, a neuroscientist at the University of South Florida College of Medicine in Tampa, says, "The limbic system is where we live, and the cortex is basically a slave to that."5 While this may be an overstatement, a soft version of that depiction, which appears to be a minimum statement of the facts, is that most people live in the limbic system with respect to fields of knowledge and activity about which they lack either expertise or wisdom. This tendency is marked in financial markets, where most people feel lost and buffeted by forces that they cannot control or foresee.
In the 1920s, Cambridge economist A.C. Pigou connected cooperative social dynamics to booms and depressions.6 His idea is that individuals routinely correct their own errors of thought when operating alone but abdicate their responsibility to do so in matters that have strong social agreement, regardless of the egregiousness of the ideational error. In Pigou's words,
Apart altogether from the financial ties by which different businessmen are bound together, there exists among them a certain measure of psychological interdependence. A change of tone in one part of the business world diffuses itself, in a quite unreasoning manner, over other and wholly disconnected parts.7

Wall Street certainly shares aspects of a crowd, and there is abundant evidence that herding behavior exists among stock market participants. Myriad measures of market optimism and pessimism8 show that in the aggregate, such sentiments among both the public and financial professionals wax and wane concurrently with the trend and level of the market. This tendency is not simply fairly common; it is ubiquitous. Most people get virtually all of their ideas about financial markets from other people, through newspapers, television, tipsters and analysts, without checking a thing. They think, "Who am I to check? These other people are supposed to be experts." The unconscious mind says: you have too little basis upon which to exercise reason; your only alternative is to assume that the herd knows where it is going.


In 1987, three researchers from the University of Arizona and Indiana University conducted 60 laboratory market simulations using as few as a dozen volunteers, typically economics students but also, in some experiments, professional businessmen. Despite giving all the participants the same perfect knowledge of coming dividend prospects and then an actual declared dividend at the end of the simulated trading day, which could vary more or less randomly but which would average a certain amount, the subjects in these experiments repeatedly created a boom-and-bust market profile. The extremity of that profile was a function of the participants' lack of experience in the speculative arena. Head research economist Vernon L. Smith came to this conclusion: "We find that inexperienced traders never trade consistently near fundamental value, and most commonly generate a boom followed by a crash.... Groups that have experienced one crash continue to bubble and crash, but at reduced volume. Groups brought back for a third trading session tend to trade near fundamental dividend value. In the real world, these bubbles and crashes would be a lot less likely if the same traders were in the market all the time, but novices are always entering the market."9
While these experiments were conducted as if participants could
actually possess true knowledge of coming events and so-called fundamental value, no such knowledge is available in the real world.
The fact that participants create a boom-bust pattern anyway is overwhelming evidence of the power of the herding impulse.
It is not only novices who fall in line. It is a lesser-known fact that the vast majority of professionals herd just like the naive majority. Figure 2 shows the percentage of cash held at institutions as
it relates to the level of the S&P 500 Composite Index. As you can
see, the two data series move roughly together, showing that professional fund managers herd right along with the market just as
the public does.

Figure 2
Stock Mutual Funds Cash/Assets Ratio vs. Aggregate Stock Prices
Monthly Data 12/31/65 - 12/31/98 (log scale)
Source: Ned Davis Research    Data: Investment Company Institute

Apparent expressions of cold reason by professionals follow herding patterns as well. Finance professor Robert Olsen recently conducted a study of 4,000 corporate earnings estimates by company analysts and reached this conclusion:
Experts' earnings predictions exhibit positive bias and disappointing accuracy. These shortcomings are usually attributed to some combination of incomplete knowledge, incompetence, and/or misrepresentation. This article suggests that the human desire for consensus leads to herding behavior among earnings forecasters.10
Olsen's study shows that the more analysts are wrong, which is another source of stress, the more their herding behavior increases.11
How can seemingly rational professionals be so utterly seduced by the opinion of their peers that they will not only hold, but change, opinions collectively? Recall that the neocortex is to a significant degree functionally disassociated from the limbic system. This means not only that feelings of conviction may attach to utterly contradictory ideas in different people, but that they can do so in the same person at different times. In other words, the same brain can support opposite views with equally intense emotion, depending upon the demands of survival perceived by the limbic system. This fact relates directly to the behavior of financial market participants, who can be flushed with confidence one day and in a state of utter panic the next. As Yale economist Robert Shiller puts it, "You would think enlightened people would not have firm opinions about markets, but they do, and it changes all the time."12 Throughout the herding process, whether the markets are real or simulated, and whether the participants are novices or professionals, the general conviction of the rightness of stock valuation at each price level is powerful, emotional and impervious to argument.
Falling into line with others for self-preservation involves not only the pursuit of positive values but also the avoidance of negative values, in which case the reinforcing emotions are even stronger. Reptiles and birds harass strangers. A flock of poultry will peck to death any individual bird that has wounds or blemishes. Likewise, humans can be a threat to each other if there are perceived differences between them. It is an advantage to survival, then, to avoid rejection by revealing your sameness. D.C. Gajdusek researched a long-hidden Stone Age tribe that had never seen Western people and soon noticed that they mimicked his behavior; whenever he scratched his head or put his hand on his hip, the whole tribe did the same thing.13 Says MacLean, "It has been suggested that such imitation may have some protective value by signifying, 'I am like you.'" He adds, "This form of behavior is phylogenetically deeply ingrained."14 The limbic system bluntly assumes that all expressions of "I am not like you" are infused with danger. Thus, herding and mimicking are preservative behavior. They are powerful because they are impelled, regardless of reasoning, by a primitive system of mentation that, however uninformed, is trying to save your life.
As with so many useful paleomentational tools, herding behavior is counterproductive with respect to success in the world of modern financial speculation. If a financial market is soaring or crashing, the limbic system senses an opportunity or a threat and orders
you to join the herd so that your chances for success or survival will
improve. The limbic system produces emotions that support those
impulses, including hope, euphoria, cautiousness and panic. The
actions thus impelled lead one inevitably to the opposite of survival
and success, which is why the vast majority of people lose when
they speculate.15 In a great number of situations, hoping and herding can contribute to your well-being. Not in financial markets. In
many cases, panicking and fleeing when others do cuts your risk.
Not in financial markets. The important point with respect to this
aspect of financial markets is that for many people, repeated failure
does little to deter the behavior. If repeated loss and agony cannot overcome the limbic system's impulses, then it certainly must have free
rein in comparatively benign social settings.
Regardless of their inappropriateness to financial markets, these
impulses are not irrational because they have a purpose, no matter
how ill-applied in modern life. Yet neither are they rational, as they



are within men's unconscious minds, i.e., their basal ganglia and limbic system, which are equipped to operate without, and to override, the conscious input of reason. These impulses, then, serve rational general goals but are irrationally applied to too many specific situations.

PHI IN THE UNCONSCIOUS MENTATIONAL PATTERNS OF INDIVIDUALS AND GROUPS
At this point, we have identified unconscious, impulsive mental processes in individual human beings that are involved in governing behavior with respect to one's fellows in a social setting. Is it logical to expect such impulses to be patterned? When the unconscious mind operates, it could hardly do so randomly, as that would mean no thought at all. It must operate in patterns peculiar to it. Indeed, the limbic systems of individuals produce the same patterns of behavior over and over when those individuals are in groups. The interesting observation is how the behavior is patterned. When we investigate statistical and scientific material on the subject, rare as it is, we find that our Fibonacci-structured neurons and microtubules (see "Science is Validating the Concept of the Wave Principle") participate in Fibonacci patterns of mentation.
Perhaps the most rigorous work in this area has been performed by psychologists in a series of studies on choice. G.A. Kelly proposed in 1955 that every person evaluates the world around him using a system of bipolar constructs.16 When judging others, for instance, one end of each pole represents a maximum positive trait and the other a maximum negative trait, such as honest/dishonest, strong/weak, etc. Kelly had assumed that average responses in value-neutral situations would be 0.50. He was wrong. Experiments show a human bent toward favor or optimism that results in a response ratio in value-neutral situations of 0.62, which is phi. Numerous binary-choice experiments have reproduced this finding, regardless of the type of constructs or the age, nationality or background of the subjects. To name just a few, the ratio of 62/38 results when choosing "and" over "but" to link character traits, when evaluating factors in the work environment, and in the frequency of cooperative choices in the prisoner's dilemma.17
Figure 3
Source: Poulton, 1989
Psychologist Vladimir Lefebvre of the School of Social Sciences at the University of California in Irvine and Jack Adams-Webber of Brock University corroborate these findings. When Lefebvre asks subjects to choose between two options about which they have no strong feelings and/or little knowledge, answers tend to divide into a

Fibonacci proportion: 62% to 38%. When he asks subjects to sort indistinguishable objects into two piles, they tend to divide them into a 62/38 ratio. When subjects are asked to judge the lightness of gray paper against solid white and solid black, they persistently mark it either 62% or 38% light,18 favoring the former. (See Figure 3.) When Adams-Webber asks subjects to evaluate their friends and acquaintances in terms of bipolar attributes, they choose the positive pole 62% of the time on average.19 When he asks a subject to decide how many of his own attributes another shares, the average commonality assigned is 0.625.20 When subjects are given scenarios that require a moral action and asked what percentage of people would take good actions vs. bad actions, their answers average 62%.21 When people say they feel 50/50 on a subject, Lefebvre says, chances are it is more like 62/38.22
Lefebvre concludes from these findings, "We may suppose that in a human being, there is a special algorithm for working with codes independent of particular objects."23 This language fits MacLean's conclusion, and LeDoux's confirmation, that the limbic system can produce emotions and attitudes that are independent of objective referents in the cortex. If these statistics reveal something about human thought, they suggest that in many, perhaps all, individual humans, and certainly in an aggregate average, opinion is predisposed to a 62/38 inclination. With respect to each individual decision, the availability of pertinent data, the influence of prior experiences and/or learned biases can modify that ratio in any given instance. However, phi is what the mind starts with. It defaults to phi whenever parameters are unclear or information is insufficient for an utterly objective assessment.
This is important data because it shows a Fibonacci decision-based mentation tendency in individuals. If individual decision-making reflects phi, then it is less of a leap to accept that the Wave Principle, which also reflects phi, is one of its products. To narrow that step even further, we must be satisfied that phi appears in group mentation in the real world. Does Fibonacci-patterned decision-making mentation in individuals result in a Fibonacci-patterned decision-making mentation in collectives? Data from the 1930s and the 1990s suggest that it does.
Lefebvre's and Adams-Webber's experiments show unequivocally that the more individuals' decisions are summed, the smaller is the variance from phi. In other words, while individuals may vary somewhat in the phi-based bias of their bipolar decision-making, a large sum of such decisions reflects phi quite precisely. In a real-world social context, Lefebvre notes by example that the median voting margin in California ballot initiatives over 100 years is 62%. The same ratio holds true in a study of all referenda in America over a decade24 as well as referenda in Switzerland from 1886 to 1978.25
In the early 1930s, before any such experiments were conducted or models proposed, stock market analyst Robert Rhea undertook a statistical study of bull and bear markets from 1896 to 1932. He knew nothing of Fibonacci, as his work in financial markets predated R.N. Elliott's discovery of the Fibonacci connection by eight years. Thankfully, he published the results despite, as he put it, seeing no immediate practical value for the data. Here is his summary:
Bull markets were in progress 8143 days, while the remaining 4972 days were in bear markets. The relationship between these figures tends to show that bear markets run 61.1 percent of the time required for bull periods.... The bull market[s]...net advance was 46.40 points. [It] was staged in four primary swings of 14.44, 17.33, 18.97 and 24.48 points respectively. The sum of these advances is 75.22. If the net advance, 46.40, is divided into the sum
MTA JOURNAL

Summer-Fall 2000

20

of advances, 75.22, the result is 1.621. The total of secondary reactions retraced 62.1 percent of the net advance.26
To generalize his findings, the stock market on average advances by 1s and retreats by .618s, in both price and time.
The work of Lefebvre and others showing that people have a natural tendency to make choices that are 61.8% optimistic and 38.2% pessimistic directly reflects Robert Rhea's data indicating that bull markets tend both to move prices and to endure 62% relative to bear markets' 38%. Bull markets and bear markets are the quintessential expressions of optimism and pessimism in an overall net-neutral environment for judgment. Moreover, they are created by a very large number of people, whose individual differences in decision-making style cancel each other out to leave a picture of pure Fibonacci expression, the same result produced in the aggregate in bipolar decision-making experiments. As rational cogitation would never produce such mathematical consistency, this picture must come from another source, which is likely the impulsive paleomentation of the limbic system, the part of the brain that induces herding.
While Rhea's data need to be confirmed by more statistical studies, the prospects for their confirmation appear bright. For example, in their 1996 study on log-periodic structures in stock market data, Sornette and Johansen investigate successive oscillation periods around the time of the 1987 crash and find that each period (tn) equals a value (lambda) raised to the power of the period's place in the sequence (n), so that tn = lambda^n. They then state outright the significance of the Fibonacci ratio that they find for lambda:
The Elliott wave technique...describes the time series of a stock price as made of different waves. These different waves are in relation with each other through the Fibonacci series, [whose numbers] converge to a constant (the so-called golden mean, 1.618), implying an approximate geometrical series of time scales in the underlying waves. [This idea is] compatible with our above estimate for the ratio lambda of about 1.5-1.7.27
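Rhea's percentages follow directly from the day counts and point totals he reports; checking the arithmetic:

```latex
\frac{4972}{8143} \approx 0.611, \qquad
\frac{75.22}{46.40} \approx 1.621, \qquad
\frac{46.40}{75.22} \approx 0.617.
```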

This phenomenon of time is the same as the one that R.N. Elliott described for price swings in the 1930-1939 period recounted in Chapter 5 of The Wave Principle of Human Social Behavior.
In the past three years, modern researchers have conducted experiments that further demonstrate Elliott's observation that phi and the stock market are connected. The October 1997 New Scientist reports on a study that concludes that the stock market's Hurst exponent,28 which characterizes its fractal dimension, is 0.65.29 This number is quite close to the Fibonacci ratio. However, since that time, the figure for financial auction-market activity has gotten even closer. Europhysics Letters has just published the results of a market simulation study by European physicists Caldarelli, Marsili and Zhang. Although the simulation involves only a dozen or so subjects at a time trading a supposed currency relationship, the resulting price fluctuations mimic those in the stock market. Upon measuring the fractal persistence of those patterns, the authors come to this conclusion:
The scaling behavior of the price returns...is very similar to that observed in a real economy. These distributions [of price differences] satisfy the scaling hypothesis...with an exponent of H = 0.62.30
The Hurst exponent of this group dynamic, then, is 0.62. Although the authors do not mention the fact, this is the Fibonacci ratio. Recall that the fractal dimension of our neurons is phi. These two studies show that the fractal dimension of the stock market is related to phi. The stock market, then, has the same fractal dimensional factor as our neurons, and both of them are the Fibonacci ratio.

This is powerful evidence that our neurophysiology is compatible with, and therefore intimately involved in, the generation of the Wave Principle.
Lefebvre explains why scientists are finding phi in every aspect of both average individual mentation and collective mentation:
The golden section results from the iterative process. ...Such a process must appear [in mentation] when two conditions are satisfied: (a) alternatives are polarized, that is, one alternative plays the role of the positive pole and the other one that of the negative pole; and (b) there is no criterion for the utilitarian preference of one alternative over the other.31
This description fits people's mental struggle with the stock market, it fits people's participation in social life in general, and it fits the Wave Principle.
It is particularly intriguing that the study by Caldarelli et al. purposely excludes all external input of "news" or "fundamentals." In other words, it purely records all the infighting and ingenuity of the players in trying to outguess the others.32 As Lefebvre's work anticipates, subjects in such a nonobjective environment should default to phi, which Elliott's model and the latest studies show is exactly the number to which they default in real-world financial markets.

CONCLUSION
R.N. Elliott discovered, before any of the above was known, that the form of mankind's evaluation of his own productive enterprise, i.e., the stock market, has Fibonacci properties. These studies and statistics say that the mechanism that generates the Wave Principle, man's unconscious mind, has countless Fibonacci-related properties. These findings are compatible with Elliott's hypothesis.

NOTES
1. MacLean, P. (1990). The triune brain in evolution: role in
paleocerebral functions. New York: Plenum Press.
2. Scuoteguazza, H. (1997, September/October). Handling emotional intelligence. The Objective American.
3. MacLean, P. (1990). The triune brain in evolution, p. 17.
4. Chapters 15 through 19 of The Wave Principle of Human Social
Behavior explore this point further.
5. Wright, K. (1997, October). Babies, bonds and brains. Discover, p. 78.
6. Pigou, A.C. (1927). Industrial fluctuations. London: F. Cass.
7. Pigou, A.C. (1920). The economics of welfare. London: F. Cass.
8. Among others, such measures include put and call volume ratios, cash holdings by institutions, index futures premiums, the
activity of margined investors, and reports of market opinion
from brokers, traders, newsletter writers and investors.
9. Bishop, J.E. (1987, November 17). Stock market experiment
suggests inevitability of booms and busts. The Wall Street Journal.
10. Olsen, R. (1996, July/August). Implications of herding behavior. Financial Analysts Journal, pp. 37-41.
11. Just about any source of stress can induce a herding response.
MacLean humorously references the tendency of governments
and universities to respond to tension by forming ad hoc committees.
12. Passell, P. (1989, August 25). Dow and reason: distant cousins? The New York Times.



13. Gajdusek, D.C. (1970). Physiological and psychological characteristics of stone age man. Symposium on Biological Bases of
Human Behavior, Eng. Sci. 33, pp. 26-33, 56-62.
14. MacLean, P. (1990). The triune brain in evolution.
15. There is a myth, held by nearly all people outside of back-office employees of brokerage firms and the IRS, that many
people do well in financial speculation. Actually, almost everyone loses at the game eventually. The head of a futures brokerage firm once confided to me that never in the firms history
had customers in the aggregate had a winning year. Even in
the stock market, when the public or even most professionals
win, it is a temporary, albeit sometimes prolonged, phenomenon. The next big bear market usually wipes them out if they
live long enough, and if they do not, it wipes out their successors. This is true regardless of todays accepted wisdom that
the stock market always goes to new highs eventually and that
todays investors are wise. Aside from the fact that the new
highs forever conviction is false (Where was the Roman stock
market during the Dark Ages?), what counts is when people
act, and that is what ruins them.
16. Kelly, G.A. (1955). The psychology of personal constructs, Vols. 1
and 2.
17. Osgood, C.E., and M.M. Richards (1973). Language, 49, pp.
380-412; Shalit, B. (1960). British Journal of Psychology, 71, pp.
39-42; Rapoport, A. and A.M. Chammah (1965). Prisoners dilemma. University of Michigan Press.
18. Poulton, E.C., Simmonds, D.C.V. and Warren, R.M. (1968). Response bias in very first judgments of the reflectance of grays:
numerical versus linear estimates. Perception and Psychophysics,
Vol. 3, pp. 112-114.
19. Adams-Webber, J. and Benjafield, J. (1973). The relation between lexical marking and rating extremity in interpersonal
judgment. Canadian Journal of Behavioral Science, Vol. 5, pp.
234-241.
20. Adams-Webber, J. (1997, Winter). Self-reflexion in evaluating
others. American Journal of Psychology, Vol. 110, No. 4, pp. 527-541.
21. McGraw, K.M. (1985). Subjective probabilities and moral judgments. Journal of Experimental and Biological Structures, #10, pp.
501-518.
22. Washburn, J. (1993, March 31). The human equation. The
Los Angeles Times.
23. Lefebvre, V.A. (1987, October). The fundamental structures
of human reflexion. The Journal of Social Biological Structure,
Vol. 10, pp. 129-175.
24. Lefebvre, V.A. (1992). A psychological theory of bipolarity and reflexivity. Lewinston, NY: The Edwin Mellen Press. And Lefebvre,
V.A. (1997). The cosmic subject. Moscow: Russian Academy of
Sciences Institute of Psychology Press.
25. Butler, D. and Ranney, A. (1978). Referendums. Washington, D.C.: American Enterprise Institute for Public Policy Research.
26. Rhea, R. (1934). The story of the averages: a retrospective study of the forecasting value of Dow's theory as applied to the daily movements of the Dow-Jones industrial & railroad stock averages. Republished January 1990. Omnigraphi. (See discussion in Chapter 4 of Elliott Wave Principle by Frost and Prechter.)
27. Sornette, D., Johansen, A., and Bouchaud, J.P. (1996). Stock
market crashes, precursors and replicas. Journal de Physique I
France 6, No.1, pp. 167-175.

28. The Hurst exponent (H), named for its developer, Harold Edwin Hurst [ref: Hurst, H.E., et al. (1951). Long term storage: an experimental study], is related to the fractal, or Hausdorff, dimension (D) by the following formula, where E is the embedding Euclidean dimension (2 in the case of a plane, 3 in the case of a space): D = E - H. It may also be stated as D = E + 1 - H if E is the generating Euclidean dimension (1 in the case of a line, 2 in the case of a plane). Thus, if the Hurst exponent of a line graph is .38, or phi^-2, then the fractal dimension is 1.62, or phi; if the Hurst exponent is .62, or phi^-1, then the fractal dimension is 1.38, or 1 + phi^-2. [source: Schroeder, M. (1991). Fractals, chaos, power laws: minutes from an infinite paradise. New York: W.H. Freeman & Co.] Thus, if H is related to phi, so is D.
29. Brooks, M. (1997, October 18). Boom to bust. New Scientist.
30. Caldarelli, G., et al. (1997). A prototype model of stock exchange. Europhysics Letters, 40 (5), pp. 479-484.
31. Lefebvre, V.A. (1998, August 18-20). Sketch of reflexive game
theory, from the proceedings of The Workshop on Multi-Reflexive Models of Agent Behavior conducted by the Army Research
Laboratory.
32. Caldarelli, G., et al. (1997, December 1). A prototype model
of stock exchange. Europhysics Letters, 40 (5), pp. 479-484.

ROBERT R. PRECHTER, JR., CMT


Robert Prechter first heard of the Wave Principle in the late 1960s while an undergraduate studying psychology at Yale. In the mid-1970s, he began investigating the literature and labeling waves in hourly records of the Dow Jones Industrial Average and prices for gold. In 1976, while a Technical Market Specialist at Merrill Lynch in New York, Prechter began publishing studies on the Wave Principle. In 1978, he coauthored, with A.J. Frost, Elliott Wave Principle - Key to Market Behavior, and in 1979, he started The Elliott Wave Theorist, a publication devoted to analysis of the U.S. financial markets. During the 1980s, Prechter won numerous awards for market timing as well as the United States Trading Championship, culminating in Financial News Network's conferring upon him the title of "Guru of the Decade." In 1990-1991, he was elected and served as president of the MTA in its 21st year. Prechter's firm, Elliott Wave International, now serves institutional subscribers around the world 24 hours a day via on-line intraday analysis of the world's major markets. In November 1997, Prechter addressed the International Conference on the Unity of the Sciences (ICUS) in Washington, DC, an international forum on interdisciplinary scientific issues. The paper he presented at that conference was later expanded into his most recent book, entitled The Wave Principle of Human Social Behavior and the New Science of Socionomics, which was published in 1999.


TESTING THE EFFICACY OF THE NEW HIGH/NEW LOW INDEX USING PROPRIETARY DATA
Richard T. Williams, CFA, CMT

INTRODUCTION

Based on comments by fellow MTA members shortly after submission of my 1999 CMT paper, "Testing the Efficacy of New High/New Low Data," I began to ponder how I might explore further the possibilities of using new high/new low data as a stock market indicator. The NYSE new highs and new lows have been used in technical analysis and by market watchers for many years. The theory is that the stocks reaching new 52-week highs or lows represent significant events relative to the market and its sectors. If possible, the study of the actual prices underlying new high/low data might suggest intriguing new ways to explore different applications of indicators like the 10-Day High/Low Index (referred to hereinafter as the TDHLI) for predicting market action.
There are a number of ways to use new high and new low data. In order to test the efficacy of new high/new low indicators using proprietary data, we will employ the same 10-day moving average of the percentage of new highs over new highs plus new lows that I employed last year to test publicly available new high/new low data. The traditional rules, by way of review, were introduced to me by Jack Redegeld, the head of technical research at Scudder, Stevens & Clark, in 1986. A definition of terms from my 1999 CMT paper follows below:
What differentiates the TDHLI from other indicators that use new high and new low data is that it tracks the oscillation from 0 to 1 of the net new highs (new highs/(new highs + new lows)) and then uses a 10-day simple moving average to smooth the results. The TDHLI signals a buy when the indicator rises above 0.3 (or 30% of the range), and indicates a sell when it falls below 0.7 (or 70% of the range). The origin of the 70/30 filter is from Jack Redegeld's work over time. A.W. Cohen published an approach in 1968 using rules similar to Jack Redegeld's application of the TDHLI while at Scudder (1961-1989):
The extreme percentages on this chart are above 90% (occasionally above 80%) and below 10% (occasionally below 20%). Intermediate down moves and bear markets usually end when this percentage is below the 10% level. The best time to go long is when the percentage is below the 10% level and turns up. This is a bull alert signal. Short positions should be covered and long positions taken. A rise in the percentage above a previous top or above the 50% level is a bull confirmed signal. The best time to sell short is when this percentage is above the 90% level and turns down. This is a bear alert signal. Long positions should be closed out and short positions established. A drop in the percentage below a previous bottom or below the 50% level signals a bear confirmed market.
In my CMT paper, I suggested a dynamic filtering method to improve the performance of the TDHLI. The technique is simply to apply a percentage filter based on two standard deviations from the mean of the data to the TDHLI. The net effect of the dynamic filter method is to capture a greater portion of the market's move, but at the cost of higher transaction costs and more frequent signals (some of which will be false signals that incur losses, further increasing the cost of doing business). In my prior study, the data showed substantial performance gains from employing the dynamic filtering method to new high/low data taken from publicly available sources and applied to the TDHLI.
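A minimal sketch of the TDHLI as defined above, together with the traditional 0.30/0.70 signal rule. This is an illustration of the published definition, not the author's test code; the daily new-high and new-low count series are assumed inputs.

```python
# TDHLI: 10-day simple moving average of new highs / (new highs + new lows).

def tdhli(new_highs, new_lows, window=10):
    ratios = [h / (h + l) if (h + l) > 0 else 0.5
              for h, l in zip(new_highs, new_lows)]
    return [sum(ratios[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(ratios))]

def traditional_signals(indicator, buy_level=0.30, sell_level=0.70):
    signals = []
    for i in range(1, len(indicator)):
        if indicator[i - 1] <= buy_level < indicator[i]:
            signals.append((i, "buy"))       # rises above 0.30
        elif indicator[i - 1] >= sell_level > indicator[i]:
            signals.append((i, "sell"))      # falls below 0.70
    return signals
```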

The purpose of this paper shall be to evaluate the efficacy of the TDHLI using four years of historical prices to calculate new high/low data on the largest 5000 stocks on the NYSE, Amex and NASDAQ from January 31, 1996 to December 31, 1999. The conclusions of my earlier paper were that the traditional rules suggested by both Messrs. Cohen and Redegeld did not perform particularly well. The dynamic rules that I suggested improved performance and flexibility, but did little to explore the impact of several factors, like market cap, on the performance and predictive power of indicators like the TDHLI. The aim of the current study is to explore the merits of calculating new high/low data and parsing it by market cap to provide a deeper look into the technicals of the stock market.
The attempt at compiling the relatively large database of stock prices necessary to generate new high/low data nearly undid my project to submit for the 2000 Charles Dow Award. Despite my best efforts over several months and a surprisingly ineffectual collection of state-of-the-art PC hardware, the entire enterprise nearly collapsed from the strain on the memory and processing speed available today. The final, successful effort required four PCs with Pentium 500-600 megahertz processors and a combined 704 megabytes of RAM running in parallel on small subsets of data that had to be reorganized and recompiled after the initial run. To say that the additional capabilities made possible by using proprietary data come at a cost is an understatement. Still, the benefits of flexibility may yet prove to be worth the ultimate effort.
My suggested approach, as before, uses percentage filters held to a tolerance of two standard deviations from the mean for past signals. Filter percentages varied, as one might expect, with the market cap and volatility of the stocks. When the value of the indicator changes, for example, by 20% from its most recent high or low, a buy or sell is signaled. The percentage hurdle was derived by taking the percent move that correctly captured 95% (or two standard deviations from the mean) of the historic index moves. The mid cap data required a 21% filter for the period studied, while small caps needed a 25% band to meet the criterion. Totals for all the new high/low data, perhaps as a result of the time period utilized and the relative numbers of small cap stocks in the data, required a 23% filter. The benefit of dynamic rules appears to be supported by my experience with both studies conducted to date.

Table 1
TDHLI Standard Deviations

          Large Cap   Mid Cap   Small Cap   Total
4 year     20.67%     20.87%     25.00%     23.95%
3 year     21.77%     21.95%     26.16%     25.17%
2 year     23.65%     22.80%     25.58%     25.22%
1 year     14.29%     14.12%     19.55%     18.47%

METHODOLOGY
New highs are defined as stocks reaching a new high over the previous year of daily prices. New lows, conversely, are stocks descending below their lowest price over the prior year. Stocks were selected based on the 5000 largest market capitalizations on the three main US exchanges, and the data may be biased to the extent that prior performance influenced the market caps at the time the data was sorted (the end date). Spreadsheets were then constructed to reflect new highs and lows from yearly databases of daily closing prices.
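A sketch of the tally itself, assuming closing prices aligned by trading day and a 252-day (roughly one-year) lookback. This is illustrative only; the data structure and lookback choice are assumptions of the sketch.

```python
# Count, for each day, how many stocks made a new one-year high or a
# new one-year low. `prices` maps ticker -> list of daily closes,
# with all lists aligned on the same trading days.

def daily_new_highs_lows(prices, lookback=252):
    num_days = len(next(iter(prices.values())))
    highs = [0] * num_days
    lows = [0] * num_days
    for closes in prices.values():
        for t in range(lookback, num_days):
            window = closes[t - lookback:t]     # prior year, excluding today
            if closes[t] > max(window):
                highs[t] += 1
            elif closes[t] < min(window):
                lows[t] += 1
    return highs, lows
```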
The tables below show buys and sells in the first column, the trade date in the second, the index value in the third, and the return for each trade next; in the source, a final column showed cumulative results in decimal format (1.04 = 4% gain, 0.99 = 1% loss) for all trades in sequence, which is the running product of the returns. At the bottom of each table the total index and indicator returns can be found.

[Chart 1: Large Cap TDHLI]
[Chart 2: Mid Cap TDHLI]
[Chart 3: Small Cap TDHLI]
[Chart 4: Total Cap TDHLI]

Table 2
Large Cap Rates of Return

b 01/31/96   636.02
s 03/08/96   633.50   -0.40%
b 03/22/96   650.62
s 06/26/96   664.39    2.12%
b 07/05/96   657.44
s 07/12/96   646.19   -1.71%
b 07/25/96   631.17
s 09/05/96   649.44    2.89%
b 09/19/96   683.00
s 12/13/96   728.64    6.68%
b 12/30/96   753.85
s 01/27/97   765.02    1.48%
b 02/07/97   789.56
s 03/20/97   782.65   -0.88%
b 04/21/97   760.37
s 08/21/97   925.05   21.66%
b 08/29/97   899.47
s 10/23/97   950.69    5.69%
b 11/10/97   921.13
s 12/15/97   963.39    4.59%
b 01/06/98   966.58
s 04/28/98  1085.10   12.26%
b 06/25/98  1129.30
s 07/27/98  1147.30    1.59%
b 08/17/98  1083.70
s 09/01/98   994.24   -8.25%
b 09/11/98  1009.10
s 10/07/98   970.68   -3.80%
b 10/14/98  1005.50
s 02/12/99  1230.10   22.34%
b 03/01/99  1236.20
s 05/25/99  1284.40    3.90%
b 06/08/99  1317.30
s 08/31/99  1320.40    0.23%
b 09/02/99  1319.10
s 09/24/99  1277.40   -3.17%
b 10/07/99  1317.60
s 10/18/99  1254.10   -4.82%
b 10/28/99  1342.40
s 12/31/99  1469.30    9.45%
Total 92.06%    SPX 131.01%

Table 3
Large Cap using Traditional Rules

s 03/12/96   637.09   -0.45%
b 07/31/96   639.95
s 12/17/96   726.04   13.45%
b 08/24/98  1088.14
s 02/12/99  1230.13   13.05%
Total 27.68%    SPX 93.09%

Mid Cap using Traditional Rules

b 07/19/96   221.66
s 12/18/96   249.87   12.73%
b 04/16/97   251.47
s 10/27/97   308.39   22.63%
b 09/16/98   308.50
s 02/12/99   367.09   18.99%
Total 64.50%    MID 65.61%

Mid Cap Rates of Return

b 01/25/96      ...
s 03/04/96   215.67      ...
b 03/20/96   230.28
s 06/17/96   237.87    3.30%
b 07/02/96   236.98
s 07/15/96   218.60   -7.76%
b 07/31/96   221.66
s 09/03/96   231.21    4.31%
b 09/16/96   238.76
s 12/16/96   247.16    3.52%
b 01/02/97   252.11
s 03/13/97   263.14    4.38%
b 04/08/97   255.84
s 04/10/97   253.76   -0.81%
b 04/16/97   251.47
s 10/24/97   329.48   31.02%
b 11/06/97   325.13
s 12/19/97   320.26   -1.50%
b 01/05/98   333.39
s 05/26/98   363.22    8.95%
b 06/26/98   356.97
s 07/24/98   357.91    0.26%
b 08/17/98   329.85
s 08/31/98   299.86   -9.09%
b 09/08/98   291.65
s 10/07/98   289.31   -0.80%
b 10/16/98   305.41
s 02/08/99   367.23   20.24%
b 03/10/99   364.25
s 03/29/99   363.05   -0.33%
b 04/08/99   362.46
s 06/01/99   395.87    9.22%
b 06/16/99   399.73
s 07/26/99   415.37    3.91%
b 08/19/99   400.30
s 09/16/99   401.09    0.20%
b 10/06/99   384.82
s 10/19/99   369.50   -3.98%
b 10/29/99   391.20
s 12/15/99   411.75    5.25%
Total 97.61%    MID 90.92%

Table 4
Small Cap Rates of Return

b 02/02/96   122.95
s 06/18/96   134.96    9.77%
b 07/30/96   123.87
s 11/01/96   136.48   10.18%
b 11/13/96   139.67
s 03/05/97   146.21    4.68%
b 04/16/97   137.27
s 10/27/97   174.00   26.76%
b 12/03/97   179.37
s 01/16/98   176.10   -1.82%
b 02/04/98   183.33
s 05/06/98   200.67    9.46%
b 06/25/98   188.79
s 07/24/98   183.18   -2.97%
b 08/07/98   175.83
s 08/26/98   160.54   -8.70%
b 09/08/98   152.35
s 10/07/98   133.73  -12.22%
b 10/15/98   139.40
s 12/14/98   161.26   15.68%
b 12/29/98   171.51
s 01/27/99   172.17    0.38%
b 03/01/99   160.47
s 03/24/99   155.23   -3.27%
b 04/07/99   158.75
s 05/25/99   175.68   10.66%
b 06/28/99   182.07
s 07/28/99   183.64    0.86%
b 08/17/99   180.22
s 09/22/99   175.96   -2.36%
b 10/07/99   176.07
s 10/18/99   168.96   -4.04%
b 10/28/99   174.11
s 11/30/99   182.97    5.09%
b 12/29/99   195.47
Total 65.57%    SML 58.98%

Table 5
Small Cap using Traditional Rules

b 08/02/96   128.73
s 10/29/96   136.10    5.73%
b ...        139.27
s 10/28/97   178.17   27.93%
b 06/08/98   190.86
s 12/10/98   164.75  -13.68%
b 03/04/99   161.45
s 05/24/99   177.79   10.12%
b 08/16/99   179.48
s 11/22/99   185.95    3.60%
Total 33.20%    SML 44.45%

Total Cap Rates of Return

b 02/01/96   638.46
s 06/18/96   662.06    3.70%
b 07/05/96   657.44
s 07/15/96   629.80   -4.20%
b 07/30/96   635.26
s 11/04/96   706.73   11.25%
b 11/13/96   731.13
s 12/18/96   731.54    0.06%
b 01/02/97   737.01
s 03/17/97   795.71    7.96%
b 04/21/97   760.37
s 10/27/97   876.99   15.34%
b 11/11/97   923.78
s 05/07/98  1095.14   18.55%
b 06/24/98  1132.88
s 07/24/98  1140.80    0.70%
b 08/12/98  1084.22
s 08/26/98  1084.19    0.00%
b 09/08/98  1023.46
s 10/07/98   970.68   -5.16%
b 10/15/98  1047.49
s 12/14/98  1141.20    8.95%
b 12/29/98  1241.81
s 01/28/99  1265.37    1.90%
b 02/24/99  1253.41
s 03/24/99  1268.59    1.21%
b 04/07/99  1326.89
s 05/25/99  1284.40   -3.20%
b 06/28/99  1331.35
s 07/27/99  1362.84    2.37%
b 08/17/99  1344.16
s 09/21/99  1307.58   -2.72%
b 10/06/99  1325.40
s 10/19/99  1261.32   -4.83%
b 10/28/99  1342.44
s 12/01/99  1397.72    4.12%
b 12/29/99  1463.46
Total 67.29%    SPX 118.92%

Total Cap using Traditional Rules

b 07/31/96   639.95
s 11/01/96   703.77    9.97%
b 04/14/97   743.73
s 10/27/97   876.99   17.92%
b 06/24/98  1132.88
s 07/21/98  1165.07    2.84%
b 02/23/99  1271.18
s 05/25/99  1284.40    1.04%
b 10/05/99  1301.35
s 11/24/99  1417.08    8.89%
Total 46.73%    SPX 121.44%

Table 6
Trades Detailed

                      Large Cap   Mid Cap   Small Cap   Total Cap
Cumulative Losses
  Traditional Rules     -0.45%      0%       -13.68%      0%
  Dynamic Rules        -23.03%    -24.26%    -35.38%    -20.11%
  Largest Loss          -8.25%     -9.09%    -12.22%     -5.16%
Cumulative Gains
  Traditional Rules     28.13%     27.68%     46.88%     46.73%
  Dynamic Rules        115.09%    121.87%    100.95%     87.40%
  Largest Gain          22.34%     31.02%     26.76%     18.55%

OBSERVATIONS

After evaluating the TDHLI performance characteristics based on proprietary new high/low data, the first conclusion was that, as my previous study showed, the traditional buy/sell rules did not work effectively. Another conclusion was that the TDHLI using dynamic rules performed better than the traditional rules, but did not perform as well in the current period as it did in the earlier study. The slippage between trades based on the TDHLI was partially responsible for a return deficit compared to the S&P 500 over the evaluation period. In strongly trending markets, any trading activity tends to negatively impact overall performance. The S&P 500 was interrupted in its upward march only by two corrections, one of 22.5% and the other of 13%. It is interesting to note that, using aggregate data, the TDHLI lost only 5.16% during the 22.5% October 1998 decline in the S&P 500. Similarly, during the pullback last Fall, the TDHLI fell 7.55% versus 13% in the S&P 500. In fact, the most significant slippage in relative performance occurred around periods of high performance: by either starting late or finishing prematurely during a significant market move, the TDHLI underperformed the averages.
Loss management was a significant issue for the TDHLI over each of the subdivisions of the data. Relatively large losses were incurred as the TDHLI moved to a sell, but the market reacted more sharply. Given the significant bifurcation of the markets during the last four years, an increasing number of stocks are lagging the performance of the averages. Put another way, fewer and fewer stocks are leading the way higher and the volatility of the averages is increasing. This effect has been documented in the media. This stratification implies that the timing value of new highs/lows will be diminished for the time being. Due to the magnitude of the data management task, a more comprehensive study was not possible using PC-based computing.
Still, the predictive power of new high/low data remains formidable. In each case, the periods of time excluded by the TDHLI underperformed substantially in each market cap segment and across all the data. For large caps, the excluded returns were 12.37% vs. 92.06% for the TDHLI. For mid cap data, the excluded returns were slightly negative compared to nearly triple-digit positive returns for the TDHLI. The small cap segment mirrored this result, but the TDHLI provided a less robust performance, in line with the mid cap index, MID. Only in the aggregate data did the excluded returns amount to much, with a 37.02% gain versus 67.09% for the TDHLI and 118.92% for the S&P 500. In spite of the obvious applicability of the Wall Street maxim that it is the time in the market, not the market timing, that yields the best results, the risk as measured by standard deviation suggests that the TDHLI was exposed to less risk over the period. Granted that the market volatility recently has been extraordinary, the TDHLI standard deviations for each market cap and for aggregate data remained in single digits (8.3%, 9.5%, 9.6% and 6.9% for large, mid, small and total cap groups respectively) while the market indices ranged between 63.0% for Nasdaq and 23% for the small cap SML index.
The dynamic TDHLI signaled trades more often than the traditional rules, which is consistent with prior results. The numbers of total trades (and losing trades) for each segment were 20 (7) for large caps, 20 (7) for mid caps, 17 (7) for small caps and 18 (5) for aggregate data. The magnitude of losses averaged about 25% of gains. Looking beyond the market's extraordinary performance, the TDHLI provided fairly respectable results.
Performance of the dynamic filter method for the TDHLI was respectable compared to the traditional rules, but less robust

against the averages. The large cap dynamic indicator returned 92.06% vs. 27.68% for traditional rules and 131.03% for the SPX. The mid cap indicator was up 97.61% vs. traditional rules with 64.50% and MID at 90.92%. The small cap results were 65.57% vs. traditional methods with 33.20% and SML at 58.98%. The aggregate return was 67.29% vs. traditional rules with 46.73% and SPX at 118.92%. The essential difference was that by tracking the relative movements of the TDHLI and signaling reversals of intermediate magnitude, significant moves in the market were captured by the indicator. On the other hand, the TDHLI in the current period tended to exit the market prematurely while meaningful returns remained. One interpretation of this result is that a divergence between a few stocks with high returns and the majority of stocks with much less robust results over the period has created a distortion in market returns that is not fully captured by the new high/low data.

CONCLUSION
The traditional TDHLI and, in the current period, the dynamic TDHLI as well, failed to keep pace with the market despite posting strong results. The utilization of proprietary data made the TDHLI considerably more flexible and provided new and interesting dimensions to the indicator. While it provided useful buy and sell signals under most conditions, the TDHLI, even with the added functionality of proprietary data, did not perform well or robustly enough to be considered an effective indicator solely on its own. The ability to predict market direction, to provide reasonably competitive performance (particularly considering results from my prior study), and to track the market along the lines of market capitalization may prove over time to be a worthwhile addition to the art of technical analysis as embodied by the TDHLI.

ATTRIBUTION

Joseph Redegeld, The Ten Day New High/New Low Index, 1986.
A.W. Cohen, Three-Point Reversal Method of Point & Figure Stock Market Trading, 8th Edition, 1984, Chartcraft, Inc., pg. 91.
2. Data was provided by FactSet, Inc.

RICHARD T. WILLIAMS, CFA, CMT


Richard Williams is a Senior Vice President and fundamental/technical analyst for Jefferies & Company. He specializes in enterprise software and e-commerce infrastructure stocks. During 1999, his stock and convertible (equity substitute) recommendations returned in excess of 360%, making him the top-performing software analyst based on Bloomberg data. Prior to joining Jefferies in 1997, Mr. Williams was an institutional salesman at Kidder Peabody/Paine Webber from 1992-97 and a convertible and warrant sales trader from 1988-92. Mr. Williams received his MBA in Finance from NYU's Stern School of Business in 1991. He received a B.A. in Government and Computer Science from Dartmouth College. He is a Chartered Financial Analyst and a Chartered Market Technician. Mr. Williams is a member of the Association for Investment Management & Research and the Market Technicians Association. He was recently voted runner-up for the Charles Dow Award for contributions to the body of technical knowledge. He has published several articles in the MTA Journal, been a frequent Radio Wall Street guest, and is regularly quoted in the foreign and domestic press and magazines.


BIRTH OF A CANDLESTICK
Using Genetic Algorithms to Identify Useful Candlestick Reversal Patterns
Jonathan T. Lin, CMT

INTRODUCTION

Basics of Candlestick Charting Techniques


Candlestick charts are the most popular and the oldest form of technical analysis in Japan, dating back almost 300 years. They are constructed very much like the Open-High-Low-Close bar charts that most of us use every day, but with one difference. A real body, a box instead of a line as in a bar chart, is drawn between the opening and closing prices. The box is colored black when the closing price is lower than the opening price and colored white if the close is higher than the open. With the colors of the real bodies adding a new dimension to the charts, one can spot the changes in market sentiment at a glance: bullish when the bodies are white, and bearish when black. The lines extending to the high and to the low remain intact. The part of the line between the real body and the high is called the upper shadow, while the part between the real body and the low is termed the lower shadow.

[Figures: White Candlestick; Black Candlestick; Doji (one without a real body)]
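As a small illustration of this anatomy, the following sketch, written in the same Visual Basic used later in Exhibit 5, computes the real body and the two shadows from one day's open-high-low-close prices. The sample prices and the names are illustrative assumptions, not part of the article's system.

Sub CandleAnatomy()
    Dim op As Double, hi As Double, lo As Double, cl As Double
    op = 60#: hi = 64.5: lo = 59.5: cl = 63.75      ' a sample day
    Dim realBody As Double, upperShadow As Double, lowerShadow As Double
    realBody = cl - op                   ' positive = white body, negative = black
    upperShadow = hi - IIf(cl > op, cl, op)   ' from the high down to the top of the body
    lowerShadow = IIf(cl > op, op, cl) - lo   ' from the bottom of the body down to the low
    Debug.Print realBody, upperShadow, lowerShadow
End Sub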

The strength of candlestick charting comes from the fact that it adds an array of patterns to the technical toolbox without taking anything away. Chart readers can draw trendlines, apply computer indicators, and find formations such as ascending triangles and head-and-shoulders on candlestick charts as easily as they can on bar charts. Let us now examine some candlestick patterns, their implications, and the rationale behind them.

A bearish engulfing pattern is formed when a black real body engulfs the prior day's white real body. As the name implies, it is a signal for a top reversal of the preceding uptrend. The rationale behind a bearish engulfing pattern is straightforward. A white candlestick is normal within an uptrend as the bulls continue to enjoy their success. An engulfing black candlestick the next day would mean that the open price of the next day is higher than the close of the first day, signaling possible continuation of the rally. However, the bulls of the first day turn into losers as the price closes lower than the first day's open. Such a shift in sentiment should be alarming to the bulls and signals a possible top.

A morning star is comprised of three candles: a long black candlestick, followed by a small real body that gaps under that black candlestick, followed by a long white real body. The first black candlestick is normal within a downtrend. The subsequent small real body, whose close is not far off the open, is the first warning sign, as the bears were not able to move the price much lower as they did the first day. The third day marks a comeback by the bulls, completing the morning star, a bottom reversal pattern.

[Figures: Bearish Engulfing Pattern; Morning Star]

There is really no need for me to cover too many candlestick patterns here. The examples are given merely to illustrate the fact that most candlestick patterns are nothing more than collections of up to three sets of open-high-low-close prices and their relative positions to the others.

Importance of Size and Locations of Candle Patterns

The other important points to keep in mind are the sizes of the candlesticks in the patterns and the locations of the patterns within the recent trading range of the price. Let us consider a stock that trades around $60. Scenario One: On the first day, it opened at $60 and closed at $63.75. On the next day, it opened at $64 and closed at $58.75. Those two days' trading constitutes a bearish engulfing pattern. This pattern should be considered meaningful since a roughly $4 run-up followed by a $5+ pullback is of considerable impact on a $60 stock. Scenario Two: The same stock opened at $60 and closed at $61 on the first day, and opened at $61.25 and closed at $59.875 the next day. Those two days' trading did indeed still constitute a bearish engulfing pattern, but the effect of the pattern would not be considered as meaningful. A $1 fluctuation for a $60 stock is pretty much a non-event. It should be evident that the sizes of the candlesticks within a pattern matter as much as the pattern itself.

We should now look at the importance of the location of the pattern relative to its trading range. Let us assume that the aforementioned $60 stock has been trading between $45 and $55 for five months, broke out to a new high, and ended the last two trading days as described in Scenario One; one could speculate that support at $55 probably will be tested in the near future. If the stock, however, has been trading between $58 and $64 for five months, tested the support at $58, rallied up to $63.75, and just slipped back to $59.875, the bearish engulfing pattern described in Scenario One should have little meaning. The validity of this pattern is limited here since there was not much of an uptrend preceding the pattern and the downside risk to $58 is only $1.875 away.

From these comparisons, it should be obvious that the usefulness of candlestick patterns relies on: 1) the sizes of the candlestick components' real bodies and upper and lower shadows; 2) the relative positions of the candlesticks among themselves (gapping from one another, overlapping), for after all, that is how patterns are defined; 3) the pattern's location relative to the previous period's trading range; and 4) the size of the trading range itself. In order to find an effective candlestick pattern, these points should all be considered integral parts of the pattern definition.
[Author's Note: Although volume and open interest accompanying the candlestick patterns could be used as confirmation, the author has decided not to include either as part of the pattern definition for two reasons: 1) A stock can rally or decline without increasing volume. Volume tends to be more evident around structural breakouts and breakdowns, but not always so around early reversal points. This is especially true for thinly traded stocks, where the price can move either way quickly without much volume. Depending on volume as confirmation is impractical at times. 2) The pattern the author plans to find should be rather universal, just like the other candlestick patterns. Shooting stars are not unique to crude oil futures; nor is the bearish engulfing pattern designed only for Microsoft stock. Since some market data, such as spot currencies and interest rates, contain neither volume nor open interest information, a candlestick pattern with volume or open interest as an integral part of it will not be universally useful.]
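To tie the size and location points together, here is a sketch, in the Visual Basic used for Exhibit 5, of a bearish engulfing test that also enforces a minimum combined body size. The 5% threshold is an assumption for illustration, not a rule given in the text.

Function BearishEngulfing(open1 As Double, close1 As Double, _
                          open2 As Double, close2 As Double) As Boolean
    ' Day 1 must be white, day 2 black, and day 2's body must engulf day 1's.
    If close1 > open1 And close2 < open2 Then
        If open2 >= close1 And close2 <= open1 Then
            ' Size filter (assumed): the two bodies combined must span at
            ' least 5% of the price, so Scenario Two's $1 move on a $60
            ' stock fails while Scenario One passes.
            If (close1 - open1) + (open2 - close2) >= 0.05 * close1 Then
                BearishEngulfing = True
            End If
        End If
    End If
End Function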

Basics of Genetic Algorithms: Survival of the Fittest

Survival of the fittest: Darwin's theory of evolution remains one of the scientific theories that has had the most profound impact on humankind to date. In his theory, Darwin proposed that species evolve through natural selection; that is, species' chances of existence and their ability to procreate depend on their ability to adapt to their natural habitat. Since only the organisms fit for their natural environment survive, only the genes they carry survive. During the reproductive process, the next generation of organisms is created from the genes drawn from the surviving, and hopefully superior, gene pool. The new generation of organisms should, in theory, be even more adaptive to its environment. As some of the offspring produced will certainly be more adaptive to the environment than their peers and therefore survive, the natural selection process repeats itself, and again only superior genes will be left in the gene pool. As the process is repeated generation after generation, nature will preserve only the fittest genes and dispose of the inferior ones. It should be noted that during the process, the genes sometimes will experience certain degrees of mutation that might create combinations of genes never seen in previous generations. Mutation actually brings about a more diversified pool of genes, perhaps creating even more adaptive organisms than otherwise possible. At times in nature, an array of species evolves from the same origin, with each of them as fit for its habitat as the others in its own right.

So, what is a genetic algorithm? In plain English, a genetic algorithm is a computer program's way of finding solutions to a problem by the process of eliminating poor ones and improving on the better ones, mimicking what nature does. In constructing a genetic algorithm, one would start out by defining the problem to be solved in order to decide on the evaluation procedure, imitating the process of natural selection. Try a few possible solutions to the problem. Rank them based on their performance after applying the evaluation process. Keep only the top few solutions and let them reproduce, or mix elements of the top solutions to come up with new ones. The new solutions are then, in turn, evaluated. After a few iterations, or generations, the best solutions will prevail.

For a better explanation, we should now try a fun, practical exercise. Let us consider the process of finding that perfect recipe for a margarita. We start out with six randomly mixed glasses; record the proportion between the tequila and the lime juice and taste them. The evaluation process here is your friends' reactions. They agreed that two glasses were okay, three of them so-so, and one yielded the response, "My dog is a better bartender than you." You then mix five more glasses using proportions somewhere between those found in the two okay glasses, and throw in one wildcard, a randomly mixed sample. Now let them try picking the two best ones again. After you get your friends stone-drunk by repeating this process 20 times, you will have yourself two nice glasses of margarita. Most importantly, those two glasses should be of similar proportions of lime juice and tequila; that is, the solution to this problem converged.

Let us now review our margarita experiment. The goal was to find the best-tasting margaritas, and therefore the way to evaluate them was to taste them and rate them. Assuming that you have an objective way to total your friends' opinions of the samples, the ones that survived this somewhat natural selection, the okay glasses, will get to reproduce. The wildcard thrown in represents the mutated one. Just like mutation's importance in nature as a way of injecting new genes into the gene pool and bringing about a more diverse array of species, artificial mutations are very important in opening up more possibilities in the range of solutions for the problem we intend to solve.

In a more involved problem with more variables, the procedure should be repeated many more than 20 times for any halfway decent solutions to prevail. The more iterations the program performs, the more optimal the solution should be. In fact, if the surviving solutions do not resemble each other at all, they are probably far from optimal and require more time to evolve. What we are searching for here are sharks. Sharks, among the fastest species in the water, have been swimming in the ocean for millions of years. For generations, the faster ones survived by getting to their food faster in a feeding frenzy. Only the "fast" genes are left after all these years. As sharks' efficient hydrodynamic lines became perfect, all of them started to look alike. (At least to most of us.)

End Note: To explain the concept of genetic algorithms in an academic manner, I will turn to the principles set forth by John Holland, who pioneered genetic algorithms in 1975:
1. Evolution operates on encodings of biological entities, rather than on the entities themselves.
2. Nature tends to make more descendants of chromosomes that are more fit.
3. Variation is introduced when reproduction occurs.
4. Nature has no memory. [Author: Evolution has no intelligence built in. Nature does not learn from previous results and failures; it just selects and reproduces.]

Now the steps of the algorithm:
1. Reproduction occurs.
2. Possible modification of children occurs.
3. The children undergo evaluation by the user-supplied evaluation function.
4. Room is made for the children by discarding members of the population of chromosomes. (Most likely the weakest population members.)
Definition of the Project's Intent: Applying Genetic Algorithms to Identify Useful Candlestick Reversal Patterns

Let me now define the problem I intend to solve in this study. I intend to find one candlestick pattern that has good predictability in spotting near-term gains in future price, using the bond futures contract as my testing environment. Since I have cited sharks as the marvelous products of natural selection, I would like to call the organisms that should evolve through my study "candlesharks." The candlesharks will have genes that tell them when to signal potential gains, or "eat," if one would parallel them to the real sharks. The first-generation candlesharks should be pretty dumb and not really know when or when not to eat; maybe some are eating all the time while some simply do not move at all. As some of them overate or starved to death, the smarter ones knew how to eat right, correctly spotting potential for profit. As only the smart ones survive, they begin to preserve only the smart genes. As these smarter ones mate and reproduce, some of the next generation may contain, by chance, some even better combinations of genes, resulting in even smarter candlesharks. As the process continues, the candlesharks should evolve to be pretty smart eaters. Once in a while, some genes will mutate, creating candlesharks unlike their parents. Whether the newly injected genes will be included in the gene pool will depend on the success of these mutated creatures in adapting to their environment.

One thing that should be pointed out is that the late-generation candlesharks will probably swim better in the "bond futures pool" than in a "spot gold pool" or "equity market pool," which they have never been in before. Survival of the fittest is more like survival of the "curve-fittest" here. It should be understood that the candlestick pattern found here has evolved within the bond environment and thus is best fit for it. If thrown into the spot gold pool, these candlesharks might die like the dinosaurs did when the cold wind blew as the Ice Age hit them so unexpectedly, as one of the many theories goes. As in many cases in life, the finer a design with one purpose in mind, whether natural or artificial, the less adaptive it will be when used for other purposes. For example, a 16-gauge wire stripper, while great with 16-gauge wires, will probably do a lousy job stripping 12-gauge wires, even when compared to a basic pair of scissors, which can strip any wire, though slowly.


BUILDING A SUITABLE ENVIRONMENT FOR EVOLUTION

Defining the Genes
As described in the previous sections, there are, but are not limited to, four major deciding factors in the significance of an occurrence of a particular candlestick pattern. They include the size of the recent trading range, the current position within that range, the relative positions of the candlesticks to each other, and the sizes of those candlesticks. To successfully survive in the bond futures pool, it must be in the genes of the candlesharks to be able to distinguish variations in these environmental parameters. I have therefore designed a candleshark to possess the genes listed in Exhibit 1. Genes within chromosome C2, for example, tell the candleshark the ranges that the position and size of the candlestick of two days ago should fall within, combined with the other chromosomes' parameters, before giving a bullish signal. Actually, think of all these genes as simply tandem series of on-off switches for a candleshark to decide whether to give a bullish signal or not.

All the genes defined here come in pairs. The first, with suffix "-m," tells the candleshark the minimum of the range in question. The second, with suffix "+," signifies the width of the range. Here is one example. If RB-m of C2 is -24 and RB+ of C2 is 16, the candleshark will only give a bullish signal when the candlestick of two trading days ago has a black real body sized between 8 ticks (-24 + 16 = -8) and 24 ticks, or the contract closed between 1/4 and 3/4 of a point lower than it opened. Please keep in mind that C2 contributes only a part of the decision-making process for the candleshark. Even if the real body of two days ago fits the criteria, the other criteria have to be met as well before the candleshark will actually give a bullish signal.

The number of digits needed within each gene can be calculated. The size of a real body for a bond contract cannot be larger than 96 ticks since the daily trading limit is set at three points. A seven-bit binary number is capable of handling numbers up to 128 and is therefore needed for a gene like RB-m of C2. The 40-day trading range can be no larger than 96 ticks times 40, or 3,840 ticks. (Besides, it seems very unlikely that the bond contract would go up, or down, 3 points day after day for 40 days.) A 12-bit number, capable of handling a decimal number up to 4,096, is needed here.

As the reader might have noticed after referencing Exhibit 1, a number of trading ranges are used. As previously mentioned, the size of the recent trading range and the candle pattern's position within the range are crucial elements that might make or break the pattern. How does one define "recent," though? It then seemed obvious to me that a multitude of day ranges is needed here, much like the popular practice of using a number of moving averages to assess the crosscurrents of shorter- and longer-term trends. I have chosen to include the 5-day, 10-day, 20-day, and 40-day trading ranges, which are more or less one, two, four, and eight trading weeks, in my study. The advantage of using a multitude of day ranges can be demonstrated using two examples. Let us say a morning star, a bullish candlestick reversal pattern, was found when the price was near the bottom of both the five-day trading range and the 40-day trading range, as it would be if it were just making a new reaction low. A morning star around this level is probably less useful since the pattern is more indicative of a short-term bounce. While this morning star, a one-day pattern, could be signaling a possible turn of the trend of the last five days, it is unconvincing that this one-day pattern could signal an end to the trend that lasted at least 40 days. Let us now say that the same morning star was found near the bottom of the five-day trading range, but near the top of the 40-day trading range, as the price of a stock would be if it experienced a short-term pullback after breaking out of a longer-term trading range. This morning star now could be a signal for the investor to go long the stock. The morning stars in both examples could be of the same size, both found near the bottom of the five-day trading range, but would have significantly different implications just because they appear at different points of the 40-day trading range. It should be clear now that the inclusion of multiple trading ranges in the decision-making process can be very beneficial.

Since each gene is a binary number, a string of 0s and 1s, each will have an MSB (most significant bit, the leftmost digit, like the 3 in 39854) and an LSB (least significant bit, the rightmost digit). Since the MSB obviously has more impact on the candleshark's behavior, the common state of the more significant bits among the candlesharks is what we should closely examine after the program has let the candlesharks breed for a while. The candlesharks' main features should be similar after a while. That is, if we do have a nice batch of candlesharks to harvest, they should all have the same sets of 0s and 1s among the more significant bits within each gene. The 0s and 1s among the trailing bits are less significant by comparison, much like the curvature of an athlete's forehead should have less to do with his speed than his torso structure. The trailing bits are in the genes and do make a difference, but are basically not significant enough for us to worry about.

When we are ready to harvest our candleshark catches, as the performance improvement from one generation to the next has decreased to a very small level, we can reverse-engineer the genes to find the criteria that trigger their bullish signals. For instance, if all eight candlesharks have -00011 as the first five digits in RB-m of C2 (that means RB-m is between -00011000 and -00011111, or -24 and -31) and 00000 as the first five digits of RB+ of C2 (that means RB+ is between 0000000 and 0000011, or 0 and 3), these candlesharks would only give bullish signals when the real body of two days ago is black and between -21 and -31 in size. (The minimum = -31 + 0 = -31; the maximum = -24 + 3 = -21.)
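The decoding just described can be restated as a short sketch that mirrors the bit-conversion loop in the Evolution macro of Exhibit 5 (sign character first, then the binary value); the function names are illustrative.

Function DecodeSigned(gene As String) As Long
    ' First character is the sign ("+" or "-"); the rest is the binary value.
    Dim i As Long, value As Long
    For i = 2 To Len(gene)
        If Mid(gene, i, 1) = "1" Then value = value + 2 ^ (Len(gene) - i)
    Next i
    If Left(gene, 1) = "-" Then DecodeSigned = -value Else DecodeSigned = value
End Function

Function DecodeUnsigned(gene As String) As Long
    Dim i As Long
    For i = 1 To Len(gene)
        If Mid(gene, i, 1) = "1" Then DecodeUnsigned = DecodeUnsigned + 2 ^ (Len(gene) - i)
    Next i
End Function

Sub DecodeExample()
    Dim rbMin As Long, rbMax As Long
    rbMin = DecodeSigned("-0011000")            ' RB-m of C2 = -24
    rbMax = rbMin + DecodeUnsigned("00010000")  ' RB+ of C2 = 16, so the maximum is -8
    Debug.Print "Bullish only if the real body is between"; rbMin; "and"; rbMax; "ticks"
End Sub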



Defining the Evaluation Process

I have weighed several methods to evaluate the performance of each of the organisms. First of all, a suitable method has to be of a short-term nature since the pattern in question is basically formed in three days. It is very unlikely that, for instance, a morning star formed in three days that occurred twenty days ago has much influence on the current price. Secondly, the upward price movement that comes after the pattern has to be greater than the downward movement, at least on average. It should be realized that no matter how useful a candlestick pattern, or any technical tool, may be, there will be times when it will not have predictive ability, or will even give a downright wrong signal. It then seems reasonable that including total downward moves is essential in the evaluation process. That is, we would like to find a pattern that is not only right, and right enough most of the time, but one that will not take anyone to the cleaners when it is wrong.

What I have decided on is the average of the maximum potential gain less the maximum potential loss. First, we find the maximum potential gain by totaling all the differences between the highest of the high prices reached by the contract within the next five trading days and the closing price when the pattern gave the signal. We then find the maximum potential loss by totaling all the differences between the lowest of the low prices reached by the contract within the next five trading days and the closing price when the pattern gave the signal. The maximum loss is subtracted from the maximum gain, and the result is then divided by the number of signals generated. This ratio is the average of the maximum potential gain less the maximum potential loss that I am looking for.
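Factored out of the Evolution macro's signal-scanning loop, the evaluation ratio can be sketched as a stand-alone function; the array layout and the function name are assumptions for illustration.

Function EvalRatio(highs() As Double, lows() As Double, _
                   signalRows() As Long, signalPrices() As Double, _
                   daysPerSignal As Long) As Double
    Dim s As Long, d As Long, maxGain As Double, maxLoss As Double
    Dim totalGain As Double, totalLoss As Double, numSignals As Long
    numSignals = UBound(signalRows) - LBound(signalRows) + 1
    For s = LBound(signalRows) To UBound(signalRows)
        maxGain = 0: maxLoss = 0
        For d = 1 To daysPerSignal          ' the next five trading days
            If highs(signalRows(s) + d) - signalPrices(s) > maxGain Then
                maxGain = highs(signalRows(s) + d) - signalPrices(s)
            End If
            If signalPrices(s) - lows(signalRows(s) + d) > maxLoss Then
                maxLoss = signalPrices(s) - lows(signalRows(s) + d)
            End If
        Next d
        totalGain = totalGain + maxGain     ' maximum potential gain, per signal
        totalLoss = totalLoss + maxLoss     ' maximum potential loss, per signal
    Next s
    ' average of the maximum potential gain less the maximum potential loss
    If numSignals > 0 Then EvalRatio = (totalGain - totalLoss) / numSignals
End Function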

Defining the Reproductive Process

We desire enough permutations of the parents' genes to create diversity among the offspring. Yet too many offspring in each generation would greatly increase the processing time needed to evaluate the performance of all the offspring. After a fair amount of contemplation, I believe that eight offspring from two parents is adequate. A trial run shows that the evaluation of eight organisms with 1,500 days' worth of data requires roughly four minutes of processing time on my personal computer. That equals 1,080 generations after three straight days of processing. Since a large number of generations might be required for effective evolution, eight organisms per generation would have to do. Besides, allowing only two out of eight offspring to survive is a stringent enough elimination process. Many large mammals have fewer offspring in their lifetimes.

The second crucial element of the reproduction is the introduction of mutation. Under the principles set forth by John Holland, mutations come in two modes: binary mutation, the replacement of bits on a chromosome with randomly generated bits, and one-point crossover, the swapping of genetic material on the children at a randomly selected point. I have favored a higher level of mutation since it would introduce more diverse gene sequences into the gene pool more quickly. Both the binary mutation level and the one-point crossover level have been set to 0.1, or 10% of the time.
EVALUATION OF RESULTS

Preliminary Results and Modifications

As some people might have suspected, what I thought was a wonderful study had a pretty rocky start. The final version of the gene definition, as the readers know it, is actually the third revision. A large number of genes slows down the processing time dramatically. Having too many trading ranges defined also proved to be a waste of time.

Choosing the right gene pool to start with, much to my surprise, was in fact quite tricky. I first wrote a small routine using Visual Basic macros in Excel to randomly generate eight candlesharks. These candlesharks did indeed reproduce, and the evolution program, also written in Visual Basic, performed its duty and evaluated each one of them. After testing the programs and becoming convinced of their ability to perform their functions, I let the program run overnight, evolving 50 generations. What I found the next morning were eight candlesharks that did absolutely nothing. None of them gave any signal. As they all performed equally poorly, the two selected to reproduce were basically chosen arbitrarily. What I had come up with were the equivalents of the species that would have been extinct in nature.

The next logical step then was to use two organisms that would give me signals all the time and to write a small routine to generate six offspring from them. The eight of them would then be my starting point. This proved to be much more effective. Since the first batch of candlesharks did indeed provide me more signals than I needed, their offspring could only give an equal number of or fewer signals. As some of the children became more selective, their performance did improve. After viewing the printed results of the first seven or so generations, I was glad to see the gradual, yet steady, improvement in the ability to find bullish patterns. I again let them grow overnight.

What I found the next morning was a collection of candlesharks that gave either one or two signals. Each of the signals was wonderful, pointing to large gains without much risk. As it turned out, a few of them were simply pointing to the same date. These candlesharks basically curve-fitted themselves just to pick the days followed by the best five-day gains in the data series. They were again useless.

It then occurred to me that I had to set a minimum number of signals per organism that the candlesharks would have to meet before the program would consider evaluating them as the breeding ones. Since I was using 1,500 days', or roughly six years', worth of data, a minimum of 20 signals, or a little over three a year, should suffice.

I am glad to report that things started to look up afterward. I also decided that running the program with smaller iterations could be beneficial. I began running only five generations each time so that I could observe the progress and make any necessary adjustments. It was not until most things had been fine-tuned that I began my overnight number-crunching again.

Finding Useful Gene Sequences

I first wrote the program with the intention of letting the candlesharks evolve, and hoped to see them converge into a batch of similarly shaped creatures. In other words, I was looking for a group of consistent performers. It so happened that in the evolution program, I had coded a line to print out the gene pool with the evaluation results after every generation. This feature was first put in place as a way to monitor the time needed for each generation, and to ensure that the reproduction process was performed correctly. As it turned out, this feature had an extra benefit.

Looking through pages and pages of printouts, I every so often spotted one candleshark with excellent performance. Since a genetic algorithm is based on the principle that nature has no memory, this candleshark's excellent gene sequence could not be preserved. The only traces of the sequence are in its children, which might not be as good predictors as it was. This does occur in nature as well. Einstein's being an incredible physicist did not imply that his children would have been, too. Even if some of his genes would live on within his children, none might be as great as he was. A number of Johann Sebastian Bach's sons were outstanding composers, but none as great as he was.

With the printouts in my hands, though, I could reconstruct that one unique gene sequence. I could examine the organism by itself and let it reproduce several times to see if it could be improved upon. Better yet, I could find two great-performing candlesharks, which did not have to be of the same generation, and let them mate, something impossible in nature. Imagine the possibility of seeing the children of J.S. Bach and Clara Schumann if they were ever married, or of turning Da Vinci into a female to mate with Picasso. One might hesitate to do so in nature, but I face no moral dilemma moving a few 0s and 1s all over the place.


Evaluating Gene Sequences for Useful Patterns

Running the program from different starting points yielded varying results. One of the runs that intrigued me the most was one that ended with at least five or six organisms out of the last three generations with similar, if not identical, performance numbers. Upon more careful inspection of the signals they generated and referencing candlestick charts for the bond futures contract, one pattern seemed prominent. (For a sample of the signal results, see Exhibit 6.) The first day of the pattern shows a large black candlestick, usually with modestly sized upper and lower shadows. The second candlestick is a slightly smaller black one with more pronounced upper and lower shadows. The last candlestick is a small real body, usually white or at times a doji, with upper and lower shadows of considerable sizes as well. Very importantly, all three real bodies rarely, if at all, overlap one another.

Let us examine a few of these patterns. All the charts included here are those of the bond perpetual futures contract, a weighted moving average of the prices of the current contract and those of the forward contracts. The perpetual contract, versus the continuous contract, is ideal for long-term study of futures since it includes more than one active contract and eliminates the price gaps around contract-month switching dates that have plagued the continuous contracts.

As one can see by examining the sample charts, the last candlestick within the pattern is always near the bottom of the five-day trading range. This observation does follow from the fact that the two preceding days are both down, by definition of this pattern.

I hope that some readers will find this pattern useful in making their trading and investment decisions. I myself now have another tool in my technical tool belt, and I am planning on constantly re-evaluating the validity of this pattern by looking for it in the bond contract's future price activities.
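For readers who want to reproduce a perpetual-style price series, here is one hedged sketch of a constant-horizon weighting between the nearby and a forward contract. The exact weighting scheme used by data vendors varies, so this formula is an assumption for illustration, not the series behind the charts.

Function PerpetualPrice(nearPrice As Double, farPrice As Double, _
                        daysToNear As Long, daysToFar As Long, _
                        horizonDays As Long) As Double
    Dim w As Double
    ' The weight shifts from the nearby to the forward contract as the
    ' constant horizon moves past the nearby expiration, avoiding the
    ' rollover gaps of a simple continuous series.
    w = (daysToFar - horizonDays) / (daysToFar - daysToNear)
    If w > 1 Then w = 1
    If w < 0 Then w = 0
    PerpetualPrice = w * nearPrice + (1 - w) * farPrice
End Function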

FUTURE POSSIBILITIES

While this study has some interesting results, the possibilities are endless. This study was set to find candlestick patterns that would identify possible up moves in bond prices in the next five days. The next study could be set to find a complete entry/exit trading system with both long and short positions. A different version of the devised program could be used to find an optimal combination of technical indicators and parameters for a particular security instrument. In fact, the use of genetic algorithms as a way to optimize neural networks has been widespread.

Looking not so far out, one could even use the existing program to find other candlestick patterns. By simply changing the gene pool one starts with, very different organisms from the one we found might evolve. Much like different species in nature, of which each excels in its habitat in its own way, different candlestick patterns can be found to be effective under different situations. Change the gene pool a little and let the computer go at it. One day, your machine might find something to surprise you.

As computers become faster and faster, one day I should be able to include a large number of organisms. Using a multitude of selection/evaluation criteria, several species might evolve at the same time, just like an ecosystem in nature. I might be able to find, after numerous iterations, one pattern particularly good for a five-day forecast while another one is found to have excellent one-day forecasting ability. The possibilities are endless.

[Charts of the bond perpetual futures contract, captions only:
A Wonderful Example
Small Overlap of the First Two Candles
Small Overlap of the First Two Candles
Slightly Different Example that Developed into a Rising Three
Less Successful Example, Perhaps Due to the Larger Second Black Candle; Developed Only into a Consolidation
Overlapping of Real Bodies Again Led to a Less Timely Pattern]

EXHIBITS AND BIBLIOGRAPHY


Exhibit 1
Definition of Genes and Chromosomes

Chrom   Gene    Description                                         Format
C2      RB-m    Minimum of real body size 2 days ago.               7-bit binary with sign-bit
        RB+     Dev. from min. of real body size 2 days ago.        8-bit binary
        US-m    Minimum of upper shadow size 2 days ago.            7-bit binary
        US+     Dev. from min. of upper shadow size 2 days ago.     7-bit binary
        LS-m    Minimum of lower shadow size 2 days ago.            7-bit binary
        LS+     Dev. from min. of lower shadow size 2 days ago.     7-bit binary
C1      RB-m    Minimum of real body size 1 day ago.                7-bit binary with sign-bit
        RB+     Dev. from min. of real body size 1 day ago.         8-bit binary
        US-m    Minimum of upper shadow size 1 day ago.             7-bit binary
        US+     Dev. from min. of upper shadow size 1 day ago.      7-bit binary
        LS-m    Minimum of lower shadow size 1 day ago.             7-bit binary
        LS+     Dev. from min. of lower shadow size 1 day ago.      7-bit binary
        CH-m    Change of closing level from 2 days ago.            7-bit binary with sign-bit
        CH+     Change of closing level from 2 days ago.            8-bit binary
C0      RB-m    Minimum of current real body size.                  7-bit binary with sign-bit
        RB+     Dev. from min. of current real body size.           8-bit binary
        US-m    Minimum of current upper shadow size.               7-bit binary
        US+     Dev. from min. of current upper shadow size.        7-bit binary
        LS-m    Minimum of current lower shadow size.               7-bit binary
        LS+     Dev. from min. of current lower shadow size.        7-bit binary
        CH-m    Change of closing level from 1 day ago.             7-bit binary with sign-bit
        CH+     Change of closing level from 1 day ago.             8-bit binary
R5D     TR-m    Minimum of 5-day trading range.                     9-bit binary
        TR+     Dev. from min. of 5-day trading range.              9-bit binary
        RP-m    Minimum of position within trading range.           9-bit binary
        RP+     Dev. from min. of position within trading range.    9-bit binary
R10D    TR-m    Minimum of 10-day trading range.                    10-bit binary
        TR+     Dev. from min. of 10-day trading range.             10-bit binary
        RP-m    Minimum of position within trading range.           10-bit binary
        RP+     Dev. from min. of position within trading range.    10-bit binary
R20D    TR-m    Minimum of 20-day trading range.                    11-bit binary
        TR+     Dev. from min. of 20-day trading range.             11-bit binary
        RP-m    Minimum of position within trading range.           11-bit binary
        RP+     Dev. from min. of position within trading range.    11-bit binary
R40D    TR-m    Minimum of 40-day trading range.                    12-bit binary
        TR+     Dev. from min. of 40-day trading range.             12-bit binary
        RP-m    Minimum of position within trading range.           12-bit binary
        RP+     Dev. from min. of position within trading range.    12-bit binary

Note: All the numbers in the genes are in ticks. A tick = a 1/32 move in bond futures. Bond futures have a daily movement limit of no more than 3 points, or 96 ticks.


Exhibit 2
Test Bed: Sample Bond Futures Data Spreadsheet

Exhibit 3
Gene Pool: Sample of Binary Bits

Exhibit 4
Gene Pool: Sample of Gene in Decimals for Printing & Reading Purpose


Exhibit 5
Excel Macro Visual Basic Program: EVOLUTION

Const pop = 8, num_gene = 38, mutation_level = 0.1, crossover_level = 0.1
Const TB_sheet = "Test Bed", GP_sheet = "Gene Pool", SR_sheet = "Signal Results", GPD_sheet = "GP in Dec."
Const GP_row1 = 5, GP_col1 = 6
Const SR_row1 = 4, SR_col1 = 1
Const TB_row1 = 43, TB_col1 = 7
Const generation_rep = 5, TB_sample_size = 1500, days_per_signal = 5, min_num_signal = 20
Dim gene_signed(num_gene), gene(pop, num_gene), top_genes(2, num_gene) As String
Dim gene_len(num_gene), limits(num_gene / 2, 2) As Integer

Sub EVOLUTION()
    Set GP = Worksheets(GP_sheet)
    Set GPD = Worksheets(GPD_sheet)
    Set SR = Worksheets(SR_sheet)
    Set TB = Worksheets(TB_sheet)
    For j = 1 To num_gene
        gene_len(j) = GP.Cells(3, GP_col1 + j - 1).Value
        gene_signed(j) = GP.Cells(4, GP_col1 + j - 1).Value
    Next j
    gen_num = GP.Cells(1, 2).Value
    For generation = gen_num To gen_num + generation_rep - 1
        SR_rowcount = SR_row1
        SR.Select
        ' Clear enough area for new Signal Results coming in
        Rows(SR_row1 & ":" & pop * (Int(TB_sample_size / days_per_signal) + 2)).Select
        Selection.ClearContents
        GPD.Select
        GPD.Cells(1, 2).Value = GP.Cells(1, 2).Value
        For org = 1 To pop   ' Evaluate each organism within population
            utimate_gain = 0
            utimate_loss = 0
            total_gain = 0
            total_loss = 0
            num_signal = 0
            GPD.Cells(GP_row1 + org - 1, 1).Value = GP.Cells(GP_row1 + org - 1, 1).Value
            For i = 1 To num_gene   ' Convert binary gene bits to number limits
                gene(org, i) = GP.Cells(GP_row1 + org - 1, GP_col1 + i - 1).Value
                If gene_signed(i) = "(signed)" Then
                    If Left(GP.Cells(GP_row1 + org - 1, GP_col1 + i - 1).Value, 1) = "+" Then mult = 1 Else mult = -1
                    shift = 1
                Else
                    mult = 1
                    shift = 0
                End If
                num = 0
                For n = shift + gene_len(i) To shift + 1 Step -1
                    If Mid(gene(org, i), n, 1) = "1" Then
                        num = num + 2 ^ (gene_len(i) - n + shift)
                    End If
                Next n
                pointer = Int((i + 1) / 2)
                If i - (pointer - 1) * 2 = 1 Then
                    limits(pointer, 1) = mult * num
                Else
                    limits(pointer, 2) = limits(pointer, 1) + mult * num
                End If
            Next i
            For i = 1 To num_gene / 2
                GPD.Cells(GP_row1 + org - 1, GP_col1 + (i - 1) * 2).Value = limits(i, 1)
                GPD.Cells(GP_row1 + org - 1, GP_col1 + (i - 1) * 2 + 1).Value = limits(i, 2)
            Next i
            ' Start looking for signals
            signal = "NO"
            For i = 1 To TB_sample_size
                row_num = TB_row1 + i - 1
                curr_date = TB.Cells(row_num, 2).Value
                curr_close = TB.Cells(row_num, 6).Value
                If signal = "NO" Then
                    signal = "YES"
                    For j = 1 To num_gene / 2
                        If TB.Cells(row_num, TB_col1 + j - 1).Value < limits(j, 1) Or TB.Cells(row_num, TB_col1 + j - 1).Value > limits(j, 2) Then
                            signal = "NO"
                        End If
                    Next j
                    If signal = "YES" Then
                        SR_rowcount = SR_rowcount + 1
                        SR.Cells(SR_rowcount, 1).Value = Application.Text(generation, "0000") & "x" & Application.Text(org, "00")
                        SR.Cells(SR_rowcount, 2).Value = curr_date
                        SR.Cells(SR_rowcount, 3).Value = curr_close
                        signal_row = row_num
                        signal_price = curr_close
                        num_signal = num_signal + 1
                        max_gain = 0
                        max_loss = 0
                    End If
                Else
                    curr_high = TB.Cells(row_num, 4).Value
                    curr_low = TB.Cells(row_num, 5).Value
                    If curr_high - signal_price > max_gain Then
                        max_gain = curr_high - signal_price
                        high_date = curr_date
                        high_price = curr_high
                    End If
                    If signal_price - curr_low > max_loss Then
                        max_loss = signal_price - curr_low
                        low_date = curr_date
                        low_price = curr_low
                    End If
                    If row_num - signal_row >= days_per_signal Or i = TB_sample_size Then
                        signal = "NO"
                        SR.Cells(SR_rowcount, 4).Value = high_date
                        SR.Cells(SR_rowcount, 5).Value = high_price
                        SR.Cells(SR_rowcount, 6).Value = low_date
                        SR.Cells(SR_rowcount, 7).Value = low_price
                        total_gain = total_gain + max_gain
                        total_loss = total_loss + max_loss
                        If max_gain > utimate_gain Then utimate_gain = max_gain
                        If max_loss > utimate_loss Then utimate_loss = max_loss
                        SR.Cells(SR_rowcount, 8).Value = num_signal
                        SR.Cells(SR_rowcount, 9).Value = max_gain
                        SR.Cells(SR_rowcount, 11).Value = max_loss
                    End If
                End If
            Next i
            ' Signal summary for each organism
            SR_rowcount = SR_rowcount + 1
            SR.Cells(SR_rowcount, 1).Value = Application.Text(generation, "0000") & "x" & Application.Text(org, "00")
            SR.Cells(SR_rowcount, 2).Value = "Summary:"
            SR.Cells(SR_rowcount, 8).Value = num_signal
            SR.Cells(SR_rowcount, 9).Value = total_gain
            SR.Cells(SR_rowcount, 10).Value = utimate_gain
            SR.Cells(SR_rowcount, 11).Value = total_loss
            SR.Cells(SR_rowcount, 12).Value = utimate_loss
            SR.Cells(SR_rowcount, 13).Value = total_gain - total_loss
            GP.Cells(GP_row1 + org - 1, 2).Value = num_signal
            GP.Cells(GP_row1 + org - 1, 3).Value = total_gain
            GP.Cells(GP_row1 + org - 1, 4).Value = total_loss
            GPD.Cells(GP_row1 + org - 1, 2).Value = num_signal
            GPD.Cells(GP_row1 + org - 1, 3).Value = total_gain
            GPD.Cells(GP_row1 + org - 1, 4).Value = total_loss
            If num_signal > min_num_signal Then
                GP.Cells(GP_row1 + org - 1, 5).Value = (total_gain - total_loss) / num_signal
                GPD.Cells(GP_row1 + org - 1, 5).Value = (total_gain - total_loss) / num_signal
            Else
                GP.Cells(GP_row1 + org - 1, 5).Value = 0
                GPD.Cells(GP_row1 + org - 1, 5).Value = 0
            End If
        Next org
        ' SORTING RESULTS TO FIND THE 2 FITTEST ORGANISMS
        GP.Select
        Range(Cells(GP_row1, 1), Cells(GP_row1 + pop - 1, GP_col1 + num_gene - 1)).Select
        Selection.Sort Key1:=Range("E" & Trim(Str(GP_row1))), Order1:=xlDescending, Header:= _
            xlGuess, OrderCustom:=1, MatchCase:=False, Orientation:=xlTopToBottom
        GPD.Select
        Range(Cells(GP_row1, 1), Cells(GP_row1 + pop - 1, GP_col1 + num_gene - 1)).Select
        Selection.Sort Key1:=Range("E" & Trim(Str(GP_row1))), Order1:=xlDescending, Header:= _
            xlGuess, OrderCustom:=1, MatchCase:=False, Orientation:=xlTopToBottom
        GPD.PrintOut Copies:=1
        For org = 1 To 2
            For i = 1 To num_gene
                top_genes(org, i) = GP.Cells(GP_row1 + org - 1, GP_col1 + i - 1).Value
            Next
        Next
        ' MIXING GENES FOR THE NEXT GENERATION
        GP.Cells(1, 2).Value = generation + 1
        For org = 1 To pop
            GP.Cells(GP_row1 + org - 1, 1).Value = Application.Text(generation + 1, "0000") & "x" & Application.Text(org, "00")
            For i = 1 To num_gene - 1 Step 2
                Randomize
                which_parent = Int(Rnd * 2) + 1
                gene_string1 = top_genes(which_parent, i)
                gene_string2 = top_genes(which_parent, i + 1)
                If Rnd < mutation_level Then   ' binary mutation: flip one bit
                    gene_mutate = Int(Rnd * (gene_len(i) + gene_len(i + 1))) + 1
                    If gene_mutate <= gene_len(i) Then
                        If gene_signed(i) = "(signed)" Then shift = 1 Else shift = 0
                        gene_bit = Mid(gene_string1, gene_mutate + shift, 1)
                        If gene_bit = "0" Then
                            gene_bit = "1"
                        Else
                            gene_bit = "0"
                        End If
                        gene_string1 = Left(gene_string1, gene_mutate - 1 + shift) & gene_bit & Right(gene_string1, gene_len(i) - gene_mutate)
                    Else
                        gene_mutate = gene_mutate - gene_len(i)
                        gene_bit = Mid(gene_string2, gene_mutate, 1)
                        If gene_bit = "0" Then
                            gene_bit = "1"
                        Else
                            gene_bit = "0"
                        End If
                        gene_string2 = Left(gene_string2, gene_mutate - 1) & gene_bit & Right(gene_string2, gene_len(i + 1) - gene_mutate)
                    End If
                End If
                ' The leading apostrophe keeps the bit string stored as text
                GP.Cells(GP_row1 + org - 1, GP_col1 + i - 1).Value = "'" & gene_string1
                GP.Cells(GP_row1 + org - 1, GP_col1 + i).Value = "'" & gene_string2
            Next
        Next
        crossover = Rnd
        If crossover < crossover_level Then   ' one-point crossover between two children
            Do
                child1 = Int(Rnd * pop) + 1
                child2 = Int(Rnd * pop) + 1
            Loop While child1 = child2
            XO_point = Int(Rnd * num_gene / 2) * 2 + 1
            For j = 1 To XO_point
                new_gene = GP.Cells(GP_row1 + child1 - 1, GP_col1 + j - 1).Value
                GP.Cells(GP_row1 + child1 - 1, GP_col1 + j - 1).Value = GP.Cells(GP_row1 + child2 - 1, GP_col1 + j - 1).Value
                GP.Cells(GP_row1 + child2 - 1, GP_col1 + j - 1).Value = new_gene
            Next
        End If
    Next generation
End Sub


Exhibit 6
Sample of Signal Results

JONATHAN T. LIN, CMT


Jonathan Lin has been with Salomon Smith Barney since 1994. In his capacity as a technical research analyst there, he contributes to the weekly publications Market Interpretation and Global Technical Market Overview. Prior to Salomon, he was a technology specialist at Price Waterhouse for one year, and spent six years at Merrill Lynch as a senior programmer/analyst.
Jonathan has an MBA in management information systems from Pace University's Lubin Graduate School of Business, and a BE in electrical engineering & computer science from Stevens Institute of Technology.

BIBLIOGRAPHY

Nison, Steve. Japanese Candlestick Charting Techniques: A Contemporary Guide to the Ancient Investment Technique of the Far East. New York, N.Y.: New York Institute of Finance, 1991.

Deboeck, Guido J., editor. Trading on the Edge: Neural, Genetic and Fuzzy Systems for Chaotic Financial Markets. New York, N.Y.: John Wiley & Sons, Inc., 1994. Chapter 8 by Lawrence Davis, "Genetic Algorithms and Financial Applications."

Note: All gene definitions as described in Exhibit 1 and the Excel macro program Evolution, written in Visual Basic, are original work, and therefore no further reference is given. The author drew on his knowledge and experience as an electrical engineering and computer science major during his undergraduate years and seven years as a programmer/analyst to develop the program.
