
Anna University B.Tech BIOTECHNOLOGY

Syllabus GE1451 Total Quality Management

1. INTRODUCTION (9)

Definition of Quality, Dimensions of Quality, Quality Planning, Quality costs - Analysis Techniques
for Quality Costs, Basic concepts of Total Quality Management, Historical Review, Principles of
TQM, Leadership Concepts, Role of Senior Management, Quality Council, Quality Statements,
Strategic Planning, Deming Philosophy, Barriers to TQM Implementation.

2. TQM PRINCIPLES (9)

Customer satisfaction: Customer Perception of Quality, Customer Complaints, Service Quality,
Customer Retention; Employee Involvement: Motivation, Empowerment, Teams, Recognition and
Reward, Performance Appraisal, Benefits; Continuous Process Improvement: Juran Trilogy, PDSA
Cycle, 5S, Kaizen; Supplier Partnership: Partnering, Sourcing, Supplier Selection, Supplier Rating,
Relationship Development; Performance Measures: Basic Concepts, Strategy, Performance
Measure.

3. STATISTICAL PROCESS CONTROL (SPC) (9)

The seven tools of quality, Statistical Fundamentals: Measures of Central Tendency and Dispersion,
Population and Sample, Normal Curve, Control Charts for variables and attributes, Process
capability, Concept of Six Sigma, New Seven Management Tools.

4. TQM TOOLS (9)

Benchmarking: Reasons to Benchmark, Benchmarking Process; Quality Function Deployment
(QFD): House of Quality, QFD Process, Benefits; Taguchi Quality Loss Function; Total Productive
Maintenance (TPM): Concept, Improvement Needs; FMEA: Stages of FMEA.

5. QUALITY SYSTEMS (9)

Need for ISO 9000 and Other Quality Systems, ISO 9000:2000 Quality System Elements,
Implementation of Quality System, Documentation, Quality Auditing, QS 9000, ISO 14000
Concept, Requirements and Benefits.
UNIT I

INTRODUCTION

QUALITY DEFINED

A subjective term for which each person or sector has its own definition. In technical usage, quality
can have two meanings:

1. the characteristics of a product or service that bear on its ability to satisfy stated or implied needs;

2. a product or service free of deficiencies.

According to Joseph Juran, quality means "fitness for use."

According to Philip Crosby, it means "conformance to requirements."

DIMENSIONS OF QUALITY

The eight critical dimensions or categories of quality that can serve as a framework for strategic
analysis are: performance, features, reliability, conformance, durability, serviceability, aesthetics,
and perceived quality.

1. Performance

Performance refers to a product's primary operating characteristics. For an automobile, performance


would include traits like acceleration, handling, cruising speed, and comfort. Because this dimension
of quality involves measurable attributes, brands can usually be ranked objectively on individual
aspects of performance. Overall performance rankings, however, are more difficult to develop,
especially when they involve benefits that not every customer needs.

2. Features

Features are usually the secondary aspects of performance, the "bells and whistles" of products and
services, those characteristics that supplement their basic functioning. The line separating primary
performance characteristics from secondary features is often difficult to draw. What is crucial is that
features involve objective and measurable attributes; objective individual needs, not prejudices,
affect their translation into quality differences.

3. Reliability

This dimension reflects the probability of a product malfunctioning or failing within a specified time
period. Among the most common measures of reliability are the mean time to first failure, the mean
time between failures, and the failure rate per unit time. Because these measures require a product to
be in use for a specified period, they are more relevant to durable goods than to products or services
that are consumed instantly.
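As a quick sketch, the failure rate and MTBF defined above can be computed from operating data; all figures below are invented purely for illustration:

```python
# Hypothetical operating data for a small fleet of repairable devices:
# hours of operation and failures observed for each unit (invented figures).
operating_hours = [1200, 950, 1100, 1300, 1000]
failures = [2, 1, 2, 3, 1]

total_hours = sum(operating_hours)    # 5550 unit-hours of operation
total_failures = sum(failures)        # 9 failures in total

# Failure rate per unit time: failures per operating hour.
failure_rate = total_failures / total_hours

# Mean time between failures (for repairable items); for non-repairable
# items the analogous measure is the mean time to first failure.
mtbf = total_hours / total_failures

print(f"Failure rate: {failure_rate:.6f} failures/hour")
print(f"MTBF: {mtbf:.1f} hours")
```

Such estimates are only meaningful once the units have accumulated enough time in service, which is why these measures suit durable goods rather than instantly consumed products.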
4. Conformance

Conformance is the degree to which a product's design and operating characteristics meet established
standards. The two most common measures of failure in conformance are defect rates in the factory
and, once a product is in the hands of the customer, the incidence of service calls. These measures
neglect other deviations from standard, like misspelled labels or shoddy construction that do not lead
to service or repair.

5. Durability

A measure of product life, durability has both economic and technical dimensions. Technically,
durability can be defined as the amount of use one gets from a product before it deteriorates.
Alternatively, it may be defined as the amount of use one gets from a product before it breaks down
and replacement is preferable to continued repair.

6. Serviceability

Serviceability is the speed, courtesy, competence, and ease of repair. Consumers are concerned not
only about a product breaking down but also about the time before service is restored, the timeliness
with which service appointments are kept, the nature of dealings with service personnel, and the
frequency with which service calls or repairs fail to correct outstanding problems. In those cases
where problems are not immediately resolved and complaints are filed, a company's complaints
handling procedures are also likely to affect customers' ultimate evaluation of product and service
quality.

7. Aesthetics

Aesthetics is a subjective dimension of quality. How a product looks, feels, sounds, tastes, or smells
is a matter of personal judgement and a reflection of individual preference. On this dimension of
quality it may be difficult to please everyone.

8. Perceived Quality

Consumers do not always have complete information about a product's or service's attributes;
indirect measures may be their only basis for comparing brands. A product's durability, for example,
can seldom be observed directly; it must usually be inferred from various tangible and intangible
aspects of the product. In such circumstances, images, advertising, and brand names - inferences
about quality rather than the reality itself - can be critical.

QUALITY PLANNING

Quality Planning establishes, before production begins, the design of a product, service, or process
that will meet customer, business, and operational needs.

Quality planning follows a universal sequence of steps, as follows:

1. Identify customers and target markets


2. Discover hidden and unmet customer needs
3. Translate these needs into product or service requirements: a means to meet their needs (new
standards, specifications, etc.)

4. Develop a service or product that exceeds customers' needs

5. Develop the processes that will provide the service, or create the product, in the most
efficient way

6. Transfer these designs to the organization and the operating forces to be carried out

Upon completion of the Quality Planning steps, major customers and their vital needs have been
identified. Products have been designed with features that meet the customers' needs, and processes
have been created and put into operation to produce the desired product features.

Occasionally, an organization may find that as operations begin, some features of the product or
process design are in error, have been overlooked, or have been poorly executed, resulting in failures.
The cost of poorly performing processes in this situation fluctuates around 20%. Over time, this
becomes the chronic level of performance. What is the source of this
chronic level? In a sense, it was planned that way, not on purpose of course, but by error or
omission:

Say an organization finds that its product or process design is flawed. Its operating forces have
no skill in, or responsibility for, product or process design or redesign, making it impossible for them
to do any better. As time passes, this chronic level of waste becomes regarded as the norm. People
say, "That's just how it is in this business," thinking of it as fate, not susceptible to improvement.

With the passage of more time, almost by default, the "norm" eventually becomes designated as the
"standard," and the associated costs of poor quality are unknowingly built into the budget! In our
example, unless the COPQ (cost of poor quality) exceeds 20%, performance is not regarded as
abnormal, bad, or over budget. The organization has thus desensitized itself
from recognizing its major opportunities for bottom-line-boosting improvements. Juran would say
that the management alarm system has been disconnected.

Quality Planning enables organizations to create a product, service, or process that will be able to
meet established goals and do so under operating conditions.

COST OF QUALITY

The cost of quality.

It's a term that's widely used and widely misunderstood.

The cost of quality isn't the price of creating a quality product or service. It's the cost of NOT
creating a quality product or service.

Every time work is redone, the cost of quality increases. Obvious examples include:

The reworking of a manufactured item.


The retesting of an assembly.
The rebuilding of a tool.
The correction of a bank statement.
The reworking of a service, such as the reprocessing of a loan operation or the replacement
of a food order in a restaurant.

In short, any cost that would not have been expended if quality were perfect contributes to the cost
of quality.

Total Quality Costs

As the summary below shows, quality costs are the total of the costs incurred by:
Investing in the prevention of nonconformance to requirements.
Appraising a product or service for conformance to requirements.
Failing to meet requirements.

Quality Costs: general description

Prevention Costs: The costs of all activities specifically designed to prevent poor quality in
products or services. Examples are the costs of:

New product review
Quality planning
Supplier capability surveys
Process capability evaluations
Quality improvement team meetings
Quality improvement projects
Quality education and training

Appraisal Costs: The costs associated with measuring, evaluating or auditing products or services
to assure conformance to quality standards and performance requirements. These include the costs
of:

Incoming and source inspection/test of purchased material
In-process and final inspection/test
Product, process or service audits
Calibration of measuring and test equipment
Associated supplies and materials

Failure Costs: The costs resulting from products or services not conforming to requirements or
customer/user needs. Failure costs are divided into internal and external failure categories.

Internal Failure Costs: Failure costs occurring prior to delivery or shipment of the product, or the
furnishing of a service, to the customer. Examples are the costs of:

Scrap
Rework
Re-inspection
Re-testing
Material review
Downgrading

External Failure Costs: Failure costs occurring after delivery or shipment of the product, and
during or after furnishing of a service, to the customer. Examples are the costs of:

Processing customer complaints
Customer returns
Warranty claims
Product recalls
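The four categories above roll up into a total cost of quality. The following sketch uses invented figures purely to show the bookkeeping:

```python
# Roll up quality costs by category (all figures invented for illustration).
quality_costs = {
    "prevention":       {"quality planning": 12_000, "training": 8_000},
    "appraisal":        {"incoming inspection": 15_000, "calibration": 5_000},
    "internal failure": {"scrap": 30_000, "rework": 22_000},
    "external failure": {"warranty claims": 40_000, "product recalls": 10_000},
}

# Sum each category, then total them for the overall cost of quality.
category_totals = {cat: sum(items.values()) for cat, items in quality_costs.items()}
total_cost_of_quality = sum(category_totals.values())

for cat, total in category_totals.items():
    share = total / total_cost_of_quality
    print(f"{cat:17s} {total:>8,}  ({share:.0%})")
print(f"{'total':17s} {total_cost_of_quality:>8,}")
```

In this invented example the failure categories dominate, which is typical of the argument made in the text: money not spent on prevention and appraisal tends to reappear, multiplied, as failure cost.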

TOTAL QUALITY MANAGEMENT

Total Quality Management is a management approach that originated in the 1950s and has steadily
become more popular since the early 1980s. Total Quality is a description of the culture, attitude and
organization of a company that strives to provide customers with products and services that satisfy
their needs. The culture requires quality in all aspects of the company's operations, with processes
being done right the first time and defects and waste eradicated from operations.

Total Quality Management, TQM, is a method by which management and employees can become
involved in the continuous improvement of the production of goods and services. It is a combination
of quality and management tools aimed at increasing business and reducing losses due to wasteful
practices.

Some of the companies that have implemented TQM include Ford Motor Company, Phillips
Semiconductor, SGL Carbon, Motorola and Toyota Motor Company.

TQM Defined

TQM is a management philosophy that seeks to integrate all organizational functions (marketing,
finance, design, engineering, production, customer service, etc.) to focus on meeting customer
needs and organizational objectives.

TQM views an organization as a collection of processes. It maintains that organizations must strive
to continuously improve these processes by incorporating the knowledge and experiences of
workers. The simple objective of TQM is "Do the right things, right the first time, every time". TQM
is infinitely variable and adaptable. Although originally applied to manufacturing operations, and for
a number of years only used in that area, TQM is now becoming recognized as a generic
management tool, just as applicable in service and public sector organizations. There are a number of
evolutionary strands, with different sectors creating their own versions from the common ancestor.
TQM is the foundation for activities, which include:

1. Commitment by senior management and all employees


2. Meeting customer requirements

3. Reducing development cycle times

4. Just In Time/Demand Flow Manufacturing

5. Improvement teams

6. Reducing product and service costs

7. Systems to facilitate improvement

8. Line Management ownership


9. Employee involvement and empowerment

10. Recognition and celebration

11. Challenging quantified goals and benchmarking

12. Focus on processes / improvement plans

13. Specific incorporation in strategic planning

This shows that TQM must be practiced in all activities, by all personnel, in Manufacturing,
Marketing, Engineering, R&D, Sales, Purchasing, HR, etc.

Principles of TQM

The key principles of TQM are as follows:

1. Management Commitment
   Plan (drive, direct); Do (deploy, support, participate); Check (review); Act (recognize,
   communicate, revise)

2. Employee Empowerment
   Training; Suggestion scheme; Measurement and recognition; Excellence teams

3. Fact Based Decision Making
   SPC (statistical process control); DOE, FMEA; The seven statistical tools; TOPS (Ford 8D,
   Team Oriented Problem Solving)

4. Continuous Improvement
   Systematic measurement and focus on CONQ; Excellence teams; Cross-functional process
   management; Attain, maintain, improve standards

5. Customer Focus
   Supplier partnership; Service relationship with internal customers; Never compromise quality;
   Customer driven standards

The Concept of Continuous Improvement by TQM


TQM is mainly concerned with continuous improvement in all work, from high level strategic
planning and decision-making, to detailed execution of work elements on the shop floor. It stems
from the belief that mistakes can be avoided and defects can be prevented. It leads to continuously
improving results, in all aspects of work, as a result of continuously improving capabilities, people,
processes, technology and machine capabilities.

Continuous improvement must deal not only with improving results, but more importantly with
improving capabilities to produce better results in the future. The five major areas of focus for
capability improvement are demand generation, supply generation, technology, operations and
people capability.

A central principle of TQM is that mistakes may be made by people, but most of them are caused, or
at least permitted, by faulty systems and processes. This means that the root cause of such mistakes
can be identified and eliminated, and repetition can be prevented by changing the process.

There are three major mechanisms of prevention:

Preventing mistakes (defects) from occurring (Mistake - proofing or Poka-Yoke).

Where mistakes can't be absolutely prevented, detecting them early to prevent them being passed
down the value added chain (Inspection at source or by the next operation).

Where mistakes recur, stopping production until the process can be corrected, to prevent the
production of more defects. (Stop in time).
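As an illustration of the first two mechanisms, the sketch below mistake-proofs a hypothetical order-entry step so that invalid input cannot enter the process, and has the next operation inspect at source; all names and values are invented:

```python
# Poka-yoke sketch: invalid input is rejected at entry, so the mistake
# never becomes a defect in the process. (Hypothetical part catalog.)
VALID_PART_NUMBERS = {"A-100", "A-200", "B-150"}

def enter_order(part_number: str, quantity: int) -> dict:
    # Mistake-proofing: impossible values are blocked before work starts.
    if part_number not in VALID_PART_NUMBERS:
        raise ValueError(f"unknown part number: {part_number}")
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return {"part": part_number, "qty": quantity}

def next_operation(order: dict) -> dict:
    # Inspection at source: the next operation re-checks its input so a
    # defect is caught early rather than passed down the value-added chain.
    if order["part"] not in VALID_PART_NUMBERS or order["qty"] <= 0:
        raise ValueError("defective order passed downstream")
    return order

next_operation(enter_order("A-100", 5))   # passes both checks
```

A call such as enter_order("Z-999", 5) would raise immediately, which is the point: the error is stopped at its source instead of producing more defects downstream.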
Implementation Principles and Processes
A preliminary step in TQM implementation is to assess the organization's current reality. Relevant
preconditions have to do with the organization's history, its current needs, precipitating events
leading to TQM, and the existing employee quality of working life. If the current reality does not
include important preconditions, TQM implementation should be delayed until the organization is in
a state in which TQM is likely to succeed.

If an organization has a track record of effective responsiveness to the environment, and if it has
been able to successfully change the way it operates when needed, TQM will be easier to implement.
If an organization has been historically reactive and has no skill at improving its operating systems,
there will be both employee skepticism and a lack of skilled change agents. If this condition prevails,
a comprehensive program of management and leadership development may be instituted. A
management audit is a good assessment tool to identify current levels of organizational functioning
and areas in need of change. An organization should be basically healthy before beginning TQM. If
it has significant problems such as a very unstable funding base, weak administrative systems, lack
of managerial skill, or poor employee morale, TQM would not be appropriate.

However, a certain level of stress is probably desirable to initiate TQM. People need to feel a need
for a change. Kanter addresses this phenomenon by describing building blocks which are present in
effective organizational change. These forces include departures from tradition, a crisis or
galvanizing event, strategic decisions, individual "prime movers," and action vehicles. Departures
from tradition are activities, usually at lower levels of the organization, which occur when
entrepreneurs move outside the normal ways of operating to solve a problem. A crisis, if it is not too
disabling, can also help create a sense of urgency which can mobilize people to act. In the case of
TQM, this may be a funding cut or threat, or demands from consumers or other stakeholders for
improved quality of service. After a crisis, a leader may intervene strategically by articulating a new
vision of the future to help the organization deal with it. A plan to implement TQM may be such a
strategic decision. Such a leader may then become a prime mover, who takes charge in championing
the new idea and showing others how it will help them get where they want to go. Finally, action
vehicles are needed and mechanisms or structures to enable the change to occur and become
institutionalized.

In summary, first assess preconditions and the current state of the organization to make sure the need
for change is clear and that TQM is an appropriate strategy. Leadership styles and organizational
culture must be congruent with TQM. If they are not, this should be worked on or TQM
implementation should be avoided or delayed until favorable conditions exist.

Remember that this will be a difficult, comprehensive, and long-term process. Leaders will need to
maintain their commitment, keep the process visible, provide necessary support, and hold people
accountable for results. Use input from stakeholders (clients, referring agencies, funding sources, etc.)
wherever possible; and, of course, maximize employee involvement in design of the system.

Always keep in mind that TQM should be purpose driven. Be clear on the organization's vision for
the future and stay focused on it. TQM can be a powerful technique for unleashing employee
creativity and potential, reducing bureaucracy and costs, and improving service to clients and the
community.

TQM encourages participation amongst shop floor workers and managers. There is no single
theoretical formalization of total quality, but Deming, Juran and Ishikawa provide the core
assumptions, as a "...discipline and philosophy of management which institutionalizes planned and
continuous... improvement ... and assumes that quality is the outcome of all activities that take place
within an organization; that all functions and all employees have to participate in the improvement
process; that organizations need both quality systems and a quality culture."

THE HISTORY OF QUALITY

The quality movement can trace its roots back to medieval Europe, where craftsmen began
organizing into unions called guilds in the late 13th century.

Until the early 19th century, manufacturing in the industrialized world tended to follow this
craftsmanship model. The factory system, with its emphasis on product inspection, started in Great
Britain in the mid-1750s and grew into the Industrial Revolution in the early 1800s.

In the early 20th century, manufacturers began to include formal quality processes in their quality practices.

After the United States entered World War II, quality became a critical component of the war effort:
Bullets manufactured in one state, for example, had to work consistently in rifles made in another.
The armed forces initially inspected virtually every unit of product; then to simplify and speed up
this process without compromising safety, the military began to use sampling techniques for
inspection, aided by the publication of military-specification standards and training courses in Walter
Shewhart's statistical process control techniques.

The birth of total quality in the United States came as a direct response to the quality revolution in
Japan following World War II. The Japanese welcomed the input of Americans Joseph M. Juran and
W. Edwards Deming and rather than concentrating on inspection, focused on improving all
organizational processes through the people who used them.

By the 1970s, U.S. industrial sectors such as automobiles and electronics had been broadsided by
Japan's high-quality competition. The U.S. response, emphasizing not only statistics but approaches
that embraced the entire organization, became known as total quality management (TQM).

By the last decade of the 20th century, TQM was considered a fad by many business leaders. But
while the use of the term TQM has faded somewhat, particularly in the United States, its practices
continue.

In the few years since the turn of the century, the quality movement seems to have matured beyond
Total Quality. New quality systems have evolved from the foundations of Deming, Juran and the
early Japanese practitioners of quality, and quality has moved beyond manufacturing into service,
healthcare, education and government sectors.

TOTAL QUALITY

The birth of total quality in the United States was in direct response to a quality revolution in Japan
following World War II, as major Japanese manufacturers converted from producing military goods
for internal use to producing civilian goods for trade.

At first, Japan had a widely held reputation for shoddy exports, and their goods were shunned by
international markets. This led Japanese organizations to explore new ways of thinking about quality.
Deming, Juran and Japan

The Japanese welcomed input from foreign companies and lecturers, including two American quality
experts:

W. Edwards Deming, who had become frustrated with American managers when most programs for
statistical quality control were terminated once the war and government contracts came to an end.

Joseph M. Juran, who predicted the quality of Japanese goods would overtake the quality of goods
produced in the United States by the mid-1970s because of Japan's revolutionary rate of quality
improvement.

Japan's strategies represented the new "total quality" approach. Rather than relying purely on
product inspection, Japanese manufacturers focused on improving all organizational processes
through the people who used them. As a result, Japan was able to produce higher-quality exports at
lower prices, benefiting consumers throughout the world.

American managers were generally unaware of this trend, assuming any competition from the
Japanese would ultimately come in the form of price, not quality. In the meantime, Japanese
manufacturers began increasing their share in American markets, causing widespread economic
effects in the United States: Manufacturers began losing market share, organizations began shipping
jobs overseas, and the economy suffered unfavorable trade balances. Overall, the impact on
American business jolted the United States into action.

The American Response

At first U.S. manufacturers held on to their assumption that Japanese success was price-related,
and thus responded to Japanese competition with strategies aimed at reducing domestic production
costs and restricting imports. This, of course, did nothing to improve American competitiveness in
quality.

As years passed, price competition declined while quality competition continued to increase. By the
end of the 1970s, the American quality crisis reached major proportions, attracting attention from
national legislators, administrators and the media. A 1980 NBC-TV News special report, "If Japan
Can... Why Can't We?" highlighted how Japan had captured the world auto and electronics markets.
Finally, U.S. organizations began to listen.

The chief executive officers of major U.S. corporations stepped forward to provide personal
leadership in the quality movement. The U.S. response, emphasizing not only statistics but
approaches that embraced the entire organization, became known as Total Quality Management
(TQM).

Several other quality initiatives followed. The ISO 9000 series of quality-management standards, for
example, was published in 1987. The Baldrige National Quality Program and Malcolm Baldrige
National Quality Award were established by the U.S. Congress the same year. American companies
were at first slow to adopt the standards but eventually came on board.

BEYOND QUALITY
As the 21st century begins, the quality movement has matured. New quality systems have evolved
beyond the foundations laid by Deming, Juran and the early Japanese practitioners of quality.

Some examples of this maturation:

In 2000 the ISO 9000 series of quality management standards was revised to increase emphasis on
customer satisfaction.

Beginning in 1995, the Malcolm Baldrige National Quality Award added a business results criterion
to its measures of applicant success.

Six Sigma, a methodology developed by Motorola to improve its business processes by minimizing
defects, evolved into an organizational approach that achieved breakthroughs and significant
bottom-line results. When Motorola received a Baldrige Award in 1988, it shared its quality practices
with others.

Quality function deployment was developed by Yoji Akao as a process for focusing on customer
wants or needs in the design or redesign of a product or service.

Sector-specific versions of the ISO 9000 series of quality management standards were developed for
such industries as automotive (QS-9000), aerospace (AS9000) and telecommunications (TL 9000
and ISO/TS 16949) and for environmental management (ISO 14000).

Quality has moved beyond the manufacturing sector into such areas as service, healthcare, education
and government.

The Malcolm Baldrige National Quality Award has added education and healthcare to its original
categories: manufacturing, small business and service. Many advocates are pressing for the adoption
of a nonprofit organization category as well.

QUALITY COUNCIL

1. Coordinates all corporate efforts relating to TQM
2. Includes members from each unit of the company
3. Develops strategic plans
4. Addresses key questions for implementation
5. Is responsible for the TQM mission

STRATEGIC PLANNING

In almost any endeavor, quality can serve as a core value that sets the expectations for performance.
For a community, the issue may be the quality of living. For a business, it may be the quality of
products or services, or work life within an organization. In health care, quality might focus on
correct diagnosis and treatment of illnesses. And quality is a key factor for nonprofit organizations in
their ability to achieve their missions.

In each case, a process exists for synthesizing strategic planning and quality. While people often
have an intuitive grasp of quality's importance in an organization's goals, they may not know the
specific steps they can take to bring quality into clear focus. The steps discussed below will enhance
an organization's ability to create the quality future its leaders, customers and employees seek.

Outlining the process

Several broad steps comprise the strategic planning process for any organization (see Figure 1). The
process always begins with the customers in mind: who these customers are, what they want and
what trends will impact them in the future.

Having established the customers' perspective, an organization must determine where it should be in
relation to its customers. Should it keep the same ones? Expand into new customer areas? Give up,
maintain or expand market positions? Quality should be a major factor in answering these questions.
A successful organization will focus on its areas of excellence, where it provides high-quality service
and products to customers in the most profitable manner. Endeavors where quality is low, scrap costs
are high or customers have complaints must either be targeted for change or abandoned as untenable.

When an organization has clarified its position regarding its customers, it should next work on
internal vision. An organization's structure may need to change in order to achieve the desired
outcome. At this time, the organization's planners also must attend to the trends and future
conditions that will impact the organization. Depending upon the organization's mission,
demographic studies, long-term economic forecasts, technical projections and a quick review of the
company's history all can help assess the future.

Once the planners envision what the organization will become, they must identify where the
organization is in relation to where it wants to be. Planners then should determine the specific steps
required to get there. This often becomes the hardest part of the planning process. Numerous day-to-
day forces will divert attention and resources away from the strategic plan. Ingrained behaviors must
be resisted in order to take new steps toward achieving the vision.
Implementing the steps in the plan means taking action and assessing its effectiveness. Undoubtedly,
any change will meet with resistance, which must be overcome. Periodic reassessments of the
strategic plan will be necessary to ensure the vision is still accurate. As demonstrated in Figure 1, the
strategic planning process becomes a cycle within the organization.

Creating a planning team

An organization's management must initiate the strategic planning process and make sure it
continues. However, the actual planning is best carried out by a team, ideally a diagonal
cross-section of people within the organization to ensure that all interests are met.

Such a team also provides the best information about what is really happening in the organization. A
planning team consisting of only senior-level managers runs the risk of filtering out vital information
and perspectives from middle levels and those who perform hands-on work or have direct contact
with customers. At the same time, the team must have senior-level people who can provide a broad
view of the organization's situation. For example, the Cadillac Quality Council is co-chaired by an
international representative of the United Auto Workers and involves people from a broad range of
perspectives.

In small organizations, it may be possible to assemble everybody in one room at the same time. A
facilitator can lead the process and aim for total involvement and commitment from the whole group.
In large businesses, the planning team must include senior and middle management, front-line
supervision and people who work directly on production or with customers. If the organization has a
union, then union leaders also should be invited to participate in the process. In social service
organizations, managers and service providers must be involved, along with people who can speak
for the service recipients -- i.e., the customers.

The quality of the planning process is enhanced by the planning group's diversity. Care must be
taken to ensure that a planning team is diverse, within the context of the organization. An effective
team will offer a balance of perspectives from within and without the organization. It's important to
include perspectives from men and women, newcomers and established employees, people from
different racial or ethnic backgrounds, and different age groups. The plan's validity is strengthened
by the assurance that all perspectives are involved in the creative process.

The quality manager's role

While an organization's senior managers typically are responsible for achieving the organization's
mission -- and thereby its standards for quality -- many organizations now employ quality managers.

The quality manager's role varies greatly from one organization to the next. In successful
organizations, the quality manager works closely with senior managers and serves as a vocal
proponent for quality. He or she should be a key player in the organization's strategic planning by
championing the process.

Examples abound of successful organizations in which the quality manager plays a strong role as
quality advocate among the senior managers. At the Ritz-Carlton Hotel Co., a 1992 Baldrige Award
winner, the vice president of quality drives the quality emphasis. At Xerox Corp., a 1989 Baldrige
winner, the director of corporate quality champions the Xerox 2000, Leadership Through Quality
program. At IBM Rochester, a 1990 Baldrige winner, the director of market-driven quality leads the
company in its efforts to be customer-driven.1 These people must get beyond the numerous
roadblocks and initiate the quality process.

Once the quality manager obtains an agreement to initiate a strategic planning process, he or she
must assemble the right mix of people to form a planning team. The quality manager must obtain
assurance from other managers that team members will be given adequate time to work on this
process.

The quality manager will either facilitate the process or ensure that an experienced facilitator leads
it. Often, the quality manager needs to participate as a team member, in which case it's advisable to
employ a neutral facilitator who will focus on the planning process.

If a skilled facilitator is unavailable inside the organization, the quality manager can often find
someone through the local chapter of the American Society for Quality. In some communities, the
Chamber of Commerce can recommend people. Many community colleges and regional universities
have faculty members who also can serve as neutral facilitators.

Quality managers often are far ahead of their organizations in taking a proactive attitude toward
quality. Because they attend quality conferences, read quality books and sit in meetings for quality
professionals, they may become frustrated by the rate of progress within their own organizations.
At the same time, it's easy for quality managers to be considered overzealous by other managers.

The participative process of developing a strategic plan provides quality managers with an excellent
mechanism to bring these managers up to speed. For example, while the quality manager sponsors
the process, line managers develop an ownership of the content. This helps quality managers avoid
becoming isolated from their organizations over time and provides further leverage for
implementing the quality program to benefit the organization.

The quality attitude

There is fairly universal agreement among quality experts that effective organizations will wrap
quality into their strategic planning process as a vital element. When quality is left as an
afterthought, it can't be built effectively into the organization; the strategic plan must be built around
quality planning.

This point is best illustrated by looking at three companies that have won the Baldrige Award.
Cadillac Motor Car Co. maintains that "the business plan is the quality plan," and the business
planning process is focused on continuously improving the quality of Cadillac products, processes
and services.2 Senior leadership at the Wallace Co. develops annual quality strategic objectives that
drive the quality process.3 At AT&T Universal Card Services, the organization integrated its total
quality management objectives into its strategic and business plans. "Our planning process is driven
by our vision, values and culture, and allows us to maneuver rapidly and effectively in a dynamic
business environment,"4 it declares.

Any organization that doesn't respect quality as a strategic principle will fail. Perhaps the best
example in this context is the quality circle movement. There was nothing wrong with quality
circles, and some excellent training procedures were developed to support them. However, when
proponents attempted to graft quality circles onto existing structures that didn't value quality, the
endeavor always failed. Unless an organization came to accept quality as an essential factor in its
success or failure, the quality circle was doomed to be a passing fad.5

Many excellent people have burned out striving to bring the quality revolution to their organizations.
Because the organizations didn't embrace quality as a fundamental value, any efforts to bring quality
in through the back door inevitably met with limited success until a fundamental breakthrough in
thinking could occur. It has taken much effort within many organizations to establish quality's
current level of success. This evolution will continue as organizations integrate quality into their
strategic planning process and begin operating at entirely new levels.

DEMING'S 14 PRINCIPLES

1. Create constancy of purpose for improving products and services.


2. Adopt the new philosophy.

3. Cease dependence on inspection to achieve quality.

4. End the practice of awarding business on price alone; instead, minimize total cost by working
with a single supplier.

5. Improve constantly and forever every process for planning, production and service.

6. Institute training on the job.

7. Adopt and institute leadership.

8. Drive out fear.

9. Break down barriers between staff areas.

10. Eliminate slogans, exhortations and targets for the workforce.

11. Eliminate numerical quotas for the workforce and numerical goals for management.

12. Remove barriers that rob people of pride of workmanship, and eliminate the annual rating or
merit system.

13. Institute a vigorous program of education and self-improvement for everyone.

14. Put everybody in the company to work accomplishing the transformation.


UNIT II

TQM PRINCIPLES

CUSTOMER SATISFACTION

Organizations of all types and sizes have come to realize that their main focus must be to satisfy
their customers.

This applies to industrial firms, retail and wholesale businesses, government bodies, service
companies, nonprofit organizations and every subgroup within an organization.

Two important questions:

Who are the customers?

What does it take to satisfy them?

Who Are the Customers?

Customers include anyone the organization supplies with products or services. The table below
illustrates some supplier-customer relationships. (Note that many organizations are simultaneously
customers and suppliers.)

Supplier-customer relationship examples

Supplier | Customer | Product or Service

Automobile manufacturer | Individual customers | Cars
Automobile manufacturer | Car dealer | Sales literature, etc.
Bank | Checking account holders | Secure check handling
High school | Students and parents | Education
County recorder | Residents of county | Maintenance of records
Hospital | Patients | Healthcare
Hospital | Insurance company | Data on patients
Insurance company | Hospital | Payment for services
Steel cutting department | Punch press department | Steel sheets
Punch press department | Spot weld department | Shaped parts
All departments | Payroll department | Data on hours worked, etc.

What Does It Take to Satisfy Customers?

Don't assume you know what the customer wants. There are many examples of errors in this area,
such as "new" Coke and car models that didn't sell. Many organizations expend considerable time,
money and effort determining the "voice of the customer," using tools such as customer surveys,
focus groups and polling.

Satisfying the customer includes providing what is needed when it's needed. In many situations, it's
up to the customer to provide the supplier with requirements. For example, the payroll department
should inform other departments of the exact format for reporting the numbers of hours worked by
employees. If the payroll department doesn't do this job properly, it bears some responsibility for the
variation in reporting that will occur.

CUSTOMER SATISFACTION, CUSTOMER LOYALTY AND CUSTOMER RETENTION

A large factor in determining the likelihood of success and profits in an organization is customer
satisfaction. When there is customer loyalty the customer retention rate is high and business results
tend to follow.

Customer Satisfaction
Desired business results interact closely with customer satisfaction, customer loyalty and customer
retention. Customers may go by other names, such as patients, clients or buyers, but without the
customer it is impossible for any business to sustain itself. Achieving the desired results frequently
follows from customer actions. Any business without a focus on customer satisfaction is at the
mercy of the market: without loyal customers, a competitor will eventually satisfy those desires and
your customer retention rate will decrease.

There are several levels of customers:


1. Dissatisfied customer -- looking for someone else to provide the product or service.
2. Satisfied customer -- open to the next better opportunity.
3. Loyal customer -- returns despite offers by the competition.

Dissatisfied customers are an interesting group.


For every one that complains there are at least 25 who do not.
Dissatisfied customers will tell eight to sixteen others about their dissatisfaction by word of
mouth; with the web, some are now telling thousands.
91% of dissatisfied customers never purchase goods or services from the company again.
A prompt effort to resolve a dissatisfied customer's issue will retain about 85% of them as
repeat customers.

Depending upon the business, a sale to a new customer may cost 4 to 100 times as much as a sale
to an existing customer.
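The figures above can be turned into rough arithmetic. This sketch uses the text's percentages; the 10x cost multiplier is an assumed point inside the stated 4-to-100x range, and the base cost per sale is an arbitrary illustrative number:

```python
# Rough arithmetic on the retention figures quoted above. The 10x cost
# multiplier is an assumed point inside the text's 4-to-100x range, and the
# base cost of an existing-customer sale is an arbitrary illustrative figure.
never_return = 0.91              # dissatisfied customers who never come back
recovered_if_resolved = 0.85     # retained after a prompt resolution effort

cost_existing_sale = 100.0                 # assumed cost per existing-customer sale
cost_new_sale = 10 * cost_existing_sale    # assumed multiplier (range: 4x to 100x)

# Out of 100 dissatisfied customers, expected repeat customers with and
# without a prompt complaint-resolution effort.
without_effort = 100 * (1 - never_return)
with_effort = 100 * recovered_if_resolved

print(f"repeat customers without a resolution effort: {without_effort:.0f}")
print(f"repeat customers with prompt resolution:      {with_effort:.0f}")
print(f"replacing one lost customer vs keeping one:   "
      f"{cost_new_sale:.0f} vs {cost_existing_sale:.0f}")
```

Even under these assumed costs, the gap between 9 and 85 retained customers per 100 shows why a recovery process pays for itself.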

There has been less research on satisfied customers to determine what it takes for a satisfied
customer to change. Why take a chance on mere satisfaction? Loyal customers don't leave even for
an attractive offer elsewhere. At the very minimum they will give you the opportunity to meet or
beat the other offer. Maintaining loyal customers is an integral part of any business.

One of the ways to help obtain loyal customers is by having products and services that are so good
that there is very little chance that the customer requirements will not be met.

Of course, one of the difficulties is understanding the true customer requirements. Even when you
have the requirements in advance, the customer can and will change them without notice or excuse.
Having a good recovery process for a dissatisfied customer is a necessity.

Several surveys have been done on why customers do not give a business their repeat business.
Reasons given by customers for not returning:
Moved 3%
Other Friendships 5%
Competition 9%
Dissatisfaction 14%
Employee Attitude 68%

These surveys indicate that, in addition to the technical training and job-skill training provided to
employees, some effort aimed at customer satisfaction and employee attitude is appropriate.
Remember, these may not be the people normally thought of as "sales people". Managers,
supervisors, secretaries, accounts payable, engineers, accountants, designers, machine operators,
security, truck drivers, loading dock staff, etc., if not helping to cultivate loyal customers, are
hurting your customer retention. 68% of lost customers are due to one cause: employee attitude!
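The survey percentages above can be ranked as a simple Pareto analysis, one of the seven classic quality tools. The figures are those quoted in the text (they sum to 99%); the code only reorders them:

```python
# Pareto ranking of the customer-loss survey figures quoted above
# (one of the seven basic quality tools applied to this data).
reasons = {
    "Employee attitude": 68,
    "Dissatisfaction": 14,
    "Competition": 9,
    "Other friendships": 5,
    "Moved": 3,
}

total = sum(reasons.values())   # the published figures sum to 99%
cumulative = 0
for reason, pct in sorted(reasons.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += pct
    print(f"{reason:20s} {pct:3d}%  cumulative {100 * cumulative / total:5.1f}%")
```

Ranked this way, employee attitude alone accounts for roughly two-thirds of lost customers, which is where a Pareto chart would tell you to act first.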

In order to know how you are doing in this area there must be some
measurement. Data indicate that less than 4% of dissatisfied customers
ever bother to lodge a complaint. Most just take their business elsewhere.
Test this on yourself. The next time you get less than what you consider ideal
at a store, business supplier, restaurant, movie theatre, hotel, or any
other business what do you do?

Cultivating the customer relationship is key in achieving the desired business results. A passive
system that depends upon your customers to inform you without effort on your part is not likely to
yield the information necessary to improve customer retention.

Business Results

Every organization wants to achieve some level of results. Unfortunately, too often not everyone in
the organization has the same results in mind. Having agreement on the desired results tends to
focus efforts. Even if people agree on the basic description of the results, they may disagree on how
to measure achievement of those results.

Understand what the desired results are.


Some common result areas:

Sales volume
Profit before tax
Market share
Earnings per share
Repeat business %
New customers %
Cash Flow

Cycle Time
Patents issued
Safety performance
Returns
Warranty claims
Environmental performance
Defect level
Scrap
First pass prime
Cost per unit produced
Debt to equity
Many others

Agree on how to measure the result area.


Is the measurement system capable of producing numbers that are useful for
the intended result area?
Do all of the affected people have confidence in the measurement system?
Can numbers be generated quickly enough to be useful?
Does the measurement depend upon the level of the result?
Generally measurements over a continuum (e.g. % completion) are more
useful than yes/no (e.g. done/not done) type measurement.

EMPLOYEE EMPOWERMENT

Employee empowerment is based on the belief that employees have the ability to take on more
responsibility and authority than traditionally has been given to them, and that heightened
productivity and a better quality of work life will result.

Different words and phrases are used to define employee empowerment, but most are variations on a
theme: to provide employees with the means for making influential decisions.

Employee empowerment means different things in different organizations, based on culture and
work design. However, empowerment is based on the concepts of job enlargement and job
enrichment.

Job enlargement: Changing the scope of the job to include a greater portion of the
horizontal process.
Example: A bank teller not only handles deposits and disbursements, but also distributes
traveler's checks and sells certificates of deposit.
Job enrichment: Increasing the depth of the job to include responsibilities that have
traditionally been carried out at higher levels of the organization.
Example: The teller also has the authority to help a client fill out a loan application, and to
determine whether or not to approve the loan.

As these examples show, employee empowerment requires:

Training in the skills necessary to carry out the additional responsibilities.


Access to information on which decisions can be made.
Initiative and confidence on the part of the employee to take on greater responsibility.

Employee empowerment also means giving up some of the power traditionally held by management,
which means managers also must take on new roles, knowledge and responsibilities.

It does not mean that management relinquishes all authority, totally delegates decision-making and
allows operations to run without accountability. It requires a significant investment of time and effort
to develop mutual trust, assess and add to individuals' capabilities and develop clear agreements
about roles, responsibilities, risk taking and boundaries.

What does an empowered organizational structure look like?

Employee empowerment often also calls for restructuring the organization to reduce levels of the
hierarchy or to provide a more customer- and process-focused organization.

Employee empowerment is often viewed as an inverted triangle of organizational power. In the
traditional view, management is at the top while customers are on the bottom; in an empowered
environment, customers are at the top while management is in a support role at the bottom.

TEAMS

A team is a group of people who perform interdependent tasks to work toward a common mission.

Some teams have a limited life: for example, a design team developing a new product, or a process
improvement team organized to solve a particular problem. Others are ongoing, such as a department
team that meets regularly to review goals, activities and performance.

Understanding the many interrelationships that exist between organizational units and processes, and
the impact of these relationships on quality, productivity and cost, makes the value of teams
apparent.

Types of Teams

Many of today's team concepts originated in the United States during the 1970s, through the use of
quality circles or employee involvement initiatives. But the initiatives were often seen as separate
from normal work activities, not as integrated with them.

Team designs have since evolved into a broader concept that includes many types of teams formed
for different purposes.

Three primary types of teams are typically used within the business environment:

Process improvement teams are project teams that focus on improving or developing specific
business processes. These teams come together to achieve a specific goal, are guided by a
well-defined project plan and have a negotiated beginning and end.

Work groups, sometimes called natural teams, have responsibility for a particular process (for
example, a department, a product line or a stage of a business process) and work together in a
participative environment. The degree of authority and autonomy of the team can range from
relatively limited to full self-management. The participative approach is based on the belief that
employees will be more productive if they have a higher level of responsibility for their work.

Self-managed teams directly manage the day-to-day operation of their particular process or
department. They are authorized to make decisions on a wide range of issues (for example, safety,
quality, maintenance, scheduling and personnel). Their responsibilities also include processes
traditionally held by managers, such as goal-setting, allocation of assignments and conflict
resolution.

The Value of Teams

Team processes offer the following benefits to the organization:

Synergistic process design or problem solving.
Objective analysis of problems or opportunities.
Promotion of cross-functional understanding.
Improved quality and productivity.
Greater innovation.
Increased commitment to organizational mission.
More flexible response to change.
Increased ownership and stewardship.
Reduced turnover and absenteeism.

Individuals can gain the following benefits from teams:

Enhanced problem-solving skills.
Increased knowledge of interpersonal dynamics.
Broader knowledge of business processes.
New skills for future leadership roles.
Increased quality of work life.
Feelings of satisfaction and commitment.
A sense of being part of something greater than what one could accomplish alone.

CONTINUOUS IMPROVEMENT

Continuous improvement is an ongoing effort to improve products, services or processes. These
efforts can seek incremental improvement over time or breakthrough improvement all at once.

Among the most widely used tools for continuous improvement is a four-step quality model: the
plan-do-check-act (PDCA) cycle, also known as the Deming cycle or Shewhart cycle:

Plan: Identify an opportunity and plan for change.

Do: Implement the change on a small scale.

Check: Use data to analyze the results of the change and determine whether it made a difference.

Act: If the change was successful, implement it on a wider scale and continuously assess your
results. If the change did not work, begin the cycle again.

Methods of continuous improvement emphasize employee involvement and teamwork; measuring
and systematizing processes; and reducing variation, defects and cycle times.

Continuous or Continual?
The terms continuous improvement and continual improvement are frequently used interchangeably.
But some quality practitioners make the following distinction:

Continual improvement: a broader term preferred by W. Edwards Deming to refer to general
processes of improvement, encompassing discontinuous improvements, that is, many different
approaches covering different areas.

Continuous improvement: a subset of continual improvement, with a more specific focus on linear,
incremental improvement within an existing process. Some practitioners also associate continuous
improvement more closely with techniques of statistical process control.

JURAN TRILOGY

Deploying the Trilogy enables an organization to maximize customer satisfaction (by economically
producing ideal product features) and minimize dissatisfaction (by reducing or eliminating
deficiencies and the costs of poor quality, the waste associated with deficiencies), leading to greater
business results.

It focuses on changing the culture of an enterprise. It empowers the employees to:

Be proactive in understanding customer needs and in satisfying them

Provide high quality services and products to customers, while improving the efficiency (lean) of the
enterprise itself. In other words, the Juran Management System (JMS) will enable a company to
improve quality while simultaneously reducing costs

Become information-driven and solve problems faster with data

Stay involved in meeting customer needs

View management as a quality leader

Reduce the costs of non-performing processes

Utilizing the Juran Trilogy requires management to understand the following three key
methodologies:

The Planning Methodology: This methodology develops and puts in place the strategic and tactical
goals that must be achieved in order to attain financial, operational, and quality results. Setting
organizational goals is called strategic planning. Next, there is the planning of new goods and
services, which must take into account customer needs to achieve customer satisfaction. We refer to
this as the quality planning (product and process design) process. The umbrella term planning is
used to refer to the activities carried out in preparation to do something. Quality Planning
establishes, among other things, specific standards/specifications for specific products and processes.
Financial planning sets out the financial goals and the means to achieve them.

The Control Methodology: The second management methodology is utilized to prevent or correct
unwanted or unexpected change. This process is known as control. More precisely, control consists
of measuring actual performance, comparing it to the target or standard, and taking necessary action
to correct the (bad) difference. Control maintains the standards/requirements defined during the
planning stage. Its goal is stability and consistency.
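The control sequence just described (measure actual performance, compare it to the target, correct the difference) can be sketched as a simple feedback check. The fill-weight target and tolerance below are hypothetical, not values from the text:

```python
# A minimal sketch of the control sequence: measure actual performance,
# compare it to the standard set during planning, and act on the difference.
# The fill-weight target and tolerance are illustrative assumptions.

def control_step(measured: float, target: float, tolerance: float) -> str:
    """Compare one measurement against the standard and decide on action."""
    gap = measured - target
    if abs(gap) <= tolerance:
        return "hold"      # within standard: maintain stability and consistency
    return "correct"       # outside standard: take action to correct the difference

# Example: a fill-weight standard of 500 g with a +/- 5 g tolerance.
for weight in (498.0, 503.5, 509.2):
    print(weight, "->", control_step(weight, target=500.0, tolerance=5.0))
```

The point of the sketch is that control only maintains the standard; moving the target itself is the job of the improvement (breakthrough) methodology described next.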

The Improvement Methodology: The third methodology constructs a breakthrough system to create
planned, predictable, and managed change. This process is called breakthrough. Breakthrough is a
deliberate change; a dynamic and decisive movement to levels of organizational performance higher
than those presently specified in the plan and maintained by current controls. Breakthrough results
in achieving higher targets, meeting competitive standards and specifications, reducing waste,
reducing cost, and offering better products and services to customers.

The Planning, Control, and Breakthrough methodologies are essential to organizational vitality.
Proper integration is necessary in order to achieve sustainable breakthrough performance. This
requires managers to carry out and accomplish all three methodologies at different times, levels, and
scopes. In doing so, they make economic decisions on behalf of the welfare of the organization. Each
method arises from a specific prerequisite for company survival, and fulfills a vital purpose and
function. Each entails its own distinctive sequence of events, tools, and techniques, while
simultaneously interacting and building upon one another. Without Planning, Control, and
Breakthrough, an organization would be incapable of prolonging its existence. In other words, no
organization will survive if these functions are not performed.

PLAN-DO-CHECK-ACT CYCLE

Also called: PDCA, plan-do-study-act (PDSA) cycle, Deming cycle, Shewhart cycle

Description

The plan-do-check-act cycle (Figure 1) is a four-step model for carrying out change. Just as a circle
has no end, the PDCA cycle should be repeated again and again for continuous improvement.

Figure 1: Plan-do-check-act cycle

When to Use Plan-Do-Check-Act

As a model for continuous improvement.

When starting a new improvement project.

When developing a new or improved design of a process, product or service.

When defining a repetitive work process.

When planning data collection and analysis in order to verify and prioritize problems or root causes.

When implementing any change.

Plan-Do-Check-Act Procedure

Plan. Recognize an opportunity and plan a change.

Do. Test the change. Carry out a small-scale study.

Study. Review the test, analyze the results and identify what you've learned.

Act. Take action based on what you learned in the study step: If the change did not work, go through
the cycle again with a different plan. If you were successful, incorporate what you learned from the
test into wider changes. Use what you learned to plan new improvements, beginning the cycle again.
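As a rough sketch, the four steps above can be expressed as a repeating loop. The plan/do/check/act callables below are placeholders for real process work, not anything prescribed by the text:

```python
# An illustrative skeleton of the PDCA cycle. Only the control flow mirrors
# the four steps described above; the concrete step logic is hypothetical.

def pdca(plan, do, check, act, max_cycles: int = 3):
    """Repeat plan -> do -> check -> act until a change succeeds or cycles run out."""
    for cycle in range(1, max_cycles + 1):
        change = plan()                # Plan: recognize an opportunity, plan a change
        result = do(change)            # Do: test the change on a small scale
        succeeded = check(result)      # Study: analyze the results of the test
        act(change, succeeded)         # Act: adopt widely, or begin the cycle again
        if succeeded:
            return cycle
    return None

# Illustrative run: the small-scale test fails once, then succeeds.
attempts = iter([False, True])
cycle = pdca(
    plan=lambda: "proposed process change",
    do=lambda change: next(attempts),
    check=lambda result: result,
    act=lambda change, ok: None,
)
print("change adopted on cycle", cycle)
```

Because the cycle has no natural end, real use simply keeps calling it: each pass's "act" output becomes input to the next "plan".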

Plan-Do-Check-Act Example

The Pearl River, NY School District, a 2001 recipient of the Malcolm Baldrige National Quality
Award, uses the PDCA cycle as a model for defining most of their work processes, from the
boardroom to the classroom.

PDCA is the basic structure for the district's overall strategic planning, needs analysis, curriculum
design and delivery, staff goal-setting and evaluation, provision of student services and support
services, and classroom instruction.

Figure 2 shows their A+ Approach to Classroom Success. This is a continuous cycle of designing
curriculum and delivering classroom instruction. Improvement is not a separate activity: It is built
into the work process.

Figure 2: Plan-do-check-act example

Plan. The A+ Approach begins with a "plan" step called "analyze." In this step, students' needs are
analyzed by examining a range of data available in Pearl River's electronic data warehouse, from
grades to performance on standardized tests. Data can be analyzed for individual students or
stratified by grade, gender or any other subgroup. Because PDCA does not specify how to analyze
data, a separate data analysis process (Figure 3) is used here as well as in other processes throughout
the organization.
Figure 3: Pearl River: analysis process

Do. The A+ Approach continues with two "do" steps:

"Align" asks what national and state standards require and how they will be assessed. Teaching staff
also plans curriculum by looking at what is taught at earlier and later grade levels and in other
disciplines to assure a clear continuity of instruction throughout the students' schooling. Teachers
develop individual goals to improve their instruction where the "analyze" step showed any gaps.

The second "do" step is, in this example, called "act." This is where instruction is actually provided,
following the curriculum and teaching goals. Within set parameters, teachers vary the delivery of
instruction based on each student's learning rates and styles, using varying teaching methods.

Check. The "check" step is called "assess" in this example. Formal and informal assessments take
place continually, from daily teacher "dipstick" assessments to every-six-weeks progress reports to
annual standardized tests. Teachers also can access comparative data on the electronic database to
identify trends. High-need students are monitored by a special child study team.

Throughout the school year, if assessments show students are not learning as expected, mid-course
corrections are made such as re-instruction, changing teaching methods and more direct teacher
mentoring. Assessment data become input for the next step in the cycle.

Act. In this example, the "act" step is called "standardize." When goals are met, the curriculum
design and teaching methods are considered standardized. Teachers share best practices in formal
and informal settings. Results from this cycle become input for the "analyze" phase of the next A+
cycle.

KAIZEN

"Kai" means change, and "zen" means good (for the better). Basically, kaizen is about small
improvements, carried out on a continual basis and involving all people in the organization. Kaizen
is the opposite of big, spectacular innovations. It requires little or no investment. The principle
behind it is that "a very large number of small improvements are more effective in an organizational
environment than a few improvements of large value." This pillar is aimed at reducing losses in the
workplace that affect our efficiencies. By using a detailed and thorough procedure we eliminate
losses in a systematic method using various Kaizen tools. These activities are not limited to
production areas and can be implemented in administrative areas as well.
Kaizen Policy:
1. Practice concepts of zero losses in every sphere of activity.
2. Relentless pursuit of cost reduction targets in all resources.
3. Relentless pursuit to improve overall plant equipment effectiveness.
4. Extensive use of PM analysis as a tool for eliminating losses.
5. Focus on easy handling for operators.

Kaizen Target :

Achieve and sustain zero losses with respect to minor stops, measurement and adjustments, defects
and unavoidable downtimes. Kaizen also aims to achieve a 30% manufacturing cost reduction.

Tools used in Kaizen :


1. PM analysis
2. Why - Why analysis
3. Summary of losses
4. Kaizen register
5. Kaizen summary sheet.

The objective of TPM is maximization of equipment effectiveness. TPM aims at maximization of
machine utilization, not merely maximization of machine availability. As one of the pillars of TPM
activities, Kaizen pursues efficient use of equipment, operators, materials and energy, that is,
extremes of productivity, and aims at achieving substantial effects. Kaizen activities try to
thoroughly eliminate 16 major losses.

16 Major losses in an organisation:

Losses that impede equipment efficiency:
1. Failure losses - breakdown loss
2. Setup / adjustment losses
3. Cutting blade loss
4. Start-up loss
5. Minor stoppage / idling loss
6. Speed loss - operating at low speeds
7. Defect / rework loss
8. Scheduled downtime loss

Losses that impede human work efficiency:
9. Management loss
10. Operating motion loss
11. Line organization loss
12. Logistic loss
13. Measurement and adjustment loss

Losses that impede effective use of production resources:
14. Energy loss
15. Die, jig and tool breakage loss
16. Yield loss

Classification of losses:

Causation
Sporadic loss: Causes for the failure can be easily traced; the cause-effect relationship is simple
to trace.
Chronic loss: This loss cannot be easily identified and solved, even if various countermeasures
are applied.

Remedy
Sporadic loss: Easy to establish a remedial measure.
Chronic loss: These losses are caused by hidden defects in machines, equipment and methods.

Impact / Loss
Sporadic loss: A single loss can be costly.
Chronic loss: A single cause is rare; a combination of causes tends to be the rule.

Frequency of occurrence
Sporadic loss: The frequency of occurrence is low and occasional.
Chronic loss: The frequency of loss is high.

Corrective action
Sporadic loss: Usually the line personnel in production can attend to the problem.
Chronic loss: Specialists in process engineering, quality assurance and maintenance are required.

5S

5S is a reference to a list of five Japanese words which start with S. This list is a mnemonic for a
methodology that is often incorrectly characterized as "standardized cleanup"; however, it is much
more than cleanup. 5S is a philosophy and a way of organizing and managing the workspace and
work flow with the intent to improve efficiency by eliminating waste.

What is 5S?

The key targets of 5S are workplace morale and efficiency. The assertion of 5S is, by assigning
everything a location, time is not wasted by looking for things. Additionally, it is quickly obvious
when something is missing from its designated location. 5S advocates believe the benefits of this
methodology come from deciding what should be kept, where it should be kept, and how it should be
stored. This decision making process should lead to a dialog which can build a clear understanding,
between employees, of how work should be done. It also instills ownership of the process in each
employee.
In addition to the above, another key distinction between 5S and "standardized cleanup" is Seiton.
Seiton is often misunderstood, perhaps due to efforts to translate into an English word beginning
with "S" (such as "sort" or "straighten"). The key concept here is to order items or activities in a
manner to promote work flow. For example, tools should be kept at the point of use, workers should
not have to repetitively bend to access materials, flow paths can be altered to improve efficiency, etc.

The 5S's are:

Seiri: Separating. Refers to the practice of going through all the tools, materials, etc., in the work
area and keeping only essential items. Everything else is stored or discarded. This leads to fewer
hazards and less clutter to interfere with productive work.

Seiton: Sorting. Focuses on the need for an orderly workplace. "Orderly" in this sense means
arranging the tools and equipment in an order that promotes work flow. Tools and equipment should
be kept where they will be used, and the process should be ordered in a manner that eliminates extra
motion.

Seiso: Shine. Indicates the need to keep the workplace clean as well as neat. Cleaning in Japanese
companies is a daily activity. At the end of each shift, the work area is cleaned up and everything is
restored to its place. The key point is that maintaining cleanliness should be part of the daily work -
not an occasional activity initiated when things get too messy.

Seiketsu: Standardizing. This refers to standardized work practices. It refers to more than
standardized cleanliness (otherwise this would mean essentially the same as "systemized
cleanliness"). This means operating in a consistent and standardized fashion. Everyone knows
exactly what his or her responsibilities are.

Shitsuke: Sustaining. Refers to maintaining standards. Once the previous 4S's have been established
they become the new way to operate. Maintain the focus on this new way of operating, and do not
allow a gradual decline back to the old ways of operating.

Translations and modifications

Often in the west, alternative terms are used for the five S's. These are "Sort, Straighten, Shine,
Systemize and Sustain". "Standardize" is also used as an alternative for "Systemize". Sometimes
"Safety" is included as a 6th S. Similarly, the 5Cs aim at the same goal but without the strength of
maintaining the 5S name.

Alternative acronyms have also been introduced, such as CANDO (Cleanup, Arranging, Neatness,
Discipline, and Ongoing improvement). Even though he refers to the ensemble practice as "5S" in
his canonical work, Hirano prefers the terms Organization, Orderliness, Cleanliness, Standardized
Cleanup, and Discipline because they are better translations than the alliterative approximations. In
the book, there is a photo of a Japanese sign that shows the Latin "5S" mixed with Kanji.

Additional practices are frequently added to 5S, under such headings as 5S Plus, 6S, 5S+2S, 7S, etc.
The most common additional S is Safety, mentioned above, and James Leflar writes that Agilent
adds Security as a seventh S. Purists insist that the other concepts be left out to maintain
simplicity and because Safety, for example, is a side-benefit of disciplined housekeeping.
SUPPLIER QUALITY

Performance means taking inputs (such as employee work, marketplace requirements, operating
funds, raw materials and supplies) and effectively and efficiently converting them to outputs deemed
valuable by customers.

It's in your best interest to select and work with suppliers in ways that will provide for high quality.

Supplier performance is about more than just a low purchase price:

The costs of transactions, communication, problem resolution and switching suppliers all impact
overall cost.

The reliability of supplier delivery, as well as the supplier's internal policies such as inventory levels,
all impact supply-chain performance.

It used to be common to line up multiple suppliers for the same raw material, over concern about
running out of stock or a desire to play suppliers against one another for price reductions. But this
has given way, in some industries, to working more closely with a smaller number of suppliers in
longer-term, partnership-oriented arrangements.

Benefits of supplier partnerships include:

Partnership arrangements with fewer suppliers mean less variation in vital process inputs.

If your suppliers have proven to be effective at controlling their output, you don't need to monitor
the supplier and their product as closely.

Establishing an effective supplier management process requires:

Support from the top management of both companies involved.

Mutual trust.

Spending more money now to develop the relationship, in order to prevent problems later.

The manufacturing industry is in a special situation: Much of what manufacturers purchase is then
incorporated into their products. This means there is a higher inherent risk, or potential impact, in the
manufacturing customer-supplier relationship. For this reason, manufacturers often develop detailed
supplier-management processes.
Many of those same methods have been adapted by non-manufacturing organizations. This is
especially true of partnerships and alliances, which are becoming a widespread way of sharing
expertise and resources -- and spreading risk -- in a complex global environment.

Supplier Selection Strategies and Criteria

Supplier selection criteria for a particular product or service category should be defined by a cross-
functional team of representatives from different sectors of your organization. In a manufacturing
company, for example, members of the team typically would include representatives from
purchasing, quality, engineering and production. Team members should include personnel with
technical/applications knowledge of the product or service to be purchased, as well as members of
the department that uses the purchased item.

Common supplier selection criteria:

Previous experience and past performance with the product/service to be purchased.

Relative level of sophistication of the quality system, including meeting regulatory requirements or
mandated quality system registration (for example, ISO 9001, QS-9000).

Ability to meet current and potential capacity requirements, and do so on the desired delivery
schedule.

Financial stability.

Technical support availability and willingness to participate as a partner in developing and
optimizing design and a long-term relationship.

Total cost of dealing with the supplier (including material cost, communications methods, inventory
requirements and incoming verification required).

The supplier's track record for business-performance improvement.

Total cost assessment.

Methods for determining how well a potential supplier fits the criteria:

Obtaining a Dun & Bradstreet or other publicly available financial report.

Requesting a formal quote, which includes providing the supplier with specifications and other
requirements (for example, testing).

Visits to the supplier by management and/or the selection team.

Confirmation of quality system status either by on-site assessment, a written survey or request for a
certificate of quality system registration.

Discussions with other customers served by the supplier.


Review of databases or industry sources for the product line and supplier.

Evaluation (such as prototyping, lab tests, or validation testing) of samples obtained from the
supplier.

UNIT III

STATISTICAL PROCESS CONTROL

SEVEN BASIC QUALITY TOOLS

These are the most fundamental quality control (QC) tools. They were first emphasized by Kaoru
Ishikawa, professor of engineering at Tokyo University and the father of quality circles.

This list is sometimes called the "seven quality control tools," the "seven basic tools" or the "seven
old tools."

1. Cause-and-effect diagram (also called Ishikawa or fishbone chart): Identifies many possible
causes for an effect or problem and sorts ideas into useful categories.
2. Check sheet: A structured, prepared form for collecting and analyzing data; a generic tool
that can be adapted for a wide variety of purposes.
3. Control charts: Graphs used to study how a process changes over time.
4. Histogram: The most commonly used graph for showing frequency distributions, or how
often each different value in a set of data occurs.
5. Pareto chart: Shows on a bar graph which factors are more significant.
6. Scatter diagram: Graphs pairs of numerical data, one variable on each axis, to look for a
relationship.
7. Stratification: A technique that separates data gathered from a variety of sources so that
patterns can be seen (some lists replace "stratification" with "flowchart" or "run chart").

FISHBONE DIAGRAM

Also Called: Cause-and-Effect Diagram, Ishikawa Diagram

Variations: cause enumeration diagram, process fishbone, time-delay fishbone, CEDAC (cause-and-
effect diagram with the addition of cards), desired-result fishbone, reverse fishbone diagram

Description

The fishbone diagram identifies many possible causes for an effect or problem. It can be used to
structure a brainstorming session. It immediately sorts ideas into useful categories.

When to Use a Fishbone Diagram


When identifying possible causes for a problem.
Especially when a team's thinking tends to fall into ruts.

Fishbone Diagram Procedure

Materials needed: flipchart or whiteboard, marking pens.

1. Agree on a problem statement (effect). Write it at the center right of the flipchart or
whiteboard. Draw a box around it and draw a horizontal arrow running to it.
2. Brainstorm the major categories of causes of the problem. If this is difficult use generic
headings:
o Methods
o Machines (equipment)
o People (manpower)
o Materials
o Measurement
o Environment
3. Write the categories of causes as branches from the main arrow.
4. Brainstorm all the possible causes of the problem. Ask: "Why does this happen?" As each
idea is given, the facilitator writes it as a branch from the appropriate category. Causes can be
written in several places if they relate to several categories.
5. Again ask "Why does this happen?" about each cause. Write sub-causes branching off the
causes. Continue to ask "Why?" and generate deeper levels of causes. Layers of branches
indicate causal relationships.
6. When the group runs out of ideas, focus attention to places on the chart where ideas are few.

Fishbone Diagram Example

This fishbone diagram was drawn by a manufacturing team to try to understand the source of
periodic iron contamination. The team used the six generic headings to prompt ideas. Layers of
branches show thorough thinking about the causes of the problem.

Fishbone Diagram Example


For example, under the heading "Machines," the idea "materials of construction" shows four kinds
of equipment and then several specific machine numbers.

Note that some ideas appear in two different places. "Calibration" shows up under "Methods" as a
factor in the analytical procedure, and also under "Measurement" as a cause of lab error. "Iron tools"
can be considered a "Methods" problem when taking samples or a "Manpower" problem with
maintenance personnel.

CHECK SHEET

Also called: defect concentration diagram

Description

A check sheet is a structured, prepared form for collecting and analyzing data. This is a generic tool
that can be adapted for a wide variety of purposes.

When to Use a Check Sheet


When data can be observed and collected repeatedly by the same person or at the same
location.
When collecting data on the frequency or patterns of events, problems, defects, defect
location, defect causes, etc.
When collecting data from a production process.

Check Sheet Procedure


1. Decide what event or problem will be observed. Develop operational definitions.
2. Decide when data will be collected and for how long.
3. Design the form. Set it up so that data can be recorded simply by making check marks or Xs
or similar symbols and so that data do not have to be recopied for analysis.
4. Label all spaces on the form.
5. Test the check sheet for a short trial period to be sure it collects the appropriate data and is
easy to use.
6. Each time the targeted event or problem occurs, record data on the check sheet.
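The tallying that steps 3-6 describe can be sketched in code. This is a minimal sketch; the interruption categories and counts below are assumptions for illustration, not the data in the figure:

```python
from collections import Counter

# Assumed stream of interruption categories, logged as each event occurs
observations = ["wrong number", "info request", "boss", "wrong number",
                "info request", "wrong number", "boss", "wrong number"]

check_sheet = Counter(observations)  # each occurrence is one tick mark

# Print the sheet: one row per category, tick marks plus a subtotal
for category, ticks in check_sheet.most_common():
    print(f"{category:15s} {'|' * ticks}  ({ticks})")
```

The subtotals per category can then feed directly into a Pareto chart.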

Check Sheet Example

The figure below shows a check sheet used to collect data on telephone interruptions. The tick marks
were added as data was collected over several weeks.
Check Sheet Example

CONTROL CHART

Also called: statistical process control

Variations

Different types of control charts can be used, depending upon the type of data. The two broadest
groupings are for variable data and attribute data.

Variable data are measured on a continuous scale. For example: time, weight, distance or
temperature can be measured in fractions or decimals. The possibility of measuring to greater
precision defines variable data.
Attribute data are counted and cannot have fractions or decimals. Attribute data arise when
you are determining only the presence or absence of something: success or failure, accept or
reject, correct or not correct. For example, a report can have four errors or five errors, but it
cannot have four and a half errors.

Variables charts

o x̄ and R chart (also called averages and range chart)
o x̄ and s chart
o chart of individuals (also called X chart, X-R chart, IX-MR chart, XmR chart,
moving range chart)
o moving average-moving range chart (also called MA-MR chart)
o target charts (also called difference charts, deviation charts and nominal charts)
o CUSUM (also called cumulative sum chart)
o EWMA (also called exponentially weighted moving average chart)
o multivariate chart (also called Hotelling T² chart)

Attributes charts

o p chart (also called proportion chart)


o np chart
o c chart (also called count chart)
o u chart

Charts for either kind of data

o short run charts (also called stabilized charts or Z charts)


o group charts (also called multiple characteristic charts)

Description
The control chart is a graph used to study how a process changes over time. Data are plotted in time
order. A control chart always has a central line for the average, an upper line for the upper control
limit and a lower line for the lower control limit. These lines are determined from historical data. By
comparing current data to these lines, you can draw conclusions about whether the process variation
is consistent (in control) or is unpredictable (out of control, affected by special causes of variation).

Control charts for variable data are used in pairs. The top chart monitors the average, or the centering
of the distribution of data from the process. The bottom chart monitors the range, or the width of the
distribution. If your data were shots in target practice, the average is where the shots are clustering,
and the range is how tightly they are clustered. Control charts for attribute data are used singly.

When to Use a Control Chart


When controlling ongoing processes by finding and correcting problems as they occur.
When predicting the expected range of outcomes from a process.
When determining whether a process is stable (in statistical control).
When analyzing patterns of process variation from special causes (non-routine events) or
common causes (built into the process).
When determining whether your quality improvement project should aim to prevent specific
problems or to make fundamental changes to the process.

Control Chart Basic Procedure


1. Choose the appropriate control chart for your data.
2. Determine the appropriate time period for collecting and plotting data.
3. Collect data, construct your chart and analyze the data.
4. Look for out-of-control signals on the control chart. When one is identified, mark it on the
chart and investigate the cause. Document how you investigated, what you learned, the cause
and how it was corrected.

Out-of-control signals

o A single point outside the control limits. In Figure 1, point sixteen is above the UCL
(upper control limit).
o Two out of three successive points are on the same side of the centerline and farther
than 2σ from it. In Figure 1, point 4 sends that signal.
o Four out of five successive points are on the same side of the centerline and farther
than 1σ from it. In Figure 1, point 11 sends that signal.
o A run of eight points in a row on the same side of the centerline. Or 10 out of 11, 12 out
of 14 or 16 out of 20. In Figure 1, point 21 is the eighth in a row above the centerline.
o Obvious consistent or persistent patterns that suggest something unusual about your
data and your process.
Figure 1 Control Chart: Out-of-Control Signals

5. Continue to plot data as they are generated. As each new data point is plotted, check for new
out-of-control signals.
6. When you start a new control chart, the process may be out of control. If so, the control
limits calculated from the first 20 points are conditional limits. When you have at least 20
sequential points from a period when the process is operating in control, recalculate control
limits.
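Two of the out-of-control tests listed above can be sketched directly. This is a minimal sketch, assuming an averages chart whose centerline and sigma are already known; the data values are hypothetical:

```python
def beyond_limits(points, cl, sigma):
    """Signal test: any single point outside the 3-sigma control limits."""
    ucl, lcl = cl + 3 * sigma, cl - 3 * sigma
    return [i for i, x in enumerate(points) if x > ucl or x < lcl]

def run_of_eight(points, cl):
    """Signal test: eight successive points on the same side of the centerline."""
    signals, run, side = [], 0, 0
    for i, x in enumerate(points):
        s = 1 if x > cl else (-1 if x < cl else 0)
        run = run + 1 if s != 0 and s == side else (1 if s != 0 else 0)
        side = s
        if run >= 8:
            signals.append(i)
    return signals

# Assumed subgroup averages; centerline 5.0, sigma 0.5
data = [5.1, 4.9, 5.0, 5.2, 9.0, 5.1, 5.2, 5.3, 5.1, 5.2, 5.4, 5.1, 5.3]
print(beyond_limits(data, cl=5.0, sigma=0.5))  # [4]
print(run_of_eight(data, cl=5.0))              # [10, 11, 12]
```

Each flagged index would be marked on the chart and its cause investigated, as step 4 describes.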

HISTOGRAM

Description

A frequency distribution shows how often each different value in a set of data occurs. A histogram is
the most commonly used graph to show frequency distributions. It looks very much like a bar chart,
but there are important differences between them.

When to Use a Histogram


When the data are numerical.
When you want to see the shape of the data's distribution, especially when determining
whether the output of a process is distributed approximately normally.
When analyzing whether a process can meet the customer's requirements.
When analyzing what the output from a supplier's process looks like.
When seeing whether a process change has occurred from one time period to another.
When determining whether the outputs of two or more processes are different.
When you wish to communicate the distribution of data quickly and easily to others.

Histogram Analysis
Before drawing any conclusions from your histogram, satisfy yourself that the process was
operating normally during the time period being studied. If any unusual events affected the
process during the time period of the histogram, your analysis of the histogram shape
probably cannot be generalized to all time periods.
Analyze the meaning of your histogram's shape.
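The frequency distribution behind a histogram can be sketched in code. This is a minimal sketch assuming equal-width bins; the measurements, bin width and starting edge are hypothetical:

```python
def histogram_counts(data, bin_width, start):
    """Count how many values fall into each equal-width bin."""
    counts = {}
    for x in data:
        b = start + bin_width * int((x - start) // bin_width)  # left edge of the bin
        counts[b] = counts.get(b, 0) + 1
    return dict(sorted(counts.items()))

# Hypothetical process measurements
measurements = [2.1, 2.4, 2.4, 2.7, 3.1, 3.2, 3.3, 3.3, 3.4, 3.8, 4.1, 4.6]
counts = histogram_counts(measurements, bin_width=0.5, start=2.0)
print(counts)  # {2.0: 3, 2.5: 1, 3.0: 5, 3.5: 1, 4.0: 1, 4.5: 1}
```

Plotting each bin's count as a bar, tallest cluster in the middle, gives the histogram's shape.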
PARETO CHART

Also called: Pareto diagram, Pareto analysis

Variations: weighted Pareto chart, comparative Pareto charts

Description

A Pareto chart is a bar graph. The lengths of the bars represent frequency or cost (time or money),
and are arranged with longest bars on the left and the shortest to the right. In this way the chart
visually depicts which situations are more significant.

When to Use a Pareto Chart


When analyzing data about the frequency of problems or causes in a process.
When there are many problems or causes and you want to focus on the most significant.
When analyzing broad causes by looking at their specific components.
When communicating with others about your data.

Pareto Chart Procedure


1. Decide what categories you will use to group items.
2. Decide what measurement is appropriate. Common measurements are frequency, quantity,
cost and time.
3. Decide what period of time the Pareto chart will cover: One work cycle? One full day? A
week?
4. Collect the data, recording the category each time. (Or assemble data that already exist.)
5. Subtotal the measurements for each category.
6. Determine the appropriate scale for the measurements you have collected. The maximum
value will be the largest subtotal from step 5. (If you will do optional steps 8 and 9 below, the
maximum value will be the sum of all subtotals from step 5.) Mark the scale on the left side
of the chart.
7. Construct and label bars for each category. Place the tallest at the far left, then the next tallest
to its right and so on. If there are many categories with small measurements, they can be
grouped as "other."

Steps 8 and 9 are optional but are useful for analysis and communication.

8. Calculate the percentage for each category: the subtotal for that category divided by the total
for all categories. Draw a right vertical axis and label it with percentages. Be sure the two
scales match: For example, the left measurement that corresponds to one-half should be
exactly opposite 50% on the right scale.
9. Calculate and draw cumulative sums: Add the subtotals for the first and second categories,
and place a dot above the second bar indicating that sum. To that sum add the subtotal for the
third category, and place a dot above the third bar for that new sum. Continue the process for
all the bars. Connect the dots, starting at the top of the first bar. The last dot should reach 100
percent on the right scale.
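Steps 5 and 7-9 above can be sketched as a short computation. This is a minimal sketch; the complaint categories and counts are hypothetical, not the data of the examples below:

```python
# Step 5: hypothetical subtotals per category
complaints = {"documents": 52, "delivery": 18, "packaging": 12,
              "invoicing": 10, "other": 8}

total = sum(complaints.values())
ordered = sorted(complaints.items(), key=lambda kv: kv[1], reverse=True)  # step 7

cumulative, rows = 0, []
for category, count in ordered:
    cumulative += count                      # step 9: running sum
    pct = 100 * cumulative / total           # step 8: percentage scale
    rows.append((category, count, round(pct, 1)))
    print(f"{category:10s} {count:3d}  {pct:5.1f}%")
```

The last cumulative percentage always reaches 100%, matching the right-hand scale of the chart.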

Pareto Chart Examples


Example #1 shows how many customer complaints were received in each of five categories.

Example #2 takes the largest category, documents, from Example #1, breaks it down into six
categories of document-related complaints, and shows cumulative values.

If all complaints cause equal distress to the customer, working on eliminating document-related
complaints would have the most impact, and of those, working on quality certificates should be most
fruitful.

Example #1

Example #2

SCATTER DIAGRAM

Also called: scatter plot, XY graph

Description
The scatter diagram graphs pairs of numerical data, with one variable on each axis, to look for a
relationship between them. If the variables are correlated, the points will fall along a line or curve.
The better the correlation, the tighter the points will hug the line.

When to Use a Scatter Diagram


When you have paired numerical data.
When your dependent variable may have multiple values for each value of your independent
variable.
When trying to determine whether the two variables are related, such as
o When trying to identify potential root causes of problems.
o After brainstorming causes and effects using a fishbone diagram, to determine
objectively whether a particular cause and effect are related.
o When determining whether two effects that appear to be related both occur with the
same cause.
o When testing for autocorrelation before constructing a control chart.

Scatter Diagram Example

The ZZ-400 manufacturing team suspects a relationship between product purity (percent purity) and
the amount of iron (measured in parts per million or ppm). Purity and iron are plotted against each
other as a scatter diagram, as shown in the figure below.

There are 24 data points. Median lines are drawn so that 12 points fall on each side for both percent
purity and ppm iron.

To test for a relationship, they calculate:


A = points in upper left + points in lower right = 9 + 9 = 18
B = points in upper right + points in lower left = 3 + 3 = 6
Q = the smaller of A and B = the smaller of 18 and 6 = 6
N = A + B = 18 + 6 = 24

Then they look up the limit for N on the trend test table. For N = 24, the limit is 6.
Q is equal to the limit. Therefore, the pattern could have occurred from random chance, and no
relationship is demonstrated.
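The A/B/Q/N calculation above can be sketched for generic paired data. This is a minimal sketch; the pairs below are hypothetical, not the ZZ-400 data, and the limit for N must still be looked up in a trend test table:

```python
from statistics import median

def trend_test(pairs):
    """Median-line count test: A = upper left + lower right,
    B = upper right + lower left, Q = smaller of A and B, N = A + B.
    Points falling exactly on a median line are not counted."""
    mx = median(x for x, _ in pairs)
    my = median(y for _, y in pairs)
    a = sum(1 for x, y in pairs if (x < mx and y > my) or (x > mx and y < my))
    b = sum(1 for x, y in pairs if (x > mx and y > my) or (x < mx and y < my))
    return a, b, min(a, b), a + b

# Hypothetical paired readings showing a strong negative relationship
pairs = [(1, 10), (2, 9), (3, 8), (4, 7), (5, 4), (6, 3), (7, 2), (8, 1)]
a, b, q, n = trend_test(pairs)
print(a, b, q, n)  # 8 0 0 8 -- compare Q against the trend test table limit for N
```

A relationship is suggested only when Q falls below the tabled limit for N, as in the team's comparison above.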
Scatter Diagram Example

STRATIFICATION

Description

Stratification is a technique used in combination with other data analysis tools. When data from a
variety of sources or categories have been lumped together, the meaning of the data can be
impossible to see. This technique separates the data so that patterns can be seen.

When to Use Stratification


Before collecting data.
When data come from several sources or conditions, such as shifts, days of the week,
suppliers or population groups.
When data analysis may require separating different sources or conditions.

Stratification Example

The ZZ-400 manufacturing team drew a scatter diagram to test whether product purity and iron
contamination were related, but the plot did not demonstrate a relationship. Then a team member
realized that the data came from three different reactors. The team member redrew the diagram,
using a different symbol for each reactor's data:
Now patterns can be seen. The data from reactor 2 and reactor 3 are circled. Even without doing any
calculations, it is clear that for those two reactors, purity decreases as iron increases. However, the
data from reactor 1, the solid dots that are not circled, do not show that relationship. Something is
different about reactor 1.
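Separating lumped data by source, as the team did, amounts to a simple grouping step. This is a minimal sketch; the reactor names and readings are assumed values, not the ZZ-400 data:

```python
from collections import defaultdict

# Hypothetical (source, purity, iron) records lumped together
records = [("reactor 1", 99.2, 0.10), ("reactor 2", 98.1, 0.30),
           ("reactor 2", 97.5, 0.45), ("reactor 3", 98.4, 0.25),
           ("reactor 1", 99.1, 0.40), ("reactor 3", 97.2, 0.50)]

by_source = defaultdict(list)
for source, purity, iron in records:
    by_source[source].append((purity, iron))   # stratify: separate data by source

for source in sorted(by_source):
    print(source, by_source[source])           # each source can now get its own plot symbol
```

Once grouped, each source's points can be plotted with a distinct symbol so source-specific patterns become visible.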

Stratification Considerations
Here are examples of different sources that might require data to be stratified:
o Equipment
o Shifts
o Departments
o Materials
o Suppliers
o Day of the week
o Time of day
o Products

Survey data usually benefit from stratification.

Always consider before collecting data whether stratification might be needed during
analysis. Plan to collect stratification information. After the data are collected it might be too
late.
On your graph or chart, include a legend that identifies the marks or colors used.

STATISTICAL PROCESS CONTROL


Statistical process control (SPC) procedures can help you monitor process behavior.

Arguably the most successful SPC tool is the control chart, originally developed by Walter Shewhart
in the early 1920s. A control chart helps you record data and lets you see when an unusual event,
e.g., a very high or low observation compared with typical process performance, occurs.

Control charts attempt to distinguish between two types of process variation:

Common cause variation, which is intrinsic to the process and will always be present.
Special cause variation, which stems from external sources and indicates that the process is
out of statistical control.

Various tests can help determine when an out-of-control event has occurred. However, as more tests
are employed, the probability of a false alarm also increases.

Background

A marked increase in the use of control charts occurred during World War II in the United States to
ensure the quality of munitions and other strategically important products. The use of SPC
diminished somewhat after the war, though was subsequently taken up with great effect in Japan and
continues to the present day.

Many SPC techniques have been rediscovered by American firms in recent years, especially as a
component of quality improvement initiatives like Six Sigma. The widespread use of control
charting procedures has been greatly assisted by statistical software packages and ever-more
sophisticated data collection systems.

Over time, other process-monitoring tools have been developed, including:

Cumulative Sum (CUSUM) charts: the ordinate of each plotted point represents the algebraic
sum of the previous ordinate and the most recent deviations from the target.
Exponentially Weighted Moving Average (EWMA) charts: each chart point represents the
weighted average of current and all previous subgroup values, giving more weight to recent
process history and decreasing weights for older data.
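The two statistics just described can be sketched directly from their definitions. This is a minimal sketch, with the readings, target and weighting constant chosen as assumptions for illustration:

```python
def cusum(values, target):
    """Each point is the algebraic sum of the previous point
    and the newest deviation from the target."""
    s, out = 0.0, []
    for x in values:
        s += x - target
        out.append(round(s, 4))
    return out

def ewma(values, lam, target):
    """Each point is a weighted average of the newest value and all previous
    history; lam (0 < lam <= 1) controls how quickly older data are discounted."""
    z, out = target, []
    for x in values:
        z = lam * x + (1 - lam) * z
        out.append(round(z, 4))
    return out

data = [10.0, 10.0, 12.0, 10.0]          # assumed readings, target = 10.0
print(cusum(data, target=10.0))          # [0.0, 0.0, 2.0, 2.0]
print(ewma(data, lam=0.2, target=10.0))  # [10.0, 10.0, 10.4, 10.32]
```

Note how the CUSUM retains the full deviation while the EWMA decays back toward the target, reflecting its decreasing weights on older data.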

More recently, others have advocated integrating SPC with Engineering Process Control (EPC)
tools, which regularly change process inputs to improve performance.

VARIATION

In simple yet profound terms, variation represents the difference between an ideal and an actual
situation.

An ideal represents a standard of perfection, the highest standard of excellence, that is uniquely
defined by stakeholders, including direct customers, internal customers, suppliers, society and
shareholders. Excellence is synonymous with quality, and excellent quality results from doing the
right things, in the right way.
The fact that we can strive for an ideal but never achieve it means that stakeholders always
experience some variation from the perfect situations they envision. This, however, also makes
improvement and progress possible. Reducing the variation stakeholders experience is the key to
quality and continuous improvement.

According to the law of variation as defined in the Statistical Quality Control Handbook:

Everything varies. In other words, no two things are exactly alike.

Groups of things from a constant system of causes tend to be predictable. We can't predict
the behavior or characteristics of any one thing. Predictions only become possible for groups
of things where patterns can be observed.

If outcomes from systems can be predicted, then it follows that they can be anticipated and
managed.

Managing Variation

In 1924, Dr. Walter Shewhart of Bell Telephone Laboratories developed the new paradigm for
managing variation. As part of this paradigm, he identified two causes of variation:

Common cause, or noise, variation is inherent in a process over time. It affects every
outcome of the process and everyone working in the process. Managing common cause
variation thus requires improvements to the process.
Special cause, or signal, variation arises because of unusual circumstances and is not an
inherent part of a process. Managing this kind of variation involves locating and removing
the unusual or special cause.

Shewhart further distinguished two types of mistakes that are possible in managing variation:
treating a common cause as special and treating a special cause as common. Later, W. Edwards
Deming estimated that a lack of an understanding of variation resulted in situations where 95% of
management actions result in no improvement. Referred to as tampering, action taken to compensate
for variation within the control limits of a stable system increases, rather than decreases, variation.

SAMPLING

Sampling is the selection of a set of elements from a population or product lot. Sampling is
frequently used because data on every member of a population are often impossible, impractical or
too costly to collect. Sampling lets you draw conclusions or make inferences about the population
from which the sample is drawn.

When used in conjunction with randomization, samples provide virtually identical characteristics
relative to those of the population from which the sample was drawn.

Beware, however, of three categories of sampling error:

Bias (lack of accuracy)


Dispersion (lack of precision)
Non-reproducibility (lack of consistency)
These are easily accounted for by knowledgeable practitioners.

Determinations of sample sizes for specific situations are readily obtained through the selection and
application of the appropriate mathematical equation. All that's needed to determine the minimum
sample size is to specify:

If the data are continuous (variable) or discrete (attribute).
If the population is finite or infinite.
What confidence level is desired/specified.
The magnitude of the maximum allowable error (due to bias, dispersion and/or
non-reproducibility).
The likelihood of occurrence of a specific event.
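As a sketch of one such equation: for continuous (variable) data drawn from an effectively infinite population with a known standard deviation, a commonly used formula is n = (z·σ/E)². The inputs below are assumptions for illustration:

```python
import math

def min_sample_size(z, sigma, max_error):
    """n = (z * sigma / E)^2, rounded up: z matches the desired confidence
    level, sigma is the estimated standard deviation of the population, and
    E is the maximum allowable error in the estimate."""
    return math.ceil((z * sigma / max_error) ** 2)

# Assumed inputs: 95% confidence (z ~ 1.96), sigma = 5.0, allowable error = 1.0
print(min_sample_size(z=1.96, sigma=5.0, max_error=1.0))  # 97
```

Different equations apply for attribute data or finite populations, which is why the checklist above must be answered first.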

STATISTICS: The use of statistical methods in quality improvement takes many forms, including:

Hypothesis Testing: Two hypotheses are evaluated: a null hypothesis (H0) and an
alternative hypothesis (H1). The null hypothesis is a "straw man" used in a
statistical test. The conclusion is to either reject or fail to reject the null
hypothesis.

Regression Analysis: Determines a mathematical expression describing the
functional relationship between one response and one or more independent
variables.

Statistical Process Control (SPC): Monitors, controls and improves processes
through statistical techniques. SPC identifies when processes are out of control
due to special cause variation (variation caused by special circumstances, not
inherent to the process). Practitioners may then seek ways to remove that
variation from the process.

Design and Analysis of Experiments: Planning, conducting, analyzing and
interpreting controlled tests to evaluate the factors that may influence a
response variable.

The practice of employing a small, representative sample to make an inference of a wider population
originated in the early part of the 20th century. William S. Gosset, more commonly known by his
pseudonym "Student," was required to take small samples from a brewing process to understand
particular quality characteristics. The statistical approach he derived (now called a one-sample t-test)
was subsequently built upon by R. A. Fisher and others.
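The statistic Gosset derived, t = (x̄ − μ₀) / (s / √n), is simple enough to compute in a few lines. The sketch below uses only the Python standard library and hypothetical fill-weight data.

```python
import math
import statistics

def one_sample_t(data, mu0):
    """Gosset's one-sample t statistic: t = (xbar - mu0) / (s / sqrt(n))."""
    n = len(data)
    xbar = statistics.mean(data)
    s = statistics.stdev(data)  # sample standard deviation (n - 1 divisor)
    return (xbar - mu0) / (s / math.sqrt(n))

# Hypothetical fill weights from a brewing line; H0: true mean is 5.00
t = one_sample_t([5.1, 4.9, 5.2, 5.0, 4.8, 5.3], mu0=5.0)
print(round(t, 3))
```

A small t value like this one (compared against the t distribution with n − 1 degrees of freedom) would fail to reject the null hypothesis; the p-value lookup is omitted here for brevity.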

Jerzy Neyman and E. S. Pearson developed a more complete mathematical framework for
hypothesis testing in the 1920s. This included concepts now familiar to statisticians, such as:

Type I error: incorrectly rejecting the null hypothesis.
Type II error: incorrectly failing to reject the null hypothesis.
Statistical power: the probability of correctly rejecting a false null hypothesis.

Fisher's Analysis of Variance (ANOVA) procedure provides the statistical engine through which
many statistical analyses are conducted, as in Gage Repeatability and Reproducibility studies and
other designed experiments. ANOVA has proven to be a very helpful tool to address how variation
may be attributed to certain factors under consideration.

W. Edwards Deming and others have criticized the indiscriminate use of statistical inference
procedures, noting that erroneous conclusions may be drawn unless one is sampling from a stable
system. Consideration of the type of statistical study being performed should be a key concern when
looking at data.

PROCESS CAPABILITY
You can use a process-capability study to assess the ability of a process to meet specifications.

During a quality improvement initiative, such as Six Sigma, a capability estimate is typically
obtained at the start and end of the study to reflect the level of improvement that occurred.

Several capability estimates are in widespread use, including:

Cp and Cpk: process capability estimates. They show how capable a process is of meeting
its specification limits, and are used with continuous data.
Sigma: a capability estimate typically used with attribute data (i.e., with defect rates).

Capability estimates like these essentially reflect the nonconformance rate of a process by expressing
this performance in the form of a single number. Typically this involves calculating some ratio of the
specification limits to process spread.
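For continuous data the usual ratios are Cp = (USL − LSL) / 6σ and Cpk = min(USL − x̄, x̄ − LSL) / 3σ. A minimal sketch with hypothetical specification limits and process statistics:

```python
def cp(usl, lsl, sigma):
    """Cp: ratio of specification width to process spread (6 sigma)."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk: Cp adjusted for how well the process is centered."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical process: spec limits 9.5-10.5, running at mean 10.1, sigma 0.1
print(round(cp(10.5, 9.5, 0.1), 2))         # potential capability
print(round(cpk(10.5, 9.5, 10.1, 0.1), 2))  # lower, because the process is off-center
```

The gap between Cp and Cpk here shows the effect of centering: the process could achieve Cp = 1.67 if re-centered, but its off-center mean pulls actual capability down to 1.33.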

Control Charts
Control charts are a method of statistical process control (SPC), a control system for production
processes. They enable control of the distribution of variation rather than attempts to control each
individual variation. Upper and lower control and tolerance limits are calculated for a process, and
sampled measures are regularly plotted about a central line between the two sets of limits. The
plotted line corresponds to the stability/trend of the process, so action can be taken on the basis of
trend rather than individual variation. This prevents over-correction and over-compensation for
random variation, which would lead to many rejects.
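A minimal sketch of the limit calculation for an individuals chart follows. The data are hypothetical, and sigma is estimated from the average moving range (the usual convention for I-charts) rather than from the overall standard deviation.

```python
import statistics

def individuals_chart_limits(data):
    """Control limits for an individuals (I) chart.

    Sigma is estimated as MR-bar / 1.128, where 1.128 is the d2
    constant for moving ranges of size 2.
    """
    center = statistics.mean(data)
    moving_ranges = [abs(a - b) for a, b in zip(data, data[1:])]
    sigma_hat = statistics.mean(moving_ranges) / 1.128
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

# Hypothetical sampled measures from a stable process
data = [10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 10.1, 10.0]
lcl, cl, ucl = individuals_chart_limits(data)
out_of_control = [x for x in data if not lcl <= x <= ucl]
print(round(lcl, 2), round(cl, 2), round(ucl, 2), out_of_control)
```

No point falls outside the limits here, so the sketch would counsel leaving the process alone — exactly the discipline that prevents tampering with common cause variation.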

SEVEN NEW MANAGEMENT AND PLANNING TOOLS

In 1976, the Union of Japanese Scientists and Engineers (JUSE) saw the need for tools to promote
innovation, communicate information and successfully plan major projects. A team researched and
developed the seven new quality control tools, often called the seven management and planning
(MP) tools, or simply the seven management tools. Not all the tools were new, but their collection
and promotion were.

The seven MP tools, listed in an order that moves from abstract analysis to detailed planning, are:

1. Affinity diagram: organizes a large number of ideas into their natural relationships.

2. Relations diagram: shows cause-and-effect relationships and helps you analyze the natural
links between different aspects of a complex situation.
3. Tree diagram: breaks down broad categories into finer and finer levels of detail, helping you
move your thinking step by step from generalities to specifics.
4. Matrix diagram: shows the relationship between two, three or four groups of information and
can give information about the relationship, such as its strength, the roles played by various
individuals, or measurements.
5. Matrix data analysis: a complex mathematical technique for analyzing matrices, often
replaced in this list by the similar prioritization matrix. One of the most rigorous, careful and
time-consuming of decision-making tools, a prioritization matrix is an L-shaped matrix that
uses pairwise comparisons of a list of options to a set of criteria in order to choose the best
option(s).
6. Arrow diagram: shows the required order of tasks in a project or process, the best schedule
for the entire project, and potential scheduling and resource problems and their solutions.
7. Process decision program chart (PDPC): systematically identifies what might go wrong in a
plan under development.

AFFINITY DIAGRAM

Also called: affinity chart, K-J method

Variation: thematic analysis

Description

The affinity diagram organizes a large number of ideas into their natural relationships. This method
taps a team's creativity and intuition. It was created in the 1960s by Japanese anthropologist Jiro
Kawakita.

When to Use an Affinity Diagram

When you are confronted with many facts or ideas in apparent chaos
When issues seem too large and complex to grasp
When group consensus is necessary

Typical situations are:

After a brainstorming exercise
When analyzing verbal data, such as survey results

Affinity Diagram Example

The ZZ-400 manufacturing team used an affinity diagram to organize its list of potential
performance indicators. Figure 1 shows the list team members brainstormed. Because the team
works a shift schedule and members could not meet to do the affinity diagram together, they
modified the procedure.
Figure 1 Brainstorming for Affinity Diagram Example

They wrote each idea on a sticky note and put all the notes randomly on a rarely used door. Over
several days, everyone reviewed the notes in their spare time and moved the notes into related
groups. Some people reviewed the evolving pattern several times. After a few days, the natural
grouping shown in figure 2 had emerged.

Notice that one of the notes, "Safety," has become part of the heading for its group. The rest of the
headings were added after the grouping emerged. Five broad areas of performance were identified:
product quality, equipment maintenance, manufacturing cost, production volume, and safety and
environmental.
Figure 2 Affinity Diagram Example

RELATIONS DIAGRAM

Also called: interrelationship diagram or digraph, network diagram

Variation: matrix relations diagram

Description

The relations diagram shows cause-and-effect relationships. Just as importantly, the process of
creating a relations diagram helps a group analyze the natural links between different aspects of a
complex situation.

When to Use a Relations Diagram


When trying to understand links between ideas or cause-and-effect relationships, such as
when trying to identify an area of greatest impact for improvement.
When a complex issue is being analyzed for causes.
When a complex solution is being implemented.
After generating an affinity diagram, cause-and-effect diagram or tree diagram, to more
completely explore the relations of ideas.

Relations Diagram Basic Procedure

Materials needed: sticky notes or cards, large paper surface (newsprint or two flipchart pages taped
together), marking pens, tape.

1. Write a statement defining the issue that the relations diagram will explore. Write it on a card
or sticky note and place it at the top of the work surface.
2. Brainstorm ideas about the issue and write them on cards or notes. If another tool has
preceded this one, take the ideas from the affinity diagram, the most detailed row of the tree
diagram or the final branches on the fishbone diagram. You may want to use these ideas as
starting points and brainstorm additional ideas.
3. Place one idea at a time on the work surface and ask: "Is this idea related to any others?"
Place ideas that are related near the first. Leave space between cards to allow for drawing
arrows later. Repeat until all cards are on the work surface.
4. For each idea, ask, "Does this idea cause or influence any other idea?" Draw arrows from
each idea to the ones it causes or influences. Repeat the question for every idea.
5. Analyze the diagram:
o Count the arrows in and out for each idea. Write the counts at the bottom of each box.
The ones with the most arrows are the key ideas.
o Note which ideas have primarily outgoing (from) arrows. These are basic causes.
o Note which ideas have primarily incoming (to) arrows. These are final effects that
also may be critical to address.

Be sure to check whether ideas with fewer arrows also are key ideas. The number of arrows is only
an indicator, not an absolute rule. Draw bold lines around the key ideas.
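The arrow-counting step can be mechanized once the arrows are recorded as (cause, effect) pairs. The sketch below uses hypothetical arrows loosely modeled on the computer-replacement example that follows; the idea names and edges are illustrative only.

```python
from collections import Counter

# Hypothetical relations-diagram arrows: (cause, effect) pairs.
arrows = [
    ("New software", "Training"),
    ("New software", "Service interruptions"),
    ("New software", "Increased cost"),
    ("Install mainframe", "Service interruptions"),
    ("Install mainframe", "Increased cost"),
    ("Training", "Increased cost"),
]

out_deg = Counter(cause for cause, _ in arrows)   # arrows out of each idea
in_deg = Counter(effect for _, effect in arrows)  # arrows into each idea

ideas = set(out_deg) | set(in_deg)
for idea in sorted(ideas):
    print(f"{idea}: {out_deg[idea]} out / {in_deg[idea]} in")

# Mostly-outgoing ideas are candidate basic causes;
# mostly-incoming ideas are candidate key effects.
basic_causes = [i for i in ideas if out_deg[i] > in_deg[i]]
key_effects = [i for i in ideas if in_deg[i] > out_deg[i]]
```

As the text notes, the counts are only an indicator: the team still judges which ideas are truly key.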

Relations Diagram Example

A computer support group is planning a major project: replacing the mainframe computer. The group
drew a relations diagram (see figure below) to sort out a confusing set of elements involved in this
project.
Relations Diagram Example

"Computer replacement project" is the card identifying the issue. The ideas that were brainstormed
were a mixture of action steps, problems, desired results and less-desirable effects to be handled. All
these ideas went onto the diagram together. As the questions were asked about relationships and
causes, the mixture of ideas began to sort itself out.

After all the arrows were drawn, key issues became clear. They are outlined with bold lines.

"New software" has one arrow in and six arrows out. "Install new mainframe" has one arrow
in and four out. Both ideas are basic causes.
"Service interruptions" and "increased processing cost" both have three arrows in, and the
group identified them as key effects to avoid.

TREE DIAGRAM

Also called: systematic diagram, tree analysis, analytical tree, hierarchy diagram

Description

The tree diagram starts with one item that branches into two or more, each of which branches into
two or more, and so on. It looks like a tree, with a trunk and multiple branches.

It is used to break down broad categories into finer and finer levels of detail. Developing the tree
diagram helps you move your thinking step by step from generalities to specifics.

When to Use a Tree Diagram


When an issue is known or being addressed in broad generalities and you must move to
specific details, such as when developing logical steps to achieve an objective.
When developing actions to carry out a solution or other plan.
When analyzing processes in detail.
When probing for the root cause of a problem.
When evaluating implementation issues for several potential solutions.
After an affinity diagram or relations diagram has uncovered key issues.
As a communication tool, to explain details to others.

Tree Diagram Procedure


1. Develop a statement of the goal, project, plan, problem or whatever is being studied. Write it
at the top (for a vertical tree) or far left (for a horizontal tree) of your work surface.
2. Ask a question that will lead you to the next level of detail. For example:
o For a goal, action plan or work breakdown structure: "What tasks must be done to
accomplish this?" or "How can this be accomplished?"
o For root-cause analysis: "What causes this?" or "Why does this happen?"
o For a gozinto chart: "What are the components?" ("Gozinto" literally comes from the
phrase "What goes into it?")

Brainstorm all possible answers. If an affinity diagram or relationship diagram has been done
previously, ideas may be taken from there. Write each idea in a line below (for a vertical tree)
or to the right of (for a horizontal tree) the first statement. Show links between the tiers with
arrows.

3. Do a "necessary and sufficient" check. Are all the items at this level necessary for the one on
the level above? If all the items at this level were present or accomplished, would they be
sufficient for the one on the level above?
4. Each of the new idea statements now becomes the subject: a goal, objective or problem
statement. For each one, ask the question again to uncover the next level of detail. Create
another tier of statements and show the relationships to the previous tier of ideas with arrows.
Do a necessary and sufficient check for each set of items.
5. Continue to turn each new idea into a subject statement and ask the question. Do not stop
until you reach fundamental elements: specific actions that can be carried out, components
that are not divisible, root causes.
6. Do a "necessary and sufficient" check of the entire diagram. Are all the items necessary for
the objective? If all the items were present or accomplished, would they be sufficient for the
objective?

MATRIX DIAGRAM

Also called: matrix, matrix chart

Description

The matrix diagram shows the relationship between two, three or four groups of information. It also
can give information about the relationship, such as its strength, the roles played by various
individuals or measurements.

Six differently shaped matrices are possible: L, T, Y, X, C and roof-shaped, depending on how many
groups must be compared.

When to Use Each Matrix Diagram Shape


Table 1 summarizes when to use each type of matrix.

An L-shaped matrix relates two groups of items to each other (or one group to itself).
A T-shaped matrix relates three groups of items: groups B and C are each related to A.
Groups B and C are not related to each other.
A Y-shaped matrix relates three groups of items. Each group is related to the other two in a
circular fashion.
A C-shaped matrix relates three groups of items all together simultaneously, in 3-D.
An X-shaped matrix relates four groups of items. Each group is related to two others in a
circular fashion.
A roof-shaped matrix relates one group of items to itself. It is usually used along with an L-
or T-shaped matrix.

Table 1: When to use differently-shaped matrices

L-shaped    2 groups   A ↔ B (or A ↔ A)

T-shaped    3 groups   B ↔ A ↔ C, but not B ↔ C

Y-shaped    3 groups   A ↔ B ↔ C ↔ A

C-shaped    3 groups   All three simultaneously (3-D)

X-shaped    4 groups   A ↔ B ↔ C ↔ D ↔ A, but not A ↔ C or B ↔ D

Roof-shaped 1 group    A ↔ A, when also A ↔ B in L or T

ARROW DIAGRAM

Also called: activity network diagram, network diagram, activity chart, node diagram, CPM (critical
path method) chart

Variation: PERT (program evaluation and review technique) chart

Description

The arrow diagram shows the required order of tasks in a project or process, the best schedule for the
entire project, and potential scheduling and resource problems and their solutions. The arrow
diagram lets you calculate the critical path of the project. This is the flow of critical steps where
delays will affect the timing of the entire project and where addition of resources can speed up the
project.
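The earliest-finish (forward-pass) calculation behind the critical path can be sketched in a few lines. The task network below is hypothetical: each task has a duration and a list of predecessors that must finish first.

```python
from functools import lru_cache

# Hypothetical project: task -> (duration, predecessors)
tasks = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Forward pass: a task finishes its duration after its latest predecessor."""
    duration, preds = tasks[task]
    return duration + max((earliest_finish(p) for p in preds), default=0)

# The project duration equals the largest earliest-finish time;
# tasks on the path that produces it are the critical path (here A -> C -> D).
project_duration = max(earliest_finish(t) for t in tasks)
print(project_duration)
```

A full CPM implementation would add a backward pass to compute slack for every task; tasks with zero slack form the critical path.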

When to Use an Arrow Diagram


When scheduling and monitoring tasks within a complex project or process with interrelated
tasks and resources.
When you know the steps of the project or process, their sequence and how long each step
takes.
When the project schedule is critical, with serious consequences for completing the project late
or significant advantage to completing the project early.

PROCESS DECISION PROGRAM CHART

Also called: PDPC

Description

The process decision program chart systematically identifies what might go wrong in a plan under
development. Countermeasures are developed to prevent or offset those problems. By using PDPC,
you can either revise the plan to avoid the problems or be ready with the best response when a
problem occurs.

When to Use PDPC


Before implementing a plan, especially when the plan is large and complex.
When the plan must be completed on schedule.
When the price of failure is high.

PDPC Procedure
1. Obtain or develop a tree diagram of the proposed plan. This should be a high-level diagram
showing the objective, a second level of main activities and a third level of broadly defined
tasks to accomplish the main activities.
2. For each task on the third level, brainstorm what could go wrong.
3. Review all the potential problems and eliminate any that are improbable or whose
consequences would be insignificant. Show the problems as a fourth level linked to the tasks.
4. For each potential problem, brainstorm possible countermeasures. These might be actions or
changes to the plan that would prevent the problem, or actions that would remedy it once it
occurred. Show the countermeasures as a fifth level, outlined in clouds or jagged lines.
5. Decide how practical each countermeasure is. Use criteria such as cost, time required, ease of
implementation and effectiveness. Mark impractical countermeasures with an X and practical
ones with an O.

CONCEPT OF SIX SIGMA

Six Sigma is a fact-based, data-driven philosophy of quality improvement that values defect
prevention over defect detection. It drives customer satisfaction and bottom-line results by reducing
variation and waste, thereby promoting a competitive advantage. It applies anywhere variation and
waste exist, and every employee should be involved.
In simple terms, Six Sigma quality performance means no more than 3.4 defects per million
opportunities.
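The arithmetic behind the 3.4-defects-per-million figure can be sketched as follows. The 1.5-sigma shift is the customary Six Sigma convention relating the short-term sigma level to long-term defect rates; the defect counts in the example are hypothetical.

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value):
    """Short-term sigma level, including the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# Hypothetical: 25 defects found in 1,000 units with 10 opportunities each
print(dpmo(25, 1000, 10))
# 3.4 DPMO corresponds to ~4.5 sigma long term, i.e. 6 sigma with the shift
print(round(sigma_level(3.4), 2))
```

This is why "Six Sigma" and "3.4 defects per million opportunities" describe the same performance level: a 6-sigma short-term process, allowed to drift by 1.5 sigma, still leaves only a 4.5-sigma tail beyond the nearest specification limit.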

Several different definitions have been proposed for Six Sigma, but they all share some common
themes:

Use of teams that are assigned well-defined projects that have a direct impact on the
organization's bottom line.
Training in statistical thinking at all levels and providing key people with extensive
training in advanced statistics and project management. These key people are designated
"black belts."
Emphasis on the DMAIC approach (define, measure, analyze, improve and control) to
problem solving.
A management environment that supports these initiatives as a business strategy.

Differing opinions on the definition of Six Sigma:

Six Sigma is a philosophy This perspective views all work as processes that can be defined,
measured, analyzed, improved and controlled. Processes require inputs (x) and produce outputs (y).
If you control the inputs, you will control the outputs. This is generally expressed as y = f(x).

Six Sigma is a set of tools The Six Sigma expert uses qualitative and quantitative techniques to
drive process improvement. A few such tools include statistical process control (SPC), control charts,
failure mode and effects analysis and flowcharting.

Six Sigma is a methodology This view of Six Sigma recognizes the underlying and rigorous
approach known as DMAIC (define, measure, analyze, improve and control). DMAIC defines the
steps a Six Sigma practitioner is expected to follow, starting with identifying the problem and ending
with the implementation of long-lasting solutions. While DMAIC is not the only Six Sigma
methodology in use, it is certainly the most widely adopted and recognized.
UNIT IV

TQM TOOLS

BENCHMARKING

The benchmarking process consists of five phases:

1. Planning. The essential steps are those of any plan development: what, who and how.

What is to be benchmarked? Every function of an organization has or delivers a product
or output. Benchmarking is appropriate for any output of a process or function, whether it's a
physical good, an order, a shipment, an invoice, a service or a report.

To whom or what will we compare? Business-to-business, direct competitors are certainly
prime candidates to benchmark. But they are not the only targets. Benchmarking must be
conducted against the best companies and business functions regardless of where they exist.

How will the data be collected? There's no one way to conduct benchmarking
investigations. There's an infinite variety of ways to obtain required data, and most of the
data you'll need are readily and publicly available. Recognize that benchmarking is a process
not only of deriving quantifiable goals and targets, but more importantly, of investigating
and documenting the best industry practices, which can help you achieve those goals
and targets.

2. Analysis. The analysis phase must involve a careful understanding of your current process and
practices, as well as those of the organizations being benchmarked. What is desired is an
understanding of internal performance on which to assess strengths and weaknesses.

Ask: Is this other organization better than we are?
Why are they better?
By how much?
What best practices are being used now or can be anticipated?
How can their practices be incorporated or adapted for use in our organization?

Answers to these questions will define the dimensions of any performance gap: negative, positive or
parity. The gap provides an objective basis on which to act: to close the gap or to capitalize on any
advantage your organization has.

3. Integration. Integration is the process of using benchmark findings to set operational targets for
change. It involves careful planning to incorporate new practices in the operation and to ensure
benchmark findings are incorporated in all formal planning processes.

Steps include:

Gain operational and management acceptance of benchmark findings. Clearly and
convincingly demonstrate findings as correct and based on substantive data.
Develop action plans.
Communicate findings to all organizational levels to obtain support, commitment and
ownership.

4. Action. Convert benchmark findings, and operational principles based on them, to specific actions
to be taken. Put in place a periodic measurement and assessment of achievement. Use the creative
talents of the people who actually perform work tasks to determine how the findings can be
incorporated into the work processes.

Any plan for change also should contain milestones for updating the benchmark findings, and an
ongoing reporting mechanism. Progress toward benchmark findings must be reported to all
employees.

5. Maturity. Maturity will be reached when best industry practices are incorporated in all business
processes, thus ensuring superiority.

Tests for superiority:

If the now-changed process were to be made available to others, would a knowledgeable
businessperson prefer it?
Do other organizations benchmark your internal operations?

Maturity also is achieved when benchmarking becomes an ongoing, essential and self-initiated
facet of the management process. Benchmarking becomes institutionalized and is done at all
appropriate levels of the organization, not by specialists.

Figure 1 Benchmarking process steps

FAILURE MODES AND EFFECTS ANALYSIS (FMEA)

Also called: potential failure modes and effects analysis; failure modes, effects and criticality
analysis (FMECA).

Description

Failure modes and effects analysis (FMEA) is a step-by-step approach for identifying all possible
failures in a design, a manufacturing or assembly process, or a product or service.

"Failure modes" means the ways, or modes, in which something might fail. Failures are any errors or
defects, especially ones that affect the customer, and can be potential or actual.

"Effects analysis" refers to studying the consequences of those failures.


Failures are prioritized according to how serious their consequences are, how frequently they occur
and how easily they can be detected. The purpose of the FMEA is to take actions to eliminate or
reduce failures, starting with the highest-priority ones.

Failure modes and effects analysis also documents current knowledge and actions about the risks of
failures, for use in continuous improvement. FMEA is used during design to prevent failures. Later
it's used for control, before and during ongoing operation of the process. Ideally, FMEA begins
during the earliest conceptual stages of design and continues throughout the life of the product or
service.

Begun in the 1940s by the U.S. military, FMEA was further developed by the aerospace and
automotive industries. Several industries maintain formal FMEA standards.

What follows is an overview and reference. Before undertaking an FMEA process, learn more about
standards and specific methods in your organization and industry through other references and
training.

When to Use FMEA


When a process, product or service is being designed or redesigned, after quality function
deployment.
When an existing process, product or service is being applied in a new way.
Before developing control plans for a new or modified process.
When improvement goals are planned for an existing process, product or service.
When analyzing failures of an existing process, product or service.
Periodically throughout the life of the process, product or service.

FMEA Procedure

(Again, this is a general procedure. Specific details may vary with standards of your organization or
industry.)

1. Assemble a cross-functional team of people with diverse knowledge about the process,
product or service and customer needs. Functions often included are: design, manufacturing,
quality, testing, reliability, maintenance, purchasing (and suppliers), sales, marketing (and
customers) and customer service.
2. Identify the scope of the FMEA. Is it for concept, system, design, process or service? What
are the boundaries? How detailed should we be? Use flowcharts to identify the scope and to
make sure every team member understands it in detail. (From here on, we'll use the word
"scope" to mean the system, design, process or service that is the subject of your FMEA.)
3. Fill in the identifying information at the top of your FMEA form. Figure 1 shows a typical
format. The remaining steps ask for information that will go into the columns of the form.
Figure 1 FMEA Example

4. Identify the functions of your scope. Ask, "What is the purpose of this system, design,
process or service? What do our customers expect it to do?" Name it with a verb followed by
a noun. Usually you will break the scope into separate subsystems, items, parts, assemblies or
process steps and identify the function of each.
5. For each function, identify all the ways failure could happen. These are potential failure
modes. If necessary, go back and rewrite the function with more detail to be sure the failure
modes show a loss of that function.
6. For each failure mode, identify all the consequences on the system, related systems, process,
related processes, product, service, customer or regulations. These are potential effects of
failure. Ask, "What does the customer experience because of this failure? What happens
when this failure occurs?"
7. Determine how serious each effect is. This is the severity rating, or S. Severity is usually
rated on a scale from 1 to 10, where 1 is insignificant and 10 is catastrophic. If a failure mode
has more than one effect, write on the FMEA table only the highest severity rating for that
failure mode.
8. For each failure mode, determine all the potential root causes. Use cause analysis tools,
as well as the best knowledge and experience of the team. List all possible causes for each
failure mode on the FMEA form.
9. For each cause, determine the occurrence rating, or O. This rating estimates the probability of
failure occurring for that reason during the lifetime of your scope. Occurrence is usually
rated on a scale from 1 to 10, where 1 is extremely unlikely and 10 is inevitable. On the
FMEA table, list the occurrence rating for each cause.
10. For each cause, identify current process controls. These are tests, procedures or mechanisms
that you now have in place to keep failures from reaching the customer. These controls might
prevent the cause from happening, reduce the likelihood that it will happen or detect failure
after the cause has already happened but before the customer is affected.
11. For each control, determine the detection rating, or D. This rating estimates how well the
controls can detect either the cause or its failure mode after they have happened but before
the customer is affected. Detection is usually rated on a scale from 1 to 10, where 1 means
the control is absolutely certain to detect the problem and 10 means the control is certain not
to detect the problem (or no control exists). On the FMEA table, list the detection rating for
each cause.
12. (Optional for most industries) Is this failure mode associated with a critical characteristic?
(Critical characteristics are measurements or indicators that reflect safety or compliance with
government regulations and need special controls.) If so, a column labeled Classification
receives a Y or N to show whether special controls are needed. Usually, critical
characteristics have a severity of 9 or 10 and occurrence and detection ratings above 3.
13. Calculate the risk priority number, or RPN, which equals S × O × D. Also calculate
Criticality by multiplying severity by occurrence, S × O. These numbers provide guidance
for ranking potential failures in the order they should be addressed.
14. Identify recommended actions. These actions may be design or process changes to lower
severity or occurrence. They may be additional controls to improve detection. Also note who
is responsible for the actions and target completion dates.
15. As actions are completed, note results and the date on the FMEA form. Also, note new S, O
or D ratings and new RPNs.
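The arithmetic of step 13 (RPN = S × O × D, criticality = S × O) is easily tabulated. The ratings below are hypothetical but chosen to mirror the ATM example that follows, where the two indices rank the causes differently.

```python
# Hypothetical FMEA rows: (failure cause, severity, occurrence, detection)
rows = [
    ("machine jams",          5, 4, 10),
    ("heavy network traffic", 5, 3, 10),
    ("out of cash",           8, 5,  3),
]

def rpn(s, o, d):
    """Risk priority number: severity x occurrence x detection."""
    return s * o * d

def criticality(s, o):
    """Criticality ignores detection: severity x occurrence."""
    return s * o

by_rpn = sorted(rows, key=lambda r: rpn(r[1], r[2], r[3]), reverse=True)
by_criticality = sorted(rows, key=lambda r: criticality(r[1], r[2]), reverse=True)

# The hard-to-detect causes top the RPN ranking, while the
# severe-and-frequent cause tops the criticality ranking.
print(by_rpn[0][0], by_criticality[0][0])
```

This divergence is exactly why the team's judgment, not the numbers alone, should set the final priorities.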

FMEA Example

A bank performed a process FMEA on its ATM system. Figure 1 shows part of it: the function
"dispense cash" and a few of the failure modes for that function. The optional "Classification"
column was not used. Only the headings are shown for the rightmost (action) columns.
Notice that RPN and criticality prioritize causes differently. According to the RPN, machine jams
and heavy computer network traffic are the first and second highest risks.

One high value for severity or occurrence times a detection rating of 10 generates a high RPN.
Criticality does not include the detection rating, so it rates highest the only cause with medium to
high values for both severity and occurrence: "out of cash." The team should use their experience
and judgment to determine appropriate priorities for action.

QUALITY FUNCTION DEPLOYMENT (QFD)

QFD was developed to bring a personal interface to modern manufacturing and business. In
today's industrial society, where the growing distance between producers and users is a
concern, QFD links the needs of the customer (end user) with design, development,
engineering, manufacturing, and service functions.

QFD is:

1. Understanding Customer Requirements

2. Quality Systems Thinking + Psychology + Knowledge/Epistemology

3. Maximizing Positive Quality That Adds Value

4. Comprehensive Quality System for Customer Satisfaction

5. Strategy to Stay Ahead of The Game

As a quality system that implements elements of Systems Thinking with elements of
Psychology and Epistemology (knowledge), QFD provides a comprehensive
development process for:
Understanding customer needs
What 'value' means to the customer
Understanding how customers or end users become interested, choose, and are
satisfied
Analyzing how we know the needs of the customer
Deciding what features to include
Determining what level of performance to deliver
Intelligently linking the needs of the customer with design, development,
engineering, manufacturing, and service functions
Intelligently linking Design for Six Sigma (DFSS) with the front end Voice of
Customer analysis and the entire design system
QFD helps organizations seek out both spoken and unspoken needs, translate these into
actions and designs, and focus various business functions toward achieving this common goal,
empowering organizations to exceed normal expectations and provide a level of unanticipated
excitement that generates value.

The QFD methodology can be used for both tangible products and non-tangible services,
including manufactured goods, service industry, software products, IT projects, business
process development, government, healthcare, environmental initiatives, and many other
applications.

QFD and other quality initiatives


Traditional quality systems aim at minimizing negative quality such as eliminating defects or
reducing operational errors. Assuming that everything goes well, the best you can attain with
these systems is zero defects. That sounds pretty good, doesn't it? But what if your competitors
are also at zero defects? Also, a product can be defect-free and still not sell. This is where
design makes a difference. Conventional design processes, however, focus more on engineering
capabilities and less on customer needs. When they do try to incorporate customer perspectives,
these tend to be engineer or provider-perceived.

QFD is quite different in that it seeks out both "spoken" and "unspoken" customer requirements
and maximizes "positive" quality (such as ease of use, fun, luxury) that creates value. Traditional
quality systems aim at minimizing negative quality (such as defects, poor service).

Characteristics of QFD as a quality system


1. QFD is a quality system that implements elements of Systems Thinking (viewing the
development process as a system) and Psychology (understanding customer needs,
what 'value' is, and how customers or end users become interested, choose, and are
satisfied, etc.).
2. QFD is a quality method of good Knowledge or Epistemology (how do we know the
needs of the customer? how do we decide what features to include? and to what level
of performance?)
3. QFD is a quality system for strategic competitiveness; it maximizes positive quality
that adds value; it seeks out spoken and unspoken customer requirements, translates
them into technical requirements, prioritizes them, and directs us to optimize those
features that will bring the greatest competitive advantage.
4. Quality Function Deployment (QFD) is the only comprehensive quality system aimed
specifically at satisfying the customer throughout the development and business
process -- end to end.

Tools of QFD
7 Management and Planning Tools.

The problem with the conventional design process


Conventional design processes focus more on engineering capabilities and less on customer
needs. When they do try to incorporate customer perspectives, these tend to be engineer-
perceived or producer-perceived. Quality Function Deployment (QFD), however, focuses like a
laser all product development activities on customer needs.

"Expected quality" and "Exciting quality?"

"Expected" quality or requirements are essentially basic functions or features that customers
normally expect of a product or service. Expected requirements are usually invisible unless they
become visible when they are unfulfilled.
"Exciting" quality or requirements are sort of "out of ordinary" functions or features of a product
or service that cause "wow" reactions in customers. Exciting requirements are also usually
invisible unless they become visible when they are fulfilled and result in customer satisfaction;
they do not leave customers dissatisfied when left unfulfilled.

The original research on expected vs. exciting quality was conducted and reported in a paper
called "Must-Be Quality" by Dr. Kano and his students in Japan. Although the paper is often
misinterpreted as a simple relationship model of expected vs. exciting quality, what is really
important is that the target of customer satisfaction can be moving and invisible, which
requires more complex analysis. This is precisely where QFD is strongest: QFD makes
invisible requirements and strategic advantages visible.

TAGUCHI'S LOSS FUNCTION

Definition

Simply put, the Taguchi loss function is a way to show how each non-perfect part produced results
in a loss for the company. Deming states that it shows

"a minimal loss at the nominal value, and an ever-increasing loss with departure either way from the
nominal value." - W. Edwards Deming Out of the Crisis. p.141

A technical definition is

A parabolic representation that estimates the quality loss, expressed monetarily, that results when
quality characteristics deviate from the target values. The cost of this deviation increases
quadratically as the characteristic moves farther from the target value. - Duncan, William Total
Quality Key Terms. p. 171

Graphically, the loss function is represented as a parabola with its minimum at the target value.

Interpreting the chart

This standard representation of the loss function demonstrates a few of the key attributes of loss. For
example, the target value sits at the bottom of the parabola, implying that parts produced at the
nominal value incur little or no loss. The curve is flat near the target value and steepens as it
departs from it, showing that loss grows ever faster the farther a product deviates from the
nominal. Any departure from the nominal value results in a loss!

Loss can be measured per part. Measuring loss encourages a focus on achieving less variation. As
we understand how even a little variation from the nominal results in a loss, the tendency is
to try and keep product and process as close to the nominal value as possible. This is what is so
beneficial about the Taguchi loss function: it always keeps our focus on the need to continually improve.

A business that misuses what it has will continue to misuse what it can get. The point is--cure the
misuse. - Ford and Crowther
Application

A company that manufactures parts that require a large amount of machining grew tired of the high
costs of tooling. To avoid premature replacement of these expensive tools, the manager suggested
that operators set the machine to run at the high-end of the specification limits. As the tool would
wear down, the products would end up measuring on the low-end of the specification limits. So, the
machine would start by producing parts on the high-end and after a period of time, the machine
would produce parts that fell just inside of the specs.

The variation of parts produced on this machine was much greater than it should be, since the
strategy was to use the entire spec width allowed rather than produce the highest quality part
possible. Products may fall within spec, but will not produce close to the nominal. Several of these
"good parts" may not assemble well, may require recall, or may come back under warranty. The
Taguchi loss would be very high.

We should consider these vital questions:

* Is the savings of tool life worth the cost of poor products?

* Would it be better to replace the tool twice as often, reduce variation, or look at incoming part
quality?

Calculations

Formulas:

Loss at a point: L(x) = k*(x-t)^2


where,
k = loss coefficient
x = measured value
t = target value

Average Loss of a sample set: L = k*(s^2 + (pm - t)^2)


where,
s = standard deviation of sample
pm = process mean

Total Loss = Avg. Loss * number of samples
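These formulas translate directly into code. The following is a minimal Python sketch; the function and variable names mirror the symbols above, and the illustrative value of k is derived by applying the point-loss formula to a part at a spec limit 0.05 from target costing $0.45:

```python
# Taguchi quality loss -- direct translation of the formulas above.

def loss(x, t, k):
    """Loss for a single part measured at x, with target t and loss coefficient k."""
    return k * (x - t) ** 2

def average_loss(k, s, pm, t):
    """Average loss per part for a sample with std dev s and process mean pm."""
    return k * (s ** 2 + (pm - t) ** 2)

# Illustrative: a part at the spec limit (0.05 from target) costs $0.45,
# so k = 0.45 / 0.05**2 = 180.
k = 0.45 / 0.05 ** 2

print(round(loss(0.536, 0.500, k), 4))                # part 0.036 from target: ~0.2333
print(round(average_loss(k, 0.022, 0.501, 0.500), 4)) # average loss per part: ~0.0873
```

Total loss for a batch is then simply the average loss multiplied by the number of parts produced.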

For example: A medical company produces a part that has a hole measuring 0.5" ± 0.050". The
tooling used to make the hole is worn and needs replacing, but management doesn't feel it necessary
since it still makes "good parts". All parts pass QC, but several parts have been rejected by
assembly. The failure cost per part is $0.45. Using the loss function, explain why it may be to the
benefit of the company and customer to replace or sharpen the tool more frequently. Use the data
below:

Measured Value
0.459 | 0.478 | 0.495 | 0.501 | 0.511 | 0.527
0.462 | 0.483 | 0.495 | 0.501 | 0.516 | 0.532
0.467 | 0.489 | 0.495 | 0.502 | 0.521 | 0.532
0.474 | 0.491 | 0.498 | 0.505 | 0.524 | 0.533
0.476 | 0.492 | 0.500 | 0.509 | 0.527 | 0.536

Solution:

The average of the points is 0.501 and the standard deviation is about 0.022.

find k,
using L(x) = k * (x-t)^2
$0.45 = k * (0.550 - 0.500)^2
k = 0.45 / 0.0025 = 180

next,

using the Average loss equation: L = k * (s^2 + (pm - t)^2)

L = 180 * (0.022^2 + (0.501 - 0.500)^2) = 0.0873

So the average loss per part in this set is about $0.087.

For the loss of the total 30 parts produced,

= L * number of samples
= $0.0873 * 30
= $2.62

From the calculations above, one can determine that at 0.500", no loss is experienced. At a
measured value of 0.501", the loss is only $0.00018, and with a value of 0.536", the loss rises to
about $0.23.

Even though all measurements were within specification limits and the average hole size was 0.501",
the Taguchi loss shows that the company lost about $2.62 per 30 parts made. If the batch size were
increased to 1000 parts, the loss would be about $87 per batch. Due to the variation caused by the
old tooling, the department is steadily losing money.

From the data above, we can see that deviation from the nominal could cost as much as $0.30 per part. In
addition, we would want to investigate whether this kind of deviation would compromise the
integrity of the final product after assembly to the point of product failure.

TOTAL PRODUCTIVE MAINTENANCE (TPM)

Total Productive Maintenance (TPM) can be considered the medical science of machines. It is a
maintenance program involving a newly defined concept for maintaining plants and equipment.
The goal of the TPM program is to markedly increase production while, at the same time,
increasing employee morale and job satisfaction.

TPM brings maintenance into focus as a necessary and vitally important part of the business. It is no
longer regarded as a non-profit activity. Down time for maintenance is scheduled as a part of the
manufacturing day and, in some cases, as an integral part of the manufacturing process. The goal is
to hold emergency and unscheduled maintenance to a minimum.

Why TPM ?

TPM was introduced to achieve the following objectives. The important ones are listed below.

* Avoid wastage in a quickly changing economic environment.
* Produce goods without reducing product quality.
* Reduce cost.
* Produce a low batch quantity at the earliest possible time.
* Ensure goods sent to customers are non-defective.

Similarities and differences between TQM and TPM :

The TPM program closely resembles the popular Total Quality Management (TQM) program. Many
of the tools such as employee empowerment, benchmarking, documentation, etc. used in TQM are
used to implement and optimize TPM. Following are the similarities between the two.

1. Total commitment to the program by upper level management is required in both programmes,
2. Employees must be empowered to initiate corrective action, and
3. A long range outlook must be accepted, as TPM may take a year or more to implement and is
an on-going process. Changes in employee mind-set toward their job responsibilities must
take place as well.

The differences between TQM and TPM are summarized below.

Category                | TQM                                            | TPM
Object                  | Quality (output and effects)                   | Equipment (input and cause)
Means of attaining goal | Systematize the management (software oriented) | Employee participation (hardware oriented)
Target                  | Quality for PPM                                | Elimination of losses and wastes

Types of maintenance :

1. Breakdown maintenance :

It means that one waits until equipment fails and then repairs it. This approach can be used when
the equipment failure does not significantly affect the operation or production or generate any
significant loss other than the repair cost.

2. Preventive maintenance ( 1951 ):


This is daily maintenance (cleaning, inspection, oiling and re-tightening) designed to retain the
healthy condition of equipment and prevent failure through the prevention of deterioration,
periodic inspection, or equipment condition diagnosis to measure deterioration. It is further
divided into periodic maintenance and predictive maintenance. Just as human life is extended by
preventive medicine, equipment service life can be prolonged by preventive maintenance.

2a. Periodic maintenance ( Time based maintenance - TBM) :

Time based maintenance consists of periodically inspecting, servicing and cleaning equipment and
replacing parts to prevent sudden failure and process problems.

2b. Predictive maintenance :

This is a method in which the service life of important parts is predicted based on inspection or
diagnosis, in order to use the parts to the limit of their service life. Compared to periodic
maintenance, predictive maintenance is condition-based maintenance. It manages trend values by
measuring and analyzing data about deterioration, and employs a surveillance system designed to
monitor conditions through an on-line system.

3. Corrective maintenance ( 1957 ) :

It improves equipment and its components so that preventive maintenance can be carried out
reliably. Equipment with design weaknesses must be redesigned to improve reliability or
maintainability.

4. Maintenance prevention ( 1960 ):

It refers to the design of new equipment. Weaknesses of current machines are sufficiently studied
(on-site information leading to failure prevention, easier maintenance, prevention of defects, safety
and ease of manufacturing) and the findings are incorporated before commissioning new equipment.

TPM Targets:

P (Productivity):
* Obtain a minimum 80% OPE (Overall Plant Efficiency).
* Obtain a minimum 90% OEE (Overall Equipment Effectiveness).
* Run the machines even during lunch. (Lunch is for operators and not for machines!)

Q (Quality):
* Operate in a manner such that there are no customer complaints.

C (Cost):
* Reduce the manufacturing cost by 30%.

D (Delivery):
* Achieve 100% success in delivering the goods as required by the customer.

S (Safety):
* Maintain an accident-free environment.

M (Morale):
* Increase suggestions by 3 times. Develop multi-skilled and flexible workers.

Motives of TPM:

1. Adoption of a life-cycle approach for improving the overall performance of production
equipment.
2. Improving productivity through highly motivated workers, which is achieved by job
enlargement.
3. The use of voluntary small-group activities for identifying the causes of failure and possible
plant and equipment modifications.

Uniqueness of TPM:

The major difference between TPM and other concepts is that operators are also involved in the
maintenance process. The concept of "I (production operators) operate, you (maintenance
department) fix" is not followed.

TPM Objectives:

1. Achieve zero defects, zero breakdowns and zero accidents in all functional areas of the
organization.
2. Involve people at all levels of the organization.
3. Form different teams to reduce defects and for self-maintenance.

Direct benefits of TPM:

1. Increase productivity and OPE (Overall Plant Efficiency) by 1.5 or 2 times.
2. Rectify customer complaints.
3. Reduce the manufacturing cost by 30%.
4. Satisfy the customer's needs 100% (delivering the right quantity at the right time, in the
required quality).
5. Reduce accidents.
6. Follow pollution control measures.

Indirect benefits of TPM:

1. Higher confidence level among the employees.
2. A clean, neat and attractive work place.
3. Favorable change in the attitude of the operators.
4. Goals achieved by working as a team.
5. Horizontal deployment of new concepts in all areas of the organization.
6. Sharing of knowledge and experience.
7. Workers get a feeling of owning the machine.

OEE (Overall Equipment Effectiveness):


OEE = A x PE x Q

A - Availability of the machine. Availability is the proportion of time the machine is actually
available out of the time it should be available.

A = ( MTBF - MTTR ) / MTBF.

MTBF - Mean Time Between Failures = ( Total Running Time ) / Number of Failures.
MTTR - Mean Time To Repair.

PE - Performance Efficiency. It is given by RE X SE.

Rate efficiency (RE): the actual average cycle time is slower than the design cycle time because of
jams, etc., so output is reduced.
Speed efficiency (SE): the machine runs at reduced speed, so the actual cycle time is slower than
the design cycle time and output is reduced.

Q - Quality rate: the percentage of good parts out of the total produced, sometimes called
"yield".

UNIT V

QUALITY SYSTEMS
ISO 9000 AND OTHER STANDARDS

ISO's name

Because "International Organization for Standardization" would have different acronyms in different
languages ("IOS" in English, "OIN" in French for Organisation internationale de normalisation), its
founders decided to give it also a short, all-purpose name. They chose "ISO", derived from the
Greek isos, meaning "equal". Whatever the country, whatever the language, the short form of the
organization's name is always ISO.

Why standards matter

Standards make an enormous and positive contribution to most aspects of our lives.

Standards ensure desirable characteristics of products and services such as quality, environmental
friendliness, safety, reliability, efficiency and interchangeability - and at an economical cost.

When products and services meet our expectations, we tend to take this for granted and be unaware
of the role of standards. However, when standards are absent, we soon notice. We soon care when
products turn out to be of poor quality, do not fit, are incompatible with equipment that we already
have, are unreliable or dangerous.

When products, systems, machinery and devices work well and safely, it is often because they meet
standards. And the organization responsible for many thousands of the standards which benefit the
world is ISO.

When standards are absent, we soon notice.

What standards do

ISO standards:

make the development, manufacturing and supply of products and services more efficient,
safer and cleaner
facilitate trade between countries and make it fairer
provide governments with a technical base for health, safety and environmental
legislation, and conformity assessment
share technological advances and good management practice
disseminate innovation
safeguard consumers, and users in general, of products and services
make life simpler by providing solutions to common problems

Who standards benefit

ISO standards provide technological, economic and societal benefits.


For businesses, the widespread adoption of International Standards means that suppliers can develop
and offer products and services meeting specifications that have wide international acceptance in
their sectors. Therefore, businesses using International Standards can compete on many more
markets around the world.

For innovators of new technologies, International Standards on aspects like terminology,
compatibility and safety speed up the dissemination of innovations and their development into
manufacturable and marketable products.

For customers, the worldwide compatibility of technology which is achieved when products and
services are based on International Standards gives them a broad choice of offers. They also benefit
from the effects of competition among suppliers.

For governments, International Standards provide the technological and scientific bases
underpinning health, safety and environmental legislation.

For trade officials, International Standards create "a level playing field" for all competitors on
those markets. The existence of divergent national or regional standards can create technical barriers
to trade. International Standards are the technical means by which political trade agreements can be
put into practice.

For developing countries, International Standards that represent an international consensus on the
state of the art are an important source of technological know-how. By defining the characteristics
that products and services will be expected to meet on export markets, International Standards give
developing countries a basis for making the right decisions when investing their scarce resources
and thus avoid squandering them.

For consumers, conformity of products and services to International Standards provides assurance
about their quality, safety and reliability.

For everyone, International Standards contribute to the quality of life in general by ensuring that the
transport, machinery and tools we use are safe.

For the planet we inhabit, International Standards on air, water and soil quality, on emissions of
gases and radiation and environmental aspects of products can contribute to efforts to preserve the
environment.

Quality professionals use the term standards to mean many things, such as metrics, specifications,
gages, statements, categories, segments, groupings or behaviors.

But usually when they talk about standards, they're talking about quality management.

Management standards address the needs of organizations in training, quality auditing and quality-
management systems. The ISO 9000 Series, for example, is a set of international standards for
quality management and quality assurance. The standards were developed to help companies
effectively document the elements they need to maintain an efficient quality system. They are not
specific to any one industry.
The ISO 9000 Series

ISO 9000 can help a company satisfy its customers, meet regulatory requirements and achieve
continual improvement. But it's a first step, many quality professionals will tell you: the base level
of a quality system, not a complete guarantee of quality.

ISO 9000 Facts

Originally published in 1987 by the International Organization for Standardization (ISO), a
specialized international agency for standardization composed of the national standards
bodies of 90 countries.
Underwent major revision in 2000.
Now includes ISO 9000:2000 (definitions), ISO 9001:2000 (requirements) and ISO
9004:2000 (continuous improvement).

The revised ISO 9000:2000 series of standards is based on eight quality management principles that
senior management can apply for organizational improvement:

1. Customer focus
2. Leadership
3. Involvement of people
4. Process approach
5. System approach to management
6. Continual improvement
7. Factual approach to decision-making
8. Mutually beneficial supplier relationships

Other Standards

Standards addressing the specialized needs and circumstances of certain industries and applications
also exist:

Environment. The ISO 14000 series of international standards integrates environmental
considerations into operations and product standards. The standards specify requirements for
establishing an environmental policy, determining environmental impacts of products or services,
planning environmental objectives, implementing programs to meet objectives, and corrective
action and management review.

Aerospace. AS9100, the international quality management standard for the aerospace industry, was
released in November 1999.

Automotive. There are three popular standards used in the automotive industry:

QS-9000 is a quality management system developed by Daimler-Chrysler, Ford and General
Motors for suppliers of production parts, materials and services to the automotive industry.
ISO/TS 16949, developed by the International Automotive Task Force, aligns existing
American, German, French and Italian automotive quality standards within the global
automotive industry.
ISO 14001 environmental standards are being applied by automotive suppliers as a
requirement from Ford and General Motors.

Statistics. Statistical standards provide methods for collecting, analyzing and interpreting data.
ANSI/ASQ Z1.4-2003 establishes sampling plans and procedures for inspection by attributes.
ANSI/ASQ Z1.9-2003 establishes sampling plans and procedures for inspection by variables.

Telecommunications. TL 9000 defines the telecommunications quality system requirements for the
design, development, production, delivery, installation and maintenance of products and services in
the telecommunications industry. It uses ISO 9000 as a foundation but goes a step further to include
industry-specific requirements and metrics.

ISO 14000 ESSENTIALS

The ISO 14000 family addresses various aspects of environmental management. The very first two
standards, ISO 14001:2004 and ISO 14004:2004 deal with environmental management systems
(EMS). ISO 14001:2004 provides the requirements for an EMS and ISO 14004:2004 gives general
EMS guidelines.

The other standards and guidelines in the family address specific environmental aspects, including:
labeling, performance evaluation, life cycle analysis, communication and auditing.

An ISO 14001:2004-based EMS

An EMS meeting the requirements of ISO 14001:2004 is a management tool enabling an
organization of any size or type to:

identify and control the environmental impact of its activities, products or services, and to
improve its environmental performance continually, and to
implement a systematic approach to setting environmental objectives and targets, to
achieving these and to demonstrating that they have been achieved.

How it works

ISO 14001:2004 does not specify levels of environmental performance. If it specified levels of
environmental performance, they would have to be specific to each business activity and this would
require a specific EMS standard for each business. That is not the intention.

ISO has many other standards dealing with specific environmental issues. The intention of ISO
14001:2004 is to provide a framework for a holistic, strategic approach to the organization's
environmental policy, plans and actions.

ISO 14001:2004 gives the generic requirements for an environmental management system. The
underlying philosophy is that whatever the organization's activity, the requirements of an effective
EMS are the same.
This has the effect of establishing a common reference for communicating about environmental
management issues between organizations and their customers, regulators, the public and other
stakeholders.

Because ISO 14001:2004 does not lay down levels of environmental performance, the standard can
be implemented by a wide variety of organizations, whatever their current level of
environmental maturity. However, a commitment to compliance with applicable environmental
legislation and regulations is required, along with a commitment to continual improvement for
which the EMS provides the framework.

The EMS standards

ISO 14004:2004 provides guidelines on the elements of an environmental management system and
its implementation, and discusses principal issues involved.

ISO 14001:2004 specifies the requirements for such an environmental management system.
Fulfilling these requirements demands objective evidence which can be audited to demonstrate that
the environmental management system is operating effectively in conformity to the standard.

What can be achieved

ISO 14001:2004 is a tool that can be used to meet internal objectives:

provide assurance to management that it is in control of the organizational processes and
activities having an impact on the environment
assure employees that they are working for an environmentally responsible organization.

ISO 14001:2004 can also be used to meet external objectives:

provide assurance on environmental issues to external stakeholders such as customers, the
community and regulatory agencies
comply with environmental regulations
support the organization's claims and communication about its own environmental policies,
plans and actions
provide a framework for demonstrating conformity via suppliers' declarations of
conformity, assessment of conformity by an external stakeholder - such as a business client -
and certification of conformity by an independent certification body.

ISO 9000 ESSENTIALS

The ISO 9000 family of standards represents an international consensus on good quality
management practices. It consists of standards and guidelines relating to quality management
systems and related supporting standards.

ISO 9001:2000 is the standard that provides a set of standardized requirements for a quality
management system, regardless of what the user organization does, its size, or whether it is in the
private or public sector. It is the only standard in the family against which organizations can be
certified, although certification is not a compulsory requirement of the standard.

The other standards in the family cover specific aspects such as fundamentals and vocabulary,
performance improvements, documentation, training, and financial and economic aspects.

How the ISO 9001:2000 model works

The requirements for a quality system have been standardized - but many organizations like to think
of themselves as unique. So how does ISO 9001:2000 allow for the diversity of, say, on the one
hand, a "Mr. and Mrs." enterprise, and on the other, a multinational manufacturing company with
service components, or a public utility, or a government administration?

The answer is that ISO 9001:2000 lays down what requirements your quality system must meet, but
does not dictate how they should be met in any particular organization. This leaves great scope and
flexibility for implementation in different business sectors and business cultures, as well as in
different national cultures.

Checking that it works

1. The standard requires the organization itself to audit its ISO 9001:2000-based quality
system to verify that it is managing its processes effectively - or, to put it another way, to
check that it is fully in control of its activities.
2. In addition, the organization may invite its clients to audit the quality system in order to
give them confidence that the organization is capable of delivering products or services that
will meet their requirements.
3. Lastly, the organization may engage the services of an independent quality system
certification body to obtain an ISO 9001:2000 certificate of conformity. This last option has
proved extremely popular in the market-place because of the perceived credibility of an
independent assessment.

The organization may thus avoid multiple audits by its clients, or reduce the frequency or
duration of client audits. The certificate can also serve as a business reference between the
organization and potential clients, especially when supplier and client are new to each other, or far
removed geographically, as in an export context.

MALCOLM BALDRIGE AWARD

The Malcolm Baldrige National Quality Award (MBNQA) is presented annually by the President of
the United States to organizations that demonstrate quality and performance excellence. Three
awards may be given annually in each of six categories:

Manufacturing
Service company
Small business
Education
Healthcare
Nonprofit

Established by Congress in 1987 for manufacturers, service businesses and small businesses, the
Baldrige Award was designed to raise awareness of quality management and recognize U.S.
companies that have implemented successful quality-management systems.

The education and healthcare categories were added in 1999. A government and nonprofit category
was added in 2007.

The Baldrige Award is named after the late Secretary of Commerce Malcolm Baldrige, a proponent
of quality management. The U.S. Commerce Department's National Institute of Standards and
Technology manages the award and ASQ administers it.

Organizations that apply for the Baldrige Award are judged by an independent board of examiners.
Recipients are selected based on achievement and improvement in seven areas, known as the
Baldrige Criteria for Performance Excellence:

1. Leadership: How upper management leads the organization, and how the organization leads
within the community.
2. Strategic planning: How the organization establishes and plans to implement strategic
directions.
3. Customer and market focus: How the organization builds and maintains strong, lasting
relationships with customers.
4. Measurement, analysis, and knowledge management: How the organization uses data to
support key processes and manage performance.
5. Human resource focus: How the organization empowers and involves its workforce.
6. Process management: How the organization designs, manages and improves key processes.
7. Business/organizational performance results: How the organization performs in terms of
customer satisfaction, finances, human resources, supplier and partner performance,
operations, governance and social responsibility, and how the organization compares to its
competitors.
