Semester : Semester II
Q.1
ANSWER
In computing, just-in-time compilation (JIT), also known as dynamic translation, is a
method to improve the runtime performance of computer programs. Traditionally,
computer programs had two modes of runtime operation: interpretation or static
(ahead-of-time) compilation. Interpreted code is translated from a high-level language
to machine code continuously during every execution, whereas statically compiled code
is translated into machine code before execution, and only requires this translation once.
JIT builds upon two earlier ideas in run-time environments: byte code compilation and
dynamic compilation. It converts code at runtime prior to executing it natively, for
example byte code into native machine code.
Several modern runtime environments, such as Microsoft's .NET Framework and most
implementations of Java, rely on JIT compilation for high-speed code execution.
In contrast, a traditional interpreted virtual machine will simply interpret the bytecode,
generally with much lower performance. Some interpreters even interpret source code,
without the step of first compiling to bytecode, with even worse performance. Statically
compiled code or native code is compiled prior to deployment. A dynamic compilation
environment is one in which the compiler can be used during execution. For instance,
most Common Lisp systems have a compile function which can compile new functions
created during the run. This provides many of the advantages of JIT, but the programmer,
rather than the runtime, is in control of what parts of the code are compiled. This can also
compile dynamically generated code, which can, in many scenarios, provide substantial
performance advantages over statically compiled code, as well as over most JIT systems.
A common goal of using JIT techniques is to reach or surpass the performance of static
compilation, while maintaining the advantages of bytecode interpretation: Much of the
"heavy lifting" of parsing the original source code and performing basic optimization is
often handled at compile time, prior to deployment: compilation from bytecode to
machine code is much faster than compiling from source. The deployed bytecode is
portable, unlike native code. Since the runtime has control over the compilation, like
interpreted bytecode, it can run in a secure sandbox. Compilers from bytecode to machine
code are easier to write, because the portable bytecode compiler has already done much of
the work.
JIT code generally offers far better performance than interpreters. In addition, it can in
some cases offer better performance than static compilation, as many optimizations are
only feasible at run-time. The JIT compiler translates bytecodes into native machine
code. This compilation process is done only once, and a link is created between the
bytecode and the corresponding compiled code.
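The "translate once, then reuse" link described above can be sketched as a toy cache. This is an illustrative model, not a real VM: Python's built-in compile stands in for native code generation, and the class and method names are assumptions for the sketch.

```python
# Toy model of a JIT's bytecode-to-compiled-code link: each source
# snippet is translated once; later calls reuse the cached result.

class ToyJIT:
    def __init__(self):
        self._compiled = {}  # the "link": source -> compiled code object

    def run(self, src):
        # Translate only on first encounter, then reuse the cache.
        if src not in self._compiled:
            self._compiled[src] = compile(src, "<jit>", "eval")
        return eval(self._compiled[src])

jit = ToyJIT()
print(jit.run("2 + 3"))    # translated, then executed
print(jit.run("2 + 3"))    # served from the compiled cache
print(len(jit._compiled))  # translation happened only once
```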
The compilation can be optimized to the targeted CPU and the operating system model
where the application runs. For example, a JIT compiler can choose SSE2 CPU instructions when it
detects that the CPU supports them. To obtain this level of optimization specificity with a
static compiler, one must either compile a binary for each intended platform/architecture,
or else include multiple versions of portions of the code within a single binary.
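The dispatch pattern described above can be sketched as choosing an implementation from a capability probe at run time. The has_simd flag below is a hypothetical stand-in for a real CPUID-style check, and both functions are illustrative:

```python
# Runtime dispatch: pick a code path based on what the hardware
# reports, analogous to a JIT emitting SSE2 instructions only when
# the CPU supports them.

def sum_scalar(xs):
    # Portable fallback path.
    total = 0
    for x in xs:
        total += x
    return total

def sum_vectorized(xs):
    # Stand-in for a SIMD path: same result, different machine code.
    return sum(xs)

def select_sum(has_simd):
    # A static compiler must bake this choice in (or ship both paths);
    # a JIT decides here, at run time, for the actual CPU.
    return sum_vectorized if has_simd else sum_scalar

fast = select_sum(has_simd=True)
slow = select_sum(has_simd=False)
print(fast([1, 2, 3, 4]), slow([1, 2, 3, 4]))  # both print 10
```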
The system is able to collect statistics about how the program is actually running in the
environment it is in, and it can rearrange and recompile for optimum performance.
However, some static compilers can also take profile information as input.
The system can do global code optimizations (e.g. inlining of library functions) without
losing the advantages of dynamic linking and without the overheads inherent to static
compilers and linkers. Specifically, when doing global inline substitutions, a static
compilation process may need to insert run-time checks to ensure that a virtual call still
occurs if the actual class of the object overrides the inlined method, and boundary
condition checks on array accesses may need to be processed within loops. With just-in-
time compilation, in many cases this processing can be moved out of loops, often giving
large increases in speed.
Although this is possible with statically compiled garbage collected languages, a bytecode
system can more easily rearrange executed code for better cache utilization.
Startup delay and optimizations
JIT typically causes a slight delay in the initial execution of an application, due to the
time taken to load and compile the bytecode. Sometimes this delay is called "startup
time delay". In general, the more optimization JIT performs, the
better the code it will generate, but the initial delay will also increase. A JIT compiler
therefore has to make a trade-off between the compilation time and the quality of the code
it hopes to generate. However, it seems that much of the startup time is sometimes due to
IO-bound operations rather than JIT compilation (for example, the rt.jar class data file for
the Java Virtual Machine is 40 MB and the JVM must seek a lot of data in this
contextually huge file).
One possible optimization, used by Sun's HotSpot Java Virtual Machine, is to combine
interpretation and JIT compilation. The application code is initially interpreted, but the
JVM monitors which sequences of bytecode are frequently executed and translates them
to machine code for direct execution on the hardware. For bytecode which is executed
only a few times, this saves the compilation time and reduces the initial latency; for
frequently executed bytecode, JIT compilation is used to run at high speed, after an initial
phase of slow interpretation. Additionally, since a program spends most time executing a
minority of its code, the reduced compilation time is significant. Finally, during the initial
code interpretation, execution statistics can be collected before compilation, which helps
to perform better optimization.
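The HotSpot-style mixed mode described above can be sketched as a counter-driven promotion scheme. The threshold and class names are illustrative assumptions, not the JVM's actual machinery:

```python
# Mixed-mode execution sketch: interpret a method until its call count
# crosses a threshold, then promote it to the "compiled" set so later
# calls take the fast path.

HOT_THRESHOLD = 3  # assumed value; real VMs tune this heuristic

class MixedModeVM:
    def __init__(self):
        self.counts = {}    # execution statistics gathered while interpreting
        self.compiled = {}  # methods promoted to machine code

    def call(self, name, fn, *args):
        self.counts[name] = self.counts.get(name, 0) + 1
        if name in self.compiled:
            return self.compiled[name](*args)  # fast, compiled path
        if self.counts[name] >= HOT_THRESHOLD:
            self.compiled[name] = fn           # hot: promote to compiled
        return fn(*args)                       # slow, interpreted path

vm = MixedModeVM()
for _ in range(5):
    vm.call("square", lambda x: x * x, 4)
print("square" in vm.compiled)  # True: promoted once it ran hot
```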
The correct tradeoff can vary due to circumstances. For example, Sun's Java Virtual
Machine has two major modes - client and server. In client mode, minimal compilation
and optimization is performed, to reduce startup time. In server mode, extensive
compilation and optimization is performed, to maximize performance once the application
is running by sacrificing startup time. Other Java just-in-time compilers have used a
runtime measurement of the number of times a method has executed combined with the
bytecode size of a method as a heuristic to decide when to compile.[3] Still another uses
the number of times executed combined with the detection of loops.[4] In general, it is
much harder to accurately predict which methods to optimize in short-running
applications than in long-running ones.
Native Image Generator (NGen) by Microsoft is another approach to reducing the initial
delay.[6] Ngen pre-compiles (or "pre-jits") bytecode in a Common Intermediate Language
image into machine native code. As a result, no runtime compilation is needed. .NET
framework 2.0 shipped with Visual Studio 2005 runs Ngen on all of the Microsoft library
DLLs right after the installation. Pre-jitting provides a way to improve the startup time.
However, the quality of code it generates might not be as good as the one that is jitted, for
the same reasons why code compiled statically, without profile-guided optimization,
cannot be as good as JIT compiled code in the extreme case: the lack of profiling data to
drive, for instance, inline caching.
There also exist Java implementations that combine an AOT (ahead-of-time) compiler
with either a JIT compiler (Excelsior JET) or interpreter (GNU Compiler for Java.)
History
The earliest published JIT compiler is generally attributed to work on LISP by
McCarthy in 1960. In his seminal paper Recursive functions of symbolic expressions and
their computation by machine, Part I, he mentions functions that are translated during
runtime, thereby sparing the need to save the compiler output to punch cards. In 1968,
Thompson presented a method to automatically compile regular expressions to machine
code, which is then executed in order to perform the matching on an input text. An
influential technique for deriving compiled code from interpretation was pioneered by
Mitchell in 1970, which he implemented for the experimental language LC².
Smalltalk pioneered new aspects of JIT compilations. For example, translation to machine
code was done on demand, and the result was cached for later use. When memory became
scarce, the system would delete some of this code and regenerate it when it was needed
again. Sun's Self language improved these techniques extensively and was at one point the
fastest Smalltalk system in the world, achieving up to half the speed of optimized C[13]
but with a fully object-oriented language.
Self was abandoned by Sun, but the research went into the Java language, and currently it
is used by most implementations of the Java Virtual Machine, as HotSpot builds on, and
extensively uses, this research base.
The HP project Dynamo was an experimental JIT compiler where the 'bytecode' format
and the machine code format were the same; the system turned HPA-8000 machine code
into HPA-8000 machine code. Counterintuitively, this resulted in speedups, in some
cases of 30%, since doing this permitted optimizations at the machine-code level, for
example, inlining code for better cache usage and optimizations of calls to dynamic
libraries and many other run-time optimizations which conventional compilers are not
able to attempt.
Q.2
ANSWER
Value engineering (VE) or Value Analysis (VA) is a systematic method to improve the
"value" of goods or products and services by using an examination of function. Value, as
defined, is the ratio of function to cost. Value can therefore be increased by either
improving the function or reducing the cost. It is a primary tenet of value engineering that
basic functions be preserved and not be reduced as a consequence of pursuing value
improvements.
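The defining ratio, value = function / cost, can be illustrated numerically. The function "score" below is an assumed rating for the sketch; real VE studies derive function worth from function analysis rather than a single number:

```python
# Value engineering's core ratio: value rises by improving function
# or by reducing cost, while basic functions are preserved.

def value(function_score, cost):
    return function_score / cost

baseline = value(function_score=80, cost=40)  # 2.0
cheaper  = value(function_score=80, cost=32)  # same function, lower cost
better   = value(function_score=96, cost=40)  # better function, same cost

# Both routes increase value relative to the baseline.
print(cheaper > baseline and better > baseline)  # True
```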
In the United States, value engineering is specifically spelled out in Public Law 104-106,
which states "Each executive agency shall establish and maintain cost-effective value
engineering procedures and processes."
Value engineering is sometimes taught within the project management or industrial
engineering body of knowledge as a technique in which the value of a system’s outputs is
optimized by crafting a mix of performance (function) and costs. In most cases this
practice identifies and removes unnecessary expenditures, thereby increasing the value for
the manufacturer and/or their customers.
VE follows a structured thought process that is based exclusively on "function", i.e. what
something "does", not what it is. For example, a screwdriver that is being used to stir a can
of paint has a "function" of mixing the contents of a paint can, and not the original
connotation of securing a screw into a screw-hole. In value engineering, "functions" are
always described in a two-word abridgment consisting of an active verb and a measurable
noun (what is being done - the verb - and what it is being done to - the noun), stated in
the most non-prescriptive way possible. In the screwdriver and can of paint example,
the most basic function would be "blend liquid", which is less prescriptive than "stir paint",
which can be seen to limit the action (by stirring) and to limit the application (only
considers paint). This is the basis of what value engineering refers to as "function
analysis".
Value engineering uses rational logic (a unique "how" - "why" questioning technique) and
the analysis of function to identify relationships that increase value. It is considered a
quantitative method similar to the scientific method, which focuses on hypothesis-
conclusion approaches to test relationships, and operations research, which uses model
building to identify predictive relationships.
Quantitative Models
Work practices are ways of doing any work which have been in vogue and found to be
useful. These are determined by motion and time studies conducted over years and found
to be efficient, and then practiced. Any method improvement that is conducted may be
adopted to change the practice, but only after trials have shown that it increases the
comfort of the worker and gets the job done faster.
Work study
We say that work study is being conducted when analysis of work methods is conducted
during the period when a job is done on a machine or equipment. The study helps in
designing the optimum work method and standardization of the work method. This study
enables the methods engineer to search for better methods for higher utilization of man
and machine and accomplishment of higher productivity. The study gives an opportunity
to the workmen to learn the process of study thus making them able to offer suggestions
for improved methods. This encourages workmen participation and they can be permitted
to make changes and report the advantages that can be derived from those. This course is
in alignment with the principle of continuous improvement and helps the organization in
the long run. Reward systems may be implemented for recognizing contributions from the
workmen.
Work study comprises work measurement and method study. Work measurement
focuses on the time element of work, while method study focuses on the methods
deployed and the development of better methods.
Work measurement
Work measurement can be defined as a systematic application of
various techniques that are designed to establish the content of work involved in
performing a specific task. The task is performed by a qualified worker. With this we
arrive at the standard time for a task. This will be used to fix performance rating of other
workers. It forms the basis of incentives, promotion, and training for workmen and
assessment of capacity for the plant.
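The arithmetic behind standard time and performance rating can be sketched with the usual textbook relations (normal time = observed time x rating / 100; standard time = normal time x (1 + allowance)); these relations and the figures are stated here as common work-measurement practice, not as quotes from this text:

```python
# Work measurement sketch: from an observed time and a performance
# rating, derive the normal time, then add an allowance for rest and
# delays to obtain the standard time for the task.

def normal_time(observed_minutes, rating_percent):
    # Scale the observed time by the worker's rating (100 = standard pace).
    return observed_minutes * rating_percent / 100.0

def standard_time(observed_minutes, rating_percent, allowance_fraction):
    # Add a relaxation/delay allowance on top of the normal time.
    return normal_time(observed_minutes, rating_percent) * (1 + allowance_fraction)

# Hypothetical study: 4.0 min observed, worker rated at 110%, 15% allowance.
nt = normal_time(4.0, 110)          # 4.4 min
st = standard_time(4.0, 110, 0.15)  # about 5.06 min
print(round(nt, 2), round(st, 2))
```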
ILO defines a qualified worker as "one who is accepted as having the necessary physical
attributes, possessing the required intelligence and education, and having acquired the
necessary skill and knowledge to carry out the work in hand to satisfactory standards of
safety, quantity, and quality."
Methods study
Method study focuses on studying the method currently being used and developing a new
method of performing the task in a better way. Operation Flow charts, Motion Charts,
Flow Process charts, which are the elements of the task, are studied to find the purpose of
each activity, the sequence in which they are done, and the effect of these on the work.
The study may help in changing some of them and even eliminate some of them to effect
improvements. The new method should result in saving of time, reduced motions, and
simpler activities.
Ergonomics
Ergonomics is the study of physical human factors and their functioning. We study the
movements, the amount of energy that is required for certain activities, and the
coordination among them. In operations management, we use these factors at two places.
The first is when we design the machines which are operated, and the way the operator
does the tasks on the machine using different controls. Levers, wheels, switches, and
pedals (see figure) have to be positioned so that the operators have maximum comfort
for long working hours.
Q.4
ANSWER
Rapid Prototyping (RP) has also been referred to as solid free-form manufacturing,
computer-automated manufacturing, and layered manufacturing. RP has obvious use as a vehicle for
visualization. In addition, RP models can be used for testing, such as when an airfoil
shape is put into a wind tunnel. RP models can be used to create male models for tooling,
such as silicone rubber molds and investment casts. In some cases, the RP part can be the
final part, but typically the RP material is not strong or accurate enough. When the RP
material is suitable, highly convoluted shapes (including parts nested within parts) can be
produced because of the nature of RP.
The basic methodology for all current rapid prototyping techniques can be summarized as
follows:
1. A CAD model is constructed, and then converted to STL format. The resolution can be
set to minimize stair stepping.
2. The RP machine processes the .STL file by creating sliced layers of the model.
3. The first layer of the physical model is created. The model is then lowered by the
thickness of the next layer, and the process is repeated until completion of the model.
4. The model and any supports are removed. The surface of the model is then finished and
cleaned.
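The four steps above can be sketched as a schematic slicing loop. The model is reduced to a single height value (as if taken from a hypothetical .STL file) and a layer thickness; each pass builds one slice and lowers the model, as in step 3:

```python
# Schematic of the layered-manufacturing loop: slice the model into
# layers of fixed thickness and build them bottom-up until the full
# model height is reached.

def build_layers(model_height_mm, layer_thickness_mm):
    layers = []
    z = 0.0
    while z < model_height_mm:
        layers.append(round(z, 3))  # deposit/cure one sliced layer at height z
        z += layer_thickness_mm     # lower the model by one layer thickness
    return layers

slices = build_layers(model_height_mm=1.0, layer_thickness_mm=0.25)
print(len(slices))  # 4 layers: z = 0.0, 0.25, 0.5, 0.75
```

A finer layer thickness yields more slices and less stair stepping, at the cost of build time, mirroring the resolution trade-off in step 1.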
Q.5
ANSWER
Introduction
The determination of the break-even point of a firm is an important factor in assessing its
profitability. It is a valuable control technique and a planning device in any business
enterprise. It depicts the relation between total cost and total revenue at the level of a
particular output. Ordinarily, the profit of an industrial unit depends upon the selling price
of the product (revenue), the volume of business (which depends on price), and the cost
price of the product.
If an entrepreneur is aware of the product cost and its selling price, he can plan the
volume of his sale in order to achieve a certain level of profit. The Break-even point is
determined as that point of sales volume at which the total cost and total revenue’ are
identical.
Break-Even Point
Break-even point is an important measure used by the proponents and banks in
deciding the viability of a new project, especially in respect of manufacturing activities.
This technique is useful when dealing with a new project or a new activity of an existing
unit. The break-even point (BEP) establishes the level of output/production which evenly
breaks the costs and revenues. It is the level of production at which the turnover just
covers the fixed overheads and the unit starts making profits.
From the banker’s point of view, the project should achieve a break-even position within
a reasonable time from the start of production. A project which reaches the break-even
point earlier is considered a viable project by bankers. They can not only expect earlier
repayment of their advances in the case of such projects but can also be assured that the
project can fairly adapt itself to day-to-day developing technology. A project which is
unlikely to reach the break-even point by the third or fourth year of production will not
be a viable proposal for the bankers.
The break-even analysis also determines the margin of safety, i.e., the excess of budgeted
or actual sales over the break-even sales, so that the bankers would know how sensitive a
project is to recession. This is an important factor in determining the feasibility of the
project and its ability to absorb the ups and downs in the economy. The bankers, as
lenders of funds, insist upon a reasonable margin of safety so that fixed costs are met at a
fairly earlier stage.
Example:
Sales 1000 units
Selling price per unit Rs. 60
Variable cost per unit Rs. 40
Fixed cost Rs. 1500
BEP in terms of units = Fixed cost / Contribution per unit
                      = 1500 / (60 - 40)
                      = 1500 / 20
                      = 75 units
BEP in terms of sales = 75 units x Rs. 60 = Rs. 4,500
Calculation of BEP
The break-even point can be calculated in terms of physical units and in terms of sales
turnover.
i. In terms of physical units: The number of units required to be sold to achieve the break-
even point can be calculated using the following formula:
BEP (units) = FC / (SP - VC) = FC / C
Where
FC = fixed cost
VC = variable cost per unit
SP = selling price per unit
C = contribution per unit (C = SP - VC)
Example, if:
FC = Rs. 1,00,000
VC = Rs. 2 per unit
SP = Rs. 4 per unit, and
Maximum production capacity = 1,00,000 units per year,
then BEP (units) = 1,00,000 / (4 - 2) = 50,000 units per year, i.e., 50 per cent of the
maximum capacity.
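The break-even formula and both examples above can be checked with a short calculation; the margin of safety discussed earlier is included for completeness:

```python
# Break-even point: the output at which total revenue equals total
# cost, i.e. fixed cost divided by the per-unit contribution.

def bep_units(fixed_cost, selling_price, variable_cost):
    contribution = selling_price - variable_cost  # C = SP - VC
    return fixed_cost / contribution

def margin_of_safety(actual_sales_units, bep_in_units):
    # Excess of actual (or budgeted) sales over break-even sales.
    return actual_sales_units - bep_in_units

# First example: FC = 1500, SP = 60, VC = 40.
print(bep_units(1500, 60, 40))         # 75.0 units
print(bep_units(1500, 60, 40) * 60)    # Rs. 4500.0 break-even sales

# Second example: FC = 1,00,000, SP = 4, VC = 2.
units = bep_units(100_000, 4, 2)
print(units)                           # 50000.0 units
print(margin_of_safety(100_000, units))  # 50000.0 units of safety margin
```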
Thus both the layouts have their own advantages and disadvantages. In fact, the merits of
one are the demerits of the other. The choice and suitability of the layout mainly depend
on the nature of the manufacturing system and the type of product to be produced. In
general, process layout is suitable for intermittent systems and product layout is
appropriate for continuous systems.
Quality management for macro processes is carried out by use of the Juran Trilogy, which
basically consists of three steps- Quality Planning, Quality Control and Quality
Improvement. Let us understand the main activities and the relation between the three
phases of the Juran Trilogy.
Quality Planning: The quality planning phase is the activity of developing products and
processes to meet customers' needs. It deals with setting goals and establishing the means
required to reach the goals.
Quality Control: This process deals with the execution of plans and it includes monitoring
operations so as to detect differences between actual performance and goals. It consists of
three steps: evaluating actual performance, comparing actual performance to the quality
goals, and acting on the difference.
The Alligator Analogy: The distinction between Quality Planning and Quality
Improvement is brought out by the alligator analogy. This is a fable of a manager who is
up to his waist in alligators; and each live alligator is a metaphor for chronic waste. Each
completed quality improvement project results in a dead alligator and when all alligators
are terminated the quality improvement is considered complete for the moment; but that
doesn't happen as long as the quality planning process has not changed. A changed and
improved planning process will only help complete improvement and sustain the same.
From the trilogy diagram and the alligator analogy, it is clear that quality improvement
reduces quality issues, but to sustain the new level there has to be improvement in the
quality planning process.
Semester : Semester II
Q.1
ANSWER
Summary
Like Logical Data Modeling, Logical Process Modeling is one of the primary techniques
for analyzing and managing the information needed to achieve business goals. It is
important that analysts understand the concepts of process modeling and the methods
used in process discovery and definition, and perfect the analytical skills for relating and
explaining the data and processes used by a business area. Properly performed, logical
process modeling can greatly assist the system architects and developers in their efforts,
producing functional and scalable applications.
Q.2
ANSWER
The project manager must have a clear understanding of the processes, activities and
deliverables in managing a project. This includes knowledge of how to use specific tools
to bring about the expected product of each project management process. Here are the
PMBoK and Prince2 definitions of the project management knowledge areas:
1. Management of Integration describes the processes and activities that integrate the
various elements of project management.
2. Management of Scope describes the processes involved in ascertaining that the project
includes all the work required, and only the work required, to complete the project
successfully.
3. Management of Time describes the processes concerning the timely completion of the
project.
4. Management of Cost describes the processes involved in planning, estimating,
budgeting, and controlling costs so that the project is completed within the approved
budget.
5. Management of Quality describes the processes involved in assuring that the project
will satisfy the objectives for which it was undertaken.
6. Management of Human Resource describes the processes that organize and manage the
project team.
7. Management of Project Communications describes the processes concerning the timely
and appropriate generation, collection, dissemination, storage and ultimate disposition of
project information.
8. Management of Risk describes the processes concerned with conducting risk
management on a project.
9. Management of Procurement describes the processes that purchase or acquire
products, services or results, as well as contract management processes.
Q.3
ANSWER
Project Life Cycle - Project Cycle Management
The Project Life Cycle refers to a logical sequence of activities to accomplish the
project’s goals or objectives. Regardless of scope or complexity, any project goes
through a series of stages during its life. There is first an Initiation or Birth phase, in
which the outputs and critical success factors are defined, followed by a Planning phase,
characterized by breaking down the project into smaller parts/tasks, an Execution phase,
in which the project plan is executed, and lastly a Closure or Exit phase, that marks the
completion of the project. Project activities must be grouped into phases because by doing
so, the project manager and the core team can efficiently plan and organize resources for
each activity, and also objectively measure achievement of goals and justify their
decisions to move ahead, correct, or terminate. It is of great importance to organize
project phases into industry-specific project cycles. Why? Not only because each industry
sector involves specific requirements, tasks, and procedures when it comes to projects, but
also because different industry sectors have different needs for life cycle management
methodology. Paying close attention to such details is the difference between doing
things well and doing them poorly. Diverse project management tools and methodologies
prevail in the different project cycle phases. Let's take a closer look at what's important
in each of these stages:
1)Initiation
In this first stage, the scope of the project is defined along with the approach to be taken
to deliver the desired outputs. The project manager is appointed and in turn, he selects the
team members based on their skills and experience. The most common tools or
methodologies used in the initiation stage are Project Charter, Business Plan, Project
Framework (or Overview), Business Case Justification, and Milestones Reviews.
2)Planning
The second phase should include a detailed identification and assignment of each task
until the end of the project. It should also include a risk analysis and a definition of the
criteria for the successful completion of each deliverable. The governance process is
defined, stakeholders identified, and reporting frequency and channels agreed. The most
common tools or methodologies used in the planning stage are Business Plans.
3)Execution and controlling
The most important issue in this phase is to ensure project activities are properly
executed and controlled. During the execution phase, the planned solution is implemented
to solve the problem specified in the project's requirements.
product and system development, a design resulting in a specific set of product
requirements is created. This convergence is measured by prototypes, testing, and
reviews. As the execution phase progresses, groups across the organization become more
deeply involved in planning for the final testing, production, and support. The most
common tools or methodologies used in the execution phase are an update of Risk
Analysis and Score Cards, in addition to Business Plan and Milestones Reviews.
4)Closure
In this last stage, the project manager must ensure that the project is brought to its proper
completion. The closure phase is characterized by a written formal project review report
containing the following components: a formal acceptance of the final product by the
client, Weighted Critical Measurements (matching the initial requirements specified by
the client with the final delivered product), rewarding the team, a list of lessons learned,
releasing project resources, and a formal project closure notification to higher
management. No special tool or methodology is needed during the closure phase.
Processes overlap and interact throughout a project or phase. Processes are described in
terms of :
• Inputs (documents, plans, designs, etc.)
• Tools and Techniques (mechanisms applied to inputs)
• Outputs (documents, products, etc.)
The nine knowledge areas are:
1. Project Integration Management
2. Project Scope Management
3. Project Time Management
4. Project Cost Management
5. Project Quality Management
6. Project Human Resource Management
7. Project Communications Management
8. Project Risk Management
9. Project Procurement Management
Each knowledge area contains some or all of the project management processes. For
example, Project Procurement Management includes:
• Procurement Planning
• Solicitation Planning
• Solicitation
• Source Selection
• Contract Administration
• Contract Closeout
Much of PMBOK is unique to project management, e.g. critical path and work breakdown
structure (WBS). Some areas overlap with other management disciplines. General
management also includes planning, organizing, staffing, executing and controlling the
operations of an organization. Financial forecasting, organizational behaviour and
planning techniques are also similar.
The Project Management Institute (PMI) is the publisher of PMBOK (now in its fourth
edition) and offers two levels of certification:
A Certified Associate in Project Management (CAPM) has demonstrated a common base
of knowledge and terms in the field of project management. It requires either 1500 hours
of work on a project team or 23 contact hours of formal education in project management.
A Project Management Professional (PMP) has met specific education and experience
requirements, has agreed to adhere to a code of professional conduct and has passed an
examination designed to objectively assess and measure project management knowledge.
In addition, a PMP must satisfy continuing certification requirements or lose the
certification.
As of 2006, PMI reported over 220,000 members and over 50,000 Project Management
Professionals (PMPs) in 175 countries. Over 44,000 PMP certifications expire annually; a
PMP must document ongoing project management experience and education every three
years to keep their certification current.
Q.4
ANSWER
When the success factors are studied, focus falls on the human aspects. A strong academic
majority raises a big concern around this area. All agree that the intellectual assets of the
employees are the foremost critical success factor. "Usually people begin a KM project by
focusing on the technology needs. But the key is people and process." (Shir Nir, 2002).
The key to successful knowledge management (KM) projects is focusing on people first,
not cutting-edge technology. "The biggest misconception that IT leaders make is that
knowledge management is about technology," says Shir Nir. There is no "cookie-cutter
approach" to adopting knowledge management. Every organization and company has its
own definition of knowledge and how it should be gathered, categorized and made
available to employees. What works for one company will not work for another because
organizational knowledge is so subjective. The one-size-fits-all mentality, coupled with
the tendency to focus on technology rather than people and process, has obscured the real
benefits that KM can bring, according to Nir (2002). It does not help that knowledge
management means different things and often involves different kinds of technologies at
different organizations.
Bixler (2002) developed a four pillar model to describe success factors for a KM
implementation. To achieve a basic entry-level KM program, it has been determined that
all "four pillars" must be addressed. The four enterprise engineering pillars are
leadership, organization, technology and learning in support of enterprise wide knowledge
management initiatives. Leadership means that managers develop business and
operational strategies to survive and position for success in today’s dynamic environment.
Those strategies determine vision, and must align knowledge management with business
tactics to drive the value of KM throughout the enterprise. Focus must be placed on
building executive support and KM champions.
The organization pillar holds that the value of knowledge creation and collaboration
should be intertwined throughout an enterprise. Operational processes must align with
the KM framework and strategy, including all performance metrics and objectives. While
operational needs dictate organizational alignment, a KM system must be designed to
facilitate KM throughout the organization. The technology pillar enables and provides
the entire infrastructure and tools to support KM within an enterprise.
The Gartner Group defines 10 technologies that collectively make up full-function KM.
The functional requirements that enterprises can select and use to build a KM solution
include: "capture and store", "search and retrieve", "send critical information to
individuals or groups", "structure and navigate", "share and collaborate", "synthesize,
profile and personalize", "solve or recommend", "integrate with business applications",
and "maintenance".
Summary
Based on the above study, the most relevant factors for the successful implementation
and sustained momentum of KM initiatives are:
(1) Culture: a culture of pervasive knowledge sharing needs to be nurtured, enabled
within, and aligned with organizational objectives. The underlying concern is that
employees do not want to share information. Successful organizations empower
employees to want to share and contribute intellectual capital by rewarding them for
such actions, and organizational leaders act as role models of information sharing,
interfacing regularly with staff, teams and stakeholders in review sessions and openly
talking about successes and failures.
(2) KM Organization: the first important variable is leadership with a vision, a strategy
and the ability to promote change toward a compelling knowledge management agenda,
actively promoted by the Chief Executive, that clearly articulates how knowledge
management contributes to achieving organizational objectives. A specialist team is
needed to aggressively manage knowledge property, i.e. to manage intellectual assets as
routine processes, with appropriate technology and infrastructure for "social" and
electronic networking to allow for innovation and to leverage organizational knowledge.
(3) Strategy, Systems & Infrastructure: establishes a clear definition of all required KM
elements and an overall system approach and integration.
(4) Measures: finally, the success of knowledge management can be measured against
pragmatic milestones, such as the creation of products, the development of new clients
and an increase in sales revenue.
Q.5
ANS:
The most requested article in the 10-year history of Supply Chain Management Review
was one that appeared in our very first issue in the spring of 1997. Written by experts
from the respected Logistics practice of Andersen Consulting (now Accenture), “The
Seven Principles of Supply Chain Management," laid out a clear and compelling case
for excellence in supply chain management.
The insights provided here remain remarkably fresh ten years later.
• Principle 1: Segment customers based on the service needs of distinct groups and adapt
the supply chain to serve these segments profitably.
• Principle 2: Customize the logistics network to the service requirements and profitability
of customer segments.
• Principle 3: Listen to market signals and align demand planning accordingly across the
supply chain, ensuring consistent forecasts and optimal resource allocation.
• Principle 4: Differentiate product closer to the customer and speed conversion across the
supply chain.
• Principle 5: Manage sources of supply strategically to reduce the total cost of owning
materials and services.
• Principle 6: Develop a supply chain-wide technology strategy that supports multiple
levels of decision making and gives a clear view of the flow of products, services, and
information.
• Principle 7: Adopt channel-spanning performance measures to gauge collective success
in reaching the end user effectively and efficiently.
Managers increasingly find themselves assigned the role of the rope in a very real tug of
war—pulled one way by customers' mounting demands and the opposite way by the
company's need for growth and profitability. Many have discovered that they can keep the
rope from snapping and, in fact, achieve profitable growth by treating supply chain
management as a strategic variable. These savvy managers recognize two important
things:
1. They think about the supply chain as a whole—all the links involved in managing the
flow of products, services, and information from their suppliers' suppliers to their
customers' customers (that is, channel customers, such as distributors and retailers).
2. They pursue tangible outcomes—focused on revenue growth, asset utilization, and cost.
Rejecting the traditional view of a company and its component parts as distinct functional
entities, these managers realize that the real measure of success is how well activities
coordinate across the supply chain to create value for customers, while increasing the
profitability of every link in the chain.
Our analysis of initiatives to improve supply chain management by more than 100
manufacturers, distributors, and retailers shows many making great progress, while others
fail dismally. The successful initiatives that have contributed to profitable growth share
several themes. They are typically broad efforts, combining both strategic and tactical
change. They also reflect a holistic approach, viewing the supply chain from end to end
and orchestrating efforts so that the whole improvement achieved—in revenue, costs, and
asset utilization—is greater than the sum of its parts.
Because customer demand is rarely perfectly stable, businesses must forecast demand to
properly position inventory and other resources. Forecasts are based on statistics, and they
are rarely perfectly accurate. Because forecast errors are a given, companies often carry
an inventory buffer called "safety stock".
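The safety-stock buffer is commonly sized with the textbook formula z · σ · √L, where z is the service-level factor, σ the standard deviation of per-period demand, and L the replenishment lead time in periods. A minimal sketch, assuming normally distributed, independent demand (the figures below are illustrative, not from the source):

```python
import math

def safety_stock(z_service, demand_std, lead_time_periods):
    """Textbook buffer against forecast error for normally distributed demand:
    z * (std dev of per-period demand) * sqrt(lead time in periods)."""
    return z_service * demand_std * math.sqrt(lead_time_periods)

# Illustrative numbers: ~95% cycle service level (z = 1.65),
# daily demand std dev of 20 units, 4-day replenishment lead time.
buffer = safety_stock(1.65, 20, 4)
print(round(buffer))  # 66
```

The higher the desired service level or the longer and more variable the lead time, the larger the buffer a participant must carry, which is why forecast error is costly upstream.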
Moving up the supply chain from end-consumer to raw materials supplier, each supply
chain participant has greater observed variation in demand and thus greater need for
safety stock. In periods of rising demand, down-stream participants increase orders. In
periods of falling demand, orders fall or stop to reduce inventory. The effect is that
variations are amplified as one moves upstream in the supply chain (further from the
customer). This sequence of events is well simulated by the Beer Distribution Game,
developed at the MIT Sloan School of Management in the 1960s.
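The upstream amplification can be illustrated with a small simulation (a hedged sketch, not the Beer Game itself): each echelon forecasts demand with a moving average and follows an order-up-to policy, and the standard deviation of orders grows at each step away from the consumer. The policy parameters below are illustrative assumptions.

```python
import random
import statistics

def order_up_to_orders(demand, lead_time=2, window=4):
    """One echelon: moving-average forecast over `window` periods and an
    order-up-to policy covering `lead_time` + 1 periods of forecast demand.
    Each period's order replaces observed demand and closes the gap between
    the old and new order-up-to targets."""
    history, orders, position = [], [], 0.0
    for d in demand:
        history.append(d)
        forecast = statistics.mean(history[-window:])
        target = forecast * (lead_time + 1)       # desired inventory position
        order = max(0.0, d + target - position)   # O_t = D_t + (S_t - S_{t-1})
        position = target
        orders.append(order)
    return orders

random.seed(1)
consumer = [100 + random.gauss(0, 5) for _ in range(200)]
retailer = order_up_to_orders(consumer)    # retailer's orders to wholesaler
wholesaler = order_up_to_orders(retailer)  # wholesaler's orders to factory

# Skip the first 20 periods (start-up transient) when comparing variability.
for name, series in [("consumer", consumer), ("retailer", retailer),
                     ("wholesaler", wholesaler)]:
    print(name, round(statistics.stdev(series[20:]), 1))
```

Running this shows the order standard deviation increasing at each upstream echelon even though end-consumer demand is stable, which is exactly the amplification the text describes.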
The causes can further be divided into behavioral and operational causes:
Behavioral causes
• misuse of base-stock policies
• misperceptions of feedback and time delays
• panic ordering reactions after unmet demand
• perceived risk of other players' bounded rationality
Operational causes
• Dependent demand processing
o Forecast Errors
o adjustment of inventory control parameters with each demand observation
• Lead time Variability (forecast error during replenishment lead time)
• lot-sizing/order synchronization
o consolidation of demands
o transaction motive
o quantity discount
• trade promotion and forward buying
• anticipation of shortages
o allocation rule of suppliers
o shortage gaming
o Lean and JIT style management of inventories and a chase production
strategy
Theoretically the Bullwhip effect does not occur if all orders exactly meet the demand of
each period. This is consistent with findings of supply chain experts who have recognized
that the Bullwhip Effect is a problem in forecast-driven supply chains, and careful
management of the effect is an important goal for Supply Chain Managers. Therefore it is
necessary to extend the visibility of customer demand as far as possible. One way to
achieve this is to establish a demand-driven supply chain which reacts to actual customer
orders. In manufacturing, this concept is called Kanban. This model has been most
successfully implemented in Wal-Mart's distribution system. Individual Wal-Mart stores
transmit point-of-sale (POS) data from the cash register back to corporate headquarters
several times a day. This demand information is used to queue shipments from the Wal-
Mart distribution center to the store and from the supplier to the Wal-Mart distribution
center. The result is near-perfect visibility of customer demand and inventory movement
throughout the supply chain. Better information leads to better inventory positioning and
lower costs throughout the supply chain. Barriers to the implementation of a demand-
driven supply chain include the necessary investment in information technology and the
creation of a corporate culture of flexibility and focus on customer demand. Another
prerequisite is that all members of a supply chain recognize that they can gain more if
they act as a whole which requires trustful collaboration and information sharing.
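The pull logic behind such a demand-driven chain can be sketched as a two-bin kanban loop: each point-of-sale transaction consumes stock, and an emptied bin becomes a replenishment card sent upstream, so orders track actual consumption rather than forecasts. The class and bin size below are hypothetical, for illustration only.

```python
from collections import deque

class KanbanLoop:
    """Minimal two-bin pull signal (illustrative sketch): sales consume the
    active bin; each emptied bin raises one replenishment card upstream."""

    def __init__(self, bin_size):
        self.bin_size = bin_size
        self.active = bin_size          # units left in the active bin
        self.cards_upstream = deque()   # outstanding replenishment cards

    def sell(self, qty):
        """Record a POS sale; raise a card for every bin it empties."""
        self.active -= qty
        while self.active <= 0:
            self.cards_upstream.append(self.bin_size)
            self.active += self.bin_size

    def replenish(self):
        """Upstream fulfills all outstanding cards; returns units shipped."""
        shipped = sum(self.cards_upstream)
        self.cards_upstream.clear()
        return shipped

store = KanbanLoop(bin_size=50)
for sale in [20, 20, 20, 30]:   # POS sales during the day
    store.sell(sale)
print(store.replenish())  # 50
```

Because the upstream shipment equals exactly what was consumed, no forecast error propagates up the chain; the price is the IT investment and trustful information sharing noted above.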
Methods intended to reduce uncertainty, variability, and lead time: