
(Spring/Feb 2013) Master of Business Administration - MBA Semester 3
Operations Management Specialization
OM0010 - Operations Management (4 credits) (Book ID: B1232)
ASSIGNMENT - Set 1. Marks: 60
Note: Assignment Set 1 must be written within 6-8 pages. Answer all questions.

Q1. What are the effects of Global Competition on the industries in India? Answer. As the greatest hope for growth in the global economy for the past two years, the emerging markets have become the darlings of the financial press and a favourite talking point of C-suite executives worldwide. Once attractive only for their natural resources or as a source of cheap labour and low-cost manufacturing, emerging markets are now seen as promising markets in their own right. Rapid population growth, sustained economic development and a growing middle class are making many companies look at emerging markets in a whole new way. Against this background, a company must be very effective in its operational performance and must have a sound strategy in order to perform well; it is very difficult for a company to outperform others merely on the strength of its operational effectiveness.

Due to rapid globalisation, industries in most countries are facing intense competition. Developed countries look for new markets for their products as their own home markets mature, while industries in the emerging economies, looking for larger markets, churn out superior products offered at lower prices.

Tremendous growth in transportation and communication has made accessing modern and distant markets easier. The entire world can be perceived today as a "global village", wherein economic events in one country promptly affect other countries.

Q2. How is Economies of Scope different from Economies of Scale? Answer. Generally speaking, economies of scale are about the benefits gained by producing a large volume of a product, while economies of scope are linked to the benefits gained by producing a wide variety of products while efficiently utilising the same operations. Each of these business strategies, with its strengths and weaknesses, is discussed in detail below.

"Economies of scale" has long been known as a major factor in increasing profitability and contributing to a firm's other financial and operational ratios. Mass production of a mature, standardised product can apply the most efficient line-flow process and standard inputs to reduce the manufacturing cost per unit. Mass manufacturing is also associated with a significant market share and a tight supply chain.

In microeconomics, economies of scale are the cost advantages that enterprises obtain due to size, with cost per unit of output generally decreasing with increasing scale as fixed costs are spread out over more units of output. Often operational efficiency is also greater with increasing scale, leading to lower variable cost as well. Economies of scale apply to a variety of organizational and business situations and at various levels, such as a business or manufacturing unit, plant or an entire enterprise. For example, a large manufacturing facility would be expected to have a lower cost per unit of output than a smaller facility, all other factors being equal, while a company with many facilities should have a cost advantage over a competitor with fewer.

Some economies of scale, such as capital cost of manufacturing facilities and friction loss of transportation and industrial equipment, have a physical or engineering basis. The economic concept dates back to Adam Smith and the idea of obtaining larger production returns through the use of division of labor.[1] Diseconomies of scale is the opposite.

Economies of scale often have limits, such as passing the optimum design point where costs per additional unit begin to increase. Common limits include exceeding the nearby raw material supply, such as wood in the lumber, pulp and paper industry. A common limit for low cost per unit weight commodities is saturating the regional market, thus having to ship product uneconomical distances. Other limits include using energy less efficiently or having a higher defect rate. Large producers are usually efficient at long runs of a product grade (a commodity) and find it costly to switch grades frequently. They will therefore avoid specialty grades even though they have higher margins. Often smaller (usually older) manufacturing facilities remain viable by changing from commodity grade production to specialty products.
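To make the cost-per-unit argument concrete, the short sketch below (in Python, with entirely hypothetical figures for the fixed cost, variable cost and a congestion term) shows average cost falling as a fixed cost is spread over more units, and then rising again once diseconomies of scale dominate:

```python
def average_cost(units, fixed_cost=1_000_000, variable_cost=40.0, congestion=0.00002):
    """Hypothetical average cost per unit at a given production volume.

    The fixed cost is spread over all units produced; the congestion term is a
    stand-in for diseconomies of scale (co-ordination overhead, shipping product
    over uneconomical distances, and so on).
    """
    return fixed_cost / units + variable_cost + congestion * units

for units in (10_000, 50_000, 100_000, 500_000, 1_000_000):
    print(f"{units:>9} units -> average cost {average_cost(units):7.2f} per unit")
```

With these illustrative numbers the average cost falls from about 140 per unit at 10,000 units to about 52 per unit at 100,000 units, then creeps back up at very large volumes, mirroring the "optimum design point" mentioned above.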

Economies of scale and returns to scale


Economies of scale is related to and can easily be confused with the theoretical economic notion of returns to scale. Where economies of scale refer to a firm's costs, returns to scale describe the relationship between inputs and outputs in a long-run (all inputs variable) production function. A production function has constant returns to scale if increasing all inputs by some proportion results in output increasing by that same proportion. Returns are decreasing if, say, doubling inputs results in less than double the output, and increasing if more than double the output. If a mathematical function is used to represent the production function, and if that production function is homogeneous, returns to scale are represented by the degree of homogeneity of the function. Homogeneous production functions with constant returns to scale are first-degree homogeneous, increasing returns to scale are represented by degrees of homogeneity greater than one, and decreasing returns to scale by degrees of homogeneity less than one (a short worked example is given below).

If the firm is a perfect competitor in all input markets, and thus the per-unit prices of all its inputs are unaffected by how much of the inputs the firm purchases, then it can be shown[10][11][12] that at a particular level of output the firm has economies of scale if and only if it has increasing returns to scale, has diseconomies of scale if and only if it has decreasing returns to scale, and has neither economies nor diseconomies of scale if it has constant returns to scale. In this case, with perfect competition in the output market, the long-run equilibrium will involve all firms operating at the minimum point of their long-run average cost curves (i.e., at the borderline between economies and diseconomies of scale). If, however, the firm is not a perfect competitor in the input markets, then the above conclusions are modified. For example, if there are increasing returns to scale in some range of output levels, but the firm is so big in one or more input markets that increasing its purchases of an input drives up the input's per-unit cost, then the firm could have diseconomies of scale in that range of output levels. Conversely, if the firm is able to get bulk discounts on an input, then it could have economies of scale in some range of output levels even if it has decreasing returns in production in that output range.

The literature has assumed that, due to the competitive nature of reverse auctions, and in order to compensate for lower prices and lower margins, suppliers seek higher volumes to maintain or increase total revenue. Buyers, in turn, benefit from the lower transaction costs and economies of scale that result from larger volumes. In part as a result, numerous studies have indicated that the procurement volume must be sufficiently high to provide sufficient profits to attract enough suppliers, and to provide buyers with enough savings to cover their additional costs.[13] However, surprisingly enough, Shalev and Asbjornsen found, in their research based on 139 reverse auctions conducted in the public sector by public sector buyers, that higher auction volume, or economies of scale, did not lead to better success of the auction. They found that auction volume did not correlate with competition, nor with the number of bidders, suggesting that auction volume does not promote additional competition. They noted, however, that their data included a wide range of products and that the degree of competition in each market varied significantly, and suggest that further research should be conducted to determine whether these findings remain the same when purchasing the same product in both small and large volumes. Keeping competitive factors constant, increasing auction volume may further increase competition.[14]
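As the worked example promised above, consider the standard Cobb-Douglas production function (a textbook illustration, not taken from this assignment); its degree of homogeneity, and hence its returns to scale, is simply the sum of its exponents:

```latex
% Cobb-Douglas production function with capital K and labour L
f(K, L) = A K^{\alpha} L^{\beta}

% Scale every input by the same factor t > 0:
f(tK, tL) = A (tK)^{\alpha} (tL)^{\beta}
          = t^{\alpha + \beta} A K^{\alpha} L^{\beta}
          = t^{\alpha + \beta} f(K, L)

% The function is homogeneous of degree \alpha + \beta, so:
%   \alpha + \beta = 1  => constant returns to scale
%   \alpha + \beta > 1  => increasing returns to scale
%   \alpha + \beta < 1  => decreasing returns to scale
```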

Q3. List and explain the six basic steps involved in preparing a forecast. Answer.

1) Identify all sources of monthly income. Consider all sources of income such as salary, bonuses, dividends, alimony, child support, etc.

2) List all expenses. This might take a few tries to get everything. Check your calendar for birthdays, anniversaries, weddings, etc. Include prescriptions, subscriptions, memberships and annual expenses. Here is a list of common annual expenses: auto insurance
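A minimal sketch of steps 1 and 2 (in Python; every category name and amount below is hypothetical) simply totals the identified income sources and listed expenses, converting annual items such as auto insurance to a monthly figure, to project the month's net position:

```python
# Step 1: identify all sources of monthly income (hypothetical figures).
income = {"salary": 4200.00, "dividends": 150.00, "child_support": 600.00}

# Step 2: list all expenses; annual items such as auto insurance are spread
# over twelve months so they appear in the monthly forecast.
expenses = {"rent": 1400.00, "groceries": 520.00, "subscriptions": 45.00,
            "auto_insurance": 960.00 / 12}

net = sum(income.values()) - sum(expenses.values())
print(f"Projected net position for the month: {net:.2f}")
```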

The systematic approach to forecasting

Whilst the Antarctic environment is different from the climate and weather experienced by most forecasters in their home country, the same fundamental laws of physics and chemistry apply. Accordingly, Antarctic weather forecasters can approach their task in a manner similar to that which they are used to at their home locality. The systematic approach outlined here may not be followed worldwide, but has proven to be successful in some countries (e.g. Australian Bureau of Meteorology, 1984) and probably contains some steps that are adhered to in full or in part by most national forecasting agencies.

Getting to know the physical environment of the area for which forecasts are being prepared

Prior to commencing forecasting for a particular area, Australian Bureau of Meteorology (1984) advises that a new forecaster must have a detailed knowledge of the mesoscale climatology and geography of the forecast area. This information should be established for each relevant location and stored in a convenient integrated summary or display that is easily accessible.

Getting to know an area's climatology


Basically, climatology summarises various aspects of weather (e.g. average cloud amount; days of gales; blowing snow; fog, etc.) in a convenient form (e.g. frequency tables) and consequently provides some information on the normal expectation of weather at a particular locality. Appendix 2 contains climatological data for many of the places covered in this handbook. The value of these data for short-term forecasting varies greatly with locality and season. Frequently, climate statistics only provide background information, but, where the weather is strongly linked to seasonal and diurnal patterns, climatology exercises a strong influence on the forecast. Nevertheless, climatology must not be relied on at the expense of synoptic and dynamic reasoning.
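As a rough illustration of the frequency tables mentioned above, the sketch below (in Python; the daily observations, the 34-knot gale threshold and the 1 km fog visibility threshold are all assumed for the example) counts gale days and fog days per month from a list of observations:

```python
from collections import Counter

# Hypothetical daily observations: (month, max_wind_knots, visibility_km)
observations = [
    (1, 42, 0.4), (1, 18, 9.0), (2, 55, 0.2), (2, 12, 10.0),
    (7, 61, 0.1), (7, 35, 2.0), (7, 48, 0.3), (12, 20, 8.0),
]

GALE_KNOTS = 34          # assumed gale threshold
FOG_VISIBILITY_KM = 1.0  # assumed fog threshold

gale_days = Counter(month for month, wind, _ in observations if wind >= GALE_KNOTS)
fog_days = Counter(month for month, _, vis in observations if vis < FOG_VISIBILITY_KM)

for month in sorted({m for m, _, _ in observations}):
    print(f"month {month:2d}: gale days = {gale_days[month]}, fog days = {fog_days[month]}")
```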

Major steps in short-term forecasting


In practice, the best approach to short-term forecasting will depend on the particular time-period of the forecast and the scale and life cycle of the weather phenomena that are expected in the forecast period. For example, in a blizzard situation, persistence might be quite satisfactory for the next few hours to a day or so if a major low-pressure system has forced the blizzard and is slow moving. On the other hand, the onset or cessation of blizzard conditions may be more difficult to predict. Nevertheless, assuming familiarity with the forecast area and a knowledge of, or ready access to, local climatology, there are general principles that should be followed during a forecasting shift in order to develop a solid forecasting methodology. These principles are enunciated in a sequence of major steps recommended by the Australian Bureau of Meteorology (1984), outlined below and elaborated on in the subsequent sections.

A thorough briefing
A thorough handover/takeover briefing is needed at the start and end of each shift in order to: maintain continuity of short-term forecasting services;

provide the new shift forecaster with sufficient background to act immediately to revise a forecast or to issue a new forecast if required;
Q4. Explain Johnson's rule for sequencing and how it is different from the CDS algorithm. Answer. Johnson's algorithm of sequencing: Johnson's algorithm is used for sequencing n jobs through two work centres. The purpose is to minimise idle time on the machines and to reduce the total time taken for completing all the jobs. As all jobs have equal priority, there are no priority rules; instead, sequencing the jobs according to their processing times minimises the idle time on the machines and thereby reduces the total completion time. The CDS (Campbell, Dudek and Smith) algorithm differs in that it extends this approach to flow shops with more than two work centres: it constructs a series of two-machine surrogate problems from the m-machine processing times, applies Johnson's rule to each surrogate, and chooses the sequence that gives the lowest makespan.
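A minimal sketch of Johnson's rule for the two-work-centre case (in Python; the job names and processing times in the example are invented): the job with the smallest remaining processing time is scheduled as early as possible if that time falls on the first work centre, and as late as possible if it falls on the second.

```python
def johnsons_rule(jobs):
    """Sequence jobs through two work centres to minimise makespan.

    jobs: dict mapping job name -> (time on machine 1, time on machine 2).
    Returns the processing order as a list of job names.
    """
    front, back = [], []
    remaining = dict(jobs)
    while remaining:
        # Find the smallest remaining processing time on either machine.
        job, (t1, t2) = min(remaining.items(), key=lambda kv: min(kv[1]))
        if t1 <= t2:
            front.append(job)      # smallest time on machine 1 -> schedule as early as possible
        else:
            back.insert(0, job)    # smallest time on machine 2 -> schedule as late as possible
        del remaining[job]
    return front + back

# Example: five hypothetical jobs with (machine 1, machine 2) times in hours.
jobs = {"A": (5, 2), "B": (1, 6), "C": (9, 7), "D": (3, 8), "E": (10, 4)}
print(johnsons_rule(jobs))   # ['B', 'D', 'C', 'E', 'A']
```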

A different algorithm that bears the same name comes from graph theory: Johnson's algorithm is a way to find the shortest paths between all pairs of vertices in a sparse directed graph. It allows some of the edge weights to be negative numbers, but no negative-weight cycles may exist. It works by using the Bellman-Ford algorithm to compute a transformation of the input graph that removes all negative weights, allowing Dijkstra's algorithm to be used on the transformed graph. It is named after Donald B. Johnson, who first published the technique in 1977. A similar reweighting technique is also used in Suurballe's algorithm (1974) for finding two disjoint paths of minimum total length between the same two vertices in a graph with nonnegative edge weights.
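A compact sketch of that procedure (in Python; the adjacency-dictionary representation and the function names are my own choices, not taken from any particular library): Bellman-Ford run from an added vertex q supplies potentials h(v), every edge is reweighted to w(u, v) + h(u) - h(v), Dijkstra's algorithm is run from each vertex, and the reweighting is undone at the end.

```python
import heapq
from math import inf

def bellman_ford(graph, source):
    """graph: {u: {v: weight}}; every vertex must appear as a key. Returns distances from source."""
    dist = {v: inf for v in graph}
    dist[source] = 0
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u].items():
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
    for u in graph:                      # one more pass to detect negative cycles
        for v, w in graph[u].items():
            if dist[u] + w < dist[v]:
                raise ValueError("negative-weight cycle detected")
    return dist

def dijkstra(graph, source):
    """Standard Dijkstra with a binary heap; assumes nonnegative edge weights."""
    dist = {v: inf for v in graph}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def johnson(graph):
    """All-pairs shortest paths, allowing negative edges but no negative cycles."""
    q = object()                                           # virtual vertex q
    augmented = {q: {v: 0 for v in graph}, **graph}        # zero-weight edges q -> every vertex
    h = bellman_ford(augmented, q)                         # potentials h(v)
    # Reweight: w'(u, v) = w(u, v) + h(u) - h(v) >= 0.
    reweighted = {u: {v: w + h[u] - h[v] for v, w in edges.items()}
                  for u, edges in graph.items()}
    all_pairs = {}
    for u in graph:
        d = dijkstra(reweighted, u)
        # Undo the reweighting to recover the original distances.
        all_pairs[u] = {v: d[v] - h[u] + h[v] for v in graph if d[v] < inf}
    return all_pairs

# Example with a negative edge but no negative cycle.
g = {"a": {"b": -2.0}, "b": {"c": 3.0}, "c": {"a": 4.0, "b": 7.0}, "d": {"a": 1.0}}
print(johnson(g)["d"])   # shortest distances from d to every reachable vertex
```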

Correctness
Here h(v) denotes the shortest-path distance, computed in the Bellman-Ford stage, from the added vertex q to v, and each edge (u, v) is reweighted to w'(u, v) = w(u, v) + h(u) - h(v). In the reweighted graph, all paths between a pair s and t of nodes have the same quantity h(s) - h(t) added to them. The previous statement can be proven as follows. Let p = (s = p_0, p_1, ..., p_k = t) be an s-t path. Its weight W in the reweighted graph is given by the following expression:

W = [w(p_0, p_1) + h(p_0) - h(p_1)] + [w(p_1, p_2) + h(p_1) - h(p_2)] + ... + [w(p_{k-1}, p_k) + h(p_{k-1}) - h(p_k)]

Notice that every +h(p_i) for an intermediate vertex is cancelled by the -h(p_i) in the previous bracketed expression; therefore, we are left with the following expression for W:

W = [w(p_0, p_1) + w(p_1, p_2) + ... + w(p_{k-1}, p_k)] + h(s) - h(t)

Notice that the bracketed expression is the weight of p in the original weighting. Since the reweighting adds the same amount to the weight of every s-t path, a path is a shortest path in the original weighting if and only if it is a shortest path after reweighting. The weight of edges that belong to a shortest path from q to any node is zero, and therefore the lengths of the shortest paths from q to every node become zero in the reweighted graph; however, they still remain shortest paths. Therefore, there can be no negative edges: if edge uv had a negative weight after the reweighting, then the zero-length path from q to u together with this edge would form a negative-length path from q to v, contradicting the fact that all vertices have zero distance from q. The non-existence of negative edges ensures the optimality of the paths found by Dijkstra's algorithm. The distances in the original graph may be calculated from the distances calculated by Dijkstra's algorithm in the reweighted graph by reversing the reweighting transformation.

Analysis
The time complexity of this algorithm, using Fibonacci heaps in the implementation of Dijkstra's algorithm, is O(V² log V + VE): the algorithm uses O(VE) time for the Bellman-Ford stage of the algorithm, and O(V log V + E) for each of the V instantiations of Dijkstra's algorithm. Thus, when the graph is sparse, the total time can be faster than the Floyd-Warshall algorithm, which solves the same problem in time O(V³).

Q5. How do Crosby's absolutes of quality differ from Deming's principles? Answer. Like Deming, Crosby also lays emphasis on top management commitment and responsibility for designing the system so that defects are not inevitable. He urged that there be no restriction on spending for achieving quality: in the long run, maintaining quality is more economical than compromising on its achievement. His absolutes can be listed as under:

(i) Quality is conformance to requirements, not goodness.
(ii) Prevention, not appraisal, is the path to quality.
(iii) Quality is measured as the price paid for non-conformance and as indices.
(iv) Quality originates in all functions, not only the quality department. There are no "quality problems"; it is people, designs and processes that create problems.

Crosby also gave 14 points similar to those of Deming. His approach emphasises measurement of quality, increasing awareness, corrective action, error-cause removal and continuous reinforcement of the system, so that the advantages derived are not lost over time. He opined that the quality management regimen should improve the overall health of the organisation, and prescribed a "vaccine".

The ingredients of this vaccine are:

1. Integrity: honesty and commitment to produce everything right first time, every time.
2. Communication: the flow of information between departments, suppliers and customers helps in identifying opportunities.
3. Systems and operations: these should bring in a quality environment, so that nobody is comfortable with anything less than the best.

Q6. Analyse the various types of Probability distribution. Answer. In probability and statistics, a probability distribution assigns a probability to each measurable subset of the possible outcomes of a random experiment, survey, or procedure of statistical inference. Examples are found in experiments whose sample space is non-numerical, where the distribution would be a categorical distribution; experiments whose sample space is encoded by discrete random variables, where the distribution can be specified by a probability mass function; and experiments with sample spaces encoded by continuous random variables, where the distribution can be specified by a probability density function. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.

In applied probability, a probability distribution can be specified in a number of different ways, often chosen for mathematical convenience:

by supplying a valid probability mass function or probability density function;
by supplying a valid cumulative distribution function or survival function;
by supplying a valid hazard function;
by supplying a valid characteristic function;
by supplying a rule for constructing a new random variable from other random variables whose joint probability distribution is known.

A probability distribution can either be univariate or multivariate. A univariate distribution gives the probabilities of a single random variable taking on various alternative values; a multivariate distribution (a joint probability distribution) gives the probabilities of a random vector (a set of two or more random variables) taking on various combinations of values. Important and commonly encountered univariate probability distributions include the binomial distribution, the hypergeometric distribution, and the normal distribution. The multivariate normal distribution is a commonly encountered multivariate distribution.
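As a small illustration of a univariate discrete distribution specified through its probability mass function, the sketch below (in Python; the parameters n = 10 and p = 0.3 are arbitrary) evaluates the binomial PMF directly and checks that the probabilities sum to one:

```python
from math import comb

def binomial_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials, each with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3
pmf = [binomial_pmf(k, n, p) for k in range(n + 1)]
print(round(sum(pmf), 10))                        # 1.0 -- a valid probability mass function
print(max(range(n + 1), key=lambda k: pmf[k]))    # 3, the most likely number of successes
```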

Continuous probability distribution



A continuous probability distribution is a probability distribution that has a probability density function. Mathematicians also call such a distribution absolutely continuous, since its cumulative distribution function is absolutely continuous with respect to the Lebesgue measure. If the distribution of X is continuous, then X is called a continuous random variable. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others. Intuitively, a continuous random variable is one that can take a continuous range of values, as opposed to a discrete distribution, where the set of possible values for the random variable is at most countable. While for a discrete distribution an event with probability zero is impossible (e.g. rolling 3½ on a standard die is impossible, and has probability zero), this is not so in the case of a continuous random variable. For example, if one measures the width of an oak leaf, the result of 3½ cm is possible; however, it has probability zero because there are uncountably many other potential values even between 3 cm and 4 cm. Each of these individual outcomes has probability zero, yet the probability that the outcome will fall into the interval (3 cm, 4 cm) is nonzero. This apparent paradox is resolved by the fact that the probability that X attains some value within an infinite set, such as an interval, cannot be found by naively adding the probabilities for individual values. Formally, each value has an infinitesimally small probability, which statistically is equivalent to zero.
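The oak-leaf example can be made concrete with a short calculation (in Python, using only the standard library; the Normal(3.5 cm, 0.5 cm) model for leaf width is an assumption made purely for illustration): any single value has probability zero, yet the interval (3 cm, 4 cm) carries substantial probability.

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a Normal(mu, sigma) random variable."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

mu, sigma = 3.5, 0.5            # hypothetical model of oak-leaf width, in cm
p_interval = normal_cdf(4.0, mu, sigma) - normal_cdf(3.0, mu, sigma)
print(f"P(3 cm < X < 4 cm) = {p_interval:.3f}")   # about 0.683, clearly nonzero
# P(X == 3.0 cm) is exactly zero for a continuous variable; only intervals carry probability.
```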
