TABLE OF CONTENTS
2
ICADABAI 2009 – Abstracts
COVERING BASED ROUGH SET APPROACH TO UNCERTAINTY MANAGEMENT IN
DATABASES .............................................................................................................................................. 37
REAL TIME SPIKE DETECTION FROM MICRO ELECTRODE ARRAY RECORDINGS USING
WAVELET DENOISING AND THRESHOLDING............................................................................... 38
MOTIF FINDING USING DNA DATA COMPRESSION .................................................................... 39
AN APPROACH OF SUMMARIZATION OF HINDI TEXT BY EXTRACTION ............................ 40
FORMAL MODELING OF DIGITAL RIGHTS MANAGEMENT FOR SUSTAINABLE
DEVELOPMENT OF E-COMMERCE................................................................................................... 41
RECOVERY RATE MODELING FOR CONSUMER LOAN PORTFOLIO ..................................... 42
THE PROACTIVE PRICING MODEL- USING FORECASTED PRICE ESCALATION
FUNCTION................................................................................................................................................. 43
BEHAVIOURAL SEGMENTATION OF CREDIT CARD CUSTOMERS ........................................ 44
PRECISION TARGETING MODELS FOR IMPROVING ROI OF DIRECT MARKETING
INTERVENTIONS..................................................................................................................................... 45
CUSTOMER PURCHASE BEHAVIOUR PREDICTION APPROACH FOR MANAGING THE
CUSTOMER FAVOURITES LIST ON A GROCERY E-COMMERCE PORTAL........................... 46
PRODUCT INVENTORY MANAGEMENT AT BPCL & EFFECTIVE AND EFFICIENT
DISTRIBUTION OF PRODUCTS TO DEMAND CENTERS.............................................................. 47
INDIAN MUTUAL FUNDS PERFORMANCE: 1999-2008................................................................... 48
HOUSEHOLD MEAT DEMAND IN INDIA – A SYSTEMS APPROACH USING MICRO LEVEL
DATA........................................................................................................................................................... 49
THE LEAD-LAG RELATIONSHIP BETWEEN NIFTY SPOT AND NIFTY FUTURES: AN
INTRADAY ANALYSIS ........................................................................................................................... 50
CAN ETF ARBITRAGE BE EXTENDED TO SECTOR TRADING? ................................................ 51
DEVELOPMENT OF EMOTIONAL LABOUR SCALE IN INDIAN CONTEXT ............................ 52
WOMEN IN SMALL BUSINESSES: A STUDY OF ENTREPRENEURIAL ISSUES ...................... 53
EMPLOYEES PERCEPTION OF THE FACTORS INFLUENCING TRAINING
EFFECTIVENESS ..................................................................................................................................... 54
ONE SHOE DOESN’T FIT ALL: AN INVESTIGATION INTO THE PROCESSES THAT LEAD
TO SUCCESS IN DIFFERENT TYPES OF ENTREPRENEURS........................................................ 55
USE OF ANALYTICS IN INDIAN ENTERPRISES: A SURVEY ...................................................... 56
USING DATA TO MAKE GOOD MANAGEMENT DECISIONS ...................................................... 57
ENHANCING BUSINESS DECISIONS THROUGH DATA ANALYTICS AND USE OF GIS ....... 58
A BUSINESS APPLICATION .................................................................................................................. 58
TRENDS IN TECHNICAL PROGRESS IN INDIA, 1968 TO 2003 ..................................................... 59
TERRORIST ATTACK & CHANGES IN THE PRICE OF THE UNDERLYING OF INDIAN
DEPOSITORIES ........................................................................................................................................ 60
CO-INTEGRATION OF US & INDIAN STOCK INDEXES ................................................................ 61
A COMMON FINANCIAL PERFORMANCE APPRAISAL MODEL FOR EVALUATING
DISTRICT CENTRAL COOPERATIVE BANKS ................................................................................. 62
ANALYSIS OF RENDERING TECHNIQUES FOR THE PERCEPTION OF 3D SHAPES ............ 63
MMER: AN ALGORITHM FOR CLUSTERING CATEGORICAL DATA USING ROUGH SET
THEORY..................................................................................................................................................... 64
ROLE OF FORECASTING IN DECISION MAKING SCIENCE ....................................................... 65
BULLWHIP DIMINUTION USING CONTROL ENGINEERING ..................................................... 66
AUTOMATIC DETECTION OF CLUSTERS ....................................................................................... 67
REVENUE MANAGEMENT ................................................................................................................... 68
DATA ANALYSIS USING SAS IN RETAIL SECTOR......................................................................... 69
SEGMENTING THE APPAREL CONSUMERS IN THE ORGANIZED RETAIL MARKET........ 70
THE IMPACT OF PSYCHOGRAPHICS ON THE FOOTWEAR PURCHASE OF YOUTH:
IMPLICATIONS FOR THE MANUFACTURERS TO REPOSITION THEIR PRODUCTS. ......... 71
FACTOR ANALYTICAL APPROACH FOR SITE SELECTION OF RETAIL OUTLET - A CASE
STUDY ........................................................................................................................................................ 72
A STATISTICAL ANALYSIS FOR UNDERSTANDING MOBILE PHONE USAGE PATTERN
AMONG COLLEGE-GOERS IN THE DISTRICT OF KACHCHH, GUJARAT.............................. 73
EXPLORING THE FACTORS AFFECTING THE MIGRATION FROM TRADITIONAL
BANKING CHANNELS TO ALTERNATE BANKING CHANNELS (INTERNET BANKING,
ATM) ........................................................................................................................................................... 74
WEATHER BUSINESS IN INDIA – POTENTIAL & CHALLENGES............................................... 75
UNDERSTANDING OF HAPPINESS AMONG INDIAN YOUTH: A QUALITATIVE APPROACH
...................................................................................................................................................................... 76
ANALYTICAL APPROACH FOR CREDIT ASSESSMENT OF MICROFINANCE BORROWERS
...................................................................................................................................................................... 77
DATA MINING & BUSINESS INTELLIGENCE IN HEALTHCARE ............................................... 78
BUSINESS INTELLIGENCE IN CUSTOMER RELATIONSHIP MANAGEMENT, A SYNERGY
FOR THE RETAIL BANKING INDUSTRY .......................................................................................... 79
‘COMPETITIVE INTELLIGENCE’ IN PRICING ANALYTICS ....................................................... 81
RETAIL ANALYTICS AND ‘LIFESTYLE NEEDS’ SEGMENTATIONS ........................................ 82
REVENUE/PROFIT MANAGEMENT IN POWER STATIONS BY MERIT ORDER OPERATION
...................................................................................................................................................................... 83
HOW TO HANDLE MULTIPLE UNSYSTEMATIC SHOCKS TO A TIME SERIES
FORECASTING SYSTEM - AN APPLICATION TO RETAIL SALES FORECASTING ............... 84
A MODEL USING SCIENTIFIC METHOD TO CUT DOWN COSTS BY EFFICIENT DESIGN OF
SUPPLY CHAIN IN POWER SECTOR ................................................................................................. 85
CLUSTERING AS A BUSINESS INTELLIGENCE TOOL ................................................................. 86
VALIDATING SERVICE CONVENIENCE SCALE AND PROFILING CUSTOMERS: A STUDY
IN THE INDIAN RETAIL CONTEXT.................................................................................................... 87
A MODEL FOR CLASSIFICATION AND PRIORITIZATION OF CUSTOMER
REQUIREMENTS IN THE VALUE CHAIN OF INSURANCE INDUSTRY..................................... 88
ON THE FOLLY OF REWARDING WITHOUT MEASURING: A CASE STUDY ON
PERFORMANCE APPRAISAL OF SALES OFFICERS AND SALES MANAGERS IN A
PHARMACEUTICAL COMPANY ......................................................................................................... 89
THE FORMAT OR THE STORE. HOW BUYERS MAKE THEIR CHOICE?................................. 90
CONSUMER INVOLVEMENT FOR DURABLE AND NON DURABLE PRODUCT: KEY
INDICATORS AND ITS IMPACT ......................................................................................... 91
DEVELOPMENT OF UTILITY FUNCTION FOR LIFE INSURANCE BUYERS IN THE INDIAN
MARKET .................................................................................................................................................... 92
A RIDIT APPROACH TO EVALUATE THE VENDOR PERCEPTION TOWARDS BIDDING
PROCESS IN A VENDOR-VENDEE RELATIONSHIP....................................................................... 93
LINEAR PROBABILISTIC APPROACH TO FLEET SIZE OPTIMISATION ................................ 94
OPTIMISATION OF MANUFACTURING LEAD TIME IN AN ENGINE VALVE
MANUFACTURING COMPANY USING ECRS TECHNIQUE ......................................................... 95
EFFICIENT DECISIONS USING CREDIT SCORING MODELS...................................................... 96
IMPROVING PREDICTIVE POWER OF BINARY RESPONSE MODEL USING MULTI STEP
LOGISTIC APPROACH........................................................................................................................... 97
NET OPINION IN A BOX ........................................................................................................................ 98
USING INVESTIGATIVE ANALYTICS & MARKET-MIX MODELS FOR BUSINESS RULE &
STRATEGY FORMULATION – A CPG CASE STUDY ...................................................................... 99
IMPROVE DISPATCH CAPACITY OF CENTRAL PHARMACY.................................................. 100
APPLICATION OF NEURAL NETWORKS IN STATISTICAL CONTROL CHARTS FOR
PROCESS QUALITY CONTROL ......................................................................................................... 101
MEASUREMENT OF RISK AND IPO UNDERPRICE...................................................................... 102
EFFICIENCY OF MICROFINANCE INSTITUTIONS IN INDIA.................................................... 103
MEASURING EFFICIENCY OF INDIAN RURAL BANKS USING DATA ENVELOPMENT
ANALYSIS................................................................................................................................................ 104
RANKING R&D INSTITUTIONS: A DEA STUDY IN THE INDIAN CONTEXT ......................... 105
A NEW FILTERING APPROACH TO CREDIT RISK ...................................................................... 106
VOLATILITY OF EURODOLLAR FUTURES AND GAUSSIAN HJM TERM STRUCTURE
MODELS................................................................................................................................................... 107
WAVELET BASED VOLATILITY CLUSTERING ESTIMATION OF FOREIGN EXCHANGE
RATES....................................................................................................................................................... 108
MODELLING MULTIVARIATE GARCH MODELS WITH R: THE CCGARCH PACKAGE... 109
WIND ENERGY: MODELS AND INFERENCE ................................................................................. 110
FIELD DATA ANALYSIS - A DRIVER FOR BUSINESS INTELLIGENCE AND PROACTIVE
CUSTOMER ORIENTED APPROACH ............................................................................................... 111
SIMPLE ALGORITHMS FOR PEAK DETECTION IN TIME-SERIES ......................................... 112
USING THE DECISION TREE APPROACH FOR SEGMENTATION ANALYSIS – AN
ANALYTICAL OVERVIEW.................................................................................................................. 113
NOVEL BUSINESS APPLICATION - BUSINESS ANALYTICS..................................................... 114
SERVICE QUALITY EVALUATION ON OCCUPATIONAL HEALTH IN FISHING SECTOR
USING GREY RELATIONAL ANALYSIS TO LIKERT SCALE SURVEYS ................................. 115
AN EMPIRICAL STUDY ON PERCEPTION OF CONSUMER IN INSURANCE SECTOR........ 116
TWO COMPONENT CUSTOMER RELATIONSHIP MANAGEMENT MODEL FOR HEALTH
CARE SERVICES.................................................................................................................................... 118
AN ANALYTICAL STUDY OF THE EFFECT OF ADVERTISEMENT ON THE CONSUMERS
OF MIDDLE SIZE TOWN ..................................................................................................................... 119
EMPIRICAL FRAMEWORK OF BAYESIAN APPROACH TO PURCHASE INCIDENCE
MODEL..................................................................................................................................................... 121
EXPLORING TEMPORAL ASSOCIATIVE CLASSIFIERS FOR BUSINESS ANALYTICS....... 122
APPLICATION OF ANALYTICAL PROCESS FRAMEWORK FOR OPTIMIZATION OF NEW
PRODUCT LAUNCHES IN CONSUMER PACKAGED GOODS AND RETAIL INDUSTRY ..... 124
THE PREDICTIVE ANALYTICS USING INNOVATIVE DATA MINING APPROACH ............ 125
ON ROUGH APPROXIMATIONS OF CLASSIFICATIONS, REPRESENTATION OF
KNOWLEDGE AND MULTIVALUED LOGIC.................................................................................. 126
SB-ROBUST ESTIMATION OF PARAMETERS OF CIRCULAR NORMAL DISTRIBUTION . 127
BAYESIAN ANALYSIS OF RANK DATA WITH COVARIATES ................................................... 128
SELECTING A STROKE RISK MODEL USING PARALLEL GENETIC ALGORITHM........... 129
LINKING PSYCHOLOGICAL EMPOWERMENT TO WORK-OUTCOMES .............................. 130
TO IDENTIFY THE EMPLOYABILITY SKILLS FOR MANAGERS THROUGH THE CONTENT
ANALYSIS OF THE SELECTED JOB ADVERTISEMENTS........................................................... 131
PERFORMANCE MEASUREMENT IN RELIEF CHAIN: AN INDIAN PERSPECTIVE ............ 132
MACHINE LEARNING APPROACH FOR PREDICTING QUALITY OF COTTON USING
SUPPORT VECTOR MACHINE........................................................................................................... 133
MACHINE LEARNING TECHNIQUES: APPROACH FOR MAPPING OF MHC CLASS
BINDING NONAMERS .......................................................................................................................... 134
THE CLICK CLICK AGREEMENTS –THE LEGAL PERSPECTIVES ......................................... 135
Schedule
6th June 2009
8:00-9:00 Registration
9:00-9:45 Inauguration
Shankar Prawesh, Martin Eling, Debasis Kundu, Luisa Tibiletti (ic025): Skew Ellipticality in Hedge Fund Returns: Which is the Best Fit Distribution?
Ashif Tadvi, Rakesh D. Raut (ic002): A Case Study - To Prioritize the Information Management Register (IMR) Issues using the ∆RWA (Risk Weighted Assets) Approach
Dilip Roy, Goutam Mitra, Soma Panja (ic023): Closeness between Heuristic and Optimum Selections of Portfolio: An Empirical Analysis
Nisseem S. Nabar, K. Rajgopal (ic129): Real time spike detection from Micro Electrode Array Recordings using Wavelet Denoising and Thresholding
Anjali Mohapatra, P.M. Mishra, S. Padhy (ic152): Motif Finding using DNA Data Compression
Swapnali Pote, L.G. Mallik (ic191): An Approach of Summarization of Hindi Text by Extraction
Shefalika Ghosh Samaddar (ic015): Formal Modeling of Digital Rights Management for Sustainable Development of e-Commerce
Amitabh Deo Kodwani, Manisha K. (ic226): Employees' Perception of the Factors Influencing Training Effectiveness
Anurag Pant, Sanjay Mishra (ic225): One Shoe Doesn't Fit All: An Investigation into the Processes that Lead to Success in Different Types of Entrepreneurs
18:30-20:00 P-II
Shyamal Tanna, Sanjay Shah (ic032): Data Analysis using SAS in Retail Sector
Bikramjit Rishi (ic042): Segmenting the Apparel Consumers in the Organized Retail Market
V R Uma (ic135): The Impact of Psychographics on the Footwear Purchase of Youth: Implications for the Manufacturers to Reposition their Products
18:30-20:00 P-III
Keerthi Kumar, M. Pratima (ic086): Analytical Approach for Credit Assessment of Microfinance Borrowers
Sorabh Sarupria (ic174): Data Mining & Business Intelligence in Healthcare
Chiranjibi Dipti Ranjan Panda (ic186): Business Intelligence in Customer Relationship Management, A Synergy for the Retail Banking Industry
Chetna Gupta, Abhishek Ranjan (ic193): ‘Competitive Intelligence’ in Pricing Analytics
Sagar J Kadam, Biren Pandya (ic220): Retail Analytics and ‘Lifestyle Needs’ Segmentations
E. Nanda Kishore (ic016): Revenue/Profit Management in Power Stations by Merit Order Operation
Anindo Chakraborty (ic185): How to Handle Multiple Unsystematic Shocks to a Time Series Forecasting System - An Application to Retail Sales Forecasting
U.K. Panda, GBRK Prasad, A.R. Aryasri (ic200): A Model using Scientific Method to Cut Down Costs by Efficient Design of Supply Chain in Power Sector
Suresh Veluchamy, Andrew Cardno, Ashok K Singh (ic219): Clustering as a Business Intelligence Tool
7th June 2009
Jayesh P Aagja, Toby Mammen, Amit Saraswat (ic140): Validating Service Convenience Scale and Profiling Customers - A Study of Indian Retail Context
Saroj Datta, Shivani Anand, Sadhan K De (ic188): A Model for Classification and Prioritization of Customer Requirements in the Value Chain of Insurance Industry
Ramendra Singh, Bhavin Shah (ic244): On the Folly of Rewarding Without Measuring: A Case Study on Performance Appraisal of Sales Officers and Sales Managers in a Pharmaceutical Company
Sanjeev Tripathi, P. K. Sinha (ic242): The Format or the Store? How Buyers Make Their Choice
Sapna Solanki (ic145): Consumer Involvement for Durable and Non Durable Product: Key Indicators and Its Impact
Chetan Mahajan, Prakash G. Awate (ic103): Application of Neural Networks in Statistical Control Charts for Process Quality Control
Hardeep Chahal (ic012): Two Component Customer Relationship Management Model for Health Care Services
Uma V. P. Shrivastava (ic091): An Analytical Study of the Effect of Advertisement on the Consumers of Middle Size Towns
Sadia Samar Ali, R. K. Bharadwaj, A. G. Jayakumari (ic222): Empirical Framework of Bayesian Approach to Purchase Incidence Model
D Mohanty, B.K. Tripathy, J. Ojha (ic176): On Rough Approximations of Classifications, Representations of Knowledge and Multivalued Logic
M. Selvanayaki, Vijaya MS (ic127): Machine Learning Approach for Predicting Quality of Cotton using Support Vector Machine
V. S. Gomase, Yash Parekh, Subin Koshy, Siddhesh Lakhan, Archana Khade (ic209): Machine Learning Techniques: Approach for Mapping of MHC Class Binding Nonamers
Rashmi Kumar Agrawal, Sanjeev Prashar (ic059): The Click Click Agreements - Legal Perspectives
Siddhartha Roy
Economic Advisor – Tata Group
At the outset, let me thank the organizers of the 1st IIMA International Conference on
Advanced Data Analysis, Business Analytics and Intelligence for inviting me to deliver
the keynote address; one feels both privileged and honoured.
These are turbulent times; for someone like me, usually an indolent user of established quantitative methods, the recent events provide a wake-up call. Discontinuity in behavioural reactions, when income growth, consumption expenditure growth and corporate earnings come hurtling down, makes us question the adequacy of our methodologies in predicting the future. Nothing seems incongruous any more: housing, durables and FMCG offtake collapse, yet chocolate sales merrily climb.
For nearly three decades one has been associated with the application of decision analytics in business. Both as a practitioner and a user, one has marveled at developments like the surge in computing power, the progress from simple multivariate techniques to sophisticated data mining, the increasing use of neural nets and genetic algorithms in addressing financial, marketing and advertising response issues, and the extensive use of simulation for scenario studies. All of this is intellectually fascinating, and this conference provides a veritable feast of such papers.
Yet more often than not one has been dismayed by the incapacitating predictive failures
at the major turning points of the economy or asset markets. Our business cycle
research and understanding of lead and lag indicators have progressed a lot – yet we
are not quite there!
We did not predict the timing of our slide into the current meltdown; nor do we know when we will manage to get out of it. Someone said that in retrospect everything is obvious; indeed, our grandchildren will seriously question the intellectual sanity of a set of risk management experts who could not predict the last snivel of investment bankers in 2008.
Not for a moment is one suggesting that when quantitative methods succeed in predicting an outcome it is pure serendipity; nor is one saying that our failings put us on a par with a voodoo practitioner or an astrologer. However, there is a need for serious introspection. To maintain our credibility, it is better to avoid the temptation of competing with an aphrodisiac or snake-oil seller.
Moving ahead, one possibly has to focus a lot more on the context and the behavioural information captured in the data. For example, within the same product group, the consumer's sensitivity (elasticity) to a pricing change is very different once the economy slips into disequilibrium. How consumer and investor confidence, or the lack of it, keep reinforcing each other in the formation of demand cycles often escapes the attention of a researcher focused on micro issues. Then there could be asymmetry in micro behaviour related to pricing and advertising, as well as demand ratchets. Their linkage with the changing macro context appears to be more visible in a meltdown phase: cars, durables, revenue per mobile unit, even certain FMCG items seem to be affected.
The next question is how we generalize a research result, since there are significant cross-cultural and cross-country differences in behavioural response functions. A forum like this can certainly be helpful for exchanging research results and experience. However, generalization also requires a contextual understanding of the socio-economic stages of development, cultural alignments, and so on. In other words, quantitative specialists have to broad-base their thinking and welcome experts from other disciplines.
Many a time, the lack of connectivity with other disciplines becomes quite brazen. We have excellent simulation models for minimizing enterprise value at risk, but do we really understand how risk and greed interact with each other, possibly nonlinearly? Similarly, there are other questions: is the past a good indicator of the future? How do we incorporate structural discontinuity into our understanding? Retrofitting dummy variables may not be the smartest solution.
Finally, there is a career-related question: would you rather advise a gambler rolling a six-faced die, or even one picking a card from a standard pack, or join a day trader facing new outcomes every day? The meltdown has had one good effect: it has exposed the limits of our understanding in delineating outcomes. Maybe this softer side of business analytics calls for greater creativity and deserves some focus.
We consider Bayesian analysis of data from multivariate linear regression models whose
errors have a distribution that is a scale mixture of normals. Such models are used to
analyze data on financial returns, which are notoriously heavy-tailed. Let π denote the
intractable posterior density that results when this regression model is combined with the
standard non-informative prior on the unknown regression coefficients and scale matrix
of the errors. Roughly speaking, the posterior is proper if and only if n ≥ d + k, where n is
the sample size, d is the dimension of the response, and k is the number of covariates. We
provide a method of making exact draws from π in the special case where n = d + k, and
we study Markov chain Monte Carlo (MCMC) algorithms that can be used to explore π
when n > d + k. In particular, we show how the Haar PX-DA technology of Hobert and
Marchev (2008) can be used to improve upon Liu’s (1996) data augmentation (DA)
algorithm. Indeed, the new algorithm that we introduce is theoretically superior to the DA
algorithm, yet equivalent to DA in terms of computational complexity. Moreover, we
analyze the convergence rates of these MCMC algorithms in the important special case
where the regression errors have a Student’s t distribution. We prove that, under
conditions on n, d, k, and the degrees of freedom of the t distribution, both algorithms
converge at a geometric rate. These convergence rate results are important from a
practical standpoint because geometric ergodicity guarantees the existence of central
limit theorems which are essential for the calculation of valid asymptotic standard errors
for MCMC based estimates.
Key words and phrases: Data augmentation algorithm, Drift condition, Markov chain,
Minorization condition, Monte Carlo, Robust multivariate regression
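The two-step structure of a DA algorithm for Student's t regression errors can be illustrated in the simplest univariate-response case with the error scale held fixed. The sketch below is illustrative only (fixed ν and σ², synthetic data, a flat prior on the coefficients) and is not the authors' algorithm or the Haar PX-DA variant:

```python
import numpy as np

def da_student_t_regression(y, X, nu=4.0, sigma2=1.0, n_iter=2000, seed=0):
    """Data augmentation (Gibbs) sampler for y = X b + e, e ~ t_nu, written as
    a scale mixture of normals: e_i | w_i ~ N(0, sigma2 / w_i), w_i ~ Gamma(nu/2, nu/2).
    Flat prior on b; sigma2 held fixed for brevity."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    b = np.zeros(k)
    draws = np.empty((n_iter, k))
    for t in range(n_iter):
        # 1) Draw latent mixing weights given the current coefficients.
        resid = y - X @ b
        shape = (nu + 1.0) / 2.0
        rate = (nu + resid**2 / sigma2) / 2.0
        w = rng.gamma(shape, 1.0 / rate)        # numpy parameterises by scale = 1/rate
        # 2) Draw coefficients given weights: a weighted least squares posterior.
        XtW = X.T * w
        cov = np.linalg.inv(XtW @ X / sigma2)   # = sigma2 * (X'WX)^{-1}
        mean = cov @ (XtW @ y) / sigma2         # = (X'WX)^{-1} X'W y
        b = rng.multivariate_normal(mean, cov)
        draws[t] = b
    return draws

# Usage: recover the coefficients of a synthetic heavy-tailed regression.
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.standard_t(df=4, size=n)
draws = da_student_t_regression(y, X)
post_mean = draws[500:].mean(axis=0)   # discard burn-in
```

The alternation between latent weights and coefficients is exactly the DA structure whose convergence rate the abstract studies.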
Atanu Biswas
Applied Statistics Unit, Indian Statistical Institute,
atanu@isical.ac.in
Saumen Mandal
Department of Statistics, University of Manitoba, Canada
saumen_mandal@umanitoba.ca
Buddhananda Banerjee
Indian Statistical Institute, Kolkata
buddhananda_r@isical.ac.in
K. Muralidharan
Department of Statistics,
The M. S. University of Baroda
lmv_murali@yahoo.com
The power law process (PLP), or Weibull process, is the simplest point process model applied to repairable systems and reliability growth situations. A repairable system, sometimes called a maintained system, is usually characterized by an intensity function λ(x), typically a time-dependent function. Therefore, a test of H0: λ(x) = λ0, a constant intensity, against an increasing or decreasing intensity is very important for assessing the presence of trend in the process. The test for trend is also essential for a maintained system working under different environmental conditions, as the repair policy is very often decided on the basis of the type of trend present in the model. We investigate some conditional inference procedures for constructing test statistics for testing trend and study their practical importance from the repair policy point of view. Some numerical computations and examples are also presented.
Keywords: Power law process, Reliability growth, Repairable systems, Repair policy
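A standard test of H0: λ(x) = λ0 against a monotone trend (though not one of the conditional procedures developed in the abstract) is the Laplace test; a minimal sketch under the failure-truncated convention, with made-up event times:

```python
import math

def laplace_trend_statistic(times):
    """Laplace test for trend in a point process, failure-truncated at the last
    recorded event.  times: sorted event times.  Under H0 (constant intensity)
    the statistic is approximately standard normal; large positive values
    suggest increasing intensity (deterioration), large negative values a
    decreasing one (reliability growth)."""
    t = sorted(times)
    T = t[-1]                     # truncation at the n-th failure time
    n = len(t) - 1                # the last time is conditioned on, not summed
    mean_early = sum(t[:-1]) / n
    return (mean_early - T / 2.0) / (T * math.sqrt(1.0 / (12.0 * n)))

# Failures bunched late (increasing intensity) vs. bunched early (decreasing).
u_up = laplace_trend_statistic([50, 70, 80, 85, 88, 90])
u_down = laplace_trend_statistic([2, 5, 10, 20, 40, 90])
```

Comparing the statistic against standard normal quantiles gives the usual two-sided trend test.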
The behaviour of stock prices has been a recurrent topic in the financial literature. A stock price is time varying and depends upon its own past, market news and various macroeconomic factors. This paper aims at examining the impact of macroeconomic factors on the stock price, using the Bombay Stock Exchange as a case study. Cointegration and a vector error correction model (VECM) have been used to ascertain both short-run and long-run relationships. Monthly data over the period 1994-2005, falling within the globalization era of the 1990s, have been taken for the empirical investigation. The findings reveal that the stock price and the macroeconomic variables (index of industrial production, money supply, inflation and the exchange rate) are integrated of order one and that a long-run equilibrium relationship exists between them. The VECM further confirms the possibility of both short-run and long-run dynamics between the stock price and the macroeconomic variables. The policy implication of this study is that macroeconomic variables can serve as policy variables for forecasting the stock price in the economy.
Keywords: Stock Price, Macroeconomic Variables, VECM
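The long-run/short-run logic of cointegration with error correction can be sketched on synthetic data. This is a minimal two-variable, Engle-Granger-style illustration in NumPy with invented series, not the paper's Johansen/VECM estimation on BSE data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# A "macro factor" following a random walk, and a "stock price" tied to it.
x = np.cumsum(rng.normal(size=n))
y = 2.0 * x + rng.normal(scale=0.5, size=n)   # cointegrated: y - 2x is stationary

# Step 1: estimate the long-run (cointegrating) relation y = beta * x by OLS.
beta = (x @ y) / (x @ x)
ect = y - beta * x                            # error-correction term

# Step 2: short-run dynamics -- regress the change in y on the lagged ECT.
dy = np.diff(y)
z = ect[:-1]
alpha = (z @ dy) / (z @ z)                    # speed-of-adjustment coefficient
```

A significantly negative `alpha` is the VECM signature: deviations from the long-run equilibrium are corrected in subsequent periods.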
A. Roy Chowdhury (4)
(1, 2) Army Institute of Management, Kolkata; (3) Department of Economics, Jadavpur University; (4) High Energy Physics Division, Department of Physics, Jadavpur University
(1) kousikg@gmail.com; (2) santoban@gmail.com; (3) basabi54@gmail.com; (4) arc.roy@gmail.com
It has long been argued that the distributions of empirical returns do not follow the lognormal distribution upon which many celebrated results of finance are based, including the Black-Scholes option pricing model. There have been many alternative approaches. To our knowledge, none results in manageable closed-form solutions, which is a useful feature of the Black and Scholes approach. However, Borland (2002) succeeded in obtaining closed-form solutions for European options. The approach is based on a new class of stochastic processes, recently developed within the very active field of Tsallis non-extensive thermostatistics, which allow for statistical feedback as a model of the underlying stock returns. Motivated by this, we simulate two distinct time series based on initial data from NIFTY daily close values. One is based on the classical Gaussian model, in which the stock price follows geometric Brownian motion. The other is based on the non-Gaussian model built on the Tsallis distribution, as proposed by Borland. Using techniques of nonlinear dynamics, we examine the underlying dynamic characteristics of both simulated time series and compare them with those of the actual data. Our findings give a definite edge to the non-Gaussian model over the Gaussian one.
Keywords: Stock Price Movement, Brownian Motion, Tsallis Distribution, Nonlinear Analysis
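The Gaussian benchmark in the comparison, geometric Brownian motion, can be simulated exactly from its log-normal transition density. The parameter values below are illustrative, and the Tsallis statistical-feedback process (which requires Borland's iterative scheme) is not reproduced here:

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, n_steps, dt=1.0 / 252, seed=0):
    """Exact simulation of geometric Brownian motion:
    S_{t+dt} = S_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z), Z ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=n_steps)
    log_incr = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    return s0 * np.exp(np.concatenate([[0.0], np.cumsum(log_incr)]))

# E.g. a NIFTY-like path: start at 4000, 10% drift, 25% annual volatility.
path = simulate_gbm(4000.0, mu=0.10, sigma=0.25, n_steps=252)
log_returns = np.diff(np.log(path))
```

Under this model the log returns are i.i.d. normal; it is exactly this feature that heavy-tailed empirical data, and the Tsallis alternative, depart from.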
To study the nature of financial products it is necessary to model empirical return data with a proper statistical distribution. In view of deviations from the normal distribution and the heavy tails of empirical densities, studying the statistical distributions of financial returns has become imperative. Due to the use of options and leverage, hedge funds are especially prone to non-normality. The aim of the present paper is to model heavy-tailed hedge fund returns with skew elliptical distributions. Specifically, we focus on the skew-normal, skew-t and skew-logistic distributions.
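For the skew-normal member of the family, maximum-likelihood fitting is available off the shelf. A sketch with SciPy on synthetic, negatively skewed "returns" (the parameter values are invented; the skew-t and skew-logistic fits in the paper require other tooling):

```python
import numpy as np
from scipy.stats import skewnorm

# Synthetic returns with negative skew, a common hedge-fund stylised fact.
rng = np.random.default_rng(42)
data = skewnorm.rvs(a=-4.0, loc=0.01, scale=0.05, size=2000, random_state=rng)

# Maximum-likelihood fit; returns (shape a, location, scale).
a_hat, loc_hat, scale_hat = skewnorm.fit(data)

# The sign of the fitted shape parameter reflects the direction of the skew,
# and the fitted density can be compared to the data via the log-likelihood.
loglik = skewnorm.logpdf(data, a_hat, loc_hat, scale_hat).sum()
```

Competing candidate distributions are then typically ranked by likelihood-based criteria such as AIC.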
(1) Centre for Management Studies, University of Burdwan, West Bengal: dr.diliproy@gmail.com
(2) Department of Business Administration, University of Burdwan, West Bengal: goutamnbp2160@gmail.com
(3) Management Institute of Durgapur, Durgapur, West Bengal: soma_panja1980@rediffmail.com
Selection of the optimum portfolio is a difficult task for investors, since the choice of optimum weights is itself difficult. In this paper, we select heuristic portfolios based on investors' propensity to take risk. For this purpose, two extreme situations are considered: risk-taking and risk-averse investors. To construct the heuristic portfolios, we calculate portfolio weights heuristically and examine whether any closeness exists between the optimum portfolio constructed by the traditional method and the portfolio constructed by the heuristic method. For demonstration purposes, we take Nifty data for 2006 and 2007 and select a portfolio of 10 securities. After detailed analysis, we find that closeness does exist between the optimum portfolio selected traditionally and the portfolio selected heuristically.
Key words: portfolio return, portfolio risk, optimum portfolio, heuristic portfolio, City Block Distance and Euclidean Distance.
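The two closeness measures named in the keywords reduce to simple vector distances between an optimum weight vector and a heuristic one. The weights below are made up for illustration and are not the paper's Nifty portfolios:

```python
import numpy as np

def city_block(w1, w2):
    """City Block (Manhattan, L1) distance between two weight vectors."""
    return float(np.abs(np.asarray(w1) - np.asarray(w2)).sum())

def euclidean(w1, w2):
    """Euclidean (L2) distance between two weight vectors."""
    d = np.asarray(w1) - np.asarray(w2)
    return float(np.sqrt(d @ d))

w_opt = np.array([0.25, 0.15, 0.10, 0.20, 0.30])   # e.g. a Markowitz optimum
w_heu = np.array([0.20, 0.20, 0.10, 0.20, 0.30])   # e.g. a heuristic allocation

d1 = city_block(w_opt, w_heu)   # 0.10
d2 = euclidean(w_opt, w_heu)
```

A small distance on either metric is the "closeness" the paper reports between the traditional and heuristic portfolios.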
The time has come for India to leverage information technology to accelerate its internal
development on several fronts. One important aspect of improved efficiency in the
economy, that is within immediate grasp for rapid implementation, is to move towards
empirically based decision support by leveraging the databases that are emerging from
digitization or “transduction” of complex processes in the economy. The decision
sciences have given us a plethora of modeling paradigms. However, advanced training
is required for skilled use of these methodologies and the scale at which analytics needs
to be effectively applied leaves us with a massive challenge.
We need a semi-automated, high-content software platform that assembles and
homogenizes data pulled from huge repositories of raw, partial and fragmented data,
aids the "semi-skilled" user in performing deep analysis with ease of interaction, and
helps him/her discover preliminary hypotheses that can be handed off to specialists
for deeper modeling and decision support. There is an insufficient pool of trained
analysts to cope with the scale and complexity of the data being generated. It is this
need that has been termed "De-Skilled Decision Analytics", and a solution for it is
described in this paper along with a number of case studies.
This paper highlights the concept, advantages and application of Latent Class Analysis
(LCA). The first section presents a preview of LCA, a clustering technique, to underline
the method’s suitability for various research and analytics work. The second section
highlights the relevance of LCA in light of the limitations encountered by other frequently
used clustering techniques such as K Means and hierarchical clustering. Subsequently,
the third section underscores the application of LCA by presenting a real life project
executed by the authors while they were working with marketRx, a consulting company
in pharmaceutical analytics.
Key Words: Clustering, Latent Class Analysis, Latent Gold, Consumer Behavior and
Attitudes
Maximum margin based clustering has been shown to be a promising method. The
central idea in the maximum margin clustering (MMC) method is to assign labels (from
the set {-1, +1}) to all N data points such that the resulting label assignment has
maximum margin. This integer programming problem is cast as a convex semidefinite
programming (SDP) formulation by introducing a few relaxations (Xu, L., Neufeld, J.,
Larson, B., and Schuurmans, D., 2005). Experiments show the superiority of MMC over
spectral clustering and other clustering methods.
In the present work, we aim to improve the MMC formulation further. Our idea is to
assign labels to all N data points such that the margin is maximized and the
generalization error bound of the support vector machine (SVM), given in terms of the
span of the support vectors, is simultaneously minimized. Minimizing the span of the
support vectors is formulated as an SDP and combined with the original MMC
formulation, which maximizes the margin. The resulting formulation is shown to perform
better than the original MMC on UCI data sets.
Key words: Kernel methods, maximum margin, span of support vectors, clustering,
unsupervised learning
Anand Natarajan
Caterpillar India Pvt. Ltd.,
Engineering Design Center
Chennai, India
natarajan_anand@cat.com
The focus of this paper is to delineate the importance of minimizing variations in process
rate as opposed to attempting to minimize process variation alone. Normal distributions
are used extensively in industrial settings to derive the probability of defects occurring in
processes, using sampled data and applying the central limit theorem of statistics. In this
paper, a different approach is taken, whereby defects are described by the number of
up-crossings of a prescribed level, set as the specification limit for the output of a
process. The number of level crossings is modeled as a Poisson process, over a
constant or time varying barrier and an exceedance probability is computed. Modeling
defect occurrences using a level crossing approach is shown to be inclusive of
deterministic events and tracking of time dependent factors that impact the processes.
The paper expands on the principle of level crossings to emphasize that the
achievement of 6-Sigma quality levels should be focused on minimizing the variation of
the process rate and not the process by itself, as done conventionally. An algorithm
based on linear algebra to connect the process rate with the process is developed to
enable direct integration into minimization procedures, which provides optimal statistical
process control.
Key Words: Level crossings, Poisson processes, mean crossing rate, 6-Sigma,
probability of exceedance
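The Poisson exceedance computation described above can be sketched in a few lines. The mean up-crossing rate is an assumed input here, not one derived from the process data:

```python
import math

def exceedance_probability(nu, T):
    """P(at least one up-crossing of the spec limit in [0, T]) when the
    number of up-crossings is Poisson with mean rate nu per unit time."""
    return 1.0 - math.exp(-nu * T)

# Assumed mean up-crossing rate of 0.002 per hour over an 8-hour shift
p = exceedance_probability(0.002, 8.0)
```

A time-varying barrier would replace the constant product nu * T with the integral of the crossing rate over the interval.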
Arijit Laha
Center for Knowledge-driven Information Systems
Software Engineering and Technology Labs
Infosys Technologies Ltd.
Hyderabad
Arijit_laha@infosys.com
One of the most important goals of information management (IM) is supporting
knowledge workers in performing their work. In this paper we examine issues of
relevance, linkage and provenance of information, as accessed and used by knowledge
workers. These issues are usually not adequately addressed in most IT-based
solutions for IM. Here we propose a non-conventional approach to building information
systems that support knowledge workers and address these issues. The
approach leads to the ideas of building Information Warehouses (IW) and Knowledge
work Support Systems (KwSS). Such systems open up the potential for building
innovative applications of significant impact, including those capable of helping
organizations implement processes for double-loop learning.
Measuring customer lifetime value (CLV) in contexts where customer defections are not
observed, i.e. noncontractual contexts, has been very challenging for firms. This paper
proposes a flexible Markov Chain Monte Carlo (MCMC) based data augmentation
framework for forecasting lifetimes and estimating customer lifetime value (CLV) in such
contexts. The framework can be used to estimate many different types of CLV models—
both existing and new.
Models proposed so far for estimating CLV in noncontractual contexts have built-in
stringent assumptions with respect to the underlying customer lifetime and purchase
behavior. For example, two existing state-of-the-art models for lifetime value estimation
in a noncontractual context are the Pareto/NBD and the BG/NBD models. Both of these
models are based on fixed underlying assumptions about drivers of CLV that cannot be
changed even in situations where the firm believes that these assumptions are violated.
The proposed simulation framework—not being a model but an estimation framework—
allows the user to use any of the commonly available statistical distributions for the
drivers of CLV, and thus the multitude of models that can be estimated using the
proposed framework (the Pareto/NBD and the BG/NBD models included) is limited only
by the availability of statistical distributions. In addition, the proposed framework allows
users to incorporate covariates and correlations across all the drivers of CLV in
estimating lifetime values of customers.
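The full estimation framework is beyond an abstract, but its spirit — drawing the drivers of CLV from interchangeable distributions and simulating — can be sketched as follows. All distributions and parameter values here are illustrative assumptions (exponential lifetimes with gamma-distributed rates, in the Pareto/NBD style), not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n, horizon, margin, discount = 10_000, 52.0, 5.0, 0.001   # weeks, $/purchase

# Drivers of CLV drawn from assumed distributions -- any others could be
# swapped in, which is the point of the framework
dropout = rng.gamma(shape=2.0, scale=0.005, size=n)       # dropout hazard/week
tau = rng.exponential(1.0 / dropout)                      # unobserved lifetime
lam = rng.gamma(shape=1.5, scale=1.0, size=n)             # purchases/week

active = np.minimum(tau, horizon)
expected_purchases = lam * active
# crude discounting at the midpoint of each customer's active period
clv = margin * expected_purchases * np.exp(-discount * active / 2.0)
mean_clv = clv.mean()
```

Covariates and correlations across drivers would enter by making the distribution parameters functions of customer characteristics.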
1, 2 Jesse H. Jones Graduate School of Management, Rice University, Texas
3 J. L. Kellogg School of Management, Northwestern University, Illinois
Cullen Habel
The University of Adelaide, South Australia
cullen.habel@adelaide.edu.au
Larry Lockshin
University of South Australia, South Australia
larry.lockshin@unisa.edu.au
This paper extends a well established normative model of market behaviour to the
analysis of dynamics in repeat purchase (FMCG) markets. Whilst the NBD-Dirichlet is a
stochastic market model most commonly associated with stationary markets, we argue
that it may also be used to develop sequential snapshots of a market that changes over
time.
We also harness the double jeopardy (DJ) line as a method of describing these
dynamics. A DJ line is an x-y plot of a penetration measure against average purchase
frequency for brands in a market. The position and shape of a DJ line can be expected
to change as market conditions change.
In drawing the theoretical double jeopardy lines for consecutive periods we use an NBD-
Dirichlet based functional form. This allows for the meanings of the NBD-Dirichlet
parameters to be given a visual dimension and used to develop predictions for brand
growth. The NBD-Dirichlet parameters can be interpreted as parameters of category
acceptance (K), weight of category purchasing (A), category competition (S) and each
brand’s strength - αj for each brand j.
From the analysis in this paper, we establish that there are three types of brand growth –
balanced, expansive and reductive – and that these correlate to different patterns in
parameter changes. We conclude that the infinite array of nonstationary market
behaviours can be given some structure through a sound understanding of NBD-
Dirichlet parameters, viewed through the lens of the DJ line.
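A double jeopardy line can be traced numerically from brand-level NBD parameters. The sketch below uses illustrative parameters (not the paper's Dirichlet estimates) and exhibits the DJ pattern: smaller brands show both lower penetration and lower buying frequency.

```python
import numpy as np

r = 0.8                                    # NBD shape parameter (illustrative)
shares = np.array([0.30, 0.20, 0.15, 0.10, 0.05])
m = 10.0 * shares                          # brand mean purchases per period

alpha = r / m                              # NBD scale, since mean = r / alpha
p0 = (alpha / (alpha + 1.0)) ** r          # P(zero purchases of brand j)
penetration = 1.0 - p0
frequency = m / penetration                # avg purchases among brand buyers
```

Plotting `penetration` against `frequency` for the five brands traces the DJ line; re-estimating the parameters for consecutive periods gives the sequential snapshots the paper describes.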
Matthew Semadeni
Department of Management & Entrepreneurship, Kelley School of Business,
Indiana University, Bloomington
semadeni@indiana.edu
This research examines the characteristics of firm signals that influence whether
competitor firms move toward or away from a predecessor firm’s market position.
Specifically, we examine if competing firms in the professional service industry follow (or
stay away from) the market position defined by a market predecessor’s trademark linked
to a service, referred to as a service mark. We predict differential effects for the market-
positioning responses of follower firms depending on the interaction of two factors
signaled in a firm’s service mark application: (1) existing firm capability to address a
market space opportunity and (2) firm commitment to pursue that opportunity. Using a
novel combination of text and network analysis, we examine all service mark filings by
the top 50 professional service firms from 1989 to 1999. Results indicate that when
existing capabilities are signaled as being low, firms attract greater competitive overlap
compared to when signaled capabilities are high. The level of commitment signaled by
the predecessor firm moderates this relationship.
Key Words: market signaling, competitive positioning, service marks, time series
analysis
Pradip Sadarangani
IIMB, Bangalore, Karnataka
pradip@iimb.ernet.in
Sridhar Parthasarathy
IIMB, Bangalore, Karnataka,
Sridharp03@iimb.ernet.in
Today however, LISREL is no longer confined to SEM. The latest LISREL for Windows
includes other modules for applications like data manipulations, basic statistical
analyses, hierarchical linear and non-linear modelling and generalized linear modelling.
We address the concerns of a beginner to LISREL and provide normative guidelines for
modelling various multivariate techniques like Exploratory Factor Analysis, Confirmatory
Factor Analysis, multiple regression, ANOVA/ MANOVA and multiple group analysis.
B. K. Tripathy
School of Computing Sciences
VIT University, Vellore, Tamilnadu
tripathybk@rediffmail.com

V. M. Patro
Biju Pattanaik Computer Centre
Berhampur University, Berhampur, Orissa
vmpatro@gmail.com
Key words: CB-rough sets, CB-fuzzy rough sets, CB-intuitionistic fuzzy rough sets, CB-
rough relational operators, CB-intuitionistic fuzzy rough relational operators.
Brain Machine Interfaces (BMIs) can be used to restore functions lost through injury or
disease. Micro Electrode Arrays (MEAs) are an invasive method of acquiring neural
signals which can then be used as control signals. The first requirement for such
a use is to extract time-stamped spike trains from the MEA recordings. For use in
BMI applications this extraction needs to be real time and computationally
inexpensive.
Key words: Time series analysis, signal processing, analysis of biological data.
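A minimal sketch of the thresholding stage (without the wavelet denoising step) on simulated data, using the common robust noise estimate sigma ≈ median(|x|)/0.6745; the sampling rate, spike amplitude and spike times are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
fs = 24_000                                   # assumed sampling rate (Hz)
signal = rng.normal(0, 1.0, fs)               # 1 s of background noise
spike_times = np.array([2000, 9000, 15000, 21000])
for t in spike_times:
    signal[t:t + 8] += -8.0                   # injected negative-going spikes

# Robust threshold from the amplitude median: sigma ~ median(|x|) / 0.6745
sigma = np.median(np.abs(signal)) / 0.6745
thr = 4.0 * sigma

# Time stamps where the trace first crosses below the negative threshold
crossings = np.flatnonzero((signal[1:] < -thr) & (signal[:-1] >= -thr))
```

Each crossing index is a candidate spike time stamp; a cheap median-based threshold like this is one reason the extraction can run in real time.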
anjali.mohapatra@iiit-bh.in
1 srkurhade@yahoo.com
This paper proposes a method to generate a summary of Hindi textual data by
extracting the most relevant sentences from the text. The method is based on a
combination of statistical and linguistic approaches. The growth of the Internet has led
to an abundance of digitally stored information, which must be filtered and extracted in
order to avoid drowning in it. With the growth of the Indian economy, broadband will
soon reach most parts of India, and the non-English (Hindi) speaking user base will
then outgrow the English-speaking user base in India. A Hindi text summarizer will
therefore become essential in the coming years. The summary generated from the text
will help readers learn new facts without reading the whole text.
The approach is illustrated by modeling the management of copyright on the
Internet, known as Digital Rights Management (DRM): a mechanism to enforce access
control over a resource without regard to its location. The DRM framework, and the
whole value chain from creators to end-users based on the different roles assumed at
different points in time, is obtained by transforming the core concepts of creations,
rights and actions. The set of actions operating on content, used under various roles at
different junctures of the application domain, forms the building blocks of the
complex copyright domain and ensures interoperability. Rights and action patterns are
modeled as role-ranks of actions, and concrete actions are modeled as instances of
these role-ranks. If some right or license requires an action, it suffices to check the
role-rank it assumes, with dynamic instance classification through the roles involved.
The resulting copyright model framework is flexible enough to model the moral and
exploitation rights over content.
Keywords: Formal Model, Object Role-Rank Model, IPR Management, Digital Rights
Management (DRM), Z Specification of DRM, Object Z.
Basel Accord II allows banks to estimate their risk determinants under the IRB
approach. The present study provides an empirical framework for estimating Loss
Given Default (LGD) for a retail consumer loan portfolio. LGD is modeled using a
hurdle regression model and a family of censored regression models. Results indicate
the ability of the model to estimate LGD with a bimodal distribution. The key
determinants affecting LGD are the total outstanding as a proportion of loan size, along
with loan size and the consumer's historical payment performance.
Analytics,
Genpact India
Kolkata
This paper proposes a method for proactive price adjustment that addresses both the
increase in cost and the retention of strategic market share, and, most importantly, does
so with a targeted profit. A time series forecasting technique is used to predict the input
cost increase. After the price function is designed with the forecasted cost, the concept
of elasticity is used to capture the market sensitivity of the pricing moves. A final
adjustment to align the pricing with the business profit target then gives us the price
function.
Key Words: Producer Price Index (PPI), Time Series Modeling, Autoregressive
Integrated Moving Average (ARIMA), Arc Elasticity, Sales Segment, Contribution Margin
(CM).
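The pipeline can be sketched end to end with a simple AR(1) on differences standing in for the paper's ARIMA model; the cost series, elasticity, price and quantity figures are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulated monthly input-cost index (a stand-in for a PPI series)
cost = 100 + np.cumsum(rng.normal(0.4, 1.0, size=60))

# Fit AR(1) on first differences by least squares: d_t = c + phi * d_{t-1}
d = np.diff(cost)
X = np.column_stack([np.ones(len(d) - 1), d[:-1]])
c, phi = np.linalg.lstsq(X, d[1:], rcond=None)[0]

# One-step-ahead cost forecast and implied escalation
cost_fcst = cost[-1] + c + phi * d[-1]
escalation = cost_fcst / cost[-1] - 1.0

# Pass the escalation into price, tempered by an assumed arc elasticity
elasticity = -1.2
price, qty = 50.0, 1000.0
new_price = price * (1.0 + escalation)
arc = 2.0 * (new_price - price) / (new_price + price)   # arc form of % change
new_qty = qty * (1.0 + elasticity * arc)
```

The final profit-target alignment would then iterate on `new_price` until the contribution margin meets the target.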
Many financial companies consider segmenting their customers based on the way these
customers transact with the company, to help them design customized marketing
programs for each segment, which in turn improve customer satisfaction and eventually
increase customer profitability. In this paper we describe a way of segmenting
credit card customers based on their transactional behaviour with the card company.
The segmentation solution was obtained using a combination of factor analysis,
k-means clustering and transition analysis models. Factor analysis was used to obtain key
factors from the available set of transactional variables. The factors were found to
encapsulate the following characteristics: monetary value, utilization, spending
frequency, speed of activation, preference for POS/ATM transactions and propensity to
“revolve” (i.e., pay interest on borrowings). The k-means clustering algorithm was used
on these factor scores to arrive at the segments. The customers from the study were
segmented into 6 groups based on their behavior. Segment dynamics were analyzed
and used to come up with recommendations for differentiable marketing treatment to
each of these 6 segments to enhance profitability.
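A rough sketch of the factor-then-cluster pipeline on simulated data, with principal-component scores standing in for the paper's proper factor scores and scipy's k-means forming the 6 segments:

```python
import numpy as np
from scipy.cluster.vq import kmeans2, whiten

rng = np.random.default_rng(3)
# Simulated transactional variables for 500 cardholders (spend, utilization,
# frequency, POS/ATM share, revolve rate, ...)
X = rng.normal(size=(500, 8))

# Stand-in for factor analysis: project standardized data onto the top
# three principal components via SVD
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = Z @ Vt[:3].T                      # three "factor" scores per customer

# k-means on the factor scores to form 6 behavioural segments
np.random.seed(42)
centroids, labels = kmeans2(whiten(scores), 6, minit='points')
```

Segment dynamics would then be studied by re-assigning customers in a later period and tabulating the transitions between the 6 labels.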
Keywords: CRM, Data Analysis in Banking and Financial Services, Cluster Analysis
Sunit Pahwa
Prasanna Janardhanam
Rajan Manickavasagam
The approach discussed in this paper uses a machine learning technique to capture
the buying patterns of the products bought by a customer over a period of time. It then
predicts and generates a list of items which are most likely to be bought by the customer
on his next visit to the retailer's grocery e-commerce portal. Since the prediction is
done at the product level with just two outcomes, (a) the customer will purchase the
product or (b) the customer will not, the problem amounts to classifying each product
for a customer as a likely purchase or a likely non-purchase. As this is a typical
classification problem, we used the Naïve Bayes Classifier to generate a likelihood
score of purchase for each item previously purchased by a customer (and for each
customer). This likelihood score is then used to rank all the items and generate the
favourites list for each customer. This approach was accurate enough to predict about
two-thirds of the baskets of about three-fourths of the total customers.
result of this approach is enhanced online experience for the customers who feel their
needs are better understood by the online retailer.
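For a single customer, the scoring and ranking step can be sketched with a smoothed per-item purchase probability, which is what a one-feature Bernoulli naïve Bayes reduces to; the trip matrix below is made up for illustration.

```python
import numpy as np

# Rows = one customer's past trips, cols = items; 1 = bought on that trip
trips = np.array([
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [1, 0, 1, 1, 1],
    [1, 0, 0, 1, 0],
])

# Laplace-smoothed purchase probability per item (Bernoulli MLE + smoothing)
n_trips = trips.shape[0]
p_buy = (trips.sum(axis=0) + 1) / (n_trips + 2)

# Favourites list: items ranked by likelihood of purchase on the next trip
favourites = np.argsort(-p_buy)
predicted_basket = np.flatnonzero(p_buy >= 0.5)   # classify buy / not buy
```

The full classifier would condition these probabilities on further features (recency, inter-purchase gaps), but the rank-by-likelihood structure is the same.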
V. Ramachandran
BPCL
The paper highlights the process adopted by BPCL in managing its system
inventories of petrol and diesel at 101 locations (22 terminals, 9 tap-off
points and 70 depots/demand points).
Rajkumari Soni
E-mail: rajkumari_soni2001@yahoo.com
Mutual funds are prominent investment institutions today, and mutual fund performance
is one of the most frequently studied topics in the investment literature in many
countries. The reasons for this popularity are the availability of data and the importance
of mutual funds as vehicles for investment by both individuals and institutions. As
mutual funds have become popular, research by institutions and academicians has
grown in importance. The present study examines the past performance of mutual
funds as a criterion for investors' future choices. The analysis starts with the fund
attributes that influence returns. Hypotheses are based on fund characteristics, i.e.
beta, standard deviation, fund size, NAV, fund age, management tenure and expense
ratio. The study covers 47 equity mutual fund schemes (with equity option) for which
data are available for the entire study period, i.e. from Jan 1999 to Dec 2008. The
results indicate that the hypothesized relationships between mutual fund performance
and the explanatory variables are generally upheld. The study provides a
comprehensive examination of recent Indian mutual fund performance by analyzing
fund returns and the fund attributes affecting performance, and attempts to link
performance to fund-specific characteristics.
Key Words: Mutual funds Performance, Correlation, Regression analysis, Mutual Funds
characteristics
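The attribute-performance link can be illustrated with a simple cross-sectional regression on simulated data for 47 schemes; the attributes, coefficients and noise level are invented for the sketch, not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 47                                        # schemes in the study
# Simulated fund attributes: intercept, beta, std dev, log size, age, expenses
X = np.column_stack([np.ones(n),
                     rng.normal(1.0, 0.2, n),     # beta
                     rng.normal(0.06, 0.01, n),   # std dev of returns
                     rng.normal(5.0, 1.0, n),     # log fund size
                     rng.uniform(1, 10, n),       # fund age (years)
                     rng.uniform(0.5, 2.5, n)])   # expense ratio (%)
true_b = np.array([0.05, 0.08, -0.3, 0.01, 0.0, -0.02])
returns = X @ true_b + rng.normal(0, 0.01, n)

# OLS of fund returns on fund characteristics
coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
resid = returns - X @ coef
r2 = 1 - resid.var() / returns.var()
```

Correlation analysis would examine the same attribute-return pairs one at a time before the joint regression.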
This study presents the results of estimating a linear approximate almost ideal
demand system for Indian meat products, using cross-sectional household-level data
collected by the National Sample Survey Organisation in India as part of the 60th round
in 2004.
The paper uses a censored regression method for the system of equations to analyze
the consumption patterns for meat products. The Heckman’s two-step procedure was
used to estimate the demand system. In the first step, Inverse Mills Ratio(IMR) was
estimated using a Probit model. In the second step, IMR was included in the LA/AIDS
model as an independent variable, while estimating the system of equations using the
Seemingly Unrelated Regression model.
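The two-step logic can be sketched as follows; for brevity the first-stage probit index is taken as known rather than estimated, and a single budget-share equation with simulated data stands in for the full system of equations (everything here is an illustrative assumption).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 1000
z = rng.normal(size=n)                     # household covariate (e.g. log income)

# Step 1 (probit): the index is assumed known here for illustration; in the
# paper it is estimated from the consume / don't-consume decision
xb = 0.5 + 0.8 * z
consumes = rng.normal(size=n) < xb         # selection into positive consumption
imr = norm.pdf(xb) / norm.cdf(xb)          # Inverse Mills Ratio

# Step 2: budget-share equation on the selected sample with IMR as a regressor
share = 0.3 + 0.1 * z + 0.2 * imr + rng.normal(0, 0.05, n)
X = np.column_stack([np.ones(n), z, imr])[consumes]
beta = np.linalg.lstsq(X, share[consumes], rcond=None)[0]
```

In the full LA/AIDS system the second step is estimated jointly across equations with Seemingly Unrelated Regression rather than equation-by-equation least squares.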
The objective of this study is to provide econometric estimates of price and expenditure
elasticities for meat demand in India. Other demographic variables influencing the
demand for meat products are also identified: sector (rural/urban), religion, land
ownership and household size. The results reveal that the demand for beef, pork and
fish is elastic while that for egg and chicken is inelastic. The cross-price elasticity
estimates indicate that mutton and beef are substitutes for chicken, whereas egg and
fish are substitutes for each other.
This paper focuses on price discovery in the Indian stock market, taking the case of the
Nifty spot and futures using five-minute prices. The data cover two periods: a bull and a
bear market. A Vector Error Correction Model is used to examine the lead-lag
relationship between Nifty spot and futures returns. It is found that futures returns lead
spot returns by as much as ten minutes in the bull market and thirty minutes in the bear
market, while spot returns lead futures returns by five minutes in the bull market and
thirty minutes in the recent bear market. A Vector Autoregressive model is used to
study price discovery in spot and futures volatilities; the futures market leads the spot
market by as much as twenty minutes in the bull market and twenty-five minutes in the
bear market. In conclusion, Nifty futures play no significant role in price discovery.
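The error-correction mechanism underlying the VECM can be sketched on simulated five-minute prices: a significantly negative loading on the lagged spot-futures basis means the spot price adjusts toward futures, i.e. futures lead. The data-generating process below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
T = 2000
# Simulated 5-minute log prices: futures a random walk, spot correcting to it
futures = np.cumsum(rng.normal(0, 0.001, T))
spot = np.zeros(T)
for t in range(1, T):
    spot[t] = spot[t-1] - 0.2 * (spot[t-1] - futures[t-1]) \
              + rng.normal(0, 0.001)

# Regress the spot change on the lagged basis (the error-correction term)
basis = spot - futures
d_spot = np.diff(spot)
X = np.column_stack([np.ones(T - 1), basis[:-1]])
alpha = np.linalg.lstsq(X, d_spot, rcond=None)[0][1]   # loading, ~ -0.2 here
```

The full VECM adds lagged returns of both series to each equation, and the relative size of the two loadings tells which market does the adjusting.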
We design and deploy a trading strategy that mirrors the Exchange Traded Fund (ETF)
arbitrage technique for sector trading. Artificial Neural Networks (ANNs) are used to
capture pricing relationships within a sector using intra-day trade data. The fair price of a
target security is learnt by the ANN. Significant deviations of the true price from the
computed price (ANN predicted price) are exploited. To facilitate arbitrage, output
function of the trained ANN is locally linearly approximated. The strategy has been
backtested on intra-day data from September 2005. Results are very promising, with a
high percentage of profitable trades. With low average trade durations and ease of
computation, this strategy is well suited for algorithmic trading systems.
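A toy version of the strategy, with a small scikit-learn network standing in for the paper's ANN and simulated intra-day prices; the peer weights, thresholds and data are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
# Simulated intra-day prices: four sector peers drive the target's fair price
peers = 100 + rng.normal(size=(1500, 4)).cumsum(axis=0) * 0.05
target = peers @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0, 0.05, 1500)

X = (peers - peers.mean(0)) / peers.std(0)     # standardize inputs for the ANN
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
ann.fit(X[:1000], target[:1000])               # learn the "fair price" mapping

# Trading signal: deviation of the traded price from the ANN fair price
deviation = target[1000:] - ann.predict(X[1000:])
signal = np.where(deviation > 2 * deviation.std(), -1,      # rich: sell target
         np.where(deviation < -2 * deviation.std(), 1, 0))  # cheap: buy target
```

The local linear approximation of the trained network would then supply the peer-side hedge quantities for each trade.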
Niharika Gaan
IMIS, Bhubaneshwar.
E-mail: niharikagaan@yahoo.com, n_gaan@imis.ac.in
This study describes the development and validation of an emotional labour scale
(ELS), tested on a sample of 491 respondents from B-schools in India. The ELS is a
12-item self-report questionnaire that measures 4 facets of emotional labour in the
workplace: variety in emotional display, deep acting, surface acting and emotional
regulation. Estimates of internal consistency for the subscales ranged from .67 to .89.
Confirmatory factor analysis results supported the 4 facets of unidimensional subscales
of the emotional labour scale, which contradicts the six facets of the existing emotional
labour scale. Evidence was also provided for convergent and discriminant validity.
Key Words: Deep acting, Surface acting, Automatic regulation, Variety in emotional
display, Emotional Labour
Anil Kumar
Haryana School of Business,
Guru Jambheshwar University of Science & Technology,
Hisar, Haryana
E-mail: anil_k6559@yahoo.co.uk
Email: deoamitabh@gmail.com
Indian Public Sector Enterprises (PSEs) are passing through massive changes due to
rapid technological change on the one hand and competition from the private sector
(especially MNCs) on the other. In order to compete in such a liberalized and globalized
economy, PSEs are required to improve their organisational effectiveness. These
changes necessitate training and development in PSEs for the optimal use of
manpower, which will benefit both employees and the organization.
The need for systematic training and development has also increased because of rapid
technological change and competition that create new kinds of jobs and eliminate old
ones. New jobs require special skills which may be developed in the existing workforce
by providing the necessary training; otherwise employees train themselves by trial and
error or by observing others if no formal training programme exists in the organization,
and take much longer to learn new skills. Systematic training and development not only
increases skill levels but also increases the versatility and adaptability of employees.
With changing times there is also a need to reexamine the existing system of training
and development and to look at training and development policies and practices from a
new perspective. Organizations need to rethink and modify these policies and practices
to get maximum benefit, which is essential for improving organisational effectiveness.
The success of training depends not only on the instructor, content, input and training
method, but also on the participants' perception of the training, training awareness,
motivation to learn and transfer, learning effort, training participation and involvement,
the training transfer climate, and training evaluation. To make training more effective,
employees' perception of training and development needs to be made positive. This
can be done by involving them in training and development activities, by creating a
good learning environment, and by helping and encouraging them to learn and then
practice that learning on the job.
Anurag Pant
School of Business and Economics
Indiana University South Bend
Email: anurag@iusb.edu
Sanjay Mishra
School of Business
University of Kansas
Email: smishra@ku.edu
Some entrepreneurs, who lack the cognitive ability to elaborate on issues, can still be
successful. Such ‘naive’ entrepreneurs have a lower need for cognition, a lower recall,
and a higher feeling of knowing about a topic than sophisticated entrepreneurs.
Consequently, we expect them to be lower risk takers than sophisticated entrepreneurs.
On the other hand, naïve entrepreneurs induce higher empathetic support from key
business associates and employees. This lets them get more “resources” than
sophisticated ones. A better understanding of naïve entrepreneurs could help to reduce
new venture failure rates. This paper uses content analysis to measure the constructs
and structural equation modeling to show their interrelations.
M.J.Xavier
Kotler-Srinivasan Center for Research in Marketing
Great Lakes Institute of Management, Chennai
xavier@greatlakes.edu.in
Anil Srinivasan
Kotler-Srinivasan Center for Research in Marketing
Arun Thamizhvanan
Great Lakes Institute of Management, Chennai
In 2007, India accounted for one-third of the total $17-billion global market for analytics.
However, the adoption of analytics for decision making and for enhancing the customer
experience has been slow. While the term 'analytics' has found universal usage in
almost all business platforms, what it refers to and the specific contexts in which it
ought to be used remain ambiguous among senior managers in the Indian corporate
milieu.
The paper discusses these findings in detail and concludes with a brief discussion on the
steps ahead.
Anthony Hayter,
Department of Statistics and Operations Technology
University of Denver
Denver, Colorado, USA
Some thoughts and perspectives are provided on quantitative courses in the business
school curriculum and the challenges of motivating and equipping managers with
appropriate statistical techniques and skills. Experiences gained from teaching in the
business school environment and from consulting with companies will be presented.
Some case studies from the USA and Asia will be provided that illustrate how data
analysis has been employed to better understand business situations, and to provide the
basis for better decision making. Are businesses today making efficient use of the data
they have available, and would they be surprised by their own data?
Generally, the curriculum of a quantitative course in a business school would address
the following goals.
• Develop an understanding of the basic concepts of probability and statistics, and
how they relate to managerial type problems and decision making.
However, a crucial aspect of this education is motivating students to see the value of
this material to their businesses. Some Golden Rules will be discussed which provide
the foundation for this motivation. Additionally, the pitfalls and dangers of a lack of
appreciation of the complexities of probability theory are presented.
Case studies are a wonderful tool for motivating students. Case studies from various
countries and industrial sectors are presented that illustrate how data analysis
techniques can be applied to real problems and how they can have an impact on the
company’s financial bottom line.
Uma V
Datafix Technologies Pvt. Ltd., Mumbai
Abhinandan Jain,
Indian Institute of Management, Ahmedabad
This paper presents an application of data analytics and spatial analysis (GIS) in one
circle of a leading Indian telecom service organisation (Bharti Airtel: BA). BA was facing
the problem of increasing bad debts and collection costs in one of its circles, and
turned to a data analytics organisation (Datafix Technologies Pvt Ltd) for resolving the
issues through a data-based approach. The available data included filled-in customer
application forms and the company's collection points. The customer data, such as
name and address, was of poor quality. The methodology included (i) identifying
relevant variables, (ii) splitting/exploding the data fields and deriving new variables,
(iii) deriving linkages to identify unique customers and their relationships, and (iv) using
spatial analysis to study and link customers and collection centres. The paper describes
the specific tools used and shares the rationale for choosing them in the Indian context.
Key words: data parsing, data enrichment, data linkages, spatial analysis
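The linkage step — identifying unique customers behind noisy names — can be sketched with standard-library fuzzy matching; the records and similarity threshold below are illustrative, and the paper's actual tools may differ.

```python
from difflib import SequenceMatcher

# Hypothetical customer records with noisy names/addresses from forms
records = [
    ("RAJESH KUMAR", "12 MG ROAD BLR"),
    ("Rajesh Kumaar", "12, M G Road, Bangalore"),
    ("Sunita Sharma", "45 Park St Kolkata"),
]

def normalise(s):
    """Lower-case and strip everything but letters and digits."""
    return "".join(ch for ch in s.lower() if ch.isalnum())

def similar(a, b, threshold=0.8):
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio() >= threshold

# Link record pairs whose names are near-duplicates -> candidate unique customers
links = [(i, j) for i in range(len(records)) for j in range(i + 1, len(records))
         if similar(records[i][0], records[j][0])]
```

In practice the parsed address fields and the spatial coordinates would be matched alongside the name before declaring two records the same customer.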
Ravindra H. Dholakia, Astha Agarwalla, Amir Bashir Bazaz & Prasoon Agarwal
Indian Institute of Management, Ahmedabad, Gujarat.
The paper is based on the Input-Output (I-O) tables for the Indian economy for eight
years spanning a 36-year period from 1968-69 to 2003-04. The technical
progress (TP) in the context of the I-O tables is based on the concept of the production
function defining the relationship between gross output and material inputs as well as
value added. Moreover, it is also at the disaggregated sectoral level. The paper
empirically verifies the following hypotheses: (i) the Indian economy experienced
substantial TP continuously throughout the period; (ii) the rate of TP during the
inward-looking and outward-looking growth strategy phases of the Indian economy
remained the same; (iii) the rate of TP at the disaggregated sectoral level is almost
uniform over time; and (iv) liberalization and globalization have not impacted sectoral
rates of TP differentially.
In order to measure the rate of TP, the available eight national I-O tables in India are first
made compatible for the number, scope and definitions of sectors as well as for prices
by converting them at constant 1993-94 prices. Simple measures are also used for
converting changes in technical coefficients into the aggregate rate of TP for a sector
and for the economy.
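One simple illustrative way (an assumption for exposition, not necessarily the authors' exact measure) to convert changes in technical coefficients into a sectoral TP rate is the fall in total material input required per unit of gross output between two tables at constant prices:

```python
import numpy as np

# Toy 3-sector technical-coefficient matrices at constant prices;
# A[i, j] = input of sector i required per unit of gross output of sector j.
A_1968 = np.array([[0.20, 0.10, 0.05],
                   [0.15, 0.25, 0.10],
                   [0.05, 0.10, 0.30]])
A_2003 = np.array([[0.18, 0.09, 0.05],
                   [0.12, 0.22, 0.09],
                   [0.05, 0.08, 0.26]])

total_input_old = A_1968.sum(axis=0)   # total material input per unit of output
total_input_new = A_2003.sum(axis=0)

# Sectoral TP rate as proportional input saving per unit of gross output
tp_rate = (total_input_old - total_input_new) / total_input_old
print(np.round(tp_rate, 3))   # one rate per sector
```

A fall in the column sum of the coefficient matrix indicates that less real intermediate input is needed per unit of gross output, which is the input-saving notion of technical progress the paper builds on.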
Gaurav Agrawal
Atal Bihari Vajpayee - Indian Institute of Information Technology & Management
(ABV - IIITM), Gwalior
This research paper empirically examines the impact of the terrorist attack on London's
public transport system on 7 July 2005 on the stock returns of the underlying domestic
shares of Indian companies whose ADRs/GDRs are listed on the NYSE, NASDAQ and LSE. An
event study was conducted on the stock returns of the underlying domestic shares of the
eight Indian ADRs listed on the NYSE/NASDAQ and the seven GDRs listed on the LSE, with
7 July 2005 as the event day. Abnormal Returns (ARs), Average Abnormal Returns (AARs)
and Cumulative Average Abnormal Returns (CAARs) were computed using the market model on
daily closing price data of the underlying companies and the S&P CNX Nifty. The
behavior of these variables was examined for 15 days before and 15 days after the event
day. The study found that the impact of the announcement on the event day itself was
insignificant for all baskets of underlying domestic shares of Indian ADRs/GDRs listed
on the NYSE/NASDAQ/LSE. However, during the 31-day event window (-15 to +15), AARs and
CAARs were negative on most days for all baskets of ADRs/GDRs, which clearly indicated
that the announcement carried important information leading to changes in the
underlying stock prices. The study therefore concluded that the terrorist attack held
important information for the baskets of underlying domestic shares of Indian
ADRs/GDRs. Further, the trend of CAARs, which declined continuously even several days
after the event day, indicated slow assimilation of information into stock prices,
suggesting that the Indian stock market was inefficient in the semi-strong form of the
Efficient Market Hypothesis (EMH) during the study period.
Key Words: Terrorist Attack, Event Study, Average Abnormal Returns (AARs), Efficient
Market Hypothesis (EMH), ADRs/GDRs
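The market-model computation of ARs, AARs and CAARs described above can be sketched as follows. The simulated price series, window lengths and three-stock "basket" are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Simulated daily prices standing in for the Nifty and a basket of stocks
rng = np.random.default_rng(0)
n_days, n_stocks = 60, 3
mkt = np.cumprod(1 + rng.normal(0.0005, 0.01, n_days))                 # market index
stocks = np.cumprod(1 + rng.normal(0.0005, 0.015, (n_days, n_stocks)), axis=0)

r_m = np.diff(np.log(mkt))                  # market log returns (59 days)
r = np.diff(np.log(stocks), axis=0)         # stock log returns

est, evt = slice(0, 28), slice(28, 59)      # estimation window / 31-day event window

ar = np.empty((31, n_stocks))
for j in range(n_stocks):
    # Market model: r_j = alpha + beta * r_m + e, fit over the estimation window
    beta, alpha = np.polyfit(r_m[est], r[est, j], 1)
    ar[:, j] = r[evt, j] - (alpha + beta * r_m[evt])   # Abnormal Returns

aar = ar.mean(axis=1)      # AAR_t: average AR across the basket on each event day
caar = np.cumsum(aar)      # CAAR_t accumulated over days -15 .. +15
print(aar.shape, caar.shape)
```

Significance on the event day would then be judged by a t-test of AAR against its estimation-window standard error, as is conventional in event studies.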
One of the most profound phenomena in present financial markets is the increase in
international financial transactions across the world. With the advent of
liberalization, globalization and advances in information technology, this process has
gained much momentum, resulting in a progressive integration of the emerging markets
with the developed markets. In line with the global trend, the present paper
empirically investigates the long-run equilibrium relationship between the US and
Indian stock market indexes. Econometric tests, namely the Augmented Dickey-Fuller
test for unit roots, the cointegration test and the Granger causality test, have been
employed in the analysis. We conclude that the BSE Sensex is highly influenced by the
Nasdaq Composite Index, which reinforces the long-run relationship between the two
stock markets.
A. Oliver Bright,
Dept. of MBA, Infant Jesus College of Engineering,
Tuticorin, Tamil Nadu.
Email: aobright67@yahoo.co.in
In India there are 31 State Cooperative Banks (SCBs), 372 District Central Cooperative
Banks (DCCBs) functioning under their respective SCBs, and 97,224 Primary Agricultural
Cooperative Banks (PACBs) functioning under their respective DCCBs. The DCCBs are
formed in each district with the prime objective of uplifting the economically weaker
sections and poor agriculturists and of fostering savings among them. The Government of
India allots a huge amount to this sector every year. Of the DCCBs in India, 262
operate at a profit, though most of them earn only a marginal profit; only 85 DCCBs
earned enough profit to declare a dividend, and many DCCBs sustain losses year after
year. Every year the performance of the DCCBs is assessed by awarding marks on 18
selected parameters, with a maximum of 800 marks; the State Cooperative Banks have
circulated this format to the DCCBs for their evaluation. The current performance
appraisal system is incomplete and does not cover all the essential factors for
assessment. Moreover, there is no standard format universally applicable for evaluating
all the DCCBs in India. The Economic Value Addition (EVA) to the development of the
people of the respective region, such as the fostering of savings, the generation of
direct and indirect employment and self-employment, and the economic growth of
economically weaker sections and poor agriculturists, is not included in it. The profit
earned and the dividend declared are not given due consideration, and collection,
reduction of NPAs and deposit mobilization are not given due weightage. Considering all
these factors, an intensive study was made and a fair model for assessing the
performance of the DCCBs in India was developed. This system can also be used as a tool
for evaluating the relative performance levels of DCCBs. This "BRIGHT" model can be
used for evaluating the performance and assessing the relative position of DCCBs, and
may be further extended to countries having a similar cooperative banking or credit
system.
Vishal Dahiya
IBMR, Ahmedabad
The human vision system starts with just the shower of photons that hits the retina of
each eye and proceeds to construct 2D contours and 3D shape by consulting various
sources of information such as shading, texture, motion, occlusion and binocular
disparity. In this process it uses many laws based on reflectance, geometry, projection
and lighting. An image is perceptually realistic if a viewer of the image synthesizes a
mental image similar to that synthesized by the virtual viewer. The human visual
process synthesizes many different signals into an internal mental image; normally the
input to this process is the light coming from the various surfaces in the scene. 3D
shape visualization is usually done on 2D screens. The algorithms and techniques
involved in generating a 2D image from a 3D world coordinate system are collectively
known as rendering. Lighting models, shading techniques, the presence of textures and
the material properties of the 3D shape produce very different rendering quality.
Another important factor that influences the visual quality of a 3D model is the Level
Of Detail (LOD); the perception of different LODs strongly depends on the selected
rendering technique. In this research paper, I explore the factors that influence the
perception of a rendered image and analyze the rendering techniques used in this area
and their limitations.
So far, various techniques have been introduced in the field of clustering for dealing
with categorical data, but these algorithms are not able to deal with uncertainty and
stability. Our algorithm uses Rough Set Theory (RST), which handles uncertainty by its
very definition. Although an algorithm named MMR has already introduced RST to this
problem, it needs a few consistency measures to improve its results, and that is what
we worked on. Two areas are improved: the selection of the splitting attribute and the
selection of a cluster for re-clustering. For re-clustering, the cluster with the
highest average distance is chosen rather than the cluster with the highest number of
objects; this is done by introducing a criterion for the distance between any two
objects, derived essentially from the Hamming distance. The results obtained are
evaluated by calculating the purity of the clusters. For example, clustering the ZOO
data set using MMeR resulted in a purity of 96.5%, whereas 78%, achieved by MMR, was
the highest previously reported. The algorithm can still be developed further by
introducing fuzzy properties; rough-fuzzy properties are already defined.
Keywords: Data Mining, Clustering, Rough Set Theory, MMR, Hamming Distance
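The two evaluation ingredients named above, a Hamming-style distance between categorical objects and cluster purity, can be sketched as follows. The animals and attributes are toy values, not the ZOO data set:

```python
def hamming(a, b):
    """Number of attributes on which two categorical objects disagree."""
    return sum(x != y for x, y in zip(a, b))

def purity(clusters, truth):
    """Fraction of objects assigned to the majority true class of their cluster."""
    total = sum(len(c) for c in clusters)
    hit = 0
    for cluster in clusters:
        counts = {}
        for obj in cluster:
            counts[truth[obj]] = counts.get(truth[obj], 0) + 1
        hit += max(counts.values())     # majority class count in this cluster
    return hit / total

# Toy categorical objects and their true classes
animals = {"hen": ("feathers", "eggs"), "crow": ("feathers", "eggs"),
           "cow": ("hair", "milk")}
truth = {"hen": "bird", "crow": "bird", "cow": "mammal"}

print(hamming(animals["hen"], animals["cow"]))    # disagree on both attributes -> 2
print(purity([["hen", "crow"], ["cow"]], truth))  # perfect clustering -> 1.0
```

A purity of 1.0 means every cluster is dominated entirely by one true class; the paper's 96.5% figure is this statistic computed on the ZOO data set.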
ISB&M Pune
Forecasting comprises the techniques that predict the future on the basis of
probability. This paper concerns forecasting for a new product development project of
Hyundai Construction Equipment India Private Ltd (HCEIPL), which culminated in a
feasibility study for the company's plan to set up a plant for the production of
excavators. HCEIPL is a wholly owned subsidiary of Hyundai Heavy Industries (HHI),
Korea. The main objectives of this study are, first, to find the best forecasting
technique for predicting the total cost of HCEIPL and, second, to check the financial
feasibility of HCEIPL over a period of five years. A role model was selected on the
basis of cross-sectional analysis, cash flow analysis and ratio analysis of HHI's
(Construction Division) competitors in India; L&T-Komatsu emerged as the role model on
the basis of similar financial risk, growth and cash flow characteristics.
L&T-Komatsu's financials were estimated for the next five years, and the ratio of its
forecasted expenditure to forecasted sales, adjusted by approximately 1-2%, was used to
estimate HCEIPL's figures from 2008 to 2012. Computation of the Net Present Value (NPV)
of the project is required to check its feasibility. Quadratic regression proved the
best forecasting technique for the deterministic model; the model was validated by
residual analysis (accuracy measures: MAPE, MSD, MAD) and the F-statistic. However,
quadratic regression gives only a point estimate, and since future values are never
exact, a prediction interval is needed, which double exponential smoothing can provide.
Because the MSD of double exponential smoothing was higher than that of quadratic
regression, differencing with lag one was required. Double exponential smoothing is
thus the best forecasting technique on the basis of residual analysis, the F-statistic
and the t-statistic for the prediction interval. Decision-making is an essential part
of the management process. The Net Present Value of the 230-crore project is positive,
at around 6.38 crores, so the project should be accepted.
Keywords: Net Present value, double exponential, Quadratic regression, role model.
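Two of the steps above, Holt's double exponential smoothing and the NPV check, can be sketched as follows. The cost series, smoothing constants, discount rate and cash flows are illustrative assumptions, not the company's figures:

```python
def double_exponential(series, alpha=0.5, beta=0.3):
    """Holt's linear (double exponential) smoothing; returns the one-step-ahead forecast."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + trend)   # smoothed level
        trend = beta * (level - prev) + (1 - beta) * trend  # smoothed trend
    return level + trend

def npv(rate, cashflows):
    """NPV with cashflows[0] at t=0 (the initial outlay, negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

costs = [100, 112, 126, 142, 160]      # hypothetical yearly total cost series
print(round(double_exponential(costs), 1))

# Hypothetical outlay of 230 crores followed by five years of net inflows
flows = [-230, 55, 60, 65, 70, 75]
print(round(npv(0.10, flows), 2))      # positive NPV -> accept the project
```

The accept/reject rule is simply the sign of the NPV at the chosen discount rate, which is the decision criterion the abstract applies.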
The bullwhip effect is a well-known instability phenomenon in supply chains, related to
the amplification of demand volatility in the upper nodes of the chain. This paper
proposes a novel control engineering approach to analyzing the bullwhip effect, using
an exponential smoothing forecasting model with simple type 0, 1 and 2 systems
representing constant, linear and quadratic demand inputs respectively. The bullwhip
effect under different demand trends is analyzed using both the statistical and the
control engineering approaches. Using the control engineering approach, various
techniques are studied for explicitly calculating the associated noise under the
bullwhip effect, which allows an analysis of error minimization for different smoothing
factors under constant, linear and quadratic demand inputs in the studied inventory
policies. Stability analysis of the output signals is done using the Bode plot and the
root locus plot. The representation of bullwhip in terms of noise transmission, and its
reduction via bandwidth matching, is also carried out from the control engineering
perspective, and the results are analyzed.
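As a hedged illustration of the phenomenon studied above, the following simulation pairs an exponential smoothing forecast with a standard order-up-to replenishment rule (an assumed policy, not the paper's transfer-function models) and measures variance amplification for a constant-mean (type 0) demand input:

```python
import random

random.seed(7)
alpha, lead_time = 0.4, 2
# Constant-mean demand with noise: the "type 0" input case
demand = [100 + random.gauss(0, 10) for _ in range(500)]

orders, forecast, prev_target = [], demand[0], None
for d in demand:
    forecast = alpha * d + (1 - alpha) * forecast          # exponential smoothing
    target = (lead_time + 1) * forecast                    # order-up-to level
    if prev_target is not None:
        # Replenish observed demand plus the change in the target level
        orders.append(max(0.0, d + target - prev_target))
    prev_target = target

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Bullwhip ratio: order variance relative to demand variance
bullwhip = var(orders) / var(demand)
print(bullwhip > 1)   # orders are more volatile than demand
```

A ratio above one is the bullwhip signature; in control terms the ordering policy acts as a filter whose gain exceeds unity over part of the demand spectrum, which is what the Bode-plot analysis in the paper makes explicit.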
Email: lalitgoyal78@rediffmail.com
Knowledge discovery is the primary goal of data warehousing, and data mining is one of
the steps in the knowledge discovery process: a technique for extracting meaningful
information from large databases or data warehouses. Mining can be done by different
techniques. Clustering is one of them; it partitions the database into groups, and its
use in data mining is growing very fast. There are different clustering methods, but
the major focus here is on partitioning-based clustering, which requires prior
knowledge of the number of clusters into which the database is to be divided. Today,
however, there is a requirement for algorithms that can generate the clusters
automatically. The objective here is to propose a new partitioning-based clustering
algorithm that can generate clusters automatically without any prior knowledge on the
user's side.
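One common way to realize "clusters without a user-supplied k", sketched here as an assumption since the paper's own algorithm is not given, is to run a partitioning method over a range of k and keep the k with the best silhouette score:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Toy database: three well-separated groups in 2-D
rng = np.random.default_rng(3)
data = np.vstack([rng.normal(c, 0.3, (40, 2)) for c in (0, 5, 10)])

best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(data)
    score = silhouette_score(data, labels)   # cohesion vs. separation, in [-1, 1]
    if score > best_score:
        best_k, best_score = k, score

print(best_k)   # the data's natural group count is recovered automatically
```

The silhouette criterion is only one of several (gap statistic, BIC under a mixture model); the point is that the number of clusters becomes an output of the procedure rather than an input from the user.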
Revenue Management
Patitapaban2009@gmail.com
Revenue Management (RM) is a relatively new field currently receiving much attention
from researchers and practitioners; it essentially means setting and adjusting prices
at a tactical level in order to maximize profit.
Traditional, well-known pricing techniques are clearly closely related; the new twist
is that RM avails itself of sophisticated demand forecasting and pricing based on
research in many areas such as management science, economics and mathematics.
Due to the availability of a vast amount of data through customer relationship
management systems that can be used to calibrate the models, these techniques have had
a tremendous impact on the airline industry, where RM was first applied, and
subsequently in other industries such as car rentals, cargo and hotels.
As part of ongoing changes in the industry, companies throughout the entire hospitality
spectrum are placing a strong emphasis on implementing major operational changes.
Beyond recognizing that meaningful cost reductions must be achieved without
compromising safety, capacity and service levels, they are also looking at reducing
costs by increasing flexibility and improving asset utilization through an RM strategy.
In doing so, they continue to reassess their true core.
Keywords: RM, RM & Pricing, RM in Hotel Industry, RM vs. MIS, Problem in Future
Research
This research paper focuses on the analysis of available data in the retail sector. In
this experiment, a store wants to examine its customer base and understand which of its
products tend to be purchased together, and has chosen to conduct a market basket
analysis of a sample of its customer base. After this analysis the store can place the
items that customers buy together near each other, increasing the chance that customers
buy both of those products.
Association rules are used for the market basket analysis. The process flow for this
experiment involves, first, selecting the input data source node; this data set
contains the grocery products purchased by 1,001 customers, with twenty possible items
represented. The next phase of the experiment concentrates on defining the roles of the
different variables, such as customer-id and product-nm; the association nodes are then
configured, and lastly the model is run. The resulting support percentages are expected
to provide conclusive evidence that certain products should be placed together.
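The support and confidence computations behind the association-rule step described above can be sketched as follows. The baskets and item names are toy data, not the 1,001-customer set:

```python
from itertools import combinations

# Toy transaction data: each basket is one customer's purchase set
baskets = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "eggs"},
]

def support(itemset):
    """Fraction of baskets containing every item in the itemset."""
    return sum(itemset <= b for b in baskets) / len(baskets)

def confidence(antecedent, consequent):
    """P(consequent in basket | antecedent in basket)."""
    return support(antecedent | consequent) / support(antecedent)

# Rank all 2-item combinations by support
items = sorted({i for b in baskets for i in b})
pairs = [(a, c, support({a, c})) for a, c in combinations(items, 2)]
best = max(pairs, key=lambda t: t[2])

print(best)                                 # the most-supported pair
print(confidence({"bread"}, {"milk"}))      # -> 2/3: milk appears in 2 of 3 bread baskets
```

High-support, high-confidence pairs are exactly the products the abstract suggests placing together on the shelf.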
Bikramjit Rishi
Institute of Management Technology (IMT)
Raj Nagar, Ghaziabad – U.P.
Email: brishi@imt.edu
Retailing in India has emerged as one of the most dynamic and fast-paced industries,
with several players entering the market. Apparel retailing in India is gradually
inching its way to becoming a major contributor to retailing growth in India. The whole
concept of shopping in the apparel category has undergone change in terms of format and
consumer buying behavior, ushering in a revolution in shopping. This study makes an
effort to understand the Indian apparel buyer so that Indian retailers can devise
strategies to fulfill the needs of buyers in a better way. The study highlights four
segments, i.e. Modern & Professional, Orthodox, Incautious and Perfectionist. The study
further invites researchers in this field to undertake more in-depth analysis for a
better understanding of the Indian apparel buyer.
V.R.UMA
Christ University, Bangalore
This paper focuses on the influence of psychographics on the footwear purchases of
Indian youth. For the purpose of the study, 401 males and 401 females in the age group
of 19 to 26 from Bangalore were considered. Cluster analysis revealed that 62% of the
male population comprised Fashionables, 15% Economicals and 23% Independents. In the
case of females, six clusters were formed: 6% Traditionals, 38% Economicals, 12%
Independents, 3% Health-conscious, 38% Fashionables and 3% Economic Fashionables.
Separate analyses were done for casual and formal footwear. The major attributes used
to measure preferences include: footwear matching the colour of the dress, standard
colours, warranty, durability, price, quality, variety, elegance, preference for
bargaining over fixed prices, periodicity of shopping, convenient location, amenities,
ambience of the store and courteousness of salesmen. These attributes were listed by
the respondents. Data on income, such as monthly income and spendable income, was also
collected. The study revealed that people belonging to different lifestyles have
different preferences irrespective of the income class they are in.
The growing affluence of India's consuming class, the emergence of a new breed of
retail entrepreneurs and the flood of imported products in the grocery space have
driven the current retail boom.
Against this backdrop, the purpose of the present paper is to:
• Evaluate the factors affecting consumers' choice of retail outlets
• Review the existing literature
• Seek the opinions of retail customers
• Provide useful suggestions for improvement
• Apply the statistical tools of factor and regression analysis
Research & Methodology
1.1 Area under study - Mumbai, Delhi
1.2 Research design - Exploratory
Age range - 20-30
Data sources - Primary, Secondary
1.3 Sampling frame - Convenience
Sample units - 120
2.0 Analysis of Results
1. Respondents' profile, market in retail sectors, products
2. Review of literature
3. Analysis of 20 variables
4. Regression analysis
5. Recommendations
3.0 Findings
1. Companies should try to improve their services
2. The location of the outlet should be strategically chosen
3. Consumers should be engaged to curb billing time
4. Loyalty schemes should be emphasized
5. Exclusive brand corners should be displayed
In a nutshell, retail outlets should focus on this complete experience so as to build a
strong customer base.
The paper throws light on the mobile phone consumption pattern among college-goers.
Understanding youngsters as a market segment provides a competitive advantage to
marketers. The study reveals how gender, monthly voucher amount and years of owning a
mobile phone influence the usage pattern of this device. The findings of the study
would be helpful to telecom service providers and handset manufacturers in designing a
mix of product and promotion for different market segments. Research undertaken in this
area also helps researchers and scholars understand the individual usage pattern of a
new medium.
In today's competitive world everyone wants a greater share of the value derived from
serving consumer needs. Banks have evolved over time from being just product providers
to a combination of service and product providers. One way of deriving maximum value is
by reducing the cost of offering the service, which has led to the birth of alternate
banking channels. This report looks at some of the factors that enable and deter the
adoption of these alternate channels in the Indian context. A literature survey was
done to identify the various factors that may affect the adoption of alternate
channels. In-depth interviews were then conducted to gain insights into the attitudinal,
motivational and behavioural aspects of adopting alternate channels. Factor analysis
followed by multivariate regression showed that benefit awareness, easy accessibility,
self-image motivation, ease of instructions, time saving, perception of future
substitution and perception of the human element are very important factors.
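The two-stage analysis named above, factor analysis on survey items followed by a regression of adoption on the factor scores, can be sketched as follows. The items, latent factors and adoption measure are all illustrative assumptions, not the study's survey:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 300
# Two hypothetical latent factors (e.g. "benefit awareness", "ease of use")
# generating six observed Likert-style items via a loading matrix
f = rng.normal(size=(n, 2))
loadings = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.2],
                     [0.1, 0.9], [0.2, 0.8], [0.0, 0.7]])
items = f @ loadings.T + rng.normal(0, 0.3, (n, 6))

# Stage 1: extract factor scores from the observed items
scores = FactorAnalysis(n_components=2, random_state=0).fit_transform(items)

# Stage 2: regress a (synthetic) adoption measure on the factor scores
adoption = 0.6 * f[:, 0] + 0.3 * f[:, 1] + rng.normal(0, 0.2, n)
r2 = LinearRegression().fit(scores, adoption).score(scores, adoption)
print(r2 > 0.5)   # the recovered factors explain most of adoption here
```

Reducing many correlated survey items to a few factors before regressing avoids multicollinearity and yields coefficients that map onto interpretable constructs like "benefit awareness".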
Pratap Sikdar
Express Advisory Services Private Limited (Express Weather)
Salt Lake City, Kolkata, West Bengal
www.expressgrp.com
In this business case we discuss the potential and challenges of the weather business
in India. It is evident that even though its usefulness is understood across different
market segments, the effort to improve the service has not reached a satisfactory level
for want of quality weather data. This is where we place strong emphasis: on developing
quality weather forecasts.
Developing quality weather data is one part of the total initiative taken by Express;
the other important facet of the initiative lies in the proper packaging and
dissemination of the data and service so that they can easily be implemented in the
existing systems of clients.
The Indian market is in its fledgling existence, and the perception of the target
segments has not matured to the desired extent. It is a cumbersome task for Express to
make the target segments understand the benefits of using location-specific weather
forecast information in their operational areas and its application in the decision
support system of the client.
We discuss one such successful application of the weather service in the value chain of
an agrochemical company, which has reaped immense benefits in its marketing value
chain.
Mandeep Dhillon
ICFAI National College, Chandigarh
This qualitative study explored what Indian youth think about happiness. Eight hundred
(800) students wrote free-format essays in response to a simple open-ended question:
"What is happiness?" All these essays were coded using thematic analysis, and four main
themes were found: (1) happiness is a state of satisfaction, positive feelings and
contentment; (2) happiness is goal achievement and a sense of accomplishment; (3)
social capital (i.e. family and friends) is more instrumental in happiness than
financial capital; and (4) happiness comes from spiritual enrichment and freedom from
ill-being, i.e. being healthy. These themes are discussed in the context of Indian
philosophical and spiritual views of happiness.
Traditional models of microfinance, which include group lending, offer substantial
scope for one-to-one customer interaction leading to customer credit assessment.
However, with larger banks venturing into the sector, a greater focus on sustainable
finance was sought, and consequently the need for statistical tools for credit
assessment was felt. In this direction, this study was conducted by ICICI Bank with one
of its key MFI partners.
The objective of the study was to identify borrower characteristics that distinguish
good, bankable customers.
Cross-sectional data on one lakh borrowers was collected by the MFI, and multivariate
analysis led to useful insights. Clients residing in better housing conditions showed a
lower probability of default. Older borrowers (by age) were observed to have higher
credit quality. Moreover, when the sons of borrowers were of working age, repayment
performance was superior. Another interesting result was that clients residing near
Primary Health Centers showed better repayment performance.
It may be noted, however, that the results hold for the MFI in question and may not
extend to microfinance lending in general.
Sorabh Sarupria
Product Practice team,
Healthcare and Lifesciences,
Syntel Inc.
Topics Discussed
This paper (poster) discusses data mining and its applications within healthcare in major
areas such as the evaluation of treatment effectiveness, management of healthcare,
customer relationship management, and the detection of fraud and abuse. The paper
(poster) highlights the limitations of data mining and discusses some future directions.
Major conclusions
Fraud and abuse: Data mining applications that attempt to detect fraud and abuse often
establish norms and then identify unusual or abnormal patterns of claims by physicians,
laboratories, clinics, or others. Among other things, these applications can highlight
inappropriate prescriptions or referrals and fraudulent insurance and medical claims.
BI 4A model:-
Approach: Monitoring, analytical & predictive intelligence.
Acumen: BI investment alignment with strategic goals.
Assumption: Willingness to identify & solve business problems.
Activation: Pre- & post-launch strategy for successful implementation of BI
process.
Process & technology flow model:-
1. Customer information
2. Segmentation
3. The Game plan
4. Identifying the high-end customers
5. Tracking
6. Reaching to the customer
7. Alternate channels
8. Delivery of the product/services
9. Feedback
10. Product/service innovation
Pricing analytics is a core activity within any organization. It becomes even more
critical in a commodity market where customers are extremely price sensitive: right
pricing brings market share, increased revenue and profitability to a business.
'Competitive Intelligence' (CI) is a critical process by which management assesses the
capabilities and behavior of its current and potential competitors in order to maintain
or develop a "competitive advantage". Pricing decisions in any company are based on
competitive intelligence besides cost analytics, technology transitions, product
positioning and business strategy. CI provides actionable insights on competing
products for pricing decisions, besides supporting defensive competitive intelligence.
There are four stages in monitoring competitors - the four "C"s:
• Collecting the information,
• Converting the information into intelligence by collating and cataloguing it,
interpreting and analyzing it through data visualization,
• Communicating the intelligence,
• Countering any adverse competitor actions.
Competitive indices, often triangulated with P&L line items, give valuable insights for
improving profitability. They are used in several industry verticals such as retail,
telecom and airlines, helping them in demand shaping and in understanding their own
product portfolios, competitors' responses to their pricing actions, and the
implications of changes in the competitive environment for the different products the
industry offers.
1 sagar.kadam@clearcellgroup.com
2 biren.pandya@clearcellgroup.com
Value levers are the areas within a business that are positively impacted by customer
insight: pricing, promotions, store location and layouts, assortment, and
communication. The basis of Lifestyle Needs segmentation is that a customer's lifestyle
needs can be defined by their grocery purchases; effectively, "you are what you eat."
Even the simplest approach to creating a needs segmentation requires both sound
statistical analysis and regular input from business users.
In this paper, we describe the analytical approach to creating a Lifestyle Needs
segmentation. A product association methodology was used to identify trends in the
contents of grocery baskets. Final segments were obtained by using cluster analysis to
group segments with similar lifestyle needs but potentially different products
purchased. The stability and robustness of the final segments were established by
rolling the segmentation out over different periods and observing how customers move
between segments from period to period.
E.Nanda Kishore
NTPC Ramagundam, Employee Development Center
erukullank@yahoo.com
Results:
1. There is a structural change in the regression at 450 MW: the slope of the
equation is significantly different in the two regions (<450 MW and >450 MW).
2. Hence the coal consumption pattern is significantly different in one region
than in the other.
Topics Discussed:
Classical linear regression model.
Dummy-variable regression model.
Checking for structural change in regression.
Other aspects such as MAPE.
Main Conclusions:
The experiment proved that there is a change in the behavior of the unit at 450 MW.
This unit's load should be reduced rapidly down to 450 MW during backing down; the
loads of other units can be reduced for further load reduction if they show similar
behavior. If the unit's load is reduced to 425 MW and then has to be raised with load
demand, it should be increased immediately up to 450 MW, as this can be done with a
smaller amount of coal.
Anindo Chakraborty
Target Corporation, India – Bangalore
Anindo.Chakraborty@target.com
Key words: Retail sales forecasting, Classification & Regression Trees, Multiple
shocks to time series, Segmentation as a tool to reduce MAPE
Key Words: Tariff, Power purchase model, Sales Module, Revenue Model, Tariff
Schedule
Large store chains deploy sophisticated forecasting and planning systems based
on a clustering or grouping of their stores. Large stores with customer loyalty
cards also use customer transaction data for segmentation of customers in order
to improve their marketing and increase membership into their loyalty programs.
The groupings of stores or customer segments, quite often, are formed by
simplistic methods, with clusters formed by all stores (or customers) in a
geographic location such as a zip code, or by a ranking of stores by total sales.
These clustering approaches typically ignore a large amount of data collected by
the store chains. This data is multivariate in nature, with variables representing,
for example, amounts sold by individual stores for various product categories.
Cluster analysis is a data mining tool that uses the correlation among different
variables in a database to form store clusters or customer clusters. Forecasting
or planning systems that utilize this added information obtained from statistical
clustering will lead to increased potential for growth.
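The multivariate store clustering described above, grouping stores on their full category-sales profiles rather than by zip code or total sales, can be sketched as follows. The store data are synthetic, and scipy's hierarchical clustering stands in for whatever tool a chain might actually deploy:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(11)
# 12 stores x 4 product categories, generated from two underlying profiles
profile_a = np.array([100, 20, 60, 5])     # e.g. grocery-heavy stores
profile_b = np.array([30, 80, 10, 50])     # e.g. apparel-heavy stores
stores = np.vstack([profile_a + rng.normal(0, 3, 4) for _ in range(6)] +
                   [profile_b + rng.normal(0, 3, 4) for _ in range(6)])

# Ward linkage on the full multivariate profiles, cut into 2 clusters
labels = fcluster(linkage(stores, method="ward"), t=2, criterion="maxclust")
print(labels)   # the two sales profiles are recovered as two clusters
```

A zip-code or total-sales grouping would be blind to the category mix; clustering on the full profile separates the grocery-heavy from the apparel-heavy stores even when their total sales are similar, which is the extra information a forecasting system can exploit.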
Jayesh P Aagja
Institute of Management,
Nirma University
jayeshaagja@gmail.com
Post-liberalization, the Indian economy has seen intense competition due to the entry
of new players in most sectors. But in the current recessionary phase, which set in
during 2007, most service organizations have been trying to hold their market share
rather than increase it. The objective of this study is to validate a service
convenience scale in the Indian organized food and grocery retail context, and to
develop the linkage between service convenience on one side and satisfaction and
behavioural intentions on the other. A convenience sample was drawn from SEC A and SEC
B in various parts of Ahmedabad city, with experience of shopping at organized retail
food and grocery outlets. The samples, drawn in two phases - the first during
January-March 2008, the second during February-March 2009 - had 270 and 326
respondents respectively. As in the original scale, five dimensions emerged through the
scale validation process, though with 15 items instead of the original 17 (Seiders et
al., 2007). Neural networks used for nomological model testing indicated a good model
fit. Subsequently, an attempt was made to segment respondents based on their service
convenience scores, which resulted in four groups for both datasets. Statistically
insignificant differences were observed among these clusters based on demographics.
E-mail:
1shivanianand83@gmail.com, 2drskde@vgsom.iitkgp.ernet.in, 3dean.fms@mitsuniversity.ac.in
Bhavin Shah,
B. K. Majumdar Institute of Business Administration
H. L. College Campus, Navrangpura
Ahmedabad, Gujarat
Email: bhavin.shah@bkmiba.edu.in
Ramendra Singh
IIM Ahmedabad
Ahmedabad, Gujarat
Email: ramendras@iimahd.ernet.in
This case study highlights the performance appraisal measurement issues and
challenges for the sales force of a pharmaceutical company, Pharmex (name disguised).
We found that Pharmex was measuring too many (41) qualitative (e.g. competencies,
skills and job knowledge) aspects for Business Officers (BOs) and Area Business
Managers (ABMs), using inconsistent measures. ABMs had a tendency to score their
supervisee BOs higher on qualitative aspects, since such skills and competencies were
difficult to measure accurately, compared to more specific measures on numerical
quotas. Due to this measurement dichotomy between Part A (qualitative) and Part B
(numerical/quantitative) little correlation was found between these two parts of the
performance appraisal. Multiple regression analysis also suggested that little variance in
performance is explained by efforts, activities or qualitative aspects of BOs and ABMs.
Such measures, which varied across appraisal aspects, led to inconsistencies and are
therefore likely to be faulty. Like Pharmex, other organizations too may be appraising the
performance of their sales force using unreliable and invalid measures, leading to
erroneous managerial decisions about salesforce performance and the consequent
rewards. Organizations should make their performance appraisal measures more
scientific and robust.
E-mail: sanjeev@iimahd.ernet.in
The literature on store choice has mainly studied store attributes and ignored consumer
attributes. Even when consumer attributes have been incorporated, the strength of the
relationship has been weak. The literature has also ignored format choice when studying
store choice.
The paper argues for incorporating both the shopper attributes in store choice, and the
store formats. Shopper attributes can be captured through the demographic variables,
as they can be objectively measured, and these also capture a considerable amount of
attitudinal and behavioural variables. The paper proposes to link store choice, format
choice and consumer demographic variables, through a hierarchical logistic choice
model in which the consumers first choose a store format and then a particular store
within that format.
A nested logit model is developed, and the variables predicting the choice probabilities
are identified. The data requirements for the empirical analysis are specified and the
operationalization of the variables is done, though the model has not been verified in the
absence of empirical data.
Keywords: Format choice, hierarchical choice model, nested logit, shopper attributes,
store attributes, store choice.
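The two-level choice structure proposed here can be made concrete with a small numerical sketch. The code below is illustrative and not from the paper: the store utilities and dissimilarity parameters are hypothetical, and the model is the textbook nested logit in which the upper (format) level uses the inclusive value of each nest.

```python
import math

def nested_logit_probs(utilities, lambdas):
    """Two-level nested logit. utilities[k] is a list of store utilities
    within format k; lambdas[k] is the nest's dissimilarity parameter
    (setting every lambda to 1.0 collapses to a plain multinomial logit)."""
    # Inclusive value of each nest: I_k = log(sum_j exp(V_j / lambda_k))
    inclusive = {k: math.log(sum(math.exp(v / lambdas[k]) for v in vs))
                 for k, vs in utilities.items()}
    # Upper level: probability of choosing format k
    denom = sum(math.exp(lambdas[k] * inclusive[k]) for k in utilities)
    p_format = {k: math.exp(lambdas[k] * inclusive[k]) / denom
                for k in utilities}
    # Lower level: probability of store j conditional on format k
    probs = {}
    for k, vs in utilities.items():
        within = [math.exp(v / lambdas[k]) for v in vs]
        s = sum(within)
        probs[k] = [p_format[k] * w / s for w in within]
    return p_format, probs
```

With all dissimilarity parameters set to 1, the joint probabilities reduce to an ordinary logit over all stores, which is a useful sanity check on any implementation.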
Sapna Solanki
Sanghvi Institute of Management & Science
Indore.
E-mail: sapna.solanki594@gmail.com, sapnaa_solanki@rediffmail.com
Involvement refers to how much time, attention, energy and other resources people
devote to purchasing or learning about a product; it is one of the fundamental
concepts used to explain the consumer buying process. This study examines how the level of
consumer involvement for durable and non-durable products is influenced by financial risk,
performance risk, physical risk, social risk, time risk, uncertainty in selection,
psychological risk, previous shopping experiences, product attributes, situation, brand
personality, hedonic value, motivation, level of learning, utility of the product, price,
durability, gifting (for whom a product is purchased), lifestyle, store, frequency of use,
additional benefits, packaging and endorsement. A self-designed opinionnaire was framed
to identify the various dimensions influencing consumer involvement. For the dependent
variable (consumer involvement), Zaichkowsky's (1985) unidimensional 20-item
bipolar Likert scale, the Personal Involvement Inventory (PII), was adapted. Stepwise
regression was used for both the durable and non-durable product categories and
suggested four models for each product. The best models suggest
that the level of consumer involvement while purchasing garments is influenced by
previous shopping experiences, hedonic value, special offers and uncertainty, while for
laptops brand personality, hedonic value, frequency of use and
durability are the core predictors.
Key Words: Consumer Involvement, Level of Learning, Social Risk and Experience.
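As an illustration of the kind of stepwise procedure the study relies on, here is a minimal forward-selection sketch. It is not the authors' code: predictors are added greedily by adjusted R-squared (one common stepwise criterion; p-value entry rules are another), and the data and variable names are synthetic.

```python
import numpy as np

def forward_stepwise(X, y, names, min_gain=1e-4):
    """Greedy forward selection: repeatedly add the predictor that most
    improves adjusted R^2, stopping when no addition helps enough."""
    n = len(y)

    def adj_r2(cols):
        # OLS fit with an intercept on the chosen columns
        A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        ss_res = resid @ resid
        ss_tot = ((y - y.mean()) ** 2).sum()
        r2 = 1.0 - ss_res / ss_tot
        k = len(cols)
        return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

    chosen, best = [], -np.inf
    while True:
        scores = [(adj_r2(chosen + [c]), c)
                  for c in range(X.shape[1]) if c not in chosen]
        if not scores:
            break
        top, c = max(scores)
        if top - best < min_gain:
            break
        chosen.append(c)
        best = top
    return [names[c] for c in chosen]
```

On synthetic data where the response depends on two of three predictors, the procedure recovers exactly those two and stops.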
Goutam Dutta
Indian Institute of Management
Ahmedabad, Gujarat
E-mail : goutam@iimahd.ernet.in
Sankarshan Basu
Indian Institute of Management
Bangalore
Jose John
Indian Institute of Management
Ahmedabad, Gujarat
Insurance as a financial instrument has been used for a long time. The
dramatic increase in competition within the insurance sector (in terms of providers,
coupled with growing awareness of the need for insurance) has concomitantly resulted in more
policy options being available in the market. The insurance seller needs to know the
buyer's preference for an insurance product accurately. Treating this as a multi-criterion
decision-making problem, we use a logarithmic goal programming method to develop a linear
utility model. The model is then used to develop a ready reckoner that will aid
investors in comparing policies across various attributes.
Sreekumar
Rourkela Institute of Management Studies
Rourkela
E-mail: sreekumar42003@yahoo.com
S.S. Mahapatra
Department of Mechanical Engineering
National Institute of Technology
Rourkela
E-mail: ssm@nitrkl.ac.in ; mahapatrass2003@yahoo.com
As a well-structured and costly activity that pervades industries in both the public and
private sector, vehicle fleet management would appear to be an excellent candidate for
model-based planning and optimization. And yet, until recently the combinatorial
intricacies of vehicle routing and of vehicle scheduling have precluded the widespread
use of optimization (exact) methods for this problem class. The objective of the paper is to
minimise freight on a given day t subject to mathematical and existing business constraints,
which yields the fleet requirement for that day; repeating this gives the fleet requirement for
six days. A turnaround-time analysis is then carried out. The weighted turnaround time of
quantities sent to different destinations works out to 2.89 days, which has been rounded to
three days, meaning a truck dispatched from the hub on Monday would be available again
on Thursday. Using this analysis, an initial solution is reached which gives the initial fleet
mix.
Next, existing clubbing zones are identified and a search is made on the last three months'
dispatch data for cases where clubbing would have been possible but was not captured in
the initial model; the initial model may have selected two smaller vehicles where the
clubbed locations could be served by a single large vehicle. The initial solution is thus
modified to obtain the final fleet mix. The results are validated by examining the
dispatch pattern of the last three months against the average breakeven volumes
(for all destinations): the maximum dispatches occur for the type of truck required in the
greatest number, and vice versa.
There has been a constant need for an efficient and robust scheduling
technique to reduce lead time and increase productivity. This paper focuses on
reducing the manufacturing lead time in the production layout of an engine valve
manufacturing company. The layout experiences problems such as high inventory, high
setup times and excessive part travel. A Six Sigma tool, DMAIC, is used to approach the
problem. Details on product mix, volume of production, work-in-progress
(WIP) and sequence of operations were collected, and a simple heuristic technique
was proposed to decrease the manufacturing lead time; the approach was
validated using WITNESS, a simulation software package. The simulation of the proposed
heuristic in WITNESS showed a considerable reduction in work-in-progress (WIP)
and thereby resulted in monetary benefits. When the proposed heuristic
was implemented, the situation demanded a lean tool, Eliminate-Combine-
Rearrange-Simplify (ECRS). In total, the setup time for a particular process step
was reduced from the existing 20 hours per month to 6 hours per month.
Keywords: Batch size, Six Sigma tool, WITNESS software, Inventory reduction.
Jayaram Holla
Shrikant Kolhar
Srinivas Prakhya
In this paper, an ordered model that classifies applicants into good, marginal and bad
categories is developed. A model of repayment behavior is proposed where variation in
willingness and ability to repay is explained by individual-specific factors and
heterogeneity. Category probabilities are derived assuming that the categories are
ordinal. The model explains more variation than standard two category models used in
the industry. The model estimates are robust as evidenced by results when deployed on
a validation sample. The model has the additional benefit of being useful in allocation of
collection resources post-disbursement. The model can also be used in conjunction with
other information to design cross-selling programs. The final model could be the core for
assessing value-at-risk and moving to risk-based pricing. Information-based strategies
are knowledge intensive. Firms deploying such strategies are in a learning-by-doing
mode, resulting in the accumulation of tacit knowledge that is an inimitable resource.
Development of inimitable resources is a key factor in obtaining sustainable competitive
advantage.
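The derivation of category probabilities from ordered categories can be sketched with the textbook ordered-logit form: a latent repayment score plus logistic noise falls between consecutive cutpoints. The score and cutpoints below are hypothetical; this is a generic illustration, not the authors' estimated model.

```python
import math

def ordered_logit_probs(score, cutpoints):
    """P(category) for an ordered model: the latent score plus logistic
    noise falls between consecutive cutpoints (with -inf and +inf implied
    at the ends). Returns e.g. [P(bad), P(marginal), P(good)]."""
    def cdf(x):
        # logistic CDF
        return 1.0 / (1.0 + math.exp(-x))
    # cumulative probabilities up to each cutpoint, then 1.0 at the top
    cum = [cdf(c - score) for c in cutpoints] + [1.0]
    probs, prev = [], 0.0
    for c in cum:
        probs.append(c - prev)
        prev = c
    return probs
```

A quick check of the intended behaviour: the probabilities sum to one, and raising the latent score shifts mass toward the highest ("good") category.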
Sandeep Das
Analytics, Genpact India
Rajarhat, Kolkata
E-mail: sandeep.das1@genpact.com
This paper discusses a methodology called “Multi Step Logistic Regression” to improve
the predictive power of the binary logistic regression model in terms of a higher Hit/Miss
ratio. A ‘Hit’ is defined as right classification/tagging and a ‘Miss’ is defined as wrong
classification obtained from cross tabulation between actual vs. predicted tagging. In this
approach, after choosing the final cut logistic model, the model building population is
segregated into two parts – predicted 1 and predicted 0 by selecting a cut off on
predicted probability distribution. For the predicted-1 group, the parameter estimates are
re-estimated, retaining the variables that were significant in the initial model. The user may
choose to introduce new variables in each iteration and keep them in the model as per
significance. These steps are repeated iteratively until the cost-benefit trade-off gives a
good reason to stop. The conventional (single-step) logistic method does not handle
situations where the proportions of 1s and 0s are distinctly different or the cost of
misallocation is high; to tackle such situations, we discuss this alternative approach. The
paper aims to improve the concentration in the Hit cells with a tolerable, regulated (rather
than alarming) increase in the concentration of misclassification compared to the
single-step approach.
KEY WORDS: Multistep Logistic, Binary Response Model, Improving predictive power of
Probability of Default (PD) model.
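The multi-step idea (fit, split at a cutoff, re-estimate on the predicted-1 segment) can be sketched minimally. This is not the author's implementation: the logistic fit below is a hand-rolled gradient-ascent version to keep the sketch dependency-light, only two steps are shown, and the cutoff and data are illustrative.

```python
import numpy as np

def fit_logit(X, y, iters=2000, lr=0.1):
    """Plain binary logistic regression fitted by gradient ascent (sketch)."""
    Xb = np.column_stack([np.ones(len(y)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(Xb @ w)))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-(Xb @ w)))

def multistep_logit(X, y, cutoff=0.5):
    """Step 1: fit on the whole sample. Step 2: re-estimate the same
    variables on the predicted-1 segment only, as the abstract outlines.
    Further iterations would repeat step 2 on the new predicted-1 group."""
    w1 = fit_logit(X, y)
    mask = predict(w1, X) >= cutoff
    w2 = fit_logit(X[mask], y[mask]) if mask.sum() > 10 else w1
    return w1, w2, mask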
+Feedback Consulting
Bangalore
E-mail: nimisha@strandls.com
Market research plays an integral role in the product development lifecycle. One of the
goals of market research is to understand the likes and dislikes of customers and identify
the features that need to be added or enhanced. Traditional market research involving
survey design, focus groups and quantitative research can be very expensive.
In this paper, we describe Net Opinion in a Box (NOB), an opinion mining platform that
can aid the market research process. Using Natural Language Processing (NLP)
technologies, NOB extracts opinions expressed on Web 2.0 platforms like blogs, product
forums, and social networking sites. An optional curation module can be used to
manually improve the precision of NLP results. The opinions about specific features of
products are stored along with relevant meta-data like publication date, author location,
product brand, model etc.
These sentiments are presented to the end user in an intuitive web based visualization
dashboard. The dashboard allows users to apply filtering criteria to examine all aspects
of a product, perform a side-by-side comparative analysis of different brands, and study
how the opinions about a brand change with time. The interface allows users to drill
down to the actual sentences and provides links to the source site.
The entire pipeline from product definition to publishing can be configured and monitored
via simple web-based user interfaces. The platform is currently being used to power
opinion research for 17 products at http://www.feedbackstrands.com/.
Jayalakshmi Subramanian,
Infosys Consulting, Bangalore.
E-mail: Jaya_s@infosys.com
Suyashi Shrivastava,
Infosys Technologies Limited, Bangalore
E-mail: Suyashi_S@infosys.com
Kunal Krishnan,
Infosys Technologies Limited,
Bangalore .
E-mail: Kunal_Krishnan@infosys.com
FMCG (or CPG) companies spend anywhere between 12 and 25% of their revenue on
various marketing activities, which drive 40-60% of sales volume. Annual budgeting
in this industry is of paramount importance and sets the tone of marketing initiatives for
the rest of the year. A plethora of metrics and models, enabled by powerful IT tools, are
deployed to understand the effectiveness of campaigns and promotions. Yet, more
often than not, category managers are left with open questions which are not
explained by any metrics or regular market-mix models. CPG companies are
increasingly looking to investigative analytics to understand market drivers and how
they are influenced by various promotional activities; this informs the annual budgeting
activity and the effective allocation of trade funds. The paper discusses how to use
investigative analytics to aid such strategic decisions, with the help of a case study.
The study highlights a process which aims to answer a simple question: to what
extent do market drivers influence the sales of two top brands, and where should the
organization focus its marketing spend for each brand in the next year? The study
used live data from a leading consumer packaged goods (CPG) company, and also
aimed at preparing a hierarchy of the key factors that influence the sales of each
brand. The study was aided by the statistical tools SAS 9.2 and SPSS 17.0 and the iCAT
platform developed by Infosys' Product Incubation & Engineering Group.
Email: ruhi.khanna@maxhealthcare.com
To increase new patient enrollments and retain existing patients, healthcare
organizations are increasingly becoming patient-centric, as word-of-mouth publicity
most strongly affects the market share of a healthcare set-up. It has thus become
equally important to optimize performance with respect to quality. One means
of assessing patient expectations is feedback forms. Through these feedback forms, the
Voice of the Customer revealed high dissatisfaction amongst patients with the Max
chemist experience.
The CTQ drill-down suggested that the low dispatch capacity of the Central Pharmacy
was directly impacting customer satisfaction at the different pharmacies. With the help
of Lean Six Sigma, we increased the dispatch capacity from the Central Pharmacy to the
satellite pharmacies by 37% against a target of 32%. The post-project analysis of
patient feedback revealed an immediate 12% increase in customer satisfaction and
reduced negative VOCs.
Chetan Mahajan
Prakash G. Awate
The objective of the study is to obtain the best network configurations for detecting
the different out-of-control patterns present in x-bar control charts, after investigating in
detail several aspects and issues concerning the use of multi-layer feed-forward networks.
A multi-layer feed-forward network trained with the standard back-propagation algorithm
was employed to detect the non-random patterns in the x-bar control chart. The
network structures as well as the neuron parameters were obtained through
extensive simulations of learning and testing.
1 Vinod Gupta School of Management (VGSOM), IIT Kharagpur
Institute of Management & Information Science, Bhubaneswar, Orissa.
2 Corresponding Author, Vinod Gupta School of Management, Indian Institute of
Technology, Kharagpur, West Bengal.
Empirical studies on the IPO underpricing anomaly in recent years have closely examined the
use of various proxies for risk, none of which seems to explain a significant portion of the
underpricing. This paper seeks to shed light on the controversy by taking a sample of 92
IPOs issued in India during 2002-2006. We examine the suitability of the high price deflated
by the low price (H/L) as a risk surrogate to explain underpricing. The sample displays some
evidence that H/L is a better proxy for ex-ante risk than other risk surrogates. The H/L
ratio, estimated as the average of the high price to the low price over the initial month of
trading, has superior predictive ability for underpricing. Besides H/L, other risk proxies that
prove statistically significant include investment bank prestige and the inverse of offer
proceeds. Further, we studied the variation in the predictive behavior of risk proxies across
the manufacturing and non-manufacturing sectors. We found no significant difference in the
average H/L value for manufacturing and non-manufacturing firms. We also document that
after-market H/L, investment bank prestige, and the age of the issuing firm are suitable
risk proxies for manufacturing-sector IPOs, while risk for non-manufacturing-sector IPOs is
better represented by H/L, investment bank prestige, the inverse of offer proceeds, and
after-market price volatility.
Key Words: High Price to Low Price, Risk Proxy, Investment Bank Prestige, Initial Day
Return.
Debdatta Pal
Indian Institute of Management, Ahmedabad
Email: debdatta@iimahd.ernet.in
In this study a data envelopment analysis approach to efficiency has been applied to a
sample of thirty-six Indian Microfinance Institutions (MFIs), taking each institution as
a Decision Making Unit. The analysis considers the portfolio outstanding as on March
31, 2008 as the output variable, while on the input side the number of personnel in
the organization and the cost per borrower have been considered as proxies for labour and
expenditure respectively. The MFIs that remain efficient under both the constant-returns-to-
scale and variable-returns-to-scale assumptions are Sanghamithra Rural Financial
Services, Spandana Sphoorty Financial Limited and Pusthikar. The study also
attempted to identify and analyze the possible determinants of efficiency of MFIs in
India; the variables are grouped under four broad categories, namely location,
governance, presence & outreach, and financial management & performance. The
results indicate that the value of total assets, the level of operational self-sufficiency,
return on assets, return on equity, age, and borrowers per staff member are positively
correlated with all efficiency measures, while portfolio at risk (PAR 30 days) is
positively correlated with only Technical Efficiency (TE) and Pure Technical
Efficiency (PTE). As expected, the debt-equity ratio is negatively related to TE and
PTE. In the case of location, only the MFIs from the southern Indian states have a positive
correlation with all three measures of efficiency.
Key words: Data Envelopment Analysis, Rural Financial Services, Pure Technical
Efficiency
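The envelopment form of the DEA model used in studies like this can be sketched as a small linear program per decision-making unit. The code below is a generic input-oriented CCR (constant returns to scale) formulation with toy data, not the study's MFI dataset, and it assumes `scipy` is available.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(inputs, outputs):
    """Input-oriented CCR DEA under constant returns to scale.
    inputs: (n_units, n_inputs); outputs: (n_units, n_outputs).
    Returns theta in (0, 1] for each unit (1 = efficient)."""
    X = np.asarray(inputs, float)
    Y = np.asarray(outputs, float)
    n = X.shape[0]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        # sum_j lambda_j * x_ij <= theta * x_io   (each input i)
        A_in = np.hstack([-X[o][:, None], X.T])
        b_in = np.zeros(X.shape[1])
        # sum_j lambda_j * y_rj >= y_ro           (each output r)
        A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[b_in, b_out],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.fun)
    return scores
```

On a two-unit toy example with equal output, the unit using half the input scores 1.0 and the other scores 0.5, matching the hand calculation.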
Gunjan M. Sanjeev
E-mail: gunjmit@hotmail.com
This study makes an exploratory attempt to measure the efficiency of 96 Regional
Rural Banks (RRBs) using a mathematical programming approach, Data Envelopment
Analysis (DEA). Seven RRBs emerge fully efficient out of the 96
studied. The mean efficiency score is 0.764. A few banks need immediate attention as
their efficiency scores are very low. A preliminary effort has been made to see i) whether
there is any link between the efficiency of an RRB and its association with its sponsor
bank; ii) whether the efficiency of an RRB has any link with its geographical location. It is
found that a few sponsor banks emerge winners: all RRBs operating
under them are efficient. Also, there are a few states in India where all the RRBs are
efficient.
Santanu Roy
Institute of Management Technology
Ghaziabad
One major problem in evaluating the efficiencies of public institutions as pointed out
by many researchers is the lack of a good estimate of the production function. The
study reported in the paper adopts the methodology of data envelopment analysis
(DEA) and measures the relative efficiencies of public-funded research and
development laboratories in India (each laboratory being considered as a decision
making unit) with data drawn from 12 such laboratories functioning under the Council
of Scientific and Industrial Research (CSIR). The laboratories considered are spread
over different regions of the country and work on diverse fields of science,
engineering and technology. The input data for the study consist of the total number
of scientific personnel and the total number of technical personnel working in each
laboratory and the output data consist of the number of papers published in Indian
journals, the number of papers published in foreign journals, and the number of patents
filed by these laboratories. Both the global efficiency scores and the different local
efficiency scores (with specific inputs and outputs) were evaluated and potential
improvements were ascertained. The implications of the study results have been
analyzed and discussed.
Vivek S. Borkar
School of Technology & Computer Science,
TATA Institute of Fundamental Research,
Mumbai
Email: borkar@tifr.res.in
Mrinal K. Ghosh
Department of Mathematics,
Indian Institute of Science,
Bangalore
Email: mkg@math.iisc.ernet.in
Govindan Rangarajan
Department of Mathematics,
Indian Institute of Science,
Bangalore.
Email: rangaraj@math.iisc.ernet.in
The celebrated Merton model for the equity of a firm views equity as a long position in a
European call option on the assets of the firm. This allows one to treat the assets as a
partially observed process, observed through the `observation' process of equity. This is
the standard framework for nonlinear filtering, which in particular allows us to write an
explicit expression for the likelihood ratio for the underlying parameters in terms of the
nonlinear filter. As the evolution of the filter itself depends on the parameters in question,
this does not permit direct maximum likelihood estimation, but does pave the way for the
`Expectation-Maximization' (EM) method of estimating the parameters.
One of the standard tools for the theoretical analysis of fixed income securities
and their associated derivatives is the term structure model of Heath, Jarrow and
Morton. In this paper we suggest a simple criterion based on realized volatility that tells
which Gaussian HJM model is consistent with observed Eurodollar futures. We also
address the question of estimating the parameters of these models by two different
methods: the method of realized volatility and the method of maximum likelihood.
A.N.Sekar Iyengar,
Saha Institute of Nuclear Physics,
1/AF Bidhan Nagar, Kolkata
Email: ansekar.iyengar@saha.ac.in
Tomoaki Nakatani
Department of Agricultural Economics
Hokkaido University
Sapporo, Japan
and
Department of Economic Statistics
Stockholm School of Economics
Stockholm, Sweden
E-mail: naktom2@gmail.com
A preliminary version. Please do not cite without permission from the author.
This paper contains a brief introduction to the package ccgarch that is developed for use
in the open source statistical environment R. ccgarch can estimate certain types of
multivariate GARCH models with explicit modelling of conditional correlations (the CC-
GARCH models). The package is also capable of simulating data from major types of
the CC-GARCH models with multivariate normal or Student’s t innovations. Small Monte
Carlo simulations are conducted to see how the choice of the initial values affects the
parameter estimates. The usefulness of the package is illustrated by fitting
a trivariate Dynamic Conditional Correlation GARCH model to stock returns data.
Abhinanda Sarkar
GE John F Welch Technology Center
EPIP, Whitefield
Bangalore
Email: Abhinanda.Sarkar@ge.com
Wind turbines are sources of electrical power and convert a random source – the wind –
to electricity at a steady frequency. The wind velocity can be considered as a time series
with a marginal distribution that permits extreme winds. The wind's mechanical energy
relates to electrical energy via a power curve that also depends on other characteristics.
The challenge is to model and estimate a non-normal time series,
together with implications for aspects such as turbulence parameters and the effects of
averaging. The uncertainty in the wind can be converted to risk measures for power
generated.
Key words: Weibull distribution, autoregressive time series, power curve, turbulence
intensity, value at risk
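The chain from a Weibull wind distribution through a power curve to a value-at-risk figure can be sketched by simulation. Everything below is illustrative: the cut-in, rated and cut-out speeds, the cubic ramp, and the Weibull parameters are generic stand-ins, and only the marginal distribution (not the autoregressive dependence the abstract also mentions) matters for this particular quantile.

```python
import numpy as np

def power_curve(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2000.0):
    """Stylized turbine power curve in kW: zero below cut-in and above
    cut-out, a cubic ramp up to rated power, then flat at rated power."""
    v = np.asarray(v, float)
    return np.where(v < cut_in, 0.0,
           np.where(v < rated_v,
                    rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3,
           np.where(v < cut_out, rated_p, 0.0)))

def power_var(shape=2.0, scale=8.0, alpha=0.05, n=100_000, seed=0):
    """Monte-Carlo value-at-risk of generated power: the alpha-quantile
    of output when wind speed is Weibull(shape) with the given scale."""
    rng = np.random.default_rng(seed)
    v = scale * rng.weibull(shape, n)          # Weibull marginal winds
    return np.quantile(power_curve(v), alpha)  # low quantile = risk measure
```

The risk reading is direct: `power_var()` returns the output level that the turbine falls below with probability alpha under the assumed wind distribution.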
Today, industries across the world are driven to be innovative, globalized and cost
effective due to a vigilant globalized consumer (customer). Consumers are looking for
innovative, reliable and safe products with extended warranties, sales and service, and
clauses for liabilities and penalties for product non-function. There is also the cost factor
that the consumer looks for in addition to the above stated needs. In order to cater to the
consumer’s needs, industries must be innovative and cost effective, not only in terms of
their product design but also to proactively address product reliability, and reduce
warranty claims. This paper deals with product performance, life modeling and
simulation to support business decisions. The technique of data collection and
computation through a concept called Early Indicators Product Tracking, which gives
dynamic alerts on product performance, is dealt with in detail. The use of Weibull analysis for
product life modeling and risk forecasting is explained, and the advantage of time-dependent
modeling over the traditional "take-away" constant-failure-rate model is discussed. The
concept of business simulation to help plan operations and anticipate bottlenecks, along
with its benefits, is also addressed. These techniques, applied in a
systematic way, can help manage the business by supporting the right decisions and by
proactively addressing customer issues.
Key words: Reliability, risk forecasting, Weibull, Early indicator product tracking,
Simulation, Mean Time between Failures (MTBF).
Email: gk.palshikar@tcs.com
Identifying and analyzing peaks (or spikes) in a given time-series is important in many
applications. Peaks indicate significant events such as sudden increase in price/volume,
sharp rise in demand, bursts in data traffic etc. While it is easy to visually identify peaks
in a small univariate time-series, there is a need to formalize the notion of a peak to
avoid subjectivity and to devise algorithms to automatically detect peaks in any given
time-series. The latter is important in applications such as data center monitoring where
thousands of large time-series indicating CPU/memory utilization need to be analyzed in
real-time. A data point in a time-series is a local peak if (a) it is a large and locally
maximum value within a window, which is not necessarily large nor globally maximum in
the entire time-series; and (b) it is isolated, i.e., not too many points in the window have
similar values. Not all local peaks are true peaks; a local peak is a true peak if it is a
reasonably large value even in the global context. We offer different formalizations of the
notion of a peak and propose corresponding algorithms to detect peaks in the given
time-series. We experimentally compare the effectiveness of these algorithms.
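One formalization in the spirit described (a locally maximum, isolated point in a window, kept as a true peak only if it is also large in the global context) can be sketched as follows. The thresholds `local_k` and `global_k` are illustrative choices, not the paper's, and mean/standard-deviation tests are just one way to operationalize "isolated" and "reasonably large".

```python
import statistics

def detect_peaks(series, window=3, local_k=1.0, global_k=1.0):
    """Return indices of 'true peaks': points that are (a) the strict
    maximum of their +/-window neighbourhood, (b) locally isolated
    (exceed the neighbourhood mean by local_k neighbourhood standard
    deviations), and (c) globally significant (exceed the global mean
    by global_k global standard deviations)."""
    n = len(series)
    g_mean = statistics.fmean(series)
    g_sd = statistics.pstdev(series)
    peaks = []
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neigh = [series[j] for j in range(lo, hi) if j != i]
        if series[i] <= max(neigh):
            continue  # not locally maximum
        m, sd = statistics.fmean(neigh), statistics.pstdev(neigh)
        if series[i] - m <= local_k * sd:
            continue  # neighbours have similar values: not isolated
        if series[i] - g_mean > global_k * g_sd:
            peaks.append(i)  # local peak that is also large globally
    return peaks
```

On a toy series, a large isolated spike is reported while a small local bump that is not significant in the global context is filtered out.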
Rudra Sarkar
Genpact Analytics, Genpact India
Rajarhat, Kolkata
E-mail: rudra.sarkar@genpact.com
Very frequently we want to find 'proper' segments within our customer base to meet
various business challenges. These segments can be aligned to various business
verticals like marketing, risk or collections. In this paper we review one such method
of segmentation analysis, relevant for the analytics support of a business. A decision
tree lets us perform Chi-Square Automatic Interaction Detector (CHAID) segmentation
analysis and eventually drill down to the target segments. The tree can be produced
using different software applications, one of the most popular being the Knowledge
Seeker Studio application from Angoss.
We also cover some basic concepts of segmentation with relevant examples from
business, and then discuss how such a segmentation analysis is typically carried out
using a decision tree and how recommendations for the business are derived from it.
Through proper segmentation and right targeting, a business can add a lot to the
bottom line. Moreover, segmentation analysis does not demand a huge investment:
gathering data points, doing the segmentation and coming up with specific business
implementations are the actionable steps.
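The core of a CHAID-style first split is a chi-square test of each candidate predictor against the target. The sketch below (not the Angoss implementation, which also merges statistically similar categories and recurses) picks the predictor whose cross-tabulation with the target has the smallest chi-square p-value; records, field names and values are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

def best_chaid_split(records, target, predictors):
    """CHAID-flavoured first split (sketch): cross-tabulate each candidate
    predictor against the target and return (variable, p_value) for the
    one with the smallest chi-square p-value."""
    best = None
    for var in predictors:
        cats = sorted({r[var] for r in records})
        tgts = sorted({r[target] for r in records})
        # contingency table: rows = predictor categories, cols = target values
        table = np.array([[sum(1 for r in records
                               if r[var] == c and r[target] == t)
                           for t in tgts] for c in cats])
        _, p, _, _ = chi2_contingency(table)
        if best is None or p < best[1]:
            best = (var, p)
    return best
```

On toy data where response depends entirely on region and not at all on gender, region is selected with a near-zero p-value.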
Sanjay Bhargava
Bharat Petroleum Corporation Limited
Crude oil procurement is one of the most critical activities in an oil refinery, as crude oil
constitutes 95% of the total refining cost. After procurement, the refining of the crude oil and
the product slate obtained thereby need to be planned.
There is a conventional method of crude oil evaluation and of arriving at the product slate,
which the oil industry used when refineries were less complex, processed one type of crude
oil, and product quality requirements were not stringent. With the passage of time, however,
crude oil evaluation and preparing the product slate posed greater challenges: refinery
complexity increased to improve the value addition from processing crude oils, and product
specifications became more environment-friendly to meet pollution control norms.
The presentation deals with the transformation from simple yield-based calculations to a
Linear Programming (LP) based model for refinery crude procurement and production
planning. The LP model considers, in addition to the yields of the crude oils and of the
refinery's various processing units, the product demand, processing unit capacities, crude
oil and product prices, make-or-buy decisions, etc. The output, in addition to the crude
processing plan and the product slate and quality, also indicates the marginal value of each
crude oil and product. Scenario analyses, such as the production of various products or the
use of external streams, can be carried out.
At BPCL we are using the Process Industry Modeling System (PIMS), an LP model from
M/s Aspen Tech, USA, for term and spot crude procurement and for yearly and monthly
planning. The quarterly plan is broken into periods, say four of 7-8 days each for the
immediate month, to arrive at a realistic production slate that considers the crude arrival
schedule, using multi-period PIMS. The subsequent months are run on a fortnightly /
monthly basis, which helps in arriving at decisions for the crude oil transportation schedule.
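The shift from yield-based calculations to an LP model can be illustrated with a deliberately tiny example. The code below is a toy, not PIMS and not BPCL data: two hypothetical crudes with made-up costs and yields, two product demands, and one capacity constraint, solved with a generic LP solver.

```python
from scipy.optimize import linprog

def plan_crude_mix():
    """Toy crude-procurement LP: choose barrels of two hypothetical
    crudes to minimise cost while meeting product demand within
    refinery capacity. All numbers are illustrative."""
    cost = [60.0, 55.0]                  # $/bbl of crude A, crude B
    # yields: fraction of each crude that becomes [petrol, diesel]
    yields = [[0.45, 0.35],              # crude A
              [0.30, 0.45]]              # crude B
    demand = [40.0, 40.0]                # bbl of petrol, diesel required
    # produce at least the demand:  -yield @ x <= -demand
    A_ub = [[-yields[0][0], -yields[1][0]],
            [-yields[0][1], -yields[1][1]]]
    b_ub = [-demand[0], -demand[1]]
    # total crude run limited by processing capacity
    A_ub.append([1.0, 1.0])
    b_ub.append(150.0)
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return res.x, res.fun  # optimal barrels per crude, total cost
```

In a real PIMS-style model the same structure scales up to many crudes, units and periods, and the dual values of the demand and capacity rows are exactly the marginal values of products and crudes that the abstract mentions.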
1, 2 Department of Humanities and Social Sciences, National Institute of Technology
(NIT), Rourkela, Orissa
3 Department of Mechanical Engineering, National Institute of Technology (NIT),
Rourkela, Orissa
The study assesses the service quality of occupational health and safety provision for fishermen. Occupational hazards are a major concern in fishing, particularly sea fishing. The productivity of fishing companies is greatly affected by occupational health problems, and the health problems suffered by fishing personnel impact the economy and the social well-being of the community, in addition to causing losses of revenue and goodwill for companies. The aim of this study is to assess the occupational health care system prevailing in the sector and to propose remedial measures to control such hazards in future. To this end, a specially designed questionnaire was prepared and distributed to respondents. Grey relational analysis was applied to the Likert-type responses to prioritize the remedial actions needed to improve the quality of the occupational health care system in the sector.
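The grey relational analysis step can be sketched as follows; the Likert scores and candidate remedial measures are hypothetical, and the distinguishing coefficient of 0.5 is the conventional default.

```python
# Grey relational analysis (GRA) sketch for ranking remedial measures
# from Likert-scale responses. Data and measure names are hypothetical.
import numpy as np

# Rows: remedial measures; columns: mean Likert scores on criteria.
data = np.array([
    [4.2, 3.8, 4.5],   # e.g. "provide safety gear"
    [3.1, 4.0, 3.6],   # e.g. "periodic health check-ups"
    [4.8, 4.4, 4.1],   # e.g. "safety training"
])

# Larger-is-better normalization to [0, 1].
norm = (data - data.min(0)) / (data.max(0) - data.min(0))

# Deviation from the ideal (all-ones reference) sequence.
delta = np.abs(1.0 - norm)
zeta = 0.5                              # distinguishing coefficient
coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grades = coeff.mean(axis=1)             # grey relational grade per measure
ranking = np.argsort(-grades)           # best measure first
print(grades, ranking)
```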
Ranchi, Jharkhand.
Consumer behavior studies the behavior of individuals or groups of people, and its study enables marketers to understand and predict future market behavior. This paper considers the role of the IRDA, the role of Indian banks, the role of private insurance companies, the functions of an insurance company, the various factors influencing consumer behavior, the factors influencing the buying decision, and a model of the consumer decision-making process. It also studies the types of insurance policy taken by consumers, the total sum assured of life insurance, the total sum assured of life insurance for the spouse, the share of public insurers in the insurance sector, the share of LIC in life insurance, and the reasons for investing in life insurance. The survey was conducted across 334 cities/towns in all the states and union territories. A sample of 1947 individuals was selected through a questionnaire. The online response system had self-checking, and its validation system vetted the quality and veracity of the responses. Indicus Analytics then cross-checked the inputs against its databases on investors and their habits. The majority of the respondents were from the top five metros and ten major cities, each with at least 30 participants. The profile of the target respondents is typical: they are well educated, familiar with English, spread over major urban centers, of a higher socio-economic and income profile, and drawn from a range of occupations, professions and age groups.
The insurance sector provides consumers some security against mishaps. The IRDA plays an important role in this sector and issues important guidelines to companies from time to time. LIC still plays an important role and holds the maximum share of the sector. Recently, the banking sector has also moved towards insurance, since banks can earn better returns than the commissions they receive by entering into partnerships with other major insurance market players. Union Bank, Federal Bank, Allahabad Bank, Bank of India, Karnataka Bank, Indian Overseas Bank, Bank of Maharashtra, Bank of Baroda, Punjab National Bank, and Dena Bank are planning to enter this sector. Among private players, the Max New York Life insurance company plays a vital role. Various factors affect the consumer buying decision and influence consumer thinking when planning to invest in an insurance scheme. Respondents generally prefer products such as vehicle insurance, term cover and medical/health insurance, and prefer a life insurance sum assured of less than Rs 10 lakh. Most respondents have shown interest in life insurance with higher risk coverage, and also invest for tax-saving purposes.
Key words: Consumer behavior, Buying decision, Consumer decision-making process
Hardeep Chahal
Department of Commerce
University of Jammu
Jammu
Email: chahalhardeep@rediffmail.com
Although customer relationship management (CRM) is a recent concept, its tenets have been around for some time. To sustain competitive advantage it is necessary to understand what customers require and to equip employees to deliver more than customers expect, i.e. customer value, while constantly refining value propositions to ensure customer loyalty and retention. At the same time, developing and maintaining CRM is not an easy task; an objective mechanism is needed to operationalise CRM in the organization. This paper makes a maiden attempt to conceptualise and operationalise CRM through a two-component model, Operational CRM (OCRM) and Analytical CRM (ACRM), in the healthcare sector. The relationship between OCRM, based on three patient-staff constructs (physicians, nurses and supportive staff), and the four ACRM constructs (satisfaction, repatronization, recommendation and organizational performance), with service quality as an antecedent to OCRM rather than as a moderator between the two CRM components, was analysed using confirmatory factor analysis (AMOS). Data for the model were collected from 306 patients across three large hospitals, each of whom had been associated with the hospital for at least the last five years. The validity and reliability of the multi-dimensional OCRM and ACRM scales were duly assessed. Dimensions such as caring attitude, friendliness, helpfulness, response to queries, expertise and effective treatment are found to be significant for OCRM from the physician, nurse and supportive-staff perspectives, and can impact the four ACRM dimensions: satisfaction, repatronization, recommendation and organizational performance. The paper concludes with managerial, theoretical and patient implications, limitations and directions for future research.
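As one example of the kind of reliability check applied to multi-item scales such as the OCRM/ACRM constructs, Cronbach's alpha can be computed directly; the patient responses below are invented for illustration (the abstract's own analysis used AMOS).

```python
# Cronbach's alpha, a common reliability check for multi-item scales.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x scale-items matrix of Likert scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of scale totals
    return k / (k - 1) * (1 - item_var / total_var)

# Five hypothetical patients answering a 3-item "physician care" scale.
scores = np.array([[4, 5, 4],
                   [3, 3, 4],
                   [5, 5, 5],
                   [2, 3, 2],
                   [4, 4, 5]])
print(round(cronbach_alpha(scores), 3))
```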
Uma V.P.Shrivastava
Department of Business Administration,
Hitkarini College of Engineering and Technology,
Jabalpur, Madhya Pradesh
In middle-sized towns the impact of advertising is very strong, stronger than on the residents of metros and big towns, and consumers are highly influenced by it. Studies have shown that 37% of consumers buy a product for the first time because they liked its advertisement, which aroused curiosity about the product or service. Moreover, advertisements nowadays show celebrities endorsing one product or another; consumers identify themselves with such celebrities and end up purchasing the endorsed product. There are thus different reasons why advertisements have such a huge impact: imagery, self-identification, glamour, or simple liking. But the crux is that advertising has a strong impact on consumers.
The working hypothesis of this study was that advertisement has a very strong impact on the consumers of middle-sized towns. The topic was chosen because, as secondary data suggest, consumers in middle-sized towns mostly have a defined disposable income and are largely removed from the world of glamour; they aspire to become "someone" out of "no-one". Advertisements are the most glamorous way of selling a product or service, and these consumers thus fall for them. The core objectives were: (a) What type of impact do advertisements create on the consumers of middle-sized towns? (b) What percentage of their income do consumers of the middle income group set aside as disposable income? (c) Do consumers make purchases out of need for the product or out of fancy for the advertisement?
This study was conducted in six small and semi-small towns of Madhya Pradesh using the basic research tools of random sampling, customer and consumer interviews, and FGDs. The sample comprised more than one thousand respondents in the age group of 9 to 66 years, male and female, of the A1, A2, B1 and B2 categories. This was done deliberately, because it is normally people of these categories who have the buying capacity to act on and react to advertisement. The respondents were interviewed at various locations in the cities, and both qualitative and quantitative data were collected for analysis.
The research data were analyzed, with these key findings: (a) advertisements make a strong impact on consumers in various ways: they either end up buying the product or service themselves, or they recommend it to fellow consumers with immense confidence, proving themselves loyal to the product; (b) both the middle and higher income groups of middle-sized towns have a defined percentage range of household income that they are willing to use as disposable income, but this percentage varies across age groups; (c) products and services are broadly categorized, according to consumers' own lifestyles and living requirements, into impulse purchases and non-impulse, thought-over purchases; impulse purchases are governed more by fancy, while non-impulse purchases are deliberate and driven by need for a particular product or service. Beyond this, considerable information and insight was gained about consumers, their buying behaviour and patterns, their thought processes, and the reasons for the ways in which they buy products and services. The study also helps, to an extent, in understanding why some products or services do better than others in a middle-sized town. The paper discusses at length the various aspects of the influence or impact that advertisement makes on consumers and their buying behaviour and patterns, supported by both qualitative and quantitative data.
1,2 Institute of Management Studies, Ghaziabad
3 SSR Institute of Management & Research, Silvassa
The most favorable alternative for any company is to satisfy consumer demand, which has always been a key consideration in any product demand-and-supply system. The related decisions are usually based on a single dimension of information: either the retailer's data or the consumer's purchase data. Bayesian decision methodology provides an alternative framework for handling the problem of over-stocking and under-stocking, and is used to determine decision strategies for selecting the best alternative for efficient supply-chain management. Designing a purchase-incidence model requires purchase data obtained either from retailers or from consumer surveys. In this paper, we propose a Bayesian Criteria Purchase-Incidence Model (BCPIM). The proposed model can help in designing effective and efficient policy using the information available from both sources. Companies can further use this analysis as a strategic decision-making tool to develop efficient supply-chain management. Finally, an example illustrates the procedural implementation of the proposed model.
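The BCPIM itself is not specified in the abstract, but the underlying Bayesian decision logic (fuse a retailer's prior beliefs with consumer-survey evidence, then choose the stock level with the lowest expected over- and under-stocking cost) can be sketched as follows; all numbers are hypothetical.

```python
# Hedged sketch of a Bayesian stocking decision. The real BCPIM is
# richer; this only shows the prior-times-likelihood update and the
# expected-cost minimization over candidate stock levels.
import numpy as np

demand_levels = np.array([10, 20, 30, 40])
prior = np.array([0.1, 0.4, 0.4, 0.1])        # retailer's prior belief

# Observed frequencies from a consumer survey (hypothetical counts).
survey_counts = np.array([2, 10, 6, 2])
likelihood = survey_counts / survey_counts.sum()

posterior = prior * likelihood
posterior /= posterior.sum()

over_cost, under_cost = 1.0, 3.0              # per-unit costs

def expected_cost(stock):
    over = np.maximum(stock - demand_levels, 0)
    under = np.maximum(demand_levels - stock, 0)
    return float(posterior @ (over_cost * over + under_cost * under))

best = min(demand_levels, key=expected_cost)  # best alternative selection
print(best, expected_cost(best))
```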
Many crucial business decisions are taken without a systematic study of the existing scenario and without applying suitable data analytics. The availability of large volumes of customer data, made possible by new information technology tools, has created both opportunities and challenges for businesses seeking to leverage the data for competitive advantage. Many organizations have realized that the knowledge in these huge databases is key to supporting organizational decisions; in particular, knowledge about customers is critical for the marketing function. But much of this useful knowledge remains hidden and untapped.
Key words: Market Basket Analysis, Temporal Associative Classification, CBA, CMAR,
CPAR.
Derick Jose
MindTree Consulting, Bangalore
Ganesan Kannabiran
National Institute of Technology, Tiruchirappalli
Shriharsha Imrapur
MindTree Consulting, Bangalore
Consumer Packaged Goods (CPG) and retail organizations view the new product development (NPD) process as a critical component for delivering breakthroughs in the market. With every launch, organizations spend millions of dollars researching new products, test-marketing them and releasing them to the broader market. This paper introduces rigorous analytical techniques and processes, along with new sources of data, to help optimize the critical decisions taken when a new product is launched. We use an analytical framework to optimize six crucial NPD decisions, consisting of a prefabricated industry-specific data model and a 10-step process using advanced statistical techniques. We then implement the framework using a commercial tool. Preliminary outputs show that significant insights can be leveraged by the product development and marketing groups to align decisions on product features, packaging and messaging with their target market.
The availability of a huge amount of data does not in itself mean a wealth of information. Filtering the data using various mining techniques yields the essence of valuable, knowledgeable information. Data mining is the exploration and analysis of large data sets in order to discover meaningful patterns and rules, and almost every business process today involves some form of it.
Most of today's structured business data is stored in relational databases, yet existing data mining algorithms (including those for classification, clustering, association analysis and outlier detection) work on single tables or single files. Unfortunately, information in the real world can hardly be represented by such independent tables. One of the main tasks in data mining is supervised classification, whose goal is to induce a predictive model from a set of training data. Multi-relational classification is an important research area because of the popularity of relational databases; it can be widely applied in many disciplines, such as financial decision-making, medical research, and geographical applications.
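One common baseline for multi-relational classification is propositionalization: aggregating a one-to-many relation into per-entity features so that single-table algorithms apply. A minimal sketch with hypothetical customer and loan tables:

```python
# Propositionalization sketch: flatten two related tables (customers
# and their loans) into one feature table that single-table miners can
# consume. All table contents are hypothetical.
customers = {1: {"age": 34}, 2: {"age": 51}, 3: {"age": 27}}
loans = [
    {"cust": 1, "amount": 5000, "defaulted": False},
    {"cust": 1, "amount": 2000, "defaulted": False},
    {"cust": 2, "amount": 9000, "defaulted": True},
    {"cust": 3, "amount": 1000, "defaulted": False},
]

# Aggregate the one-to-many loan relation into per-customer features.
flat = []
for cid, info in customers.items():
    cust_loans = [l for l in loans if l["cust"] == cid]
    flat.append({
        "age": info["age"],
        "n_loans": len(cust_loans),
        "total_amount": sum(l["amount"] for l in cust_loans),
        "any_default": any(l["defaulted"] for l in cust_loans),
    })
print(flat)   # ready for any single-table classifier
```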
1 Department of Mathematics, School of Computing Science, VIT University, Vellore, Tamil Nadu. Email: tripathybk@rediffmail.com
2 Khallikote College, Berhampur, Orissa. Email: hodmca@sify.com
3 Department of Mathematics, Simanta Mahavidyalaya, Jharpokharia, Orissa. Email: debadutta.mohanty@rediffmail.com
Several approaches have been introduced to deal with impreciseness in data. The concept of fuzzy sets put forth by Zadeh (1965) is one of the earliest among them; another major, and perhaps better, approach is the concept of rough sets due to Pawlak (1982). Classification of universes is the core concept in defining basic rough sets. Approximations of classifications are of great interest because, in learning from examples, rules are derived from classifications generated by single decisions (Busse 1988; Tripathy et al. 2009), and these rules can be used in multivalued logic. Busse (1988) established four propositions characterizing properties of approximations of classifications; these results are instrumental in defining types of classifications, which are used to generate rules from information systems. In this article, we extend these propositions to obtain necessary-and-sufficient results, from which several further results are derived. We provide interpretations of each result and illustrate them through examples, to determine the kind of knowledge one can infer from information systems satisfying the conditions of these propositions.
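The basic rough-set machinery behind such classifications, lower and upper approximations over equivalence classes, can be shown in a few lines; the universe and partition below are a toy example:

```python
# Lower/upper approximations of a rough set: the building blocks of
# the approximations of classifications discussed above.
U = {1, 2, 3, 4, 5, 6}
# Equivalence classes induced by the indiscernibility relation.
classes = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}                       # target concept

# Lower approximation: classes entirely contained in X (certain members).
lower = set().union(*(c for c in classes if c <= X))
# Upper approximation: classes overlapping X (possible members).
upper = set().union(*(c for c in classes if c & X))
boundary = upper - lower            # region of uncertainty
print(lower, upper, boundary)
```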
The circular normal distribution (also known as the von Mises distribution) is the most widely used probability model for circular data. The maximum likelihood estimators (m.l.e.) of its location parameter (µ) and concentration parameter (κ) are known not to be SB-robust at F = {CN(µ,κ) : κ > 0}. In this paper, we define a natural measure of dispersion (S) and show that the directional mean is not SB-robust at F with respect to S. We then show that the directional mean is SB-robust with respect to S for the following families: (1) mixtures of normal and circular normal distributions, (2) mixtures of two circular normal distributions, and (3) mixtures of wrapped normal and circular normal distributions. Subsequently, we define a γ-circular trimmed mean with trimming proportion γ and show that it is an SB-robust estimator for µ at F with respect to S. We also study the SB-robustness of the m.l.e. of the concentration parameter of the circular normal distribution and show that it is not SB-robust at F with respect to S. Finally, we define the γ-trimmed dispersion measure (Sγ) and a γ-trimmed estimator for the concentration parameter, and show that this new estimator is SB-robust at F with respect to Sγ.
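The estimators under study can be sketched numerically. Note that the trimming scheme below (drop the points farthest from the mean direction) is one simple variant for illustration, not necessarily the paper's exact γ-circular trimmed mean:

```python
# Directional (circular) mean and a simple gamma-trimmed variant.
import numpy as np

def directional_mean(theta):
    """Mean direction: atan2 of average sine and cosine components."""
    return np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())

def trimmed_directional_mean(theta, gamma=0.1):
    """Drop the gamma fraction of points farthest from the mean
    direction, then recompute the mean (one simple trimming scheme)."""
    mu = directional_mean(theta)
    # Angular distance to the current mean direction, wrapped to (-pi, pi].
    dist = np.abs(np.angle(np.exp(1j * (theta - mu))))
    keep = np.argsort(dist)[: int(np.ceil((1 - gamma) * len(theta)))]
    return directional_mean(theta[keep])

angles = np.array([0.1, -0.1, 0.05, -0.05, 3.0])   # one outlier at 3.0
print(directional_mean(angles), trimmed_directional_mean(angles, 0.2))
```

With the outlier trimmed, the estimate returns to the centre of the bulk of the data, which is the robustness behaviour the abstract analyses formally.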
Arnab K. Laha
Indian Institute of Management, Ahmedabad
Somak Dutta
Indian Statistical Institute, Kolkata
Rank data arise frequently in many areas of management, such as marketing, finance, organizational behaviour and psychology. Such data arise when a group of randomly chosen respondents is asked to rank a set of k items in order of preference; the resulting data are a set of permutations of {1, …, k}, one permutation per respondent. Analysis of rank data is difficult because permutation groups lack the rich structure of the real line or real space, and the dimension of the data grows very rapidly with the number of items to be ranked. In this paper we propose a model that treats the observed rank as a random permutation of the true rank, where each permutation has some specific probability of appearing. We consider the general case in which covariates are present and the true rank is a function of the covariate values. A Bayesian approach is taken to estimate the model parameters, with the Gibbs sampler and Population Monte Carlo methods used to sample from the posteriors. Two real-life data sets are analyzed using the model to indicate its usefulness.
Keywords: Gibbs Sampling, Permutation Group, Population Monte Carlo, Ranking Data
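Models on permutation groups typically rely on a notion of distance between rankings; Kendall's tau distance is a standard choice (the abstract does not specify the authors' construction, so this is illustrative):

```python
# Kendall tau distance between two rankings: the number of item pairs
# on which the rankings disagree. A standard way to put structure on
# the permutation group when modelling rank data.
from itertools import combinations

def kendall_distance(r1, r2):
    pos1 = {item: i for i, item in enumerate(r1)}
    pos2 = {item: i for i, item in enumerate(r2)}
    # A pair is discordant if the two rankings order it oppositely.
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
    )

print(kendall_distance([1, 2, 3, 4], [1, 2, 3, 4]))   # 0, identical
print(kendall_distance([1, 2, 3, 4], [4, 3, 2, 1]))   # 6, fully reversed
```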
Email: siuli@math.iitb.ac.in
Based on the concepts put forth by researchers (Conger & Kanungo, 1988; Thomas & Velthouse, 1990; Spreitzer, 1995, 1996), we studied empowerment in the context of women primary school teachers in India, and the relationship of empowerment with two important work outcomes: job involvement and innovative behavior. While an individual's empowerment is based on self-report, the data for the work outcomes were collected from self, superiors and colleagues. A total of 113 teachers, 8 superiors and 303 colleagues from three schools in Gujarat, India participated in the study.
All latent constructs under study were tested for both convergent and discriminant validity. We performed both single-rater and multi-rater confirmatory factor analysis, as appropriate for multi-rater research. Before aggregating the colleague data, rwg, average deviation, and intra-class correlation were calculated. Structural equation modeling was used to test model fit. Results show that empowerment leads to job involvement and innovative behavior.
The study supports earlier rating researchers' view that different types of raters have different perspectives on the same dimension, which influences their ratings (Landy & Farr, 1980; Harris & Schaubroeck, 1988). Overall, the research indicates the importance of psychological empowerment in the workplace.
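The rwg agreement index computed before aggregating colleague ratings can be sketched for a single item; the ratings below are hypothetical, and a uniform null distribution over a 5-point scale is assumed:

```python
# Single-item r_wg within-group agreement index (James, Demaree & Wolf):
# 1 minus the ratio of observed rating variance to the variance expected
# under a uniform (no-agreement) null distribution.
import numpy as np

def rwg(ratings, n_options=5):
    expected_var = (n_options ** 2 - 1) / 12   # uniform over 1..n_options
    return 1 - np.var(ratings, ddof=1) / expected_var

colleague_ratings = [4, 4, 5, 4, 3]            # one teacher, five raters
print(round(rwg(colleague_ratings), 3))
```

Values near 1 indicate strong within-group agreement, supporting aggregation of the colleague ratings to the teacher level.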
Mandeep Dhillon
ICFAI National College, Chandigarh
Email: dhillonmandeep.inc@gmail.com
The purpose of this paper was to outline the basic employability skills and/or competencies required by employees in business organizations to perform and compete. For the purposes of the present study, employability skills were defined as "a set of attributes, skills and knowledge that all labor market participants should possess to ensure that they have the capability of being effective in the workplace, to the benefit of themselves, their employer and the wider economy".
This study found that employability skills can be categorized as basic academic skills, technical skills and generic skills, with generic skills further grouped into operational skills, behavioral skills and other soft skills. Communication, analytical and leadership skills were found to be the most important generic skills.
When disasters strike, government agencies, military and paramilitary forces, and relief organizations respond by delivering aid to those in need. The distribution chain needs to be both fast and agile in responding to the sudden onset of disasters. Given the stakes and the size of the relief industry, the study of relief chains is an important domain of supply chain management.
Selvanayaki M
PSGR Krishnammal College for Women
Coimbatore
E-mail: selvanayaki79@gmail.com
Vijaya MS
GR Govindarajulu School of Applied Computer Technology
PSGR Krishnammal College for Women
Coimbatore
E-mail: msvijaya@grgsat.com
The dataset is trained using SVM with linear, polynomial and RBF kernels, with different settings of the parameters d, gamma and the regularization parameter C. The performance of each trained model is evaluated for predictive accuracy using 10-fold cross-validation, where prediction accuracy is measured as the ratio of the number of correctly classified instances in the test dataset to the total number of instances. The predictive accuracy of the SVM with the radial basis function kernel is found to be higher than that of the other two models.
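The evaluation procedure described can be reproduced in outline with scikit-learn; the bundled breast-cancer dataset stands in for the authors' data, and the parameter settings shown are minimal examples:

```python
# Kernel comparison with 10-fold cross-validation, mirroring the
# evaluation described above on a stand-in toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
for kernel, params in [("linear", {}),
                       ("poly", {"degree": 3}),      # d parameter
                       ("rbf", {"gamma": "scale"})]:
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel=kernel, C=1.0, **params))
    scores = cross_val_score(clf, X, y, cv=10)       # 10-fold accuracy
    print(kernel, scores.mean())
```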
Gomase V.S.*, Yash Parekh, Subin Koshy, Siddhesh Lakhan and Archana Khade
* Department of Bioinformatics, Padmashree Dr. D.Y. Patil University, Navi Mumbai
Email: virusgene1@yahoo.co.in
Machine learning techniques play a major role in immunoinformatics for DNA-binding-domain analysis. Functional analysis of the ability of DNA-binding-domain protein antigen peptides to bind major histocompatibility complex (MHC) class molecules is important in vaccine development, although the variable length of each binding peptide complicates the prediction. Such predictions can be used to select epitopes for rational vaccine design and to increase understanding of the immune system's role in infectious diseases. Antigenic epitopes of the DNA-binding-domain protein from human papillomavirus-31 are important determinants for protecting many hosts from viral infection. This study shows an active part in host immune reactions and the involvement of MHC class I and class II in the response to almost all antigens. We used PSSM and SVM algorithms for antigen design, representing predicted binders as MHCII-IAb, MHCII-IAd, MHCII-IAg7 and MHCII-RT1.B nonamers from the viral DNA-binding-domain crystal structure. These peptide nonamers come from a set of aligned peptides known to bind a given MHC molecule, which serves as the predictor of MHC-peptide binding. The analysis reveals potential drug targets by identifying active sites against diseases.
The world after the WTO's establishment in 1995 has opened new dimensions of trade and commerce, both national and international, further facilitated by the advent of technology. This research paper attempts to understand the legal implications of the different types of contracts formed over the internet and the statutory provisions pertaining to them. The main contentions for e-contracts are the legal sanctity of e-commerce per se and electronic governance as to writing and signature for legal recognition. The first step in this direction was the enactment of the Information Technology Act, 2000, which provided for equal legal treatment of users of electronic and paper-based communication, followed by amendments to the Indian Penal Code, 1860, the Indian Evidence Act, 1872, the Reserve Bank of India Act, 1934 and the Bankers' Books Evidence Act, 1891. The three basic forms of e-contracts (click-wrap and shrink-wrap agreements and Electronic Data Interchange) and their legal sanctity are analysed through content analysis of the Information Technology Act, 2000 and the Indian Contract Act, 1872, supported by relevant judicial pronouncements from India and from developed countries.