
INTERNATIONAL JOURNAL OF SCIENTIFIC & TECHNOLOGY RESEARCH VOLUME 8, ISSUE 09, SEPTEMBER 2019 ISSN 2277-8616

Big Data Analytics Implementation In Banking Industry – Case Study Cross Selling Activity In Indonesia's Commercial Bank
Raden Ali Rakhman, Rosalina Yani Widiastuti, Nilo Legowo, Emil Robert Kaburuan

Abstract: In the 21st century the big data revolution is underway and has found its place within the banking industry. A bank can leverage big data analytics to gain deeper insight into its customers, channels, and the entire market. By integrating predictive analytics with automatic decision making, a bank can better understand the preferences of its customers, identify customers with high spending potential, promote the right products to the right customers (cross selling), improve customer experience, and drive revenue. One of Indonesia's commercial banks is having an issue with the cross selling of its loan products and seeks a big data analytics solution. This paper aims to design a big data analytics application architecture, together with suitable business rules and a model, for cross selling analysis in the bank. By leveraging Cloudera Hadoop, Aster Analytics as the big data analytics engine, Teradata RDBMS as the target storage for analytics results, Tableau for data visualization, and Talend Data Integrator as the ETL engine, we can perform cross selling analytics for several of the bank's loan products with promising results. We also design the business rules and algorithms used to perform the analytics: a propensity model using Random Forest, and special tagging using SAX with bank-specific thresholds and additional filters. The Random Forest algorithm shows a good result as measured by ROC / AUC.

Index Terms: Big data analytics; Cross Selling; Loan Product; Bank; Indonesia
——————————  ——————————

1. INTRODUCTION

In the 21st century the big data revolution is underway and has found its place within the banking industry, considering the valuable data banks have been storing for decades [1]. Big data initially took hold as a solution to the storage and processing of high-volume, high-velocity data with many veracity issues; according to [2], the cost of implementing IT infrastructure systems for data storage is high, and sometimes the party that stores the data is unable to extract its value, so it is desirable to invest in a solution with suitable IT infrastructure. Nowadays big data has extended its capability beyond data storage: big data analytics is a pioneer in the banking industry and is beginning to have an impact on improving business processes and workforce effectiveness, reducing enterprise costs, and attracting new customers [3]. A bank can leverage big data analytics to gain deeper insight into its customers, channels, and the entire market. By integrating predictive analytics with automatic decision making, a bank can better understand the preferences of its customers, identify customers with high spending potential, promote the right products to the right customers (cross selling), improve customer experience, and drive revenue [4]. Similarly, a previous literature review [5] found that banks analyze big data to increase revenue, boost client retention, and serve clients better by delivering personalized marketing. Big data analytics can also be used for regulatory compliance management, reputational risk management, financial crime management, and much more [1].

In this paper we explore big data analytics and its possibilities and applications to help banks expand their business, especially for cross-sell insight based on transactional data, customer profiles, and behavior, so a bank can deliver personalized marketing to its customers. We believe Indonesia is a perfect country for this case study: the implementation of big data in Indonesia is still not very popular [6], and as the country with the fourth-largest population in the world [7] Indonesia is a big market for consumer loans. In line with this, cross selling is one of the pain points in the case study bank's business that requires an improved solution.

This paper aims to create a design of a big data analytics application architecture, with suitable business rules and a model for cross selling analysis, using an Indonesian commercial bank as a case study. The design will be based on the bank's current application architecture and source data. The research focuses on a main business line in banking, the loan products provided by the bank in Indonesia, including personal loans, housing loans, mortgages, auto loans, working capital loans, and power cash (credit card), taking into consideration the current business process of cross selling; we will also test the effectiveness of the design on training data.

The bank in this case study is one of the biggest commercial banks in Indonesia. Its identity has been concealed to prevent confidential information from leaking; hence we refer to it as XYZ Bank. It has been in operation for the past 20 years, created during the Indonesian financial crisis in 1998, and operates until today. The bank has a problem with its current cross selling operation and alignment, causing inefficient marketing and low customer acquisition, and relationship managers cannot follow up with the right customers at the right time. Hence we were given a research opportunity to create a design of an application architecture leveraging the bank's existing platforms, and to create suitable business rules and a model for the cross selling solution. We propose the following proposition for this research: a big data analytics application architecture using specific platforms (leveraging the platforms the case study bank already possesses), together with business rules and models, can be used for loan product cross selling analytics, measured by the area under the curve (AUC) and receiver operating characteristic (ROC) curves, which previous research also used as the primary measure of model prediction [8]. This study can be used by other banks to design their own big data analytics solutions, and by academics to explore and research other business rules, models, or application designs for enterprise big data solutions.

————————————————
• Raden Ali Rakhman, Rosalina Yani Widiastuti, Nilo Legowo, Emil Robert Kaburuan
• Information Systems Management Department in BINUS Graduate Program - Master of Information Systems Management, Bina Nusantara University, Indonesia.
• E-mail: raden.rakhman@binus.ac.id, yani.widiastuti@binus.ac.id, nlegowo@binus.edu, emil.kaburuan@binus.edu

2 LITERATURE REVIEW

2.1 Big Data Analytics in General
In this paper we use big data as a solution to the problem. The general definition of big data refers to "the 3Vs": Volume for large amounts of data, Velocity for the speed at which data is created, and Variety for the diverse, often unstructured data that develops [9]. As technology has grown, big data has led to data-based decision making, also known as evidence-based decision making [10]. In fact, the more an organization characterizes itself as data-driven, the better the organization performs on financial and key operational goals [9]. Furthermore, big data has expanded toward data analytics: the process of using analysis algorithms or models running on powerful supporting platforms to uncover the potential concealed in big data, such as hidden patterns or unknown correlations, with minimum processing time [11]. As mentioned, to leverage the bank's current platforms we will use the combination of Hadoop, Teradata RDBMS, Tableau dashboard visualization, and Talend Data Integrator.

2.2 Hadoop
Hadoop is an open-source framework for distributed computing over large datasets across clusters of computers using simple programming models [12]. The bank currently leverages Hadoop, in particular the Hadoop Distributed File System (HDFS), for big data storage and query. According to [13], big data tends to be diverse in terms of data types, and a data-type-agnostic file system like HDFS is a good fit for that diversity. Many of the complex data types associated with big data also originate in files, and a traditional database management system (DBMS) struggles with big data because of time-consuming query and integration processes. Bank XYZ has around 17 million customers, which requires a lot of time just to store and process the data.

2.3 Teradata RDBMS
Teradata RDBMS was the first RDBMS that was linearly scalable and supported parallel processing. Teradata is designed mainly for data warehousing and reporting use cases. Bank XYZ purchased Teradata RDBMS for its data warehouse in 2016 as a replacement for its Oracle data warehouse. The bank has gradually moved data from the Oracle data warehouse to Teradata as its new data warehouse, and in parallel uses Teradata to store analytics results for visualization via Tableau.

2.4 Aster Analytics
Aster Analytics is a shared-nothing, massively parallel processing database designed for online analytical processing (OLAP), data warehousing, and big data tasks. It manages a cluster of commodity servers that can be scaled up to hundreds of nodes and analyze petabytes of data, and Aster performs 25% to 552% better than Pig and Hive [14] [15]. Aster Analytics can run on a Hadoop execution engine and comes with analysis functions accessible using the SQL and R languages. Aster-on-Hadoop is designed to work well with CDH and Teradata RDBMS; it has connectors to the Hadoop and Teradata platforms that ease data transfer in and out.

2.5 Talend Data Integration and Tableau Visualization
Talend Data Integration is a platform orchestration tool that lets users define data movement and the required transformation processes across many platforms; it is used to integrate data between the current data warehouse and Hadoop and between Hadoop layers. The bank also uses Tableau for the visualization of analytics results.

2.6 Analytics Model
This research also designs the business rules of the analytics model, for example a propensity model, which can be described as a statistical scorecard used to predict customer or prospect behavior [16], here applied to cross selling activity. As defined by previous research, cross-selling pertains to efforts to increase the number of products or services that a customer uses within a firm. Cross-selling products and services to current customers has a lower associated cost than acquiring new customers, because the firm already has a relationship with the customer. A proper implementation of cross-selling can be achieved if there is an information infrastructure that allows managers to offer customers products and services that tap into their needs [17]. On top of the propensity model we also apply special tagging to improve classification of the likelihood to take a loan product. This research uses a Random Forest model as the propensity model, SAX for special tagging, and additional filters defined by the case study bank from a regulatory and internal risk perspective.

2.7 Random Forest
Random forest (RF) is a non-parametric statistical method proposed by [18], and it is a suitable method for big data analytics. The basic constituents of random forests are tree-structured predictors, and each tree is constructed using an injection of randomness. Unlike traditional standard trees, in which each node is split using the best split among all variables, a random forest splits each node using the best among a subset of predictors randomly chosen at that node. Relative to decision trees, AdaBoost, neural networks, SVM, etc., RF has higher prediction accuracy, better noise tolerance, and is robust against overfitting [18] [19]. Research conducted by [20] determined that the random forest algorithm can be used in large-scale imbalanced classification of business data, and found that it is more suitable for product recommendation and potential customer analysis than traditional strong classifiers like SVM and logistic regression. There are a number of factors to consider when choosing a machine learning algorithm, including the size of the training dataset, the dimensionality of the feature space, linearity, feature dependency, and the required processing power. Random forest is able to discover complex dependencies in non-linear machine learning problems, is robust, is not affected even when features are unscaled or highly correlated, and solves binary classification problems through decision trees; because it is a bagging algorithm it can also handle high-dimensional data. In this paper we do not detail the algorithm itself, and use it based on its success in previous research and studies.
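For illustration, the node-level randomness described above maps directly onto off-the-shelf implementations. The sketch below is not the paper's model: the data is synthetic and the ten features are generic stand-ins for customer variables, but it shows how a random-forest propensity score can be produced with scikit-learn.

```python
# Sketch of a random-forest propensity scorer (scikit-learn).
# Synthetic data: the ten columns stand in for customer features
# such as balances, tenure, and transaction frequencies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1).astype(int)

# max_features="sqrt": each node is split on the best of a random
# subset of predictors -- the injection of randomness noted above.
clf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                             random_state=0).fit(X, y)

propensity = clf.predict_proba(X[:5])[:, 1]  # P(customer takes the product)
print(len(clf.estimators_), propensity)
```

Each of the 200 trees is grown on a bootstrap sample, and the forest's class probability is the averaged vote, which serves directly as the propensity score.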

2.8 SAX (Symbolic Aggregate Approximation)


Symbolic Aggregate Approximation (SAX) is a function that transforms original time series data into symbolic strings, which are more suitable for many additional types of manipulation because of their smaller size and the relative ease with which patterns can be identified and compared. A time series is a collection of data observations made sequentially over time. SAX splits the data into several intervals and assigns each interval an alphabetical symbol; the output of the function is a string of letters representing a pattern occurring over time. The symbols created by SAX correspond to time series features with equal probability, allowing them to be compared and used for further manipulation with reliable accuracy. SAX has many advantages over other symbolic approaches, such as its dimensionality reduction power and its lower bounding of the distance [21].
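A minimal illustration of the transformation described above, assuming the standard SAX recipe (z-normalisation, Piecewise Aggregate Approximation, and equiprobable Gaussian breakpoints); the example series is invented:

```python
# Minimal SAX sketch: z-normalise a series, reduce it with Piecewise
# Aggregate Approximation (PAA), then map each segment to a letter
# using equiprobable N(0,1) breakpoints.
from statistics import NormalDist, mean, pstdev

def sax(series, n_segments, alphabet_size):
    mu, sigma = mean(series), pstdev(series) or 1.0
    z = [(x - mu) / sigma for x in series]           # z-normalisation
    seg = len(z) / n_segments
    paa = [mean(z[int(i * seg):int((i + 1) * seg)])  # PAA: mean per segment
           for i in range(n_segments)]
    nd = NormalDist()
    cuts = [nd.inv_cdf(k / alphabet_size) for k in range(1, alphabet_size)]
    word = ""
    for v in paa:
        idx = sum(v > c for c in cuts)               # which band v falls in
        word += chr(ord("a") + idx)
    return word

# A rising monthly-transaction series maps to a low-to-high letter pattern.
print(sax([1, 2, 3, 4, 10, 12, 14, 16], n_segments=4, alphabet_size=4))
```

Because the breakpoints are the quantiles of a standard normal distribution, each letter is (approximately) equally probable, which is the property cited above.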

3 METHODOLOGY

3.1 Research Method
To reach the purposes of this research we used a quantitative approach, more specifically a multiple-case content analysis, to gain an understanding of big data analytics design and its benefits. For our case study we also interviewed and analyzed the current system and the current business flow of our case study subject, Bank XYZ, and took a sample of customer profile and transactional data provided by the bank for testing the proposed design, similar to what [1] did in previous research.

Fig. 1. Research Methodology

The case collection and the case study files for analysis are described in the following subsections.

1. Case Collection
Our cases are divided into two types: the first takes the point of view of the application and recommended platform for a big data analytics design, and the second takes the point of view of the model or business rules used for the analytics. The collected cases serve as references for the platform proposed for the case study bank, and as recommendations for the model that will be used to perform the analytics. The collected cases are as follows:

TABLE 1
CASE COLLECTION FOR APPLICATION PLATFORM

Title | Big Data Application | Big Data Platform | Reference
Big data analytics: Understanding its capabilities and potential benefits for healthcare organizations | Healthcare organization | Hadoop | (Wang, Kung, & Byrd, 2016) [3]
Big Data Analysis: Recommendation System with Hadoop Framework | Movie Rating | Hadoop | (Verma, Patel, & Patel, 2015) [22]
Big Data Weather Analytics Using Hadoop | Weather | Hadoop | (Dagade, Lagali, Avadhani, & Kalekar, 2015) [23]
Network Equipment Failure Prediction with Big Data Analytics | IT Monitoring | Hadoop | (Shuan, Fei, King, Xiaoning, & Mein, 2016) [24]
Urban Planning and Building Smart Cities based on the Internet of Things using Big Data Analytics | IoT Smart City | Hadoop | (Rathore, Ahmad, Paul, & Rho, 2016) [25]
Analyzing Relationships in Terrorism Big Data Using Hadoop and Statistics | Terrorism | Hadoop | (Strang & Sun, 2017) [26]
Who Renews? Who Leaves? Identifying Customer Churn in a Telecom Company Using Big Data Techniques | Telecom Company | Aster Analytics | (Asamoah, Sharda, Kalgotra, & Ott, 2016) [27]
SQL-SA for Big Data Discovery Polymorphic and Parallelizable SQL User Defined Scalar and Aggregate Infrastructure in Teradata Aster 6.20 | Big Data Discovery | Aster Analytics | (Tang, 2016) [14]


TABLE 2
CASE COLLECTION FOR ANALYTICS ALGORITHM

Title | Big Data Application | Algorithm | Reference
Overview of random forest methodology and practical guidance with emphasis on computational biology and bioinformatics | Bioinformatics and Computational Biology | Random Forest (RF) algorithm | (Boulesteix, Janitza, Kruppa, & König, 2012) [15]
A Parallel Random Forest Algorithm for Big Data in a Spark Cloud Computing Environment | Spark Cloud Computing Environment | Random Forest algorithm | (Chen, et al., 2016) [28]
Random Forests of Very Fast Decision Trees on GPU for Mining Evolving Big Data Streams | Mining Evolving Big Data Streams | Random Forest algorithm | (Marron, Bifet, & Morales, 2014) [29]
Risk Adjustment of Patient Expenditures: A Big Data Analytics Approach | Healthcare applications | Random Forest algorithm | (Li, Bagheri, Goote, Hasan, & Hazard, 2013) [30]
An Ensemble Random Forest Algorithm for Insurance Big Data Analysis | Insurance Big Data Analysis | Random Forest algorithm | (Wu, Lin, Zhang, Wen, & Lin, 2017) [31]
Random Forest: A Classification and Regression Tool for Compound Classification and QSAR Modeling | Compound Classification and QSAR Modeling | Random Forest algorithm | (Svetnik, et al., 2003) [32]
Semantic Filtering of IoT Data using Symbolic Aggregate Approximation (SAX) | IoT | Symbolic Aggregate Approximation (SAX) | (Mahalakshmi & Kannan, 2016) [33]
Clustering of Electricity Consumption Behavior Dynamics toward Big Data Applications | Electricity Consumption | Symbolic Aggregate Approximation (SAX) | (Wang Y., et al., 2016) [34]

2. Case Study Interview

An interview was conducted with the project manager of the big data project in the case study bank; the purpose of the interview was to identify the current pain points in the bank's cross selling activity. The current cross-sell activity is described by the diagram below:

Fig. 2. Current Cross Sell Activity

The risk division is in charge of generating cross-sell leads every time a business unit requests potential customers to be included in a campaign or sales event. The risk division generates leads based on low-risk segments, product criteria, limit assignment, and scorecard filtering. All data is calculated manually by the risk division, so it takes a long time just to generate leads, which delays the business units' campaigns. The low intensity of communication between the business units and the data management division is also a pain point in the process, resulting in a minimal view of the business factors needed to produce the best leads based on business potential value. Based on the interview, there are several products from two business units to be tested in the analytics system. The first is the consumer loan unit, whose products include personal loans, small business and micro banking loans (CBC, BB, KUM), mortgages, auto loans, housing loans (KPR), and loans without collateral (KTA, KSM). This unit suffers from a take-up rate of only 1-5% of the leads and a limited choice of base customers, because the parameters cover only risk criteria rather than potential value; the unit aims to increase the take-up rate through better campaigns and by targeting specific customers for cross-sell loans. The second unit is the credit card business: Bank XYZ credit cards offer many features as additional benefits, such as power cash, which uses the available credit limit as a loan, power bill, which pays monthly utility bills directly from the credit limit, and insurance products. However, penetration is still low, and the challenge is to identify the right customer at the right time to make an offer; in this paper we focus on power cash (PWC). The bank believes that implementing big data analytics can improve its marketing campaigns and the cross selling of its loan products.

4. DISCUSSION AND RESULT

4.1 Application Architecture
By leveraging the bank's current platforms, Hadoop, Teradata RDBMS, Tableau dashboard visualization, and Talend Data Integrator, and combining them with the case collection, we designed an application architecture suitable to be implemented by Bank XYZ, as described in the figures below. Aster Analytics, as part of the Teradata unified data architecture, can be used by the bank and run on CDH, which is a huge advantage from a project cost point of view and also matches our case collection as a suitable solution for data analytics.

Fig. 3. Big Data Analytics Design


We designed CDH to be logically divided into three major areas, as shown in figure 3 above. The landing layer serves as a temporary or spool area where data lands before it is transformed and stored in the persistence layer. The persistence layer holds the "hot" data for as long as the defined retention period; "hot" data is data that needs to be accessed frequently, typically business-critical information that must be available quickly and is often used for quick decision making. Once data is rarely used it can be archived, and the archive layer is the place for that: it holds the "cold" data for as long as its retention period. "Cold" data is inactive data that is rarely used or accessed but typically must be retained for business or compliance purposes on a long-term basis, though not indefinitely. This is sometimes handled by a storage policy; as mentioned by [35], a typical usage of storage policies is to set the storage type of frequently accessed ("hot") files to SSD storage and that of less frequently accessed ("cold") files to ARCHIVE storage, optimizing overall disk access time by minimizing latency for the common case. Talend Data Integrator will be used to extract data from the source Oracle data warehouse into Hadoop, to transform data from the landing layer to the persistence layer in Hadoop, and to move data from the persistence layer to the archive layer for cold data. The figures below describe the data offloading process from the source systems to Hadoop. The Oracle DWH currently integrates all the data required for analytics from several source systems within the bank, such as core banking, ATM logs, e-channel transactions, the remittance system, the limit system, the loan system, EDC transactions, the merchant management system, and the biller system.

Fig. 4. Data Offloading



Next in the process is the interface from Hadoop to Aster Analytics. Aster Analytics is also divided into layers. First is the prepared layer, the area for data that has gone through the data preparation process and is ready to be consumed for analytics. Second is the analytics layer, where the analysis is performed; this area supports rapid exploration and evaluation of data with various analytics parameters. Last is the result layer, the final analytics result area, from which results are pushed to the Teradata database as permanent storage. Similar to the proposed design, previous research that used an Aster Analytics solution also identified three layers in the Aster architecture. According to [36], the first is the storage layer, consisting of multiple stores supported by a fault-tolerant distributed block store, which we call the result layer; the processing layer includes a planner, an executor, and multiple processing engine instances connected by a data movement fabric, which we call the analytics layer; and the function layer comprises pre-built analytics such as path and pattern analysis, statistical analysis, text analytics, clustering, data transformation, and data preparation, which we call the prepared layer. The interface from Hadoop to Aster Analytics uses Teradata QueryGrid, part of the Aster Analytics and Teradata RDBMS solution. It enables bi-directional data movement and pushdown processing on data where it resides, while minimizing overall data movement and duplication. QueryGrid allows transparent access to data stored in Hadoop, seamlessly orchestrates analytic processing of data between Aster and Hadoop, and leverages data available in Hadoop to enhance analytics; Teradata QueryGrid can also store results from Aster Analytics in the Teradata spool space, integrating the data with the Teradata database [37], ready for visualization in Tableau dashboards.

4.2 Business Rule
The proposed analytics model applies a propensity model (random forest), special tagging (SAX), and additional filters from the case study bank to the lead list. Based on the interview results, we propose a new business process to handle the cross selling of loan products by leveraging big data analytics, as described in the figure below.

Fig. 5. Business Rule Big Data Analytics

To create a good propensity model with correct predictions, predictive variables are required that reflect customer demography and behavior. After the interview, analysis of the bank's data, and some additional research, we came up with variables that should yield good predictive results. The variables differ between retail and credit card customers, because the bank considers, based on experience, that first-time takers and previous takers of power cash have a significant impact on the analytics result. As in previous research conducted by [4] to perform better customer analytics for the banking industry, more than two hundred attributes need to be generated from different sources, including personal information, account information, and transactions. The personal information includes age, gender, job type, and other demographic information. The account information includes the application date of the account, what type of business has been opened and the opening date, and other required application information. The frequency of transactions via each channel and the online banking website are major components of the transaction data. From the interview process we designed the required business rule parameters to be used for the analytics, based on business knowledge from the business units and on risk assessment criteria. The required parameters for consumer loans (personal loans, small business and micro banking loans (KUM, BB, CBC), mortgages, auto loans, loans without collateral (KTA, KSM), and housing loans (KPR)) and power cash (PWC) are as follows:


TABLE 3
REQUIRED PARAMETERS

Consumer Loan:
Parameter | Unit / Period | Description
Count_spending | One year | Frequency of spending
Avg_spending | One year | Average of spending
Max_spending | One year | Maximum spending
Sum_income | One year | Total amount of income
Count_income | One year | Frequency of income
Avg_income | One year | Average of income
Max_income | One year | Maximum income
Recent_spending | One year | Last time of spending
Cbal_depo | Amount | Active current term deposit balance
Origbal_depo | Amount | Original term deposit balance
Total_Loan | Number | Active current loans (all products)
Total_PastLoan | Number | Matured current loans (all products)
Occupation | String | Customer occupation
MartialStatus | String | Customer marital status
Education | String | Customer's last education
Dependent | Number | Customer's number of dependents

Power Cash:
Parameter | Unit / Period | Description
Total_NPLLoan | Number | NPL current loans (all products)
Total_PastLoan | Number | Matured current loans (all products)
Total_out_loan | Amount | Total outstanding loan amount
Count_cc | Number | Number of credit cards
CC_limit | Amount | Credit card limit
Powercash_take | Number | Number of power cash taken
Gap_cc_pwc | Days | Period between CC active and PWC active
Tenure_cc | Days | Number of days from first CC opening date until today
Tenure_pwc | Days | Number of days from first PWC opening date until today
Tenure_last_pwc | Days | Number of days from last active PWC date until today
gap_pwc | Days | Gap between the last two PWC
Slp_mon_txn_amt | One year | Trend slope for monthly transaction amount
Slp_mon_txn_cnt | One year | Trend slope for monthly transaction count
Slp_mon_creditlimit | One year | Trend slope for monthly credit limit utilization
Std_mon_txn_amt | One year | Standard deviation of monthly transaction amount
Std_mon_txn_cnt | One year | Standard deviation of monthly transaction count
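To make the table concrete, the sketch below derives a few of the parameters above for a single customer. Only the parameter names come from the paper; the transaction record layout is hypothetical, and a least-squares fit over zero-filled monthly totals is one plausible reading of "trend slope".

```python
# Sketch: deriving a few Table 3 parameters from one customer's
# spending transactions. Record layout (month_index, amount) is
# illustrative, not the bank's actual schema.
from statistics import mean, pstdev

txns = [(1, 120.0), (1, 80.0), (2, 150.0), (3, 90.0),
        (5, 200.0), (8, 210.0), (11, 260.0), (12, 300.0)]

amounts = [a for _, a in txns]
params = {
    "Count_spending": len(txns),    # frequency of spending
    "Avg_spending": mean(amounts),  # average of spending
    "Max_spending": max(amounts),   # maximum spending
}

# Monthly totals (zero-filled), then a least-squares trend slope.
monthly = [sum(a for m, a in txns if m == i) for i in range(1, 13)]
xs = list(range(1, 13))
xbar, ybar = mean(xs), mean(monthly)
slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, monthly))
         / sum((x - xbar) ** 2 for x in xs))
params["Slp_mon_txn_amt"] = slope             # monthly spending trend
params["Std_mon_txn_amt"] = pstdev(monthly)   # monthly volatility

print(params)
```

In practice the Aster/SQL functions would compute these per customer over the persistence layer; the point here is only what each engineered parameter measures.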

After defining the parameters, we continue by creating a target within a defined target window. For the power cash product, a target of 1 (positive) indicates a customer who accepted a power cash offer from telemarketing within the given window, and a target of 0 (negative) indicates a customer who rejected the power cash offer from the telemarketer. For consumer loans, a target of 1 (positive) indicates a customer with a CASA account who opened a loan account within the given target window, and a target of 0 (negative) indicates any CASA customer who did not open a loan account within it. The target windows themselves are defined based on insight from Bank XYZ gathered during the interview sessions and based on the bank's experience selling the products. The target windows are detailed as follows:

TABLE 3
REQUIRED PARAMETERS

Consumer Loan
Parameter        | Unit / Period  | Description
Age              | Year           | Customer age / company age
Tenure_Casa      | Days           | Number of days from first Current Account / Saving Account (CASA) opening date until today
Cbal_S           | Previous month | Saving account closing balance
Avgbal_S         | Previous month | Saving account average balance
Activeprd_S      | Number         | Active products for saving account
Maxbal_S         | Previous month | Saving account maximum balance
Nodr_S           | Previous month | Saving account debit frequency
Nocr_S           | Previous month | Saving account credit frequency
Amtdr_S          | Previous month | Saving account amount debit
Amtcr_S          | Previous month | Saving account amount credit
Cbal_C           | Previous month | Current account closing balance
Avgbal_C         | Previous month | Current account average balance
Activeprd_C      | Number         | Active products for current account
Maxbal_C         | Previous month | Current account maximum balance
Nodr_C           | Previous month | Current account debit frequency
Nocr_C           | Previous month | Current account credit frequency
Amtdr_C          | Previous month | Current account amount debit
Amtcr_C          | Previous month | Current account amount credit
Avgbal_yty       | One year       | Average year-to-year CASA balance
Avgbal_3m        | 3 months       | Average ending balance
Sum_spending     | One year       | Total amount of spending

Power Cash
Parameter        | Unit / Period  | Description
Age              | Year           | Customer age / company age
Occupation       | String         | Customer occupation
Gender           | String         | Customer gender
Industry         | String         | Customer employer / company industry
Sum_income       | One year       | Total amount of income
Activeprd_S      | Number         | Active products for saving account
Activeprd_C      | Number         | Active products for current account
Tenure_S         | Year           | Tenure of longest saving account
bal_A            | Current        | Savings balance (all accounts)
Nodr_A           | One year       | Debit transaction frequency
Nocr_A           | One year       | Credit transaction frequency
Amtdr_A          | One year       | Amount debit
Amtcr_A          | One year       | Amount credit
In_flow          | One year       | Number of senders who transferred money to the customer
Out_fow          | One year       | Number of recipients who received money transferred by the customer
Avg_income       | One year       | Average income
Avg_spending     | One year       | Average spending
Count_depo       | Number         | Active current term deposit accounts
Cbal_depo        | Amount         | Active current term deposit balance
Total_Loan       | Number         | Active current loans (all products)
Total_ClosedLoan | Number         | Closed loans – early payment (all products)
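Parameters such as Tenure_Casa or the previous-month debit/credit frequencies are simple aggregations over account and transaction records. As an illustrative sketch only (the toy ledger and column names below are assumptions, not Bank XYZ's actual schema or ETL), the derivation could look like this in pandas:

```python
# Hypothetical derivation of a few Table 3 parameters with pandas.
import pandas as pd

txns = pd.DataFrame({
    "cust_id":   [1, 1, 1, 2, 2],
    "txn_date":  pd.to_datetime(["2017-06-01", "2017-06-10", "2017-07-02",
                                 "2017-06-05", "2017-07-20"]),
    "amount":    [500.0, 200.0, 300.0, 1000.0, 250.0],
    "direction": ["CR", "DR", "CR", "CR", "DR"],   # credit / debit
})
accounts = pd.DataFrame({
    "cust_id":   [1, 2],
    "open_date": pd.to_datetime(["2015-01-15", "2016-11-30"]),
})

as_of = pd.Timestamp("2017-08-01")  # reference date of the monthly run

# Tenure_Casa: days from first CASA opening date until the reference date.
features = accounts.assign(tenure_casa=(as_of - accounts["open_date"]).dt.days)

# Previous-month debit/credit frequency (analogues of Nodr_S / Nocr_S).
prev_month = txns[txns["txn_date"].dt.to_period("M") == (as_of.to_period("M") - 1)]
freq = (prev_month.groupby(["cust_id", "direction"]).size()
        .unstack(fill_value=0)
        .rename(columns={"DR": "nodr", "CR": "nocr"}))

features = features.merge(freq.reset_index(), on="cust_id", how="left").fillna(0)
print(features)
```

The same pattern extends to the remaining balance and frequency parameters by changing the window (previous month, 3 months, one year) and the aggregation (mean, max, sum).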

This method is similar to previous research by [38], which measured performance as the area under the receiver operating characteristic (ROC) curve (AUC). ROC curves display the sensitivity versus the specificity; the higher the AUC, the better the model is able to distinguish between the products which experience attrition and those which do not. AUC thus varies between 0 and 1, with a value of 0.50 representing random classification performance and a value of 1 representing a perfect model. Sensitivity, or the true positive rate, measures the percentage of actual product attrition predicted as such and ranges from 0 to 1.

Next in the process is a train/test split to train a model: a certain percentage of all samples is extracted as the training set, with the ratio between the training set and the test set, Ntrain : Ntest, set to 7:3 [20]; adopting this, our research also uses a 7:3 ratio between Ntrain and Ntest. This is followed by a down-sampling step, which is required to increase model accuracy, as indicated by a previous study [39] showing that a Random Forest model with down-sampling has a better accuracy rate than other models. Finally, we evaluate the model by running it on test data and measuring its performance using ROC and AUC.
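The steps above can be sketched as follows; this is a hedged illustration of the 7:3 split, majority-class down-sampling, Random Forest propensity fit, and AUC evaluation on synthetic data, not the bank's actual pipeline or parameters:

```python
# Sketch: 7:3 train/test split, down-sampling, Random Forest, AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
# Imbalanced target: only a minority of customers take the loan product.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1.8).astype(int)

# 7:3 split between Ntrain and Ntest, stratified on the target.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Down-sample the majority (non-taker) class to the minority class size.
pos = np.flatnonzero(y_tr == 1)
neg = rng.choice(np.flatnonzero(y_tr == 0), size=len(pos), replace=False)
idx = np.concatenate([pos, neg])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr[idx], y_tr[idx])

# AUC on the held-out test set: 0.5 = random ranking, 1.0 = perfect model.
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")
```

In a real run, X would hold the Table 3 parameters and y the observed loan take-up flag.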


On top of the Random Forest algorithm, we design a special tagging / event as an additional consideration for cross selling. Special tagging uses SAX. SAX is a dimensionality reduction method for time series that performs as well as other methods such as the Discrete Wavelet Transform and the Discrete Fourier Transform while requiring less storage space [40]. This matches our research, which requires special tagging within a certain time period. After gathering Bank XYZ's requirements for SAX, the detailed tagging information that should be included in the leads is as follows.
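For intuition, a minimal SAX sketch (z-normalization, piecewise aggregate approximation, Gaussian breakpoints) might look like the code below. The series and alphabet size are invented, and the actual Bank XYZ thresholds and tagging rules are not shown:

```python
# Minimal SAX sketch: z-normalize, PAA-reduce, map to symbols.
import statistics

# Breakpoints splitting N(0,1) into equiprobable regions (alphabet size 4).
BREAKPOINTS = [-0.6745, 0.0, 0.6745]
ALPHABET = "abcd"

def sax(series, n_segments):
    mean = statistics.fmean(series)
    std = statistics.pstdev(series) or 1.0
    z = [(v - mean) / std for v in series]          # z-normalize
    seg_len = len(z) / n_segments
    word = []
    for i in range(n_segments):                     # PAA segments
        seg = z[int(i * seg_len): int((i + 1) * seg_len)]
        m = statistics.fmean(seg)
        word.append(ALPHABET[sum(m > b for b in BREAKPOINTS)])
    return "".join(word)

# Twelve monthly balances: flat, then a sharp rise (the kind of event a
# tagging rule might look for).
balances = [10, 11, 10, 12, 11, 10, 11, 10, 40, 45, 50, 55]
print(sax(balances, n_segments=4))  # starts 'a' (low), ends 'd' (high)
```

A tagging rule could then match SAX words against a bank-specific pattern (e.g. a jump from the lowest to the highest symbol) within the required time window.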

The end result of this process is the generated leads, as described in the table below. We have designed it with the minimum information required by Bank XYZ for its cross-selling activity. The valid period of a lead is one month after its generation date, because we mostly use monthly transactional data as parameters in the analytics. When the valid date expires, it is recommended to move lead execution to the next period's leads and refresh the customer list.
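The one-month validity rule can be captured with a small helper. The field names below are illustrative (not Bank XYZ's actual lead schema), and "one month" is approximated as 30 days for the sketch:

```python
# Sketch of the lead-validity rule: a lead expires one month after its
# generation date, after which it rolls into the next period's leads.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Lead:
    customer_id: str
    product: str
    score: float
    generated_on: date

    @property
    def valid_until(self) -> date:
        # "One month" approximated as 30 days for this sketch.
        return self.generated_on + timedelta(days=30)

    def is_valid(self, today: date) -> bool:
        return today <= self.valid_until

lead = Lead("CUST-001", "Power Cash", 0.87, date(2019, 9, 1))
print(lead.valid_until, lead.is_valid(date(2019, 9, 15)))  # 2019-10-01 True
```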

Finally, we added an additional filter based on Bank XYZ's internal regulations and risk criteria, as follows.

4.1 Result
After configuring all required parameters for the analytics, we run the analytics process from data offset to visualization as described in Figure 6.


Bank XYZ uses Tableau for visualization, with two dashboards for the result. The first is a detail layout of cross-sell leads: it shows customer information and can be filtered by year, month, product, class, score, and status. The status flag can be leveraged if we want to analyze the follow-up result of every lead in the future. The second is a ROC monitoring dashboard that uses a trend chart of the current ROC and AUC by product, as well as the TPR (True Positive Rate) compared to the model; we can leverage this for future research when the business unit executes the leads. The test data is 2017 data, because 2018 and 2019 data have not been disclosed by the bank and hence cannot be used for this research.

Fig. 7. Cross Sell Leads Details Visualization

The dashboard shows the customer information of the generated leads. It can be filtered by Year, Month, Product, Class, Score, and Status, and all charts and tables can be used as filters as well.

Fig. 8. Cross Sell ROC Monitoring


Fig. 6. Analytics Process Flow

The dashboard is used for monitoring Cross Sell ROC (Receiver Operating Characteristic) / accuracy. A trend chart shows the current ROC by product and can be filtered by date period, while a ROC-by-product-per-month chart shows the TPR versus the random line. All charts and tables can be used as filters as well; we show the best result, which is Power Cash (PWC product).

5. CONCLUSION
Big Data analytics is now being implemented across various businesses of the banking sector, because the volume of data operated upon by banks is growing at a tremendous rate, posing intriguing challenges for parallel and distributed computing platforms. These challenges range from building storage systems that can accommodate these large datasets, to collecting data from vastly distributed sources into those storage systems, to running a diverse set of computations on the data; hence big data analytics came as a solution. The technology is helping banks deliver better services to their customers, and our case study is the cross-selling activity for loan products in Bank XYZ, one of the largest commercial banks in Indonesia. This study designs the application architecture of Big Data analytics based on collective cases and interviews with Bank XYZ personnel. We also define the business rules or model for the analytics and test the design's effectiveness. The outcomes are as follows:
1. By leveraging Cloudera Hadoop and Aster Analytics as the big data analytics engine, Teradata RDBMS as the target storage for analytics results, Tableau for data visualization, and Talend Data Integrator as the ETL engine, we can perform cross-selling analytics for several bank loan products with a promising result, as measured by the number of leads generated using test data.
2. In line with the application, we design the business rules and the algorithm used for performing the analytics: a propensity model using Random Forest, plus special tagging using SAX with bank-specific thresholds and an additional filter. The Random Forest algorithm shows a good result as measured by ROC / AUC, and SAX and the additional filter also increase the number of leads generated by the system. In terms of products, Power Cash is more promising than the other loan products, because the parameters for Power Cash are more specific and more detailed, and the number of customers for this product is far smaller than for other loan products: Power Cash has only around one million customers out of the ten million retail banking customers of Bank XYZ.

For future study, hardware architecture design, technical configuration of big data analytics platforms such as Hadoop and Aster, and a performance analysis using real data would add value to the research. In addition, false positive / true positive analysis comparing the insight results with follow-up results could be used to enhance the analytics and obtain better insight, improving the cross-selling analytics result and the bank's business.

ACKNOWLEDGMENT
The authors wish to thank A, B, C. This work was supported in part by a grant from XYZ.

REFERENCES
[1] U. Srivastava and S. Gopalkrishnan, "Impact of big data analytics on banking sector: Learning for Indian banks," Procedia Comput. Sci., vol. 50, pp. 643–652, 2015.
[2] M. Vozábal, "Tools and Methods for Big Data Analysis," Master Thesis, Department of Computer Science and Engineering, 2016.
[3] Y. Wang, L. Kung, and T. A. Byrd, "Big data analytics: Understanding its capabilities and potential benefits for healthcare organizations," Technol. Forecast. Soc. Change, vol. 126, pp. 3–13, 2018.
[4] N. Sun, J. G. Morris, J. Xu, X. Zhu, and M. Xie, "iCARE: A framework for big data-based banking customer analytics," IBM J. Res. Dev., vol. 58, no. 5/6, pp. 1–4, 2014.
[5] I. Lee, "Big data: Dimensions, evolution, impacts, and challenges," Bus. Horiz., vol. 60, no. 3, pp. 293–303, 2017.
[6] E. R. E. Sirait, "Implementasi Teknologi Big Data di Lembaga Pemerintahan Indonesia," J. Penelit. Pos dan Inform., vol. 6, no. 2, pp. 113–136, 2016.
[7] http://worldpopulationreview.com, "Indonesia Population 2019 (Demographics, Maps, Graphs)."
[8] R. A. Taylor et al., "Prediction of in-hospital mortality in emergency department patients with sepsis: a local big

data-driven, machine learning approach," Acad. Emerg. Med., vol. 23, no. 3, pp. 269–278, 2016.
[9] A. McAfee, E. Brynjolfsson, T. H. Davenport, D. J. Patil, and D. Barton, "Big data: the management revolution," Harv. Bus. Rev., vol. 90, no. 10, pp. 60–68, 2012.
[10] A. Labrinidis and H. V. Jagadish, "Challenges and opportunities with big data," Proc. VLDB Endow., vol. 5, no. 12, pp. 2032–2033, 2012.
[11] H. Hu, Y. Wen, T.-S. Chua, and X. Li, "Toward scalable systems for big data analytics: A technology tutorial," IEEE Access, vol. 2, pp. 652–687, 2014.
[12] J. Anuradha et al., "A brief introduction on Big Data 5Vs characteristics and Hadoop technology," Procedia Comput. Sci., vol. 48, pp. 319–324, 2015.
[13] P. Russom, "Big Data Analytics," TDWI Best Practices Report, 2011.
[14] X. Tang et al., "SQL-SA for big data discovery polymorphic and parallelizable SQL user-defined scalar and aggregate infrastructure in Teradata Aster 6.20," in 2016 IEEE 32nd International Conference on Data Engineering (ICDE), 2016, pp. 1182–1193.
[15] A.-L. Boulesteix, S. Janitza, J. Kruppa, and I. R. König, "Overview of random forest methodology and practical guidance with emphasis on computational biology and bioinformatics," Wiley Interdiscip. Rev. Data Min. Knowl. Discov., vol. 2, no. 6, pp. 493–507, 2012.
[16] L. Tan, "Data-Driven Marketing: Purchase Behavioral Targeting in Travel Industry based on Propensity Model," 2017.
[17] W. A. Kamakura, M. Wedel, F. De Rosa, and J. A. Mazzon, "Cross-selling through database marketing: a mixed data factor analyzer for data augmentation and prediction," Int. J. Res. Mark., vol. 20, no. 1, pp. 45–65, 2003.
[18] L. Breiman, "Random forests," Mach. Learn., vol. 45, no. 1, pp. 5–32, 2001.
[19] B. Larivière and D. Van den Poel, "Predicting customer retention and profitability by using random forests and regression forests techniques," Expert Syst. Appl., vol. 29, no. 2, pp. 472–484, 2005.
[20] J. Zhang, W. Shen, L. Liu, and Z. Wu, "Face recognition model based on privacy protection and random forest algorithm," in 2018 27th Wireless and Optical Communication Conference (WOCC), 2018, pp. 1–5.
[21] B. Lkhagva, Y. Suzuki, and K. Kawagoe, "Extended SAX: Extension of symbolic aggregate approximation for financial time series data representation," DEWS2006 4A-i8, vol. 7, 2006.
[22] J. P. Verma, B. Patel, and A. Patel, "Big data analysis: recommendation system with Hadoop framework," in 2015 IEEE International Conference on Computational Intelligence & Communication Technology, 2015, pp. 92–97.
[23] V. Dagade, M. Lagali, S. Avadhani, and P. Kalekar, "Big Data Weather Analytics Using Hadoop," Int. J. Emerg. Technol. Comput. Sci. Electron., pp. 976–1353, 2015.
[24] L. H. Shuan, T. Y. Fei, S. W. King, G. Xiaoning, and L. Z. Mein, "Network Equipment Failure Prediction with Big Data Analytics," Int. J. Adv. Soft Comput. Its Appl., vol. 8, no. 3, 2016.
[25] M. M. Rathore, A. Ahmad, A. Paul, and S. Rho, "Urban planning and building smart cities based on the internet of things using big data analytics," Comput. Networks, vol. 101, pp. 63–80, 2016.
[26] K. D. Strang and Z. Sun, "Analyzing relationships in terrorism big data using Hadoop and statistics," J. Comput. Inf. Syst., vol. 57, no. 1, pp. 67–75, 2017.
[27] D. A. Asamoah, R. Sharda, P. Kalgotra, and M. Ott, "Teaching Case: Who Renews? Who Leaves? Identifying Customer Churn in a Telecom Company Using Big Data Techniques," J. Inf. Syst. Educ., vol. 27, no. 4, pp. 223–232, 2016.
[28] J. Chen et al., "A parallel random forest algorithm for big data in a Spark cloud computing environment," IEEE Trans. Parallel Distrib. Syst., vol. 28, no. 4, pp. 919–933, 2016.
[29] D. Marron, A. Bifet, and G. D. F. Morales, "Random Forests of Very Fast Decision Trees on GPU for Mining Evolving Big Data Streams," in ECAI, 2014, vol. 14, pp. 615–620.
[30] L. Li, S. Bagheri, H. Goote, A. Hasan, and G. Hazard, "Risk adjustment of patient expenditures: A big data analytics approach," in 2013 IEEE International Conference on Big Data, 2013, pp. 12–14.
[31] W. Lin, Z. Wu, L. Lin, A. Wen, and J. Li, "An ensemble random forest algorithm for insurance big data analysis," IEEE Access, vol. 5, pp. 16568–16575, 2017.
[32] V. Svetnik, A. Liaw, C. Tong, J. C. Culberson, R. P. Sheridan, and B. P. Feuston, "Random forest: a classification and regression tool for compound classification and QSAR modeling," J. Chem. Inf. Comput. Sci., vol. 43, no. 6, pp. 1947–1958, 2003.
[33] R. Mahalakshmi and S. Kannan, "Semantic Filtering of IoT Data using Symbolic Aggregate Approximation (SAX)," J. Comput. Sci. Appl., vol. 8, no. 1, pp. 31–39, 2016.
[34] Y. Wang, Q. Chen, C. Kang, and Q. Xia, "Clustering of electricity consumption behavior dynamics toward big data applications," IEEE Trans. Smart Grid, vol. 7, no. 5, pp. 2437–2447, 2016.
[35] R. Subramanyam, "HDFS heterogeneous storage resource management based on data temperature," in 2015 International Conference on Cloud and Autonomic Computing, 2015, pp. 232–235.
[36] D. Simmen et al., "Large-scale graph analytics in Aster 6: bringing context to big data discovery," Proc. VLDB Endow., vol. 7, no. 13, pp. 1405–1416, 2014.
[37] N. Yuhanna, "The Forrester Wave: Enterprise Data Warehouse, Q4 2015," Forrester Res. Inc., Cambridge, 2015.
[38] J. Lismont, S. Ram, J. Vanthienen, W. Lemahieu, and B. Baesens, "Predicting interpurchase time in a retail environment using customer-product networks: An empirical study and evaluation," Expert Syst. Appl., vol. 104, pp. 22–32, 2018.
[39] D. Zakirov, A. Bondarev, and N. Momtselidze, "A comparison of data mining techniques in evaluating retail credit scoring using R programming," in 2015 Twelve International Conference on Electronics Computer and Computation (ICECCO), 2015, pp. 1–4.
[40] Y. Wang, Q. Chen, C. Kang, M. Zhang, K. Wang, and Y. Zhao, "Load profiling and its application to demand response: A review," Tsinghua Sci. Technol., vol. 20, no. 2, pp. 117–129, 2015.
