
[BIG] DATA ANALYTICS

ENGAGE WITH YOUR CUSTOMER


PREPARED BY GHULAM I
ABOUT ME
Currently working at Telkomsel as a senior data analyst
8 years of professional experience, including 4 years in the big data
and predictive analytics field in the telecommunication industry
Bachelor's degree in Computer Science from Gadjah Mada University
and a master's degree from the Magister of Information Technology
program, Universitas Indonesia
Lecturer at Muhammadiyah Jakarta University

https://id.linkedin.com/pub/ghulam-imaduddin/32/a21/507
ghulam@ideweb.co.id
WHAT’S IN THIS SLIDE
 Intro & Data Trends
 Challenges
 Tech Approach
 Big Data Tools
 Type of Analytics
 Analytics Lifecycle
 Methodology
 Tools
 Use Cases (Sentiment Analysis)
 What’s Trending
 Where to Start
THE WORLD OF DATA

Source: http://www.cision.com/us/2012/10/big-data-and-big-analytics/
DATA VS BIG DATA
Big data is just data with:
 More volume
 Faster data generation (velocity)
 Multiple data formats (variety)

World's data volume is projected to grow 40% per year
and 50 times by 2020 [1]

Data comes from various human & machine activities

[1] http://e27.co/worlds-data-volume-to-grow-40-per-year-50-times-by-2020-aureus-20150115-2/
CHALLENGES
 More data = more storage space
 More storage = more money to spend (an RDBMS server needs very costly storage)

 Data coming faster
  Speed up data processing or we’ll have a backlog

 Need to handle various data structures
  How do we put JSON-formatted data into a standard RDBMS?
  Hey, we also have XML format from other sources
  Other systems give us compressed data in gzip format

 Agile business requirements
  In the initial discussion they only needed 10 fields; now they ask for 25. Can
we do that? We only put those 10 in our database
  Our standard ETL process can’t handle this
STORAGE COST
In terms of storage cost, Hadoop is cheaper than a standard RDBMS.
Hadoop provides highly scalable storage and processing at a fraction of
the EDW cost.
STORAGE & COMPUTE TOGETHER

The Old Way: compute (RDBMS, EDW) is separated from data storage (SAN, NAS)
by the network
• Expensive, special purpose, “reliable” servers
• Expensive licensed software
• Hard to scale
• Network is a bottleneck
• Only handles relational data
• Difficult to add new fields & data types

The Hadoop Way: compute (CPU), memory, and storage (disk) sit together on
commodity “unreliable” servers
• Hybrid open source software
• Scales out forever
• No bottlenecks
• Easy to ingest any data
• Agile data access

Source: Cloudera Presentation Deck by Amr Awadallah


MAP REDUCE APPROACH
 Process data in parallel using a distributed algorithm on a cluster
 The Map procedure performs filtering and sorting of data locally
 The Reduce procedure performs a summary operation (count, sum,
average, etc.)
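As an illustration, here is a minimal word-count sketch of the map/reduce pattern,
written against Spark's RDD API (introduced later in this deck); the input path is a
hypothetical example, not part of the original material.

// Map phase: split lines into words and emit (word, 1) pairs locally
// Reduce phase: sum the counts for each word across the cluster
val lines = sc.textFile("/user/demo/input.txt")   // hypothetical HDFS path
val counts = lines.
  flatMap(_.split("\\s+")).
  map(word => (word, 1)).
  reduceByKey(_ + _)
counts.take(10).foreach(println)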
HADOOP vs UNSTRUCTURED
DATA
 Hadoop has HDFS (Hadoop Distributed File System)
 It is just a file system, so all you need to do is drop the files there 
 Schema-on-read concept

RDBMS APPROACH vs HADOOP APPROACH

RDBMS approach (schema on write): the schema is applied first, the source data is
loaded into database tables, and the application (BI tools) reads from those tables.

Hadoop approach (schema on read): the source data is loaded into HDFS as-is, and
the schema (metadata) is applied only when the application reads the data.
HIVE
 The Apache Hive™ data warehouse software facilitates querying and
managing large datasets residing in distributed storage
 With Hive you can write the schema for the data in HDFS
 Hive provides many libraries that enable you to read various data types
like XML, JSON, or even compressed formats
 You can create your own data parser in Java
 Hive supports an SQL-like language to read your data
 Hive converts your SQL into Java MapReduce code and runs it on the
cluster
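A minimal schema-on-read sketch (assuming a spark-shell whose sqlContext is
Hive-enabled, as in the Spark examples later in this deck; the table name and HDFS
location are hypothetical examples):

// Define a schema over files that already sit in HDFS; no data is moved.
sqlContext.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS access_log (
    ts STRING, user_id STRING, url STRING
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
  LOCATION '/user/demo/access_log'
""")

// Plain SQL; Hive/Spark turns it into distributed jobs on the cluster.
sqlContext.sql(
  "SELECT url, count(*) AS hits FROM access_log GROUP BY url ORDER BY hits DESC LIMIT 10"
).show()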
SPARK
 Apache Spark is a fast and general engine for large-scale data processing
 Runs programs up to 100x faster than Hadoop MapReduce in memory,
or 10x faster on disk
 You can write Spark applications in Java, Scala, Python, or R
 Spark provides libraries for SQL, streaming, and complex analysis such as
graph computation and machine learning
 https://spark.apache.org/
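A tiny taste of the in-memory point above, as a sketch (assumes the spark-shell,
where sc already exists; the HDFS path is a hypothetical example):

val logs = sc.textFile("/user/demo/access.log").cache()   // keep the RDD in memory

val total  = logs.count()                                  // first action reads from disk
val errors = logs.filter(_.contains("ERROR")).count()      // reuses the cached data
println(s"$errors error lines out of $total")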
ANALYTICS
ANALYTICS IS IN YOUR BLOOD
 Do you realize that you do analytics every day?
 I need to get to campus faster!
 Hmm... looking at the sky today, I think it’ll rain
 Based on my midterm and assignment scores, I need to get at least 80
in my final exam to pass this course
 I stalked her social media. I think she is single because most of her
posts are only about food :p
DESCRIPTIVE & PREDICTIVE
 Descriptive statistics is the term given to the analysis of data that helps
describe, show, or summarize data in a meaningful way such that, for
example, patterns might emerge from the data.
 In the Information System Design course, most of the students got a C grade (11
people). There are 4 people with an A, 7 with a B, 7 with a D, and 7 with an E
 Fulan only posts his activity on Facebook on weekends

 Predictive analytics is the branch of data mining concerned with the
prediction of future probabilities and trends.
 The central element of predictive analytics is the predictor, a variable
that can be measured for an individual or other entity to predict future
behavior.
 Fulan probably has a job, because he always leaves home at 7 in the morning
and gets back at 6 in the evening
PREDICTIVE ANALYTICS
There are 2 types of predictive analytics:
◦ Supervised
Supervised analytics is when we know the truth about something in the past
Example:
We have historical weather data: the temperature, humidity, cloud density, and
weather type (rain, cloudy, or sunny). We can then predict today's weather
based on today's temperature, humidity, and cloud density
Machine learning to be used: regression, decision tree, SVM, ANN, etc.
◦ Unsupervised
Unsupervised is when we don't know the truth about something in the past.
The result is segments that we need to interpret
Example:
We want to segment the students based on their historical exam
scores, attendance, and lateness history (see the sketch below)
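A rough illustration of the unsupervised case above, sketched with Spark MLlib
k-means; the CSV path, column order, and the choice of 3 segments are hypothetical
assumptions, not part of the original material:

// Segment students by exam score, attendance, and late count (hypothetical CSV).
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

val students = sc.textFile("/user/demo/students.csv").
  map(_.split(",")).
  map(c => Vectors.dense(c(0).toDouble, c(1).toDouble, c(2).toDouble)).
  cache()

// No "truth" labels here; we only pick the number of segments (3, assumed)
// and interpret the resulting cluster centers afterwards.
val model = KMeans.train(students, 3, 20)
model.clusterCenters.foreach(println)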
APPLYING THE CONTEXT

Raw & unstructured sources: IMEI & TAC (device info), location, URL access, CDR,
social media data, complaints, surveys

Context: point of interest, sentiment library, socmed buzzer, website category

Derived information: commute pattern, hangout location, pain point, idols,
political view, communication spending, community leader, family member
ANALYTICS LIFECYCLE

Business Understanding
- Gathering problem information
- Defining the goal to solve the problem
- Defining expected output
- Defining hypothesis
- Defining analysis methodology
- Measuring the business value

Data Understanding
- Define variables to support the hypothesis
- Cleaning & transforming the data
- Create longitudinal/trend data
- Ingesting additional data if needed
- Build the analytical data mart

Data Mining & Modeling
- Defining the target variable
- Splitting data for training and validating the model
- Defining the analysis time frame for training and validation
- Correlation analysis and variable selection
- Selecting the right data mining algorithm
- Do validation by measuring accuracy, sensitivity, and model lift
- Data mining and modeling is an iterative process
ANALYTICS LIFECYCLE

Model Interpretation
- Describe the importance of each variable
- Visualize the overall model, for example by creating a decision tree
- Define business actions based on the model result

Model Operationalization
- Define the model scoring period
- Integrate the model result with execution systems (campaign system, CRM, etc.)
- Create an operational process that is timely, consistent, and efficient

Model Monitoring
- Create a monitoring process for model evaluation
- Evaluate the model based on real-world results
- Monitor and evaluate the business impact

Analytics and modeling is an iterative process. Data models become obsolete
and need to evolve to accommodate changes in behavior
BUILDING THE METHODOLOGY
Analysis Domain
• What is the analysis domain? Is it for males only? Is it for housewives or workers? Your
“customer” segments have different behavior

Type of Analysis
• Do we need only descriptive analysis, or do we need to go with predictive analysis?

Supervised or Unsupervised?
• Do we need to build unsupervised clustering/segmentation for this analysis?

Define Analysis Time Window
• What time window of data do we need for behavior observation?
• What is the prediction time window?
• Is there any seasonal event in that time window?
ANALYTICS TOOLS
Microsoft Excel. A very powerful tool for statistical data manipulation, pivoting, and even
simple prediction
SQL is just the language. Is your data lying in a database? SQL will help you filter, aggregate, and
extract your data
RapidMiner provides built-in RDBMS connectors, parsers for common data formats (CSV, XML),
data manipulation, and many machine learning algorithms. We can also create our own libraries.
The latest version of RapidMiner can connect to Hadoop and do more complex analysis like text
mining. A free version is available (community edition)
KNIME. Known as a powerful tool for predictive analytics. Its overall functionality is similar to
RapidMiner. The latest version of KNIME can connect to Hadoop and do more complex analysis
such as text mining. A free version is available
Tableau is one of the most famous tools for building visualizations on top of data. Tableau is also
powerful for creating interactive dashboards. A free version is available with some limitations
QlikView. Similar to Tableau, QlikView is designed to enable data analysts to develop a
dashboard or just a simple visualization on top of the data. A free version is available
SAMPLE
USE CASE #1
SENTIMENT ANALYSIS ON TWITTER DATA
BACKGROUND
Objective
Measuring customer sentiment toward the big three telecommunication providers in
Indonesia (Telkomsel, XL, Indosat)
Metric
Measuring NPS (Net Promoter Score) for each operator using Twitter data.
NPS is calculated as the percentage of positive tweets minus the percentage of
negative tweets.

Putra, B. P. (2015). Analisis Sentimen Layanan Telekomunikasi pada Pengguna Media Sosial Twitter. Jakarta: Universitas Indonesia
WORKFLOW
Data Collection → Data Labeling → Data Preparation → Data Modeling

Data Collection
- Create a Twitter crawler with Python and the Twitter API
- Run the crawler with selected keywords, parse, and store to an RDBMS
- The collection covers tweets generated in April 2015

Data Labeling
- Label some samples for the training dataset
- This part is done with crowdsourcing

Data Preparation
- Deduplication
- Convert to lower case
- Tokenization
- Filter stop words

Data Modeling
- Generate word vectors and train models with machine learning algorithms
  on the training dataset
- Using SVM and C4.5
- The result is 2 different models
- Select the best model by comparing the accuracy
WORKFLOW
Data Scoring → NPS Calculation

Data Scoring
- Using the best model, score the rest of the dataset
- The scoring result is a label (positive/negative/neutral) for each tweet

NPS Calculation
- Aggregate the scoring result by telco provider to get the count of positive
  tweets and negative tweets
- Calculate the NPS for each telco provider
- Visualize the result as a bar chart
DATA COLLECTION
 We run the crawler 3 times, one time for each operator. We only
search tweets containing some keywords
• Telepon
• Telkomsel
• SMS
• Indosat
• Internet
• XL
• Jaringan

 Parse the json result using json parser library embedded in python 2.7,
form it as CSV (comma separated value)
 Load the csv into database (we use MySQL in this experiment)
DATA LABELING
 The objective is to build the ground truth
 Using a crowdsourcing approach: we built an online questionnaire and asked
people to label each tweet as negative, positive, or neutral
 We labeled 100 tweets ourselves as validated tweets for
questionnaire validation
 We put 20 tweets in each questionnaire: 5 tweets for Indosat, 5 for
XL, 5 for Telkomsel, and the remaining 5 are random validated tweets
 If 4 out of 5 validated tweets are answered correctly, we flag the
questionnaire as valid
 This approach is used to eliminate answers submitted by people who
answer randomly
DATA PREPARATION
 The deduplication process removes duplicated tweets
 Tokenization is the process of splitting a sentence into words. This has to be
done because the model will generate word vectors instead of
sentences.
DATA PREPARATION
 Filtering stop words: we eliminate non-useful words (words that do not
indicate a positive or negative meaning), as sketched below
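A tiny sketch of these two steps on a single tweet; the stop-word list and the tweet
text below are made-up examples:

// Tokenization + stop-word filtering on one made-up tweet.
val stopWords = Set("yang", "dan", "di", "ke", "ini")              // assumed stop words
val tweet = "Jaringan internet di rumah ini lemot banget"
val tokens = tweet.toLowerCase.split("[^a-z0-9]+")                 // tokenization
val filtered = tokens.filter(t => t.nonEmpty && !stopWords(t))     // stop-word filtering
println(filtered.mkString(", "))  // jaringan, internet, rumah, lemot, banget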
TOOLS USED
 Data preparation and modeling were done with the RapidMiner software
 RapidMiner has text analysis functions and procedures. We can find
procedures to tokenize, convert case, deduplicate, and filter stop
words
 RapidMiner also has the SVM and C4.5 algorithms for modeling
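For illustration only, the sketch below expresses the same modeling idea (word
vectors fed to an SVM) with Spark MLlib instead of RapidMiner; the labeledTokens
RDD, the vector size, and the iteration count are hypothetical assumptions:

// Assumes a hypothetical RDD labeledTokens: RDD[(Double, Seq[String])],
// where the label is 1.0 for positive and 0.0 for negative tweets.
import org.apache.spark.mllib.feature.HashingTF
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.classification.SVMWithSGD

val hashingTF = new HashingTF(10000)                        // word vector size (assumed)
val data = labeledTokens.map { case (label, tokens) =>
  LabeledPoint(label, hashingTF.transform(tokens))          // tokens -> sparse word vector
}
val Array(train, test) = data.randomSplit(Array(0.7, 0.3))  // train / validation split
val model = SVMWithSGD.train(train.cache(), 100)            // 100 iterations (assumed)
val accuracy = test.map(p =>
  if (model.predict(p.features) == p.label) 1.0 else 0.0).mean()
println(s"validation accuracy = $accuracy")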
MODEL ACCURACY
 Model accuracy measurement is done using a confusion matrix

Accuracy = (TP + TN) / (TP + FP + TN + FN)

 In this experiment, we found that SVM performs better than C4.5


NPS Result
 After aggregating the scored dataset, we found that Indosat has a
higher NPS than the others.

Telco       % Promoters   % Detractors     NPS
Indosat         37%            14%          23%
Telkomsel       30%            27%           3%
XL              19%            37%         -18%


SAMPLE
USE CASE #2
SIMPLE DESCRIPTIVE ANALYTICS USING HADOOP
AND SPARK
BACKGROUND
 This is a demonstration of how to use Apache Spark to extract some
information from Twitter data
 The Twitter data was collected with a crawler written in Python
and stored as-is (JSON-formatted data)
DATA EXPLORATION
 Load JSON data to memory
val tweets = sqlContext.jsonFile("/user/flume/tweets/2015/09/01/*/*")

 Look at the data schema, and select only the useful fields
tweets.printSchema
DATA EXPLORATION
 Finding the top 10 users based on tweet count
tweets.
  select("user.screen_name").
  rdd.map(x => (x(0).toString, 1)).
  reduceByKey(_ + _).          // count tweets per user
  map(_.swap).                 // (user, count) -> (count, user)
  sortByKey(false).            // sort descending by count
  map(_.swap).
  take(10).
  foreach(println)
DATA EXPLORATION
 Finding top words
tweets.select("text").rdd.
  flatMap(x => x(0).toString.toLowerCase.split("[^A-Za-z0-9]+")).
  map(x => (x, 1)).
  filter(x => x._1.length >= 3).   // keep words with at least 3 characters
  reduceByKey(_ + _).
  map(_.swap).sortByKey(false).map(_.swap).
  take(20).foreach(println)
DATA EXPLORATION
 Finding top words with stop word exclusion
val stop_words = sc.textFile("/user/ghulam/stopwords.txt")
val bc_stop = sc.broadcast(stop_words.collect)   // ship the stop-word list to all executors
tweets.select("text").rdd.
  flatMap(x => x(0).toString.toLowerCase.split("[^A-Za-z0-9]+")).
  map(x => (x, 1)).
  filter(x => x._1.length > 3 && !bc_stop.value.contains(x._1)).
  reduceByKey(_ + _).
  map(_.swap).sortByKey(false).map(_.swap).
  take(20).foreach(println)
DATA EXPLORATION
 Word Chains (Market Basket Analysis)
import org.apache.spark.mllib.fpm.FPGrowth

val stop_words = sc.broadcast(sc.textFile("/user/hadoop-user/ghulam/stopwords.txt").collect)
val tweets = sqlContext.jsonFile("/user/flume/tweets/2015/09/01/*/*")

// Filter out promotional / spam tweets and retweets, then turn each tweet
// into a distinct "basket" of words
val trx = tweets.select("text").rdd.
  filter(!_(0).toString.toLowerCase.contains("ini 20 finalis aplikasi")).
  filter(!_(0).toString.toLowerCase.contains("telkomsel jaring 20 devel")).
  filter(!_(0).toString.toLowerCase.contains("[jual")).
  filter(!_(0).toString.toLowerCase.contains("lelang acc")).
  filter(!_(0).toString.toLowerCase.matches(".*theme.*line.*")).
  filter(!_(0).toString.toLowerCase.matches(".*fol.*back.*")).
  filter(!_(0).toString.toLowerCase.matches(".*favorite.*digital.*")).
  filter(!_(0).toString.toLowerCase.startsWith("rt @")).
  map(x => x(0).toString.toLowerCase.split("[^A-Za-z0-9]+").
    filter(w => w.length > 3 && !stop_words.value.contains(w)).distinct)

val fpg = new FPGrowth().setMinSupport(0.01).setNumPartitions(10)
val model = fpg.run(trx)
model.freqItemsets.filter(x => x.items.length >= 3).take(20).foreach { itemset =>
  println(itemset.items.mkString("[", ",", "]") + ", " + itemset.freq)
}
WORLD TRENDS
2015 HYPE CYCLE
Big data related topics at the top
of the hype curve:
• Advanced analytics
• IoT
• Machine Learning
DATA SCIENTIST
 Data scientist/analyst is one of the sexiest emerging jobs in the market
WHERE TO START
LET’S GET OUR HANDS DIRTY
SKILLS NEEDED

DOMAIN KNOWLEDGE
SKILLS NEEDED
 Business Acumen
In terms of data science, being able to discern which problems are
important to solve for the business is critical, in addition to identifying
new ways the business should be leveraging its data.
 Python, Scala, and SQL
SQL skills are a must! Python and Scala have also become common languages for
data processing, along with Java, Perl, or C/C++
 Hadoop Platform
It is heavily preferred in many cases. Having experience with Hive or Pig is
also a strong selling point. Familiarity with cloud tools such as Amazon S3
can also be beneficial.
 SAS or R or other predictive analytics tools
In-depth knowledge of at least one of these analytical tools; for data
science, R is generally preferred. Along with this, statistical knowledge is also
important
SKILLS NEEDED
 Intellectual curiosity
Curiosity to dig deeper into the data and to solve a problem by finding its
root cause
 Communication & Presentation
Companies searching for a strong data scientist are looking for
someone who can clearly and fluently translate their technical findings
to a non-technical team. A data scientist must enable the business to
make decisions by arming them with quantified insights

Summarized from http://www.kdnuggets.com/2014/11/9-must-have-skills-data-scientist.html


[BIG] DATA SOURCES
 Social media platforms. Most social media platforms provide an API to
fetch data. Twitter and Facebook are the most common
examples
 KDNuggets (http://www.kdnuggets.com/datasets/index.html)
 Kaggle (https://www.kaggle.com/)
 Portal Data Indonesia (http://data.go.id/)
 Your WhatsApp group conversation
ONLINE TUTORIAL
 Coursera (https://www.coursera.org/)
 DataQuest (https://www.dataquest.io/)
 Udacity (https://www.udacity.com/)
 TutorialsPoint (http://www.tutorialspoint.com/)
 Youtube, RapidMiner Channel
(https://www.youtube.com/user/RapidIVideos)
 Youtube KNIME TV (https://www.youtube.com/user/KNIMETV)
 Cloudera Quickstart VM (http://www.cloudera.com/content/www/en-
us/documentation/enterprise/latest/topics/cloudera_quickstart_vm.html)
 Hortonworks Sandbox VM
(http://hortonworks.com/products/hortonworks-sandbox/)
 Apache Spark Page (https://spark.apache.org/examples.html)
PREPARE
YOURSELF
TO SURF THE DATA ERA!
Questions?
