
Data Warehousing and Data Mining

UNIT II: DATA PREPROCESSING, LANGUAGE, ARCHITECTURES, CONCEPT DESCRIPTION

Preprocessing: Cleaning, Integration, Transformation, Reduction, Discretization & Concept Hierarchy Generation. Data Mining Primitives, Query Language, Graphical User Interfaces, Architectures. Concept Description: Data Generalization, Characterizations, Class Comparisons, Descriptive Statistical Measures.

Data Preprocessing

Why preprocess the data?
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary

April 27, 2016 - Data Mining: Concepts and Techniques

Why Data Preprocessing?

Data in the real world is dirty:

incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
  e.g., occupation = "" (missing)

noisy: containing errors or outliers
  e.g., Salary = -10

inconsistent: containing discrepancies in codes or names
  e.g., Age = 42, Birthday = 03/07/1997
  e.g., was rating "1, 2, 3", now rating "A, B, C"
  e.g., discrepancy between duplicate records

Why Is Data Dirty?

Incomplete data comes from

n/a data value when collected

different considerations between the time when the data was collected and when it is analyzed

human/hardware/software problems

Noisy data comes from the process of data collection, entry, or transmission

Inconsistent data comes from

Different data sources

Functional dependency violation


Why Is Data Preprocessing Important?

No quality data, no quality mining results!

Quality decisions must be based on quality data
  e.g., duplicate or missing data may cause incorrect or even misleading statistics.
Data warehouse needs consistent integration of quality data

"Data extraction, cleaning, and transformation comprises the majority of the work of building a data warehouse." - Bill Inmon


Multi-Dimensional Measure of Data Quality

A well-accepted multidimensional view:


Accuracy
Completeness
Consistency
Timeliness
Believability
Value added
Interpretability
Accessibility
Broad categories:
intrinsic, contextual, representational, and
accessibility.


Major Tasks in Data Preprocessing

Data cleaning
  Fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
Data integration
  Integration of multiple databases, data cubes, or files
Data transformation
  Normalization and aggregation
Data reduction
  Obtains reduced representation in volume but produces the same or similar analytical results
Data discretization
  Part of data reduction but with particular importance, especially for numerical data

Forms of data preprocessing

(figure: cleaning, integration, transformation, and reduction shown as the successive forms of preprocessing)


Data Cleaning

Importance
  "Data cleaning is one of the three biggest problems in data warehousing" - Ralph Kimball
  "Data cleaning is the number one problem in data warehousing" - DCI survey

Data cleaning tasks

Fill in missing values

Identify outliers and smooth out noisy data

Correct inconsistent data

Resolve redundancy caused by data integration


Missing Data

Data is not always available

E.g., many tuples have no recorded value for several


attributes, such as customer income in sales data

Missing data may be due to

equipment malfunction

inconsistent with other recorded data and thus deleted

data not entered due to misunderstanding

certain data may not be considered important at the


time of entry

not register history or changes of the data

Missing data may need to be inferred.


How to Handle Missing Data?

Ignore the tuple: usually done when the class label is missing (assuming the task is classification); not effective when the percentage of missing values per attribute varies considerably.

Fill in the missing value manually: tedious + infeasible?

Fill it in automatically with

a global constant: e.g., "unknown", a new class?!

the attribute mean

the attribute mean for all samples belonging to the same


class: smarter

the most probable value: inference-based such as Bayesian


formula or decision tree
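A minimal Python sketch of the automatic fill-in strategies (the helper name `fill_missing` and the toy records are illustrative, not from the text):

```python
import statistics

def fill_missing(rows, attr, cls=None):
    """Fill None values of `attr` with the attribute mean, or, when a
    class column `cls` is given, with the mean over same-class rows."""
    def mean_of(subset):
        vals = [r[attr] for r in subset if r[attr] is not None]
        return statistics.mean(vals)
    for r in rows:
        if r[attr] is None:
            pool = rows if cls is None else [s for s in rows if s[cls] == r[cls]]
            r[attr] = mean_of(pool)
    return rows

data = [{"income": 30, "cls": "A"}, {"income": None, "cls": "A"},
        {"income": 50, "cls": "B"}, {"income": 70, "cls": "B"}]
fill_missing(data, "income", cls="cls")  # the None becomes 30, class A's mean
```

Using the class-conditional mean (the "smarter" option above) keeps the filled value consistent with the tuple's own subpopulation.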


Noisy Data

Noise: random error or variance in a measured variable

Incorrect attribute values may be due to
  faulty data collection instruments
  data entry problems
  data transmission problems
  technology limitation
  inconsistency in naming convention
Other data problems which require data cleaning
  duplicate records
  incomplete data
  inconsistent data


How to Handle Noisy Data?

Binning method:
  first sort data and partition into (equi-depth or equi-width) bins
  then smooth by bin means, bin median, bin boundaries, etc.
Clustering
  detect and remove outliers
Combined computer and human inspection
  detect suspicious values and check by human (e.g., deal with possible outliers)
Regression
  smooth by fitting the data into regression functions


Simple Discretization Methods: Binning

Binning: binning methods smooth a sorted data value by consulting its neighborhood, that is, the values around it. The sorted values are distributed into a number of buckets, or bins. Because binning methods consult the neighborhood of values, they perform local smoothing.

Equal-width (distance) partitioning:
  Divides the range into N intervals of equal size: uniform grid
  If A and B are the lowest and highest values of the attribute, the width of intervals will be W = (B − A)/N.
  The most straightforward, but outliers may dominate the presentation
  Skewed data is not handled well.
Equal-depth (frequency) partitioning:
  Divides the range into N intervals, each containing approximately the same number of samples
  Good data scaling
  Managing categorical attributes can be tricky.
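A minimal sketch of equal-width partitioning (function and data names are illustrative); note how a spread-out tail leaves the bins unbalanced:

```python
def equal_width_bins(vals, n):
    """Partition values into n equal-width bins over [min, max]."""
    lo, hi = min(vals), max(vals)
    width = (hi - lo) / n
    bins = [[] for _ in range(n)]
    for v in vals:
        idx = min(int((v - lo) / width), n - 1)  # clamp the max value into the last bin
        bins[idx].append(v)
    return bins

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
equal_width_bins(prices, 3)
# [[4, 8, 9], [15, 21, 21], [24, 25, 26, 28, 29, 34]]
```

Here the width is (34 − 4)/3 = 10, and half the values land in the last bin, which is exactly the skew problem the slide warns about.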

Binning Methods for Data Smoothing

* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25,
26, 28, 29, 34
* Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
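The worked example above can be reproduced in a few lines of Python (helper names are illustrative):

```python
def equi_depth_bins(sorted_vals, n):
    """Split already-sorted values into n bins of equal count."""
    size = len(sorted_vals) // n
    return [sorted_vals[i * size:(i + 1) * size] for i in range(n)]

def smooth_by_means(bins):
    """Replace every value by its bin's (rounded) mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace every value by the nearer of the bin's min and max."""
    return [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b] for b in bins]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = equi_depth_bins(prices, 3)
smooth_by_means(bins)       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
smooth_by_boundaries(bins)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```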

Cluster Analysis

(figure: data points grouped into clusters; values falling outside every cluster are candidate outliers)

Regression

The regression functions are used to determine the relationship between the dependent variable (target field) and one or more independent variables.

The dependent variable is the one whose values you want to predict, whereas the independent variables are the variables that you base your prediction on.

A regression model may be of three types: linear, polynomial, or logistic regression.

(figure: scatter of data points with the fitted regression line y = x + 1)

Data Cleaning as a Process


Data discrepancy detection
  Use metadata (e.g., domain, range, dependency, distribution)
  Check field overloading
  Check uniqueness rule, consecutive rule and null rule
  Use commercial tools
    Data scrubbing: use simple domain knowledge (e.g., postal code, spell-check) to detect errors and make corrections
    Data auditing: by analyzing data to discover rules and relationships to detect violators (e.g., correlation and clustering to find outliers)


Data Integration

Data migration and integration
  Data migration tools: allow transformations to be specified
  ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface
  Integration of the two processes
    Iterative and interactive (e.g., Potter's Wheel)

Data Integration

Data integration:
  combines data from multiple sources into a coherent store
Schema integration
  integrate metadata from different sources
  Entity identification problem: identify real-world entities from multiple data sources, e.g., A.cust-id ≡ B.cust-#
Detecting and resolving data value conflicts
  for the same real-world entity, attribute values from different sources are different
  possible reasons: different representations, different scales, e.g., metric vs. British units


Handling Redundancy in Data Integration

Redundant data occur often when integrating multiple databases
  Object identification: the same attribute or object may have different names in different databases
  Derivable data: one attribute may be a "derived" attribute in another table, e.g., annual revenue
Redundant data may be detected by correlational analysis
Careful integration of the data from multiple sources may help reduce/avoid redundancies and inconsistencies and improve mining speed and quality


Correlation Analysis

Correlational analysis: the use of statistical correlation to evaluate the strength of the relations between variables.

Correlation analysis measures the relationship between two items, for example, a security's price and an indicator.

The resulting value (called the "correlation coefficient") shows if changes in one item (e.g., an indicator) will result in changes in the other item (e.g., the security's price).

Correlation Analysis (Numeric Data)

Correlation coefficient (also called Pearson's product-moment coefficient):

  r(A,B) = Σ_{i=1..n} (a_i − mean(A))(b_i − mean(B)) / ((n − 1) σ_A σ_B)
         = (Σ_{i=1..n} a_i b_i − n · mean(A) · mean(B)) / ((n − 1) σ_A σ_B)

where n is the number of tuples, mean(A) and mean(B) are the respective means of A and B, σ_A and σ_B are the respective standard deviations of A and B, and Σ a_i b_i is the sum of the AB cross-product.

If r(A,B) > 0, A and B are positively correlated (A's values increase as B's do); the higher the value, the stronger the correlation.
r(A,B) = 0: independent; r(A,B) < 0: negatively correlated.
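A direct transcription of the formula as a Python sketch (the function name is illustrative; sample standard deviations use n − 1, matching the formula above):

```python
import math

def pearson_r(a, b):
    """Pearson's product-moment correlation coefficient."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    sd_a = math.sqrt(sum((x - mean_a) ** 2 for x in a) / (n - 1))
    sd_b = math.sqrt(sum((y - mean_b) ** 2 for y in b) / (n - 1))
    cross = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    return cross / ((n - 1) * sd_a * sd_b)

pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # close to 1.0: positively correlated
pearson_r([1, 2, 3, 4], [8, 6, 4, 2])  # close to -1.0: negatively correlated
```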

Correlation Analysis (Nominal Data)

χ² (chi-square) test:

  χ² = Σ (Observed − Expected)² / Expected

The larger the χ² value, the more likely the variables are related.
The cells that contribute the most to the χ² value are those whose actual count is very different from the expected count.
Correlation does not imply causality
  # of hospitals and # of car-thefts in a city are correlated
  Both are causally linked to the third variable: population

Chi-Square Calculation: An Example

                          Play chess   Not play chess   Sum (row)
Like science fiction      250 (90)     200 (360)        450
Not like science fiction  50 (210)     1000 (840)       1050
Sum (col.)                300          1200             1500

χ² (chi-square) calculation (numbers in parentheses are expected counts calculated based on the data distribution in the two categories):

  χ² = (250 − 90)²/90 + (50 − 210)²/210 + (200 − 360)²/360 + (1000 − 840)²/840 = 507.93

It shows that like_science_fiction and play_chess are correlated in the group.
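A short script (function name illustrative) that derives the expected counts from the row/column marginals and reproduces the statistic:

```python
def chi_square(observed):
    """Pearson chi-square statistic for a 2-D table of observed counts;
    expected counts come from the row/column marginals."""
    row_sums = [sum(row) for row in observed]
    col_sums = [sum(col) for col in zip(*observed)]
    total = sum(row_sums)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            exp = row_sums[i] * col_sums[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

table = [[250, 200],   # like science fiction
         [50, 1000]]   # not like science fiction
round(chi_square(table), 2)  # 507.94 (the slide's 507.93 truncates the same value)
```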

Visually Evaluating Correlation

Scatter plots showing the similarity from -1 to 1.

Correlation (viewed as linear relationship)

Correlation measures the linear relationship between objects.
To compute correlation, we standardize the data objects, A and B, and then take their dot product:

  a'_k = (a_k − mean(A)) / std(A)
  b'_k = (b_k − mean(B)) / std(B)
  correlation(A, B) = A' · B'

Covariance (Numeric Data)

Covariance is similar to correlation:

  Cov(A, B) = E[(A − mean(A))(B − mean(B))] = Σ_{i=1..n} (a_i − mean(A))(b_i − mean(B)) / n

Correlation coefficient:

  r(A,B) = Cov(A, B) / (σ_A σ_B)

where n is the number of tuples, mean(A) and mean(B) are the respective mean or expected values of A and B, and σ_A and σ_B are the respective standard deviations of A and B.

Positive covariance: if Cov(A,B) > 0, then A and B both tend to be larger than their expected values.
Negative covariance: if Cov(A,B) < 0, then if A is larger than its expected value, B is likely to be smaller than its expected value.
Independence: if A and B are independent, Cov(A,B) = 0, but the converse is not true:
  Some pairs of random variables may have a covariance of 0 but are not independent.
  Only under some additional assumptions (e.g., the data follow multivariate normal distributions) does a covariance of 0 imply independence.

Co-Variance: An Example

The covariance can be simplified in computation as

  Cov(A, B) = E(A·B) − mean(A)·mean(B)

Suppose two stocks A and B have the following values in one week: (2, 5), (3, 8), (5, 10), (4, 11), (6, 14).

Question: if the stocks are affected by the same industry trends, will their prices rise or fall together?

  E(A) = (2 + 3 + 5 + 4 + 6)/5 = 20/5 = 4
  E(B) = (5 + 8 + 10 + 11 + 14)/5 = 48/5 = 9.6
  Cov(A, B) = (2×5 + 3×8 + 5×10 + 4×11 + 6×14)/5 − 4 × 9.6 = 4

Thus, A and B rise together since Cov(A, B) > 0.
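The simplified computation can be sketched as follows (the function name is illustrative):

```python
def covariance(a, b):
    """Cov(A, B) via the simplified form E(A*B) - mean(A)*mean(B)."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    return sum(x * y for x, y in zip(a, b)) / n - mean_a * mean_b

A = [2, 3, 5, 4, 6]
B = [5, 8, 10, 11, 14]
covariance(A, B)  # positive (4), so the two stocks tend to rise together
```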

Data Transformation

A function that maps the entire set of values of a given attribute to a new set of replacement values such that each old value can be identified with one of the new values.

Methods
  Smoothing: remove noise from data
  Aggregation: summarization, data cube construction
  Generalization: concept hierarchy climbing
  Normalization: scaled to fall within a small, specified range
    min-max normalization
    z-score normalization
    normalization by decimal scaling
  Attribute/feature construction
    New attributes constructed from the given ones

Data Transformation: Normalization

min-max normalization:

  v' = (v − min_A) / (max_A − min_A) × (new_max_A − new_min_A) + new_min_A

z-score normalization:

  v' = (v − mean_A) / stand_dev_A

normalization by decimal scaling:

  v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1

Example

Min-max normalization: to [new_min_A, new_max_A]
  Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to
    (73,600 − 12,000) / (98,000 − 12,000) × (1.0 − 0) + 0 = 0.716
Z-score normalization (μ: mean, σ: standard deviation):
  Ex. Let μ = 54,000, σ = 16,000. Then
    (73,600 − 54,000) / 16,000 = 1.225
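The three normalization methods as minimal Python sketches (function names are illustrative), checked against the worked income example:

```python
def min_max(v, mn, mx, new_mn=0.0, new_mx=1.0):
    """Min-max normalization of v from [mn, mx] to [new_mn, new_mx]."""
    return (v - mn) / (mx - mn) * (new_mx - new_mn) + new_mn

def z_score(v, mean, std):
    """Z-score normalization."""
    return (v - mean) / std

def decimal_scaling(v, j):
    """Normalization by decimal scaling: divide by 10^j."""
    return v / 10 ** j

round(min_max(73600, 12000, 98000), 3)  # 0.716
round(z_score(73600, 54000, 16000), 3)  # 1.225
decimal_scaling(986, 3)                 # 0.986
```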


Data Reduction Strategies

A data warehouse may store terabytes of data
  Complex data analysis/mining may take a very long time to run on the complete data set
Data reduction
  Obtain a reduced representation of the data set that is much smaller in volume but yet produces the same (or almost the same) analytical results
Data reduction strategies
  Data cube aggregation
  Dimensionality reduction: remove unimportant attributes
  Data compression
  Numerosity reduction: fit data into models
  Discretization and concept hierarchy generation

Data Cube Aggregation

The lowest level of a data cube
  the aggregated data for an individual entity of interest
  e.g., a customer in a phone calling data warehouse
Multiple levels of aggregation in data cubes
  Further reduce the size of data to deal with
Reference appropriate levels
  Use the smallest representation which is enough to solve the task
Queries regarding aggregated information should be answered using the data cube, when possible

Dimensionality Reduction

Feature selection (i.e., attribute subset selection):
  Select a minimum set of features such that the probability distribution of different classes given the values for those features is as close as possible to the original distribution given the values of all features
  Reduces the number of patterns, which are then easier to understand
Heuristic methods (due to the exponential number of choices):
  step-wise forward selection
  step-wise backward elimination
  combining forward selection and backward elimination
  decision-tree induction


Example of Decision Tree Induction

Initial attribute set: {A1, A2, A3, A4, A5, A6}

(figure: a decision tree that branches on A4 at the root, then on A1 and A6, with leaves labeled Class 1 and Class 2)

Reduced attribute set: {A1, A4, A6}

Data Compression

String compression
  There are extensive theories and well-tuned algorithms
  Typically lossless
  But only limited manipulation is possible without expansion
Audio/video compression
  Typically lossy compression, with progressive refinement
  Sometimes small fragments of signal can be reconstructed without reconstructing the whole
Time sequence is not audio
  Typically short and vary slowly with time

Data Compression

(figure: lossless compression recovers the Original Data exactly from the Compressed Data, while lossy compression recovers only an approximation)

Wavelet Transformation

(figures: Haar-2 and Daubechies-4 wavelets)

Discrete wavelet transform (DWT): linear signal


processing, multiresolutional analysis

Compressed approximation: store only a small fraction


of the strongest of the wavelet coefficients

Similar to discrete Fourier transform (DFT), but better


lossy compression, localized in space

Method:

Length, L, must be an integer power of 2 (padding with 0s, when


necessary)

Each transform has 2 functions: smoothing, difference

Applies to pairs of data, resulting in two sets of data of length L/2

Applies the two functions recursively, until it reaches the desired length


DWT for Image Compression

(figure: the image is fed through repeated low-pass/high-pass filter pairs, with each stage re-filtering the low-pass output)

Principal Component Analysis

Given N data vectors from k dimensions, find c <= k orthogonal vectors that can be best used to represent the data

The original data set is reduced to one consisting of N data vectors on c principal components (reduced dimensions)

Each data vector is a linear combination of the c


principal component vectors

Works for numeric data only

Used when the number of dimensions is large


Principal Component Analysis

(figure: principal components Y1 and Y2 shown as new orthogonal axes for data originally plotted on X1 and X2)

Numerosity Reduction

Parametric methods

Assume the data fits some model, estimate


model parameters, store only the parameters,
and discard the data (except possible outliers)

Log-linear models: obtain a value at a point in m-D space as the product of appropriate marginal subspaces

Non-parametric methods

Do not assume models

Major families: histograms, clustering, sampling


Regression and Log-Linear Models

Linear regression: data are modeled to fit a straight line
  Often uses the least-square method to fit the line
Multiple regression: allows a response variable Y to be modeled as a linear function of a multidimensional feature vector
Log-linear model: approximates discrete multidimensional probability distributions


Regression Analysis and Log-Linear Models

Linear regression: Y = α + β X
  The two parameters α and β specify the line and are to be estimated by using the data at hand, applying the least-squares criterion to the known values of Y1, Y2, ..., X1, X2, ....
Multiple regression: Y = b0 + b1 X1 + b2 X2
  Many nonlinear functions can be transformed into the above.
Log-linear models:
  The multi-way table of joint probabilities is approximated by a product of lower-order tables.
  Probability: p(a, b, c, d) = α_ab β_ac γ_ad δ_bcd
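A least-squares fit for the linear case can be sketched as follows (the function name is illustrative):

```python
def fit_line(xs, ys):
    """Least-squares estimates of alpha and beta for Y = alpha + beta * X."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    alpha = my - beta * mx
    return alpha, beta

fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # (1.0, 2.0): the points lie on Y = 1 + 2X
```

Storing only the two parameters in place of the data points is exactly the parametric numerosity reduction described above.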

Histograms

A popular data reduction technique:
  Divide data into buckets and store the average (sum) for each bucket
  Can be constructed optimally in one dimension using dynamic programming
  Related to quantization problems

(figure: histogram of values between 10,000 and 100,000, bucketed in steps of 10,000)

Clustering

Partition the data set into clusters, and store only the cluster representation

Can be very effective if the data is clustered, but not if the data is "smeared"

Can have hierarchical clustering and be stored in multi-dimensional index tree structures

There are many choices of clustering definitions and clustering algorithms, further detailed in Chapter 8


Sampling

Allow a mining algorithm to run in complexity that is potentially sub-linear to the size of the data

Choose a representative subset of the data
  Simple random sampling may have very poor performance in the presence of skew
Develop adaptive sampling methods
  Stratified sampling:
    Approximate the percentage of each class (or subpopulation of interest) in the overall database
    Used in conjunction with skewed data

Sampling may not reduce database I/Os (page at a time).
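Minimal sketches of the sampling schemes (function names and the toy data are illustrative); the stratified variant keeps each subpopulation's share even when the classes are skewed:

```python
import random

def srswor(data, n, seed=0):
    """Simple random sample WithOut Replacement."""
    return random.Random(seed).sample(data, n)

def srswr(data, n, seed=0):
    """Simple random sample With Replacement."""
    rng = random.Random(seed)
    return [rng.choice(data) for _ in range(n)]

def stratified_sample(data, stratum_of, frac, seed=0):
    """Draw ~frac of every stratum, so skewed classes keep their share."""
    rng = random.Random(seed)
    strata = {}
    for rec in data:
        strata.setdefault(stratum_of(rec), []).append(rec)
    sample = []
    for records in strata.values():
        k = max(1, round(len(records) * frac))
        sample.extend(rng.sample(records, k))
    return sample

people = [{"id": i, "cls": "rare" if i < 10 else "common"} for i in range(100)]
len(stratified_sample(people, lambda r: r["cls"], 0.1))  # 10: 1 rare + 9 common
```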


Sampling

(figure: from the Raw Data, SRSWOR (simple random sampling without replacement) and SRSWR (simple random sampling with replacement) each draw a sample)

Sampling

(figure: Raw Data on the left, a Cluster/Stratified Sample on the right)

Hierarchical Reduction

Use multi-resolution structure with different degrees of reduction

Hierarchical clustering is often performed but tends to define partitions of data sets rather than "clusters"

Parametric methods are usually not amenable to hierarchical representation

Hierarchical aggregation
  An index tree hierarchically divides a data set into partitions by value range of some attributes
  Each partition can be considered as a bucket
  Thus an index tree with aggregates stored at each node is a hierarchical histogram



Discretization

Three types of attributes:
  Nominal: values from an unordered set
  Ordinal: values from an ordered set
  Continuous: real numbers
Discretization:
  divide the range of a continuous attribute into intervals
  Some classification algorithms only accept categorical attributes
  Reduce data size by discretization
  Prepare for further analysis


Discretization and Concept Hierarchy

Discretization: reduce the number of values for a given continuous attribute by dividing the range of the attribute into intervals. Interval labels can then be used to replace actual data values.

Concept hierarchies: reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) by higher-level concepts (such as young, middle-aged, or senior).

Discretization techniques can be categorized based on how the discretization is performed: whether it uses class information, and which direction it proceeds (i.e., top-down vs. bottom-up). If the discretization process uses class information, then we say it is supervised discretization. Otherwise, it is unsupervised.

Discretization and Concept Hierarchy (contd.)

If the process starts by first finding one or a few points


(called split points or cut points) to split the entire attribute
range, and then repeats this recursively on the resulting
intervals, it is called top-down discretization or splitting.
This contrasts with bottom-up discretization or merging,
which starts by considering all of the continuous values
as potential split-points, removes some by merging
neighborhood values to form intervals, and then
recursively applies this process to the resulting intervals.
Discretization can be performed recursively on an attribute
to provide a hierarchical or multiresolution partitioning of
the attribute values, known as a concept hierarchy.
Concept hierarchies are useful for mining at multiple levels
of abstraction.


Discretization and Concept Hierarchy


Generation for Numeric Data

Manual definition of concept hierarchies can be a tedious


and time-consuming task for a user or a domain expert.
Several discretization methods can be used to
automatically generate or dynamically refine concept
hierarchies for numerical attributes.
Furthermore, many hierarchies for categorical attributes
are implicit within the database schema and can be
automatically defined at the schema definition level.
Binning: top-down splitting technique
Histogram analysis: unsupervised discretization technique
Clustering analysis: unsupervised discretization technique
Entropy-based discretization
Segmentation by natural partitioning


Entropy-Based Discretization

Given a set of samples S, if S is partitioned into two intervals S1 and S2 using boundary T, the entropy after partitioning is

  E(S, T) = (|S1| / |S|) Ent(S1) + (|S2| / |S|) Ent(S2)

The boundary that minimizes the entropy function over all possible boundaries is selected as a binary discretization.

The process is recursively applied to partitions obtained until some stopping criterion is met, e.g.,

  Ent(S) − E(T, S) < δ

Experiments show that it may reduce data size and improve classification accuracy.
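One level of the recursion can be sketched as follows: scan candidate boundaries (midpoints between sorted values) and keep the one minimizing E(S, T). The names are illustrative; the recursive application and stopping test are omitted:

```python
import math

def entropy(labels):
    """Ent(S) over the class labels in S."""
    n = len(labels)
    return sum(-p * math.log2(p)
               for p in (labels.count(c) / n for c in set(labels)))

def best_split(pairs):
    """Return the boundary T minimizing E(S, T) for (value, label) pairs."""
    pairs = sorted(pairs)
    n = len(pairs)
    best_t, best_e = None, None
    for i in range(1, n):
        t = (pairs[i - 1][0] + pairs[i][0]) / 2  # midpoint candidate boundary
        left = [c for v, c in pairs if v <= t]
        right = [c for v, c in pairs if v > t]
        e = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        if best_e is None or e < best_e:
            best_t, best_e = t, e
    return best_t, best_e

samples = [(1, "low"), (2, "low"), (3, "low"),
           (10, "high"), (11, "high"), (12, "high")]
best_split(samples)  # (6.5, 0.0): this boundary separates the classes perfectly
```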


Segmentation by Natural
Partitioning

A simple 3-4-5 rule can be used to segment numeric data into relatively uniform, "natural" intervals.

If an interval covers 3, 6, 7 or 9 distinct values at the most significant digit, partition the range into 3 equi-width intervals

If it covers 2, 4, or 8 distinct values at the most


significant digit, partition the range into 4 intervals

If it covers 1, 5, or 10 distinct values at the most


significant digit, partition the range into 5 intervals
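The core of the rule (round the range to the most-significant-digit unit, count distinct msd values, then pick 3, 4, or 5 partitions) can be sketched as follows; the function name is illustrative, and the later boundary adjustment for min/max outliers is omitted:

```python
def three_four_five(low, high, msd):
    """Round low/high to the msd unit and split per the 3-4-5 rule.
    (Boundary adjustment for min/max outliers is not handled here.)"""
    lo = (low // msd) * msd            # round low down to an msd boundary
    hi = -((-high) // msd) * msd       # round high up
    distinct = (hi - lo) // msd        # distinct values at the msd digit
    if distinct in (3, 6, 7, 9):
        parts = 3
    elif distinct in (2, 4, 8):
        parts = 4
    else:                              # 1, 5, or 10 distinct values
        parts = 5
    width = (hi - lo) // parts
    return [(lo + i * width, lo + (i + 1) * width) for i in range(parts)]

three_four_five(-159, 1838, 1000)
# [(-1000, 0), (0, 1000), (1000, 2000)]: 3 distinct msd values, 3 intervals
```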


Example of 3-4-5 Rule

Step 1: the profit values range from Min = -$351 to Max = $4,700; the 5%-tile is Low = -$159 and the 95%-tile is High = $1,838.
Step 2: msd = 1,000, so Low is rounded down to Low' = -$1,000 and High is rounded up to High' = $2,000. The range (-$1,000 .. $2,000) covers 3 distinct values at the msd, giving 3 equi-width intervals: (-$1,000 .. 0], (0 .. $1,000], ($1,000 .. $2,000].
Step 3: adjust the boundaries to cover Min and Max: since Min = -$351, the first interval shrinks to (-$400 .. 0]; since Max = $4,700 exceeds $2,000, the interval ($2,000 .. $5,000] is added.
Step 4: recursively apply the rule inside each interval:
  (-$400 .. 0]: 4 intervals (-$400 .. -$300], (-$300 .. -$200], (-$200 .. -$100], (-$100 .. 0]
  (0 .. $1,000]: 5 intervals (0 .. $200], ($200 .. $400], ($400 .. $600], ($600 .. $800], ($800 .. $1,000]
  ($1,000 .. $2,000]: 5 intervals ($1,000 .. $1,200], ($1,200 .. $1,400], ($1,400 .. $1,600], ($1,600 .. $1,800], ($1,800 .. $2,000]
  ($2,000 .. $5,000]: 3 intervals ($2,000 .. $3,000], ($3,000 .. $4,000], ($4,000 .. $5,000]

Concept Hierarchy Generation for Categorical Data

Specification of a partial ordering of attributes explicitly at the schema level by users or experts
  street < city < state < country
Specification of a portion of a hierarchy by explicit data grouping
  {Urbana, Champaign, Chicago} < Illinois
Specification of a set of attributes
  The system automatically generates a partial ordering by analysis of the number of distinct values
  E.g., street < city < state < country
Specification of only a partial set of attributes
  E.g., only street < city, not others


Automatic Concept Hierarchy Generation

Some concept hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the given data set
  The attribute with the most distinct values is placed at the lowest level of the hierarchy
  Note the exception: weekday, month, quarter, year

  country: 15 distinct values
  province_or_state: 365 distinct values
  city: 3,567 distinct values
  street: 674,339 distinct values
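The distinct-value heuristic can be sketched in a few lines (the function name and toy rows are illustrative):

```python
def auto_hierarchy(rows, attrs):
    """Order attributes by distinct-value count, most distinct first,
    so the first attribute sits at the lowest level of the hierarchy."""
    distinct = {a: len({row[a] for row in rows}) for a in attrs}
    return sorted(attrs, key=lambda a: distinct[a], reverse=True)

rows = [
    {"street": "S1", "city": "Urbana",    "state": "Illinois",   "country": "USA"},
    {"street": "S2", "city": "Chicago",   "state": "Illinois",   "country": "USA"},
    {"street": "S3", "city": "Palo Alto", "state": "California", "country": "USA"},
    {"street": "S4", "city": "Vancouver", "state": "BC",         "country": "Canada"},
    {"street": "S5", "city": "Vancouver", "state": "BC",         "country": "Canada"},
]
auto_hierarchy(rows, ["country", "state", "city", "street"])
# ['street', 'city', 'state', 'country'], i.e. street < city < state < country
```

As the slide notes, the heuristic can fail (e.g., weekday has fewer distinct values than month but sits below it), so the result should be reviewed by a user.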