
Prehistoric Use-wear Analysis through Neural Networks.

By Juan A. Barceló & Jordi Pijoan-López


UNIVERSITAT AUTÒNOMA DE BARCELONA
SPAIN
Juanantonio.barcelo@uab.cat
jordipl31770@yahoo.es

Abstract. The surfaces of prehistoric lithic tools are not uniform but contain many variations, some of
them of a visual or tactile nature. Such variations go beyond the peaks and valleys characterizing
surface micro-topography, which is the obvious frame of reference for “textures” in everyday usage.
Beyond the physical, geological characteristics of the raw material, some visual features of an artifact’s
surfaces are consequences of the modifications that the object has undergone throughout its history.
Consequently, when we analyze an object’s surface macro- or microscopically, we should recognize some
differential features which are the consequence of an action (human or bio-geological) having modified
the original appearance of that surface. In this paper we have described and measured use-wear evidence
as an archaeological texture in terms of the particular dispersion of luminance values across the surface.
Textures have been analyzed as complex visual patterns composed of entities, or sub-patterns, that have
characteristic brightness, color, slope, size, etc. Once the texture elements have been identified in the
image, we have computed statistical properties from the extracted texture elements and used these as
texture features. A neural network has been programmed to induce the best discriminant function. Results
show the reliability of this approach.
Key words: lithics, use-wear, texture, neural networks, machine learning

Introduction

Archaeologists have been erroneously using clustering methods to achieve classifications. It has
traditionally been assumed that everything which is similar was produced, distributed and used in the
past in the same way. But we cannot learn explanatory knowledge or predict concept assignments on the
basis of perceived similarities alone.
The goal in a classification problem is to develop an algorithm which will assign any
artistic, historical or prehistoric artifact, represented by a vector x, to one of c classes
(chronology, function, origin, etc.). The problem is to find the best mapping from the
descriptive features (input patterns) to the explanation (output). The task of learning the
explanatory concepts consists in separating the input vectors assigned to one known
output from the input vectors assigned to another. We need training data, that is, known
instances of explanatory concepts, to be able to predict how each explanatory output is
connected to each descriptive input.
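As a purely illustrative sketch of this kind of supervised mapping (the feature values, class labels and the use of scikit-learn below are our own assumptions, not anything taken from the paper), a classifier can be fitted on labelled training artifacts and then asked to assign a class to a new one:

    # Hypothetical sketch: learning a mapping from descriptive features to classes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented training data: each row is an artifact described by three
    # measurements; each label is an explanatory class.
    X_train = np.array([[0.2, 1.5, 3.1],
                        [0.3, 1.4, 2.9],
                        [0.9, 0.4, 1.0],
                        [0.8, 0.5, 1.2]])
    y_train = np.array(["cutting", "cutting", "scraping", "scraping"])

    # Learn the mapping from input patterns to the explanatory output.
    clf = LogisticRegression().fit(X_train, y_train)

    # Predict the explanatory concept for a new, unexplained artifact.
    x_new = np.array([[0.25, 1.45, 3.0]])
    print(clf.predict(x_new))   # expected to output ['cutting']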
This is a near-perfect example of inverse reasoning. That is, the answer is known, but
not the question. What we need in cultural heritage research is “guessing a past event
from its vestiges”. In archaeology, for instance, the main source of inverse problems
lies in the fact that archaeologists generally do not know why archaeological
observables have the shape, size, texture, composition and spatiotemporal location they
have. Instead, we have sparse and noisy observations or measurements of physical
properties, and an incomplete knowledge of relational contexts and possible causal
processes. From this information, an inverse-engineering approach should be used to
adequately interpret ancient remains preserved in the present as the material
consequence of social actions performed in the past.
An inverse problem can be solved by conjecturing unobservable mechanisms that link
the input (observation) with the output (explanation). This is exactly what philosophers
of science have called induction. It can be defined as the way of concluding that facts
similar to observed facts are true in cases not examined. The underlying assumptions
are:
A. When a thing of a certain sort A has been found to be associated with a
thing of a certain other sort B, and has never been found dissociated from
a thing of the sort B, the greater the number of cases in which A and B
have been associated, the greater is the probability that they will be
associated in a fresh case in which one of them is known to be present;
B. Under the same circumstances, a sufficient number of cases of
association will make the probability of a fresh association nearly a
certainty, and will make it approach certainty without limit.
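Assumption B has a classical probabilistic counterpart that the authors do not cite but that makes the claim concrete: under Laplace's rule of succession, after n observed associations of A with B and no dissociations, the probability of association in a fresh case is P = (n + 1) / (n + 2), which grows towards 1 as n increases but never actually reaches certainty.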
Computer scientists are intensively exploring this subject and there are many
new mechanisms and technologies for knowledge expansion through iterative and
recursive revision. Artificial Intelligence offers us powerful methods and techniques to
bring about this new task. Fuzzy logic, rough sets, genetic algorithms, neural networks,
Bayesian models and agent-based systems are among the directions we have to explore.
These paradigms differ from usual methods in that: a) they are (in comparison at least)
robust in the presence of noise; b) they are flexible as to the statistical types that can be
combined; c) they can work with feature (attribute) spaces of very high dimensionality;
d) they can be based on non-linear and non-monotonic assumptions; e) they require less
training data; and f) they make fewer prior assumptions about data distributions and
model parameters.
The huge number of learning algorithms and data mining tools makes it impossible
to review the entire field in a single paper (see Barceló 2008 for a list of
algorithms and their applications to cultural heritage research).

Use-Wear Analysis as Archaeological Texture

When an archaeologist analyzes prehistoric objects, the first thing she perceives is the
visual appearance of the artefact. Its surfaces present variations in local properties like
albedo, colour, density, coarseness, roughness, regularity, linearity, directionality,
direction, frequency, phase, hardness, brightness, bumpiness, specularity, reflectivity
and transparency. Texture is the name we give to these variations, which are usually
caused by the process that created that surface.
Texture has always been used to describe archaeological materials. Perhaps the
most obvious example of texture analysis in archaeology is that of distinguishing
surface irregularities due to the characteristics of the raw material. We can distinguish
between carved stone tools, stripped bones, polished wood, dry hide, painted pottery,
etc. in terms of the visual appearance of the raw material they are made of.
Furthermore, texture patterns are not only intrinsic to the raw material itself.
Some visual features of an artifact’s surfaces are consequences of the modifications
that the object has undergone throughout its history. Consequently, when we inspect
an object’s surface macro- or microscopically, we try to recognize those differential
features -striations, polished areas, scars, particles, undifferentiated background- which
can be the consequence of working actions (use) having modified the original
appearance of that surface.
Archaeologists studying stone instruments usually wish to determine whether or
not these objects were used as tools and how. The best way to do this is through the
analysis of microscopic traces of wear (texture) appearing on the surface of the tool
(Figure 1).

[Figure 1: two micrographs, showing the original unaltered surface and the surface altered after human work.]

Figure 1. Identifying texture on the basis of image features

Surface variations in prehistoric stone tools due to human work can be analyzed in
terms of different causal factors:

• Worked material (wood, bone, shell, fur, etc.): the effects of its physical
properties (hardness, wetness, porosity, plasticity, etc.) on the tool’s active
surface (Figure 2)
• Movement: longitudinal (cutting), transversal (scraping), etc. (Figure 3)
[Figure 2: three micrographs, panels A, B and C.]

Fig. 2. Texture differences between replicated prehistoric stone tools used in different ways. A: original raw material texture
before use (andesite); B: result of the alteration of surface A when the tool was used to scrape fur; C: a different raw
material (obsidian) with texture features produced by scraping wood.

[Figure 3: two schematic drawings of tool movement relative to the edge’s axis: longitudinal (cutting) and transversal (scraping).]

Fig. 3. Longitudinally and transversally generated original surfaces (photographs by the authors’ research team).

The inverse problem of inferring how a tool was used in the past from the vestiges
of its use-wear texture observed in the present has traditionally been answered in a
subjective way. Archaeological specialists with many years of experience replicating
prehistoric tools in the laboratory generalize their personal intuitions and diagnose
prehistoric objects. Although some attempts at a more formal approach using statistics
and traditional quantitative image analysis tools have been published, the task is still
carried out by a few specialists on the basis of their personal experience.
Let us consider how this archaeological problem can be solved using a computational
approach based on Neural Networks.

An introduction to Neural Networks


Our brain is composed of biological neurons. We can replicate a human brain, up to a
certain point, by building a system of artificial neurons. In analogy to the way chemical
transmitters transport signals across neurons in the human brain, a mathematical
function (learning algorithm) controls the transport of numerical values through the
connections of the artificial neural network. If the strength of a signal arriving at a
neuron exceeds a certain threshold value, the neuron will itself become active and
“fire”, i.e., pass on the signal through its outgoing connection.
The power of neural computation comes from the massive interconnection among the
artificial neurons, which share the load of the overall processing task, and from the
adaptive nature of the parameters (weights) that interconnect the neurons. In this case,
each neuron is connected to many others, and inputs correspond to the incoming signals
from other neurons into the synapses of a single biological neuron. Each neuron
receives connections from other neurons and/or itself.
For a computing system to be called “Artificial neural network” or “connectionist
system”, it is necessary to have a labelled directed graph structure where nodes
(representing artificial neurons) perform some simple computations. It is the pattern of
interconnections which is represented mathematically as a weighted, directed graph in
which the vertices or nodes represent basic computing elements (neurons), the links or
edges represent the connections between elements, the weights represent the strengths
of these connections, and the directions establish the flow of information and more
specifically define inputs and outputs of nodes and of the network. Each node’s
activation is based on the activations of the nodes that have connections directed at it,
and the weights on those connections. To complete the specification of the network, we
need to declare how the nodes process information arriving at the incoming links and
disseminate the information on the outgoing links. The nodes of the network are either
input variables, computational elements, or output variables.
Getting the networks to do something is a matter of giving them inputs, and letting the
information travel through the topology by simulating each artificial neuron. The role of
neural networks is to provide general parameterised non-linear mappings between a set
of input variables and a set of output variables. The neural network builds discriminant
functions from its neurons or processing elements. The network topology determines
the number and shape of the different classifiers. The shapes of the discriminant
functions change with the topology, so the networks may be considered semiparametric
classifiers.
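The weighted directed graph just described can be made concrete with a minimal sketch of one feed-forward pass (plain NumPy; the two-layer topology, random weights and sigmoid activation are illustrative assumptions, not the networks used later in this paper):

    import numpy as np

    def sigmoid(z):
        # Smooth threshold: a node "fires" strongly once its weighted input
        # exceeds the level controlled by the threshold.
        return 1.0 / (1.0 + np.exp(-z))

    # Edge weights of the directed graph: 3 input nodes -> 4 hidden -> 2 output.
    rng = np.random.default_rng(0)
    W_hidden = rng.normal(size=(3, 4))   # strengths of input-to-hidden connections
    W_output = rng.normal(size=(4, 2))   # strengths of hidden-to-output connections

    x = np.array([0.4, 1.2, -0.7])       # activations of the input nodes

    # Each node's activation depends on the activations of the nodes connected
    # to it and on the weights of those connections.
    hidden = sigmoid(x @ W_hidden)
    output = sigmoid(hidden @ W_output)
    print(output)                        # activations of the two output nodes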

There are two modes in neural information processing:

• using mode
• training mode.

During training, the network is trained to associate outputs with input patterns. Once the
network has learnt, training stops, and weights are not changed further, unless
something new must be learned. Once trained, a network’s response becomes
insensitive to minor variations in its input. Then, when the network is used, it identifies
the input pattern and tries to output the associated output pattern. The power of neural
networks comes to life when a pattern that has no associated output is given as
an input. In this case, the network gives the output that corresponds to the taught input
pattern that is least different from the given pattern.

In the using mode, the presentation of an input sample should trigger the generation of a
specific output pattern. Each such input (or output) set is referred to as a vector. Neural
networks work by feeding in some input variables and producing some output
variables. Data fed to the network represent the pattern of activation over the set of
processing units. Networks can therefore be used where you have some known information
and would like to infer some unknown information. When a known input pattern is
detected at the input, its associated output becomes the current output. If the input
pattern does not belong to the list of taught input patterns, the firing rule is used to
determine whether to fire or not. This is also called the retrieving phase. Various non-
linear systems have been proposed for retrieving desired or stored patterns. The final
neuron values represent the desired output to be retrieved.

In the training or learning mode, a network selectively modifies its parameters so that
the application of a set of inputs produces the desired (or at least a consistent) set of outputs.
The network requires input data and a desired response to each input. Neural networks
take this input-output data, apply a learning rule, and extract information from the data.
Essential to this learning process is the repeated presentation of the input-output
patterns. Training is accomplished by sequentially applying input vectors, while
adjusting network weights according to a predetermined procedure. The more data are
presented to the network, the better its performance will be. As the network is trained,
the weights of the system are continually adjusted to incrementally reduce the difference
between the output of the system and the desired response. If the weights change too
fast, the conditions previously learned will be rapidly forgotten. If the weights change
too slowly, it will take a long time to learn complicated input-output relations. The rate
of learning is problem dependent and must be judiciously chosen.
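The effect of the learning rate can be sketched with a single linear neuron trained by the delta rule (a deliberately minimal example on invented data, not the backpropagation networks described below): too large a rate makes the weights oscillate and forget, too small a rate makes convergence very slow.

    import numpy as np

    # Invented input-output pairs for one linear neuron with two inputs.
    X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
    d = np.array([1.0, 1.0, 2.0, 0.0])       # desired responses

    w = np.zeros(2)                          # connection weights to be adjusted
    learning_rate = 0.1                      # problem dependent; chosen judiciously

    for epoch in range(100):                 # repeated presentation of the patterns
        for x, target in zip(X, d):
            y = w @ x                        # current output of the neuron
            error = target - y               # difference from the desired response
            w += learning_rate * error * x   # incremental weight adjustment

    print(w)                                 # converges towards [1.0, 1.0]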

During training the network weights gradually converge to values such that each input
vector produces the desired output vector. It is the network which self-adjusts to
produce consistent responses. By adapting its weights, the neural network works
towards an optimal solution based on a measurement of its performance. Usually, the
optimal weights are obtained by optimizing (minimizing or maximizing) certain
"energy" functions. Relaxation is the process whereby the unit activations (not the
weights) change over time until they evolve to a state in which activations are no longer
changing, and thus the network can be said to have “relaxed”, i.e., fallen into a state of
little activity. Relaxation differs from learning in that only activations change; in
learning, the weights change.
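Relaxation can be illustrated with a Hopfield-style associative network (again only an illustration; PEDRA uses feed-forward networks): the weights encoding a stored pattern stay fixed while the unit activations are updated until they stop changing.

    import numpy as np

    # One stored pattern and the fixed Hebbian weights that encode it.
    stored = np.array([1, -1, 1, -1])
    W = np.outer(stored, stored) - np.eye(4)   # the weights do NOT change below

    state = np.array([1, 1, 1, -1])            # noisy initial activations
    for _ in range(10):                        # relaxation: only activations change
        new_state = np.sign(W @ state)
        if np.array_equal(new_state, state):   # no longer changing: the network has relaxed
            break
        state = new_state

    print(state)                               # settles on the stored pattern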

For more technical details, and a complete list of references, the reader is referred to a
recent book on the subject (Barceló 2008).

The Analysis of Use-Wear Evidence. An identification-based approach


The easiest way of creating an associative memory for archaeological explanations is by
assuming that there is a roughly fixed set or vocabulary of “supposed” descriptive
regularities shared by a single population of objects, which are also distinctive enough.
In this case, input neurons are not proper visual units, because there is no sensor
acquiring image data and sending it to the computer system. Instead, it is the human
user who feeds the network with an interpreted input, in which each feature contains
the result of a previous inference. In this way, the receptive field properties of low-level
neurons do not encode the salient features of the input image, but the previous
knowledge the user has about the features characterizing the archaeological evidence.
This kind of neural network is therefore closer to an expert system.
M.H. van den Dries (1998) has published a system of that kind. She tried to classify
stone tools according to function using this kind of texture descriptor. The system was
trained to recognize polishes. The other wear categories (edge retouch, edge
rounding and striations) were not included, and the system was not equipped
to relate the wear patterns to motions. The main reason for this limitation is that the
reference data set yielded the largest number of training examples for this wear
category.
The input neurons represent 31 wear attributes, and the output neurons represent the
worked materials: dry hide, fresh hide, hard wood, soft wood, dry bone, soaked bone,
dry antler, soaked antler, cereals, meat, pottery, stone, soil, siliceous plants and non-
siliceous plants. The training set consisted of 160 examples of experimentally replicated
tools used in the archaeological laboratory to cut and scrape the materials on the output
list. After training, the neural network correctly identified most presented cases (a
misclassification rate of 26.4% on the training data).
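As a rough sketch of such an identification-based network (not van den Dries's actual implementation: her hidden-layer size, training algorithm and data are not reproduced here, and the rows below are random placeholders for the 160 experimental examples), the 31-attribute, 15-material architecture could look as follows:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    WORKED_MATERIALS = ["dry hide", "fresh hide", "hard wood", "soft wood",
                        "dry bone", "soaked bone", "dry antler", "soaked antler",
                        "cereals", "meat", "pottery", "stone", "soil",
                        "siliceous plants", "non-siliceous plants"]

    rng = np.random.default_rng(1)
    # Placeholder data: 160 tools, each described by 31 presence/absence wear attributes.
    X = rng.integers(0, 2, size=(160, 31))
    y = rng.choice(WORKED_MATERIALS, size=160)

    # The hidden-layer size is an assumption; the text only fixes 31 inputs
    # and 15 output categories.
    net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=1)
    net.fit(X, y)
    print(net.predict(X[:3]))   # worked materials predicted for three tools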
Because about a quarter of the experimental examples were not learned, those
cases were analyzed individually. It turned out that the network was more rigorous than
necessary, because many answers were not really wrong, but a matter of degree of
certainty. Most of the differences resulted from the fact that the answers were not very
clear-cut; consequently, many of them had a score that just fell outside the training
tolerance. Still, most answers corresponded to the expected answer and pointed at the
right contact material. In fact, all “mistakes” included the right contact material in the
answer. Moreover, eight of the “mistakes” consisted of two outputs with equal scores on
similar materials, like cereals and siliceous plants.
Subsequently, WARP was tested with 16 randomly selected test cases. Considering
the degree of answer overlapping, only four (25%) were really false. Van den Dries
compared the performance of the network on experimental data and on archaeological
data (interpreted by a human analyst). Despite some unfortunate guesses, WARP
performed rather well.
Although van den Dries’s results are impressive, we should question the use of
qualitative presence/absence variables to describe texture. It is really difficult to know
whether a texture pattern is “greasy” or “very brilliant”. If we want to go beyond this
kind of identification-based analysis, we should find a way to feed the network with
texture information directly, and not through a subjective identification.

The Analysis of Use-Wear Evidence. A visual-based approach

The PEDRA system is an example of how macro- and microtexture analysis can be used
for the texture classification of lithic tools according to use-wear (pedra means
“stone” in Catalan; Barceló and Pijoan-López 2004; Barceló et al. 2008; Barceló
2008; Pijoan-López 2007). Instead of relying on subjective “types of use-wear”, a
computational system was designed based on image segmentation techniques that
associate pixels with the same grey level and define areas with comparable luminance
variance. The underlying idea was that the extracted texels, or texture elements,
correspond to bumps or “large plateaux” generated by the use of the tool for cutting or
scraping different materials (meat, hide, wood, bone, shell). Different grey-level
thresholds were explored to obtain different image segmentations, but we finally
selected a grey level of 120 as the threshold separating a texel from the tool’s
background.
[Figure: original image, image segmentation, and identified texels.]
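A minimal sketch of this kind of grey-level segmentation (using scikit-image, which is our assumption, since the paper does not name the software actually employed; whether texels are the brighter or the darker regions is also assumed here) would threshold the microphotograph at the chosen grey level and label the connected regions as candidate texels:

    import numpy as np
    from skimage import measure

    def extract_texels(gray_image, threshold=120):
        """Segment an 8-bit grey-level image into candidate texels.

        Pixels above `threshold` are treated as texel pixels, the rest as
        background; connected bright regions become individual texels.
        """
        binary = gray_image > threshold      # grey-level thresholding
        return measure.label(binary)         # connected-component labelling

    # Placeholder array standing in for a microphotograph of the tool surface.
    image = np.random.default_rng(0).integers(0, 256, size=(256, 256), dtype=np.uint8)
    texel_labels = extract_texels(image)
    print(texel_labels.max(), "candidate texels found")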

The idea was to calculate a non-linear discrimination rule for texel parameters, that is,
to distinguish texels generated by longitudinal movement (cutting) from texels
generated during transversal movement (scraping), or, alternatively, to discriminate
texels produced by working hard materials (wood, shell) from those produced by
working soft ones.
We replicated more than 100 lithic tools in the laboratory using the same kind
of flint. Three microphotographs were taken of each tool from different areas of the
working surface. Texels were individually measured for their area, perimeter, axis
length, etc.
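The per-texel measurements can then be read off the labelled regions; a sketch continuing from the segmentation example above (again assuming scikit-image, with only a few of the descriptors listed later):

    # Continues from the segmentation sketch above (`image`, `texel_labels`).
    from skimage import measure

    def measure_texels(texel_labels, gray_image):
        """Return per-texel shape and luminance descriptors."""
        rows = []
        for region in measure.regionprops(texel_labels, intensity_image=gray_image):
            rows.append({
                "area": region.area,
                "perimeter": region.perimeter,
                "major_axis": region.major_axis_length,
                "minor_axis": region.minor_axis_length,
                "mean_luminance": region.mean_intensity,
            })
        return rows

    descriptors = measure_texels(texel_labels, image)
    print(len(descriptors), "texels measured")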
Statistical analysis (Pijoan-López 2007) has shown that there is no clear-cut rule
relating the shape and geometry of the texture elements to the work activity
performed with the stone tool. It is important to remember that what we are describing as
texture is just a light effect, that is to say, indirect evidence of some irregularities on
the active surface of the tool. The shape parameters of regions with different luminance
properties correspond to the interfacial boundary defined by light reflection, and
consequently they do not necessarily fit the real texture. Observed image texture
depends on factors such as illumination conditions. Certain properties of stone surfaces
affect the appearance of use-wear. Because grey values depend on shadows, and
shadows depend on the position of light sources, if we are not careful the same object
surface may have very different texels associated with it. In our experiments, we
controlled the light sources and the influence of the image acquisition device in order to
understand the observed patterns, but additional control is needed when selecting the
luminance intervals used for texel extraction.
A neural network should allow us to discover whether there is enough evidence
to establish some degree of nonlinear relationship between light reflection variability
and micro-topographic features on the active surface of the replicated stone tool. A
feed-forward neural network has been built (Figure 4). Input neurons read summary
measures (mean and standard deviation) of all texels segmented in each
microphotograph, yielding a dataset of 496 microphotographs, each described in terms
of the following features (a feature-aggregation sketch is given after Figure 4):
- Mean of Elongation/ Std. dev. of Elongation
- Mean of Circularity/ Std. dev. of Circularity
- Mean of Quadrature-Thinness/ Std. dev. of Quadrature-Thinness
- Mean of Ratio Compactness-Thinness/ Std. dev. of Ratio Comp.-Thin.
- Mean of Compactness/ Std. dev. of Compactness,
- Mean of Irregularity/ Std. dev. of Irregularity
- Mean of Rectangularity/ Std. dev. of Rectangularity,
- Mean of Ratio Perimeter/Elongation/Std. dev. of Rt. Per./Elong.
- Mean of Feret diameter/ Std. dev. of Feret diameter
- Mean of Minimum rectangularity/ Std. dev. of Minimum rectangularity
- Mean of luminance means within a texel/ Std. dev. of lum. means
- Mean of luminance std.dev. within a texel/ Std. dev. of lum. st. dev.
- Mean of luminance modes within a texel/ Std. dev. of lum. modes
- Mean of luminance min.values within a texel/ Std. dev. of lum. min.va.
- Mean of Area of all texels within the image / Std. dev. of Area

Figure 4. General Architecture of the Neural Network implementing the PEDRA Classifier
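How each microphotograph can be reduced to such a vector of means and standard deviations is sketched below; the dictionary keys are placeholders for the fifteen descriptors listed above, and the two invented texels stand in for the real measurements:

    import numpy as np

    def image_feature_vector(texel_descriptors):
        """Aggregate per-texel descriptors into one feature vector per image."""
        keys = sorted(texel_descriptors[0].keys())
        features = []
        for key in keys:
            values = np.array([t[key] for t in texel_descriptors], dtype=float)
            features.append(values.mean())   # mean of the descriptor over all texels
            features.append(values.std())    # standard deviation over all texels
        return np.array(features)

    # Example with two invented texels and two descriptors:
    print(image_feature_vector([{"area": 10.0, "perimeter": 12.0},
                                {"area": 30.0, "perimeter": 25.0}]))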

In a preliminary investigation, we analyzed the relationship between texture
variation in a single microphotograph and the activity performed (cutting or
scraping bone, shell, meat, dry or fresh hide, dry or fresh wood). The network
therefore has seven output units and a hidden layer with 144 neurons. The learning
algorithm was backpropagation. Results are fairly interesting (Table 1). When
comparing the training data with the network’s interpretation:

Output / Desired    BONE   BUTCHERY   DRY HIDE   DRY WOOD   FRESH HIDE   FRESH WOOD    SHELL
BONE                  55          0          0          6            1            3       10
BUTCHERY               8         43          4          2            5            1        0
DRY HIDE              13          4         46          6            0            6        3
DRY WOOD              13          1         12         43            1           13        4
FRESH HIDE             3         17          7          3           18            0        0
FRESH WOOD            10          1          5          6            0           28        8
SHELL                 20          1          6          5            0           11       44

Performance         BONE   BUTCHERY   DRY HIDE   DRY WOOD   FRESH HIDE   FRESH WOOD    SHELL
MSE               0.1433     0.0626     0.0983     0.0962       0.0405       0.0852   0.0892
NMSE              0.7731     0.5364     0.7271     0.7844       0.8473       0.7795   0.7449
MAE                0.237     0.1455     0.1966    0.20543       0.1137       0.1910   0.1950
Min Abs Error    1.1E-05    8.2E-05     0.0006     0.0006       0.0002      2.7E-05   0.0002
Max Abs Error     1.0388     0.9980     0.9961     0.9779       0.9352       1.0341   0.9818
r                  0.512     0.7116     0.5397      0.477       0.4998       0.4709   0.5119
Percent Correct  45.0819    64.1791       57.5    60.5633           72      45.1612  63.7681

Table 1. Results from Experiment 1: discriminating among worked materials (original experimental data set).
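For orientation, a network with the topology described above (summary features in, 144 hidden units, seven output categories, trained by backpropagation) could be sketched as follows; the data are random placeholders for the 496 microphotograph vectors, the 30 inputs stand in for the mean/standard-deviation pairs listed earlier, and every hyperparameter not stated in the text is an assumption:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    CATEGORIES = ["bone", "butchery", "dry hide", "dry wood",
                  "fresh hide", "fresh wood", "shell"]

    rng = np.random.default_rng(2)
    X = rng.normal(size=(496, 30))        # placeholder summary feature vectors
    y = rng.choice(CATEGORIES, size=496)  # placeholder worked-material labels

    # 144 hidden units and seven outputs as in the text; the gradient-descent
    # solver and iteration count are assumptions.
    net = MLPClassifier(hidden_layer_sizes=(144,), solver="sgd",
                        max_iter=3000, random_state=2)
    net.fit(X, y)
    print(round(net.score(X, y), 2))      # training accuracy on the placeholder data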

It is easy to see that the neural network correctly classifies most replicated tools
according to the worked material. Only “bone” and “fresh wood” obtain less than 50% correct
classifications. However, even these errors are understandable: bone can be misclassified as
shell, shell as bone, fresh wood as dry wood, and butchery as fresh hide. Only worked
materials with similar hardness are confounded.
When comparing the test data (15% of experimental replications not used in the training
set) with the network’s interpretation (Table 2):

Output / Desired    BONE   BUTCHERY   DRY HIDE   DRY WOOD   FRESH HIDE   FRESH WOOD    SHELL
BONE                  10          0          0          3            0            0        2
BUTCHERY               2          5          1          0            2            0        0
DRY HIDE               4          1          5          1            0            5        1
DRY WOOD               2          0          2          8            0            4        3
FRESH HIDE             1          6          1          0            0            0        0
FRESH WOOD             3          1          0          3            0            4        2
SHELL                  6          0          1          1            0            3       13

Performance         BONE   BUTCHERY   DRY HIDE   DRY WOOD   FRESH HIDE   FRESH WOOD    SHELL
MSE                0.189     0.0605     0.0819     0.1074      0.04058       0.1057   0.1151
NMSE               0.974     0.5631     0.9590     0.8387        2.192       0.8254    0.725
MAE                0.288      0.133     0.1761     0.2202       0.1109       0.2225   0.2392
Min Abs Error      0.005    2.4E-05     0.0006    0.00289       0.0005       0.0003   0.0003
Max Abs Error      1.015     0.9161     0.9882     0.9690      0.67488       0.8836  0.86803
r                  0.318     0.7056     0.3194      0.422      0.28249       0.4213   0.5303
Percent Correct    35.71    38.4615         50         50            0           25   61.904

Table 2. Testing the PEDRA system with additional data.

Obviously, the test results are worse than the training results. The reason for the poor results
for fresh hide is the small size of the sample analyzed. In any case, errors again fall within
similar hardness categories. This fact can be used to explain what the neural network is really
doing: it seems able to “generalize” texture parameters characteristic of each work
activity, but not precisely of each worked material. The network could not find formally defined
grammars for the placement rules of texture components, but it has had some success in creating an
associative memory for similar hardness categories.
The results of texture analysis are even better when analyzing the kinematics of the
working activity (Table 3). The experimental database contained replications of three different
actions: cutting (longitudinal kinematics), scraping (transversal kinematics), and butchery (a
kind of kinematics that is intended to be longitudinal but, given the soft nature of the worked
material, meat, ends up being half longitudinal and half transversal). Using the same texture
attributes in the input layer, three output units, and a hidden layer with 13 neurons, we obtain
the following results:

Output / Desired    KYNEMAT(L)   KYNEMAT(T)   KYNEMAT(L(T))
KYNEMAT(L)                 196           43               0
KYNEMAT(T)                  39          125               0
KYNEMAT(L(T))               20           13              60

Performance         KYNEMAT(L)   KYNEMAT(T)   KYNEMAT(L(T))
MSE                0.159825824  0.147879337     0.042461323
NMSE               0.639813032   0.63808968     0.399318228
MAE                0.329403594  0.323901311     0.115223238
Min Abs Error       0.00045138  0.004611709     0.000183032
Max Abs Error      0.989362897  0.963606931     0.969773317
r                  0.615687356  0.612099014     0.802275184
Percent Correct     76.8627451  69.06077348             100

Table 3. Results from Experiment 2: discriminating between kinematic actions: cutting (longitudinal),
scraping (transversal) and butchery (half longitudinal, half transversal) (original experimental data set).

Butchery activity was correctly identified in all tested cases, and longitudinal and
transversal kinematics were distinguished in the majority of cases. Nevertheless, these good results
may be the consequence of a poor selection of the output categories. Longitudinal and
transversal kinematics have been replicated over different materials (shell, bone, dry wood,
fresh wood, dry hide, fresh hide), but the third kind of kinematics was exclusive to a single worked
material (meat). It is possible that the neural network had learnt to distinguish butchery
from the other categories, but not the activity itself. To address this problem we built a new
network to distinguish only the longitudinal from the transversal action, and deleted all the
butchery experiments from the database (Table 4). The network had the usual 35 summary
inputs, 13 units in the hidden layer, and only two outputs. The activation function of those
output neurons was adjusted so that it can be read as a probability measure (the joint activation
of both units sums to 1). The results for the experimental training set are the following:

Output / Desired    KYNEMAT(T)   KYNEMAT(L)
KYNEMAT(T)                 143           86
KYNEMAT(L)                  26          177

Performance         KYNEMAT(T)   KYNEMAT(L)
MSE                0.173922583  0.174325181
NMSE               0.730265892  0.731956322
MAE                0.360646928  0.361481759
Min Abs Error      0.000365206  0.000365206
Max Abs Error      0.962687106  0.962687106
r                   0.61404546   0.61404546
Percent Correct    84.61538462  67.30038023

Table 4. Results from Experiment 2: discriminating between kinematic actions: cutting (longitudinal) and
scraping (transversal) (original experimental data set).
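The probability reading of the two output units is what a softmax activation provides; a minimal sketch (the exact activation used in PEDRA is not specified beyond the two outputs summing to one):

    import numpy as np

    def softmax(z):
        """Convert raw output activations into values that sum to 1."""
        e = np.exp(z - z.max())           # subtract the maximum for numerical stability
        return e / e.sum()

    raw_outputs = np.array([1.3, -0.4])   # hypothetical pre-activation outputs
    probs = softmax(raw_outputs)
    print(probs, probs.sum())             # roughly [0.85 0.15], summing to 1.0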

Using the 20% of replicated tools not used for training as a test database, we also obtain excellent
results (Table 5), showing the ability of the network to learn to discriminate between the
working activities and, hence, to discover the social cause behind the visual appearance of
texture.

Output / Desired    KYNEMAT(T)   KYNEMAT(L)
KYNEMAT(T)                  40           29
KYNEMAT(L)                  12           43

Performance         KYNEMAT(T)   KYNEMAT(L)
MSE                 0.21612187  0.217864788
NMSE               0.887577423  0.894735306
MAE                0.402846942  0.406095708
Min Abs Error        0.0230306    0.0230306
Max Abs Error      0.981177104  0.981177104
r                    0.4237645    0.4237645
Percent Correct    76.92307692  59.72222222

Table 5. Testing the PEDRA system with additional data.

Rotational activities (those of burins) were correctly identified in all tested cases, and
longitudinal and transversal kinematics were distinguished in the majority of cases.

Conclusions
The surfaces of prehistoric lithic tools are not uniform but contain many
variations; some of them are of a visual or tactile nature. Such variations go beyond the
peaks and valleys characterizing surface micro-topography, which is the obvious frame
of reference for “textures” in everyday usage. Such patterns are not only intrinsic to the
solid itself. Beyond the physical, geological characteristics of the raw material, some
visual features of an artifact’s surfaces are consequences of the modifications that the
object has undergone throughout its history. After all, the surface of solids plays a
significant role in any kind of dynamic process. Consequently, when we analyze an
object’s surface macro- or microscopically, we should recognize some differential
features which are the consequence of an action (human or bio-geological) having
modified the original appearance of that surface.
In this paper we have described and measured use-wear evidence as an
archaeological texture in terms of the particular dispersion of luminance values across
the surface. Textures have been analyzed as complex visual patterns composed of
entities, or sub-patterns, that have characteristic brightness, color, slope, size, etc. Thus,
we have decomposed the image of the analyzed surface into regions that differ in the
statistical variability of their constitutive visual features.
Once the texture elements have been identified in the image, we have computed
statistical properties from the extracted texture elements and used these as texture
features. The method of texture classification involves two main steps. The first step is
obtaining prior knowledge of each class to be recognized. Normally this knowledge
encompasses sets of texture features of one or all of the classes; here it has been
acquired by experimental replication under laboratory conditions. The second step is
inducing the best discriminant function, for which a neural network has been programmed.
Results show the reliability of this approach.

Acknowledgments
This research has been made possible thanks to different research projects funded by the Spanish Ministry
of Research. Jordi Pijoan-López also acknowledges a research grant from the Catalan Government
(Generalitat de Catalunya). Andrea Toselli and Assumpció Vila also contributed at different moments of
our research. We also wish to thank our colleagues of the joint research team between the Universitat
Autònoma de Barcelona and the Institució Milà i Fontanals (CSIC). This research should be considered
among the results of the collaboration between both institutions.

References

BARCELÓ, J.A., PIJOAN-LÓPEZ, J., 2004, “Cutting or Scraping? Using Neural Networks to Distinguish Kinematics in
Use-Wear Analysis”. In Enter the Past. The E-way into the Four Dimensions of Cultural Heritage, edited by Magistrat der
Stadt Wien. BAR International Series 1227, pp. 427-431. Archaeopress, Oxford.

BARCELÓ, J.A., PIJOAN-LÓPEZ, J., TOSELLI, A., VILA, A., 2008, “Kinematics in use-wear traces: an attempt of
characterization through image digitalization”. In Prehistoric Technology, 40 Years Later: Functional Studies and the Russian
Legacy, edited by L. Longo and N. Skakun. BAR International Series 1783, pp. 63-74. Archaeopress, Oxford (UK).

BARCELÓ, J.A., 2008, Computational Intelligence in Archaeology. Information Science Reference (IGI Global), Hershey (NY).

DRIES, M.H. VAN DEN, 1998, Archeology and the Application of Artificial Intelligence. Case Studies on Use-wear Analysis of
Prehistoric Flint Tools. Archaeological Studies Leiden University No. 1. Faculty of Archaeology, University of Leiden
(The Netherlands).

PIJOAN-LÓPEZ, J., 2007, Quantificació de traces d’ús en instruments lítics mitjançant imatges digitalitzades: Resultats
d’experiments amb Xarxes Neurals i Estadística. PhD dissertation, Universitat Autònoma de Barcelona (Spain).
