
materials

Article
Investigation of the Rupture Surface of the Titanium
Alloy Using Convolutional Neural Networks
Ihor Konovalenko 1, Pavlo Maruschak 1, Olegas Prentkovskis 2,* and Raimundas Junevičius 2
1 Department of Industrial Automation, Ternopil National Ivan Pul’uj Technical University, Rus’ka str. 56,
46001 Ternopil, Ukraine; icxxan@gmail.com (I.K.); maruschak.tu.edu@gmail.com (P.M.)
2 Department of Mobile Machinery and Railway Transport, Faculty of Transport Engineering, Vilnius
Gediminas Technical University, Plytinės g. 27, LT-10105 Vilnius, Lithuania; raimundas.junevicius@vgtu.lt
* Correspondence: olegas.prentkovskis@vgtu.lt; Tel.: +370-5-2744784

Received: 11 October 2018; Accepted: 4 December 2018; Published: 5 December 2018 

Abstract: The study of fractographic images of metals is an important method that provides valuable information about the physical and mechanical properties of a metallic specimen, helps determine the causes of its fracture, and supports the development of models for optimizing its properties. One of the main lines of research in this case is studying the characteristics of the dimples of viscous detachment, which are formed on the metal surface in the process of its fracture. This paper proposes a method for detecting dimples of viscous detachment on a fractographic image, which is based on using a convolutional neural network. Compared to classical image processing algorithms, the use of the neural network significantly reduces the number of parameters to be adjusted manually. In addition, during training, the neural network can learn many more characteristic features that positively affect the quality of recognition. This makes the method more versatile and accurate. We investigated 17 models of convolutional neural networks with different structures and selected the optimal variant in terms of accuracy and speed. The proposed neural network classifies image pixels into two categories: “dimple” and “edge”. A method for transitioning from the probabilistic result at the output of the neural network to an unambiguous classification is proposed. The results obtained using the neural network were compared to the results obtained using a previously developed algorithm based on a set of filters. It was found that the results are very similar (more than 90% similarity), but the neural network reveals the necessary features more accurately than the previous method.

Keywords: image processing; convolutional neural network; dimples of tearing; fracture mechanisms

1. Introduction
For the non-destructive visual analysis of surfaces, machine vision systems are widely used.
They are aimed at recognizing visual patterns that are characteristic of defective and undamaged
surface fragments. The industrial application of such systems requires reliability, accuracy and speed
of response. The methods for detecting defects are classified into four main categories: statistical [1],
structural, filter-based [2], and model-based. Techniques based on the analysis of histograms [3,4],
co-occurrence matrices [5], and local binary patterns [6,7] are also used. Approaches based on
neural networks are used increasingly often. The latest advances in the field of convolutional
neural networks and the use of graphics processors to perform parallel calculations make such
approaches practical in real production conditions.
Weimer et al. [8] developed a method to identify common features of the image surface,
implemented with a two-layer neural network. Features of the image are generated by combining
multi-resolution analysis and grayscale patch statistics. The detection scheme consists of two major
sections. In the training mode, the algorithm uses a dataset of labeled training examples with and
without defect regions and extracts random patches from defective and non-defective regions. In the
last step, the neural network classifier uses all training examples to learn a hypothesis (model).
For practical suitability, the authors focused on a micro cold forming scenario, where a micro cold
forming machine produces micro cups. Evaluation with different parameters showed good defect
detection results.
In Reference [9], the authors presented an application of deep convolutional neural networks (DCNNs)
for automatic detection of rail surface defects. For classification, a DCNN was used, based on the
classical convolutional neural network proposed by LeCun et al. [10]. The authors developed three
models of convolutional networks, which contained 2–3 convolutional layers and 2–3 fully connected
layers with different numbers of neurons. After each convolutional layer, max-pooling was used.
Samples were classified into three classes. The first class represented the normal rail. The second class
contained all types of small defects, and finally the third class only consisted of rail joints. With the
proposed DCNNs, the rail defect classes can be successfully classified with almost 92% accuracy.
Malekzadeh et al. [11] proposed an automatic-image-based aircraft defect detection method using
Deep Neural Networks. Dataset images were taken in a straight view of the airplane fuselage. For each
image, a binary mask was created by an experienced inspector to represent defects. Each image
was partitioned into 65 × 65 patches with respect to image resolution to include the smallest
defect within a single patch. There were two classes of patches: defect and no-defect. The authors
used a convolutional neural network pre-trained on ImageNet as a feature extractor: the AlexNet [12]
and VGG-F [13] networks. The proposed algorithm achieves about 96.37% accuracy. The proposed
algorithm is able to detect almost all the defects of the aircraft fuselage and significantly reduce the
workload of manual inspection.
An algorithm for surface defect detection of steel plates was proposed by Tian and Xu [14].
For this purpose, the authors used the extreme learning machine (ELM), a fast machine learning
algorithm implemented by a hidden matrix generated with random initialization parameters. The ELM
algorithm was proposed by Huang et al. [15] as a kind of single-hidden-layer feedforward neural network.
The accuracy of defect detection on the testing dataset is 90–94%. The CPU time needed to analyze one
image ranges from 0.03 to 250 s.
Cha et al. [16] proposed a vision-based method using a deep architecture of convolutional neural
networks for detecting concrete cracks without calculating the defect features. Analyzed images
were cropped into small images with 256 × 256 pixel resolutions for training and validation.
The small images were used as the dataset to train the CNN. The test images were scanned by
the trained CNN using a sliding window technique, which facilitated the scanning of any images
with resolutions higher than 256 × 256 pixels, and the crack maps were consequently obtained.
Subsequently, the authors expanded their methodology by offering the Faster-R-CNN-based structural
visual inspection method [17]. The method detects five types of surface damage: concrete cracks,
steel corrosion (medium and high levels), bolt corrosion, and steel delamination. The authors showed
that Faster R-CNN has better computation efficiency based on the optimized architecture of the
network, and it provides more flexible sizes of the bounding boxes to accommodate different sizes
and scales of the input images. In addition, a video-based damage
detection framework using the Faster R-CNN was developed, which can provide quasi real-time,
autonomous vision-based structural damage detection.
A separate line of research on images is fractographic analysis, the main tools of which are
solid-state physics (the theory of dislocations, etc.), materials science (in the case of building
correlations between fracture parameters and the size of grains, inclusions, etc.), optical–digital
methods, etc. [18,19].
Until recently, fracture surface parameters of materials and structures were measured either manually
or automatically, but the measurement software was adjusted by the operator. This brought significant
subjectivity and complexity to the measurements. Therefore, the development and testing of new
approaches to the automated fractographic analysis of material fractures is very important [20,21].
The peculiarity of working with fractographic images is that, for their analysis, it is necessary
not only to identify the area, in which the objects are located, but also to evaluate the morphological
features of the objects. The calculation of the quantitative parameters of dimples of viscous detachment
(their count, size, shape, etc.) will not only allow establishing the causes of material fracture, but will also
provide informative input parameters for models aimed at optimizing the material properties. In addition,
with a view to fractodiagnostics and establishing the causes that lead to the fracture of load-bearing
structures (especially in aviation and space technology, and nuclear energy), algorithms are needed that
make it possible to identify, recognize and calculate the parameters of dimples of detachment, which will
provide for their further statistical analysis and ensure high accuracy and repeatability of the experiment.
We should note that in the context of fractographic analysis of the surface of tearing,
the disadvantage of the previously considered approaches is either their narrow focus on the solution
of a particular task (classification of images, etc.) [8–15], or a zonal result in the form of bounding boxes,
which makes it impossible to calculate the morphological parameters of the objects found [16,17].
Therefore, the task of developing a methodology that would enable effective automation of the process
of fractographic research remains relevant.
In this paper, we proposed a method for detecting dimples of viscous detachment on fractograms
of titanium alloys, which is based on the use of a DCNN. For this purpose, a number of neural
network models with various sets of hyperparameters were developed, their accuracy and speed
were evaluated, and the optimal neural network model was selected. The proposed network contained
two convolution layers, two subsampling layers, and two fully connected layers. The neural network
classified the pixels into two categories: “dimples” and “edges”, and assigned each pixel to one of these
classes. To train the neural network, a known algorithm for analyzing fractograms was used. A series
of images of the rupture surface of a titanium alloy was analyzed using the proposed neural network.
The results obtained were compared to those of the basic method.

2. Methods
To identify dimples of viscous detachment, a convolutional neural network was applied to
fractograms of the rupture surface of the titanium alloy VT23. All pixels of the image were divided
into two classes: those that belong to the dimples of detachment, and those that do not.
To train the neural network, the result of detecting dimples by the previously developed algorithm [22]
was used. The essence of this algorithm is finding the edges of the dimples of detachment. It contains
the operations of filtering with a set of edge-detection filters, adaptive thresholding, skeletonization,
dilation, and segregation of connected areas. Each of these steps needs to be customized in practice.
Using a trained neural network reduces the number of parameters that need to be adjusted
manually, which simplifies the process of image analysis.

2.1. Training Dataset of the Neural Network


Figure 1 shows the image of the rupture surface of the titanium alloy obtained with the scanning
electron microscope REM-106I (JSC «SELMI», Sumy, Ukraine). On the basis of the initial images
(Figure 1a) and the results of finding dimples by the algorithm [22] (Figure 1b), a training dataset for
the convolutional network was created.

Figure 1. (a) Dimples of viscous detachment on the surface of a specimen from titanium alloy VT23;
(b) the result of their detection using the algorithm [22]; (c) a diagram illustrating the principle of
training the neural network: (1) training image, (2) image with correct answer labels, (3) zoomed
fragment of training image with the size 2wk + 1 × 2wk + 1, which is an element of the training set,
(4) fragment of the image with labels, which by its position and size corresponds to (3), (5) the central
pixel of (3), and (6) the pixel containing the correct answer label for (3).
Let the initial greyscale image be described by the function io (x, y), and the binary image
showing the result be expressed by the function is (x, y). For each pixel (x, y) of the initial image,
a square window with the size 2wk + 1 × 2wk + 1 and corner coordinates (x − wk , y − wk ),
(x + wk , y − wk ), (x − wk , y + wk ), (x + wk , y + wk ) was used. The fragment of the initial image
io (x, y) that belonged to this window was a single training sample for the neural network (Figure 1c).
The label indicating the correct answer for such a fragment was the value of is (x, y). Thus, a fragment
of the initial image with a certain central pixel (x, y) was provided as the input sample of the neural
network, and the result obtained at the output indicated the affiliation of this pixel to one of the two
classes: “dimple” (label 0) or “edge” (label 1). To provide a greater variety of training data,
the fragments were selected with a step of 3 pixels along both axes.
Based on 46 fractograms measuring 400 × 277 pixels, a training dataset for the neural network was
created, which contained 562,856 image fragments measuring 31 × 31 pixels (wk = 15). Another 11
fractograms were used for testing. On this basis, a test dataset of 134,596 specimens was generated.
Subsequently, neural networks of different architectures were investigated at different values of wk .
For wk = 7 and wk = 10, we used a central portion of the desired size (15 × 15 pixels at wk = 7
and 21 × 21 pixels at wk = 10) taken from the initial fragment. After training the neural network,
we used a floating window measuring 2wk + 1 × 2wk + 1, which was moved along the fractogram
with a step of 1 pixel along both axes of the image. Each section that got into the window was fed to
the input of the neural network. Thus, we obtained a prediction for the central pixel of the window
as to whether it belongs to one of the classes.
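For illustration, the sampling scheme described above can be sketched as follows. This is a minimal sketch assuming the images are held in NumPy arrays; the function name, signature, and border handling are illustrative choices, not the authors' code.

```python
import numpy as np

def extract_patches(i_o, i_s, wk=15, step=3):
    # Collect (2*wk + 1) x (2*wk + 1) fragments of the greyscale image i_o;
    # the label of each fragment is the value of the binary result image i_s
    # at the fragment centre: 0 = "dimple", 1 = "edge".
    h, w = i_o.shape
    patches, labels = [], []
    for y in range(wk, h - wk, step):       # step of 3 pixels along both axes
        for x in range(wk, w - wk, step):
            patches.append(i_o[y - wk:y + wk + 1, x - wk:x + wk + 1])
            labels.append(int(i_s[y, x]))
    return np.asarray(patches), np.asarray(labels)
```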

2.2. Architecture of Convolutional Neural Network
Seventeen models of convolutional neural networks were designed and investigated to detect
dimples of viscous detachment on surface images. The best neural network architectures are shown
in Figure 2. A complete list of the investigated networks is given in Table 1. The neural networks
were implemented using the Keras library on the basis of Theano tools [23] and the CUDA Toolkit [24].
CUDA is a parallel computing platform and programming model developed by NVIDIA for general
computing on graphical processing units (GPUs). For training and testing of the neural networks,
we used a normal desktop PC with a GPU NVIDIA GeForce GT 630M.


Figure 2. Neural network architectures with the best accuracy/speed ratio. (a) neural network with
2 convolutional layers and 3 intermediate fully connected layers; (b) network with a single convolutional
layer and a stack of 7 small fully connected layers; (c) network with 2 convolutional layers and 2 big
fully connected layers (this model showed the best performance); (d) model with 2 convolutional
layers and a stack of 6 intermediate fully connected layers. CV—convolutional layers;
MP—max-pooling layers; FC—fully connected layers.

In general terms, the structure of these networks contains two blocks of layers: (a) convolutional
layers (in pairs with max-pooling layers) to detect spatial features, and (b) fully connected layers to
generalize the found features. The input layer in all models consisted of 2wk + 1 × 2wk + 1 neurons,
and the output layer was composed of two neurons with the Softmax activation function. The first
and second output neurons show the probabilities that the central pixel of the image fragment fed at
the input belongs to the classes “dimple” and “edge”, respectively. For the neurons of the hidden
convolutional and fully connected layers, the Rectified Linear Unit (ReLU) activation function was
used. It can be defined as f(x) = max(0, x). In a preliminary analysis of the images, the accuracy of
the neural network with activation functions such as Sigmoid, Tanh and ReLU was investigated.
It was found that the best accuracy was provided by the ReLU activation function; therefore, it was
used in further research.
Table 1. Structure and characteristics of the investigated neural networks. CV—convolutional layers
(the brackets indicate the number of feature maps and the size of the kernels), MP—max-pooling layers
(the brackets show the size of the pooling window), FC—fully connected layers (the brackets indicate
the number of neurons).

No. | CNN Model Description | wk, px | Average Test Accuracy, % | Diagnostic Time, s
1 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–CV(32 × 3 × 3)–FC(1024) | 15 | 92.81 | 45
2 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–FC(250)–FC(250)–FC(250) | 10 | 92.41 | 19
3 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–MP(2 × 2)–FC(100)–FC(100)–FC(100)–FC(100)–FC(100)–FC(100) | 10 | 92.39 | 17
4 | CV(32 × 5 × 5)–MP(2 × 2)–CV(64 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–FC(1024) | 15 | 92.23 | 61
5 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–MP(2 × 2)–FC(400)–FC(400) | 10 | 92.11 | 12
6 | CV(32 × 3 × 3)–MP(2 × 2)–FC(60)–FC(60)–FC(60)–FC(60)–FC(60)–FC(60)–FC(60) | 10 | 92.08 | 8
7 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–MP(2 × 2)–FC(1024) | 10 | 92.00 | 17
8 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–CV(32 × 3 × 3)–CV(32 × 3 × 3)–FC(1024) | 10 | 91.91 | 24
9 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–FC(1024) | 10 | 91.91 | 19
10 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–FC(200)–FC(200) | 10 | 91.70 | 19
11 | CV(32 × 3 × 3)–MP(2 × 2)–FC(100)–FC(100)–FC(100) | 10 | 91.50 | 9
12 | CV(48 × 3 × 3)–MP(2 × 2)–FC(50)–FC(50)–FC(50)–FC(50)–FC(50)–FC(50)–FC(50)–FC(50)–FC(50) | 10 | 91.44 | 11
13 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–FC(1024) | 10 | 91.22 | 19
14 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–CV(20 × 3 × 3)–FC(400)–FC(400) | 7 | 90.70 | 11
15 | CV(32 × 3 × 3)–MP(2 × 2)–CV(32 × 3 × 3)–CV(32 × 3 × 3)–CV(32 × 3 × 3)–FC(1024) | 7 | 90.60 | 13
16 | CV(20 × 3 × 3)–MP(2 × 2)–CV(30 × 3 × 3)–MP(2 × 2)–FC(200)–FC(200) | 7 | 89.95 | 7
17 | CV(20 × 5 × 5)–MP(4 × 4)–CV(32 × 3 × 3)–MP(4 × 4)–FC(1024) | 15 | 85.21 | 18

As the loss function, categorical cross-entropy was used. If the model predicts q(x) while the target
is P(x), then the categorical cross-entropy is defined as follows: H(P, q) = − ∑x P(x) log(q(x)).
The cross-entropy measures the performance of a classification model whose output is a probability
value between 0 and 1. As the metric, we used accuracy: the proportion of correct predictions with
respect to the targets. For training the CNNs, we used the Adam optimizer [25], which implements
an algorithm for first-order gradient-based optimization of stochastic objective functions, based on
adaptive estimates of lower-order moments.
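As a quick numerical illustration of this loss (a toy example, not taken from the paper): for one pixel whose true class is “dimple”, the one-hot target concentrates all probability in the first output neuron.

```python
import numpy as np

P = np.array([1.0, 0.0])    # target: the pixel belongs to the "dimple" class
q = np.array([0.9, 0.1])    # network prediction for ("dimple", "edge")
H = -np.sum(P * np.log(q))  # categorical cross-entropy: -log(0.9) ~ 0.105
```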
The investigated models of neural networks differed in the number and size of convolutional and
fully connected layers, as well as in the size of the input layer. The training process was stopped if,
after 15 epochs, the accuracy of the validation data did not exceed the maximum accuracy attained
during the previous period.
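For reference, the best-performing model from Table 1 (network #5) can be written in Keras roughly as follows. This is a sketch under stated assumptions: the paper does not specify padding, channel ordering, or batch size, so the exact parameter count depends on those choices, and the variable names are illustrative.

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from keras.callbacks import EarlyStopping

wk = 10  # fragments of (2*wk + 1) x (2*wk + 1) = 21 x 21 pixels

# Network #5: CV(32x3x3)-MP(2x2)-CV(32x3x3)-MP(2x2)-FC(400)-FC(400)
model = Sequential([
    Conv2D(32, (3, 3), activation='relu',
           input_shape=(2 * wk + 1, 2 * wk + 1, 1)),
    MaxPooling2D(pool_size=(2, 2)),
    Conv2D(32, (3, 3), activation='relu'),
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(400, activation='relu'),
    Dense(400, activation='relu'),
    Dense(2, activation='softmax'),  # outputs: P("dimple"), P("edge")
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# Stop when validation accuracy has not improved for 15 epochs
# (the monitored key is 'val_accuracy' in newer Keras versions)
early_stop = EarlyStopping(monitor='val_acc', patience=15)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])
```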

2.3. Comparison of the Developed Neural Networks


The main criteria for choosing the optimal neural network architecture were the recognition
accuracy on the test data and the time of analysis. It was found that for the investigated type of images,
the use of more than two convolutional layers does not lead to a significant increase in accuracy,
but significantly increases both the training time and the analysis time. Moreover, neural networks
with a single convolutional layer and a sufficient number of feature maps are also quite efficient.
However, in this case, a deep block of fully connected layers is required. The need for only a small
number of convolutional layers suggests that the features allocated for diagnosis by the neural network
are simple. It was also found that it is more efficient to use a bigger number of fully connected layers
with a small number of neurons than one fully connected layer with a large number of neurons.
This reduces not only the number of training parameters of the neural network, but also its running
time. The accuracy of the diagnosis remains high.
Figure 3 shows the averaged accuracy at of detecting dimples in test images for the investigated
neural networks with a different number of training parameters P. The results obtained show that
a bigger number of neurons is not a guarantee of high accuracy of the neural network. In Figure 3,
we can distinguish three groups of neural networks: A—simple, with a small number of parameters
and lower accuracy; B—more complex, with more parameters and higher accuracy; C—the most
complex, with the biggest number of parameters and the highest accuracy. The most valuable from
a practical point of view are the neural networks of group B, which combine simplicity of structure
and accuracy of diagnostics.

Figure 3. Relationship between the accuracy at and the number of parameters P of the neural network.
At the input of the neural network, fragments of size 2wk + 1 × 2wk + 1 pixels were fed.
The networks were investigated at wk = 7, wk = 10 and wk = 15. It was found that for the
analyzed images, wk = 10 is optimal. With a small window (wk = 7), the accuracy of the diagnosis
is low. A large window (wk = 15) allows getting higher accuracy, but the increment is very small
compared to the middle window (wk = 10).
Figure 4 illustrates the relationship between the accuracy and time of diagnostics of the neural
networks. Of all the architectures, four combine good performance and accuracy of diagnostics.
Their structures are shown in Figure 2. For further practical use and analysis of the dimples of
detachment of the titanium alloy according to the fractograms, the neural network with the
architecture shown in Figure 2c (also Table 1, network #5) was chosen. On average, it ensured
a diagnostic accuracy of 92.11% for the test data. The network was trained for 77 epochs. For this
network (wk = 10), the number of training parameters was 108,622, and the time of diagnostics of an
image with a size of 400 × 277 pixels was 12 s (on the above equipment).

Figure 4. Relationship between the accuracy at and the duration of image analysis t. The neural
networks shown in Figure 2 are highlighted.

The processing time of an image can be reduced if we apply a step larger than 1 pixel to the
floating window in the analysis of the fractogram. In this case, interpolation of the missed values
should be performed after the analysis. However, at the same time, the accuracy of detection of the
boundary elements of objects will be lost. Since the proposed neural network is used for laboratory
analysis of test specimens, the selected processing time was considered acceptable.
The visualization of the weights of 20 kernels of the trained model (Figure 5) shows that the best
response will be provided by the above convolutional layer for the pixels that correspond to the edges
of the dimples. Here, both the kernels that correspond to a vertical/horizontal brightness gradient
and the kernels with a diagonal gradient are presented.
Figure 5. Visualization of the kernel weights of the trained model. Black cells represent the minimum
weight value (−1.262), and white cells represent the maximum weight value (1.010).

3. Results and Discussion


Figure 6a shows the fractograms from the test dataset, and Figure 6b presents the diagnostic result
using the selected neural network model. The fractograms were obtained by electron microscopy of
the rupture surface of titanium alloys VT23 and VT23M. Although only two labels were used during
the training of the network, 0 (affiliation to the dimple) and 1 (affiliation to the edge), a value within
the range (0–1) was obtained at the output for each pixel, which indicates the probability of belonging
to a certain class according to the selected neural network model. Let an array of output values
obtained for all pixels of the image for the class “edge” be denoted by Pcnn (x, y). A visualization of
this array is shown in Figure 6b.
To make an explicit decision on the affiliation of pixels to one of the classes, a specific threshold
T needs to be selected. Pixels for which Pcnn (x, y) ≤ T are considered to belong to the “dimple” class,
otherwise to the “edge” class. A typical normalized histogram of the distribution of values Pcnn (x, y)
for one of the investigated images is given in Figure 7. Values of Pcnn close to 0 correspond to pixels
that are highly likely to belong to the “dimple” class. Values close to 1 correspond to pixels that are
highly likely to belong to the “edge” class. As seen from the histogram, the two peaks are grouped
around two opposite values, 0 and 1. This is a sign of successful operation of the neural network.
The first (higher) peak for Pcnn ≈ 0, . . . , 0.15 corresponds to the “dimple” class, and the second
(smaller) peak for Pcnn ≈ 0.85, . . . , 1 corresponds to the “edge” class. In total, these two zones contain
more than 80% of all pixels. The remaining 20% of the pixels are distributed in the range
Pcnn ≈ 0.15, . . . , 0.85. Thus, the choice of the boundary T in this range has a negligible effect on the result.
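To make this concrete, assembling the probability map Pcnn with the floating window and binarizing it with the boundary T might look like the following sketch (illustrative names and a batch of one per prediction for clarity; a practical implementation would batch the window fragments for speed):

```python
import numpy as np

def probability_map(model, image, wk=10, step=1):
    # Slide a (2*wk + 1)-pixel floating window over the fractogram and store
    # the network's "edge" probability for each central pixel; a step > 1
    # trades boundary accuracy for speed (see Section 2.3).
    h, w = image.shape
    p_cnn = np.zeros((h, w))
    for y in range(wk, h - wk, step):
        for x in range(wk, w - wk, step):
            patch = image[y - wk:y + wk + 1, x - wk:x + wk + 1]
            patch = patch[np.newaxis, :, :, np.newaxis]   # batch of one
            p_cnn[y, x] = model.predict(patch)[0][1]      # second neuron: "edge"
    return p_cnn

# Binarization: pixels with p_cnn <= T go to the "dimple" class
# (T = 0.1 is the value adopted later in the text).
# dimple_mask = probability_map(model, image) <= 0.1
```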

(Figure 6 panel labels: static fracture; static fracture; static fracture after DNP.)
Figure 6. (a) SEM images of the titanium alloy VT23M (first image) and VT23 (second and third
images) [26]; (b) visual representation of the result from the output of the neural network; (c) the
result of detecting dimples (black spots), which is imposed on the initial SEM images.

Figure 7. Normalized histogram of distribution of values Pcnn (x, y). η is the share of image pixels,
which corresponds to the value Pcnn .
cnn

Figure 8 shows the visual representation of the array Pcnn (x, y) for one of the images and gives its
cross-sections along two coordinate axes. The cross-sections show that the peaks corresponding to the
edges of the dimples recognized by the neural network have a high slope. This confirms the conclusion
that choosing the boundary from the middle range of Pcnn has a slight effect on the final result. It is
also evident that the heights of these peaks are different in the sections, so the choice of T in the
upper range causes the loss of some edges. At the same time, even at very low values, i.e., Pcnn < 0.1,
individual dimples are highlighted, since the line slope in the section is high.

Figure 8. Visualization of the array Pcnn (x, y) and its cross-sections in two mutually
perpendicular directions.

Thus, the choice of the boundary value T in the upper range of values (Pcnn ≥ 0.85; see Figure 7)
results in the allocation of pixels that are very likely to belong to the edges, but not all edges can be
detected. This can lead to the “coalescence” of adjacent objects—the dimples. Choosing T in the lower
range (Pcnn ≤ 0.15) will provide for the allocation of pixels that are very likely to belong to the dimples.
Since the aim of this research is to study the dimples, the boundary value T was chosen in this lower
range. For the studied group of images, T = 0.1 was used. This corresponds to the allocation to the
“dimple” class of those pixels for which the neural network has calculated a probability of more than 90%.
Figure 6c shows the results of detecting dimples superimposed on the original images (in order to
increase the contrast, we increased the brightness of the initial images). The trained neural network
clearly identifies the image areas that correspond to the dimples. Interestingly, the visual expert analysis
of the results obtained revealed that the neural network often allocates dimples better than the
original method. The most probable explanation for this phenomenon is that the neural network
of the selected architecture allocates 40 feature maps, while the original method [22] uses only two
spatial filters (therefore, it focuses on two features only).

3.1. Comparison of the Results Obtained Using the Proposed Neural Network with the Results Obtained Using
the Initial Algorithm

The developed neural network performs pixel-by-pixel processing of the image, making predictions
for each pixel concerning its affiliation to one of the classes. However, the practical value lies in the
calculation of the dimple parameters in the image: their distinct and total area, equivalent diameter,
angle of inclination, etc. A comparison of the results obtained on the basis of the proposed neural
network with the results obtained by the initial algorithm [22,27,28] shows their high similarity (Table 2).
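As an illustration of this step, the count, areas and equivalent diameters can be obtained from the binary dimple mask with standard connected-component labelling (a sketch assuming SciPy and that each dimple is one connected region of the mask; not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def dimple_parameters(dimple_mask):
    # Label connected dimple regions, then compute each region's area and
    # equivalent diameter: the diameter of a circle with the same area,
    # d = sqrt(4 * A / pi).
    labels, count = ndimage.label(dimple_mask)
    areas = ndimage.sum(dimple_mask, labels, index=range(1, count + 1))
    eq_diameters = np.sqrt(4.0 * areas / np.pi)
    return count, areas.sum(), eq_diameters

# count, total_area, d_eq = dimple_parameters(dimple_mask)
```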

Table 2. Comparison of the results obtained using the neural network and method [22].

Parameter | Deviation of the Result of the Developed Neural Network in Relation to the Result of Method [22]
The total area of the dimples | <4.0%
The number of dimples | <4.0%
Average equivalent diameter of the dimple | <3.0%

Figure 9 shows a distribution diagram of equivalent diameters for one of the investigated images
calculated using both methods. The diagram illustrates several features of the proposed approach
compared to those of the original method:
• The neural network detects a smaller number of small dimples. This can be explained by the fact
that the old method is focused on finding brightness differences; therefore, dimples are sometimes
erroneously detected on homogeneous parts of the image;
• The neural network also detects a smaller number of large dimples. This is due to the greater
sensitivity of the neural network, which takes into account significantly more feature maps. As a
result, large dimples become divided into several smaller ones;
• The neural network detects a bigger number of middle-sized dimples. This number is likely to
increase due to the division of some large dimples and the coalescence of some small dimples.

Figure 9. Diagram of the distribution of equivalent diameters, calculated using the original method [22]
and the proposed CNN.

4. Conclusions

In this paper, we proposed a method for detecting dimples of viscous detachment on fractographic
images using a 4-layer convolutional neural network, which allocates each pixel of the input image to
one of two classes: “dimple” or “edge”. The neural network was trained on the basis of the results
obtained using the previously developed algorithm. The neural network was tested on surface images
of titanium alloys VT23 and VT23M. Experimental results show the high accuracy of the method and
indicate the possibility of its practical application. It was revealed that in the general case, the neural
network recognizes the sought objects better than the original method based on the set of filters.
Based on the results obtained with the model, dimple parameters such as their area, number, equivalent
diameter, shape factor, visual depth, inclination, etc., are calculated. Having these parameters for the
whole set of surface dimples, it is possible to perform their statistical analysis and make conclusions
about the properties of the material.

Author Contributions: Conceptualization, P.M.; Formal analysis, I.K. and O.P.; Investigation, I.K., P.M., O.P.,
R.J.; Methodology, I.K.; Project administration, O.P.; Validation, I.K., P.M., O.P., R.J.; Visualization, O.P.;
Writing—original draft, P.M. and I.K.; Writing—review & editing, P.M. and O.P.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.

References
1. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000. [CrossRef]
2. Yun, J.P.; Choi, S.; Seo, B.; Park, C.H.; Kim, S.W. Defects detection of billet surface using optimized Gabor
filters. In Proceedings of the 17th World Congress the International Federation of Automatic Control, Seoul,
Korea, 6–11 July 2008.
3. Vidal, M.; Ostra, M.; Imaz, N.; Garcia-Lecina, E.; Ubide, C. Analysis of SEM digital images to quantify crack
network pattern area in chromium electrodeposits. Surf. Coat. Technol. 2016, 285, 289–297. [CrossRef]
4. Hassani, A.; Ghasemzadeh Tehrani, A.H. Crack detection and classification in asphalt pavement using image
processing. In Proceedings of the 6th Rilem International Conference on Cracking in Pavements, Chicago, IL,
USA, 16–18 June 2008.
5. Dutta, S.; Das, A.; Barat, K.; Roy, H. Automatic characterization of fracture surfaces of AISI 304LN stainless
steel using image texture analysis. Measurement 2012, 45, 1140–1150. [CrossRef]
6. Hu, Y.; Zhao, C.-X. A local binary pattern based methods for pavement crack detection. J. Pattern Recognit. Res.
2010, 1, 140–147. [CrossRef]
7. Ojala, T.; Pietikainen, M.; Maenpaa, T. Multiresolution gray-scale and rotation invariant texture classification
with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [CrossRef]
8. Weimer, D.; Thamer, H.; Scholz-Reiter, B. Learning defect classifiers for textured surfaces using neural
networks and statistical feature representations. Procedia CIRP 2013, 7, 347–352. [CrossRef]
9. Faghih-Roohi, S.; Hajizadeh, S.; Nunez, A.; Babuska, R.; De Schutter, B. Deep convolutional neural networks
for detection of rail surface defects. In Proceedings of the 2016 International Joint Conference on Neural
Networks, Vancouver, BC, Canada, 24–29 July 2016.
10. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation
applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551. [CrossRef]
11. Malekzadeh, T.; Abdollahzadeh, M.; Nejati, H.; Cheung, N. Aircraft Fuselage Defect Detection using Deep
Neural Networks. Available online: https://arxiv.org/ftp/arxiv/papers/1712/1712.09213.pdf (accessed on
4 December 2018).
12. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks.
Available online: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-
neural-networks.pdf (accessed on 4 December 2018).
13. Chatfield, K.; Simonyan, K.; Vedaldi, A.; Zisserman, A. Return of the devil in the details: Delving deep into
convolutional nets. Available online: https://arxiv.org/pdf/1405.3531.pdf (accessed on 4 December 2018).
14. Tian, S.; Xu, K. An algorithm for surface defect identification of steel plates based on genetic algorithm and
extreme learning machine. Metals 2017, 7, 311. [CrossRef]
15. Huang, G.; Zhu, Q.; Siew, C. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70,
489–501. [CrossRef]
16. Cha, Y.J.; Choi, W.; Büyüköztürk, O. Deep learning-based crack damage detection using convolutional neural
networks. Comput.-Aided Civ. Infrastruct. Eng. 2017, 32, 361–378. [CrossRef]
17. Cha, Y.J.; Choi, W.; Suh, G.; Mahmoudkhani, S.; Büyüköztürk, O. Autonomous structural visual inspection
using region-based deep learning for detecting multiple damage types. Comput.-Aided Civ. Infrastruct. Eng.
2018, 33, 731–747. [CrossRef]
18. Zhong, Q.P.; Zhao, Z.H.; Zhang, Z. Development of "Fractography" and research of fracture micromechanism.
J. Mech. Strength 2005, 27, 358–370.
19. Azevedo, C.R.F.; Marques, E.R. Three-dimensional analysis of fracture, corrosion and wear surfaces.
Eng. Fail. Anal. 2010, 17, 286–300. [CrossRef]
20. Bastidas-Rodriguez, M.X.; Prieto-Ortiz, F.A.; Espejo, E. Fractographic classification in metallic materials by
using computer vision. Eng. Fail. Anal. 2016, 59, 237–252. [CrossRef]

21. Kosarevych, R.Y.; Student, O.Z.; Svirs’ka, L.M.; Rusyn, B.P.; Nykyforchyn, H.M. Computer analysis of
characteristic elements of fractographic images. Mater. Sci. 2013, 48, 474–481. [CrossRef]
22. Konovalenko, I.; Maruschak, P.; Prentkovskis, O. Automated method for fractographic analysis of shape and
size of dimples on fracture surface of high-strength titanium alloys. Metals 2018, 8, 161. [CrossRef]
23. Gulli, A.; Pal, S. Deep Learning with Keras; Packt Publishing: Birmingham, UK, 2017.
24. CUDA Zone. Available online: https://developer.nvidia.com/cuda-zone (accessed on 22 April 2018).
25. Kingma, D.; Ba, J. Adam: A Method for Stochastic Optimization. Available online: https://arxiv.org/pdf/
1412.6980.pdf (accessed on 4 December 2018).
26. Chausov, M.G.; Maruschak, P.O.; Pylypenko, A.P.; Berezin, V.B. Features of Deformation and Fracture of Plastic
Materials Under Impact-Oscillatory Loading; Terno-Graf: Ternopil, Ukraine. (In Ukrainian)
27. Maruschak, P.; Konovalenko, I.; Chausov, M.; Pylypenko, A.; Panin, S.; Vlasov, I.; Prentkovskis, O. Impact of
dynamic non-equilibrium processes on fracture mechanisms of high-strength titanium alloy VT23. Metals
2018, 8, 983. [CrossRef]
28. Konovalenko, I.V.; Maruschak, P.O. Application of the properties of fuzzy sets in the computer analysis of
the shapes and sizes of tear pits. Mater. Sci. 2018, 53, 548–559. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
