
Applied Intelligence

https://doi.org/10.1007/s10489-019-01445-0

Multiparametric similarity measures on Pythagorean fuzzy sets


with applications to pattern recognition
Xindong Peng1 · Harish Garg2

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Abstract
Pythagorean fuzzy sets (PFSs), characterized by membership degrees and non-membership degrees, are a more effective
and flexible way than intuitionistic fuzzy sets (IFSs) to capture indeterminacy. In this paper, some new diverse types of
similarity measures, overcoming the blemishes of the existing similarity measures, for PFSs with multiple parameters are
studied, along with their detailed proofs. The various desirable properties among the developed similarity measures and
distance measures have also been derived. A comparison between the proposed and the existing similarity measures has
been performed in terms of the division by zero problem, unsatisfied similarity axiom conditions, and counter-intuitive cases
for showing their effectiveness and feasibility. The initiated similarity measures have been illustrated with case studies of
pattern recognition, along with the effect of the different parameters on the ordering and classification of the patterns.

Keywords Pythagorean fuzzy sets · Similarity measures · Distance measures · Pattern recognition

1 Introduction

The similarity measure indicates the degree of similarity between two objects. Measures of similarity between fuzzy sets [1] and their extensions have attracted great attention from researchers for their wide applications in diverse fields, such as pattern recognition, decision making, medical diagnosis and image processing. Numerous similarity measures between fuzzy sets have been developed and discussed in recent years. Atanassov [2] extended fuzzy sets to intuitionistic fuzzy sets (IFSs); since then, a number of similarity measures between IFSs have been discussed in the literature [3–12].

Recently, the Pythagorean fuzzy sets (PFSs) [13], initially proposed by Yager, have been regarded as an efficient tool for depicting the vagueness of multi-criteria decision making (MCDM) problems [14]. The PFSs are also characterized by the membership degree and the nonmembership degree, whose sum of squares is less than or equal to one. In some cases, the PFSs can deal with problems that the IFSs cannot. For example, if a decision maker (DM) gives the membership degree and the nonmembership degree as 0.6 and 0.8, respectively, then it is only valid for the PFSs. That is to say, all the IFSs are a part of the PFSs, which indicates that the PFSs are more powerful for solving vague problems. Zhang and Xu [15] presented the detailed mathematical expression for PFSs and defined the concept of the Pythagorean fuzzy number (PFN). Meanwhile, they also proposed a revised Pythagorean fuzzy technique for order preference by similarity to ideal solution (PF-TOPSIS) for dealing with the MCDM problem with PFNs. Peng and Yang [16] explored the division and subtraction operations for PFNs and also initiated a Pythagorean fuzzy superiority and inferiority ranking (PF-SIR) method to solve the multicriteria group decision-making (MCGDM) problem with PFNs. Moreover, some extension models of PFSs [17] have been rapidly developed, such as the interval-valued Pythagorean fuzzy set (IVPFS) [18], Pythagorean fuzzy multigranulation rough set (PFMRS) [19], Pythagorean fuzzy linguistic set (PFLS) [20], Pythagorean uncertain linguistic set (PULS) [21], hesitant Pythagorean fuzzy set (HPFS) [22], and Pythagorean fuzzy soft set (PFSS) [23].

✉ Xindong Peng
952518336@qq.com

1 School of Information Sciences and Engineering, Shaoguan University, Shaoguan, China
2 School of Mathematics, Thapar Institute of Engineering & Technology, Deemed University, Patiala 147004, Punjab, India

However, the most important task for the DM is to rank the objects so as to obtain the desired object(s)

[24–27]. For this, researchers have made efforts to enrich the concept of information measures in the Pythagorean fuzzy environment [28]. Zhang and Xu [15] introduced a distance measure for PFSs, while Yang et al. [29] pointed out an unreasonable case in the proof in [15]. Wei and Wei [30] presented 10 similarity measures between PFSs based on the cosine function and applied them in medical diagnosis and pattern recognition. Li et al. [31] explored the Hamming distance measure, the Euclidean distance measure and the Minkowski distance measure between PFSs, and discussed their properties in detail. Zhang [32] introduced a novel similarity measure for PFSs and applied it in dealing with the selection problem of photovoltaic cells. Zeng et al. [33] considered five parameters for distance and similarity measures of Pythagorean fuzzy sets and applied them in the selection of China's internet stocks. Peng et al. [34] presented a similarity measure, distance measure, entropy and inclusion measure for PFSs, put forward their transformation relationships, and successfully applied them in pattern recognition, clustering analysis and medical diagnosis [35].

From the existing studies, it has been observed that the similarity or distance measures under PFSs have drawbacks: in some situations they cannot produce the correct ranking order of the alternatives, owing to "the division by zero problem" [12, 30, 34] and counter-intuitive cases [3–12, 30, 32, 34, 36, 37, 40]. So we need to propose a method which can overcome the drawbacks of these existing studies.

In this paper, by keeping the advantages of the Lp norm and other factors, we have developed some new distance measures between PFNs, which can overcome the drawbacks of the methods presented in [3–12, 30, 32, 34, 36, 37, 40]. The proposed measures depend on three parameters, namely p, tk and k, where p represents the Lp norm, tk represents the level of uncertainty and k represents the slope. Further, based on these distance measures, the concept of the similarity measure is developed and applied to several pattern recognition problems. These considerations have led us to consider the following main objectives for this paper:

– to represent the information of the decision makers in terms of Pythagorean fuzzy numbers;
– to introduce some new distance or similarity measures under the PFS environment to overcome the drawbacks of the existing studies;
– to exhibit several illustrations to demonstrate the measures.

To achieve these objectives, we consider the Pythagorean fuzzy set environment to rate the objects so as to give more degrees of freedom to the decision practices, which meets Objective 1. Objective 2 is achieved by proposing some new distance or similarity measures and their extensions to compute the degree of similarity between two or more PFSs. The effects of the various parameters associated with these measures are explained in detail. The working of the proposed measures is demonstrated through several illustrative examples, which show that the proposed measures work effectively even in those cases where the existing measures suffer from "the division by zero problem" and "counter-intuitive cases". Objective 3 is achieved by considering several case studies in pattern recognition (medical diagnosis, nanometer material identification, ore identification, bacterial detection, jewellery identification) to show the performance of the proposed similarity measures. Further, the effect of the different parameters on the ordering and classification of the patterns is studied in detail.

To achieve these aims, the remainder of this paper is organized as follows: In Section 2, some basic concepts of IFSs and PFSs are briefly reviewed, which will be used in the analysis of this paper. In Section 3, some novel distance measures and similarity measures are presented and proved. In Section 4, the applications to pattern recognition with Pythagorean fuzzy information are shown, and the effect of the different parameters on the ordering of the objects is discussed in detail. The paper is concluded in Section 5.

2 Preliminaries

In this section, we briefly review the basic concepts related to IFSs and PFSs, and then list the properties that the distance measure and similarity measure for PFSs should possess.

Definition 1 [2] Let X be a universe of discourse. An IFS I in X is given by

I = {< x, μI(x), νI(x) > | x ∈ X},    (1)

where μI : X → [0,1] denotes the degree of membership and νI : X → [0,1] denotes the degree of nonmembership of the element x ∈ X to the set I, respectively, with the condition that 0 ≤ μI(x) + νI(x) ≤ 1. The degree of indeterminacy is πI(x) = 1 − μI(x) − νI(x). For convenience, Xu and Yager [38] called (μI(x), νI(x)) an intuitionistic fuzzy number (IFN), denoted by i = (μI, νI).

Definition 2 [13] Let X be a universe of discourse. A PFS P in X is given by

P = {< x, μP(x), νP(x) > | x ∈ X},    (2)

where μP : X → [0,1] denotes the degree of membership and νP : X → [0,1] denotes the degree of nonmembership of the element x ∈ X to the set P, respectively, with the condition that 0 ≤ (μP(x))² + (νP(x))² ≤ 1. The degree


of indeterminacy is πP(x) = √(1 − (μP(x))² − (νP(x))²). For convenience, Zhang and Xu [15] called (μP(x), νP(x)) a Pythagorean fuzzy number (PFN), denoted by p = (μP, νP).

Definition 3 [15] For any PFN p = (μ, ν), the score function of p is defined as follows:

s(p) = μ² − ν²,    (3)

where s(p) ∈ [−1, 1].

Definition 4 [18] For any PFN p = (μ, ν), the accuracy function of p is defined as follows:

a(p) = μ² + ν²,    (4)

where a(p) ∈ [0, 1].

For any two PFNs p1, p2:
(1) if s(p1) > s(p2), then p1 ≻ p2;
(2) if s(p1) = s(p2), then
  (a) if a(p1) > a(p2), then p1 ≻ p2;
  (b) if a(p1) = a(p2), then p1 ∼ p2.

Definition 5 [34] Let M, N and O be three PFSs on X. A distance measure D(M, N) is a mapping D : PFS(X) × PFS(X) → [0, 1] possessing the following properties:
(D1) 0 ≤ D(M, N) ≤ 1;
(D2) D(M, N) = D(N, M);
(D3) D(M, N) = 0 iff M = N;
(D4) D(M, M^c) = 1 iff M is a crisp set;
(D5) If M ⊆ N ⊆ O, then D(M, N) ≤ D(M, O) and D(N, O) ≤ D(M, O).

Definition 6 [34] Let M, N and O be three PFSs on X. A similarity measure S(M, N) is a mapping S : PFS(X) × PFS(X) → [0, 1] possessing the following properties:
(S1) 0 ≤ S(M, N) ≤ 1;
(S2) S(M, N) = S(N, M);
(S3) S(M, N) = 1 iff M = N;
(S4) S(M, M^c) = 0 iff M is a crisp set;
(S5) If M ⊆ N ⊆ O, then S(M, O) ≤ S(M, N) and S(M, O) ≤ S(N, O).

Definition 7 [34] If M, N ∈ PFS(X), then the operations can be defined as follows:
(1) M^c = {< x, νM(x), μM(x) > | x ∈ X};
(2) M ⊆ N iff ∀x ∈ X, μM(x) ≤ μN(x) and νM(x) ≥ νN(x);
(3) M = N iff ∀x ∈ X, μM(x) = μN(x) and νM(x) = νN(x);
(4) M ⊕ N = {< x, √(μM²(x) + μN²(x) − μM²(x)μN²(x)), νM(x)νN(x) > | x ∈ X};
(5) M ⊗ N = {< x, μM(x)μN(x), √(νM²(x) + νN²(x) − νM²(x)νN²(x)) > | x ∈ X};
(6) M ⊖ N = {< x, √((μM²(x) − μN²(x))/(1 − μN²(x))), νM(x)/νN(x) > | x ∈ X}, if μM(x) ≥ μN(x) and νM(x) ≤ min{νN(x), νN(x)πM(x)/πN(x)};
(7) M ⊘ N = {< x, μM(x)/μN(x), √((νM²(x) − νN²(x))/(1 − νN²(x))) > | x ∈ X}, if νM(x) ≥ νN(x) and μM(x) ≤ min{μN(x), μN(x)πM(x)/πN(x)}.

3 Novel information measures for Pythagorean fuzzy sets

Let M and N be two PFSs in X = {x1, x2, ···, xn}. Assume that (μM, νM) and (μN, νN) are two PFNs, where the adjusted degrees satisfy μ̇M² ∈ [μM², μM² + πM²] and ν̇M² ∈ [νM², νM² + πM²]. Therefore, the possible values of (μ̇M², ν̇M²) can be intuitively expressed as the triangle HMC shown in Fig. 1.

Through the point M, draw a straight line meeting HC at the point E, and define the slope of the line ME as k. Any point D(μ̇M², ν̇M²) within the triangle HMC lies on such a line. Suppose that MF = πM²/tk; then DF = kπM²/tk (tk > 0).

Fig. 1 The possible values for (μ̇M², ν̇M²)
M , νM



We present the new distance measure D1(M, N), which can be obtained in the following steps.

Moreover, we can have (μ̇M², ν̇M²) = (μM² + πM²/tk, νM² + kπM²/tk). The range of the parameters tk and k can be obtained as follows:

  μM² + πM²/tk ≤ μM² + πM²,
  (μM² + πM²/tk) + (νM² + kπM²/tk) ≤ 1,
  νM² + kπM²/tk ≤ νM² + πM²,
  tk > 0,  k ≥ 0,

which together imply tk ≥ k + 1 and k ≥ 0.

Based on Definition 2, the pair (μ̇M², ν̇M²) = (μM² + πM²/tk, νM² + kπM²/tk) satisfies the following equality:

(μ̇M², ν̇M²) = ( ((tk − 1)μM² − νM² + 1)/tk , ((tk − k)νM² − kμM² + k)/tk ).

Similarly, we can have

(μ̇N², ν̇N²) = ( ((tk − 1)μN² − νN² + 1)/tk , ((tk − k)νN² − kμN² + k)/tk ).

The absolute differences between (μ̇M², ν̇M²) and (μ̇N², ν̇N²) can be defined as follows:

|μ̇M² − μ̇N²| = |(tk − 1)(μM² − μN²) − (νM² − νN²)| / tk,
|ν̇M² − ν̇N²| = |(tk − k)(νM² − νN²) − k(μM² − μN²)| / tk.

The p-th powers of |μ̇M² − μ̇N²| and |ν̇M² − ν̇N²| are:

|μ̇M² − μ̇N²|^p = |(tk − 1)(μM² − μN²) − (νM² − νN²)|^p / tk^p,
|ν̇M² − ν̇N²|^p = |(tk − k)(νM² − νN²) − k(μM² − μN²)|^p / tk^p.

The mean of |μ̇M² − μ̇N²|^p and |ν̇M² − ν̇N²|^p is defined as follows:

(1/2)(|μ̇M² − μ̇N²|^p + |ν̇M² − ν̇N²|^p) = (1/(2tk^p)) ( |(tk − 1)(μM² − μN²) − (νM² − νN²)|^p + |(tk − k)(νM² − νN²) − k(μM² − μN²)|^p ).

The p-th root of this mean is given as:

[ (1/(2tk^p)) ( |(tk − 1)(μM² − μN²) − (νM² − νN²)|^p + |(tk − k)(νM² − νN²) − k(μM² − μN²)|^p ) ]^{1/p}.

For more than one feature xi, i = 1, 2, ···, n, we average over the features and define the distance measure as in (5):

D1(M, N) = [ (1/(2n·tk^p)) Σ_{i=1}^{n} ( |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p + |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ) ]^{1/p}.    (5)
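As a concrete check on (5), the distance D1 can be implemented directly. The sketch below is ours (the function name d1 is an assumption); it takes each PFS as a list of (μ, ν) pairs over the same universe.

```python
def d1(M, N, p=1, tk=3, k=1):
    """Distance measure D1 of Eq. (5); M, N are lists of (mu, nu) pairs.

    tk >= k + 1 > 0 governs the allocation of the indeterminacy,
    k >= 0 is the slope parameter, p selects the L_p norm.
    """
    assert tk >= k + 1 and k >= 0, "parameters must satisfy tk >= k+1, k >= 0"
    n = len(M)
    total = 0.0
    for (mu_m, nu_m), (mu_n, nu_n) in zip(M, N):
        dmu = mu_m**2 - mu_n**2   # difference of squared memberships
        dnu = nu_m**2 - nu_n**2   # difference of squared nonmemberships
        total += (abs((tk - 1) * dmu - dnu) ** p
                  + abs((tk - k) * dnu - k * dmu) ** p)
    return (total / (2 * n * tk**p)) ** (1 / p)
```

With p = 1, tk = 3, k = 1 (the setting used later in Tables 2-3), d1([(0.3, 0.3)], [(0.4, 0.4)]) ≈ 0.0233, i.e. S1 = 1 − D1 ≈ 0.9767, matching Case 1 of Table 2.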
Theorem 1 D1(M, N) is a distance measure between the two PFSs M and N in X.

Proof For two PFSs M and N, we have:

(D1) 0 ≤ μM(xi), μN(xi), νM(xi), νN(xi) ≤ 1. Thus −1 ≤ (tk − 1)μM²(xi) − νM²(xi) ≤ tk − 1 and 1 − tk ≤ −((tk − 1)μN²(xi) − νN²(xi)) ≤ 1, so −tk ≤ (tk − 1)μM²(xi) − νM²(xi) − ((tk − 1)μN²(xi) − νN²(xi)) ≤ tk. It means that 0 ≤ |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p = |(tk − 1)μM²(xi) − νM²(xi) − ((tk − 1)μN²(xi) − νN²(xi))|^p ≤ tk^p. Similarly, we have 0 ≤ |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ≤ tk^p. Therefore, according to (5), we achieve 0 ≤ D1(M, N) ≤ 1.

(D2) It follows directly from (5).

(D3) Assume that M = N, which implies that μM(xi) = μN(xi) and νM(xi) = νN(xi) for i = 1, 2, ···, n. Thus, by (5), we can obtain D1(M, N) = 0.

On the contrary, suppose that D1(M, N) = 0 for two PFSs M and N. It signifies that |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p + |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p = 0, that is,

|(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p = 0,
|(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p = 0.

After computing, we can achieve μM²(xi) − μN²(xi) = 0 and νM²(xi) − νN²(xi) = 0, which implies μM(xi) = μN(xi) and νM(xi) = νN(xi). Consequently, M = N. Hence D1(M, N) = 0 iff M = N.

(D4) D1(M, M^c) = 1 ⇔ [ (1/n) Σ_{i=1}^{n} |μM²(xi) − νM²(xi)|^p ]^{1/p} = 1 ⇔ |μM²(xi) − νM²(xi)| = 1 for every i ⇔ μM(xi) = 1, νM(xi) = 0 or μM(xi) = 0, νM(xi) = 1 ⇔ M is a crisp set.

(D5) If M ⊆ N ⊆ O, then μM(xi) ≤ μN(xi) ≤ μO(xi) and νM(xi) ≥ νN(xi) ≥ νO(xi), which implies that (tk − 1)μO²(xi) − νO²(xi) ≥ (tk − 1)μN²(xi) − νN²(xi) ≥ (tk − 1)μM²(xi) − νM²(xi) and (tk − k)νO²(xi) − kμO²(xi) ≤ (tk − k)νN²(xi) − kμN²(xi) ≤ (tk − k)νM²(xi) − kμM²(xi). So it is easily concluded that |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p ≤ |(tk − 1)(μM²(xi) − μO²(xi)) − (νM²(xi) − νO²(xi))|^p and |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ≤ |(tk − k)(νM²(xi) − νO²(xi)) − k(μM²(xi) − μO²(xi))|^p. By adding these and using (5), we obtain D1(M, N) ≤ D1(M, O). Similarly, we obtain D1(N, O) ≤ D1(M, O).

However, in many real cases the different elements may take different weights, and thus the weight wi (i = 1, 2, ···, n) of the element xi ∈ X should be taken into account. In the following, we present a weighted distance measure D1w(M, N) between PFSs.

   
D1w(M, N) = [ (1/(2tk^p)) Σ_{i=1}^{n} wi ( |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p + |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ) ]^{1/p},    (6)

where k ≥ 0, tk ≥ k + 1, and wi is the weight of the element xi with Σ_{i=1}^{n} wi = 1.

Theorem 2 D1w(M, N) is a distance measure between the two PFSs M and N in X.

Proof (D1) If we multiply the inequalities established in the proof of Theorem 1 by wi, then we easily have 0 ≤ wi|(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p ≤ wi·tk^p. Furthermore, summing over i, we can write 0 ≤ Σ_{i=1}^{n} wi|(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p ≤ Σ_{i=1}^{n} wi·tk^p. It is easy to see that Σ_{i=1}^{n} wi·tk^p equals tk^p since Σ_{i=1}^{n} wi = 1. Hence 0 ≤ Σ_{i=1}^{n} wi|(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p ≤ tk^p. Similarly, we can have 0 ≤ Σ_{i=1}^{n} wi|(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ≤ tk^p. Hence, by (6), we obtain 0 ≤ D1w(M, N) ≤ 1.
(D2)-(D5) are straightforward.

The Hausdorff distance between two non-empty closed and bounded sets is a measure of the resemblance between them. For example, consider M = [x1, x2] and N = [y1, y2] in the Euclidean domain R; the Hausdorff distance in the additive set environment is given by [39]:

H(M, N) = max{|x1 − y1|, |x2 − y2|}.

Now, for any two PFSs M and N over X = {x1, x2, ···, xn}, we propose the following utmost (Hausdorff-type) distance measures:

D2(M, N) = [ (1/(n·tk^p)) Σ_{i=1}^{n} max( |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p , |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ) ]^{1/p},    (7)

D2w(M, N) = [ (1/tk^p) Σ_{i=1}^{n} wi·max( |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p , |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ) ]^{1/p}.    (8)
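The Hausdorff-type measures can be sketched in the same style. The implementation below follows formulas (7) and (8) exactly as printed above (helper names are ours); note their normalizers, n·tk^p and tk^p, differ from the 2n·tk^p and 2tk^p of (5)-(6).

```python
def _terms(pm, pn, p, tk, k):
    # The two absolute-difference terms shared by Eqs. (5)-(8).
    (mu_m, nu_m), (mu_n, nu_n) = pm, pn
    dmu = mu_m**2 - mu_n**2
    dnu = nu_m**2 - nu_n**2
    return (abs((tk - 1) * dmu - dnu) ** p,
            abs((tk - k) * dnu - k * dmu) ** p)

def d2(M, N, p=1, tk=3, k=1):
    """Utmost (Hausdorff-type) distance D2 of Eq. (7): per-element max."""
    n = len(M)
    total = sum(max(*_terms(pm, pn, p, tk, k)) for pm, pn in zip(M, N))
    return (total / (n * tk**p)) ** (1 / p)

def d2w(M, N, w, p=1, tk=3, k=1):
    """Weighted variant D2w of Eq. (8); the weights w must sum to 1."""
    total = sum(wi * max(*_terms(pm, pn, p, tk, k))
                for pm, pn, wi in zip(M, N, w))
    return (total / tk**p) ** (1 / p)
```

With equal weights wi = 1/n, D2w reduces to D2; and, per Theorem 4, D1 ≤ D2 follows from max(a, b) ≥ (a + b)/2.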

Theorem 3 D2(M, N) and D2w(M, N) are distance measures between the two PFSs M and N in X.

Proof It is easily proved.

Theorem 4 The distance measures D1(M, N) and D2(M, N) satisfy the inequality D1(M, N) ≤ D2(M, N).

Theorem 5 If D1(M, N), D1w(M, N), D2(M, N) and D2w(M, N) are distance measures between PFSs M and N, then S1(M, N) = 1 − D1(M, N), S1w(M, N) = 1 − D1w(M, N), S2(M, N) = 1 − D2(M, N) and S2w(M, N) = 1 − D2w(M, N) are similarity measures between M and N, respectively:

S1(M, N) = 1 − [ (1/(2n·tk^p)) Σ_{i=1}^{n} ( |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p + |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ) ]^{1/p},    (9)

S1w(M, N) = 1 − [ (1/(2tk^p)) Σ_{i=1}^{n} wi ( |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p + |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ) ]^{1/p},    (10)

S2(M, N) = 1 − [ (1/(n·tk^p)) Σ_{i=1}^{n} max( |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p , |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ) ]^{1/p},    (11)

S2w(M, N) = 1 − [ (1/tk^p) Σ_{i=1}^{n} wi·max( |(tk − 1)(μM²(xi) − μN²(xi)) − (νM²(xi) − νN²(xi))|^p , |(tk − k)(νM²(xi) − νN²(xi)) − k(μM²(xi) − μN²(xi))|^p ) ]^{1/p}.    (12)
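Theorem 5 turns any of the distances into a similarity by complementation. A minimal sketch for S1 of Eq. (9) follows (the function name s1 is ours); it reproduces two entries of the later comparison table.

```python
def s1(M, N, p=1, tk=3, k=1):
    """Similarity S1 = 1 - D1 (Theorem 5, Eq. (9))."""
    n = len(M)
    total = 0.0
    for (mu_m, nu_m), (mu_n, nu_n) in zip(M, N):
        dmu = mu_m**2 - mu_n**2
        dnu = nu_m**2 - nu_n**2
        total += (abs((tk - 1) * dmu - dnu) ** p
                  + abs((tk - k) * dnu - k * dmu) ** p)
    return 1 - (total / (2 * n * tk**p)) ** (1 / p)
```

With p = 1, tk = 3, k = 1, s1([(0.4, 0.2)], [(0.5, 0.3)]) ≈ 0.9767 and s1([(0.4, 0.2)], [(0.5, 0.2)]) ≈ 0.955, the Case 5 and Case 6 values of S1 reported in Table 2.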


Theorem 6 Let M and N be two PFSs. Then we have:
(1) D1(M, M ⊗ N) = D1(N, M ⊕ N);
(2) D1(M, M ⊕ N) = D1(N, M ⊗ N);
(3) S1(M, M ⊗ N) = S1(N, M ⊕ N);
(4) S1(M, M ⊕ N) = S1(N, M ⊗ N).

Proof We prove only (1); (2)-(4) can be proved in a similar way.

(1) Based on Definition 7 and (5), for D1(M, M ⊗ N) with ∀xi ∈ X we can achieve

|(tk − 1)(μM²(xi) − μM²(xi)μN²(xi)) − (νM²(xi) − (νM²(xi) + νN²(xi) − νM²(xi)νN²(xi)))|^p
+ |(tk − k)(νM²(xi) − (νM²(xi) + νN²(xi) − νM²(xi)νN²(xi))) − k(μM²(xi) − μM²(xi)μN²(xi))|^p
= |(tk − 1)(μM²(xi) − μM²(xi)μN²(xi)) − (νM²(xi)νN²(xi) − νN²(xi))|^p
+ |(tk − k)(νM²(xi)νN²(xi) − νN²(xi)) − k(μM²(xi) − μM²(xi)μN²(xi))|^p.

For D1(N, M ⊕ N) with ∀xi ∈ X we can achieve

|(tk − 1)(μN²(xi) − (μM²(xi) + μN²(xi) − μM²(xi)μN²(xi))) − (νN²(xi) − νM²(xi)νN²(xi))|^p
+ |(tk − k)(νN²(xi) − νM²(xi)νN²(xi)) − k(μN²(xi) − (μM²(xi) + μN²(xi) − μM²(xi)μN²(xi)))|^p
= |(tk − 1)(μM²(xi)μN²(xi) − μM²(xi)) − (νN²(xi) − νM²(xi)νN²(xi))|^p
+ |(tk − k)(νN²(xi) − νM²(xi)νN²(xi)) − k(μM²(xi)μN²(xi) − μM²(xi))|^p
= |(tk − 1)(μM²(xi) − μM²(xi)μN²(xi)) − (νM²(xi)νN²(xi) − νN²(xi))|^p
+ |(tk − k)(νM²(xi)νN²(xi) − νN²(xi)) − k(μM²(xi) − μM²(xi)μN²(xi))|^p.

Therefore, we have D1(M, M ⊗ N) = D1(N, M ⊕ N).
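Theorem 6 can also be spot-checked numerically. The sketch below builds M ⊕ N and M ⊗ N per Definition 7 and compares D1(M, M ⊗ N) with D1(N, M ⊕ N) on a sample pair; the helper names are our own.

```python
import math

def pfn_oplus(p1, p2):
    # Definition 7(4): (sqrt(mu1^2 + mu2^2 - mu1^2*mu2^2), nu1*nu2)
    (m1, n1), (m2, n2) = p1, p2
    return (math.sqrt(m1**2 + m2**2 - m1**2 * m2**2), n1 * n2)

def pfn_otimes(p1, p2):
    # Definition 7(5): (mu1*mu2, sqrt(nu1^2 + nu2^2 - nu1^2*nu2^2))
    (m1, n1), (m2, n2) = p1, p2
    return (m1 * m2, math.sqrt(n1**2 + n2**2 - n1**2 * n2**2))

def d1(M, N, p=1, tk=3, k=1):
    # Distance measure of Eq. (5).
    total = 0.0
    for (mu_m, nu_m), (mu_n, nu_n) in zip(M, N):
        dmu = mu_m**2 - mu_n**2
        dnu = nu_m**2 - nu_n**2
        total += (abs((tk - 1) * dmu - dnu) ** p
                  + abs((tk - k) * dnu - k * dmu) ** p)
    return (total / (2 * len(M) * tk**p)) ** (1 / p)

m, n = (0.6, 0.5), (0.7, 0.3)
lhs = d1([m], [pfn_otimes(m, n)])   # D1(M, M (x) N)
rhs = d1([n], [pfn_oplus(m, n)])    # D1(N, M (+) N)
```

The two values agree to floating-point precision, as Theorem 6(1) asserts.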

Theorem 7 Let M and N be two PFSs, and for each xi ∈ X let μM²(xi) + μN²(xi) = 1 and νM²(xi) + νN²(xi) = 1. Then we have:
(1) D1(M, M ⊘ N) = D1(N, N ⊖ M), if μM(xi) ≤ μN(xi), νM(xi) ≥ νN(xi);
(2) D1(M, M ⊖ N) = D1(N, N ⊘ M), if μM(xi) ≥ μN(xi), νM(xi) ≤ νN(xi);
(3) S1(M, M ⊘ N) = S1(N, N ⊖ M), if μM(xi) ≤ μN(xi), νM(xi) ≥ νN(xi);
(4) S1(M, M ⊖ N) = S1(N, N ⊘ M), if μM(xi) ≥ μN(xi), νM(xi) ≤ νN(xi).

Proof We prove only (1); (2)-(4) can be proved in a similar way.

(1) Based on Definition 7 and (5), for D1(M, M ⊘ N) with ∀xi ∈ X, and noting that under the constraints 1 − μN²(xi) = μM²(xi) and 1 − νN²(xi) = νM²(xi), we have μM²(xi) − μM²(xi)/μN²(xi) = −μM⁴(xi)/μN²(xi) and νM²(xi) − (νM²(xi) − νN²(xi))/(1 − νN²(xi)) = νN⁴(xi)/νM²(xi), so that

|(tk − 1)(μM²(xi) − μM²(xi)/μN²(xi)) − (νM²(xi) − (νM²(xi) − νN²(xi))/(1 − νN²(xi)))|^p
+ |(tk − k)(νM²(xi) − (νM²(xi) − νN²(xi))/(1 − νN²(xi))) − k(μM²(xi) − μM²(xi)/μN²(xi))|^p
= |(tk − 1)·μM⁴(xi)/μN²(xi) + νN⁴(xi)/νM²(xi)|^p + |(tk − k)·νN⁴(xi)/νM²(xi) + k·μM⁴(xi)/μN²(xi)|^p.

For D1(N, N ⊖ M) with ∀xi ∈ X and μM²(xi) + μN²(xi) = 1, νM²(xi) + νN²(xi) = 1, we likewise have μN²(xi) − (μN²(xi) − μM²(xi))/(1 − μM²(xi)) = μM⁴(xi)/μN²(xi) and νN²(xi) − νN²(xi)/νM²(xi) = −νN⁴(xi)/νM²(xi), so that

|(tk − 1)(μN²(xi) − (μN²(xi) − μM²(xi))/(1 − μM²(xi))) − (νN²(xi) − νN²(xi)/νM²(xi))|^p
+ |(tk − k)(νN²(xi) − νN²(xi)/νM²(xi)) − k(μN²(xi) − (μN²(xi) − μM²(xi))/(1 − μM²(xi)))|^p
= |(tk − 1)·μM⁴(xi)/μN²(xi) + νN⁴(xi)/νM²(xi)|^p + |(tk − k)·νN⁴(xi)/νM²(xi) + k·μM⁴(xi)/μN²(xi)|^p.

Therefore, we can have D1(M, M ⊘ N) = D1(N, N ⊖ M).

4 Applications of the proposed similarity measure between PFSs

4.1 A comparison of similarity measures for PFSs

In order to state the superiority of the developed similarity measures S1 and S2, a comparison between the initiated similarity measures and some existing similarity measures is conducted. The existing similarity measures are shown in Table 1.

Next, we employ six sets of PFSs adopted from [12] to compare the results of the developed similarity measures (S1 and S2) with some existing similarity measures [3–12, 30, 32, 34, 37, 40], as shown in Table 2. From Table 2, it is easily seen that the explored similarity measures (S1, S2), SBA [40], SCC [5] and SPP [37] can overcome the shortcoming of producing the unreasonable results given by the existing similarity measures (SC [4], SHY1 [6], SHY2 [6], SHY3 [6], SHK [7], SLC [8], SLX [9], SL [3], SLS1 [10], SLS2 [10], SLS3 [10], SM [11], SY [12], SP1 [34], SP2 [34], SP3 [34], SZ [32] and SW [30]). The five main drawbacks are discussed in detail as follows:

(1) It is easily found that SC, SLC, SY, SW and SP1 cannot meet the third axiom of the similarity measure (S3), since SC(M, N) = SLC(M, N) = SY(M, N) = SW(M, N) = SP1(M, N) = 1 when M = (0.3, 0.3) and N = (0.4, 0.4) (Case 1), although M and N are not equal. Analogously, SC(M, N) and SLC(M, N) also do not meet (S3) when M = (0.5, 0.5), N = (0, 0) and when M = (0.4, 0.2), N = (0.5, 0.3). Furthermore, we see that SZ does not satisfy the fourth axiom of the similarity measure (S4) when M = (0.3, 0.4) and N = (0.4, 0.3) (Case 2), since SZ(M, N) = 0 although M and N are not crisp sets. Analogously, SHY1, SHY2, SHY3 and SP2 do not satisfy (S4) when M = (1, 0), N = (0, 0) (Case 3), and neither does SP2 when M = (0.5, 0.5), N = (0, 0) (Case 4).

(2) The similarity measures [3, 6, 7, 10, 11, 34] fail to distinguish a positive difference from a negative difference. For instance, SL(M, N) = SL(M1, N1) =

Table 1 Existing similarity measures

Authors — Similarity measure

Li et al. [3]: SL(M, N) = 1 − √( Σ_{i=1}^{n} ((μM(xi) − μN(xi))² + (νM(xi) − νN(xi))²) / (2n) )

Chen [4]: SC(M, N) = 1 − Σ_{i=1}^{n} |μM(xi) − νM(xi) − (μN(xi) − νN(xi))| / (2n)

Chen and Chang [5]: SCC(M, N) = 1 − (1/n) Σ_{i=1}^{n} [ |μM(xi) − μN(xi)| · (1 − (πM(xi) + πN(xi))/2) + ( ∫₀¹ |μ_{Mxi}(z) − μ_{Nxi}(z)| dz ) · ((πM(xi) + πN(xi))/2) ],
where μ_{Mxi}(z) = 1 if z = μM(xi) = 1 − νM(xi); μ_{Mxi}(z) = (1 − νM(xi) − z)/(1 − μM(xi) − νM(xi)) if z ∈ [μM(xi), 1 − νM(xi)]; and μ_{Mxi}(z) = 0 otherwise.

Hung and Yang [6]: SHY1(M, N) = 1 − Σ_{i=1}^{n} max(|μM(xi) − μN(xi)|, |νM(xi) − νN(xi)|) / n,
SHY2(M, N) = (e^{SHY1(M,N)−1} − e^{−1}) / (1 − e^{−1}),
SHY3(M, N) = SHY1(M, N) / (2 − SHY1(M, N))

Hong and Kim [7]: SHK(M, N) = 1 − Σ_{i=1}^{n} (|μM(xi) − μN(xi)| + |νM(xi) − νN(xi)|) / (2n)

Li and Cheng [8]: SLC(M, N) = 1 − ( Σ_{i=1}^{n} |ψM(xi) − ψN(xi)|^p / n )^{1/p},
where ψM(xi) = (μM(xi) + 1 − νM(xi))/2, ψN(xi) = (μN(xi) + 1 − νN(xi))/2, and 1 ≤ p < ∞.

Li and Xu [9]: SLX(M, N) = 1 − Σ_{i=1}^{n} (|μM(xi) − νM(xi) − (μN(xi) − νN(xi))| + |μM(xi) − μN(xi)| + |νM(xi) − νN(xi)|) / (4n)

Liang and Shi [10]: SLS1(M, N) = 1 − ( Σ_{i=1}^{n} |φμ(xi) + φν(xi)|^p / n )^{1/p},
SLS2(M, N) = 1 − ( Σ_{i=1}^{n} |ϕμ(xi) + ϕν(xi)|^p / n )^{1/p},
SLS3(M, N) = 1 − ( Σ_{i=1}^{n} (η1(xi) + η2(xi) + η3(xi))^p / (3n) )^{1/p},
where φμ(xi) = |μM(xi) − μN(xi)|/2, φν(xi) = |νM(xi) − νN(xi)|/2, ϕμ(xi) = |mM1(xi) − mN1(xi)|/2, ϕν(xi) = |mM2(xi) − mN2(xi)|/2, mM1(xi) = (μM(xi) + mM(xi))/2, mN1(xi) = (μN(xi) + mN(xi))/2, mM2(xi) = (1 − νM(xi) + mM(xi))/2, mN2(xi) = (1 − νN(xi) + mN(xi))/2, mM(xi) = (μM(xi) + 1 − νM(xi))/2, mN(xi) = (μN(xi) + 1 − νN(xi))/2, η1(xi) = (|μM(xi) − μN(xi)| + |νM(xi) − νN(xi)|)/2, η2(xi) = |(μM(xi) − νM(xi)) − (μN(xi) − νN(xi))|/2, η3(xi) = max(πM(xi)/2, πN(xi)/2) − min(πM(xi)/2, πN(xi)/2).

Mitchell [11]: SM(M, N) = (ρμ(M, N) + ρν(M, N))/2,
where ρμ(M, N) = 1 − ( Σ_{i=1}^{n} |μM(xi) − μN(xi)|^p / n )^{1/p}, ρν(M, N) = 1 − ( Σ_{i=1}^{n} |νM(xi) − νN(xi)|^p / n )^{1/p}, and 1 ≤ p < ∞.

Ye [12]: SY(M, N) = (1/n) Σ_{i=1}^{n} (μM(xi)μN(xi) + νM(xi)νN(xi)) / ( √(μM²(xi) + νM²(xi)) · √(μN²(xi) + νN²(xi)) )

Wei and Wei [30]: SW(M, N) = (1/n) Σ_{i=1}^{n} (μM²(xi)μN²(xi) + νM²(xi)νN²(xi)) / ( √(μM⁴(xi) + νM⁴(xi)) · √(μN⁴(xi) + νN⁴(xi)) )

Zhang [32]: SZ(M, N) = (1/n) Σ_{i=1}^{n} ( |μM²(xi) − νN²(xi)| + |νM²(xi) − μN²(xi)| + |πM²(xi) − πN²(xi)| ) / ( |μM²(xi) − μN²(xi)| + |νM²(xi) − νN²(xi)| + |πM²(xi) − πN²(xi)| + |μM²(xi) − νN²(xi)| + |νM²(xi) − μN²(xi)| + |πM²(xi) − πN²(xi)| )

Peng et al. [34]: SP1(M, N) = 1 − Σ_{i=1}^{n} |μM²(xi) − νM²(xi) − (μN²(xi) − νN²(xi))| / (2n),
SP2(M, N) = (1/n) Σ_{i=1}^{n} ( (μM²(xi) ∧ μN²(xi)) + (νM²(xi) ∧ νN²(xi)) ) / ( (μM²(xi) ∨ μN²(xi)) + (νM²(xi) ∨ νN²(xi)) ),
SP3(M, N) = (1/n) Σ_{i=1}^{n} ( (μM²(xi) ∧ μN²(xi)) + ((1 − νM²(xi)) ∧ (1 − νN²(xi))) ) / ( (μM²(xi) ∨ μN²(xi)) + ((1 − νM²(xi)) ∨ (1 − νN²(xi))) ),
where ∧ and ∨ denote min and max, respectively.

Boran and Akay [40]: SBA(M, N) = 1 − ( Σ_{i=1}^{n} ( |t(μM(xi) − μN(xi)) − (νM(xi) − νN(xi))|^p + |t(νM(xi) − νN(xi)) − (μM(xi) − μN(xi))|^p ) / (2n(t + 1)^p) )^{1/p}

Peng [37]: SPP(M, N) = 1 − ( Σ_{i=1}^{n} ( |(t + 1 − a)(μM²(xi) − μN²(xi)) − a(νM²(xi) − νN²(xi))|^p + |(t + 1 − b)(νM²(xi) − νN²(xi)) − b(μM²(xi) − μN²(xi))|^p ) / (2n(t + 1)^p) )^{1/p}
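Several of the Table 1 measures are simple enough to reproduce, which makes the counter-intuitive cases concrete. The sketch below implements SL, SHK and SY as printed (function names are ours): SL and SHK return the same value for Cases 1 and 2 of Table 2 (drawback (2)), and SY divides by zero when a PFN is (0, 0) (drawback (3)).

```python
import math

def s_l(M, N):
    # Li et al. [3]
    tot = sum((mm - mn)**2 + (nm - nn)**2
              for (mm, nm), (mn, nn) in zip(M, N))
    return 1 - math.sqrt(tot / (2 * len(M)))

def s_hk(M, N):
    # Hong and Kim [7]
    tot = sum(abs(mm - mn) + abs(nm - nn)
              for (mm, nm), (mn, nn) in zip(M, N))
    return 1 - tot / (2 * len(M))

def s_y(M, N):
    # Ye [12] (cosine form): undefined when a PFN is (0, 0)
    tot = 0.0
    for (mm, nm), (mn, nn) in zip(M, N):
        tot += (mm * mn + nm * nn) / (math.sqrt(mm**2 + nm**2)
                                      * math.sqrt(mn**2 + nn**2))
    return tot / len(M)
```

For example, s_l([(0.3, 0.3)], [(0.4, 0.4)]) = s_l([(0.3, 0.4)], [(0.4, 0.3)]) = 0.9, and s_y([(1, 0)], [(0, 0)]) raises ZeroDivisionError — the "N/A" entries in Table 2.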

0.9 when M = (0.3, 0.3), N = (0.4, 0.4) (Case 1) and M1 = (0.3, 0.4), N1 = (0.4, 0.3) (Case 2). A similar counter-intuitive case exists in SHY1, SHY2, SHY3, SHK, SLS1, SLS3, SM, SP2 and SP3. Another counter-intuitive case appears when M = (1, 0), N = (0, 0) (Case 3) and M1 = (0.5, 0.5), N1 = (0, 0) (Case 4): in this case, SHK(M, N) = SHK(M1, N1) = 0.5. A similar counter-intuitive case exists in SLS1, SM, SZ and SP2.

(3) Some similarity measures fail to handle the division by zero problem, for instance SY and SW when M = (1, 0), N = (0, 0) (Case 3) or M = (0.5, 0.5), N = (0, 0) (Case 4).

(4) Another counter-intuitive case arises in which SHY1(M, N) = SHY1(M1, N1) = 0.9 when M = (0.4, 0.2), N = (0.5, 0.3) (Case 5) and M1 = (0.4, 0.2), N1 = (0.5, 0.2) (Case 6). A similar counter-intuitive case also exists in SHY2, SHY3, SLX and SLS2.

(5) Another interesting counter-intuitive case happens when M = (0.4, 0.2), N = (0.5, 0.3) (Case 5) and M = (0.4, 0.2), N1 = (0.5, 0.2) (Case 6). In this case, it is expected that the similarity degree between M and N is equal to or greater than that between M and N1, because they are ordered as N1 ≻ N ≻ M by the score function defined in Definition 3. Nevertheless, the similarity degree between M and N1 is greater when SL, SHY1, SHY2, SHY3, SHK, SLC, SLX, SLS1, SLS2, SC, SLS3, SM, SW, SZ, SP2 and SP3 are utilized, which is not rational. Meanwhile, the developed similarity measures give S1(M, N) = 0.9767, S1(M, N1) = 0.955 and S2(M, N) = 0.9783, S2(M, N1) = 0.97.

The presented similarity measures (S1 and S2) and the existing similarity measures SCC and SBA are the similarity measures free of the counter-intuitive cases shown in Table 2. To further probe the imperfections of the existing similarity measures SCC and SBA, we give a further discussion via the following tables.

Meanwhile, SBA [40] has the blemish of producing unreasonable results in some special cases, presented in Table 3. In Table 3, we use six series of PFSs adopted from [5] to compare the decision results of the developed

Table 2 The comparison of similarity measures adopted from [12]

Case 1 Case 2 Case 3 Case 4 Case 5 Case 6


M {< x, 0.3, 0.3 >} {< x, 0.3, 0.4 >} {< x, 1, 0 >} {< x, 0.5, 0.5 >} {< x, 0.4, 0.2 >} {< x, 0.4, 0.2 >}
N {< x, 0.4, 0.4 >} {< x, 0.4, 0.3 >} {< x, 0, 0 >} {< x, 0, 0 >} {< x, 0.5, 0.3 >} {< x, 0.5, 0.2 >}

SL [3] 0.9 0.9 0.2929 0.5 0.9 0.9293


SC [4] 1 0.9 0.5 1 1 0.95
SCC [5] 0.9225 0.88 0.25 0.5 0.9225 0.8913
SH Y 1 [6] 0.9 0.9 0 0.5 0.9 0.9
SH Y 2 [6] 0.8495 0.8495 0 0.3775 0.8495 0.8495
SH Y 3 [6] 0.8182 0.8182 0 0.3333 0.8182 0.8182
SH K [7] 0.9 0.9 0.5 0.5 0.9 0.95
SLC [8] 1 0.9 0.5 1 1 0.95
SLX [9] 0.95 0.9 0.5 0.75 0.95 0.95
SLS1 [10] 0.9 0.9 0.5 0.5 0.9 0.95
SLS2 [10] 0.95 0.9 0.5 0.75 0.95 0.95
SLS3 [10] 0.9333 0.9333 0.5 0.6667 0.9333 0.95
SM [11] 0.9 0.9 0.5 0.5 0.9 0.95
SY [12] 1 0.96 N/A N/A 0.9971 0.9965
SW [30] 1 0.8546 N/A N/A 0.9949 0.9963
SZ [32] 0.5 0 0.5 0.5 0.6 0.7
SP 1 [34] 1 0.93 0.5 1 0.98 0.955
SP 2 [34] 0.5625 0.5625 0 0 0.5882 0.6897
SP 3 [34] 0.8692 0.8692 0.5 0.6 0.8843 0.9256
SP P [37] 0.9767 0.93 0.5 0.9167 0.9767 0.955
SBA [40] 0.967 0.9 0.5 0.8333 0.9667 0.95
S1 (proposed) 0.9767 0.93 0.5 0.9167 0.9767 0.955
S2 (proposed) 0.9883 0.965 0.6667 0.9583 0.9783 0.97

(p = 1 in SM , SLC , SLS1 , SLS2 , SLS3 , p = 1, t = 2 in SBA and p = 1, tk = 3, k = 1 in S1 , S2 .) “Bold” denotes unreasonable results. “N/A”
denotes it cannot compute the degree of similarity due to “the division by zero problem”

Table 3 The comparison of similarity measures adopted from [5]

Case 1 Case 2 Case 3 Case 4 Case 5 Case 6


M {< x, 0.5, 0.5 >} {< x, 0.6, 0.4 >} {< x, 0, 0.87 >} {< x, 0.6, 0.27 >} {< x, 0.125, 0.075 >} {< x, 0.5, 0.45 >}
N {< x, 0, 0 >} {< x, 0, 0 >} {< x, 0.28, 0.55 >} {< x, 0.28, 0.55 >} {< x, 0.175, 0.025 >} {< x, 0.55, 0.4 >}

SL [3] 0.5 0.4901 0.6993 0.6993 0.95 0.95


SC [4] 1 0.9 0.7 0.7 0.95 0.95
SCC [5] 0.5 0.45 0.7395 0.7055 0.9125 0.95
SH Y 1 [6] 0.5 0.4 0.68 0.68 0.95 0.95
SH Y 2 [6] 0.3775 0.2862 0.5668 0.5668 0.9229 0.9229
SH Y 3 [6] 0.3333 0.25 0.5152 0.5152 0.9048 0.9048
SH K [7] 0.5 0.5 0.7 0.7 0.95 0.95
SLC [8] 1 0.9 0.7 0.7 0.95 0.95
SLX [9] 0.75 0.7 0.7 0.7 0.95 0.95
SLS1 [10] 0.5 0.5 0.7 0.7 0.95 0.95
SLS2 [10] 0.75 0.75 0.7 0.7 0.95 0.95
SLS3 [10] 0.6667 0.6333 0.7933 0.7933 0.9667 0.9667
SM [11] 0.5 0.5 0.7 0.7 0.95 0.95
SY [12] N/A N/A 0.8912 0.7794 0.9216 0.9946
SW [30] N/A N/A 0.968 0.438 0.9476 0.9812
SZ [32] 0.5 0.5 0.5989 0.1696 0.625 0.6557
SP 1 [34] 1 0.9 0.7336 0.7444 0.99 0.9525
SP 2 [34] 0 0 0.3621 0.2284 0.4483 0.8119
SP 3 [34] 0.6 0.6176 0.3133 0.6028 0.9806 0.9168
SP P [37] 0.9167 0.9 0.7336 0.7444 0.99 0.9525
SBA [40] 0.8333 0.8333 0.7 0.7 0.95 0.95
S1 (proposed) 0.9167 0.9 0.7336 0.7444 0.99 0.9525
S2 (proposed) 0.9583 0.9067 0.8355 0.8679 0.9942 0.9754

(p = 1 in SM , SLC , SLS1 , SLS2 , SLS3 ; p = 1, t = 2 in SBA ; p = 1, a = 1, b = 1, t = 2 in SP P ; and p = 1, tk = 3, k = 1 in S1 , S2 .) “Bold” denotes unreasonable results. “N/A” denotes that the degree of similarity cannot be computed because of “the division by zero problem”

similarity measures S1 and S2 with the existing similarity measures [3–12, 30, 32, 34, 37, 40], as shown in Table 3. From Table 3, we can find that the proposed similarity measures (S1 , S2 ), together with SCC [5], SP 3 [34] and SP P [37], can avoid the unreasonable results of the existing similarity measures SBA [40], SC [4], SH Y 1 [6], SH Y 2 [6], SH Y 3 [6], SH K [7], SLC [8], SLX [9], SL [3], SLS1 [10], SLS2 [10], SLS3 [10], SM [11], SY [12], SP 1 [34], SP 2 [34], SZ [32] and SW [30].

However, SCC [5] also has the blemish of achieving unreasonable results in some special cases, as shown in Table 4. In Table 4, we use six sets of PFSs to compare the results of the developed similarity measures S1 and S2 with the existing similarity measures [3–12, 30, 32, 34, 37, 40]. From Table 4, we can see that the proposed similarity measures S1 and S2 , together with SP 3 [34], can overcome the blemishes of getting the unreasonable results of the existing similarity measures SBA [40], SC [4], SH Y 1 [6], SH Y 2 [6], SH Y 3 [6], SH K [7], SLC [8], SLX [9], SL [3], SLS1 [10], SLS2 [10], SLS3 [10], SM [11], SY [12], SP 1 [34], SP 2 [34], SZ [32], SW [30] and SP P [37].

4.2 Apply the similarity measure between PFSs to pattern recognition

In order to state the effectiveness of the developed similarity measures for PFSs in pattern recognition, we give some examples in this subsection.

Example 1 (Medical diagnosis) The set of diagnostic results is C = {C1 , C2 , C3 }, which represent viral influenza, stomach problems, and chest problems, respectively. The collection of symptoms is X = {x1 (fever), x2 (headache), x3 (stomach pain), x4 (chest pain)}. The stan-
Table 4 The comparison of similarity measures

Case 1 Case 2 Case 3 Case 4 Case 5 Case 6


M {< x, 0.3, 0.7 >} {< x, 0.3, 0.7 >} {< x, 0.5, 0.5 >} {< x, 0.4, 0.6 >} {< x, 0.1, 0.5 >} {< x, 0.4, 0.2 >}
N {< x, 0.4, 0.6 >} {< x, 0.2, 0.8 >} {< x, 0, 0 >} {< x, 0, 0 >} {< x, 0.2, 0.3 >} {< x, 0.2, 0.3 >}

SL [3] 0.6863 0.6863 0.5 0.4901 0.8419 0.8419


SC [4] 0.9 0.9 1 0.9 0.85 0.85
SCC [5] 0.9 0.9 0.5 0.55 0.8438 0.7685
SH Y 1 [6] 0.9 0.9 0.5 0.4 0.8 0.8
SH Y 2 [6] 0.8494 0.8494 0.3775 0.2862 0.7132 0.7132
SH Y 3 [6] 0.8182 0.8182 0.3333 0.25 0.6667 0.6667
SH K [7] 0.9 0.9 0.5 0.5 0.85 0.85
SLC [8] 0.9 0.9 1 0.9 0.85 0.85
SLX [9] 0.9 0.9 0.75 0.7 0.85 0.85
SLS1 [10] 0.9 0.9 0.5 0.5 0.85 0.85
SLS2 [10] 0.9 0.9 0.5 0.75 0.85 0.85
SLS3 [10] 0.95 0.95 0.6667 0.6333 0.8833 0.8833
SM [11] 0.9 0.9 0.5 0.5 0.85 0.85
SY [12] 0.9832 0.9873 N/A N/A 0.9249 0.8685
SW [30] 0.9721 0.9929 N/A N/A 0.9293 0.6156
SZ [32] 0.7174 0.7857 0.5 0.5 0.5676 0.3684
SP 1 [34] 0.9 0.9 1 0.9 0.905 0.915
SP 2 [34] 0.6923 0.726 0 0 0.3448 0.32
SP 3 [34] 0.75 0.6667 0.6 0.5517 0.8 0.8482
SP P [37] 0.9 0.9 0.9167 0.9 0.905 0.915
SBA [40] 0.9 0.9 0.8333 0.8333 0.8667 0.8667
S1 (proposed) 0.915 0.925 0.875 0.97 0.9375 0.8975
S2 (proposed) 0.9575 0.9625 0.9375 0.985 0.9688 0.9488

(p = 1 in SM , SLC , SLS1 , SLS2 , SLS3 ; p = 1, t = 2 in SBA ; p = 1, a = 1, b = 1, t = 2 in SP P ; and p = 1, tk = 4, k = 3 in S1 , S2 .) “Bold” denotes unreasonable results. “N/A” denotes that the degree of similarity cannot be computed because of “the division by zero problem”

dard model data for the shape characteristics of the three diagnostic results are known as follows:

C1 = {< x1 , 0.3, 0.3 >, < x2 , 0.4, 0.4 >, < x3 , 0.4, 0.4 >, < x4 , 0.4, 0.4 >},
C2 = {< x1 , 0.5, 0.5 >, < x2 , 0.1, 0.1 >, < x3 , 0.5, 0.5 >, < x4 , 0.1, 0.1 >},
C3 = {< x1 , 0.5, 0.4 >, < x2 , 0.4, 0.5 >, < x3 , 0.3, 0.3 >, < x4 , 0.2, 0.2 >}.

The unknown diagnostic result Q is given as follows:

Q = {< x1 , 0.4, 0.4 >, < x2 , 0.5, 0.5 >, < x3 , 0.2, 0.2 >, < x4 , 0.3, 0.3 >}.

Our goal is to find out the diagnostic result that Q belongs to. Because S1 (C1 , Q) = 0.9475, S1 (C2 , Q) = 0.9070 and S1 (C3 , Q) = 0.9625, we can know that S1 (C3 , Q) is the largest value among S1 (C1 , Q), S1 (C2 , Q) and S1 (C3 , Q). Hence, the unknown diagnostic result is classified into the diagnostic result C3 . The proposed similarity measure S2 yields the same classification result. Table 5 shows a comparison of the classification result of the proposed similarity measures (S1 , S2 ) with the ones of the existing similarity measures [3–12, 30, 32, 34, 40]. From Table 5, we can see that the proposed similarity measures (S1 , S2 ), together with SCC [5], SH K [7], SLS1 [10], SLS2 [10], SLS3 [10], SLX [9], SL [3], SM [11], SH Y 1 [6], SH Y 2 [6], SH Y 3 [6], SP 2 [34] and SP 3 [34], can overcome the blemishes of the existing similarity measures SBA [40], SC [4], SLC [8], SY [12], SP 1 [34], SZ [32], SP P [37] and SW [30].

Example 2 (Nanometer material identification) The existing nanometer materials collection C = {C1 , C2 , C3 } represents nanometer-fiber, nanometer-film and nanometer-ceramics, respectively. The shape characteristics of the three
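The nearest-pattern rule used in these examples is simply an argmax over similarity values. As a hedged sketch, the snippet below uses a standard Hamming-type Pythagorean fuzzy similarity on squared grades rather than the proposed S1 , so its numbers differ from those above, yet it reaches the same diagnosis C3 for the Example 1 data.

```python
def hamming_similarity(m, n):
    """Hamming-type similarity between two PFSs given as lists of (mu, nu)
    pairs: 1 - (1/2n) * sum(|mu_M^2 - mu_N^2| + |nu_M^2 - nu_N^2|)."""
    total = sum(abs(a[0] ** 2 - b[0] ** 2) + abs(a[1] ** 2 - b[1] ** 2)
                for a, b in zip(m, n))
    return 1 - total / (2 * len(m))

patterns = {
    "C1": [(0.3, 0.3), (0.4, 0.4), (0.4, 0.4), (0.4, 0.4)],
    "C2": [(0.5, 0.5), (0.1, 0.1), (0.5, 0.5), (0.1, 0.1)],
    "C3": [(0.5, 0.4), (0.4, 0.5), (0.3, 0.3), (0.2, 0.2)],
}
q = [(0.4, 0.4), (0.5, 0.5), (0.2, 0.2), (0.3, 0.3)]

scores = {name: hamming_similarity(c, q) for name, c in patterns.items()}
diagnosis = max(scores, key=scores.get)  # nearest pattern wins
```

Swapping in any other similarity function only changes how scores is filled; the classification step itself stays the same.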
Table 5 The similarity measures between the known patterns and the unknown pattern in Example 1

S(C1 , Q) S(C2 , Q) S(C3 , Q) Classification result

SL [3] 0.8677 0.7261 0.9134 P3
SC [4] 1.0000 1.0000 0.9750 Cannot be classified
SCC [5] 0.8679 0.7425 0.8923 P3
SH Y 1 [6] 0.8750 0.7500 0.9000 P3
SH Y 2 [6] 0.8141 0.6501 0.8495 P3
SH Y 3 [6] 0.7778 0.6000 0.8182 P3
SH K [7] 0.8750 0.7500 0.9250 P3
SLC [8] 1.0000 1.0000 0.9750 Cannot be classified
SLX [9] 0.9375 0.8750 0.9500 P3
SLS1 [10] 0.8750 0.7500 0.9250 P3
SLS2 [10] 0.9375 0.8750 0.9500 P3
SLS3 [10] 0.9167 0.8333 0.9417 P3
SM [11] 0.8750 0.7500 0.9250 P3
SY [12] 1.0000 1.0000 0.9969 Cannot be classified
SW [30] 1.0000 1.0000 0.9884 Cannot be classified
SZ [32] 0.5000 0.5000 0.5000 Cannot be classified
SP 1 [34] 1.0000 1.0000 0.9775 Cannot be classified
SP 2 [34] 0.5038 0.2378 0.6223 P3
SP 3 [34] 0.8785 0.7912 0.9205 P3
SP P [37] 1.0000 1.0000 0.9775 Cannot be classified
SBA [40] 0.9583 0.9167 0.9583 Cannot be classified
S1 (proposed) 0.9475 0.9070 0.9625 P3
S2 (proposed) 0.9475 0.9070 0.9490 P3

(p = 1 in SM , SLC , SLS1 , SLS2 , SLS3 ; p = 1, t = 2 in SBA ; p = a = b = t = 1 in SP P ; and p = 1, tk = 5, k = 1 in S1 , S2 .) “Bold” denotes unreasonable results
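The “Cannot be classified” rows of Table 5 can also be reproduced. A cosine-type measure compares only the directions of the squared-grade vectors, so any pattern with μ = ν on every attribute is scored 1 against any other such pattern. The sketch below averages per-attribute cosines; it is an illustration in the spirit of SW [30] (for this data it agrees with that column to four decimals), not a transcription of it.

```python
import math

def cosine_similarity_pfs(m, n):
    """Average per-attribute cosine between the squared-grade vectors
    (mu^2, nu^2). Direction-only: parallel vectors always score 1."""
    total = 0.0
    for (mu1, nu1), (mu2, nu2) in zip(m, n):
        a, b = (mu1 ** 2, nu1 ** 2), (mu2 ** 2, nu2 ** 2)
        total += (a[0] * b[0] + a[1] * b[1]) / (math.hypot(*a) * math.hypot(*b))
    return total / len(m)

c1 = [(0.3, 0.3), (0.4, 0.4), (0.4, 0.4), (0.4, 0.4)]
c2 = [(0.5, 0.5), (0.1, 0.1), (0.5, 0.5), (0.1, 0.1)]
c3 = [(0.5, 0.4), (0.4, 0.5), (0.3, 0.3), (0.2, 0.2)]
q = [(0.4, 0.4), (0.5, 0.5), (0.2, 0.2), (0.3, 0.3)]

# Every attribute of C1, C2 and Q has mu = nu, so both score exactly 1:
s1, s2, s3 = (cosine_similarity_pfs(c, q) for c in (c1, c2, c3))
```

With two candidates tied at the maximum value 1, the argmax rule cannot single out a pattern, which the table reports as “Cannot be classified”.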

nanometer materials are mainly described by the following set: X = {x1 (color), x2 (layer), x3 (odour)}. The standard model data for the shape characteristics of the three nanometer materials are known as follows:

C1 = {< x1 , 0.1, 0.1 >, < x2 , 0.5, 0.1 >, < x3 , 0.1, 0.9 >},
C2 = {< x1 , 0.5, 0.5 >, < x2 , 0.7, 0.3 >, < x3 , 0, 0.8 >},
C3 = {< x1 , 0.7, 0.2 >, < x2 , 0.1, 0.8 >, < x3 , 0.4, 0.4 >}.

There is an existing nanometer material Q that needs to be identified as follows:

Q = {< x1 , 0.4, 0.4 >, < x2 , 0.6, 0.2 >, < x3 , 0, 0.8 >}.

Our goal is to find out the nanometer material that Q belongs to. Because S1 (C1 , Q) = 0.9293, S1 (C2 , Q) = 0.9640 and S1 (C3 , Q) = 0.6600, we can know that S1 (C2 , Q) is the largest value among S1 (C1 , Q), S1 (C2 , Q) and S1 (C3 , Q). Therefore, the unknown nanometer material is classified into the pattern C2 . This result corresponds with the one obtained in [10]. Table 6 shows a comparison of the classification result of the proposed similarity measures (S1 , S2 ) with the ones of the existing similarity measures [3–12, 30, 32, 34, 40]. From Table 6, we can see that the proposed similarity measures (S1 , S2 ), together with SCC [5], SH K [7], SLS1 [10], SLS2 [10], SLS3 [10], SLX [9], SL [3], SM [11], SH Y 1 [6], SH Y 2 [6], SH Y 3 [6], SP 2 [34], SP 3 [34], SBA [40], SY [12], SP 1 [34], SZ [32], SP P [37] and SW [30], can overcome the blemishes of the existing similarity measures SC [4] and SLC [8].

Example 3 (Ore identification) There are 3 types of ore in the area developed by a coal mine company, namely C1 (Lead ore), C2 (Dolomite) and C3 (Fang Liangshi). Its main component combination is X = {x1 (lead), x2 (silver), x3 (carbonate), x4 (iron)}. The data of the ores are represented by the following PFSs:

C1 = {< x1 , 0.3, 0.3 >, < x2 , 0.6, 0.1 >, < x3 , 0.2, 0.6 >, < x4 , 0.7, 0.3 >},
C2 = {< x1 , 0.5, 0.3 >, < x2 , 0.8, 0.1 >, < x3 , 0.2, 0.6 >, < x4 , 0.7, 0.3 >},
C3 = {< x1 , 0.5, 0.3 >, < x2 , 0.6, 0.1 >, < x3 , 0.2, 0.6 >, < x4 , 0.7, 0.3 >}.
Table 6 The similarity measures between the known patterns and the unknown pattern in Example 2

S(C1 , Q) S(C2 , Q) S(C3 , Q) Classification result

SL [3] 0.8085 0.9184 0.5797 P2
SC [4] 1.0000 1.0000 0.6000 Cannot be classified
SCC [5] 0.8846 0.9333 0.6383 P2
SH Y 1 [6] 0.8333 0.9333 0.5667 P2
SH Y 2 [6] 0.7571 0.8980 0.4437 P2
SH Y 3 [6] 0.7143 0.8750 0.3953 P2
SH K [7] 0.8333 0.9333 0.6000 P2
SLC [8] 1.0000 1.0000 0.6000 Cannot be classified
SLX [9] 0.9167 0.9667 0.6000 P2
SLS1 [10] 0.8333 0.9333 0.6000 P2
SLS2 [10] 0.9167 0.9667 0.6000 P2
SLS3 [10] 0.8889 0.9556 0.7222 P2
SM [11] 0.8333 0.9333 0.6000 P2
SY [12] 0.9954 0.9988 0.6709 P2
SW [30] 0.9991 0.9992 0.5318 P2
SZ [32] 0.6775 0.7381 0.4394 P2
SP 1 [34] 0.9600 0.9867 0.6600 P2
SP 2 [34] 0.4977 0.7766 0.1859 P2
SP 3 [34] 0.8618 0.9390 0.4732 P2
SBA [40] 0.9444 0.9778 0.6000 P2
SP P [37] 0.9462 0.9862 0.7188 P2
S1 (proposed) 0.9293 0.9640 0.6600 P2
S2 (proposed) 0.8980 0.9507 0.5820 P2

(p = 1 in SM , SLC , SLS1 , SLS2 , SLS3 ; p = 1, t = 2 in SBA ; p = a = b = t = 1 in SP P ; and p = 1, tk = 5, k = 1 in S1 , S2 .) “Bold” denotes unreasonable results

There is an existing ore Q that needs to be identified as follows:

Q = {< x1 , 0.4, 0.3 >, < x2 , 0.7, 0.1 >, < x3 , 0.3, 0.6 >, < x4 , 0.7, 0.3 >}.

Our goal is to find out the ore that Q belongs to. Because S1 (C1 , Q) = 0.9688, S1 (C2 , Q) = 0.9638 and S1 (C3 , Q) = 0.9663, we can know that S1 (C1 , Q) is the largest value among S1 (C1 , Q), S1 (C2 , Q) and S1 (C3 , Q). Hence, the unknown ore is classified into C1 . Table 7 shows a comparison of the classification result of the proposed similarity measures (S1 , S2 ) with the ones of the existing similarity measures [3–12, 30, 32, 34, 40]. From Table 7, we can see that the proposed similarity measures (S1 , S2 ), together with SZ [32], SP 1 [34], SP 2 [34], SP P [37] and SP 3 [34], can overcome the blemishes of the existing similarity measures SCC [5], SBA [40], SC [4], SH K [7], SLS1 [10], SLS2 [10], SLS3 [10], SLX [9], SL [3], SLC [8], SM [11], SH Y 1 [6], SH Y 2 [6], SH Y 3 [6], SY [12] and SW [30]. Although the final classification results are different (P1 for SP 1 [34], SP 3 [34], S1 and S2 ; P2 for SP 2 [34] and SZ [32]), these measures can still distinguish the unknown pattern.

Example 4 (Bacterial detection) The existing bacterial collection C = {C1 , C2 , C3 } represents Escherichia coli, Shigella, and Salmonella, respectively. The shape characteristics of the three gut bacteria are mainly described by the following set: X = {x1 (round head shape), x2 (single micromorphology), x3 (double micromorphology), x4 (big belly small morphology)}. The standard model data for the shape characteristics of the three gut bacteria are known as follows:

C1 = {< x1 , 0.2, 0.8 >, < x2 , 0.4, 0.6 >, < x3 , 0.5, 0.5 >, < x4 , 0.4, 0.6 >},
C2 = {< x1 , 0.5, 0.4 >, < x2 , 0.3, 0.7 >, < x3 , 0.5, 0.5 >, < x4 , 0.4, 0.6 >},
C3 = {< x1 , 0.5, 0.5 >, < x2 , 0.4, 0.6 >, < x3 , 0.4, 0.6 >, < x4 , 0.4, 0.6 >}.

The laboratory has an unknown bacterium Q as follows:

Q = {< x1 , 0.4, 0.6 >, < x2 , 0.4, 0.6 >, < x3 , 0.5, 0.5 >, < x4 , 0.4, 0.6 >}.
Table 7 The similarity measures between the known patterns and the unknown pattern in Example 3

S(C1 , Q) S(C2 , Q) S(C3 , Q) Classification result

SL [3] 0.9388 0.9388 0.9388 Cannot be classified
SC [4] 0.9625 0.9625 0.9625 Cannot be classified
SCC [5] 0.8880 0.8902 0.8902 Cannot be classified
SH Y 1 [6] 0.9250 0.9250 0.9250 Cannot be classified
SH Y 2 [6] 0.8857 0.8857 0.8857 Cannot be classified
SH Y 3 [6] 0.8605 0.8605 0.8605 Cannot be classified
SH K [7] 0.9625 0.9625 0.9625 Cannot be classified
SLC [8] 0.9625 0.9625 0.9625 Cannot be classified
SLX [9] 0.9625 0.9625 0.9625 Cannot be classified
SLS1 [10] 0.9625 0.9625 0.9625 Cannot be classified
SLS2 [10] 0.9625 0.9625 0.9625 Cannot be classified
SLS3 [10] 0.9625 0.9625 0.9625 Cannot be classified
SM [11] 0.9625 0.9625 0.9625 Cannot be classified
SY [12] 0.9949 0.9961 0.9961 Cannot be classified
SW [30] 0.9885 0.9943 0.9943 Cannot be classified
SZ [32] 0.7879 0.8281 0.8229 P2
SP 1 [34] 0.9688 0.9638 0.9663 P1
SP 2 [34] 0.8372 0.8484 0.8410 P2
SP 3 [34] 0.9446 0.9405 0.9415 P1
SP P [37] 0.9688 0.9638 0.9663 P1
SBA [40] 0.9625 0.9625 0.9625 Cannot be classified
S1 (proposed) 0.9688 0.9638 0.9663 P1
S2 (proposed) 0.9500 0.9420 0.9460 P1

(p = 1 in SM , SLC , SLS1 , SLS2 , SLS3 ; p = 1, t = 2 in SBA ; p = a = b = t = 1 in SP P ; and p = 1, tk = 5, k = 1 in S1 , S2 .) “Bold” denotes unreasonable results
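For the ore data of Example 3, even a plain Hamming-type similarity on squared grades separates the three candidates, because Q deviates from C1 , C2 and C3 on different attributes. The sketch below is an illustration, not the proposed S1 , although for this particular data its values agree with the S1 row of Table 7 to four decimals.

```python
def hamming_similarity(m, n):
    """1 - (1/2n) * sum(|mu_M^2 - mu_N^2| + |nu_M^2 - nu_N^2|) over attributes."""
    total = sum(abs(a[0] ** 2 - b[0] ** 2) + abs(a[1] ** 2 - b[1] ** 2)
                for a, b in zip(m, n))
    return 1 - total / (2 * len(m))

ores = {
    "C1": [(0.3, 0.3), (0.6, 0.1), (0.2, 0.6), (0.7, 0.3)],
    "C2": [(0.5, 0.3), (0.8, 0.1), (0.2, 0.6), (0.7, 0.3)],
    "C3": [(0.5, 0.3), (0.6, 0.1), (0.2, 0.6), (0.7, 0.3)],
}
q = [(0.4, 0.3), (0.7, 0.1), (0.3, 0.6), (0.7, 0.3)]

scores = {name: hamming_similarity(c, q) for name, c in ores.items()}
best = max(scores, key=scores.get)  # -> "C1", matching the S1 column
```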

Our goal is to find out the unknown bacterium that Q belongs to. Because S1 (C1 , Q) = 0.9580, S1 (C2 , Q) = 0.9473 and S1 (C3 , Q) = 0.9520, we can know that S1 (C1 , Q) is the largest value among S1 (C1 , Q), S1 (C2 , Q) and S1 (C3 , Q). Hence, the unknown bacterium is classified into the Escherichia coli C1 .

Table 8 shows a comparison of the classification result of the proposed similarity measures (S1 , S2 ) with the ones of the existing similarity measures [3–12, 30, 32, 34, 40]. From Table 8, we can see that the proposed similarity measures (S1 , S2 ), together with SY [12], SL [3], SW [30], SP 2 [34] and SP 3 [34], can overcome the blemishes of the existing similarity measures SCC [5], SBA [40], SC [4], SH K [7], SLS1 [10], SLS2 [10], SLS3 [10], SLX [9], SLC [8], SM [11], SH Y 1 [6], SH Y 2 [6], SH Y 3 [6], SZ [32], SP P [37] and SP 1 [34].

4.3 The effect of the parameters p, tk and k on the ordering

In order to analyze the effect of the parameters p, tk and k on the measure values, five experiments (Examples 1-5) are performed by taking different values of p (p = 1, 2, · · · , 9) corresponding to different values of the uncertainty parameter tk (tk = 1, 2, · · · , 9) and slope k (k = 1, 2, · · · , 8).

4.3.1 The effect of the parameters in Example 5 (Jewellery identification)

Example 5 (Jewellery identification) The existing jewellery collection C = {C1 , C2 , C3 } represents Moissanite, Ruby, and Emerald, respectively. The shape characteristics of the three jewellery items are mainly described by the following set: X = {x1 (refractive index), x2 (color), x3 (hardness), x4 (density)}. The standard model data for the shape characteristics of the three jewellery items are known as follows:

C1 = {< x1 , 0.3, 0.3 >, < x2 , 0.7, 0.4 >, < x3 , 0.4, 0.4 >, < x4 , 0.8, 0.4 >},
C2 = {< x1 , 0.5, 0.5 >, < x2 , 0.1, 0.1 >, < x3 , 0.5, 0.5 >, < x4 , 0.1, 0.1 >},
C3 = {< x1 , 0.5, 0.3 >, < x2 , 0.4, 0.4 >, < x3 , 0.3, 0.6 >, < x4 , 0.2, 0.8 >}.
Table 8 The similarity measures between the known patterns and the unknown pattern in Example 4

S(C1 , Q) S(C2 , Q) S(C3 , Q) Classification result

SL [3] 0.9000 0.9065 0.9293 P3
SC [4] 0.9500 0.9375 0.9500 Cannot be classified
SCC [5] 0.9500 0.9456 0.9500 Cannot be classified
SH Y 1 [6] 0.9500 0.9250 0.9500 Cannot be classified
SH Y 2 [6] 0.9228 0.8857 0.9228 Cannot be classified
SH Y 3 [6] 0.9048 0.8605 0.9048 Cannot be classified
SH K [7] 0.9500 0.9375 0.9500 Cannot be classified
SLC [8] 0.9500 0.9375 0.9500 Cannot be classified
SLX [9] 0.9500 0.9375 0.9500 Cannot be classified
SLS1 [10] 0.9500 0.9375 0.9500 Cannot be classified
SLS2 [10] 0.9500 0.9375 0.9500 Cannot be classified
SLS3 [10] 0.9667 0.9542 0.9667 Cannot be classified
SM [11] 0.9500 0.9375 0.9500 Cannot be classified
SY [12] 0.9854 0.9841 0.9903 P3
SW [30] 0.9843 0.9517 0.9667 P1
SZ [32] N/A N/A N/A ∗
SP 1 [34] 0.9500 0.9388 0.9500 Cannot be classified
SP 2 [34] 0.8750 0.8042 0.8361 P1
SP 3 [34] 0.9423 0.9074 0.9247 P1
SP P [37] 0.9500 0.9388 0.9500 Cannot be classified
SBA [40] 0.9500 0.9375 0.9500 Cannot be classified
S1 (proposed) 0.9580 0.9473 0.9520 P1
S2 (proposed) 0.9540 0.9430 0.9510 P1

(p = 1 in SM , SLC , SLS1 , SLS2 , SLS3 ; p = 1, t = 2 in SBA ; p = a = b = t = 1 in SP P ; and p = 1, tk = 5, k = 3 in S1 , S2 .) “Bold” denotes unreasonable results. “N/A” denotes that the degree of similarity cannot be computed because of “the division by zero problem”. “*” denotes that no result can be obtained
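The tie behind most “Cannot be classified” rows of Table 8 has a concrete source: the total squared-grade deviation of Q from C1 equals its deviation from C3 (0.40 in both cases, concentrated on x1 for C1 and split over x1 and x3 for C3 ). Any measure driven only by this summed deviation must therefore tie them, as the Hamming-type sketch below shows (an illustration, not any one cited measure).

```python
def hamming_similarity(m, n):
    """1 - (1/2n) * sum(|mu_M^2 - mu_N^2| + |nu_M^2 - nu_N^2|) over attributes."""
    total = sum(abs(a[0] ** 2 - b[0] ** 2) + abs(a[1] ** 2 - b[1] ** 2)
                for a, b in zip(m, n))
    return 1 - total / (2 * len(m))

bacteria = {
    "C1": [(0.2, 0.8), (0.4, 0.6), (0.5, 0.5), (0.4, 0.6)],
    "C2": [(0.5, 0.4), (0.3, 0.7), (0.5, 0.5), (0.4, 0.6)],
    "C3": [(0.5, 0.5), (0.4, 0.6), (0.4, 0.6), (0.4, 0.6)],
}
q = [(0.4, 0.6), (0.4, 0.6), (0.5, 0.5), (0.4, 0.6)]

# C1 and C3 tie exactly at 0.95, so an argmax rule cannot pick a winner.
scores = {name: hamming_similarity(c, q) for name, c in bacteria.items()}
```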

The customer has an unknown jewelry Q as follows:

Q = {< x1 , 0.4, 0.4 >, < x2 , 0.5, 0.5 >, < x3 , 0.4, 0.7 >, < x4 , 0.3, 0.8 >}.

On the basis of these different pairs of parameters, the similarity measure (we only take S1 into consideration) is computed, and its results are summarized in Fig. 2. Consequently, three significant points have been achieved as follows:

(1) For constant tk and k, it has been observed that the measure values corresponding to each label decrease with the increase in the value of p (Fig. 3). Moreover, it can be easily seen that the similarity value of C2 is bigger than that of C1 for p ∈ {1, 2} when tk = 9, k = 0; after that, the similarity value of C1 is bigger than that of C2 when p ≥ 3. But no matter how it changes, the similarity value of C3 is always the largest. From (b) to (i), the final ranking remains C3 ≻ C2 ≻ C1 .
(2) For constant p and k, as tk increases, the measure values corresponding to each label monotonically increase and then monotonically decrease (Fig. 4). Moreover, it can be easily seen that the similarity value of C2 is bigger than that of C1 for p ∈ [2, 9] from (a) to (c). But no matter how it changes, the similarity value of C3 is always the largest. From (d) to (i), C1 and C2 have a clear distinction. Meanwhile, the final ranking also remains C3 ≻ C2 ≻ C1 .
(3) For constant p and tk , as k increases, the measure values corresponding to each label monotonically increase and then monotonically decrease (Fig. 5). Moreover, from (a) to (c), the final ranking also remains C3 ≻ C2 ≻ C1 . But from (d) to (i), the similarity value of C2 is smaller than that of C1 when k = 0, while otherwise the similarity value of C2 is bigger than that of C1 .

4.3.2 The effect of the parameters in Example 1 (Medical diagnosis)

In order to discuss the effect of the parameters in Example 1, similarity measure S1 is computed, and the results are
Fig. 2 The total changing trend of parameters tk and k in Example 5
Fig. 3 The changing trend of parameter p in Example 5 (nine panels of similarity values for C1 , C2 , C3 versus p = 1, . . . , 9, with tk = 9 and k = 8, 7, . . . , 0)
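The decreasing trend in p reported for Fig. 3 is not specific to S1 : for any Minkowski-type construction it follows from the power mean inequality, since raising p shifts weight toward the worst attribute deviation. Below is a generic sketch on the Example 5 data, assuming the illustrative form Sp (M, N) = 1 − ((1/2n) Σ di^p)^(1/p) over the squared-grade deviations di ; this is not the multiparametric S1 itself.

```python
def minkowski_similarity(m, n, p):
    """Generic Minkowski-type similarity with parameter p >= 1 between
    PFSs given as lists of (mu, nu) pairs; non-increasing in p."""
    devs = [d for a, b in zip(m, n)
            for d in (abs(a[0] ** 2 - b[0] ** 2), abs(a[1] ** 2 - b[1] ** 2))]
    return 1 - (sum(d ** p for d in devs) / len(devs)) ** (1 / p)

# Example 5: known jewellery C1 against the unknown jewelry Q
c1 = [(0.3, 0.3), (0.7, 0.4), (0.4, 0.4), (0.8, 0.4)]
q = [(0.4, 0.4), (0.5, 0.5), (0.4, 0.7), (0.3, 0.8)]

sweep = [minkowski_similarity(c1, q, p) for p in range(1, 10)]
```

Because the attribute deviations are unequal here, the sweep is strictly decreasing in p, mirroring the shape of the curves in Fig. 3.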
Fig. 4 The changing trend of parameter tk in Example 5 (nine panels of similarity values for C1 , C2 , C3 versus tk = 1, . . . , 9, with k = 0 and p = 1, 2, . . . , 9)
Fig. 5 The changing trend of parameter k in Example 5
given in Fig. 6. Consequently, three significant points have been achieved as follows:

(1) For constant tk and k, it has been observed that the measure values corresponding to each label decrease with the increase in the value of p (Fig. 7). The similarity values of C1 and C3 are differentiable when k ∈ {0, 1}; after that, they are quite close. Moreover, it can be easily seen that the similarity value of C3 is bigger than that of C1 from (a) to (d) and from (h) to (i). However, the similarity value of C1 is bigger than that of C3 in (e) (p = 1) and in (f) to (g) (p = 1, 2). But no matter how it changes, the similarity value of C2 is always the smallest. From (a) to (i), most of the final rankings remain C3 ≻ C2 ≻ C1 .
(2) For constant p and k, as tk increases, the measure values corresponding to each label monotonically increase and then monotonically decrease (Fig. 8). Moreover, it can be easily seen that the similarity value of C3 is bigger than those of C1 and C2 from (a) to (i), and the final ranking remains C3 ≻ C2 ≻ C1 .
(3) For constant p and tk , as k increases, the measure values corresponding to each label monotonically increase and then monotonically decrease (Fig. 9). The similarity values of C1 and C3 are differentiable at first; after that, they are quite close. Moreover, for (a), it can be easily seen that the similarity value of C3 is bigger than that of C1 when k ∈ [0, 3] and k ∈ [7, 8], while the similarity value of C1 is bigger than that of C3 when k ∈ [4, 6]; for (b), the similarity value of C3 is bigger than that of C1 when k ∈ [0, 4] and k ∈ [7, 8], while the similarity value of C1 is bigger than that of C3 when k ∈ [5, 6]. From (c) to (i), the similarity value of C3 is bigger than that of C1 , and the final ranking also remains C3 ≻ C2 ≻ C1 .

4.3.3 The effect of the parameters in Example 2 (nanometer material identification)

In order to discuss the effect of the parameters in Example 2, similarity measure S1 is computed, and the results are given in Fig. 10. Consequently, three significant points have been achieved as follows:

(1) For constant tk and k, it has been observed that the measure values corresponding to each label decrease with the increase in the value of p (Fig. 11). The similarity values of C1 and C2 cannot be differentiated from (d) to (i). But no matter how close they are, the similarity value of C2 is always bigger than that of C1 . From (a) to (i), the final ranking remains C2 ≻ C1 ≻ C3 .
(2) For constant p and k, as tk increases, the similarity values corresponding to the labels C1 and C2 monotonically increase and then monotonically decrease (Fig. 12). However, the similarity value corresponding to the label C3 still monotonically increases. Moreover, from (a) to (i), the final ranking also remains C2 ≻ C1 ≻ C3 .
(3) For constant p and tk , as k increases, the similarity values corresponding to the labels C1 and C2 monotonically increase and then monotonically decrease (Fig. 13). However, the similarity value corresponding to the label C3 still monotonically increases. The similarity values of C1 and C2 are differentiable when k ∈ [0, 2]; after that, they are quite close. Moreover, from (a) to (i), the final ranking also remains C2 ≻ C1 ≻ C3 .

4.3.4 The effect of the parameters in Example 3 (ore identification)

In order to discuss the effect of the parameters in Example 3, similarity measure S1 is computed, and the results are given in Fig. 14. Consequently, three significant points have been achieved as follows:

(1) For constant tk and k, it has been observed that the measure values corresponding to each label decrease with the increase in the value of p (Fig. 15). The similarity values of C1 and C3 cannot be differentiated from (a) to (i). But no matter how close they are, the similarity value of C1 is always bigger than that of C3 . From (a) to (i), the final ranking remains C1 ≻ C3 ≻ C2 .
(2) For constant p and k, as tk increases, the similarity values corresponding to each label (C1 , C2 , C3 ) monotonically decrease (Fig. 16). The similarity values of C1 and C3 cannot be differentiated from (a) to (i). But no matter how close they are, the similarity value of C1 is always bigger than that of C3 (except the special case tk = 1). Moreover, from (a) to (i), most of the final rankings also remain C1 ≻ C3 ≻ C2 .
(3) For constant p and tk , as k increases, the similarity values corresponding to each label monotonically decrease (Fig. 17). The similarity values of C1 and C2 are differentiable from (a) to (d); after that, they are quite close. But no matter how close they are, the similarity value of C1 is always bigger than that of C3 . Moreover, from (a) to (i), the final ranking also remains C1 ≻ C3 ≻ C2 .

4.3.5 The effect of the parameters in Example 4 (bacterial detection)

In order to discuss the effect of the parameters in Example 4, similarity measure S1 is computed, and the results are given in Fig. 18. Consequently, three significant points have been achieved as follows:
Fig. 6 The total changing trend of parameters tk and k in Example 1
Fig. 7 The changing trend of parameter p in Example 1 (nine panels of similarity values for C1 , C2 , C3 versus p = 1, . . . , 9, with tk = 9 and k = 0, 1, . . . , 8)



[Nine panels (k = 0; p = 1, 2, ..., 9), each plotting the similarity values of C1, C2 and C3 against the parameter tk = 1, 2, ..., 9.]

Fig. 8 The changing trend of parameter tk in Example 1



Fig. 9 The changing trend of parameter k in Example 1
[Nine panels (tk = 9; p = 1, 2, ..., 9), each plotting the similarity values of C1, C2 and C3 against the parameter k = 0, 1, ..., 8.]

Fig. 10 The total changing trend of parameters tk and k in Example 2

[Nine panels (tk = 9; k = 0, 1, ..., 8), each plotting the similarity values of C1, C2 and C3 against the parameter p = 1, 2, ..., 9.]

Fig. 11 The changing trend of parameter p in Example 2



Fig. 12 The changing trend of parameter tk in Example 2
[Nine panels (k = 0; p = 1, 2, ..., 9), each plotting the similarity values of C1, C2 and C3 against the parameter tk = 1, 2, ..., 9.]

Fig. 13 The changing trend of parameter k in Example 2
[Nine panels (tk = 9; p = 1, 2, ..., 9), each plotting the similarity values of C1, C2 and C3 against the parameter k = 0, 1, ..., 8.]

Fig. 14 The total changing trend of parameters tk and k in Example 3

[Nine panels (tk = 9; k = 0, 1, ..., 8), each plotting the similarity values of C1, C2 and C3 against the parameter p = 1, 2, ..., 9.]

Fig. 15 The changing trend of parameter p in Example 3



Fig. 16 The changing trend of parameter tk in Example 3
[Nine panels (k = 0; p = 1, 2, ..., 9), each plotting the similarity values of C1, C2 and C3 against the parameter tk = 1, 2, ..., 9.]

Fig. 17 The changing trend of parameter k in Example 3
[Nine panels (tk = 9; p = 1, 2, ..., 9), each plotting the similarity values of C1, C2 and C3 against the parameter k = 0, 1, ..., 8.]



Fig. 18 The total changing trend of parameters tk and k in Example 4

[Nine panels (tk = 9; k = 0, 1, ..., 8), each plotting the similarity values of C1, C2 and C3 against the parameter p = 1, 2, ..., 9.]

Fig. 19 The changing trend of parameter p in Example 4



[Nine panels (k = 0; p = 1, 2, ..., 9), each plotting the similarity values of C1, C2 and C3 against the parameter tk = 1, 2, ..., 9.]

Fig. 20 The changing trend of parameter tk in Example 4



Fig. 21 The changing trend of parameter k in Example 4
[Nine panels (tk = 9; p = 1, 2, ..., 9), each plotting the similarity values of C1, C2 and C3 against the parameter k = 0, 1, ..., 8.]
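The qualitative parameter trends discussed in this section (e.g., the monotone decrease of the similarity values as p grows) can be reproduced with a small sweep. The following sketch uses a generic Lp (power-mean) Pythagorean fuzzy similarity over the squared membership grades; it is not the paper's S1 or S2 (whose formulae additionally involve the parameters tk and k), and the pattern data are hypothetical:

```python
def lp_similarity(A, B, p=1):
    """Generic Lp (power-mean) similarity between two Pythagorean fuzzy sets.

    A, B: lists of (membership, non-membership) pairs, each satisfying
    mu**2 + nu**2 <= 1.  This is a standard construction used here only to
    illustrate the role of the parameter p; the paper's measures S1 and S2
    additionally involve the parameters tk and k.
    """
    n = len(A)
    total = sum(abs(ma**2 - mb**2)**p + abs(na**2 - nb**2)**p
                for (ma, na), (mb, nb) in zip(A, B))
    return 1.0 - (total / (2 * n)) ** (1.0 / p)

# Hypothetical pattern C1 and query sample Q over three features
C1 = [(0.9, 0.3), (0.7, 0.6), (0.5, 0.8)]
Q = [(0.8, 0.4), (0.6, 0.5), (0.6, 0.7)]

# Sweeping p reproduces the qualitative trend of observation (1):
# the similarity value decreases monotonically as p grows.
values = [lp_similarity(C1, Q, p) for p in range(1, 10)]
assert all(x >= y for x, y in zip(values, values[1:]))
```

The monotone behaviour is guaranteed by the power-mean inequality: the inner mean is nondecreasing in p, so the similarity value is nonincreasing in p for any input data.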

(1) For constant tk and k, it has been observed that the measure values corresponding to each label decrease with the increase in the value of p (Fig. 19). The similarity values of C1, C2 and C3 are difficult to differentiate from (a) to (i). The similarity value of C1 is greater than or equal to that of C3 when p = 1 in (a) and (b), while the similarity value of C3 is bigger than that of C1 when p ∈ [1, 9] from (a) to (i). From (a) to (i), most of the final results remain C3 ≻ C2 ≻ C1.
(2) For constant p and k, as tk increases, the similarity values corresponding to each label monotonically increase (Fig. 20). The similarity values of C1, C2 and C3 cannot be differentiated in (a); after that, they become more and more differentiated. Moreover, from (a) to (i), the final result also remains C3 ≻ C2 ≻ C1.
(3) For constant p and tk, as k increases, the similarity values corresponding to each label monotonically increase (Fig. 21). The similarity values of C1 and C2 are not differentiable in (a). In particular, the discrimination between the similarity values of C2 and C3 is not well presented when k ∈ [7, 8] from (a) to (i). Moreover, from (a) to (i), most of the final results also remain C3 ≻ C2 ≻ C1.

4.4 Advantages of the proposed similarity measures

(1) The proposed similarity measures (S1 and S2) and distance measures (D1 and D2) depend upon three parameters p, tk and k, which help in adjusting the hesitation margin in computing data. The effect of hesitation will be diminished or almost neglected if the value of tk taken is very large, and for smaller values of tk, the effect of hesitation will rise. Thus, according to the requirements, the decision maker can adjust the parameters to deal with incomplete as well as uncertain information. Hence, the initiated similarity measures (S1 and S2) are more suitable for medical, industrial and scientific applications.
(2) As has been seen from existing studies [3–12, 30, 32, 34, 37, 40], diverse similarity measures under Pythagorean fuzzy environments have been developed by researchers, but there are some situations that cannot be distinguished, and the division by zero problem cannot be tackled, by these existing similarity measures. Therefore, their corresponding algorithms may give

5 Conclusion

In this paper, an endeavor has been made to present some series of distance and similarity measures to accommodate PFS information. PFS is one of the generalizations of the IFSs, handling the uncertainties in a deeper way. Keeping its advantages, we propose some series of measures which differentiate the different PFSs and also provide an alternative way to deal with PFSs in some cases. From the existing studies, it has been revealed that several existing theories are unable to rank the alternatives due to "the division by zero problem" and "counter-intuitive cases". To overcome this and to handle uncertain data in a more comprehensive manner, we present some series of measures by considering different parameters such as the Lp norm, the level of uncertainty tk and the slope k of the relations. Some illustrative examples related to pattern recognition are taken to demonstrate the developed measures and show their advantages. The major contributions of this paper can be summarized as follows:

(1) The formulae of Pythagorean fuzzy similarity measures and distance measures are proposed, and their properties are proved.
(2) The various desirable relations between the proposed similarity measures and distance measures have been derived.
(3) Some counter-intuitive examples of existing similarity measures are shown and discussed. For pattern recognition problems (medical diagnosis, nanometer material identification, ore identification, bacterial detection, jewellery identification), some identification difficulties of the existing similarity measures are shown to state the effectiveness of the proposed similarity measures.
(4) The general trends and sub-trends of the three parameters of the proposed similarity measure are provided.

The proposed information measures (similarity measure and distance measure) exhibit a broad scope of potential applications. Future directions may place emphasis on solving more pattern recognition problems and more decision making problems involving more flexible data representation by information measures.

Acknowledgements The authors are very appreciative to the reviewers for their precious comments which enormously ameliorated the quality of this paper. Our work is sponsored by the National Natural
an irrelevant or unreasonable result. The proposed Science Foundation of China (No. 61462019), MOE (Ministry of
Education in China) Project of Humanities and Social Sciences
similarity measures have the ability to overcome these
(No. 18YJCZH054), Natural Science Foundation of Guangdong
flaws, thus it is a more suitable and effective measure Province (No. 2018A030307033, 2018A0303130274), Social Science
to handle these cases. Foundation of Guangdong Province (No. GD18CFX06).
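To make the role of such parameters concrete, the following sketch implements a hypothetical Lp-type Pythagorean fuzzy similarity in the spirit of the biparametric measure of Boran and Akay [40], lifted to squared membership grades. It is an illustrative stand-in with assumed parameter roles (p as the Lp norm, t as the uncertainty level, k as the slope), not the exact S1 and S2 defined earlier in the paper:

```python
# Illustrative sketch (NOT the paper's exact S1/S2): a parametric Lp-type
# Pythagorean fuzzy similarity in the spirit of Boran-Akay [40], applied to
# squared membership grades. Assumed parameter roles:
#   p -- Lp norm, t -- level of uncertainty, k -- slope of the relation.
def pfs_similarity(A, B, p=1, t=2, k=1):
    """A, B: lists of (mu, nu) pairs with mu**2 + nu**2 <= 1."""
    n = len(A)
    total = 0.0
    for (mu_a, nu_a), (mu_b, nu_b) in zip(A, B):
        d_mu = mu_a**2 - mu_b**2   # difference of squared membership degrees
        d_nu = nu_a**2 - nu_b**2   # difference of squared non-membership degrees
        total += abs(t * d_mu - k * d_nu)**p + abs(t * d_nu - k * d_mu)**p
    # |t*d_mu - k*d_nu| <= t + k, so normalizing by 2n(t+k)^p keeps the
    # distance in [0, 1]; similarity is its complement.
    distance = (total / (2 * n * (t + k)**p))**(1 / p)
    return 1.0 - distance
```

Note that the normalizing factor 2n(t + k)^p is strictly positive for t, k > 0, so a measure of this shape cannot run into the division by zero problem discussed above, and identical sets always attain similarity 1.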
Compliance with Ethical Standards

Conflict of interests The authors declare no conflict of interests regarding the publication of this paper.

References

1. Zadeh LA (1965) Fuzzy sets. Inf Control 8(3):338–353
2. Atanassov K (1986) Intuitionistic fuzzy sets. Fuzzy Sets Syst 20(1):87–96
3. Li Y, Olson DL, Qin Z (2007) Similarity measures between intuitionistic fuzzy (vague) sets: a comparative analysis. Pattern Recognit Lett 28(2):278–285
4. Chen SM (1997) Similarity measures between vague sets and between elements. IEEE Trans Syst Man Cyber 27(1):153–158
5. Chen SM, Chang CH (2015) A novel similarity measure between Atanassov's intuitionistic fuzzy sets based on transformation techniques with applications to pattern recognition. Inf Sci 291:96–114
6. Hung WL, Yang MS (2004) Similarity measures of intuitionistic fuzzy sets based on Hausdorff distance. Pattern Recognit Lett 25(14):1603–1611
7. Hong DH, Kim C (1999) A note on similarity measures between vague sets and between elements. Inf Sci 115(1-4):83–96
8. Li DF, Cheng CT (2002) New similarity measures of intuitionistic fuzzy sets and application to pattern recognitions. Pattern Recognit Lett 23(1-3):221–225
9. Li F, Xu ZY (2001) Measures of similarity between vague sets. J Softw 12(6):922–927
10. Liang ZZ, Shi PF (2003) Similarity measures on intuitionistic fuzzy sets. Pattern Recognit Lett 24(15):2687–2693
11. Mitchell HB (2003) On the Dengfeng-Chuntian similarity measure and its application to pattern recognition. Pattern Recognit Lett 24(16):3101–3104
12. Ye J (2011) Cosine similarity measures for intuitionistic fuzzy sets and their applications. Math Comput Model 53(1-2):91–97
13. Yager RR (2014) Pythagorean membership grades in multicriteria decision making. IEEE Trans Fuzzy Syst 22(4):958–965
14. Garg H (2019) A hybrid GSA-GA algorithm for constrained optimization problems. Inf Sci 478:499–523
15. Zhang XL, Xu ZS (2014) Extension of TOPSIS to multiple criteria decision making with Pythagorean fuzzy sets. Int J Intell Syst 29(12):1061–1078
16. Peng XD, Yang Y (2015) Some results for Pythagorean fuzzy sets. Int J Intell Syst 30(11):1133–1160
17. Peng XD, Selvachandran G (2018) Pythagorean fuzzy set: state of the art and future directions. Artif Intell Rev. https://doi.org/10.1007/s10462-017-9596-9
18. Peng XD, Yang Y (2016) Fundamental properties of interval-valued Pythagorean fuzzy aggregation operators. Int J Intell Syst 31(5):444–487
19. Zhang C, Li D, Ren R (2016) Pythagorean fuzzy multigranulation rough set over two universes and its applications in merger and acquisition. Int J Intell Syst 31(9):921–943
20. Peng XD, Yang Y (2016) Multiple attribute group decision making methods based on Pythagorean fuzzy linguistic set. Comput Eng Appl 52(23):50–54
21. Liu ZM, Liu PD, Liu WL, Pang JY (2017) Pythagorean uncertain linguistic partitioned Bonferroni mean operators and their application in multi-attribute decision making. J Intell Fuzzy Syst 32(3):2779–2790
22. Liang DC, Xu ZS (2017) The new extension of TOPSIS method for multiple criteria decision making with hesitant Pythagorean fuzzy sets. Appl Soft Comput 60:167–179
23. Peng XD, Yang Y, Song J, Jiang Y (2015) Pythagorean fuzzy soft set and its application. Comput Eng 41(7):224–229
24. Zhan J, Sun B (2018) Covering-based intuitionistic fuzzy rough sets and applications in multi-attribute decision-making. Artif Intell Rev. https://doi.org/10.1007/s10462-018-9674-7
25. Peng HG, Wang JQ (2018) A multicriteria group decision-making method based on the normal cloud model with Zadeh's Z-numbers. IEEE Trans Fuzzy Syst 26:3246–3260
26. Chen TY (2019) Multiple criteria decision analysis under complex uncertainty: a Pearson-like correlation-based Pythagorean fuzzy compromise approach. Int J Intell Syst 34(1):114–151
27. Yang W, Pang Y (2018) New Pythagorean fuzzy interaction Maclaurin symmetric mean operators and their application in multiple attribute decision making. IEEE Access 6:39241–39260
28. Peng XD, Li WQ (2019) Algorithms for interval-valued Pythagorean fuzzy sets in emergency decision making based on multiparametric similarity measures and WDBA. IEEE Access 7:7419–7441
29. Yang Y, Ding H, Chen ZS, Li YL (2016) A note on extension of TOPSIS to multiple criteria decision making with Pythagorean fuzzy sets. Int J Intell Syst 31(1):68–72
30. Wei G, Wei Y (2018) Similarity measures of Pythagorean fuzzy sets based on the cosine function and their applications. Int J Intell Syst 33(3):634–652
31. Li DQ, Zeng WY, Qian Y (2017) Distance measures of Pythagorean fuzzy sets and their applications in multiattribute decision making. Control Decis 32(10):1817–1823
32. Zhang X (2016) A novel approach based on similarity measure for Pythagorean fuzzy multiple criteria group decision making. Int J Intell Syst 31(6):593–611
33. Zeng W, Li D, Yin Q (2018) Distance and similarity measures of Pythagorean fuzzy sets and their applications to multiple criteria group decision making. Int J Intell Syst 33(11):2236–2254
34. Peng X, Yuan H, Yang Y (2017) Pythagorean fuzzy information measures and their applications. Int J Intell Syst 32(10):991–1029
35. Huang HH, Liang Y (2018) Hybrid L1/2+2 method for gene selection in the Cox proportional hazards model. Comput Meth Prog Bio 164:65–73
36. Li DQ, Zeng WY (2018) Distance measure of Pythagorean fuzzy sets. Int J Intell Syst 33(2):348–361
37. Peng X (2018) New similarity measure and distance measure for Pythagorean fuzzy set. Complex Intell Syst. https://doi.org/10.1007/s40747-018-0084-x
38. Xu ZS, Yager RR (2006) Some geometric aggregation operators based on intuitionistic fuzzy sets. Int J Gen Syst 35(4):417–433
39. Grzegorzewski P (2004) Distance between intuitionistic fuzzy sets and/or interval-valued fuzzy sets based on the Hausdorff metric. Fuzzy Sets Syst 148(2):319–328
40. Boran FE, Akay D (2014) A biparametric similarity measure on intuitionistic fuzzy sets with applications to pattern recognition. Inf Sci 255:45–57

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Xindong Peng received the M.Sc. degree in Computer Science from Northwest Normal University, China, in 2016. He works at Shaoguan University. He has published more than 25 SCI-indexed papers as first author (including 7 ESI Highly Cited Papers) in journals such as ASOC, CIE, AIR, NCA, and IJIS, and serves as a reviewer for 30 SCI/SSCI journals such as TFS, IEEE Access, KBS, ASOC, CSR, AIR, IJFS, and SC. His current research interests include decision making, neutrosophic sets, Pythagorean fuzzy sets, soft sets, and aggregation operators.

Harish Garg did his PhD in Applied Mathematics at IIT Roorkee, India, in 2013. He works at Thapar Institute of Engineering & Technology (Deemed University). His research interests include decision making, aggregation operators, and fuzzy sets. Dr. Garg has produced 170+ papers published in refereed international journals including ESWA, IEEE Access, CC, SC, IJIS, ASOC, CIE, and many more. He is an Associate Editor of JIFS.