
Hindawi Publishing Corporation

Journal of Applied Mathematics


Volume 2013, Article ID 942309, 13 pages
http://dx.doi.org/10.1155/2013/942309

Research Article
Almost Periodic Solutions for Neutral-Type BAM Neural Networks with Delays on Time Scales

Yongkun Li and Li Yang


Department of Mathematics, Yunnan University, Kunming, Yunnan 650091, China

Correspondence should be addressed to Yongkun Li; yklie@ynu.edu.cn

Received 5 May 2013; Accepted 17 June 2013

Academic Editor: Sabri Arik

Copyright © 2013 Y. Li and L. Yang. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Using the existence of exponential dichotomies for linear dynamic equations on time scales, a fixed point theorem, and the theory of calculus on time scales, we obtain sufficient conditions for the existence and exponential stability of almost periodic solutions for a class of neutral-type BAM neural networks with delays on time scales. Finally, a numerical example illustrates the feasibility of our results and also shows that the continuous-time neural network and its discrete-time analogue have the same dynamical behaviors. The results of this paper are completely new even when the time scale $\mathbb{T} = \mathbb{R}$ or $\mathbb{Z}$ and are complementary to the previously known results.

1. Introduction

The bidirectional associative memory (BAM) neural network, which was introduced by Kosko (see [1]), is a special recurrent neural network that can store bipolar vector pairs and is composed of neurons arranged in two layers. The neurons in one layer are fully interconnected to the neurons in the other layer, while there are no interconnections among neurons in the same layer.

Recently, due to its wide range of applications, for instance, pattern recognition, associative memory, and combinatorial optimization, the BAM neural network has received much attention. For example, in [2–4], some sufficient conditions were obtained for the stability of the equilibrium points of BAM neural networks; in [5, 6], authors investigated the periodic solutions of BAM neural networks by using the continuation theorem of coincidence degree theory; in [7–9], authors studied the almost periodic solutions of BAM neural networks by using the exponential dichotomy and fixed point theorems; for other results about BAM neural networks, the reader may see [10–13] and references therein.

Since it is natural and important that systems contain some information about the derivative of the past state, so as to further describe and model the dynamics of such complex neural reactions [14], many authors investigated the dynamical behaviors of neutral-type neural networks with delays [15–26]. For example, in [21], under the assumption that the activation functions satisfy boundedness and Lipschitz conditions, the authors discussed the global asymptotic stability of neutral-type BAM neural networks with delays of the form
\[
x_i'(t) + \sum_{j=1}^{m} e_{ij}\, x_j'(t-h) = -a_i x_i(t) - \sum_{j=1}^{m} s_{ij} f_j\big(y_j(t-\tau)\big) + I_i, \quad i = 1,2,\ldots,m,
\]
\[
y_j'(t) + \sum_{i=1}^{m} \nu_{ji}\, y_j'(t-d) = -c_j y_j(t) - \sum_{i=1}^{m} t_{ji} g_i\big(x_i(t-\delta)\big) + J_j, \quad j = 1,2,\ldots,m. \tag{1}
\]

Also, it is well known that the study of dynamical systems on time scales is now an active area of research. One of the reasons for this is the fact that the study on time scales unifies the study of both discrete and continuous processes, besides many others. The pioneering works in this direction are [27–30]. The theory of time scales was initiated by Stefan Hilger in his Ph.D. thesis in 1988, providing a rich theory that unifies and extends discrete and continuous analysis [31, 32]. The time scales calculus has a tremendous potential for applications in mathematical models of real processes and phenomena studied in physics, chemical technology, population dynamics, biotechnology and economics, neural networks, and social sciences.

In fact, both continuous and discrete systems are very important in implementation and applications, but it is troublesome to study the dynamical properties of continuous and discrete systems separately. Therefore, it is meaningful to study dynamic equations on time scales, which can unify the continuous and discrete situations (see [13, 31, 33–40]).

Motivated by the above, in this paper, we propose a neutral-type BAM neural network with delays on time scales as follows:
\[
\begin{aligned}
x_i^{\Delta}(t) &= -a_i(t)x_i(t) + \sum_{j=1}^{m} a_{ji}(t) f_j\big(y_j(t-\tau_{ji}(t))\big) + \sum_{j=1}^{m} p_{ji}(t) g_j\big(y_j^{\Delta}(t-\sigma_{ji}(t))\big) + I_i(t), && t \in \mathbb{T},\ i = 1,2,\ldots,n,\\
y_j^{\Delta}(t) &= -b_j(t)y_j(t) + \sum_{i=1}^{n} b_{ij}(t) h_i\big(x_i(t-\zeta_{ij}(t))\big) + \sum_{i=1}^{n} q_{ij}(t) k_i\big(x_i^{\Delta}(t-\varsigma_{ij}(t))\big) + J_j(t), && t \in \mathbb{T},\ j = 1,2,\ldots,m,
\end{aligned} \tag{2}
\]
where $\mathbb{T}$ is an almost periodic time scale; $n$, $m$ are the numbers of neurons in the two layers; $x_i(t)$ and $y_j(t)$ denote the activations of the $i$th neuron and the $j$th neuron at time $t$; $a_i$ and $b_j$ represent the rates with which the $i$th neuron and the $j$th neuron reset their potential to the resting state in isolation when they are disconnected from the network and the external inputs at time $t$; $f_j$, $g_j$, $h_i$, and $k_i$ are the input-output functions (the activation functions); $\tau_{ji}$, $\sigma_{ji}$, $\zeta_{ij}$, and $\varsigma_{ij}$ are transmission delays at time $t$ and satisfy $t-\tau_{ji}(t) \in \mathbb{T}$, $t-\sigma_{ji}(t) \in \mathbb{T}$, $t-\zeta_{ij}(t) \in \mathbb{T}$, and $t-\varsigma_{ij}(t) \in \mathbb{T}$ for $t \in \mathbb{T}$; $a_{ji}$, $b_{ij}$ are elements of feedback templates at time $t$; $p_{ji}$, $q_{ij}$ are elements of feed-forward templates at time $t$; and $I_i$, $J_j$ denote biases of the $i$th neuron and the $j$th neuron at time $t$, $i = 1,2,\ldots,n$, $j = 1,2,\ldots,m$.
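To make the role of the $\Delta$-derivative in (2) concrete, the following minimal sketch (ours, not part of the paper) steps a stripped-down version of the system forward on an arbitrary isolated time scale, using $x(\sigma(t)) = x(t) + \mu(t)x^{\Delta}(t)$. We take $n = m = 1$, constant coefficients, and switch off the delays and the neutral terms ($p \equiv q \equiv 0$, $\tau = \sigma = \zeta = \varsigma \equiv 0$) purely to keep the recursion explicit; all names and numbers are hypothetical.

```python
import math

# A hypothetical isolated time scale: dense spacing on [0, 1], unit spacing afterwards.
T = [k / 10 for k in range(11)] + [2.0, 3.0, 4.0, 5.0]

def mu(idx):
    """Graininess at T[idx]: distance to the forward jump sigma(t)."""
    return T[idx + 1] - T[idx]

# Constant coefficients for a 1-1 network (illustrative values only).
a, b = 0.9, 0.8          # decay rates a_1, b_1
a11, b11 = 0.05, 0.04    # feedback template entries
I, J = 0.3, 0.2          # biases
f = h = math.tanh        # activation functions

# Step the simplified system x^Delta = -a x + a11 f(y) + I,
# y^Delta = -b y + b11 h(x) + J via x(sigma(t)) = x(t) + mu(t) x^Delta(t).
x, y = 0.5, -0.5
for idx in range(len(T) - 1):
    dx = -a * x + a11 * f(y) + I
    dy = -b * y + b11 * h(x) + J
    x, y = x + mu(idx) * dx, y + mu(idx) * dy
    print(f"t = {T[idx + 1]:4.1f}   x = {x:+.4f}   y = {y:+.4f}")
```

On $\mathbb{T} = \mathbb{R}$ the same recursion degenerates into a forward Euler step of an ordinary differential equation, while on $\mathbb{T} = \mathbb{Z}$ it is exactly the difference system; this is the unification that the time-scale formulation provides.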
Our main purpose in this paper is, using the exponential dichotomy of linear dynamic equations on time scales, a fixed point theorem, and the theory of calculus on time scales, to study the existence and exponential stability of almost periodic solutions of (2). Our results are new and complementary to the previously known results even when the time scale $\mathbb{T} = \mathbb{R}$ or $\mathbb{Z}$.

For convenience, we denote $[a,b]_{\mathbb{T}} = \{t \mid t \in [a,b] \cap \mathbb{T}\}$. For an almost periodic function $f : \mathbb{T} \to \mathbb{R}$, denote $f^+ = \sup_{t\in\mathbb{T}}|f(t)|$ and $f^- = \inf_{t\in\mathbb{T}}|f(t)|$. Set $\mathbb{X} = \{\phi = (\varphi_1,\varphi_2,\ldots,\varphi_n,\psi_1,\psi_2,\ldots,\psi_m)^T \mid \varphi_i,\psi_j \in C^1(\mathbb{T},\mathbb{R}),\ \varphi_i,\psi_j \text{ are almost periodic functions on } \mathbb{T},\ i=1,2,\ldots,n,\ j=1,2,\ldots,m\}$ with the norm $\|\phi\| = \max\{|\varphi|_1,|\psi|_1\}$, where $|\varphi|_1 = \max\{|\varphi|_0,|\varphi^{\Delta}|_0\}$, $|\psi|_1 = \max\{|\psi|_0,|\psi^{\Delta}|_0\}$, $|\varphi|_0 = \max_{1\le i\le n}\varphi_i^+$, $|\varphi^{\Delta}|_0 = \max_{1\le i\le n}(\varphi_i^{\Delta}(t))^+$, $|\psi|_0 = \max_{1\le j\le m}\psi_j^+$, $|\psi^{\Delta}|_0 = \max_{1\le j\le m}(\psi_j^{\Delta}(t))^+$, and $C^1(\mathbb{T},\mathbb{R})$ is the set of continuous functions with continuous derivatives on $\mathbb{T}$; then $\mathbb{X}$ is a Banach space.

The initial condition of (2) is
\[
x_i(s) = \varphi_i(s), \quad y_j(s) = \psi_j(s), \quad s \in [-\theta,0]_{\mathbb{T}}, \tag{3}
\]
where $\theta = \max\{\max_{(i,j)}\{\tau_{ji}^+,\sigma_{ji}^+,\zeta_{ij}^+,\varsigma_{ij}^+\}\}$, $\varphi_i,\psi_j \in C^1([-\theta,0]_{\mathbb{T}},\mathbb{R})$, $i = 1,2,\ldots,n$, $j = 1,2,\ldots,m$.

Throughout this paper, we assume that the following conditions hold:

(H1) $a_{ji}, p_{ji}, b_{ij}, q_{ij}, I_i, J_j \in C(\mathbb{T},\mathbb{R})$, $a_i, b_j \in C(\mathbb{T},\mathbb{R}^+)$ with $-a_i, -b_j \in \mathcal{R}^+$, and $\tau_{ji}, \sigma_{ji}, \zeta_{ij}, \varsigma_{ij} \in C(\mathbb{T},\mathbb{T}\cap\mathbb{R}^+)$ are all almost periodic functions, where $\mathcal{R}^+$ denotes the set of positively regressive functions from $\mathbb{T}$ to $\mathbb{R}$, $i = 1,2,\ldots,n$, $j = 1,2,\ldots,m$;

(H2) $f_j, g_j, h_i, k_i \in C(\mathbb{R},\mathbb{R})$, and there exist positive constants $H_j^f$, $H_j^g$, $H_i^h$, and $H_i^k$ such that
\[
|f_j(u)-f_j(v)| \le H_j^f|u-v|, \quad |g_j(u)-g_j(v)| \le H_j^g|u-v|, \quad |h_i(u)-h_i(v)| \le H_i^h|u-v|, \quad |k_i(u)-k_i(v)| \le H_i^k|u-v| \tag{4}
\]
for all $u,v \in \mathbb{R}$, $i = 1,2,\ldots,n$, $j = 1,2,\ldots,m$.

This paper is organized as follows. In Section 2, we introduce some notations and definitions and state some preliminary results which are needed in later sections. In Section 3, we establish some sufficient conditions for the existence of almost periodic solutions of (2). In Section 4, we prove that the almost periodic solution obtained in Section 3 is exponentially stable. In Section 5, we give an example to illustrate the feasibility of our results obtained in the previous sections. We draw a conclusion in Section 6.

2. Preliminaries

In this section, we introduce some definitions and state some preliminary results.

Definition 1 (see [32]). Let $\mathbb{T}$ be a nonempty closed subset (time scale) of $\mathbb{R}$. The forward and backward jump operators $\sigma, \rho : \mathbb{T} \to \mathbb{T}$ and the graininess $\mu : \mathbb{T} \to \mathbb{R}^+$ are defined, respectively, by
\[
\sigma(t) = \inf\{s \in \mathbb{T} : s > t\}, \quad \rho(t) = \sup\{s \in \mathbb{T} : s < t\}, \quad \mu(t) = \sigma(t) - t. \tag{5}
\]
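As a quick illustration of Definition 1 (our own sketch, not part of the paper), the code below evaluates $\sigma$, $\rho$, and $\mu$ on a small, hypothetical time scale that mixes closely spaced points with isolated ones; the usual convention $\sigma(\max\mathbb{T}) = \max\mathbb{T}$, $\rho(\min\mathbb{T}) = \min\mathbb{T}$ is assumed.

```python
# A hypothetical finite sample of a time scale: a dense block plus isolated points.
T = sorted([0.0, 0.25, 0.5, 0.75, 1.0, 2.0, 3.0, 5.0])

def sigma(t):
    """Forward jump operator: inf{s in T : s > t} (t itself if t is the maximum)."""
    later = [s for s in T if s > t]
    return min(later) if later else t

def rho(t):
    """Backward jump operator: sup{s in T : s < t} (t itself if t is the minimum)."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else t

def mu(t):
    """Graininess mu(t) = sigma(t) - t."""
    return sigma(t) - t

for t in T:
    print(f"t = {t:4.2f}   sigma(t) = {sigma(t):4.2f}   rho(t) = {rho(t):4.2f}   mu(t) = {mu(t):4.2f}")
```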
Lemma 2 (see [32]). Assume that $p, q : \mathbb{T} \to \mathbb{R}$ are two regressive functions; then

(i) $e_0(t,s) \equiv 1$ and $e_p(t,t) \equiv 1$;
(ii) $e_p(t,s) = 1/e_p(s,t) = e_{\ominus p}(s,t)$;
(iii) $e_p(t,s)e_p(s,r) = e_p(t,r)$;
(iv) $(e_p(t,s))^{\Delta} = p(t)e_p(t,s)$.

Lemma 3 (see [32]). Let $f, g$ be $\Delta$-differentiable functions on $\mathbb{T}$. Then

(i) $(\nu_1 f + \nu_2 g)^{\Delta} = \nu_1 f^{\Delta} + \nu_2 g^{\Delta}$, for any constants $\nu_1, \nu_2$;
(ii) $(fg)^{\Delta}(t) = f^{\Delta}(t)g(t) + f(\sigma(t))g^{\Delta}(t) = f(t)g^{\Delta}(t) + f^{\Delta}(t)g(\sigma(t))$.

Lemma 4 (see [32]). Assume that $p(t) \ge 0$ for $t \ge s$. Then $e_p(t,s) \ge 1$.

Definition 5 (see [32]). A function $f : \mathbb{T} \to \mathbb{R}$ is positively regressive if $1 + \mu(t)f(t) > 0$ for all $t \in \mathbb{T}$.

Lemma 6 (see [32]). Suppose that $p \in \mathcal{R}^+$. Then

(i) $e_p(t,s) > 0$, for all $t,s \in \mathbb{T}$;
(ii) if $p(t) \le q(t)$ for all $t \ge s$, $t,s \in \mathbb{T}$, then $e_p(t,s) \le e_q(t,s)$ for all $t \ge s$.

Lemma 7 (see [32]). If $p \in \mathcal{R}$ and $a, b, c \in \mathbb{T}$, then
\[
[e_p(c,\cdot)]^{\Delta} = -p\,[e_p(c,\cdot)]^{\sigma}, \qquad \int_a^b p(t)\, e_p(c,\sigma(t))\,\Delta t = e_p(c,a) - e_p(c,b). \tag{6}
\]
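On a purely discrete time scale the exponential function reduces to a finite product, $e_p(t,s) = \prod_{\tau \in [s,t)_{\mathbb{T}}}\big(1 + \mu(\tau)p(\tau)\big)$ for $t \ge s$, which makes properties such as Lemma 2(i) and (iii) easy to check numerically. The sketch below does exactly that on a hypothetical discrete grid; it is our illustration under that product formula, and the grid and coefficient are invented.

```python
import math

# Hypothetical isolated time scale and a regressive function p on it.
T = [0.0, 0.5, 1.0, 2.0, 2.5, 4.0, 5.0]

def p(t):
    return 0.3 * math.cos(t)          # sample coefficient; 1 + mu*p stays positive on this grid

def mu(t):
    i = T.index(t)
    return T[i + 1] - T[i]            # graininess (t must not be the last grid point)

def e_p(t, s):
    """Time-scale exponential on an isolated time scale, for t >= s."""
    prod = 1.0
    for tau in T:
        if s <= tau < t:
            prod *= 1.0 + mu(tau) * p(tau)
    return prod

r, s, t = T[1], T[3], T[5]
lhs = e_p(t, s) * e_p(s, r)
rhs = e_p(t, r)
print(f"e_p(t,s)*e_p(s,r) = {lhs:.6f},   e_p(t,r) = {rhs:.6f}")   # Lemma 2(iii)
print("e_p(t,t) =", e_p(t, t))                                     # Lemma 2(i)
```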
Lemma 8 (see [32]). Let $a \in \mathbb{T}^k$, $b \in \mathbb{T}$, and assume that $f : \mathbb{T} \times \mathbb{T}^k \to \mathbb{R}$ is continuous at $(t,t)$, where $t \in \mathbb{T}^k$ with $t > a$. Also assume that $f^{\Delta}(t,\cdot)$ is rd-continuous on $[a,\sigma(t)]$. Suppose that for each $\varepsilon > 0$, there exists a neighborhood $U$ of $\tau \in [a,\sigma(t)]$ such that
\[
|f(\sigma(t),\tau) - f(s,\tau) - f^{\Delta}(t,\tau)(\sigma(t)-s)| \le \varepsilon|\sigma(t)-s|, \quad \forall s \in U, \tag{7}
\]
where $f^{\Delta}$ denotes the derivative of $f$ with respect to the first variable. Then,

(i) $g(t) := \int_a^t f(t,\tau)\,\Delta\tau$ implies $g^{\Delta}(t) := \int_a^t f^{\Delta}(t,\tau)\,\Delta\tau + f(\sigma(t),t)$;
(ii) $h(t) := \int_t^b f(t,\tau)\,\Delta\tau$ implies $h^{\Delta}(t) := \int_t^b f^{\Delta}(t,\tau)\,\Delta\tau - f(\sigma(t),t)$.

Definition 9 (see [34]). A time scale $\mathbb{T}$ is called an almost periodic time scale if
\[
\Pi := \{\tau \in \mathbb{R} : t+\tau \in \mathbb{T},\ \forall t \in \mathbb{T}\} \neq \{0\}. \tag{8}
\]

Definition 10 (see [34]). Let $\mathbb{T}$ be an almost periodic time scale. A function $f(t) : \mathbb{T} \to \mathbb{R}^n$ is said to be almost periodic on $\mathbb{T}$ if, for any $\varepsilon > 0$, the set
\[
E(\varepsilon,f) = \{\tau \in \Pi : |f(t+\tau) - f(t)| < \varepsilon,\ \forall t \in \mathbb{T}\} \tag{9}
\]
is relatively dense in $\mathbb{T}$; that is, for any $\varepsilon > 0$, there exists a constant $l(\varepsilon) > 0$ such that each interval of length $l(\varepsilon)$ contains at least one $\tau \in E(\varepsilon,f)$ such that
\[
|f(t+\tau) - f(t)| < \varepsilon, \quad \forall t \in \mathbb{T}. \tag{10}
\]
The set $E(\varepsilon,f)$ is called the $\varepsilon$-translation set of $f(t)$, $\tau$ is called the $\varepsilon$-translation number of $f(t)$, and $l(\varepsilon)$ is called the inclusion length of $E(\varepsilon,f)$.

Lemma 11 (see [34]). If $f \in C(\mathbb{T},\mathbb{R}^n)$ is an almost periodic function, then $f$ is bounded on $\mathbb{T}$.

Lemma 12 (see [34]). If $f, g \in C(\mathbb{T},\mathbb{R}^n)$ are almost periodic functions, then $f+g$ and $fg$ are also almost periodic.

Definition 13 (see [35]). Let $X \in \mathbb{R}^n$ and let $A(t)$ be an $n \times n$ matrix-valued function on $\mathbb{T}$. The linear system
\[
X^{\Delta}(t) = A(t)X(t), \quad t \in \mathbb{T} \tag{11}
\]
is said to admit an exponential dichotomy on $\mathbb{T}$ if there exist positive constants $k_i$, $\alpha_i$, $i = 1,2$, a projection $P$, and the fundamental solution matrix $X(t)$ of (11) satisfying
\[
|X(t)PX^{-1}(s)| \le k_1 e_{\ominus\alpha_1}(t,s), \quad s,t \in \mathbb{T},\ t \ge s, \qquad
|X(t)(I-P)X^{-1}(s)| \le k_2 e_{\ominus\alpha_2}(s,t), \quad s,t \in \mathbb{T},\ t \le s, \tag{12}
\]
where $|\cdot|$ is a matrix norm on $\mathbb{T}$; that is, if $A = (a_{ij})_{n\times m}$, then we can take $|A| = \big(\sum_{i=1}^n\sum_{j=1}^m |a_{ij}|^2\big)^{1/2}$.

Lemma 14 (see [34]). If (11) admits an exponential dichotomy, then the following almost periodic system
\[
X^{\Delta}(t) = A(t)X(t) + g(t), \quad t \in \mathbb{T} \tag{13}
\]
has an almost periodic solution as follows:
\[
X(t) = \int_{-\infty}^t X(t)PX^{-1}(\sigma(s))g(s)\,\Delta s - \int_t^{+\infty} X(t)(I-P)X^{-1}(\sigma(s))g(s)\,\Delta s, \tag{14}
\]
where $X(t)$ is the fundamental solution matrix of (11).

Lemma 15 (see [35]). If $A(t)$ is a uniformly bounded rd-continuous $n \times n$ matrix-valued function on $\mathbb{T}$, and there is a $\delta > 0$ such that
\[
|a_{ii}(t)| - \sum_{j \ne i}|a_{ij}(t)| - \frac{1}{2}\mu(t)\Big(\sum_{j=1}^n |a_{ij}(t)|\Big)^2 - \delta^2\mu(t) \ge 2\delta, \quad t \in \mathbb{T},\ i = 1,2,\ldots,n, \tag{15}
\]
then (11) admits an exponential dichotomy on $\mathbb{T}$.
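The row condition (15) is straightforward to evaluate numerically once the entries of $A(t)$ and the graininess $\mu(t)$ can be sampled. The helper below is a pointwise check of (15) at sampled times for a candidate $\delta$; it is only an illustrative sketch (sampling cannot replace the "for all $t \in \mathbb{T}$" quantifier), and all names and the sampled values are our own choices (the rate bounds come from Section 5).

```python
def row_condition(A_row, i, mu_t, delta):
    """Left-hand side of (15) minus 2*delta for the i-th row of A(t) at one sampled time.

    A_row: the row (a_i1(t), ..., a_in(t)); mu_t: graininess mu(t); delta: candidate delta.
    A nonnegative return value means (15) holds for this row at this sample.
    """
    diag = abs(A_row[i])
    off_diag = sum(abs(a) for j, a in enumerate(A_row) if j != i)
    row_sum = sum(abs(a) for a in A_row)
    return diag - off_diag - 0.5 * mu_t * row_sum ** 2 - delta ** 2 * mu_t - 2 * delta

# Example: the diagonal system (21) of Section 3, sampled with the rate bounds of Section 5, on T = Z (mu = 1).
delta = 0.2
sampled_matrices = [
    [[-0.95, 0.0], [0.0, -0.92]],   # A(t) = diag(-a_1(t), -a_2(t)) at one sampled time
    [[-1.00, 0.0], [0.0, -0.94]],   # ... and at another
]
ok = all(row_condition(A[i], i, 1.0, delta) >= 0 for A in sampled_matrices for i in range(2))
print("condition (15) holds at the sampled times with delta = 0.2:", ok)
```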
Definition 16. Let $z^*(t) = (x_1^*(t),\ldots,x_n^*(t),y_1^*(t),\ldots,y_m^*(t))^T$ be an almost periodic solution of (2) with initial value $\phi^*(s) = (\varphi_1^*(s),\ldots,\varphi_n^*(s),\psi_1^*(s),\ldots,\psi_m^*(s))^T$. Suppose that there exists a positive constant $\lambda$ with $-\lambda \in \mathcal{R}^+$ such that, for $t_0 \in [-\theta,0]_{\mathbb{T}}$, there exists $M > 1$ such that an arbitrary solution $z(t) = (x_1(t),\ldots,x_n(t),y_1(t),\ldots,y_m(t))^T$ of (2) with initial value $\phi(s) = (\varphi_1(s),\ldots,\varphi_n(s),\psi_1(s),\ldots,\psi_m(s))^T$ satisfies
\[
|z(t)-z^*(t)|_1 \le M\|\phi-\phi^*\|\, e_{-\lambda}(t,t_0), \quad t \in [-\theta,\infty)_{\mathbb{T}},\ t \ge t_0, \tag{16}
\]
where $|z(t)-z^*(t)|_1 = \max\{\max_{1\le i\le n}\{|x_i(t)-x_i^*(t)|, |(x_i(t)-x_i^*(t))^{\Delta}|\}, \max_{1\le j\le m}\{|y_j(t)-y_j^*(t)|, |(y_j(t)-y_j^*(t))^{\Delta}|\}\}$ and $\|\phi-\phi^*\| = \max\{\max_{1\le i\le n}\sup_{s\in[-\theta,0]_{\mathbb{T}}}\{|\varphi_i(s)-\varphi_i^*(s)|, |(\varphi_i(s)-\varphi_i^*(s))^{\Delta}|\}, \max_{1\le j\le m}\sup_{s\in[-\theta,0]_{\mathbb{T}}}\{|\psi_j(s)-\psi_j^*(s)|, |(\psi_j(s)-\psi_j^*(s))^{\Delta}|\}\}$. Then, the solution $z^*(t)$ is said to be exponentially stable.

3. Existence of Almost Periodic Solutions

In this section, we state and prove sufficient conditions for the existence of almost periodic solutions of (2).

Let $\phi^0(t) = (\varphi_1^0(t),\ldots,\varphi_n^0(t),\psi_1^0(t),\ldots,\psi_m^0(t))^T$, where $\varphi_i^0(t) = \int_{-\infty}^t e_{-a_i}(t,\sigma(s)) I_i(s)\,\Delta s$ and $\psi_j^0(t) = \int_{-\infty}^t e_{-b_j}(t,\sigma(s)) J_j(s)\,\Delta s$, $i = 1,2,\ldots,n$, $j = 1,2,\ldots,m$, and let $L$ be a constant satisfying $L \ge \max\{\|\phi^0\|, \max_{1\le j\le m}\{|f_j(0)|, |g_j(0)|\}, \max_{1\le i\le n}\{|h_i(0)|, |k_i(0)|\}\}$. We have the following theorem.
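On $\mathbb{T} = \mathbb{Z}$ the improper $\Delta$-integral defining $\varphi_i^0$ becomes a sum, since $e_{-a}(t,\sigma(s)) = \prod_{u=s+1}^{t-1}(1-a(u))$ there. The sketch below (ours, with a hypothetical truncation depth and the sample data of Section 5) evaluates such a truncated sum and checks that it satisfies the defining recursion $\varphi^0(t+1)-\varphi^0(t) = -a(t)\varphi^0(t) + I(t)$.

```python
import math

def a(t):
    return 0.975 + 0.025 * math.sin(t)        # a sample rate in (0, 1), as in Section 5

def I(t):
    return 0.6 * math.sin(math.sqrt(3) * t)   # a sample almost periodic input, as in Section 5

def e_minus_a(t, s_plus_1):
    """e_{-a}(t, sigma(s)) on Z: product of (1 - a(u)) for u = s+1, ..., t-1."""
    prod = 1.0
    for u in range(s_plus_1, t):
        prod *= 1.0 - a(u)
    return prod

def phi0(t, depth=60):
    """Truncated version of phi^0(t) = sum_{s=-infinity}^{t-1} e_{-a}(t, s+1) I(s)."""
    return sum(e_minus_a(t, s + 1) * I(s) for s in range(t - depth, t))

t = 10
lhs = phi0(t + 1, depth=61) - phi0(t)          # both sums truncated at the same lower point t - 60
rhs = -a(t) * phi0(t) + I(t)
print(f"phi0({t}) = {phi0(t):+.6f},   recursion residual = {abs(lhs - rhs):.2e}")
```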

Theorem 17. Let (H1 ) and (H2 ) hold. Suppose that admits an exponential dichotomy on T. Thus, by Lemma 14,
we obtain that (19) has an almost periodic solution, which is
(H3 ) there exists a positive constant 𝛿 such that expressed as follows:

1 𝑡
𝑐𝑘 (𝑡) − 𝜇 (𝑡) 𝑐𝑘2 (𝑡) − 𝛿2 𝜇 (𝑡) ≥ 2𝛿, 𝜑
𝑥𝑖 (𝑡) = ∫ 𝑒−𝑎𝑖 (𝑡, 𝜎 (𝑠)) (𝐹𝑖 (𝑠, 𝜓) + 𝐼𝑖 (𝑠)) Δ𝑠,
2 (17) −∞
𝑡 ∈ T, 𝑘 = 1, 2, . . . , 𝑛 + 𝑚,
𝑖 = 1, 2, . . . , 𝑛,
(22)
where 𝑡
𝜓
𝑦𝑗 (𝑡) = ∫ 𝑒−𝑏𝑗 (𝑡, 𝜎 (𝑠)) (𝐺𝑗 (𝑠, 𝜑) + 𝐽𝑖 (𝑠)) Δ𝑠,
−∞
𝑎 (𝑡) , 𝑘 = 𝑖, 𝑖 = 1, 2, . . . , 𝑛,
𝑐𝑘 (𝑡) = { 𝑖 (18) 𝑗 = 1, 2, . . . , 𝑚.
𝑏𝑗 (𝑡) , 𝑘 = 𝑛 + 𝑗, 𝑗 = 1, 2, . . . , 𝑚;

For 𝜙 ∈ X0 , then ‖𝜙‖ ≤ ‖𝜙 − 𝜙0 ‖ + ‖𝜙0 ‖ ≤ 2𝐿. Define the


(H4 ) max{max1≤𝑖≤𝑛 {(𝜃𝑖 /𝑎𝑖− ), (1 + (𝑎𝑖+ /𝑎𝑖− ))𝜃𝑖 }, following operator:
max1≤𝑗≤𝑚 {(𝛾𝑗 /𝑏𝑗− ), (1 + (𝑏𝑗+ /𝑏𝑗− ))𝛾𝑗 }} ≤ (1/2),
𝑓 𝑔
where 𝜃𝑖 = ∑𝑚 + + + +
𝑗=1 (𝑎𝑗𝑖 𝐻𝑗 + 𝑝𝑗𝑖 𝐻𝑗 + 𝑎𝑗𝑖 + 𝑝𝑗𝑖 ), Φ : X0 󳨀→ X0 ,
𝛾𝑗 = ∑𝑛𝑖=1 (𝑏𝑖𝑗+ 𝐻𝑖ℎ + 𝑞𝑖𝑗+ 𝐻𝑖𝑘 + 𝑏𝑖𝑗+ + 𝑞𝑖𝑗+ ), 𝑖 = 1, 2, 𝑇
. . . , 𝑛, 𝑗 = 1, 2, . . . , 𝑚. (𝜑1 , 𝜑2 , . . . , 𝜑𝑛 , 𝜓1 , 𝜓2 , . . . , 𝜓𝑚 ) (23)
𝜑 𝜑 𝜓 𝜓 𝑇
Then, (2) has a unique almost periodic solution in X0 = 󳨀→ (𝑥1 , 𝑥2 , . . . , 𝑥𝑛𝜑 , 𝑦1 , 𝑦2 , . . . , 𝑦𝑚
𝜓
) .
{𝜙 ∈ X | ‖𝜙 − 𝜙0 ‖ ≤ 𝐿}, 𝜙(𝑡) = (𝜑1 (𝑡), 𝜑2 (𝑡),. . . , 𝜑𝑛 (𝑡),
𝜓1 (𝑡), 𝜓2 (𝑡), . . . , 𝜓𝑚 (𝑡))𝑇 . We will show that Φ is a contraction.
First, we show that for any $\phi \in \mathbb{X}_0$, we have $\Phi\phi \in \mathbb{X}_0$. Note that
\[
\begin{aligned}
|F_i(s,\psi)| &= \Big|\sum_{j=1}^m a_{ji}(s) f_j\big(\psi_j(s-\tau_{ji}(s))\big) + \sum_{j=1}^m p_{ji}(s) g_j\big(\psi_j^{\Delta}(s-\sigma_{ji}(s))\big)\Big|\\
&\le \sum_{j=1}^m a_{ji}^+\big(\big|f_j(\psi_j(s-\tau_{ji}(s))) - f_j(0)\big| + |f_j(0)|\big) + \sum_{j=1}^m p_{ji}^+\big(\big|g_j(\psi_j^{\Delta}(s-\sigma_{ji}(s))) - g_j(0)\big| + |g_j(0)|\big)\\
&\le \sum_{j=1}^m \big(a_{ji}^+ H_j^f + p_{ji}^+ H_j^g\big)|\psi|_1 + \sum_{j=1}^m \big(a_{ji}^+|f_j(0)| + p_{ji}^+|g_j(0)|\big)\\
&\le \sum_{j=1}^m \big(a_{ji}^+ H_j^f + p_{ji}^+ H_j^g\big)\|\phi\| + \sum_{j=1}^m \big(a_{ji}^+|f_j(0)| + p_{ji}^+|g_j(0)|\big)\\
&\le \sum_{j=1}^m \big(a_{ji}^+ H_j^f + p_{ji}^+ H_j^g + a_{ji}^+ + p_{ji}^+\big)2L := 2\theta_i L, \quad i = 1,2,\ldots,n,
\end{aligned} \tag{24}
\]
and similarly,
\[
|G_j(s,\varphi)| \le \sum_{i=1}^n \big(b_{ij}^+ H_i^h + q_{ij}^+ H_i^k + b_{ij}^+ + q_{ij}^+\big)2L := 2\gamma_j L, \quad j = 1,2,\ldots,m. \tag{25}
\]
Therefore, we have
\[
\big|(\Phi(\phi-\phi^0))_i(t)\big| = \Big|\int_{-\infty}^t e_{-a_i}(t,\sigma(s)) F_i(s,\psi)\,\Delta s\Big| \le \int_{-\infty}^t e_{-a_i}(t,\sigma(s))|F_i(s,\psi)|\,\Delta s \le \int_{-\infty}^t e_{-a_i^-}(t,\sigma(s))\,2\theta_i L\,\Delta s \le \frac{2\theta_i L}{a_i^-}, \quad i = 1,2,\ldots,n,
\]
\[
\big|(\Phi(\phi-\phi^0))_{n+j}(t)\big| = \Big|\int_{-\infty}^t e_{-b_j}(t,\sigma(s)) G_j(s,\varphi)\,\Delta s\Big| \le \int_{-\infty}^t e_{-b_j}(t,\sigma(s))|G_j(s,\varphi)|\,\Delta s \le \int_{-\infty}^t e_{-b_j^-}(t,\sigma(s))\,2\gamma_j L\,\Delta s \le \frac{2\gamma_j L}{b_j^-}, \quad j = 1,2,\ldots,m. \tag{26}
\]
On the other hand, for $i = 1,2,\ldots,n$, we have
\[
\begin{aligned}
\big|(\Phi(\phi-\phi^0))_i^{\Delta}(t)\big| &= \Big|\Big(\int_{-\infty}^t e_{-a_i}(t,\sigma(s)) F_i(s,\psi)\,\Delta s\Big)_t^{\Delta}\Big| = \Big|F_i(t,\psi) - a_i(t)\int_{-\infty}^t e_{-a_i}(t,\sigma(s)) F_i(s,\psi)\,\Delta s\Big|\\
&\le |F_i(t,\psi)| + |a_i(t)|\int_{-\infty}^t e_{-a_i}(t,\sigma(s))|F_i(s,\psi)|\,\Delta s \le \Big(1 + \frac{a_i^+}{a_i^-}\Big)2\theta_i L,
\end{aligned} \tag{27}
\]
and for $j = 1,2,\ldots,m$,
\[
\big|(\Phi(\phi-\phi^0))_{n+j}^{\Delta}(t)\big| \le \Big(1 + \frac{b_j^+}{b_j^-}\Big)2\gamma_j L. \tag{28}
\]
In view of (H4), we have
\[
\|\Phi\phi - \phi^0\| \le \max\Big\{\max_{1\le i\le n}\Big\{\frac{2\theta_i L}{a_i^-}, \Big(1+\frac{a_i^+}{a_i^-}\Big)2\theta_i L\Big\}, \max_{1\le j\le m}\Big\{\frac{2\gamma_j L}{b_j^-}, \Big(1+\frac{b_j^+}{b_j^-}\Big)2\gamma_j L\Big\}\Big\} \le L; \tag{29}
\]
that is, $\Phi\phi \in \mathbb{X}_0$. Next, we show that $\Phi$ is a contraction. For $\phi = (\varphi_1,\ldots,\varphi_n,\psi_1,\ldots,\psi_m)^T$, $\vartheta = (\xi_1,\ldots,\xi_n,\eta_1,\ldots,\eta_m)^T \in \mathbb{X}_0$ and $i = 1,2,\ldots,n$, denote
\[
F_i^{(1)}(s,\psi,\eta) = \sum_{j=1}^m a_{ji}(s)\big(f_j(\psi_j(s-\tau_{ji}(s))) - f_j(\eta_j(s-\tau_{ji}(s)))\big),
\]
\[
F_i^{(2)}(s,\psi,\eta) = \sum_{j=1}^m p_{ji}(s)\big(g_j(\psi_j^{\Delta}(s-\sigma_{ji}(s))) - g_j(\eta_j^{\Delta}(s-\sigma_{ji}(s)))\big). \tag{30}
\]
Then, we have
\[
\begin{aligned}
|(\Phi\phi - \Phi\vartheta)_i(t)| &= \Big|\int_{-\infty}^t e_{-a_i}(t,\sigma(s))\big(F_i^{(1)}(s,\psi,\eta) + F_i^{(2)}(s,\psi,\eta)\big)\Delta s\Big|\\
&\le \int_{-\infty}^t e_{-a_i}(t,\sigma(s))\Big(\sum_{j=1}^m a_{ji}^+ H_j^f\big|\psi_j(s-\tau_{ji}(s)) - \eta_j(s-\tau_{ji}(s))\big| + \sum_{j=1}^m p_{ji}^+ H_j^g\big|\psi_j^{\Delta}(s-\sigma_{ji}(s)) - \eta_j^{\Delta}(s-\sigma_{ji}(s))\big|\Big)\Delta s\\
&\le \frac{1}{a_i^-}\sum_{j=1}^m\big(a_{ji}^+ H_j^f + p_{ji}^+ H_j^g\big)|\psi-\eta|_1, \quad i = 1,2,\ldots,n,\\
|(\Phi\phi - \Phi\vartheta)_i^{\Delta}(t)| &= \Big|\Big(\int_{-\infty}^t e_{-a_i}(t,\sigma(s))\big(F_i^{(1)}(s,\psi,\eta)+F_i^{(2)}(s,\psi,\eta)\big)\Delta s\Big)_t^{\Delta}\Big|\\
&= \Big|F_i^{(1)}(t,\psi,\eta) + F_i^{(2)}(t,\psi,\eta) - a_i(t)\int_{-\infty}^t e_{-a_i}(t,\sigma(s))\big(F_i^{(1)}(s,\psi,\eta)+F_i^{(2)}(s,\psi,\eta)\big)\Delta s\Big|\\
&\le \big|F_i^{(1)}(t,\psi,\eta)\big| + \big|F_i^{(2)}(t,\psi,\eta)\big| + |a_i(t)|\int_{-\infty}^t e_{-a_i}(t,\sigma(s))\big|F_i^{(1)}(s,\psi,\eta)+F_i^{(2)}(s,\psi,\eta)\big|\Delta s\\
&\le \sum_{j=1}^m\big(a_{ji}^+ H_j^f + p_{ji}^+ H_j^g\big)|\psi-\eta|_1 + \frac{a_i^+}{a_i^-}\sum_{j=1}^m\big(a_{ji}^+ H_j^f + p_{ji}^+ H_j^g\big)|\psi-\eta|_1\\
&= \Big(1 + \frac{a_i^+}{a_i^-}\Big)\sum_{j=1}^m\big(a_{ji}^+ H_j^f + p_{ji}^+ H_j^g\big)|\psi-\eta|_1, \quad i = 1,2,\ldots,n.
\end{aligned} \tag{31}
\]
In a similar way, we have
\[
|(\Phi\phi - \Phi\vartheta)_{n+j}(t)| \le \frac{1}{b_j^-}\sum_{i=1}^n\big(b_{ij}^+ H_i^h + q_{ij}^+ H_i^k\big)|\varphi-\xi|_1, \quad j = 1,2,\ldots,m,
\]
\[
|(\Phi\phi - \Phi\vartheta)_{n+j}^{\Delta}(t)| \le \Big(1 + \frac{b_j^+}{b_j^-}\Big)\sum_{i=1}^n\big(b_{ij}^+ H_i^h + q_{ij}^+ H_i^k\big)|\varphi-\xi|_1, \quad j = 1,2,\ldots,m. \tag{32}
\]
By (H4), we have
\[
\max\Big\{\max_{1\le i\le n}\Big\{\frac{1}{a_i^-}\varrho_i, \Big(1+\frac{a_i^+}{a_i^-}\Big)\varrho_i\Big\}, \max_{1\le j\le m}\Big\{\frac{1}{b_j^-}\rho_j, \Big(1+\frac{b_j^+}{b_j^-}\Big)\rho_j\Big\}\Big\} < 1, \tag{33}
\]
where $\varrho_i = \sum_{j=1}^m(a_{ji}^+ H_j^f + p_{ji}^+ H_j^g)$ and $\rho_j = \sum_{i=1}^n(b_{ij}^+ H_i^h + q_{ij}^+ H_i^k)$, $i = 1,2,\ldots,n$, $j = 1,2,\ldots,m$. This implies that $\|\Phi\phi - \Phi\vartheta\| < \|\phi - \vartheta\|$. Hence, $\Phi$ is a contraction. Therefore, $\Phi$ has a fixed point in $\mathbb{X}_0$; that is, (2) has a unique almost periodic solution in $\mathbb{X}_0$. This completes the proof of Theorem 17.

4. Exponential Stability of the Almost Periodic Solution

In this section, we study the exponential stability of the almost periodic solution of (2).

Theorem 18. Let (H1)–(H4) hold. Suppose further that

(H5) $\varrho_i \ge (a_i^- - a_i^+ a_i^-)/(a_i^+ + a_i^-)$ and $\rho_j \ge (b_j^- - b_j^+ b_j^-)/(b_j^+ + b_j^-)$, where $\varrho_i = \sum_{j=1}^m(a_{ji}^+ H_j^f + p_{ji}^+ H_j^g)$ and $\rho_j = \sum_{i=1}^n(b_{ij}^+ H_i^h + q_{ij}^+ H_i^k)$, $i = 1,2,\ldots,n$, $j = 1,2,\ldots,m$.

Then, the almost periodic solution of (2) is exponentially stable.

Proof. By Theorem 17, (2) has an almost periodic solution $\omega(t) = (\alpha_1(t),\ldots,\alpha_n(t),\beta_1(t),\ldots,\beta_m(t))^T$ with initial condition $\phi^*(s) = (\varphi_1^*(s),\ldots,\varphi_n^*(s),\psi_1^*(s),\ldots,\psi_m^*(s))^T$. Suppose that $z(t) = (x_1(t),\ldots,x_n(t),y_1(t),\ldots,y_m(t))^T$ is an arbitrary solution of (2) with initial condition $\phi(s) = (\varphi_1(s),\ldots,\varphi_n(s),\psi_1(s),\ldots,\psi_m(s))^T$. Denote $v(t) = (u_1(t),\ldots,u_n(t),\nu_1(t),\ldots,\nu_m(t))^T$, where $u_i(t) = x_i(t) - \alpha_i(t)$,
$\nu_j(t) = y_j(t) - \beta_j(t)$, $i = 1,2,\ldots,n$, $j = 1,2,\ldots,m$. Then, it follows from (2) that
\[
u_i^{\Delta}(t) = -a_i(t)u_i(t) + \sum_{j=1}^m a_{ji}(t)\big(f_j(y_j(t-\tau_{ji}(t))) - f_j(\beta_j(t-\tau_{ji}(t)))\big) + \sum_{j=1}^m p_{ji}(t)\big(g_j(y_j^{\Delta}(t-\sigma_{ji}(t))) - g_j(\beta_j^{\Delta}(t-\sigma_{ji}(t)))\big), \quad i = 1,2,\ldots,n, \tag{34}
\]
\[
\nu_j^{\Delta}(t) = -b_j(t)\nu_j(t) + \sum_{i=1}^n b_{ij}(t)\big(h_i(x_i(t-\zeta_{ij}(t))) - h_i(\alpha_i(t-\zeta_{ij}(t)))\big) + \sum_{i=1}^n q_{ij}(t)\big(k_i(x_i^{\Delta}(t-\varsigma_{ij}(t))) - k_i(\alpha_i^{\Delta}(t-\varsigma_{ij}(t)))\big), \quad j = 1,2,\ldots,m. \tag{35}
\]
The initial condition of (34) and (35) is
\[
u_i(s) = \varphi_i(s) - \varphi_i^*(s), \quad \nu_j(s) = \psi_j(s) - \psi_j^*(s), \quad s \in [-\theta,0]_{\mathbb{T}}, \tag{36}
\]
where $i = 1,2,\ldots,n$, $j = 1,2,\ldots,m$.

Multiplying both sides of (34) by $e_{-a_i}(t,\sigma(s))$ and integrating on $[t_0,t]_{\mathbb{T}}$, where $t_0 \in [-\theta,0]_{\mathbb{T}}$, we get
\[
u_i(t) = u_i(t_0)e_{-a_i}(t,t_0) + \int_{t_0}^t e_{-a_i}(t,\sigma(s))\Big\{\sum_{j=1}^m a_{ji}(s)\big(f_j(y_j(s-\tau_{ji}(s))) - f_j(\beta_j(s-\tau_{ji}(s)))\big) + \sum_{j=1}^m p_{ji}(s)\big(g_j(y_j^{\Delta}(s-\sigma_{ji}(s))) - g_j(\beta_j^{\Delta}(s-\sigma_{ji}(s)))\big)\Big\}\Delta s, \quad i = 1,2,\ldots,n. \tag{37}
\]
Similarly, multiplying both sides of (35) by $e_{-b_j}(t,\sigma(s))$ and integrating on $[t_0,t]_{\mathbb{T}}$, we get
\[
\nu_j(t) = \nu_j(t_0)e_{-b_j}(t,t_0) + \int_{t_0}^t e_{-b_j}(t,\sigma(s))\Big\{\sum_{i=1}^n b_{ij}(s)\big(h_i(x_i(s-\zeta_{ij}(s))) - h_i(\alpha_i(s-\zeta_{ij}(s)))\big) + \sum_{i=1}^n q_{ij}(s)\big(k_i(x_i^{\Delta}(s-\varsigma_{ij}(s))) - k_i(\alpha_i^{\Delta}(s-\varsigma_{ij}(s)))\big)\Big\}\Delta s, \quad j = 1,2,\ldots,m. \tag{38}
\]
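The step from (34) to (37) is the time-scale variation-of-constants formula. On $\mathbb{T} = \mathbb{Z}$ it can be checked directly, since $e_{-a}(t,t_0) = \prod_{s=t_0}^{t-1}(1-a(s))$ and the $\Delta$-integral becomes a finite sum. The sketch below (ours, with invented coefficients) compares the closed form with a direct iteration of a scalar equation $u^{\Delta}(t) = -a(t)u(t) + w(t)$.

```python
import math

def a(t):
    return 0.9 + 0.05 * math.sin(t)        # sample rate, stays in (0, 1)

def w(t):
    return 0.1 * math.cos(2 * t)           # sample forcing term

def e_minus_a(t, s):
    """e_{-a}(t, s) on Z for t >= s: product of (1 - a(u)) over u = s, ..., t-1."""
    prod = 1.0
    for u in range(s, t):
        prod *= 1.0 - a(u)
    return prod

t0, u0, T_end = 0, 0.7, 15

# Direct iteration of u(t+1) = u(t) - a(t) u(t) + w(t).
u = u0
for t in range(t0, T_end):
    u = u - a(t) * u + w(t)

# Closed form (37) specialised to Z: u(T) = e_{-a}(T, t0) u0 + sum_s e_{-a}(T, s+1) w(s).
u_formula = e_minus_a(T_end, t0) * u0 + sum(e_minus_a(T_end, s + 1) * w(s) for s in range(t0, T_end))

print(f"direct iteration: {u:+.10f}")
print(f"formula (37):     {u_formula:+.10f}")
```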
For a positive constant $\alpha < \min\{\min_{1\le i\le n}a_i^-, \min_{1\le j\le m}b_j^-\}$ with $-\alpha \in \mathcal{R}^+$, we have $e_{\ominus\alpha}(t,t_0) > 1$, where $t \in [-\theta,t_0]_{\mathbb{T}}$. Take
\[
M > \max\{\epsilon_1, \epsilon_2\}, \tag{39}
\]
where
\[
\epsilon_1 = \max_{1\le i\le n}\Big\{\frac{a_i^+ a_i^-}{a_i^- - (a_i^+ + a_i^-)\varrho_i}, \frac{a_i^-}{a_i^- - \varrho_i}\Big\}, \qquad
\epsilon_2 = \max_{1\le j\le m}\Big\{\frac{b_j^+ b_j^-}{b_j^- - (b_j^+ + b_j^-)\rho_j}, \frac{b_j^-}{b_j^- - \rho_j}\Big\}. \tag{40}
\]
By (H4), we have
\[
a_i^- > (a_i^+ + a_i^-)\varrho_i, \quad a_i^- > \varrho_i, \quad b_j^- > (b_j^+ + b_j^-)\rho_j, \quad b_j^- > \rho_j, \quad i = 1,2,\ldots,n,\ j = 1,2,\ldots,m. \tag{41}
\]
In view of (H5), we have $M > 1$. Hence, it is obvious that
\[
|v(t)|_1 \le M e_{\ominus\alpha}(t,t_0)\|\phi - \phi^*\|, \quad \forall t \in [-\theta,t_0]_{\mathbb{T}}. \tag{42}
\]
We claim that
\[
|v(t)|_1 \le M e_{\ominus\alpha}(t,t_0)\|\phi - \phi^*\|, \quad \forall t \in (t_0,+\infty)_{\mathbb{T}}. \tag{43}
\]
To prove this claim, we show that for any $p > 1$, the following inequality holds:
\[
|v(t)|_1 < pM e_{\ominus\alpha}(t,t_0)\|\phi - \phi^*\|, \quad \forall t \in (t_0,+\infty)_{\mathbb{T}}, \tag{44}
\]
which means that, for $i = 1,2,\ldots,n$, we have
\[
|u_i(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad \forall t \in (t_0,+\infty)_{\mathbb{T}}, \tag{45}
\]
\[
|u_i^{\Delta}(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad \forall t \in (t_0,+\infty)_{\mathbb{T}}, \tag{46}
\]
and for $j = 1,2,\ldots,m$, we have
\[
|\nu_j(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad \forall t \in (t_0,+\infty)_{\mathbb{T}}, \tag{47}
\]
\[
|\nu_j^{\Delta}(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad \forall t \in (t_0,+\infty)_{\mathbb{T}}. \tag{48}
\]
By way of contradiction, assume that (44) does not hold. We first consider the following four cases.

Case 1. Inequality (45) is not true and (46)–(48) are all true. Then, there exist $t_1 \in (t_0,+\infty)_{\mathbb{T}}$ and $i_0 \in \{1,2,\ldots,n\}$ such that
\[
|u_{i_0}(t_1)| \ge pM e_{\ominus\alpha}(t_1,t_0)\|\phi-\phi^*\|, \qquad |u_{i_0}(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad t \in (t_0,t_1)_{\mathbb{T}},
\]
\[
|u_l(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\| \quad \text{for } l \ne i_0,\ t \in (t_0,t_1]_{\mathbb{T}},\ l = 1,2,\ldots,n. \tag{49}
\]
Therefore, there must be a constant $\delta_1 \ge 1$ such that
\[
|u_{i_0}(t_1)| = \delta_1 pM e_{\ominus\alpha}(t_1,t_0)\|\phi-\phi^*\|, \qquad |u_{i_0}(t)| < \delta_1 pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad t \in (t_0,t_1)_{\mathbb{T}},
\]
\[
|u_l(t)| < \delta_1 pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\| \quad \text{for } l \ne i_0,\ t \in (t_0,t_1]_{\mathbb{T}},\ l = 1,2,\ldots,n. \tag{50}
\]
Note that, in view of (37), we have
\[
\begin{aligned}
|u_{i_0}(t_1)| ={}& \Big|u_{i_0}(t_0)e_{-a_{i_0}}(t_1,t_0) + \int_{t_0}^{t_1} e_{-a_{i_0}}(t_1,\sigma(s))\Big\{\sum_{j=1}^m a_{ji_0}(s)\big(f_j(y_j(s-\tau_{ji_0}(s))) - f_j(\beta_j(s-\tau_{ji_0}(s)))\big)\\
&+ \sum_{j=1}^m p_{ji_0}(s)\big(g_j(y_j^{\Delta}(s-\sigma_{ji_0}(s))) - g_j(\beta_j^{\Delta}(s-\sigma_{ji_0}(s)))\big)\Big\}\Delta s\Big|\\
\le{}& e_{-a_{i_0}}(t_1,t_0)\|\phi-\phi^*\| + \int_{t_0}^{t_1} e_{-a_{i_0}}(t_1,\sigma(s))\Big(\sum_{j=1}^m a_{ji_0}^+ H_j^f\big|\nu_j(s-\tau_{ji_0}(s))\big| + \sum_{j=1}^m p_{ji_0}^+ H_j^g\big|\nu_j^{\Delta}(s-\sigma_{ji_0}(s))\big|\Big)\Delta s\\
\le{}& e_{-a_{i_0}}(t_1,t_0)\|\phi-\phi^*\| + \int_{t_0}^{t_1} e_{-a_{i_0}}(t_1,\sigma(s))\Big(\sum_{j=1}^m a_{ji_0}^+ H_j^f\,\delta_1 pM e_{\ominus\alpha}(s-\tau_{ji_0}(s),t_0)\|\phi-\phi^*\| + \sum_{j=1}^m p_{ji_0}^+ H_j^g\,\delta_1 pM e_{\ominus\alpha}(s-\sigma_{ji_0}(s),t_0)\|\phi-\phi^*\|\Big)\Delta s\\
={}& \|\phi-\phi^*\| e_{\ominus\alpha}(t_1,t_0) e_{-a_{i_0}\oplus\alpha}(t_1,t_0) + \delta_1 pM e_{\ominus\alpha}(t_1,t_0)\|\phi-\phi^*\|\int_{t_0}^{t_1} e_{-a_{i_0}}(t_1,\sigma(s))\Big(\sum_{j=1}^m a_{ji_0}^+ H_j^f e_{\ominus\alpha}(s-\tau_{ji_0}(s),t_1) + \sum_{j=1}^m p_{ji_0}^+ H_j^g e_{\ominus\alpha}(s-\sigma_{ji_0}(s),t_1)\Big)\Delta s\\
\le{}& \|\phi-\phi^*\| e_{\ominus\alpha}(t_1,t_0) e_{-a_{i_0}^-+\alpha}(t_1,t_0) + \delta_1 pM e_{\ominus\alpha}(t_1,t_0)\|\phi-\phi^*\|\int_{t_0}^{t_1} e_{-a_{i_0}^-}(t_1,\sigma(s))\Big(\sum_{j=1}^m a_{ji_0}^+ H_j^f\exp\{-\alpha\tilde{\tau}\} + \sum_{j=1}^m p_{ji_0}^+ H_j^g\exp\{-\alpha\tilde{\sigma}\}\Big)\Delta s\\
\le{}& \|\phi-\phi^*\| e_{\ominus\alpha}(t_1,t_0) + \delta_1 pM e_{\ominus\alpha}(t_1,t_0)\|\phi-\phi^*\|\Big(-\frac{1}{a_{i_0}^-}\Big)\int_{t_0}^{t_1}\big(-a_{i_0}^-\big)e_{-a_{i_0}^-}(t_1,\sigma(s))\,\Delta s\,\Big(\sum_{j=1}^m a_{ji_0}^+ H_j^f\exp\{-\alpha\tilde{\tau}\} + \sum_{j=1}^m p_{ji_0}^+ H_j^g\exp\{-\alpha\tilde{\sigma}\}\Big)\\
={}& \|\phi-\phi^*\| e_{\ominus\alpha}(t_1,t_0) - \frac{1}{a_{i_0}^-}\delta_1 pM e_{\ominus\alpha}(t_1,t_0)\|\phi-\phi^*\|\big(e_{-a_{i_0}^-}(t_1,t_0)-1\big)\Big(\sum_{j=1}^m a_{ji_0}^+ H_j^f\exp\{-\alpha\tilde{\tau}\} + \sum_{j=1}^m p_{ji_0}^+ H_j^g\exp\{-\alpha\tilde{\sigma}\}\Big)\\
<{}& \delta_1 pM\|\phi-\phi^*\| e_{\ominus\alpha}(t_1,t_0)\Big[\frac{1}{\delta_1 pM} + \frac{1}{a_{i_0}^-}\Big(\sum_{j=1}^m a_{ji_0}^+ H_j^f\exp\{-\alpha\tilde{\tau}\} + \sum_{j=1}^m p_{ji_0}^+ H_j^g\exp\{-\alpha\tilde{\sigma}\}\Big)\Big]\\
<{}& \delta_1 pM\|\phi-\phi^*\| e_{\ominus\alpha}(t_1,t_0)\Big(\frac{1}{M} + \frac{1}{a_{i_0}^-}\sum_{j=1}^m\big(a_{ji_0}^+ H_j^f + p_{ji_0}^+ H_j^g\big)\Big)\\
<{}& \delta_1 pM e_{\ominus\alpha}(t_1,t_0)\|\phi-\phi^*\|,
\end{aligned} \tag{51}
\]
where $\tilde{\tau} = \min_{(i,j)}\tau_{ji}^-$ and $\tilde{\sigma} = \min_{(i,j)}\sigma_{ji}^-$; in the proof we also use the inequality $e_{-a_{i_0}^-}(t_1,t_0) < 1$. Thus, we get a contradiction.
Case 2. Inequality (46) is not true and (45), (47), and (48) are all true. Then, there exist $t_2 \in (t_0,+\infty)_{\mathbb{T}}$ and $i_1 \in \{1,2,\ldots,n\}$ such that
\[
|u_{i_1}^{\Delta}(t_2)| \ge pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|, \qquad |u_{i_1}^{\Delta}(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad t \in (t_0,t_2)_{\mathbb{T}},
\]
\[
|u_l^{\Delta}(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\| \quad \text{for } l \ne i_1,\ t \in (t_0,t_2]_{\mathbb{T}},\ l = 1,2,\ldots,n. \tag{52}
\]
Hence, there must be a constant $\delta_2 \ge 1$ such that
\[
|u_{i_1}^{\Delta}(t_2)| = \delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|, \qquad |u_{i_1}^{\Delta}(t)| < \delta_2 pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad t \in (t_0,t_2)_{\mathbb{T}},
\]
\[
|u_l^{\Delta}(t)| < \delta_2 pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\| \quad \text{for } l \ne i_1,\ t \in (t_0,t_2]_{\mathbb{T}},\ l = 1,2,\ldots,n. \tag{53}
\]
Note that, in view of (37), we have
\[
\begin{aligned}
u_i^{\Delta}(t) ={}& -a_i(t)u_i(t_0)e_{-a_i}(t,t_0) + \int_{t_0}^t\big(-a_i(t)\big)e_{-a_i}(t,\sigma(s))\Big\{\sum_{j=1}^m a_{ji}(s)\big(f_j(y_j(s-\tau_{ji}(s))) - f_j(\beta_j(s-\tau_{ji}(s)))\big)\\
&+ \sum_{j=1}^m p_{ji}(s)\big(g_j(y_j^{\Delta}(s-\sigma_{ji}(s))) - g_j(\beta_j^{\Delta}(s-\sigma_{ji}(s)))\big)\Big\}\Delta s\\
&+ e_{-a_i}(\sigma(t),\sigma(t))\Big\{\sum_{j=1}^m a_{ji}(t)\big(f_j(y_j(t-\tau_{ji}(t))) - f_j(\beta_j(t-\tau_{ji}(t)))\big) + \sum_{j=1}^m p_{ji}(t)\big(g_j(y_j^{\Delta}(t-\sigma_{ji}(t))) - g_j(\beta_j^{\Delta}(t-\sigma_{ji}(t)))\big)\Big\}, \quad i = 1,2,\ldots,n. \tag{54}
\end{aligned}
\]
Thus, we have
\[
\begin{aligned}
|u_{i_1}^{\Delta}(t_2)| \le{}& a_{i_1}^+ e_{-a_{i_1}^-}(t_2,t_0)\|\phi-\phi^*\| + a_{i_1}^+\int_{t_0}^{t_2} e_{-a_{i_1}}(t_2,\sigma(s))\Big(\sum_{j=1}^m a_{ji_1}^+ H_j^f\big|\nu_j(s-\tau_{ji_1}(s))\big| + \sum_{j=1}^m p_{ji_1}^+ H_j^g\big|\nu_j^{\Delta}(s-\sigma_{ji_1}(s))\big|\Big)\Delta s\\
&+ \sum_{j=1}^m a_{ji_1}^+ H_j^f\big|\nu_j(t_2-\tau_{ji_1}(t_2))\big| + \sum_{j=1}^m p_{ji_1}^+ H_j^g\big|\nu_j^{\Delta}(t_2-\sigma_{ji_1}(t_2))\big|\\
\le{}& a_{i_1}^+ e_{-a_{i_1}^-}(t_2,t_0)\|\phi-\phi^*\| + a_{i_1}^+\int_{t_0}^{t_2} e_{-a_{i_1}}(t_2,\sigma(s))\Big(\sum_{j=1}^m a_{ji_1}^+ H_j^f\,\delta_2 pM e_{\ominus\alpha}(s-\tau_{ji_1}(s),t_0)\|\phi-\phi^*\| + \sum_{j=1}^m p_{ji_1}^+ H_j^g\,\delta_2 pM e_{\ominus\alpha}(s-\sigma_{ji_1}(s),t_0)\|\phi-\phi^*\|\Big)\Delta s\\
&+ \delta_2 pM e_{\ominus\alpha}(t_2-\tau_{ji_1}(t_2),t_0)\|\phi-\phi^*\|\sum_{j=1}^m a_{ji_1}^+ H_j^f + \delta_2 pM e_{\ominus\alpha}(t_2-\sigma_{ji_1}(t_2),t_0)\|\phi-\phi^*\|\sum_{j=1}^m p_{ji_1}^+ H_j^g\\
\le{}& a_{i_1}^+ e_{-a_{i_1}^-}(t_2,t_0)\|\phi-\phi^*\| + a_{i_1}^+\delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\int_{t_0}^{t_2} e_{-a_{i_1}}(t_2,\sigma(s))\Big(\sum_{j=1}^m a_{ji_1}^+ H_j^f e_{\ominus\alpha}(s-\tau_{ji_1}(s),t_2) + \sum_{j=1}^m p_{ji_1}^+ H_j^g e_{\ominus\alpha}(s-\sigma_{ji_1}(s),t_2)\Big)\Delta s\\
&+ \delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\sum_{j=1}^m\Big(a_{ji_1}^+ H_j^f e_{\ominus\alpha}(t_2-\tau_{ji_1}(t_2),t_2) + p_{ji_1}^+ H_j^g e_{\ominus\alpha}(t_2-\sigma_{ji_1}(t_2),t_2)\Big)\\
\le{}& a_{i_1}^+ e_{-a_{i_1}^-}(t_2,t_0)\|\phi-\phi^*\| + a_{i_1}^+\delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\int_{t_0}^{t_2} e_{-a_{i_1}^-}(t_2,\sigma(s))\Big(\sum_{j=1}^m a_{ji_1}^+ H_j^f\exp\{-\alpha\tilde{\tau}\} + \sum_{j=1}^m p_{ji_1}^+ H_j^g\exp\{-\alpha\tilde{\sigma}\}\Big)\Delta s\\
&+ \delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\sum_{j=1}^m\big(a_{ji_1}^+ H_j^f\exp\{-\alpha\tilde{\tau}\} + p_{ji_1}^+ H_j^g\exp\{-\alpha\tilde{\sigma}\}\big)\\
\le{}& a_{i_1}^+ e_{-a_{i_1}^-}(t_2,t_0)\|\phi-\phi^*\| + a_{i_1}^+\delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\Big(-\frac{1}{a_{i_1}^-}\Big)\int_{t_0}^{t_2}\big(-a_{i_1}^-\big)e_{-a_{i_1}^-}(t_2,\sigma(s))\,\Delta s\,\Big(\sum_{j=1}^m a_{ji_1}^+ H_j^f\exp\{-\alpha\tilde{\tau}\} + \sum_{j=1}^m p_{ji_1}^+ H_j^g\exp\{-\alpha\tilde{\sigma}\}\Big)\\
&+ \delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\sum_{j=1}^m\big(a_{ji_1}^+ H_j^f + p_{ji_1}^+ H_j^g\big)\\
={}& a_{i_1}^+\|\phi-\phi^*\| e_{\ominus\alpha}(t_2,t_0)e_{-a_{i_1}^-\oplus\alpha}(t_2,t_0) - \frac{a_{i_1}^+}{a_{i_1}^-}\delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\big(e_{-a_{i_1}^-}(t_2,t_0)-1\big)\Big(\sum_{j=1}^m a_{ji_1}^+ H_j^f\exp\{-\alpha\tilde{\tau}\} + \sum_{j=1}^m p_{ji_1}^+ H_j^g\exp\{-\alpha\tilde{\sigma}\}\Big)\\
&+ \delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\sum_{j=1}^m\big(a_{ji_1}^+ H_j^f + p_{ji_1}^+ H_j^g\big)\\
\le{}& \delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\Big\{\frac{a_{i_1}^+}{\delta_2 pM} + \sum_{j=1}^m\big(a_{ji_1}^+ H_j^f + p_{ji_1}^+ H_j^g\big) + \frac{a_{i_1}^+}{a_{i_1}^-}\sum_{j=1}^m\big(a_{ji_1}^+ H_j^f\exp\{-\alpha\tilde{\tau}\} + p_{ji_1}^+ H_j^g\exp\{-\alpha\tilde{\sigma}\}\big)\Big\}\\
<{}& \delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|\Big\{\frac{a_{i_1}^+}{M} + \Big(1+\frac{a_{i_1}^+}{a_{i_1}^-}\Big)\sum_{j=1}^m\big(a_{ji_1}^+ H_j^f + p_{ji_1}^+ H_j^g\big)\Big\}\\
<{}& \delta_2 pM e_{\ominus\alpha}(t_2,t_0)\|\phi-\phi^*\|. 
\end{aligned} \tag{55}
\]
We also get a contradiction.
Case 3. Inequality (47) is not true and (45), (46), and (48) are all true. Then, there exist $t_3 \in (t_0,+\infty)_{\mathbb{T}}$ and $j_0 \in \{1,2,\ldots,m\}$ such that
\[
|\nu_{j_0}(t_3)| \ge pM e_{\ominus\alpha}(t_3,t_0)\|\phi-\phi^*\|, \qquad |\nu_{j_0}(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad t \in (t_0,t_3)_{\mathbb{T}},
\]
\[
|\nu_{\iota}(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\| \quad \text{for } \iota \ne j_0,\ t \in (t_0,t_3]_{\mathbb{T}},\ \iota = 1,2,\ldots,m. \tag{56}
\]
Therefore, there must be a constant $\delta_3 \ge 1$ such that
\[
|\nu_{j_0}(t_3)| = \delta_3 pM e_{\ominus\alpha}(t_3,t_0)\|\phi-\phi^*\|, \qquad |\nu_{j_0}(t)| < \delta_3 pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad t \in (t_0,t_3)_{\mathbb{T}},
\]
\[
|\nu_{\iota}(t)| < \delta_3 pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\| \quad \text{for } \iota \ne j_0,\ t \in (t_0,t_3]_{\mathbb{T}},\ \iota = 1,2,\ldots,m. \tag{57}
\]
Then, in a similar way, in view of (38), we have that
\[
|\nu_{j_0}(t_3)| < \delta_3 pM\|\phi-\phi^*\| e_{\ominus\alpha}(t_3,t_0)\Big(\frac{1}{M} + \frac{1}{b_{j_0}^-}\sum_{i=1}^n\big(b_{ij_0}^+ H_i^h + q_{ij_0}^+ H_i^k\big)\Big) < \delta_3 pM e_{\ominus\alpha}(t_3,t_0)\|\phi-\phi^*\|, \tag{58}
\]
which is also a contradiction.

Case 4. Inequality (48) is not true and (45), (46), and (47) are all true. Then, there exist $t_4 \in (t_0,+\infty)_{\mathbb{T}}$ and $j_1 \in \{1,2,\ldots,m\}$ such that
\[
|\nu_{j_1}^{\Delta}(t_4)| \ge pM e_{\ominus\alpha}(t_4,t_0)\|\phi-\phi^*\|, \qquad |\nu_{j_1}^{\Delta}(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad t \in (t_0,t_4)_{\mathbb{T}},
\]
\[
|\nu_{\iota}^{\Delta}(t)| < pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\| \quad \text{for } \iota \ne j_1,\ t \in (t_0,t_4]_{\mathbb{T}},\ \iota = 1,2,\ldots,m. \tag{59}
\]
Hence, there must be a constant $\delta_4 \ge 1$ such that
\[
|\nu_{j_1}^{\Delta}(t_4)| = \delta_4 pM e_{\ominus\alpha}(t_4,t_0)\|\phi-\phi^*\|, \qquad |\nu_{j_1}^{\Delta}(t)| < \delta_4 pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\|, \quad t \in (t_0,t_4)_{\mathbb{T}},
\]
\[
|\nu_{\iota}^{\Delta}(t)| < \delta_4 pM e_{\ominus\alpha}(t,t_0)\|\phi-\phi^*\| \quad \text{for } \iota \ne j_1,\ t \in (t_0,t_4]_{\mathbb{T}},\ \iota = 1,2,\ldots,m. \tag{60}
\]
Similarly, in view of (38), we obtain that
\[
|\nu_{j_1}^{\Delta}(t_4)| < \delta_4 pM e_{\ominus\alpha}(t_4,t_0)\|\phi-\phi^*\|\Big\{\frac{b_{j_1}^+}{M} + \Big(1+\frac{b_{j_1}^+}{b_{j_1}^-}\Big)\sum_{i=1}^n\big(b_{ij_1}^+ H_i^h + q_{ij_1}^+ H_i^k\big)\Big\} < \delta_4 pM e_{\ominus\alpha}(t_4,t_0)\|\phi-\phi^*\|. \tag{61}
\]
It is also a contradiction.

By the above four cases, we can obtain a contradiction for the remaining cases in which (44) fails as well. Therefore, (44) holds. Letting $p \to 1$, we see that (43) holds. We can take $-\lambda = \ominus\alpha$; then $\lambda > 0$ and $-\lambda \in \mathcal{R}^+$. Hence, we have
\[
|v(t)|_1 \le M\|\phi-\phi^*\| e_{-\lambda}(t,t_0), \quad t \in [-\theta,\infty)_{\mathbb{T}},\ t \ge t_0, \tag{62}
\]
which means that the almost periodic solution $\omega(t)$ of (2) is exponentially stable. This completes the proof of Theorem 18.

5. An Example

In this section, we present an example to illustrate the feasibility of our results obtained in the previous sections.

Example 1. Let $n = m = 2$. Consider the following neutral-type BAM neural network with delays on a time scale $\mathbb{T}$:
\[
\begin{aligned}
x_i^{\Delta}(t) &= -a_i(t)x_i(t) + \sum_{j=1}^2 a_{ji}(t) f_j\big(y_j(t-\tau_{ji}(t))\big) + \sum_{j=1}^2 p_{ji}(t) g_j\big(y_j^{\Delta}(t-\sigma_{ji}(t))\big) + I_i(t),\\
y_j^{\Delta}(t) &= -b_j(t)y_j(t) + \sum_{i=1}^2 b_{ij}(t) h_i\big(x_i(t-\zeta_{ij}(t))\big) + \sum_{i=1}^2 q_{ij}(t) k_i\big(x_i^{\Delta}(t-\varsigma_{ij}(t))\big) + J_j(t),
\end{aligned} \tag{63}
\]
where $t \in \mathbb{T}$, $i,j = 1,2$, and the coefficients are as follows:
\[
\begin{aligned}
&a_1(t) = 0.975 + 0.025\sin t, \quad a_2(t) = 0.93 + 0.01\cos t,\\
&a_{11}(t) = 0.003\sin t, \quad a_{12}(t) = 0.002\cos t, \quad a_{21}(t) = 0.004\sin t, \quad a_{22}(t) = 0.001\sin t,\\
&p_{11}(t) = 0.001 + 0.003\cos t, \quad p_{12}(t) = 0.001\sin t, \quad p_{21}(t) = 0.006 + 0.001\sin t, \quad p_{22}(t) = 0.003\cos t,\\
&f_1(u) = 0.2\sin u, \quad f_2(u) = 2\cos u, \quad g_1(u) = 0.2\cos u, \quad g_2(u) = 2\sin u,\\
&b_1(t) = 0.95 + 0.01\sin\sqrt{3}t, \quad b_2(t) = 0.92 + 0.01\cos\sqrt{2}t,\\
&b_{11}(t) = 0.002\sin\sqrt{2}t, \quad b_{12}(t) = 0.003\cos t, \quad b_{21}(t) = 0.001\sin t, \quad b_{22}(t) = 0.004\cos t,\\
&q_{11}(t) = 0.001\cos\sqrt{3}t, \quad q_{12}(t) = 0.004\sin t, \quad q_{21}(t) = 0.003\sin t, \quad q_{22}(t) = 0.004\cos t,\\
&h_1(u) = \sin u, \quad h_2(u) = 1.5\cos u, \quad k_1(u) = \cos u, \quad k_2(u) = 1.5\sin u,\\
&I_1(t) = I_2(t) = 0.6\sin\sqrt{3}t, \quad J_1(t) = J_2(t) = 0.5\cos t,\\
&\tau_{ji}(t) = \sigma_{ji}(t) = 0.12\sin t, \quad \zeta_{ij}(t) = \varsigma_{ij}(t) = 0.37\sin t, \quad i,j = 1,2.
\end{aligned} \tag{64}
\]
By calculating, we have
\[
\begin{aligned}
&a_1^+ = 1, \quad a_1^- = 0.95, \quad a_2^+ = 0.94, \quad a_2^- = 0.92, \quad b_1^+ = 0.96, \quad b_1^- = 0.94, \quad b_2^+ = 0.93, \quad b_2^- = 0.91,\\
&a_{11}^+ = 0.003, \quad a_{12}^+ = 0.002, \quad a_{21}^+ = 0.004, \quad a_{22}^+ = 0.001, \quad p_{11}^+ = 0.004, \quad p_{12}^+ = 0.001, \quad p_{21}^+ = 0.007, \quad p_{22}^+ = 0.003,\\
&b_{11}^+ = 0.002, \quad b_{12}^+ = 0.003, \quad b_{21}^+ = 0.001, \quad b_{22}^+ = 0.004, \quad q_{11}^+ = 0.001, \quad q_{12}^+ = 0.004, \quad q_{21}^+ = 0.003, \quad q_{22}^+ = 0.004,\\
&H_1^f = H_1^g = 0.2, \quad H_2^f = H_2^g = 2, \quad H_1^h = H_1^k = 1, \quad H_2^h = H_2^k = 1.5.
\end{aligned} \tag{65}
\]
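The quantities appearing in (H3) and (H4) can be evaluated directly from (65). The sketch below (ours) does this for $\mu \equiv 0$ ($\mathbb{T}=\mathbb{R}$) and $\mu \equiv 1$ ($\mathbb{T}=\mathbb{Z}$); it is only a partial, numerical check of the hypotheses of Theorem 17 (it does not address (H5) or the remaining hypotheses of Theorem 18), and the grid of candidate $\delta$ values is our own choice.

```python
# Bounds taken from (65); row index i selects a_{ji}^+, p_{ji}^+ for j = 1, 2 (and analogously for B, Q).
a_plus, a_minus = [1.0, 0.94], [0.95, 0.92]            # a_i^+, a_i^-
b_plus, b_minus = [0.96, 0.93], [0.94, 0.91]           # b_j^+, b_j^-
A = [[0.003, 0.004], [0.002, 0.001]]                   # A[i][j] = a_{ji}^+
P = [[0.004, 0.007], [0.001, 0.003]]                   # P[i][j] = p_{ji}^+
B = [[0.002, 0.001], [0.003, 0.004]]                   # B[j][i] = b_{ij}^+
Q = [[0.001, 0.003], [0.004, 0.004]]                   # Q[j][i] = q_{ij}^+
Hf = Hg = [0.2, 2.0]
Hh = Hk = [1.0, 1.5]

theta = [sum(A[i][j] * Hf[j] + P[i][j] * Hg[j] + A[i][j] + P[i][j] for j in range(2)) for i in range(2)]
gamma = [sum(B[j][i] * Hh[i] + Q[j][i] * Hk[i] + B[j][i] + Q[j][i] for i in range(2)) for j in range(2)]

h4 = max(max(theta[i] / a_minus[i], (1 + a_plus[i] / a_minus[i]) * theta[i]) for i in range(2))
h4 = max(h4, max(max(gamma[j] / b_minus[j], (1 + b_plus[j] / b_minus[j]) * gamma[j]) for j in range(2)))
print(f"theta = {theta}, gamma = {gamma}, (H4) quantity = {h4:.4f} (needs <= 0.5)")

def h3_holds(delta, mu):
    """Check (17) at the extreme values of each rate c_k(t), for graininess mu."""
    rates = a_minus + a_plus + b_minus + b_plus
    return all(c - 0.5 * mu * c ** 2 - delta ** 2 * mu - 2 * delta >= 0 for c in rates)

for mu in (0.0, 1.0):
    deltas = [d / 100 for d in range(1, 50) if h3_holds(d / 100, mu)]
    print(f"mu = {mu}: (H3) holds for delta in [{deltas[0]}, {deltas[-1]}]" if deltas else f"mu = {mu}: none found")
```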
If $\mathbb{T} = \mathbb{R}$, then $\mu(t) = 0$, and if $\mathbb{T} = \mathbb{Z}$, then $\mu(t) = 1$. One can verify that, in both cases, all of the conditions of Theorems 17 and 18 are satisfied. Therefore, whether $\mathbb{T} = \mathbb{R}$ or $\mathbb{T} = \mathbb{Z}$, (63) has an almost periodic solution, which is exponentially stable. That is, the continuous-time neural network and its discrete-time analogue have the same dynamical behaviors.
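To visualise the exponential stability claimed for the discrete-time analogue, the following sketch (ours, not part of the paper) iterates a $\mathbb{T}=\mathbb{Z}$ version of (63) from two different initial histories and prints how quickly the two trajectories approach each other. It simplifies the data of (64): the connection weights are frozen at their bounds from (65), and all delays are replaced by the constant integer delay $1$ (the values $0.12\sin t$ and $0.37\sin t$ are not integer-valued), so the run only indicates the qualitative behaviour, not the exact example.

```python
import math

a = [lambda t: 0.975 + 0.025 * math.sin(t), lambda t: 0.93 + 0.01 * math.cos(t)]
b = [lambda t: 0.95 + 0.01 * math.sin(math.sqrt(3) * t), lambda t: 0.92 + 0.01 * math.cos(math.sqrt(2) * t)]
A = [[0.003, 0.004], [0.002, 0.001]]     # A[i][j] stands in for a_{ji}(t) (constant bounds used)
P = [[0.004, 0.007], [0.001, 0.003]]     # P[i][j] for p_{ji}
B = [[0.002, 0.001], [0.003, 0.004]]     # B[j][i] for b_{ij}
Q = [[0.001, 0.003], [0.004, 0.004]]     # Q[j][i] for q_{ij}
f = [lambda u: 0.2 * math.sin(u), lambda u: 2 * math.cos(u)]
g = [lambda u: 0.2 * math.cos(u), lambda u: 2 * math.sin(u)]
h = [lambda u: math.sin(u), lambda u: 1.5 * math.cos(u)]
k = [lambda u: math.cos(u), lambda u: 1.5 * math.sin(u)]
I = [lambda t: 0.6 * math.sin(math.sqrt(3) * t)] * 2
J = [lambda t: 0.5 * math.cos(t)] * 2

def step(hist, t):
    """One Z-step of the discrete analogue of (63) with all delays fixed to 1."""
    (x_prev, y_prev), (x, y) = hist[-2], hist[-1]
    x_new = [x[i] - a[i](t) * x[i]
             + sum(A[i][j] * f[j](y_prev[j]) + P[i][j] * g[j](y[j] - y_prev[j]) for j in range(2))
             + I[i](t) for i in range(2)]
    y_new = [y[j] - b[j](t) * y[j]
             + sum(B[j][i] * h[i](x_prev[i]) + Q[j][i] * k[i](x[i] - x_prev[i]) for i in range(2))
             + J[j](t) for j in range(2)]
    return x_new, y_new

hist1 = [([0.5, -0.3], [0.2, 0.4]), ([0.5, -0.3], [0.2, 0.4])]       # two constant initial histories
hist2 = [([-1.0, 1.0], [1.5, -0.8]), ([-1.0, 1.0], [1.5, -0.8])]
for t in range(40):
    hist1.append(step(hist1, t))
    hist2.append(step(hist2, t))
    if t % 10 == 0:
        gap = max(abs(u - v) for s1, s2 in zip(hist1[-1], hist2[-1]) for u, v in zip(s1, s2))
        print(f"t = {t:2d}   max difference between the two trajectories = {gap:.3e}")
```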
pp. 3286–3295, 2011.
[12] Z. Zhang and K. Liu, “Existence and global exponential stability
6. Conclusion of a periodic solution to interval general bidirectional associa-
tive memory (BAM) neural networks with multiple delays on
In this paper, we establish some sufficient conditions ensuring time scales,” Neural Networks, vol. 24, no. 5, pp. 427–439, 2011.
the existence and exponential stability of almost periodic [13] Y. Li and S. Gao, “Global exponential stability for impulsive
solutions for a class of neutral-type BAM neural networks BAM neural networks with distributed delays on time scales,”
with delays on time scales. Our results obtained in this paper Neural Processing Letters, vol. 31, no. 1, pp. 65–91, 2010.
are completely new even in case of the time scale T = R [14] J. H. Park, C. H. Park, O. M. Kwon, and S. M. Lee, “A
or Z and complementary to the previously known results. new stability criterion for bidirectional associative memory
Besides, our method used in this paper may be used to neural networks of neutral-type,” Applied Mathematics and
study many other neutral-type neural networks with delays Computation, vol. 199, no. 2, pp. 716–722, 2008.
[15] R. Rakkiyappan and P. Balasubramaniam, "New global exponential stability results for neutral type neural networks with distributed time delays," Neurocomputing, vol. 71, no. 4–6, pp. 1039–1045, 2008.
[16] R. Rakkiyappan and P. Balasubramaniam, "LMI conditions for global asymptotic stability results for neutral-type neural networks with distributed time delays," Applied Mathematics and Computation, vol. 204, no. 1, pp. 317–324, 2008.
[17] C. Bai, "Global stability of almost periodic solutions of Hopfield neural networks with neutral time-varying delays," Applied Mathematics and Computation, vol. 203, no. 1, pp. 72–79, 2008.
[18] B. Xiao, "Existence and uniqueness of almost periodic solutions for a class of Hopfield neural networks with neutral delays," Applied Mathematics Letters, vol. 22, no. 4, pp. 528–533, 2009.
[19] H. Xiang and J. Cao, "Almost periodic solution of Cohen-Grossberg neural networks with bounded and unbounded delays," Nonlinear Analysis: Real World Applications, vol. 10, no. 4, pp. 2407–2419, 2009.
[20] K. Wang and Y. Zhu, "Stability of almost periodic solution for a generalized neutral-type neural networks with delays," Neurocomputing, vol. 73, no. 16–18, pp. 3300–3307, 2010.
[21] J. Liu and G. Zong, "New delay-dependent asymptotic stability conditions concerning BAM neural networks of neutral type," Neurocomputing, vol. 72, no. 10–12, pp. 2549–2555, 2009.
[22] R. Samli and S. Arik, "New results for global stability of a class of neutral-type neural systems with time delays," Applied Mathematics and Computation, vol. 210, no. 2, pp. 564–570, 2009.
[23] R. Samidurai, S. M. Anthoni, and K. Balachandran, "Global exponential stability of neutral-type impulsive neural networks with discrete and distributed delays," Nonlinear Analysis: Hybrid Systems, vol. 4, no. 1, pp. 103–112, 2010.
[24] R. Rakkiyappan, P. Balasubramaniam, and J. Cao, "Global exponential stability results for neutral-type impulsive neural networks," Nonlinear Analysis: Real World Applications, vol. 11, no. 1, pp. 122–130, 2010.
[25] Y. Li, L. Zhao, and X. Chen, "Existence of periodic solutions for neutral type cellular neural networks with delays," Applied Mathematical Modelling, vol. 36, no. 3, pp. 1173–1183, 2012.
[26] P. Balasubramaniam and V. Vembarasan, "Asymptotic stability of BAM neural networks of neutral-type with impulsive effects and time delay in the leakage term," International Journal of Computer Mathematics, vol. 88, no. 15, pp. 3271–3291, 2011.
[27] B. Aulbach and S. Hilger, "Linear dynamic processes with inhomogeneous time scale," in Nonlinear Dynamics and Quantum Dynamical Systems, vol. 59 of Mathematical Research, pp. 9–20, Akademie, Berlin, Germany, 1990.
[28] L. Erbe and S. Hilger, "Sturmian theory on measure chains," Differential Equations and Dynamical Systems, vol. 1, no. 3, pp. 223–244, 1993.
[29] V. Lakshmikantham, S. Sivasundaram, and B. Kaymakcalan, Dynamic Systems on Measure Chains, Kluwer Academic, Dordrecht, The Netherlands, 1996.
[30] R. P. Agarwal and M. Bohner, "Basic calculus on time scales and some of its applications," Results in Mathematics, vol. 35, no. 1-2, pp. 3–22, 1999.
[31] S. Hilger, "Analysis on measure chains—a unified approach to continuous and discrete calculus," Results in Mathematics, vol. 18, no. 1-2, pp. 18–56, 1990.
[32] M. Bohner and A. Peterson, Dynamic Equations on Time Scales: An Introduction with Applications, Birkhäuser, Boston, Mass, USA, 2001.
[33] Y. Li and C. Wang, "Almost periodic functions on time scales and applications," Discrete Dynamics in Nature and Society, vol. 2011, Article ID 727068, 20 pages, 2011.
[34] Y. Li and C. Wang, "Uniformly almost periodic functions and almost periodic solutions to dynamic equations on time scales," Abstract and Applied Analysis, vol. 2011, Article ID 341520, 22 pages, 2011.
[35] J. Zhang, M. Fan, and H. Zhu, "Existence and roughness of exponential dichotomies of linear dynamic equations on time scales," Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2658–2675, 2010.
[36] V. Lakshmikantham and A. S. Vatsala, "Hybrid systems on time scales," Journal of Computational and Applied Mathematics, vol. 141, no. 1-2, pp. 227–235, 2002.
[37] Y. Li and K. Zhao, "Robust stability of delayed reaction-diffusion recurrent neural networks with Dirichlet boundary conditions on time scales," Neurocomputing, vol. 74, no. 10, pp. 1632–1637, 2011.
[38] Y. K. Li, K. H. Zhao, and Y. Ye, "Stability of reaction-diffusion recurrent neural networks with distributed delays and Neumann boundary conditions on time scales," Neural Processing Letters, vol. 36, pp. 217–234, 2012.
[39] J. Shen and J. Cao, "Consensus of multi-agent systems on time scales," IMA Journal of Mathematical Control and Information, vol. 29, no. 4, pp. 507–517, 2012.
[40] Y. K. Li, "Periodic solutions of non-autonomous cellular neural networks with impulses and delays on time scales," IMA Journal of Mathematical Control and Information, 2013.