
RESEARCH METHODOLOGY
GTU PhD (Engg. and Comp. Sc.), July 2013
Assignments for Engineering and Computer Science

Submitted by: Amit Rathod
Enrollment No.: 129990911008 (Ele. & Commu.)
Assistant Professor, Government Engineering College, Bhavnagar


Assignment-1
Part-1 Aim: List the top five journals as per impact factor

The names of the top five journals by impact factor in my area of research, together with screenshots of the lists displayed by the website, are as under.

Sr No | Journal Name                                               | Impact Factor
1     | Elsevier journal on Applied Soft Computing                 | 2.14
2     | IEEE Transactions on Parallel and Distributed Systems      | 1.4
3     | Elsevier journal on Parallel Computing                     | 1.214
4     | Springer journal on Computing                              | 0.807
5     | ACM Transactions on Reconfigurable Technology and Systems  | 0.65

[Screenshots of the journal lists displayed by the impact factor website]

Part-2 Aim: List the five most cited authors in the field of High Performance Computing

The names of the five most cited authors in the field of High Performance Computing as per h-index, together with screenshots of their publication lists displayed by the website, are as under.
Sr No | Author                          | H-Index
1     | Ajit Abraham                    | 52
2     | Abhishek Mitra                  | 8
3     | Ching Chang Wong                | 16
4     | Nadia Nedjah                    | 13
5     | Alberto Sangiovanni Vincentelli | 93

i-10 index values listed: 254 and 510.

[Screenshots of the authors' citation profiles displayed by the website]

Part-3 Aim: Screenshot of the bibliography database tool JabRef.

[Screenshot of the JabRef bibliography database]

Assignment-2
Paper Title: A Hardware Accelerator for Particle Swarm Optimization
Authors: Rogério M. Calazan, Nadia Nedjah, Luiza M. Mourelle, Post-Graduate Program of Electronics Engineering, Faculty of Engineering, State University of Rio de Janeiro, Rio de Janeiro, Brazil
Published in: Elsevier journal of Applied Soft Computing, 2013 [Article in press]

Part I

Particle swarm optimization (PSO) belongs to the field of artificial intelligence in which the decentralized, collective behavior of individuals that interact with each other, as well as with the environment, forms the basis for reaching an intelligent decision about a given problem. As a population-based method, however, PSO suffers from a time-consuming evolution process before it derives an answer. For embedded and industrial applications, for example optimization of memory usage, navigation of mobile sensors and evolutionary mobile robots, the situation is even worse, because these applications generally use low-performance microprocessors with limited computational resources, rather than high-performance desktop computers, as the computational platform. As a result, poor execution performance can occur because of the evolutionary nature of PSO. An FPGA implementation helps to increase the performance of PSO, and this is the area of concern I found addressed in this paper.

Issues covered
- The paper presents an efficient architecture for parallel PSO in reconfigurable FPGA hardware.
- The high iteration complexity of the algorithm and the computational time required to complete the iterations are addressed.
- A parallel version of PSO is proposed.
- Random number generation (RNG) in reconfigurable hardware is covered.
- The performance of the coprocessor is measured and compared with a Microblaze-based implementation.

Objectives of the research
- To propose an efficient architecture for parallel PSO to be implemented in reconfigurable FPGA hardware.
- To propose an architecture that can be viewed as a coprocessor operating together with the Microblaze processor, in order to solve specific applications, optimize performance and free up the processor during the execution of PSO.
- To validate the performance of the proposed architecture by comparing the execution time of PSO with and without the coprocessor.

Modeling/experimental techniques used

In this paper, the PSO algorithm has been parallelized as shown in Fig. 1. The parallelization follows from the idea that the work performed by a given particle is independent of that done by the other particles of the swarm, except in terms of Gbest, and thus the computations done by the particles can be executed simultaneously. The algorithm has a synchronization point at the election of Gbest, as shown in Fig. 2, wherein p1, ..., pn denote the n particles of the swarm, and v(p1), ..., v(pn) and x(p1), ..., x(pn) the respective velocities and positions. Each particle computes its fitness, velocity and position independently and in parallel with the other particles, until the election of Gbest. In order to synchronize the process and prevent the use of incorrect values of Gbest, the velocity and position computations can only commence once Gbest has been chosen among the Pbest values of all particles of the swarm. The stopping criterion is also verified synchronously by the parallel processes.

Figure 1. Parallel computation as performed by n particles within a swarm.
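The parallel structure described above can be sketched in software as follows. This is only an illustrative Python sketch: the particle count, coefficient values and the thread-pool emulation of per-particle hardware units are my own assumptions, not details taken from the paper.

```python
# A minimal sketch of synchronous parallel PSO: each particle updates
# independently, and Gbest is elected at a synchronization point before the
# next iteration begins.
import random
from concurrent.futures import ThreadPoolExecutor


def sphere(x):
    """Example fitness function (minimization)."""
    return sum(xi * xi for xi in x)


def parallel_pso(fitness, dim=4, n_particles=8, iters=200,
                 w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    # START step: random initial positions and velocities for every particle.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [fitness(xi) for xi in x]
    gbest = min(range(n_particles), key=lambda i: pbest_val[i])

    def step(i, gbest_pos):
        # Work of particle i: velocity, position and fitness update, done
        # independently of the other particles (only Gbest is shared).
        for d in range(dim):
            v[i][d] = (w * v[i][d]
                       + c1 * random.random() * (pbest[i][d] - x[i][d])
                       + c2 * random.random() * (gbest_pos[d] - x[i][d]))
            x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
        f = fitness(x[i])
        if f < pbest_val[i]:
            pbest_val[i], pbest[i] = f, x[i][:]

    with ThreadPoolExecutor(max_workers=n_particles) as pool:
        for _ in range(iters):
            gbest_pos = pbest[gbest][:]       # Gbest frozen for this iteration
            # All particles run their updates in parallel ...
            list(pool.map(lambda i: step(i, gbest_pos), range(n_particles)))
            # ... then the synchronization point: Gbest is elected among the
            # Pbest values before the next velocity/position computations start.
            gbest = min(range(n_particles), key=lambda i: pbest_val[i])

    return pbest[gbest], pbest_val[gbest]


if __name__ == "__main__":
    best_x, best_f = parallel_pso(sphere)
    print("best position:", best_x, "best fitness:", best_f)
```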



Solution developed: the coprocessor accelerator

The main component of the coprocessor architecture is the SWARM unit, which is responsible for the correct operation and synchronization of the swarm. It starts by enabling the START unit, which generates the initial position and velocity of each particle in the population. The stochastic nature of the PSO algorithm requires a random number generator: an LFSR (Linear Feedback Shift Register) is responsible for generating single-precision random numbers, according to the maximum and minimum values of the domain as well as the number of particles allowed. Once these values are loaded, the unit enables the particles to start the fitness calculation. Whenever a particle is ready to report the value of its Pbest, a comparator checks the values returned by the particles to decide whether the Gbest register should be updated. The SWARM unit synchronizes the work of the particles, allowing the calculations of velocity and position to start only once Gbest has been correctly elected. The state machine CTRL, among other controls, enables the NDIM registers. The SWARM unit is implemented as an FPGA coprocessor interfaced with the Microblaze soft-core processor through an FSL link, as shown in Figure 2. The Microblaze and the PSO coprocessor were synthesized on a Xilinx Virtex 6 FPGA.
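As an illustration of the random number generation mentioned above, the following Python sketch shows an LFSR-style generator whose output is scaled into the search domain. The 32-bit width, the tap mask and the scaling are assumptions for illustration, not the paper's exact hardware configuration.

```python
# A minimal sketch of LFSR-based pseudo-random number generation used to
# initialize particle positions and velocities. The tap mask below is an
# illustrative choice, not taken from the paper.
def lfsr32(seed, taps=0xD0000001):
    """Infinite generator of 32-bit pseudo-random words (Galois-style LFSR)."""
    state = (seed & 0xFFFFFFFF) or 1          # the all-zero state is forbidden
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
        yield state


def to_domain(word, xmin, xmax):
    """Map a 32-bit word onto the search domain [xmin, xmax]."""
    return xmin + (word / 0xFFFFFFFF) * (xmax - xmin)


if __name__ == "__main__":
    rng = lfsr32(seed=0xC0FFEE)
    # Initial positions for an 8-particle swarm in the domain [-10, 10].
    positions = [to_domain(next(rng), -10.0, 10.0) for _ in range(8)]
    print(positions)
```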

Figure 2. PSO coprocessor connected to the Microblaze through FSL.

Statistical functions used

Six functions were implemented as the FITNESS unit in order to assess the performance of the proposed PSO coprocessor architecture: the Sphere function (1), the Rosenbrock function (2), the De Jong function (3), the F6 function (4), the Rastrigin function (5) and the Schwefel function (6).

[Equations (1)-(6): definitions of the Sphere, Rosenbrock, De Jong, F6, Rastrigin and Schwefel functions]
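For reference, the standard textbook forms of these six benchmark functions can be written in Python as below. These are the usual definitions from the optimization literature; the exact variants and parameter choices in the paper may differ (in particular, which De Jong function is meant is an assumption here).

```python
# Standard forms of the six benchmark fitness functions named above.
import math


def sphere(x):                                # (1)
    return sum(xi ** 2 for xi in x)


def rosenbrock(x):                            # (2)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))


def dejong_step(x):                           # (3) assumed: De Jong's step function
    return sum(math.floor(xi) for xi in x)


def schaffer_f6(x):                           # (4) two-dimensional F6 function
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1.0 + 0.001 * s) ** 2


def rastrigin(x):                             # (5)
    return 10.0 * len(x) + sum(xi ** 2 - 10.0 * math.cos(2 * math.pi * xi)
                               for xi in x)


def schwefel(x):                              # (6)
    return 418.9829 * len(x) - sum(xi * math.sin(math.sqrt(abs(xi))) for xi in x)
```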

Findings presented

A comparison of the optimization times for the benchmark fitness functions is presented. A speedup is obtained by the coprocessor when compared to the Microblaze processor implementation. The PSO coprocessor, implemented on a Xilinx Virtex 6 FPGA operating at 200 MHz, is up to 18 times faster than the Microblaze implementation in the worst case and up to 135 times faster in the best case. An increase in the number of particles does not require a great deal of additional resources; it does, however, improve the performance of the optimization process.
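As a simple illustration of how such speedup figures are computed from execution times (the timing values below are made up for the example, not taken from the paper's result tables):

```python
# Speedup of the coprocessor relative to the Microblaze-only run.
def speedup(t_microblaze, t_coprocessor):
    return t_microblaze / t_coprocessor


# e.g. a run that takes 2.70 s on the Microblaze and 0.02 s on the coprocessor
# corresponds to the reported best-case factor of 135.
print(speedup(2.70, 0.02))   # -> 135.0
```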

Conclusion made

The scalability of the hardware depends on the number of particles used and the complexity of the fitness function. The best acceleration achieved was up to 135 times, and not less than 20 times, that of the Microblaze implementation.


Part II

The following points can be considered to extend and improve the work:
- Further improvement in computational efficiency, design flexibility and resource utilization.
- More flexibility at the application level, so that parameters such as the population size, the number of bit strings used for encoding and the inertia weight can be modified by users to suit the needs of a specific application.
- A hardware random number generator (RNG) that produces numbers with better randomness, together with a particle re-initialization scheme to promote exploratory search during the optimization.
- Further work to measure the performance of PSO on a different coprocessor, with a different real-life application and a different soft-core processor.

***

