Abstract: This literature review provides a brief overview of some of the most common segmentation techniques and a comparison between them. It discusses graph-based methods, medical image segmentation research and colour-image-based segmentation techniques. With the growing research on image segmentation, it has become important to categorise the research outcomes and provide readers with an overview of the existing segmentation techniques in each category. In this paper, different image segmentation techniques, from graph-based approaches to colour image segmentation and medical image segmentation (which covers applications of both), are reviewed. Information about open source software packages for image segmentation and standard databases is provided. Finally, summaries and reviews of research work on image segmentation techniques, along with quantitative comparisons for assessing segmentation results under different parameters, are presented in tabular format as extracts from many research papers.
Index Terms: Graph-based segmentation technique, medical image segmentation, colour image segmentation, watershed (WS) method, F-measure, computerized tomography (CT) images
I. Introduction
Image segmentation is the process of separating or grouping an image into different parts. There are currently many ways of performing it, ranging from simple thresholding to advanced colour image segmentation methods. These parts normally correspond to things that humans can easily separate and view as individual objects; computers have no means of intelligently recognizing objects, so many different methods have been developed to segment images. The segmentation process is based on various features found in the image, such as colour information or boundaries between regions. The aim of image segmentation is the domain-independent partition of the image into a set of regions that are visually distinct and uniform with respect to some property, such as grey level, texture or colour. Segmentation can be considered the first step and a key issue in object recognition, scene understanding and image understanding. Its application areas range from industrial quality control to medicine, robot navigation, geophysical exploration and military applications. In all these areas, the quality of the final result depends largely on the quality of the segmentation. In this review paper we discuss graph-based segmentation techniques, colour image segmentation techniques and medical image segmentation, which is a real-time application and a very important field of research. Mathematical details are avoided for simplicity.
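The simplest of these approaches, global thresholding, can be sketched in a few lines. The function name, toy image and threshold value below are ours, purely for illustration:

```python
import numpy as np

def threshold_segment(image, t):
    """Binary segmentation by global thresholding: pixels brighter
    than t are labelled foreground (1), the rest background (0)."""
    return (np.asarray(image) > t).astype(np.uint8)

# Toy 4x4 grey-level image: a bright 2x2 object on a dark background.
img = np.array([[10,  12,  11, 10],
                [10, 200, 210, 12],
                [11, 205, 198, 10],
                [12,  11,  10, 11]])

mask = threshold_segment(img, 100)
print(mask)        # the bright block is labelled 1
print(mask.sum())  # 4 foreground pixels
```

In practice the threshold is chosen from the image histogram rather than fixed by hand, but the principle is the same.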
II. SEGMENTATION METHODS
2.1 Graph-Based Segmentation
A family of graph-theoretical algorithms based on the minimal spanning tree (MST) is capable of detecting several kinds of cluster structure in arbitrary point sets [1][2]. In 1971, C. T. Zahn presented a paper that briefly discusses the application of cluster detection to taxonomy and the selection of good feature spaces for pattern recognition; detailed analyses of several planar cluster detection problems are illustrated by text and figures, and the well-known Fisher iris data, in four-dimensional space, are analysed by these methods. In 1993, N. R. Pal and S. K. Pal [3] presented a review paper on image segmentation techniques. They noted that the selection of an appropriate segmentation technique depends largely on the type of image and the application area. Pal and Pal also established that the grey-level distributions within the object and the background can be more closely approximated by Poisson distributions than by normal distributions. An interesting area of investigation is to find methods of
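Zahn's MST-based clustering can be illustrated with a small sketch: build the Euclidean minimal spanning tree of a point set, delete edges longer than a cut length (a crude stand-in for Zahn's "inconsistent edge" test), and read off the connected components as clusters. The point set and cut length below are illustrative:

```python
import math

def mst_edges(points):
    """Prim's algorithm: edges of the Euclidean minimal spanning tree."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        edges.append((i, j, dist(i, j)))
        in_tree.add(j)
    return edges

def mst_clusters(points, cut_len):
    """Zahn-style clustering: delete MST edges longer than cut_len and
    return the resulting connected components as clusters."""
    parent = list(range(len(points)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j, d in mst_edges(points):
        if d <= cut_len:
            parent[find(i)] = find(j)
    groups = {}
    for k in range(len(points)):
        groups.setdefault(find(k), []).append(k)
    return list(groups.values())

# Two well-separated planar point groups.
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(mst_clusters(pts, 3.0))  # two clusters of three points each
```

Zahn's actual inconsistency criterion compares each edge to the average length of nearby edges rather than using a global threshold; the global cut keeps the sketch short.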
2.2 Colour Image Segmentation
Healey [12] includes edge information to guide the colour segmentation process, while in [13] the authors combine the texture features extracted from each sub-band of the colour space with the colour features using heuristic merging rules. In [14], the authors discuss a method that employs colour-texture information for the model-based coding of human images, while Shigenaga [15] adds spatial-frequency texture features sampled by Gabor filters to complement the CIE Lab (CIE is the acronym for the Commission Internationale de l'Eclairage) colour information. In order to capture the colour-texture content, Rosenfeld et al. [16] calculated the absolute difference distributions of pixels in multi-band images, while Hild et al. [17] proposed a bottom-up segmentation framework in which the colour and texture feature vectors were separately extracted and then combined for knowledge indexing. In papers [113, 115, 122, 127, 128, 132, 133] the extraction of colour and texture is carried out as a sequence of serial processes. The study of chromatic content has been conducted by Paschos and Valavanis [18]. Shafarenko et al. [19] explored the segmentation of randomly textured colour images. In this approach the segmentation process is
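A common thread in the methods above is combining colour features with texture cues into joint per-pixel feature vectors before clustering. A minimal sketch of that general idea, not of any specific cited method: the local-variance texture cue, the toy image and all names are ours, and a tiny hand-rolled k-means stands in for the various clustering schemes.

```python
import numpy as np

def local_variance(gray, k=1):
    """Crude texture cue: grey-level variance in a (2k+1)^2 window."""
    h, w = gray.shape
    out = np.zeros_like(gray, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = gray[max(0, y-k):y+k+1, max(0, x-k):x+k+1].var()
    return out

def kmeans(X, k, iters=20):
    """Tiny k-means with deterministic initial centres spread over X."""
    centres = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centres[c] = X[labels == c].mean(0)
    return labels

# Toy image: left half red and smooth, right half green and noisy.
rng = np.random.default_rng(1)
img = np.zeros((8, 8, 3))
img[:, :4, 0] = 1.0
img[:, 4:, 1] = 1.0 + 0.2 * rng.standard_normal((8, 4))
gray = img.mean(-1)

# Per-pixel feature vector: (R, G, B, local variance) -> cluster.
feats = np.dstack([img, local_variance(gray)[..., None]]).reshape(-1, 4)
labels = kmeans(feats, k=2).reshape(8, 8)
print(labels)
```

The two halves separate cleanly because the colour channels dominate the feature distance; in the cited works the texture channel (Gabor energy, difference distributions, etc.) is what disambiguates regions of similar colour.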
www.iosrjournals.org
2 | Page
Open source software for medical image analysis [110]: Several open source software packages are available for performing analysis of medical images; they are listed below:
3.1 DATABASES & THEIR WEBSITES
Several databases proposed by the computer vision community contain a large variety of colour images that can be employed in the quantification of colour image segmentation algorithms. A standard database is required for applying segmentation techniques to input images, for evaluating methods and results, and for validating results by comparison against a common benchmark. Publicly available databases with colour image segmentation characteristics are listed below, with their websites and the research paper in which each is referenced.

Name of database | Web address
Berkeley Segmentation Dataset and Benchmark (2001) [21] | http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/
McGill calibrated colour image database (2004) [111] | http://tabby.vision.mcgill.ca
Outex database (2002) [112] | http://www.outex.oulu.fi/
VisTex database (1995) [24] | http://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html
Caltech-256 (2007) [113] | http://www.vision.caltech.edu/Image_Datasets/Caltech256/
The Prague Texture Segmentation Data Generator and Benchmark (2008) [114] | http://mosaic.utia.cas.cz/
Pascal VOC (updated 2009) [115] | http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2009/#devkit
CUReT (1999) [116] | http://www.cs.columbia.edu/CAVE/software/curet/
SIMPLIcity (2001) [117] | http://wang.ist.psu.edu/docs/related/
Minerva (2001) [118] | http://www.paaonline.net/benchmarks/minerva/
The Texture Library Database (updated 2009) | http://textures.forrest.cz
BarkTex (1998) [119] | ftp://ftphost.uni-koblenz.de/outgoing/vision/Lakmann/BarkTex
Corel | Commercially available
Summary of reviewed techniques and reported quantitative comparisons:
1. Automatic image segmentation by dynamic region growth and multiresolution merging (GSEG) [45].
2. Colour-texture segmentation using unsupervised graph cuts (UGC) [54]. Reported precision / recall / F-measure against JSEG: UGC 0.77 / 0.75 / 0.76; colour only 0.42 / 0.46 / 0.44; colour-texton 0.57 / 0.49 / 0.53; JSEG 0.64 / 0.51 / 0.57.
3. CTex, an adaptive unsupervised segmentation algorithm based on colour-texture coherence [62]. Reported PR index (mean / standard deviation) against human segmentations: CTex 0.80 / 0.10; JSEG 0.77 / 0.12.
4. Segmentation of natural images using self-organising feature maps [65].
5. B-JSEG (Wang et al.) [120], evaluated against human segmentations and JSEG.
6. Evaluation by area overlap.
7. HSEG, with a reported value of 29.3.
10.-12. Morphological description of colour images for content-based image retrieval [129], with reported per-descriptor counts of Histogram 1, AC 127, CSD 89, CDE 9, CSG 548, MHW 470, MHL 443; morphological segmentation on learned boundaries (WF and WS) [132], with reported GCE / precision of 0.19 / 0.64 for WF level 2, 0.29 / 0.52 for NC (6 regions), 0.22 / 0.54 for WS Vol (18 regions) and 0.23 / 0.44 for NC (18 regions); textured image segmentation based on modulation models [137]; and an adaptive and progressive approach for efficient gradient-based multiresolution colour image segmentation [141]. Other reported figures include HCM 28.24 / 11.15 / 253.14, FCM 30.57 / 16.03 / 1617.1, PFCM 29.53 / 12.00 / 2.281 and CNN 39.39 / 17.95 / 2.125, and average BCE values of 0.4876 (proposed), 0.5454 (DCA + K-Means) and 0.5044 (DCA + WCE), alongside the methods GRF, JSEG and EG.
13.-15. Comparisons of average running time (240 s versus 16.2 s), average NPR and the number of images with NPR > 0.7 for TextonBoost, Yang et al., Auto-Context and a CRF-model-based approach.
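Several of the comparisons above rate a segmentation against a ground truth with precision, recall and the F-measure. A minimal sketch of how these region-based scores are computed from binary masks (the toy masks are ours):

```python
def precision_recall_f(pred, truth):
    """Region-based precision, recall and F-measure for binary masks,
    each given as a set of foreground pixel coordinates."""
    tp = len(pred & truth)                       # true-positive pixels
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

truth = {(r, c) for r in range(4) for c in range(4)}     # 16 pixels
pred = {(r, c) for r in range(4) for c in range(2, 6)}   # half overlaps
p, r, f = precision_recall_f(pred, truth)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.5 0.5 0.5
```

Boundary-based variants (as in the Berkeley benchmark) match boundary pixels within a tolerance instead of intersecting regions, but the precision/recall arithmetic is identical.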
2.-3. A new iris segmentation method that can accurately extract iris regions from non-ideal quality iris images [102]; thyroid segmentation and volume estimation in ultrasound images [103]. Reported segmentation measurement indices [%] (accuracy / sensitivity / specificity / PPV / NPV): proposed method 96.54 / 91.98 / 97.69 / 91.17 / 97.91; AWMF + ACM 94.56 / 85.66 / 96.90 / 88.42 / 92.27; AWMF + watershed 88.27 / 78.80 / 90.78 / 70.19 / 94.14.
4. Coupled non-parametric shape and moment-based inter-shape pose priors for multiple basal ganglia structure segmentation [104]. Reported TPR / FPR / DC: C&V 0.7301 / 0.0075 / 0.2680; coupled shape 0.7299 / 0.0050 / 0.2351; coupled shape and relative pose 0.7291 / 0.0049 / 0.2335.
5. Image structure representation and processing: a discussion of some segmentation methods in cytology [108]. Reported correct / incorrect segmentations by observer-type diagnosis: relaxation 70 / 30, region growing 93 / 7, frontier detection 89 / 11.
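The medical comparisons above report accuracy, sensitivity, specificity, PPV and NPV, all of which derive from the pixel-level confusion counts of a segmentation result. A small sketch with hypothetical counts (the function name and numbers are ours):

```python
def diagnostic_indices(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV and NPV (in %) from
    true/false positive/negative pixel counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy":    100 * (tp + tn) / total,
        "sensitivity": 100 * tp / (tp + fn),   # true-positive rate
        "specificity": 100 * tn / (tn + fp),   # true-negative rate
        "PPV":         100 * tp / (tp + fp),   # positive predictive value
        "NPV":         100 * tn / (tn + fn),   # negative predictive value
    }

# Hypothetical counts for a 10000-pixel ultrasound slice.
print(diagnostic_indices(tp=1800, fp=200, tn=7800, fn=200))
```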
IV. CONCLUSION
Colour has been widely used to describe image and video content. Since the introduction of colour distributions as descriptors of image content, various research projects have addressed the problems of colour spaces, illumination invariance, colour quantization and colour similarity functions. Many different methods have been developed to enhance the limited descriptive capacity of colour distributions. We have presented here the state of the art of colour-based methods that can be used to segment colour images. The most surprising element that emerges from our study of colour indexing and segmentation is that most of the methods analysed do not explore the problem of how to deal with colour in a device-independent way. Very seldom are details given, or
References
[1] C. T. Zahn, "Graph-theoretical methods for detecting and describing gestalt clusters," IEEE Transactions on Computers, vol. 20, no. 1, pp. 68-86, 1971.
[2] Heng-Da Cheng and Ying Sun, "A hierarchical approach to colour image segmentation using homogeneity," IEEE Transactions on Image Processing, vol. 9, no. 12, December 2000.
[3] N. R. Pal and S. K. Pal, "A review on image segmentation techniques," Pattern Recognition, vol. 26, pp. 1277-1294, 1993.
[4] N. R. Pal and D. Bhandari, "Image thresholding: some new techniques," Signal Processing, vol. 33, pp. 139-158, 1993.
[5] Y. J. Zhang, "A survey on evaluation methods for image segmentation," Pattern Recognition, vol. 29, no. 8, pp. 1335-1346, August 1996.
[6] Y. Zhang, "Influence of image segmentation over feature measurement," Pattern Recognition Letters, vol. 16, pp. 201-206, 1995.
[7] M. Tabb and N. Ahuja, "Unsupervised multiscale image segmentation by integrated edge and region detection," IEEE Transactions on Image Processing, vol. 6, no. 5, pp. 642-655, 1997.
[8] Jianbo Shi and Jitendra Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.
[9] Bir Bhanu and Jing Peng, "Adaptive integrated image segmentation and object recognition," IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 30, no. 4, November 2000.
[10] Leo Grady and Eric L. Schwartz, "Isoperimetric graph partitioning for image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 3, pp. 469-475, 2006.
[12] A. Jain and G. Healey, "A multiscale representation including opponent colour features for texture recognition," IEEE Transactions on Image Processing, vol. 7, no. 1, pp. 124-128, 1998.
[13] A. P. Dhawan and A. Sicsu, "Segmentation of images of skin lesions using colour and texture information of surface pigmentation," Computerized Medical Imaging and Graphics, vol. 16, no. 3, pp. 163-177, 1992.
[14] S. Ishibashi and F. Kishino, "Colour/texture analysis and synthesis for model-based human image coding," Proceedings of the SPIE - The International Society for Optical Engineering, vol. 1605, no. 1, pp. 242-252, 1991.
[15] A. Shigenaga, "Image segmentation using colour and spatial-frequency representations," in Proceedings of the Second International Conference on Automation, Robotics and Computer Vision (ICARCV '92), vol. 1, 1992, pp. CV-1.3/1-5.
[16] A. Rosenfeld, C. Y. Wang, and A. Y. Wu, "Multispectral texture," IEEE Transactions on Systems, Man and Cybernetics, vol. SMC-12, no. 1, pp. 79-84, 1982.
[17] M. Hild, Y. Shirai, and M. Asada, "Initial segmentation for knowledge indexing," in Proceedings of the 11th IAPR International Conference on Pattern Recognition, vol. 1, 1992, pp. 587-590.
[18] G. Paschos and K. P. Valavanis, "Chromatic measures for colour texture description and analysis," in Proceedings of the IEEE International Symposium on Intelligent Control, 1995, pp. 319-325.
[19] L. Shafarenko, M. Petrou, and J. Kittler, "Automatic watershed segmentation of randomly textured colour images," IEEE Transactions on Image Processing, vol. 6, no. 11, pp. 1530-1544, 1997.
[20] M. A. Hoang, J. M. Geusebroek, and A. W. Smeulders, "Colour texture measurement and segmentation," Signal Processing, vol. 85, no. 2, pp. 265-275, 2005.
[21] D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proceedings of the IEEE International Conference on Computer Vision (ICCV '01), 2001, pp. 416-425.
[22] H. Wang, X. H. Wang, Y. Zhou, and J. Yang, "Colour texture segmentation using quaternion-Gabor filters," in IEEE International Conference on Image Processing, 2006, pp. 745-748.
[23] Y. Deng and B. S. Manjunath, "Unsupervised segmentation of colour texture regions in images and video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800-810, 2001.
[24] VisTex database (1995), http://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html
[25] JSEG source code, available online at http://vision.ece.ucsb.edu/segmentation/jseg
[26] M. Mirmehdi and M. Petrou, "Segmentation of colour textures," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 2, pp. 142-159, 2000.
[27] C. L. Huang, T. Y. Cheng, and C. C. Chen, "Colour images segmentation using scale space filter and Markov random field," Pattern Recognition, vol. 25, no. 10, pp. 1217-1229, 1992.
[28] Y. Wang, J. Yang, and N. Peng, "Unsupervised colour texture segmentation based on soft criterion with adaptive mean-shift clustering," Pattern Recognition Letters, vol. 27, no. 5, pp. 386-392, 2006.
[29] T. Gevers, "Image segmentation and similarity of colour texture objects," IEEE Transactions on Multimedia, vol. 4, no. 4, pp. 509-516, 2002.
[30] Y. Zheng, J. Yang, Y. Zhou, and Y. Wang, "Colour texture based unsupervised segmentation using JSEG with fuzzy connectedness," Journal of Systems Engineering and Electronics, vol. 17, no. 1, pp. 213-219, 2006.
[31] S. Y. Yu, Y. Zhang, Y. G. Wang, and J. Yang, "Unsupervised colour texture image segmentation," Journal of Shanghai Jiaotong University (Science), vol. 13E, no. 1, pp. 71-75, 2008.
[32] M. Krinidis and I. Pitas, "Colour texture segmentation based on the modal energy of deformable surfaces," IEEE Transactions on Image Processing, vol. 18, no. 7, pp. 1613-1622, 2009.
Janak B. Patel (born in 1971) received his B.E. (Electronics & Communication Engg.) from L.D. College of Engg., Ahmedabad, affiliated with Gujarat University, and his M.E. (Electronics Communication & System Engg.) in 2000 from Dharmsinh Desai Institute of Technology, Nadiad, affiliated with Gujarat University. He is Asst. Prof. & H.O.D. at L.D.R.P. Institute of Technology & Research, Gandhinagar, Gujarat. Currently, he is pursuing his Ph.D. at the Indian Institute of Technology, Roorkee under the quality improvement programme of AICTE, India. His research interests include digital signal processing, image processing, and biomedical signal and image processing. He has 7 years of industrial and 12 years of teaching experience at engineering colleges, and has taught many subjects in the EC, CS and IT disciplines. He is a life member of CSI, ISTE & IETE.
R. S. Anand received his B.E., M.E. and Ph.D. in Electrical Engg. from the University of Roorkee (now Indian Institute of Technology, Roorkee) in 1985, 1987 and 1992, respectively. He is a professor at the Indian Institute of Technology, Roorkee. He has published more than 100 research papers in the areas of image and signal processing, has supervised 10 Ph.D. and 60 M.Tech. theses, and has organized conferences and workshops. His research areas are biomedical signal and image processing and ultrasonic applications in non-destructive evaluation and medical diagnosis. He is a life member of the Ultrasonic Society of India.
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE) ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 20-30 www.iosrjournals.org
Analytical approaches for Optimal Placement and sizing of Distributed generation in Power System
Mohit Mittal*, Rajat Kamboj**, Shivani Sehgal***
* Student of Electrical & Electronics Engg., Doon Valley Institute of Engg. & Technology, Karnal, India.
** Assistant Professor, Department of EEE, Doon Valley Institute of Engg. & Technology, Karnal, India.
*** Assistant Professor, Department of EEE, Doon Valley Institute of Engg. & Technology, Karnal, India.
Abstract- This work proposes a new algorithm that investigates the performance of a distribution system with multiple DG sources for the reduction of line loss, given the total number of DG units that the user wishes to connect. Strategic placement of multiple DG sources is a complex combinatorial optimization problem for a distribution system planner. A new and fast algorithm is developed for solving the power flow of radial distribution feeders, taking into account embedded distributed generation sources, and new approximation formulas are proposed to reduce the number of required solution iterations. Power flow (PF) techniques are used to calculate a network performance index (NPI), and a genetic algorithm searches for the best locations with the NPI as its fitness function.
I. INTRODUCTION
The impact of DG on system operating characteristics, such as electric losses, voltage profile, stability and reliability, needs to be appropriately evaluated, so the problem of DG allocation and sizing is of great importance. The installation of DG units at non-optimal places can result in an increase in system losses, implying an increase in costs and an effect opposite to the desired one. For that reason, an optimization method capable of indicating the best solution for a given distribution network can be very useful to the system planning engineer when dealing with the increasing DG penetration that is happening nowadays.
II. MATHEMATICAL FORMULATION
System Description
Consider a three-phase, balanced radial distribution feeder with n buses, l laterals and sublaterals, nDG distributed generations and nC shunt capacitors, as shown in Fig. 1.

Fig. 1: Radial distribution feeder model including DG and capacitors

The three recursive branch power-flow equations are:

P_{i+1} = P_i - r_{i+1} (P_i^2 + Q_i^2) / V_i^2 - PL_{i+1}      (2.1a)
Q_{i+1} = Q_i - x_{i+1} (P_i^2 + Q_i^2) / V_i^2 - QL_{i+1}      (2.1b)
V_{i+1}^2 = V_i^2 - 2 (r_{i+1} P_i + x_{i+1} Q_i) + (r_{i+1}^2 + x_{i+1}^2) (P_i^2 + Q_i^2) / V_i^2      (2.1c)
The following terminal conditions should be satisfied [6]:
i. At the end of the main feeder, laterals and sublaterals:
P_n = Q_n = 0      (2.2)
P_km = Q_km = 0      (2.3)
ii. The voltage at bus k is the same as the voltage at the start of its lateral:
V_k = V_k0      (2.4)
The real and reactive power losses of each section connecting two buses are:
Ploss_{i+1} = r_{i+1} (P_i^2 + Q_i^2) / V_i^2      (2.5)
Qloss_{i+1} = x_{i+1} (P_i^2 + Q_i^2) / V_i^2      (2.6)
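Equations (2.1a)-(2.1c) can be swept bus by bus along a feeder. The sketch below implements one forward sweep for a lateral-free feeder with illustrative per-unit data (the function name and feeder values are ours). It neglects losses when forming the initial substation flow, so the end-of-feeder powers only approximately satisfy condition (2.2):

```python
def forward_sweep(v0_sq, loads, r, x):
    """One forward sweep of the recursive branch power-flow equations
    (2.1a)-(2.1c) along a main feeder with no laterals.
    loads[i] = (P, Q) drawn at bus i+1; r[i], x[i] are branch impedances.
    P, Q are the powers flowing from bus i toward bus i+1 (per unit)."""
    # Substation flow: total demand (branch losses neglected here).
    P = sum(p for p, _ in loads)
    Q = sum(q for _, q in loads)
    v_sq = [v0_sq]
    for i, (r_i, x_i) in enumerate(zip(r, x)):
        flow_sq = (P**2 + Q**2) / v_sq[-1]               # (P^2+Q^2)/V^2
        v_next = (v_sq[-1] - 2 * (r_i * P + x_i * Q)
                  + (r_i**2 + x_i**2) * flow_sq)          # eq. (2.1c)
        v_sq.append(v_next)
        P = P - r_i * flow_sq - loads[i][0]               # eq. (2.1a)
        Q = Q - x_i * flow_sq - loads[i][1]               # eq. (2.1b)
    return v_sq, P, Q

# Tiny 3-bus feeder: two branches, two load buses, all per unit.
v_sq, P_end, Q_end = forward_sweep(
    v0_sq=1.0,
    loads=[(0.02, 0.01), (0.03, 0.015)],
    r=[0.05, 0.05],
    x=[0.03, 0.03])
print([round(v, 4) for v in v_sq])       # squared voltages drop along the feeder
print(round(P_end, 4), round(Q_end, 4))  # near zero at the feeder end
```

The "new and fast" algorithm of the paper iterates sweeps like this with approximation formulas to cut the iteration count; the sketch shows only the recursion itself.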
III. DG SIZING ISSUES
For the single-DG case, the optimal DG size is found by an analytical method based on the exact loss formula. The real power loss in a system is given by

P_L = SUM_{i=1..N} SUM_{j=1..N} [ a_ij (P_i P_j + Q_i Q_j) + b_ij (Q_i P_j - P_i Q_j) ]      (2.7)

where a_ij = (r_ij / (V_i V_j)) cos(d_i - d_j), b_ij = (r_ij / (V_i V_j)) sin(d_i - d_j), and r_ij + j x_ij = z_ij is the ij-th element of the [Zbus] matrix. The analytical method is based on the fact that the power loss against injected power is a parabolic function, and at minimum loss the rate of change of loss with respect to injected power becomes zero:

dP_L / dP_i = 2 SUM_{j=1..N} ( a_ij P_j - b_ij Q_j ) = 0      (2.8)

where P_i is the real power injection at node i, i.e. the difference between the real power generation and the real power demand at that node:

P_i = P_DGi - P_Di      (2.9)

where P_DGi is the real power injection from the DG placed at node i and P_Di is the load at node i. Combining equations (2.8) and (2.9) gives

P_DGi = P_Di + (1 / a_ii) [ b_ii Q_i - SUM_{j=1..N, j!=i} ( a_ij P_j - b_ij Q_j ) ]      (2.10)

Equation (2.10) gives the optimum size of DG for each bus i for the loss to be minimum; any size of DG other than P_DGi placed at bus i will lead to a higher loss. In calculating the optimum DG sizes at various locations using equation (2.10), it was assumed that the values of the a and b coefficients remain unchanged. This results in a small difference between the optimum sizes obtained by this approach and those from repeated load flow.
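Equation (2.10) can be evaluated directly once the a and b coefficients are formed from the Zbus resistances, voltages and angles. A sketch with a hypothetical two-bus system (all names and data are illustrative; with equal angles the b terms vanish):

```python
import math

def optimal_dg_size(i, P, Q, Pd, V, delta, R):
    """Optimum real-power DG size at bus i from equation (2.10).
    P, Q: net real/reactive injections per bus; Pd: real demands;
    V, delta: voltage magnitudes and angles; R[a][b]: real part of Zbus."""
    n = len(P)
    alpha = [[R[a][b] / (V[a] * V[b]) * math.cos(delta[a] - delta[b])
              for b in range(n)] for a in range(n)]
    beta = [[R[a][b] / (V[a] * V[b]) * math.sin(delta[a] - delta[b])
             for b in range(n)] for a in range(n)]
    s = sum(alpha[i][j] * P[j] - beta[i][j] * Q[j]
            for j in range(n) if j != i)
    return Pd[i] + (beta[i][i] * Q[i] - s) / alpha[i][i]

# Hypothetical flat-voltage two-bus system, per unit.
pdg = optimal_dg_size(0,
                      P=[0.0, -0.5], Q=[0.0, -0.3],
                      Pd=[0.3, 0.5],
                      V=[1.0, 1.0], delta=[0.0, 0.0],
                      R=[[0.10, 0.02], [0.02, 0.10]])
print(round(pdg, 3))  # 0.4
```

In practice the V, delta and injections come from a converged base-case load flow, and the result is refined by re-running the load flow, which is exactly the "small difference" noted above.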
IV. PROPOSED ALGORITHM
4.1 Algorithm for Single DG Case
The developed algorithm is explained stepwise as follows:
Step 1: Read the distribution network data and DG size.
Step 2: Give the network data to the power flow algorithm to get the base case power flow.
Step 3: Save the base case power flow for later use.
Step 4: Calculate the network performance for different DG placements.
Step 5: Insert the DG at the bus for which the NPI value is closest to unity and optimize the DG size.
Step 6: Evaluate the NPI with the help of the base case power flow and the power flow with the single DG inserted.
Step 7: Print results and stop.

4.2 Algorithm for Multiple DG
The following algorithm is developed with the help of the new and fast power flow solution algorithm and a genetic algorithm, and is used to obtain the results:
Step 1: Read the distribution network data.
Step 2: Give the network data to the new and fast power flow algorithm to get the base case power flow.
Step 3: Save the base case power flow for later use.
Step 4: Read the capacities of the distributed generators to insert. [Market-available DG sizes: 100 kW, 220 kW, 300 kW, 500 kW, 750 kW, 1 MW, 1.6 MW]
Step 5: Read the bus numbers for DG insertion from the genetic algorithm. The first time, the GA gives the initial population; afterwards it gives a new population.
Step 6: Apply the power flow again for the distribution system with the DGs inserted at the positions given by the GA.
Step 7: Evaluate the multi-objective index with the help of the base case power flow and the power flow with DGs inserted.
Step 8: Repeat steps 6 and 7 for all combinations in the GA population.
Step 9: Check whether the maximum number of generations is reached.
Step 10: Give the NPI values as fitness values to the GA.
Step 11: Using the fitness values, the GA performs its operations (selection, crossover, mutation and elitism) and generates a new population; go to step 5.
Step 12: Save the n best results and go to the next step.
Step 13: For every best result, decrease the capacity of each DG by a fixed percentage of its individual capacity, and repeat for the maximum number of iterations.
Step 14: The lower limit of DG capacity is 1% of the total load.
Step 15: Run the power flow for all combinations of new DG capacities and calculate the NPI.
Step 16: Compare these results with the previously saved fitness values, and print the results according to the best fitness values.
Step 17: Stop.
The corresponding flowchart is as follows:
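Steps 5-11 amount to a standard binary GA loop. The skeleton below mirrors the set-up described in Section 5.2 (tournament selection, two-point crossover, bit-flip mutation, elitism); a toy one-max fitness stands in for the NPI, which in the real algorithm requires a power-flow run per chromosome:

```python
import random

def genetic_search(fitness, n_bits, pop_size=30, generations=50,
                   cx_rate=0.8, mut_rate=0.05, elite=5, seed=0):
    """Binary GA skeleton: tournament selection, two-point crossover,
    bit-flip mutation and generation elitism. `fitness` plays the role
    of the NPI evaluated via a power-flow run."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = [c[:] for c in scored[:elite]]          # elitism
        while len(next_pop) < pop_size:
            # Tournament selection of two parents (tournament size 3).
            a, b = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            child = a[:]
            if rng.random() < cx_rate:                     # two-point crossover
                i, j = sorted(rng.sample(range(n_bits), 2))
                child[i:j] = b[i:j]
            for k in range(n_bits):                        # bit-flip mutation
                if rng.random() < mut_rate:
                    child[k] ^= 1
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy stand-in for the NPI: fitness = number of ones in the chromosome.
best = genetic_search(fitness=sum, n_bits=16)
print(best, sum(best))
```

In the real algorithm each chromosome encodes a combination of candidate bus numbers, and decoding it triggers a power-flow evaluation of the corresponding DG placement.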
V. TEST SYSTEM AND RESULTS
5.1 Test System
The radial system with 33 buses and 32 branches, with a total load of 3.715 MW and 2.28 MVAR at 11 kV, is taken as the test system. The single-line diagram of the 33-bus distribution system is shown in Fig. 2. System data are given in Appendix B.
Fig. 2: Single-line diagram of the 33-bus radial distribution system (substation S/S feeding main feeder buses 1-18, with laterals 19-22, 23-25 and 26-33)
5.2 GENETIC ALGORITHM SET-UP
Representation: A binary genetic algorithm (BGA) is employed to generate the combinations of bus numbers.
Initial population: An initial population of size 30 is selected.
Selection: Tournament selection is chosen for testing.
Reproduction: Two-point crossover with rate [0.8] (typical range 60% to 95%) and binary mutation with ratio [0.05] (typical range 0.5% to 1%) are used.
Elitism: Generation elitism of 5 is taken, which copies the best chromosomes into the next generation.
Fitness evaluation: The network performance index of the distribution system with DG sources is to be maximized and is selected for fitness evaluation.
Termination: The algorithm stops when the number of generations reaches 300. Each simulation is a fairly lengthy process but, given that this planning process is a strategic one, the duration is reasonable.

5.3 RELEVANCE FACTORS OF THE NPI
The NPI numerically describes the impact of DG, for a given location and size, on a distribution network. NPI values close to unity mean higher DG benefits. Table 1 shows the relevance factors used here, considering a normal-operation-stage analysis.

Index  | ILP  | ILQ  | IVD  | IVR  | VSI
Weight | W1 = 0.33 | W2 = 0.10 | W3 = 0.15 | W4 = 0.10 | W5 = 0.32
Table 1. NPI relevance factors
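With the Table 1 weights, the NPI for a candidate placement is simply a weighted sum of the individual impact indices. A sketch (the per-index values below are hypothetical):

```python
def npi(indices, weights):
    """Network performance index: weighted sum of the individual
    impact indices (ILP, ILQ, IVD, IVR, VSI from Table 1)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights sum to 1
    return sum(weights[k] * indices[k] for k in weights)

weights = {"ILP": 0.33, "ILQ": 0.10, "IVD": 0.15, "IVR": 0.10, "VSI": 0.32}
# Hypothetical per-index values for one candidate DG placement.
indices = {"ILP": 0.67, "ILQ": 0.57, "IVD": 0.95, "IVR": 0.95, "VSI": 0.80}
print(round(npi(indices, weights), 4))  # 0.7716
```

Each index is itself a ratio of a with-DG quantity to its base-case value, so the NPI approaches unity as the DG placement improves losses, voltage deviation, voltage regulation and the stability index together.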
5.4 Results and Analysis
A series of simulations was run to evaluate the performance of the distribution system with a defined number of potential DG units. The DG capacities are considered in two ways: with constant capacity and with tuned capacity. Results were obtained for the best sets of 1, 3 and 5 DG units located within the 32 candidate sites, together with the corresponding distribution-system performance.

Base case:
PLOAD 3.715 MW | QLOAD 2.28 MVAR | PLOSSES 210.3 kW | QLOSSES 137.3 kVAR | PUTILITY 3.916 MW | QUTILITY 2.417 MVAR | VSI 0.675
Table 2. Base system load-flow data

Table 2.1. Base case voltage profile (bus: voltage in p.u.)
1: 1.0000   2: 0.9972   3: 0.9836   4: 0.9763   5: 0.9691   6: 0.9511
7: 0.9475   8: 0.9339   9: 0.9276   10: 0.9217  11: 0.9208  12: 0.9193
13: 0.9132  14: 0.9109  15: 0.9095  16: 0.9088  17: 0.9067  18: 0.9061
19: 0.9967  20: 0.9931  21: 0.9924  22: 0.9918  23: 0.9801  24: 0.9734
25: 0.9701  26: 0.9491  27: 0.9466  28: 0.9351  29: 0.9269  30: 0.9234
31: 0.9192  32: 0.9183  33: 0.9180

Fig. 3. Base case voltage profile plot

Single DG case
The IVD and IVR values for the best combination are IVD 0.95499 and IVR 0.95393.

Table 2.2. Single DG test results, five best combinations
NPI | Bus | PDG (kW) | PLoss (kW) | QLoss (kVAR) | VSI | PUTILITY (kW) | QUTILITY (kVAR)
0.5489 | 31 | 1296.190 | 105.496 | 78.833 | 0.7971 | 2524.298 | 2378.832
0.5477 | 32 | 1244.690 | 105.718 | 79.501 | 0.7923 | 2576.028 | 2379.506
0.5411 | 33 | 1183.540 | 107.125 | 82.370 | 0.7865 | 2638.548 | 2382.369
0.5328 | 30 | 1471.350 | 108.178 | 80.041 | 0.8134 | 2351.828 | 2380.041
0.5331 | 6  | 1491.520 | 108.671 | 80.375 | 0.7923 | 2332.152 | 2380.376

Table 2.3. Voltage profile with single DG at bus 31 (bus: voltage in p.u.)
1: 1.0000   2: 0.9988   3: 0.9936   4: 0.9925   5: 0.9917   6: 0.9877
7: 0.9844   8: 0.9713   9: 0.9652   10: 0.9596  11: 0.9587  12: 0.9573
13: 0.9514  14: 0.9492  15: 0.9478  16: 0.9472  17: 0.9452  18: 0.9446
19: 0.9982  20: 0.9947  21: 0.9940  22: 0.9933  23: 0.9900  24: 0.9834
25: 0.9802  26: 0.9875  27: 0.9874  28: 0.9852  29: 0.9840  30: 0.9848
31: 0.9889  32: 0.9881  33: 0.9878
Fig. 4. Voltage profile plot with single DG at bus 31

3 DG case
The IVD and IVR values for the best combination are IVD 0.9675 and IVR 0.95657.
Without tuning, the DG sizes are the market-available 1000 kW, 750 kW and 500 kW.

Table 2.4. 3 DG test results without tuning, five best combinations
NPI | Buses | DG sizes (kW) | PDG total (kW) | PLoss (kW) | QLoss (kVAR) | VSI | PUTILITY (kW)
0.79106 | 31, 14, 25 | 1000, 750, 500 | 2250 | 69.339  | 51.6276 | 0.9176  | -
0.78828 | 30, 25, 16 | 1000, 750, 500 | 2250 | 70.8966 | 57.962  | 0.9104  | 1535.890
0.78611 | 29, 15, 16 | 1000, 750, 500 | 2250 | 71.741  | 52.3996 | 0.89038 | 1536.743
0.77022 | 11, 31, 4  | 1000, 750, 500 | 2250 | 75.8641 | 54.9118 | 0.89486 | 1540.665
0.7644  | 30, 9, 25  | 1000, 750, 500 | 2250 | 77.088  | 55.8658 | 0.86670 | 1542.087
With tuning:

Table 2.5. 3 DG test results with tuning, five best combinations
NPI | Buses | DG sizes (kW) | PDG total (kW) | PLoss (kW) | QLoss (kVAR) | VSI | PUTILITY (kW) | QUTILITY (kVAR)
0.79612 | 31, 14, 25 | 899.977, 678.183, 449.838 | 2027.998 | 68.4845 | 48.2754 | 0.9273  | 1755.786 | 2348.276
0.78903 | 30, 25, 16 | 955.671, 724.049, 480.026 | 2159.746 | 69.031  | 49.207  | 0.9138  | 1624.328 | 2349.021
0.78657 | 29, 15, 16 | 955.744, 713.188, 477.872 | 2146.805 | 70.181  | 49.479  | 0.9119  | 1638.377 | 2349.480
0.77024 | 11, 31, 4  | 980.000, 731.250, 490.000 | 2201.250 | 72.1742 | 51.4284 | 0.8987  | 1585.924 | 2351.428
0.7625  | 30, 9, 25  | 931.994, 706.110, 465.997 | 2104.102 | 74.52   | 52.297  | 0.88297 | 1685.421 | 2352.298
Voltage profile with 3 DGs (bus: voltage in p.u.)
1: 1.0000   2: 0.9995   3: 0.9980   4: 0.9974   5: 0.9971   6: 0.9942
7: 0.9918   8: 0.9871   9: 0.9861   10: 0.9874  11: 0.9858  12: 0.9861
13: 0.9874  14: 0.9879  15: 0.9865  16: 0.9859  17: 0.9840  18: 0.9835
19: 0.9989  20: 0.9954  21: 0.9947  22: 0.9940  23: 0.9959  24: 0.9922
25: 0.9918  26: 0.9936  27: 0.9930  28: 0.9889  29: 0.9863  30: 0.9862
31: 0.9884  32: 0.9876  33: 0.9873
Table 2.6. Voltage profile with 3 DGs at buses 31, 14, 25

5 DG case
The IVD and IVR values for the best combination are IVD 0.9949 and IVR 0.9852.

Table 2.7. 5 DG test results without tuning, best combinations
NPI | Buses | DG sizes (kW) | PDG total (kW) | PLoss (kW) | QLoss (kVAR) | VSI | PUTILITY (kW) | QUTILITY (kVAR)
0.7877 | 24, 18, 33, 8, 9  | 1000, 250, 750, 300, 500 | 2800 | 74.9616 | 51.5129 | 0.91643 | 989.9644 | 2351.515
0.7847 | 32, 25, 2, 15, 12 | 1000, 250, 750, 300, 500 | 2800 | 75.2615 | 53.1826 | 0.91241 | 990.262  | 2335.184
0.7761 | 3, 12, 13, 32, 31 | 1000, 250, 750, 300, 500 | 2800 | 78.5738 | 54.9075 | 0.89104 | -        | 2354.902
0.7664 | 30, 25, 14, 27, 21 | 1000, 250, 750, 300, 500 | 2800 | 86.984 | 60.672  | 0.88632 | -        | 2360.675
Table 2.8. 5 DG test results with tuning, best combinations
NPI | Buses | DG sizes (kW) | PDG total (kW) | PLoss (kW) | QLoss (kVAR) | VSI | PUTILITY (kW) | QUTILITY (kVAR)
0.79027 | 24, 18, 33, 8, 9  | 868.4808, 222.6871, 671.4525, 264.5321, 438.6601 | 2465.813 | 68.7794 | 47.6779 | 0.92094 | 1317.967 | 2374.678
0.78721 | 32, 25, 2, 15, 12 | 922.604, 324.181, 695.465, 279.598, 461.302     | 2593.151 | 71.703  | 50.63   | 0.91868 | 1193.553 | 2350.631
0.7772  | 3, 12, 13, 32, 31 | 940.3366, 237.7294, 709.5862, 283.8345, 477.8723 | 2650.359 | 71.9391 | -      | -       | -        | 2350.845
0.77432 | 30, 25, 14, 27, 21 | 821.808, 206.4949, 603.998, 244.058, 410.904   | 2287.264 | 72.488  | 51.145  | 0.91399 | 1500.224 | 2351.145
Bus Number 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
Voltage (p.u) 1.0000 1.0001 0.9989 0.9996 1.0007 1.0009 0.9985 0.9939 0.9929 0.9924 0.9925 0.9929 0.9941 0.9946 0.9933 0.9927 0.9908 0.9902 1.0001 1.0012 1.0018 1.0012 0.9961 0.9910 0.9891 1.0007 1.0006 0.9966 0.9940 0.9939 0.9900 0.9892 0.9889
Table 2.9. Voltage profile with 5 DGs at buses 24, 18, 33, 8, 9.
In table 2, the base system load data, losses, voltages, voltage stability index and utility-generated real and reactive power are given. The base-case voltage profile and the corresponding voltage profile plot are given in table 2.1 and fig. 3, and the single-DG case results are given for the five best combinations in NPI priority order. This is the case in which the user wants to connect a single DG unit to the utility in order to reduce losses and to improve the voltage profile and voltage stability index of the distribution system. The voltage improvement and voltage profile plot for this case are shown in table 2.4 and fig. 4. In table 2.4, the 3DG case results are given for the five best combinations in NPI priority order. This is the case in which the user wants to connect 3 DGs with market-available DG capacities to the utility in order to reduce losses and to improve the voltage profile, the voltage stability index and hence the NPI of the distribution system relative to the single-DG case. The voltage improvement and voltage profile plot for this case are shown in table 2.6, and table 2.5 presents the 3DG case results for the five best combinations with tuning of the market-available DG sizes, which yields a reduction in losses and an improvement in the voltage stability index, and hence in the NPI of the distribution system, compared with the case of fixed DG capacities.
Similarly, the 5DG case results for the five best combinations are given in tables 2.7 and 2.8, with market-available DGs and with tuning of the market-available DGs respectively. These cases yield a reduction in losses and an improvement in the voltage profile and voltage stability index of the system, and hence in the NPI.
VI. CONCLUSION
This work presents a method combining a new, fast power flow with genetic algorithms, with the aim of finding the combination of sites within a distribution network for connecting a predefined number of DGs. The network performance index is used to find the best combination of sites within the network: the method evaluates the distribution system performance for the given DG capacities and maximizes the network performance index. The voltage stability index is used to determine the weak branches in the distribution network. Its use would enable a distribution system planner to search a network for the best sites at which to strategically connect a small number of DGs, among a large number of potential combinations, in order to improve distribution system performance. This work concentrated on the technical constraints on DG development, such as voltage limits, thermal limits and especially loss reduction (the impact of DG on losses is an area being extensively researched at present). The method can easily be adapted to cope with variable energy sources.
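As a rough illustration of the search described above, a minimal genetic-algorithm skeleton for DG siting might look like the following. This is a Python sketch, not the authors' implementation: the NPI evaluation, which in this work requires a full power flow, is left as a user-supplied placeholder function, and the selection/crossover/mutation choices here are generic assumptions.

```python
import random

def ga_dg_siting(candidate_buses, n_dg, npi_of, generations=50,
                 pop_size=20, mutation_rate=0.1, seed=0):
    """Search for the n_dg-site combination maximizing a network
    performance index. `candidate_buses` is a list of bus numbers;
    `npi_of(combo)` is a user-supplied function that would run a power
    flow for a tuple of sites and return the NPI (placeholder here)."""
    rng = random.Random(seed)
    # Initial population: random distinct-site combinations.
    pop = [tuple(sorted(rng.sample(candidate_buses, n_dg)))
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=npi_of, reverse=True)
        parents = scored[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            pool = list(set(a) | set(b))          # crossover: mix parent sites
            child = rng.sample(pool, n_dg)
            if rng.random() < mutation_rate:      # mutation: swap in a new site
                child[rng.randrange(n_dg)] = rng.choice(candidate_buses)
            if len(set(child)) != n_dg:           # repair duplicate sites
                child = rng.sample(candidate_buses, n_dg)
            children.append(tuple(sorted(child)))
        pop = parents + children
    return max(pop, key=npi_of)
```

With a toy NPI in place of the power flow, the loop converges toward the best-scoring combination after a few dozen generations.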
REFERENCES
[1] H. B. Puttgen, P. R. MacGregor, F. C. Lambert, "Distributed generation: Semantic hype or the dawn of a new era?", IEEE Power & Energy Magazine, vol. 1, issue 1, pp. 22-29, 2003.
[2] W. El-Khattam, M. M. A. Salama, "Distributed generation technologies, definitions and benefits", Electric Power Systems Research, vol. 71, issue 2, pp. 119-128, October 2004.
[3] Thomas Ackermann, Goran Andersson, Lennart Soder, "Distributed generation: a definition", Electric Power Systems Research, vol. 57, issue 3, pp. 195-204, 20 April 2001.
[4] J. L. Del Monaco, "The role of distributed generation in the critical electric power infrastructure", IEEE Power Engineering Society Winter Meeting 2001, vol. 1, pp. 144-145, 2001.
[5] Xu Ding, A. A. Girgis, "Optimal load shedding strategy in power systems with distributed generation", IEEE Power Engineering Society Winter Meeting 2001, vol. 2, pp. 788-793, 2001.
[6] N. S. Rau and Yih-Heui Wan, "Optimum location of resources in distributed planning", IEEE Transactions on Power Systems, vol. 9, issue 4, pp. 2014-2020, 1994.
[7] J. O. Kim, S. W. Nam, S. K. Park, C. Singh, "Dispersed generation planning using improved Hereford ranch algorithm", Electric Power Systems Research, vol. 47, issue 1, pp. 47-55, October 1998.
[8] T. Griffin, K. Tomsovic, D. Secrest, A. Law, "Placement of dispersed generation systems for reduced losses", Proceedings of the 33rd Annual Hawaii International Conference on System Sciences, IEEE, Jan 2000.
[9] K. Nara, Y. Hayashi, K. Ikeda, T. Ashizawa, "Application of tabu search to optimal placement of distributed generators", IEEE Power Engineering Society Winter Meeting 2001, vol. 2, pp. 918-923, 2001.
[10] Caisheng Wang, M. H. Nehrir, "Analytical approaches for optimal placement of distributed generation sources in power systems", IEEE Transactions on Power Systems, vol. 19, issue 4, pp. 2068-2076, 2004.
[11] T. Gozel, M. H. Hocaoglu, U. Eminoglu, A. Balikci, "Optimal placement and sizing of distributed generation on radial feeder with different static load models", International Conference on Future Power Systems, IEEE, pp. 1-6, Nov. 2005.
[12] Naresh Acharya, Pukar Mahat, N. Mithulananthan, "An analytical approach for DG allocation in primary distribution network", Electrical Power and Energy Systems, vol. 28, pp. 669-678, Feb. 2006.
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE) ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 31-37 www.iosrjournals.org
Depth Image Processing and Operator Imitation Using a Custom Made Semi Humanoid
Ghanshyam Bhutra1, Piyush Routray2, Subrat Rath3, Sankalp Mohanty4
1(Dept. of EEE, SoA University, India), 2(Dept. of E&I, SoA University, India), 3(Dept. of EEE, SoA University, India), 4(Dept. of EEE, SoA University, India)
ABSTRACT: The motivation for creating humanoids arises from diverse socio-economic interests, ranging from restoring the day-to-day activities of the differently abled to assisting humans in nearly inaccessible areas such as mines, radiation sites and military projects. Recent developments in the field of image processing, enabling easy depth imaging and skeleton tracking, have greatly increased the potential for accurately inferring the signals of a human operator. The time constraints on various jobs make them heavily dependent on real-time data processing and execution, and acceptance by the industrial community depends on the accuracy of the complete system. The objective is to develop a proof-based, accurate system to assist human operation in potentially inaccessible areas. The system has to analyze the image feed from the camera and deduce the gestures of the operator. It then communicates wirelessly with the self-designed semi-humanoid, which in turn imitates the operator with maximum accuracy.
Keywords: Humanoid Robot, MATLAB, Microsoft Kinect, Servo Motor, Skeleton Tracking.
I. INTRODUCTION
Humanoid robots seem, in the future, to be the best alternative to human labor. Communication in a workplace environment depends greatly on body movements as well as on direct audio signals. While verbal signals give a brief idea of the details of the work to be done, it is the physical demonstration that tends to have the maximum impact. Much work has progressively been done in the fields of gesture recognition and implementation [1]-[3]. However, with the advent of comparatively cheap and easy-to-use depth imaging techniques such as the Microsoft Kinect, the accuracy of gesture recognition can be increased considerably. Perfect imitation of a human by a robot requires tremendous calculation and the harmonious working of many different aspects of the system. A common form of mobile robot today is semi-autonomous, where the robot acts partially on its own but there is always a human in the control loop through a link, i.e. telerobotics. In this technique there are no sensors on the bot that it might use to take decisions. One possible method of controlling such a bot is the amalgamation of depth imaging and skeletal tracking of the human operator. To extend the bot's area of application, it has to be wirelessly controlled and equipped with fine maneuvering techniques. The signal transmission demands many-to-many communication with optimum use of bandwidth and data protection. If achieved with maximum accuracy, this technology stands to be of great use in outer-space exploration, remote surgery, hostile industrial and military conditions, security and, of course, gaming and recreational activities.
II. APPROACH
The approach to the problem was made on a calculated basis by first simulating in software and then implementing in hardware. Broadly, the approach can be divided into four stages dealing with different work; successive amalgamation of the acquired results leads to the final goal. The first step is to model the humanoid. It is followed by the design of the control architecture, which is the suite of control required for behavioral synchronization between the human and the BOT. The next step caters to the acquisition and processing of the image feed from the Kinect sensor; the controlling parameters are determined here. The final step involves wirelessly transmitting the inferred values of the controlling parameters to the BOT. The microcontrollers (MCUs) used in the robot decode these parameters and direct the robot to react accordingly. The approach is discussed thoroughly in the following paragraphs.
2.1 Hardware Modeling of the Humanoid
The software design was readied with 6 degrees of freedom. The model was divided into various links, and every link was modeled individually in the SolidWorks software. All the links were mated, and ADAMS was used to test the model under real physics by simultaneously solving equations of kinematics, statics, quasi-statics and dynamics for a multi-body system. For simulation purposes, the virtual model is designed on the principles of kinematic modeling by the Denavit-Hartenberg methodology for robotic manipulators [4]. There are in total 18 degrees of freedom (DOF): each arm has 5 DOF and each leg has 4 DOF. (Fig. 1)
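The Denavit-Hartenberg modeling mentioned above can be sketched as follows. This is an illustrative Python fragment of the standard D-H link transform and a forward-kinematics chain, not the paper's ADAMS/Simulink model; the link parameters used in any example call are hypothetical, not the robot's actual ones.

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform (4x4, row-major)
    for one link: rotate theta about z, translate d along z,
    translate a along x, rotate alpha about x."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def mat_mul(A, B):
    """4x4 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(dh_params):
    """Chain the link transforms; returns the tool-frame pose matrix.
    dh_params is a list of (theta, d, a, alpha) tuples, one per link."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for p in dh_params:
        T = mat_mul(T, dh_transform(*p))
    return T
```

For instance, two planar links of unit length with zero joint angles place the tool frame at x = 2 in the base frame.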
Figure 1. ADAMS plant model of the humanoid for simulation.
The destination coordinates are fed into the ADAMS plant, which has been configured to take input in the form of angular velocities and to output the current joint angles. The ADAMS plant is executed in the MATLAB (Simulink) environment with a function file to control it. The stability of the humanoid was inspired by previous work in this regard [5]-[8]. There are two separate methods to control the motion of a humanoid: DC (Direct Control) and CC (Command Control). Direct Control performs the conversion of the 3D joint coordinates of the user to servo angles, whereas Command Control converts the information into a single command for the robot's understanding; the information sent is the destination position of the tool frame, and the calculation of joint angles is left to the model. To gain maneuvering options in terms of speed and simplicity for most cases, biped locomotion was voted against in favor of a wheeled maneuvering system. The next step was to construct the hardware model. The material chosen for construction was aluminium, and 6 servo motors rated 60 g / 17 kg / 0.14 sec were used. For the base of the BOT, four motors were used, two each rated 12 V, 250 rpm, 60 kg-cm and 12 V, 250 rpm, 30 kg-cm. To regulate the speed of the driving motors, PWM was executed with motor drivers. The gross weight of the BOT is approximately 10 kg, including the weight of the BECs (Battery Eliminator Circuits) and the power supply. LiPo batteries rated 11.1 V, 6000 mAh, 25~50C (qty. 2) and 11.1 V, 850 mAh, 20C (qty. 4) have been used for the power supply. The servos operate in the range 4.8 V~6.0 V, and BECs were therefore used to convert 11.1 V to 6.0 V. The completed structure of the robot is shown below. (Fig. 2)
2.2 Control Architecture
As shown above (fig. 3), for convenience this has been described in two parts: the AVR controller and ZigBee communication. The AVR microcontroller used is the ATmega32, an 8-bit MCU. It controls the servo motors and the driving motors. There are in total 10 motors to be controlled (6 servo + 4 shunt). Four MCUs have been used, of which three control the servos and the remaining one controls the driving motors. Each MCU has been interfaced with an LCD to continuously display its status. ZigBee is the standard that defines a set of communication protocols for low-data-rate, very low power applications. We use XBee RF modules for communication. These modules interface to a host device through a logic-level asynchronous serial port. The link between the XBee and the ATmega32 takes place via UART (Universal Asynchronous Receiver/Transmitter); the data format and transmission speeds are configurable. The AVR CPU connects to the AVR UART via six registers: UDR, UCSRA, UCSRB, UCSRC, UBRRH and UBRRL.
2.3 Depth Image Processing & Gesture Interpretation
Depth imaging is the central point of this publication, as it can be considered a giant leap in terms of boosting accuracy during operator imitation. The sensor used is the Kinect, a motion-sensing input device by Microsoft for the XBOX 360 video game console. It is a horizontal bar connected to a small base with a motorized pivot and is designed to be positioned lengthwise. The device features an "RGB camera, depth sensor and multi-array microphone running proprietary software", which provide full-body 3D motion capture, facial recognition and voice recognition capabilities. The depth sensor consists of an infrared laser projector combined with a monochrome CMOS sensor, which captures video data in 3D under any ambient light conditions.
The sensing range of the depth sensor is adjustable, and the Kinect software is capable of automatically calibrating the sensor based on the physical environment, accommodating the presence of the user.
Figure 4. Kinect sensor.
Third-party software and open-source drivers were used to gain compatibility with MATLAB. At the heart of the Kinect lies a time-of-flight camera that measures the distance of any given point from the sensor using the time taken by near-IR light to reflect from the object. In addition, an IR grid is projected across the scene; the deformation of the grid is used to model surface curvature. Cues from the RGB and depth streams from the sensor are used to fit a stick-skeleton model to the human body. The reference frame for the Kinect sensor is a predefined location approximately at the center of the view: the horizontal right-hand side of the observation denotes the x axis and the transverse axis denotes the y axis, while the z axis is the linear distance between the Kinect and the user (the farther the user, the higher the z value). We monitor the real-time positions and orientations of 15 different data points on the user's body, namely: Head, Neck, Left Shoulder, Left Elbow, Left Hand, Right Shoulder, Right Elbow, Right Hand, Torso, Left Hip, Left Knee, Left Foot, Right Hip, Right Knee and Right Foot. The sensor generates a 2D matrix of dimension 225x7 with the above-mentioned data points for a single person. The seven columns are User ID, Tracking Confidence, x, y, z (coordinates in mm) and X, Y (the pixel values of the corresponding data point). Using the raw data points from the Kinect, vectors are constructed, and from these the angles between different links and the lengths of the links are found. The skeletal map is obtained with user-specified link and node colours, as shown in figs. 5a and 5b.
Figure 5a. Skeletal mapping in low-light conditions. Figure 5b. Skeletal mapping in sufficient lighting.
The data points are then used to calculate and derive values ranging from link lengths to angular velocities, as discussed later.
2.4 Data Transmission and Robot Control
As mentioned earlier, the technology defined by the ZigBee specification is intended to be simpler and less expensive than other WPANs, such as Bluetooth. ZigBee is targeted at radio-frequency (RF) applications that require a low data rate, long battery life and secure networking. One of the XBee pair is connected to the PC using an XBee USB adapter board, and the other is interfaced to the microcontroller. The data enters the module's UART from the serial port of the PC through the DI pin as an asynchronous serial signal. The signal idles high when no data is being transmitted. Each data byte consists of a start bit (low), 8 data bits (least significant first) and a stop bit. With the information to be sent and the method of communication ready, the next step is to work out the memory addressing scheme so that the transmitted data stays in synchronization with the data received. To address the 10 actuated joints over a single duplex channel of 8-bit word length, an addressing strategy was needed. The number of bits required to address 10 individual motors is 4, and transmitting 8 bits of data to each motor would make the word length 12 bits, which is not available. So, instead of delivering the word in a single transmission, the information is broken into 2 nibbles of 4 bits each, and 2 consecutive transmissions are required to successfully transmit the 8 bits of data. With this split, 1 bit would have to be assigned to distinguish between the higher and the lower nibble, which again creates a problem, as the word length of a transmission is limited to 8 bits.
Two separate 4-bit addresses are therefore used for each motor, one for the higher and one for the lower nibble of data. This scheme allows a maximum of 256 different addressees and transmits 8 bits of information to each one in 2 consecutive transmissions. The analogy, for better understanding, is presented in fig. 6.
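The nibble-splitting scheme above can be sketched as follows. This is a Python illustration of the framing idea only, not the AVR firmware; the particular address assignments in the example are hypothetical.

```python
def encode_servo_value(addr_hi, addr_lo, value):
    """Split an 8-bit servo value into two 8-bit frames: each frame
    carries a 4-bit address in its high nibble and 4 data bits in its
    low nibble, so the value needs two consecutive transmissions."""
    assert 0 <= value <= 0xFF and 0 <= addr_hi <= 0xF and 0 <= addr_lo <= 0xF
    hi, lo = (value >> 4) & 0xF, value & 0xF
    return [(addr_hi << 4) | hi, (addr_lo << 4) | lo]

def decode_frames(frames, registers):
    """Receiver side: route each 4-bit payload to its address slot."""
    for frame in frames:
        registers[(frame >> 4) & 0xF] = frame & 0xF

def reassemble(registers, addr_hi, addr_lo):
    """Rebuild the 8-bit value once both nibbles have arrived."""
    return (registers[addr_hi] << 4) | registers[addr_lo]
```

For example, sending the value 0xB7 to the (hypothetical) address pair (2, 3) produces the frames 0x2B and 0x37, and the receiver reassembles 0xB7 from its register slots.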
Figure 6. Diagrammatic analogy of the communication method.
After reception of the control signal at the remotely located manipulators, the microcontrollers controlling the servos decode the servo angles from the control signal and implement them. Controlling a servo with a microcontroller requires no external driver such as an H-bridge; only a control signal needs to be generated to position the servo at any fixed angle. The control signal is 50 Hz (i.e. the period is 20 ms), and the width of the positive pulse controls the angle. This implementation of the servo angle completes one cycle of the process, which ends as the Kinect sensor begins feeding the next joint information to the control system. In this way, behavioral synchronization between the operator and the BOT is achieved.
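The angle-to-pulse-width mapping can be sketched as below, in Python for illustration. The 1-2 ms pulse range and 180-degree mechanical span are typical hobby-servo values assumed here (the paper only states the 50 Hz / 20 ms frame), and the 0-255 quantization follows the 8-bit values sent over the radio link.

```python
def servo_pulse_ms(angle_deg, min_ms=1.0, max_ms=2.0, max_angle=180.0):
    """Positive pulse width for a servo on a 50 Hz (20 ms) frame.
    The 1-2 ms range and 180-degree span are assumed typical values."""
    angle = max(0.0, min(angle_deg, max_angle))
    return min_ms + (max_ms - min_ms) * angle / max_angle

def servo_byte(angle_deg, max_angle=360.0):
    """Quantize a joint angle (0-360 degrees) to the 0-255 command
    range carried by the 8-bit radio word."""
    angle = max(0.0, min(angle_deg, max_angle))
    return round(angle * 255 / max_angle)
```

With these assumptions, 0 degrees maps to a 1.0 ms pulse, 90 degrees to 1.5 ms and 180 degrees to 2.0 ms.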
III. IMPLEMENTATION
The dynamic process of behavioral synchronization through a humanoid begins with the derivation of real-time information about the joints of the user, be they prismatic (linear) or revolute (rotational). With a degree of freedom as high as 10, the task becomes extremely tedious and requires tremendous processing power. One of the major problems in monitoring multiple joints is the definition of the reference frame with respect to which information about the other frames can be derived. Even though the reference for individual joints may vary, it is convenient to process all the joints with a single reference point. The Kinect sensor provides this information to the central control block using a combination of infrared imaging and an RGB camera: the infrared laser creates a depth profile of the area in view, and the RGB camera returns an RGB matrix. The sensor generates a 2D matrix of dimension 225 x 7 with the above-mentioned data points for a single person, with a maximum tracking capability of 15 people. The seven columns are User ID, Tracking Confidence, x, y, z (coordinates in mm) and X, Y (the pixel values of the corresponding data point). Using this information we calculate the joint angles in different planes between different links, as well as the link lengths between data points. Raw joint information is not enough for further operations, so in the next step the vectors between joints are constructed. As an example, the construction of the vectors for the right arm is given below. Connecting the right shoulder to the right elbow, and the right elbow to the palm, determines the angle for the right shoulder:

Right shoulder = (x_rs, y_rs, z_rs)
Right elbow = (x_re, y_re, z_re)
Right palm = (x_rp, y_rp, z_rp)
Vect1 = (x_re - x_rs, y_re - y_rs, z_re - z_rs)
Vect2 = (x_rp - x_re, y_rp - y_re, z_rp - z_re)

The angle between the vectors is then calculated using a simple geometrical operation, the angle between two vectors:

Joint angle = cos^-1( (Vect1 . Vect2) / (|Vect1| |Vect2|) )
Using the x, y and z coordinates derived from the Kinect sensor and the distance formula, the link lengths are found:

length = sqrt( (x_rs - x_re)^2 + (y_rs - y_re)^2 + (z_rs - z_re)^2 )
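The angle and length computations can be sketched as follows. This is an illustrative Python fragment (the paper's processing runs in MATLAB); the point coordinates would come from the Kinect's 225x7 data matrix.

```python
import math

def vec(a, b):
    """Vector from point a to point b; points are (x, y, z) in mm."""
    return (b[0] - a[0], b[1] - a[1], b[2] - a[2])

def joint_angle(p1, p2, p3):
    """Angle in degrees between the links p1->p2 and p2->p3,
    e.g. right shoulder, right elbow, right palm."""
    v1, v2 = vec(p1, p2), vec(p2, p3)
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cosang = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding noise
    return math.degrees(math.acos(cosang))

def link_length(p, q):
    """Euclidean distance between two Kinect data points."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(p, q)))
```

Collinear shoulder-elbow-palm points give a joint angle of 0 degrees, and a perpendicular forearm gives 90 degrees.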
After calculating the joint angles and link lengths, the next step in behavioral synchronization is to realize these real-time parameters in the desired form. This step includes conversion of the parameters to actual servo angles, which is necessary for tuning between the real joint angle values and the robot servo values. Another reason for this step is that real human joint angles range over 0~360 degrees, while the robot servo motors accept values from 0~255. Now that the destination coordinates of the individual tool frames for the humanoid robot have been derived, two simultaneous steps follow: embedding the gesture movements into the virtual (software) and real (hardware) models. Since each arm has 3 DOF, the tool frame coordinates x, y and z depend on the three angles theta1, theta2 and theta3, which can be expressed as:

[x, y, z]^T = T [theta1, theta2, theta3]^T

Here T is the transformation matrix that relates the coordinates to the joint angles.
Differentiating, the Cartesian velocities are related to the joint angular velocities by the Jacobian:

[vx, vy, vz]^T = J [w1, w2, w3]^T

J denotes the Jacobian matrix; vx, vy and vz are the products of the error signal along the corresponding axis and the proportionality constant k_p, and w1, w2 and w3 are the joint angular velocities:

vx = (x_destination - x) k_p
vy = (y_destination - y) k_p
vz = (z_destination - z) k_p

To evaluate the angular velocities, we use the expression:

[w1, w2, w3]^T = J^-1 [vx, vy, vz]^T

These angular velocities are then converted as discussed earlier and transmitted to the robot. The servomotors attached at the designated joints thus move accordingly, and operator imitation is achieved.
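A single proportional control step of this loop can be sketched as below, assuming a known, invertible 3x3 Jacobian. This is a Python illustration only; the Jacobian entries in any example are placeholders, not the robot's actual kinematics, and the inverse is taken by Cramer's rule for self-containment.

```python
def det3(M):
    """Determinant of a 3x3 matrix."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def solve3(J, v):
    """Solve J * w = v for w by Cramer's rule (J is the 3x3 Jacobian),
    i.e. w = J^-1 * v without forming the inverse explicitly."""
    d = det3(J)
    w = []
    for i in range(3):
        Ji = [row[:] for row in J]
        for r in range(3):
            Ji[r][i] = v[r]
        w.append(det3(Ji) / d)
    return w

def control_step(pos, dest, J, kp, dt, angles):
    """One proportional step: Cartesian rates from the position error,
    joint rates from the inverse Jacobian, then integrate the angles."""
    v = [kp * (dest[i] - pos[i]) for i in range(3)]   # vx = (x_dest - x) * k_p
    w = solve3(J, v)                                  # w = J^-1 * v
    return [angles[i] + w[i] * dt for i in range(3)]
```

With an identity Jacobian and unit gain, one unit-time step moves the joint angles by exactly the Cartesian error, which makes the sketch easy to sanity-check.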
IV. CONCLUSION
The implemented architecture offers a simple, low-cost and robust solution for controlling a semi-humanoid with human gestures. The complete setup requires the synchronization of software and hardware, and the problems mentioned above had to be overcome in setting it up. This project can serve as a development platform for interaction between human and BOT via gestures, and with better-quality components more sophistication can be built in. Biped motion and a holding mechanism for the hands are presently being worked on: legged motion is expected to enable the robot to maneuver over uneven terrain more easily than wheels, and the picking/holding mechanism would increase its applicability in industry manifold. Thus the greater goal of attaining better man-machine teams for dynamic environments can be realized with perfection.
ACKNOWLEDGEMENTS
We acknowledge the research facilities extended to us at Centre for Artificial Intelligence and Robotics (DRDO), Bengaluru for carrying out the simulations and learning other mechanical nuances of humanoids. We also thank faculty advisor of the ITER Robotics Club, Asst. Prof Farida A Ali for her continuous guidance and support over the period of research.
REFERENCES
[1] David Kortenkamp, Eric Huber and R. Peter Bonasso, "Recognizing and Interpreting Gestures on a Mobile Robot", Thirteenth National Conference on Artificial Intelligence (AAAI), 1996.
[2] Stefan Waldherr, Roseli Romero and Sebastian Thrun, "A Gesture Based Interface for Human-Robot Interaction", Kluwer Academic Publishers, 2000.
[3] Hideaki Kuzuoka, Shin'ya Oyama, Keiichi Yamazaki, Kenji Suzuki and Mamoru Mitsuishi, "GestureMan: A Mobile Robot that Embodies a Remote Instructor's Actions", Proceedings of CSCW'00, December 2-6, 2000.
[4] John J. Craig, Introduction to Robotics, pp. 70-74, 3rd Edition, Pearson Education International, 2005.
[5] Vitor M. F. Santos and Filipe M. T. Silva, "Engineering Solutions to Build an Inexpensive Humanoid Robot Based on a Distributed Control Architecture", 5th IEEE-RAS International Conference on Humanoid Robots, 2005.
[6] V. Santos, F. Silva, "Development of a Low Cost Humanoid Robot: Components and Technological Solutions", Proc. 8th International Conference on Climbing and Walking Robots, CLAWAR 2005, London, UK, 2005.
[7] J.-H. Kim et al., "Humanoid Robot HanSaRam: Recent Progress and Developments", Journal of Computational Intelligence, vol. 8, no. 1, pp. 45-55, 2004.
[8] Jung-Hoon Kim and Jun-Ho Oh, "Realization of dynamic walking for the humanoid robot platform KHR-1", Advanced Robotics, vol. 18, no. 7, pp. 749-768, 2004.
[9] K. Hirai et al., "The Development of Honda Humanoid Robot", Proc. IEEE Int. Conf. on R&A, pp. 1321-1326, 1998.
[10] [Kinect] Use With Matlab, www.timzaman.nl
[11] KINECTHACKS, www.kinecthacks.com
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE) ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 38-48 www.iosrjournals.org
DVR Based Compensation of Voltage Sag due to Variations of Load: A Study on Analysis of Active Power
Anita Pakharia1, Manoj Gupta2
1Assistant Professor, Department of Electrical Engineering, Global College of Technology, Jaipur, Rajasthan, India; 2Professor, Poornima College of Engineering, Jaipur, Rajasthan, India
ABSTRACT: The Dynamic Voltage Restorer (DVR) has become very popular in recent years for the compensation of voltage sag and swell. Voltage sag and swell are severe power quality problems for industrial customers and need urgent attention. Among the various compensation methods, one of the most popular is the Dynamic Voltage Restorer, which is used in both low-voltage and medium-voltage applications, and it is the main focus of this work. The DVR compensates voltage sag by injecting voltage as well as power into the system; its compensation capability is mainly influenced by the load conditions and the voltage dip to be compensated. In this work the Dynamic Voltage Restorer is designed and simulated with the help of MATLAB Simulink for sag compensation. An efficient control technique (Park's transformation) is used for the mitigation of voltage sag, through which optimized performance of the DVR is obtained. The performance of the DVR is analyzed under various conditions of load active and reactive power at a particular level of DC energy storage; the load parameters are varied and the results analyzed on the basis of the output voltages.
Keywords: Structure and control technique for DVR, DVR test system, power quality.
I. INTRODUCTION
Power quality (PQ) has attained considerable attention in the last decade due to the large penetration of power-electronics-based and microprocessor-controlled loads. On the one hand these devices introduce power quality problems, and on the other hand they mal-operate due to the induced power quality problems. PQ disturbances cover a broad frequency range with significantly different magnitude variations and can be non-stationary; appropriate techniques are therefore required to compensate these events/disturbances [1]. The growing concern for power quality has led to the development of a variety of devices designed to mitigate power disturbances, primarily voltage sag and swell. Voltage sag and swell are the most widespread power quality issues affecting distribution systems, especially industries, where the associated losses can reach very high values; even a short and shallow voltage sag can produce a dropout of a whole industry [2]. In general, voltage sag and swell can be considered the origin of 10 to 90% of power quality problems. The main causes of voltage sag are faults and short circuits, lightning strokes and inrush currents; swell can occur due to a single line-to-ground fault on the system and can also be generated by sudden load decreases, which can result in a temporary voltage rise on the unfaulted phases [1] [2]. Voltage sag and swell are severe problems for an industrial customer and need urgent attention. Among several devices, the Dynamic Voltage Restorer (DVR) is a novel custom power device proposed to compensate for voltage disturbances in a distribution system. It is the most efficient and effective power device used in power distribution networks; its appeal includes lower cost, smaller size and fast dynamic response to the disturbance. Our main focus in this work is the Dynamic Voltage Restorer [3].
This device uses a series-connected VSC to inject a controlled voltage (controlled amplitude and phase angle) between the Point of Common Coupling (PCC) and the load [3] [4]. For proper voltage sag and swell compensation, a suitable and fast control scheme for inverter switching must be derived. Here, we develop the simulation model using MATLAB Simulink and discuss simulation results for different load conditions.
II. DVR
The DVR is connected in the utility primary distribution feeder. This location of the DVR protects a certain group of customers from faults on the adjacent feeder, as shown in fig. 1. The point of common coupling (PCC) feeds
the load and the fault. The voltage sag in the system is calculated using the voltage divider rule [5]. The general configuration of the DVR consists of: (a) series injection transformer, (b) energy storage unit, (c) inverter circuit, (d) filter unit, (e) DC charging circuit, and (f) a control and protection system. The energy storage device is the most expensive component of the DVR, so it is essential to use a mitigation strategy with which the DVR can operate with minimum energy storage. Different voltage sag mitigation strategies, including pre-sag, in-phase and phase-advance compensation, are described in this work. Injection of active power by the DVR is related to energy storage: a DVR injecting a large amount of active power requires a bigger energy store, leading to a more expensive scheme. Therefore, the energy storage can be optimized by optimizing the DVR active power injection. In the case of zero active power injection, the DVR injects reactive power only to compensate for the voltage sag [6] [7].
Figure 1. Location of the DVR in the system: AC source feeding a transmission line, step-down transformers, distribution line, LOAD1 and the sensitive load.
Control of the DVR is performed using the d-q coordinate system. This transformation yields DC components, which are much simpler to work with than AC components. The dqo transformation, or Park's transformation, is used for control of the DVR. The dqo method gives the sag depth and phase-shift information together with the start and end times. The quantities are expressed as instantaneous space vectors. First, the voltages are converted from the a-b-c reference frame to the d-q-o reference frame [8]:

Vd = (2/3) [Va cos(theta) + Vb cos(theta - 2*pi/3) + Vc cos(theta + 2*pi/3)]
Vq = (2/3) [Va sin(theta) + Vb sin(theta - 2*pi/3) + Vc sin(theta + 2*pi/3)]      (1)
Vo = (1/3) (Va + Vb + Vc)
Above equation 1 defines the transformation from three phase system a, b, c to dqo stationary frame. In this transformation, phase A is aligned to the d axis that is in quadrature with the q-axis [9]. The theta () is defined by the angle between phases A to the d-axis. The error signal is used as a modulation signal that allows generating a commutation pattern for the power switches (IGBTs) constituting the voltage source converter. The commutation pattern is generated by means of the sinusoidal pulse width modulation (SPWM) technique, voltages are controlled through the modulation [8] [9].
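A minimal numerical sketch of the abc-to-dq0 transformation, for illustration only (the paper's implementation uses the Simulink abc_to_dq0 block):

```python
import math

def abc_to_dq0(va, vb, vc, theta):
    """Park (dq0) transformation: projects three phase quantities onto
    a rotating d-q frame plus a zero-sequence component."""
    k = 2.0 / 3.0
    vd = k * (va * math.cos(theta)
              + vb * math.cos(theta - 2.0 * math.pi / 3.0)
              + vc * math.cos(theta + 2.0 * math.pi / 3.0))
    vq = k * (va * math.sin(theta)
              + vb * math.sin(theta - 2.0 * math.pi / 3.0)
              + vc * math.sin(theta + 2.0 * math.pi / 3.0))
    v0 = k * 0.5 * (va + vb + vc)
    return vd, vq, v0

# A balanced three-phase set maps to constant DC values (vd = 1, vq = 0,
# v0 = 0), which is why control in the d-q frame is simpler than in abc.
theta = 0.7
vd, vq, v0 = abc_to_dq0(math.cos(theta),
                        math.cos(theta - 2.0 * math.pi / 3.0),
                        math.cos(theta + 2.0 * math.pi / 3.0),
                        theta)
```

During a sag, the drop appears as a DC step in vd, which the controller can compare directly against a DC reference.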
III.
The electrical circuit model of the DVR test system is shown in fig. 2, and the system parameters are listed in table 1. A voltage sag is created at the load terminals via a three-phase fault. The load voltage is sensed and passed through a sequence analyzer [10]; its magnitude is compared with the reference voltage. The MATLAB simulation model of the DVR is shown in fig. 3, which is displayed at the end of this paper. The system comprises a 15 kV, 50 Hz generator feeding transmission lines through a 3-winding transformer connected in Y/Δ/Δ, 15/115/11 kV [10] [11].
[Fig. 2. Electrical circuit model of the DVR test system: source, three-phase fault, series injection transformer, IGBT inverter fed from DC storage, and a control loop in which the positive-sequence magnitude of the load voltage Vin is compared with Vref and drives the inverter pulses through a PLL.]
TABLE 1 SYSTEM PARAMETERS
S.No. | System Quantity | Rating
1. | Main Supply Voltage Per Phase | 15 kV
2. | Line Impedance | Ls = 0.006 H, Rs = 0.002 Ω
3. | Series Transformer Turns Ratio | 1:1
4. | Filter Inductance | 1 mH
5. | Filter Capacitance | 0.50 μF
6. | Load Resistance | 40 Ω
7. | Load Inductance | 0.05 H
8. | Line Frequency | 50 Hz
9. | Inverter Specifications | IGBT based, 3 arms, 12 pulse, carrier frequency = 1024 Hz, sample time = 0.5 s
Here, the outputs of a three-phase half-bridge inverter are connected to the utility supply series transformer. Once a voltage disturbance occurs, with the aid of the dq0 transformation based control scheme (Park's transformation), the inverter output can be steered in phase with the incoming ac source while the load voltage is maintained constant. As for the filtering scheme of the proposed method, the inverter output is filtered by capacitors and inductors [11] [12].
IV.
SIMULATION RESULTS
The Dynamic Voltage Restorer is simulated using MATLAB SIMULINK and the results are analyzed on the basis of the output voltage. Various cases of different active power of the load at different dc energy storage are considered to study the impact on the sag waveform and the compensated waveform, as shown in figs. 4 to 15. These cases are listed in table 2 and discussed below.
Case I: A three-phase fault is created via a fault resistance of 0.55 Ω; load 1 is 5 kW, 100 VAR and load 2 is 10 kW, 100 VAR, which results in a voltage sag of 10.07 %. The transition time for the fault is from 0.1 s to 0.14 s, as shown in fig. 4. Figs. 5 and 6 show the voltage injected by the DVR and the corresponding load voltage. The simulation results and DVR performance in the presence of dc energy storage reveal that 99.43 % of the sag is compensated and a deviation of 0.57 % from the three-phase source voltage is attained with 600 V of dc energy storage.
Fig. 4. Three phase voltage sag at load 5 kW, 100 VAR: (a) Source voltage (b) RMS value of source voltage [waveform plots omitted]
Fig. 5. Three phase injected voltage at load 5 kW, 100 VAR: (a) Injected voltage (b) RMS value of injected voltage [waveform plots omitted]
Fig. 6. Three phase compensated voltage at load 5 kW, 100 VAR: (a) Load (improved) voltage (b) RMS value of load (improved) voltage [waveform plots omitted]
Case II: A three-phase fault is created via a fault resistance of 0.55 Ω; load 1 is 10 kW, 100 VAR and load 2 is 10 kW, 100 VAR, which results in a voltage sag of 28 %. The transition time for the fault is from 0.1 s to 0.14 s, as shown in fig. 7. Figs. 8 and 9 show the voltage injected by the DVR and the corresponding load voltage. The simulation results and DVR performance in the presence of dc energy storage reveal that 98.46 % of the sag is compensated and a deviation of 1.54 % from the three-phase source voltage is attained with 3.1 kV of dc energy storage.
Fig. 7. Three phase voltage sag at load 10 kW, 100 VAR: (a) Source voltage (b) RMS value of source voltage [waveform plots omitted]
Fig. 8. Three phase injected voltage at load 10 kW, 100 VAR: (a) Injected voltage (b) RMS value of injected voltage [waveform plots omitted]
Fig. 9. Three phase compensated voltage at load 10 kW, 100 VAR: (a) Load (improved) voltage (b) RMS value of load (improved) voltage [waveform plots omitted]
Case III: A three-phase fault is created via a fault resistance of 0.55 Ω; load 1 is 50 kW, 100 VAR and load 2 is 10 kW, 100 VAR, which results in a voltage sag of 80 %. The transition time for the fault is from 0.1 s to 0.14 s, as shown in fig. 10. Figs. 11 and 12 show the voltage injected by the DVR and the corresponding load voltage. The simulation results and DVR performance in the presence of dc energy storage reveal that 97.96 % of the sag is compensated and a deviation of 2.03 % from the three-phase source voltage is attained with 8.5 kV of dc energy storage.
Fig. 10. Three phase voltage sag at load 50 kW, 100 VAR: (a) Source voltage (b) RMS value of source voltage [waveform plots omitted]
Fig. 11. Three phase injected voltage at load 50 kW, 100 VAR: (a) Injected voltage (b) RMS value of injected voltage [waveform plots omitted]
Fig. 12. Three phase compensated voltage at load 50 kW, 100 VAR: (a) Load (improved) voltage (b) RMS value of load (improved) voltage [waveform plots omitted]
Case IV: A three-phase fault is created via a fault resistance of 0.55 Ω; load 1 is 1 MW, 100 VAR and load 2 is 10 kW, 100 VAR, which results in a voltage sag of 97.12 %. The transition time for the fault is from 0.1 s to 0.14 s, as shown in fig. 13. Figs. 14 and 15 show the voltage injected by the DVR and the corresponding load voltage. The simulation results and DVR performance in the presence of dc energy storage reveal that 99.39 % of the sag is compensated and a deviation of 0.61 % from the three-phase source voltage is attained with 13 kV of dc energy storage.
Fig. 13. Three phase voltage sag at load 1 MW, 100 VAR: (a) Source voltage (b) RMS value of source voltage [waveform plots omitted]
Fig. 14. Three phase injected voltage at load 1 MW, 100 VAR: (a) Injected voltage (b) RMS value of injected voltage [waveform plots omitted]
Fig. 15. Three phase compensated voltage at load 1 MW, 100 VAR: (a) Load (improved) voltage (b) RMS value of load (improved) voltage [waveform plots omitted]
TABLE 2 VARIATIONS WITH PARAMETERS OF ACTIVE POWER OF LOAD
Case | Load 1 | Load 2
I | 5 kW, 100 VAR | 10 kW, 100 VAR
II | 10 kW, 100 VAR | 10 kW, 100 VAR
III | 50 kW, 100 VAR | 10 kW, 100 VAR
IV | 1 MW, 100 VAR | 10 kW, 100 VAR
Hence, the simulation results show that by increasing the dc storage the effect of the voltage sag on the output voltage is decreased. As can be seen from table 2, with an increment in load 1 while keeping load 2 constant, the voltage sag continuously increases; it can be compensated by a proportional increase in the dc energy storage.
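The percentages quoted in the case studies can be reproduced from RMS readings. A short sketch, under the assumption that "x % of sag is compensated" means the load voltage is restored to x % of nominal (the paper does not define the metric explicitly); the RMS values below are hypothetical, back-computed from the Case I figures:

```python
def sag_percent(v_nominal, v_sag):
    """Depth of the sag as a percentage of the nominal (pre-fault) RMS voltage."""
    return 100.0 * (v_nominal - v_sag) / v_nominal

def restored_percent(v_nominal, v_restored):
    """Assumed reading of 'x % of sag compensated': the restored load
    voltage as a percentage of nominal; 'deviation' is the remainder."""
    return 100.0 * v_restored / v_nominal

# Hypothetical RMS values consistent with Case I (15 kV nominal):
sag = sag_percent(15000.0, 13489.5)            # sag depth, about 10.07 %
restored = restored_percent(15000.0, 14914.5)  # about 99.43 %
deviation = 100.0 - restored                   # about 0.57 %
```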
V.
CONCLUSION
The foregoing analysis shows the impact of the DVR and the dc energy storage on voltage sag compensation. Considering different active powers of the load with varying dc energy storage shows that with increasing load the voltage sag increases, and it can be compensated by adjusting the value of the dc energy storage. The simulation results are compared on the basis of the output voltages, and the output waveforms show that the DVR is fairly efficient for compensation of sag and swell. Further dimensions can be added to this work by analyzing other power quality disturbances such as voltage swell, harmonics, etc. Various sag compensation cases are discussed while varying the active power of the load.
[Fig. 3. MATLAB Simulink model of the DVR ("sag compansator"): a three-phase source feeds a three-phase series RLC load and a three-phase parallel RLC load through a three-phase breaker; a DC voltage source supplies the IGBT inverter, gated by a PWM generator, with abc_to_dq0 transformation blocks, RMS measurement blocks, and scopes recording the sag and improved waveforms.]
REFERENCES
[1] Mladen Kezunovic, "A Novel Software Implementation Concept for Power Quality Study," IEEE Transactions on Power Delivery, vol. 17, no. 2, April 2002.
[2] Boris Bizjak, Peter Planini, "Classification of Power Disturbances using Fuzzy Logic," University of Maribor - FERI, Smetanova 17, Maribor, SI-2000.
[3] J. G. Nielsen, M. Newman, H. Nielsen, and F. Blaabjerg, "Control and Testing of a DVR at Medium Voltage," IEEE Transactions on Power Electronics, vol. 19, no. 3, May 2004.
[4] Burke J. J., Grifith D. C., and Ward J., "Power quality - Two different perspectives," IEEE Trans. on Power Delivery, vol. 5, pp. 1501-1513, 1990.
[5] Margo P., M. Heri P., M. Ashari, Hendrik M., and T. Hiyama, "Compensation of Balanced and Unbalanced Voltage Sags using Dynamic Voltage Restorer Based on Fuzzy Polar Control," International Journal of Applied Engineering Research, ISSN 0973-4562, vol. 3, no. 3, pp. 879-890, 2008.
[6] Y. W. Lie, F. Blaabjerg, D. M. Vilathgamuwa, P. C. Loh, "Design and Comparison of High Performance Stationary-Frame Controllers for DVR Implementation," IEEE Transactions on Power Electronics, vol. 22, no. 2, March 2007.
[7] P. W. Lehn and M. R. Iravani, "Experimental Evaluation of STATCOM Closed Loop Dynamics," IEEE Trans. Power Delivery, vol. 13, no. 4, pp. 1378-1384, October 1998.
[8] Rosli Omar, Nasrudin Abd Rahim, Marizan Sulaiman, "Modeling and simulation for voltage sags/swells mitigation using dynamic voltage restorer (DVR)," Journal of Theoretical and Applied Information Technology (JATIT), 2005-2009.
[9] S. Chen, G. Joos, L. Lopes, and W. Guo, "A nonlinear control method of dynamic voltage restorers," IEEE 33rd Annual Power Electronics Specialists Conference, pp. 88-93, 2002.
[10] H. P. Tiwari, Sunil Kumar Gupta, "Dynamic Voltage Restorer Based on Load Condition," International Journal of Innovation, Management and Technology, vol. 1, no. 1, ISSN: 2010-0248, April 2010.
[11] Toni Wunderline and Peter Dhler, "Power Supply Quality Improvement with a Dynamic Voltage Restorer (DVR)," IEEE Transaction, 1998.
[12] Carl N. M. Ho, Henery S. H. Chaung, "Fast Dynamic Control Scheme for Capacitor-Supported Dynamic Voltage Restorer: Design Issues, Implementation and Analysis," IEEE Transaction, 2007.
Anita Pakharia obtained her B.E. (Electrical Engineering) in 2005 and is pursuing an M. Tech. in Power Systems (batch 2009) from Poornima College of Engineering, Jaipur. She is presently Assistant Professor in the Department of Electrical Engineering at the Global College of Technology, Jaipur. She has more than 6 years of teaching experience and has published five papers in national/international conferences. Her fields of interest include power systems, generation of electrical power, power electronics, drives, and non-conventional energy sources.
Manoj Gupta received B.E. (Electrical) and M. Tech. degrees from Malaviya National Institute of Technology (MNIT), Jaipur, India in 1996 and 2006 respectively. In 1997, he joined Pyrites, Phosphates and Chemicals Ltd. (a Govt. of India undertaking), Sikar, Rajasthan, as an Electrical Engineer. In 2001, he joined the Department of Electrical Engineering, Poornima College of Engineering, Jaipur, India as a Lecturer and is now working there as an Associate Professor. His fields of interest include power quality, signal/image processing, and electrical machines and drives. Mr. Gupta is a life member of the Indian Society for Technical Education (ISTE) and the Indian Society of Lighting Engineers (ISLE).
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE) ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 49-52 www.iosrjournals.org
Performance Analysis of Coded OFDM Signal for Radio over Fiber Transmission
Shikha Mahajan1, Naresh Kumar2
1 (ECE, U.I.E.T/Panjab University, India)
2 (Assistant Professor, ECE, U.I.E.T/Panjab University, India)
ABSTRACT: The main aim of this paper is to analyze the influence of the modulation technique on the performance of a coded orthogonal frequency-division multiplexing (COFDM) based Radio over Fiber (RoF) system. The modelling of the COFDM scheme for the RoF system has been done using the Optisystem software. Analysis and simulation results are presented for 16-QAM and 16-QPSK modulation techniques, and performance is analyzed for an optical amplifier and an Erbium Doped Fiber Amplifier (EDFA). The Quality factor (Q) of the COFDM signal transmitted through the fiber optic link is examined. The analysis for Radio over Multimode Fiber with a link length of 2 km (medium haul communication) shows that the optical amplifier gives better results than the EDFA for either modulation scheme over the set of bit rates.
Keywords: COFDM, Convolutional Encoder, Multimode Fiber, Quality Factor, Radio over Fiber
I. INTRODUCTION
RoF technology is one of the recent advancements in optical communication, finding applications in cable TV networks, base station links for mobile communication, and antenna remoting [1]. In particular, radio over multimode fiber (ROMMF) systems have gained much attention recently for their suitability to deliver high-purity signals in analog and digital format for short-reach coverage such as wireless local area networks and ultra-wideband radio signals [2]. COFDM has been specified for digital broadcasting systems, both Digital Audio Broadcasting (DAB) and Digital Video Broadcasting (DVB-T) [3]. COFDM is particularly well matched to these applications, since it is very tolerant of the effects of multipath (provided a suitable guard interval is used). Indeed, it is not limited to 'natural' multipath: it can also be used in so-called Single-Frequency Networks (SFNs), in which all transmitters radiate the same signal on the same frequency. A receiver may thus receive signals from several transmitters, normally with different delays, forming a kind of 'unnatural' additional multipath. Provided the range of delays of the multipath (natural or 'unnatural') does not exceed the designed tolerance of the system (slightly greater than the guard interval), all the received-signal components contribute usefully. For short-distance applications, COFDM can be used efficiently for high data rate signal delivery and to tolerate the effect of modal dispersion. The exponent refractive index profile effect has also been studied, showing that the best performance is achieved near the optimum value of the exponent refractive index profile [4]. A COFDM system employs conventional forward error correction codes and interleaving for protection against burst errors caused by deep fades in the channel. Often it is concatenated with a block code to improve performance [5]. A Viterbi decoder for decoding a convolutional code is easy to implement.
Codes of different rates are used in conjunction with different modulation schemes to support different qualities of service. While error correction codes provide coding gain in the system, interleaving provides diversity gain. For COFDM, when a deep fade occurs in the channel the bits within the deep fade are erased. Interleaving the bits across different frequency bins distributes the energy of a symbol among different sub-carriers. Since distinct sub-carriers undergo different fading conditions, the probability that all the bits corresponding to a symbol are lost decreases significantly. An uncoded OFDM system cannot exploit frequency diversity. Modal dispersion is the dominant performance-limiting factor in MMFs. The advantages of MMFs over single mode fibers (SMFs) are their relaxed coupling tolerance and their larger core diameters, which lead to reduced system-wide installation and maintenance costs. MMFs allow the propagation of multiple guided modes, albeit with different propagation constants. The difference in the mode propagation times leads to intermodal dispersion, which severely limits the fiber bandwidth. In RoF, COFDM has been demonstrated with MMF as a promising modulation scheme for mitigating the modal dispersion penalty [6]. By implementing coding and interleaving across sub-carriers, the effect of multipath fading is minimised, since interleaving reorganises the bits in a way that avoids the effects of fading. Convolutional coding is commonly used in COFDM systems, and it can give coding gain at different coding rates [7]. Thus, COFDM makes the transmitted signal robust to multipath fading and distortion. It has also been observed in [8] that the frequency selectivity becomes significant with increasing length of the MMF. This reveals the need for COFDM.
[Fig. 1. Block diagram of the COFDM based RoF system (blocks include the PRBSG, O/E converter, OFDM demodulator, decoder, and BER analyzer).]
A pseudo-random bit sequence generator (PRBSG) supplies the data, which is convolutionally encoded at rate 1/2 with generator polynomials 133 and 171 (octal). For the downlink simulation, a narrow-bandwidth continuous wave (CW) carrier (wavelength 1300 nm) from a laser diode is modulated via a Mach-Zehnder Modulator (MZM). An ideal pre-amplifier before optoelectronic conversion by a PIN photodiode with centre frequency of 1300 nm and a post-amplifier are used in the uplink simulation. The optical link is a multimode fiber of length 2 km. In the receiver section, the optical signal is detected by a PIN photodiode. Q-factor values of the signals are measured by the BER analyzer at the base station.
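The rate-1/2 convolutional encoder with octal generator polynomials 133 and 171 can be modelled in a few lines; this is an illustrative sketch of the standard constraint-length-7 code, not the actual Optisystem component:

```python
def conv_encode(bits, g1=0o133, g2=0o171, constraint=7):
    """Rate-1/2 convolutional encoder: each input bit is shifted into a
    7-bit register and two parity outputs are produced from the taps
    selected by the generator polynomials."""
    state = 0
    mask = (1 << constraint) - 1
    out = []
    for b in bits:
        state = ((state << 1) | b) & mask
        out.append(bin(state & g1).count("1") % 2)  # parity over g1 taps
        out.append(bin(state & g2).count("1") % 2)  # parity over g2 taps
    return out

coded = conv_encode([1, 0, 1, 1])  # 4 input bits -> 8 coded bits
```

Each input bit yields two output bits, hence the rate 1/2; a Viterbi decoder at the receiver reverses the process by maximum likelihood search.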
The received COFDM signal is demodulated and decoded to retrieve the data. Table 2 summarizes MMF parameters.
Table 2. MMF parameters
Parameter | Value
Dispersion | -100 ps/(nm km)
Attenuation | 1.25 dB/km
Wavelength | 1300 nm
Length | 2 km
Fig. 2. Comparative analysis of QAM using optical amplifier and EDFA
From Fig. 2 above it can be seen that, for the bit rates used in the simulation, QAM with the optical amplifier gives a better Quality factor than QAM with the EDFA. As the bit rate increases, the quality factor begins to degrade for both amplifier configurations; for values greater than or equal to 800 Mbps it falls below the value required for communication purposes. The fall in the Q factor is due to the increase in dispersion as the bit rate increases.
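The Quality factor read from the BER analyzer maps to a bit error rate through the usual Gaussian-noise approximation; a brief sketch (the Q of about 6 threshold used below is a common rule of thumb, not a value from this paper):

```python
import math

def q_to_ber(q_factor):
    """Approximate BER implied by a Q factor under a Gaussian noise
    assumption: BER = 0.5 * erfc(Q / sqrt(2))."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2.0))

# Q = 6 corresponds to a BER near 1e-9, often quoted as the minimum
# for communication-grade optical links.
ber_at_q6 = q_to_ber(6.0)
```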
Fig. 3. Comparative analysis of QPSK using optical amplifier and EDFA
From Fig. 3 above it can be seen that, for the bit rates used in the simulation, QPSK with the optical amplifier gives better Quality factor values than QPSK with the EDFA. As the bit rate increases, the quality factor begins to degrade for both amplifier configurations; for values greater than or equal to 800 Mbps it falls below the value required for communication purposes. The fall in the Q factor is due to the increase in dispersion as the bit rate increases. From figures 2 and 3 it can be concluded that either modulation technique, QAM or QPSK, gives better performance with the optical amplifier than with the EDFA.
VI. CONCLUSION
From the results for Radio over MMF with a link length of 2 km (medium haul communication), based on the comparative analysis between the QAM and QPSK modulation techniques with an optical amplifier and an EDFA for the same set of increasing bit rates, the optical amplifier gives better results than the EDFA for either modulation technique. Above 800 Mbps the quality factor falls below the permissible value for communication.
REFERENCES
[1] Hamed Al Raweshidy, Shozo Komaki, "Radio over Fiber Technologies for Mobile Communications Networks," pp. 128-132, 2002.
[2] Guennec, Y. l., Pizzinat, A., Meyer, S., et al., "Low-cost transparent radio-over-fiber system for in-building distribution of UWB signals," pp. 2649-2657, 2009.
[3] Stott, J. H., "The DVB terrestrial (DVB-T) specification and its implementation in a practical modem," pp. 255-260, 1996.
[4] A. M. Matarneh, S. S. A. Obayya, "Bit-error ratio performance for radio over multimode fiber system using coded orthogonal frequency division multiplexing," pp. 151-157, 2011.
[5] Arun Agarwal, Kabita Agarwal, "Design and Simulation of COFDM for High Speed Wireless Communication and Performance Analysis," pp. 22-28, 2011.
[6] Dixon, B. J., R. D., Iezekiel, S., "Orthogonal frequency-division multiplexing in wireless communication systems with multimode fiber feeds," pp. 1404-1409, 2001.
[7] Prasad, R., OFDM for Wireless Communication Systems, Artech House, 2004.
[8] Aser M. Matarneh, S. S. A. Obayya, I. D. Robertson, "Coded orthogonal frequency division multiplexing transmission over graded index multimode fiber," 2007.
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE) ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 53-61 www.iosrjournals.org
Performance Analysis of Linear Block Code, Convolution code and Concatenated code to Study Their Comparative Effectiveness.
Mr. Vishal G. Jadhao, Prof. Prafulla D. Gawande
1, 2 (Department of Electronics and Telecommunication Engineering, Sipna College of Engineering and Technology, Amravati 444701 (M.S.), India)
ABSTRACT: In data communication, codes are used for the security and effectiveness of transmission across the network. Here linear block codes (LBC), convolution codes (COC), and concatenated codes (CC) are used. The work presented here makes a comparative study of the effectiveness of these codes for secure data analysis. Error coding is a method of detecting and correcting errors to ensure information is transferred intact from its source to its destination. Error coding uses mathematical formulas to encode data bits at the source into longer bit words for transmission; the code word is then decoded at the receiver side. The extra bits in the code word provide redundancy that, according to the coding scheme used, allows the destination to determine the communication medium's expected error rate and signal-to-noise ratio, and whether or not data retransmission is possible. The implemented system can show the exact bit position of errors in the given data, displayed bitwise. Faster processors and better communications technology make more complex coding schemes, with better error detecting and correcting capabilities, possible for smaller embedded systems, allowing for more robust communications.
Keywords: Concatenated code error control coding technique, Convolutional codes error control coding
technique, Embedded Communication, Fault Tolerant Computing, linear block codes error control coding technique, Real-Time System, Software Reliability
I.
INTRODUCTION
Environmental interference and physical defects in the communication medium can cause random bit errors during data transmission. Error coding is a method of detecting and correcting these errors to ensure information is transferred intact from its source to its destination. Error coding is used for fault tolerant computing in computer memory, magnetic and optical data storage media, satellite and deep space communications, network communications, cellular telephone networks, and almost any other form of digital data communication. Error coding uses mathematical formulas to encode data bits at the source into longer bit words for transmission. The code word can then be decoded at the destination to retrieve the information. The extra bits in the code word provide redundancy that, according to the coding scheme used, allows the destination to use the decoding process to determine the communication medium's expected error rate and whether or not data retransmission is possible. Faster processors and better communications technology make more complex coding schemes, with better error detecting and correcting capabilities, possible for smaller embedded systems, allowing for more robust communications. When errors occur during digital data transmission, the system can detect and correct them and display the erroneous and original binary signals bitwise. The system is implemented using MATLAB.
II.
CODING TECHNIQUES
1. Overview of Error Control Coding: Error-correcting codes are used to reliably transmit digital data over unreliable communication channels subject to channel noise. When a sender wants to transmit a possibly very long data stream using a block code, the sender breaks the stream up into pieces of some fixed size. Each such piece is called a message, and the procedure given by the block code encodes each message individually into a codeword, also called a block in the context of block codes. The sender then transmits all blocks to the receiver, who can in turn use some decoding mechanism to (hopefully) recover the original messages from the possibly
corrupted received blocks. The performance and success of the overall transmission depend on the parameters of the channel and of the block code. The theoretical contribution of Shannon's work in the area of channel coding was a useful definition of information and several channel coding theorems, which gave explicit upper bounds, called the channel capacity, on the rate at which information can be transmitted reliably on a given channel. In the context of this paper, the result of primary interest is the noisy channel coding theorem for continuous channels with average power limitations. This theorem gives the capacity C of a band-limited additive white Gaussian noise (AWGN) channel with bandwidth W, a channel model that approximately represents many practical digital communication and storage systems.
1.1. Linear Block Codes: Linear block codes are so named because each code word in the set is a linear combination of a set of generator code words. If the messages are k bits long and the code words are n bits long (where n > k), there are k linearly independent code words of length n that form a generator matrix. To encode any message of k bits, you simply multiply the message vector u by the generator matrix to produce a code word vector v that is n bits long. Linear block codes are very easy to implement in hardware, and since they are algebraically determined, they can be decoded in constant time. They have very high code rates, usually above 0.95, and low coding overhead, but they have limited error correction capabilities. They are very useful in situations where the BER of the channel is relatively low, bandwidth availability in the transmission is limited, and it is easy to retransmit data. One class of linear block codes used for high-speed computer memory is SEC/DED (single-error-correcting/double-error-detecting) codes. In high-speed memory, bandwidth is limited because the cost per bit is relatively high compared to low-speed memory like disks.
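The v = uG encoding described above can be made concrete with the (7,4) Hamming code; the generator matrix below is a standard textbook example, chosen here for illustration rather than taken from this paper:

```python
# Generator matrix of the systematic (7,4) Hamming code, G = [I4 | P].
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def encode(u, gen=G):
    """v = u G over GF(2): the codeword is the linear combination of
    the generator rows selected by the k message bits."""
    n = len(gen[0])
    return [sum(u[i] * gen[i][j] for i in range(len(u))) % 2 for j in range(n)]

v = encode([1, 0, 1, 1])  # 4 message bits -> 7-bit codeword
```

The code rate here is 4/7; every nonzero codeword of this code has weight at least 3, so any single bit error can be corrected.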
The error rates are usually low and tend to occur by the byte, so a SEC/DED coding scheme for each byte provides sufficient error protection. Error coding must be fast in this situation because high throughput is desired. SEC/DED codes are extremely simple and do not cause a high coding delay.
1.2. Convolutional Codes: Convolutional codes are generally more complicated than linear block codes, more difficult to implement, and have lower code rates (usually below 0.90), but have powerful error correcting capabilities. They are popular in satellite and deep space communications, where bandwidth is essentially unlimited but the BER is much higher and retransmissions are infeasible. Convolutional codes are more difficult to decode because they are encoded using finite state machines that have branching paths for encoding each bit in the data sequence. A well-known process for decoding convolutional codes quickly is the Viterbi algorithm. The Viterbi algorithm is a maximum likelihood decoder, meaning that the output code word from decoding a transmission is always the one with the highest probability of being the correct word transmitted from the source. The main difference between block codes and convolutional (or recurrent) codes is the following: in a block code, the block of n digits generated by the encoder in a particular time unit depends only on the block of k input message digits within that time unit; in a convolutional code, the block of n digits generated in a time unit depends not only on the block of k message digits within that time unit, but also on the preceding (N-1) blocks of message digits (N > 1). Usually the values of k and n are small.
1.3. Concatenated codes: The field of channel coding is concerned with sending a stream of data at the highest possible rate over a given communications channel, and then decoding the original data reliably at the receiver, using encoding and decoding algorithms that are feasible to implement in a given technology.
Shannon's channel coding theorem shows that over many common channels there exist channel coding schemes that are able to transmit data reliably at all rates R less than a certain threshold C, called the channel capacity of the given channel. In fact, the probability of decoding error can be made to decrease exponentially as the block length N of the coding scheme goes to infinity. However, the complexity of a naive optimum decoding scheme that simply computes the likelihood of every possible transmitted codeword increases exponentially with N, so such an optimum decoder rapidly becomes infeasible. Dave Forney showed that concatenated codes could be used to achieve exponentially decreasing error probabilities at all data rates less than capacity, with decoding complexity that increases only polynomially with the code block length.
www.iosrjournals.org
54 |Page
Performance Analysis of Linear Block Code, Convolution code and Concatenated code to Study Their Comparative Effectiveness

III. INDENTATIONS AND EQUATIONS
1. Linear block code:
Theorem 1: Let a linear block code C have a parity check matrix H. The minimum distance of C is equal to the smallest positive number of columns of H that are linearly dependent. This concept should be distinguished from that of rank, which is the largest number of columns of H that are linearly independent.
Proof: Designate the columns of H as d0, d1, ..., d(n-1). Since cH^T = 0 for any codeword c, we have
0 = c0 d0 + c1 d1 + ... + c(n-1) d(n-1).
Let c be the codeword of smallest weight, w = w(c) = dmin. Then the dmin columns of H corresponding to the nonzero elements of c are linearly dependent. ∎
Based on this, we can determine a bound on the distance of a code:
dmin ≤ n - k + 1 (the Singleton bound).
This follows since H has n - k linearly independent rows (the row rank equals the column rank), so any set of n - k + 1 columns of H must be linearly dependent.
For a received vector r, the syndrome is s = rH^T. For a codeword the syndrome is zero, so we can determine whether a received vector is a codeword. Furthermore, the syndrome is independent of the transmitted codeword: if r = c + e, then s = (c + e)H^T = eH^T. Moreover, if two error vectors e1 and e2 have the same syndrome, they must differ by a codeword: e1 H^T = e2 H^T implies (e1 - e2)H^T = 0, so e1 - e2 is a codeword.
2. Convolutional Code:
2.1 Coding and decoding with Convolutional Codes: Convolutional codes are commonly specified by three parameters (n, k, m): n = number of output bits, k = number of input bits, m = number of memory registers. The quantity k/n, called the code rate, is a measure of the efficiency of the code. Commonly k and n range from 1 to 8, m from 2 to 10, and the code rate from 1/8 to 7/8, except for deep-space applications, where code rates as low as 1/100 or even lower have been employed.
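The syndrome relations above (s = rH^T and s = eH^T) can be checked numerically over GF(2). The sketch below uses the parity check matrix of the (7,4) Hamming code as an assumed example; the paper itself does not fix a particular H:

```python
# Syndrome computation s = rH^T over GF(2), illustrated with the (7,4)
# Hamming code: column i of H is the binary expansion of the position i+1.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(r):
    # one parity bit per row of H, arithmetic mod 2
    return [sum(h * x for h, x in zip(row, r)) % 2 for row in H]

c = [0, 1, 1, 0, 0, 1, 1]      # a valid codeword: cH^T = 0
e = [0, 0, 0, 0, 1, 0, 0]      # single error in position 5
r = [(ci + ei) % 2 for ci, ei in zip(c, e)]

# The syndrome of r equals the syndrome of e alone -- it does not depend
# on which codeword was transmitted, exactly as argued in the text.
```

For this particular H the nonzero syndrome even reads out the error position in binary (here [1, 0, 1] with the first row as the least significant bit, i.e. position 5), which is what makes Hamming decoding so cheap.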
Often the manufacturers of convolutional code chips specify the code by the parameters (n, k, L), where L, called the constraint length of the code, is defined by Constraint Length, L = k(m - 1).
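A minimal sketch of an (n, k, m) = (3, 1, 2) encoder may make these parameters concrete. The generator polynomials (1,1,1), (0,1,1), (1,0,1) match the rate-1/3 example discussed in this paper; the register-flushing convention and all names are our own assumptions:

```python
# Rate-1/3 convolutional encoder sketch: k=1 input bit in, n=3 output bits
# out, m=2 memory registers. Each generator polynomial selects which taps
# of the window [current bit, memory...] feed one modulo-2 adder.
GENERATORS = [(1, 1, 1), (0, 1, 1), (1, 0, 1)]

def conv_encode(bits, generators=GENERATORS):
    m = len(generators[0]) - 1          # number of memory registers
    state = [0] * m                     # shift-register contents, newest first
    out = []
    for b in bits + [0] * m:            # append m zeros to flush the register
        window = [b] + state            # [u1, u0, u-1] in the paper's notation
        for g in generators:
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
        state = window[:-1]             # shift the register by one
    return out
```

Each input bit produces n = 3 output bits (so a length-len(bits) message yields 3*(len(bits) + m) coded bits including the flush), and the three adders implement v1 = u1 + u0 + u-1, v2 = u0 + u-1, v3 = u1 + u-1 mod 2, as in the text.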
The constraint length L represents the number of bits in the encoder memory that affect the generation of the n output bits. The constraint length is also referred to by the capital letter K, which can be confused with the lower-case k, the number of input bits. Sometimes K is defined as equal to the product of k and m. Often in commercial specifications the codes are given by (r, K), where r is the code rate k/n and K is the constraint length; this constraint length K is equal to L + 1.
2.2 Code Parameters and the Structure of the Convolutional Code: The convolutional code structure is easy to draw from its parameters. First draw m boxes representing the m memory registers. Then draw n modulo-2 adders to represent the n output bits. Now connect the memory registers to the adders using the generator polynomials. Consider a rate-1/3 code: each input bit is coded into 3 output bits, and the constraint length of the code is 2. The 3 output bits are produced by the 3 modulo-2 adders by adding up certain bits in the memory registers. The selection of which bits are added to produce an output bit is called the generator polynomial (g) for that output bit. For example, the first output bit has the generator polynomial (1, 1, 1), the second output bit has (0, 1, 1), and the third has (1, 0, 1). The output bits are just the sums of these bits:
v1 = mod2(u1 + u0 + u-1)
v2 = mod2(u0 + u-1)
v3 = mod2(u1 + u-1)
(see Fig. 1). The polynomials give the code its unique error-protection quality; one (3, 1, 4) code can have completely different properties from another, depending on the polynomials chosen.
3. Concatenated Codes: Let Cin be an [n, k, d] code, that is, a block code of length n, dimension k, minimum Hamming distance d, and rate r = k/n, over an alphabet A.
Let Cout be an [N, K, D] code over an alphabet B with |B| = |A|^k symbols. The inner code Cin takes one of |A|^k = |B| possible inputs, encodes it into an n-tuple over A, transmits, and decodes into one of |B| possible outputs. We regard this as a (super) channel that can transmit one symbol from the alphabet B, and we use this channel N times to transmit each of the N symbols in a codeword of Cout. The concatenation of Cout (as outer code) with Cin (as inner code), denoted Cout ∘ Cin, is thus a code of length Nn over the alphabet A.[1]
It maps each input message m = (m1, m2, ..., mK) to a codeword (Cin(m'1), Cin(m'2), ..., Cin(m'N)), where (m'1, m'2, ..., m'N) = Cout(m1, m2, ..., mK). The key insight in this approach is that if Cin is decoded using a maximum-likelihood approach (thus showing an exponentially decreasing error probability with increasing length), and Cout is a code with length N = 2^(nr) that can be decoded in time polynomial in N, then the concatenated code can be decoded in time polynomial in its combined length n·2^(nr) = O(N log N) and shows an exponentially decreasing error probability, even if Cin has exponential decoding complexity.[1] This is discussed in more detail in the section on decoding concatenated codes. In a generalization of the above concatenation, there are N possible inner codes Cin,i, and the i-th symbol in a codeword of Cout is transmitted across the inner channel using the i-th inner code. The Justesen codes are examples of generalized concatenated codes, where the outer code is a Reed-Solomon code.
3.1. Theorem: The distance of the concatenated code Cout ∘ Cin is at least dD; that is, it is an [nN, kK, D'] code with D' ≥ dD.
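The encoding map m → (Cin(m'1), ..., Cin(m'N)) and the dD distance bound can be exercised on a deliberately tiny example. The component codes below are our own choices for illustration, not the paper's: a [3, 1, 3] repetition code over 2-bit symbols as Cout, and the [3, 2, 2] single-parity-check code as Cin, so k = 2, n = 3, d = 2, K = 1, N = 3, D = 3:

```python
# End-to-end sketch of the concatenation Cout ∘ Cin on toy component codes.
def inner_encode(sym):                     # Cin: 2 bits -> 3 bits (parity check)
    return sym + [(sym[0] + sym[1]) % 2]

def outer_encode(msg):                     # Cout: 1 symbol -> 3 symbols (repetition)
    return [msg[0]] * 3

def concat_encode(msg):
    # encode with Cout over symbols, then each symbol with Cin over bits
    return [bit for sym in outer_encode(msg) for bit in inner_encode(sym)]

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))
```

Two concatenated codewords built this way have length Nn = 9 and differ in at least dD = 6 positions, consistent with the theorem stated above.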
Proof: Consider two different messages m1 ≠ m2 ∈ B^K, and let Δ denote the Hamming distance between two codewords. Since Cout has minimum distance D,
Δ(Cout(m1), Cout(m2)) ≥ D.
Thus, there are at least D positions in which the sequences of N symbols of the codewords Cout(m1) and Cout(m2) differ. For each of these positions, denoted i,
Δ(Cin(Cout(m1)i), Cin(Cout(m2)i)) ≥ d.
Consequently, there are at least dD positions in the sequence of nN symbols taken from the alphabet A in which the two codewords differ, and hence
Δ(Cin(Cout(m1)), Cin(Cout(m2))) ≥ dD. ∎
2. If Cout and Cin are linear block codes, then Cout ∘ Cin is also a linear block code. This property can be shown by defining a generator matrix for the concatenated code in terms of the generator matrices of Cout and Cin.
Figures and Tables:
1. Convolutional code:
Fig. 1 Sequential statement
1.1. Generator polynomials selected for the given constraint lengths:

Constraint Length    G1           G2
3                    110          111
4                    1101         1110
5                    11010        11101
6                    110101       111011
7                    110101       110101
8                    110111       1110011
9                    110111       111001101
10                   110111001    1110011001
Fig. 3 Bitwise graph of the convolutional code
2. Linear block code: resulting binary message and bitwise graph
IV. CONCLUSION
1. As shown in the BER versus Eb/No graph, the convolutional code has much better error-correction capability than the linear block code.
2. The convolutional code is much more difficult to implement than the linear block code.
3. With the concatenated code, both binary and non-binary data can be sent through the encoder; it is most useful for its error-correction capability.
4. The convolutional code performs better than the linear block code and the concatenated code.
5. Important reasons to use coding are achieving dependable data storage in the face of minor data corruption/loss, and gaining the ability to provide high-precision I/O even on noisy transmission lines such as cellular phones (error coding decouples message precision from the analog noise level).
6. It is most useful in real-time systems: the exact erroneous bit can be shown in the result, and the error signal can easily be displayed in the MATLAB command window.
The error coding technique for an application should be picked based on: the types of errors expected on the channel (e.g., burst errors or random bit errors); whether or not it is possible to retransmit the information (codes for error detection only, or for error detection and correction); the expected error rate on the communication channel (high or low); and the tradeoffs between channel efficiency and the amount of coding/decoding logic implemented.
ACKNOWLEDGEMENTS
We are thankful to Prof. A. P. Thakare, HOD (Department of Electronics and Telecommunication), and Dr. A. A. Gurjar, Sipna C.O.E.T., Amravati, for their encouragement and support.
References
[1] Kai Yang and Xiaodong Wang, "Analysis of Message-Passing Decoding of Finite-Length Concatenated Codes," IEEE Transactions on Communications, vol. 59, no. 8, August 2011.
[2] D. J. Costello, Jr., J. Hagenauer, H. Imai, and S. B. Wicker, "Applications of Error Control Coding," IEEE Transactions on Information Theory, vol. 44, no. 6, October 1998.
[3] Wang, W. Zhao, and G. B. Giannakis, "Error Correction in a Convolutionally Coded System," IEEE Transactions on Communications, vol. 56, no. 11, November 2008.
[4] S. Lin and D. J. Costello, Error Control Coding. Englewood Cliffs, NJ: Prentice Hall, 1982.
[5] W. W. Peterson and E. J. Weldon, Jr., Error Correcting Codes, 2nd ed. Cambridge, MA: The MIT Press, 1972.
[6] V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed. New York: John Wiley & Sons, 1998.
[7] K. Sam Shanmugam, Digital and Analog Communication Systems.
[8] S. Lin and D. J. Costello, Error Control Coding. Englewood Cliffs, NJ: Prentice Hall, 1982.
[9] M. Michelson and A. H. Levesque, Error Control Techniques for Digital Communication. New York: John Wiley & Sons, 1985.
[10] W. W. Peterson and E. J. Weldon, Jr., Error Correcting Codes, 2nd ed. Cambridge, MA: The MIT Press, 1972.
[11] V. Pless, Introduction to the Theory of Error-Correcting Codes, 3rd ed. New York: John Wiley & Sons, 1998.
[12] C. Schlegel and L. Perez, Trellis Coding. Piscataway, NJ: IEEE Press, 1997.
[13] S. B. Wicker, Error Control Systems for Digital Communication and Storage. Englewood Cliffs, NJ: Prentice Hall, 1995.
First Author: Mr. Vishal G. Jadhao is pursuing his M.E. in Digital Electronics at Sipna's College of Engineering and Technology, Amravati (India). His areas of interest are digital communication systems and digital signal processing.
Second Author: Prof. Prafulla D. Gawande is currently working as a Professor in the Electronics and Telecommunication Engineering Department, Sipna's College of Engineering, Amravati (India).
IOSR Journal of Electrical and Electronics Engineering (IOSRJEEE) ISSN : 2278-1676 Volume 1, Issue 1 (May-June 2012), PP 62-67 www.iosrjournals.org
(ECE department, AL-Aman College of engineering/ Jawaharlal Nehru technological university Kakinada, India )
ABSTRACT: Man has gloriously innovated, discovered, and invented some state-of-the-art technologies and designed modern structures that stand apart from those of other civilizations, yet there has been growing concern over the deterioration of mother Earth. Unless we turn to extraterrestrial life or the mining of celestial bodies, this doomsday can be confined to science fiction only if practical, eco-friendly alternatives are implemented. Human civilization has led the way to ever more efficient technology, yet every technology developed so far has carried its demerits. Here we put forward some ways to make use of the pristine forces of nature, which have been trivialized since the start of the modern age.
II. EFFICIENCY
This is a simple definition of the efficiency of a device that converts one energy form into another: efficiency = useful energy output (EO) / energy input. It is a number between 0 and 1, or between 0% and 100%. The key phrase in the definition is useful energy output; were it not there, the efficiency of any device would be 100%, because of the law of conservation of energy (which states that energy cannot be created or destroyed). Instead, we see that quite a few devices used in daily life have very low efficiencies. The purpose of a device determines its useful energy output. For example, we want light from a lamp, but we get mostly heat; only 5% of the energy input (electricity) is converted into light, so the efficiency of a conventional incandescent light bulb is 5%. It is obvious why fluorescent lights are preferable for heavy-duty use (e.g., basements, kitchens, public buildings). Similarly, diesel engines (the kind used in most trucks and in some cars, especially European ones) are typically more efficient than conventional spark-ignition engines (SIE) [2]. The familiar terms 'fuel economy' and 'mileage' (distance travelled per litre of fuel consumed) are equivalent to the efficiency of an automobile, just expressed in different units: instead of units of mechanical energy (work), the useful energy output is given as the average number of kilometres travelled, and instead of units of chemical energy, the energy input is given as the number of litres of fuel [3]. Refer to figure (1).
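The efficiency definition above reduces to a one-line computation; the sketch below just restates it in code, using the text's 5% incandescent-bulb example:

```python
# Efficiency = useful energy output / energy input, a number in [0, 1].
def efficiency(useful_output, energy_input):
    return useful_output / energy_input

# Incandescent bulb from the text: of 100 J of electrical input,
# only about 5 J comes out as light.
bulb = efficiency(5.0, 100.0)    # 0.05, i.e. 5%
```

Fuel economy works the same way, only with kilometres travelled as the "useful output" and litres of fuel as the "input".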
III. MAXIMUM POWER TRANSFER THEOREM
Maximum power transfer theorem states that, to obtain maximum external power from a source with a finite internal resistance, the resistance of the load must be equal to the resistance of the source as viewed from the output terminals. The theorem results in maximum power transfer, and not maximum efficiency. If the resistance of the load is made larger than the resistance of the source, then efficiency is higher, since a higher percentage of the source power is transferred to the load, but the magnitude of the load power is lower since the total circuit resistance goes up.[4]
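The statement above can be checked numerically: with a source voltage V and internal resistance Rs, the load power is V²·Rl/(Rs+Rl)², which peaks at Rl = Rs, while the efficiency Rl/(Rs+Rl) keeps rising as Rl grows. The specific values of V and Rs below are our own illustrative choices:

```python
# Numeric check of the maximum power transfer theorem.
def load_power(V, Rs, Rl):
    # power dissipated in the load of a V source with internal resistance Rs
    return V**2 * Rl / (Rs + Rl)**2

def transfer_efficiency(Rs, Rl):
    # fraction of the source power delivered to the load
    return Rl / (Rs + Rl)

V, Rs = 10.0, 50.0
best = max(range(1, 500), key=lambda Rl: load_power(V, Rs, Rl))
# best == 50: load power is maximized at the matched load Rl = Rs,
# where the efficiency is only 0.5; a larger Rl is more efficient
# but delivers less absolute power.
```

This is exactly the trade-off the text describes: matched loading maximizes transferred power, not efficiency.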
IV. LAW OF CONSERVATION OF ENERGY
The law of conservation of energy states that the amount of energy in a system stays constant through time: no matter how much time passes, energy can be neither created nor destroyed, only transformed from one form into another.[8] A motor is fixed to one end of a rod and a dynamo to the other end. The axle of the motor is perpendicular to the rod, and likewise for the dynamo. Tyres are fixed to the axles of both the motor and the dynamo. The positive terminal of the motor is connected to the positive terminal of the dynamo, and vice versa. Let us assume for a moment that the efficiency of the dynamo and of the motor is only 75%, so that 25% of the input is wasted whenever it is converted into another form of energy.[9] Consider an electric motor and a dynamo of matching ratings and parameters, connected at either end of a supporting rod; care must be taken that these devices are synchronized port-wise (with parallel terminals). Each of the two devices is fitted with a rotating tyre, so that the arrangement moves like a bicycle. The arrangement is placed on the ground and given an initial manual thrust, which moves it forward; this movement rotates the dynamo, which converts the 100% mechanical input (from the push) into 75% electrical output. That 75% electrical output is fed to the motor, which converts it into 50% mechanical energy. As a result the motor rotates the wheel and the arrangement moves forward again; the dynamo now converts its 50% mechanical input into 25% electrical output, which is in turn given to the motor, whose mechanical output finally falls to 0%.
This results in the deceleration of the system, and finally the system stops. The example shows that every conversion of energy involves wastage, so 100% energy conversion is not possible.[10] Refer to fig. (2) and table 1.
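The decay described in this thought experiment can be sketched as a model: if every mechanical-electrical conversion retains only a fraction eta of its input, the energy after each round decays geometrically towards zero. The value eta = 0.75 follows the text's assumed 75% conversion efficiency; the model itself is our simplification:

```python
# Energy remaining after repeated lossy conversions: each round keeps
# only a fraction eta of the energy, so the system must eventually stop.
def energy_after(rounds, eta=0.75, start=100.0):
    return start * eta ** rounds

levels = [energy_after(n) for n in range(5)]   # 100, 75, 56.25, ...
# Each level is strictly below the previous one: no perpetual motion,
# which is the conclusion the text draws from the motor-dynamo pair.
```

Under this model the energy never reaches exactly zero but shrinks below any useful threshold, matching the observed deceleration and stop.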
V. SIMPLIFIED TRAVELLER
There are many vehicles capable of converting solar power into mechanical power, and electrical power into mechanical power. The most popular vehicle for converting electrical power into mechanical energy is the E-bike. Due to its limited battery charge, it cannot travel long distances, and there is no alternative except finding a power source (such as E-bunks) along the way; the only remedy is to be precautious. Hence arises the need for a vehicle capable of converting both solar energy and mechanical energy into electrical energy, without needing any external source for recharging mid-way. This paper puts forward a model called the simplified traveller (ST), which possesses all of the above features. The simplified traveller is basically a bicycle with advanced features. It is powered by a solar panel, a dynamo, and simple muscular pedalling, so it has three power sources; when one kind of energy source fails, another can be utilized. In this scenario the ST has two alternative power-producing sources which, under any circumstances, can set it in motion. For instance, if it is cloudy, the solar panel cannot produce sufficient energy to set the ST in motion; in this case, when the ST is pedalled, the dynamo also rotates.
VI. EPGSP
This is a simple arrangement of mirrors cut into small uniform shapes, such as squares. The mirrors are positioned on a concave surface that is tilted towards the sun, arranged so that the reflected light is focused onto a particular area, where a high-power solar panel or any heat-to-energy conversion device is placed. This arrangement produces the same amount of energy as N individual solar panels would, using a single solar panel assisted by the concave reflecting surface [9]. As a result, the cost of production is reduced many times over. Because of this arrangement, thousands of joules of energy can be concentrated at one point, from which thousands of watts of power can be produced with little loss in energy conversion [12]. Refer to figs. (4) and (5). Thus EPGSP is very useful in power generation.
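A back-of-the-envelope sketch of the concentration idea above: N small mirrors of a given area, with some reflectivity, redirect solar irradiance onto one receiver. All the specific numbers below are our own illustrative assumptions, not values from the paper:

```python
# Power concentrated on the receiver by n_mirrors flat mirror facets,
# each of area mirror_area_m2, under irradiance_w_m2 of sunlight,
# with the given mirror reflectivity (0..1).
def concentrated_power(n_mirrors, mirror_area_m2, irradiance_w_m2, reflectivity):
    return n_mirrors * mirror_area_m2 * irradiance_w_m2 * reflectivity

# 100 mirrors of 0.25 m^2 each, ~1000 W/m^2 sunlight, 90% reflectivity:
p = concentrated_power(100, 0.25, 1000.0, 0.9)   # tens of kilowatts focused
```

The receiver thus sees roughly N times the power a single facet-sized panel would collect, which is the cost argument the text makes.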
VII.
Fig. 2 Motor-dynamo arrangement (figure labels: positive terminal, tyres)
Fig. 4 Concave mirror arrangement (figure labels: sun, rays incident from sun, rays from concave surface, battery)
VIII. SIMULATION RESULTS
REFERENCES
[1] Georg Hille, Werner Roth, and Heribert Schmidt, Photovoltaic Systems, Fraunhofer Institute for Solar Energy Systems, Freiburg, Germany, 1995.
[2] O. K. Heinrich Wilk, "Utility connected photovoltaic systems," contribution to design handbook, Expert Meeting Montreux, October 19-21, 1992, International Energy Agency (IEA): Solar Heating and Cooling Program.
[3] Stuart R. Wenham, Martin A. Green, and Muriel E. Watt, Applied Photovoltaics, Centre for Photovoltaic Devices and Systems, UNSW.
[4] N. Ashari, W. W. L. Keerthipala, and C. V. Nayar, "A single phase parallel connected uninterruptible power supply / demand side management system," PE-275-EC (08-99), IEEE Transactions on Energy Conversion, August 1999.
[5] C. V. Nayar and M. Ashari, "Phase power balancing of a diesel generator using a bidirectional PWM inverter," IEEE Power Engineering Review 19 (1999).
[6] C. V. Nayar, J. Perahia, S. J. Philips, S. Sadler, and U. Duetchler, "Optimized power electronic device for a solar powered centrifugal pump," Journal of the Solar Energy Society of India, SESI Journal 3(2), 87-98 (1993).
[7] Ziyad M. Salameh and Fouad Dagher, "The effect of electrical array reconfiguration on the performance of a PV-powered volumetric water pump," IEEE Transactions on Energy Conversion 5, 653-658 (1990).
[8] C. V. Nayar, S. J. Phillips, W. L. James, T. L. Pryor, and D. Remmer, "Novel wind/diesel/battery hybrid energy system," Solar Energy 51, 65-78 (1993).
[9] W. Bower, "Merging photovoltaic hardware development with hybrid applications in the U.S.A.," Proceedings Solar '93 ANZSES, Fremantle, Western Australia (1993).
[10] D. E. Carlson, "Recent advances in photovoltaics," Proceedings of the Intersociety Engineering Conference on Energy Conversion, 1995, pp. 621-626.
[11] www.google.com
[12] www.en-wikipedia.org