Contents

1 Introduction
  1.1 Overview
  1.2 Objective
2 Procedure
  2.1 OpenCV and OpenGL on the iPhone
    2.1.1 OpenCV
    2.1.2 OpenGL
  2.2 Calibrated System
    2.2.1 Intrinsic Calibration
    2.2.2 Extrinsic Calibration
    2.2.3 Optical Flow
    2.2.4 Enhancements
  2.3 OpenGL
    2.3.1 OpenGL - Calibrated
    2.3.2 Blender
3 Experimental Results
  3.1 Calibrated Results
    3.1.1 Initial Intrinsic Calibration
    3.1.2 Extrinsic Calibration Ambiguity
    3.1.3 OpenCV Pose Problem
    3.1.4 OpenGL Pose Problem
4 Discussion
  4.1 Overview
  4.2 Uncalibrated - GRIC
  4.3 Stereo reconstruction - 3D world points
  4.4 OpenGL
References
6 What we did
  6.1 Andrew Abril
  6.2 Jose Rafael Caceres
7 Appendix
1 Introduction
1.1 Overview
With the rising popularity of computer vision on smartphones, there has been a need to implement unique ways of interacting with the phone and its software. The recent trend has been toward augmented reality, which brings virtual scenes into reality. There has been little attempt to do the opposite: bringing the user into the virtual world. [2]
1.2 Objective
To achieve our objective, these goals must be met:

- Capture the iPhone's camera pose using OpenCV, with a chessboard as a marker.
- Transform the captured poses from the real world to the virtual world using OpenGL.
- Use OpenGL to create a virtual world with one camera as the viewpoint. The virtual world will be displayed on the iPhone according to the user's movements.
2 Procedure
2.1 OpenCV and OpenGL on the iPhone
Originally, both OpenCV and OpenGL were mainly used on desktops and laptops in order to take advantage of their processing power. Recently, there has been increasing demand to use these libraries on mobile devices for many applications.

2.1.1 OpenCV
Since porting the OpenCV library to the iPhone is such a new idea, little documentation is available for it. Many of the pre-compiled projects that include the OpenCV libraries expose only a subset of OpenCV's functions. This is mainly due to Apple's only recently allowing applications to use the iPhone's camera. The most stable version of OpenCV used on the iPhone is OpenCV 2.0, limiting the functions available from the current version, OpenCV 2.2. The version used in iWorld is OpenCV 2.1, which allowed the use of some of OpenCV's defined flags; however, many of the functions introduced in OpenCV 2.1 could not be used due to technical reasons. Many of OpenCV's functions were designed to take advantage of a 32-bit processor, and these functions suffer in performance when implemented on an iPhone. Specifically, the image type OpenCV uses (IplImage) is different from the iPhone's (UIImage). The conversion between these two image types is an important factor in the application, because OpenCV does not accept UIImages as inputs to its functions, and the iPhone does not accept IplImages as output to the screen. Another operation that slowed down the application was finding the chessboard every frame; compared to a laptop, the iPhone ran at a much lower speed. To fix this problem, optical flow was used to track the features after they were initially found using OpenCV's chessboard-finding function.

2.1.2 OpenGL
OpenGL is officially supported on the iPhone, but as a trimmed-down version named OpenGL ES. OpenGL ES only supports primitives that are triangles, lines, or mere points. This makes drawing more cumbersome than in the original OpenGL, which supports more primitives such as quads. Another important function that is not available is the perspective projection of a real-world camera's intrinsic parameters onto an OpenGL camera. OpenGL ES uses a different function, glFrustumf (which takes different inputs), to set up this projection matrix; therefore it was necessary to write an intermediate function that takes the real camera's intrinsic parameters and converts them to the parameters used by glFrustumf. Still, this function was easy enough to implement, as many implementations are available.
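Such an intermediate function can be sketched as follows. This is a minimal illustration, not the project's actual code; the struct and function names are our own, and it assumes the usual pinhole convention of scaling pixel offsets from the principal point by zNear / focal_length:

```cpp
#include <cassert>
#include <cmath>

// Frustum bounds at the near plane, in the form glFrustumf expects.
struct Frustum {
    float left, right, bottom, top, zNear, zFar;
};

// Map pinhole intrinsics (focal lengths fx, fy and principal point cx, cy,
// all in pixels, for an image of the given width and height) onto
// near-plane frustum bounds.
Frustum IntrinsicsToFrustum(float fx, float fy, float cx, float cy,
                            float width, float height,
                            float zNear, float zFar) {
    Frustum f;
    f.left   = -cx * zNear / fx;
    f.right  = (width - cx) * zNear / fx;
    f.bottom = -(height - cy) * zNear / fy;
    f.top    = cy * zNear / fy;
    f.zNear  = zNear;
    f.zFar   = zFar;
    return f;
}
```

The result would then be handed straight to glFrustumf(f.left, f.right, f.bottom, f.top, f.zNear, f.zFar).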
2.2 Calibrated System

The intrinsic parameters were obtained with the MATLAB calibration toolbox. This was done because the MATLAB toolbox provides more detailed information about the calibration results than OpenCV's cvCalibrateCamera2() function. For instance, MATLAB provides axis-orientation information and re-projection error. Using the intrinsic values given by MATLAB, the extrinsic parameters were calculated every frame. This provided the rotation and translation of the world with respect to the camera.

2.2.1 Intrinsic Calibration
With MATLAB's calibration toolbox, the intrinsic parameters were calculated using the chessboard method. Ten images of a chessboard at different orientations were used (Figure 1), with the iPhone camera being stationary. After extracting the grid corners from each image, the intrinsic values were calculated. The axis coordinates are shown in Figure 2.
Figure 2: This image shows the axis orientation of the iPhone camera, where the x-axis lies along the vertical (becoming more positive going downward), the y-axis lies along the horizontal (becoming more positive going to the right), and the z-axis points out of the board.

2.2.2 Extrinsic Calibration
The chessboard method in OpenCV was used to find the image points (each corner represents an image point). The world points for each corner were arbitrarily chosen. The OpenCV function cvFindExtrinsicCameraParams2() was used to find the pose (rotation and translation) of the camera at each frame. This function was used instead of the original OpenCV function cvCalibrateCamera2(), which finds both intrinsic and extrinsic parameters, in order to minimize extrinsic value error. Since our camera does not change, the intrinsic parameters ought to remain constant; using cvCalibrateCamera2() would mean that the estimated intrinsic values were liable to change.

2.2.3 Optical Flow
Lucas-Kanade optical flow is used to track features between two frames to capture the affine motion of the features. This algorithm is based on two assumptions that affect the project: brightness constancy and temporal persistence. Brightness constancy is the notion that a pixel's brightness in gray scale changes very little or not at all; in other words, the feature looks the same across frames. Temporal persistence requires that features move slowly from frame to frame, ensuring that a feature stays within the camera's view. To increase performance, Lucas-Kanade optical flow was adopted in lieu of OpenCV's slower method of repeatedly finding chessboard corners. First, the chessboard is located within the image using OpenCV's chessboard detector. After the corner points have been extracted, optical flow is used to track their affine motion. If a tracked feature is not successfully found, OpenCV's chessboard detector is called again to relocate the corner points.

2.2.4 Enhancements
RGB to Gray-scale: The two main bottlenecks of the application were tracking features (finding the chessboard) and converting the RGB (red, green, blue) image to a gray-scale image. OpenCV requires gray-scale images for its feature finding and tracking algorithms; however, its implementation of the RGB-to-gray-scale conversion was slow on a mobile device because the function was optimized to run on a 32-bit processor. This conversion, combined with the iPhone's limited processing power, caused the application to run extremely slowly. As a result, another algorithm was implemented to remedy the problem. OpenCV uses the weighted sum

Y = (0.299)R + (0.587)G + (0.114)B    (1)

to convert from RGB to gray-scale. In the application, the plain average

Y = (R + G + B)/3    (2)

was chosen instead because it produced better speed while not compromising accuracy.

UIImage to IplImage: Another challenge encountered was the conversion between UIImage and IplImage. The conventional way of doing this is to get the Core Graphics image reference and draw the reference into the image data that will back the IplImage structure. This method proved too slow for the application, since it was called every frame. Since there was direct access to the raw byte data of each frame, this data was instead copied into a buffer and wrapped in an IplImage structure before executing OpenCV functions, avoiding the conversion from UIImage to IplImage entirely.
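The two gray-scale conversions above can be compared directly. This small sketch is our own illustration of equations (1) and (2), not the project's actual code:

```cpp
#include <cassert>
#include <cstdint>

// Luma-weighted conversion, matching equation (1) used by OpenCV.
uint8_t GrayWeighted(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>(0.299f * r + 0.587f * g + 0.114f * b);
}

// Plain average, matching equation (2) adopted in the application.
// It trades three floating-point multiplications per pixel for one
// integer division, which is cheaper on a device with weak FP hardware.
uint8_t GrayAverage(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>((static_cast<int>(r) + g + b) / 3);
}
```

For most natural images the two outputs differ only modestly, which is why the faster average was judged an acceptable trade-off.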
2.3 OpenGL
OpenGL (Open Graphics Library) is an open, cross-platform graphics library, initially written in C, that allows rendering of 2D and 3D graphics. OpenGL is used to translate the user's motion into the virtual world. The key aspect in transforming a camera pose from OpenCV to OpenGL is understanding the axis orientation of OpenCV (the real camera) and of OpenGL (the virtual camera), shown in Figure 3. This is used to give a realistic virtual experience, as the virtual world moves according to the user's movements. [5]
Figure 3: In the real camera (top), the axis orientation uses the right-hand coordinate system, with the x-axis horizontal, the y-axis vertical, and the z-axis coming out of the camera; in the virtual camera (right), the y- and z-axes point in the opposite direction.
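Concretely, converting a point (or each column of a pose) between the two conventions of Figure 3 amounts to negating the y and z components. A minimal sketch, with illustrative names of our own:

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// OpenCV's camera frame (x right, y down, z forward) and OpenGL's
// (x right, y up, z toward the viewer) differ by a sign flip on y and z.
Vec3 CvToGl(const Vec3& p) {
    return Vec3{p.x, -p.y, -p.z};
}
```

The appendix code applies exactly this kind of sign flip to the translation components before handing the pose to OpenGL.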
2.3.1 OpenGL - Calibrated
For the calibrated experiment, a simple virtual world was used to convey the idea. A 3D cone was used to show how the user's translation affects the cone's movement. If the user moves to the right, the cone moves to the left, mirroring the way humans view objects in the real world. Similarly, moving the camera toward the cone brings the cone closer, and vice versa. The function glFrustumf() plays an important part in the cone's movement: it specifies the viewing volume seen by OpenGL. If the window is too large, the cone will be unresponsive to changes in the translation of the world with respect to the camera.
3 Experimental Results
3.1 Calibrated Results
The following results are from using a fully calibrated iPhone at every frame, using the chessboard calibration method.

3.1.1 Initial Intrinsic Calibration
Initially, the iPhone's camera was calibrated to find the intrinsic parameters (focal length, principal point, etc.) in order to more accurately calculate the extrinsic parameters of each frame. Ten images of chessboards in different orientations were used to get a precise measurement. MATLAB's calibration toolbox also gives the re-projection error (in pixels). The re-projection error was about one percent and is shown graphically in Figure 4 below.
Figure 4: The graph above shows the re-projection error of every point in every image, represented as colored crosses (each color is a unique image). This is the geometric quantity that compares the measured points in the 2D image to the points re-projected from the calculated camera parameters.
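The re-projection error for one correspondence can be computed as the pixel distance between a detected corner and the projection of its world point through the estimated parameters. The sketch below is our own illustration under a pure pinhole model (no distortion terms), for a point already expressed in camera coordinates:

```cpp
#include <cassert>
#include <cmath>

struct Point2 { float u, v; };
struct Point3 { float x, y, z; };

// Project a point in camera coordinates through a pinhole model with
// focal lengths (fx, fy) and principal point (cx, cy), all in pixels.
Point2 Project(const Point3& p, float fx, float fy, float cx, float cy) {
    return Point2{fx * p.x / p.z + cx, fy * p.y / p.z + cy};
}

// Re-projection error for one correspondence: Euclidean pixel distance
// between the measured image point and the re-projected world point.
float ReprojError(const Point2& measured, const Point3& world,
                  float fx, float fy, float cx, float cy) {
    Point2 proj = Project(world, fx, fy, cx, cy);
    float du = proj.u - measured.u;
    float dv = proj.v - measured.v;
    return std::sqrt(du * du + dv * dv);
}
```

A calibration toolbox reports this quantity over all corners in all images; the crosses in Figure 4 are the per-point (du, dv) residuals.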
3.1.2 Extrinsic Calibration Ambiguity

It is important to always have the same point of origin when calibrating, especially when calibration occurs every frame, since it sets the axis orientation of the camera. A problem encountered was that the point of origin would randomly change from the top-left corner to the bottom-right corner; this gives an origin-orientation ambiguity. The change in origin would sometimes switch the x- and y-axes. This problem was due to the way OpenCV's FindChessboardCorners() method detects corners. If the chessboard was square, there would be orientation ambiguity; however, if its width and height were of different lengths, the ambiguity seemed to disappear. Figure 5 shows this ambiguity.
Figure 5: The figure above clearly shows that the origin orientation (which starts at (0,0) at the first blue corner) changes drastically from frame ten (left) to frame thirty-two (right). In frame ten the origin is in the upper-left corner, but in frame thirty-two the origin is in the upper-right corner.
3.1.3 OpenCV Pose Problem
OpenCV assumes an orientation with the x-axis on the horizontal, the y-axis on the vertical, and the z-axis out of the board, as shown in Figure 3. Yet in Figure 2, the x-axis is on the vertical while the y-axis is on the horizontal. This orientation is due to the iPhone rotating the image, so when OpenCV calculated the translation and rotation of the camera, the pose was actually incorrect by 90 degrees about the z-axis. This gave incorrect results when translating the camera's orientation to OpenGL. To solve the problem, an offset transformation matrix (a rotation of 90 degrees about the z-axis) was multiplied with the calculated transformation matrix each frame. This rotation problem can clearly be seen in the figures below.
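The per-frame offset multiplication described above can be sketched as composing a z-axis rotation with the estimated rotation. This is an illustrative 3x3 version with names of our own, not the project's exact matrix code:

```cpp
#include <cassert>
#include <cmath>

// 3x3 matrices stored row-major.
struct Mat3 { float m[3][3]; };

Mat3 Multiply(const Mat3& a, const Mat3& b) {
    Mat3 r{};  // zero-initialized accumulator
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

// Rotation by `deg` degrees about the z-axis.
Mat3 RotZ(float deg) {
    float rad = deg * 3.14159265358979f / 180.0f;
    float c = std::cos(rad), s = std::sin(rad);
    return Mat3{{{c, -s, 0.0f}, {s, c, 0.0f}, {0.0f, 0.0f, 1.0f}}};
}
```

Applying the offset each frame is then a call like Multiply(RotZ(90.0f), pose) before the pose is converted for OpenGL.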
Figure 6: The image on the left shows the OpenCV pose problem. The output iPhone image is turned 90 degrees about the z-axis, which causes an incorrect estimate when OpenCV calculates the rotation and translation. A simple rotation about the z-axis fixes the problem, as seen in the image on the right.

3.1.4 OpenGL Pose Problem
The camera orientation calculated by OpenCV mimics the orientation of the marker (the chessboard). Although this is correct, it resulted in an undesirable bird's-eye view of the virtual world, because it does not produce the pose needed for the application (parallel to the floor). To accommodate this, another offset was applied. After this offset was applied, the relative translation and rotation agreed with human-like vision: if the user moves the iPhone closer to the chessboard, the user moves forward in the game (the z direction). The bird's-eye view and its correction can be seen in the figure below.
Figure 7: The image on the left shows the OpenGL pose problem. Since the camera looks at the marker from a bird's-eye view, OpenGL renders the 3D objects in that same way. This is not desirable and is fixed by multiplying the initial transformation matrix with an offset to simulate human vision, as seen in the image on the right.
4 Discussion
4.1 Overview
In the field of engineering, it is important to aim to make a project as good as possible, but it is equally important to know when to compromise and adjust to meet specific requirements. This was a major obstacle in this project; many methods were intended to be implemented but, because of time, could not be finished.
4.4 OpenGL
The main extension for OpenGL would be to extend the virtual world into a true exploration scene. The world could be a forest full of different grasses, shrubs, trees, and wildlife, and the user would be able to move around and feel as if he or she were actually in a forest. Also, real-life rotation and translation restrictions must be imposed in the virtual world: the user should not be able to rotate and see underneath the floor (x-axis) or translate into the air above the ground (y-axis).
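The restrictions above could be enforced by clamping the derived camera pose before it is handed to OpenGL. The following is a sketch under assumed names and placeholder limits, not part of the project's implementation:

```cpp
#include <algorithm>
#include <cassert>

// Limits on the virtual camera: pitch (rotation about the x-axis, degrees)
// so the user cannot look under the floor, and height y above the ground.
// The numeric limits used here are arbitrary placeholders.
struct PoseLimits { float minPitchDeg, maxPitchDeg, minY, maxY; };

void ClampPose(float& pitchDeg, float& y, const PoseLimits& lim) {
    pitchDeg = std::min(std::max(pitchDeg, lim.minPitchDeg), lim.maxPitchDeg);
    y = std::min(std::max(y, lim.minY), lim.maxY);
}
```

This would run once per frame, after the OpenCV pose is converted and before the modelview matrix is rebuilt.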
Yet the ethical issues encountered with the introduction of robotic systems into human social life are not limited to the workplace. There has been a recent trend toward developing anthropomorphic robots targeted at children and the elderly. The main reason for this trend is to create care-giving robots that can satisfy the need for a companion among the vulnerable members of society. Other reasons have been to monitor the health of the elderly or to act as a nanny for children who require more attention than their parents may be able to provide. Nevertheless, this trend raises the ethical issue of whether this is a form of deception and whether it is ethically acceptable. That is, these robots are designed to create the illusion that a human relationship could be formed with them. Certainly, today's robots have not reached the level at which a normal person could be confused about whether one is another personal being or not. However, many of the robots developed for children and the elderly do provide the illusion that they have some low level of understanding and personality. Sony's artificial intelligence robotic dog (AIBO) is able to mimic a normal dog to some degree: it can walk in a dog-like form and chase a ball. It can also detect distance, acceleration, sound, vibration, pressure, and voice commands, which enables the robot to recognize many situations and respond adequately. Similarly, it is able to show a variety of expressions, such as happiness, sadness, fear, and surprise, through body movement and the color and shape of its eyes. Other robots, such as the Hello Kitty robot, are marketed primarily to parents who are not able to spend time with their children; that is, the robot will keep the child happy and occupied. The vulnerable young and the elderly are most affected by the anthropomorphism of these robots, mainly because both have a strong need for social contact and lack the technological knowledge behind the robot.
That is, they may lack the knowledge that, though the robot may possess human characteristics, it is still not a personal being. It is worth noting that the problem is not the anthropomorphic characteristics of the robot in themselves. Young children often pretend that their toys are actual beings; yet in that case the child understands that it is just play time and that the toys themselves do not possess those characteristics. Similarly, an elderly person with Alzheimer's may forget that the robot is but a mimic of human characteristics. [6] There are several consequences of anthropomorphism in robotic systems. Children can spend too much time with the robots and thus diminish their interaction with other human beings, which hurts the children's understanding of how to interact with other humans. The care-giver has a strong influence on a child's development, since most of the child's learning comes from mimicking the care-giver. Negative consequences can also be found within the elderly group. If they start to imagine that they have a relationship with the robot, they may start to think that they have to take care of the robot at the expense of their own well-being. Similarly, the family of the elderly person may think that the robot satisfies all of her needs for companionship, causing her to feel even more lonely. However, not all consequences are negative. There are studies showing that robots can reduce stress levels in the elderly, but these studies do suggest that the robot cannot substitute for human interaction. [8]
References
[1] Mirza Tahir Ahmed, Matthew N. Dailey, José Luis Landabaso, and Nicolas Herrero. Robust key frame extraction for 3D reconstruction from video streams. In VISAPP (1), pages 231-236, 2010.

[2] Gary Bradski and Adrian Kaehler. Learning OpenCV. O'Reilly Media Inc., 2008.

[3] J. Dietsch. People meeting robots in the workplace [industrial activities]. Robotics Automation Magazine, IEEE, 17(2):15-16, 2010.

[4] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, ISBN: 0521540518, second edition, 2004.

[5] Philip Rideout. iPhone 3D Programming. O'Reilly Media Inc., 2010.

[6] A. Sharkey and N. Sharkey. Children, the elderly, and interactive robots. Robotics Automation Magazine, IEEE, 18(1):32-38, March 2011.

[7] G. Veruggio. Roboethics [TC spotlight]. Robotics Automation Magazine, IEEE, 17(2):105-109, 2010.
[8] G. Veruggio, J. Solis, and M. Van der Loos. Roboethics: Ethics applied to robotics [from the guest editors]. Robotics Automation Magazine, IEEE, 18(1):21-22, March 2011.
6 What we did
6.1 Andrew Abril
The project was a 50/50 team effort. We wrote the code together (physically in the same room), switching who typed whenever an idea struck. The only independent work was research: I mostly researched how to improve the project in general (uncalibrated methods, etc.), while my partner improved what was already done (performance issues).
7 Appendix
Extrinsic calibration using the chessboard method, with optical flow to find and track points:
- (void)CalibrateCameraWithOpticalFlow:(IplImage *)imgB
{
    // initialize parameters
    int row = 6;
    int column = 7;
    int board = row * column;
    int corner_count;
    // this flag only tells me if I did the first initialization
    bool flag = false;

    // calibrate, or need to init
    // do optical flow to track points
    if (NeedToInit) {
        // initialize calibration buffers and parameters
        /* Create buffers that are initialized only once. I used Mi to
           detect if they have been initialized, though any one could
           have been used. Maybe I should have checked each one, but
           that seemed too much trouble. */
        if (!Mi) {
            corners = (CvPoint2D32f *)cvAlloc(board * sizeof(corners[0]));
            image_points        = cvCreateMat(board, 2, CV_32FC1);
            object_points       = cvCreateMat(board, 3, CV_32FC1);
            point_counts        = cvCreateMat(1, 1, CV_32SC1);
            distortion_coeffs   = cvCreateMat(5, 1, CV_32FC1);
            rotation_vectors    = cvCreateMat(3, 1, CV_32FC1);
            translation_vectors = cvCreateMat(1, 3, CV_32FC1);
            rotation_mat        = cvCreateMat(3, 3, CV_32FC1);
            Mi                  = cvCreateMat(3, 3, CV_32FC1);
            flag = true;
        }
        // Mi values were calculated prior to this project in MATLAB
        CV_MAT_ELEM(*Mi, float, 0, 0) = 459.2453331f;
        CV_MAT_ELEM(*Mi, float, 0, 1) = 0.0f;
        CV_MAT_ELEM(*Mi, float, 0, 2) = 218.273285f;
        CV_MAT_ELEM(*Mi, float, 1, 0) = 0.0f;
        CV_MAT_ELEM(*Mi, float, 1, 1) = 459.2453331f;
        CV_MAT_ELEM(*Mi, float, 1, 2) = 178.969116f;
        CV_MAT_ELEM(*Mi, float, 2, 0) = 0.0f;
        CV_MAT_ELEM(*Mi, float, 2, 1) = 0.0f;
        CV_MAT_ELEM(*Mi, float, 2, 2) = 1.0f;
        // distortion coefficients
        CV_MAT_ELEM(*distortion_coeffs, float, 0, 0) = 0.070969f;
        CV_MAT_ELEM(*distortion_coeffs, float, 1, 0) = 0.777647f;
        CV_MAT_ELEM(*distortion_coeffs, float, 2, 0) = 0.009131f;
        CV_MAT_ELEM(*distortion_coeffs, float, 3, 0) = 0.013867f;
        CV_MAT_ELEM(*distortion_coeffs, float, 4, 0) = 5.141519f;
        // buffer
        CV_MAT_ELEM(*point_counts, int, 0, 0) = board;
        // undistort image
        [self Undistort:imgB];
        // find the chessboard points that will be tracked with optical flow
        int success = cvFindChessboardCorners(imgB, cvSize(row, column),
                                              corners, &corner_count,
                                              CV_CALIB_CB_ADAPTIVE_THRESH |
                                              CV_CALIB_CB_FILTER_QUADS |
                                              CV_CALIB_CB_FAST_CHECK);
        cvFindCornerSubPix(imgB, corners, corner_count, cvSize(11, 11),
                           cvSize(-1, -1),
                           cvTermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER,
                                          30, 0.1));
        if (success && (corner_count == board)) {
            // set up the world points and image points for calibration
            for (int i = 0, j = 0; j < board; ++i, ++j) {
                CV_MAT_ELEM(*image_points, float, i, 0) = corners[j].x;
                CV_MAT_ELEM(*image_points, float, i, 1) = corners[j].y;
                // this should only run once in this for loop
                if (flag) {
                    CV_MAT_ELEM(*object_points, float, i, 0) = j / column;
                    CV_MAT_ELEM(*object_points, float, i, 1) = j % column;
                    CV_MAT_ELEM(*object_points, float, i, 2) = 0.0f;
                }
            }
            NeedToInit = false;
        }
    } else {
        // track the initialized points
        if (corners) {
            char features_found[board];
            float feature_errors[board];
            int win_size = 5;
            CvSize pyr_sz = cvSize(imgA->width + 8, imgB->height / 3);
            IplImage *pyrA = cvCreateImage(pyr_sz, IPL_DEPTH_32F, 1);
            IplImage *pyrB = cvCreateImage(pyr_sz, IPL_DEPTH_32F, 1);
            CvPoint2D32f *cornersB =
                (CvPoint2D32f *)cvAlloc(board * sizeof(cornersB[0]));
            cvCalcOpticalFlowPyrLK(imgA, imgB, pyrA, pyrB, corners, cornersB,
                                   board, cvSize(win_size, win_size), 3,
                                   features_found, feature_errors,
                                   cvTermCriteria(CV_TERMCRIT_ITER |
                                                  CV_TERMCRIT_EPS, 20, 0.3),
                                   0);
            // draw optical flow
            for (int i = 0; i < corner_count; i++) {
                if (features_found[i]) {
                    printf("Got it\n");
                    CvPoint p0 = cvPoint(cvRound(corners[i].x),
                                         cvRound(corners[i].y));
                    CvPoint p1 = cvPoint(cvRound(cornersB[i].x),
                                         cvRound(cornersB[i].y));
                    cvLine(imgC, p0, p1, CV_RGB(255, 255, 255), 2);
                }
            }
            // check the points
            int numOfSuccessfulPoints = 0;
            for (int k = 0; k < board; k++) {
                if (features_found[k] && feature_errors[k] < 550)
                    numOfSuccessfulPoints++;
            }
            if (numOfSuccessfulPoints != board)
                NeedToInit = true;
            else {
                // create the image points
                for (int i = 0, j = 0; j < board; ++i, ++j) {
                    CV_MAT_ELEM(*image_points, float, i, 0) = cornersB[j].x;
                    CV_MAT_ELEM(*image_points, float, i, 1) = cornersB[j].y;
                }
            }
            cvReleaseImage(&imgA);
            imgA = imgB;
            corners = cornersB;
        } else
            NeedToInit = true;
    }
    // calibrate
    if (!NeedToInit) {
        // find the extrinsics and output the values
        // solvePnP(object_points, image_points, Mi, distortion_coeffs,
        //          rotation_vectors, translation_vectors, true);
        cvFindExtrinsicCameraParams2(object_points, image_points, Mi,
                                     distortion_coeffs, rotation_vectors,
                                     translation_vectors);
        float element1 = CV_MAT_ELEM(*translation_vectors, float, 0, 0);
        float element2 = CV_MAT_ELEM(*translation_vectors, float, 0, 1);
        float element3 = CV_MAT_ELEM(*translation_vectors, float, 0, 2);
        // float vec_x = CV_MAT_ELEM(*rotation_vectors, float, 0, 0);
        // float vec_y = CV_MAT_ELEM(*rotation_vectors, float, 1, 0);
        // float vec_z = CV_MAT_ELEM(*rotation_vectors, float, 2, 0);
        float scale = 1.00;
        cvRodrigues2(rotation_vectors, rotation_mat);
        // set the translation outputs (y and z flipped for OpenGL)
        CameraPose.w.x = element1 / scale;
        CameraPose.w.y = -1 * element2 / scale;
        CameraPose.w.z = -1 * element3 / scale;
        // set the rotation output
        // the x and the y are inverted like the translation
        // x
        CameraPose.x.x = CV_MAT_ELEM(*rotation_mat, float, 0, 0);
        CameraPose.x.y = CV_MAT_ELEM(*rotation_mat, float, 1, 0);
        CameraPose.x.z = CV_MAT_ELEM(*rotation_mat, float, 2, 0);
        // y
        CameraPose.y.x = CV_MAT_ELEM(*rotation_mat, float, 0, 1);
        CameraPose.y.y = CV_MAT_ELEM(*rotation_mat, float, 1, 1);
        CameraPose.y.z = CV_MAT_ELEM(*rotation_mat, float, 2, 1);
        // z
        CameraPose.z.x = CV_MAT_ELEM(*rotation_mat, float, 0, 2);
        CameraPose.z.y = CV_MAT_ELEM(*rotation_mat, float, 1, 2);
        CameraPose.z.z = CV_MAT_ELEM(*rotation_mat, float, 2, 2);
        // output stuff
        float rotxx = CV_MAT_ELEM(*rotation_mat, float, 0, 0);
        float rotxy = CV_MAT_ELEM(*rotation_mat, float, 1, 0);
        float rotxz = CV_MAT_ELEM(*rotation_mat, float, 2, 0);
        float rotyx = CV_MAT_ELEM(*rotation_mat, float, 0, 1);
        float rotyy = CV_MAT_ELEM(*rotation_mat, float, 1, 1);
        float rotyz = CV_MAT_ELEM(*rotation_mat, float, 2, 1);
        float rotzx = CV_MAT_ELEM(*rotation_mat, float, 0, 2);
        float rotzy = CV_MAT_ELEM(*rotation_mat, float, 1, 2);
        float rotzz = CV_MAT_ELEM(*rotation_mat, float, 2, 2);
        if (Output)
            [Output release];
        Output = [[NSMutableString alloc] initWithFormat:@"The vector is:\n"];
        [Output appendFormat:@"%f, %f, %f\n", rotxx, rotyx, rotzx];
        [Output appendFormat:@"%f, %f, %f\n", rotxy, rotyy, rotzy];
        [Output appendFormat:@"%f, %f, %f\n", rotxz, rotyz, rotzz];
        [Output appendFormat:@"%f, %f, %f\n", element1, element2, element3];
        // [Output appendFormat:@"%f, %f, %f\n", imgB->width, imgB->height, 0.0];
    }
}
Rendering, and transformation of the real camera to the virtual camera to represent the user's movement:
// C r e a t e t h e d e p t h b u f f e r . glGenRenderbuffersOES ( 1 , &m d e p t h R e n d e r b u f f e r ) ; glBindRenderbufferOES (GL RENDERBUFFER OES, m d e p t h R e n d e r b u f f e r ) ; g l R e n d e r b u f f e r S t o r a g e O E S (GL RENDERBUFFER OES, GL DEPTH COMPONENT16 OES, width , height ) ; // C r e a t e t h e f r a m e b u f f e r o b j e c t ; a t t a c h t h e d e p t h and c o l o r b u f f e r s . glGenFramebuffersOES ( 1 , &m f r a m e b u f f e r ) ; glBindFramebufferOES (GL FRAMEBUFFER OES, m f r a m e b u f f e r ) ; g l F r a m e b u f f e r R e n d e r b u f f e r O E S (GL FRAMEBUFFER OES, GL COLOR ATTACHMENT0 OES, GL RENDERBUFFER OES, m colorRenderbuffer ) ; g l F r a m e b u f f e r R e n d e r b u f f e r O E S (GL FRAMEBUFFER OES, GL DEPTH ATTACHMENT OES, GL RENDERBUFFER OES, m depthRenderbuffer ) ; // Bind t h e c o l o r b u f f e r f o r r e n d e r i n g . glBindRenderbufferOES (GL RENDERBUFFER OES, m c o l o r R e n d e r b u f f e r ) ;
// l o a d cube t e x t u r e g l B i n d T e x t u r e (GL TEXTURE 2D, m g r i d T e x t u r e [ 0 ] ) ; g l T e x P a r a m e t e r i (GL TEXTURE 2D, GL TEXTURE MIN FILTER, GL LINEAR) ; g l T e x P a r a m e t e r i (GL TEXTURE 2D, GL TEXTURE MAG FILTER, GL LINEAR) ;
m r e s o u r c e >LoadPngImage ( p u r p l e . j p g ) ; void p i x e l s = m r e s o u r c e >GetImageData ( ) ; i v e c 2 s i z e= m r e s o u r c e >GetImageSize ( ) ; glTexImage2D (GL TEXTURE 2D, 0 , GL RGBA, s i z e . x , s i z e . y , 0 , GL RGBA, GL UNSIGNED BYTE, p i x e l s ) ; m r e s o u r c e >UnloadImage ( ) ;
// Load cylinder texture.
glBindTexture(GL_TEXTURE_2D, m_gridTexture[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Load floor texture.
glBindTexture(GL_TEXTURE_2D, m_gridTexture[2]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
m_resource->LoadPngImage("green.jpg");
pixels = m_resource->GetImageData();
size = m_resource->GetImageSize();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, size.x, size.y, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
m_resource->UnloadImage();
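The glTexImage2D calls above upload GL_RGBA data, but a decoded JPEG is often tightly packed 3-byte RGB, so a conversion step may be required before upload. A minimal sketch; the helper name rgbToRgba is ours, not part of the project's resource loader:

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Sketch: expand tightly packed RGB pixels to RGBA (alpha = 255) so they can
// be uploaded with glTexImage2D(..., GL_RGBA, ..., GL_UNSIGNED_BYTE, ...).
// The helper name rgbToRgba is hypothetical.
std::vector<uint8_t> rgbToRgba(const std::vector<uint8_t>& rgb) {
    std::vector<uint8_t> rgba;
    rgba.reserve(rgb.size() / 3 * 4);
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        rgba.push_back(rgb[i]);     // R
        rgba.push_back(rgb[i + 1]); // G
        rgba.push_back(rgb[i + 2]); // B
        rgba.push_back(255);        // fully opaque alpha
    }
    return rgba;
}
```

Passing 3-byte pixels while declaring GL_RGBA is a common cause of skewed or garbage textures on the device.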
glViewport(0, 0, width, height);
glEnable(GL_DEPTH_TEST);

// Set up camera projection.
glMatrixMode(GL_PROJECTION);
glFrustumf(-0.467889f, 0.467889f, -0.467889f, 0.467889f, 1, 1000);
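The half-extent 0.467889 passed to glFrustumf can be derived from the intrinsic calibration: with the near plane at 1 and a centered principal point, the extent is (imageDim / 2) / f in normalized camera units. A sketch of that relation; the focal length and image size below are illustrative values, not the project's calibration results:

```cpp
#include <cmath>

// Sketch: compute a glFrustumf half-extent from a focal length (pixels),
// an image dimension (pixels), and the near-plane distance. Assumes the
// principal point is centered; the numeric inputs are illustrative only.
float frustumHalfExtent(float focalPx, float imageDimPx, float nearPlane) {
    // Half the image spans (imageDimPx / 2) / focalPx in normalized camera
    // coordinates; scaling by the near-plane distance gives the extent.
    return nearPlane * (imageDimPx * 0.5f) / focalPx;
}
```

Calling glFrustumf(-e, e, -e, e, near, far) with an extent computed this way reproduces the calibrated field of view in the virtual camera.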
// x
offset.x.x = 1;
offset.x.y = 0;
offset.x.z = 0;
offset.x.w = 0;
// y
offset.y.x = 0;
offset.y.y = 1;
offset.y.z = 0;
offset.y.w = 0;
// z
offset.z.x = 0;
offset.z.y = 0;
offset.z.z = 1;
offset.z.w = 0;
// w
offset.w.x = 0;
offset.w.y = 0;
offset.w.z = 0;
offset.w.w = 1;
}

void RenderingEngine1::Render() const
{
    glClearColor(0.5f, 0.5f, 0.5f, 1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
glMultMatrixf(Trans.Pointer());

// To make it look good.
glTranslatef(4, 9, 0);
glRotatef(90, 0, 0, 1);
glRotatef(90, 0, 1, 0);
glRotatef(10, 0, 0, 1);

// Enable vertex coordinate array when glDrawArrays is called.
glEnableClientState(GL_VERTEX_ARRAY);
// Enable normal array when glDrawArrays is called.
glEnableClientState(GL_NORMAL_ARRAY);
// Enable texture coordinate array when glDrawArrays is called.
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
// Beginning of cube.
// Load texture for cube.
glBindTexture(GL_TEXTURE_2D, m_gridTexture[0]);
// Draw cube.
glPushMatrix();
glTranslatef(2, 0.5, 1);
glVertexPointer(3, GL_FLOAT, 0, cubeVerts);
// glNormalPointer(GL_FLOAT, 0, bananaNormals);
glTexCoordPointer(2, GL_FLOAT, 0, cubeTexCoords);
glDrawArrays(GL_TRIANGLES, 0, cubeNumVerts);
glPopMatrix();
// End of cube.
// Beginning of cylinder.
// Load texture for cylinder.
glBindTexture(GL_TEXTURE_2D, m_gridTexture[1]);
// Draw cylinder.
glPushMatrix();
glTranslatef(3, 1, 1);
glVertexPointer(3, GL_FLOAT, 0, cylinderVerts);
glTexCoordPointer(2, GL_FLOAT, 0, cylinderTexCoords);
glDrawArrays(GL_TRIANGLES, 0, cylinderNumVerts);
glPopMatrix();
// End of cylinder.

// Beginning of floor.
// Load texture for floor.
glBindTexture(GL_TEXTURE_2D, m_gridTexture[2]);
// Draw floor.
glVertexPointer(3, GL_FLOAT, 0, planeVerts);
glTexCoordPointer(2, GL_FLOAT, 0, planeTexCoords);
glDrawArrays(GL_TRIANGLES, 0, planeNumVerts);
// End of floor.
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisable(GL_TEXTURE_2D);
// glDisableClientState(GL_COLOR_ARRAY);
glPopMatrix();
}

void RenderingEngine1::UpdateAnimation(float timeStep)
{
    if (m_animation.Current == m_animation.End)
        return;
    m_animation.Elapsed += timeStep;
    if (m_animation.Elapsed >= AnimationDuration) {
        m_animation.Current = m_animation.End;
    } else {
        float mu = m_animation.Elapsed / AnimationDuration;
        m_animation.Current = m_animation.Start.Slerp(mu, m_animation.End);
    }
}

void RenderingEngine1::SetTransformation(mat4 trans)
{
    /*
    Offset matrix:
    mat4 offset = { 1, 0, 0, 0,
                    0, 1, 0, 0,
                    0, 0, 1, 0,
                    0, 0, 0, 1 };
    */
    // Trans = trans * offset;
    Trans = trans;
    mat3 temp;
    if (!Start && (trans.w.x != 0)) {
        Start = true;
        temp = trans.ToMat3();
        Last = mat4(temp.Transposed());
    } else {
        // Extract the rotation from the trans.
        temp = trans.ToMat3();
        mat4 temp2 = mat4(temp);
        mat4 newRotation = temp2 * Last;
        // Set the translation back.
        newRotation.w.x = trans.w.x;
        newRotation.w.y = trans.w.y;
        newRotation.w.z = trans.w.z;
        Trans = newRotation * offset;
    }
    // presentTw
    // pastTw
    // pastTpresent
    // WTpast * presentTW
}
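UpdateAnimation above relies on the math library's Slerp to interpolate between the start and end orientations. The operation it performs can be sketched for unit quaternions as follows; this is the standard spherical-linear-interpolation formula, not the project's own Quaternion class:

```cpp
#include <cmath>

// Sketch of spherical linear interpolation between two unit quaternions
// stored as (w, x, y, z). The Quat type and slerp name are ours.
struct Quat { float w, x, y, z; };

Quat slerp(Quat a, Quat b, float mu) {
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if (dot < 0) {                 // negate one input to take the short arc
        b = {-b.w, -b.x, -b.y, -b.z};
        dot = -dot;
    }
    if (dot > 0.9995f) {           // nearly parallel: fall back to lerp
        return { a.w + mu*(b.w - a.w), a.x + mu*(b.x - a.x),
                 a.y + mu*(b.y - a.y), a.z + mu*(b.z - a.z) };
    }
    float theta = std::acos(dot);  // angle between the two orientations
    float s  = std::sin(theta);
    float wa = std::sin((1 - mu) * theta) / s;
    float wb = std::sin(mu * theta) / s;
    return { wa*a.w + wb*b.w, wa*a.x + wb*b.x,
             wa*a.y + wb*b.y, wa*a.z + wb*b.z };
}
```

At mu = 0 the result is the start orientation, at mu = 1 the end orientation, and the rotation angle varies linearly in between, which is why the camera motion appears smooth.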