You are on page 1of 138

DESIGN OF A 4-CHANNEL VIRTUAL INSTRUMENT VIDEO ADAPTER FOR DIGITAL DATA

MULTIPLEXING WITH FACIAL VECTOR IDENTIFICATION

Allen Joshua G. Calderon


Daniela L. Espeña
Eugene B. Oca

Technological Institute of the Philippines


938 Aurora Boulevard, Cubao, Quezon City

April 2019
ACKNOWLEDGEMENT

This design project became a reality with the kind support and help of many individuals. We would
like to extend our sincere gratitude to all of them.

First, to our Almighty God, for giving us strength, guidance, good health, knowledge and wisdom,
grace and blessing that enabled us to finish this research.

To our adviser, Engr. Ryann A. Alimuin, for the support and valuable knowledge you have imparted
and your active involvement and guidance that is beneficial for this study. Without your assistance and
dedication throughout the process, this paper would have never been completed.

To our program chair, Engr. Shearyl U. Arenas, who has the character and substance of a
professional, who continually and convincingly conveyed a spirit of adventure in regard to this research. This
design project would not have been possible without your guidance and persistent help.

Most importantly, none of this could have happened without the support of our family. This thesis
stands a testament of your unconditional love and encouragement.

2
APPROVAL SHEET

The design project entitled “Design of A 4-Channel Virtual Instrument Video Adapter for Digital Data
Multiplexing with Facial Vector Identification” prepared by Allen Joshua G. Calderon, Daniela L.
Espeña, Eugene B. Oca, of the Electronics Engineering Department was examined and evaluated by the
members of the Student Design Evaluation Panel, and is hereby recommended for approval.

ENGR. RYANN A. ALIMUIN


Adviser

Accepted by in partial fulfilment of the requirements for the degree of Bachelor of Science in Electronics
Engineering.

ENGR. SHEARYL U. ARENAS DR. JESUSA N. PADILLA


Program Chair, Electronics Engineering Dean, College of Engineering and Architecture

3
PROOFREADER APPROVAL SHEET

This project entitled “Design of A 4-Channel Virtual Instrument Video Adapter for Digital Data
Multiplexing with Facial Vector Identification” prepared by Allen Joshua G. Calderon, Daniela L.
Espeña, Eugene B. Oca, of the Electronics Engineering Department, in partial fulfilment of the requirements
for the degree of Bachelor of Science in Electronics Engineering is hereby proofread.

MS. PRESCILLA LUZONG


Proof Reader
Faculty, College of Education

4
ABSTRACT

Crime is one of the major problems here in the Philippines. Crimes can happen in public or private places
such as in buildings, colleges, business or private properties. With the aid of surveillance systems, it did help
a lot in terms of the security and protection of the citizens as well as the establishments that have been
installed with surveillance systems.

A surveillance system is used for monitoring a particular vicinity 24/7. One of the subsystems of a surveillance
system is the DVR which records the real-time footage taken by CCTV Cameras. Some of the existing DVRs
limits its function in the recording which only features motion detection without any training of a, particularly
registered faces for target identification which requires human intervention.  In addition, data logging is not
yet integrated into DVRs according to some journals. This kind of system is very beneficial especially to
crimes such as theft or burglary.

According to Rappler, an online news website based in the Philippines, there was a total of 100,668 index
crimes recorded from January to November 2017. These statistics were based from the PNP's DIDM
(Directorate for Investigation and Detective Management) which represents a drop of 21.8% from the year
2016 where it has a crime rate of 128, 730. For theft, almost a third was dropped from 46, 232 to 32,356.

In lieu of the problem, the proponents thought of something that would further help the existing devices which
are to design of a 4-Channel Virtual Instrument Video Adapter for Digital Data Multiplexing with Facial Vector
Identification.

5
LIST OF TABLES

Table 1 Review of the Existing Journals ...................................................................................................... 16


Table 2 Definition of Terms .......................................................................................................................... 36
Table 3 Marketing and Engineering Requirements ...................................................................................... 37
Table 4 Design Requirements ..................................................................................................................... 38
Table 5 Target Specifications ...................................................................................................................... 39
Table 6 Design Methods for the Design Alternatives ................................................................................... 53
Table 7 Summary of Design Alternatives ..................................................................................................... 54
Table 8 Morphological Chart ........................................................................................................................ 55
Table 9 Design Consideration to Satisfy the Criterion ................................................................................. 61
Table 10 Designer’s Raw Ranking Criterion Summary ................................................................................ 62
Table 11 AHP Decision Matrix ..................................................................................................................... 64
Table 12 Design Constraints ....................................................................................................................... 66
Table 13 Price Differentiation of Design Alternatives ................................................................................... 68
Table 14 Economic Constraint Ranking....................................................................................................... 69
Table 15 Performance Constraint ................................................................................................................ 70
Table 16 Summary of Performance Trade-off ............................................................................................. 74
Table 17 Availability of Materials based from the Philippines ...................................................................... 75
Table 18 Manufacturability Constraint Ranking ........................................................................................... 76
Table 19 Summary of the Output Parameters based on Different Constraints ............................................ 77
Table 20 Overall Ranking of Design Alternatives......................................................................................... 80
Table 21 First Iteration for Sensitivity Analysis ............................................................................................ 82
Table 22 Second Iteration for Sensitivity Analysis ....................................................................................... 83
Table 23 Summary of Output Parameters ................................................................................................... 84
Table 24 Summary of Performance Constraint Ranking Using Pareto Pairwise Comparison ..................... 87
Table 25 Design Alternatives Normalized Values ........................................................................................ 89
Table 26 Technical Specifications ............................................................................................................... 89
Table 27 Summary of the Confidence of Unauthorized Person ................................................................. 108
Table 28 Recognition and Conversion Time Testing ................................................................................. 111
Table 29 Signal-to-Noise Ratio of the system with 90 iterations. ............................................................... 113
LIST OF EQUATIONS

Equation 1 Pulse Width Sampler ................................................................................................................. 30


Equation 2 Equation used for Integral Image ............................................................................................... 32
Equation 3 Fourier Transform ...................................................................................................................... 49
Equation 4 Spatial domain image processing .............................................................................................. 51
Equation 5 Intensity of the arbitrary point before and after transformation .................................................. 51
Equation 6 ADC Conversion ........................................................................................................................ 56
Equation 7 Raw Signal ................................................................................................................................ 57
Equation 8 Sampled Value .......................................................................................................................... 57
Equation 9 Summation of the Weighting Factors ......................................................................................... 63
Equation 10 Percentage Difference between Two Values ........................................................................... 67
Equation 11 Subordinate Ranking ............................................................................................................... 67
Equation 12 Overall Ranking ....................................................................................................................... 67
Equation 13 Formula in determining the Number of Iterations for the Sensitivity Analysis .......................... 82
Equation 14 Minimization Objective ............................................................................................................. 84
Equation 15 Maximization Objective ............................................................................................................ 84
Equation 16 Accuracy Formula (Fix Formatting).......................................................................................... 91

7
LIST OF FIGURES

Figure 1 Project Development Cycle ........................................................................................................... 20


Figure 2 Trigrid Electrical Services .............................................................................................................. 22
Figure 3 Architecture of a CCV System ....................................................................................................... 22
Figure 4 Weighted Bounding Single Neural Network ................................................................................... 24
Figure 5 Background Subtraction applied at the sample video frame .......................................................... 24
Figure 6 Comparison between the original image after filtering using HMMOD ........................................... 25
Figure 7 Median Filtering Example for an Image Adjustment ...................................................................... 25
Figure 8 Comparison of success rates achieved by various algorithms ...................................................... 26
Figure 9 Level 0 Block Diagram ................................................................................................................... 27
Figure 10 Level 1 Block Diagram ................................................................................................................. 27
Figure 11 Level 1 Specific Block Diagram ................................................................................................... 28
Figure 12 Conceptual Framework................................................................................................................ 28
Figure 13 Input, Process, Output ................................................................................................................. 29
Figure 14 General Process of Face Recognition ......................................................................................... 31
Figure 15 An example of rectangle features showing relative to the enclosing detection window ............... 32
Figure 16 Example where Integral Image is utilized .................................................................................... 32
Figure 17 Basic Concept of Multiplexing...................................................................................................... 33
Figure 18 Basic Concept of FDM ................................................................................................................. 34
Figure 19 FDM Multiplexing Process ........................................................................................................... 34
Figure 20 FDM Demultiplexing Process ...................................................................................................... 35
Figure 21 Operation of Time-Division Multiplexing ...................................................................................... 35
Figure 22 Flowchart of the System .............................................................................................................. 40
Figure 23 General System Flowchart of Artificial Neural Network ............................................................... 41
Figure 24 Specific System Flowchart of Artificial Neural Network................................................................ 42
Figure 25 Block Diagram for Basis .............................................................................................................. 44
Figure 26 Mathematical Model of the System .............................................................................................. 44
Figure 27 Design Alternative 1 .................................................................................................................... 45
Figure 28 Mathematical model for Design Alternative 1 .............................................................................. 45
Figure 29 Comparison of Adaptive Filtering ................................................................................................. 46
Figure 30 Adaptive Filter characteristics around the edge texture ............................................................... 46
Figure 31 Testing results of proposed Adaptive Filter.................................................................................. 47
Figure 32 Test Performance for Design Alternative 1 .................................................................................. 47
Figure 33 Design Alternative 2 .................................................................................................................... 48
Figure 34 Mathematical model for Design Alternative 2 .............................................................................. 48
Figure 35 Learning curves for the frequency bin-normalized algorithm (1), frequency domain Newton
algorithm (2), and proposed new algorithm (3) ............................................................................................ 49
Figure 36 Global filtering processing times for BLOB and TUBULAR shape detections .............................. 50
Figure 37 Test Performance for Design Alternative 2 .................................................................................. 50
Figure 38 Design Alternative 3 .................................................................................................................... 51
Figure 39 Mathematical model for Design Alternative 3 .............................................................................. 51
Figure 40 Evaluation Results ....................................................................................................................... 52
Figure 41 Test Performance for Design Alternative 3 .................................................................................. 52
Figure 42 Performance Matrix ..................................................................................................................... 54
Figure 43 Mean Difference of Design Alternatives....................................................................................... 55
Figure 44 Basic Schematic Diagram of an Analog-to-Digital Converter....................................................... 56
8
Figure 45 PIC Microcontroller ...................................................................................................................... 58
Figure 46 PIC Architecture .......................................................................................................................... 58
Figure 47 RS232 to USB ............................................................................................................................. 59
Figure 48 Ranking Scale Diagram ............................................................................................................... 67
Figure 49 Trade-off Ranking Scale for Economic Constraint ....................................................................... 70
Figure 50 Trade-off Ranking Scale for Performance Constraint .................................................................. 74
Figure 51 Manufacturability Constraint Ranking Trade-off ........................................................................... 77
Figure 52 Overall Trade-Off Ranking ........................................................................................................... 81
Figure 53 Sensitivity Analysis ...................................................................................................................... 81
Figure 54 Computational Time under Performance Constraint Ranking Using Pareto Pairwise Comparison
.................................................................................................................................................................... 85
Figure 55 Extraction under Performance Constraint Ranking Using Pareto Pairwise Comparison ............. 86
Figure 56 Accuracy under Performance Constraint Ranking Using Pareto Pairwise Comparison............... 86
Figure 57 Confidence under Performance Constraint Ranking Using Pareto Pairwise Comparison ........... 87
Figure 58 Economic Constraint Ranking Using Pareto Pairwise Comparison ............................................. 88
Figure 59 Manufacturability Constraint Ranking Using Pareto Pairwise Comparison .................................. 88
Figure 60 Graphical Processing Unit using CUDA....................................................................................... 90
Figure 61 Creating Vector for Datasets ....................................................................................................... 90
Figure 62 Testing Accuracy ......................................................................................................................... 90
Figure 63 Test and Evaluation Results ........................................................................................................ 91
Figure 64 Confusion Matrices with 2 known persons................................................................................... 91
Figure 65 Confusion Matrices with 2 known persons (Normalized) ............................................................. 92
Figure 66 Test and Evaluation Results for the 3 known persons ................................................................. 92
Figure 67 Confusion Matrices with 3 known persons................................................................................... 93
Figure 68 Confusion Matrices with 3 known persons (Normalized) ............................................................. 94
Figure 69 Test and Evaluation Results for the 3 known persons and an unknown ...................................... 95
Figure 70 True and Predicted Labels........................................................................................................... 96
Figure 71 Confusion Matrix with 3 known person and an unknown ............................................................. 96
Figure 72 Average Precision Accuracy Test ................................................................................................ 97
Figure 73 Prediction Accuracy vs Distance ................................................................................................. 98
Figure 74 Normal Angle ............................................................................................................................... 99
Figure 75 Closed eyes ................................................................................................................................. 99
Figure 76 Left tilt angle ................................................................................................................................ 99
Figure 77 Right tilt angle ............................................................................................................................ 100
Figure 78 Normal Angle ............................................................................................................................. 100
Figure 79 Closed eyes ............................................................................................................................... 100
Figure 80 Left tilt angle .............................................................................................................................. 100
Figure 81 Right tilt angle ............................................................................................................................ 101
Figure 82 Normal ....................................................................................................................................... 101
Figure 83 Closed eyes ............................................................................................................................... 101
Figure 84 Left tilt angle .............................................................................................................................. 101
Figure 85 Right tilt angle ............................................................................................................................ 102
Figure 86 Normal ....................................................................................................................................... 102
Figure 87 closed eyes................................................................................................................................ 102
Figure 88 Left tilt angle .............................................................................................................................. 102
Figure 89 Right tilt angle ............................................................................................................................ 103
Figure 90 Normal ....................................................................................................................................... 103
9
Figure 91 Closed eyes ............................................................................................................................... 103
Figure 92 Left tilt angle .............................................................................................................................. 103
Figure 93 Right title angle .......................................................................................................................... 104
Figure 94 Normal ....................................................................................................................................... 104
Figure 95 Closed eyes ............................................................................................................................... 104
Figure 96 Left tilt angle .............................................................................................................................. 104
Figure 97 Right tilt angle ............................................................................................................................ 105
Figure 98 Normal ....................................................................................................................................... 105
Figure 99 Closed eyes ............................................................................................................................... 105
Figure 100 Left tilt angle ............................................................................................................................ 105
Figure 101 Right tilt angle .......................................................................................................................... 106
Figure 102 Normal Pace at 1 ft .................................................................................................................. 106
Figure 103 Normal Pace at 2 ft .................................................................................................................. 106
Figure 104 Normal Pace at 3 ft .................................................................................................................. 107
Figure 105 Normal Pace at 4 ft .................................................................................................................. 107
Figure 106 Normal Pace at 5 ft .................................................................................................................. 107
Figure 107 Normal Pace at 6 ft .................................................................................................................. 108
Figure 108 Normal Pace at 7 ft .................................................................................................................. 108
Figure 113 Graph of the Confidence of Unauthorized Person ................................................................... 110
Figure 114 Detection Time ........................................................................................................................ 112
Figure 115 Extraction Time ....................................................................................................................... 112
Figure 116 Recognition Time ..................................................................................................................... 113
Figure 117 Signal-to-Noise Ratio ............................................................................................................... 116
Figure 118 Company Logo ........................................................................................................................ 120
Figure 119 Business model of the company .............................................................................................. 121

10
ACRONYMS

ANN Artificial Neural Network


CCTV Closed Circuit Television
DL Deep Learning
DVR Digital Video Recorder
DSP Digital Signal Processing
MUX Multiplexer
PL Programming Language
VI Virtual Instrument
ISO International Organization for Standardization
GUI Graphical User Interface
GPU Graphics Processing Unit
CNN Convolutional Neural Network
MCDA Multi-Criteria Decision Analysis
ROI Return of Investment
TABLE OF CONTENTS
ACKNOWLEDGEMENT ................................................................................................................................ 2
APPROVAL SHEET ...................................................................................................................................... 3
PROOFREADER APPROVAL SHEET .......................................................................................................... 4
ABSTRACT ................................................................................................................................................... 5
LIST OF TABLES .......................................................................................................................................... 6
LIST OF EQUATIONS ................................................................................................................................... 7
LIST OF FIGURES ........................................................................................................................................ 8
ACRONYMS ................................................................................................................................................ 11
TABLE OF CONTENTS............................................................................................................................... 12
CHAPTER 1: PROJECT BACKGROUND ................................................................................................... 14
1.1 The Project .................................................................................................................................. 15
1.2 Prior Art Search ........................................................................................................................... 16
1.3 Statement of the Problem ............................................................................................................ 18
1.4 Project Objectives ........................................................................................................................ 18
1.4.1 General Objective ................................................................................................................ 18
1.4.2 Specific Objectives .............................................................................................................. 18
1.5 Significance of the Study ............................................................................................................. 18
1.6 Project Scope and Delimitations .................................................................................................. 19
1.7 Project Development ................................................................................................................... 20
1.7.2 Research about the Problem ............................................................................................... 20
1.7.3 Requirement Specification ................................................................................................... 21
1.7.4 Generation of Concepts ....................................................................................................... 21
1.7.5 Designing............................................................................................................................. 21
1.7.6 System Integration ............................................................................................................... 21
1.7.7 System Test ......................................................................................................................... 21
1.7.8 Delivery and Acceptance ..................................................................................................... 21
CHAPTER 2: GENERAL DESIGN INPUTS ................................................................................................. 22
2.1 The Client .................................................................................................................................... 22
2.2 Review of Related Literature ....................................................................................................... 22
2.3 Concept of the Study ................................................................................................................... 27
2.4 Design Inputs ............................................................................................................................... 28
2.4.1 Conceptual Framework ........................................................................................................ 28
2.4.2 Theoretical Framework ........................................................................................................ 29
2.5 Definition of Terms....................................................................................................................... 36

2.6 Design Standards ........................................................................................................................ 36
2.7 Design Requirements .................................................................................................................. 38
2.8 Target Specifications ................................................................................................................... 39
CHAPTER 3: DESIGN MORPHOLOGY ...................................................................................................... 40
3.1 Generating Design Alternatives ................................................................................................... 40
3.1.1 Overview of the Conceptual Design Developed................................................................... 40
3.1.2 Subsystems ......................................................................................................................... 43
3.2 Design Alternatives ...................................................................................................................... 44
3.2.1 Simulation ............................................................................................................................ 44
3.2.2 Design Alternative 1............................................................................................................. 45
3.2.3 Design Alternative 2............................................................................................................. 48
3.2.4 Design Alternative 3............................................................................................................. 51
3.3 Design Methods ........................................................................................................................... 53
3.4 Morphological Chart..................................................................................................................... 55
3.5 Design Components .................................................................................................................... 56
3.5.1 Data Encoder ....................................................................................................................... 56
3.5.2 Artificial Neural Network ...................................................................................................... 59
3.6 Multiple Constraints ..................................................................................................................... 61
3.7 Testing and Assessment Plan ..................................................................................................... 63
3.7.1 Constraint Assessment Method ........................................................................................... 63
3.7.2 Final Design Assessment Method ....................................................................................... 64
CHAPTER 4: CONSTRAINTS, TRADE-OFFS, AND SENSITIVITY ANALYSIS ......................................... 66
4.1 Design Constraints ...................................................................................................................... 66
4.2 Design Trade-off Analysis............................................................................................................ 66
4.3 Design Constraints Assessment .................................................................................................. 67
4.3.1 Economic Constraint............................................................................................................ 67
4.3.2 Performance Constraint ....................................................................................................... 70
4.3.3 Manufacturability Constraint ................................................................................................ 74
4.4 Winning Design............................................................................................................................ 77
4.5 Sensitivity Analysis ...................................................................................................................... 81
4.6 Design Optimization..................................................................................................................... 83
4.7 Final Design................................................................................................................................. 89
4.7.1 Technical Specifications ...................................................................................................... 89
4.7.2 Testing ................................................................................................................................. 90

4.8 Evaluation of Objectives ............................................................................................................ 116
CHAPTER 5: SUMMARY OF FINDINGS, CONCLUSION, AND RECOMMENDATION ........................... 117
5.1 Results and Analysis ................................................................................................................. 117
5.2 Conclusion ................................................................................................................................. 117
5.3 Recommendation....................................................................................................................... 117
CHAPTER 6: BUSINESS MODEL ............................................................................................................. 119
6.1 Executive Summary ................................................................................................................... 119
6.2 Company Description ................................................................................................................ 119
6.3 Mission ...................................................................................................................................... 120
6.4 Vision ......................................................................................................................................... 120
6.5 Market Research ....................................................................................................................... 120
6.6 Market Information ..................................................................................................................... 120
6.7 Business Model ......................................................................................................................... 121
BIBLIOGRAPHY ........................................................................................................................................ 122
APPENDICES ........................................................................................................................................... 124

CHAPTER 1: PROJECT BACKGROUND

1.1 The Project

Technology is growing rapidly because of the innovations implemented by scientists, engineers, and inventors.
Virtual instrumentation is one of the important trends that resulted from the widespread use of computers; it
offers the advantage of flexible, software-based development of user-defined tools for process control and
data visualization. By producing adaptable systems, the use of modular programming makes virtual instruments
intuitive and user-friendly (Petrişor, Foşalău, & Măriut, 2012). According to National Instruments, with virtual
instrumentation, professionals can decrease their development time and design exceptional products at much
lower design cost. Utilizing virtual instrumentation and altering the project hardware is remarkably simple
and can be carried out at low cost (Visan, Jurian, & Lita, 2017).

A system of this kind evidently needs a medium for its application. One commonly used medium is the CCTV
camera; another is any device that intercepts information transmitted electronically, such as phone calls and
Internet traffic. Surveillance is used by governments for crime prevention, intelligence gathering, the safety
of a process, person, or object, and the investigation of crime. Surveillance is increasingly a subject of
academic study, spanning journals, books, and research centers.

In this case, the goal is a virtual instrument representation of a DVR that functions the same as the existing
device: an equivalent virtual instrument with the same capability and functionality. A key advantage of
connected educational experiments is the possibility to assimilate results online, in the environment of
computers and complex networks (Arsinte, Sumalan, & Lupu, 2017). In recent years, the density of surveillance
cameras and other technologies has increased exponentially. With the vast utilization of such systems, further
technical flexibility for CCTV systems is required.

1.2 Prior Art Search

The proponents took time to research the related technologies applicable to the design project. Table 1 shows
the title of each article, its publication number, and the proponents' feedback regarding the patent.

Table 1 Prior Art Review

- Smart Image Processing CCTV Camera Device and Method for Operating Same (US20060098729A1) — A device and a
  method for operating the same are proposed. The camera can process, compress, and store digital images, and
  zoom into a captured image using an internal digital image-capturing component.
- Digital Video Recording System (US7116353B2) — A digital video recording system whose image processor has
  memory storing the time and date information of when each image was acquired.
- Digital Video Recording System (US8577205) — A digital video recorder with an integrated DVD recording
  device that accepts analog streams, converts them into MPEG-formatted streams for internal transfer and
  manipulation, and converts them back into TV output signals for a monitor.
- Digital Video Recording System and its Recording Medium (US2001009604A1) — An improvement for recording a
  digital video data stream, capable of recording an MPEG transport stream to be digitally broadcast, together
  with a system for recording support information of an MPEG transport stream.
- Centralized Digital Video Recording System with Bookmarking and Playback from Multiple Locations
  (US20050166258A1) — A centralized DVR and reproduction system linking several reproduction and control units
  to a centralized server, allowing users to access the same program and to pause and play from marker points
  made by the user.
- Device for Receiving, Displaying and Simultaneously Recording Television Images via a Buffer (WO9922513) —
  A device for receiving, storing, and displaying television images, comprising buffer means for storing the
  television images.
- Method and System for a Distributed Video Recorder (US20030188320) — A system and method for remote display
  and control of an A/V data stream from a capture device.
- Mobile Digital Video Recording System (US7768548B2) — A mobile security system for mass transit vehicles in
  which multiple cameras capture a plurality of images that are integrated with sound and data of interest and
  saved to a storage device.

DISCLOSED FEATURES OF THESIS / INVENTION

| Existing Prior Art | Modular | Target Identification | Data Logging | Compression | Storage | Playback | Notifications | Microcontroller |
|---|---|---|---|---|---|---|---|---|
| Design of a 4-Channel Virtual Instrument Video Adapter for Digital Data Multiplexing with Face Vector Identification (this design) | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Smart Image Processing CCTV Camera Device and Method for Operating Same (US20060098729A1) | ✓ | ✘ | ✘ | ✓ | ✓ | ✓ | ✓ | ✘ |
| Digital Video Recording System (US7116353B2) | ✘ | ✘ | ✘ | ✓ | ✓ | ✓ | ✘ | ✘ |
| Digital Video Recording System (US8577205) | ✘ | ✘ | ✘ | ✓ | ✓ | ✓ | ✘ | ✘ |
| Digital Video Recording System and its Recording Medium (US2001009604A1) | ✘ | ✘ | ✘ | ✓ | ✓ | ✓ | ✘ | ✘ |
| Centralized Digital Video Recording System with Bookmarking and Playback from Multiple Location (US20050166258A1) | ✘ | ✘ | ✘ | ✓ | ✓ | ✓ | ✘ | ✘ |
| Device for Receiving, Displaying and Simultaneously Recording Television Image via a Buffer (WO9922513A2) | ✘ | ✘ | ✘ | ✓ | ✓ | ✓ | ✘ | ✘ |
| Method and System for a Distributed Video Recorder (US20030188320A1) | ✘ | ✘ | ✘ | ✓ | ✓ | ✓ | ✘ | ✘ |
| Mobile Digital Video Recording System (US7768548B2) | ✘ | ✘ | ✘ | ✓ | ✓ | ✓ | ✘ | ✘ |

1.3 Statement of the Problem

Surveillance systems have become widespread since security is considered a top priority. Having this kind of
system in companies, buildings, hospitals, and private properties is evidently beneficial since it records
real-time footage of the vicinity. According to the Philippine National Police - Directorate for Investigation
and Detective Management (PNP-DIDM), a total of 100,668 index crimes were recorded from January to November
2017, a drop of 21.8% from the previous year (2016), when the figure reached 128,730. Theft/burglary in
particular dropped by almost one-third, from 46,232 crimes down to 32,356. It is therefore necessary to
provide security surveillance, and one way to do so is through CCTV cameras.

A surveillance system is used for monitoring a particular area. One of its subsystems is the DVR, or digital
video recorder, which records the real-time footage taken by the CCTV cameras. Each camera is connected to a
DVR, which makes the setup bulky. Some existing DVRs limit their function to recording and only feature motion
detection, without any training on registered faces for target identification, so human intervention is still
required.

In response to this problem, the proponents propose to design a 4-Channel Virtual Instrument Video Adapter
for Digital Data Multiplexing with Facial Vector Identification.

1.4 Project Objectives

1.4.1 General Objective


To design a 4-channel virtual instrument video adapter for digital data multiplexing with facial vector
identification using an artificial neural network.

1.4.2 Specific Objectives


Upon the completion of the project, the proponents aim:

1. To develop a virtual instrument that utilizes machine learning algorithms and performs real-time
   video recording, refining existing DVR hardware, with a unit testing success rate of 80%.
2. To design a digital filter/feature extraction stage for image processing with a face detection success
   rate of at least 80%.
3. To incorporate machine learning algorithms using artificial neural networks, by means of face vector
   identification, for target identification with a face recognition accuracy of at least 80%.
4. To employ analog-to-digital conversion for the multiplexing of the analog input signals from the CCTV
   cameras with a conversion delay of ≤ 20 ms.
5. To test the performance by measuring the duration of the video feed conversion process, with
   acceptable quality defined as an SNR > 1.
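Objective 5 treats SNR > 1 as the acceptance threshold for conversion quality. As an illustration only (the frame sizes and noise level below are hypothetical, not measured from the actual system), the ratio of signal power to noise power could be estimated in Python as:

```python
import numpy as np

def snr(reference, output):
    """Signal-to-noise ratio: signal power divided by noise power,
    where the noise is the difference between the converted output
    and the clean reference frame."""
    noise = output.astype(float) - reference.astype(float)
    signal_power = np.mean(reference.astype(float) ** 2)
    noise_power = np.mean(noise ** 2)
    return signal_power / noise_power

# Hypothetical 8-bit frame with mild additive Gaussian noise
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(240, 320)).astype(float)
noisy = frame + rng.normal(0, 5, size=frame.shape)

quality_ok = snr(frame, noisy) > 1   # the design target
```

A ratio above 1 simply means the signal carries more power than the noise; in practice the measured value for an acceptable feed would be far above that floor.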

1.5 Significance of the Study

This study contributes mainly to the welfare of those who sell and distribute products such as CCTV cameras,
because they can integrate the features of the design. It can also help different companies through the
utilization of the virtual instrument. Moreover, a virtual instrument is a convenient way to strengthen
security, and as such it is considered economically acceptable. In addition, the system itself is flexible
since it is run by software and can easily be modified in any way the user desires. This kind of system would
benefit and contribute to different aspects of society.

For schools and offices, the faces of professors in different departments may be trained into the database so
that unregistered people entering the offices/departments can be detected and the person in charge notified.
It can also be deployed in the executive offices of the school premises.

For farms and other rural areas, it will help secure properties by monitoring the area with target
identification. Owners can be alerted and notified when intruders are in the vicinity, so that theft or
burglary can be quickly reported to the nearest police station or barangay hall.

1.6 Project Scope and Delimitations

This design project focuses on the improvement of technology, as virtual instruments are becoming widespread
and essential, performing the same functions and operations as the existing technology. Techniques such as
digital filtering and data multiplexing are utilized. The proponents use digital filters because the analog
input signal is converted into digital form, which makes it possible to eliminate unwanted factors such as noise.

The proposed design is limited to four (4) inputs. Since the proponents consider CCTV cameras as the system's
input, only the video property of the CCTV feed is acquired and the audio is disregarded. The main input of
the system is limited to CCTV feeds, which lessens the complexity of the existing product because the latter
includes subsystems that are not essential. The proponents' product is also scalable and flexible: for
scalability, it can automatically detect the number of CCTVs connected to the system; for flexibility, since
CCTV feeds are not encoded alike, the system can also detect the video's decoding scheme.

To achieve high accuracy in facial recognition, the camera must see the face of the detected person, since the
system extracts features from the face itself.

This project will not cover the possible effects of changing the number of inputs, the retrieval of data from
the database, or the accuracy of the facial recognition and data logging features. In addition, it does not
cover facial recognition for animals, since the system is limited to humans.

1.7 Project Development

(Figure: an iterative cycle through Problem Identification, Research, Requirement Specification, Generation of
Concepts, Designing, System Integration, System Test, and Delivery and Acceptance)

Figure 1 Project Development Cycle

The project follows the agile development cycle, which allows an iterative approach to project implementation.
Figure 1 shows the iterative cycle of the project applying the concept of agile.

The design process cycle is divided into eight tasks, namely (1) problem identification, (2) research, (3)
requirement specification, (4) generation of concepts, (5) designing, (6) system integration, (7) system
test, and (8) delivery and acceptance. Each phase repeats itself when necessary. The phases are defined below:

1.7.1 Problem Identification


Identifying the problem is the fundamental step in any research or project proposal. Here, the problem was
identified by searching for a client and learning what the client is currently facing. It is also best if the
identified problem is experienced by other people.

1.7.2 Research about the Problem


Through thorough research and deep investigation, the problem is validated as to whether it is worth solving.
Background research is essential so that the solution can benefit the people who experience the same problem
as the client. The proposed design of the solution should be based on the client's specifications.

1.7.3 Requirement Specification
To meet the needs of the client, specifications must be determined. These specifications are established in
accordance with the International Organization for Standardization (ISO). According to its official site, ISO
is an international standard-setting body composed of representatives from various national standards
organizations.

1.7.4 Generation of Concepts


In generating the whole concept of the project, many factors were considered, such as the results of prior
studies, the gaps in existing technologies, the underlying physics, and, most importantly, the client's
criteria and constraints. The proponents came up with different solution approaches to address the problem.

1.7.5 Designing
The next step after coming up with the design concept is to validate it. Once everything is settled and the
idea has been validated, system design and simulation begin. Different simulations are done using software
applications such as MATLAB and LabVIEW. Upon simulation, the design may undergo iterations for further
improvement.

1.7.6 System Integration


The subsystems are assembled in this stage and integrated with one another to complete the whole system.

1.7.7 System Test


After all the necessary procedures in formulating the solution, the system has to be tested several times
until it meets the desired specifications. Errors are expected in this phase, since they are inevitable in a
project proposal. This is also the time to make changes and improvements so that the system becomes more
functional, as expected.

1.7.8 Delivery and Acceptance


Upon the completion of the project, the proponents will deliver the device to the client for acceptance. The
proponents will discuss with the client the do's and don'ts of the system and how to avoid system failure.

CHAPTER 2: GENERAL DESIGN INPUTS

2.1 The Client

The proponents’ client, Trigrid Electrical Services, is a privately owned electrical and
communications solutions provider that specializes in providing customers with technical expertise on
uninterruptible power supplies (UPS), power distribution units, and electrical wiring for customer data
centers, as well as solar and electrical services. One of the services rendered by Trigrid is electrical and
IT design, which includes the installation of CCTV cameras. One of the possible customers of Trigrid
Electrical Services is the owner of a farm in Teresa, Rizal.

Figure 2 Trigrid Electrical Services


In Sitio Mayugat, Teresa, Rizal, there is a 3-hectare farm that needs multiple CCTV cameras installed, such as
in the powerhouse, smart farm, pig pens, and poultry houses. Ducks and chickens roam around the farm.
According to the owners, the most prominent problem on the farm is theft, since the farm has a technological
edge over the surrounding area. Given the circumstances, a surveillance system integrated with target
identification and data logging must be installed on the farm.

2.2 Review of Related Literature

Virtual instrumentation combines modular instrumentation hardware with customizable software to create a
user-defined measuring system (Katole & Padole, 2011). Virtual instruments can be simple or very complex
(Goldberg, 2000). The software allows complicated and highly priced equipment to be replaced with simpler and
less costly hardware (Prema, Kumar, & Sunitha, 2009). A major advantage is the possibility of complete
customization according to one's needs; moreover, by building and controlling it oneself, a much lower price
can be achieved than for systems on the market (Blaga, Cotfas, Cotfas, & Balint, 2012).

Figure 3 Architecture of a CCTV System


(Source: Saputra, et al. (2016), Face Detection and Tracking using Live Video Acquisition in CCTV and
Webcam)

A study by Andrade et al. demonstrated the development of a virtual instrument, LipoTool, for a new health
care device. It allows more effective evaluation of body fat, computerized information processing, and the
recording and storage of results in a database, with considerable potential for new studies (Andrade, Quintas,
Silva, Restivo, Chouzal, & Amaral, 2012).

In Lu, Yang, Li, and Huang (2012), a fault-testing device for a fire control system was developed using a
virtual instrument, offering convenient portability, varied functions, and better performance. Computer
laboratories outfitted with flexible hardware components make training and research cheaper and more
productive, which enhances the way college students learn (Žídek, Bilík, & Wittassek, 2011). On the other
hand, in Bielski, Lohmann, Maier, Zapp, and Nasseri (2016), a robotic eye surgery assistant was developed to
eradicate the problem of trembling caused by human motion. The assistant was integrated with a graphical user
interface for surgeons to get an overview of the status of their systems.

In a surveillance system, the video captured by a CCTV camera is technically an analog signal; for the
computer to be able to read it, the signal must be converted into a digital signal. In this case, the theory
of analog-to-digital conversion applies: sampling, quantizing, and encoding. Then, to feed the different input
channels to the computer through a single input, a multiplexer must be used. A multiplexer is a device that
has multiple inputs and a single output.
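The three steps named above — sampling, quantizing, and encoding — can be sketched as follows. The sample rate, bit depth, and test tone below are illustrative assumptions, not the design's actual parameters:

```python
import numpy as np

def adc(signal_fn, n_samples, fs, bits):
    """Sample a continuous signal at fs Hz and quantize it to the
    given bit depth, returning integer codes (the encoded output)."""
    t = np.arange(n_samples) / fs                  # sampling instants
    samples = signal_fn(t)                         # sampling
    levels = 2 ** bits
    # quantizing: map the [-1, 1] amplitude range onto integer codes
    codes = np.round((samples + 1) / 2 * (levels - 1))
    return np.clip(codes, 0, levels - 1).astype(int)   # encoding

# Hypothetical 1 kHz test tone, sampled at 8 kHz with 8-bit resolution
codes = adc(lambda t: np.sin(2 * np.pi * 1000 * t), 80, 8000, 8)
```

With 8 bits, the codes span 0 to 255; a finer bit depth trades memory for lower quantization error.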

A number of variable bit-rate data streams of input signals may be multiplexed into a constant-capacity
signal. Several constant bit-rate data input signals may be multiplexed through a time-division multiplexer
(TDM) into a single higher bit-rate data stream. Subsequently, the signals are digitally filtered through
digital signal processing (DSP) to prepare the image for the integration of motion detection and data logging
using an artificial neural network.
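Round-robin time-division multiplexing of digitized channels can be illustrated with a minimal sketch; the channel data below are hypothetical stand-ins for the four CCTV streams:

```python
import numpy as np

def tdm_multiplex(channels):
    """Interleave equal-length channel streams sample-by-sample
    (round-robin TDM): one sample from each channel per time slot."""
    return np.stack(channels, axis=1).ravel()

def tdm_demultiplex(stream, n_channels):
    """Recover the original channels from the interleaved stream."""
    return [stream[i::n_channels] for i in range(n_channels)]

# Four hypothetical digitized channels, three samples each
ch = [np.array([10, 11, 12]), np.array([20, 21, 22]),
      np.array([30, 31, 32]), np.array([40, 41, 42])]
stream = tdm_multiplex(ch)   # one combined higher-rate stream
```

The combined stream's rate is the sum of the channel rates, which is why the multiplexed output must have higher capacity than any single input.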

Neural networks are machines modeled on the human nervous system and brain. They are made up of interconnected
nodes that process information much as the human brain does. According to a study published in the Australian
Journal of Forensic Sciences, surveillance systems, in line with modern technologies, have given public places
a new level of surveillance security.
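At the smallest scale, the interconnected nodes described above reduce to a weighted sum followed by an activation function. A minimal illustrative sketch (the weights and inputs are arbitrary values, not trained parameters from the design):

```python
import numpy as np

def layer_forward(x, W, b):
    """One layer of interconnected nodes: each node computes a weighted
    sum of the inputs plus a bias, squashed by a sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

# Hypothetical 4-input, 3-node layer with fixed illustrative weights
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))      # one weight per connection
b = np.zeros(3)                  # one bias per node
x = np.array([0.5, -0.2, 0.1, 0.9])
out = layer_forward(x, W, b)     # three activations, each in (0, 1)
```

Stacking such layers and adjusting the weights from examples is what lets a network learn tasks such as face vector identification.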

The density of surveillance cameras has increased exponentially in recent years, and other technologies, such
as cameras in mobile phones, provide an itinerant source of surveillance (Porter, 2009). Due to the
increasingly wide usage of such systems, more technical flexibility for CCTV systems is essential; human
detection technologies are beneficial here because they enable a more efficient surveillance system. The
system works primarily based on human detection, ensuring the most efficient way of storing footage, and it is
also useful for triggering an alarm system when a human appearance is detected (Badaoui & Al-Jumaily, 2010).

In a study by Alimuin, Guiron, and Dadios, a single neural network (SNN) that categorizes objects through
weighted bounding technology was implemented for a surveillance system. Their system can detect and recognize
multiple objects and identify their classifications in real time. CCTV cameras record continuously, so memory
is wasted when nothing is happening in front of the camera. A plain CCTV system also does not provide alerts
of a burglary happening at a particular time (Upasana, Manisha, Mohini, & Pradnya, 2015).

Figure 4 Weighted Bounding Single Neural Network
(Source: Alimuin, et al. (2017), Surveillance Systems Integration for Real-time Object Identification using
Weighted Bounding Single Neural Network)

In principle, the tasks of recording and storing footage are the basics of existing surveillance systems. The
analysis of threats is done, whether manually or semi-automatically, by operators who monitor the videos
nonstop (Jahagirdar & Nagmode, 2016). The CCTV camera footage is coded in H.264 or in MJPEG (Saponara, Pilato,
& Fanucci, 2016). Utilizing face detection and tracking technologies, systems can be developed that detect
human presence. Hence, live video acquisition is an application of this research based on the face detected in
the surveillance feed (Saputra & Amin, 2016).

Figure 5 Background subtraction applied to a sample video frame


(Source: Mahidra, et al. (2017), Pedestrian Flow Counter using Image Processing)
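The background subtraction shown in Figure 5 can be sketched as thresholded frame differencing. The threshold and frames below are hypothetical, chosen only to make the idea concrete:

```python
import numpy as np

def background_subtract(frame, background, threshold=30):
    """Foreground mask: pixels whose absolute difference from the
    background model exceeds the threshold are flagged as moving."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold

# Hypothetical static background with one bright moving object
background = np.full((8, 8), 50, dtype=np.uint8)
frame = background.copy()
frame[2:4, 2:4] = 200                      # a 2x2 "pedestrian"
mask = background_subtract(frame, background)
```

Real systems continuously update the background model to follow lighting changes, which is the drawback the ABSFD approach discussed below addresses.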

Aside from security and safety, surveillance systems have a wide variety of applications in several aspects of
life and play a crucial role in innumerable surveillance applications. In recent years, the automation of
surveillance systems for public safety enhancement has gained popularity (Sodanil & Intarat, 2015).
Nevertheless, the quality of surveillance footage is often low and does not meet certain requirements; hence,
enhancement is needed to make it apt for further processing (Hibell, 2006).

Figure 6 Comparison between the original image and the image after filtering using HMMOD
(Source: Sodanil, et al. (2015), A Development of image Enhancement for CCTV Images)

In modern cities, relatively advanced surveillance systems are often installed; whether on the streets or in
buildings, a CCTV camera can easily be found anywhere. However, most monitoring systems are used merely for
acquiring data, which is analyzed manually when necessary (Chmielewska, Marianna, Marciniak, Dabrowski, &
Walkowiak, 2015). Video Image Detection Systems (VIDSs) and CCTVs are used for traffic monitoring. The VIDSs
researched so far are used for vehicle detection, traffic volume measurement, vehicle speed measurement, and
so on (Im, Hong, Jeon, & Hong, 2016).

Figure 7 Median Filtering Example for an Image Adjustment


(Source: Eamthanakul, et al. (2017), The Traffic Congestion Investigating System by Image Processing
from CCTV Camera)
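Median filtering, as shown in Figure 7, replaces each pixel with the median of its neighbourhood, which suppresses impulse ("salt-and-pepper") noise while preserving edges. A minimal 3x3 sketch over a hypothetical test image:

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter over the interior pixels: each pixel is
    replaced by the median of its 3x3 neighbourhood."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out

# Hypothetical flat image corrupted by a single "salt" pixel
img = np.full((5, 5), 10, dtype=np.uint8)
img[2, 2] = 255                     # impulse noise
clean = median_filter3(img)         # the spike becomes the local median
```

Because the median of a neighbourhood containing one outlier among eight normal pixels is still a normal value, the spike disappears without blurring the rest of the image.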

An approach that targets the development of end-to-end systems that automatically learn the best features for
the task at hand from raw data is referred to as deep learning (DL). It replaces and outperforms various
handcrafted features of a given system (Papadopoulos, Machairidou, & Daras, 2016). In a study conducted by
Shetty et al., video inpainting was used, a mechanism for restoring missing regions in a series of frames.
Image inpainting, on the other hand, is the process of recovering the missing parts of an image using the
neighbouring statistics of the hole. Most recorded footage is too much for an operator to watch, so spotting
any object or event of interest has become a time-consuming task. This sort of task may be feasible for a few
personal videos, but it is impossible when the video is endless (Shetty, Vishwakarma, Harisha, & Agrawal, 2017).

Figure 8 Comparison of success rates achieved by various algorithms
(Source: Jahagirdar, et al. (2016), Evaluation of Moving Object Detection Techniques for Low Resolution
videos)

The proposed algorithm shows consistent results for videos recorded at various view angles, as the current
frame is used to update the background, giving the advantage of the FD algorithm. ABSFD removes the drawbacks
of BSS, FD, and ABS and uses the advantages of each at a comparable complexity level. The choice of a good
temporal scale and of the value of ‘α’ used in ABSFD depends on the size and speed of the object and on
variations in the background. The proposed algorithm shows promising results as a first step in action
recognition (Jahagirdar & Nagmode, 2016).

Real-world CCTV footage typically poses increased challenges in object tracking because of Pan-Tilt-Zoom
operations, low camera quality and varied operating environments (Dimou, Medentzidou, Garcia, & Daras,
2016). Since image processing demands high computational power, a performance comparison was made
between a high-end, GPU-assisted i5 processor and alternative onboard processors that are accessible in
the market (Madhira & Shukla, 2017).

In a study by Ki et al., a sensor for image processing was developed that tracks moving vehicles from the
video image of a CCTV and extracts traffic information such as volume, link travel speeds, and more.
Additionally, such systems are also used to monitor areas or situations in order to assist security
personnel in investigating acts harmful to both humans and infrastructure (Manlises, Martinez, Belenzo,
Perez, & Postrero, 2015). The study searched for a region of the same size within the estimated area, based
on the stored intensity of the vehicle, so as to minimize the Root Mean Square value of the subtracted
intensity (Ki, Choi, Joun, Ahn, & Cho, 2017). The image processing technology applied within the systems
comprises image segmentation, object tracking, and extraction, which is able to capture the movement of
objects when applied to a surveillance system via CCTV cameras (Kongurgsa, Chumuang, & Ketcham, 2017).

2.3 Concept of the Study

Figure 9 Level 0 Block Diagram

Figure 9 shows the general block diagram of the application of the proponents’ project proposal, which is a
4-channel virtual instrument video adapter for digital data multiplexing. It shows the simplest way of
visualizing the process of the proposed design, from the CCTV analog signal down to the human-computer
interaction.

Figure 10 Level 1 Block Diagram

Figure 10 shows the specific block diagram, which includes the relationships between the entities in the
system. The video feed coming from the CCTV cameras is an analog signal carrying the information; it is
transmitted to an Analog-to-Digital Converter, which converts the analog signal into digital form. The digital
signal passes through the Multiplexer and then through Serial Communication, where the signals are
transmitted and received one bit at a time. The output is then demultiplexed and processed through image
filtering, image enhancement, and image recognition. The computer processes every bit of the signal, which
is now in binary or digitized form. A virtual instrument software then performs the remaining work, including
the computational intelligence, digital signal processing, storage and, of course, the graphical user interface.
The virtual instrument plays an important role since it replicates tasks that would otherwise be performed by
electronic circuit hardware.

Figure 11 Level 1 Specific Block Diagram

Figure 11 above specifies the part of the algorithm where the design is applied. Based on the diagram, the
part of the system that will be modified is the Image Filtering stage.

2.4 Design Inputs

The Design Inputs are the information needed by the subsystems used in the Design of a 4-Channel Virtual
Instrument Video Adapter for Digital Data Multiplexing with Facial Vector Identification. In order to determine
the design inputs, the working principle of the conceptual framework must be considered.

2.4.1 Conceptual Framework

Figure 12 Conceptual Framework

Figure 13 shows the conceptual framework of the design project, which includes the input, process and output.
The input is divided into (3) three parts, namely Humanware, Hardware and Software. The process indicates
the summary of the step-by-step procedure in creating the system. Lastly, the output specifies the desired
goal/product of this project, which is mainly the Design of a 4-channel virtual instrument video adapter
for digital data multiplexing with facial vector identification using artificial neural network.

Figure 13 Input, Process, Output

2.4.2 Theoretical Framework

Surveillance Systems
The term surveillance came from the French words “sur” and “veiller”, which mean “from above” and “to
watch”. By definition, a surveillance system is a system in which activities, behaviour and other phenomena
are monitored for different purposes such as observing, managing, or protecting the people in the
vicinity. This kind of system requires electronic equipment such as closed-circuit television cameras, or it can
involve the interception of electronically transmitted information such as Internet traffic or phone calls.
Surveillance is used by governments for intelligence gathering, prevention of crime, the protection of a
process, person, group or object, or the investigation of crime.

The area of surveillance is increasingly a topic of academic study, including through research centers, books
and peer-reviewed academic journals. According to James Clapper, the US Director of National Intelligence,
intelligence services might in the future use the internet of things for identification, surveillance, monitoring,
location tracking, and targeting for recruitment, or to gain access to networks or user credentials
(Thielman, 2016).

Closed-Circuit Television
In 1942, a German engineer named Walter Bruch first introduced CCTV technology, which was set up by
Siemens AG at Test Stand VII in Peenemünde, Germany, for monitoring the launch of V-2 rockets. Following
this achievement, an American government contractor named Vericon began promoting the system on a
commercial basis in 1949 (Staff, 2014). During those

times, the function of this system was limited to live monitoring only, since there was no equipment or
component that allowed recording of the footage.

Closed-Circuit Television, commonly known as CCTV, is electronic equipment in which video feeds/signals
are monitored, primarily for surveillance or security purposes. These cameras communicate with monitors or
video recorders across a transmission medium, either wired (e.g. coaxial cable) or wireless. Some of the
common objectives of having a CCTV system include traffic monitoring, maintaining perimeter security in
different areas, and observing behaviour which could be beneficial for future use. Additionally, CCTVs can
maintain public order, prevent antisocial behaviour and nuisance, and provide evidence to relevant
enforcement agencies.

Electronic and Digital Filters


In electrical filtering, a voltage proportional to the profile is passed through a two-resistor-capacitor (2RC)
network. Since the 2RC network has memory, the output is not only a function of the input at any
instant of time, but also a function of prior values. In effect, the 2RC network computes a running average of
current and previous voltages, but gives reduced weight to voltages more distant in the past
(BalaMuralikrishnan, 2009). In signal processing, a filter is an electronic device or process that removes
unwanted signal components and noise that interfere with the information signal itself. Filtering is a class
of signal processing, the defining feature of filters being the complete or partial suppression of some aspect
of the signal. Most often, this means removing some frequencies or frequency bands.

In digital signal processing, a digital filter is the basic block of a system because the frequency response of
the filter depends on the values of its coefficients. Many software programs are capable of computing the
values of the coefficients for a desired frequency response, representing them with a fairly high degree of
precision. On the other hand, when a digital filter is implemented in hardware, the designer needs to
represent these coefficients with the smallest number of bits that still gives acceptable resolution, because
an excess number of bits increases the size of the registers, buses, adders, multipliers and other hardware
used to process the signal. Thus, the bit precision used to represent numbers is important to the performance
of real-world signal processing designs (Litwin, 2000).

The sampler is the component that converts an analog signal into a digital signal. The most common type of
modulation used in sampling is pulse amplitude modulation.

p(t) = Σ_{k=−∞}^{∞} [u_s(t − kT) − u_s(t − kT − p)]

Equation 1 Pulse Width Sampler
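As a small illustrative sketch of Equation 1 (the period T, pulse width p and the 5 Hz test signal below are arbitrary example values, not design parameters), the pulse train can be generated numerically and used for natural-sampling pulse amplitude modulation:

```python
import numpy as np

def pulse_train(t, T, p):
    # p(t) = sum_k [u_s(t - kT) - u_s(t - kT - p)]: a train of
    # unit-height pulses of width p, repeating every period T.
    return ((t % T) < p).astype(float)

# Natural-sampling PAM: multiply the message signal by the pulse train.
t = np.linspace(0, 1, 1000, endpoint=False)
message = np.sin(2 * np.pi * 5 * t)                  # 5 Hz test signal
sampled = message * pulse_train(t, T=0.05, p=0.01)   # a pulse every 50 ms
```

The `sampled` signal is non-zero only while each pulse is high, which is exactly the gating behaviour the equation describes.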

According to Priyabrata, the process of selecting a frequency or range of frequencies in a given signal, and
attenuating the frequency components outside the chosen range, is called filtering. In general, the signals we
receive are mixed with other, undesired signals called noise, so a filter is a discrete processor whose desired
output is the signal with the noise attenuated. The frequency range allowed to pass through the filter is called
the pass band, and the range where the signal is cancelled is the stop band. The boundary between the pass
band and the stop band is called the cut-off frequency (Aldaz, 2015).

Generally, filters have (2) two functionalities, either signal separation or signal restoration. When a signal has
been contaminated with noise or other unwanted signals, signal separation is applicable. On the other hand,
if the signal has been distorted in some way, signal restoration is the key. A digital filter uses a digital
processor to perform numerical calculations on sampled values of the signal (Steven W. Smith, 2011).
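As a minimal example of such numerical calculation on sampled values (the window length and the noisy step signal are arbitrary illustrations, not part of the proponents' design), a moving-average FIR filter performs signal separation by suppressing high-frequency noise:

```python
import numpy as np

def moving_average(x, n=5):
    # FIR digital filter with n equal coefficients summing to 1:
    # each output sample is the mean of n neighbouring input samples.
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="same")

# Signal separation: a noisy step signal is smoothed while the
# slow-varying step itself is preserved.
rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(50), np.ones(50)])
noisy = clean + 0.2 * rng.standard_normal(100)
smoothed = moving_average(noisy, n=9)
```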

Face Detection and Recognition


Face detection is considered one of the fields under object detection that is based on computer-human
interaction. One of the most specific objects that can be processed in images are people, particularly their
faces. Face detection is the first step before face analysis, face recognition and the detection of other
parameters involving the human face. Generally, object detection is a procedure that deals with the detection
of objects of a particular classification (e.g. cars, buildings, people or faces) in an image or a camera
feed (Dang & Sharma, 2017).

Figure 14 General Process of Face Recognition


(Source: Dang, et al. (2017), Review and Comparison of Face Detection Algorithm)

The general process of face recognition is shown in Figure 14. One of the algorithms used for face
recognition relies on Haar basis functions. To compute these features rapidly, an integral image
representation is used, which can be built from an image using only a few operations per pixel. Once the
integral image is computed, any of these features can be evaluated at any scale or location in constant
time.

Figure 15 An example of rectangle features shown relative to the enclosing detection window
(Source: Viola et al. (2011), Rapid Object Detection using a Boosted Cascade of Simple Features)

Figure 15 shows examples of rectangular features relative to the enclosing detection window. The sum
of the pixels in the shaded part of each rectangle feature is subtracted from the sum of the pixels in the
unshaded portion.

With the use of an integral image, an intermediate representation of the image, rectangle features can be
computed rapidly. The integral image at location x, y contains the sum of the pixels above and to the left of
x, y, inclusive:

ii(x, y) = Σ_{x′≤x, y′≤y} i(x′, y′)
Equation 2 Equation used for Integral Image

Figure 16 Example where Integral Image is utilized

With four array references, the sum of the pixels within rectangle D can be computed. The value of the
integral image at location 1 (the point where rectangles A, B, C and D meet) is the sum of the pixels in
rectangle A. The value at location 2 is A + B; at location 3, it is A + C; and at location 4, it is A + B + C + D.
The sum within D is therefore ii(4) + ii(1) − ii(2) − ii(3).

Where:

ii(x, y) – integral image
i(x, y) – original image

By using the following pair of recurrences:

s(x, y) = s(x, y − 1) + i(x, y)
ii(x, y) = ii(x − 1, y) + s(x, y)

Where:

s(x, y) – cumulative row sum
s(x, −1) = 0
ii(−1, y) = 0

With that, the integral image can be computed in one pass over the original image.
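The one-pass computation and the four-array-reference rectangle sum described above can be sketched as follows (the 4x4 test image is arbitrary; NumPy's cumulative sums implement the same pair of recurrences):

```python
import numpy as np

def integral_image(img):
    # ii(x, y) = sum of i(x', y') for all x' <= x, y' <= y, computed
    # in one pass via cumulative row sums followed by column sums.
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    # Four-array-reference identity: sum(D) = ii(4) + ii(1) - ii(2) - ii(3).
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()  # any rectangle in O(1)
```

This is why a Haar-like feature costs only a handful of array lookups regardless of its scale.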

Multiplexing and Demultiplexing

One of the techniques that allows the transmission of two or more signals across a single data link is
multiplexing. Multiplexing makes the most efficient use of the available channel capacity because the
capacity of the channel is shared by multiple communication stations.

Figure 17 Basic Concept of Multiplexing


(Source: (Kharagpur, 2015), Multiplexing of Signals)

As shown in the figure above, the multiplexer is connected to the demultiplexer through a single data link.
The main function of the multiplexer is to combine data from multiple input lines and propagate them through
a single high-capacity transmission medium; the combined signal is then demultiplexed at the other end and
delivered to the respective output lines.

There are a lot of techniques in multiplexing which can be categorized into (3) three types:
 Frequency-Division Multiplexing (FDM)
 Time-Division Multiplexing (TDM)
o Synchronous
o Asynchronous

Frequency-Division Multiplexing or FDM is one of the types of multiplexing where the frequency spectrum is
divided into several channels, giving each channel exclusive possession of a particular frequency band.

Figure 18 Basic Concept of FDM


(Source: (Kharagpur, 2015), Multiplexing of Signals)

Each message signal is translated into a different frequency band using modulation techniques; the bands
are then combined through a linear summing circuit in the multiplexer. Afterwards, the resulting signal is
transmitted along the single transmission medium by electromagnetic means, as shown in Figure 18. In FDM,
the basic idea is to divide the available bandwidth of the physical transmission medium into smaller,
independent frequency channels.

Figure 19 FDM Multiplexing Process


(Source: (Kharagpur, 2015), Multiplexing of Signals)

Figure 20 FDM Demultiplexing Process
(Source: (Kharagpur, 2015), Multiplexing of Signals)

At the receiving end, the signal passes through a bank of band-pass filters, which separate the frequency
channels individually. Then, the outputs of the band-pass filters are demodulated and distributed to their
respective output channels.

The second type of multiplexing is called Time-Division Multiplexing or TDM. In FDM, all signals operate at
the same time at different frequencies, whereas in TDM, all signals operate at the same frequency in different
periods of time. In this type of multiplexing, all data sources are sequentially sampled and combined to create
a composite baseband signal that travels through the transmission medium and is demultiplexed at the
receiving end.

Figure 21 Operation of Time-Division Multiplexing


(Source: (Kharagpur, 2015), Multiplexing of Signals)

TDM is subdivided into (2) two types, namely Synchronous and Asynchronous. In Synchronous TDM, each
time slot is assigned to a fixed source, whether or not that source has data to send. Asynchronous TDM, also
known as statistical TDM, allocates the time slots on demand among the input channels, which saves
channel capacity.
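The difference can be made concrete with a toy synchronous-TDM sketch (the four short channels below are hypothetical sample values, not actual CCTV data); each frame carries one fixed time slot per channel:

```python
def tdm_multiplex(channels):
    # Synchronous TDM: one sample from each channel per frame,
    # producing a round-robin interleaved stream on the shared link.
    return [sample for frame in zip(*channels) for sample in frame]

def tdm_demultiplex(stream, n_channels):
    # Receiver side: slot k of every frame belongs to channel k.
    return [stream[k::n_channels] for k in range(n_channels)]

channels = [[10, 11], [20, 21], [30, 31], [40, 41]]  # four data sources
link = tdm_multiplex(channels)
assert link == [10, 20, 30, 40, 11, 21, 31, 41]
assert tdm_demultiplex(link, 4) == channels
```

A statistical (asynchronous) multiplexer would instead skip slots for idle channels and tag each sample with its channel number.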

2.5 Definition of Terms

The following terminologies listed below are used in the context of this paper.

Table 2 Definition of Terms

Term | Definition
Virtual Instrument | The software that will be utilized to refine existing DVRs.
Digital Filter | The techniques used in the design for feature extraction and to reduce unwanted noise for better target identification.
Multiplexing | The method used before the signal passes through the filtering process.
Digital Signal Processing | The processes used in converting the analog input signal from the CCTV camera into digital signals.
Simulink | A feature of MATLAB that utilizes an interactive graphical representation for modelling, simulating and analysing different systems.
Artificial Neural Network | The principle or concept used as an additional feature of the system that utilizes facial recognition.
Codec | A device or computer program used for encoding and decoding a digital signal.
Serial Communication | A type of communication in which signals are sent and received one bit at a time.
Analog-to-Digital Converter | The process by which analog signals are converted into digital bits to avoid unwanted signals that may occur during transmission.
Face Recognition | The process of identifying the face detected in an image or video feed, attempting to establish whose face it is.

2.6 Design Standards

The proponents’ design standards are based on the following marketing and engineering requirements.

The marketing and engineering requirements are based on the client’s preferences:

Table 3 Marketing and Engineering Requirements

MARKETING REQUIREMENTS | ENGINEERING REQUIREMENTS | JUSTIFICATION

1, 4, 6 | The input video recording must be ≥ 30 fps with 4 input channels at least 480p or 720p resolution. | For a clearer video to be achieved, a higher resolution is required. According to manufacturers, DVRs usually have 4 to 8 input channels that can record 120 fps for analog CCTV cameras, and 120 fps at 720p or 60 fps at 1080p for TVI/AHD CCTV cameras.

3 | The system will be a PC-based system that must have ≤ 20 ms conversion delay. | A conversion delay must be considered because the feed is real-time and the input signal is analog.

1, 3 | The system must reduce noise interference, where SNR must be > 1. | For a better quality input signal, the signal-to-noise ratio must be greater than one so that the noise will not dominate the transmitted input signal.

1, 3, 5 | The system must be able to optimize video quality coming from the video feed. | For a better quality input signal, the signal-to-noise ratio must be greater than one so that the noise will not dominate the transmitted input signal.

1, 2, 5, 6 | The target identification and data logging must be real-time. | For an intensive security and reliable surveillance system, specific target identifications are used. Based on existing devices, when the motion detection system is triggered, the system is activated, starts recording, and notifies the owner that motion was detected in the place.

Marketing Requirements:
1. The system must be able to do HD video recording.
2. The system must have motion detection, face recognition, data logging features.
3. The system must be easy to operate.
4. The system must have more than one input video channel.
5. Real-time data transfer from CCTV to host PC.
6. The system must be able to adapt in any existing CCTV cameras.

Table 4 Design Standards

SOURCE | REQUIREMENTS
ISO/IEC 14496-11:2015(en), Coding of audio-visual objects | The design should use a compression method for the video feeds gathered by the system.
ISO 22311:2012(en), Video-surveillance | The design should specify a common output file format that can be extracted from the video-surveillance content collection systems via an exchangeable data storage medium, allowing end-users to access digital video-surveillance contents and perform their necessary processing.
ISO/IEC 25010:2011(en), Software Quality Requirements and Evaluation | The design should be applicable to the complete human-computer system, including both computer systems in use and software products in use.
ISO/IEC 15938-5:2003(en), Multimedia content interface | The system must have its own graphical user interface.
ISO/IEC 23000-10:2012(en), Multimedia application format | The system must have a codec that satisfies the H.264 and MPEG output formats for the multimedia application format of the system.

2.7 Design Requirements

The following are the Design Requirements used in fabricating the design project.

Table 5 Design Requirements

DESIGN REQUIREMENTS

Camera | Image Sensor | 1/2.8" 2.4 Megapixel CMOS
Camera | Effective Pixels | 1984 (H) x 1225 (V)
Camera | Electronic Shutter | 1/3 s – 1/100,000 s
Camera | Minimum Illumination | 0.05 lux/F1.4, lux IR on
Computer | Processor | At least 4 GHz
Computer | Operating System | Windows XP or later
Computer | Internal Storage | Minimum of 1 TB
Computer | Random Access Memory (RAM) | At least 8 GB

2.8 Target Specifications

The following target specifications of the system are defined based on engineering standards and the clients’
criteria and constraints.

Table 6 Target Specifications

TARGET SPECIFICATION

Conversion Delay ≤ 20 ms

Minimum Camera Count Supported 4 Camera

Minimum Frame Resolution 640 x 480 Pixels


Visual Notification A pop-up notification will appear within ≤ 1 second upon activity detection.
Data Logging Real-time data logging upon human detection.

Compression Format (codec) H.264 or MPEG

Signal-to-Noise Ratio >1

CHAPTER 3: DESIGN MORPHOLOGY

3.1 Generating Design Alternatives

3.1.1 Overview of the Conceptual Design Developed

The proponents’ goal is to design a 4-channel virtual instrument video adapter that would enable target
identification using artificial neural network in CALABARZON, specifically in the province of Rizal. The focus
of the proponents’ design is on a system that would help in the field of surveillance especially in rural areas
where technology is still obscure.

Figure 22 Flowchart of the System


The system process is illustrated in Figure 22. The system input comes from the camera feeds, which will be
processed and examined by the system. Upon receiving a feed, the system decides whether there is a face
detected in it. If no face is detected, the system continues acquiring camera feeds; if a face is detected, the
system decides whether or not the face is recognized as part of the database. Once the detected face is
recognized, the system logs the date, time and the camera number. If the detected face is not recognized by
the system, the system logs the date, time, and the camera, followed by the activation of the alarm and
notification system.

Figure 23 General System Flowchart of Artificial Neural Network

Figure 23 shows the flow of the process in terms of the Artificial Neural Network. The system is fed input
images and determines whether a face is present. If no face is detected, the system continues acquiring
images. If a face is detected, it processes the prediction of the image by checking it against the datasets that
passed through the Artificial Neural Model Training. If the predicted face is part of the database, the system
simply logs the event. If the predicted face is not part of the database, the alarm and notification system is
activated and the necessary information (e.g. date, time) is logged.


Figure 24 Specific System Flowchart of Artificial Neural Network

Figure 24 shows that the artificial neural network process starts with the gathering of datasets. The datasets
come from different authorized personnel, with counts ranging from 2,000 to 5,000 images, converted to
grayscale and adaptively thresholded for better extraction of detail. These datasets then undergo neural
network model training, which initiates the random forest classifier; the classifier takes the datasets and
undergoes curve fitting to fit the model to the datasets, and is then stored locally so it can be used for
predictions.

On the other hand, the CCTV feed is processed frame by frame, so one image is processed on every loop of
the system. For every frame, a Haar cascade is used to detect multiple faces in the image; all detected faces
are enumerated and the region of interest (ROI) is extracted, meaning the face alone is kept and its
surroundings are removed. This ROI is fed into the random forest classifier, which performs its prediction by
normalizing the ROI: the 2-dimensional array form of the ROI is converted into a 1-dimensional array, this
data is passed through the neural network model, and the result is the prediction.
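A rough sketch of this per-frame ROI handling is shown below. Note that `detect_faces` and `classifier` are hypothetical stand-ins for the Haar cascade (e.g. OpenCV's CascadeClassifier) and the trained classifier, and the nearest-neighbour resize is a simplification of whatever preprocessing the real system applies:

```python
import numpy as np

def normalize_roi(roi, size=(32, 32)):
    # Turn a detected face ROI into the 1-D vector the classifier
    # expects: crude nearest-neighbour resize, scale to [0, 1], then
    # flatten the 2-D array into a 1-D array.
    h, w = roi.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = roi[np.ix_(rows, cols)].astype(float) / 255.0
    return resized.ravel()

def process_frame(frame, detect_faces, classifier):
    # One loop iteration of the system: detect faces, keep the face
    # alone (surroundings removed), and run one prediction per ROI.
    predictions = []
    for (x, y, w, h) in detect_faces(frame):
        roi = frame[y:y + h, x:x + w]
        predictions.append(classifier(normalize_roi(roi)))
    return predictions
```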

3.1.2 Subsystems

Analog-to-Digital Converter
Analog-to-digital conversion is the process of converting an analog signal into a digital signal; it involves
choosing a sampling rate that avoids aliasing during conversion. In choosing the required ADC, the Nyquist
sampling rate and the signal frequencies must be considered. ADC involves 3 steps, namely Sampling,
Quantizing and Encoding. Sampling reduces the continuous-time signal into a discrete-time signal; the
minimum sampling rate should be at least twice the highest frequency component of the analog signal.
Quantizing partitions the signal range into a number of discrete quanta and matches the input signal to the
correct quantum; in other words, it is the process of breaking down analog values into a set of finite states.
Lastly, encoding is the process that provides the required digital signal, which can be propagated over a
transmission medium and used for different purposes.
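The three steps can be illustrated numerically (the sine input, 50 Hz sampling rate and 3-bit depth below are arbitrary example values; a real ADC is hardware, not software):

```python
import numpy as np

def adc(signal_fn, duration, fs, n_bits):
    # 1. Sampling: discrete-time samples at rate fs, which must be at
    #    least twice the highest frequency in the analog signal.
    t = np.arange(0, duration, 1.0 / fs)
    samples = signal_fn(t)
    # 2. Quantizing: map each sample (assumed within [-1, 1]) to one
    #    of 2**n_bits discrete levels.
    levels = 2 ** n_bits
    codes = np.clip(np.round((samples + 1) / 2 * (levels - 1)), 0, levels - 1)
    # 3. Encoding: express each quantum as an n-bit binary word.
    return [format(int(c), f"0{n_bits}b") for c in codes]

words = adc(lambda t: np.sin(2 * np.pi * 5 * t), duration=0.2, fs=50, n_bits=3)
```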

Multiplexer and Demultiplexer


A multiplexer, also known as a data selector, is a device which enables multiple input signals to use a
single transmission medium (shared line); multiplexing can be categorized into Frequency-Division
Multiplexing (FDM) or Time-Division Multiplexing (TDM). Multiplexing takes two or more input signals (analog
or digital) and combines them into one signal for transmission over a single medium. The main objective of
multiplexing is to make signal transmission more efficient over a given communication channel, which also
decreases transmission costs.

Serial Communication
Digital communications are very useful today owing to some disadvantages of analog communications; one
application of digital communication is digital transmission to provide internet coverage, and another is digital
television to provide clear, good-quality video. In this paper, the proponents used digital communication,
which has two kinds. The first is parallel communication, wherein the data is transmitted in parallel and
therefore needs a communication channel for every data stream. The other is serial communication, wherein
the data is transmitted over a shared medium in sequential order, which is the technique used here.

Filter
A filter is an electric circuit which selectively transmits or rejects input signals depending on their frequency
ranges. Filters can be categorized into two domains: analog filters and digital filters. Analog filters remove
signals either above or below a chosen cut-off frequency. If an analog filter removes signals above a certain
frequency, allowing low frequencies to pass through, it is called a low-pass filter. On the other hand, if it
removes all signals below a certain frequency and passes frequencies higher than the cut-off frequency, it is
called a high-pass filter. Analog filters are made up of components such as resistors, capacitors, inductors,
op-amps, and the like. A digital filter, in contrast, only accepts digital input signals, and works through
oversampling, averaging and programmable processing.

Image Recognition
In the computing industry there has been a lot of advancement, and just a few years ago the term neural
network became popular on the internet. A neural network imitates human neurons: their behaviour, nature,
and functions. In computing, the use of neural networks was popularized by image recognition, wherein a lot
of samples, called datasets, are fed into the neural network so that the system can be trained. After training,
the system is able to recognize images depending on the datasets supplied; this is image recognition
technology.

3.2 Design Alternatives

In this design project, the design alternatives involve variations in the multiplexing scheme, image filtering
and image enhancement. The design must be able to accept CCTV feeds and process every frame to predict
whether the person in the frame/image exists in the dataset. If the person does not exist in the dataset, the
system will activate the alarm and notification system, which is necessary to make the administrators aware
of an unauthorized person in the vicinity. The design alternatives are varied in such a way that each can
meet the required design specifications provided by the client, which involve constraints that led the
proponents to come up with the design alternatives.

Figure 25 Block Diagram for Basis

Figure 26 Mathematical Model of the System

3.2.1 Simulation

There are many methods involved in filtering, since its main objective is to remove unnecessary parts of the
signal; the choice depends on which part is to be removed. In this study, a band-pass filter is utilized. A
band-pass filter only allows signals within a range of frequencies to pass through the circuit and rejects all
unnecessary frequencies; it limits the bandwidth of the output signal that passes through it. A transfer function
is one way to obtain a mathematical representation of the system: it relates the output response to the input
of a linear time-invariant system. Simulink is considered one of the features of MATLAB that provides an
interactive graphical environment for modelling, simulating and analysing different systems. Since Simulink
is quite different from ordinary programming in MATLAB, it uses a comprehensive library of predefined blocks
to construct graphical models of the system.
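Outside Simulink, the same kind of transfer-function model can be sketched with SciPy (assuming SciPy is available; the 40-60 Hz band, filter order and test tones are arbitrary example values, not the design's actual parameters):

```python
import numpy as np
from scipy import signal

fs = 1000                                       # sampling rate, Hz
# Band-pass transfer function H(z): only 40-60 Hz reaches the output.
b, a = signal.butter(4, [40, 60], btype="bandpass", fs=fs)

t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)  # in/out of band
y = signal.lfilter(b, a, x)      # apply the filter's difference equation
# After the initial transient, the 200 Hz component is strongly
# attenuated while the 50 Hz component passes through.
```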

3.2.2 Design Alternative 1

Figure 27 Design Alternative 1

In this design, the proponents decided to utilize adaptive filtering.

Figure 28 Mathematical model for Design Alternative 1

In a study conducted by Li and Xiuhua (2008), a new type of adaptive median filtering algorithm based on a
detection criterion and the median value was developed. The detection criterion rests on the assumption that
inherent relationships exist among neighbouring pixels: the gray value of neighbouring pixels changes slowly
and gradually.
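A compact sketch in this spirit is shown below (the window-growing rule and impulse test are simplified relative to the cited algorithm, and real implementations are vectorized rather than looped):

```python
import numpy as np

def adaptive_median(img, max_win=7):
    # Grow the window until the window median is not itself an impulse
    # (min < median < max); then replace the centre pixel only if the
    # pixel looks like impulse noise, otherwise keep the original value.
    out = img.copy()
    pad = max_win // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            for win in range(3, max_win + 1, 2):
                r = win // 2
                region = padded[y + pad - r:y + pad + r + 1,
                                x + pad - r:x + pad + r + 1]
                mn, med, mx = region.min(), np.median(region), region.max()
                if mn < med < mx:                   # median is reliable
                    if not (mn < img[y, x] < mx):   # centre is an impulse
                        out[y, x] = med
                    break
            else:                                   # window limit reached
                out[y, x] = med
    return out
```

Uncorrupted pixels that sit inside their neighbourhood's range are left untouched, which is what distinguishes this from a plain median filter.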

Figure 29 Comparison of Adaptive Filtering

In Figure 29, from Russo (2005), a method for the removal of Gaussian noise in images is presented. The
approach adopts adaptive multi-pass processing based on piecewise linear filtering and edge detection
in order to achieve a very accurate restoration of the image data. A technique for automatic parameter tuning
is also presented. Computer simulations have shown that the method is effective and yields satisfactory
results over a wide range of noise variances.

Figure 30 Adaptive Filter characteristics around the edge texture


In Figure 30, from a study conducted by Yoshino and Naito (2011), experimental results show that the
bit-reduction performance is improved by up to 1.6 points compared with the conventional scheme.
Furthermore, the proposed scheme contributes to improving the subjective picture quality.

According to Minar et al. (2016), adaptive filters give very promising results. Detection accuracy over the
HRF database ranged between 91.92% and 96.37%, with most results between 95% and 96%. Hence, their
results were very stable, with no large deviation between results.

Figure 31 Testing results of proposed Adaptive Filter
(Source: (Minar, et al., 2016))

Figure 31 shows the testing results. Sensitivity (SE) is the value for true positivity: the probability that a pixel
detected as black belongs to a blood vessel. Specificity (SP) is the value for true negativity: the probability
that a pixel detected as background actually belongs to the background. The accuracy (ACC) of the detection
algorithm is based on the equation ACC = (TP + TN) / (TP + FN + FP + TN).
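The three metrics follow directly from confusion-matrix counts; a small helper sketch (the counts passed in would come from a detection test run, the values used below are illustrative):

```python
def detection_metrics(tp, tn, fp, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix
    counts, using the same definitions as the equation above."""
    se = tp / (tp + fn)                    # true-positive rate
    sp = tn / (tn + fp)                    # true-negative rate
    acc = (tp + tn) / (tp + fn + fp + tn)  # overall accuracy
    return se, sp, acc

# Illustrative counts: 90 true positives, 95 true negatives,
# 5 false positives, 10 false negatives
se, sp, acc = detection_metrics(90, 95, 5, 10)
```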

Figure 32 Test Performance for Design Alternative 1

Figure 32 shows the testing of the filter built using the adaptive filtering algorithm. It can be seen that the
noise in the video feed was extracted, making the filtered image black and white.

3.2.3 Design Alternative 2

Figure 33 Design Alternative 2

In the second design, the proponents utilize frequency-domain filtering. Frequency-domain techniques
are widely used for long-tap adaptive filters to reduce computational complexity and improve the
convergence performance (Zhou, Chen, & Li, 2007). The only difference compared with the previous design
is that this design utilizes an algorithmic transformation wherein the proponents can make patterns in the image
space more interpretable and make highly skewed distributions less pronounced.

Figure 34 Mathematical model for Design Alternative 2

The Fourier Transform decomposes an image into its sine and cosine components. In other words, it
transforms an image from its spatial domain to its frequency domain. The idea is that any function may be
approximated exactly by the sum of infinitely many sine and cosine functions, and the Fourier Transform is a
way to do this.

Mathematically, the Fourier transform of a two-dimensional image is:

F(k, l) = Σ_{i=0..N−1} Σ_{j=0..N−1} f(i, j) e^(−i2π(ki/N + lj/N))

e^(ix) = cos x + i sin x

Equation 3 Fourier Transform

Here f is the image value in its spatial domain and F its value in the frequency domain. The result of the
transformation is complex numbers, which may be displayed either as a real and an imaginary image or as a
magnitude and a phase image. However, throughout the image processing algorithms only the magnitude
image is of interest, as it contains all the information we need about the image's geometric structure.
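In practice, the magnitude spectrum referred to above can be computed with NumPy's FFT routines; a minimal sketch:

```python
import numpy as np

def magnitude_spectrum(img):
    """2-D DFT of an image; return the log-scaled magnitude with the
    zero-frequency (DC) term shifted to the centre, the standard
    form used when displaying a spectrum."""
    F = np.fft.fft2(img)        # complex F(k, l)
    F = np.fft.fftshift(F)      # move the DC component to the centre
    return np.log1p(np.abs(F))  # compress the dynamic range for viewing
```

For a constant image, all the energy collapses into the central DC term, which is why flat regions contribute nothing to the rest of the spectrum.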

Figure 35 Learning curves for the frequency bin-normalized algorithm (1), frequency domain Newton
algorithm (2), and proposed new algorithm (3)
(Source: (Zhou & DeBrunner, 2006))

In Figure 35, the researchers claimed that their algorithm is computationally superior. Additionally, the
proposed algorithm performs better than the frequency-domain Newton method because it does not need to
perform FFT filtering, and therefore does not suffer from the bias caused by the circular convolution that
results from FFT filtering.

Figure 36 Global filtering processing times for BLOB and TUBULAR shape detections
(Source: (Graca, Falcao, Kumar, & Figueiredo, 2013))

In Figure 36, the filtering process represents the functionality with the highest impact on global processing
time. The authors implemented several versions and concluded that time-domain approaches executing on a
GPU are faster for small filters, while frequency-domain GPU methods are more efficient for larger filters.
Through parallelization of the algorithm, they obtained speedups of up to 17x on 576 x 576 pixel images and
up to 49x on 1728 x 1728 pixel images.

Figure 37 Test Performance for Design Alternative 2

Figure 37 shows the comparison between the original image and its magnitude spectrum. This filter shows
the frequency magnitudes of the original image; it does not return any important detail about the image itself,
only its frequency magnitudes.

3.2.4 Design Alternative 3

Figure 38 Design Alternative 3

In this design, the proponents utilize spatial domain filtering.

Spatial domain refers to the aggregate of pixels composing an image. Spatial domain image processing
operates on the pixels directly, as expressed in the following equation:

𝑔(𝑥, 𝑦) = 𝜏[𝑓(𝑥, 𝑦)]


Equation 4 Spatial domain image processing

where f(x, y) and g(x, y) are the input and the processed image, related through the mathematical mapping "𝜏"
defined over (x, y). When this mapping/operator is applied at any arbitrary point of coordinate (x, y) to get
the processed point at the same coordinate (x, y), the mapping "𝜏" is called an intensity
operator, intensity mapping, or gray-level transformation.

𝑠 = 𝜏[𝑟]
Equation 5 Intensity of the arbitrary point before and after transformation
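A minimal illustration of the mapping 𝜏 applied pixel-by-pixel; the negative transform s = 255 − r is used here purely as an example operator:

```python
import numpy as np

def intensity_transform(f, tau):
    """Apply g(x, y) = tau[f(x, y)] to every pixel (Equations 4 and 5)."""
    return tau(f)

# Example operator: the image negative, s = 255 - r
f = np.array([[0, 100], [200, 255]], dtype=np.uint8)
g = intensity_transform(f, lambda r: 255 - r)
```

Any gray-level transformation (thresholding, gamma correction, contrast stretching) fits the same pattern by swapping in a different 𝜏.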

Figure 39 Mathematical model for Design Alternative 3

(Source: (Das, 2015))

Figure 40 Evaluation Results
(Source: (Horiuchi &Tominaga, 2010))

The bar chart indicates the average score of both experiments, and each straight line on the bar indicates
the standard deviation of all scores. It should be noted that the proposed algorithm obtains a remarkable
score with a high average and a small standard deviation.

Figure 41 Test Performance for Design Alternative 3


From the image shown, design alternative 3 uses spatial filtering, which lessens the noise of the image and
retains the colors of the original image. In machine learning, the proponents need to extract just the
important details, such as the contour of a face, which is most likely to be unique for every person, together
with the face linings, which serve as the fingerprint of a person. Therefore, design alternative 3 could not be
effective in terms of machine learning application.

3.3 Design Methods

The design alternatives can be dissected into two subsystems namely the Software (Program) and the
Hardware (Adapter). Each subsystem has its own process on how they will be designed in accordance with
the specifications presented in Chapter 2. The process, tools and methodologies have been presented as
shown in Table 7.

Table 7 Design Methods for the Design Alternatives

Research Phase
Process: Cite studies and references about virtual instruments, digital filters, different algorithms, and artificial neural networks.
Tools: Academic journals, research papers and articles, related literature and studies, and textbooks.
Methodology: Acquire techniques on how to design and apply digital filters as part of the virtual representation of DVRs from scientific journals, articles, and research papers.

Simulation Phase (Subsystem: Software, Program-Algorithm)
Process: Generate codes for simulation that would be used for the testing of the system.
Tools: Python, MATLAB, or LabVIEW (software applications).
Methodology: Conduct simulations to test the accuracy of the system.

Execution Phase
Process: Test the generated codes for the digital filters (virtual instrument), which can be applied using an artificial neural network.
Tools: Python, MATLAB, or LabVIEW, and the software involved in CCTV cameras.
Methodology: Apply the codes generated from MATLAB, LabVIEW, or Python to determine the captured feeds that pass through the system.

Table 8 Summary of Design Alternatives

SUMMARY OF DESIGN ALTERNATIVES

Design 1: Adaptive Filtering
Advantages: Fast computational time; extracts important details from the image.
Disadvantages: Converts the image into a black-and-white image.

Design 2: Frequency Domain Filtering
Advantages: Fastest computational time.
Disadvantages: Hard to analyse since the output is in the frequency domain.

Design 3: Spatial Domain Filtering
Advantages: Retains image color; reduces white noise.
Disadvantages: Long computational time.

Table 8 shows the summary of the design alternatives as well as the advantages and disadvantages of each.
A performance matrix was constructed to further analyze the system performance and the mean difference of
each design alternative.

Figure 42 Performance Matrix

Figure 42 shows the computational time for each of the design alternatives in milliseconds. Design alternative
1 took 3.59 ms, design alternative 2 took 1.18 ms, and design alternative 3 took 10.85 ms. This performance
matrix took about 2,700+ iterations to obtain a visible mean difference between the design alternatives.

[Chart: "Design Alternative Mean Difference" for Design 1, Design 2, and Design 3, plotted over roughly 2,800 iterations]

Figure 43 Mean Difference of Design Alternatives

Figure 43 shows the 2,700+ iterations used in the comparison of the design alternatives.

3.4 Morphological Chart


The morphological chart is presented to generate design alternatives by combining means for each
subsystem. This helps the proponents design alternatives that can satisfy and meet the requirements of the
objectives.

Table 9 Morphological Chart

SUBSYSTEMS | DESIGN 1 | DESIGN 2 | DESIGN 3
Channels: 4 channels (common to all designs)
Target Identification Algorithm: Back Propagation | Forward Propagation | Sigmoid Function
Machine Learning Algorithm: Artificial Neural Network | K-Nearest Neighbor | Support Vector Machine
Digital Filter: Adaptive Filtering | Frequency Domain Filtering | Spatial Domain Filtering
Serial Connection: USB port / BNC connector (common to all designs)
Storage: Local Storage / Cloud-Based Storage (common to all designs)

In the experiments, many types of feature extraction and filtering techniques were used to see which among
the three design alternatives is best for this kind of application. For design alternative 1, face detection and
binarization are used, since binary color is more appropriate for face recognition applications and also lessens
the processing time. The second alternative design uses the magnitude spectrum, which depends on the
blending of light on the face; but since this is a magnitude spectrum, the face cannot be extracted, so the
whole frame must be processed, which is not appropriate for face recognition. The third alternative design
uses a blurring approach to lessen the noise of the actual image, which is good; but processing a colored
image is still not appropriate for face recognition. Because of this, the first alternative design leads the ranking.

3.5 Design Components

3.5.1 Data Encoder


Analog to Digital Converter
An Analog-to-Digital Converter (ADC) is an electronic circuit used to convert analog signals into digital pulses
or binary form. It is widely used in industry, mainly because many measurable environmental parameters are
analog. These signals need an integrator or intermediate device to perform the conversion so that they can
communicate with digital processors such as microcontrollers and microprocessors.

Figure 44 Basic Schematic Diagram of an Analog-to-Digital Converter

(Source: Agarwal (2014), How to Convert the Analog Signal to Digital Signal by ADC Converter)

The principle behind the conversion of an analog signal is that it needs to be converted to a certain number of
bits N. The bit sequence represents the number, where each bit has double the weight of the next, starting
from the Most Significant Bit (MSB) down to the Least Significant Bit (LSB).

V_in = (V_ref / 2^N) Σ_{n=0..N−1} b_n 2^n

Equation 6 ADC Conversion

This equation expresses the mathematical relationship: more bits lead to more precision in the
digital representation, where the range is between 0 and V_ref.
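Equation 6 can be checked numerically. A minimal sketch, assuming the bits are supplied MSB first:

```python
def adc_reconstruct(bits, vref):
    """Analog value represented by an N-bit code, per Equation 6:
    Vin = (Vref / 2**N) * sum(b_n * 2**n), with bits given MSB first."""
    n = len(bits)
    code = 0
    for b in bits:              # MSB first: each bit weighs double the next
        code = (code << 1) | b
    return vref * code / (2 ** n)
```

For a 4-bit converter with V_ref = 5 V, the code 1000 maps to half scale (2.5 V), which matches the doubling-weight description above.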

Digital Filters
A digital filter uses a digital processor to perform numerical calculations on sampled values of a signal. The
processor can be a PC (Personal Computer) or a specialized DSP (Digital Signal Processor) chip. The
operation of digital filters is presented in a step-by-step procedure.

The raw signal to be filtered digitally can be any waveform described by the function

𝑽 = 𝒙(𝒕); t = time
Equation 7 Raw Signal

The signal is sampled at time intervals denoted by "h", the sampling interval. The
sampled value at time 𝑡 = 𝑖ℎ is
𝑥𝑖 = 𝑥(𝑖ℎ)
Equation 8 Sampled Value

Moreover, the digital values transferred from the ADC to the processor form a
sequence
𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 , … …

Which corresponds to the values of the signal waveform at a given time of

𝑡 = 0 , ℎ, 2ℎ , 3ℎ , 4ℎ , ….

where 𝑡 = 0 is the instant at which sampling started.
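As an illustration of operating on the sample sequence x0, x1, x2, …, a simple moving-average FIR filter can be written as follows. The three-tap length is a hypothetical choice, not the design's actual filter:

```python
def moving_average(samples, taps=3):
    """A simple FIR digital filter: each output is the mean of the
    current sample x_i = x(i*h) and the previous (taps - 1) samples."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - taps + 1): i + 1]
        out.append(sum(window) / len(window))
    return out
```

Each output value depends only on the current and past samples, so the filter can run in real time as new samples arrive from the ADC.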

PIC Microcontroller
PIC microcontrollers are a family of specialized microcontroller chips produced by Microchip Technology in
Chandler, Arizona. The acronym PIC stands for "peripheral interface controller," although that term is rarely
used nowadays. A microcontroller is a compact microcomputer designed to govern the operation of
embedded systems in motor vehicles, robots, office machines, medical devices, mobile radios, vending
machines, home appliances, and various other devices. A typical microcontroller includes a processor,
memory, and peripherals.

Figure 45 PIC Microcontroller

(Source: https://www.elprocus.com/introduction-to-pic-microcontrollers-and-its-architecture/)

The PIC microcontrollers appeal to hobbyists and experimenters, especially in the fields of electronics and
robotics. Key features include wide availability, low cost, ease of reprogramming with built-in EEPROM
(electrically erasable programmable read-only memory), an extensive collection of free application notes,
abundant development tools, and a great deal of information available on the Internet. The PIC
microcontrollers often appear under the brand name PICmicro.

Figure 46 PIC Architecture

(Source: https://www.elprocus.com/introduction-to-pic-microcontrollers-and-its-architecture/)

Every PIC microcontroller has a set of registers that also function as RAM (random access memory). Special
purpose control registers for on-chip hardware resources are also mapped into the data space. Every PIC
has a stack that saves return addresses. The stack was not software-accessible on the earlier versions of
the PIC, but this limitation was removed in later devices.

RS232 to USB Converter
A USB to serial adapter, also referred to as a USB serial converter or RS232 adapter, is a small electronic
device which converts a USB signal to serial RS232 data signals. RS232 is the type of signal used in
many older PCs and is referred to as a serial COM port. A USB to serial adapter typically converts between
USB and either RS232, RS485, RS422 or TCP signals; however, some USB to serial adapters have other
special conversion features such as custom baud rates or high-speed operation.

Figure 47 RS232 to USB


(Source: https://www.amazon.com/Manhattan-Serial-Converter-Connects-205146/dp/B0007OWNYA)

Even though the RS232 standard is an older communication protocol, it is still used by many
modern serial RS232 devices in both business and consumer markets, and is also often used for personal
and office serial devices. Most new computers no longer have a built-in COM port, so a USB serial
adapter is often used for connecting many types of serial devices to a computer. A standard USB to serial
adapter is a very useful device for connecting equipment such as printers, scanners, scales and GPS devices;
most business and consumer equipment can also be connected to a computer by using an industrial-grade
adapter.

The USB serial adapter is made in different versions and grades; the 1-port adapter is very popular and one
of the most commonly used. Two- to four-port adapters are also frequently used if the user needs
more than one serial port. Multi-port adapters can be connected to a computer using just one USB cable,
after which the computer's operating system will create multiple virtual COM ports. Grades range from
standard consumer grade to industrial grade and even mission-critical grade adapters. One reason serial
communication persists is that it is a very reliable way of communicating data and is fairly easy for engineers
to understand, so it is easy to develop and program new devices. Another reason is that, since the serial
protocol has existed for such a long time, many devices already have it built in, so it is easy to integrate new
devices with old devices.

3.5.2 Artificial Neural Network

Face Detection and Recognition


Face detection is a computer technology that can identify the presence of people's faces (or
even animals') within digital images; it utilizes machine learning and different algorithms to detect human
faces within larger images containing numerous objects. In face detection, the system identifies whether
a human face is present in either a video feed or an image. While the process is complex (in this case,
the input comes from a real-time video feed), one of the easiest features to detect in a human face is
a pair of eyes.

In order to perform face detection, the working principle of the Haar feature-based cascade classifier is
utilized. The algorithm needs a large number of positive and negative images to train
the classifier, which extracts features from the images. The images shown below are examples
where Haar features are applied. Each feature is a single value obtained by subtracting the sum of the pixels
under the white rectangle from the sum of the pixels under the black rectangle.
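The rectangle-sum arithmetic behind these features is usually done with an integral image (summed-area table), so each sum costs only four lookups. This is a minimal sketch of one two-rectangle feature, not OpenCV's trained cascade; the rectangle geometry is an assumed example:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: entry (y, x) holds the sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of an h-by-w rectangle from at most four integral-image lookups."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect(img, top, left, h, w):
    """Two-rectangle Haar-like feature: the sum under the white (left)
    rectangle subtracted from the sum under the black (right) one."""
    ii = integral_image(img.astype(np.int64))
    white = rect_sum(ii, top, left, h, w)
    black = rect_sum(ii, top, left + w, h, w)
    return black - white
```

A large feature value signals a strong light-to-dark edge at that position, which is what the trained cascade thresholds at each stage.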

Figure 48 128-dimensional hypersphere


(Source: (University, n.d.))

Figure 49 Face landmarks


(Source: https://openface-api.readthedocs.io/en/latest/openface.html)

According to an article, a hypersphere is a mathematical object that lives in four-dimensional space, with
properties both similar to and uniquely different from an ordinary sphere. The surface of the hypersphere is
three-dimensional: we could walk around on a hypersphere and not know the difference from our own space.
In fact, it may be true that our own world is a very large hypersphere.

Practically speaking, most of an image does not contain faces. The method therefore focuses on sections
where face qualities may be present, which is where the concept of a cascade of classifiers is introduced.
A cascade of classifiers groups the features into different levels of classifiers and applies them successively.
If a window fails at the first stage, it is automatically discarded; if it passes, it proceeds to the next level and
the process continues. A window which passes all stages is considered a face region.

Neural Network Classifier


A neural network imitates the function of a real network of human neurons, where neurons carry the electrical
impulses that are an important part of the brain. A neural network run by a computer is composed of
mathematical equations, algorithms, and basic arithmetic as well. A neural network also works like the human
brain does: it predicts an output from the input being fed to it, and sometimes, given an unlabelled dataset,
the neural network will cluster the data into groups depending on their characteristics. A neural network
classifier is a method used for prediction; related approaches such as decision trees and random forests,
which are very popular nowadays, determine the most accurate predictions in different ways depending on
the given dataset.

Python Programming Language


Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-
level built-in data structures, combined with dynamic typing and dynamic binding, make it very attractive for
Rapid Application Development, as well as for use as a scripting or glue language to connect existing
components together.

3.6 Multiple Constraints

The proponents decided to have three parameters that would serve as a Key Performance Indicator namely,
Economic, Manufacturability and Performance.

Table 10 Design Consideration to Satisfy the Criterion

2 = Very Low | 4 = Low | 6 = Moderate | 8 = High | 10 = Very High

Table 11 Designer’s Raw Ranking Criterion Summary

Design Constraint: Economic | Criterion Importance: 8
Design Consideration/Justification: This design project strives to strike a balance between cost and performance, so cost is given an importance level of 8 (High). The cost of each design should satisfy the condition: total cost ≤ ₱2,000. Through budget analysis, the cost of each design alternative is evaluated; prices from both local and online stores are considered for this constraint.

Design Constraint: Performance | Criterion Importance: 10
Design Consideration/Justification: The performance of each design alternative is measured through its conversion delay, which has to be less than or equal to 20 milliseconds to provide a fast real-time CCTV feed. Moreover, accuracy should be equal to or greater than 90%, and confidence should be equal to or greater than 25%, with eight (8) registered faces in the database.

Design Constraint: Manufacturability | Criterion Importance: 6
Design Consideration/Justification: The availability of the components and equipment in each design alternative has been considered. The number of stores, including online stores, locally selling subsystems for each design alternative is summarized.

Table 11 shows the Designer’s Raw Ranking Criterion Summary considering different constraints, namely
Economics, Performance, and Manufacturability. The level of importance of these constraints has been set
such that Performance has the highest level of importance, 10 (Very High). The Economics criterion has an
importance level of 8 (High). Lastly, Manufacturability has the lowest importance level, 6 (Moderate), which
reflects only the availability of the materials and equipment needed.

One of the critical aspects of a design project is its performance. Since this design project includes a
surveillance system, it has to be fast and reliable, with outstanding and acceptable performance, so that it
can provide real-time, good-quality service such as image filtering. All CCTV feeds will undergo processes
of transmission, conversion, and filtration, which should deliver important information in a timely manner.

Therefore, the conversion delay has been considered in this design project to show which design alternative
provides negligible or acceptable conversion delay, which can be measured by adding a timer that starts
before the conversion and stops right after it. Aside from the conversion delay, the proponents have
considered other parameters such as the extraction, accuracy, and confidence of the system.

Second, manufacturability. One of the factors that a designer should consider is the availability of the
materials within the vicinity. Before starting the prototype, it must be guaranteed that the components and
equipment needed are available and accessible in the market.

Last but not least, economics. Since the target market of this project is CCTV companies and some
public or private sectors in the Philippines, the system should be low-cost: the design must be the simplest
possible, incorporating the lowest number of subsystems and components, without compromising
efficiency. As such, the proponents must carefully evaluate every component’s cost.

3.7 Testing and Assessment Plan

3.7.1 Constraint Assessment Method

One of the multi-criteria decision-making methods, developed by T.L. Saaty, is the Analytic Hierarchy
Process (AHP). AHP is a method to derive ratio scales from paired comparisons. It was developed in
reaction to the finding that there was a miserable lack of common, easily understood, easy-to-implement
methodology for making complex decisions. Its almost universal adoption as a new paradigm for
decision-making, coupled with its ease of implementation and understanding, constitutes its success. More than
that, it has proved to be a methodology capable of producing results that agree with perceptions and
expectations.

Table 12 shows the output of an AHP analysis in decision-matrix form, containing m rows and n columns.
The design alternatives are placed at the column headings and are compared against the criteria selected
by the proponents. In the body of the matrix, 𝜔_i denotes the weight assigning each criterion its level of
importance, 𝛼_ij is the rating of the j-th alternative with respect to the i-th criterion, and S_j is the weighted
sum of the ratings of design alternative j, computed with the formula:

S_j = Σ_{i=1..m} 𝜔_i 𝛼_ij

Equation 9 Summation of the Weighting Factors

Table 12 AHP Decision Matrix

Weight | DESIGN OPTION 1 | DESIGN OPTION 2 | ⋯ | DESIGN OPTION n
Criteria 1: 𝜔_1 | 𝛼_11 | 𝛼_12 | ⋯ | 𝛼_1n
Criteria 2: 𝜔_2 | 𝛼_21 | 𝛼_22 | ⋯ | 𝛼_2n
⋮
Criteria m: 𝜔_m | 𝛼_m1 | 𝛼_m2 | ⋯ | 𝛼_mn
Score: S_1 = Σ_{i=1..m} 𝜔_i 𝛼_i1 | S_2 = Σ_{i=1..m} 𝜔_i 𝛼_i2 | ⋯ | S_n = Σ_{i=1..m} 𝜔_i 𝛼_in
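Equation 9 can be sketched directly in Python; the weights and ratings below are illustrative placeholders, not the study's actual values:

```python
def ahp_scores(weights, ratings):
    """S_j = sum_i w_i * a_ij (Equation 9): one score per design option.
    ratings[i][j] is the rating of option j under criterion i."""
    n_options = len(ratings[0])
    return [sum(w * row[j] for w, row in zip(weights, ratings))
            for j in range(n_options)]

# Hypothetical example: 3 criteria weighted 10, 8, 6 over 2 options
scores = ahp_scores([10, 8, 6], [[9, 8], [7, 9], [8, 8]])
```

The option with the largest S_j wins the comparison.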

To evaluate the proponents’ design alternatives, the designers consider the functionality that can satisfy the
economic, performance, reliability and manufacturability constraints. The model of trade-off strategies by
Otto and Antonsson (1991), as presented in engineering design, is utilized. The given constraints were
identified and ranked from the most important to the least important, based on the perception of the designers.

3.7.2 Final Design Assessment Method

In this design project, the proponents carefully considered pertinent realistic constraints. The engineering
constraints that are relevant to the design project are: (1) Economics, (2) Performance and (3)
Manufacturability. The following constraints were considered in the evaluation of trade-offs of the project.

Economics
Economics is a parameter of utmost consideration. Since the target market of this project is CCTV
companies and some public or private sectors in the Philippines, the system should be low-cost: the design
must be the simplest possible, incorporating the lowest number of subsystems and components, without
compromising efficiency. As such, the proponents must carefully evaluate every component’s cost.
The prices of the raw components of the device/system are based on Alexan Commercial, Al-Glo Audio
Electronics, Deeco and E-Gizmo Mechatronix Central. Table 14 will help the proponents identify which
design will be chosen in the trade-off.

𝐴𝐷𝐶 + 𝑆𝐶 + 𝑀𝐶 ≤ 2000

Where,

ADC – Analog-to-Digital Converter


SC – Serial Communication (RS232 to USB Converter)
MC – Microcontroller

In this constraint, the proponents consider the price differentiation of the microcontroller, RS232 to USB
Converter (Serial Communication) and the Analog-to-Digital Converter in every design alternative.

Performance
One of the critical aspects of a design project is its performance. Since this design project includes a
surveillance system, it has to be fast and reliable, with outstanding or acceptable performance, so that it can
provide real-time, good-quality service such as image filtering. All CCTV feeds undergo transmission,
conversion, and filtration, and these should deliver important information in a timely manner. Therefore, the
computational time has been considered in this design project to show which design alternative provides
negligible or acceptable conversion delay, which can be measured by adding a timer that starts before the
conversion and stops right after it. In addition, parameters such as the accuracy, extraction, and confidence
of the system are involved.

Manufacturability
One of the factors that a designer should consider is the availability of the materials within the vicinity. Before
starting the prototype, it must be guaranteed that the components and equipment are available and
accessible in the market.

For this constraint, manufacturability, the proponents decided to consider the availability of the RS232 to USB
Converter for Serial Communication wherein a lot of electronic outlets were considered such as Alexan
Commercial, E-Bay, RS-Philippines, Deeco, E-Gizmo Mechatronix Central and such.

CHAPTER 4: CONSTRAINTS, TRADE-OFFS, AND SENSITIVITY ANALYSIS

4.1 Design Constraints

In every design, constraints play an important role in the development of a successful project. Constraints
help narrow the choices when creating a project, which is part of its development. Furthermore, they set the
limitations of the design project considering its cost, components, requirements, and operating conditions,
following the engineering requirements and standards. This helps the proponents analyze, justify, and
evaluate all the design alternatives that could address the problem, and validates which design is best for
the project.

Table 13 Design Constraints

CONSTRAINTS PARAMETERS LIMIT VALUE

Computational Time (ms) ≤2

Extraction Yes
Performance
Accuracy (%) ≥ 90

Confidence (%) ≥ 25

Economics Cost (₱) ≤ 2000

Manufacturability Availability Available or Not Available

4.2 Design Trade-off Analysis

This section presents the approach that the proponents will utilize for the trade-off between characteristics:
how the design constraints will be used to quantify the best design, and how the criteria for the trade-offs are
measured and analyzed numerically. Trade-off analysis is a situational decision process in evaluating the
design constraints identified by the proponents to select the best design for the project.

The alternatives were developed based on the engineering requirements, standards, and constraints which
will be further evaluated in order to choose the winning design to be implemented in the project. Moreover,
Economic, Performance and Manufacturability are the criterion to be evaluated using the Analytical Hierarchy
Process.

Figure 50 Ranking Scale Diagram

Equation 10 below illustrates how the percentage difference between the two values being evaluated is
computed.

% difference = (Governing value − Lower value) / Governing value × 10

Equation 10 Percentage Difference between Two Values

𝑆𝑢𝑏𝑜𝑟𝑑𝑖𝑛𝑎𝑡𝑒 𝑅𝑎𝑛𝑘 = 𝐺𝑜𝑣𝑒𝑟𝑛𝑖𝑛𝑔 𝑅𝑎𝑛𝑘 − (% 𝐷𝑖𝑓𝑓𝑒𝑟𝑒𝑛𝑐𝑒)


Equation 11 Subordinate Ranking

Equation 12 states the formula to compute the overall rank based on the designer’s constraints.

𝑶𝒗𝒆𝒓𝒂𝒍𝒍 𝑹𝒂𝒏𝒌 = ∑ 𝑤𝑒𝑖𝑔ℎ𝑡 𝑥 𝑟𝑎𝑛𝑘


Equation 12 Overall Ranking
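Equations 10 to 12 can be sketched as a pair of helper functions. The absolute value follows the document's own usage (the lower value may exceed the governing value, as in the computational-time comparison); the example weights in the test are illustrative:

```python
def subordinate_rank(governing, lower, governing_rank=10):
    """Equations 10-11: percentage difference on the 10-point scale,
    then the subordinate rank relative to the governing value."""
    pct_diff = abs(governing - lower) / governing * 10
    return governing_rank - pct_diff

def overall_rank(weights, ranks):
    """Equation 12: weighted sum of per-criterion ranks."""
    return sum(w * r for w, r in zip(weights, ranks))
```

For example, a subsystem costing ₱250 against a ₱200 baseline yields a percentage difference of 2 and a subordinate rank of 8, matching the economic-constraint computation below.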

4.3 Design Constraints Assessment

4.3.1 Economic Constraint


In this constraint, the proponents consider the price differentiation of the microcontroller, RS232 to USB
Converter (Serial Communication) and the Analog-to-Digital Converter in every design alternative.

Table 14 Price Differentiation of Design Alternatives

SUBSYSTEM | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3
Microcontroller: Php 401.78 | Php 401.78 | Php 401.78
Serial Communication (RS232 to USB Converter): Php 250.00 | Php 290.00 | Php 300.00
Analog-to-Digital Converter: Php 450.00 | Php 450.00 | Php 450.00
Total: Php 1,101.78 | Php 1,141.78 | Php 1,151.78

To determine which design alternative gets the highest rank, the least possible cost of an RS232 to USB
Converter, Php 200.00, is used as the lower value.

Design Alternative 1

Using Equation 10,

% difference = (250.00 − 200.00) / 250.00 × 10

= 0.2 × 10

% difference = 2

Using Equation 11,

Subordinate Rank = 10 − 2

Subordinate Rank = 8

Design Alternative 2

Using Equation 10,

% difference = (290.00 − 200.00) / 290.00 × 10

= 0.310344828 × 10

% difference = 3.103448276

Using Equation 11,

Subordinate Rank = 10 − 3.103448276

Subordinate Rank = 6.896551724

Design Alternative 3

Using Equation 10,

% difference = (300.00 − 200.00) / 300.00 × 10

= 0.333333333 × 10

% difference = 3.333333333

Using Equation 11,

Subordinate Rank = 10 − 3.333333333

Subordinate Rank = 6.666666667

Table 15 Economic Constraint Ranking

SUBSYSTEM | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3
Microcontroller: 10 | 10 | 10
Serial Communication (RS232 to USB Converter): 8 | 6.896551724 | 6.666666667
Analog-to-Digital Converter: 10 | 10 | 10
Total: 9.333333333 | 8.965517241 | 8.888888889

Figure 51 Trade-off Ranking Scale for Economic Constraint

4.3.2 Performance Constraint


In performance constraint, the proponents utilize the parameters such as computational time, extraction,
accuracy and confidence of the system.

Table 16 Performance Constraint

PARAMETERS (IMAGE FILTERING) | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3
Computational Time: 3.59 ms | 1.18 ms | 10.85 ms
Extraction: Yes | No | No
Accuracy (%): 92.977 | 82.5 | 61.068
Confidence (%): 42 | 59.33 | 77.47

Computational Time

Design Alternative 1

Using Equation 10,

% difference = (3.59 − 3.59) / 3.59 × 10 = 0 × 10 = 0

Using Equation 11,

Subordinate Rank = 10 − 0 = 10

Design Alternative 2

Using Equation 10,

% difference = (3.59 − 1.18) / 3.59 × 10 = 0.671309 × 10 = 6.713091922

Using Equation 11,

Subordinate Rank = 10 − 6.713091922 = 3.286908

Design Alternative 3

Using Equation 10,

% difference = |3.59 − 10.85| / 3.59 × 10 = 2.022284 × 10 = 20.22284123

Using Equation 11,

Subordinate Rank = 10 − 20.22284123 = −10.22284123

Accuracy

Design Alternative 1

Using Equation 10,

% difference = (92.977 − 92.977) / 92.977 × 10 = 0 × 10 = 0

Using Equation 11,

Subordinate Rank = 10 − 0 = 10

Design Alternative 2

Using Equation 10,

% difference = (92.977 − 82.5) / 92.977 × 10 = 0.11268 × 10 = 1.1268

Using Equation 11,

Subordinate Rank = 10 − 1.1268 = 8.87316

Design Alternative 3

Using Equation 10,

% difference = (92.977 − 61.068) / 92.977 × 10 = 0.34319 × 10 = 3.4319

Using Equation 11,

Subordinate Rank = 10 − 3.4319 = 6.568


Confidence

Design Alternative 1

Using Equation 10,

% difference = (77.47 − 42) / 77.47 × 10 = 0.4578547 × 10 = 4.578547

Using Equation 11,

Subordinate Rank = 10 − 4.578547 = 5.421453

Design Alternative 2

Using Equation 10,

% difference = (77.47 − 59.33) / 77.47 × 10 = 0.234155 × 10 = 2.34155

Using Equation 11,

Subordinate Rank = 10 − 2.34155 = 7.65844

Design Alternative 3

Using Equation 10,

% difference = (77.47 − 77.47) / 77.47 × 10 = 0 × 10 = 0

Using Equation 11,

Subordinate Rank = 10 − 0 = 10

Table 17 Summary of Performance Trade-off

| PARAMETERS (IMAGE FILTERING) | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3 |
| Computational Time | 10 | 3.286908 | −10.22284123 |
| Extraction | 10 | 0 | 0 |
| Accuracy | 10 | 8.87316 | 6.568 |
| Confidence | 5.421453 | 7.65844 | 10 |
| Rank | 8.85536 | 4.954627 | 1.586289 |

Figure 52 Trade-off Ranking Scale for Performance Constraint

4.3.3 Manufacturability Constraint

For the manufacturability constraint, the proponents considered the availability of the RS232-to-USB converter for serial communication across several electronics outlets, such as Alexan Commercial, E-Bay, RS-Philippines, Deeco, and E-Gizmo Mechatronix Central.

Table 18 Availability of Materials from Distributors in the Philippines

The five distributors surveyed were Alexan Commercial, E-Bay, RS-Philippines, Deeco, and E-Gizmo Mechatronix Central.

| SERIAL COMMUNICATION (RS232 TO USB CONVERTER) | NUMBER OF DISTRIBUTORS WITH STOCK (RANKING) |
| Design Alternative 1 | 3 |
| Design Alternative 2 | 3 |
| Design Alternative 3 | 3 |

Design Alternative 1

Using Equation 10,

% difference = (3 − 3) / 3 × 10 = 0 × 10 = 0

Using Equation 11,

Subordinate Rank = 10 − 0 = 10

Design Alternative 2

Using Equation 10,

% difference = (3 − 3) / 3 × 10 = 0 × 10 = 0

Using Equation 11,

Subordinate Rank = 10 − 0 = 10

Design Alternative 3

Using Equation 10,

% difference = (3 − 3) / 3 × 10 = 0 × 10 = 0

Using Equation 11,

Subordinate Rank = 10 − 0 = 10

Table 19 Manufacturability Constraint Ranking

| MANUFACTURABILITY | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3 |
| Serial Communication (RS232 to USB Converter) | 10 | 10 | 10 |
| Rank | 10 | 10 | 10 |

Figure 53 Manufacturability Constraint Ranking Trade-off

4.4 Winning Design

Table 20 shows the summary of the output parameters after considering all the constraints set by the designers.

Table 20 Summary of the Output Parameters based on Different Constraints

| DESIGN CONSTRAINTS | DESIGNER'S RAW RANKING | PARAMETER/S | DESIGN 1 | DESIGN 2 | DESIGN 3 |
| Performance | 10 | Computational Time, Extraction, Accuracy, Confidence | 8.85536 | 4.954627 | 1.586289 |
| Manufacturability | 6 | Availability of the Materials | 10 | 10 | 10 |
| Economic | 8 | Price/Cost | 9.33333 | 9.1954022 | 8.888888889 |

Design Alternative 1

Performance

Using Equation 10,

Rank = 10 − (|8.85536 − 8.85536| / 8.85536) × 10 = 10

Manufacturability

Using Equation 10,

Rank = 10 − (|10 − 10| / 10) × 10 = 10

Economic

Using Equation 10,

Rank = 10 − (|9.33333 − 9.33333| / 9.33333) × 10 = 10

Design Alternative 2

Performance

Using Equation 10,

Rank = 10 − (|8.85536 − 4.954627| / 8.85536) × 10 = 5.59505

Manufacturability

Using Equation 10,

Rank = 10 − (|10 − 10| / 10) × 10 = 10

Economic

Using Equation 10,

Rank = 10 − (|9.33333 − 9.1954022| / 9.33333) × 10 = 9.852348

Design Alternative 3

Performance

Using Equation 10,

Rank = 10 − (|8.85536 − 1.586289| / 8.85536) × 10 = 1.791332

Manufacturability

Using Equation 10,

Rank = 10 − (|10 − 10| / 10) × 10 = 10

Economic

Using Equation 10,

Rank = 10 − (|9.33333 − 8.888888889| / 9.33333) × 10 = 9.5238129

Table 21 Overall Ranking of Design Alternatives

| DESIGN CONSTRAINTS | DESIGNER'S RAW RANKING | PARAMETER/S | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3 |
| Performance | 10 | Computational Time, Extraction, Accuracy, Confidence | 10 | 5.59505 | 1.791332 |
| Manufacturability | 6 | Availability of the Materials | 10 | 10 | 10 |
| Economic | 8 | Price/Cost | 10 | 9.852348 | 9.5238129 |
| Overall Ranking | | | 10 | 8.115386 | 6.42099 |

Design Alternative 1

Using Equation 12,

(10/24 × 10) + (6/24 × 10) + (8/24 × 10) = 10

Design Alternative 2

Using Equation 12,

(10/24 × 5.59505) + (6/24 × 10) + (8/24 × 9.852348) = 8.115386

Design Alternative 3

Using Equation 12,

(10/24 × 1.791332) + (6/24 × 10) + (8/24 × 9.5238129) = 6.420992
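Equation 12's weighted average can be sketched in Python (an illustrative helper; here it is applied to Design Alternative 2's per-constraint scores):

```python
def overall_rank(scores, weights):
    """Equation 12: weighted average of the per-constraint scores,
    with the designer's raw rankings as the weights."""
    total_weight = sum(weights.values())
    return sum(w * scores[c] for c, w in weights.items()) / total_weight

weights = {"performance": 10, "manufacturability": 6, "economic": 8}
alt2 = {"performance": 5.59505, "manufacturability": 10, "economic": 9.852348}
rank_alt2 = overall_rank(alt2, weights)   # close to 8.115386
```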

Figure 54 Overall Trade-Off Ranking

According to Figure 54, which illustrates the overall trade-off ranking, Design Alternative 1 is the winning design under the constraints set by the designers.

4.5 Sensitivity Analysis

Figure 55 Sensitivity Analysis


A sensitivity analysis is a method used to determine how different values of an independent variable affect a particular dependent variable under a given set of assumptions. It studies how an output responds, within defined bounds, to one or more input variables, for example, the effect that changes in interest rates have on bond prices.

There are two types of sensitivity analysis: local and global. Local sensitivity analysis studies how small variations in an input around a given value (such as the ranking of a constraint) change the value of the output (the winning design). Global sensitivity analysis, on the other hand, considers the mathematical model as a whole, where varying one parameter across multiple input values can change the outcome.

In the sensitivity analysis, the expected output must remain the same: the winning design must keep its position even when a subsystem or ranking is changed. To determine how many iterations are needed, the number of permutations of the constraint rankings is used. With n constraints whose rankings are permuted, the formula is

𝑁𝑢𝑚𝑏𝑒𝑟 𝑜𝑓 𝑖𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛𝑠 = 𝑛!

Equation 13 Formula in determining the Number of Iterations for the Sensitivity Analysis

where n is the number of constraints. The proponents defined three constraints (economic, performance, and manufacturability). From the trade-off, the manufacturability constraint does not affect the overall ranking of the system, since all alternatives scored equally on it; it is therefore excluded from the sensitivity analysis.

Number of iterations = 2! = 2 × 1 = 2

Table 22 First Iteration for Sensitivity Analysis

| DESIGN CONSTRAINTS | DESIGNER'S RAW RANKING | PARAMETER/S | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3 |
| Performance | 10 | Computational Time, Extraction, Accuracy, Confidence | 10 | 5.59505 | 1.791332 |
| Manufacturability | 6 | Availability of the Materials | 10 | 10 | 10 |
| Economic | 8 | Price/Cost | 10 | 9.852348 | 9.5238129 |
| Overall Ranking | | | 10 | 8.115386 | 6.42099 |

Table 23 Second Iteration for Sensitivity Analysis

| DESIGN CONSTRAINTS | DESIGNER'S RAW RANKING | PARAMETER/S | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3 |
| Performance | 8 | Computational Time, Extraction, Accuracy, Confidence | 10 | 5.59505 | 1.791332 |
| Manufacturability | 6 | Availability of the Materials | 10 | 10 | 10 |
| Economic | 10 | Price/Cost | 10 | 9.852348 | 9.5238129 |
| Overall Ranking | | | 10 | 7.960215 | 6.087154 |

Design Alternative 1

Using Equation 12,

(8/18 × 10) + (10/18 × 10) = 10

Design Alternative 2

Using Equation 12,

(8/18 × 5.59505) + (10/18 × 9.852348) = 7.960215

Design Alternative 3

Using Equation 12,

(8/18 × 1.791332) + (10/18 × 9.5238129) = 6.087154
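Both iterations can be recomputed with the same weighted-average helper used for Equation 12 (names are illustrative); the check of interest is that the winning design is unchanged:

```python
def overall_rank(scores, weights):
    # Equation 12: weighted average of per-constraint scores
    total = sum(weights.values())
    return sum(w * scores[c] for c, w in weights.items()) / total

alternatives = {
    "Alt 1": {"performance": 10,       "manufacturability": 10, "economic": 10},
    "Alt 2": {"performance": 5.59505,  "manufacturability": 10, "economic": 9.852348},
    "Alt 3": {"performance": 1.791332, "manufacturability": 10, "economic": 9.5238129},
}
# First iteration: original weights. Second: performance and economic swapped,
# manufacturability excluded since it is identical across alternatives.
w1 = {"performance": 10, "manufacturability": 6, "economic": 8}
w2 = {"performance": 8, "economic": 10}
first  = {n: overall_rank(s, w1) for n, s in alternatives.items()}
second = {n: overall_rank(s, w2) for n, s in alternatives.items()}
```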

4.6 Design Optimization

Among the several methods available for determining the winning design from the given alternatives, the proponents used Pareto pairwise comparison. For this, they applied the following minimization and maximization objective formulas:

PCnorm = 9 × (Maxraw − PCraw) / (Maxraw − Minraw) + 1

Equation 14 Minimization Objective

PCnorm = 9 × (PCraw − Minraw) / (Maxraw − Minraw) + 1

Equation 15 Maximization Objective

Equation 14 is used when the ideal value of a parameter is the lowest one (for example, cost or computational time), while Equation 15 is used when the ideal value is the highest one (for example, accuracy).

Table 24 Summary of Output Parameters

| DESIGN CONSTRAINTS | PARAMETER/S | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3 |
| Performance | Computational Time | 3.59 ms | 1.18 ms | 10.85 ms |
| | Extraction | Yes | No | No |
| | Accuracy | 92.977 | 82.5 | 61.068 |
| | Confidence | 42 | 59.33 | 77.47 |
| Manufacturability | Availability of the Materials | 10 | 10 | 10 |
| Economic | Price/Cost | 9.33333 | 9.1954022 | 8.888888889 |

Pairwise comparison starting from the bottom:

1. Compare Option 3 to Option 2 – neither one dominated the other.
2. Compare Option 3 to Option 1 – neither one dominated the other.
3. Compare Option 2 to Option 1 – neither one dominated the other.
4. After three pairwise comparisons, we conclude that all three options are Pareto optimal.

Performance

Computational Time

Using Equation 14,

Design Alternative 1

PCnorm = 9 × (10.85 − 3.59) / (10.85 − 1.18) + 1 ≈ 8

Design Alternative 2

PCnorm = 9 × (10.85 − 1.18) / (10.85 − 1.18) + 1 = 10

Design Alternative 3

PCnorm = 9 × (10.85 − 10.85) / (10.85 − 1.18) + 1 = 1

Figure 56 Computational Time under Performance Constraint Ranking Using Pareto Pairwise Comparison

Extraction

Using Equation 15, with Yes scored as 5 and No as 0,

Design Alternative 1

PCnorm = 9 × (5 − 0) / (5 − 0) + 1 = 10

Design Alternative 2

PCnorm = 9 × (0 − 0) / (5 − 0) + 1 = 1

Design Alternative 3

PCnorm = 9 × (0 − 0) / (5 − 0) + 1 = 1

Figure 57 Extraction under Performance Constraint Ranking Using Pareto Pairwise Comparison

Accuracy

Using Equation 15,

Design Alternative 1

PCnorm = 9 × (92.977 − 61.068) / (92.977 − 61.068) + 1 = 10

Design Alternative 2

PCnorm = 9 × (82.5 − 61.068) / (92.977 − 61.068) + 1 ≈ 7

Design Alternative 3

PCnorm = 9 × (61.068 − 61.068) / (92.977 − 61.068) + 1 = 1

Figure 58 Accuracy under Performance Constraint Ranking Using Pareto Pairwise Comparison

Confidence

Using Equation 15,

Design Alternative 1

PCnorm = 9 × (42 − 42) / (77.47 − 42) + 1 = 1

Design Alternative 2

PCnorm = 9 × (59.33 − 42) / (77.47 − 42) + 1 ≈ 5

Design Alternative 3

PCnorm = 9 × (77.47 − 42) / (77.47 − 42) + 1 = 10

Figure 59 Confidence under Performance Constraint Ranking Using Pareto Pairwise Comparison

Table 25 Summary of Performance Constraint Ranking Using Pareto Pairwise Comparison

| PERFORMANCE | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3 |
| Computational Time | 8 | 10 | 1 |
| Extraction | 10 | 1 | 1 |
| Accuracy | 10 | 7 | 1 |
| Confidence | 1 | 5 | 10 |
| Average | 7.25 | 5.75 | 3.25 |

Economic

Using Equation 14,

Design Alternative 1

PCnorm = 9 × (1,151.78 − 1,101.78) / (1,151.78 − 1,101.78) + 1 = 10

Design Alternative 2

PCnorm = 9 × (1,151.78 − 1,141.78) / (1,151.78 − 1,101.78) + 1 ≈ 3

Design Alternative 3

PCnorm = 9 × (1,151.78 − 1,151.78) / (1,151.78 − 1,101.78) + 1 = 1
Figure 60 Economic Constraint Ranking Using Pareto Pairwise Comparison

Manufacturability

Using Equation 14, all three alternatives share the same availability score of 5, so the numerator and denominator of the normalization are both zero; since no alternative deviates from the ideal, each is assigned the maximum normalized value:

Design Alternative 1: PCnorm = 10

Design Alternative 2: PCnorm = 10

Design Alternative 3: PCnorm = 10

Figure 61 Manufacturability Constraint Ranking Using Pareto Pairwise Comparison

Table 26 Design Alternatives Normalized Values

| | DESIGN ALTERNATIVE 1 | DESIGN ALTERNATIVE 2 | DESIGN ALTERNATIVE 3 |
| PERFORMANCE | 7.25 | 5.75 | 3.25 |
| ECONOMIC | 10 | 3 | 1 |
| MANUFACTURABILITY | 10 | 10 | 10 |
| AVERAGE | 9.0833 | 6.25 | 4.75 |

After performing pairwise comparison of the rows, the proponents conclude that Design Alternative 1 dominates the other options: it scores at least as well on each of the three criteria, as shown in Table 26.

4.7 Final Design

4.7.1 Technical Specifications

After a series of simulations and analyses, and after considering the different constraints through trade-offs, the target specifications set in Chapter 2 were finalized to satisfy the target values obtained through testing and validation of the final system.

Table 27 Technical Specifications

| SPECIFICATION | TARGET |
| Conversion Delay | ≤ 20 ms |
| Minimum Camera Count Supported | 4 cameras |
| Minimum Frame Resolution | 640 x 480 pixels |
| Visual Notification | A pop-up notification appears within ≤ 1 second upon activity detection |
| Data Logging | Real-time data logging upon human detection |
| Compression Format (codec) | H.264 or MPEG |
| Signal-to-Noise Ratio | > 1 |

4.7.2 Testing

4.7.2.1 General Testing

The proponents performed accuracy testing and analysis of the face recognition subsystem, which is crucial for the system since it is intended for security and surveillance and must classify whether a person is authorized to enter the premises or not.

Figure 62 Graphical Processing Unit using CUDA


CUDA is a parallel computing platform and programming model used for GPU-based processing; it is what the system runs on. The proponents used a Graphics Processing Unit (GPU), which can be several times faster than a Central Processing Unit (CPU) for this kind of highly parallel, multi-threaded workload.

Figure 63 Creating Vector for Datasets


The highlighted portion of Figure 63 indicates that the system is creating the vectors for the datasets: from many face samples of the authorized persons, it builds a multi-dimensional vector for each face. Once the dataset is created, the system reports the number of labels and the dimensionality of the dataset. Each face is encoded as a point on a 128-dimensional hypersphere, meaning 128 values are used to capture signature-like features that are unique to one person, much like a fingerprint of the face. Afterwards, the trained face recognition model is saved so that the system can load it upon start-up.
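The matching these encodings enable can be sketched as a nearest-neighbor search over the stored 128-dimensional vectors. The sketch below is a simplified, illustrative stand-in: the real system obtains its encodings from the trained model, the toy 3-dimensional vectors stand in for real 128-dimensional ones, and the 0.6 distance threshold is a commonly used default rather than a value taken from this design.

```python
import math

def euclidean(a, b):
    """Distance between two face encodings of equal dimension."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(encoding, known, threshold=0.6):
    """Return the closest enrolled identity, or 'unknown' when no
    enrolled encoding lies within the distance threshold."""
    best_name, best_dist = "unknown", threshold
    for name, enc in known.items():
        d = euclidean(encoding, enc)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# Toy example (3-D vectors stand in for the real 128-D encodings):
enrolled = {"person_a": [0.0, 0.0, 0.0], "person_b": [1.0, 1.0, 1.0]}
match = identify([0.05, 0.0, 0.0], enrolled)   # "person_a"
```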

Figure 64 Testing Accuracy

Figure 64 shows the training accuracy with tolerance. Test samples were run through the face recognition model to produce a prediction array, which was matched against the true values; the result above shows one hundred percent accuracy, meaning the probability of making correct predictions exceeded expectations. Mathematically, accuracy can be calculated using the formula:

Accuracy = (Number of correct predictions) / (Total number of predictions)

Equation 16 Accuracy Formula
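As a sketch, the accuracy computation amounts to the following (the function name is illustrative):

```python
def accuracy(y_true, y_pred):
    """Equation 16: number of correct predictions over total predictions."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)
```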

Figure 65 Test and Evaluation Results


The trained model is standardized, so the inputs have zero mean and unit variance. This helped the face recognition model see the data more clearly and resulted in zero tolerance. The reported accuracy is the mean of all prediction accuracies.

Figure 66 Confusion Matrices with 2 known persons


Figure 66 shows the confusion matrix for two persons included in the datasets; it shows that all predicted values are correct when matched against the true values. Only the diagonal elements have values; the off-diagonal elements represent errors, where predictions did not match the true values. As a result, the face recognition has a very small probability of making a mistaken prediction.

Figure 67 Confusion Matrices with 2 known persons (Normalized)
Figure 67 shows the normalized version, which expresses the matrix entries as ratios so that the data can be represented and interpreted more easily.

Figure 68 Test and Evaluation Results for the 3 known persons

Figure 68 shows the evaluation of the face recognition model with three authorized persons. It achieved 87% prediction accuracy. Precision measures how precise the classifier is, that is, how close its predictions are to each other. Here, the first person obtained 72% precision, while the other two obtained 100% precision.

True positive rate, or recall, measures the capability of the classifier to predict a class with minimum error; it describes how consistently the classifier can identify a class without mistakes. The first person obtained a recall score of 100%, the second 71%, and the third 90%.

The misclassification rate measures how often the classifier produces an incorrect result; in this test, the classifier obtained a misclassification rate of 13.11%. The F1 score relates precision and recall, so either a low precision or a low recall will lower it. In this test, the first person obtained 84%, the second 83%, and the third 95%.

Support is the number of samples for a specific class; the support for each of the three classes is 150. The confusion matrix presents these counts in a two-dimensional array, along with a normalized version of the matrix.
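Precision, recall, and F1 can be computed directly from the confusion matrix counts. The sketch below is illustrative; the example matrix uses the counts reported in this section (150, 106, and 135 correct, with 44 and 15 samples misclassified as the first person):

```python
def per_class_metrics(cm, classes):
    """Precision, recall, and F1 per class from a square confusion matrix,
    where cm[i][j] counts samples of true class i predicted as class j."""
    metrics = {}
    n = len(classes)
    for i, name in enumerate(classes):
        tp = cm[i][i]
        fp = sum(cm[r][i] for r in range(n)) - tp   # predicted as i, true class differs
        fn = sum(cm[i]) - tp                        # true class i, predicted otherwise
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        metrics[name] = {"precision": precision, "recall": recall, "f1": f1}
    return metrics

# Counts reconstructed from this section's reported results:
metrics = per_class_metrics(
    [[150, 0, 0], [44, 106, 0], [15, 0, 135]],
    ["allencalderon", "dan", "eugeneoca"],
)
```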

Figure 69 Confusion Matrices with 3 known persons

In this confusion matrix, the y-axis is labeled with the true values and the x-axis with the predicted values. The confusion matrix is very helpful for analyzing the performance of machine learning algorithms, in this case the classifier; it is a cross-tabulation that relates the two axes. In the first column, the classifier predicted 150 tests correctly for allencalderon, while 44 tests of dan and 15 tests of eugeneoca were also predicted as allencalderon. Here the errors of the classifier can be seen, and the main goal is for the diagonal to hold the largest values.

For the second column, the classifier predicted 106 tests correctly for dan, with no samples of allencalderon or eugeneoca predicted as dan.

Lastly, in the third column, 135 tests were predicted correctly for eugeneoca, and none of the other two were predicted as eugeneoca. The confusion matrix can also be viewed as a tool for optimization, since the goal is to minimize errors and concentrate the counts along the diagonal.

Figure 70 Confusion Matrices with 3 known persons (Normalized)


Figure 70 shows the normalized values of the counts from the previous figure, this time expressed as ratios from the lowest to the highest possible value in the confusion matrix. Interpreted this way, the diagonal elements of the confusion matrix show an acceptable proportion of correct predictions, with only minimal errors surrounding the diagonal.
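The row normalization used for the normalized confusion matrices can be sketched as follows (an illustrative helper):

```python
def normalize_rows(cm):
    """Row-normalize a confusion matrix so each row of counts becomes
    ratios of that true class's total, as in the normalized figures."""
    out = []
    for row in cm:
        total = sum(row)
        out.append([c / total if total else 0.0 for c in row])
    return out
```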

Figure 71 Test and Evaluation Results for the 3 known persons and an unknown
Figure 71 shows the result of the evaluation for the three persons plus an additional unknown identity. The unknown dataset contains random faces taken from the internet, and the classifier tries to predict whether any of the three persons appears in each image of the unknown dataset. As a result, some misclassifications occur in the unknown class, which brings the accuracy down to 85%.

Figure 72 True and Predicted Labels

Figure 72 shows the relationship between the true labels and the predicted labels, which is in good shape since the diagonal elements have high values and the surrounding elements have none or at most low values.

Figure 73 Confusion Matrix with 3 known person and an unknown


Figure 73 shows the normalized confusion matrix with a color bar at the right that maps the colors to the ratios: the higher the ratio, the darker the cell. The diagonal elements of the confusion matrix appear as dark blue cells with good values, which means the evaluation gives a good result.

4.7.2.2 Accuracy Testing

Prediction accuracy (%) per case:

| DISTANCE (ft) | NORMAL | CLOSED EYES | LEFT TILT | RIGHT TILT | AVERAGE |
| 1 | 99.33333333 | 96.6666667 | 97.33333333 | 98 | 97.33333334 |
| 2 | 98.66666667 | 91.33333333 | 94.66666667 | 92.66666667 | 99.33333333 |
| 3 | 88.66666667 | 95.33333333 | 94.66666667 | 94.66666667 | 94.88888889 |
| 4 | 83.33333333 | 87.33333333 | 98.66666667 | 98.66666667 | 94.88888889 |
| 5 | 90.66666667 | 96.66666667 | 90 | 97.33333333 | 94.66666667 |
| 6 | 100 | 97.33333333 | 94 | 91.33333333 | 94.22222222 |
| 7 | 97.33333333 | 67.33333333 | 98.66666667 | 99.33333333 | 88.44444444 |

Figure 74 Average Precision Accuracy Test

Figure 74 shows the graph of the average precision accuracy test at distances from 1 foot to 7 feet. For each distance, the proponents averaged the percentage accuracy over the normal, closed-eyes, left-tilt, and right-tilt cases.

Prediction accuracy (%) vs distance (ft):

| DISTANCE (ft) | NORMAL | CLOSED EYES | LEFT TILT | RIGHT TILT |
| 1 | 99.33333333 | 96.6666667 | 97.33333333 | 98 |
| 2 | 98.66666667 | 91.33333333 | 94.66666667 | 92.66666667 |
| 3 | 88.66666667 | 95.33333333 | 94.66666667 | 94.66666667 |
| 4 | 83.33333333 | 87.33333333 | 98.66666667 | 98.66666667 |
| 5 | 90.66666667 | 96.66666667 | 90 | 97.33333333 |
| 6 | 100 | 97.33333333 | 94 | 91.33333333 |
| 7 | 97.33333333 | 67.33333333 | 98.66666667 | 99.33333333 |
| 8 | 100 | 100 | 97.33333333 | 98.66666667 |
| 9 | 98 | 90.66666667 | 95.33333333 | 100 |
| 10 | 100 | 99.33333333 | 99.33333333 | 98 |

Figure 75 Prediction Accuracy vs Distance

Figure 75 shows the relationship of the prediction accuracy for each case or angle to the distance.

The following images are from the data gathering for prediction accuracy with respect to the distance and angle of the face (normal, closed eyes, left tilt, and right tilt).

AUTHORIZED PERSON

Distance at 1 ft

Figure 76 Normal Angle

Figure 77 Closed eyes

Figure 78 Left tilt angle

Figure 79 Right tilt angle

Distance at 2 ft

Figure 80 Normal Angle

Figure 81 Closed eyes

Figure 82 Left tilt angle

Figure 83 Right tilt angle

Distance at 3 ft

Figure 84 Normal

Figure 85 Closed eyes

Figure 86 Left tilt angle

Figure 87 Right tilt angle

Distance at 4 ft

Figure 88 Normal

Figure 89 Closed eyes

Figure 90 Left tilt angle

Figure 91 Right tilt angle

Distance at 5 ft

Figure 92 Normal

Figure 93 Closed eyes

Figure 94 Left tilt angle

Figure 95 Right tilt angle

Distance at 6 ft

Figure 96 Normal

Figure 97 Closed eyes

Figure 98 Left tilt angle

Figure 99 Right tilt angle

Distance at 7 ft

Figure 100 Normal

Figure 101 Closed eyes

Figure 102 Left tilt angle

Figure 103 Right tilt angle

UNAUTHORIZED PERSON

Figure 104 Normal Pace at 1 ft

Figure 105 Normal Pace at 2 ft

Figure 106 Normal Pace at 3 ft

Figure 107 Normal Pace at 4 ft

Figure 108 Normal Pace at 5 ft

Figure 109 Normal Pace at 6 ft

Figure 110 Normal Pace at 7 ft

Table 28 Summary of the Confidence of Unauthorized Person

Each distance was tested over ten trials; the per-trial confidence readings and their averages are as follows.

| Distance (ft) | Confidence (10 trials) | Average |
| 1 | 10, 10, 10, 20, 20, 70, 20, 10, 10, 10 | 19 |
| 2 | 20, 20, 20, 20, 40, 40, 10, 0, 0, 10 | 18 |
| 3 | 10, 50, 10, 20, 10, 40, 40, 60, 90, 20 | 35 |
| 4 | 10, 10, 10, 10, 10, 20, 20, 20, 10, 20 | 14 |
| 5 | 0, 0, 0, 10, 0, 20, 40, 10, 10, 0 | 9 |
| 6 | 20, 0, 40, 90, 50, 0, 0, 60, 50, 50 | 36 |
| 7 | 30, 10, 0, 0, 20, 20, 30, 0, 60, 50 | 22 |

Figure 111 Graph of the Confidence of Unauthorized Person


Figure 111 shows the graphical representation of the confidence of the system when it detects an unauthorized person, which varies with distance. For an unauthorized person, the confidence should be as low as possible, since the person is not included in the system's datasets.

4.7.2.3 Detection, Recognition and Extraction Time Testing

The table below shows the detection time, extraction time, and recognition time over 29 iterations.

Table 29 Recognition and Conversion Time Testing

Iteration Detection Time (ms) Extraction Time (ms) Recognition Time (ms)
1 1581 37 1
2 115 33 1
3 109 33 1
4 102 483 1
5 98 85 1
6 100 82 1
7 104 78 1
8 98 80 1
9 100 79 1
10 99 90 1
11 99 80 1
12 99 78 1
13 100 69 1
14 101 82 1
15 98 89 0
16 99 69 0
17 99 76 1
18 100 71 1
19 102 69 0
20 99 79 1
21 99 81 1
22 97 84 0
23 101 77 0
24 99 79 1
25 99 76 0
26 100 88 0
27 98 86 0
28 102 128 0
29 99 97 1
AVERAGE 151.5862069 90.96551724 0.689655172

Figure 112 Detection Time

Figure 112 shows the graph of the detection time over 29 iterations. The x-axis represents the iteration number, while the y-axis indicates the detection time in milliseconds.
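The per-stage timings in Table 29 can be gathered with a small helper around a high-resolution timer. This is an illustrative sketch; the actual measurements would wrap the detector, extractor, and recognizer calls:

```python
import time

def timed_ms(fn, *args):
    """Run a pipeline stage and return its result and duration in
    milliseconds, as done for detection, extraction, and recognition."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return result, elapsed_ms

# Example with a stand-in workload (a real run would call the detector):
_, ms = timed_ms(sum, range(100000))
```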


Figure 113 Extraction Time

Figure 113 shows the graph of the extraction time over 29 iterations. The x-axis represents the iteration number, while the y-axis indicates the extraction time in milliseconds.


Figure 114 Recognition Time

Figure 114 shows the graph of the recognition time over 29 iterations. The x-axis represents the iteration number, while the y-axis indicates the recognition time in milliseconds.

4.7.2.4 Signal-to-Noise Ratio

Table 30 Signal-to-Noise Ratio of the system with 90 iterations.


Iterations S N (gain) N (normalized) S/N (Normalized) SNR in dB
1 1 63.21 0.6321 1.5820281601 3.984284193
2 1 65.31 0.6531 1.5311590874 3.700406325
3 1 65.03 0.6503 1.5377518069 3.73772492
4 1 65.47 0.6547 1.5274171376 3.679153182
5 1 65.25 0.6525 1.5325670498 3.70838968
6 1 64.80 0.6480 1.5432098765 3.768499883
7 1 65.91 0.6591 1.5172204521 3.620973767
8 1 65.69 0.6569 1.5223017202 3.650014763
9 1 66.46 0.6646 1.5046644598 3.548793261
10 1 66.08 0.6608 1.5133171913 3.598599314
11 1 67.12 0.6712 1.4898688915 3.462961044
12 1 67.07 0.6707 1.4909795736 3.469433873
13 1 67.37 0.6737 1.4843402108 3.430669053
14 1 67.07 0.6707 1.4909795736 3.469433873
15 1 68.34 0.6834 1.4632718759 3.306500511
16 1 67.51 0.6751 1.4812620353 3.41263784
17 1 68.17 0.6817 1.4669209330 3.32813412

18 1 67.51 0.6751 1.4812620353 3.41263784
19 1 67.51 0.6751 1.4812620353 3.41263784
20 1 67.51 0.6751 1.4812620353 3.41263784
21 1 80.31 0.8031 1.2451749471 1.904607482
22 1 80.19 0.8019 1.2470382841 1.91759573
23 1 81.30 0.8130 1.2300123001 1.798189088
24 1 82.13 0.8213 1.2175818824 1.709963543
25 1 82.13 0.8213 1.2175818824 1.709963543
26 1 81.74 0.8174 1.2233912405 1.751307332
27 1 82.35 0.8235 1.2143290832 1.68672793
28 1 82.29 0.8229 1.2152144854 1.693058754
29 1 81.52 0.8152 1.2266928361 1.77471658
30 1 82.35 0.8235 1.2143290832 1.68672793
31 1 82.13 0.8213 1.2175818824 1.709963543
32 1 82.13 0.8213 1.2175818824 1.709963543
33 1 81.68 0.8168 1.2242899119 1.757685418
34 1 81.80 0.8180 1.2224938875 1.744933927
35 1 81.13 0.8113 1.2325896709 1.81637048
36 1 82.57 0.8257 1.2110936175 1.663554308
37 1 84.16 0.8416 1.1882129278 1.497885464
38 1 83.94 0.8394 1.1913271384 1.520620702
39 1 83.78 0.8378 1.1936022917 1.537192879
40 1 84.45 0.8445 1.1841326229 1.468006922
41 1 84.16 0.8416 1.1882129278 1.497885464
42 1 84.28 0.8428 1.1865211201 1.485509461
43 1 84.36 0.8436 1.1853959222 1.477268579
44 1 84.00 0.8400 1.1904761905 1.514414279
45 1 83.94 0.8394 1.1913271384 1.520620702
46 1 83.78 0.8378 1.1936022917 1.537192879
47 1 84.00 0.8400 1.1904761905 1.514414279
48 1 83.72 0.8372 1.1944577162 1.543415607
49 1 83.56 0.8356 1.1967448540 1.560031374
50 1 83.78 0.8378 1.1936022917 1.537192879
51 1 84.00 0.8400 1.1904761905 1.514414279
52 1 84.39 0.8439 1.1849745230 1.474180262
53 1 84.45 0.8445 1.1841326229 1.468006922
54 1 84.22 0.8422 1.1873664213 1.491695258
55 1 84.16 0.8416 1.1882129278 1.497885464
56 1 84.00 0.8400 1.1904761905 1.514414279
57 1 84.39 0.8439 1.1849745230 1.474180262
58 1 83.78 0.8378 1.1936022917 1.537192879

59 1 83.56 0.8356 1.1967448540 1.560031374
60 1 83.94 0.8394 1.1913271384 1.520620702
61 1 84.00 0.8400 1.1904761905 1.514414279
62 1 83.72 0.8372 1.1944577162 1.543415607
63 1 83.72 0.8372 1.1944577162 1.543415607
64 1 83.78 0.8378 1.1936022917 1.537192879
65 1 84.22 0.8422 1.1873664213 1.491695258
66 1 84.00 0.8400 1.1904761905 1.514414279
67 1 83.56 0.8356 1.1967448540 1.560031374
68 1 84.83 0.8483 1.1788282447 1.42901066
69 1 84.61 0.8461 1.1818933932 1.451566099
70 1 84.62 0.8462 1.1817537225 1.45053958
71 1 84.83 0.8483 1.1788282447 1.42901066
72 1 84.83 0.8483 1.1788282447 1.42901066
73 1 89.99 0.8999 1.1112345816 0.916114964
74 1 85.21 0.8521 1.1735711771 1.390188694
75 1 84.61 0.8461 1.1818933932 1.451566099
76 1 84.61 0.8461 1.1818933932 1.451566099
77 1 85.32 0.8532 1.1720581341 1.378983064
78 1 84.77 0.8477 1.1796626165 1.435156337
79 1 84.61 0.8461 1.1818933932 1.451566099
80 1 85.44 0.8544 1.1704119850 1.366775206
81 1 84.67 0.8467 1.1810558639 1.445408805
82 1 84.83 0.8483 1.1788282447 1.42901066
83 1 84.61 0.8461 1.1818933932 1.451566099
84 1 84.55 0.8455 1.1827321112 1.457727761
85 1 84.61 0.8461 1.1818933932 1.451566099
86 1 84.83 0.8483 1.1788282447 1.42901066
87 1 85.21 0.8521 1.1735711771 1.390188694
88 1 84.77 0.8477 1.1796626165 1.435156337
89 1 84.67 0.8467 1.1810558639 1.445408805
90 1 85.18 0.8518 1.1739845034 1.393247285
AVERAGE 1.983089446


Figure 115 Signal-to-Noise Ratio


Figure 115 shows the graphical representation of the signal-to-noise ratio over 90 iterations. Based on the calculations, the average SNR is 1.983 dB.
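The SNR column of Table 30 follows the convention SNR_dB = 20 log10(S/N); that conversion can be sketched as follows (illustrative helper, checked against the table's first row):

```python
import math

def snr_db(signal, noise_normalized):
    """SNR in decibels from the normalized noise figure, matching the
    table's convention SNR_dB = 20 * log10(S / N)."""
    return 20 * math.log10(signal / noise_normalized)

# First row of Table 30: S = 1, normalized N = 0.6321 -> about 3.984 dB
row1_db = snr_db(1, 0.6321)
```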

4.8 Evaluation of Objectives

The first objective is to develop a virtual instrument that utilizes machine learning algorithms and performs real-time video recording to replace existing DVR hardware, with a unit-testing success rate of 80%. For this objective, unit testing of the significant methods was used, and the results were evaluated by averaging the unit-testing success rate.

The second objective is to design a digital filter/feature extraction stage for image processing with a face detection success rate above 80%, where a convolutional neural network is used for the face detection testing. The data obtained from testing are then evaluated using the face detection success rate.

The third objective is to incorporate machine learning algorithms using artificial neural networks, by means of face vector identification, for target identification with at least 80% face recognition accuracy. Prediction confidence is used for the testing method, while the confusion matrix and F1 score are used for the evaluation of the data.

The fourth objective is to employ analog-to-digital conversion for multiplexing the analog input
signals from the CCTV cameras with a conversion delay of ≤ 20 ms. This is tested by averaging the difference
between the system's start time and its output time. The data obtained are then evaluated against existing
devices that have minimal (real-time) conversion delay.
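The delay measurement described above can be sketched as timestamping each frame before and after conversion and averaging the differences; `convert` here is a placeholder for the actual conversion pipeline, not the thesis implementation.

```python
import time

# Hedged sketch of the fourth objective's testing method: record the start
# time, run the conversion, record the output time, and average the
# differences in milliseconds over all frames.
def average_conversion_delay(frames, convert):
    delays = []
    for frame in frames:
        start = time.perf_counter()
        convert(frame)  # placeholder for the ADC/multiplexing step
        delays.append((time.perf_counter() - start) * 1000.0)  # ms
    return sum(delays) / len(delays)

delay_ms = average_conversion_delay(range(10), lambda f: f)  # trivial convert
```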

The fifth objective is to test the performance by measuring the duration of the video feed conversion process at
acceptable quality, with an SNR greater than 1. This is tested using the signal-to-noise ratio per frame and is
then evaluated by averaging the resulting signal-to-noise ratios.
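The per-iteration figures tabulated earlier are consistent with expressing the SNR in decibels as 20·log10 of the amplitude ratio; a minimal sketch of the per-frame conversion and averaging, using a few ratios from the table:

```python
import math

# Sketch of the fifth objective's evaluation: convert each per-frame
# signal-to-noise amplitude ratio to decibels and average over all
# iterations, as summarized in Figure 115.
def snr_db(ratio):
    return 20.0 * math.log10(ratio)

# A ratio of about 1.18106 gives about 1.4454 dB, matching the table rows.
ratios = [1.1810558639, 1.1788282447, 1.1818933932]
average_db = sum(snr_db(r) for r in ratios) / len(ratios)
```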

CHAPTER 5: SUMMARY OF FINDINGS, CONCLUSION, AND RECOMMENDATION

5.1 Results and Analysis

After performing all the necessary methods and procedures to attain the objectives, the proponents
gathered the results and documented the process of obtaining the target percentages stated in Chapter 1.
The proponents were able to test and evaluate the prediction accuracy of the system, which yielded good
results since it can identify a person with minimal errors. It was also tested at different distances and in
different conditions, such as normal, closed eyes, and left and right tilt, to prove that the system can
recognize the faces detected by the cameras. Moreover, the proponents were able to evaluate the existing
noise in every frame of the camera feeds; as mentioned in Chapter 4, the evaluation of the Signal-to-Noise
Ratio was good and met the required SNR stated in the objectives. In addition, the proponents were able to
graph the Signal-to-Noise Ratio over 90 iterations; based on the calculations, the average SNR is 1.983 dB.

In terms of the detection, extraction, and recognition times, the proponents were able to show a graphical
representation over 29 iterations, wherein the x-axis of the graph represents the iteration number while
the y-axis indicates the measured time in milliseconds.

5.2 Conclusion

After the series of tests and evaluations, the proponents concluded the following: The face recognition
accuracy was tested under various conditions, such as face tilting and the subject's distance from the camera.
The average results for each distance, from 1 foot to 7 feet, were 97.33%, 99.33%, 94.88%, 94.88%,
94.88%, 94.22%, and 88.44%, respectively. The confidence scores for target identification were likewise
tested at various distances, with satisfactory scores of 19, 18, 35, 14, 9, and 36 as the distance varied
from 1 foot to 7 feet, respectively. For the conversion delay testing, the proponents listed the average
detection, extraction, and recognition times over 29 iterations; the averages were as follows: 151.57 ms for
the detection time, 90.97 ms for the recognition time, and 0.69 ms for the extraction time. Lastly, the target
SNR greater than 1 was achieved over 90 iterations, with an average of 1.98 dB.

5.3 Recommendation

In terms of choosing the machine learning algorithm, the proponents see the possibility of trying other types
of machine learning algorithms, given the many options available. A better technique will result in higher
accuracy for the system.

For future designers who want to venture into this study, the design can be further improved by using a
laptop with better specifications, such as a GPU (at least an NVIDIA GeForce GT 740M), memory (at least 8 GB),
and CPU (at least 2 GHz). Further improvement may, however, lead to higher cost. They can also consider
developing a mobile application for added security, which can display the exact footage of the detection
after notifying the user through SMS. In addition, they can include a cloud monitoring system in the mobile
application itself. The future designers may also add a notification system that notifies the nearest police
station for faster response through the mobile application, which can also display a map indicating the
address or GPS location of the victim.

For the upscaling of the study, future designers may consider expanding the system by adding features
related to public security, wherein the system itself may include the data of wanted criminals in a local
barangay; once the surveillance cameras detect a person of interest, the system notifies the police station
with the location of the camera where the person was detected.

CHAPTER 6: BUSINESS MODEL

6.1 Executive Summary

As part of the community, we always want to secure everyone's safety from the harms that may occur in
society, where theft/burglary is considered one of the crimes with the highest rates in the Philippines. It can
happen in public or private places such as buildings, colleges, universities, and business/private properties.
According to Rappler, a total of 100,668 index crimes were recorded from January to November 2017. This
statistic, based on data from the PNP's Directorate for Investigation and Detective Management (DIDM),
represents a drop of 21.8% from 2016, which had a crime count of 128,730. For theft, the count dropped by
almost a third, from 46,232 to 32,356. The existing solution to this kind of problem is a surveillance system
used to monitor a particular vicinity 24/7. One of the subsystems of a surveillance system is the DVR, which
records the real-time footage taken by CCTV cameras. Some existing DVRs limit their recording function to
motion detection, without any training on a particular registered face for target identification, and thus
require human intervention. In addition, data logging is not yet integrated into DVRs, according to some
published journals.

Considering these scenarios, it is evident that the problem has to be solved through a surveillance system. In
this case, the goal is a virtual instrument representation of a DVR that functions the same as the existing
device. The whole agenda is to provide an equivalent virtual instrument with the same capability and
functionality, containing the following: (1) a digital filter used for image processing, and (2) a machine
learning algorithm that uses artificial neural networks, by means of face vector identification, for target
identification. Using the acquired knowledge in filtering, digital signal processing, broadcast communication,
analog and digital electronics, logic circuits, and the main concepts of statistics and probability, the system
would obtain higher accuracy based on its assigned technical specifications.

6.2 Company Description

AI-Technologies is a startup engineering solutions company that focuses on innovating and designing virtual
instruments for surveillance system components, such as a digital video recorder (DVR) that includes target
identification using an Artificial Neural Network, data logging, a notification system, and cloud storage. In
2018, AI-Technologies introduced a 4-Channel Virtual Instrument that performs real-time surveillance of a
vicinity with target identification and data logging.

Our company offers:


 IT Consultation and Design
 Server and Storage Installation
 Network Design and Implementation

AI-Technologies is committed to the sustainable and innovative development of technology, and our logo is a
visual representation of that commitment. The most visible element of the logo is the "eye", which mainly
symbolizes the clairvoyance of current technology developments.

Figure 116 Company Logo
6.3 Mission

AI-Technologies is committed:


 To establish double security in the surveillance system through target identification
 To produce the best and most reliable system employing virtual instruments
 To be the top engineering solution provider for surveillance systems engaging the concept of
Artificial Neural Networks in the Philippines.

6.4 Vision

AI-Technologies is an engineering solutions company with competent and hardworking engineers and
programmers committed to creating an excellent real-time surveillance system utilizing virtual instruments,
whose output meets the promulgated and regulated standards of the agencies involved.

6.5 Market Research

The proponents' product offers adaptability and security in the market compared to existing devices and
systems. Aside from being able to adapt to any type of CCTV system, it is also low-maintenance and
user-friendly. The possible target market of the company comprises CCTV companies and farm owners.
These businesses and private bodies will benefit in the following aspects: (1) accuracy, (2) security,
(3) finances, and (4) ease of use.

6.6 Market Information

The proponents utilize a business model wherein the target market is the Philippines' surveillance industry
and farmers. The primary target would be all farms in the Philippines, since the problem the proponents are
trying to solve is prominent in farms and a solution is clearly needed by the farmers and owners. That need
is applicable anywhere in the country, and it serves as the Total Available Market (TAM). The Serviceable
Available Market (SAM) would be the farms in the Luzon provinces. Furthermore, the Serviceable Obtainable
Market (SOM) would be all farms in Region IV-A (CALABARZON).

6.7 Business Model

Figure 117 Business model of the company

BIBLIOGRAPHY

Aldaz, E. D. (7 May, 2015). Digital Signal Processing Filtering Algorithm. Thesis.


Andrade, T. F.; Quintas, M. R.; Silva, C. M.; Restivo, M. T.; Chouzal, M. F.; Amaral, T. M. (2012). Virtual
instrumentation for a new health care device. Remote Engineering and Virtual Instrumentation.
Bilbao.
Badaoui, R., & Al-Jumaily, A. (2010). Fuzzy logic based human detection for CCTV recording application.
2010 6th International Conference on Advanced Information Management and Service (IMS). Seoul,
South Korea: IEEE. Retrieved from https://ieeexplore.ieee.org/document/5713470/
Bielski, A.; Lohmann, C. P.; Maier, M.; Zapp, D.; Nasseri, M. A. (2016). Graphical user interface for a robotic
workstation in a surgical environment. Medicine and Biology Society (EMBC). Orlando.
Blaga, E., Cotfas, P., Cotfas, D., & Balint, M. (2012). Tensile testing machine based on virtual instrumentation.
Remote Engineering and Virtual Instrumentation. Spain.
Chmielewska, A., Marianna, P., Marciniak, T., Dabrowski, A., & Walkowiak, P. (2015). Application of the
projective geometry in the density mapping based on CCTV monitoring. IEEE 2015 Signal
Processing: Algorithms, Architectures, Arrangements, and Applications (SPA) (pp. 179-184). Poznan,
Poland: IEEE.
Dang, K., & Sharma, S. (2017). Review and Comparison of Face Detection Algorithms. IEEE Journals, 629-
630.
Dimou, A., Medentzidou, P., Garcia, F. A., & Darns, P. (2016). Multi-target detection in CCTV footage for
tracking applications using deep learning techniques. IEEE 2016 IEEE International Conference on
Image Processing (ICIP) (pp. 928-932). Phoenix, AZ, USA : IEEE.
Goldberg, H. (2000). What is virtual instrumentation? Instrumentation & Measurement Magazine, 3(4), 10-
13.
Hibell, L. (2006). Surveillance Video Image Enhancement in Collaboration with Portsmouth City Council
CCTV. University of Portsmouth.
Im, H., Hong, B., Jeon, S., & Hong, J. (2016). Bigdata analytics on CCTV images for collecting traffic
information. IEEE 2016 International Conference on Big Data and Smart Computing (BigComp) (pp.
525-528). Hong Kong, China: IEEE.
Jahagirdar, A., & Nagmode, M. (2016). Evaluation of moving object detection techniques for low resolution
videos. Institution of Engineering and Technology 3rd International Conference on Electrical,
Electronics, Engineering Trends, Communication, Optimization and Sciences (EEECOS 2016).
Tadepalligudem, India: IEEE.
Katole, Kavita; Padole, Dinesh. (2011). Novel Multi-core Perspective Approach for Automotive System with
Virtual Instrumentation. Emerging Trends in Engineering and Technology . Mauritius.
Ki, Y.-K., Choi, J.-W., Joun, H.-J., Ahn, G.-H., & Cho, K.-C. (2017). Real-time estimation of travel speed using
urban traffic information system and CCTV. IEEE 2017 International Conference on Systems,
Signals and Image Processing (IWSSIP) (pp. 1-5). Poznań, Poland: IEEE.
Kongurgsa, N., Chumuang, N., & Ketcham, M. (2017). Real-Time intrusion — Detecting and alert system by
image processing techniques. 2017 10th International Conference on Ubi-media Computing and
Workshops (Ubi-Media). Pattaya, Thailand: IEEE.
Lu, Guoliang; Yang, Xiaoqiang; Li, Hongwei; Huang, Yu. (2012). Fault Testing Device of Fire Control System
Based on Virtual Instrumentation Technology. Computer Science and Service System (CSSS).
Nanjing.

Madhira, K., & Shukla, A. (2017). Pedestrian flow counter using image processing. 2017 International
Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS). Chennai,
India: IEEE.
Manlises, C. O., Martinez, J. M., Belenzo, J. L., Perez, C. K., & Postrero, M. K. (2015). Real-time integrated
CCTV using face and pedestrian detection image processing algorithm for automatic traffic light
transitions. IEEE 2015 International Conference on Humanoid, Nanotechnology, Information
Technology,Communication and Control, Environment and Management (HNICEM) (pp. 1-4). Cebu
City, Philippines: IEEE.
Papadopoulos, G. T., Machairidou, E., & Daras, P. (2016). Deep cross-layer activation features for visual
recognition. IEEE 2016 IEEE International Conference on Image Processing (ICIP) (pp. 923-927).
Phoenix, AZ, USA: IEEE.
Porter, G. (2009). CCTV images as evidence. Australian Journal of Forensic Sciences, 41(1), 11-25.
Retrieved 24 6, 2018, from http://tandfonline.com/doi/full/10.1080/00450610802537960
Prema, K.; Kumar, N. Senthil; Sunitha, K.A. (2009). Online temperature control based on virtual
instrumentation. Control, Communication, Energy Conservation and Automation .
Saponara, S., Pilato, L., & Fanucci, L. (2016). Exploiting CCTV camera system for advanced passenger
services on-board trains. IEEE 2016 IEEE International Smart Cities Conference (ISC2) (pp. 1-6).
Trento, Italy: IEEE.
Saputra, D. I., & Amin, K. M. (2016). Face detection and tracking using live video acquisition in camera closed
circuit television and webcam. IEEE 2016 1st International Conference on Information Technology,
Information Systems and Electrical Engineering (ICITISEE) (pp. 154-157). Yogyakarta, Indonesia :
IEEE.
Shetty, V., Vishwakarma, S., Harisha, & Agrawal, A. (2017). Design and implementation of video synopsis
using online video inpainting. IEEE 2017 2nd IEEE International Conference on Recent Trends in
Electronics, Information & Communication Technology (RTEICT) (pp. 1208-1212). Bangalore, India :
IEEE.
Sodanil, M., & Intarat, C. (2015). A Development of Image Enhancement for CCTV Images. IEEE 2015 5th
International Conference on IT Convergence and Security (ICITCS) (pp. 1-4). Kuala Lumpur,
Malaysia: IEEE.
Steven W. Smith, P. (2011). Digital Filtering. In P. Steven W. Smith, The Scientist and Engineer's Guide to
Digital Signal Processing. California Technical Publishing.
Upasana, A., Manisha, B., Mohini, G., & Pradnya, K. (November, 2015). Real Time Security System using
Human Motion Detection. International Journal of Computer Science and Mobile Computing, 4(11),
245-250. Retrieved from https://www.ijcsmc.com/docs/papers/November2015/V4I11201554.pdf
Žídek, Jan; Bilík, Petr; Wittassek, Tomáš. (2011). Application of graphical programming and benefit of virtual
instrumentation in teaching of state-of-the-art instrumentation. Intelligent Data Acquisition and
Advanced Computing Systems: Technology and Applications. Prague.

APPENDICES

A. Design of Experiment, where (n=8)

 Design Alternative 1

Expected ID Predicted ID Score Confidence


2.00 2.00 17.80 11.25
2.00 7.00 18.00 15.38
2.00 2.00 21.00 17.98
2.00 2.00 19.60 34.25
2.00 2.00 26.40 88.57
2.00 2.00 26.20 81.94
2.00 2.00 26.00 83.10
2.00 2.00 21.40 48.61
2.00 6.00 19.80 20.73
2.00 2.00 18.20 28.17
2.00 2.00 22.20 40.51
2.00 2.00 17.00 3.66
2.00 6.00 14.20 0.00
2.00 2.00 21.20 41.33
2.00 2.00 22.00 34.15
2.00 2.00 20.00 25.00
2.00 2.00 18.60 16.25
2.00 2.00 21.80 19.78
2.00 2.00 24.00 62.16
2.00 2.00 24.40 69.44
2.00 2.00 22.00 35.80
2.00 2.00 21.60 36.71
2.00 2.00 19.20 11.63
2.00 2.00 20.40 22.89
2.00 2.00 17.80 5.95
2.00 2.00 21.80 31.33
2.00 2.00 21.60 28.57
2.00 2.00 22.20 42.31
2.00 2.00 27.00 84.93
2.00 2.00 21.20 30.86
2.00 2.00 24.40 71.83
2.00 2.00 20.80 40.54
2.00 2.00 24.00 51.90

2.00 2.00 24.20 61.33
2.00 2.00 22.60 41.25
2.00 2.00 21.00 26.51
2.00 2.00 24.00 48.15
2.00 2.00 20.00 17.65
2.00 2.00 25.20 68.00
2.00 2.00 21.60 36.71
2.00 2.00 22.80 42.50
2.00 2.00 20.40 37.84
2.00 2.00 22.20 38.75
2.00 2.00 24.00 66.67
2.00 2.00 19.80 30.26
2.00 2.00 22.80 72.73
2.00 2.00 24.20 47.56
2.00 2.00 21.40 35.44
2.00 2.00 20.00 38.89
2.00 2.00 24.80 79.71
2.00 2.00 22.20 19.35
2.00 2.00 23.00 57.53
2.00 2.00 25.60 66.23
2.00 2.00 17.00 1.19
2.00 2.00 26.00 78.08
2.00 2.00 26.00 66.67
2.00 2.00 21.60 52.11
2.00 2.00 17.60 2.33
2.00 2.00 24.60 41.38
2.00 2.00 20.20 21.69
2.00 2.00 22.60 48.68
2.00 2.00 21.60 42.11
2.00 2.00 23.80 30.77
2.00 2.00 21.40 32.10
2.00 2.00 24.00 50.00
2.00 2.00 20.80 38.67
2.00 2.00 24.40 35.56
2.00 2.00 22.00 25.00
2.00 2.00 27.60 76.92
2.00 2.00 21.20 34.18
2.00 6.00 18.20 19.74
2.00 2.00 21.80 39.74

2.00 2.00 18.40 3.37
2.00 2.00 19.20 23.08
2.00 2.00 20.40 12.09
2.00 2.00 20.80 44.44
2.00 2.00 24.40 48.78
2.00 2.00 22.00 34.15
2.00 2.00 19.20 9.09
2.00 2.00 19.40 14.12
2.00 2.00 24.20 40.70
2.00 2.00 25.00 66.67
2.00 2.00 21.40 42.67
2.00 2.00 21.40 21.59
2.00 2.00 20.40 17.24
2.00 2.00 22.60 54.79
2.00 2.00 22.00 54.93
2.00 2.00 23.60 53.25
2.00 2.00 20.60 41.10
2.00 2.00 22.00 48.65
2.00 2.00 21.80 14.74
2.00 2.00 22.20 20.65
2.00 2.00 23.40 37.65
2.00 2.00 22.40 47.37
2.00 2.00 23.60 45.68
2.00 2.00 24.20 63.51
2.00 2.00 23.60 49.37
2.00 2.00 20.60 13.19
2.00 2.00 20.00 36.99
2.00 2.00 20.20 50.75
2.00 2.00 21.80 67.69
2.00 2.00 21.60 24.14
2.00 2.00 21.60 52.11
2.00 2.00 19.80 28.57
2.00 2.00 21.00 41.89
2.00 2.00 22.40 43.59
2.00 2.00 23.20 65.71
2.00 2.00 23.60 40.48
2.00 2.00 23.40 62.50
2.00 2.00 23.20 61.11
2.00 6.00 19.40 0.00

2.00 2.00 22.20 29.07
2.00 2.00 20.80 46.48
2.00 2.00 19.80 26.92
2.00 2.00 23.00 64.29
2.00 2.00 22.40 53.42
2.00 2.00 22.00 27.91
2.00 2.00 22.20 65.67
2.00 2.00 23.40 74.63
2.00 2.00 21.80 31.33
2.00 2.00 21.00 50.00
2.00 2.00 22.20 42.31
2.00 2.00 20.40 25.93
2.00 2.00 20.40 43.66
2.00 2.00 22.40 41.77
2.00 2.00 25.00 71.23
2.00 2.00 21.20 12.77
2.00 2.00 23.80 45.12
2.00 2.00 21.00 11.70
2.00 2.00 19.40 15.48
2.00 2.00 22.80 52.00
2.00 2.00 23.60 53.25
2.00 2.00 23.80 63.01
2.00 2.00 22.00 54.93
2.00 2.00 25.40 76.39
2.00 2.00 25.20 80.00
2.00 2.00 21.20 37.66
2.00 6.00 18.20 9.64
2.00 2.00 21.60 20.00
2.00 2.00 22.80 40.74
2.00 2.00 20.20 40.28
2.00 2.00 25.00 98.41
2.00 2.00 26.20 89.86
2.00 2.00 27.00 100.00
2.00 2.00 26.00 88.41
2.00 2.00 23.00 82.54
2.00 2.00 24.20 80.60
2.00 2.00 20.00 44.93
2.00 2.00 22.80 26.67
2.00 2.00 24.80 69.86

2.00 2.00 24.60 59.74
2.00 2.00 25.40 38.04
2.00 2.00 25.00 71.23
2.00 2.00 18.80 36.23
2.00 2.00 22.80 58.33
2.00 2.00 22.00 27.91
2.00 2.00 21.80 51.39
2.00 2.00 19.80 23.75
2.00 2.00 18.00 36.36
2.00 2.00 19.00 26.67
2.00 2.00 24.00 73.91
2.00 2.00 21.80 60.29
2.00 2.00 20.00 40.85
2.00 2.00 23.00 45.57
2.00 2.00 18.20 10.98
2.00 2.00 21.60 44.00
2.00 2.00 18.60 22.37
2.00 2.00 18.40 10.84
2.00 2.00 18.80 20.51
2.00 2.00 21.40 64.62
2.00 2.00 21.40 27.38
2.00 2.00 20.80 31.65
2.00 2.00 18.00 11.11
2.00 2.00 18.60 24.00
2.00 2.00 21.00 43.84
2.00 2.00 22.20 21.98
2.00 2.00 19.20 12.94
2.00 2.00 24.40 84.85
AVERAGE 2.00 2.14 21.89 42.00

ERROR 7.02247191
RELIABILITY 92.97752809
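One plausible reading of the ERROR and RELIABILITY figures above is the percentage of trials whose Predicted ID differs from the Expected ID, with reliability as its complement; a hedged sketch with toy data, not the tabulated trials:

```python
# Hedged sketch of how ERROR / RELIABILITY may be computed from the design
# of experiment tables: error is the percentage of mismatched
# Expected/Predicted ID pairs, reliability is 100 minus that. The lists
# below are toy data for illustration only.
def reliability(expected, predicted):
    errors = sum(e != p for e, p in zip(expected, predicted))
    error_pct = 100.0 * errors / len(expected)
    return 100.0 - error_pct

assert reliability([2, 2, 2, 2], [2, 7, 2, 2]) == 75.0  # one miss in four
```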

 Design Alternative 2

Expected ID Predicted ID Score Confidence


2.00 2.00 48.00 16.50
2.00 2.00 62.00 87.88
2.00 2.00 49.20 9.82
2.00 2.00 56.40 57.54

2.00 2.00 53.00 24.41
2.00 1.00 51.80 18.26
2.00 2.00 58.60 63.69
2.00 2.00 54.00 33.00
2.00 2.00 63.60 97.52
2.00 2.00 55.60 37.62
2.00 2.00 56.40 55.80
2.00 2.00 61.20 75.86
2.00 2.00 66.00 100.00
2.00 2.00 66.20 100.00
2.00 2.00 66.80 100.00
2.00 2.00 60.20 70.06
2.00 2.00 65.60 100.00
2.00 2.00 66.00 100.00
2.00 2.00 67.20 100.00
2.00 2.00 71.40 100.00
2.00 2.00 68.80 100.00
2.00 2.00 69.20 100.00
2.00 2.00 66.80 100.00
2.00 2.00 62.00 81.29
2.00 2.00 66.40 100.00
2.00 2.00 69.60 100.00
2.00 2.00 69.20 100.00
2.00 2.00 67.80 100.00
2.00 2.00 60.80 69.83
2.00 2.00 62.40 100.00
2.00 2.00 65.60 100.00
2.00 2.00 65.20 100.00
2.00 2.00 65.20 100.00
2.00 2.00 72.00 100.00
2.00 2.00 59.60 62.84
2.00 2.00 55.20 47.59
2.00 2.00 54.40 39.49
2.00 2.00 52.60 29.56
2.00 2.00 52.60 29.56
2.00 2.00 52.80 36.08
2.00 2.00 60.40 76.61
2.00 2.00 59.60 72.25
2.00 2.00 47.40 0.00

2.00 1.00 49.60 8.77
2.00 2.00 48.00 1.27
2.00 1.00 61.60 77.01
2.00 1.00 59.40 67.80
2.00 2.00 46.80 6.85
2.00 1.00 55.20 39.39
2.00 2.00 49.20 16.59
2.00 2.00 51.40 24.15
2.00 2.00 52.40 31.66
2.00 2.00 53.60 43.32
2.00 2.00 50.40 26.63
2.00 2.00 49.60 15.35
2.00 2.00 51.00 23.19
2.00 2.00 47.80 0.42
2.00 2.00 51.80 20.47
2.00 2.00 48.60 4.74
2.00 1.00 54.00 34.33
2.00 1.00 49.00 12.90
2.00 2.00 47.20 2.16
2.00 2.00 48.40 2.98
2.00 1.00 51.00 14.86
2.00 1.00 50.60 15.00
2.00 1.00 57.20 48.96
2.00 2.00 49.20 13.89
2.00 2.00 51.00 25.00
2.00 1.00 45.80 1.78
2.00 1.00 62.80 100.00
2.00 1.00 64.20 100.00
2.00 1.00 67.60 100.00
2.00 1.00 67.40 100.00
2.00 1.00 70.40 100.00
2.00 1.00 69.80 100.00
2.00 1.00 81.60 100.00
2.00 1.00 89.00 100.00
2.00 1.00 77.80 100.00
2.00 1.00 68.00 100.00
2.00 1.00 71.20 100.00
2.00 1.00 77.60 100.00
2.00 1.00 69.00 100.00

2.00 1.00 72.40 100.00
2.00 1.00 63.40 100.00
2.00 1.00 59.40 100.00
2.00 1.00 61.40 100.00
2.00 1.00 50.00 44.51
2.00 2.00 67.40 100.00
2.00 2.00 70.80 100.00
2.00 2.00 58.00 71.60
2.00 2.00 59.00 67.61
2.00 2.00 51.60 25.85
2.00 2.00 49.60 18.10
2.00 2.00 58.20 66.29
2.00 2.00 58.60 70.35
2.00 2.00 48.40 10.50
2.00 1.00 60.80 93.63
2.00 1.00 61.40 100.00
2.00 1.00 61.80 100.00
2.00 1.00 51.60 32.31
2.00 2.00 59.80 80.12
2.00 2.00 58.80 59.78
2.00 1.00 55.00 34.80
2.00 2.00 53.20 28.50
2.00 1.00 61.40 75.43
2.00 1.00 59.80 69.89
2.00 1.00 60.60 77.19
2.00 1.00 55.80 41.62
2.00 2.00 50.00 20.19
2.00 1.00 59.60 71.26
2.00 2.00 55.20 34.63
2.00 2.00 54.20 43.39
2.00 2.00 70.00 100.00
2.00 2.00 89.00 100.00
2.00 1.00 56.00 75.00
2.00 1.00 49.20 20.59
2.00 1.00 47.40 15.05
2.00 2.00 55.20 46.03
2.00 2.00 46.80 6.85
2.00 1.00 51.00 35.64
2.00 1.00 56.20 46.35

2.00 2.00 50.60 16.59
2.00 1.00 49.80 8.73
2.00 1.00 49.20 13.89
2.00 1.00 51.00 16.97
2.00 2.00 48.80 16.19
2.00 2.00 55.60 47.87
2.00 2.00 59.60 78.44
2.00 1.00 56.40 41.00
2.00 2.00 50.80 22.12
2.00 2.00 48.00 2.13
2.00 2.00 63.60 100.00
2.00 2.00 62.20 100.00
2.00 2.00 61.00 99.35
2.00 2.00 48.00 4.80
2.00 2.00 54.40 46.24
2.00 2.00 61.60 100.00
2.00 1.00 48.80 3.39
2.00 1.00 59.40 60.54
2.00 2.00 63.60 100.00
2.00 1.00 63.40 93.29
2.00 2.00 59.80 89.24
2.00 2.00 64.40 100.00
2.00 2.00 51.60 27.09
AVERAGE 2.00 1.65 58.62 59.33

ERROR 17.5
RELIABILITY 82.5

 Design Alternative 3

Expected ID Predicted ID Score Confidence


2 2 52.8 74.83
2 2 48.2 43.45
2 2 56.4 100
2 2 56.2 100
2 2 62.6 100
2 2 66.8 100
2 2 57 100
2 2 48 51.9
2 2 52.2 93.33

132
2 2 51.6 91.11
2 2 52 100
2 2 49.6 75.89
2 2 48.8 57.42
2 2 50.2 70.75
2 2 54.6 100
2 2 51 70
2 2 56.4 100
2 2 49.2 69.66
2 2 50.6 80.71
2 2 48.8 46.99
2 2 56.6 100
2 2 56.6 100
2 2 53 100
2 2 50.2 75.52
2 2 50.8 100
2 2 54.4 100
2 2 59 100
2 2 52.8 100
2 2 60.8 100
2 2 52.8 95.56
2 2 52.2 94.78
2 2 47.4 60.14
2 2 50 98.41
2 2 55.8 100
2 2 47.6 36.78
2 2 50.4 73.79
2 2 51.6 74.32
2 0 43.4 0.93
2 2 46.8 32.2
2 2 51 100
2 2 48 58.94
2 2 48.2 44.31
2 2 52.4 83.22
2 2 43.6 6.34
2 2 58.2 100
2 2 60.6 100
2 2 55.4 100
2 2 53.2 62.2

2 2 55.6 100
2 2 41.6 5.58
2 2 51.6 70.86
2 2 47.6 32.22
2 0 41.6 3.48
2 2 44.6 28.9
2 2 49.8 44.77
2 2 52.8 61.96
2 2 46.2 20.94
2 2 48.6 40.46
2 2 57.8 100
2 2 54.4 94.29
2 2 64.8 100
2 2 59 100
2 2 64.4 100
2 2 53.8 100
2 2 57.6 100
2 2 50.6 62.18
2 2 54.4 100
2 2 60 100
2 2 51.6 66.45
2 2 58 100
2 2 46.8 36.05
2 2 49.6 71.03
2 2 69.4 100
2 2 58.2 100
2 2 59 100
2 2 52.6 68.59
2 0 44.6 18.62
2 0 43 10.26
2 2 42.4 6
2 2 44.4 47.02
2 2 46.2 35.88
2 2 53.6 87.41
2 2 57.8 100
2 2 46.6 35.47
2 2 60.2 100
2 2 58.2 100
2 2 63 100

2 2 49 42.44
2 2 61.8 100
2 2 56 100
2 2 52.2 80
2 2 62.2 100
2 2 55.2 69.33
2 2 44.8 23.76
2 2 47 46.88
2 2 54.8 100
2 2 42.6 29.09
2 2 55.6 100
2 2 50.8 89.55
2 2 59.2 100
2 2 49.4 58.33
2 0 41.2 7.29
2 0 46.8 20.62
2 0 52.4 63.75
2 0 45.8 16.24
2 0 44.8 10.89
2 0 44.6 12.63
2 0 41 6.22
2 2 48.2 32.42
2 2 50.4 46.51
2 2 47.6 29.35
2 2 44.8 16.67
2 0 50.2 41.01
2 0 52.2 75.17
2 0 54 65.64
2 0 62.6 100
2 0 57.8 93.96
2 0 65.6 100
2 0 69.8 100
2 0 64.2 100
2 0 59.2 97.33
2 0 61.2 100
2 0 60 96.08
2 0 70.8 100
2 0 56 85.43
2 0 72 100

2 0 61.8 100
2 0 69.6 100
2 0 62 100
2 0 64.6 100
2 0 50.6 42.94
2 0 54.4 81.33
2 0 61.6 100
2 0 72.4 100
2 0 65.8 100
2 0 64.2 100
2 0 58.2 86.54
2 0 76.2 100
2 0 70 100
2 0 58 100
2 0 55.4 87.16
2 0 41 1.49
2 0 62.6 100
2 0 59.6 100
2 0 65.2 100
2 0 66.8 100
2 0 59.4 100
2 0 46.2 20.31
2 0 64.4 100
2 0 52.4 40.11
2 0 46.6 21.35
2 0 46.8 22.51
2 2 47.6 76.3
2 2 55.6 100
2 2 56.2 100
2 2 65.2 100
2 2 52.8 100
2 2 60.2 100
2 2 57.6 100
2 2 63.4 100
2 2 59 100
2 2 57.8 100
2 2 56.6 100
2 2 63.8 100
2 2 61.2 100

2 2 57 100
2 2 71.4 100
2 2 56 97.18
2 2 58.8 100
2 2 62.4 100
2 2 70.2 100
2 2 64.2 100
2 2 59.6 100
2 2 50 93.8
2 2 57.4 100
2 2 60.4 100
2 2 59.6 100
2 2 57.8 100
2 2 54 100
2 2 64.6 100
2 2 58.8 100
2 2 64 100
AVERAGE 2.00 1.44 55.11 77.47

ERROR 38.93129771
RELIABILITY 61.06870229
