Web 2.0 and estateMaps
April, 2008
Disclaimer
No portion of the work referred to in the dissertation has been submitted in support of an application
for another degree or qualification of this or any other university or other institution of learning.
The World Wide Web (WWW) is a well-known phenomenon which enables billions of users to
connect to an information pool used and updated by people across the globe. Closely linked with the
WWW is a fairly new and loosely defined concept called Web 2.0, often described as the second
generation of the WWW.
This report details the development of my hybrid application, which exploits a combination of freely
available online services to create a lightweight ‘mashup’ application. The application also provides
information about the area surrounding a property, creating a rich and informative experience for the
end user.
Included in this report are the project requirements, design, implementation, testing and the
background surrounding Web 2.0 and the different technologies which are associated with it.
~ Tim O'Reilly
I would like to thank my supervisor Sean Bechhofer for his guidance and direction throughout the
project. I would also like to thank my parents and family members for their support and advice during
the project.
I would like to say special thanks to my friends Asif, Michael and Shehzad for their support during the
project and keeping me in focus during the final year.
1. Introduction.................................................................................................................................. 11
2. Background .................................................................................................................................. 13
2.3.4 XML.................................................................................................................................. 20
2.3.5 Mashup.............................................................................................................................. 20
2.3.7 XSLT.................................................................................................................................. 23
2.3.10 PlanningAlerts................................................................................................................... 25
3. Design............................................................................................................................................ 27
4. Implementation ............................................................................................................................ 36
5. Testing........................................................................................................................................... 54
5.1 Debugging................................................................................................................................. 54
7.3 My Learning.............................................................................................................................. 65
8. Appendices.................................................................................................................................... 66
Appendix E - A typical XML file for a house search request which is returned to the client .......... 75
Mock-up 1 ..................................................................................................................................... 81
Mock-up 2 ..................................................................................................................................... 82
References............................................................................................................................................ 84
FIGURE 25 - Control flow graph for processing a House XML file on the client side........................55
Many organisations such as Microsoft, Yahoo and Google have adopted the Web 2.0 principle by
making their data and services freely available to the public. These services are accessed via APIs
(Application Programming Interface) using a combination of technologies such as AJAX and XML
which are discussed in sections 2.3.2 and 2.3.4 respectively.
This report details the development of my hybrid/mashup application called estateMaps which uses a
number of freely available online services to create a rich and informative experience for the end user.
The report also describes the basic background and concepts of Web 2.0 and the technologies which
are associated with estateMaps such as REST, SOAP, XML and AJAX.
The second objective was to join the different information sources into one single lightweight
application which, as mentioned, is termed a hybrid/mashup application. This required further research
into the different technologies available, such as JavaScript and AJAX, both of which are discussed
in the Background section of this report.
• Develop a Java servlet which can explicitly call third-party services and generate a response
based on the user’s request.
1.2 Deliverables
The developed artefact will provide users with the ability to search for saleable real estate across the
UK with information about the surrounding area. This information will include crime statistics about
the region, average house prices, planning requests in the area and images relevant to the location.
The application will be formed as a fully operational web based application allowing the user to
navigate the features in a simplistic and familiar environment. Real estate will be displayed on a
geographical map with markers to represent the location of each property.
Chapter 2 discusses the background of some of the technologies which are used within the context of
Web 2.0 and in the implementation of the project.
The design and implementation of the application are discussed in Chapters 3 and 4. Chapter 3
describes the challenges associated with estateMaps and the architectural design. Chapter 4 describes
the implementation of the application and how its core components are integrated into the system.
Chapter 5 tests the system against the initial system requirements using various testing techniques
such as black box and white box testing. Chapter 6 evaluates the effectiveness of my application
against other products available on the market.
Finally, Chapter 7 outlines the future of Web 2.0 and the possibilities of Web 3.0 already on the
horizon, along with the direction in which the Internet is heading.
Looking at the statistics shown in figure 1, there appears to be a growing interest in mapping
services at this moment in time. Real estate is a relatively niche market which has not been exploited
to its full potential. Combining real estate with a mapping service seemed to be a good basis for a
mashup service, and hence the idea of estateMaps was formed.
There are many services which are able to list real estate geographically on a mapping service, one of
which is the Nestoria [11] web service. Nestoria is of particular interest to estateMaps because it is the
main data driving force for the application. The Nestoria web service is described in greater detail
in the third section of this chapter. Figure 2 shows the interface of the Nestoria web application; it
provides the key functionality of estateMaps but lacks features such as information relevant to the
search location and recent planning requests in the area.
I also researched a number of real estate agents to see if they could offer a service as rich and
interactive as estateMaps, but to no avail. Main&Main [10], a popular real estate agent in the UK,
offers a similar service but has less functionality than both Nestoria and estateMaps. Main&Main do
not provide any information regarding the surrounding area; their primary goal is to inform the user of
the location of the property using a Google map.
No examples could be found at the time of writing this report which use AJAX to dynamically update
the map when the user makes a new search request based in the UK. This keeps in context with the
idea of Web 2.0: creating applications which are interactive for the user and feel as though they are
running seamlessly on the local machine.
There are no governing specifications which classify a Web 2.0 application; instead, a number of
publishers, such as Tim O’Reilly of O'Reilly Media, have published their views on what
characteristics classify a Web 2.0 application.
O’Reilly published an interesting article titled “What Is Web 2.0” [2]. In this article O’Reilly describes
the seven key competencies of Web 2.0, which are summarised below.
• Data is the Next Intel Inside – The data is the key driving force of the application not from
one source but from many.
• End of the Software Release Cycle – Using the Internet as a base operating system to run
applications so that users do not have to install or update the software. The person or company
providing the service updates the application, which removes this concern from the user.
• Lightweight Programming Models – The program should adopt a loosely coupled and flexible
approach which allows other components and even different applications to utilise its
functionality. Using lightweight programming models means we can reach more people to
enable data collection and a more intelligent web [2].
• Software Above the Level of a Single Device – The application should not be restricted to one
platform; it should be able to take advantage of all technologies available today and in the
future.
• Rich User Experiences – Allowing the user to take control over the website and providing the
user with an experience which is interesting, innovative and informative, making ICT an
effective tool.
Looking at these characteristics we can already identify a number of practical uses for
Web 2.0 applications, such as forums, blogs and even price comparison websites, amongst a host of
others. EstateMaps adheres to the ideas put forward by O’Reilly, but its key feature can be seen as
integrating data from multiple information sources.
In simple terms, a web service can be described as a self-contained and self-describing component
that can be published, discovered, and invoked across a network.
A web service is normally provided in the form of an API, an interface which allows a web browser or
server to interact with another application. The computational logic is usually performed on the host
which is providing the web service, which cleanly separates the two applications. The main
protocols or architectures used for web services are SOAP and REST.
SOAP (Simple Object Access Protocol) is a computer protocol used for exchanging XML based
messages over the internet. SOAP allows computer programs to communicate on different operating
systems, different technologies and programming languages. WSDL (Web Service Definition
Language) is used to describe the methods and the location of a web service. [3].
The typical SOAP architecture is shown in figure 4; it captures how data is transmitted from the
requester to the provider. The service requester sends a SOAP message in the XML format to the
provider, which instantiates a WSDL binding (connection) that the requester can use to invoke the
methods or functions of the provider.
Representational State Transfer (REST) is not a protocol; instead it is termed an architectural style
[6]. It exploits the existing HTTP/HTTPS protocols in order to send requests to a web server.
Each element on the internet is seen as a resource and the URL is the resource locator [5]. The
representation is the information that the requester is interested in. The resource or data is sent back
from the web server in one of a number of different standards, which include XML, HTML, GIF etc.
Figure 4 highlights a simple GET request using the basic principles of REST. The starting portion of
the URL is the location of the resource i.e. the server. The second portion of the URL is the name of
the resource which the user wishes to retrieve, in this case an XML file called Text. The server
processes the second portion of the URL in order to send the resource back to the user. Both the
request and the response travel over the HTTP protocol, as illustrated in the example.
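The composition described above can be sketched as a small helper; the host and resource names used here are illustrative placeholders, not actual estateMaps endpoints, and `buildResourceUrl` is a hypothetical function name.

```javascript
// Sketch of composing a REST-style GET URL as described above.
// The first portion is the location of the resource (the server);
// the second portion names the resource the user wishes to retrieve.
function buildResourceUrl(host, resource) {
  return "http://" + host + "/" + encodeURIComponent(resource);
}
```

For example, `buildResourceUrl("example.com", "Text.xml")` yields a URL whose path names the XML file to be returned.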
REST is of particular importance to estateMaps because a number of the third party services used
within the application use this architecture.
2.3.2 AJAX
Asynchronous JavaScript and XML (AJAX) is a relatively new concept which transforms the way
static web pages are updated and refreshed in an intuitive manner. Following the principles of
Web 2.0, AJAX enables developers to create a new experience for the end user: web pages with a
seamless flow of data exchange [7].
AJAX is not a new scripting language; instead it manipulates existing standards to create faster, more
reliable and friendlier user applications. AJAX allows portions of a website to be updated without the
need to update the entire page. This creates the perception that the application is actually running on
the local machine rather than as a service over the WWW.
AJAX uses JavaScript as the scripting language of choice to instantiate AJAX function calls. XML is
the standard format in which data is retrieved using the AJAX libraries, although the data does not
necessarily have to be in the XML format.
Figure 1 illustrates the difference between the classic web application model and the AJAX-enabled
approach. The classic web application model generates a new HTML document each time the user
interface sends a request to the server. This method is expensive in regards to time and bandwidth
both to the client and the service provider as a whole page needs to be generated for the smallest of
changes to the client side [8].
The AJAX model uses JavaScript to call a new component which has been added to the model. The
AJAX engine acts as a channel between the web server and the JavaScript running on the client side.
The JavaScript is unable to call a web server directly, so the AJAX engine is used to send an HTTP
request to the server, which generates a response based on the user’s request. The response is often in
the XML format, which can then be interpreted directly by the JavaScript running on the client side.
The JavaScript is able to manipulate the page dynamically without the page being refreshed, creating
the effect that the response has been generated by the user’s machine.
There are many advantages to using the AJAX approach, but every new technology comes with
associated disadvantages, and AJAX is no exception. The major disadvantage of AJAX is the
compatibility issues between the ways certain web browsers handle the AJAX libraries. In
Internet Explorer 6 and below, the XMLHttpRequest object is not recognised as a valid JavaScript
function; instead the ActiveXObject object has to be used in order to call the web server [8].
This leaves the programmer with many cumbersome validation checks to carry out before the AJAX
functions can actually be used. Fortunately, Internet Explorer 7 now includes support for
XMLHttpRequest as AJAX becomes more of a standard, but the implications still exist for
users of older browsers.
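The validation check described above can be sketched as a small factory function. `createRequest` is a hypothetical helper name, and the `env` object stands in for the browser's global object (an assumption made here purely so the logic can be exercised outside a browser; in a real page one would test the globals directly).

```javascript
// Minimal sketch of the cross-browser XMLHttpRequest check.
// env stands in for the browser's global object.
function createRequest(env) {
  if (typeof env.XMLHttpRequest !== "undefined") {
    return new env.XMLHttpRequest();                    // modern browsers, IE7+
  }
  if (typeof env.ActiveXObject !== "undefined") {
    return new env.ActiveXObject("Microsoft.XMLHTTP");  // IE6 and below
  }
  return null; // no AJAX support available in this browser
}
```

Centralising the check in one place spares the rest of the script from repeating the browser-specific branches.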
One of the other main concerns with AJAX is network latency between calls to the server. Without
proper consideration of the size of the data being sent, users could be left waiting longer
than usual for the interface to be displayed due to bottlenecks and other network-related issues. This
may be confusing for some users, who may in turn end the session before the data is actually
loaded. Another implication is that when data is loaded between server calls, the page layout may be
adjusted while the user is still interacting with it.
If AJAX is used in a well-thought-out manner it can be a powerful tool for creating fully fledged
applications over the WWW. I see AJAX as a fundamental part of Web 2.0, and as its popularity
increases it will become a day-to-day experience for all web users.
These feeds are often in the XML format, which means that they were never intended to be readable in
their raw form. This gives developers a great degree of flexibility, as most modern programming
languages have XML readers/parsers which can be used to manipulate and interpret the
feed. Each feed consists of one or more elements; an element is an object of interest which contains
information such as a title, a description and a link to the information provider [9]. The two main web
feed formats are RSS and Atom, which are discussed later in this section.
2.3.4 XML
Extensible Markup Language (XML) is a markup language which is used to store
and describe data in a generic manner. XML is a lightweight, flexible technology which allows authors
to define their own tags and document structure, unlike HTML where the tags are predefined. XML is
not intended to replace HTML but to complement it, so that published data can be kept separate from
the design element. A typical example of the XML syntax can be seen in Figure 6, which shows how
the technology incorporates a tree-like structure.
<inventory>
<drink>
<lemonade>
<price>$2.50</price>
<amount>20</amount>
</lemonade>
<pop>
<price>$1.50</price>
<amount>10</amount>
</pop>
</drink>
<snack>
<chips>
<price>$4.50</price>
<amount>60</amount>
</chips>
</snack>
</inventory>
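As a purely illustrative sketch, the price values in the snippet above could be pulled out with a naive pattern match; a real application would use a proper XML parser, as estateMaps does, and `extractPrices` is an invented name for this example only.

```javascript
// Illustration-only extraction of <price> values from the inventory
// snippet above; not a substitute for a real XML parser.
function extractPrices(xml) {
  var prices = [];
  var re = /<price>\$([0-9.]+)<\/price>/g;
  var match;
  while ((match = re.exec(xml)) !== null) {
    prices.push(parseFloat(match[1]));
  }
  return prices;
}
```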
2.3.5 Mashup
A mashup is a web application which combines a number of different information sources into one
integrated application. The idea of a mashup application is to reuse existing services and data
available on the internet, in keeping with the Web 2.0 paradigm. As mentioned in the
introduction of this report, many organisations allow public access to their resources
via APIs. There are three distinct architectural elements in mashup applications: the
information provider, the client and the site itself, each of which plays a distinct role.
Mashup applications first came to light when developers noticed that most of the resources they
required already existed in one form or another. Instead of mimicking these resources, they decided to
reuse them.
Figure 5 shows the typical inputs and platforms which are associated with mashup applications. The
figure demonstrates that mashup applications are not only intended for developers; they also allow
everyday users to interact and create their own mashups using platforms such as Facebook [22],
which lets users create their own content and widgets [23]. It also emphasises the division of
skill level between developers and normal users, and how developers can reuse existing resources
to create new and interesting applications.
The three key benefits of mashup applications can be seen as:
• Effective leverage of Web parts – As mentioned Mashups are built on existing services,
adding code when it can't be sourced from internal or external suppliers or to provide
integration "glue" between the parts [12].
• Simple, lightweight software models and services – Mashups are typically built using pieces of
code sourced from other vendors, which usually develop all the major libraries required and carry
out the computational logic. Typical examples include Google Maps or a YouTube video
player; such services originally required a massive investment from their creators [12].
GeoRSS [14] is essentially the same as RSS 2.0 except that it contains an extra tag which defines the
geospatial location relevant to each item within the feed. It also has a few other predefined
tags which are not part of the RSS 2.0 specification. This keeps in context with Web 2.0:
information with meaning. GeoRSS is of particular importance to estateMaps because the
information for the planning alerts is actually received in this format. The planning alerts element of
the system is discussed in section 2.3.10. Figure 9 shows the structure of a GeoRSS document; the
example is taken from a document which estateMaps is able to process.
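For reference, the GeoRSS `georss:point` element carries a latitude/longitude pair separated by whitespace, so reading one back can be sketched as follows (`parseGeoPoint` is an invented helper name, not part of estateMaps):

```javascript
// Sketch of decoding the text content of a <georss:point> element,
// which holds "latitude longitude" separated by whitespace.
function parseGeoPoint(pointText) {
  var parts = pointText.trim().split(/\s+/);
  return { lat: parseFloat(parts[0]), lng: parseFloat(parts[1]) };
}
```

The resulting pair is exactly what a mapping service needs to place a marker for the feed item.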
2.3.7 XSLT
XSLT (Extensible Stylesheet Language Transformations) is also another technology which utilises the
XML syntax. It enables one XML document to be transformed into another for example XML to RSS
or XML to HTML. XSLT can be considered to be a template processor. A template processor (also
known as a template engine or a template parser) is software or a software component that is designed
to combine one or more templates [16] to produce a single resulting document. The XSLT parser
reads the structure of the XML document to manipulate the document as specified in the XSLT file.
XSLT processing is a W3C [15] standard. All major web browsers incorporate a XSLT engine along
with all major programming languages such as Java and Microsoft C#. Figure 10 shows a high level
diagram of how template processing is actually carried out.
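The idea of a template processor can be illustrated with a toy example that is far simpler than XSLT but follows the same combine-template-with-input pattern; the `{key}` placeholder syntax here is invented purely for illustration.

```javascript
// Toy template processor: substitutes {key} placeholders with values
// from a data object. XSLT plays the same role, with XML documents
// serving as both the template and the input.
function applyTemplate(template, data) {
  return template.replace(/\{(\w+)\}/g, function (whole, key) {
    return key in data ? String(data[key]) : whole; // leave unknown keys as-is
  });
}
```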
EstateMaps also incorporates an XSLT engine which is used to transform all XML documents from the
third-party services into the estateMaps XML format. This design decision is discussed further in
the third section of this report. Figure 11 shows a snippet from an XSLT document which estateMaps
uses to transform GeoRSS to XML.
One of the most interesting features of Google Maps is that it is able to geocode a given
address or postcode. Geocoding is the process whereby an address is mapped to its
geospatial (latitude/longitude) equivalent. Unfortunately this service is rather temperamental and does
not always give an exact location for the given address; hence estateMaps has its own geocoded postal
information, which is discussed in section 3.4.2.
Google Maps is an integral component of estateMaps as a majority of the processing and interactions
by the user is carried out on the map. Section 3 demonstrates how the Google Maps service is actually
integrated into estateMaps.
2.3.8 Nestoria
As mentioned in section 2.1, Nestoria [11] is the data driving force for estateMaps. Nestoria is a
free web service which is able to query over thirty estate agents across the UK and produce an
automated reply in the XML format. Nestoria utilises the REST architecture, explained in section
2.2.1, in order for the public to access its data and services.
2.3.9 Flickr
Just like Google Maps and Nestoria, Flickr [18] is a freely accessible web API which enables the
public to access a library of pictures uploaded by its users. Flickr is considered to be one of
the earliest Web 2.0 applications, enabling users to share and distribute their pictures across the
internet. Unlike Nestoria, Flickr supports both the REST and SOAP architectures.
An interesting feature of Flickr, and an integral part of estateMaps, is that Flickr supports geocoding
of its images, which means images can be searched by location. Previous search techniques involved
looking at the tags associated with an image, which could be skewed by misinterpretations of the data.
Flickr is used in estateMaps to allow users to see scenic images relevant to their search
criteria by querying the entire Flickr library of images.
2.3.10 PlanningAlerts
PlanningAlerts [19] is a web-based API which is able to collate planning information from a number
of UK local authority planning websites. When a house developer or owner wishes to make drastic
improvements to their house, they have to get approval from their local authority. Nearby residents are
able to view these planning requests and make appeals if they feel that the request will affect them in
any way.
EstateMaps is able to use the information from PlanningAlerts and plot the information on a Google
Map. Users are able to see planning requests which are relevant to their search request which may
prove useful to see what type of improvements are being accepted in that area.
The information from PlanningAlerts is provided in the GeoRSS format, this means that each element
within the feed contains geospatial information as described in 2.2.6. PlanningAlerts uses the REST
architecture as described in 2.2.1.
PlanningAlerts receives its data via a process called ‘scraping’. Scraping is the process whereby an
application reads the structure of a document, i.e. HTML, and extracts any relevant information.
PlanningAlerts has a number of these scrapers, which read the HTML of every local authority in the UK.
3.1 Requirements
In order to achieve the project objectives, a list of functional and non-functional requirements was
drawn up. The requirements were formed from the preliminary project objectives and from the initial
research phase of the project. These requirements underline the fundamental operations which the end
application must fulfil.
FREQ1. The application must be able to utilise the Nestoria API in order to display properties
for sale geographically on a world map.
FREQ2. The application should allow users to search for properties based on multiple search
conditions.
FREQ3. The house description, price and picture should be displayed where the information is
available.
FREQ4. Crime statistics about the area in which the property lies must be displayed to
the user.
FREQ5. The application should allow multiple houses to be displayed on the map where
appropriate.
FREQ6. Caching techniques will be implemented into the application in order to reduce the
number of responses which will be generated by the server.
FREQ7. The application must be able to generate an informative response if the client does
not have all the required plug-ins in order to view the website.
NFREQ1. The application should follow good HCI principles which will make it easier for the
user to navigate the website.
NFREQ2. The application must be able to deal with the potential unreliability of third party
services.
NFREQ3. The server should be able to generate responses concurrently based on multiple client
sessions.
NFREQ4. The application should be built in a modular design to allow for extensibility in the
future and to cater for unreliability of third party services.
NFREQ5. The performance of the application is dependent upon a number of factors, which
include the bandwidth of the user, the time to generate the response, the speed of the
third party services and the bandwidth available to the site's host.
Many psychology studies have concluded that eight seconds, plus or minus two, is the
longest a user will wait before they end the session [25]. Based on this
finding, the application must generate a response within the ten-second barrier.
NFREQ6. The application must be able to cater for cross browser compatibility in order to reach
all potential users.
• Requirements – This process was used to gather the functional and non-functional
requirements for estateMaps. The requirements can be found in section 3.1.
• Analysis & Design – The analysis and design phase was combined into one phase for
estateMaps. In this section the system architecture was agreed and mock-up interfaces were
created to see how the finished application would look. The mock up interfaces can be found
in Appendix F.
• Testing – In this phase the application was put through black box testing. The application was
tested against a number of predefined tests. The testing is discussed in more detail in chapter
5.
• Evaluation – The program was evaluated and the success of the application was measured
against the initial requirements.
It was decided during the preliminary planning of the project that only two iterations were needed in
order to create estateMaps because of the time limitations of the project. The development plan can be
found in appendix A.
The advantage of using the iterative approach is that incremental updates can be made to the
application throughout its lifespan. Another advantage of using the methodology is that feedback is
received at the end of each iteration which means the application can be reassessed against its initial
requirements. At the end of each iteration the program is deployed to its intended market; in some
cases the application is actually deployed to the end users with minimal testing, so that the users
become the testers. Figure 12 shows the iterative model, which is the exact process used to create
estateMaps.
Managing changing requirements is one of the most expensive activities in modern software
engineering. Fortunately, the iterative approach allows requirements to be changed or altered after
each iteration. During the second and final iteration of estateMaps it was decided that the application
should also be able to display average house prices. Appendix B shows the initial requirements,
whereas section 3.1 shows the requirements as amended during the second
iteration.
EstateMaps also used this notion of risk, so the most complex components of the system were built
during the first iteration, whereas the second iteration was used for perfective programming.
EstateMaps can still be seen as a beta application, and functionality can be added as requirements
change.
Even though most of the processing is carried out on the servlet (discussed later in this section), there
is still some processing logic which has to be carried out on the client side. The index.html file
communicates directly with a JavaScript file called requestProc. RequestProc contains the processing
logic required to generate a message to send to the servlet. This message is dispatched using AJAX
over the HTTP protocol, which is discussed in section 2.6.6. RequestProc also processes the response
from the servlet, which is always received in the XML format. While the response is being processed,
requestProc communicates with the googleMaps service in order to plot the various elements on the
map.
Another interesting feature which is not displayed on the diagram is that estateMaps is able to
populate the user’s search bar with predictive results, as shown in figure 14. This dropdown box is
dynamically generated as the user types characters into the search box. In order to populate the
dropdown box a SJAX (Synchronous JavaScript) call is sent to the server. AJAX and SJAX are
relatively similar, except that with SJAX the script is put to sleep until a response is received from
the server, whereas with AJAX the script is able to continue and carry out other functions until the
response is received. Because the response and request are relatively small, the box is updated more
or less instantaneously.
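The distinction turns on the third argument of `XMLHttpRequest.open`, which selects asynchronous (`true`) or synchronous (`false`) operation. In the sketch below the request object is passed in as a parameter (an assumption made so the logic can be exercised without a browser); `sendRequest` is an invented name, not an estateMaps function.

```javascript
// Sketch of the AJAX vs SJAX distinction: the third argument to
// open() chooses asynchronous (true) or synchronous (false) mode.
function sendRequest(xhr, url, async, onDone) {
  xhr.open("GET", url, async);
  if (async) {
    // AJAX: the script carries on; onDone fires when the reply arrives.
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4) onDone(xhr.responseText);
    };
    xhr.send(null);
  } else {
    // SJAX: send() blocks until the response has been received.
    xhr.send(null);
    onDone(xhr.responseText);
  }
}
```

Synchronous mode is tolerable here only because the suggestion request and response are tiny; for larger payloads it would freeze the page.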
In order to reduce requests to the server and save bandwidth, estateMaps caches every result sent to
the client. The cached results are kept until the user’s browser is closed. EstateMaps does not store
any persistent information on the client side, because many users disable cookies on their machines
due to security concerns.
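The caching behaviour described above can be sketched as a plain in-memory object that lives only for the lifetime of the page, so it vanishes when the browser is closed without touching cookies. The names `resultCache` and `getCachedResult` are illustrative, not estateMaps internals.

```javascript
// In-memory session cache sketch: results are keyed by the search
// request and discarded automatically when the page is closed,
// because the object lives only as long as the running script.
var resultCache = {};
function getCachedResult(query, fetchFn) {
  if (!(query in resultCache)) {
    resultCache[query] = fetchFn(query); // only hit the server once per query
  }
  return resultCache[query];
}
```

Repeating a search therefore costs no server round trip at all.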
Tomcat is a Java-based servlet container. It uses the Java language to service requests made over
the HTTP protocol. It was decided that Tomcat should be used because of its integration with the Java
language and my prior experience with the technology. Other technologies could have been used,
such as ASP, which are discussed in the fourth chapter of this report.
The main entry point into the servlet is the Process Request component. This component processes the
request from the client and determines what action needs to be taken, for example an image request or
a planning request. The component also validates the request and the response in order to make sure
that a valid location has been received. Locations are validated against the estateMaps database of
cities and postcodes; this data is freely available from easypeasy.com [28].
The XSLT engine is the primary component which communicates with the third-party services. The
XSLT engine takes a URL and an XSL file as its input. The component requests a stream of the
URL; this stream should conform to the XML syntax. If there are any errors in the document the
process is stopped and a message is sent to the client side informing the user of the problem.
Otherwise, the component reads both the XSL file and the stream and converts the stream into the
estateMaps XML format, which is passed back to the relevant component as demonstrated in figure 13.
The internal operations of the server are discussed in greater detail in the fourth chapter of this
report.
Carrying out the majority of the processing on the client side would have resulted in slow response
times and large memory requirements because of the amount of XML processing which is required.
Using the three tier architecture enables future developments to be carried out in a controlled
environment without any restrictions from the client. Figure 15 illustrates a possible client side
architecture for estateMaps.
3.4 Problems
Another problem identified during the design process was the fact that no other services provide
something similar to Nestoria. This means the application is dependent on this one service, and if it
goes down then estateMaps will not be able to function. In order to overcome this, estateMaps has
been built on a component based architecture, so that as other web services become available they
can be easily added to the system.
My initial thought was to look at the Royal Mail [29] postal web site to see if they provided a
free postcode or address lookup service. Upon research I discovered that Royal Mail does in fact
offer such a service, but charges a substantial fee to access their data. During my research I
also discovered a website [28] which offered a file containing the following information for the
UK.
• Every City.
Thankfully the postcode and address data maps every city to a postcode and a latitude/longitude as
discussed in section 3.4.2.1. This information is used to map crime statistics to each postcode and city
in the UK. The crime statistics data only provides information for cities, so each postcode is actually
mapped to its nearest city.
Crime_Data
Field                          Type     Null
crime_ID                       Int(5)   No
Force                          Text     No
City                           Text     No
Population_figures             Int(11)  No
Violence_against_person_06     Int(11)  No
Violence_against_person_07     Int(11)  No
Sexual_offences_06             Int(11)  No
Sexual_offences_07             Int(11)  No
Robbery_offences_06            Int(11)  No
Robbery_offences_07            Int(11)  No
Burglary_dwelling_06           Int(11)  No
Burglary_dwelling_07           Int(11)  No
Theft_of_a_motor_vehicle_06    Int(11)  No
Theft_of_a_motor_vehicle_07    Int(11)  No
Theft_from_a_vehicle_06        Int(11)  No
Theft_from_a_vehicle_07        Int(11)  No

Post_Data
Field            Type     Null
Postcode         Text     No
District         Text     No
City (FK)        Text     No
X_ref            Int(11)  No
Y_ref            Int(11)  No
Latitude         Double   No
Longitude        Double   No
Postcode_ID      Int(5)   No
Crime_ID (FK)    Int(5)   No

Each Crime_Data row relates to many Post_Data rows through the Crime_ID foreign key (the 1:N relationship in the original diagram).
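The foreign key relationship between the two tables can be illustrated with the kind of join the application needs in order to attach crime statistics to a postcode. This is a sketch under assumptions: the class and method names are invented, only the query string is built here, and the real code would execute it through a JDBC PreparedStatement. The table and column names follow the schema above.

```java
public class CrimeQueryBuilder {
    // Hypothetical helper: builds the parameterised SQL which joins a
    // postcode row to its crime statistics row via the Crime_ID foreign key.
    public static String crimeStatsForPostcode() {
        return "SELECT c.* FROM Post_Data p "
             + "JOIN Crime_Data c ON p.Crime_ID = c.crime_ID "
             + "WHERE p.Postcode = ?";
    }
}
```

The `?` placeholder keeps the postcode out of the SQL string itself, which avoids injection problems when user input is involved.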
• Due to greater development experience with Java as opposed to C#, it was decided that more time
could be spent adding functionality and making the code more elegant, rather than learning
the C# platform.
• Java supports a number of free libraries which can be used to generate graphs, which have to
be paid for with the Microsoft equivalent.
• Java supports cross platform compatibility, whereas Microsoft .NET only has limited support in
Linux and no support in Mac OS X.
PHP was also another interesting option, but with no prior scripting experience with the technology
the learning curve may have been too great.
• HTTP Monitor – Allows the user to see information about the HTTP requests sent and
received, and other useful information such as data about the processing servlet, performance
figures and the request headers.
• Database Connection Monitor – Allows easy testing of a database connection and can be used
to see statistical information about a request, such as performance and the number of requests.
• Apache Tomcat – Tomcat is the server side container used to send and receive requests to the
servlet using the HTTP protocol.
• MySQL – MySQL is a local database used to store various information such as postcode,
location and crime statistics data.
• SQLYog – SQLYog [33] is a frontend application (GUI) for MySQL which makes it easier to
interact with the MySQL client.
• House – This class is used to store information about each house which has been found from
the Nestoria web Service. It contains a simple constructor which sets the various variables
within the class. There are many GET methods which can be used to retrieve a particular
piece of information for a house object.
• HouseXMLParse – The response which the Nestoria web service generates is in the XML
format. This class processes the generated XML response from the Nestoria web service and
generates a list of House objects.
• XsltEngine – XsltEngine is used to communicate with the external third party services. It
takes an XSL file and a URL as its inputs. The URL is the location of the resource which needs
to be fetched. The URL is streamed across the internet and the response is saved. The
response is then translated into the estateMaps XML format using the input XSL file.
• ImageFinder – This class is used to retrieve a locationID from the Flickr web service in order
to retrieve images relevant to a particular area. Flickr uses a unique ID for each city or postcode
in the UK; for example the postcode M16 corresponds to "keGop2.YA5qp3lp39g" and
Manchester corresponds to "cTto9E.bCZ7g_w". The XML response is also parsed by this
class. The ImageFinder functionality is discussed in more detail later in this chapter.
• GraphGenerator – This class is used to generate a dynamic graph for the crime statistics data.
The class utilises the freely available JOpenChart Toolkit [39].
• HouseXMLGeneration – When a new house search request is sent to the servlet a response is
expected in the XML format. This class is used to generate an XML file from the list of House
objects produced by the HouseXMLParse class.
• PlanLocater – The PlanLocater class is used to retrieve planning information for a particular
area. It utilises the XsltEngine to convert an XML stream into the estateMaps XML planning
format.
• DatabaseLink – The DatabaseLink class is used to create a JDBC MySQL connection to access
the estateMaps database. The class creates a connection to the database and is also able to
destroy a connection once it is no longer needed.
..........
reqType=getHouse&locIn=Manchester&bedNum=Any&numRes=10&saleType=buy&priceVal=Any
reqType=goWidget&uIn=Manchester&saleType=buy
reqType=goGraph&vi07=5891&rob07=718&sexu07=
reqType=goWidget&uIn=Manchester&saleType=buy
reqType=goPlan&uIn=Manchester
..........
The server side code reads the request string using built in methods of the Apache Tomcat
servlet API. The code below illustrates how the server side code reads the location of the search
and the request type.
..........
//Gets the location parameter from the HTTP call, associated with every call from the JS
String location = spaceRemove(request.getParameter("uIn"));
//HTTP request type
String reqType = request.getParameter("reqType");
..........
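The spaceRemove helper used above is not listed in this report. A minimal sketch of what such a helper might do, under the assumption that it simply strips the spaces out of a location before the value is used in a URL or a database lookup; the implementation below is illustrative, not the actual estateMaps code.

```java
public class SpaceRemove {
    // Hypothetical sketch of the spaceRemove helper: trims the input and
    // removes any remaining spaces so the location survives being embedded
    // in a request URL. Returns an empty string for a missing parameter.
    public static String spaceRemove(String in) {
        if (in == null) {
            return "";
        }
        return in.trim().replace(" ", "");
    }
}
```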
The XSLT engine takes a URL and an XSL file as its input. The XSL file is a static file which resides on
the servlet and is used to transform the document into the estateMaps XML format. The XsltEngine
streams the URL across the internet and stores the response in a temporary character stream
(StringWriter). This stream is then converted using the inbuilt Java XSLT transformer, also
known as the TransformerFactory.
.................
public StringWriter transformXslt()
{
    try
    {
        //Read the URL.
        xmlSource = new StreamSource(xmlUrl.openStream());
        //Read the XSL file (local loop back).
        xsltSource = new StreamSource(xsltUrl.openStream());
        //Transform the file.
        trans = transFact.newTransformer(xsltSource);
        trans.transform(xmlSource, result);
    }
    //Exception handling in case of errors generated from the transformation.
    catch (TransformerException ex)
    {
        ex.printStackTrace();
    }
    catch (IOException ex)
    {
        ex.printStackTrace();
    }
    return resultString;
}
.................
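The same TransformerFactory approach can be tried in isolation, with in-memory strings standing in for the Nestoria URL and the house XSL file. The sample XML and stylesheet below are cut-down illustrations, not the real feed or the real XSL file.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltDemo {
    // A tiny stand-in for a Nestoria response (illustrative, not the real feed).
    public static final String SAMPLE_XML =
        "<opt><response><listings title=\"2 bed flat\"/></response></opt>";

    // A cut-down stylesheet following the shape of the house XSL snippet.
    public static final String SAMPLE_XSL =
        "<xsl:stylesheet version=\"1.0\" "
      + "xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">"
      + "<xsl:output method=\"xml\" omit-xml-declaration=\"yes\"/>"
      + "<xsl:template match=\"/\"><estateMaps>"
      + "<xsl:for-each select=\"opt/response/listings\">"
      + "<title><xsl:value-of select=\"@title\"/></title>"
      + "</xsl:for-each></estateMaps></xsl:template></xsl:stylesheet>";

    // Runs the inbuilt Java XSLT transformer over two in-memory strings.
    public static String transform(String xml, String xsl) {
        try {
            Transformer trans = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new StringReader(xsl)));
            StringWriter result = new StringWriter();
            trans.transform(new StreamSource(new StringReader(xml)),
                            new StreamResult(result));
            return result.toString();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
```

Swapping the StringReader sources for the URL streams used above gives back the behaviour of the real transformXslt method.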
The reason I decided that an XSLT engine would be useful is that even if any of the third party
services decide to change the structure of their XML feeds, the internal operations of the
program will not have to be changed. The only part of the application which will have to be changed
is the XSL file. This means that the program will not have to be recompiled, which usually means
that the service has to be stopped. A snippet of the house XSL file can be seen below and the full
version can be found in Appendix C.
.................
<?xml version="1.0" encoding="ISO-8859-1"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<!-- Set the output type -->
<xsl:output omit-xml-declaration="no" method="xml" />
<xsl:template match="/">
<estateMaps>
<channel>
<!-- Select each listing in the Nestoria XML file -->
<xsl:for-each select="opt/response/listings">
<item>
<!-- Select each tag for the current item and write it to the output -->
<title><xsl:value-of select="@title"/></title>
<description><xsl:value-of select="@summary"/></description>
<bed><xsl:value-of select="@bedroom_number"/></bed>
.................
HouseXmlParse has one specific purpose, which is to read the XML feed and add each house element
into an Array List. The parser is instantiated by creating a new instance of the
DocumentBuilderFactory. The DocumentBuilderFactory defines a factory API that enables
applications to obtain a parser that produces DOM object trees from XML documents [37]. DOM is a
W3C standard which enables documents such as XML and HTML to be dynamically accessed, and
their content, structure and style to be updated [38]. The code below illustrates how the
DocumentBuilderFactory is instantiated.
................
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
dom = db.parse(new InputSource(new StringReader(xmlIn)));
.................
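The parsing step above can be exercised end-to-end with a small self-contained sketch; the class name and the sample document are illustrative.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class DomDemo {
    // Parses an estateMaps-style XML string into a DOM tree and reads the
    // first title element directly, mirroring the snippet above.
    public static String firstTitle(String xmlIn) {
        try {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document dom = db.parse(new InputSource(new StringReader(xmlIn)));
            return dom.getElementsByTagName("title").item(0).getTextContent();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
}
```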
Once the DocumentBuilderFactory has produced a DOM representation of the document, tags within
the document can be accessed directly. The pseudo code for reading the estateMaps converted XML
document is shown below. A full listing of a typical XML document generated by the XSLT engine
can be found in Appendix D.
.................
private House createHouseObj(NodeList nodeIn)
{
    /*Iterate the XML item. The value of each attribute in the feed is read
     *and stored in a temporary variable until all attributes are found; the
     *attributes received from the XML feed are then made into a House object.*/
    for (int g = 0; g < nodeIn.getLength(); g++)
    {
        Element e = (Element) nodeIn.item(g);
        String nodeName = e.getNodeName();
        //Check the node name and set the data which is applicable.
        if (nodeName.equalsIgnoreCase("title"))
        {
            try
            {
                //Gets the "title" value and stores it in a temporary variable.
                title = e.getTextContent();
            }
            catch (Exception ex)
            {
                ex.printStackTrace();
            }
        }
The House class does not just store the information from the feed; it also adds the crime statistics data
relevant to that property when the House object is created. This is done by retrieving the location of
that property and querying the estateMaps database in order to find the nearest city to that location.
Once the result set is retrieved from the database this information is also stored alongside the
information from the feed. The House class has a number of getter and setter methods which make it
easy to access this information.
Now that the Array List has been populated with the House objects from the XML feed, a new XML
file must be generated with the added crime statistics data to be sent back as the response. This is
handled by the class called HouseXmlGeneration.
................
//getServerResponse class instantiating HouseXmlGeneration to build the new XML feed.
HouseXmlGeneration sd = new HouseXmlGeneration(HouseXmlObj.getHouses());
String res = sd.buildXML();
................
HouseXmlGeneration accepts a generic Array List with the type set as House. It iterates through each
house element in the array and builds an XML file as a string. Once the XML is generated it is sent
back to the waiting client side code.
................
//Public default constructor. Initiates the variables required to build the XML feed.
public HouseXmlGeneration(ArrayList<House> listOfHouse)
{
    listToGen = new ArrayList<House>(); //Initiate array
    if (listOfHouse.size() > 0)
    {
        //Copy the input array to the class array.
        listToGen = listOfHouse;
    }
    //Initiate the result string with the XML starting tag.
    xmlRepresentation = "<?xml version=\"1.0\"?>";
}

/*Iterate through each of the House objects in the array, get the house
 *data via the accessor methods and append the result string with the
 *relevant data in the XML format.*/
for (int i = 0; i < listToGen.size(); i++)
{
    //append the xmlRepresentation with data from the house object
................
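The generation loop above can be sketched as a self-contained method which turns a list into an XML string. The tag layout here is a simplified stand-in for the real estateMaps schema, and only the title field is shown.

```java
import java.util.List;

public class XmlBuildDemo {
    // Hedged sketch of the HouseXmlGeneration idea: start from the XML
    // declaration, append one item element per entry in the list, and close
    // the root tag. Tag names are illustrative, not the real schema.
    public static String buildXml(List<String> titles) {
        StringBuilder xml = new StringBuilder("<?xml version=\"1.0\"?><estateMaps>");
        for (String t : titles) {
            xml.append("<item><title>").append(t).append("</title></item>");
        }
        return xml.append("</estateMaps>").toString();
    }
}
```

A production version would also escape characters such as `&` and `<` in the appended values, which string concatenation alone does not handle.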
This process is very similar to that of sending planning information and sending images relevant
to a specific location. The difference is that different REST URLs are used for each service.
When a graph request is sent to the server side code, the crime statistics data is stored in the
HTTP request string as discussed in section 4.5.1. As discussed in section 4.5.2, the crime statistics
data is sent to the client side when a house search request is generated. This crime statistics data
is then sent back to the server side in order to generate the graph.
The reason this design decision was taken is that instead of re-querying the database it was
much faster to send the statistics back in the request string. The second reason is that a graph
is only generated when the user chooses to see these figures for a particular property, so graphs
are not generated when a house search request is processed. The graphs could have been generated
while the house search request was being processed, but because of the time required to generate
each graph this consumed too many resources on the server and took too long.
The graph request is handled in a similar fashion to the house search request. The
getServerResponse class first processes the HTTP request which contains the crime statistics data.
This data is then passed to a dedicated graph creation class called GraphGenerator.
.................
/*Instance of the GraphGenerator class. The variables are passed to the
 *GraphGenerator, which sends back a filename for the newly created graph.*/
GraphGenerator graphIns = new GraphGenerator();
/*Creates a graph and stores the filename and location in the
 *"filename" String.*/
String filename = graphIns.graphGen(violence_against_person06, robbery_offences06,
                                    sexual_offences06, burglary_dwelling06, theft_motor06,
                                    theft_from_motor06, violence_against_person07,
                                    robbery_offences07, sexual_offences07,
                                    burglary_dwelling07, theft_motor07, theft_from_motor07);
.................
Once the GraphGenerator class receives these figures it populates a two-dimensional array which is
used by the freely available JOpenChart Toolkit [39] to generate a graph. This graph is generated as a
JPG image. Figure 21 illustrates the structure of the array.
.................
//Two-dimensional array to store both the 2006 and 2007 crime statistics.
int[][] model =
{
    {
        violence_against_person06, robbery_offences06, sexual_offences06,
        burglary_dwelling06, theft_motor06, theft_from_motor06
    },
    {
        violence_against_person07, robbery_offences07, sexual_offences07,
        burglary_dwelling07, theft_motor07, theft_from_motor07
    }
};
//Array used to represent the columns of the graph.
String[] columns = {"Robbery offences", "Burglary dwelling offences", "Sexual offences",
                    "Theft from a vehicle", "Theft of a motor vehicle offences",
                    "Violence against the person offences"};
String[] rows = {"2006", "2007"};
String graphTitle = "Crime statistics 2006 vs 2007";
Now that the graph has been stored in a temporary location, it is sent back to the waiting client
byte by byte, as it is not possible to send a JPG file back directly. The file is read in chunks of 1024
bytes which are sent to the client in pieces and reassembled by the client's browser.
.................
FileInputStream in = new FileInputStream(file);
byte[] buf = new byte[1024];
int count = 0;
/*Reads the file 1024 bytes at a time and sends each piece to the client,
 *where the image is reassembled.*/
while ((count = in.read(buf)) >= 0)
{
    out.write(buf, 0, count);
}
in.close();
.................
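The same chunked-copy pattern can be tested in isolation with in-memory streams standing in for the file and the HTTP response. The class name is illustrative; the loop body matches the snippet above.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class ChunkedCopy {
    // Copies the input to the output in 1024-byte chunks, mirroring the loop
    // above which streams the graph JPG back to the client. Returns the total
    // number of bytes copied. Checked exceptions are wrapped for brevity.
    public static int copy(InputStream in, OutputStream out) {
        try {
            byte[] buf = new byte[1024];
            int total = 0;
            int count;
            while ((count = in.read(buf)) >= 0) {
                out.write(buf, 0, count);
                total += count;
            }
            in.close();
            return total;
        } catch (IOException ex) {
            throw new RuntimeException(ex);
        }
    }
}
```

Using a fixed-size buffer keeps memory usage constant regardless of the size of the generated image.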
The ImageFinder class takes a location as its argument. This location can either be a city or a
postcode. The location is sent to the Flickr web service in a similar fashion to that used in the XSLT
engine, as discussed in section 4.5.2. The response generated from the Flickr web service is an XML
file which is parsed using the Java XML parser.
.................
//GetServerResponse class: instance of the ImageFinder class.
ImageFinder getImages = new ImageFinder();
/*Find the Flickr location ID relevant to the location. getLocation reads
 *the XML response from the Flickr service and finds a relevant ID.*/
String locationId = getImages.getLocation(spaceRemove(location));
.................
.................
//ImageFinder class
public String getLocation(String locationIn)
{
    try
    {
        /*URL to the Flickr REST service; the string is manipulated to include
         *the city or postcode for which images are needed, sent as one of the
         *parameters.*/
        String flickRest = ("http://api.flickr.com/services/rest/?method=flickr.places.find&api_key=060b55702b4417d02a7b1338fc904f17&query=" + locationIn + " UK");
        //Converts the String type URL to a Java URL type.
        URL xmlUrl = new URL(flickRest);
        //Opens the URL to retrieve an XML file with location information.
.................
Once the response from the Flickr web service has been processed the location ID is inserted into the
Flickr image retriever URL, which is different to the location retriever. The rest of the process is
similar to that of the house search request which is discussed in section 4.5.2.
.................
String xslIn = "estateMaps/estate_assets/image.xsl";
//String for the Flickr REST service, manipulated to include the location ID.
String xmlIn = ("http://api.flickr.com/services/rest/?method=flickr.photos.search&api_key=060b55702b4417d02a7b1338fc904f17&place_id=" + locationId + "&tags=city%2C+landmark&accuracy=11&safe_search=&per_page=55");
//Convert the XML response into the estateMaps format using the XsltEngine.
XsltEngine getGeoRSS = new XsltEngine(xmlIn, xslIn);
.................
/*Query the database to find all postcodes which match the left-hand side
 *of the input string, limited to 10 results.*/
while (rs.next())
{
    //bool to represent that there are relevant postcodes
    empty = false;
    /*Iterate the result set and build a simple XML file, based on the XML
     *schema, which is sent back as the response.*/
    xmlRepre = xmlRepre + ("<obj" + count + ">");
    xmlRepre = xmlRepre + (rs.getString("postcode"));
    xmlRepre = xmlRepre + ("</obj" + count + ">");
}
.................
.................
<script src="http://maps.google.com/maps?file=api&v=2&key=ABQIAAAAE8bK52PSl1wF253tQaCPhQ6h-ORlaK6C_AdeoPM7gXAW7YORTo5R9ZAAAl6MSF9T7Dv6DyeLldzw" type="text/javascript"></script>
.................
As the page is loading, the first piece of JavaScript the browser encounters is a function which must
be processed before the page is actually loaded. This function calls the Google Maps service in order
to create a new map object and place it on the screen. Once the map has been created it can be
easily updated using the Google Maps API.
The Caching XmlHttpRequest Wrapper offers a simpler interface to initiating a request with the
inbuilt XmlHttpRequest in most modern web browsers. The way that AJAX technology actually
works is discussed in section 2.3.2.
When a request is sent to the servlet the wrapper has the capability to save this response in a
repository and the request string in a temporary Array. This temporary Array is compared with the
request string before it is sent to the servlet to see if the request has been processed before. If the
request has been sent before the cached response is used instead of sending a new request to the
servlet. Figure 22 shows how the internal operations of the caching system operate diagrammatically.
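The wrapper itself is JavaScript, but the caching behaviour it implements is language neutral and can be sketched in Java: the request string acts as the key into a repository, and a repeated request is answered from the repository without touching the servlet. The class name and the Function-based stand-in for the servlet call are assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class RequestCache {
    // Repository of responses, keyed by the request string.
    private final Map<String, String> repository = new HashMap<>();

    // Returns the cached response if this request string has been seen
    // before; otherwise performs the (simulated) servlet round trip and
    // stores the response in the repository.
    public String get(String requestString, Function<String, String> servlet) {
        String cached = repository.get(requestString);
        if (cached != null) {
            return cached;
        }
        String response = servlet.apply(requestString);
        repository.put(requestString, response);
        return response;
    }
}
```

As in estateMaps, the cache lives only as long as the object holding it, which matches the behaviour of keeping results until the browser is closed.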
The Wrapper is simply instantiated by calling a global variable which is initialised as the page is
loading. The Wrapper has a number of different methods, but the one which is of particular interest is
the get method shown below.
.................
/*URL to the servlet with the request string; processResponse is the callback
 *method; the '.getcache' string signifies that the response should be cached
 *if it is not already in the repository.*/
Http.get({url: url, callback: processResponse, cache: Http.Cache.GetCache}, ['.getcache']);
If the request has not been sent to the servlet before, then an HTTP request is sent to the servlet and
the response is saved in a repository. Every response received from the servlet is cached in estateMaps
using the method and code illustrated in this section.
.................
if ((Http._get.readyState != Http.ReadyState.Uninitialized) &&
    (Http._get.readyState != Http.ReadyState.Complete))
{
    this._get.abort();
}
/*Uses the HTTP request object in the browser to open a connection to the
 *servlet using the passed in URL.*/
Http._get.open(method, url, true);
.................
In order to parse the XML response, the browser's inbuilt XML parser is used, which is incorporated
in most modern web browsers. The parser reads the XML into memory and converts it into an XML
DOM object that can be accessed directly with JavaScript functions. As mentioned in section 4.5.2,
DOM is a W3C [38] standard which enables documents such as XML and HTML to be dynamically
updated and accessed. Once a house element has been found in the XML file its associated variables
are passed to a JavaScript method called createMarker. A typical XML file which is received from the
servlet for a search request can be found in Appendix E.
.................
//Generates a DOM representation of the feed.
var rss = doc.getElementsByTagName('estateMaps').item(0);
try
{
    //Iterates through the layers in the XML file.
    for (var i = 0; i < rss.childNodes.length; i++)
    {
        channel = rss.childNodes.item(i);
        for (var j = 0; j < channel.childNodes.length; j++)
        {
            //Reached the elements of a house.
            item = channel.childNodes.item(j);
            for (var k = 0; k < item.childNodes.length; k++)
            {
                /*Processes the current house, takes its associated values
                 *and populates them into temporary variables.*/
                interData = item.childNodes.item(k);
                element = interData.childNodes.item(0);
                if (interData.tagName == 'title')
.................
            /*GLatLng is the Google Maps object representation of a latitude
             *and longitude. The point is added to an array which is used by a
             *Google Maps method to find the centre of the elements on the map.*/
            point = new GLatLng(lat, longi);
            bounds.extend(point);
            /*Passes the house values to the createMarker method to add the
             *house to the map.*/
            createMarker(type, point, image, description, price, title, bed, bath, link, popfig,
                         robbery_offences07, Sexual_offences07, Theft_from_a_vehicle07,
                         dwelling_offences07, Violence07, theft_car07,
                         robbery_offences06, Sexual_offences06, Theft_from_a_vehicle06,
                         dwelling_offences06, Violence06, theft_car06);
        }
    }
    /*Google Maps method to set the centre of the map; uses the bounds
     *populated earlier to find the central location.*/
    setContext(bounds);
.................
The createMarker method takes the values of each house and generates a Google Marker and Google
Bubble. The Google Bubble is what the user interacts with. It contains three tabs, used for displaying
the house information, the crime statistics graph and the images for the area.
The Google Bubble is created using an Array which contains three GInfoWindowTab objects which
hold the associated HTML code and data for the three pieces of information.
[Figure: a Google Bubble with its three tabs, attached to a marker; the house information is displayed using HTML code.]
The marker is positioned on the map using the Point object which was created in the code above. The
Google Bubble has an event action which is triggered when the user clicks on the marker. Clicking on
the marker displays the Google Bubble; once the user clicks on the marker again the bubble
disappears. This process is carried out for each house in the XML file.
A similar process is carried out for listing the planning alerts information, except that the Google
Bubble only has one tab. The image data is not processed until the user clicks on the Images tab. At
this point a request is sent to the servlet or the cache repository and the HTML in the Image
GInfoWindowTab is updated with links to the pictures. Once the information is fully processed the
user is notified with a message on the GUI.
The library was only able to read entries which were pre-populated in an array on the client side. In
order for it to be compatible with estateMaps it needed to be modified so that entries could be read
from the estateMaps database. There were a number of design challenges in modifying the auto
suggest drop down box.
The main problem encountered was the way in which the AJAX technology operates. When an
AJAX call is sent to the servlet the calling method is put into a sleep mode, but the rest of the script
carries on functioning. The way in which the library works meant that the array of values had to be
populated before the rest of the script could carry out its work. As mentioned in section 3.3, this
required the use of SJAX (Synchronous JavaScript and XML). The code below demonstrates the
modified code and the use of SJAX.
.................
StateSuggestions.prototype.requestSuggestions = function (oAutoSuggestControl, bTypeAhead)
{
    //Modified so that a request is sent to the servlet to populate the array.
    this.rand.length = 0;
    var aSuggestions = new Array();
    //Input value from the search box.
    var sTextboxValue = oAutoSuggestControl.textbox.value;
    //Synchronous call: gets the values from the servlet and populates the array.
    this.ajaxGo(sTextboxValue);
    for (var i = 0; i < this.rand.length; i++)
    {
        aSuggestions.push(this.rand[i]);
    }
    //Pushes the values from the array into the drop down box.
    oAutoSuggestControl.autosuggest(aSuggestions, bTypeAhead);
};
In order to test estateMaps a number of different testing techniques were used. During the entire
development phase of the project continuous debugging was carried out, and at the end black box and
white box tests were carried out. This chapter explores these tests and shows how estateMaps
performed.
5.1 Debugging
As the application was being developed the code was continuously debugged in order to check that
the various components in the system worked correctly. In order to verify that the generated XML
feeds conformed to the W3C [42] standard, Mozilla Firefox's internal checking system was used.
Each XML feed generated by the Java code was dumped to a local repository so that it could be
checked for errors. The JavaScript, AJAX, HTML and CSS code was also tested using Firefox's
internal error console.
Netbeans also provided a number of different debugging tools which helped manage and test various
elements of the system. The most useful tool which Netbeans provided was the dynamic variable
analyser. This tool allowed me to visualise the state of a specific variable in the application at any
specific moment during the runtime. The HTTP and database monitors allowed me to visualise any
errors which were being generated by either of the components.
Breakpoints were also used in the code in order to examine where errors were forming in the
application. Breakpoints enable the code to be executed line by line, to see the exact point, and the
methods involved, which cause the application to break.
In order to understand some of the branches in the application, particularly the XML processing,
control flow graphs were drawn. Drawing control flow graphs enabled me to see the exact path which
the execution takes through the application.
Figure 25 Control flow graph for processing a House XML file on the client side.
Looking at the table we can see that most of the test cases were passed except for one. During the
design phase of estateMaps it was decided that the application should support multiple browsers, but
due to the time limitations of the project it was not possible to cater for both Firefox and Internet
Explorer.
NFREQ6 - The application must be able to cater for cross browser compatibility in order to reach all
potential users.
This requirement was of fairly low priority as it does not affect any of the functionality of estateMaps.
With the application running on a single browser the requirement is still sufficiently fulfilled.
Once a search location has been entered into the search bar and the search button is clicked, the
application does not refresh the screen. Instead the application fades out the Google Map and an
animated image is displayed to inform the user that the request is being processed, as illustrated in
figure 28.
As soon as the processing has been completed, markers can clearly be seen on the Google Map
which represent the location of each property for sale. Clicking on a marker brings up the
information about the property in a Google Bubble, which can clearly be seen in figure 29.
The images are presented in the form of a collage; once the user clicks on an image it expands to
full size, where the user can cycle through all of the images or return to the collage.
The user may wish to see planning information local to the properties which are displayed on the
screen. This can be done by clicking the planning button on the advanced search bar. In a similar
fashion to houses being displayed on the map, markers are used to represent the location of each
property where the information is available. Clicking on a marker displays a bubble with the
relevant planning information, which can be seen in figure 33.
In every search request the average house price is also displayed in the left hand corner of the screen.
As the search request changes the information is also updated to reflect the search criteria.
There were a number of difficulties in regards to time management and feature creep during the
development of estateMaps. During the preliminary planning of the project it was estimated that the
first iteration would be completed within four weeks, but because of design complications with the
development of the XSLT processing it actually took longer than expected. Another element which
delayed the first iteration was the decision to introduce new features during the second iteration.
This required changes in the structure of the program and the code. The deployment plan in
appendix A reflects these changes.
If the traditional waterfall approach had been adopted for estateMaps it could have resulted in a
project which was more structured, but it would not have been able to cater for changing
requirements. In today's fast moving IT environment, catering for change should be one of the
fundamental requirements in creating a new application.
Overall I was pleased with the implementation of estateMaps. I believe it was created in a modular
and loosely coupled fashion, which makes it easier to add extra functionality in the future. In order to
test how easy it was to integrate a new service into estateMaps, I created a test class which utilised the
existing components within the system. I successfully integrated the Zillow [47] web service into
estateMaps. Zillow offers a similar service to Nestoria but only retrieves properties from America.
The test class was successfully able to generate a list of House objects and display the information for
each property within the list.
In this test the robustness of the application was also put to the test, as when a House object is created
the crime statistics data is added at that time. Because there is no crime statistics data for America,
the application had to cope with the missing data without failing.
EstateMaps successfully met all the core requirements which were set out during the design of the
project. Testing has shown that the program is relatively bug free. The main parts of the project which
could have been further improved are discussed below.
• More of the processing could have been carried out on the client side to reduce the amount of
resources required on the server side.
• The caching system could be improved by utilising cookies, storing data on the client in order
to free up memory resources on the server side.
• The crime statistics data could have been made more granular, allowing statistics for suburbs
as well as cities.
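The server-side caching mentioned above can be illustrated with a small sketch. This is not the actual estateMaps implementation (which is not reproduced here); it is a minimal least-recently-used cache of search responses, of the kind that keeps the server's memory bounded while still reducing the number of responses the server has to regenerate (requirement FREQ7).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of a bounded server-side cache mapping a search key
// (e.g. "manchester|2bed") to the XML response generated for it.
public class SearchCacheSketch {
    private final int capacity;
    private final LinkedHashMap<String, String> cache;

    public SearchCacheSketch(int capacity) {
        this.capacity = capacity;
        // accessOrder = true makes iteration order least-recently-used first,
        // so the eldest entry is the best eviction candidate.
        this.cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> e) {
                return size() > SearchCacheSketch.this.capacity;
            }
        };
    }

    // Returns the cached XML response for a search key, or null on a miss.
    public String get(String searchKey) {
        return cache.get(searchKey);
    }

    public void put(String searchKey, String xmlResponse) {
        cache.put(searchKey, xmlResponse);
    }

    public int size() {
        return cache.size();
    }
}
```

On a cache hit the server can return the stored XML immediately instead of re-querying Nestoria and re-running the XSLT transformation; the fixed capacity guarantees the cache cannot grow without bound.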
Another interesting feature, conceived during the implementation of the project, was allowing
users to upload their own properties for sale. The system could easily be amended to add this
functionality by reusing the current components in the system; all that would be required
from the user is information about the property. Complications occur when users start to upload
data which has effectively nothing to do with their properties. Filtering out this data would have to be
done automatically, which would require a whole new sub-system. The second complication is the
handling of the data itself: with security breaches concerning personal data constantly in the public
eye, the data would have to be kept under strict review and stringently obey the Data Protection Act.
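A first, very rough cut at such automated filtering might simply reject submissions whose basic fields do not look like a plausible property listing. The rules and field names below are purely illustrative assumptions; a real sub-system would need far more sophisticated checks (and likely human moderation).

```java
// Hypothetical sketch of a first-pass filter for user-uploaded properties.
// The validation rules here are illustrative only.
public class SubmissionFilterSketch {

    // Accepts a submission only if the basic property fields look plausible.
    static boolean looksLikeProperty(String description, double price,
                                     double lat, double lon) {
        // A real listing should carry a non-trivial description.
        if (description == null || description.trim().length() < 10) {
            return false;
        }
        // A property offered for sale must have a positive price.
        if (price <= 0) {
            return false;
        }
        // Reject coordinates outside the valid latitude/longitude ranges,
        // since they could never be placed on the map.
        if (lat < -90 || lat > 90 || lon < -180 || lon > 180) {
            return false;
        }
        return true;
    }
}
```

Submissions passing this filter could then be converted into House objects through the same components used for Nestoria data, which is why reusing the existing system is plausible.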
“Web 2.0 may be a buzz word, but it's still attracting big bucks. Some of the novelty
surrounding Web 2.0 has worn away since the term first gained traction in 2004, but venture
capitalists in search of the next big thing are still pouring money into the industry.
Venture capitalists invested $3 billion in Web 2.0 firms last year, up nearly 9 percent from
2004, according to the MoneyTree report by PricewaterhouseCoopers and the National
Venture Capital Association.
As the growing penetration of broadband makes the Internet a routine part of life, these so-
called Web 2.0 sites - which revolve around user-oriented and user-driven content - are
revolutionizing the way people interact with the Web...” [44]
It is certainly true that Web 2.0 sites are slowly becoming a part of everyday life, with websites such
as Facebook and Flickr allowing users to get involved in the web experience. As for estateMaps
being a Web 2.0 application, I believe that some of the Web 2.0 ideas have been used in creating
estateMaps, though not strictly applied. estateMaps effectively demonstrates how the internet can
be used as a platform to run an interactive application which utilises data from a number of
different existing information sources.
The future of the internet will unquestionably include the concepts of Web 2.0, but people have
already started to see beyond Web 2.0 and to conceptualise Web 3.0. Tim Berners-Lee, the inventor
of the World Wide Web, stated:
“People keep asking what Web 3.0 is. I think maybe when you've got an overlay of scalable
vector graphics - everything rippling and folding and looking misty - on Web 2.0 and access
to a semantic Web integrated across a huge space of data, you'll have access to an
unbelievable data resource...” [45]
Looking at Tim Berners-Lee’s reply, we can already see that there is no clear indication of what
Web 3.0 will include, except that SVG (Scalable Vector Graphics) will play an important role. SVG
files are basically XML files which describe two-dimensional vector graphics, both static
and animated.
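As a concrete illustration of the point that SVG is plain XML, a minimal SVG document might look like the following (a simple example drawing a single circle, not taken from any particular source):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- A minimal SVG document: one red circle on a 100x100 canvas -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <circle cx="50" cy="50" r="40" fill="red"/>
</svg>
```

Because it is ordinary XML, an SVG file can be generated, transformed and validated with the same tools (such as XSLT and XML parsers) used elsewhere in this project.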
I believe that the skills I have learned during the implementation of this project will prove
invaluable during my career and improve my competency and confidence in the field of Software
Engineering.
Appendix E – A typical XML file for a house search request which is returned to the client.
FREQ1. The application must be able to utilise the Nestoria API in order to display properties
for sale geographically on a world map.
FREQ2. The application should allow users to search for properties based on multiple search
conditions.
FREQ3. The house description, price and picture should be displayed where the information is
available.
FREQ4. Crime statistics about the area in which the property lies must be displayed to
the user in a manner which is easy to understand.
FREQ5. The application should allow multiple houses to be displayed on the map where
appropriate.
FREQ6. The application must be able to display the most popular search terms in the form of a
Tag Cloud.
FREQ7. Caching techniques will be implemented into the application in order to reduce the
number of responses which will be generated by the server.
FREQ8. The application must be able to generate an informative response if the client does
not have all the required plug-ins in order to view the website.
Non-functional requirements
NFREQ1. The application should follow good HCI principles which will make it easier for the
user to navigate the website.
NFREQ2. The application must be able to deal with the potential unreliability of third party
services.
NFREQ3. The server should be able to generate responses concurrently based on multiple client
sessions.
NFREQ4. The application should be built in a modular design to allow for extensibility in the
future and to cater for unreliability of third party services.
NFREQ5. Many psychology studies have concluded that eight seconds, plus or minus two, is the
longest a user will wait before they end the session. Based on this study,
the application must generate a response within the ten-second barrier.
NFREQ6. The application must be able to cater for cross browser compatibility in order to reach
all potential users.
</bed>
<bath>
<xsl:value-of select="@bathroom_number"/>
</bath>
<price>
<xsl:value-of select="@price_formatted"/>
</price>
<type>
<xsl:value-of select="@listing_type"/>
</type>
<type_price>
<xsl:value-of select="@price_type"/>
</type_price>
<link>
<xsl:value-of select="@lister_url"/>
</link>
<image>
<xsl:value-of select="@thumb_url"/>
</image>
<lat>
<xsl:value-of select="@latitude"/>
</lat>
<long>
<xsl:value-of select="@longitude"/>
</long>
</item>
</xsl:for-each>
</channel>
</estateMaps>
</xsl:template>
</xsl:stylesheet>
Mock-up 1 – Google Map
6. Pierre. (13/10/2007) Building Web Services the REST Way [online]. Available from
http://pierrebsas.blogspot.com/2007/10/building-web-services-rest-way.html [Accessed
04/04/2008]
11. Nestoria.co.uk (n.d.) Property and Homes Search by Nestoria [online]. Available from
http://www.nestoria.co.uk [Accessed 11/03/2008]
12. Hinchcliffe, D. (17/04/2008) Web 2.0 success stories driving WOA and informing SOA
[online]. Available from http://blogs.zdnet.com/Hinchcliffe/?cat=32 [Accessed 12/04/2008]
14. georss.org (n.d.) Geographically Encoded Objects for RSS feeds [online]. Available from
http://georss.org/ [Accessed 15/04/2008]
20. Patton, T. (05/02/2007) Mashups put a new face on the Web [online]. Available from
http://articles.techrepublic.com.com/5100-3513-6156271.html [Accessed 26/03/2008]
22. Facebook.com (n.d.) Facebook is a social utility that connects you with the people around
you [online]. Available from http://www.facebook.com/ [Accessed 01/04/2008]
23. Cashmore, P. (31/10/2006) Facebook This! Facebook Copies Digg and del.icio.us [online].
Available from http://mashable.com/2006/10/31/facebook-copies-digg-and-delicious-
bookmark-buttons-on-every-site/ [Accessed 29/03/2008]
24. Fish4.co.uk (n.d.) fish4 homes - property, flats and houses for sale [online]. Available from
http://www.fish4.co.uk/iad/homes [Accessed 23/03/2008]
25. WebSiteOptimization.com (30/10/2006) Response Time: Eight Seconds, Plus or Minus Two
[online]. Available from http://www.websiteoptimization.com/speed/1/ [Accessed
21/03/2008]
26. Spence, I., Bittner, K. (15/03/2005) What is iterative development? [online]. Available from
http://www.ibm.com/developerworks/rational/library/mar05/bittner/ [Accessed 23/03/2008]
27. Mysql.com (n.d.) The world's most popular open source database [online]. Available from
http://www.mysql.com/ [Accessed 19/03/2008]
28. Nik. (06/08/2008) Free UK postcode data file Longitude Latitude SQL PHP code search
[online]. Available from http://www.easypeasy.com/guides/article.php?article=64 [Accessed
03/04/2008]
29. Royalmail.com (n.d.) Postcode Address File (PAF®) [online]. Available from
http://www.royalmail.com/portal/rm/jump2?mediaId=400085&catId=400084 [Accessed
06/04/2008]
31. The Apache Software Foundation. (n.d.) Apache Tomcat [online]. Available from
http://tomcat.apache.org/ [Accessed 15/03/2008]
33. Webyog.com (n.d.) SQLyog MySQL GUI - Community Edition [online]. Available from
http://www.webyog.com/en/downloads.php [Accessed 18/03/2008]
35. Jdom.org (n.d.) JDOM [online]. Available from http://www.jdom.org/ [Accessed 31/03/2008]
36. Xerces.apache.org (n.d.) Xerces Java Parser 1.4.4 [online]. Available from
http://xerces.apache.org/xerces-j/ [Accessed 14/03/2008]
38. W3C.com (22/10/2003) Document Object Model FAQ [online]. Available from
http://www.w3.org/DOM/faq.html [Accessed 17/03/2008]
39. Müller, S. (n.d.) JOpenChart Library and Toolkit [online]. Available from
http://jopenchart.sourceforge.net/ [Accessed 12/03/2008]
42. W3c.org (11/12/2007) Validator for XML Schema [online]. Available from
http://www.w3.org/2001/03/webdata/xsv [Accessed 15/03/2008]
44. Wong, G. (13/07/2006) Follow the Web 2.0 money [online]. Available from
http://money.cnn.com/2006/07/13/technology/web_2.0/index.htm [Accessed 05/04/2008]
45. Shape.it (n.d.) Shaping from the Web 2.0 to the Web 3.0 [online]. Available from
http://shape.it/index.php?option=com_content&task=view&id=16 [Accessed 11/04/2008]