
Experiment Case Study

Scaling plagiarism detection in federated clouds


The challenge

"Detecting cross-language plagiarism is both data and CPU intensive, so our KOPI service definitely needed a place for testing evolved scalability."
Máté Pataki, Chief Architect of KOPI, MTA SZTAKI

Building Reliable Computing Services on Clouds

MTA SZTAKI - the Hungarian Academy of Sciences' Institute for Computer Science and Control - performs basic and application-oriented research in an interdisciplinary setting of computer science, engineering, information technology, intelligent systems, process control and wide-area networking. MTA SZTAKI runs several public web-based services, such as the KOPI Plagiarism Search Portal (kopi.sztaki.hu). KOPI, running since 2004, is the most popular plagiarism detection service in Hungary. For professors and teachers, KOPI offers the opportunity to compare theses and home assignments to all previously uploaded documents, or to various document sets openly accessible on the Internet. Students can check their own written work to see whether the amount of citation exceeds the limit set by their home institution.

In 2011 KOPI offered a new feature, cross-language plagiarism detection, for the first time anywhere in the world. For example, with the cross-language feature it is now possible to find paragraphs taken from the English Wikipedia and translated into Hungarian. This new technique is costly in terms of processing and data storage.

The operation of KOPI is based on a queue which collects new requests and delegates them to processing nodes. Users upload their documents into the queue to be checked for similarities. When processing is done, a search report listing all parts suspected of plagiarism is sent via email.

Bursts in usage are a natural characteristic of our service: new theses arrive in large quantities at regular submission periods. In such overloaded situations users had to wait several days for the similarity search report. KOPI is implemented as a multi-layer infrastructure, that is, an interconnected set of hosts with different capabilities. Bottlenecks may occur at any of the layers, so a healthy proportion of the various components must be maintained. In periods of bursts the infrastructure should be scaled up so that the requested actions can be completed within the preferred time frame. For the queue management problem, scaling depends on several factors: the rate and size of incoming requests, the cost and time of booting new components, and the available capacities. We also need to prepare for situations when our partners' clouds are federated with ours, so that we can optimize the use of the joint capacities and exploit federated clouds during usage peaks.
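The queue-and-worker delegation described above can be sketched as follows; this is a minimal illustration under our own assumptions, and the names and structure are hypothetical rather than KOPI's actual implementation:

```python
import queue
import threading

# Hypothetical sketch: documents are enqueued on upload and pulled by
# processing nodes (modelled here as threads); in KOPI the resulting
# report would be emailed to the user.
doc_queue = queue.Queue()
reports = []
reports_lock = threading.Lock()

def processing_node():
    """Pull documents from the shared queue and 'check' them."""
    while True:
        doc = doc_queue.get()
        if doc is None:                 # sentinel: shut this node down
            doc_queue.task_done()
            break
        # Placeholder for the real similarity search.
        report = f"report for {doc['name']} ({len(doc['text'])} chars)"
        with reports_lock:
            reports.append(report)
        doc_queue.task_done()

nodes = [threading.Thread(target=processing_node) for _ in range(2)]
for n in nodes:
    n.start()

for i in range(5):
    doc_queue.put({"name": f"thesis-{i}.pdf", "text": "x" * 1000})

doc_queue.join()                        # wait until every document is processed
for _ in nodes:
    doc_queue.put(None)                 # stop the nodes
for n in nodes:
    n.join()
```

Adding a processing node is then simply a matter of starting another consumer of the same queue, which is what makes this architecture a natural fit for elastic scaling.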

We faced sudden, serious capacity problems with KOPI. Now we can expand our service environment dynamically, even into other clouds.

The solution
Elastic scaling to improve response time and fault tolerance
During our experiment named KOPFire we adapted our service components to BonFIRE so that we could run realistic but simplified KOPI services in multiple instances. Then we took the following steps in experimentation:
We measured the effects of increasing the throughput of different service components. For this, we collected the possible atomic scaling actions and the time needed to perform each of them. As a result we could also determine the optimal ratio of system components and the most suitable configuration options.

Any scaling task needs a metric to measure the improvements achieved. In our case we had to introduce a new one: characters per second (cps), which measures how many characters of the queued documents are processed in a second. The mean cps over a longer period characterizes the processing speed of the service, so the size of the document queue and the current processing speed can be used to trigger scaling actions.

We created an automated scaling solution that can speed up or slow down document processing, depending on the number and size of documents waiting in the queue. When we run out of capacity on one cloud, we can expand to other clouds in the federation provided by BonFIRE. At the same time, we also need to rationalize the number of cloud resources used.

The scaling solution is based on a set of scripts that use the BonFIRE API to manage cloud resources and the BonFIRE Monitoring Framework to collect measurements about virtual machines. The scaling script detects situations in which scaling is beneficial, selects and executes appropriate scaling actions, and ensures that the new components are properly configured to work in cooperation with the others. Several scaling algorithms can be plugged into the scaling script; we implemented and compared algorithms using greedy, lazy and speed-oriented adaptation. Samples taken from the usage statistics of the real service were used to test and tune the scaling algorithms. Although no single algorithm is best for all usage patterns, each is good enough to raise the throughput of the service to an acceptable level.
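The cps metric and a queue-driven scaling trigger can be sketched like this; the thresholds, function names and the greedy decision rule are illustrative assumptions, not the algorithms actually used in the experiment:

```python
# Hypothetical sketch of the characters-per-second (cps) metric and a
# simple greedy scaling trigger; all thresholds are illustrative.

def mean_cps(samples):
    """Mean processing speed over a window of (chars_processed, seconds) samples."""
    chars = sum(c for c, _ in samples)
    secs = sum(s for _, s in samples)
    return chars / secs if secs else 0.0

def scaling_action(queued_chars, cps, nodes=1, target_seconds=3600, max_nodes=10):
    """Decide whether to add or remove processing nodes.

    If the backlog cannot be drained within target_seconds at the current
    speed, scale up (greedy); if we are far below the target, scale down
    to rationalize resource usage.
    """
    if cps == 0:
        return "scale_up" if queued_chars else "noop"
    eta = queued_chars / cps            # estimated time to drain the queue
    if eta > target_seconds and nodes < max_nodes:
        return "scale_up"
    if eta < target_seconds / 4 and nodes > 1:
        return "scale_down"
    return "noop"

# Two 10-second monitoring windows -> 10,500 characters per second.
speed = mean_cps([(120_000, 10), (90_000, 10)])
print(scaling_action(queued_chars=50_000_000, cps=speed, nodes=2))  # -> scale_up
```

A lazy variant would simply use a higher threshold before acting, and a speed-oriented one would weight the recent cps trend more heavily; all of them plug into the same decision point.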
Furthermore, the scaling script was enhanced to continuously check the state of service components and to replace failed ones. As a side effect of elastic scaling, we thus also get improved fault tolerance for the service.
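The health-check side effect can be sketched as a replace-on-failure pass over the component list; this is a toy model under our own assumptions, in which "booting a VM" is reduced to creating a fresh record:

```python
import itertools

# Hypothetical sketch of the health-check pass: every component that
# fails its liveness probe is replaced by a freshly provisioned one.
_ids = itertools.count(1)

def new_component():
    """Stand-in for provisioning a new VM through the cloud API."""
    return {"id": next(_ids), "alive": True}

def check_and_replace(components, is_alive):
    """Return the component list with every failed component replaced."""
    return [c if is_alive(c) else new_component() for c in components]

cluster = [new_component() for _ in range(3)]
cluster[1]["alive"] = False                       # simulate a crashed VM
cluster = check_and_replace(cluster, lambda c: c["alive"])
print([c["id"] for c in cluster])                 # -> [1, 4, 3]
```

Because the same provisioning and configuration path is used for scaling up and for replacement, fault tolerance comes essentially for free once elastic scaling is in place.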

László Kovács, Head of Department, MTA SZTAKI

Results
Using BonFIRE's unique infrastructure, we created an elastic scaling solution for our KOPI plagiarism detection service. Based on the rich functionality and extensible monitoring of BonFIRE, we could try and compare a large variety of test setups. More than a hundred test cases were run to measure and evaluate the performance of scaling in realistic situations. These experiments helped us immensely in finding the appropriate scaling solution, which in the end enables us to provide a faster and more reliable service to our growing user community. With the new scaling solution, the processing time of the documents can be predicted, and thus it can be guaranteed that the search report is sent within a couple of hours regardless of the size of the waiting queue.

"With the elastic scaling solution tested on BonFIRE, KOPI can cope with the increased usage attracted by our planned innovations."

Added value of using BonFIRE


As the BonFIRE infrastructure provides enhanced facilities, SZTAKI could experiment with:
- managing and using large, partitioned indexes in a federated Cloud environment;
- automating complex scaling experiments using Restfully and Ruby scripts;
- ubiquitous monitoring of the whole BonFIRE testbed, including measurements on physical hardware, virtual machines and software components using built-in and custom metrics;
- testing the service on widely different hardware and software configurations at different locations;
- examining the implemented fault-tolerance mechanisms during various simulated hardware and software failures;
- identifying the important factors that affect the performance of service components as well as the deployment time of new components.

Next steps
The results will be used to improve the KOPI service, offering better response times and availability to users. By applying the experiment findings we can avoid situations in which it takes days to process and compare waiting documents. Thanks to the standards-friendly and extensible BonFIRE API, large parts of the scaling script can be migrated to our production environment. The BonFIRE experiment results are essential for KOPI to serve the new users brought by further planned innovative features.

Your free and open Cloud testbed

BonFIRE Facility: open for experimentation


Following two successful Open Calls that gave academic researchers, developers and SMEs access to the Cloud infrastructures in BonFIRE, it is now possible to apply to run experiments as part of the Open Access initiative. This is your chance to get involved and test your innovative ideas for free!
In Open Access, you will have access to a multi-site Cloud facility for experimentation with applications, services and systems:
- large-scale, heterogeneous and virtualised compute, storage and networking resources;
- full control of your resource deployment;
- in-depth monitoring and logging of both physical and virtual resources;
- advanced Cloud and network features;
- ease of experimentation.

BonFIRE allows users to evaluate the effects of converged service and network infrastructures; assess the socio-economic impact of new Cloud services; and combine Cloud computing and data storage with novel networking scenarios. Essentially, BonFIRE enables developers to research new, faster, cheaper or more flexible ways of running applications. Open Access will continue through 2014.

BonFIRE Project
The BonFIRE Project has funded experiments selected through two open calls. This series of public case studies highlights the challenges and results of each experiment and the added value of the BonFIRE multi-cloud facility.

The BonFIRE consortium brings together world leading industrial and academic organisations in cloud computing to deliver a robust, reliable and sustainable facility for large scale experimentally-driven cloud research.

The research leading to these results has received funding from the European Commission's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 257386.

Contact information
Experiment contact: András Micsik (micsik@sztaki.hu), phone: +36 1 2796248
BonFIRE contact: bonfire@bonfire-project.eu
More information: www.bonfire-project.eu
