A thesis submitted in part fulfilment of the degree of BA (Hons) in Computer Science with the supervision of Dr. Chris Bleakley
School of Computer Science and Informatics University College Dublin 05 May 2011
Project Specification
The goal of this project is to develop a program which uses sound to inform the user about the current state of the system. The advantage of using sound over graphs and numbers is that users can do something else (e.g. word processing or email) while still receiving some information on the state of the system.

Mandatory: Write and test a program to:
* Repeatedly read one item of status information from a PC. For example, possibly: memory usage, CPU usage, network usage.
* Generate a sound in real time which represents the level of the chosen status item. For example, possibly: indicate memory usage by the loudness of a rumbling sound; indicate CPU usage by the speed of a ticking sound; indicate network usage by bell rings.
* Provide a graphical user interface which allows the sound to be switched on and off and the volume adjusted.
* Assess the effectiveness and usability of the program by user survey.

Discretionary: Increase the number of items being monitored to four. Use different sonification techniques for each.

Exceptional: Provide a graphical user interface which allows the user to associate monitoring items with various sonification techniques. Extend the program to monitor several status items simultaneously.
Abstract
The aim of this project is to design and develop a program that uses different sonification techniques to monitor various aspects of a computer system. System monitors are used to monitor multiple system resources, such as a computer's Central Processing Unit (CPU) usage, memory usage and network usage. System monitors are useful as they allow the user to keep track of the computer's performance. Sonification is the use of non-speech audio to convey information. This project aims to implement a system-monitoring program which uses different sonification techniques to represent the information gathered from the monitor.
Table of Contents
1 Introduction .................................................. 6
1.1 Project Description ......................................... 6
1.2 Structure of Report ......................................... 7
3 Approach ...................................................... 13
3.1 Plan Outline ................................................ 13
3.2 Design Issues ............................................... 14
7 References .................................................... 25
1 Introduction
1.1 Project Description
A system-monitoring program will be developed to monitor the performance of multiple computer resources, such as CPU usage, network usage and memory usage. The data retrieved by this program will be presented to the user through sonification, which is the use of non-speech audio to convey information [7]. The purpose of this is to give the user regular feedback on how their system is performing without taking them away from their current tasks. System monitoring is very useful as it informs the user of how their computer is performing, allowing them to make changes to their system in order to improve performance, such as closing programs that are not being used in order to free up RAM and allow the computer to work faster.

The resulting program will use different sounds to represent the data collected from monitoring the system. The sounds will be generated using .wav files; these files will then be manipulated within the program to reflect changes in system status. Many system-monitoring programs represent the retrieved data through the visual media of graphs and tables containing numbers and percentages. These methods are effective and accurately represent the data obtained; however, if a user is working on an essay or reading an online article, it is not practical to be constantly referring back to a graph or chart to monitor the system. The aim of this project is to create a monitoring system that runs in the background and constantly alerts the user to the state of the system without distracting them from their current activities.

Few applications exist that present system information through audio. One such application is Heart Monitor [14], which exclusively monitors a system's CPU usage. The information provided by the application is presented to the user as a sound resembling a beating heart.
A faster heartbeat corresponds to a higher percentage of CPU usage. While the application provides the user with system information using audio, it is limited to providing CPU usage only. This project will develop the concept used in Heart Monitor and build on this technique to provide auditory graphs presenting various items of system status information. The language used throughout this project is Java. This language was selected because it has many classes that support sound processing and the management of sound files. There are also many libraries and APIs that can be added to existing classes to provide methods for collecting system status information. The program will be developed in the Eclipse IDE.
6 of 26
05 May 2011
2 Background Research
In order to get a better understanding of what should be achieved with this project, background research was carried out. Previous research papers and reports in the area of sonification have been used to gain a deeper insight into the applications and benefits of this method.
2.1 Sonification
Sonification is the use of non-speech audio to convey information [7]. It can be, and is, used in many different areas as an alternative to visual information. This section will look at the advantages sonification can have over visualisation, as well as some sonification techniques and successful applications of sonification in different disciplines.

Sonification is useful in computing because it allows users to receive information from their computer without having to close or minimize windows. With computers shrinking to the size of handhelds, screens are also getting smaller, and screen space is becoming more and more valuable. Using sonification to present information to the user consumes no screen space, leaving more room for visual applications.
The use of audio information over visual information has many advantages. The visual system can only give detailed information about a small area of focus, whereas the auditory system can provide general information from anywhere, even outside our peripheral vision [2]. Sound can be heard by humans and animals from any direction, such as from behind, above or either side, whereas visual representations can only be seen within one's direct or peripheral vision. This makes sonification more versatile in terms of presenting information. Unlike sound, visual sources are ignorable, meaning it is easy to miss something even when you are looking directly at it, especially if there is only a brief change in the information being visualised. It is very difficult to ignore a change in sound.

The use of sonification over speech audio also has advantages, one being the shortness of non-speech sounds. Non-speech sounds can be shorter, meaning they can be heard more rapidly, so the user gains information faster than with speech output. Speech audio is much slower in presenting information, as a user has to listen to the message from start to finish and comprehend the words before the message is understood. With speech, much as with text, some information can take many combinations of words to describe something simple, which takes time; and if the information is ever-changing, speech output may not be able to keep up. Since sonification is interdisciplinary, the next section will look at different applications and devices that use sonification techniques to convey information across multiple disciplines.

2.1.1 Applications of Sonification

Sonification has been, and still is, used in a multitude of disciplines such as science, engineering, medicine, oceanography and computer science. This section will look at key tools, technologies and applications that use sonification techniques in these different areas.
The Geiger counter is one of the first and most primitive applications of sonification [11]. It is a device used to detect radiation: when radiation is detected, the device clicks at regular intervals, and the more radiation is detected, the quicker the rate of clicks. Studies and experiments have shown that detecting radiation levels using audio is more effective than using visual displays [11]. Applications such as this take advantage of a person's ability to hear the smallest changes in audio.

In medicine, sonification is widely used as a means of monitoring different areas of the body. One device used today by surgeons and medical professionals is the pulse oximeter, which monitors the level of oxygen in a patient's blood and produces a tone that changes in pitch with the oxygen level [6]. This device allows a surgeon to monitor critical information about the patient while focusing entirely on the surgery. Other medical devices that incorporate sonification are heart rate monitors, which convey the heart rate using a sequence of beeps: the more frequent the beeps, the faster the heart is beating. These are used primarily by fitness enthusiasts who need to monitor their heart rate during training. Many heart rate monitors come with a visual display that graphs the heart beat, but when taking part in an activity like running or training for a marathon, the runner would not want to break stride to check their heart rate; sonification allows them to keep to a pace while also monitoring their heart.

In fishing and naval activities, SONAR (SOund Navigation And Ranging) is a technique that has been used since the early 1900s by fishermen and navies as a way of detecting how far away underwater objects such as mines, submarines and wreckage are. A burst of sound, such as a ping, is sent out, and the echo from the nearest object the sound bounces off is listened for [8]. The distance to the object is calculated by multiplying the elapsed time by the speed of sound and dividing by two. This
technology was very useful during World War I and World War II, as it helped detect enemy submarines and underwater mines.

2.1.2 Sonification Techniques

This section will look at the different techniques used in sonification. A key factor in choosing techniques is the study of psychoacoustics, which concerns how sound is perceived. If psychoacoustics is not considered when choosing sounds for sonification, the user may be unable to remember the sounds or differentiate between them [2]. Several factors need to be taken into account when dealing with psychoacoustics, such as the frequency of a sound, which is the number of times the sound wave repeats itself; the pitch and loudness of a sound; and the timbre of a sound, which is its quality [2]. Timbre helps a listener differentiate between two sounds that have the same pitch and loudness; it gives the listener the ability to tell which instrument is the guitar when it is played at the same time as a piano at the same loudness and pitch.

SonicFinder [5] is an application that was developed by Apple in the late 1980s. SonicFinder makes use of auditory icons, which use everyday sounds to represent actions and objects in an interface [2]. SonicFinder is a perfect example of how psychoacoustics is used in sonification. The application assigned sounds to various icons and actions within the interface. Some examples of the sonification techniques used within SonicFinder are: when a folder is clicked in the interface, the sound of paper is played; a wooden sound is assigned to files and metal sounds to applications; when a file or application is opened, a whooshing sound is played [4]; and if the user drags a file across the screen, a scraping sound is returned. Audio was also applied to events that happen within the interface, such as the act of copying a file: the sound of a liquid being poured into a container was used for copying, with the rise in pitch representing the progress of the copy [2].
The sounds used in this application clearly relate to their icons and events; these sounds are unambiguous and represent the actions taken by the user extremely well.

2.1.3 Sonification Tools and Software

There are few sonification tools and software packages available that assist in the application of sonification as a way to display data, and the few sonification toolkits that are available are not cross-platform or rely on specific hardware. MUSART (MUSical Audio transfer function Real-time Toolkit) [15] is a sonification toolkit that uses various audio transfer functions to map sound to datasets. This tool allows a user to explore various datasets through sound. The user can select how they want their dataset to be mapped by selecting pitch, timbre, loudness, frequency and other elements. They can also choose to emphasize certain data items of a set if they have more importance. Software like MUSART is used in the area of seismology, especially in mapping sound to seismic data as a way to detect hidden patterns and anomalies [15] that could be difficult to see by eye. A specific example is the use of MUSART to detect faults in rocks: MUSART applies a drumbeat to the rock data, playing a rapid beat where a fault is detected and slowing the rhythm where no fault is detected. This is
useful for seismologists as it allows them to monitor multiple datasets simultaneously by applying different sound maps to each. MUSART is written in C++, making it a platform-dependent application [15]. Sonification Sandbox [16] is an application written in Java that takes concepts and other techniques from MUSART, but through the Java Virtual Machine it can be used on multiple platforms. Sonification Sandbox allows users to import CSV files from external software like Microsoft Excel and create auditory graphs from these files. Users can map the datasets to dimensions such as timbre and loudness. Graphs can be exported from Sandbox in various formats such as sound files, images and data files. While Sonification Sandbox is a very useful application for creating auditory graphs from datasets, it can also be seen as a stepping stone for further research into the area of creating auditory graphs to represent data.
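At the heart of auditory graphs such as those produced by Sonification Sandbox is a mapping from data values to an audible dimension such as pitch. The following is a minimal sketch of one such mapping, linearly scaling a value into a range of MIDI note numbers; the class and method names are illustrative, not taken from any of the toolkits above.

```java
public class PitchMapper {
    // Linearly map a data value in [min, max] to a MIDI note number in
    // [lowNote, highNote]. MIDI note 60 is middle C; each step is a semitone.
    public static int toMidiNote(double value, double min, double max,
                                 int lowNote, int highNote) {
        double clamped = Math.max(min, Math.min(max, value));
        double fraction = (clamped - min) / (max - min);
        return lowNote + (int) Math.round(fraction * (highNote - lowNote));
    }

    public static void main(String[] args) {
        // Map CPU usage percentages onto two octaves above middle C (60..84).
        System.out.println(toMidiNote(0.0, 0, 100, 60, 84));   // 60
        System.out.println(toMidiNote(50.0, 0, 100, 60, 84));  // 72
        System.out.println(toMidiNote(100.0, 0, 100, 60, 84)); // 84
    }
}
```

A rising data series then produces a rising melodic contour, which is the basic effect an auditory graph relies on.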
Figure 1.
Heart Monitor [14] is an application available to Mac users that exclusively monitors a system's CPU usage, displaying the information both visually and audibly. When the application is running, a moving image of a beating heart is displayed on the screen; the rate at which the heart beats depends on the CPU usage (a faster beat indicates a higher load average). The user also has the choice of turning on sound, so that the beats of the heart can be heard. While this application successfully applies a sonification technique to monitor a part of the system, it is limited in that it only monitors the CPU usage of the system. The application being developed, which is the subject of this report, will implement sonification techniques to monitor various aspects of the system, not just one exclusive aspect.

2.2.2 System Monitoring APIs

An important area researched for this project was the availability of system monitoring APIs that can be used from Java. This is important because the program requires an API whose methods can reliably satisfy its needs: for this project, an API that provides methods to efficiently monitor system aspects such as CPU usage, memory usage and network usage. JSysmon [18] is an open-source Java library that provides access to system monitoring information from a Java application. It is available to download freely from sourceforge.net. At present JSysmon supports implementations only for CPU usage and memory usage, which limits its ability to provide a system monitoring application with enough resources to monitor information from other areas of the system, such as network usage and disc space. There is very little support offered for JSysmon, making it difficult to seek assistance if trouble with the library arises. For this reason it was not used within this project.
The SIGAR (System Information Gatherer And Reporter) API developed by Hyperic [13] is a system monitoring API that can be used across various platforms, including Linux, Windows, Mac OS X and Solaris. It provides implementations for various system data, including CPU usage, memory usage and network usage. It is a powerful API: it is implemented in the C programming language, with bindings to a multitude of languages such as Python, Perl, Ruby and, most importantly for this project, Java. These bindings allow the API to access system information regardless of the platform. There is also ample support available for the SIGAR API, including forums, sample code and mailing lists. SIGAR's versatility and the wide availability of support led to it being used within this project.
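To illustrate the kind of call such an API provides without pulling in the third-party SIGAR jar, the sketch below reads a few basic system figures using the JDK's built-in OperatingSystemMXBean. This is only a dependency-free stand-in: SIGAR supplies far richer, cross-platform data (per-CPU percentages, memory breakdowns, network counters) than the standard bean exposes.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class SystemProbe {
    public static void main(String[] args) {
        // The platform MXBean gives coarse, portable system information.
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        System.out.println("Architecture: " + os.getArch());
        System.out.println("Processors:   " + os.getAvailableProcessors());
        // 1-minute load average; -1.0 if the platform does not provide it.
        System.out.println("Load average: " + os.getSystemLoadAverage());
    }
}
```

A monitoring loop would poll figures like these periodically and hand each reading to the sonification layer.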
used for musical composition. jMusic is suitable for manipulating audio files, for example changing a file's frequency, sample rate and volume. There is a vast amount of support available for this library, including a book written by its co-creator Andrew Brown entitled Making Music with Java [3]. jMusic is a useful resource for manipulating audio files, but it loses clarity when it comes to creating synthesised audio in real time. Originally, audio files were to be used within this project, but after running multiple tests it was decided that they were not suitable; the reasons for this are discussed in the following chapter. JFugue [9] is another open-source Java API, which wraps the Java Sound MIDI methods in an easy-to-use, straightforward library. A Complete Guide to JFugue [12] is a comprehensive user's manual available to download at a small cost from the website; it is full of easy-to-follow examples of how to synthetically generate sounds using a large array of instruments. It also provides examples of how these sounds can be manipulated (volume change, pitch change, harmonies). JFugue's simplicity is its greatest attribute, making it a perfect library for this project.
3 Approach
In order to develop an idea of how to approach the project, an understanding of what should be achieved at the end was needed. The first step in completing the project was to develop a plan that would be used to divide the project into separate tasks and milestones; this was achieved by analysing the project specification and pinpointing key milestones. As this project involved using sound to present information to the user, it was important to first develop a method for playing back audio using Java. Once this was achieved, it would give a better understanding of how to approach the rest of the project.
Figure 2.
Figure 3.
Initial Design
The sound-processing program was designed to access a database of different .wav sound files and manipulate them in multiple ways to reflect changes in system information. After extensive testing carried out on a .wav file, it was decided that using .wav files to present system information was not a sufficient approach. In order to play back a sound file in Java, it needs to be read into a Clip, which is a mixer input into which audio can be loaded into memory prior to playback [17]. Since a sound file needed to be played back repeatedly at very regular intervals, with possible changes to how it is played (increase or decrease in pitch, frequency of playback, volume change), it was found that the program struggled to keep up with the constant changes, as a file would need to be loaded into memory every time it was to be played back. It was from this that it was decided that MIDI would be used to generate sounds in real time, meaning no sounds would need to be pre-loaded into memory.
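The real-time MIDI approach can be sketched with the standard javax.sound.midi classes: a software synthesizer is opened once, and individual notes are then generated on demand, with nothing pre-loaded into memory as a Clip would require. This is only a minimal illustration of the technique, not the project's actual code; the catch clause covers machines with no audio device.

```java
import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.MidiUnavailableException;
import javax.sound.midi.Synthesizer;

public class MidiSketch {
    public static void main(String[] args) throws Exception {
        try {
            // Open the default software synthesizer once, up front.
            Synthesizer synth = MidiSystem.getSynthesizer();
            synth.open();
            MidiChannel channel = synth.getChannels()[0];
            channel.noteOn(60, 80);  // middle C at medium velocity
            Thread.sleep(500);       // let the note sound for half a second
            channel.noteOff(60);
            synth.close();
        } catch (MidiUnavailableException e) {
            // e.g. a headless machine with no audio device
            System.out.println("No synthesizer available: " + e.getMessage());
        }
    }
}
```

Because noteOn takes effect immediately, pitch, velocity and timing can all be varied on every call, which is exactly the flexibility the Clip-based design lacked.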
Figure 4.
Instrument: Piano
In order to gather the CPU information periodically, a timer method called CpuTimer() was created; this method uses Java's Timer class to schedule tasks. The method takes two integer values: the first specifies how long to wait before the timer begins, and the second specifies how long each interval should be. On execution of this timer method, the system's CPU usage is printed out every two seconds along with the chord that accompanies it. Each class in the project uses a similar timer method.
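The scheduling described above can be sketched with java.util.Timer as follows. The class and counter are illustrative stand-ins for the project's CpuTimer(); the two arguments mirror its initial delay and repeat interval, and the task body is where a real CPU reading would be taken and sonified.

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicInteger;

public class CpuTimerSketch {
    static final AtomicInteger samples = new AtomicInteger();

    // Schedule a repeating task: wait delayMs first, then run every periodMs.
    public static Timer startPolling(long delayMs, long periodMs) {
        Timer timer = new Timer(true); // daemon thread, won't block JVM exit
        timer.schedule(new TimerTask() {
            @Override public void run() {
                // In the real application the CPU usage would be read here
                // (e.g. via SIGAR) and the matching chord played.
                System.out.println("sample " + samples.incrementAndGet());
            }
        }, delayMs, periodMs);
        return timer;
    }

    public static void main(String[] args) throws InterruptedException {
        startPolling(0, 200); // short period so the sketch finishes quickly
        Thread.sleep(1000);   // let a few samples print, then exit
    }
}
```

The project's classes would use a 2000 ms period to match the two-second sampling interval described above.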
Instrument: Synth Bass
Note Pattern and Octave: C#-E (Octave 2), C#-E (Octave 3), C#-E (Octave 4), C#-E (Octave 5)
4.6 Uptime
The uptime of a computer is a measure of the total time the computer has been running without any downtime. It is important to give a machine some downtime, as some updates that can be critical to an operating system's performance require reboots. Uptime can also be used to measure a machine's reliability by showing how long it can run without crashing. The uptime class monitors the machine's uptime and displays the time textually within the GUI's text pane. To represent every second passed, a ticking noise is presented to the user; after every minute passed, the sound of a bird chirping is played; and once an hour has passed, a trumpet pattern is played.
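The selection logic behind this scheme is a simple pure function of the elapsed time. The sketch below (names illustrative, not from the project's code) decides which of the three sounds accompanies a given second: the hour takes precedence over the minute, which takes precedence over the plain tick.

```java
public class UptimeSound {
    // Choose which sound accompanies a given elapsed second: a tick every
    // second, a chirp on each full minute, a trumpet pattern on each hour.
    public static String soundFor(long elapsedSeconds) {
        if (elapsedSeconds > 0 && elapsedSeconds % 3600 == 0) return "trumpet";
        if (elapsedSeconds > 0 && elapsedSeconds % 60 == 0)   return "chirp";
        return "tick";
    }

    public static void main(String[] args) {
        System.out.println(soundFor(1));    // tick
        System.out.println(soundFor(60));   // chirp
        System.out.println(soundFor(3600)); // trumpet
    }
}
```

A per-second timer, like the CpuTimer() described earlier, would call this function on each tick and play the returned sound.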
4.7 GUI
In keeping with the project specification, a Graphical User Interface (GUI) was designed to allow users to easily choose which aspect of the system they would like to monitor. The start button initializes the application, prompting the display of the system's information within the text field (see Figure 5).
Figure 5.
Once the application has been initialized, the user can choose which aspect of the system they would like to monitor by selecting its radio button. The radio buttons are all part of a button group, meaning that only one can be selected at any one time. Along with presenting system information through various sonification techniques, the application also displays the information visually within the text field. The GUI also provides a slider for controlling the volume of the application and a mute button to turn the sound off completely.
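The one-at-a-time behaviour comes from Swing's ButtonGroup, which automatically deselects the other radio buttons in the group whenever one is selected. A minimal sketch of this wiring (the labels and helper are illustrative, not the project's actual GUI code):

```java
import javax.swing.ButtonGroup;
import javax.swing.JRadioButton;

public class MonitorSelection {
    // Build three mutually exclusive radio buttons; adding them to one
    // ButtonGroup guarantees only a single aspect is monitored at a time.
    public static JRadioButton[] buildSelector() {
        JRadioButton cpu = new JRadioButton("CPU usage");
        JRadioButton mem = new JRadioButton("Memory usage");
        JRadioButton net = new JRadioButton("Network usage");
        ButtonGroup group = new ButtonGroup();
        group.add(cpu);
        group.add(mem);
        group.add(net);
        cpu.setSelected(true); // default choice
        return new JRadioButton[] { cpu, mem, net };
    }

    public static void main(String[] args) {
        JRadioButton[] buttons = buildSelector();
        buttons[1].setSelected(true); // choosing Memory deselects CPU
        for (JRadioButton b : buttons) {
            System.out.println(b.getText() + ": " + b.isSelected());
        }
    }
}
```

In the real GUI these buttons would be laid out in a panel, and an ItemListener on each would start or stop the corresponding monitoring timer.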
Figure 6.
Figure 6 shows a snapshot of the Big Top application monitoring the CPU utilization of the system. The Y-axis represents the percentage of CPU used and the X-axis represents the time the application has been monitoring for. The blue line represents the System CPU% usage and the green line represents the User CPU% usage, which is the focus of the tests carried out. Big Top was set to sample the CPU utilization every 2 seconds. The image shows that the system's User CPU% rises dramatically after 10 seconds; it is at this point that the YouTube video began to play.
Figure 7.
A similar trend can be seen in Figure 7, which shows the system's User CPU% utilization gathered by the SIGAR-powered Java application developed for this project. Figure 7 shows the same dramatic increase in CPU usage roughly 10 seconds after deployment of the program, and the trend follows the Big Top snapshot nearly identically. The two applications differ slightly in their percentages; this is a result of the rounding off of the CPU% utilization within the Java application. It is clear that the system monitoring application developed for this project is capable of accurately retrieving system information.
Figure 8.
Figure 8 shows the snapshot from Big Top graphing the number of running processes, with the Y-axis representing the number of processes and the X-axis representing the time. The black line represents the number of running processes, the red line the number of sleeping processes, and the blue line the total number of processes within the system, which is the focus of this test. The image shows an increase in the number of processes close to the 25-second mark; at this time a simple text-editing program was deployed, resulting in an increase in running processes. The text-editing program was then terminated roughly 20 seconds later, at which point the image shows a decrease in the number of processes.
Figure 9.
Again the Java system monitoring application was deployed simultaneously with Big Top. The results can be seen in Figure 9. As with the CPU tests, a near-identical trend can be seen in the results from both applications.
5.3 Memory
Again, the same test as in the previous two sections was run to assess the accuracy of the Java system monitor against Big Top. The Java program was set to monitor the system's free memory, as SIGAR doesn't provide an option to monitor wired, active and inactive memory separately. The same testing conditions applied: both Big Top and the Java application were deployed simultaneously and were set to sample every 2 seconds for a period of 1 minute. The results can be seen in Figures 10 and 11.
Figure 10.
Figure 10 shows the system's memory usage information gathered by Big Top; the blue line shows the system's free memory and will be the focus of discussion. The image shows a decrease in free memory at the beginning of deployment, caused by the opening of an application on the system, after which free memory stays consistently around 9.5MB for the duration of the test. A similar pattern can be seen in Figure 11.
Figure 11.
The results from the tests run on the Java system monitor (Figure 11) closely mirror the results from the Big Top test. The initial drop in system memory in Figure 11 may look more dramatic than in Figure 10; this is due to the scaling of the graphs. The trend follows the same pattern as in the Big Top test: the free memory available in the system stays around the 9.5-10MB region, and there are increases and decreases in both results at the same times in the tests.
6.1 Acknowledgments
I wish to acknowledge and thank my supervisor, Dr. Chris Bleakley, whose encouragement, guidance and support from the initial stages to the final product helped me in completing this project.
7 References
[1] Bjango. iStat Menus 3. Bjango. [Online] 2011. http://bjango.com/mac/istatmenus/.
[2] Brewster, S. A. Non-Speech Auditory Output. In: Human-Computer Interaction Handbook. Mahwah: Lawrence Erlbaum Associates, 2002, pp. 220-239.
[3] Brown, A. R. Making Music with Java. Brisbane: Lulu, 2005.
[4] Frohlich. Auditory Computer Human Interaction: An Integrated Approach. Austria: Universität Wien, 2007.
[5] Gaver, W. The SonicFinder: An Interface that Uses Auditory Icons. Human-Computer Interaction, Vol. 4, 1989, pp. 67-94.
[6] Kramer, G. and Walker, B. Sonification Report: Status of the Field and Research Agenda. Santa Fe: International Community for Auditory Display, 1998.
[7] McGee, R. Auditory Displays and Sonification: An Introduction and Overview. California: University of California, 2009.
[8] Ridgesoft. Sonar Made Simple. [Online] 2003. [Cited: 11 April 2011.] http://www.ridgesoft.com/articles/sonar/SonarMadeSimple.pdf.
[9] Koelle, D. JFugue. [Online] 2010. [Cited: 30 March 2011.] http://www.jfugue.org/.
[10] Sorensen, A. and Brown, A. jMusic - Music Composition in Java. [Online] 2009. [Cited: 25 February 2011.] http://jmusic.ci.qut.edu.au/.
[11] Tzelgov, J., et al. Radiation Detection by Ear and Eye. Human Factors, Vol. 29, No. 1, 1987, pp. 87-98.
[12] Koelle, D. A Complete Guide to JFugue: Programming Music in Java. 2008.
[13] Hyperic. SIGAR API (System Information Gatherer And Reporter). [Online] 2010. [Cited: 1 March 2011.] http://www.hyperic.com/products/sigar.
[14] MacUpdate. Heart Monitor 1.3. MacUpdate. [Online] 2010. [Cited: 27 December 2010.]
[15] Joseph, A. J. and Lodha, S. K. MUSART: Musical Audio Transfer Function Real-Time Toolkit. Kyoto: International Conference on Auditory Display, 2002.
[16] Walker, B. and Cothran, J. Sonification Sandbox: A Graphical Toolkit for Auditory Graphs. Boston: International Conference on Auditory Display (ICAD), 2003.
[17] Sun Microsystems. Java Sound API: Programmer's Guide. California: s.n., 1999.
[18] SourceForge. JSysmon. [Online] 2010. [Cited: 5 January 2011.] http://sourceforge.net/projects/jsysmon/.