
CISUC has six research groups:
G1: Cognitive and Media Systems
G2: Adaptive Computation
G3: Software and Systems Engineering
G4: Communications and Telematics
G5: Information Systems
G6: Evolutionary and Complex Systems
The list of PhD proposals is organized by group.

Each proposal has a reference Gx.y (x is the number of the group; y is the number of the proposal within that group).

G1: Cognitive and Media Systems
http://cisuc.dei.uc.pt/csg/

PhD Thesis Proposal: G1.1
Title: Human-readable ATP proofs in Euclidean Geometry
Keywords: Automatic Theorem Proving, Axiomatic Proofs in Euclidean Geometry
Supervisor: Prof. Pedro Quaresma (pedro@mat.uc.pt)
Summary:
Automated theorem proving (ATP) in geometry has two major lines of research: the axiomatic proof style and the algebraic proof style (see [6], for instance, for a survey). Algebraic proof methods are based on reducing geometric properties to algebraic properties expressed in terms of Cartesian coordinates. These methods are usually very efficient, but the proofs they produce do not reflect the geometric nature of the problem and give only a yes/no conclusion. Axiomatic methods attempt to automate traditional geometry proof techniques, which produce human-readable proofs. Building on top of existing ATPs, namely GCLCprover [4, 5, 8, 9, 10], which implements the area method [1, 2, 3, 7, 8, 11], or ATPs dealing with constructions [6], the goal is to build an ATP capable of producing human-readable proofs, with a clean connection between the geometric conjectures and their proofs.
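The yes/no character of the algebraic proof style can be illustrated with a toy check (purely illustrative, not part of GCLCprover): the conjecture that the segment joining the midpoints of two sides of a triangle is parallel to the third side and half its length is verified in Cartesian coordinates with exact rational arithmetic.

```python
from fractions import Fraction as F

def midpoint(p, q):
    """Midpoint of segment pq in Cartesian coordinates."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# A (non-degenerate) triangle with exact rational coordinates.
A = (F(0), F(0))
B = (F(7), F(1))
C = (F(3), F(5))

M = midpoint(A, B)   # midpoint of AB
N = midpoint(A, C)   # midpoint of AC

# Vector MN and vector BC.
mn = (N[0] - M[0], N[1] - M[1])
bc = (C[0] - B[0], C[1] - B[1])

# Parallelism: the cross product vanishes; half-length: 2*MN = +-BC.
parallel = mn[0] * bc[1] - mn[1] * bc[0] == 0
half_length = ((2 * mn[0], 2 * mn[1]) == (bc[0], bc[1])
               or (2 * mn[0], 2 * mn[1]) == (-bc[0], -bc[1]))

print(parallel, half_length)  # True True
```

Note that the check says nothing about *why* the property holds for this (or any other) triangle, which is precisely the shortcoming of algebraic methods that motivates human-readable axiomatic proofs.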

References:
[1] Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong Zhang. Automated production of traditional proofs for constructive geometry theorems. In Moshe Vardi, editor, Proceedings of the Eighth Annual IEEE Symposium on Logic in Computer Science (LICS), pages 48-56. IEEE Computer Society Press, June 1993.
[2] Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong Zhang. Automated generation of readable proofs with geometric invariants, I. Multiple and shortest proof generation. Journal of Automated Reasoning, 17:325-347, 1996.
[3] Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong Zhang. Automated generation of readable proofs with geometric invariants, II. Theorem proving with full-angles. Journal of Automated Reasoning, 17:349-370, 1996.
[4] Predrag Janicic and Pedro Quaresma. Automatic verification of regular constructions in dynamic geometry systems. In Proceedings of ADG 2006, 2006.
[5] Predrag Janicic and Pedro Quaresma. System description: GCLCprover + GeoThms. In Ulrich Furbach and Natarajan Shankar, editors, IJCAR 2006, LNAI. Springer-Verlag, 2006.
[6] Noboru Matsuda and Kurt VanLehn. GRAMY: A geometry theorem prover capable of construction. Journal of Automated Reasoning, 32:3-33, 2004.
[7] Julien Narboux. A decision procedure for geometry in Coq. In Proceedings of TPHOLs 2004, volume 3223 of Lecture Notes in Computer Science. Springer, 2004.
[8] Pedro Quaresma and Predrag Janicic. Framework for constructive geometry (based on the area method). Technical Report 2006/001, Centre for Informatics and Systems of the University of Coimbra, 2006.
[9] Pedro Quaresma and Predrag Janicic. GeoThms - geometry framework. Technical Report 2006/002, Centre for Informatics and Systems of the University of Coimbra, 2006.
[10] Pedro Quaresma and Predrag Janicic. Integrating dynamic geometry software, deduction systems, and theorem repositories. In J. Borwein and W. Farmer, editors, MKM 2006, LNAI. Springer-Verlag, 2006.
[11] Jing-Zhong Zhang, Shang-Ching Chou, and Xiao-Shan Gao. Automated production of traditional proofs for theorems in Euclidean geometry I. The Hilbert intersection point theorems. Annals of Mathematics and Artificial Intelligence, 13:109-137, 1995.

PhD Thesis Proposal: G1.2


Title: Formal languages in knowledge base management
Keywords: Formal language, knowledge base, knowledge base inconsistency, knowledge base management
Supervisor: Prof. Maria de Fátima Gonçalves (mflfag@iscac.pt)
Summary:
Knowledge base management involves the acquisition and normalization of new knowledge, and the confrontation of that knowledge with the existing knowledge, resolving potential conflicts and updating the base. When updating a knowledge base, several problems may arise, among them the redundancy of the updated knowledge and knowledge base inconsistency. Formal languages can be successfully used to develop an innovative system for knowledge base management that resolves these updating problems.
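The two updating problems mentioned can be made concrete for the simplest case, a propositional clause base. The toy checker below (an illustrative sketch, not a proposed design) detects, by brute-force enumeration of truth assignments, whether a new clause would be redundant (already entailed) or would make the base inconsistent.

```python
from itertools import product

# A clause is a set of literals: "p" means p, "-p" means not p.
KB = [{"p"}, {"-p", "q"}]            # the facts p, and p -> q

def atoms(clauses):
    return sorted({lit.lstrip("-") for c in clauses for lit in c})

def holds(clause, model):
    """A clause holds if at least one of its literals is true in the model."""
    return any((not model[lit[1:]]) if lit.startswith("-") else model[lit]
               for lit in clause)

def consistent(clauses):
    """Satisfiability by brute force over all truth assignments."""
    names = atoms(clauses)
    return any(all(holds(c, dict(zip(names, vals))) for c in clauses)
               for vals in product([False, True], repeat=len(names)))

def entails(clauses, clause):
    """clauses |= clause  iff  clauses plus the negated clause is unsatisfiable."""
    negated = [{lit[1:]} if lit.startswith("-") else {"-" + lit} for lit in clause]
    return not consistent(clauses + negated)

print(entails(KB, {"q"}))            # True: adding q would be redundant
print(consistent(KB + [{"-q"}]))     # False: adding not-q makes the base inconsistent
```

A realistic system would of course work over a richer formal language and avoid exponential enumeration, which is where the thesis work would begin.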

PhD Thesis Proposal: G1.3

Title: Image classification and retrieval based on stylistic and aesthetic criteria
Keywords: Content Based Image Retrieval, Computational Aesthetics, Artificial Intelligence
Supervisor: Prof. Penousal Machado (machado@dei.uc.pt)
Summary:
The increasing volume of digital multimedia content, both online and offline, coupled with the unstructured nature of the World Wide Web (WWW), makes the need for appropriate classification and retrieval techniques more pressing than ever. As a result, there is a growing interest in Content Based Image Retrieval (CBIR), clearly demonstrated by the (exponentially) increasing number of research papers in the area. The automatic classification of images according to stylistic and aesthetic criteria would allow: image browsers and search engines to take into account the user's aesthetic preferences; online artwork sites to tailor their offer to match the implicit preferences revealed by the previous purchases of a specific user; online museums to reorganize their exhibitions according to user preferences, thus offering personalized virtual tours; and digital cameras to make suggestions regarding photographic composition. Additionally, this type of system could be used to automatically index image databases or even, if coupled with an image generation system, to create images of a particular style or possessing certain aesthetic qualities.

As the title indicates, the main goal of this thesis is the development of techniques for stylistic and aesthetic image classification and retrieval, which is a relatively unexplored area. Focusing on our own research efforts, in [1] we used a subset of the features proposed in this project to train an ANN classifier for author identification. To the best of our knowledge, [2] was the first computational system that dealt with aesthetic classification and/or evaluation tasks. Works such as [3, 4, 5] explore the use of an autonomous image classifier in the context of evolutionary art. This thesis will be conducted in the Cognitive and Media Systems Group of CISUC, in close collaboration with the RNASA Laboratory of the University of A Coruña, and is an integral part of the research project TIN2008-06562/TIN. A one-year renewable scholarship is available.

References:
1. Machado, P., Romero, J., Ares, M., Cardoso, A., and Manaris, B., "Adaptive Critics for Evolutionary Artists", 2nd European Workshop on Evolutionary Music and Art, Coimbra, Portugal, April 2004.
2. Machado, P. and Cardoso, A., "Computing Aesthetics", in Oliveira, F., editor, XIVth Brazilian Symposium on Artificial Intelligence (SBIA'98), LNAI Series, Porto Alegre, Brazil, Springer, 1998, pp. 219-294.
3. Machado, P., Romero, J., Cardoso, A., and Santos, A., "Partially Interactive Evolutionary Artists", New Generation Computing, Special Issue on Interactive Evolutionary Computation (H. Takagi, ed.), January 2005.
4. Machado, P., Romero, J., Santos, A., Cardoso, A., and Pazos, A., "On the Development of Evolutionary Artificial Artists", Computers & Graphics, Vol. 31, No. 6, pp. 818-826, Elsevier, December 2007.
5. Machado, P., Romero, J., and Manaris, B., "Experiments in Computational Aesthetics", in Romero, J. and Machado, P., editors, The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music, Natural Computing Series, Springer, 2007, pp. 381-415.
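Classifiers of the kind cited above start from low-level image features. As a minimal sketch (a hypothetical feature pair, not the feature set proposed in this project), the code below computes two features of a grayscale image stored as a 2D list: average brightness, and a compression-based complexity estimate in the spirit of complexity measures used in computational aesthetics.

```python
import zlib

def features(image):
    """image: 2D list of grayscale values in 0..255 -> (brightness, complexity)."""
    pixels = [p for row in image for p in row]
    brightness = sum(pixels) / len(pixels)
    raw = bytes(pixels)
    # Ratio of compressed to raw size: flat images compress well (low
    # complexity), noisy images poorly (high complexity).
    complexity = len(zlib.compress(raw, 9)) / len(raw)
    return brightness, complexity

flat = [[128] * 64 for _ in range(64)]
noisy = [[(i * 37 + j * 101) % 256 for j in range(64)] for i in range(64)]

b_flat, c_flat = features(flat)
b_noisy, c_noisy = features(noisy)
print(c_flat < c_noisy)  # the uniform image is less complex -> True
```

A real system would feed many such features (color, texture, edge statistics, fractal dimension, etc.) to a trained classifier such as an ANN.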

PhD Thesis Proposal: G1.4


Title: Artificial Intelligence Approaches to Artistic Creativity
Keywords: Artificial Intelligence, Computational Art
Supervisor: Prof. Penousal Machado (machado@dei.uc.pt)
Summary:
Artistic behavior is one of the most celebrated qualities of the human mind. Although artistic manifestations vary from culture to culture, dedication to artistic tasks is common to all. In other words, artistic behavior is a universal trait of the human species [1]. The current, Western definition of art is relatively new. However, dedication to artistic endeavors such as the embellishment of tools, body ornamentation, or the gathering of unusual, arguably aesthetic, objects can be traced back to the origins of humanity. That is, art is ever-present in human history and prehistory [2]. In the words of Leonardo da Vinci, "Art is the Queen of all sciences, communicating knowledge to all the generations of the world." We consider that creativity, emotion, the perception of beauty, and artistic behavior are fundamental aspects of intelligence and that, as such, Artificial Intelligence approaches that ignore these aspects miss an important part of what makes us human. Additionally, "if machines could understand and affect our perceptions of beauty and happiness, they could touch people's lives in fantastic new ways" (Hugo Liu). As the title indicates, this thesis seeks: to grasp a deeper understanding of artistic creative behavior; to study and develop models that may capture essential aspects of beauty, emotional response, and creativity; and, ultimately, to develop intelligent agents that implement these models. To pursue this goal, the candidate will be integrated in the multifaceted team of researchers of the Cognitive and Media Systems Group of CISUC, which possesses vast experience in fields such as Computational Aesthetics, Evolutionary Computation, Artificial Neural Networks, Music Information Retrieval, Creative Systems, and Computational Art.

References:
1. Romero, J. and Machado, P., editors, The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music, Natural Computing Series, Springer, 2007.
2. Dissanayake, E., Homo Aestheticus, University of Washington Press, 1995.
3. Machado, P., Romero, J., and Manaris, B., "Experiments in Computational Aesthetics", in Romero, J. and Machado, P., editors, The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music, Natural Computing Series, Springer, 2007, pp. 381-415.

PhD Thesis Proposal: G1.5

Title: Self-Adaptation and Evolution of Bio-Inspired Algorithms
Keywords: Adaptation, Evolution, Bio-Inspired Algorithms, Complexity Science
Supervisors: Prof. Penousal Machado (machado@dei.uc.pt), Dr. Jorge Tavares (jorge.tavares@ieee.org)
Summary:
In spite of some performance improvements in biologically inspired techniques, such as evolutionary algorithms and swarm intelligence, biological knowledge has advanced faster than our ability to incorporate novel ideas from the life sciences into these methods. As such, and given that nature has been an inspiration for several kinds of optimization and learning algorithms, we consider that it remains a source of improvements and new techniques. Usually, in order to achieve competitive results, the development of problem-specific operators and representations, as well as parameter fine-tuning, is required [1, 2]. As a result, much of the research practice in Bio-Inspired Algorithms focuses on these aspects. Following the research work done on this topic [3, 4], this thesis should contribute to the study and design of nature-inspired methods that can adapt themselves to the problem they are solving [3-5]. The evolution of components such as representations, operators, and parameters [3] may contribute to performance improvements, give insight into the idiosyncrasies of particular problems, alleviate the burden on researchers when designing bio-inspired algorithms, and push the frontiers of problem solving.
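Self-adaptation of parameters, as surveyed in [2], can be illustrated with a minimal (1+1) evolution strategy in which the mutation step size evolves together with the solution. This is a textbook-style sketch (the problem, constants, and dimensions are arbitrary), not code from the referenced works.

```python
import math
import random

random.seed(1)

def sphere(x):
    """Objective to minimize; global optimum f(0, ..., 0) = 0."""
    return sum(v * v for v in x)

def one_plus_one_es(dim=3, generations=3000, tau=0.3):
    """(1+1) evolution strategy with self-adaptive mutation step size."""
    x = [random.uniform(-5, 5) for _ in range(dim)]
    sigma = 1.0                        # strategy parameter, evolved alongside x
    best = sphere(x)
    for _ in range(generations):
        # Self-adaptation: perturb the step size first (log-normal update)...
        child_sigma = sigma * math.exp(tau * random.gauss(0, 1))
        # ...then mutate the candidate solution with the child's own step size.
        child = [v + child_sigma * random.gauss(0, 1) for v in x]
        f = sphere(child)
        if f <= best:                  # plus-selection: keep the better of the two
            x, sigma, best = child, child_sigma, f
    return best

result = one_plus_one_es()
print(result)
```

The same idea, evolving operators and representations rather than a single step size, is what the thesis proposes to investigate [3-5].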

References:
1. Eiben, A.E. and Smith, J.E., Introduction to Evolutionary Computing, Natural Computing Series, Springer, 2007.
2. Beyer, H.-G. and Meyer-Nieberg, S., "Self-Adaptation in Evolutionary Algorithms", in Lobo, F., Lima, C., and Michalewicz, Z., editors, Parameter Setting in Evolutionary Algorithms, pp. 47-75, Springer, Berlin, 2007.
3. Tavares, J., Machado, P., Cardoso, A., Pereira, F. B., and Costa, E., "On the Evolution of Evolutionary Algorithms", in Proceedings of EuroGP 2004, 7th European Conference on Genetic Programming, Coimbra, Portugal, April 2004.
4. Machado, P., Tavares, J., Cardoso, A., Pereira, F. B., and Costa, E., "Evolving Creativity", Computational Creativity Workshop, 7th European Conference on Case-Based Reasoning, Madrid, August 2004.
5. Oltean, M., "Evolving Evolutionary Algorithms Using Linear Genetic Programming", Evolutionary Computation, MIT Press, 13(3):387-410, 2005.

PhD Thesis Proposal: G1.6


Title: Case-Based Hierarchical Task Network Planning
Keywords: Planning, Case-based Planning, Decision-theoretic Planning
Supervisor: Prof. Luís Macedo (macedo@dei.uc.pt)
Summary:
Hierarchical Task Network (HTN) planning is a planning methodology that is more expressive than STRIPS-style planning. Given a set of tasks that need to be performed (the planning problem), the planning process decomposes them into simpler subtasks until primitive tasks, or actions that can be directly executed, are reached. Methods provided by the domain theory indicate how tasks are decomposed into subtasks. However, for many real-world domains it is hard to collect methods that completely model the generation of plans. For this reason, an alternative approach based on cases of methods has been taken, in combination with methods. Real-world domains are usually dynamic and uncertain. In these domains, actions may have several outcomes, some of which may be more valuable than others. Planning in such domains requires special techniques for dealing with uncertainty. Indeed, this has been one of the main concerns of planning research in recent years, and several decision-theoretic planning approaches have been proposed and used successfully, some based on extensions of classical planning and others on Markov Decision Processes. In these decision-theoretic planning frameworks, actions are usually probabilistic conditional actions, preferences over the outcomes of actions are expressed in terms of a utility function, and plans are evaluated in terms of their expected utility. The main goal is to find the plan, or set of plans, that maximizes the expected utility, i.e., to find the optimal plan.

In this thesis, a planner that combines decision-theoretic planning with the HTN methodology should be built in order to deal with uncertain, dynamic, large-scale real-world domains [Macedo & Cardoso, 2004]. Unlike in regular HTN planning, methods for task decomposition should not be used, but instead cases of plans. The planner should generate a variant of an HTN (a kind of AND/OR tree of probabilistic conditional tasks) that expresses all the possible ways to decompose an initial task network.

References:
Macedo, L. and Cardoso, A. (2004). "Case-Based, Decision-Theoretic, HTN Planning". In Calero, P. and Funk, P., editors, Advances in Case-Based Reasoning: Proceedings of the 7th European Conference on Case-Based Reasoning, Berlin, Springer, pp. 257-271.
Macedo, L. The Exploration of Unknown Environments by Affective Agents. PhD Thesis, 2006.
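The two ingredients combined above, HTN decomposition and expected-utility evaluation, can be sketched in a few lines. The toy below is purely illustrative (the domain, method names, and numbers are made up, and state, preconditions, and case retrieval are ignored): tasks are recursively decomposed via methods, and the resulting primitive plan is scored by the expected utility of its probabilistic actions.

```python
# Methods: task -> ordered list of subtasks; tasks absent here are primitive.
METHODS = {
    "deliver": ["load", "travel", "unload"],
    "travel":  ["drive"],
}

# Each primitive action has probabilistic outcomes: (probability, utility).
OUTCOMES = {
    "load":   [(1.0, -1)],
    "drive":  [(0.8, -5), (0.2, -20)],   # e.g. a 20% chance of heavy traffic
    "unload": [(1.0, -1)],
}

def decompose(task):
    """Recursively expand a task into a sequence of primitive actions."""
    if task not in METHODS:
        return [task]
    plan = []
    for sub in METHODS[task]:
        plan.extend(decompose(sub))
    return plan

def expected_utility(plan):
    return sum(sum(p * u for p, u in OUTCOMES[a]) for a in plan)

plan = decompose("deliver")
print(plan)                      # ['load', 'drive', 'unload']
print(expected_utility(plan))    # -1 + (0.8*-5 + 0.2*-20) + -1 = -10.0
```

The proposed planner would replace the METHODS table with retrieved cases of plans and keep all alternative decompositions in an AND/OR structure, choosing the one with maximal expected utility.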

PhD Thesis Proposal: G1.7


Title: Collaborative Multi-Agent Exploration of 3-D Dynamic Environments
Keywords: Exploration, Multi-agent Systems
Supervisor: Prof. Luís Macedo (macedo@dei.uc.pt)
Summary:
Exploration is the gathering of information about the unknown. The exploration of unknown environments by artificial agents (usually mobile robots) has been an active research field [Macedo & Cardoso, 2004]. Exploration domains include planetary exploration (e.g., Mars or lunar exploration), the search for meteorites in Antarctica, volcano exploration, map-building of interiors, etc. Several exploration techniques have been proposed and tested in both simulated and real, indoor and outdoor, environments, using single or multiple agents. The main advantage of multi-agent approaches is avoiding having two or more agents cover the same area. However, there is still much to be done, especially in dynamic environments such as those mentioned above. Moreover, real environments consist of objects. For example, office environments contain chairs, doors, garbage cans, etc., and cities comprise several kinds of buildings (houses, offices, hospitals, churches, etc.), cars, and so on. Many of these objects are non-stationary, that is, their locations may change over time. This observation motivates research on a new generation of mapping algorithms that represent environments as collections of objects. At a minimum, such object models would enable a robot to track changes in the environment. For example, a cleaning robot entering an office at night might realize that a garbage can has moved from one location to another. It might do so without the need to learn a model of the garbage can from scratch, as would be necessary with existing robot mapping techniques.

This thesis addresses the problem of finding multi-agent strategies for the collaborative exploration of unknown, 3-D, dynamic environments. The strategy or strategies should be tested against other exploration strategies found in the literature.

References:
Macedo, L. The Exploration of Unknown Environments by Affective Agents. PhD Thesis, 2006.
Macedo, L. and Cardoso, A. (2004). "Exploration of Unknown Environments with Motivational Agents". In Jennings, N. and Tambe, M., editors, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, New York, IEEE Computer Society, pp. 328-335.
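A classic baseline from the literature against which new strategies are usually compared is frontier-based exploration with greedy agent-to-frontier assignment. The sketch below is a simplified 2-D illustration with a made-up grid (the thesis targets 3-D dynamic settings): it finds frontier cells, i.e. free cells adjacent to unknown ones, and sends each agent to its nearest unclaimed frontier, which directly implements the "avoid covering the same area twice" idea.

```python
# Occupancy grid: '.' free (explored), '?' unknown, '#' obstacle.
GRID = [
    "..?..",
    ".#?..",
    "..???",
]

def frontiers(grid):
    """Free cells with at least one unknown 4-neighbour."""
    cells = []
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            if v != ".":
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < len(grid) and 0 <= cc < len(row) and grid[rr][cc] == "?":
                    cells.append((r, c))
                    break
    return cells

def assign(agents, cells):
    """Greedily send each agent to its nearest unclaimed frontier cell."""
    free = set(cells)
    plan = {}
    for a, (r, c) in agents.items():
        if not free:
            break
        target = min(free, key=lambda f: abs(f[0] - r) + abs(f[1] - c))
        plan[a] = target
        free.discard(target)   # no two agents head for the same frontier
    return plan

agents = {"A": (0, 0), "B": (2, 0)}
print(frontiers(GRID))
print(assign(agents, frontiers(GRID)))
```

Dynamic and object-aware environments, the focus of the thesis, break the assumptions of this baseline: frontiers reappear when objects move, which is exactly why richer strategies are needed.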

PhD Thesis Proposal: G1.8


Title: Simulating Consciousness in Computers
Keywords: Consciousness, Affect, Emotion
Supervisor: Prof. Luís Macedo (macedo@dei.uc.pt)
Summary:
Consciousness is a characteristic of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment. Some researchers attempt to explain consciousness directly in neurophysiological or physical terms, while others offer cognitive theories of consciousness whereby conscious mental states are reduced to some kind of representational relation between mental states and the world. There are a number of such representational theories of consciousness currently on the market, including higher-order theories, which hold that what makes a mental state conscious is that the subject is aware of it in some sense. We generally agree that human beings are conscious and that much simpler life forms, such as bacteria, are not. Many of us attribute consciousness to higher-order animals such as dolphins and primates, and academic research is investigating the extent to which animals are conscious. This suggests the hypothesis that consciousness has co-evolved with life, which would require it to have some sort of added value, especially survival value. People have therefore looked for specific functions and benefits of consciousness. Bernard Baars (1997), for instance, states that "consciousness is a supremely functional adaptation" and suggests a variety of functions in which consciousness plays an important, if not essential, role: prioritization of alternatives, problem solving, decision making, recruiting of brain processes, action control, error detection, planning, learning, adaptation, context creation, and access to information. António Damásio (1999) regards consciousness as part of an organism's survival kit, allowing planned rather than instinctual responses. He also points out that awareness of self allows a concern for one's own survival, which increases the drive to survive, although how far consciousness is involved in behaviour is an actively debated issue. The possibility of machine (or robot) consciousness has intrigued philosophers and non-philosophers alike for decades. Could a machine really think or be conscious? Could a robot really subjectively experience the smelling of a rose or the feeling of pain? Several tests, such as those based on the Turing Test, have been developed which attempt to provide an operational definition of consciousness and to determine whether computers and other non-human animals can demonstrate, through their behavior, by passing these tests, that they are conscious. The goal of this thesis is to simulate consciousness in computers based on one or more theories of human consciousness.

PhD Thesis Proposal: G1.9


Title: Bridging the Gap Between Web 2.0 Collaborative Environments and the Semantic Web
Keywords: Ontologies, Semantic Web, Web 2.0, Collaborative and Social Environments
Supervisor: Prof. Paulo Gomes (pgomes@dei.uc.pt)
Summary:
New kinds of highly popular user-centered applications, such as blogs, folksonomies, and wikis, have come to be known as "Web 2.0". The reason for their immediate success is the fact that no specific skills are needed to participate. These tools do not only provide data but also generate a lot of weakly structured metadata. One perfect example is tagging: users add tags to a resource, and these tags can be seen as a kind of metadata, describing the resource from the user's point of view. Such metadata is easy to produce, but it lacks the kind of formal grounding used in the Semantic Web. On the other hand, the Semantic Web complements the described bottom-up effort of the Web 2.0 community in a top-down manner, as one of its central points is a fixed vocabulary, typed relations, and a stronger knowledge representation based on some kind of ontology. Such structure is typically something users have in mind when they provide their information, but for researchers it is hidden in the data and needs to be extracted. Techniques to analyze network structures or weak knowledge representations like those found in Web 2.0 have a long tradition in other disciplines, such as social network analysis, machine learning, and data mining. These kinds of automatic mechanisms are necessary to extract the hidden information and to reveal the structure in a way that the Semantic Web community can benefit from, thus providing added value to the end user. Conversely, the established ways of representing knowledge gained from unstructured data can be beneficial for Web 2.0, in that they provide Web 2.0 users with enhanced Semantic Web features to structure their data. The aim of this thesis is to bridge the gap between the Semantic Web and Web 2.0 environments. Since both ideas have in common the improvement of search and semantics in the web, the combination of these techniques is an important step towards the more intelligent web that Tim Berners-Lee envisioned [Berners-Lee, T., Hendler, J., and Lassila, O., "The Semantic Web", Scientific American, 2001, 284(5), pp. 34-43]. Techniques can be, but are not limited to, social network analysis, graph analysis, machine learning, ontology learning, text mining, and web mining methods [Mika, P., "Ontologies are us: A unified model of social networks and semantics", Journal of Web Semantics, 5(1), pp. 5-15, 2007].
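A first step in several of the cited approaches (e.g. Mika's model of social networks and semantics) is to mine structure out of raw tag assignments. The sketch below is an illustrative toy, not Mika's actual algorithm: it builds tag co-occurrence counts from hypothetical user-resource-tag triples and uses them to suggest related tags, a weak form of the emergent semantics the summary describes.

```python
from collections import Counter
from itertools import combinations

# (user, resource, tag) assignments, as produced by a tagging system.
ASSIGNMENTS = [
    ("ana", "r1", "semanticweb"), ("ana", "r1", "ontology"),
    ("rui", "r1", "rdf"),         ("rui", "r2", "ontology"),
    ("rui", "r2", "owl"),         ("eva", "r2", "ontology"),
]

def cooccurrence(assignments):
    """Count how often two tags label the same resource."""
    by_resource = {}
    for _, resource, tag in assignments:
        by_resource.setdefault(resource, set()).add(tag)
    pairs = Counter()
    for tags in by_resource.values():
        for a, b in combinations(sorted(tags), 2):
            pairs[(a, b)] += 1
    return pairs

def related(tag, pairs):
    """Tags co-occurring with `tag`, most frequent first."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == tag:
            scores[b] += n
        elif b == tag:
            scores[a] += n
    return [t for t, _ in scores.most_common()]

pairs = cooccurrence(ASSIGNMENTS)
print(related("ontology", pairs))
```

Lifting such co-occurrence statistics into typed, formally grounded relations is precisely the gap this thesis proposes to bridge.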

PhD Thesis Proposal: G1.10


Title: Intelligent Knowledge Management using the Semantic Web
Keywords: Semantic Web, Knowledge Management, Artificial Intelligence, Ontologies
Supervisor: Prof. Paulo Gomes (pgomes@dei.uc.pt)
Summary:
Nowadays, companies gather and store large amounts of information in databases. This information is potentially high-value knowledge for a company, but most of it is never transformed into knowledge, remaining lost in databases or document repositories. Software development is a knowledge-intensive activity involving several types of know-how and skills. Development teams usually have several members, which makes the sharing and dissemination of knowledge crucial for project success. One evolving technology that can be used to build knowledge management tools for software development is the Semantic Web. Semantics is the missing link between information/data and knowledge, and the Semantic Web provides the infrastructure needed to make true knowledge sharing possible. The Semantic Web is an infrastructure that associates semantics with words in web resources; by itself, however, it does not provide a tool for knowledge management. What is needed are tools that enable the usage of the Semantic Web in an intelligent way, so that users can take advantage of knowledge sharing. The main problem to be dealt with in this thesis is how a team of software engineers can be aided by a tool, or a set of tools, that enables them to reuse knowledge more efficiently, thus increasing their productivity.

The main objective of this thesis is to develop a set of tools based on the Semantic Web. These tools are intended to have a set of intelligent characteristics, such as learning, proactive reasoning, semantic search and retrieval of knowledge, knowledge representation, knowledge acquisition, and personalization. Several reasoning methods developed in Artificial Intelligence are ideal candidates for use in this research work. Among the expected results of the research are new algorithms and methodologies for knowledge management.

PhD Thesis Proposal: G1.11


Title: A Markov Logic Reasoning Engine for the Semantic Web
Keywords: Semantic Web, Markov Logic, Ontologies
Supervisor: Prof. Paulo Gomes (pgomes@dei.uc.pt)
Summary:
A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula, and it can be viewed as a template for constructing Markov networks. From the point of view of probability, MLNs provide a compact language to specify very large Markov networks, and the ability to flexibly and modularly incorporate a wide range of domain knowledge into them. From the point of view of first-order logic, MLNs add the ability to soundly handle uncertainty, tolerate imperfect and contradictory knowledge, and reduce brittleness. Many important tasks in statistical relational learning, such as collective classification, link prediction, link-based clustering, social network modeling, and object identification, are naturally formulated as instances of MLN learning and inference. The Semantic Web is an infrastructure that associates semantics with words in web resources. The foundations of the Semantic Web are ontologies and description logics, which by themselves do not deal well with uncertainty. MLNs have the ability to deal with both worlds: logic and statistics. The main problem to be dealt with in this thesis is how to build a reasoning engine for the Semantic Web infrastructure using MLNs. The applications of such a reasoning engine are immense, from natural language processing to text mining and web mining, to the core of knowledge management applications.
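The MLN semantics sketched above can be made concrete with a tiny ground example (a hypothetical two-constant domain with made-up weights): the probability of a possible world is proportional to exp of the sum of the weights of the ground formulas it satisfies. Real MLN engines use lifted or approximate inference; the brute-force enumeration below is only illustrative.

```python
import math
from itertools import product

ATOMS = ["smokes_a", "cancer_a", "smokes_b", "cancer_b"]

# Weighted ground formulas: (weight, truth function over a world).
FORMULAS = [
    (1.5, lambda w: (not w["smokes_a"]) or w["cancer_a"]),  # smokes(a) => cancer(a)
    (1.5, lambda w: (not w["smokes_b"]) or w["cancer_b"]),  # smokes(b) => cancer(b)
    (2.0, lambda w: w["smokes_a"]),                          # soft evidence: smokes(a)
]

def probability(query):
    """P(query) by summing unnormalized weights over all 2^n worlds."""
    num = den = 0.0
    for values in product([False, True], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, values))
        weight = math.exp(sum(w for w, f in FORMULAS if f(world)))
        den += weight
        if query(world):
            num += weight
    return num / den

p = probability(lambda w: w["cancer_a"])
print(round(p, 3))
```

Note how the weighted implication raises P(cancer(a)) above one half without forcing it to 1, which is the "soft" handling of imperfect knowledge that hard description-logic axioms lack.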

PhD Thesis Proposal: G1.12


Title: Semantic Mining of Software Resources
Keywords: Semantic Mining, Web Mining, Semantic Web, Software Reuse, Knowledge Management
Supervisor: Prof. Paulo Gomes (pgomes@dei.uc.pt)
Summary:
Semantic Mining (Berendt et al. 2002; Stumme et al. 2006) combines Semantic Web technologies with Web Mining in a way that both contribute to the enrichment and discovery of new knowledge. The two areas can be combined in the following ways: extracting semantics from the Web; using semantics for web mining; and mining the Semantic Web. Extracting semantics from the Web requires the use of Web Mining techniques to extract the semantics present in page content and structure, by learning, mapping, and merging ontologies on the Web. Another way to extract semantics from the web is Web Usage Mining, which explores user navigation paths and actions to infer new knowledge. Using semantics for Web Mining comprises the improvement of Web Mining results by exploiting ontologies and other semantic structures present in the Semantic Web. This can be especially important for sharing knowledge among communities in the same scientific area, thus making a real web of knowledge. Mining the Semantic Web is also a way of gathering and finding new knowledge from an already organized structure, which can be important at a more abstract and complex level of reasoning. All of these techniques can be used at a local level, such as a project intranet, or in a particular knowledge domain. The aim of this thesis is to use Semantic Mining to explore and enhance the knowledge associated with software development, so that it can be shared within an organization that develops software. The main idea is to apply Semantic Mining techniques to software repositories, so that new knowledge is extracted, stored, and indexed, to be reused by software engineers in the development of new software or the maintenance of already developed systems.
References:
Berendt, B., Hotho, A., and Stumme, G. (2002). "Towards Semantic Web Mining". In Horrocks, I. and Hendler, J.A., editors, The Semantic Web: Proceedings of the First International Semantic Web Conference, Springer, pp. 264-278.
Stumme, G., Hotho, A., and Berendt, B. (2006). "Semantic Web Mining: State of the Art and Future Directions". Web Semantics: Science, Services and Agents on the World Wide Web, 4(2), pp. 124-143, June 2006.

PhD Thesis Proposal: G1.13


Title: Algorithms for Semantic Annotation of Positioning Information
Keywords: locations, places, positioning systems, location-based services
Supervisors: Prof. Francisco Câmara (camara@dei.uc.pt)
Prof. Carlos Bento (bento@dei.uc.pt)

Summary:
Although we find today a myriad of positioning technologies (from common GPS to Wireless, GSM cell, or Ultra-Wide-Band positioning algorithms), interpreting what a position actually means is still cumbersome. For example, the information that we are at longitude 4.234W and latitude 30.123N, or that "my current GSM cell ID is 1098", is poor in terms of meaning for a user. Information such as "I am in Morocco", "my current location is in Coimbra", or "I am at work" is clearly richer and more useful for a wealth of applications. This is known as the "From Position to Place" problem (Hightower, 2003) and is currently a hot topic in the Ubiquitous Computing area. The primary goal of this PhD project is to study and develop methodologies that can contribute to solving this problem. The expected approach will likely take into account the user model, context, and social interaction. This work is one of the central research topics of the Ubiquitous Systems Group of the AILab and has a high potential for applicability in a range of state-of-the-art ubiquitous systems.
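A naive baseline for the position-to-place mapping can make the problem concrete (the place list, coordinates, and radius below are made up, and real approaches would exploit user model and context rather than plain proximity): label a coordinate with the nearest known place within some radius, using the haversine distance.

```python
import math

# Hypothetical labelled places: name -> (latitude, longitude) in degrees.
PLACES = {
    "home": (40.2100, -8.4290),
    "work": (40.1860, -8.4127),
    "gym":  (40.2055, -8.4180),
}

def haversine_m(p, q):
    """Great-circle distance between two (lat, lon) points, in metres."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def position_to_place(position, max_m=150):
    """Nearest labelled place within max_m metres, else None."""
    name, dist = min(((n, haversine_m(position, p)) for n, p in PLACES.items()),
                     key=lambda t: t[1])
    return name if dist <= max_m else None

print(position_to_place((40.1862, -8.4130)))   # a few tens of metres from "work"
print(position_to_place((41.0, -8.6)))         # far from every known place
```

The hard part, and the subject of the thesis, is everything this baseline ignores: where the labels come from, how ambiguity is resolved (overlapping places, vertical position in buildings), and how context changes the answer.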

PhD Thesis Proposal: G1.14


Title: "YouTrace: Collaborative Map Generation"
Keywords: Ubiquitous Computing, Map Making, Map Matching, GPS Traces, Intelligent Transport Systems
Supervisors: Francisco Câmara Pereira and Ana Almeida
Summary:
In the YouTrace project, we propose to develop a social networking platform for sharing localization (GNSS) traces. Drawing inspiration from well-known Web 2.0 applications (e.g., Wikipedia, YouTube), the vision is to provide a platform where users voluntarily share their localization traces in order to get services that improve their quality of life and allow for social interaction. Such a platform must be responsible for aggregating those traces into a collaborative map of the world, providing social networking services, and adding intelligent data analysis tools to support decision making, both at the level of the individual user and of the urban transport policy maker.

This PhD thesis should focus on the development of efficient algorithms for the aggregation, updating, filtering, and map matching of the incoming GPS traces. Work on these algorithms has already started in CMS, and the incoming PhD student will be integrated in a highly motivated team.
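Map matching, one of the four algorithm families mentioned, snaps noisy GPS points to the road network. The sketch below is illustrative only (planar coordinates instead of lat/lon, and a made-up two-segment network): each point is matched to the nearest road segment by perpendicular projection.

```python
def project(p, a, b):
    """Closest point to p on segment ab, and its squared distance (planar)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))               # clamp to the segment
    qx, qy = ax + t * dx, ay + t * dy
    return (qx, qy), (px - qx) ** 2 + (py - qy) ** 2

# Hypothetical road network: named segments in planar coordinates.
ROADS = {
    "main_st":  ((0.0, 0.0), (10.0, 0.0)),
    "cross_st": ((5.0, -5.0), (5.0, 5.0)),
}

def map_match(trace):
    """For each GPS point, the nearest road segment and snapped position."""
    matched = []
    for p in trace:
        name, (q, _) = min(
            ((n, project(p, *seg)) for n, seg in ROADS.items()),
            key=lambda item: item[1][1])
        matched.append((name, q))
    return matched

trace = [(1.0, 0.4), (4.0, 0.3), (5.2, 3.0)]
for name, q in map_match(trace):
    print(name, q)
```

Production map matchers additionally exploit trace continuity (e.g. with hidden Markov models) so that a point near an intersection is not snapped to the wrong street, which is one of the efficiency/quality trade-offs this thesis would study.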

PhD Thesis Proposal: G1.15


Title: Individual Mobility Optimization from Trace Analysis
Keywords: Spatial Data Analysis; GPS Traces; Artificial Intelligence; Intelligent Transport Systems
Supervisor: Prof. Francisco Câmara Pereira
Summary: Current widespread use of GPS (and other localization technology) receivers is leading to the generation of very large amounts of movement data. Due to privacy and security concerns, it is understandable that such data is scarcely available unless good added value is provided to the contributor. Starting with individual uses that guarantee privacy while still allowing value-added services (for example, a person who keeps his/her own movement traces could analyze the efficiency of his/her mobility in terms of time/cost/fuel spent), this thesis should focus on the study of algorithms for the analysis of movement traces. After the algorithms focusing on individual use, the work should also address the analysis of aggregated sets of traces, enabling, for example, the extraction of movement patterns in the city.

PhD Thesis Proposal: G1.16


Title: Data Analysis in the City
Keywords: Spatial Data Analysis; Data Fusion; Intelligent Transport Systems
Supervisor: Prof. Francisco Câmara Pereira
Summary: Within the CityMotion project (a collaboration with MIT, IST and FEUP), a number of different kinds of data inputs from the city are expected to be available, for example from taxi fleets, cell phone usage and traffic detectors. This thesis intends to focus on the search for algorithms that extract new information out of this data, particularly information that can only be obtained by the fusion of two or more of these sources. The student will learn and work with Spatial Data Analysis algorithms, Data Fusion techniques, and any other options that seem promising.

PhD Thesis Proposal: G1.17


Title: Alternative specification and visualization representations in initial programming learning
Keywords: computer science education; programming learning; alternative representations.

Supervisor: Prof. Maria José Marcelino (zemar@dei.uc.pt) Summary:


The main objective of this thesis is to study, propose and validate, on one hand, new alternative forms of representation for algorithm and program specification and, on the other, new alternative ways of algorithm and program visualization to support initial programming learning, and to evaluate their impact on the quality of the students' learning. Initial programming learning is quite hard for the majority of students. It is usually supported by one (or more) of three typical modes of algorithm/program representation: pseudo-code, flowcharts and code in a specific programming language. Concerning algorithm/program visualization, several approaches have also been used: variable logs, debugging aids, simulated algorithm/program animation. Each student has her/his own preferences regarding these representation and visualization metaphors. There are particular types of programming problems that are mandatory in initial programming learning and for which typical student solutions (good as well as erroneous) have been identified. We believe that, although final programs must be coded in one particular programming language, during initial learning stages many programming students could benefit from the study and implementation of diverse alternative solution representations and visualizations, especially if they are closer to the students' previous experience and context. In the scope of this thesis, students' preferred alternative representations, both at the level of algorithm and program specification and of results visualization, will be identified and evaluated. Afterwards, new forms will be developed and proposed in order to cope with the students' most commonly found difficulties that traditional approaches cannot deal with. These new forms will then be the object of thorough evaluation.

PhD Thesis Proposal: G1.18


Title: Learning communities to support initial programming learning
Keywords: computer science education; programming learning; learning communities.

Supervisor: Prof. António José Mendes (toze@dei.uc.pt) Summary:


Initial programming learning is known to be a hard task for many novice students at college level, leading to high failure and drop-out rates in many courses. Many reasons can be found for this scenario and several approaches have been proposed to facilitate students' learning. However, problems continue to exist and it is necessary to investigate new solutions that may help programming students and teachers. The learning communities concept has existed for some time. It has been presented as a way to create rich learning contexts where teachers, students and other people, namely experts, can coexist and collaborate in the production of knowledge, consequently leading to learning enhancement. This thesis proposal includes, first, the study of representative successful cases and characteristics of learning communities, and afterwards the study, proposal and creation of a learning communities support platform specially adapted to the needs of students during programming learning. The platform and its utilization will undergo a full evaluation, in order to assess its success in promoting programming learning. It is expected that this platform will include innovative characteristics, for example the inclusion of virtual members that may interact with real members when necessary, and specially tailored features and tools that may improve the quality of programming learning.

PhD Thesis Proposal: G1.19


Title: Problem solving patterns and remediation strategies in programming learning
Keywords: computer science education; programming learning; learning communities.

Supervisors: Prof. António José Mendes (toze@dei.uc.pt)
Prof. Maria José Marcelino (zemar@dei.uc.pt)

Summary:
Initial programming learning is known to be a difficult task for many novice students at college level. In those courses it is common to use a set of typical problems to introduce students to basic programming concepts and also to stimulate them to develop their first programs and programming skills. This work is essential, since it should allow beginners to develop the basic programming problem-solving skills that will be further developed and refined later. This first learning stage is therefore crucial to students' performance in all programming-related courses. This thesis proposal includes a study of the different ways students approach these typical basic problems, leading to the identification of common problem solving patterns. Some of these patterns will be adequate, while others will not lead to the development of correct solutions, being considered wrong or erroneous patterns that must be identified and corrected in students' strategies. Based on this information, the thesis' main objective will be the proposal, implementation and evaluation of methods and/or tools that may identify novice students' strategies, categorize typical wrong patterns and common errors, and interact with students, giving personalized remediation feedback when necessary. The forms of this feedback must also be studied, so that it becomes effective not only in helping students to solve the current problem, but mainly in helping them to develop better approaches that may lead to correct solutions in later problems and learning stages.

PhD Thesis Proposal: G1.20


Title: Cognitive skills for programming learning
Keywords: computer science education; cognitive skills; motivation; problem solving; programming learning
Supervisors: Prof. António José Mendes (toze@dei.uc.pt) and Prof. Ana Cristina Almeida, FPCE (calmeida@fpce.uc.pt)
Summary: In recent years programming courses have become more and more difficult for many students. High failure and dropout rates are evidence of those difficulties. However, it is common to find in the same course novice students with many learning difficulties side by side with others that are able to learn programming basics without too much effort. This often results in a very unbalanced situation where a good number of novices get high grades while the rest fail. Medium grades are not as common in programming courses as they are in other courses. This situation can have several causes, such as different backgrounds, cognitive skills, motivations, interests or study methods. Possibly all these aspects are relevant and play a role in students' learning capacities. That is why it is interesting and relevant to compare these aspects between fast-learning and slow-learning students in the context of initial programming courses (and possibly also between expert programmers and non-programmers). Can we determine the cognitive skills most relevant to programming learning? Can we define a taxonomy that includes the most important cognitive skills necessary for programming learning? Can we evaluate or develop (technology-based) instruments that allow programming teachers to better know their students' characteristics and needs? Can we find ways (possibly technology-based) to help students to develop the skills they don't have? This thesis' main objective will be to provide answers to the above research questions. This means that it will include a diagnosis phase and an intervention phase.
The first includes an in-depth study of students' cognitive characteristics and skills, confronting students with programming learning difficulties with students who learn programming without major difficulties, working in the same context. Ideally this phase should end with the proposal of a taxonomy of the cognitive skills necessary to learn programming. The second phase will be based on the analysis of the results of the field study and should propose strategies and/or tools that may help students to develop the necessary skills. Ideally, this proposal should be evaluated to verify its validity.

PhD Thesis Proposal: G1.21


Title: Mathematical skills and programming learning: the ability to solve problems
Keywords: computer science education; mathematics; problem solving; programming learning
Supervisors: Prof. António José Mendes (toze@dei.uc.pt) and Prof. Ana Maria Almeida (amca@mat.uc.pt)

Summary: Initial programming learning is quite hard for many students. Although several factors may contribute to this situation, the lack of basic mathematical proficiency is probably one of the most relevant. In fact, some preliminary studies made by our research group established that programming learning difficulties are often accompanied by a deep lack of basic mathematical concepts. However, it is not clear which basic mathematical concepts and cognitive competencies are the most important for developing the needed programming skills, or even whether the development of those concepts and abilities has a direct impact on programming learning. If this is the case, how can we develop tools that enable students to rapidly acquire these competencies in the context of basic programming education? This thesis' main objective will be to provide answers to the above research questions. This means that it will include both a diagnosis phase and an intervention phase. The first phase (field study) implies that a comparison of mathematical knowledge and skills must be made, confronting results provided by students with programming learning difficulties with results provided by students who learn programming without major difficulties, working in the same context. Another interesting study would be to consider groups of programming experts and novices. The second phase will be based on the analysis of the results of the field study and should conclude with a proposal of specific teaching and learning strategies that may be applied in the context of programming courses and that may lead to an improvement in the learning results of many students. Ideally, this proposal should be instantiated and evaluated so as to ascertain its validity.

PhD Thesis Proposal: G1.22


Title: Experimental Web-based Mathematical Learning
Keywords: Experimental learning, learning communities, mathematical learning, collaborative learning
Supervisors: Prof. Ana Maria de Almeida (amca@mat.uc.pt) and Prof. Maria José Marcelino (zemar@dei.uc.pt)
Summary: It is claimed by many actors in the learning process that one of the major causes of students' failure in science subjects in general, and in Maths in particular, is the lack of immediate application of concepts to real situations, which, in many instances, cannot be obtained through the usual class book exercises. But how can we bring real-world problems into the classroom? There seems to be an easy answer: simply use the Web and the tools developed for the Information Age! This theme intends to devise a model, in the form of guidelines, and produce a case study of implementing experimental e-learning in schools to promote the scientific method and the successful apprehension of elementary mathematical concepts. Towards this goal, it is necessary to identify some major keystone mathematical competences for grades 5 to 9 of Portuguese elementary schools, and activities centred on an appealing, real-science-based theme and intended for collaborative learning. The proposed activities should use computer-based tools and be devised so that they can be done in the classroom using the Web. They should also allow collaborative interaction with distant users (other students in different schools). After the study is run in the chosen schools, it should be evaluated so that the necessary conclusions and inferences can be drawn.

PhD Thesis Proposal: G1.23


Title: Entropy Optimization with Information Theory
Keywords: Shannon's Entropy, Kolmogorov Complexity, Information Measure, NP-completeness
Supervisor: Prof. Ana Maria de Almeida (amca@mat.uc.pt)
Summary: A very simple concept lies at the core of most of the applications found for Information Theory results: Shannon's Entropy. It not only describes an optimal transmission rate or a safety assurance for encryption; its results, corollaries and uses go far beyond this. In the vast majority of applications it is fundamental to proceed with a maximization of an entropy function so as to derive the required results. But this optimization is rarely trivial or well known, involving incomplete information. In particular, this is the case for the analysis of NP-hard problem instances. This thesis proposes the study and development of a Maximum Entropy diagnosis tool for information characterization in the presence of incomplete knowledge.
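As a concrete anchor for the central concept, a minimal sketch (illustrative only) of Shannon's entropy H(p) = -Σ p_i log2 p_i, which is maximized by the uniform distribution, the starting point of any maximum-entropy argument:

```python
import math
from collections import Counter

def shannon_entropy(probs):
    """H(p) = -sum(p_i * log2(p_i)), in bits; zero-probability terms contribute 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def empirical_entropy(seq):
    """Entropy of the empirical symbol distribution of a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return shannon_entropy(c / n for c in counts.values())
```

For a fair coin, shannon_entropy([0.5, 0.5]) gives 1 bit, and any biased coin gives strictly less; the uniform distribution over k outcomes attains the maximum log2(k).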

PhD Thesis Proposal: G1.24


Title: Similarity Measures and Applications
Keywords: Kolmogorov Complexity, Normalized distances, Machine learning, Clustering, Universal Similarity Metric
Supervisor: Prof. Ana Maria de Almeida (amca@mat.uc.pt)

Summary: A recurring problem within knowledge-based approaches is the need to identify patterns and, moreover, to apply recognition tools, which requires a similarity measure. But how can we measure the similarity between two genotypes, two computer programs or two echocardiographic lines? This theme intends to study similarity distance measures useful for data mining, pattern recognition, learning and automatic semantic extraction. After a state-of-the-art survey, the focus of this study should be the confrontation of two very different approaches: that of the Normalized Information Distance (based on Kolmogorov Complexity) versus the more common optimization strategies, deriving guidelines for choosing the most adequate approach for specific applications such as those mentioned above.
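The Normalized Information Distance itself is uncomputable (it is defined via Kolmogorov complexity), but it is commonly approximated by the Normalized Compression Distance, replacing K(x) with the length of a real compressor's output. A minimal sketch using zlib (results on short inputs are only indicative):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: a computable approximation of the
    Kolmogorov-complexity-based Normalized Information Distance."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))  # similar inputs compress well together
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Similar objects compress well together, so ncd(x, x) stays close to 0 while unrelated objects score near 1; the same one-liner applies unchanged to genomes, program source or digitized signals.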

PhD Thesis Proposal: G1.25


Title: Melody Detection in Polyphonic Audio
Keywords: music information retrieval, melody detection in polyphonic audio.
Supervisor: Prof. Rui Pedro Paiva (ruipedro@dei.uc.pt)
Summary:

Melody extraction from polyphonic audio is a research area of increasing interest in Music Information Retrieval (MIR). It has a wide range of applications in various fields, including music information retrieval (particularly in query-by-humming, where the user hums a tune to search a database of musical audio), automatic melody transcription, performance and expressiveness analysis, extraction of melodic descriptors for music content metadata, and plagiarism detection, to name but a few. This area has become increasingly relevant in recent years, as digital music archives are continuously expanding. The current state of affairs presents new challenges to music librarians and service providers regarding the organization of large-scale music databases and the development of meaningful methods of interaction and retrieval. Several different approaches have been proposed in recent years, most of them evaluated and compared in the corresponding track of the Music Information Retrieval Evaluation eXchange (MIREX, a small competition that takes place every year). In [Paiva, 2006], the problem of melody detection in polyphonic audio was addressed following a multistage approach, inspired by principles from perceptual theory and musical practice. The system comprises three main modules: pitch detection, determination of musical notes (with precise temporal boundaries, pitches, and intensity levels), and identification of melodic notes. 
The main objective of this thesis is to build on the work carried out in [Paiva, 2006] to tackle several open issues in the developed system, namely: derive a more efficient pitch detector; improve note determination in the presence of complex dynamics such as strong vibrato; address the current limitations in the melody/accompaniment discrimination task; improve the reliability of melody detection in signals with lower signal-to-noise ratio; add top-down information flow to the system (e.g., the effect of memory and expectations); add context information (e.g., piece tonality, rhythmic information); augment the song evaluation database; etc.
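As an illustration of the first module alone, a textbook time-domain pitch estimator (not the perceptually motivated front end of [Paiva, 2006]; names and parameter values are illustrative) picks the autocorrelation peak inside the plausible pitch-period range:

```python
import math

def autocorr_pitch(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency (Hz) of one mono frame by picking
    the autocorrelation peak inside the plausible pitch-period range."""
    n = len(frame)
    lag_min = int(sample_rate / fmax)          # shortest period considered
    lag_max = min(int(sample_rate / fmin), n - 1)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        corr = sum(frame[i] * frame[i - lag] for i in range(lag, n))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag
```

A pure 220 Hz sine sampled at 8 kHz is recovered to within a few Hz; polyphonic material is exactly where such single-pitch estimators break down, which motivates the multipitch, note-tracking and melody-identification stages described above.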
References: Rui Pedro Paiva, Melody Detection in Polyphonic Audio, PhD Thesis, Department of Informatics Engineering, University of Coimbra, 2006, Portugal.

PhD Thesis Proposal: G1.26


Title: Audio Fingerprinting and Music Identification
Keywords: music information retrieval, music identification, audio fingerprinting.
Supervisor: Prof. Rui Pedro Paiva (ruipedro@dei.uc.pt)
Summary: Music identification systems aim to recognize songs based on their playback in moderately noisy environments. In most current platforms (e.g., Shazam, Gracenote MusicID, 411-Song), you dial the number of the service provider with your cell phone, hold your phone towards the source of the music for a few seconds (from 3 to 20, depending on the provider) and then wait for a message containing the identification of the song (artist, title, etc.). Such applications are based on audio fingerprinting techniques, where an individual signature is extracted for each song in the database and then compared with the fingerprint computed for the query sample. Present challenges in the area include the identification of songs in disturbed conditions (e.g., noisy environments, poor recordings) or using only a few seconds of audio for matching. The main objective of this thesis is to improve the state of the art in music identification by investigating and extending current techniques and proposing new approaches to the problem (e.g., hashing and search techniques, feature extraction approaches, etc.).
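The lookup side of such systems can be sketched independently of the signal-processing front end: a fingerprint is a sequence of sub-fingerprint bit words (32-bit here, in the spirit of the Haitsma-Kalker scheme), and the best database match minimizes the bit error rate. All names and the toy threshold below are illustrative:

```python
def bit_error_rate(fp_a, fp_b):
    """Fraction of differing bits between two equal-length fingerprint blocks,
    each a sequence of 32-bit integers (one 'sub-fingerprint' per frame)."""
    assert len(fp_a) == len(fp_b)
    diff = sum(bin(a ^ b).count("1") for a, b in zip(fp_a, fp_b))
    return diff / (32 * len(fp_a))

def identify(query, database, threshold=0.35):
    """Return the song whose stored fingerprint block is closest to the query,
    or None if even the best match exceeds the error threshold."""
    name, fp = min(database.items(),
                   key=lambda kv: bit_error_rate(query, kv[1]))
    return name if bit_error_rate(query, fp) <= threshold else None
```

Production systems replace this linear scan with inverted indexes or hashing over sub-fingerprints so that a few seconds of noisy query can be matched against millions of songs.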
References:

- Eugene Weinstein and Pedro Moreno (2007). Music Identification with Weighted Finite-State Transducers. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2007.
- Jaap Haitsma and Ton Kalker (2002). A Highly Robust Audio Fingerprinting System. Proceedings of the 3rd International Conference on Music Information Retrieval.
- Avery Wang (2003). An Industrial-Strength Audio Search Algorithm. Proceedings of the 4th International Conference on Music Information Retrieval, invited talk.

PhD Thesis Proposal: G1.27


Title: Audio Music Mood Analysis
Keywords: music information retrieval, music mood analysis.
Supervisor: Prof. Rui Pedro Paiva (ruipedro@dei.uc.pt)
Summary: Audio music mood-based classification is a research area of increasing interest in Music Information Retrieval (MIR). It has a wide range of applications in fields such as automatic music classification, playlist generation and similarity analysis. In fact, recent studies identify music mood/emotion as an important criterion used by people in music retrieval and organization. Moreover, music psychology and education recognize the emotion component of music as the one most strongly associated with music expressivity. The analysis of audio music in terms of mood/emotion is challenging in its very nature: mood is a subjective notion, techniques are still in an embryonic stage and a uniform evaluation framework is yet to be agreed upon. Nevertheless, the Music Information Retrieval Evaluation eXchange (MIREX, a competition that takes place every year) has in 2007, and for the first time, a track on audio music mood classification, which will certainly give a strong impulse towards the improvement of techniques and the creation of evaluation standards. The main objective of this thesis is to analyze audio music in terms of mood content information (e.g., contentment, depression, exuberance, anxiety). This involves the study and derivation of mood-like features, and the development of mood-based classifiers and mood-based similarity metrics. This can be further applied to mood-based music recommendation systems. The PhD candidate will have the opportunity to work in a cutting-edge research area with several open and exciting research possibilities, with plenty of room for scientific innovation.
References:

- Juslin P.N., Karlsson J., Lindström E., Friberg A. and Schoonderwaldt E. (2006). Play It Again With Feeling: Computer Feedback in Musical Communication of Emotions. Journal of Experimental Psychology: Applied, Vol. 12, No. 2, pp. 79-95.
- Lu, Liu and Zhang (2006). Automatic Mood Detection and Tracking of Music Audio Signals. IEEE Transactions on Audio, Speech and Language Processing, Vol. 14, No. 1, pp. 5-18.

G2: Adaptive Computation http://cisuc.dei.uc.pt/acg/

PhD Thesis Proposal: G2.1


Title: Adaptive Mining for Detecting Trends in Evolving Data Sets
Keywords: Web mining, machine learning, and pattern recognition.
Supervisor: Prof. Bernardete Ribeiro (bribeiro@dei.uc.pt)
Summary: A wide range of applications require the analysis of underlying data that is generated by a non-stationary process, i.e., a process that evolves over time. Recently, the discovery of trends in data streams has become a major challenge. Examples of such data include Web click-streams, network traffic monitoring, trade surveillance for security fraud and money laundering, dynamic tracing of stock fluctuations, biomedical signal monitoring, climate data from satellite measurements, financial time series, etc. As most decision-making tasks rely on the up-to-dateness of their supporting data, the evolving nature of the data creates tremendous complexity for many mining algorithms. Since most of the existing data mining techniques assume that the underlying data is generated by stationary processes, such techniques may not be suitable for analyzing evolving data sets. On the other hand, users are often interested in the changes embodied by the data. To this end, the goal of this research is to develop mining algorithms that are more effective and efficient in view of changing data characteristics, and to extract patterns describing these changes. It is expected that techniques developed in this research will be applied to a wide variety of applications, including Web mining, monitoring of biomedical signals and others.
Proposal: This PhD will comprehend the following initial tasks: (1) study the state of the art of existing methods for temporal data mining; (2) construct a benchmark of a non-stationary data set; (3) develop techniques for clustering, classification and detection of frequent patterns in data; (4) build accurate models for evolving data; (5) develop techniques for detecting changes in evolving data; (6) evaluate and determine performance measures; (7) propose a general framework to detect trends in evolving data sets.
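One classical building block for the change-detection task is the Page-Hinkley test, which raises an alarm when stream values drift persistently above their running mean (named here as one possible technique, not the method the thesis must adopt; parameter values are illustrative):

```python
class PageHinkley:
    """Page-Hinkley change-detection test for an upward shift in a stream mean."""

    def __init__(self, delta=0.005, threshold=1.0):
        self.delta = delta          # magnitude of fluctuation tolerated
        self.threshold = threshold  # alarm level
        self.mean = 0.0             # running mean of the stream
        self.cum = 0.0              # cumulative deviation from the mean
        self.min_cum = 0.0          # minimum of the cumulative deviation

    def update(self, x):
        """Feed one observation; return True when a change is signalled."""
        self.n = getattr(self, "n", 0) + 1
        self.mean += (x - self.mean) / self.n
        self.cum += x - self.mean - self.delta
        self.min_cum = min(self.min_cum, self.cum)
        return self.cum - self.min_cum > self.threshold
```

Fed a stream whose mean jumps from 0 to 1, the detector fires shortly after the change point; delta and threshold trade detection delay against false alarms.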

PhD Thesis Proposal: G2.2


Title: Learning From Heterogeneous Data Sources
Keywords: Machine Learning, Clustering, Data Mining
Supervisor: Prof. Bernardete Ribeiro (bribeiro@dei.uc.pt)
Summary: In the last decade we have witnessed a dramatic growth in the availability of data from a variety of sources, along with an increasing diversity of data types. For example, bioinformatics tasks can exploit protein sequences, gene expression profiles and ontologies; image classification tasks can use data collected from different sensors; and countless more. These heterogeneous data sets allow a researcher to represent different characteristics of a sample, superseding the capability of a more homogeneous data set. The questions of how to deal with this heterogeneity and how to weight the importance of different sources of data and information remain to be solved. Recent empirical experiments have shown that by using heterogeneous features it is possible to increase overall performance and to obtain significant gains in systems' detection rates. While these heterogeneous data sets are plentiful in a variety of machine learning applications, including biomedicine, image processing, web mining, goal detection and business, to name a few areas, conventional machine learning algorithms may be limited by the general underlying assumptions that the available training data are drawn from a single source and that each sample is represented by a single vector of variables. Therefore, more sophisticated learning and data fusion methods are necessary to make the best possible use of heterogeneous data sets. Learning from heterogeneous sources of data, and learning in semi-supervised settings, exploring novel learning approaches such as kernel methods, ensemble approaches and feature selection/extraction methods, is the main theme of this research.
Proposal: This PhD will comprehend the following initial tasks: (1) study the state of the art of existing methods for heterogeneous data mining; (2) construct a benchmark of a heterogeneous data set; (3) develop techniques for clustering, classification and detection of patterns in heterogeneous data; (4) build supportive underlying assumptions and accurate models for heterogeneous data; (5) evaluate and determine performance measures; (6) propose a general framework for target detection in multivariate heterogeneous data sets.

PhD Thesis Proposal: G2.3


Title: Assigning Confidence Scores in Page Ranking for Intelligent Web Search
Keywords: Graph Mining, Machine Learning, Ranking, Text Mining
Supervisor: Prof. Bernardete Ribeiro (bribeiro@dei.uc.pt)
Summary: The Web has become the main centre of research around the globe. Users face an overload of data when a simple search is fed into Google or a similar web search engine. A recurrent problem is to unveil the desired information from the wealth of available search results. Ranking, which can be achieved by providing a meaningful score for each classification decision, is important in most practical settings. For instance, text retrieval systems typically produce a ranking of documents and let the user decide the search depth. Most current approaches use machine learning techniques that allow the definition of scores or confidences coupled with classification decisions. Moreover, these classification systems can be improved by enriching information (and information representation) with external background information, such as ontology-related data. Graph-based data mining is a recently emerging approach able to quantify the important role of a node as the degree to which it has direct and indirect relationships with other nodes in a graph. The most popular web ranking system is based on this method, which is still sub-optimal.

The main idea of the current proposal is to devise a ranking approach based on the combination of machine learning techniques and graph mining (e.g. Graph Clustering, Graph Kernels etc.) joining the advantages of both systems. Evaluation will be done on real benchmarks, synthetic data created by graph generators, but also with real users defining the goals and assessing the final results, including score changes in the final ranking.
Proposal

This PhD will comprehend the following initial tasks: (1) study the state of the art of existing methods for page ranking on the web; (2) construct a benchmark data set, either from the web or based on graph generators; (3) develop techniques for clustering, classification and detection of patterns in data; (4) build supportive underlying assumptions and accurate ranking models for web data; (5) evaluate and determine performance measures; (6) propose a general framework for ranking on web data sets.
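For reference, the graph-based scoring idea the proposal starts from can be sketched as plain power-iteration PageRank (a simplified baseline, not the ranking model to be developed):

```python
def pagerank(links, damping=0.85, iters=50):
    """Power-iteration PageRank over a dict mapping node -> list of out-links."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                share = damping * rank[u] / len(out)
                for v in out:
                    new[v] += share
            else:  # dangling node: spread its rank uniformly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank
```

On a toy graph where two pages link to a hub, the hub accumulates the highest score; the ranks form a probability distribution over the nodes, which is what a confidence-scoring layer would then refine.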

PhD Thesis Proposal: G2.4


Title: Homecare Diagnosis of Pediatric Obstructive Sleep Apnea Keywords: homecare; obstructive sleep apnea; reduction of complexity; biosignals
processing; computational intelligence; automatic diagnosis.

Supervisor: Prof. Jorge Henriques (jh@dei.uc.pt) Summary:


The main goal of this work is to investigate homecare solutions that could stratify normal and apnea events for diagnostic purposes in children suspected of obstructive sleep apnea syndrome. Obstructive sleep apnea syndrome (OSAS) is a condition whereby recurrent episodes of airway obstruction are associated with asphyxia and arousal from sleep. It is estimated to affect between 1 and 3% of young children, and its potential consequences include excessive daytime somnolence, behavioral disturbances and learning deficits, pulmonary and systemic hypertension, and growth impairment. The currently accepted method for diagnosis of OSAS is overnight polysomnography (PSG), done in sleep laboratories, where multiple signals are collected by means of a face mask, scalp electrodes, chest bands, etc. It monitors different activities, including brain waves (EEG), eye movement (EOG), muscle activity (EMG), heartbeat (ECG), blood oxygen levels and respiration. However, the diagnosis of OSAS from this huge collection of data is sometimes not straightforward for clinicians, since the major relations between features and consequents are most often very high-dimensional, non-linear and complex. These requirements impose the necessity of innovative signal processing techniques and computationally intelligent data interpretation methodologies, such as neural networks and fuzzy systems. One of the main goals of this work is to provide clinicians with tools that can help them in their diagnosis.

Although PSG is considered the gold standard for diagnosis of OSAS, given the relatively high medical costs associated with such tests and the insufficient number of pediatric sleep laboratories, PSG is not readily accessible to children in all geographic areas. Thus, the validity of alternative diagnostic approaches should be analysed, even assuming their accuracy is suboptimal. The second goal of this work points in this direction: it aims to investigate the viability of reducing the number and complexity of measurements, in order to make the stratification of OSAS possible in children's natural environment.
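To make the idea of a reduced-complexity measurement concrete: a single pulse-oximetry channel can already be screened for candidate events. The sketch below counts desaturation events in an SpO2 series; the 3-point drop and 30-second baseline window are illustrative values for this sketch only, not clinical recommendations from the proposal:

```python
def desaturation_events(spo2, drop=3.0, window=30):
    """Count oxygen-desaturation events in an SpO2 series sampled once per
    second: a fall of `drop` percentage points below the recent baseline.
    Thresholds are illustrative, not clinical recommendations."""
    events, i, n = 0, window, len(spo2)
    while i < n:
        baseline = sum(spo2[i - window:i]) / window
        if spo2[i] <= baseline - drop:
            events += 1
            # advance past the event so it is only counted once
            while i < n and spo2[i] <= baseline - drop:
                i += 1
        i += 1
    return events

# One simulated 10-second dip to 90% against a 97% baseline.
trace = [97.0] * 60 + [90.0] * 10 + [97.0] * 60
```

A real system would of course combine several such simple detectors with the learning methods mentioned above.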

PhD Thesis Proposal: G2.5


Title: Architectures and algorithms for real-time learning in interpretable neurofuzzy systems

Keywords: on-line learning; neuro-fuzzy systems; interpretability; machine learning Supervisor: Prof. António Dourado (dourado@dei.uc.pt) Summary:
The development of fuzzy rules for knowledge extraction from data acquired in real time needs new recursive clustering techniques to produce well-designed fuzzy systems. For Takagi-Sugeno-Kang (TSK) systems this applies mainly to the antecedents, while for Mamdani-type systems it applies to both the antecedent and consequent fuzzy sets. To improve the a posteriori interpretability of the fuzzy rules, such that some semantics may be deduced from them, pruning techniques should be developed to allow a human-interpretable labelling of the fuzzy sets in the antecedents and consequents of the rules. For this purpose, convenient similarity measures between fuzzy sets and techniques for merging fuzzy rules should be developed and applied. The applications envisaged are in industrial processes and medical fields.
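A similarity measure of the kind mentioned above can be sketched on discretized membership functions; the Jaccard-style measure and the Gaussian sets below are illustrative choices, not the proposal's prescribed method:

```python
import numpy as np

def fuzzy_jaccard(mu_a, mu_b):
    """Jaccard-style similarity between two fuzzy sets sampled on a common
    universe: sum of pointwise min over sum of pointwise max."""
    mu_a, mu_b = np.asarray(mu_a), np.asarray(mu_b)
    denom = np.maximum(mu_a, mu_b).sum()
    return float(np.minimum(mu_a, mu_b).sum() / denom) if denom else 1.0

def gauss(x, centre, sigma):
    """Gaussian membership function on universe x."""
    return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

x = np.linspace(0.0, 10.0, 201)
near_a, near_b = gauss(x, 4.0, 1.0), gauss(x, 4.3, 1.0)  # near-duplicates
far = gauss(x, 8.0, 1.0)
# Antecedent sets above a chosen similarity threshold would be merged
# and given a single human-readable label.
```

Near-duplicate sets score high and would be merged; well-separated sets score near zero and keep their own labels.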

PhD Thesis Proposal: G2.6


Title: Intelligent Monitoring of Industrial Processes with application to a Refinery Keywords: intelligent process monitoring; multidimensional scaling; computational
intelligence; clustering

Supervisor: Prof. António Dourado (dourado@dei.uc.pt)

Summary:
High-dimensional data in industrial complexes can be profitably used for advanced process monitoring if it is reduced to a dimension where human interpretability is easily achieved. Multidimensional scaling may be used to reduce it to two or three dimensions if appropriate measures of similarity/dissimilarity are developed. The measures express the distance between attributes, the essence of the information, and a similar difference should be guaranteed in the reduced space in order to preserve the informative content of the data. Research on appropriate measures and reduction methods is needed. In the reduced space, classification of the actual operating point should be done through appropriate recursive clustering and pattern recognition techniques. The classification is intended to clearly evidence the quality level of the current and past operating points, in such a way that the human operator finds in it a useful decision support system for the daily operation of the plant. The work has as its application the visbreaker process at the Galp Sines Refinery.
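As a minimal sketch of the reduction step, classical (metric) multidimensional scaling embeds points from a distance matrix via double centering and an eigendecomposition; the random data below merely stands in for real operating-point attributes:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (metric) MDS: embed n points in k dimensions from an
    n-by-n Euclidean distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # k largest eigenvalues
    scale = np.sqrt(np.maximum(w[idx], 0.0))
    return V[:, idx] * scale                 # n-by-k coordinates

# Stand-in for real operating-point attributes: 20 points in 5 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, 2)                      # 2-D picture for the operator
```

When the data genuinely lives in the target dimension, the embedding reproduces the original distances exactly (up to rotation and reflection), which is the property the monitoring display relies on.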

PhD Thesis Proposal: G2.7


Title: Intelligent supervision of the colour transition in a paper machine Keywords: Computational intelligent methodologies, support decision system, pulp and paper industry Supervisors: Prof. Jorge Henriques (jh@dei.uc.pt) and Prof. Alberto Cardoso (alberto@dei.uc.pt) Summary: The main goal of this work consists in developing a learning and decision support system to be applied to the colour transition process during the production of different types of paper in a paper machine. Computational intelligence methodologies will be implemented in order to acquire, understand and analyze the experimental knowledge captured by operators during the transition phase between operation regimes. This work will contribute a clear identification and characterization of the best transition approaches, as well as a unification strategy concerning the best practices to be followed. Specifically, it is expected to improve the optimization of the colour transition process (whiteness and tonality) during paper production in the pulp and paper industry at the Portucel and Soporcel group, in Figueira da Foz. A critical situation in an industrial environment concerns the procedures to be performed during the transitions between operation regimes. Ideally, these transitions should occur while minimizing both the duration of the time interval and the overshoot inherent to the transition. This becomes much more pertinent when significant costs are involved. Regarding the transition of the colour of the paper to be produced in a paper machine, although the process is automatically controlled, the operator has to manually define the set-points of the colour values, as well as the instants when they should occur. Thus, since this procedure is complex and non-trivial, significant settling times are observed until the paper being produced achieves a new steady-state colour specification.
Moreover, given that the paper machine under consideration produces 1500 meters of paper per minute, a huge waste of paper occurs, with all the inherent operating costs. From the above, it is imperative that those transitions occur with stability and within the minimum time interval possible. Despite the difficulties,

operators are able to deal with the large diversity and complexity of information involved in the transition process, given their experience and evidence-based knowledge. However, formulating this process in a systematic and precise way is a challenging task. One of the main goals of this work is to develop and implement computational intelligence strategies (neural networks, fuzzy systems, neuro-fuzzy systems, etc.) to address this challenge. Taking into account the historical data of past transitions, as well as the operators' know-how and expertise, a solution will be developed that is able to learn and incorporate the available information and experience. The developed solution will be a valuable tool for providing the best strategies to follow regarding the colour transition process during the production of different types of paper.

PhD Thesis Proposal: G2.8


Title: Management system to support ideas and projects in a collaborative environment Keywords: Computational intelligent methodologies, collaborative environment, pulp and paper industry, support decision system

Supervisors: Prof. Alberto Cardoso (alberto@dei.uc.pt) and Prof. Jorge Henriques (jh@dei.uc.pt) Summary: The main goal of this proposal is to investigate and develop a web-based management system that could act as a platform to support and facilitate the process of creation, development and management of ideas, projects or new products in a collaborative environment. The success and effectiveness of these processes is strongly dependent on the accurate application of Information and Communication Technologies (ICT). In this context, applications for collaborative environments will contribute to the natural settling of synergies between the several participants, providing effective support to the processes associated with the creation, development and management of ideas, projects or new products.

PhD Thesis Proposal: G2.9


Title: Dynamical platform for training, simulation and decision support Keywords: Adaptive computation methods, support decision system, training, modelling and simulation. Supervisors: Prof. Alberto Cardoso (alberto@dei.uc.pt) and Prof. Paulo Gil (pgil@dei.uc.pt) Summary:

The development of simulation systems is a very important and valuable tool in the industrial context, namely to understand the dynamical behaviour of the process, to train and improve the knowledge of operators, and to support decision actions. The basis of the simulator, for one specific petroleum refinery, would be a non-linear hybrid model, obtained from the high-dimensional data and the operators' knowledge, using adaptive computation methods (neural networks, fuzzy systems, neuro-fuzzy systems, etc.). The overall system should be web-based and the interfaces should be similar to the panels currently used by the operators.

PhD Thesis Proposal: G2.10


Title: Adaptive Intelligent Supervision in Changing Environments Keywords: Computational intelligent methodologies, collaborative environment, pulp and paper industry, support decision system Supervisors: Prof. Paulo Gil (pgil@dei.uc.pt) and Prof. Alberto Cardoso (alberto@dei.uc.pt)

Summary: Supervision can be regarded as the manifold process of collecting relevant information from the world (monitoring), predicting future states and acting accordingly, whenever required. When the system's nature is itself time-varying, this high-level framework must be materialized or implemented in an adaptive way. This means that the supervisor's performance should be permanently assessed and its parameters adapted in real time to cope with a changing environment. Another issue of key importance involving supervision in real-world applications concerns the study and implementation of intelligent methodologies enhancing the overall system robustness in the case of disturbances and fault events, including varying latency times in network communications. Subjects where contributions are expected: non-linear modelling using artificial neural networks (number of layers and the lag window); real-time adaptation of models (mechanisms for recursive parameter adjustment in noisy environments); fault diagnosis (study and implementation of techniques for fault detection and isolation); intelligent system behavior conditioning (study and implementation of reconfiguration methodologies assuring an acceptable performance level in the presence of faults).
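The recursive parameter adjustment mentioned above can be illustrated with a plain recursive least-squares (RLS) update with a forgetting factor; the linear model structure and noise level below are arbitrary stand-ins for a real process:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive least-squares step with forgetting factor lam,
    adapting parameters theta of the linear model y ~ x @ theta online."""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + float(x.T @ P @ x))   # gain vector
    err = y - float(x.T @ theta)             # prediction error
    theta = theta + k * err
    P = (P - k @ x.T @ P) / lam              # covariance update
    return theta, P

# Track the parameters of a (here, fixed) linear process from noisy samples.
rng = np.random.default_rng(1)
true = np.array([2.0, -1.0])
theta, P = np.zeros((2, 1)), np.eye(2) * 100.0
for _ in range(500):
    x = rng.normal(size=2)
    y = float(true @ x) + rng.normal(scale=0.01)
    theta, P = rls_update(theta, P, x, y)
```

The forgetting factor is what lets the same update track a drifting process: older samples are progressively discounted, at the cost of higher variance in noisy conditions.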

G3: Software and Systems Engineering PhD Thesis Proposal: G3.1


Title: Self-Healing Techniques for Legacy Application Servers Keywords: Autonomic computing, self-healing, dependability. Supervisor: Prof. Luís Moura e Silva (luis@dei.uc.pt) Summary:
One of the current big challenges of the computer industry is dealing with the complexity of systems. The Autonomic Computing initiative, driven by IBM, defined the following functional areas as the cornerstone of an autonomic system: self-configuration, self-healing, self-optimization and self-protection. The self-healing property refers to the automatic prediction and discovery of potential failures and the automatic correction to possibly avoid downtime of the computer system. This leads to the vision of computers that heal themselves and do not depend so much on a system manager to take care of them. While there has been some interesting work on self-healing techniques for mission-critical systems, there is a long way to go to achieve that goal in commercial off-the-shelf (COTS) servers running Apache/Linux, Tomcat, JBoss, or Microsoft .NET. The purpose of this PhD is to study and propose low-cost and highly effective self-healing techniques for these application servers. One of the potential causes of failures in 24x7 server systems is the occurrence of software aging. This phenomenon should be studied in detail, together with high-level techniques for application-level failure detection. Some mathematical techniques should be applied to detect software aging and to forecast the potential time of failure of the server system. When aging is detected, the server system should pro-actively apply a software rejuvenation technique to avoid the potential crash and to keep the service up and running. Techniques for micro-rejuvenation should be further studied to avoid downtime of the server. The final result of this PhD should be a set of software artifacts and the refinement of data analysis techniques to apply in COTS application servers in order to predict failures and software aging in advance and to apply some corrective action to avoid a server crash.
Proposal: This PhD will comprehend the following initial tasks: (1) state of the art on Autonomic Systems, Self-healing, Software Aging, Software Rejuvenation, Micro-rebooting and Dependability Benchmarking; (2) machine learning techniques to forecast failures and software aging; (3) application-level techniques for failure prediction and early detection; (4) micro-rejuvenation techniques for application servers; (5) extension of the techniques to SOA-based and N-tier applications; (6) dependability benchmarking; (7) implementation of an experimental framework; (8) analysis of experimental results.
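One simple instance of the aging-forecasting step listed above: fit a linear trend to a resource-usage series and flag rejuvenation when the extrapolation crosses a limit. This is a deliberately minimal sketch; a real deployment would use the richer forecasting techniques the proposal mentions:

```python
import statistics

def aging_trend(samples, horizon, limit):
    """Fit a least-squares linear trend to resource-usage samples and
    report (slope, whether the limit is crossed within `horizon` steps)."""
    n = len(samples)
    xs = range(n)
    xbar, ybar = statistics.fmean(xs), statistics.fmean(samples)
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, samples)) / sxx
    forecast = ybar + slope * ((n - 1) - xbar + horizon)
    return slope, forecast >= limit

# A steadily leaking server: memory grows by ~1 MB per monitoring sample.
usage = [100.0 + i for i in range(60)]
slope, rejuvenate = aging_trend(usage, horizon=100, limit=200.0)
```

The point of the forecast, rather than a simple threshold, is that rejuvenation can be scheduled before the limit is reached, while the service is still healthy.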

PhD Thesis Proposal: G3.2


Title: Wired self-emerging ad hoc network Keywords: peer-to-peer, ad hoc, distributed hash table. Supervisor: Prof. Filipe Araújo (filipius@dei.uc.pt) Summary:
In recent years computer communication has been departing from the client-server architecture and moving increasingly toward a peer-to-peer architecture. One aspect that characterizes this kind of interaction is the opportunistic participation of many of the peers: they connect to the network for only a few moments, just to discover and download (or not) what they are looking for, and then they disconnect. Interestingly, mobility and battery exhaustion can reproduce this same trend in wireless ad hoc networks, comprised of devices that use radio broadcast to communicate. While wired peer-to-peer and wireless ad hoc networks share a number of common features, like self-configuration and decentralized, fault-tolerant operation, they have an important difference: wired peer-to-peer networks run as overlay networks on top of the IP infrastructure. This raises the following question: can we take the paradigms from wireless networks and create IP-less self-organizing wired networks? Our goal is to plug in and out new devices, or even entire networks, from the wired infrastructure in a scalable and decentralized way and without the need for any a priori configuration. In contrast, current IP networks can only scale because they are highly hierarchical and require a considerable amount of human assistance. As a consequence, they are often highly congested, expensive to maintain and unreliable. The fundamental difference between the solution we seek and wireless ad hoc networks has to do with available bandwidth. In fact, the most important constraint that makes the collection of routing information so challenging, and that limits the pace of change of topology in wireless ad hoc networks, is the (lack of) available bandwidth. Available bandwidth is a very scarce resource, because it is shared among all the nodes. This makes it theoretically impossible to create a wireless ad hoc network that scales with the number of nodes.
As a consequence, algorithms for wireless ad hoc networks are often localized or have, at most, very limited information about distant regions of the network. This is very unlike the situation in wired networks: for the same pace of topological change, the supply of bandwidth is not shared and is much larger. This paves the way for better and more powerful solutions, which, we believe, are largely unexplored in the literature. Proposal: This PhD work encompasses the following tasks: (a) review of the state of the art; (b) design of the architecture; (c) evaluation of the scalability of the architecture (admissible number of nodes and topological changes versus available bandwidth); (d) exact and range-based lookup algorithms that leverage previous work on distributed hash tables and peer-to-peer file-sharing applications; (e) design of an interconnection infrastructure to connect islands of wired ad hoc networks with the IP network.
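Task (d) builds on distributed hash tables; the core idea can be sketched with a minimal consistent-hashing ring, where each key is owned by the first node clockwise from its hash (node names and the 32-bit ring size are illustrative):

```python
import hashlib
from bisect import bisect_right

def ring_hash(key):
    """Hash a string key onto a 32-bit identifier ring."""
    return int(hashlib.sha256(key.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    """Minimal consistent-hashing ring: a key belongs to the first node
    clockwise from its hash, so node joins/leaves move only nearby keys."""
    def __init__(self, nodes):
        self.points = sorted((ring_hash(n), n) for n in nodes)

    def lookup(self, key):
        hashes = [p for p, _ in self.points]
        i = bisect_right(hashes, ring_hash(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.lookup("some-file")
```

This is exactly the property that makes the plug-in/plug-out goal above feasible without a priori configuration: when a node leaves, only the keys it owned are reassigned, and all other lookups are unaffected.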

PhD Thesis Proposal: G3.3


Title: Fast Moving Wireless Ad Hoc Nodes Keywords: peer-to-peer, wireless ad hoc, wireless infrastructured, Wireless Access for the Vehicular Environment (WAVE). Supervisor: Prof. Filipe Araújo (filipius@dei.uc.pt) Summary:
In recent years we have witnessed an increasing interest in wireless networks. While most current applications seem to be set for sensor networks, we can foresee many other applications for mobile ad hoc or mixed ad hoc/infrastructured networks, where nodes are mainly mobile and communication goes beyond the simple data gathering of a sensor network. For instance, applications can enhance the behavior of a crowd by providing additional services to users holding mobile wireless devices, like searching for a given person that is momentarily lost, searching for a person that matches some social interests, exchanging diverse information about a product, etc. Another context that is extremely promising is that of a spontaneous network formed by cars on a road, enriched with some infrastructure that is able to provide traffic, weather and other information to drivers. By letting cars share their information, it may be possible to save significant costs in the infrastructure and still considerably improve the quality and quantity of information. In this PhD work we want to leverage some existing routing algorithms for wireless ad hoc networks and make them work in particular environments with specific patterns of mobility. Interestingly, in networks with a high degree of mobility it is often possible to increase the speed of the flow of information, because mobility creates more opportunities to exchange this information. In particular, we want to consider a scenario where the network is comprised of fast-moving cars equipped with IEEE 802.11p network adapters (Wireless Access for the Vehicular Environment, WAVE). This is a case where part of the information is created and sent to some points of the infrastructure through a chain of nodes, while at the same time cars can also introduce new information into the network, for instance by signaling their presence to cars in front, in the rear, or traveling in the opposite direction.
In particular, the information shared with cars going in the opposite direction first and then with the base stations located along the road is of paramount utility as this has the potential to propagate very accurate data of traffic jams or accidents at virtually no cost. We expect to use similar principles to more complex but slower-moving networks comprised of people with handheld or other wireless devices walking in crowds. Proposal: This PhD work encompasses the following tasks: (a) review of the state-of-the-art; (b) design of routing algorithms for environments with high mobility; (c) design of information-sharing applications for environments with high mobility; (d) simulation in real environments.

PhD Thesis Proposal: G3.4


Title: Detecting Software Aging in Database Servers Keywords: Software aging, software rejuvenation, autonomic computing, database management systems, dependability benchmarking Supervisors: Prof. Marco Vieira (mvieira@dei.uc.pt)
Prof. Luís Moura e Silva (luis@dei.uc.pt)

Summary:
One of the main problems in software systems of some complexity is software aging, a phenomenon observed in long-running applications where the execution of the software degrades over time, leading to expensive hangs and/or crash failures. Software aging is not only a problem for desktop operating systems: it has been observed in telecommunication systems, web servers, enterprise clusters, OLTP systems, spacecraft systems and safety-critical systems. Software aging happens due to the exhaustion of system resources caused by memory leaks, unreleased locks, non-terminated threads, shared-memory pool latching, storage fragmentation, data corruption and the accumulation of numerical errors. There are several commercial tools that help to identify some sources of memory leaks in the software during the development phase. However, not all faults can be avoided, and those tools cannot work on third-party software modules when there is no access to the source code. This means that existing production systems have to deal with the problem of software aging. The natural procedure to combat software aging is to apply the well-known technique of software rejuvenation. There are two basic rejuvenation policies: time-based and prediction-based rejuvenation. The first applies a rejuvenation action periodically, while the second makes use of predictive techniques to forecast the occurrence of software aging and applies the rejuvenation action only when strictly necessary. The goal of this PhD Thesis is to study the phenomenon of software aging in commercial database engines, and to devise and implement techniques to collect vital information from the engine and to forecast the occurrence of aging or potential anomalies. With this knowledge the database engine can apply a controlled rejuvenation action to avoid a crash or a partial failure of the system.
The ultimate goal is to improve the autonomic computing capabilities of a database engine, mainly when subjected to high workload and stress-load from the client applications. Proposal: The PhD work will comprehend the following initial tasks: (a) overview of the state of the art on software aging, rejuvenation and dependability benchmarking; (b) development of a tool for dependability benchmarking of database engines; (c) development of a workload and stress-load tool for databases; (d) infrastructure of probes (using Ganglia) to collect vital information from a database engine; (e) development of mathematical techniques to forecast the occurrence of software aging (time-series analysis, data mining, machine learning, neural networks); (f) experimental study and analysis of results; (g) adaptation of rejuvenation techniques for database engines; (h) writing of papers.

PhD Thesis Proposal: G3.5


Title: Security benchmarking of COTS components Keywords: Software reliability, Security benchmarking, Experimental evaluation, Dependability benchmarking Supervisors: Prof. Henrique Madeira (henrique@dei.uc.pt)
Prof. João Durães (jduraes@dei.uc.pt)

Summary:
One of the main problems in software systems is their vulnerability to malicious attacks. Complex systems, and systems that have a high degree of interaction with other systems or users, are more prone to be successfully attacked. The consequences of a successful attack are potentially very severe and may include the theft of mission-critical information and trade secrets. Given the pervasive nature of software systems in modern society, security and testing for vulnerability to attacks is an important research area. The vulnerabilities of software systems are caused by several factors. Two of these factors are the integration of third-party off-the-shelf components to build larger components, and bad programming practices. The integration of third-party general-purpose components may introduce vulnerabilities in the larger system due to interface mismatches between the components, which may be exploited for attacks. Bad programming practices may lead to weaknesses that may be exploited by tailored user inputs. Testing a system or component against malicious attacks is a difficult problem and is currently an open research area. Testing for vulnerabilities to malicious attacks cannot be performed as traditional testing because there is no previous knowledge about the nature of the attacks. However, these attacks follow a logic based on exploiting possible weaknesses inside the software, and this logic can be used to forecast the existence of vulnerabilities. The goal of this PhD Thesis is to study the phenomenon of attacks on software systems and to devise a methodology to assess the vulnerability to these attacks. This includes the proposal of experimental techniques to test systems and components following the logic of dependability benchmarking and experimental evaluation. It is expected that at the conclusion of the Thesis there will be a case study with practical results of security assessment and vulnerability forecasting for comparison purposes. Web servers are suggested as one of the possible case studies. Fault injection techniques and robustness testing techniques should be considered as enabling techniques for the purposes of the Thesis.

Proposal:
The PhD work will comprehend the following initial tasks: (a) overview of the state of the art on software security, software vulnerabilities, software defects, validation methods, robustness testing and dependability benchmarking; (b) development of methods and tools for the identification of patterns related to vulnerabilities and the automated testing of the possible vulnerabilities (case studies include web servers); (c) proposal of generic test methodologies for the evaluation of software vulnerability to malicious attacks, based on software defects and program pattern analysis, for system comparison purposes; (d) proposal of formal methodologies for the experimental assessment of security and vulnerability forecasting on third-party (black-box) software components; (e) development of an experimental infrastructure of tools for practical demonstration of the above on real systems (case studies include web servers); (f) experimental study and analysis of results; (g) writing of papers.
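In the spirit of the robustness testing and automated vulnerability testing tasks above, a crude input-robustness harness can be sketched as follows; the canned malformed inputs are illustrative, not a complete attack model:

```python
import random
import string

def malformed_inputs(seed=0, n=20):
    """Simple malformed/boundary inputs in the spirit of robustness
    testing: empty, oversized, control characters, classic injection
    tokens, plus random printable noise."""
    rng = random.Random(seed)
    cases = ["", "\x00", "A" * 10_000, "'; DROP TABLE users; --",
             "<script>alert(1)</script>", "%s%s%s%n", "../../etc/passwd"]
    cases += ["".join(rng.choice(string.printable)
                      for _ in range(rng.randint(1, 64)))
              for _ in range(n - len(cases))]
    return cases

def robustness_test(handler, cases):
    """Feed each case to `handler`; any unhandled exception is recorded
    as a robustness weakness."""
    failures = []
    for case in cases:
        try:
            handler(case)
        except Exception as exc:
            failures.append((case, type(exc).__name__))
    return failures
```

A benchmark in the sense of the proposal would standardize both the faultload (the cases) and the measures derived from the recorded failures, so that different components can be compared.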

Target conferences to publish papers: - Dependable Systems and Networks (DSN) - International Conference on COTS-Based Software System (ICCBSS) - International Conference on Computer Safety, Reliability and Security (SAFECOMP) - International Symposium on Software Reliability Engineering (ISSRE)

PhD Thesis Proposal: G3.6


Title: Plug-and-Play with Declarative Stream Processing for Planned Wireless Sensor
Networks

Supervisor: Prof. Pedro Furtado (pnf@dei.uc.pt)


Summary: While WSNs are usually complex to configure and code for non-experts, they are typically sensing systems with application-dependent requirements concerning stream processing, system configuration and reconfiguration. This thesis consists in the proposal of approaches for extending languages and generating code for all components of an application system. The candidate will be integrated into a research team currently working on the EU project Ginseng, and any valuable research work and results can be integrated into the current effort.

Project: EU FP7 project Ginseng (http://www.ict-ginseng.eu/) More information and/or interest: if you are interested in this opportunity, contact
me as soon as possible at pnf@dei.uc.pt

PhD Thesis Proposal: G3.7


Title: Algorithms for Configuring, Monitoring and Adapting Stream Processing in Performance Controlled Wireless Sensor Networks with Fault-Tolerance.

Supervisor: Prof. Pedro Furtado (pnf@dei.uc.pt)


Summary: We are developing a system for configurable stream processing over WSNs which respects performance and fault-tolerance characteristics. This system must be able to handle any unexpected condition gracefully, which involves monitoring, diagnosing and reacting to those unexpected conditions. This thesis consists in proposing all the mechanisms, from modified routing to stream processing, that will ensure these graceful-handling features. The candidate will be integrated into a research team currently working on the EU project Ginseng, and any valuable research work and results can be integrated into the current effort.

Project: EU FP7 project Ginseng (http://www.ict-ginseng.eu/) More information and/or interest: if you are interested in this opportunity, contact
me as soon as possible directly at pnf@dei.uc.pt

PhD Thesis Proposal: G3.8


Title: Timely ACID Transactions in DBMS Keywords: Databases, transaction processing, performance and QoS, timely
transactions, real-time databases, fault-tolerance

Supervisors: Prof. Marco Vieira (mvieira@dei.uc.pt)


Prof. Henrique Madeira (henrique@dei.uc.pt)

Summary:
On-time data management is becoming a key difficulty faced by the information infrastructure of most organizations. A major problem is the capability of database applications to access and update data in a timely manner. In fact, database applications for critical areas (e.g., air traffic control, factory production control, etc.) are giving increasing importance to the timely execution of transactions. Database applications with timeliness requirements have to deal with the possible occurrence of timing failures, when the operations specified in the transaction do not complete within the expected deadlines. For instance, in a database application designed to manage information about a critical activity (e.g., a nuclear reactor), a transaction that reads and stores the current reading of a sensor must be executed in a short time, as the longer it takes to execute the transaction the less useful the reading becomes. This way, when a transaction is submitted and does not complete before a specified deadline, that transaction becomes irrelevant, and this situation must be reported to the application/business layer in order to be handled in an adequate way. In spite of the importance of timeliness requirements in database applications, commercial DBMS do not assure any temporal properties, not even the detection of the cases when a transaction takes longer than the expected/desired time. The goal of this work is to bring timeliness properties to typical ACID (atomicity, consistency, isolation, durability) transactions, putting together classic database transactions and recent achievements in the field of real-time and distributed transactions. This work will be developed in the context of the TACID (Timely ACID Transactions in DBMS) research project, POSC/EIA/61568/2004, funded by FCT.

Proposal: The PhD work will comprehend the following initial tasks: (a) overview of the state of the art on timely computing and real-time databases; (b) characterization of timed transactions; (c) analysis of DBMS core implementations; (d) infrastructure to support the timely execution of ACID transactions; (e) development of mathematical techniques to forecast transaction execution times; (f) implementation and evaluation; (g) writing of papers.

PhD Thesis Proposal: G3.9


Title: Security Benchmarking for Transactional Systems Keywords: Security, benchmarking, database management systems, transactional
systems

Supervisors: Prof. Marco Vieira (mvieira@dei.uc.pt)


Prof. Henrique Madeira (henrique@dei.uc.pt)

Summary:
One of the main problems faced by organizations is the protection of their data against unauthorized access or corruption due to malicious actions. Database management systems (DBMS) constitute the kernel of the information systems used today to support the daily operations of most organizations, and represent the ultimate layer in preventing unauthorized access to data stored in information systems. In spite of the key role played by the DBMS in overall data security, no practical way has been proposed so far to characterize the security of such systems or to compare alternative solutions concerning security features. Benchmarks are standard tools that allow evaluating and comparing different systems or components according to specific characteristics (e.g., performance, robustness, dependability, etc.). In this work we are particularly interested in benchmarking the security aspects of transactional systems. Thus, the main goal is to research ways to compare transactional systems from a security point of view. This work will be developed in the context of a research cooperation with the Center for Risk and Reliability of the University of Maryland, MD, USA. During this work the student will have the opportunity to visit the University of Maryland in order to carry out joint work with local researchers. Proposal: The PhD work will comprehend the following initial tasks: (a) overview of the state of the art on security, security evaluation, and dependability benchmarking; (b) definition of a security benchmarking approach for transactional systems; (c) study of attacks for security benchmarking; (d) definition of a standard approach for security evaluation and comparison; (e) implementation and evaluation; (f) writing of papers.

PhD Thesis Proposal: G3.10


Title: Dependability Benchmarking for Distributed and Parallel Database Environments
Keywords: Databases, distributed systems, parallel systems, dependability benchmarking, fault-tolerance

Supervisors: Prof. Marco Vieira (mvieira@dei.uc.pt)


Prof. Henrique Madeira (henrique@dei.uc.pt)

Summary:
The ascendance of networked information in our economy and daily lives has increased the awareness of the importance of dependability features. In many cases, such as in e-commerce systems, service outages may result in a huge loss of money or in an unaffordable loss of prestige for companies. In fact, due to the impressive growth of the Internet, some minutes of downtime in a server somewhere may be directly exposed as loss of service to thousands of users around the world. Database systems constitute the kernel of the information systems used today to support the daily operations of most organizations. Additionally, in recent years, there has been an explosive growth in the use of databases for decision support systems. The biggest differences between decision support systems and operational systems, besides their different goals, are the type of operations executed and the supporting database platform. While operational systems execute thousands or even millions of small transactions per day, decision support systems execute only a small number of queries on the data (in addition to the loading operations executed offline). Advanced database technology, such as parallel and distributed databases, is a way to achieve high performance and availability in both operational and decision support systems. However, although distributed and parallel database systems are increasingly being used in complex business-critical systems, no practical way has been proposed so far to characterize the impact of faults in such environments or to compare alternative solutions concerning dependability features. The fact that many businesses require very high dependability for their database servers shows that a practical tool that allows the comparison of alternative solutions in terms of dependability is of utmost importance.
In spite of the pertinence of having dependability benchmarks for distributed and parallel database systems, the reality is that no such benchmark has been proposed so far. A dependability benchmark is a specification of a standard procedure to assess dependability-related measures of a computer system or computer component. The awareness of the importance of dependability benchmarks has increased in recent years, and dependability benchmarking is currently the subject of strong research. In previous work the first known dependability benchmark for transactional systems was proposed. However, this benchmark focuses on single-server transactional databases. The goal of this work proposal is to study the problem of dependability benchmarking in distributed and parallel databases. One of the key aspects to be addressed is to figure out how to apply a faultload (a set of faults and stressful conditions that emulate real faults experienced by systems in the field) in a distributed/parallel environment. Several types of faults will be considered, namely: operator faults, software faults, and hardware faults (including network faults).
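As an illustration of what a faultload specification might look like, the hypothetical Python sketch below encodes fault-injection events drawn from the fault classes named above (operator, software, and network/hardware faults). The structure, field names, and timings are our own assumptions for illustration, not part of any proposed benchmark:

```python
from dataclasses import dataclass

@dataclass
class Fault:
    """One entry of a hypothetical faultload for a distributed DBMS benchmark."""
    fault_type: str   # "operator", "software", "hardware", or "network"
    target_node: int  # which node of the cluster is affected
    start_s: int      # injection time, in seconds after the run starts
    description: str

# Illustrative faultload mixing the fault classes mentioned above.
faultload = [
    Fault("operator", 0, 300, "accidental DROP of an index used by the workload"),
    Fault("software", 1, 600, "kill the DBMS process to emulate a crash"),
    Fault("network",  2, 900, "partition node 2 from the rest of the cluster"),
]

def schedule(faults):
    """Return the faults ordered by injection time."""
    return sorted(faults, key=lambda f: f.start_s)

for f in schedule(faultload):
    print(f"t={f.start_s:>4}s node {f.target_node}: {f.fault_type} fault - {f.description}")
```

A benchmark driver would replay such a schedule against the system under test while the workload runs, measuring how the dependability-related metrics degrade and recover.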

Proposal: The PhD work will comprise the following initial tasks: (a) overview of the state of the art on parallel and distributed databases, dependability assessment, and dependability benchmarking; (b) definition of a dependability benchmarking approach for distributed and parallel databases; (c) study of typical faults in distributed and parallel database environments; (d) definition of a standard approach for dependability evaluation and comparison in distributed and parallel databases; (e) implementation and evaluation; (f) writing papers.

PhD Thesis Proposal: G3.11


Title: Techniques to Improve Performance in Affordable Data Warehouses
Keywords: Data warehousing, on-line analytical processing (OLAP), performance, parallel and distributed databases

Supervisors: Prof. Jorge Bernardino (jorge@isec.pt)


Prof. Henrique Madeira (henrique@dei.uc.pt)

Summary:
In recent years, there has been an explosive growth in the use of databases for decision support. These systems, generically called data warehouses, involve manipulations of massive amounts of data that push database management technology to the limit, especially with regard to performance and scalability. In fact, typical data warehouse utilization is interactive, which assumes short query response times. Therefore, the huge data volumes stored in a typical data warehouse, together with the complexity and intrinsic ad-hoc nature of the queries, make query execution performance the central problem of large data warehouses. The main goal of this work is to investigate ways to allow a dramatic reduction of the hardware, software, and administration costs when compared to traditional data warehouses. The affordable data warehouse solution will be built upon the high scalability and high performance of the DWS (Data Warehouse Stripping) technology. Starting from the classic method of uniform partitioning at the lowest level (facts), DWS includes a new technique that distributes a data warehouse over an arbitrary number of computers. Queries are executed in parallel by all the computers, guaranteeing a nearly optimal speedup. This work will focus on various aspects related to:

Automatic data balancing: as each node in the cluster may have different processing capabilities, it is important to provide load balancing algorithms that automatically find the best data distribution. With these mechanisms the system will be able to reorganize the data whenever needed, in order to keep the load on each node as balanced as possible, allowing similar response times for all nodes.

Auto-administration and tuning: by using a cluster of machines, administration complexity and costs tend to increase dramatically. Although the several nodes normally have similar configurations, some discrepancies are expected due to the heterogeneous nature of the cluster. To achieve the best configuration we need to tune each node individually. Thus, we have to develop a solution for automatic administration and tuning in distributed data warehouses that allows a reduction of administration costs and an efficient use of system resources.
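As a minimal illustration of capacity-driven data balancing, the sketch below splits the rows of a fact table across heterogeneous nodes in proportion to assumed per-node capacity scores. This is a generic proportional-allocation sketch of the idea, not the DWS balancing algorithm itself, and the capacity numbers are made up:

```python
def balanced_shares(capacities, total_rows):
    """Split total_rows across nodes in proportion to each node's
    measured processing capacity, using largest-remainder rounding
    so the shares always sum exactly to total_rows."""
    total_cap = sum(capacities)
    raw = [total_rows * c / total_cap for c in capacities]
    shares = [int(r) for r in raw]
    # Hand the rows lost to truncation to the largest fractional remainders.
    leftover = total_rows - sum(shares)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - shares[i], reverse=True)
    for i in order[:leftover]:
        shares[i] += 1
    return shares

# Three heterogeneous nodes; the middle one is twice as fast as the others,
# so it should hold (and scan) half of the fact rows.
print(balanced_shares([1.0, 2.0, 1.0], 1_000_000))  # [250000, 500000, 250000]
```

When each node's share is proportional to its capacity, all nodes finish their partial scans at roughly the same time, which is what keeps the parallel speedup close to optimal.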

Proposal: The PhD work will comprise the following initial tasks: (a) overview of the state of the art; (b) characterization of affordable data warehouse requirements; (c) infrastructure for improved performance in affordable data warehouses; (d) implementation and evaluation; (e) writing papers.

PhD Thesis Proposal: G3.12


Title: Towards High Dependability in Affordable Data Warehouses
Keywords: Data warehousing, on-line analytical processing (OLAP), fault-tolerance, data security, parallel and distributed databases

Supervisors: Prof. Marco Vieira (mvieira@dei.uc.pt)


Prof. Henrique Madeira (henrique@dei.uc.pt)

Summary:
In recent years, there has been an explosive growth in the use of databases for decision support. These systems, generically called data warehouses, involve manipulations of massive amounts of data that push database management technology to the limit, especially with regard to performance and dependability. The huge data volumes stored in a typical data warehouse make performance and availability two central problems of large data warehouses. An affordable data warehouse solution is being built upon the high scalability and high performance of the DWS (Data Warehouse Stripping) technology. Starting from the classic method of uniform partitioning at the lowest level (facts), DWS includes a new technique that distributes a data warehouse over an arbitrary number of computers. The fact that the data warehouse is distributed over a large number of computers raises new challenges, as the probability of failure of one or more computers greatly increases. The main goal of this work is to investigate ways to achieve high dependability in the affordable data warehouse solution while allowing a dramatic reduction of the hardware, software, and administration costs when compared to traditional data warehouses. Thus, this work will focus on various aspects related to:

Data security: the affordable data warehouse solution will be based on open-source database management systems (DBMS). However, these databases do not provide the security mechanisms normally available in commercial DBMS. In addition, the distributed database approach increases the data security requirements. The goal is to investigate the security needs of distributed data warehouses over open-source DBMS and to propose advanced mechanisms that improve overall system security.

Data replication and recovery: compared to a single-server database, one of the consequences of using a cluster of affordable machines is the increased probability of failure. Thus, one of the goals is to research a new technique for data replication and recovery that allows the system to continue working in the presence of failures in several nodes and facilitates the recovery of failed nodes.

Proposal: The PhD work will comprise the following initial tasks: (a) overview of the state of the art; (b) characterization of affordable data warehouse dependability requirements; (c) infrastructure to support high dependability in distributed data warehouses; (d) implementation and evaluation; (e) writing papers.
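One classical way to keep a striped cluster answering queries through single-node failures is chained declustering, where each partition's backup copy lives on the next node in the chain. The sketch below is our own illustration of that generic scheme, not the replication technique the proposal sets out to develop:

```python
def replica_of(partition, n_nodes):
    """Place the backup copy of each partition on the next node in the
    chain, so no node holds both copies of the same partition."""
    return (partition + 1) % n_nodes

def node_to_query(partition, n_nodes, failed):
    """Serve a partition from its primary node if alive, otherwise from
    its chained replica; return None if both copies are unavailable."""
    primary = partition
    backup = replica_of(partition, n_nodes)
    if primary not in failed:
        return primary
    if backup not in failed:
        return backup
    return None

# With 4 nodes and node 1 down, partition 1 is served by its replica on node 2.
print(node_to_query(1, 4, failed={1}))  # 2
```

The appeal of chaining is that a single failure spreads the extra load over only one neighbour; recovery of the failed node then amounts to copying its partitions back from the surviving primary and replica copies.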

PhD Thesis Proposal: G3.13


Title: Using Design Patterns in Modeling and Simulation
Keywords: Model Reuse, Design Patterns, Parallel & Distributed Simulation
Supervisor: Prof. Fernando Barros (barros@dei.uc.pt)
Summary:
Software Design Patterns (SDPs) are a widely used technique in software development. Time-based SDPs have been developed for building real-time software, and their use is a promising approach to building modeling and simulation software. This proposal intends to develop new SDPs to help the development of reusable simulation models and reusable simulation kernels able to deal with both conservative and optimistic parallel & distributed approaches.

PhD Thesis Proposal: G3.14


Title: Modeling and Simulation of Adaptive Sampling Systems
Keywords: Numerical Simulation, Sampled-Based Systems
Supervisor: Prof. Fernando Barros (barros@dei.uc.pt)
Summary:
Adaptive step-size numerical methods make it possible to improve simulation performance while yielding the same accuracy as fixed step-size methods. The use of asynchronous numerical solvers makes it possible to concentrate computational power on the most demanding models, enabling larger systems to be represented. The digital control and digital signal processing areas are currently exploiting multirate sampling and adaptive sampling techniques as a more efficient alternative to conventional fixed-sampling-rate approaches. In this proposal, we intend to develop new algorithms based on adaptive sampling and apply them to numerical solvers, event detectors, and signal and control systems.
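A minimal sketch of the adaptive step-size idea, using textbook step-doubling error control around a classical fourth-order Runge-Kutta (RK4) step. This is a generic illustration of adaptive stepping, not the methods to be developed in the thesis:

```python
def rk4_step(f, t, y, h):
    """One classical RK4 step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, y0, t_end, h=0.1, tol=1e-6):
    """Step-doubling error control: compare one step of size h against
    two steps of size h/2, and grow or shrink h to keep the estimated
    error near tol. Smooth regions get large steps; sharp ones small."""
    t, y = t0, y0
    while t < t_end:
        h = min(h, t_end - t)
        big = rk4_step(f, t, y, h)
        half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
        err = abs(big - half)
        if err <= tol:
            t, y = t + h, half       # accept the more accurate result
            if err < tol / 32:
                h *= 2               # error comfortably small: grow the step
        else:
            h /= 2                   # reject and retry with a smaller step
    return y

# dy/dt = -y with y(0) = 1: the exact solution gives y(1) = e**-1 ≈ 0.3679.
print(integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0))
```

The fixed step-size alternative would have to use the smallest step everywhere; the adaptive controller spends that effort only where the local error estimate demands it, which is the performance argument made above.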

G4: Communications and Telematics http://cisuc.dei.uc.pt/lct/

PhD Thesis Proposal: G4.1


Title: Security in Wireless Sensor Networks
Keywords: Sensor Networks, Security, Mobility
Supervisor: Prof. Jorge Sá Silva (sasilva@dei.uc.pt)
Summary:
Although in recent years we have witnessed an increase in the processing capabilities and bandwidth of communication systems, several researchers consider that, in the near future, an inversion of this trend will occur. These new computational systems will not consist of devices with higher processing power, but simply of networks of sensors. Wireless Sensor Networks (WSNs) are composed of a large number of nodes, each one equipped with a microprocessor, limited memory, and a basic communication system. The integration of WSNs in the Internet will revolutionize several concepts and will require new paradigms. Recently, a new group was created at the IETF, 6LoWPAN, which is responsible for producing problem statements, assumptions, and goals for network elements with restricted requirements, such as limited power, in which WSNs can be included. However, security mechanisms for these networks are scarce and inefficient. The research work of this PhD program will comprise the study and proposal of new models for the security of these new Internet elements. This is particularly important in mobile environments, as the research community has generally assumed sensors to be static nodes. This PhD work will provide security mechanisms for the application, transport, network, and link layers. The new models proposed in this PhD work should consider security issues to protect against eavesdropping and malicious behavior. From bootstrapping devices to end-to-end confidentiality, passing through integrity and authenticity, it is important to support all the security features. Since wireless sensor nodes are energy-constrained devices, they are also extremely exposed to denial of service through battery exhaustion. This work will be done in the context of the European Project GINSENG (Performance Control in Wireless Sensor Networks), FP7-ICT-2007-2.

PhD Thesis Proposal: G4.2


Title: Quality of Service in Wireless Sensor Networks
Keywords: Sensors, QoS, Mobility
Supervisor: Prof. Jorge Sá Silva (sasilva@dei.uc.pt)
Summary:
Quality of Service (QoS) metrics for Wireless Sensor Networks (WSNs) may include those typically considered in wired networks, such as delay, jitter, data rate, and packet loss. Additional performance metrics, related to the type of network and the critical nature of the applications, also need to be considered. In WSNs, these are mostly related to reliability and accuracy. Network dependability is also important for being able to support QoS assurances, demanding appropriate routing protocols and the ability to dynamically alter the current topology so that performance requirements can be satisfied without unnecessary waste of energy. This research work is going to define and control QoS parameters, identify principles, propose paradigms that increase the performance of wireless sensor networks, and provide models and algorithms for the implementation of data reliability and error-controlled scenarios. The required performance assurances can be met by controlling the architecture of the network, its topology, the routing, and the processing of the information. This PhD work will also incorporate in these algorithms several properties which support the quality (not the functionality) of the WSN: scalability, robustness, security, heterogeneity, and real-time operation. This work will be done in the context of the European Project GINSENG (Performance Control in Wireless Sensor Networks), FP7-ICT-2007-2.
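As a small illustration of the wired-network metrics mentioned above, the sketch below computes average one-way delay, jitter (taken here as the mean variation between consecutive delays), and loss ratio from hypothetical per-packet timestamps. The data and the simple jitter definition are our own assumptions for illustration:

```python
def qos_metrics(sent, received):
    """sent: {seq: time_sent}; received: {seq: time_received} for the
    packets that arrived. Returns (average one-way delay, jitter as the
    mean absolute variation between consecutive delays, loss ratio)."""
    delays = [received[s] - sent[s] for s in sorted(received)]
    loss = 1 - len(received) / len(sent)
    avg_delay = sum(delays) / len(delays)
    jitter = (sum(abs(b - a) for a, b in zip(delays, delays[1:]))
              / (len(delays) - 1) if len(delays) > 1 else 0.0)
    return avg_delay, jitter, loss

# Four packets sent; packet 4 never arrived (times in seconds).
sent = {1: 0.00, 2: 0.10, 3: 0.20, 4: 0.30}
received = {1: 0.05, 2: 0.17, 3: 0.25}
delay, jitter, loss = qos_metrics(sent, received)
print(round(delay, 4), round(jitter, 4), loss)
```

In a WSN these transport-level numbers would be complemented by the reliability- and accuracy-oriented metrics the proposal emphasizes, and by the energy cost of achieving them.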

PhD Thesis Proposal: G4.3


Title: Mobility in Wireless Sensor Networks
Keywords: Sensors, Mobility, 4G
Supervisor: Prof. Jorge Sá Silva (sasilva@dei.uc.pt)
Summary:
Mobility support for multi-hop routing in infrastructure-less networks is still a challenging issue. High mobility requirements of the application also affect the design of other characteristics, such as localization and synchronization. This PhD proposal will study and provide new models of mobility for Wireless Sensor Networks (WSNs). As many applications for WSNs involve some type of mobility, the fundamental question is how to disseminate data in time and with the required accuracy to the interested user. This research work will evaluate the impact of the mobility of sensor nodes, sinks, and relays, and the ways in which mobility can improve network performance and enable new applications. On the one hand, it is necessary to consider sensor network applications with inherent mobility of nodes. On the other hand, it has been shown that mobility increases communication capacity in ad-hoc networks, and this result may be further exploited in WSNs. The issues and targets for optimization depend on which nodes are moving and whether the movement is random, predictable, or even controllable. Even though mobility can be exploited to increase spatial coverage or connectivity, the primary requirement for any mobility solution is energy efficiency. This often means maximizing the lifetime of the WSN. This work will be done in the context of the European Project GINSENG (Performance Control in Wireless Sensor Networks), FP7-ICT-2007-2.

PhD Thesis Proposal: G4.4


Title: Multimedia in Wireless Sensor Networks
Keywords: Sensors, Multimedia
Supervisor: Prof. Jorge Sá Silva (sasilva@dei.uc.pt)
Summary:
Wireless multimedia sensor networks will enhance existing sensor network applications. Transmission of video and sound in Wireless Sensor Networks (WSNs) has recently caught the attention of the research community, with increasing applications of sensors in several areas. However, multimedia network management is not an easy task, as it needs to consider resources, theories, tools, and techniques to manipulate data provided by different and possibly mixed media, with the goal of extracting relevant information. This PhD proposal is particularly interested in applying platform-independent programming environments, like Java or nesC, and in proposing new paradigms for transmitting multimedia information in heterogeneous networks that include sensors and actuators. To optimize the use of resources, this PhD proposal will use cross-layer techniques and in-network processing. The platforms studied will be supported by low-resolution imaging motes, medium-resolution imaging motes based on Stargate platforms, and by Web cameras. This approach will enable multi-resolution views, using different flows from cameras of different granularity, providing different views and enhancing the understanding of the monitored environment.

PhD Thesis Proposal: G4.5


Title: Routing for Resilience in Ambient Networks
Keywords: Routing, resilience, ambient networks
Supervisors: Prof. Edmundo Monteiro (edmundo@dei.uc.pt)
Prof. Marília Curado (marilia@dei.uc.pt)

Summary:
The new types of applications and technologies used for communication among users today, and the diversity of types of users, have shown that traditional routing paradigms are not capable of coping with these recent realities. Therefore, the role of routing in IP networks has shifted from single shortest-path routing to multiple-path routing subject to multiple constraints, such as Quality of Service requirements and fault tolerance. Moreover, traditional routing protocols have several problems concerning routing information distribution which compromise routing decisions. Namely, routing decisions based on inaccurate information due to bad routing configurations, caused either by faulty or malicious actions, will cause severe disruption of the service that the network should provide. These issues are particularly important in networks that involve different types of communication devices and media, as happens in ambient networks. Ambient networks pose an additional challenge to routing protocols, since network composition changes very often when compared to traditional IP networks, and networks are expected to cooperate with each other on demand without relying on previous configuration. Moreover, associated with the dynamic structure of ambient networks, traffic patterns also change very often due to the composition and decomposition of the network structure. The work proposed for this thesis aims at studying the vulnerabilities of current routing protocols used in the Internet and at proposing a resilient routing scheme that overcomes these weaknesses in order to improve network availability and survivability. The work will comprise the study of the state of the art of routing protocols for resilience and of the characteristics of ambient networks, and the proposal of enhancements to existing routing schemes in order to improve the contribution of the routing protocol to the resilience of ambient networks.
The research work of the PhD candidate will be included in the European Union Integrated Project WEIRD (WiMAX Extension to Isolated Research Data networks, http://www.ist-weird.eu).
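To illustrate the shift from single shortest-path routing to multiple-path routing for resilience, the sketch below greedily computes two link-disjoint paths in a toy topology: a primary path, and a backup that avoids the primary's links so that any single link failure leaves one path alive. This is an illustrative toy (a production scheme would use something like Suurballe's algorithm to avoid greedy dead ends), not the resilient routing scheme to be proposed:

```python
from collections import deque

def bfs_path(adj, src, dst, banned=frozenset()):
    """Shortest path by BFS, ignoring links in `banned`
    (links are represented as frozensets of their two endpoints)."""
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and frozenset((u, v)) not in banned:
                prev[v] = u
                queue.append(v)
    return None  # dst unreachable under the current bans

def two_disjoint_paths(adj, src, dst):
    """Primary path, then a backup that avoids the primary's links."""
    first = bfs_path(adj, src, dst)
    if first is None:
        return None, None
    used = {frozenset(edge) for edge in zip(first, first[1:])}
    return first, bfs_path(adj, src, dst, banned=used)

# Square topology with two link-disjoint routes between A and D.
adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(two_disjoint_paths(adj, "A", "D"))  # (['A', 'B', 'D'], ['A', 'C', 'D'])
```

Keeping such a precomputed backup is one of the simplest resilience mechanisms: when the primary path fails, traffic can be switched over without waiting for the routing protocol to reconverge.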

PhD Thesis Proposal: G4.6


Title: Community networks connectivity and service guarantees
Keywords: Mobility, nomadicity, community networks
Supervisor: Prof. Fernando Boavida (boavida@dei.uc.pt)

Summary:
A number of recent technological developments have enabled the formation of wireless community-wide local area networks. Dispersed users (residents or moving users) within the boundaries of a geographical region (neighbourhood or municipality) form a heterogeneous network and enjoy network services such as Internet connectivity. This environment, named Community Networks, is well suited for both traditional Internet access and the deployment of peer-to-peer services. Achieving and retaining connectivity in this highly heterogeneous environment is a major issue. Although the technology advances in wireless networks are fairly mature, one further step in the management of Community Networks is to provide mobility and nomadicity support. Nomadicity allows connectivity everywhere, while mobility includes the maintenance of connections and sessions while the node is moving from one place to another. Mobility and nomadicity in community and home networks still pose several challenges, as these environments are highly heterogeneous. Seamless handover between different layer-two technologies is still a challenge. Seamless multimedia content distribution to the home may involve several network technologies, such as WLAN, power-line, GPRS, or UMTS. Thus, the inter-layer issues involved are complex, and a lot of work is required to match MIP6 with them in order to provide seamless mobility for multimedia information. The ability to support sessions on multiple access networks is another open issue. These issues will be the central concern of the proposed PhD work. The research work of the PhD candidate will be included in the European Union IST FP6 CONTENT Network of Excellence (CONTENT, Content Networks and Services for Home Users, http://www.ist-content.eu) and will be carried out in close cooperation with a foreign institution.

G5: Information Systems http://isg.dei.uc.pt/

PhD Thesis Proposal: G5.1


Title: Learning Contexts and Social Networking
Keywords: e-learning 2.0, e-learning, learning contexts, learning, LMS, social networking, technology enhanced learning (TEL), web 2.0

Supervisor: Prof. António Dias de Figueiredo (adf@dei.uc.pt)
Summary:


This thesis concentrates on the design, implementation, and management of learning contexts. The major tenet of our Learning Contexts Project, which has been gaining strength in the course of over thirty years devoted to ICT and Education, is that the future of learning is not to be found just in content, but also, and very much, in context, that is, in making learning happen within activity-rich, interaction-rich, and culturally rich social environments that never existed before, that the intelligent use of technology is making possible, and where different paradigms apply. Many of the most dynamic fields of research in learning and education, such as computer-supported cooperative learning, situated learning, or learning communities, relate to learning contexts. Hundreds of expressions used in education, such as project-based learning, action learning, learning by doing, case studies, scenario building, simulations, and role playing, pertain to learning contexts. The advantage of concentrating on context as a whole, rather than on the multiplicity of its manifestations studied by disparate research groups, is that, by doing so, we can articulate that multitude of theories and practices into a single, coherent, organic, and operational worldview. The proposed thesis pushes forward our current efforts in this field by exploring the relationship between Learning Contexts and Social Networking. This may include collaboration with another of our projects, the Serendipity Project, centred on the development of a serendipitous social search engine for which we hold a US patent application. In order to stimulate the creativity of the candidates, plenty of leeway will be given to them, so that they may choose to concentrate on theoretical aspects, on practical educational issues, or on the specification and design of the ideal, and yet nonexistent, learning context management system (LXMS). Prospective candidates wishing to clarify the research implications of learning contexts may download from the journal Interactive Educational Multimedia our paper "Learning Contexts: a Blueprint for Research". Further information can be obtained by downloading Chapter 1, "Context and Learning: a Philosophical Framework", of our book Managing Learning in Virtual Settings: the Role of Context, published by Information Science Publishing (Idea Group).
Successful candidates will have, or be willing to develop throughout their PhD, a mixed profile of educational technologist and educational and social researcher.
- Figueiredo, A. D. (2005). Learning Contexts: A Blueprint for Research. Interactive Educational Multimedia, No. 11, October 2005. http://www.ub.es/multimedia/iem/down/c11/Learning_Contexts.pdf
- Figueiredo, A. D. and Afonso, A. P. (2005). Context and Learning: a Philosophical Framework. In Figueiredo, A. D. and A. P. Afonso (eds.), Managing Learning in Virtual Settings: The Role of Context. Information Science Publishing (Idea Group), October 2005. http://www.idea-group.com/downloads/excerpts/Figueiredo01.pdf

PhD Thesis Proposal: G5.2


Title: Quality management and information systems: getting more than the sum of the parts
Keywords: Quality Management, Information Systems
Supervisor: Prof. Paulo Rupino (rupino@dei.uc.pt)
Summary:
Quality management of products, services, and business processes is, today, a key issue for the success of most companies operating in global contexts. In fact, holding a quality certification, such as established by the ISO 9001:2000 standard, is becoming a basic requirement for companies to play in several international markets. On the other hand, the design and deployment of information systems is another key aspect to consider when modern organizations define their business models and strategy. It is quite surprising, thus, that in spite of the fact that both quality management and information systems architecting require intensive strategic analysis and the extensive involvement of staff and managers in the examination and redesign of business processes, the two endeavors are still treated as completely distinct. They are usually conducted as separate projects, handled by different teams, equipped with unconnected methodologies. The integrated design of these two pillars of modern organizations, in a manner in which they depend on, support, and reinforce each other, enables a quantum leap, as it lets organizational tasks be reengineered in the light of: (i) effectiveness, consistency, and evidence of compliance, as required by quality systems; and (ii) efficiency, harnessing the power of digital information storage, processing, and communication in the renewed business processes. Typical criticisms of traditional implementations of quality management systems can also be alleviated, namely by reducing the added bureaucracy and overhead they impose on users. The economic impact on organizations can be considerable, not only at the initial planning stage but, more importantly, throughout the lifecycle of operation of this unified system. The likelihood of synergy between quality management and the IT infrastructure has been suggested by a few authors, but no systematic processes for leveraging those synergies can be found. A successful PhD in this unexplored field will arm its holder with the skills and tools to act in an increasingly appealing consulting arena.

PhD Thesis Proposal: G5.3


Title: Visualizing and Manipulating Work Load Control over Business Networks
Keywords: Information Systems, Work Load Control Methodology, Human-Computer Interaction, Information Visualization, Direct Manipulation Interfaces, Delegation, Interface Agents
Supervisor: Prof. Licínio Roque (lir@dei.uc.pt)
Summary:


In a previous project we designed a web-based planning and control system for Small and Medium Enterprises (SMEs) operating in Make-To-Order (MTO) clusters (a case in the mould-making industry), which implemented an adaptation of the Work Load Control planning methodology proposed at Lancaster. The Work Load Control methodology enables production management across shopfloors by controlling pending work load levels across workcentres, thus effectively managing overlapping windows of opportunity for completing every task at each production unit. We have developed a system where the user sets the planning conditions and delegates to the system the generation of plan proposals. Unmet restrictions are then iteratively resolved by taking management decisions that adapt a candidate plan to actual production conditions and vice-versa. Some of the conclusions of the project relative to the adoption of the new planning tool were: a) the Work Load Control methodology, while particularly flexible and adaptable for SMEs running MTO operating models, poses a learning obstacle for people trained to think in time-sliced models (like those depicted in Gantt diagrams); b) a web-based system, while easy to deploy and manage, poses a heavy cognitive load, as the principal interaction mode is linguistic (using menus, dialogs, and forms, with some graphics for visualizing the resulting plans); c) current business globalization increasingly involves high levels of subcontracted work that needs to be managed across enterprise networks with only partial knowledge of production conditions, which makes it difficult to use methodologies that assume full knowledge and control over production units. With clients, we have come to the conclusion that a planning tool for the Work Load Control methodology needs visualization and direct manipulation facilities to reduce the cognitive overhead posed by the complexity and non-intuitiveness of the methodology, and to enable the person to dynamically envision and track events across networks of enterprises. This case provides an ideal opportunity to attempt an integration of the linguistic, direct manipulation, and delegation modes of interface, to develop novel visualizations, and to test usability evaluation techniques. The research implies acquiring knowledge of the methodology, and conceiving and studying appropriate solutions for the case study by designing innovative interaction techniques. Relevance is met in Decision Support Systems, Human-Computer Interaction and, more generally, in the Information Systems academic and business communities.

PhD Thesis Proposal: G5.4


Title: Designing Games as Learning Contexts
Keywords: Information Systems, Human-Computer Interaction, Context Engineering, Learning Games, Learning Contexts, Social Constructivism
Supervisor: Prof. Licínio Roque (lir@dei.uc.pt)
Summary:
Several authors have argued that videogames could be exploited as learning environments. James Paul Gee has written extensively on the subject of learning from computer games, noticing that we can hardly ignore the learning that takes place with this new medium. Marc Prensky argued this idea based on the simulation aspects of games. Raph Koster, a designer and consultant, adopts the perspective that games are actually meant to be learned, and makes playing-as-learning the basis for game design. Simon Egenfeldt-Nielsen produced a PhD thesis on the educational potential of computer games, analyzing cases that used commercially available games. Seeing games in the light of socio-technical theories such as Actor-Network Theory, we have come to an interpretation of games as constructions designed and built to enforce specific programs of action, while their storytelling and underlying rules provide the basis for stable translation regimes by the player. Games as simulations can also be understood as embodied theories of physical or social reality. By providing specific embodied concepts and representations, physical or other abstract rules, and characters with recognizable behavior, the designer builds a learning context with conditions of engagement that concur to enable playful experiences. These can be more or less flexible or open to interpretation through player choices, but they always enforce underlying worldviews or inscribed theories that are meant to be learned in the course of playing the game if the player is to achieve the game's goal. It is this activity conditioning that we suppose to be the usable basis for explicitly conceiving games as learning devices.

An alternative approach would be to take game design itself as the learning activity and explore the learning potential inherent in the design activity. Either way, the research should focus on the methodological problem of explicitly modeling and building games as learning contexts. The Context Engineering approach can be used to frame development of specific contexts, by prototyping on available multiplayer game development technology and focusing on design aspects and their relation to the proposed problem. Adequate evaluation techniques should also be a consideration in the studied contexts if they are to be socially accepted as effective learning alternatives. Relevance is expected for the Learning Sciences, Human and Social Sciences, Information Systems and Human Computer Interaction, Game Studies, Media Studies, and society at large.

PhD Thesis Proposal: G5.5


Title: Visual Modeling Language for Game Design Keywords: Information Systems, Human-Computer Interaction, Visual Modeling Language, Games and Design Supervisor: Prof. Licínio Roque (lir@dei.uc.pt) Summary:
Game design is a complex activity that deals with multiple, heterogeneous, concurrent constraints. A game designer has to consider the combined effects of multiple elements, effectively requiring a trans-disciplinary background that can range, with variable intensity, from Humanities to Media Studies, to Psychology and Sociology, to Aesthetics, to Informatics and Economics. While the advent of a common design language still seems far in the future, some design patterns can be readily recognized from the 30+ year history of videogame development. An example of such a systematization is Björk and Holopainen's Patterns in Game Design. Nonetheless, the idea of a visual language for game modeling and design seems not only possible and relevant, but a pressing research goal. In pursuing the goal of a visual modeling language for game design, it is expected that knowledge of socio-technical studies of science and technology will give useful insights into the problems of trans-disciplinarity, and in particular that Actor-Network Theory constructs can serve as a base language for the analysis of game contexts. One requisite for such a language would be that we could build a prototype modeling tool that could be used to sketch and generate game code to be run on a current software game platform. Another basic requisite would be that it could serve as a basis for design dialogue between people with diverse disciplinary backgrounds. This research project would involve the trans-disciplinary background study and the drafting of a language prototype that could then be evaluated and evolved through successive design iterations, against updated usability requirements. Relevance and adequacy would be judged based on actual empirical design experience and historical design accounts, in the fields of Information Systems, Human Computer Interaction, Game Development, Design, Games Studies and Media Studies.

PhD Thesis Proposal: G5.6


Title: Programming Games by Demonstration (and Learning to Program) Keywords: Information Systems, Human-Computer Interaction, Games and Design,
Programming by Demonstration, Programming by Example

Supervisor: Prof. Licínio Roque (lir@dei.uc.pt) Summary:


Game programming and development from scratch requires advanced skills and specialties often unavailable to a game designer or an artist, let alone to the proverbial man-on-the-street. The advent of general-purpose game engines and game generation environments has made it simpler and more affordable, or at least a less specialized task, to develop and deploy complex game scenarios. Yet even the simplest game design attempt still requires, if not mastery of complex programming, at least some skill with a scripting language. Towards democratizing this technology and the videogame medium, and taking the historical lesson from what happened with the appearance of personal filming cameras and the development of the cinema, and again with video and TV, it seems interesting to work on a solution that would enable a wider public to become active participants in the creation of interactive content. Resorting to, and evolving, programming by demonstration or programming by example techniques could play a significant part in lowering the learning barrier to achieving useful effects analogous to behavior scripting. Building a system that enables reflexive action by letting the user inspect and animate the programming results of demonstrative actions could serve as a basis for semi-autonomous learning of programming concepts and skills. It is intended that the candidate researcher pursue this goal by designing and building a prototype system on top of an existing software game platform or virtual environment, and proceed to evaluate the generated concept through actual empirical cases with targeted user segments. Relevance and adequacy would be judged based on actual empirical design experience, with results published in the fields of Information Systems, Human Computer Interaction, Game Development, Design, Games Studies and Media Studies.

G6: Evolutionary and Complex Systems http://cisuc.dei.uc.pt/ecos/

PhD Thesis Proposal: G6.1


Title: Evolving Representations for Evolutionary Algorithms Keywords: Evolutionary Computation, Gene Regulatory Networks, Self-Organization. Supervisor: Prof. Ernesto Costa (ernesto@dei.uc.pt) Summary:
Evolutionary Algorithms typically approach the genotype-phenotype relationship in a simple way. As a matter of fact, conventional EAs consider the genotype as the complex structure and rely on more or less simple mechanisms to do the mapping from genotype to phenotype. Some work is being done on the relation between the user-designed representations used by the EA (the genotype) and the fitness landscape induced by the problem (the phenotype). The idea is to understand the role played by representations in improving evolvability. This is important, but we can advance a step further. Exploring ideas from developmental biology in the context of evolutionary algorithms is not new. Notwithstanding, the challenge here is to better understand how we can combine the theory of evolution with embryonic development in a unified framework and explore it computationally, aiming at evolving the representations to be explored by an EA instead of designing them offline, attaining a self-organized evolutionary algorithm (SOEA). To achieve that goal we have to identify the building blocks for representations as well as the transformational rules that end up in the definition of an adapted individual.
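The conventional one-way mapping can be made concrete with a minimal sketch (the binary-to-real decoding and the toy objective are illustrative inventions, not part of the proposal): a fixed, hand-designed `decode` function turns each genotype into a phenotype, and selection acts only on phenotype fitness.

```python
import random

def decode(genotype, lo=-5.0, hi=5.0):
    """Hand-designed genotype-to-phenotype map: read the bit string
    as an integer and scale it to a real value in [lo, hi]."""
    value = int("".join(map(str, genotype)), 2)
    return lo + (hi - lo) * value / (2 ** len(genotype) - 1)

def fitness(phenotype):
    # Fitness is defined on the phenotype; the EA only manipulates genotypes.
    return -(phenotype ** 2)  # toy objective: optimum at phenotype 0

random.seed(1)
bits = 16
pop = [[random.randint(0, 1) for _ in range(bits)] for _ in range(20)]
initial_best = max(fitness(decode(g)) for g in pop)
best_so_far = initial_best
for _ in range(50):
    # Tournament selection on phenotype fitness, bit-flip mutation on genotypes.
    parents = [max(random.sample(pop, 3), key=lambda g: fitness(decode(g)))
               for _ in range(len(pop))]
    pop = [[b ^ (random.random() < 1 / bits) for b in g] for g in parents]
    best_so_far = max(best_so_far, max(fitness(decode(g)) for g in pop))
```

Note that `decode` stays frozen for the whole run; in the self-organized EA envisaged above, that mapping itself would be subject to variation and selection rather than fixed at design time.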

PhD Thesis Proposal: G6.2


Title: Simulation of the propagation of religious beliefs in artificial societies Keywords: Multi-agent systems, social simulation, religious beliefs. Supervisor: Prof. Joaquim Carvalho (joaquim@dei.uc.pt) Summary:
The computational simulation of processes involving beliefs has been researched in relation to models of agent behavior in multi-agent simulations. Many different definitions of beliefs have been proposed in that context, with practical applications in planning and AI.

In recent times there has been renewed interest in the modeling of beliefs in artificial societies in order to explore real-life phenomena like religions, sects and other collective sharings of representations that have an impact on agent behavior and on the overall distinctive traits of a society. In this setting the question of the propagation of beliefs is central. Belief propagation in simulated societies can be handled at very different, overlapping levels. Religious belief has a geography in the real world, and simulated situations should be able to represent the effect of distance, accessibility and main lines of circulation on the diffusion of beliefs. Then there is the level of networking, because propagation of beliefs requires contact between agents and/or their messages, in a way similar to the propagation of viruses. Finally, there are questions of fitness regarding the environment, because religious beliefs normally regulate the interaction of agents with the environment and can produce different fitness functions that depend, in a complex way, on the behavior, and hence the beliefs, of other agents present at the same locations (religion is often connected to specialization of economic activities, for instance).

The project aims at researching models for the representation of these different levels by developing a system where different formalizations of beliefs can be tested in a framework that provides the necessary interface with geographic, networking and environmental constraints, approaching real-world situations. It is expected that the thesis will contribute to the production of a formal framework to describe the complex interaction of religious beliefs with territory, interpersonal relations and the environment.
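The virus-like, contact-based level of propagation mentioned above can be sketched with a toy SI-style diffusion model (the ring topology, agent count and adoption probability are arbitrary illustrative choices, not a proposed formalization):

```python
import random

def propagate(adjacency, beliefs, steps, p_adopt=0.5, seed=0):
    """Contact-based diffusion: at every step each agent, in random order,
    meets one random neighbour and may adopt that neighbour's belief."""
    rng = random.Random(seed)
    beliefs = list(beliefs)
    for _ in range(steps):
        for agent in rng.sample(range(len(beliefs)), len(beliefs)):
            neighbour = rng.choice(adjacency[agent])
            if beliefs[neighbour] and rng.random() < p_adopt:
                beliefs[agent] = True  # SI-style: adoption only, no abandonment
    return beliefs

# Hypothetical setting: 10 agents on a ring (geography encoded as adjacency),
# with a single initial believer at position 0.
ring = [[(i - 1) % 10, (i + 1) % 10] for i in range(10)]
initial = [i == 0 for i in range(10)]
final = propagate(ring, initial, steps=50)
print(sum(final), "of", len(final), "agents hold the belief")
```

The adjacency list is where the geographic and networking levels would plug in; the environmental fitness level discussed above would additionally feed back into which agents survive and meet.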

PhD Thesis Proposal: G6.3


Title: Evolutionary Hybridization with State-of-the-art Exact Methods Keywords: Evolutionary computation, optimization, hybridization. Supervisor: Prof. Francisco Pereira (xico@dei.uc.pt) Summary:
Standard Evolutionary Algorithms (EAs) often perform poorly when searching for good solutions to complex optimization problems and may benefit from being combined with other techniques. Broadly speaking, we can consider two large classes of hybrid architectures: the EA can be complemented with other search methods, or it can be enhanced with problem-specific heuristics that add explicit knowledge about the problem being solved. A drawback of the research conducted on this topic is that many reported approaches are somewhat naive in nature and of limited applicability. Moreover, EAs are, in most situations, combined with basic standard procedures such as hill-climbing algorithms or simulated annealing. This project aims at conducting an inclusive study of evolutionary hybridization to analyze whether it is possible to develop new architectures that perform better than today's methods. Special attention will be given to hybridization with exact algorithms, like linear programming or gradient-based search. The challenge is to understand how the

key properties of these exact techniques, such as the capability to reduce the search space or the effective exploration of neighborhoods, might be used by the EA to efficiently perform a global exploration of the search space. Several examples of optimization problems will be used to perform a comprehensive analysis of the developed hybrid architectures.
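The basic hybrid scheme, an EA whose offspring are refined by an embedded search step, can be sketched as follows. Here a simple first-improvement hill-climber on the OneMax toy problem stands in for the refinement component (the proposal targets exact techniques such as linear programming, which this sketch does not implement):

```python
import random

def onemax(sol):
    """Toy objective: number of ones in the bit string (optimum = all ones)."""
    return sum(sol)

def hill_climb(sol, fitness, tries=20):
    """Embedded refinement step: random single-bit flips, keeping improvements.
    A stand-in for the exact method in a real hybrid architecture."""
    best, best_f = sol[:], fitness(sol)
    for _ in range(tries):
        i = random.randrange(len(best))
        cand = best[:]
        cand[i] ^= 1
        if fitness(cand) > best_f:
            best, best_f = cand, fitness(cand)
    return best

random.seed(0)
n = 30
pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(10)]
for _ in range(30):
    # Tournament selection, bit-flip mutation, then local refinement of every
    # offspring: the classic memetic (hybrid) loop.
    parents = [max(random.sample(pop, 2), key=onemax) for _ in range(len(pop))]
    pop = [hill_climb([b ^ (random.random() < 1 / n) for b in p], onemax)
           for p in parents]

print(max(onemax(s) for s in pop))
```

The design question the proposal raises is precisely what to put inside `hill_climb`: an exact technique that prunes the search space or explores a neighbourhood to optimality would change the division of labour between global and local search.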

PhD Thesis Proposal: G6.4


Title: Stochastic Local Search for Geometric Folding Keywords: Algorithms, Local Search, Computational Geometry Supervisor: Prof. Luís Paquete (paquete@dei.uc.pt)

Summary: The goal is to develop and analyse stochastic local search algorithms for solving a wide class of geometric folding problems. For a given geometric structure in a given folded state, the corresponding folding problem consists of knowing whether it is possible to reach a desired folded state with a given property. Several folding problems are NP-hard. This is the case of the Ruler Folding Problem, which consists of finding a flat folded state of an open polygonal chain (the ruler) with minimum length; it is always possible to flat-fold a polygonal chain, but it is NP-hard to minimize its length. Also, the Map Folding Problem, that is, finding a flat folded state of a two-dimensional orthogonal paper with an arbitrary mountain-valley crease pattern, becomes NP-complete if we want to know whether there exists a flat folded state that uses all creases. Finally, the well-known Protein Folding Problem for the two- and three-dimensional HP model of protein folding energetics is NP-hard. Stochastic local search algorithms are simple solution methods based on local search that have been applied quite successfully to a number of combinatorial optimization problems. Indeed, they are among the state-of-the-art algorithms for solving classical hard problems such as the travelling salesman problem, the quadratic assignment problem, and the graph colouring problem. However, with the exception of the protein folding problem, they have never been applied to more general geometric folding problems. This work will contribute to the successful application of stochastic local search methods to geometric folding problems in general by proposing and analysing appropriate neighbourhood structures and effective search strategies for a number of optimization problems that arise in this field. References: Erik Demaine and Joseph O'Rourke, Geometric Folding Algorithms: Linkages, Origami, Polyhedra, Cambridge University Press, 2007.

Holger Hoos and Thomas Stützle, Stochastic Local Search: Foundations and Applications, Morgan Kaufmann, Elsevier, 2004.
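As an illustration of the kind of algorithm envisaged, the Ruler Folding Problem can be cast as assigning a direction (+1 or -1) to each link of the chain, the folded length being the span of the signed prefix sums. A stochastic local search with a one-flip neighbourhood and occasional uphill moves might then look like the following toy sketch (not a proposed thesis method; the iteration budget and walk probability are arbitrary):

```python
import random

def folded_length(lengths, dirs):
    """Span of the flat-folded chain: joint i sits at the signed prefix sum."""
    x = lo = hi = 0.0
    for seg, d in zip(lengths, dirs):
        x += d * seg
        lo, hi = min(lo, x), max(hi, x)
    return hi - lo

def sls_ruler_fold(lengths, iters=2000, walk_prob=0.1, seed=0):
    """Stochastic local search over direction assignments of the links."""
    rng = random.Random(seed)
    dirs = [rng.choice((-1, 1)) for _ in lengths]
    cur = folded_length(lengths, dirs)
    best = cur
    for _ in range(iters):
        i = rng.randrange(len(dirs))
        dirs[i] = -dirs[i]                 # one-flip neighbourhood move
        cand = folded_length(lengths, dirs)
        # Accept non-worsening moves; accept worsening moves with small
        # probability so the search can escape local optima.
        if cand <= cur or rng.random() < walk_prob:
            cur = cand
            best = min(best, cur)
        else:
            dirs[i] = -dirs[i]             # undo the move
    return best

print(sls_ruler_fold([3, 2, 2, 3]))
```

For the chain [3, 2, 2, 3] the minimum flat-folded length is 3 (directions +, -, +, -), which the search finds easily; note that the span can never drop below the longest single link. This instance also has a strict local optimum of span 4, which is why the sketch allows occasional uphill moves.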
