... and Systems Engineering; G4: Communications and Telematics; G5: Information Systems; G6: Evolutionary and Complex Systems
The list of PhD proposals is organized by research group.
Each proposal has a reference Gx.y (x is the number of the group; y is the number of the proposal within that group).
G1: Cognitive and Media Systems (http://cisuc.dei.uc.pt/csg/)
PhD Thesis Proposal: G1.1
Title: Human-readable ATP proofs in Euclidean Geometry
Keywords: Automatic Theorem Proving, Axiomatic Proofs in Euclidean Geometry
Supervisor: Prof. Pedro Quaresma (pedro@mat.uc.pt)
Summary:
Automated theorem proving (ATP) in geometry has two major lines of research: the axiomatic proof style and the algebraic proof style (see [6], for instance, for a survey). Algebraic proof style methods are based on reducing geometric properties to algebraic properties expressed in terms of Cartesian coordinates. These methods are usually very efficient, but the proofs they produce do not reflect the geometric nature of the problem and give only a yes/no conclusion. Axiomatic methods attempt to automate traditional geometry proof methods that produce human-readable proofs. Building on top of existing ATPs, namely GCLCprover [4, 5, 8, 9, 10], which implements the area method [1, 2, 3, 7, 8, 11], or ATPs dealing with constructions [6], the goal is to build an ATP capable of producing human-readable proofs, with a clean connection between the geometric conjectures and their proofs.
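For illustration, the area method implemented by GCLCprover manipulates geometric invariants such as the signed area of a triangle; a standard example of the kind of statement it reasons with (a generic textbook fact, not specific to this proposal):

```latex
% The area method reasons with the signed area S_{ABC} of a triangle ABC.
% A standard lemma: three points are collinear iff their signed area vanishes.
\[
  \mathcal{S}_{ABC} = 0 \iff A,\ B,\ C \ \text{are collinear}
\]
% Proofs proceed by eliminating constructed points from such invariants,
% yielding human-readable steps instead of opaque coordinate computations.
```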
References:
[1] Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong Zhang. Automated production of traditional proofs for constructive geometry theorems. In Moshe Vardi, editor, Proceedings of the Eighth Annual IEEE Symposium on Logic in Computer Science (LICS), pages 48-56. IEEE Computer Society Press, June 1993.
[2] Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong Zhang. Automated generation of readable proofs with geometric invariants, I. Multiple and shortest proof generation. Journal of Automated Reasoning, 17:325-347, 1996.
[3] Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong Zhang. Automated generation of readable proofs with geometric invariants, II. Theorem proving with full-angles. Journal of Automated Reasoning, 17:349-370, 1996.
[4] Predrag Janičić and Pedro Quaresma. Automatic verification of regular constructions in dynamic geometry systems. In Proceedings of ADG 2006, 2006.
[5] Predrag Janičić and Pedro Quaresma. System description: GCLCprover + GeoThms. In Ulrich Furbach and Natarajan Shankar, editors, IJCAR 2006, LNAI. Springer-Verlag, 2006.
[6] Noboru Matsuda and Kurt VanLehn. GRAMY: A geometry theorem prover capable of construction. Journal of Automated Reasoning, 32:3-33, 2004.
[7] Julien Narboux. A decision procedure for geometry in Coq. In Proceedings of TPHOLs 2004, volume 3223 of Lecture Notes in Computer Science. Springer, 2004.
[8] Pedro Quaresma and Predrag Janičić. Framework for constructive geometry (based on the area method). Technical Report 2006/001, Centre for Informatics and Systems of the University of Coimbra, 2006.
[9] Pedro Quaresma and Predrag Janičić. GeoThms - geometry framework. Technical Report 2006/002, Centre for Informatics and Systems of the University of Coimbra, 2006.
[10] Pedro Quaresma and Predrag Janičić. Integrating dynamic geometry software, deduction systems, and theorem repositories. In J. Borwein and W. Farmer, editors, MKM 2006, LNAI. Springer-Verlag, 2006.
[11] Jing-Zhong Zhang, Shang-Ching Chou, and Xiao-Shan Gao. Automated production of traditional proofs for theorems in Euclidean geometry I. The Hilbert intersection point theorems. Annals of Mathematics and Artificial Intelligence, 13:109-137, 1995.
Title: Image classification and retrieval based on stylistic and aesthetic criteria
Keywords: Content-Based Image Retrieval, Computational Aesthetics, Artificial Intelligence
Supervisor: Prof. Penousal Machado (machado@dei.uc.pt)
Summary: The increasing volume of digital multimedia content, both online and offline, coupled with the unstructured nature of the World Wide Web (WWW), makes the need for appropriate classification and retrieval techniques more pressing than ever. As a result, there is a growing interest in Content-Based Image Retrieval (CBIR), clearly demonstrated by the (exponentially) increasing number of research papers in the area. The automatic classification of images according to stylistic and aesthetic criteria would allow: image browsers and search engines to take into account the user's aesthetic preferences; online artwork sites to tailor their offer to match the implicit preferences revealed by the previous purchases of a specific user; online museums to reorganize their exhibitions according to user preferences, thus offering personalized virtual tours; and digital cameras to make suggestions regarding photographic composition. Additionally, this type of system could be used to automatically index image databases or even, if coupled with an image generation system, to create images of a particular style or possessing certain aesthetic qualities.
As the title indicates, the main goal of this thesis is the development of techniques for stylistic and aesthetic image classification and retrieval, which is a relatively unexplored area. Focusing on our own research efforts, in [1] we used a subset of the features proposed in this project to train an ANN classifier for author identification. To the best of our knowledge, [2] was the first computational system that dealt with aesthetic classification and/or evaluation tasks. Works such as [3, 4, 5] explore the use of an autonomous image classifier in the context of evolutionary art. This thesis will be conducted in the Cognitive and Media Systems Group of CISUC in close collaboration with the RNASA Laboratory of the University of A Coruña and is an integral part of the research project TIN2008-06562/TIN. A one-year renewable scholarship is available. A toy sketch of one compression-based feature is given after the references below.
1. Machado, P., Romero, J., Ares, M., Cardoso, A., and Manaris, B., "Adaptive Critics for Evolutionary Artists", 2nd European Workshop on Evolutionary Music and Art, Coimbra, Portugal, April 2004.
2. Machado, P. and Cardoso, A., "Computing Aesthetics", in Oliveira, F., editor, XIVth Brazilian Symposium on Artificial Intelligence (SBIA'98), LNAI Series, Porto Alegre, Brazil, Springer, 1998, pp. 219-294.
3. Machado, P., Romero, J., Cardoso, A., and Santos, A., "Partially Interactive Evolutionary Artists", New Generation Computing, Special Issue on Interactive Evolutionary Computation, H. Takagi (ed.), January 2005.
4. Machado, P., Romero, J., Santos, A., Cardoso, A., and Pazos, A., "On the development of evolutionary artificial artists", Computers & Graphics, Vol. 31, No. 6, pp. 818-826, Elsevier, December 2007.
5. Machado, P., Romero, J., and Manaris, B., "Experiments in computational aesthetics", in Romero, J. and Machado, P. (eds), The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music, Natural Computing Series, Springer, 2007, pp. 381-415.
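The sketch promised above: the aesthetics work in [2, 5] estimates image complexity from the behaviour of lossy compression, and a minimal version of one such feature can be written as follows (a hypothetical helper, assuming Pillow; far simpler than the feature set envisaged in the project):

```python
# Sketch: a compression-based image complexity feature, in the spirit of
# the complexity estimates of [2, 5]. Hypothetical helper, not project code.
import io
from PIL import Image  # pip install Pillow

def jpeg_complexity(image_path: str, quality: int = 75) -> float:
    """Ratio of JPEG-compressed size to raw pixel size.

    Images that compress badly (ratio near 1) are visually complex;
    images that compress well (ratio near 0) are visually simple.
    """
    img = Image.open(image_path).convert("RGB")
    raw_size = img.width * img.height * 3          # uncompressed RGB bytes
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell() / raw_size

# An ANN classifier, as in [1], would consume a vector of such features
# computed at several quality settings and colour channels.
```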
"... affect our perceptions of beauty and happiness, they could touch people's lives in fantastic new ways" (Hugo Liu). As the title indicates, this thesis seeks: to reach a deeper understanding of artistic creative behavior; to study and develop models that may capture essential aspects of beauty, emotional response and creativity; and, ultimately, to develop intelligent agents that implement these models. To pursue this goal the candidate will be integrated in the multifaceted team of researchers of the Cognitive and Media Systems Group of CISUC, which has vast experience in fields such as Computational Aesthetics, Evolutionary Computation, Artificial Neural Networks, Music Information Retrieval, Creative Systems and Computational Art.
1. Romero, J. and Machado, P. (eds), The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music, Natural Computing Series, Springer, 2007.
2. Dissanayake, E., Homo Aestheticus, University of Washington Press, 1995.
3. Machado, P., Romero, J., and Manaris, B., "Experiments in computational aesthetics", in Romero, J. and Machado, P. (eds), The Art of Artificial Evolution: A Handbook on Evolutionary Art and Music, Natural Computing Series, Springer, 2007, pp. 381-415.
Title: Self-Adaptation and Evolution of Bio-Inspired Algorithms
Keywords: Adaptation, Evolution, Bio-Inspired Algorithms, Complexity Science
Supervisors: Prof. Penousal Machado (machado@dei.uc.pt), Dr. Jorge Tavares (jorge.tavares@ieee.org)
Summary: In spite of the performance improvements of biologically inspired techniques, such as evolutionary algorithms and swarm intelligence, biological knowledge has advanced faster than our ability to incorporate novel ideas from the life sciences into these methods. Since nature has inspired several different kinds of optimization and learning algorithms, we consider that it remains a source of improvements and new techniques. Usually, achieving competitive results requires the development of problem-specific operators and representations, and parameter fine-tuning [1, 2]. As a result, much of the research practice in Bio-Inspired Algorithms focuses on these aspects. Following previous research on this topic [3, 4], this thesis should contribute to the study and design of nature-inspired methods that can adapt themselves to the problem they are solving [3-5]. The evolution of components such as representations, operators and parameters [3] may contribute to performance improvements, give insight into the idiosyncrasies of particular problems, alleviate the burden on researchers designing bio-inspired algorithms, and push the frontiers of problem-solving. A small illustrative sketch of the self-adaptation mechanism is given after the references.
References:
1. Eiben, A.E. and Smith, J.E., Introduction to Evolutionary Computing, Natural Computing Series, Springer, 2007.
2. Beyer, H.-G. and Meyer-Nieberg, S., "Self-Adaptation in Evolutionary Algorithms", in F. Lobo, C. Lima, and Z. Michalewicz, editors, Parameter Setting in Evolutionary Algorithms, pp. 47-75, Springer, Berlin, 2007.
3. Tavares, J., Machado, P., Cardoso, A., Pereira, F.B., and Costa, E., "On the Evolution of Evolutionary Algorithms", in Proceedings of EuroGP 2004, 7th European Conference on Genetic Programming, Coimbra, Portugal, April 2004.
4. Machado, P., Tavares, J., Cardoso, A., Pereira, F.B., and Costa, E., "Evolving Creativity", Computational Creativity Workshop, 7th European Conference on Case-Based Reasoning, Madrid, August 2004.
5. Oltean, M., "Evolving Evolutionary Algorithms Using Linear Genetic Programming", Evolutionary Computation, MIT Press, 13(3):387-410, 2005.
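The sketch promised above: the classic evolution-strategy mechanism surveyed in [2], in which each individual carries and mutates its own mutation step size (a generic textbook mechanism, not the specific methods to be developed in the thesis):

```python
# Minimal (mu, lambda)-style self-adaptation sketch: each individual
# evolves its own mutation step size sigma alongside its solution vector.
import math
import random

def mutate(x, sigma, tau=None):
    """Log-normal self-adaptation of sigma, then Gaussian mutation of x."""
    tau = tau or 1.0 / math.sqrt(len(x))
    new_sigma = sigma * math.exp(tau * random.gauss(0, 1))
    new_x = [xi + new_sigma * random.gauss(0, 1) for xi in x]
    return new_x, new_sigma

def sphere(x):  # toy fitness: minimize the sum of squares
    return sum(xi * xi for xi in x)

pop = [([random.uniform(-5, 5) for _ in range(10)], 1.0) for _ in range(20)]
for gen in range(200):
    offspring = [mutate(x, s) for (x, s) in pop for _ in range(5)]
    pop = sorted(offspring, key=lambda ind: sphere(ind[0]))[:20]
print(min(sphere(x) for x, _ in pop))  # approaches 0 as sigma adapts
```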
In this thesis, a planner that combines decision-theoretic planning with the methodology of HTN planning should be built in order to deal with uncertain, dynamic, large-scale real-world domains [Macedo & Cardoso, 2004]. Unlike in regular HTN planning, methods for task decomposition should not be used, but instead cases of plans. The planner should generate a variant of an HTN, a kind of AND/OR tree of probabilistic conditional tasks, that expresses all the possible ways to decompose an initial task network. A minimal sketch of such a structure is given after the references.
References:
Macedo, L. and Cardoso, A. (2004). "Case-Based, Decision-Theoretic, HTN Planning". In Advances in Case-Based Reasoning: Proceedings of the 7th European Conference on Case-Based Reasoning, P. Calero and P. Funk (eds), Berlin, Springer, pp. 257-271.
Macedo, L., The Exploration of Unknown Environments by Affective Agents, PhD Thesis, 2006.
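The sketch promised above: a hypothetical data structure for an AND/OR tree of probabilistic conditional tasks. The names and the expected-utility recursion are illustrative assumptions, not the planner of [Macedo & Cardoso, 2004]:

```python
# Hypothetical sketch of an AND/OR tree of probabilistic conditional tasks.
# AND nodes decompose a task into subtasks that must all be carried out;
# OR nodes choose among alternative decompositions; leaves are primitive
# tasks with a success probability and a utility.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                      # "AND", "OR", or "LEAF"
    children: List["Node"] = field(default_factory=list)
    prob: float = 1.0              # success probability (leaves)
    utility: float = 0.0           # utility of executing the task (leaves)

def expected_utility(node: Node) -> float:
    if node.kind == "LEAF":
        return node.prob * node.utility
    if node.kind == "AND":         # all subtasks contribute
        return sum(expected_utility(c) for c in node.children)
    return max(expected_utility(c) for c in node.children)  # OR: best option

plan = Node("OR", [
    Node("AND", [Node("LEAF", prob=0.9, utility=5),
                 Node("LEAF", prob=0.8, utility=4)]),
    Node("LEAF", prob=0.99, utility=6),
])
print(expected_utility(plan))  # picks the best decomposition: 7.7
```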
This thesis addresses the problem of finding multi-agent strategies for the collaborative exploration of unknown, 3-D, dynamic environments. The strategy or strategies should be tested against other exploration strategies found in the literature.
References:
Macedo, L., The Exploration of Unknown Environments by Affective Agents, PhD Thesis, 2006.
Macedo, L. and Cardoso, A. (2004). "Exploration of Unknown Environments with Motivational Agents". In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, N. Jennings and M. Tambe (eds), New York, IEEE Computer Society, pp. 328-335.
... adaptation, context creation, and access to information. António Damásio (1999) regards consciousness as part of an organism's survival kit, allowing planned rather than instinctual responses. He also points out that awareness of self allows a concern for one's own survival, which increases the drive to survive, although how far consciousness is involved in behaviour is an actively debated issue. The possibility of machine (or robot) consciousness has intrigued philosophers and non-philosophers alike for decades. Could a machine really think or be conscious? Could a robot really subjectively experience the smelling of a rose or the feeling of pain? Several tests, such as those based on the Turing Test, have been developed which attempt to provide an operational definition of consciousness and to determine whether computers and other non-human animals can demonstrate, by passing these tests, that they are conscious. The goal of this thesis is to simulate consciousness in computers based on one or more theories of human consciousness.
... mechanisms are necessary to extract the hidden information and to reveal the structure in a way that the Semantic Web community can benefit from, thus providing added value to the end user. On the other hand, the established way of representing knowledge gained from unstructured data can be beneficial for the Web 2.0, in that it provides Web 2.0 users with enhanced Semantic Web features to structure their data. The aim of this thesis is to bridge the gap between the Semantic Web and Web 2.0 environments. Since both ideas have in common the improvement of search and semantics on the web, the combination of these techniques is an important step towards a more intelligent web as Tim Berners-Lee envisioned [Berners-Lee, T., J. Hendler, and O. Lassila, "The Semantic Web", Scientific American, 2001, 284(5), pp. 34-43]. Techniques can be, but are not limited to, social network analysis, graph analysis, machine learning, ontology learning, text mining or web mining methods [Peter Mika, "Ontologies are us: A unified model of social networks and semantics", Journal of Web Semantics, 5(1), pp. 5-15, 2007].
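To make the kind of technique concrete: the model cited above derives lightweight semantics from the actor-concept-instance structure of folksonomies. A toy sketch of one such step, building a tag co-occurrence graph from tagging triples (data and names are illustrative only, not Mika's implementation):

```python
# Toy sketch: derive a tag co-occurrence graph from (user, tag, resource)
# triples, one simple step towards emergent semantics from Web 2.0 data.
from collections import defaultdict
from itertools import combinations

triples = [  # illustrative folksonomy data
    ("alice", "semantic-web", "page1"), ("alice", "rdf", "page1"),
    ("bob", "rdf", "page1"), ("bob", "ontology", "page2"),
    ("carol", "semantic-web", "page2"), ("carol", "ontology", "page2"),
]

# tags attached to the same resource are taken as related
tags_by_resource = defaultdict(set)
for user, tag, resource in triples:
    tags_by_resource[resource].add(tag)

cooccurrence = defaultdict(int)
for tags in tags_by_resource.values():
    for a, b in combinations(sorted(tags), 2):
        cooccurrence[(a, b)] += 1

for (a, b), w in sorted(cooccurrence.items()):
    print(f"{a} -- {b}: {w}")   # weighted edges of the tag graph
```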
The main objective of this thesis is to develop a set of tools based on the Semantic Web. These tools are intended to have a set of intelligent characteristics, such as learning, proactive reasoning, semantic searching and retrieval of knowledge, knowledge representation, knowledge acquisition, personalization, and others. Several reasoning methods have been developed in Artificial Intelligence and are ideal candidates to be used in this research work. Expected results of this research work include new algorithms and methodologies for knowledge management.
Summary:
Although a myriad of positioning technologies exists today (from the common GPS to Wireless, GSM cell or Ultra-Wide-Band positioning algorithms), the interpretation of what exactly a position means is still cumbersome. For example, the information that "we are at longitude 4.234W and latitude 30.123N" or "my current GSM cell ID is 1098" is poor in terms of meaning for a user. Information such as "I am in Morocco", "my current location is Coimbra" or "I am at work" is clearly richer and more useful for a wealth of applications. This is known as the "From Position to Place" problem (Hightower, 2003) and is currently a hot topic in the Ubiquitous Computing area. The primary goal of this PhD project is to study and develop methodologies that can contribute to solving this problem. The approach is expected to take into account the user model, context and social interaction. This work is one of the central topics of research of the Ubiquitous Systems Group of the AILab and has a high potential of applicability in a range of state-of-the-art ubiquitous systems.
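A toy illustration of the gap between position and place follows; the place database and the purely geometric matching rule are illustrative assumptions, whereas real solutions would fuse the user model, context and social ties on top of such a lookup:

```python
# Toy "position to place" resolution: map raw coordinates to the nearest
# named place within a radius. Illustrative only.
import math

PLACES = [  # (name, lat, lon, radius_km) -- illustrative database
    ("home", 40.2056, -8.4196, 0.1),
    ("work (DEI, Coimbra)", 40.1861, -8.4148, 0.3),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 6371 * 2 * math.asin(math.sqrt(a))

def to_place(lat, lon):
    for name, plat, plon, r in PLACES:
        if haversine_km(lat, lon, plat, plon) <= r:
            return name
    return f"unknown place near ({lat:.4f}, {lon:.4f})"

print(to_place(40.1862, -8.4150))  # -> "work (DEI, Coimbra)"
```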
This PhD thesis should focus on the development of efficient algorithms for the aggregation, update, filtering and map matching of incoming GPS traces. Work on these algorithms has already started in CMS, and the incoming PhD student will be integrated in a highly motivated team.
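As a flavour of the filtering step, the fragment below drops GPS fixes whose implied speed is physically implausible, a common cleaning pass before aggregation and map matching (the threshold and data layout are assumptions for illustration):

```python
# Sketch: filter GPS fixes whose implied speed exceeds a plausibility
# threshold, a typical cleaning pass before map matching.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dl/2)**2
    return 6371 * 2 * math.asin(math.sqrt(a))

def filter_trace(fixes, max_speed_kmh=150.0):
    """fixes: time-ordered list of (timestamp_seconds, lat, lon)."""
    clean = [fixes[0]]
    for t, lat, lon in fixes[1:]:
        t0, lat0, lon0 = clean[-1]
        dt_h = (t - t0) / 3600.0
        if dt_h > 0 and haversine_km(lat0, lon0, lat, lon) / dt_h <= max_speed_kmh:
            clean.append((t, lat, lon))
    return clean

trace = [(0, 40.2000, -8.4200), (10, 40.2003, -8.4199),
         (20, 41.0000, -8.0000),   # impossible jump -> dropped
         (30, 40.2006, -8.4198)]
print(len(filter_trace(trace)))    # 3
```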
... students' learning. However, problems persist, and it is necessary to investigate new solutions that may help programming students and teachers. The learning communities concept has existed for some time. It has been presented as a way to create rich learning contexts where teachers, students and other people, namely experts, can coexist and collaborate in the production of knowledge, consequently leading to enhanced learning. This thesis proposal includes, first, the study of the characteristics of representative successful learning communities and, after that study, the proposal and creation of a learning communities support platform specially adapted to the needs of students learning to program. The platform and its utilization will undergo a full evaluation, in order to assess its success in promoting programming learning. It is expected that this platform will include innovative characteristics, for example the inclusion of virtual members that may interact with real members when necessary, and specially tailored features and tools that may improve the quality of programming learning.
Summary:
Initial programming learning is known to be a difficult task for many novice students at college level. In those courses it is common to use a set of typical problems to introduce students to basic programming concepts and to stimulate them to develop their first programs and programming skills. This work is essential, since it should allow beginners to develop the basic programming problem-solving skills that will be further developed and refined later. This first learning stage is therefore crucial to students' performance in all programming-related courses. This thesis proposal includes a study of the different ways students approach these typical basic problems, leading to the identification of common problem-solving patterns. Some of these patterns will be adequate, while others will not lead to the development of correct solutions, being considered wrong or erroneous patterns that must be identified and corrected in students' strategic knowledge. Based on this information, the main objective of the thesis will be the proposal, implementation and evaluation of methods and/or tools that may identify novice students' strategies, categorize typical wrong patterns and common errors, and interact with students, giving personalized remediation feedback when necessary. The forms of this feedback must also be studied, so that it becomes effective not only in helping students solve the current problem, but mainly in helping them develop better approaches that may lead to correct solutions in later problems and learning stages.
Summary: Initial programming learning is quite hard for many students. Although several factors may contribute to this situation, the lack of basic mathematical proficiency is probably one of the most relevant. In fact, preliminary studies by our research group established that programming learning difficulties are often accompanied by a deep lack of basic mathematical concepts. However, it is not clear which basic mathematical concepts and cognitive competencies are the most important for developing the needed programming skills, or even whether the development of those concepts and abilities has a direct impact on programming learning. If this is the case, how can we develop tools that enable students to rapidly acquire these competencies in the context of basic programming education? The main objective of this thesis will be to provide answers to the above research questions. This means that it will include both a diagnosis and an intervention phase. The first phase (a field study) implies a comparison of mathematical knowledge and skills, confronting results from students with programming learning difficulties with results from students who learn programming without major difficulties, working in the same context. Another interesting study would be to consider groups of programming experts and novices. The second phase will be based on the analysis of the results of the field study and should conclude with a proposal of specific teaching and learning strategies that may be applied in the context of programming courses and that may lead to an improvement in the learning results of many students. Ideally, this proposal should be instantiated and evaluated so as to ascertain its validity.
... safety assurance for encryption, but its results, corollaries and uses go far beyond this. In the vast majority of applications it is fundamental to maximize an entropy function so as to derive the required results. But this optimization is rarely trivial or well known, involving incomplete information. In particular, this is the case for the analysis of NP-hard problem instances. This thesis proposes the study and development of a Maximum Entropy Diagnosis tool for information characterization in the presence of incomplete knowledge.
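For reference, the optimization at the heart of such a tool is the classical maximum entropy program; a standard textbook formulation (not specific to this proposal) is:

```latex
% Maximum entropy under moment constraints: choose the least-committal
% distribution consistent with the known expectations F_k.
\[
  \max_{p} \; H(p) = -\sum_{i} p_i \log p_i
  \quad \text{s.t.} \quad \sum_i p_i = 1, \qquad \sum_i p_i\, f_k(x_i) = F_k
\]
% Lagrangian duality gives the Gibbs form of the solution:
\[
  p_i = \frac{1}{Z(\lambda)} \exp\!\Big(-\sum_k \lambda_k f_k(x_i)\Big),
  \qquad Z(\lambda) = \sum_i \exp\!\Big(-\sum_k \lambda_k f_k(x_i)\Big).
\]
```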
Summary: A recurring problem within knowledge-based approaches is the need to identify patterns and, moreover, to apply recognition tools, which require a similarity measure. But how can we measure the similarity between two genotypes, two computer programs or two echocardiographic lines? This theme intends to study similarity and distance measures useful for data mining, pattern recognition, learning and automatic semantic extraction. After a state-of-the-art survey, the focus of this study should be the confrontation of two very different approaches: that of the Normalized Information Distance (based on Kolmogorov complexity) versus the more common optimization strategies, deriving guidelines for choosing the more adequate approach for specific applications like the ones mentioned above.
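Since Kolmogorov complexity is uncomputable, the Normalized Information Distance is used in practice through its compression-based approximation, the Normalized Compression Distance (Cilibrasi and Vitányi); a minimal sketch with a standard compressor:

```python
# Normalized Compression Distance: a computable stand-in for the
# Normalized Information Distance, replacing Kolmogorov complexity K(x)
# with the length C(x) of a real compressor's output.
import zlib

def C(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ACGTACGTACGTACGT" * 20   # toy "genotype" strings
b = b"ACGTACGAACGTACGT" * 20   # one mutation per repeat
c = b"TTTTGGGGCCCCAAAA" * 20
print(ncd(a, b))  # small: the sequences share most of their structure
print(ncd(a, c))  # larger: little shared structure
```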
Melody extraction from polyphonic audio is a research area of increasing interest in Music Information Retrieval (MIR). It has a wide range of applications in various fields, including music information retrieval (particularly in query-by-humming, where the user hums a tune to search a database of musical audio), automatic melody transcription, performance and expressiveness analysis, extraction of melodic descriptors for music content metadata, and plagiarism detection, to name but a few. This area has become increasingly relevant in recent years, as digital music archives are continuously expanding. The current state of affairs presents new challenges to music librarians and service providers regarding the organization of large-scale music databases and the development of meaningful methods of interaction and retrieval. Several different approaches have been proposed in recent years, most of them evaluated and compared in the corresponding track of the Music Information Retrieval Evaluation eXchange (MIREX, a small competition that takes place every year). In [Paiva, 2006], the problem of melody detection in polyphonic audio was addressed following a multistage approach, inspired by principles from perceptual theory and musical practice. The system comprises three main modules: pitch detection, determination of musical notes (with precise temporal boundaries, pitches, and intensity levels), and identification of melodic notes. The main objective of this thesis is to build on the work carried out in [Paiva, 2006] to tackle several open issues in the developed system, namely: derive a more efficient pitch detector, improve note determination in the presence of complex dynamics such as strong vibrato, address the current limitation in the melody/accompaniment discrimination task, improve the reliability of melody detection in signals with lower signal-to-noise ratio, add top-down information flow to the system (e.g., the effect of memory and expectations), add context information (e.g., piece tonality, rhythmic information), augment the song evaluation database, etc. A generic baseline for the pitch detection stage is sketched after the references.
References:
Rui Pedro Paiva, Melody Detection in Polyphonic Audio, PhD Thesis, Department of Informatics Engineering, University of Coimbra, Portugal, 2006.
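The baseline promised above: a generic autocorrelation pitch detector for a single frame. This is a textbook starting point, not the auditory-model-based detector of [Paiva, 2006]:

```python
# Generic autocorrelation pitch detector for one audio frame: a textbook
# baseline for the "pitch detection" stage.
import numpy as np

def detect_pitch(frame, sr, fmin=50.0, fmax=1000.0):
    """Return the dominant fundamental frequency (Hz) of a mono frame."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)        # lag search range
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 8000
t = np.arange(sr) / sr
frame = np.sin(2 * np.pi * 220.0 * t[:1024])       # 220 Hz test tone
print(round(detect_pitch(frame, sr), 1))           # ~220 Hz
```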
... MusicID, 411-Song), you dial the number of the service provider with your cell phone, hold the phone towards the source of the music for a few seconds (from 3 to 20, depending on the provider) and then wait for a message containing the identification of the song (artist, title, etc.). Such applications are based on audio fingerprinting techniques, where an individual signature is extracted for each song in the database and then compared with the fingerprint computed for the query sample. Present challenges in the area include the identification of songs in disturbed conditions (e.g., noisy environments, poor recordings) or using only a few seconds of audio for matching. The main objective of this thesis is to improve the state of the art in music identification by investigating and extending current techniques and proposing new approaches to the problem (e.g., hashing and search techniques, feature extraction approaches, etc.). A toy sketch of the fingerprinting idea is given after the references.
References:
- Eugene Weinstein and Pedro Moreno (2007). "Music Identification with Weighted Finite-State Transducers". In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2007.
- Jaap Haitsma and Ton Kalker (2002). "A highly robust audio fingerprinting system". In Proceedings of the 3rd International Conference on Music Information Retrieval.
- Avery Wang (2003). "An industrial-strength audio search algorithm". In Proceedings of the 4th International Conference on Music Information Retrieval, invited talk.
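The sketch promised above: a drastically simplified, Shazam-style fingerprint in the spirit of [Wang, 2003], hashing pairs of spectral peaks into landmarks (all parameters are illustrative assumptions):

```python
# Drastically simplified spectral-peak fingerprint in the spirit of
# Wang (2003): hash pairs of prominent spectrogram peaks into landmarks.
import numpy as np

def peaks(signal, sr, frame=1024, hop=512, per_frame=3):
    """Return (frame_index, frequency_bin) of the strongest bins per frame."""
    out = []
    for i, start in enumerate(range(0, len(signal) - frame, hop)):
        spec = np.abs(np.fft.rfft(signal[start:start + frame] * np.hanning(frame)))
        for b in np.argsort(spec)[-per_frame:]:
            out.append((i, int(b)))
    return out

def fingerprints(signal, sr, fan_out=5):
    """Hash each peak with a few later peaks: (f1, f2, time_delta)."""
    pts = sorted(peaks(signal, sr))
    return {(f1, f2, t2 - t1)
            for k, (t1, f1) in enumerate(pts)
            for (t2, f2) in pts[k + 1:k + 1 + fan_out] if t2 > t1}

sr = 8000
t = np.arange(2 * sr) / sr
song = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
query = song[sr // 2:sr]                    # half-second excerpt
overlap = fingerprints(song, sr) & fingerprints(query, sr)
print(len(overlap) > 0)                     # shared landmarks -> match
```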
... which will certainly give a strong impulse towards the improvement of techniques and the creation of evaluation standards. The main objective of this thesis is to analyze audio music in terms of mood content information (e.g., contentment, depression, exuberance, anxiety). This involves the study and derivation of mood-related features, and the development of mood-based classifiers and mood-based similarity metrics. This can be further applied to mood-based music recommendation systems. The PhD candidate will have the opportunity to work in a cutting-edge research area with several open and exciting research possibilities, with plenty of room for scientific innovation. A toy sketch of frame-level features is given after the references.
References:
- Juslin, P.N., Karlsson, J., Lindström, E., Friberg, A. and Schoonderwaldt, E. (2006). "Play It Again With Feeling: Computer Feedback in Musical Communication of Emotions", Journal of Experimental Psychology: Applied, Vol. 12, No. 2, pp. 79-95.
- Lu, L., Liu, D. and Zhang, H.-J. (2006). "Automatic Mood Detection and Tracking of Music Audio Signals", IEEE Transactions on Audio, Speech and Language Processing, Vol. 14, No. 1, pp. 5-18.
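The sketch promised above: two frame-level features of the kind used as building blocks in mood analysis, intensity (arousal-related) and spectral centroid (timbre/brightness). This is purely illustrative and far simpler than the feature set of [Lu, Liu and Zhang, 2006]:

```python
# Toy frame-level features often used as building blocks in mood analysis:
# RMS intensity (arousal-related) and spectral centroid (brightness).
import numpy as np

def frame_features(frame, sr):
    rms = float(np.sqrt(np.mean(frame ** 2)))          # intensity
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    centroid = float((freqs * spec).sum() / (spec.sum() + 1e-12))
    return rms, centroid

sr = 8000
t = np.arange(1024) / sr
calm = 0.1 * np.sin(2 * np.pi * 200 * t)      # quiet, dark -> low arousal
tense = 0.9 * np.sin(2 * np.pi * 2000 * t)    # loud, bright -> high arousal
print(frame_features(calm, sr))
print(frame_features(tense, sr))
```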
... frequent patterns from data; (4) build accurate models for evolving data; (5) develop techniques for detecting changes in evolving data; (6) evaluate and determine performance measures; (7) propose a general framework to detect trends in evolving data sets.
... patterns from heterogeneous data; (4) build supportive underlying assumptions and accurate models for heterogeneous data; (5) evaluate and determine performance measures; (6) propose a general framework for target detection in multivariate heterogeneous data sets.
The main idea of the current proposal is to devise a ranking approach based on the combination of machine learning techniques and graph mining (e.g., graph clustering, graph kernels, etc.), joining the advantages of both. Evaluation will be done on real benchmarks and synthetic data created by graph generators, but also with real users defining the goals and assessing the final results, including score changes in the final ranking.
Proposal
This PhD will comprise the following initial tasks: (1) study the state of the art of existing methods for page ranking on the web; (2) construct a benchmark data set, either from the web or based on graph generators; (3) develop techniques for clustering, classification and detecting patterns from data; (4) build supportive underlying assumptions and accurate ranking models for web data; (5) evaluate and determine performance measures; (6) propose a general framework for ranking on web data sets. A minimal sketch of the classic starting point for task (1) follows.
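The sketch promised above: the classic PageRank iteration that most web ranking work starts from (a generic textbook implementation, not the method to be developed in this thesis):

```python
# Textbook PageRank by power iteration: the classic baseline that web
# ranking methods are usually compared against.
def pagerank(links, damping=0.85, iters=50):
    """links: dict node -> list of outgoing neighbours."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for n, outs in links.items():
            if not outs:                     # dangling node: spread evenly
                for m in nodes:
                    new[m] += damping * rank[n] / len(nodes)
            else:
                for m in outs:
                    new[m] += damping * rank[n] / len(outs)
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(sorted(pagerank(web).items(), key=lambda kv: -kv[1]))  # "c" ranks first
```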
Although PSG is considered the gold standard for the diagnosis of OSAS, given the relatively high medical costs associated with such tests and the insufficient number of pediatric sleep laboratories, PSG is not readily accessible to children in all geographic areas. Thus, the validity of alternative diagnostic approaches should be analyzed, even assuming their accuracy is suboptimal. The second goal of this work points in this direction. It aims to investigate the viability of reducing the number and complexity of measurements in order to make possible the stratification of OSAS in children's natural environment.
Keywords: on-line learning; neuro-fuzzy systems; interpretability; machine learning
Supervisor: Prof. António Dourado (dourado@dei.uc.pt)
Summary:
The development of fuzzy rules for knowledge extraction from data acquired in real time needs new recursive clustering techniques to produce well-designed fuzzy systems. For Takagi-Sugeno-Kang (TSK) systems this applies mainly to the antecedents, while for Mamdani-type systems it applies to both the antecedent and the consequent fuzzy sets. To improve the interpretability of the fuzzy rules, such that some semantics may be deduced from them, pruning techniques should be developed to allow a human-interpretable labelling of the fuzzy sets in the antecedents and consequents of the rules. For this purpose, suitable similarity measures between fuzzy sets and techniques for merging fuzzy rules should be developed and applied. The applications envisaged are in industrial processes and the medical field.
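One standard ingredient for such rule-base simplification is a set-theoretic similarity measure between membership functions; a minimal sketch follows (Gaussian sets and the Jaccard measure are common illustrative choices, not the measures to be developed in the thesis):

```python
# Sketch: Jaccard similarity between two fuzzy sets on a discretized
# universe, and merging of highly similar Gaussian membership functions.
import numpy as np

def gaussian(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def similarity(mu_a, mu_b):
    """Jaccard: |A ∩ B| / |A ∪ B| with min/max as intersection/union."""
    return np.minimum(mu_a, mu_b).sum() / np.maximum(mu_a, mu_b).sum()

x = np.linspace(0.0, 10.0, 1000)             # discretized universe
a, b = gaussian(x, 4.0, 1.0), gaussian(x, 4.2, 1.0)
sim = float(similarity(a, b))
print(round(sim, 2))                          # ~0.85: very similar sets
if sim > 0.8:                                 # merge near-duplicate labels
    merged = gaussian(x, 4.1, 1.0)            # averaged centre and width
```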
Summary:
High-dimensional data in industrial complexes can be profitably used for advanced process monitoring if it is reduced to a dimension where human interpretability is easily achieved. Multidimensional scaling may be used to reduce it to two or three dimensions if appropriate measures of similarity/dissimilarity are developed. The measures express the distance between attributes, the essence of the information, and a similar distance should be guaranteed in the reduced space in order to preserve the informative content of the data. Research on appropriate measures and reduction methods is needed. In the reduced space, classification of the actual operating point should be done through appropriate recursive clustering and pattern recognition techniques. The classification is intended to clearly show the quality level of the present and past operating points, in such a way that the human operator finds in it a useful decision support system for the daily operation of the plant. The work has as application the visbreaker process at the Galp Sines Refinery.
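For reference, the classical (Torgerson) MDS step that such a monitoring pipeline could build on, reducing a dissimilarity matrix to 2-D coordinates (a textbook algorithm; the thesis would research the dissimilarity measures themselves):

```python
# Classical (Torgerson) multidimensional scaling: embed points described
# only by a dissimilarity matrix D into 2-D while preserving distances.
import numpy as np

def classical_mds(D, dims=2):
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]        # largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# toy check: recover a 2-D configuration from its distance matrix
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
emb = classical_mds(D)
D2 = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
print(np.allclose(D, D2))                      # True: distances preserved
```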
... operators are able to deal with the large diversity and complexity of information involved in the transition process, given their experience and evidence-based knowledge. However, formulating this process in a systematic and precise way is a challenging task. One of the main goals of this work is to develop and implement computational intelligence strategies (neural networks, fuzzy systems, neuro-fuzzy systems, etc.) to address this challenge. Taking into account the historical data of past transitions, as well as the operators' know-how and expertise, a solution will be developed that is able to learn and incorporate the available information and experience. The developed solution will be a valuable tool for providing the best strategies to follow regarding the colour transition process during the production of different types of paper.
Supervisors: Prof. Alberto Cardoso (alberto@dei.uc.pt) and Prof. Jorge Henriques (jh@dei.uc.pt)
Summary: The main goal of this proposal is to investigate and develop a web-based management system that could act as a platform to support and facilitate the process of creation, development and management of ideas, projects or new products in a collaborative environment. The success and effectiveness of these processes are strongly dependent on the adequate application of Information and Communication Technologies (ICT). In this context, applications for collaborative environments will contribute to the natural creation of synergies between the several participants, providing effective support to the processes associated with the creation, development and management of ideas, projects or new products.
Supervisors: Prof. Alberto Cardoso (alberto@dei.uc.pt) and Prof. Paulo Gil
Summary:
The development of simulation systems is very important and valuable in the industrial context, namely to understand the process's dynamical behaviour, to train and improve the knowledge of operators, and to support decision making. The basis of the simulator for one specific petroleum refinery will be a non-linear hybrid model, obtained from high-dimensional data and the operators' knowledge, using adaptive computation methods (neural networks, fuzzy systems, neuro-fuzzy systems, etc.). The overall system should be web based and the interfaces should be similar to the panels currently used by the operators.
Supervisor: Prof. Alberto Cardoso
Summary: Supervision can be regarded as the manifold process of collecting relevant information from the world (monitoring), predicting future states, and acting accordingly whenever required. When the system's nature is itself time-varying, this high-level framework must be materialized or implemented in an adaptive way. This means that the supervisor's performance should be permanently assessed and its parameters adapted in real time to cope with a changing environment. Another issue of key importance, involving supervision in real-world applications, concerns the study and implementation of intelligent methodologies enhancing the overall system robustness in the case of disturbance and fault events, including varying latency times in network communications. Subjects where contributions are expected:
- Non-linear modelling using artificial neural networks: number of layers and the lag window;
- Real-time adaptation of models: mechanisms for recursive parameter adjustment in noisy environments (a textbook sketch is given below);
- Fault diagnosis: study and implementation of techniques for fault detection and isolation;
- Intelligent system behaviour conditioning: study and implementation of reconfiguration methodologies assuring an acceptable performance level in the presence of faults.
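The sketch promised above: recursive least squares with a forgetting factor, the classic mechanism behind the real-time parameter adjustment item (a textbook algorithm, shown only to fix ideas):

```python
# Textbook recursive least squares (RLS) with forgetting factor: the
# classic scheme for adjusting model parameters in real time.
import numpy as np

class RLS:
    def __init__(self, n_params, lam=0.99, delta=100.0):
        self.theta = np.zeros(n_params)        # parameter estimates
        self.P = delta * np.eye(n_params)      # inverse correlation matrix
        self.lam = lam                         # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector; y: measured output."""
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain
        self.theta = self.theta + k * (y - phi @ self.theta) # correction
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# identify y = 2*u1 - 3*u2 online from noisy samples
rng = np.random.default_rng(0)
rls = RLS(2)
for _ in range(500):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 3.0 * phi[1] + 0.01 * rng.normal()
    theta = rls.update(phi, y)
print(np.round(theta, 2))                      # ~[ 2. -3.]
```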
Summary:
One of the main problems in software systems of some complexity is software aging, a phenomenon observed in long-running applications whose execution degrades over time, leading to expensive hangs and/or crash failures. Software aging is not only a problem of desktop operating systems: it has been observed in telecommunication systems, web servers, enterprise clusters, OLTP systems, spacecraft systems and safety-critical systems. Software aging happens due to the exhaustion of system resources, through causes such as memory leaks, unreleased locks, non-terminated threads, shared-memory pool latching, storage fragmentation, data corruption and accumulation of numerical errors. There are several commercial tools that help to identify some sources of memory leaks during the development phase. However, not all faults can be avoided, and those tools cannot work on third-party software modules when there is no access to the source code. This means that existing production systems have to deal with the problem of software aging. The natural procedure to combat software aging is to apply the well-known technique of software rejuvenation. Basically, there are two rejuvenation policies: time-based and prediction-based rejuvenation. The first applies a rejuvenation action periodically, while the second makes use of predictive techniques to forecast the occurrence of software aging and applies the rejuvenation action strictly when necessary. The goal of this PhD Thesis is to study the phenomenon of software aging in commercial database engines, and to devise and implement techniques to collect vital information from the engine and to forecast the occurrence of aging or potential anomalies. With this knowledge the database engine can apply a controlled rejuvenation action to avoid a crash or a partial failure of the system. The ultimate goal is to improve the autonomic computing capabilities of a database engine, mainly when subjected to high workload and stress-load from client applications.
Proposal: The PhD work will comprise the following initial tasks: (a) overview of the state of the art on software aging, rejuvenation and dependability benchmarking; (b) development of a tool for dependability benchmarking of database engines; (c) development of a workload and stress-load tool for databases; (d) infrastructure of probes (using Ganglia) to collect vital information from a database engine; (e) development of mathematical techniques to forecast the occurrence of software aging (time-series analysis, data mining, machine learning, neural networks); (f) experimental study and analysis of results; (g) adaptation of rejuvenation techniques for database engines; (h) writing of papers. A toy illustration of the forecasting step in task (e) follows.
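The illustration promised above: fit a linear trend to a monitored resource (e.g., free memory) and estimate the time to exhaustion, the simplest of the time-series techniques mentioned in task (e). Purely illustrative; real aging forecasting uses richer models:

```python
# Toy prediction-based rejuvenation trigger: fit a linear trend to free
# memory samples and estimate when it crosses a safety threshold.
import numpy as np

def time_to_exhaustion(t, free_mb, threshold_mb=50.0):
    """Least-squares linear fit; returns estimated seconds until the
    free-memory trend crosses threshold_mb (inf if not decreasing)."""
    slope, intercept = np.polyfit(t, free_mb, 1)
    if slope >= 0:
        return float("inf")
    return (threshold_mb - intercept) / slope - t[-1]

t = np.arange(0, 3600, 60, dtype=float)          # one hour, 1-min samples
free = 800.0 - 0.05 * t + np.random.default_rng(1).normal(0, 5, t.size)
eta = time_to_exhaustion(t, free)
print(round(eta / 3600, 1), "hours until rejuvenation is needed")
```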
Summary:
One of the main problems in software systems is their vulnerability to malicious attacks. Complex systems, and systems with a high degree of interaction with other systems or users, are more prone to successful attacks. The consequences of a successful attack are potentially very severe and may include the theft of mission-critical information and trade secrets. Given the pervasive nature of software systems in modern society, security and testing for vulnerability to attacks is an important research area. The vulnerability of software systems is caused by several factors. Two of these factors are the integration of third-party off-the-shelf components to build larger components, and bad programming practices. The integration of third-party general-purpose components may introduce vulnerabilities in the larger system due to interface mismatches between components that may be exploited for attacks. Bad programming practices may lead to weaknesses that may be exploited by tailored user inputs. Testing a system or component against malicious attacks is a difficult problem and is currently an open research area. Testing for vulnerabilities to malicious attacks cannot be performed as traditional testing because there is no previous knowledge about the nature of the attacks. However, these attacks follow a logic based on exploiting possible weaknesses inside the software, and this logic can be used to forecast the existence of vulnerabilities. The goal of this PhD Thesis is to study the phenomenon of attacks on software systems and devise a methodology to assess the vulnerability to these attacks. This includes the proposal of experimental techniques to test systems and components following the logic of dependability benchmarking and experimental evaluation. It is expected that at the conclusion of the Thesis there is a case study with practical results of security assessment and vulnerability forecasting for comparison purposes. Web servers are suggested as one of the possible case studies. Fault injection and robustness testing techniques should be considered as enabling techniques for the purposes of the Thesis.
Proposal:
The PhD work will comprise the following initial tasks: (a) overview of the state of the art on software security, software vulnerabilities, software defects, validation methods, robustness testing and dependability benchmarking; (b) development of methods and tools for the identification of patterns related to vulnerabilities and for the automated testing of possible vulnerabilities (case studies include web servers); (c) proposal of generic test methodologies for the evaluation of software vulnerability to malicious attacks, based on software defects and program pattern analysis, for system comparison purposes; (d) proposal of formal methodologies for the experimental assessment of security and vulnerability forecasting on third-party (black-box) software components; (e) development of an experimental infrastructure of tools for practical demonstration of the above on real systems (case studies include web servers); (f) experimental study and analysis of results; (g) writing of papers. A minimal robustness-testing sketch follows the target conferences below.
Target conferences to publish papers:
- Dependable Systems and Networks (DSN)
- International Conference on COTS-Based Software Systems (ICCBSS)
- International Conference on Computer Safety, Reliability and Security (SAFECOMP)
- International Symposium on Software Reliability Engineering (ISSRE)
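The sketch promised above: a minimal mutation-based robustness test ("fuzzing"), feeding a parser randomly corrupted inputs and recording unhandled exceptions, in the spirit of the robustness-testing techniques mentioned in the summary. The parser and inputs are illustrative stand-ins:

```python
# Minimal mutation-based robustness test: feed a component randomly
# corrupted inputs and record crashes. Illustrative stand-in only.
import random

def parse_header(data: bytes) -> dict:
    """Toy parser under test (stands in for a real component)."""
    name, _, value = data.partition(b":")
    return {name.decode("ascii").strip(): int(value)}

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

random.seed(0)
crashes = 0
for _ in range(1000):
    try:
        parse_header(mutate(b"Content-Length: 42"))
    except Exception:          # robust code would reject, not raise
        crashes += 1
print(f"{crashes}/1000 mutated inputs raised unhandled exceptions")
```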
Project: EU FP7 project GINSENG (http://www.ict-ginseng.eu/)
More information and/or interest: if you are interested in this opportunity, contact me as soon as possible at pnf@dei.uc.pt.
... currently working in the EU project GINSENG, and any valuable research work and results can be integrated into the current effort.
Project: EU FP7 project GINSENG (http://www.ict-ginseng.eu/)
More information and/or interest: if you are interested in this opportunity, contact me as soon as possible directly at pnf@dei.uc.pt.
Summary:
On-time data management is becoming a key difficulty for the information infrastructure of most organizations. A major problem is the capability of database applications to access and update data in a timely manner. In fact, database applications for critical areas (e.g., air traffic control, factory production control, etc.) increasingly give more importance to the timely execution of transactions. Database applications with timeliness requirements have to deal with the possible occurrence of timing failures, when the operations specified in the transaction do not complete within the expected deadlines. For instance, in a database application designed to manage information about a critical activity (e.g., a nuclear reactor), a transaction that reads and stores the current reading of a sensor must be executed in a short time, as the longer the transaction takes to execute, the less useful the reading becomes. Thus, when a transaction is submitted and does not complete before a specified deadline, that transaction becomes irrelevant, and this situation must be reported to the application/business layer so it can be handled in an adequate way. In spite of the importance of timeliness requirements in database applications, commercial DBMS do not assure any temporal properties, not even the detection of cases where the transaction takes longer than the expected/desired time. The goal of this work is to bring timeliness properties to the typical ACID (atomicity, consistency, isolation, durability) transactions, putting together classic database transactions and recent achievements in the fields of real-time and distributed transactions. This work will be developed in the context of the TACID (Timely ACID Transactions in DBMS) research project, POSC/EIA/61568/2004, funded by FCT.
Proposal
The PhD work will comprise the following initial tasks: (a) overview of the state of the art on timely computing and real-time databases; (b) characterization of timed transactions; (c) analysis of DBMS core implementations; (d) infrastructure to support the timely execution of ACID transactions; (e) development of mathematical techniques to forecast transaction execution times; (f) implementation and evaluation; (g) writing of papers. A toy sketch of client-side deadline detection follows.
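The sketch promised above: a toy client-side deadline check around a transaction. Real timely-ACID support would be enforced inside the DBMS core, not at the client; `run` here is a hypothetical stand-in for the transaction body:

```python
# Toy client-side timeliness check for a transaction: report a timing
# failure if the work does not commit before its deadline.
import time

class TimingFailure(Exception):
    pass

def timed_transaction(run, deadline_s):
    """run: callable performing the transaction (hypothetical stand-in)."""
    start = time.monotonic()
    result = run()                       # would be BEGIN ... COMMIT
    elapsed = time.monotonic() - start
    if elapsed > deadline_s:
        # too late: the result is stale; report to the business layer
        raise TimingFailure(f"missed deadline: {elapsed:.3f}s > {deadline_s}s")
    return result

def read_sensor():                        # hypothetical transaction body
    time.sleep(0.05)
    return 42

print(timed_transaction(read_sensor, deadline_s=0.5))
```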
Summary:
One of the main problems faced by organizations is the protection of their data against unauthorized access or corruption due to malicious actions. Database management systems (DBMS) constitute the kernel of the information systems used today to support the daily operations of most organizations, and represent the ultimate layer in preventing unauthorized access to the data stored in information systems. In spite of the key role played by the DBMS in overall data security, no practical way has been proposed so far to characterize the security of such systems or to compare alternative solutions concerning security features. Benchmarks are standard tools that allow evaluating and comparing different systems or components according to specific characteristics (e.g., performance, robustness, dependability, etc.). In this work we are particularly interested in benchmarking the security aspects of transactional systems. Thus, the main goal is to research ways to compare transactional systems from a security point of view. This work will be developed in the context of a research cooperation with the Center for Risk and Reliability of the University of Maryland, MD, USA. During this work the student will have the opportunity to visit the University of Maryland in order to carry out joint work with local researchers.
Proposal
The PhD work will comprise the following initial tasks: (a) overview of the state of the art on security, security evaluation, and dependability benchmarking; (b) definition of a security benchmarking approach for transactional systems; (c) study of attacks for security benchmarking; (d) definition of a standard approach for security evaluation and comparison; (e) implementation and evaluation; (f) writing of papers.
Summary:
The ascendance of networked information in our economy and daily lives has increased awareness of the importance of dependability features. In many cases, such as in e-commerce systems, service outages may result in a huge loss of money or in an unaffordable loss of prestige for companies. In fact, due to the impressive growth of the Internet, some minutes of downtime in a server somewhere may be directly exposed as loss of service to thousands of users around the world. Database systems constitute the kernel of the information systems used today to support the daily operations of most organizations. Additionally, in recent years there has been an explosive growth in the use of databases for decision support. The biggest differences between decision support systems and operational systems, besides their different goals, are the type of operations executed and the supporting database platform. While operational systems execute thousands or even millions of small transactions per day, decision support systems execute only a small number of queries on the data (in addition to the loading operations executed offline). Advanced database technology, such as parallel and distributed databases, is a way to achieve high performance and availability in both operational and decision support systems. However, although distributed and parallel database systems are increasingly being used in complex business-critical systems, no practical way has been proposed so far to characterize the impact of faults in such environments or to compare alternative solutions concerning dependability features. The fact that many businesses require very high dependability for their database servers shows that a practical tool allowing the comparison of alternative solutions in terms of dependability is of the utmost importance. In spite of the pertinence of having dependability benchmarks for distributed and parallel database systems, the reality is that no such benchmark has been proposed so far. A dependability benchmark is a specification of a standard procedure to assess dependability-related measures of a computer system or computer component. Awareness of the importance of dependability benchmarks has increased in recent years, and dependability benchmarking is currently the subject of strong research. In previous work, the first known dependability benchmark for transactional systems was proposed. However, that benchmark focuses on single-server transactional databases. The goal of this proposal is to study the problem of dependability benchmarking in distributed and parallel databases. One of the key aspects to be addressed is to figure out how to apply a faultload (a set of faults and stressful conditions that emulate real faults experienced by systems in the field) in a distributed/parallel environment. Several types of faults will be considered, namely: operator faults, software faults, and hardware faults (including network faults).
Proposal
The PhD work will comprise the following initial tasks: (a) overview of the state of the art on parallel and distributed databases, dependability assessment and dependability benchmarking; (b) definition of a dependability benchmarking approach for distributed and parallel databases; (c) study of typical faults in distributed and parallel database environments; (d) definition of a standard approach for dependability evaluation and comparison in distributed and parallel databases; (e) implementation and evaluation; (f) writing of papers.
Summary:
In recent years, there has been an explosive growth in the use of databases for decision support. These systems, generically called data warehouses, involve manipulations of massive amounts of data that push database management technology to the limit, especially concerning performance and scalability. In fact, typical data warehouse utilization has an interactive character, which assumes short query response times. Therefore, the huge data volumes stored in a typical data warehouse, together with the complexity of the queries and their intrinsic ad-hoc nature, make the performance of query execution the central problem of large data warehouses. The main goal of this work is to investigate ways to allow a dramatic reduction of the hardware, software, and administration costs when compared to traditional data warehouses. The affordable data warehouse solution will be built upon the high scalability and high performance of the DWS (Data Warehouse Striping) technology. Starting from the classic method of uniform partitioning at the lowest level (facts), DWS includes a technique that distributes a data warehouse over an arbitrary number of computers. Queries are executed in parallel by all the computers, guaranteeing a nearly optimal speedup (see the sketch after the proposal below). This work will focus on various aspects related to:
- Automatic data balancing: as each node in the cluster may have different processing capabilities, it is important to provide load balancing algorithms that automatically provide the best data distribution. With these mechanisms the system will be able to reorganize the data whenever needed, in order to keep the load on each node as balanced as possible, allowing similar response times for all nodes.
- Auto-administration and tuning: by using a cluster of machines, administration complexity and costs tend to increase dramatically. Although the several nodes normally have similar configurations, some discrepancies are expected due to the heterogeneous nature of the cluster. To achieve the best configuration we need to tune each node individually. Thus, we have to develop a solution for automatic administration and tuning in distributed data warehouses that allows a reduction of the administration cost and an efficient use of the system resources.
Proposal
The PhD work will comprise the following initial tasks: (a) overview of the state of the art; (b) characterization of affordable data warehouse requirements; (c) infrastructure for improved performance in affordable data warehouses; (d) implementation and evaluation; (e) writing of papers.
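The sketch promised above: a toy illustration of the striping idea behind DWS. Facts are partitioned uniformly across nodes, each node answers the query over its slice, and partial results are combined (a hypothetical mini-example, not the DWS implementation):

```python
# Toy illustration of data-warehouse striping: partition fact rows across
# nodes round-robin, run the aggregate on every node "in parallel", and
# combine the partial results.
facts = [("2008-01", 120.0), ("2008-01", 80.0), ("2008-02", 200.0),
         ("2008-02", 50.0), ("2008-03", 75.0), ("2008-03", 25.0)]

N_NODES = 3
nodes = [facts[i::N_NODES] for i in range(N_NODES)]   # round-robin striping

def node_query(rows):
    """Each node computes a partial SUM(amount) GROUP BY month."""
    partial = {}
    for month, amount in rows:
        partial[month] = partial.get(month, 0.0) + amount
    return partial

merged = {}
for partial in map(node_query, nodes):                # conceptually parallel
    for month, total in partial.items():
        merged[month] = merged.get(month, 0.0) + total
print(sorted(merged.items()))   # same answer as a single-node scan
```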
Summary:
In recent years, there has been an explosive growth in the use of databases for decision support. These systems, generically called data warehouses, involve manipulations of massive amounts of data that push database management technology to the limit, especially concerning performance and dependability. The huge data volumes stored in a typical data warehouse make performance and availability the two central problems of large data warehouses. An affordable data warehouse solution is being built upon the high scalability and high performance of the DWS (Data Warehouse Striping) technology. Starting from the classic method of uniform partitioning at the lowest level (facts), DWS includes a technique that distributes a data warehouse over an arbitrary number of computers. The fact that the data warehouse is distributed over a large number of computers raises new challenges, as the probability of failure of one or more computers greatly increases. The main goal of this work is to investigate ways to achieve high dependability in the affordable data warehouse solution while allowing a dramatic reduction of the hardware, software, and administration costs when compared to traditional data warehouses. Thus, this work will focus on various aspects related to:
- Data security: the affordable data warehouse solution will be based on open-source database management systems (DBMS). However, these databases do not provide the security mechanisms normally available in commercial DBMS. In addition, the distributed database approach increases the data security requirements. The goal is to investigate the security needs of distributed data warehouses over open-source DBMS and to propose advanced mechanisms that improve overall system security.
- Data replication and recovery: compared to a single-server database, one of the consequences of using a cluster of affordable machines is the increased probability of failure. Thus, one of the goals is to research a new technique for data replication and recovery that allows the system to continue working in the presence of failures in several nodes and facilitates the recovery of failed nodes.
Proposal
The PhD work will comprise the following initial tasks: (a) overview of the state of the art; (b) characterization of affordable data warehouse dependability requirements;
(c) infrastructure to support high dependability in distributed data warehouses; (d) implementation and evaluation; (e) writing of papers.
... the interested user. This research work will evaluate the impact of the mobility of sensor nodes, sinks, and relays, and the ways in which mobility can improve network performance and enable new applications. On the one hand, it is necessary to consider sensor network applications with inherent node mobility. On the other hand, it has been shown that mobility increases communication capacity in ad-hoc networks, and this result may be further exploited in WSNs. The issues and targets for optimization depend on which nodes are moving and whether the movement is random, predictable or even controllable. Even though mobility can be exploited to increase spatial coverage or connectivity, the primary requirement for any mobility solution is energy efficiency. This often means maximizing the lifetime of the WSN. This work will be done in the context of the European Project GINSENG (Performance Control in Wireless Sensor Networks, FP7-ICT-2007-2).
Summary:
The new types of applications and technologies used for communication among users nowadays, and the diversity of types of users, have shown that traditional routing paradigms are not capable of coping with these recent realities. Therefore, the role of routing in IP networks has shifted from single shortest-path routing to multiple-path routing subject to multiple constraints, such as Quality of Service requirements and fault tolerance. Moreover, traditional routing protocols have several problems concerning the distribution of routing information, which compromises routing decisions. Namely, routing decisions based on inaccurate information, due to bad routing configurations caused either by faulty or by malicious actions, will cause severe disruption of the service the network should provide. These issues are particularly important in networks that involve different types of communication devices and media, as happens in ambient networks. Ambient networks pose an additional challenge to routing protocols, since the network composition changes very often when compared to traditional IP networks, and networks are expected to cooperate with each other on demand, without relying on previous configuration. Moreover, associated with the dynamic structure of ambient networks, traffic patterns also change very often due to the composition and decomposition of the network structure. The work proposed for this thesis aims to study the existing vulnerabilities of actual routing protocols used in the Internet and to propose a resilient routing scheme that overcomes these weaknesses, in order to improve network availability and survivability. The work will comprise the study of the state of the art of routing protocols for resilience, the characteristics of ambient networks, and the proposal of enhancements to existing routing schemes in order to improve the contribution of the routing protocol to the resilience of ambient networks. The research work of the PhD candidate will be included in the European Union Integrated Project WEIRD (WiMAX Extension to Isolated Research Data networks, http://www.ist-weird.eu).
Summary:
A number of recent technological developments have enabled the formation of wireless community-wide local area networks. Dispersed users (residents or moving users) within the boundaries of a geographical region (a neighbourhood or municipality) form a heterogeneous network and enjoy network services such as Internet connectivity. This environment, named Community Networks, is well suited both for traditional Internet access and for the deployment of peer-to-peer services. Achieving and retaining connectivity in this highly heterogeneous environment is a major issue. Although the technological advances in wireless networks are fairly mature, one further step in the management of Community Networks is to provide mobility and nomadicity support. Nomadicity allows connectivity everywhere, while mobility includes the maintenance of connections and sessions while a node is moving from one place to another. Mobility and nomadicity in community and home networks still pose several challenges, as these environments are highly heterogeneous. Seamless handover between different layer-two technologies is still a challenge. Seamless multimedia content distribution to the home may involve several network technologies, such as WLAN, power-line, GPRS, or UMTS. Thus, the inter-layer issues involved are complex, and a lot of work is required to combine MIP6 with these technologies in order to provide seamless mobility for multimedia information. The ability to support sessions on multiple access networks is another open issue. These issues will be the central concern of the proposed PhD work. The research work of the PhD candidate will be included in the European Union IST FP6 CONTENT Network of Excellence (CONTENT: Content Networks and Services for Home Users, http://www.ist-content.eu) and will be carried out in close cooperation with a foreign institution.
technology is making possible, and where different paradigms apply. Many of the most dynamic fields of research in learning and education, such as computer-supported cooperative learning, situated learning, or learning communities, relate to learning contexts. Hundreds of expressions used in education, such as project-based learning, action learning, learning by doing, case studies, scenario building, simulations, and role playing, pertain to learning contexts. The advantage of concentrating on context as a whole, rather than on the multiplicity of its manifestations studied by disparate research groups, is that, by doing so, we can articulate that multitude of theories and practices into a single, coherent, organic, and operational worldview. The proposed thesis pushes forward our current efforts in this field by exploring the relationship between Learning Contexts and Social Networking. This may include collaboration with another of our projects, the Serendipity Project, centred on the development of a serendipitous social search engine for which we hold a US patent application. In order to stimulate the creativity of the candidates, plenty of leeway will be given to them, so that they may choose to concentrate on theoretical aspects, on practical educational issues, or on the specification and design of the ideal, and as yet nonexistent, learning context management system (LXMS). Prospective candidates wishing to clarify the research implications of learning contexts may download our paper Learning Contexts: a Blueprint for Research from the journal Interactive Educational Multimedia. Further information can be obtained by downloading Chapter 1, Context and Learning: a Philosophical Framework, of our book Managing Learning in Virtual Settings: the Role of Context, published by Information Science Publishing (Idea Group). Successful candidates will have, or be willing to develop throughout their PhD, a mixed profile of educational technologist and educational and social researcher.
- Figueiredo, A. D. (2005) Learning Contexts: A Blueprint for Research, Interactive Educational Multimedia, No. 11, October 2005 http://www.ub.es/multimedia/iem/down/c11/Learning_Contexts.pdf - Figueiredo, A. D. and Afonso, A. P. (2005) Context and Learning: a Philosophical Framework, in Figueiredo, A. D. and A. P. Afonso, Managing Learning in Virtual Settings: The Role of Context, Information Science Publishing (Idea Group), October 2005. http://www.idea-group.com/downloads/excerpts/Figueiredo01.pdf
Keywords: Quality Management, Information Systems Supervisor: Prof. Paulo Rupino (rupino@dei.uc.pt) Summary:
Quality Management of products, services, and business processes is today a key issue for the success of most companies operating in global contexts. In fact, holding a quality certification, such as that established by the ISO 9001:2000 standard, is becoming a basic requirement for companies to compete in several international markets. On the other hand, the design and deployment of information systems is another key aspect that modern organizations must consider when defining their business models and strategy. It is quite surprising, thus, that although both quality management and information systems architecting require intensive strategic analysis and the extensive involvement of staff and managers in the examination and redesign of business processes, the two endeavors are still treated as completely distinct. They are usually conducted as separate projects, handled by different teams, equipped with unconnected methodologies. The integrated design of these two pillars of modern organizations, in such a way that they depend on, support, and reinforce each other, enables a quantum leap, as it lets organizational tasks be reengineered in the light of: (i) effectiveness, consistency, and evidence of compliance, as required by quality systems; and (ii) efficiency, harnessing the power of digital information storage, processing, and communication in the renewed business processes. Typical criticisms of traditional implementations of Quality Management Systems can also be alleviated, namely by reducing the bureaucracy and overhead they impose on users. The economic impact on organizations can be considerable, not only at the initial planning stage but, more importantly, throughout the lifecycle of operation of this unified system. The likelihood of synergy between quality management and IT infrastructure has been suggested by a few authors, but no systematic processes for leveraging those synergies can be found. A successful Ph.D. in this unexplored field will arm its holder with the skills and tools to act in an increasingly appealing consulting arena.
system while easy to deploy and manage poses a heavy cognitive load, as the principal interaction mode is linguistic (using menus, dialogs, and forms, with some graphics for visualizing resulting plans); c) current business globalization increasingly involves high levels of subcontracted work that needs to be managed across enterprise networks with only partial knowledge of production conditions, which makes it difficult to use methodologies that assume full knowledge of and control over production units. With clients, we have come to the conclusion that a planning tool for the Work Load Control methodology needs a visualization and direct-manipulation component to reduce the cognitive overhead posed by the complexity and non-intuitiveness of the methodology and to enable users to dynamically envision and track events across networks of enterprises. This case provides an ideal opportunity to attempt an integration of the linguistic, direct-manipulation, and delegation modes of interface, to develop novel visualizations, and to test usability evaluation techniques. The research implies acquiring knowledge of the methodology and conceiving and studying appropriate solutions for the case study by designing innovative interaction techniques. The work is relevant to Decision Support Systems, Human Computer Interaction and, more generally, to the Information Systems academic and business communities.
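For readers unfamiliar with the methodology, the Python sketch below (norms, routings, and processing times are invented) shows the core Work Load Control release decision that such a tool would have to make visible: a job leaves the pre-shop pool only if adding its processing times keeps every work centre below its workload norm:

NORMS = {"cut": 40.0, "weld": 30.0, "paint": 20.0}   # allowed load, in hours
load = {wc: 0.0 for wc in NORMS}                     # current direct load

pool = [  # each job: processing hours per work centre on its routing
    {"cut": 10, "weld": 5},
    {"cut": 35, "paint": 15},
    {"weld": 8, "paint": 4},
]

released = []
for job in pool:
    # Release only if no work centre on the routing would exceed its norm.
    if all(load[wc] + h <= NORMS[wc] for wc, h in job.items()):
        for wc, h in job.items():
            load[wc] += h
        released.append(job)

print(released, load)   # the second job is held back by the "cut" norm

Even this tiny example hints at the cognitive problem: the reason a job is held back is buried in per-centre arithmetic, exactly the kind of state a visualization and direct-manipulation tool should surface.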
An alternative approach would be to take game design itself as the learning activity and explore the learning potential inherent in the design activity. Either way, the research should focus on the methodological problem of explicitly modeling and building games as learning contexts. The Context Engineering approach can be used to frame development of specific contexts, by prototyping on available multiplayer game development technology and focusing on design aspects and their relation to the proposed problem. Adequate evaluation techniques should also be a consideration in the studied contexts if they are to be socially accepted as effective learning alternatives. Relevance is expected for the Learning Sciences, Human and Social Sciences, Information Systems and Human Computer Interaction, Game Studies, Media Studies, and society at large.
In recent times, there has been renewed interest in modeling beliefs in artificial societies in order to explore real-life phenomena such as religions, sects, and other collective sharing of representations that have an impact on agent behavior and on the overall distinctive traits of a society. In this setting, the question of the propagation of beliefs is central. Belief propagation in simulated societies can be handled at very different, overlapping levels. Religious belief has a geography in the real world, and simulated situations should be able to represent the effect of distance, accessibility, and main lines of circulation on the diffusion of beliefs. Then there is the level of networking, because propagation of beliefs requires contact between agents and/or their messages, in a way similar to the propagation of viruses. Finally, there are questions of fitness regarding the environment, because religious beliefs normally regulate the interaction of agents with the environment and can produce different fitness functions that depend, in a complex way, on the behavior, and hence the beliefs, of other agents present at the same locations (religion is often connected to the specialization of economic activities, for instance).
The project aims at researching models for the representation of these different levels by developing a system where different formalizations of beliefs can be tested in a framework that provides the necessary interfaces with geographic, networking, and environmental constraints, approaching real-world situations. It is expected that the thesis will contribute a formal framework to describe the complex interaction of religious beliefs with territory, interpersonal relations, and the environment.
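A minimal agent-based sketch of the geographic and networking levels is given below in Python (the grid size, contact structure, and adoption probability are arbitrary choices, not a committed design): belief spreads by contact between geographically close agents, much like an epidemic process:

import random

random.seed(0)
SIZE, P_ADOPT, STEPS = 20, 0.3, 50
believes = {(x, y): False for x in range(SIZE) for y in range(SIZE)}
believes[(SIZE // 2, SIZE // 2)] = True          # a single founding believer

def neighbours(x, y):
    """Contact network: the four grid neighbours (distance constraint)."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE:
            yield x + dx, y + dy

for _ in range(STEPS):
    adopters = [cell for cell, b in believes.items() if b]
    for x, y in adopters:
        for n in neighbours(x, y):
            if not believes[n] and random.random() < P_ADOPT:
                believes[n] = True

print(sum(believes.values()), "of", SIZE * SIZE, "agents share the belief")

The environmental/fitness level discussed above is deliberately absent here; coupling adoption to location-dependent fitness functions is exactly where the proposed framework would go beyond this toy.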
key properties of these exact techniques, such as the capability to reduce the search space or the effective exploration of neighborhoods, might be used by the EA to efficiently perform a global exploration of the search space. Several examples of optimization problems will be used to perform a comprehensive analysis of the developed hybrid architectures.
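One related hybrid architecture is sketched below in Python on a toy bitstring problem (the objective and all parameters are placeholders, and the exact component is stood in here by a simple deterministic neighbourhood search): a standard evolutionary loop in which every offspring is refined before entering the population, in the style of a memetic algorithm:

import random

random.seed(2)
N, POP, GENS = 30, 20, 100
fitness = lambda s: sum(s)            # toy objective: maximize the ones

def local_search(s):
    """Neighbourhood exploration: greedily apply improving one-bit flips."""
    best = s
    for i in range(N):
        t = best[:i] + [1 - best[i]] + best[i + 1:]
        if fitness(t) > fitness(best):
            best = t
    return best

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
for _ in range(GENS):
    parent = max(random.sample(pop, 3), key=fitness)        # tournament
    child = [1 - b if random.random() < 1 / N else b for b in parent]
    child = local_search(child)                             # hybrid step
    pop.remove(min(pop, key=fitness))                       # steady state
    pop.append(child)

print(fitness(max(pop, key=fitness)))

In the proposed work, the local_search slot is where an exact technique (e.g., one that provably prunes the search space) would be plugged in, which is what distinguishes the envisaged architectures from plain memetic algorithms.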
Summary: The goal is to develop and analyse stochastic local search algorithms for solving a wide class of geometric folding problems. For a given geometric structure, the corresponding folding problem consists of knowing whether it is possible to reach a desired folded state with a given property. Several folding problems are NP-hard. This is the case of the Ruler Folding Problem, which consists of finding a flat folded state of an open polygonal chain (the ruler) with minimum length: it is always possible to flat-fold a polygonal chain, but it is NP-hard to minimize its length. Also, the Map Folding Problem, that is, finding a flat folded state of a two-dimensional orthogonal paper with an arbitrary mountain-valley crease pattern, becomes NP-complete if we want to know whether there exists a flat folded state that uses all creases. Finally, the well-known Protein Folding Problem for the two- and three-dimensional HP model of protein folding energetics is NP-hard. Stochastic local search algorithms are simple solution methods based on local search that have been applied quite successfully to a number of combinatorial optimization problems. Indeed, they are among the state-of-the-art algorithms for solving classical hard problems such as the travelling salesman problem, the quadratic assignment problem, and the graph colouring problem. However, with the exception of the protein folding problem, they have never been applied to more general geometric folding problems. This work will contribute to the successful application of stochastic local search methods to geometric folding problems in general by proposing and analysing appropriate neighbourhood structures and effective search strategies for a number of optimization problems that arise in this field. (A small illustrative sketch of this approach for the Ruler Folding Problem is given after the references.)
References:
Erik Demaine, Joseph O'Rourke, Geometric Folding Algorithms: Linkages, Origami, Polyhedra, Cambridge University Press, 2007.
Holger Hoos, Thomas Stützle, Stochastic Local Search: Foundations and Applications, Morgan Kaufmann, Elsevier, 2004.
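To make the approach concrete for the Ruler Folding Problem described above, the following Python sketch (segment lengths and all parameters are arbitrary) applies a simple stochastic local search: each crease either continues or reverses direction along the line, the objective is the extent of the flat folding, and the local move flips a single crease, accepting non-worsening moves:

import random

random.seed(3)
segments = [3, 2, 5, 1, 4, 2, 6, 3]   # link lengths of the open chain

def extent(turns):
    """Length of the folded ruler for a vector of +/-1 crease choices."""
    pos, lo, hi, d = 0, 0, 0, 1
    for seg, t in zip(segments, [1] + turns):
        d *= t                      # -1 folds back, +1 continues straight
        pos += d * seg
        lo, hi = min(lo, pos), max(hi, pos)
    return hi - lo

turns = [random.choice([1, -1]) for _ in range(len(segments) - 1)]
best = extent(turns)
for _ in range(2000):
    i = random.randrange(len(turns))
    turns[i] *= -1                  # flip one crease: the local move
    e = extent(turns)
    if e <= best:
        best = e                    # accept non-worsening moves
    else:
        turns[i] *= -1              # otherwise undo the flip
print(best)                         # cannot go below max(segments)

The research proper would replace this naive single-flip neighbourhood and acceptance rule with the carefully designed neighbourhood structures and search strategies that the thesis sets out to propose and analyse.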