
User Experience

User experience (UX) is about how a person feels about using a product, system or service. User experience highlights the experiential, affective, meaningful and valuable aspects of human-computer interaction and product ownership, but it also includes a person's perceptions of the practical aspects such as utility, ease of use and efficiency of the system. User experience is subjective in nature, because it is about an individual's feelings and thoughts about the system. User experience is dynamic, because it changes over time as the circumstances change.

Definitions
ISO 9241-210[1] defines user experience as "a person's perceptions and responses that result from the use or anticipated use of a product, system or service". So, user experience is subjective and focuses on the use. The additional notes for the ISO definition explain that user experience includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use. The notes also list the three factors that influence user experience: system, user and the context of use. Note 3 of the standard hints that usability addresses aspects of user experience, e.g. "usability criteria can be used to assess aspects of user experience". Unfortunately, the standard does not go further in clarifying the relation between user experience and usability. Clearly, the two are overlapping concepts, with usability including pragmatic aspects (getting a task done) and user experience focusing on users' feelings stemming both from pragmatic and hedonic aspects of the system. In addition to the ISO standard, there exist several other definitions for user experience. Some of them have been studied by Law et al. (2009)[2].

History
The term user experience was brought to wider knowledge by Donald Norman, User Experience Architect, in the mid-1990s.[3] Several developments affected the rise of interest in the user experience:

1. Recent advances in mobile, ubiquitous, social, and tangible computing technologies have moved human-computer interaction into practically all areas of human activity. This has led to a shift away from usability engineering to a much richer scope of user experience, where the user's feelings, motivations, and values are given as much, if not more, attention than efficiency, effectiveness and basic subjective satisfaction (i.e. the three traditional usability metrics[4]).[5]

2. In website design, it was important to combine the interests of different stakeholders: marketing, branding, visual design, and usability. Marketing and branding people needed to enter the interactive world where usability was important. Usability people needed to take marketing, branding, and aesthetic needs into account when designing websites. User experience provided a platform to cover the interests of all stakeholders: making websites easy to use, valuable, and effective for visitors. This is why several early user-experience publications focus on website user experience.[6][7][8][9]

The field of user experience was established to cover the holistic perspective on how a person feels about using a system. The focus is on pleasure and value rather than on performance. The exact definition, framework, and elements of user experience are still evolving.

Influences on user experience


Many factors can influence a user's experience with a system. To address this variety, factors influencing user experience have been classified into three main categories: the user's state, system properties, and context (situation).[10] Studying typical users and contexts helps in designing the system. These categories also help identify the reasons for a certain experience. The three main user-experience factors are best illustrated by the scenario below.

Example
Lisa is on her way home by bus, and wants to know how her husband is doing on a business trip. The bus is crowded and she did not get a seat, but she wants to use the time to contact her husband by phone. What affects her user experience with the mobile phone?

- Lisa's own mental state and characteristics (motivation, expectations, mood, know-how) and current physical resources (only one hand available for the phone).
- The context, i.e. the current situation:
  - Physical (moving bus, views passing by, lighting, noise - the environment Lisa feels via her senses);
  - Social (fellow travellers, code of conduct, husband's availability - how other people affect user experience);
  - Temporal (the duration of the bus trip - time constraints);
  - Infrastructural (availability of network, cost of calls and text messages, legal restrictions); and
  - Task (sending a text message is part of a bigger "task" of two-way dialogue, other ongoing activities such as monitoring when to step out of the bus, possible interruptions).
  This context motivates Lisa to use text messaging as the means to communicate with her husband. The context also affects the interaction with the mobile phone and thereby the user experience.
- The system needed for text messaging (mobile phone and text-messaging service in this case): user interface and functionality (e.g. text-messaging software and keypad), design and brand, the replies coming from the husband. The primary value comes from the discussion itself, and all other parts of the system should support this purpose.

Depending on the husband's messages, Lisa's emotions may range from delight to sorrow, from excitement to despair. User experience focuses, however, on Lisa's feelings about using the mobile phone, not those about her husband. Did the system enable her to communicate with the husband in the way she wanted in this context? Did the system delight her by exceeding her expectations or by attracting positive reactions from others?

Momentary emotion or overall user experience


The scenario above describes the user experience of communicating with a relative via text messaging with a mobile phone. We can, however, investigate user experience on many temporal levels. In the scenario above, we could study Lisa's changing emotions during interaction, her feelings about the episode as a whole, or her attitude towards her phone in general, i.e., her overall user experience. In this example, focusing on the momentary emotions may not be the best way to understand Lisa's user experience, since her emotions were caused mainly by the content (the messages from Lisa's husband) and not by the examined system (the mobile phone). However, in systems where the content has the primary focus, such as in electronic games, the flow of emotions may be the best way to evaluate user experience.

Single experiences influence the overall user experience:[11] the experience of a key click affects the experience of typing a text message, the experience of typing a message affects the experience of text messaging, and the experience of text messaging affects the overall user experience with the phone. The overall user experience is not simply a sum of smaller interaction experiences, because some experiences are more salient than others. Overall user experience is also influenced by factors outside the actual interaction episode: brand, pricing, friends' opinions, reports in media, etc.

One branch of user experience research focuses on emotions, that is, momentary experiences during interaction: designing affective interaction and evaluating emotions. Another branch is interested in understanding the long-term relation between user experience and product appreciation. Industry, especially, sees good overall user experience with a company's products as critical for securing brand loyalty and enhancing the growth of the customer base. All temporal levels of user experience (momentary, episodic, and long-term) are important, but the methods to design and evaluate these levels can be very different.

User Experience design


User experience design (UXD) is a subset of the field of experience design that pertains to the creation of the architecture and interaction models that affect the user experience of a device or system. The scope of the field is directed at affecting "all aspects of the user's interaction with the product: how it is perceived, learned, and used."[1]

The designers
This field has its roots in human factors and ergonomics, a field that since the late 1940s has focused on the interaction between human users, machines and the contextual environments to design systems that address the user's experience.[2] The term also has a more recent connection to user-centered design principles and incorporates elements from similar user-centered design fields.

As with the fields mentioned above, user experience design is a highly multi-disciplinary field, incorporating aspects of psychology, anthropology, sociology, computer science, graphic design, industrial design and cognitive science. Depending on the purpose of the product, UX may also involve content design disciplines such as communication design, instructional design, or game design. The subject matter of the content may also warrant collaboration with subject-matter experts (SMEs) from various backgrounds in business, government, or private groups on planning the UX.

The design
User experience design incorporates most or all of the above disciplines to positively impact the overall experience a person has with a particular interactive system and its provider. User experience design most frequently defines a sequence of interactions between a user (an individual person) and a system, virtual or physical, designed to meet or support user needs and goals primarily, while also satisfying system requirements and organizational objectives. Typical outputs include:

- Site audit (usability study of existing assets)
- Flows and navigation maps
- User stories or scenarios
- Personas (fictitious users to act out the scenarios)
- Site maps and content inventory
- Wireframes (screen blueprints or storyboards)
- Prototypes (for interactive or in-the-mind simulation)
- Written specifications (describing the behavior or design)
- Graphic mockups (precise visuals of the expected end result)

Benefits
User experience design is integrated into software development and other forms of application development to inform feature requirements and interaction plans based upon the user's goals. Newly introduced software must also take into account the dynamic pace of technology advancement and the need for change. The benefits associated with integrating these design principles include:

- Avoiding unnecessary product features
- Simplifying design documentation and customer-facing technical publications
- Improving the usability of the system and therefore its acceptance by customers
- Expediting design and development through detailed and properly conceived guidelines
- Incorporating business and marketing goals while catering to the user

User Interface design


User interface design or user interface engineering is the design of computers, appliances, machines, mobile communication devices, software applications, and websites with a focus on the user's experience and interaction. The goal of user interface design is to make the user's interaction as simple and efficient as possible in terms of accomplishing user goals, an approach often called user-centered design. Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design may be utilized to support its usability. The design process must balance technical functionality and visual elements (e.g., the user's mental model) to create a system that is not only operational but also usable and adaptable to changing user needs. Interface design is involved in a wide range of projects, from computer systems, to cars, to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered around their expertise, whether that be software design, user research, web design, or industrial design.

Processes
There are several phases and processes in user interface design, some of which are more in demand than others, depending on the project. (Note: for the remainder of this section, the word system is used to denote any project, whether it is a web site, application, or device.)

- Functionality requirements gathering - assembling a list of the functionality required by the system to accomplish the goals of the project and the potential needs of the users.
- User analysis - analysis of the potential users of the system, either through discussion with people who work with the users and/or with the potential users themselves. Typical questions involve:
  - What would the user want the system to do?
  - How would the system fit in with the user's normal workflow or daily activities?
  - How technically savvy is the user, and what similar systems does the user already use?
  - What interface look & feel styles appeal to the user?
- Information architecture - development of the process and/or information flow of the system (i.e. for phone-tree systems, this would be an option tree flowchart, and for web sites this would be a site flow that shows the hierarchy of the pages); a minimal sketch of such a hierarchy appears after this list.
- Prototyping - development of wireframes, either in the form of paper prototypes or simple interactive screens. These prototypes are stripped of all look & feel elements and most content in order to concentrate on the interface.
- Usability testing - testing of the prototypes on an actual user, often using a technique called the think-aloud protocol, where you ask the user to talk through their thoughts during the experience.
- Graphic interface design - the actual look & feel design of the final graphical user interface (GUI). It may be based on the findings developed during the usability testing if usability is unpredictable, or based on communication objectives and styles that would appeal to the user. In rare cases, the graphics may drive the prototyping, depending on the importance of visual form versus function. If the interface requires multiple skins, there may be multiple interface designs for one control panel, functional feature or widget. This phase is often a collaborative effort between a graphic designer and a user interface designer, or handled by one person who is proficient in both disciplines. User interface design requires a good understanding of user needs.
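To make the information-architecture step above more concrete, here is a minimal sketch of a site hierarchy modeled as a simple tree. The page names, paths and the SiteNode shape are invented for illustration and are not part of any standard process.

```typescript
// A node in a site map: a page and the pages beneath it in the hierarchy.
interface SiteNode {
  title: string;         // label shown in navigation
  path: string;          // URL path of the page
  children?: SiteNode[]; // sub-pages, if any
}

// Hypothetical site flow for a small company site.
const siteMap: SiteNode = {
  title: "Home",
  path: "/",
  children: [
    { title: "Products", path: "/products", children: [
      { title: "Product A", path: "/products/a" },
      { title: "Product B", path: "/products/b" },
    ]},
    { title: "Support", path: "/support" },
    { title: "Contact", path: "/contact" },
  ],
};

// Print the hierarchy with indentation, one page per line.
function printSiteMap(node: SiteNode, depth = 0): void {
  console.log(`${"  ".repeat(depth)}${node.title} (${node.path})`);
  for (const child of node.children ?? []) {
    printSiteMap(child, depth + 1);
  }
}

printSiteMap(siteMap);
```

Writing the hierarchy down as data like this makes it easy to review with stakeholders before any screens are prototyped.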

Requirements
The dynamic characteristics of a system are described in terms of dialogue requirements contained in seven principles of part 10 of the ergonomics standard ISO 9241. This standard establishes a framework of ergonomic "principles" for dialogue techniques, with high-level definitions and illustrative applications and examples of the principles. The principles of the dialogue represent the dynamic aspects of the interface and can mostly be regarded as the "feel" of the interface. The seven dialogue principles are:

- Suitability for the task: the dialogue is suitable for a task when it supports the user in the effective and efficient completion of the task.
- Self-descriptiveness: the dialogue is self-descriptive when each dialogue step is immediately comprehensible through feedback from the system or is explained to the user on request.
- Controllability: the dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met.
- Conformity with user expectations: the dialogue conforms with user expectations when it is consistent and corresponds to the user's characteristics, such as task knowledge, education and experience, and to commonly accepted conventions.
- Error tolerance: the dialogue is error-tolerant if, despite evident errors in input, the intended result may be achieved with either no or minimal action by the user.
- Suitability for individualization: the dialogue is capable of individualization when the interface software can be modified to suit the task needs, individual preferences, and skills of the user.
- Suitability for learning: the dialogue is suitable for learning when it supports and guides the user in learning to use the system.

The concept of usability is defined in Part 11 of the ISO 9241 standard by the effectiveness, efficiency, and satisfaction of the user. Part 11 gives the following definition of usability:

- Usability is measured by the extent to which the intended goals of use of the overall system are achieved (effectiveness).
- The resources that have to be expended to achieve the intended goals (efficiency).
- The extent to which the user finds the overall system acceptable (satisfaction).
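As a rough illustration of how the Part 11 factors might be decomposed into concrete measures, the sketch below computes a task completion rate (effectiveness), a mean time per successful task (efficiency) and a mean post-task rating (satisfaction) from hypothetical test-session data. The data shape, field names and example values are assumptions for illustration, not definitions from the standard.

```typescript
// One recorded attempt at a task during a usability test (hypothetical data shape).
interface TaskAttempt {
  completed: boolean;        // did the user reach the intended goal?
  timeSeconds: number;       // resources expended to get there
  satisfactionScore: number; // post-task rating, e.g. 1 (poor) to 5 (excellent)
}

// Effectiveness: share of attempts in which the intended goal was achieved.
function effectiveness(attempts: TaskAttempt[]): number {
  return attempts.filter(a => a.completed).length / attempts.length;
}

// Efficiency: mean time spent per successful attempt (lower is better).
function efficiency(attempts: TaskAttempt[]): number {
  const successful = attempts.filter(a => a.completed);
  return successful.reduce((sum, a) => sum + a.timeSeconds, 0) / successful.length;
}

// Satisfaction: mean post-task rating across all attempts.
function satisfaction(attempts: TaskAttempt[]): number {
  return attempts.reduce((sum, a) => sum + a.satisfactionScore, 0) / attempts.length;
}

const session: TaskAttempt[] = [
  { completed: true, timeSeconds: 42, satisfactionScore: 4 },
  { completed: false, timeSeconds: 90, satisfactionScore: 2 },
  { completed: true, timeSeconds: 55, satisfactionScore: 5 },
];

console.log(effectiveness(session)); // ≈ 0.67 of goals achieved
console.log(efficiency(session));    // 48.5 seconds per successful task
console.log(satisfaction(session));  // ≈ 3.67 average rating
```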

Effectiveness, efficiency, and satisfaction can be seen as quality factors of usability. To evaluate these factors, they need to be decomposed into sub-factors, and finally, into usability measures.

The information presentation is described in Part 12 of the ISO 9241 standard for the organization of information (arrangement, alignment, grouping, labels, location), for the display of graphical objects, and for the coding of information (abbreviation, color, size, shape, visual cues) by seven attributes. The "attributes of presented information" represent the static aspects of the interface and can generally be regarded as the "look" of the interface. The attributes are detailed in the recommendations given in the standard. Each of the recommendations supports one or more of the seven attributes. The seven presentation attributes are:

- Clarity: the information content is conveyed quickly and accurately.
- Discriminability: the displayed information can be distinguished accurately.
- Conciseness: users are not overloaded with extraneous information.
- Consistency: a unique design, conformity with users' expectations.
- Detectability: the user's attention is directed towards the information required.
- Legibility: information is easy to read.
- Comprehensibility: the meaning is clearly understandable, unambiguous, interpretable, and recognizable.

The user guidance in Part 13 of the ISO 9241 standard states that user guidance information should be readily distinguishable from other displayed information and should be specific to the current context of use. User guidance can be given by the following five means:

- Prompts, indicating explicitly (specific prompts) or implicitly (generic prompts) that the system is available for input.
- Feedback, informing about the user's input in a timely, perceptible, and non-intrusive way.
- Status information, indicating the continuing state of the application, the system's hardware and software components, and the user's activities.
- Error management, including error prevention, error correction, user support for error management, and error messages.
- On-line help for system-initiated and user-initiated requests, with specific information for the current context of use.

Research past and ongoing


User interface design has been a topic of considerable research, including on its aesthetics.[1] Standards have been developed, as far back as the eighties, for defining the usability of software products.[2] One structural basis has become the IFIP user interface reference model. The model proposes four dimensions to structure the user interface:

- The input/output dimension (the look)
- The dialogue dimension (the feel)
- The technical or functional dimension (the access to tools and services)
- The organizational dimension (the communication and co-operation support)

This model has greatly influenced the development of the international standard ISO 9241 describing the interface design requirements for usability. The desire to understand application-specific UI issues early in software development, even as an application was being developed, led to research on GUI rapid prototyping tools that might offer convincing simulations of how an actual application might behave in production use.[3] Some of this research has shown that a wide variety of programming tasks for GUI-based software can, in fact, be specified through means other than writing program code.[4]

Research in recent years is strongly motivated by the increasing variety of devices that can, by virtue of Moore's Law, host very complex interfaces.[5] There is also research on generating user interfaces automatically, to match a user's level of ability for different kinds of interaction.[6]

Web design
Web design is the process of designing websites: collections of online content, including documents and applications, that reside on a web server or servers.[citation needed] As a whole, the process of web design includes planning, post-production, research, advertising, as well as media control that is applied to the pages within the site by the designer or group of designers with a specific purpose. The site itself can be divided into sections starting from its main page, also known as the home page, which states the main objective as well as highlights of the site's daily updates, and which contains hyperlinks that direct viewers to designated pages within the site's domain.

Interaction Design
Interaction design (IxD) is "the practice of designing interactive digital products, environments, systems, and services."[1] Like many other design fields, interaction design has an interest in form, but its main focus is on behaviour.[1] What clearly marks interaction design as a design field, as opposed to a science or engineering field, is that it is about synthesis and imagining things as they might be, more so than focusing on how things are.[2] Interaction design is heavily focused on satisfying the needs and desires of the people who will use the product,[2] whereas other disciplines, like software engineering, have a heavy focus on designing for the technical stakeholders of a project.

History
The term interaction design was first coined by Bill Moggridge[3] and Bill Verplank in the mid-1980s. It would be another 10 years before other designers rediscovered the term and started using it.[2] To Verplank, it was an adaptation of the computer-science term user interface design to the industrial design profession.[4] To Moggridge, it was an improvement over soft-face, which he had coined in 1984 to refer to the application of industrial design to products containing software (Moggridge 2006). In 1990, Gillian Crampton-Smith established an interaction design MA at the Royal College of Art (RCA) in London (originally entitled "computer-related design" and now known as Design Interactions).[5] In 2001, she helped found the Interaction Design Institute Ivrea, a small institute in Northern Italy dedicated solely to interaction design; the institute moved to Milan in October 2005 and merged courses with Domus Academy. In 2007, some of the people originally involved with IDII set up the Copenhagen Institute of Interaction Design (CIID).

Today, interaction design is taught in many schools worldwide.

Methodologies
Goal Oriented Design
Goal-oriented design (or goal-directed design) "is concerned most significantly with satisfying the needs and desires of the people who will interact with a product or service."[2]

Alan Cooper argues in The Inmates Are Running the Asylum that we need to take a new approach to how interactive software-based problems are solved. The problems faced when designing computer-based interfaces are fundamentally different from the challenges faced when designing interfaces for products that do not include software (e.g. hammers). Cooper introduces the concept of cognitive friction: we treat things as human when they are sufficiently complex that we cannot always understand how they behave, and computer interfaces are complex enough to be treated this way.[7] It is argued that we must truly understand the goals of a user (both personal and objective) in order to solve the problem in the best way possible, and that the current approach is oriented much more towards solving individual problems from the perspective of a business or other interested parties.[6]

Personas
Goal-oriented design, as explained in The Inmates Are Running the Asylum, advocates the use of personas, which are created after interviewing a significant number of users. The aim of a persona is to "develop a precise description of our user and what he wishes to accomplish."[8] The best method, as described in the book, is to construct fabricated users, with names and back stories, who represent real users of a given product.[8] These users are not so much a fabrication as a byproduct of the investigation process. The reason for constructing back stories for a persona is to make them believable, such that they can be treated as real people and their needs can be argued for.[8] Personas also help eliminate idiosyncrasies that may be attributed to a given individual.[8]

Cognitive dimensions
The cognitive dimensions framework[9] provides a specialized vocabulary to evaluate and modify particular design solutions. Cognitive dimensions are designed as a lightweight approach to analysis of design quality, rather than an in-depth, detailed description. They provide a common vocabulary for discussing many factors in notation, UI or programming language design. Dimensions provide high-level descriptions of the interface and how the user interacts with it, such as consistency, error-proneness, hard mental operations, viscosity or premature commitment. These concepts aid in the creation of new designs from existing ones through design manoeuvres that alter the position of the design within a particular dimension.

Affective interaction design


Throughout the process of interaction design, designers must be aware of key aspects in their designs that influence emotional responses in target users. The need for products to convey positive emotions and avoid negative ones is critical to product success.[10] These aspects include positive, negative, motivational, learning, creative, social and persuasive influences, to name a few. One method that can help convey such aspects is the use of expressive interfaces. In software, for example, the use of dynamic icons, animations and sound can help communicate a state of operation, creating a sense of interactivity and feedback. Interface aspects such as fonts, color palette, and graphical layouts can also influence an interface's perceived effectiveness. Studies have shown that affective aspects can affect a user's perception of usability.[10] Emotional and pleasure theories exist to explain people's responses to the use of interactive products. These include Don Norman's emotional design model, Patrick Jordan's pleasure model, and McCarthy and Wright's Technology as Experience framework.

User Centered Design


In broad terms, user-centered design (UCD) or pervasive usability[1] is a design philosophy and a process in which the needs, wants, and limitations of end users of a product are given extensive attention at each stage of the design process. User-centered design can be characterized as a multi-stage problem-solving process that not only requires designers to analyze and foresee how users are likely to use a product, but also to test the validity of their assumptions with regard to user behaviour in real-world tests with actual users. Such testing is necessary as it is often very difficult for the designers of a product to understand intuitively what a first-time user of their design experiences, and what each user's learning curve may look like. The chief difference from other product design philosophies is that user-centered design tries to optimize the product around how users can, want, or need to use the product, rather than forcing the users to change their behavior to accommodate the product.

UCD models and approaches


For example, the user-centered design process can help software designers fulfill the goal of a product engineered for their users. User requirements are considered right from the beginning and included in the whole product cycle. These requirements are noted and refined through investigative methods including ethnographic study, contextual inquiry, prototype testing, usability testing and other methods. Generative methods may also be used, including card sorting, affinity diagramming and participatory design sessions. In addition, user requirements can be inferred by careful analysis of usable products similar to the product being designed.

- Cooperative design: involving designers and users on an equal footing. This is the Scandinavian tradition of design of IT artifacts, and it has been evolving since 1970.[2]
- Participatory design (PD): a North American term for the same concept, inspired by cooperative design, focusing on the participation of users. Since 1990, there has been a biennial Participatory Design Conference.[3]
- Contextual design: customer-centered design in the actual context, including some ideas from participatory design.[4]

All these approaches follow the ISO standard Human-centred design for interactive systems (ISO 9241-210, 2010).

Purpose
UCD answers questions about users and their tasks and goals, then uses the findings to make decisions about development and design. UCD of a web site, for instance, seeks to answer the following questions:

- Who are the users of the document?
- What are the users' tasks and goals?
- What are the users' experience levels with the document, and documents like it?
- What functions do the users need from the document?
- What information might the users need, and in what form do they need it?
- How do users think the document should work?

Elements
As examples of UCD viewpoints, the essential elements of UCD of a web site are considerations of visibility, accessibility, legibility and language.

Visibility
Visibility helps the user construct a mental model of the document. Models help the user predict the effect(s) of their actions while using the document. Important elements (such as those that aid navigation) should be emphatic. Users should be able to tell from a glance what they can and cannot do with the document.

Accessibility
Users should be able to find information quickly and easily throughout the document, regardless of its length. Users should be offered various ways to find information (such as navigational elements, search functions, a table of contents, clearly labeled sections, page numbers, color coding, etc.). Navigational elements should be consistent with the genre of the document. Chunking is a useful strategy that involves breaking information into small pieces that can be organized into some type of meaningful order or hierarchy. The ability to skim the document allows users to find their piece of information by scanning rather than reading. Bold and italic words are often used.

Legibility
Text should be easy to read: through analysis of the rhetorical situation, the designer should be able to determine a useful font style. Ornamental fonts and text in all capital letters are hard to read, but italics and bolding can be helpful when used correctly. Large or small body text is also hard to read (a screen size of 10-12 pixel sans serif and 12-16 pixel serif is recommended). High figure-ground contrast between text and background increases legibility; dark text against a light background is most legible.

Language
Depending on the rhetorical situation, certain types of language are needed. Short sentences are helpful, as are short, well-written texts used in explanations and similar bulk-text situations. Unless the situation calls for it, do not use jargon or technical terms. Many writers will choose to use active voice, verbs (instead of noun strings or nominals), and simple sentence structure.

Rhetorical situation
A user-centered design is focused around the rhetorical situation. The rhetorical situation shapes the design of an information medium. There are three elements to consider in a rhetorical situation: Audience, Purpose, and Context.

Audience
The audience is the people who will be using the document. The designer must consider their age, geographical location, ethnicity, gender, education, etc.

Context
The context is the circumstances surrounding the situation. The context often answers the question: What situation has prompted the need for this document? Context also includes any social or cultural issues that may surround the situation.

Analysis tools used in user-centered design


There are a few main tools used in the analysis of user-centered design, mainly: personas, scenarios, and essential use cases.[5]

Persona
During the UCD process, a persona representing the user may be created. It is a fictional character with all the characteristics of the user. Personas are created after the field research process, which typically consists of members of the primary stakeholder (user) group answering questionnaires or participating in interviews, or a mixture of both. After results are gathered from the field research, they are used to create personas of the primary stakeholder group. Often, there may be several personas concerning the same group of individuals, since it is almost impossible to apply all the characteristics of the stakeholder group to one character. The character depicts the "typical" or "average" individual in the primary stakeholder group, and is referred to throughout the entire design process.[6]

There is also what is called a secondary persona, where the character is not a member of the primary stakeholder group and is not the main target of the design, but whose needs should be met and problems solved if possible. Secondary personas exist to help account for further possible problems and difficulties that may occur even though the primary stakeholder group is satisfied with their solution. There is also an anti-persona, the character for whom the design process is not made. Personas usually include a name and picture; demographics; roles and responsibilities; goals and tasks; motivations and needs; environment and context; and a quote that can represent the character's personality.

Personas are useful in the sense that they create a common, shared understanding of the user group around which the design process is built. They also help to prioritize the design considerations by providing a context of what the user needs and which functions are simply nice to have. They can provide a human face and existence to a diversified and scattered user group, and can create some empathy and add emotion when referring to the users. However, since personas are a generalized perception of the primary stakeholder group from collected data, the characteristics may be too broad and typical, or too much of an "average Joe". Sometimes, personas can also have stereotypical properties, which may hurt the entire design process. Overall, personas are a useful tool because designers in the design process have an actual person to design around, rather than referring to a set of data or a wide range of individuals.
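To make the ingredients listed above concrete, here is one possible way a persona record might be captured as data. The field names and the example persona are invented for illustration; no UCD method prescribes this exact structure.

```typescript
// A persona assembled from field research (hypothetical structure).
interface Persona {
  name: string;
  demographics: { age: number; location: string; occupation: string };
  rolesAndResponsibilities: string[];
  goalsAndTasks: string[];
  motivationsAndNeeds: string[];
  environmentAndContext: string;
  quote: string; // a line that captures the character's personality
  kind: "primary" | "secondary" | "anti"; // which stakeholder role it plays
}

// Example primary persona for a hypothetical travel-booking site.
const maria: Persona = {
  name: "Maria",
  demographics: { age: 34, location: "Lisbon", occupation: "Project manager" },
  rolesAndResponsibilities: ["Plans trips for a family of four"],
  goalsAndTasks: ["Book flights and hotels in one session", "Compare prices quickly"],
  motivationsAndNeeds: ["Limited free time", "Wants predictable total costs"],
  environmentAndContext: "Uses a laptop in the evening, often interrupted",
  quote: "I just want to finish the booking before the kids wake up.",
  kind: "primary",
};

console.log(`${maria.name}: ${maria.quote}`);
```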

Scenario
A scenario created in the UCD process is a fictional story about the "daily life of", or a sequence of events involving, the primary stakeholder group as the main character. Typically, a persona that was created earlier is used as the main character of this story. The story should be specific about the events that relate to the problems of the primary stakeholder group, and normally to the main research questions the design process is built upon. These may turn out to be simple stories about the daily life of an individual, but small details from the events should imply details about the users, and may include emotional or physical characteristics. There can be a "best-case scenario", where everything works out best for the main character, a "worst-case scenario", where the main character experiences everything going wrong around him or her, and an "average-case scenario", which is the typical life of the individual, where nothing really special or really depressing occurs, and the day just moves on.

Scenarios create a social context in which the personas exist, and also create an actual physical world, instead of imagining a character with internal characteristics from gathered data and nothing else; there is more action involved in the persona's existence. A scenario is also more easily understood by people, since it is in the form of a story and is easier to follow.[7] Yet, like personas, these scenarios are assumptions made by the researcher and designer, and are also created from a set of organized data. Some even say such scenarios are unrealistic compared to real-life occurrences. Also, it is difficult to explain and capture the low-level tasks that occur, like the thought process of the persona before acting.

Use case
In short, a use case describes the interaction between an individual and the rest of the world. Each use case describes an event that may occur for a short period of time in real life, but may consist of intricate details and interactions between the actor and the world. It is represented as a series of simple steps for the character to achieve his or her goal, in the form of a cause-and-effect scheme. Use cases are normally written in the form of a chart with two columns: the first column labelled actor, the second column labelled world, with the actions performed by each side written in order in the respective columns. The following is an example of a use case for performing a song on a guitar in front of an audience, with each step labelled by the column it belongs to.[8]

Actor: choose music to play
Actor: pick up guitar
World: display sheet music
Actor: perform each note on sheet music using guitar
World: convey note to audience using sound
World: audience provides feedback to performer
Actor: assess performance and adjust as needed based on audience feedback
Actor: complete song with required adjustments
World: audience applause

The interaction between the actor and the world is an act that can be seen in everyday life, and we take it for granted, without thinking too much about the small details that need to happen for an act like performing a piece of music to exist. It is similar to the fact that, when speaking our mother tongue, we don't think too much about grammar and how to phrase words; they just come out, since we are so used to saying them. The actions between an actor and the world, notably the primary stakeholder (user) and the world in this case, should be thought about in detail, and hence use cases are created to capture these tiny interactions.

An essential use case is a special kind of use case, also called an "abstract use case". Essential use cases describe the essence of the problem, and deal with the nature of the problem itself. While writing essential use cases, no assumptions about unrelated details should be made. In addition, the goals of the subject should be separated from the process and implementation used to reach that particular goal. Below is an example of an essential use case with the same goal as the former example.

Actor: choose sheet music to perform
Actor: gather necessary resources
World: provide access to resources
Actor: perform piece sequentially
World: convey and interpret performance
World: provide feedback
Actor: complete performance

Use cases are useful because they help identify useful levels of design work. They allow the designers to see the actual low-level processes that are involved in a certain problem, which makes the problem easier to handle, since the minor steps and details the user takes are exposed. The designers' job should take these small problems into consideration in order to arrive at a final solution that works. Another way to say this is that use cases break a complicated task into smaller bits, where these bits are useful units. Each bit completes a small task, which then builds up to the final, bigger task. Like writing code on a computer, it is easier to write the basic smaller parts and make them work first, and then put them together to finish the larger, more complicated code, instead of tackling the entire code from the very beginning. The first approach is less risky because if something goes wrong with the code, it is easier to look for the problem in the smaller bits, since the segment with the problem will be the one that does not work, while in the latter approach the programmer may have to look through the entire code to search for a single error, which proves time-consuming. The same reasoning goes for writing use cases in UCD. Lastly, use cases convey useful and important tasks where the designer can see which ones are of higher importance than others.

Some drawbacks of writing use cases include the fact that each action, by the actor or the world, consists of little detail and is simply a small action. This may lead to different designers imagining and interpreting an action differently. Also, during the process, it is easy to over-simplify a task, since a small task from a larger task may itself consist of even smaller tasks. Picking up a guitar may involve thinking of which guitar to pick up and which pick to use, and thinking about where the guitar is located first. These tasks may then be divided into smaller tasks, such as first thinking of what colour of guitar fits the place where the piece will be performed, and other related details. Tasks may be split further down into even tinier tasks, and it is up to the designer to determine a suitable place to stop splitting up the tasks.[9] Tasks may not only be oversimplified, they may also be omitted altogether; thus the designer should be aware of all the detail and all the key steps that are involved in an event or action when writing use cases.
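The two-column charts above can also be written down as data, which makes the actor/world split and the ordering of steps explicit. The sketch below is one possible representation, reusing the essential use case from the example; the type names are invented for illustration.

```typescript
// One step in a use case, performed either by the actor or by the world.
interface UseCaseStep {
  by: "actor" | "world";
  action: string;
}

interface UseCase {
  goal: string;
  steps: UseCaseStep[]; // ordered cause-and-effect sequence
}

// The essential use case from the example above, written as data.
const performPiece: UseCase = {
  goal: "Perform a piece of music for an audience",
  steps: [
    { by: "actor", action: "choose sheet music to perform" },
    { by: "actor", action: "gather necessary resources" },
    { by: "world", action: "provide access to resources" },
    { by: "actor", action: "perform piece sequentially" },
    { by: "world", action: "convey and interpret performance" },
    { by: "world", action: "provide feedback" },
    { by: "actor", action: "complete performance" },
  ],
};

// Print the use case as a simple two-column listing.
for (const step of performPiece.steps) {
  const column = step.by === "actor" ? "Actor" : "World";
  console.log(`${column.padEnd(6)}| ${step.action}`);
}
```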

User-centered design, needs and emotions


The book "The Design of Everyday Things" (originally called "The Psychology of Everyday Things") was first published in 1986. In this book, Donald A. Norman describes the psychology behind what he deems 'good' and 'bad' design through examples and offers principles of 'good' design. He exalts the importance of design in our everyday lives, and the consequences of errors caused by bad designs. In his book, Norman uses the term "user-centered design" to describe design based on the needs of the user, leaving aside what he considers secondary issues like aesthetics. User-centered design involves simplifying the structure of

tasks, making things visible, getting the mapping right, exploiting the powers of constraint, and designing for error. Norman's overly reductive[citation needed] approach in this text was readdressed by him later in his own publication "Emotional Design." Other books in a similar vein include "Designing Pleasurable Products"
[10]

by Patrick W. Jordan, in which the author

suggests that different forms of pleasure should be included in a user-centered approach in addition to traditional definitions of usability.

User-centered design in product lifecycle management systems


Software applications (or often suites of applications) used in product lifecycle management (typically including CAD, CAM and CAx processes) can typically be characterized by the need for these solutions to serve the needs of a broad range of users, with each user having a particular job role and skill level. For example, a CAD digital mockup might be utilized by a novice analyst, a design engineer of moderate skills, or a manufacturing planner of advanced skills.

Web 2.0
The term Web 2.0 is associated with web applications that facilitate participatory information sharing, interoperability, user-centered design,[1] and collaboration on the World Wide Web. A Web 2.0 site allows users to interact and collaborate with each other in a social media dialogue as creators (prosumers) of user-generated content in a virtual community, in contrast to websites where users (consumers) are limited to the passive viewing of content that was created for them. Examples of Web 2.0 include social networking sites, blogs, wikis, video sharing sites, hosted services, web applications, mashups and folksonomies. The term is closely associated with Tim O'Reilly because of the O'Reilly Media Web 2.0 conference in late 2004.[2][3] Although the term suggests a new version of the World Wide Web, it does not refer to an update to any technical specification, but rather to cumulative changes in the ways software developers and end-users use the Web. Whether Web 2.0 is qualitatively different from prior web technologies has been challenged by World Wide Web inventor Tim Berners-Lee, who called the term a "piece of jargon",[4] precisely because he intended the Web in his vision as "a collaborative medium, a place where we [could] all meet and read and write". He called it the "Read/Write Web".[5]

History
The term "Web 2.0" was coined in January 1999 by Darcy DiNucci, a consultant on electronic information design (information architecture). In her article, "Fragmented Future", DiNucci writes:[6][7] The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfulls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven.

Her use of the term deals mainly with Web design, aesthetics, and the interconnection of everyday objects with the Internet; she argues that the Web is "fragmenting" due to the widespread use of portable Web-ready devices. Her article is aimed at designers, reminding them to code for an ever-increasing variety of hardware. As such, her use of the term hints at, but does not directly relate to, the current uses of the term. The term Web 2.0 did not resurface until 2002.[8][9][10][11] These authors focus on the concepts currently associated with the term where, as Scott Dietzen puts it, "the Web becomes a universal, standards-based integration platform".[10] John Robb wrote: "What is Web 2.0? It is a system that breaks with the old model of centralized Web sites and moves the power of the Web/Internet to the desktop."[11]

In 2003, the term began its rise in popularity when O'Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you".[12] They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value. O'Reilly and Battelle contrasted Web 2.0 with what they called "Web 1.0". They associated Web 1.0 with the business models of Netscape and the Encyclopædia Britannica Online. For example, Netscape framed "the web as platform" in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the "horseless carriage" framed the automobile as an extension of the familiar, Netscape promoted a "webtop" to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.[13] In short, Netscape focused on creating software, updating it on occasion, and distributing it to the end users.

O'Reilly contrasted this with Google, a company which did not at the time focus on producing software, such as a browser, but instead focused on providing a service based on data, such as the links Web page authors make between sites. Google exploits this user-generated content to offer Web search based on reputation through its "PageRank" algorithm. Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called "the perpetual beta". A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia: while the Britannica relies upon experts to create articles and releases them periodically in publications, Wikipedia relies on trust in anonymous users to constantly and quickly build content. Wikipedia is not based on expertise but rather on an adaptation of the open source software adage "given enough eyeballs, all bugs are shallow", and it produces and updates articles constantly.

O'Reilly's Web 2.0 conferences have been held every year since 2003, attracting entrepreneurs, large companies, and technology reporters. In terms of the lay public, the term Web 2.0 was largely championed by bloggers and by technology journalists, culminating in the 2006 TIME magazine Person of the Year ("You"). That is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media sharing sites. In the cover story, Lev Grossman explains:[14]

It's a story about community and collaboration on a scale never seen before. It's about the cosmic compendium of knowledge Wikipedia and the million-channel people's network YouTube and the online metropolis MySpace. It's about the many wresting power from the few and helping one another for nothing and how that will not only change the world, but also change the way the world changes.

Since that time, Web 2.0 has found a place in the lexicon; in 2009 Global Language Monitor declared it to be the one-millionth English word.[15]

Characteristics
Web 2.0 websites allow users to do more than just retrieve information. By increasing what was already possible in "Web 1.0", they provide the user with more user-interface, software and storage facilities, all through their browser. This has been called "network as platform" computing.[3] Users can provide the data that is on a Web 2.0 site and exercise some control over that data.[3][16] These sites may have an "architecture of participation" that encourages users to add value to the application as they use it.[2][3] The concept of Web-as-participation-platform captures many of these characteristics. Bart Decrem, a founder and former CEO of Flock, calls Web 2.0 the "participatory Web"[17] and regards the Web-as-information-source as Web 1.0.

Web 2.0 offers all users the same freedom to contribute. While this opens the possibility for rational debate and collaboration, it also opens the possibility for "spamming" and "trolling" by less rational users. The impossibility of excluding group members who don't contribute to the provision of goods from sharing the profits gives rise to the possibility that rational members will prefer to withhold their contribution of effort and free-ride on the contribution of others.[18] This requires what is sometimes called radical trust by the management of the website. According to Best,[19] the characteristics of Web 2.0 are: rich user experience, user participation, dynamic content, metadata, web standards and scalability. Further characteristics, such as openness, freedom[20] and collective intelligence[21] by way of user participation, can also be viewed as essential attributes of Web 2.0.

Technologies
The client-side (web browser) technologies used in Web 2.0 development include Asynchronous JavaScript and XML (Ajax), Adobe Flash and the Adobe Flex framework, and JavaScript/Ajax frameworks such as Yahoo! UI Library, Dojo Toolkit, MooTools, and jQuery. Ajax programming uses JavaScript to upload and download new data from the web server without undergoing a full page reload. To allow users to continue to interact with the page, communications such as data requests going to the server are separated from data coming back to the page (asynchronously). Otherwise, the user would have to routinely wait for the data to come back before they could do anything else on that page, just as a user has to wait for a page to complete a reload. This also increases the overall performance of the site, as the sending of requests can complete more quickly, independent of the blocking and queueing required to send data back to the client.
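As a small illustration of the pattern just described, the sketch below issues an asynchronous request, parses the JSON response and updates part of the page without a full reload. It uses the modern fetch API rather than the XMLHttpRequest object typical of the Web 2.0 era, and the /api/messages endpoint and element id are hypothetical.

```typescript
// Fetch new data asynchronously and patch it into the page via the DOM.
async function refreshMessages(): Promise<void> {
  // The request happens in the background; the rest of the page stays interactive.
  const response = await fetch("/api/messages"); // hypothetical endpoint
  const messages: { author: string; text: string }[] = await response.json();

  const list = document.getElementById("message-list"); // hypothetical element
  if (!list) return;

  // Replace the list contents with the freshly fetched data.
  list.innerHTML = "";
  for (const message of messages) {
    const item = document.createElement("li");
    item.textContent = `${message.author}: ${message.text}`;
    list.appendChild(item);
  }
}

// Poll for updates every few seconds instead of reloading the whole page.
setInterval(() => { refreshMessages().catch(console.error); }, 5000);
```

A JavaScript/Ajax framework, or the older XMLHttpRequest object, could be substituted without changing the overall pattern.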

The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation), two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in their web application. When this data is received via Ajax, the JavaScript program then uses the Document Object Model (DOM) to dynamically update the web page based on the new data, allowing for a rapid and interactive user experience. In short, using these techniques, web designers can make their pages function like desktop applications. For example, Google Docs uses this technique to create a Web-based word processor.

Adobe Flex is another technology often used in Web 2.0 applications. Compared to JavaScript libraries like jQuery, Flex makes it easier for programmers to populate large data grids, charts, and other heavy user interactions.[22] Applications programmed in Flex are compiled and displayed as Flash within the browser. As a widely available plugin independent of W3C (World Wide Web Consortium, the governing body of web standards and protocols) standards, Flash is capable of doing many things which were not possible pre-HTML5, the language used to construct web pages. Of Flash's many capabilities, the most commonly used in Web 2.0 is its ability to play audio and video files. This has allowed for the creation of Web 2.0 sites where video media is seamlessly integrated with standard HTML.

In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks do not use technology any different from JavaScript, Ajax, and the DOM. What frameworks do is smooth over inconsistencies between web browsers and extend the functionality available to developers. Many of them also come with customizable, prefabricated 'widgets' that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel.

On the server side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as PHP, Ruby, Perl, Python, JSP and ASP are used by developers to output data dynamically using information from files and databases. What has begun to change in Web 2.0 is the way this data is formatted. In the early days of the Internet, there was little need for different websites to communicate with each other and share data. In the new "participatory web", however, sharing data between sites has become an essential capability. To share its data with other sites, a website must be able to generate output in machine-readable formats such as XML (Atom, RSS, etc.) and JSON. When a site's data is available in one of these formats, another website can use it to integrate a portion of that site's functionality into itself, linking the two together. When this design pattern is implemented, it ultimately leads to data that is both easier to find and more thoroughly categorized, a hallmark of the philosophy behind the Web 2.0 movement.

In brief, Ajax is a key technology used to build Web 2.0 because it provides a rich user experience and works with any browser, whether it is Firefox, Chrome, Internet Explorer or another popular browser. A language with very good web-services support should then be used to build Web 2.0 applications. In addition, the language used should lend itself to iterative development, meaning that the addition and deployment of features can be achieved easily and quickly.
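To illustrate the machine-readable output described above, here is a minimal sketch of a server endpoint that exposes site data as JSON so another site could consume it in a mash-up. It uses Node's built-in http module; the data, URL path and port are invented for illustration.

```typescript
import { createServer } from "http";

// Data this site wants to share with other sites (hypothetical content).
const recentPosts = [
  { title: "Hello Web 2.0", url: "https://example.com/posts/1", published: "2006-11-01" },
  { title: "Tagging and folksonomies", url: "https://example.com/posts/2", published: "2006-11-05" },
];

// Serve the posts as JSON so another site can integrate them into its own pages.
createServer((request, response) => {
  if (request.url === "/feed.json") {
    response.writeHead(200, { "Content-Type": "application/json" });
    response.end(JSON.stringify({ posts: recentPosts }));
  } else {
    response.writeHead(404, { "Content-Type": "text/plain" });
    response.end("Not found");
  }
}).listen(8080);
```

The same data could equally be published as an Atom or RSS feed; JSON is shown here only because it is the most compact option.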

Concepts
Web 2.0 can be described in 3 parts which are as follows:  Rich Internet application (RIA) defines the experience brought from desktop to browser whether it is from a graphical point of view or usability point of view. Some buzzwords related to RIA are Ajax and Flash.

Service-oriented architecture (SOA) is a key piece in Web 2.0 which defines how Web 2.0 applications expose their functionality so that other applications can leverage and integrate the functionality providing a set of much richer applications (Examples are: Feeds, RSS, Web Services, Mash-ups)

Social Web defines how Web 2.0 tends to interact much more with the end user and makes the end user an integral part of the application.

As such, Web 2.0 draws together the capabilities of client- and server-side software, content syndication and the use of network protocols. Standards-oriented web browsers may use plug-ins and software extensions to handle the content and the user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment now known as "Web 1.0". Web 2.0 websites include the following features and techniques, which Andrew McAfee summarized with the acronym SLATES:[23]

Search: finding information through keyword search.
Links: connecting information together into a meaningful information ecosystem using the model of the Web, and providing low-barrier social tools.
Authoring: the ability to create and update content leads to the collaborative work of many rather than just a few web authors. In wikis, users may extend, undo and redo each other's work. In blogs, posts and the comments of individuals build up over time.
Tags: categorization of content by users adding "tags" (short, usually one-word descriptions) to facilitate searching, without dependence on pre-made categories. Collections of tags created by many users within a single system may be referred to as "folksonomies" (i.e., folk taxonomies).
Extensions: software that makes the Web an application platform as well as a document server. These include software like Adobe Reader, Adobe Flash Player, Microsoft Silverlight, ActiveX, Oracle Java, QuickTime and Windows Media.
Signals: the use of syndication technology, such as RSS, to notify users of content changes.

While SLATES forms the basic framework of Enterprise 2.0, it does not contradict all of the higher level Web 2.0 design patterns and business models. In this way, a new Web 2.0 report from O'Reilly is quite effective and diligent in interweaving the story of Web 2.0 with the specific aspects of Enterprise 2.0. It includes discussions of self-service IT, the long tail of enterprise IT demand, and many other consequences of the Web 2.0 era in the enterprise. The report also makes many sensible recommendations around starting small with pilot projects and measuring results, among a fairly long list.
[24]

Usage
A third important part of Web 2.0 is the social Web, which is a fundamental shift in the way people communicate. The social Web consists of a number of online tools and platforms where people share their perspectives, opinions, thoughts and experiences. Web 2.0 applications tend to interact much more with the end user. As such, the end user is not only a user of the application but also a participant, by:

Podcasting
Blogging
Tagging
Contributing to RSS
Social bookmarking
Social networking

The popularity of the term Web 2.0, along with the increasing use of blogs, wikis, and social networking technologies, has led many in academia and business to coin a flurry of 2.0s,[25] including Library 2.0,[26] Social Work 2.0,[27] Enterprise 2.0, PR 2.0,[28] Classroom 2.0,[29] Publishing 2.0,[30] Medicine 2.0,[31] Telco 2.0, Travel 2.0, Government 2.0,[32] and even Porn 2.0.[33] Many of these 2.0s refer to Web 2.0 technologies as the source of the new version in their respective disciplines and areas. For example, in the Talis white paper "Library 2.0: The Challenge of Disruptive Innovation", Paul Miller argues: "Blogs, wikis and RSS are often held up as exemplary manifestations of Web 2.0. A reader of a blog or a wiki is provided with tools to add a comment or even, in the case of the wiki, to edit the content. This is what we call the Read/Write web. Talis believes that Library 2.0 means harnessing this type of participation so that libraries can benefit from increasingly rich collaborative cataloging efforts, such as including contributions from partner libraries as well as adding rich enhancements, such as book jackets or movie files, to records from publishers and others."[34] Here, Miller links Web 2.0 technologies and the culture of participation that they engender to the field of library science, supporting his claim that there is now a "Library 2.0". Many of the other proponents of new 2.0s mentioned here use similar methods. The meaning of Web 2.0 is role-dependent, as Dennis D. McDonald has noted. For example, some use Web 2.0 to establish and maintain relationships through social networks, while some marketing managers might use this promising technology to "end-run traditionally unresponsive I.T. department[s]."
[35]

There is a debate over the use of Web 2.0 technologies in mainstream education. Issues under consideration include the understanding of students' different learning modes; the conflicts between ideas entrenched in informal on-line communities and educational establishments' views on the production and authentication of 'formal' knowledge; and questions about privacy, plagiarism, shared authorship and the ownership of knowledge and information produced and/or published on line.

Marketing

For marketers, Web 2.0 offers an opportunity to engage consumers. A growing number of marketers are using Web 2.0 tools to collaborate with consumers on product development, service enhancement and promotion. Companies can use Web 2.0 tools to improve collaboration with both their business partners and consumers. Among other things, company employees have created wikis (websites that allow users to add, delete and edit content) to list answers to frequently asked questions about each product, and consumers have added significant contributions.[36] Another marketing Web 2.0 lure is to make sure consumers can use the online community to network among themselves on topics of their own choosing.[37]

Mainstream media usage of Web 2.0 is increasing. Saturating media hubs (like The New York Times, PC Magazine and Business Week) with links to popular new web sites and services is critical to achieving the threshold for mass adoption of those services.[38]

Web 2.0 offers financial institutions abundant opportunities to engage with customers. Networks such as Twitter, Yelp and Facebook are now becoming common elements of multichannel and customer-loyalty strategies, and banks are beginning to use these sites proactively to spread their messages. In a recent article for Bank Technology News, Shane Kite describes how Citigroup's Global Transaction Services unit monitors social media outlets to address customer issues and improve products. Furthermore, the financial institution uses Twitter to release "breaking news" and upcoming events, and YouTube to disseminate videos that feature executives speaking about market news.[39]

Small businesses have become more competitive by using Web 2.0 marketing strategies to compete with larger companies. As new businesses grow and develop, new technology is used to decrease the gap between businesses and customers. Social networks have become more intuitive and user-friendly, providing information that is easily reached by the end user. For example, companies use Twitter to offer customers coupons and discounts for products and services.[40]

According to Google Timeline, the term "Web 2.0" was discussed and indexed most frequently in 2005, 2007 and 2008. Its average use has been declining by 24% per quarter since April 2008.

Web 2.0 in education


Web 2.0 technologies provide teachers with new ways to engage students in a meaningful way. "Children raised on new media technologies are less patient with filling out worksheets and listening to lectures"[41] because students already participate on a global level. The lack of participation in a traditional classroom stems more from the fact that students receive better feedback online. Traditional classrooms have students do assignments, and when they are completed, they are just that: finished. However, Web 2.0 shows students that education is a constantly evolving entity. Whether it is participating in a class discussion or in a forum discussion, the technologies available to students in a Web 2.0 classroom do increase the amount they participate.

Will Richardson stated in Blogs, Wikis, Podcasts and Other Powerful Web Tools for Classrooms, 3rd Edition, that "The Web has the potential to radically change what we assume about teaching and learning, and it presents us with important questions to ponder: What needs to change about our curriculum when our students have the ability to reach audiences far beyond our classroom walls?"[42] Web 2.0 tools are needed in the classroom to prepare both students and teachers for the shift in learning that Collins and Halverson describe. According to Collins and Halverson, the self-publishing aspects, as well as the speed with which their work becomes available for consumption, allow teachers to give students the control they need over their learning. This control is the preparation students will need to be successful as learning expands beyond the classroom.[41]

Some may think that these technologies could hinder students' personal interaction; however, the research points to the contrary. "Social networking sites have worried many educators (and parents) because they often bring with them outcomes that are not positive: narcissism, gossip, wasted time, 'friending', hurt feelings, ruined reputations, and sometimes unsavory, even dangerous activities, [on the contrary,] social networking sites promote conversations and interaction that is encouraged by educators."
[43]

By allowing students to use the technology tools of Web 2.0, teachers are actually giving students the opportunity to learn for themselves and share that learning with their peers. One of the many implications of Web 2.0 technologies for class discussions is the idea that teachers are no longer in control of the discussions. Instead, Russell and Sorge (1999) conclude that integrating technology into instruction tends to move classrooms from teacher-dominated environments to ones that are more student-centered. While it is still important for teachers to monitor what students are discussing, the actual topics of learning are being guided by the students themselves.

Web 2.0 calls for major shifts in the way education is provided for students. One of the biggest shifts that Will Richardson points out in his book Blogs, Wikis, Podcasts, and Other Powerful Web Tools for Classrooms[42] is the fact that education must be not only socially but collaboratively constructed. This means that students in a Web 2.0 classroom are expected to collaborate with their peers. By making the shift to a Web 2.0 classroom, teachers are creating a more open atmosphere where students are expected to stay engaged and participate in the discussions and learning taking place around them.

In fact, there are many ways for educators to use Web 2.0 technologies in their classrooms. "Weblogs are not built on static chunks of content. Instead they are comprised of reflections and conversations that in many cases are updated every day [...] They demand interaction."[42] Will Richardson's observation of the essence of weblogs speaks directly to why blogs are so well suited to discussion-based classrooms. Weblogs give students a public space to interact with one another and with the content of the class. As long as the students are invested in the project, the need to see the blog progress acts as motivation, as the blog itself becomes an entity that can demand interaction. For example, Laura Rochette implemented the use of blogs in her American History class and noted that, in addition to an overall improvement in quality, the use of the blogs as an assignment demonstrated synthesis-level activity from her students. In her experience, asking students to conduct their learning in the digital world meant asking students "to write, upload images, and articulate the relationship between these images and the broader concepts of the course, [in turn] demonstrating that they can be thoughtful about the world around them."[44] Jennifer Hunt, an 8th-grade language arts teacher of pre-Advanced Placement students, shares a similar story. She used the WANDA project and asked students to make personal connections to the texts they read and to describe and discuss the issues raised in literature selections through social discourse. They engaged in the discussion via wikis and other Web 2.0 tools, which they used to organize, discuss, and present their responses to the texts and to collaborate with others in their classroom and beyond.
However, in order to make a Web 2.0 classroom work, teachers must also collaborate by using the benefits of Web 2.0 to improve their best practices via dialogues with colleagues on both a small and a grand scale. "Through educational networking, educators are able to have a 24/7 online experience not unlike the rich connecting and sharing that have typically been reserved for special interest conferences."[43]

The research shows that students are already using these technological tools, yet they are still expected to go to schools where using them is frowned upon or even punished. If educators are able to harness the power of the Web 2.0 technologies students are already using, the amount of participation and classroom discussion could be expected to increase. The way that participation and discussion are produced may be very different from the traditional classroom, but they nevertheless increase.

Web-based applications and desktops


Ajax has prompted the development of websites that mimic desktop applications, such as word processors, spreadsheets, and slide-show presentations. In 2006, Google, Inc. acquired one of the best-known sites of this broad class, Writely.
[45]

WYSIWYG wiki and blogging sites replicate many features of PC authoring applications.
[47]

Several browser-based "operating systems" have emerged, including EyeOS[46] and YouOS.

Although coined as such, many of these services function less like a traditional operating system and more as an application platform. They mimic the user experience of desktop operating systems, offering features and applications similar to a PC environment, and are able to run within any modern browser. However, these operating systems do not directly control the hardware on the client's computer.

Numerous web-based application services appeared during the dot-com bubble of 1997-2001 and then vanished, having failed to gain a critical mass of customers. In 2005, WebEx acquired one of the better-known of these, Intranets.com, for $45 million.[48]

Web Application
Rich Internet applications (RIAs) are Web 2.0 applications that have many of the characteristics of desktop applications and are typically delivered via a web browser.

Distribution of media
XML and RSS
Many regard syndication of site content as a Web 2.0 feature. Syndication uses standardized protocols to permit end users to make use of a site's data in another context (such as another website, a browser plugin, or a separate desktop application). Protocols permitting syndication include RSS (Really Simple Syndication, also known as web syndication), RDF (as in RSS 1.1), and Atom, all of them XML-based formats. Observers have started to refer to these technologies as web feeds. Specialized protocols such as FOAF and XFN (both for social networking) extend the functionality of sites or permit end users to interact without centralized websites.
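As a rough sketch of how a client can consume such a feed, the snippet below fetches a hypothetical RSS 2.0 feed and reads its items with the browser's standard DOMParser; the feed URL is invented, and in practice the feed would need to be served from the same origin or through a proxy because of browser cross-origin rules.

```javascript
// Sketch: read the <item> entries of an RSS 2.0 feed (hypothetical URL).
fetch("https://example.com/feed.rss")
  .then(function (response) { return response.text(); })
  .then(function (xmlText) {
    var feed = new DOMParser().parseFromString(xmlText, "application/xml");
    feed.querySelectorAll("item").forEach(function (item) {
      var title = item.querySelector("title").textContent;
      var link = item.querySelector("link").textContent;
      console.log(title + " -> " + link);   // or build DOM nodes, as in the Ajax example
    });
  });
```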

Web APIs
Web 2.0 often uses machine-based interactions such as REST and SOAP. Servers often expose proprietary application programming interfaces (APIs), but standard APIs (for example, for posting to a blog or notifying a blog update) have also come into use. Most communications through APIs involve XML or JSON payloads.

REST APIs, through their use of self-descriptive messages and hypermedia as the engine of application state, should be self-describing once an entry URI is known. Web Services Description Language (WSDL) is the standard way of publishing a SOAP API, and there is a range of web service specifications. EMML (Enterprise Mashup Markup Language), defined by the Open Mashup Alliance, is an XML markup language for creating enterprise mashups.
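To give a feel for the kind of API traffic described here, the sketch below posts a JSON payload to a hypothetical REST-style blog endpoint and then follows a link returned in the response. The endpoint /api/posts, the request fields, and the shape of the response (including its links object) are all assumptions made for illustration, not part of any published API.

```javascript
// Hypothetical REST call: create a blog post with a JSON payload.
fetch("/api/posts", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ title: "Hello", body: "First post via the API" })
})
  .then(function (response) { return response.json(); })
  .then(function (created) {
    // A hypermedia-style response might carry links to related resources, e.g.
    // { "id": 17, "links": { "self": "/api/posts/17", "comments": "/api/posts/17/comments" } }
    console.log("Created post at " + created.links.self);
  });
```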

Criticism
Critics of the term claim that "Web 2.0" does not represent a new version of the World Wide Web at all, but merely continues to use so-called "Web 1.0" technologies and concepts. First, techniques such as AJAX do not replace underlying protocols like HTTP, but add an additional layer of abstraction on top of them. Second, many of the ideas of Web 2.0 had already been featured in implementations on networked systems well before the term "Web 2.0" emerged. Amazon.com, for instance, has allowed users to write reviews and consumer guides since its launch in 1995, in a form of self-publishing. Amazon also opened its API to outside developers in 2002.
[49]

Previous developments also came from research in computer-supported collaborative learning and computer-supported cooperative work (CSCW), and from established products like Lotus Notes and Lotus Domino, all phenomena that preceded Web 2.0.

But perhaps the most common criticism is that the term is unclear or simply a buzzword. For example, in a podcast interview,[4] Tim Berners-Lee described the term "Web 2.0" as a "piece of jargon": "Nobody really knows what it means... If Web 2.0 for you is blogs and wikis, then that is people to people. But that was what the Web was supposed to be all along."[4]

Other critics labeled Web 2.0 "a second bubble" (referring to the dot-com bubble of circa 1995-2001), suggesting that too many Web 2.0 companies attempt to develop the same product with a lack of business models. For example, The Economist has dubbed the mid- to late-2000s focus on Web companies "Bubble 2.0".[50] Venture capitalist Josh Kopelman noted that Web 2.0 had excited only 53,651 people (the number of subscribers at that time to TechCrunch, a weblog covering Web 2.0 startups and technology news), too few users to make them an economically viable target for consumer applications.[51] Although Bruce Sterling reports he is a fan of Web 2.0, he thinks it is now dead as a rallying concept.[52]

Critics have cited the language used to describe the hype cycle of Web 2.0[53] as an example of techno-utopianist rhetoric.[54]

In terms of Web 2.0's social impact, critics such as Andrew Keen argue that Web 2.0 has created a cult of digital narcissism and amateurism, which undermines the notion of expertise by allowing anybody, anywhere to share, and place undue value upon, their own opinions about any subject and to post any kind of content, regardless of their particular talents, knowledge, credentials, biases or possible hidden agendas. Keen's 2007 book, Cult of the Amateur, argues that the core assumption of Web 2.0, that all opinions and user-generated content are equally valuable and relevant, is misguided. Additionally, Sunday Times reviewer John Flintoff has characterized Web 2.0 as "creating an endless digital forest of mediocrity: uninformed political commentary, unseemly home videos, embarrassingly amateurish music, unreadable poems, essays and novels", and has also asserted that Wikipedia is full of "mistakes, half truths and misunderstandings".[55] Michael Gorman, former president of the American Library Association, has been vocal about his opposition to Web 2.0 due to the lack of expertise that it outwardly claims, though he believes that there is some hope for the future, as "The task before us is to extend into the digital world the virtues of authenticity, expertise, and scholarly apparatus that have evolved over the 500 years of print, virtues often absent in the manuscript age that preceded print".[56]

Trademark
In November 2004, CMP Media applied to the USPTO for a service mark on the use of the term "WEB 2.0" for live events.[57] On the basis of this application, CMP Media sent a cease-and-desist demand to the Irish non-profit organization IT@Cork on May 24, 2006,[58] but retracted it two days later.[59] The "WEB 2.0" service mark registration passed final PTO Examining Attorney review on May 10, 2006, and was registered on June 27, 2006.[57] The European Union application (application number 004972212, which would confer unambiguous status in Ireland) was refused on May 23, 2007.

Web 3.0
See also: Semantic Web

Definitions of Web 3.0 vary greatly. Some[60] believe its most important features are the Semantic Web and personalization. Focusing on the computer elements, Conrad Wolfram has argued that Web 3.0 is where "the computer is generating new information", rather than humans.[61]

Andrew Keen, author of The Cult of the Amateur, considers the Semantic Web an "unrealisable abstraction" and sees Web 3.0 as the return of experts and authorities to the Web. For example, he points to Bertelsmann's deal with the German Wikipedia to produce an edited print version of that encyclopedia.[62] CNN Money's Jessi Hempel expects Web 3.0 to emerge from new and innovative Web 2.0 services with a profitable business model.[63]

Futurist John Smart, lead author of the Metaverse Roadmap,[64] echoes Sharma's perspective, defining Web 3.0 as the first-generation Metaverse (the convergence of the virtual and physical worlds), a web development layer that includes TV-quality open video, 3D simulations, augmented reality, human-constructed semantic standards, and pervasive broadband, wireless, and sensors. Web 3.0's early geosocial (Foursquare, etc.) and augmented reality (Layar, etc.) webs are an extension of Web 2.0's participatory technologies and social networks (Facebook, etc.) into 3D space. Of all its metaverse-like developments, Smart suggests Web 3.0's most defining characteristic will be the mass diffusion of NTSC-or-better quality open video to TVs, laptops, tablets, and mobile devices, a time when "the internet swallows the television."[65][66]

Smart considers Web 4.0 to be the Semantic Web and, in particular, the rise of statistical, machine-constructed semantic tags and algorithms, driven by broad collective use of conversational interfaces, perhaps circa 2020. David Siegel's perspective in Pull: The Power of the Semantic Web (2009) is consonant with this, proposing that the growth of human-constructed semantic standards and data will be a slow, industry-specific, incremental process for years to come, perhaps unlikely to tip into broad social utility until after 2020.

According to some Internet experts, Web 3.0 will allow users to sit back and let the Internet do all of the work for them.[67] Rather than gearing search engines towards keywords, the search engines will be geared towards the user: keywords will be interpreted based on the user's culture, region, and jargon.[68] For example, when going on a vacation today, you have to do separate searches for your airline ticket, your hotel reservations, and your car rental. With Web 3.0, you will be able to do all of this in one simple search, and the search engine will present the results in a comparative and easily navigated way.
