
2 - A history of HTML

Included in this chapter is information on:


- How the World Wide Web began
- The events and circumstances that led to the World Wide Web's current popularity
- How HTML has grown from its conception in the early 1990s

Summary
HTML has had a life-span of roughly seven years. During that time, it has evolved from a simple language with a small number of tags to a complex system of mark-up, enabling authors to create all-singing-and-dancing Web pages complete with animated images, sound and all manner of gimmicks. This chapter tells you something about the Web's early days, HTML, and about the people, companies and organizations who contributed to HTML+, HTML 2, HTML 3.2 and finally, HTML 4.

This chapter is a short history of HTML. Its aim is to give readers some idea of how the HTML we use today was developed from the prototype written by Tim Berners-Lee in 1992. The story is interesting - not least because HTML has been through an extremely bumpy ride on the road to standardization, with software engineers, academics and browser companies haggling over the language like so many Members of Parliament debating in the House of Commons.

1989: Tim Berners-Lee invents the Web with HTML as its publishing language
The World Wide Web began life in the place where you would least expect it: at CERN, the European Laboratory for Particle Physics in Geneva, Switzerland. CERN is a meeting place for physicists from all over the world, where highly abstract and conceptual thinkers engage in the contemplation of complex atomic phenomena that occur on a minuscule scale in time and space. This is a surprising place indeed for the beginnings of a technology which would eventually deliver tourist information, online shopping, advertisements, financial data, weather forecasts and much more to your personal computer.

Tim Berners-Lee is the inventor of the Web. In 1989, Tim was working in a computing services section of CERN when he came up with the concept; at the time he had no idea that it would be implemented on such an enormous scale. Particle physics research often involves collaboration among institutes from all over the world. Tim had the idea of enabling researchers at remote sites to organize and pool their information. But far from simply making available a large number of research documents as files that could be downloaded to individual computers, he suggested that you could actually link the text in the files themselves.

In other words, there could be cross-references from one research paper to another. This would mean that while reading one research paper, you could quickly display part of another paper that holds directly relevant text or diagrams. Documentation of a scientific and mathematical nature would thus be represented as a `web' of information held in electronic form on computers across the world. This, Tim thought, could be done by using some form of hypertext, some way of linking documents together by using buttons on the screen, which you simply clicked on to jump from one paper to another.

Before coming to CERN, Tim had already worked on document production and text processing, and had developed his first hypertext system, `Enquire', in 1980 for his own personal use. Tim's prototype Web browser on the NeXT computer came out in 1990.

Through 1990: The time was ripe for Tim's invention


The fact that the Web was invented in the early 1990s was no coincidence. Developments in communications technology during that time meant that, sooner or later, something like the Web was bound to happen. For a start, hypertext was coming into vogue and being used on computers. Also, the Internet was rapidly gaining users: there was an increasing audience for distributed information. Last, but not least, the new domain name system had made it much easier to address a machine on the Internet.

Hypertext
Although already established as a concept by academics as early as the 1940s, it was with the advent of the personal computer that hypertext came out of the cupboard. In the late 1980s, Bill Atkinson, an exceptionally gifted programmer working for Apple Computer Inc., came up with an application called HyperCard for the Macintosh. HyperCard enabled you to construct a series of on-screen `filing cards' that contained textual and graphical information. Users could navigate these by pressing on-screen buttons, taking themselves on a tour of the information in the process.

HyperCard set the scene for more applications based on the filing card idea. Toolbook for the PC was used in the early 1990s for constructing hypertext training courses that had `pages' with buttons which could go forward or backward or jump to a new topic. Behind the scenes, buttons would initiate little programs called scripts. These scripts would control which page would be presented next; they could even run a small piece of animation on the screen. Guide was a similar application for UNIX and the PC.

HyperCard and its imitators caught the popular imagination. However, these packages still had one major limitation: hypertext jumps could only be made to files on the same computer. Jumps made to computers on the other side of the world were still out of the question. Nobody yet had implemented a system involving hypertext links on a global scale.

The domain name system


By the mid-1980s, the Internet had a new, easy-to-use system for naming computers. This involved using the idea of the domain name. A domain name comprises a series of letters separated by dots, for example: `www.bo.com' or `www.erb.org.uk'. These names are the easy-to-use alternative to the much less manageable and cumbersome IP address numbers. The Domain Name System (DNS) maps domain names onto IP addresses, keeping the IP addresses `hidden'. DNS was an absolute breakthrough in making the Internet accessible to those who were not computer nerds. As a result of its introduction, email addresses became simpler. Before DNS, email addresses had all sorts of hideous codes, such as exclamation marks, percent signs and other extraneous information, to specify the route to the other machine.

Choosing the right approach to create a global hypertext system


To Tim Berners-Lee, global hypertext links seemed feasible, but it was a matter of finding the correct approach to implementing them. Using an existing hypertext package might seem an attractive proposition, but this was impractical for a number of reasons. To start with, any hypertext tool to be used worldwide would have to take into account the many types of computers linked to the Internet: personal computers, Macintoshes, UNIX machines and simple terminals. Also, many desktop publishing methods were in vogue: SGML, Interleaf, LaTeX, Microsoft Word and troff, among many others. Commercial hypertext packages were computer-specific and could not easily take text from other sources; besides, they were far too complicated and involved tedious compiling of text into internal formats to create the final hypertext system.

What was needed was something very simple, at least in the beginning. Tim demonstrated a basic but attractive way of publishing text by developing some software himself, and also his own simple protocol - HTTP - for retrieving other documents' text via hypertext links. Tim's own protocol, HTTP, stands for HyperText Transfer Protocol. The text format for HTTP was named HTML, for HyperText Mark-up Language. Tim's hypertext implementation was demonstrated on a NeXT workstation, which provided many of the tools he needed to develop his first prototype. By keeping things very simple, Tim encouraged others to build upon his ideas and to design further software for displaying HTML, and for setting up their own HTML documents ready for access.
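To give a flavour of just how simple Tim's protocol was: in its earliest form (retrospectively dubbed HTTP/0.9), a client opened a connection, sent a single line naming the document it wanted, and received the raw HTML in reply, with no headers at all; the server then closed the connection. A minimal sketch, with an invented file name:

    GET /hypertext/physics/overview.html

The entire response would simply be the HTML text of that document.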

Tim bases his HTML on an existing internationally agreed upon method of text mark-up
The HTML that Tim invented was strongly based on SGML (Standard Generalized Mark-up Language), an internationally agreed upon method for marking up text into structural units such as paragraphs, headings, list items and so on. SGML could be implemented on any machine. The idea was that the language was independent of the formatter (the browser or other viewing software) which actually displayed the text on the screen. The use of pairs of tags such as <TITLE> and </TITLE> is taken directly from SGML, which does exactly the same. The SGML elements used in Tim's HTML included P (paragraph); H1 through H6 (heading level 1 through heading level 6); OL (ordered lists); UL (unordered lists); LI (list items) and various others. What SGML does not include, of course, are hypertext links: the idea of using the anchor element with the HREF attribute was purely Tim's invention, as was the now-famous `www.name.name' format for addressing machines on the Web.

Basing HTML on SGML was a brilliant idea: other people would have invented their own language from scratch, but this might have been much less reliable, as well as less acceptable to the rest of the Internet community. Certainly the simplicity of HTML, and the use of the anchor element A for creating hypertext links, was what made Tim's invention so useful.
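A short example may help. Using only the elements mentioned above, an early HTML document looked something like this (the content and file name are invented for illustration):

    <TITLE>Particle Physics Papers</TITLE>
    <H1>Recent Papers</H1>
    <P>Papers contributed by collaborating institutes:
    <UL>
    <LI>Results from the muon experiment
    <LI><A HREF="calorimeter.html">Calorimeter calibration notes</A>
    </UL>

In the SGML tradition, end tags for elements such as P and LI could be omitted, while paired tags such as <TITLE> and </TITLE>, or <A HREF="..."> and </A>, delimit the text they affect.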

September 1991: Open discussion about HTML across the Internet begins
Far from keeping his ideas private, Tim made every attempt to discuss them openly online across the Internet. Coming from a research background, this was quite a natural thing to do. In September 1991, the WWW-talk mailing list was started, a kind of electronic discussion group in which enthusiasts could exchange ideas and gossip. By 1992, a handful of other academics and computer researchers were showing interest. Dave Raggett from Hewlett-Packard's Labs in Bristol, England, was one of these early enthusiasts and, following electronic discussion, Dave visited Tim in 1992.

Here, in Tim's tiny room in the bowels of the sprawling buildings of CERN, the two engineers further considered how HTML might be taken from its current beginnings and shaped into something more appropriate for mass consumption. Trying to anticipate the kind of features that users really would like, Dave looked through magazines, newspapers and other printed media to get an idea of what sort of HTML features would be important when that same information was published online. Upon his return to England, Dave sat down at his keyboard and resolutely composed HTML+, a richer version of the original HTML.

Late 1992: NCSA is intrigued by the idea of the Web


Meanwhile, on the other side of the world, Tim's ideas had caught the eye of Joseph Hardin and Dave Thompson, both of the National Center for Supercomputing Applications (NCSA), a research institute at the University of Illinois at Urbana-Champaign. They managed to connect to the computer at CERN and download copies of two free Web browsers. Realizing the importance of what they saw, NCSA decided to develop a browser of its own, to be called Mosaic. Among the programmers in the NCSA team were Marc Andreessen - who later made his millions by selling Web products - and the brilliant programmer Eric Bina - who also became rich, courtesy of the Web. Eric Bina was a kind of software genius who reputedly could stay up three nights in succession, typing in a reverie of hacking at his computer.

December 1992: Marc Andreessen makes a brief appearance on WWW-talk


Early Web enthusiasts exchanged ideas and gossip over an electronic discussion group called WWW-talk. This was where Dave Raggett, Tim Berners-Lee, Dan Connolly and others debated how images (photographs, diagrams, illustrations and so on) should be inserted into HTML documents. Not everyone agreed upon the way that the relevant tag should be implemented, or even what that tag should be called. Suddenly, Marc Andreessen appeared on WWW-talk and, without further ado, introduced the Mosaic team's idea for the IMG tag. It was quite plain that the others were not altogether keen on the design of IMG, but Andreessen was not easily redirected. The IMG tag was implemented on the Mosaic browser in the form its team had suggested, and it remains to this day firmly implanted in HTML. This was much to the chagrin of supporters back in academia, who invented several alternatives to IMG in the years to come. Now, with the coming of HTML 4, the OBJECT tag potentially replaces IMG, but this is, of course, some years later.
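The contrast between the two designs can be shown in a few lines of mark-up. The first line embeds an image roughly as the Mosaic team's IMG worked; the second form uses HTML 4's OBJECT, which also lets the author supply text for browsers that cannot display the image (the file name is invented for illustration):

    <IMG SRC="experiment.gif">

    <OBJECT DATA="experiment.gif" TYPE="image/gif">
    A schematic diagram of the experiment.
    </OBJECT>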

March 1993: Lou Montulli releases the Lynx browser version 2.0a
Lou Montulli was one of the first people to write a text-based browser, Lynx, which worked on terminals and on computers running DOS without Windows. Lou was later recruited by Netscape Communications Corp., but nonetheless remained partially loyal to the idea of developing HTML as an open standard, proving a real asset to the HTML working group and the HTML Editorial Review Board in years to come. Lou's enthusiasm for good, expensive wine, and his knowledge of excellent restaurants in the Silicon Valley area, were to make the standardization of HTML a much more pleasurable process.

Early 1993: Dave Raggett begins to write his own browser


While Eric Bina and the NCSA Mosaic gang were hard at it hacking through the night, Dave Raggett of Hewlett-Packard Labs in Bristol was working part-time on his Arena browser, on which he hoped to demonstrate all sorts of newly invented features for HTML.

April 1993: The Mosaic browser is released


In April 1993, version 1 of the Mosaic browser was released for Sun Microsystems Inc.'s workstations, computers used in software development that run the UNIX operating system. Mosaic extended the features specified by Tim Berners-Lee; for example, it added images, nested lists and fill-out forms. Academics and software engineers would later argue that many of these extensions were very much ad hoc and not properly designed.

Late 1993: Large companies underestimate the importance of the Web


Dave Raggett's work on the Arena browser was slow because he had to develop much of it single-handedly: no money was available to pay for a team of developers. This was because Hewlett-Packard, in common with many other large computer companies, was quite unconvinced that the Internet would be a success; indeed, the need for a global hypertext system simply passed them by. For many large corporations, the question of whether or not any money could be made from the Web was unclear from the outset. There was also a misconception that the Internet was mostly for academics. In some companies, senior management was assured that the telephone companies would provide the technology for global communications of this sort, anyway. The result was that individuals working in research labs in the commercial sector were unable to devote much time to Web development. This was a bitter disappointment to some researchers, who gratefully would have committed nearly every waking moment toward shaping what they envisioned would be the communications system of the future.

Dave Raggett, realizing that there were not enough working hours left for him to succeed at what he felt was an immensely important task, continued writing his browser at home. There he would sit at a large computer that occupied a fair portion of the dining room table, sharing its slightly sticky surface with paper, crayons, Lego bricks and bits of half-eaten cookies left by the children. Dave also used the browser to show text flow around images, forms and other aspects of HTML at the First WWW Conference in Geneva in 1994. The Arena browser was later used for development work at CERN.

May 1994: NCSA assigns commercial rights for Mosaic browser to Spyglass, Inc.
In May 1994, Spyglass, Inc. signed a multi-million dollar licensing agreement with NCSA to distribute a commercially enhanced version of Mosaic. In August of that same year, the University of Illinois at Urbana-Champaign, the home of NCSA, assigned all future commercial rights for NCSA Mosaic to Spyglass.

May 1994: The first World Wide Web conference is held in Geneva, with HTML+ on show
Although Marc Andreessen and Jim Clark had commercial interests in mind, the rest of the World Wide Web community had quite a different attitude: they saw themselves as joint creators of a wonderful new technology, which certainly would benefit the world. They were jiggling with excitement. Even quiet and retiring academics became animated in discussion, and many seemed evangelical about their new-found god of the Web.

At the first World Wide Web conference, organized by CERN in May 1994, all was merry with 380 attendees - who mostly were from Europe but also included many from the United States. You might have thought that Marc Andreessen, Jim Clark and Eric Bina surely would be there, but they were not. For the most part, participants were from the academic community, from institutions such as the World Meteorological Organization, the International Center for Theoretical Physics, the University of Iceland and so on. Later conferences had much more of a commercial feel, but this one was for technical enthusiasts who instinctively knew that this was the start of something big.

At the World Wide Web conference in Geneva. Left to right: Joseph Hardin from NCSA, Robert Cailliau from CERN, Tim Berners-Lee from CERN and Dan Connolly (of HTML 2 fame), then working for HaL Software.

During the course of that week, awards were presented for notable achievements on the Web; these awards were given to Marc Andreessen, Lou Montulli, Eric Bina, Rob Hartill and Kevin Hughes. Dan Connolly, who proceeded to define HTML 2, gave a slide presentation entitled Interoperability: Why Everyone Wins, which explained why it was important that the Web operated with a proper HTML specification. Strange to think that at least three of the people who received awards at the conference were later to fly in the face of Dan's idea that adopting a cross-company uniform standard for HTML was essential.

Dave Raggett had been working on some new HTML ideas, which he called HTML+. At the conference it was agreed that the work on HTML+ should be carried forward to lead to the development of an HTML 3 standard. Dave Raggett, together with CERN, developed Arena further as a proof-of-concept browser for this work. Using Arena, Dave Raggett, Henrik Frystyk Nielsen, Håkon Lie and others demonstrated text flow around a figure with captions, resizable tables, image backgrounds, math and other features.

A panel discussion at the Geneva conference. Kevin Altis from Intel, Dave Raggett from HP Labs, Rick `Channing' Rodgers from the National Library of Medicine.

The conference ended with a glorious evening cruise on board a paddle steamer around Lake Geneva, with Wolfgang and the Werewolves providing jazz accompaniment.

September 1994: The Internet Engineering Task Force (IETF) sets up an HTML working group
In early 1994, an Internet Engineering Task Force working group was set up to deal with HTML. The Internet Engineering Task Force is the international standards and development body of the Internet and is a large, open community of network designers, operators, vendors and researchers concerned with the evolution and smooth operation of the Internet architecture. The technical work of the IETF is done in working groups, which are organized by topic into several areas; for example, security, network routing, and applications. The IETF is, in general, part of a culture that sees the Internet as belonging to The People. This was even more so in the early days of the Web.

The feelings of the good `ole days of early Web development are captured in the song, The Net Flag, which can be found `somewhere on the Internet'. The first verse runs as follows:

    The people's web is deepest red,
    And oft it's killed our routers dead.
    But ere the bugs grew ten days old,
    The patches fixed the broken code.

    Chorus:
    So raise the open standard high
    Within its codes we'll live or die
    Though cowards flinch and Bill Gates sneers
    We'll keep the net flag flying here.

In keeping with normal IETF practices, the HTML working group was open to anyone in the engineering community: any interested computer scientist could potentially become a member and, once on its mailing list, could take part in email debate. The HTML working group met approximately three times a year, during which time they would enjoy a good haggle about HTML features present and future, be pleasantly suffused with coffee and beer, striding about plush hotel lobbies sporting pony tails, T-shirts and jeans without the slightest care.

July 1994: The HTML 2 specification is released


During 1993 and early 1994, lots of browsers had added their own bits to HTML; the language was becoming ill-defined. In an effort to make sense of the chaos, Dan Connolly and colleagues collected all the HTML tags that were widely used and collated them into a draft document that defined the breadth of what Tim Berners-Lee called HTML 2. The draft was then circulated through the Internet community for comment. With the patience of a saint, Dan took into account numerous suggestions from HTML enthusiasts far and wide, ensuring that all would be happy with the eventual HTML 2 definition. He also wrote a Document Type Definition (DTD) for HTML 2, a kind of mathematically precise description of the language.
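To give the flavour of what such a definition looks like, here is a fragment in the style of the HTML 2 DTD; this is an illustrative sketch rather than a verbatim excerpt:

    <!ELEMENT UL - - (LI)+>
    <!ELEMENT A - - (%text)* -(A)>
    <!ATTLIST A
            HREF CDATA #IMPLIED
            NAME CDATA #IMPLIED
            >

The first declaration says that a UL element requires both its start and end tags (the two hyphens) and must contain one or more LI elements; the exclusion -(A) in the second forbids anchors from nesting inside other anchors.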

November 1994: Netscape is formed


During 1993, Marc Andreessen apparently felt increasingly irritated at simply being on the Mosaic project rather than in charge of it. Upon graduating, he decided to leave NCSA and head for California, where he met Jim Clark, who was already well known in Silicon Valley and who had money to invest. Together they formed Mosaic Communications, which then became Netscape Communications Corp. in November 1994. What they planned to do was create and market their very own browser.

The browser they designed was immensely successful - so much so, in fact, that for some time to come many users would mistakenly think that Netscape invented the Web. Netscape did its best to make sure that even those who were relying on a low-bandwidth connection - that is, even those who only had a modem link from a home personal computer - were able to access the Web effectively. This was greatly to the company's credit.

Following a predictable path, Netscape began inventing its own HTML tags as it pleased, without first openly discussing them with the Web community. Netscape rarely made an appearance at the big international WWW conferences, but it seemed to be driving the HTML standard. It was a curious situation, and one that the inner core of the HTML community felt they must redress.

Late 1994: The World Wide Web Consortium forms


The World Wide Web Consortium was formed in late 1994 to fulfill the potential of the Web through the development of open standards. They had a strong interest in HTML. Just as an orchestra insists on the best musicians, so the consortium recruited many of the best-known names in the Web community. Headed up by Tim Berners-Lee, here are just some of the players in the band today (1997):

Members of the World Wide Web Consortium at the MIT site. From left to right are Henrik Frystyk Nielsen, Anselm Baird-Smith, Jay Sekora, Rohit Khare, Dan Connolly, Jim Gettys, Tim Berners-Lee, Susan Hardy, Jim Miller, Dave Raggett, Tom Greene, Arthur Secret, Karen MacArthur.

- Dave Raggett on HTML; from the United Kingdom.
- Arnaud le Hors on HTML; from France.
- Dan Connolly on HTML; from the United States.
- Henrik Frystyk Nielsen on HTTP and on enabling the Web to go faster; from Denmark.
- Håkon Lie on style sheets; from Norway. He is located in France, working at INRIA.
- Bert Bos on style sheets and layout; from the Netherlands.
- Jim Miller on investigating technologies that could be used in rating the content of Web pages; from the United States.
- Chris Lilley on style sheets and font support; from the United Kingdom.

The W3 Consortium is based in part at the Laboratory for Computer Science at the Massachusetts Institute of Technology in Cambridge, Massachusetts, in the United States; and in part at INRIA, the Institut National de Recherche en Informatique et en Automatique, a French governmental research institute. The W3 Consortium is also located in part at Keio University in Japan. You can look at the Consortium's Web pages at `www.w3.org'. The consortium is sponsored by a number of companies that directly benefit from its work on standards and other technology for the Web. The member companies include Digital Equipment Corp.; Hewlett-Packard Co.; IBM Corp.; Microsoft Corp.; Netscape Communications Corp.; and Sun Microsystems Inc., among many others.

Through 1995: HTML is extended with many new tags


During 1995, all kinds of new HTML tags emerged. Some, like the BGCOLOR attribute of the BODY element and FONT FACE, which control stylistic aspects of a document, found themselves in the black books of the academic engineering community. `You're not supposed to be able to do things like that in HTML,' they would protest. It was their belief that such things as text color, background texture, font size and font face were definitely outside the scope of a language whose only intent was to specify how a document was organized.
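Such extensions let authors write mark-up like the following, mixing presentation directly into the document's structure (the colours and font here are chosen arbitrarily for illustration):

    <BODY BGCOLOR="#FFFFCC">
    <FONT FACE="Arial" COLOR="red">Welcome to my home page!</FONT>
    </BODY>

It was precisely this entanglement of appearance with structure that style sheets were later designed to undo.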

March 1995: HTML 3 is published as an Internet Draft


Dave Raggett had been working for some time on his new ideas for HTML, and at last he formalized them in a document published as an Internet Draft in March 1995. All manner of HTML features were covered. A new tag for inserting images, called FIG, was introduced, which Dave hoped would supersede IMG, as well as a whole gamut of features for marking up math and scientific documents. Dave dealt with HTML tables and tabs, footnotes and forms. He also added support for style sheets by including a STYLE tag and a CLASS attribute. The latter was to be available on every element, to encourage authors to give HTML elements styles, much as you do in desktop publishing (a short sketch follows at the end of this section).

Although the HTML 3 draft was very well received, it was somewhat difficult to get it ratified by the IETF. The belief was that the draft was too large and too full of new proposals. To get consensus on a draft 150 pages long, and about which everyone wanted to voice an opinion, was optimistic - to say the least. In the end, Dave and the inner circle of the HTML community decided to call it a day.

Of course, browser writers were very keen on supporting HTML 3 - in theory. Inevitably, each browser writer chose to implement a different subset of HTML 3's features as they were so inclined, and then proudly proclaimed to support the standard. The confusion was mind-boggling, especially as browsers even came out with extensions to HTML 3, implying to the ordinary gent that normal HTML 3 was, of course, already supported. Was there an official HTML 3 standard or not? The truth was that there was not, but reading the computer press you might never have known the difference.
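To give a flavour of the draft's mark-up, the sketch below shows FIG with a caption, and a paragraph carrying a CLASS so that a style sheet can address it; the attribute details are illustrative rather than quoted from the draft:

    <FIG SRC="detector.gif">
    <CAPTION>Layout of the particle detector</CAPTION>
    </FIG>

    <P CLASS="abstract">This paper summarizes recent results...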

March 1995: A furor over the HTML Tables specification


Dave Raggett's HTML 3 draft had tackled the tabular organization of information in HTML. Arguments over this aspect of the language had continued for some time, but now matters came to a head. At the 32nd meeting of the IETF in Danvers, Massachusetts, Dave found a group from the SGML brethren who were up in arms over part of the tables specification because it contradicted the CALS table model. Groups such as the US Navy use the CALS table model in complex documentation. After long negotiation, Dave managed to placate the CALS table delegates and altered the draft to suit their needs.

HTML tables, which were not in HTML originally, finally surfaced from the HTML 3 draft to appear in HTML 3.2. They continue to be used extensively for the purpose of providing a layout grid for organizing pictures and text on the screen, as the sketch below suggests.
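Used as a layout grid, a table simply divides the page into cells into which the author pours pictures and text. A minimal sketch, with invented content:

    <TABLE>
    <TR>
    <TD><IMG SRC="logo.gif"></TD>
    <TD><H1>Annual Report</H1></TD>
    </TR>
    <TR>
    <TD>Navigation links go here</TD>
    <TD>The main text of the page goes here</TD>
    </TR>
    </TABLE>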

August 1995: Microsoft's Internet Explorer browser comes out


Version 1.0 of Microsoft Corp.'s Internet Explorer browser was announced. This browser would eventually compete with Netscape's browser, and evolve its own HTML features. To a certain extent, Microsoft built its business on the Web by extending HTML features. The ActiveX feature made Microsoft's browser unique, and Netscape developed a plug-in called Ncompass to handle ActiveX. This pattern, whereby one browser experiments with an extension to HTML only to find the others adding support to keep up, continues to the present. In November 1995, Microsoft's Internet Explorer version 2.0 arrived for its Windows NT and Windows 95 operating systems.

September 1995: Netscape submits a proposal for frames


Netscape now submitted a proposal for frames, which involved the screen being divided into independent, scrollable areas. The proposal was implemented on Netscape's Navigator browser before anyone really had time to comment on it, but nobody was surprised.
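Netscape's design replaced the document's BODY with a FRAMESET that carved up the window, each FRAME loading a separate HTML document into one of the scrollable areas. A minimal sketch, with invented file names:

    <FRAMESET COLS="25%,75%">
    <FRAME SRC="contents.html">
    <FRAME SRC="main.html">
    </FRAMESET>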

November 1995: The HTML working group runs into problems


The HTML working group was an excellent idea in theory, but in practice things did not go quite as expected. With the immense popularity of the Web, the HTML working group grew larger and larger, and the volume of associated email soared exponentially.

Imagine one hundred people trying to design a house. `I want the windows to be double-glazed,' says one. `Yes, but shouldn't we make them smaller, while we're at it,' questions another. Still others chime in: `What material do you propose for the frames - I'm not having them in plastic, that's for sure'; `I suggest that we don't have windows, as such, but include small, circular port-holes on the Southern elevation...' and so on. You get the idea.

The HTML working group emailed each other in a frenzy of electronic activity. In the end, its members became so snowed under with email that no time was left for programming. For software engineers, this was a sorry state of affairs, indeed: `I came back after just three days away to find over 2000 messages waiting,' was the unhappy lament of one HTML enthusiast.

Meanwhile, the HTML working group was losing ground to the browser vendors. The group was notably slow in coming to a consensus on a given HTML feature, and commercial organizations were hardly going to sit around having tea, pleasantly conversing about the weather, whilst waiting for the results of debates. And they did not.

November 1995: Vendors unite to form a new group dedicated to developing an HTML standard
In November 1995, Dave Raggett called together representatives of the browser companies and suggested they meet as a small group dedicated to standardizing HTML. Imagine his surprise when it worked! Lou Montulli from Netscape, Charlie Kindel from Microsoft, Eric Sink from Spyglass, Wayne Gramlich from Sun Microsystems, Dave Raggett, Tim Berners-Lee and Dan Connolly from the W3 Consortium, and Jonathan Hirschman from Pathfinder convened near Chicago and made quick and effective decisions about HTML.

November 1995: Style sheets for HTML documents begin to take shape
Bert Bos, Håkon Lie, Dave Raggett, Chris Lilley and others from the World Wide Web Consortium met in Versailles, near Paris, to discuss the deployment of Cascading Style Sheets. The name Cascading Style Sheets implies that more than one style sheet can interact to produce the final look of the document. Using a special language, the CSS group advocated, everyone would soon be able to write simple styles for HTML, as one would do in Microsoft Word and other desktop publishing software packages. The SGML contingent, who preferred a LISP-like language called DSSSL - it rhymes with `whistle' - seemed out of the race when Microsoft promised to implement CSS on its Internet Explorer browser.
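The simplicity the group was aiming at shows in even a tiny example. In the syntax that CSS eventually standardized, a style sheet is just a list of rules attaching presentation to elements; the rules below are invented for illustration:

    H1 { color: navy; font-family: sans-serif }
    P  { font-size: 12pt; line-height: 14pt }

An HTML document would then refer to such a sheet with something like <LINK REL="stylesheet" HREF="house-style.css">, leaving the mark-up itself free of presentational clutter.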

November 1995: Internationalization of HTML Internet Draft


Gavin Nicol, Gavin Adams and others presented a long paper on the internationalization of the Web. Their idea was to extend the capabilities of HTML 2, primarily by removing the restriction on the character set used. This would mean that HTML could be used to mark up languages other than those written in the Latin-1 character set, embracing a much wider variety of alphabets and scripts, including those read from right to left.
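Concretely, this line of work pointed towards documents that declare their own character encoding and mark the direction of text runs, as HTML would later allow. A hedged sketch of the kind of mark-up it led to (the encoding and language code are illustrative):

    <META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">
    <P LANG="he" DIR="RTL">...</P>

Here DIR="RTL" tells the browser to lay the paragraph out from right to left.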

December 1995: The HTML working group is dismantled


Since the IETF HTML working group was having difficulties coming to consensus swiftly enough to cope with such a fast-evolving standard, it was eventually dismantled.

February 1996: The HTML ERB is formed


Following the success of the November 1995 meeting, the World Wide Web Consortium formed the HTML Editorial Review Board to help with the standardization process. This board consisted of representatives from IBM, Microsoft, Netscape, Novell, Softquad and the W3 Consortium, and did its business via telephone conference and email exchanges, meeting approximately once every three months. Its aim was to collaborate and agree upon a common standard for HTML, thus putting an end to the era when browsers each implemented a different subset of the language. The bad fairy of incompatibility was to be banished from the HTML kingdom forever, or one could hope so, perhaps.

Dan Connolly of the W3 Consortium, also author of HTML 2, deftly accomplished the feat of chairing what could be quite a raucous meeting of the clans. Dan managed to make sure that all representatives had their say and listened to each other's point of view in an orderly manner. A strong chair was absolutely essential in these meetings. In preparation for an ERB meeting, specifications describing new aspects of HTML were made electronically available for ERB members to read. Then, at the meeting itself, the proponent explained some of the rationale behind the specification, and then dearly hoped that all who were present also concurred that the encapsulated ideas were sound. Questions such as `should a particular feature be included, or should we kick it out' would be considered. Each representative would air his point of view. If all went well, the specification might eventually see daylight and become a standard. At the time of writing, the next HTML standard, code-named Cougar, has begun its long journey in this direction.

The BLINK tag was ousted in an HTML ERB meeting. Netscape would only abolish it if Microsoft agreed to get rid of MARQUEE; the deal was struck and both tags disappeared. Both of these extensions have always been considered slightly goofy by all parties. Many tough decisions were to be made about the OBJECT specification. Out of a chaos of several different tags - EMBED, APP, APPLET, DYNSRC and so on - all associated with embedding different types of information in HTML documents, a single OBJECT tag was chosen in April 1996. This OBJECT tag became part of the HTML standard, but not until 1997.

April 1996: The W3 Consortium working draft on Scripting comes out

Based on an initial draft by Charlie Kindel, in turn derived from Netscape's extensions for JavaScript, a W3C working draft on the subject of scripting was written by Dave Raggett. In one form or another, this draft should eventually become part of standard HTML.

July 1996: Microsoft seems more interested than first imagined in open standards
In April 1996, Microsoft's Internet Explorer became available for Macintosh and Windows 3.1 systems. Thomas Reardon had been excited by the Web as early as the second WWW conference, held in Darmstadt, Germany in 1995. One year later, he seemed very interested in the standardization process and apparently wanted Microsoft to do things the right way with the W3C and with the IETF. Traditionally, developers are somewhat disparaging about Microsoft, so this was an interesting turn of events.

It should be said that Microsoft did, of course, invent tags of its own, just as Netscape did. These included the remarkable MARQUEE tag, which caused great mirth among the more academic HTML community. The MARQUEE tag made text dance about all over the screen - not exactly a feature you would expect from a serious language concerned with structural mark-up such as paragraphs, headings and lists.

The worry that a massive introduction of proprietary products would kill the Web continued. Netscape acknowledged that vendors needed to push ahead of the standards process and innovate. They pointed out that, if users liked a particular Netscape innovation, then the market would drive it to become a de facto standard. This seemed quite true at the time and, indeed, Netscape has innovated on top of that standard again. It is precisely this sequence of events that Dave Raggett and the World Wide Web Consortium were trying to avoid.

December 1996: Work on `Cougar' is begun


The HTML ERB became the HTML Working Group and began to work on `Cougar', the next version of HTML, with completion expected in late spring 1997; this would eventually become HTML 4. With all sorts of innovations for the disabled and support for international languages, as well as providing style sheet support, extensions to forms, scripting and much more, HTML 4 breaks away from the simplicity and charm of the HTML of earlier years!

Dave Raggett, co-editor of the HTML 4 specification, at work composing at the keyboard at his home in Boston.

January 1997: HTML 3.2 is ready


Success! In January 1997, the W3 Consortium formally endorsed HTML 3.2 as an HTML cross-industry specification. HTML 3.2 had been reviewed by all member organizations, including major browser vendors such as Netscape and Microsoft. This meant that the specification was now stable and approved of by most Web players. By providing a neutral forum, the W3 Consortium had successfully obtained agreement upon a standard version of HTML. There was great rejoicing, indeed.

HTML 3.2 took the existing IETF HTML 2 standard and incorporated features from HTML+ and HTML 3. HTML 3.2 included tables, applets, text flow around images, subscripts and superscripts. One might well ask why HTML 3.2 was called HTML 3.2 and not, let's say, HTML 3.1 or HTML 3.5. The version number is open to discussion just as much as is any other aspect of HTML. The version number is often one of the last details to be decided.

Update
Spring 1998: Cougar has now fully materialized as HTML 4.0 and is a W3C Proposed Recommendation. But do the major browsers implement HTML 4.0, you wonder? As usual in the computer industry, there is no simple answer. Certainly things are heading in that direction. Neither Netscape's nor Microsoft's browser completely implements style sheets in the way specified, which is a pity, but no doubt they will make amends. There are a number of peculiarities in the way that OBJECT works, but we very much hope that this will also eventually be implemented in a more consistent manner.
© Addison Wesley Longman 1998. All rights reserved.

Internet
From Wikipedia, the free encyclopedia

Visualization from the Opte Project of the various routes through a portion of the Internet

The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail.

Most traditional communications media, including telephone, music, film, and television, are reshaped or redefined by the Internet, giving birth to new services such as Voice over Internet Protocol (VoIP) and IPTV. Newspaper, book and other print publishing are adapting to Web site technology, or are reshaped into blogging and web feeds. The Internet has enabled or accelerated new forms of human interactions through instant messaging, Internet forums, and social networking. Online shopping has boomed both for major retail outlets and small artisans and traders. Business-to-business and financial services on the Internet affect supply chains across entire industries.

The origins of the Internet reach back to research of the 1960s, commissioned by the United States government in collaboration with private commercial interests to build robust, fault-tolerant, and distributed computer networks. The funding of a new U.S. backbone by the National Science Foundation in the 1980s, as well as private funding for other commercial backbones, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The commercialization of what was by the 1990s an international network resulted in its popularization and incorporation into virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population used the services of the Internet.

The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overarching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.

Terminology

See also: Internet capitalization conventions

Internet is a short form of the technical term internetwork,[1] the result of interconnecting computer networks with special gateways or routers. The Internet is also often referred to as the Net.

The term the Internet, when referring to the entire global system of IP networks, has been treated as a proper noun and written with an initial capital letter. In the media and popular culture, a trend has also developed to regard it as a generic term or common noun and thus write it as "the internet", without capitalization. Some guides specify that the word should be capitalized as a noun but not capitalized as an adjective.

Depiction of the Internet as a cloud in network diagrams

The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.[2]

In many technical illustrations, when the precise location or interrelation of Internet resources is not important, extended networks such as the Internet are often depicted as a cloud.[3] The verbal image has been formalized in the newer concept of cloud computing.

History
Main article: History of the Internet

The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency (ARPA, later DARPA) in February 1958 to regain a technological lead.[4][5] ARPA created the Information Processing Technology Office (IPTO) to further the research of the Semi-Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. The IPTO's purpose was to find ways to address the US military's concern about survivability of their communications networks, and as a first step interconnect their computers at the Pentagon, Cheyenne Mountain, and Strategic Air Command headquarters (SAC).

J. C. R. Licklider, a promoter of universal networking, was selected to head the IPTO. Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

Professor Leonard Kleinrock with the first ARPANET Interface Message Processors at UCLA

At the IPTO, Licklider's successor Ivan Sutherland in 1965 got Lawrence Roberts to start a project to make a network, and Roberts based the technology on the work of Paul Baran,[6] who had written an exhaustive study for the United States Air Force that recommended packet switching (as opposed to circuit switching) to achieve better network robustness and disaster survivability. Roberts had worked at the MIT Lincoln Laboratory, originally established to work on the design of the SAGE system. UCLA professor Leonard Kleinrock had provided the theoretical foundations for packet networks in 1962, and later, in the 1970s, for hierarchical routing, concepts which have been the underpinning of the development towards today's Internet.

Sutherland's successor Robert Taylor convinced Roberts to build on his early packet switching successes and come and be the IPTO Chief Scientist. Once there, Roberts prepared a report called Resource Sharing Computer Networks, which was approved by Taylor in June 1968 and laid the foundation for the launch of the working ARPANET the following year.

After much work, the first two nodes of what would become the ARPANET were interconnected between Kleinrock's Network Measurement Center at UCLA's School of Engineering and Applied Science and Douglas Engelbart's NLS system at SRI International (SRI) in Menlo Park, California, on 29 October 1969. The third site on the ARPANET was the Culler-Fried Interactive Mathematics center at the University of California at Santa Barbara, and the fourth was the University of Utah Graphics Department. In an early sign of future growth, there were already fifteen sites connected to the young ARPANET by the end of 1971.

In an independent development, Donald Davies at the UK National Physical Laboratory developed the concept of packet switching in the early 1960s, first giving a talk on the subject in 1965, after which the teams in the new field from two sides of the Atlantic ocean first became acquainted. It was actually Davies' coinage of the wording packet and packet switching that was adopted as the standard terminology. Davies also built a packet-switched network in the UK, called the Mark I, in 1970.[7]

Bolt, Beranek & Newman (BBN), the private contractors for ARPANET, set out to create a separate commercial version after establishing "value added carriers" was legalized in the U.S.[8] The network they established was called Telenet and began operation in 1975, installing free public dial-up access in cities throughout the U.S. Telenet was the first packet-switching network open to the general public.[9]

Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service. In the UK, this was referred to as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976. X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net, and Packet Satellite Net during the same time period.

The early ARPANET ran on the Network Control Program (NCP), implementing the host-to-host connectivity and switching layers of the protocol stack, designed and first implemented in December 1970 by a team called the Network Working Group (NWG) led by Steve Crocker. To respond to the network's rapid growth as more and more locations connected, Vinton Cerf and Robert Kahn developed the first description of the now widely used TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP, which was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems. The first TCP/IP-based wide-area network was operational by 1 January 1983, when all hosts on the ARPANET were switched over from the older NCP protocols.

T3 NSFNET Backbone, c. 1992

In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network that became operational in 1988. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF. The NSFNET backbone was upgraded to 45 Mbps in 1991 and decommissioned in 1995 when it was replaced by new backbone networks operated by commercial Internet Service Providers.

The opening of the NSFNET to other networks began in 1988.[10] The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year and the link was made in the summer of 1989. Other commercial electronic mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISPs) began operations: UUNET, PSINet, and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet (by that time renamed to Sprintnet), Tymnet, Compuserve and JANET were interconnected with the growing Internet in the 1980s as the TCP/IP protocol became increasingly popular.

The adaptability of TCP/IP to existing communication networks allowed for rapid growth. The open availability of the specifications and reference code permitted commercial vendors to build interoperable network components, such as routers, making standardized network gear available from many companies. This aided in the rapid growth of the Internet and the proliferation of local-area networking. It seeded the widespread implementation and rigorous standardization of TCP/IP on UNIX and virtually every other common operating system.

This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.

Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan-European organization for particle research, publicized the new World Wide Web project. The Web was invented by British scientist Tim Berners-Lee in 1989. An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%.[11] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[12] As of 31 March 2011, the estimated total number of Internet users was 2.095 billion (30.2% of world population).[13]

Cascading Style Sheets

From Wikipedia, the free encyclopedia

Filename extension: .css
Internet media type: text/css
Developed by: World Wide Web Consortium
Initial release: 17 December 1996
Type of format: Style sheet language
Standard(s): Level 1 (Recommendation), Level 2 (Recommendation), Level 2 Revision 1 (Recommendation)



Cascading Style Sheets (CSS) is a style sheet language used to describe the presentation semantics (the look and formatting) of a document written in a markup language. Its most common application is to style web pages written in HTML and XHTML, but the language can also be applied to any kind of XML document, including plain XML, SVG and XUL.

CSS is designed primarily to enable the separation of document content (written in HTML or a similar markup language) from document presentation, including elements such as the layout, colors, and fonts.[1] This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple pages to share formatting, and reduce complexity and repetition in the structural content (such as by allowing for tableless web design). CSS can also allow the same markup page to be presented in different styles for different rendering methods, such as on-screen, in print, by voice (when read out by a speech-based browser or screen reader) and on Braille-based, tactile devices. While the author of a document typically links that document to a CSS style sheet, readers can use a different style sheet, perhaps one on their own computer, to override the one the author has specified.

CSS specifies a priority scheme to determine which style rules apply if more than one rule matches against a particular element. In this so-called cascade, priorities or weights are calculated and assigned to rules, so that the results are predictable (a small example follows below).

The CSS specifications are maintained by the World Wide Web Consortium (W3C). Internet media type (MIME type) text/css is registered for use with CSS by RFC 2318 (March 1998).
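To make the cascade concrete: suppose both of the rules below match the same paragraph. The more specific selector wins where the two conflict, while declarations that do not conflict still apply; the class name and values are invented for illustration:

    p { color: black; margin: 1em }
    p.note { color: maroon }

A paragraph written as <p class="note">...</p> is rendered in maroon, because p.note is more specific than p, but it keeps the 1em margin from the general rule.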

History
Style sheets have existed in one form or another since the beginnings of SGML in the 1970s. Cascading Style Sheets were developed as a means of creating a consistent approach to providing style information for web documents.

As HTML grew, it came to encompass a wider variety of stylistic capabilities to meet the demands of web developers. This evolution gave the designer more control over site appearance but at the cost of HTML becoming more complex to write and maintain. Variations in web browser implementations, such as ViolaWWW and WorldWideWeb,[4] made consistent site appearance difficult, and users had less control over how web content was displayed. Robert Cailliau wanted to separate the structure from the presentation.[4] The ideal way would be to give the user different options, transferring three different kinds of style sheets: one for printing, one for the presentation on the screen and one for the editor feature.[4]

To improve web presentation capabilities, nine different style sheet languages were proposed to the World Wide Web Consortium's (W3C) www-style mailing list. Of the nine proposals, two were chosen as the foundation for what became CSS: Cascading HTML Style Sheets (CHSS) and Stream-based Style Sheet Proposal (SSP). CHSS, a language that has some resemblance to today's CSS, was proposed by Håkon Wium Lie in October 1994. Bert Bos was working on a browser called Argo, which used its own style sheet language called SSP.[5] Lie and Yves Lafon joined Dave Raggett to expand the Arena browser to support CSS as a testbed application for the W3C.[6][7][8] Lie and Bos worked together to develop the CSS standard (the 'H' was removed from the name because these style sheets could also be applied to other markup languages besides HTML).[9]

Unlike existing style languages like DSSSL and FOSI, CSS allowed a document's style to be influenced by multiple style sheets. One style sheet could inherit or "cascade" from another, permitting a mixture of stylistic preferences controlled equally by the site designer and user.

Lie's proposal was presented at the "Mosaic and the Web" conference (later called WWW2) in Chicago, Illinois in 1994, and again with Bert Bos in 1995.[9] Around this time the W3C was already being established, and took an interest in the development of CSS. It organized a workshop toward that end, chaired by Steven Pemberton. This resulted in W3C adding work on CSS to the deliverables of the HTML editorial review board (ERB). Lie and Bos were the primary technical staff on this aspect of the project, with additional members, including Thomas Reardon of Microsoft, participating as well.

In August 1996, Netscape Communications Corporation presented an alternative style sheet language called JavaScript Style Sheets (JSSS).[9] The spec was never finished and is deprecated.[10] By the end of 1996, CSS was ready to become official, and the CSS level 1 Recommendation was published in December.

Development of HTML, CSS, and the DOM had all been taking place in one group, the HTML Editorial Review Board (ERB). Early in 1997, the ERB was split into three working groups: the HTML Working Group, chaired by Dan Connolly of W3C; the DOM Working Group, chaired by Lauren Wood of SoftQuad; and the CSS Working Group, chaired by Chris Lilley of W3C.

The CSS Working Group began tackling issues that had not been addressed with CSS level 1, resulting in the creation of CSS level 2 on November 4, 1997. It was published as a W3C Recommendation on May 12, 1998. CSS level 3, which was started in 1998, is still under development as of 2009.

In 2005, the CSS Working Groups decided to enforce the requirements for standards more strictly. This meant that already published standards like CSS 2.1, CSS 3 Selectors and CSS 3 Text were pulled back from Candidate Recommendation to Working Draft level.

Difficulty with adoption


Although the CSS 1 specification was completed in 1996 and Microsoft's Internet Explorer 3[9] was released in that year featuring some limited support for CSS, it was more than three years before any web browser achieved near-full implementation of the specification. Internet Explorer 5.0 for the Macintosh, shipped in March 2000, was the first browser to have full (better than 99 percent) CSS 1 support,[11] surpassing Opera, which had been the leader since its introduction of CSS support 15 months earlier. Other browsers followed soon afterwards, and many of them additionally implemented parts of CSS 2. As of August 2010, no (finished) browser has fully implemented CSS 2, with implementation levels varying (see Comparison of layout engines (CSS)).

Even though early browsers such as Internet Explorer 3[9] and 4, and Netscape 4.x, had support for CSS, it was typically incomplete and afflicted with serious bugs. This was a serious obstacle for the adoption of CSS. When later 'version 5' browsers began to offer a fairly full implementation of CSS, they were still incorrect in certain areas and were fraught with inconsistencies, bugs and other quirks. The proliferation of such CSS-related inconsistencies, and even the variation in feature support, has made it difficult for designers to achieve a consistent appearance across platforms. Some authors resorted to workarounds such as CSS hacks and CSS filters to obtain consistent results across web browsers and platforms.

Problems with browsers' patchy adoption of CSS, along with errata in the original specification, led the W3C to revise the CSS 2 standard into CSS 2.1, which moved nearer to a working snapshot of current CSS support in HTML browsers. Some CSS 2 properties that no browser successfully implemented were dropped, and in a few cases, defined behaviors were changed to bring the standard into line with the predominant existing implementations. CSS 2.1 became a Candidate Recommendation on February 25, 2004, but was pulled back to Working Draft status on June 13, 2005,[12] and only returned to Candidate Recommendation status on July 19, 2007.[13]

In the past, some web servers were configured to serve all documents with the filename extension .css[14] as MIME type application/x-pointplus[15] rather than text/css. At the time, the Net-Scene company was selling PointPlus Maker to convert PowerPoint files into Compact Slide Show files (using a .css extension).[16]

Variations
CSS has various levels and profiles. Each level of CSS builds upon the last, typically adding new features; the levels are denoted CSS 1, CSS 2, and CSS 3. Profiles are typically a subset of one or more levels of CSS built for a particular device or user interface. Currently there are profiles for mobile devices, printers, and television sets. Profiles should not be confused with media types, which were added in CSS 2.
CSS 1

The first CSS specification to become an official W3C Recommendation is CSS level 1, published in December 1996.[17] Its capabilities include support for:

- Font properties such as typeface and emphasis
- Color of text, backgrounds, and other elements
- Text attributes such as spacing between words, letters, and lines of text
- Alignment of text, images, tables and other elements
- Margin, border, padding, and positioning for most elements
- Unique identification and generic classification of groups of attributes

The W3C no longer maintains the CSS 1 Recommendation.[18]
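To make the list above concrete, here is a short, hypothetical rule set that stays within CSS 1-level features (the selectors and values are invented for illustration):

```css
body    { font-family: "Times New Roman", serif; color: black; background: white; }
h1      { text-align: center; letter-spacing: 0.1em; }   /* text attributes, alignment */
p       { line-height: 1.4; margin: 1em 0; }             /* spacing, margins */
.notice { border: 1px solid red; padding: 0.5em; }       /* generic classification */
#footer { text-align: right; }                           /* unique identification */
```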

CSS 2

The CSS level 2 specification was developed by the W3C and published as a Recommendation in May 1998. A superset of CSS 1, CSS 2 includes a number of new capabilities such as absolute, relative, and fixed positioning of elements and z-index, the concept of media types, support for aural style sheets and bidirectional text, and new font properties such as shadows. The W3C no longer maintains the CSS 2 Recommendation.[19]

CSS level 2 revision 1, or CSS 2.1, fixes errors in CSS 2, removes poorly supported or not fully interoperable features and adds already-implemented browser extensions to the specification. In order to comply with the W3C Process for standardizing technical specifications, CSS 2.1 went back and forth between Working Draft status and Candidate Recommendation status for many years. CSS 2.1 first became a Candidate Recommendation on February 25, 2004, but it was reverted to a Working Draft on June 13, 2005 for further review. It was returned to Candidate Recommendation status on July 19, 2007 and was updated twice in 2009. However, since changes and clarifications were made to the prose, it went back to Last Call Working Draft on December 7, 2010. It then moved to Proposed Recommendation on April 12, 2011,[20] and was published as a Recommendation on June 7, 2011.[21]
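A brief, hypothetical illustration of three CSS 2-level additions named above, positioning, z-index and media types (selectors and values invented):

```css
#banner { position: fixed; top: 0; left: 0; width: 100%; z-index: 10; }  /* stays atop the page */
#note   { position: absolute; top: 120px; left: 2em; }                   /* placed precisely */

@media print {
  #banner { display: none; }   /* media types let print output differ from the screen */
}
```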
CSS 3

Instead of defining all features in a single, large specification as CSS 2 did, CSS 3 is divided into several separate documents called "modules". Each module adds new capabilities or extends features defined in CSS 2, while preserving backward compatibility. Work on CSS level 3 started around the time of publication of the original CSS 2 Recommendation. The earliest CSS 3 drafts were published in June 1999.[22]

Due to this modularization, different modules differ in stability and status.[23] As of March 2011, there are over 40 CSS modules published by the CSS Working Group.[22] Some modules, such as Selectors, Namespaces, Color and Media Queries, are considered stable and are either in Candidate Recommendation or Proposed Recommendation status.[24] Once CSS 2.1 is finalized and published as a Recommendation, they are likely to proceed to Recommendation as well.[25] On June 7, 2011, the CSS 3 Color Module was published as a W3C Recommendation.[21]
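As a small, hypothetical illustration of two of the stable modules mentioned above (the selectors and breakpoint values are invented):

```css
tr:nth-child(odd) { background: #eee; }    /* CSS 3 Selectors module: zebra-striped rows */

@media screen and (max-width: 480px) {     /* Media Queries module */
  #sidebar { display: none; }              /* hide the sidebar on narrow screens */
}
```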

Browser support
Further information: Comparison of layout engines (Cascading Style Sheets)

Because not all browsers parse CSS code correctly, developers have devised coding techniques known as CSS hacks, which can either filter out specific browsers or target specific browsers (generically, both are known as CSS filters). The former can be described as CSS filtering hacks and the latter as CSS targeting hacks; both can be used to hide or show parts of the CSS to different browsers. This is achieved either by exploiting CSS-handling quirks or bugs in the browser, or by taking advantage of a browser's lack of support for parts of the CSS specifications.[26] Using CSS filters, some designers have gone as far as delivering entirely different CSS to certain browsers to ensure designs render as expected. Because very early web browsers were either completely incapable of handling CSS or rendered it very poorly, designers often routinely use CSS filters that completely prevent these browsers from accessing any of the CSS.

Internet Explorer support for CSS began with IE 3.0 and increased progressively with each version. By 2008, the first beta of Internet Explorer 8 offered support for CSS 2.1 in its best web standards mode.

An example of a well-known CSS browser bug is the Internet Explorer box model bug, where box widths are interpreted incorrectly in several versions of the browser, resulting in blocks that are too narrow when viewed in Internet Explorer, but correct in standards-compliant browsers. The bug can be avoided in Internet Explorer 6 by using the correct doctype in (X)HTML documents. CSS hacks and CSS filters are used to compensate for bugs such as this, just one of hundreds of CSS bugs that have been documented in various versions of Netscape, Mozilla Firefox, Opera, and Internet Explorer (including Internet Explorer 7).[27][28]

Even when the availability of CSS-capable browsers made CSS a viable technology, its adoption was still held back by designers' struggles with browsers' incorrect CSS implementations and patchy CSS support. Even today, these problems continue to make the business of CSS design more complex and costly than it was intended to be, and cross-browser testing remains a necessity. Other reasons for continuing non-adoption of CSS are its perceived complexity, authors' lack of familiarity with CSS syntax and the required techniques, poor support from authoring tools, the risks posed by inconsistency between browsers and the increased costs of testing.

Currently there is strong competition between Mozilla's Gecko layout engine used in Firefox, the WebKit layout engine used in Apple Safari and Google Chrome, the similar KHTML engine used in KDE's Konqueror browser, and Opera's Presto layout engine; each of them leads in different aspects of CSS. As of August 2009, Internet Explorer 8 and Firefox 2 and 3 had reasonably complete implementations of CSS 2.1.[29]
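One widely documented example of such a filter is the "star HTML" hack: Internet Explorer 6 and earlier incorrectly match html as a descendant of another (implied) element, so a rule prefixed with * html reaches only those browsers and is ignored by the rest. The selector and values below are invented for illustration:

```css
p.callout        { margin-left: 2em; }   /* all browsers read this rule */
* html p.callout { margin-left: 1em; }   /* only IE 6 and earlier match this selector */
```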

Limitations
Some noted limitations of the current capabilities of CSS include:

Poor controls for flexible layouts: While new additions to CSS 3 provide a stronger, more robust feature set for layout, CSS is still at heart a styling language (for fonts, colours, borders and other decoration), not a layout language (for blocks with positions, sizes, margins, and so on). These limitations mean that creating fluid layouts generally requires hand-coding of CSS, and they have held back the development of a standards-based WYSIWYG editor.

Selectors are unable to ascend: CSS offers no way to select a parent or ancestor of an element that satisfies certain criteria. A more advanced selector scheme (such as XPath) would enable more sophisticated style sheets. However, the major reasons for the CSS Working Group rejecting proposals for parent selectors are related to browser performance and incremental rendering issues.

Vertical control limitations: While horizontal placement of elements is generally easy to control, vertical placement is frequently unintuitive, convoluted, or impossible. Simple tasks, such as centering an element vertically or placing a footer no higher than the bottom of the viewport, either require complicated and unintuitive style rules, or simple but widely unsupported ones.

Absence of expressions: There is currently no ability to specify property values as simple expressions (such as margin-left: 10% + 3em + 4px). This would be useful in a variety of cases, such as calculating the size of columns subject to a constraint on the sum of all columns. However, a working draft with a calc() value to address this limitation has been published by the CSS Working Group (see the sketch after this list).[30] Internet Explorer versions 5 to 7 support a proprietary expression() statement with similar functionality.[31] This proprietary statement is no longer supported from Internet Explorer 8 onwards, except in compatibility modes; the decision was taken for "standards compliance, browser performance, and security reasons".[31]

Lack of column declaration: While possible in current CSS 3 (using the column-count module),[32] layouts with multiple columns can be complex to implement in CSS 2.1. With CSS 2.1, the process is often done using floating elements, which are often rendered differently by different browsers, different computer screen shapes, and different screen ratios set on standard monitors.

Cannot explicitly declare new scope independently of position: Scoping rules for properties such as z-index look for the closest parent element with a position: absolute or position: relative attribute. This odd coupling has undesired effects: for example, it is impossible to avoid declaring a new scope when one is forced to adjust an element's position, preventing one from using the desired scope of a parent element.

Pseudo-class dynamic behavior not controllable: CSS implements pseudo-classes that allow a degree of user feedback by conditional application of alternate styles. One CSS pseudo-class, :hover, is dynamic (the equivalent of the JavaScript onmouseover event) and has potential for abuse (e.g., implementing cursor-proximity popups),[33] but CSS gives the client no ability to disable it (no "disable"-like property) or limit its effects (no "nochange"-like values for each property).
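As referenced under "Absence of expressions" above, the draft calc() value would permit simple arithmetic in property values, and IE 5-7's proprietary expression() offered similar functionality. Both lines below are illustrative sketches rather than universally supported syntax, with invented selectors and values:

```css
#main { width: calc(100% - 200px); }  /* CSS WG working draft: column fills the row minus a gutter */
#main { width: expression(document.body.clientWidth - 200 + "px"); }  /* proprietary IE 5-7 form */
```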

Advantages
Flexibility: By combining CSS with the functionality of a content management system, a considerable amount of flexibility can be programmed into content submission forms. This allows a contributor who may not be familiar with CSS or HTML, or able to understand or edit such code, to select the layout of an article or other page they are submitting on the fly, in the same form. For instance, a contributor, editor or author of an article or page might be able to select the number of columns and whether or not the page or article carries an image. This information is then passed to the content management system, and the program logic evaluates the information and determines, based on a certain number of combinations, how to apply classes and IDs to the HTML elements, thereby styling and positioning them according to the pre-defined CSS for that particular layout type. When working with large-scale, complex sites with many contributors, such as news and informational sites, this advantage weighs heavily in favour of the feasibility and maintainability of the project.

Separation of content from presentation: CSS facilitates publication of content in multiple presentation formats based on nominal parameters. Nominal parameters include explicit user preferences, different web browsers, the type of device being used to view the content (a desktop computer or mobile Internet device), the geographic location of the user, and many other variables.

Site-wide consistency: When CSS is used effectively, in terms of inheritance and "cascading", a global style sheet can be used to affect and style elements site-wide. If the situation arises that the styling of elements needs to be changed or adjusted, these changes can be made by editing rules in the global style sheet. Before CSS, this sort of maintenance was more difficult, expensive and time-consuming.

Bandwidth: A stylesheet, whether internal to the source document or separate, specifies the style once for a range of HTML elements selected by class, type or relationship to others. This is much more efficient than repeating style information inline for each occurrence of the element. An external stylesheet is usually stored in the browser cache, and can therefore be used on multiple pages without being reloaded, further reducing data transfer over a network.

Page reformatting: With a simple change of one line, a different style sheet can be used for the same page, as illustrated in the sketch after this list. This has advantages for accessibility, as well as providing the ability to tailor a page or site to different target devices. Furthermore, devices not able to understand the styling still display the content.
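A minimal sketch of the bandwidth and page-reformatting points above (file names invented): one cached external stylesheet serves the whole site, and swapping or adding a single link line retargets the same markup:

```html
<!-- Cached by the browser after the first page, then reused site-wide -->
<link rel="stylesheet" type="text/css" href="/styles/global.css">
<!-- Same markup, different presentation when printed -->
<link rel="stylesheet" type="text/css" href="/styles/print.css" media="print">
```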

Static web page

A static web page (sometimes called a flat page[1]) is a web page that is delivered to the user exactly as stored, in contrast to dynamic web pages, which are generated by a web application. Consequently, a static web page displays the same information for all users, from all contexts, subject to the modern capabilities of a web server to negotiate content-type or language of the document where such versions are available and the server is configured to do so.

Static web pages are often HTML documents stored as files in the file system and made available by the web server over HTTP. However, loose interpretations of the term could include web pages stored in a database, and could even include pages formatted using a template and served through an application server, as long as the page served is unchanging and presented essentially as stored.

Advantages and disadvantages


Advantages

- No programming skills are required to create a static page.
- Inherently publicly cacheable (i.e. a cached copy can be shown to anyone).
- No particular hosting requirements are necessary.
- Can be viewed directly by a web browser without needing a web server or application server, for example directly from a CD-ROM or USB drive.

Disadvantages
- Any personalization or interactivity has to run client-side (i.e. in the browser), which is restricting.
- Maintaining large numbers of static pages as files can be impractical without automated tools.

Dynamic web page

A dynamic web page is a kind of web page that is prepared with fresh information (content and/or layout) for each individual viewing. It is not static because it changes with the time (e.g. news content), the user (e.g. preferences in a login session), the user interaction (e.g. a web page game), the context (parametric customization), or any combination thereof.


Properties associated with dynamic web pages


Classical hypertext navigation occurs among "static" documents and, for web users, this experience is reproduced using static web pages, meaning that a page retrieved by different users at different times is always the same, in the same form. However, a web page can also provide a live user experience: content (text, images, form fields, etc.) on a web page can change in response to different contexts or conditions.

In dynamic sites, page content and page layout are created separately. The content is retrieved from a database and placed on a web page only when needed or asked for. This allows quicker page loading, and it allows just about anyone with limited web design experience to update their own website via an administrative tool. This set-up is ideal for those who wish to make frequent changes to their websites, such as text and image updates; e-commerce sites are a common example.

Two types of dynamic web sites


Client-side scripting and content creation
Client-side scripting can be used to change interface behaviors within a specific web page in response to mouse or keyboard actions, or at specified timing events. In this case the dynamic behavior occurs within the presentation; web pages using this presentation technology are sometimes called rich interface pages. Client-side scripting languages like JavaScript or ActionScript, used for Dynamic HTML (DHTML) and Flash technologies respectively, are frequently used to orchestrate media types (sound, animations, changing text, etc.) of the presentation. The scripting also allows the use of remote scripting, a technique by which the DHTML page requests additional information from a server, using a hidden frame, XMLHttpRequests, or a web service.

Client-side content is generated on the user's computer: the web browser retrieves a page from the server, then processes the code embedded in the page (often written in JavaScript) and displays the retrieved page's content to the user. The innerHTML property (or the document.write command) illustrates client-side dynamic page generation: two distinct pages, A and B, can be regenerated as document.innerHTML = A and document.innerHTML = B, or "on load" by document.write(A) and document.write(B). There are also utilities and frameworks for converting HTML files into JavaScript files; for example, webJS[1] uses the innerHTML property to render pages from converted HTML on the client side. JavaScript first saw widespread use in 1996, with Netscape 3, and was subsequently standardized as ECMAScript.
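A minimal sketch of the innerHTML-based regeneration just described (the element ID and page strings are invented):

```html
<div id="view"><h2>Page A</h2><p>First view.</p></div>
<script type="text/javascript">
  // Client-side regeneration: replace the displayed "page" without contacting the server
  var pageB = "<h2>Page B</h2><p>Second view.</p>";
  function show(markup) {
    document.getElementById("view").innerHTML = markup;
  }
  show(pageB);  // the user now sees page B, though no new document was fetched
</script>
```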

Server-side scripting and content creation


A program running on the web server (server-side scripting) is used to change the web content on various web pages, or to adjust the sequence or reloading of the web pages. Server responses may be determined by such conditions as data in a posted HTML form, parameters in the URL, the type of browser being used, the passage of time, or a database or server state. Such web pages are often created with the help of server-side languages such as PHP, Perl, ASP, ASP.NET, JSP, ColdFusion and other languages. These server-side languages typically use the Common Gateway Interface (CGI) to produce dynamic web pages. Pages of this kind can also use, on the client side, the first kind of dynamic behavior (DHTML, etc.).

Server-side dynamic content is more complicated: (1) the client sends the request to the server; (2) the server receives the request and processes the server-side script, such as PHP, based on the query string, HTTP POST data, cookies, etc.; (3) the resulting page is sent back to the client.
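The conditions listed above reach the server-side script through ordinary HTML mechanisms; a hypothetical form (the URL and field name are invented) supplies both a query-string parameter and HTTP POST data:

```html
<!-- ?lang=en arrives as a query-string parameter; the field arrives as POST data -->
<form action="/cgi-bin/search.cgi?lang=en" method="post">
  <input type="text" name="query">
  <input type="submit" value="Search">
</form>
```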

Dynamic page generation was made possible by the Common Gateway Interface, which was stable by 1993. Server Side Includes then provided a more direct way to handle server-side scripts at the web server.

Combining client and server side


Ajax is a web development technique for dynamically interchanging content with the server side without reloading the web page. Google Maps is an example of a web application that uses Ajax techniques and a database.
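A minimal sketch of the technique (the URL and element ID are invented): the page asks the server for a fragment and splices it in without a reload:

```html
<script type="text/javascript">
  // Fetch a fragment from the server and insert it into the live page
  function refreshNews() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/latest-news.html", true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById("news").innerHTML = xhr.responseText;
      }
    };
    xhr.send(null);
  }
</script>
```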

Disadvantages
- Search engines work by creating indexes of published HTML web pages that were, initially, "static". With the advent of dynamic web pages, often created from a private database, the content is less visible.[2] Unless this content is duplicated in some way (for example, as a series of extra static pages on the same site), a search may not find the information it is looking for. It is unreasonable to expect generalized web search engines to be able to access complex database structures, some of which in any case may be secure.

History
It is difficult to be precise about the beginnings of dynamic web pages, or their chronology, because the concept only makes sense after the widespread development of web pages: HTTP has been in use since 1990, and HTML, as a standard, since 1996. The explosion in web browser use started with Mosaic in 1993. It is clear, however, that the concept of dynamically driven websites predates the World Wide Web, and in fact HTML. For example, in 1990, before the general public use of the Internet, a dynamically driven, remotely accessed menu system was implemented by Susan Biddlecomb at the University of Southern California BBS, on a 16-line TBBS system with a TDBS add-on.

Web application


A web application is an application that is accessed over a network such as the Internet or an intranet. The term may also mean a computer software application that is hosted in a browser-controlled environment (e.g. a Java applet) or coded in a browser-supported language (such as JavaScript, combined with a browser-rendered markup language like HTML) and reliant on a common web browser to render the application executable.

Web applications are popular due to the ubiquity of web browsers, and the convenience of using a web browser as a client, sometimes called a thin client. The ability to update and maintain web applications without distributing and installing software on potentially thousands of client computers is a key reason for their popularity, as is the inherent support for cross-platform compatibility. Common web applications include webmail, online retail sales, online auctions, wikis and many other functions; Google Calendar and the open source Horde groupware are well-known examples.


History
In earlier computing models such as client-server, the load for the application was shared between code on the server and code installed on each client locally. In other words, an application had its own client program which served as its user interface and had to be separately installed on each user's personal computer. An upgrade to the server-side code of the application would typically also require an upgrade to the client-side code installed on each user workstation, adding to the support cost and decreasing productivity.

In contrast, web applications use web documents written in a standard format such as HTML (and more recently XHTML), which are supported by a variety of web browsers. Generally, each individual web page is delivered to the client as a static document, but the sequence of pages can provide an interactive experience, as user input is returned through web form elements embedded in the page markup. During the session, the web browser interprets and displays the pages, and acts as the universal client for any web application.

In 1995, Netscape introduced a client-side scripting language called JavaScript, which allowed programmers to add some dynamic elements to the user interface that ran on the client side. Until then, all the data had to be sent to the server for processing, and the results were delivered through static HTML pages sent back to the client. In 1996, Macromedia introduced Flash, a vector animation player that could be added to browsers as a plug-in to embed animations on web pages. It allowed the use of a scripting language to program interactions on the client side with no need to communicate with the server.

In 1999, the "web application" concept was introduced in the Java language in the Servlet Specification version 2.2.[1][2] At that time both JavaScript and XML had already been developed, but the term Ajax had not yet been coined, and the XMLHttpRequest object had only recently been introduced on Internet Explorer 5 as an ActiveX object.[3] In 2005, the term Ajax was coined, and applications like Gmail started to make their client sides more and more interactive.

Interface

Through Java, JavaScript, DHTML, Flash, Silverlight and other technologies, application-specific methods such as drawing on the screen, playing audio, and access to the keyboard and mouse are all possible. Many services have worked to combine all of these into a more familiar interface that adopts the appearance of an operating system; the Webconverger operating system, for instance, provides an interface for web applications. General-purpose techniques such as drag and drop are also supported by these technologies. Web developers often use client-side scripting to add functionality, especially to create an interactive experience that does not require page reloading. Recently, technologies have been developed to coordinate client-side scripting with server-side technologies such as PHP. Ajax, a web development technique using a combination of various technologies, is an example of a technology that creates a more interactive experience.

Structure
Applications are usually broken into logical chunks called "tiers", where every tier is assigned a role.[4] Traditional applications consist of only one tier, which resides on the client machine, but web applications lend themselves to an n-tiered approach by nature.[4] Though many variations are possible, the most common structure is the three-tiered application.[4] In its most common form, the three tiers are called presentation, application and storage, in this order. A web browser is the first tier (presentation); an engine using some dynamic web content technology (such as ASP, ASP.NET, CGI, ColdFusion, JSP/Java, PHP, Perl, Python, Ruby on Rails or Struts2) is the middle tier (application logic); and a database is the third tier (storage).[4] The web browser sends requests to the middle tier, which services them by making queries and updates against the database and generates a user interface.

For more complex applications, a 3-tier solution may fall short, and it may be beneficial to use an n-tiered approach, where the greatest benefit is breaking the business logic, which resides on the application tier, into a more fine-grained model.[4] Another benefit may be adding an integration tier that separates the data tier from the rest of the tiers by providing an easy-to-use interface to access the data.[4] For example, client data would be accessed by calling a "list_clients()" function instead of making an SQL query directly against the client table on the database. This allows the underlying database to be replaced without making any change to the other tiers.[4]

There are some who view a web application as a two-tier architecture. This can be a "smart" client that performs all the work and queries a "dumb" server, or a "dumb" client that relies on a "smart" server.[4] The client would handle the presentation tier, the server would have the database (storage tier), and the business logic (application tier) would be on one of them or on both.[4] While this increases the scalability of the applications and separates the display and the database, it still doesn't allow for true specialization of layers, so most applications will outgrow this model.[4]
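A browser-side sketch of the integration-tier idea (the endpoint and function names are invented): the presentation tier asks a named interface for data and never issues anything resembling SQL itself:

```html
<script type="text/javascript">
  // Presentation tier: request client data from the application/integration tiers.
  // Because the page only knows this URL, the database behind it can be replaced freely.
  function listClients(callback) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/app/list_clients", true);
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        callback(xhr.responseText);
      }
    };
    xhr.send(null);
  }
</script>
```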

Business use


An emerging strategy for application software companies is to provide web access to software previously distributed as local applications. Depending on the type of application, it may require the development of an entirely different browser-based interface, or merely adapting an existing application to use different presentation technology. These programs allow the user to pay a monthly or yearly fee for use of a software application without having to install it on a local hard drive. A company which follows this strategy is known as an application service provider (ASP), and ASPs are currently receiving much attention in the software industry.

Writing web applications


There are many web application frameworks which facilitate rapid application development by allowing the programmer to define a high-level description of the program.[5] In addition, there is potential for the development of applications on Internet operating systems, although currently there are not many viable platforms that fit this model. The use of web application frameworks can often reduce the number of errors in a program, both by making the code simpler, and by allowing one team to concentrate just on the framework. In applications which are exposed to constant hacking attempts on the Internet, security-related problems can be caused by errors in the program. Frameworks can also promote the use of best practices[6] such as GET after POST.

Applications
Examples of browser applications include simple office software (word processors, online spreadsheets, and presentation tools), but also more advanced applications such as project management, computer-aided design, video editing and point-of-sale.

Benefits

- Web applications do not require any complex "roll out" procedure to deploy in large organizations; a compatible web browser is all that is needed.
- Browser applications typically require little or no disk space on the client.
- They require no upgrade procedure, since all new features are implemented on the server and automatically delivered to the users.
- Web applications integrate easily into other server-side web procedures, such as email and searching.
- They also provide cross-platform compatibility in most cases (i.e., Windows, Mac, Linux, etc.) because they operate within a web browser window.

Drawbacks
- In practice, web interfaces, compared to thick clients, typically force significant sacrifices to user experience and basic usability.
- Web applications absolutely require compatible web browsers. If a browser vendor decides not to implement a certain feature, or abandons a particular platform or operating system version, this may affect a huge number of users.
- Standards compliance is an issue with any non-typical office document creator, which causes problems when file sharing and collaboration become critical.
- Browser applications rely on application files accessed on remote servers through the Internet. Therefore, when the connection is interrupted, the application is no longer usable. However, if it uses HTML5 APIs such as offline web application caching,[7] it can be downloaded and installed locally for offline use. Google Gears, although no longer in active development, is a good example of a third-party plugin for web browsers that provides additional functionality for creating web applications.
- Since many web applications are not open source, there is also a loss of flexibility, making users dependent on third-party servers, not allowing customizations of the software and preventing users from running applications offline (in most cases). However, if licensed, proprietary software can be customized and run on the preferred server of the rights owner.
- They depend entirely on the availability of the server delivering the application. If a company goes bankrupt and the server is shut down, the users have little recourse. Traditional installed software keeps functioning even after the demise of the company that produced it (though there will be no updates or customer service).
- Likewise, the company has much greater control over the software and functionality. It can roll out new features whenever it wishes, even if the users would like to wait until the bugs have been worked out before upgrading. The option of simply skipping a weak software version is often not available. The company can foist unwanted features on the users or cut costs by reducing bandwidth. Companies will, of course, try to keep the good will of their customers, but the users of web applications have fewer options in such cases unless a competitor steps in and offers a better product and easy migration.
- The company can theoretically track anything the users do, which can cause privacy problems.

See also


- Software as a service (SaaS)
- Web 2.0
- Web services
- Web widget

References

1. Alex Chaffee (2000-08-17). "What is a web application (or "webapp")?". http://www.jguru.com/faq/view.jsp?EID=129328. Retrieved 2008-07-27.
2. James Duncan Davidson, Danny Coward (1999-12-17). Java Servlet Specification ("Specification") Version: 2.2 Final Release. Sun Microsystems. pp. 43-46. http://java.sun.com/products/servlet/download.html. Retrieved 2008-07-27.
3. "Dynamic HTML and XML: The XMLHttpRequest Object". Apple Inc. http://developer.apple.com/internet/webcontent/xmlhttpreq.html. Retrieved 2008-06-25.
4. Jeremy Petersen. "Benefits of using the n-tiered approach for web applications". http://www.adobe.com/devnet/coldfusion/articles/ntier.html.
5. Multiple (wiki). "Web application framework". Docforge. http://docforge.com/wiki/Web_application_framework. Retrieved 2010-03-06.
6. Multiple (wiki). "Framework". Docforge. http://docforge.com/wiki/Framework. Retrieved 2010-03-06.
7. Multiple. "Offline Web applications - HTML5". WHATWG. http://www.whatwg.org/specs/web-apps/current-work/multipage/offline.html. Retrieved 2010-08-09.

External links



- HTML 5 Draft recommendation: changes to HTML and related APIs to ease authoring of web-based applications.
- The Other Road Ahead: an article arguing that the future lies on the server, not in rich interfaces on the client.
- Web Applications at the Open Directory Project.
- Web Applications Working Group at the World Wide Web Consortium (W3C).

Web page

A web page or webpage is a document or information resource that is suitable for the World Wide Web and can be accessed through a web browser and displayed on a monitor or mobile device. This information is usually in HTML or XHTML format, and may provide navigation to other web pages via hypertext links. Web pages frequently subsume other resources such as style sheets, scripts and images into their final presentation.

Web pages may be retrieved from a local computer or from a remote web server. The web server may restrict access to a private network, e.g. a corporate intranet, or it may publish pages on the World Wide Web. Web pages are requested and served from web servers using the Hypertext Transfer Protocol (HTTP). They may consist of files of static text and other content stored within the web server's file system (static web pages), or may be constructed by server-side software when they are requested (dynamic web pages). Client-side scripting can make web pages more responsive to user input once in the client browser.
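A minimal, hypothetical page showing those ingredients, HTML content, a hypertext link, and subsumed resources (all file names invented):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Example page</title>
    <link rel="stylesheet" type="text/css" href="page.css">   <!-- subsumed style sheet -->
    <script type="text/javascript" src="page.js"></script>    <!-- subsumed script -->
  </head>
  <body>
    <img src="logo.png" alt="Logo">                           <!-- subsumed image -->
    <p>See the <a href="history.html">history page</a>.</p>   <!-- hypertext link -->
  </body>
</html>
```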
