Wendy’s Bolinas Interviews – HYPERTEXT

It’s an important reminder that the success of the current web rests on very basic principles which are almost forgotten today, but which are now being rediscovered. The URI, an abstract identifier for a resource, and the URL, a concrete way to retrieve a resource, were designed to be universal and independent of proprietary server implementations, and therefore allowed documents to be linked across heterogeneous technical infrastructure. All the technical details of information retrieval are abstracted away from the user, who gets the impression of working in a single environment rather than with many different networks, protocols and servers. Once three things are open and standardized, namely the addressing scheme, the retrieval protocol that clients and servers agree on, and the file formats the client can read, even a rather chaotic network can come together to exchange documents online.

If I interpret Robert Cailliau correctly, the initial goal was to provide easy access to research documents at CERN, and it seems that HTML was initially intended for building lists of such existing documents stored in non-web formats, plus a few additional tags to make those lists presentable and to annotate the document references. In the beginning, Berners-Lee and Cailliau could hardly expect that anybody would write new documents in HTML, so I assume they simply started to link existing documents together with it. Over time, people realized that they could put their information directly into HTML, so it was more or less inevitable that more and more tags were needed to turn HTML into a document description format. It is quite interesting to consider the development of HTML in terms of XHTML, the Semantic Web and HTML as an interface for online applications, but beyond that, because many people want many different things from HTML, it was forced to become a generalized format for all of those different use cases.

The current web isn’t a terribly good solution for text-oriented data, or literature in particular, so it’s our job to take the basic principles of the web, which perform very well in the area of information interchange, and build a new client on top of them, one better suited to the specific needs of representing human knowledge and conversation. Today, the term “web” is used as a synonym for HTML, while in fact it should refer to URLs + HTTP + links, with HTML viewed as only one of many possible applications on top of those very basic mechanisms (see the short sketch at the end of this section). This perspective has gained some traction in recent years with the ReST movement, so developers are more likely to understand the concept and will be ready to implement it.

I do not accept any criticism regarding the DOM or other XML-related techniques, as they are widespread, universal tools for handling text-oriented data, and Project Xanadu would face a lot of unnecessary work to provide similar programmability for its document formats. The technology doesn’t cause the problems; it’s the human politics that do.
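To make the URLs + HTTP point concrete, here is a minimal sketch in Python (the language and the placeholder URL are my own assumptions, not anything from the interview). It retrieves a resource by its URL over HTTP and reports the media type the server declares; nothing in the retrieval step depends on the resource being HTML.

```python
import urllib.request

# Retrieve a resource by its URL over HTTP(S). The retrieval mechanism is
# the same regardless of the format the server returns: HTML, XML, JSON,
# plain text, or anything else.
url = "https://example.org/"  # placeholder URL; any resource would work

with urllib.request.urlopen(url) as response:
    media_type = response.headers.get("Content-Type")  # the declared, standardized format
    body = response.read()

# Only at this point does the client decide how to interpret the bytes;
# an HTML renderer is just one of many possible applications at this layer.
print(media_type, len(body), "bytes")
```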