Hypertext: DKR Scenario 29 Dec

This is an answer to Demo@50: Welcome Builder!, but please also consider the more practical interactive mapping of my project results to an OHS. Everything that follows is up for discussion, adjustment, extension, removal and implementation. A Dynamic Knowledge Repository, according to my current understanding, is a comprehensive collection of documents and other media, supported by software for working with it, created and curated by a community of people who are consumers, contributors, knowledge workers and stewards.

Can we make a 21st Century DKR happen? Yes, it’s not too difficult. Whether we can make it big is a totally different question, because besides the technological, legal and content-related solutions, there’s also the social/political aspect, timing and luck. On the other hand, I already count it as a victory if we manage to augment our own matters or improve on the current state of affairs – it’s about bootstrapping, and I’m unable to predict what people will use the tools for in the future. Additionally, I believe that DKRs do exist today, Wikipedia for example.

What infrastructures and components would have to be built? All of them, and many, many more. It’s not so important what we will end up with eventually, but how it is built and what policies are applied. Formats and protocols need to be open, software needs to be libre-licensed, online services need to be self-hostable, dependencies need to be avoided, offline usage and preservation need to be an option, and social mechanisms should prevent abuse and encourage constructive collaboration. Most of the potential of the digital revolution hasn’t been realized yet because strong powers entrap unsuspecting users in silos, so we need to build an alternative based on interchangeability on every level, basically liberating ourselves and humanity from totally artificial limitations.

What aspect of the DKR do you feel should be built and how do you want to contribute, based on the list of “What is missing” and other aspects of Doug’s work? As said, all of it should be built, and a lot more. When it comes to me, I’m more of a plumber in the background who builds many small interoperable things to retain flexibility, in an attempt to avoid programming myself into a corner, unmanageable complexity and bloatware. I have trouble with UI/UX design, but am slowly trying to catch up. Therefore, I’m 100% in for the Journal; I don’t care too much about xFiles (the underlying data structures should be abstracted away, will be a subject for optimization experts, and only need to comply with the requirements given above); I would like to learn the Chorded Keyset but don’t know how to build one; I’m fine with any kind of linking but would rather have more of it than less; and I really like the notion of ViewSpecs, although I’m not sure if they can work in some of the most complex situations. Regarding other aspects of Doug’s work, establishing useful augmented reality would be the killer, in my opinion, on the technical/infrastructural level. Furthermore, I would like to support collaboration and research in human-computer interaction, but would definitely go for the low-hanging fruit and advance from there.

If you are already working on something you feel is naturally part of a DKR, how do you feel it can interact with other components? I don’t know how my software can interact with other components, as I’m not aware of many existing components or format specs, except Ted Nelson’s EDL format/concept and span selector, which aren’t libre-licensed and need improvement; therefore I “interact” with them by implementing libre-licensed alternatives that would be almost compatible if not for trademark law creeping into the format spec. Time Browser and Author sound interesting, but I haven’t looked into them yet. My components are intended to interact with other open/libre components to the fullest extent possible, even if only by means of data conversion.
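To make the span idea more concrete, here is a minimal sketch (in TypeScript) of what such an EDL-like composition could look like: an ordered list of span references into source texts that gets resolved into a new document. All names and fields are illustrative assumptions, not Ted Nelson’s actual EDL specification and not my implementation.

// Minimal sketch of an EDL-like composition: a document is just an
// ordered list of spans pointing into source texts. Names and fields
// are illustrative, not Ted Nelson's actual EDL specification.

interface SpanRef {
  source: string;  // URL or identifier of the source document
  start: number;   // character offset into the source text
  length: number;  // number of characters to transclude
}

type EditDecisionList = SpanRef[];

// Resolve an EDL against already-fetched source texts.
function composeDocument(
  edl: EditDecisionList,
  sources: Map<string, string>
): string {
  return edl
    .map(span => {
      const text = sources.get(span.source);
      if (text === undefined) {
        throw new Error(`Missing source: ${span.source}`);
      }
      return text.slice(span.start, span.start + span.length);
    })
    .join("");
}

// Example: two spans from different sources form one new "document".
const sources = new Map([
  ["doc:a", "Augmenting human intellect is a long-term effort."],
  ["doc:b", "Hypertext keeps the provenance of every fragment."],
]);
const edl: EditDecisionList = [
  { source: "doc:a", start: 0, length: 27 },
  { source: "doc:b", start: 10, length: 39 },
];
console.log(composeDocument(edl, sources));

The appeal of keeping documents as span lists is that provenance stays intact: every fragment still points back to its source, which is exactly what my libre-licensed alternative aims to preserve while avoiding the legal issues of the original spec.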

Please write a walkthrough to help communicate our personal perspectives (featuring Joe). There is no single Joe, but a lot of different Joes, each with his own individual or pre-defined configuration. One of the Joes sits down at a computer, which allows him to continue reading/working where he left off the last time, or to switch to new publications in his field of interest, or to look things up with one of the agents that use different metrics, employing different methods to be most effective. He decides to continue with his current reading, where he adds a few comments, a new paragraph, annotations and links, fixes some errors, marks semantic meanings, and by doing so discovers other documents, meanings and relations. His lifestream records his journey, which he can share or keep to himself, but it’s not unlikely that an agent/policy will decide what kind of activity will be publicly shared, so Joe usually doesn’t bother to take manual action. If a “document” (it’s more a text composition from diverse sources, rendered by many layers of standard, public and custom ViewSpecs) is particularly interesting to him, he tears it apart in order to fit portions of it into his own collection/archive, which in effect constitutes another new document, public or private. Joe also does this frequently together with other people: family members, colleagues, friends and complete strangers. Because he’s involved in several communities, he uses different, very specific ViewSpecs tailored for particular tasks, but it can happen from time to time that he doesn’t have and can’t find a ViewSpec that suits his needs, and as he hasn’t yet learned how to build one himself, he either asks his neighbor or one of the many people who provide services for the public network infrastructure, its data and software. No matter if Joe is awake or sleeping, agents/bots work for him, for others, for the content, for the network, to keep it clean, to answer questions, to do all sorts of operations, but in an intelligent, cooperative way so resources don’t get wasted unnecessarily.
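The “many layers of standard, public and custom ViewSpecs” in this walkthrough could be imagined as composable view transformations over a document model. Here is a hedged sketch in TypeScript; the statement model and the specific specs are made up for illustration and don’t correspond to NLS/Augment’s actual ViewSpec codes.

// Sketch of layered ViewSpecs as composable view transformations.
// The "document model" and specs here are made up for illustration;
// NLS/Augment ViewSpecs were terse codes, not functions like these.

interface Statement {
  level: number;    // depth in the outline hierarchy
  text: string;
  tags: string[];   // e.g. semantic markings added by Joe
}

type ViewSpec = (view: Statement[]) => Statement[];

// Standard spec: show only the first N outline levels.
const levelClip = (maxLevel: number): ViewSpec =>
  view => view.filter(s => s.level <= maxLevel);

// Public spec: keep statements carrying a given semantic tag.
const tagged = (tag: string): ViewSpec =>
  view => view.filter(s => s.tags.includes(tag));

// Custom spec: truncate every statement to its first line.
const firstLineOnly: ViewSpec =
  view => view.map(s => ({ ...s, text: s.text.split("\n")[0] }));

// Applying a stack of ViewSpecs is just function composition over the view.
function render(doc: Statement[], specs: ViewSpec[]): Statement[] {
  return specs.reduce((view, spec) => spec(view), doc);
}

const doc: Statement[] = [
  { level: 1, text: "DKR requirements\nlong elaboration…", tags: ["topic"] },
  { level: 2, text: "Open formats", tags: ["topic", "policy"] },
  { level: 3, text: "Implementation notes", tags: [] },
];
console.log(render(doc, [levelClip(2), tagged("policy"), firstLineOnly]));

If ViewSpecs compose like this, mixing standard, public and custom layers stays cheap, which is what would let Joe’s different communities each maintain their own task-specific views over the same underlying material.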

As Joe is a little bit paranoid, he has the habit of printing his favorite pieces or producing other physical manifestations in his living room (sometimes he’s too lazy and just orders them cheaply from an on-demand service), so he doesn’t need to worry about losing anything in case “the lights go out”. Even if this should ever happen, most of the manifestations can be read in again automatically with some kind of 3D-printed, matrix code, standardized OCR, RFID or punch card technology. Speaking of the physical world, Joe still needs to do business in it, of course, but wherever he goes, online repositories go with him (well, the truth is that he rarely enjoys connectivity because of bandwidth and legal constraints, not to mention the long distances with no network access at all, so he’s forced to work with offline repos and auto-sync with the rest of the world later). Especially in cities, places, people, objects and time present themselves in an augmented way via his mobile device, so he can either learn about them or interact with them and make them do things, as they too are networked and compatible in a network of things. Quite frankly, a substantial portion of his “document work” is related to what he discusses and designs together with others in meetings, or is linked to physical stuff, so he can overlay relevant data if he wants, with all the tools and capabilities available to him as if he were at home in front of his computer. Now, those things are just a fraction of what Joe does on a typical day, but as the full account doesn’t fit on a computer screen and requires an Open Hypertext System to be properly presented, please continue reading here.
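The “offline repos that auto-sync later” could, for instance, be modeled as a local journal of changes that is replayed against a remote repository once connectivity returns. A minimal sketch, assuming a hypothetical repository interface and ignoring conflict resolution entirely:

// Minimal sketch of "work offline, sync later": every change is
// appended to a local journal and pushed once connectivity returns.
// The repository interface and retry behavior are assumptions,
// not a description of any existing component.

interface ChangeEntry {
  id: string;        // locally generated identifier
  timestamp: number; // when the change was made offline
  payload: string;   // serialized change (e.g. an annotation or link)
}

interface RemoteRepository {
  push(entry: ChangeEntry): Promise<void>;
}

class OfflineRepo {
  private journal: ChangeEntry[] = [];

  record(payload: string): void {
    this.journal.push({
      id: Math.random().toString(36).slice(2),
      timestamp: Date.now(),
      payload,
    });
  }

  // Replay pending changes in order; keep whatever fails for retry.
  async sync(remote: RemoteRepository): Promise<void> {
    const pending = [...this.journal];
    this.journal = [];
    for (const entry of pending) {
      try {
        await remote.push(entry);
      } catch {
        this.journal.push(entry); // still offline or rejected: retry later
      }
    }
  }
}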

Hypertext: Mother of All Demos 50th anniversary

With less than one year left to prepare for the 50th anniversary of the Mother of All Demos, it’s about time to look at the projects that could potentially be presented on 2018-12-09. The event is an important milestone for determining whether we have made significant progress in realizing Doug’s vision over the last 50 years, and it seems like we’re a little bit late. Still, I’d like to address the big picture first before committing to blind actionism.

But what does it even mean to “realize Doug’s vision” in the context of 2018? I have to admit that I’ve never watched the entire demo in one go; it’s just very difficult to spend an hour of quality Internet time in front of my computer, passively watching things happen on a screen. I wondered why this is the case, and my impression is that the demo was made for end users, not system developers. Doug didn’t explain/discuss the technical details or what their results could mean conceptually for human-computer interaction, and as there’s no way for me to actually use their system today, that particular showcase is practically irrelevant for me. It feels more like an advertisement, and to some extent, it is. If I’m not mistaken, part of the goal of the demo was to attract further funding for the project, which is perfectly fine, as there was no fast/cheap way to make the system a universally available product in those days. So I see the great demo more as a PR stunt, which in fact served pretty well to introduce people to the notion of personal, networked, visual and augmented interaction with a computer. So how can we replicate a similar effect? Do we even have to?

The world is very different now. In 1968, computers (especially for personal use) were a new thing, so it was natural to explore this technology – how can it help with solving problems of ever-increasing complexity? Today, people have seen computers; we know what they do. If the goal is to contribute tools for dealing with the big problems, we might not deploy computers in the traditional sense any more. For instance, if quantum computing were around the corner, there would be a chance to hold another revolutionary demo. In the field of augmented reality, we should immediately start preparing one, and it wouldn’t even be difficult to do so. Would such endeavors still be true to Doug in spirit? Sure, but they’re not about documents and knowledge, and the web would stay unfixed, so there could also be some merit in replicating the image that Doug put in front of us. Keep in mind that it doesn’t have to be an either-or, but a decision for the anniversary event might shape what’s considered to be Engelbartian in the future, and whether it is a preservationist or a progressive movement, or both. Is the anniversary supposed to refresh the original vision, to promote a new/updated one (we could even mock one up), or to encourage long-term research and development?

If we end up with the goal of organizing the world’s information and making it universally accessible and useful (in an open way and for real!), we have to realize that people don’t get excited about text manipulation any more, as they already have office suites and the web; that blue-sky funding isn’t available any more; and that our results will be less innovative, as the field isn’t new and there’s much more “competition” around. On the other hand, what Doug was able to do with a team of people working for several years on real funding, a single person can now slowly approach without any financial backing, as we don’t have to start from scratch and can benefit from existing technology and methods.

What’s wrong with HyperScope? It’s really unfortunate that the W3C didn’t standardize an XSLT interface (even though browsers tend to come with an XSLT processor), but can’t we get a JavaScript implementation of it by now? How much work would it be to update the GUI framework library that was used for HyperScope?
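For reference, browsers still expose their built-in XSLT processor to scripts via the XSLTProcessor API, so a HyperScope-like transformation can at least be triggered from script today, even without a standardized declarative interface or a pure JavaScript XSLT engine. A minimal sketch in TypeScript; the file names are placeholders, not HyperScope’s actual resources.

// Minimal sketch of client-side XSLT using the browser's built-in
// XSLTProcessor (available in Firefox and Chromium-based browsers).
// The URLs are placeholders, not HyperScope's actual files.

async function fetchXml(url: string): Promise<Document> {
  const text = await (await fetch(url)).text();
  return new DOMParser().parseFromString(text, "application/xml");
}

async function transform(sourceUrl: string, stylesheetUrl: string) {
  const [source, stylesheet] = await Promise.all([
    fetchXml(sourceUrl),
    fetchXml(stylesheetUrl),
  ]);
  const processor = new XSLTProcessor();
  processor.importStylesheet(stylesheet);
  // Render the transformed result into the current page.
  const fragment = processor.transformToFragment(source, document);
  document.body.appendChild(fragment);
}

transform("journal.xml", "viewspec.xsl");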

Hypertext: Socratic Authoring

The Future Text Initiative published an introduction to the notion of “Socratic Authoring”, which is pretty straightforward in describing the general requirements for a modern writing and publishing system. Similar visions have surely been expressed before, but this proposal might be the most recent one.

In it, we find the demand to retain as much as possible of the richness the author experienced during composition. What does this mean? Is it referring to WYSIWYG, to the ability of the author to hide parts of the process, or to the realization that the reader might become a co-author and therefore needs an equivalent writing environment? It sounds like we don’t want to see works being produced electronically just to end up in a single, primitive, fixed form, so that re-digitization would be needed to enrich them again.

Then there’s the suggestion that semantic augmentation should take place once the document is on a server. I ask: why should this only be possible on a server? Why shouldn’t the user get full augmentation on the client side as well? Sure, augmentation could require resources that are only available online, or include actions that need to be executed on a different machine, but even when disconnected, or without using an external proxy (regardless of whether such an external online service is self-owned or third-party), the user shouldn’t face artificial limitations just because the client side is lacking an implementation.
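As an illustration of the point, a basic augmentation pass can run entirely on the client, for example by linking terms against a locally stored glossary, with no server involved at all. The glossary contents and markup below are purely illustrative assumptions, not part of the Socratic Authoring proposal.

// Sketch of a purely client-side augmentation pass: terms found in a
// locally stored glossary are wrapped in links, no server involved.
// Glossary contents and markup conventions are illustrative only.

const glossary = new Map<string, string>([
  ["DKR", "concept/dynamic-knowledge-repository"],
  ["ViewSpec", "concept/viewspec"],
]);

function augment(text: string): string {
  let result = text;
  for (const [term, target] of glossary) {
    // Whole-word match; escaping is omitted since the terms are plain words.
    const pattern = new RegExp(`\\b${term}\\b`, "g");
    result = result.replace(pattern, `<a href="${target}">${term}</a>`);
  }
  return result;
}

console.log(augment("A DKR rendered through a ViewSpec, offline."));

More elaborate augmentation might indeed call out to online resources when they happen to be reachable, but a pass like this one works just as well when disconnected, which is exactly the capability the client side shouldn’t be denied.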

To be continued…