First, an initiative from the previous year was completed. It had become necessary because the conversion from CSV data to XML using third-party components would, in rare cases, omit individual characters from the result. Therefore, a separate, clean CSV parser had to be developed according to RFC 4180. Bundled as a converter program, it also allows filtering out CSV columns. This parser implementation was later expanded into a StAX-like programming interface library for Java and C++.
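The library itself isn’t reproduced here; as a rough illustration of what a StAX-like pull interface for RFC 4180 CSV might look like in Java, the following minimal sketch may serve (class, method and event names are invented for the example and don’t reflect the actual library’s API):

```java
import java.io.IOException;
import java.io.PushbackReader;
import java.io.Reader;
import java.io.StringReader;

// Minimal sketch of a StAX-like pull interface for RFC 4180 CSV.
// Names like CsvPullParser and Event are illustrative only.
public class CsvPullParser {
    public enum Event { FIELD, END_OF_RECORD, END_OF_INPUT }

    private final PushbackReader in;
    private final StringBuilder field = new StringBuilder();
    private boolean recordPending = false;
    private boolean done = false;

    public CsvPullParser(Reader reader) {
        this.in = new PushbackReader(reader);
    }

    public String getField() {
        return field.toString();
    }

    // Advances to the next event. Fields are separated by commas, records by
    // CRLF or LF, and quoted fields use "" as the escape for a literal quote.
    public Event next() throws IOException {
        if (done) {
            return Event.END_OF_INPUT;
        }
        if (recordPending) {
            recordPending = false;
            return Event.END_OF_RECORD;
        }
        field.setLength(0);
        int c = in.read();
        if (c < 0) {
            done = true;
            return Event.END_OF_INPUT;
        }
        if (c == '"') {
            // Quoted field: read until the closing quote, honoring "" escapes.
            while (true) {
                c = in.read();
                if (c < 0) {
                    throw new IOException("Unterminated quoted field.");
                }
                if (c == '"') {
                    c = in.read();
                    if (c != '"') {
                        break;
                    }
                }
                field.append((char) c);
            }
        } else {
            // Unquoted field: read until a delimiter or end of input.
            while (c >= 0 && c != ',' && c != '\r' && c != '\n') {
                field.append((char) c);
                c = in.read();
            }
        }
        if (c < 0) {
            done = true;
        } else if (c == '\r' || c == '\n') {
            if (c == '\r') {
                int lf = in.read();
                if (lf >= 0 && lf != '\n') {
                    in.unread(lf);
                }
            }
            recordPending = true;
        }
        // c == ',' simply means another field of the same record follows.
        return Event.FIELD;
    }

    public static void main(String[] args) throws IOException {
        CsvPullParser parser = new CsvPullParser(
                new StringReader("a,\"b,\"\"quoted\"\"\",c\r\n1,2,3\r\n"));
        Event event;
        while ((event = parser.next()) != Event.END_OF_INPUT) {
            if (event == Event.FIELD) {
                System.out.print("[" + parser.getField() + "] ");
            } else {
                System.out.println();
            }
        }
    }
}
```

The main() method demonstrates the pull style: the caller requests one event at a time instead of receiving callbacks or a fully materialized document, which keeps memory usage independent of the input size.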
In the hypermedia domain, the browser interface for microphone device access was used for a standalone recording component as well as for an audio messaging system. To support independent commenting on sections within YouTube videos, a workflow was created that embeds the selected clip into a static Web page and enriches it with additional content. Automated download, cutting and indexing, as well as the obvious possibilities related to transcription and Web Annotation, weren’t pursued.
To query data attached to a geographical location, a prototype was built using the browser’s geolocation interface. This technical foundation might later help advance towards Augmented Reality as common, public infrastructure.
Related to the Peeragogy Project, several automated processing workflows were developed in order to generate summarizing “wrap” reports similar to Engelbart’s “Journal”. A complete document management system hasn’t been achieved yet, but the introduction of NCX (the “Navigation Control file for XML” of the Open Packaging Format as found in EPUB) finally allowed source material to flow into a range of different selection/order sequences.
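To illustrate how an NCX navigation map can drive such a sequence, here is a minimal Java sketch that reads the navPoints of an NCX file in document order; the file name wrap.ncx and the class name are made up for the example:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Minimal sketch: read the navPoints of an NCX file in document order to
// obtain one particular selection/order sequence of source material.
// The file name "wrap.ncx" is just a placeholder for the example.
public class NcxSequenceReader {
    private static final String NCX_NAMESPACE = "http://www.daisy.org/z3986/2005/ncx/";

    public static void main(String[] args) throws Exception {
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        factory.setNamespaceAware(true);
        Document document = factory.newDocumentBuilder().parse("wrap.ncx");
        NodeList navPoints = document.getElementsByTagNameNS(NCX_NAMESPACE, "navPoint");
        for (int i = 0; i < navPoints.getLength(); i++) {
            Element navPoint = (Element) navPoints.item(i);
            Element label = (Element) navPoint.getElementsByTagNameNS(NCX_NAMESPACE, "text").item(0);
            Element content = (Element) navPoint.getElementsByTagNameNS(NCX_NAMESPACE, "content").item(0);
            // Each navPoint contributes one label and one source reference.
            System.out.println(label.getTextContent() + " -> " + content.getAttribute("src"));
        }
    }
}
```

Each navPoint points at a label and a source reference, so a different NCX file yields a different selection/order of the same source material.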
From programming parsers for data formats, it’s a small step to writing interpreters for interactive, dynamic script execution environments as well as for domain-specific command languages. The reconstruction of a long-forgotten, list-processing-oriented, plain-text-based interpreter language in the form of the “BAC programming language” still awaits the addition of a few instructions that remain missing.
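Without anticipating the actual instruction set of the BAC programming language, the following minimal Java sketch only illustrates the general shape of such a plain-text, list-processing-oriented interpreter; all instruction names are invented for the example:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.Scanner;

// Minimal sketch of a plain-text command interpreter with a few
// list-processing instructions, read line by line from standard input.
// The instruction names are invented and are not the BAC instruction set.
public class TinyListInterpreter {
    public static void main(String[] args) {
        Deque<String> list = new ArrayDeque<>();
        Scanner input = new Scanner(System.in);
        while (input.hasNextLine()) {
            // One instruction per line, arguments separated by whitespace.
            String[] tokens = input.nextLine().trim().split("\\s+");
            if (tokens[0].isEmpty()) {
                continue;
            }
            switch (tokens[0]) {
            case "push":
                // Append the remaining tokens to the end of the list.
                list.addAll(Arrays.asList(tokens).subList(1, tokens.length));
                break;
            case "pop":
                // Remove the last element, if there is one.
                list.pollLast();
                break;
            case "reverse": {
                // Rebuild the list in reverse order.
                Deque<String> reversed = new ArrayDeque<>();
                for (String element : list) {
                    reversed.addFirst(element);
                }
                list = reversed;
                break;
            }
            case "print":
                System.out.println(String.join(" ", list));
                break;
            case "quit":
                return;
            default:
                System.out.println("Unknown instruction: " + tokens[0]);
            }
        }
    }
}
```

A session entering push a b c, then reverse, then print would output c b a; the point is merely the pattern of tokenizing plain-text lines and dispatching them to instructions that operate on a list.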
Another project had the goal of offering configurable input controls for data entry and for building pattern catalogs. For this purpose, a generic Web application server solution was realized. Template definitions bind the input fields to the desired semantic data representation. The submitted entries can be retrieved in multiple formats via HTTP content negotiation (Accept request header, Content-Type response header). To edit existing entries, a wiki-like versioning feature was added. A data source organized this way was used to generate a Progressive Web App; furthermore, automatically filling a dashboard is easily imaginable as well.
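As an illustration of the retrieval side, the following minimal Java sketch (based on the JDK’s built-in com.sun.net.httpserver classes) serves the same entry either as JSON or as XML depending on the Accept header; the /entry path and the example entry are invented and don’t reflect the actual server:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of serving the same entry in multiple formats via HTTP
// content negotiation (Accept request header, Content-Type response header).
// The /entry path and the example entry are placeholders.
public class EntryServer {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/entry", EntryServer::handle);
        server.start();
    }

    private static void handle(HttpExchange exchange) throws IOException {
        String accept = exchange.getRequestHeaders().getFirst("Accept");
        String contentType;
        String body;
        if (accept != null && accept.contains("application/xml")) {
            contentType = "application/xml";
            body = "<entry><title>Example</title></entry>";
        } else {
            contentType = "application/json";
            body = "{\"title\": \"Example\"}";
        }
        byte[] payload = body.getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().set("Content-Type", contentType);
        exchange.sendResponseHeaders(200, payload.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(payload);
        }
    }
}
```

A request like curl -H "Accept: application/xml" http://localhost:8080/entry then receives the XML variant, while a plain curl http://localhost:8080/entry falls back to JSON.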
GraphML was extended by making connections/edges categorizable in order to arrange formerly unrelated values into dimensional sequences. The earlier navigation user interface for free-flowing graphs is then able to strap any dimension onto its interchangeable axes, deliberately restricting navigation to these “guide-rails”. As a result, orientation improves while moving through a space of multi-dimensional and potentially irregular data points. Later, a variant was implemented in Java, which also functions as an editor.
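The following minimal Java sketch shows the underlying idea of dimension-categorized edges with navigation restricted to one dimension at a time; the class, method and node names are invented for the example, and the actual tools read this information from the extended GraphML instead of hard-coding it:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Minimal sketch of edges categorized by dimension, so that navigation can be
// restricted to one dimension at a time. All names are placeholders.
public class DimensionalGraph {
    // node id -> (dimension name -> neighbor node ids, in sequence order)
    private final Map<String, Map<String, List<String>>> edges = new HashMap<>();

    public void addEdge(String from, String to, String dimension) {
        edges.computeIfAbsent(from, key -> new HashMap<>())
             .computeIfAbsent(dimension, key -> new ArrayList<>())
             .add(to);
    }

    // Moves along the selected dimension only, like a guide-rail on one axis.
    public Optional<String> step(String from, String dimension) {
        List<String> targets = edges.getOrDefault(from, Map.of())
                                    .getOrDefault(dimension, List.of());
        return targets.isEmpty() ? Optional.empty() : Optional.of(targets.get(0));
    }

    public static void main(String[] args) {
        DimensionalGraph graph = new DimensionalGraph();
        graph.addEdge("2021-03", "2021-04", "time");
        graph.addEdge("2021-03", "chapter-2", "structure");
        // With "time" strapped onto the current axis, only the time edge is followed.
        System.out.println(graph.step("2021-03", "time").orElse("(no neighbor)"));
    }
}
```

Swapping the dimension name passed to step() corresponds to strapping a different dimension onto one of the interchangeable axes.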
The summaries/reports about the meeting calls of the Doug@50 group were recovered and published again after the original “Journal” was shut down without the Internet Archive having a copy. Thanks to the multi-dimensional graph navigation tool and editor, the list of hypertext publications was recreated in a significantly improved form. The observation of online “collaboration”/conversation groups unfortunately didn’t lead to constructive, practical project work. Frode Hegland published “The Future of Text – A 2020 Vision”, which also contains an early, modified version of the description of a decent hypertext system.
Copyright (C) 2021 Stephan Kreutzer. This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.