Let me try to briefly describe a certain future of text that has largely been abandoned since the advent of Word, the Web and DTP. In his book “Track Changes”, Matthew G. Kirschenbaum reconstructs the otherwise largely forgotten history of word processing: in contrast to today’s use of the term, it initially referred to a model for organizing the office, then to electronic typewriters and later to a wide array of software packages reflecting every imaginable combination of generally useful features and affordances for manipulating text on a computer. From print-perfect corporate letters to authors revising their manuscripts over and over again instead of relying on the services of an editor, electronic writing had to develop around purely textual aspects because of the pressing hardware limitations of the time. Naturally, the early hypertext pioneers expected a new era of powerful tools and instruments that would augment reading, writing and curation far beyond what humankind had built for itself for that purpose so far. Today we know that this is not the future that came to pass.
Text by its very nature is a universal cultural technique – and so must be the tools and conventions involved in its production and consumption. Consider a whole infrastructure for text composed of standards, capabilities, formats and implementations that follow a particular architecture, analogous to POSIX, the OSI reference model and the DIKW pyramid. Such a system would need to be organized in separate layers specifically designed to bootstrap semantics from byte order and character encoding, advancing to syntactical format primitives and up to the functional meaning of a portion of text. Similar to REST with its HATEOAS, or to what XHTML introduced to Web browsers, overlays of semantic instructions would drive capabilities, converters, interface controls and dynamic rendering in an engine that orchestrates the synthesis of such interoperable components. Users could customize their installation quite flexibly or simply import preexisting settings from a repository maintained by the community – text processing a little like Blender with its flow-/node-based approach.
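To make the layered idea a little more concrete, here is a minimal sketch of such a pipeline: bytes are decoded into characters, segmented into syntactic primitives, annotated with a semantic overlay, and each annotated span is then handed to a pluggable capability. Every name in it (Span, decode_layer, CAPABILITIES and so on) is hypothetical and only meant to suggest how semantics could be bootstrapped layer by layer, not how any existing implementation works.

```python
# Hypothetical sketch of a layered text pipeline:
# bytes -> characters -> syntactic primitives -> semantic overlay -> capabilities.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Span:
    start: int
    end: int
    role: str  # functional meaning of the portion of text, e.g. "heading", "body"

def decode_layer(raw: bytes, encoding: str = "utf-8") -> str:
    """Lowest layer: turn a byte sequence into characters."""
    return raw.decode(encoding)

def syntax_layer(text: str) -> list[str]:
    """Middle layer: split into format primitives (here simply blank-line paragraphs)."""
    return [p for p in text.split("\n\n") if p.strip()]

def semantic_overlay(paragraphs: list[str]) -> list[Span]:
    """Overlay layer: declare functional meaning separately from the text itself."""
    spans, offset = [], 0
    for i, p in enumerate(paragraphs):
        role = "heading" if i == 0 else "body"
        spans.append(Span(offset, offset + len(p), role))
        offset += len(p) + 2  # account for the paragraph separator
    return spans

# Capability registry: users (or a community repository) plug in handlers per role.
CAPABILITIES: dict[str, Callable[[str], str]] = {
    "heading": lambda s: s.upper(),
    "body": lambda s: s,
}

def render(raw: bytes) -> str:
    text = decode_layer(raw)
    spans = semantic_overlay(syntax_layer(text))
    return "\n\n".join(CAPABILITIES[sp.role](text[sp.start:sp.end]) for sp in spans)

if __name__ == "__main__":
    sample = "A Future of Text\n\nSemantics are bootstrapped layer by layer.".encode("utf-8")
    print(render(sample))
```

The point of the sketch is only the separation of concerns: each layer knows nothing about the ones above it, and the capability registry is the place where a reader’s own preferences or community-maintained settings would be plugged in.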
Is this the insanity of a madman? Maybe, but we have seen variations of this working before, with some bits and pieces still in operation here and there. This is not rocket science and it is not too hard; it’s just a lot to do, and few are actively contributing because there’s no big money in foundational text technology anymore. By now, much better hypertext and hypermedia tools are urgently needed as an institutional and humanitarian cause. The future of text and its supporting infrastructure can’t be a single product, trapped in a specific execution environment, corrupted by economic interests or restricted by legal demands. Instead, imagine a future in which authors publish directly into the Great Library of everything that has ever been written, constantly curated by crowds of knowledge workers, with a complete local copy available to everybody, presented and augmented in any way any reader could ever wish for. If this is too big for now, a decent system to help with managing your own personal archive and a library of collected canonical works would be a good start as well.
After cheap paper became available and the printing press was invented, it still took many generations of intelligent minds to eventually figure out what the medium could and wanted to be. Likewise, the industrial revolution called for a long and ongoing struggle to establish workers’ rights in order to counter boundless exploitation. With our antiquated mindsets and mentality, there’s a real risk that we simply won’t allow digital technology to realize its full potential in our service for another 100-300 years, and the future of text might be no exception.
Copyright (C) 2019 Stephan Kreutzer. This text is licensed under the GNU Affero General Public License, version 3 or any later version, and/or the Creative Commons Attribution-ShareAlike 4.0 International.