
I want to study and curate everything that gets submitted in the FTI context, but that’s a slow and tedious process – in part because I don’t have the tools that would help with such an activity. If I kept discussing every relevant detail, it would eat away at the already very limited time to actually build those tools. Furthermore, it seems that we’re not creating a single common system/infrastructure together (as NLS was), but continue to work on our different, isolated tool components. That’s perfectly fine, but depending on the focus, the approach has to adjust accordingly. At the same time, participants submit a load of contributions just as I do, but using the already existing human/technical systems as a bad starting point for bootstrapping their improvement leads to huge problems that prevent me from doing useful work with the material. Hearing Christopher Gutteridge suggest the strategy (at 21:26) of “just doing it” to demonstrate what the future of text could look like and to solve our own problems with text work along the way, I think I should just work on my own publications and system, with the extended mandate of pretending that sources similar to the jrnl (context) would be something of practical interest, not just for theoretical hypertext studies and mere passive consumption. As this strategy is what I would do on my own anyway, it sends me back to square one, where I left off a year ago in the hope of finding people in the hypertext community who want to bootstrap a capability infrastructure (yes, I know, a vague term inviting many different interpretations). Considering this approach with the increased difficulty resulting from the use of WordPress and web/browser components immediately paralyzed me for a whole week, but it’s not the first time I’ve encountered the overwhelming “curse of complexity”, so I have my methods for dealing with the phenomenon and progress can be made. Solving the many problems one after another isn’t too difficult after all; it’s just a lot of work that takes its time.

I also grew increasingly suspicious about whether the Global Challenges Collaboration really wants to do something Engelbartian, despite the theme constantly recurring. I was listening through their archive of recorded conversations, and those reveal that the group is far from being interested in TimeBrowser capabilities as a supporting function, or in solving complex, urgent world problems, or in collaborating in ways that didn’t emerge from their conversations. I’m unable to engage, as their open web channels (not Zoom, not Facebook) are abandoned, my submissions go to waste and their material remains largely unavailable. Even their more practical action group is no exception; just think about what one can ever hope to actually do when considering what follows from 53:12 of the recent “GCC Sunday Unblocking” or from 2:15:10 of “GCC open space Thursday”.

So by now, I’ve written a few texts of my own and don’t need material created by others any more, and even though they’re not dialogue, I can pretend they are, especially by faking a second voice on the jrnl that’s compatible and collaborating, in contrast to other non-cooperating entities/sources on the network.

As a first step, I added the xml_dtd_entity_resolver_1 capability/component/tool, which is able to resolve named DTD entities either via local DTD resolution (a local catalogue) or via a simple replacement dictionary. XML files that make use of DTDs (part of the XML spec) can be converted into their expanded equivalent for further pure XML processing, without the need to rely on DTDs any more in cases where such a state is favorable. The particular application is the wordpress_retriever_1 workflow, so it can expand the named DTD entities coming from XHTML. A potential hypertext system won’t internally rely on XHTML and DTDs (nowadays only encountered as built-ins in browsers anyway) and needs to convert into formats that might not be aware of DTDs, XHTML or XML at all. As XML and DTDs are different formats, it doesn’t make a lot of sense to mix them, be it in their own specifications or in XHTML, because it gets in the way of bootstrapping semantics, which is why I need to get around their conflation.
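Just to sketch the general idea (this is not the actual xml_dtd_entity_resolver_1 code; the class name, the entity selection and the sample input are made up for illustration), the replacement-dictionary variant could look roughly like this in Java, mapping a few XHTML named entities to numeric character references so that the expanded result can be processed as pure XML without the XHTML DTD:

import java.util.LinkedHashMap;
import java.util.Map;

public class EntityDictionaryExpander
{
    public static void main(String[] args)
    {
        // Replacement dictionary: a few XHTML named entities mapped to their
        // numeric character references, so the result stays well-formed XML
        // without any reference to the XHTML DTD.
        Map<String, String> dictionary = new LinkedHashMap<String, String>();
        dictionary.put("&nbsp;", "&#x00A0;");
        dictionary.put("&copy;", "&#x00A9;");
        dictionary.put("&mdash;", "&#x2014;");

        // Sample input as it might come out of the wordpress_retriever_1 step.
        String input = "<p>&nbsp;</p><p>Text &copy; 2018 &mdash; more text.</p>";
        String output = input;

        // Plain textual substitution of every known named entity.
        for (Map.Entry<String, String> entry : dictionary.entrySet())
        {
            output = output.replace(entry.getKey(), entry.getValue());
        }

        System.out.println(output);
    }
}

The catalogue variant would instead hand the parser an org.xml.sax.EntityResolver that points the XHTML public identifier to a local copy of the DTD, so the expansion happens during parsing without any network access.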

Still, as far as the jrnl is concerned, I might deliberately not expand named DTD entities in order to have the XML parser fail, so I can learn which entities are actually used, because the only ones in use right now are &nbsp;s in empty <p>s, and we really should look into where they come from as well as get rid of them. One can legitimately defend the use of DTD entities, but not in this case. I’m pretty sure that these empty paragraphs are used to add vertical space, which is quite a confused practice in itself, and I guess they’re created by pressing the Enter key in the WordPress editor several times, with the &nbsp; there to prevent WordPress from removing the otherwise empty paragraph automatically (or alternatively, that’s how WordPress implements several consecutive Enter linebreaks, for lack of any decent idea of what to do with them). In other words, this renders another piece of WordPress, namely the editor, rather useless, so I wonder what’s left. A decent editor should not allow manual linebreaks.
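To illustrate the failure mode I’d be relying on (a hypothetical stand-alone probe in Java, not part of the actual workflow, with a made-up sample fragment), a plain SAX parse of such a fragment without the XHTML DTD rejects the undeclared entity and reports its position:

import java.io.StringReader;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

public class UndeclaredEntityProbe
{
    public static void main(String[] args) throws Exception
    {
        // A fragment as WordPress might emit it: an "empty" paragraph padded
        // with a named entity that's only declared in the XHTML DTD.
        String fragment = "<post><p>&nbsp;</p></post>";

        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();

        try
        {
            parser.parse(new InputSource(new StringReader(fragment)), new DefaultHandler());
        }
        catch (SAXParseException ex)
        {
            // The parser rejects the undeclared entity and reports where it
            // occurred, which is the information needed to track it down.
            System.out.println("Line " + ex.getLineNumber() + ", column " + ex.getColumnNumber() + ": " + ex.getMessage());
        }
    }
}

The reported message should say something along the lines of the entity being referenced but not declared, which, together with the line/column numbers, points back to the offending spot in the retrieved WordPress output.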

To make my progress a little more usable, I added a starter script to the packager and compiled the prepared download package. I also updated the site to the extent that it at least reflects some of the results via the videos about them, for lack of better ways to present them. In part, that’s to prepare for the Frankfurt Book Fair, in case I want to refer somebody to the effort.

This text is licensed under the GNU Affero General Public License 3 + any later version and/or under the Creative Commons Attribution-ShareAlike 4.0 International.