Archive of Completed Activities

Activity Description

Click an activity item that ends with an asterisk (*) to view its description.


35 pages.

70 pages.

105 pages.

140 pages.

175 pages.

Partially based on wordpress_rss_to_html_1.xsl.

Title: Dashboard Initiative
Project summary: For data sources/feeds by the Peeragogy project, a dashboard prototype should be developed in order to provide an overview over the progress of activities, new entries, what has been done/collected so far, indicating what’s currently going on.
Purpose of the project: The goal of the project is not so much to eventually operate a dashboard for Peeragogy, but to develop the machinery as a white-label, libre-freely licensed, universal, generic solution which can then be applied in other situations as well. The focus is on bootstrapping semantics, automated infrastructure and user interface.
Project URL:
Project topics & categories: Software / Apps / AI / IT
What are you looking for help with? Beta Testing, Critiquing, Curation, Data Organization, Designing, Feedback, Prototyping
What “roles/personas” are you looking for? Designer, Developer, Project Manager
Help needed headline: UI/UX designers who want to prototype, data curators, beta testers
Details of who you are looking for? It’s an open invitation and learning exercise/exploration which is supposed to also produce useful results.
Main project image:
Project contact email:
Project open to: Sharing Publicly
How are you currently able to compensate contributors? Looking for volunteers
Date which this opportunity expires: 2021-01-16
Anything else you want to add? The coordination/progress of the project is discussed in a chat provided by Peeragogy. An earlier overview/landing page exists on A more recent proposal is this: Other UI preparations/experiments were conducted here: and there’s some other related components and inspirations. The Peeragogy project has several data sources of its own in varying quality, and the plan is to improve some of these so they’ll be available for testing/experimentation.

100 pages.

200 pages.

300 pages.

400 pages.

500 pages.

Mainly intended to help with entering new pattern descriptions that follow a particular pattern language template, but it can also be used for other data entry tasks. It’s a little bit like Google Forms, but with support for technical semantics.

81 pages.

162 pages.

242 pages.

325 pages.

403 pages.

484 pages.

565 pages.

645 pages.

726 pages.

806 pages.

Many of today’s JSON APIs on the Web embed HTML in their JSON, and when such a response is retrieved, the conversion from JSON to XML leaves the embedded HTML double-escaped. This universal tool should de-escape the inner XML based on the element hierarchy path (similar to XPath), as the Wakelet downloader/converter does.
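A minimal sketch of such path-based de-escaping, using Python’s ElementTree and its limited path expressions as a stand-in for the XPath-like hierarchy path; the function name and the wrapper element are illustrative and not taken from the Wakelet converter:

```python
import xml.etree.ElementTree as ET

def deescape_at_path(root, path):
    # Re-parse the double-escaped text of every element matching the
    # path and replace it with real child nodes.
    for elem in root.findall(path):
        if not elem.text:
            continue
        # Once the outer XML parser has removed one level of escaping,
        # the remaining text is well-formed markup; wrapping it in a
        # single container element makes it parseable on its own.
        inner = ET.fromstring("<wrapper>" + elem.text + "</wrapper>")
        elem.text = inner.text
        for child in inner:
            elem.append(child)
    return root

# Example: an XML document converted from JSON, with double-escaped HTML.
doc = ET.fromstring(
    "<feed><item><body>&lt;p&gt;Hello &amp;amp; welcome&lt;/p&gt;</body></item></feed>")
deescape_at_path(doc, "./item/body")
print(ET.tostring(doc, encoding="unicode"))
# → <feed><item><body><p>Hello &amp; welcome</p></body></item></feed>
```

A real tool would additionally need to handle HTML that isn’t well-formed XML, for instance by running it through an HTML parser first.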

51 pages.

101 pages.

151 pages.

201 pages.

251 pages.

26 pages.

52 pages.

77 pages.

103 pages.

128 pages.

69 pages.

138 pages.

207 pages.

276 pages.

345 pages.

54 pages.

108 pages.

162 pages.

216 pages.

269 pages.

27 entries.

53 entries.

79 entries.

105 entries.

131 entries.

Combine the generic GBA prototype 3 with the frontend of the SOS Grid prototype 3 and the boilerplate features of note_system into a common basis to build from. Video 1, video 2.

It should work with my variant of the SOS format.

Includes export of private posts and comments as well as preparing redirects for WordPress URLs.

The format which includes references to sub “wakes” as well as the “cards” with their content can be downloaded recursively.

I want to use it for convenient audio e-mail messaging, so that I can upload a file via a web interface, a little bit like ownCloud/Nextcloud. It could/should get a REST API to become a hypertext/hypermedia server that delivers resources without exposing the internal specifics of their storage, effectively acting as a resolver/mapping/abstraction layer. See the initial commit.

Used “xhtml_to_latex_2.xsl” to generate a printed booklet for better reading/consumption convenience.

Allow automatically copying the current span selection to the clipboard on completion, as an option.

38 pages.

76 pages.

112 pages.

A csv2xml converter already exists in the automated_digital_publishing package, but it’s buggy. Therefore, I should probably write my own with proper parsing techniques based on an implicit call tree instead of trying to fix the current implementation with its state machine strategy. I started an attempt some time ago but failed, potentially because I didn’t recognize the linebreak as the element of highest priority, or for some other reason. Unfortunately, I can’t find that code any more and will likely have to start again from scratch.
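A minimal sketch of what such an implicit-call-tree (recursive descent) parser could look like, with the linebreak/record level at the top of the call tree as suggested above; all names are illustrative and none of this is the automated_digital_publishing code:

```python
def parse_csv(text):
    # Top level of the call tree: the record separator (linebreak)
    # has the highest priority, so records are parsed first and each
    # record parses its own fields.
    pos = 0
    records = []
    while pos < len(text):
        record, pos = parse_record(text, pos)
        records.append(record)
    return records

def parse_record(text, pos):
    fields = []
    while True:
        field, pos = parse_field(text, pos)
        fields.append(field)
        if pos >= len(text):
            return fields, pos
        if text[pos] == "\n":
            return fields, pos + 1
        pos += 1  # skip the comma

def parse_field(text, pos):
    if pos < len(text) and text[pos] == '"':
        return parse_quoted(text, pos + 1)
    end = pos
    while end < len(text) and text[end] not in ',\n':
        end += 1
    return text[pos:end], end

def parse_quoted(text, pos):
    # Inside quotes, commas and linebreaks lose their special meaning;
    # a doubled quote is an escaped quote character.
    out = []
    while pos < len(text):
        if text[pos] == '"':
            if pos + 1 < len(text) and text[pos + 1] == '"':
                out.append('"')
                pos += 2
                continue
            return "".join(out), pos + 1
        out.append(text[pos])
        pos += 1
    raise ValueError("unterminated quoted field")

def to_xml(records):
    # A real converter would XML-escape the cell content here.
    rows = "".join(
        "<row>" + "".join("<cell>%s</cell>" % f for f in r) + "</row>"
        for r in records)
    return "<table>" + rows + "</table>"

print(to_xml(parse_csv('a,"b,1"\nc,d\n')))
# → <table><row><cell>a</cell><cell>b,1</cell></row><row><cell>c</cell><cell>d</cell></row></table>
```

The point of the structure is that each syntactic level is its own function, so the call stack mirrors the grammar instead of being flattened into state-machine transitions.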

70 pages.

140 pages.

209 pages.

279 pages.

348 pages. That’s the bibliography, a list of hypertext system implementations, the glossary, the people index and the word index.

Recently, completed top-level activities were removed from the main output file (in order to keep focus on actual activities, not loads of historical entries). Still, there needs to be a way to conveniently view the completed activities separately.

Visual rendering of “waiting”/“paused”/“blocked” (red), “abandoned” (red, crossed out), “rejected”/“cancelled” (black, crossed out) would be nice.

I did a Hyperglossary prototype (code, live) for Doug@50 (the 50th anniversary of Douglas Engelbart’s Great 1968 Demo), but it’s trapped in the browser (the Same-Origin Policy restricts it to a server) and limited to WordPress as a source. Therefore, a more generalized Hyperglossary capability is needed, preferably as a standalone program, that can apply any glossary in a compatible format onto any target text in a compatible format.

Towards using the same input file for both the glossary input and the base text input, thereby enabling the hyperglossarization of existing XHTML webpages that come with an unaugmented glossary.
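A minimal sketch of what a standalone hyperglossarization pass could look like, assuming the glossary is given as a term-to-anchor mapping; the names are illustrative and not from the prototype, and a real tool would have to operate on the text nodes of parsed XHTML rather than on the raw markup (a naive regex can match inside tags and attributes):

```python
import re

def hyperglossarize(xhtml_text, glossary):
    # Link the first occurrence of each glossary term in the text to
    # its corresponding glossary entry anchor.
    for term, anchor in glossary.items():
        pattern = re.compile(r'\b%s\b' % re.escape(term))
        xhtml_text = pattern.sub(
            '<a href="#%s">%s</a>' % (anchor, term),
            xhtml_text, count=1)
    return xhtml_text

text = "<p>Hypertext links documents.</p>"
glossary = {"Hypertext": "glossary-hypertext"}
print(hyperglossarize(text, glossary))
# → <p><a href="#glossary-hypertext">Hypertext</a> links documents.</p>
```

With the glossary and the base text in the same input file, the glossary section could first be read into such a mapping and then applied onto the remaining body text.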

The last release of the “Change Tracking Text Editor 1” requires the user in a practical scenario to manually copy files around, so a new version should introduce dialogs to select input sources and output targets.

When opening a plain text file that ends with a linebreak and immediately leaving/saving, the import adds additional whitespace to the add instruction.

30 entries.

60 entries.

90 entries.

120 entries.

151 entries. This chunk included the FidoNet Tapes, which count as one entry.

This is a discussion of the quote “the conversation is the work” by David Whyte, often cited in conversation communities as a description of what they do.

56 pages.

112 pages.

169 pages.

225 pages.

281 pages.

Only reading it because Gyuri Lajos said that it is important for understanding his approach. As suspected, the book hardly contains anything I didn’t already know before reading it.

82 pages.

163 pages.

245 pages.

326 pages.

407 pages.

This is a one-time manual process, as I’m too lazy to adjust the wordpress_retriever_1 workflow a little to incorporate the publication date into the file name and import the text into a revision history.