Archive of Completed Activities
- 2021-12-05T01:02:00Z: Develop a Web “browser”/viewer
- 2021-11-12T19:56:00Z: JTeroStA
- 2021-10-31T23:22:00Z: TeroStA
- 2021-07-19T11:11:00Z: Develop “tero”
- 2021-05-09T19:58:00Z: Develop an independent NoFlo runner/environment in C++
- 2021-04-24T16:50:00Z: Develop tool to pivot XML element hierarchy
- 2021-06-03T20:52:00Z: Transformation of a hierarchical tree data source into a multidimensional graph
- Extract structure paths
- 2021-04-24T16:50:00Z: Split source into its paths/fields/entries/elements
- 2021-06-03T20:52:00Z: Generate GraphML
- 2021-04-07T00:22:00Z: Add support for semantics to the StAX implementation
- 2021-03-07T19:11:00Z: Develop a WordPress retriever using RSS *
- 2021-02-19T23:07:00Z: Develop a scanner to find existing YouTube video IDs as an example for combinatorics/permutation
- 2021-01-31T13:18:00Z: Expand the pattern-server into a generic forms server by incorporating sliders for polling/voting
- 2021-01-17T23:33:00Z: Develop a workflow which extracts the namespaces from an XML input file and can automatically apply corresponding/configured XSL transformations (“ViewSpecs”) based on the semantics specified by the namespace identifiers
- 2021-01-03T17:44:00Z: Make some data from a feed or source flow into the slots of a dashboard *
- 2020-12-23T11:42:00Z: Watch/listen-to all videos by “Augment Engelbart”
- 2020-12-22T23:50:00Z: Start a dashboard UI prototype
- 2020-12-12T23:56:00Z: Expand hyperdex_1 into an editor
- 2021-06-20T12:25:00Z: Develop an interpreter for the BAC Language (video)
- Parser/interpreter + boilerplate
- Basic math operations
- Variables, loops
- Input forms
- Conditions
- Escaping special characters
- 2021-03-13T20:54:00Z: File input/output
- 2021-03-14T16:25:00Z: Standby/forms management
- 2021-04-10T22:50:00Z: String operations
- 2021-06-20T12:25:00Z: read-eval-print
- Change parser to a memory-/random-access-based one
- Interactive step-through debugger
- 2020-10-29T00:09:00Z: Expand the forms server into a wiki
- Generate target formats for the Summaries of the Doug@50 Calls
- 2020-08-27T23:28:00Z: Develop a generic template-based forms server (video) *
- 2020-08-11T00:05:00Z: Develop a pattern generator app based on a configurable pattern template
- 2020-11-15T00:47:00Z: Expand typed/multidimensional graph navigation prototype into a regular + native variant (video)
- 2020-07-27T21:33:00Z: Watch/listen-to all videos by “TogetherTec”
- 2020-07-08T00:11:00Z: Automated workflow to aid with audio/video annotation and summarization
- 2020-07-19T01:00:00Z: Develop a Progressive Web App to track the time of speakers in a meeting (source code)
- 2020-06-26T01:23:00Z: Develop a tool that embeds images into XHTML, so the latter can be passed around as a single standalone file.
- 2020-06-20T01:07:00Z: Automatically generate target formats (EPUB, PDF) of the meeting notes of the working group for producing the Peeragogy Monthly Wrap
- 2020-06-21T21:03:00Z: Insert the meeting notes of the working group automatically into the Peeragogy Monthly Wrap
- 2020-06-17T17:38:00Z: Automatically convert the DigLife directory of decentralization projects into the Murmurations format
- 2020-06-13T23:09:00Z: Enable the graph navigation interface to render typed/multidimensional graphs (video)
- 2020-04-05T13:49:00Z: Port Streaming API implementation for CSV in Java over to C++
- 2020-03-28T22:53:00Z: Develop an audio messaging service (video)
- 2020-03-16T23:28:00Z: Develop a Streaming API implementation for CSV in Java
- 2020-02-27T00:40:25Z: Add Tufte-LaTeX to the Peeragogy Monthly Wrap generator
- 2020-02-18T17:31:10Z: Automatically generate target formats (EPUB, PDF) of the Peeragogy Monthly Wrap (output for each month and all-time)
- 2020-02-13T18:11:00Z: Watch/listen-to all videos in the “Independent Publishers of New England” YouTube playlist of uploads
- 2020-01-16T16:43:00Z: Watch/listen-to all videos in the “Independent Publishers of New England” live session recordings YouTube playlist
- 2020-01-05T15:23:00Z: Audio recorder that uses HTML5 device access permissions
- 2019-12-24T17:41:00Z: Develop software to manage the “monthly mixtape” (video)
- 2019-12-06T12:39:14Z: Write about the future of text (with an additional glossary for contextualization)
- 2020-01-04T12:41:00Z: Read “Peeragogy Handbook” version 3
- Reach 20% *
- 2019-12-06T17:00:00Z: Reach 40% *
- 2019-12-17T18:42:00Z: Reach 60% *
- 2019-12-28T23:59:00Z: Reach 80% *
- 2020-01-04T12:41:00Z: Reach 100% *
- 2020-01-07T02:13:00Z: Watch/listen-to all videos by “The Peeragogy Handbook”
- 2019-11-17T19:22:00Z: Reach 20% *
- 2019-11-27T19:00:00Z: Reach 40% *
- 2019-12-16T15:55:35Z: Reach 60% *
- 2019-12-26T22:22:00Z: Reach 80% *
- 2020-01-07T02:13:00Z: Reach 100% *
- 2019-11-10T21:09:00Z: Convert/prepare the DigLife directory of decentralization projects
- 2019-11-01T20:54:00Z: Develop SOS Server Prototype 1 *
- 2019-09-10T01:16:00Z: Expand GBA prototype-2
- 2019-09-05T19:32:00Z: Add user management back into the GBA prototype (video)
- Change it into directed “social asking & answering”
- 2019-08-30T20:09:00Z: Change it into a generic hierarchical solution (video)
- 2019-09-03T22:50:00Z: Change it into a generic hierarchical solution with configurable levels (video)
- Expand it into a generic graph navigation solution
- Add imports/exports to the generic graph navigation solution
- Write an importer and exporter for GraphML
- Write a generic configurable importer and exporter for XML-based formats *
- Add a graph visualization generator with the frontend prototypes from SOS as renderers
- 2019-09-10T01:16:00Z: Build a courseware/dialog system (video)
- Change it into a system for threaded conversation
- 2019-08-21T21:24:00Z: Curate the “CC YT Live DDKR” call recording towards hypermedia
- 2019-07-30T21:57:00Z: Set up automatic packaging for the “free-scriptures” repository
- 2019-07-30T20:56:00Z: Add support for non-milestoned <verse/>s in OSIS input files for osis2haggai1
- 2019-08-01T23:24:00Z: Build the next GBA prototype iteration (video)
- 2019-07-27T13:00:00Z: Update my public key
- 2019-10-07T00:22:00Z: Automatically convert articles on Wakelet to PDF for print
- 2019-08-20T01:11:00Z: Workflow for automatic download, preparation and transformation
- 2019-10-06T14:02:00Z: Recursive download *
- 2019-10-06T19:10:00Z: The Wakelet API for the website likely uses pagination, which needs to be supported
- 2019-10-07T00:22:00Z: Actual PDF generation via LaTeX
- 2019-06-22T23:21:00Z: Expand the one-time-downloader *
- 2019-06-29T08:11:00Z: Read “Local-first software” *
- 2019-07-27T16:07:00Z: Translate haggai2latex10.xsl to a “xhtml_to_latex_2.xsl” stylesheet
- First, rough translation
- 2019-06-20T20:18:00Z: Figure out how to do links/URLs well/better in PDF for print via LaTeX from an XHTML source
- 2019-06-09T10:51:00Z: Autocopy for the “Text Position Retriever 1” *
- 2020-01-12T03:40:00Z: Develop a CSV parser that converts to XML *
- 2019-05-30T13:02:00Z: Set up boilerplate code: copy, rename, etc.
- 2019-05-30T16:11:00Z: Theoretical introduction/discussion of Information Theory + Encoding, Patterns, Domain-Specific Languages, Parsers & System Theory
- 2020-01-12T03:40:00Z: Implement parser
- 2019-06-21T23:23:00Z: Advance Hyperglossary *
- Call file_picker_1 for input files and output directory selection
- Extract glossary definitions
- 2019-05-15T23:05:00Z: Migrate DTD entity resolving to the digital_publishing_workflow_tools package for XHTML 1.0 Strict and XHTML 1.1
- 2019-05-16T23:36:00Z: Filter target text XHTML file
- 2019-05-16T21:32:00Z: Avoid DTD entity resolving for generic XML concatenation in xml_concatenator_1
- 2019-05-16T23:36:00Z: Concatenate/combine target file and glossary
- 2019-05-16T23:56:00Z: Generate final output result file (rendering/visualization)
- 2019-05-22T21:24:00Z: Localization strings (English and German)
- 2019-05-23T23:33:00Z: Package and publish the release
- 2019-06-09T10:51:00Z: Preserve title from the base text file to the output
- 2019-06-09T10:51:00Z: Change color of term occurrences from “PKE-gold/-orange” to “Doug@50-light-blue”
- 2019-06-09T10:51:00Z: Preserve headings from the base text file to the output
- 2019-06-21T23:23:00Z: Record a video and publish it
- 2019-06-29T21:20:00Z: Allow to keep XHTML definition lists in the base text input file *
- 2019-05-25T10:42:00Z: Expand the “Change Tracking Text Editor 1” into a full, standalone text editor *
- Let the user decide if a new text should be created or an existing one opened
- Call file_picker_1 for input file and output directory selection
- For existing files, create a backup and, if it is a text history file, reconstruct the latest version
- Call change_tracking_text_editor_1
- 2019-05-14T21:10:00Z: Create and call change_history_concatenator_1
- 2019-05-18T00:11:00Z: Generate output files (concatenated, optimized + visualizations)
- 2019-05-20T23:23:00Z: Generate visualization of the unconcatenated, unoptimized output
- 2019-05-24T22:02:00Z: Check localization strings and code checkin
- 2019-06-08T22:42:00Z: Testing
- 2019-05-25T10:42:00Z: Package and publish the new release
- 2019-09-25T22:25:00Z: Fix bug of additional whitespace in output *
- 2019-06-22T13:09:00Z: Watch/listen-to all footage of “BBS: The Documentary”
- Reach 20% *
- Reach 40% *
- Reach 60% *
- 2019-05-28T15:00:00Z: Reach 80% *
- 2019-06-22T13:09:00Z: Reach 100% *
- 2019-06-02T14:51:00Z: Write “The Work Is the Work” *
- Write first draft
- 2019-06-02T14:31:00Z: Edit and publish
- 2019-06-02T14:51:00Z: Create hyperglossarized version of the text
- Expand the topic to online audio/video conferences and conversation communities
- Propose ways for introducing practical implementation
- 2019-08-10T22:24:29Z: Convert all my texts from WordPress to resource files *
Activity Description
Activity items marked with an asterisk (*) at their end have a corresponding description in the list below.
Descriptions
35 pages.
70 pages.
105 pages.
140 pages.
175 pages.
Partially based on wordpress_rss_to_html_1.xsl.
Title: Dashboard Initiative
Project summary: For data sources/feeds by the Peeragogy project, a dashboard prototype should be developed in order to provide an overview of the progress of activities, new entries and what has been done/collected so far, indicating what’s currently going on.
Purpose of the project: The goal of the project is not so much to eventually operate a dashboard for Peeragogy, but to develop the machinery as a white-label, libre-freely licensed, universal, generic solution which can then be applied in other situations as well. The focus is on bootstrapping semantics, automated infrastructure and user interface.
Project URL: https://peeragogy.org
Project topics & categories: Software / Apps / AI / IT
What are you looking for help with? Beta Testing, Critiquing, Curation, Data Organization, Designing, Feedback, Prototyping
What “roles/personas” are you looking for? Designer, Developer, Project Manager
Help needed headline: UI/UX designers who want to prototype, data curators, beta testers
Details of who you are looking for? It’s an open invitation and learning exercise/exploration which is supposed to also produce useful results.
Project contact email: https://hypertext-systems.org/contact.php
Project open to: Sharing Publicly
How are you currently able to compensate contributors? Looking for volunteers
Date which this opportunity expires: 2021-01-16
Anything else you want to add? The coordination/progress of the project is discussed in a chat provided by Peeragogy. An earlier overview/landing page exists on https://github.com/Peeragogy/Peeragogy-Dashboard. A more recent proposal is this: https://github.com/Peeragogy/PeeragogicalActionReviews/issues/1#issuecomment-679358961. Other UI preparations/experiments were conducted here: https://gitlab.com/groupware-systems/dashboard-experimental and there are some other related components and inspirations. The Peeragogy project has several data sources of its own in varying quality, and the plan is to improve some of these so they’ll be available for testing/experimentation.
100 pages.
200 pages.
300 pages.
400 pages.
500 pages.
Mainly to help with entering new pattern descriptions that follow a particular pattern language template, but it can also be used for other data entry tasks. It’s a little bit like Google Forms, but it also supports technical semantics.
81 pages.
162 pages.
242 pages.
325 pages.
403 pages.
484 pages.
565 pages.
645 pages.
726 pages.
806 pages.
Many of today’s JSON APIs on the Web embed HTML in their JSON; after such a response is retrieved, the conversion from JSON to XML leaves the embedded HTML double-escaped. This universal tool should de-escape the inner XML based on the element hierarchy path (similar to XPath), like the Wakelet downloader/converter does.
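A minimal sketch of this de-escaping in Java with StAX, assuming the affected element is addressed by a configured slash-separated path; the path, element names and example input are made up for illustration and are not taken from the actual Wakelet downloader/converter, and attributes/namespaces are ignored:

```java
import java.io.StringReader;
import java.util.ArrayDeque;
import java.util.Deque;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class InnerXmlDeEscaperSketch
{
    // Hypothetical/configured path of the element whose text holds escaped HTML.
    private static final String TARGET_PATH = "/response/card/content";

    public static void main(String[] args) throws Exception
    {
        // A JSON-to-XML result in which the embedded HTML ended up escaped as character data.
        String input = "<response><card><content>&lt;p&gt;Hello &amp;amp; welcome&lt;/p&gt;</content></card></response>";

        XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(new StringReader(input));
        Deque<String> path = new ArrayDeque<>();
        StringBuilder output = new StringBuilder();

        while (reader.hasNext())
        {
            switch (reader.next())
            {
            case XMLStreamConstants.START_ELEMENT:
                path.addLast(reader.getLocalName());
                output.append("<").append(reader.getLocalName()).append(">");
                break;
            case XMLStreamConstants.CHARACTERS:
                if (TARGET_PATH.equals("/" + String.join("/", path)))
                {
                    // The parser reports the escaped character data as the literal markup
                    // string, so writing it out verbatim turns it back into real elements.
                    output.append(reader.getText());
                }
                else
                {
                    output.append(escape(reader.getText()));
                }
                break;
            case XMLStreamConstants.END_ELEMENT:
                output.append("</").append(reader.getLocalName()).append(">");
                path.removeLast();
                break;
            }
        }

        reader.close();
        // Prints: <response><card><content><p>Hello &amp; welcome</p></content></card></response>
        System.out.println(output);
    }

    private static String escape(String text)
    {
        return text.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}
```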
51 pages.
101 pages.
151 pages.
201 pages.
251 pages.
26 pages.
52 pages.
77 pages.
103 pages.
128 pages.
69 pages.
138 pages.
207 pages.
276 pages.
345 pages.
54 pages.
108 pages.
162 pages.
216 pages.
269 pages.
27 entries.
53 entries.
79 entries.
105 entries.
131 entries.
Combine the generic GBA prototype 3 with the frontend of the SOS Grid prototype 3 and the boilerplate features of note_system into a common basis to build from. Video 1, video 2.
It should work with my variant of the SOS format.
Includes export of private posts and comments as well as preparing redirects for WordPress URLs.
The format, which includes references to sub “wakes” as well as the “cards” with their content, can be downloaded recursively.
I want to use it for convenient audio e-mail messaging, so that I can upload a file via a web interface, a little bit like ownCloud/Nextcloud. It could/should get a ReST API to become a hypertext/hypermedia server that delivers resources without exposing the internal specifics of their storage, effectively acting as a resolver/mapping/abstraction. See the initial commit.
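A minimal sketch of the resolver idea with the JDK’s built-in com.sun.net.httpserver, assuming messages are addressed by an opaque ID under a /message/ path which the server maps to its internal storage; the path, storage directory and file extension are illustrative assumptions, not the actual service:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class AudioMessageResolverSketch
{
    public static void main(String[] args) throws IOException
    {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // GET /message/<id> resolves an opaque ID to an internal file, so the URL
        // doesn't expose the specifics of the storage. A real implementation must
        // validate the ID (no "..", no separators) before touching the filesystem.
        server.createContext("/message/", exchange -> {
            String id = exchange.getRequestURI().getPath().substring("/message/".length());
            Path file = Paths.get("storage", id + ".ogg");

            if (!Files.exists(file))
            {
                exchange.sendResponseHeaders(404, -1);
                exchange.close();
                return;
            }

            byte[] payload = Files.readAllBytes(file);
            exchange.getResponseHeaders().set("Content-Type", "audio/ogg");
            exchange.sendResponseHeaders(200, payload.length);

            try (OutputStream body = exchange.getResponseBody())
            {
                body.write(payload);
            }
        });

        server.start();
    }
}
```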
Used “xhtml_to_latex_2.xsl” to generate a printed booklet for better reading/consumption convenience.
Allow to automatically copy the current span selection to the clipboard on completion as an option.
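A minimal sketch of the clipboard part in plain AWT; the method name and the wiring to the span-selection completion are assumptions for illustration, not the actual tool’s code:

```java
import java.awt.Toolkit;
import java.awt.datatransfer.StringSelection;

public class AutocopySketch
{
    // Copies the given span selection text to the system clipboard.
    public static void copySelectionToClipboard(String selectedSpanText)
    {
        StringSelection selection = new StringSelection(selectedSpanText);
        Toolkit.getDefaultToolkit().getSystemClipboard().setContents(selection, selection);
    }

    public static void main(String[] args)
    {
        copySelectionToClipboard("example span selection");
    }
}
```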
38 pages.
76 pages.
112 pages.
A csv2xml converter already exists in the automated_digital_publishing package, but it’s buggy. Therefore, I should probably write my own with proper parsing techniques based on an implicit call tree instead of trying to fix the current implementation with its state machine strategy. I started an attempt some time ago, but failed, potentially because I didn’t recognize the linebreak as the element of highest priority, or for some other reason. Unfortunately, I can’t find that code any more and will likely have to start from scratch again.
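A minimal sketch of such an implicit-call-tree (recursive descent) parser, assuming unquoted CSV input; the class and output element names are illustrative, and the linebreak is treated as the terminator of the highest-priority unit, the record:

```java
import java.io.IOException;
import java.io.PushbackReader;
import java.io.StringReader;

public class Csv2XmlSketch
{
    public static void main(String[] args) throws IOException
    {
        String csv = "a,b,c\n1,2,3\n";
        PushbackReader reader = new PushbackReader(new StringReader(csv));
        StringBuilder xml = new StringBuilder("<table>\n");

        // The linebreak terminates the unit of highest priority: the record/row.
        while (parseRecord(reader, xml))
        {
            // keep parsing rows until the input is exhausted
        }

        xml.append("</table>\n");
        System.out.print(xml);
    }

    // Parses one record (row); returns false if the input is exhausted.
    private static boolean parseRecord(PushbackReader reader, StringBuilder xml) throws IOException
    {
        int character = reader.read();

        if (character < 0)
        {
            return false;
        }

        reader.unread(character);
        xml.append("  <row>\n");

        // Fields are parsed within the record, forming the implicit call tree.
        while (parseField(reader, xml))
        {
            // continue until the row's linebreak or the end of the input
        }

        xml.append("  </row>\n");
        return true;
    }

    // Parses one field; returns true if another field follows in the same record.
    private static boolean parseField(PushbackReader reader, StringBuilder xml) throws IOException
    {
        StringBuilder field = new StringBuilder();

        while (true)
        {
            int character = reader.read();

            if (character < 0 || character == '\n')
            {
                xml.append("    <cell>").append(escape(field.toString())).append("</cell>\n");
                return false;
            }

            if (character == '\r')
            {
                continue;
            }

            if (character == ',')
            {
                xml.append("    <cell>").append(escape(field.toString())).append("</cell>\n");
                return true;
            }

            field.append((char)character);
        }
    }

    private static String escape(String text)
    {
        return text.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }
}
```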
70 pages.
140 pages.
209 pages.
279 pages.
348 pages. It’s the bibliography, a list of hypertext system implementations, the glossary, the people index and the word index.
Recently, completed top-level activities were removed from the main output file (in order to keep focus on actual activities, not loads of historical entries). Still, there needs to be a way to conveniently view the completed activities separately.
Visual rendering of “waiting”/“paused”/“blocked” (red), “abandoned” (red, crossed out), “rejected”/“cancelled” (black, crossed out) would be nice.
I did a Hyperglossary prototype (code, live) for Doug@50 (the 50th anniversary of Douglas Engelbart’s Great 1968 Demo), but that’s trapped in the browser (the Same-Origin Policy restricts it to a server) and limited to WordPress as a source. Therefore a more generalized Hyperglossary capability is needed, preferably as a standalone program that can apply any glossary in a compatible format onto any target text in a compatible format.
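A minimal sketch of the core operation, assuming a simple term-to-definition glossary map and plain string replacement; the real tool would operate on compatible (X)HTML glossary and base text input formats and on parsed text nodes rather than raw markup, so the names and data here are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HyperglossarySketch
{
    public static void main(String[] args)
    {
        // Illustrative glossary; in the real tool this would come from a glossary input file.
        Map<String, String> glossary = new LinkedHashMap<>();
        glossary.put("hypertext", "Text with references/links to other text.");

        String baseText = "<p>Engelbart demonstrated hypertext in 1968.</p>";
        String result = baseText;

        // Wrap every term occurrence in a link to its glossary entry.
        // Note: naive string replacement can also match inside tags/attributes;
        // a proper implementation would only touch the parsed text nodes.
        for (Map.Entry<String, String> entry : glossary.entrySet())
        {
            String term = entry.getKey();
            String link = "<a href=\"#glossary-" + term + "\" title=\"" + entry.getValue() + "\">" + term + "</a>";
            result = result.replace(term, link);
        }

        System.out.println(result);
    }
}
```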
Towards the use of the same input file for both the glossary input and the base text input, thereby enabling the hyperglossarization of existing XHTML webpages that come with an unaugmented glossary.
In a practical scenario, the last release of the “Change Tracking Text Editor 1” requires the user to manually copy files around, so a new version should introduce dialogs to select input sources and output targets.
When opening a plain text file that ends with a linebreak and immediately leaving/saving, the import adds additional whitespace to the add instruction.
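One possible way to avoid this on import would be to drop a single trailing linebreak before it becomes part of the initial add instruction; a minimal sketch with an illustrative helper name, not the actual editor code:

```java
public class ImportNormalizationSketch
{
    // Drops a single trailing linebreak so it doesn't end up as additional
    // whitespace in the initial add instruction of the text history.
    public static String normalizeImportedText(String text)
    {
        if (text.endsWith("\r\n"))
        {
            return text.substring(0, text.length() - 2);
        }

        if (text.endsWith("\n") || text.endsWith("\r"))
        {
            return text.substring(0, text.length() - 1);
        }

        return text;
    }

    public static void main(String[] args)
    {
        System.out.println("[" + normalizeImportedText("line of text\n") + "]");
    }
}
```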
30 entries.
60 entries.
90 entries.
120 entries.
151 entries. This chunk included the FidoNet Tapes, which count as one entry.
This is a discussion of the quote “the conversation is the work” by David Whyte, often cited by/in conversation communities as a description of what they do.
56 pages.
112 pages.
169 pages.
225 pages.
281 pages.
Only read it because Gyuri Lajos said that it is important for understanding his approach. As suspected, it hardly contains anything I didn’t already know before reading it.
82 pages.
163 pages.
245 pages.
326 pages.
407 pages.
This is a one-time manual process as I’m too lazy to adjust the wordpress_retriever_1 workflow a little to incorporate the publication date in the file name and import the text to a revision history.