Summaries of Doug@50 Calls
Preface
After Douglas Carl Engelbart passed away in July 2013, a group of people got together to continue Engelbart’s invisible revolution (the historical retrospective settled on calling the revolution unfinished). With the 50th anniversary of the Great 1968 Demo coming up on 2018-12-09, planning began for a new, modern demo which would inspire and invite such work, research and bootstrapping to be picked up again for our time. After some years had passed, a series of regular online calls was conducted from 2017-11-08 on. What follows are the summaries of a few of them.
Weekly Call 2018-01-31
In this call, we discussed how TopicQuest, IdeaLoom, Author and SymbolSpace could interoperate. At least the first three turned out to be similar in terms of the capabilities they provide, while other components, like a visualization component and maybe translators/converters in between, are still missing.
Weekly Call 2018-02-21
Frode Hegland informed the group that he was invited as a speaker to the 7th International Summit of the Book in Baku, Azerbaijan. We found it unrealistic to prepare a demo in the short time remaining until then. Marc-Antoine Parent deepened our understanding of knowledge maps, and as Frode prepared a proposal for adding new terms to the HyperGlossary, we discussed how the controls for entering data might relate to knowledge maps and what the use of them could be. Loránd Kedves gave an overview of his work, which led to questions about how it might fit into the big picture of programming and linked data.
Weekly Call 2018-03-07
There’s a new Google Spreadsheet with our project list in it. The discussion turned towards more basic system functionality/capabilities, as the main scenarios, HyperGlossary, Socratic Authoring and ViewSpecs, would require them to be in place. URLs specifically were of interest, as they’re used by Christopher Gutteridge’s ultralink. Vint Cerf introduced the need for archiving as an intrinsic system capability. Andrew Pam shared some experience about working with e-mail, from which IdeaLoom especially could benefit. Memento and IPFS were mentioned; we need to look into both. We also realized that our onboarding needs improvement.
Weekly Call 2018-03-14
In preparation for the call, Frode Hegland provided the abstract of his presentation for the upcoming International Summit of the Book 2018. Even before the official start time, some members of the group discussed topics like Slack features and the separation of GUI and data, so that even a chat client like IRC could still look good; such a separation would also power ViewSpecs and key commands.
At the scheduled time, Marc-Antoine Parent explained in more detail the new interoperability format that’s designed to help our components talk to each other, but then Frode pointed out that it seems to be geared towards knowledge graphs, not documents. Marc-Antoine suggested that WebAnnotation could link both together. Furthermore, a lot of structural requirements could be expressed in HTML, as far as documents are concerned. By gaining more understanding about the nature of both perspectives, “media fragments” came into focus. For structural/semantic references in non-HTML documents, the group wondered how to go about it; Utopia Documents was mentioned.
With this broad range of topics on the table, Houria Iderkou, from a project management standpoint, proposed to partition those discussions into subcommittees, as the weekly call tends to attract quite a lot of general, conceptual brainstorming and discussion, but we make little progress on the specific projects/components. There could also be two calls, one for the general discussion and one for working on and coordinating the actual components for the demo. Three months in, we hadn’t made enough progress towards ending up with something presentable. We acknowledged that we still hadn’t decided what the end result for the demo should be. There are the three areas of HyperGlossary, Socratic Authoring and ViewSpecs as suggested by Frode, but as they require different Open Hyperdocument System (OHS) Core infrastructure to be in place, the group should pick one depending on the capabilities we want to show off. On 2018-12-09, the presentation of our Doug@50 effort will only have limited time available anyway. Mark Anderson made us aware that we really should avoid missing the human side, which needs to be a consideration in how we define the desired result for our demo. Houria also reminded us that one key aspect to demonstrate is that we as a group manage to collaborate.
The action item for builders/implementers who are involved in the main direction of this group is to report to Houria if something is blocking them from making progress.
Weekly Call 2018-03-22
This call was shifted from the regularly scheduled Wednesday, 2018-03-21, to Thursday, 2018-03-22.
In response to Houria Iderkou’s request last week, the group tried to determine the topics our efforts could be split into, in preparation for a switch to daily calls with one call every workday. The basis for the discussion was the project list, and after intense consideration of the potential scopes for each day, a weekly plan was completed.
Daily Call 2018-03-26
The topics of the calls on Friday and Wednesday were switched and the Friday call was canceled, so the group didn’t get a chance to discuss and then decide on the capabilities that should be demonstrated on the 50th anniversary.
In preparation for the calls on Monday and Tuesday, Stephan Kreutzer tried to sort out the actual aspects those topics are supposed to cover, as they were framed as a continuation of the earlier “OHS Core” conversation. In the call, it was found that it doesn’t make a lot of sense to discuss system architecture/design/infrastructure as long as it is not clear what capabilities should be implemented for the demo day. It was decided that the Tuesday call tomorrow should focus on exactly this question. Also in response to this fundamental question, Gyuri Lajos put forward a draft on “Bootstrapping and co-evolving OHS” as a way to check the level of interest within the Augmentation Research Community in the possibility of bootstrapping a new OHS kernel.
Daily Call 2018-03-27
The group had difficulty reaching consensus on whether the capabilities for the demo day should come more from the human system side or the tool system side. The argument for the former was that we might want to demonstrate the technological foundation that makes Doug’s concepts like ViewSpecs possible again. The argument for the latter was that it’s not particularly exciting for the potential audience to watch some boring text visualization techniques in 2018, especially as there is huge potential in online collaboration groupware or in solving urgent problems in the way public discourse is conducted. Another question was whether we should target the individual/collaborative knowledge worker or the general population.
Daily Call 2018-03-28
Frode Hegland noticed that participants in the group could be categorized as either interested in making our existing components interoperable, or in favor of building from scratch in order to make an OHS and the needed capabilities real. Timur Shchukin presented the work of his group, including Knowflow and Coflow.
Daily Call 2018-03-30
Today, we primarily discussed mechanisms/concepts for linking/referencing. Christopher Gutteridge is looking at a lot of linking standards. Marc-Antoine Parent re-encouraged us to look at WebAnnotation. In general, it looks like no single mechanism will satisfy all requirements, so we eventually have to support several of them in an OHS.
Daily Call 2018-04-02
At first, Frode Hegland shared his findings in the area of addressability. Marc-Antoine Parent pointed out that the WebAnnotation standard is extensible. Regarding the general notion of addressability, we were thinking about ways to augment conversation/dialogue, as one could ask questions to a virtual avatar and get them answered via voice from previous recordings, either by constructing language with the help of an artificial intelligence or by semantic annotation. YouTube and SoundCloud support a time stamp/code in the URL fragment to reference the start of playback in audio.
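As a small illustration of such timestamp references, here is a rough sketch in TypeScript; the recording URL is invented, and the generic W3C Media Fragments “#t=” form is used, while YouTube and SoundCloud each have their own variants of the parameter.

    // Rough sketch: build a link that points into a recording at a given offset.
    // The URL below is purely illustrative; "#t=" follows the W3C Media Fragments style.
    function fragmentUrl(recordingUrl: string, startSeconds: number): string {
      return `${recordingUrl}#t=${startSeconds}`;
    }

    // Hypothetical example: reference minute 21:30 of a call recording.
    const link = fragmentUrl("https://example.org/doug50/call-2018-04-02.ogg", 21 * 60 + 30);
    console.log(link); // https://example.org/doug50/call-2018-04-02.ogg#t=1290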
It looks like the group generally agrees that when it comes to data sources for the demo, we should work on and with what we ourselves produce during our effort (posts to the Journal, e-mails, call recordings, etc.) in order to bootstrap ourselves and our own communication. Hence, publishing, storing, retrieving and manipulating those media fragments become relevant capabilities that must be provided by an OHS infrastructure and supporting document formats. Marc-Antoine encourages us to look at Apache Marmotta (a linked data platform), as it might already do most of the stuff we might want/need, plus it supports Memento.
The question is asked how our calls are recorded. What about a TimeBrowser solution? It could record individual microphones and synchronize the audio tracks via a UTC time mark. This would be very useful for conferences, but we soon realize that we don’t have a lot of experience with audio, so we defer work on audio/video until a later stage, as text is easier to handle for now. Our material should always carry a timestamp, so it can be referenced in the future by the components we will build.
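A minimal sketch of the UTC-based synchronization idea, assuming (hypothetically) that every track carries the UTC time at which its recording started:

    // Align separately recorded microphone tracks on a common timeline.
    interface Track { file: string; startedAtUtc: string; } // ISO 8601 UTC start time (assumed)

    function offsetsFromEarliest(tracks: Track[]): Map<string, number> {
      const starts = tracks.map(t => Date.parse(t.startedAtUtc));
      const earliest = Math.min(...starts);
      // Milliseconds by which each track has to be delayed so that all tracks line up.
      return new Map(tracks.map((t, i): [string, number] => [t.file, starts[i] - earliest]));
    }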
We also believe that there is no need to set up the human system versus the tool system and vice versa, as we’re clearly interested in the symbiosis of the two.
With interaction as the most basic element of the universe, above symbols and their addressability, we recognize that the context of interactions, symbols and addressing is very important, too, with regard to how we understand connections.
We discuss whether it would be a reasonable use case for knowledge representation to map Doug’s terms to their closest resemblances in today’s terminology, potentially federated with the help of a timeline. Does the knowledge representation work help with the shifting meaning of terms and concepts? Can one “import”/apply context?
Stephan Kreutzer gives an update on his progress over Easter: the work focused on writing a downloader for all posts referenced by a WordPress blog URL, so that such data can be converted to XML and JSON and made available to the systems/components by Gyuri Lajos, Timur Shchukin and Marc-Antoine. Marc-Antoine already reads the WordPress RSS feed, but that might not contain all data. Within the next few weeks, something in that direction might help us make our components more interoperable and push us towards practical bootstrapping.
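A minimal sketch of the kind of downloader described, assuming the blog exposes the standard WordPress REST API; how the result is then stored or converted to XML is left open:

    // Fetch all posts of a WordPress blog page by page via the REST API.
    async function downloadPosts(blogBase: string): Promise<unknown[]> {
      const posts: unknown[] = [];
      for (let page = 1; ; page++) {
        const res = await fetch(`${blogBase}/wp-json/wp/v2/posts?per_page=100&page=${page}`);
        if (!res.ok) break; // WordPress reports an error once the page range is exceeded
        const batch = await res.json();
        if (!Array.isArray(batch) || batch.length === 0) break;
        posts.push(...batch);
      }
      return posts; // could then be serialized to JSON or converted to XML for other components
    }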
Daily Call 2018-04-03
The scheduled topic was Socratic Authoring, and with yesterday’s discussion about what to demo, we wondered what such a scope would mean for writing/editing. Robert Cunningham reported that HyPerform offers such capabilities; we should look into it. Also, the HyperScope project has produced a converter from Augment files to XML. As writing editors is always more difficult, we pondered ViewSpecs first. In Doug’s system, views seem to be specifiable dynamically with the help of several standardized statements (as encountered in links), which can be extended by macros and meta compilers, but are still mostly text-oriented. Today, it could well be that we want to support a vast multitude of ways to render/visualize data, so we might need mechanisms to apply layered rendering processors that act on the semantically annotated data. ViewSpecs themselves could be published as separate data points, with some federation going on, be it CSS font-family-style prioritization or HTTP-style ViewSpec negotiation, local preferences or recommendations by the author or for a specific environment. We probably have to abandon WYSIWYG entirely, as there’s no way to anticipate in advance how all the ViewSpecs out there will react to semantic markup; we might want to deliberately break this expectation by separating viewing and editing, which then won’t be a very “integrated” experience in comparison to the less visual NLS/Augment.
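To make the font-family analogy concrete, here is a rough sketch of such a prioritized ViewSpec fallback; the ViewSpec names and the notion of a “supported” set are hypothetical, not an agreed OHS format:

    // Reader preferences in priority order; the first ViewSpec the client supports wins,
    // analogous to CSS trying each font-family entry in turn.
    const preferred = ["argument-map", "outline-headings-only", "plain-paragraphs"];
    const supported = new Set(["outline-headings-only", "plain-paragraphs"]);

    const chosen = preferred.find(v => supported.has(v)) ?? "plain-paragraphs";
    // chosen === "outline-headings-only"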
Another theme was the question of non-retrievable things, which are of course still addressable, and there needs to be support for that. At the same time, we can’t work with non-retrievable material, apply a ViewSpec to it or manipulate it directly, while on the other hand, clients/agents can provide quite a few mechanisms to deal with such resources that aren’t accessible.
Daily Call 2018-04-04
Marc-Antoine Parent gave us an introduction to the WebAnnotation standard and its use cases. Robert Cunningham asked if WebAnnotation can be used to specify structure, and how we would publish and apply WebAnnotations in an ecosystem, embedded or standalone. Stephan Kreutzer asked how we would handle special corner cases like annotations that lack any semantic meaning. It was generally recognized that WebAnnotation might be very useful for what we’re trying to do, be it for linking/referencing/addressing or for semantic annotation (adding meaning in terms of WYSIWYM).
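For reference, a minimal annotation in the W3C Web Annotation data model looks roughly like the following; the target URL and quoted text are invented for illustration:

    // A minimal Web Annotation as a JSON-LD object (expressed here as a TypeScript literal).
    const annotation = {
      "@context": "http://www.w3.org/ns/anno.jsonld",
      "type": "Annotation",
      "body": {
        "type": "TextualBody",
        "value": "Relates to Doug's notion of view specifications.",
        "format": "text/plain"
      },
      "target": {
        "source": "https://example.org/journal/post-42",
        "selector": { "type": "TextQuoteSelector", "exact": "ViewSpec" }
      }
    };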
Daily Call 2018-04-05
General knowledge representation matters were discussed. Marc-Antoine Parent updated us that work is going on in the area of layers on top of RESTful knowledge data collections, to help with merging such collections, for example. Marc-Antoine is making his knowledge-graph data more JSON-LD-like; the list/graph already is. He is also expanding the linked data platform, so one can subscribe to entries in order to get notified when new entries get added. Gyuri Lajos and Marc-Antoine acknowledge that displaying/rendering incoming and outgoing links is very important, maybe one of the most used features. We should make use of the Linked Data Platform, or GraphQL.
Gyuri points out the advantages of plain HTML as a data format. Marc-Antoine adds that JSON-LD can be embedded, and Gyuri follows up with the note that interpreting code can be delivered in CDATA together with the data.
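A rough sketch of the embedding Marc-Antoine mentions: a glossary entry kept as plain HTML with machine-readable JSON-LD alongside the human-readable text. The entry data and the use of schema.org’s DefinedTerm type are assumptions, not a decision of the group:

    // Build an HTML fragment that carries its own machine-readable description.
    const entry = {
      "@context": "https://schema.org",
      "@type": "DefinedTerm",
      "name": "ViewSpec",
      "description": "A specification of how a document is to be viewed."
    };

    const page = `<article>
      <h1>ViewSpec</h1>
      <p>A specification of how a document is to be viewed.</p>
      <script type="application/ld+json">${JSON.stringify(entry)}</script>
    </article>`;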
Another realization by Gyuri as a result of the call on Wednesday is that WebAnnotation can be used to build a social platform, and Marc-Antoine again was able to reference the already existing Social Web Protocols and ActivityPub. The way to go is probably to embed WebAnnotations into ActivityPub data.
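What “embed WebAnnotations into ActivityPub data” could look like, as a rough sketch: an ActivityStreams “Create” activity carrying the annotation as its object. Actor, URLs and body text are invented:

    const activity = {
      "@context": [
        "https://www.w3.org/ns/activitystreams",
        "http://www.w3.org/ns/anno.jsonld"
      ],
      "type": "Create",
      "actor": "https://example.org/users/doug50",
      "object": {
        "type": "Annotation",
        "body": { "type": "TextualBody", "value": "See the 1968 demo segment on chorded keysets." },
        "target": "https://example.org/journal/post-42"
      }
    };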
IdeaLoom can provide RDF in JSON form on request, with the concept ID as a URL in there; requesting that URL via HTTP content negotiation will provide the data associated with it, so context gets attached via a WebAnnotation that references data on IdeaLoom.
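A sketch of the request side of that flow, assuming the concept ID is a dereferenceable URL; the Accept header and error handling are illustrative choices, not IdeaLoom’s documented API:

    // Ask the server for a machine-readable representation of a concept via content negotiation.
    async function dereferenceConcept(conceptUrl: string): Promise<unknown> {
      const res = await fetch(conceptUrl, { headers: { Accept: "application/ld+json" } });
      if (!res.ok) throw new Error(`Could not dereference ${conceptUrl}: ${res.status}`);
      return res.json(); // RDF in JSON form describing the concept
    }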
Gyuri describes the example of referencing something in Hypothesis that’s stored on IdeaLoom: how is Hypothesis going to embed IdeaLoom knowledge graph data on their service? Marc-Antoine responds that a highlight would be made on Hypothesis, and then be imported/referenced in IdeaLoom.
Daily Call 2018-04-10
The first 17 minutes are not included.
Marc-Antoine Parent: Glossary entries need a type or relation.
Gyuri Lajos: Knowledge enrichment is separate, a separate tool. On the other hand, while writing, auto-complete could be offered or a search for terms.
Marc-Antoine: Types could have attributes, but then everything becomes more complex. Topic maps with different roles even. For other term suggestions, should we only look at internal data, or also at other, external data?
Gyuri: Suggestions could come first from WikiData and then federation servers.
Marc-Antoine: That would be overwhelming. When it comes to server vs. client, I don’t expect the client to have all the data, but instead query for auto-completion, so that should be server-side logic. Furthermore: what about merging suggestions? What if terms are equal from different sources?
Gyuri: That needs to be decided on preference, needs to be federated.
Frode Hegland: Let’s just build that system, practically.
Marc-Antoine: For the first bootstrapped version, auto-complete is not necessarily required. The server should do the merging already. The server needs to do the WikiData API call, which would make it a 2-hop call for the client.
Gyuri: I don’t want the federation server to be overwhelmed. If the system takes off, every query by everybody would go through the server.
Marc-Antoine: On the other hand, there’s the concern about client complexity. The quota on the federation server might get exceeded. Let’s write a client library that supports different sources; this embedded JavaScript client library would make the calls.
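A rough sketch of such a client library: suggestion sources behind one interface, with a naive merge by label. The WikiData source uses the public wbsearchentities API; the interface, the merge rule and any federation-server source would be our own invention:

    interface Suggestion { label: string; id: string; source: string; }
    interface SuggestionSource { suggest(prefix: string): Promise<Suggestion[]>; }

    // One pluggable source backed by WikiData's public search API.
    const wikidata: SuggestionSource = {
      async suggest(prefix) {
        const url = "https://www.wikidata.org/w/api.php?action=wbsearchentities" +
          `&search=${encodeURIComponent(prefix)}&language=en&format=json&origin=*`;
        const data = await (await fetch(url)).json();
        return (data.search ?? []).map((e: any) =>
          ({ label: e.label, id: e.concepturi, source: "wikidata" }));
      }
    };

    // Query all configured sources and crudely merge duplicates by label.
    async function autocomplete(prefix: string, sources: SuggestionSource[]): Promise<Suggestion[]> {
      const all = (await Promise.all(sources.map(s => s.suggest(prefix)))).flat();
      const byLabel = new Map(all.map((s): [string, Suggestion] => [s.label.toLowerCase(), s]));
      return [...byLabel.values()];
    }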
Gyuri: Want to provide my solution to the ecosystem. If a word is selected, its glossary data is either already in the local storage, or, if not, a query should be sent off; that would constitute an automatic concordance.
Marc-Antoine: If there are federation server(s) and an identifier is received, the servers would be asked if they have the data for it, and if not, it would be made available to the federation servers by importing. Ontology should be another library.
Gyuri: Have reservations; it’s hard to do for WikiData.
Marc-Antoine: Wrote down the fields that are needed.
Frode: Let’s try to get the glossary into the document. It would be interesting to learn from the knowledge guys which interactions the glossary could offer at the point of authoring.
Gyuri: Suggest not bringing ontology into writing glossary entries at this early stage. That’s too confusing and a separate step for later; two different activities.
Marc-Antoine: The relation “related” is almost just noise, not usable, so there’s the need for better typed relations, and then that depends on concepts. Ontologies, multiple ones even, should provide their relation types. If the target term comes first, that would already lead to a good restriction for the available relations.
Gyuri: Biomedical semantics are very hard to understand, it’s a heavily cultivated domain.
Marc-Antoine: It’s the only place where RDF survived.
Gyuri: WikiData allows to come up with new entries.
Frode: Specific suggestions needed for a document-centric dialog to add a glossary entry.
Marc-Antoine: To add a related term, should that be just added normally or with a role?
Frode: The glossary relation, is that only for other terms in the same glossary?
Marc-Antoine: It’s interesting that Gyuri immediately jumped on external glossaries for auto-complete.
Frode: Let’s draw an empty box, so the knowledge representation guys can add in there what they want.
Gyuri: If the empty box is a Web control, then I can add a Web component in there, a JavaScript-based widget. Everything that’s developed for the Web can run in this box.
Marc-Antoine: There are probably standards for how to make JavaScript and the host application talk to each other.
Gyuri: And there can be cross-domain POST messages.
Marc-Antoine: We just need to standardize on the interpretation of the messages.
Gyuri: The interpreter code can come along with the data.
Marc-Antoine: Isn’t CORS a problem?
Gyuri: No, an iframe can load JavaScript from anywhere.
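One way such host/widget messaging could be standardized, as a rough sketch using window.postMessage; the origins and the message shape are made up, not an agreed format:

    // Host page: accept messages only from the expected widget origin.
    window.addEventListener("message", (event: MessageEvent) => {
      if (event.origin !== "https://widget.example.org") return;
      if (event.data?.kind === "glossary-entry-selected") {
        console.log("Widget picked term:", event.data.term);
      }
    });

    // Widget (inside the iframe): send a message up to the host document, for example:
    // parent.postMessage({ kind: "glossary-entry-selected", term: "ViewSpec" }, "https://host.example.org");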
Frode: Let’s just write it down, hack it together, quick and dirty.
Gyuri: We don’t have a standard for glossary yet, but WebAnnotation can do that.
Marc-Antoine: That’s more for the indexation part, not for annotation, but it could work. WebAnnotation allows tagging existing entries, but isn’t adding to the definition of the term. What’s the format to publish the entries? It should be very minimalistic; I can provide a JSON-LD spec.
Gyuri: We can replace WebAnnotation with whatever we’ll use.
Marc-Antoine: I can make a draft by today, but using WebAnnotation for it would be dangerous because the server would get the burden of doing the replacement. It would be the archetype of technical debt.
Gyuri: Still, federation servers should be able to do it.
Marc-Antoine: No, servers shouldn’t handle stuff that’s disguised as WebAnnotation.
Gyuri and Marc-Antoine agree that WebAnnotation should still be used to annotate existing entries.
Marc-Antoine: Will write a draft, and can Gyuri start to write a JavaScript library for it?
Frode: To clarify, is it for posting new entries to WordPress, to define the model?
Marc-Antoine: For the start, it’ll be for WordPress but soon change to federation server, and we’ll see how far we can get, and then do the next step.
Frode: With the end of the demo in mind, we would want to say: by the way, you can immediately use it, no need to set up your federated server.
Marc-Antoine: On IdeaLoom, there will be an interface to configure the data source for a given conversation, to say basically that this RSS feed is for the glossary, and from the URL I generate a data feed. Then, Gyuri’s machinery can come in, and I will provide API endpoints to do things on the existing conversation.
Frode: Where will it live?
Marc-Antoine: On IdeaLoom as federation server, TopicQuest might have their own. So we’ll standardize on an API? There’s still the requirement of an agreement with a server.
Frode: Doug’s demo was very different from what we have today. It could either be as native as possible, using an existing WordPress installation and off you go, plus for more advanced knowledge graph work, go talk to the knowledge people, or use Google Drive.
Marc-Antoine: Will try to come up with a generic Linked Data Platform solution, Apache Marmotta eventually.
Frode: How is that going to work for a person who’s a blogger?
Marc-Antoine: That’s a constraint on the architecture. The notion that it’ll work without a server is wrong. Somebody has to get a server and pay for it; there’s no data that just lives in the network. What we can try is to use IPFS, and distributed storage is a different story again. I’m interested in solutions on the server because there I can do computation. With the so-called “server-less” architectures, one has to have an account to do lambda computation there. There is no free lunch.
Frode: There needs to be a conversation about caching when it comes to auto-complete. In the case of a student as a user, she might use her own server on which most material will be static, frozen. Documents will stay there online. If they get connected in a knowledge graph, then it is another story.
Gyuri: The data could be in the local storage of the client, or on the Open Science Framework. One can get storage there and they guarantee that the data will be there for a number of years. The content there will be static, but locally it can become alive. It can be Dropbox, as the Open Science Framework has connectors to different storage services. That would be part of OHS Core.
Frode: To wrap up, how do we give people a few days to send in a list of things they want to see in the empty box? Also, how do we make it work for a non-technical student?
Marc-Antoine: I wasn’t aware of this focus – of targeting these users primarily.
Whuffie (?): No, not everybody needs to be able to set it up himself/herself; a friend could do it for him/her.
Frode: It’s not so much about the technical difficulty of setting it up, but the accessibility/availability to get it started.
Marc-Antoine: That was a non-goal for me previously.
Frode: You’re a cloud guy, but your work should get a wide audience, so how to give people an easy click-click way to start it up?
Marc-Antoine: I see where you’re coming from, and by connecting things between different servers, it could become relatively “serverless”. There’s the Beaker browser; we might look into it as it’s like IPFS. One gets a URL for something that was created locally, and then it becomes accessible to other Beaker browsers, so there could be a Beaker proxy or something.
Daily Call 2018-04-11
Robert Cunningham introduces us to HyPerform. It’s basically an Augment clone, but with an additional menu at the top. It provides a limited subset of Doug’s ViewSpecs. Once the ViewSpec codes/characters are learned, text can be navigated very fast. In the design of the system, everything that’s supposed to be a noun is a noun for noun+verb commands, but as English uses verb+noun (example: “Window move from here to here”), that’s reflected in the software. This convention works with everything, be it text or controls, like “transpose/move Window” plus clicking which to which. Could we standardize on verbs, which would also be helpful for versioning?
Regarding structure, with knowledge representation and threaded conversation, we tend to get very visual, but can we do something with basic text, so people can work with and display their material in whatever way they prefer? The tool should be incredibly basic, so different people could make use of it in ways we probably don’t anticipate. Users would be able to run their text through a lot of processes.
Marc-Antoine Parent: The text capabilities are very good, but HTML is more comfortable because of links. It won’t be transformation, but view controls, adjusting the CSS. Which view controls are application-specific, which ones should be standardized?
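As a rough illustration of such a CSS-adjusting view control rather than a transformation: a single class toggled on the document hides body paragraphs and leaves only headings visible. The class name and markup convention are invented, not anyone’s actual format:

    // Inject a rule once, then switch the view by toggling a class on <body>.
    const style = document.createElement("style");
    style.textContent = ".headings-only section > p { display: none; }";
    document.head.appendChild(style);

    function setHeadingsOnly(on: boolean): void {
      document.body.classList.toggle("headings-only", on);
    }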
Robert: In HyPerform, there’s no way to add ViewSpecs. There can be plugins, but it’s not clear how to add them. Still, commands would be limited to the alphabet of upper- and lower-case characters. What we need are branch, group, plex and statement.
Gyuri Lajos: But wasn’t it the case that there were no files in the original NLS?
Robert: In HyPerform, files are mapped to the local hard drive. If files are moved, links break.
Marc-Antoine: Have we looked at Scrivener? It has something similar to ViewSpecs in search. In HyPerform, everything is content, there is no semantic markup like titles etc., so we can’t easily parse that plain (structured) text and derive meaning from it.
Stephan Kreutzer: Can we cheat? By exporting the data via an output processor, by accessing the internal data format of HyPerform, by adding special characters to the output in order to help a parser?
Marc-Antoine: Are most view controls for hiding and showing material?
Gyuri: There is Org mode for Emacs, which is pretty much like HyPerform; it’s inspired by Doug’s demo to a great degree.
Stephan: But Org mode does it with a Markdown-style special character format.
Marc-Antoine: For structure, I would prefer HTML. I use Tinderbox for structure, and in there is Markdown. Look at the Leo editor, which is an outliner, mostly created for literate programming.
Robert: In HyPerform, there’s the ability to move material as a whole branch, so everything under it also gets moved.
Marc-Antoine: That’s also possible in the outline mode of Word. Sure, there’s more guarantee of correct operation by commands in comparison to visual manipulation.
Robert: Can I customize what level indentation looks like?
Houria Iderkou: What’s the status, what can we do in the remaining time?
Marc-Antoine: There has been significant progress recently.
Houria: The suggestion is that we shouldn’t argue about UI, because there are many options, but actually build or show it.
Gyuri: Want to pitch to build an entire OHS.
Houria: Focus needs to be on the parts on which everybody can agree and work on.
Gyuri: But we’re not working together. I’ve been working on this system for more than 5 years, even before finding out about OHS, and now even the Augmentation Research Community is not looking at it; if not now/here, when, ever?
Stephan: The code that gets written lacks proper licensing and there are technical problems. Gyuri could provide a library for local file I/O, otherwise I’ll be left/forced to reinvent this wheel. Where are the contributions to GitHub? Marc-Antoine has everything on GitHub, which is very positive.
Gyuri: This occasion now is the big and maybe last chance to get something going for humanity and the world.
Daily Call 2018-04-12
We acknowledge that our online materials are pretty scattered and poorly maintained. There isn’t enough progress to bootstrap ourselves, while on the other hand quite a few improvements were made. Frode Hegland requests that we make even the most basic writing, publication, retrieval and reading of a piece of text possible. Stephan Kreutzer mentions that there’s the Journal on WordPress with blog posts and glossary, and it’s also the only officially mandated material collection to work on.
Marc-Antoine Parent: There’s no point in a glossary without getting glossary data from multiple sources. Furthermore, we should separate the widget and the underlying library. The library should merge duplicates. The draft isn’t done yet, as the issue of ontologies is complex. I added “association” to the data format. Do we really need peer-to-peer? Probably not?
Gyuri Lajos: I’m biased towards the peer-to-peer Web idea. That’ll be powerful for working on things in real-time.
Marc-Antoine: If hyperglossary-aware WordPress servers can talk to each other, there’s no need for peer-to-peer between clients.
Gyuri: Agree that merging and discovery should be on the server, on the concept level. Local client peer-to-peer is a matter of privacy. If there are such servers, they will blow up and get hacked.
Marc-Antoine: For me, that’s a separate issue. There’s HTTP authentication for data access. In IdeaLoom, there are permissions and restrictable read capability. External data operations will only provide data that doesn’t allow identification of the user. Proper implementation of permissions is work, but not too difficult. At the same time, not a lot of software supports it. There’s the need for permissions per field in larger data, and anonymization. Knowledge patterns still needed to be discoverable, but since Catalyst was a project funded by the EU, privacy regulations needed to be observed as well. The system calculates dynamic permission-based views on the data on the fly. API tokens have certain capabilities/permissions attached to them.
Gyuri: The data is in a graph, so there can be simplified row+column level permissions, which I’m already doing too on the fly on my uniform model. But indeed, one can’t solely rely on client peer-to-peer, merging is a good example.
Marc-Antoine: There can be an easy HTTP model, with a URI to specify the author, so that’s not necessarily identifying the user, but could be.
Gyuri: Servers are important because they can provide information about interesting data that’s not available in the immediate neighborhood.
Marc-Antoine: How can a hyperglossary-enabled server consume incoming WebAnnotations? It should have an ActivityStream inbox, but that doesn’t necessarily need to be implemented; it can also just be read from a stream. There are Hypothesis client libraries to make highlights in WordPress. What about overlapping annotations? More common: a lot of servers annotating the same word, all making claims about it. That would be, let’s say, 10 annotations over the same word on the server, so a mechanism is needed to indicate to the user that those different servers have something to say about the same word.
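A rough sketch of that mechanism: group incoming annotations by their target quote so the user can be told that several servers comment on the same word. The annotation shape loosely follows the Web Annotation model; the grouping key is our own choice:

    interface Anno { server: string; target: { source: string; selector: { exact: string } }; }

    function groupByTarget(annos: Anno[]): Map<string, Anno[]> {
      const groups = new Map<string, Anno[]>();
      for (const a of annos) {
        const key = `${a.target.source}#${a.target.selector.exact}`;
        const list = groups.get(key) ?? [];
        list.push(a);
        groups.set(key, list);
      }
      return groups; // groups with more than one entry mark words annotated by several servers
    }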
Gyuri: I found a tweet by a Hypothesis guy in which he showed overlapping annotations. There should be a convergence between WebAnnotation and knowledge graph.
Marc-Antoine: I expect the federation server to look at annotations for a document and handle it.
Gyuri: Glossary capability for WordPress is already solved; look at CM Tooltip Glossary. On Slack, we need to create a #federation-server channel.
Stephan: A federation server, would that be any WordPress instance or our own, and/or is a federation server part of the OHS?
Marc-Antoine: No and no.
Daily Call 2018-04-16
Frode Hegland: Something happened to my thinking since the last call. In Author, one feature is supposed to be (not done yet) a mechanism to show only the headings, not the body of the text, in order to draw lines between those headings, basically a concept map. By going through use cases, I realized that this feature is probably most useful in the context of collaboration, therefore on the Web. Copying such a node would include all subnodes, which would get deeply copied to the local computer.
Marc-Antoine Parent: It emerged from the discussion of graph + document – to look at headers as nodes is very natural to me, the attached body is just an attribute.
Frode: Moving around nodes is incredibly useful. What about extending Gyuri Lajos’ work into this Virtual Reality direction?
Gyuri: Well, if everything is nodes…
Marc-Antoine: To copy means to fork, with the same origin of course, but diverging.
Gyuri: Then that becomes a new version in the node.
Marc-Antoine + Gyuri: Agreement that there’s the need for both, origin ID and a local copy/fork version ID.
Marc-Antoine: I only want remote nodes, because there might not be enough storage space locally.
Gyuri: Similar to WikiData, it might be temporary, throw-away data.
Marc-Antoine: Then there’s the need for a distributed event bus. Propagation might be implemented via WebSockets to other clients. IdeaLoom has it; it’s a little bit slow, but works. I looked at collaborative editors some time ago and there are many of them, so I made a list.
Gyuri: To make it collaborative, only a communication channel or database is needed. Only the node modifications need to be shared among peers, the editor itself doesn’t matter that much.
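A rough sketch of that communication channel: node modifications broadcast over a plain WebSocket, independent of whichever editor produced them. The server URL and the event shape are invented for illustration:

    interface NodeEdit { nodeId: string; field: "title" | "body"; value: string; author: string; }

    const bus = new WebSocket("wss://example.org/doug50/event-bus");

    function publish(edit: NodeEdit): void {
      bus.send(JSON.stringify(edit));
    }

    bus.addEventListener("message", ev => {
      const edit: NodeEdit = JSON.parse(ev.data);
      // Apply the remote modification to the local node model here.
      console.log(`Node ${edit.nodeId} changed by ${edit.author}`);
    });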
Frode: So what are we going to do, into which direction?
Marc-Antoine: My action plan was to add history to my nodes, now that Marmotta is implemented. I still need to specify the format for the knowledge graph. But with this new idea, a format for both graph and documents? Question to Frode: what is needed by documents? What format to use? There are many internal ones; HTML would be fine, but some editors have their own, so HTML would not necessarily be the best.
Gyuri: I support 70% of DocBook to import to graph.
Frode: Those are very good questions, but also very detailed. What I’m talking about is just writing down two names and connecting them with a line.
Marc-Antoine: First question – can such a node be translated to HTML, round trip?
Robert Cunningham: During the HyPerform presentation, Marc-Antoine saw outlines in the outline editor as a problem, because structure only implicitly conveys meaning, but will introducing semantics solve that?
Frode: Talked with Christopher Gutteridge briefly, and Christopher should talk to Gyuri. Marc-Antoine’s system should link into documents, I will design my component to make it fit.
Robert: How to make multiple uses of the same text? How to do that in HTML? To dynamically show and hide portions?
Gyuri: I looked at everything done and published by Frode, so what’s needed is that the editor must manipulate an underlying graph model.
Marc-Antoine: I said that with HyPerform, it’s hard to do it via transformation, but in HTML it becomes easy; semantics will do that. So here we come back to the document as a graph. Intelligent <div>s – one has to have very well designed HTML, which isn’t difficult, but needs to be implemented.
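A rough sketch of reading such well-designed HTML as a graph: each nested section with a heading becomes a node, and nesting becomes the child relation. The section/heading/paragraph convention is an assumption, not an agreed format:

    interface DocNode { title: string; body: string; children: DocNode[]; }

    function sectionToNode(section: Element): DocNode {
      const title = section.querySelector(":scope > h1, :scope > h2, :scope > h3")?.textContent ?? "";
      const body = [...section.querySelectorAll(":scope > p")].map(p => p.textContent ?? "").join("\n");
      const children = [...section.querySelectorAll(":scope > section")].map(sectionToNode);
      return { title, body, children };
    }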
Stephan Kreutzer: So here we come back to xFiles, generic semantics, at which point we don’t care about HTML semantics any more.
Marc-Antoine: Another question towards Robert – where to draw the line between node and body, at the title level, at the section level?
Robert: Is that also relevant for ViewSpecs?
Marc-Antoine: It’s important that Gyuri makes the node model work.
Robert: And then I don’t need a separate reader or code, everything is self-contained, everything is there for it to start its stuff.
Frode: Gyuri, how much effort would it be to make a new view in your system for the graph?
Gyuri: In Robert’s presentation, the view was customizable.
Stephan: So do we want to do collaborative interactive navigating/editing in real time? That’s another game – doable, but we don’t have a lot of experience or time left. Co-browsing comes to mind.
Robert: Something like Apache Wave?
Marc-Antoine: There’s now new technology, for example Quill and slate.js.
Robert: It’s also a nice thing to do just for yourself, to synchronize text between devices.
Frode: Agreement reached that we also do interactive capabilities. Now, who does what?
Marc-Antoine: I know that Gyuri is already very advanced, so that can be used, I don’t need to redo it.