First, we have to acknowledge that we discuss future options more than we extend existing experiments: the iD fork, the demo maps and the Semantic MediaWiki are the only self-hosted geo services right now.
Second, working purely in the abstract, imaginative layer of specifying our needs, we are prone to the trap of confusing an implementation with the function requested from it.
Looking at how we have been overengineering until now, we can rest assured that OAuth and SPARQL, even with geo extensions, are out of scope for the next half year.
Now we have to clear our minds again to distill further requirements regarding
- Querying and accessibility
- Storage and data models
- Failover and recovery scenarios
We understand that in a world of service-oriented architecture and connected microservices, a lot of experience in distributed systems design is not directly available to us. We are still struggling to separate graph, column and row stores, not to mention external indexing services and multi-model or only partly open source databases.
When it comes to querying, we see the predominant *QL dialects, but also semi-proprietary approaches and a far-distant future of Linked Data Fragments. Yet we want to encourage users to host their data themselves and only publish what they want, so we are agnostic about how users offer us their data. Still, we have to make a decision for ourselves.
A custom web service may suffice for now and could be extended to offer further dialects later. In the beginning we will probably not even get the linked data part right, as we never discussed any JSON-LD contexts for the models we need. But since we probably just want to store JSON documents (files!) anywhere (git with a web interface, a web server, document stores, …), we can imagine a progressive schema which adapts to our needs.
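As a sketch of what such a progressive schema could mean in practice (all names here are hypothetical, not an agreed design): we validate only the handful of fields we currently rely on and pass every other field through untouched, so documents can grow richer without migrations.

```javascript
// Sketch of a progressive-schema document store. REQUIRED_FIELDS lists only
// what we depend on today; any unknown field is accepted and stored as-is.
const REQUIRED_FIELDS = ['id', 'type'];

function validateDoc(doc) {
  // Check just the fields we rely on; everything else passes through.
  for (const field of REQUIRED_FIELDS) {
    if (!(field in doc)) throw new Error(`missing required field: ${field}`);
  }
  return doc;
}

class DocStore {
  constructor() {
    this.docs = new Map();
  }
  put(doc) {
    validateDoc(doc);
    this.docs.set(doc.id, doc);
    return doc.id;
  }
  get(id) {
    return this.docs.get(id);
  }
}
```

The in-memory `Map` is a stand-in for whatever backend we end up with (git, a web server, a document store); the point is that the validation layer, not the storage, carries the schema.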
The more API consumers exist, the more careful we have to be about breaking changes.
Replication and sharding will then ensure that most of the data is available most of the time, scaling requests and storage horizontally between nodes. Thinking of a data federation reminds us that many questions in this field, such as data privacy, authority and ownership, are not yet answered completely.
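For the replication part, CouchDB already ships the primitive we would need: a replication is triggered by POSTing a JSON body with a source and a target to its `/_replicate` endpoint (or by writing a document into the `_replicator` database). A small sketch of assembling such a request, with hostnames and database names invented for illustration:

```javascript
// Build the request that tells a CouchDB node to replicate one database
// into another. `continuous: true` keeps the replication running so
// changes keep flowing between federation nodes.
function buildReplicationRequest(source, target, { continuous = true } = {}) {
  return {
    method: 'POST',
    path: '/_replicate',
    headers: { 'Content-Type': 'application/json' },
    body: { source, target, continuous },
  };
}
```

A federation node could call this with, say, `buildReplicationRequest('https://a.example/maps', 'https://b.example/maps')` and send it with any HTTP client; the sharding side is a separate question this sketch does not touch.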
Yet there are examples which point out a way to go:
- Merkle-DAGs (like git, dat, ipfs, forkdb) provide the ease of versioning we need
- Remote forking and pull requests are nowhere standardized between implementations, but Federated Wiki for the former and (private) Webmention for the latter show how it could work one day.
- Event-based, social architectures of streaming data make assumptions about how to solve identity; SoLiD and Activity Streams 2.0 are examples of such.
There are many ways of storing data, but our anticipated query scenarios constrain the perspectives of an evaluation grid and filter out overscaling or undersupported environments.
The focus of transformaps should still be to engage communities in the discussions about economic decentralization, linked data models and reference implementations of different approaches to linking data. Thus we care more about the vocabularies to be researched and possible counterparts in a testbed than about inscribing a global normalisation onto the field of federated civic geodata publishers.
I still suppose there will be many different data stores to build upon. A geo index in a replicated database (CouchDB + GeoCouch) may be interesting, but just as interesting are a properly versioned store that is easily distributable to end users (dat) or a social graph of the individuals and organizations working with us. Also, the more events we process and the closer we get to publishing streams, the more important performance and intermediate caching become.
Let’s just assume any geodatabase and not specify it. What is important is how we access it and make its data available to the public. This is most probably going to be a custom API in front of it.
The big-picture design is what we are constantly lacking, but this approach is probably doomed to fail right from the beginning: do we start by imagining the perfect solution, or do we collect existing building blocks and only go one step further at a time, assuming that probably colliding development futures may either collapse into one or differentiate into multiple? Given our restricted resources, we are working on many layers at the same time, not only technical ones, and thus produce the image of a stalled progression.
Please review the complexity yourself by digging into
I am currently reviewing our Taiga project for associated user stories. There are four predominant layers in the current anticipation of what we need. I will go through each of them and put them into context with the latest updates available to me.
Public/Private API + geo-aware backend
@toka and I pretty much agree that this means a simple Node.js daemon which forwards bbox and regular queries to a GeoCouch, but adds the thin authentication layer we need. Helpful resources could be:
- Hoodie, which provides basic libraries like .email, which could be made geo-aware by using their PouchDB integration for a client-side geo index in a Progressive Web App, or a GeoCouch extension for the web service. The associated .geo library would work both on client and server.
- This rest-api template from a thinkfarm friend.
- How to implement different versioning strategies with CouchDB
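A minimal sketch of that daemon's core follows, assuming a GeoCouch spatial index at a made-up path (`/maps/_design/geo/_spatial/points`) and a placeholder token check; none of this is agreed API, only an illustration of how thin the layer could be. GeoCouch exposes spatial views under `_design/<ddoc>/_spatial/<index>` and accepts a `bbox` query parameter.

```javascript
// Stand-in for real authentication (Hoodie accounts, OAuth, …).
const VALID_TOKENS = new Set(['demo-token']);

// Translate an authenticated bbox request into the GeoCouch spatial
// query the daemon would proxy to. Throws on bad tokens or bad input.
function buildSpatialQuery(bbox, token) {
  if (!VALID_TOKENS.has(token)) throw new Error('unauthorized');
  const [w, s, e, n] = bbox;
  if ([w, s, e, n].some(v => typeof v !== 'number')) {
    throw new Error('bbox must be four numbers: west,south,east,north');
  }
  // Database name, design doc and index name are placeholders.
  return `/maps/_design/geo/_spatial/points?bbox=${w},${s},${e},${n}`;
}
```

An Express or plain-`http` handler would call this, forward the resulting path to CouchDB, and stream the response back, which is essentially all the "thin layer" needs to do for reads.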
There is common consensus that uMap is the closest thing to the editor we could long for. Since we can neither build a complete WebGIS from the bottom up nor design it top-down, we SHOULD build on the strengths of open collaboration and COULD declare uMap our reference implementation of a collaborative web mapping application.
We tear it apart from the middle out and extract a usable Leaflet.Storage, abstracted enough for use within multiple mapping applications and linking different frontends to different backends: a missing thin waist of current WebGIS offerings and another step towards a more geo-aware web in general.
We could finally be approaching a geo transport layer which increases interoperability between multiple implementations.
What uMap currently lacks to be enough for us is:
- an official Dockerfile for increased self-hostability.
- an API to expose public and private data
- a taxonomy viewer, templates and editing features, but these are maybe just another coupled service. Federated Wiki comes to mind. An explanation of how this could look is given upon request.
- a simple spreadsheet data collection interface: the view exists already, it just needs to be directly linkable in the JS frontend and could use tighter integration with a free geocoding service à la Mapzen's.
Most of these features would be offered by CartoDB instead.
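The geocoding integration mentioned above could start as little more than a URL builder. The sketch below targets an endpoint in the style of Mapzen Search (its `/v1/search` takes `text` and `api_key` parameters); the endpoint and key are illustrative only, and any compatible service could be dropped in.

```javascript
// Turn a free-text address from the spreadsheet view into a geocoding
// request URL. Endpoint defaults to a Mapzen-Search-style service.
function buildGeocodeUrl(address, apiKey, endpoint = 'https://search.mapzen.com/v1/search') {
  const params = new URLSearchParams({ text: address, api_key: apiKey });
  return `${endpoint}?${params}`;
}
```

The spreadsheet frontend would call this per row, fetch the returned URL, and write the first result's coordinates back into the row before handing the data to the map.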
Map of Maps
Since one of the Scrum standups we know that @species managed to harvest the Semantic MediaWiki and create an overview map of the mappings. Unfortunately we don't know yet how it came into existence, as it is completely undocumented.
The existing work on importing and transforming tabular spreadsheet data could then be integrated into more generalized, yet not necessarily automated, workflow examples. Once we have storage in place, there are plenty of ways to interact with it from the
So we come to an end. By building on high-quality open source software and integrating into existing ecosystems, we build on the work of thousands before us and only add minimal layers of patches that provide our desired functionality. We even managed to make sure users keep control over their data and can revoke any publication permissions at any time. If we federated the dataset before and it had been licenced accordingly, there are probably still other copies floating around. Now we want to show our multitude of views on alternative economies in as many places as possible.
From uMap and other websites we already know <iframe /> embeds, but Discourse shows us how lovely oneboxing is.
How does it work? It extracts structured data from the web (click on the first undefined type error and check out what is probably the world's first use of the ESSGlobal vocabulary, by the Institute for Solidarity Economics) and displays it accordingly for known vocabularies.
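The extraction step behind such oneboxing can be sketched quite simply: fetch the page, pull out its JSON-LD blocks (embedded as `<script type="application/ld+json">` per the JSON-LD spec), and then render the vocabularies you recognize. The regex scan below is a simplification for illustration; a real onebox would use a proper HTML parser and also look at microdata and RDFa.

```javascript
// Extract all parseable JSON-LD blocks from an HTML string.
// Invalid JSON inside a block is skipped rather than failing the page.
function extractJsonLd(html) {
  const re = /<script[^>]*type=["']application\/ld\+json["'][^>]*>([\s\S]*?)<\/script>/gi;
  const blocks = [];
  let m;
  while ((m = re.exec(html)) !== null) {
    try {
      blocks.push(JSON.parse(m[1]));
    } catch (e) {
      // skip malformed JSON-LD rather than aborting
    }
  }
  return blocks;
}
```

A renderer would then dispatch on `@type` (or the `@context` vocabulary, e.g. ESSGlobal) to decide how to display each block.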
As we also know, a large share of the world's websites is powered by WordPress, so we could use it to distribute our self-hosted vision of location-based sustainability data and produce a small mapping plugin, which @species has already started creating. But if we think of loosely coupling different web services, we can also imagine creating a [shortcode] plugin directed at uMap. Inserting a mapping viewer or editor could become as simple as pasting a URL in Discourse, too!
What else is there to discuss?