[Proposal] By end of 2016, have some components running | mid 2017 - mid 2018, have an RDF-based system running

We need to attract people who are knowledgeable about RDF.

@almereyda sees RDF as “the Thing” for Linked Data.

@species’ experience is that RDF is used in various places, but not in a consistent way.

Example by @species:
Wikidata is an example of this inconsistency: it provides four RDF interfaces, but all four differ internally. They all work with RDF, yet each one is different.

How could we tell whether we are right or wrong about RDF?

@species: “How many years away are we from RDF being widely adopted? 4-5 years?”

Some practically experienced voices and examples on the topic of building on RDF would be welcome.

  • What are good arguments for and against RDF?
  • When do we have to decide whether to work with RDF, or to prioritize something else that aims at similar outcomes?
  • What is the SWOT for RDF (strengths | weaknesses | opportunities | threats), and how do we evaluate it?
  • What other options are there so that we are not locked into one specific standard or protocol, but can opt for a different solution at some point?

@klaus is working on RDF and will certainly be interested in this conversation.
@klaus - this might be the moment to finish the document you prepared a year ago, and perhaps subject it to a SWOT analysis.


This week I am very busy with different other things, so only briefly:
As Silke mentioned, I have some acquaintance with RDF. What I suggested about a year ago was to use a format that is compatible with RDF, not necessarily RDF itself. This means easily and completely transferable to and from RDF. That could of course be RDF ;-), but it could also be some other format. Essential, I think, is a triple structure. This would also be compatible with the Wiki format. Of course we would need some tools for conversion; I am interested in joining in building those tools.
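To illustrate the triple-structure idea: a minimal sketch in Python that keeps data as plain (subject, predicate, object) triples and converts them to RDF Turtle on demand. The namespace URL, property names, and `urn:` identifiers below are illustrative assumptions, not an agreed vocabulary.

```python
# Minimal sketch: store data as plain (subject, predicate, object) triples
# and serialize to RDF Turtle only when needed. The namespace and property
# names are hypothetical placeholders.

TM = "https://transformap.example/ns#"  # hypothetical namespace

triples = [
    ("poi:1", "name", "Repair Café Berlin"),
    ("poi:1", "lat", "52.52"),
    ("poi:1", "lon", "13.40"),
]

def to_turtle(triples, ns=TM):
    """Serialize simple string triples as Turtle text."""
    lines = [f"@prefix tm: <{ns}> ."]
    for s, p, o in triples:
        lines.append(f'<urn:{s}> tm:{p} "{o}" .')
    return "\n".join(lines)

print(to_turtle(triples))
```

The point is that the internal store stays a simple triple list, so a converter to (or from) any RDF serialization remains a small, mechanical tool.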


@jnardi also showed an RDF vocabulary schema for the solidarity economy yesterday. This seemed interesting to take as a base and as an example of how our narrative-based ontologies and semantics could be generated.

I think it would make sense to consider building a team specifically to develop and extend the linked data framework we want to build, ideally with @klaus, someone from ESS Global, and possibly other networks/collectives working on similar approaches.

@klaus will you have the opportunity to come to the TransforMap workshop at Solikon?

@gandhiano Unfortunately not! At the same time I have a workshop of my own (einfach.nomadisch.leben)

@klaus and @gandhiano, please also have a look at @mariana’s latest post

Sorry: ESSGLOBAL was already a topic at the beginning of the TransforMap meetings, and at that time it was a reason to look at RDF etc. I am following the discussion and trying to join in again :wink:


Has there already been consideration here of populating TransforMap by extracting RDFa from the websites of the entities we would like to include on TransforMap? This would require that we provide a suitable vocabulary (based, presumably, on http://wiki.openstreetmap.org/wiki/Proposed_features/TransforMap).

I’m very happy to elaborate on this, but first I wanted to check for existing work done here. I see there being many advantages to this approach, but also quite a few technical challenges.
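As a first taste of the extraction side: a stdlib-only sketch that pulls simple RDFa Lite `property`/`content` pairs out of an HTML page. Real RDFa processing (prefix mappings, nesting, `@resource` chaining) is considerably more involved; the sample HTML and vocabulary are assumptions for illustration.

```python
# Sketch: extract simple RDFa Lite property/content pairs from HTML using
# only the standard library. This handles only attributes on a single tag;
# full RDFa processing (prefixes, nesting, @resource) needs a real library.
from html.parser import HTMLParser

class RDFaLiteExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a and "content" in a:
            self.pairs.append((a["property"], a["content"]))

html = """
<div vocab="http://schema.org/" typeof="Organization">
  <meta property="name" content="Example Co-op">
  <meta property="geo" content="52.52,13.40">
</div>
"""

parser = RDFaLiteExtractor()
parser.feed(html)
print(parser.pairs)
```

Even this naive scraper shows the appeal of the approach: the entity's own website carries the data, and we only harvest it.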


I would consider not only RDFa, but also other embedded structured data formats such as microformats, or in fact whatever is available to scrapers. Ideally, websites would provide JSON-LD via content negotiation for any IRI, if we were to ask @elfpavlik.
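Content negotiation here just means sending the right `Accept` header; a minimal sketch with the standard library (the URL is a placeholder, and whether a given site honors the header depends entirely on the server):

```python
# Sketch: request JSON-LD for an IRI via HTTP content negotiation.
# The IRI below is a placeholder; servers are free to ignore the header.
import urllib.request

def jsonld_request(iri):
    """Build a request asking the server for application/ld+json."""
    return urllib.request.Request(iri, headers={"Accept": "application/ld+json"})

req = jsonld_request("https://example.org/resource/42")
print(req.get_header("Accept"))
```

A server that supports this would answer the same IRI with HTML for browsers and JSON-LD for machines, which is exactly what makes the scraping step unnecessary.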

Drupal supports RDFa in its core and, being one of the CMSes most used by the entities we would like to see on TransforMap, could be interesting to explore in further detail.

I would, for example, be interested in mapping/storing all public event locations from the co-munity.net site, or just aggregating any kind of location-aware content type from a Drupal site.


Agreed: RDFa, microformats and JSON-LD are all useful representations to extract.

If we can extract data in any of these formats, then we could view the website containing these data as the “master”, and the data we populate into OSM and our own databases as a “cache”, assuming we provide a mechanism for doing a “cache-refresh” to keep the cache in sync with the master. Is this a workable approach?

I am currently working on the p6data project which will require the co-op movement to publish open data. I would like to ensure that the data published will provide information to allow the co-ops to be displayed on TransforMap. Clearly, it would be desirable if the data published by co-ops was “master”, and the data needed to drive TransforMap was “cache”, as described above. Of course, this need not be done via embedded structured data: the more important aspect is a design that enables the master/cache relationship.
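The master/cache relationship sketched above can be made concrete with a timestamp-based refresh. This is a sketch only; the record shape, `modified` field, and URN identifiers are illustrative assumptions, not a proposed schema.

```python
# Sketch of the master/cache idea: the publishing website is the master,
# our store is a cache refreshed by comparing last-modified timestamps.
# Field names and identifiers are illustrative assumptions.

def refresh(cache, master):
    """Overwrite cache entries that are older than their master counterpart,
    and drop cache entries the master no longer has."""
    for uri, record in master.items():
        cached = cache.get(uri)
        if cached is None or cached["modified"] < record["modified"]:
            cache[uri] = dict(record)
    for uri in list(cache):
        if uri not in master:
            del cache[uri]
    return cache

master = {"urn:coop:1": {"name": "New Name", "modified": 2}}
cache = {"urn:coop:1": {"name": "Old Name", "modified": 1},
         "urn:coop:2": {"name": "Closed Co-op", "modified": 1}}
print(refresh(cache, master))
```

Note this is strictly one-way (master wins); the OSM case discussed below is harder precisely because the cache side is also edited.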

I can only speak for OSM:
Syncing with OSM is very delicate, because OSM is not only writeable by ‘us’, but is also kept up to date by millions of mappers. There are some questions that need to be resolved before an automated sync would be approved by the OSM community:

  • What if an object in one of the two DBs disappears? When a mapper deletes a POI because it has closed down, how should this change be propagated to the other DB? Automatically, or after review?
  • What if an object changes in both the ‘source website’ and OSM at the same time? Which one should be taken as the new value?

Because of these problems, only very few databases are kept in sync with OSM.
A working example is the Danish house numbers, which are synced into OSM by the Danish ordnance office. BUT the quality of the ordnance data is considered so high that it is regarded as better than OSM and will overwrite values in OSM without asking.

There is no working two-way sync with OSM that I am aware of at the moment (which is what we would need). But if we can develop one, it would be a huge benefit for all!
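The two conflict questions above are essentially a three-way merge: compare each side against the value recorded at the last sync. A sketch of the classification step only; a real OSM sync would additionally have to deal with changesets, object versions, and community review.

```python
# Sketch: classify a sync situation by comparing both sides against the
# value recorded at the last successful sync ("base"). A deletion is just
# a value of None. This only classifies; it does not resolve conflicts.

def classify(base, ours, theirs):
    """Return which side changed since the last sync, or 'conflict'."""
    if ours == theirs:
        return "in-sync"
    if ours == base:
        return "take-theirs"   # only the other DB changed
    if theirs == base:
        return "take-ours"     # only we changed
    return "conflict"          # both changed: needs human review

print(classify("Café A", "Café A", None))       # deleted only on the other side
print(classify("Café A", "Café B", "Café C"))   # changed in both: conflict
```

Keeping the base value around is the crucial design point: without it, a deletion on one side is indistinguishable from an object that was never exported.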

Yes, I am considering exactly this for the first iteration of the Hypermedia API.

A revised translation of @daniel’s write up would be quite nice in this regard.


I thought this may be tricky! As I understand it, TransforMap also stores data in a DB separate from OSM. Is that right? If so, where can I find information about which data are held in OSM, and which in a DB under our control?

Indeed! This leads inevitably into questions about globally unique resource identifiers, and which different identifiers actually refer to the same resource. I expect this has already been the subject of discussion here. If so, where can I read about the current thinking on this subject? Does OSM provide “proper” URIs for POIs?

Yes, it is designed that way. Although it is not implemented yet^^.

Some related topics have already been discussed in the vocabularies-and-data Discourse category.

But it is not 100% finalized which attributes are stored in which DB.
The following should definitely be stored in OSM:

  • coordinates, address
  • name
  • website, social media contact
  • OSM POI type
  • wheelchair accessibility
  • start date
  • opening hours
  • link to image (in Wikimedia Commons or somewhere else on the web)

We are unsure about the following attributes:

  • ‘political’ identity in the alternative economy (commons, transition initiative, degrowth,…) and other tags from the TransforMap Taxonomy
  • ‘private’ contact data like telephone numbers and personal email addresses

Definitely elsewhere:

  • Events (although the event location should be referenced to an OSM object!)
  • Longer description texts
  • Media like images, videos
  • People (like community coordinators, etc.)
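The storage split above amounts to routing each attribute of an initiative either to OSM tags or to our own DB. A sketch of that routing; the OSM keys used (`name`, `website`, `wheelchair`, `start_date`, `opening_hours`) are real OSM tags, while the record field names are illustrative assumptions.

```python
# Sketch of the storage split described above: route each attribute either
# to OSM tags or to our own DB. OSM_KEYS lists real OSM tag keys; the
# example record's other field names are hypothetical.

OSM_KEYS = {"name", "website", "wheelchair", "start_date", "opening_hours"}

def split_record(record):
    """Split one initiative's attributes into (osm_tags, own_db_fields)."""
    osm = {k: v for k, v in record.items() if k in OSM_KEYS}
    own = {k: v for k, v in record.items() if k not in OSM_KEYS}
    return osm, own

record = {
    "name": "Food Co-op",
    "opening_hours": "Mo-Fr 10:00-18:00",
    "description": "A longer description text lives outside OSM.",
    "events": ["2015-09-05 Solikon workshop"],
}
osm_tags, own_db = split_record(record)
print(osm_tags)
print(own_db)
```

The undecided attributes (taxonomy tags, private contact data) would simply move between the two sets once the community settles that question.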

There are URIs (e.g. osm.org/node/2698583754) for an OSM POI, but they are not guaranteed to be stable (because a node may later be changed into an area such as a building, and then the id changes).
But there is a theoretical solution to this problem: Overpass permanent IDs.


Here is the unrevised (not updated to current status) translation: Architecture and Road Map, follow-up Potsdam

Could you link to some OSM statement on which kinds of tags can be accepted and which cannot? Is there some kind of tag-policy statement? That would be of great help.

Can you specify what exactly this two-way sync would be needed for?