Consuming the aggregated data

I’m thinking about how to start moving the data behind the links in the Semantic MediaWiki.
Of course there is a SPARQL endpoint that can be used to visualize the data in multiple ways via the LinkedWiki plugin, which I especially like because it uses the 4store quadstore implementation.
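To give an idea of what talking to such an endpoint looks like (just a sketch — the endpoint URL is made up, and the `format` parameter name varies between SPARQL servers, so treat both as assumptions), the Python standard library is enough to build the request:

```python
from urllib.parse import urlencode

# Hypothetical endpoint URL -- substitute the wiki's actual SPARQL endpoint.
ENDPOINT = "https://wiki.example.org/sparql"

def build_sparql_request(query):
    """Return the GET URL for a SPARQL query, asking for JSON results."""
    params = urlencode({
        "query": query,
        # Parameter name is server-dependent; 4store and others accept variants.
        "format": "application/sparql-results+json",
    })
    return ENDPOINT + "?" + params

# A generic "show me some triples" query to start exploring.
query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
url = build_sparql_request(query)
print(url)
```

From there it is one `urllib.request.urlopen(url)` call to get JSON results back into a REPL.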

@toka Can I assume MediaWiki plugins also work in Semantic MediaWiki?

Because on a medium time scale (i.e. by the end of the year) I’d love to mobilize the RDF hidden in there with more LDP- or Protégé-like environments.

Then having a programmatic way to access the data, e.g. consuming JSON-LD from a REPL, may be a nice first step towards generating meaning out of our collections. Visualisation will remain crucial, too.
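To make that concrete (the entity and context below are invented for illustration, not real wiki data): JSON-LD is ordinary JSON plus a few conventions like `@context` and `@id`, so a REPL session can begin with nothing but the standard `json` module:

```python
import json

# A tiny hypothetical JSON-LD snippet, as a wiki exporter might emit it.
doc = json.loads("""
{
  "@context": {"name": "http://schema.org/name"},
  "@id": "http://wiki.example.org/entity/Q1",
  "name": "Community Garden"
}
""")

# Plain json parsing is enough for first exploratory steps;
# full @context expansion would need a dedicated JSON-LD processor.
print(doc["@id"], "-", doc["name"])
```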

Hi, I wonder if you could clarify the current thinking around the Semantic MediaWiki? In particular, how will it not become stale and out of date? @toka @almereyda

As I understand it, there are three(?) elements to Transformap data.

  1. Data that can be added to OSM directly, e.g. through interfaces like
    - Relies on the community to manually remove stale data

  2. Data that is pulled directly from the web via RDFa/JSON-LD or similar ways to publish Open Data on the Web
    - Automatically updates when websites go offline, or appear with relevant metadata

  3. The Semantic MediaWiki

    - Unclear how this is going to be kept up to date.

Is my understanding roughly correct?
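On point 2 above: a minimal sketch of how JSON-LD embedded in a web page could be pulled out (the HTML snippet is invented for illustration), using only Python’s standard library:

```python
import json
from html.parser import HTMLParser

# Extract <script type="application/ld+json"> blocks from a page,
# the usual way websites embed JSON-LD metadata.
class JSONLDExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.documents = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.documents.append(json.loads(data))

# Hypothetical page snippet standing in for a fetched website.
html = """<html><head>
<script type="application/ld+json">{"@type": "Place", "name": "Repair Cafe"}</script>
</head></html>"""

parser = JSONLDExtractor()
parser.feed(html)
print(parser.documents)
```

A crawler built on this would notice when a site stops publishing such metadata, which is what makes the automatic updating in point 2 plausible.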


True - but there is a BIG community behind it - one of the reasons for choosing OSM :slight_smile:

That is the main plan for the future, yes - to include other linked data sources.
