This is great to see developing. A few quick reflections:
Standards + scraping
In the strategy you say "For example, Google is able to provide the opening hours of shops because the websites for those shops use a standard vocabulary for describing their opening hours."
As I understand, that is only partially true:
(a) For some opening hours data I strongly suspect they are using machine learning to extract the information, even when it has not been marked up appropriately. I believe they also provide tools to help webmasters correct information Google holds about them (I would need to check what the webmaster tools currently do around this to be sure of the exact mechanics).
(b) Google are able to get people to use markup correctly because of its sheer market power. If you put the wrong information in your markup, you'll find out quickly enough from customers complaining about incorrect results on Google, etc.
These are important things to have in mind when learning from the way in which major information brokers have 'disciplined' websites to provide them with the information they want.
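To make the "standard vocabulary" point concrete: the markup in question is typically schema.org's `openingHoursSpecification`, embedded in a page as JSON-LD. The sketch below builds an illustrative fragment of that kind; the shop name and hours are invented for the example.

```python
import json

# Illustrative schema.org markup for a shop's opening hours, as it might
# appear in a JSON-LD <script type="application/ld+json"> block on the
# shop's own website. The shop details here are invented.
markup = {
    "@context": "https://schema.org",
    "@type": "Store",
    "name": "Example Bakery",
    "openingHoursSpecification": [
        {
            "@type": "OpeningHoursSpecification",
            "dayOfWeek": ["Monday", "Tuesday", "Wednesday",
                          "Thursday", "Friday"],
            "opens": "08:00",
            "closes": "17:30",
        }
    ],
}

print(json.dumps(markup, indent=2))
```

Because the vocabulary is shared, any consumer that understands schema.org can read these hours without site-specific scraping rules, which is exactly the leverage the strategy is pointing at.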
Take a pragmatic approach to Linked Data
As you have already noted, much of the technology around Linked Data isn't as mature, or well maintained, as one might hope.
There is a tricky balance to strike between embracing the positive politics of ideal linked data (the distributed AAA approach: 'Anyone can say Anything about Anything') and working with the messy reality, where network effects kick in and a few centrally defined vocabularies dominate in a more-or-less lowest-common-denominator way.
There is also a tricky balance to strike between using the full linked data stack (RDF, Triple Stores, SPARQL, reasoning etc), and just encouraging greater use of shared standards, and then providing tooling to help people work with data in whatever format is most familiar to them (usually flattened CSV type formats, or simple JSON).
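The "tooling in familiar formats" option can be very lightweight. As a minimal sketch (using only the standard library, with an invented record shape), nested JSON of the sort a publisher might hold can be flattened into the CSV most data users actually work with:

```python
import csv
import io
import json

# An invented nested JSON record of the kind a publisher might hold.
record = json.loads("""{
  "name": "Example Bakery",
  "openingHoursSpecification": [
    {"dayOfWeek": "Monday", "opens": "08:00", "closes": "17:30"},
    {"dayOfWeek": "Saturday", "opens": "09:00", "closes": "13:00"}
  ]
}""")

# Flatten: one CSV row per opening-hours entry, repeating the shop name.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["name", "dayOfWeek", "opens", "closes"])
for spec in record["openingHoursSpecification"]:
    writer.writerow([record["name"], spec["dayOfWeek"],
                     spec["opens"], spec["closes"]])

print(buf.getvalue())
```

Nothing here needs a triple store or SPARQL endpoint; the shared standard does the integration work, and a few lines of glue meet users where they are.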
Although solving data integration problems via linked data often offers an elegant solution, experience suggests to me that it often leads new projects towards premature optimisation for working at a global scale, which limits their ability to take small steps towards practically available data and to integrate across a reasonable number of starting sources.
I would agree these are useful. I would suggest it is worth starting simple here: just a list of links will often be enough to get things started, without needing rich metadata registries until there is user demand.
Can you make the 'user side' of the strategy more prominent? Right now there is a reasonable emphasis on the supply side, but thinking more in the strategy about the community processes for identifying and supporting use and user needs might be relevant.