When does it make sense to do it?

Building a semantic Linked Data structure makes sense when we need to integrate heterogeneous data from different domains. This structure does not depend on data formats, database size or other attributes, but on common geographic concepts (e.g. river, road) and their relations (e.g. a road crosses a river). The relevant parts of the real data, and where needed its metadata, will be transformed into sub-elements of the basic concepts (e.g. E55 is a road; the Ohře is a river as well as a Site of Community Importance), and then relations between concrete objects can be described (e.g. E55 crosses the Ohře).

This approach enables us to integrate or harmonise originally heterogeneous data based on the same concepts.

As an example: Protected Area XX is near road E55, and in Protected Area XX you can see bird YY. The result of combining these statements is that near road E55 you can see bird YY.
To achieve this, the partners foresee a number of key steps, each of which occurs in one or more of the planned pilots described in the PILOTS section:

  1. Describe the target use case domains, focused on (i) agroforestry management, (ii) environmental research and biodiversity, (iii) water monitoring, (iv) forest sustainability and (v) environmental data reuse. The use cases will be dealt with from the end users' (related stakeholders') point of view. Each use case will be described with a minimal essential structure, listing the basic concepts to be taken into consideration. We expect maximal re-use of existing semantic structures (controlled vocabularies, gazetteers, registers, registry services, etc.).
  2. Analyse the currently available data and data models (ideally INSPIRE-based).
  3. Define new RDF data models (concepts and relations), or identify possible extensions of the INSPIRE data models, that are optimal for fulfilling all or particular use cases.
  4. Where necessary, define transformation models (processing services) for transforming the original data into RDF. This could be a complex, long-running process spanning many databases.
  5. Run the transformation model (this could also include access to distributed databases).
  6. Store, or generate on the fly, the newly transformed RDF data (this data also needs to include links to the original data).
  7. Prepare a user-friendly application interface for querying the data (a simple form, so that users can query data without expertise in the underlying standards). The queries will be divided into two groups: spatial queries (processed in a traditional spatial database) and semantic queries based on SPARQL.
  8. Prepare visualisation of the query results. It will include two possibilities:

a) visualise a list of objects in some form;
b) cartographic visualisation, which has to be provided in relation to the original data. It will therefore be necessary to define mechanisms, such as Filter Encoding, that can visualise the results of the queries on the map.