Tutorials on Sunday, Oct. 11
- Full Day
Tutorials on Monday, Oct. 12
- Full Day
The goal of the Stream Reasoning for Linked Data tutorial is twofold: (1) to present interesting research problems for the Semantic Web (SW) that arise in reasoning over a variety of highly dynamic data; and (2) to introduce stream reasoning techniques to SW researchers as powerful tools for addressing data-centric problems characterised by both variety and velocity [DCvF09,RPZ10a]. The tutorial consists of two parts. The first focuses on RDF Stream Processing and continuous query answering over dynamic data. It begins with an introduction to Linked Data streams and the query models used to process them; it then introduces C-SPARQL, a continuous extension of SPARQL for querying RDF streams together with RDF graphs, and SPARQLstream, a system enabling ontology-based access to data streams through query rewriting to stream processing engines. The second part focuses on Stream Reasoning techniques, providing an overview of the current state of the art for RDFS and OWL 2 RL; in particular, it presents two Stream Reasoning techniques for incremental materialization of streams of RDF data. The tutorial ends with a wrap-up and an overview of the open challenges.
- Alessandra Mileo (Insight Centre for Data Analytics)
- Jean-Paul Calbimonte (École Polytechnique Fédérale de Lausanne)
- Daniele Dell'Aglio (Politecnico di Milano)
- Emanuele Della Valle (Politecnico di Milano)
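To make the window-based query model concrete, here is a minimal Python sketch of the sliding-window semantics that continuous query languages such as C-SPARQL build on. The class, the sensor stream, and the toy pattern query are all invented for illustration; this is not the C-SPARQL API.

```python
from collections import deque

class SlidingWindow:
    """Keeps only the triples whose timestamp falls in the last `width` time units.
    A hypothetical illustration of window semantics, not a real engine."""
    def __init__(self, width):
        self.width = width
        self.buffer = deque()  # (triple, timestamp), oldest first

    def push(self, triple, t):
        self.buffer.append((triple, t))
        # Evict triples that fell out of the window (t - width, t].
        while self.buffer and self.buffer[0][1] <= t - self.width:
            self.buffer.popleft()

    def match(self, s=None, p=None, o=None):
        """Continuous-query step: return triples in the window matching a
        pattern (None plays the role of a SPARQL variable)."""
        return [tr for tr, _ in self.buffer
                if (s is None or tr[0] == s)
                and (p is None or tr[1] == p)
                and (o is None or tr[2] == o)]

# A stream of sensor readings; the "query" counts :tempAbove events
# observed in the last 10 time units.
w = SlidingWindow(width=10)
w.push((":sensor1", ":tempAbove", "30"), t=1)
w.push((":sensor2", ":tempAbove", "31"), t=5)
w.push((":sensor1", ":humidity", "80"), t=12)   # evicts the t=1 reading
print(len(w.match(p=":tempAbove")))  # → 1
```

The key design point the sketch illustrates is that the query is re-evaluated against a bounded, continuously updated portion of the stream rather than the full history.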
Nowadays, people use their mobile devices as their primary computing and communication platform, and they generate and consume vast amounts of data on them. However, the lack of Linked Data tools for these devices has left much of this data unstructured and unfit for reuse or integration with other datasets. In this tutorial, we present tools and technologies that enable the quick development of mobile Linked Data apps. These tools enable app users to contribute Linked Data, as well as consume existing Linked Data sets, paving the path to a growing Linked Data ecosystem.
- Weihua Li (Massachusetts Institute of Technology)
- Julius Adebayo (Massachusetts Institute of Technology)
- Lalana Kagal (Massachusetts Institute of Technology)
Ontology Design Principles in Support of the Analyst: Expressivity, Non-brittleness, and Seamless Integration at Scale
A few years back, Leo Obrst, Joe Rockmore, a classic data scientist, two intelligence domain experts, and other stakeholders helped vet a set of ontology design principles that I had put together as a result of my experiences in ontology-based data integration, most recently as chief modeler on the DataSphere integration project and as one of two principal ontologists/modelers on Synapse. The session will cover two dozen foundational, preventative solutions for many of the modeling problems encountered in serious semantic modeling efforts. While designed to support semantic integration at scale, these principles also support high levels of temporal, geospatial, and analytical expressivity and sophistication. They also enable non-brittle, future-proof responses to changing requirements, for enterprise ontology modeling and the semantic web alike.
- Eric Peterson (Noblis/National Security Partners)
Due to the decentralized and linked architecture of Linked Open Data, answering complex queries often requires accessing and combining information from multiple datasets. Processing such federated queries in a virtually integrated fashion is becoming increasingly popular. This tutorial will explore the different approaches used for federated query processing over Linked Data. In particular, we will focus on query federation over SPARQL endpoints, Triple Pattern Fragments, and live Linked Data streams. State-of-the-art techniques will be demonstrated with practical examples, along with hands-on exercises carried out by the participants. By the end of the tutorial, participants will have gained hands-on knowledge of federated query processing over Linked Data, understand the main differences between state-of-the-art systems, and be able to position these systems based on their pros and cons.
- Muhammad Saleem (University of Leipzig, Germany)
- Dr. Muhammad Intizar Ali (INSIGHT Centre for Data Analytics, NUI Galway)
- Dr. Ruben Verborgh (Ghent University – iMinds, Belgium)
- Dr. Axel-Cyrille Ngonga Ngomo (University of Leipzig, Germany)
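As a rough illustration of what a federation engine does, the following Python sketch evaluates a two-pattern query over two simulated endpoints using a simple bind join. The endpoint contents, names, and join strategy shown here are invented for illustration; real engines query remote SPARQL endpoints or Triple Pattern Fragments interfaces and use far more sophisticated source selection and join planning.

```python
# Two "endpoints" represented as in-memory triple sets (invented data).
ENDPOINT_A = {("dbpedia:Berlin", "rdf:type", "dbo:City"),
              ("dbpedia:Paris",  "rdf:type", "dbo:City")}
ENDPOINT_B = {("dbpedia:Berlin", "dbo:population", "3700000"),
              ("dbpedia:Rome",   "dbo:population", "2800000")}

def query(endpoint, s, p, o):
    """Evaluate one triple pattern against one endpoint (None = variable)."""
    return [t for t in endpoint
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def federated_join(pattern1, pattern2):
    """Bind the subject variable at endpoint A, then ship each binding to
    endpoint B: a simple bind join over two sources."""
    results = []
    for s, _, _ in query(ENDPOINT_A, *pattern1):
        for _, _, pop in query(ENDPOINT_B, s, pattern2[1], None):
            results.append((s, pop))
    return results

# ?city rdf:type dbo:City .  ?city dbo:population ?pop
print(federated_join((None, "rdf:type", "dbo:City"),
                     (None, "dbo:population", None)))
# → [('dbpedia:Berlin', '3700000')]
```

Only Berlin appears in the result: Paris has no population triple at endpoint B, and Rome has no type triple at endpoint A, which is exactly why federated processing must combine partial answers from several sources.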
In this tutorial, we will (1) give a comprehensive overview of, and hands-on training on, the established, recently added, and expected extensions to the conceptual structures of schema.org for e-commerce, including patterns for ownership and demand; (2) present the full tool chain for producing and consuming the respective data; (3) explain the long-term vision of linked open commerce; (4) describe the main challenges for future research in the field; and (5) discuss advanced topics, such as access control, identity and authentication (e.g. with WebID), micropayment services (like PaySwarm), and data management issues from the publisher and consumer perspectives. We will also cover research opportunities resulting from the growing adoption and the respective amount of data in JSON-LD, RDFa, and Microdata syntaxes.
- Martin Hepp (Universitaet der Bundeswehr Muenchen, Germany)
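To give a flavour of the markup the tutorial covers, here is a small schema.org Offer expressed in JSON-LD; the product, price, and seller are invented placeholders, while the vocabulary terms (`Offer`, `itemOffered`, `price`, `priceCurrency`, `availability`, `seller`) come from schema.org:

```json
{
  "@context": "https://schema.org/",
  "@type": "Offer",
  "itemOffered": {
    "@type": "Product",
    "name": "Example Road Bike"
  },
  "price": "499.00",
  "priceCurrency": "EUR",
  "availability": "https://schema.org/InStock",
  "seller": { "@type": "Organization", "name": "Example Shop" }
}
```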
Ontology-based data access (OBDA) has become a popular paradigm for accessing data stored in legacy sources using Semantic Web technologies. In OBDA, users access the data through a conceptual layer, which provides a convenient query vocabulary abstracting from specific aspects related to the data sources. This conceptual layer is typically expressed as an RDF(S) or OWL ontology, and it is connected to the underlying relational databases using R2RML mappings. When the ontology is queried in SPARQL, the OBDA system exploits the mappings to retrieve elements from the data sources and construct the answers expected by the user. Different approaches for query processing in OBDA have been proposed. We focus here on the virtual approach, which avoids materializing triples retrieved through mappings and answers the SPARQL queries by translating them into SQL queries over the database. In this tutorial we will give the audience a gentle introduction to OBDA covering both practical and theoretical aspects. On the practical side, we will illustrate novel challenges for OBDA arising in two large-scale industrial use cases studied in the European project Optique. We will show how to overcome these challenges through efficient translations from SPARQL to SQL, considering mappings, ontologies, and rules. On the theoretical side, we will present recent theoretical developments underlying these techniques.
- Diego Calvanese (KRDB Research Centre for Knowledge and Data, Free University of Bozen-Bolzano, Italy)
- Martin Rezk (KRDB Research Centre for Knowledge and Data, Free University of Bozen-Bolzano, Italy)
- Guohui Xiao (KRDB Research Centre for Knowledge and Data, Free University of Bozen-Bolzano, Italy)
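The virtual approach described above can be sketched in a few lines of Python: a triple pattern over the ontology vocabulary is rewritten into SQL over the source schema via a mapping. The mapping entries, table, and column names below are invented for illustration; real systems use R2RML mappings and far richer rewritings that also take the ontology and rules into account.

```python
# Hypothetical mapping: ontology property -> (table, subject column, object column)
MAPPING = {
    ":hasName":  ("employee", "id", "name"),
    ":worksFor": ("employee", "id", "dept_id"),
}

def triple_pattern_to_sql(prop, subject=None):
    """Translate the pattern `?s <prop> ?o` (optionally with a bound subject)
    into a SQL query over the mapped relational source. No triples are
    materialized; the data stays in the database."""
    table, s_col, o_col = MAPPING[prop]
    sql = f"SELECT {s_col}, {o_col} FROM {table}"
    if subject is not None:
        sql += f" WHERE {s_col} = {subject!r}"
    return sql

print(triple_pattern_to_sql(":hasName"))
# → SELECT id, name FROM employee
print(triple_pattern_to_sql(":worksFor", subject=42))
# → SELECT id, dept_id FROM employee WHERE id = 42
```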
Provenance metadata describes the origin or history of data; it is central to ensuring data quality and supporting scientific reproducibility, and it plays an increasing role in the emerging domains of healthcare informatics and the Internet of Things (IoT). Indeed, provenance is used to maintain audit trails in financial transactions, to comply with privacy laws, and to facilitate secondary use of health data in research studies. In addition, provenance analytics over Big Data drives Web commerce, which generated $1 trillion worth of business in 2012, as well as trust computations in social media and social network platforms. To address the growing interest in the integration of provenance information and knowledge management systems, this tutorial will weave together three related themes: (1) provenance analytics in “data-driven” research and in the rapidly growing Web of Data (also called Linked Open Data (LOD)), with 31 billion RDF triples; (2) the role of the new World Wide Web Consortium (W3C) PROV specifications for provenance, together with highly scalable database query processing techniques for the W3C Semantic Web used in provenance analysis; and (3) real-world applications of provenance in the emerging discipline of healthcare informatics. The tutorial will be of interest to: (a) academic researchers who are incorporating provenance metadata in their Big Data research for data quality; and (b) developers working on scalable platforms for emerging domain applications, such as IoT, LOD, and healthcare and life sciences. In addition to its meaningful breadth, the tutorial will present key technical topics that have seen significant research. These include provenance modeling and querying, database techniques for querying and indexing W3C RDF datasets in the context of the Web of Data, and building complex provenance-enabled healthcare informatics platforms.
The tutorial will cover the family of W3C PROV specifications, which increasingly underpin provenance in information systems, including the PROV Data Model (PROV-DM), the PROV Ontology (PROV-O), and the PROV constraints. The tutorial developers have experience in both academic research and the development of platforms that integrate provenance with RDF database indexing and querying.
- Satya S. Sahoo (Case Western Reserve University, Cleveland, OH USA)
- Praveen Rao (University of Missouri-Kansas City, MO USA)
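As a taste of PROV-O, here is a small Turtle fragment in the spirit of the healthcare examples above, recording that a de-identified dataset was derived from a cohort dataset by a de-identification activity; all `ex:` resources are invented, while the `prov:` terms come from the W3C PROV Ontology:

```turtle
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/> .

ex:cohortDataset    a prov:Entity .
ex:deidentified     a prov:Entity ;
    prov:wasDerivedFrom    ex:cohortDataset ;
    prov:wasGeneratedBy    ex:deidentification .
ex:deidentification a prov:Activity ;
    prov:used              ex:cohortDataset ;
    prov:wasAssociatedWith ex:dataSteward .
ex:dataSteward      a prov:Agent .
```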
If you think that getting your (or someone else’s) head around OWL is a challenging experience, you have found the right tutorial. There are really just a few foundational building blocks that everything else is built from. There are 1) individual things -- e.g. JaneDoe, 2) kinds of things -- e.g. Organization and 3) kinds of relationships -- e.g. worksFor. That's pretty much it. In this tutorial we will describe the many ways that these things can be combined and used. Most importantly, there are triples that assert relationships between things -- e.g. JaneDoe worksFor Microsoft. There is inference to generate new triples and a few more key things, but not as much as you think. The topics we will cover include: OWL building blocks: Individuals, Classes and Properties; restrictions made intelligible; triples, inference, ABox vs. TBox, and Boolean constructs: Union, Intersection & Complement. We finish the tutorial with common patterns for building ontologies and common pitfalls when learning OWL. If you are getting started in OWL, this tutorial has everything you need and nothing you don't.
- Michael Uschold (Semantic Arts)
- Dave McComb (Semantic Arts)
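The three building blocks named in the abstract can be written down in a few lines of Turtle, using the abstract's own examples; the namespace is invented, and the `rdfs:domain`/`rdfs:range` axioms are added to hint at how inference generates new triples:

```turtle
@prefix :     <http://example.org/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Kinds of things (classes) and kinds of relationships (properties)
:Organization a owl:Class .
:Person       a owl:Class .
:worksFor     a owl:ObjectProperty ;
              rdfs:domain :Person ;
              rdfs:range  :Organization .

# Individual things, and a triple that asserts a relationship between them
:Microsoft a :Organization .
:JaneDoe   a :Person ;
           :worksFor :Microsoft .

# Inference: even if ":JaneDoe a :Person" were omitted, a reasoner would
# derive it from the rdfs:domain axiom on :worksFor.
```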