Comments (11)
On prefixes we had similar discussions in the SHACL-SPARQL work and noted that prefix declarations are not really an RDF graph concept, but merely a feature of serializations. They do not necessarily "survive" round-tripping, so they are generally not reliable, as you also say. However, we need to keep in mind that some implementations of a long-URI policy may in fact store these URIs as real strings, and in that case we should aim at keeping the URIs as short as reasonable. A catalog of prefixes such as
[ rdf, rdfs, owl, sh, xsd, skos ]
would hopefully be quite easy to agree on and would shorten the majority of triples considerably, especially with datatypes and in common cases like rdf:type and rdfs:comment triples. These hard-coded abbreviations reduce memory consumption and also improve human-readability.
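A minimal sketch of how such a fixed catalog could shorten IRIs, assuming the six namespaces proposed above; the function name and catalog layout are illustrative, not part of any specification:

```python
# Hypothetical fixed prefix catalog, as suggested in the comment above.
# The namespace IRIs are the standard W3C ones; everything else here
# (function name, dict layout) is an illustrative assumption.
PREFIXES = {
    "http://www.w3.org/1999/02/22-rdf-syntax-ns#": "rdf",
    "http://www.w3.org/2000/01/rdf-schema#": "rdfs",
    "http://www.w3.org/2002/07/owl#": "owl",
    "http://www.w3.org/ns/shacl#": "sh",
    "http://www.w3.org/2001/XMLSchema#": "xsd",
    "http://www.w3.org/2004/02/skos/core#": "skos",
}

def abbreviate(iri: str) -> str:
    """Replace a known namespace with its fixed prefix; otherwise keep the full IRI."""
    for ns, pfx in PREFIXES.items():
        if iri.startswith(ns):
            return f"{pfx}:{iri[len(ns):]}"
    return f"<{iri}>"

print(abbreviate("http://www.w3.org/1999/02/22-rdf-syntax-ns#type"))  # rdf:type
```

Because the catalog is hard-coded rather than declared per document, the abbreviation is unambiguous and survives round-tripping.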
With hash numbers, how would they uniquely identify triples? They cannot be parsed back.
from rdf-star.
Would you help me understand your reason why prefixes should be absolutely avoided?
Are the URL string length restrictions relevant for IRIs?
OK, Base64 is an option (assuming we agree that the :rdf4j part can simply be removed in a standardized form).
Comments:
- N-Triples doesn't use any namespace abbreviations, which causes quite a bit of bloat, e.g. when xsd:date has to be spelled out each time. I would argue that for brevity we should define standard prefixes and require their use.
- Base64 is not human-readable, while URL-encoded strings are at least manageable
Why did you use Base64? Is it producing shorter URIs on average?
Is RDF4J ever storing these long URIs internally or does it use SPO pointers and only produces the URIs when needed (i.e. rarely)?
- I think readability is not an important requirement, since when you go 2-3 levels of nesting, you'll get an unreadable mess no matter what encoding you use.
- Limiting length is a legitimate requirement
- Holger points out a requirement of parsability (invertibility). I hadn't thought about it, but I now think it's important, e.g. to parse and reconstruct RDF* from N-Triples*
Using a set of fixed prefixes is a very small step towards limiting length and doesn't solve the problem.
Eg what would be the encoding of this RDF* triple:
<<:Michail_Sholokhov :wrote "<full text of And Quiet Flows the Don, all 5k pages of it>" >>
:disputedBy :A_Chernov.
I think we need to pick some compression method.
Eg EXI https://en.wikipedia.org/wiki/Efficient_XML_Interchange uses Huffman coding for representing XML efficiently on constrained (IoT) devices.
See https://www.w3.org/TR/exi/, https://www.w3.org/TR/2009/WD-exi-evaluation-20090407/
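EXI is XML-specific, but the trade-off being proposed can be sketched with any general-purpose compressor. Here is an illustrative (not normative) comparison using DEFLATE via Python's `zlib`, with URL-safe Base64 on top so the result can live inside an IRI; the function names and the sample triple are assumptions for the sake of the example:

```python
import base64
import zlib

def encode_plain(ntriples: str) -> str:
    """URL-safe Base64 over the raw N-Triples string, for comparison."""
    return base64.urlsafe_b64encode(ntriples.encode("utf-8")).decode("ascii")

def encode_compressed(ntriples: str) -> str:
    """URL-safe Base64 over DEFLATE-compressed N-Triples."""
    raw = zlib.compress(ntriples.encode("utf-8"), 9)
    return base64.urlsafe_b64encode(raw).decode("ascii")

def decode_compressed(encoded: str) -> str:
    """The encoding stays invertible, satisfying the parsability requirement."""
    return zlib.decompress(base64.urlsafe_b64decode(encoded)).decode("utf-8")

# A long literal (standing in for the book text above) compresses well;
# a short triple may actually grow slightly due to DEFLATE overhead.
triple = ('<http://ex.org/Michail_Sholokhov> <http://ex.org/wrote> "'
          + "And quiet flows the Don. " * 200 + '" .')
print(len(encode_plain(triple)), len(encode_compressed(triple)))
```

Note the layering this implies (compression, then Base64, possibly then URL-encoding), which is exactly the complexity trade-off discussed in the reply below.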
It is quite easy to come up with cases where any algorithm will behave poorly. Going down multiple levels of nesting (i.e. statements about statements about statements) is one of those, but is this really happening in practice? Likewise, if anyone stores a whole book text as an RDF literal then the database will suffer no matter what.
I am open to compression algorithms assuming their trade-off is worth it. Keep in mind that we are talking about URIs, so any compressed binary format may require an extra level of URL-encoding. So you'd end up with quite a layering of algorithms that add up, complicating the assessment. QNames already solve compression in the RDF world, but they only work if we either define a comprehensive catalog of common prefixes or another mechanism to safely reference local prefixes (which I don't think is possible).
A proper scientific approach here would be to collect realistic sample data and then let the conversion algorithms do their work to compare size versus serialization/parsing performance, and then also readability (which I wouldn't want to give up on yet). The problem then becomes a matter of proper engineering.
So: does anyone have some example data?
GraphDB and rdf4j use urn:rdf4j:triple:xxx
where xxx stands for the Base64 URL-safe encoding of the N-Triples representation of the embedded triple.
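A small sketch of the scheme as described (Base64 URL-safe encoding of the N-Triples representation under the `urn:rdf4j:triple:` prefix); this is an approximation from the description above, not GraphDB's or RDF4J's actual code:

```python
import base64

PREFIX = "urn:rdf4j:triple:"

def encode_triple(ntriples: str) -> str:
    """Wrap an embedded triple's N-Triples form in a urn:rdf4j:triple: IRI."""
    b64 = base64.urlsafe_b64encode(ntriples.encode("utf-8")).decode("ascii")
    return PREFIX + b64

def decode_triple(iri: str) -> str:
    """Invert the encoding -- the parsability requirement mentioned above."""
    assert iri.startswith(PREFIX)
    return base64.urlsafe_b64decode(iri[len(PREFIX):]).decode("utf-8")

nt = '<http://example.org/bob> <http://xmlns.com/foaf/0.1/age> "23" .'
iri = encode_triple(nt)
assert decode_triple(iri) == nt
```

URL-safe Base64 matters here: the standard alphabet's `+` and `/` are not safe inside IRIs, while `-` and `_` are.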
I vote against relying on prefixes because they can be redefined locally and even `xsd` is not standardized (some people use `xs`).
Can we use some short hash instead of base64?
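For illustration, a hash-based identifier might look like the sketch below (the `urn:triple:sha256:` scheme and digest length are hypothetical). It shows the trade-off raised earlier in the thread: the IRI is short and fixed-length, but one-way, so the embedded triple cannot be parsed back out and a store would need a lookup table from digest to triple:

```python
import hashlib

def hash_iri(ntriples: str) -> str:
    """Derive a short, fixed-length identifier from the N-Triples form.
    One-way: unlike Base64, the digest cannot be decoded back into the
    triple, so invertibility (parsability) is lost."""
    digest = hashlib.sha256(ntriples.encode("utf-8")).hexdigest()[:16]
    return "urn:triple:sha256:" + digest
```

The IRI length stays constant no matter how large the embedded triple is, which addresses the URL-length concern but gives up round-tripping.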
The way GraphDB does it is perfect IMO.
N-Triples doesn't use any namespace abbreviations
Exactly 👍
prefix declarations are not really an RDF graph concept, but merely a feature of serializations.
Yes, prefixes should absolutely be avoided.
However, we need to keep in mind that some implementations of a long-URI policy may in fact store these URIs as real strings, and in that case we should aim at keeping the URIs as short as reasonable.
I wouldn't worry about implementation in this regard. We should focus on the serialization, the data model does not change; implementors will choose the appropriate data structures.
The mention of long URLs is interesting. As of today, the de facto maximum URL string length widely supported on the interwebz is about 2,000 characters, which would leave about 2,500 characters worth of content unencoded.
Are the URL string length restrictions relevant for IRIs?
Ah yes, I meant to mention that I could see this becoming a concern for dereferencing long URLs which encode several layers of embedded RDF* triples this way. Although I imagine it would likely never happen in practice.
Prefixes should be avoided mainly because they introduce ambiguity to an otherwise canonical form. If prefixes are allowed, then there can be two IRIs which encode semantically equivalent triples but manifest as different strings. While it may reduce string length and ease readability, it comes at a great cost to implementations since they must first normalize every string before storing or comparing. Also, prefixes are not in any way intrinsic to the specification (e.g., there is no ontology or set of vocabulary terms RDF-star uses other than maybe `rdf`) so selecting a set of prefixes would be rather arbitrary and preferential.
A long time ago, I flagged this discussion as relevant to `semantics`, but in retrospect, it seems to me that it is more about implementations. Semantically, this method raises problems as long as blank nodes are involved, because the blank node label that will be put in the IRI is irrelevant for the semantics (actually, it is even irrelevant for the abstract syntax). Of course, implementations can rely on that internally, and "do the right thing" under the hood with blank node labels.
Therefore, refiling this issue as `discussion`, and removing the `semantics` label. Shout if you disagree.