Re-decentralizing the Web
Solid is a proposed set of standards and tools for building decentralized Web applications based on Linked Data principles.
Read more on solidproject.org.
Solid Technical Reports
Home Page: https://solidproject.org/TR/
License: MIT License
Define what constitutes the addition, removal, and updating of normative items.
These criteria are not set in stone, but they should be stable enough to serve as a guideline we can refer back to.
Aside: we assume that for each potential feature there is a UCR, e.g. #9.
For example, to add normative functionality, there needs to be some number (x) of implementations (even rough ones, but in that region) before it makes it into the spec. This doesn't have to happen at the WD stage, but it becomes more of a solid (no pun intended) requirement as the spec matures, along the lines of a CR.
To remove, there should be a collective shift in focus or technology in the ecosystem, e.g. TLS + HTML keygen + browser certificate UX breaking down in ways that are outside the control of the specs and the individuals involved. Update with something equivalent if possible.
To update, show that the change preserves existing functionality but simplifies the process.
See also W3C Technical Report Development Process: https://w3c.github.io/w3process/#Reports
The current Web Access Control spec states that you cannot delete the ACL resource for the root container of a user's account.
The root container of a user's account MUST have an ACL resource specified. (If all else fails, the search stops there.)
How does the server indicate to a client application that an ACL resource cannot be deleted?
One way to figure this out today is for the client to check every available parent container for an ACL resource, but this can be an expensive operation.
After storing an RDFa document, it should be retrievable as Turtle or JSON-LD.
Two use cases:
If a resource is readable, the Last-Modified header should be returned.
(Originally opened by @csarven)
(Moved from linkeddata/gold#64)
From RFC 7232:
An origin server SHOULD send Last-Modified for any selected
representation for which a last modification date can be reasonably
and consistently determined, since its use in conditional requests
and evaluating cache freshness ([RFC7234]) results in a substantial
reduction of HTTP traffic on the Internet and can be a significant
factor in improving service scalability and reliability.
https://tools.ietf.org/html/rfc7232#section-2.2.1
Opening this issue to start discussion on moving forward with recursive deletion of members within a container. Related to solid/solid-spec#172
I propose that we allow this at the specification level; it would let agents remove a container without first reading and then deleting all of its members.
An example of when this would be useful: an agent no longer wants to use an application that had previously been given Write access to a specific folder within the agent's storage pod. The user may not have rights to all directories within that container, but it is still their storage pod, to do with as they please. If we allow recursive deletion, the agent doesn't need Read or Write access to those member resources, but can still delete the entire folder.
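The proposed semantics can be illustrated with an in-memory model (a hypothetical store, not an implementation): everything under the container is removed in one operation, without the agent enumerating or having per-member access first.

```python
def delete_recursive(storage: dict[str, str], container: str) -> list[str]:
    """Delete `container` and everything beneath it from a prefix-keyed
    in-memory store. Mirrors the proposal: the agent needs control of the
    container itself, not Read/Write on each individual member."""
    victims = [uri for uri in storage if uri.startswith(container)]
    for uri in victims:
        del storage[uri]
    return sorted(victims)
```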
To complete solid/solid#14, we need to record what to do with incoming requests after deletion. At the very least, we should make sure that nobody gets the same URI so that they can impersonate the former account owner.
As noted there, we should respond with 410 if nothing else has been recorded, but it would be even better to allow users to register their new home and respond with a 301. At the very least, their new WebID should be saved.
I thought about using the old .htaccess format for this: we would just dump such a file in the former home directory of the user.
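The decision logic is small; a sketch, with the forwarding store standing in for whatever file format we settle on (the names here are illustrative):

```python
def respond_for_deleted(uri: str, forwarding: dict[str, str]):
    """After an account is deleted: 301 to a registered new home if one
    was recorded for this URI, otherwise 410 Gone. Never 404, so the URI
    can't silently be reassigned and impersonated."""
    new_home = forwarding.get(uri)
    if new_home is not None:
        return 301, {"Location": new_home}
    return 410, {}
```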
Solid needs to bridge to, and specify, versioning in the context of LDP and Memento.
See also:
In the section "Reading data using SPARQL" I suggest instead using the SEARCH method (see the recent draft-snell-search-method-00 RFC, which is currently being discussed on the HTTP mailing list and is gaining momentum).
I have already implemented this in rww-play, as described on that curl interaction page:
$ curl -X SEARCH -k -i -H "Content-Type: application/sparql-query; charset=UTF-8" \
--cert ../eg/test-localhost.pem:test \
--data-binary @../eg/couch.sparql https://localhost:8443/2013/couch
HTTP/1.1 200 OK
Content-Type: application/sparql-results+xml
Content-Length: 337
<?xml version='1.0' encoding='UTF-8'?>
<sparql xmlns='http://www.w3.org/2005/sparql-results#'>
<head>
<variable name='D'/>
</head>
<results>
<result>
<binding name='D'>
<literal datatype='http://www.w3.org/2001/XMLSchema#string'>Comfortable couch in Artist Stables</literal>
</binding>
</result>
</results>
</sparql>
Given that most other WebDAV methods are implemented (see issue solid/solid-spec#3), this should be an easy addition, and it seems less ad hoc than what is currently being suggested, namely:
GET /data/ HTTP/1.1
Host: example.org
Query: SELECT * WHERE { ?s ?p ?o . }
Just to make sure we have this tracked: #12 (and I'm sure other places too!) uses the term 'data pod' consistently, yet the agreed terminology (within inrupt at least) was to use the term 'Pod' everywhere. Personally, I think the word 'data' in 'data pod' is redundant anyway, so my suggestion is to replace all references to 'data pod' with 'Pod'.
The ability to control who/what can access specific resources in your pod is great. An important complement is to be able to see how and when they are using this access.
Being able to query how the access you've given is being used is a great way to determine whether those grants are too broad, still necessary, or being abused.
Extreme caution needs to be taken in how this is designed, lest it turn into an avenue for denial of service. For example, limiting it only to those requests that successfully pass authorization would restrict any recorded activity to entities that should have some semblance of trust already. Similarly, recording and updating only aggregate metrics may reduce resource usage without losing much practical value.
My proposal is to make the separation of WebID provision and Pod provision a Solid requirement, even when both are provided by the same party.
Why?
Because if a Solid user is dissatisfied with the service of their Pod provider and would like to leave, they should be able to do so without being inconvenienced.
Inconveniences are brought about by making access to your data dependent on a service which is why Solid separates the app from the data. However, making identity dependent on a service could also be an inconvenient factor that would make an individual hesitate to leave a Pod Provider even if they are unhappy with the service.
An example of identity being used as a bargaining chip in the past is mobile phone numbers. Although it is technically possible to transfer a mobile number from one provider to another, the process was so cumbersome that people would give up. https://en.wikipedia.org/wiki/Mobile_number_portability
Create an API to enable users to download or export their account data. Useful for backing up data, or for moving data to a server that doesn't support an alternate 'copy'/migrate mechanism.
Note: This issue is related but separate from #12 - Enable users to migrate/copy their accounts to another server.
Open considerations:
To do:
- ldnode: create issue
(Moved from solid/solid-spec#68)
This issue is to discuss possible ways to enable portability of user data.
(Originally opened by @nicola)
(Moved from solid/solid-spec#72)
Portability specifically means that "the user can take their data elsewhere", whenever they wish. It is a combination of the following features:
(This is a more detailed proposal continuation of issue solid/solid#49.)
We need a bandwidth-efficient method to copy data to and from Solid pods.
Imagine you're building a 'Save to Solid' widget / app / browser extension. The idea is - the user is browsing some Web resource (a PDF file, or an image, or a video, etc), and would like to save it to their pod (to be able to tag it and do other sorts of CMS stuff on it).
Currently, the only way to perform this operation would be multi-step:
Putting aside the implementation details, this presents two problems: temporary storage space (if the file is being held in a JavaScript variable in a web app, or in LocalStorage, this quickly becomes challenging when the resource is large) and bandwidth.
This is especially problematic on resource-constrained clients (like mobile apps or browsers) -- the user first has to use their mobile data to download a file temporarily, and then use mobile data to upload that resource to their pod.
COPY Method
Note: As @RubenVerborgh points out, we should separate the problem / use case from the proposed solution.
Add a new Solid-specific LDP method, COPY, inspired by the existing WebDAV COPY method.
This proposed solution lets pods play to their strength, to act as always-connected high(er) bandwidth servers. Specifically, it allows a client to issue a single COPY command, and the server would perform the necessary data transfer (using its own bandwidth, not the client's).
This would be just a single step:
To copy FROM an external URL, https://example.com/example.pdf, TO a user's Pod, alice.inrupt.net:
COPY /papers/example.pdf HTTP/1.1
Host: alice.inrupt.net
Source: https://example.com/example.pdf
HTTP/1.1 201 Created
Date: Mon, 23 May 2019 22:38:34 GMT
Content-Type: application/pdf
Location: https://alice.inrupt.net/papers/example.pdf
COPY Method Specs
This method is idempotent, but not safe (see Section 9.1 of RFC 2616). Responses to this method must not be cached.
(Note, this is what's currently implemented on node-solid-server; copying of containers is not implemented.)
When the source resource is not a collection, the result of the COPY method is the creation of a new resource at the destination whose state and behavior match that of the source resource as closely as possible.
To copy a resource FROM an external URL TO a Solid pod:
COPY <destination url>
Source: <source url>
(Note that this is backwards from the current WebDAV semantics, which use COPY <source url> with a Destination: <destination url> header; see below.)
(Question: Should this be supported?)
See the WebDAV COPY for Collections spec for discussion of what's involved, including the handling of recursion via the Depth: header.
WebDAV's COPY is intended primarily for transferring data out of the WebDAV server and into an external destination. Its syntax is: COPY <source url> with the Destination: <destination url> header.
It does not, however, support the common use case where the source URL resides on an external server and you want to copy it to yours. (In other words, it does not support a Source: header.)
Since the motivation for Solid's COPY method is the latter (transferring from an external resource to a Solid pod), the Solid COPY method should support the Source: header (in addition to, or instead of, the Destination: header).
Copying from a non-LDP public Web resource to a container:
A COPY operation requires that the authenticated user has Write access to the destination container.
Copying FROM an LDP container to an LDP container:
A COPY operation requires that the authenticated user has Read access on the source container, and Write access on the destination container.
Interfacing with actual WebDAV servers:
Out of scope for the moment. We just need this as a convenience method to move to and from Solid pods.
- Support the Source: request header, to handle the use case of transferring resources from an external source to a Solid pod. Question: Should the WebDAV-style Destination: header operation be supported as well? (For transferring resources from a pod to another external pod.)
- Implement in node-solid-server, for experimental purposes.
- Handle .acl files when copying resources (for example, if a file has its own .acl, first copy the .acl, and then the resource itself).
- ?? (Discuss whether the use case can be solved using existing LDP methods.)
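The server-side behaviour of the proposed COPY can be sketched as follows. This is a simulation, not an implementation: `fetch` is an injected callable standing in for the pod's own HTTP client (the point of the proposal being that the server, not the client, spends the bandwidth), and error handling, ACLs, and container copying are omitted.

```python
def handle_copy(dest_path: str, source_url: str, fetch, storage: dict[str, bytes]):
    """Sketch of the proposed Solid COPY: the server fetches the Source:
    URL with its own bandwidth and stores the body at the destination,
    then answers 201 Created with a Location header."""
    content_type, body = fetch(source_url)  # server-side download
    storage[dest_path] = body               # server-side store
    return 201, {"Location": dest_path, "Content-Type": content_type}
```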
Outline conformance criteria as to what constitutes a "solid spec". Noting that where the "solid spec" inherits behaviours and expectations from other specs, we refer implementations to the test suites of those specs to check for conformance. For example, reusing the LDP Test Suite: https://w3c.github.io/ldp-testsuite/ , as opposed to creating one from scratch in the Solid spec's own test suite.
While this issue may conceptually overlap in parts with "server tests": solid/solid-spec#112 , it focuses on the spec implementation.
Proposing that the Solid Editorial Team undertakes a project focused on the coordination and orchestration of the v1.0 Solid Specification.
Goal:
Coordinate the completion of a comprehensive and reliable v1.0 specification for Solid.
Approach
The Solid Editorial Team, supported by additional subject matter experts, will provide guidance, encouragement, and coordination to any known panels, individuals, or groups working on Candidate Proposals to the Solid Specification, helping to orchestrate and optimize those efforts to minimize blocking dependencies and maximize high-quality output that will pass in-depth editorial review.
Background
Now that we have an established process, it's time to use it to coordinate our various workstreams into one concerted effort to complete the v1.0 Solid Specification. We can use the structures we now have in place to focus and encourage the work and time people are spending towards this common aim. I believe that an important responsibility of the editorial team is to proactively provide this guidance and coordination. Issues such as solid/process#135, solid/authorization-panel#36, and solid/authorization-panel#33 underscore the need for this.
Proposed Next Steps:
Each spec should reflect a "Use Cases and Requirements" document.
Edit:
The primary goal of the UCR document is to illustrate the specification scope. An easy-to-understand narrative to describe situations that are applicable to the Solid specifications.
It is not particularly useful (or even meaningful) to have a use cases document for each spec in isolation. For starters, we only need one shared UCR document for the whole Solid ecosystem. At a later date, the UCR document can be extended or split into multiple documents depending on the classes of products and specification categories (as per #138).
The UCR document must have Requirements derived from Use Cases. It is good practice to also include, in the same document, the User Stories from which the Use Cases are derived.
In order to support both the editing of the UCR document and the authoring and editing of the specs in the ecosystem, proposed user stories and use cases should be accompanied by provenance information, e.g. authors, supporters, implementers, surveys, and other documentation. Participants should add +1/+0/0/-0/-1 survey results to each story/case to denote their reasoning, along the lines of:
This is not intended to be an exhaustive "how-to". The main point is to share the use cases, be clear about the scope of the specifications, and refer back to the documentation to support the decision process.
There is no way to find out whether a user is authorized to write to (or delete from) a container without actually trying to write (or delete) something.
Develop an API/workflow recommendation for users to be able to securely migrate or copy their account (and potentially all of its data) to another Solid server or compatible service provider.
Considerations:
See also:
(Discussion of moving/migrating an account extracted from the original solid-spec account export issue).
Discussed briefly in #26: there are several upgrade mechanisms from HTTP to HTTPS. We currently suggest 301, but the semantics of 301 in relation to RDF don't seem very settled, even though they have been discussed for a long time. The question was raised in the TAG but doesn't seem to have reached a conclusion.
There's also RFC2817.
Create a test suite for implementations to check their conformance to the Solid spec (#282).
I find spec generator workflows unnecessarily complicated and their outputs constrained.
We need full control over the output if we want it machine-readable to the highest degree.
The LDN spec managed this just fine, and there were no compromises.
So, can we stick to plain HTML+RDFa editing, and along the way open the possibility of using Solid-centric tools?
(Derived from https://github.com/solid/specification/pull/13/files#r305292422 )
The Privacy Considerations section has a subsection for "Identifiable Information".
We need to determine what's deemed to be identifiable information and express that in terms of (non)normative text... and so we also know what should be in a test suite.
Then we can revisit statements like:
In order to prevent leakage of non-resource data, error responses SHOULD NOT contain identifiable information.
So, if we know the set or categories of identifiable information, then the recommendation could switch to MUST NOT, unless we also cover exceptions. I presume there is no need to explore exceptions that would allow the inclusion of identifiable information in error responses.
All specifications should be up front about what to expect, i.e. whether specs are versioned or follow a living-standard model.
To avoid complications down the line and to learn from what's nearby, I'd like to point to https://www.w3.org/2019/04/WHATWG-W3C-MOU.html
Discuss.
When writing a test for OPTIONS, it is not clear what the conformance criteria would be. In particular, how should multiple field values be treated? Would an implementation be conformant if it had a subset of the indicated values? Exactly the same set? A superset?
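Whatever answer we settle on, the comparison itself should be set-based, since HTTP list fields are unordered. A sketch of one possible reading (superset allowed); the function names are illustrative:

```python
def allow_header_values(header: str) -> set[str]:
    """Parse a comma-separated field such as Allow into a set of method
    tokens; HTTP list fields are unordered and case-insensitive here."""
    return {v.strip().upper() for v in header.split(",") if v.strip()}

def covers(actual: str, required: str) -> bool:
    """One possible conformance reading: an implementation passes if it
    advertises at least the required methods (a superset is acceptable).
    Requiring exact equality would instead use == on the two sets."""
    return allow_header_values(required) <= allow_header_values(actual)
```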
The current mechanism to list resources in a Solid container does not support an important social media use case.
Say I'm implementing a social blog service (like LiveJournal or Facebook) that allows me to specify access control on each post: public, private (only I can see and read those posts), and friends-only.
Using the current Solid spec and implementation, I cannot simply put my posts as resources in a /blog/ container and set their corresponding ACL files. This is because when an unauthenticated user (or an authenticated user who is not on my friends list) does a List Resources request on the /blog/ container, they will see all of the posts listed on my blog, even private or friends-only ones.
In other words, as a non-friend, if I do GET dmitri.databox.me/blog/, I will see that it contains:
- public post 1 (which means I have read access to it, no problem)
- private post 2 (I can see it on the list, but get a 401 / access denied when I try to request it)
- friends-only post 3 (again, I can see it on the list but get access denied)
This is completely unusable for implementing private/hidden resources in the context of a social media app.
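One conceivable fix (an assumption for illustration, not the recorded proposal) is for the server to filter the container listing per reader, so members only appear to agents who can Read them:

```python
def visible_members(container: list[str], reader: str, can_read) -> list[str]:
    """Filter a container listing so each reader sees only the members
    they have Read access to. `can_read(reader, uri)` stands in for the
    server's ACL check; a sketch, not the agreed design."""
    return [uri for uri in container if can_read(reader, uri)]
```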
After a discussion with @deiu, @sandhawke and @nicola, the proposal is to:
URI Normalization is needed for consistency when comparing URIs.
RFC 3986 gives some guidelines, but that is hardly enough, as we already noticed with the http vs https debate we had because of breakage with vocabs that were being loaded. It is also extremely important for components that do any kind of querying to agree on how to tell whether two URIs refer to the same resource. Another area where it matters is HTTP proxies: if a proxy has a different idea of a resource than the server behind it, that can lead to very strange bugs, even security failures, if the proxy and the server don't agree on which ACL applies to which resource.
We cannot eliminate false negatives but we should minimize their impact by being strict when we can.
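As a starting point, the syntax-based normalization of RFC 3986 section 6.2.2 is cheap and uncontroversial; a sketch (percent-encoding and dot-segment normalization are omitted here):

```python
from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": "80", "https": "443"}

def normalize_uri(uri: str) -> str:
    """Syntax-based URI normalization per RFC 3986: lowercase the scheme
    and host, drop default ports, and default an empty path to "/"."""
    parts = urlsplit(uri)
    scheme = parts.scheme.lower()
    host = parts.hostname.lower() if parts.hostname else ""
    netloc = host
    if parts.port is not None and str(parts.port) != DEFAULT_PORTS.get(scheme, ""):
        netloc = f"{host}:{parts.port}"
    path = parts.path or "/"
    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))
```

Being strict here is what reduces the false negatives mentioned above: two components that both normalize this way will at least agree on the easy cases.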
I've seen that node-solid-server implements a WAC-Allow header for identifying which access is granted for the requested resource. Is this also part of the Solid API spec, or just a feature of this specific implementation?
Related issue in NSS: nodeSolidServer/node-solid-server#246
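For reference, the header carries grants like `user="read write", public="read"`; a client-side parse can be sketched as below (the grammar here follows what node-solid-server emits and should be treated as an assumption, not normative):

```python
import re

def parse_wac_allow(header: str) -> dict[str, set[str]]:
    """Parse a WAC-Allow header such as 'user="read write", public="read"'
    into {group: {access modes}}."""
    grants = {}
    for group, modes in re.findall(r'(\w+)\s*=\s*"([^"]*)"', header):
        grants[group] = set(modes.split())
    return grants
```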
I think we need to review our use of WebSockets.
Right now, WebSockets are supported in a standalone package, which has not received much attention. The use cases that motivate WebSockets could, to a great extent, be addressed using HTTP/2 and SSE, even though HTTP/2 doesn't entirely replace WebSockets.
We should discuss whether to bring such use cases under the HTTP umbrella, support HTTP/2 for Solid, and integrate SSE into the platform.
This allows client code to be smart about whether to give the user editing interfaces for data or just view interfaces.
It also allows clients to know whether a resource is open to the public, which might affect, for example, whether the user is invited to make a public link, like, bookmark, etc. of the resource.
On 3rd August 2019, @namedgraph_twitter asked "what is the scope of the Solid spec?" on the solid/solid-spec Gitter channel.
There was a conversation answering that question, and it would be good to record it in an issue for future reference.
How can /foo be accessed with content negotiation such that it physically maps to corresponding files /foo.html, /foo.ttl, etc.? Is the mapping something that Solid already provides (or could provide), or is that out of scope?
If this is missing, I think it is an important enough feature to have. If, for example, /foo is described like: /foo dcterms:hasFormat /foo.ttl . /foo.ttl a dcterms:MediaType ; rdfs:label "text/turtle" . and so on, Solid can silently do the rewriting and give the corresponding response. Something along these lines could give way to having cool URIs.
I think one thing we need to look at more closely is how we (humans) or a Solid server do the mapping. One way may be for Solid servers to automatically respond to requests for paths which have a "well known" extension, i.e. if foo.ttl, foo.html, and foo.rdf exist, then /foo will rewrite to one of those based on conneg.
Currently, the spec says "A Solid data pod MUST conform to the LDP specification".
Would it make sense to clarify that Solid supports only a subset of the LDP spec (specifically, just Basic Containers)?
Clarify the notion of mutable and immutable resources.
Describe the interface.