Solid’s independent test suite is maintained by the test suite panel and sponsored through an Open Collective. You can show your support by donating even just 1 USD there, and we’ll add you or your logo to our list of sponsors.
NB: This does not in any way give you a vote in the contents or the reporting of the test suite, in the Solid spec, or in any aspect of the Solid ecosystem. For that, you should join our W3C-CG.
NLNet Foundation donated 15,000 euros in 2020 as part of the Solid-Nextcloud integration project.
These awesome Solid-related startups collectively sponsor the maintenance of the independent Solid test suite through our Open Collective. Click on their logos to check them out!
Digita | O Team | GraphMetrix | Interition | Ontola | Understory | Startin’blox
And a very big “Thank You” to the following individuals from the Solid community, who are donating through our Open Collective to make the independent Solid test suite possible. You are all awesome!
(anonymous backer) | Sjoerd van Groning | Jan Schill | Travis Vachon | Sharon Stratsianis | Matthias Evering
Your server implementation probably already has its own test coverage. Maybe you already run the LDP test suite, or plan to run Inrupt’s conformance test suite when it comes out. So why run an independent test suite as well? The answer is simple: running more tests against your server will never decrease the amount of information you have.
The more tests you run, the more information you collect. When all tests are green, that confirms what we already thought we knew and improves our confidence. Even better: when test results from different sources contradict each other, the extra information adds up and helps us move forward. This test suite tries to cover all Solid-related protocols and to test only for behaviours that are undisputed in the spec, but it’s evolving and never perfect.
All tests are written from assumptions, and sometimes the same assumption that slipped into your code also slipped into your tests. In that case, the tests will be green for the wrong reasons. This can be as simple as a typo in a predicate that was copied from the same source into both the code and the tests. Easy to fix, but very important for interoperability!
Sometimes we find a test is incorrect or too strict. Sometimes we don’t know what the correct behaviour is. In that case we mark the test as ‘skip’ and open a spec issue for debate, so at least we turn an “unknown unknown” into a “known unknown”. When servers disagree, we need to document the difference. If we can describe the differences with reproducible tests, this will help us all have more detailed spec discussions! :)
Is this test suite a single complete and correct source of truth? The answer is no. Solid is still evolving and although there is a lot of consensus around how a Solid pod server should behave, there is no complete single truth. This test suite is an additional layer of defence that will help you compare your implementation of Solid with those of others! That way, we all collectively become more interoperable, and that will ultimately increase the value of Solid for everyone.
This test suite runs various testers against various servers in a Docker testnet. The testers can also run against live servers over the public internet.
The following Solid pod server implementations have been tested against (parts of) this suite:
For the ‘version’ column, servers have “(each PR)” if their continuous integration is set up to automatically test against each PR. For closed-source servers we list the public instance against which we run the test suite.
# | name | version | prog.lang | IDP | CRUD | WAC | (WPS) | (CON) | (MON) |
---|---|---|---|---|---|---|---|---|---|
1. | Node Solid Server | (each PR) | JavaScript | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
2. | PHP Solid Server | (each PR) | PHP | ✓ | 7) | ✓ | ✓ | ✓ | |
3. | Solid-Nextcloud | (each PR) | PHP | ✓ | 7) | ✓ | ✓ | ✓ | |
4. | Community Solid Server | v1.1.0 | TypeScript | 1) | ✓ | 6) | ✓ | ✓ | |
5. | TrinPod | stage.gr…x.net | Lisp | 1) | ✓ | ✓ | 2) | | |
6. | Inrupt ESS | pod.inrupt.com | Java | 1) | ✓ | 3) | 4) | 5) | |
7. | Reactive-SoLiD | (coming soon!) | Scala | | | | | | |
8. | DexPod | (coming soon!) | Ruby | | | | | | |
9. | Disfluid | (coming soon!) | C | | | | | | |
1) for some servers we have manually tested that they include a working webid-oidc identity provider, but we don’t have the headless-browser tests that confirm this automatically for these servers. The solid-oidc IDP tester page, in contrast, requires human interaction, but with that it can test any publicly hosted IDP.
2) TrinPod will support this in the future
3) Although Inrupt ESS does have a WAC module, this feature is disabled on pod.inrupt.com for various reasons
4) See #136
5) Due to architectural trade-offs, global locks are not supported in Inrupt ESS
6) See #137
7) PSS and Solid-Nextcloud support PATCH with `application/sparql-update` but not with the newly required `text/n3`, see https://github.com/solid/solid-crud-tests/pull/53/files
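To make footnote 7 concrete, here is a hedged sketch of the two PATCH flavours in JavaScript; the resource URL and triples are illustrative, not taken from the test suite, and the snippet assumes an async context:

```javascript
// Illustrative only: the resource URL and triples are made up.
// Older format: a SPARQL Update document.
await fetch('https://server/data.ttl', {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/sparql-update' },
  body: 'INSERT DATA { <#hello> <#linksTo> <#world>. }',
});

// Newer requirement: an N3 Patch document.
await fetch('https://server/data.ttl', {
  method: 'PATCH',
  headers: { 'Content-Type': 'text/n3' },
  body: `@prefix solid: <http://www.w3.org/ns/solid/terms#>.
<#patch> a solid:InsertDeletePatch;
  solid:inserts { <#hello> <#linksTo> <#world>. }.`,
});
```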
When the suite is run locally, a test-suite-report app can be run; see the latest test-suite-report.md. The report currently covers the CRUD and WAC tests of CSS, ESS and NSS.
If more servers offer Access Control Policies as an experimental alternative to Solid’s existing Web Access Control system, the test-suite panel should find a way to create tests for that, too. But as of November 2020, there are no concrete plans for this.
As of 2021, Web Monetization in Solid is an experiment; no real specification has been written for it yet. These versioned tests are meant to help the discussion as it progresses, and the tests themselves are a work in progress, too. If you’re not working on Web Monetization yourself, don’t spend too much time trying to implement this feature. If you’re a Solid app developer wondering which servers to use when experimenting with Web Monetization in your Solid app, these tests might help you find your way. See https://github.com/solid/monetization-tests.
There is an outdated `runTests.sh` script, which is still the best starting point for running, for instance, Kjetil’s RDF-based tests; see old-instructions.md.
To run the test suite against the various servers, it’s best to follow their own CI scripts, see the list above.
The scripts are very similar but have small differences in how they start up the system-under-test. One key step there is obtaining a login cookie. Roughly, it works as follows:

- The server under test runs at `https://server`; a second instance runs at `https://thirdparty`.
- Depending on the server, the tester obtains the cookie with a www-form-urlencoded POST of `username` and `password` to either `/login/password` or `/login`.
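As a minimal sketch (the endpoint path, form field names, and credentials below are server-specific assumptions, not the suite’s actual values), the cookie step could look like this:

```javascript
// Sketch of the cookie-obtaining step; the path (/login/password vs /login)
// and credentials vary per server implementation.
async function obtainCookie(serverRoot, username, password) {
  const res = await fetch(`${serverRoot}/login/password`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ username, password }),
    redirect: 'manual', // the login response typically redirects; we only need the cookie
  });
  // Readable with node-fetch; browsers hide the Set-Cookie header.
  return res.headers.get('set-cookie');
}

// In an async context:
const cookie = await obtainCookie('https://server', 'alice', '123');
```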
Using the cookie, the testers go through the WebID-OIDC dance (adding the Cookie header to each HTTP request). This allows the testers to get their DPoP tokens signed, which they can then use to construct `Authorization` and `DPoP` headers.
We use solid-auth-fetcher for this, specifically its obtainAuthHeaders functionality.
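For illustration only (the exact call signature of obtainAuthHeaders may differ; check solid-auth-fetcher itself), the obtained headers are then attached to each request, roughly like this:

```javascript
// Hedged sketch: assume authHeaders is the { Authorization, DPoP } pair
// produced by obtainAuthHeaders; the values below are placeholders.
const authHeaders = {
  Authorization: 'DPoP <access-token>',
  DPoP: '<signed-dpop-proof>',
};
const res = await fetch('https://server/private/notes.ttl', {
  headers: authHeaders,
});
console.log(res.status); // 200 once the WebID-OIDC dance succeeded
```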
The webid-provider-tests stop when the tester successfully obtains auth headers.
The solid-crud-tests can run unauthenticated (which is what Community Solid Server currently does), or with `Authorization` and `DPoP` headers.
The web-access-control-tests have to run authenticated. To pass these tests, the server currently needs to be an identity provider as well as a wac+crud storage. The ‘Alice’ identity on the server should have full R/W/A/C access (accessTo+default) to the entire pod. The tests then instantiate two Solid Logic instances, one for ‘Alice’ on https://server, and one for ‘Bob’ on https://thirdparty. Through those, Alice will edit her ACL documents to give Bob various kinds of access, and then Bob will test various operations.
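As a rough illustration of the kind of step involved (the resource path, WebIDs, and `aliceAuthHeaders` below are assumptions, not the suite’s actual test data), Alice granting Bob read access could look like:

```javascript
// Illustrative only: Alice writes an ACL document granting Bob Read access
// while keeping full access for herself. All names are made up.
const aliceAuthHeaders = {
  Authorization: 'DPoP <access-token>', // placeholders, as in the sketch above
  DPoP: '<signed-dpop-proof>',
};

const acl = `@prefix acl: <http://www.w3.org/ns/auth/acl#>.
<#bobRead> a acl:Authorization;
  acl:agent <https://thirdparty/profile/card#me>;
  acl:accessTo <https://server/shared/doc.ttl>;
  acl:mode acl:Read.
<#aliceFull> a acl:Authorization;
  acl:agent <https://server/profile/card#me>;
  acl:accessTo <https://server/shared/doc.ttl>;
  acl:mode acl:Read, acl:Write, acl:Control.`;

await fetch('https://server/shared/doc.ttl.acl', {
  method: 'PUT',
  headers: { ...aliceAuthHeaders, 'Content-Type': 'text/turtle' },
  body: acl,
});
```

After a step like this, Bob’s tester can probe the resource from `https://thirdparty` and check that exactly the granted operations succeed.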
See also: