py-brickschema's Introduction

Brick Ontology Python package

Documentation available at readthedocs

Installation

The brickschema package requires Python >= 3.8. It can be installed with pip:

pip install brickschema

The brickschema package offers several installation options for configuring reasoning. The default bundled OWLRL reasoner delivers correct results, but exhibits poor performance on large or complex ontologies (we have observed runtimes of minutes to hours) due to its brute-force implementation.

The Allegro reasoner has better performance and implements enough of the OWLRL profile to be useful. We execute AllegroGraph in a Docker container, which requires the docker package. To install support for the AllegroGraph reasoner, use

pip install brickschema[allegro]

The reasonable reasoner offers even better performance than the Allegro reasoner, but is currently packaged only for Linux and macOS. To install support for the reasonable reasoner, use

pip install brickschema[reasonable]

Quickstart

The main Graph object is just a subclass of the excellent rdflib.Graph, so all features of rdflib.Graph will also work here.

Brief overview of the main features of the brickschema package:

import brickschema

# creates a new rdflib.Graph with a recent version of the Brick ontology
# preloaded.
g = brickschema.Graph(load_brick=True)
# OR use the absolute latest Brick:
# g = brickschema.Graph(load_brick_nightly=True)
# OR create from an existing model
# g = brickschema.Graph(load_brick=True).from_haystack(...)

# load in data files from your file system
g.load_file("mbuilding.ttl")
# ...or by URL (using rdflib)
g.parse("https://brickschema.org/ttl/soda_brick.ttl", format="ttl")

# perform reasoning on the graph (edits in-place)
g.expand(profile="owlrl")
g.expand(profile="shacl") # infers Brick classes from Brick tags

# validate your Brick graph against built-in shapes (or add your own)
valid, _, resultsText = g.validate()
if not valid:
    print("Graph is not valid!")
    print(resultsText)

# perform SPARQL queries on the graph
res = g.query("""SELECT ?afs ?afsp ?vav WHERE  {
    ?afs    a       brick:Air_Flow_Sensor .
    ?afsp   a       brick:Air_Flow_Setpoint .
    ?afs    brick:isPointOf ?vav .
    ?afsp   brick:isPointOf ?vav .
    ?vav    a   brick:VAV
}""")
for row in res:
    print(row)

# start a blocking web server with an interface for performing
# reasoning + querying functions
g.serve("localhost:8080")
# now visit http://localhost:8080 in your browser

Features

brickschema supports a number of optional features:

  • [all]: install all features below (see the example after this list)
  • [brickify]: install the brickify tool for converting metadata from existing sources
  • [web]: allow serving of Brick models over HTTP + web interface
  • [merge]: initial support for merging Brick models with different identifiers together
  • [persistence]: support for saving and loading Brick models to/from disk
  • [allegro]: use the AllegroGraph reasoner
  • [reasonable]: use the reasonable reasoner
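
For example, to pull in every optional feature at once:

pip install brickschema[all]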

Inference

brickschema makes it easier to employ reasoning on your graphs. Simply call the expand method on the Graph object with one of the following profiles:

  • "rdfs": RDFS reasoning
  • "owlrl": OWL-RL reasoning (using 1 of 3 implementations below)
  • "vbis": add VBIS tags to Brick entities
  • "shacl": infer Brick classes from Brick tags, among other things
from brickschema import Graph

g = Graph(load_brick=True)
g.load_file("test.ttl")
g.expand(profile="owlrl")
print(f"Inferred graph has {len(g)} triples")

The package will automatically use the fastest available reasoning implementation for your system:

  • reasonable (fastest; Linux and macOS only for now): pip install brickschema[reasonable]
  • Allegro (next-fastest, requires Docker): pip install brickschema[allegro]
  • OWLRL (default, native Python implementation): pip install brickschema

To use a specific reasoner, pass "reasonable", "allegrograph", or "owlrl" as the value of the backend argument to Graph.expand.
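
For example, to run OWL-RL inference on an explicitly chosen backend (a minimal sketch; "mybuilding.ttl" is a placeholder file name, and the chosen backend must be installed):

from brickschema import Graph

g = Graph(load_brick=True)
g.load_file("mybuilding.ttl")
# force the 'reasonable' backend instead of letting brickschema auto-select one
g.expand(profile="owlrl", backend="reasonable")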

Haystack Translation

brickschema can produce a Brick model from a JSON export of a Haystack model. After exporting the model, you can use this package as follows:

import json
from brickschema import Graph
model = json.load(open("haystack-export.json"))
g = Graph(load_brick=True).from_haystack("http://project-haystack.org/carytown#", model)
points = g.query("""SELECT ?point ?type WHERE {
    ?point rdf:type/rdfs:subClassOf* brick:Point .
    ?point rdf:type ?type
}""")
print(points)

VBIS Translation

brickschema can easily add VBIS tags to a Brick model:

from brickschema import Graph
g = Graph(load_brick=True)
g.load_file("mybuilding.ttl")
g.expand(profile="vbis")

vbis_tags = g.query("""SELECT ?equip ?vbistag WHERE {
    ?equip  <https://brickschema.org/schema/1.1/Brick/alignments/vbis#hasVBISTag> ?vbistag
}""")

Web-based Interaction

brickschema now supports interacting with a Graph object in a web browser. Calling g.serve(<http address>) on a graph object from your Python script or interpreter will start a web server listening (by default) at http://localhost:8080. This uses Yasgui to provide a simple web interface supporting SPARQL queries and inference.

To use this feature, install brickschema with the web feature enabled:

pip install brickschema[web]
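
A minimal sketch of serving a model in the browser ("mybuilding.ttl" is a placeholder file name):

from brickschema import Graph

g = Graph(load_brick=True)
g.load_file("mybuilding.ttl")
# blocks until interrupted; then open http://localhost:8080 in a browser
g.serve("http://localhost:8080")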

Brick model validation

The module utilizes the pySHACL package to validate a building ontology against the Brick Schema, its default constraints (shapes), and user-provided shapes.

from brickschema import Graph

g = Graph(load_brick=True)
g.load_file('myBuilding.ttl')
valid, _, _ = g.validate()
print(f"Graph is valid? {valid}")

# validating using externally-defined shapes
external = Graph()
external.load_file("other_shapes.ttl")
valid, _, _ = g.validate(shape_graphs=[external])
print(f"Graph is valid? {valid}")

The module provides a brick_validate command similar to the pyshacl command. The following invocation is functionally equivalent to the code above:

brick_validate myBuilding.ttl -s other_shapes.ttl

Brickify

To use brickify, install brickschema with the [brickify] feature enabled:

pip install brickschema[brickify]

Usage:

$ brickify [OPTIONS] SOURCE

Arguments:

  • SOURCE: Path/URL to the source file [required]

Options:

  • --input-type TEXT: Supported input types: rac, table, rdf, haystack-v4
  • --brick PATH: Brick.ttl
  • --config PATH: Custom configuration file
  • --output PATH: Path to the output file
  • --serialization-format TEXT: Supported serialization formats: turtle, xml, n3, nt, pretty-xml, trix, trig and nquads [default: turtle]
  • --minify / --no-minify: Remove inferable triples [default: False]
  • --input-format TEXT: Supported input formats: xls, csv, tsv, url, turtle, xml, n3, nt, pretty-xml, trix, trig and nquads [default: turtle]
  • --building-prefix TEXT: Prefix for the building namespace [default: bldg]
  • --building-namespace TEXT: The building namespace [default: https://example.com/bldg#]
  • --site-prefix TEXT: Prefix for the site namespace [default: site]
  • --site-namespace TEXT: The site namespace [default: https://example.com/site#]
  • --install-completion: Install completion for the current shell.
  • --show-completion: Show completion for the current shell, to copy it or customize the installation.
  • --help: Show this message and exit.

Usage examples are available in the brickify documentation.
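
A hypothetical invocation converting a CSV source with a custom configuration (file names are placeholders):

brickify sheet.csv --input-type csv --config template.yml --output building.ttl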

Development

brickschema requires Python >= 3.8. We use pre-commit hooks to automatically run code formatters and style checkers when you commit.

Use Poetry to manage packaging and dependencies. After installing Poetry, install dependencies with:

poetry install

Enter the development environment with the following command (this is analogous to activating a virtual environment):

poetry shell

On first setup, make sure to install the pre-commit hooks for running the formatting and linting tools:

# from within the environment; e.g. after running 'poetry shell'
pre-commit install

Run tests to make sure the build is not broken:

# from within the environment; e.g. after running 'poetry shell'
make test

Docs

Docs are written in reStructuredText. Make sure that you add your package requirements to docs/requirements.txt.

py-brickschema's People

Contributors

dependabot[bot], epaulson, gtfierro, jag-1337, jbulow, meandor, rileywong311, shreyasnagare, wcrd, xyzisinus

py-brickschema's Issues

Brickify output empty.

Hi
I tried to write a few very simple operations to test it out, but I can't seem to get it to run.

I created a very simple example to work on:

Building Name | Heating Group | Pump | Valve | Command
B1 | HG30 | HG30_PU | HG30_Vlv | HG30_Cmd

I want to try mapping the entries and the relations: hasEquipment, hasPart, and hasPoint.

I'm using the brickify command.
brickify Simplified1.csv --output bldg.ttl --input-type csv --config simple.yml

Here, Simplified1.csv is the name of the .csv with the info above and simple.yml is the name of the operator YAML.

The only example I could find in the documentation was for the hasPoint relation, so I'll start with that.

---
namespace_prefixes:
  brick: "https://brickschema.org/schema/Brick#"
operations:
  -
    data: |-
      bldg:{Heating Group} rdf:type brick:Terminal Unit ;
                      brick:hasPoint bldg:{Command} .
      bldg:{Command} rdf:type brick:Command .

I expected to get a node for Terminal Unit called HG30 with a hasPoint relation to a Command node. What I got was an empty bldg.ttl file.

I also tried to map hasPart relationships.

---
namespace_prefixes:
  brick: "https://brickschema.org/schema/Brick#"
operations:
  -
    data: |-
      bldg:{Heating Group} rdf:type brick:Terminal Unit ;
                      brick:hasPart bldg:{Pump} ;
                      brick:hasPart bldg:{Valve} .
      bldg:{Pump} rdf:type brick:Pump .
      bldg:{Valve} rdf:type brick:Valve .

The turtle file for this operator is completely empty as well. I expected a node for Terminal Unit called HG30 with two hasPart relations to the nodes HG30_PU and HG30_Vlv.

Finally, I'd also need the hasEquipment relationship, which I tried mapping like this.

---
namespace_prefixes:
  brick: "https://brickschema.org/schema/Brick#"
operations:
  -
    data: |-
      bldg:{Building Name} rdf:type brick:Building ;
                      brick:hasEquipment bldg:{Heating Group} .
      bldg:{Heating Group} rdf:type brick:Terminal Unit .

Same problem as the others. It runs, but the output bldg.ttl is empty.
Of course, in my later application I'd want to write a combined operator. From my reverse engineering, I think it should look something like this:

---
namespace_prefixes:
  brick: "https://brickschema.org/schema/Brick#"
operations:
  -
    data: |-
      bldg:{Building Name} rdf:type brick:Building ;
                      brick:hasEquipment bldg:{Heating Group} .
      bldg:{Heating Group} rdf:type brick:Terminal Unit ;
                      brick:hasPart bldg:{Pump} ;
                      brick:hasPart bldg:{Valve} ;
                      brick:hasPoint bldg:{Command} .
      bldg:{Pump} rdf:type brick:Pump .
      bldg:{Valve} rdf:type brick:Valve .
      bldg:{Command} rdf:type brick:Command .

I wanted a Building (B1) main node that has the Terminal Unit (HG30) in an equipment relationship, which in turn has a Pump and a Valve in part relationships, and finally a Command in a point relationship (or should that be isControlledBy?).

My main issue is that the output is empty. How do I correct this?

Then secondly, is the operator written correctly for what I intend it to do? If not, how should I correct it?

Something else I also wondered: I'm not sure about how lines are supposed to end. I noticed that sometimes they end with ";" and sometimes they end with ".". I also noticed that if there are more entries with a relationship in the next line, ";" is used. Is that correct? Please tell me when to use ";" and when ".". Finally, what does the "bldg" part at the start of a brick definition do?

There are also some differences from the example shreyasnagare shared with me in my previous post. I also don't understand the subtle syntax differences there, for example the way bricks are defined with "a brick" or the spacing with "-". What is the logic behind this? Also, I noticed various relationship mappings, but with a completely different structure. Furthermore, there are various relationships like "isControlledBy", "hasLocation", and "isLocationOf". Most of them are self-explanatory, but is there a comprehensive list of these relationships, what they mean, and how to define them?

`pytest` should not be a direct dependency

[tool.poetry.dependencies]
python = "^3.7"
rdflib = "^6.2"
owlrl = "^6.0"
pytest = "^6.2"

I see that pytest is a direct dependency of py-brickschema, but this seems like a mistake. As things are now, this creates problems if a project depends on both py-brickschema and the latest major release of pytest (>=7.0).

I suggest we create a new dependency group dedicated solely to test dependencies. Dependencies that are not part of the main group can only be installed through Poetry, so this takes care of the issue. The pyproject.toml file could be refactored like this to solve it (just an example):

diff --git a/pyproject.toml b/pyproject.toml
index d56f873..b63b049 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -12,7 +12,6 @@ readme = "README.md"
 python = "^3.7"
 rdflib = "^6.2"
 owlrl = "^6.0"
-pytest = "^6.2"
 pyshacl = "^0.20"
 requests = "^2.25.0"
 importlib-resources = "^3.3.0"
@@ -31,6 +30,19 @@ rdflib_sqlalchemy = {optional = true, version = "^0.5"}
 BAC0 = {optional = true, version = "^21.12.3"}
 networkx = {optional = true, version="^2.6"}

+[tool.poetry.group.dev.dependencies]
+flake8 = "^3.7"
+pre-commit = "^2.1"
+tqdm = "^4.56.0"
+
+[tool.poetry.group.test.dependencies]
+pytest = "^6.2"
+pytest-xdist = {extras = ["psutil"], version = "^2.3.0"}
+
+[tool.poetry.group.docs.dependencies]
+Sphinx = "^4.2"
+sphinx-rtd-theme = "^1.0.0"
+
 [tool.poetry.scripts]
 brick_validate = "brickschema.bin.brick_validate:main"
 brickify = "brickschema.brickify.main:app"
@@ -47,14 +59,6 @@ bacnet = ["BAC0"]
 networkx = ["networkx"]
 all = ["docker","click-spinner", "tabulate", "Jinja2", "xlrd", "PyYAML", "typer", "Flask", "dedupe", "colorama", "reasonable", "sqlalchemy", "rdflib_sqlalchemy", "BAC0", "networkx"]

-[tool.poetry.dev-dependencies]
-pre-commit = "^2.1"
-flake8 = "^3.7"
-tqdm = "^4.56.0"
-pytest-xdist = {extras = ["psutil"], version = "^2.3.0"}
-Sphinx = "^4.2"
-sphinx-rtd-theme = "^1.0.0"
-
 [build-system]
 requires = ["setuptools", "poetry_core>=1.0"]
 build-backend = "poetry.core.masonry.api"

Warnings on import

When I first import brickschema I get the warnings below. I installed it with pip install brickschema[reasonable] in a new Python 3.8.6 venv.

/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/__init__.py:177: UserWarning: Code: dateTimeStamp is not defined in namespace XSD
  from . import DatatypeHandling, Closure
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/RDFSClosure.py:40: UserWarning: Code: dateTimeStamp is not defined in namespace XSD
  from owlrl.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/RDFSClosure.py:40: UserWarning: Code: length is not defined in namespace XSD
  from owlrl.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/RDFSClosure.py:40: UserWarning: Code: maxExclusive is not defined in namespace XSD
  from owlrl.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/RDFSClosure.py:40: UserWarning: Code: maxInclusive is not defined in namespace XSD
  from owlrl.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/RDFSClosure.py:40: UserWarning: Code: maxLength is not defined in namespace XSD
  from owlrl.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/RDFSClosure.py:40: UserWarning: Code: minExclusive is not defined in namespace XSD
  from owlrl.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/RDFSClosure.py:40: UserWarning: Code: minInclusive is not defined in namespace XSD
  from owlrl.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/RDFSClosure.py:40: UserWarning: Code: minLength is not defined in namespace XSD
  from owlrl.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/RDFSClosure.py:40: UserWarning: Code: pattern is not defined in namespace XSD
  from owlrl.AxiomaticTriples import RDFS_Axiomatic_Triples, RDFS_D_Axiomatic_Triples
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/OWLRL.py:53: UserWarning: Code: dateTimeStamp is not defined in namespace XSD
  from .XsdDatatypes import OWL_RL_Datatypes, OWL_Datatype_Subsumptions
/home/david/.pyenv/versions/3.8.6/envs/brick/lib/python3.8/site-packages/owlrl/OWLRLExtras.py:64: UserWarning: Code: dateTimeStamp is not defined in namespace XSD
  from .RestrictedDatatype import extract_faceted_datatypes

Tag inference not working

I have a simple example file that validates correctly (see attached instances.ttl).

I'm trying to run SELECT * WHERE { ?x brick:hasTag ?y }; this works in Stardog Studio with reasoning enabled (although it also returns (brick:Steam, tag:Gas) and (brick:Steam, tag:Steam), which isn't ideal).

It's not working using the py-brickschema reasoners, though. For example:

g = Graph(load_brick=True)
g.load_file("instances.ttl")
g = BrickInferenceSession().expand(g)
print([t for t in g.query("SELECT * WHERE { ?x brick:hasTag ?y }")])

returns a single result [(brick:hasTag,brick:hasTag)]

In fact, I noticed that the reasoner appears to be inserting triples (?x, ?x, ?x) for every entity in the schema, which explains why the above was returned.

This code works (slowly) though:

g = Graph(load_brick=True)
g.load_file("instances.ttl")
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g.g)

print([t for t in g.g.query("SELECT * WHERE { ?x brick:hasTag ?y }")])

Maybe I'm using the wrong *InferenceSession?

License

This looks like an interesting repo.

Can you add a license? Or can I have explicit permission to use this?

Thanks!

requests not installed with brickschema

Hey Gabe,

python: 3.6.5
brickschema: 0.1.5

pip install brickschema

Collecting brickschema
  Downloading brickschema-0.1.5-py3-none-any.whl (90 kB)
     |████████████████████████████████| 90 kB 653 kB/s
Collecting rdflib<6.0,>=5.0
  Downloading rdflib-5.0.0-py3-none-any.whl (231 kB)
     |████████████████████████████████| 231 kB 1.2 MB/s
Collecting pytest<6.0,>=5.3
  Downloading pytest-5.4.3-py3-none-any.whl (248 kB)
     |████████████████████████████████| 248 kB 768 kB/s
Collecting owlrl<6.0,>=5.2
  Downloading owlrl-5.2.1-py3-none-any.whl (56 kB)
     |████████████████████████████████| 56 kB 1.3 MB/s
Collecting sqlalchemy<2.0,>=1.3
  Downloading SQLAlchemy-1.3.17-cp36-cp36m-macosx_10_14_x86_64.whl (1.2 MB)
     |████████████████████████████████| 1.2 MB 872 kB/s
Collecting pyshacl<0.12.0,>=0.11.4
  Downloading pyshacl-0.11.5-py3-none-any.whl (83 kB)
     |████████████████████████████████| 83 kB 1.3 MB/s
Requirement already satisfied: six in /Users/cmosiman/.pyenv/versions/3.6.5/envs/haste/lib/python3.6/site-packages (from rdflib<6.0,>=5.0->brickschema) (1.14.0)
Collecting isodate
  Using cached isodate-0.6.0-py2.py3-none-any.whl (45 kB)
Collecting pyparsing
  Downloading pyparsing-2.4.7-py2.py3-none-any.whl (67 kB)
     |████████████████████████████████| 67 kB 773 kB/s
Collecting more-itertools>=4.0.0
  Downloading more_itertools-8.4.0-py3-none-any.whl (43 kB)
     |████████████████████████████████| 43 kB 1.0 MB/s
Collecting attrs>=17.4.0
  Using cached attrs-19.3.0-py2.py3-none-any.whl (39 kB)
Requirement already satisfied: importlib-metadata>=0.12; python_version < "3.8" in /Users/cmosiman/.pyenv/versions/3.6.5/envs/haste/lib/python3.6/site-packages (from pytest<6.0,>=5.3->brickschema) (1.6.0)
Collecting packaging
  Downloading packaging-20.4-py2.py3-none-any.whl (37 kB)
Collecting pluggy<1.0,>=0.12
  Using cached pluggy-0.13.1-py2.py3-none-any.whl (18 kB)
Collecting py>=1.5.0
  Downloading py-1.8.2-py2.py3-none-any.whl (83 kB)
     |████████████████████████████████| 83 kB 694 kB/s
Collecting wcwidth
  Downloading wcwidth-0.2.4-py2.py3-none-any.whl (30 kB)
Collecting rdflib-jsonld>=0.4.0
  Downloading rdflib-jsonld-0.5.0.tar.gz (55 kB)
     |████████████████████████████████| 55 kB 1.0 MB/s
Requirement already satisfied: zipp>=0.5 in /Users/cmosiman/.pyenv/versions/3.6.5/envs/haste/lib/python3.6/site-packages (from importlib-metadata>=0.12; python_version < "3.8"->pytest<6.0,>=5.3->brickschema) (3.1.0)
Building wheels for collected packages: rdflib-jsonld
  Building wheel for rdflib-jsonld (setup.py) ... done
  Created wheel for rdflib-jsonld: filename=rdflib_jsonld-0.5.0-py2.py3-none-any.whl size=15348 sha256=f8c49154dd8e911420faae581b9b8cc2417b4768b1d381fbf2bfb9ff4157ce63
  Stored in directory: /Users/cmosiman/Library/Caches/pip/wheels/e4/72/6e/144dce8b21558be176168b23bd8d7062c21cbbe9596c3926ec
Successfully built rdflib-jsonld
Installing collected packages: isodate, pyparsing, rdflib, more-itertools, attrs, packaging, pluggy, py, wcwidth, pytest, rdflib-jsonld, owlrl, sqlalchemy, pyshacl, brickschema
Successfully installed attrs-19.3.0 brickschema-0.1.5 isodate-0.6.0 more-itertools-8.4.0 owlrl-5.2.1 packaging-20.4 pluggy-0.13.1 py-1.8.2 pyparsing-2.4.7 pyshacl-0.11.5 pytest-5.4.3 rdflib-5.0.0 rdflib-jsonld-0.5.0 sqlalchemy-1.3.17 wcwidth-0.2.4

However, I then go to run a server and get:

cmosiman-34078s:haste cmosiman$ python manage.py runserver
/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/pandas/compat/__init__.py:117: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError.
  warnings.warn(msg)
/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/pandas/compat/__init__.py:117: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError.
  warnings.warn(msg)
Watching for file changes with StatReloader
2020-06-20:00:09:26,310 INFO    [autoreload.py:598] Watching for file changes with StatReloader
Exception in thread django-main-thread:
Traceback (most recent call last):
  File "/Users/cmosiman/.pyenv/versions/3.6.5/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/Users/cmosiman/.pyenv/versions/3.6.5/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/django/utils/autoreload.py", line 53, in wrapper
    fn(*args, **kwargs)
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/django/core/management/commands/runserver.py", line 109, in inner_run
    autoreload.raise_last_exception()
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/django/utils/autoreload.py", line 76, in raise_last_exception
    raise _exception[1]
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/django/core/management/__init__.py", line 357, in execute
    autoreload.check_errors(django.setup)()
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/django/utils/autoreload.py", line 53, in wrapper
    fn(*args, **kwargs)
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/django/__init__.py", line 24, in setup
    apps.populate(settings.INSTALLED_APPS)
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/django/apps/registry.py", line 114, in populate
    app_config.import_models()
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/django/apps/config.py", line 211, in import_models
    self.models_module = import_module(models_module_name)
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 994, in _gcd_import
  File "<frozen importlib._bootstrap>", line 971, in _find_and_load
  File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/Users/cmosiman/Github/Haste/haste/generate/models.py", line 6, in <module>
    from lib.helpers import Shadowfax
  File "/Users/cmosiman/Github/Haste/haste/lib/helpers.py", line 8, in <module>
    from brickschema.inference import HaystackInferenceSession
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/brickschema/__init__.py", line 13, in <module>
    from . import graph, inference, namespaces, validate
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/brickschema/validate.py", line 10, in <module>
    from rdflib.plugins.sparql import prepareQuery
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/rdflib/plugins/sparql/__init__.py", line 37, in <module>
    from .processor import prepareQuery, processUpdate
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/rdflib/plugins/sparql/processor.py", line 18, in <module>
    from rdflib.plugins.sparql.evaluate import evalQuery
  File "/Users/cmosiman/.pyenv/versions/haste/lib/python3.6/site-packages/rdflib/plugins/sparql/evaluate.py", line 20, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'

I then ran pip install requests and it works as expected. Not sure why, but I wanted to point it out to you.

py-brickschema and Oxigraph

We have RDF data in an Oxigraph database and we want to use the py-brickschema module to process it. I have two questions:
1. I was able to obtain the content of the database using an HTTP POST with a SPARQL query, and I have a Python string with triples formatted as CSV, XML, or JSON. How can it be imported into a Graph? Below is a code snippet using the XML format that is not working. What is wrong here?

    headers =  {"Accept" : "application/sparql-results+xml", "Content-Type" : "application/x-www-form-urlencoded"}
    query = {"query" : 'select ?s ?p ?o where {  ?s ?p ?o . } limit 1000 '}
    response = requests.post(api_url, data=query, headers=headers)   
    g = Graph(load_brick=True)
    g.parse(data=response.content, format="application/rdf+xml")

2. Is there a way to connect directly to the remote Oxigraph database from py-brickschema (instead of exporting triples with a query as described in question 1)?

brickschema v0.3.0 ~ graph.expand("tag") --> Invalid profile: 'tag'

Hey @gtfierro ,

Just upgraded my local brickschema package from 0.2.5 to 0.3.0 and now I can't validate with tags...
graph.expand(profile="tag") # infers Brick classes from Brick tags

Am I missing something?

Traceback (most recent call last):
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/liam/spectral/spectral_brick/spectral_brick/__main__.py", line 78, in <module>
    g = load_validate_seeded_graph(adapters=adapters, logger=logger, settings=settings)
  File "/home/liam/spectral/spectral_brick/spectral_brick/__main__.py", line 55, in load_validate_seeded_graph
    graph.expand(profile="tag")  # infers Brick classes from Brick tags
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/site-packages/brickschema/graph.py", line 225, in expand
    raise Exception(f"Invalid profile '{profile}'")
Exception: Invalid profile 'tag'

Tagset Inference misses some Classes

There's an issue with the order in which inference first computes the most likely target and then filters to remove points or equips, which can result in missing classes. I noticed this when I tried to adapt the HaystackInferenceSession for my own use.

In particular, I believe Chilled Water Valve Command should map to ['Chilled_Water_Valve', 'Valve_Command'], since there's both equipment and a point implied by the tags.

The issue is that in infer_entity the method most_likely_tagsets is called twice: first with the set of tags minus {"point","sensor","command","setpoint","alarm","status","parameter","limit"} plus {"equip"}, with the result filtered to retain only equipment, and then again with the original set of tags, with the result filtered to retain only points.

The problem is that there's filtering inside most_likely_tagsets which results in it returning Chilled Water Valve in both cases, so the point is never inferred.

The easiest fix is to lift the filtering by class type from infer_entity into most_likely_tagsets so it only considers candidates of the correct type.

The code below seems to work; I added is_valid_class as an argument and use it to pre-filter. It may be better to simply pass a string representation, plus I couldn't be bothered thinking about how to make it a list comprehension.

    def most_likely_tagsets(self, orig_s, num=-1, is_valid_class=lambda x:True):
        """
        Returns the list of likely classes for a given set of tags,
        as well as the list of tags that were 'leftover', i.e. not
        used in the inference of a class

        Args:
            tagset (list of str): a list of tags
            num (int): number of likely tagsets to be returned; -1 returns all

        Returns:
            results (tuple): a 2-element tuple containing (1)
            most_likely_classes (list of str): list of Brick classes
            and (2) leftover (set of str): list of tags not used

        """
        s = set(map(_to_tag_case, orig_s))
        tagsets_ = self.lookup_tagset(s)

        # filter classes which aren't of required type
        tagsets = []
        for classes,tagset in tagsets_:
            classes = [c for c in classes if is_valid_class(c)]
            if len(classes) > 0:
                tagsets.append((classes,tagset))

        if len(tagsets) == 0:
            # no tags
            return [], orig_s
        # find the highest number of tags that overlap
        most_overlap = max(map(lambda x: len(s.intersection(x[1])), tagsets))

        # return the class with the fewest tags >= the overlap size
        candidates = list(
            filter(lambda x: len(s.intersection(x[1])) == most_overlap, tagsets)
        )

        # When calculating the minimum difference, we calculate it from the
        # perspective of the candidate tagsets because they will have more tags
        # We want to find the tag set(s) who has the fewest tags over what was
        # provided
        min_difference = min(map(lambda x: len(x[1].difference(s)), candidates))
        most_likely = list(
            filter(lambda x: len(x[1].difference(s)) == min_difference, candidates)
        )

        leftover = s.difference(most_likely[0][1])
        most_likely_classes = list(set([list(x[0])[0] for x in most_likely]))
        # return most likely classes (list) and leftover tags
        # (what of 'orig_s' wasn't used)
        if num < 0:
            return most_likely_classes, leftover
        else:
            return most_likely_classes[:num], leftover

Issue with VersionedGraphCollection undoing/redoing "timelines"

When I think of undo/redo, an analogy is that with Google Docs or Word, I can undo text I've typed; it will then give me the option to redo those undos, but if I type something new, that redo option disappears.

However, I realize that undo/redo can mean something different in the context of graphs. For VersionedGraphCollection, I can redo even if I've made changes since an undo. How should undo/redo work for a VersionedGraphCollection in this regard?

Below is some code where I add some changesets, undo one, then make new changes, and am still able to redo:

from brickschema.persistent import VersionedGraphCollection
from brickschema.namespaces import BRICK, A

vgc = VersionedGraphCollection(uri="sqlite://")

with vgc.new_changeset("project") as cs:
  cs.add((BRICK["11111"], A, BRICK.Building)) # add some dummy entity "11111"

with vgc.new_changeset("project") as cs:
  cs.add((BRICK["22222"], A, BRICK.Building)) # add some dummy entity "22222"

vgc.undo()

# this new changeset alters the "timeline" like going down a new path,
# so I would think that the previous undo data should lose its point (?)
with vgc.new_changeset("project") as cs:
  cs.add((BRICK["33333"], A, BRICK.Building)) # add some dummy entity "33333"

# I can redo even though that undo should be lost as we are no longer on that "timeline"
# this is code from the VersionedGraphCollection.redo() code, which recovers the most recent undo
with vgc.conn() as conn:
  redo_record = conn.execute(
    "SELECT * from redos " "ORDER BY timestamp ASC"
  ).fetchone()
if redo_record is not None:
  print("CAN REDO")
else:
  print("CANNOT REDO")

# similarly, actually running the redoing does not raise an exception
vgc.redo() 

# when I serialize the graph collection as a whole, it is the version with "11111" and "22222"
# this suggests to me that the redo worked, but in the process it erased the "33333" version
vgc.serialize("Union.ttl") 

# ==================
# @prefix brick: <https://brickschema.org/schema/Brick#> .
#
# brick:11111 a brick:Building .
#
# brick:22222 a brick:Building .
# ==================

# when I serialize just the isolated graph, it only has "11111"
# I don't even know how this happened
g = vgc.graph_at(graph="project")
g.serialize("IndividualGraph.ttl")

# ==================
# @prefix brick: <https://brickschema.org/schema/Brick#> .
#
# brick:11111 a brick:Building .
# ==================

The best way I could think of to showcase the current state of the VersionedGraphCollection/Graph is just to serialize it, but I know that doesn't quite make sense for just looking at the code. Let me know if there could be a better way to showcase the results.

If this is the proper way undo/redo should work, then why does taking the graph at the name "project" differ from the entire VGC?

Overload operators for Graph

Steps to reproduce:

from brickschema import Graph

g = Graph()
g.parse("set.ttl", format="ttl")

h = Graph()
h.parse("subset.ttl", format="ttl")

g = g - h

g.expand("owlrl")

Traceback (most recent call last):
File "test_operators.py", line 11, in
AttributeError: 'Graph' object has no attribute 'expand'

Although the in-place operations (+=/-=) work without any issues, users might do operations like:
a = b + c (and expect to get the same return type as b and c, like instances of the parent class rdflib.Graph do)

brick_validate fails on Building_Meter

Command:
brick_validate building_meter.ttl

Output:
Validation Report
Conforms: False
Results (3):
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
Severity: sh:Violation
Source Shape: [ sh:class unit:Unit ; sh:message Literal("Property hasUnit has object with incorrect type") ; sh:path brick:hasUnit ]
Focus Node: bldg:building_peak_demand
Value Node: unit:KiloW
Result Path: brick:hasUnit
Message: Property hasUnit has object with incorrect type
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
Severity: sh:Violation
Source Shape: [ sh:class unit:Unit ; sh:message Literal("Property hasUnit has object with incorrect type") ; sh:path brick:hasUnit ]
Focus Node: bldg:building_energy_sensor
Value Node: unit:KiloW-HR
Result Path: brick:hasUnit
Message: Property hasUnit has object with incorrect type
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
Severity: sh:Violation
Source Shape: [ sh:class unit:Unit ; sh:message Literal("Property hasUnit has object with incorrect type") ; sh:path brick:hasUnit ]
Focus Node: bldg:building_power_sensor
Value Node: unit:KiloW
Result Path: brick:hasUnit
Message: Property hasUnit has object with incorrect type

Additional info (3 constraint violations with offender hint):

Constraint violation:
[] a sh:ValidationResult ;
sh:focusNode bldg:building_power_sensor ;
sh:resultMessage "Property hasUnit has object with incorrect type" ;
sh:resultPath brick:hasUnit ;
sh:resultSeverity sh:Violation ;
sh:sourceConstraintComponent sh:ClassConstraintComponent ;
sh:sourceShape [ sh:class unit:Unit ;
sh:message "Property hasUnit has object with incorrect type" ;
sh:path brick:hasUnit ] ;
sh:value unit:KiloW .
Offending triple:
bldg:building_power_sensor brick:hasUnit unit:KiloW .

Constraint violation:
[] a sh:ValidationResult ;
sh:focusNode bldg:building_energy_sensor ;
sh:resultMessage "Property hasUnit has object with incorrect type" ;
sh:resultPath brick:hasUnit ;
sh:resultSeverity sh:Violation ;
sh:sourceConstraintComponent sh:ClassConstraintComponent ;
sh:sourceShape [ sh:class unit:Unit ;
sh:message "Property hasUnit has is:issue is:open object with incorrect type" ;
sh:path brick:hasUnit ] ;
sh:value unit:KiloW-HR .
Offending triple:
bldg:building_energy_sensor brick:hasUnit unit:KiloW-HR .

Constraint violation:
[] a sh:ValidationResult ;
sh:focusNode bldg:building_peak_demand ;
sh:resultMessage "Property hasUnit has object with incorrect type" ;
sh:resultPath brick:hasUnit ;
sh:resultSeverity sh:Violation ;
sh:sourceConstraintComponent sh:ClassConstraintComponent ;
sh:sourceShape [ sh:class unit:Unit ;
sh:message "Property hasUnit has object with incorrect type" ;
sh:path brick:hasUnit ] ;
sh:value unit:KiloW .
Offending triple:
bldg:building_peak_demand brick:hasUnit unit:KiloW .

brick_validate fails for everything with a hasUnit property.

Support Pulling Different Development Brick Versions

Should make it possible to configure which version of Brick is loaded into the graph, possibly by using an environment variable. One concrete use case is to support prototyping/testing against a not-yet-released version of Brick, e.g. 1.3.0.

convert Haystack JSON model

After downloading bravo.json, running the following lines:

import json
from brickschema import Graph
model = json.load(open("bravo.json"))
g = Graph(load_brick=True).from_haystack("http://project-haystack.org/bravo#", model)

File "C:\Users*\anaconda3\envs\Brick\lib\site-packages\brickschema\inference.py", line 711, in
entities = {e["id"].replace('"', ""): {"tags": e} for e in entities}

AttributeError: 'dict' object has no attribute 'replace'

brick_validate is not getting installed correctly

It still refers to a relative bin directory from this repository; hopefully a straightforward fix.

$ brick_validate
Traceback (most recent call last):
  File "/home/gabe/.local/bin/brick_validate", line 5, in <module>
    from bin.brick_validate import main
ModuleNotFoundError: No module named 'bin'

newbie tips

Hello,

Can I use py-brickschema to make a Brick model, or would I only use py-brickschema to validate an existing Brick model? Apologies for my ignorance, I have a lot to learn...

I am working with configuration files like the one below, in YAML format, representing BACnet devices inside a building:

devices:
- address: '12:40'
  device_identifier: '601040'
  device_name: MTR-01 BACnet
  points:
  - object_identifier: analog-input,8
    object_name: Outlet Setp. 1
  - object_identifier: analog-input,9
    object_name: Outlet Temp. 1
  - object_identifier: analog-input,10
    object_name: Inlet Temp. 1
  - object_identifier: analog-input,11
    object_name: Flue1/Pool Temp.
  - object_identifier: analog-input,11
    object_name: Firing Rate 1
  read_multiple: true
  scrape_interval: 600

I can modify these at will. I am taking a stab in the dark at what it would look like in Brick. Am I right in assuming people starting out just manually add the metadata into their processes?

devices:
  - address: '12:40'
    device_identifier: '601040'
    device_name: MTR-01 BACnet
    type: Boiler_Controller
    points:
      - object_identifier: analog-input,8
        object_name: Outlet Setpoint 1
        brick_class: Setpoint
        brick_function: Temperature
        brick_location: Boiler_Outlet
      - object_identifier: analog-input,9
        object_name: Outlet Temp 1
        brick_class: Sensor
        brick_function: Temperature
        brick_location: Boiler_Outlet
      - object_identifier: analog-input,10
        object_name: Inlet Temp 1
        brick_class: Sensor
        brick_function: Temperature
        brick_location: Boiler_Inlet
      - object_identifier: analog-input,11
        object_name: Flue1/Pool Temp
        brick_class: Sensor
        brick_function: Temperature
        brick_location: Flue
      - object_identifier: analog-input,11
        object_name: Firing Rate 1
        brick_class: Sensor
        brick_function: Firing_Rate
        brick_location: Boiler
    read_multiple: true
    scrape_interval: 600

I can then read it into Python with a script something like this:

import yaml

def read_bacnet_config(file_path):
    with open(file_path, 'r') as file:
        config = yaml.safe_load(file)
        return config

def print_device_attributes(device_config):
    for device in device_config['devices']:
        print(f"Device Name: {device['device_name']}")
        print(f"Device Identifier: {device['device_identifier']}")
        print(f"Device Address: {device['address']}")
        print("Points:")
        for point in device['points']:
            print(f"  - Object Name: {point['object_name']}")
            print(f"    Object Identifier: {point['object_identifier']}")
            if 'brick_class' in point:
                print(f"    Brick Class: {point['brick_class']}")
            if 'brick_function' in point:
                print(f"    Brick Function: {point['brick_function']}")
            if 'brick_location' in point:
                print(f"    Brick Location: {point['brick_location']}")

# Example usage
config_file_path = '201201_brick.yml'
config = read_bacnet_config(config_file_path)
print_device_attributes(config)

I am still trying to figure out how to properly implement a Brick model manually for some BAS devices inside a building through config files like this, and where py-brickschema would fit into the process. Any tips greatly appreciated.

For this building there would also be similar config files for VAV boxes, AHU, and a chiller.

`load_file()` returns an `[Errno 22] Invalid argument` error

Hi -

When I run the following code I get an [Errno 22] Invalid argument error. The turtle file used is this one.

import brickschema
g = brickschema.Graph(load_brick=False)
g.load_file("./constrain/resources/brick/Brick.ttl")

Here's the full traceback:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python310\lib\site-packages\brickschema\graph.py", line 572, in load_file
    self.parse(filename, format=fmt)
  File "C:\Python310\lib\site-packages\rdflib\graph.py", line 1492, in parse
    parser.parse(source, self, **args)
  File "C:\Python310\lib\site-packages\rdflib\plugins\parsers\notation3.py", line 2021, in parse
    p.loadStream(stream)
  File "C:\Python310\lib\site-packages\rdflib\plugins\parsers\notation3.py", line 479, in loadStream
    return self.loadBuf(stream.read())  # Not ideal
OSError: [Errno 22] Invalid argument

I'm using brickschema 0.7.5 and rdflib 7.0.0 with Python 3.10.8. Any idea as to why this is happening?

Graph.validate() fails when Brick ontology is preloaded to the graph

For a simple graph that only includes the Brick ontology, validation appears to fail.

from brickschema import Graph

g = Graph(load_brick=True)
_, _, report = g.validate()
print(report)

The resulting report:

Validation Report
Conforms: False
Results (38):
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Power_Factor
        Value Node: qudtqk:PowerFactor
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Speed
        Value Node: qudtqk:Speed
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Capacity
        Value Node: qudtqk:Capacity
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Active_Power
        Value Node: qudtqk:ActivePower
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Luminous_Intensity
        Value Node: qudtqk:LuminousIntensity
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Time
        Value Node: qudtqk:Time
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Flow
        Value Node: qudtqk:VolumeFlowRate
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Conductivity
        Value Node: qudtqk:Conductivity
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Voltage
        Value Node: qudtqk:Voltage
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Atmospheric_Pressure
        Value Node: qudtqk:AtmosphericPressure
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Luminance
        Value Node: qudtqk:Luminance
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Torque
        Value Node: qudtqk:Torque
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Radiance
        Value Node: qudtqk:Radiance
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Temperature
        Value Node: qudtqk:Temperature
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Angle
        Value Node: qudtqk:Angle
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Differential_Dynamic_Pressure
        Value Node: qudtqk:DynamicPressure
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Frequency
        Value Node: qudtqk:Frequency
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Absolute_Humidity
        Value Node: qudtqk:AbsoluteHumidity
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Relative_Humidity
        Value Node: qudtqk:RelativeHumidity
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Mass
        Value Node: qudtqk:Mass
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Complex_Power
        Value Node: qudtqk:ComplexPower
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Volume
        Value Node: qudtqk:Volume
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Electric_Power
        Value Node: qudtqk:ElectricPower
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Pressure
        Value Node: qudtqk:Pressure
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Static_Pressure
        Value Node: qudtqk:StaticPressure
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Dewpoint
        Value Node: qudtqk:DewPointTemperature
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Luminous_Flux
        Value Node: qudtqk:LuminousFlux
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Apparent_Power
        Value Node: qudtqk:ApparentPower
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Real_Power
        Value Node: qudtqk:ActivePower
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Enthalpy
        Value Node: qudtqk:Enthalpy
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Illuminance
        Value Node: qudtqk:Illuminance
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Reactive_Power
        Value Node: qudtqk:ReactivePower
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Differential_Temperature
        Value Node: qudtqk:Temperature
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Electric_Current
        Value Node: qudtqk:ElectricCurrent
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Thermal_Energy
        Value Node: qudtqk:ThermalEnergy
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Differential_Static_Pressure
        Value Node: qudtqk:StaticPressure
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Power
        Value Node: qudtqk:Power
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind
Constraint Violation in ClassConstraintComponent (http://www.w3.org/ns/shacl#ClassConstraintComponent):
        Severity: sh:Violation
        Source Shape: bsh:hasQUDTReferenceShape
        Focus Node: brick:Velocity_Pressure
        Value Node: qudtqk:DynamicPressure
        Result Path: brick:hasQUDTReference
        Message: Value does not have class qudt:QuantityKind

VersionedGraphCollection "Redo" Doesn't Bring Changes Back to Same Graph Name

It appears that redo() on a VersionedGraphCollection, while it does add the changeset back to the collection, does not add it back under the same graph name. So if I serialize the entire graph collection after undoing and redoing, the triple appears in the output file, but if I serialize only the specific graph I originally specified, it no longer appears, even though it is part of the graph collection.

from brickschema.persistent import VersionedGraphCollection
from brickschema.namespaces import A, BRICK
from rdflib import Namespace

vg = VersionedGraphCollection("sqlite://")

BLDG = Namespace("test:")

with vg.new_changeset("my-building") as cs:
    cs.add((BLDG.zone1, A, BRICK.HVAC_Zone))

with vg.new_changeset("my-building") as cs:
    cs.add((BLDG.zone2, A, BRICK.HVAC_Zone))

vg.undo()
vg.redo()

vg.serialize("UndoRedo1.ttl")  # serialize the entire graph collection
vg.graph_at(graph="my-building").serialize("UndoRedo2.ttl")  # serialize only the graph "my-building"

A potential fix that seems to work is this (persistent.py, around line 171):

# persistent.py, inside redo()
triple = pickle.loads(row["triple"])
# look up the named graph recorded with the redo record instead of the
# default context, so the change lands back in the original graph
graph = self.get_context(redo_record["graph"])
if row["is_insertion"]:
    graph.remove((triple[0], triple[1], triple[2]))
else:
    graph.add((triple[0], triple[1], triple[2]))

most_likely_tagsets should return classes, not strings

It'd be nice if the most_likely_tagsets call returned actual RDFlib objects instead of just a list of strings, so we could figure out the classes involved (though maybe it's easy to go from the strings to classes?)

The use case is in the recon API: after we predict the class for a query, we need to set an extra 'type' field in the protocol response. (For our use case that's a bit redundant, since we're actually predicting a type, but the protocol specs it because there are other use cases where it's matching actual entities, and an additional type field is useful in the response.) And because it doesn't usually matter, for 'type' we just put back whatever type the client asked to reconcile against, e.g. 'EquipmentClass', 'PointClass', or 'BrickClass', defaulting to BrickClass. Because we just get back a string like "Air_Flow_Pressure_Sensor", we can't do anything better, like look up the superclass of 'Air_Flow_Pressure_Sensor' and return that.

The bummer here is that OpenRefine doesn't actually use the 'manifest' of possible types to decide what types to display to the user to reconcile against. Even before you officially start 'reconciling', OpenRefine just sends a sample of a couple of rows to the server to see what types it gets back, and presents those as the list of possible types. This is wrong, and a bug in OpenRefine - but it hurts Brick specifically, because OpenRefine sends these 'prequery' queries without specifying a type, which in our implementation means we reply that they're of type 'BrickClass', and hence the only type option OpenRefine displays for the real reconciliation process is 'BrickClass'.

Hopefully OpenRefine will fix its limitation and look at the manifest instead of trying to sample what types are possible, but one thing we could do is look at the type we predict and figure out its superclass.
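
For what it's worth, a minimal sketch of that string-to-class round trip, assuming the strings returned by most_likely_tagsets are bare Brick class names (the helper below is hypothetical):

from brickschema import Graph
from brickschema.namespaces import BRICK
from rdflib import RDFS

g = Graph(load_brick=True)

def superclasses_of(tagset):
    # join the bare class name onto the BRICK namespace to recover the URI,
    # then walk one level of rdfs:subClassOf in the loaded ontology
    klass = BRICK[tagset]
    return list(g.objects(klass, RDFS.subClassOf))

print(superclasses_of("Air_Flow_Sensor"))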

BrickShape.ttl targetSubjectsOf

Just wondering for something like the following constraint in the BrickShape.ttl:

bsh:hasLocationRangeShape a sh:NodeShape ;
    sh:property [ sh:class brick:Location ;
            sh:message "Property hasLocation has object with incorrect type" ;
            sh:path brick:hasLocation ] ;
    sh:targetSubjectsOf brick:hasLocation .

Should "sh:targetSubjectsOf" be "sh:targetObjectsOf" instead?

(Similarly for other object constraints.)

Edit:
It seems that the default brick shapes are not loaded in Graph.validate:

def validate(self, shape_graphs=None, default_brick_shapes=True):
    """
    Validates the graph using the shapes embedded w/n the graph. Optionally loads in
    normative Brick shapes and externally defined shapes.

    Args:
      shape_graphs (list of rdflib.Graph or brickschema.graph.Graph): merges these
            graphs and includes them in the validation
      default_brick_shapes (bool): if True, loads in the default Brick shapes
            packaged with brickschema

    Returns:
      (conforms, resultsGraph, resultsText) from pyshacl
    """
    shapes = None
    if shape_graphs is not None and isinstance(shape_graphs, list):
        shapes = functools.reduce(lambda x, y: x + y, shape_graphs)
    return pyshacl.validate(self, shacl_graph=shapes)

Is that intentional or perhaps work in progress?

brickify TableHandler: should we disallow 'query' as an operation?

In looking at the Brickify TableHandler, it supports running 'query' as an operation from the YAML/JSON config. This seems a little odd, since every entry in 'operation' is run once for every line in the CSV, instead of just once over the graph like it is in the Handler class. So what we're doing is running 'query' on the graph over and over again, even in the beginning when the graph might be very small.

I suppose you can be very clever about how you create your CSV and use the 'conditions' option for the 'operation', but this feels kinda dangerous since it'd be relying on the TableHandler always processing the CSV one line at a time, top to bottom.

Maybe it'd be better to disallow 'query' as an operation in the TableHandler family of classes, or have a new flavor of 'operation' that is guaranteed to only run once - like in a TableHandler we could run the 'query' operations but only after all of the rows from the input CSV files have loaded and been run through the 'translate' function?

As a related crazy idea, maybe we add a way for Brickify to load some sort of 'base' graph first? Like you'd load up a base graph and then run the TableHandler to append the CSV data, processed line by line, to that base graph? I think the 'query' operation would make more sense in that case?

Question: How could I separate between ontology, site and inferred data?

Hey there, I want to get your opinion on something that I'm thinking of doing:

I want to do inference over datasets but would ideally like to keep clear boundaries between the ontology data used (i.e. Brick, QUDT, etc.), the input site data (e.g. :Sensor a brick:Temperature_Sensor) and the inferred (output) data (e.g. :Sensor brick:hasTag brick:Temperature). Since inference is always an additive process, this separation would easily allow me to change the site data (or ontology data) without worrying about the data that has been previously inferred (I can drop all inferred data and re-do inference).

I've been browsing the rdflib and brickschema source to get more familiar with them and I think that a way this might be doable is by reimplementing brickschema.Graph.expand on top of rdflib.Dataset. Separation could then be handled by adding a graph per each category of data (e.g. ontology data, site data, inferred data), doing inference on top of it all and storing the results in the inferred data graph.

Does this sound reasonable? Would something like this work? Is there a chance it might lead to complications (e.g. around querying)? Is there maybe an easier way to do it?
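
A minimal sketch of the layout described above, using plain rdflib named graphs (the graph names and the inference step are illustrative assumptions, not a brickschema API):

from rdflib import Dataset, Namespace

EX = Namespace("urn:example#")
ds = Dataset()

ontology = ds.graph(EX.ontology)   # Brick, QUDT, etc. get loaded here
site = ds.graph(EX.site)           # asserted site data only
inferred = ds.graph(EX.inferred)   # reasoner output only

# ... run inference over ontology + site, writing new triples into `inferred` ...

# re-doing inference is then just a matter of dropping one named graph:
ds.remove_graph(inferred)
inferred = ds.graph(EX.inferred)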

Property shape performs differently when using blank nodes vs. explicit nodes

Hey Gabe - let me know if I'm doing anything obviously incorrect. I'm running v0.2.0.

I define the expected data graph:

@prefix brick: <https://brickschema.org/schema/1.1/Brick#> .
@prefix ex: <https://example.com#> .

# Should validate
ex:vav-01 a brick:Variable_Air_Volume_Box ;
    brick:hasPoint ex:p1,
        ex:p2 .
ex:p1 a brick:Discharge_Air_Temperature_Sensor .
ex:p2 a brick:Damper_Position_Sensor .

# Should NOT validate
ex:vav-02 a brick:Variable_Air_Volume_Box ;
    brick:hasPoint ex:p1 .

I then define two shapes graphs that I believe should resolve to identical shapes. The first way, with blank nodes, works as expected:

@prefix brick: <https://brickschema.org/schema/1.1/Brick#> .
@prefix bsh: <https://brickschema.org/schema/1.1/BrickShape#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <https://example.com#> .

bsh:VAVShape1 a sh:NodeShape ;
    sh:targetNode ex:vav-01, ex:vav-02 ;
    sh:property [ 
        sh:path brick:hasPoint ;
        sh:qualifiedValueShape [
            sh:class brick:Discharge_Air_Temperature_Sensor ;
        ] ;
        sh:qualifiedMinCount 1 ;
        sh:qualifiedMaxCount 1 ;
    ] ;
    sh:property [ 
        sh:path brick:hasPoint ;
        sh:qualifiedValueShape [
            sh:class brick:Damper_Position_Sensor ;
        ] ;
        sh:qualifiedMinCount 1 ;
        sh:qualifiedMaxCount 1 ;
    ] .

I get the following validation errors (expected):

Validation Report
Conforms: False
Results (1):
Constraint Violation in QualifiedValueShapeConstraintComponent (http://www.w3.org/ns/shacl#QualifiedMinCountConstraintComponent):
	Severity: sh:Violation
	Source Shape: [ rdf:type rdfs:Resource ; sh:path brick:hasPoint ; sh:qualifiedMaxCount Literal("1", datatype=xsd:integer) ; sh:qualifiedMinCount Literal("1", datatype=xsd:integer) ; sh:qualifiedValueShape [ rdf:type rdfs:Resource ; sh:class brick:Damper_Position_Sensor ] ]
	Focus Node: ex:vav-02
	Result Path: brick:hasPoint

Additional info (1 constraint violations with offender hint):

Constraint violation:
[] a sh:ValidationResult ;
    sh:focusNode ex:vav-02 ;
    sh:resultPath brick:hasPoint ;
    sh:resultSeverity sh:Violation ;
    sh:sourceConstraintComponent sh:QualifiedMinCountConstraintComponent ;
    sh:sourceShape [ a rdfs:Resource ;
            sh:path brick:hasPoint ;
            sh:qualifiedMaxCount 1 ;
            sh:qualifiedMinCount 1 ;
            sh:qualifiedValueShape [ a rdfs:Resource ;
                    sh:class brick:Damper_Position_Sensor ] ] .
Violation hint (subject predicate cause):
ex:vav-02 brick:hasPoint "sh:qualifiedMinCount <1>" .

The second way, with explicitly defined property shapes:

@prefix brick: <https://brickschema.org/schema/1.1/Brick#> .
@prefix bsh: <https://brickschema.org/schema/1.1/BrickShape#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <https://example.com#> .

bsh:VAVShape1 a sh:NodeShape ;
    sh:targetNode ex:vav-01, ex:vav-02 ;
    sh:property ex:hasDischargeAirTemperatureSensor ;
    sh:property ex:hasDamperPositionSensor .

ex:hasDischargeAirTemperatureSensor a sh:PropertyShape ;
    sh:path brick:hasPoint ;
    sh:qualifiedValueShape [
        sh:class brick:Discharge_Air_Temperature_Sensor ;
    ] ;
    sh:qualifiedMinCount 1 ;
    sh:qualifiedMaxCount 1 .

ex:hasDamperPositionSensor a sh:PropertyShape ;
    sh:path brick:hasPoint ;
    sh:qualifiedValueShape [
        sh:class brick:Damper_Position_Sensor ;
    ] ;
    sh:qualifiedMinCount 1 ;
    sh:qualifiedMaxCount 1 .

I get the following error:

Traceback (most recent call last):
  File "br-test1.py", line 14, in <module>
    result = v.validate(dataG, shacl_graphs=[shapeG])
  File "/Users/cmosiman/.pyenv/versions/validation/lib/python3.7/site-packages/brickschema/validate.py", line 129, in validate
    self.violationList = self.__attachOffendingTriples()
  File "/Users/cmosiman/.pyenv/versions/validation/lib/python3.7/site-packages/brickschema/validate.py", line 172, in __attachOffendingTriples
    self.__triplesForOneViolation(violation)
  File "/Users/cmosiman/.pyenv/versions/validation/lib/python3.7/site-packages/brickschema/validate.py", line 246, in __triplesForOneViolation
    cObj = f"<{self.__violationPredicateObj(violation, cPred)}>"
  File "/Users/cmosiman/.pyenv/versions/validation/lib/python3.7/site-packages/brickschema/validate.py", line 210, in __violationPredicateObj
    assert len(res) == 1, f"Must have predicate '{predicate}'"
AssertionError: Must have predicate 'sh:qualifiedMinCount'

Thoughts?

PersistentGraph/VersionedGraph hard-code parameters for create and identifier

In persistent.py, PersistentGraph and VersionedGraph hard-code parameters that were previously configurable:

  • the identifier parameter used by rdflib-sqlalchemy is set to "brickschema_persistent_graph" and "my_store", respectively
  • create=True when opening the graph.

This makes it impossible to connect to existing repositories created with a different identifier, or to recognize that a repository expected to exist does not.
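
A sketch of the kind of signature the issue is asking for (hypothetical, not the current API):

from rdflib import plugin
from rdflib.store import Store
from rdflib_sqlalchemy import registerplugins
from brickschema import Graph

registerplugins()

class ConfigurablePersistentGraph(Graph):
    # hypothetical: surface the currently hard-coded values as arguments
    def __init__(self, uri, identifier="brickschema_persistent_graph",
                 create=False, *args, **kwargs):
        store = plugin.get("SQLAlchemy", Store)(identifier=identifier)
        # open before Graph.__init__ so prefix binding sees a live store;
        # create=False lets a missing repository surface as an error
        store.open(uri, create=create)
        super().__init__(store, *args, **kwargs)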

[Question] Has there been an unintentional bump in ontology data shipped with the library for Brickschema v1.3.0?

I'm using py-brickschema to generate some data and I just happened to notice some new equipment classes in the generated output, for example brick:Water_Storage_Tank, which was a bit surprising.

I went to check the ontology on the website but I could not find those classes in either 1.3.0 or Nightly. Digging a bit through the commit history of Brickschema, I found that brick:Water_Storage_Tank was introduced with BrickSchema/Brick@679a75e after 1.3.0 was released, and the change was pulled into py-brickschema with 1cd6117.

While it is not really breaking anything for me, it does look like this was unintentional since the 1.3.0 version shipped with py-brickschema is not the same as the 1.3.0 version of the official release.

I don't want to jump to conclusions but, if there is some hardship with managing Brickschema versions, I think it's worth looking into Semantic Versioning combined with Conventional Commits.

Consider supporting rdflib_sqlalchemy?

Hey @gtfierro

I finally found what I've been looking for almost right under my nose...

from rdflib import plugin, RDF, Namespace
from rdflib.graph import Graph
from rdflib.store import Store
from rdflib_sqlalchemy import registerplugins

from brickschema import Graph as bGraph

registerplugins()

SQLALCHEMY_URL = "postgresql+psycopg2://liam:pw@localhost:5432/spectral_brick"

store = plugin.get("SQLAlchemy", Store)(identifier="my_store")
graph = Graph(store, identifier="my_graph")
graph.open(SQLALCHEMY_URL, create=True)
BRICK = Namespace("https://brickschema.org/schema/1.2/Brick#")
graph.bind("brick", BRICK)
BLDG = Namespace("https://topology.sbp.spectral.energy/test#")
graph.add((BLDG.Building, RDF.type, BRICK.Building))

result = graph.query("select * where {?s ?p ?o} limit 10")

for subject, predicate, object_ in result:
    print(subject, predicate, object_)

graph.close()

When I run this setup with the brickschema Graph object, I get:

Traceback (most recent call last):
  File "test_rdflib_psql.py", line 13, in <module>
    graph = bGraph(store, identifier="my_graph")
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/site-packages/brickschema/graph.py", line 38, in __init__
    ns.bind_prefixes(self)
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/site-packages/brickschema/namespaces.py", line 24, in bind_prefixes
    graph.bind("rdf", RDF)
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/site-packages/rdflib/graph.py", line 932, in bind
    return self.namespace_manager.bind(
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/site-packages/rdflib/graph.py", line 326, in _get_namespace_manager
    self.__namespace_manager = NamespaceManager(self)
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/site-packages/rdflib/namespace.py", line 363, in __init__
    for p, n in self.namespaces():  # self.bind is not always called
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/site-packages/rdflib/namespace.py", line 570, in namespaces
    for prefix, namespace in self.store.namespaces():
  File "/home/liam/miniconda3/envs/spectral_brick/lib/python3.8/site-packages/rdflib_sqlalchemy/store.py", line 675, in namespaces
    with self.engine.connect() as connection:
AttributeError: 'NoneType' object has no attribute 'connect'
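
A possible workaround sketch (an assumption, not a confirmed fix): open the SQLAlchemy store before constructing the brickschema Graph, so that the prefix binding in Graph.__init__ finds a live engine:

from rdflib import plugin
from rdflib.store import Store
from rdflib_sqlalchemy import registerplugins
from brickschema import Graph as bGraph

registerplugins()

SQLALCHEMY_URL = "postgresql+psycopg2://liam:pw@localhost:5432/spectral_brick"

store = plugin.get("SQLAlchemy", Store)(identifier="my_store")
store.open(SQLALCHEMY_URL, create=True)  # engine exists before bind_prefixes runs
graph = bGraph(store, identifier="my_graph")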

Running inference .expand(..) does not add inverse relationships, also adds redundant owl:sameAs triples

I noticed that the OWL-RL expand adds owl:sameAs axioms for every node, targeting itself?

If we take the example apartment.ttl from Brick (slightly modified to add a setpoint; it has 23 asserted triples in total) and run inference, I end up with 150 triples, the majority (entirety even?) of which are of the form :a owl:sameAs :a -- well, obviously :D

Also, none of the inverse relationships are inferred? E.g., for all of the brick:isPartOf references, I would have expected inference to add the corresponding brick:hasPart axioms. Isn't that the whole point of this?

apartment input:

@prefix apt: <apartment#> .
@prefix brick: <https://brickschema.org/schema/Brick#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
#@prefix qudt: <http://qudt.org/schema/qudt/> .
@prefix unit: <http://qudt.org/vocab/unit/> .

apt:bedroom a brick:Room ;
    brick:isPartOf apt:bedroom_lighting,
        apt:my_apartment,
        apt:thermostat_zone .

apt:kitchen a brick:Room ;
    brick:isPartOf apt:kitchen_lighting,
        apt:my_apartment,
        apt:thermostat_zone .

apt:living_room a brick:Room ;
    brick:isPartOf apt:living_room_lighting,
        apt:my_apartment,
        apt:thermostat_zone .

apt:bedroom_lighting a brick:Lighting_Zone .

apt:kitchen_lighting a brick:Lighting_Zone .

apt:living_room_lighting a brick:Lighting_Zone .

brick:Apartment rdfs:subClassOf brick:Space .

apt:my_apartment a brick:Apartment .

apt:thermostat_zone a brick:HVAC_Zone .

apt:thermostat1 a brick:Thermostat ;
    brick:isPartOf apt:thermostat_zone .

apt:setpoint1 a brick:Temperature_Setpoint ;
    brick:isPartOf apt:thermostat1 ;
    brick:hasUnit unit:DEG_F .

Inferred output:

@prefix apt: <file:///.../apartment#> .
@prefix brick: <https://brickschema.org/schema/Brick#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix unit: <http://qudt.org/vocab/unit/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

apt:bedroom a brick:Room ;
    owl:sameAs apt:bedroom ;
    brick:isPartOf apt:bedroom_lighting,
        apt:my_apartment,
        apt:thermostat_zone .

apt:kitchen a brick:Room ;
    owl:sameAs apt:kitchen ;
    brick:isPartOf apt:kitchen_lighting,
        apt:my_apartment,
        apt:thermostat_zone .

apt:living_room a brick:Room ;
    owl:sameAs apt:living_room ;
    brick:isPartOf apt:living_room_lighting,
        apt:my_apartment,
        apt:thermostat_zone .

apt:setpoint1 a brick:Temperature_Setpoint ;
    owl:sameAs apt:setpoint1 ;
    brick:hasUnit unit:DEG_F ;
    brick:isPartOf apt:thermostat1 .

rdf:HTML a rdfs:Datatype ;
    owl:sameAs rdf:HTML .

rdf:LangString a rdfs:Datatype ;
    owl:sameAs rdf:LangString .

rdf:PlainLiteral a rdfs:Datatype ;
    owl:sameAs rdf:PlainLiteral .

rdf:XMLLiteral a rdfs:Datatype ;
    owl:sameAs rdf:XMLLiteral .

rdf:type owl:sameAs rdf:type .

rdfs:Literal a rdfs:Datatype ;
    owl:sameAs rdfs:Literal .

rdfs:comment a owl:AnnotationProperty ;
    owl:sameAs rdfs:comment .

rdfs:isDefinedBy a owl:AnnotationProperty ;
    owl:sameAs rdfs:isDefinedBy .

rdfs:label a owl:AnnotationProperty ;
    owl:sameAs rdfs:label .

rdfs:seeAlso a owl:AnnotationProperty ;
    owl:sameAs rdfs:seeAlso .

rdfs:subClassOf owl:sameAs rdfs:subClassOf .

xsd:NCName a rdfs:Datatype ;
    owl:sameAs xsd:NCName .

xsd:NMTOKEN a rdfs:Datatype ;
    owl:sameAs xsd:NMTOKEN .

xsd:Name a rdfs:Datatype ;
    owl:sameAs xsd:Name .

xsd:anyURI a rdfs:Datatype ;
    owl:sameAs xsd:anyURI .

xsd:base64Binary a rdfs:Datatype ;
    owl:sameAs xsd:base64Binary .

xsd:boolean a rdfs:Datatype ;
    owl:sameAs xsd:boolean .

xsd:byte a rdfs:Datatype ;
    owl:sameAs xsd:byte .

xsd:date a rdfs:Datatype ;
    owl:sameAs xsd:date .

xsd:dateTime a rdfs:Datatype ;
    owl:sameAs xsd:dateTime .

xsd:dateTimeStamp a rdfs:Datatype ;
    owl:sameAs xsd:dateTimeStamp .

xsd:decimal a rdfs:Datatype ;
    owl:sameAs xsd:decimal .

xsd:double a rdfs:Datatype ;
    owl:sameAs xsd:double .

xsd:float a rdfs:Datatype ;
    owl:sameAs xsd:float .

xsd:hexBinary a rdfs:Datatype ;
    owl:sameAs xsd:hexBinary .

xsd:int a rdfs:Datatype ;
    owl:sameAs xsd:int .

xsd:integer a rdfs:Datatype ;
    owl:sameAs xsd:integer .

xsd:language a rdfs:Datatype ;
    owl:sameAs xsd:language .

xsd:long a rdfs:Datatype ;
    owl:sameAs xsd:long .

xsd:negativeInteger a rdfs:Datatype ;
    owl:sameAs xsd:negativeInteger .

xsd:nonNegativeInteger a rdfs:Datatype ;
    owl:sameAs xsd:nonNegativeInteger .

xsd:nonPositiveInteger a rdfs:Datatype ;
    owl:sameAs xsd:nonPositiveInteger .

xsd:normalizedString a rdfs:Datatype ;
    owl:sameAs xsd:normalizedString .

xsd:positiveInteger a rdfs:Datatype ;
    owl:sameAs xsd:positiveInteger .

xsd:short a rdfs:Datatype ;
    owl:sameAs xsd:short .

xsd:string a rdfs:Datatype ;
    owl:sameAs xsd:string .

xsd:time a rdfs:Datatype ;
    owl:sameAs xsd:time .

xsd:token a rdfs:Datatype ;
    owl:sameAs xsd:token .

xsd:unsignedByte a rdfs:Datatype ;
    owl:sameAs xsd:unsignedByte .

xsd:unsignedInt a rdfs:Datatype ;
    owl:sameAs xsd:unsignedInt .

xsd:unsignedLong a rdfs:Datatype ;
    owl:sameAs xsd:unsignedLong .

xsd:unsignedShort a rdfs:Datatype ;
    owl:sameAs xsd:unsignedShort .

owl:backwardCompatibleWith a owl:AnnotationProperty ;
    owl:sameAs owl:backwardCompatibleWith .

owl:deprecated a owl:AnnotationProperty ;
    owl:sameAs owl:deprecated .

owl:equivalentClass owl:sameAs owl:equivalentClass .

owl:incompatibleWith a owl:AnnotationProperty ;
    owl:sameAs owl:incompatibleWith .

owl:priorVersion a owl:AnnotationProperty ;
    owl:sameAs owl:priorVersion .

owl:sameAs owl:sameAs owl:sameAs .

owl:versionInfo a owl:AnnotationProperty ;
    owl:sameAs owl:versionInfo .

brick:hasUnit owl:sameAs brick:hasUnit .

brick:isPartOf owl:sameAs brick:isPartOf .

apt:bedroom_lighting a brick:Lighting_Zone ;
    owl:sameAs apt:bedroom_lighting .

apt:kitchen_lighting a brick:Lighting_Zone ;
    owl:sameAs apt:kitchen_lighting .

apt:living_room_lighting a brick:Lighting_Zone ;
    owl:sameAs apt:living_room_lighting .

apt:thermostat1 a brick:Thermostat ;
    owl:sameAs apt:thermostat1 ;
    brick:isPartOf apt:thermostat_zone .

unit:DEG_F owl:sameAs unit:DEG_F .

brick:Apartment rdfs:subClassOf brick:Space ;
    owl:sameAs brick:Apartment .

brick:HVAC_Zone owl:sameAs brick:HVAC_Zone .

brick:Temperature_Setpoint owl:sameAs brick:Temperature_Setpoint .

brick:Thermostat owl:sameAs brick:Thermostat .

owl:Class owl:sameAs owl:Class .

owl:Nothing a owl:Class ;
    rdfs:subClassOf owl:Nothing,
        owl:Thing ;
    owl:equivalentClass owl:Nothing ;
    owl:sameAs owl:Nothing .

brick:Space owl:sameAs brick:Space .

apt:my_apartment a brick:Apartment,
        brick:Space ;
    owl:sameAs apt:my_apartment .

owl:Thing a owl:Class ;
    rdfs:subClassOf owl:Thing ;
    owl:equivalentClass owl:Thing ;
    owl:sameAs owl:Thing .

brick:Lighting_Zone owl:sameAs brick:Lighting_Zone .

brick:Room owl:sameAs brick:Room .

apt:thermostat_zone a brick:HVAC_Zone ;
    owl:sameAs apt:thermostat_zone .

owl:AnnotationProperty owl:sameAs owl:AnnotationProperty .

rdfs:Datatype owl:sameAs rdfs:Datatype .
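
If the reflexive owl:sameAs triples are just noise for your use case, a small post-processing sketch (an assumption about how one might clean up the expanded graph g, not a library feature):

from rdflib.namespace import OWL

# drop the reflexive owl:sameAs triples that OWL-RL emits for every node
for s, p, o in list(g.triples((None, OWL.sameAs, None))):
    if s == o:
        g.remove((s, p, o))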

Move to SQLAlchemy 2.0+

Can we bump the pinned version of SQLAlchemy to 2.0+?

This would be useful for projects that want to push to SQL repos with a uniqueidentifier type.

convert Haystack JSON model

After downloading bravo.json, running the following lines:

import json
from brickschema import Graph
model = json.load(open("bravo.json"))
g = Graph(load_brick=True).from_haystack("http://project-haystack.org/bravo#", model)

File "C:\Users*\anaconda3\envs\Brick\lib\site-packages\brickschema\inference.py", line 711, in
entities = {e["id"].replace('"', ""): {"tags": e} for e in entities}

AttributeError: 'dict' object has no attribute 'replace'
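
A hedged guess at the cause: the traceback suggests e["id"] is a dict rather than a string, which is what newer Haystack JSON ("Hayson") exports produce for refs. A pre-processing pass like the sketch below (entirely an assumption, with a hypothetical helper name) might get past the error, reusing model and Graph from the snippet above:

def flatten_refs(model):
    # flatten Haystack 4 "Hayson" ref objects, e.g.
    # {"_kind": "ref", "val": "...", "dis": "..."}, into plain string ids
    for row in model.get("rows", []):
        for key, value in row.items():
            if isinstance(value, dict) and value.get("_kind") == "ref":
                row[key] = value["val"]
    return model

g = Graph(load_brick=True).from_haystack("http://project-haystack.org/bravo#", flatten_refs(model))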

Brickify help usage and operations

Hello, this is Philip H. from the Google Group.

I looked through the Brickify documentation and I need to reach out for help.

The first thing I have trouble with is the handlers and operations.
I get that there are different handlers that can be used, but do I understand correctly that I have to create the operator myself?

I understand the basic way the operator is built for the example, but I fail to decipher the deeper logic behind it well enough to apply it to a vastly different data set. Even worse, the logic of my data set is harder still for me to understand, and it doesn't have useful column names like the example. Instead it looks like this:

ahu1 only.csv

This is the data set I can see the most logic in.
It would be nice to use the other two as well, but they seem even harder to decode.

Example.xlsx
Example2.xlsx

Coding an operator is proving very difficult. Are there any prebuilt operators that could be used as a base?
If I split the data by delimiters, would it be easier to write an operator?
Is there more info or a tutorial available? For example, why does the data operation appear twice in the template?

Also, what handler would be recommended for these formats?
I was told the structure of the first data set should be aligned with the Haystack format, but I wasn't able to confirm that as of yet. How do I convert it to a Haystack Turtle file so that I can load it with the Haystack handler?

About the command:
brickify sheet.tsv --output bldg.ttl --input-type tsv --config template.yml

When running it with data and a template other than the examples, I get an error:

"'charmap' codec can't decode byte 0x81 in position 462" What does this error message mean?

EDIT:
The example sheet in the documentation seems to contain already-normalized data. It seems I need to go back one step.
Since the documentation mentions that this is a job for a technical support team, is it feasible as a one-man job? How long do you think it would take to normalize the data and write a full operator?
I noticed that the first data set looks a bit similar to the test data Gabe used in his OpenRefine + BrickBuilder video. What is the best way to make this data compatible with Brickify?

I would be very thankful for any help.
