bioimage-io / core-bioimage-io-python

Python libraries for loading, running and packaging bioimage.io models

Home Page: https://bioimage-io.github.io/core-bioimage-io-python/

License: MIT License

Python 22.61% Jupyter Notebook 77.39%
bioimage-analysis bioimage-informatics bioimaging deep-learning

core-bioimage-io-python's People

Contributors

carlosuc3m, constantinpape, czaki, esgomezm, fynnbe, jdeschamps, jhennies, jo-mueller, k-dominik, m-novikov, matuskalas, oeway, pattonw, tomaz-vieira


core-bioimage-io-python's Issues

Enforce that the `documentation` key contains only relative path, not a URL

The current spec describes documentation as "Relative path to file with additional documentation in markdown". However, the validator only checks that it is a URI. I noticed that some models use a generic http URL for the documentation key, which makes it more complicated to render on the website. I am keen to make it more restricted:

  • It must be a relative path starting with ./
  • It must be a markdown file with the *.md extension
  • URLs are not allowed

I added clarification to the spec docs too: bioimage-io/spec-bioimage-io#84
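The proposed restriction can be sketched as a standalone check (a hypothetical `validate_documentation` helper, not the actual marshmallow validator used in the spec):

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse


def validate_documentation(value: str) -> None:
    """Hypothetical check for the `documentation` key: a relative path
    starting with './' to a markdown file; URLs are rejected."""
    if urlparse(value).scheme in ("http", "https"):
        raise ValueError(f"URL not allowed for documentation: {value}")
    if not value.startswith("./"):
        raise ValueError(f"documentation must be a relative path starting with './': {value}")
    if PurePosixPath(value).suffix.lower() != ".md":
        raise ValueError(f"documentation must be a markdown (*.md) file: {value}")


validate_documentation("./docs/README.md")  # passes silently
```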

Implement common functionality here

As discussed in bioimage-io/bioimage.io#28 (comment) and elsewhere, it would be good to implement general-purpose (Python) model zoo functionality in this repo.

I think this would be a good item for the upcoming hackathon(s).

KeyError: 'tensorflow_saved_model_bundle'

I was running the verification script in the CI and got this error:
(https://github.com/deepimagej/models/runs/1963020995)

Traceback (most recent call last):
  File "compile_model_manifest.py", line 270, in <module>
    parse_manifest(models_yaml)
  File "compile_model_manifest.py", line 200, in parse_manifest
    spec.verify_model_data(model_config)
  File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pybio/spec/__main__.py", line 36, in verify_model_data
    schema.Model().load(model_data)
  File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/marshmallow/schema.py", line 723, in load
    data, many=many, partial=partial, unknown=unknown, postprocess=True
  File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/marshmallow/schema.py", line 886, in _do_load
    field_errors=field_errors,
  File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/marshmallow/schema.py", line 1191, in _invoke_schema_validators
    partial=partial,
  File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/marshmallow/schema.py", line 774, in _run_validator
    validator_func(output, partial=partial, many=many)
  File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pybio/spec/schema.py", line 324, in source_specified_if_required
    require_source = {wf for wf in data["weights"] if weight_format_requires_source[wf]}
  File "/opt/hostedtoolcache/Python/3.7.9/x64/lib/python3.7/site-packages/pybio/spec/schema.py", line 324, in <setcomp>
    require_source = {wf for wf in data["weights"] if weight_format_requires_source[wf]}
KeyError: 'tensorflow_saved_model_bundle'
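The failing set comprehension from the traceback can be made robust by treating unknown weight formats defensively; a minimal sketch (the contents of `weight_format_requires_source` are illustrative, not the actual pybio mapping):

```python
# Sketch of the failing check from pybio/spec/schema.py: using dict.get
# with a default avoids the KeyError when a weight format (here
# 'tensorflow_saved_model_bundle') is missing from the mapping.
weight_format_requires_source = {
    "pytorch_state_dict": True,
    "pickle": True,
    # 'tensorflow_saved_model_bundle' absent -> KeyError in the original code
}

weights = ["tensorflow_saved_model_bundle"]

# original (raises KeyError):
# require_source = {wf for wf in weights if weight_format_requires_source[wf]}

# defensive variant: unknown formats default to not requiring a source
require_source = {wf for wf in weights if weight_format_requires_source.get(wf, False)}
print(require_source)  # set()
```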

Package scope and name

Hi, a quick thought here regarding the package name.

When I saw it for the first time, I felt the current name bioimageio.core is not clear. "The core of bioimageio" can mean the core of the bioimage.io website, or something else entirely. It can also make people wonder what bioimageio is in the first place. But for us, bioimageio is just a domain name that includes the BioImage Model Zoo.

We should think about what the scope of the package is. As I understand it, we are implementing model inference (perhaps also training in the future). One question is whether we are also going to deal with other RDFs (dataset, application, workflow) in the same package. Considering that they can be handled quite differently from models, we might want to split into smaller Python modules.

If that's the case, I would propose reserving the name core and using something like bioimageio.runner or bioimageio.model-runner for the model part. It would also be clearer because it means "the model runner of bioimage.io".

What do you think?

@FynnBe @constantinpape @k-dominik

same name dependencies may clash

Importing model.yaml dependencies may fail if the same dependency (by name) exists in two versions.

Example: a consumer (tiktorch) has a pybio dependency. If a model points to a new pybio transformation, the import will not actually happen, because the older pybio used by tiktorch itself is already imported.

ImplicitOutputShape is not properly validated?!

We specify that shape = shape(input_tensor) * scale + 2 * offset for the ImplicitOutputShape, see https://github.com/bioimage-io/spec-bioimage-io/blob/gh-pages/model_spec_latest.md.
However, it seems like this is not properly checked in bioimageio test-model:
test-model passes for https://github.com/bioimage-io/spec-bioimage-io/blob/main/example_specs/models/stardist_example_model/rdf.yaml#L39; however, the correct value would be 33 / 2.
@FynnBe we should make sure that ImplicitOutputShape (and also InputShape etc.) are validated against the test inputs / outputs.
Is test-model the best place to do this, or can we do it even earlier, in validation?
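The check the issue asks for follows directly from the formula above; a minimal sketch (illustrative names, not the bioimageio.core API):

```python
def check_implicit_output_shape(input_shape, scale, offset, actual_output_shape):
    """Verify shape(output) == shape(input) * scale + 2 * offset, as the
    spec defines for ImplicitOutputShape. Sketch only."""
    expected = [int(ish * sc + 2 * off) for ish, sc, off in zip(input_shape, scale, offset)]
    return expected == list(actual_output_shape)


# a halo-style example: scale 1.0 with offset -16 shrinks each axis by 32
assert check_implicit_output_shape([256, 256], [1.0, 1.0], [-16, -16], [224, 224])
```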

Keyword arguments for model specs

Should we add optional and required keyword arguments to the model specification, as we have them for other specifications, e.g. reader, sampler, transformation?

(assumption: keyword arguments have to be JSON serializable, no fancy stuff!)

Pros:

  • consistency across specs
  • most models actually do have some kwargs, like the number of channels, etc.

Cons:

  • we would have to specify model kwargs for the different weight formats to avoid collisions
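The serializability assumption above could be enforced with a small check (hypothetical helper, not part of the spec):

```python
import json


def validate_kwargs(kwargs: dict) -> None:
    """Enforce that model kwargs are plain JSON-serializable data
    ('no fancy stuff'). Illustrative helper only."""
    try:
        json.dumps(kwargs)
    except (TypeError, ValueError) as e:
        raise ValueError(f"model kwargs must be JSON serializable: {e}")


validate_kwargs({"num_channels": 3, "normalize": True})  # fine
```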

Is license checked in the validation?

We have a model with an incorrect license name ("BSD-3") and RDF version 0.3.2. The validator says that the specs are correct, but test-model complains about the license:

marshmallow.exceptions.ValidationError: {'license': ['Must be one of: bzip2-1.0.6, Glulxe, Parity-7.0.0, OML, UCL-1.0, UPL-1.0, BSD-Protection, OCLC-2.0, eCos-2.0, Multics, IPL-1.0, IPA, eGenix, Glide, Entessa, FSFUL, Nunit, MPL-2.0-no-copyleft-exception, libpng-2.0, OLDAP-2.2.1, curl, ANTLR-PD, CC-BY-SA-2.0, LiLiQ-P-1.1, TCP-wrappers, Unicode-DFS-2016, ODbL-1.0, LPPL-1.3a, CERN-OHL-1.2, ADSL, CDDL-1.0, Motosoto, BUSL-1.1, OGL-UK-1.0, xinetd, Imlib2, SNIA, OGTSL, TMate, OCCT-PL, GPL-1.0-or-later, YPL-1.1, CECILL-2.0, PHP-3.0, BlueOak-1.0.0, Zimbra-1.3, OGC-1.0, NASA-1.3, SPL-1.0, Intel-ACPI, SISSL-1.2, OGL-Canada-2.0, CC-BY-3.0-US, copyleft-next-0.3.1, GFDL-1.1-invariants-or-later, GL2PS, MS-PL, SCEA, CC-BY-ND-2.5, SSPL-1.0, Spencer-86, LPPL-1.0, GPL-3.0-only, GPL-2.0-with-autoconf-exception, Giftware, CC-BY-NC-ND-3.0, CNRI-Python, GFDL-1.2-no-invariants-or-later, Afmparse, BSD-3-Clause-LBNL, NCGL-UK-2.0, GPL-1.0+, PHP-3.01, Leptonica, bzip2-1.0.5, NIST-PD-fallback, OSL-1.0, OFL-1.1, JasPer-2.0, Naumen, AGPL-1.0-only, C-UDA-1.0, MIT, TCL, LGPL-3.0-only, ECL-1.0, MPL-2.0, CC-BY-NC-1.0, CC-BY-NC-ND-2.5, LPPL-1.3c, JSON, NBPL-1.0, CAL-1.0-Combined-Work-Exception, Unlicense, CNRI-Python-GPL-Compatible, TU-Berlin-2.0, NLPL, LGPL-3.0-or-later, Beerware, NGPL, ZPL-2.1, Saxpath, CC-BY-SA-2.0-UK, CECILL-2.1, XFree86-1.1, IBM-pibs, Zlib, StandardML-NJ, RPSL-1.0, CECILL-1.0, OGL-UK-3.0, BSD-4-Clause-Shortened, Watcom-1.0, Wsuipa, TU-Berlin-1.0, Latex2e, CECILL-B, EUPL-1.0, GFDL-1.2-or-later, CPL-1.0, CC-BY-ND-3.0, NTP, W3C-19980720, GFDL-1.3-only, CC-BY-SA-4.0, EUPL-1.1, GFDL-1.1-no-invariants-only, JPNIC, AMPAS, BSD-3-Clause, MIT-0, Intel, O-UDA-1.0, NPL-1.0, CC-BY-NC-2.5, Mup, Newsletr, PDDL-1.0, SMLNJ, BSD-1-Clause, SimPL-2.0, OLDAP-1.2, Xnet, BSD-2-Clause, AML, GFDL-1.2-only, Info-ZIP, DSDP, AGPL-1.0, BSD-4-Clause-UC, LGPL-2.1-only, OFL-1.0, CDL-1.0, LAL-1.3, Sendmail, OGDL-Taiwan-1.0, Zimbra-1.4, Borceux, OSL-3.0, AMDPLPA, CC-BY-NC-SA-3.0, OLDAP-2.1, 
BSD-2-Clause-FreeBSD, CPOL-1.02, MPL-1.0, blessing, Parity-6.0.0, AFL-3.0, SGI-B-1.0, BSD-2-Clause-Patent, Artistic-1.0-cl8, CC-BY-NC-ND-4.0, Apache-1.1, ErlPL-1.1, OFL-1.0-RFN, CC-BY-NC-3.0, CC-BY-NC-2.0, MakeIndex, Barr, CC-BY-SA-2.1-JP, GFDL-1.2-no-invariants-only, Hippocratic-2.1, Adobe-2006, OSL-2.0, CC-BY-NC-SA-4.0, LGPL-2.1-or-later, PolyForm-Noncommercial-1.0.0, OpenSSL, GPL-3.0-with-GCC-exception, OPL-1.0, BSD-3-Clause-Attribution, Rdisc, MS-RL, EUDatagrid, LGPLLR, AFL-2.0, MIT-Modern-Variant, GFDL-1.3-invariants-only, LiLiQ-R-1.1, CDLA-Permissive-1.0, DRL-1.0, BSD-Source-Code, CC-BY-NC-ND-1.0, GLWTPL, VSL-1.0, CPAL-1.0, HaskellReport, APSL-1.1, GPL-2.0-or-later, BSD-3-Clause-Modification, OLDAP-2.3, OFL-1.1-no-RFN, BitTorrent-1.0, NRL, GFDL-1.2, MirOS, Sleepycat, LPPL-1.1, WTFPL, PolyForm-Small-Business-1.0.0, Caldera, HTMLTIDY, SISSL, MITNFA, 0BSD, CC0-1.0, LGPL-3.0+, CDLA-Sharing-1.0, GPL-2.0-with-bison-exception, EFL-2.0, AFL-1.1, CC-BY-2.0, RPL-1.5, MulanPSL-1.0, GPL-3.0+, HPND-sell-variant, SSH-OpenSSH, OLDAP-1.1, BitTorrent-1.1, Artistic-1.0, SSH-short, CC-BY-3.0-AT, MIT-CMU, GFDL-1.3-no-invariants-or-later, TOSL, MIT-open-group, OLDAP-2.6, GFDL-1.1-only, FreeBSD-DOC, GPL-2.0, Fair, CECILL-1.1, QPL-1.0, DOC, LAL-1.2, LPL-1.02, CERN-OHL-P-2.0, etalab-2.0, FTL, Qhull, BSD-3-Clause-Clear, BSD-3-Clause-No-Military-License, FSFAP, APL-1.0, OLDAP-2.8, TORQUE-1.1, Sendmail-8.23, diffmark, Frameworx-1.0, zlib-acknowledgement, EFL-1.0, IJG, GFDL-1.3-no-invariants-only, Noweb, GFDL-1.3, LGPL-2.1, gSOAP-1.3b, OFL-1.1-RFN, GPL-3.0-with-autoconf-exception, CERN-OHL-1.1, AFL-2.1, MIT-enna, Adobe-Glyph, EPL-1.0, Xerox, OLDAP-2.0.1, MTLL, ImageMagick, psutils, ClArtistic, GFDL-1.3-invariants-or-later, APSL-1.2, Apache-2.0, NIST-PD, Libpng, TAPR-OHL-1.0, ICU, CC-BY-SA-2.5, CC-PDDC, AGPL-3.0-only, OSL-1.1, SugarCRM-1.1.3, FreeImage, W3C-20150513, D-FSL-1.0, RSA-MD, CC-BY-ND-2.0, GPL-2.0-with-GCC-exception, AGPL-3.0-or-later, AGPL-1.0-or-later, iMatix, Plexus, 
OFL-1.0-no-RFN, NAIST-2003, MIT-feh, ECL-2.0, CC-BY-2.5, XSkat, Linux-OpenIB, Spencer-99, BSD-3-Clause-No-Nuclear-License-2014, CC-BY-NC-ND-3.0-IGO, CC-BY-NC-SA-1.0, GPL-2.0-with-font-exception, Crossword, OLDAP-2.2.2, BSD-2-Clause-NetBSD, GPL-2.0+, CC-BY-4.0, OLDAP-2.0, NOSL, CDDL-1.1, APSL-1.0, EUPL-1.2, Nokia, RHeCos-1.1, GPL-2.0-only, OLDAP-2.7, Vim, SAX-PD, BSD-3-Clause-No-Nuclear-Warranty, NetCDF, dvipdfm, SHL-0.5, LGPL-2.0-only, AAL, Unicode-TOU, LPPL-1.2, xpp, SHL-0.51, NCSA, LGPL-2.0-or-later, CC-BY-3.0, GPL-1.0, W3C, Aladdin, BSD-3-Clause-No-Nuclear-License, GFDL-1.1-or-later, SMPPL, GFDL-1.1, OLDAP-1.4, Condor-1.1, GPL-1.0-only, GPL-3.0, PSF-2.0, Apache-1.0, EPL-2.0, Python-2.0, OLDAP-2.4, PostgreSQL, Net-SNMP, Ruby, OSET-PL-2.1, Dotseqn, CUA-OPL-1.0, Bahyph, LiLiQ-Rplus-1.1, LGPL-2.0+, wxWindows, AGPL-3.0, Abstyles, OLDAP-1.3, NTP-0, OLDAP-2.2, CC-BY-SA-3.0, SWL, BSD-3-Clause-Open-MPI, LGPL-2.1+, GFDL-1.2-invariants-only, Zend-2.0, GFDL-1.1-no-invariants-or-later, mpich2, NLOD-1.0, gnuplot, CERN-OHL-S-2.0, OGL-UK-2.0, NPL-1.1, Zed, VOSTROM, ZPL-2.0, CERN-OHL-W-2.0, CC-BY-NC-SA-2.0, APSL-2.0, LPL-1.0, ANTLR-PD-fallback, libtiff, HPND, GPL-3.0-or-later, Artistic-2.0, Unicode-DFS-2015, CC-BY-NC-4.0, RPL-1.1, CC-BY-SA-1.0, Cube, ODC-By-1.0, copyleft-next-0.3.0, CC-BY-ND-4.0, ZPL-1.1, GFDL-1.3-or-later, CATOSL-1.1, GPL-2.0-with-classpath-exception, LGPL-2.0, BSD-2-Clause-Views, BSL-1.0, CNRI-Jython, Eurosym, CC-BY-SA-3.0-AT, CECILL-C, EPICS, CC-BY-NC-ND-2.0, GD, X11, MPL-1.1, GFDL-1.1-invariants-only, psfrag, RSCPL, YPL-1.0, SGI-B-1.1, CC-BY-ND-1.0, SGI-B-2.0, APAFML, Spencer-94, ISC, MIT-advertising, GFDL-1.2-invariants-or-later, CC-BY-NC-SA-2.5, CC-BY-1.0, OSL-2.1, CrystalStacker, FSFULLR, libselinux-1.0, MulanPSL-2.0, LGPL-3.0, OLDAP-2.5, Artistic-1.0-Perl, AFL-1.2, CAL-1.0, BSD-4-Clause, Interbase-1.0, NPOSL-3.0.']}
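The fix on the model side is to use the exact SPDX identifier; a minimal sketch of the membership check, using a tiny illustrative subset of the SPDX list from the error above:

```python
# Illustrative subset of the SPDX identifiers listed in the error message;
# the real validator checks against the full SPDX license list.
SPDX_SUBSET = {"MIT", "BSD-3-Clause", "BSD-2-Clause", "Apache-2.0", "GPL-3.0-only"}


def check_license(license_id: str) -> bool:
    """True if the license id is a known SPDX identifier (subset here)."""
    return license_id in SPDX_SUBSET


assert not check_license("BSD-3")      # the invalid name from the report
assert check_license("BSD-3-Clause")   # the SPDX identifier to use instead
```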

bioimageio package does not work as expected

I tried to download a model using the `bioimageio package` command, but it does not work as expected:

$ bioimageio package 10.5072/zenodo.934248 

Cannot package invalid BioImage.IO RDF 10.5072/zenodo.934248

However, this works in python:

from bioimageio.core import load_resource_description
model = load_resource_description("10.5072/zenodo.934248")

so the model should be valid.

zsh: illegal hardware instruction

Hi!

I'm trying to test a model that passes the validation test, but I get the following message with both test-model and predict-image:

(bio-core-dev) esti@estimacbookair BioImageIO % bioimageio validate models/arabidopsis_03JUN2021.bioimage.io.model/model.yaml
No validation errors for models/arabidopsis_03JUN2021.bioimage.io.model/model.yaml

(bio-core-dev) esti@estimacbookair BioImageIO % bioimageio test-model models/arabidopsis_03JUN2021.bioimage.io.model/model.yaml 
zsh: illegal hardware instruction  bioimageio test-model 

(bio-core-dev) esti@estimacbookair BioImageIO % bioimageio predict-image models/arabidopsis_03JUN2021.bioimage.io.model/model.yaml --inputs models/arabidopsis_03JUN2021.bioimage.io.model/exampleImage.npy --outputs models/arabidopsis_03JUN2021.bioimage.io.model/resultImage.npy
zsh: illegal hardware instruction  bioimageio predict-image  --inputs  --outputs 

I'm testing it on macOS.

Do we still need this repository?

@FynnBe as far as I can see, the functionality left here is only needed for deprecated things (the old transformations, dataset, and model runner ideas).
Do we still need it for anything?
If not, we should consider either removing it or at least archiving it.

Releasing this library

I think we have most of the base functionality implemented now; we just need to test prediction with keras / tf models (we still need working examples for this) and fix some smaller issues, see #98.
But I think we can already prepare a release and publish pip and conda packages. It would be good to have the same workflow as in the spec repository.
@oeway @k-dominik would you have time to work on this?

Implement download functionality

It would be very helpful to have download functionality, i.e. a function that takes the zenodo DOI of a model and then downloads the zip, similar to the download function on bioimage.io.

@FynnBe can we refactor the packaging part of the ilastik bioimageio app for this?
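As a starting point, the record behind a zenodo DOI can be located from the DOI's numeric suffix; a sketch under that assumption (the actual file download via Zenodo's REST API is not shown, to keep this offline):

```python
import re


def zenodo_record_url(doi: str, sandbox: bool = False) -> str:
    """Derive the Zenodo record URL from a zenodo DOI (sketch).
    The record id is the numeric suffix of the DOI; 10.5072/... DOIs
    belong to the Zenodo sandbox."""
    m = re.fullmatch(r"10\.\d+/zenodo\.(\d+)", doi)
    if m is None:
        raise ValueError(f"not a zenodo DOI: {doi}")
    host = "sandbox.zenodo.org" if sandbox else "zenodo.org"
    return f"https://{host}/record/{m.group(1)}"


print(zenodo_record_url("10.5072/zenodo.934248", sandbox=True))
# https://sandbox.zenodo.org/record/934248
```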

Cache handling needs improvement

Caching everything to ~/bioimageio_cache with the current pattern of only analysing the URL fails for updates to the remote resource, e.g. a 'master' branch on GitHub, etc.
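One way to fix this is to key the cache on the URL plus a server-side version marker; a sketch assuming such a marker (e.g. an HTTP ETag or commit hash, fetched via a HEAD request, which is not shown here) is available:

```python
import hashlib
from pathlib import Path


def cache_path(url, etag=None, root="~/bioimageio_cache"):
    """Sketch of a cache layout keyed on URL *and* version marker, so that
    updates to mutable resources (e.g. a 'master' branch) invalidate the
    cached copy. Illustrative only."""
    key = url if etag is None else f"{url}@{etag}"
    digest = hashlib.sha256(key.encode()).hexdigest()[:16]
    return Path(root).expanduser() / digest


# same URL, different versions -> different cache entries
assert cache_path("https://example.com/rdf.yaml", "v1") != cache_path("https://example.com/rdf.yaml", "v2")
```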

model_spec.json

Commit c196ad7 added an automatically generated model spec. Since it's been so long, I assume it is outdated? There should also be a note that this file is generated - it's quite confusing otherwise. What is its use?

feedback from deepImageJ models

The field tensorflow_version is missing from the tensorflow weights specification:

{'weights': defaultdict(<class 'dict'>,
                        {'tensorflow_saved_model_bundle': {'value': {'tensorflow_version': ['Unknown '
                                                                                            'field.']}}})}

If there is no preprocessing, I tried these two forms: preprocessing: null or an empty preprocessing:.
It complains in both cases:

'inputs': {0: {'preprocessing': ['Not a valid list.']}},

or

{'inputs': {0: {'preprocessing': {0: {'name': ['Must be one of: binarize, '
                                               'clip, scale_linear, sigmoid, '
                                               'zero_mean_unit_variance, '
                                               'scale_range.']}}}}

Error messages here

load_resource_description downloads git_repo

Why does load_resource_description download the git_repo? I assume this is only metadata, so there should be no reason to download it.
This leads to issues when trying to run a model offline:

RuntimeError: Failed to download https://github.com/constantinpape/torch-em.git (HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /constantinpape/torch-em.git (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fba91e78410>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution')))

load_resource_description does not set root path when used with url or doi

If it is used with a file path, it sets the correct root_path; otherwise the root path is set to . and files that require a relative path (e.g. the Python architecture source file) cannot be used correctly.
See script and outputs below:

from bioimageio.core import load_resource_description

rdf_doi = "10.5072/zenodo.934248"
rdf_url = "https://sandbox.zenodo.org/record/934248/files/rdf.yaml"
# filepath to the downloaded model (either zipped package or yaml)
# (go to https://bioimage.io/#/?id=10.5072%2Fzenodo.881940, select the download icon and select "ilastik")
rdf_path = "/home/pape/Downloads/dsb-nuclei-boundarymodelnew_pytorch_state_dict.zip"

m1 = load_resource_description(rdf_doi)
print("doi-root:", m1.root_path)

m2 = load_resource_description(rdf_url)
print("url-root:", m2.root_path)

m3 = load_resource_description(rdf_path)
print("filepath-root:", m3.root_path)
Output:

doi-root: .
url-root: .
filepath-root: /tmp/bioimageio_cache/extracted_packages/home/pape/Downloads/dsb-nuclei-boundarymodelnew_pytorch_state_dict.zip
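The expected behaviour can be sketched as follows (illustrative names and cache layout, not the actual bioimageio.core logic):

```python
from pathlib import Path
from urllib.parse import urlparse


def infer_root_path(source, cache_dir="/tmp/bioimageio_cache"):
    """Sketch of the behaviour the issue asks for: for URLs and DOIs the
    root path should point at the directory the resource was downloaded
    or extracted to (an assumed cache layout here), not at '.'."""
    # crude DOI detection: DOIs start with a '10.' registrant prefix
    if urlparse(source).scheme in ("http", "https") or source.startswith("10."):
        return Path(cache_dir)
    return Path(source).parent.resolve()


print(infer_root_path("https://sandbox.zenodo.org/record/934248/files/rdf.yaml"))
```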

ensure_raw_resource_description vs load_raw_resource_description

The only difference between the two is that 'ensure' also accepts resource description objects and (this is the meaningful difference) returns a tuple of the raw resource description (just as load... does) and the root_path.

The separate handling of root_path in many places bloats the code slightly. It might be more elegant to allow setting a root_path member on resource description objects.

pro:

  • slimmer code base, e.g. ensure_raw_resource_description can be absorbed into load_raw_resource_description

contra:

  • root_path is not part of the spec, thus this is slightly monkey-patched

neutral:

  • resource descriptions as dicts do not have a root_path; the current handling then assumes the current working directory as root_path. This would not have to change.

Any thoughts, @constantinpape ?

Remaining issues

Here's a list of remaining issues with the functionality here that I found while working on #92.
It would be good to fix some of these before making a release, maybe except for the last two points, which could be a bit more complicated.

  • prediction_pipeline (and transitively also the prediction functions) only supports a single input/output tensor; we need an example model for this
  • prediction_pipeline misses output_dtype
  • prediction_pipeline does not support models with a fixed output shape
  • the pre/postprocessing is not quite consistent with https://github.com/bioimage-io/spec-bioimage-io/blob/main/supported_formats_and_operations.md
  • prediction with tiling currently only works for the same output shape, but supporting changes in output shape is possible (need example model)
  • normalization based only on local patches is problematic, e.g. for tiling; this should be fixed by implementing and using the per_dataset mode
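The per_dataset point can be illustrated with a minimal sketch of zero_mean_unit_variance (the mode names follow the spec's preprocessing operations; the implementation is illustrative):

```python
import statistics


def zero_mean_unit_variance(values, mode="per_sample", mean=None, std=None):
    """Sketch of the normalization issue above: 'per_sample' computes
    statistics on the local tile (so each tile is normalized differently),
    while 'per_dataset' uses precomputed global statistics, which keeps
    tiled predictions consistent across tiles."""
    if mode == "per_sample":
        mean, std = statistics.fmean(values), statistics.pstdev(values)
    elif mode != "per_dataset":
        raise ValueError(mode)
    return [(v - mean) / (std + 1e-6) for v in values]


tile_a, tile_b = [0.0, 1.0], [10.0, 11.0]
# per_sample erases the intensity offset between the two tiles;
# per_dataset (global statistics over both tiles) preserves it
global_mean = statistics.fmean(tile_a + tile_b)
global_std = statistics.pstdev(tile_a + tile_b)
a = zero_mean_unit_variance(tile_a, "per_dataset", global_mean, global_std)
b = zero_mean_unit_variance(tile_b, "per_dataset", global_mean, global_std)
```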

Convenient use

I think the verify scripts are crucial to help developers validate their specs before pushing them to the model zoo.
Once we have the changes in #35 ready, we should explain how to use them and also make this easier.
I have started to add some basic information to the README in #36.
In addition, the following things would be quite useful:

  • publish on pip (and/or conda) to make it easy to install
  • provide a dynamic check that also makes sure that all the files referenced in the model.yaml actually exist etc. (maybe we want to do this in the framework specific test and combine with checking the model)
  • provide a command (or a flag to the dynamic check) that packages the model.zip automatically

Tests fail on GPU (on windows)

While the following test cases pass on CPU, they fail on GPU (on Windows; only difference: CUDA_VISIBLE_DEVICES="")

  1. tests\prediction_pipeline\test_prediction_pipeline.py:36 (test_prediction_pipeline_torchscript[unet2d_multi_tensor])
  2. tests\prediction_pipeline\test_prediction_pipeline.py:36 (test_prediction_pipeline_torchscript[unet2d_nuclei_broad_model])

@constantinpape I suspect this is independent of the OS and only related to CUDA + TorchScript...

1.:

tests\prediction_pipeline\test_prediction_pipeline.py:36 (test_prediction_pipeline_torchscript[unet2d_multi_tensor])
any_torchscript_model = WindowsPath('/Users/fbeut/bioimageio_cache/packages/multi-tensorp39.zip')

    def test_prediction_pipeline_torchscript(any_torchscript_model):
>       _test_prediction_pipeline(any_torchscript_model, "pytorch_script")

test_prediction_pipeline.py:38: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_prediction_pipeline.py:20: in _test_prediction_pipeline
    outputs = pp.forward(*inputs)
..\..\bioimageio\core\prediction_pipeline\_prediction_pipeline.py:129: in forward
    prediction = self.predict(*preprocessed)
..\..\bioimageio\core\prediction_pipeline\_prediction_pipeline.py:124: in predict
    return self._model.forward(*input_tensors)
..\..\bioimageio\core\prediction_pipeline\_model_adapters\_model_adapter.py:59: in forward
    return self._forward(*input_tensors)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <bioimageio.core.prediction_pipeline._model_adapters._torchscript_model_adapter.TorchscriptModelAdapter object at 0x00000251FDC143A0>
batch = (<xarray.DataArray (b: 1, c: 1, y: 256, x: 256)>
array([[[[-0.34453768, -0.5941573 , -0.85205287, ..., -0.5926006 ,
  ... , ..., -0.4026966 ,
          -0.43629348, -0.40746877]]]], dtype=float32)
Dimensions without coordinates: b, c, y, x)
torch_tensor = [tensor([[[[-0.3445, -0.5942, -0.8521,  ..., -0.5926, -0.7318, -0.7813],
          [-1.1676, -0.3543, -0.4170,  ..., -... -0.5141,  ..., -0.4316, -0.3849, -0.4363],
          [-0.5178, -0.5888, -0.5254,  ..., -0.4027, -0.4363, -0.4075]]]])]

    def _forward(self, *batch: xr.DataArray) -> List[xr.DataArray]:
        with torch.no_grad():
            torch_tensor = [torch.from_numpy(b.data) for b in batch]
>           result = self._model.forward(*torch_tensor)
E           RuntimeError: The following operation failed in the TorchScript interpreter.
E           Traceback of TorchScript, serialized code (most recent call last):
E             File "code/__torch__/multi_tensor_unet.py", line 17, in forward
E               _3 = self.encoder
E               input = torch.cat([argument_1, argument_2], 1)
E               _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _16, _17, _18, _19, _20, _21, _22, _23, _24, _25, _26, _27, _28, _29, _30, _31, = (_3).forward(input, )
E                                                                                                                                                        ~~~~~~~~~~~ <--- HERE
E               _32 = (_1).forward((_2).forward(_4, ), _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _16, _17, _18, _19, _20, _21, _22, _23, _24, _25, _26, _27, _28, _29, _30, _31, )
E               _33 = (_0).forward(_32, )
E             File "code/__torch__/multi_tensor_unet.py", line 37, in forward
E               _40 = getattr(self.blocks, "1")
E               _41 = getattr(self.poolers, "0")
E               _42 = (getattr(self.blocks, "0")).forward(input, )
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
E               _43 = (_40).forward((_41).forward(_42, ), )
E               _44 = (_38).forward((_39).forward(_43, ), )
E             File "code/__torch__/multi_tensor_unet.py", line 49, in forward
E             def forward(self: __torch__.multi_tensor_unet.ConvBlock2d,
E               input: Tensor) -> Tensor:
E               return (self.block).forward(input, )
E                       ~~~~~~~~~~~~~~~~~~~ <--- HERE
E           class Decoder(Module):
E             __parameters__ = []
E             File "code/__torch__/torch/nn/modules/container.py", line 21, in forward
E               _1 = getattr(self, "2")
E               _2 = getattr(self, "1")
E               _3 = (getattr(self, "0")).forward(input, )
E                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
E               _4 = (_0).forward((_1).forward((_2).forward(_3, ), ), )
E               return _4
E             File "code/__torch__/torch/nn/modules/conv.py", line 10, in forward
E               input: Tensor) -> Tensor:
E               _0 = self.bias
E               input0 = torch._convolution(input, self.weight, _0, [1, 1], [1, 1], [1, 1], False, [0, 0], 1, False, False, True, True)
E                        ~~~~~~~~~~~~~~~~~~ <--- HERE
E               return input0
E           
E           Traceback of TorchScript, original code (most recent call last):
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/conv.py(419): _conv_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/conv.py(423): forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/container.py(117): forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
E           /g/kreshuk/pape/Work/my_projects/torch-em/scripts/bioimageio-examples/multi-tensor/multi_tensor_unet.py(268): forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
E           /g/kreshuk/pape/Work/my_projects/torch-em/scripts/bioimageio-examples/multi-tensor/multi_tensor_unet.py(149): forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
E           /g/kreshuk/pape/Work/my_projects/torch-em/scripts/bioimageio-examples/multi-tensor/multi_tensor_unet.py(70): _apply_default
E           /g/kreshuk/pape/Work/my_projects/torch-em/scripts/bioimageio-examples/multi-tensor/multi_tensor_unet.py(88): forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/jit/_trace.py(934): trace_module
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/jit/_trace.py(733): trace
E           /g/kreshuk/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/weight_converter/torch/torchscript.py(92): convert_weights_to_pytorch_script
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch_em/util/modelzoo.py(928): _convert_impl
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch_em/util/modelzoo.py(947): convert_to_pytorch_script
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch_em/util/modelzoo.py(966): add_weight_formats
E           create_example.py(123): export_model
E           create_example.py(159): <module>
E           RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

..\..\bioimageio\core\prediction_pipeline\_model_adapters\_torchscript_model_adapter.py:30: RuntimeError

2.:

tests\prediction_pipeline\test_prediction_pipeline.py:36 (test_prediction_pipeline_torchscript[unet2d_nuclei_broad_model])
any_torchscript_model = WindowsPath('/Users/fbeut/bioimageio_cache/packages/UNet_2D_Nuclei_Broad_0_1_3p39.zip')

    def test_prediction_pipeline_torchscript(any_torchscript_model):
>       _test_prediction_pipeline(any_torchscript_model, "pytorch_script")

test_prediction_pipeline.py:38: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_prediction_pipeline.py:20: in _test_prediction_pipeline
    outputs = pp.forward(*inputs)
..\..\bioimageio\core\prediction_pipeline\_prediction_pipeline.py:129: in forward
    prediction = self.predict(*preprocessed)
..\..\bioimageio\core\prediction_pipeline\_prediction_pipeline.py:124: in predict
    return self._model.forward(*input_tensors)
..\..\bioimageio\core\prediction_pipeline\_model_adapters\_model_adapter.py:59: in forward
    return self._forward(*input_tensors)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <bioimageio.core.prediction_pipeline._model_adapters._torchscript_model_adapter.TorchscriptModelAdapter object at 0x000002519C174880>
batch = (<xarray.DataArray (b: 1, c: 1, y: 512, x: 512)>
array([[[[-0.531971  , -0.5176083 , -0.531971  , ..., -0.58942187,
  ..., ..., -0.48888284,
          -0.5128207 , -0.5176083 ]]]], dtype=float32)
Dimensions without coordinates: b, c, y, x,)
torch_tensor = [tensor([[[[-0.5320, -0.5176, -0.5320,  ..., -0.5894, -0.5894, -0.5942],
          [-0.5080, -0.5655, -0.5415,  ..., -...  1.2203,  ..., -0.5415, -0.5272, -0.5224],
          [ 1.0814,  0.8756,  0.7463,  ..., -0.4889, -0.5128, -0.5176]]]])]

    def _forward(self, *batch: xr.DataArray) -> List[xr.DataArray]:
        with torch.no_grad():
            torch_tensor = [torch.from_numpy(b.data) for b in batch]
>           result = self._model.forward(*torch_tensor)
E           RuntimeError: The following operation failed in the TorchScript interpreter.
E           Traceback of TorchScript, serialized code (most recent call last):
E             File "code/__torch__/module_from_source/unet2d.py", line 24, in forward
E               _9 = getattr(self.encoders, "1")
E               _10 = getattr(self.downsamplers, "0")
E               _11 = (getattr(self.encoders, "0")).forward(input, )
E                      ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
E               _12 = (_9).forward((_10).forward(_11, ), )
E               _13 = (_8).forward((_10).forward1(_12, ), )
E             File "code/__torch__/torch/nn/modules/container.py", line 19, in forward
E               _0 = getattr(self, "2")
E               _1 = getattr(self, "1")
E               _2 = (getattr(self, "0")).forward(input, )
E                     ~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
E               return (_0).forward((_1).forward(_2, ), )
E             File "code/__torch__/torch/nn/modules/conv.py", line 10, in forward
E               input: Tensor) -> Tensor:
E               _0 = self.bias
E               input0 = torch._convolution(input, self.weight, _0, [1, 1], [1, 1], [1, 1], False, [0, 0], 1, False, False, True, True)
E                        ~~~~~~~~~~~~~~~~~~ <--- HERE
E               return input0
E           
E           Traceback of TorchScript, original code (most recent call last):
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/conv.py(419): _conv_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/conv.py(423): forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/container.py(117): forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
E           /g/kreshuk/pape/Work/bioimageio/spec-bioimage-io/example_specs/models/unet2d_nuclei_broad/unet2d.py(55): forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(709): _slow_forward
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/nn/modules/module.py(725): _call_impl
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/jit/_trace.py(934): trace_module
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/lib/python3.8/site-packages/torch/jit/_trace.py(733): trace
E           /g/kreshuk/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/weight_converter/torch/torchscript.py(96): convert_weights_to_pytorch_script
E           /g/kreshuk/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/weight_converter/torch/torchscript.py(115): main
E           /home/pape/Work/software/conda/miniconda3/envs/torch17/bin/bioimageio-convert_torch_to_torchscript(33): <module>
E           RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor

..\..\bioimageio\core\prediction_pipeline\_model_adapters\_torchscript_model_adapter.py:30: RuntimeError

Add notebooks validation for bioimage.io manifest.yaml

Related: #44

Traceback (most recent call last):
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/work/projects/ilastik-project/tiktorch/vendor/python-bioimage-io/pybio/spec/__main__.py", line 75, in <module>
    app()
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/site-packages/typer/main.py", line 214, in __call__
    return get_command(self)(*args, **kwargs)
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/site-packages/typer/main.py", line 497, in wrapper
    return callback(**use_params)  # type: ignore
  File "/home/work/projects/ilastik-project/tiktorch/vendor/python-bioimage-io/pybio/spec/__main__.py", line 69, in verify_bioimageio_manifest
    code = verify_bioimageio_manifest_data(manifest_data)
  File "/home/work/projects/ilastik-project/tiktorch/vendor/python-bioimage-io/pybio/spec/__main__.py", line 40, in verify_bioimageio_manifest_data
    manifest = schema.BioImageIoManifest().load(manifest_data)
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/site-packages/marshmallow/schema.py", line 723, in load
    data, many=many, partial=partial, unknown=unknown, postprocess=True
  File "/home/work/tools/miniconda3/envs/tiktorch-server-env/lib/python3.7/site-packages/marshmallow/schema.py", line 904, in _do_load
    raise exc
marshmallow.exceptions.ValidationError: {'notebook': {0: {'documentation': ['Unknown field.']}}}

Better handling of tf version

We are not really making use of the tf version right now and should:

  • raise a warning if the tf version differs from the one given by the model
  • in build_spec require tf_version to be passed as argument instead of setting it to 1.15 as default value.
  • do the same for opset_version for onnx
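The first point could look roughly like this; `warn_on_tf_version_mismatch` is a hypothetical helper name, not part of the current API:

```python
import warnings

def warn_on_tf_version_mismatch(model_tf_version: str, installed_tf_version: str) -> None:
    # compare the tf version declared in the model RDF with the installed one
    # and warn on mismatch (sketch only)
    if model_tf_version != installed_tf_version:
        warnings.warn(
            f"Model weights were exported with tensorflow {model_tf_version}, "
            f"but tensorflow {installed_tf_version} is installed; results may differ."
        )
```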

Add the option to disable insecure weight types

For serving models with the core library from a public server, we need an option to disable all the model weight types that use source code (e.g. pytorch state dict and keras weights + source code). This is necessary to prevent running arbitrary scripts provided by users. We could also do it from outside by maintaining a list of insecure weight formats, but it would be nice if the core library could already do this.
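A minimal sketch of such an allow-list filter; the function name and the set contents are illustrative (which formats count as insecure is exactly what this issue should decide):

```python
# weight formats that require executing user-provided source code (illustrative)
INSECURE_WEIGHT_FORMATS = {"pytorch_state_dict", "keras_hdf5"}

def filter_weight_formats(available, allow_insecure=False):
    # return only the weight formats that are safe to load on a public server
    if allow_insecure:
        return list(available)
    return [wf for wf in available if wf not in INSECURE_WEIGHT_FORMATS]
```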

scale_range in postprocessing

@esgomezm and I are debugging a zero cost model that has scale_range in postprocessing:

  - name: scale_range
    kwargs:
      mode: per_sample
      axes: xyzc
      min_percentile: 0
      max_percentile: 100 

According to the spec this should be valid (i.e. reference_tensor is not defined, so the tensor itself should be used).
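For reference, scale_range without a reference_tensor boils down to percentile normalization of the tensor itself; a numpy sketch (not the library implementation):

```python
import numpy as np

def scale_range(tensor, axes=None, min_percentile=0.0, max_percentile=100.0, eps=1e-6):
    # percentile-based rescaling over the given axes, applied to the
    # tensor itself when no reference_tensor is defined (sketch)
    v_min = np.percentile(tensor, min_percentile, axis=axes, keepdims=True)
    v_max = np.percentile(tensor, max_percentile, axis=axes, keepdims=True)
    return (tensor - v_min) / (v_max - v_min + eps)
```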

However, when running the model the following error is thrown:

(bio-core-dev) pape@mokshi:~/Work/bioimageio/use-cases/zero-cost$ bioimageio test-model unet-no-thresh/model.yaml
Traceback (most recent call last):
  File "/home/pape/Work/software/conda/miniconda3/envs/bio-core-dev/bin/bioimageio", line 33, in <module>
    sys.exit(load_entry_point('bioimageio.core', 'console_scripts', 'bioimageio')())
  File "/home/pape/Work/software/conda/miniconda3/envs/bio-core-dev/lib/python3.7/site-packages/typer/main.py", line 214, in __call__
    return get_command(self)(*args, **kwargs)
  File "/home/pape/Work/software/conda/miniconda3/envs/bio-core-dev/lib/python3.7/site-packages/click/core.py", line 1137, in __call__
    return self.main(*args, **kwargs)
  File "/home/pape/Work/software/conda/miniconda3/envs/bio-core-dev/lib/python3.7/site-packages/click/core.py", line 1062, in main
    rv = self.invoke(ctx)
  File "/home/pape/Work/software/conda/miniconda3/envs/bio-core-dev/lib/python3.7/site-packages/click/core.py", line 1668, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/pape/Work/software/conda/miniconda3/envs/bio-core-dev/lib/python3.7/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/pape/Work/software/conda/miniconda3/envs/bio-core-dev/lib/python3.7/site-packages/click/core.py", line 763, in invoke
    return __callback(*args, **kwargs)
  File "/home/pape/Work/software/conda/miniconda3/envs/bio-core-dev/lib/python3.7/site-packages/typer/main.py", line 497, in wrapper
    return callback(**use_params)  # type: ignore
  File "/home/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/__main__.py", line 55, in test_model
    test_passed = prediction.test_model(model_rdf, weight_format=weight_format, devices=devices, decimal=decimal)
  File "/home/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/prediction.py", line 487, in test_model
    bioimageio_model=model, devices=devices, weight_format=weight_format
  File "/home/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/prediction_pipeline/_prediction_pipeline.py", line 146, in create_prediction_pipeline
    processing = CombinedProcessing(bioimageio_model.inputs, bioimageio_model.outputs)
  File "/home/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/prediction_pipeline/_combined_processing.py", line 76, in __init__
    raise NotImplementedError("computing statistics for output tensors not yet implemented")
NotImplementedError: computing statistics for output tensors not yet implemented

pin bioimageio.spec

the test failures with the conda package (the latest release is still pending) make me think we should pin bioimageio.spec here. otherwise installing any past version is virtually impossible...

I stumbled over the absence of bioimageio.spec pinning in #134 (comment)

and after some discussion with @k-dominik came to the conclusion that we should pin bioimageio.spec exactly while in 0.x.x (and afterwards down to minor)

Can't select weight format in test_model (and other CLI?)

It looks like the new weight format selection mechanism is broken:

$ bioimageio test-model model.yaml tensorflow_saved_model_bundle
bioimageio.spec 0.3.4
implementing:
        general RDF 0.2.0
        model RDF 0.3.3
bioimageio.core 0.4.3post0
/home/pape/Work/bioimageio/spec-bioimage-io/bioimageio/spec/rdf/v0_2/schema.py:194: UserWarning: BSD-2 is not a recognized SPDX license identifier. See https://spdx.org/licenses/
  warnings.warn(f"{value} is not a recognized SPDX license identifier. See https://spdx.org/licenses/")
Model test for model.yaml has FAILED!
{'error': "Weight format WeightFormatEnum.tensorflow_saved_model_bundle is not in supported formats ['pytorch_state_dict', 'tensorflow_saved_model_bundle', 'pytorch_script', 'onnx', 'keras_hdf5']", 'traceback': ['  File "/home/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/resource_tests.py", line 56, in test_resource\n    bioimageio_model=model, devices=devices, weight_format=weight_format\n', '  File "/home/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/prediction_pipeline/_prediction_pipeline.py", line 149, in create_prediction_pipeline\n    bioimageio_model=bioimageio_model, devices=devices, weight_format=weight_format\n', '  File "/home/pape/Work/bioimageio/python-bioimage-io/bioimageio/core/prediction_pipeline/_model_adapters/_model_adapter.py", line 53, in create_model_adapter\n    raise ValueError(f"Weight format {weight_format} is not in supported formats {_WEIGHT_FORMATS}")\n']}

@FynnBe please also add a test to the CLI tests that tests this so we don't run into this issue again without noticing!
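The error message suggests an enum member is being compared against a list of plain strings; a minimal sketch of a fix (the class and list are reproduced only partially, for illustration) would normalize to the string value before the membership test:

```python
from enum import Enum

class WeightFormatEnum(Enum):  # abbreviated for illustration
    pytorch_state_dict = "pytorch_state_dict"
    tensorflow_saved_model_bundle = "tensorflow_saved_model_bundle"

_WEIGHT_FORMATS = ["pytorch_state_dict", "tensorflow_saved_model_bundle"]

def normalize_weight_format(weight_format):
    # the CLI hands over an enum member while the supported formats are plain
    # strings; compare the .value instead of the member itself (sketch)
    if isinstance(weight_format, Enum):
        weight_format = weight_format.value
    if weight_format not in _WEIGHT_FORMATS:
        raise ValueError(f"Weight format {weight_format} is not in supported formats {_WEIGHT_FORMATS}")
    return weight_format
```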

How should we specify what labels can be provided for training?

Not all tensors/channels that are returned by a (neural network) model require a target to compute a loss.
Concretely, how does a consumer know which labels to request from a user and how they should be sent?

Three ideas to hopefully kick off further discussion:
As this is clearly linked to model output, a suggestion integrated with outputs:

outputs:
  - name: logits # (2/3 channels should receive labels in this example)
    axes: bcyx
    ...
    shape: [1, 3, 128, 128]
    channels_to_annotate: [membrane, null, nuclei]  # <<< list option
    channels_to_annotate: {0: membrane, 2: nuclei} # <<< dict option
  - name: auxiliary or logging output  # (should not receive any labels in this example)
    axes: bc
    ...
    shape: [1, 3]
    channels_to_annotate: null # <<< or [null, null, null] 

Problems:

  • How would different output tensors 'share' annotations? (Via identical label names?)
  • What if there are target labels that don't have directly corresponding inputs?

a draft within training:

training:
    ...
    annotations:
      - name: membrane|nuclei
         axes: bcyx
         shape: [1, 2, 128, 128]
         ...

Or could we just indicate raw data vs. target in the reader output?
In some.reader.yaml:

outputs:
  - name: raw
    ...
    is_target: false  # <<< default: false ?
  - name: membrane|nuclei  # <<< '|' could denote channel semantic
     ...
     is_target: true
  - name[s]: [membrane, nuclei] # <<< 1 name per channel
     ...
     is_target: true

The only (reasonable) limitation here: readers cannot mix raw and target data within one output array.
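If the channels_to_annotate variant were adopted, a consumer could normalize its list/dict/null forms like this (a sketch of the proposal above, not an existing API):

```python
def channels_to_annotate_as_dict(spec):
    # normalize the proposed 'channels_to_annotate' field to
    # {channel_index: label_name}; None entries need no labels (sketch)
    if spec is None:
        return {}
    if isinstance(spec, dict):
        return {int(k): v for k, v in spec.items()}
    return {i: name for i, name in enumerate(spec) if name is not None}
```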
