imagej-tensorflow's Introduction

ImageJ + TensorFlow integration layer

This component is a library that translates between ImageJ images and TensorFlow tensors.

It also contains a demo ImageJ command for classifying images using a TensorFlow image model, adapted from the TensorFlow image recognition tutorial.

Quickstart

git clone https://github.com/imagej/imagej-tensorflow
cd imagej-tensorflow
mvn -Pexec

This requires Maven: typically brew install maven on OS X or apt-get install maven on Ubuntu; see Maven's installation instructions for other platforms.

imagej-tensorflow's People

Contributors

@asimshankar, @ctrueden, @dietzc, @frauzufall, @hadim, @hedgehogcode, @imagejan, @jlleitschuh, @mcrescas, @nicholas-schaub


imagej-tensorflow's Issues

Add options dialog to control CPU/GPU mode, and graphics card selection

@frauzufall and I talked through how we could go about providing user-facing control over whether TensorFlow operates in CPU mode or GPU mode, as well as which graphics card is selected in GPU mode. I pushed the skeleton of an options plugin for ImageJ to expose those options (47f6d33 on the options-dialog branch), but there are several challenges:

  • For mode, we need to set org.tensorflow.NativeLibrary.MODE=something before the DefaultTensorFlowService class loads. I.e.: before the SciJava Context is created. So: in effect, we would need a restart.

  • For graphics card selection, we need to set CUDA_VISIBLE_DEVICES and/or CUDA_DEVICE_ORDER env vars, again before TensorFlow initializes, i.e. before SciJava context is created. So again: restart.

But even with a restart: how do we ensure that when the JVM starts up, these system properties and environment variables are set early enough? What machinery could do this? We do not have anything in place. We could have a file containing the desired key/value pairs, which the launcher's Java code reads and sets as early as possible, before instantiating the SciJava Context. But this would be a new feature. Perhaps it could be part of ImageJ.cfg?
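As a thought experiment, the "file of key/values read by the launcher" idea can be sketched in plain Java. Note that the class name EarlyConfig and the file name tensorflow.cfg below are hypothetical, not existing ImageJ API; this is only a sketch of the mechanism.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

/** Hypothetical launcher helper: applies key=value pairs from a config
 *  file as JVM system properties before any TensorFlow class can load. */
public class EarlyConfig {

    /** Reads the given properties file (if present) and copies each
     *  entry into the JVM's system properties. */
    public static void apply(Path configFile) throws IOException {
        if (!Files.exists(configFile)) return;
        Properties props = new Properties();
        props.load(Files.newBufferedReader(configFile));
        for (String key : props.stringPropertyNames()) {
            System.setProperty(key, props.getProperty(key));
        }
    }

    public static void main(String[] args) throws IOException {
        // Must run before the SciJava Context is created, because
        // org.tensorflow.NativeLibrary reads its MODE property in a
        // static initializer.
        apply(Path.of("tensorflow.cfg")); // file name is an assumption
        // ... now create the SciJava Context and launch ImageJ ...
    }
}
```

Environment variables such as CUDA_VISIBLE_DEVICES could not be handled this way, since a JVM cannot modify its own environment; those would have to be set by a native launcher wrapper before the JVM starts.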

An alternative could be to make sure the TensorFlowService does not reference any TensorFlow classes. E.g., similar to the LegacyService and IJ1Helper of imagej-legacy, it might work to have something like tfService.actions().loadModel(...) as a level of indirection. This might be worth trying, but even if it works, it is quite ugly and unintuitive. Before we do that, let's verify whether creating a SciJava context really initializes TF this early; maybe it doesn't.

We could submit a pull request to the TensorFlow project that defers the loading of the native library until the first time any TensorFlow operation is performed. I.e.: eliminate the static initializers, since they create problems as described above. But it would be a non-trivial change to TensorFlow to do that, potentially more difficult to maintain.

Feature Request: Object Detection or Instance Segmentation

In my past life as a cell biologist, I used ImageJ/Fiji to manually label regions of interest (ROIs) in microscope images. After loading them into the RoiManager, the ROIs would be processed and analyzed with ImageJ/Fiji's various other tools. Here's an example image -- not mine -- in which someone might want to identify the oval-shaped cell nuclei pseudocolored in blue.
[Image: DAPI-stained cells with blue nuclei]
In this particular image, it might be possible to use traditional signal-intensity thresholding plus watershed methods, but most images, possibly including the one above, have enough edge cases that traditional approaches fail.

I guess what I'm trying to say is that manual segmentation is fairly labor-intensive, and I would have welcomed a plugin for reliably predicting ROIs. Would anyone be interested in writing an object detection plugin? I don't think it should be too difficult to implement; much of the code can be copied directly from TensorFlow's Object Detection Java API.

That said, object detection may not be ideal for microscope images, because biological objects are often irregularly shaped. For example, a biologist might be interested in outlining the solid-green individual cells in the above image. If it's not too difficult, we might instead implement object instance segmentation, e.g. with Mask R-CNN, again using the Object Detection API.

I can start working on this myself, but because I have zero experience in Java, the going might be slow. I could learn, but if anyone else is interested in helping out, I'd more than welcome the support.

install tensorflow through script interface

Hi, we are trying to use the TensorFlow manager through pyimagej in Jupyter notebooks (headless=False). It would be great if there were a way to ask the manager to install a specific TensorFlow version through either a macro or a pyimagej Python script.

cc @ctrueden

Try to implement the TensorFlow version control in my code

Hello, I am trying to implement control of TensorFlow versions in my code, using the functionality added to ImageJ-TensorFlow. I am having some trouble, as I do not know how to tackle it.
Could you please provide some guidelines?
Regards,
Carlos

jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class java.net.URLClassLoader

Hi, we are hitting this error with JDK 11:

(Fiji Is Just) ImageJ 2.1.0/1.53c; Java 11.0.11 [64-bit]; Linux 4.15.0-144-generic; 361MB of 79744MB (<1%)
 
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.ClassCastException: class jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and java.net.URLClassLoader are in module java.base of loader 'bootstrap')
	at net.imagej.legacy.LegacyService.runLegacyCompatibleCommand(LegacyService.java:307)
	at net.imagej.legacy.DefaultLegacyHooks.interceptRunPlugIn(DefaultLegacyHooks.java:166)
	at ij.IJ.runPlugIn(IJ.java)
	at ij.Executer.runCommand(Executer.java:150)
	at ij.Executer.run(Executer.java:68)
	at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.util.concurrent.ExecutionException: java.lang.ClassCastException: class jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and java.net.URLClassLoader are in module java.base of loader 'bootstrap')
	at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
	at net.imagej.legacy.LegacyService.runLegacyCompatibleCommand(LegacyService.java:303)
	... 5 more
Caused by: java.lang.ClassCastException: class jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to class java.net.URLClassLoader (jdk.internal.loader.ClassLoaders$AppClassLoader and java.net.URLClassLoader are in module java.base of loader 'bootstrap')
	at net.imagej.tensorflow.util.TensorFlowUtil.getTensorFlowJARVersion(TensorFlowUtil.java:79)
	at net.imagej.tensorflow.util.TensorFlowUtil.versionFromClassPathJAR(TensorFlowUtil.java:94)
	at net.imagej.tensorflow.ui.TensorFlowLibraryManagementCommand.getTensorFlowJARVersion(TensorFlowLibraryManagementCommand.java:112)
	at net.imagej.tensorflow.ui.TensorFlowLibraryManagementCommand.initAvailableVersions(TensorFlowLibraryManagementCommand.java:98)
	at net.imagej.tensorflow.ui.TensorFlowLibraryManagementCommand.run(TensorFlowLibraryManagementCommand.java:86)
	at org.scijava.command.CommandModule.run(CommandModule.java:196)
	at org.scijava.module.ModuleRunner.run(ModuleRunner.java:165)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:124)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:63)
	at org.scijava.thread.DefaultThreadService.lambda$wrap$2(DefaultThreadService.java:225)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	... 1 more

Here is the java version:

ac4816754e0b:~/Fiji.app$ java -version
openjdk version "11.0.11" 2021-04-20
OpenJDK Runtime Environment AdoptOpenJDK-11.0.11+9 (build 11.0.11+9)
OpenJDK 64-Bit Server VM AdoptOpenJDK-11.0.11+9 (build 11.0.11+9, mixed mode)

cc @esgomezm

Use ByteBuffers for all image types

TensorFlow can use ByteBuffers directly when creating tensors, whereas other buffer types are copied.

From the documentation of the Tensor class:
create (long[] shape, FloatBuffer data):

Create a Float Tensor with data from the given buffer.

Creates a Tensor with the given shape by copying elements from the buffer (starting from its current position) into the tensor. For example, if shape = {2,3} (which represents a 2x3 matrix) then the buffer must have 6 elements remaining, which will be consumed by this method.

create (Class<T> type, long[] shape, ByteBuffer data):

Create a Tensor of any type with data from the given buffer.

Creates a Tensor with the provided shape of any type where the tensor's data has been encoded into data as per the specification of the TensorFlow C API.
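For illustration, here is a stdlib-only sketch of how float image data could be packed into a direct ByteBuffer for the Tensor.create(Class, long[], ByteBuffer) overload. The helper name TensorBuffers is hypothetical, the use of native byte order is an assumption about what the TensorFlow C API expects, and the actual TensorFlow call is left as a comment so the sketch stays self-contained.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/** Sketch: pack float pixel data into a direct ByteBuffer so that
 *  Tensor.create(Class, long[], ByteBuffer) could consume it without
 *  an intermediate FloatBuffer copy. */
public class TensorBuffers {

    public static ByteBuffer packFloats(float[] pixels) {
        ByteBuffer buf = ByteBuffer
            .allocateDirect(pixels.length * Float.BYTES)
            .order(ByteOrder.nativeOrder()); // assumption: native order
        // Writing through the float view leaves buf's own position at 0,
        // so 'remaining' covers all the data.
        buf.asFloatBuffer().put(pixels);
        return buf;
    }

    public static void main(String[] args) {
        float[] pixels = { 0.1f, 0.2f, 0.3f, 0.4f, 0.5f, 0.6f };
        ByteBuffer data = packFloats(pixels);
        // With the TensorFlow Java API one would then call, e.g.:
        // Tensor<Float> t = Tensor.create(Float.class, new long[]{2, 3}, data);
        System.out.println(data.remaining()); // prints 24 (6 floats x 4 bytes)
    }
}
```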

Using GPU with TensorFlow

At the moment, the GPU can only be used on Linux with Java TensorFlow: https://www.tensorflow.org/versions/master/install/install_java#gpu_support

Windows support should come soon (they are actively working on it).

To enable the GPU on Linux, the Maven dependencies need to be different:

<dependency>
  <groupId>org.tensorflow</groupId>
  <artifactId>libtensorflow</artifactId>
  <version>1.7.0-rc1</version>
</dependency>
<dependency>
  <groupId>org.tensorflow</groupId>
  <artifactId>libtensorflow_jni_gpu</artifactId>
  <version>1.7.0-rc1</version>
</dependency>

It's important to note that the GPU won't be enabled with only <artifactId>libtensorflow</artifactId> in the POM (even if it's a parent POM).

Also, I noticed that TF can't find the CUDA libs in /usr/local/cuda if I don't start Eclipse from the command line (where I have LD_LIBRARY_PATH correctly set in my .bash_profile).

Note that for Python, TF ships separate with-GPU and without-GPU packages, which means the GPU/CPU choice is made at install time rather than discovered at runtime.

So I can't really think of a good way to deal with this. In an ideal world, all ImageJ packages would have the same build and would load the correct dependency at runtime, according to the platform and the presence of a GPU. But I think that's kind of a dream at the moment.

Anyway, I just wanted to let you know that in case you have comments.

Upgrade to TensorFlow 2

I'll just use this issue to save some notes regarding the upgrade:

The new Maven dependency for all operating systems (use tensorflow-core-platform-mkl-gpu for GPU support, though that didn't immediately work for me):

<dependency>
    <groupId>org.tensorflow</groupId>
    <artifactId>tensorflow-core-platform-mkl</artifactId>
    <version>0.2.0</version>
</dependency>

Example of how to create Tensors from arrays with the new API:

import org.tensorflow.ndarray.NdArrays;
import org.tensorflow.ndarray.Shape;
import org.tensorflow.ndarray.buffer.DataBuffers;
import org.tensorflow.types.TFloat32;

float[] imgArray = new float[height * width * channel];
// ... assign image values to the array
Shape shape = Shape.of(1, height, width, channel);
TFloat32 tensor = TFloat32.tensorOf(NdArrays.wrap(shape, DataBuffers.of(imgArray)));

Ping @tomburke-rse

Cache system variable improvements

We should introduce a system property imagej.tensorflow.downloads.dir for the directory where downloaded TF versions are saved, and make both it and imagej.tensorflow.modelsdir configurable in the Edit > Options > TensorFlow plugin.
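A minimal sketch of how such a property could be resolved follows. The class name TensorFlowDirs and the fallback location under the user home are assumptions for illustration, not existing imagej-tensorflow code.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

/** Hypothetical helper: resolve the downloads directory from the
 *  proposed imagej.tensorflow.downloads.dir system property. */
public class TensorFlowDirs {

    public static Path downloadsDir() {
        String configured = System.getProperty("imagej.tensorflow.downloads.dir");
        if (configured != null) return Paths.get(configured);
        // Fallback location is an assumption, not the real default.
        return Paths.get(System.getProperty("user.home"), ".imagej", "tensorflow");
    }

    public static void main(String[] args) {
        System.out.println(downloadsDir());
    }
}
```

Users could then override the location at launch time with, e.g., -Dimagej.tensorflow.downloads.dir=/data/tf-cache.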

java.lang.IllegalStateException: close() has been called on the Graph

The recent fix to actually add models to the cache map (14f6da1) leads to a potential issue: a SavedModelBundle has to be closed to release its allocated memory. If an implementation does this:

SavedModelBundle model = tensorFlowService.loadModel(source, modelName, MODEL_TAG);
model.close();
model = tensorFlowService.loadModel(source, modelName, MODEL_TAG);

... the DefaultTensorFlowService will not know that this model was already closed and will return it from the cache map, leading to this exception:

java.lang.IllegalStateException: close() has been called on the Graph

The solution is to not close the model yourself, but to let the DefaultTensorFlowService close it on dispose. One can also trigger closing all cached models by calling tensorFlowService.dispose().

I thought writing this down in an issue might help someone stumbling over this exception. Is this enough? Also, this is a breaking API change, right? An alternative would be to not cache models at all. Sadly, there seems to be no way to find out whether a model was closed.
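The pitfall can be reproduced with a stdlib-only mock. Model and ModelCacheDemo below are stand-ins for SavedModelBundle and DefaultTensorFlowService, not the real API; the sketch only shows why a cache that is unaware of close() hands back stale instances.

```java
import java.util.HashMap;
import java.util.Map;

/** Stdlib-only illustration of the caching pitfall described above. */
public class ModelCacheDemo {

    /** Stand-in for SavedModelBundle. */
    static class Model implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; }
        void run() {
            if (closed) throw new IllegalStateException(
                "close() has been called on the Graph");
        }
    }

    static final Map<String, Model> cache = new HashMap<>();

    /** Mimics the service's loadModel(): loads once, then always
     *  returns the cached instance, unaware that it may be closed. */
    static Model loadModel(String name) {
        return cache.computeIfAbsent(name, k -> new Model());
    }

    public static void main(String[] args) {
        Model m = loadModel("demo-model");
        m.close();                       // caller "releases" the model...
        try {
            loadModel("demo-model").run(); // ...and the cached copy throws
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```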

Ping @HedgehogCode @ctrueden

Cached models only compared by name and tag

I have an issue with the current way we load cached models. If one rebuilds a model locally, or if we update remote networks but keep the name, the plugin will still use the old cached one, because we only compare the file name and tag (right?). What would be the best way to do this? Are hash sums inappropriate for some reason? Could we at least check for a changed file timestamp? I can implement it; I'm just asking for the best strategy, or whether I am misunderstanding something.
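A content-hash comparison could look like the following stdlib-only sketch. ModelHash is hypothetical, not imagej-tensorflow code; and since hashing large model files is not free, checking the file's last-modified timestamp first might be a cheaper compromise.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Hypothetical helper: identify a cached model by the SHA-256 of its
 *  file contents instead of by name and tag alone. */
public class ModelHash {

    public static String sha256(Path file)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        // Note: reads the whole file into memory; a real implementation
        // should stream large model files through a DigestInputStream.
        byte[] hash = digest.digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("model", ".pb");
        Files.write(f, new byte[] { 1, 2, 3 });
        // If this differs from the hash stored with the cache entry,
        // the cached model is stale and should be reloaded.
        System.out.println(sha256(f));
    }
}
```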
