
container-images

.github/workflows/ci.yml

Dockerfiles and build contexts for building Docker images

There is one subdirectory for each image to build. Dockerfiles and build contexts are generated from jinja2 templates in the tpls directory. The images.py script generates these files, and can also be used to build the Docker images and to test them. For more information, run:

    python images.py --help
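The template-driven generation works by filling per-image settings into a shared Dockerfile template. The repository uses jinja2 for this; the sketch below uses the stdlib string.Template as a stand-in to illustrate the idea, and the template contents and config keys are illustrative, not the repository's actual ones.

```python
from string import Template

# Stand-in for a jinja2 template under tpls/; the real templates and
# variable names live in that directory. This is an illustrative sketch.
DOCKERFILE_TPL = Template("""\
FROM $base_image
LABEL version="$version"
RUN conda install -y $packages
""")

def render_dockerfile(config: dict) -> str:
    # config holds one image's settings (as images.py would supply them)
    return DOCKERFILE_TPL.substitute(config)

print(render_dockerfile({
    "base_image": "ubuntu:22.04",
    "version": "2022.0.0-0",
    "packages": "intelpython3_full",
}))
```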

Testing

    python -m pytest tests

This will build every image. Look at the tests for example command lines.
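A rough sketch of how such a test might drive images.py once per image directory. The image names, flags, and helper below are assumptions for illustration; see images.py --help for the real interface.

```python
import shlex

# Hypothetical image directories; the real ones are the subdirectories
# of this repository.
IMAGES = ["intelpython3_core", "intelpython3_full"]

def build_command(image: str) -> list[str]:
    # Illustrative command line only; consult images.py --help for the
    # actual options it accepts.
    return ["python", "images.py", "--build", image]

for image in IMAGES:
    print(shlex.join(build_command(image)))
```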

Adding an image

  • Copy one of the existing directories
  • See images.py for directions on adding a new configuration
  • Create a new automated build on Docker Hub, copying the build settings of an existing image

Publishing a new release (Internal Use Only)

Disclaimer: Do NOT do this unless all packages for the upcoming release have been uploaded to the Intel channel on Anaconda Cloud. The best time to do this is right before FCS, when all packages have been automatically uploaded and validated.

If we are publishing 2017.0.0 build number 2, then the Docker image will have three tags: 2017.0.0-2, 2017.0.0, and latest. GitHub Actions will create a Docker image after a PR is merged. The following steps are all that is needed to update Docker Hub with the latest IntelPython.
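The tagging scheme above is mechanical, so it can be written down directly. The function name is hypothetical; the tag values come straight from the example in the text.

```python
def docker_tags(version: str, build: int) -> list[str]:
    # For version 2017.0.0, build 2, the image gets three tags:
    # the fully qualified tag, the version tag, and latest.
    return [f"{version}-{build}", version, "latest"]

print(docker_tags("2017.0.0", 2))  # ['2017.0.0-2', '2017.0.0', 'latest']
```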

  • Change update_number and build_number in images.py. Most of the time the build number stays the same (0) and the minor version is incremented (e.g. 2021.1.0 -> 2021.2.0)

  • Regenerate the READMEs and Dockerfiles for the individual images by running:

      python images.py --gen all
    
  • Create a branch and commit the changes

  • Tag the commit with the release name:

      git tag -a 2022.0.0-0 -m '2022.0.0-0 release'
      git push origin update/2022.0.0-0
      git push origin 2022.0.0-0
    
  • Create a PR, check that tests pass, and then merge it. GitHub Actions has been set up to automatically build the Docker image and push it to Docker Hub afterwards.
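The whole release flow above can be laid out as a command sequence. This is a sketch only: the branch name follows the update/&lt;tag&gt; pattern used in the example, but the commit message and checkout step are assumptions, not prescribed by this repository.

```python
def release_commands(version: str, build: int) -> list[str]:
    # Assemble the shell commands for one release; the commit message
    # and branch-creation step are illustrative assumptions.
    tag = f"{version}-{build}"
    branch = f"update/{tag}"
    return [
        "python images.py --gen all",
        f"git checkout -b {branch}",
        "git commit -am 'Regenerate for release'",
        f"git tag -a {tag} -m '{tag} release'",
        f"git push origin {branch}",
        f"git push origin {tag}",
    ]

for cmd in release_commands("2022.0.0", 0):
    print(cmd)
```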

container-images's People

Contributors

aguzmanballen, andresguzman-ballen, bsanchezr, devpramod, devpramod-intel, lindt, rscohn2, sfblackl-intel, triskadecaepyon, xaleryb


container-images's Issues

Error processing tar file

Hi,
I receive the following error when pulling your Docker image:

$ docker pull intelpython/intelpython3_full
Using default tag: latest
latest: Pulling from intelpython/intelpython3_full
8ad8b3f87b37: Pull complete 
e04db1209ac4: Pull complete 
edc7ae7e687c: Pull complete 
4a7b3487193b: Pull complete 
4f4f8387a4e8: Extracting [==================================================>]  1.785GB/1.785GB
failed to register layer: Error processing tar file(exit status 1): link /opt/conda/pkgs/opencv-3.1.0-np112py35_intel_5/share/OpenCV/haarcascades/haarcascade_eye_tree_eyeglasses.xml /opt/conda/share/OpenCV/haarcascades/haarcascade_eye_tree_eyeglasses.xml: no such file or directory

This occurs on OSX host and Linux (ubuntu 16.04) hosts.
Interestingly, I get the same error when I build from scratch (installing intel-python package with silent mode, adding to default path, etc) in my own/custom docker image, even though it builds successfully on Docker Hub.
Any ideas?

Intel Xeon Phi X200 7210 - 64-core MKL Question

Hi Intel Dev Team,
I am using Intel Python on an Intel Xeon Phi 7210 CPU with 64 cores.

What environment variables do you recommend setting to optimize Intel Python execution for this processor? My current environment vars are:

export OMP_NUM_THREADS=16 MKL_NUM_THREADS=8
export MKL_DYNAMIC=TRUE OMP_DYNAMIC=TRUE OMP_NESTED=TRUE
export MKL_MIC_ENABLE=1
export MIC_LD_LIBRARY_PATH=$LD_LIBRARY_PATH

Using the above variables seems to use the normal MKL library, not the MKL-MIC libraries.

Thanks!

Modules have a bug that crashes Python

Hi Robert,

Thank you for the awesome Intel Python project. This is one of the best projects to come out of Intel DL effort.

I am running into a bug/packaging issue while running this command inside the Docker image:

/bin/bash -c echo -e "help('modules')" | exec python3

It produces a crash with the following error in the TensorFlow module bundled with Intel Python.

**Warnings:**
2017-09-21 00:21:19.042707: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-21 00:21:19.043989: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-21 00:21:19.044129: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-09-21 00:21:19.044207: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-09-21 00:21:19.044270: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX512F instructions, but these are available on your machine and could speed up CPU computations.
2017-09-21 00:21:19.044330: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-09-21 00:21:19.066999: I tensorflow/core/distributed_runtime/rpc/grpc_channel.cc:215] Initialize GrpcChannelCache for job local -> {0 -> localhost:42002}
2017-09-21 00:21:19.068584: I tensorflow/core/distributed_runtime/rpc/grpc_server_lib.cc:316] Started server with target: grpc://localhost:42002

**Error:**
2017-09-21 00:21:29.772589: F ./tensorflow/core/common_runtime/bfc_allocator.h:291] Could not find Region for 0x223f420
/bin/bash: line 1:     7 Done                    echo -e "help('modules')"
         8 Aborted                 (core dumped) | exec python3

Does TensorFlow really belong in the core bundle? If you must include a DL package, I think PyTorch is a better framework to include given that Intel Python can significantly speed it up.

I would also recommend running the above command as part of Intel release QA procedure to ensure all bundled modules are working as expected.

Thanks!

tensorflow library error

I am trying to run this image using Singularity on a KNL cluster with these commands:

singularity build intelpython3_full.simg docker://intelpython/intelpython3_full
singularity run intelpython3_full.simg python -c "import tensorflow"

but I see this library error:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/__init__.py", line 22, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 49, in <module>
    from tensorflow.python import pywrap_tensorflow
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 74, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "/opt/conda/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "/opt/conda/lib/python3.6/imp.py", line 243, in load_module
    return load_dynamic(name, filename, file)
  File "/opt/conda/lib/python3.6/imp.py", line 343, in load_dynamic
    return _load(spec)
ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.23' not found (required by /opt/conda/lib/python3.6/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so)
