
condacolab's People

Contributors

alexmalins, jaimergp, restlessronin, ssurbhi560


condacolab's Issues

Getting error while installing dependency python packages

@jaimergp
Hi!
Thank you for your guide.
I have successfully installed Conda. But now when I try to run this line of code:

!conda create --name ABC --file requirements.txt

I am getting this output:

Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed

PackagesNotFoundError: The following packages are not available from current channels:

  - argon2-cffi==20.1.0=pypi_0
  - cython==0.29.21=pypi_0
  - cudatoolkit==9.2=0
  - smmap==3.0.4=pypi_0
  - bleach==3.2.1=pypi_0
  - wandb==0.10.5=pypi_0
  - jupyter==1.0.0=pypi_0
  - pandocfilters==1.4.3=pypi_0
  - prometheus-client==0.9.0=pypi_0
  - wcwidth==0.2.5=pypi_0
  - send2trash==1.5.0=pypi_0
  - libtiff==4.1.0=h2733197_1
  - jupyter-console==6.2.0=pypi_0
  - cffi==1.14.0=py37h2e261b9_0
  - opencv-python==4.4.0.44=pypi_0
  - numpy==1.19.1=py37hbc911f0_0
  - notebook==6.1.6=pypi_0
  - matplotlib==3.3.2=pypi_0
  - zipp==3.3.0=pypi_0
  - pytorch==1.3.1=cuda92py37hb0ba70e_0
  - xz==5.2.5=h7b6447c_0
  - libstdcxx-ng==9.1.0=hdf63c60_0
  - libffi==3.2.1=hf484d3e_1007
  - pexpect==4.8.0=pypi_0
  - prompt-toolkit==3.0.10=pypi_0
  - setuptools==49.6.0=py37_1
  - lz4-c==1.9.2=he6710b0_1
  - jpeg==9b=h024ee3a_2
  - nbformat==5.0.8=pypi_0
  - pyzmq==20.0.0=pypi_0
  - ca-certificates==2020.7.22=0
  - attrs==20.3.0=pypi_0
  - docker-pycreds==0.4.0=pypi_0
  - llvmlite==0.34.0=pypi_0
  - pathtools==0.1.2=pypi_0
  - webencodings==0.5.1=pypi_0
  - psutil==5.7.2=pypi_0
  - tornado==6.1=pypi_0
  - pip==20.2.2=py37_0
  - configparser==5.0.1=pypi_0
  - networkx==2.5=pypi_0
  - libedit==3.1.20191231=h14c3975_1
  - widgetsnbextension==3.5.1=pypi_0
  - cudnn==7.6.5=cuda9.2_0
  - gitpython==3.1.9=pypi_0
  - click==7.1.2=pypi_0
  - ipython==7.19.0=pypi_0
  - sklearn==0.0=pypi_0
  - nest-asyncio==1.4.3=pypi_0
  - qtpy==1.9.0=pypi_0
  - protobuf==3.13.0=pypi_0
  - scikit-image==0.17.2=pypi_0
  - watchdog==0.10.3=pypi_0
  - threadpoolctl==2.1.0=pypi_0
  - certifi==2020.6.20=py37_0
  - intel-openmp==2020.2=254
  - pandas==1.1.4=pypi_0
  - nvidia-ml-py3==7.352.0=pypi_0
  - pickleshare==0.7.5=pypi_0
  - pytz==2020.4=pypi_0
  - ninja==1.10.1=py37hfd86e86_0
  - cycler==0.10.0=pypi_0
  - joblib==0.16.0=pypi_0
  - zlib==1.2.11=h7b6447c_3
  - tk==8.6.10=hbc83047_0
  - mistune==0.8.4=pypi_0
  - pillow==7.2.0=py37hb39fc2d_0
  - async-generator==1.10=pypi_0
  - markupsafe==1.1.1=pypi_0
  - python-dateutil==2.8.1=pypi_0
  - openssl==1.1.1h=h7b6447c_0
  - packaging==20.8=pypi_0
  - olefile==0.46=py37_0
  - zstd==1.4.5=h9ceee32_0
  - ncurses==6.2=he6710b0_1
  - libpng==1.6.37=hbc83047_0
  - metric-learn==0.6.2=pypi_0
  - jupyter-core==4.7.0=pypi_0
  - _pytorch_select==0.2=gpu_0
  - entrypoints==0.3=pypi_0
  - promise==2.3=pypi_0
  - jupyterlab-widgets==1.0.0=pypi_0
  - nbclient==0.5.1=pypi_0
  - traitlets==5.0.5=pypi_0
  - wheel==0.35.1=py_0
  - scipy==1.5.2=pypi_0
  - kiwisolver==1.2.0=pypi_0
  - chardet==3.0.4=pypi_0
  - jupyterlab-pygments==0.1.2=pypi_0
  - pyparsing==2.4.7=pypi_0
  - pycparser==2.20=py_2
  - jupyter-client==6.1.11=pypi_0
  - tqdm==4.50.0=pypi_0
  - pywavelets==1.1.1=pypi_0
  - mkl_fft==1.2.0=py37h23d657b_0
  - ipykernel==5.4.3=pypi_0
  - gitdb==4.0.5=pypi_0
  - pycocotools==2.0.2=pypi_0
  - h5py==2.10.0=pypi_0
  - tifffile==2020.11.26=pypi_0
  - readline==7.0=h7b6447c_5
  - torchvision==0.4.2=cuda92py37h1667eeb_0
  - jsonschema==3.2.0=pypi_0
  - freetype==2.10.2=h5ab3b9f_0
  - idna==2.10=pypi_0
  - subprocess32==3.5.4=pypi_0
  - defusedxml==0.6.0=pypi_0
  - numba==0.51.2=pypi_0
  - terminado==0.9.2=pypi_0
  - ipywidgets==7.6.3=pypi_0
  - requests==2.24.0=pypi_0
  - six==1.15.0=py_0
  - sqlite==3.33.0=h62c20be_0
  - mkl_random==1.1.1=py37h0573a6f_0
  - qtconsole==5.0.1=pypi_0
  - lcms2==2.11=h396b838_0
  - python==3.7.5=h0371630_0
  - jinja2==2.11.2=pypi_0
  - backcall==0.2.0=pypi_0
  - numpy-base==1.19.1=py37hfa32c7d_0
  - jedi==0.18.0=pypi_0
  - decorator==4.4.2=pypi_0
  - flake8==3.8.4=pypi_0
  - ipython-genutils==0.2.0=pypi_0
  - pyflakes==2.2.0=pypi_0
  - scikit-learn==0.23.2=pypi_0
  - testpath==0.4.4=pypi_0
  - sentry-sdk==0.19.0=pypi_0
  - pyyaml==5.3.1=pypi_0
  - nbconvert==6.0.7=pypi_0
  - pygments==2.7.3=pypi_0
  - mkl-service==2.3.0=py37he904b0f_0
  - pycodestyle==2.6.0=pypi_0
  - mccabe==0.6.1=pypi_0
  - importlib-metadata==2.0.0=pypi_0
  - urllib3==1.25.10=pypi_0
  - shortuuid==1.0.1=pypi_0
  - imageio==2.9.0=pypi_0
  - ptyprocess==0.7.0=pypi_0
  - pyrsistent==0.17.3=pypi_0
  - parso==0.8.1=pypi_0
  - libgcc-ng==9.1.0=hdf63c60_0

Current channels:

  - https://conda.anaconda.org/conda-forge/linux-64
  - https://conda.anaconda.org/conda-forge/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.

I tried to update Conda as well using this statement:

!conda update --all

But I get this error when rerunning the line of code above, i.e.,

!conda create --name ABC --file requirements.txt

Error:

Collecting package metadata (current_repodata.json): failed

InvalidVersionSpec: Invalid version '4.19.112+': empty version component

Could you please help me figure out what I am doing wrong here?
Regards

`ModuleNotFoundError` thrown when importing libraries

I installed condacolab, then added my environment.yml as instructed in the README, and everything installed perfectly. But when I went to run my code, a ModuleNotFoundError was thrown at my import statement. I double-checked with conda list that everything had installed correctly, and the library was definitely there. So I went to the setup Colab notebook linked in the README, tried running the code there, and was met with the same issue:
[Screenshot (2021-12-05): the same ModuleNotFoundError raised in the setup notebook]

If I check the Python and TensorFlow versions in my notebook, both return the correct versions installed via conda, so I'm not sure why other packages aren't being recognized.

I'm not super technically inclined, so I unfortunately have no idea what could be causing this; it very well could be that I'm accidentally missing some vital step! Otherwise, perhaps some recent update is causing things to install in the wrong place? Sorry I can't be of more help!

Avoid kernel restart if already installed?

Would it make sense to avoid restarting the kernel when running condacolab.install() on an already-installed state? This would make re-executing all cells a worry-free (idempotent, if you like) action.

(The same behaviour can probably be achieved without code changes by wrapping install in an "if ...check" call.)
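A minimal sketch of that guard, assuming condacolab's default /usr/local prefix; the `conda-meta/history` marker and the helper name are just one plausible way to detect an existing install, not condacolab's own API:

```python
import os

def ensure_condacolab(prefix="/usr/local"):
    """Run condacolab.install() only if conda isn't already at `prefix`."""
    marker = os.path.join(prefix, "conda-meta", "history")
    if os.path.exists(marker):
        return False  # already installed: skip install(), no kernel restart
    import condacolab  # assumed installed via `pip install condacolab`
    condacolab.install()  # this restarts the kernel once
    return True
```

With a guard like this at the top of the notebook, "Run all" would only trigger the restart on the first execution.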

Installing a specific version of python

Hello,
How are you?
Thanks for contributing to this project.
Nowadays, the default version of Python 3 on Google Colab is 3.10.
I want to use Python 3.8 on Colab.
So I am going to install and use condacolab.
How should I install and use condacolab for Python 3.8?
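One hedged approach, assuming condacolab's install_from_url() accepts any Miniconda-style installer: point it at a Python 3.8 build of Miniconda. The helper name is hypothetical, and the exact installer filename is an assumption; check repo.anaconda.com/miniconda for current names.

```python
def miniconda_url(py_tag="py38", version="4.12.0"):
    """Build a Miniconda installer URL for a given Python tag (hypothetical helper)."""
    return (
        "https://repo.anaconda.com/miniconda/"
        f"Miniconda3-{py_tag}_{version}-Linux-x86_64.sh"
    )

# In Colab (not run here):
# import condacolab
# condacolab.install_from_url(miniconda_url())
```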

Conda not installing package for Python 3.9, getting package for Python 3.8 instead

I am installing condacolab:

!pip install -q condacolab
import condacolab
condacolab.install()

And then installing my package:

!conda install -c opensim_admin opensim

But the version installed by conda is for Python 3.8.

opensim-4.4 | py38np120

python --version returns Python 3.9.16, and !conda info reports python version : 3.9.16.final.0. Also, I can confirm that a package for Python 3.9 exists, because when I install it specifying the version and build tag, it installs the proper package:

conda install -c opensim_admin opensim=4.4=py39np120

This installs the correct version opensim-4.4 | py39np120 and my code works.

Could it be that condacolab / conda is not retrieving the correct packages?

ModuleNotFoundError when installing via install_from_url()

Hi, I'm installing condacolab via install_from_url() because I need a conda/mamba with Python 3.9 while Colab is pinned to 3.10. I tried several versions of Mambaforge, including some from the official website, as well as this one:

https://github.com/jaimergp/miniforge/releases/tag/22.11.1-4_colab

I've also tried the conda constructor route, following the instructions on the condacolab PyPI page but changing the version to 3.9:

name: condacolab  # you can edit this if you want
version: 0.1      # increment if you change the specs, for reproducibility!

channels:
  - conda-forge
specs:
  - python =3.9  # Python MUST be version 3.7
  - pip
  - conda
  - mamba  # mamba is not needed but recommended

# Pip dependencies are NOT recommended. If you do need them
# uncomment the line below and edit `pip-dependencies.sh`.
# post_install: pip-dependencies.sh

# do not edit below this line
# ---------------------------
installer_type: sh

After running all these installations, I load my environment.yml, and conda list shows the correct versions of my dependencies. For example:

!conda list | grep ngl
nglview                   3.0.8              pyh1da8cd4_0    conda-forge

But when doing the import in a cell:

import nglview
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
[<ipython-input-12-dd30d89a3198>](https://localhost:8080/#) in <cell line: 1>()
----> 1 import nglview

ModuleNotFoundError: No module named 'nglview'

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------

The weirdest thing is that the import works when done in plain Python:

!python
Python 3.9.19 | packaged by conda-forge | (main, Mar 20 2024, 12:50:21) 
[GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import nglview
>>> nglview.demo()
NGLWidget()

Any idea what I'm missing?
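One way to narrow this down (a diagnostic sketch, not a fix): if the import works under `!python` but not in cells, the notebook kernel may still be running a different interpreter than the conda one. Comparing paths makes that visible; the helper name is hypothetical.

```python
import shutil
import sys

def interpreter_report():
    """Show which Python the kernel runs vs. which one a shell command would run."""
    return {
        "kernel_python": sys.executable,         # interpreter behind the cells
        "shell_python": shutil.which("python"),  # what `!python` resolves to
        "version": sys.version.split()[0],
    }
```

If `kernel_python` still points at Colab's stock interpreter while `shell_python` points into the conda prefix, the kernel was not switched over after install_from_url().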

No package can be installed for pin: cudatoolkit 12.2.

Hi,

Until yesterday, I was using CondaColab and everything was working perfectly. Unfortunately, today I encountered the following error:


# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<

    Traceback (most recent call last):
      File "/usr/local/lib/python3.10/site-packages/conda/exceptions.py", line 1124, in __call__
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 941, in exception_converter
        raise e
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 934, in exception_converter
        exit_code = _wrapped_main(*args, **kwargs)
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 892, in _wrapped_main
        result = do_call(parsed_args, p)
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 754, in do_call
        exit_code = install(args, parser, "install")
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 547, in install
        solver.add_pin(final_spec)
    RuntimeError: No package can be installed for pin: cudatoolkit 12.2.*

It seems the error is related to Mamba, but I couldn't find any relevant information in their GitHub repository.
Are you experiencing the same issue? Has anyone managed to resolve it?

Thank you,

Pablo

google-colab package error when using new base environment

I get the following error when using condacolab master with manually built https://github.com/conda/constructor/ installers:

Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... failed with initial frozen solve. Retrying with flexible solve.

PackagesNotFoundError: The following packages are not available from current channels:

  - google-colab

Current channels:

  - https://repo.anaconda.com/pkgs/main/linux-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/r/linux-64
  - https://repo.anaconda.com/pkgs/r/noarch

To search for alternate channels that may provide the conda package you're
looking for, navigate to

    https://anaconda.org

and use the search bar at the top of the page.

Reverting to pip install https://github.com/conda-incubator/condacolab/archive/28521d7c5c494dd6377bb072d97592e30c44609c.tar.gz seems to solve the issue, so I suspect this was introduced with #31?

Getting AssertionError while installing from URL

[Screenshot: the AssertionError raised during installation from a URL]

Here is the log file:

__installer__.sh: line 1: syntax error near unexpected token `<'
__installer__.sh: line 1: `<!DOCTYPE html><html class="maestro global-header" lang="en" xml:lang="en" xmlns="http://www.w3.org/1999/xhtml"><head><script nonce="veW741GqN7C2UONCw7GW">'
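The log shows the downloaded "installer" was actually an HTML page, which usually means the URL was a download landing page rather than a direct link to the .sh file. A small sanity check before installing can catch this; the helper name is hypothetical:

```python
def looks_like_html(path):
    """Return True if the file starts like an HTML document instead of a shell script."""
    with open(path, "rb") as f:
        head = f.read(64).lstrip().lower()
    return head.startswith(b"<!doctype") or head.startswith(b"<html")
```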

No module named 'condacolab'

Environment: Google Colab

Code:

!pip install -q condacolab
import condacolab
condacolab.install()

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
[<ipython-input-10-dfa229587fce>](https://localhost:8080/#) in <module>
      1 get_ipython().system('pip install -q condacolab')
----> 2 import condacolab
      3 condacolab.install()

ModuleNotFoundError: No module named 'condacolab'

---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.

To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
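A hedged workaround sketch: in notebooks, a plain `!pip` runs in a subshell and may target a different interpreter than the kernel. Installing via `sys.executable` (or the `%pip` magic) ties the install to the kernel's own Python; the helper name is hypothetical.

```python
import subprocess
import sys

def pip_install(package):
    """Install `package` into the interpreter that runs this kernel."""
    return subprocess.run(
        [sys.executable, "-m", "pip", "install", "-q", package],
        capture_output=True, text=True,
    )

# pip_install("condacolab"); afterwards `import condacolab` should resolve.
```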

Python 3.10

Is there support for Python 3.10? (This is now the default.)

I was looking at the GitHub repository for mamba, and it looks like only Python 3.9 is supported.

Crash after condacolab installation

I have this issue with running a colab notebook using condacolab.

CELL 1
import subprocess
subprocess.run( 'pip install -q condacolab'.split() )
import condacolab
condacolab.install()

CELL 2
import condacolab
condacolab.check()

This returns the following error:
AssertionError Traceback (most recent call last)
in <cell line: 2>()
1 import condacolab
----> 2 condacolab.check()

/usr/local/lib/python3.9/dist-packages/condacolab.py in check(prefix, verbose)
    300 f"{prefix}/bin" in os.environ["PATH"]
    301 ), f"💥💔💥 PATH was not patched! Value: {os.environ['PATH']}"
--> 302 assert (
    303 f"{prefix}/lib" in os.environ["LD_LIBRARY_PATH"]
    304 ), f"💥💔💥 LD_LIBRARY_PATH was not patched! Value: {os.environ['LD_LIBRARY_PATH']}"

AssertionError: 💥💔💥 LD_LIBRARY_PATH was not patched! Value: /usr/local/nvidia/lib:/usr/local/nvidia/lib64

If I skip condacolab.check and proceed to installing and importing packages:
CELL 2
import subprocess
_ = subprocess.run( 'mamba install scipy -c conda-forge --yes'.split() )
import scipy

I get a crash from colab: Your session crashed for an unknown reason.

Remarkably, in both cases I only get this issue when executing all cells together. If instead I execute Cell 1, wait for it to complete, and then execute the second, everything runs smoothly.
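For the check() failure specifically, a hedged workaround sketch (assuming condacolab's default /usr/local prefix, with a hypothetical helper name): prepend the conda lib directory to LD_LIBRARY_PATH in-process before the assertion runs. Note this only satisfies the environment-variable check; it does not undo whatever the interrupted kernel restart left unconfigured.

```python
import os

def patch_ld_library_path(prefix="/usr/local"):
    """Ensure `prefix`/lib is on LD_LIBRARY_PATH (workaround, not a real fix)."""
    lib = f"{prefix}/lib"
    current = os.environ.get("LD_LIBRARY_PATH", "")
    if lib not in current.split(":"):
        os.environ["LD_LIBRARY_PATH"] = f"{lib}:{current}" if current else lib
    return os.environ["LD_LIBRARY_PATH"]
```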

possible problem with gdal

Hello,

I am using Google Colab to work on a model calibration. A few weeks ago I was able to run the code properly (thanks to condacolab!!), but now Colab throws errors like ValueError: Could not open ('my_file',) as a gdal.OF_RASTER and AttributeError: module 'osgeo.osr' has no attribute 'OAMS_TRADITIONAL_GIS_ORDER'. While downloading GDAL, the only possible issue I detected was the message failed with initial frozen solve. Retrying with flexible solve. And from these comments on Stack Overflow, there seems to be a problem with installing GDAL with conda. So I was wondering if you could help me.

here is the link to my colab notebook if you want to reproduce the errors

And if needed, this is the link to the calibration user guide (the author shares the GitHub repository with the data and the steps to calibrate the model).

Regards,
Carson

Requested feature condacolab.install(yml="path/to/file.yml")

It would be super nice if we could get a one-liner to create an environment from a .yml file.

So far I am doing this

!pip install -q condacolab
import condacolab
condacolab.install() #ignore message about session crashing, this is intended
import condacolab
condacolab.check()
!wget -c https://raw.githubusercontent.com/XXXXXXXXX/master/environment.yml
!conda env update --file environment.yml

But I guess it should be easy to wrap this into:
condacolab.install(yml="environment.yml")

Thanks!
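A sketch of what such a wrapper might build on, using only the `conda env update` invocation the snippet above already relies on (the helper name is hypothetical; `conda` on PATH is assumed):

```python
import subprocess

def env_update_cmd(yml_path="environment.yml", name=None):
    """Build the `conda env update` command for a given environment file."""
    cmd = ["conda", "env", "update", "--file", yml_path]
    if name:
        cmd += ["--name", name]
    return cmd

# In Colab, after condacolab.install() and the kernel restart:
# subprocess.run(env_update_cmd("environment.yml"), check=True)
```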

Your pinning does not match what's currently installed

Hi there.

I install the package as follows:

!pip install -q condacolab
import condacolab
condacolab.install()

The runtime restarts. When I try to install packages, e.g. using

!mamba install -c conda-forge -c intel -c astra-toolbox -c ccpi cil numpy astra-toolbox --quiet

I get the error as follows:

Your pinning does not match what's currently installed. Please remove the pin and fix your installation
  Pin: python=3.9
  Currently installed: conda-forge/linux-64::python==3.8.15=h4a9ceb5_0_cpython

Note that I did not have this issue yesterday. Yesterday, I believe, Google Colab worked with Python 3.8. Something has changed.

Regards,

Revamp `env` kwarg in new implementation and provide pre launch logic

We forgot about the env keyword argument in #31, oopsie.

This keyword allows users to inject environment variables in the kernel launch. We used it for LD_LIBRARY_PATH, but we don't need that anymore because we activate the environment fully.

Instead, I suggest we revamp this option and mix it with a new keyword argument, `pre_kernel_launch` or something. It should accept either a path to a file or a multiline content str.

Logic would be like:

def fn(..., env={"my_var": "my_value"}, pre_kernel_launch="my_script.sh"):
  ...
  contents = ""
  for key, value in env.items():
    contents += f'export {key}="{value}"\n'
  with open(pre_kernel_launch) as f:
    contents += "\n"
    contents += f.read()
  ...
  with open(sys.executable, "w") as f:
    f.write(
      f"""
      #!/bin/bash
      {contents}  # this is the new change!
      source {prefix}/etc/profile.d/conda.sh
      conda activate
      unset PYTHONPATH
      mv /usr/bin/lsb_release /usr/bin/lsb_release.renamed_by_condacolab.bak
      exec {bin_path}/python $@
      """
    )

python 3.8?

Is there a way to use condacolab to switch to Python 3.8 (or any other version)? I tried !conda install -c anaconda python=3.8, but that resulted in:

โœจ๐Ÿฐโœจ Everything looks OK!
Collecting package metadata (current_repodata.json): done
Solving environment: | WARNING conda.core.solve:_add_specs(611): pinned spec python=3.7 conflicts with explicit specs. Overriding pinned specfailed with initial frozen solve. Retrying with flexible solve.
Solving environment: / WARNING conda.core.solve:_add_specs(611): pinned spec python=3.7 conflicts with explicit specs. Overriding pinned specfailed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: / WARNING conda.core.solve:_add_specs(611): pinned spec python=3.7 conflicts with explicit specs. Overriding pinned specfailed with initial frozen solve. Retrying with flexible solve.
Solving environment: | WARNING conda.core.solve:_add_specs(611): pinned spec python=3.7 conflicts with explicit specs. Overriding pinned specfailed

SpecsConfigurationConflictError: Requested specs conflict with configured specs.
requested specs:
- python=3.8
pinned specs:
- python_abi=3.7[build=cp37]
Use 'conda config --show-sources' to look for 'pinned_specs' and 'track_features'
configuration parameters. Pinned specs may also be defined in the file
/usr/local/conda-meta/pinned.
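The error message itself points at /usr/local/conda-meta/pinned; inspecting (and, if you accept the risk, editing) that file is the mechanism conda refers to. A small reader sketch, with a hypothetical helper name:

```python
import os

def read_pins(path="/usr/local/conda-meta/pinned"):
    """Return the pinned specs listed in conda's `pinned` file, if present."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return [ln.strip() for ln in f if ln.strip() and not ln.startswith("#")]
```

Removing the `python` line from that file lifts the pin, though mixing Python versions inside condacolab's base environment may break other packages.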

Release strategies for 0.2x

  • Cut prerelease (not scheduled yet)
  • Update README
  • Release 0.1.5 with a warning (optional)
  • Twitter thread
  • Pinned issue

Weird conflict errors

I adapted the example notebook: https://colab.research.google.com/drive/1HjikV9AS7X4eklbPtauTG_N6XNGIwOHG?usp=sharing

But I get the following errors:

platform: linux-64
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... 
Found conflicts! Looking for incompatible packages.
This can take several minutes.  Press CTRL-C to abort.
failed
Traceback (most recent call last):
  File "/root/miniconda3/bin/constructor", line 11, in <module>
    sys.exit(main())
  File "/root/miniconda3/lib/python3.9/site-packages/constructor/main.py", line 244, in main
    main_build(dir_path, output_dir=out_dir, platform=args.platform,
  File "/root/miniconda3/lib/python3.9/site-packages/constructor/main.py", line 112, in main_build
    fcp_main(info, verbose=verbose, dry_run=dry_run, conda_exe=conda_exe)
  File "/root/miniconda3/lib/python3.9/site-packages/constructor/fcp.py", line 387, in main
    _urls, dists, approx_tarballs_size, approx_pkgs_size, has_conda = _main(
  File "/root/miniconda3/lib/python3.9/site-packages/constructor/fcp.py", line 295, in _main
    precs = list(solver.solve_final_state())
  File "/root/miniconda3/lib/python3.9/site-packages/conda/core/solve.py", line 281, in solve_final_state
    ssc = self._run_sat(ssc)
  File "/root/miniconda3/lib/python3.9/site-packages/conda/common/io.py", line 88, in decorated
    return f(*args, **kwds)
  File "/root/miniconda3/lib/python3.9/site-packages/conda/core/solve.py", line 815, in _run_sat
    ssc.solution_precs = ssc.r.solve(tuple(final_environment_specs),
  File "/root/miniconda3/lib/python3.9/site-packages/conda/common/io.py", line 88, in decorated
    return f(*args, **kwds)
  File "/root/miniconda3/lib/python3.9/site-packages/conda/resolve.py", line 1322, in solve
    self.find_conflicts(specs, specs_to_add, history_specs)
  File "/root/miniconda3/lib/python3.9/site-packages/conda/resolve.py", line 352, in find_conflicts
    raise UnsatisfiableError(bad_deps, strict=strict_channel_priority)
conda.exceptions.UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions

Package _libgcc_mutex conflicts for:
python==3.8 -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex[version='*|0.1',build='conda_forge|main']
cudatoolkit=11.0 -> libgcc-ng[version='>=7.3.0'] -> _libgcc_mutex[version='*|0.1',build='conda_forge|main']

Package cudatoolkit conflicts for:
cudatoolkit=11.0
rapids==21.12 -> cudatoolkit[version='11.0.*|11.2.*|11.5.*|11.4.*']
rapids==21.12 -> cucim=21.12 -> cudatoolkit[version='10.0|10.0.*|10.1|10.1.*|10.2|10.2.*|11.0|11.0.*|11.1|11.1.*|>=11,<12.0a0|>=11.2,<12|9.2|9.2.*|11.4|11.4.*|>=11.2,<12.0a0|>=11.0,<=11.6|>=11.0,<=11.5|>=11.0,<11.2']

Package libgcc-ng conflicts for:
rapids==21.12 -> cucim=21.12 -> libgcc-ng[version='>=4.9|>=7.3.0|>=9.3.0|>=9.4.0|>=7.5.0']
python==3.8 -> libgcc-ng[version='>=7.3.0']
python==3.8 -> libffi[version='>=3.2.1,<3.3.0a0'] -> libgcc-ng[version='>=4.9|>=9.4.0|>=7.5.0|>=9.3.0']
cudatoolkit=11.0 -> libgcc-ng[version='>=7.3.0|>=9.3.0|>=9.4.0']
dask-sql -> jpype1[version='>=1.0.2'] -> libgcc-ng[version='>=4.9|>=7.3.0|>=7.5.0|>=9.3.0|>=9.4.0']

Package python_abi conflicts for:
dask-sql -> importlib-metadata -> python_abi[version='2.7.*|3.10.*|3.7|3.6.*|3.6',build='*_cp27mu|*_cp36m|*_cp310|*_pypy37_pp73|*_pypy36_pp73']
dask-sql -> python_abi[version='3.7.*|3.9.*|3.8.*',build='*_cp39|*_cp37m|*_cp38']

Package python conflicts for:
dask-sql -> python[version='>=3.6|>=3.7,<3.8.0a0|>=3.9,<3.10.0a0|>=3.8,<3.9.0a0']
dask-sql -> dask[version='>=2021.11.1,<=2022.01.0'] -> python[version='2.7.*|3.5.*|3.6.*|>=2.7,<2.8.0a0|>=3|>=3.10,<3.11.0a0|>=3.6.1|>=3.7|>=3.6,<3.7.0a0|>=3.5|3.4.*|3.7.*|2.7.*|>=3.5|>=3.9|3.9.*|3.8.*|>=3.5,<3.6.0a0']
rapids==21.12 -> cupy[version='>=9.5.0,<10.0.0a0'] -> python[version='3.7.*|3.8.*|>=3.10,<3.11.0a0|>=3.9,<3.10.0a0|>=3.6|>=3.6,<3.7.0a0']
python==3.8
rapids==21.12 -> python[version='>=3.7,<3.8.0a0|>=3.8,<3.9.0a0']

Package pandas conflicts for:
dask-sql -> dask[version='>=2021.11.1,<=2022.01.0'] -> pandas[version='>=0.23.0|>=0.25.0|>=1.0']
dask-sql -> pandas[version='<1.2.0|<1.2.0,>=1.0.0|>=1.0.0']

Package libstdcxx-ng conflicts for:
python==3.8 -> libffi[version='>=3.2.1,<3.3.0a0'] -> libstdcxx-ng[version='>=4.9|>=7.5.0']
python==3.8 -> libstdcxx-ng[version='>=7.3.0']

Package typing_extensions conflicts for:
dask-sql -> importlib-metadata -> typing_extensions[version='>=3.6.4']
rapids==21.12 -> cudf=21.12 -> typing_extensions

The following specifications were found to be incompatible with your system:

  - feature:/linux-64::__glibc==2.27=0
  - feature:|@/linux-64::__glibc==2.27=0
  - cudatoolkit=11.0 -> __glibc[version='>=2.17,<3.0.a0']
  - rapids==21.12 -> cucim=21.12 -> __glibc[version='>=2.17|>=2.17,<3.0.a0']

Your installed version is: 2.27

I can install the packages specified in construct.yaml manually with no problems, so I don't think there is actually any conflict here:

conda install -y -c rapidsai -c nvidia -c conda-forge \
        python=3.8 rapids=21.12 cudatoolkit=11.0 dask-sql

NGLView doesn't work with Conda Colab (but does with pip)

NGLView is a widget for visualizing chemical structures. It works in Colab, but only when installed with pip.

Here's a notebook demonstrating it with pip: https://colab.research.google.com/drive/1D-MD6vpVmz0NMrz8Wf9W-3ZpDJBTaPZ0?usp=sharing

And here's the equivalent notebook with Conda: https://colab.research.google.com/drive/1nHUENuqeSoG-vbY7JoXSDZaoxGViofnX?usp=sharing

I've tried overwriting the conda installation with pip in all sorts of orders, but it seems that once conda is activated, the widget won't work. If I leave off the enable_custom_widget_manager() call, with pip I get a message telling me about it, but with conda it just silently fails.

In the Conda notebook, I get an error in my JS console:

Error has occurred while trying to update output. Error: not found
    t https://ssl.gstatic.com/colaboratory-static/widgets/colab-cdn-widget-manager/b3e629b1971e1542/manager.min.js:1867

I think this might be an issue for other widgets but I haven't found any yet.

I'm raising this here because it works locally with Conda and in Colab with pip, but if this is expected behavior and NGLView needs a fix I can raise an issue there instead!

Thanks for Conda Colab!

Python 3.8 installer?

Hello, I am trying to use mamba within Google Colab by way of condacolab. However, executing !mamba install ... always throws an error:

Your pinning does not match what's currently installed. Please remove the pin and fix your installation
  Pin: python=3.8
  Currently installed: conda-forge/linux-64::python==3.7.12=hb7a2778_100_cpython

It happens whenever I set the runtime to GPU instead of None. I see that the default condacolab install uses Python 3.7; however, the GPU runtime seems to have Python 3.8, which may cause mamba to pin 3.8 as the version when running further !mamba install ... commands.
I followed #15, and according to this comment, I should pass another Miniconda-like installer to condacolab.install_from_url(). However, I didn't find one. Can anyone share a setup for Python 3.8 on the Colab GPU runtime?
My test Colab Notebook can be found here.

Suddenly unable to install rdkit due to pinning issue

Earlier today I was able to install rdkit in Google Colab using condacolab with !mamba install -c conda-forge rdkit.

Now I get the following:

Your pinning does not match what's currently installed. Please remove the pin and fix your installation
Pin: python=3.8
Currently installed: conda-forge/linux-64::python==3.7.12=hb7a2778_100_cpython

Any idea what the issue could be?

Thanks

install_from_local()?

Thanks for creating condacolab! I was wondering whether you have thought about the possibility of installing from a local .sh file that one could keep in Google Drive. That way there would be no need to download from another location. That would be great for some of the things I'm planning to use in a class I'll be teaching in the spring. Cheers!

is the kernel restart actually required?

I was trying to mimic what condacolab is doing in a standalone cell:

import os
import pathlib
import sys

if 'CONDA_PREFIX' not in os.environ:
  !curl -O https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh
  !bash Miniconda3-py37_4.12.0-Linux-x86_64.sh -p conda-env -b
  conda_prefix_path = pathlib.Path('conda-env')
  site_package_path = conda_prefix_path / 'lib/python3.7/site-packages'
  sys.path.insert(0, str(site_package_path.resolve()))
  CONDA_PREFIX = str(conda_prefix_path.resolve())
  PATH = os.environ['PATH']
  LD_LIBRARY_PATH = os.environ['LD_LIBRARY_PATH']
  %env CONDA_PREFIX={CONDA_PREFIX}
  %env PATH={CONDA_PREFIX}/bin:{PATH}
  %env LD_LIBRARY_PATH={CONDA_PREFIX}/lib:{LD_LIBRARY_PATH}

and found that I was able to import installed Python and native packages successfully after that. I'm curious what I'm missing, or whether this is an acceptable workaround to avoid the annoying runtime restart.

pinning does not match what's currently installed

When running the example notebook (as well as my own notebooks), I get a pinning error from !mamba install -q openmm:

Your pinning does not match what's currently installed. Please remove the pin and fix your installation
Pin: python=3.7
Currently installed: conda-forge/linux-64::python==3.6.12=hffdb5ce_0_cpython

I suppose something may have changed with Colab recently. I'm not familiar with mamba, so perhaps there is an easy way to update this, but I don't know it. Thanks for your help!

Running scripts

I am currently using Google Colab to run some scripts I have developed. However, when I install condacolab and try to run a script with the command

!python script.py

I get a ModuleNotFoundError for every library imported in the script. When importing the libraries in Google Colab cells directly, it works correctly. Is it possible to run scripts after installing condacolab?
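A hedged sketch of one likely fix: `!python` runs in a fresh shell and may resolve to a different interpreter than the one the notebook kernel (and conda) uses. Launching the script with the kernel's own interpreter avoids that mismatch; the helper name is hypothetical.

```python
import subprocess
import sys

def run_script(path):
    """Run `path` with the same Python interpreter the notebook kernel uses."""
    return subprocess.run([sys.executable, path], capture_output=True, text=True)

# result = run_script("script.py"); print(result.stdout)
```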

condacolab should be installed by condacolab :)

After #31, it seems that condacolab is no longer available in the Python environment:

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
[<ipython-input-5-456c51dbd3c8>](https://localhost:8080/#) in <module>
----> 1 import condacolab
      2 get_ipython().run_line_magic('env', 'CONDA_PREFIX=condacolab.PREFIX')
      3 get_ipython().system('flow.tcl -design spm')

ModuleNotFoundError: No module named 'condacolab'

This removes the ability to call condacolab.check() to verify that the installation succeeded, or to rely on condacolab.PREFIX to compute paths relative to the environment.
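Until the package is shipped in the base environment again, a possible stopgap (a sketch, assuming condacolab remains pip-installable) is to reinstall it after the kernel restart before relying on its helpers:

```python
import subprocess
import sys

# If condacolab vanished from site-packages after the kernel restart,
# reinstall it with pip so condacolab.check() / condacolab.PREFIX work again.
try:
    import condacolab
except ImportError:
    subprocess.run(
        [sys.executable, "-m", "pip", "install", "-q", "condacolab"],
        check=True,
    )
    import condacolab

print(condacolab.PREFIX)  # paths can be computed relative to this
```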

Pin to cudatoolkit 12.2.* preventing installation of multiple packages in colab

A pin on cudatoolkit 12.2.* is preventing the installation of packages such as openmm and gcc-12.1. Here is example output from installing openmm:

!mamba install openmm -c conda-forge -y
Looking for: ['openmm']

conda-forge/linux-64                                        Using cache
conda-forge/noarch                                          Using cache
No package can be installed for pin: cudatoolkit 12.2.*

# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<

    Traceback (most recent call last):
      File "/usr/local/lib/python3.10/site-packages/conda/exceptions.py", line 1124, in __call__
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 941, in exception_converter
        raise e
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 934, in exception_converter
        exit_code = _wrapped_main(*args, **kwargs)
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 892, in _wrapped_main
        result = do_call(parsed_args, p)
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 754, in do_call
        exit_code = install(args, parser, "install")
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba.py", line 547, in install
        solver.add_pin(final_spec)
    RuntimeError: No package can be installed for pin: cudatoolkit 12.2.*

`$ /usr/local/bin/mamba install openmm -c conda-forge -y`

  environment variables:
                 CIO_TEST=<not set>
COLAB_DEBUG_ADAPTER_MUX_PATH=/usr/local/bin/dap_multiplexer
COLAB_LANGUAGE_SERVER_PROXY=<set>
               CONDA_ROOT=/usr/local
           CURL_CA_BUNDLE=<not set>
          LD_LIBRARY_PATH=/usr/local/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
               LD_PRELOAD=<not set>
             LIBRARY_PATH=/usr/local/cuda/lib64/stubs
                     PATH=/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/us
                          r/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/googl
                          e-cloud-sdk/bin
               PYTHONPATH=/env/python
           PYTHONWARNINGS=ignore:::pip._internal.cli.base_command
       REQUESTS_CA_BUNDLE=<not set>
            SSL_CERT_FILE=<not set>
               TCLLIBPATH=/usr/share/tcltk/tcllib1.20

     active environment : None
       user config file : /root/.condarc
 populated config files : /usr/local/.condarc
          conda version : 23.1.0
    conda-build version : not installed
         python version : 3.10.10.final.0
       virtual packages : __archspec=1=x86_64
                          __glibc=2.35=0
                          __linux=6.1.58=0
                          __unix=0=0
       base environment : /usr/local  (writable)
      conda av data dir : /usr/local/etc/conda
  conda av metadata url : None
           channel URLs : https://conda.anaconda.org/conda-forge/linux-64
                          https://conda.anaconda.org/conda-forge/noarch
          package cache : /usr/local/pkgs
                          /root/.conda/pkgs
       envs directories : /usr/local/envs
                          /root/.conda/envs
               platform : linux-64
             user-agent : conda/23.1.0 requests/2.28.2 CPython/3.10.10 Linux/6.1.58+ ubuntu/22.04.3 glibc/2.35
                UID:GID : 0:0
             netrc file : None
           offline mode : False


An unexpected error has occurred. Conda has prepared the above report.

Manually specifying a cudatoolkit version, e.g. !mamba install -c conda-forge cudatoolkit=11.8 -y, does not remove the pin. This issue does not occur with local conda installations, where the default cudatoolkit version for openmm in the conda-forge repository appears to be 11.8.
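A possible workaround, assuming the pin lives in conda's `pinned` file under the prefix (which is where condacolab writes its pins, with `/usr/local` as the prefix on Colab): delete the cudatoolkit entry before installing. This is a sketch, not an officially supported fix.

```shell
# Pins are recorded in conda-meta/pinned under the conda prefix.
# Removing the cudatoolkit line lets the solver pick a compatible version.
PINNED="${CONDA_PREFIX:-/usr/local}/conda-meta/pinned"
if [ -f "$PINNED" ]; then
    sed -i '/cudatoolkit/d' "$PINNED"
    cat "$PINNED"
fi
```

After that, `mamba install -c conda-forge openmm` should be able to resolve its own cudatoolkit dependency.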

sys.executable not set correctly

This is probably an edge case, but I'm importing a library that calls a .py file and uses sys.executable to pick the Python executable for doing so.

After installing condacolab the value of sys.executable is: /usr/bin/python3.real

It should be /usr/local/bin/python so that packages installed via conda can be found.

The program's dependencies can only be imported with the latter interpreter.
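Until this is fixed, one workaround (a sketch, assuming condacolab's default prefix of `/usr/local`) is to repoint sys.executable at the conda interpreter after installation:

```python
import sys

# Colab's kernel wrapper can leave sys.executable pointing at
# /usr/bin/python3.real; libraries that spawn a subprocess through
# sys.executable then miss the conda-installed packages.
if sys.executable.endswith("python3.real"):
    sys.executable = "/usr/local/bin/python"

print(sys.executable)
```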

RuntimeError: Invalid spec, no package name found: <NULL>

Looking for: ['mdanalysis', 'click', 'coverage', 'ipywidgets=7', 'lomap2', 'lxml', 'mdtraj', 'nbval', 'networkx', 'nglview', 'notebook', 'openff-forcefields', 'openmm', 'openmmtools', 'pip', 'plugcli', 'pymbar', 'pytest', 'pytest-cov', 'pytest-xdist', 'pydantic', 'python=3.9', 'rdkit', 'typing_extensions', "gufe[version='>=0.7.1']", "openfe[version='>=0.7.1']", 'py3dmol']



  Pinned packages:

  - python 3.10.*
  - python_abi 3.10.* *cp310*
  - cudatoolkit 11.8.*



# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<<

    Traceback (most recent call last):
      File "/usr/local/lib/python3.10/site-packages/conda/exceptions.py", line 1124, in __call__
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.10/site-packages/conda_env/cli/main.py", line 78, in do_call
        exit_code = getattr(module, func_name)(args, parser)
      File "/usr/local/lib/python3.10/site-packages/conda/notices/core.py", line 109, in wrapper
        return func(*args, **kwargs)
      File "/usr/local/lib/python3.10/site-packages/conda_env/cli/main_update.py", line 132, in execute
        result[installer_type] = installer.install(prefix, specs, args, env)
      File "/usr/local/lib/python3.10/site-packages/mamba/mamba_env.py", line 140, in mamba_install
        print(solver.explain_problems())
    RuntimeError: Invalid spec, no package name found: <NULL>

`$ /usr/local/bin/mamba update -n base -f /environment.yml`

  environment variables:
                 CIO_TEST=<not set>
COLAB_DEBUG_ADAPTER_MUX_PATH=/usr/local/bin/dap_multiplexer
COLAB_LANGUAGE_SERVER_PROXY=<set>
  CONDA_AUTO_UPDATE_CONDA=false
               CONDA_ROOT=/usr/local
           CURL_CA_BUNDLE=<not set>
          LD_LIBRARY_PATH=/usr/local/lib:/usr/lib64-nvidia
               LD_PRELOAD=<not set>
             LIBRARY_PATH=/usr/local/cuda/lib64/stubs
                     PATH=/opt/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/us
                          r/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tools/node/bin:/tools/googl
                          e-cloud-sdk/bin
               PYTHONPATH=/env/python
           PYTHONWARNINGS=ignore:::pip._internal.cli.base_command
       REQUESTS_CA_BUNDLE=<not set>
            SSL_CERT_FILE=<not set>
               TCLLIBPATH=/usr/share/tcltk/tcllib1.20

     active environment : None
       user config file : /root/.condarc
 populated config files : /usr/local/.condarc
          conda version : 23.1.0
    conda-build version : not installed
         python version : 3.10.10.final.0
       virtual packages : __archspec=1=x86_64
                          __cuda=12.0=0
                          __glibc=2.31=0
                          __linux=5.10.147=0
                          __unix=0=0
       base environment : /usr/local  (writable)
      conda av data dir : /usr/local/etc/conda
  conda av metadata url : None
           channel URLs : https://conda.anaconda.org/conda-forge/linux-64
                          https://conda.anaconda.org/conda-forge/noarch
          package cache : /usr/local/pkgs
                          /root/.conda/pkgs
       envs directories : /usr/local/envs
                          /root/.conda/envs
               platform : linux-64
             user-agent : conda/23.1.0 requests/2.28.2 CPython/3.10.10 Linux/5.10.147+ ubuntu/20.04.5 glibc/2.31
                UID:GID : 0:0
             netrc file : None
           offline mode : False


An unexpected error has occurred. Conda has prepared the above report.

I've attached the environment.yml that caused this. I was able to get condacolab installed and the check passed. This is the command I used to install my env:
!mamba env update -n base -f /environment.yml
environment.yml.txt

environments

The docs mention this shortcoming: "You can only use the base environment, so do not try to create more environments with conda create."

But I think I am actually able to do so, like this:

%%bash
eval "$(conda shell.bash hook)" # copy conda command to shell
conda create -n env_test_1 python=3.6 #older than the default
conda activate env_test_1
python --version
which python

the output is as follows:

Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... done

## Package Plan ##

  environment location: /usr/local/envs/env_test_1

  added / updated specs:
    - python=3.6


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    certifi-2021.5.30          |   py36h5fab9bb_0         141 KB  conda-forge
    ld_impl_linux-64-2.36.1    |       hea4e1c9_1         668 KB  conda-forge
    libgcc-ng-9.3.0            |      h2828fa1_19         7.8 MB  conda-forge
    libgomp-9.3.0              |      h2828fa1_19         376 KB  conda-forge
    libstdcxx-ng-9.3.0         |      h6de172a_19         4.0 MB  conda-forge
    python-3.6.13              |hffdb5ce_0_cpython        38.4 MB  conda-forge
    python_abi-3.6             |          2_cp36m           4 KB  conda-forge
    readline-8.1               |       h46c0cb4_0         295 KB  conda-forge
    setuptools-49.6.0          |   py36h5fab9bb_3         936 KB  conda-forge
    sqlite-3.36.0              |       h9cd32fc_0         1.4 MB  conda-forge
    ------------------------------------------------------------
                                           Total:        54.0 MB

The following NEW packages will be INSTALLED:

  _libgcc_mutex      conda-forge/linux-64::_libgcc_mutex-0.1-conda_forge
  _openmp_mutex      conda-forge/linux-64::_openmp_mutex-4.5-1_gnu
  ca-certificates    conda-forge/linux-64::ca-certificates-2021.5.30-ha878542_0
  certifi            conda-forge/linux-64::certifi-2021.5.30-py36h5fab9bb_0
  ld_impl_linux-64   conda-forge/linux-64::ld_impl_linux-64-2.36.1-hea4e1c9_1
  libffi             conda-forge/linux-64::libffi-3.3-h58526e2_2
  libgcc-ng          conda-forge/linux-64::libgcc-ng-9.3.0-h2828fa1_19
  libgomp            conda-forge/linux-64::libgomp-9.3.0-h2828fa1_19
  libstdcxx-ng       conda-forge/linux-64::libstdcxx-ng-9.3.0-h6de172a_19
  ncurses            conda-forge/linux-64::ncurses-6.2-h58526e2_4
  openssl            conda-forge/linux-64::openssl-1.1.1k-h7f98852_0
  pip                conda-forge/noarch::pip-21.1.3-pyhd8ed1ab_0
  python             conda-forge/linux-64::python-3.6.13-hffdb5ce_0_cpython
  python_abi         conda-forge/linux-64::python_abi-3.6-2_cp36m
  readline           conda-forge/linux-64::readline-8.1-h46c0cb4_0
  setuptools         conda-forge/linux-64::setuptools-49.6.0-py36h5fab9bb_3
  sqlite             conda-forge/linux-64::sqlite-3.36.0-h9cd32fc_0
  tk                 conda-forge/linux-64::tk-8.6.10-h21135ba_1
  wheel              conda-forge/noarch::wheel-0.36.2-pyhd3deb0d_0
  xz                 conda-forge/linux-64::xz-5.2.5-h516909a_1
  zlib               conda-forge/linux-64::zlib-1.2.11-h516909a_1010



Downloading and Extracting Packages
libstdcxx-ng-9.3.0   | 4.0 MB    | ########## | 100% 
python-3.6.13        | 38.4 MB   | ########## | 100% 
sqlite-3.36.0        | 1.4 MB    | ########## | 100% 
ld_impl_linux-64-2.3 | 668 KB    | ########## | 100% 
libgomp-9.3.0        | 376 KB    | ########## | 100% 
readline-8.1         | 295 KB    | ########## | 100% 
setuptools-49.6.0    | 936 KB    | ########## | 100% 
libgcc-ng-9.3.0      | 7.8 MB    | ########## | 100% 
python_abi-3.6       | 4 KB      | ########## | 100% 
certifi-2021.5.30    | 141 KB    | ########## | 100% 
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
#
# To activate this environment, use
#
#     $ conda activate env_test_1
#
# To deactivate an active environment, use
#
#     $ conda deactivate

Python 3.6.13
/usr/local/envs/env_test_1/bin/python

As you can see in the last line, the older Python was called successfully.

(my use case is testing compilation and running of hybrid python/c++ packages, not for interactive usage in the notebook kernel)

Naive question: How can I install condacolab when connected to local runtime in Colab?

Hi all,

Thank you very much for your amazing work, which made this part of science accessible to people like me. Well done!

What I would like to ask may seem stupid. Could one still use notebooks that require condacolab to be installed first when connected to a local runtime, or in a Jupyter notebook on their local machine?

I have spent hours and hours so I thought to get in touch and ask.

Any help would be greatly appreciated.

Best,
Dimitris
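condacolab only targets the hosted Colab runtime; on a local runtime or plain Jupyter you already control the environment with conda itself. A common pattern (a sketch, not an official feature) is to guard the install step so the same notebook runs in both places:

```python
import sys

# Hosted Colab injects the google.colab package into the kernel,
# so its presence is a reasonable runtime check.
IN_COLAB = "google.colab" in sys.modules

if IN_COLAB:
    import condacolab
    condacolab.install()
else:
    print("Local runtime detected: use your own conda environment directly.")
```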

Python 3.7

@jezdez can I please get Python 3.7 in condacolab? It was working perfectly until yesterday, but now it installs Python 3.8.

cffi and _cffi_backend version mismatch after condacolab.install() and kernel relaunch

See cffi-bug colab notebook.

There are several resources related to this:

In other code:

!conda install -c sgbaird mat_discover
from mat_discover.mat_discover_ import Discover

It was the source of the following error

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-21-1873c4667230> in <module>()
----> 1 from mat_discover.mat_discover_ import Discover

15 frames
/usr/local/lib/python3.7/site-packages/mat_discover/mat_discover_.py in <module>()
     40 # from sklearn.decomposition import PCA
     41 
---> 42 import umap
     43 import hdbscan
     44 

/usr/local/lib/python3.7/site-packages/umap/__init__.py in <module>()
      1 from warnings import warn, catch_warnings, simplefilter
----> 2 from .umap_ import UMAP
      3 
      4 try:
      5     with catch_warnings():

/usr/local/lib/python3.7/site-packages/umap/umap_.py in <module>()
     30 import umap.distances as dist
     31 
---> 32 import umap.sparse as sparse
     33 
     34 from umap.utils import (

/usr/local/lib/python3.7/site-packages/umap/sparse.py in <module>()
     10 import numpy as np
     11 
---> 12 from umap.utils import norm
     13 
     14 locale.setlocale(locale.LC_NUMERIC, "C")

/usr/local/lib/python3.7/site-packages/umap/utils.py in <module>()
     37 
     38 
---> 39 @numba.njit("i4(i8[:])")
     40 def tau_rand_int(state):
     41     """A fast (pseudo)-random number generator.

/usr/local/lib/python3.7/site-packages/numba/core/decorators.py in wrapper(func)
    219             with typeinfer.register_dispatcher(disp):
    220                 for sig in sigs:
--> 221                     disp.compile(sig)
    222                 disp.disable_compile()
    223         return disp

/usr/local/lib/python3.7/site-packages/numba/core/dispatcher.py in compile(self, sig)
    907                 with ev.trigger_event("numba:compile", data=ev_details):
    908                     try:
--> 909                         cres = self._compiler.compile(args, return_type)
    910                     except errors.ForceLiteralArg as e:
    911                         def folded(args, kws):

/usr/local/lib/python3.7/site-packages/numba/core/dispatcher.py in compile(self, args, return_type)
     77 
     78     def compile(self, args, return_type):
---> 79         status, retval = self._compile_cached(args, return_type)
     80         if status:
     81             return retval

/usr/local/lib/python3.7/site-packages/numba/core/dispatcher.py in _compile_cached(self, args, return_type)
     91 
     92         try:
---> 93             retval = self._compile_core(args, return_type)
     94         except errors.TypingError as e:
     95             self._failed_cache[key] = e

/usr/local/lib/python3.7/site-packages/numba/core/dispatcher.py in _compile_core(self, args, return_type)
    109                                       args=args, return_type=return_type,
    110                                       flags=flags, locals=self.locals,
--> 111                                       pipeline_class=self.pipeline_class)
    112         # Check typing error if object mode is used
    113         if cres.typing_error is not None and not flags.enable_pyobject:

/usr/local/lib/python3.7/site-packages/numba/core/compiler.py in compile_extra(typingctx, targetctx, func, args, return_type, flags, locals, library, pipeline_class)
    603     """
    604     pipeline = pipeline_class(typingctx, targetctx, library,
--> 605                               args, return_type, flags, locals)
    606     return pipeline.compile_extra(func)
    607 

/usr/local/lib/python3.7/site-packages/numba/core/compiler.py in __init__(self, typingctx, targetctx, library, args, return_type, flags, locals)
    307         # Make sure the environment is reloaded
    308         config.reload_config()
--> 309         typingctx.refresh()
    310         targetctx.refresh()
    311 

/usr/local/lib/python3.7/site-packages/numba/core/typing/context.py in refresh(self)
    154         Useful for third-party extensions.
    155         """
--> 156         self.load_additional_registries()
    157         # Some extensions may have augmented the builtin registry
    158         self._load_builtins()

/usr/local/lib/python3.7/site-packages/numba/core/typing/context.py in load_additional_registries(self)
    689 
    690     def load_additional_registries(self):
--> 691         from . import (
    692             cffi_utils,
    693             cmathdecl,

/usr/local/lib/python3.7/site-packages/numba/core/typing/cffi_utils.py in <module>()
     17 try:
     18     import cffi
---> 19     ffi = cffi.FFI()
     20 except ImportError:
     21     ffi = None

/usr/local/lib/python3.7/dist-packages/cffi/api.py in __init__(self, backend)
     54                     raise Exception("Version mismatch: this is the 'cffi' package version %s, located in %r.  When we import the top-level '_cffi_backend' extension module, we get version %s, located in %r.  The two versions should be equal; check your installation." % (
     55                         __version__, __file__,
---> 56                         backend.__version__, backend.__file__))
     57                 else:
     58                     # PyPy

Exception: Version mismatch: this is the 'cffi' package version 1.14.6, located in '/usr/local/lib/python3.7/dist-packages/cffi/api.py'.  When we import the top-level '_cffi_backend' extension module, we get version 1.14.5, located in '/usr/local/lib/python3.7/site-packages/_cffi_backend.cpython-37m-x86_64-linux-gnu.so'.  The two versions should be equal; check your installation.

PackagesNotFoundError: The following packages are missing from the target environment: - cudatoolkit=12.2

Apparently, cudatoolkit was recently updated to version 12.2 on Colab, and this generates conflicts when trying to install packages like openmm and ambertools with conda.

When I execute the following cell in colab:

!conda install openmmforcefields -c conda-forge -y
!conda install -c conda-forge ambertools -y
!conda install -c conda-forge parmed -y

It throws the following error:

PackagesNotFoundError: The following packages are missing from the target environment:

  • cudatoolkit=12.2

ModuleNotFoundError Traceback (most recent call last)
in <cell line: 12>()
10
11 #load dependencies
---> 12 from openmm import app, unit
13 from openmm.app import HBonds, NoCutoff, PDBFile
14 from openff.toolkit.topology import Molecule, Topology

ModuleNotFoundError: No module named 'openmm'

Install R packages?

Is it possible to install R packages using condacolab, then load these packages inside %%R cells?

If I try it I get this error:

RRuntimeError: Error in (function (filename = "Rplot%03d.png", width = 480, height = 480,  : 
  Graphics API version mismatch
