clip-interrogator-ext's People

Contributors

cpietsch, dervedro, pharmapsychotic, vladmandic

clip-interrogator-ext's Issues

No indicator that batch process runs

Hi again,
I noticed that after pressing the button, there is no indicator / information that the process started (or failed).

Greetings from Austria,
Johannes

Interrogator does not process the model

It was fine until yesterday; today I am getting this error:

The size of tensor a (8) must match the size of tensor b (64) at non-singleton dimension 0
This happens every time I try to interrogate an image.
I tried a few other models; same problem.

Feature request: Use Interrogate settings from webUI

Just a request: the webUI has a built-in Interrogate settings section where you can toggle which extra information is included.
Please add this option to your extension, because for me the basic prompt is enough, without artists, styles, and other extras.

Add more settings, skip interrogation categories, and so on

This is a feature request: it would be much better if the extension had options such as nucleus sampling, beam search, visual question answering, maximum number of lines in the text file, minimum and maximum description length, and max flavors.
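
For context, some of these knobs already exist at the clip_interrogator library level rather than in the extension's UI. A hedged sketch of driving them directly (parameter names such as caption_max_length and max_flavors are taken from recent clip_interrogator releases and should be treated as assumptions to verify against the installed version):

from PIL import Image
from clip_interrogator import Config, Interrogator

# caption_max_length controls how long the BLIP caption may get
# (assumption: exposed by recent clip_interrogator releases).
config = Config(clip_model_name="ViT-L-14/openai", caption_max_length=48)
ci = Interrogator(config)

image = Image.open("example.jpg").convert("RGB")
# max_flavors caps how many "flavor" terms are appended to the caption.
prompt = ci.interrogate(image, max_flavors=16)
print(prompt)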

Please add an option to handle sub-folders

Hi,
I like the tool very much. The only problem for me is that it only processes images from the specified folder, not from its sub-folders. I mainly use the tool for captioning image samples in batch for EveryDream2trainer, which allows the input folder to contain sub-folders for better organization, so my image folder is organized by category. I hope the tool can support sub-folders; otherwise I would have to paste those paths dozens of times.

Thank you for this excellent tool.
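
A minimal sketch of the kind of recursive collection being requested, assuming Python's pathlib; caption_image() is a hypothetical stand-in for the extension's per-image interrogation, and the side-car .txt convention matches what batch captioning tools such as EveryDream2trainer expect:

from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp", ".heic"}

def iter_images(root: str):
    # Yield image files from `root` and every sub-folder beneath it.
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix.lower() in IMAGE_EXTS:
            yield path

def caption_folder_recursive(root: str):
    for image_path in iter_images(root):
        caption = caption_image(image_path)  # hypothetical per-image call
        image_path.with_suffix(".txt").write_text(caption, encoding="utf-8")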

extension access disabled because of command line flags

I got this error trying to install your extension.

File "E:\AI_Art\SUPERSD\stable-diffusion-webui\modules\ui_extensions.py", line 22, in check_access
    assert not shared.cmd_opts.disable_extension_access, "extension access disabled because of command line flags"
AssertionError: extension access disabled because of command line flags

Subject focus?

Is it possible to include a prompt that will focus on a particular aspect of the image(s) being interrogated?

Like, I'm tired of getting the same faces in certain models. I'd like to upload a batch of images (or even individual ones) and say "please give me descriptions with heavy focus on describing faces" so it would say things like "small nose" or "sharp chin" or "big forehead" etc.

or if I said I wanted the focus to be on the backgrounds and not so much on the face, it would just say "woman" but then go into detail about what's in the background,

or lighting...
or color composition...
etc.

Function floorOp_i64 was not found in the library

I am getting this error when I try to generate the prompt. The error is killing the app.

Error getting visible function:
 (null) Function floorOp_i64 was not found in the library
/AppleInternal/Library/BuildRoots/c651a45f-806e-11ed-a221-7ef33c48bc85/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSKernelDAG.mm:803: failed assertion `Error getting visible function:
 (null) Function floorOp_i64 was not found in the library'
zsh: abort      ./webui.sh
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
jhonosorio@MacBook-Pro-de-Jhon-EPAM stable-diffusion-webui %

TypeError: 'NoneType' object is not subscriptable

Sorry, I'm a beginner; when generating a prompt I get the error below. RTX 3060 with 12 GB VRAM, 32 GB RAM.

Loading CLIP Interrogator 0.5.4...
detected < 12GB VRAM, using low VRAM mode
Loading BLIP model...
Error verifying pickled file from tmp/.cache\torch\hub\checkpoints\model_base_caption_capfilt_large.pth:
Traceback (most recent call last):
  File "D:\StableDefussion\stable-diffusion-portable-main\modules\safe.py", line 81, in check_pt
    with zipfile.ZipFile(filename) as z:
  File "D:\StableDefussion\stable-diffusion-portable-main\python\lib\zipfile.py", line 1267, in __init__
    self._RealGetContents()
  File "D:\StableDefussion\stable-diffusion-portable-main\python\lib\zipfile.py", line 1334, in _RealGetContents
    raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\StableDefussion\stable-diffusion-portable-main\modules\safe.py", line 135, in load_with_extra
    check_pt(filename, extra_handler)
  File "D:\StableDefussion\stable-diffusion-portable-main\modules\safe.py", line 102, in check_pt
    unpickler.load()
_pickle.UnpicklingError: persistent IDs in protocol 0 must be ASCII strings

-----> !!!! The file is most likely corrupted !!!! <-----
You can skip this check with --disable-safe-unpickle commandline argument, but that is not going to help you.

Traceback (most recent call last):
  File "D:\StableDefussion\stable-diffusion-portable-main\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\StableDefussion\stable-diffusion-portable-main\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
    result = await self.call_function(
  File "D:\StableDefussion\stable-diffusion-portable-main\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\StableDefussion\stable-diffusion-portable-main\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\StableDefussion\stable-diffusion-portable-main\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\StableDefussion\stable-diffusion-portable-main\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\StableDefussion\stable-diffusion-portable-main\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 131, in image_to_prompt
    load(clip_model_name)
  File "D:\StableDefussion\stable-diffusion-portable-main\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 69, in load
    ci = Interrogator(config)
  File "D:\StableDefussion\stable-diffusion-portable-main\venv\lib\site-packages\clip_interrogator\clip_interrogator.py", line 77, in __init__
    blip_model = blip_decoder(
  File "D:\StableDefussion\stable-diffusion-portable-main\venv\lib\site-packages\blip\models\blip.py", line 175, in blip_decoder
    model,msg = load_checkpoint(model,pretrained)
  File "D:\StableDefussion\stable-diffusion-portable-main\venv\lib\site-packages\blip\models\blip.py", line 224, in load_checkpoint
    state_dict = checkpoint['model']
TypeError: 'NoneType' object is not subscriptable

https://telegra.ph/error-03-08-13

[Feature Request]: Extension through API

Hi everyone,
I'm stuck: I cannot figure out how to submit the "fast" model through the API instead of the default one.

(Screenshot attached.) Has this feature not been implemented yet, or am I overlooking something?

Greetings from Austria,
Johannes

12 GB of VRAM and got OOM error

Hi, the extension was working perfectly before, but since the recent update I get this error:
CUDA out of memory. Tried to allocate 920.00 MiB (GPU 0; 12.00 GiB total capacity; 9.16 GiB already allocated; 0 bytes free; 10.68 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

This is very surprising because I have a 3060 card with 12 GB of VRAM.
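
The allocator option named in the message can be set before the webui starts. A minimal sketch (the 512 MB value is only an example, and this mitigates fragmentation rather than guaranteeing a fix):

import os

# Must be set before CUDA is initialized, i.e. before torch is imported in the
# webui process; the Windows equivalent is adding
# `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512` to webui-user.bat.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"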

Error loading script

This extension is still not working, please fix it.

Restarting UI...
Error loading script: clip_interrogator_ext.py
Traceback (most recent call last):
File "I:\Super SD 2.0\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "I:\Super SD 2.0\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "I:\Super SD 2.0\stable-diffusion-webui\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 3, in <module>
import clip_interrogator
ModuleNotFoundError: No module named 'clip_interrogator'

Interrogator tab doesn't show up, installation failed

Error loading script: clip_interrogator_ext.py
Traceback (most recent call last):
File "E:\stable-diffusion\modules\scripts.py", line 248, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "E:\stable-diffusion\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "E:\stable-diffusion\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 9, in <module>
import clip_interrogator
File "E:\stable-diffusion\venv\lib\site-packages\clip_interrogator\__init__.py", line 1, in <module>
from .clip_interrogator import Config, Interrogator, LabelTable, list_caption_models, list_clip_models, load_list
File "E:\stable-diffusion\venv\lib\site-packages\clip_interrogator\clip_interrogator.py", line 12, in <module>
from transformers import AutoProcessor, AutoModelForCausalLM, BlipForConditionalGeneration, Blip2ForConditionalGeneration
ImportError: cannot import name 'BlipForConditionalGeneration' from 'transformers' (E:\stable-diffusion\venv\lib\site-packages\transformers\__init__.py)

This is the error I get. After removing the extension and installing it again, the problem persists; no solution so far. Please help.

Can't see the Interrogator tab

Hello, I am getting this error:

Closing server running on port: 7860
Restarting UI...
Error loading script: clip_interrogator_ext.py
Traceback (most recent call last):
File "C:\GitHub\stable-diffusion-webui\modules\scripts.py", line 229, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "C:\GitHub\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\GitHub\stable-diffusion-webui\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 9, in <module>
import clip_interrogator
ModuleNotFoundError: No module named 'clip_interrogator'

Does this plugin work on Apple M1?

I have installed and successfully run Stable Diffusion WebUI on my Apple M1 computer. After installing this extension, I cannot use it: Python 3 crashes and exits after clicking the Generate button, and the WebUI's log shows the following:

...
Loading CLIP Interrogator 0.5.4...
Loading BLIP model...
load checkpoint from https://storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_large_caption.pth
Loading CLIP model...
Loaded CLIP model and data in 5.56 seconds.
2023-02-23 12:07:07.519 Python[16268:467906] Error getting visible function:
 (null) Function floorOp_i64 was not found in the library
/AppleInternal/Library/BuildRoots/a0876c02-1788-11ed-b9c4-96898e02b808/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSKernelDAG.mm:755: failed assertion `Error getting visible function:
 (null) Function floorOp_i64 was not found in the library'
zsh: abort      ./webui.sh

Does this appear to be an OS/architecture issue? Is it possible to get a solution?
Thanks!

Ignores .heic format during batch processing

Hiho,
thank you so much for this plugin. It's amazing! <3

I noticed that the .heic format is ignored during batch processing, even though it works when dragging and dropping a single image.

All the best from Austria,
Johannes

Cosmetic Issue: About panel unreadable with dark themes

When a dark theme is used for SD WebUI, the About panel for this extension is near-black text on a near-black background (anapnoe WebUI UX fork).

WORKAROUND: Highlight the near-invisible text by selecting it with the mouse.

Add more settings / functionality

  • The ability to generate both a "positive" prompt AND a negative prompt at the same time (calculate one and then the other in the same model load, so that after clicking "Generate" we obtain both); see the sketch after this list.

  • The ability to select several models whose results are displayed in the WebUI as a list, and optionally downloaded as a file (txt, csv, etc.).

  • Possibly, a description of the characteristics of the different CLIP models (such as their size and other properties).

These features would save a lot of time and add comfort.
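
On the first point, newer clip_interrogator releases expose a negative-prompt mode. A rough sketch of producing both prompts in one model load; whether interrogate_negative exists depends on the installed version, so it is guarded here rather than assumed:

from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
image = Image.open("example.jpg").convert("RGB")

positive = ci.interrogate(image)
# interrogate_negative() is only present in some clip_interrogator versions.
negative = ci.interrogate_negative(image) if hasattr(ci, "interrogate_negative") else ""
print("positive:", positive)
print("negative:", negative)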

Feature Request? Weighted results

The builtin interrogator drops a series of weights with styles - e.g. (behance contest winner:0.455). This comes in stupid handy when analyzing large amounts of files, as it's a much better indicator than simple occurrence frequency. I'm not sure if this happens at the model layer or what, but it would make this extension much more useful for a related set of scripts I'm developing to perform analytics on large batches of interrogator output.
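
For illustration, a rough sketch (not the extension's actual implementation) of deriving per-term weights from CLIP similarity scores and formatting them in the "(term:weight)" style; it assumes the clip_interrogator package's image_to_features(), LabelTable.rank(), and similarities() helpers, whose exact signatures may differ between versions:

from PIL import Image
from clip_interrogator import Config, Interrogator

ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
image = Image.open("example.jpg").convert("RGB")

features = ci.image_to_features(image)
top_terms = ci.flavors.rank(features, 10)      # best-matching "flavor" terms
scores = ci.similarities(features, top_terms)  # cosine similarity per term

# Emit "(term:weight)" pairs so downstream analytics scripts can parse weights.
print(", ".join(f"({term}:{score:.3f})" for term, score in zip(top_terms, scores)))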

Incidentally, those scripts are also looking for a place to live and I'm wondering if you'd have any interest in having them pulled into this extension.

Problem with installing

Error running install.py for extension D:\sd.webui\webui\extensions\clip-interrogator-ext.
Command: "D:\sd.webui\system\python\python.exe" "D:\sd.webui\webui\extensions\clip-interrogator-ext\install.py"
Error code: 1
stdout: Installing requirements for CLIP Interrogator

stderr: Traceback (most recent call last):
File "D:\sd.webui\webui\extensions\clip-interrogator-ext\install.py", line 14, in <module>
launch.run_pip(f"install clip-interrogator=={CI_VERSION}", "requirements for CLIP Interrogator")
File "D:\sd.webui\webui\modules\launch_utils.py", line 124, in run_pip
return run(f'"{python}" -m pip {command} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}", live=live)
File "D:\sd.webui\webui\modules\launch_utils.py", line 101, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't install requirements for CLIP Interrogator.
Command: "D:\sd.webui\system\python\python.exe" -m pip install clip-interrogator==0.5.4 --prefer-binary
Error code: 2
stdout: Collecting clip-interrogator==0.5.4
Using cached clip_interrogator-0.5.4-py3-none-any.whl (787 kB)
Requirement already satisfied: torch in d:\sd.webui\system\python\lib\site-packages (from clip-interrogator==0.5.4) (2.0.1+cu118)
Requirement already satisfied: torchvision in d:\sd.webui\system\python\lib\site-packages (from clip-interrogator==0.5.4) (0.15.2+cu118)
Requirement already satisfied: Pillow in d:\sd.webui\system\python\lib\site-packages (from clip-interrogator==0.5.4) (9.5.0)
Requirement already satisfied: requests in d:\sd.webui\system\python\lib\site-packages (from clip-interrogator==0.5.4) (2.31.0)
Requirement already satisfied: safetensors in d:\sd.webui\system\python\lib\site-packages (from clip-interrogator==0.5.4) (0.3.1)
Requirement already satisfied: tqdm in d:\sd.webui\system\python\lib\site-packages (from clip-interrogator==0.5.4) (4.65.0)
Requirement already satisfied: open-clip-torch in d:\sd.webui\system\python\lib\site-packages (from clip-interrogator==0.5.4) (2.7.0)
Collecting blip-ci (from clip-interrogator==0.5.4)
Using cached blip-ci-0.0.3.tar.gz (43 kB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'

stderr: ERROR: Exception:
Traceback (most recent call last):
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\cli\base_command.py", line 169, in exc_logging_wrapper
status = run_func(*args)
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\cli\req_command.py", line 248, in wrapper
return func(self, options, args)
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\commands\install.py", line 377, in run
requirement_set = resolver.resolve(
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py", line 92, in resolve
result = self._result = resolver.resolve(
File "D:\sd.webui\system\python\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "D:\sd.webui\system\python\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 427, in resolve
failure_causes = self._attempt_to_pin_criterion(name)
File "D:\sd.webui\system\python\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 239, in _attempt_to_pin_criterion
criteria = self._get_updated_criteria(candidate)
File "D:\sd.webui\system\python\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 230, in _get_updated_criteria
self._add_to_criteria(criteria, requirement, parent=candidate)
File "D:\sd.webui\system\python\lib\site-packages\pip\_vendor\resolvelib\resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
File "D:\sd.webui\system\python\lib\site-packages\pip\_vendor\resolvelib\structs.py", line 156, in __bool__
return bool(self._sequence)
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 155, in __bool__
return any(self)
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 143, in <genexpr>
return (c for c in iterator if id(c) not in self._incompatible_ids)
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py", line 47, in _iter_built
candidate = func()
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py", line 206, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 293, in __init__
super().__init__(
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 156, in __init__
self.dist = self._prepare()
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 225, in _prepare
dist = self._prepare_distribution()
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\resolution\resolvelib\candidates.py", line 304, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\operations\prepare.py", line 516, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\operations\prepare.py", line 631, in _prepare_linked_requirement
dist = _get_prepared_distribution(
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\operations\prepare.py", line 69, in _get_prepared_distribution
abstract_dist.prepare_distribution_metadata(
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\distributions\sdist.py", line 48, in prepare_distribution_metadata
self._install_build_reqs(finder)
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\distributions\sdist.py", line 118, in _install_build_reqs
build_reqs = self._get_build_requires_wheel()
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\distributions\sdist.py", line 95, in _get_build_requires_wheel
return backend.get_requires_for_build_wheel()
File "D:\sd.webui\system\python\lib\site-packages\pip\_internal\utils\misc.py", line 692, in get_requires_for_build_wheel
return super().get_requires_for_build_wheel(config_settings=cs)
File "D:\sd.webui\system\python\lib\site-packages\pip\_vendor\pyproject_hooks\_impl.py", line 166, in get_requires_for_build_wheel
return self._call_hook('get_requires_for_build_wheel', {
File "D:\sd.webui\system\python\lib\site-packages\pip\_vendor\pyproject_hooks\_impl.py", line 321, in _call_hook
raise BackendUnavailable(data.get('traceback', ''))
pip._vendor.pyproject_hooks._impl.BackendUnavailable: Traceback (most recent call last):
File "D:\sd.webui\system\python\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
File "importlib\__init__.py", line 126, in import_module
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 992, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\Name\AppData\Local\Temp\pip-build-env-_5kdaxrd\overlay\Lib\site-packages\setuptools\__init__.py", line 15, in <module>
import setuptools.version
File "C:\Users\Name\AppData\Local\Temp\pip-build-env-_5kdaxrd\overlay\Lib\site-packages\setuptools\version.py", line 1, in <module>
from ._importlib import metadata
File "C:\Users\Name\AppData\Local\Temp\pip-build-env-_5kdaxrd\overlay\Lib\site-packages\setuptools\_importlib.py", line 44, in <module>
import importlib.metadata as metadata  # noqa: F401
File "importlib\metadata\__init__.py", line 17, in <module>
File "importlib\metadata\_adapters.py", line 3, in <module>
File "email\message.py", line 15, in <module>
File "email\utils.py", line 29, in <module>
File "socket.py", line 51, in <module>
ModuleNotFoundError: No module named '_socket'

Documentation

Hi, is there any kind of documentation for the extension, the differences between the models, or suggested usage? TIA :)

Can't find vocabulary file at path...

I've just installed the extension but now I'm stuck.
When I try to do an analysis on an image, I get this error:

ValueError: Can't find a vocabulary file at path 'C:\Users\xxxxx/.cache\huggingface\hub\models--bert-base-uncased\snapshots\0a6aa9128b6194f4f3c4db429b6cb4891cdb421b\vocab.txt'. To load the vocabulary from a Google pretrained model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)

How can I solve this?

ImportError: cannot import name 'BlipForConditionalGeneration' from 'transformers'

It seems that the new update has some sort of conflict with the built-in clip_interrogator from Automatic1111. It causes issues with the number-of-beams setting: anything other than 1 produces an error readout, while setting the number of beams to 1 fixes the issue. Removing the extension allows me to change the number of beams to any value, with successful interrogations.

Restarting UI...
Additional Network extension not installed, Only hijack built-in lora
LoCon Extension hijack built-in lora successfully
Error loading script: clip_interrogator_ext.py
Traceback (most recent call last):
  File "E:\stable-diffusion-webui\modules\scripts.py", line 248, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "E:\stable-diffusion-webui\modules\script_loading.py", line 11, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "E:\stable-diffusion-webui\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 9, in <module>
    import clip_interrogator
  File "E:\stable-diffusion-webui\venv\lib\site-packages\clip_interrogator\__init__.py", line 1, in <module>
    from .clip_interrogator import Config, Interrogator, LabelTable, list_caption_models, list_clip_models, load_list
  File "E:\stable-diffusion-webui\venv\lib\site-packages\clip_interrogator\clip_interrogator.py", line 12, in <module>
    from transformers import AutoProcessor, AutoModelForCausalLM, BlipForConditionalGeneration, Blip2ForConditionalGeneration
ImportError: cannot import name 'BlipForConditionalGeneration' from 'transformers' (E:\stable-diffusion-webui\venv\lib\site-packages\transformers\__init__.py)

This error occurs when no other extensions are installed or enabled.

CUDA out of memory

Lately, the Interrogator keeps running out of memory on my 8 GB VRAM card.
Any ideas how to solve this? Also, does the size of the interrogated image matter for VRAM use and processing?

Getting errors when trying to generate prompt

I'm using the DirectML version of A1111 for an AMD 7900 XT GPU, so I don't know if that has anything to do with it, but I am getting this error when attempting to generate a prompt:

new(): expected key in DispatchKeySet(CPU, CUDA, HIP, XLA, MPS, IPU, XPU, HPU, Lazy, Meta) but got: PrivateUse1

Error running install.py ------------ Error loading script: clip_interrogator_ext.py

python: 3.10.6  • 
torch: 1.13.1+cu117  • 
xformers: 0.0.16rc425  • 
gradio: 3.16.2  • 
commit: [a9fed7c3]

First time installing this extension.

===============================================================================
Error running install.py for extension Z:\sd\webui\extensions\clip-interrogator-ext.
Command: "Z:\sd\system\python\python.exe" "Z:\sd\webui\extensions\clip-interrogator-ext\install.py"
Error code: 1
stdout: Installing requirements for CLIP Interrogator

stderr: Traceback (most recent call last):
File "Z:\sd\webui\extensions\clip-interrogator-ext\install.py", line 14, in <module>
launch.run_pip(f"install clip-interrogator=={CI_VERSION}", "requirements for CLIP Interrogator")
File "Z:\sd\webui\launch.py", line 145, in run_pip
return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
File "Z:\sd\webui\launch.py", line 113, in run
raise RuntimeError(message)
RuntimeError: Couldn't install requirements for CLIP Interrogator.
Command: "Z:\sd\system\python\python.exe" -m pip install clip-interrogator==0.6.0 --prefer-binary
Error code: 1
stdout: Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple

stderr: ERROR: Could not find a version that satisfies the requirement clip-interrogator==0.6.0 (from versions: 0.1.3, 0.1.4, 0.2.0, 0.3.0, 0.3.1, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.4.0, 0.4.1, 0.4.2, 0.4.3, 0.4.4, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4)
ERROR: No matching distribution found for clip-interrogator==0.6.0

===============================================================================

restart UI

===============================================================================

Restarting UI...
Error loading script: clip_interrogator_ext.py
Traceback (most recent call last):
File "Z:\sd\webui\modules\scripts.py", line 248, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "Z:\sd\webui\modules\script_loading.py", line 11, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "Z:\sd\webui\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 9, in <module>
import clip_interrogator
ModuleNotFoundError: No module named 'clip_interrogator'


That's all. Please help.

Extension not working since latest update

Error verifying new BLIP Large pickled file. Can I download it manually to solve the issue?

HERE IS THE ERROR:

Loading CLIP Interrogator 0.6.0...
Loading caption model blip-large...
Error verifying pickled file from C:\Users\TD FILM STUDIO/.cache\huggingface\hub\models--Salesforce--blip-image-captioning-large\snapshots\44f04d56320e7d169fec18c4b19f7efb9ab83105\pytorch_model.bin:
Traceback (most recent call last):
File "C:\SD\stable-diffusion-webui-master\modules\safe.py", line 81, in check_pt
with zipfile.ZipFile(filename) as z:
File "C:\Program Files\Python310\lib\zipfile.py", line 1267, in __init__
self._RealGetContents()
File "C:\Program Files\Python310\lib\zipfile.py", line 1334, in _RealGetContents
raise BadZipFile("File is not a zip file")
zipfile.BadZipFile: File is not a zip file

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\SD\stable-diffusion-webui-master\modules\safe.py", line 135, in load_with_extra
check_pt(filename, extra_handler)
File "C:\SD\stable-diffusion-webui-master\modules\safe.py", line 102, in check_pt
unpickler.load()
_pickle.UnpicklingError: persistent IDs in protocol 0 must be ASCII strings

-----> !!!! The file is most likely corrupted !!!! <-----
You can skip this check with --disable-safe-unpickle commandline argument, but that is not going to help you.

Traceback (most recent call last):
File "C:\SD\stable-diffusion-webui-master\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "C:\SD\stable-diffusion-webui-master\venv\lib\site-packages\gradio\blocks.py", line 1015, in process_api
result = await self.call_function(
File "C:\SD\stable-diffusion-webui-master\venv\lib\site-packages\gradio\blocks.py", line 833, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\SD\stable-diffusion-webui-master\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\SD\stable-diffusion-webui-master\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\SD\stable-diffusion-webui-master\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\SD\stable-diffusion-webui-master\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 128, in image_to_prompt
load(clip_model_name, caption_model_name)
File "C:\SD\stable-diffusion-webui-master\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 66, in load
ci = Interrogator(config)
File "C:\SD\stable-diffusion-webui-master\venv\lib\site-packages\clip_interrogator\clip_interrogator.py", line 70, in __init__
self.load_caption_model()
File "C:\SD\stable-diffusion-webui-master\venv\lib\site-packages\clip_interrogator\clip_interrogator.py", line 84, in load_caption_model
caption_model = BlipForConditionalGeneration.from_pretrained(model_path, torch_dtype=self.dtype)
File "C:\SD\stable-diffusion-webui-master\venv\lib\site-packages\transformers\modeling_utils.py", line 2480, in from_pretrained
loaded_state_dict_keys = list(state_dict.keys())
AttributeError: 'NoneType' object has no attribute 'keys'

PermissionError: [Errno 13] Permission denied

For the last few weeks I have not been able to use CLIP Interrogator at all; I only get the error message below.

I have reinstalled Automatic1111, deleted and reinstalled the venv, and deleted and reinstalled the extension.
Nothing works. When I try to analyze or caption an image, this comes up: permission denied.
I can't even find the denied directory on my computer.

Anyone know how to fix this?

Loading CLIP Interrogator 0.5.4...
Traceback (most recent call last):
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1051, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 82, in image_analysis
load(clip_model_name)
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\extensions\clip-interrogator-ext\scripts\clip_interrogator_ext.py", line 61, in load
blip_model=shared.interrogator.load_blip_model().float()
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\modules\interrogate.py", line 102, in load_blip_model
blip_model = models.blip.blip_decoder(pretrained=files[0], image_size=blip_image_eval_size, vit='base', med_config=os.path.join(paths.paths["BLIP"], "configs", "med_config.json"))
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\repositories\BLIP\models\blip.py", line 173, in blip_decoder
model = BLIP_Decoder(**kwargs)
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\repositories\BLIP\models\blip.py", line 96, in __init__
self.tokenizer = init_tokenizer()
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\repositories\BLIP\models\blip.py", line 187, in init_tokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1760, in from_pretrained
resolved_vocab_files[file_id] = cached_file(
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\transformers\utils\hub.py", line 409, in cached_file
resolved_file = hf_hub_download(
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 1308, in hf_hub_download
_cache_commit_hash_for_specific_revision(storage_folder, revision, commit_hash)
File "C:\Q Dropbox\Kent Q\100 Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\huggingface_hub\file_download.py", line 941, in _cache_commit_hash_for_specific_revision
ref_path.write_text(commit_hash)
File "C:\Program Files\Python310\lib\pathlib.py", line 1152, in write_text
with self.open(mode='w', encoding=encoding, errors=errors, newline=newline) as f:
File "C:\Program Files\Python310\lib\pathlib.py", line 1117, in open
return self._accessor.open(self, mode, buffering, encoding, errors,
PermissionError: [Errno 13] Permission denied: 'C:\Users\kentl\.cache\huggingface\hub\models--bert-base-uncased\refs\main'

Where are models saved?

I have two machines and really don't want to download ALL the provided models twice, so I'm downloading them on one machine and want to copy them over to the other.

Where are the model files saved?
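
Based on the cache paths that appear in other tracebacks on this page (e.g. ...\.cache\torch\hub\checkpoints\... and ...\.cache\huggingface\hub\models--Salesforce--blip-image-captioning-large\...), the weights land in the standard torch hub and Hugging Face hub caches. A small sketch to print their locations on a given machine (they can be overridden by the TORCH_HOME / HF_HOME environment variables):

import torch.hub
from huggingface_hub import constants

print("torch hub cache:       ", torch.hub.get_dir())             # e.g. ~/.cache/torch/hub
print("Hugging Face hub cache:", constants.HUGGINGFACE_HUB_CACHE)  # e.g. ~/.cache/huggingface/hub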

clip-interrogator-ext vs. Automatic1111 webui built-in interrogator

Hello,
thank you for your work and this amazing extension!

I have been playing with both your extension and Automatic1111's built-in interrogator. I have been comparing the results between the two, and they are significantly different. It is probably very subjective, but for some reason I find the Automatic1111 results much better.

Could you tell me what would cause such different results? Is it because of the different artist and style lists each of them uses?

ViT-g-14 not found

Hello,

As I have learnt, ViT-g-14 is the best model for SD 2.1. I think there is a typo here.

when I choose it I get error:

Model config for ViT-g-14 not found; available models ['mt5-base-ViT-B-32', 'mt5-xl-ViT-H-14', 'RN50', 'RN50-quickgelu', 'RN50x4', 'RN50x16', 'RN50x64', 'RN101', 'RN101-quickgelu', 'roberta-ViT-B-32', 'timm-convnext_base', 'timm-convnext_large', 'timm-convnext_xlarge', 'timm-efficientnetv2_rw_s', 'timm-resnetaa50d', 'timm-swin_base_patch4_window7_224', 'timm-vit_medium_patch16_gap_256', 'timm-vit_relpos_medium_patch16_cls_224', 'ViT-B-16', 'ViT-B-16-plus', 'ViT-B-16-plus-240', 'ViT-B-32', 'ViT-B-32-plus-256', 'ViT-B-32-quickgelu', 'ViT-e-14', 'ViT-G-14', 'ViT-H-14', 'ViT-H-16', 'ViT-L-14', 'ViT-L-14-280', 'ViT-L-14-336', 'ViT-L-16', 'ViT-L-16-320', 'ViT-M-16', 'ViT-M-16-alt', 'ViT-M-32', 'ViT-M-32-alt', 'ViT-S-16', 'ViT-S-16-alt', 'ViT-S-32', 'ViT-S-32-alt', 'xlm-roberta-base-ViT-B-32', 'xlm-roberta-large-ViT-H-14'].
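
A quick diagnostic sketch to see which model names the installed open_clip actually knows about; whether ViT-g-14 is listed depends on the open_clip version inside the webui venv:

import open_clip

# list_pretrained() returns (model_name, pretrained_tag) pairs,
# e.g. ("ViT-H-14", "laion2b_s32b_b79k").
available = open_clip.list_pretrained()
print(sorted({name for name, _ in available if name.lower().startswith("vit-g")}))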

Progress bar?

You know how we love to have our progress bar or estimated time! Is there a progress bar I'm missing for the batch process, or could one be added?
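
For reference, a rough sketch of how a batch loop can surface progress with tqdm; process_image() and the file-collection logic are hypothetical stand-ins for the extension's internals, not its actual code:

from pathlib import Path
from tqdm import tqdm

def run_batch(folder: str):
    files = sorted(Path(folder).glob("*.png"))
    # tqdm prints a console progress bar with an estimated time remaining.
    for image_path in tqdm(files, desc="CLIP Interrogator batch"):
        process_image(image_path)  # hypothetical per-image interrogation call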

Install BUG, latest version (22.03.2023)

File "E:\AI\SDP\extensions\clip-interrogator-ext\install.py", line 14, in <module>
    launch.run_pip(f"install clip-interrogator=={CI_VERSION}", "requirements for CLIP Interrogator")
  File "E:\AI\SDP\launch.py", line 145, in run_pip
    return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
  File "E:\AI\SDP\launch.py", line 113, in run
    raise RuntimeError(message)
RuntimeError: Couldn't install requirements for CLIP Interrogator.
Command: "E:\AI\SDP\venv\Scripts\python.exe" -m pip install clip-interrogator==0.6.0 --prefer-binary
Error code: 1
stdout: Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Collecting clip-interrogator==0.6.0
  Downloading clip_interrogator-0.6.0-py3-none-any.whl (787 kB)
     -------------------------------------- 787.8/787.8 kB 3.8 MB/s eta 0:00:00
Requirement already satisfied: tqdm in e:\ai\sdp\venv\lib\site-packages (from clip-interrogator==0.6.0) (4.64.1)
Requirement already satisfied: torchvision in e:\ai\sdp\venv\lib\site-packages (from clip-interrogator==0.6.0) (0.14.1+cu116)
Requirement already satisfied: safetensors in e:\ai\sdp\venv\lib\site-packages (from clip-interrogator==0.6.0) (0.2.7)
Requirement already satisfied: requests in e:\ai\sdp\venv\lib\site-packages (from clip-interrogator==0.6.0) (2.25.1)
Collecting transformers>=4.27.1
  Downloading transformers-4.27.2-py3-none-any.whl (6.8 MB)
     ---------------------------------------- 6.8/6.8 MB 9.6 MB/s eta 0:00:00
Requirement already satisfied: torch in e:\ai\sdp\venv\lib\site-packages (from clip-interrogator==0.6.0) (1.13.1+cu116)
Requirement already satisfied: Pillow in e:\ai\sdp\venv\lib\site-packages (from clip-interrogator==0.6.0) (9.4.0)
Requirement already satisfied: open-clip-torch in e:\ai\sdp\venv\lib\site-packages (from clip-interrogator==0.6.0) (2.7.0)
Requirement already satisfied: accelerate in e:\ai\sdp\venv\lib\site-packages (from clip-interrogator==0.6.0) (0.12.0)
Requirement already satisfied: regex!=2019.12.17 in e:\ai\sdp\venv\lib\site-packages (from transformers>=4.27.1->clip-interrogator==0.6.0) (2022.10.31)
Requirement already satisfied: filelock in e:\ai\sdp\venv\lib\site-packages (from transformers>=4.27.1->clip-interrogator==0.6.0) (3.9.0)
Requirement already satisfied: tokenizers!=0.11.3,<0.14,>=0.11.1 in e:\ai\sdp\venv\lib\site-packages (from transformers>=4.27.1->clip-interrogator==0.6.0) (0.12.1)
Requirement already satisfied: packaging>=20.0 in e:\ai\sdp\venv\lib\site-packages (from transformers>=4.27.1->clip-interrogator==0.6.0) (23.0)
Requirement already satisfied: huggingface-hub<1.0,>=0.11.0 in e:\ai\sdp\venv\lib\site-packages (from transformers>=4.27.1->clip-interrogator==0.6.0) (0.11.1)
Requirement already satisfied: pyyaml>=5.1 in e:\ai\sdp\venv\lib\site-packages (from transformers>=4.27.1->clip-interrogator==0.6.0) (6.0)
Requirement already satisfied: numpy>=1.17 in e:\ai\sdp\venv\lib\site-packages (from transformers>=4.27.1->clip-interrogator==0.6.0) (1.23.3)
Requirement already satisfied: colorama in e:\ai\sdp\venv\lib\site-packages (from tqdm->clip-interrogator==0.6.0) (0.4.6)
Requirement already satisfied: psutil in e:\ai\sdp\venv\lib\site-packages (from accelerate->clip-interrogator==0.6.0) (5.9.4)
Requirement already satisfied: typing-extensions in e:\ai\sdp\venv\lib\site-packages (from torch->clip-interrogator==0.6.0) (4.4.0)
Collecting protobuf==3.20.0
  Downloading protobuf-3.20.0-cp310-cp310-win_amd64.whl (903 kB)
     ------------------------------------- 903.8/903.8 kB 11.5 MB/s eta 0:00:00
Requirement already satisfied: ftfy in e:\ai\sdp\venv\lib\site-packages (from open-clip-torch->clip-interrogator==0.6.0) (6.1.1)
Requirement already satisfied: sentencepiece in e:\ai\sdp\venv\lib\site-packages (from open-clip-torch->clip-interrogator==0.6.0) (0.1.97)
Requirement already satisfied: idna<3,>=2.5 in e:\ai\sdp\venv\lib\site-packages (from requests->clip-interrogator==0.6.0) (2.10)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in e:\ai\sdp\venv\lib\site-packages (from requests->clip-interrogator==0.6.0) (1.26.14)
Requirement already satisfied: chardet<5,>=3.0.2 in e:\ai\sdp\venv\lib\site-packages (from requests->clip-interrogator==0.6.0) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in e:\ai\sdp\venv\lib\site-packages (from requests->clip-interrogator==0.6.0) (2022.12.7)
Requirement already satisfied: wcwidth>=0.2.5 in e:\ai\sdp\venv\lib\site-packages (from ftfy->open-clip-torch->clip-interrogator==0.6.0) (0.2.6)
Installing collected packages: protobuf, transformers, clip-interrogator
  Attempting uninstall: protobuf
    Found existing installation: protobuf 3.19.6
    Uninstalling protobuf-3.19.6:
      Successfully uninstalled protobuf-3.19.6

stderr: WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution - (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution - (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution - (e:\ai\sdp\venv\lib\site-packages)
    WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
    WARNING: Ignoring invalid distribution -rotobuf (e:\ai\sdp\venv\lib\site-packages)
    WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
    WARNING: Ignoring invalid distribution - (e:\ai\sdp\venv\lib\site-packages)
ERROR: Could not install packages due to an OSError: [WinError 5]   : 'E:\\AI\\SDP\\venv\\Lib\\site-packages\\google\\~0otobuf\\internal\\_api_implementation.cp310-win_amd64.pyd'
Check the permissions.

WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution - (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution - (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -rotobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution -otobuf (e:\ai\sdp\venv\lib\site-packages)
WARNING: Ignoring invalid distribution - (e:\ai\sdp\venv\lib\site-packages)

[notice] A new release of pip is available: 23.0 -> 23.0.1
[notice] To update, run: E:\AI\SDP\venv\Scripts\python.exe -m pip install --upgrade pip

Problem with batch

"The size of tensor a (4) must match the size of tensor b (3) at non-singleton dimension 0" (printed over the console's "Total progress: 0it [00:00, ?it/s]" line)

I get this error when I try to use batch. I have a 3060, for what it's worth, and I'm able to interrogate without batch just fine.

incorrect memory check

The latest version adds a check for available VRAM in scripts/clip_interrogator_ext.py, but it does so incorrectly.

def add_tab():
    global low_vram
    low_vram = shared.cmd_opts.lowvram or shared.cmd_opts.medvram
    if not low_vram and torch.cuda.is_available():
        device = devices.get_optimal_device()
        vram_total = torch.cuda.get_device_properties(device).total_memory
        if vram_total < 12*1024*1024*1024:
            low_vram = True
            print(f"    detected < 12GB VRAM, using low VRAM mode")

First, 12 GB GPUs always report slightly less than 12 GiB, so in practice only 16 GB and larger cards clear this threshold (16 GB being the next common size above 12 GB).
For example, an RTX 3060 reports 12884377600 bytes, which is just under the 12 GiB (12884901888 byte) cutoff.

Second, when the check trips, the message printed to the console has no context whatsoever:
it is indented (while no other console messages are) and there is no mention that it comes from the CLIP Interrogator extension.
How is the user supposed to know what it refers to?
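
A hedged sketch of one way the threshold could tolerate drivers reporting slightly less than the nominal size; the 90% margin is an arbitrary example, not the extension's actual fix:

import torch
from modules import devices, shared  # webui modules, as in the quoted snippet

def detect_low_vram() -> bool:
    low_vram = shared.cmd_opts.lowvram or shared.cmd_opts.medvram
    if not low_vram and torch.cuda.is_available():
        device = devices.get_optimal_device()
        vram_total = torch.cuda.get_device_properties(device).total_memory
        # Compare against ~90% of 12 GiB so cards that report just under the
        # nominal size (e.g. 12884377600 bytes on an RTX 3060) are not caught.
        if vram_total < 0.9 * 12 * 1024**3:
            low_vram = True
            print("CLIP Interrogator: detected < 12GB VRAM, using low VRAM mode")
    return low_vram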
