
ComfyUI-Phi-3-mini

Phi-3-mini in ComfyUI


Info

  • Brings Microsoft's Phi-3-mini-4k-instruct model into ComfyUI. The model is small, fast, and strong (comparable to GPT-3.5 and Mixtral 8x7B).

  • Phi-3-mini-4k-instruct is open source and commercially usable (MIT license), performs well in Chinese, and can be used to generate/complete prompts or simply chat.

  • Version: V1.0 supports system prompts, both single-turn and multi-turn chat modes, and automatically turns Chinese input into English prompts.

Features

  • Nodes:

    • 🏖️Phi3mini 4k ModelLoader: downloads the model automatically (keep your network connection up)

    • 🏖️Phi3mini 4k: supports a System Instruction setting

    • 🏖️Phi3mini 4k Chat: supports a System Instruction setting plus multi-turn chat

  • Node example:

  • Multi-turn chat with context:
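
The single-turn and multi-turn modes differ only in whether earlier exchanges are replayed before the new prompt. A minimal sketch of how such a chat history can be assembled, in plain Python (the node's actual internals may differ):

```python
# Sketch: building single- vs multi-turn message lists for a chat model.
# The role/content layout follows the common Hugging Face chat format;
# the Phi-3 node's real internals are not shown here and may differ.

def build_messages(system_instruction, user_prompt, history=None):
    """Assemble a chat-style message list.

    history: optional list of (user, assistant) turn pairs kept for
    multi-turn mode; omitted in single-turn mode.
    """
    messages = [{"role": "system", "content": system_instruction}]
    for user_turn, assistant_turn in (history or []):
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": user_prompt})
    return messages

# Single-turn: only system instruction + current prompt
single = build_messages("Translate Chinese input into an English image prompt.",
                        "一只在花园里的猫")

# Multi-turn: earlier exchanges are replayed before the new prompt
multi = build_messages("You are a helpful assistant.",
                       "Make it more detailed.",
                       history=[("a cat in a garden",
                                 "A fluffy cat lounging in a sunlit garden.")])

print(len(single), len(multi))  # 2 messages vs 4 messages
```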

Parameters

  • model: the loaded model
  • tokenizer: the tokenizer
  • prompt: the input prompt
  • system_instruction: the system instruction
  • temperature: sampling randomness
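
If, as is common for transformers-based nodes, these inputs are forwarded to a generate() call, temperature would control sampling roughly as in this sketch (the exact names and defaults the node uses are assumptions):

```python
# Sketch: mapping the node's temperature parameter onto
# transformers-style generation kwargs. Names and defaults here are
# assumptions, not the node's actual implementation.

def build_generation_kwargs(temperature, max_new_tokens=512):
    """Assemble keyword arguments for a generate() call."""
    kwargs = {"max_new_tokens": max_new_tokens}
    if temperature > 0:
        # temperature > 0: enable sampling; higher values = more random
        kwargs.update(do_sample=True, temperature=temperature)
    else:
        # temperature == 0 is commonly treated as greedy decoding
        kwargs["do_sample"] = False
    return kwargs

# e.g. model.generate(**inputs, **build_generation_kwargs(0.7))
```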

Install

  • Requires transformers>=4.40.0. To upgrade manually: pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers
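
Before reinstalling, it may help to check whether the installed transformers already satisfies the requirement; a stdlib-only sketch:

```python
# Sketch: stdlib-only check that an installed package meets a minimum
# version, here for the transformers>=4.40.0 requirement.
from importlib.metadata import PackageNotFoundError, version


def version_tuple(v):
    """'4.40.0' -> (4, 40, 0); ignores non-numeric suffixes."""
    return tuple(int(p) for p in v.split(".")[:3] if p.isdigit())


def meets_minimum(package, minimum):
    """True if `package` is installed at version `minimum` or newer."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return False
    return version_tuple(installed) >= version_tuple(minimum)


if not meets_minimum("transformers", "4.40.0"):
    print("transformers missing or too old; run the upgrade command above.")
```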

  • Recommended: install via ComfyUI Manager (on the way)

  • Manual install:

    1. cd custom_nodes
    2. git clone https://github.com/ZHO-ZHO-ZHO/ComfyUI-Phi-3-mini.git
    3. cd ComfyUI-Phi-3-mini
    4. pip install -r requirements.txt
    5. Restart ComfyUI
  • The output can feed any node that accepts text, such as ✨DisplayText_Zho from ComfyUI-Gemini.

Workflow

V1.0 workflows

Phi-3-mini-4k + CosXL【Zho】


Phi-3-mini-4k Chat【Zho】


Changelog

20240426

  • V1.0: system prompt support, single-turn and multi-turn chat modes, automatic English prompt output from Chinese input

  • Project created


About me

📬 Contact me

🔗 Social media

💡 Support me

Credits

Phi-3-mini-4k-instruct


Issues

Phi-3 does not follow the system prompt

When generating images with the default workflow, the output text is in Chinese only.

If the original prompt is written in English instead, the node generates long, useless content.

Phi-3 ran normally when first deployed, and produced English prompts from Chinese input. But after clearing the .cache folder once and re-downloading the model, it has been stuck in this broken state. Is there any way to fix it? Resetting the node and updating the whole node did not help.

Many plugins don't use transformers 4.40. What can be done?

Many plugins don't use transformers 4.40. Even after installing transformers 4.40 manually, some plugin's dependency check at ComfyUI startup uninstalls it and installs 4.37.1, so Phi-3-mini cannot run. How should this be handled? Thanks.

We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like microsoft/Phi-3-mini-4k-instruct is not the path to a directory containing a file named config.json.

Error occurred when executing Phi3mini_4k_ModelLoader_Zho:

We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like microsoft/Phi-3-mini-4k-instruct is not the path to a directory containing a file named config.json.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.

File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\deforum-comfy-nodes\deforum_nodes\exec_hijack.py", line 55, in map_node_over_list
return orig_exec(obj, input_data_all, func, allow_interrupt)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\ComfyUI\execution.py", line 69, in map_node_over_list
results.append(getattr(obj, func)())
^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Phi-3-mini\Phi3mini.py", line 26, in load_model
model = AutoModelForCausalLM.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\auto\auto_factory.py", line 523, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\models\auto\configuration_auto.py", line 928, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\configuration_utils.py", line 631, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\configuration_utils.py", line 686, in _get_config_dict
resolved_config_file = cached_file(
^^^^^^^^^^^^
File "G:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\transformers\utils\hub.py", line 441, in cached_file
raise EnvironmentError(

Error occurred when executing Phi3mini_4k_ModelLoader_Zho: Using `low_cpu_mem_usage=True` or a `device_map` requires Accelerate: `pip install accelerate`

Traceback (most recent call last):
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 69, in map_node_over_list
results.append(getattr(obj, func)())
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Phi-3-mini\Phi3mini.py", line 26, in load_model
model = AutoModelForCausalLM.from_pretrained(
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\models\auto\auto_factory.py", line 558, in from_pretrained
return model_class.from_pretrained(
File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\transformers\modeling_utils.py", line 3086, in from_pretrained
raise ImportError(
ImportError: Using low_cpu_mem_usage=True or a device_map requires Accelerate: pip install accelerate

What is going on here?

Please add a feature to release VRAM and RAM

Could automatic VRAM release be added? Once loaded, the model eats 8 GB of VRAM, leaving little for the image model, and VRAM maxes out. Also, could the model be unloaded from RAM when idle? RAM is tight too. And could quantization shrink the model to around 2.6 GB? The current 7 GB model is huge.
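
Until such a feature exists, one manual workaround may be to drop every Python reference to the model and then clear PyTorch's CUDA cache; a sketch, assuming PyTorch and that the workflow lets you clear the reference:

```python
# Sketch of a manual VRAM-release workaround. It only helps if all
# Python references to the model are dropped first; whether the node
# exposes a hook for this is an open question.
import gc

try:
    import torch
except ImportError:  # torch is absent outside a ComfyUI environment
    torch = None


def release_vram():
    """Collect garbage, then ask PyTorch to release cached CUDA memory.

    Call only after clearing your own references, e.g. `model = None`.
    """
    gc.collect()
    if torch is not None and torch.cuda.is_available():
        torch.cuda.empty_cache()
```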

Does it work on macOS?

I have ComfyUI installed on my M1 Mac and it runs fine,
but ComfyUI-Phi-3-mini throws an error:

Error occurred when executing Phi3mini_4k_ModelLoader_Zho:

Torch not compiled with CUDA enabled

File "/Users/askiter/AI/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/Users/askiter/AI/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/Users/askiter/AI/ComfyUI/execution.py", line 69, in map_node_over_list
results.append(getattr(obj, func)())
File "/Users/askiter/AI/ComfyUI/custom_nodes/ComfyUI-Phi-3-mini/Phi3mini.py", line 26, in load_model
model = AutoModelForCausalLM.from_pretrained(
File "/Users/askiter/AI/PyVenv3.10/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
return model_class.from_pretrained(
File "/Users/askiter/AI/PyVenv3.10/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3689, in from_pretrained
) = cls._load_pretrained_model(
File "/Users/askiter/AI/PyVenv3.10/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4123, in _load_pretrained_model
new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
File "/Users/askiter/AI/PyVenv3.10/lib/python3.10/site-packages/transformers/modeling_utils.py", line 887, in _load_state_dict_into_meta_model
set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
File "/Users/askiter/AI/PyVenv3.10/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 298, in set_module_tensor_to_device
new_value = value.to(device)
File "/Users/askiter/AI/PyVenv3.10/lib/python3.10/site-packages/torch/cuda/__init__.py", line 284, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")

Does it only run on machines with an NVIDIA GPU?
Thanks.
