rayventura / shortgpt

🚀🎬 ShortGPT - Experimental AI framework for YouTube Shorts / TikTok channel automation

Home Page: https://shortx.ai/?ref=sgpt

License: Other

Python 99.53% Dockerfile 0.47%
ai artificial-intelligence automation autonomous-agents content gpt-4 openai python video video-editing

shortgpt's Introduction

🚀🎬 ShortGPT

(Note: ShortX is out, a paid premium automation service with more capabilities than ShortGPT; check it out at shortx.ai.)

ShortGPT-logo
⚡ Automating video and short content creation with AI ⚡

Follow the installation steps below to run the web app locally (running the Google Colab notebook is highly recommended). Please read installation-notes.md for more details.

🎥 Showcase (Full video on YouTube)

quickshowcase.mp4

🎥 Voice Dubbing

ShortGPT.video.to.video.dubbing.and.voice.translation.mp4

🌟 Show Your Support

We hope you find ShortGPT helpful! If you do, let us know by giving us a star ⭐ on the repo. It's easy, just click on the 'Star' button at the top right of the page. Your support means a lot to us and keeps us motivated to improve and expand ShortGPT. Thank you and happy content creating! 🎉

GitHub star chart

๐Ÿ› ๏ธ How it works


๐Ÿ“ Introduction to ShortGPT

ShortGPT is a powerful framework for automating content creation. It simplifies video creation, footage sourcing, voiceover synthesis, and editing tasks. Among its most popular use cases are YouTube Shorts automation and TikTok Creativity Program automation.

  • ๐ŸŽž๏ธ Automated editing framework: Streamlines the video creation process with an LLM oriented video editing language.

  • ๐Ÿ“ƒ Scripts and Prompts: Provides ready-to-use scripts and prompts for various LLM automated editing processes.

  • ๐Ÿ—ฃ๏ธ Voiceover / Content Creation: Supports multiple languages including English ๐Ÿ‡บ๐Ÿ‡ธ, Spanish ๐Ÿ‡ช๐Ÿ‡ธ, Arabic ๐Ÿ‡ฆ๐Ÿ‡ช, French ๐Ÿ‡ซ๐Ÿ‡ท, Polish ๐Ÿ‡ต๐Ÿ‡ฑ, German ๐Ÿ‡ฉ๐Ÿ‡ช, Italian ๐Ÿ‡ฎ๐Ÿ‡น, Portuguese ๐Ÿ‡ต๐Ÿ‡น, Russian ๐Ÿ‡ท๐Ÿ‡บ, Mandarin Chinese ๐Ÿ‡จ๐Ÿ‡ณ, Japanese ๐Ÿ‡ฏ๐Ÿ‡ต, Hindi ๐Ÿ‡ฎ๐Ÿ‡ณ,Korean ๐Ÿ‡ฐ๐Ÿ‡ท, and way over 30 more languages (with EdgeTTS)

  • ๐Ÿ”— Caption Generation: Automates the generation of video captions.

  • ๐ŸŒ๐ŸŽฅ Asset Sourcing: Sources images and video footage from the internet, connecting with the web and Pexels API as necessary.

  • ๐Ÿง  Memory and persistency: Ensures long-term persistency of automated editing variables with TinyDB.

If you prefer not to install the prerequisites on your local system, you can use the Google Colab notebook. This option is free and requires no installation setup.

  1. Click on the link to the Google Colab notebook: https://colab.research.google.com/drive/1_2UKdpF6lqxCqWaAcZb3rwMVQqtbisdE?usp=sharing

  2. Once you're in the notebook, simply run the cells in order from top to bottom. You can do this by clicking on each cell and pressing the 'Play' button, or with the corresponding keyboard shortcut. Enjoy using ShortGPT!

Instructions for running ShortGPT locally

This guide provides step-by-step instructions for installing ShortGPT and its dependencies. To run ShortGPT locally, you need Docker.

Installation Steps

To run ShortGPT, you need Docker installed. See installation-notes.md for more details.

  1. Build and run the Docker image:
docker build -t short_gpt_docker:latest .
docker run -p 31415:31415 --env-file .env short_gpt_docker:latest
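The `--env-file .env` flag is how API keys reach the container. As a sketch of what that file might contain (the variable names here are assumptions, not confirmed by this page; check installation-notes.md for the exact keys your version expects):

```shell
# .env — example only; key names are assumptions, verify against installation-notes.md
OPENAI_API_KEY=sk-...
ELEVENLABS_API_KEY=...
PEXELS_API_KEY=...
```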

Running the runShortGPT.py Web Interface

  1. After running the script, a Gradio interface should open on localhost at port 31415 (http://localhost:31415).

Framework overview

  • 🎬 The ContentShortEngine is designed for creating shorts, handling tasks from script generation to final rendering, including adding YouTube metadata.

  • 🎥 The ContentVideoEngine is ideal for longer videos, taking care of tasks like generating audio, automatically sourcing background video footage, timing captions, and preparing background assets.

  • 🗣️ The ContentTranslationEngine is designed to dub and translate entire videos, from mainstream languages to more specific target languages. It takes a video file or YouTube link, transcribes its audio, translates the content, voices it in the target language, adds captions, and returns a new video in a totally different language.

  • 🎞️ The automated EditingEngine, using Editing Markup Language and JSON, breaks down the editing process into manageable and customizable blocks, comprehensible to Large Language Models.

💡 ShortGPT offers customization options to suit your needs, from language selection to watermark addition.

🔧 As a framework, ShortGPT is adaptable and flexible, offering the potential for efficient, creative content creation.

More documentation is coming soon; please be patient.

Technologies Used

ShortGPT utilizes the following technologies to power its functionality:

  • MoviePy: MoviePy is used for video editing, allowing ShortGPT to edit and render videos.

  • OpenAI: OpenAI models are used to automate the entire process, including generating scripts and prompts for LLM-automated editing.

  • ElevenLabs: ElevenLabs is used for voice synthesis, supporting multiple languages for voiceover creation.

  • EdgeTTS: Microsoft's free EdgeTTS is used for voice synthesis, currently supporting many more languages than ElevenLabs.

  • Pexels: Pexels is used for sourcing background footage, allowing ShortGPT to connect with the web and access a wide range of images and videos.

  • Bing Images: Bing Image Search is used for sourcing images, providing a comprehensive database for ShortGPT to retrieve relevant visuals.

These technologies work together to provide a seamless and efficient experience in automating video and short content creation with AI.

๐Ÿ’ Contributing

As an open-source project in a rapidly developing field, we are extremely open to contributions, whether in the form of a new feature, improved infrastructure, or better documentation.

Star History Chart

shortgpt's People

Contributors

abidlabs, abirabedinkhan, eltociear, erim32, logicalcode22, paillat-dev, rayventura, test12376, transonit, vembarrajan


shortgpt's Issues

Video without any sound

After rendering, I view the finished video and watch it with sound in the web interface, but when I download it, it is completely without sound. What could be the problem?

Multiple versions of a package installing repeatedly during requirements.txt run

Hi,

Was looking to test this, set up Python 3.10 venv and started installing dependencies.

I am getting the following during the install:

(screenshot)

It's been at it for about 30 minutes now. Any reason this might be the case?

It finally seems to have bombed out here:

(screenshot)

It has continued with installation but is still painfully slow.

Thanks!

Feature of the program or a bug?

Hi RayVentura, I noticed something in the "Automate a video with stock assets" 🎞️ feature. When creating a video, if I manually enter a separate content script that produces longer content, the program speeds up the audio so the result stays under one minute. This makes the video run extremely fast, and the sound breaks when I try to slow it down.
Could you add an option so that, when ticked, the video is not squeezed down to under one minute? (This would be very useful for making longer videos for other platforms such as Facebook or Pinterest.)

This video runs too fast: https://youtu.be/mruAxRsBKUI
Thank you very much.

Not finding ImageMagick even when it is installed (Ubuntu)

Got this error on WSL even after installing ImageMagick:
ERROR : ImageMagick, a program required for making Captions with ShortGPT was not found on your computer. Please go back to the README and follow the instructions to install ImageMagick

Traceback Info:
  File "/home/ed/ShortGPT/gui/short_automation_ui.py", line 144, in create_short
    shortEngine = create_short_engine(short_type=short_type,
  File "/home/ed/ShortGPT/gui/short_automation_ui.py", line 108, in create_short_engine
    return RedditShortEngine(background_video_name=background_video,
  File "/home/ed/ShortGPT/shortGPT/engine/reddit_short_engine.py", line 12, in __init__
    super().__init__(short_id=short_id, short_type="reddit_shorts", background_video_name=background_video_name, background_music_name=background_music_name,
  File "/home/ed/ShortGPT/shortGPT/engine/content_short_engine.py", line 19, in __init__
    super().__init__(short_id, short_type, language)
  File "/home/ed/ShortGPT/shortGPT/engine/abstract_content_engine.py", line 20, in __init__
    self.initializeMagickAndFFMPEG()
  File "/home/ed/ShortGPT/shortGPT/engine/abstract_content_engine.py", line 93, in initializeMagickAndFFMPEG
    raise Exception("ImageMagick, a program required for making Captions with ShortGPT was not found on your computer. Please go back to the README and follow the instructions to install ImageMagick")

Bing่ฟžๆŽฅไธไธŠไนˆ

ERROR | Sslerror : HTTPSConnectionPool(host='www.bing.com', port=443): Max retries exceeded with url: /images/search?q=perfectly+preserved+image&first=1 (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))
Traceback Info:
  File "D:\Model\ShortGPT\gui\short_automation_ui.py", line 61, in create_short
    for step_num, step_info in shortEngine.makeContent():
  File "D:\Model\ShortGPT\shortGPT\engine\abstract_content_engine.py", line 72, in makeContent
    self.stepDict[currentStep]()
  File "D:\Model\ShortGPT\shortGPT\engine\content_short_engine.py", line 82, in _generateImageUrls
    self._db_timed_image_urls = editing_images.getImageUrlsTimed(
  File "D:\Model\ShortGPT\shortGPT\editing_utils\editing_images.py", line 7, in getImageUrlsTimed
    return [(pair[0], searchImageUrlsFromQuery(pair[1])) for pair in tqdm(imageTextPairs, desc='Search engine queries for images...')]
  File "D:\Model\ShortGPT\shortGPT\editing_utils\editing_images.py", line 7, in <listcomp>
    return [(pair[0], searchImageUrlsFromQuery(pair[1])) for pair in tqdm(imageTextPairs, desc='Search engine queries for images...')]
  File "D:\Model\ShortGPT\shortGPT\editing_utils\editing_images.py", line 12, in searchImageUrlsFromQuery
    images = getBingImages(query, retries=retries)
  File "D:\Model\ShortGPT\shortGPT\api_utils\image_api.py", line 39, in getBingImages
    response = requests.get(f"https://www.bing.com/images/search?q={query}&first=1")
  File "D:\Users\1\miniconda3\envs\shortgpt\lib\site-packages\requests\api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "D:\Users\1\miniconda3\envs\shortgpt\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\Users\1\miniconda3\envs\shortgpt\lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\Users\1\miniconda3\envs\shortgpt\lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "D:\Users\1\miniconda3\envs\shortgpt\lib\site-packages\requests\adapters.py", line 517, in send
    raise SSLError(e, request=request)

If the problem persists, don't hesitate to contact our support. We're here to assist you

raise IOError(("MoviePy error: failed to read the duration of file %s.\n"

Today I used "Automate a video with stock assets" 🎞️ and got back this error. I am using Google Colab. Thank you.

Error File "/content/ShortGPT/gui/video_automation_ui.py", line 95, in respond
video_path = makeVideo(script, language.value, isVertical, progress=progress)
File "/content/ShortGPT/gui/video_automation_ui.py", line 46, in makeVideo
for step_num, step_info in shortEngine.makeShort():
File "/content/ShortGPT/shortGPT/engine/abstract_content_engine.py", line 72, in makeShort
self.stepDict[currentStep]()
File "/content/ShortGPT/shortGPT/engine/content_video_engine.py", line 139, in _editAndRenderShort
videoEditor.renderVideo(outputPath, logger=self.logger)
File "/content/ShortGPT/shortGPT/editing_framework/editing_engine.py", line 92, in renderVideo
engine.generate_video(self.schema, outputPath, logger=logger)
File "/content/ShortGPT/shortGPT/editing_framework/core_editing_engine.py", line 53, in generate_video
clip = self.process_video_asset(asset)
File "/content/ShortGPT/shortGPT/editing_framework/core_editing_engine.py", line 179, in process_video_asset
clip = VideoFileClip(**params)
File "/usr/local/lib/python3.10/dist-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
File "/usr/local/lib/python3.10/dist-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
File "/usr/local/lib/python3.10/dist-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos
raise IOError(("MoviePy error: failed to read the duration of file %s.\n"

✨ [Feature Request / Suggestion]:

Suggestion / Feature Request

  • allow us to enter a custom script
  • add custom text to make the short video look better
  • add an option to remove generated images from the video
  • let us add our own stock video

Why would this be useful?

  • because sometimes the generated script just needs to be changed a little, and prompting multiple times to get a similar result is not effective
  • it makes the short video quality higher
  • removing generated images and adding our own stock video increases video quality and gives more control over what kind of content we can produce

Screenshots/Assets/Relevant links

Example
https://www.youtube.com/shorts/Pujw_qJU0BM

✨ [Feature Request / Suggestion]:

Suggestion / Feature Request

For the OpenAI API key, allow the use of a custom API endpoint, as BetterGPT does.

Why would this be useful?

To use a free key.

Screenshots/Assets/Relevant links

No response

Hi RayVentura

Can you add the Vietnamese language on Google Colab?
Thank you very much.

โ“ [Question]: ValueError: An event handler (respond) didn't receive enough output values (needed: 6, received: 3)

Your Question

Traceback (most recent call last):
File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\gradio\routes.py", line 437, in run_predict
output = await app.get_blocks().process_api(
File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\gradio\blocks.py", line 1355, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\gradio\blocks.py", line 1258, in postprocess_data
self.validate_outputs(fn_index, predictions) # type: ignore
File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\gradio\blocks.py", line 1233, in validate_outputs
raise ValueError(
ValueError: An event handler (respond) didn't receive enough output values (needed: 6, received: 3).
Wanted outputs:
[textbox, chatbot, html, html, button, button]
Received outputs:
[{'visible': False, 'type': 'generic_update'}, {'label': None, 'show_label': None, 'container': None, 'scale': None, 'min_width': None, 'visible': None, 'value': [[None, 'Your video is being made now! ๐ŸŽฌ']], 'height': None, 'type': 'update'}, {'label': None, 'show_label': None, 'visible': False, 'value': '', 'type': 'update'}]
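The error itself is generic Gradio behavior: the `respond` handler is wired to six output components, but on this code path it returned only three values. A minimal, Gradio-free illustration of the same check (the validator below is a toy reproduction, not Gradio's actual code):

```python
# Gradio validates that a handler returns one value per declared output
# component. This toy validator reproduces the failing check.
def validate_outputs(expected: int, received: tuple):
    if len(received) != expected:
        raise ValueError(
            f"An event handler didn't receive enough output values "
            f"(needed: {expected}, received: {len(received)})."
        )
    return received

# Six components wired up, but the handler returned only three values:
try:
    validate_outputs(6, ("textbox_update", "chatbot_update", "html_update"))
except ValueError as e:
    message = str(e)
print(message)
```

So the fix is in the handler: every return path of `respond` must yield as many values as there are output components, even on the early "video is being made" path.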

✨ [Feature Request / Suggestion]: Long video analysis

Suggestion / Feature Request

Hello Ray, I've been thinking about creating a function that analyzes a long video, cuts the best parts using GPT, adds subtitles, and thus creates several shorts. If you haven't thought of it and are interested in something like that, we can talk.

Why would this be useful?

No response

Screenshots/Assets/Relevant links

No response

✨ [Feature Request / Suggestion]: Use ChatGPT without an API key

Suggestion / Feature Request

Hi

import json
import requests

safeInput = input('Request:  ')

# Prepare the data payload
data = {
    "prompt": safeInput
}
payload = json.dumps(data)

# Set the headers
headers = {
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/110.0",
    "Accept": "application/json, text/plain, */*",
    "Accept-Language": "en-US,en;q=0.5",
    "Content-Type": "application/json",
    "Origin": "https://chatbot.theb.ai",
    "Referer": "https://chatbot.theb.ai/"
}

# Send the POST request
url = "https://chatbot.theb.ai/api/chat-process"
response = requests.post(url, data=payload, headers=headers)

# Process the response
if response.status_code == 200:
    response_text = response.text

    # Find the last JSON string in the response text
    json_strings = response_text.strip().split('\n')
    last_json_string = json_strings[-1]

    response_json = json.loads(last_json_string)
    print(response_json['text'])
else:
    print("Error:", response.status_code)
   

You can use this code in your project to use ChatGPT without an API key.

Why would this be useful?

No response

Screenshots/Assets/Relevant links

No response

Discord spam

Can you please not spam random Discord servers? It is against the Discord ToS and has been reported.

removing reliance on Third-Party APIs [abandoned]

Edit:
@elevenlabs is this a joke? @maxilevi sell-out

ERROR| Exception : Error in response, 401 , message: {"detail":{"status":"quota_exceeded","message":"Unusual activity detected. Free Tier usage disabled. If you are using proxy/VPN you might need to purchase a Paid Plan to not trigger our abuse detectors. Free Tier only works if users do not abuse it, for example by creating multiple free accounts. If we notice that many people try to abuse it, we will need to reconsider Free Tier altogether. Please play fair.\nPlease purchase any Paid Subscription to continue."}}

MoviePy error: failed to read the duration of file %s.\n

I see this in my Google Colab console:

Step 1 _generateTempAudio
Step 2 _speedUpAudio
Step 3 _timeCaptions
Detected language: English
100% 8176/8176 [00:14<00:00, 578.76frames/s]
Step 4 _generateVideoSearchTerms
Expecting ',' delimiter: line 1 column 1407 (char 1406)
not the right format
Step 5 _generateVideoUrls
Step 6 _chooseBackgroundMusic
Step 7 _prepareBackgroundAssets
Step 8 _prepareCustomAssets
Step 9 _editAndRenderShort
Error File "/content/ShortGPT/gui/video_automation_ui.py", line 95, in respond
    video_path = makeVideo(script, language.value, isVertical, progress=progress)
  File "/content/ShortGPT/gui/video_automation_ui.py", line 46, in makeVideo
    for step_num, step_info in shortEngine.makeShort():
  File "/content/ShortGPT/shortGPT/engine/abstract_content_engine.py", line 72, in makeShort
    self.stepDict[currentStep]()
  File "/content/ShortGPT/shortGPT/engine/content_video_engine.py", line 139, in _editAndRenderShort
    videoEditor.renderVideo(outputPath, logger=self.logger)
  File "/content/ShortGPT/shortGPT/editing_framework/editing_engine.py", line 92, in renderVideo
    engine.generate_video(self.schema, outputPath, logger=logger)
  File "/content/ShortGPT/shortGPT/editing_framework/core_editing_engine.py", line 53, in generate_video
    clip = self.process_video_asset(asset)
  File "/content/ShortGPT/shortGPT/editing_framework/core_editing_engine.py", line 179, in process_video_asset
    clip = VideoFileClip(**params)
  File "/usr/local/lib/python3.10/dist-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
    self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
  File "/usr/local/lib/python3.10/dist-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
    infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
  File "/usr/local/lib/python3.10/dist-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos
    raise IOError(("MoviePy error: failed to read the duration of file %s.\n"

and this in the script running page:
`ERROR : Oserror : MoviePy error: failed to read the duration of file https://player.vimeo.com/external/439473996.hd.mp4?s=e57b22a2fa8b6e4dd94706c3a4eeb9341012259b&profile_id=169&oauth2_token_id=57447761. Here are the file infos returned by ffmpeg: ffmpeg version 4.2.2-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers built with gcc 8 (Debian 8.3.0-6) configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg libavutil 56. 31.100 / 56. 31.100 libavcodec 58. 54.100 / 58. 54.100 libavformat 58. 29.100 / 58. 29.100 libavdevice 58. 8.100 / 58. 8.100 libavfilter 7. 57.100 / 7. 57.100 libswscale 5. 5.100 / 5. 5.100 libswresample 3. 5.100 / 3. 5.100 libpostproc 55. 5.100 / 55. 5.100
Traceback Info:
  File "/content/ShortGPT/gui/video_automation_ui.py", line 95, in respond
    video_path = makeVideo(script, language.value, isVertical, progress=progress)
  File "/content/ShortGPT/gui/video_automation_ui.py", line 46, in makeVideo
    for step_num, step_info in shortEngine.makeShort():
  File "/content/ShortGPT/shortGPT/engine/abstract_content_engine.py", line 72, in makeShort
    self.stepDict[currentStep]()
  File "/content/ShortGPT/shortGPT/engine/content_video_engine.py", line 139, in _editAndRenderShort
    videoEditor.renderVideo(outputPath, logger=self.logger)
  File "/content/ShortGPT/shortGPT/editing_framework/editing_engine.py", line 92, in renderVideo
    engine.generate_video(self.schema, outputPath, logger=logger)
  File "/content/ShortGPT/shortGPT/editing_framework/core_editing_engine.py", line 53, in generate_video
    clip = self.process_video_asset(asset)
  File "/content/ShortGPT/shortGPT/editing_framework/core_editing_engine.py", line 179, in process_video_asset
    clip = VideoFileClip(**params)
  File "/usr/local/lib/python3.10/dist-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
    self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
  File "/usr/local/lib/python3.10/dist-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
    infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
  File "/usr/local/lib/python3.10/dist-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos
    raise IOError(("MoviePy error: failed to read the duration of file %s.\n"

If the problem persists, don't hesitate to contact our support. We're here to assist you`

Screenshot:
(screenshot)

Hugging Face deploy version

Good day, it would be great to see a standalone version that can be deployed on Hugging Face, because at the moment deploying to Hugging Face is not possible due to ImageMagick issues.

✨ [Feature Request / Suggestion]: Adding Additional Video Stock Footage APIs

Suggestion / Feature Request

Adding additional video stock footage APIs

Pixabay
https://pixabay.com/service/about/api/

Videvo
https://www.videvo.net/a/videvo-api/

Videvo
https://www.videvo.net/stock-video-footage/apis/

Coverr
https://coverr.co/developers

These all offer free API access.

Why would this be useful?

If one stock footage site doesn't provide suitable content, we can select another.

Screenshots/Assets/Relevant links

No response

shortgpt.py file not found

As per the instructions below, we have to locate "shortgpt.py", but the repository does not contain this file.
Step 4: Install Python Dependencies
Open a terminal or command prompt.

Navigate to the directory where shortgpt.py is located (the cloned repo).

Execute the following command to install the required Python dependencies:

Video to short error

Hi

Seems to be a brilliant solution. KUDOS!!!

Yet, I am constantly getting this error:

ERROR : Video too short
Traceback Info:
  File "C:\Users\Tal\VScode\shortgpt\gui\short_automation_ui.py", line 59, in create_short
    for step_num, step_info in shortEngine.makeShort():
  File "C:\Users\Tal\VScode\shortgpt\shortGPT\engine\abstract_content_engine.py", line 72, in makeShort
    self.stepDict[currentStep]()
  File "C:\Users\Tal\VScode\shortgpt\shortGPT\engine\content_short_engine.py", line 110, in _prepareBackgroundAssets
    self._db_background_trimmed = extract_random_clip_from_video(
  File "C:\Users\Tal\VScode\shortgpt\shortGPT\editing_utils\handle_videos.py", line 75, in extract_random_clip_from_video
    raise Exception("Video too short")

If the problem persists, don't hesitate to contact our support. We're here to assist you.

What exactly does this error refer to?
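From the traceback, `extract_random_clip_from_video` raises this when the chosen background video is shorter than the clip length the engine needs. A rough sketch of that precondition (hypothetical signature, not ShortGPT's actual code):

```python
import random

def extract_random_clip(video_duration: float, clip_duration: float) -> tuple:
    """Pick a random [start, end] window of clip_duration seconds.

    Hypothetical illustration: if the source video is shorter than the
    requested clip, no valid window exists, hence "Video too short".
    """
    if video_duration < clip_duration:
        raise Exception("Video too short")
    start = random.uniform(0.0, video_duration - clip_duration)
    return (start, start + clip_duration)

# A 30-second background cannot yield a 60-second clip:
try:
    extract_random_clip(30.0, 60.0)
except Exception as e:
    error = str(e)
```

So the practical fix is usually to pick a longer background video (or a shorter target duration).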

Expecting ',' delimiter: line 1 column 257 (char 256)

Automate a video with stock assets:

Step 1 _generateTempAudio
Step 2 _speedUpAudio
Step 3 _timeCaptions
Detected language: English
100% 5117/5117 [00:01<00:00, 2592.00frames/s]
Step 4 _generateVideoSearchTerms
Expecting ',' delimiter: line 1 column 257 (char 256)
not the right format

ErrorGPT

ERROR | Exception : GPT3 error: You exceeded your current quota, please check your plan and billing details. How do I solve this?

No audio in video

I have an ElevenLabs key set, and characters are being consumed on each try. I've tried three times with different configurations, and there is no audio (I'm sure) in the generated video. The audio files exist in the directory too, and they play just fine. Please let me know what else you need.

Versions:
MacOS
ffmpeg version 6.0 Copyright (c) 2000-2023 the FFmpeg developers
(env) (base) mahev@Mahevs-MacBook-Pro shortgpt % convert --version
Version: ImageMagick 7.1.1-12 Q16-HDRI aarch64 21239 https://imagemagick.org
Copyright: (C) 1999 ImageMagick Studio LLC
License: https://imagemagick.org/script/license.php
Features: Cipher DPC HDRI Modules OpenMP(5.0)
Delegates (built-in): bzlib fontconfig freetype gslib heic jng jp2 jpeg jxl lcms lqr ltdl lzma openexr png ps raw tiff webp xml zlib
Compiler: gcc (4.2)

ERROR | Attributeerror : module 'ffmpeg' has no attribute 'Error'

Traceback Info:
  File "/Users//Sites/shortgpt/gui/short_automation_ui.py", line 69, in create_short
    for step_num, step_info in shortEngine.makeShort():
  File "/Users//Sites/shortgpt/shortGPT/engine/abstract_content_engine.py", line 72, in makeShort
    self.stepDict[currentStep]()
  File "/Users//Sites/shortgpt/shortGPT/engine/content_short_engine.py", line 70, in _timeCaptions
    whisper_analysis = audio_utils.audioToText(self._db_audio_path)
  File "/Users//Sites/shortgpt/shortGPT/audio/audio_utils.py", line 60, in audioToText
    gen = transcribe_timestamped(WHISPER_MODEL, filename, verbose=False, fp16=False)
  File "/usr/local/anaconda3/lib/python3.10/site-packages/whisper_timestamped/transcribe.py", line 266, in transcribe_timestamped
    (transcription, words) = _transcribe_timestamped_efficient(model, audio,
  File "/usr/local/anaconda3/lib/python3.10/site-packages/whisper_timestamped/transcribe.py", line 846, in _transcribe_timestamped_efficient
    transcription = model.transcribe(audio, **whisper_options)
  File "/usr/local/anaconda3/lib/python3.10/site-packages/whisper/transcribe.py", line 121, in transcribe
    mel = log_mel_spectrogram(audio, padding=N_SAMPLES)
  File "/usr/local/anaconda3/lib/python3.10/site-packages/whisper/audio.py", line 130, in log_mel_spectrogram
    audio = load_audio(audio)
  File "/usr/local/anaconda3/lib/python3.10/site-packages/whisper/audio.py", line 50, in load_audio
    except ffmpeg.Error as e:

Error when trying to create video

Went through the "Automate a video with stock assets" workflow with all API keys entered and after installing requirements, but got this error:
(base) D:\Code\ShortGPT>python runShortGPT.py
Running on local URL: http://127.0.0.1:31415

To create a public link, set share=True in launch().
Step 1 _generateTempAudio
'AsyncRequest' object has no attribute '_json_response_data'
Step 2 _speedUpAudio
ffmpeg version 4.2.3 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 9.3.1 (GCC) 20200523
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-amf --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[mp3 @ 00000187d2789480] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from '.editing_assets/general_video_assets/4e0ede81afb2456a93961084/temp_audio_path.wav':
Duration: 00:00:40.80, start: 0.000000, bitrate: 64 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, mono, fltp, 64 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (mp3float) -> pcm_s16le (native))
Press [q] to stop, [?] for help
Output #0, wav, to '.editing_assets/general_video_assets/4e0ede81afb2456a93961084/audio_voice.wav':
Metadata:
ISFT : Lavf58.29.100
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, mono, s16, 705 kb/s
Metadata:
encoder : Lavc58.54.100 pcm_s16le
size= 3515kB time=00:00:40.80 bitrate= 705.6kbits/s speed= 376x
video:0kB audio:3514kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.002167%
Step 3 _timeCaptions
100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 72.1M/72.1M [00:07<00:00, 10.7MiB/s]
Error File "D:\Code\ShortGPT\gui\video_automation_ui.py", line 122, in respond
video_path = makeVideo(script, language.value, isVertical, progress=progress)
File "D:\Code\ShortGPT\gui\video_automation_ui.py", line 42, in makeVideo
for step_num, step_info in shortEngine.makeShort():
File "D:\Code\ShortGPT\shortGPT\engine\abstract_content_engine.py", line 72, in makeShort
self.stepDict[currentStep]()
File "D:\Code\ShortGPT\shortGPT\engine\content_video_engine.py", line 69, in _timeCaptions
whisper_analysis = audio_utils.audioToText(self._db_audio_path)
File "D:\Code\ShortGPT\shortGPT\audio\audio_utils.py", line 60, in audioToText
gen = transcribe_timestamped(WHISPER_MODEL, filename,verbose=False, fp16=False)
File "C:\Users\Ed\AppData\Local\mambaforge\lib\site-packages\whisper_timestamped\transcribe.py", line 230, in transcribe_timestamped
alignment_heads=get_alignment_heads(model) if word_alignement_most_top_layers is None else None,
File "C:\Users\Ed\AppData\Local\mambaforge\lib\site-packages\whisper_timestamped\transcribe.py", line 2032, in get_alignment_heads
return _get_alignment_heads(model_name, num_layers, num_heads)
File "C:\Users\Ed\AppData\Local\mambaforge\lib\site-packages\whisper_timestamped\transcribe.py", line 2037, in _get_alignment_heads
mask = torch.from_numpy(array).reshape(num_layers, num_heads)

โ“ [Question]: Vietnamese font error ?

Your Question

When I use English everything works fine, but with Vietnamese the captions rendered in the video have a font error. I also tried replacing the font with Arial, but it didn't help. How can I make sure the Vietnamese text isn't corrupted? Normally a transcription file would be saved in UTF-8 to avoid Vietnamese font errors, but I have searched the files fairly deeply and can't find any mention of that encoding, only fonts!!

thank you @ray
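A likely cause (an assumption, not confirmed in this thread) is not file encoding but the caption-rendering font missing Vietnamese glyphs: Vietnamese uses characters such as ế, ở, ậ that many fonts (including some Arial builds exposed to ImageMagick) do not cover. A minimal sketch of choosing a Unicode-capable font per caption; `Noto-Sans` and `Arial` are hypothetical font names, so substitute whatever fonts your system actually provides:

```python
import unicodedata

# Hypothetical font names -- check what your ImageMagick/system exposes.
UNICODE_SAFE_FONT = "Noto-Sans"  # assumed to have broad Vietnamese coverage
DEFAULT_FONT = "Arial"

def needs_unicode_font(text: str) -> bool:
    """True if the text contains characters beyond Latin-1, e.g. the
    precomposed Vietnamese letters (U+1EA0..U+1EF9)."""
    return any(ord(ch) > 0xFF for ch in unicodedata.normalize("NFC", text))

def pick_caption_font(text: str) -> str:
    # Fall back to a wide-coverage font only when the caption needs it.
    return UNICODE_SAFE_FONT if needs_unicode_font(text) else DEFAULT_FONT
```

If the chosen font covers the glyphs, the captions should render without the "broken box" characters regardless of file encoding.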

ERROR | Typeerror : CoreEditingEngine.generate_image() got an unexpected keyword argument 'logger'

Traceback Info:
File "/Users/liams/Documents/shortgpt/gui/short_automation_ui.py", line 61, in create_short
for step_num, step_info in shortEngine.makeContent():
File "/Users/liams/Documents/shortgpt/shortGPT/engine/abstract_content_engine.py", line 72, in makeContent
self.stepDict[currentStep]()
File "/Users/liams/Documents/shortgpt/shortGPT/engine/reddit_short_engine.py", line 57, in _prepareCustomAssets
imageEditingEngine.renderImage(
File "/Users/liams/Documents/shortgpt/shortGPT/editing_framework/editing_engine.py", line 98, in renderImage
engine.generate_image(self.schema, outputPath, logger=logger)

Any ideas?
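The traceback says `editing_engine.py` passes `logger=logger` into a `generate_image()` that does not accept it, which usually indicates a stale local copy where the two files are from different versions; pulling the latest code is the first thing to try. A minimal sketch (simplified signatures, not the real classes) of how an optional `logger` keyword avoids the TypeError while keeping older callers working:

```python
# Simplified stand-in for CoreEditingEngine -- the real class lives in
# shortGPT/editing_framework/core_editing_engine.py with a richer signature.
class CoreEditingEngine:
    # Before: def generate_image(self, schema, output_path): ...
    # Accepting an optional logger keeps both old and new call sites valid.
    def generate_image(self, schema, output_path, logger=None):
        if logger:
            logger(f"rendering image to {output_path}")
        return output_path

engine = CoreEditingEngine()
messages = []
engine.generate_image({}, "out.png", logger=messages.append)  # new-style call
engine.generate_image({}, "out2.png")                         # old-style call still works
```

Defaulting the new parameter to `None` means neither side of the version skew crashes.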

I see this: Expecting value: line 1 column 1 (char 0)

After installing it on Windows, when I enter my API key in the config and click save,

I see this error: Expecting value: line 1 column 1 (char 0)
I tried 3 browsers, but got the same error.

Even on the first tab, every option I choose gives the same error.
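`Expecting value: line 1 column 1 (char 0)` is `json.loads()` being handed an empty or non-JSON string, typically an empty body or an HTML error page from a failed API call rather than a browser problem. A hedged sketch of guarding the parse so the real cause surfaces (the function name is illustrative, not a ShortGPT API):

```python
import json

def parse_api_response(body: str) -> dict:
    """Parse an API response body, turning the opaque JSONDecodeError
    into a message that points at the actual failure."""
    if not body.strip():
        raise ValueError("API returned an empty body; check your API key and network")
    try:
        return json.loads(body)
    except json.JSONDecodeError:
        # Show the start of the body -- often an HTML error page.
        raise ValueError(f"API returned non-JSON: {body[:200]!r}")
```

With a guard like this, a bad key or blocked request reports itself instead of crashing deep inside the config UI.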

๐Ÿ› [Bug]: MoviePy error: failed to read the duration of file %s.\n

What happened?

Follow the example to run

Step 9 _editAndRenderShort
Error File "/home/ecs-user/shortgpt/gui/video_automation_ui.py", line 95, in respond
video_path = makeVideo(script, language.value, isVertical, progress=progress)
File "/home/ecs-user/shortgpt/gui/video_automation_ui.py", line 46, in makeVideo
for step_num, step_info in shortEngine.makeContent():
File "/home/ecs-user/shortgpt/shortGPT/engine/abstract_content_engine.py", line 72, in makeContent
self.stepDict[currentStep]()
File "/home/ecs-user/shortgpt/shortGPT/engine/content_video_engine.py", line 139, in _editAndRenderShort
videoEditor.renderVideo(outputPath, logger=self.logger)
File "/home/ecs-user/shortgpt/shortGPT/editing_framework/editing_engine.py", line 95, in renderVideo
engine.generate_video(self.schema, outputPath, logger=logger)
File "/home/ecs-user/shortgpt/shortGPT/editing_framework/core_editing_engine.py", line 53, in generate_video
clip = self.process_video_asset(asset)
File "/home/ecs-user/shortgpt/shortGPT/editing_framework/core_editing_engine.py", line 200, in process_video_asset
clip = VideoFileClip(**params)
File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos
raise IOError(("MoviePy error: failed to read the duration of file %s.\n"

What type of browser are you seeing the problem on?

Chrome

What type of Operating System are you seeing the problem on?

Linux

Python Version

python 3.10

Application Version

V0.0.15

Expected Behavior

Get a generated video

Error Message

Error   File "/home/ecs-user/shortgpt/gui/video_automation_ui.py", line 95, in respond
    video_path = makeVideo(script, language.value, isVertical, progress=progress)
  File "/home/ecs-user/shortgpt/gui/video_automation_ui.py", line 46, in makeVideo
    for step_num, step_info in shortEngine.makeContent():
  File "/home/ecs-user/shortgpt/shortGPT/engine/abstract_content_engine.py", line 72, in makeContent
    self.stepDict[currentStep]()
  File "/home/ecs-user/shortgpt/shortGPT/engine/content_video_engine.py", line 139, in _editAndRenderShort
    videoEditor.renderVideo(outputPath, logger=self.logger)
  File "/home/ecs-user/shortgpt/shortGPT/editing_framework/editing_engine.py", line 95, in renderVideo
    engine.generate_video(self.schema, outputPath, logger=logger)
  File "/home/ecs-user/shortgpt/shortGPT/editing_framework/core_editing_engine.py", line 53, in generate_video
    clip = self.process_video_asset(asset)
  File "/home/ecs-user/shortgpt/shortGPT/editing_framework/core_editing_engine.py", line 200, in process_video_asset
    clip = VideoFileClip(**params)
  File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
    self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
  File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
    infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
  File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos
    raise IOError(("MoviePy error: failed to read the duration of file %s.\n"

^CKeyboard interruption in main thread... closing server.

Code to produce this issue.

No response

Screenshots/Assets/Relevant links

No response
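MoviePy raises "failed to read the duration of file" when ffmpeg cannot parse the asset at all (empty or truncated download, expired URL, non-media file). A sketch of a pre-flight check that probes the file before `VideoFileClip` is constructed, assuming `ffprobe` is on `PATH`; this gives a clearer error at the point of failure:

```python
import json
import shutil
import subprocess

def ffprobe_duration_cmd(path: str) -> list:
    # -of json gives machine-readable output; format=duration is the one field we need.
    return ["ffprobe", "-v", "error", "-show_entries", "format=duration",
            "-of", "json", path]

def probe_duration(path: str) -> float:
    """Return the media duration in seconds, or raise with a readable message."""
    if shutil.which("ffprobe") is None:
        raise RuntimeError("ffprobe not found; install ffmpeg first")
    out = subprocess.run(ffprobe_duration_cmd(path), capture_output=True, text=True)
    if out.returncode != 0:
        raise IOError(f"{path} is not a readable video/audio file: {out.stderr.strip()}")
    return float(json.loads(out.stdout)["format"]["duration"])
```

Calling `probe_duration()` on each downloaded asset before the render step would fail fast on bad downloads instead of nine steps in.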

Pulling the latest code and running it, I get this error: KeyError: 'voices'

Traceback (most recent call last):
File "/home/ecs-user/ShortGPT/runShortGPT.py", line 1, in <module>
from gui.gui import ShortGptUI
File "/home/ecs-user/ShortGPT/gui/gui.py", line 3, in <module>
from gui.asset_library_ui import AssetLibrary
File "/home/ecs-user/ShortGPT/gui/asset_library_ui.py", line 5, in <module>
from gui.asset_components import (background_music_checkbox,
File "/home/ecs-user/ShortGPT/gui/asset_components.py", line 44, in <module>
voiceChoice = gr.Radio(getElevenlabsVoices(), label="Elevenlabs voice", value="Antoni", interactive=True)
File "/home/ecs-user/ShortGPT/gui/asset_components.py", line 26, in getElevenlabsVoices
voices = list(reversed(ElevenLabsAPI(api_key).get_voices().keys()))
File "/home/ecs-user/ShortGPT/shortGPT/api_utils/eleven_api.py", line 17, in get_voices
self.voices = {voice['name']: voice['voice_id'] for voice in response.json()['voices']}
KeyError: 'voices'
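`KeyError: 'voices'` means the ElevenLabs API answered, but with an error payload (typically for a bad or missing API key) instead of a voice list, and the dict comprehension in `eleven_api.py` assumes success. A hedged sketch of a defensive parse, mirroring the shape of that comprehension (the function name is illustrative):

```python
def parse_voices(payload: dict) -> dict:
    """Map voice name -> voice_id, or explain why the API refused."""
    if "voices" not in payload:
        # ElevenLabs error responses commonly carry a 'detail' object (assumption).
        detail = payload.get("detail") or payload
        raise RuntimeError(
            f"ElevenLabs API did not return voices (bad or missing API key?): {detail}"
        )
    return {voice["name"]: voice["voice_id"] for voice in payload["voices"]}
```

With this guard, an invalid key produces a message naming the cause instead of a KeyError at import time.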

Language Support

Hello, brother. Can it support short video generation in Chinese?

Unable to run

The first time I tried to run it, it showed me the error below:

 python .\runShortGPT.py
Traceback (most recent call last):
  File "C:\Users\guizh\.virtualenvs\shortgpt-py310\lib\site-packages\requests\models.py", line 971, in json
    return complexjson.loads(self.text, **kwargs)
  File "C:\Users\guizh\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\guizh\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\guizh\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 2 column 1 (char 1)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\guizh\Documents\ShortGPT\runShortGPT.py", line 1, in <module>
    from gui.gui import run_app
  File "C:\Users\guizh\Documents\ShortGPT\gui\gui.py", line 2, in <module>
    from gui.config_ui import create_config_ui
  File "C:\Users\guizh\Documents\ShortGPT\gui\config_ui.py", line 5, in <module>
    from gui.asset_components import voiceChoice, voiceChoiceTranslation, getElevenlabsVoices
  File "C:\Users\guizh\Documents\ShortGPT\gui\asset_components.py", line 25, in <module>
    voiceChoice = gr.Radio(getElevenlabsVoices(), label="Elevenlabs voice", value="Antoni", interactive=True)
  File "C:\Users\guizh\Documents\ShortGPT\gui\asset_components.py", line 20, in getElevenlabsVoices
    voices = list(reversed(getVoices(api_key).keys()))
  File "C:\Users\guizh\Documents\ShortGPT\shortGPT\api_utils\eleven_api.py", line 12, in getVoices
    for a in response.json()['voices']:
  File "C:\Users\guizh\.virtualenvs\shortgpt-py310\lib\site-packages\requests\models.py", line 975, in json
    raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 2 column 1 (char 1)

โœจ [Feature Request / Suggestion]: Transforming EditingEngine => EditingTrack, CoreEditingAPI => EditingEngine, and implementing a dynamic editing track for each contentEngine, with ability to save to / pull from database.

Suggestion / Feature Request

Transforming EditingEngine => EditingTrack, CoreEditingAPI => EditingEngine, and implementing a dynamic editing track for each contentEngine, with ability to save to / pull from database.

Why would this be useful?

This will be useful to create more dynamic editing flows, and to be able to resume an automated editing process from memory.
Also will help a lot in defining editing step inheritance between content engines.

Screenshots/Assets/Relevant links

No response

โ“ [Question]: Parameter:video_duraation is null

Your Question

ERROR: [generic] None: Unable to download webpage: <urlopen error [SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)> (caused by URLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))
Failed getting duration from the following video/audio url/path using yt_dlp. ERROR: [generic] None: Unable to download webpage: <urlopen error [SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)> (caused by URLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))
Error executing command using ffprobe.
The url/path https://rr3---sn-ipoxu-umbk.googlevideo.com/videoplayback?expire=1690303487&ei=n6e_ZLXqI9Ti2roP1uCGqAQ&ip=122.118.113.163&id=o-APMQx_6sqjVLZ7nVsL5O9aMrwU4IP0RWCD_osD2GKMpk&itag=335&source=youtube&requiressl=yes&mh=ww&mm=31%2C29&mn=sn-ipoxu-umbk%2Csn-un57enez&ms=au%2Crdu&mv=m&mvi=3&pl=20&initcwndbps=1058750&vprv=1&svpuc=1&mime=video%2Fwebm&gir=yes&clen=463772648&dur=545.611&lmt=1629834060752995&mt=1690281456&fvip=1&keepalive=yes&fexp=24007246%2C51000012%2C51000022&beids=24350017&c=IOS&txp=5511222&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cvprv%2Csvpuc%2Cmime%2Cgir%2Cclen%2Cdur%2Clmt&sig=AOq0QJ8wRAIgIjOF0-ENyQWUgq_IFMMleqOzt0vnUNXJ8igDz7E55PwCICtv472UOCFm1quOBHt2QMmHAMo2D9nj1LDaurl4S0wH&lsparams=mh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=AG3C_xAwRgIhAKWtvcOreQmadDAtLm33voGyqfg487yrE6ZkJMC9V6dKAiEAjL3I8VWtJQX8Bai2HN6drjFqDDVzzg_OwasK-wSW0dI%3D does not point to a video/ audio. Impossible to extract its duration
Step 9 _prepareBackgroundAssets
{'voiceover_audio_url': '.editing_assets/facts_shorts_assets/34b661c45b294808a1f2ac1e/audio_voice.wav', 'video_duration': None, 'background_video_url': 'https://rr3---sn-ipoxu-umbk.googlevideo.com/videoplayback?expire=1690303487&ei=n6e_ZLXqI9Ti2roP1uCGqAQ&ip=122.118.113.163&id=o-APMQx_6sqjVLZ7nVsL5O9aMrwU4IP0RWCD_osD2GKMpk&itag=335&source=youtube&requiressl=yes&mh=ww&mm=31%2C29&mn=sn-ipoxu-umbk%2Csn-un57enez&ms=au%2Crdu&mv=m&mvi=3&pl=20&initcwndbps=1058750&vprv=1&svpuc=1&mime=video%2Fwebm&gir=yes&clen=463772648&dur=545.611&lmt=1629834060752995&mt=1690281456&fvip=1&keepalive=yes&fexp=24007246%2C51000012%2C51000022&beids=24350017&c=IOS&txp=5511222&sparams=expire%2Cei%2Cip%2Cid%2Citag%2Csource%2Crequiressl%2Cvprv%2Csvpuc%2Cmime%2Cgir%2Cclen%2Cdur%2Clmt&sig=AOq0QJ8wRAIgIjOF0-ENyQWUgq_IFMMleqOzt0vnUNXJ8igDz7E55PwCICtv472UOCFm1quOBHt2QMmHAMo2D9nj1LDaurl4S0wH&lsparams=mh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps&lsig=AG3C_xAwRgIhAKWtvcOreQmadDAtLm33voGyqfg487yrE6ZkJMC9V6dKAiEAjL3I8VWtJQX8Bai2HN6drjFqDDVzzg_OwasK-wSW0dI%3D', 'music_url': 'public/Music joakim karud dreams.wav'}
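The `background_video_url` above is a signed googlevideo link with an `expire=` timestamp; once it expires (or the SSL handshake fails, as in the error shown), duration probing returns `None` and every later step inherits a broken asset. A sketch of failing fast instead of carrying `None` forward; the helper names are hypothetical, not ShortGPT APIs:

```python
import time
from urllib.parse import urlparse, parse_qs

def url_expired(url, now=None):
    """True if a signed googlevideo URL's `expire` query param is in the past."""
    expire = parse_qs(urlparse(url).query).get("expire")
    if not expire:
        return False  # no expiry info, assume still valid
    return (now or time.time()) > float(expire[0])

def require_duration(assets):
    """Refuse to continue the pipeline with a missing duration."""
    duration = assets.get("video_duration")
    if duration is None:
        raise ValueError("video_duration is None; re-fetch the background "
                         "video URL (the signed link may have expired)")
    return duration
```

Checking `url_expired()` before the edit step and re-resolving the video through yt-dlp when it returns `True` would avoid handing a dead link to ffprobe.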

โœจ [Feature Request / Suggestion]: GPT3 error: Rate limit

Suggestion / Feature Request

ERROR | Exception : GPT3 error: Rate limit reached for default-gpt-3.5-turbo in organization on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.

The free ChatGPT API has rate limits. How can I slow down the request rate in the code?

Why would this be useful?

No response

Screenshots/Assets/Relevant links

No response
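The error message itself suggests the remedy for the 3-requests/min free tier: wait and retry. A sketch of exponential backoff around the completion call; `call` stands in for the OpenAI request made in `shortGPT/gpt/gpt_utils.py`, and the delays are assumptions tuned to the "try again in 20s" hint:

```python
import random
import time

def with_backoff(call, max_tries=5, base_delay=20.0, sleep=time.sleep):
    """Retry `call` on rate-limit errors, waiting 20s, 40s, 80s, ... plus jitter."""
    for attempt in range(max_tries):
        try:
            return call()
        except Exception as e:
            # Only retry rate-limit errors, and give up on the final attempt.
            if "Rate limit" not in str(e) or attempt == max_tries - 1:
                raise
            sleep(base_delay * 2 ** attempt + random.uniform(0, 1))
```

The injectable `sleep` parameter keeps the helper testable; in production you would leave it as `time.sleep`.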

ERROR | Runtimeerror : Numpy is not available

Help me fix this error

Traceback Info:
File "D:\PYTHON\ShortGPT-stable\0.0.2\gui\short_automation_ui.py", line 69, in create_short
for step_num, step_info in shortEngine.makeShort():
File "D:\PYTHON\ShortGPT-stable\0.0.2\shortGPT\engine\abstract_content_engine.py", line 72, in makeShort
self.stepDict[currentStep]()
File "D:\PYTHON\ShortGPT-stable\0.0.2\shortGPT\engine\content_short_engine.py", line 70, in _timeCaptions
whisper_analysis = audio_utils.audioToText(self._db_audio_path)
File "D:\PYTHON\ShortGPT-stable\0.0.2\shortGPT\audio\audio_utils.py", line 60, in audioToText
gen = transcribe_timestamped(WHISPER_MODEL, filename, verbose=False, fp16=False)
File "C:\Users\ANHPC\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\whisper_timestamped\transcribe.py", line 230, in transcribe_timestamped
alignment_heads=get_alignment_heads(model) if word_alignement_most_top_layers is None else None,
File "C:\Users\ANHPC\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\whisper_timestamped\transcribe.py", line 2032, in get_alignment_heads
return _get_alignment_heads(model_name, num_layers, num_heads)
File "C:\Users\ANHPC\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\whisper_timestamped\transcribe.py", line 2037, in _get_alignment_heads
mask = torch.from_numpy(array).reshape(num_layers, num_heads)
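"Numpy is not available" from `torch.from_numpy` usually means NumPy is missing from the environment torch runs in, or torch was built against an incompatible NumPy (an assumption; reinstalling a matching pair, e.g. `pip install "numpy<2"` followed by reinstalling torch, is the common fix). A small stdlib-only diagnostic that reports what is actually installed so you know what to reinstall:

```python
from importlib import metadata

def package_version(name):
    """Installed version string, or None if the package is absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None

# Inspect the two packages involved in torch.from_numpy failures.
report = {pkg: package_version(pkg) for pkg in ("numpy", "torch")}
print(report)
```

Run this in the same interpreter that runs ShortGPT; a `None` entry or a NumPy 2.x version against an older torch wheel points at the mismatch.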

ERROR | Exception : GPT3 error: module 'openai' has no attribute 'ChatCompletion'

How to fix this?

D:\ShortGPT-stable\ShortGPT-stable>python runShortGPT.py
Running on local URL: http://127.0.0.1:31415

To create a public link, set share=True in launch().
Step 1 _generateScript
Error communicating with OpenAI: module 'openai' has no attribute 'ChatCompletion'
Error communicating with OpenAI: module 'openai' has no attribute 'ChatCompletion'
Error communicating with OpenAI: module 'openai' has no attribute 'ChatCompletion'
Error communicating with OpenAI: module 'openai' has no attribute 'ChatCompletion'
Error File "D:\ShortGPT-stable\ShortGPT-stable\gui\short_automation_ui.py", line 61, in create_short
for step_num, step_info in shortEngine.makeContent():
File "D:\ShortGPT-stable\ShortGPT-stable\shortGPT\engine\abstract_content_engine.py", line 72, in makeContent
self.stepDict[currentStep]()
File "D:\ShortGPT-stable\ShortGPT-stable\shortGPT\engine\reddit_short_engine.py", line 38, in _generateScript
self._db_script, _ = self.__getRealisticStory(max_tries=1)
File "D:\ShortGPT-stable\ShortGPT-stable\shortGPT\engine\reddit_short_engine.py", line 25, in __getRealisticStory
new_script = self.__generateRandomStory()
File "D:\ShortGPT-stable\ShortGPT-stable\shortGPT\engine\reddit_short_engine.py", line 16, in __generateRandomStory
question = reddit_gpt.getInterestingRedditQuestion()
File "D:\ShortGPT-stable\ShortGPT-stable\shortGPT\gpt\reddit_gpt.py", line 17, in getInterestingRedditQuestion
return gpt_utils.gpt3Turbo_completion(chat_prompt=chat, system=system, temp=1.08)
File "D:\ShortGPT-stable\ShortGPT-stable\shortGPT\gpt\gpt_utils.py", line 88, in gpt3Turbo_completion
raise Exception("GPT3 error: %s" % oops)
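`module 'openai' has no attribute 'ChatCompletion'` comes from the openai 1.x release removing the module-level `openai.ChatCompletion` class that 0.x-era ShortGPT code expects; the quick fix is pinning the old library with `pip install "openai<1.0"` (on 1.x you would instead call `client.chat.completions.create(...)` on an `OpenAI()` client). A tiny pure-string helper, no network call, that makes the version split explicit:

```python
def uses_legacy_chat_api(openai_version: str) -> bool:
    """True for openai 0.x, where openai.ChatCompletion.create(...) still exists."""
    return int(openai_version.split(".")[0]) < 1
```

Dispatching on `openai.__version__` with a helper like this lets one codebase survive both library generations.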

Split the content engines into multiple classes that each take care of a specific action, such as script generation, image/video search, etc.

The current content engine is very general, and takes care of the following steps of content creation:
1- script generation
2- audio voiceover creation
3- audio speeding
4- caption timing from voiceover (with whisper)
5- If images are wanted: generate image search terms and time them
6- Scrape the images
7- Choose background music
8- Choose background video
9- Prepare custom assets if any
10- Edit and render video or short
11- Add metadata

These steps work well for a few types of content, but they restrict other content types whose editing needs are far more varied (imagine playing clips from other audio sources in the middle of the video, or inserting a highlight, etc.). We should therefore separate the content engine's tasks into many smaller modules that handle these steps independently, with higher-level abstractions binding them together.
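The decomposition above can be sketched as a pipeline of single-responsibility steps; all class names here are hypothetical, and the step bodies are placeholders rather than the real script/TTS logic:

```python
from abc import ABC, abstractmethod

class EngineStep(ABC):
    """One step of content creation, e.g. script generation or voiceover."""
    @abstractmethod
    def run(self, state: dict) -> dict:
        """Take the accumulated content state and return it augmented."""

class ScriptGeneration(EngineStep):
    def run(self, state):
        state["script"] = f"script about {state['topic']}"  # placeholder for LLM call
        return state

class VoiceoverSynthesis(EngineStep):
    def run(self, state):
        state["audio_path"] = "voiceover.wav"  # placeholder for TTS output
        return state

class ContentPipeline:
    """A content engine becomes an ordered list of interchangeable steps."""
    def __init__(self, steps):
        self.steps = steps
    def make_content(self, state):
        for step in self.steps:
            state = step.run(state)
        return state

pipeline = ContentPipeline([ScriptGeneration(), VoiceoverSynthesis()])
result = pipeline.make_content({"topic": "space"})
```

Because each engine is just a step list, a Reddit short and a facts video can share the script and voiceover modules while swapping in their own asset-sourcing steps, and persisting `state` after each step would give the resume-from-memory behavior the feature request above asks for.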
