shreyashankar / gpt3-sandbox
The goal of this project is to enable users to create cool web demos using the newly released OpenAI GPT-3 API with just a few lines of Python.
License: MIT License
I run a fine-tuned model that sometimes takes a while to load. The UI does not reflect that; it just stays "silent".
output in terminal is:
[2022-06-05 11:48:07,665] ERROR in app: Exception on /translate [POST]
(...)
openai.error.RateLimitError: That model is still being loaded. Please try again shortly.
Can the UI be modified to deliver that message to the user?
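One way to do this is to catch the rate-limit error in the /translate handler and return its message to the front end instead of a bare 500. A minimal self-contained sketch (a stand-in RateLimitError class is defined locally; the real app would catch openai.error.RateLimitError around its actual completion call):

```python
class RateLimitError(Exception):
    """Stand-in for openai.error.RateLimitError so this sketch is self-contained."""

def call_model(prompt):
    # Placeholder for the real completion call; raises while the
    # fine-tuned model is still loading.
    raise RateLimitError("That model is still being loaded. Please try again shortly.")

def translate(prompt):
    # In the real app this would be the body of the /translate POST handler.
    try:
        return {"text": call_model(prompt)}, 200
    except RateLimitError as e:
        # Surface the message so the UI can show it instead of staying silent.
        return {"error": str(e)}, 503

body, status = translate("hello")
```

The front end can then display the "error" field and, optionally, retry after a short delay.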
I have followed the setup, but at the step of checking the environment by running python examples/run_latex_app.py, I ran into the following error. I wonder if anyone could help solve it:
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:130:10)
at module.exports (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\util\createHash.js:135:53)
at NormalModule._initBuildHash (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:417:16)
at handleParseError (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:471:10)
at C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:503:5
at C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:358:12
at C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:373:3
at iterateNormalLoaders (C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:214:10)
at iterateNormalLoaders (C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:221:10)
C:\Users\jiz52\gpt3-sandbox\node_modules\react-scripts\scripts\start.js:19
throw err;
^
Error: error:0308010C:digital envelope routines::unsupported
at new Hash (node:internal/crypto/hash:67:19)
at Object.createHash (node:crypto:130:10)
at module.exports (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\util\createHash.js:135:53)
at NormalModule._initBuildHash (C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:417:16)
at C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:452:10
at C:\Users\jiz52\gpt3-sandbox\node_modules\webpack\lib\NormalModule.js:323:13
at C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:367:11
at C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:233:18
at context.callback (C:\Users\jiz52\gpt3-sandbox\node_modules\loader-runner\lib\LoaderRunner.js:111:13)
at C:\Users\jiz52\gpt3-sandbox\node_modules\babel-loader\lib\index.js:59:103 {
opensslErrorStack: [ 'error:03000086:digital envelope routines::initialization error' ],
library: 'digital envelope routines',
reason: 'unsupported',
code: 'ERR_OSSL_EVP_UNSUPPORTED'
}
Node.js v17.1.0
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
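ERR_OSSL_EVP_UNSUPPORTED usually means Node 17+ (built against OpenSSL 3) is rejecting the legacy hash algorithm that webpack 4 uses. A common workaround, assuming your Node build supports the flag, is to enable OpenSSL's legacy provider for the current shell session before starting the dev server:

```shell
# Allow webpack 4's legacy hash algorithm under Node 17+ for this shell session.
export NODE_OPTIONS=--openssl-legacy-provider
echo "NODE_OPTIONS=$NODE_OPTIONS"
```

Alternatively, switching to Node 16 LTS (for example via nvm) avoids the issue entirely.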
Is sharing the API key legal?
Currently, if you want to see how GPT-3 does with more examples, you need to add the examples in the main script and then relaunch the Flask back end. Ideally, users would be able to change the GPT configuration and watch the model improve as they add more priming examples.
One way of doing this would be to add a UI component that sends a POST request to update the underlying GPT model, but this could clutter the UI (not ideal for demos).
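A rough sketch of that idea, with stub GPT and Example classes standing in for this repo's api module (the /add_example route name and payload shape are hypothetical):

```python
class Example:
    # Stub for api.Example: one priming input/output pair.
    def __init__(self, inp, out):
        self.input, self.output = inp, out

class GPT:
    # Stub for api.GPT: keeps the list of priming examples.
    def __init__(self):
        self.examples = []

    def add_example(self, example):
        self.examples.append(example)

gpt = GPT()

def add_example_handler(payload):
    # In Flask this would be @app.route("/add_example", methods=["POST"])
    # with payload = request.json.
    gpt.add_example(Example(payload["input"], payload["output"]))
    return {"num_examples": len(gpt.examples)}
```

Subsequent queries would then be primed with the updated example list without restarting the back end.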
The volume and fuel of ordinary turbo jet engines but several times the speed
Write a static demo_web_app that instead takes in a GPT class and a Flask app.
Tasks:
I wonder whether we can apply this pipeline to a GPT-2 model without fine-tuning or training: just give GPT-2 a prompt and have it return the result. Thanks!
Hi, thanks for developing such a great tool.
Just wondering if you could add an example for training GPT-3 for Named Entity Recognition (NER) tasks?
I'm not sure how I can use the add_example function to specify the answer to a question for NER tasks:
...
gpt.add_example(Example('Tom was born in 1942', '[(Tom, Name), (1942, Year)]'))
...
Thanks!
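One approach, since gpt.add_example just takes input/output string pairs, is to keep the NER output format consistent across all priming examples. A sketch of how the pairs and the resulting priming prompt might look (the second sentence, its labels, and the prompt layout are made up for illustration):

```python
# Each pair maps a sentence to a consistent "(entity, label)" list rendered as text.
ner_examples = [
    ("Tom was born in 1942", "[(Tom, Name), (1942, Year)]"),
    ("Alice moved to Paris in 2001", "[(Alice, Name), (Paris, Place), (2001, Year)]"),
]

def build_prompt(examples, query):
    # Mimics how priming examples plus the new input would be sent to the model.
    lines = [f"input: {inp}\noutput: {out}" for inp, out in examples]
    lines.append(f"input: {query}\noutput:")
    return "\n\n".join(lines)
```

The model then tends to continue the pattern, emitting the same (entity, label) format for the new sentence.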
Hi!
I'm trying to run any of the examples in the examples
folder, but I'm stuck at the same screen that shows the default text for the UI (I uploaded a screenshot of what I got from examples/run_latex_app.py). Also, inputting anything there and trying to submit does nothing.
I have followed all the Setup steps in the README and, from what I gather, the text in the UI should be different from what I see. Do you have any idea what I could have done wrong?
P.S.: my OpenAI console shows that my key hasn't been used. And I exported the config file as indicated in README.
I use Windows 10 and PowerShell. My problem is that I can't set the OPENAI_CONFIG variable; whenever I try to run an example .py file, it says "The environment variable 'OPENAI_CONFIG' is not set".
I tried $env:OPENAI_CONFIG= and set OPENAI_CONFIG=, but neither works. I also tried different file path formats to point to the openai.cfg file that I created, but had no luck.
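In PowerShell the assignment needs the value (in quotes) on the same line, and it only persists for that session, so the example has to be run from the same window (the path below is a placeholder for wherever you saved openai.cfg):

```powershell
# Set for the current PowerShell session, then run the example from the same window.
$env:OPENAI_CONFIG = "C:\path\to\openai.cfg"
python examples/run_latex_app.py
```

To persist it across sessions, setx OPENAI_CONFIG "C:\path\to\openai.cfg" should work, but it only takes effect in newly opened terminals.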
Unable to get the GPT-3 API working with the code samples
Browser:
Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
Terminal:
127.0.0.1 - - [03/Aug/2022 19:13:33] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [03/Aug/2022 19:13:36] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [03/Aug/2022 19:13:41] "GET / HTTP/1.1" 404 -
127.0.0.1 - - [03/Aug/2022 19:13:42] "GET /favicon.ico HTTP/1.1" 404
Description:
I followed the Readme file's instructions, but I couldn't replicate the example and always ended up with this. Can anyone help me with this?
Hi,
I'm trying to get the sandbox running on an AWS Ubuntu dev server, and I am running into a problem because this server already uses ports 5000, 5001, 5007, and 5008 for other Flask/gunicorn applications. I can't quite figure out how to make it use a different port, say 5009. The key variables seem to be the proxy value set in package.json and the port and host values optionally passed to app.run() in demo_web_app.py. Which of them needs to be set to the new port value?
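Both values have to agree: the Flask back end's port (passed to app.run in demo_web_app.py) and the React dev server's "proxy" entry in package.json, which is how UI requests reach the back end. A sketch assuming the hypothetical port 5009:

```python
PORT = 5009  # hypothetical free port on the server

def proxy_url(port):
    # This string belongs in package.json: "proxy": "http://localhost:5009"
    return f"http://localhost:{port}"

# In demo_web_app.py the back end would then start with:
#   app.run(host="0.0.0.0", port=PORT)
print(proxy_url(PORT))
```

If only one of the two is changed, the UI loads but every API call 404s, because the dev server proxies to a port nothing is listening on.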
Whenever I run from api import GPT, Example, set_openai_key, I always get this error:
Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'api'
even though I have already installed everything and followed the instructions properly.
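A common cause is running the script from a directory where the repo root (the folder containing the api package) isn't on Python's import path. One way to check and fix this, sketched under the assumption that the script lives one directory below the repo root (as the examples do):

```python
import os
import sys

# Resolve the repo root relative to this script and put it first on sys.path.
repo_root = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if repo_root not in sys.path:
    sys.path.insert(0, repo_root)

# from api import GPT, Example, set_openai_key  # should now resolve
```

Running the example from the repo root (python examples/run_latex_app.py) rather than from inside examples/ usually has the same effect.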
python -m venv $ENV_NAME
should be
python3 -m venv $ENV_NAME
since the venv module only ships with Python 3.
I'm trying to generate code, but when it generates something, the output doesn't contain any line breaks.
Running pip install -r api/requirements.txt
produces the following error:
ERROR: Cannot install requests==2.24.0 and urllib3==1.26.5 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested urllib3==1.26.5
requests 2.24.0 depends on urllib3!=1.25.0, !=1.25.1, <1.26 and >=1.21.1
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
Python 3.10.1, pip 21.3.1, MacOS Monterey 12.2
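Since requests 2.24.0 requires urllib3 <1.26, one fix is to relax the urllib3 pin in api/requirements.txt so pip can pick a compatible version (a sketch; the bounds come straight from pip's message above):

```
requests==2.24.0
urllib3>=1.21.1,<1.26
```

After editing the file, rerun pip install -r api/requirements.txt.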
Sir, can you please help me with this?
raise RuntimeError(
RuntimeError: The environment variable 'OPENAI_CONFIG' is not set and as such configuration could not be loaded. Set this variable and make it point to a configuration file
But I live in Iran, and I have no hope of building it myself. I can provide it for free to anyone who can do something in this field.
Hi,
Firstly, thanks for this cool repo. I am trying to build a question-and-answer app using GPT-3. How do I save a document file, or feed the example with, say, a 500-word text, so that users can ask questions and the API will search the file or given text and provide the answer? The endpoint is given in the GPT-3 example, but I am finding it tough to integrate with this repo's examples. Any help is appreciated. Below is the given code:
doc_list = ["sample text to be searched."]
response = openai.Answer.create(
    search_model="davinci",
    model="curie",
    question="What is sample text",
    documents=doc_list,
    examples_context="In 2017, U.S. life expectancy was 78.6 years.",
    examples=[["What is human life expectancy in the United States?", "78 years."]],
    max_tokens=10,
    stop=["\n", "<|endoftext|>"],
)
print(response)
Thanks,
Raji
Like, every single time I want to get something from the model, do I need to give it examples?
Or is it possible to create a pretext/profile, or save the examples somewhere, so that I do not have to send them to the model every time I make a request?
When making a request, would the examples I prime it with eat up tokens as well?
Just a question, couldn't think of a better forum to ask. Please close if you don't find it appropriate.
Thanks.
Hi guys, I have been trying to run the example program, but I am getting the error in the title. I tried cloning the repo into a different directory, but I'm still getting the same error.
Right now it's done via a .cfg file and by setting the environment variable LATEX_TRANSLATOR_CONFIG (the name is an artifact from the original version of this app). Figure out if there's a more intuitive way of doing this.