Comments (6)
This line of your output suggests this isn't being run in a virtual env:

[2023-04-19 06:03:19,041] [registry.py:249] Loading registry from /opt/homebrew/lib/python3.9/site-packages/evals/registry/completion_fns

That means the eval scripts can't find the benchmarking.pth file that you should have created at /Users/ejohnson/src/Auto-GPT-Benchmarks/venv/lib/python3.9/site-packages/benchmarking.pth after creating and activating a Python virtual environment named venv. The contents of that benchmarking.pth file should be /Users/ejohnson/src/Auto-GPT-Benchmarks/ for your system. That should resolve this error.
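A minimal sketch of that setup, assuming a Python 3.9-style venv layout (the activation command depends on your shell, and the checkout path below is a placeholder for your own clone location):

```shell
# Create and activate a virtual environment named "venv"
python3 -m venv venv
. venv/bin/activate   # fish users: source venv/bin/activate.fish

# Locate the venv's site-packages directory (varies with Python version)
SITE_PACKAGES=$(python3 -c "import sysconfig; print(sysconfig.get_paths()['purelib'])")

# Write the repo root into benchmarking.pth so it lands on sys.path
# (placeholder path -- substitute your own Auto-GPT-Benchmarks checkout)
echo "$HOME/src/Auto-GPT-Benchmarks/" > "$SITE_PACKAGES/benchmarking.pth"

# Sanity check: the active interpreter should now live in the venv,
# not under /opt/homebrew
python3 -c "import sys; print(sys.prefix)"
```

The `.pth` mechanism is standard: any line in a `.pth` file inside `site-packages` is appended to `sys.path` at interpreter startup.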
from auto-gpt-benchmarks.
Hmm, that is exactly what the contents of benchmarking.pth happen to be, but it's throwing the same error.
Even in a virtual environment? Can you post your error output again while running this in a venv?
I had this same error and was able to resolve it which is what makes me think this isn't a bug.
After running source venv/bin/activate.fish, my fish prompt changes to show that I am in fact in a venv, but I still receive the same output. I am running on an M1 MacBook Pro, but I really don't think that should make much of a difference.
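One way to narrow this down (a diagnostic sketch, not from the original thread): the log line loading from /opt/homebrew suggests the eval command may still be resolving to the Homebrew interpreter even while the venv is active, so it is worth confirming which Python actually runs:

```shell
# Diagnostic sketch: confirm the active interpreter is the venv's
which python3
python3 -c "import sys; print(sys.executable)"
python3 -c "import sys; print(sys.prefix)"

# List the site-packages entries actually on sys.path; the venv's
# directory (not /opt/homebrew/...) should appear here if the venv
# interpreter is the one running
python3 -c "import sys; print([p for p in sys.path if 'site-packages' in p])"
```

If the executable path still points under /opt/homebrew, the eval script is likely installed globally and pinned to that interpreter, so the venv's `.pth` file never gets read.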
@desojo and @samuelbutler: these should be resolved now.
Thanks for taking an initial crack at it when it was still really, really unstable.
I am closing this, as this is no longer how we run benchmarks.