Comments (2)
Hi @Truth-In-Lies, thanks for your comprehensive analysis, which is very valuable. Here is how we view the contamination problem: since it is intrinsically challenging to formulate a precise definition of contamination (https://arxiv.org/abs/2311.04850), and since, as you pointed out, paraphrasing complicates detection, our decontamination pipeline follows existing good practice, i.e., the StarCoder approach of removing exact matches only.
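The exact-match filtering described above can be sketched roughly as follows. This is a minimal illustration, not the actual Magicoder or StarCoder pipeline; the function name and whitespace normalization are assumptions for the example.

```python
def decontaminate(train_samples, benchmark_snippets):
    """Drop training samples that contain an exact benchmark snippet.

    Minimal sketch of exact-match decontamination: a training sample
    is removed if any benchmark snippet appears verbatim inside it
    (after collapsing whitespace so formatting differences do not
    hide an otherwise exact match).
    """
    def normalize(text):
        # Collapse all runs of whitespace to single spaces.
        return " ".join(text.split())

    normalized_bench = [normalize(s) for s in benchmark_snippets]
    clean = []
    for sample in train_samples:
        normalized = normalize(sample)
        if not any(snippet in normalized for snippet in normalized_bench):
            clean.append(sample)
    return clean
```

Note that, as discussed in the thread, exact matching by construction misses paraphrased or lightly rewritten benchmark problems.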
To truly understand model performance, our approach is to evaluate the models on as many benchmarks as possible. I also want to mention two additional, independent evaluation efforts: LiveCodeBench, a contamination-free benchmark of fresh coding tasks, and EvoEval, which evolves existing benchmarks into new coding problems. Both showed Magicoder's strong performance.
To sum up, we believe comprehensive evaluation is the way to resolve the contamination issue, though, as you said, it is possible the model overfits to one specific benchmark suite (e.g., HumanEval) due to paraphrasing.
Hope this answers your question!
from magicoder.
Thanks so much for your answer and for your amazing work. I think HumanEval (and MBPP) have been widely leaked on the web, which makes them less useful as benchmarks. I've noticed that many models tend to generate problems similar to those in HumanEval. Maybe it's time for new benchmarks to take their place. Thanks again for your time and expertise!
Related Issues (20)
- So many impressive experiments! Are there any experiments with NEFTune? HOT 1
- The correctness of solution HOT 1
- Used Dilated Attention instead of Vanilla Attention in the Llama model and fine-tuned the model
- How do you set the 'stop_words' parameter
- Are the training loss and validation loss recorded? HOT 4
- Data collection and generation HOT 1
- Got same problem that model only return lots of '\n' HOT 5
- Achieved close performance of MagicoderS by finetuning only with `evol-codealpaca-v1`. HOT 8
- A scaling law of instruction-code-data would be very interesting... HOT 3
- catastrophic forgetting problem HOT 1
- The templates used in reproducing the eval results: why adding the instruction again after "### Response: "? HOT 1
- Reproducing magicoder-S-DS-6.7B results on 8 A40 machines
- Is it normal to take more than one hour to get the humanevalplus results?
- HuggingFace Playground has failed
- Quantised Finetuning on 22GB*4 GPUs
- A question of the generated data from the starcoderdata HOT 2
- Code for the evaluations on APPS.
- Inquiry about Paper Details of Magicoder
- Question about the different replication result