soallpeach / onboarding
The repository to register in the soallpeach challenge
Home Page: https://soallpeach.run
License: MIT License
When I use vegeta locally, it doesn't use much CPU (I know about the cpu=1 option) and my web server doesn't use any CPU either.
On the other hand, when I try wrk, for example, with 1 thread, both wrk and my web server use some CPU!
I think replacing it with a better tool could lead to different results.
The wrk command can be something like:
wrk -t1 -c500 -d45s --latency -s script.lua http://localhost:8080
And the content of script.lua would be something like:
wrk.method = "POST"
wrk.body = "8"
I would suggest having composite numbers in the input.txt file.
Also, for the output, it would be better to include a reference to the input number.
Currently you rely only on the order of the numbers, which doesn't seem safe! In other words, what if I accidentally print "1" as the result of 11 instead of "1" as the result of 13? The final output looks correct, but you cannot see the mistake!
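As a minimal sketch of what that output format could look like (the file names are assumptions, and is_prime here is just a stand-in, not anyone's actual solution):

def is_prime(n):
    # stand-in trial-division check; the contestant's real logic goes here
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# write "<input> <result>" pairs so a swapped or missing line is immediately visible
with open("input.txt") as f, open("output.txt", "w") as out:
    for line in f:
        line = line.strip()
        if not line:
            continue
        n = int(line)
        out.write(f"{n} {1 if is_prime(n) else 0}\n")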
It would be useful to show the SHA of, or a link to, the commit that was built and run.
Hi
I changed my code slightly since the last round; these are the times recorded on my machine:
python3 original.py 10.34s user 1.98s system 99% cpu 12.340 total
python3 original.py 10.11s user 2.25s system 99% cpu 12.360 total
python3 original.py 10.35s user 2.12s system 99% cpu 12.475 total
python3 prime_checker.py 5.75s user 0.42s system 93% cpu 6.591 total
python3 prime_checker.py 5.77s user 0.39s system 93% cpu 6.572 total
python3 prime_checker.py 5.79s user 0.43s system 93% cpu 6.649 total
So based on those, my code should run faster, but the result was the opposite!
Last round's time: 4.xx seconds
This round: 160.xx seconds :/
It's so weird.
If the latest commit id of the branch is the same as the previous one, don't run it again.
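A rough sketch of how the runner could do that (the .last_run_sha state file and the workspace path are my own assumptions, not part of the existing runner):

import pathlib
import subprocess

repo = pathlib.Path("workspace")          # assumed checkout location
state = repo / ".last_run_sha"            # hypothetical state file

head = subprocess.run(
    ["git", "-C", str(repo), "rev-parse", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()

if state.exists() and state.read_text().strip() == head:
    print(f"{head} already benchmarked, skipping")
else:
    # ... run the benchmark as usual ...
    state.write_text(head)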
https://soallpeach.run/scores/17223
Traceback (most recent call last):
File \"workspace/metrics.py\", line 30, in <module>
metrics_json = json.loads(metrics_file.read())
File \"/opt/hostedtoolcache/Python/3.8.2/x64/lib/python3.8/json/__init__.py\", line 357, in loads
return _default_decoder.decode(s)
File \"/opt/hostedtoolcache/Python/3.8.2/x64/lib/python3.8/json/decoder.py\", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File \"/opt/hostedtoolcache/Python/3.8.2/x64/lib/python3.8/json/decoder.py\", line 355, in raw_decode
raise JSONDecodeError(\"Expecting value\", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
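This error means the metrics file was empty (or missing) when json.loads was called. A possible guard, assuming a workspace/metrics.json path (the actual file name isn't shown in the traceback):

import json
import sys

try:
    with open("workspace/metrics.json") as metrics_file:   # assumed path
        raw = metrics_file.read()
except FileNotFoundError:
    sys.exit("metrics file is missing - the run probably failed before reporting")

if not raw.strip():
    sys.exit("metrics file is empty - the run probably produced no output")

metrics_json = json.loads(raw)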
Hi.
First of all, I have to thank you for your amazing idea.
As we can see, the test file has 5 non-prime numbers at the start, and all the other numbers are prime!
So I can just print 0 for the first 5 records and 1 for all the others!
As a result, I don't think the test files are fair.
I think it would be better to have more non-prime numbers in random places.
It would bring us closer to reality.
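A small sketch of how such an input file could be generated (the range and counts are arbitrary choices on my part, not the challenge's real parameters):

import random

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

candidates = range(2, 10_000)
primes = [n for n in candidates if is_prime(n)]
composites = [n for n in candidates if not is_prime(n)]

# mix primes and composites and shuffle so answers can't be guessed by position
sample = random.sample(primes, 50) + random.sample(composites, 50)
random.shuffle(sample)

with open("input.txt", "w") as f:
    f.write("\n".join(map(str, sample)) + "\n")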
If your Dockerfile has a CMD directive instead of ENTRYPOINT, it won't work with the current way of running. We basically overwrite any CMD directive, so if somebody has both, we will break that too.
Sample:
https://soallpeach.run/scores/3009
https://github.com/nnourani/soallpeach/blob/master/prime/Dockerfile
Hi,
I've done a bunch of tests and it looks like the elapsed time reported by run.py does not represent the real runtime of the algorithm.
I've built a couple of the contestants' containers and tested them locally, and the results don't match up with the repo's Actions results. This is what I ran:
perf stat docker run -it --rm -v /tmp/ch1/input.txt:/input.txt prime:CONTESTANT /input.txt > /dev/null
Please let me know if you need me to share more info.
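For reference, this is roughly how the same end-to-end timing could be reproduced from Python to compare against what run.py reports (image tag and mount path copied from the command above; this is only an illustrative sketch, not the official runner code):

import subprocess
import time

start = time.perf_counter()
subprocess.run(
    ["docker", "run", "--rm",
     "-v", "/tmp/ch1/input.txt:/input.txt",
     "prime:CONTESTANT", "/input.txt"],
    stdout=subprocess.DEVNULL,
    check=True,
)
print(f"wall-clock time: {time.perf_counter() - start:.3f}s")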
Users should be able to find the source of the challenge implementations easily. For this purpose, we can add a link to the root directory of each implementation.
On every new round, the table stops showing the previous results, so for a few minutes the table is incomplete.
Hi,
In run_in_file_program.sh, the sh command is required. I've built my container FROM scratch, so it won't run with this.
Is sh a requirement for running the challenges? If so, we should make sure it's reflected in the docs.
Cheers,