Comments (5)
No, there was no other app running on port 5000.
Indeed, in my setup everything runs under docker-compose, with each microservice in its own container.
Thanks for the settings, but I already tried that yesterday and the problem was indeed the same.
Never mind, everything is working now: scrapyd is properly configured, thanks!
I still need a few things, though: a log parser as another microservice, a reverse proxy in front of everything for auth / automatic SSL, etc.
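The reverse-proxy piece mentioned above could look something like this; a minimal nginx sketch (the hostname, certificate paths, and upstream service name are assumptions, not the poster's actual config), terminating TLS and adding basic auth in front of scrapydweb:

```nginx
server {
    listen 443 ssl;
    server_name scrapydweb.example.com;  # assumed hostname

    # assumed Let's Encrypt paths
    ssl_certificate     /etc/letsencrypt/live/scrapydweb.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/scrapydweb.example.com/privkey.pem;

    location / {
        auth_basic           "scrapydweb";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created with htpasswd
        proxy_pass http://scrapydweb:5000;          # compose service name
        proxy_set_header Host $host;
    }
}
```

With auth handled at the proxy, scrapydweb itself can keep running with --disable_auth inside the private network.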
from scrapydweb.
Well, I've also tried SpiderKeeper; its GUI works, but it wouldn't send the egg to scrapyd.
I fixed the issue by forcing scrapyd to use the exact same Python version as my scrapers, and SpiderKeeper now works completely.
As a result, scrapydweb also works now, no more error 500.
I'd still argue there's a problem in scrapydweb: the GUI should at least come up, even when scrapyd is badly configured.
But everything works now :)
The key is to split this argument when passing it in: --scrapyd_server=scrapyd:6800 should become two separate tokens, --scrapyd_server and scrapyd:6800.
This works for me:
Content of the Dockerfile
FROM python:3.6-jessie
ENV TZ="Europe/Paris"
WORKDIR /app
RUN pip install scrapydweb
RUN cp /usr/local/lib/python3.6/site-packages/scrapydweb/default_settings.py /app/scrapydweb_settings_v7.py
EXPOSE 5000
CMD ["scrapydweb", "--disable_auth", "--disable_logparser", "--scrapyd_server", "IP-OF-YOUR-SCRAPYD-SERVER:6800"]
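Since the thread mentions running everything under docker-compose, the same Dockerfile can be wired up roughly like this; a minimal sketch (service names, the scrapyd image, and ports are assumptions, not the poster's actual file), showing the --scrapyd_server argument split into two tokens as discussed:

```yaml
version: "3"
services:
  scrapyd:
    image: vimagick/scrapyd        # assumed public scrapyd image
    ports:
      - "6800:6800"
  scrapydweb:
    build: .                       # the Dockerfile above
    ports:
      - "5000:5000"
    depends_on:
      - scrapyd
    # note the split tokens: --scrapyd_server scrapyd:6800
    command: ["scrapydweb", "--disable_auth", "--disable_logparser",
              "--scrapyd_server", "scrapyd:6800"]
```

Inside the compose network, scrapydweb can reach scrapyd by its service name, so scrapyd:6800 replaces the hard-coded IP from the Dockerfile's CMD.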
Docker commands
ubuntu@ubuntu:~/docker$ sudo docker build -t scrapydweb:latest .
ubuntu@ubuntu:~/docker$ sudo docker run -d -p 5000:5000 scrapydweb
1da5a344b172f5e2d22f8e34a2ba0733c26e4e87be39c266c3ecc9a34eb41802
ubuntu@ubuntu:~/docker$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1da5a344b172 scrapydweb "scrapydweb --disabl…" 16 seconds ago Up 15 seconds 0.0.0.0:5000->5000/tcp amazing_edison
ubuntu@ubuntu:~/docker$ sudo docker logs 1da
[2019-01-21 09:02:38,892] INFO in werkzeug: * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
[2019-01-21 09:03:31,004] INFO in werkzeug: 172.17.0.1 - - [21/Jan/2019 09:03:31] "GET / HTTP/1.1" 302 -
[2019-01-21 09:03:32,143] INFO in werkzeug: 172.17.0.1 - - [21/Jan/2019 09:03:32] "GET /1/dashboard/ HTTP/1.1" 200 -
[2019-01-21 09:03:32,660] INFO in werkzeug: 172.17.0.1 - - [21/Jan/2019 09:03:32] "GET /static/v110/css/style.css HTTP/1.1" 200 -
So, you were running another app on the same port 5000 when the error 500 was raised?
I guessed you were using docker-compose from the container name 'scrapydweb_1'.
Actually, I was wondering why ScrapydWeb would raise the exception below.
By the time the code reached line 98, it had already fetched the page content from somewhere like 'http://127.0.0.1:6800/jobs', so everything should have been working.
@jdespatis You can also pass in the argument '--verbose' for troubleshooting if needed.
scrapydweb_1 | File "/usr/local/lib/python3.6/site-packages/scrapydweb/jobs/dashboard.py", line 98, in generate_response
scrapydweb_1 | _url_items = re.search(r"href='(.*?)'>", row['items']).group(1)
scrapydweb_1 | AttributeError: 'NoneType' object has no attribute 'group'
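The traceback above can be reproduced in isolation; a minimal sketch (the rows and the helper name extract_items_url are hypothetical, not scrapydweb's actual code): the regex expects an items link in the cell fetched from scrapyd's /jobs page, and re.search returns None when that link is missing, which is exactly what makes .group(1) raise AttributeError.

```python
import re

def extract_items_url(row):
    # Same pattern as dashboard.py line 98; here with a defensive
    # fallback to None instead of calling .group(1) unconditionally.
    m = re.search(r"href='(.*?)'>", row['items'])
    return m.group(1) if m else None

# A row as rendered by a healthy scrapyd /jobs page (shape assumed)
row_with_link = {'items': "<a href='/items/demo/spider/job1.jl'>Items</a>"}
# A row where no items link was rendered, e.g. a misconfigured scrapyd
row_without_link = {'items': ''}

print(extract_items_url(row_with_link))     # /items/demo/spider/job1.jl
print(extract_items_url(row_without_link))  # None
```

This matches the earlier observation in the thread: once scrapyd was configured with the right Python version, the /jobs page rendered the expected links and the 500 errors disappeared.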
Related Issues (20)
- project dependencies package version incompatible HOT 3
- Not able to see stats section of the job HOT 1
- scrapydweb failed to run on python 3.8 HOT 5
- Startup error: sqlite3.OperationalError: no such table: metadata HOT 13
- Is it possible to run multiple spider at the same time in a tmux machine with scrapydweb automatically
- items Oops! Something went wrong. HOT 1
- scrapydweb fresh install won't run HOT 8
- APScheduler 3.10 causing 500 errors HOT 2
- How to Change Timezone of scrapydweb? HOT 3
- Which scrapyd image you use? HOT 2
- Clean install on clean Ubuntu VM. Whatever I do it is not working. HOT 2
- Docker compose scrapydweb with scrapyd: the log URL uses the docker name
- Processes don't stop after finishing HOT 1
- v1.4.1 submit cron job can't run HOT 1
- ('Connection aborted.', timeout('timed out',))
- ERROR: Package 'scrapydweb' requires a different Python: HOT 4
- Error while installing scrapydweb HOT 2
- spiders are closed but showing as running/warning in the tasks page
- Can the web UI be displayed in Chinese? HOT 1
- DATABASE_URL configured with a domain:port MySQL connection fails HOT 2