Comments (12)
Are there any pending jobs in the jobs page after you deleted the timer task?
from scrapydweb.
See step 4 in https://github.com/my8100/files/blob/master/scrapyd-basic-auth/README.md#try-it-out
I have now refreshed the Jobs page; when I click Stop on the task, it still keeps running.
But there are no running jobs in your screenshot.
See step 4 in https://github.com/my8100/files/blob/master/scrapyd-basic-auth/README.md#try-it-out
I created another task; on the Jobs page, both Stop and ForceStop raise this error.
See #7 (comment)
I checked the logs and found no errors. I am crawling with a headless browser; I am not sure whether that is the cause.
It seems so.
#7 (comment)
#7 (comment)
Thanks. After running scrapydweb -v from the command line, jobs can now be stopped normally. One last question: the Items page shows a 404.
Actually, adding the argument '-v' only changes the logging level of scrapydweb.
As for the items page, see https://scrapyd.readthedocs.io/en/stable/config.html#items-dir
BTW, try to communicate in English on GitHub.
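The 404 on the Items page usually means item feed storage is disabled in Scrapyd itself; the linked docs describe the items_dir option. A minimal scrapyd.conf sketch under that assumption (the path is illustrative):

```ini
[scrapyd]
# Enable item feed storage; when items_dir is empty, Scrapyd serves no
# /items/ pages and ScrapydWeb's Items page will 404.
items_dir = /var/lib/scrapyd/items
```

After restarting Scrapyd, finished jobs should expose their item feeds under items/&lt;project&gt;/&lt;spider&gt;/&lt;job&gt;.jl.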
OK, thanks! Is FEED_URI written in the settings.py of the Scrapy project?
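FEED_URI is indeed a regular Scrapy setting that can be placed in the project's settings.py (note it has been deprecated in favor of the FEEDS dict since Scrapy 2.1). A minimal sketch with illustrative values:

```python
# settings.py of the Scrapy project (illustrative path and format).
# %(name)s expands to the spider name, %(time)s to the crawl start time.
FEED_URI = 'file:///tmp/items/%(name)s/%(time)s.jl'
FEED_FORMAT = 'jsonlines'
```

Note that when a job runs under Scrapyd with items_dir configured, Scrapyd supplies the feed URI itself, so an explicit FEED_URI is mainly useful for standalone crawls.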