Comments (13)
- Adding Scrapyd servers via the web UI is not an urgent feature, as they don't change often. Managing Scrapyd servers with HTTPS enabled will be supported in a future release.
- There is no plan yet to support multiple languages.
- Currently, any network request to a Scrapyd server has a timeout of 60 seconds and no retrying, though Timer Tasks retry once anyway. In general, there seems to be no need to customize these settings.
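The timeout-and-retry behavior described above can be sketched roughly as follows (a minimal illustration, not scrapydweb's actual implementation; the helper name `with_retry` is hypothetical):

```python
def with_retry(func, retries=1):
    """Call func(), retrying up to `retries` extra times on failure.

    Mirrors the behavior described above: ordinary requests get no
    retry (retries=0), while Timer Tasks retry once (retries=1).
    """
    for attempt in range(retries + 1):
        try:
            return func()
        except Exception:
            if attempt == retries:
                raise


# Usage sketch (assuming the `requests` library and the 60-second timeout):
# with_retry(lambda: requests.get(url, timeout=60), retries=1)
```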
from scrapydweb.
After the installation finished, running the scrapydweb command raised this error. How can I fix it?
Traceback (most recent call last):
File "D:\fenxihuanjing\lib\logging\config.py", line 389, in resolve
self.importer(used)
ModuleNotFoundError: No module named 'flask.logging.wsgi_errors_stream'; 'flask.logging' is not a package
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "D:\fenxihuanjing\lib\logging\config.py", line 562, in configure
handler = self.configure_handler(handlers[name])
File "D:\fenxihuanjing\lib\logging\config.py", line 733, in configure_handler
kwargs = {k: config[k] for k in config if valid_ident(k)}
File "D:\fenxihuanjing\lib\logging\config.py", line 733, in <dictcomp>
kwargs = {k: config[k] for k in config if valid_ident(k)}
File "D:\fenxihuanjing\lib\logging\config.py", line 324, in __getitem__
return self.convert_with_key(key, value)
File "D:\fenxihuanjing\lib\logging\config.py", line 290, in convert_with_key
result = self.configurator.convert(value)
File "D:\fenxihuanjing\lib\logging\config.py", line 461, in convert
value = converter(suffix)
File "D:\fenxihuanjing\lib\logging\config.py", line 400, in ext_convert
return self.resolve(value)
File "D:\fenxihuanjing\lib\logging\config.py", line 396, in resolve
raise v
File "D:\fenxihuanjing\lib\logging\config.py", line 389, in resolve
self.importer(used)
ValueError: Cannot resolve 'flask.logging.wsgi_errors_stream': No module named 'flask.logging.wsgi_errors_stream'; 'flask.logging' is not a package
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "run.py", line 11, in <module>
from scrapydweb import create_app
File "D:\fenxihuanjing\lib\site-packages\scrapydweb\__init__.py", line 36, in <module>
'handlers': ['wsgi']
File "D:\fenxihuanjing\lib\logging\config.py", line 799, in dictConfig
dictConfigClass(config).configure()
File "D:\fenxihuanjing\lib\logging\config.py", line 570, in configure
'%r' % name) from e
ValueError: Unable to configure handler 'wsgi'
@cgleiyucun
Try pip install --upgrade flask
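For background on why upgrading helps: the failing handler is configured through `logging.config.dictConfig`, which resolves `ext://` strings by importing the dotted path at runtime. scrapydweb's config references `ext://flask.logging.wsgi_errors_stream`, which only resolves on Flask versions where `flask.logging` exposes that attribute. A self-contained sketch of the mechanism, with `sys.stderr` standing in for the Flask stream:

```python
import logging
import logging.config

# dictConfig turns "ext://sys.stderr" into the actual sys.stderr object
# by importing the dotted path; if the path cannot be resolved (as with
# the Flask stream on an outdated Flask), configuration fails with
# "ValueError: Unable to configure handler 'wsgi'".
logging.config.dictConfig({
    'version': 1,
    'handlers': {
        'wsgi': {
            'class': 'logging.StreamHandler',
            'stream': 'ext://sys.stderr',
        },
    },
    'root': {'handlers': ['wsgi'], 'level': 'INFO'},
})

logging.getLogger(__name__).info('wsgi handler configured')
```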
@my8100
Thanks a lot!
The Items page under the Files module shows the error below. How can I fix it?
Oops! Something went wrong.
http://127.0.0.1:6800/items/
status_code: 404
No Such Resource
No such child resource.
Tip: Click the link above to make sure your Scrapyd server is accessible.
See https://scrapyd.readthedocs.io/en/latest/config.html#items-dir
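The 404 usually means items_dir is not set, so Scrapyd never serves item feeds over HTTP. A minimal scrapyd.conf fragment enabling it, per the docs linked above:

```ini
[scrapyd]
# When non-empty, scraped items are stored in this directory and its
# contents are served at http://<host>:6800/items/ ; note that this
# overrides the project's own feed settings.
items_dir = items
```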
thanks
@my8100
So I can just leave this page alone? And once it's enabled, it overrides Scrapy's items settings?
Yes, you can also hide the menu by setting SHOW_SCRAPYD_ITEMS to False.
(See scrapydweb/scrapydweb/default_settings.py, lines 156 to 160 at commit 8104386.)
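For example, in your scrapydweb settings file (the exact file name is shown in scrapydweb's startup output and varies by version):

```python
# Hide the Items page from the scrapydweb menu.
SHOW_SCRAPYD_ITEMS = False
```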
thanks
@my8100 Have you finished this feature?
I love this project, and I also need this feature.
The tip links to a hard-coded 127.0.0.1:6800.
All of the blue links (for example, "Monitor and control all of your Scrapyd servers.") have the same issue.
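Until the links respect the selected server, the fix would presumably build each URL from the configured SCRAPYD_SERVERS entry instead of a hard-coded address. A sketch (the helper `items_link` is hypothetical, not scrapydweb code; it assumes the documented entry format `[user:pass@]host:port[#group]`):

```python
def items_link(server_entry):
    """Build the /items/ URL from a SCRAPYD_SERVERS-style entry
    instead of hard-coding 127.0.0.1:6800 (hypothetical helper)."""
    entry = server_entry.split('#')[0]   # drop the optional "#group" suffix
    addr = entry.rsplit('@', 1)[-1]      # drop the optional "user:pass@" auth part
    return 'http://%s/items/' % addr


print(items_link('192.168.0.5:6800'))              # http://192.168.0.5:6800/items/
print(items_link('user:pass@example.com:6800#g'))  # http://example.com:6800/items/
```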
Related Issues (20)
- Not able to see stats section of the job HOT 1
- scrapydweb failed to run on python 3.8 HOT 5
- Startup error: sqlite3.OperationalError: no such table: metadata HOT 13
- Is it possible to run multiple spiders at the same time in a tmux machine with scrapydweb automatically
- items Oops! Something went wrong. HOT 1
- scrapydweb fresh install won't run HOT 8
- APScheduler 3.10 causing 500 errors HOT 2
- How to Change Timezone of scrapydweb? HOT 3
- Which scrapyd image you use? HOT 2
- Clean install on clean Ubuntu VM. Whatever I do it is not working. HOT 2
- Docker compose scrapydweb with scrapyd: the log URL uses the docker name
- Processes don't stop after finishing HOT 1
- v1.4.1 submit cron job can't run HOT 1
- ('Connection aborted.', timeout('timed out',))
- ERROR: Package 'scrapydweb' requires a different Python: HOT 4
- Error while installing scrapydweb HOT 2
- spiders are closed but showing as running/warning in the tasks page
- Can the web UI be used in Chinese? HOT 1
- DATABASE_URL configured with a domain:port MySQL connection fails HOT 2
- Jobs are killed without a clear reason HOT 6