Comments (3)
Does the config below work for you?
- day_of_week: Monday-Friday
- hour: 8,17
- minute: 0
- second: 0
from scrapydweb.
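For reference, a minimal sketch of how a config like the one above maps onto an APScheduler cron trigger, which is what the Timer Tasks feature is built on. The `crawl` function and the `BlockingScheduler` setup here are illustrative assumptions, not scrapydweb's actual code:

```python
from apscheduler.schedulers.blocking import BlockingScheduler

def crawl():
    # Placeholder for the scraping job that scrapydweb would kick off.
    print("firing a scraping job")

scheduler = BlockingScheduler()
# Equivalent of the config above: run at 08:00:00 and 17:00:00, Monday through Friday.
scheduler.add_job(crawl, 'cron', day_of_week='mon-fri', hour='8,17', minute=0, second=0)
scheduler.start()
```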
Hey friend! Your config works; it turns out I had misunderstood how it behaves at the time. But I believe I'm not the only one who misunderstood it, so I have a suggestion: model the repeat options on the repeating tasks in the GTD app Things 3:
Repeat conditions:
- After completion
  - Run again n (minutes/hours/days/weeks/months) later
  - End condition (never / due date / after n runs)
- On a schedule
  - Run every (day/week/month) at (12:00, 13:30)
  - End condition (never / due date / after n runs)
I believe this design would cover more scenarios and be easier to understand and use.
Yours, BiarFordlander
from scrapydweb.
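A hedged sketch of what the proposed "after completion" repeat mode could look like on top of APScheduler: instead of a fixed cron schedule, the next run is registered as a one-off `date` job only once the current run has finished. The `crawl` placeholder and the one-hour delay are assumptions for illustration, not part of scrapydweb:

```python
from datetime import datetime, timedelta
from apscheduler.schedulers.background import BackgroundScheduler

scheduler = BackgroundScheduler()

def crawl():
    print("firing a scraping job")  # placeholder for the actual scraping job

def run_and_reschedule(delay=timedelta(hours=1)):
    crawl()
    # "After completion" semantics: only once this run has finished,
    # schedule the next run `delay` from now as a one-off 'date' job.
    scheduler.add_job(run_and_reschedule, 'date',
                      run_date=datetime.now() + delay,
                      kwargs={'delay': delay})

scheduler.start()
scheduler.add_job(run_and_reschedule, 'date', run_date=datetime.now())
```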
Thanks for your suggestion.
- The Timer Tasks feature is implemented based on APScheduler.
- Whether a scraping job is finished or not is unknown to APScheduler, see #30.
- The `max_runs` option is dropped in APScheduler v3.0.0, so currently you cannot specify how many times a task should be executed.
- Setting `*/n` means firing every n values, starting from the minimum; see the docs of APScheduler.
- Both `start_date` and `end_date` are available and optional when adding a task.
Also, check out the links in the HELP section of the Timer Tasks page to get more info.
from scrapydweb.
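To illustrate the last two points, a small sketch of adding a cron job with a `*/n` field plus the optional `start_date` and `end_date` bounds. The job function and the dates are arbitrary examples, not taken from scrapydweb:

```python
from apscheduler.schedulers.blocking import BlockingScheduler

def crawl():
    print("firing a scraping job")  # placeholder for the actual job

scheduler = BlockingScheduler()
# '*/10' on the minute field fires every 10 values starting from the minimum,
# i.e. at minutes 0, 10, 20, 30, 40 and 50 of every hour.
# start_date and end_date bound when the trigger is active; both are optional.
scheduler.add_job(crawl, 'cron', minute='*/10',
                  start_date='2019-01-01 00:00:00',
                  end_date='2019-12-31 23:59:59')
scheduler.start()
```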
Related Issues (20)
- Not able to see stats section of the job HOT 1
- scrapydweb failed to run on python 3.8 HOT 5
- Startup error: sqlite3.OperationalError: no such table: metadata HOT 13
- Is it possible to run multiple spider at the same time in a tmux machine with scrapydweb automatically
- items Oops! Something went wrong. HOT 1
- scrapydweb fresh install won't run HOT 8
- APScheduler 3.10 causing 500 errors HOT 2
- How to Change Timezone of scrapydweb? HOT 3
- Which scrapyd image you use? HOT 2
- Clean install on clean Ubuntu VM. Whatever I do it is not working. HOT 2
- Docker compose scrapdweb with scrapyd the log url use docker name
- Processes dont stop after finishing HOT 1
- v1.4.1 submit cron job can't run HOT 1
- ('Connection aborted.', timeout('timed out',))
- ERROR: Package 'scrapydweb' requires a different Python: HOT 4
- Error while installing scrapydweb HOT 2
- spiders are closed but showing as running/warning in the tasks page
- Can the web UI be displayed in Chinese? HOT 1
- DATABASE_URL configured with a domain:port MySQL address fails to connect HOT 2
- Jobs are killed without a clear reason HOT 6