flask-apscheduler's Issues

how to unschedule a job in flask-apscheduler using id?

I want to unschedule a running job from inside the job itself, but I am not able to figure out a way to do this.
Something like this I have used before with plain APScheduler: scheduler.unschedule_job(schedule_task.job)

I tried using: scheduler.delete_job('job1')

I wrote some basic code (just an example) where I am trying to stop the job once it reaches an id limit.
code: flaskAPSched.py
from flask import Flask
from flask_apscheduler import APScheduler
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.sql import func

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://postgres:postgres@localhost:5432/scheduler'
db = SQLAlchemy(app)


class Test(db.Model):
    __tablename__ = 'testing'
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    name = db.Column(db.String(200))

    def __init__(self, name):
        self.name = name


class Config(object):
    JOBS = [
        {
            'id': 'job1',            # must match the id passed to delete_job() below
            'func': 'flaskAPSched:job1',
            'args': ('Shrey',),      # trailing comma: ('Shrey') would be a plain string
            'trigger': 'interval',
            'seconds': 5
        }
    ]


db.create_all()
db.session.commit()


def job1(a):
    s = Test(name=a)
    db.session.add(s)
    db.session.commit()
    x = db.session.query(func.max(Test.id).label("max_id")).one().max_id
    print(x)
    if x == 5:
        print("limit reached")
        scheduler.delete_job('job1')


if __name__ == '__main__':
    app.config.from_object(Config())
    scheduler = APScheduler()
    scheduler.init_app(app)
    scheduler.start()
    app.run()

What I am trying to achieve: after inserting 5 entries, the scheduler should stop executing the job. But nothing happens; the job just keeps executing and never stops.

Is this the right way of doing it, or is there another way to unschedule a job?
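Independent of the scheduling question, there is a subtle Python gotcha in configs like this: a one-element args tuple needs a trailing comma, otherwise the job receives a plain string rather than a tuple of arguments. A quick demonstration:

```python
# A one-element tuple needs a trailing comma: ('Shrey') is just a
# parenthesized string, so the job would not get the argument you expect.
args_wrong = ('Shrey')
args_right = ('Shrey',)

print(type(args_wrong).__name__)   # str
print(type(args_right).__name__)   # tuple
```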

uwsgi deploy questions

Local development introduces no problem, but when deployed with uwsgi it cannot work properly. How can I solve this?

flask-apscheduler doesn't work with uwsgi

Hello,
I'm using flask-apscheduler with uwsgi on AWS EC2.

When I run it on my Mac it works well, but when I load it onto my server it does not work: no response, no action.

Here is my config for Flask and uwsgi.

Please check.

I'm using Python 2.7.

[uwsgi]
base = /home/xx/app/api_server
module = application:application
virtualenv = %(base)/venv
master = true
processes = 5

socket = %(base)/udc.sock
chmod-socket = 660
uid=xx
gid=xx
vacuum = true
logto = %(base)/log/uwsgi.log
die-on-term = true
touch-reload = %(base)/reload


in config.py

JOBS = [
{
'id': 'job1',
'func': 'api.v1.mod_batch.batch:suggest_push',
'trigger': {
'type': 'cron',
'day_of_week': 'mon,wed',
'hour': 8,
'minute': 15
}
},
{
'id': 'job2',
'func': 'api.v1.mod_batch.batch:update_match_condtion_date',
'trigger': {
'type': 'cron',
'hour': 14,
'minute': 47
}
},
]
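A likely culprit for both uwsgi reports above (an assumption about these particular setups, not verified against them): by default uwsgi does not enable Python thread support, so APScheduler's background threads never run even though the app itself responds. A commonly suggested fix is to add enable-threads to the [uwsgi] section:

```ini
[uwsgi]
; APScheduler runs jobs on background threads; without this, uwsgi
; never starts Python threads, so scheduled jobs silently do nothing.
enable-threads = true
```

Note also that with master = true and multiple processes, each worker would start its own scheduler; see the "Multiple instances of scheduler" issue further down this page.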

Flask main thread hangs, does not handle requests

#scheduler.py
JOBS = [ { 'id': 'just test', 'func': '__builtin__:print', 'args': ('test .......',), 'trigger': 'interval', 'seconds': 5, } ]
#runserver.py
from flask import Flask
from flask_apscheduler import APScheduler
app = Flask('myapp')
app.config.from_pyfile('./scheduler.py')
apscheduler = APScheduler(app=app)
apscheduler.start()
app.run()

##thread info
###:~$ sudo ps H -C python -o 'pid tid cmd'
PID TID CMD
19690 19690 python runserver.py
19690 19699 python runserver.py
19690 19700 python runserver.py
19690 19701 python runserver.py

The APScheduler jobs execute, but Flask cannot handle HTTP requests.

Port 5000 is open and listening, but when I curl localhost:5000, it blocks forever.

Multiple jobs on one scheduler causing Greenlet failed with KeyError

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/gevent/greenlet.py", line 375, in _notify_links
    link(self)
  File "/usr/local/lib/python2.7/site-packages/gevent/threading.py", line 22, in _cleanup
    __threading__._active.pop(id(g))
KeyError: 4476187664
(<function _cleanup at 0x10a7a5848>, <Greenlet at 0x10acd3410>) failed with KeyError

I have a main job, updater_job, configured to update the DB and the clients with new data.
At a certain moment I start another job, downloader_job, using scheduler.scheduler.add_job() to download a number of files (a separate job so that it runs in the background, independently of updater_job). I call scheduler.scheduler.add_job() again to start a job, stat_updater, which does some work with the downloader class used in downloader_job.
If I comment out the line that adds the stat_updater job, my program runs fine; with it, I receive the error above.

updater_job: interval 5s
downloader_job: trigger=None, next_run_time=datetime.datetime.now() (I just need to run this once)
stat_updater: interval 1s

Where to find Config parameter documentation?

Hello,
First of all, thank you for such a useful library. I have googled for the flask-apscheduler config parameters. For example:
JOBS = [ { 'id': 'job1', 'func': 'jobs:job1', 'args': (1, 2), 'trigger': 'interval', 'seconds': 10 } ]
I would like documentation for all available parameters and their possible values.

Right way to enable "one at a time" job execution

I want to ensure that another job thread is not started if one is still running.
This is useful for debugging, so that jobs don't pile up if you hit a breakpoint.

I was considering following the example and using:

    SCHEDULER_EXECUTORS = {
        'default': {'type': 'threadpool', 'max_workers': 1}
    }

But I wasn't sure if that was the best option.

I also saw this in the example, but I'm not clear on what exactly it does:

    SCHEDULER_JOB_DEFAULTS = {
        'coalesce': False,
        'max_instances': 3
    }
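For the "one at a time" goal, the job-level settings are usually a better fit than shrinking the executor: in APScheduler, max_instances caps how many copies of the same job may run concurrently, and coalesce controls whether several missed runs are rolled into a single run when the scheduler catches up. A sketch of the defaults that match this goal:

```python
SCHEDULER_JOB_DEFAULTS = {
    # never start a second copy of a job while one is still running
    'max_instances': 1,
    # if runs were missed (e.g. while sitting at a breakpoint),
    # fire once instead of replaying every missed run
    'coalesce': True,
}
```

A single-worker threadpool also works, but it serializes all jobs against each other, not just multiple instances of the same job.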

add_job() Function

Apologies if this is an obvious question, but in looking through the documentation, all of the examples load jobs through the config. Does flask-apscheduler support add/remove job functions similar to APScheduler?

Here is my configuration:

SCHEDULER_JOBSTORES = {'default': SQLAlchemyJobStore(url='sqlite:///job_scheduler.db')}
SCHEDULER_EXECUTORS = {'default': {'type': 'threadpool', 'max_workers': 20}}
SCHEDULER_JOB_DEFAULTS = {'coalesce': False,'max_instances': 3}

When I've tried to run the add_job() function:

scheduler.add_job(self.job_run, 'interval', id=job_id, args=[self, job_id], seconds=15)

it is throwing the following error

TypeError: add_job() got multiple values for keyword argument 'id'.

Thank you in advance and happy to provide more context if that would be helpful.
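The error is consistent with flask-apscheduler's wrapper taking id as its first positional parameter (unlike plain APScheduler, whose first positional is func): self.job_run binds positionally to id, and then id=job_id arrives again as a keyword. If that reading is right, passing everything by keyword avoids the collision. A self-contained sketch with a stub mirroring that assumed signature:

```python
# Stub mirroring the (assumed) flask-apscheduler signature
# add_job(id, func, **kwargs), to show why the original call failed.
def add_job(id, func, **kwargs):
    return (id, func, kwargs)

def job_run():
    pass

# Original form: job_run binds positionally to `id`, then id='job1'
# arrives again as a keyword -> "got multiple values for argument 'id'".
try:
    add_job(job_run, 'interval', id='job1', seconds=15)
except TypeError as e:
    print(e)

# Keyword form avoids the collision:
result = add_job(id='job1', func=job_run, trigger='interval', seconds=15)
```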

Run job and get data from DB

Hi @viniciuschiele Thanks for your flask-apscheduler, it makes my life easier :D

Btw, I have an issue while running a scheduled job. The job itself works perfectly until I try to access my db model; then it throws this error:

RuntimeError: application not registered on db instance and no application bound to current context

any solution how to use the db instance and app context to the job views?

Thanks

RuntimeError: working outside of application context

With Flask, I use a Mongo database, and my scheduler job uses this Mongo connection, but I get the error:

RuntimeError: working outside of application context

When I use call function, I use

with app.test_request_context():
    functionxxx()

But that is not possible here.
Do you know how to fix this problem?
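Both of the context errors in the two issues above usually have the same fix: push an application context (not a request context) inside the job, since APScheduler runs jobs on its own threads, outside any Flask context. A minimal sketch (query_the_db is a hypothetical stand-in for the actual Mongo/SQLAlchemy access):

```python
from flask import Flask

app = Flask(__name__)

def scheduled_job():
    # Pushing an app context makes current_app and app-bound extensions
    # (Flask-SQLAlchemy, Flask-PyMongo, ...) usable inside the job.
    with app.app_context():
        query_the_db()  # hypothetical: your db access goes here
```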

CSRF attack possibility?

Doesn't a REST API around APScheduler come with the risk of CSRF attacks, considering that someone could add a job by having you click on a link which sends an HTTP POST request to add a job? https://github.com/viniciuschiele/flask-apscheduler/blob/master/flask_apscheduler/views.py#L35-L46
I'd allow access to flask-apscheduler only from localhost, but it might not be sufficient (see: https://lwn.net/Articles/703485/) if you just click a bad link which knows about your flask-apscheduler service and adds a job to it. If you don't run a web browser on your production server you're fine, because then you don't click on any links, but maybe it should still be considered?

A workaround could be to add jobs via a configuration file (so you are secured by linux file permissions) and then send a REST API request to reload jobs from the configuration file?

Maybe also adding authentication to HTTP requests could improve security?

exception when call delete_all_jobs method

Hi Vinicius,

I tried to use it in my project and encountered an exception when calling the delete_all_jobs method, but it works when I call delete_job separately. Below are the source and the exception. Can you give some advice?

@schedule.route('/delete/<name>')
def deletejob(name):
    logger.info('Delete the schedule job for %s' %name)
    scheduler.delete_job(name)
    return redirect(url_for('schedule.joblist'))


@schedule.route('/deleteall')
def deleteall():
    scheduler.delete_all_jobs()
    logger.info("Delete all of the cronjob")
    flash('all jobs removed')
    return redirect(url_for('schedule.joblist'))

Exception:

TypeError
TypeError: _remove() takes at least 2 arguments (1 given)

Logging improperly configured?

I'm getting this message when my code has a syntax error, instead of a proper error being output to the console.

No handlers could be found for logger "apscheduler.executors.default"

I'm not sure if this is on your end, or APScheduler's end.

flask-context

Hi, @viniciuschiele:
I read the example flask_context.py. I have some questions about it.

  1. Is the app stored in the database, and do we then get the app from the database through db.app?
  2. If I don't want to use a database, how can I get the app context in the thread?

Thank you!

flask-apscheduler scheduled job only logs the first time of execution

Hi Vinicius,

I configured a test cron job and logging in Flask. The job has logging inside it. I observed that the job only logs its first execution to the file; all subsequent log lines are missing.

Here's my code:

# module: mymodule
import logging

logger = logging.getLogger('myapp')

def job(a, b):
    print("Start test job")
    logger.info("Start test job")

# module: main script
import logging
from logging.handlers import RotatingFileHandler

from flask import Flask
from flask_apscheduler import APScheduler

class ScheduledJobConfigs(object):
    JOBS = [
        {
            'id': 'test',
            'func': 'mymodule:job',
            'args': (0, 1),
            'trigger': 'cron',
            'second': '*/5'
        },
    ]

    SCHEDULER_API_ENABLED = True

app = Flask('myapp')

if __name__ == '__main__':
    # Setup logging
    handler = RotatingFileHandler('myapp.log', maxBytes=10000, backupCount=1)
    handler.setLevel(logging.INFO)
    handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s '
        '[in %(module)s.%(filename)s:%(lineno)d]'))
    app.logger.addHandler(handler)
    app.logger.setLevel(logging.INFO)

    # Setup APScheduler
    app.config.from_object(ScheduledJobConfigs())
    scheduler = APScheduler()
    scheduler.api_enabled = True
    scheduler.init_app(app)
    scheduler.start()

    app.run()

Since 'job' is configured to run every 5 seconds, I expect to see "Start test job" in both the console and myapp.log every 5 seconds. In reality, I see a new "Start test job" in the console every 5 seconds, but it shows up only once in myapp.log.

This doesn't seem specific to cron-triggered jobs; I tested an interval-triggered job and it has the same problem. Is this a bug, or did I not configure logging correctly for APScheduler?

Thanks,
Bowen

AttributeError: 'NoneType' object has no attribute 'encode'

The scheduled task completes but this error is logged.

Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/__init__.py", line 851, in emit
    msg = self.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 724, in format
    return fmt.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 464, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 328, in getMessage
    msg = msg % self.args
  File "/usr/local/lib/python2.7/dist-packages/apscheduler/job.py", line 248, in __str__
    return repr_escape(self.__unicode__())
  File "/usr/local/lib/python2.7/dist-packages/apscheduler/job.py", line 252, in __unicode__
    status = 'next run at: ' + datetime_repr(self.next_run_time) if self.next_run_time else 'paused'
  File "/usr/local/lib/python2.7/dist-packages/apscheduler/util.py", line 187, in datetime_repr
    return dateval.strftime('%Y-%m-%d %H:%M:%S %Z') if dateval else 'None'
  File "/usr/local/lib/python2.7/dist-packages/dateutil/tz.py", line 41, in inner_func
    return myfunc(*args, **kwargs).encode()
AttributeError: 'NoneType' object has no attribute 'encode'
Logged from file base.py, line 121

Jobs get scheduled twice every once in a while

In the logs I see:

INFO:apscheduler.executors.default:Job "get_stats_trigger_02 (trigger: interval[0:01:00], next run at: 2017-10-26 11:50:40 PDT)" executed successfully
INFO:apscheduler.executors.default:Running job "get_stats_trigger_02 (trigger: interval[0:01:00], next run at: 2017-10-26 11:51:40 PDT)" (scheduled at 2017-10-26 11:50:40.453984-07:00)
INFO:apscheduler.executors.default:Running job "get_stats_trigger_02 (trigger: interval[0:01:00], next run at: 2017-10-26 11:51:40 PDT)" (scheduled at 2017-10-26 11:50:40.453984-07:00)
--------------------------------------------------------------------------------
DEBUG in views [/home/mthmbe/MTHMBE/MTHMBE/views.py:189]:
No queries found
--------------------------------------------------------------------------------
INFO:apscheduler.executors.default:Job "get_stats_trigger_02 (trigger: interval[0:01:00], next run at: 2017-10-26 11:51:40 PDT)" executed successfully
--------------------------------------------------------------------------------
DEBUG in views [/home/mthmbe/MTHMBE/MTHMBE/views.py:189]:
No queries found
--------------------------------------------------------------------------------
INFO:apscheduler.executors.default:Job "get_stats_trigger_02 (trigger: interval[0:01:00], next run at: 2017-10-26 11:51:40 PDT)" executed successfully

but then only one run happens at 2017-10-26 11:51:40 PDT.

This is the JOB:

  JOBS = [
       {
           'id': 'get_stats_trigger_02',
           'func': 'MTHMBE.views:get_stats',
           'args': (False, 60),
           'replace_existing': True,
           'trigger': 'interval',
           'seconds': 60
       }
   ]
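One explanation that matches this pattern (an assumption; the logs alone can't confirm it): two copies of the process each started their own scheduler, e.g. the Werkzeug debug reloader's parent and child processes, or two WSGI workers sharing one jobstore. A quick way to rule the reloader out:

```python
# The Flask debug reloader runs your module twice (parent + child), so a
# scheduler started at import time gets started twice. Disabling the
# reloader leaves a single process and therefore a single scheduler.
app.run(debug=True, use_reloader=False)
```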

Add License

Hi,

Can you add a LICENSE file to the archive?

Best regards

Logging Issues

Essentially a continuation of issue #1; opening a separate issue as the old one is closed. I get the message No handlers could be found for logger "apscheduler.executors.default". This happens when an error is present in the scheduler code (the function that is called by the scheduler).
To test, please run this. It's modified from the one in the examples folder to throw an error.

https://gist.github.com/HackToHell/b8c73a2c6db53e9881ac
Output

 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
No handlers could be found for logger "apscheduler.executors.default"
No handlers could be found for logger "apscheduler.executors.default"

It seems to be an issue with the APScheduler library itself: https://bitbucket.org/agronholm/apscheduler/issues/78/no-handlers-could-be-found-for-logger. The suggested method to catch the errors is to have a basic logging setup (http://stackoverflow.com/q/28724459/787563). I tried that and added it to the code. This resulted in Flask's logging output being killed off.
I added

import logging
logging.basicConfig()

after the imports.

New output - shows the proper traceback. (Notice the messages from Flask that were present in the previous run are gone.)

F:\lalala>python balhbalh.py
ERROR:apscheduler.executors.default:Job "job1 (trigger: interval[0:00:10], next run at: 2016-01-17 14:55:04 IST)" raised an exception
Traceback (most recent call last):
  File "C:\Users\Gowtham\Miniconda\lib\site-packages\apscheduler\executors\base.py", line 112, in run_job
    retval = job.func(*job.args, **job.kwargs)
  File "balhbalh.py", line 21, in job1
    print(a + ' ' + b)
TypeError: unsupported operand type(s) for +: 'int' and 'str'
ERROR:apscheduler.executors.default:Job "job1 (trigger: interval[0:00:10], next run at: 2016-01-17 14:55:05 IST)" raised an exception
Traceback (most recent call last):
  File "C:\Users\Gowtham\Miniconda\lib\site-packages\apscheduler\executors\base.py", line 112, in run_job
    retval = job.func(*job.args, **job.kwargs)
  File "balhbalh.py", line 21, in job1
    print(a + ' ' + b)
TypeError: unsupported operand type(s) for +: 'int' and 'str'

This comment in the same SO thread says the very same thing to the author of the library; however, he says that it should not be the case. Unfortunately, I am unable to debug the issue further due to my lack of experience with Python's logging system (it's pretty weird).

Is there any way to resolve the issue, ideally having the output from Flask and APScheduler in the same place?
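One stdlib-only pattern that keeps both outputs in one place (a general sketch, not specific to flask-apscheduler): attach a single handler to the root logger; named loggers such as apscheduler.* and werkzeug propagate to it by default, so everything lands in the same stream:

```python
import io
import logging

# One handler on the root logger; named loggers propagate to it by default.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter('%(name)s:%(levelname)s:%(message)s'))

root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)

# Messages from both libraries' loggers end up in the same stream.
logging.getLogger('apscheduler.executors.default').error('Job "job1" raised an exception')
logging.getLogger('werkzeug').info('127.0.0.1 - - GET / 200')

print(stream.getvalue())
```

Unlike a bare logging.basicConfig(), this lets you choose the handler and format explicitly, so Flask's own output is not silenced.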

AttributeError: 'module' object has no attribute 'loads'

I have loaded the latest build and am unable to get the advanced.py example to run. Any idea what I may be missing?

$ python foo.py    
Traceback (most recent call last):
  File "foo.py", line 3, in <module>
    from flask_apscheduler import APScheduler
  File "/usr/local/lib/python2.7/dist-packages/flask_apscheduler/__init__.py", line 17, in <module>
    from flask_apscheduler.scheduler import APScheduler
  File "/usr/local/lib/python2.7/dist-packages/flask_apscheduler/scheduler.py", line 22, in <module>
    from . import views
  File "/usr/local/lib/python2.7/dist-packages/flask_apscheduler/views.py", line 18, in <module>
    from .json import jsonify
  File "/usr/local/lib/python2.7/dist-packages/flask_apscheduler/json.py", line 8, in <module>
    loads = json.loads
AttributeError: 'module' object has no attribute 'loads'

About APScheduler & logging

Hello, since APScheduler uses the Python logging module, which needs to be initialized (otherwise it leads to "No handlers could be found" errors), do you have a plan to integrate the Flask logger with APScheduler?

Integrate generated /scheduler endpoint with flask-restplus's swagger UI

We're using flask-restplus to generate a testing UI that allows QA/developers to browse the API interactively. It generates test pages & documentation for all appropriately annotated endpoints.
Currently the /scheduler endpoint is basically "hidden", as it does not expose the same flask-restplus metadata.
If this were added, it would show up in the generated swagger-ui and one could see/browse the endpoint without knowing of its existence and path in advance.

conflicts with an existing job

I have the Config like this:

class Config(object):
    JOBS = [
        {
            'id': 'job9',
            'func': 'sync:job1',
            'args': (1, 2),
            'trigger': 'interval',
            'seconds': 10
        }
    ]

    SCHEDULER_JOBSTORES = {
        'default': SQLAlchemyJobStore(url='sqlite:///'+utils.get_absolute_path("datas/jobstore.db"))
    }

    SCHEDULER_EXECUTORS = {
        'default': {'type': 'threadpool', 'max_workers': 20}
    }

    SCHEDULER_JOB_DEFAULTS = {
        'coalesce': False,
        'max_instances': 3
    }

    SCHEDULER_VIEWS_ENABLED = True

So, when I start the scheduler, it inserts the job into the sqlite store. However, when I restart the scheduler, it tries to insert the job again, which causes an exception:

Traceback (most recent call last):
  File "/Users/zhangzheng/PycharmProjects/quant/mainapp.py", line 73, in <module>
    scheduler.start()
  File "/Users/zhangzheng/pyenv/tensorflow/lib/python2.7/site-packages/flask_apscheduler/scheduler.py", line 81, in start
    self.__scheduler.start()
  File "/Users/zhangzheng/pyenv/tensorflow/lib/python2.7/site-packages/apscheduler/schedulers/background.py", line 33, in start
    BaseScheduler.start(self, *args, **kwargs)
  File "/Users/zhangzheng/pyenv/tensorflow/lib/python2.7/site-packages/apscheduler/schedulers/base.py", line 151, in start
    self._real_add_job(job, jobstore_alias, replace_existing)
  File "/Users/zhangzheng/pyenv/tensorflow/lib/python2.7/site-packages/apscheduler/schedulers/base.py", line 847, in _real_add_job
    store.add_job(job)
  File "/Users/zhangzheng/pyenv/tensorflow/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 94, in add_job
    raise ConflictingIdError(job.id)
apscheduler.jobstores.base.ConflictingIdError: u'Job identifier (job9) conflicts with an existing job'

I don't think this is the right behaviour when starting a scheduler. The right way would be to insert the job only when it doesn't already exist in the store, and skip it otherwise, because when my app restarts I want it to resume the existing jobs.

zhangzheng
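APScheduler's general fix for this is to add jobs with replace_existing=True, so a restart overwrites the stored copy instead of raising ConflictingIdError. Judging by other configs on this page, the flag can go straight into the JOBS entry. A sketch of the config above with the flag added:

```python
JOBS = [
    {
        'id': 'job9',
        'func': 'sync:job1',
        'args': (1, 2),
        'trigger': 'interval',
        'seconds': 10,
        # overwrite the stored job on restart instead of raising
        # ConflictingIdError when the id already exists in the jobstore
        'replace_existing': True,
    }
]
```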

Duplicating Startup Jobs

When I add jobs after Flask is started (through a request to /scheduler), it behaves correctly, but somehow I cannot get apscheduler to run only one instance of a job during startup. It creates two of each job regardless of whether I have max instances set to 1. At this point, I'm not sure whether this is a problem with my configuration or with the code itself, since this phenomenon occurs whether or not I use gevent as either the Flask engine or the scheduler.

# construct flask app object
from flask import Flask, request, session, jsonify, url_for, render_template
app = Flask(import_name=__name__)

# initialize logging and debugging
import sys
import logging
app.logger.addHandler(logging.StreamHandler(sys.stdout))
app.logger.setLevel(logging.DEBUG)
app.config['ASSETS_DEBUG'] = False

# construct the landing page
@app.route('/')
def landing_page():
    return jsonify({'status':'ok'}), 200

# construct the catchall for URLs which do not exist
@app.errorhandler(404)
def page_not_found(error):
    return render_template('404.html'), 404

# construct scheduler object (with gevent processor)
from flask_apscheduler import APScheduler
from apscheduler.schedulers.gevent import GeventScheduler
gevent_scheduler = GeventScheduler()
ap_scheduler = APScheduler(scheduler=gevent_scheduler)

# add job to app scheduler
from time import time
scheduler_configs = {
    'SCHEDULER_JOBS': [ {
        'id': 'app.logger.debug.%s' % str(time()),
        'func': 'launch:app.logger.debug',
        'kwargs': { 'msg': 'APScheduler has started.' },
        'misfire_grace_time': 5,
        'max_instances': 1,
        'replace_existing': False,
        'coalesce': True
    } ],
    'SCHEDULER_TIMEZONE': 'UTC',
    'SCHEDULER_VIEWS_ENABLED': True
}
app.config.update(**scheduler_configs)

# attach app to scheduler and start scheduler
ap_scheduler.init_app(app)
ap_scheduler.start()

# initialize test wsgi localhost server with default memory job store
if __name__ == '__main__':
    from gevent.pywsgi import WSGIServer
    http_server = WSGIServer(('0.0.0.0', 5001), app)
    http_server.serve_forever()
    # app.run(host='0.0.0.0', port=5001)

How do you correctly delete the SCHEDULER_JOBSTORES database?

So, I am messing about running the advanced example. I copied the config over to a config.py file. When I run the example again, I get the error below. I then switched to my database and deleted the apscheduler_jobs table, but when I run the example again I get the same error. I have also tried changing the id to a new id, but that doesn't work either.

What is the correct way to remove the apscheduler_jobs table from my database while I am testing and making mistakes?

Traceback (most recent call last):
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\apscheduler\jobstores\sqlalchemy.py", line 92, in add_job
    self.engine.execute(insert)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\engine\base.py", line 2055, in execute
    return connection.execute(statement, *multiparams, **params)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\engine\base.py", line 945, in execute
    return meth(self, multiparams, params)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\sql\elements.py", line 263, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\engine\base.py", line 1053, in _execute_clauseelement
    compiled_sql, distilled_params
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\engine\base.py", line 1189, in _execute_context
    context)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\engine\base.py", line 1393, in _handle_dbapi_exception
    exc_info
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\util\compat.py", line 203, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\util\compat.py", line 186, in reraise
    raise value.with_traceback(tb)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\engine\base.py", line 1182, in _execute_context
    context)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\sqlalchemy\engine\default.py", line 470, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (psycopg2.IntegrityError) duplicate key value violates unique constraint "apscheduler_jobs_pkey"
DETAIL:  Key (id)=(job1) already exists.
 [SQL: 'INSERT INTO apscheduler_jobs (id, next_run_time, job_state) VALUES (%(id)s, %(next_run_time)s, %(job_state)s)'] [parameters: {'id': 'job1', 'next_run_time': 1504900413.448354, 'job_state': <psycopg2.extensions.Binary object at 0x00000192D8EBD238>}]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "app.py", line 53, in <module>
    scheduler.start()
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\flask_apscheduler\scheduler.py", line 82, in start
    self._scheduler.start()
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\apscheduler\schedulers\background.py", line 33, in start
    BaseScheduler.start(self, *args, **kwargs)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\apscheduler\schedulers\base.py", line 154, in start
    self._real_add_job(job, jobstore_alias, replace_existing)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\apscheduler\schedulers\base.py", line 851, in _real_add_job
    store.add_job(job)
  File "C:\Anaconda3\envs\g2x_flask\lib\site-packages\apscheduler\jobstores\sqlalchemy.py", line 94, in add_job
    raise ConflictingIdError(job.id)
apscheduler.jobstores.base.ConflictingIdError: 'Job identifier (job1) conflicts with an existing job'

Security Hole: flask_apscheduler.APScheduler.init_app() always adds Jobs UI

The Jobs UI allows anyone to see the jobs list & run a job ID. There is no authentication scheme to protect the job control. This is a major security hole.

The real problem is APScheduler.init_app() always adds Jobs UI endpoints. Please, add a flag to the constructor & init_app methods so that the Job UI can be left out of the Flask routes (I recommend this as the default).

NOTE:
For now, I am overriding the method to skip __load_views() so that it doesn't register the routes.

Unable to modify cron expression for a job

Hi,

I am trying to modify a job from running every single minute to every 10 minutes. I get an "Expected a trigger instance, got unicode instead" error. Below are the curl commands I used.

$  curl -v -H "Accept: application/json" -H "Content-type: application/json" -X POST -d '{"func": "__main__:job1", "id": "567", "name": "My job", "args": ["1", "5"], "trigger": "cron", "minute": "*/1"}' http://<service IP>:5000/scheduler/jobs

{
    "id": "567",
    "name": "My job",
    "func": "__main__:job1",
    "args": [
        "1",
        "5"
    ],
    "kwargs": {},
    "trigger": "cron",
    "minute": "*/1",
    "misfire_grace_time": 1,
    "max_instances": 1,
    "next_run_time": "2016-02-14T09:40:00+00:00"
}

$ curl -v -H "Accept: application/json" -H "Content-type: application/json" -X PATCH -d '{"trigger": "cron", "minute": "*/10"}' http://<service IP>:5000/scheduler/jobs/567

"error_message": "Expected a trigger instance, got unicode instead"

Thanks.

Add info to readme for running on top of flask-socketio

I had weird results. Scheduling with a 5s interval resulted in this:

1437117604.32 :calling update() thread: MainProcess - Thread-10 
1437117605.6 :calling update() thread: MainProcess - Thread-14 
1437117609.35 :calling update() thread: MainProcess - Thread-10 
1437117610.62 :calling update() thread: MainProcess - Thread-14 
1437117614.32 :calling update() thread: MainProcess - Thread-13 
1437117615.61 :calling update() thread: MainProcess - Thread-17 
1437117619.32 :calling update() thread: MainProcess - Thread-10 

Solution is to use the monkey patch from gevent by putting this above all other imports in your app.py:

from gevent import monkey
monkey.patch_all()

How does SQLAlchemyJobStore create a table in the DB?

Hi, I have been using flask-apscheduler and it has been working really well for me, but now I am not able to figure out this jobstore issue.
I am trying to create a native job store. I never had the requirement before, so I tried something like this:

from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from flask_apscheduler import APScheduler

jobstores = {
    'default': SQLAlchemyJobStore(url='postgresql+psycopg2://uname:pass@localhost/test_db',
                                  tablename='apscheduler_jobs')
}
scheduler = APScheduler(BackgroundScheduler(jobstores=jobstores))

I used this once with plain APScheduler, but with Flask I don't think the jobstore is working at all (even if I give a non-existent db name it does not raise any error, and no table gets created for a correct db name). I don't want to do it through a config class, because all my tasks are dynamic and I don't use a config class for them; just for the jobstore I don't want to create a separate config class, plus I already have all my configuration for the app set.
Is this the right workaround? Is there any issue with it? If not, how should it be done?

Any suggestions?
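flask-apscheduler can also pick the jobstore up from the Flask config, which avoids building the scheduler by hand and keeps the store tied to the app's own setup. A sketch (the URL is a placeholder), assuming the SCHEDULER_JOBSTORES key used elsewhere on this page:

```python
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from flask import Flask
from flask_apscheduler import APScheduler

app = Flask(__name__)
# No config class needed: the key can be set directly on app.config.
app.config['SCHEDULER_JOBSTORES'] = {
    'default': SQLAlchemyJobStore(url='postgresql+psycopg2://uname:pass@localhost/test_db',
                                  tablename='apscheduler_jobs')
}

scheduler = APScheduler()
scheduler.init_app(app)
scheduler.start()  # the store (and its table) is set up when the scheduler starts
```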

Multiple instances of scheduler problem

I have a doubt about using flask-apscheduler in a web app.
When used with any forking web server (gunicorn, uwsgi, etc.), processes get forked, so APScheduler gets instantiated (and possibly started) in each forked worker process. Does this lead to the "many jobs running" issue?
In production, multiple instances of the app get created, so that many scheduler instances are also initiated, and all of those instances will try to execute the job. Is that right?
Am I wrong here?
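The concern is valid: each worker that calls scheduler.start() runs its own copy of every job. One common mitigation (a sketch, not part of flask-apscheduler) is to take an exclusive file lock before starting the scheduler, so only the first worker to boot actually runs it:

```python
import fcntl  # POSIX-only

def try_acquire_lock(path):
    """Return an open handle if this process won the lock, else None."""
    f = open(path, 'wb')
    try:
        # LOCK_NB: fail immediately instead of blocking if another
        # worker already holds the lock.
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return f  # keep the handle alive for the life of the process
    except OSError:
        f.close()
        return None

lock = try_acquire_lock('/tmp/flask_apscheduler.lock')
if lock is not None:
    # Only the first worker to boot gets here; start the scheduler
    # in this process only, e.g.: scheduler.start()
    pass
```

Other options are running the scheduler in a separate dedicated process, or pointing all workers at a shared jobstore and starting the scheduler in only one of them.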

How do I deploy flask-apscheduler with mod_wsgi?

I tried to return the app variable to my wsgi file. I don't see any error messages in apache2's error.log; I just get "connection refused" when trying to open localhost:8610/scheduler/jobs.
See my attempt:

mymodule.scheduler.py (misses some lines so it's easier to read)

import os
import sys
import time
from importlib import import_module

from apscheduler.events import EVENT_JOB_EXECUTED, EVENT_JOB_ERROR
from apscheduler.executors.pool import ThreadPoolExecutor, ProcessPoolExecutor
from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.interval import IntervalTrigger
from flask import Flask
from flask_apscheduler import APScheduler
from helputils.core import log


class Config(object):
    """Configurations for flask-apscheduler"""
    JOBS = jobs
    JOBLOG_PATH = schedulerlog
    SCHEDULER_JOBSTORES = jobstores
    SCHEDULER_EXECUTORS = executors
    SCHEDULER_JOB_DEFAULTS = job_defaults
    SCHEDULER_VIEWS_ENABLED = True
    SCHEDULER_TIMEZONE = berlin_timezone

class Scheduler():

    def run(self):
        log.info("Started scheduler.py")
        self.sched = APScheduler()
        app = Flask(__name__)
        app.config.from_object(Config())
        app.debug = True
        self.sched.init_app(app)
        self.sched.start()
        return app

# Textual reference
run = Scheduler().run

flask_apscheduler.vhost

<VirtualHost *:8610>
    ServerName flask_apscheduler

    WSGIDaemonProcess flask_apscheduler user=http group=http threads=5 python-path=/srv/http/flask_apscheduler
    WSGIScriptAlias / /srv/http/flask_apscheduler/flask_apscheduler.wsgi

    <Directory /srv/http/flask_apscheduler>
        WSGIProcessGroup flask_apscheduler
        WSGIApplicationGroup %{GLOBAL}
        Require all granted
    </Directory>
</VirtualHost>

flask_apscheduler.wsgi (in /srv/http/flask_apscheduler/flask_apscheduler.wsgi)

import os
import sys
sys.path.insert(0, '/srv/http/flask_apscheduler')
os.chdir("/srv/http/flask_apscheduler")
from mymodule.scheduler import run as application

Problem when use gunicorn

I tried to start 4 processes using gunicorn -w 4 -b 0.0.0.0:5000 test:app, and there are 4 Flask-APScheduler instances bound to the 4 apps respectively, which leads to conflicts. I only need one scheduler to work. I tried the approach of binding a socket, but I can't get a port allocated at my company. Is there any other way to solve this problem?
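When no port is available, a lock file works just as well (a sketch assuming a POSIX system; the path is arbitrary). `flock` locks conflict between file descriptors, so only one gunicorn worker can acquire it, and the kernel releases the lock automatically if that worker dies.

```python
import fcntl
import os
import tempfile

def try_acquire_scheduler_lock(path=None):
    """Return an open file handle if this worker should start the scheduler,
    or None if another worker already holds the lock."""
    path = path or os.path.join(tempfile.gettempdir(), "scheduler.lock")
    handle = open(path, "wb")
    try:
        fcntl.flock(handle, fcntl.LOCK_EX | fcntl.LOCK_NB)  # non-blocking
        return handle  # keep it open; the lock is released when it closes
    except OSError:
        handle.close()
        return None
```

As with the socket trick, call `scheduler.start()` only in the worker where this returns a handle.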

Value Error when running jobs.py

When I run the example jobs.py in the python shell, I get this:

Traceback (most recent call last):
File "", line 1, in
File "/var/www/FlaskApp/varie/scheduler_example.py", line 21, in
scheduler.init_app(app)
File "/usr/local/lib/python2.7/dist-packages/flask_apscheduler/scheduler.py", line 68, in init_app
self._load_jobs()
File "/usr/local/lib/python2.7/dist-packages/flask_apscheduler/scheduler.py", line 269, in _load_jobs
self.add_job(**job)
File "/usr/local/lib/python2.7/dist-packages/flask_apscheduler/scheduler.py", line 129, in add_job
return self._scheduler.add_job(**job_def)
File "/usr/local/lib/python2.7/dist-packages/apscheduler/schedulers/base.py", line 425, in add_job
job = Job(self, **job_kwargs)
File "/usr/local/lib/python2.7/dist-packages/apscheduler/job.py", line 44, in init
self._modify(id=id or uuid4().hex, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/apscheduler/job.py", line 157, in _modify
func = ref_to_obj(func)
File "/usr/local/lib/python2.7/dist-packages/apscheduler/util.py", line 270, in ref_to_obj
raise LookupError('Error resolving reference %s: could not import module' % ref)
LookupError: Error resolving reference jobs:job1: could not import module

I didn't do any modification to the code. What am I doing wrong?

Thanks in advance.
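The string reference 'jobs:job1' requires a module named `jobs` to be importable, and from an interactive shell started in another directory that lookup fails. One way around it (a sketch) is to pass the callable itself instead of a string reference, which skips the import lookup entirely; this works as long as the job stays in a memory jobstore, since callables cannot be serialized to a persistent store.

```python
def job1(a, b):
    print(str(a) + ' ' + str(b))

# Passing the function object avoids the 'module:function' string lookup
# that fails when the module is not on the import path.
JOBS = [
    {
        'id': 'job1',
        'func': job1,        # callable, not 'jobs:job1'
        'args': (1, 2),
        'trigger': 'interval',
        'seconds': 10,
    }
]
```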

Use cron-like Scheduling in flask-apscheduler

I want my job work in regular time, in apscheduler I can set like this
sched.add_job(my_job, 'cron', year=2017, month=3, day=22, hour=17, minute=19, second=7)
but I do not know how to set it in flask-apscheduler?

Thanks for any answer!
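Flask-APScheduler passes the extra keys of a job definition through to `add_job`, so the cron fields can go straight into a `JOBS` entry (a sketch; 'jobs:my_job' is a placeholder module:function reference):

```python
# Equivalent of sched.add_job(my_job, 'cron', ...) expressed as a
# flask-apscheduler JOBS config entry.
JOBS = [
    {
        'id': 'cron_job',
        'func': 'jobs:my_job',   # placeholder reference to your function
        'trigger': 'cron',
        'year': 2017,
        'month': 3,
        'day': 22,
        'hour': 17,
        'minute': 19,
        'second': 7,
    }
]
```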

LookupError: No executor by the name "threadpool" was found

This error occurs when running the latest advanced.py example.

$ python foo.py                               
Traceback (most recent call last):
  File "foo.py", line 41, in <module>
    scheduler.init_app(app)
  File "/usr/local/lib/python2.7/dist-packages/flask_apscheduler/scheduler.py", line 71, in init_app
    self.__load_config()
  File "/usr/local/lib/python2.7/dist-packages/flask_apscheduler/scheduler.py", line 202, in __load_config
    self.__scheduler.configure(**options)
  File "/usr/local/lib/python2.7/dist-packages/apscheduler/schedulers/base.py", line 95, in configure
    self._configure(config)
  File "/usr/local/lib/python2.7/dist-packages/apscheduler/schedulers/background.py", line 27, in _configure
    super(BackgroundScheduler, self)._configure(config)
  File "/usr/local/lib/python2.7/dist-packages/apscheduler/schedulers/base.py", line 595, in _configure
    executor = self._create_plugin_instance('executor', plugin, value)
  File "/usr/local/lib/python2.7/dist-packages/apscheduler/schedulers/base.py", line 766, in _create_plugin_instance
    raise LookupError('No {0} by the name "{1}" was found'.format(type_, alias))
LookupError: No executor by the name "threadpool" was found
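For reference, this is an executor config shape that current APScheduler versions resolve by alias (a sketch; the worker counts are arbitrary). If the error persists with this shape, it usually points to an outdated or broken apscheduler install whose 'threadpool' entry point is missing, so upgrading apscheduler is worth trying.

```python
# Each value is a plugin-spec dict keyed by executor alias; APScheduler
# resolves the 'type' alias to the concrete executor class.
SCHEDULER_EXECUTORS = {
    'default': {'type': 'threadpool', 'max_workers': 20},
    'processpool': {'type': 'processpool', 'max_workers': 5},
}
```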

Pausing scheduler doesnt work

Hi
I am trying to pause a schedule of concurrent jobs. I used
scheduler.pause()
when there comes a priority task. And also want to resume by scheduler.resume(), after the task is completed. It does not pause. It throws an internal error. Is there a way around to pause all the current scheduled jobs and resume them?
My config object looks like this.

class Config(object):
        JOBS = []
        for x in cache.get('Schedules'):
            temp_job = {}
            temp_job['id'] = x['ID']          # ID is unique
            temp_job['func'] = fun_Scheduler
            temp_job['args'] = [x['PIN'], x['Version']]           
            temp_job['trigger'] = "interval"
            temp_job['seconds'] = 10                                # Can change the interval to any number of seconds
            JOBS.append(temp_job)


        SCHEDULER_JOB_DEFAULTS = {
        'coalesce': False,
        'max_instances': 10
        }
        
        SCHEDULER_EXECUTORS = {
        'default': {'type': 'threadpool', 'max_workers': 30}
        }
       
        SCHEDULER_API_ENABLED = True

And the flask function to pause the scheduler is

    @app.route("/patchAPI/<PIN>", methods=['GET'])
    def patchThem():
        scheduler.pause()
        resp=classic.patchPINCollection()  # This takes a while, so I am considering resuming it after this statement
        return(resp)

The scheduler works fine. Schedules all the jobs in the background.

But I do not get the response when I do GET http://127.0.0.1:5001/patchAPI/09876
I get an error like

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>

I have tried various other ways. But, couldn't find solution. Kindly, throw some insight on this.

Thank you
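One thing worth checking, independent of the scheduler (a sketch with a hypothetical view name): the route rule declares a `<PIN>` URL parameter, but the view function takes no arguments, and that mismatch alone makes Flask return a 500. The view must accept the parameter:

```python
from flask import Flask

app = Flask(__name__)

# The <PIN> converter in the rule must appear as a parameter of the view;
# otherwise Flask raises a TypeError and responds with a 500.
@app.route("/patchAPI/<PIN>", methods=['GET'])
def patch_them(PIN):
    return "patched " + PIN
```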

Background/Blocking tasks

Hi,
Just came across flask-apscheduler, actually i am writing an application with flask where I am using apscheduler, but i can see that flask-apscheduler is also available so I had a doubt
Like plain APScheduler, where we can pick whether we want to run a task in the background or in the foreground (blocking), can we have that kind of configuration here?
scheduler = APScheduler()  # this does not provide any config of that kind

I have a requirement where I want to run a task(cron/interval) on the basis of a post request, can that be achieved using this?

Thanks for any kind of help

Can we run multiple APScheduler jobs in parallel?

The way I see it right now, if I define two or more jobs, they run sequentially. Is there a way to run multiple jobs in parallel?

class Config(object):
    JOBS = [
        {
            'id': 'job1',
            'func': 'jobs:job1',
            'args': (1, 2),
            'trigger': 'interval',
            'seconds': 10
        },
       {
            'id': 'job2',
            'func': 'jobs:job1',
            'args': (3, 4),
            'trigger': 'interval',
            'seconds': 10
        }
    ]

    SCHEDULER_API_ENABLED = True


def job1(a, b):
    print(str(a) + ' ' + str(b))

if __name__ == '__main__':
    app = Flask(__name__)
    app.config.from_object(Config())

    scheduler = APScheduler()
    # it is also possible to enable the API directly
    # scheduler.api_enabled = True
    scheduler.init_app(app)
    scheduler.start()
    app.run()

Right now the two jobs defined (job1 and job2) are running one after another. Is there a way to execute them individually and in parallel?
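Jobs tend to serialize when they share a small executor; giving the scheduler a threadpool executor with more workers (e.g. `SCHEDULER_EXECUTORS = {'default': {'type': 'threadpool', 'max_workers': 20}}`) lets them overlap. The effect is the same as Python's own thread pool, as this stdlib sketch shows: two 0.2-second tasks on two workers finish in roughly 0.2 seconds total, not 0.4.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def job(n):
    time.sleep(0.2)  # stand-in for real work
    return n

start = time.monotonic()
with ThreadPoolExecutor(max_workers=2) as pool:  # two concurrent workers
    results = list(pool.map(job, [1, 2]))
elapsed = time.monotonic() - start
# Both jobs overlap, so elapsed is ~0.2s rather than the ~0.4s a
# sequential run would take.
```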
