
pythonfutures's Introduction

This is a backport of the concurrent.futures standard library module to Python 2.

It does not work on Python 3 due to Python 2 syntax being used in the codebase. Python 3 users should not attempt to install it, since the package is already included in the standard library.

To conditionally require this library only on Python 2, you can do this in your setup.py:

setup(
    ...
    extras_require={
        ':python_version == "2.7"': ['futures']
    }
)

Or, using the newer syntax:

setup(
    ...
    install_requires=[
        'futures; python_version == "2.7"'
    ]
)

Warning

The ProcessPoolExecutor class has known (unfixable) problems on Python 2 and should not be relied on for mission critical work. Please see Issue 29 and upstream bug report for more details.

pythonfutures's People

Contributors

agronholm, bmwiedemann, dalcinl, eli-schwartz, fahhem, jwilk, kianmeng, kived, leojay, mlsteele, rhansen, sseg, torstehu, wooparadog


pythonfutures's Issues

Python 2.5 compatibility

According to PyPI it should be compatible with Python 2.5, but there is an 
issue with `notify_all`. It should be notifyAll on Python 2.5.
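
A minimal compatibility shim along these lines could paper over the rename (a hedged sketch, not the project's actual fix):

    import threading

    cond = threading.Condition()
    # Python 2.6+ spells it notify_all(); Python 2.5 only has notifyAll().
    notify_all = getattr(cond, 'notify_all', cond.notifyAll)

    cond.acquire()      # the lock must be held when notifying
    try:
        notify_all()
    finally:
        cond.release()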

Original issue reported on code.google.com by [email protected] on 9 Apr 2013 at 1:08

Preserve exception info when re-raising

Currently the code does stuff like:

        except Exception, e:
            ...
            raise e # should just be 'raise'

and:

    def __get_result(self):
        if self._exception:
            raise self._exception
            # better to store complete exc_info & use 3-parameter raise
            # a,b,c = self._exc_info; raise a,b,c
        else:
            return self._result

As such, re-raised exceptions lose a great deal of information because the 
tracebacks are missing. The suggested fixes are also in line with your 
philosophy discussed in #2, in that you want to preserve the original exception.
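
A minimal sketch of the suggested approach (the class and attribute names here are hypothetical, not the library's): capture the full sys.exc_info() where the work runs and re-raise it with the 3-parameter raise so the original traceback survives.

    import sys

    class _FutureSketch(object):
        def __init__(self):
            self._exc_info = None
            self._result = None

        def _run(self, fn):
            try:
                self._result = fn()
            except BaseException:
                # keep type, value *and* traceback
                self._exc_info = sys.exc_info()

        def result(self):
            if self._exc_info is not None:
                exc_type, exc_value, exc_tb = self._exc_info
                raise exc_type, exc_value, exc_tb  # 3-parameter raise keeps the traceback
            return self._result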

Original issue reported on code.google.com by [email protected] on 10 Nov 2010 at 4:00

map is greedy

map includes this line:

  fs = [self.submit(fn, *args) for args in zip(*iterables)]

That means no futures code gets executed until all the iterables have been 
completely consumed.  If each iteration of an iterable can take quite some 
time, that means a long wait before any futures even starts.

It also means a humongous list is built up if the iterables return many items. For example, if the iterable was xrange(2000000), then that fs list is going to be 2 million items long even if num_workers was something like 8.

A generator expression should be used instead.

Original issue reported on code.google.com by rogerbinns on 7 Jun 2013 at 2:01

Tracebacks lost when exception is thrown in worker

When an exception is thrown in a thread or subprocess, the exception is captured but not the traceback. This can make it difficult to see where an error is coming from, as the exception is simply re-raised in result() or returned from exception().

This patch adds a traceback attribute to the thrown exception with a string 
serialized version of the stacktrace.
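
A rough sketch of the idea (the helper name and surrounding structure are assumptions, not the attached patch):

    import traceback

    def _run_work_item(fn, args, kwargs, future):
        try:
            result = fn(*args, **kwargs)
        except BaseException, e:
            # attach a string-serialized stacktrace so callers of result()
            # or exception() can see where the error originated
            e.traceback = traceback.format_exc()
            future.set_exception(e)
        else:
            future.set_result(result)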



Original issue reported on code.google.com by [email protected] on 5 Feb 2014 at 10:50

Attachments:

Improvement: Faster exit on shutdown for thread pooling

I propose changing
        while True:
            try:
                work_item = work_queue.get(block=True, timeout=0.1)
            except queue.Empty:
                executor = executor_reference()
                # Exit if:
                #   - The interpreter is shutting down OR
                #   - The executor that owns the worker has been collected OR
                #   - The executor that owns the worker has been shutdown.
                if _shutdown or executor is None or executor._shutdown:
                    return
                del executor
            else:
                work_item.run()

To
        while True:
            executor = executor_reference()
            # Exit if:
            #   - The interpreter is shutting down OR
            #   - The executor that owns the worker has been collected OR
            #   - The executor that owns the worker has been shutdown.
            if _shutdown or executor is None or executor._shutdown:
                return
            del executor
            try:
                work_item = work_queue.get(block=True, timeout=0.1)
            except queue.Empty:
                pass
            else:
                work_item.run()

This would avoid having to wait for workers to cycle through the pool. Since shutdown just waits for the threads to finish, it should (as far as I can see) be safe to make the threads unconditionally check whether they should exit and do so. As it is, you *have to* wait for all submitted work to finish before exiting the interpreter, which is imo inconvenient.

Original issue reported on code.google.com by [email protected] on 10 Jun 2013 at 9:24

Raising an exception that is unable to be unpickled causes hang in ProcessPoolExecutor

What steps will reproduce the problem?
1. In the function submitted to a ProcessPoolExecutor, raise a custom exception 
class that takes more than one argument to __init__.

What is the expected output? What do you see instead?
I expect a call to future.result() to not hang.

What version of the product are you using? On what operating system?
I'm using ver 2.1.6 on python 2.7 on Gentoo Linux.

Please provide any additional information below.
I have attached a patch to address the issue and a test case for it.  Without 
the patch, the new test case hangs.  With the patch, it passes.

This is needed because of the issue raised in 
http://bugs.python.org/issue1692335.  An exception class that takes multiple 
arguments to __init__ can be pickled but it raises a TypeError when being 
unpickled:

In [1]: class MyError(Exception):
   ...:     def __init__(self, arg1, arg2):
   ...:         super(MyError, self).__init__(
   ...:             'arg1 = {}, arg2 = {}'.format(arg1, arg2))
   ...: 

In [2]: import pickle

In [3]: p = pickle.dumps(MyError('arg1val', 'arg2val'))

In [4]: pickle.loads(p)
---------------------------------------------------------------------------
<snip>
TypeError: __init__() takes exactly 3 arguments (2 given)

So if a child process raises an exception like this, it gets pickled and put in 
the result queue just fine.  However, in _queue_management_worker, the call to 
result_queue.get(block=True) will raise an uncaught TypeError when it tries to 
unpickle the exception.  So then the queue management just breaks.

My proposed patch attempts to catch this condition before putting the exception 
in the result queue and create a new exception that will be able to be 
unpickled but still contains information from the original exception.
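
A minimal sketch of that kind of guard (a hypothetical helper, not the attached patch): verify that the exception survives a pickle round-trip before it is handed to the result queue, and substitute a plain, picklable stand-in when it does not.

    import pickle

    def _make_picklable(exc):
        """Return exc unchanged if it survives a pickle round-trip,
        otherwise wrap its details in a plain Exception that does."""
        try:
            pickle.loads(pickle.dumps(exc))
            return exc
        except Exception:
            return Exception('%s: %s' % (type(exc).__name__, exc))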

Original issue reported on code.google.com by [email protected] on 30 Sep 2014 at 2:23

Attachments:

Exception in thread QueueFeederThread

What steps will reproduce the problem?

1. Create new executor based on the ProcessPoolExecutor
2. Set max_workers=8
3. Submit two tasks to run
4. wait on all futures



What is the expected output? What do you see instead?

No exceptions, program runs to completion.  Instead seeing an exception, unsure 
of cause (sometimes the program runs fine, other times it doesn't).


Program Output (sporadically hits this):

"Exception in thread QueueFeederThread (most likely raised during interpreter 
shutdown):
Traceback (most recent call last):
  File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
  File "/usr/lib/python2.6/threading.py", line 484, in run
  File "/usr/lib/python2.6/multiprocessing/queues.py", line 233, in _feed
<type 'exceptions.TypeError'>: 'NoneType' object is not callable
"





What version of the product are you using? On what operating system?

Ubuntu 11.04
Python 2.6.6

Original issue reported on code.google.com by [email protected] on 11 Jan 2011 at 1:50

Support Python3 in tests


======================================================================
ERROR: test_futures (unittest.loader.ModuleImportFailure)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/nix/store/lwz7kmb4jzgh3da04p1pa723i6084xk0-python3-3.4.3/lib/python3.4/unittest/case.py", line 58, in testPartExecutor
    yield
  File "/nix/store/lwz7kmb4jzgh3da04p1pa723i6084xk0-python3-3.4.3/lib/python3.4/unittest/case.py", line 577, in run
    testMethod()
  File "/nix/store/lwz7kmb4jzgh3da04p1pa723i6084xk0-python3-3.4.3/lib/python3.4/unittest/loader.py", line 32, in testFailure
    raise exception
ImportError: Failed to import test module: test_futures
Traceback (most recent call last):
  File "/nix/store/lwz7kmb4jzgh3da04p1pa723i6084xk0-python3-3.4.3/lib/python3.4/unittest/loader.py", line 312, in _find_tests
    module = self._get_module_from_name(name)
  File "/nix/store/lwz7kmb4jzgh3da04p1pa723i6084xk0-python3-3.4.3/lib/python3.4/unittest/loader.py", line 290, in _get_module_from_name
    __import__(name)
  File "/tmp/nix-build-python3.4-futures-3.0.2.drv-0/futures-3.0.2/test_futures.py", line 10, in <module>
    from StringIO import StringIO
ImportError: No module named 'StringIO'
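
The usual fix for this kind of failure (an assumption about the intended change, not a confirmed patch) is a conditional import in test_futures.py:

    try:
        from StringIO import StringIO  # Python 2
    except ImportError:
        from io import StringIO       # Python 3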

Sync with upstream changes

The concurrent.futures module in cpython has had some changes since this 
backport was released; it would be good to sync up with cpython 3.3.  I'm 
particularly interested in this change:
  http://hg.python.org/cpython/annotate/4390d6939a56/Lib/concurrent/futures/thread.py#130
which prevents a 100ms delay when shutting down a ThreadPoolExecutor.

Original issue reported on code.google.com by [email protected] on 23 May 2013 at 3:28

Eliminate Globals used for thread tracking

This patch adds a tracker to the base Executor that is used to track all 
threads / processes created by the Executor and do proper cleanup of said 
threads on Executor Shutdown / Python Exit.

This also unifies the API between ProcessPoolExecutor and ThreadPoolExecutor a little: it makes max_workers optional on both, defaulting to cpu_count, and adds a worker_count property to the base Executor that returns the number of workers. That property is used instead of len(self._processes|threads), since the threads and processes are now tracked in the trackers. Also, since the shutdown method is now unified between the two implementations, it moves to the base Executor.


All tests pass on win32 & Ubuntu; I don't have access to OS X to test there.

Original issue reported on code.google.com by [email protected] on 21 May 2010 at 7:29

Attachments:

Backport new API to Python 2 futures

Python 2 is here to stay for a long time. There's value in having a consistent 
API to both reduce confusion and so that users don't have to worry about one 
day migrating to a different API. Plus it's low-hanging fruit.

Original issue reported on code.google.com by [email protected] on 10 Nov 2010 at 4:03

Make multiprocessing optional

I would suggest making multiprocessing an optional dependency. If it is not installed, ProcessPoolExecutor is simply not available.

This would enable threaded futures on systems where you cannot compile/install 
the multiprocessing package.
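
A sketch of how the package's top-level imports could degrade gracefully (an illustration only, not the project's actual code):

    try:
        from concurrent.futures.process import ProcessPoolExecutor
    except ImportError:
        # multiprocessing could not be imported on this platform;
        # expose only the thread-based executor.
        ProcessPoolExecutor = None
    from concurrent.futures.thread import ThreadPoolExecutor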

Original issue reported on code.google.com by [email protected] on 20 Feb 2013 at 12:57

Please apply the fix for http://bugs.python.org/issue16284

The absence of the patches from http://bugs.python.org/issue16284 
(concurrent.futures ThreadPoolExecutor keeps unnecessary references to worker 
functions) leads to memory leaks. Currently we have to monkey-patch the library.

Thanks!

Original issue reported on code.google.com by [email protected] on 15 Nov 2014 at 8:14

No task_done() call in ThreadPoolExecutor?

This is more of a curiosity than a bug report. Sorry but I wasn't sure where 
else to ask.

I don't see a work_queue.task_done() call in thread.py.  Is that on purpose?

I'm new to threads and queues in python, so please excuse my ignorance, but 
wouldn't this disrupt proper shutdown?

Thanks.

Original issue reported on code.google.com by [email protected] on 10 Oct 2013 at 8:37

pypy compat

What steps will reproduce the problem?
1. run testsuite under pypy
2.
3.

What is the expected output? What do you see instead?
All tests should pass; instead there is one hang and an outright failure.

What version of the product are you using? On what operating system?
futures-2.1.6

Please provide any additional information below.

It hangs at:


test_del_shutdown (__main__.ProcessPoolShutdownTest) ...

After removing those two instances, the remaining failure is:

======================================================================
FAIL: test_repr (__main__.FutureTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_futures.py", line 577, in test_repr
    '<Future at 0x[0-9a-f]+ state=pending>')
AssertionError: Regexp didn't match: '<Future at 0x[0-9a-f]+ state=pending>' 
not found in '<Future at 0x7f2ad3bf26e8L state=pending>'

----------------------------------------------------------------------
Ran 56 tests in 46.216s

FAILED (failures=1)
Traceback (most recent call last):
  File "app_main.py", line 72, in run_toplevel
  File "test_futures.py", line 723, in <module>
    test_main()
  File "test_futures.py", line 46, in decorator
    return func(*args)
  File "test_futures.py", line 718, in test_main
    ThreadPoolShutdownTest)
  File "/usr/lib64/pypy/lib-python/2.7/test/test_support.py", line 1144, in run_unittest
    _run_suite(suite)
  File "/usr/lib64/pypy/lib-python/2.7/test/test_support.py", line 1098, in _run_suite
    raise TestFailed(err)
TestFailed: Traceback (most recent call last):
  File "test_futures.py", line 577, in test_repr
    '<Future at 0x[0-9a-f]+ state=pending>')
AssertionError: Regexp didn't match: '<Future at 0x[0-9a-f]+ state=pending>' 
not found in '<Future at 0x7f2ad3bf26e8L state=pending>'

I don't see any benefit in a full log. Do you?


Original issue reported on code.google.com by `[email protected]` on 5 May 2014 at 4:47

ThreadPoolExecutor#map() doesn't start tasks until generator is used

See this simple source: http://paste.pound-python.org/show/ZTWE9gxRpAX5qsgMGS2H/

On Python 3, the prints happen before the generator is used at all. On Python 
2, they don't happen and RuntimeError is raised when trying to iterate on the 
returned generator.

Python 2.7.8 on OS X 10.9 with futures 2.1.6.
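
The linked paste is external; a reconstruction of the kind of snippet it describes (not the original source) looks roughly like this:

    from __future__ import print_function
    import time
    from concurrent.futures import ThreadPoolExecutor

    def work(i):
        print('started', i)
        time.sleep(0.1)
        return i

    with ThreadPoolExecutor(max_workers=2) as executor:
        results = executor.map(work, range(4))
    # Python 3: the 'started' lines have already printed by this point.
    # Python 2 with the backport: nothing was submitted, and iterating the
    # generator after the with-block raises RuntimeError.
    for r in results:
        print(r)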

Original issue reported on code.google.com by [email protected] on 30 Aug 2014 at 9:23

Futures holding on to WorkItem references preventing garbage collection

After using heapy/guppy on my daemon application to try and locate memory problems, I found that the futures work item objects were being held as references, which was preventing the data I had passed through my worker function from being garbage collected.

After searching about the web I ran into an issue that was logged very recently 
(last month) against the futures library in Python 3.

http://bugs.python.org/issue16284

This issue should provide all the details (plus the fix) and simply needs to be backported. I made some of the changes locally just to test it out and my memory usage has noticeably improved.

What steps will reproduce the problem?
1. Use guppy/heapy and start a monitor on a daemon like process
2. Have your daemon process (or just while true main thread) submit work 
through to ThreadPoolExecutor
3. submit a couple jobs to warm up the memory then reset the heap reference 
point [hp.setref()]
4. check the heap [hp.heap()]
5. run another job through the futures and check the heap

What is the expected output? What do you see instead?
I expected the memory to return back to the previous value after the job was 
done, but it grew on each subsequent job instead and digging into the main 
culprits turned out to be objects that were being referenced in the futures 
work item.

What version of the product are you using? On what operating system?
futures 2.1.3

Please provide any additional information below.
Again, it looks like someone else has already found this very problem in the official futures library of Python 3 and has fixed it; you can find all the details here: http://bugs.python.org/issue16284

Original issue reported on code.google.com by [email protected] on 15 Nov 2012 at 7:16

Deferred Class Would Be Nice

I've found this to personally be very useful in conjunction with Future

It allows for chaining and distribution of results and errors

An implementation of Deferred using Future. It only works with my pyev Future 
class though as Future requires the loop to be passed around.

http://bitbucket.org/bfrog/whizzer/src/tip/whizzer/futures.py

Original issue reported on code.google.com by [email protected] on 1 Sep 2010 at 10:57

Traceback of Exceptions from parallel tasks is truncated

I submit a job to the ThreadPoolExecutor. The job raises an exception which is 
reraised by future.result(), but the traceback starts at the future.result() 
call.

This does not happen in Python3 because tracebacks are attached to the 
exception.

The commits leading up to https://bitbucket.org/bluehorn/pythonfutures/changeset/31103a24718d43c23ab6dd5be15bbb2a31e0ce2b fix this by storing sys.exc_info()[2] into the future.

Please consider merging my changes.

Original issue reported on code.google.com by [email protected] on 28 Sep 2012 at 4:07

len(Executor) return number of items in queue?

As a feature request, can the Executor respond to a len() request by showing the number of non-finished/canceled items in the pool?

I would like a clean pythonic way of seeing how many items remain to be executed and this seemed the way to go.

pseudo-code:

    myvar = list(range(1, 30))

    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    results = pool.map(myfunction, myvar)

    for result in results:
        print("waiting for " + str(len(pool)) + " tasks to finish")

ThreadPoolExecutor should fail when max_workers is 0

I noticed a difference when instantiating the executor with ThreadPoolExecutor(max_workers=0): the backport does not raise, whereas Python 3.5 raises a ValueError:

  File "/opt/python/3.5.0/lib/python3.5/concurrent/futures/thread.py", line 96, in __init__
    raise ValueError("max_workers must be greater than 0")

ValueError: max_workers must be greater than 0
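
The quoted 3.5 behaviour is a one-line guard in the constructor; a self-contained illustration of the same check (not the backport's code) is:

    class _ExecutorSketch(object):
        """Minimal illustration of the max_workers guard."""
        def __init__(self, max_workers):
            if max_workers <= 0:
                raise ValueError("max_workers must be greater than 0")
            self._max_workers = max_workers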

Ctrl-C doesn't end the program

I'm using Python 2.7, and my program ignores SIGTERM signals when using 
ThreadPoolExecutor.

After some debugging I found that my main thread is blocked on Thread.join(), 
and it holds the interruption until the call returns.

My solution is to use join(timeout), as it will respect the interruptions and 
allow the program to exit properly.

The next issue I found was that the atexit hook will try to wait for pending 
threads anyway.
If the program ended without waiting on these, I suppose skipping this step is 
probably not too bad either.

Not a great solution, but works for me :)

Original issue reported on code.google.com by [email protected] on 1 Nov 2013 at 11:57

Attachments:

more responsive, unordered, run_to_results

Currently, Executor.run_to_results iterates through the FutureList, waiting
for each future to finish *before* continuing with the next one. The
results are returned in the same order as the list of input tasks.

Please could a generator-method be made that yields when *any* future
finishes (and possibly return results in a different order from the
inputs). This is quite a common pattern; however at the moment I'm having
to implement this using the hack below (since FutureList has no way to keep
track of what has already been yielded), which doesn't scale.


-            for future in fs:
-                if timeout is None:
-                    yield future.result()
-                else:
-                    yield future.result(end_time - time.time())
+            yielded = [False] * len(fs)
+            while False in yielded:
+                fs.wait(None if timeout is None else end_time - time.time(), return_when=FIRST_COMPLETED)
+                for i, future in enumerate(fs):
+                    future._condition.acquire()
+                    if future._state == FINISHED and not yielded[i]:
+                        yielded[i] = True
+                        print "yielded #%s (%s total)" % (i, len(fs))
+                        yield future.result()
+                    future._condition.release()
+

A better method would be to have worker push FINISHED futures onto a
result_queue or something.
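
For context, the as_completed() generator in the released concurrent.futures API covers exactly this pattern; a minimal usage sketch (the work function is illustrative):

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def work(i):
        return i * i

    with ThreadPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(work, i) for i in range(10)]
        for future in as_completed(futures):
            # futures are yielded as they finish, regardless of submit order
            print future.result()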

Original issue reported on code.google.com by [email protected] on 3 Apr 2010 at 8:04

Intermittent threading hangs with process pools

I've been chasing this for a year or so (across various versions of Python and futures; this time 2.7.6 and 3.0.3) and finally went through the rigamarole of setting up the Python gdb tools to get some decent tracebacks out of it. In short, during large jobs with thousands of tasks, execution sometimes hangs. It runs for about an hour, getting somewhere between 11-17% done in the current reproduction; conveniently, I have a progress bar. The variation makes me think it's some kind of timing bug. The CPU use slowly falls to 0 as the worker processes complete and no new ones are scheduled to replace them. I end up with a process table like this:

  585 ?        00:01:58 dxr
  605 ?        01:03:30 dxr <defunct>
  606 ?        00:22:48 dxr <defunct>
  607 ?        00:01:50 dxr <defunct>
  609 ?        00:17:43 dxr <defunct>

The defunct processes are the workers. Adding -L, we can see the threads futures spins up to coordinate the work distribution:

  585   585 ?        00:00:39 dxr
  585   603 ?        00:00:00 dxr
  585   604 ?        00:00:00 dxr
  585   608 ?        00:01:16 dxr
  605   605 ?        01:03:30 dxr <defunct>
  606   606 ?        00:22:48 dxr <defunct>
  607   607 ?        00:01:50 dxr <defunct>
  609   609 ?        00:17:43 dxr <defunct>

I don't know why there are only 3 of them, when my process pool is of size 4. Maybe that's a clue?

The Python traceback, from attaching with gdb and using its Python tools, looks like this:

(gdb) py-bt
#5 Frame 0x23c2e90, for file /usr/lib/python2.7/threading.py, line 339, in wait (self=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0>, _Condition__waiters=[<thread.lock at remote 0x7f2716b74e00>], release=<built-in method release of thread.lock object at remote 0x7f2716b74ea0>) at remote 0x7f2716b5c920>, timeout=None, waiter=<thread.lock at remote 0x7f2716b74e00>, saved_state=None)
    waiter.acquire()
#9 Frame 0x7f272708f460, for file /usr/lib/python2.7/threading.py, line 620, in wait (self=<_Event(_Verbose__verbose=False, _Event__flag=False, _Event__cond=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0>, _Condition__waiters=[<thread.lock at remote 0x7f2716b74e00>], release=<built-in method release of thread.lock object at remote 0x7f2716b74ea0>) at remote 0x7f2716b5c920>) at remote 0x7f2716b5c840>, timeout=None)
    self.__cond.wait(timeout)
#13 Frame 0x24189e0, for file /code/dbg-venv/local/lib/python2.7/site-packages/concurrent/futures/_base.py, line 217, in as_completed (fs=set([<...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <.....(truncated)
    waiter.event.wait(wait_timeout)
#19 Frame 0x2424df0, for file /code/dbg-venv/local/lib/python2.7/site-packages/click/_termui_impl.py, line 240, in next (self=<ProgressBar(color=None, pos=47, length_known=True, max_width=62, file=<_NonClosingTextIOWrapper(_stream=<_FixupStream(_stream=<file at remote 0x7f272f2751c0>) at remote 0x7f27270a6fb0>) at remote 0x7f272b1edc60>, is_hidden=False, avg=[<float at remote 0x25d2558>, <float at remote 0x25d2530>, <float at remote 0x25d2508>, <float at remote 0x25d24e0>, <float at remote 0x25d24b8>, <float at remote 0x25d25a8>, <float at remote 0x25d25d0>], last_eta=<float at remote 0x25d2468>, width=36, info_sep='  ', bar_template='%(label)-18s [%(bar)s] %(info)s', label='Indexing files', empty_char='-', start=<float at remote 0x2360e88>, entered=True, item_show_func=None, autowidth=False, show_percent=None, show_pos=False, finished=False, fill_char='#', eta_known=True, show_eta=False, iter=<generator at remote 0x7f2716b4cb60>, length=273, current_item=<Future(_exception=None, _result=None, _condition=<_Condit...(truncated)
    rv = next(self.iter)
#27 Frame 0x24187a0, for file /home/dxr/dxr/dxr/build.py, line 325, in show_progress (futures=[<Future(_exception=None, _result=None, _condition=<_Condition(_Condition__lock=<_RLock(_Verbose__verbose=False, _RLock__owner=None, _RLock__block=<thread.lock at remote 0x7f272bb3bc20>, _RLock__count=0) at remote 0x7f27270a6f40>, acquire=<instancemethod at remote 0x7f27270ee9e0>, _is_owned=<instancemethod at remote 0x7f27270ee4e0>, _release_save=<instancemethod at remote 0x7f27270ee8e0>, release=<instancemethod at remote 0x7f27270ee5e0>, _acquire_restore=<instancemethod at remote 0x7f27270eeae0>, _Verbose__verbose=False, _Condition__waiters=[]) at remote 0x7f272f13d4c0>, _state='RUNNING', _traceback=None, _waiters=[<_AsCompletedWaiter(lock=<thread.lock at remote 0x7f2716b74f40>, event=<_Event(_Verbose__verbose=False, _Event__flag=False, _Event__cond=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0...(truncated)
    for future in bar:
#30 Frame 0x2411980, for file /home/dxr/dxr/dxr/build.py, line 638, in index_files (tree=<TreeConfig(_section={'ignore_filenames': ['.hg', '.git', 'CVS', '.svn', '.bzr', '.deps', '.libs', '.DS_Store', '.nfs*', '*~', '._*'], 'build_command': 'cd $source_folder && ./mach clobber && make -f client.mk build MOZ_OBJDIR=$object_folder MOZ_MAKE_FLAGS="-s -j$jobs"', 'description': '', 'core': {}, 'enabled_plugins': [<Plugin(tree_to_index=<type at remote 0x2325170>, file_to_skim=None, filters=[<type at remote 0x232e220>, <type at remote 0x2324270>, <type at remote 0x2323e80>, <type at remote 0x2324d80>, <type at remote 0x233b890>, <type at remote 0x2324990>], analyzers={'tokenizer': {'trigram_tokenizer': {'max_gram': 3, 'type': 'nGram', 'min_gram': 3}}, 'analyzer': {'trigramalyzer': {'type': 'custom', 'tokenizer': 'trigram_tokenizer'}, 'lowercase': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'keyword'}, 'trigramalyzer_lower': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'trigram_tokenizer'}}}, ref...(truncated)
    for future in show_progress(futures, 'Indexing files'):
#33 Frame 0x231ea40, for file /home/dxr/dxr/dxr/build.py, line 275, in index_tree (tree=<TreeConfig(_section={'ignore_filenames': ['.hg', '.git', 'CVS', '.svn', '.bzr', '.deps', '.libs', '.DS_Store', '.nfs*', '*~', '._*'], 'build_command': 'cd $source_folder && ./mach clobber && make -f client.mk build MOZ_OBJDIR=$object_folder MOZ_MAKE_FLAGS="-s -j$jobs"', 'description': '', 'core': {}, 'enabled_plugins': [<Plugin(tree_to_index=<type at remote 0x2325170>, file_to_skim=None, filters=[<type at remote 0x232e220>, <type at remote 0x2324270>, <type at remote 0x2323e80>, <type at remote 0x2324d80>, <type at remote 0x233b890>, <type at remote 0x2324990>], analyzers={'tokenizer': {'trigram_tokenizer': {'max_gram': 3, 'type': 'nGram', 'min_gram': 3}}, 'analyzer': {'trigramalyzer': {'type': 'custom', 'tokenizer': 'trigram_tokenizer'}, 'lowercase': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'keyword'}, 'trigramalyzer_lower': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'trigram_tokenizer'}}}, refs...(truncated)
    index_files(tree, tree_indexers, index, pool, es)
#37 Frame 0x23ea910, for file /home/dxr/dxr/dxr/build.py, line 64, in index_and_deploy_tree (tree=<TreeConfig(_section={'ignore_filenames': ['.hg', '.git', 'CVS', '.svn', '.bzr', '.deps', '.libs', '.DS_Store', '.nfs*', '*~', '._*'], 'build_command': 'cd $source_folder && ./mach clobber && make -f client.mk build MOZ_OBJDIR=$object_folder MOZ_MAKE_FLAGS="-s -j$jobs"', 'description': '', 'core': {}, 'enabled_plugins': [<Plugin(tree_to_index=<type at remote 0x2325170>, file_to_skim=None, filters=[<type at remote 0x232e220>, <type at remote 0x2324270>, <type at remote 0x2323e80>, <type at remote 0x2324d80>, <type at remote 0x233b890>, <type at remote 0x2324990>], analyzers={'tokenizer': {'trigram_tokenizer': {'max_gram': 3, 'type': 'nGram', 'min_gram': 3}}, 'analyzer': {'trigramalyzer': {'type': 'custom', 'tokenizer': 'trigram_tokenizer'}, 'lowercase': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'keyword'}, 'trigramalyzer_lower': {'filter': ['lowercase'], 'type': 'custom', 'tokenizer': 'trigram_tokenizer...(truncated)
    index_name = index_tree(tree, es, verbose=verbose)
#41 Frame 0x231f140, for file /home/dxr/dxr/dxr/cli/index.py, line 26, in index (config=<Config(_section={'es_hosts': 'http://127.0.0.1:9200/', 'default_tree': 'mozilla-central', 'max_thumbnail_size': 20000, 'es_alias': 'dxr_{format}_{tree}', 'es_catalog_index': 'dxr_catalog', 'workers': 4, 'log_folder': '/code/dxr-logs-{tree}', 'es_indexing_timeout': 60, 'temp_folder': '/code/dxr-temp-{tree}', 'google_analytics_key': '', 'es_refresh_interval': 60, 'es_catalog_replicas': 1, 'www_root': '', 'es_index': 'dxr_{format}_{tree}_{unique}', 'generated_date': 'Fri, 05 Feb 2016 18:27:38 +0000', 'skip_stages': ['build']}, trees=<OrderedDict(_OrderedDict__map={'mozilla-central': ['mozilla-central', [None, [...], [...]], [...]]}, _OrderedDict__end=[...]) at remote 0x7f272b1ae5c0>) at remote 0x7f272b5e9300>, verbose=False, tree_names=(), tree=<TreeConfig(_section={'ignore_filenames': ['.hg', '.git', 'CVS', '.svn', '.bzr', '.deps', '.libs', '.DS_Store', '.nfs*', '*~', '._*'], 'build_command': 'cd $source_folder && ./mach clobbe...(truncated)
...

Here's the calling code.

Here's the C traceback as well, in case it's helpful:

#0  sem_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/sem_wait.S:85
#1  0x000000000056e2a4 in PyThread_acquire_lock (lock=0x22b12e0, waitflag=1) at ../Python/thread_pthread.h:324
#2  0x000000000060fba9 in lock_PyThread_acquire_lock (self=0x7f2716b74e00, args=()) at ../Modules/threadmodule.c:52
#3  0x00000000004877aa in PyCFunction_Call (func=<built-in method acquire of thread.lock object at remote 0x7f2716b74e00>,
    arg=(), kw=0x0) at ../Objects/methodobject.c:81
#4  0x00000000005273b4 in call_function (pp_stack=0x7ffcb97d7450, oparg=0) at ../Python/ceval.c:4020
#5  0x00000000005222e1 in PyEval_EvalFrameEx (
    f=Frame 0x23c2e90, for file /usr/lib/python2.7/threading.py, line 339, in wait (self=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0>, _Condition__waiters=[<thread.lock at remote 0x7f2716b74e00>], release=<built-in method release of thread.lock object at remote 0x7f2716b74ea0>) at remote 0x7f2716b5c920>, timeout=None, waiter=<thread.lock at remote 0x7f2716b74e00>, saved_state=None), throwflag=0) at ../Python/ceval.c:2666
#6  0x0000000000524b9a in PyEval_EvalCodeEx (co=0x7f272ce09720,
    globals={'current_thread': <function at remote 0x7f272ce27060>, '_BoundedSemaphore': <type at remote 0x1f88210>, 'currentThread': <function at remote 0x7f272ce27060>, '_Timer': <type at remote 0x1f8a6a0>, '_format_exc': <function at remote 0x7f272ce16c30>, 'Semaphore': <function at remote 0x7f272ce22a38>, '_deque': <type at remote 0x8c7c60>, 'activeCount': <function at remote 0x7f272ce27300>, '_profile_hook': None, '_sleep': <built-in function sleep>, '_trace_hook': None, 'ThreadError': <type at remote 0x1df77d0>, '_enumerate': <function at remote 0x7f272ce273a8>, '_start_new_thread': <built-in function start_new_thread>, 'BoundedSemaphore': <function at remote 0x7f272ce241b0>, '_shutdown': <instancemethod at remote 0x7f272d2048e0>, '__all__': ['activeCount', 'active_count', 'Condition', 'currentThread', 'current_thread', 'enumerate', 'Event', 'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread', 'Timer', 'setprofile', 'settrace', 'local', 'stack_size'], '_Event': <type at remote 0x1f88930>, 'active_count': <fu...(truncated), locals=0x0, args=0x7f272708f5f8, argcount=2, kws=0x7f272708f608, kwcount=0, defs=0x7f272ce23168,
    defcount=1, closure=0x0) at ../Python/ceval.c:3252
#7  0x000000000052799e in fast_function (func=<function at remote 0x7f272ce22f78>, pp_stack=0x7ffcb97d78f0, n=2, na=2, nk=0)
    at ../Python/ceval.c:4116
#8  0x0000000000527588 in call_function (pp_stack=0x7ffcb97d78f0, oparg=1) at ../Python/ceval.c:4041
#9  0x00000000005222e1 in PyEval_EvalFrameEx (
    f=Frame 0x7f272708f460, for file /usr/lib/python2.7/threading.py, line 620, in wait (self=<_Event(_Verbose__verbose=False, _Event__flag=False, _Event__cond=<_Condition(_Verbose__verbose=False, _Condition__lock=<thread.lock at remote 0x7f2716b74ea0>, acquire=<built-in method acquire of thread.lock object at remote 0x7f2716b74ea0>, _Condition__waiters=[<thread.lock at remote 0x7f2716b74e00>], release=<built-in method release of thread.lock object at remote 0x7f2716b74ea0>) at remote 0x7f2716b5c920>) at remote 0x7f2716b5c840>, timeout=None), throwflag=0) at ../Python/ceval.c:2666
#10 0x0000000000524b9a in PyEval_EvalCodeEx (co=0x7f272ce0e5c0,
    globals={'current_thread': <function at remote 0x7f272ce27060>, '_BoundedSemaphore': <type at remote 0x1f88210>, 'currentThread': <function at remote 0x7f272ce27060>, '_Timer': <type at remote 0x1f8a6a0>, '_format_exc': <function at remote 0x7f272ce16c30>, 'Semaphore': <function at remote 0x7f272ce22a38>, '_deque': <type at remote 0x8c7c60>, 'activeCount': <function at remote 0x7f272ce27300>, '_profile_hook': None, '_sleep': <built-in function sleep>, '_trace_hook': None, 'ThreadError': <type at remote 0x1df77d0>, '_enumerate': <function at remote 0x7f272ce273a8>, '_start_new_thread': <built-in function start_new_thread>, 'BoundedSemaphore': <function at remote 0x7f272ce241b0>, '_shutdown': <instancemethod at remote 0x7f272d2048e0>, '__all__': ['activeCount', 'active_count', 'Condition', 'currentThread', 'current_thread', 'enumerate', 'Event', 'Lock', 'RLock', 'Semaphore', 'BoundedSemaphore', 'Thread', 'Timer', 'setprofile', 'settrace', 'local', 'stack_size'], '_Event': <type at remote 0x1f88930>, 'active_count': <fu...(truncated), locals=0x0, args=0x2418bb0, argcount=2, kws=0x2418bc0, kwcount=0, defs=0x7f272ce23478, defcount=1,
    closure=0x0) at ../Python/ceval.c:3252
#11 0x000000000052799e in fast_function (func=<function at remote 0x7f272ce24ae0>, pp_stack=0x7ffcb97d7d90, n=2, na=2, nk=0)
    at ../Python/ceval.c:4116
#12 0x0000000000527588 in call_function (pp_stack=0x7ffcb97d7d90, oparg=1) at ../Python/ceval.c:4041
#13 0x00000000005222e1 in PyEval_EvalFrameEx (
    f=Frame 0x24189e0, for file /code/dbg-venv/local/lib/python2.7/site-packages/concurrent/futures/_base.py, line 217, in as_completed (fs=set([<...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <...>, <.....(truncated), throwflag=0) at ../Python/ceval.c:2666
#14 0x0000000000452c14 in gen_send_ex (gen=0x7f2716b4cb60, arg=0x0, exc=0) at ../Objects/genobject.c:85
#15 0x0000000000453581 in gen_iternext (gen=0x7f2716b4cb60) at ../Objects/genobject.c:283
#16 0x0000000000513dce in builtin_next (self=0x0, args=(<generator at remote 0x7f2716b4cb60>,)) at ../Python/bltinmodule.c:1107
#17 0x00000000004877aa in PyCFunction_Call (func=<built-in function next>, arg=(<generator at remote 0x7f2716b4cb60>,), kw=0x0)
    at ../Objects/methodobject.c:81
#18 0x00000000005273b4 in call_function (pp_stack=0x7ffcb97d8120, oparg=1) at ../Python/ceval.c:4020
#19 0x00000000005222e1 in PyEval_EvalFrameEx (
    f=Frame 0x2424df0, for file /code/dbg-venv/local/lib/python2.7/site-packages/click/_termui_impl.py, line 240, in next (self=<ProgressBar(color=None, pos=47, length_known=True, max_width=62, file=<_NonClosingTextIOWrapper(_stream=<_FixupStream(_stream=<file at remote 0x7f272f2751c0>) at remote 0x7f27270a6fb0>) at remote 0x7f272b1edc60>, is_hidden=False, avg=[<float at remote 0x25d2558>, <float at remote 0x25d2530>, <float at remote 0x25d2508>, <float at remote 0x25d24e0>, <float at remote 0x25d24b8>, <float at remote 0x25d25a8>, <float at remote 0x25d25d0>], last_eta=<float at remote 0x25d2468>, width=36, info_sep='  ', bar_template='%(label)-18s [%(bar)s] %(info)s', label='Indexing files', empty_char='-', start=<float at remote 0x2360e88>, entered=True, item_show_func=None, autowidth=False, show_percent=None, show_pos=False, finished=False, fill_char='#', eta_known=True, show_eta=False, iter=<generator at remote 0x7f2716b4cb60>, length=273, current_item=<Future(_exception=None, _result=None, _condition=<_Condit...(truncated), throwflag=0) at ../Python/ceval.c:2666
...

Let me know if I can supply any more information. I'm also not sure if this is more properly filed with upstream, as my codebase isn't Python 3 clean. Thank you!

Map is still greedy

Further testing of map suggests it is still greedy: if you submit a large iterator, it gets consumed almost immediately, causing memory issues.

Here's a simple test. If map wasn't greedy, it would print "time to first result" almost immediately. But it prints when the processing is almost done.

import concurrent.futures
import time
import sys

def job(i):
    return i

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    s = 0
    start_time = time.time()
    for i in executor.map(job, xrange(1, 100000)):
        if s == 0:
            print "time to first result=%.3f s" % (time.time() - start_time)
        s += i
    print "time to finish=%.3f s" % (time.time() - start_time)

Race condition in concurrent.futures / http://bugs.python.org/issue14406

What steps will reproduce the problem?

flist = [executor.submit(...) for x in ...]
futures.wait(flist)

What is the expected output? What do you see instead?

the wait call should return, but it doesn't do that.

What version of the product are you using? On what operating system?

futures-2.1.2

Please provide any additional information below.

see http://bugs.python.org/issue14406

Original issue reported on code.google.com by [email protected] on 20 Aug 2012 at 12:48

ProcessPoolExecutor is broken

Plus, the Python process gets stuck in a state where KeyboardInterrupt doesn't 
work.

As of latest release (2.1.2):

$ ipython
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
Type "copyright", "credits" or "license" for more information.

IPython 0.10 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object'. ?object also works, ?? prints more.

In [1]: import futures
/home/yang/env/lib/python2.6/site-packages/futures/__init__.py:24: 
DeprecationWarning: The futures package has been deprecated. Use the 
concurrent.futures package instead.
  DeprecationWarning)

In [2]: from concurrent.futures import *

In [3]: def go(x): print x
   ...: 

In [4]: with ProcessPoolExecutor(1) as p: list(p.map(go, range(3)))
   ...: 
Traceback (most recent call last):
  File "/home/yang/env/lib/python2.6/multiprocessing/queues.py", line 242, in _feed
    send(obj)
PicklingError: Can't pickle <type 'function'>: attribute lookup 
__builtin__.function failed

^CProcess Process-1:
Traceback (most recent call last):
  File "/home/yang/env/lib/python2.6/multiprocessing/process.py", line 232, in _bootstrap
    self.run()
  File "/home/yang/env/lib/python2.6/multiprocessing/process.py", line 88, in run 
    self._target(*self._args, **self._kwargs)
  File "/home/yang/env/lib/python2.6/site-packages/concurrent/futures/process.py", line 140, in _process_worker
    call_item = call_queue.get(block=True, timeout=0.1)
  File "/home/yang/env/lib/python2.6/multiprocessing/queues.py", line 103, in get 
    if not self._poll(block and (deadline-time.time()) or 0.0):
KeyboardInterrupt

Original issue reported on code.google.com by [email protected] on 19 Apr 2011 at 9:28

Callback when done

I think it would be good to have the possibility to specify a "callback" to be run when the future is done. This would make it easier to integrate with GUIs (or other situations where we prefer signals to polling).
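
For reference, the Future.add_done_callback() method in the PEP 3148 API that this package now implements provides exactly this hook; a minimal usage sketch:

    from concurrent.futures import ThreadPoolExecutor

    def work():
        return 42

    def on_done(future):
        # called when the future finishes (in the worker thread, or
        # immediately in the calling thread if it is already done)
        print 'result:', future.result()

    executor = ThreadPoolExecutor(max_workers=1)
    future = executor.submit(work)
    future.add_done_callback(on_done)
    executor.shutdown(wait=True)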

Original issue reported on code.google.com by [email protected] on 16 Apr 2010 at 9:18

issue with proxy authentication when installing the package "futures: 3.0.5" ("futures 3.0.3" seems to work fine)

Hi there,

Not sure what is the best place to post this. Please tell me if this is not the right place.

I installed conda 3.18.8 (My company used this version) on a PC (Windows 7):

H:>conda info
Current conda install:
platform : win-64
conda version : 3.18.8
conda-build version : 1.18.2
python version : 2.7.11.final.0
requests version : 2.8.1
root environment : C:\Program Files\Anaconda2 (writable)
default environment : C:\Program Files\Anaconda2
envs directories : C:\Program Files\Anaconda2\envs
package cache : C:\Program Files\Anaconda2\pkgs
channel URLs : https://repo.continuum.io/pkgs/free/win-64/
https://repo.continuum.io/pkgs/free/noarch/
https://repo.continuum.io/pkgs/pro/win-64/
https://repo.continuum.io/pkgs/pro/noarch/
config file : C:\Users\C172685.condarc
is foreign system : False

with a condarc file for the proxy config and with the following packages:

pip
scipy
cx_oracle
anaconda-navigator
blaze
conda-manager
dill
et_xmlfile

which gave me
anaconda 2.4.1
conda 4.1.11

when I install "futures: 3.0.5-py27_0 defaults" I get an issue with the proxy (see screenshot 1).

As you can see, it asked for "https proxy username: Password:"
-> I can see all the characters when I type the pwd
-> the pwd doesn't work
-> it will ask me for the pwd again, then it gets stuck and I have to kill the window!

I thought it was an issue with our company proxy, but if I remove it, it works fine again (no issue with the pwd)! (see screenshot 2)

I also found that futures 3.0.3 works fine.
Do the experts understand my issue with futures 3.0.5-py27_0 defaults? Why does it affect authentication with the proxy?

I am new to conda and have almost no knowledge of proxies.

Thanks a lot
Cheers
Fabien

missing braces in concurrent/futures/_base.py

Line 357 is missing braces:
raise type(self._exception), self._exception, self._traceback
It needs to be updated to:
raise (type(self._exception), self._exception, self._traceback)

This issue is causing django-pipeline to break as it depends on this package.

API Change

This is an API Change that changes the current Executor to ExecutorBase and 
adds a new Executor class that is used like

futures.Executor() # creates an executor that uses threading and a 
max_workers = to the number of cpus

futures.Executor(use='process') # Creates an executor that uses 
multiprocessing and a max_workers = to the number of cpus

futures.Executor(max_workers=5) # threading again, just specifying the 
number of workers

futures.Executor(use='process', max_workers=5) # back to multiprocessing, 
but with the max_workers specified



This also includes a patch to fix the examples so they run on windows (it 
relies on the api_change.diff)

Original issue reported on code.google.com by [email protected] on 7 Mar 2010 at 3:06

Attachments:

[Errno 32] Broken pipe When Mapping Too Many Values

First of all, thanks for that great backport. I am using it for the xfork package to support the 2.7 branch.

Unfortunately, there is something strange happening with this futures distribution; the same code works perfectly fine with python3.4:

from concurrent.futures import ProcessPoolExecutor

def calc(n):
    with ProcessPoolExecutor() as pool:
        results = pool.map(term, range(n))
        return sum(results)

def term(x):
    return x

print(calc(5000))
/usr/bin/python2.7 calc.py
12497500
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 266, in _feed
    send(obj)
IOError: [Errno 32] Broken pipe

PSF License and BSD license validity clarification

The main license is BSD, but individual files have a PSF License header. This creates confusion about the correct licensing terms.

It needs clarification which license(s) is/are valid for this package.

The main LICENSE file (BSD) was created on Jun 11, 2009 and not modified later. PSF License boilerplates in individual files were added on 13 Nov 2010, commit @69d0be403e5e118a3bf23a4fc8f1c49f4fd4d36c

atexit hooks are wrongly used

I do not know how to reproduce this with a small program, but when running the tornado tests, I receive this:

--------------------------------
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/home/mmarkk/.local/lib/python2.7/site-packages/concurrent/futures/process.py", line 82, in _python_exit
    items = list(_threads_queues.items())
AttributeError: 'NoneType' object has no attribute 'items'
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/home/mmarkk/.local/lib/python2.7/site-packages/concurrent/futures/thread.py", line 41, in _python_exit
    items = list(_threads_queues.items())
AttributeError: 'NoneType' object has no attribute 'items'
Error in sys.exitfunc:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/home/mmarkk/.local/lib/python2.7/site-packages/concurrent/futures/thread.py", line 41, in _python_exit
    items = list(_threads_queues.items())
AttributeError: 'NoneType' object has no attribute 'items'
---------------------------

But this problem is easily fixed with that:

--- process.py  2015-01-21 15:27:49.458086156 +0500
+++ process.py.new  2015-01-21 15:28:10.906259116 +0500
@@ -76,7 +76,7 @@
 _threads_queues = weakref.WeakKeyDictionary()
 _shutdown = False

-def _python_exit():
+def _python_exit(_threads_queues=_threads_queues):
     global _shutdown
     _shutdown = True
     items = list(_threads_queues.items())


!!!!! and exactly the same for thread.py !!!!

What the hell? I do not know how the _threads_queues may become None.

Original issue reported on code.google.com by [email protected] on 21 Jan 2015 at 10:29

Implement exception-chaining for future.result()

Chained exceptions are vital in debugging programs that use this library.

They are going to be added to py3k with
http://www.python.org/dev/peps/pep-3134/ but for people using python 2,
I've attached a quick-and-dirty implementation that is specific to
python-futures.

I've tested it for ThreadPoolExecutor but not ProcessPoolExecutor.

Sample output when future.result() is called:


Traceback (most recent call last):
  File "src/scrape.py", line 287, in <module>
    sys.exit(main(*args, **opts.__dict__))
  File "src/scrape.py", line 65, in main
    ret = f(*args)
  File "src/scrape.py", line 187, in round_photo
    self.ff.commitUserPhotos(socgr.vs["id"], ppdb)
  File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/flickr.py", line 265, in commitUserPhotos
    self.execAllUnique(users, ppdb, "producer db (user)", run, post, conc_m)
  File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/flickr.py", line 217, in execAllUnique
    LOG.info, "%s: %%(i1)s/%s %%(it)s" % (name, total), expected_length=total):
  File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/util.py", line 188, in enumerate_cb
    for i, item in enumerate(iterable):
  File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/futures/_base.py", line 602, in run_to_results
    raise e
futures._base.ExecutionException: Caused by:
  File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/futures/thread.py", line 87, in run
    result = self.call()
  File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/flickr.py", line 210, in <lambda>
    tasks = [partial(lambda it: (it, run(it)), it) for it in items if it not in done]
  File "/home/infinity0/0/work/compsci/ii-2009-project/tag-routing/src/tags/scrape/flickr.py", line 253, in run
    print a[3]
IndexError('list index out of range',)

Original issue reported on code.google.com by [email protected] on 3 Apr 2010 at 6:29

Attachments:

futures-2.1.4.tar.gz underdone.

What steps will reproduce the problem?
1. unpack futures-2.1.4.tar.gz
2.
3.

What is the expected output? What do you see instead?

Expected contents:

CHANGES  concurrent  docs  futures  LICENSE  setup.cfg  setup.py  test_futures.py  tox.ini

Actual contents:

concurrent  futures  LICENSE  setup.cfg  setup.py

What version of the product are you using? On what operating system?
2.1.4

Please provide any additional information below.

Someone left out required source files: most importantly test_futures.py, and arguably less importantly the WHOLE docs folder, plus CHANGES and tox.ini. The main tree has more content, but these represent an adequate list of MISSING source content to top it up to a similar standard to 2.1.3.
It's beyond me how to run the tests when test_futures.py isn't there!


Original issue reported on code.google.com by [email protected] on 6 Jul 2013 at 11:08

syntax error with raise type(self._exception), self._exception, self._traceback (_base.py)

I installed futures 3.0.3 and it raised the following exception:

from IPython.kernel import KernelManager

File "c:\apythonensae\python\lib\site-packages\IPython\kernel__init__.py", line 4, in
from . import zmq
File "c:\apythonensae\python\lib\site-packages\IPython\kernel\zmq__init__.py", line 10, in
from .session import Session
File "c:\apythonensae\python\lib\site-packages\IPython\kernel\zmq\session.py", line 41, in
from zmq.eventloop.ioloop import IOLoop
File "c:\apythonensae\python\lib\site-packages\zmq\eventloop__init__.py", line 3, in
from zmq.eventloop.ioloop import IOLoop
File "c:\apythonensae\python\lib\site-packages\zmq\eventloop\ioloop.py", line 35, in
from tornado.ioloop import PollIOLoop, PeriodicCallback
File "c:\apythonensae\python\lib\site-packages\tornado\ioloop.py", line 46, in
from tornado.concurrent import TracebackFuture, is_future
File "c:\apythonensae\python\lib\site-packages\tornado\concurrent.py", line 37, in
from concurrent import futures
File "c:\apythonensae\python\lib\site-packages\concurrent\futures__init__.py", line 8, in
from concurrent.futures._base import (FIRST_COMPLETED,
File "c:\apythonensae\python\lib\site-packages\concurrent\futures_base.py", line 355
raise type(self._exception), self._exception, self._traceback
^
SyntaxError: invalid syntax

release 2.1.4 regression.

What steps will reproduce the problem?
1. acquire tarball from pypi, the only one I gather
2. unpack and count
3.

What is the expected output? What do you see instead?
Expected contents:

CHANGES  concurrent  crawl.py  docs  futures  futures.egg-info  LICENSE  PKG-INFO  primes.py  setup.cfg  setup.py  test_futures.py  tox.ini

Actual contents:

concurrent  futures  futures.egg-info  PKG-INFO  setup.cfg  setup.py

What version of the product are you using? On what operating system?
the latest release 2.1.4, gentoo

Please provide any additional information below.

Someone left out half the source code.   ooopsie

Original issue reported on code.google.com by [email protected] on 29 Jun 2013 at 7:25

child processes inherit parent's global state?

Hi,

import concurrent.futures as cf

BEEP = None

def foo(x):
    print BEEP

if __name__ == '__main__':
    BEEP = 500
    with cf.ProcessPoolExecutor(max_workers=4) as executor:
        for i in xrange(10):
            executor.submit(foo, i)

In this example, I would expect all the processes to print None, but they all print 500. Is this the correct behavior?

If so, is there a way to exec clean processes without inheriting memory from the parent?

Child process termination not known by Parent error in concurrent.futures

What steps will reproduce the problem?

1. Submit a task using ProcessPoolExecutor
2. kill -9 <one_of_childrens_pid>
3. parent process gets blocked forever.

What is the expected output? What do you see instead?

We encountered an error in which, if a child process dies or crashes, the parent process is not notified and goes into a blocked state. The other children are either blocked or timed out.

We were able to reproduce this scenario by using the following code and killing one of the children.

#!/home/y/bin64/python2.7

import concurrent.futures
import time
import signal
import os
import sys
import traceback


def just_wait(identifier):
    time.sleep(20)
    return identifier

def signal_handler(sig, stack):
    try:
        result = os.waitpid(-1, os.WNOHANG)
        while result[0]:
            print("Reaped child process %s" % result[0])
            result = os.waitpid(-1, os.WNOHANG)
        traceback.print_stack()
        sys.exit()    
    except (OSError):
        pass

def main():
    with concurrent.futures.ProcessPoolExecutor(max_workers=30) as executor:
        future_to_id = [executor.submit(just_wait, i) for i in range(1, 31)]
        for future in concurrent.futures.as_completed(future_to_id):
            returned_id = future.result()
            print "Process Id: ", returned_id

if __name__=='__main__':
    signal.signal(signal.SIGCHLD, signal_handler)
    main()

The status of one of the child processes:
$sudo strace -p 30974
Password: 
Process 30974 attached - interrupt to quit
restart_syscall(<... resuming interrupted call ...>) = -1 ETIMEDOUT (Connection 
timed out)
gettimeofday({1410964539, 104107}, NULL) = 0
gettimeofday({1410964539, 104165}, NULL) = 0
futex(0x7f3e698e7000, FUTEX_WAIT_BITSET|FUTEX_CLOCK_REALTIME, 0, {1410964539, 
204165000}, ffffffff) = -1 ETIMEDOUT (Connection timed out)
gettimeofday({1410964539, 204812}, NULL) = 0
gettimeofday({1410964539, 204845}, NULL) = 0

The status for parent process:
sudo strace -p 30948
Process 30948 attached - interrupt to quit
futex(0x1addc30, FUTEX_WAIT_PRIVATE, 0, NULL

What version of the product are you using? On what operating system?
RHEL - 6.4.

Please provide any additional information below.
Here's the related issue that got fixed in python 3.3 - 
http://bugs.python.org/issue9205
Since we are using Python 2.7.5, is it possible to backport this fix to futures for 2.7.5 as well?

Original issue reported on code.google.com by [email protected] on 17 Sep 2014 at 9:25

error: can't start new thread

Windows 7 x64, python 2.7.9
I use the requests-futures library, which uses python-futures.

I call this usual code (taken from the requests-futures examples) that makes requests to some internet API many, many times in a loop. It seems to create 1 new thread per request, which looks like a bug.

from requests_futures.sessions import FuturesSession as SessionAsync
session = SessionAsync(max_workers=2)
resp = session.get(self.main_address,
params=self.query_params,
background_callback=self.parse_response,
proxies=...,
timeout=...)

When the loop counter reaches ~600 and the number of threads reaches 600, the failure occurs:

File "C:\myscript.py", line 223, in get_coords_for_addr
timeout=self.web_config.timeout)
File "C:\Python27\ArcGIS10.2\lib\site-packages\requests\sessions.py", line 476, in get
return self.request('GET', url, *_kwargs)
File "C:\Python27\ArcGIS10.2\lib\site-packages\requests_futures\sessions.py", line 73, in request
return self.executor.submit(func, *args, *_kwargs)
File "C:\Python27\ArcGIS10.2\lib\site-packages\concurrent\futures\thread.py", line 111, in submit
self._adjust_thread_count()
File "C:\Python27\ArcGIS10.2\lib\site-packages\concurrent\futures\thread.py", line 127, in _adjust_thread_count
t.start()
File "C:\Python27\ArcGIS10.2\lib\threading.py", line 745, in start
_start_new_thread(self.__bootstrap, ())
error: can't start new thread

pthread_cond_wait: Invalid argument

What steps will reproduce the problem?
1. Run 10-15 threads
2. Wait 1-10 minutes

What is the expected output? What do you see instead?

pthread_cond_wait: Invalid argument

What version of the product are you using? On what operating system?

futures v2.1.2, Mac OS X Snow Leopard


Original issue reported on code.google.com by [email protected] on 29 May 2011 at 7:33

Threading needs at least one statement after submit to start

OS: Windows 8.1, Python 2.7.9 32-bit and 3.4.3 64-bit, also tested under Cygwin and Ubuntu 14.10

The following code snippet will return False on Python2 with pythonfutures and True on Python3 with the builtin module:

import time, concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(max_workers = 1)
future   = executor.submit(time.sleep, 1)
#print('')
print(future.running())

If you uncomment print(''), Python2 with pythonfutures will print True.

concurrency errors running multiple processes together with a process pool executor

I'm trying to run multiple processes and at the same time use the 
concurrent.futures.ProcessPoolExecutor to run CPU intensive jobs. The first few 
requests are happily served, but then a KeyError is raised from 
concurrent.futures.process, and the server hangs.

I submitted a bug report for Tornado, and they suggest it's a problem with 
concurrent package.

This is the simplest form I stripped the code to.

server:

"""
server runs 2 processes and does job on a ProcessPoolExecutor
"""
import tornado.web
import tornado.ioloop
import tornado.gen
import tornado.options
import tornado.httpserver

from concurrent.futures import ProcessPoolExecutor


class MainHandler(tornado.web.RequestHandler):

    executor = ProcessPoolExecutor(1)

    @tornado.gen.coroutine
    def post(self):
        num = int(self.request.body)
        result = yield self.executor.submit(pow, num, 2)
        self.finish(str(result))


application = tornado.web.Application([
    (r"/", MainHandler),
])


def main():
    tornado.options.parse_command_line()
    server = tornado.httpserver.HTTPServer(application)
    server.bind(8888)
    server.start(2)
    tornado.ioloop.IOLoop.instance().start()


if __name__ == '__main__':
    main()
client

"""
client
"""
from tornado.httpclient import AsyncHTTPClient
from tornado.gen import coroutine
from tornado.ioloop import IOLoop


@coroutine
def remote_compute(num):
    rsp = yield AsyncHTTPClient().fetch(
        'http://127.0.0.1:8888', method='POST', body=str(num))
    print 'result:', rsp.body

IOLoop.instance().run_sync(lambda: remote_compute(10))
error traceback

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/local/Cellar/python/2.7.7_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/local/Cellar/python/2.7.7_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/Users/cliffxuan/.virtualenvs/executor/lib/python2.7/site-packages/concurrent/futures/process.py", line 216, in _queue_management_worker
    work_item = pending_work_items[result_item.work_id]
KeyError: 0

Original issue reported on code.google.com by [email protected] on 15 Oct 2014 at 8:44
