Celery

Latest version: v5.4.0


2.7.3.7

--------------------

- Fixes Python 2.5 support.

2.7.3.6

--------------------

- Pool: Can now be used in an event loop, without starting the supporting
threads (TimeoutHandler still not supported)

To facilitate this, the pool has gained the following keyword arguments (a combined usage sketch follows the method list below):

- ``with_task_thread``
- ``with_result_thread``
- ``with_supervisor_thread``
- ``on_process_up``

Callback called with Process instance as argument
whenever a new worker process is added.

Used to add new process fds to the eventloop::

    def on_process_up(proc):
        hub.add_reader(proc.sentinel, pool.maintain_pool)

- ``on_process_down``

Callback called with Process instance as argument
whenever a worker process is found dead.

Used to remove process fds from the eventloop::

    def on_process_down(proc):
        hub.remove(proc.sentinel)

- ``semaphore``

Sets the semaphore used to protect against adding new items to the
pool when no processes are available. The default is a threaded
semaphore, so this can be used to switch to an async semaphore.

And the following attributes:

- ``readers``

A map of ``fd`` -> ``callback``, to be registered in an eventloop.
Currently this is only the result outqueue with a callback
that processes all currently incoming results.

And the following methods:

- ``did_start_ok``

To be called after starting the pool, and after setting up the
eventloop with the pool fds, to ensure that the worker processes
did not exit immediately because of an error (internal or memory error).

- ``maintain_pool``

Public version of ``_maintain_pool`` that handles max restarts.
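
Taken together, these hooks let the pool be driven from an event loop.
The following is a minimal sketch only: it assumes the keyword arguments
above are passed to the ``Pool`` constructor, that passing ``False`` for
the ``with_*_thread`` options skips starting those threads, and that
``hub`` is an event-loop object providing the ``add_reader()`` and
``remove()`` methods used in the snippets above::

    from billiard.pool import Pool

    def on_process_up(proc):
        # watch the worker's sentinel fd so the hub can trigger pool maintenance
        hub.add_reader(proc.sentinel, pool.maintain_pool)

    def on_process_down(proc):
        # stop watching a worker that has exited
        hub.remove(proc.sentinel)

    pool = Pool(
        processes=4,
        with_task_thread=False,
        with_result_thread=False,
        with_supervisor_thread=False,
        on_process_up=on_process_up,
        on_process_down=on_process_down,
    )

    # register the result outqueue reader(s) with the event loop
    for fd, callback in pool.readers.items():
        hub.add_reader(fd, callback)

    # make sure no worker exited immediately after startup
    if not pool.did_start_ok():
        raise RuntimeError('worker processes exited immediately after start')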

- Pool: The protection against too frequent process restarts now only counts
processes that had a non-successful exit-code.

This is to take the ``maxtasksperchild`` option into account, allowing
processes to exit cleanly on their own.

- Pool: New options ``max_restart`` + ``max_restart_freq``

This means that the supervisor can't restart processes
more than ``max_restart`` times per ``max_restart_freq`` seconds
(like the Erlang supervisor maxR & maxT settings).

The pool is closed and joined if the max restart
frequency is exceeded, where previously it would keep restarting
at an unlimited rate, possibly crashing the system.

The current default is to stop if the pool exceeds
100 * process_count restarts in 1 second. This may change later.

Only processes with an unsuccessful exit code are counted,
to take into account the ``maxtasksperchild`` setting
and code that voluntarily exits (see the illustrative sketch below).
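
As a rough illustration of this rule only (not billiard's actual
implementation), a supervisor could track restarts in a sliding time
window like this::

    import time
    from collections import deque

    class RestartRateLimiter(object):
        """Allow at most ``max_restart`` restarts per ``max_restart_freq`` seconds."""

        def __init__(self, max_restart, max_restart_freq):
            self.max_restart = max_restart
            self.max_restart_freq = max_restart_freq
            self._restarts = deque()

        def ok_to_restart(self):
            # record one restart and drop entries outside the time window
            now = time.monotonic()
            self._restarts.append(now)
            while self._restarts and now - self._restarts[0] > self.max_restart_freq:
                self._restarts.popleft()
            return len(self._restarts) <= self.max_restart

    # the default mentioned above: 100 * process_count restarts per second
    limiter = RestartRateLimiter(max_restart=100 * 4, max_restart_freq=1.0)
    if not limiter.ok_to_restart():
        pass  # close and join the pool instead of restarting again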

- Pool: The ``WorkerLostError`` message now includes the exit-code of the
process that disappeared.
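
For example, the exit code now shows up when waiting for a result from a
worker that was killed. A sketch, assuming the exception is importable from
``billiard.exceptions`` and that ``result`` is an ``AsyncResult`` returned
by the pool::

    from billiard.exceptions import WorkerLostError

    try:
        result.get(timeout=10)
    except WorkerLostError as exc:
        # the message now contains the dead worker's exit code
        print('worker lost: %s' % (exc,))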

2.7.3.5

--------------------

- Now always cleans up after ``sys.exc_info()`` to avoid
cyclic references.

- ExceptionInfo without arguments now defaults to ``sys.exc_info``.
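
A minimal sketch, assuming ``ExceptionInfo`` is importable from
``billiard.einfo`` and exposes a formatted ``traceback`` attribute::

    from billiard.einfo import ExceptionInfo

    try:
        1 / 0
    except Exception:
        # no arguments needed: the current sys.exc_info() is captured
        einfo = ExceptionInfo()
        print(einfo.traceback)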

- Forking can now be disabled using the
``MULTIPROCESSING_FORKING_DISABLE`` environment variable.

This environment variable is also set internally, so that the behavior
is inherited after execv.
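
A minimal sketch of disabling forking from the environment; it assumes any
non-empty value is treated as "disabled" and that the variable must be set
before billiard starts any processes::

    import os

    # must be set before billiard creates worker processes
    os.environ['MULTIPROCESSING_FORKING_DISABLE'] = '1'

    import billiard  # imported after configuring the environment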

- The semaphore cleanup process started when execv is used
now sets a useful process name if the ``setproctitle``
module is installed.

- Sets the ``FORKED_BY_MULTIPROCESSING``
environment variable if forking is disabled.

2.7.3.4

--------------------

- Added ``billiard.ensure_multiprocessing()``

Raises ``NotImplementedError`` if the platform does not support
multiprocessing (e.g. Jython).
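
For example, it can be used to guard an optional process-based code path.
A sketch only; the thread-based fallback is hypothetical::

    import billiard

    def start_workers():
        try:
            billiard.ensure_multiprocessing()
        except NotImplementedError:
            # e.g. on Jython: fall back to some hypothetical thread-based path
            return start_thread_workers()
        return billiard.Pool(processes=4)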

2.7.3.3

--------------------

- PyPy now falls back to using its internal ``_multiprocessing`` module,
so everything works except for ``forking_enable(False)`` (which
silently degrades).

- Fixed Python 2.5 compat. issues.

- Uses more ``with`` statements.

- Merged some of the changes from the Python 3 branch.

2.7.3.2

--------------------

- Now installs on PyPy/Jython (but does not work).
