| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.11.0 | <=1.11.0 | **DISPUTED** Py through 1.11.0 allows remote attackers to conduct a ReDoS (regular expression denial of service) attack via a Subversion repository with crafted info data, because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| rsa | 3.4.2 | >=3.0,<4.7 | Rsa 4.7 includes a fix for CVE-2020-25658: python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of ciphertext encrypted with RSA. |
| rsa | 3.4.2 | <4.3 | Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 | Affected versions of Scrapy are vulnerable to improper input validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 | Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files, because arbitrarily many files are read into memory. This is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by the interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| twisted | 19.10.0 | <24.7.0rc1 | Affected versions of Twisted are vulnerable to HTTP request smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out of order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 | Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP request smuggling. The vulnerability arises from inadequate validation of request headers, enabling an attacker to smuggle requests through several techniques: multiple Content-Length headers, a Content-Length header combined with a Transfer-Encoding header, or a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 | Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with both a Content-Length and a chunked encoding header, the Content-Length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 | Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 | Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, Twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent'. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 | Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two Content-Length headers, it ignored the first; when the second Content-Length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 | Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL, this may result in reflected cross-site scripting (XSS) in the redirect response's HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 | Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: disordered HTTP pipeline responses in twisted.web. Note: the data in this advisory differs from what is publicly available on nvd.nist.gov; as indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 | Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Affected users run Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected, and the HTTP 2.0 server uses a different parser, so it is not affected either. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them, or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
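The "Affected" column above uses comma-separated version specifiers such as `>=3.0,<4.7`. A minimal sketch of checking an installed version against such a range follows; `is_affected` is a hypothetical helper, not a PEP 440-compliant parser (pre-release tags like `24.7.0rc1` are not ordered correctly here — use the third-party `packaging` library for real checks):

```python
import re

def _key(version):
    # "4.6.3" -> (4, 6, 3); keeps only the numeric components for tuple comparison
    return tuple(int(p) for p in re.findall(r"\d+", version))

def is_affected(installed, spec):
    """Return True if `installed` satisfies every clause of `spec`, e.g. ">=3.0,<4.7"."""
    ops = {
        "<=": lambda a, b: a <= b,
        ">=": lambda a, b: a >= b,
        "==": lambda a, b: a == b,
        "<":  lambda a, b: a < b,
        ">":  lambda a, b: a > b,
    }
    for clause in spec.split(","):
        m = re.match(r"(<=|>=|==|<|>)\s*(\S+)", clause.strip())
        op, bound = m.groups()
        if not ops[op](_key(installed), _key(bound)):
            return False  # one failed clause means the version is outside the range
    return True

# e.g. is_affected("3.4.2", ">=3.0,<4.7") -> True
```

An all-clauses-must-hold loop matches how these reports combine specifiers: a row flags the installed version only when every clause in its range is satisfied.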
| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.8.0 | <=1.11.0 | **DISPUTED** Py through 1.11.0 allows remote attackers to conduct a ReDoS (regular expression denial of service) attack via a Subversion repository with crafted info data, because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| py | 1.8.0 | <=1.9.0 | Py 1.10.0 includes a fix for CVE-2020-29651: a regular-expression denial of service in the py.path.svnwc component of py (aka python-py) through 1.9.0 could be used by attackers to cause a compute-time denial of service by supplying malicious input to the blame functionality. |
| rsa | 3.4.2 | >=3.0,<4.7 | Rsa 4.7 includes a fix for CVE-2020-25658: python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of ciphertext encrypted with RSA. |
| rsa | 3.4.2 | <4.3 | Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| lxml | 4.4.2 | <6.1.0 | Affected versions of lxml are vulnerable to XML External Entity (XXE) injection due to an insecure default parser configuration that resolves external entities. The iterparse() function and the ETCompatXMLParser() class both default to resolve_entities=True, so untrusted XML input processed through either parser will expand external entity references and read the referenced local files from the host. An attacker who supplies a crafted XML document to an application using these parsers in their default configuration can read sensitive local files and exfiltrate their contents through the parsed output. |
| lxml | 4.4.2 | <4.9.1 | Lxml 4.9.1 includes a fix for CVE-2022-2309: a NULL pointer dereference allows attackers to cause a denial of service (application crash). This only applies when lxml is used together with libxml2 2.9.10 through 2.9.14; libxml2 2.9.9 and earlier are not affected. Crashes can be triggered through forged input data, given a vulnerable code sequence in the application. The vulnerability is caused by the iterwalk function (also used by the canonicalize function). Such code shouldn't be in widespread use, given that parsing followed by iterwalk would usually be replaced with the more efficient iterparse function; however, an XML converter that serialises to C14N would also be vulnerable, for example, and there are legitimate use cases for this code sequence. If untrusted input is received (including remotely) and processed via the iterwalk function, a crash can be triggered. |
| lxml | 4.4.2 | <4.6.3 | Lxml 4.6.3 includes a fix for CVE-2021-28957: an XSS vulnerability was discovered in python-lxml's clean module in versions before 4.6.3. When the safe_attrs_only and forms arguments are disabled, the Cleaner class does not remove the formaction attribute, allowing JS to bypass the sanitizer. A remote attacker could exploit this flaw to run arbitrary JS code on users who interact with incorrectly sanitized HTML. https://bugs.launchpad.net/lxml/+bug/1888153 |
| lxml | 4.4.2 | <4.6.5 | Lxml 4.6.5 includes a fix for CVE-2021-43818: prior to version 4.6.5, the HTML Cleaner in lxml.html lets certain crafted script content pass through, as well as script content in SVG files embedded using data URIs. Users who employ the HTML cleaner in a security-relevant context should upgrade to lxml 4.6.5 to receive a patch. |
| lxml | 4.4.2 | <4.6.2 | Lxml 4.6.2 includes a fix for CVE-2020-27783: an XSS vulnerability was discovered in python-lxml's clean module. The module's parser didn't properly imitate browsers, which caused different behaviors between the sanitizer and the user's page. A remote attacker could exploit this flaw to run arbitrary HTML/JS code. |
| babel | 2.7.0 | <2.9.1 | Babel 2.9.1 includes a fix for CVE-2021-42771: Babel.Locale in Babel before 2.9.1 allows attackers to load arbitrary locale .dat files (containing serialized Python objects) via directory traversal, leading to code execution. https://github.com/python-babel/babel/pull/782 |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 | Affected versions of Scrapy are vulnerable to improper input validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 | Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files, because arbitrarily many files are read into memory. This is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by the interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| ipython | 7.10.2 | >=8.0.0a0,<8.0.1; >=7.17.0,<7.31.1; >=6.0.0a0,<7.16.3; <5.11 | IPython versions 8.0.1, 7.31.1, 7.16.3 and 5.11 include a fix for CVE-2022-21699: affected versions are subject to an arbitrary code execution vulnerability caused by improper management of cross-user temporary files. This vulnerability allows one user to run code as another on the same machine. https://github.com/ipython/ipython/security/advisories/GHSA-pq7m-3gw7-gq5x |
| ipython | 7.10.2 | <8.10.0 | IPython 8.10.0 includes a fix for CVE-2023-24816: versions prior to 8.10.0 are subject to a command injection vulnerability with very specific prerequisites. It requires that the function 'IPython.utils.terminal.set_term_title' be called on Windows in a Python environment where ctypes is not available. The dependency on 'ctypes' in 'IPython.utils._process_win32' prevents the vulnerable code from ever being reached in the ipython binary; however, as a library that could be used by another tool, 'set_term_title' could be called and hence introduce the vulnerability. If an attacker gets untrusted input into a call to this function, they would be able to inject shell commands as the current process, limited to the scope of the current process. As a workaround, users should ensure that any calls to 'IPython.utils.terminal.set_term_title' are made with trusted or filtered input. https://github.com/ipython/ipython/security/advisories/GHSA-29gw-9793-fvw7 |
| paramiko | 2.7.1 | <3.4.0 | Paramiko 3.4.0 fixes vulnerabilities affecting encrypt-then-MAC digest algorithms in tandem with CBC ciphers, and ChaCha20-Poly1305. The fix requires cooperation from both ends of the connection, making it effective when the remote end is OpenSSH >= 9.6 and configured to use the new "strict kex" mode. For further details, refer to the official Paramiko documentation or GitHub repository. https://github.com/advisories/GHSA-45x7-px36-x8w8 |
| twisted | 19.10.0 | <24.7.0rc1 | Affected versions of Twisted are vulnerable to HTTP request smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out of order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 | Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP request smuggling. The vulnerability arises from inadequate validation of request headers, enabling an attacker to smuggle requests through several techniques: multiple Content-Length headers, a Content-Length header combined with a Transfer-Encoding header, or a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 | Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with both a Content-Length and a chunked encoding header, the Content-Length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 | Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 | Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, Twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent'. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 | Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two Content-Length headers, it ignored the first; when the second Content-Length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 | Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL, this may result in reflected cross-site scripting (XSS) in the redirect response's HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 | Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: disordered HTTP pipeline responses in twisted.web. Note: the data in this advisory differs from what is publicly available on nvd.nist.gov; as indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 | Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Affected users run Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected, and the HTTP 2.0 server uses a different parser, so it is not affected either. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them, or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| prompt-toolkit | 3.0.2 | <3.0.13 | Prompt-toolkit 3.0.13 fixes a race condition in `ThreadedHistory` which could lead to a deadlock. https://github.com/prompt-toolkit/python-prompt-toolkit/commit/99092a8c6d4b411645ac4b84d504e5226e7eebb8#diff-48c0ff10dc3990285d19b3f54e6bfec763089ba1229dc6f9e88463a1046adad7R163 |
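Where the findings above name a fixed release, they imply a minimum safe version. A sketch of a pip constraints file derived only from this report (verify each advisory before pinning; py and scrapy have no fixed release listed here):

```text
# constraints.txt — minimum versions clearing the fixed findings above (sketch)
rsa>=4.7
lxml>=6.1.0
babel>=2.9.1
ipython>=8.10.0
paramiko>=3.4.0
twisted>=24.7.0
prompt-toolkit>=3.0.13
# py: all releases up to 1.11.0 are flagged (disputed ReDoS); no fixed release listed
# scrapy: versions <=2.14.1 are flagged; no fixed release listed in this report
```

Constraints apply at install time, e.g. `pip install -c constraints.txt -r requirements.txt`, without forcing packages to be installed that the project does not already require.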
| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.11.0 | <=1.11.0 |
show ** DISPUTED ** Py throughout 1.11.0 allows remote attackers to conduct a ReDoS (Regular expression Denial of Service) attack via a Subversion repository with crafted info data because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| rsa | 3.4.2 | >=3.0,<4.7 |
show Rsa 4.7 includes a fix for CVE-2020-25658: It was found that python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of the cipher text encrypted with RSA. |
| rsa | 3.4.2 | <4.3 |
show Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker to infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 |
show Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 |
show Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 |
show Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 |
show Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to HTTP Request Smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out-of-order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 |
show Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP Request Smuggling. This vulnerability arises from inadequate validation of modified request headers, enabling an attacker to smuggle requests through several techniques: employing multiple Content-Length headers, combining a Content-Length header with a Transfer-Encoding header, or utilizing a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 |
show Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 |
show Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in the 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent' functions. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two content-length headers, it ignored the first header. When the second content-length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL this vulnerability may result in Reflected Cross-Site Scripting (XSS) in the redirect response HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 |
show Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: Disordered HTTP pipeline response in twisted.web. #NOTE: The data we include in this advisory differs from the publicly available on nist.nvd.gov. As indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 |
show Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Users who may be affected use Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected. The HTTP 2.0 server uses a different parser, so it is not affected. Two workarounds are available: Ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them; or filtering malformed requests by other means, such as configurating an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.8.0 | <=1.11.0 |
show ** DISPUTED ** Py throughout 1.11.0 allows remote attackers to conduct a ReDoS (Regular expression Denial of Service) attack via a Subversion repository with crafted info data because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| py | 1.8.0 | <=1.9.0 |
show Py 1.10.0 includes a fix for CVE-2020-29651: A denial of service via regular expression in the py.path.svnwc component of py (aka python-py) through 1.9.0 could be used by attackers to cause a compute-time denial of service attack by supplying malicious input to the blame functionality. |
| rsa | 3.4.2 | >=3.0,<4.7 |
show Rsa 4.7 includes a fix for CVE-2020-25658: It was found that python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of the cipher text encrypted with RSA. |
| rsa | 3.4.2 | <4.3 |
show Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker to infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| lxml | 4.4.2 | <6.1.0 |
show Affected versions of the lxml package are vulnerable to XML External Entity Injection due to insecure default parser configuration that resolves external entities. The iterparse() function and the ETCompatXMLParser() class both default to resolve_entities=True, so untrusted XML input processed through either parser will expand external entity references and read referenced local files from the host. An attacker who supplies a crafted XML document to an application using these parsers in their default configuration can read sensitive local files and exfiltrate their contents through the parsed output. |
| lxml | 4.4.2 | <4.9.1 |
show Lxml 4.9.1 includes a fix for CVE-2022-2309: NULL Pointer Dereference allows attackers to cause a denial of service (or application crash). This only applies when lxml is used together with libxml2 2.9.10 through 2.9.14. libxml2 2.9.9 and earlier are not affected. It allows triggering crashes through forged input data, given a vulnerable code sequence in the application. The vulnerability is caused by the iterwalk function (also used by the canonicalize function). Such code shouldn't be in wide-spread use, given that parsing + iterwalk would usually be replaced with the more efficient iterparse function. However, an XML converter that serialises to C14N would also be vulnerable, for example, and there are legitimate use cases for this code sequence. If untrusted input is received (also remotely) and processed via iterwalk function, a crash can be triggered. |
| lxml | 4.4.2 | <4.6.3 |
show Lxml version 4.6.3 includes a fix for CVE-2021-28957: An XSS vulnerability was discovered in python-lxml's clean module versions before 4.6.3. When disabling the safe_attrs_only and forms arguments, the Cleaner class does not remove the formation attribute allowing for JS to bypass the sanitizer. A remote attacker could exploit this flaw to run arbitrary JS code on users who interact with incorrectly sanitized HTML. https://bugs.launchpad.net/lxml/+bug/1888153 |
| lxml | 4.4.2 | <4.6.5 |
show Lxml 4.6.5 includes a fix for CVE-2021-43818: Prior to version 4.6.5, the HTML Cleaner in lxml.html lets certain crafted script content pass through, as well as script content in SVG files embedded using data URIs. Users that employ the HTML cleaner in a security relevant context should upgrade to lxml 4.6.5 to receive a patch. |
| lxml | 4.4.2 | <4.6.2 |
show Lxml 4.6.2 includes a fix for CVE-2020-27783: A XSS vulnerability was discovered in python-lxml's clean module. The module's parser didn't properly imitate browsers, which caused different behaviors between the sanitizer and the user's page. A remote attacker could exploit this flaw to run arbitrary HTML/JS code. |
| babel | 2.7.0 | <2.9.1 |
show Babel 2.9.1 includes a fix for CVE-2021-42771: Babel.Locale in Babel before 2.9.1 allows attackers to load arbitrary locale .dat files (containing serialized Python objects) via directory traversal, leading to code execution. https://github.com/python-babel/babel/pull/782 |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 |
show Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 |
show Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| ipython | 7.10.2 | >=8.0.0a0,<8.0.1 , >=7.17.0,<7.31.1 , >=6.0.0a0,<7.16.3 , <5.11 |
show Ipython versions 8.0.1, 7.31.1, 7.16.3 and 5.11 include a fix for CVE-2022-21699: Affected versions are subject to an arbitrary code execution vulnerability achieved by not properly managing cross user temporary files. This vulnerability allows one user to run code as another on the same machine. https://github.com/ipython/ipython/security/advisories/GHSA-pq7m-3gw7-gq5x |
| ipython | 7.10.2 | <8.10.0 |
show IPython 8.10.0 includes a fix for CVE-2023-24816: Versions prior to 8.10.0 are subject to a command injection vulnerability with very specific prerequisites. This vulnerability requires that the function 'IPython.utils.terminal.set_term_title' be called on Windows in a Python environment where ctypes is not available. The dependency on 'ctypes' in 'IPython.utils._process_win32' prevents the vulnerable code from ever being reached in the ipython binary. However, as a library that could be used by another tool, 'set_term_title' could be called and hence introduce a vulnerability. If an attacker gets untrusted input into a call to this function, they would be able to inject shell commands running as the current process and limited to its scope. As a workaround, users should ensure that any calls to the 'IPython.utils.terminal.set_term_title' function are done with trusted or filtered input. https://github.com/ipython/ipython/security/advisories/GHSA-29gw-9793-fvw7 |
| paramiko | 2.7.1 | <3.4.0 |
show Paramiko 3.4.0 has been released to fix vulnerabilities affecting encrypt-then-MAC digest algorithms used in tandem with CBC ciphers, and the ChaCha20-Poly1305 cipher. The fix requires cooperation from both ends of the connection, making it effective when the remote end is OpenSSH >= 9.6 and configured to use the new “strict kex” mode. For further details, refer to the official Paramiko documentation or GitHub repository. https://github.com/advisories/GHSA-45x7-px36-x8w8 |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to HTTP Request Smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out-of-order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 |
show Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP Request Smuggling. This vulnerability arises from inadequate validation of modified request headers, enabling an attacker to smuggle requests through several techniques: employing multiple Content-Length headers, combining a Content-Length header with a Transfer-Encoding header, or utilizing a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 |
show Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 |
show Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in the 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent' functions. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two content-length headers, it ignored the first header. When the second content-length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL this vulnerability may result in Reflected Cross-Site Scripting (XSS) in the redirect response HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 |
show Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: Disordered HTTP pipeline response in twisted.web. #NOTE: The data we include in this advisory differs from the data publicly available at nvd.nist.gov. As indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 |
show Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Users who may be affected use Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected. The HTTP 2.0 server uses a different parser, so it is not affected. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them; or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| prompt-toolkit | 3.0.2 | <3.0.13 |
show Prompt-toolkit 3.0.13 fixes a race condition in `ThreadedHistory` which could lead to a deadlock. https://github.com/prompt-toolkit/python-prompt-toolkit/commit/99092a8c6d4b411645ac4b84d504e5226e7eebb8#diff-48c0ff10dc3990285d19b3f54e6bfec763089ba1229dc6f9e88463a1046adad7R163 |
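The IPython advisory above recommends calling `set_term_title` only with trusted or filtered input. A minimal sketch of such a filter follows; the `sanitize_title` helper is hypothetical (not part of IPython) and simply strips control characters and common shell metacharacters before a value would reach a terminal-title call such as `IPython.utils.terminal.set_term_title`:

```python
import re

# Drop control characters (including ESC and BEL) plus shell metacharacters,
# so untrusted text cannot smuggle terminal escape sequences or command
# separators into a terminal-title call.
_CONTROL_OR_META = re.compile(r"[\x00-\x1f\x7f;&|`$]")

def sanitize_title(untrusted: str, max_len: int = 128) -> str:
    """Return a version of `untrusted` that is safe to pass to a set-title helper."""
    return _CONTROL_OR_META.sub("", untrusted)[:max_len]

# Example: the ANSI escape, BEL, and ';' separators are removed.
sanitize_title("repo\x1b]0;evil\x07")  # → 'repo]0evil'
```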
| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.8.0 | <=1.11.0 |
show ** DISPUTED ** Py through 1.11.0 allows remote attackers to conduct a ReDoS (Regular expression Denial of Service) attack via a Subversion repository with crafted info data because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| py | 1.8.0 | <=1.9.0 |
show Py 1.10.0 includes a fix for CVE-2020-29651: A denial of service via regular expression in the py.path.svnwc component of py (aka python-py) through 1.9.0 could be used by attackers to cause a compute-time denial of service attack by supplying malicious input to the blame functionality. |
| rsa | 3.4.2 | >=3.0,<4.7 |
show Rsa 4.7 includes a fix for CVE-2020-25658: It was found that python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of the ciphertext encrypted with RSA. |
| rsa | 3.4.2 | <4.3 |
show Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker to infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| zipp | 0.6.0 | <3.19.1 |
show A Denial of Service (DoS) vulnerability exists in the jaraco/zipp library. The vulnerability is triggered when processing a specially crafted zip file that leads to an infinite loop. This issue also impacts the zipfile module of CPython, as features from the third-party zipp library were later merged into CPython, and the affected code is identical in both projects. The infinite loop can be initiated through the use of functions affecting the `Path` module in both zipp and zipfile, such as `joinpath`, the overloaded division operator, and `iterdir`. Although the infinite loop is not resource exhaustive, it prevents the application from responding. |
| lxml | 4.4.2 | <6.1.0 |
show Affected versions of the lxml package are vulnerable to XML External Entity Injection due to insecure default parser configuration that resolves external entities. The iterparse() function and the ETCompatXMLParser() class both default to resolve_entities=True, so untrusted XML input processed through either parser will expand external entity references and read referenced local files from the host. An attacker who supplies a crafted XML document to an application using these parsers in their default configuration can read sensitive local files and exfiltrate their contents through the parsed output. |
| lxml | 4.4.2 | <4.9.1 |
show Lxml 4.9.1 includes a fix for CVE-2022-2309: NULL Pointer Dereference allows attackers to cause a denial of service (or application crash). This only applies when lxml is used together with libxml2 2.9.10 through 2.9.14. libxml2 2.9.9 and earlier are not affected. It allows triggering crashes through forged input data, given a vulnerable code sequence in the application. The vulnerability is caused by the iterwalk function (also used by the canonicalize function). Such code shouldn't be in widespread use, given that parsing + iterwalk would usually be replaced with the more efficient iterparse function. However, an XML converter that serialises to C14N would also be vulnerable, for example, and there are legitimate use cases for this code sequence. If untrusted input is received (also remotely) and processed via the iterwalk function, a crash can be triggered. |
| lxml | 4.4.2 | <4.6.3 |
show Lxml version 4.6.3 includes a fix for CVE-2021-28957: An XSS vulnerability was discovered in python-lxml's clean module versions before 4.6.3. When disabling the safe_attrs_only and forms arguments, the Cleaner class does not remove the formaction attribute, allowing JS to bypass the sanitizer. A remote attacker could exploit this flaw to run arbitrary JS code on users who interact with incorrectly sanitized HTML. https://bugs.launchpad.net/lxml/+bug/1888153 |
| lxml | 4.4.2 | <4.6.5 |
show Lxml 4.6.5 includes a fix for CVE-2021-43818: Prior to version 4.6.5, the HTML Cleaner in lxml.html lets certain crafted script content pass through, as well as script content in SVG files embedded using data URIs. Users that employ the HTML cleaner in a security relevant context should upgrade to lxml 4.6.5 to receive a patch. |
| lxml | 4.4.2 | <4.6.2 |
show Lxml 4.6.2 includes a fix for CVE-2020-27783: An XSS vulnerability was discovered in python-lxml's clean module. The module's parser didn't properly imitate browsers, which caused different behaviors between the sanitizer and the user's page. A remote attacker could exploit this flaw to run arbitrary HTML/JS code. |
| pyjwt | 1.7.1 | >=1.0.0,<2.4.0 |
show PyJWT 2.4.0 includes a fix for CVE-2022-29217: An attacker submitting the JWT token can choose the used signing algorithm. The PyJWT library requires that the application chooses what algorithms are supported. The application can specify 'jwt.algorithms.get_default_algorithms()' to get support for all algorithms, or specify a single algorithm. The impact is limited, as 'algorithms=jwt.algorithms.get_default_algorithms()' has to be used for an application to be affected. As a workaround, always be explicit with the algorithms that are accepted and expected when decoding. |
| pyjwt | 1.7.1 | <2.12.0 |
show Affected versions of this package are vulnerable to Insufficient Verification of Data Authenticity. The library does not validate the `crit` (Critical) Header Parameter as required by RFC 7515 §4.1.11 — when a JWT contains a `crit` array listing extensions that the library does not understand, the token is accepted instead of rejected. An attacker can exploit this vulnerability by crafting JWTs with unknown critical extensions (e.g., MFA requirements, token binding, scope restrictions) that are silently ignored, potentially bypassing security policies or causing split-brain verification in mixed-library deployments where other RFC-compliant libraries would reject the same token. |
| tqdm | 4.41.0 | >=4.4.0,<4.66.3 |
show Tqdm version 4.66.3 addresses CVE-2024-34062, a vulnerability where optional non-boolean CLI arguments like `--delim`, `--buf-size`, and `--manpath` were passed through Python's `eval`, allowing for arbitrary code execution. This security risk, only locally exploitable, has been mitigated in this release. Users are advised to upgrade to version 4.66.3 immediately as there are no workarounds for this issue. |
| babel | 2.7.0 | <2.9.1 |
show Babel 2.9.1 includes a fix for CVE-2021-42771: Babel.Locale in Babel before 2.9.1 allows attackers to load arbitrary locale .dat files (containing serialized Python objects) via directory traversal, leading to code execution. https://github.com/python-babel/babel/pull/782 |
| pyyaml | 5.3b1 | <5.3.1 |
show Pyyaml 5.3.1 includes a fix for CVE-2020-1747: A vulnerability was discovered in the PyYAML library in versions before 5.3.1, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. An attacker could use this flaw to execute arbitrary code on the system by abusing the python/object/new constructor. |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 |
show Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 |
show Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| urllib3 | 1.25.7 | >=1.25.4,<1.26.5 |
show Urllib3 1.26.5 includes a fix for CVE-2021-33503: When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect. https://github.com/advisories/GHSA-q2q7-5pp4-w6pg |
| urllib3 | 1.25.7 | >=1.22,<2.6.3 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to redirect handling that drains connections by decompressing redirect response bodies without enforcing streaming read limits. The issue occurs when using urllib3’s streaming mode (for example, preload_content=False) while allowing redirects, because urllib3.response.HTTPResponse.drain_conn() would call HTTPResponse.read() in a way that decoded/decompressed the entire redirect response body even before any streaming reads were performed, effectively bypassing decompression-bomb safeguards. |
| urllib3 | 1.25.7 | >=1.0,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to improper handling of highly compressed HTTP response bodies during streaming decompression. The urllib3.HTTPResponse methods stream(), read(), read1(), read_chunked(), and readinto() may fully decompress a minimal but highly compressed payload based on the Content-Encoding header into an internal buffer instead of limiting the decompressed output to the requested chunk size, causing excessive CPU usage and massive memory allocation on the client side. |
| urllib3 | 1.25.7 | >=1.24,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to allowing an unbounded number of content-encoding decompression steps for HTTP responses. The HTTPResponse content decoding pipeline in urllib3 follows the Content-Encoding header and applies each advertised compression algorithm in sequence without enforcing a maximum chain length or effective output size, so a malicious peer can send a response with a very long encoding chain that triggers excessive CPU use and massive memory allocation during decompression. |
| urllib3 | 1.25.7 | <=1.26.18 , >=2.0.0a1,<=2.2.1 |
show Urllib3's ProxyManager ensures that the Proxy-Authorization header is correctly directed only to configured proxies. However, when HTTP requests bypass urllib3's proxy support, there's a risk of inadvertently setting the Proxy-Authorization header, which remains ineffective without a forwarding or tunneling proxy. Urllib3 does not recognize this header as carrying authentication data, failing to remove it during cross-origin redirects. While this scenario is uncommon and poses low risk to most users, urllib3 now proactively removes the Proxy-Authorization header during cross-origin redirects as a precautionary measure. Users are advised to utilize urllib3's proxy support or disable automatic redirects to handle the Proxy-Authorization header securely. Despite these precautions, urllib3 defaults to stripping the header to safeguard users who may inadvertently misconfigure requests. |
| urllib3 | 1.25.7 | <1.26.17 , >=2.0.0a1,<2.0.5 |
show Urllib3 1.26.17 and 2.0.5 include a fix for CVE-2023-43804: Urllib3 doesn't treat the 'Cookie' HTTP header special or provide any helpers for managing cookies over HTTP, that is the responsibility of the user. However, it is possible for a user to specify a 'Cookie' header and unknowingly leak information via HTTP redirects to a different origin if that user doesn't disable redirects explicitly. https://github.com/urllib3/urllib3/security/advisories/GHSA-v845-jxx5-vc9f |
| urllib3 | 1.25.7 | >=1.25.2,<=1.25.7 |
show The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains all matches of percent encodings. It is not deduplicated. For a URL of length N, the size of percent_encodings may be up to O(N). The next step (normalize existing percent-encoded bytes) also takes up to O(N) for each step, so the total time is O(N^2). If percent_encodings were deduplicated, the time to compute _encode_invalid_chars would be O(kN), where k is at most 484 ((10+6*2)^2). See: CVE-2020-7212. |
| urllib3 | 1.25.7 | <1.26.18 , >=2.0.0a1,<2.0.7 |
show Affected versions of urllib3 are vulnerable to an HTTP redirect handling vulnerability that fails to remove the HTTP request body when a POST changes to a GET via 301, 302, or 303 responses. This flaw can expose sensitive request data if the origin service is compromised and redirects to a malicious endpoint, though exploitability is low when no sensitive data is used. The vulnerability affects automatic redirect behavior. It is fixed in versions 1.26.18 and 2.0.7; update or disable redirects using redirects=False. This vulnerability is specific to Python's urllib3 library. |
| urllib3 | 1.25.7 | <2.5.0 |
show urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, it is possible to disable redirects for all requests by instantiating a PoolManager and specifying retries in a way that disable redirects. By default, requests and botocore users are not affected. An application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the PoolManager level will remain vulnerable. This issue has been patched in version 2.5.0. |
| urllib3 | 1.25.7 | <1.25.9 |
show Urllib3 1.25.9 includes a fix for CVE-2020-26137: Urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116. https://github.com/python/cpython/issues/83784 https://github.com/urllib3/urllib3/pull/1800 |
| ipython | 7.10.2 | >=8.0.0a0,<8.0.1 , >=7.17.0,<7.31.1 , >=6.0.0a0,<7.16.3 , <5.11 |
show Ipython versions 8.0.1, 7.31.1, 7.16.3 and 5.11 include a fix for CVE-2022-21699: Affected versions are subject to an arbitrary code execution vulnerability achieved by not properly managing cross user temporary files. This vulnerability allows one user to run code as another on the same machine. https://github.com/ipython/ipython/security/advisories/GHSA-pq7m-3gw7-gq5x |
| ipython | 7.10.2 | <8.10.0 |
show IPython 8.10.0 includes a fix for CVE-2023-24816: Versions prior to 8.10.0 are subject to a command injection vulnerability with very specific prerequisites. This vulnerability requires that the function 'IPython.utils.terminal.set_term_title' be called on Windows in a Python environment where ctypes is not available. The dependency on 'ctypes' in 'IPython.utils._process_win32' prevents the vulnerable code from ever being reached in the ipython binary. However, as a library that could be used by another tool, 'set_term_title' could be called and hence introduce a vulnerability. If an attacker gets untrusted input into a call to this function, they would be able to inject shell commands running as the current process and limited to its scope. As a workaround, users should ensure that any calls to the 'IPython.utils.terminal.set_term_title' function are done with trusted or filtered input. https://github.com/ipython/ipython/security/advisories/GHSA-29gw-9793-fvw7 |
| paramiko | 2.7.1 | <3.4.0 |
show Paramiko 3.4.0 has been released to fix vulnerabilities affecting encrypt-then-MAC digest algorithms used in tandem with CBC ciphers, and the ChaCha20-Poly1305 cipher. The fix requires cooperation from both ends of the connection, making it effective when the remote end is OpenSSH >= 9.6 and configured to use the new “strict kex” mode. For further details, refer to the official Paramiko documentation or GitHub repository. https://github.com/advisories/GHSA-45x7-px36-x8w8 |
| urllib3 | 1.25.7 | >=1.25.4,<1.26.5 | Urllib3 1.26.5 includes a fix for CVE-2021-33503: When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect. https://github.com/advisories/GHSA-q2q7-5pp4-w6pg |
| urllib3 | 1.25.7 | >=1.22,<2.6.3 | Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to redirect handling that drains connections by decompressing redirect response bodies without enforcing streaming read limits. The issue occurs when using urllib3's streaming mode (for example, preload_content=False) while allowing redirects, because urllib3.response.HTTPResponse.drain_conn() would call HTTPResponse.read() in a way that decoded/decompressed the entire redirect response body even before any streaming reads were performed, effectively bypassing decompression-bomb safeguards. |
| urllib3 | 1.25.7 | >=1.0,<2.6.0 | Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to improper handling of highly compressed HTTP response bodies during streaming decompression. The urllib3.HTTPResponse methods stream(), read(), read1(), read_chunked(), and readinto() may fully decompress a minimal but highly compressed payload based on the Content-Encoding header into an internal buffer instead of limiting the decompressed output to the requested chunk size, causing excessive CPU usage and massive memory allocation on the client side. |
| urllib3 | 1.25.7 | >=1.24,<2.6.0 | Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to allowing an unbounded number of content-encoding decompression steps for HTTP responses. The HTTPResponse content decoding pipeline in urllib3 follows the Content-Encoding header and applies each advertised compression algorithm in sequence without enforcing a maximum chain length or effective output size, so a malicious peer can send a response with a very long encoding chain that triggers excessive CPU use and massive memory allocation during decompression. |
| urllib3 | 1.25.7 | <=1.26.18 , >=2.0.0a1,<=2.2.1 | Urllib3's ProxyManager ensures that the Proxy-Authorization header is sent only to configured proxies. However, when HTTP requests bypass urllib3's proxy support, the Proxy-Authorization header can be set inadvertently, where it has no effect without a forwarding or tunneling proxy. Urllib3 did not recognize this header as carrying authentication data and failed to remove it during cross-origin redirects. While this scenario is uncommon and poses low risk to most users, urllib3 now strips the Proxy-Authorization header during cross-origin redirects as a precaution. Users are advised to use urllib3's proxy support or disable automatic redirects to handle the Proxy-Authorization header securely. |
| urllib3 | 1.25.7 | <1.26.17 , >=2.0.0a1,<2.0.5 | Urllib3 1.26.17 and 2.0.5 include a fix for CVE-2023-43804: Urllib3 doesn't treat the 'Cookie' HTTP header specially or provide any helpers for managing cookies over HTTP; that is the responsibility of the user. However, it is possible for a user to specify a 'Cookie' header and unknowingly leak information via HTTP redirects to a different origin if that user doesn't disable redirects explicitly. https://github.com/urllib3/urllib3/security/advisories/GHSA-v845-jxx5-vc9f |
| urllib3 | 1.25.7 | >=1.25.2,<=1.25.7 | The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains all matches of percent encodings. It is not deduplicated. For a URL of length N, the size of percent_encodings may be up to O(N). The next step (normalize existing percent-encoded bytes) also takes up to O(N) for each step, so the total time is O(N^2). If percent_encodings were deduplicated, the time to compute _encode_invalid_chars would be O(kN), where k is at most 484 ((10+6*2)^2). See: CVE-2020-7212. |
| urllib3 | 1.25.7 | <1.26.18 , >=2.0.0a1,<2.0.7 | Affected versions of urllib3 are vulnerable to an HTTP redirect handling flaw that fails to remove the HTTP request body when a POST changes to a GET via a 301, 302, or 303 response. This can expose sensitive request data if the origin service is compromised and redirects to a malicious endpoint, though exploitability is low when no sensitive data is sent in request bodies. The vulnerability affects automatic redirect behavior. It is fixed in versions 1.26.18 and 2.0.7; update, or disable automatic redirects with redirect=False. |
| urllib3 | 1.25.7 | <2.5.0 | urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, disabling redirects for all requests by instantiating a PoolManager with retries configured to disable redirects could be bypassed, so an application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the PoolManager level remained vulnerable. By default, requests and botocore users are not affected. This issue has been patched in version 2.5.0. |
| twisted | 19.10.0 | <24.7.0rc1 | Affected versions of Twisted are vulnerable to HTTP Request Smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out of order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 | Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP Request Smuggling. This vulnerability arises from inadequate validation of modified request headers, enabling an attacker to smuggle requests through several techniques: employing multiple Content-Length headers, combining a Content-Length header with a Transfer-Encoding header, or utilizing a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 | Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 | Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 | Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in the 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent' functions. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 | Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two content-length headers, it ignored the first header. When the second content-length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 | Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL, this vulnerability may result in Reflected Cross-Site Scripting (XSS) in the redirect response HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 | Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: Disordered HTTP pipeline response in twisted.web. Note: the data included in this advisory differs from the data publicly available on nvd.nist.gov. As indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 | Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Users who may be affected use Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected. The HTTP 2.0 server uses a different parser, so it is not affected. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them, or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| prompt-toolkit | 3.0.2 | <3.0.13 | Prompt-toolkit 3.0.13 fixes a race condition in `ThreadedHistory` which could lead to a deadlock. https://github.com/prompt-toolkit/python-prompt-toolkit/commit/99092a8c6d4b411645ac4b84d504e5226e7eebb8#diff-48c0ff10dc3990285d19b3f54e6bfec763089ba1229dc6f9e88463a1046adad7R163 |
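
The CVE-2020-7212 entry above attributes the quadratic cost to a non-deduplicated percent_encodings list: the fix is to bound the per-encoding work by the number of *distinct* encodings. A minimal stdlib-only sketch of that deduplication idea (this is not urllib3's actual code; the regex and function name here are illustrative):

```python
import re

def distinct_percent_encodings(url: str) -> set[str]:
    # Collect percent-encoded byte sequences like "%2F" into a set.
    # The set deduplicates them, so any later per-encoding normalization
    # runs once per distinct encoding (at most (10 + 6*2)**2 = 484 of
    # them) instead of once per occurrence, avoiding O(N^2) behavior.
    return set(re.findall(r"%[0-9a-fA-F]{2}", url))

# A pathological URL repeats the same encoding many times; the set
# collapses it to three entries regardless of input length.
url = "%2f" * 10_000 + "%2F%3A"
print(sorted(distinct_percent_encodings(url)))  # ['%2F', '%2f', '%3A']
```
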
| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.8.0 | <=1.11.0 | ** DISPUTED ** Py through 1.11.0 allows remote attackers to conduct a ReDoS (Regular expression Denial of Service) attack via a Subversion repository with crafted info data because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| py | 1.8.0 | <=1.9.0 | Py 1.10.0 includes a fix for CVE-2020-29651: A denial of service via regular expression in the py.path.svnwc component of py (aka python-py) through 1.9.0 could be used by attackers to cause a compute-time denial of service attack by supplying malicious input to the blame functionality. |
| rsa | 3.4.2 | >=3.0,<4.7 | Rsa 4.7 includes a fix for CVE-2020-25658: It was found that python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of the cipher text encrypted with RSA. |
| rsa | 3.4.2 | <4.3 | Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker to infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| lxml | 4.4.2 | <6.1.0 | Affected versions of the lxml package are vulnerable to XML External Entity Injection due to insecure default parser configuration that resolves external entities. The iterparse() function and the ETCompatXMLParser() class both default to resolve_entities=True, so untrusted XML input processed through either parser will expand external entity references and read referenced local files from the host. An attacker who supplies a crafted XML document to an application using these parsers in their default configuration can read sensitive local files and exfiltrate their contents through the parsed output. |
| lxml | 4.4.2 | <4.9.1 | Lxml 4.9.1 includes a fix for CVE-2022-2309: NULL Pointer Dereference allows attackers to cause a denial of service (or application crash). This only applies when lxml is used together with libxml2 2.9.10 through 2.9.14; libxml2 2.9.9 and earlier are not affected. It allows triggering crashes through forged input data, given a vulnerable code sequence in the application. The vulnerability is caused by the iterwalk function (also used by the canonicalize function). Such code shouldn't be in widespread use, given that parsing + iterwalk would usually be replaced with the more efficient iterparse function. However, an XML converter that serialises to C14N would also be vulnerable, for example, and there are legitimate use cases for this code sequence. If untrusted input is received (also remotely) and processed via the iterwalk function, a crash can be triggered. |
| lxml | 4.4.2 | <4.6.3 | Lxml version 4.6.3 includes a fix for CVE-2021-28957: An XSS vulnerability was discovered in python-lxml's clean module in versions before 4.6.3. When the safe_attrs_only and forms arguments are disabled, the Cleaner class does not remove the formaction attribute, allowing JS to bypass the sanitizer. A remote attacker could exploit this flaw to run arbitrary JS code on users who interact with incorrectly sanitized HTML. https://bugs.launchpad.net/lxml/+bug/1888153 |
| lxml | 4.4.2 | <4.6.5 | Lxml 4.6.5 includes a fix for CVE-2021-43818: Prior to version 4.6.5, the HTML Cleaner in lxml.html lets certain crafted script content pass through, as well as script content in SVG files embedded using data URIs. Users that employ the HTML cleaner in a security relevant context should upgrade to lxml 4.6.5 to receive a patch. |
| lxml | 4.4.2 | <4.6.2 | Lxml 4.6.2 includes a fix for CVE-2020-27783: An XSS vulnerability was discovered in python-lxml's clean module. The module's parser didn't properly imitate browsers, which caused different behaviors between the sanitizer and the user's page. A remote attacker could exploit this flaw to run arbitrary HTML/JS code. |
| babel | 2.7.0 | <2.9.1 | Babel 2.9.1 includes a fix for CVE-2021-42771: Babel.Locale in Babel before 2.9.1 allows attackers to load arbitrary locale .dat files (containing serialized Python objects) via directory traversal, leading to code execution. https://github.com/python-babel/babel/pull/782 |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 | Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 | Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| ipython | 7.10.2 | >=8.0.0a0,<8.0.1 , >=7.17.0,<7.31.1 , >=6.0.0a0,<7.16.3 , <5.11 | Ipython versions 8.0.1, 7.31.1, 7.16.3 and 5.11 include a fix for CVE-2022-21699: Affected versions are subject to an arbitrary code execution vulnerability achieved by not properly managing cross user temporary files. This vulnerability allows one user to run code as another on the same machine. https://github.com/ipython/ipython/security/advisories/GHSA-pq7m-3gw7-gq5x |
| ipython | 7.10.2 | <8.10.0 | IPython 8.10.0 includes a fix for CVE-2023-24816: Versions prior to 8.10.0 are subject to a command injection vulnerability with very specific prerequisites: the function 'IPython.utils.terminal.set_term_title' must be called on Windows in a Python environment where ctypes is not available. The dependency on 'ctypes' in 'IPython.utils._process_win32' prevents the vulnerable code from ever being reached in the ipython binary. However, when IPython is used as a library by another tool, 'set_term_title' could be called and hence introduce a vulnerability. If an attacker can pass untrusted input to this function, they can inject shell commands that run as, and are limited to the scope of, the current process. As a workaround, users should ensure that any calls to the 'IPython.utils.terminal.set_term_title' function are made with trusted or filtered input. https://github.com/ipython/ipython/security/advisories/GHSA-29gw-9793-fvw7 |
| paramiko | 2.7.1 | <3.4.0 | Paramiko 3.4.0 fixes vulnerabilities affecting encrypt-then-MAC digest algorithms in tandem with CBC ciphers, and ChaCha20-Poly1305. The fix requires cooperation from both ends of the connection, making it effective when the remote end is OpenSSH >= 9.6 and configured to use the new "strict kex" mode. For further details, refer to the official Paramiko documentation or GitHub repository. https://github.com/advisories/GHSA-45x7-px36-x8w8 |
| twisted | 19.10.0 | <24.7.0rc1 | Affected versions of Twisted are vulnerable to HTTP Request Smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out of order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 | Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP Request Smuggling. This vulnerability arises from inadequate validation of modified request headers, enabling an attacker to smuggle requests through several techniques: employing multiple Content-Length headers, combining a Content-Length header with a Transfer-Encoding header, or utilizing a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 | Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 | Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 | Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in the 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent' functions. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 | Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two content-length headers, it ignored the first header. When the second content-length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 | Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL, this vulnerability may result in Reflected Cross-Site Scripting (XSS) in the redirect response HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 | Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: Disordered HTTP pipeline response in twisted.web. Note: the data included in this advisory differs from the data publicly available on nvd.nist.gov. As indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 | Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Users who may be affected use Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected. The HTTP 2.0 server uses a different parser, so it is not affected. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them, or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| prompt-toolkit | 3.0.2 | <3.0.13 | Prompt-toolkit 3.0.13 fixes a race condition in `ThreadedHistory` which could lead to a deadlock. https://github.com/prompt-toolkit/python-prompt-toolkit/commit/99092a8c6d4b411645ac4b84d504e5226e7eebb8#diff-48c0ff10dc3990285d19b3f54e6bfec763089ba1229dc6f9e88463a1046adad7R163 |
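
The lxml XXE entry above concerns parsers that resolve external entities by default. As a point of contrast, a stdlib-only sketch (this is not lxml's API) showing the safe behavior, where an external entity reference is rejected rather than expanded:

```python
import xml.etree.ElementTree as ET

# A classic XXE probe: declares an external entity pointing at a local
# file and references it in the document body.
xxe_doc = (
    '<?xml version="1.0"?>'
    '<!DOCTYPE r [<!ENTITY xxe SYSTEM "file:///etc/passwd">]>'
    "<r>&xxe;</r>"
)

# The stdlib parser does not resolve external entities: it raises
# ParseError instead of reading the referenced file, which is the safe
# default the lxml advisory says the affected parsers lacked.
try:
    ET.fromstring(xxe_doc)
    expanded = True
except ET.ParseError:
    expanded = False

print(expanded)  # False: the external entity was not expanded
```

With lxml itself, the advisory implies upgrading to 6.1.0+ or, as an assumption about the intended configuration, constructing parsers with resolve_entities disabled (an lxml.etree.XMLParser option) before feeding them untrusted input.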
| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.8.0 | <=1.11.0 |
show ** DISPUTED ** Py throughout 1.11.0 allows remote attackers to conduct a ReDoS (Regular expression Denial of Service) attack via a Subversion repository with crafted info data because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| py | 1.8.0 | <=1.9.0 |
show Py 1.10.0 includes a fix for CVE-2020-29651: A denial of service via regular expression in the py.path.svnwc component of py (aka python-py) through 1.9.0 could be used by attackers to cause a compute-time denial of service attack by supplying malicious input to the blame functionality. |
| rsa | 3.4.2 | >=3.0,<4.7 |
show Rsa 4.7 includes a fix for CVE-2020-25658: It was found that python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of the cipher text encrypted with RSA. |
| rsa | 3.4.2 | <4.3 |
show Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker to infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| lxml | 4.4.2 | <6.1.0 |
show Affected versions of the lxml package are vulnerable to XML External Entity Injection due to insecure default parser configuration that resolves external entities. The iterparse() function and the ETCompatXMLParser() class both default to resolve_entities=True, so untrusted XML input processed through either parser will expand external entity references and read referenced local files from the host. An attacker who supplies a crafted XML document to an application using these parsers in their default configuration can read sensitive local files and exfiltrate their contents through the parsed output. |
| lxml | 4.4.2 | <4.9.1 |
show Lxml 4.9.1 includes a fix for CVE-2022-2309: NULL Pointer Dereference allows attackers to cause a denial of service (or application crash). This only applies when lxml is used together with libxml2 2.9.10 through 2.9.14. libxml2 2.9.9 and earlier are not affected. It allows triggering crashes through forged input data, given a vulnerable code sequence in the application. The vulnerability is caused by the iterwalk function (also used by the canonicalize function). Such code shouldn't be in wide-spread use, given that parsing + iterwalk would usually be replaced with the more efficient iterparse function. However, an XML converter that serialises to C14N would also be vulnerable, for example, and there are legitimate use cases for this code sequence. If untrusted input is received (also remotely) and processed via iterwalk function, a crash can be triggered. |
| lxml | 4.4.2 | <4.6.3 |
show Lxml version 4.6.3 includes a fix for CVE-2021-28957: An XSS vulnerability was discovered in python-lxml's clean module versions before 4.6.3. When disabling the safe_attrs_only and forms arguments, the Cleaner class does not remove the formaction attribute, allowing JS to bypass the sanitizer. A remote attacker could exploit this flaw to run arbitrary JS code on users who interact with incorrectly sanitized HTML. https://bugs.launchpad.net/lxml/+bug/1888153 |
| lxml | 4.4.2 | <4.6.5 |
show Lxml 4.6.5 includes a fix for CVE-2021-43818: Prior to version 4.6.5, the HTML Cleaner in lxml.html lets certain crafted script content pass through, as well as script content in SVG files embedded using data URIs. Users that employ the HTML cleaner in a security relevant context should upgrade to lxml 4.6.5 to receive a patch. |
| lxml | 4.4.2 | <4.6.2 |
show Lxml 4.6.2 includes a fix for CVE-2020-27783: An XSS vulnerability was discovered in python-lxml's clean module. The module's parser didn't properly imitate browsers, which caused different behaviors between the sanitizer and the user's page. A remote attacker could exploit this flaw to run arbitrary HTML/JS code. |
| babel | 2.7.0 | <2.9.1 |
show Babel 2.9.1 includes a fix for CVE-2021-42771: Babel.Locale in Babel before 2.9.1 allows attackers to load arbitrary locale .dat files (containing serialized Python objects) via directory traversal, leading to code execution. https://github.com/python-babel/babel/pull/782 |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 |
show Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 |
show Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| ipython | 7.10.2 | >=8.0.0a0,<8.0.1 , >=7.17.0,<7.31.1 , >=6.0.0a0,<7.16.3 , <5.11 |
show Ipython versions 8.0.1, 7.31.1, 7.16.3 and 5.11 include a fix for CVE-2022-21699: Affected versions are subject to an arbitrary code execution vulnerability achieved by not properly managing cross user temporary files. This vulnerability allows one user to run code as another on the same machine. https://github.com/ipython/ipython/security/advisories/GHSA-pq7m-3gw7-gq5x |
| ipython | 7.10.2 | <8.10.0 |
show IPython 8.10.0 includes a fix for CVE-2023-24816: Versions prior to 8.10.0 are subject to a command injection vulnerability with very specific prerequisites. The vulnerability requires that the function 'IPython.utils.terminal.set_term_title' be called on Windows in a Python environment where ctypes is not available. The dependency on 'ctypes' in 'IPython.utils._process_win32' prevents the vulnerable code from ever being reached in the ipython binary. However, as a library that could be used by another tool, 'set_term_title' could be called and hence introduce a vulnerability. If an attacker can get untrusted input into a call to this function, they would be able to inject shell commands as the current process, limited to the scope of the current process. As a workaround, users should ensure that any calls to the 'IPython.utils.terminal.set_term_title' function are made with trusted or filtered input. https://github.com/ipython/ipython/security/advisories/GHSA-29gw-9793-fvw7 |
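The workaround above (only pass trusted or filtered input) can be sketched with a stdlib-only pre-filter. The function name and allow-list below are illustrative assumptions, not part of IPython's API:

```python
import re

def filtered_title(title: str) -> str:
    # Hypothetical pre-filter (not IPython code): keep only characters
    # that cannot break out of a shell command built from the title.
    return re.sub(r"[^\w \-./:]", "", title)

# A payload carrying shell metacharacters is reduced to inert text before
# it would ever reach IPython.utils.terminal.set_term_title().
print(filtered_title('repo" & calc & "'))
```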
| paramiko | 2.7.1 | <3.4.0 |
show Paramiko 3.4.0 has been released to fix CVE-2023-48795 (the 'Terrapin' attack), which affects encrypt-then-MAC digest algorithms in tandem with CBC ciphers, and the ChaCha20-Poly1305 cipher. The fix requires cooperation from both ends of the connection, making it effective when the remote end is OpenSSH >= 9.6 and configured to use the new “strict kex” mode. For further details, refer to the official Paramiko documentation or GitHub repository. https://github.com/advisories/GHSA-45x7-px36-x8w8 |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to HTTP Request Smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out-of-order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 |
show Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP Request Smuggling. This vulnerability arises from inadequate validation of modified request headers, enabling an attacker to smuggle requests through several techniques: employing multiple Content-Length headers, combining a Content-Length header with a Transfer-Encoding header, or utilizing a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
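The ambiguity classes listed above map directly onto a validation rule. The function below is an illustrative stdlib sketch of that rule, not Twisted's actual fix:

```python
def framing_ok(headers: list[tuple[str, str]]) -> bool:
    # Reject requests whose message framing is ambiguous (RFC 7230):
    # multiple Content-Length headers, Content-Length combined with
    # Transfer-Encoding, or an unknown Transfer-Encoding value.
    cl = [v for k, v in headers if k.lower() == "content-length"]
    te = [v.strip().lower() for k, v in headers if k.lower() == "transfer-encoding"]
    if len(cl) > 1:
        return False
    if cl and te:
        return False
    return all(v in ("chunked", "identity") for v in te)
```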
| twisted | 19.10.0 | <=19.10.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 |
show Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 |
show Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in the 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent' functions. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two content-length headers, it ignored the first header. When the second content-length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL this vulnerability may result in Reflected Cross-Site Scripting (XSS) in the redirect response HTML body. |
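For application code that cannot upgrade immediately, the mitigation pattern is to validate and HTML-escape the attacker-influenced URL before it reaches any HTML body. The function name and markup below are illustrative, not Twisted's implementation:

```python
from html import escape
from urllib.parse import urlparse

def redirect_body(url: str) -> bytes:
    # Illustrative mitigation: only allow http(s) targets, and
    # HTML-escape the URL before embedding it in the response body.
    if urlparse(url).scheme not in ("http", "https"):
        raise ValueError("refusing non-HTTP redirect target")
    return ('<html><body><a href="%s">click here</a></body></html>'
            % escape(url, quote=True)).encode()
```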
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 |
show Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: Disordered HTTP pipeline response in twisted.web. #NOTE: The data we include in this advisory differs from the data publicly available at nvd.nist.gov. As indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 |
show Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Users who may be affected use Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected. The HTTP 2.0 server uses a different parser, so it is not affected. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them; or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| prompt-toolkit | 3.0.2 | <3.0.13 |
show Prompt-toolkit 3.0.13 fixes a race condition in `ThreadedHistory` which could lead to a deadlock. https://github.com/prompt-toolkit/python-prompt-toolkit/commit/99092a8c6d4b411645ac4b84d504e5226e7eebb8#diff-48c0ff10dc3990285d19b3f54e6bfec763089ba1229dc6f9e88463a1046adad7R163 |
| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.8.0 | <=1.11.0 |
show ** DISPUTED ** The py library through 1.11.0 allows remote attackers to conduct a ReDoS (regular expression denial of service) attack via a Subversion repository with crafted info data, because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| py | 1.8.0 | <=1.9.0 |
show Py 1.10.0 includes a fix for CVE-2020-29651: A regular expression denial of service (ReDoS) in the py.path.svnwc component of py (aka python-py) through 1.9.0 allows attackers to cause a compute-time denial of service by supplying malicious input to the blame functionality. |
| rsa | 3.4.2 | >=3.0,<4.7 |
show Rsa 4.7 includes a fix for CVE-2020-25658: It was found that python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of the cipher text encrypted with RSA. |
| rsa | 3.4.2 | <4.3 |
show Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker to infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| zipp | 0.6.0 | <3.19.1 |
show A Denial of Service (DoS) vulnerability exists in the jaraco/zipp library. The vulnerability is triggered when processing a specially crafted zip file that leads to an infinite loop. This issue also impacts the zipfile module of CPython, as features from the third-party zipp library are later merged into CPython, and the affected code is identical in both projects. The infinite loop can be initiated through the use of functions affecting the `Path` module in both zipp and zipfile, such as `joinpath`, the overloaded division operator, and `iterdir`. Although the infinite loop does not exhaust resources, it prevents the application from responding. |
| lxml | 4.4.2 | <6.1.0 |
show Affected versions of the lxml package are vulnerable to XML External Entity Injection due to insecure default parser configuration that resolves external entities. The iterparse() function and the ETCompatXMLParser() class both default to resolve_entities=True, so untrusted XML input processed through either parser will expand external entity references and read referenced local files from the host. An attacker who supplies a crafted XML document to an application using these parsers in their default configuration can read sensitive local files and exfiltrate their contents through the parsed output. |
| lxml | 4.4.2 | <4.9.1 |
show Lxml 4.9.1 includes a fix for CVE-2022-2309: NULL Pointer Dereference allows attackers to cause a denial of service (or application crash). This only applies when lxml is used together with libxml2 2.9.10 through 2.9.14. libxml2 2.9.9 and earlier are not affected. It allows triggering crashes through forged input data, given a vulnerable code sequence in the application. The vulnerability is caused by the iterwalk function (also used by the canonicalize function). Such code shouldn't be in widespread use, given that parsing + iterwalk would usually be replaced with the more efficient iterparse function. However, an XML converter that serialises to C14N would also be vulnerable, for example, and there are legitimate use cases for this code sequence. If untrusted input is received (also remotely) and processed via the iterwalk function, a crash can be triggered. |
| lxml | 4.4.2 | <4.6.3 |
show Lxml version 4.6.3 includes a fix for CVE-2021-28957: An XSS vulnerability was discovered in python-lxml's clean module versions before 4.6.3. When disabling the safe_attrs_only and forms arguments, the Cleaner class does not remove the formaction attribute, allowing JS to bypass the sanitizer. A remote attacker could exploit this flaw to run arbitrary JS code on users who interact with incorrectly sanitized HTML. https://bugs.launchpad.net/lxml/+bug/1888153 |
| lxml | 4.4.2 | <4.6.5 |
show Lxml 4.6.5 includes a fix for CVE-2021-43818: Prior to version 4.6.5, the HTML Cleaner in lxml.html lets certain crafted script content pass through, as well as script content in SVG files embedded using data URIs. Users that employ the HTML cleaner in a security relevant context should upgrade to lxml 4.6.5 to receive a patch. |
| lxml | 4.4.2 | <4.6.2 |
show Lxml 4.6.2 includes a fix for CVE-2020-27783: An XSS vulnerability was discovered in python-lxml's clean module. The module's parser didn't properly imitate browsers, which caused different behaviors between the sanitizer and the user's page. A remote attacker could exploit this flaw to run arbitrary HTML/JS code. |
| pyjwt | 1.7.1 | >=1.0.0,<2.4.0 |
show PyJWT 2.4.0 includes a fix for CVE-2022-29217: An attacker submitting a JWT token can choose the signing algorithm used. The PyJWT library requires that the application chooses what algorithms are supported. The application can specify 'jwt.algorithms.get_default_algorithms()' to get support for all algorithms, or specify a single algorithm. The impact is limited, since 'algorithms=jwt.algorithms.get_default_algorithms()' has to be used explicitly. As a workaround, always be explicit with the algorithms that are accepted and expected when decoding. |
| pyjwt | 1.7.1 | <2.12.0 |
show Affected versions of this package are vulnerable to Insufficient Verification of Data Authenticity. The library does not validate the `crit` (Critical) Header Parameter as required by RFC 7515 §4.1.11 — when a JWT contains a `crit` array listing extensions that the library does not understand, the token is accepted instead of rejected. An attacker can exploit this vulnerability by crafting JWTs with unknown critical extensions (e.g., MFA requirements, token binding, scope restrictions) that are silently ignored, potentially bypassing security policies or causing split-brain verification in mixed-library deployments where other RFC-compliant libraries would reject the same token. |
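The RFC 7515 §4.1.11 rule described above can be sketched with the standard library alone. `SUPPORTED_CRIT` and `check_crit` are hypothetical names for illustration, not part of PyJWT's API:

```python
import base64
import json

SUPPORTED_CRIT: set[str] = set()  # extensions this verifier implements

def check_crit(token: str) -> None:
    # RFC 7515 §4.1.11: a JWS whose 'crit' header names an extension the
    # verifier does not understand MUST be rejected, not silently accepted.
    header_b64 = token.split(".")[0]
    header_b64 += "=" * (-len(header_b64) % 4)  # restore base64url padding
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    for ext in header.get("crit", []):
        if ext not in SUPPORTED_CRIT:
            raise ValueError("unsupported critical extension: %s" % ext)
```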
| tqdm | 4.41.0 | >=4.4.0,<4.66.3 |
show Tqdm version 4.66.3 addresses CVE-2024-34062, a vulnerability where optional non-boolean CLI arguments like `--delim`, `--buf-size`, and `--manpath` were passed through Python's `eval`, allowing for arbitrary code execution. This security risk, only locally exploitable, has been mitigated in this release. Users are advised to upgrade to version 4.66.3 immediately as there are no workarounds for this issue. |
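The class of fix for this kind of bug is to parse CLI values as Python literals only instead of passing them to `eval`. The sketch below illustrates that pattern with the stdlib; it is not tqdm's actual patch:

```python
import ast

def parse_cli_value(raw: str):
    # Accept only Python literals (numbers, strings, tuples, ...);
    # expressions such as __import__('os') fail to parse and are
    # returned as plain, inert strings instead of being executed.
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return raw
```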
| babel | 2.7.0 | <2.9.1 |
show Babel 2.9.1 includes a fix for CVE-2021-42771: Babel.Locale in Babel before 2.9.1 allows attackers to load arbitrary locale .dat files (containing serialized Python objects) via directory traversal, leading to code execution. https://github.com/python-babel/babel/pull/782 |
| pyyaml | 5.3b1 | <5.3.1 |
show Pyyaml 5.3.1 includes a fix for CVE-2020-1747: A vulnerability was discovered in the PyYAML library in versions before 5.3.1, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. An attacker could use this flaw to execute arbitrary code on the system by abusing the python/object/new constructor. |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 |
show Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 |
show Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| urllib3 | 1.25.7 | >=1.25.4,<1.26.5 |
show Urllib3 1.26.5 includes a fix for CVE-2021-33503: When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect. https://github.com/advisories/GHSA-q2q7-5pp4-w6pg |
| urllib3 | 1.25.7 | >=1.22,<2.6.3 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to redirect handling that drains connections by decompressing redirect response bodies without enforcing streaming read limits. The issue occurs when using urllib3’s streaming mode (for example, preload_content=False) while allowing redirects, because urllib3.response.HTTPResponse.drain_conn() would call HTTPResponse.read() in a way that decoded/decompressed the entire redirect response body even before any streaming reads were performed, effectively bypassing decompression-bomb safeguards. |
| urllib3 | 1.25.7 | >=1.0,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to improper handling of highly compressed HTTP response bodies during streaming decompression. The urllib3.HTTPResponse methods stream(), read(), read1(), read_chunked(), and readinto() may fully decompress a minimal but highly compressed payload based on the Content-Encoding header into an internal buffer instead of limiting the decompressed output to the requested chunk size, causing excessive CPU usage and massive memory allocation on the client side. |
| urllib3 | 1.25.7 | >=1.24,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to allowing an unbounded number of content-encoding decompression steps for HTTP responses. The HTTPResponse content decoding pipeline in urllib3 follows the Content-Encoding header and applies each advertised compression algorithm in sequence without enforcing a maximum chain length or effective output size, so a malicious peer can send a response with a very long encoding chain that triggers excessive CPU use and massive memory allocation during decompression. |
| urllib3 | 1.25.7 | <=1.26.18 , >=2.0.0a1,<=2.2.1 |
show Urllib3's ProxyManager ensures that the Proxy-Authorization header is correctly directed only to configured proxies. However, when HTTP requests bypass urllib3's proxy support, there's a risk of inadvertently setting the Proxy-Authorization header, which remains ineffective without a forwarding or tunneling proxy. Urllib3 does not recognize this header as carrying authentication data, failing to remove it during cross-origin redirects. While this scenario is uncommon and poses low risk to most users, urllib3 now proactively removes the Proxy-Authorization header during cross-origin redirects as a precautionary measure. Users are advised to utilize urllib3's proxy support or disable automatic redirects to handle the Proxy-Authorization header securely. Despite these precautions, urllib3 defaults to stripping the header to safeguard users who may inadvertently misconfigure requests. |
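The behavior described above (drop credential-bearing headers when a redirect leaves the original origin) can be sketched as follows. The function and the `SENSITIVE` set are illustrative, not urllib3's implementation:

```python
from urllib.parse import urlsplit

SENSITIVE = {"proxy-authorization", "authorization", "cookie"}

def headers_for_redirect(headers: dict, old_url: str, new_url: str) -> dict:
    # Same origin = same scheme and same host:port; anything else is
    # cross-origin and loses the credential-bearing headers.
    if urlsplit(old_url)[:2] == urlsplit(new_url)[:2]:
        return dict(headers)
    return {k: v for k, v in headers.items() if k.lower() not in SENSITIVE}
```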
| urllib3 | 1.25.7 | <1.26.17 , >=2.0.0a1,<2.0.5 |
show Urllib3 1.26.17 and 2.0.5 include a fix for CVE-2023-43804: Urllib3 doesn't treat the 'Cookie' HTTP header specially or provide any helpers for managing cookies over HTTP; that is the responsibility of the user. However, it is possible for a user to specify a 'Cookie' header and unknowingly leak information via HTTP redirects to a different origin if that user doesn't disable redirects explicitly. https://github.com/urllib3/urllib3/security/advisories/GHSA-v845-jxx5-vc9f |
| urllib3 | 1.25.7 | >=1.25.2,<=1.25.7 |
show The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains all matches of percent encodings. It is not deduplicated. For a URL of length N, the size of percent_encodings may be up to O(N). The next step (normalize existing percent-encoded bytes) also takes up to O(N) for each step, so the total time is O(N^2). If percent_encodings were deduplicated, the time to compute _encode_invalid_chars would be O(kN), where k is at most 484 ((10+6*2)^2). See: CVE-2020-7212. |
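The deduplication the description calls for is a one-line change in spirit: collect the matches as a set rather than a list. A minimal sketch of the idea (not urllib3's actual code):

```python
import re

def percent_encodings(url: str) -> set[str]:
    # Deduplicated matches: at most k distinct percent-encodings can
    # occur (k <= 484), so later per-encoding normalization work is
    # O(kN) instead of the O(N^2) a duplicate-laden list allows.
    return set(re.findall(r"%[a-fA-F0-9]{2}", url))
```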
| urllib3 | 1.25.7 | <1.26.18 , >=2.0.0a1,<2.0.7 |
show Affected versions of urllib3 are vulnerable to an HTTP redirect handling vulnerability that fails to remove the HTTP request body when a POST changes to a GET via 301, 302, or 303 responses. This flaw can expose sensitive request data if the origin service is compromised and redirects to a malicious endpoint, though exploitability is low when no sensitive data is used. The vulnerability affects automatic redirect behavior. It is fixed in versions 1.26.18 and 2.0.7; upgrade, or disable redirects using redirect=False. This vulnerability is specific to Python's urllib3 library. |
| urllib3 | 1.25.7 | <2.5.0 |
show urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, it is possible to disable redirects for all requests by instantiating a PoolManager and specifying retries in a way that disables redirects. By default, requests and botocore users are not affected. An application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the PoolManager level will remain vulnerable. This issue has been patched in version 2.5.0. |
| urllib3 | 1.25.7 | <1.25.9 |
show Urllib3 1.25.9 includes a fix for CVE-2020-26137: Urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116. https://github.com/python/cpython/issues/83784 https://github.com/urllib3/urllib3/pull/1800 |
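The underlying rule is that an HTTP method must be an RFC 7230 "token", which cannot contain the CR/LF bytes used for injection. A stdlib sketch of that check (illustrative, not urllib3's patch):

```python
import re

_TOKEN = re.compile(r"[!#$%&'*+\-.^_`|~0-9A-Za-z]+")  # RFC 7230 "token"

def validate_method(method: str) -> str:
    # fullmatch leaves no room for CR/LF (or any other separator) to
    # slip from the method argument into the request line.
    if not _TOKEN.fullmatch(method):
        raise ValueError("invalid HTTP method: %r" % method)
    return method
```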
| ipython | 7.10.2 | >=8.0.0a0,<8.0.1 , >=7.17.0,<7.31.1 , >=6.0.0a0,<7.16.3 , <5.11 |
show Ipython versions 8.0.1, 7.31.1, 7.16.3 and 5.11 include a fix for CVE-2022-21699: Affected versions are subject to an arbitrary code execution vulnerability achieved by not properly managing cross user temporary files. This vulnerability allows one user to run code as another on the same machine. https://github.com/ipython/ipython/security/advisories/GHSA-pq7m-3gw7-gq5x |
| ipython | 7.10.2 | <8.10.0 |
show IPython 8.10.0 includes a fix for CVE-2023-24816: Versions prior to 8.10.0 are subject to a command injection vulnerability with very specific prerequisites. The vulnerability requires that the function 'IPython.utils.terminal.set_term_title' be called on Windows in a Python environment where ctypes is not available. The dependency on 'ctypes' in 'IPython.utils._process_win32' prevents the vulnerable code from ever being reached in the ipython binary. However, as a library that could be used by another tool, 'set_term_title' could be called and hence introduce a vulnerability. If an attacker can get untrusted input into a call to this function, they would be able to inject shell commands as the current process, limited to the scope of the current process. As a workaround, users should ensure that any calls to the 'IPython.utils.terminal.set_term_title' function are made with trusted or filtered input. https://github.com/ipython/ipython/security/advisories/GHSA-29gw-9793-fvw7 |
| paramiko | 2.7.1 | <3.4.0 |
show Paramiko 3.4.0 has been released to fix CVE-2023-48795 (the 'Terrapin' attack), which affects encrypt-then-MAC digest algorithms in tandem with CBC ciphers, and the ChaCha20-Poly1305 cipher. The fix requires cooperation from both ends of the connection, making it effective when the remote end is OpenSSH >= 9.6 and configured to use the new “strict kex” mode. For further details, refer to the official Paramiko documentation or GitHub repository. https://github.com/advisories/GHSA-45x7-px36-x8w8 |
| urllib3 | 1.25.7 | >=1.25.4,<1.26.5 |
show Urllib3 1.26.5 includes a fix for CVE-2021-33503: When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect. https://github.com/advisories/GHSA-q2q7-5pp4-w6pg |
| urllib3 | 1.25.7 | >=1.22,<2.6.3 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to redirect handling that drains connections by decompressing redirect response bodies without enforcing streaming read limits. The issue occurs when using urllib3’s streaming mode (for example, preload_content=False) while allowing redirects, because urllib3.response.HTTPResponse.drain_conn() would call HTTPResponse.read() in a way that decoded/decompressed the entire redirect response body even before any streaming reads were performed, effectively bypassing decompression-bomb safeguards. |
| urllib3 | 1.25.7 | >=1.0,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to improper handling of highly compressed HTTP response bodies during streaming decompression. The urllib3.HTTPResponse methods stream(), read(), read1(), read_chunked(), and readinto() may fully decompress a minimal but highly compressed payload based on the Content-Encoding header into an internal buffer instead of limiting the decompressed output to the requested chunk size, causing excessive CPU usage and massive memory allocation on the client side. |
| urllib3 | 1.25.7 | >=1.24,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to allowing an unbounded number of content-encoding decompression steps for HTTP responses. The HTTPResponse content decoding pipeline in urllib3 follows the Content-Encoding header and applies each advertised compression algorithm in sequence without enforcing a maximum chain length or effective output size, so a malicious peer can send a response with a very long encoding chain that triggers excessive CPU use and massive memory allocation during decompression. |
| urllib3 | 1.25.7 | <=1.26.18 , >=2.0.0a1,<=2.2.1 |
show Urllib3's ProxyManager ensures that the Proxy-Authorization header is correctly directed only to configured proxies. However, when HTTP requests bypass urllib3's proxy support, there's a risk of inadvertently setting the Proxy-Authorization header, which remains ineffective without a forwarding or tunneling proxy. Urllib3 does not recognize this header as carrying authentication data, failing to remove it during cross-origin redirects. While this scenario is uncommon and poses low risk to most users, urllib3 now proactively removes the Proxy-Authorization header during cross-origin redirects as a precautionary measure. Users are advised to utilize urllib3's proxy support or disable automatic redirects to handle the Proxy-Authorization header securely. Despite these precautions, urllib3 defaults to stripping the header to safeguard users who may inadvertently misconfigure requests. |
| urllib3 | 1.25.7 | <1.26.17 , >=2.0.0a1,<2.0.5 |
show Urllib3 1.26.17 and 2.0.5 include a fix for CVE-2023-43804: Urllib3 doesn't treat the 'Cookie' HTTP header special or provide any helpers for managing cookies over HTTP, that is the responsibility of the user. However, it is possible for a user to specify a 'Cookie' header and unknowingly leak information via HTTP redirects to a different origin if that user doesn't disable redirects explicitly. https://github.com/urllib3/urllib3/security/advisories/GHSA-v845-jxx5-vc9f |
| urllib3 | 1.25.7 | >=1.25.2,<=1.25.7 |
show The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains all matches of percent encodings. It is not deduplicated. For a URL of length N, the size of percent_encodings may be up to O(N). The next step (normalize existing percent-encoded bytes) also takes up to O(N) for each step, so the total time is O(N^2). If percent_encodings were deduplicated, the time to compute _encode_invalid_chars would be O(kN), where k is at most 484 ((10+6*2)^2). See: CVE-2020-7212. |
| urllib3 | 1.25.7 | <1.26.18 , >=2.0.0a1,<2.0.7 |
show Affected versions of urllib3 fail to remove the HTTP request body when an automatic redirect (301, 302, or 303) changes a POST to a GET. This flaw can expose sensitive request data if the origin service is compromised and redirects to a malicious endpoint, though exploitability is low when the request body carries no sensitive data. The issue is fixed in versions 1.26.18 and 2.0.7; update, or disable automatic redirects (in urllib3 itself, via the redirect parameter). This vulnerability is specific to Python's urllib3 library. |
| urllib3 | 1.25.7 | <2.5.0 |
show urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, configuring a PoolManager with retries set to disable redirects did not actually disable them; redirect settings passed at PoolManager instantiation were not honored at request time. By default, requests and botocore users are not affected. An application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the PoolManager level therefore remained vulnerable. This issue has been patched in version 2.5.0. |
| urllib3 | 1.25.7 | <1.25.9 |
show Urllib3 1.25.9 includes a fix for CVE-2020-26137: Urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116. https://github.com/python/cpython/issues/83784 https://github.com/urllib3/urllib3/pull/1800 |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to HTTP Request Smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out-of-order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 |
show Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP Request Smuggling. This vulnerability arises from inadequate validation of modified request headers, enabling an attacker to smuggle requests through several techniques: employing multiple Content-Length headers, combining a Content-Length header with a Transfer-Encoding header, or utilizing a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 |
show Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 |
show Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in the 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent' functions. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two content-length headers, it ignored the first header. When the second content-length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL this vulnerability may result in Reflected Cross-Site Scripting (XSS) in the redirect response HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 |
show Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: Disordered HTTP pipeline response in twisted.web. #NOTE: The data we include in this advisory differs from the data publicly available on nvd.nist.gov. As indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 |
show Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Users who may be affected use Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected. The HTTP 2.0 server uses a different parser, so it is not affected. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them; or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| prompt-toolkit | 3.0.2 | <3.0.13 |
show Prompt-toolkit 3.0.13 fixes a race condition in `ThreadedHistory` which could lead to a deadlock. https://github.com/prompt-toolkit/python-prompt-toolkit/commit/99092a8c6d4b411645ac4b84d504e5226e7eebb8#diff-48c0ff10dc3990285d19b3f54e6bfec763089ba1229dc6f9e88463a1046adad7R163 |
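Several of the findings above (notably urllib3's CVE-2020-26137) involve attacker-controlled values reaching low-level HTTP machinery unescaped. As a minimal, library-independent sketch of the mitigation class, the guard below rejects HTTP methods containing CR/LF before they could reach `http.client`'s `putrequest()`; the function name and token pattern are illustrative assumptions, not urllib3 code.

```python
import re

# Hypothetical pre-flight check (not part of urllib3): an HTTP method
# must be a plain RFC 7230 token, so CRLF-smuggling payloads such as
# "GET / HTTP/1.1\r\nHost: evil" are rejected before they can reach
# http.client's putrequest(), the injection point in CVE-2020-26137.
_METHOD_TOKEN = re.compile(r"[!#$%&'*+\-.^_`|~0-9A-Za-z]+")

def validate_method(method: str) -> str:
    """Return the method unchanged, or raise ValueError on CR/LF etc."""
    if not _METHOD_TOKEN.fullmatch(method):
        raise ValueError(f"invalid HTTP method: {method!r}")
    return method
```

The same allowlist-over-sanitize approach applies to any request component an attacker might influence (methods, header names, target URLs).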
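The urllib3 redirect finding (CVE-2023-45803) boils down to one rule: when a 301/302/303 response rewrites a POST to a GET, the stored request body must be dropped along with the method. A minimal sketch of that intended behavior, with illustrative names rather than urllib3's internals:

```python
from typing import Optional, Tuple

def redirect_request(method: str, body: Optional[bytes],
                     status: int) -> Tuple[str, Optional[bytes]]:
    """Illustrative redirect rewrite (not urllib3's actual code).

    On a 301/302/303 response a POST is conventionally rewritten to GET;
    the fix tracked by CVE-2023-45803 is that the request body must be
    dropped together with the method change, never re-sent.
    """
    if status in (301, 302, 303) and method.upper() == "POST":
        return "GET", None
    # 307/308 preserve both the method and the body by design.
    return method, body
```

Note the asymmetry: 307/308 redirects keep the body because they explicitly forbid changing the method.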
| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.8.0 | <=1.11.0 |
show ** DISPUTED ** The py library through 1.11.0 allows remote attackers to conduct a ReDoS (Regular expression Denial of Service) attack via a Subversion repository with crafted info data because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| py | 1.8.0 | <=1.9.0 |
show Py 1.10.0 includes a fix for CVE-2020-29651: A denial of service via regular expression in the py.path.svnwc component of py (aka python-py) through 1.9.0 could be used by attackers to cause a compute-time denial of service attack by supplying malicious input to the blame functionality. |
| rsa | 3.4.2 | >=3.0,<4.7 |
show Rsa 4.7 includes a fix for CVE-2020-25658: It was found that python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of the cipher text encrypted with RSA. |
| rsa | 3.4.2 | <4.3 |
show Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker to infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| zipp | 0.6.0 | <3.19.1 |
show A Denial of Service (DoS) vulnerability exists in the jaraco/zipp library. The vulnerability is triggered when processing a specially crafted zip file that leads to an infinite loop. This issue also impacts the zipfile module of CPython, as features from the third-party zipp library are later merged into CPython, and the affected code is identical in both projects. The infinite loop can be initiated through the use of functions affecting the `Path` module in both zipp and zipfile, such as `joinpath`, the overloaded division operator, and `iterdir`. Although the infinite loop is not resource exhaustive, it prevents the application from responding. |
| lxml | 4.4.2 | <6.1.0 |
show Affected versions of the lxml package are vulnerable to XML External Entity Injection due to insecure default parser configuration that resolves external entities. The iterparse() function and the ETCompatXMLParser() class both default to resolve_entities=True, so untrusted XML input processed through either parser will expand external entity references and read referenced local files from the host. An attacker who supplies a crafted XML document to an application using these parsers in their default configuration can read sensitive local files and exfiltrate their contents through the parsed output. |
| lxml | 4.4.2 | <4.9.1 |
show Lxml 4.9.1 includes a fix for CVE-2022-2309: NULL Pointer Dereference allows attackers to cause a denial of service (or application crash). This only applies when lxml is used together with libxml2 2.9.10 through 2.9.14. libxml2 2.9.9 and earlier are not affected. It allows triggering crashes through forged input data, given a vulnerable code sequence in the application. The vulnerability is caused by the iterwalk function (also used by the canonicalize function). Such code shouldn't be in wide-spread use, given that parsing + iterwalk would usually be replaced with the more efficient iterparse function. However, an XML converter that serialises to C14N would also be vulnerable, for example, and there are legitimate use cases for this code sequence. If untrusted input is received (also remotely) and processed via iterwalk function, a crash can be triggered. |
| lxml | 4.4.2 | <4.6.3 |
show Lxml version 4.6.3 includes a fix for CVE-2021-28957: An XSS vulnerability was discovered in python-lxml's clean module in versions before 4.6.3. When the safe_attrs_only and forms arguments are disabled, the Cleaner class does not remove the formaction attribute, allowing JS to bypass the sanitizer. A remote attacker could exploit this flaw to run arbitrary JS code on users who interact with incorrectly sanitized HTML. https://bugs.launchpad.net/lxml/+bug/1888153 |
| lxml | 4.4.2 | <4.6.5 |
show Lxml 4.6.5 includes a fix for CVE-2021-43818: Prior to version 4.6.5, the HTML Cleaner in lxml.html lets certain crafted script content pass through, as well as script content in SVG files embedded using data URIs. Users that employ the HTML cleaner in a security relevant context should upgrade to lxml 4.6.5 to receive a patch. |
| lxml | 4.4.2 | <4.6.2 |
show Lxml 4.6.2 includes a fix for CVE-2020-27783: A XSS vulnerability was discovered in python-lxml's clean module. The module's parser didn't properly imitate browsers, which caused different behaviors between the sanitizer and the user's page. A remote attacker could exploit this flaw to run arbitrary HTML/JS code. |
| pyjwt | 1.7.1 | >=1.0.0,<2.4.0 |
show PyJWT 2.4.0 includes a fix for CVE-2022-29217: An attacker submitting a JWT token can choose the signing algorithm used. The PyJWT library requires that the application choose which algorithms are supported. The application can specify 'jwt.algorithms.get_default_algorithms()' to get support for all algorithms, or specify a single algorithm. The impact is limited because 'algorithms=jwt.algorithms.get_default_algorithms()' has to be used explicitly. As a workaround, always be explicit about the algorithms that are accepted and expected when decoding. |
| pyjwt | 1.7.1 | <2.12.0 |
show Affected versions of this package are vulnerable to Insufficient Verification of Data Authenticity. The library does not validate the `crit` (Critical) Header Parameter as required by RFC 7515 §4.1.11 — when a JWT contains a `crit` array listing extensions that the library does not understand, the token is accepted instead of rejected. An attacker can exploit this vulnerability by crafting JWTs with unknown critical extensions (e.g., MFA requirements, token binding, scope restrictions) that are silently ignored, potentially bypassing security policies or causing split-brain verification in mixed-library deployments where other RFC-compliant libraries would reject the same token. |
| tqdm | 4.41.0 | >=4.4.0,<4.66.3 |
show Tqdm version 4.66.3 addresses CVE-2024-34062, a vulnerability where optional non-boolean CLI arguments like `--delim`, `--buf-size`, and `--manpath` were passed through Python's `eval`, allowing for arbitrary code execution. This security risk, only locally exploitable, has been mitigated in this release. Users are advised to upgrade to version 4.66.3 immediately as there are no workarounds for this issue. |
| babel | 2.7.0 | <2.9.1 |
show Babel 2.9.1 includes a fix for CVE-2021-42771: Babel.Locale in Babel before 2.9.1 allows attackers to load arbitrary locale .dat files (containing serialized Python objects) via directory traversal, leading to code execution. https://github.com/python-babel/babel/pull/782 |
| pyyaml | 5.3b1 | <5.3.1 |
show Pyyaml 5.3.1 includes a fix for CVE-2020-1747: A vulnerability was discovered in the PyYAML library in versions before 5.3.1, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. An attacker could use this flaw to execute arbitrary code on the system by abusing the python/object/new constructor. |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 |
show Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 |
show Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| urllib3 | 1.25.7 | >=1.25.4,<1.26.5 |
show Urllib3 1.26.5 includes a fix for CVE-2021-33503: When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect. https://github.com/advisories/GHSA-q2q7-5pp4-w6pg |
| urllib3 | 1.25.7 | >=1.22,<2.6.3 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to redirect handling that drains connections by decompressing redirect response bodies without enforcing streaming read limits. The issue occurs when using urllib3’s streaming mode (for example, preload_content=False) while allowing redirects, because urllib3.response.HTTPResponse.drain_conn() would call HTTPResponse.read() in a way that decoded/decompressed the entire redirect response body even before any streaming reads were performed, effectively bypassing decompression-bomb safeguards. |
| urllib3 | 1.25.7 | >=1.0,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to improper handling of highly compressed HTTP response bodies during streaming decompression. The urllib3.HTTPResponse methods stream(), read(), read1(), read_chunked(), and readinto() may fully decompress a minimal but highly compressed payload based on the Content-Encoding header into an internal buffer instead of limiting the decompressed output to the requested chunk size, causing excessive CPU usage and massive memory allocation on the client side. |
| urllib3 | 1.25.7 | >=1.24,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to allowing an unbounded number of content-encoding decompression steps for HTTP responses. The HTTPResponse content decoding pipeline in urllib3 follows the Content-Encoding header and applies each advertised compression algorithm in sequence without enforcing a maximum chain length or effective output size, so a malicious peer can send a response with a very long encoding chain that triggers excessive CPU use and massive memory allocation during decompression. |
| urllib3 | 1.25.7 | <=1.26.18 , >=2.0.0a1,<=2.2.1 |
show Urllib3's ProxyManager ensures that the Proxy-Authorization header is directed only to configured proxies. However, when HTTP requests bypass urllib3's proxy support, the Proxy-Authorization header may be set inadvertently, where it is ineffective without a forwarding or tunneling proxy. Urllib3 did not recognize this header as carrying authentication data and so failed to remove it during cross-origin redirects. While this scenario is uncommon and poses low risk to most users, urllib3 now proactively strips the Proxy-Authorization header during cross-origin redirects as a precaution. Users are advised to use urllib3's proxy support or to disable automatic redirects to handle the Proxy-Authorization header securely. |
| urllib3 | 1.25.7 | <1.26.17 , >=2.0.0a1,<2.0.5 |
show Urllib3 1.26.17 and 2.0.5 include a fix for CVE-2023-43804: Urllib3 doesn't treat the 'Cookie' HTTP header as special or provide any helpers for managing cookies over HTTP; that is the responsibility of the user. However, it is possible for a user to specify a 'Cookie' header and unknowingly leak information via HTTP redirects to a different origin if that user doesn't disable redirects explicitly. https://github.com/urllib3/urllib3/security/advisories/GHSA-v845-jxx5-vc9f |
| urllib3 | 1.25.7 | >=1.25.2,<=1.25.7 |
show The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains all matches of percent encodings. It is not deduplicated. For a URL of length N, the size of percent_encodings may be up to O(N). The next step (normalize existing percent-encoded bytes) also takes up to O(N) for each step, so the total time is O(N^2). If percent_encodings were deduplicated, the time to compute _encode_invalid_chars would be O(kN), where k is at most 484 ((10+6*2)^2). See: CVE-2020-7212. |
| urllib3 | 1.25.7 | <1.26.18 , >=2.0.0a1,<2.0.7 |
show Affected versions of urllib3 fail to remove the HTTP request body when an automatic redirect (301, 302, or 303) changes a POST to a GET. This flaw can expose sensitive request data if the origin service is compromised and redirects to a malicious endpoint, though exploitability is low when the request body carries no sensitive data. The issue is fixed in versions 1.26.18 and 2.0.7; update, or disable automatic redirects (in urllib3 itself, via the redirect parameter). This vulnerability is specific to Python's urllib3 library. |
| urllib3 | 1.25.7 | <2.5.0 |
show urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, configuring a PoolManager with retries set to disable redirects did not actually disable them; redirect settings passed at PoolManager instantiation were not honored at request time. By default, requests and botocore users are not affected. An application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the PoolManager level therefore remained vulnerable. This issue has been patched in version 2.5.0. |
| urllib3 | 1.25.7 | <1.25.9 |
show Urllib3 1.25.9 includes a fix for CVE-2020-26137: Urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116. https://github.com/python/cpython/issues/83784 https://github.com/urllib3/urllib3/pull/1800 |
| ipython | 7.10.2 | >=8.0.0a0,<8.0.1 , >=7.17.0,<7.31.1 , >=6.0.0a0,<7.16.3 , <5.11 |
show Ipython versions 8.0.1, 7.31.1, 7.16.3 and 5.11 include a fix for CVE-2022-21699: Affected versions are subject to an arbitrary code execution vulnerability achieved by not properly managing cross user temporary files. This vulnerability allows one user to run code as another on the same machine. https://github.com/ipython/ipython/security/advisories/GHSA-pq7m-3gw7-gq5x |
| ipython | 7.10.2 | <8.10.0 |
show IPython 8.10.0 includes a fix for CVE-2023-24816: Versions prior to 8.10.0 are subject to a command injection vulnerability with very specific prerequisites. This vulnerability requires that the function 'IPython.utils.terminal.set_term_title' be called on Windows in a Python environment where ctypes is not available. The dependency on 'ctypes' in 'IPython.utils._process_win32' prevents the vulnerable code from ever being reached in the ipython binary. However, as a library that could be used by another tool, 'set_term_title' could be called and hence introduce the vulnerability. If an attacker can get untrusted input into a call to this function, they can inject shell commands that execute as the current process and are limited to its scope. As a workaround, users should ensure that any calls to the 'IPython.utils.terminal.set_term_title' function are done with trusted or filtered input. https://github.com/ipython/ipython/security/advisories/GHSA-29gw-9793-fvw7 |
| paramiko | 2.7.1 | <3.4.0 |
show Paramiko 3.4.0 has been released to fix vulnerabilities affecting encrypt-then-MAC digest algorithms in tandem with CBC ciphers, and ChaCha20-poly1305. The fix requires cooperation from both ends of the connection, making it effective when the remote end is OpenSSH >= 9.6 and configured to use the new “strict kex” mode. For further details, refer to the official Paramiko documentation or GitHub repository. https://github.com/advisories/GHSA-45x7-px36-x8w8 |
| urllib3 | 1.25.7 | >=1.25.4,<1.26.5 |
show Urllib3 1.26.5 includes a fix for CVE-2021-33503: When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect. https://github.com/advisories/GHSA-q2q7-5pp4-w6pg |
| urllib3 | 1.25.7 | >=1.22,<2.6.3 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to redirect handling that drains connections by decompressing redirect response bodies without enforcing streaming read limits. The issue occurs when using urllib3’s streaming mode (for example, preload_content=False) while allowing redirects, because urllib3.response.HTTPResponse.drain_conn() would call HTTPResponse.read() in a way that decoded/decompressed the entire redirect response body even before any streaming reads were performed, effectively bypassing decompression-bomb safeguards. |
| urllib3 | 1.25.7 | >=1.0,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to improper handling of highly compressed HTTP response bodies during streaming decompression. The urllib3.HTTPResponse methods stream(), read(), read1(), read_chunked(), and readinto() may fully decompress a minimal but highly compressed payload based on the Content-Encoding header into an internal buffer instead of limiting the decompressed output to the requested chunk size, causing excessive CPU usage and massive memory allocation on the client side. |
| urllib3 | 1.25.7 | >=1.24,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to allowing an unbounded number of content-encoding decompression steps for HTTP responses. The HTTPResponse content decoding pipeline in urllib3 follows the Content-Encoding header and applies each advertised compression algorithm in sequence without enforcing a maximum chain length or effective output size, so a malicious peer can send a response with a very long encoding chain that triggers excessive CPU use and massive memory allocation during decompression. |
| urllib3 | 1.25.7 | <=1.26.18, >=2.0.0a1,<=2.2.1 |
show Urllib3's ProxyManager ensures that the Proxy-Authorization header is correctly directed only to configured proxies. However, when HTTP requests bypass urllib3's proxy support, there's a risk of inadvertently setting the Proxy-Authorization header, which remains ineffective without a forwarding or tunneling proxy. Urllib3 does not recognize this header as carrying authentication data, failing to remove it during cross-origin redirects. While this scenario is uncommon and poses low risk to most users, urllib3 now proactively removes the Proxy-Authorization header during cross-origin redirects as a precautionary measure. Users are advised to utilize urllib3's proxy support or disable automatic redirects to handle the Proxy-Authorization header securely. Despite these precautions, urllib3 defaults to stripping the header to safeguard users who may inadvertently misconfigure requests. |
| urllib3 | 1.25.7 | <1.26.17, >=2.0.0a1,<2.0.5 |
show Urllib3 1.26.17 and 2.0.5 include a fix for CVE-2023-43804: Urllib3 doesn't treat the 'Cookie' HTTP header special or provide any helpers for managing cookies over HTTP, that is the responsibility of the user. However, it is possible for a user to specify a 'Cookie' header and unknowingly leak information via HTTP redirects to a different origin if that user doesn't disable redirects explicitly. https://github.com/urllib3/urllib3/security/advisories/GHSA-v845-jxx5-vc9f |
| urllib3 | 1.25.7 | >=1.25.2,<=1.25.7 |
show The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains all matches of percent encodings. It is not deduplicated. For a URL of length N, the size of percent_encodings may be up to O(N). The next step (normalize existing percent-encoded bytes) also takes up to O(N) for each step, so the total time is O(N^2). If percent_encodings were deduplicated, the time to compute _encode_invalid_chars would be O(kN), where k is at most 484 ((10+6*2)^2). See: CVE-2020-7212. |
| urllib3 | 1.25.7 | <1.26.18, >=2.0.0a1,<2.0.7 |
show Affected versions of urllib3 are vulnerable to an HTTP redirect handling flaw: the HTTP request body is not removed when a POST changes to a GET via 301, 302, or 303 responses. This flaw can expose sensitive request data if the origin service is compromised and redirects to a malicious endpoint, though exploitability is low when no sensitive data is sent. The vulnerability affects automatic redirect behavior. It is fixed in versions 1.26.18 and 2.0.7; update, or disable automatic redirects (for example by passing redirect=False to urllib3 requests). This vulnerability is specific to Python's urllib3 library. |
| urllib3 | 1.25.7 | <2.5.0 |
show urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, redirects were not actually disabled when an application instantiated a PoolManager and specified retries in a way intended to disable redirects. An application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the PoolManager level therefore remained vulnerable. By default, requests and botocore users are not affected. This issue has been patched in version 2.5.0. |
| urllib3 | 1.25.7 | <1.25.9 |
show Urllib3 1.25.9 includes a fix for CVE-2020-26137: Urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116. https://github.com/python/cpython/issues/83784 https://github.com/urllib3/urllib3/pull/1800 |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to HTTP Request Smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out-of-order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 |
show Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP Request Smuggling. This vulnerability arises from inadequate validation of modified request headers, enabling an attacker to smuggle requests through several techniques: employing multiple Content-Length headers, combining a Content-Length header with a Transfer-Encoding header, or utilizing a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 |
show Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 |
show Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in the 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent' functions. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two content-length headers, it ignored the first header. When the second content-length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL this vulnerability may result in Reflected Cross-Site Scripting (XSS) in the redirect response HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 |
show Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: Disordered HTTP pipeline response in twisted.web. #NOTE: The data we include in this advisory differs from the data publicly available on nvd.nist.gov. As indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 |
show Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Users who may be affected use Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected. The HTTP 2.0 server uses a different parser, so it is not affected. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them, or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| prompt-toolkit | 3.0.2 | <3.0.13 |
show Prompt-toolkit 3.0.13 fixes a race condition in `ThreadedHistory` which could lead to a deadlock. https://github.com/prompt-toolkit/python-prompt-toolkit/commit/99092a8c6d4b411645ac4b84d504e5226e7eebb8#diff-48c0ff10dc3990285d19b3f54e6bfec763089ba1229dc6f9e88463a1046adad7R163 |
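Several of the advisories in the table above (the urllib3 Proxy-Authorization and Cookie leaks, and Twisted's CVE-2022-21712) share one mitigation: strip credential-bearing headers when a redirect crosses origins. The following is a minimal sketch of that rule; `headers_for_redirect` and `origin` are illustrative names, not any library's actual API.

```python
# Hypothetical helper, not urllib3's or Twisted's actual implementation:
# drop credential-bearing headers when a redirect changes origin.
from urllib.parse import urlsplit

SENSITIVE_HEADERS = {"authorization", "proxy-authorization", "cookie"}

def origin(url: str) -> tuple:
    """Return the (scheme, host, port) triple that defines an origin."""
    parts = urlsplit(url)
    return (parts.scheme.lower(), parts.hostname, parts.port)

def headers_for_redirect(headers: dict, old_url: str, new_url: str) -> dict:
    """Keep all headers on same-origin redirects; otherwise strip credentials."""
    if origin(old_url) == origin(new_url):
        return dict(headers)
    return {k: v for k, v in headers.items() if k.lower() not in SENSITIVE_HEADERS}
```

Patched urllib3 and Twisted releases apply this kind of stripping automatically, per the advisories above; the sketch only illustrates the rule the fixes enforce.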
| Package | Installed | Affected | Info |
|---|---|---|---|
| py | 1.8.0 | <=1.11.0 |
show ** DISPUTED ** Py through 1.11.0 allows remote attackers to conduct a ReDoS (Regular expression Denial of Service) attack via a Subversion repository with crafted info data because the InfoSvnCommand argument is mishandled. https://github.com/pytest-dev/py/issues/287 |
| py | 1.8.0 | <=1.9.0 |
show Py 1.10.0 includes a fix for CVE-2020-29651: A denial of service via regular expression in the py.path.svnwc component of py (aka python-py) through 1.9.0 could be used by attackers to cause a compute-time denial of service by supplying malicious input to the blame functionality. |
| rsa | 3.4.2 | >=3.0,<4.7 |
show Rsa 4.7 includes a fix for CVE-2020-25658: It was found that python-rsa is vulnerable to Bleichenbacher timing attacks. An attacker can use this flaw via the RSA decryption API to decrypt parts of the cipher text encrypted with RSA. |
| rsa | 3.4.2 | <4.3 |
show Rsa 4.3 includes a fix for CVE-2020-13757: Python-RSA before 4.3 ignores leading '\0' bytes during decryption of ciphertext. This could conceivably have a security-relevant impact, e.g., by helping an attacker to infer that an application uses Python-RSA, or if the length of accepted ciphertext affects application behavior (such as by causing excessive memory allocation). |
| zipp | 0.6.0 | <3.19.1 |
show A Denial of Service (DoS) vulnerability exists in the jaraco/zipp library. The vulnerability is triggered when processing a specially crafted zip file that leads to an infinite loop. This issue also impacts the zipfile module of CPython, as features from the third-party zipp library were later merged into CPython, and the affected code is identical in both projects. The infinite loop can be initiated through the use of functions affecting the `Path` module in both zipp and zipfile, such as `joinpath`, the overloaded division operator, and `iterdir`. Although the infinite loop does not exhaust resources, it prevents the application from responding. |
| lxml | 4.4.2 | <6.1.0 |
show Affected versions of the lxml package are vulnerable to XML External Entity Injection due to insecure default parser configuration that resolves external entities. The iterparse() function and the ETCompatXMLParser() class both default to resolve_entities=True, so untrusted XML input processed through either parser will expand external entity references and read referenced local files from the host. An attacker who supplies a crafted XML document to an application using these parsers in their default configuration can read sensitive local files and exfiltrate their contents through the parsed output. |
| lxml | 4.4.2 | <4.9.1 |
show Lxml 4.9.1 includes a fix for CVE-2022-2309: NULL Pointer Dereference allows attackers to cause a denial of service (or application crash). This only applies when lxml is used together with libxml2 2.9.10 through 2.9.14. libxml2 2.9.9 and earlier are not affected. It allows triggering crashes through forged input data, given a vulnerable code sequence in the application. The vulnerability is caused by the iterwalk function (also used by the canonicalize function). Such code shouldn't be in wide-spread use, given that parsing + iterwalk would usually be replaced with the more efficient iterparse function. However, an XML converter that serialises to C14N would also be vulnerable, for example, and there are legitimate use cases for this code sequence. If untrusted input is received (also remotely) and processed via iterwalk function, a crash can be triggered. |
| lxml | 4.4.2 | <4.6.3 |
show Lxml version 4.6.3 includes a fix for CVE-2021-28957: An XSS vulnerability was discovered in python-lxml's clean module in versions before 4.6.3. When the safe_attrs_only and forms arguments are disabled, the Cleaner class does not remove the formaction attribute, allowing JS to bypass the sanitizer. A remote attacker could exploit this flaw to run arbitrary JS code on users who interact with incorrectly sanitized HTML. https://bugs.launchpad.net/lxml/+bug/1888153 |
| lxml | 4.4.2 | <4.6.5 |
show Lxml 4.6.5 includes a fix for CVE-2021-43818: Prior to version 4.6.5, the HTML Cleaner in lxml.html lets certain crafted script content pass through, as well as script content in SVG files embedded using data URIs. Users that employ the HTML cleaner in a security relevant context should upgrade to lxml 4.6.5 to receive a patch. |
| lxml | 4.4.2 | <4.6.2 |
show Lxml 4.6.2 includes a fix for CVE-2020-27783: An XSS vulnerability was discovered in python-lxml's clean module. The module's parser didn't properly imitate browsers, which caused different behaviors between the sanitizer and the user's page. A remote attacker could exploit this flaw to run arbitrary HTML/JS code. |
| babel | 2.7.0 | <2.9.1 |
show Babel 2.9.1 includes a fix for CVE-2021-42771: Babel.Locale in Babel before 2.9.1 allows attackers to load arbitrary locale .dat files (containing serialized Python objects) via directory traversal, leading to code execution. https://github.com/python-babel/babel/pull/782 |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 |
show Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 |
show Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
| ipython | 7.10.2 | >=8.0.0a0,<8.0.1 , >=7.17.0,<7.31.1 , >=6.0.0a0,<7.16.3 , <5.11 |
show Ipython versions 8.0.1, 7.31.1, 7.16.3 and 5.11 include a fix for CVE-2022-21699: Affected versions are subject to an arbitrary code execution vulnerability achieved by not properly managing cross user temporary files. This vulnerability allows one user to run code as another on the same machine. https://github.com/ipython/ipython/security/advisories/GHSA-pq7m-3gw7-gq5x |
| ipython | 7.10.2 | <8.10.0 |
show IPython 8.10.0 includes a fix for CVE-2023-24816: Versions prior to 8.10.0 are subject to a command injection vulnerability with very specific prerequisites. This vulnerability requires that the function 'IPython.utils.terminal.set_term_title' be called on Windows in a Python environment where ctypes is not available. The dependency on 'ctypes' in 'IPython.utils._process_win32' prevents the vulnerable code from ever being reached in the ipython binary. However, as a library that could be used by another tool, 'set_term_title' could be called and hence introduce a vulnerability. If an attacker gets untrusted input into a call to this function, they would be able to inject shell commands as the current process, limited to the scope of the current process. As a workaround, users should ensure that any calls to the 'IPython.utils.terminal.set_term_title' function are made with trusted or filtered input. https://github.com/ipython/ipython/security/advisories/GHSA-29gw-9793-fvw7 |
| paramiko | 2.7.1 | <3.4.0 |
show Paramiko 3.4.0 has been released to fix vulnerabilities affecting encrypt-then-MAC digest algorithms used in tandem with CBC ciphers, and ChaCha20-Poly1305. The fix requires cooperation from both ends of the connection, making it effective when the remote end is OpenSSH >= 9.6 and configured to use the new “strict kex” mode. For further details, refer to the official Paramiko documentation or GitHub repository. https://github.com/advisories/GHSA-45x7-px36-x8w8 |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to HTTP Request Smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out-of-order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 |
show Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP Request Smuggling. This vulnerability arises from inadequate validation of modified request headers, enabling an attacker to smuggle requests through several techniques: employing multiple Content-Length headers, combining a Content-Length header with a Transfer-Encoding header, or utilizing a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 |
show Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 |
show Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in the 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent' functions. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two content-length headers, it ignored the first header. When the second content-length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL this vulnerability may result in Reflected Cross-Site Scripting (XSS) in the redirect response HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 |
show Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: Disordered HTTP pipeline response in twisted.web. #NOTE: The data we include in this advisory differs from the data publicly available on nvd.nist.gov. As indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 |
show Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Users who may be affected use Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected. The HTTP 2.0 server uses a different parser, so it is not affected. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them, or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| prompt-toolkit | 3.0.2 | <3.0.13 |
show Prompt-toolkit 3.0.13 fixes a race condition in `ThreadedHistory` which could lead to a deadlock. https://github.com/prompt-toolkit/python-prompt-toolkit/commit/99092a8c6d4b411645ac4b84d504e5226e7eebb8#diff-48c0ff10dc3990285d19b3f54e6bfec763089ba1229dc6f9e88463a1046adad7R163 |
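The CVE-2020-7212 entry above attributes the quadratic blow-up in urllib3's `_encode_invalid_chars` to an undeduplicated list of percent-encoding matches. The toy sketch below illustrates the deduplication idea only (names are illustrative, not urllib3's actual code): since at most a few hundred distinct percent-encodings exist, normalizing each distinct match once bounds the work at O(kN) instead of O(N^2).

```python
import re

# Matches one percent-encoded byte, e.g. "%af".
PERCENT_RE = re.compile(r"%[a-fA-F0-9]{2}")

def normalize_percent_chars(component: str) -> str:
    """Uppercase every percent-encoding, visiting each DISTINCT match once.

    Without the set(), a component such as "%af" repeated n times would be
    re-normalized once per occurrence, reproducing the O(N^2) behavior
    described in CVE-2020-7212.
    """
    for match in set(PERCENT_RE.findall(component)):
        component = component.replace(match, match.upper())
    return component
```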
| lxml | 4.4.2 | <4.6.5 |
show Lxml 4.6.5 includes a fix for CVE-2021-43818: Prior to version 4.6.5, the HTML Cleaner in lxml.html lets certain crafted script content pass through, as well as script content in SVG files embedded using data URIs. Users that employ the HTML cleaner in a security relevant context should upgrade to lxml 4.6.5 to receive a patch. |
| lxml | 4.4.2 | <4.6.2 |
show Lxml 4.6.2 includes a fix for CVE-2020-27783: A XSS vulnerability was discovered in python-lxml's clean module. The module's parser didn't properly imitate browsers, which caused different behaviors between the sanitizer and the user's page. A remote attacker could exploit this flaw to run arbitrary HTML/JS code. |
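The lxml entries above stem from parsers that resolve external entities by default. With lxml itself, the direct fix is constructing the parser with `resolve_entities=False`; independent of that, a coarse defense-in-depth check is to refuse untrusted XML that carries a DOCTYPE at all, since external entities can only be declared inside one. This is a minimal stdlib-only sketch of that guard, not lxml's own API:

```python
import re

# External entities (XXE) can only be declared inside a DOCTYPE, so
# refusing the whole construct is a coarse but safe pre-parse guard
# for untrusted input, regardless of which XML parser runs afterwards.
_DOCTYPE = re.compile(rb"<!DOCTYPE", re.IGNORECASE)

def reject_doctype(xml_bytes: bytes) -> bytes:
    """Return the input unchanged, or raise ValueError if it declares a DOCTYPE."""
    if _DOCTYPE.search(xml_bytes):
        raise ValueError("refusing XML with a DOCTYPE (possible XXE)")
    return xml_bytes

# Benign document passes through untouched.
safe = reject_doctype(b"<root><child/></root>")

# A classic XXE payload is rejected before any parser sees it.
evil = b'<!DOCTYPE r [<!ENTITY xxe SYSTEM "file:///etc/passwd">]><r>&xxe;</r>'
try:
    reject_doctype(evil)
    blocked = False
except ValueError:
    blocked = True
```

Rejecting DOCTYPE outright also disables harmless internal entities, so it is a policy choice for untrusted input, not a general-purpose filter.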
| pyjwt | 1.7.1 | >=1.0.0,<2.4.0 |
show PyJWT 2.4.0 includes a fix for CVE-2022-29217: An attacker submitting the JWT token can choose the used signing algorithm. The PyJWT library requires that the application chooses what algorithms are supported. The application can specify 'jwt.algorithms.get_default_algorithms()' to get support for all algorithms, or specify a single algorithm. The issue is not that big as 'algorithms=jwt.algorithms.get_default_algorithms()' has to be used. As a workaround, always be explicit with the algorithms that are accepted and expected when decoding. |
| pyjwt | 1.7.1 | <2.12.0 |
show Affected versions of this package are vulnerable to Insufficient Verification of Data Authenticity. The library does not validate the `crit` (Critical) Header Parameter as required by RFC 7515 §4.1.11 — when a JWT contains a `crit` array listing extensions that the library does not understand, the token is accepted instead of rejected. An attacker can exploit this vulnerability by crafting JWTs with unknown critical extensions (e.g., MFA requirements, token binding, scope restrictions) that are silently ignored, potentially bypassing security policies or causing split-brain verification in mixed-library deployments where other RFC-compliant libraries would reject the same token. |
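Both PyJWT findings above come down to trusting fields of the (unverified) JWT header: the signing algorithm and the `crit` list. As a hedged, stdlib-only sketch of the recommended posture, an application can inspect the header before verification, pin `alg` to an explicit allow-list, and reject any critical extension it does not implement (the `ALLOWED_ALGS` value and the `_fake_token` helper below are illustrative assumptions, not PyJWT API):

```python
import base64
import json

ALLOWED_ALGS = {"HS256"}   # assumption: this app only ever signs with HS256
UNDERSTOOD_CRIT = set()    # extensions this verifier actually implements

def check_jwt_header(token: str) -> dict:
    """Enforce two policies on a JWT's unverified header: alg must be on an
    explicit allow-list, and any `crit` extension must be one we understand
    (RFC 7515 section 4.1.11 requires rejecting the token otherwise)."""
    b64 = token.split(".")[0]
    b64 += "=" * (-len(b64) % 4)          # restore stripped base64url padding
    header = json.loads(base64.urlsafe_b64decode(b64))
    if header.get("alg") not in ALLOWED_ALGS:
        raise ValueError(f"disallowed alg: {header.get('alg')!r}")
    unknown = set(header.get("crit", [])) - UNDERSTOOD_CRIT
    if unknown:
        raise ValueError(f"unknown critical extensions: {sorted(unknown)}")
    return header

def _fake_token(header: dict) -> str:
    """Build a header-only stand-in token for demonstration purposes."""
    raw = base64.urlsafe_b64encode(json.dumps(header).encode()).rstrip(b"=")
    return raw.decode() + ".payload.signature"

ok = check_jwt_header(_fake_token({"alg": "HS256", "typ": "JWT"}))

try:
    check_jwt_header(_fake_token({"alg": "none"}))
    alg_rejected = False
except ValueError:
    alg_rejected = True

try:
    check_jwt_header(_fake_token({"alg": "HS256", "crit": ["mfa"]}))
    crit_rejected = False
except ValueError:
    crit_rejected = True
```

This check complements, rather than replaces, passing an explicit `algorithms` list to the library's decode call and performing full signature verification.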
| tqdm | 4.41.0 | >=4.4.0,<4.66.3 |
show Tqdm version 4.66.3 addresses CVE-2024-34062, a vulnerability where optional non-boolean CLI arguments like `--delim`, `--buf-size`, and `--manpath` were passed through Python's `eval`, allowing for arbitrary code execution. This security risk, only locally exploitable, has been mitigated in this release. Users are advised to upgrade to version 4.66.3 immediately as there are no workarounds for this issue. |
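The tqdm issue above is the classic hazard of passing CLI argument strings through `eval`. A hedged sketch of the safer pattern: `ast.literal_eval` accepts only Python literals, so an injected expression fails to parse instead of executing (the `parse_cli_value` helper is illustrative, not tqdm's actual fix):

```python
import ast

def parse_cli_value(raw: str):
    """Interpret a CLI argument value without eval().

    ast.literal_eval only accepts Python literals (numbers, strings,
    tuples, ...), so an injected expression such as __import__('os')
    is a parse error rather than executed code. Anything that is not
    a literal falls back to a plain string.
    """
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return raw  # not a literal: treat as an ordinary string

buf_size = parse_cli_value("65536")      # parsed as the int 65536
delim = parse_cli_value("'\\n'")         # parsed as the newline string
injected = parse_cli_value("__import__('os').system('id')")  # stays inert text
```

Where an option has a single expected type, casting with `int()` or similar is tighter still; `literal_eval` is the general-purpose fallback.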
| babel | 2.7.0 | <2.9.1 |
show Babel 2.9.1 includes a fix for CVE-2021-42771: Babel.Locale in Babel before 2.9.1 allows attackers to load arbitrary locale .dat files (containing serialized Python objects) via directory traversal, leading to code execution. https://github.com/python-babel/babel/pull/782 |
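The Babel flaw above is a directory traversal: a locale identifier flowed unvalidated into a `.dat` file path. A minimal stdlib sketch of the containment pattern, assuming a hypothetical `DATA_DIR` (this is the general technique, not Babel's actual patch):

```python
import os
import re

DATA_DIR = "/usr/share/locale-data"   # hypothetical locale-data directory

# Locale identifiers look like 'en', 'pt_BR', 'zh_Hant_TW': letters,
# then underscore-separated alphanumeric subtags. Nothing else.
_LOCALE_RE = re.compile(r"^[A-Za-z]{2,3}(_[A-Za-z0-9]+)*$")

def locale_data_path(identifier: str) -> str:
    """Build the path of a locale .dat file, refusing traversal attempts.

    Two layers: the identifier must match the locale grammar, and the
    resolved path must still live directly inside DATA_DIR.
    """
    if not _LOCALE_RE.match(identifier):
        raise ValueError(f"invalid locale identifier: {identifier!r}")
    path = os.path.normpath(os.path.join(DATA_DIR, identifier + ".dat"))
    if os.path.dirname(path) != DATA_DIR:
        raise ValueError("path escapes the data directory")
    return path

ok_path = locale_data_path("pt_BR")

try:
    locale_data_path("../../../etc/passwd")
    traversal_blocked = False
except ValueError:
    traversal_blocked = True
```

The regex alone already blocks `/` and `..`; the `normpath`/`dirname` check is belt-and-braces in case the grammar is ever loosened.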
| pyyaml | 5.3b1 | <5.3.1 |
show Pyyaml 5.3.1 includes a fix for CVE-2020-1747: A vulnerability was discovered in the PyYAML library in versions before 5.3.1, where it is susceptible to arbitrary code execution when it processes untrusted YAML files through the full_load method or with the FullLoader loader. Applications that use the library to process untrusted input may be vulnerable to this flaw. An attacker could use this flaw to execute arbitrary code on the system by abusing the python/object/new constructor. |
| scrapy | 2.12.0 | >=1.4.0,<=2.14.1 |
show Affected versions of the Scrapy package are vulnerable to Improper Input Validation due to unsafe handling of externally supplied Referrer-Policy header values. In scrapy.spidermiddlewares.referer.RefererMiddleware.policy(), Scrapy reads the Referrer-Policy response header and passes it to _load_policy_class(), which in affected versions could treat a header value that looked like a Python import path as a callable object, import it with load_object(), and instantiate or execute it. |
| scrapy | 2.12.0 | >=0.7 |
show Scrapy is vulnerable to CVE-2017-14158: Scrapy allows remote attackers to cause a denial of service (memory consumption) via large files because arbitrarily many files are read into memory, which is especially problematic if the files are then individually written in a separate thread to a slow storage resource, as demonstrated by interaction between dataReceived (in core/downloader/handlers/http11.py) and S3FilesStore. |
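The Referrer-Policy finding above illustrates why a header value must never be treated as an import path. The Referrer-Policy spec defines a small, closed set of tokens, so a lookup table suffices. A hedged sketch with illustrative stand-in classes (these are not Scrapy's real policy classes):

```python
# Map the finite set of policy tokens to handler classes, instead of
# importing whatever dotted path the response header happens to contain.
class NoReferrerPolicy: ...
class SameOriginPolicy: ...
class StrictOriginPolicy: ...

POLICIES = {
    "no-referrer": NoReferrerPolicy,
    "same-origin": SameOriginPolicy,
    "strict-origin": StrictOriginPolicy,
}

def policy_from_header(value: str, default=NoReferrerPolicy):
    """Look the header value up in a closed table; unknown values fall back
    to a safe default rather than being handed to a dynamic importer."""
    return POLICIES.get(value.strip().lower(), default)

cls = policy_from_header("Same-Origin")                         # known token
fallback = policy_from_header("scrapy.utils.misc.load_object")  # never imported
```

With this shape, a malicious header can at worst select a policy from the table; it can never cause code to load.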
| jinja2 | 2.10.3 | <3.1.6 |
show Prior to 3.1.6, an oversight in how the Jinja sandboxed environment interacts with the |attr filter allows an attacker that controls the content of a template to execute arbitrary Python code. To exploit the vulnerability, an attacker needs to control the content of a template. Whether that is the case depends on the type of application using Jinja. This vulnerability impacts users of applications which execute untrusted templates. Jinja's sandbox does catch calls to str.format and ensures they don't escape the sandbox. However, it's possible to use the |attr filter to get a reference to a string's plain format method, bypassing the sandbox. After the fix, the |attr filter no longer bypasses the environment's attribute lookup. This vulnerability is fixed in 3.1.6. |
| jinja2 | 2.10.3 | <3.1.5 |
show An oversight in how the Jinja sandboxed environment detects calls to str.format allows an attacker who controls the content of a template to execute arbitrary Python code. To exploit the vulnerability, an attacker needs to control the content of a template. Whether that is the case depends on the type of application using Jinja. This vulnerability impacts users of applications which execute untrusted templates. Jinja's sandbox does catch calls to str.format and ensures they don't escape the sandbox. However, it's possible to store a reference to a malicious string's format method, then pass that to a filter that calls it. No such filters are built-in to Jinja, but could be present through custom filters in an application. After the fix, such indirect calls are also handled by the sandbox. |
| jinja2 | 2.10.3 | <2.11.3 |
show This affects the package jinja2 from 0.0.0 and before 2.11.3. The ReDoS vulnerability is mainly due to the '_punctuation_re regex' operator and its use of multiple wildcards. The last wildcard is the most exploitable as it searches for trailing punctuation. This issue can be mitigated by using Markdown to format user content instead of the urlize filter, or by implementing request timeouts and limiting process memory. |
| jinja2 | 2.10.3 | <3.1.4 |
show Jinja is an extensible templating engine. The `xmlattr` filter in affected versions of Jinja accepts keys containing non-attribute characters. XML/HTML attributes cannot contain spaces, `/`, `>`, or `=`, as each would then be interpreted as starting a separate attribute. If an application accepts keys (as opposed to only values) as user input, and renders these in pages that other users see as well, an attacker could use this to inject other attributes and perform XSS. The fix for CVE-2024-22195 only addressed spaces but not other characters. Accepting keys as user input is now explicitly considered an unintended use case of the `xmlattr` filter, and code that does so without otherwise validating the input should be flagged as insecure, regardless of Jinja version. Accepting _values_ as user input continues to be safe. |
| jinja2 | 2.10.3 | <3.1.3 |
show Jinja is an extensible templating engine. Special placeholders in the template allow writing code similar to Python syntax. It is possible to inject arbitrary HTML attributes into the rendered HTML template, potentially leading to Cross-Site Scripting (XSS). The Jinja `xmlattr` filter can be abused to inject arbitrary HTML attribute keys and values, bypassing the auto escaping mechanism and potentially leading to XSS. It may also be possible to bypass attribute validation checks if they are blacklist-based. |
| urllib3 | 1.25.7 | >=1.25.4,<1.26.5 |
show Urllib3 1.26.5 includes a fix for CVE-2021-33503: When provided with a URL containing many @ characters in the authority component, the authority regular expression exhibits catastrophic backtracking, causing a denial of service if a URL were passed as a parameter or redirected to via an HTTP redirect. https://github.com/advisories/GHSA-q2q7-5pp4-w6pg |
| urllib3 | 1.25.7 | >=1.22,<2.6.3 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to redirect handling that drains connections by decompressing redirect response bodies without enforcing streaming read limits. The issue occurs when using urllib3’s streaming mode (for example, preload_content=False) while allowing redirects, because urllib3.response.HTTPResponse.drain_conn() would call HTTPResponse.read() in a way that decoded/decompressed the entire redirect response body even before any streaming reads were performed, effectively bypassing decompression-bomb safeguards. |
| urllib3 | 1.25.7 | >=1.0,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to improper handling of highly compressed HTTP response bodies during streaming decompression. The urllib3.HTTPResponse methods stream(), read(), read1(), read_chunked(), and readinto() may fully decompress a minimal but highly compressed payload based on the Content-Encoding header into an internal buffer instead of limiting the decompressed output to the requested chunk size, causing excessive CPU usage and massive memory allocation on the client side. |
| urllib3 | 1.25.7 | >=1.24,<2.6.0 |
show Affected versions of the urllib3 package are vulnerable to Denial of Service (DoS) due to allowing an unbounded number of content-encoding decompression steps for HTTP responses. The HTTPResponse content decoding pipeline in urllib3 follows the Content-Encoding header and applies each advertised compression algorithm in sequence without enforcing a maximum chain length or effective output size, so a malicious peer can send a response with a very long encoding chain that triggers excessive CPU use and massive memory allocation during decompression. |
| urllib3 | 1.25.7 | <=1.26.18 , >=2.0.0a1,<=2.2.1 |
show Urllib3's ProxyManager ensures that the Proxy-Authorization header is correctly directed only to configured proxies. However, when HTTP requests bypass urllib3's proxy support, there's a risk of inadvertently setting the Proxy-Authorization header, which remains ineffective without a forwarding or tunneling proxy. Urllib3 does not recognize this header as carrying authentication data, failing to remove it during cross-origin redirects. While this scenario is uncommon and poses low risk to most users, urllib3 now proactively removes the Proxy-Authorization header during cross-origin redirects as a precautionary measure. Users are advised to utilize urllib3's proxy support or disable automatic redirects to handle the Proxy-Authorization header securely. Despite these precautions, urllib3 defaults to stripping the header to safeguard users who may inadvertently misconfigure requests. |
| urllib3 | 1.25.7 | <1.26.17 , >=2.0.0a1,<2.0.5 |
show Urllib3 1.26.17 and 2.0.5 include a fix for CVE-2023-43804: Urllib3 doesn't treat the 'Cookie' HTTP header as special or provide any helpers for managing cookies over HTTP; that is the responsibility of the user. However, it is possible for a user to specify a 'Cookie' header and unknowingly leak information via HTTP redirects to a different origin if that user doesn't disable redirects explicitly. https://github.com/urllib3/urllib3/security/advisories/GHSA-v845-jxx5-vc9f |
| urllib3 | 1.25.7 | >=1.25.2,<=1.25.7 |
show The _encode_invalid_chars function in util/url.py in the urllib3 library 1.25.2 through 1.25.7 for Python allows a denial of service (CPU consumption) because of an inefficient algorithm. The percent_encodings array contains all matches of percent encodings. It is not deduplicated. For a URL of length N, the size of percent_encodings may be up to O(N). The next step (normalize existing percent-encoded bytes) also takes up to O(N) for each step, so the total time is O(N^2). If percent_encodings were deduplicated, the time to compute _encode_invalid_chars would be O(kN), where k is at most 484 ((10+6*2)^2). See: CVE-2020-7212. |
| urllib3 | 1.25.7 | <1.26.18 , >=2.0.0a1,<2.0.7 |
show Affected versions of urllib3 are vulnerable to an HTTP redirect handling vulnerability that fails to remove the HTTP request body when a POST changes to a GET via 301, 302, or 303 responses. This flaw can expose sensitive request data if the origin service is compromised and redirects to a malicious endpoint, though exploitability is low when no sensitive data is involved. The vulnerability affects automatic redirect behavior. It is fixed in versions 1.26.18 and 2.0.7; update or disable redirects using redirect=False. This vulnerability is specific to Python's urllib3 library. |
| urllib3 | 1.25.7 | <2.5.0 |
show urllib3 is a user-friendly HTTP client library for Python. Prior to 2.5.0, it is possible to disable redirects for all requests by instantiating a PoolManager and specifying retries in a way that disables redirects. By default, requests and botocore users are not affected. An application attempting to mitigate SSRF or open redirect vulnerabilities by disabling redirects at the PoolManager level will remain vulnerable. This issue has been patched in version 2.5.0. |
| urllib3 | 1.25.7 | <1.25.9 |
show Urllib3 1.25.9 includes a fix for CVE-2020-26137: Urllib3 before 1.25.9 allows CRLF injection if the attacker controls the HTTP request method, as demonstrated by inserting CR and LF control characters in the first argument of putrequest(). NOTE: this is similar to CVE-2020-26116. https://github.com/python/cpython/issues/83784 https://github.com/urllib3/urllib3/pull/1800 |
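The CRLF-injection entry directly above (CVE-2020-26137) works by smuggling CR/LF bytes inside an attacker-controlled HTTP method. RFC 7230 restricts methods to its "token" alphabet, which excludes CR, LF, and spaces, so validating against that grammar blocks the attack at the boundary. A hedged stdlib sketch of such a guard (`validate_method` is an illustrative helper, not urllib3's fix):

```python
import re

# RFC 7230 "token" characters: the only bytes legal in an HTTP method.
_TOKEN = re.compile(r"^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$")

def validate_method(method: str) -> str:
    """Reject HTTP methods containing CR/LF (or anything else outside the
    RFC 7230 token alphabet) before they reach the request line."""
    if not _TOKEN.match(method):
        raise ValueError(f"invalid HTTP method: {method!r}")
    return method

ok = validate_method("GET")

try:
    # CRLF smuggling attempt: injects a second request line via the method.
    validate_method("GET /evil HTTP/1.1\r\nHost: attacker")
    crlf_blocked = False
except ValueError:
    crlf_blocked = True
```

The same token-grammar check applies to header names; header values need their own (different) validation against bare CR and LF.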
| ipython | 7.10.2 | >=8.0.0a0,<8.0.1 , >=7.17.0,<7.31.1 , >=6.0.0a0,<7.16.3 , <5.11 |
show Ipython versions 8.0.1, 7.31.1, 7.16.3 and 5.11 include a fix for CVE-2022-21699: Affected versions are subject to an arbitrary code execution vulnerability achieved by not properly managing cross user temporary files. This vulnerability allows one user to run code as another on the same machine. https://github.com/ipython/ipython/security/advisories/GHSA-pq7m-3gw7-gq5x |
| ipython | 7.10.2 | <8.10.0 |
show IPython 8.10.0 includes a fix for CVE-2023-24816: Versions prior to 8.10.0 are subject to a command injection vulnerability with very specific prerequisites. This vulnerability requires that the function 'IPython.utils.terminal.set_term_title' be called on Windows in a Python environment where ctypes is not available. The dependency on 'ctypes' in 'IPython.utils._process_win32' prevents the vulnerable code from ever being reached in the ipython binary. However, as a library that could be used by another tool, 'set_term_title' could be called and hence introduce a vulnerability. If an attacker gets untrusted input into an instance of this function, they would be able to inject shell commands running as the current process, limited to the scope of the current process. As a workaround, users should ensure that any calls to the 'IPython.utils.terminal.set_term_title' function are done with trusted or filtered input. https://github.com/ipython/ipython/security/advisories/GHSA-29gw-9793-fvw7 |
| paramiko | 2.7.1 | <3.4.0 |
show Paramiko 3.4.0 has been released to fix vulnerabilities affecting encrypt-then-MAC digest algorithms in tandem with CBC ciphers, and ChaCha20-poly1305. The fix requires cooperation from both ends of the connection, making it effective when the remote end is OpenSSH >= 9.6 and configured to use the new “strict kex” mode. For further details, refer to the official Paramiko documentation or GitHub repository. https://github.com/advisories/GHSA-45x7-px36-x8w8 |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to HTTP Request Smuggling. The HTTP 1.0 and 1.1 server provided by twisted.web could process pipelined HTTP requests out-of-order, possibly resulting in information disclosure. |
| twisted | 19.10.0 | >=0,<20.3.0 |
show Affected versions of Twisted, an event-driven network framework, are susceptible to HTTP Request Smuggling. This vulnerability arises from inadequate validation of modified request headers, enabling an attacker to smuggle requests through several techniques: employing multiple Content-Length headers, combining a Content-Length header with a Transfer-Encoding header, or utilizing a Transfer-Encoding header with values other than 'chunked' or 'identity'. This flaw compromises the framework's ability to securely process HTTP requests. |
| twisted | 19.10.0 | <=19.10.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10109: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with a content-length and a chunked encoding header, the content-length took precedence and the remainder of the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | >=0.9.4,<22.10.0rc1 |
show Twisted 22.10.0rc1 includes a fix for CVE-2022-39348: NameVirtualHost Host header injection. https://github.com/twisted/twisted/security/advisories/GHSA-vg46-2rrj-3647 |
| twisted | 19.10.0 | >=11.1,<22.1 |
show Twisted 22.1 includes a fix for CVE-2022-21712: In affected versions, twisted exposes cookies and authorization headers when following cross-origin redirects. This issue is present in the 'twisted.web.RedirectAgent' and 'twisted.web.BrowserLikeRedirectAgent' functions. There are no known workarounds. |
| twisted | 19.10.0 | <20.3.0 |
show Twisted 20.3.0 includes a fix for CVE-2020-10108: In Twisted Web through 19.10.0, there was an HTTP request splitting vulnerability. When presented with two content-length headers, it ignored the first header. When the second content-length value was set to zero, the request body was interpreted as a pipelined request. |
| twisted | 19.10.0 | <24.7.0rc1 |
show Affected versions of Twisted are vulnerable to XSS. The `twisted.web.util.redirectTo` function contains an HTML injection vulnerability. If application code allows an attacker to control the redirect URL this vulnerability may result in Reflected Cross-Site Scripting (XSS) in the redirect response HTML body. |
| twisted | 19.10.0 | >=16.3.0,<23.10.0rc1 |
show Twisted 23.10.0rc1 includes a fix for CVE-2023-46137: Disordered HTTP pipeline response in twisted.web. #NOTE: The data we include in this advisory differs from the data publicly available at nvd.nist.gov. As indicated in the project's changelog, the vulnerability was introduced in Twisted 16.3.0. https://github.com/twisted/twisted/security/advisories/GHSA-xc8x-vp79-p3wm |
| twisted | 19.10.0 | <22.4.0rc1 |
show Twisted 22.4.0rc1 includes a fix for CVE-2022-24801: Prior to version 22.4.0rc1, the Twisted Web HTTP 1.1 server, located in the 'twisted.web.http' module, parsed several HTTP request constructs more leniently than permitted by RFC 7230. This non-conformant parsing can lead to desync if requests pass through multiple HTTP parsers, potentially resulting in HTTP request smuggling. Users who may be affected use Twisted Web's HTTP 1.1 server and/or proxy and also pass requests through a different HTTP server and/or proxy. The Twisted Web client is not affected. The HTTP 2.0 server uses a different parser, so it is not affected. Two workarounds are available: ensure any vulnerabilities in upstream proxies have been addressed, such as by upgrading them; or filter malformed requests by other means, such as by configuring an upstream proxy. https://github.com/twisted/twisted/security/advisories/GHSA-c2jg-hw38-jrqq |
| prompt-toolkit | 3.0.2 | <3.0.13 |
show Prompt-toolkit 3.0.13 fixes a race condition in `ThreadedHistory` which could lead to a deadlock. https://github.com/prompt-toolkit/python-prompt-toolkit/commit/99092a8c6d4b411645ac4b84d504e5226e7eebb8#diff-48c0ff10dc3990285d19b3f54e6bfec763089ba1229dc6f9e88463a1046adad7R163 |
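The request-splitting advisories above (CVE-2020-10108 and CVE-2020-10109) share one root cause: the request's body length was ambiguous, so two parsers in a chain could disagree about where one request ends and the next begins. A minimal sketch of the conservative check mandated by RFC 7230 section 3.3.3 follows; `has_ambiguous_framing` is a hypothetical helper for illustration, not Twisted's actual parser code.

```python
# Sketch: flag requests whose message framing is ambiguous enough to
# enable request splitting/smuggling between chained HTTP parsers.

def has_ambiguous_framing(headers: list[tuple[str, str]]) -> bool:
    """Return True if the headers allow two parsers to disagree on body length."""
    content_lengths = [v.strip() for k, v in headers if k.lower() == "content-length"]
    transfer_encodings = [v for k, v in headers if k.lower() == "transfer-encoding"]
    # Duplicate, disagreeing Content-Length values (the CVE-2020-10108 pattern).
    if len(set(content_lengths)) > 1:
        return True
    # Content-Length alongside Transfer-Encoding (the CVE-2020-10109 pattern):
    # RFC 7230 says Transfer-Encoding wins and Content-Length must be dropped.
    if content_lengths and transfer_encodings:
        return True
    return False
```

Upgrading Twisted remains the actual fix; a check like this only helps when an unpatchable parser sits somewhere in the chain.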
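The CVE-2022-21712 advisory above describes credential-bearing headers leaking across cross-origin redirects. The behaviour the fix enforces can be sketched as follows; `headers_for_redirect` and the `SENSITIVE` set are illustrative assumptions, not Twisted's actual implementation.

```python
from urllib.parse import urlsplit

# Headers that carry credentials and must not follow a cross-origin redirect.
SENSITIVE = {"authorization", "cookie", "proxy-authorization"}

def headers_for_redirect(original_url: str, redirect_url: str, headers: dict) -> dict:
    """Drop credential-bearing headers when a redirect changes origin."""
    src, dst = urlsplit(original_url), urlsplit(redirect_url)
    same_origin = (src.scheme, src.hostname, src.port) == (dst.scheme, dst.hostname, dst.port)
    if same_origin:
        return dict(headers)
    return {k: v for k, v in headers.items() if k.lower() not in SENSITIVE}
```

Applications pinned to an affected Twisted can apply the same rule manually by disabling automatic redirects and re-issuing the follow-up request with filtered headers.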
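The `redirectTo` XSS above is exploitable only when application code passes an attacker-influenced URL into the redirect response. Until an upgrade lands, a caller-side guard can close the injection vector; the sketch below assumes a hypothetical helper (`safe_redirect_body`) and is not the patched Twisted code.

```python
from html import escape
from urllib.parse import urlsplit

def safe_redirect_body(url: str) -> bytes:
    """Build an HTML redirect body with the URL validated and escaped."""
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https"):
        raise ValueError("refusing to redirect to a non-HTTP URL")
    # HTML-escaping the URL prevents it from breaking out of the markup.
    return ('<html><body>Redirecting to <a href="%s">%s</a></body></html>'
            % (escape(url, quote=True), escape(url))).encode("utf-8")
```

The same escaping rule applies wherever request-derived data is interpolated into an HTML response, not only in redirects.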
[![Python 3](https://pyup.io/repos/github/SportySpots/seedorf/python-3-shield.svg)](https://pyup.io/repos/github/SportySpots/seedorf/)
.. image:: https://pyup.io/repos/github/SportySpots/seedorf/python-3-shield.svg
:target: https://pyup.io/repos/github/SportySpots/seedorf/
:alt: Python 3
<a href="https://pyup.io/repos/github/SportySpots/seedorf/"><img src="https://pyup.io/repos/github/SportySpots/seedorf/shield.svg" alt="Python 3" /></a>
!https://pyup.io/repos/github/SportySpots/seedorf/python-3-shield.svg(Python 3)!:https://pyup.io/repos/github/SportySpots/seedorf/
{<img src="https://pyup.io/repos/github/SportySpots/seedorf/python-3-shield.svg" alt="Python 3" />}[https://pyup.io/repos/github/SportySpots/seedorf/]
[![Updates](https://pyup.io/repos/github/SportySpots/seedorf/shield.svg)](https://pyup.io/repos/github/SportySpots/seedorf/)
.. image:: https://pyup.io/repos/github/SportySpots/seedorf/shield.svg
:target: https://pyup.io/repos/github/SportySpots/seedorf/
:alt: Updates
<a href="https://pyup.io/repos/github/SportySpots/seedorf/"><img src="https://pyup.io/repos/github/SportySpots/seedorf/shield.svg" alt="Updates" /></a>
!https://pyup.io/repos/github/SportySpots/seedorf/shield.svg(Updates)!:https://pyup.io/repos/github/SportySpots/seedorf/
{<img src="https://pyup.io/repos/github/SportySpots/seedorf/shield.svg" alt="Updates" />}[https://pyup.io/repos/github/SportySpots/seedorf/]