Package | Installed | Affected | Info |
---|---|---|---|
torch | 2.3.1 | <=2.6.0 | *Disputed.* A vulnerability classified as problematic was found in PyTorch 2.6.0, in the function `torch.nn.functional.ctc_loss` in `aten/src/ATen/native/LossCTC.cpp`. The manipulation leads to denial of service; the attack must be carried out locally. An exploit has been publicly disclosed and may be used. The patch commit is 46fc5d8e360127361211cb237d5f9eef0223e567; applying it is recommended. |
torch | 2.3.1 | <2.6.0 | PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. In versions 2.5.1 and earlier, a Remote Command Execution (RCE) vulnerability exists when loading a model via `torch.load`, even with `weights_only=True`. Patched in version 2.6.0. |
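The "Affected" column above uses specifiers like `<2.6.0` and `<=2.6.0`. As a minimal sketch of how such a range can be checked against an installed version, the helper below parses plain dotted numeric versions with the stdlib only; it is a hypothetical illustration (real tooling should use `packaging.version`, which also handles pre-release tags).

```python
# Minimal sketch: decide whether an installed version falls inside an
# affected range such as "<2.6.0" or "<=2.6.0". Assumes plain dotted
# numeric versions (no pre-release tags); use packaging.version for real work.

def parse(version: str) -> tuple:
    """Turn '2.3.1' into (2, 3, 1) so tuple comparison orders versions."""
    return tuple(int(part) for part in version.split("."))

def is_affected(installed: str, spec: str) -> bool:
    """Return True if `installed` satisfies an affected-range spec."""
    if spec.startswith("<="):
        return parse(installed) <= parse(spec[2:])
    if spec.startswith("<"):
        return parse(installed) < parse(spec[1:])
    raise ValueError(f"unsupported spec: {spec}")

print(is_affected("2.3.1", "<2.6.0"))   # True: 2.3.1 is in the affected range
print(is_affected("2.6.0", "<2.6.0"))   # False: the patched release
print(is_affected("2.6.0", "<=2.6.0"))  # True: the disputed finding flags 2.6.0 itself
```

Note the practical difference between the two torch findings above: upgrading to 2.6.0 clears the `<2.6.0` RCE finding but not the disputed `<=2.6.0` one, which requires the referenced patch commit.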
Package | Installed | Affected | Info |
---|---|---|---|
torch | 2.0.1 | <=2.6.0 | *Disputed.* A vulnerability classified as problematic was found in PyTorch 2.6.0, in the function `torch.nn.functional.ctc_loss` in `aten/src/ATen/native/LossCTC.cpp`. The manipulation leads to denial of service; the attack must be carried out locally. An exploit has been publicly disclosed and may be used. The patch commit is 46fc5d8e360127361211cb237d5f9eef0223e567; applying it is recommended. |
torch | 2.0.1 | <2.2.0 | PyTorch before v2.2.0 contains a heap buffer overflow in `/runtime/vararg_functions.cpp` that allows attackers to cause a Denial of Service (DoS) via crafted input. |
torch | 2.0.1 | <2.2.0 | PyTorch before v2.2.0 contains a use-after-free vulnerability in `torch/csrc/jit/mobile/interpreter.cpp`. |
torch | 2.0.1 | <2.6.0 | In versions 2.5.1 and earlier, a Remote Command Execution (RCE) vulnerability exists when loading a model via `torch.load`, even with `weights_only=True`. Patched in version 2.6.0. |
scikit-learn | 1.3.2 | <1.5.0 | A sensitive data leakage vulnerability in scikit-learn's `TfidfVectorizer`: the fitted `stop_words_` attribute unexpectedly stores every token seen in the training data, not only the subset required for the TF-IDF technique, so it may retain tokens that were meant to be discarded, such as passwords or keys. The impact depends on the nature of the data processed by the vectorizer. |
transformers | 4.39.3 | <4.50.0 | Affected versions are vulnerable to Regular Expression Denial of Service (ReDoS) in multiple tokenizers: regex patterns in the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers are susceptible to catastrophic backtracking. A remote attacker can supply specially crafted input strings that cause exponential-time regex processing, exhausting CPU and disrupting service. The fix refactors the vulnerable patterns, which used nested quantifiers and alternations, into more efficient equivalents. |
transformers | 4.39.3 | <4.52.1 | Versions up to 4.49.0 contain improper input validation in `image_utils.py`: URL validation via `startswith()` can be bypassed through URL username injection, letting attackers craft URLs that appear to come from YouTube but resolve to malicious domains, enabling phishing, malware distribution, or data exfiltration. Fixed in 4.52.1. |
transformers | 4.39.3 | <4.41.0 | Version 4.41.0 updates the `aiohttp` dependency from 3.8.5 to 3.9.0 to address CVE-2023-49082. |
transformers | 4.39.3 | <4.53.0 | ReDoS in weight-name conversion: `convert_tf_weight_name_to_pt_weight_name()` uses the pattern `/[^/]*___([^/]*)/`, which backtracks catastrophically on specially crafted TensorFlow weight names supplied during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
transformers | 4.39.3 | <4.50.0 | ReDoS in `preprocess_string()` in the `transformers.testing_utils` module (v4.48.3): the regex used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking on input with many newline characters. A crafted payload can cause high CPU usage and application downtime. |
transformers | 4.39.3 | <4.48.0 | Affected versions ship standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Update to a release in which these scripts are excluded from the distribution. |
transformers | 4.39.3 | <4.41.0 | Version 4.41.0 updates the `aiohttp` dependency from 3.8.5 to 3.9.0 to address CVE-2023-49081. |
transformers | 4.39.3 | <4.51.0 | ReDoS in `get_configuration_file()` in `transformers.configuration_utils` (affects 4.49.0, fixed in 4.51.0): the pattern `config\.(.*)\.json` can be driven into catastrophic backtracking by crafted input strings, causing excessive CPU consumption, model-serving disruption, resource exhaustion, and increased latency. |
transformers | 4.39.3 | <4.41.0 | Version 4.41.0 updates the `black` dependency from 22.1.0 to 24.3.0 to address CVE-2024-21503. |
transformers | 4.39.3 | <4.52.1 | ReDoS in the `DonutProcessor` class's `token2json()` method (affects 4.51.3 and earlier, fixed in 4.52.1): the pattern `<s_(.*?)>` can be driven into catastrophic backtracking by crafted input strings, causing excessive CPU consumption, service disruption, and resource exhaustion in document-processing tasks using the Donut model. |
transformers | 4.39.3 | <4.48.0 | ReDoS in the Nougat tokenizer's `post_process_single` method: the pattern `^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*)` uses nested quantifiers with overlapping character classes, leading to catastrophic backtracking on markdown-style headers. The fix replaces it with `^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*)`, which uses explicit character classes, removes the nested quantifiers, and reduces worst-case complexity from O(2^n) to linear. |
transformers | 4.39.3 | <4.51.0 | ReDoS in `get_imports()` in `dynamic_module_utils.py` (affects 4.49.0, fixed in 4.51.0): the pattern `\s*try\s*:.*?except.*?:`, used to filter try/except blocks out of Python code, can be driven into catastrophic backtracking by crafted input strings, enabling resource exhaustion in model serving and disruption of remote code loading, supply chains, and development pipelines. |
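The catastrophic backtracking described in the Nougat tokenizer finding can be reproduced with the stdlib `re` module. The sketch below (patterns taken verbatim from that finding) shows that the fixed pattern accepts the same benign headers as the vulnerable one, while a digit run with no valid terminator forces the vulnerable pattern through roughly 2^n failed partitions before rejecting:

```python
import re
import time

# Patterns quoted in the Nougat tokenizer finding (vulnerable vs. fixed).
VULNERABLE = re.compile(r"^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*)")
FIXED = re.compile(r"^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*)")

# Both patterns agree on well-formed markdown-style headers.
benign = ["# 1.2.3", "## iv", "### 12 \n", "# hello world"]
for s in benign:
    assert bool(VULNERABLE.match(s)) == bool(FIXED.match(s))

# A crafted non-match: the trailing "!" makes the nested quantifier
# (?:\.?(?:\d|[ixv])+)* try every way of partitioning the digit run
# (~2^19 states for 20 digits) before the match finally fails.
attack = "# " + "1" * 20 + "!"

start = time.perf_counter()
assert VULNERABLE.match(attack) is None  # rejects, but only after exponential backtracking
slow = time.perf_counter() - start

start = time.perf_counter()
assert FIXED.match(attack) is None       # single character class: rejects in linear time
fast = time.perf_counter() - start

print(f"vulnerable: {slow:.4f}s, fixed: {fast:.6f}s")
```

Lengthening the digit run by one roughly doubles the vulnerable pattern's rejection time, which is the ReDoS mechanism shared by the other regex findings in this table.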
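The scikit-learn `stop_words_` finding above can also be mitigated without upgrading: scikit-learn's documentation notes that the fitted `stop_words_` attribute exists only for introspection and is not used by `transform()`, so it can be deleted before a vectorizer is pickled or shipped. The sketch below uses a stand-in class so it runs without scikit-learn installed; with the real library you would pass a fitted `TfidfVectorizer` to `scrub` (a helper name invented here for illustration).

```python
# Mitigation sketch for the TfidfVectorizer data-leakage finding: drop the
# fitted `stop_words_` attribute (which retains every training token) before
# pickling or serving a vectorizer. Per scikit-learn's docs the attribute is
# introspection-only, so transform() still works after removal.

def scrub(vectorizer):
    """Remove the token-leaking attribute in place, if present."""
    if hasattr(vectorizer, "stop_words_"):
        del vectorizer.stop_words_
    return vectorizer

class FakeFittedVectorizer:
    """Stand-in for a fitted TfidfVectorizer (so the example is self-contained)."""
    def __init__(self):
        self.vocabulary_ = {"hello": 0}                # needed for transform()
        self.stop_words_ = {"hunter2", "api-key-123"}  # leaked training tokens

vec = scrub(FakeFittedVectorizer())
print(hasattr(vec, "stop_words_"))  # False: safe to pickle and ship
print(vec.vocabulary_)              # the vocabulary transform() needs survives
```

Upgrading to scikit-learn >=1.5.0 remains the complete fix; this scrub is a stopgap for environments pinned to an affected release.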
transformers | 4.39.3 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
transformers | 4.39.3 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
transformers | 4.39.3 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
transformers | 4.39.3 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
transformers | 4.39.3 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
transformers | 4.39.3 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
transformers | 4.39.3 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
transformers | 4.39.3 | <4.52.1 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the DonutProcessor class's token2json() method. This vulnerability affects versions 4.51.3 and earlier, and is fixed in version 4.52.1. The issue arises from the regex pattern <s_(.*?)> which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting document processing tasks using the Donut model. |
transformers | 4.39.3 | <4.48.0 |
show Affected versions of the huggingface transformers package are vulnerable to Regular Expression Denial of Service (ReDoS). The Nougat tokenizer's post_process_single method contains a regular expression pattern that fails to limit backtracking when processing markdown-style headers. The vulnerable regex pattern ^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*) uses nested quantifiers with overlapping character classes, leading to catastrophic backtracking. The fix addresses this vulnerability by replacing the problematic pattern with ^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*), which uses explicit character classes and removes nested quantifiers. This prevents catastrophic backtracking by limiting the regex complexity from O(2^n) to linear time, ensuring the tokenizer can safely process any input without performance degradation. |
transformers | 4.39.3 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
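For context on the Nougat tokenizer fix quoted in the report, a minimal sketch (using Python's standard `re` module; the sample inputs are made up) of what the patched pattern matches, namely headers that consist only of section numbering:

```python
import re

# Patched Nougat header pattern as quoted in the advisory above.
FIXED = re.compile(r"^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*)")

print(bool(FIXED.match("## 1.2\n")))      # True: numeric section header
print(bool(FIXED.match("# iv.\n")))       # True: roman-numeral header
print(bool(FIXED.match("## Results\n")))  # False: worded headers are left alone
```

Because the alternation uses two disjoint, explicit character classes with a single optional quantifier, the engine never has multiple ways to partition the same input, which is what eliminates the backtracking blow-up of the original pattern.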
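The `image_utils.py` finding hinges on validating URLs with `startswith()`. A hypothetical illustration of the username-injection bypass (the function names and URLs here are invented, not the actual Transformers code) and a more robust hostname check using the standard library:

```python
from urllib.parse import urlparse

def naive_is_youtube(url):
    # Flawed check in the style of the reported bug (illustrative only).
    return url.startswith("https://www.youtube.com")

def robust_is_youtube(url):
    # Parse the URL and compare the hostname exactly.
    return urlparse(url).hostname == "www.youtube.com"

# Username injection: everything before '@' is userinfo; the real host is evil.example.
crafted = "https://www.youtube.com@evil.example/watch"
print(naive_is_youtube(crafted))   # True  (bypassed)
print(robust_is_youtube(crafted))  # False (rejected)
```

The prefix check passes because `www.youtube.com` appears in the userinfo portion of the URL, while the browser or HTTP client will actually connect to `evil.example`.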
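To triage a report like this mechanically, each row's `Affected` specifier can be checked against the installed version. A minimal standard-library sketch (a real tool would use `packaging.specifiers`; this illustrative helper only handles plain numeric versions and the `<`/`<=` operators seen above):

```python
import re

def parse_version(v):
    """Parse a plain numeric version like '2.3.1' into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

def is_affected(installed, spec):
    """Check an installed version against a specifier like '<2.6.0' or '<=2.6.0'."""
    m = re.match(r"(<=|<)\s*([\d.]+)$", spec)
    if not m:
        raise ValueError(f"unsupported specifier: {spec!r}")
    op, bound = m.group(1), parse_version(m.group(2))
    v = parse_version(installed)
    return v <= bound if op == "<=" else v < bound

# A few rows from the table above: (package, installed, affected)
rows = [("torch", "2.0.1", "<2.2.0"),
        ("torch", "2.3.1", "<2.2.0"),
        ("transformers", "4.39.3", "<4.53.0")]
for pkg, installed, spec in rows:
    status = "affected" if is_affected(installed, spec) else "ok"
    print(f"{pkg} {installed} ({spec}) -> {status}")
```

Tuple comparison gives the expected ordering for purely numeric versions (so torch 2.3.1 is correctly outside `<2.2.0` but inside `<2.6.0`); pre-release tags and epochs would need a proper PEP 440 parser.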
transformers | 4.39.3 | <4.48.0 |
show Affected versions of the huggingface transformers package are vulnerable to Regular Expression Denial of Service (ReDoS). The Nougat tokenizer's post_process_single method contains a regular expression pattern that fails to limit backtracking when processing markdown-style headers. The vulnerable regex pattern ^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*) uses nested quantifiers with overlapping character classes, leading to catastrophic backtracking. The fix addresses this vulnerability by replacing the problematic pattern with ^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*), which uses explicit character classes and removes nested quantifiers. This prevents catastrophic backtracking by limiting the regex complexity from O(2^n) to linear time, ensuring the tokenizer can safely process any input without performance degradation. |
transformers | 4.39.3 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
Package | Installed | Affected | Info |
---|---|---|---|
torch | 2.0.1 | <=2.6.0 | *Disputed* A vulnerability, which was classified as problematic, was found in PyTorch 2.6.0. Affected is the function torch.nn.functional.ctc_loss of the file aten/src/ATen/native/LossCTC.cpp. The manipulation leads to denial of service. The attack must be carried out locally. The exploit has been disclosed to the public and may be used. The name of the patch is 46fc5d8e360127361211cb237d5f9eef0223e567. It is recommended to apply the patch to fix this issue. |
torch | 2.0.1 | <2.2.0 | PyTorch before v2.2.0 was discovered to contain a heap buffer overflow vulnerability in the component /runtime/vararg_functions.cpp. This vulnerability allows attackers to cause a Denial of Service (DoS) via a crafted input. |
torch | 2.0.1 | <2.2.0 | PyTorch before v2.2.0 was discovered to contain a use-after-free vulnerability in torch/csrc/jit/mobile/interpreter.cpp. |
torch | 2.0.1 | <2.6.0 | PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. In version 2.5.1 and prior, a Remote Command Execution (RCE) vulnerability exists in PyTorch when loading a model using torch.load with weights_only=True. This issue has been patched in version 2.6.0. |
torch | 2.3.1 | <=2.6.0 | *Disputed* A vulnerability, which was classified as problematic, was found in PyTorch 2.6.0. Affected is the function torch.nn.functional.ctc_loss of the file aten/src/ATen/native/LossCTC.cpp. The manipulation leads to denial of service. The attack must be carried out locally. The exploit has been disclosed to the public and may be used. The name of the patch is 46fc5d8e360127361211cb237d5f9eef0223e567. It is recommended to apply the patch to fix this issue. |
torch | 2.3.1 | <2.6.0 | PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. In version 2.5.1 and prior, a Remote Command Execution (RCE) vulnerability exists in PyTorch when loading a model using torch.load with weights_only=True. This issue has been patched in version 2.6.0. |
scikit-learn | 1.3.2 | <1.5.0 | A sensitive data leakage vulnerability was identified in affected versions of the scikit-learn TfidfVectorizer. The vulnerability arises from the unexpected storage of all tokens present in the training data within the `stop_words_` attribute, rather than only the subset of tokens required for the TF-IDF technique to function. This behavior can leak sensitive information, as the `stop_words_` attribute may contain tokens that were meant to be discarded and not stored, such as passwords or keys. The impact of this vulnerability varies based on the nature of the data being processed by the vectorizer. |
transformers | 4.39.3 | <4.50.0 | Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential, converting patterns that use nested quantifiers and alternations into more efficient implementations. |
transformers | 4.39.3 | <4.52.1 | Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
transformers | 4.39.3 | <4.41.0 | Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
transformers | 4.39.3 | <4.53.0 | Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The `convert_tf_weight_name_to_pt_weight_name()` function uses the regular expression pattern `/[^/]*___([^/]*)/`, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
transformers | 4.39.3 | <4.50.0 | A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
transformers | 4.39.3 | <4.48.0 | Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
transformers | 4.39.3 | <4.41.0 | Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
transformers | 4.39.3 | <4.51.0 | A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
transformers | 4.39.3 | <4.41.0 | Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
transformers | 4.39.3 | <4.52.1 | A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the DonutProcessor class's `token2json()` method. This vulnerability affects versions 4.51.3 and earlier, and is fixed in version 4.52.1. The issue arises from the regex pattern `<s_(.*?)>`, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting document processing tasks using the Donut model. |
transformers | 4.39.3 | <4.48.0 | Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS). The Nougat tokenizer's `post_process_single` method contains a regular expression pattern that fails to limit backtracking when processing markdown-style headers. The vulnerable regex pattern `^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*)` uses nested quantifiers with overlapping character classes, leading to catastrophic backtracking. The fix replaces the problematic pattern with `^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*)`, which uses explicit character classes and removes nested quantifiers. This prevents catastrophic backtracking by reducing the regex complexity from O(2^n) to linear time, ensuring the tokenizer can safely process any input without performance degradation. |
transformers | 4.39.3 | <4.51.0 | A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects version 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
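The torch.load rows in this report describe remote command execution when loading untrusted checkpoints. PyTorch checkpoints are pickle-based, and the underlying danger is a property of pickle itself, as this torch-free stdlib sketch shows; `len()` is a harmless stand-in for the `os.system`-style callable a real exploit would embed.

```python
import pickle

# Hypothetical malicious object: __reduce__ instructs pickle to call an
# arbitrary callable at load time. len() is a harmless stand-in here; a
# real exploit would reference something like os.system.
class Payload:
    def __reduce__(self):
        return (len, ("pwned",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the callable runs during loading itself
print(result)  # → 5, i.e. len("pwned"); no Payload instance is ever restored
```

This is why `torch.load(..., weights_only=True)` restricts which objects may be unpickled, and why the bypass fixed in 2.6.0 mattered: without that restriction, merely opening a checkpoint executes attacker-chosen code.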
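To act on a report like this one, compare each installed version against the first fixed release. The sketch below is a minimal illustration using the fixed versions stated in the rows above; version parsing is simplified to numeric X.Y.Z, and real tooling should use `packaging.version` or a scanner such as pip-audit instead.

```python
# First fixed releases, taken from the "Affected" ranges in the report.
FIXED = {
    "transformers": (4, 53, 0),   # highest fix listed: <4.53.0
    "torch": (2, 6, 0),           # RCE fixed in 2.6.0 (the ctc_loss issue is disputed)
    "scikit-learn": (1, 5, 0),
}

def parse(version):
    # Simplified X.Y.Z parser; not a full PEP 440 implementation.
    return tuple(int(part) for part in version.split(".")[:3])

def audit(installed):
    """Return the subset of installed packages still below a fixed release."""
    return {name: ver for name, ver in installed.items()
            if name in FIXED and parse(ver) < FIXED[name]}

# The installed versions from the report are all below their fixes:
flagged = audit({"transformers": "4.39.3", "torch": "2.3.1", "scikit-learn": "1.3.2"})
print(sorted(flagged))  # → ['scikit-learn', 'torch', 'transformers']
```

Upgrading each flagged package to at least its `FIXED` version clears every finding in the table, except the disputed ctc_loss report, whose patch postdates the 2.6.0 release.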
.. image:: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/python-3-shield.svg
   :target: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/
   :alt: Python 3
.. image:: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/shield.svg
   :target: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/
   :alt: Updates