Package | Installed | Affected | Info |
---|---|---|---|
transformers | 4.36.0 | <4.37.0 |
Transformers is affected by a shell injection vulnerability. While the issue is generally not critical for the library's primary use cases, it becomes more significant in production environments where the library handles user-generated input, such as web application backends, desktop applications, and cloud-based ML services, where the risk of arbitrary code execution increases. Fixed in https://github.com/huggingface/transformers/pull/28299
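As a generic illustration of the vulnerability class (not the library's actual code; the function names here are hypothetical), the difference between interpolating user input into a `shell=True` command and passing it as a literal argument list:

```python
import subprocess

def run_probe_unsafe(filename: str) -> str:
    # VULNERABLE sketch: with shell=True, shell metacharacters in
    # `filename` (;, |, $(), ...) are interpreted by /bin/sh.
    return subprocess.run(f"echo {filename}", shell=True,
                          capture_output=True, text=True).stdout

def run_probe_safe(filename: str) -> str:
    # Safe form: an argument list makes the whole string one literal argument.
    return subprocess.run(["echo", filename],
                          capture_output=True, text=True).stdout

payload = "a.wav; echo INJECTED"
# unsafe: the shell executes the injected second command
# safe: the payload is echoed back verbatim as a single argument
```

The list form (or `shlex.quote()` when a shell is unavoidable) is the standard mitigation for this class of bug.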
transformers | 4.36.0 | <4.50.0 |
Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers, which are susceptible to catastrophic backtracking. A remote attacker can exploit this by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The fix refactors the vulnerable patterns, converting nested quantifiers and alternations into more efficient implementations that eliminate the backtracking potential.
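The catastrophic-backtracking mechanism behind these ReDoS advisories can be reproduced with a minimal sketch (the patterns below are illustrative, not the tokenizers' actual regexes):

```python
import re
import time

# Classic catastrophic-backtracking shape: a nested quantifier (a+)+ followed
# by a forced failure, so every way of splitting the run of 'a's between the
# inner and outer quantifier is tried -- O(2^n) match attempts.
EVIL = re.compile(r"^(a+)+$")
# Unambiguous linear rewrite of the same language.
SAFE = re.compile(r"^a+$")

def time_match(pattern, text):
    start = time.perf_counter()
    m = pattern.match(text)
    return m, time.perf_counter() - start

attack = "a" * 22 + "b"          # keep n small; runtime doubles per extra 'a'
m_evil, t_evil = time_match(EVIL, attack)
m_safe, t_safe = time_match(SAFE, attack)
# Both patterns correctly reject the input, but the nested-quantifier
# version takes dramatically longer, and exponentially so as n grows.
```

The fixes described in this table all follow the same principle as the `SAFE` rewrite: remove ambiguity so each input position is examined a bounded number of times.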
transformers | 4.36.0 | <4.52.1 |
Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in `image_utils.py`. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1.
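The username-injection bypass works because everything before `@` in a URL's authority component is userinfo, not the host. A minimal sketch of the flawed prefix check versus a hostname-based check (the function names and allow-list here are illustrative, not the library's actual code):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.youtube.com", "youtube.com"}

def is_youtube_prefix_check(url: str) -> bool:
    # Insecure: a plain string-prefix test on the raw URL.
    return url.startswith("https://www.youtube.com")

def is_youtube_hostname_check(url: str) -> bool:
    # Robust: parse the URL and compare the actual hostname.
    return urlparse(url).hostname in ALLOWED_HOSTS

# "www.youtube.com" here is userinfo; the request actually goes to
# evil.example, yet the prefix check accepts it.
bypass = "https://www.youtube.com@evil.example/watch?v=xyz"
```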
transformers | 4.36.0 | <4.41.0 |
Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082.
transformers | 4.36.0 | <4.38.0 |
The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data in the `load_repo_checkpoint()` method of the `TFPreTrainedModel` class, which calls `pickle.load()` on data from potentially untrusted sources. By deceiving a victim into loading a seemingly harmless checkpoint during a normal training process, an attacker can craft a serialized payload that executes arbitrary code and commands on the targeted machine, resulting in remote code execution (RCE).
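Why `pickle.load()` on untrusted data is equivalent to code execution: unpickling invokes whatever callable the stream's `__reduce__` payload names, before any type or content check can run. A self-contained sketch (the payload here only flips a flag, but it could equally name `os.system` with an attacker-chosen command):

```python
import pickle

executed = {"flag": False}

def side_effect():
    # Stands in for an attacker's payload (e.g. os.system(...)).
    executed["flag"] = True
    return "attacker-controlled result"

class Malicious:
    def __reduce__(self):
        # Tells pickle: "to reconstruct me, call side_effect()".
        return (side_effect, ())

payload = pickle.dumps(Malicious())       # the "harmless checkpoint"
result = pickle.loads(payload)            # side_effect() runs right here
```

This is why checkpoint formats designed for untrusted distribution (e.g. safetensors) deliberately exclude arbitrary callables.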
transformers | 4.36.0 | <4.53.0 |
Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The `convert_tf_weight_name_to_pt_weight_name()` function uses the regular expression pattern `[^/]*___([^/]*)`, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive.
transformers | 4.36.0 | <4.50.0 |
A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario.
transformers | 4.36.0 | <4.48.0 |
Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should upgrade to a version of the library in which these scripts are excluded from release distributions.
transformers | 4.36.0 | <4.41.0 |
Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081.
transformers | 4.36.0 | <4.51.0 |
A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. It affects version 4.49.0 and is resolved in version 4.51.0. The vulnerability arises from the regular expression pattern `config\.(.*)\.json`, which can be exploited through crafted input strings that trigger catastrophic backtracking and excessive CPU consumption. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library.
transformers | 4.36.0 | <4.41.0 |
Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503.
transformers | 4.36.0 | <4.52.1 |
A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the `DonutProcessor` class's `token2json()` method. It affects versions 4.51.3 and earlier, and is fixed in version 4.52.1. The issue arises from the regex pattern `<s_(.*?)>`, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting document processing tasks that use the Donut model.
transformers | 4.36.0 | <4.48.0 |
Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS). The Nougat tokenizer's `post_process_single()` method contains a regular expression that fails to limit backtracking when processing markdown-style headers: the pattern `^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*)` uses nested quantifiers with overlapping character classes, leading to catastrophic backtracking. The fix replaces it with `^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*)`, which uses explicit character classes and removes the nested quantifiers, reducing worst-case matching from exponential to linear time so the tokenizer can safely process any input without performance degradation.
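The two patterns quoted in the advisory can be compared directly. The sketch below checks that they agree on ordinary section headers; the adversarial input is deliberately run only against the fixed pattern, since matching it against the old one is exactly the exponential blow-up being described:

```python
import re

# Old (vulnerable) and fixed patterns, as quoted in the advisory above.
OLD = re.compile(r"^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*)")
NEW = re.compile(r"^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*)")

headers = ["# 1.2 ", "## iv ", "### "]
# Both patterns agree on ordinary numbered / roman-numeral headers.
agree = all((OLD.match(h) is None) == (NEW.match(h) is None) for h in headers)

# Only the fixed pattern is safe to run on hostile input: its single
# unambiguous quantifier backtracks at most linearly before rejecting.
adversarial = "# " + "1" * 5000 + "!"
NEW.match(adversarial)   # returns promptly (no match)
```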
transformers | 4.36.0 | <4.51.0 |
A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. It affects version 4.49.0 and is fixed in version 4.51.0. The issue arises from the regular expression pattern `\s*try\s*:.*?except.*?:`, used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption.
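Taken together, every "Affected" range in this table is resolved at or below 4.53.0. A minimal sketch of how a scanner matches the "Installed" column against these ranges (version handling is simplified to dotted integers; real tools use PEP 440 via `packaging.version`):

```python
# Fix thresholds collected from the "Affected" column above.
ADVISORY_FIXED_IN = ["4.37.0", "4.38.0", "4.41.0", "4.48.0",
                     "4.50.0", "4.51.0", "4.52.1", "4.53.0"]
INSTALLED = "4.36.0"

def parse(v: str) -> tuple:
    # "4.52.1" -> (4, 52, 1); tuples compare component-wise.
    return tuple(int(x) for x in v.split("."))

applicable = [fix for fix in ADVISORY_FIXED_IN
              if parse(INSTALLED) < parse(fix)]
# 4.36.0 sits below every fix threshold, so all advisories in the table
# apply; upgrading to transformers >= 4.53.0 clears every listed range.
```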
Package | Installed | Affected | Info |
---|---|---|---|
transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
transformers | 4.36.0 | <4.52.1 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the DonutProcessor class's token2json() method. This vulnerability affects versions 4.51.3 and earlier, and is fixed in version 4.52.1. The issue arises from the regex pattern <s_(.*?)> which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting document processing tasks using the Donut model. |
transformers | 4.36.0 | <4.48.0 |
show Affected versions of the huggingface transformers package are vulnerable to Regular Expression Denial of Service (ReDoS). The Nougat tokenizer's post_process_single method contains a regular expression pattern that fails to limit backtracking when processing markdown-style headers. The vulnerable regex pattern ^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*) uses nested quantifiers with overlapping character classes, leading to catastrophic backtracking. The fix addresses this vulnerability by replacing the problematic pattern with ^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*), which uses explicit character classes and removes nested quantifiers. This prevents catastrophic backtracking by limiting the regex complexity from O(2^n) to linear time, ensuring the tokenizer can safely process any input without performance degradation. |
transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
Package | Installed | Affected | Info |
---|---|---|---|
transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
transformers | 4.36.0 | <4.52.1 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the DonutProcessor class's token2json() method. This vulnerability affects versions 4.51.3 and earlier, and is fixed in version 4.52.1. The issue arises from the regex pattern <s_(.*?)> which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting document processing tasks using the Donut model. |
transformers | 4.36.0 | <4.48.0 |
show Affected versions of the huggingface transformers package are vulnerable to Regular Expression Denial of Service (ReDoS). The Nougat tokenizer's post_process_single method contains a regular expression pattern that fails to limit backtracking when processing markdown-style headers. The vulnerable regex pattern ^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*) uses nested quantifiers with overlapping character classes, leading to catastrophic backtracking. The fix addresses this vulnerability by replacing the problematic pattern with ^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*), which uses explicit character classes and removes nested quantifiers. This prevents catastrophic backtracking by limiting the regex complexity from O(2^n) to linear time, ensuring the tokenizer can safely process any input without performance degradation. |
transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
Package | Installed | Affected | Info |
---|---|---|---|
transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
transformers | 4.36.0 | <4.52.1 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the DonutProcessor class's token2json() method. This vulnerability affects versions 4.51.3 and earlier, and is fixed in version 4.52.1. The issue arises from the regex pattern <s_(.*?)> which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting document processing tasks using the Donut model. |
transformers | 4.36.0 | <4.48.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
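As background on why `pickle.load()` on untrusted data enables code execution: unpickling invokes whatever `(callable, args)` pair an object's `__reduce__` returns. A minimal, benign sketch (the callable here is just `print`; a real payload could be any callable):

```python
import io
import pickle

class Malicious:
    # Unpickling calls the (callable, args) pair returned by __reduce__,
    # so loading this payload executes print() -- or any other callable.
    def __reduce__(self):
        return (print, ("code executed during pickle.load",))

payload = pickle.dumps(Malicious())
pickle.load(io.BytesIO(payload))  # prints the message as a side effect
```

This is why checkpoints from untrusted sources should be loaded from formats that store only tensors (e.g. safetensors) rather than pickled Python objects.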
transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The `convert_tf_weight_name_to_pt_weight_name()` function uses the regular expression pattern `[^/]*___([^/]*)`, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
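The failure mode behind these ReDoS entries is easy to reproduce with a toy pattern (an illustrative classic, not the project's actual regex): a quantified group whose body is itself quantified lets the engine partition the same run of characters in exponentially many ways before giving up.

```python
import re

# Nested quantifier: (a+)+ can split a run of "a"s into any composition,
# so a *failing* match explores ~2**(n-1) partitions of an n-character run.
vulnerable = re.compile(r"^(a+)+$")
# Same language, no nesting: one way to consume the run, linear time.
safe = re.compile(r"^a+$")

ok = "a" * 40
assert vulnerable.match(ok) and safe.match(ok)  # both succeed quickly

# Only try the adversarial, non-matching input against the safe pattern;
# the vulnerable one would effectively hang on it.
assert safe.match("a" * 40 + "b") is None  # fails immediately
```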
transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
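For context on what the flagged pattern does, it extracts the middle component of a configuration filename; a sketch of its intended behavior on benign input only:

```python
import re

# The pattern flagged above, shown on well-formed input. ".*" is greedy,
# so the engine backtracks from the end until the trailing ".json" fits.
pat = re.compile(r"config\.(.*)\.json")

m = pat.fullmatch("config.4.49.0.json")
print(m.group(1))  # 4.49.0
```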
transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
transformers | 4.36.0 | <4.52.1 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the DonutProcessor class's token2json() method. It affects versions 4.51.3 and earlier and is fixed in version 4.52.1. The issue arises from the regex pattern `<s_(.*?)>`, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This can lead to service disruption, resource exhaustion, and potential API service vulnerabilities, impacting document processing tasks that use the Donut model. |
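A common hardening for delimiter-extraction patterns like this one (illustrative here, not necessarily the project's exact fix) is to replace the lazy dot with a negated character class, which cannot scan past the closing delimiter:

```python
import re

lazy = re.compile(r"<s_(.*?)>")      # lazy dot: re-scans on every failure
strict = re.compile(r"<s_([^>]*)>")  # negated class: bounded, fails fast

tag = "<s_menu>"
assert lazy.match(tag).group(1) == "menu"
assert strict.match(tag).group(1) == "menu"  # same result on valid input
```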
transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS). The Nougat tokenizer's `post_process_single()` method contains a regular expression that fails to limit backtracking when processing markdown-style headers. The vulnerable pattern `^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*)` uses nested quantifiers with overlapping character classes, leading to catastrophic backtracking. The fix replaces it with `^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*)`, which uses explicit character classes and removes the nested quantifiers, reducing worst-case matching from exponential (O(2^n)) to linear time so the tokenizer can process arbitrary input without performance degradation. |
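Both Nougat patterns quoted above accept the same well-formed section headers; they differ only on pathological, non-matching input. A quick check:

```python
import re

old = re.compile(r"^#+ (?:\.?(?:\d|[ixv])+)*\s*(?:$|\n\s*)")   # vulnerable
new = re.compile(r"^#+ (?:[\d+\.]+|[ixv\.]+)?\s*(?:$|\n\s*)")  # fixed

header = "## 1.2\n"
assert old.match(header) and new.match(header)  # agree on legitimate input

# Non-matching adversarial input: the old nested quantifier would try
# ~2**(n-1) ways to partition the digit run; the fixed pattern fails fast.
# (Deliberately only run against the fixed pattern.)
print(bool(new.match("# " + "1" * 40 + "x")))  # False
```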
transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects version 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
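Taken together, every advisory in this report is resolved by transformers 4.53.0, the largest fix version listed. A minimal sketch of the corresponding check, with the version strings hard-coded from the table above:

```python
def vtuple(v: str) -> tuple:
    # Minimal parser for plain X.Y.Z version strings (no pre-releases).
    return tuple(int(part) for part in v.split("."))

INSTALLED = "4.36.0"   # the pinned version this report was generated for
ALL_FIXED = "4.53.0"   # highest fix version across the advisories above

print(vtuple(INSTALLED) < vtuple(ALL_FIXED))  # True: an upgrade is needed
```

A real comparison should use `packaging.version.parse`, which also handles pre-release and post-release segments correctly.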
.. image:: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/python-3-shield.svg
   :target: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/
   :alt: Python 3
.. image:: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/shield.svg
   :target: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/
   :alt: Updates