| Package | Installed | Affected | Info |
|---|---|---|---|
| transformers | 4.36.0 | >=4.34.0, <4.48.0 | Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method employs a regex pattern which includes nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can exploit this by submitting crafted markdown-style headers that trigger the regex to consume significant processing time, potentially leading to service disruption. |
| transformers | 4.36.0 | <4.37.0 | Transformers is affected by a shell injection vulnerability. While the issue is generally not critical for the library's primary use cases, it becomes more significant in production environments where the library interacts with user-generated input (web application backends, desktop applications, cloud-based ML services), increasing the risk of arbitrary code execution. https://github.com/huggingface/transformers/pull/28299 |
| transformers | 4.36.0 | <4.53.0 | Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the `AdamWeightDecay._do_use_weight_decay` method. The TensorFlow optimizer's `_do_use_weight_decay` iterates over the `include_in_weight_decay` and `exclude_from_weight_decay` lists and calls `re.search` on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can provide pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. |
| transformers | 4.36.0 | <4.48.0 | Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library in which these scripts have been excluded from release distributions. |
| transformers | 4.36.0 | <4.50.0 | A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 | A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of the regular expression pattern `config\.(.*)\.json`, which can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 | Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the `EnglishNormalizer.normalize_numbers()` method. The `normalize_numbers()` implementation in `src/transformers/models/clvp/number_normalizer.py` applies number-matching patterns such as `([0-9][0-9,]+[0-9])` to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 | Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the `MarianTokenizer.remove_language_code()` method. The method compiles a language-code pattern and uses `language_code_re.match()` and `language_code_re.sub()` on untrusted text (e.g., matching `>>...<<`), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings, without authentication, to any service that tokenizes text with `MarianTokenizer` to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 | The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data in the `load_repo_checkpoint()` method of the `TFPreTrainedModel` class, which calls `pickle.load()` on data from potentially untrusted sources. By deceiving a victim into loading a seemingly harmless checkpoint during a normal training process, an attacker can supply a malicious serialized payload and achieve remote code execution (RCE) on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 | Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 | Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 | Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this by crafting a malicious configuration file and convincing a target user to process it with the `convert_mlcvnets_to_pytorch.py` script, resulting in arbitrary code execution in the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 | Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The fix refactors the vulnerable patterns, converting patterns that use nested quantifiers and alternations into more efficient implementations that eliminate the backtracking potential. |
| transformers | 4.36.0 | <4.48.0 | Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution in the context of the current user when the model is processed. |
| transformers | 4.36.0 | <4.41.0 | Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.36.0 | <4.53.0 | Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The `convert_tf_weight_name_to_pt_weight_name()` function uses the regular expression pattern `[^/]*___([^/]*)`, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.36.0 | <4.41.0 | Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.36.0 | <4.51.0 | A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects version 4.49.0 and is fixed in version 4.51.0. The issue arises from the regular expression pattern `\s*try\s*:.*?except.*?:`, used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.36.0 | <4.41.0 | Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
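The deserialization findings above (the `load_repo_checkpoint()` entry and the conversion-script entries) share one root cause: unpickling attacker-controlled bytes can invoke arbitrary callables during deserialization. A minimal, harmless sketch of why `pickle.load()` on untrusted data is code execution, not just data loading (the `Payload` class is a hypothetical attacker payload, not code from transformers):

```python
import pickle

class Payload:
    # __reduce__ tells pickle which callable to invoke, with which
    # arguments, when the object is reconstructed on load.
    def __reduce__(self):
        # A real exploit would use os.system or similar; eval on a
        # constant expression keeps this demonstration observable and safe.
        return (eval, ("2 + 2",))

malicious_bytes = pickle.dumps(Payload())

# The victim side: any pickle.loads()/pickle.load() on these bytes
# executes the attacker's callable as a side effect of deserialization.
result = pickle.loads(malicious_bytes)
print(result)  # the attacker-chosen expression ran: 4
```

This is why the fixes for these CVEs remove or replace pickle-based loading rather than trying to sanitize the payload: there is no safe way to unpickle untrusted input.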
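Most of the ReDoS entries above describe the same mechanism: a pattern with nested quantifiers lets the backtracking engine split an almost-matching input in exponentially many ways before it can report failure. A generic illustration using the textbook pattern `(a+)+$` (not the exact pattern from any transformers component):

```python
import re
import time

# Classic catastrophic-backtracking pattern: the inner `a+` and the
# outer `(...)+` can partition a run of 'a's in exponentially many ways,
# and every partition is tried before the trailing 'b' fails against `$`.
EVIL = re.compile(r"(a+)+$")

def time_failed_match(n: int) -> float:
    text = "a" * n + "b"  # almost matches, forcing full backtracking
    start = time.perf_counter()
    assert EVIL.match(text) is None  # never matches, but takes ~2^n steps
    return time.perf_counter() - start

# Each additional 'a' roughly doubles the work; a few dozen characters
# are already enough to stall a service for seconds or longer.
for n in (10, 16, 22):
    print(f"n={n}: {time_failed_match(n):.4f}s")
```

The fixes referenced in the table take the standard routes out of this trap: rewriting patterns so quantifiers cannot overlap, or bounding the input before it reaches the regex engine.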
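The `image_utils.py` finding is an instance of a broader pitfall: a string-prefix check on a URL does not validate its host, because everything before `@` in the authority component is userinfo. A sketch of the bypass class (the exact check in transformers may differ; `evil.example` is a placeholder domain):

```python
from urllib.parse import urlparse

ALLOWED_PREFIX = "https://www.youtube.com"

# Everything between "//" and "@" is userinfo, so this URL's real
# host is evil.example even though it starts with the YouTube origin.
url = "https://www.youtube.com@evil.example/watch?v=x"

naive_ok = url.startswith(ALLOWED_PREFIX)  # True: prefix check is fooled
real_host = urlparse(url).hostname         # "evil.example"
print(naive_ok, real_host)

# The robust check parses first, then compares the hostname exactly.
safe_ok = urlparse(url).hostname in {"www.youtube.com", "youtube.com"}
print(safe_ok)  # False: the bypass is caught
```

Parsing before comparing is the general remedy: host validation must operate on the parsed hostname, never on the raw string.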
| Package | Installed | Affected | Info |
|---|---|---|---|
| transformers | 4.36.0 | >=4.34.0, <4.48.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method employs a regex pattern which includes nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can exploit this by submitting crafted markdown-style headers that trigger the regex to consume significant processing time, potentially leading to service disruption. |
| transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the AdamWeightDecay._do_use_weight_decay method. The TensorFlow optimizer’s _do_use_weight_decay iterates over include_in_weight_decay and exclude_from_weight_decay lists and calls re.search on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can provide pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
| transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The vulnerability exists in versions before 4.48.0, where the model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this vulnerability by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution within the context of the current user when the model is processed. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
| Package | Installed | Affected | Info |
|---|---|---|---|
| transformers | 4.36.0 | >=4.34.0, <4.48.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method employs a regex pattern which includes nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can exploit this by submitting crafted markdown-style headers that trigger the regex to consume significant processing time, potentially leading to service disruption. |
| transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the AdamWeightDecay._do_use_weight_decay method. The TensorFlow optimizer’s _do_use_weight_decay iterates over include_in_weight_decay and exclude_from_weight_decay lists and calls re.search on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can provide pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
| transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The vulnerability exists in versions before 4.48.0, where the model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this vulnerability by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution within the context of the current user when the model is processed. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
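Most of the ReDoS advisories above share the same failure mode: a regex with nested quantifiers backtracks exponentially when a crafted input forces the final match to fail. The snippet below is a generic illustration of catastrophic backtracking using the classic `(a+)+$` pattern; it is not the actual pattern from any transformers component.

```python
import re
import time

# Generic catastrophic-backtracking demo (illustrative pattern, not
# transformers code): nested quantifiers like (a+)+ can partition the
# input into exponentially many groupings, and the trailing "b" makes
# every grouping fail, so the engine tries them all.
pattern = re.compile(r"(a+)+$")
subject = "a" * 20 + "b"  # the final "b" guarantees overall failure

start = time.perf_counter()
match = pattern.search(subject)  # None, but only after ~2^20 retries
elapsed = time.perf_counter() - start

print(match)  # None
print(f"search over {len(subject)} chars took {elapsed:.2f}s")
```

Each additional `a` roughly doubles the running time, which is why even short crafted inputs can pin a CPU; the fixes referenced in the advisories above rewrite such patterns so that each input character can be consumed in only one way.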
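The deserialization advisories above (the Trax model files and the standalone conversion scripts) share a single root cause: unpickling is not passive data loading, because `pickle.load()` invokes whatever callable a payload's `__reduce__` returns. A minimal, harmless sketch of the mechanism, using `eval` of an arithmetic expression as a stand-in for a real attacker payload:

```python
import pickle

# Why loading untrusted model files is dangerous: any object can define
# __reduce__, and pickle CALLS the returned callable during loads().
# A real payload would return something like (os.system, ("...",));
# a harmless eval is used here to make the side effect visible.
class MaliciousCheckpoint:
    def __reduce__(self):
        return (eval, ("6 * 7",))

blob = pickle.dumps(MaliciousCheckpoint())  # what an attacker would ship
result = pickle.loads(blob)                 # eval("6 * 7") runs here
print(result)  # 42
```

Note that the deserializing side never needs the `MaliciousCheckpoint` class: the payload is self-contained. This is why the fixed releases exclude the affected conversion scripts from distributions, and why checkpoints from untrusted sources are better handled via non-executable formats such as safetensors.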
| Package | Installed | Affected | Info |
|---|---|---|---|
| transformers | 4.36.0 | >=4.34.0, <4.48.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method employs a regex pattern which includes nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can exploit this by submitting crafted markdown-style headers that trigger the regex to consume significant processing time, potentially leading to service disruption. |
| transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the AdamWeightDecay._do_use_weight_decay method. The TensorFlow optimizer’s _do_use_weight_decay iterates over include_in_weight_decay and exclude_from_weight_decay lists and calls re.search on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can provide pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
| transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The vulnerability exists in versions before 4.48.0, where the model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this vulnerability by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution within the context of the current user when the model is processed. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
| Package | Installed | Affected | Info |
|---|---|---|---|
| transformers | 4.36.0 | >=4.34.0, <4.48.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method employs a regex pattern which includes nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can exploit this by submitting crafted markdown-style headers that trigger the regex to consume significant processing time, potentially leading to service disruption. |
| transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the AdamWeightDecay._do_use_weight_decay method. The TensorFlow optimizer’s _do_use_weight_decay iterates over include_in_weight_decay and exclude_from_weight_decay lists and calls re.search on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can provide pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
| transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The vulnerability exists in versions before 4.48.0, where the model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this vulnerability by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution within the context of the current user when the model is processed. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
| Package | Installed | Affected | Info |
|---|---|---|---|
| transformers | 4.36.0 | >=4.34.0, <4.48.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method employs a regex pattern which includes nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can exploit this by submitting crafted markdown-style headers that trigger the regex to consume significant processing time, potentially leading to service disruption. |
| transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the AdamWeightDecay._do_use_weight_decay method. The TensorFlow optimizer’s _do_use_weight_decay iterates over include_in_weight_decay and exclude_from_weight_decay lists and calls re.search on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can provide pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
| transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The vulnerability exists in versions before 4.48.0, where the model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this vulnerability by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution within the context of the current user when the model is processed. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
| transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The fix refactors the vulnerable patterns, replacing nested quantifiers and alternations with more efficient implementations that eliminate the backtracking potential. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The vulnerability exists in versions before 4.48.0, where the model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this vulnerability by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution within the context of the current user when the model is processed. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
| Package | Installed | Affected | Info |
|---|---|---|---|
| transformers | 4.36.0 | >=4.34.0, <4.48.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method employs a regex pattern which includes nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can exploit this by submitting crafted markdown-style headers that trigger the regex to consume significant processing time, potentially leading to service disruption. |
| transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the AdamWeightDecay._do_use_weight_decay method. The TensorFlow optimizer’s _do_use_weight_decay iterates over include_in_weight_decay and exclude_from_weight_decay lists and calls re.search on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can provide pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
| transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The vulnerability exists in versions before 4.48.0, where the model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this vulnerability by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution within the context of the current user when the model is processed. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
| Package | Installed | Affected | Info |
|---|---|---|---|
| transformers | 4.36.0 | >=4.34.0, <4.48.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method employs a regex pattern which includes nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can exploit this by submitting crafted markdown-style headers that trigger the regex to consume significant processing time, potentially leading to service disruption. |
| transformers | 4.36.0 | <4.37.0 |
show Transformers is affected by a shell injection vulnerability. It appears that while this issue is generally not critical for the library's primary use cases, it can become more significant in specific production environments. Particularly in scenarios where the library interacts with user-generated input — such as in web application backends, desktop applications, and cloud-based ML services — the risk of arbitrary code execution increases. https://github.com/huggingface/transformers/pull/28299 |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the AdamWeightDecay._do_use_weight_decay method. The TensorFlow optimizer’s _do_use_weight_decay iterates over include_in_weight_decay and exclude_from_weight_decay lists and calls re.search on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can provide pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
| transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The vulnerability exists in versions before 4.48.0, where the model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this vulnerability by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution within the context of the current user when the model is processed. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.36.0 | <4.41.0 |
show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
| transformers | 4.36.0 | <4.50.0 |
show A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.36.0 | <4.51.0 |
show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.36.0 | <4.53.0 |
show Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.36.0 | <4.38.0 |
show The huggingface/transformers library is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_repo_checkpoint()` function of the `TFPreTrainedModel()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. |
| transformers | 4.36.0 | >=4.22.0, <4.52.0 |
show Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.36.0 | <4.52.1 |
show Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.36.0 | <4.48.0 |
show Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.36.0 | <4.50.0 |
show Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
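Most of the ReDoS entries above share a single mechanism: a regex with nested quantifiers whose failed-match cost grows exponentially with input length. A minimal sketch of catastrophic backtracking, using a generic toy pattern rather than the exact patterns from `transformers`:

```python
import re

# Toy nested-quantifier pattern: "(a+)+$" matches the same strings as "a+$",
# but on a non-matching input the backtracking engine explores roughly 2^n
# ways to split the run of "a"s between the inner and outer quantifier.
evil = re.compile(r"(a+)+$")
safe = re.compile(r"a+$")   # equivalent language, linear-time behavior

payload = "a" * 20 + "b"    # the trailing "b" guarantees a failed match

# Both calls return None, but the nested pattern gets there exponentially
# slowly: each additional "a" roughly doubles the work for `evil`.
print(evil.match(payload))  # None
print(safe.match(payload))  # None
```

The fixes referenced above (e.g. for the Nougat and CLVP patterns) follow the `safe` approach: rewrite the pattern so no two quantifiers can trade the same characters back and forth.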
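For the deserialization entries (the `load_repo_checkpoint()` pickle issue and the conversion-script findings), the underlying hazard is that unpickling is code execution, not data loading. A self-contained sketch of the mechanism, using a harmless stand-in callable (`print`) where a real payload would name `os.system` or similar; this is an illustration of `pickle` semantics, not the actual exploit code:

```python
import io
import pickle

# __reduce__ lets a pickled object name any callable (plus arguments) to be
# invoked at unpickling time. Loading the bytes is enough to run it.
class MaliciousCheckpoint:
    def __reduce__(self):
        return (print, ("arbitrary code ran during unpickling",))

blob = pickle.dumps(MaliciousCheckpoint())

# Merely *loading* the blob triggers the callable; the "object" that comes
# back is just print's return value, None.
result = pickle.load(io.BytesIO(blob))
```

This is why the fixed releases move checkpoint loading away from `pickle.load()` on repository-supplied data and exclude the standalone conversion scripts from release distributions.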
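The `image_utils.py` entry (fixed in 4.52.1) hinges on validating URLs with a raw prefix check, which userinfo injection defeats: everything before an `@` in the authority component is a username, not the host. A hedged illustration with hypothetical helper names (the function names below are for demonstration and are not the actual `transformers` code):

```python
from urllib.parse import urlparse

def looks_like_youtube_prefix(url: str) -> bool:
    # Vulnerable style of check: a plain prefix test on the raw string.
    return url.startswith("https://www.youtube.com")

def looks_like_youtube_hostname(url: str) -> bool:
    # Safer style: parse the URL and compare the actual hostname.
    return urlparse(url).hostname == "www.youtube.com"

# The "www.youtube.com" before "@" is parsed as a username, so this URL
# actually resolves to evil.example while passing the prefix test.
crafted = "https://www.youtube.com@evil.example/watch?v=abc"

print(looks_like_youtube_prefix(crafted))    # True  (check bypassed)
print(looks_like_youtube_hostname(crafted))  # False (host is evil.example)
```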
.. image:: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/python-3-shield.svg
:target: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/
:alt: Python 3
.. image:: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/shield.svg
:target: https://pyup.io/repos/github/stephenhky/PyShortTextCategorization/
:alt: Updates