    | Package | Installed | Affected | Info | 
|---|---|---|---|
| torch | 2.3.1 | <2.8.0 | *Disputed.* A vulnerability classified as problematic was found in PyTorch 2.6.0 in the function torch.nn.functional.ctc_loss of aten/src/ATen/native/LossCTC.cpp. The manipulation leads to denial of service; the attack must be performed locally. The exploit has been publicly disclosed. The patch is commit 46fc5d8e360127361211cb237d5f9eef0223e567; applying it is recommended. | 
| torch | 2.3.1 | <2.7.1-rc1 | Affected versions of PyTorch are vulnerable to Denial of Service (DoS) due to improper handling in the MKLDNN pooling implementation. torch.mkldnn_max_pool2d fails to validate input parameters, so crafted inputs can trigger resource exhaustion or crashes in the underlying MKLDNN library. An attacker with local access can pass specially crafted tensor dimensions or parameters to the max pooling function, causing the application to become unresponsive or crash. | 
| torch | 2.3.1 | <2.6.0 | PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. In version 2.5.1 and prior, a Remote Command Execution (RCE) vulnerability exists when loading a model with torch.load, even with weights_only=True. Patched in version 2.6.0 (a hedged version-check sketch follows this table). | 
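The torch.load finding above is the one most likely to affect model-loading code: on the installed 2.3.1 build, `weights_only=True` is not a sufficient safeguard on its own. Below is a minimal sketch of a version gate before deserializing untrusted checkpoints; the helper name and checkpoint path are illustrative (not any PyTorch API), the 2.6.0 threshold is taken from the advisory, and the comparison uses the `packaging` library.

```python
# Illustrative guard only: refuse to deserialize untrusted checkpoints when the
# installed torch predates the 2.6.0 fix for the torch.load RCE described above.
from packaging.version import Version

import torch

PATCHED_RELEASE = Version("2.6.0")  # first release with the torch.load fix, per the advisory


def load_untrusted_checkpoint(path: str):
    """Load a checkpoint only when the installed torch is at or above the patched release."""
    installed = Version(torch.__version__.split("+")[0])  # drop local tags such as "+cu121"
    if installed < PATCHED_RELEASE:
        raise RuntimeError(
            f"torch {installed} predates {PATCHED_RELEASE}; refusing to load untrusted file {path!r}"
        )
    # On patched versions, weights_only=True still restricts what the unpickler may construct.
    return torch.load(path, weights_only=True, map_location="cpu")
```

Upgrading torch (2.6.0 or later for this finding, 2.8.0 or later to clear every torch row above) remains the actual remediation; the guard only keeps older environments from silently loading untrusted files.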
| Package | Installed | Affected | Info | 
|---|---|---|---|
| torch | 2.0.1 | <2.8.0 | *Disputed.* A vulnerability classified as problematic was found in PyTorch 2.6.0 in the function torch.nn.functional.ctc_loss of aten/src/ATen/native/LossCTC.cpp. The manipulation leads to denial of service; the attack must be performed locally. The exploit has been publicly disclosed. The patch is commit 46fc5d8e360127361211cb237d5f9eef0223e567; applying it is recommended. | 
| torch | 2.0.1 | <2.2.0 | PyTorch before v2.2.0 contains a heap buffer overflow in /runtime/vararg_functions.cpp that allows attackers to cause a Denial of Service (DoS) via crafted input. | 
| torch | 2.0.1 | <2.7.1-rc1 | Affected versions of PyTorch are vulnerable to Denial of Service (DoS) due to improper handling in the MKLDNN pooling implementation. torch.mkldnn_max_pool2d fails to validate input parameters, so crafted inputs can trigger resource exhaustion or crashes in the underlying MKLDNN library. An attacker with local access can pass specially crafted tensor dimensions or parameters to the max pooling function, causing the application to become unresponsive or crash. | 
| torch | 2.0.1 | <2.2.0 | PyTorch before v2.2.0 contains a use-after-free vulnerability in torch/csrc/jit/mobile/interpreter.cpp. | 
| torch | 2.0.1 | <2.6.0 | PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. In version 2.5.1 and prior, a Remote Command Execution (RCE) vulnerability exists when loading a model with torch.load, even with weights_only=True. Patched in version 2.6.0. | 
| torch | 2.3.1 | <2.8.0 | *Disputed.* A vulnerability classified as problematic was found in PyTorch 2.6.0 in the function torch.nn.functional.ctc_loss of aten/src/ATen/native/LossCTC.cpp. The manipulation leads to denial of service; the attack must be performed locally. The exploit has been publicly disclosed. The patch is commit 46fc5d8e360127361211cb237d5f9eef0223e567; applying it is recommended. | 
| torch | 2.3.1 | <2.7.1-rc1 | Affected versions of PyTorch are vulnerable to Denial of Service (DoS) due to improper handling in the MKLDNN pooling implementation. torch.mkldnn_max_pool2d fails to validate input parameters, so crafted inputs can trigger resource exhaustion or crashes in the underlying MKLDNN library. An attacker with local access can pass specially crafted tensor dimensions or parameters to the max pooling function, causing the application to become unresponsive or crash. | 
| torch | 2.3.1 | <2.6.0 | PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. In version 2.5.1 and prior, a Remote Command Execution (RCE) vulnerability exists when loading a model with torch.load, even with weights_only=True. Patched in version 2.6.0 (see the version-check sketch after the first table). | 
| scikit-learn | 1.3.2 | <1.5.0 | A sensitive data leakage vulnerability was identified in affected versions of scikit-learn's TfidfVectorizer. All tokens present in the training data are unexpectedly stored in the `stop_words_` attribute, rather than only the subset required for the TF-IDF technique, so the attribute can retain tokens that were meant to be discarded, such as passwords or keys. The impact depends on the nature of the data processed by the vectorizer (a workaround sketch follows this table). | 
| transformers | 4.39.3 | >=4.34.0, <4.48.0 | Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method uses a regex with nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can submit crafted markdown-style headers that make the regex consume significant processing time, potentially leading to service disruption. | 
| transformers | 4.39.3 | <4.53.0 | Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the AdamWeightDecay._do_use_weight_decay method. The TensorFlow optimizer's _do_use_weight_decay iterates over the include_in_weight_decay and exclude_from_weight_decay lists and calls re.search on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can supply pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. | 
| transformers | 4.39.3 | <4.48.0 | Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the release in which these scripts were excluded from the distributions. | 
| transformers | 4.39.3 | <4.50.0 | A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers v4.48.3 allows a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can provide a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS). | 
| transformers | 4.39.3 | <4.51.0 | A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0; the issue is resolved in 4.51.0. The regular expression pattern `config\.(.*)\.json` can be exploited with crafted input strings to cause catastrophic backtracking and excessive CPU consumption, resulting in model serving disruption, resource exhaustion, and increased latency. | 
| transformers | 4.39.3 | <4.53.0 | Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. | 
| transformers | 4.39.3 | <4.53.0 | Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings, without authentication, to any service that tokenizes text with MarianTokenizer, slowing the process dramatically and potentially causing a denial of service. | 
| transformers | 4.39.3 | >=4.22.0, <4.52.0 | Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method uses the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings, leading to excessive CPU consumption and potential service disruption during document processing tasks. | 
| transformers | 4.39.3 | <4.52.1 | Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in `image_utils.py`. URL validation relies on `startswith()`, which can be bypassed through URL username injection, so attackers can craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing, malware distribution, or data exfiltration. Fixed in version 4.52.1 (a host-check sketch appears after this table). | 
| transformers | 4.39.3 | <4.48.0 | Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. In versions before 4.48.0, the configuration file parsing fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without sanitization. An attacker can craft a malicious configuration file and convince a target user to process it with the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution in the context of the current user when the configuration is loaded. | 
| transformers | 4.39.3 | <4.50.0 | Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) in multiple tokenizer components. Regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers are susceptible to catastrophic backtracking. A remote attacker can provide specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential-time regex processing, resulting in service disruption and resource exhaustion. The fix refactors the vulnerable patterns, replacing nested quantifiers and alternations with more efficient implementations that eliminate the backtracking potential. | 
| transformers | 4.39.3 | <4.48.0 | Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. In versions before 4.48.0, the model file parsing lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can craft a malicious Trax model file and convince a target user to load it through the application, resulting in arbitrary code execution in the context of the current user when the model is processed. | 
| transformers | 4.39.3 | <4.41.0 | Transformers 4.41.0 updates its `black` dependency from 22.1.0 to 24.3.0 to address CVE-2024-21503. | 
| transformers | 4.39.3 | <4.53.0 | Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can supply malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. | 
| transformers | 4.39.3 | <4.41.0 | Transformers 4.41.0 updates its `aiohttp` dependency from 3.8.5 to 3.9.0 to address CVE-2023-49082. | 
| transformers | 4.39.3 | <4.51.0 | A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the `get_imports()` function within `dynamic_module_utils.py`. It affects version 4.49.0 and is fixed in 4.51.0. The regular expression pattern `\s*try\s*:.*?except.*?:`, used to filter try/except blocks out of Python code, can be exploited with crafted input strings to cause excessive CPU consumption through catastrophic backtracking, which can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. | 
| transformers | 4.39.3 | <4.41.0 | Transformers 4.41.0 updates its `aiohttp` dependency from 3.8.5 to 3.9.0 to address CVE-2023-49081. | 
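For the scikit-learn finding, the remediation is to upgrade to 1.5.0 or later. One possible stopgap on older versions (not an official scikit-learn recommendation) is to empty the `stop_words_` attribute before a fitted vectorizer is persisted or shared, since `transform()` does not use it. A minimal sketch follows; the corpus, parameters, and file path are placeholders:

```python
# Sketch of a stopgap for the stop_words_ leakage on scikit-learn < 1.5.0:
# clear the attribute before the fitted vectorizer leaves the training process.
import pickle

from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["user text possibly containing secrets", "another training document"]  # placeholder data

vectorizer = TfidfVectorizer(max_df=0.9)
X = vectorizer.fit_transform(corpus)  # fit as usual; X is the TF-IDF matrix

# stop_words_ records every token pruned from the training data (e.g. by max_df/min_df),
# which may include sensitive strings; transform() does not need it.
vectorizer.stop_words_ = set()

with open("vectorizer.pkl", "wb") as fh:  # placeholder path
    pickle.dump(vectorizer, fh)
```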
| transformers | 4.39.3 | <4.41.0 | show Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. | 
| transformers | 4.39.3 | <4.53.0 | show Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. | 
| transformers | 4.39.3 | <4.41.0 | show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. | 
| transformers | 4.39.3 | <4.51.0 | show A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects versions 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. | 
| transformers | 4.39.3 | <4.41.0 | show Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. | 
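
For the scikit-learn TfidfVectorizer finding above, upgrading to 1.5.0 or later is the real remediation; on older installs, the fitted vectorizer's `stop_words_` attribute (which exists only for introspection) can additionally be cleared before the object is pickled or shared. The sketch below is a hedged illustration, not part of the upstream advisory; the sample documents are invented for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative corpus only: on affected scikit-learn versions, tokens pruned
# from the vocabulary (e.g. a rare secret such as "hunter2") are kept verbatim
# in the stop_words_ attribute of the fitted vectorizer.
docs = ["user password hunter2", "ordinary text without secrets"]

vectorizer = TfidfVectorizer(max_features=5)
tfidf = vectorizer.fit_transform(docs)

# stop_words_ is provided for introspection only; emptying it before
# persisting or sharing the fitted vectorizer avoids leaking training tokens.
vectorizer.stop_words_ = set()
```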
| Package | Installed | Affected | Info | 
|---|---|---|---|
| torch | 2.0.1 | <2.8.0 | *Disputed* A vulnerability, which was classified as problematic, was found in PyTorch 2.6.0. Affected is the function torch.nn.functional.ctc_loss of the file aten/src/ATen/native/LossCTC.cpp. The manipulation leads to denial of service. An attack has to be approached locally. The exploit has been disclosed to the public and may be used. The name of the patch is 46fc5d8e360127361211cb237d5f9eef0223e567. It is recommended to apply a patch to fix this issue. |
| torch | 2.0.1 | <2.2.0 | PyTorch before v2.2.0 was discovered to contain a heap buffer overflow vulnerability in the component /runtime/vararg_functions.cpp. This vulnerability allows attackers to cause a Denial of Service (DoS) via a crafted input. |
| torch | 2.0.1 | <2.7.1-rc1 | Affected versions of the PyTorch package are vulnerable to Denial of Service (DoS) due to improper handling in the MKLDNN pooling implementation. The torch.mkldnn_max_pool2d function fails to properly validate input parameters, allowing crafted inputs to trigger resource exhaustion or crashes in the underlying MKLDNN library. An attacker with local access can exploit this vulnerability by passing specially crafted tensor dimensions or parameters to the max pooling function, causing the application to become unresponsive or crash. |
| torch | 2.0.1 | <2.2.0 | PyTorch before v2.2.0 was discovered to contain a use-after-free vulnerability in torch/csrc/jit/mobile/interpreter.cpp. |
| torch | 2.0.1 | <2.6.0 | PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. In version 2.5.1 and prior, a Remote Command Execution (RCE) vulnerability exists in PyTorch when loading a model using torch.load with weights_only=True. This issue has been patched in version 2.6.0. |
| torch | 2.3.1 | <2.8.0 | *Disputed* A vulnerability, which was classified as problematic, was found in PyTorch 2.6.0. Affected is the function torch.nn.functional.ctc_loss of the file aten/src/ATen/native/LossCTC.cpp. The manipulation leads to denial of service. An attack has to be approached locally. The exploit has been disclosed to the public and may be used. The name of the patch is 46fc5d8e360127361211cb237d5f9eef0223e567. It is recommended to apply a patch to fix this issue. |
| torch | 2.3.1 | <2.7.1-rc1 | Affected versions of the PyTorch package are vulnerable to Denial of Service (DoS) due to improper handling in the MKLDNN pooling implementation. The torch.mkldnn_max_pool2d function fails to properly validate input parameters, allowing crafted inputs to trigger resource exhaustion or crashes in the underlying MKLDNN library. An attacker with local access can exploit this vulnerability by passing specially crafted tensor dimensions or parameters to the max pooling function, causing the application to become unresponsive or crash. |
| torch | 2.3.1 | <2.6.0 | PyTorch is a Python package that provides tensor computation with strong GPU acceleration and deep neural networks built on a tape-based autograd system. In version 2.5.1 and prior, a Remote Command Execution (RCE) vulnerability exists in PyTorch when loading a model using torch.load with weights_only=True. This issue has been patched in version 2.6.0. |
| scikit-learn | 1.3.2 | <1.5.0 | A sensitive data leakage vulnerability was identified in affected versions of scikit-learn TfidfVectorizer. The vulnerability arises from the unexpected storage of all tokens present in the training data within the `stop_words_` attribute, rather than only storing the subset of tokens required for the TF-IDF technique to function. This behavior leads to the potential leakage of sensitive information, as the `stop_words_` attribute could contain tokens that were meant to be discarded and not stored, such as passwords or keys. The impact of this vulnerability varies based on the nature of the data being processed by the vectorizer. |
| transformers | 4.39.3 | >=4.34.0, <4.48.0 | Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity in the Nougat tokenizer's `post_process_single` method. The method employs a regex pattern which includes nested quantifiers and overlapping character classes, causing excessive backtracking. An attacker can exploit this by submitting crafted markdown-style headers that trigger the regex to consume significant processing time, potentially leading to service disruption. |
| transformers | 4.39.3 | <4.53.0 | Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded evaluation of user-supplied regular expressions in the AdamWeightDecay._do_use_weight_decay method. The TensorFlow optimizer’s _do_use_weight_decay iterates over include_in_weight_decay and exclude_from_weight_decay lists and calls re.search on each pattern against parameter names, enabling catastrophic backtracking on crafted inputs. An attacker who can control these lists can provide pathological patterns that saturate the CPU and cause processes using transformers to hang, resulting in a Denial of Service. |
| transformers | 4.39.3 | <4.48.0 | Affected versions of the Hugging Face Transformers library include standalone conversion scripts that are vulnerable to deserialization of untrusted data, potentially leading to arbitrary code execution. Users should update to the version of the Transformers library where these scripts have been excluded from release distributions. |
| transformers | 4.39.3 | <4.50.0 | A vulnerability in the `preprocess_string()` function of the `transformers.testing_utils` module in huggingface/transformers version v4.48.3 allows for a Regular Expression Denial of Service (ReDoS) attack. The regular expression used to process code blocks in docstrings contains nested quantifiers, leading to exponential backtracking when processing input with a large number of newline characters. An attacker can exploit this by providing a specially crafted payload, causing high CPU usage and potential application downtime, effectively resulting in a Denial of Service (DoS) scenario. |
| transformers | 4.39.3 | <4.51.0 | A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_configuration_file()` function within the `transformers.configuration_utils` module. The affected version is 4.49.0, and the issue is resolved in version 4.51.0. The vulnerability arises from the use of a regular expression pattern `config\.(.*)\.json` that can be exploited to cause excessive CPU consumption through crafted input strings, leading to catastrophic backtracking. This can result in model serving disruption, resource exhaustion, and increased latency in applications using the library. |
| transformers | 4.39.3 | <4.53.0 | Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the EnglishNormalizer.normalize_numbers() method. The normalize_numbers() implementation in src/transformers/models/clvp/number_normalizer.py applies number-matching patterns such as ([0-9][0-9,]+[0-9]) to untrusted input without atomic grouping or bounds, allowing catastrophic backtracking and excessive CPU consumption. |
| transformers | 4.39.3 | <4.53.0 | Affected versions of the transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to inefficient regular expressions in the MarianTokenizer.remove_language_code() method. The method compiles a language-code pattern and uses language_code_re.match() and language_code_re.sub() on untrusted text (e.g., matching ">>...<<"), which allows crafted inputs to cause catastrophic backtracking and high CPU utilization. An attacker can submit specially formed strings to any service that tokenizes text with MarianTokenizer—without authentication—to slow the process dramatically and potentially cause a denial of service. |
| transformers | 4.39.3 | >=4.22.0, <4.52.0 | Affected versions of the `transformers` package are vulnerable to Regular Expression Denial of Service (ReDoS) due to unbounded regular expression complexity. The `DonutProcessor` class's `token2json()` method employs the regex pattern `<s_(.*?)>`, which can be manipulated to trigger catastrophic backtracking with crafted input strings. An attacker can exploit this by providing malicious input to the method, leading to excessive CPU consumption and potential service disruption during document processing tasks. |
| transformers | 4.39.3 | <4.52.1 | Hugging Face Transformers versions up to 4.49.0 are affected by an improper input validation vulnerability in the `image_utils.py` file. The vulnerability arises from insecure URL validation using the `startswith()` method, which can be bypassed through URL username injection. This allows attackers to craft URLs that appear to be from YouTube but resolve to malicious domains, potentially leading to phishing attacks, malware distribution, or data exfiltration. The issue is fixed in version 4.52.1. |
| transformers | 4.39.3 | <4.48.0 | Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling MobileViTV2 configuration files. The vulnerability exists in versions before 4.48.0, where the configuration file parsing functionality fails to properly validate user-supplied data, allowing malicious YAML configuration files to be deserialized without proper sanitization checks. An attacker can exploit this vulnerability by crafting a malicious configuration file and convincing a target user to process it using the convert_mlcvnets_to_pytorch.py script, resulting in arbitrary code execution within the context of the current user when the configuration is loaded. |
| transformers | 4.39.3 | <4.50.0 | Affected versions of the Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) attacks in multiple tokenizer components. The vulnerability exists in regex patterns used by the Nougat, GPTSan Japanese, and GPT-NeoX Japanese tokenizers that are susceptible to catastrophic backtracking. A remote attacker can exploit this vulnerability by providing specially crafted input strings to these tokenizers, causing excessive CPU consumption through exponential time complexity in regex processing, resulting in service disruption and resource exhaustion. The vulnerability was fixed by refactoring the vulnerable regex patterns to eliminate backtracking potential. The fix converts problematic patterns that use nested quantifiers and alternations into more efficient implementations. |
| transformers | 4.39.3 | <4.48.0 | Affected versions of the transformers package are vulnerable to Deserialization of Untrusted Data due to improper validation when handling Trax model files. The vulnerability exists in versions before 4.48.0, where the model file parsing functionality lacks proper validation of user-supplied data, allowing deserialization of malicious payloads embedded in model files without verification. An attacker can exploit this vulnerability by crafting a malicious Trax model file and convincing a target user to load it through the application, resulting in arbitrary code execution within the context of the current user when the model is processed. |
| transformers | 4.39.3 | <4.41.0 | Transformers version 4.41.0 updates its `black` dependency from version 22.1.0 to 24.3.0 to address the security vulnerability identified as CVE-2024-21503. |
| transformers | 4.39.3 | <4.53.0 | Affected versions of the Hugging Face Transformers package are vulnerable to Regular Expression Denial of Service (ReDoS) due to an inefficient regex pattern in weight name conversion. The convert_tf_weight_name_to_pt_weight_name() function uses the regular expression pattern /[^/]*___([^/]*)/, which is susceptible to catastrophic backtracking when processing specially crafted TensorFlow weight names. An attacker can exploit this vulnerability by providing malicious weight names during model conversion between TensorFlow and PyTorch formats, causing excessive CPU consumption and potentially rendering the service unresponsive. |
| transformers | 4.39.3 | <4.41.0 | Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49082. |
| transformers | 4.39.3 | <4.51.0 | A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically in the `get_imports()` function within `dynamic_module_utils.py`. This vulnerability affects version 4.49.0 and is fixed in version 4.51.0. The issue arises from a regular expression pattern `\s*try\s*:.*?except.*?:` used to filter out try/except blocks from Python code, which can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. This vulnerability can lead to remote code loading disruption, resource exhaustion in model serving, supply chain attack vectors, and development pipeline disruption. |
| transformers | 4.39.3 | <4.41.0 | Transformers version 4.41.0 updates its `aiohttp` dependency from version 3.8.5 to 3.9.0 to address the security vulnerability identified as CVE-2023-49081. |
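
As a quick cross-check of the Affected columns above, the installed versions can be compared against the first fixed release of each package. The following is only a rough sketch: the version floors are read off the tables in this report, the check assumes the `packaging` library is available, and it is not a substitute for re-running the scanner.

```python
from importlib.metadata import PackageNotFoundError, version
from packaging.version import parse

# First fixed releases, taken from the "Affected" columns in the tables above.
FIRST_FIXED = {
    "torch": "2.8.0",
    "transformers": "4.53.0",
    "scikit-learn": "1.5.0",
}

for name, fixed in FIRST_FIXED.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        continue  # package not present in this environment
    if parse(installed) < parse(fixed):
        print(f"{name} {installed} is older than the first fixed release {fixed}")
```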