Towhee

Latest version: v1.1.3


0.9.0

Added 4 SOTA models

* **Vis4mer**
* paper: [*Long Movie Clip Classification with State-Space Video Models*](https://arxiv.org/abs/2204.01692)

* **MCProp**
* paper: [*Transformer-Based Multi-modal Proposal and Re-Rank for Wikipedia Image-Caption Matching*](https://arxiv.org/abs/2206.10436)

* **RepLKNet**
* paper: [*Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs*](https://arxiv.org/abs/2203.06717)

* **Shunted Transformer**
* paper: [*Shunted Self-Attention via Multi-Scale Token Aggregation*](https://arxiv.org/abs/2111.15193)

0.8.1

Added 4 SOTA models; a usage sketch for the ISC operator follows the list.

* **ISC**
* page: [*image-embedding/isc*](https://towhee.io/image-embedding/isc)
* paper: [*Contrastive Learning with Large Memory Bank and Negative Embedding Subtraction for Accurate Copy Detection*](https://arxiv.org/abs/2112.04323)

* **MetaFormer**
* paper: [*MetaFormer Is Actually What You Need for Vision*](https://arxiv.org/abs/2111.11418)

* **ConvNeXt**
* paper: [*A ConvNet for the 2020s*](https://arxiv.org/abs/2201.03545)

* **HorNet**
* paper: [*HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions*](https://arxiv.org/abs/2207.14284)
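
ISC ships as a hub operator, so it drops into a pipeline like any other image-embedding op. A minimal sketch using the current v1.x `pipe` API; the default decoder and default pretrained weights are assumptions, so check the [hub page](https://towhee.io/image-embedding/isc) for the exact arguments:

```python
from towhee import pipe, ops

# Decode an image from disk, then embed it with the ISC copy-detection model.
# Both operators (and the pretrained weights) are pulled from the Towhee hub
# on first use.
image_embedding = (
    pipe.input('path')
        .map('path', 'img', ops.image_decode())
        .map('img', 'vec', ops.image_embedding.isc())
        .output('vec')
)

vec = image_embedding('test.jpg').get()[0]
print(vec.shape)  # a 1-D float vector suitable for copy detection / ANN search
```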

0.8.0

Added 3 SOTA models; a usage sketch for the nnfp operator follows the list.

* **nnfp**
* page: [*audio-embedding/nnfp*](https://towhee.io/audio-embedding/nnfp)
* paper: [*Neural Audio Fingerprint for High-specific Audio Retrieval based on Contrastive Learning*](https://arxiv.org/pdf/2010.11910.pdf)

* **RepMLPNet**
* paper: [*RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality*](https://arxiv.org/pdf/2112.11081.pdf)

* **Wave-ViT**
* paper: [*Wave-ViT: Unifying Wavelet and Transformers for Visual Representation Learning*](https://arxiv.org/pdf/2207.04978v1.pdf)
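
nnfp is likewise a hub operator. A minimal audio-fingerprinting sketch with the v1.x `pipe` API; the mp3 decoder and default weights are assumptions based on the [hub page](https://towhee.io/audio-embedding/nnfp):

```python
from towhee import pipe, ops

# Split an audio file into frames, then fingerprint each frame with nnfp.
# Both operators are fetched from the Towhee hub on first use.
audio_fingerprint = (
    pipe.input('path')
        .map('path', 'frames', ops.audio_decode.mp3())
        .map('frames', 'vecs', ops.audio_embedding.nnfp())
        .output('vecs')
)

vecs = audio_fingerprint('song.mp3').get()[0]
print(vecs.shape)  # one fingerprint vector per audio segment
```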

0.7.3

Added 5 SOTA models

* **CoCa**
* paper: [*CoCa: Contrastive Captioners are Image-Text Foundation Models*](https://arxiv.org/pdf/2205.01917.pdf)

* **CoFormer**
* paper: [*Collaborative Transformers for Grounded Situation Recognition*](https://arxiv.org/pdf/2203.16518.pdf)

* **TransRAC**
* paper: [*TransRAC: Encoding Multi-scale Temporal Correlation with Transformers for Repetitive Action Counting*](https://arxiv.org/pdf/2204.01018.pdf)

* **CVNet**
* paper: [*Correlation Verification for Image Retrieval*](https://arxiv.org/pdf/2204.01458.pdf)

* **MaxViT**
* paper: [*MaxViT: Multi-Axis Vision Transformer*](https://arxiv.org/pdf/2204.01697.pdf)

0.7.1

Added 1 vision transformer backbone, 1 text-image retrieval model, and 2 video retrieval models; a usage sketch for LightningDOT follows the list.

* **MPViT**
* page: [*image-embedding/mpvit*](https://towhee.io/image-embedding/mpvit)
* paper: [*MPViT: Multi-Path Vision Transformer for Dense Prediction*](https://arxiv.org/pdf/2112.11010.pdf)

* **LightningDOT**
* page: [*image-text-embedding/lightningdot*](https://towhee.io/image-text-embedding/lightningdot)
* paper: [*LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval*](https://arxiv.org/pdf/2103.08784.pdf)

* **BridgeFormer**
* page: [*video-text-embedding/bridge-former*](https://towhee.io/video-text-embedding/bridge-former)
* paper: [*Bridging Video-text Retrieval with Multiple Choice Questions*](https://arxiv.org/pdf/2201.04850.pdf)

* **collaborative-experts**
* page: [*video-text-embedding/collaborative-experts*](https://towhee.io/video-text-embedding/collaborative-experts)
* paper: [*TeachText: CrossModal Generalized Distillation for Text-Video Retrieval*](https://arxiv.org/pdf/2104.08271.pdf)
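
Since LightningDOT embeds text and images into one space, retrieval needs two pipelines that share the operator. A sketch with the v1.x `pipe` API; the `modality` argument follows the hub page, while everything else (defaults, file names) is illustrative:

```python
from towhee import pipe, ops

# Text tower: sentence -> joint embedding space.
text_embed = (
    pipe.input('text')
        .map('text', 'vec', ops.image_text_embedding.lightningdot(modality='text'))
        .output('vec')
)

# Image tower: file path -> decoded image -> same embedding space.
image_embed = (
    pipe.input('path')
        .map('path', 'img', ops.image_decode())
        .map('img', 'vec', ops.image_text_embedding.lightningdot(modality='image'))
        .output('vec')
)

# Inner-product search over the image vectors then ranks results per query.
query_vec = text_embed('a dog playing in the snow').get()[0]
```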

0.7.0

Added 6 video understanding/classification models; a usage sketch follows the list.

* **Video Swin Transformer**
* page: [*action-classification/video-swin-transformer*](https://towhee.io/action-classification/video-swin-transformer)
* paper: [*Video Swin Transformer*](https://arxiv.org/pdf/2106.13230v1.pdf)

* **TSM**
* page: [*action-classification/tsm*](https://towhee.io/action-classification/tsm)
* paper: [*TSM: Temporal Shift Module for Efficient Video Understanding*](https://arxiv.org/pdf/1811.08383v3.pdf)

* **UniFormer**
* page: [*action-classification/uniformer*](https://towhee.io/action-classification/uniformer)
* paper: [*UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning*](https://arxiv.org/pdf/2201.04676v3.pdf)

* **OMNIVORE**
* page: [*action-classification/omnivore*](https://towhee.io/action-classification/omnivore)
* paper: [*OMNIVORE: A Single Model for Many Visual Modalities*](https://arxiv.org/pdf/2201.08377.pdf)

* **TimeSformer**
* page: [*action-classification/timesformer*](https://towhee.io/action-classification/timesformer)
* paper: [*Is Space-Time Attention All You Need for Video Understanding?*](https://arxiv.org/pdf/2102.05095.pdf)

* **MoViNets**
* page: [*action-classification/movinet*](https://towhee.io/action-classification/movinet)
* paper: [*MoViNets: Mobile Video Networks for Efficient Video Recognition*](https://arxiv.org/pdf/2103.11511.pdf)
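
All six classifiers follow the same pattern: decode and sample frames, then map them to labels. A sketch using TimeSformer with the v1.x `pipe` API; the sampling args and `model_name` are illustrative assumptions, so check the hub page for the values your version accepts:

```python
from towhee import pipe, ops

# Uniformly sample 8 frames, then classify the action with TimeSformer.
action = (
    pipe.input('path')
        .map('path', 'frames',
             ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample',
                                     args={'num_samples': 8}))
        .map('frames', ('labels', 'scores', 'features'),
             ops.action_classification.timesformer(model_name='timesformer_k400_8x224x224'))
        .output('labels', 'scores')
)

labels, scores = action('archery.mp4').get()
print(labels[0], scores[0])  # top predicted action and its score
```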

Added 4 video retrieval models; a usage sketch for CLIP4Clip follows the list.

* **CLIP4Clip**
* page: [*video-text-embedding/clip4clip*](https://towhee.io/video-text-embedding/clip4clip)
* paper: [*CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval*](https://arxiv.org/pdf/2104.08860.pdf)

* **DRL**
* page: [*video-text-embedding/drl*](https://towhee.io/video-text-embedding/drl)
* paper: [*Disentangled Representation Learning for Text-Video Retrieval*](https://arxiv.org/pdf/2203.07111.pdf)

* **Frozen in Time**
* page: [*video-text-embedding/frozen-in-time*](https://towhee.io/video-text-embedding/frozen-in-time)
* paper: [*Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval*](https://arxiv.org/pdf/2104.00650.pdf)

* **MDMMT**
* page: [*video-text-embedding/mdmmt*](https://towhee.io/video-text-embedding/mdmmt)
* paper: [*MDMMT: Multidomain Multimodal Transformer for Video Retrieval*](https://arxiv.org/pdf/2103.10699.pdf)
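
The retrieval models pair a text pipeline with a video pipeline over a shared embedding space. A sketch using CLIP4Clip with the v1.x `pipe` API; `model_name` and the frame-sampling settings mirror the hub page but should be treated as assumptions:

```python
from towhee import pipe, ops

# Text tower: sentence -> embedding.
text_embed = (
    pipe.input('sentence')
        .map('sentence', 'vec',
             ops.video_text_embedding.clip4clip(model_name='clip_vit_b32',
                                                modality='text'))
        .output('vec')
)

# Video tower: sample 12 frames, then embed them into the same space.
video_embed = (
    pipe.input('path')
        .map('path', 'frames',
             ops.video_decode.ffmpeg(sample_type='uniform_temporal_subsample',
                                     args={'num_samples': 12}))
        .map('frames', 'vec',
             ops.video_text_embedding.clip4clip(model_name='clip_vit_b32',
                                                modality='video'))
        .output('vec')
)

query_vec = text_embed('a person riding a bike').get()[0]
```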
