| [Mask-RCNN](https://gluon-cv.mxnet.io/model_zoo/segmentation.html#instance-segmentation) | mask AP on COCO | N/A | **33.1%** | 32.8% ([Detectron](https://github.com/facebookresearch/Detectron)) |
## Interactive visualizations for pre-trained models
For [image classification](https://gluon-cv.mxnet.io/model_zoo/classification.html):
<a href="https://gluon-cv.mxnet.io/model_zoo/classification.html"><img src="https://user-images.githubusercontent.com/3307514/47051128-ca3aa100-d157-11e8-8b50-08841c8cdf5f.png" width="400px" /></a>
and for [object detection](https://gluon-cv.mxnet.io/model_zoo/detection.html):
<a href="https://gluon-cv.mxnet.io/model_zoo/detection.html"><img src="https://user-images.githubusercontent.com/421857/47048450-4d0b2e00-d14f-11e8-9338-bb20bb69655b.png" width="400px"/></a>
## Deploy without Python
All models are hybridizable and can be deployed without Python. See the [tutorials](https://github.com/dmlc/gluon-cv/tree/master/scripts/deployment/cpp-inference) for deploying these models in C++.
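The deployment flow is to hybridize a model, run one forward pass to build the static graph, and export it. A minimal sketch (the model name and input shape here are illustrative):

```python
import mxnet as mx
from gluoncv import model_zoo

# load a pre-trained model and switch it to static (symbolic) graph mode
net = model_zoo.get_model('resnet50_v1b', pretrained=True)
net.hybridize()

# one forward pass builds the cached graph
net(mx.nd.zeros((1, 3, 224, 224)))

# writes resnet50_v1b-symbol.json and resnet50_v1b-0000.params,
# which the C++ API can load without Python
net.export('resnet50_v1b')
```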
## New Models with Training Scripts
### DenseNet, DarkNet, SqueezeNet for [image classification](https://gluon-cv.mxnet.io/model_zoo/classification.html#imagenet)
We now provide a broader range of model families that are good for out-of-the-box usage and various research purposes.
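For instance, running one of the new classification models out of the box might look like the following sketch (the `densenet121` model name, the image file, and the `transform_eval` preset are assumptions here):

```python
import mxnet as mx
from gluoncv import model_zoo
from gluoncv.data.transforms.presets.imagenet import transform_eval

# load a pre-trained DenseNet and classify a single image
net = model_zoo.get_model('densenet121', pretrained=True)
img = transform_eval(mx.image.imread('example.jpg'))  # resize, crop, normalize, add batch dim
prob = mx.nd.softmax(net(img))[0]
idx = int(mx.nd.topk(prob, k=1)[0].asscalar())
print('predicted class %d with probability %.3f' % (idx, prob[idx].asscalar()))
```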
### [YoloV3](https://gluon-cv.mxnet.io/model_zoo/detection.html#id44) for object detection
Significantly more accurate than the original paper: we get 37.0% mAP on COCO versus the original [paper](https://pjreddie.com/media/files/papers/YOLOv3.pdf)'s 33.0%. The techniques we used will be described in a paper to be released later.
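A minimal detection sketch with the pre-trained COCO model (the `yolo3_darknet53_coco` name and the input image are illustrative):

```python
import matplotlib.pyplot as plt
from gluoncv import model_zoo, data, utils

net = model_zoo.get_model('yolo3_darknet53_coco', pretrained=True)
# load_test resizes the image and returns both the network input and a plottable copy
x, img = data.transforms.presets.yolo.load_test('street.jpg', short=512)
class_ids, scores, bboxes = net(x)
utils.viz.plot_bbox(img, bboxes[0], scores[0], class_ids[0], class_names=net.classes)
plt.show()
```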
### [Mask-RCNN](https://gluon-cv.mxnet.io/model_zoo/segmentation.html#instance-segmentation) for instance segmentation
Accuracy now matches Caffe2 Detectron without FPN, e.g. 38.3% box AP and 33.1% mask AP on COCO with ResNet50.
FPN support will come in future versions.
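A minimal instance-segmentation sketch with the pre-trained model (the `mask_rcnn_resnet50_v1b_coco` name and the input image are illustrative):

```python
from gluoncv import model_zoo, data, utils

net = model_zoo.get_model('mask_rcnn_resnet50_v1b_coco', pretrained=True)
x, img = data.transforms.presets.rcnn.load_test('street.jpg')
ids, scores, bboxes, masks = [out[0].asnumpy() for out in net(x)]
# paste the fixed-size mask predictions back onto the full image
masks = utils.viz.expand_mask(masks, bboxes, (img.shape[1], img.shape[0]), scores)
img = utils.viz.plot_mask(img, masks)
```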
### [DeepLabV3](https://gluon-cv.mxnet.io/model_zoo/segmentation.html#semantic-segmentation) for semantic segmentation
Slightly more accurate than the original paper: we get 86.7% mIoU on Pascal VOC versus the original paper's 85.7%.
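A minimal inference sketch (the `deeplab_resnet101_voc` model name and the 480x480 input size are assumptions; `net(x)` returns a tuple whose first element holds the per-pixel class scores):

```python
import mxnet as mx
from gluoncv import model_zoo

net = model_zoo.get_model('deeplab_resnet101_voc', pretrained=True)
x = mx.nd.random.uniform(shape=(1, 3, 480, 480))   # stand-in for a normalized image batch
pred = mx.nd.argmax(net(x)[0], axis=1)             # per-pixel class indices, shape (1, 480, 480)
```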
### WGAN
Reproduced [WGAN](https://github.com/dmlc/gluon-cv/tree/master/scripts/gan/wgan) with a ResNet architecture.
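For reference, a minimal sketch of the core WGAN critic update, where weight clipping enforces the Lipschitz constraint (the `critic`, `generator`, and `trainer_c` objects are assumed to be defined elsewhere):

```python
import mxnet as mx
from mxnet import autograd

def critic_step(critic, generator, trainer_c, real, clip=0.01):
    noise = mx.nd.random.normal(shape=(real.shape[0], 100, 1, 1))
    with autograd.record():
        # Wasserstein estimate: push critic scores up on real, down on fake
        loss = critic(generator(noise)).mean() - critic(real).mean()
    loss.backward()
    trainer_c.step(real.shape[0])
    # clip critic weights to enforce the Lipschitz constraint
    for p in critic.collect_params().values():
        p.set_data(p.data().clip(-clip, clip))
    return loss
```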
### Person Re-identification
Provided a [baseline model](https://github.com/dmlc/gluon-cv/tree/master/scripts/re-id/baseline) that achieves a best rank-1 score of 93.1% on the Market-1501 dataset.
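For reference, a hypothetical sketch of how a rank-1 score can be computed from L2-normalized query and gallery embeddings (the full Market-1501 protocol additionally filters out same-identity, same-camera matches, which is omitted here):

```python
import numpy as np

def rank1(query_feat, query_ids, gallery_feat, gallery_ids):
    # cosine similarity between every query and every gallery embedding
    sim = query_feat.dot(gallery_feat.T)
    # identity of the nearest gallery image for each query
    best = gallery_ids[sim.argmax(axis=1)]
    # fraction of queries whose top match shares their identity
    return (best == query_ids).mean()
```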
## Enhanced Models with Better Accuracy
### [Faster R-CNN](https://gluon-cv.mxnet.io/model_zoo/detection.html#id37)
* Improved Pascal VOC model accuracy: mAP improves to 78.3% from the previous version's 77.9%. VOC models with 80%+ mAP will be released with the tech paper.
* Added models trained on the COCO dataset (see the sketch after this list):
  * The ResNet50 model achieves 37.0 mAP, outperforming Caffe2 Detectron without FPN (36.5 mAP).
  * The ResNet101 model achieves 40.1 mAP, outperforming Caffe2 Detectron with FPN (39.8 mAP).
* FPN support will come in future versions.
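Loading the new COCO-trained model follows the same pattern as the other detectors (the `faster_rcnn_resnet50_v1b_coco` name and the input image are illustrative):

```python
from gluoncv import model_zoo, data, utils

net = model_zoo.get_model('faster_rcnn_resnet50_v1b_coco', pretrained=True)
x, img = data.transforms.presets.rcnn.load_test('street.jpg')
ids, scores, bboxes = [out[0] for out in net(x)]
utils.viz.plot_bbox(img, bboxes, scores, ids, class_names=net.classes)
```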
### [ResNet](https://gluon-cv.mxnet.io/model_zoo/classification.html#resnet), [MobileNet](https://gluon-cv.mxnet.io/model_zoo/classification.html#mobilenet), [DarkNet](https://gluon-cv.mxnet.io/model_zoo/classification.html#others), [Inception](https://gluon-cv.mxnet.io/model_zoo/classification.html#others) for image classification
* Significantly improved accuracy for some models. For example, ResNet50_v1b now reaches 78.3% versus the previous version's 77.07%.
* Added models trained with mixup and distillation (a mixup sketch follows this list). For example, ResNet50_v1d comes in three additional versions: ResNet50_v1d_distill (78.67%), ResNet50_v1d_mixup (79.16%), and ResNet50_v1d_mixup_distill (79.29%).
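A minimal sketch of the mixup idea those models are trained with: each batch is replaced by a convex combination of itself and a shuffled copy, applied to both images and one-hot labels (the `alpha=0.2` value is illustrative; training then uses a soft-label loss):

```python
import numpy as np
import mxnet as mx

def mixup_batch(x, y, alpha=0.2, num_classes=1000):
    """Mix a batch of images x and integer labels y with a shuffled copy of itself."""
    lam = np.random.beta(alpha, alpha)                    # mixing coefficient
    idx = mx.nd.array(np.random.permutation(x.shape[0]))  # shuffled pairing
    x2, y2 = mx.nd.take(x, idx), mx.nd.take(y, idx)
    y, y2 = mx.nd.one_hot(y, num_classes), mx.nd.one_hot(y2, num_classes)
    return lam * x + (1 - lam) * x2, lam * y + (1 - lam) * y2
```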
### [Semantic Segmentation](https://gluon-cv.mxnet.io/model_zoo/segmentation.html#semantic-segmentation)
* Added synchronized Batch Normalization training (see the sketch after this list).
* Added the Cityscapes dataset and pre-trained models.
* Added training details for reproducing state-of-the-art results on Pascal VOC, and provided COCO pre-trained models for VOC.
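A sketch of requesting synchronized BN when constructing a segmentation network (the `norm_layer`/`norm_kwargs` arguments follow the pattern used in the training scripts and are assumptions here, as is the model name):

```python
import mxnet as mx
from mxnet.gluon.contrib.nn import SyncBatchNorm
from gluoncv import model_zoo

# synchronize BN statistics across all available devices
num_devices = len(mx.test_utils.list_gpus()) or 1
net = model_zoo.get_model('fcn_resnet101_voc', pretrained=False,
                          norm_layer=SyncBatchNorm,
                          norm_kwargs={'num_devices': num_devices})
```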
## Dependency