IEI Mustang-MPCIE-MX2 – Vision Accelerator Card


IEI – Mustang-V100-MX2 – Mustang-MPCIE-MX2 – Vision Accelerator Card – Movidius – MA2485 VPU – 8Gb

Description

Mustang-MPCIE-MX2

Intel Vision Accelerator Design with Intel Movidius VPU


A Perfect Choice for AI Deep Learning Inference Workloads

Specs

Powered by the Open Visual Inference & Neural Network Optimization (OpenVINO) toolkit

 

  • Compact miniPCIe form factor, 30 mm x 50 mm
  • Low power consumption, approximately 7 W or less for the two Intel Movidius Myriad X VPUs
  • Supports the OpenVINO toolkit; AI edge computing ready device
  • Two Intel Movidius Myriad X VPUs can execute two topologies simultaneously (see the sketch below)
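
The last bullet claims two concurrent topologies. Below is a minimal, hedged sketch of how that could look with the OpenVINO Inference Engine Python API (IECore); the IR file names are hypothetical, and on multi-VPU Mustang cards the devices may instead be exposed through Intel's HDDL plugin, so the "MYRIAD" device name is an assumption.

```python
from openvino.inference_engine import IECore  # 2020-era OpenVINO Python API (assumption)

ie = IECore()
# List the inference devices the plugins can see; the two Myriad X VPUs may appear
# as MYRIAD devices or be aggregated behind the HDDL plugin, depending on the setup.
print(ie.available_devices)

# Two different topologies (hypothetical IR files produced by the Model Optimizer).
det_net = ie.read_network(model="face-detection.xml", weights="face-detection.bin")
cls_net = ie.read_network(model="squeezenet1.1.xml", weights="squeezenet1.1.bin")

# Loading each network as its own executable network lets the plugin schedule them
# onto separate VPUs, so both topologies can run at the same time.
det_exec = ie.load_network(network=det_net, device_name="MYRIAD")
cls_exec = ie.load_network(network=cls_net, device_name="MYRIAD")
```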

 

Intel Distribution of OpenVINO toolkit

 

The Intel Distribution of OpenVINO toolkit is based on convolutional neural networks (CNN); it extends workloads across multiple types of Intel platforms and maximizes performance.

It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, TensorFlow, and ONNX. The tool suite includes more than 20 pre-trained models and supports over 100 public and custom models (including Caffe*, MXNet*, TensorFlow*, ONNX*, and Kaldi*) for easier deployment across Intel silicon products (CPU, GPU/Intel Processor Graphics, FPGA, VPU).
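
As a rough illustration of that conversion step, the sketch below invokes the Model Optimizer script that ships with the toolkit to turn a frozen TensorFlow model into the IR (.xml/.bin) pair the Inference Engine consumes. The install path, script name, and model file are assumptions and vary between OpenVINO releases.

```python
import subprocess

# Hypothetical paths; adjust to your OpenVINO install and model.
MO = "/opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py"
FROZEN_PB = "frozen_inference_graph.pb"

# Convert the TensorFlow model to OpenVINO IR. FP16 is used because the
# Myriad X VPU runs half-precision models.
subprocess.run(
    [
        "python3", MO,
        "--input_model", FROZEN_PB,
        "--data_type", "FP16",
        "--output_dir", "ir_models",
    ],
    check=True,
)
# Result: ir_models/frozen_inference_graph.xml + .bin, ready for the Inference Engine.
```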

Features

  • Operating Systems
    Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows 10 64bit
  • OpenVINO toolkit
    • Intel Deep Learning Deployment Toolkit
      • Model Optimizer
      • Inference Engine
    • Optimized computer vision libraries
    • Intel Media SDK
    • Current supported topologies: AlexNet, GoogleNetV1/V2, Mobile_SSD, MobileNetV1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101, Faster-RCNN (more variants are coming soon)
    • High flexibility: the Mustang-MPCIE-MX2 is developed on the OpenVINO toolkit structure, which allows models trained in frameworks such as Caffe, TensorFlow, and MXNet to execute on it after conversion to the optimized IR format (see the inference sketch after this list).
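
To make that deployment flow concrete, here is a minimal sketch of loading a converted IR model onto the VPU and running one inference with the Inference Engine Python API. It assumes a 2020-era OpenVINO release; the IR file names and random input are placeholders.

```python
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO Python API (assumption)

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")  # IR from the Model Optimizer

input_name = next(iter(net.input_info))
output_name = next(iter(net.outputs))

# Compile the network for the Myriad X VPU.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Dummy NCHW input matching the network's declared input shape.
n, c, h, w = net.input_info[input_name].input_data.shape
frame = np.random.rand(n, c, h, w).astype(np.float32)

result = exec_net.infer(inputs={input_name: frame})
print(output_name, result[output_name].shape)
```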


 

Applications

  • Machine Vision
  • Smart Retail
  • Surveillance
  • Medical Diagnostics

 

Specification

| Model Name | Mustang-V100-MX8 | Mustang-V100-MX4 | Mustang-MPCIE-MX2 | Mustang-M2AE-MX1 |
|---|---|---|---|---|
| Main Chip | 8 x Intel Movidius Myriad X MA2485 VPU | 4 x Intel Movidius Myriad X MA2485 VPU | 2 x Intel Movidius Myriad X MA2485 VPU | 1 x Intel Movidius Myriad X MA2485 VPU |
| Operating Systems | Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit (Windows 10 support planned for end of 2018; more OS coming soon) | Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows 10 64bit | Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows 10 64bit | Ubuntu 16.04.3 LTS 64bit, CentOS 7.4 64bit, Windows 10 64bit |
| Dataplane Interface | PCI Express x4 | PCIe Gen 2 x2 | miniPCIe | M.2 AE Key |
| Power Consumption | < 30 W | 15 W | Approx. 7.5 W | Approx. 5 W |
| Operating Temperature | 5°C ~ 55°C (ambient temperature) | 0°C ~ 55°C (in TANK AIoT Dev. Kit) | 0°C ~ 55°C (in TANK AIoT Dev. Kit) | 0°C ~ 55°C (in TANK AIoT Dev. Kit) |
| Cooling | Active fan | Active fan | Passive/active heatsink | Passive heatsink |
| Dimensions | Half-height, half-length, single-width PCIe | 113 x 56 x 23 mm | 30 x 50 mm | 22 x 30 mm |
| Support Topology | AlexNet, GoogleNet V1/V2/V4, Yolo Tiny V1/V2, Yolo V2/V3, SSD300, SSD512, ResNet-18/50/101/152, DenseNet121/161/169/201, SqueezeNet 1.0/1.1, VGG16/19, MobileNet-SSD, Inception-ResNetv2, Inception-V1/V2/V3/V4, SSD-MobileNet-V2-coco, MobileNet-V1-0.25-128, MobileNet-V1-0.50-160, MobileNet-V1-1.0-224, MobileNet-V1/V2, Faster-RCNN | AlexNet, GoogleNetV1/V2, Mobile_SSD, MobileNetV1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101 | AlexNet, GoogleNetV1/V2, Mobile_SSD, MobileNetV1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101 | AlexNet, GoogleNetV1/V2, Mobile_SSD, MobileNetV1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101 |
| Operating Humidity | 5% ~ 90% | 5% ~ 90% | 5% ~ 90% | 5% ~ 90% |

*A standard PCIe slot provides 75 W of power; this margin is preserved for users in case of different system configurations.

 

Ordering Information

| Part No. | Description |
|---|---|
| Mustang-V100-MX8-R10 | Computing Accelerator Card with 8 x Intel Movidius Myriad X MA2485 VPU, PCIe Gen 2 x4 interface, RoHS |
| Mustang-V100-MX4-R10 | Computing Accelerator Card with 4 x Intel Movidius Myriad X MA2485 VPU, PCIe Gen 2 x2 interface, RoHS |
| Mustang-MPCIE-MX2-R10 | Deep learning inference accelerating miniPCIe card with 2 x Intel Movidius Myriad X MA2485 VPU, miniPCIe interface, 30 mm x 50 mm, RoHS |
| Mustang-M2AE-MX1 | Computing Accelerator Card with 1 x Intel Movidius Myriad X MA2485 VPU, M.2 AE key interface, 2230, RoHS |
| Mustang-M2BM-MX2-R10 | Deep learning inference accelerating M.2 BM key card with 2 x Intel Movidius Myriad X MA2485 VPU, M.2 interface, 22 mm x 80 mm, RoHS |

Datasheet

Download