IEI Mustang-V100-MX8 – Vision Accelerator Card


Description

Mustang-V100-MX8

Accelerate To The Future

Intel Vision Accelerator Design with Intel Movidius VPU.


A Perfect Choice for AI Deep Learning Inference Workloads


Powered by Open Visual Inference & Neural Network Optimization (OpenVINO) toolkit

  • Half-Height, Half-Length, Single-Slot compact size
  • Low power consumption: approximately 2.5 W per Intel Movidius Myriad X VPU
  • Supports the OpenVINO toolkit; AI edge computing ready
  • Eight Intel Movidius Myriad X VPUs can execute eight topologies simultaneously

OpenVINO toolkit

The OpenVINO toolkit is built for convolutional neural network (CNN) workloads; it extends workloads across Intel hardware and maximizes performance.
It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, and TensorFlow into an intermediate representation (IR) binary, then execute the inference engine heterogeneously across Intel hardware such as CPUs, GPUs, the Intel Movidius Neural Compute Stick, and FPGAs.
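The convert-then-infer workflow described above can be sketched as follows. This is an illustrative CLI fragment, not taken from this datasheet: the model file names are hypothetical, and the `HDDL` device name is an assumption based on how OpenVINO typically addresses multi-VPU accelerator cards.

```shell
# Convert a pre-trained Caffe model into OpenVINO IR (.xml topology + .bin weights).
# mo.py ships with the OpenVINO Model Optimizer; FP16 suits the Myriad X VPUs.
python3 mo.py --input_model squeezenet1.1.caffemodel \
              --output_dir ./ir --data_type FP16

# Run an OpenVINO inference sample against the generated IR,
# selecting the card's VPUs with the -d (device) flag.
./classification_sample -m ./ir/squeezenet1.1.xml -i cat.jpg -d HDDL
```

The same IR can be re-run on a CPU or GPU simply by changing the `-d` argument, which is the heterogeneous-execution point the paragraph above makes.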


Applications

  • Machine Vision
  • Smart Retail
  • Surveillance
  • Medical Diagnostics


Features

  • Operating Systems
    • Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit (Windows 10 support planned for the end of 2018; more OSes coming soon)
  • OpenVINO Toolkit
  • Intel Deep Learning Deployment Toolkit
    • Model Optimizer
    • Inference Engine
  • Optimized computer vision libraries
  • Intel Media SDK
  • *OpenCL graphics drivers and runtimes.
  • Currently supported topologies: AlexNet, GoogleNet V1, Yolo Tiny V1 & V2, Yolo V2, SSD300, ResNet-18, Faster-RCNN (more variants coming soon)
  • High flexibility: the Mustang-V100-MX8 is built on the OpenVINO toolkit structure, which allows trained models from frameworks such as Caffe, TensorFlow, and MXNet to execute on it after conversion to an optimized IR

*OpenCL is a trademark of Apple Inc., used by permission by Khronos.

Specification

Mustang-V100-MX8
  • Main Chip: 8 x Intel Movidius Myriad X MA2485 VPU
  • Operating Systems: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit (Windows 10 support planned for the end of 2018; more OSes coming soon)
  • Dataplane Interface: PCI Express x4
  • Power Consumption: <30 W
  • Operating Temperature: 5°C~55°C (ambient temperature)
  • Cooling: Active fan
  • Dimensions: Half-Height, Half-Length, Single-width PCIe
  • Supported Topologies: AlexNet, GoogleNet V1/V2/V4, Yolo Tiny V1/V2, Yolo V2/V3, SSD300, SSD512, ResNet-18/50/101/152, DenseNet121/161/169/201, SqueezeNet 1.0/1.1, VGG16/19, MobileNet-SSD, Inception-ResNetv2, Inception-V1/V2/V3/V4, SSD-MobileNet-V2-coco, MobileNet-V1-0.25-128, MobileNet-V1-0.50-160, MobileNet-V1-1.0-224, MobileNet-V1/V2, Faster-RCNN
  • Operating Humidity: 5% ~ 90%

Mustang-V100-MX4
  • Main Chip: 4 x Intel Movidius Myriad X MA2485 VPU
  • Operating Systems: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, Windows 10 64-bit
  • Dataplane Interface: PCIe Gen 2 x 2
  • Power Consumption: 15 W
  • Operating Temperature: 0°C~55°C (in TANK AIoT Dev. Kit)
  • Cooling: Active fan
  • Dimensions: 113 x 56 x 23 mm
  • Supported Topologies: AlexNet, GoogleNetV1/V2, Mobile_SSD, MobileNetV1/V2, MTCNN, Squeezenet1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101
  • Operating Humidity: 5% ~ 90%

Mustang-MPCIE-MX2
  • Main Chip: 2 x Intel Movidius Myriad X MA2485 VPU
  • Operating Systems: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, Windows 10 64-bit
  • Dataplane Interface: miniPCIe
  • Power Consumption: Approximately 7.5 W
  • Operating Temperature: 0°C~55°C (in TANK AIoT Dev. Kit)
  • Cooling: Passive/Active heatsink
  • Dimensions: 30 x 50 mm
  • Supported Topologies: AlexNet, GoogleNetV1/V2, Mobile_SSD, MobileNetV1/V2, MTCNN, Squeezenet1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101
  • Operating Humidity: 5% ~ 90%

Mustang-M2AE-MX1
  • Main Chip: 1 x Intel Movidius Myriad X MA2485 VPU
  • Operating Systems: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, Windows 10 64-bit
  • Dataplane Interface: M.2 AE Key
  • Power Consumption: Approximately 5 W
  • Operating Temperature: 0°C~55°C (in TANK AIoT Dev. Kit)
  • Cooling: Passive heatsink
  • Dimensions: 22 x 30 mm
  • Supported Topologies: AlexNet, GoogleNetV1/V2, Mobile_SSD, MobileNetV1/V2, MTCNN, Squeezenet1.0/1.1, Tiny Yolo V1 & V2, Yolo V2, ResNet-18/50/101
  • Operating Humidity: 5% ~ 90%

*A standard PCIe slot provides 75 W of power; this headroom is preserved for the user in case of different system configurations.


Ordering Information

  • Mustang-V100-MX8-R10 – Computing Accelerator Card with 8 x Intel Movidius Myriad X MA2485 VPU, PCIe Gen2 x4 interface, RoHS
  • Mustang-V100-MX4-R10 – Computing Accelerator Card with 4 x Intel Movidius Myriad X MA2485 VPU, PCIe Gen 2 x 2 interface, RoHS
  • Mustang-MPCIE-MX1 – Deep learning inference accelerating miniPCIe card with 2 x Intel Movidius Myriad X MA2485 VPU, miniPCIe interface, 30 mm x 50 mm, RoHS
  • Mustang-M2AE-MX1 – Computing Accelerator Card with 1 x Intel Movidius Myriad X MA2485 VPU, M.2 AE key interface, 2230, RoHS
  • Mustang-M2BM-MX2-R10 – Deep learning inference accelerating M.2 BM key card with 2 x Intel Movidius Myriad X MA2485 VPU, M.2 interface, 22 mm x 80 mm, RoHS
