Intel Vision Accelerator Design with Intel Movidius VPU
A Perfect Choice for AI Deep Learning Inference Workloads
Powered by Open Visual Inference & Neural Network Optimization (OpenVINO) toolkit
Compact M.2 2230 form factor (22 x 30 mm)
Low power consumption: approximately 5 W per Intel Movidius Myriad X VPU
Supports the OpenVINO toolkit; ready for AI edge computing
Intel® Distribution of OpenVINO™ toolkit
The Intel® Distribution of OpenVINO™ toolkit is based on convolutional neural networks (CNN); it extends workloads across multiple types of Intel® platforms and maximizes performance.
It can optimize pre-trained deep learning models from frameworks such as Caffe, MXNet, TensorFlow, and ONNX. The tool suite includes more than 20 pre-trained models and supports 100+ public and custom models (including Caffe*, MXNet*, TensorFlow*, ONNX*, and Kaldi*) for easier deployment across Intel® silicon products (CPU, GPU/Intel® Processor Graphics, FPGA, VPU).
Currently supported topologies: AlexNet, GoogleNet V1/V2, MobileNet SSD, MobileNet V1/V2, MTCNN, SqueezeNet 1.0/1.1, Tiny YOLO V1 & V2, YOLO V2, and ResNet-18/50/101. *For further topology support information, please refer to the official Intel® OpenVINO™ toolkit website.
High flexibility: the Mustang-M2AE-MX1 is developed on the OpenVINO™ toolkit architecture, which allows trained models from Caffe, TensorFlow, MXNet, and ONNX to execute on the card after conversion to an optimized Intermediate Representation (IR).
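The convert-then-execute workflow above can be sketched with the OpenVINO Python API. This is a minimal illustration, not vendor sample code: the model path, input image size, and use of OpenCV for image loading are all assumptions, while `"MYRIAD"` is the OpenVINO device name used for Movidius VPUs.

```python
import numpy as np


def preprocess(frame: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 uint8 image to the NCHW float32 layout
    that IR models typically expect (resizing to the model's
    input shape is omitted here for brevity)."""
    chw = frame.transpose(2, 0, 1).astype(np.float32)  # HWC -> CHW
    return chw[np.newaxis, ...]                        # add batch dim -> NCHW


def run_inference(image_path: str, model_xml: str = "model.xml"):
    """Compile an IR model for the Myriad X VPU and run one inference.

    The .xml/.bin IR pair is assumed to have been produced beforehand
    by the toolkit's Model Optimizer (e.g. `mo --input_model model.onnx`).
    """
    import cv2                          # assumption: OpenCV for image I/O
    from openvino.runtime import Core   # OpenVINO >= 2022 runtime API

    core = Core()
    model = core.read_model(model_xml)              # .bin found alongside .xml
    compiled = core.compile_model(model, "MYRIAD")  # target the VPU
    frame = cv2.imread(image_path)
    result = compiled([preprocess(frame)])
    return result[compiled.output(0)]
```

On a host without the card, passing `"CPU"` instead of `"MYRIAD"` to `compile_model` runs the same IR on the processor, which is a common way to validate a model before deploying to the VPU.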
1 x Intel Movidius Myriad X MA2485 VPU
Ubuntu 16.04.3 LTS (64-bit), CentOS 7.4 (64-bit), Windows 10 (64-bit)
Quite often, we’re the manufacturer behind the solutions you know
Whether we’re utilising our own manufacturing and production facilities here in Southampton or our ecosystem of partners in Asia, BVM can cope with any size or complexity of batch manufacturing. Our production team is highly motivated, with a flexible “can do” attitude that ensures consistent quality and on-time delivery.