Intel Vision Accelerator Design with Intel Movidius VPU
A Perfect Choice for AI Deep Learning Inference Workloads
Powered by Open Visual Inference & Neural Network Optimization (OpenVINO) toolkit
Compact M.2 2280 form factor (22 × 80 mm)
Low power consumption: approximately 7.5 W for two Intel Movidius Myriad X VPUs
Supports the OpenVINO toolkit; ready for AI edge computing
The two Intel Movidius Myriad X VPUs can execute two network topologies simultaneously
Intel Distribution of OpenVINO toolkit
The Intel Distribution of OpenVINO toolkit is built around convolutional neural networks (CNNs); it extends inference workloads across multiple types of Intel platforms and maximizes performance.
It can optimize pre-trained deep learning models from frameworks such as Caffe*, MXNet*, TensorFlow*, ONNX*, and Kaldi*. The tool suite includes more than 20 pre-trained models and supports 100+ public and custom models for easier deployment across Intel silicon products (CPU, GPU/Intel Processor Graphics, FPGA, VPU).
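As a minimal sketch of how a model optimized by the toolkit might be loaded onto the card's Myriad X VPUs via the OpenVINO Inference Engine Python API: the file names "model.xml" and "model.bin" are placeholders for an Intermediate Representation produced by the Model Optimizer, not files shipped with the card, and the helper function below is illustrative, not part of the toolkit.

```python
# Hedged sketch: loading a pre-trained IR model onto a Myriad X VPU
# through the OpenVINO Inference Engine. Guarded import so the sketch
# is readable on machines without the toolkit installed.
try:
    from openvino.inference_engine import IECore
except ImportError:
    IECore = None  # OpenVINO is not installed on this machine


def load_on_vpu(model_xml, model_bin, device="MYRIAD"):
    """Read an IR network and load it onto the given device.

    With two Myriad X VPUs on the card, two networks can be loaded
    and executed side by side, one per VPU.
    """
    if IECore is None:
        raise RuntimeError("OpenVINO Inference Engine is not installed")
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    return ie.load_network(network=net, device_name=device)
```

The returned executable network exposes an `infer()` method that takes a dictionary of input blobs; targeting the CPU instead is just a matter of passing `device="CPU"`.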
Operating systems: Ubuntu 16.04.3 LTS 64-bit, CentOS 7.4 64-bit, Windows 10 64-bit
Quite often, we’re the manufacturer behind the solutions you know
Whether we’re utilising our own manufacturing and production facilities here in Southampton or our ecosystem of partners in Asia, BVM can handle any size or complexity of batch manufacturing. Our production team is highly motivated, with a flexible “can do” attitude that ensures consistent quality and on-time delivery.