PQ Labs boosts deep learning on Intel CPUs up to 199x, enabling AI on billions of existing devices without a GPU or AI-chip upgrade, thanks to MagicNet technology
FREMONT, Calif., Jan. 15, 2019 /PRNewswire/ — A Lego car equipped with a low-power 0.9 GHz processor and a tiny camera can perform self-driving tasks in the wild: chasing and playing with your cats and dogs, recording and uploading videos, collecting fruit, following you on expeditions. A scene that might once have appeared only in a sci-fi movie will soon become reality thanks to MagicNet, a new deep learning network that accelerates AI up to 199x on CPU.
Running deep learning tasks without a GPU or AI-chip hardware can actually be faster? That seemed like mission impossible for decades, until January 2019, when PQ Labs demonstrated a jaw-dropping deep learning solution that surprised visitors at the CES tech show.
With a 0.9 GHz Intel processor embedded in it, the Lego car instantly acquires artificial-intelligence skills and can run self-driving tasks such as detecting objects and obstacles. In the past, the toy would have had to carry a bulky computer case with a high-end graphics card installed to achieve such AI computing power.
The story of a parallel tech universe
For historical reasons, the whole AI industry and academic research are built on the graphics-card programming model (specifically NVIDIA GPUs; AMD GPUs lack comparable software and algorithm support). Other AI chips follow suit, but they all use similar AI models and optimization strategies.
“The AI technology tree may have unfolded in the wrong way, or at least there is an alternative tech path to be explored in order to achieve better AI results. And this is where PQ Labs comes into play,” says Frank Lu, PQ Labs CEO and inventor of MagicNet.
MagicNet is the Answer
MagicNet is designed and developed from the ground up, starting from the fundamentals of deep learning mathematics. All math operations are redefined and re-implemented in a library called “Magic-Compute,” which replaces NVIDIA CUDA, cuDNN, and Intel MKL and runs significantly faster. For example, “convolution” operations (the building block of all deep learning models) are replaced by Magic-Convolution for a significant performance boost.
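PQ Labs has not published Magic-Compute's internals, so as a point of reference only, here is a minimal sketch of the primitive being replaced: a direct 2-D "convolution" (technically cross-correlation, as deep learning frameworks implement it) in plain NumPy. The function name and shapes are illustrative assumptions, not PQ Labs APIs; a library like Magic-Compute, cuDNN, or MKL would provide a much faster implementation of exactly this math.

```python
import numpy as np

def conv2d(x, w):
    """Reference direct 2-D cross-correlation (deep learning "convolution").

    x: input feature map, shape (H, W)
    w: kernel, shape (kH, kW)
    Returns the valid-mode output, shape (H - kH + 1, W - kW + 1).
    This naive loop only defines the math; an optimized library replaces
    this primitive with a faster kernel while producing the same output.
    """
    H, W = x.shape
    kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kH, j:j + kW] * w)
    return out

# A 3x3 box filter applied to a 5x5 input:
x = np.arange(25, dtype=float).reshape(5, 5)
w = np.ones((3, 3)) / 9.0
y = conv2d(x, w)
print(y.shape)  # (3, 3)
```

Any drop-in replacement must match this reference output bit-for-bit (or within floating-point tolerance), which is what allows existing models to run unchanged on a faster backend.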
The speedup of MagicNet also comes from its unique AI backbone model. The backbone runs faster than efficient models such as MobileNet V2 and ShuffleNet V2, with higher accuracy. By replacing the backbones of YOLO or SSD with MagicNet, the resulting networks, Magic-Yolo and Magic-SSD, can run 199x faster than the original versions.
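The release does not describe how the backbone swap is performed. As an illustration of the general pattern (all class names here are hypothetical, not PQ Labs code), a detector head can delegate feature extraction to a pluggable backbone, so that swapping the backbone yields a new variant without touching the detection logic:

```python
class MobileNetV2Backbone:
    """Stand-in for an existing efficient backbone (illustrative only)."""
    name = "MobileNetV2"

    def extract_features(self, image):
        return f"features({self.name}, {image})"

class MagicNetBackbone:
    """Stand-in for the MagicNet backbone described in the release."""
    name = "MagicNet"

    def extract_features(self, image):
        return f"features({self.name}, {image})"

class SSDDetector:
    """A detector head that works with any backbone exposing
    extract_features(); the head itself never changes."""

    def __init__(self, backbone):
        self.backbone = backbone

    def detect(self, image):
        feats = self.backbone.extract_features(image)
        return f"boxes from {feats}"

# Swapping the backbone produces the "Magic-SSD" variant:
ssd = SSDDetector(MobileNetV2Backbone())
magic_ssd = SSDDetector(MagicNetBackbone())
print(ssd.detect("img"))        # boxes from features(MobileNetV2, img)
print(magic_ssd.detect("img"))  # boxes from features(MagicNet, img)
```

This composition pattern is why a faster backbone translates directly into a faster end-to-end detector: the head's cost is unchanged, and the dominant feature-extraction cost is replaced wholesale.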
SOURCE PQ Labs