Ubuntu 18.04 with TensorFlow 2.4.0 installed

Hi, everyone

I'm new to this board. Here is a dumb question: I currently have Ubuntu 18.04 and TensorFlow 2.4.0 (built from source) installed on the board. If I want to use the NPU to accelerate computation, how can I do it? If the NPU is only for the Android platform, does that mean I have to pack my model as an APK and reflash the OS to Android to use the NPU?
Also, I noticed that the board has a Mali GPU. If the NPU cannot be used on Linux, can I use the GPU to accelerate computation instead? If yes, is there any reference on how to use it?

Thx for answering

Board based on the Kirin 970 - HI3670 Application Processor
More info: http://www.96boards.org/product/hikey970/ (Website coming soon…)
Buy now: https://www.seeedstudio.com/HiKey-970-Development-Board-p-3046.html

For accelerating AI (and I presume you mean inference, not training), with a 970 you have three resources at your fingertips. I don't personally have a 970, but in the grand scheme it's not really any different from the other boards I do have.

Essentially you have the CPU, the GPU (Mali), and the NPU. The problem with the GPU and NPU is that they need to hook into the TensorFlow framework at some level in order to be useful. For the GPU, the best way to do this is via the OpenCL standard, which unfortunately TensorFlow doesn't have a great track record with. Likewise, for the NPU there needs to be software support to hook it into TensorFlow so that work can be dispatched to it. The NPU works on Android because it supports the NN HAL there; as long as a framework is ported to Android and uses the NN API, the NPU will work great.

So things are looking bleak? Well, maybe not: while you might have a TensorFlow model, you don't actually have to perform inference in TensorFlow. Other AI frameworks can import the model and run it for you, and some of them have OpenCL support, which would let you take advantage of the Mali GPU. TVM, for instance, can do this (http://tvm.apache.org). Example: https://tvm.apache.org/docs/tutorials/frontend/from_tensorflow.html
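To make the TVM route concrete, here is a rough sketch (untested on a HiKey 970, and TVM's API details vary between versions) of importing a frozen TensorFlow graph into TVM's Relay IR and cross-compiling it for the Mali GPU via OpenCL. The file name `frozen_model.pb`, the input tensor name `input`, and its shape are placeholders you'd replace with your own model's details:

```python
# Sketch: compile a TensorFlow model for the Mali GPU with TVM (assumptions
# noted in the lead-in; follow the from_tensorflow tutorial for specifics).
import tvm
from tvm import relay
import tensorflow as tf

# Load a frozen TensorFlow GraphDef ("frozen_model.pb" is a placeholder).
with tf.io.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Import into Relay; the input name and shape must match your model.
mod, params = relay.frontend.from_tensorflow(
    graph_def, shape={"input": (1, 224, 224, 3)})

# Target the Mali GPU through OpenCL; "-device=mali" selects Mali-tuned
# schedules, and the host target cross-compiles for the board's Arm CPU.
target = tvm.target.Target("opencl -device=mali",
                           host="llvm -mtriple=aarch64-linux-gnu")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Ship this shared library to the board and load it with the TVM runtime.
lib.export_library("model_mali.so")
```

The compiled `model_mali.so` would then be loaded on the board with TVM's runtime module, which dispatches the kernels to the Mali OpenCL driver.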

Being able to use the NPU will probably be the most difficult part unless someone takes on the effort of integrating it with an AI framework or two. Given docs, it would be possible, but it's not a small task.

Hope this helps.


Thx a lot!
I do mean inference. I will try it the way you suggest and see if I can use an OpenCL-supported framework to make the GPU work. As for the NPU, I think I'll just give up for now, since I don't think I'd be able to find a solution in a short time; the official platform from Huawei, the HiAI Foundation, is designed only for the Android development environment.