
ncnn Support


This support package is provided by partner nihui from Tencent

ncnn is Tencent's open source neural network inference framework

  • Supports deep learning models from Caffe/MXNet/Keras/PyTorch (ONNX)/Darknet/TensorFlow (MLIR)
  • Cross-platform: Windows/Linux/macOS/Android/iOS/WebAssembly/...
  • Compatible with multiple CPU architectures: RISC-V/x86/ARM/MIPS/...
  • Supports GPU acceleration: NVIDIA/AMD/Intel/Apple/ARM Mali/Adreno/...
  • Supports common model structures such as MobileNet/ShuffleNet/ResNet/LSTM/SSD/YOLO/...
  • Capable of much more; see the ncnn GitHub README and the QQ group listed on the project home page



Prepare cross-compilation toolchain

Visit the T-Head Chip Open Community to download a 900-series toolchain:

For example, riscv64-linux-x86_64-20210512.tar.gz. After downloading, extract it and set the environment variable:

tar -xf riscv64-linux-x86_64-20210512.tar.gz
export RISCV_ROOT_PATH=/home/nihui/osd/riscv64-linux-x86_64-20210512
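Before building, it can help to confirm the toolchain is where `RISCV_ROOT_PATH` points. A minimal sketch; the `riscv64-unknown-linux-gnu-` compiler prefix is an assumption, so list the toolchain's `bin/` directory to see the real binary names:

```shell
# Verify the extracted toolchain before building.
# The compiler prefix below is an assumption; check bin/ for the real one.
export RISCV_ROOT_PATH=/home/nihui/osd/riscv64-linux-x86_64-20210512
if [ -d "$RISCV_ROOT_PATH/bin" ]; then
    ls "$RISCV_ROOT_PATH/bin" | head
    "$RISCV_ROOT_PATH/bin/riscv64-unknown-linux-gnu-gcc" --version
else
    echo "toolchain not found at $RISCV_ROOT_PATH; extract the tarball first"
fi
```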

Download and compile ncnn

Cross-compile ncnn for the D1-H (C906).

Because of a compiler bug, a Release build produces illegal-instruction errors at runtime, so you must build with RelWithDebInfo instead.

git clone https://github.com/Tencent/ncnn.git
cd ncnn
mkdir build-c906
cd build-c906
# the c906 toolchain file reads the RISCV_ROOT_PATH set earlier;
# relwithdebinfo works around the compiler bug mentioned above
cmake -DCMAKE_TOOLCHAIN_FILE=../toolchains/c906.toolchain.cmake -DCMAKE_BUILD_TYPE=relwithdebinfo ..
make -j32
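After `make` finishes, it is worth checking that the binary really targets RISC-V rather than the host architecture. A quick check, run on the build machine from inside `build-c906`:

```shell
# 'file' reports the ELF target architecture; a correct cross build of
# benchncnn should show RISC-V, not the host's x86-64.
if [ -f benchmark/benchncnn ]; then
    file benchmark/benchncnn
else
    echo "benchmark/benchncnn not found; run the build steps above first"
fi
```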

Test benchncnn

On the D1-H, the default TinaLinux causes an illegal-instruction error when running ncnn programs, so you have to use the Debian system instead

Copy ncnn/build-c906/benchmark/benchncnn and ncnn/benchmark/*.param to the D1-H development board
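The copy can be done over the network with scp if the board runs an SSH server; the board address below is a placeholder, not from the original:

```shell
# BOARD is a hypothetical address; replace with your D1-H's user@host.
BOARD=root@192.168.0.100
if [ -f ncnn/build-c906/benchmark/benchncnn ]; then
    scp ncnn/build-c906/benchmark/benchncnn "$BOARD:/root/"
    scp ncnn/benchmark/*.param "$BOARD:/root/"
else
    echo "benchncnn not built yet; see the build steps above"
fi
```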

./benchncnn 4 1 0 -1 0
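The five positional arguments follow benchncnn's usage, `[loop count] [num threads] [powersave] [gpu device] [cooling down]`, so the command above runs 4 timing loops on a single thread with the GPU disabled. A commented sketch, guarded because the binary only exists on the board:

```shell
# benchncnn positional arguments (per ncnn's benchmark usage):
#   4   loop count: run each model 4 times and report min/max/avg times
#   1   num threads: a single CPU thread
#   0   powersave: 0 = use all cores as scheduled
#   -1  gpu device: -1 = CPU only, no Vulkan GPU
#   0   cooling down: 0 = no pause between models
if [ -x ./benchncnn ]; then
    ./benchncnn 4 1 0 -1 0
else
    echo "run this on the D1-H board where benchncnn was copied"
fi
```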


For links to the original text and subsequent updates, please refer to: