Convert and Run Inference for Faster RCNN and YOLOv3 Models

Hi guys,

I need help with converting and running the Faster RCNN and YOLOv3 models on the 820c board.

For Faster RCNN: the Qualcomm SNPE documentation states that this type of model has been supported since v1.4.0, but there is no tutorial on how to convert and run it.

These are the links I have been using for the Faster RCNN setup (a rough sketch of the conversion I expect to run follows below):

Merged with https://github.com/rbgirshick/py-faster-rcnn
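Based on my reading of the docs, I assume the Caffe-based Faster RCNN model from py-faster-rcnn would go through snpe-caffe-to-dlc, roughly like the sketch below. The flag names are my assumption from the documentation and may differ between SDK versions, and the prototxt/caffemodel/DLC file names are just placeholders. Please correct me if this is not the right approach.

```python
# Rough sketch of how I expect the Caffe Faster RCNN -> DLC conversion to work.
# NOTE: the converter flag names are my assumption from the SNPE docs and may
# differ between SDK versions; the file paths below are placeholders.
import subprocess

def convert_faster_rcnn_to_dlc(prototxt, caffemodel, out_dlc):
    """Call the SNPE Caffe converter on the py-faster-rcnn deploy files."""
    cmd = [
        "snpe-caffe-to-dlc",
        "--input_network", prototxt,   # deploy/test prototxt from py-faster-rcnn
        "--caffe_bin", caffemodel,     # trained weights (.caffemodel)
        "--output_path", out_dlc,      # resulting DLC file
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    convert_faster_rcnn_to_dlc(
        "faster_rcnn_test.prototxt",              # placeholder paths
        "VGG16_faster_rcnn_final.caffemodel",
        "faster_rcnn.dlc",
    )
```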

For YOLOv3 to DLC: what are the steps to convert a YOLOv3 ONNX model to a DLC model?
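From the documentation I would expect snpe-onnx-to-dlc to handle this, roughly as in the sketch below. The flag names are my assumption and may differ between SNPE versions (I would check `snpe-onnx-to-dlc --help`), and yolov3.onnx / yolov3.dlc are placeholder paths. Are there any extra steps needed beyond this?

```python
# Sketch of the ONNX -> DLC conversion I expect to use for YOLOv3.
# NOTE: flag names are my assumption from the SNPE documentation and may
# differ between SDK versions; the paths below are placeholders.
import subprocess

def convert_yolov3_to_dlc(onnx_path, dlc_path):
    """Call the SNPE ONNX converter on a YOLOv3 ONNX export."""
    cmd = [
        "snpe-onnx-to-dlc",
        "--input_network", onnx_path,   # YOLOv3 exported to ONNX
        "--output_path", dlc_path,      # resulting DLC file
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    convert_yolov3_to_dlc("yolov3.onnx", "yolov3.dlc")  # placeholder paths
```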

I would much appreciate it if anyone who has done this could confirm that it works, even if only the conversion to the DLC model.

Thanks.