Convert and Run Inference for Faster RCNN and YOLOv3 models

Hi guys,

I need help with converting and running the Faster RCNN and YOLOv3 models on the 820c board.

For Faster RCNN: the Qualcomm SNPE documentation states that this type of model has been supported since v1.4.0, but there is no tutorial on how to convert and run it.

These are the links that I have been using for Faster RCNN setup

GitHub - rbgirshick/py-faster-rcnn: Faster R-CNN (Python implementation) -- see https://github.com/ShaoqingRen/faster_rcnn for the official MATLAB version
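Since py-faster-rcnn is a Caffe-based implementation, my understanding is that the conversion would go through SNPE's Caffe converter. This is only a sketch: the prototxt/caffemodel paths are the ones from the py-faster-rcnn repo layout, and the flag names follow recent SNPE releases, so please check `snpe-caffe-to-dlc --help` in your SDK version (older releases used short flags instead):

```shell
# Sketch, not verified on the 820c. Paths assume the py-faster-rcnn
# repo layout with the VGG16 end2end model; replace with your own files.
snpe-caffe-to-dlc \
    --input_network models/pascal_voc/VGG16/faster_rcnn_end2end/test.prototxt \
    --caffe_bin data/faster_rcnn_models/VGG16_faster_rcnn_final.caffemodel \
    --output_path faster_rcnn_vgg16.dlc
```

If anyone has run this on a Faster RCNN network, it would be good to know whether the Python layers (e.g. the proposal layer) convert cleanly or need to be handled outside the DLC.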

For YOLOv3: what are the steps to convert a YOLOv3 ONNX model to a DLC model?
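For reference, this is what I would expect the ONNX-to-DLC step to look like, based on the SNPE tool names; the file names are placeholders and I have not validated this myself:

```shell
# Sketch, assuming the SNPE SDK environment is set up
# (bin/envsetup.sh sourced) and yolov3.onnx is your exported model.
snpe-onnx-to-dlc \
    --input_network yolov3.onnx \
    --output_path yolov3.dlc

# Optional follow-up for fixed-point runtimes (DSP/AIP):
# image_file_list.txt lists preprocessed calibration inputs.
snpe-dlc-quantize \
    --input_dlc yolov3.dlc \
    --input_list image_file_list.txt \
    --output_dlc yolov3_quantized.dlc
```

Confirmation that the YOLOv3 ops (upsample, concat, the route/yolo layers after export) are actually supported by the converter would be very helpful.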

I would much appreciate it if anybody who has done this could confirm that it works, even if only the conversion to the DLC model.

Thanks.