Citation: ZHOU Lijun, LIU Yu, BAI Lu, LIU Fei, WANG Yawei. Using TensorRT for deep learning and inference applications[J]. Journal of Applied Optics, 2020, 41(2): 337-341. DOI: 10.5768/JAO202041.0202007


Using TensorRT for deep learning and inference applications

Abstract: TensorRT is a high-performance deep learning inference platform. It includes a deep learning inference optimizer and a runtime that provide low latency and high throughput for deep learning inference applications. An example of using TensorRT to quickly build a computing pipeline was presented, implementing a typical intelligent video analysis application. The example demonstrated four concurrent video streams that used the on-chip decoder for decoding, the on-chip scaler for video scaling, and GPU computing. For simplicity of presentation, only one channel used NVIDIA TensorRT to perform object identification and to generate bounding boxes around the identified objects. The example also used video converter functions for various format conversions and EGLImage to demonstrate buffer sharing and image display. Finally, a V100 GPU card was used to test the TensorRT acceleration performance on the ResNet network. The results show that TensorRT improves throughput by about 15 times.
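
To make the optimizer/runtime workflow concrete, the following is a minimal sketch of building a TensorRT engine for a ResNet-style network, assuming the TensorRT Python API (8.x-style calls) and a hypothetical ONNX model file resnet50.onnx. It is not the paper's exact pipeline, which also involves the on-chip decoder, the on-chip scaler, and EGLImage buffer sharing.

import tensorrt as trt

# Create the builder and an explicit-batch network definition.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the (hypothetical) ONNX model into the TensorRT network definition.
with open("resnet50.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX model")

# Let the optimizer build a serialized engine; FP16 is one of the optimizations
# TensorRT can apply when the GPU supports it.
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)
engine_bytes = builder.build_serialized_network(network, config)

# The runtime later deserializes this engine to execute inference on the GPU.
with open("resnet50.engine", "wb") as f:
    f.write(engine_bytes)

At run time, trt.Runtime(logger).deserialize_cuda_engine(...) loads the saved engine, and an execution context created from it performs the low-latency inference step described above.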

     
