[2018][***]Deep detection network for real-life traffic sign in vehicular networks[***]A Hierarchical Deep Architecture and Mini-Batch Selection Method For Joint Traffic Sign and Light Detection
Arc Lab. 2019. 7. 31. 11:34

1. Cited Paper
Traffic-sign detection and classification in the wild
Zhe Zhu, Dun Liang, Songhai Zhang, Xiaolei Huang, Baoli Li, Shimin Hu;
The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2110-2118
2. Cited Passages
-Page 2
We are the first to present a network that performs joint traffic light and sign detection. Our architecture is suitable for autonomous car deployment because it saves GPU memory by solving two tasks and has real-time detection speeds. We test our single network on the Bosch Small Traffic Light dataset [3] and the Tsinghua-Tencent 100K Traffic Sign dataset [4] and show it outperforms the existing Bosch dataset state-of-the-art.
We have chosen to use the Tsinghua-Tencent 100K dataset because it is significantly more challenging; there are 45 classes of signs, and the images are not cropped to include only the traffic sign extent. Moreover, the recent attempts [4], [12], [13] on this dataset and the standard evaluation procedure allow our work to be compared to the state-of-the-art.
-Page 3
-Page 5
We evaluate our proposed solutions described in Section III by performing experiments on the Bosch and Tsinghua-Tencent datasets. For training, we reduce the Tsinghua-Tencent dataset to 45 classes as done in [4], [12], [13].
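A minimal sketch of how such a class reduction can be carried out on the Tsinghua-Tencent 100K annotations. The more-than-100-instances cutoff follows the description in [4], but the annotation path and JSON field names (`imgs`, `objects`, `category`) are assumptions for illustration, not the authors' released code.

```python
import json
from collections import Counter

# Load the TT100K annotation file (path and layout are assumptions).
with open("data/annotations.json") as f:
    anno = json.load(f)

# Count instances of each sign class across all images.
counts = Counter(
    obj["category"]
    for img in anno["imgs"].values()
    for obj in img.get("objects", [])
)

# [4] keeps the sign classes with more than 100 instances,
# which yields the 45-class subset also used by [12] and [13].
kept = {cls for cls, n in counts.items() if n > 100}

# Drop annotations for rare classes outside the kept subset.
for img in anno["imgs"].values():
    img["objects"] = [o for o in img.get("objects", []) if o["category"] in kept]

print(len(kept), "classes kept")
```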
-Page 6
Tsinghua-Tencent Evaluation Procedure: Following [4], we evaluate our approaches on accuracy and recall metrics on small (area < 32² pixels), medium (32² < area < 96²) and large (area > 96²) objects used in the Microsoft COCO benchmark. Although better suited metrics exist for object detection such as mAP, average recall is sufficient for comparison due to the respective correlation with detection performance [37]. With a minimum IoU threshold of ≥ 0.5, we evaluate over all classes to determine the final model performances with respect to the Tsinghua-Tencent test set.
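The size thresholds above are the COCO area buckets (32² = 1,024 px², 96² = 9,216 px²) and are easy to misread in extracted text, so here is a minimal sketch of the evaluation logic, assuming simple [x, y, w, h] boxes and greedy one-to-one matching; it illustrates the procedure, not the authors' evaluation code.

```python
def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def size_bucket(box):
    """COCO-style buckets: small < 32^2, medium < 96^2, else large."""
    area = box[2] * box[3]
    if area < 32 ** 2:
        return "small"
    if area < 96 ** 2:
        return "medium"
    return "large"

def recall_by_size(gt_boxes, det_boxes, iou_thr=0.5):
    """Fraction of ground-truth boxes matched by some detection at
    IoU >= iou_thr, reported per COCO size bucket."""
    hit = {"small": [0, 0], "medium": [0, 0], "large": [0, 0]}
    used = set()  # each detection may match at most one ground truth
    for g in gt_boxes:
        bucket = hit[size_bucket(g)]
        bucket[1] += 1
        for i, d in enumerate(det_boxes):
            if i not in used and iou(g, d) >= iou_thr:
                used.add(i)
                bucket[0] += 1
                break
    return {k: (m / n if n else None) for k, (m, n) in hit.items()}
```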
-Page 7
Li et al. [12] have a slow detection time of 0.6 seconds per frame, excluding proposal time, because of their generative model; [13] and [4] do not provide inference times, but both state speed as a needed improvement for their models. Meng et al. [13] use an expensive image pyramid and sliding window approach, and [4] uses the computationally intensive OverFeat [38] framework. Our model is able to perform inference at 0.015 seconds per image, a 40X speedup over [12].
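The 40X figure follows directly from the reported per-frame times (0.6 s / 0.015 s = 40), and 0.015 s per image corresponds to roughly 66 frames per second. Below is a minimal sketch of how such per-image latency is commonly measured; `model` and `images` are placeholders, and the warm-up-then-average scheme is a general convention, not the authors' benchmark script.

```python
import time

def mean_latency(model, images, warmup=10):
    """Average per-image inference time in seconds; the first `warmup`
    frames are skipped so one-time setup cost does not skew the mean."""
    for img in images[:warmup]:
        model(img)
    timed = images[warmup:]
    start = time.perf_counter()
    for img in timed:
        model(img)
    return (time.perf_counter() - start) / len(timed)

# Reported figures: 0.6 s/frame for [12] vs. 0.015 s/image here,
# i.e. 0.6 / 0.015 = 40X faster, or about 1 / 0.015 ≈ 66 FPS.
```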