Engineering Group Case Report (Article ID: igmin167)

System for Detecting Moving Objects Using 3D LiDAR Technology

Subjects: Robotics, Bioengineering, Sensors, Signal Processing
DOI: 10.61927/igmin167

Affiliation

    1. Department of Electronics and Communication Engineering, Hajee Mohammad Danesh Science and Technology University, Dinajpur-5200, Bangladesh

    2. Department of Pharmacy, World University of Bangladesh, Bangladesh

Abstract

The “System for Detecting Moving Objects Using 3D LiDAR Technology” introduces a groundbreaking method for precisely identifying and tracking dynamic entities within a given environment. By harnessing the capabilities of advanced 3D LiDAR (Light Detection and Ranging) technology, the system constructs intricate three-dimensional maps of the surroundings using laser beams. Through continuous analysis of the evolving data, the system adeptly discerns and monitors the movement of objects within its designated area. What sets this system apart is its innovative integration of a multi-sensor approach, combining LiDAR data with inputs from various other sensor modalities. This fusion of data not only enhances accuracy but also significantly boosts the system’s adaptability, ensuring robust performance even in challenging environmental conditions characterized by low visibility or erratic movement patterns. This pioneering approach fundamentally improves the precision and reliability of moving object detection, thereby offering a valuable solution for a diverse array of applications, including autonomous vehicles, surveillance, and robotics.
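
To make the frame-to-frame detection idea concrete, the sketch below is a minimal, illustrative Python example, not the authors' implementation: it flags points in the current LiDAR frame that have no nearby counterpart in the previous frame and groups them into object candidates with a simple region-growing clustering. The function names (detect_moving_points, cluster_objects) and the distance thresholds are assumptions chosen for illustration; a real system would also compensate for sensor ego-motion and fuse the additional sensor modalities the abstract describes.

import numpy as np
from scipy.spatial import cKDTree

def detect_moving_points(prev_cloud, curr_cloud, motion_thresh=0.2):
    # Points of the current frame farther than motion_thresh metres from every
    # point of the previous frame; with a static sensor these are candidate
    # moving points.
    tree = cKDTree(prev_cloud)
    dists, _ = tree.query(curr_cloud, k=1)
    return curr_cloud[dists > motion_thresh]

def cluster_objects(points, eps=0.5, min_points=10):
    # Group moving points into object candidates by region growing over a
    # radius of eps metres; clusters smaller than min_points are treated as noise.
    if len(points) == 0:
        return []
    tree = cKDTree(points)
    labels = -np.ones(len(points), dtype=int)
    cluster_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = cluster_id
        frontier = [seed]
        while frontier:
            j = frontier.pop()
            for k in tree.query_ball_point(points[j], eps):
                if labels[k] == -1:
                    labels[k] = cluster_id
                    frontier.append(k)
        cluster_id += 1
    clusters = [points[labels == c] for c in range(cluster_id)]
    return [c for c in clusters if len(c) >= min_points]

# Synthetic example: a static wall plus a small patch that shifts 1 m along x
# between the two frames.
rng = np.random.default_rng(0)
wall = rng.uniform([0.0, 0.0, 0.0], [10.0, 0.1, 3.0], size=(2000, 3))
obj_prev = rng.uniform([4.0, 2.0, 0.0], [5.0, 3.0, 1.5], size=(300, 3))
prev_cloud = np.vstack([wall, obj_prev])
curr_cloud = np.vstack([wall, obj_prev + np.array([1.0, 0.0, 0.0])])

moving = detect_moving_points(prev_cloud, curr_cloud)
objects = cluster_objects(moving)
print(f"{len(moving)} moving points grouped into {len(objects)} object candidate(s)")

In this synthetic scene the unchanged wall points cancel out and only the shifted patch is reported as moving; motion_thresh and eps would need tuning to the scan resolution and frame rate of the actual sensor.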

Similar Articles

Association and New Therapy Perspectives in Post-Stroke Aphasia with Hand Motor Dysfunction
Shuo Xu, Chengfang Liang, Shaofan Chen, Zhiming Huang and Haoqing Jiang
DOI: 10.61927/igmin141

Slip Resistance Evaluation of 10 Indoor Floor Surfaces
Cal Snow, Cody Hays, Sarah Girard, Lorri Birkenbuel, Daniel Autenrieth and David Gilkey
DOI: 10.61927/igmin199