A Review of Neural Network Approach on Engineering Drawing Recognition and Future Directions

Muhammad Syukri Mohd Yazed - Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia
Ezak Fadzrin Ahmad Shaubari - Universiti Tun Hussein Onn Malaysia, Batu Pahat, Johor, Malaysia
Moi Hoon Yap - Manchester Metropolitan University, Manchester, M15 6BH, United Kingdom


DOI: http://dx.doi.org/10.30630/joiv.7.4.01716

Abstract


Engineering Drawing (ED) digitization is a crucial aspect of modern industrial processes, enabling efficient data management and facilitating automation. However, accurate detection and recognition of ED elements pose significant challenges. This paper presents a comprehensive review of existing research on ED element detection and recognition, focusing on the role of neural networks in improving the analysis process. The study evaluates the performance of the YOLOv7 model in detecting ED elements through rigorous experimentation. The results indicate promising precision and recall rates of up to 87.6% and 74.4%, respectively, with a mean average precision (mAP) of 61.1% at an IoU threshold of 0.5. Despite these advancements, achieving 100% accuracy remains elusive owing to factors such as overlapping symbols and text, limited dataset sizes, and variations in ED formats. Overcoming these challenges is vital to ensuring the reliability and practical applicability of ED digitization solutions. By comparing the YOLOv7 results with previous research, the study underscores the efficacy of neural network-based approaches to ED element detection; further investigation is nevertheless necessary to address these challenges effectively. Future research directions include exploring ensemble methods to improve detection accuracy, fine-tuning model parameters to enhance performance, and incorporating domain adaptation techniques to adapt models to specific ED formats and domains. To enhance the real-world viability of ED digitization solutions, this work highlights the importance of testing on diverse datasets representing different industries and applications. Additionally, fostering collaboration between academia and industry will enable the development of tailored solutions that meet specific industrial needs.
Overall, this research contributes to understanding the challenges in ED digitization and paves the way for future advancements in this critical field.
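The metrics quoted above (precision, recall, mAP at IoU 0.5) follow the standard object-detection evaluation protocol: a predicted bounding box counts as a true positive only when its Intersection-over-Union (IoU) with a ground-truth box meets the threshold. The following is an illustrative sketch of that computation, not the paper's code; the greedy one-to-one matching and the box format (x1, y1, x2, y2) are simplifying assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)  # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def precision_recall(predictions, ground_truths, iou_threshold=0.5):
    """Greedily match predictions to ground truths; return (precision, recall).

    A prediction is a true positive if it matches an as-yet-unmatched
    ground-truth box with IoU >= iou_threshold (0.5 here, matching the
    mAP@0.5 setting reported in the paper).
    """
    matched = set()
    tp = 0
    for pred in predictions:
        for i, gt in enumerate(ground_truths):
            if i not in matched and iou(pred, gt) >= iou_threshold:
                matched.add(i)
                tp += 1
                break
    fp = len(predictions) - tp   # unmatched predictions
    fn = len(ground_truths) - tp  # missed ground-truth elements
    precision = tp / (tp + fp) if predictions else 0.0
    recall = tp / (tp + fn) if ground_truths else 0.0
    return precision, recall
```

In a full mAP computation, precision and recall are additionally accumulated per class over confidence-ranked predictions and the area under the resulting precision–recall curve is averaged across classes; the sketch above shows only the IoU-thresholded matching at its core.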

Keywords


Engineering drawings; ED analysis; Neural network; Object detection and recognition; Industrial practice.

