Blog


Public Conference

Voice-Controlled Car Prototype: Advancing Human-Machine Interface with NLP and Wireless Communication. Our project presents a Voice-Controlled Car prototype aimed at filling the gap in systematic evaluations of voice-controlled systems. The prototype utilizes Natural Language Processing (NLP) techniques and an Arduino UNO-interfaced Bluetooth module for wireless communication with the “AMR Voice Control” Android app. Through algorithmic processes, the system extracts and executes multiple voice commands sequentially. Extensive testing with multiple phrases demonstrates strong performance across the Bluetooth range (8.5–12 meters) and in response accuracy. The prototype extracts commands such as forward, backward, stop, left, and right from sentences and executes them one by one in sequence. Additional features include live streaming via an ESP32-CAM module and obstacle recognition using an ultrasonic sensor, enhancing its practicality in real-world scenarios. This project offers an effective and practical voice-activated solution for Human-Machine Interface (HMI) applications, prioritizing usability and practicality. Link: https://www.publications.scrs.in/chapter/978-81-955020-8-0/1
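The extraction step described above, pulling ordered movement commands out of a spoken sentence, can be sketched roughly as follows. This is a minimal illustration of the idea, not the authors' implementation; the keyword set and function name are assumptions for the example.

```python
# Illustrative sketch (not the paper's code): extract known movement
# commands from a transcribed sentence, preserving their spoken order,
# so they can be executed one by one in sequence.
COMMANDS = {"forward", "backward", "stop", "left", "right"}

def extract_commands(sentence: str) -> list[str]:
    """Return the recognized commands in the order they appear."""
    words = sentence.lower().replace(",", " ").split()
    return [w for w in words if w in COMMANDS]

print(extract_commands("Go forward, then turn left and stop"))
# → ['forward', 'left', 'stop']
```

Executing the returned list element by element would reproduce the sequential one-by-one behavior the abstract describes.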


International Conference

Enhancing Efficiency and Functionality of Voice-Controlled Cars through NLP Techniques and Additional Features. This research introduces a Voice-Controlled Car prototype that addresses the existing literature gap in systematic evaluations of voice-controlled systems. The prototype employs Natural Language Processing (NLP) techniques and an Arduino UNO-interfaced Bluetooth module to facilitate wireless communication with a dedicated Android app, “AMR Voice Control.” Through an algorithmic process, the system extracts and executes multiple voice commands sequentially. The system shows strong performance in terms of Bluetooth range (8.5–12 m). The effectiveness of the technique is demonstrated by the short processing times (2–7 ms) for command extraction and execution times ranging from 8.95 to 21.08 s. The prototype was tested with 50 statements and demonstrated solid performance; the average execution time for six commands is 20.11 s. The prototype includes extra features such as live streaming via an ESP32-CAM module and obstacle recognition using an ultrasonic sensor to increase its usefulness in real-world scenarios. The performance study uses Python and data visualization tools to visualize the relationship between execution time and the number of instructions, which offers valuable insights for future voice-controlled system optimizations. This research provides an effective and practical voice-activated solution for HMI applications, with a focus on usability and practicality. Link: https://link.springer.com/chapter/10.1007/978-981-97-6588-1_10
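The performance study mentioned above relates execution time to the number of instructions. A minimal sketch of that kind of analysis is shown below; the trial data here is purely illustrative (only loosely matching the reported 8.95–21.08 s range) and is not the paper's measured dataset.

```python
# Illustrative sketch (hypothetical data, not the paper's measurements):
# estimate how execution time grows with the number of voice commands
# using a simple least-squares slope.
import statistics

# (number_of_commands, execution_time_seconds)
trials = [(1, 8.95), (2, 11.3), (3, 13.8), (4, 16.2), (5, 18.5), (6, 21.08)]

def seconds_per_command(data):
    """Least-squares slope: average time added per extra command."""
    xs = [x for x, _ in data]
    ys = [y for _, y in data]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in data)
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

print(round(seconds_per_command(trials), 2))  # incremental seconds per command
```

Plotting the same pairs with a library such as matplotlib would give the execution-time-versus-instruction-count visualization the study describes.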


About Us

Gudsky Research Foundation empowers students with free mentorship, research opportunities, workshops, and global collaboration.


© 2025 Gudsky Research Foundation