Driver phone use detection using visual, audio and inertial sensor processing

dc.contributor.advisor: Myburgh, Hermanus Carel
dc.contributor.email: u12060055@tuks.co.za
dc.contributor.postgraduate: Viviers, Barend Jacobus
dc.date.accessioned: 2020-02-20T09:51:57Z
dc.date.available: 2020-02-20T09:51:57Z
dc.date.created: 2020-04-29
dc.date.issued: 2019
dc.description: Dissertation (MEng (Computer Engineering))--University of Pretoria, 2019
dc.description.abstract: Driver distraction is a major cause of road accidents and fatalities, particularly distraction caused by phone use while driving. Current driver phone use detection methods fall into two broad categories: vision-based and non-vision-based approaches. Several methods require additional hardware infrastructure for the system to function; approaches requiring little infrastructure are likely to see higher adoption rates. The type of output produced by each implementation also needs careful consideration. Some methods estimate phone position (i.e. whether the phone is on the driver's or passenger's side of the vehicle), while other methods detect real-time instances of phone use (i.e. whether the driver is currently talking on the phone). Current vision-based methods can only detect a driver talking on the phone while it is held next to the ear. Arguably, an even more dangerous phone use behaviour is texting, as it diverts a driver's attention for an extended period. This work focused on the implementation and combination of three driver phone use detection methods. Two of the methods provide phone localisation inside the vehicle, which is helpful as it indicates whether a phone might be within the driver's reach. The first localisation method uses audio ranging to time the arrival of audio pulses; the second uses the phone's embedded inertial sensors to track the phone from a known reference point. The third method monitors driver behaviour using a camera and identifies instances of phone use; this vision-based approach continually monitors the driver and detects talking on the phone and texting behaviour as it occurs. An approach that combines the three detection methods is proposed: the phone localisation methods are fused with driver behaviour image classification to create a more accurate and robust system.
Comprehensive experimentation was conducted to test system performance under a wide variety of conditions and circumstances. Each method was first evaluated individually; all methods were then tested collectively. Collective evaluation involved numerous vehicle trips in which factors such as harsh lighting conditions, head pose variations, increased ambient noise levels and irregular phone pick-ups were tested. The experiments were chosen to evaluate method performance in real-world conditions. All experiments combined accounted for 122 minutes of collected data, comprising 7379 samples in total: 4839 of 'no phone use', 1536 of 'talking on phone' and 1004 of 'texting'. Each sample corresponds to one second of recorded data. The experimental results show that the methods developed provide highly accurate localisation and driver behaviour identification. Audio ranging was the most accurate localisation method, with an overall average precision of 94.61% and recall of 96.22%. Phone inertial localisation achieved an average precision of 83.50% and recall of 85.59%. The vision-based method, which used a convolutional neural network (CNN) to classify driver behaviour, yielded an average precision of 91.47% and recall of 95.04%. CNN image classification combined with audio ranging localisation was even more accurate at detecting driver phone use behaviour, obtaining an overall average precision of 95.89% and recall of 95.29% when classifying 'talking on phone' and 'texting' behaviour. Combining detection methods thus increases system accuracy and robustness. A comparison of the methods developed in this work with those in previous works shows that the new implementations provide several benefits and performance increases. The proposed solutions further the development of driver phone use detection systems, contributing to the ultimate goal of lowering road accidents and fatalities caused by driver distraction.
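The audio-ranging localisation described in the abstract times the arrival of audio pulses to place the phone inside the vehicle. The dissertation's exact protocol is not reproduced here, but the core idea can be sketched as a time-difference-of-arrival (TDOA) left/right classifier. The speaker layout, function names and example timings below are illustrative assumptions, not the author's implementation.

```python
# Hypothetical sketch of audio-ranging phone localisation: assume the
# vehicle's left and right speakers emit synchronised pulses and the
# phone's microphone records their arrival times. (Assumption for
# illustration only; the dissertation's actual signalling may differ.)

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C


def side_of_vehicle(t_left: float, t_right: float) -> str:
    """Classify the phone as driver- or passenger-side from the arrival
    times (in seconds) of the left- and right-speaker pulses.

    Assumes a right-hand-drive vehicle (driver on the right), as in
    South Africa, where this work was conducted.
    """
    # Path-length difference between the two speakers and the phone.
    delta = (t_left - t_right) * SPEED_OF_SOUND
    # Positive delta: the left pulse travelled further, so the phone
    # sits closer to the right-hand (driver's) side.
    return "driver" if delta > 0 else "passenger"


# Example: the left pulse arrives 2 ms after the right pulse, so the
# phone is nearer the right-hand side of the cabin.
print(side_of_vehicle(0.012, 0.010))  # prints "driver"
```

A 2 ms arrival difference corresponds to roughly 0.69 m of extra path length, which is on the order of a car cabin's width; in practice such a system must also handle multipath reflections and ambient noise, factors the dissertation's experiments explicitly varied.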
dc.description.availability: Unrestricted
dc.description.degree: MEng (Computer Engineering)
dc.description.department: Electrical, Electronic and Computer Engineering
dc.description.sponsorship: This work is based on the research supported wholly / in part by the National Research Foundation (NRF) of South Africa (Grant Number: UID 111723). This research was also supported by Telkom South Africa and Bytes Universal Systems via the Telkom Centre for Connected Intelligence (CCI) at the University of Pretoria.
dc.identifier.citation: *
dc.identifier.other: A2020
dc.identifier.uri: http://hdl.handle.net/2263/73454
dc.language.iso: en
dc.publisher: University of Pretoria
dc.rights: © 2019 University of Pretoria. All rights reserved. The copyright in this work vests in the University of Pretoria. No part of this work may be reproduced or transmitted in any form or by any means, without the prior written permission of the University of Pretoria.
dc.subject: Phone use detection
dc.subject: Phone localisation
dc.subject: Driver distraction
dc.subject: Behaviour classification
dc.subject: Talking on the phone and texting detection
dc.subject: UCTD
dc.title: Driver phone use detection using visual, audio and inertial sensor processing
dc.type: Dissertation

Files

Original bundle

Name: Viviers_Driver_2019.pdf
Size: 12.71 MB
Format: Adobe Portable Document Format
Description: Dissertation

License bundle

Name: license.txt
Size: 1.75 KB
Format: Item-specific license agreed upon to submission
Description: