Accurate detection of pedestrian lanes is crucial for vision-impaired people to navigate freely and safely. Current deep learning methods achieve reasonable accuracy at this task but lack practicality for real-time pedestrian lane detection because of a suboptimal trade-off between accuracy, speed, and model size. Hence, an optimized deep neural network (DNN) for pedestrian lane detection is required. Designing a DNN from scratch is a laborious task that requires significant experience and time. This paper proposes a novel neural architecture search (NAS) algorithm, named MSD-NAS, to automate this task. The proposed method designs an optimized deep network with multi-scale input branches, allowing the derived network to use both local and global contexts for its predictions. The search is also performed in a large and generic space that includes many existing hand-designed network architectures as candidates. To further boost performance, we propose a Short-ter
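The multi-scale input branches described above can be illustrated with a minimal sketch: the input image is downsampled into a pyramid, and each branch receives one scale, so coarser branches see more global context. The helper names (`avg_pool2d`, `multiscale_inputs`) and the choice of average pooling with a factor of two per branch are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def avg_pool2d(img, k):
    """Average-pool an (H, W, C) image by integer factor k (H, W divisible by k)."""
    h, w, c = img.shape
    return img.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))

def multiscale_inputs(img, num_branches=3):
    """Build an input pyramid for parallel branches: branch i receives the
    image downsampled by 2**i, so deeper branches capture global context
    while branch 0 keeps full local detail."""
    return [img if i == 0 else avg_pool2d(img, 2 ** i)
            for i in range(num_branches)]

img = np.random.default_rng(0).random((64, 64, 3))
pyramid = multiscale_inputs(img)
# pyramid shapes: (64, 64, 3), (32, 32, 3), (16, 16, 3)
```

In a derived network, each scale would feed its own branch, with the branch outputs fused (e.g. by upsampling and concatenation) before the final prediction layer.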
Deep learning (DL) is a class of machine learning algorithms that relies on deep neural networks (DNNs) for computation. Unlike traditional machine learning algorithms, DL can learn directly and effectively from raw data. Hence, DL has been applied successfully to many real-world problems. When applying DL to a given problem, the primary task is to design the optimal DNN. This task relies heavily on human expertise, is time-consuming, and requires many trial-and-error experiments.
This thesis aims to automate the laborious task of designing the optimal DNN by exploring the neural architecture search (NAS) approach. Here, we propose two new NAS algorithms for two real-world problems: pedestrian lane detection for assistive navigation and hyperspectral image segmentation for biosecurity scanning. We also introduce a new dataset-agnostic predictor of neural network performance, which can be used to speed up NAS algorithms that require the evaluation of candidate DNNs.
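The idea of a performance predictor can be sketched in a few lines: encode each candidate architecture as a fixed-length feature vector, fit a cheap regression model on a small set of (architecture, measured accuracy) pairs, and then rank unseen candidates without training them. The feature encoding, the ridge-regression choice, and all names below are illustrative assumptions rather than the predictor actually proposed in the thesis.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression: w = (X^T X + lam I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Toy setup: each candidate DNN is encoded as a 4-dim feature vector
# (e.g. depth, width, #params, #skip-connections); targets are the
# validation accuracies measured for a small training set of candidates.
rng = np.random.default_rng(0)
X = rng.random((50, 4))                                  # 50 evaluated candidates
y = X @ np.array([0.3, 0.1, 0.4, 0.2]) + 0.01 * rng.standard_normal(50)

w = fit_ridge(X, y)
scores = X @ w   # predicted accuracies, usable to rank candidates cheaply
```

During search, only the top-ranked candidates under `scores` would be trained in full, which is where the speed-up for evaluation-heavy NAS algorithms comes from.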
Pedestrian lane detection is a crucial task in assistive navigation for vision-impaired people. It can provide information on walkable regions, help blind people stay on the pedestrian lane, and assist with obstacle detection. An accurate and real-time lane detection algorithm can improve travel safety and efficiency for the visually impaired. Despite its importance, pedestrian lane detection in unstructured scenes for assistive navigation has not attracted sufficient attention in the research community. This paper aims to provide a comprehensive review and an experimental evaluation of methods that can be applied for pedestrian lane detection, thereby laying a foundation for future research in this area. Our study covers traditional and deep learning methods for pedestrian lane detection, general road detection, and general semantic segmentation. We also perform an experimental evaluation of the representative methods on a large benchmark dataset that is specifically created for pedes
Automatic pedestrian lane detection is a challenging problem of great interest in assistive navigation and autonomous driving. Such a detection system must cope well with variations in lane surface and illumination so that a vision-impaired user can navigate safely in unknown environments. This paper proposes a new lightweight Bayesian Gabor Network (BGN) for camera-based detection of pedestrian lanes in unstructured scenes. In our approach, each Gabor parameter is represented as a learnable Gaussian distribution via variational Bayesian inference. For the safety of vision-impaired users, the network provides, in addition to the output segmentation map, two full-resolution maps of aleatoric and epistemic uncertainty as well-calibrated confidence measures. Our Gabor-based method has fewer weights than standard CNNs; it is therefore less prone to overfitting and requires fewer operations to compute. Compared to the state-of-the-art semantic segmentatio
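The core idea of modelling a Gabor parameter as a learnable Gaussian can be sketched as follows: each parameter keeps a mean and a (softplus-transformed) standard deviation, samples are drawn with the reparameterisation trick, and epistemic uncertainty is estimated as the variance of the filter response over Monte-Carlo samples. The class and function names, the softplus parameterisation, and the single-parameter example are assumptions for illustration, not the BGN implementation itself.

```python
import numpy as np

def gabor_kernel(size, theta, sigma, lam, psi=0.0, gamma=0.5):
    """Standard 2D Gabor filter: Gaussian envelope times cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / lam + psi)
    return envelope * carrier

class BayesianGaborParam:
    """A Gabor parameter as a learnable Gaussian q(w) = N(mu, softplus(rho)^2).
    Sampling uses the reparameterisation trick, so in a real training loop
    gradients could flow through mu and rho."""
    def __init__(self, mu, rho, seed=0):
        self.mu, self.rho = mu, rho
        self.rng = np.random.default_rng(seed)

    @property
    def sigma(self):
        return np.log1p(np.exp(self.rho))   # softplus keeps the std positive

    def sample(self):
        eps = self.rng.standard_normal()    # w = mu + sigma * eps
        return self.mu + self.sigma * eps

# Monte-Carlo estimate of epistemic uncertainty: per-pixel variance of the
# kernel across samples of the (uncertain) orientation parameter theta.
theta_q = BayesianGaborParam(mu=np.pi / 4, rho=-2.0)
kernels = np.stack([gabor_kernel(7, theta_q.sample(), sigma=2.0, lam=4.0)
                    for _ in range(20)])
epistemic = kernels.var(axis=0)   # 7x7 uncertainty map for this filter
```

In the full network this sampling would be applied to every Gabor parameter of every filter, and the per-pixel variance of the segmentation output over samples would yield the full-resolution epistemic uncertainty map.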