From real-time event forecasting to visual analysis tasks, state-of-the-art machine learning algorithms exhibit unmatched performance. Furthermore, with the ongoing traction of embedded computing, the deployment of machine learning algorithms on mobile devices is receiving increasing attention. There are numerous practical applications in which hand-held devices running machine learning methods are more useful because of their compact size and integrated resources. However, to realize ML methods on embedded devices, either the chosen algorithm must be computationally inexpensive or there must be an efficient way to implement a state-of-the-art algorithm on a less powerful embedded device. In this paper, different approaches for reducing the computational complexity of a machine learning-based computer vision application are presented, which can help make other such algorithms applicable on embedded devices. Results show that exploiting the hardware architecture can further improve the performance of an existing framework.