The rapid development of mobile technology and the emergence of cloud resources have freed mobile users from the constraints imposed by their devices' limited resources. Over the past decade, many efforts have been made to take advantage of these resource-rich clouds. One common approach is to offload computation from mobile devices to resourceful servers. Recently, GPUs have also received much attention from the scientific community: a GPU is highly optimized for throughput and can be used to accelerate many types of applications.
This paper presents the design and implementation of an adaptive computation offloading framework based on OpenCL. The system transparently offloads OpenCL workloads from mobile devices to any available OpenCL-compatible device. In addition, it decides whether a given workload should be offloaded and, if so, to which device. This adaptive method takes into account the network transfer speed and the intrinsic characteristics of the kernel that impact its execution time. Our evaluation shows that the system adapts to several different environments and achieves up to a 50.3x speedup over local execution.