

Predicting a user's next location has become an important requirement in various location-based applications and services: given a user's mobility history, a model returns the most probable next location, enabling proactive offerings or services. Prediction models in the literature achieve satisfactory results, but they ignore an important fact: the values and representations of some input variables can be far more relevant to the final location prediction than the rest. In this paper, we study the impact of space-time representation learning on location prediction models by evaluating different architectural configurations. First, we evaluate the impact of different data inputs on the model's final prediction performance; based on this, we propose several prediction models that vary in the number and type of input features. Second, we investigate the impact of input representation techniques on prediction performance, comparing embedding representation learning with one-hot vector representation (i.e. static vectors). We conduct thorough experiments with all the proposed models on two real-world datasets, GeoLife and Gowalla.
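To make the contrast between the two input representations concrete, the sketch below shows a one-hot (static) encoding of a discrete location ID next to a dense embedding lookup. This is an illustrative example, not the paper's implementation; the vocabulary size, embedding dimension, and function names are assumptions.

```python
import numpy as np

# Assumed setup: locations are discretized into a vocabulary of IDs
# (e.g. grid cells or check-in venues).
NUM_LOCATIONS = 1000   # hypothetical vocabulary size
EMBED_DIM = 32         # hypothetical embedding dimensionality

def one_hot(location_id, size=NUM_LOCATIONS):
    """Static representation: a sparse indicator vector of length `size`."""
    vec = np.zeros(size)
    vec[location_id] = 1.0
    return vec

# Embedding representation: a dense lookup table whose rows would be
# trained jointly with the prediction model (initialized randomly here).
rng = np.random.default_rng(0)
embedding_table = rng.normal(scale=0.01, size=(NUM_LOCATIONS, EMBED_DIM))

def embed(location_id):
    """Learned representation: a dense EMBED_DIM vector per location."""
    return embedding_table[location_id]

# A one-hot vector selects one row of the table, so the embedding lookup
# is equivalent to multiplying the one-hot vector by the table.
i = 42
assert np.allclose(one_hot(i) @ embedding_table, embed(i))
```

The one-hot vector grows with the vocabulary and carries no notion of similarity between locations, whereas the embedding is low-dimensional and its geometry is learned from the data, which is the distinction the paper's second set of experiments evaluates.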