Social media users express their feelings, experiences, ideas, and stories with little or no regard for the conventions of traditional grammar. Online discourse, by its very nature, is rife with transliterated text alongside code-mixing and code-switching. Transliteration features heavily because romanized text is easier to input on standard keyboards than native scripts. Given its ubiquity, transliterated text is a critical area of study for ensuring that NLP models perform well in real-world scenarios. In this paper, we analyze the performance of various language models, tiny large language models (tiny LLMs), classical ML models built on TF-IDF and Bag-of-Words feature extraction, and zero-shot classification with ChatGPT on romanized/transliterated social media text. We chose the tasks of sentiment analysis and offensive language identification and carried out experiments on six datasets spanning three languages: Bangla, Hindi, and Arabic. To our surprise, we found across multiple datasets that the non-neural methods compete closely with fine-tuned transformer-based mono- and multilingual language models, tiny LLMs, and ChatGPT on classification tasks over transliterated text. These classical models train in seconds using only a fraction of the computing power, and thus the carbon footprint, required by language models. We demonstrate that TF-IDF and BoW-based classifiers achieve performance within roughly 3% of fine-tuned LMs and can therefore be considered strong baselines for NLP tasks on transliterated text. Additionally, we investigated mitigation strategies such as translation and augmentation with ChatGPT, as well as masked language modelling for dataset-specific pretraining of language models. Depending on the dataset and language, these mitigation techniques yield a further 2-3% improvement in accuracy and macro-F1 over the baseline.
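
To make the TF-IDF baseline concrete, the sketch below shows one plausible instantiation using scikit-learn: character n-gram TF-IDF features feeding a logistic regression classifier. The feature configuration, classifier choice, and example data are illustrative assumptions, not the exact setup reported here; character n-grams are shown because they are often robust to the spelling variation that romanization introduces.

```python
# Minimal sketch of a TF-IDF classification baseline for transliterated text.
# Illustrative only: the n-gram range, analyzer, classifier, and toy data are
# assumptions, not the paper's reported configuration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical romanized Bangla examples with binary sentiment labels.
train_texts = [
    "khub bhalo laglo",      # positive
    "ekdom bekar chilo",     # negative
    "darun experience",      # positive
    "baje movie somoy nosto" # negative
]
train_labels = [1, 0, 1, 0]

# Character n-grams within word boundaries help absorb romanization
# variants (e.g. "bhalo" vs. "valo" vs. "bhaalo").
model = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(train_texts, train_labels)

print(model.predict(["bhalo laglo na"]))
```

A pipeline like this trains in seconds on CPU, which is the practical appeal of such baselines relative to fine-tuning a transformer.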