Ebook: Advances in Parallel Computing Algorithms, Tools and Paradigms
Recent developments in parallel computing are providing improved solutions for handling data across many fields of application. These newer, innovative ideas offer the technical support necessary for better-informed decision-making, while also dealing more efficiently with the huge volumes of data now involved.
This book presents the proceedings of ICAPTA 2022, the International Conference on Advances in Parallel Computing Technologies and Applications, hosted as a virtual conference from Bangalore, India, on 27 and 28 January 2022. The aim of the conference was to provide a forum for the sharing of knowledge about various aspects of parallel computing in communications systems and networking, including cloud and virtualization solutions, management technologies and vertical application areas. The conference also provided a premier platform for scientists, researchers, practitioners and academicians to present and discuss their most recent innovations, trends and concerns, as well as the practical challenges encountered in this field. More than 300 submissions were received for the conference, from which the 91 full-length papers presented here were accepted after review by a panel of subject experts. Topics covered include parallel computing in communication, machine learning intelligence for parallel computing and parallel computing for software services in theoretical and practical aspects.
Providing an overview of recent developments in the field, the book will be of interest to all those whose work involves the use of parallel computing technologies.
This book presents the proceedings of the Virtual International Conference on Advances in Parallel Computing Technologies and Applications (ICAPTA 2022), held on the 27th and 28th of January 2022 in Bangalore, India, at T. John Institute of Technology, jointly with KAS Innovative India.
The aim of the conference is to provide a forum for sharing knowledge about various aspects of parallel computing in communications systems and networking, including cloud and virtualisation solutions, management technologies and vertical application areas. Recent developments in parallel computing across various fields of application are being used to handle the huge volumes of data involved more quickly. These newer, innovative ideas, backed by adequate technical support in computer communication, help to improve informed decision-making. The conference also provides a premier platform for scientists, researchers, practitioners and academicians to present and discuss their most recent innovations, trends and concerns, as well as the practical challenges encountered in this field.
ICAPTA 2022 received more than 300 submissions, from which 91 full-length papers were accepted based on the reviews and comments of subject experts. The topics include parallel computing in communication, machine learning intelligence for parallel computing, and parallel computing for software services, in both theoretical and practical aspects. The main programme of the two-day conference included a principal guest address, two invited talks, three keynote talks and seven technical sessions for paper presentations.
We hope that all participants will take the opportunity not only to exchange their knowledge, experiences and ideas, but also to make useful contacts for their ongoing research in new directions.
Finally, we would like to thank all the authors, and to express our heartfelt thanks to the Chairman of the T. John Group of Institutions, Dr. Thomas P. John; the Director of TJIT, Dr. P. Suresh Venugopal, Principal of T. John Institute of Technology; the Administrative Officer, Heads and Faculty members of T. John Institute of Technology; and all the committee members, reviewers and session chairs, for their support, enthusiasm and time, all of which helped to make ICAPTA 2022 a successful conference during this time of pandemic. We would also like to take this opportunity to express our sincere thanks to V. Kaliraj, Director of KAS Innovative India, Chennai, for co-organizing this event, and to thank Prof. Dr. G. R. Joubert, Book Series Editor, Advances in Parallel Computing, IOS Press, for his support and tireless effort in preparing these conference proceedings for publication.
The goal of this research is to illustrate how the fast tuning of the proposed modified PID controller can be used to regulate the motor’s speed and keep it constant under load fluctuations. As a result, the PID regulator improves the overall performance of the BLDC motor. Based on the simulation results, the PID controller’s capability may be further enhanced for better control. A simulation model of the BLDC motor is created in MATLAB. A PID controller may enhance the performance of BLDC motors by lowering overshoot, rise time, and steady-state error.
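As a rough sketch of the control law this abstract refers to, the discrete PID update below regulates a crude first-order speed model in Python. The gains, setpoint and plant model are illustrative assumptions, not values from the paper, which works with a MATLAB simulation of the BLDC motor.

```python
# Minimal discrete PID controller sketch (illustrative gains and plant model).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt          # accumulate error over time
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Crude first-order plant: the speed relaxes toward the control input.
pid = PID(kp=2.0, ki=0.1, kd=0.01, dt=0.1)
speed = 0.0
for _ in range(2000):
    u = pid.update(setpoint=1500.0, measured=speed)
    speed += 0.05 * (u - speed)   # assumed plant dynamics

print(round(speed, 1))   # settles near the 1500 rpm setpoint
```

The integral term is what removes the steady-state error the abstract mentions; with a purely proportional controller this toy plant would settle well below the setpoint.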
Mountain biking is an extreme sport with unpredictable terrain and several dangerous risks associated with it. Even the soundest minds might need external stimuli to alert them to be more careful at the moment of a potential fall. The proposed work involves developing an algorithm capable of detecting falls during mountain biking. Machine learning classifier algorithms are used for fall detection. Existing fall detection algorithms are designed for environments with limited movement. Camera-based fall detection invades privacy and works only in fixed environments with predictable dangers; another approach uses sensors attached to the human body, which obstruct the wearer’s activities. The proposed Ensembled Boosting Model (EBM) classifier overcomes these pitfalls to provide a highly accurate system for detecting falls in open and unpredictable environments. The algorithm proposed in this paper detects falls from real-time data, such as acceleration and gyroscope values, for any user. In the future, this algorithm can serve as a precursor to a real-time fall prediction device usable by anyone, in any environment.
Direct Sequence Code Division Multiple Access (DS-CDMA) is a scheme where several users transmit their data simultaneously over a common wireless communication channel, by spreading each data stream with a distinct code. At the receiver, the individual data streams are detected by appropriate decoding. In this paper, a new smart receiver is proposed for detecting DS-CDMA signals based on a multi-layer Feed Forward Neural Network (FFNN). The proposed receiver detects the transmitted data when the received signal is distorted by channel noise, the near-far effect and Rayleigh fading. The channel state information is captured indirectly during the training of the FFNN, so conventional channel state estimation using pilot signals or training sequences is eliminated. Experimental results show that the performance of the proposed receiver, in terms of detection accuracy, is superior to similar competitive demodulators.
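To illustrate the spreading and despreading the abstract describes, here is a toy two-user DS-CDMA simulation in Python. The Walsh codes, bit patterns and noise level are made-up values; the paper’s receiver replaces the classical correlator shown here with a trained FFNN.

```python
import numpy as np

# Toy DS-CDMA link: two users spread their bits with orthogonal Walsh codes,
# the chips are summed on a shared channel with noise, and a user's bits are
# recovered by correlating the received chips with that user's code.
rng = np.random.default_rng(0)
codes = np.array([[1, 1, 1, 1], [1, -1, 1, -1]])   # Walsh codes, length 4
bits = np.array([[1, -1, 1], [-1, -1, 1]])         # 3 bits per user

# Spread: each bit becomes a run of 4 chips; both users share the channel.
chips = sum(np.repeat(bits[u], 4) * np.tile(codes[u], 3) for u in range(2))
received = chips + 0.3 * rng.standard_normal(chips.size)   # AWGN channel

# Despread user 1: correlate each 4-chip block with its code, take the sign.
blocks = received.reshape(3, 4)
detected = np.sign(blocks @ codes[1])
print(detected)   # recovers user 1's bits
```

Because the two codes are orthogonal, the other user’s contribution cancels in the correlation, which is the property the neural receiver has to learn implicitly under fading and near-far distortion.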
Diabetes has become one of the most fatal diseases as a result of lifestyle changes, food habits, and decreased physical exercise. Diabetes is believed to afflict 422 million people globally, according to the latest WHO estimates. Type II diabetes is the more dangerous category, as it is characterized by the body’s insulin resistance. Furthermore, Type II diabetes has been linked to complications of the kidneys, eyes, and heart. A large number of scientists are also investigating a possible link between diabetes and cancer. We present an overview of such findings, as well as our cancer research efforts, in this report. Dimensionality reduction, classification, and clustering are applied in the proposed work for comparison with existing classifiers. The PIMA Indian diabetes dataset and the Stanford AIM-94 dataset are used as benchmark datasets for the experiments.
Nowadays, solar spectral irradiances are modelled using solar activity indices, which are used to identify the solar energy absorbed in the environment. This paper devises a Deep LSTM model for predicting solar activity using the Sunspot Number (SSN) and Solar Radio Flux (SRF). The processing steps involved in solar activity prediction are technical indicator extraction and solar activity prediction. In this paper, the solar indices, acquired from the solar cycle progression dataset, are taken as the input for solar activity prediction. Technical indicators, such as the Simple Moving Average (SMA), Average True Range (ATR), Relative Strength Index (RSI), William’s %R, Stochastic %D, and Commodity Channel Index (CCI), are extracted to attain better prediction performance. In addition, solar activity prediction is carried out using the Deep LSTM based on SSN and SRF. The Deep LSTM is an effective deep learning technique that is widely utilized for prediction purposes owing to its strong predictive ability. Moreover, the experimental results demonstrate that the devised Deep LSTM attained minimum Mean Absolute Error (MAE), Mean Squared Error (MSE) and Root Mean Squared Error (RMSE) values of 1.186, 2.869 and 1.693, respectively.
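Two of the technical indicators named above can be sketched as follows. The sunspot-number series is made up for illustration, and the window lengths are conventional defaults, not values taken from the paper.

```python
def sma(series, window):
    """Simple Moving Average of the last `window` values."""
    return sum(series[-window:]) / window

def rsi(series, period=14):
    """Relative Strength Index over the last `period` changes."""
    deltas = [b - a for a, b in zip(series, series[1:])][-period:]
    gains = sum(d for d in deltas if d > 0)
    losses = -sum(d for d in deltas if d < 0)
    if losses == 0:
        return 100.0                      # all gains: maximally overbought
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

# Illustrative daily sunspot-number observations.
ssn = [30, 32, 31, 35, 38, 37, 40, 42, 41, 45, 44, 47, 49, 48, 52]
print(sma(ssn, 5))         # → 48.0, average of the last five observations
print(round(rsi(ssn), 1))  # → 84.4, a strongly rising series
```

Indicators like these turn a raw index series into smoothed, trend-sensitive features, which is why they tend to help a sequence model such as the Deep LSTM.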
Electronic health records (EHRs) are both important and sensitive, since they store crucial data that is routinely shared across various parties, such as clinics, pharmacies, and medical practices. These health files require increased safety and confidentiality to prevent leakage or misuse by a third party. There have been instances of security breaches of patients’ electronic medical records. Blockchain technology may be useful in providing privacy for these documents. The fundamental goal of the work described in this article is to improve the security, privacy, management, and efficient sharing of medical records. In this study, we present a complete assessment of several strategies for safeguarding EMR privacy using blockchain technology.
On the Internet, web applications are served from a centralized location, i.e. a server, for higher maintainability. However, in a centralized architecture, if a server failure or crash occurs, the web applications cannot be served to end-users until the server comes back online. In addition, in the existing centralized architecture for web hosting services, the integrity of the hosted websites relies entirely on third-party applications that check for possible threats in the system. In order to provide data integrity within the system and to overcome the above-mentioned single point of failure, we propose a decentralized solution for hosting web applications, which provides greater data availability to end-users and maintains the integrity of the data. The proposed model makes use of the InterPlanetary File System (IPFS) for storing and retrieving web applications, which provides high availability and reliability. In addition, the proposed model uses blockchain technology for authenticity and confidentiality. Smart contracts are deployed on the Ethereum blockchain, which aids the service provider in managing the hosting service system. The proposed model also decreases the time taken to transfer files over IPFS using an optimal path-finding algorithm. The proposed algorithm has a lower time complexity than the Bitswap protocol used in IPFS. The use of blockchain with IPFS cumulatively provides better authenticity via Ethereum smart contracts, which reduces risk and failure.
Deep learning has seen explosive growth in our everyday lives. As one of the leading machine learning tools, it contributes substantially to image analysis and computer vision. It is considered especially valuable for image analysis, in particular for detecting fetal cardiac abnormalities in a parallel computing environment. Screening for congenital cardiac disease (CCD) is challenging in terms of diagnostic accuracy when performed manually. Hence, in this proposed work, an optimized ultrasound image (USI)-based Artificial Neural Network (ANN), a deep learning tool, is shown to distinguish the differing prognoses of cardiomyopathies and to predict perinatal mortality from congenital cardiac disease (CCD). Fetal cardiac parameters are evaluated using the myocardial performance index (MPI), a biomarker of global cardiac function that provides statistics on various periods of the diastolic and systolic phases. This paper also discusses potential trends in the application of deep learning to ultrasound image analysis for detecting and predicting abnormalities in fetal cardiac function.
ZeroTouch is an emerging sensing technology for touch-free interaction through air writing. The main objective of this paper is to use a pen in motion mode of interaction for efficient online gestures. The virtual pen is detected in the webcam feed using contour detection, and eraser functionality is provided; OpenCV technologies make this considerably easier. To display text from speech in a clear format, a whiteboard function is added using the Tk GUI toolkit. A Faster Region-based Convolutional Neural Network technique is proposed for detecting text in images; the detected image is then converted to text and saved in various image formats.
Globally, numerous preventive measures were taken to counter the COVID-19 epidemic. Face masks and social distancing were two of the most crucial practices for limiting the spread of the novel virus. Using YOLOv5 and a pre-trained framework, we present a novel method of complex mask detection. The primary objective is to detect a variety of complex face masks at high rates, achieving accuracy of about 94% to 99% on real-time video feeds. The proposed methodology also implements a structure for detecting social distance, based on a YOLOv5 architecture, for controlling, monitoring and reducing physical interaction among people in day-to-day environments. To handle the various crowd datasets captured from above, the framework was trained on human contrasts. Once people are spotted in the video, the Euclidean distance between them is determined from the pixel information and compared against the violation threshold. The results show that this social-distancing architecture provides effective monitoring and alerting.
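The distance check described above can be sketched as follows. The centroid coordinates and the violation threshold are made-up values; in the paper they would come from YOLOv5 detections and pixel calibration.

```python
import math

def violations(centroids, threshold):
    """Return index pairs whose Euclidean pixel distance is below threshold."""
    pairs = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if math.dist(centroids[i], centroids[j]) < threshold:
                pairs.append((i, j))
    return pairs

# Bounding-box centers of detected people, in pixels (illustrative).
people = [(100, 200), (130, 240), (400, 220)]
print(violations(people, threshold=75))   # → [(0, 1)]
```

A real deployment would convert the pixel threshold to a physical distance using camera calibration, since the same pixel gap corresponds to different real-world distances at different depths.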
The new form of network labeled IoT is relatively recent and has become a buzzword in this decade. The network architecture lets any smart device connect loosely to the Internet under the Internet Protocol. However, the other side of this network allows intruders to access it with little effort. Intrusion here covers practices in which other devices connected to an IoT network, reachable from external networks through a gateway, mount attacks; conversely, a compromised IoT network may communicate with external devices or networks to carry out intrusions. In this regard, intrusion detection through machine learning demands significant feature selection and optimization techniques. This manuscript demonstrates the scope of distribution diversity assessment methods from traditional statistical practice for feature selection and optimization. Its contribution, “Distribution Diversity Method of Feature Optimization (DDMFO) to Protect Intrusion Practices on IoT Networks”, uses the Dice Similarity Coefficient to pick the optimal features for training the classifier. The classifier adopted in this contribution is Naïve Bayes, trained on the features selected by the proposal. The experimental results confirm the significance of the method, which demonstrates substantial accuracy and a minimal false-alarm rate.
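The Dice Similarity Coefficient at the heart of the feature-selection step can be sketched for binary feature vectors as follows; the example vectors are purely illustrative.

```python
def dice(a, b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for binary vectors a, b."""
    intersection = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * intersection / total if total else 1.0

# Two binarized feature occurrence vectors (illustrative).
f1 = [1, 0, 1, 1, 0, 1]
f2 = [1, 1, 1, 0, 0, 1]
print(dice(f1, f2))   # → 0.75, i.e. 2*3 / (4 + 4)
```

A coefficient near 1 indicates two features carry largely overlapping information, so a selection procedure based on it can drop one of the pair and keep the feature set diverse.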
Internet of Things (IoT) based Wireless Sensor Networks (WSN) comprise several miniaturized sensor nodes that are limited in terms of transmission range, available battery power and data rate. These nodes work collaboratively to monitor physical/environmental conditions and provide appropriate control actions. Congestion has become one of the key issues in WSN because of the increase in multimedia traffic and IoT proliferation in WSN. A higher traffic load can easily lead to link-level or node-level congestion in the network. A joint layer 4/layer 3 driven distributed congestion control algorithm is proposed to detect congestion and then avoid it through alternate routing. In this work, a congestion detection and avoidance method is presented which uses the Random Early Discard (RED) scheme instead of a drop-tail queue. This scheme computes a threshold to avoid further congestion and uses the Location Aided Energy Efficient Routing (LAEER) protocol to find an alternate path for routing data packets. As a result, this approach achieves load balancing, as it spreads traffic throughout the network. Simulation results also show that LAEER outperforms AODV in terms of Quality of Service (QoS) metrics such as Packet Delivery Ratio (PDR), since it proactively avoids further packet drops through alternate forwarding-neighbor selection.
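A minimal sketch of the RED decision rule mentioned above: an exponentially weighted average of the queue length is compared against minimum and maximum thresholds, with probabilistic early drops in between. The thresholds, queue weight and maximum drop probability below are conventional illustrative values, not parameters from the paper.

```python
import random

MIN_TH, MAX_TH, W_Q, MAX_P = 5, 15, 0.2, 0.1

def red_step(avg, queue_len, rng=random.random):
    """Update the average queue size and decide whether to drop the packet."""
    avg = (1 - W_Q) * avg + W_Q * queue_len           # EWMA of queue length
    if avg < MIN_TH:
        drop = False                                   # accept: light load
    elif avg >= MAX_TH:
        drop = True                                    # drop: heavy congestion
    else:
        p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
        drop = rng() < p                               # probabilistic early drop
    return avg, drop

avg = 0.0
for q in [2, 3, 4, 20, 20, 20, 20, 20, 20, 20]:        # queue-length samples
    avg, drop = red_step(avg, q)
print(round(avg, 2), drop)   # average crosses MAX_TH, so the packet drops
```

Because the averaging smooths out bursts, RED reacts to sustained congestion rather than momentary spikes, which is what lets the protocol switch to an alternate LAEER path before the queue actually overflows.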
Deep learning based intrusion detection systems have acquired prominence in digital protection frameworks. The fundamental role of such a system is to protect the ICT infrastructure through an intrusion detection system (IDS). Intelligent solutions are essential for handling the complexity and identification of new attack types. Intelligent frameworks such as deep learning and machine learning have been widely introduced for their ability to deal effectively with complex, layered data. An IDS faces various types of known and unknown attacks, and there is room to improve attack detection in real-world scenarios. Thus, this paper proposes a hybrid deep learning technique that combines a convolutional neural network model with a Long Short-Term Memory model to improve performance in recognizing anomalous packets in the network. Experiments were carried out with the NSL-KDD dataset, and the performance is compared with traditional machine and deep learning models in terms of common metrics such as accuracy, sensitivity and specificity.
Spoof news is fraudulent content meant to misguide the reader about an event with ill intent. In this article, a reactive technique using deep learning is proposed to deal with it effectively. Spoof news items are innumerable on the microblog Twitter and have a wide range of harmful effects, causing chaos and hoaxes among readers, who are frequently misled about the issues involved. At present, automatic detectors of fake news are ineffective and few in number. This motivated us to develop a smart detector based on a deep learning mechanism. One way of dealing with this issue is to maintain a “blacklist” of origins and composers of counterfeit news, which requires examining all troublesome instances of origins and creators incrementally. To meet this need, we built a classifier based on a deep learning mechanism that studies the linguistic and network-account aspects of Twitter news and distinguishes spoof items from legitimate ones. We set up a deep learning model that takes both legitimate and spoof news items as input and learns by analyzing their constructs, then performs binary classification of the news effectively, preventing users from being misled by fakes.
Roller Bearing (RB) is one of the critical mechanical components in rotating machinery. Failure of a bearing may cause the fatal breakdown of an entire machine and inestimable financial losses due to its continuous rotation. Hence, it is important to diagnose faults accurately at an early stage so as to enable predictive maintenance of the machine before malfunction. In recent developments, Machine Learning (ML) has dramatically changed the way we predict, analyze and interpret results. In this paper, a diagnostic technique is proposed to identify bearing faults employing ensemble learning algorithms such as Bagging, Extra Trees and Gradient Boosting classifiers. The proposed method includes 1) pre-processing of vibration data, 2) extracting statistical features such as mean, standard deviation, kurtosis, crest factor and Mel-Frequency Cepstral Coefficient (MFCC) features, and 3) training the ensemble learning algorithms to classify the various faults based on the extracted features. For experimentation, vibration data is collected from the Case Western Reserve University (CWRU) Laboratory to diagnose 12 different fault types associated with Inner Race (IR), Outer Race (OR), ball faults and normal bearings of varying diameters. Results show that the ensemble learning algorithms perform better with MFCC features than with statistical features.
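A few of the statistical features listed in step 2 can be computed over a vibration window as follows. The sample window is made up for illustration; real windows would come from the CWRU accelerometer recordings.

```python
import math
import statistics

def features(window):
    """Statistical features of one vibration window."""
    mean = statistics.fmean(window)
    std = statistics.pstdev(window)
    rms = math.sqrt(statistics.fmean([x * x for x in window]))
    peak = max(abs(x) for x in window)
    crest_factor = peak / rms                 # peak-to-RMS ratio
    # Excess kurtosis: 4th standardized moment minus 3.
    kurtosis = statistics.fmean([((x - mean) / std) ** 4 for x in window]) - 3
    return {"mean": mean, "std": std, "crest": crest_factor, "kurt": kurtosis}

# Illustrative window with one impulsive spike, as a bearing defect produces.
window = [0.1, -0.2, 0.15, -0.1, 0.9, -0.15, 0.05, -0.05]
f = features(window)
print(round(f["crest"], 2))   # the spike raises the crest factor
```

Crest factor and kurtosis are both sensitive to the short impacts that race and ball defects generate, which is why they appear alongside MFCC features in the paper’s feature set.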
Retinoblastoma is an embryonic intraocular tumor arising in the retina of the eye. It is a dangerous tumor that can damage the eye and its surrounding components. Chromosome 13q14.1-14.2 is the cytogenetic location of the RB1 gene. As a result, early identification of Retinoblastoma in children is essential. Over the last few decades, Retinoblastoma treatment has improved with the goal of not only saving life and the eye but also optimizing residual vision. In oncology, machine learning approaches used to predict cancer patient treatment outcomes include data collection and preprocessing, text mining of clinical literature, and constructing prediction models. This paper discusses recent advances in the management of Retinoblastoma, as well as data preparation and model construction for identifying patterns between Retinoblastoma clinical factors and predicting therapy success using machine learning.
In recent days, road accidents have become a major cause of death. Numerous lives are lost or put at risk by car accidents. This is a crucial area that needs a great deal of attention, extensive exploration and high priority: detecting accidents, identifying their cause, addressing the issue on time and providing a feasible solution during road accidents caused by vehicle crashes. Time delay and response time in addressing accidents are the major challenges in rescuing and treating people during accidents and emergencies. In order to rescue and save lives after accidents in remote places, an efficient automated system is needed for accident detection, cause identification and timely assistance to patients after an accident occurs. This automated system has to communicate the current status of the crash to the concerned authorities and respond immediately with minimal delay. Many researchers have proposed different accident detection and alert systems involving Bluetooth, the Global Positioning System (GPS) and the Global System for Mobile Communications (GSM), various machine learning algorithms, and mobile applications; other approaches use sensors that detect accidents from acceleration parameters, or smartphones for accident detection. This research work provides a critical and in-depth review of various emerging methods and techniques for addressing road accidents, which must be resolved in order to save lives.
The current approaches for diagnosing mental disorders rely heavily on self-reported and clinical interview ratings. The development of an automatic recognition system assists in the early detection and discovery of biological markers for diagnostic purposes. This paper develops a multimodal machine learning model that processes multiple modalities (visual, acoustic and textual features) using cross-modality correlation. The study uses a Denoising Autoencoder to find multimodal representations and then adopts Fisher Vector encoding to form session-level descriptors. Paragraph Vectors (PV) are used for the textual modality, embedding the interview-session transcripts into document representations that capture mental disorder cues. Finally, the textual and audio-visual features are fused before training a 3-layered denoising autoencoder (DAE) with a Residual Neural Network classifier. The proposed model is validated on two mental disorders: depression and bipolar disorder. The study uses two datasets, the Extended Distress Analysis Interview Corpus (E-DAIC) and the Bipolar Disorder Corpus (BDC), to analyse depression and bipolar disorder. An experimental evaluation shows the performance improvement of the proposed multimodal model over other state-of-the-art methods in detecting depression and bipolar disorder, and the simulation results show that the proposed method obtains an improved detection rate compared with existing models.
Reducing carbon-emitting fuels leads us to solar power and the question of how to maximize its availability. Solar energy is one of the most abundant energy resources in nature. It can be received using receptors and converted into electrical energy by photovoltaic cells. The energy-receiving method is an important phase and should be as efficient as possible. This paper follows a Sun-positioning algorithm to obtain the exact location of the Sun over a period of years. In this system, the tracker movement mechanisms are controlled by PID controllers tuned with an advanced, optimized chimp optimization algorithm.
Machine learning is concerned with the construction of algorithms and methods that use computers to learn and acquire insight from the available related knowledge. This work focuses on machine learning approaches for predicting diabetic disorders, using datasets from Predict the Diabetic Diseases. A web-based comparative analysis of multiple machine learning algorithms (Decision Tree, Support Vector Machine, K-Nearest Neighbor, and Logistic Regression) is used in this paper to assess their performance in identifying reliable models for detecting diabetic disease. To see the effects of adding more features to the classification model, three performance measures were chosen: F1-measure, precision, and accuracy.
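The three performance measures named above can be computed from a confusion matrix as follows; the counts below are illustrative, not results from the paper.

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F1-measure and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Illustrative counts: 80 true positives, 20 false positives,
# 10 false negatives, 90 true negatives.
p, r, f1, acc = metrics(tp=80, fp=20, fn=10, tn=90)
print(round(p, 2), round(f1, 3), round(acc, 2))   # → 0.8 0.842 0.85
```

Reporting F1 alongside accuracy matters for medical datasets such as these, where class imbalance can make a high accuracy figure misleading.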
The Brain-Computer Interface (BCI) deals with controlling various assistive devices using brain waves. The application of BCI is not limited to medicine, and its research has therefore gained significant attention. A huge number of research papers on BCI have been published in the last decade, through which new challenges are constantly being discovered. BCI uses many medical techniques such as EEG, ECG and ultrasound scans. In this paper we mainly deal with EEG: a detailed comparison of two commonly used classifiers for classification and regression is carried out, and their outputs are obtained.
EARLYBUDDY is a smart alarm clock app for Android devices. It primarily uses traffic data to help the user wake up on time when a delay is expected. Users choose a timeframe within which they must awaken rather than a single wake-up time. The app then selects a time inside that window according to the projected delay, based on frequent traffic data updates. If it detects that the user will be late to their destination, it changes the existing alarm to wake them up. Users can also specify their morning habits, which are factored into the total delay estimate. These routines are optional, but they are encouraged because they help the user organize their morning and provide more data for the application to use when calculating delays. Not all humans are made equal when it comes to sleep. EARLYBUDDY features a provision that prevents users who find it difficult to get out of bed in the morning from turning off the phone to silence the alarm. To turn off the alarm, users must tap the stop or snooze button, which appears only in the app itself at the center of the screen. If the user presses anywhere else on the screen or presses the snooze button incorrectly, the alarm rings again. The basic goal here is to actively wake up the body and mind before the snooze button is hit. Users can also place their phones on the mattress next to them, and EARLYBUDDY will analyze their sleeping habits using the device’s sensors. This may be used to assess the quality of the user’s night’s sleep and set the alarm at the most natural time.
Security is an important part of everyone’s personal and professional life, and this system focuses on a wireless home-security doorbell. It combines the functions of the device with the home network to create a peer-to-peer communication system. The existing system has a camera integrated with video calling, but there are very few interface applications to model the device. In this system, a motion-detection sensor senses human movement and automatically activates the doorbell within a short period of time. Motion sensors are typically used in alarm-based products that raise an emergency alert, for instance when a fire breaks out in a company or factory. When a person presses the calling bell, the system allows the resident to talk with the person standing at the door using a microphone, with a live cloud-integrated solution providing a 24/7 security system. The sensor is driven through the Arduino IDE, programmed with a time limit, to send the signal of the sensed human movement to the calling bell. The system also matches the visitor against existing dataset images to notify the homeowner, and inserts new visitor images into the dataset using a facial recognition technique. This system is also meant to serve elderly people and to identify unauthorised intruders through the camera. In the age of automation, it is necessary to update our security systems with new technology and to make living easier.
Every day, many bugs are raised and not fully resolved, and a large number of developers use open-source or third-party resources, which leads to security issues. Bug triage is an upcoming automated bug-report system for assigning the appropriate security teams to the large volume of bug reports submitted from different IDEs within the organization (on premises). Furthermore, by predicting the appropriate team (the one able to resolve the bug), bugs can be assigned as soon as they are tracked, saving the cost and time of tracking and assigning them. In this paper, we implement an Automatic Bug Tracking System (ABTS) that assigns a team to each reported bug, using text analysis for bug labeling and a classification machine learning algorithm for predicting the developer.