Ebook: Advances in Digital Technologies
The use of digital information and web technologies is now an essential part of our daily lives. In particular, web technologies that enable easy access to digital information in all its forms, whatever the user's purpose, are extremely important.
This book presents papers from the 7th International Conference on Applications of Digital Information and Web Technologies (ICADIWT 2016), held in Keelung City, Taiwan, in March 2016. The conference, which has been organized since 2008, is aimed at building the infrastructure necessary for the large-scale development of web technologies, and attracts participants from many countries who attend to demonstrate and discuss their research findings. The 19 full papers presented at the conference have been arranged into 5 sections: networking; fuzzy systems; intelligent information systems; data communication and protection; and cloud computing. Subjects covered include Internet communication, technologies and software; digital communication software and networks; the Internet of Things; databases and applications; and many more.
The book will be of interest to all those whose work involves the application of digital information and web technologies.
The International Conference on Applications of Digital Information and Web Technologies (ICADIWT) 2016 was the 7th event in a series organized since 2008 with the aim of building the infrastructure necessary for the large-scale development of Web technologies that enable easy access to digital information in all its forms, regardless of the user's needs. Over the years, the ICADIWT conference has built its own research community of participants from numerous countries, who attend each event to demonstrate and discuss the essential details of their research findings.
The ICADIWT conference series is organized by the Digital Information Research Foundation, a publisher of academic journals in computer and information science headquartered in India with an office in London, UK. The 7th ICADIWT conference was held from 29 to 31 March 2016 at National Taiwan Ocean University, Keelung City, Taiwan. This year we invited two keynote speakers, who presented on innovative techniques for secure communication and on the Web Service Description Language; both talks reached a broad audience of researchers and delegates.
The scope of the ICADIWT 2016 conference covers the following research areas: Internet Communication, Internet Technologies, Web Applications, Internet Software, Data Access and Transmission, Digital Communication Software, Digital Networks, Web Communication Interfaces, Internet Content Processing, Internet of Things, Internet of Everything, Data Communication, Databases and Applications, Web Systems Engineering Design, Intelligent Agent Systems, Semantic Web Studies, Adaptive Web Applications and Personalization, Navigation and Hypermedia.
This year, only 19 papers were accepted for presentation out of 57 submissions in total, each reviewed by at least two Review Committee Members, giving an overall acceptance rate of 33%. The authors come from Taiwan, Japan, Poland, Thailand, China, Malaysia, Viet Nam and Nigeria, giving an internationality coefficient of 85%.
First and foremost, we would like to express our gratitude to the Ministry of Science and Technology, Republic of China, Taiwan for supporting the ICADIWT 2016 conference; to our Keynote Speakers and to all our Program Committee Members as well as the Board of Reviewers for their hard work aimed at ensuring a high standard for the conference papers.
ICADIWT 2016 Editors
Jolanta Mizera-Pietraszko, Yao-Liang Chung and Pit Pichappan
Code hopping multiple access (CHMA) is a newly emerging multiple access technique with the potential to enable highly secure communications. Unfortunately, orthogonality among user signals in existing CHMA schemes can be preserved only in synchronous channels, under the assumption that neither multipath interference (MI) nor multiple access interference (MAI) exists. Exploiting their ideal orthogonality, we apply orthogonal complementary codes to CHMA systems to overcome the problems with existing CHMA schemes. In particular, we show that the application of orthogonal complementary codes can significantly improve the performance of a CHMA system thanks to their unique collision-resistant capability. The properties and BER performance of the proposed system are analyzed for both uplink and downlink applications, where the system may suffer MI and MAI simultaneously. Simulation results show that complementary coded CHMA with channel coding offers high capacity as a robust PHY for future wireless communications.
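The collision resistance referred to above stems from the ideal correlation property of complementary code sets: the aperiodic autocorrelations of the element codes sum to zero at every nonzero shift. A minimal numerical check of this property, using a length-2 Golay complementary pair rather than the larger orthogonal complementary code sets used in the paper, might look as follows.

```python
# Small numerical check (illustrative, not the paper's full CHMA system) of the
# ideal autocorrelation property of a complementary code pair: the aperiodic
# autocorrelations of the two element codes sum to zero at every nonzero shift.
import numpy as np

def aperiodic_autocorr(seq, shift):
    seq = np.asarray(seq, dtype=float)
    return float(np.dot(seq[:len(seq) - shift], seq[shift:]))

A, B = [1, 1], [1, -1]          # a length-2 Golay complementary pair
for shift in range(2):
    total = aperiodic_autocorr(A, shift) + aperiodic_autocorr(B, shift)
    print(f"shift {shift}: sum of autocorrelations = {total}")
# shift 0 gives the full code energy (4); every nonzero shift sums to 0,
# which is the collision-resistance property exploited in complementary coded CHMA.
```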
Maintainability plays an important role in controlling the quality of web services. Maintaining a software project depends on its degree of complexity, which can be estimated by metrics. Web Service Description Language (WSDL) documents are used to describe web services, so the complexity of a web service can be estimated by analysing its WSDL document. The WSDL document provides the description of the web service to service requestors. However, it does not contain the implementation details of a web service; hence, one can only estimate the data complexity of a web service. The data complexity can be characterised by the effort required to understand the structures of the messages that are responsible for exchanging and conveying the data. The complexity of a web service can therefore be computed by metrics that analyse the structures of these messages.
In this work we present a metric for computing the complexity of web services: the data weight (DW) of the WSDL. DW is defined as the sum of the data complexity of each input and output message. The data complexity of a message is computed by analysing the message structure together with the arguments taken by the operations of a web service. To demonstrate the value of the metric, it is evaluated and validated both theoretically and empirically. The theoretical evaluation is performed using Weyuker's properties, based on measurement theory. The practical utility of the metric is evaluated using Kaner's framework, which consists of several questions assessing the practical usefulness and scientific basis of a metric. The most important validation of this metric is the empirical one: the data weight of the WSDL is empirically validated by applying it to more than 50 real WSDL files available on the web. The metric is also compared with similar metrics to demonstrate its value. The empirical, theoretical and practical validation and the comparative study show that the data weight metric is a good indicator for estimating the quality of web services. The experiments show that as the data weight increases, the quality of the web service decreases, because a higher data weight implies less efficient use of memory and time.
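As a rough illustration of how such a metric can be computed from a WSDL document, the sketch below sums a simplified per-message contribution (one unit per message part); the paper's actual DW definition weights the nested structure of the exchanged data more carefully, so this is only an assumed, simplified rule.

```python
# Hypothetical sketch of a data-weight style metric over a WSDL document.
# Assumption: each <message> contributes the number of its <part> elements;
# the paper's actual complexity weighting of nested XSD types is richer.
import xml.etree.ElementTree as ET

WSDL_NS = "{http://schemas.xmlsoap.org/wsdl/}"

def data_weight(wsdl_path: str) -> int:
    root = ET.parse(wsdl_path).getroot()
    weight = 0
    for message in root.findall(f"{WSDL_NS}message"):
        parts = message.findall(f"{WSDL_NS}part")
        # Simplified rule: one unit per message part (input or output).
        weight += len(parts)
    return weight

# Example (hypothetical file name): print(data_weight("StockQuoteService.wsdl"))
```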
Facing the challenge of exponential growth in mobile traffic and the strict performance requirements of various 5G usage scenarios, spectrum considerations in 5G research and service planning have become increasingly important. Therefore, both the telecom industry and regulators have become highly interested in planning their B4G (Beyond 4G) and 5G related spectrum in advance.
In the first part of this talk, the speaker will provide an overview of the general spectrum trend in the current global mobile broadband market, typical prediction models used for mobile spectrum demand, recent WRC-15 conclusions on new IMT bands, and the candidate bands for discussion at WRC-19.
In the second part, the status of several important B4G/5G candidate bands will be examined, and their availability in Taiwan will be used as an example to illustrate the difficulties of spectrum evolution in an area with existing mobile spectrum allocations. The speaker will also examine potential interference and spectrum planning conflicts between B4G/5G bands and several fast-emerging IoT and PPDR band requirements, especially in the sub-1 GHz band.
In addition, the global trend towards, and importance of, spectrum sharing in future mobile spectrum planning will be introduced, since the spectrum sharing approach may become inevitable in B4G/5G wherever available spectrum is highly limited. Several important spectrum sharing frameworks will be discussed, including TVWS, LSA, and the use of unlicensed bands for mobile broadband.
Finally, some recommendations for the selection of future B4G/5G spectrum will be presented from an operator's point of view. The final recommended bands can be very operator-dependent, and the rationale will be explained.
Stability, or robust D-stability, analysis of a computer network considered as a dynamic system relies on increasing the speed of data transmission while minimizing the queuing delays of packets in the router buffers, which can affect the overall data flow of the network traffic. We consider a zero exclusion condition as an effective method for testing and analyzing the stability of computer networks. Our findings indicate that keeping the queuing delays under control, together with including some factors of the RED algorithm and its variants, which we present in this paper, can improve the quality of network services significantly. Our method is applicable both to single-loop and to multi-loop network systems.
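For reference, the basic RED (Random Early Detection) drop decision that the factors mentioned above build on can be summarized as follows; this is a generic textbook sketch of RED, and the parameter values are illustrative assumptions rather than the settings analysed in the paper.

```python
# Minimal sketch of the basic RED drop decision used to keep queuing delays
# (and hence router buffer occupancy) under control.
import random

class RedQueue:
    def __init__(self, wq=0.002, min_th=5, max_th=15, max_p=0.1):
        self.wq, self.min_th, self.max_th, self.max_p = wq, min_th, max_th, max_p
        self.avg = 0.0          # EWMA of the instantaneous queue length
        self.queue = []

    def enqueue(self, pkt):
        self.avg = (1 - self.wq) * self.avg + self.wq * len(self.queue)
        if self.avg < self.min_th:
            self.queue.append(pkt)             # below threshold: always accept
            return True
        if self.avg >= self.max_th:
            return False                       # above threshold: always drop
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        if random.random() < p:
            return False                       # early (probabilistic) drop
        self.queue.append(pkt)
        return True
```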
Framed Slotted Aloha (FSA) protocols have been widely used in diverse multiple access communication systems. Consider an FSA system with a pre-specified number of users and an assignable number of slots accommodated in a frame, where each user randomly transmits a packet in one of the assigned slots. Characterizing the probability that an exact number of collided slots occurs in a frame is an important issue for some networks based on FSA protocols, e.g., RFID networks. In this paper, we propose a generic analytical approach (GAA) to evaluate this probability for FSA systems using the principle of inclusion and exclusion. The accuracy of the proposed approach is validated via simulations. We believe that the main contribution of this paper provides a valuable method for evaluating the characterized probability in a variety of applications based on FSA protocols.
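The quantity being characterized can also be estimated empirically, which is how such analytical results are typically cross-checked. The sketch below is a Monte Carlo estimator (an assumption of this sketch, not the paper's inclusion-exclusion derivation) of the probability that exactly k slots in a frame are collided.

```python
# Monte Carlo estimate of P(exactly k collided slots) when n users each
# transmit in one of m slots chosen uniformly at random; a "collided" slot
# holds two or more packets.
import random
from collections import Counter

def prob_exactly_k_collided(n_users, m_slots, k, trials=100_000):
    hits = 0
    for _ in range(trials):
        counts = Counter(random.randrange(m_slots) for _ in range(n_users))
        collided = sum(1 for c in counts.values() if c >= 2)
        if collided == k:
            hits += 1
    return hits / trials

# Example: estimate P(exactly 2 collided slots) with 10 users and 16 slots.
print(prob_exactly_k_collided(10, 16, 2))
```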
Trust is essential in Online Social Networks (OSNs) because of their current pervasive use for sharing information on the Web. The traditional methods of measuring trust in social networks, using heuristics or manual methods, are computationally expensive and sometimes do not offer reliable and scalable results. Recently, researchers have resorted to using OSN properties to model real-life events, because the interrelationships on these sites encode human behaviours that resemble their offline counterparts. Trust on these sites can therefore be measured by taking into consideration the social relationships and the embeddedness of ties within the underlying user graph. To this end, we study the distribution of “common friends”, the network mixing, the average clustering coefficient and the scale-free behaviour of 4 OSNs. The study shows that the distribution of “common friends” on OSNs follows a power law. Furthermore, to measure trust using these metrics, we propose mathematical frameworks and analysis to support our claim. The findings show that these metrics are able to measure the level of trust in an unknown user on a social network of trust. These results will further help redefine and develop more reliable and faster trust inference algorithms and recommender systems for online products and auction sales, email spam detection algorithms, etc. To the best of our knowledge, this work is the first to study the statistical distribution of “common friends” on OSNs and to demonstrate how it can be exploited for trust inference.
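The graph statistics named above (common friends, mixing, clustering) are straightforward to compute once an OSN is represented as a graph. The sketch below does so with networkx on a synthetic scale-free graph, which is an assumption made here for illustration; the paper works with four real OSN datasets.

```python
# Compute the "common friends" distribution, average clustering coefficient and
# degree assortativity (network mixing) on a stand-in scale-free graph.
import networkx as nx
from collections import Counter

G = nx.barabasi_albert_graph(n=2000, m=3, seed=42)   # stand-in for an OSN graph

# Number of common friends for every connected pair of users.
common = [len(list(nx.common_neighbors(G, u, v))) for u, v in G.edges()]
dist = Counter(common)

print("average clustering coefficient:", nx.average_clustering(G))
print("assortativity (network mixing):", nx.degree_assortativity_coefficient(G))
for k in sorted(dist):
    print(f"{k} common friends: {dist[k] / len(common):.4f}")
```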
Resource allocation and the associated deadlock prevention problem originated in the design and implementation of operating systems, and extend to distributed computing, parallel computing and grid computing. This paper presents an improved deadlock prevention algorithm used to schedule the policies of resource supply for resource allocation on a heterogeneous distributed platform. In the current scenario, the deadlock prevention algorithm using a two-way search method has the problem of a higher time complexity of O(
Prominent among the factors militating against quality education are poor student intake standards. In the long run this has a multiplier effect on the quality of a nation's human capital, and the role of human capital in national transformation cannot be overemphasized. However, developing human resources for socio-economic transformation, particularly in an optimal sense, requires conscious and concerted efforts geared towards meritocracy. Secondary education bridges the gap between primary and tertiary education; in particular, it provides the gateway for career development, as subjects taken at the senior level are tailored towards future career choices. To ensure that resources invested in education are well utilized, the student admission process has to be streamlined to secure the best candidates, specifically in gifted schools where competition is high. This paper formulates a computational strategy to upgrade the admission process by reducing the cost and time associated with it. Using a Nigerian university secondary school as a case study, the researchers applied a metaheuristic search algorithm to the admission problem of securing the best candidates from a pool of applicants. The results support the claim that metaheuristic algorithms are capable of optimizing an admission process in terms of cost and speed.
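For readers unfamiliar with how a metaheuristic would be applied to candidate selection, the sketch below uses simulated annealing, with invented applicant scores and quota, to pick a fixed number of admits maximizing total merit. This is only an illustrative stand-in, not the paper's algorithm; in practice additional constraints (e.g., category quotas) are what make a metaheuristic preferable to simple sorting.

```python
# Simulated annealing over admission selections: swap one admitted applicant
# for one rejected applicant and accept worse moves with a cooling probability.
import math, random

random.seed(1)
scores = [random.uniform(40, 100) for _ in range(200)]   # hypothetical applicant pool
QUOTA = 60                                                # hypothetical places available

def total(selection):
    return sum(scores[i] for i in selection)

current = set(random.sample(range(len(scores)), QUOTA))   # random feasible start
best, temp = set(current), 10.0
while temp > 0.01:
    out = random.choice(list(current))
    inn = random.choice([i for i in range(len(scores)) if i not in current])
    candidate = (current - {out}) | {inn}
    delta = total(candidate) - total(current)
    if delta > 0 or random.random() < math.exp(delta / temp):
        current = candidate
        if total(current) > total(best):
            best = set(current)
    temp *= 0.995                                          # cooling schedule

print("best total score:", round(total(best), 1))
```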
Intrusion Detection Systems deal with huge amounts of data containing irrelevant and redundant features, which cause slow training and testing, higher resource consumption and poor detection rates. These features cannot simply be removed, as doing so may deteriorate the performance of the classifiers; however, by choosing the effective and important features, the classification model and the classification performance can be improved. Rough Set is the most widely used baseline technique for single-classifier approaches to intrusion detection. Typically, Rough Set is an efficient instrument for dealing with huge datasets, handling missing values and granulating the features. However, the large numbers of generated reducts and rules must be chosen cautiously to reduce the processing power needed to deal with massive parameters for classification. Hence, the primary objective of this study is to identify the significant reducts and rules prior to the classification process of an Intrusion Detection System. Comprehensive analyses are presented to eliminate the insignificant attributes, reducts and rules for a better classification taxonomy. Reducts with core attributes and minimal cardinality are preferred for constructing the new decision table, and subsequently generate high classification rates. In addition, rules with the highest support, shorter length, high Rule Importance Measure (RIM) and high coverage are favoured, since they yield high-quality performance. The results are compared in terms of classification accuracy between the original decision table and the new decision table.
Discretization of real-valued attributes is a vital task in data mining, particularly for classification problems; it is also crucial to obtaining good classification results. Empirical results have shown that the quality of classification methods depends on the discretization algorithm used in the preprocessing step. Generally, discretization is a process of searching for a partition of attribute domains into intervals and unifying the values over each interval. A discretization technique suited to Intrusion Detection System (IDS) data needs to be determined within the IDS framework, since IDS data consist of huge numbers of records that need to be examined by the system. There are many Rough Set discretization techniques that can be used, among them Semi-Naive and Equal Frequency Binning.
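To make the second of these techniques concrete, the sketch below shows Equal Frequency Binning on a synthetic attribute; the bin count and sample data are assumptions for illustration only.

```python
# Equal Frequency Binning: choose cut points so each interval receives
# (roughly) the same number of values, then map values to interval indices.
import numpy as np

def equal_frequency_bins(values, n_bins=4):
    quantiles = np.linspace(0, 1, n_bins + 1)[1:-1]        # interior quantiles
    return np.quantile(values, quantiles)

def discretize(values, cuts):
    return np.digitize(values, cuts)                       # interval index per value

durations = np.random.exponential(scale=3.0, size=1000)    # e.g. connection durations
cuts = equal_frequency_bins(durations, n_bins=4)
labels = discretize(durations, cuts)
print("cut points:", np.round(cuts, 2), "bin counts:", np.bincount(labels))
```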
In this study, a wavelet transform based method is proposed to detect image forgeries: suspected regions of digital images are detected on the basis of the properties of the wavelet transform. As is well known, the wavelet transform produces a variety of output components, namely the approximation, horizontal detail, vertical detail and diagonal detail. The detail components provide edge information; if a region has been copy-moved or spliced, its edges will not look natural, and thus the region can be detected by analysing the detail components. Owing to this characteristic, an efficient approach to digital image forgery detection is developed using the detail components of wavelet-transformed images. To increase the accuracy of the approach, a preprocessing step and morphological processing are applied in the method. The simulation results demonstrate the robustness and efficiency of this approach, with low computational time and good accuracy.
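Extracting the detail components referred to above takes only a few lines with a 2-D discrete wavelet transform. The sketch below uses PyWavelets with a Haar wavelet and a fixed threshold, both of which are assumptions of this illustration; the paper additionally applies preprocessing and morphological post-processing.

```python
# Single-level 2-D DWT: approximation plus horizontal/vertical/diagonal details.
# Strong detail energy marks edges; unnatural edges hint at copy-move or splicing.
import numpy as np
import pywt

def detail_edge_map(gray_image, wavelet="haar", threshold=30.0):
    cA, (cH, cV, cD) = pywt.dwt2(gray_image.astype(float), wavelet)
    detail_energy = np.abs(cH) + np.abs(cV) + np.abs(cD)
    return detail_energy > threshold

# Example with a synthetic image; in practice a suspect photo would be loaded here.
img = np.zeros((256, 256))
img[64:128, 64:128] = 255
print("edge pixels flagged:", int(detail_edge_map(img).sum()))
```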
This paper looks at how translating tweets from Malay to English may impact their sentiment scores, and whether the score can be improved when an entire tweet is translated rather than just the keywords. Tweets written in Malay were translated using an online dictionary before proceeding to analysis. An online sentiment analysis tool, Twinword, was used to perform the sentiment analysis on both translated and untranslated tweets. The results of the analysis showed that translating the tweets did not have a significant impact on the overall sentiment score; therefore, translating the whole length of the tweet would not affect the accuracy score.
Malaysians actively express feelings and opinions on social networks such as Twitter. These expressions can be harvested to study customer sentiment towards certain brands and customer preferences. As business analytics becomes more important, sentiment analysis may provide crucial information for making customer-driven decisions; accuracy is therefore critical to the reliability and integrity of the analysis. Although processing the massive volume of messages on social media is a huge challenge, it is now made easier by advances in big data architectures. There are many techniques for interpreting these messages. However, Malaysia's population consists of people from very diverse backgrounds, with a multitude of cultures and languages in daily use, so it is very common to find messages on social media that mix various local languages and slang, the slang consisting mostly of dialect terms spelled out in the Latin alphabet. This project explores techniques for analyzing the popularity of 5 telecommunication companies in Malaysia and addresses the shortfalls of using an existing English sentiment dictionary. In this project, a new localized dictionary is developed by compiling a mixture of English and Malay localized sentiment words and slang. The new dictionary is shown to capture and analyze 30% more keywords in the sampled Malaysian tweets; these additional matches will improve the accuracy compared with existing dictionaries.
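A localized dictionary of this kind is typically used for simple lexicon-based scoring, as in the toy sketch below. The lexicon entries, their weights and the example tweet are invented here purely for illustration; the paper compiles a much larger localized dictionary.

```python
# Toy dictionary-based scoring over a mixed English/Malay/slang lexicon.
LOCALIZED_LEXICON = {
    "bagus": +1,     # Malay: good
    "teruk": -1,     # Malay: terrible
    "mahal": -1,     # Malay: expensive
    "best": +1,
    "slow": -1,
    "laju": +1,      # Malay: fast
}

def sentiment_score(tweet: str) -> int:
    tokens = tweet.lower().split()
    return sum(LOCALIZED_LEXICON.get(tok, 0) for tok in tokens)

# Hypothetical mixed-language tweet about a telco.
print(sentiment_score("line celcom laju tapi harga mahal"))   # -> 0 (one +1, one -1)
```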
In this paper, we propose a method for suggesting sub-topics of an issued query by using the conceptual structure of WordNet. In existing search engines, sub-topics are created on the basis of the input histories of search engine users. Therefore, frequently used pairs of key-topics and sub-topics can be issued easily, but otherwise it is difficult for unaccustomed users to input both a key-topic and a sub-topic. We believe that the proposed method can help users to issue appropriate search queries conveying their information needs, because the sub-topics are general subsidiary information about a key-topic and are automatically suggested as its context, irrespective of input histories. To evaluate the proposed method, we conducted a usability test of the created search queries using a crowdsourcing site and compared the usability of the proposed method with that of existing search engines.
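One simple way to obtain sub-topic candidates from WordNet's conceptual structure is to take narrower concepts (hyponyms) of the key-topic, as sketched below with NLTK's WordNet interface; using the first synset and raw hyponym lemmas is an assumption of this sketch, and the paper's selection rules are richer.

```python
# Suggest sub-topic candidates for a key-topic from WordNet hyponyms.
from nltk.corpus import wordnet as wn   # requires: nltk.download("wordnet")

def suggest_subtopics(key_topic: str, limit=10):
    synsets = wn.synsets(key_topic, pos=wn.NOUN)
    if not synsets:
        return []
    hyponyms = synsets[0].hyponyms()            # narrower concepts of the key-topic
    names = {lemma.name().replace("_", " ") for h in hyponyms for lemma in h.lemmas()}
    return sorted(names)[:limit]

# Example: sub-topic candidates that could be appended to the query "camera".
print(suggest_subtopics("camera"))
```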
This research work presents the development of a prototype short-distance, full-duplex data transfer system using a red pointer module as the light transmitter and a PIN photodiode as the receiver. The system is low cost, stable, and has low power consumption. The data transfer unit is connected to a computer system using the Prolific USB-to-Serial COM port chipset, and uses the computer as its power source. The unit is used for data transfers between two nearby computer systems or peripheral devices. In this work, a data transfer program was developed, written in Visual C#; it makes it convenient to specify the COM port and the appropriate bit rates. Data transfers of both text and graphical data are tested and the results are discussed, both with and without interference from moving objects. This data transfer unit has potential applications in EMI-sensitive environments and in the presence of moving-object interference.
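The host-side logic amounts to ordinary serial-port I/O over the Prolific virtual COM port. The paper's program is written in Visual C#; the sketch below is an equivalent illustration in Python using pyserial, and the port name and bit rate are assumptions typical of such a setup.

```python
# Illustrative send/receive over the USB-to-Serial link driving the optical unit.
import serial

with serial.Serial(port="COM3", baudrate=115200, timeout=1) as link:
    link.write(b"hello over the optical link\n")   # bytes go out via the light transmitter
    reply = link.readline()                        # bytes arrive via the PIN photodiode side
    print("received:", reply.decode(errors="replace"))
```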
In this paper, we consider a typical health-care system that uses a Wireless Sensor Network (WSN) for wireless patient tracking. The wireless patient tracking module of this system performs localization from samples of Received Signal Strength (RSS) variations and tracking through a Particle Filter (PF), assisted by multiple transmit-power information. However, in the presence of transmission power control, localization based on RSS is a challenging problem because of the inconsistent RSS indicator (RSSI) measurements in the WSN. Therefore, we propose an adaptive-resampling PF, namely Kullback-Leibler Distance (KLD) resampling with adjusted variance and gradient data, to improve wireless patient tracking by smoothing out the effect of RSS variations, generating a sample set near the high-likelihood region for each transmit-power level. The key point of this method is to use the adjusted variance and gradient data to keep the wireless patient tracking error below 2 metres in 80% of cases. We conduct a number of simulations using a health-care dataset containing real RSS measurements to evaluate the accuracy of wireless patient tracking. The average patient position error of our technique improves on that of the Sampling Importance Resampling (SIR) PF by about 3%, on KLD resampling by around 0.5%, and on gradient descent (i.e., without a PF) by about 6%. More importantly, when a suitable transmit-power level for self-RSSI is set, the simulation results show that the proposed technique outperforms the above existing methods, especially over the whole error range from 0 to 3 m. Finally, the simulation results also show the impact on wireless patient tracking of the number of particles for the SIR algorithm, and of the maximum number of samples for the KLD-resampling algorithm, when these traditional strategies are compared in a real experiment.
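For context, KLD-based adaptive resampling chooses the number of particles from the spread of the posterior using the standard KLD-sampling bound (Fox, 2003); the paper builds its adjusted variance and gradient information on top of this idea. The sketch below evaluates that bound, with the error bound, confidence quantile and bin counts chosen purely as illustrative assumptions.

```python
# Number of particles needed so the KL divergence between the sample-based
# estimate and the true posterior stays below epsilon with the chosen confidence
# (z_quantile = upper quantile of the standard normal, e.g. 2.326 for 99%).
import math

def kld_sample_bound(k_bins, epsilon=0.05, z_quantile=2.326):
    if k_bins <= 1:
        return 1
    a = 2.0 / (9.0 * (k_bins - 1))
    return int(math.ceil((k_bins - 1) / (2.0 * epsilon) *
                         (1.0 - a + math.sqrt(a) * z_quantile) ** 3))

# Example: more occupied histogram bins (a more spread-out posterior) -> more particles.
for k in (2, 10, 50):
    print(k, "bins ->", kld_sample_bound(k), "particles")
```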
Plagiarised reports on the web are a social problem and should be eliminated; moreover, such report writing results in ineffective knowledge construction. In this study, in order to prevent web page plagiarism, we have developed an investigative report writing support system that restricts the copying and pasting of information within web pages. With this system, students can externalise the knowledge they construct through web browsing into notes, write reports using these notes as report material, and remove inadequacies in the report content through reflection.
In this study, we present a feasibility study of personal data analysis for well-being oriented life support. We introduce our basic concept and a model for describing the personal data collected from people's daily lives, and discuss how to utilize this personal data analysis to provide users with individualized services in their daily lives. Finally, we present an experimental analysis based on people's daily activity data to demonstrate the feasibility of our proposed approach.