Mirsaeid Hosseini Shirvani, Department of Computer Engineering, Sari Branch, Islamic Azad University, Sari, IRAN
Each cloud provider faces many challenges in meeting the complicated business processes requested by users. The most important of these are resource limitation, network failure, and cybersecurity attacks, which make a single cloud unreliable; users also suffer from the vendor lock-in phenomenon. A multi-cloud market (MCM), on the other hand, can be a reliable paradigm for provisioning value-added services to users. Moreover, via cloud brokers, users can select services according to their requirements, especially from cost and security perspectives. In this paper, we formulate the web service composition problem as a multi-objective problem. Since we face an ever-growing MCM in which each provider offers the same service at a different price and security level, a meta-heuristic algorithm, multi-objective time-varying particle swarm optimization (MOTV-PSO), is extended to solve this combinatorial multi-objective problem. Simulation results and the non-dominated solutions obtained indicate that our proposed meta-heuristic algorithm beats other existing approaches in terms of several metrics drawn from the literature.
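The time-varying parameters that give MOTV-PSO its name, together with the Pareto-dominance test underlying any multi-objective comparison, can be sketched as follows. This is illustrative only: the abstract does not specify the actual schedule, so the linear ramps, parameter names, and bounds below are assumptions.

```python
def tv_params(t, t_max, w_max=0.9, w_min=0.4, c_max=2.5, c_min=0.5):
    """Return (inertia w, cognitive c1, social c2) at iteration t
    under an assumed linear time-varying schedule."""
    frac = t / t_max
    w = w_max - (w_max - w_min) * frac    # inertia decays: explore -> exploit
    c1 = c_max - (c_max - c_min) * frac   # cognitive coefficient decays
    c2 = c_min + (c_max - c_min) * frac   # social coefficient grows
    return w, c1, c2

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

Non-dominated solutions are exactly those for which `dominates` returns False against every other candidate.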
multi-objective, multi-cloud, particle swarm optimization
Mirsaeid Hosseini Shirvani, Department of Computer Engineering, Sari Branch, Islamic Azad University, Sari, IRAN
Wireless sensor networks (WSNs) are deployed for several reasons, such as monitoring hard-to-reach places, surveillance, and prevention of unprecedented events, which makes them a hot topic today. Different works in the literature address the various challenges of WSNs. Since WSNs use battery-limited devices, one of the most important subjects is power management, which has a drastic indirect effect on network lifetime. The energy model shows that even nodes that do not monitor any event consume energy continuously; therefore, a sleep/wakeup approach can be a promising technique for saving energy. We propose a novel sleep/wakeup fuzzy-TOPSIS-based approach that puts unnecessary nodes into sleep mode. To do so, we devise a fuzzy TOPSIS framework as a controller that ranks candidate nodes for sleep mode. In this ranking method, nodes that have more residual energy and higher two-hop distance coverage have a greater chance of being selected. Our proposed approach not only prolongs the network lifetime but also covers most of the field. Implementation results show more than 20 percent improvement over existing approaches such as EC-CKN.
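The TOPSIS ranking step the abstract describes can be sketched in plain Python. This is a generic (crisp, not fuzzy) TOPSIS closeness ranking; the criteria shown — residual energy and two-hop coverage, both maximized — and the weights are assumptions for illustration, not the paper's fuzzy framework.

```python
import math

def topsis_rank(matrix, weights, benefit):
    """TOPSIS closeness scores; higher = closer to the ideal alternative.
    matrix: rows = alternatives, cols = criteria; benefit[j] is True if
    criterion j is to be maximized."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each column, then apply the criterion weights.
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    best = [max(r[j] for r in v) if benefit[j] else min(r[j] for r in v) for j in range(n)]
    worst = [min(r[j] for r in v) if benefit[j] else max(r[j] for r in v) for j in range(n)]
    return [math.dist(row, worst) / (math.dist(row, best) + math.dist(row, worst))
            for row in v]

# Two hypothetical nodes scored on (residual energy, 2-hop coverage).
scores = topsis_rank([[0.9, 0.8], [0.2, 0.3]], weights=[0.5, 0.5], benefit=[True, True])
```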
Wireless sensor network (WSN), Power management, Fuzzy TOPSIS
Ren Zhi, Li Dong, Chen Cifei1 and Chen Minhua, Chongqing University of Posts and Telecommunications Chongqing, China
At present, research on RPL mainly focuses on wireless low-power and lossy networks. The protocol suffers from problems such as redundant network overhead and slow network convergence. In addition, the existing open-source implementations of RPL are mainly based on the Contiki and TinyOS operating systems and cannot be used in an autonomic control plane based on a wired autonomic system. To solve these problems and provide efficient and reliable communication support in the autonomic control plane, this paper proposes a fast and efficient route construction mechanism: it changes the construction mode so that nodes actively join the DODAG, cancels the passive wait for DIO messages, centrally selects an optimal parent node, and avoids the redundant overhead caused by frequent switching of parent nodes. The protocol software is based on the Linux operating system and is developed in C according to the autonomic control plane standard. Experimental results show that the improved protocol optimizes the convergence time of route construction, the control overhead of route construction, and the average network throughput in the autonomic control plane.
Autonomic Network, Autonomic control plane, RPL routing protocol, Optimization of routing construction, Linux operating system.
Mark Armand Atkins, Florida Institute of Technology, Melbourne, USA
Two approaches are taken here in an endeavor to discover natural definitions of knowledge and wisdom that are justifiable with respect to both theory and practice, using graph theory: (1) the metrics approach produces graphs that force an increase in various graph metrics, whereas (2) the dimensions approach is based on the observation that the graphical representation of aggelia in the DIKW hierarchy seems to increase in dimension with each step up the hierarchy. The dimensions method produces far more cogent definitions than the metrics method, so that is the set of definitions proposed, especially for use in artificial intelligence.
Knowledge Representation, Artificial Intelligence, Graph Theory, DAG, DIKW
Mark Phil B. Pacot and Nelson Marcos, De La Salle University, Manila, Philippines
Aerial imaging systems play a vital role in various areas, utilizing the vast number of aerial images acquired from platforms such as satellites and UAVs. However, collecting these images without distortion has been a constant dilemma. One of the most common distortions is caused by atmospheric conditions, i.e., the presence of clouds that block regions of an image, resulting in loss of information about the ground scene. In this paper, we exploit the strength of generative adversarial networks (GANs) in solving many computer vision and image processing problems, and we develop a new cloud removal technique for datasets acquired from unmanned aerial vehicle (UAV) imagery using a GAN. In a series of experiments, we found that our model performed well in removing cloud artifacts from most of our test data. Output image quality is commonly evaluated with well-known metrics such as mean-squared error, peak signal-to-noise ratio, and the structural similarity index; here, however, we opt for the perception-based image quality evaluator (PIQE), which requires no reference or ground-truth image. Using this metric, we show that our model successfully improves the visual appearance of the output images. Overall, our work presents a new way of exploiting the capability of GANs for the many computing problems that co-exist in our modern society.
generative adversarial network, unpaired datasets, perception-based image quality evaluator, unmanned aerial vehicle system, cloud removal
Katalina Biondi, Department of Computer Science, University of Central Florida, Orlando, Fl, United States of America
Camouflage detection is a difficult image classification task, but correct classification would be beneficial for defense purposes: the ability to detect camouflaged objects can reduce the chance of ambushes and reduce casualties. It could also improve target acquisition for surveillance systems and later serve as a tool for advancing camouflage tactics by intentionally trying to fool the classifier. The camouflage project classifies images from the CAMO dataset as camouflaged or non-camouflaged using k-means clustering for image segmentation and a convolutional neural network for classification. The project explores how the model performs when images are compressed with k-means clustering to 2, 5, and 10 colours.
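The k-means colour-compression step described above can be sketched with plain Lloyd's algorithm. The function below, its parameters, and the pixel representation are illustrative assumptions, not the project's actual pipeline.

```python
import random

def kmeans_colours(pixels, k, iters=20, seed=0):
    """Quantize (r, g, b) pixels to at most k representative colours with
    plain Lloyd's algorithm (a hypothetical stand-in for the project's step)."""
    rng = random.Random(seed)
    centres = rng.sample(pixels, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:  # assign each pixel to its nearest centre
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            clusters[j].append(p)
        for j, cl in enumerate(clusters):  # move centres to cluster means
            if cl:
                centres[j] = tuple(sum(ch) / len(cl) for ch in zip(*cl))
    # Replace every pixel by its nearest centre (the "compressed" image).
    return [min(centres, key=lambda c: sum((a - b) ** 2 for a, b in zip(p, c)))
            for p in pixels]
```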
Camouflage Image Classification, CAMO Dataset, Convolutional Neural Networks, Deep Learning, Image Segmentation, K-Means Clustering
Takayuki Nakachi1 and Hitoshi Kiya2, 1Nippon Telegraph and Telephone Corporation, Kanagawa, Japan and 2Tokyo Metropolitan University, Tokyo, Japan
In this paper, we propose a privacy-preserving pattern recognition scheme with image compression. The proposed scheme is based on secure sparse coding using a random unitary transform. It has two prominent features: 1) it is capable of pattern recognition in the encrypted image domain, so even if data leaks, privacy is maintained because the data remains encrypted; 2) it works as an Encryption-then-Compression (EtC) system, where image encryption is conducted prior to compression. Pattern recognition can be carried out in the compressed signal domain using a few sparse coefficients, and based on the recognition result, only the selected images are efficiently compressed. We demonstrate its performance in detecting humans in the INRIA dataset. The proposal is shown to realize human detection on encrypted images and to efficiently compress the selected images.
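The key property behind recognition in the encrypted domain is that a unitary (here, real orthogonal) transform preserves inner products. A minimal sketch with a 2-D rotation, purely illustrative of that property rather than of the paper's actual transform:

```python
import math

def rotation(theta):
    """A 2-D real orthogonal (hence unitary) transform -- the simplest
    stand-in for a random unitary transform."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def apply(Q, x):
    """Matrix-vector product Q @ x."""
    return tuple(sum(q * xi for q, xi in zip(row, x)) for row in Q)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Inner products (and hence the distances/correlations sparse coding relies
# on) are preserved under Q, so recognition can run on the transformed
# ("encrypted") signals without undoing the transform.
Q = rotation(1.234)  # the angle plays the role of the secret key here
```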
Surveillance Camera, Pattern Recognition, Secure Computation, Sparse Coding, Random Unitary Transform
Vedran Stipetic and Sven Loncaric, Faculty of Electrical Engineering and Computing, University of Zagreb, Zagreb, Croatia
Images taken outdoors are often degraded by atmospheric conditions such as fog and haze. These degradations can reduce contrast, blur edges, and reduce the saturation of images. In this paper we propose a new method for single image dehazing. The method is based on an idea from color constancy called the gray world assumption, which states that the average values of each channel in an image are the same. Using this assumption and a haze degradation model, we can quickly and accurately estimate the haze thickness and recover a haze-free image. The proposed method is validated on synthetic and natural image datasets and compared to other methods. The experimental results show that the proposed method provides results comparable to other dehazing methods.
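The haze degradation model such methods invert is commonly written I = J·t + A·(1 − t), where J is the scene radiance, A the atmospheric light, and t the transmission. A minimal sketch of the inversion and of a gray-world gain estimate; the paper's actual estimation pipeline is not reproduced here.

```python
def dehaze_pixel(I, A, t, t_min=0.1):
    """Invert the haze model I = J*t + A*(1 - t) per colour channel.
    I: observed pixel, A: atmospheric light, t: transmission estimated
    elsewhere (in the paper, via the gray-world assumption); t is floored
    to avoid amplifying noise where haze is thick."""
    t = max(t, t_min)
    return tuple((i - a * (1 - t)) / t for i, a in zip(I, A))

def gray_world_gains(image):
    """Gray-world assumption: all channel means should be equal.
    Returns per-channel gains that equalize them (illustrative helper)."""
    means = [sum(px[c] for px in image) / len(image) for c in range(3)]
    gray = sum(means) / 3
    return [gray / m for m in means]
```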
Image restoration, image dehazing
Yin Li1, Wenjing Yang1 and Naiyang Guan2, 1State Key Laboratory of High Performance Computing, College of Computer, National University of Defense Technology, Changsha, China and 2Artificial Intelligence Research Center, National Innovation Institute of Defense Technology, Beijing, China
Hyperspectral images contain rich information on the fingerprints of materials and are popularly used in oil and gas exploration, environmental monitoring, and remote sensing. Since hyperspectral images cover a wide range of wavelengths at high resolution, they can provide rich features for enhancing subsequent target detection and classification. Recently proposed deep learning algorithms have frequently been utilized to extract features from hyperspectral images. However, these algorithms are data-hungry and need a large volume of data to learn even a usable model. To overcome this deficiency, this paper proposes an improved 3D Wasserstein generative adversarial network (3D WGAN-GP) model that generates hyperspectral images with the help of a small amount of homemade unlabeled hyperspectral training images. In particular, WGAN-GP generates hyperspectral images through three-dimensional convolutional neural networks (3D-CNNs) and discriminates them against real ones through a discriminator network. We train the proposed 3D WGAN-GP model by jointly learning both networks with the Adam algorithm. To the best of our knowledge, this is the first work that augments hyperspectral images using a deep learning algorithm. We evaluate the quality of the generated hyperspectral images by comparing their spectra to the corresponding real ones. The experimental results confirm the effectiveness of the proposed model.
Hyperspectral Data, 3D CNN, WGAN-GP
Fei Dai, Dengyi Zhang and Kehua Su, School of Computer Science, Wuhan University, Wuhan, China
Although we have previously proposed a segmentation network that improves the accuracy of burn diagnosis by reducing the impact of human error and decreasing diagnostic cost, its diagnostic accuracy can only be achieved with a large amount of labelled data. In this paper, deep learning is used to solve this problem by generating burn images automatically instead of collecting them manually from hospitals, which is time- and energy-consuming. The dataset-generation method is called Burn Generative Adversarial Network (Burn-GAN); Color Adjusted Seamless Cloning (CASC) is then used to fuse the generated burn images with human texture. Human images with burns and the 2D coordinates of burn edges are finally acquired. The network is compared with previous preeminent networks in terms of existing distribution quality metrics and the accuracy of the segmentation network. Results demonstrate that our method improves the segmentation accuracy from 84.2% to 95.6%.
Image Segmentation, Image Generation, Deep Learning, Burn Image
Mohammad Mahyoob, Department of Languages and Translation Taibah University, Madinah, KSA
This paper proposes an improved morphological analyser for Arabic pronominal systems using the finite state method. The main advantage of the finite state method is that it is very flexible, powerful, and efficient. The main contribution of this work is the full analysis and representation of the morphological analysis of all inflections of pronoun forms in Arabic. Kleene's theorem, one of the first and most important results about FSAs, relates the class of languages generated by finite state automata to certain closure properties; this result makes the theory of finite-state automata a very versatile descriptive framework. In this paper we build a finite state network for the inflectional forms of the root words, restricted to all the inflections and grammatical properties needed to generate the attached and unattached forms of pronouns in the Arabic language.
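A finite state acceptor of the kind such a network is built from can be sketched in a few lines. The toy transition table below (a root followed by an optional attached-pronoun suffix) is a hypothetical illustration, not the paper's Arabic network.

```python
class FSA:
    """Minimal deterministic finite-state acceptor."""
    def __init__(self, transitions, start, accept):
        self.t, self.start, self.accept = transitions, start, accept

    def accepts(self, symbols):
        state = self.start
        for s in symbols:
            state = self.t.get((state, s))  # follow the transition, if any
            if state is None:
                return False
        return state in self.accept

# Hypothetical toy network: a root word optionally followed by an
# attached-pronoun suffix.
toy = FSA(
    transitions={("q0", "ROOT"): "q1", ("q1", "PRON_SUFFIX"): "q2"},
    start="q0",
    accept={"q1", "q2"},
)
```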
Computational Morphology, Arabic pronominal system, Arabic Morphological Analyzer, finite state automaton
Yiyao Wang and Lihua Tian, Software Engineering, Xi'an Jiaotong University, Shaanxi Province, China
This paper proposes a text classification model, an improved memory neural network model, for processing large-scale training data. In this model, an optimized transformer feature extractor replaces the memory neural network, which cannot be parallelized. At the same time, a multi-level dilated convolution matrix is designed to replace the convolutional neural network, so as to extract more accurate semantic-unit features. Finally, to reduce the number of model parameters, the pooling layer and fully connected layer of each level of the convolution network are removed and a global average pooling layer is used instead. Experimental results show that the model balances accuracy, parameter count, and convergence rate.
Convolutional Neural Network, Memory Neural Network, Global Pooling, Dilated Convolution, Fully Connected Layer
Hentabli Hamza, Naomie Salim and Maged Nasser, Faculty of Computing, Universiti Teknologi Malaysia
According to the similar property principle, structurally similar compounds exhibit very similar properties and similar biological activities. Many researchers have applied this principle to discovering novel drugs, which has led to the emergence of chemical-structure-based activity prediction. Using this technology, it becomes easier to predict the activities of unknown (target) compounds by comparing them with a group of already known chemical compounds and assigning the activities of the similar known compounds to the targets. Various machine learning (ML) techniques have been used for predicting compound activity. In this study, we introduce a novel predictive system, MaramalNet, a convolutional neural network that predicts molecular bioactivities using a different molecular matrix representation. MaramalNet is a deep learning system that also incorporates substructure information about the molecule when predicting its activity. We investigated this novel convolutional network to determine its accuracy in predicting the activities of unknown compounds. The approach was applied to a popular dataset, and its performance was compared with three classical ML algorithms. All experiments indicated that MaramalNet provides a promising prediction rate (88.01% accuracy on a highly diverse dataset and 99% on a low-diversity dataset). MaramalNet was very effective for homogeneous datasets but showed lower performance on structurally heterogeneous datasets.
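The similar property principle is often operationalized with the Tanimoto coefficient over binary fingerprints. A minimal nearest-neighbour sketch of that idea follows; MaramalNet itself is a CNN, so this only illustrates the underlying principle, not the paper's system.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two binary fingerprints given as
    sets of on-bit indices."""
    a, b = set(fp_a), set(fp_b)
    inter = len(a & b)
    union = len(a) + len(b) - inter
    return inter / union if union else 0.0

def predict_activity(target_fp, known):
    """Assign the target the activity of its most similar known compound.
    known: list of (fingerprint, activity) pairs."""
    fp, activity = max(known, key=lambda kv: tanimoto(target_fp, kv[0]))
    return activity
```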
Bioactive Molecules, Activity prediction model, Convolutional neural network, Deep Learning, biological activities
Alexandria Dominique Farias and Gongling Sun, International Space University, Strasbourg, France
As the technology for providing satellite imagery has improved, the data has become so abundant that manual processing is sometimes no longer an option for analysis. There has been a prominent trend in techniques used to automate this process and host the processing in massive online cloud servers. These processes include data mining (DM) and machine learning (ML). The techniques that will be discussed include: clustering, regression, neural networks, and convolutional neural networks (CNN). The main challenges for earth observation, including the size of data, its complex nature, a high barrier to entry, and the datasets used for training data, are discussed, as well as the solutions that are addressing these challenges. This paper will show how some of these techniques are currently being used in the field of earth observation. Google Earth Engine (EE) has been chosen to process and run our scripts on publicly available Landsat-7 remote sensing (RS) data catalogues. Using this RS data, it is possible to classify and discover historical algal blooms in the Baltic Sea surrounding the Swedish island of Gotland.
Earth Observation, Remote Sensing, Satellite Data, Data Mining, Machine Learning, Google Earth Engine, Algal Blooms, Phytoplankton Bloom, Cyanobacteria
Li Chunliang, Xu Qinye, Jia Handong, Song Weixing, Li Xiaofeng and Liu Nan*, School of Computer Science and Technology, Shandong Jianzhu University, Jinan, China
Recently, the genomic scaffold filling problem has attracted a lot of attention at home and abroad. However, almost all current studies assume that the scaffold is given as an incomplete sequence, which differs significantly from most real genomic datasets (where a scaffold is given as a list of contigs). In this paper, we revisit the genomic scaffold filling problem for this important case: two scaffolds R and S are given, the missing genes can only be inserted between the contigs, and the objective is to maximize the number of common adjacencies between the two filled scaffolds. Considering the insertion of type-A and type-B substrings, we propose a polynomial-time filling algorithm, implement it in Python, and visualize the filling results and the total number of adjacencies generated.
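The objective being maximized — the number of common adjacencies between two gene sequences — can be computed directly. A small sketch (the gene names, input format, and multiset treatment of repeated adjacencies are assumptions for illustration):

```python
from collections import Counter

def adjacencies(seq):
    """Multiset of unordered adjacent gene pairs in a gene sequence."""
    return Counter(tuple(sorted(p)) for p in zip(seq, seq[1:]))

def common_adjacencies(r, s):
    """Number of adjacencies shared by two (filled) scaffolds -- the
    quantity the filling algorithm maximizes."""
    return sum((adjacencies(r) & adjacencies(s)).values())
```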
Genome, Scaffold filling, Adjacency, Contig, Polynomial time
Mohammed Zakaria Moustafa1, Mohammed Rizk Mohammed2, Hatem Awed Khater3 and Hager Ali Yahia4, 1Department of Electrical Engineering (Power and Machines Section), Alexandria University, Alexandria, Egypt, 2Department of Communication and Electronics Engineering, Alexandria University, Alexandria, Egypt, 3Department of Computer Science, HORAS University, Damietta, Egypt and 4Department of Communication and Electronics Engineering, Alexandria University, Alexandria, Egypt
A support vector machine (SVM) learns a decision surface from input points of two different classes; in many applications, some of the input points are misclassified. In this paper, a bi-objective quadratic programming model is utilized, and different feature quality measures are optimized simultaneously using the weighting method for solving the bi-objective quadratic programming problem. An important contribution of the proposed model is that changing the weighting values yields different efficient sets of support vectors. The experimental results give evidence of the effectiveness of the weighting parameters in reducing the misclassification between the two classes of input points.
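The weighting method mentioned above scalarizes the two objectives into one, w·f1 + (1 − w)·f2, and sweeping w traces out different efficient solutions. A minimal sketch over candidate objective pairs — illustrative only, since the paper solves a quadratic program rather than a discrete selection:

```python
def weighted_sum(f, w):
    """Weighting method: scalarize objectives (f1, f2) as w*f1 + (1-w)*f2."""
    f1, f2 = f
    return w * f1 + (1 - w) * f2

def best_for_weight(candidates, w):
    """Candidate (f1, f2) minimizing the weighted objective; sweeping w
    over (0, 1) yields different efficient solutions."""
    return min(candidates, key=lambda f: weighted_sum(f, w))
```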
Support vector machines (SVMs), Classification, Multi-objective problems, Weighting method, Quadratic programming
Shi Wenxiu and Li Nianqiang, School of Information Science and Engineering, University of Jinan, Jinan, China
To achieve automatic identification of farmland pests and improve recognition accuracy, this paper proposes a farmland pest identification method based on a target detection algorithm. First, a labeled farmland pest database is established; then the Faster R-CNN algorithm is applied, with the model using an improved Inception network; finally, the proposed target detection model is trained and tested on the farmland pest database, reaching an average precision of 90.54%.
Object detection algorithm, Faster R-CNN, Inception network.
Mohamed Bouyahi1 and Yassine ben Ayed2, 1University of Sfax, National School of Engineers of Sfax (ENIS), Sfax, Tunisia and 2University of Sfax, Higher Institute of Computer Sciences and Multimedia (ISIMS), Sfax, Tunisia
Current technologies' understanding of video content remains limited due to its complexity and length. Segmenting videos into small coherent units, however, facilitates indexing and search. Subjectivity remains the essential constraint on video analysis, but genre (drama, action, ...) does not present any conflict. In this paper, we present a new approach to video segmentation into scenes based on genre prediction. Initially, the video is divided into shots of equal duration. We use an architecture based on audio-visual deep features extracted from trained neural networks for genre prediction, and we introduce a transition detection method based on computing the similarity between shot genres. The originality of this method consists in using the high-level semantic relationship between successive shots for transition detection. We achieve good performance on videos of varied genres, evaluating our method on the RAI and BBC datasets through comparison with other state-of-the-art approaches.
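The transition rule — declare a scene boundary where consecutive shots' genre predictions diverge — can be sketched with cosine similarity. The threshold and vector format below are illustrative assumptions, not the paper's exact rule.

```python
import math

def cosine(a, b):
    """Cosine similarity between two genre prediction vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def scene_boundaries(shot_genre_vecs, threshold=0.5):
    """Indices where a scene transition is declared: between consecutive
    shots whose genre vectors fall below the similarity threshold."""
    return [i + 1
            for i, (a, b) in enumerate(zip(shot_genre_vecs, shot_genre_vecs[1:]))
            if cosine(a, b) < threshold]
```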
Segmentation, Transition Detection, Multimodal, Deep Features, Genre
Zuoqi Tang1, Zheqi Lv2, Chao Wu3,4, 1Department of Computer Science and Technology, Zhejiang University, China, 2Department of Marine Informatics, Zhejiang University, China, 3Department of Public Affairs, Zhejiang University, China and 4Center of Social Welfare and Governance, Zhejiang University, China
Big data and machine learning are poised to revolutionize the field of artificial intelligence and represent a step towards building an intelligent society. Big data is considered the key to unlocking the next great waves of growth in productivity, and the value of data is realized through machine learning. In this survey, we begin with an introduction to the general field of data pricing and distributed machine learning, then progress to the main streams of data pricing and mechanism design methods. Our survey covers several current areas of research within data pricing, including incentive mechanism design for federated learning, reinforcement learning, auctions, crowdsourcing, and blockchain, focusing especially on reward functions for machine learning and payment schemes. In parallel, we highlight pricing schemes in data transactions, focusing on data evaluation via distributed machine learning. To conclude, we discuss some research challenges and future directions of data pricing for machine learning.
Data pricing, Big data, Machine learning, Data transaction
Daniela-Maria Cristea, Department of Computer Science, Babes-Bolyai University, Cluj Napoca, Romania
ML is a relatively well-defined subfield of Artificial Intelligence; in chemoinformatics it focuses on (a) finding regions in descriptor space associated with particular chemical behaviors or (b) relating measures of chemical behavior to values of descriptors.
Machine Learning, Drug Design, QSAR, ADME variants, Chemoinformatics
Kagombe Geofrey, Prof. Waweru Mwangi and Prof. Wafula Muliaro, Department of Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
Agile software development methods offer numerous advantages to adopting organisations, e.g., the ability to deploy systems fast and to satisfy dynamic user requirements. However, serious concerns have been raised over the applicability of these methods in developing security-critical software, mainly because the traditional design of software security engineering practices conflicts in character with agile principles. The framework we present in this paper addresses this issue by ensuring that security is considered from the beginning of the project to the end. Knowledge from established security best practices, mainly drawn from but not limited to SSE-CMM, is applied to implement security in an agile software development environment (Scrum). The aim is to achieve the intended security goals within the process to appropriate levels by adopting agility in the implementation of some of these activities.
Agile software development, Scrum (Software development), Software Security Engineering, Risk.
Kiran Kumaar CNK, Capgemini India Private Limited, Inside Divyasree TechnoPark, Kundalahalli, Brookefield, Bengaluru, Karnataka 560037, India
Defects are one of the seven prominent wastes in the lean process, arising when a product or functionality fails to meet customer expectations. These defects, in turn, can cause rework and redeployment of that product or functionality, which costs valuable time, effort, and money. Surveys show that most clients invest much time, energy, and money in fixing production defects. This paper describes ways to move from quality assurance to quality engineering for digital transformation through diagnostic, predictive, and prescriptive approaches. It also outlines the overall increase in quality observations, given the QA shift left and continuous delivery through Agile with the integration of analytics and a toolbox.
Diagnostic, Predictive & Prescriptive approaches, continuous delivery through Agile
Abdulrahman M. Qahtani, Department of Computer Science, Taif University, Taif City, KSA
The software engineering industry has witnessed an increasing number of innovative methods and practices in the last decade at different levels, ranging from development processes to software projects and from testing to the verification of software products. Extensive empirical studies have investigated and discussed the impact of applying agile principles to the testing process in teams distributed across geographical boundaries. This empirical study has a similar focus: using a real case study in a distributed domain, it applies agile testing to a selected team and compares its outcome with three other teams to determine the impact of involving a client in the testing process on overcoming distributed development challenges. The findings indicate a highly positive impact on team productivity when using agile testing compared with the other groups using central distributed-team testing. All teams met a 90% testing requirement; however, the group applying agile testing verified more than 99% of all requests entered into the testing process, a notable difference supporting the productivity of any development project.
Distributed Software Development (DSD), Global Software Development (GSD), Software Testing, Agile Development, Case study & Empirical study
Guofeng Li1 and Xuejun Yu2, 1Faculty of Information Technology, Beijing University of Technology, Beijing, China and 2Beijing University of Technology, Beijing, China
At present, research on software trustworthiness mainly focuses on two parts: behavioral trustworthiness and trusted computing. Research on trusted computing has reached the active-immunity stage of Trusted Computing 3.0. Behavioral trustworthiness mainly focuses on detecting and monitoring software behavior trajectories: abnormal behaviors are found through scene-based and hierarchical monitoring of program call sequences, and sensitive and dangerous software behaviors are restricted. Current research on behavioral trust mainly uses XML to configure behavior declarations that constrain sensitive and dangerous software behaviors, and is mainly applied to software trust testing methods. Research on XML behavior declaration files mainly configures them manually, by obtaining sensitive behavior sets and defining behavior paths, and focuses on formulating behavior declarations and generating behavior declaration test cases; there is little research on the trustworthiness of behavior semantics. Behavior declarations are all XML declaration files configured from behavior sets, and manual configuration is complicated and time-consuming and can leave behavior sets incomplete. This paper uses a trusted tool based on semantic analysis technology to solve the behavior-set integrity problem and to generate credible declaration files efficiently. The main idea is to use semantic analysis, both dynamic and static, to model requirements: UML models are used to automatically generate XML code, behavioral semantics are analyzed and modeled, and non-functional requirements are formally modeled, so as to ensure the credibility of both the developed trusted tools and the automatically generated XML files.
The work is mainly based on formal modeling of non-functional requirements. During the research, the state diagram and function layer are semantically analyzed, the XML trusted behavior declaration file is generated from the activity diagram established by the model-driven method, and the functional semantic set and functional semantic tree set are finally generated by semantic analysis to ensure the integrity of the software behavior set. The behavior set generates a behavior declaration file in XML format through the designed trusted tools, and trusted computing is used to verify the credibility of those tools.
behavior declaration, behavior semantic analysis, trusted tool design, functional semantic set
Copyright © CCSIT 2020