Sunday 1 January 2017

Realizing Omni Directional Architectural Scalability with Software Stability

Vol. 11  Issue 1
Year:2016
Issue:Jul-Sep 
Title:Realizing Omni Directional Architectural Scalability with Software Stability
Author Name:M.E. Fayad, Shivanshu K. Singh and Rafael Capilla
Synopsis:
Current software development approaches need to cope with new design challenges, in which the ever-increasing complexity of software systems requires more scalable systems that can be adapted better. Hence, the evolution of such systems and their architectures depends on how stable a design is against new requirements and on the desired quality level. As in previous columns, Software Stability proves to be a key to dealing with many challenges that might influence the system. We have seen in previous articles in this series what the problems associated with traditional software architecture approaches are, particularly when it comes to scalability and stability, and how they negatively impact the software over the course of time. Modern software development approaches require one to produce highly scalable, adaptable and stable systems and platforms that, in many cases, could be more reactive to changes (e.g., self-adaptive systems). Thus, the underlying architecture behind such systems should be flexible and adaptable enough to realize the idea of stability.

Study on Analyzing The Performance of Improved Evolutionary Algorithm Design Based On Time for the Project Scheduling Problem

Vol. 11  Issue 1
Year:2016
Issue:Jul-Sep 
Title:Study on Analyzing The Performance of Improved Evolutionary Algorithm Design Based On Time for the Project Scheduling Problem
Author Name:Gadupudi Dakshayani, Asadi Srinivasulu, P. Samson Anosh Babu and Mahindra .M
Synopsis:
Software project scheduling is a problem faced by software project managers. Different evolutionary algorithms give different results, i.e., different schedules. The project scheduling problem involves identifying every task and the dependencies among the tasks, as well as analyzing the skills the people need to execute those tasks. It also requires estimating the effort and cost prior to the development of the project. Even after estimating the required skills, the number of employees needed, and the cost of resources, the schedules produced may not be accurate because the evolutionary algorithm used relies on a repair mechanism. The repair mechanism unnecessarily reduces dedication values, which may lead to failure of the software product. To overcome problems such as unnecessarily reducing dedications and producing different schedules for the same project, techniques such as mutation, encoding and fitness evaluation are used. These are implemented in the improved evolutionary algorithm, which provides results such as hit rate and fitness values. The hit rate and fitness values help determine whether the schedule obtained with the parameters considered for the fitness is an accurate one.
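As a rough illustration only (not the authors' actual algorithm), the following Python sketch shows the kind of evolutionary loop the synopsis refers to: candidate schedules are encoded as dedication matrices, mutated, and ranked by a toy fitness function that penalizes overload instead of repairing it. All sizes, effort values and penalty weights here are hypothetical.

```python
import random

# Illustrative assumptions: 3 employees, 4 tasks, dedication values in [0, 1].
EMPLOYEES, TASKS = 3, 4
EFFORT = [4.0, 2.0, 3.0, 1.0]            # assumed effort per task (person-months)

def random_individual():
    # Encoding: a dedication matrix, one value per (employee, task) pair.
    return [[random.random() for _ in range(TASKS)] for _ in range(EMPLOYEES)]

def fitness(ind):
    # Toy fitness: reward adequate total dedication per task and penalise
    # overload (an employee dedicating more than 100%) instead of repairing it.
    score = 0.0
    for t in range(TASKS):
        total = sum(ind[e][t] for e in range(EMPLOYEES))
        score -= abs(EFFORT[t] / 4.0 - total)
    for e in range(EMPLOYEES):
        overload = sum(ind[e]) - 1.0
        if overload > 0:
            score -= 2.0 * overload      # penalty, not a repair step
    return score

def mutate(ind, rate=0.1):
    for e in range(EMPLOYEES):
        for t in range(TASKS):
            if random.random() < rate:
                ind[e][t] = random.random()
    return ind

def evolve(pop_size=30, generations=100):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [
            mutate([row[:] for row in random.choice(survivors)])
            for _ in range(pop_size - len(survivors))
        ]
    best = max(population, key=fitness)
    return best, fitness(best)

best_schedule, best_fitness = evolve()
print("best fitness:", round(best_fitness, 3))
```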

Analyzing Software Defect Prediction Using K-Means and Expectation Maximization Clustering Algorithm Based On Genetic Feature Selection

Vol. 11  Issue 1
Year:2016
Issue:Jul-Sep 
Title:Analyzing Software Defect Prediction Using K-Means and Expectation Maximization Clustering Algorithm Based On Genetic Feature Selection
Author Name:R. Reena and R. Thirumalai Selvi
Synopsis:
Predicting defective software components is an economically important activity and has therefore received a good deal of attention. However, making sense of the many, and sometimes seemingly inconsistent, results is difficult. To improve the performance of software defect prediction, this research proposes a combination of a genetic algorithm and a bagging technique. The work comprises two phases. The first phase is feature selection: features are selected using a genetic algorithm, and the bagging technique is used for the class imbalance problem. The second phase is defect prediction: software defects are predicted using K-Means and the Expectation Maximization (EM) algorithm. K-Means is a simple and popular approach that is widely used to cluster/classify data. The EM algorithm is known to be an appropriate optimization for finding compact clusters and guarantees elegant convergence; it assigns an object to a cluster according to a weight representing the probability of membership. The proposed method is evaluated using data sets from the NASA metric data repository, based on evaluation measures such as accuracy and error rate. The experimental results demonstrate that the approach outperforms other competing approaches.
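A minimal sketch of the clustering phase, assuming scikit-learn and a synthetic stand-in for the NASA metric data (the feature values, cluster count and defect-labelling rule below are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for software metrics (e.g. LOC, complexity) per module.
rng = np.random.default_rng(0)
clean = rng.normal(loc=[30, 3], scale=[10, 1], size=(180, 2))    # likely non-defective
buggy = rng.normal(loc=[120, 12], scale=[25, 3], size=(20, 2))   # likely defective
X = StandardScaler().fit_transform(np.vstack([clean, buggy]))

# Defect-prediction phase: cluster modules into two groups (defect-prone vs. not).
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
em_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

# Here modules in the smaller cluster are treated as predicted defect-prone.
print("K-Means defect-prone modules:", min(np.bincount(km_labels)))
print("EM defect-prone modules:     ", min(np.bincount(em_labels)))
```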

An Enhanced WSD Approach for Improving Terminological Issues in Process Models

Vol. 11  Issue 1
Year:2016
Issue:Jul-Sep 
Title:An Enhanced WSD Approach for Improving Terminological Issues in Process Models
Author Name:S. Jyoshna and K. Delhi Babu
Synopsis:
Nowadays, detecting and resolving lexical ambiguities in business process models is a difficult task. Business process models represent all the functions of a business activity in sequential order, so they should not contain any terminological issues; however, there has been a lack of techniques to handle the problem of ambiguity in words due to synonyms and homonyms. In existing work, a word sense disambiguation technique based on BabelNet was used to detect and resolve lexical ambiguities. Word sense disambiguation is a method for finding the correct meaning of an ambiguous word. BabelNet is a widely used lexical resource that combines WordNet and Wikipedia to identify the different meanings of ambiguous words automatically. In addition to the existing work, the authors propose a domain-driven disambiguation approach that uses WordNet domain information to find the domain of a word automatically, in order to detect and resolve lexical ambiguity in business process models.
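Since BabelNet access requires an API key, the sketch below approximates the idea with NLTK's WordNet and the classic Lesk algorithm instead; the activity label and the use of WordNet lexicographer categories as a stand-in for domain information are assumptions for illustration only.

```python
# Requires: pip install nltk, then nltk.download('wordnet') and nltk.download('omw-1.4')
from nltk.corpus import wordnet as wn
from nltk.wsd import lesk

# A tokenised activity label from a process model; "check" is ambiguous
# (verify vs. a bank cheque vs. a chess move, ...).
activity = "check invoice against purchase order".split()

sense = lesk(activity, "check")   # simplified Lesk disambiguation over WordNet
print(sense, "->", sense.definition() if sense else "no sense found")

# Domain-like information can narrow the candidates further; WordNet's
# lexicographer categories give a coarse grouping for each sense.
for s in wn.synsets("check")[:5]:
    print(s.name(), s.lexname(), "-", s.definition())
```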

Automating Traceability Link Recovery Using Information Retrieval

Vol. 11  Issue 1
Year:2016
Issue:Jul-Sep 
Title:Automating Traceability Link Recovery Using Information Retrieval
Author Name:D. Mounika and K. Delhi Babu 
Synopsis:
Software documentation is one of the important factors in software maintenance. Documentation is the written form of information that software engineers can easily understand. Traceability links are used to bridge the gap between the software developers and the software documentation. Previously, a technique called AdDoc automatically detected changes in the documentation. In this paper we propose a method based on Information Retrieval (IR). Information retrieval is a well-known method for automated traceability recovery based on the similarity among software artifacts. IR combines both textual and structural information for traceability recovery in the software documentation. The synonymy problem can be reduced by the information retrieval method, which can retrieve the correct links between the source code and the documentation. In this work, the performance of the information retrieval method is comparatively higher than that of the previous technique.
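A minimal sketch of IR-based traceability recovery using TF-IDF and cosine similarity (the artifacts, documentation snippets and similarity threshold below are hypothetical):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical artifacts: identifiers/comments extracted from source files,
# and sections of the user documentation.
source_artifacts = [
    "class AccountManager create account validate credentials",
    "class ReportGenerator export monthly report pdf",
]
doc_sections = [
    "The system validates user credentials when a new account is created.",
    "Monthly reports can be exported to PDF from the reporting screen.",
]

tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform(source_artifacts + doc_sections)
sims = cosine_similarity(matrix[: len(source_artifacts)], matrix[len(source_artifacts):])

# A candidate traceability link is recovered wherever similarity exceeds a threshold.
for i, row in enumerate(sims):
    for j, score in enumerate(row):
        if score > 0.1:
            print(f"source[{i}] <-> doc[{j}]  similarity={score:.2f}")
```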

Architectural Design of General Cryptanalysis Platform for Pedagogical Purposes

Vol. 11  Issue 1
Year:2016
Issue:Jul-Sep 
Title:Architectural Design of General Cryptanalysis Platform for Pedagogical Purposes
Author Name:Sufyan T. Faraj Al-Janabi and Wael Ali Hussien 
Synopsis:
Cryptanalysis is one of the most challenging research areas in the field of information security. Often, this involves finding the key that has been used for hiding the message and thus arriving at the original information. In order to defend against attacks, one should first have enough knowledge and experience of the existing cryptanalytic attacks on various cryptographic systems. These attacks and their avoidance requirements can be described based on the information available to the opponent, computational time requisites, memory requirements, etc. Security analysis of existing ciphers is very helpful for better understanding the requirements for designing secure and efficient ciphers. The main objective of this paper is to propose a design for a general cryptanalysis platform for pedagogical purposes. Besides the educational benefits expected on the information security side, other benefits of practicing certain software development methods will also be investigated. The whole work can be considered to fall under the general title of ethical hacking. In order to lay a solid ground for the research, the paper starts by surveying different cryptanalysis techniques for various cipher systems. The paper also reports on the progress of our ongoing work.
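As a pedagogical example of the simplest cryptanalytic attack such a platform might demonstrate, the sketch below brute-forces a Caesar cipher key; the ciphertext and the crib word are illustrative assumptions, not part of the proposed platform.

```python
import string

def caesar_decrypt(ciphertext: str, shift: int) -> str:
    # Map every lowercase letter back by `shift` positions.
    table = str.maketrans(
        string.ascii_lowercase,
        string.ascii_lowercase[-shift:] + string.ascii_lowercase[:-shift],
    )
    return ciphertext.lower().translate(table)

def brute_force(ciphertext: str, crib: str = "the"):
    # Exhaustive key search: try every possible shift, keep candidates
    # containing a likely plaintext fragment (the "crib").
    for shift in range(1, 26):
        candidate = caesar_decrypt(ciphertext, shift)
        if crib in candidate:
            yield shift, candidate

ciphertext = "wkh txlfn eurzq ira"      # "the quick brown fox" shifted by 3
for key, plaintext in brute_force(ciphertext):
    print(f"key={key}: {plaintext}")
```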

Accomplishing Bi-Directional Vertical Architectural Scalability with Software Stability

Vol. 10  Issue 4
Year:2016
Issue:Apr-Jun
Title:Accomplishing Bi-Directional Vertical Architectural Scalability with Software Stability
Author Name:M.E. Fayad, Shivanshu K. Singh and Rafael Capilla 
Synopsis:
While moving away from traditional approaches to building software and designing software architecture, the authors realized that it is sensible to migrate to a fundamentally better approach. This refers to the way one looks at the analysis and design of any software. It helps one weave qualities such as adaptability, extensibility, scalability and stability into the system's architecture itself, rather than worrying about them at a much later stage. An architecture that postpones these qualities will compel developers to find fixes to address the related issues later; that approach usually culminates in projects that incur very high costs and unwanted consequences.

Mining Fuzzy Association Rules Using Various Algorithms: A Survey

Vol. 10  Issue 4
Year:2016
Issue:Apr-Jun
Title:Mining Fuzzy Association Rules Using Various Algorithms: A Survey
Author Name:O. Gireesha and O. Obulesu
Synopsis:
The discovery of Association Rules (AR) plays an imperative role in Data Mining, which tries to find correlations among the attributes in a database. Classical Association Rules are meant for Boolean data and suffer from a sharp-boundary problem when handling quantitative data. Thereby, Fuzzy Association Rules (FAR) with fuzzy minimum support and confidence are introduced. In Fuzzy Association Rule Mining (FARM), determining fuzzy sets, tuning membership functions and automatically designing fuzzy sets are prominent objectives. Hence, FARM can be viewed as a multi-objective optimization problem. In this paper, different algorithms for FARM are discussed along with their merits and demerits.
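A small sketch of the sharp-boundary idea: a triangular membership function fuzzifies a quantitative attribute, and fuzzy support is computed from memberships rather than crisp counts (the attribute, breakpoints and records below are hypothetical):

```python
# Triangular membership functions turn a quantitative attribute (e.g. age)
# into fuzzy sets, avoiding the sharp-boundary problem of crisp intervals.
def triangular(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

ages = [22, 27, 34, 41, 58, 63]
young = [triangular(a, 15, 25, 40) for a in ages]

# Fuzzy support of the fuzzy item {Age = young}: average membership over all
# records, instead of a crisp count of records inside a hard interval.
fuzzy_support_young = sum(young) / len(ages)
print("fuzzy support of Age=young:", round(fuzzy_support_young, 3))
```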

A Review of Techniques for Real time Authentication

Vol. 10  Issue 4
Year:2016
Issue:Apr-Jun
Title:A Review of Techniques for Real time Authentication
Author Name:E. Sabarinathan and E. Manoj
Synopsis:
The development of multimedia applications allows digital media to bring convenience to people through easy processing of data. At the same time, it enables attackers to tamper with the works illegally. For the protection of data, there has been growing interest in developing effective techniques to discourage the unauthorized duplication of digital data. Among cryptography, steganography and watermarking, only watermarking provides both copyright protection and authentication. This paper is a comprehensive review of diverse image processing methods and a large number of interrelated applications in various disciplines, including various watermarking techniques. Different existing techniques are discussed along with their drawbacks, and their future scope is also explained.

Data Mining Approach For Advancement of “Association Rule Mining” Using “R Programming”

Vol. 10  Issue 4
Year:2016
Issue:Apr-Jun
Title:Data Mining Approach For Advancement of “Association Rule Mining” Using “R Programming”
Author Name:Rahul Kumar Vij, Parveen Kalra and C.S. Jawalkar 
Synopsis:
Over the years, data study was focused on drawing conclusions from raw datasets, which is where data mining stepped in to show its strengths. This field gives researchers a pictorial view of trends and patterns that can help in predictive maintenance. Algorithms available in the past were unable to handle raw datasets, especially when the major issues were preprocessing the data and building an understandable model. Association rules were devised to predict the correlation between attributes. The current study mainly focuses on the trends followed by the car evaluation dataset attributes to give a suitable rating for a car. The arules library in "R" is used for this purpose, along with some preprocessing. The results are then compared with "Step Induction Association Rule Mining", i.e., the proposed advancement, as well as with the Orange classifier. The results clearly show that the rules formed can help in the predictive analysis of a car from its features and specifications, and that the step induction rules give more clarity about the predictions.
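The study itself works with the arules package in R; purely as a hedged Python analogue of the same rule-mining step (using mlxtend, on a toy dataset whose attribute names are invented here), the idea looks roughly like this:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Toy records shaped like car-evaluation attributes (names are hypothetical).
cars = [
    ["price=high", "safety=low",  "rating=unacceptable"],
    ["price=low",  "safety=high", "rating=good"],
    ["price=med",  "safety=high", "rating=good"],
    ["price=high", "safety=low",  "rating=unacceptable"],
    ["price=low",  "safety=med",  "rating=acceptable"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(cars).transform(cars), columns=te.columns_)

frequent = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```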

Computer Forensics: An Overview

Vol. 10  Issue 4
Year:2016
Issue:Apr-Jun
Title:Computer Forensics: An Overview
Author Name:K.V.N. Rajesh and K.V.N. Ramesh 
Synopsis:
Computer Forensics involves the identification of computer crimes and finding solutions for them by using analytical and investigative techniques. These techniques involve the acquisition, analysis and documentation of computer data related to crimes. This paper describes computer forensics, security issues in computers and networks, types of computer crimes, examples of computer crimes, tools of computer forensics, cyber laws, and courses and careers in computer forensics.

The Current State of Scalability: What is and What Should Be

Vol. 10  Issue 3
Year:2016
Issue:Jan-Mar 
Title:The Current State of Scalability: What is and What Should Be
Author Name:Mohamed E. Fayad, Shivanshu K. Singh, and Rafael Capilla
Synopsis:
Traditional approaches to architecting software are incapable of providing all the solutions for developing scalable architectures. Uncertainty, or a lack of knowledge about which steps or guidelines to use in order to obtain a good modularization of the architecture, is one of the major problems that keeps us from realizing a truly scalable architecture.

Literature Survey on Multimedia Data Retrieval Techniques Using Data Mining

Vol. 10  Issue 3
Year:2016
Issue:Jan-Mar 
Title: Literature Survey on Multimedia Data Retrieval Techniques Using Data Mining
Author Name:D. Saravanan
Synopsis:
Data mining is the process of extracting facts from a given huge set of data. Among the available huge data sets, multimedia is one that contains diverse data such as audio, video, image, text and motion, and such video data play a vital role in the field of video data mining. Extracting information from this huge content requires special techniques. Because of the numerous devices available today, such as cell phones, tablets and other electronic devices, images and video data can be uploaded very easily. Today information comes in electronic form instead of text form: most information such as news, entertainment, books, healthcare and weather forecasts is electronic. The acquisition and storage of video data is an easy task, but the retrieval of information from video data is challenging. This paper brings out some of the issues and challenges involved in image extraction using data mining techniques.

A Survey of Genetic Feature Selection for Software Defect Prediction

Vol. 10  Issue 3
Year:2016
Issue:Jan-Mar 
Title: A Survey of Genetic Feature Selection for Software Defect Prediction
Author Name:R. Reena, and R. Thirumalai Selvi
Synopsis:
Software defect prediction is an important research topic in the software engineering field, especially for addressing the inefficiency and ineffectiveness of the existing industrial practice of software testing and reviews. Software defect prediction performance decreases significantly when the data set contains noisy attributes and class imbalance. Feature selection is generally used in machine learning when the learning task involves high-dimensional, noisy attribute datasets. In this survey, the combination of a Genetic Algorithm and a bagging technique is examined as a research direction for Software Defect Prediction. The survey of publications on this topic leads to the conclusion that the field of genetic algorithm applications is growing fast. The authors' overall aim is to provide an efficient feature selection approach for further development of the research.
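A minimal sketch of genetic feature selection as surveyed here, assuming scikit-learn: chromosomes are binary feature masks and fitness is the cross-validated accuracy of a simple classifier. The bagging step for class imbalance is omitted, and all parameters and the synthetic data are illustrative assumptions.

```python
import random
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for a defect dataset with noisy attributes.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

def fitness(mask):
    # Fitness = mean cross-validated accuracy using only the selected features.
    if not any(mask):
        return 0.0
    cols = [i for i, bit in enumerate(mask) if bit]
    return cross_val_score(GaussianNB(), X[:, cols], y, cv=3).mean()

def ga_select(pop_size=20, generations=15, mutation_rate=0.05):
    population = [[random.randint(0, 1) for _ in range(X.shape[1])] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                     # one-point crossover
            child = [1 - bit if random.random() < mutation_rate else bit
                     for bit in child]                    # bit-flip mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best_mask = ga_select()
print("selected features:", [i for i, bit in enumerate(best_mask) if bit])
```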

Approaching Developments on Parallel Programming Models Through JAVA

Vol. 10  Issue 3
Year:2016
Issue:Jan-Mar 
Title: Approaching Developments on Parallel Programming Models Through JAVA
Author Name:Bala Dhandayuthapani Veerasamy, and G.M. Nasira
Synopsis:
Multicore platforms allow developers to optimize applications by intelligently partitioning different workloads on different processor cores. Currently, application programs are optimized to use multiple processor resources, resulting in faster application performance. The authors' earlier research work focused on native threads for Java on Windows threads, Pthreads, and Intel TBB. The authors also developed Native Threads, Native Pthread, and Java Native Intel TBB on the Windows 32-bit platform. This article aims to identify the future directions of native threads for Java on Windows threads, Pthreads, and Intel TBB through JNI on Windows 64-bit and other platforms. Furthermore, it articulates additional openings to pursue forthcoming developments in parallel programming models through Java.

Leveraging Configuration Management and Product Evolution of SPL Using Variability Aware Design Patterns

Vol. 10  Issue 3
Year:2016
Issue:Jan-Mar 
Title: Leveraging Configuration Management and Product Evolution of SPL Using Variability Aware Design Patterns
Author Name:K.L.S. Soujanya, and A. Ananda Rao
Synopsis:
Software Product Line (SPL) engineering is an emerging approach to satisfying ever-increasing customization demands by reusing commonalities and variabilities. Variability-aware design patterns can leverage SPL configuration management and the evolution of new products. A design pattern is a blueprint or model solution to a frequently occurring design problem. Variability-aware design patterns can address variability and help in customizing software products. Modularization and reusability of artefacts can be realized by using design patterns. The use of design patterns in SPL is a relatively new research area. However, composite design patterns that are variability-aware can lead to the realization of high-quality SPLs. In this context, configuration management and product derivation have to be conceived and handled. There are no dedicated efforts found in the literature to leverage the usage of design patterns in SPL. The authors propose a framework and make provision for variability-aware design patterns. They use the concept of roles and map them to the variability model, and then map design pattern roles to artefacts, thus realizing variability with industry best practices. This helps in improving the dynamic reconfiguration of SPL artefacts. Their empirical evaluation shows that the approach improved performance by up to 20% with respect to SPL configuration management and product derivation. A prototype demonstrates the proof of concept.

Requirements Elicitation Approach for Cyber Security Systems

Vol. 10  Issue 3
Year:2016
Issue:Jan-Mar 
Title: Requirements Elicitation Approach for Cyber Security Systems
Author Name:Issa Atoum 
Synopsis:
Requirements elicitation is considered the most important step in software engineering. There are several techniques to elicit requirements; however, they have limitations. Most approaches are general qualitative approaches, and thus do not suit specific software domains such as cyber security. This article proposes a new technique to elicit requirements from cyber security strategies. The approach is able to formally define the strengths of requirements and link them with the respective analyst's expertise. Consequently, management can easily select the appropriate requirements to be implemented. The use of the proposed approach on a selected cyber security domain showed its applicability to cyber security framework implementations.

Literature Survey on Web Based Knowledge Extraction

Vol. 10  Issue 2
Year:2015
Issue:Oct-Dec 
Title: Literature Survey on Web Based Knowledge Extraction
Author Name:D. Saravanan
Synopsis:
The number of Internet users has increased with advances in technology. People use the Internet for their business needs, education, online marketing, social communication and more, and rely on it for the effectiveness of their day-to-day operations. The content on the web has also increased due to various factors: any user from any place can easily upload any type of file, and all content today is available in the form of digital data. Browsing to the correct web page within this huge repertoire is a really challenging task for the user. Retrieving the relevant and correct content is not an easy job, and a number of research works have been carried out in the field of web extraction. This paper reviews some of the web extraction techniques and methods.

Test Data Generator to Improve Code Coverage in Object Oriented Programming

Vol. 10  Issue 2
Year:2015
Issue:Oct-Dec 
Title: Test Data Generator to Improve Code Coverage in Object Oriented Programming
Author Name:R.E. Harish Goud, C. Kishore and A. Srinivasulu
Synopsis:
The features of Object Oriented Programming (i.e., abstraction, encapsulation and visibility) prevent direct access to some modules of the source code, so automated test data generation becomes a challenging task. To solve this problem, Search Based Software Testing (SBST) has been applied. Previously, a random search approach was applied to generate test suites achieving code coverage of 70% in less than 10 seconds. To address the same problem, new search algorithms are used which generate test suites with higher code coverage than earlier approaches in less search time. The proposed approach first describes how to structure the test data generation problem for unit testing. Based on static analysis, it considers methods or constructors that change the state in ways that may help reach the test target. It then introduces a generator of class instances that uses two strategies, Seeding and Diversification, to increase the likelihood of reaching the test target, which may produce test suites with higher code coverage in less search time.

A Current Research Activity for Big Data Concept

Vol. 10  Issue 2
Year:2015
Issue:Oct-Dec 
Title: A Current Research Activity for Big Data Concept
Author Name:K. Prabu 
Synopsis:
In recent years, scientists have regularly encountered limitations due to large data sets in many areas. Big data refers to high-volume, high-velocity and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision making; it refers to data volumes in the range of Exabytes (10^18 bytes) and beyond. Such volumes exceed the capacity of current online storage and processing systems. Data, information, and knowledge are being created and collected at a rate that is rapidly approaching the Exabyte range, and their creation and aggregation are accelerating and will approach the Zettabyte range within a few years. Data sets have grown in size in part because they are increasingly being gathered from ubiquitous information-sensing mobile devices, aerial sensory technologies, software logs, cameras, microphones, Radio-Frequency Identification (RFID) readers, and Wireless Sensor Networks. Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process within a tolerable elapsed time. Big data sizes are a constantly moving target, as of 2012 ranging from a few dozen Terabytes to many Petabytes in a single data set. Big data is difficult to work with using most Relational Database Management Systems and desktop statistics and visualization packages, requiring instead "massively parallel software running on tens, hundreds, or even thousands of servers". This paper analyzes the issues and challenges of recent research activity on big data concepts.

An Optimal ANT Algorithm for Balanced Scheduling in Computing Grids

Vol. 10  Issue 2
Year:2015
Issue:Oct-Dec 
Title: An Optimal ANT Algorithm for Balanced Scheduling in Computing Grids
Author Name:K. Jairam Naik, A. Jagan and N. Satya Narayana 
Synopsis:
Grid computing relies on distributed heterogeneous resources to support convoluted computing problems. Grids are mainly classified into computing grids and data grids. Scheduling jobs in a computing grid is a major challenge. For efficient and incentive-based use of grid resources, an optimal mechanism is needed that can distribute jobs to the prime resources and balance the workload among them. In the real world, ants have a unique ability to team up to find an optimal path to food resources, and the behaviour of real ants is simulated by an ant algorithm. In this paper, the authors propose an optimized ant algorithm for balanced job scheduling in the computing grid environment. The primary goal of the approach is to schedule and balance the entire system load so that no resource is overloaded or underloaded under any circumstances. To achieve this goal, the Novel Balanced Ant Colony Optimization (NBACO) algorithm continually calculates and updates local and global pheromone values, which keeps resources normally loaded. The secondary aim is to considerably improve grid performance in terms of increased throughput and reduced makespan and resubmission time. To achieve this goal, NBACO always finds the most reliable available resource in the grid, the one with the lowest tendency to fail, and assigns the job to it. According to the experimental results, NBACO can outperform other job scheduling and load balancing algorithms.
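Not NBACO itself, but a toy ant-colony scheduling loop that shows the pheromone evaporation/deposit and load-aware resource choice the synopsis describes; resource speeds, job counts and parameters are invented for illustration.

```python
import random

# Hypothetical grid: pheromone[i][j] reflects how attractive resource j is for job i.
JOBS, RESOURCES = 5, 3
speed = [4.0, 2.0, 1.0]                   # processing power of each resource
pheromone = [[1.0] * RESOURCES for _ in range(JOBS)]
ALPHA, RHO, Q = 1.0, 0.1, 1.0             # pheromone weight, evaporation, deposit

def choose_resource(job, load):
    # Probability ~ pheromone, discounted by current load (keeps resources balanced).
    weights = [pheromone[job][r] ** ALPHA / (1.0 + load[r]) for r in range(RESOURCES)]
    return random.choices(range(RESOURCES), weights=weights)[0]

best_schedule, best_makespan = None, float("inf")
for _ in range(50):                       # each iteration: one "ant" builds a schedule
    load = [0.0] * RESOURCES
    schedule = []
    for job in range(JOBS):
        r = choose_resource(job, load)
        load[r] += 1.0 / speed[r]
        schedule.append(r)
    makespan = max(load)
    if makespan < best_makespan:
        best_schedule, best_makespan = schedule, makespan
    # Global update: evaporate everywhere, then deposit on the best-so-far assignment.
    for job in range(JOBS):
        for r in range(RESOURCES):
            pheromone[job][r] *= (1.0 - RHO)
        pheromone[job][best_schedule[job]] += Q / best_makespan

print("best schedule:", best_schedule, "makespan:", round(best_makespan, 2))
```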

The Clustered Similarity (CS) Approach for Relevant Disease Information Extraction

Vol. 10  Issue 2
Year:2015
Issue:Oct-Dec 
Title:The Clustered Similarity (CS) Approach for Relevant Disease Information Extraction
Author Name:N. Satyanandam and Satyanarayana 
Synopsis:
Machine Learning has gained momentum in many domains of research and has only recently become a dependable tool in the medical domain. Automatic learning is used in tasks such as medical decision support, medical imaging, protein-protein interaction, extraction of medical knowledge, and general patient management care. Machine Learning (ML) is envisioned as a tool by which computer-based systems can be integrated into the healthcare field to deliver better, well-organized medical care. The paper describes an ML-based methodology for building an application capable of identifying and disseminating healthcare information. It extracts sentences from published medical papers that mention diseases and treatments, and identifies the semantic relations that exist between diseases and treatments. This paper proposes a new way of information retrieval: a clustered similarity approach is used to overcome the drawbacks of the previous approach. The results obtained show that the proposed work performs considerably better than existing techniques.

The Current State of Scalability only Scaling Up Out

Vol. 10  Issue 1
Year:2015
Issue:Jul-Sep
Title:The Current State of Scalability only Scaling Up Out
Author Name:Mohamed E. Fayad and Shivanshu K. Singh 
Synopsis:
Reduction of cost and time is one of the major concerns in software development. Therefore, developing scalable architectures, which can efficiently accommodate evolving requirements and adapt to new environments, is worthy of further research. Current development approaches do not guarantee fully scalable architectures, due to their inability to detect and identify where and how new layers are to be added to, or current layers removed from, the architecture being developed. In addition, there is a perceived shortage of the architectural points that would be used to connect or remove other architectures and applications. Consequently, these architectures may face a total collapse or a considerable increase in cost and time when new changes are introduced. When businesses experience substantial increases in demand for their services, their main concern is their application architecture's ability to scale over time to assure proper handling of these loads. The architecture is required to scale and adapt efficiently in such a manner that it fits both constrained and unconstrained environments, while still taking full advantage of the available resources to improve its performance. There are multiple stages in the lifecycle of a software product: development starts from the requirements analysis stage, moving on to design, then coding, testing and finally delivery, which may involve deployment and configuration of the means to deliver the software to its users in the form of a final software system. This and the subsequent columns look at the definition of scalability from the perspective of software architectures.

Intelligence Intrusion Multi Detection Prevention System Principles

Vol. 10  Issue 1
Year:2015
Issue:Jul-Sep
Title:Intelligence Intrusion Multi Detection Prevention System Principles
Author Name:S. Murugan and K. Kuppusamy
Synopsis:
The intelligent Intrusion Detection and Prevention Systems (IDPSs) found in the literature survey are effective only at identifying and detecting known network attacks and are unable to evaluate the risk to a network service. In order to overcome the limitations of existing Intrusion Detection Systems (IDSs), a new active defense system with intelligence principles, named IIDPS (Intelligence Intrusion Detection Prevention System), for detecting and preventing unknown malware is proposed in this article. This system fulfills security objectives such as authenticity, confidentiality, integrity, availability, and non-repudiation.

Map Reduce Architecture for Grid

Vol. 10  Issue 1
Year:2015
Issue:Jul-Sep
Title:Map Reduce Architecture for Grid
Author Name:Neeraj Rathore
Synopsis:
Recently, many large-scale computer systems have been built to meet the high storage and processing demands of compute- and data-intensive applications. MapReduce is one of the most popular programming models designed to support the development of such applications. MapReduce is a software framework for easily writing applications that process vast amounts of data in parallel, using multiple CPUs on various machines, in a reliable and fault-tolerant manner. The various input and output parameters that are part of this model have been identified, and the proposed architecture is implemented in open-source Java. The MapReduce programming model is easy to use, even for programmers without experience with parallel and distributed systems, since it hides the details of parallelization, fault tolerance, locality optimization, and load balancing. A large variety of problems are easily expressible as MapReduce computations. Finally, an implementation of MapReduce that scales to large clusters comprising thousands of machines has been developed. The implementation makes efficient use of these machine resources and is therefore suitable for many of the large computational problems encountered.
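The framework described is implemented in Java; the following in-process Python imitation of the map, shuffle and reduce phases merely illustrates the programming model on a word-count example, not the proposed architecture.

```python
from collections import defaultdict
from functools import reduce

documents = ["map reduce splits the work", "reduce combines the partial results"]

# Map phase: each input record is turned into (key, value) pairs independently,
# so many mappers could run in parallel on different machines.
def mapper(doc):
    return [(word, 1) for word in doc.split()]

# Shuffle phase: group all values emitted for the same key.
groups = defaultdict(list)
for doc in documents:
    for key, value in mapper(doc):
        groups[key].append(value)

# Reduce phase: fold the values for each key into a single result.
def reducer(key, values):
    return key, reduce(lambda a, b: a + b, values)

word_counts = dict(reducer(k, v) for k, v in groups.items())
print(word_counts)
```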

A Modern Approach on Student Performance Prediction using Multi-Agent Data Mining Technique

Vol. 10  Issue 1
Year:2015
Issue:Jul-Sep
Title:A Modern Approach on Student Performance Prediction using Multi-Agent Data Mining Technique
Author Name:L.V. Reddy, K. Yogitha, K. Bandhavi, G. Sai Vinay, and G. Dinesh Kumar
Synopsis:
Among the evolving research areas in data mining, one field of interest is education. This emerging field of research is called Educational Data Mining, which deals with data related to the field of education. One of the main concerns in the educational field is the academic scores of students, which contribute to the growth of the student as well as the institution, so predicting a student's performance is very important. To maintain and increase a student's scores, prediction of their performance is necessary, and this objective is fulfilled by the use of data mining. High prediction accuracy of students' performance helps identify slow-performing students at the beginning of the learning process. Data mining techniques are used to analyze models or patterns in data, and are also helpful in decision-making [19]. Boosting [21][22] is one of the most popular techniques for constructing classifier ensembles to improve classification accuracy. Adaptive Boosting (AdaBoost) is a well-known Boosting algorithm; it is applicable to binary classification and cannot be used for multiclass classification directly. Therefore, an extension of AdaBoost, the SAMME boosting technique, is used for multiclass classification without reducing it to a set of binary sub-problems. In this paper, the authors evaluate a student performance prediction system that predicts the performance of students from their data with high prediction accuracy and provides help to slow-learning students by using optimization rules.
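A minimal sketch of multiclass boosting with SAMME, assuming scikit-learn and a synthetic stand-in for the student data (the feature counts and the three grade classes are illustrative assumptions, not the authors' dataset):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for student records: features -> one of three grade classes.
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# SAMME extends AdaBoost to multiclass problems directly, without reducing
# them to a set of binary sub-problems.
model = AdaBoostClassifier(n_estimators=100, algorithm="SAMME", random_state=0)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```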

A Comparative Analysis of NOC over MVG to Improve Quality of Software

Vol. 10  Issue 1
Year:2015
Issue:Jul-Sep
Title:A Comparative Analysis of NOC over MVG to Improve Quality of Software
Author Name:Dharmendra Lal Gupta, Anil Kumar Malviya, Manish Gaur and Vikash Chauhan
Synopsis:
Software metrics are among the most important elements for predicting quality, and the relationship of the Number of Children metric with cyclomatic complexity is a significant matter. In this paper, the relationship between NOC (Number of Children) and MVG (McCabe's Cyclomatic Complexity) is explained using three real projects developed in the Java language. The authors have also empirically computed the NOC and MVG metrics of these projects and found the correlation between the two. It is found that as NOC increases, MVG also increases in polynomial form, showing a directly proportional relationship. The paper provides an optimal value of NOC up to which the software remains quality software.
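Purely to illustrate the kind of analysis described (the NOC/MVG values below are hypothetical, not the paper's measurements), a correlation and a second-degree polynomial fit can be computed as follows:

```python
import numpy as np

# Hypothetical per-class measurements: NOC (Number of Children) and the
# corresponding MVG (McCabe's Cyclomatic Complexity) values.
noc = np.array([0, 1, 2, 3, 4, 5, 6])
mvg = np.array([3, 5, 9, 15, 24, 35, 49])

corr = np.corrcoef(noc, mvg)[0, 1]       # strength of the association
coeffs = np.polyfit(noc, mvg, deg=2)     # second-degree polynomial fit

print(f"Pearson correlation: {corr:.3f}")
print("MVG ~ {:.2f}*NOC^2 + {:.2f}*NOC + {:.2f}".format(*coeffs))
```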

A Pattern for Stress and its Resolution

Vol. 10  Issue 1
Year:2015
Issue:Jul-Sep
Title:A Pattern for Stress and its Resolution
Author Name:Mohamed E. Fayad and Charles A. Flood III
Synopsis:
In business, stress can account for a surprising energy expense, leaving employees drained and unproductive. Stress can also have detrimental effects on teamwork, further hampering a team's ability to function. The objective of this paper is to provide a model of stress that may be used to analyze stress in practically any environment or scenario. This should subsequently provide clues, if not the means, to effectively combat stress, both in oneself and in others. The article presents a comparison of the traditional and stable models of stress and provides some solutions for managers to manage or mitigate the stress of employee and employer alike. The information presented here will benefit managers by helping them reduce or manage the stress of their subordinates, thus improving their performance and productivity. As an added benefit, the less stressful working environment thus created will improve the attitude of all involved, again boosting productivity and possibly attracting new talent.