Sunday, 1 January 2017

Software Defect Prediction using Average Probability Ensemble Technique

Vol. 9  Issue 4
Year:2015
Issue:Apr-Jun 
Title:Software Defect Prediction using Average Probability Ensemble Technique
Author Name:T. Vara Prasad, C. Silpa and A. Srinivasulu
Synopsis:
In present-generation software development, testing plays a major role in defect prediction. Software defect data often contains redundancy, correlation, irrelevant features, and missing values, which makes it hard to determine whether software is defective or non-defective. As software supports day-to-day business activities, software attribute prediction tasks such as effort estimation, maintainability assessment, and defect and quality classification are attracting growing interest from both the academic and industrial communities. Among the several methods used for software defect prediction, random forest and gradient boosting are effective, even though defect datasets contain incomplete or irrelevant features. The proposed average probability ensemble technique is used to overcome those problems and gives higher classification performance than the other methods: it integrates three algorithms and averages their classification outputs, giving more accurate results on publicly available software defect datasets.
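
The averaging step is straightforward to illustrate. Below is a minimal, hypothetical Python sketch using scikit-learn: random forest and gradient boosting are named in the synopsis, while the third base learner (logistic regression), the synthetic data, and the 0.5 threshold are illustrative assumptions, not the paper's exact setup.

# Minimal sketch of an average-probability ensemble (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three base classifiers, since the synopsis mentions integrating three
# algorithms; the third one here is an assumption.
models = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    GradientBoostingClassifier(random_state=0),
    LogisticRegression(max_iter=1000),
]
for m in models:
    m.fit(X_train, y_train)

# Average the predicted defect probabilities and threshold at 0.5.
avg_proba = np.mean([m.predict_proba(X_test)[:, 1] for m in models], axis=0)
y_pred = (avg_proba >= 0.5).astype(int)
print("accuracy:", np.mean(y_pred == y_test))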

Document Summarization using A Hybrid Trace Thrash Data Modeling and Classification Algorithms

Vol. 9  Issue 4
Year:2015
Issue:Apr-Jun 
Title:Document Summarization using A Hybrid Trace Thrash Data Modeling and Classification Algorithms
Author Name:S. Dilli Arasu and R. Thirumalaiselvi
Synopsis:
Multi-document summarization is used for understanding and analyzing large document collections; the major sources of these collections are news archives, blogs, tweets, web pages, research papers, web search results, and technical reports available over the web and elsewhere. A few example applications of multi-document summarization are analyzing web search results to assist users in further browsing and generating summaries of news articles. Document processing and summary generation over a large text collection is a computationally complex task, and in the era of Big Data analytics, where data collections are huge, there is a need for algorithms that can summarize large text collections quickly. Here the authors present Trace Thrash, a multi-document summarizer built with the help of semantic-similarity-based clustering over the distributed computing framework Trace Thrash.
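
The Trace Thrash framework itself is not described further here, but the core idea of similarity-based clustering for extractive summarization can be sketched. The hypothetical Python example below clusters sentences by TF-IDF similarity with k-means and picks one representative sentence per cluster; the sentences, cluster count, and use of scikit-learn are illustrative assumptions, not the paper's method.

# Illustrative sketch of similarity-based extractive summarization.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

sentences = [
    "Big data collections need fast summarization algorithms.",
    "Summarizing large text collections is computationally complex.",
    "Search results can be summarized to help users browse.",
    "News articles are a major source of document collections.",
]

vectors = TfidfVectorizer().fit_transform(sentences)
k = 2  # number of summary sentences to extract (an assumption)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)

# Pick the sentence closest to each cluster centroid as its representative.
summary = []
for c in range(k):
    idx = np.where(labels == c)[0]
    centroid = np.asarray(vectors[idx].mean(axis=0))
    dists = np.linalg.norm(vectors[idx].toarray() - centroid, axis=1)
    summary.append(sentences[idx[np.argmin(dists)]])
print(summary)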

Evaluating the Privacy of User Profiles in Personalized Information Systems

Vol. 9  Issue 4
Year:2015
Issue:Apr-Jun 
Title:Evaluating the Privacy of User Profiles in Personalized Information Systems
Author Name:U. Yobu and B. Lalitha
Synopsis:
Collaborative tagging is one of the most well-known and widespread services available online. The key point of collaborative tagging is to characterize resources based on user opinion, stated in the form of tags. Collaborative tagging supplies source material for the Semantic Web, in which the network connects all online resources based on their meanings. While this information is a valuable source, its sheer volume limits its value. Many research projects and corporations are exploring personalized applications that control this overflow by tailoring the information presented to individual users. These applications all utilize some information about individuals in order to be effective; this area is generally called user profiling. This paper reviews some of the most standard techniques for gathering information about users and for representing and constructing user profiles. It focuses mainly on measuring the privacy of user profiles through KL divergence and Shannon entropy, showing how tag suppression protects end-user privacy.
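
Both measures are standard and easy to compute. The sketch below, with invented tag distributions, shows the usual formulation: the Shannon entropy of a user's tag profile and its KL divergence from the population's tag distribution, where a larger divergence indicates a more identifying (less private) profile. The distributions and names are assumptions for illustration.

# Sketch: profile privacy via Shannon entropy and KL divergence.
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    # D(p || q): how far a user's tag profile p deviates from the
    # population distribution q; larger means a more identifying profile.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

population = [0.4, 0.3, 0.2, 0.1]   # aggregate tag distribution (invented)
user       = [0.7, 0.1, 0.1, 0.1]   # one user's tag distribution (invented)

print("user entropy:", shannon_entropy(user))
print("KL(user || population):", kl_divergence(user, population))

In this framing, tag suppression removes tags so that the user's distribution moves closer to the population's, driving the KL divergence down.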

Privacy Preserving Access Control to Incremental Data

Vol. 9  Issue 4
Year:2015
Issue:Apr-Jun 
Title:Privacy Preserving Access Control to Incremental Data
Author Name:V. Ravi Kumar Yadav and B. Lalitha
Synopsis:
Data privacy issues are becoming increasingly important for many applications. Research on database security can be broadly classified into access control research and data confidentiality research, and there is little overlap between these two areas. Access Control Mechanisms (ACMs) protect sensitive information from unauthorized users, but even authorized users may misuse the data and reveal the privacy of the individuals to whom the data refers. A privacy protection mechanism provides greater confidentiality for the sensitive information being shared; it is achieved by anonymization techniques [8]. Privacy is achieved at some cost to the accuracy and consistency of the user information, i.e., to its precision. This paper offers a privacy-preserving access control mechanism for incremental relational data. It uses an accuracy-constrained, privacy-preserving access control framework for an incremental relational database, built around the concept of an imprecision bound for the access control mechanism: an imprecision bound is set for every query. For the privacy protection mechanism, it uses a combination of k-anonymity and fragmentation.
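
The imprecision-bound idea can be illustrated with a small sketch. Because queries are answered from whole k-anonymous partitions, a query that only partially overlaps a partition returns extra tuples; the imprecision is that excess, and the mechanism admits a query only if its imprecision stays within the bound. The partitions, the uniform-spread assumption, and the bound value below are all invented for illustration.

# Sketch of a per-query imprecision bound over generalized partitions.
# Each partition: (age_low, age_high, tuple_count); k-anonymous with k = 3.
partitions = [(20, 29, 5), (30, 39, 4), (40, 49, 3)]

def query_imprecision(q_low, q_high):
    """Excess tuples returned when the range query [q_low, q_high]
    is answered from whole partitions that overlap it."""
    overlapping = [(lo, hi, n) for lo, hi, n in partitions
                   if hi >= q_low and lo <= q_high]
    returned = sum(n for _, _, n in overlapping)
    # Assume tuples are spread uniformly inside a partition (an assumption).
    exact = sum(n * (min(hi, q_high) - max(lo, q_low) + 1) / (hi - lo + 1)
                for lo, hi, n in overlapping)
    return returned - exact

bound = 2.0  # imprecision bound set for this query (invented)
imp = query_imprecision(25, 35)
print("imprecision:", imp, "within bound:", imp <= bound)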

Regression Testing Using IGTCP Algorithm for Industry Based Applications

Vol. 9  Issue 4
Year:2015
Issue:Apr-Jun 
Title:Regression Testing Using IGTCP Algorithm for Industry Based Applications
Author Name:K. Hema Shankari and R. Thirumalai Selvi
Synopsis:
Regression testing is testing performed on modified software during the maintenance phase. It is a costly but crucial problem in software development, and both the research community and industry have paid much attention to it. This paper surveys current practice in industry and also tries to find out whether gaps exist between research and practice; the observations show that some issues concern both the research community and industry. The work discusses problems with current research on regression testing and with quality control in the application of regression testing in engineering practice, and proposes a practical regression method that combines change impact analysis, a business rules model, cost-risk assessment, and test case management. The paper presents an approach to prioritizing regression test cases based on factors such as the rate of fault detection, the percentage of faults detected, and the risk detection capability. The proposed approach is compared with a previous approach using the APFD metric, and the results show that the proposed approach outperforms the earlier one.
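
APFD (Average Percentage of Faults Detected) is the standard yardstick for such comparisons: for n test cases and m faults, APFD = 1 - (TF_1 + ... + TF_m)/(n*m) + 1/(2n), where TF_i is the position in the execution order of the first test that reveals fault i. A minimal sketch follows; the fault matrix and orderings are invented, not the paper's data.

# Sketch: computing APFD for a given test-case ordering.
def apfd(order, fault_matrix):
    """order: test indices in execution order.
    fault_matrix[t][f] is True if test t detects fault f."""
    n = len(order)
    m = len(fault_matrix[0])
    first_positions = []
    for f in range(m):
        for pos, t in enumerate(order, start=1):
            if fault_matrix[t][f]:
                first_positions.append(pos)
                break
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

# 4 tests x 3 faults (invented).
faults = [
    [True,  False, False],
    [False, True,  False],
    [True,  True,  True ],
    [False, False, True ],
]
print("unprioritized:", apfd([0, 1, 2, 3], faults))  # 0.625
print("prioritized:  ", apfd([2, 0, 3, 1], faults))  # 0.875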

Legality Stable Analysis Pattern

Vol. 9  Issue 4
Year:2015
Issue:Apr-Jun 
Title:Legality Stable Analysis Pattern
Author Name:Mohamed E. Fayad and Siddharth Jindal 
Synopsis:
Legality is an umbrella term that encompasses every aspect of dealing and working with different entities in a lawful manner. Although legality finds application across almost every existing system, an explicitly defined pattern for it does not exist even now. Hence, this paper introduces a process for modeling the different kinds of related applications without needing to rethink the problem every time from scratch. The legality pattern represents the core knowledge of anything that complies with the regulations of its arbitration authority. In addition, this pattern can be reused as part of any new model that deals with legality in one way or another. The pattern utilizes the concepts defined in the Software Stability Model (SSM) to develop a more stable and generic model, which helps eliminate the need to separately model legality for each related domain.
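
As a rough illustration of how such a stable pattern might look in code: in SSM terms, an Enduring Business Theme (EBT) such as Legality connects to Business Objects (BOs), with application-specific Industrial Objects plugged in per domain. The class names other than Legality, and the compliance check itself, are hypothetical assumptions, not the paper's model.

# Hypothetical sketch of a stable analysis pattern in SSM style.
from dataclasses import dataclass, field

@dataclass
class AnyRule:            # BO: a regulation of some arbitration authority
    description: str

@dataclass
class AnyParty:           # BO: an entity whose conduct is checked
    name: str

@dataclass
class Legality:           # EBT: the enduring concept itself
    rules: list = field(default_factory=list)

    def complies(self, party: AnyParty, satisfied: set) -> bool:
        # A party is legal iff it satisfies every rule of the authority.
        return all(r.description in satisfied for r in self.rules)

authority = Legality(rules=[AnyRule("pay taxes"), AnyRule("hold licence")])
print(authority.complies(AnyParty("Acme"), {"pay taxes", "hold licence"}))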

A Systematic Survey on Waterfall Vs. Agile Vs. Lean Process Paradigms

Vol. 9  Issue 3
Year:2015
Issue:Jan-Mar
Title:A Systematic Survey on Waterfall Vs. Agile Vs. Lean Process Paradigms
Author Name:K.K. Baseer, A. Rama Mohan Reddy and C. Shoba Bindu
Synopsis:
We intend to highlight the key features of and future directions for research on the waterfall, agile, and lean process paradigms from 2001 to 2014, exemplifying how research on waterfall, agile, and lean has progressively increased over the past fourteen years by inspecting articles and papers from scientific and standard publications. The survey materialized as a three-fold process. First, the authors investigated the amalgamation of waterfall and agile, and then proceeded to agile-lean. Second, they performed a structural analysis of different authors' prominent contributions, tabulated by category and presented graphically. Third, they grouped conceptually similar work in the field and provide a summary table of all process models. In the context of agile, monitoring bottlenecks such as business value and high-speed projects beyond the capacity of an organization (CMMI levels) is not clearly defined; but by incorporating lean and agile principles together, high-quality products can be produced at low cost, and the desired delivery speed can be achieved with optimal capacity. The survey then draws conclusions by conferring future research directions in the software engineering paradigm, such as reducing errors early in the software development process and combining manufacturing principles with lean, agile, or both, rather than using a solo model, to provide high-quality results. One such principle, Poka-Yoke (PY), is a mistake-proofing technique used in product design that improves the software development process when integrated into the software engineering paradigm. A new approach for implementing the Poka-Yoke method in software performance engineering is proposed.
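
Poka-Yoke in code usually means making a mistake impossible at its source rather than detecting it later. The hypothetical Python sketch below mistake-proofs a performance-reporting function by validating a value at construction time; the Percentage class and the function are invented examples, not the paper's proposed method.

# Sketch: mistake proofing (Poka-Yoke) via validation at the source.
class Percentage:
    def __init__(self, value: float):
        if not 0.0 <= value <= 100.0:
            raise ValueError(f"percentage out of range: {value}")
        self.value = value

def report_cpu_load(load: Percentage) -> str:
    # Callers cannot pass a raw, unvalidated number by mistake.
    return f"CPU load: {load.value:.1f}%"

print(report_cpu_load(Percentage(73.5)))   # ok
# report_cpu_load(Percentage(150))         # fails fast, at the source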