
Feature selection method for High Dimensional Data

Publication Date : 05/11/2016


DOI : 10.21884/IJMTER.2016.3102.BYVXE

Author(s) :

Swati Vishnu Jadhav, Vishwakarma Pinki


Volume/Issue :

Volume 3, Issue 10 (11 - 2016)



Abstract :

Feature selection is the process of identifying a subset of the most useful features that produces results comparable to those obtained with the original, entire set of features. A feature selection algorithm may be evaluated from both the efficiency and the effectiveness points of view: efficiency concerns the time required to find a subset of features, while effectiveness concerns the quality of that subset. Based on these criteria, a Fast clustering-based feature Selection algorithm (FAST) is proposed and experimentally evaluated. The FAST algorithm works in two steps. In the first step, features are divided into clusters using graph-theoretic methods. In the second step, the most representative feature, the one most strongly related to the target classes, is selected from each cluster to form the subset of features. Because features in different clusters are relatively independent, the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. A minimum spanning tree (MST) built with Prim's algorithm grows only one tree at a time. To ensure the efficiency of FAST, the MST is instead constructed with Kruskal's algorithm as the clustering method, and a running-time graph comparing Prim's and Kruskal's algorithms is presented.
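To make the two-step procedure concrete, the following is a minimal Python sketch of a FAST-style pipeline. It assumes symmetric uncertainty (estimated by simple histogram discretisation) as the correlation measure, a complete feature graph weighted by 1 - SU, Kruskal's algorithm for the MST, and an edge-removal rule that drops MST edges whose feature-feature correlation is below both features' relevance to the class. The function names, the bin count, and these modelling choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


class UnionFind:
    """Disjoint-set structure used by Kruskal's algorithm."""

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[rb] = ra
        return True


def symmetric_uncertainty(x, y, bins=10):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), estimated from a 2-D histogram
    (discretisation by histogram is an assumption of this sketch)."""
    counts, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = counts / counts.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
    mi = hx + hy - hxy
    return 2.0 * mi / (hx + hy) if (hx + hy) > 0 else 0.0


def fast_feature_selection(X, y, bins=10):
    """FAST-style selection: build a complete feature graph weighted by
    1 - SU(f_i, f_j), take its MST with Kruskal's algorithm, remove weak
    MST edges to form clusters, and keep the most class-relevant feature
    of each cluster."""
    n_features = X.shape[1]

    # Step 0: relevance of each feature to the class
    su_target = np.array([symmetric_uncertainty(X[:, i], y, bins)
                          for i in range(n_features)])

    # All candidate edges of the complete graph, sorted by ascending weight
    edges = []
    for i in range(n_features):
        for j in range(i + 1, n_features):
            su_ij = symmetric_uncertainty(X[:, i], X[:, j], bins)
            edges.append((1.0 - su_ij, su_ij, i, j))
    edges.sort()

    # Step 1a: Kruskal's algorithm -- greedily add the lightest edge that
    # connects two different components, yielding the MST
    uf = UnionFind(n_features)
    mst = [(su_ij, i, j) for _, su_ij, i, j in edges if uf.union(i, j)]

    # Step 1b: partition the MST -- keep an edge only if the feature-feature
    # correlation is at least one endpoint's relevance to the class
    forest = UnionFind(n_features)
    for su_ij, i, j in mst:
        if su_ij >= su_target[i] or su_ij >= su_target[j]:
            forest.union(i, j)

    # Step 2: one representative (most class-relevant) feature per cluster
    best = {}
    for f in range(n_features):
        root = forest.find(f)
        if root not in best or su_target[f] > su_target[best[root]]:
            best[root] = f
    return sorted(best.values())


if __name__ == "__main__":
    # Illustrative usage on synthetic data
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y = (X[:, 0] + X[:, 3] > 0).astype(int)
    print(fast_feature_selection(X, y))  # indices of the selected features
```

Swapping the Kruskal step for an equivalent Prim-based MST construction and timing both, as the abstract suggests, would reproduce the reported running-time comparison.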




