Evaluation of SVM Kernels for Health Risk Assessment

Author Name(s): Amrik Singh, K. R. Ramkumar
Author Email: amriksingh07@gmail.com

Abstract

According to the statistics of the National Family Health Survey – 4 (2015-16), only 28.7% of families in India are covered under health insurance. Many categories of the population remain uncovered because insurance companies are reluctant to insure them, the main reason being that they are unable to properly compute the fitness level of the people who want to get insured. In this paper, the preliminary analysis of the dataset is given equal importance to constructing and fine-tuning the machine learning model. Features for the machine learning models were selected based on correlation as well as on the medical significance of the attributes; features that are medically significant and have minimal correlation among themselves were selected for constructing the SVM kernel models. The most appropriate SVM kernel was chosen through multiple evaluations of six SVM kernels. It was found that the Medium Radial kernel performs best in terms of accuracy, while the Linear kernel has the shortest training time among all kernels. The values of the C-statistic are consistent with the accuracy values in almost all cases, which shows that the Medium Radial kernel is the best choice to automate health risk assessment, as it is both fast and the most accurate in prediction.
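
To make the correlation-based filtering step concrete, the following is a minimal Python sketch, not the authors' actual code: the DataFrame, the file name, the column names, and the 0.8 threshold are all illustrative assumptions. It keeps only those features whose pairwise absolute correlation with every already-kept feature stays below the threshold; medically significant attributes would then be retained from this reduced set.

import pandas as pd

def drop_correlated_features(df: pd.DataFrame, threshold: float = 0.8) -> list:
    """Return feature names whose pairwise absolute Pearson
    correlation with all previously kept features is below threshold."""
    corr = df.corr().abs()  # pairwise |Pearson r| matrix
    keep = []
    for col in corr.columns:
        # keep a feature only if it is weakly correlated
        # with every feature already kept
        if all(corr.loc[col, kept] < threshold for kept in keep):
            keep.append(col)
    return keep

# Hypothetical usage: "health.csv" and "risk_label" are placeholders.
# df = pd.read_csv("health.csv")
# selected = drop_correlated_features(df.drop(columns=["risk_label"]))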

Introduction

Support Vector Machine (SVM) [1][2][3][4][5] is an algorithm that distinguishes boundaries between the classes within a dataset [6]. The basis of the SVM is regression, but it goes beyond simply finding and separating borderlines between the classes [7]. The algorithm uses a 'kernel trick' [8] to transform the dataset into separate planes [9]. The term 'support vector' refers to the coordinates of individual observations mapped onto the vector space model [10]. For example, if (x1, y1) corresponds to Class A and (x2, y2) corresponds to Class B, and these data points are far apart, then they can be considered 'support vectors' that differentiate between Class A and Class B. In other words, the greater the distance/difference, the easier it is to identify the class of a particular support vector. Hence, by definition, the aim of the SVM is to identify the optimal separating hyperplane, the one that maximizes the margin between the respective classes of the training dataset [11]. Equivalently, the logic of SVM algorithms is to maximize the distance between the two decision boundaries of the classes they deal with, that is, to increase the separation distance by choosing the best hyperplanes [12][13].

If the boundary is nonlinear due to overlapping data points, the inner-product method is used to transform the data into a higher-dimensional feature space. Such models are called kernelized machine learning models, but kernelization can only happen if the kernel function satisfies the conditions of symmetry (k(x, y) = k(y, x)) and positive semidefiniteness [14]. Checking symmetry is straightforward, while checking positive semidefiniteness can be done by random simulation or experimentation with the feature data. Hence, selecting the right kernel function is essential, and this can be done using an empirical approach and evaluations on the datasets. The current narrative in every field, including insurance [15][16][17][18], is to apply machine learning algorithms to automate workflows [19][20]. For this, there is always a need to evaluate the nature of the dataset as well.
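
As an illustration of this validity check, the following is a minimal Python sketch (an assumption for illustration, not part of the original paper) that tests symmetry and, via random simulation, positive semidefiniteness of a candidate kernel by inspecting the eigenvalues of its Gram matrix on random feature vectors.

import numpy as np

def gaussian_kernel(x, y, gamma=0.5):
    # Candidate kernel; gamma = 0.5 is an illustrative choice.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def is_valid_kernel(k, n_samples=50, n_features=5, tol=1e-8, seed=0):
    """Empirically check symmetry and positive semidefiniteness
    of kernel k on randomly simulated feature vectors."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_samples, n_features))
    # Build the Gram matrix K[i, j] = k(x_i, x_j)
    K = np.array([[k(xi, xj) for xj in X] for xi in X])
    symmetric = np.allclose(K, K.T)
    # PSD iff all eigenvalues of the symmetric Gram matrix are >= 0
    psd = np.all(np.linalg.eigvalsh(K) >= -tol)
    return symmetric and psd

print(is_valid_kernel(gaussian_kernel))  # expected: True

A passed check on simulated data is evidence, not proof, of validity; it rules a kernel out when it fails, which is the practical use of the simulation.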

Conclusion

This work validates the latest findings in the current literature on the quality of the dataset, the feature set, and the process of selecting machine learning algorithms: all aspects of the process are critical for the successful implementation and application of machine learning algorithms to problems of classification and prediction. Since machine learning draws its inspiration from statistics as well as from pattern-finding algorithms, this paper focused on both aspects. The evaluation strategy followed here for feature selection can operate independently of any learning algorithm, as it removes insignificant features before the learning process begins. The feature selection process went through multiple revisions to arrive at the final feature set, which led to the high accuracy of the six algorithms. The classification and prediction process went through a series of evaluations and reruns of the SVM with six kernels. The Medium Radial kernel of the SVM gave the maximum prediction power on the dataset; it performed well in almost all evaluations, both on the true positive rate and on the Area Under the Curve (AUC).

It was empirically found that the main advantage of using SVM kernel functions is that they allow the Euclidean geometry of the input space to be transformed to fit the context of the problem, so that the dataset becomes workable for classification. SVM classification algorithms are useful in cases where the regression method works well to identify the trend lines between the classes, and they may fail where there is noise and many overlapping data points in the dataset. The initial statistical analysis of the feature set showed some degree of overlap, but the trend lines showed a good degree of separation between the classes. Secondly, it was found that, for finding the 'best' separating hyperplane, quantity (the number of feature instances) did not always translate into quality; an advantage of the SVM is that it does not require a large number of instances to produce a fair percentage of accuracy, as is clear from the outcomes.
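
To show what a six-kernel evaluation of this kind looks like in practice, here is a minimal Python sketch using scikit-learn. It is not the paper's pipeline: the synthetic dataset stands in for the health data, and the gamma values chosen to approximate 'Fine', 'Medium', and 'Coarse' radial kernels are guesses, not the paper's settings. It reports cross-validated accuracy and AUC for each kernel.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data; the paper's health dataset is not reproduced here.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=5, random_state=0)

# Six candidate kernels; the radial-kernel gamma values are illustrative.
kernels = {
    "Linear": SVC(kernel="linear"),
    "Quadratic": SVC(kernel="poly", degree=2),
    "Cubic": SVC(kernel="poly", degree=3),
    "Fine Radial": SVC(kernel="rbf", gamma=2.0),
    "Medium Radial": SVC(kernel="rbf", gamma=0.25),
    "Coarse Radial": SVC(kernel="rbf", gamma=0.05),
}

for name, clf in kernels.items():
    model = make_pipeline(StandardScaler(), clf)  # scale, then fit SVM
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:15s} accuracy={acc:.3f}  AUC={auc:.3f}")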
