MATLAB is a highly productive software environment for engineers and scientists, and MathWorks is a leading developer of mathematical computing software. A description of the new features in MATLAB R2015a may be found in the release notes, and documentation is available on the MathWorks documentation page.
Classification Learner is a new app that lets you train models to classify data using supervised machine learning. You can explore your data, select features, specify cross-validation schemes, train models, and assess results. You can choose from several classifier types, including decision trees, support vector machines, nearest neighbors, and classification ensembles. To perform supervised machine learning, you supply a known set of input data (observations or examples) and known responses to that data (i.e., labels or classes), then use them to train a model that generates predictions for the responses to new data. To use the model with new data, or to learn about programmatic classification, you can export the model to the workspace or generate MATLAB® code to recreate the trained model.
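The programmatic workflow behind the app can be sketched as follows; this is a minimal illustration using the bundled Fisher iris data set and a classification tree (any of the supported model types would do), not the code the app itself generates.

```matlab
% Minimal sketch of supervised classification in MATLAB:
% train on labeled data, cross-validate, then predict on new data.
load fisheriris                      % meas: 150x4 features, species: class labels
rng(1)                               % reproducible cross-validation partitions

mdl = fitctree(meas,species);        % train a classification tree
cvm = crossval(mdl,'KFold',5);       % 5-fold cross-validation
err = kfoldLoss(cvm);                % estimated misclassification rate

newX  = [5.9 3.0 5.1 1.8];           % one new observation (same 4 features)
label = predict(mdl,newX);           % predicted class for the new data
```

Exporting a model from Classification Learner places a comparable trained object in the workspace, on which `predict` can be called in the same way.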
The testcholdout function accepts holdout-sample predicted labels from both classification models along with the true labels. This function implements the asymptotic, exact, or mid-p version of McNemar's test.
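A minimal sketch of this test, assuming the bundled ionosphere data set and two arbitrarily chosen model types (a tree and a linear SVM) purely for illustration:

```matlab
% Compare two classifiers' holdout accuracy with McNemar's test.
load ionosphere                      % X: 351x34 predictors, Y: 'b'/'g' labels
rng(1)                               % reproducible holdout split

cv  = cvpartition(Y,'HoldOut',0.3);
Xtr = X(training(cv),:);  Ytr = Y(training(cv));
Xte = X(test(cv),:);      Yte = Y(test(cv));

mdl1  = fitctree(Xtr,Ytr);           % classification tree
mdl2  = fitcsvm(Xtr,Ytr);            % linear SVM
Yhat1 = predict(mdl1,Xte);
Yhat2 = predict(mdl2,Xte);

% h = 1 rejects the hypothesis of equal predictive accuracy;
% p is the test's p-value (asymptotic McNemar by default).
[h,p] = testcholdout(Yhat1,Yhat2,Yte);
```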
If you specify misclassification costs, testcholdout compares the models using a likelihood ratio or a chi-square test. The compareHoldout object function accepts any two trained classification model objects in Statistics and Machine Learning Toolbox™, sets of holdout predictor data for both models, and the corresponding true labels. Like testcholdout, it implements the asymptotic, exact, or mid-p version of McNemar's test, and if you specify misclassification costs it compares the models using a likelihood ratio or a chi-square test. The testckfold function accepts any two trained classification model objects or templates in Statistics and Machine Learning Toolbox, and repeatedly applies k-fold cross-validation using two sets of out-of-sample predictor data and true labels. testckfold then assesses the resulting accuracies using a t test or an F test.

For linear SVM models, the discardSupportVectors function lets you discard:

- The α coefficients (stored in the Alpha property).
- The support vectors (stored in the SupportVectors property).
- The support vector labels (stored in the SupportVectorLabels property).

By default, trained SVM models do not discard the α coefficients, support vectors, or support vector labels. You can pass a trained error-correcting output codes (ECOC) model (i.e., a ClassificationECOC or CompactClassificationECOC object) to discardSupportVectors to similarly discard the α coefficients, support vectors, and support vector labels from all linear SVM binary learners. To control whether linear SVM binary learners store support vectors, create an SVM template using templateSVM and set the 'SaveSupportVectors' name-value pair argument.

In scatterhist, the 'PlotGroup' name-value pair argument lets you specify whether to plot the marginal distributions by group or for the entire data set, and 'Style' lets you specify whether to display a stairstep plot, which shows the outline of a histogram without filling in the bars, or a histogram bar plot.
If you specify a grouping variable that contains more than one group, then by default scatterhist displays grouped stairstep plots. If you specify a grouping variable that contains only one group, then scatterhist displays a histogram bar plot.
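The grouped scatterhist behavior can be sketched as below; the data set and column choices are illustrative, and the 'stairs' value for 'Style' is assumed from the stairstep-plot description in these notes.

```matlab
% Scatter plot with grouped marginal distributions in the margins.
load fisheriris
x = meas(:,1);                       % sepal length
y = meas(:,2);                       % sepal width

% Grouped stairstep histograms (the default for >1 group):
scatterhist(x,y,'Group',species,'Style','stairs')

% Kernel density estimates in the margins instead of histograms:
figure
scatterhist(x,y,'Group',species,'Kernel','on')
```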
To display kernel density plots, use the 'Kernel' name-value pair argument. The positional argument dispopt in gplotmatrix supports two additional options for controlling the appearance of the plots along the diagonal of the plot matrix.

The following functionality is being removed or changed:

| Functionality | What Happens When You Use It | Use This Instead | Compatibility Considerations |
|---|---|---|---|
| princomp | Warns | pca | Replace instances of princomp with pca. |
| treedisp | Warns | view (ClassificationTree) or view (RegressionTree) | Use fitctree or fitrtree to grow a tree. Replace instances of treedisp with view. |
| treefit | Warns | fitctree or fitrtree | Replace instances of treefit with fitctree or fitrtree. |
| treeprune | Warns | prune (ClassificationTree) or prune (RegressionTree) | Use fitctree or fitrtree to grow a tree. Replace instances of treeprune with prune. |
| treetest | Warns | resubLoss, loss, or cvLoss (ClassificationTree or RegressionTree) | Replace treetest(T,'resubstitution') with resubLoss, treetest(T,'test',X,Y) with loss, and treetest(T,'crossvalidate',X,Y) with cvLoss. |
| treeval | Warns | predict (ClassificationTree) or predict (RegressionTree) | Use fitctree or fitrtree to grow a tree. Replace instances of treeval with predict. |
| classify | Still runs | fitcdiscr | Replace instances of classify with fitcdiscr. |
| classregtree | Still runs | fitctree or fitrtree | Replace instances of classregtree with fitctree or fitrtree. |
| fitNaiveBayes | Still runs | fitcnb | Replace instances of fitNaiveBayes with fitcnb. |
| ProbDist | Still runs | makedist and fitdist | Use makedist and fitdist to create and fit probability distribution objects. |
| ProbDistParametric | Still runs | makedist and fitdist | Use makedist and fitdist to create and fit probability distribution objects. |
| ProbDistKernel | Still runs | makedist and fitdist | Use makedist and fitdist to create and fit probability distribution objects. |
| ProbDistUnivKernel | Still runs | makedist and fitdist | Use makedist and fitdist to create and fit probability distribution objects. |
| ProbDistUnivParam | Still runs | makedist and fitdist | Use makedist and fitdist to create and fit probability distribution objects. |
| svmclassify | Still runs | fitcsvm | Replace instances of svmclassify with fitcsvm. |
| svmtrain | Still runs | fitcsvm | Replace instances of svmtrain with fitcsvm. |
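As one concrete migration example, replacing the deprecated princomp call with pca is a drop-in change for the common output pattern; the data set here is illustrative.

```matlab
% Migrating from the deprecated princomp to pca.
load hald                                     % ingredients: 13x4 predictor matrix

% Old style (warns in R2015a):
%   [coeff,score,latent] = princomp(ingredients);

% Replacement with the same leading outputs:
[coeff,score,latent] = pca(ingredients);      % loadings, scores, variances
```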
In this session, David covers the latest features and new toolboxes from R2015a.