Show simple item record

dc.contributor.author: Zhang, Jesse
dc.date.accessioned: 2020-04-02T22:50:41Z
dc.date.available: 2020-04-02T22:50:41Z
dc.date.issued: 2019-04
dc.identifier.uri: http://hdl.handle.net/11122/10958
dc.description: Master's Project (M.S.) University of Alaska Fairbanks, 2019
dc.description.abstract: This paper explores techniques for estimating a confidence interval on accuracy for machine learning algorithms. Confidence intervals on accuracy may be used to rank machine learning algorithms. We investigate bootstrapping, leave-one-out cross-validation, and conformal prediction. These techniques are applied to the following machine learning algorithms: support vector machines, bagging, AdaBoost, and random forests. Confidence intervals are produced on a total of nine datasets, three real and six simulated. We found that, in general, no technique was particularly successful at always capturing the accuracy. However, leave-one-out cross-validation was the most consistent technique across all datasets.
dc.language.iso: en_US
dc.title: Estimating confidence intervals on accuracy in classification in machine learning
dc.type: Thesis
dc.type.degree: ms
dc.identifier.department: Department of Mathematics and Statistics
dc.contributor.chair: McIntyre, Julie
dc.contributor.committee: Barry, Ronald
dc.contributor.committee: Goddard, Scott
refterms.dateFOA: 2020-04-02T22:50:42Z
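The bootstrapping technique named in the abstract can be sketched as follows. This is a minimal illustration of a percentile bootstrap confidence interval on classifier accuracy, using synthetic labels and predictions rather than the paper's datasets or methods; the ~85% error pattern and all variable names are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical held-out labels and classifier predictions (synthetic,
# not from the paper): flip ~15% of labels to simulate an ~85% accurate model.
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.85, y_true, 1 - y_true)

# Per-example correctness indicators; their mean is the point accuracy.
correct = (y_true == y_pred).astype(float)

# Percentile bootstrap: resample the indicators with replacement B times
# and take the empirical 2.5th/97.5th percentiles of the resampled accuracies.
B = 2000
boot_acc = np.array([
    rng.choice(correct, size=correct.size, replace=True).mean()
    for _ in range(B)
])
lower, upper = np.percentile(boot_acc, [2.5, 97.5])

print(f"point accuracy: {correct.mean():.3f}")
print(f"95% bootstrap CI: [{lower:.3f}, {upper:.3f}]")
```

The same resampling idea applies regardless of the underlying classifier; only the `y_pred` vector changes.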


Files in this item

Name: Zhang_J_2019.pdf
Size: 1.667 MB
Format: PDF

