
SVM調(diào)參MNIST手寫字符數(shù)據(jù)的推測(cè)模型

2023-03-29 21:18 Author: 時晴charles


This also comes from Columbia's engineering course on machine learning, a class taught by a principal researcher at IBM.

Background:

Handwriting recognition is a well-studied subject in computer vision and has found wide applications in our daily life (such as USPS mail sorting). In this project, we will explore various machine learning techniques for recognizing handwritten digits. The dataset you will be using is the well-known MNIST dataset.

(1) The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. (http://yann.lecun.com/exdb/mnist/)

[Figure: example digits from the MNIST dataset]

The goal of this project is to build a 10-class classifier that recognizes these handwritten digits as accurately as you can. Though deep learning has been widely used for this dataset, in this project you should NOT use any deep neural nets (DNN) to do the recognition. Rather, you need to use the techniques we have learned so far in the class (such as logistic regression, SVM, etc.) plus some other reasonable non-DNN machine learning techniques (such as random forest, decision tree, etc. – though we have not covered those subjects in the class yet) to do the work.

Build a classifier using all pixels as features for handwriting recognition.

After loading the dataset in R, we have a training set and a test set.
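A minimal loading sketch is shown below. It assumes the MNIST IDX files have already been converted to CSV (the file names mnist_train.csv and mnist_test.csv are hypothetical placeholders, with the first column holding the digit label and the remaining 784 columns the pixel intensities); the original post does not show its loading code.

```r
# Sketch: load MNIST from hypothetical CSV exports of the IDX files.
# First column = digit label, remaining 784 columns = pixel values (0-255).
train <- read.csv("mnist_train.csv", header = FALSE)
test  <- read.csv("mnist_test.csv",  header = FALSE)

# Name the label column and convert it to a factor so svm() treats the
# problem as classification rather than regression.
colnames(train)[1] <- "label"
colnames(test)[1]  <- "label"
train$label <- factor(train$label)
test$label  <- factor(test$label)

dim(train)  # expected: 60000 x 785
dim(test)   # expected: 10000 x 785
```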

Now we conduct classification and build a predictive model based on SVM. This is the original R code with default settings:
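The original code block is not preserved in the post, so the following is a minimal sketch of a baseline e1071 model with default parameters, assuming the train/test data frames from the loading step above.

```r
# Sketch: baseline SVM with e1071 defaults (RBF kernel, cost = 1,
# gamma = 1 / number of features). Training on all 60,000 images is slow,
# so a subset is often used for a first run.
library(e1071)

fit <- svm(label ~ ., data = train,
           kernel = "radial",   # the default kernel
           scale  = FALSE)      # many pixel columns are constant, so skip scaling

pred <- predict(fit, newdata = test)
mean(pred == test$label)        # test-set accuracy of the untuned model
```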

Typical arguments of the svm() function in R's e1071 package include: formula, data, x, y, scale, kernel, degree, gamma, cost.

Kernel (http://stats.stackexchange.com/questions/73032/linear-kernel-and-non-linear-kernel-for-support-vector-machine)

Usually, the decision is whether to use a linear or an RBF (aka Gaussian) kernel. There are two main factors to consider: solving the optimisation problem for a linear kernel is much faster (see e.g. LIBLINEAR), while the best possible predictive performance is typically better for a nonlinear kernel (or at least as good as the linear one).

Gamma: the kernel parameter needed for all kernels except the linear one (default: 1/(data dimension)).

Cost: intuitively, the C parameter trades off misclassification of training examples against simplicity of the decision surface. A low C tends to make the decision surface smooth, while a high C tries to classify all training examples correctly by giving the model freedom to select more samples as support vectors.

Tuned code:
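The post's tuned code is not preserved, so below is a minimal sketch of parameter tuning with e1071's tune.svm(), again assuming the train/test data frames from above; the gamma and cost grids are illustrative choices, not the author's values.

```r
# Sketch: cross-validated grid search over gamma and cost for the RBF kernel.
# Running this on the full 60,000-image training set is very slow; a random
# subset is commonly used to pick parameters before refitting on all data.
library(e1071)

set.seed(1)
tuned <- tune.svm(label ~ ., data = train,
                  kernel = "radial",
                  gamma  = c(0.01, 0.05, 0.1),
                  cost   = c(1, 5, 10),
                  scale  = FALSE)

summary(tuned)            # cross-validated error for each (gamma, cost) pair
best <- tuned$best.model  # model refit with the best parameter combination

pred <- predict(best, newdata = test)
mean(pred == test$label)  # test-set accuracy of the tuned model
```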


SVM調(diào)參MNIST手寫字符數(shù)據(jù)的推測(cè)模型的評(píng)論 (共 條)

分享到微博請(qǐng)遵守國家法律
左云县| 成安县| 屏山县| 温州市| 科尔| 肃宁县| 高邑县| 屯留县| 广宗县| 兰西县| 寻甸| 正定县| 文成县| 聊城市| 高陵县| 达州市| 皮山县| 梅州市| 台东县| 淳化县| 宁陕县| 平南县| 赞皇县| 铜陵市| 张家港市| 霍林郭勒市| 巨野县| 禹城市| 上饶市| 绥棱县| 同心县| 通化市| 新密市| 商丘市| 会昌县| 新野县| 阿拉善左旗| 裕民县| 杭锦旗| 安新县| 绥中县|