Can ZCA whitening improve accuracy?
Asked by a forum user
Posted: 2022-04-07 23:53
3 answers in total
懂视网
Answered: 2022-04-08 04:14
Install the mysql driver with gem:
gem install mysql (if the installation fails, check whether dependencies such as mysql-devel are installed)
Write a Ruby script that first fetches the data from MySQL, then fetches the same data from memcached, and computes the difference in retrieval time between the two. The code is as follows:
The output of running it is as follows:
Conclusion: from the timings we can see that Memcached is faster by nearly two orders of magnitude.
Helpful user
Answered: 2022-04-08 01:22
Steps:
Step 0: Load data
Step 1a: Implement PCA to obtain U
Step 1b: Compute xRot, the projection onto the eigenbasis
Step 2: Reduce the number of dimensions from 2 to 1
Step 3: PCA Whitening
Step 4: ZCA Whitening
A brief introduction:
First, we need to ensure that the data has (approximately) zero mean. For natural images, we achieve this (approximately) by subtracting the mean value of each image patch, i.e., we compute the mean of each patch and subtract it. In Matlab, we can do this using
avg = mean(x, 1); % Compute the mean pixel intensity value separately for each patch.
x = x - repmat(avg, size(x, 1), 1);
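As an aside, the same mean removal can be written without materializing the repmat copy; this one-line sketch uses bsxfun (a standard Matlab builtin) and is equivalent to the two lines above:
x = bsxfun(@minus, x, avg); % subtract the 1-by-m row of per-patch means from every row of x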
Next, we need to compute \Sigma = \frac{1}{m} \sum_{i=1}^m (x^{(i)})(x^{(i)})^T. If you're implementing this in Matlab (or in C++, Java, etc., with access to an efficient linear algebra library), doing it as an explicit sum is inefficient. Instead, we can compute it in one fell swoop as
sigma = x * x' / size(x, 2);
(Check the math yourself for correctness.) Here, we assume that x is a data structure that contains one training example per column (so x is an n-by-m matrix).
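As a quick sanity check (a self-contained sketch on random toy data; the variable names here are ours, not part of the exercise), the vectorized expression matches the explicit sum:
nToy = 4; mToy = 100;
xToy = randn(nToy, mToy);               % toy data, one example per column
sigmaVec = xToy * xToy' / mToy;         % vectorized covariance
sigmaSum = zeros(nToy, nToy);
for i = 1:mToy
    sigmaSum = sigmaSum + xToy(:, i) * xToy(:, i)';  % explicit sum over examples
end
sigmaSum = sigmaSum / mToy;
disp(max(abs(sigmaVec(:) - sigmaSum(:))));  % difference on the order of machine precision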
Next, PCA computes the eigenvectors of \Sigma. One could do this using the Matlab eig function. However, because \Sigma is a symmetric positive semi-definite matrix, it is more numerically reliable to do this using the svd function. Concretely, if you implement
[U,S,V] = svd(sigma);
then the matrix U will contain the eigenvectors of \Sigma (one eigenvector per column, sorted in order of decreasing eigenvalue), and the diagonal entries of the matrix S will contain the corresponding eigenvalues (also sorted in decreasing order). The matrix V will be equal to the transpose of U, and can be safely ignored.
(Note: The svd function actually computes the singular vectors and singular values of a matrix, which for the special case of a symmetric positive semi-definite matrix---which is all that we're concerned with here---are equal to its eigenvectors and eigenvalues. A full discussion of singular vectors vs. eigenvectors is beyond the scope of these notes.)
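To make this concrete, here is a small illustrative sketch on toy data (not part of the exercise; A, sigmaToy, Ut, St, Vt are hypothetical names chosen so as not to clobber the exercise's variables):
A = randn(5, 8);                          % arbitrary toy matrix
sigmaToy = A * A' / 8;                    % symmetric positive semi-definite by construction
[Ut, St, Vt] = svd(sigmaToy);
disp(diag(St)');                          % eigenvalues, already sorted in decreasing order
disp(norm(sigmaToy * Ut(:,1) - St(1,1) * Ut(:,1)));  % ~0: columns of Ut are eigenvectors
disp(norm(Vt - Ut));                      % ~0 here: V coincides with U when all eigenvalues are positive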
Finally, you can compute x_{\rm rot} and \tilde{x} as follows:
xRot = U' * x; % rotated version of the data.
xTilde = U(:,1:k)' * x; % reduced dimension representation of the data,
% where k is the number of eigenvectors to keep
This gives your PCA representation of the data in terms of \tilde{x} \in \Re^k. Incidentally, if x is an n-by-m matrix containing all your training data, this is a vectorized implementation, and the expressions above also work for computing x_{\rm rot} and \tilde{x} for your entire training set in one go. The resulting x_{\rm rot} and \tilde{x} will have one column corresponding to each training example.
To compute the PCA whitened data x_{\rm PCAwhite} (where epsilon is a small regularization constant), use
xPCAwhite = diag(1./sqrt(diag(S) + epsilon)) * U' * x;
Since the diagonal of S contains the eigenvalues \lambda_i, this turns out to be a compact way of computing x_{{\rm PCAwhite},i} = \frac{x_{{\rm rot},i}}{\sqrt{\lambda_i}} simultaneously for all i.
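As a quick check that whitening does what it claims, the following self-contained sketch on toy data (hypothetical names again; epsilonToy = 1e-5 is an assumed value) verifies that the empirical covariance of the PCA-whitened data is close to the identity:
mToy = 10000; nToy = 3;
xToy = randn(nToy, mToy);                           % (approximately) zero-mean toy data
sigmaToy = xToy * xToy' / mToy;
[Ut, St, Vt] = svd(sigmaToy);
epsilonToy = 1e-5;                                  % assumed small regularizer
xPCAwhiteToy = diag(1 ./ sqrt(diag(St) + epsilonToy)) * Ut' * xToy;
disp(xPCAwhiteToy * xPCAwhiteToy' / mToy);          % ≈ identity matrix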
Finally, you can also compute the ZCA whitened data x_{\rm ZCAwhite} as:
xZCAwhite = U * diag(1./sqrt(diag(S) + epsilon)) * U' * x;
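The same check applies to ZCA whitening (continuing the toy variables from the sketch above): the result is whitened too, and since it differs from PCA whitening only by the final rotation back through U, it is often described as the whitening that stays closest to the original data:
xZCAwhiteToy = Ut * diag(1 ./ sqrt(diag(St) + epsilonToy)) * Ut' * xToy;
disp(xZCAwhiteToy * xZCAwhiteToy' / mToy);          % ≈ identity as well
disp(norm(Ut * xPCAwhiteToy - xZCAwhiteToy));       % ~0: ZCA is PCA whitening rotated back by U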
Helpful user
Answered: 2022-04-08 02:40
Yes, it can.