Prometheus Posted September 23, 2020

Can someone give me a sanity check? I've been doing PCA in MATLAB and in Python with sklearn and getting slightly different results. Here's a toy dataset to illustrate.

In MATLAB:

ty =
   -1.383405780000000   0.293578700000000
   -2.221898020000000  -0.251334840000000
    3.605303800000000   0.042243850000000
    1.383405780000000  -0.293578700000000
    2.221898020000000   0.251334840000000
    3.605303800000000  -0.042243850000000

[~, ty_sc] = pca(ty)

ty_sc =
   -2.5821    0.3192
   -3.4260   -0.2174
    2.4038    0.0184
    0.1787   -0.2954
    1.0226    0.2412
    2.4030   -0.0661

In Python with sklearn:

import numpy as np
from sklearn.decomposition import PCA

ty = np.array([(-1.38340578,  0.2935787),
               (-2.22189802, -0.25133484),
               ( 3.6053038,   0.04224385),
               ( 1.38340578, -0.2935787),
               ( 2.22189802,  0.25133484),
               ( 3.6053038,  -0.04224385)])

pca = PCA(n_components=2)
ty_pc = pca.fit_transform(ty)

ty_pc
array([[ 2.58213714,  0.31918546],
       [ 3.42598874, -0.21739117],
       [-2.40383649,  0.01842077],
       [-0.17871932, -0.29536446],
       [-1.02257092,  0.24121217],
       [-2.40299915, -0.06606278]])

Notice how the scores in the first column are identical except for the sign. If all the signs were flipped I could understand it, and it would make no difference to follow-up analyses, but having just one column flipped seems weird. It makes a big difference when these scores are fed into an LDA classifier, which is what I'm doing with the real data. As far as I can tell, both implementations center and scale the data in the same way. Any ideas what's producing the difference?
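For what it's worth, here's a quick check I ran (a minimal sketch; the matlab_scores array is just the MATLAB ty_sc output above re-typed into numpy) confirming that the two sets of scores differ only by a per-column sign:

import numpy as np
from sklearn.decomposition import PCA

# Toy data from the post
ty = np.array([(-1.38340578,  0.2935787),
               (-2.22189802, -0.25133484),
               ( 3.6053038,   0.04224385),
               ( 1.38340578, -0.2935787),
               ( 2.22189802,  0.25133484),
               ( 3.6053038,  -0.04224385)])

# sklearn scores
skl_scores = PCA(n_components=2).fit_transform(ty)

# MATLAB scores copied by hand from the pca() output above (rounded to 4 decimals)
matlab_scores = np.array([[-2.5821,  0.3192],
                          [-3.4260, -0.2174],
                          [ 2.4038,  0.0184],
                          [ 0.1787, -0.2954],
                          [ 1.0226,  0.2412],
                          [ 2.4030, -0.0661]])

# Per-column sign that best aligns the two score matrices
signs = np.sign(np.sum(skl_scores * matlab_scores, axis=0))
print(signs)                                                      # [-1.  1.]
print(np.allclose(skl_scores * signs, matlab_scores, atol=1e-3))  # True

So it really is only the first component whose sign disagrees between the two implementations.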