Build all possible 3-column matrices from 3 input matrices of different sizes
I have three different matrices:

## Example matrices
m1 <- matrix(1:4, nrow=2)   ## 2 x 2
m2 <- matrix(1:6, nrow=2)   ## 2 x 3
m3 <- matrix(1:2, nrow=2)   ## 2 x 1

## A function that should do what you're after: expand.grid() enumerates
## every combination of one column index per input matrix, and mapply()
## binds the chosen columns into one matrix per combination
f <- function(...) {
    mm <- list(...)
    ii <- expand.grid(lapply(mm, function(X) seq_len(ncol(X))))
    lapply(seq_len(nrow(ii)), function(Z) {
        mapply(FUN=function(X, Y) X[,Y], mm, ii[Z,])
    })
}
## Try it out
f(m1)          ## 2 combinations (one per column of m1)
f(m1,m2)       ## 2 * 3 = 6 combinations
f(m1,m2,m3)    ## 2 * 3 * 1 = 6 combinations
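
For comparison outside R, here is a minimal NumPy sketch of the same enumeration idea; the function name all_column_combos and the itertools.product approach are illustrative choices, not part of the original answer:

import numpy as np
from itertools import product

def all_column_combos(*mats):
    # One column-index range per input; product() walks every combination
    ranges = [range(m.shape[1]) for m in mats]
    return [np.column_stack([m[:, j] for m, j in zip(mats, idx)])
            for idx in product(*ranges)]

m1 = np.arange(1, 5).reshape(2, 2, order='F')  # column-major fill, like R
m2 = np.arange(1, 7).reshape(2, 3, order='F')
m3 = np.arange(1, 3).reshape(2, 1, order='F')
print(len(all_column_combos(m1, m2, m3)))      # 6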
|
Understanding string sizes / Unicode / SQL Server column sizes
Tag : c# , By : UpperLuck
When you create a column, you specify the number of characters you need to store, regardless of whether it is Unicode or not. Need up to 5 characters? Then it's either VARCHAR(5) or NVARCHAR(5), depending on whether you actually need Unicode - that's a business decision, not a technical one.

The 2 bytes has nothing to do with the column definition - that's about storage size. A VARCHAR(5) takes 5 bytes if fully populated, and an NVARCHAR(5) takes 10 bytes if fully populated. You don't have to worry about those implementation details when defining the column; however, you should be sure that Unicode is required before making the choice, because doubling the space requirement for no reason is wasteful. (Ignoring arguments about whether such a column should be CHAR/NCHAR, null byte overhead, etc.)
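
A quick way to see the doubling is to compare byte lengths of the same 5-character string in a single-byte encoding versus UTF-16 (which is what SQL Server uses for NVARCHAR); this Python sketch is only an analogy for the storage sizes, not SQL Server itself:

s = "hello"                         # 5 characters
print(len(s.encode("latin-1")))     # 5 bytes, like a full VARCHAR(5)
print(len(s.encode("utf-16-le")))   # 10 bytes, like a full NVARCHAR(5)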
|
From Hadoop logs how can I find intermediate output byte sizes & reduce output byte sizes?
You can get this information from the FileSystemCounters counter group. The relevant counter is explained below: FILE_BYTES_READ is the number of bytes read from the local file system. Assuming all the map input data comes from HDFS, FILE_BYTES_READ should be zero in the map phase. The input files of the reducers, on the other hand, are data on the reduce-side local disks, fetched from the map-side disks; therefore, on the reduce side, FILE_BYTES_READ denotes the total bytes read by the reducers.
|
numpy.dot different output for same matrices when matrices are manually entered
I am wondering why np.dot(U,SV) != np.dot(A,B) when I believe A=U and B=SV: I manually key in the entries of A and B, while using SVD to recover the matrices U and SV. The code below reproduces the oddity.

Recreating your U and SV:

In [627]: U
Out[627]:
array([[ -3.36560511e-01,   8.66235179e-01,  -3.69274473e-01,
         -4.61618492e-16],
       [ -5.07358551e-01,   1.27694290e-02,   4.92365964e-01,
         -7.07106781e-01],
       [ -6.09837375e-01,  -4.99310021e-01,  -6.15457455e-01,
         -1.06278764e-15],
       [ -5.07358551e-01,   1.27694290e-02,   4.92365964e-01,
          7.07106781e-01]])

In [628]: SV
Out[628]:
array([[ -9.65886537e+01,  -1.97578594e+02,  -1.97578594e+02],
       [  7.79142604e+01,  -1.90446580e+01,  -1.90446580e+01],
       [  0.00000000e+00,   4.63542263e-15,  -4.63542263e-15],
       [  0.00000000e+00,   0.00000000e+00,   0.00000000e+00]])

In [629]: np.dot(U,SV)
Out[629]:
array([[ 100.,   50.,   50.],
       [  50.,  100.,  100.],
       [  20.,  130.,  130.],
       [  50.,  100.,  100.]])

In [630]: np.dot(U.round(),SV.round())
Out[630]:
array([[  78.,  -19.,  -19.],
       [  97.,  198.,  198.],
       [  97.,  198.,  198.],
       [  97.,  198.,  198.]])
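
The rounded product differs because the fractional entries of U carry essentially all of the information. A minimal sketch that reproduces the effect, taking A from the Out[629] result above (variable names are mine):

import numpy as np

A = np.array([[100.,  50.,  50.],
              [ 50., 100., 100.],
              [ 20., 130., 130.],
              [ 50., 100., 100.]])

U, s, Vt = np.linalg.svd(A)   # U is 4x4, s holds 3 singular values
S = np.zeros((4, 3))
S[:3, :3] = np.diag(s)
SV = S @ Vt                   # 4x3, analogous to SV in the session above

print(np.allclose(U @ SV, A))                  # True: exact to float precision
print(np.allclose(U.round() @ SV.round(), A))  # False: rounding loses the info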
|
Why are the matrix sizes different?
You are getting that error because magnitude == 1 gives you a binary (0/1) mask, and with logical indexing the number of nonzero values in magnitude == 1 must match the number of elements you are assigning to edgels, which is not the case here. The easiest way to do what you ask is to set to zero every value in edgels whose corresponding magnitude is 0:

edgels(magnitude == 0) = 0;

Alternatively, build the result in a new array:

out = zeros(size(edgels));
out(magnitude == 1) = edgels(magnitude == 1);

or, since the mask is binary, multiply elementwise:

out = edgels .* double(magnitude);
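
For comparison, the same masking logic in NumPy; the small arrays here are hypothetical stand-ins for edgels and magnitude:

import numpy as np

edgels = np.array([[3., 7.],
                   [2., 9.]])
magnitude = np.array([[1, 0],
                      [0, 1]])

# Keep edgels where the mask is 1, zero elsewhere -- same effect as
# edgels(magnitude == 0) = 0 or edgels .* double(magnitude) in MATLAB
out = np.where(magnitude == 1, edgels, 0.0)
print(out)   # [[3. 0.]
             #  [0. 9.]]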
|