I am implementing a sparse matrix class in compressed row format. This means I have a fixed number of rows, and each row consists of a number of elements (this number can differ from row to row).
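A minimal sketch of the CSR layout being described (array names are just illustrative, not from the question): `indptr[i]:indptr[i+1]` delimits row `i`, so each row can hold a different number of stored elements:

```python
import numpy as np

# Illustrative CSR storage: values, their column indices, and row pointers.
data    = np.array([10.0, 20.0, 30.0, 40.0])  # non-zero values, row by row
indices = np.array([0, 2, 1, 0])              # column index of each value
indptr  = np.array([0, 2, 3, 4])              # row i spans indptr[i]:indptr[i+1]

def row_nnz(i):
    """Number of stored elements in row i (can differ per row)."""
    return indptr[i + 1] - indptr[i]

def get(i, j):
    """Return A[i, j], or zero if that entry is not stored."""
    for k in range(indptr[i], indptr[i + 1]):
        if indices[k] == j:
            return data[k]
    return 0.0

print(row_nnz(0), get(0, 2))  # row 0 holds 2 elements; A[0, 2] is 20.0
```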
Does anyone know how to perform an SVD operation on a sparse matrix in Python? It seems that there is no such functionality provided in scipy.sparse.linalg. Sounds like sparsesvd is what you need.
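Depending on the SciPy version, `scipy.sparse.linalg.svds` (a truncated SVD for sparse matrices) may already cover this; a minimal sketch:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# svds computes the k largest singular triplets of a sparse matrix
# without densifying it.
A = sparse_random(100, 50, density=0.05, format="csr", random_state=0)
u, s, vt = svds(A, k=6)
print(u.shape, s.shape, vt.shape)  # (100, 6) (6,) (6, 50)
```

Note that `svds` returns the singular values in ascending order, the opposite of the dense `numpy.linalg.svd` convention.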
I'd like to do large-scale regression (linear/logistic) in R with many (e.g. 100k) features, where each example is relatively sparse in the feature space, e.g. ~1k non-zero features per example.
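In R, glmnet accepts sparse model matrices directly. An analogous sketch in Python with scikit-learn (an assumption, not part of the question; sizes scaled down and labels synthetic):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.linear_model import LogisticRegression

# A sparse design matrix: 1000 examples, 5000 features, ~1% non-zero.
X = sparse_random(1000, 5000, density=0.01, format="csr", random_state=0)

# Synthetic binary labels just so the example runs end to end.
row_sums = np.asarray(X.sum(axis=1)).ravel()
y = (row_sums > np.median(row_sums)).astype(int)

# LogisticRegression accepts SciPy sparse input without densifying it.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.coef_.shape)  # (1, 5000)
```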
I am trying to do some (k-means) clustering on a very large matrix. The matrix is approximately 500000 rows x 4000 cols, yet very sparse (only a couple of "1" values per row).
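A sketch assuming scikit-learn, whose `MiniBatchKMeans` accepts SciPy sparse input directly and is built for matrices of roughly this scale (sizes scaled down here so the example runs quickly):

```python
from scipy.sparse import random as sparse_random
from sklearn.cluster import MiniBatchKMeans

# Stand-in for the real data: a very sparse 0/1-like CSR matrix.
X = sparse_random(2000, 400, density=0.005, format="csr", random_state=0)

# MiniBatchKMeans processes small batches, so the full matrix is never
# densified and memory stays proportional to the non-zeros.
km = MiniBatchKMeans(n_clusters=10, n_init=3, random_state=0).fit(X)
print(km.labels_.shape)  # one cluster label per row
```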
I am working on some software that does a sparse matrix-vector multiply. The matrix is stored in a coordinate format (a row and column index for each non-zero).
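A minimal sketch of the multiply over that coordinate layout, using NumPy's scatter-add so repeated row indices accumulate correctly:

```python
import numpy as np

# COO storage: parallel arrays of row index, column index, and value.
rows = np.array([0, 0, 1, 2])
cols = np.array([0, 2, 1, 2])
vals = np.array([1.0, 2.0, 3.0, 4.0])
n_rows = 3

def coo_matvec(rows, cols, vals, x, n_rows):
    """y = A @ x for a matrix in coordinate format."""
    y = np.zeros(n_rows)
    # Each stored entry contributes vals[k] * x[cols[k]] to y[rows[k]];
    # np.add.at handles duplicate row indices safely.
    np.add.at(y, rows, vals * x[cols])
    return y

x = np.array([1.0, 1.0, 1.0])
print(coo_matvec(rows, cols, vals, x, n_rows))  # [3. 3. 4.]
```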
I am looking at solving a PDE, where the 3D discretized domain can have a different boundary condition on each of its 6 boundaries (or all the same).
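A 1D sketch of the idea (unscaled stencil; names hypothetical): each end of the domain independently gets a Dirichlet row (identity) or a one-sided Neumann row, which generalizes to one independent choice per face of the 3D box:

```python
import numpy as np

def stencil_1d(n, left="dirichlet", right="neumann"):
    """Unscaled second-difference operator with per-end boundary rows."""
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] = [-1.0, 2.0, -1.0]   # interior stencil
    # Dirichlet: fix the value. Neumann: one-sided first difference.
    A[0, :2] = [1.0, 0.0] if left == "dirichlet" else [1.0, -1.0]
    A[-1, -2:] = [0.0, 1.0] if right == "dirichlet" else [-1.0, 1.0]
    return A

A = stencil_1d(5)
print(A[0, :2], A[-1, -2:])  # Dirichlet row at the left, Neumann at the right
```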
I'm not sure if this is a good question or not; please close it if not. I set out to write (using boost::coordinate_vector as a starting point) a sparse_vector template class that implements sparse storage efficiently.
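Not boost, but a toy Python analogue of the idea (all names hypothetical): store only the non-zeros in a hash map and treat a written zero as an erase, so storage stays proportional to the non-zero count:

```python
class SparseVector:
    """Toy sparse vector: only non-zero entries are stored."""

    def __init__(self, size):
        self.size = size
        self._data = {}               # index -> non-zero value

    def __setitem__(self, i, v):
        if v == 0:
            self._data.pop(i, None)   # writing zero removes the entry
        else:
            self._data[i] = v

    def __getitem__(self, i):
        return self._data.get(i, 0)   # missing entries read as zero

    def nnz(self):
        return len(self._data)

v = SparseVector(1_000_000)
v[42] = 3.5
print(v.nnz(), v[42], v[7])  # 1 3.5 0
```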
I think this is a pretty common issue, but I don't know what the process is called, so I'll describe it with an example. The concept is that I want to join a sparse dataset to a complete series.
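Assuming pandas, this is essentially a reindex of the sparse observations onto the complete key series (the data here is made up for illustration):

```python
import pandas as pd

# Sparse observations keyed by day: only days 2 and 5 have values.
obs = pd.Series({2: 10.0, 5: 7.0})

# The complete series of keys (days 1 through 7).
full_index = range(1, 8)

# reindex aligns on the full key set and fills the gaps.
dense = obs.reindex(full_index, fill_value=0.0)
print(dense.tolist())  # [0.0, 10.0, 0.0, 0.0, 7.0, 0.0, 0.0]
```

With two DataFrames instead of a Series, the same effect comes from an outer merge on the key column followed by `fillna`.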
I have a very large matrix (100M rows by 100M columns) that has lots of duplicate values right next to each other.
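When equal values sit side by side, run-length encoding each row is one natural fit: adjacent duplicates collapse to (value, count) pairs. A small sketch:

```python
import numpy as np

def rle(row):
    """Run-length encode a 1D array into (value, run_length) pairs."""
    row = np.asarray(row)
    # Positions where the value changes, plus the start of the row.
    breaks = np.flatnonzero(np.diff(row)) + 1
    starts = np.concatenate(([0], breaks))
    lengths = np.diff(np.concatenate((starts, [len(row)])))
    return list(zip(row[starts].tolist(), lengths.tolist()))

print(rle([5, 5, 5, 2, 2, 9]))  # [(5, 3), (2, 2), (9, 1)]
```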
I have to create an image with a very large resolution, but the image is relatively "sparse": only some areas of the image need to be drawn.
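One hedged sketch, assuming a tiled layout (all names hypothetical): allocate a fixed-size tile only when something is actually drawn in it, so a huge but mostly-empty canvas stays cheap:

```python
import numpy as np

TILE = 256  # tile edge length in pixels

class SparseCanvas:
    """Huge logical image backed only by tiles that were drawn on."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.tiles = {}           # (tx, ty) -> TILE x TILE array, lazily made

    def set_pixel(self, x, y, value):
        key = (x // TILE, y // TILE)
        if key not in self.tiles:
            self.tiles[key] = np.zeros((TILE, TILE), dtype=np.uint8)
        self.tiles[key][y % TILE, x % TILE] = value

canvas = SparseCanvas(1_000_000, 1_000_000)
canvas.set_pixel(123, 456, 255)
print(len(canvas.tiles))  # only the one touched tile was allocated
```

Writing the result out then means iterating the populated tiles and blitting each into the output file at its offset.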