hypertools.tools.reduce

hypertools.tools.reduce(x, ndims=3, model='IncrementalPCA', model_params={}, normalize=False, internal=False, align=False)[source]

Reduces the dimensionality of an array or list of arrays

Parameters:

x : Numpy array or list of arrays

Dimensionality reduction is performed on this array (or on each array in the list). If NaNs are present in the data, the function will attempt to use PPCA (probabilistic PCA) to interpolate the missing values.

ndims : int

Number of dimensions to reduce the data to (default: 3)

model : str

Decomposition/manifold learning model to use. Models supported: PCA, IncrementalPCA, SparsePCA, MiniBatchSparsePCA, KernelPCA, FastICA, FactorAnalysis, TruncatedSVD, DictionaryLearning, MiniBatchDictionaryLearning, TSNE, Isomap, SpectralEmbedding, LocallyLinearEmbedding, and MDS.

model_params : dict

Optional dictionary of scikit-learn parameters to pass to the reduction model. See the scikit-learn documentation for the chosen model for details; usage is sketched in the examples below.

normalize : str or False

If set to ‘across’, the columns of the input data will be z-scored across lists. If set to ‘within’, the columns will be z-scored within each list that is passed. If set to ‘row’, each row of the input data will be z-scored. If set to False, the input data will be returned unchanged (default is False).

align : bool

If set to True, data will be run through the ‘hyperalignment’ algorithm implemented in hypertools.tools.align (default: False).

Returns:

x_reduced : Numpy array or list of arrays

The data reduced to ndims dimensions. If the input was a list of arrays, a list of reduced arrays is returned.
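
Examples:

A minimal usage sketch (not part of the original docstring), assuming synthetic random data; the shapes shown follow from the default ndims=3 and IncrementalPCA model.

>>> import numpy as np
>>> import hypertools as hyp
>>> data = np.random.rand(100, 10)  # 100 observations, 10 features
>>> reduced = hyp.tools.reduce(data, ndims=3)
>>> reduced.shape
(100, 3)
>>> # a list of arrays in, a list of reduced arrays out
>>> data_list = [np.random.rand(100, 10), np.random.rand(100, 10)]
>>> reduced_list = hyp.tools.reduce(data_list, ndims=3)
>>> len(reduced_list), reduced_list[0].shape
(2, (100, 3))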
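
A second sketch of the model, model_params, normalize, and align arguments described above; the scikit-learn parameter passed through here (TSNE's perplexity) is an illustrative assumption about a valid key, not something specified in this docstring.

>>> import numpy as np
>>> import hypertools as hyp
>>> data = [np.random.rand(100, 10) for _ in range(3)]
>>> # reduce to 2 dimensions with t-SNE, forwarding a scikit-learn parameter
>>> reduced_tsne = hyp.tools.reduce(data, ndims=2, model='TSNE',
...                                 model_params={'perplexity': 30})
>>> # z-score columns across lists and run hyperalignment on the result
>>> reduced_aligned = hyp.tools.reduce(data, ndims=3, normalize='across', align=True)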