FASTICA

FastICA estimates statistically independent latent sources whose linear mixtures reconstruct the observed features. It is commonly used for source separation, blind signal recovery, and compact independent latent representations.

The linear mixing model is defined as:

X = S A^T

where X is the observed mixture (samples x features), S holds the independent source signals (samples x components), and A is the mixing matrix (features x components). The algorithm finds an unmixing matrix W and recovers S ≈ (X - mean) W^T, where mean holds the per-feature means removed before fitting.

This wrapper accepts rows as samples and columns as features. It returns the recovered source matrix together with the fitted unmixing matrix, mixing matrix, feature means, and iteration count from the ICA fit.
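The model and its inversion can be sketched directly with scikit-learn's FastICA. This is a minimal illustration, not part of the wrapper: the two synthetic sources and the mixing matrix below are made up for the demo. It shows that the recovered sources, the fitted mixing matrix, and the feature means reproduce the observations.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two illustrative independent sources: a sinusoid and a square wave.
t = np.linspace(0, 8, 400)
S = np.column_stack([np.sin(2 * t), np.sign(np.sin(3 * t))])

A = np.array([[1.0, 0.5],
              [0.3, 2.0]])   # mixing matrix (features x components)
X = S @ A.T                  # observed mixtures, rows as samples

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)  # recovered sources, S_hat ≈ (X - mean) W^T

# Reconstruction identity: X ≈ S_hat A^T + mean
X_hat = S_hat @ ica.mixing_.T + ica.mean_
print(np.allclose(X, X_hat, atol=1e-6))
```

Note that ICA recovers the sources only up to sign, scale, and column order, so `S_hat` need not match `S` column for column even though the reconstruction is exact.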

Excel Usage

=FASTICA(data, n_components, ica_algorithm, ica_fun, max_iter, tol, random_state)
  • data (list[list], required): 2D array of numeric input data with rows as samples and columns as features.
  • n_components (int, optional, default: null): Number of independent components to estimate. Leave blank to keep the estimator default.
  • ica_algorithm (str, optional, default: "parallel"): Optimization strategy used to estimate the independent components. Valid options: "parallel", "deflation".
  • ica_fun (str, optional, default: "logcosh"): Nonlinearity used to approximate negentropy during fitting. Valid options: "logcosh", "exp", "cube".
  • max_iter (int, optional, default: 300): Maximum number of fitting iterations.
  • tol (float, optional, default: 0.0001): Positive convergence tolerance for the unmixing update.
  • random_state (int, optional, default: null): Integer seed for deterministic initialization. Leave blank for the estimator default.

Returns (dict): Excel data type containing the recovered sources, the fitted unmixing and mixing matrices, per-feature means, and the iteration count.

Example 1: Recover two independent components with the parallel algorithm

Inputs:

data n_components ica_algorithm ica_fun max_iter tol random_state
0 0 2 parallel logcosh 500 0.0001 0
1 0.4
2 0.8
0 1
0.4 1.2
0.8 1.4
2 2
2.4 2.2

Excel formula:

=FASTICA({0,0;1,0.4;2,0.8;0,1;0.4,1.2;0.8,1.4;2,2;2.4,2.2}, 2, "parallel", "logcosh", 500, 0.0001, 0)

Expected output:

{"type":"Double","basicValue":2,"properties":{"component_count":{"type":"Double","basicValue":2},"sample_count":{"type":"Double","basicValue":8},"feature_count":{"type":"Double","basicValue":2},"sources":{"type":"Array","elements":[[{"type":"Double","basicValue":-1.23979},{"type":"Double","basicValue":-1.04907}],[{"type":"Double","basicValue":-1.28376},{"type":"Double","basicValue":0.0934344}],[{"type":"Double","basicValue":-1.32773},{"type":"Double","basicValue":1.23593}],[{"type":"Double","basicValue":0.612094},{"type":"Double","basicValue":-1.3068}],[{"type":"Double","basicValue":0.668581},{"type":"Double","basicValue":-0.860105}],[{"type":"Double","basicValue":0.725068},{"type":"Double","basicValue":-0.413413}],[{"type":"Double","basicValue":0.894529},{"type":"Double","basicValue":0.92666}],[{"type":"Double","basicValue":0.951016},{"type":"Double","basicValue":1.37335}]]},"components":{"type":"Array","elements":[[{"type":"Double","basicValue":-0.784725},{"type":"Double","basicValue":1.85189}],[{"type":"Double","basicValue":1.24559},{"type":"Double","basicValue":-0.257729}]]},"mixing":{"type":"Array","elements":[[{"type":"Double","basicValue":0.122469},{"type":"Double","basicValue":0.879987}],[{"type":"Double","basicValue":0.591886},{"type":"Double","basicValue":0.372889}]]},"feature_means":{"type":"Array","elements":[[{"type":"Double","basicValue":1.075}],[{"type":"Double","basicValue":1.125}]]},"n_iter":{"type":"Double","basicValue":3}}}
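The returned pieces fit back together: sources times the transposed mixing matrix, plus the feature means, reproduces the input. A quick sanity check with scikit-learn directly on the Example 1 inputs (a sketch; individual source values may differ from the table above in sign and column order, since the wrapper canonicalizes them):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Example 1 input: rows as samples, columns as features.
X = np.array([[0, 0], [1, 0.4], [2, 0.8], [0, 1],
              [0.4, 1.2], [0.8, 1.4], [2, 2], [2.4, 2.2]])

ica = FastICA(n_components=2, algorithm="parallel", fun="logcosh",
              max_iter=500, tol=1e-4, random_state=0)
S = ica.fit_transform(X)

# Reconstruction identity: X ≈ S A^T + mean
X_hat = S @ ica.mixing_.T + ica.mean_
print(np.round(ica.mean_, 4))  # feature means: [1.075 1.125]
```

The printed means match the feature_means entries in the expected output above.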

Example 2: Fit ICA with the deflation algorithm and exponential nonlinearity

Inputs:

data n_components ica_algorithm ica_fun max_iter tol random_state
0 0 1 2 deflation exp 600 0.0001 4
1 0.3 1.1
2 0.6 1.2
0 1 2
0.3 1.2 2.1
0.6 1.4 2.2
2 2 3
2.3 2.2 3.1

Excel formula:

=FASTICA({0,0,1;1,0.3,1.1;2,0.6,1.2;0,1,2;0.3,1.2,2.1;0.6,1.4,2.2;2,2,3;2.3,2.2,3.1}, 2, "deflation", "exp", 600, 0.0001, 4)

Expected output:

{"type":"Double","basicValue":2,"properties":{"component_count":{"type":"Double","basicValue":2},"sample_count":{"type":"Double","basicValue":8},"feature_count":{"type":"Double","basicValue":3},"sources":{"type":"Array","elements":[[{"type":"Double","basicValue":-1.06113},{"type":"Double","basicValue":-1.02565}],[{"type":"Double","basicValue":-1.25726},{"type":"Double","basicValue":0.14239}],[{"type":"Double","basicValue":-1.45339},{"type":"Double","basicValue":1.31042}],[{"type":"Double","basicValue":0.471187},{"type":"Double","basicValue":-1.22815}],[{"type":"Double","basicValue":0.547425},{"type":"Double","basicValue":-0.893195}],[{"type":"Double","basicValue":0.623664},{"type":"Double","basicValue":-0.558241}],[{"type":"Double","basicValue":1.02663},{"type":"Double","basicValue":0.958731}],[{"type":"Double","basicValue":1.10287},{"type":"Double","basicValue":1.29368}]]},"components":{"type":"Array","elements":[[{"type":"Double","basicValue":-0.488436},{"type":"Double","basicValue":0.695373},{"type":"Double","basicValue":0.836946}],[{"type":"Double","basicValue":1.19469},{"type":"Double","basicValue":-0.0320314},{"type":"Double","basicValue":-0.170472}]]},"mixing":{"type":"Array","elements":[[{"type":"Double","basicValue":0.120532},{"type":"Double","basicValue":0.881659}],[{"type":"Double","basicValue":0.65395},{"type":"Double","basicValue":0.313877}],[{"type":"Double","basicValue":0.721831},{"type":"Double","basicValue":0.253746}]]},"feature_means":{"type":"Array","elements":[[{"type":"Double","basicValue":1.025}],[{"type":"Double","basicValue":1.0875}],[{"type":"Double","basicValue":1.9625}]]},"n_iter":{"type":"Double","basicValue":3}}}

Example 3: Estimate a single independent component from two-feature mixtures

Inputs:

data n_components ica_algorithm ica_fun max_iter tol random_state
0 1 1 parallel cube 500 0.0001 2
1 1.5
2 2
3 2.5
4 3
5 3.5

Excel formula:

=FASTICA({0,1;1,1.5;2,2;3,2.5;4,3;5,3.5}, 1, "parallel", "cube", 500, 0.0001, 2)

Expected output:

{"type":"Double","basicValue":1,"properties":{"component_count":{"type":"Double","basicValue":1},"sample_count":{"type":"Double","basicValue":6},"feature_count":{"type":"Double","basicValue":2},"sources":{"type":"Array","elements":[[{"type":"Double","basicValue":-1.46385}],[{"type":"Double","basicValue":-0.87831}],[{"type":"Double","basicValue":-0.29277}],[{"type":"Double","basicValue":0.29277}],[{"type":"Double","basicValue":0.87831}],[{"type":"Double","basicValue":1.46385}]]},"components":{"type":"Array","elements":[[{"type":"Double","basicValue":0.468432},{"type":"Double","basicValue":0.234216}]]},"mixing":{"type":"Array","elements":[[{"type":"Double","basicValue":1.70783}],[{"type":"Double","basicValue":0.853913}]]},"feature_means":{"type":"Array","elements":[[{"type":"Double","basicValue":2.5}],[{"type":"Double","basicValue":2.25}]]},"n_iter":{"type":"Double","basicValue":1}}}

Example 4: Recover two components from a three-feature mixed signal matrix

Inputs:

data n_components ica_algorithm ica_fun max_iter tol random_state
1 0 0.5 2 parallel logcosh 700 0.0001 6
2 0.5 1
3 1 1.5
0.5 2 1
1 2.5 1.5
1.5 3 2
3 3 3
3.5 3.5 3.5

Excel formula:

=FASTICA({1,0,0.5;2,0.5,1;3,1,1.5;0.5,2,1;1,2.5,1.5;1.5,3,2;3,3,3;3.5,3.5,3.5}, 2, "parallel", "logcosh", 700, 0.0001, 6)

Expected output:

{"type":"Double","basicValue":2,"properties":{"component_count":{"type":"Double","basicValue":2},"sample_count":{"type":"Double","basicValue":8},"feature_count":{"type":"Double","basicValue":3},"sources":{"type":"Array","elements":[[{"type":"Double","basicValue":-1.40153},{"type":"Double","basicValue":-0.694154}],[{"type":"Double","basicValue":-1.26564},{"type":"Double","basicValue":0.167602}],[{"type":"Double","basicValue":-1.12976},{"type":"Double","basicValue":1.02936}],[{"type":"Double","basicValue":0.382655},{"type":"Double","basicValue":-1.43255}],[{"type":"Double","basicValue":0.693594},{"type":"Double","basicValue":-0.986126}],[{"type":"Double","basicValue":1.00453},{"type":"Double","basicValue":-0.539703}],[{"type":"Double","basicValue":0.702604},{"type":"Double","basicValue":1.00457}],[{"type":"Double","basicValue":1.01354},{"type":"Double","basicValue":1.451}]]},"components":{"type":"Array","elements":[[{"type":"Double","basicValue":-0.350111},{"type":"Double","basicValue":0.748755},{"type":"Double","basicValue":0.223235}],[{"type":"Double","basicValue":0.830667},{"type":"Double","basicValue":-0.2361},{"type":"Double","basicValue":0.298278}]]},"mixing":{"type":"Array","elements":[[{"type":"Double","basicValue":0.0906056},{"type":"Double","basicValue":1.03867}],[{"type":"Double","basicValue":1.17569},{"type":"Double","basicValue":0.281982}],[{"type":"Double","basicValue":0.678285},{"type":"Double","basicValue":0.683204}]]},"feature_means":{"type":"Array","elements":[[{"type":"Double","basicValue":1.9375}],[{"type":"Double","basicValue":1.9375}],[{"type":"Double","basicValue":1.75}]]},"n_iter":{"type":"Double","basicValue":5}}}

Python Code

import numpy as np
from sklearn.decomposition import FastICA as SklearnFastICA

def fastica(data, n_components=None, ica_algorithm='parallel', ica_fun='logcosh', max_iter=300, tol=0.0001, random_state=None):
    """
    Fit independent component analysis and return source signals with unmixing matrices.

    See: https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.FastICA.html

    This example function is provided as-is without any representation of accuracy.

    Args:
        data (list[list]): 2D array of numeric input data with rows as samples and columns as features.
        n_components (int, optional): Number of independent components to estimate. Leave blank to keep the estimator default. Default is None.
        ica_algorithm (str, optional): Optimization strategy used to estimate the independent components. Valid options: 'parallel', 'deflation'. Default is 'parallel'.
        ica_fun (str, optional): Nonlinearity used to approximate negentropy during fitting. Valid options: 'logcosh', 'exp', 'cube'. Default is 'logcosh'.
        max_iter (int, optional): Maximum number of fitting iterations. Default is 300.
        tol (float, optional): Positive convergence tolerance for the unmixing update. Default is 0.0001.
        random_state (int, optional): Integer seed for deterministic initialization. Leave blank for the estimator default. Default is None.

    Returns:
        dict: Excel data type containing the recovered sources, the fitted unmixing and mixing matrices, per-feature means, and the iteration count.
    """
    def py(value):
        return value.item() if isinstance(value, np.generic) else value

    def cell(value):
        value = py(value)
        if isinstance(value, bool):
            return {"type": "Boolean", "basicValue": bool(value)}
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            return {"type": "Double", "basicValue": float(value)}
        return {"type": "String", "basicValue": str(value)}

    def col(values):
        return [[cell(value)] for value in values]

    def mat(values):
        return [[cell(value) for value in row] for row in values]

    def parse_data(value):
        value = [[value]] if not isinstance(value, list) else value
        if not isinstance(value, list) or not value or not all(isinstance(row, list) and row for row in value):
            return None, "Error: data must be a non-empty 2D list"
        if len({len(row) for row in value}) != 1:
            return None, "Error: data must be a rectangular 2D list"
        data_np = np.array(value, dtype=float)
        if data_np.ndim != 2 or data_np.size == 0:
            return None, "Error: data must be a non-empty 2D list"
        if not np.isfinite(data_np).all():
            return None, "Error: data must contain only finite numeric values"
        if data_np.shape[0] < 2:
            return None, "Error: data must contain at least 2 samples"
        return data_np, None

    def canonicalize_ica(sources, components, mixing):
        source_np = np.array(sources, dtype=float, copy=True)
        component_np = np.array(components, dtype=float, copy=True)
        mixing_np = np.array(mixing, dtype=float, copy=True)
        limit = min(source_np.shape[1], component_np.shape[0], mixing_np.shape[1])
        for index in range(limit):
            component_row = component_np[index, :]
            pivot = int(np.argmax(np.abs(component_row)))
            pivot_value = component_row[pivot]
            if pivot_value == 0 and source_np.shape[0] > 0:
                source_column = source_np[:, index]
                pivot_value = source_column[int(np.argmax(np.abs(source_column)))]
            if pivot_value < 0:
                component_np[index, :] *= -1.0
                source_np[:, index] *= -1.0
                mixing_np[:, index] *= -1.0

        order = sorted(
            range(limit),
            key=lambda index: tuple(np.round(component_np[index, :], 12).tolist())
        )
        source_np = source_np[:, order]
        component_np = component_np[order, :]
        mixing_np = mixing_np[:, order]
        return source_np, component_np, mixing_np

    try:
        data_np, error = parse_data(data)
        if error:
            return error

        component_total = None if n_components in (None, "") else int(n_components)
        max_components = min(data_np.shape[0], data_np.shape[1])
        if component_total is not None and (component_total < 1 or component_total > max_components):
            return f"Error: n_components must be between 1 and {max_components}"

        algorithm_value = str(ica_algorithm).strip().lower()
        if algorithm_value not in {"parallel", "deflation"}:
            return "Error: ica_algorithm must be 'parallel' or 'deflation'"

        fun_value = str(ica_fun).strip().lower()
        if fun_value not in {"logcosh", "exp", "cube"}:
            return "Error: ica_fun must be 'logcosh', 'exp', or 'cube'"

        if int(max_iter) < 1:
            return "Error: max_iter must be at least 1"
        if float(tol) <= 0:
            return "Error: tol must be greater than 0"

        fitted = SklearnFastICA(
            n_components=component_total,
            algorithm=algorithm_value,
            fun=fun_value,
            max_iter=int(max_iter),
            tol=float(tol),
            random_state=None if random_state in (None, "") else int(random_state)
        )

        sources_np = np.asarray(fitted.fit_transform(data_np), dtype=float)
        components_np = np.asarray(fitted.components_, dtype=float)
        mixing_np = np.asarray(fitted.mixing_, dtype=float)
        sources_np, components_np, mixing_np = canonicalize_ica(sources_np, components_np, mixing_np)
        feature_means = np.atleast_1d(np.asarray(fitted.mean_, dtype=float))

        return {
            "type": "Double",
            "basicValue": float(components_np.shape[0]),
            "properties": {
                "component_count": {"type": "Double", "basicValue": float(components_np.shape[0])},
                "sample_count": {"type": "Double", "basicValue": float(data_np.shape[0])},
                "feature_count": {"type": "Double", "basicValue": float(data_np.shape[1])},
                "sources": {"type": "Array", "elements": mat(sources_np.tolist())},
                "components": {"type": "Array", "elements": mat(components_np.tolist())},
                "mixing": {"type": "Array", "elements": mat(mixing_np.tolist())},
                "feature_means": {"type": "Array", "elements": col(feature_means.tolist())},
                "n_iter": {"type": "Double", "basicValue": float(fitted.n_iter_)}
            }
        }
    except Exception as e:
        return f"Error: {str(e)}"
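ICA is identifiable only up to the sign and order of its components, which is why the code canonicalizes them before returning: each component's sign is fixed so its largest-magnitude unmixing entry is positive, and components are then sorted deterministically. The key invariant is that flipping a component's sign everywhere it appears leaves the reconstruction unchanged. A standalone numpy sketch of that invariant (the matrices here are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=(6, 2))   # sources (samples x components)
A = rng.normal(size=(3, 2))   # mixing matrix (features x components)
X = S @ A.T                   # observed mixtures

# Flip the sign of component 0 in both the sources and the mixing column.
S2, A2 = S.copy(), A.copy()
S2[:, 0] *= -1.0
A2[:, 0] *= -1.0

# The reconstruction is identical, so either sign convention is valid;
# canonicalizing just picks one consistently so outputs are reproducible.
print(np.allclose(S @ A.T, S2 @ A2.T))
```

The same argument applies to reordering components: permuting the source columns together with the matching mixing columns and unmixing rows changes nothing about the fit, only its presentation.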
