RF_CLASSIFY

Random forest classification combines the predictions of many decision trees, each trained on a bootstrap resample of the data and restricted to a random subset of features at each split. For a classification problem with B trees, the ensemble prediction \hat{y} is determined by majority vote:

\hat{y} = \text{mode}\{T_1(x), T_2(x), \dots, T_B(x)\}

This approach reduces model variance without significantly increasing bias, making it a strong default for nonlinear tabular classification. It also exposes feature-importance estimates based on impurity reduction at split nodes.
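
To make the vote concrete, here is a minimal sketch (toy data, not the wrapper below) that fits a small scikit-learn forest, polls each fitted tree, and takes the per-sample mode. Note that scikit-learn's own predict averages class probabilities across trees (soft voting) rather than counting hard votes, so the two can disagree on near-ties.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data: two separable groups (illustrative values only).
X = np.array([[0, 0], [0, 1], [1, 0], [2, 2], [2, 3], [3, 2]], dtype=float)
y = np.array(["cold", "cold", "cold", "hot", "hot", "hot"])

forest = RandomForestClassifier(n_estimators=25, random_state=0).fit(X, y)

# Each fitted sub-tree predicts class indices into forest.classes_,
# so collect the votes and map the per-sample mode back to labels.
votes = np.array([tree.predict(X) for tree in forest.estimators_]).astype(int)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
y_hat = forest.classes_[majority]  # mode{T_1(x), ..., T_B(x)}

print(y_hat)              # hard-vote labels
print(forest.predict(X))  # sklearn's soft vote; typically identical here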

This wrapper treats each row of data as a sample and accepts the target as a single row or single column. It returns training accuracy together with predicted labels, class counts, class probabilities, and fitted feature importances.

Excel Usage

=RF_CLASSIFY(data, target, n_estimators, rf_criterion, max_depth, min_samples_leaf, random_state)
  • data (list[list], required): 2D array of numeric feature data with rows as samples and columns as features.
  • target (list[list], required): Target labels as a single row, single column, or scalar when only one sample is present.
  • n_estimators (int, optional, default: 100): Number of trees in the forest.
  • rf_criterion (str, optional, default: "gini"): Split quality measure used by each decision tree. One of "gini", "entropy", or "log_loss".
  • max_depth (int, optional, default: null): Maximum depth of each tree. Leave blank for unconstrained depth.
  • min_samples_leaf (int, optional, default: 1): Minimum number of samples required in each leaf.
  • random_state (int, optional, default: null): Integer seed for reproducible tree sampling. Leave blank for the estimator default.

Returns (dict): Excel data type containing training accuracy, predictions, probabilities, and fitted feature importances.
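
The optional arguments map one-to-one onto scikit-learn's RandomForestClassifier constructor, which the wrapper in the Python Code section uses internally. A minimal sketch of that mapping with the defaults listed above:

from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(
    n_estimators=100,     # n_estimators (default 100)
    criterion="gini",     # rf_criterion: "gini", "entropy", or "log_loss"
    max_depth=None,       # max_depth: blank/None leaves depth unconstrained
    min_samples_leaf=1,   # min_samples_leaf (default 1)
    random_state=None,    # random_state: pass an int for reproducible fits
)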

Example 1: Fit a random forest classifier for two string-labeled groups

Inputs:

data    target   n_estimators   rf_criterion   max_depth   min_samples_leaf   random_state
0  0    cold     25             gini           3           1                  0
0  1    cold
1  0    cold
2  2    hot
2  3    hot
3  2    hot

Excel formula:

=RF_CLASSIFY({0,0;0,1;1,0;2,2;2,3;3,2}, {"cold";"cold";"cold";"hot";"hot";"hot"}, 25, "gini", 3, 1, 0)

Expected output:

{"type":"Double","basicValue":1,"properties":{"accuracy":{"type":"Double","basicValue":1},"sample_count":{"type":"Double","basicValue":6},"feature_count":{"type":"Double","basicValue":2},"class_count":{"type":"Double","basicValue":2},"classes":{"type":"Array","elements":[[{"type":"String","basicValue":"cold"}],[{"type":"String","basicValue":"hot"}]]},"predictions":{"type":"Array","elements":[[{"type":"String","basicValue":"cold"}],[{"type":"String","basicValue":"cold"}],[{"type":"String","basicValue":"cold"}],[{"type":"String","basicValue":"hot"}],[{"type":"String","basicValue":"hot"}],[{"type":"String","basicValue":"hot"}]]},"prediction_counts":{"type":"Array","elements":[[{"type":"String","basicValue":"class"},{"type":"String","basicValue":"count"}],[{"type":"String","basicValue":"cold"},{"type":"Double","basicValue":3}],[{"type":"String","basicValue":"hot"},{"type":"Double","basicValue":3}]]},"probabilities":{"type":"Array","elements":[[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":0.04},{"type":"Double","basicValue":0.96}],[{"type":"Double","basicValue":0.04},{"type":"Double","basicValue":0.96}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":1}]]},"feature_importances":{"type":"Array","elements":[[{"type":"Double","basicValue":0.48}],[{"type":"Double","basicValue":0.52}]]},"estimator_count":{"type":"Double","basicValue":25}}}

Example 2: Use entropy splits for one-dimensional data with numeric labels

Inputs:

data   target   n_estimators   rf_criterion   max_depth   min_samples_leaf   random_state
0      0        25             entropy        3           1                  0
0.2    0
0.4    0
1.2    1
1.4    1
1.6    1

Excel formula:

=RF_CLASSIFY({0;0.2;0.4;1.2;1.4;1.6}, {0;0;0;1;1;1}, 25, "entropy", 3, 1, 0)

Expected output:

{"type":"Double","basicValue":1,"properties":{"accuracy":{"type":"Double","basicValue":1},"sample_count":{"type":"Double","basicValue":6},"feature_count":{"type":"Double","basicValue":1},"class_count":{"type":"Double","basicValue":2},"classes":{"type":"Array","elements":[[{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1}]]},"predictions":{"type":"Array","elements":[[{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1}],[{"type":"Double","basicValue":1}],[{"type":"Double","basicValue":1}]]},"prediction_counts":{"type":"Array","elements":[[{"type":"String","basicValue":"class"},{"type":"String","basicValue":"count"}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":3}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":3}]]},"probabilities":{"type":"Array","elements":[[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":1}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":1}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":1}]]},"feature_importances":{"type":"Array","elements":[[{"type":"Double","basicValue":1}]]},"estimator_count":{"type":"Double","basicValue":25}}}

Example 3: Fit a random forest classifier for three separated groups

Inputs:

data       target   n_estimators   rf_criterion   max_depth   min_samples_leaf   random_state
0    0     left     25             gini           3           1                  0
0.2  0.1   left
4    4     center
4.2  3.9   center
8    0     right
8.2  0.1   right

Excel formula:

=RF_CLASSIFY({0,0;0.2,0.1;4,4;4.2,3.9;8,0;8.2,0.1}, {"left";"left";"center";"center";"right";"right"}, 25, "gini", 3, 1, 0)

Expected output:

{"type":"Double","basicValue":1,"properties":{"accuracy":{"type":"Double","basicValue":1},"sample_count":{"type":"Double","basicValue":6},"feature_count":{"type":"Double","basicValue":2},"class_count":{"type":"Double","basicValue":3},"classes":{"type":"Array","elements":[[{"type":"String","basicValue":"center"}],[{"type":"String","basicValue":"left"}],[{"type":"String","basicValue":"right"}]]},"predictions":{"type":"Array","elements":[[{"type":"String","basicValue":"left"}],[{"type":"String","basicValue":"left"}],[{"type":"String","basicValue":"center"}],[{"type":"String","basicValue":"center"}],[{"type":"String","basicValue":"right"}],[{"type":"String","basicValue":"right"}]]},"prediction_counts":{"type":"Array","elements":[[{"type":"String","basicValue":"class"},{"type":"String","basicValue":"count"}],[{"type":"String","basicValue":"center"},{"type":"Double","basicValue":2}],[{"type":"String","basicValue":"left"},{"type":"Double","basicValue":2}],[{"type":"String","basicValue":"right"},{"type":"Double","basicValue":2}]]},"probabilities":{"type":"Array","elements":[[{"type":"Double","basicValue":0.04},{"type":"Double","basicValue":0.92},{"type":"Double","basicValue":0.04}],[{"type":"Double","basicValue":0.04},{"type":"Double","basicValue":0.88},{"type":"Double","basicValue":0.08}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":0.04},{"type":"Double","basicValue":0.96}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":0.12},{"type":"Double","basicValue":0.88}]]},"feature_importances":{"type":"Array","elements":[[{"type":"Double","basicValue":0.603273}],[{"type":"Double","basicValue":0.396727}]]},"estimator_count":{"type":"Double","basicValue":25}}}

Example 4: Flatten a single-row boolean target range for random forest classification

Inputs:

data   target                               n_estimators   rf_criterion   max_depth   min_samples_leaf   random_state
0      false false false true true true     25             gini           3           1                  0
0.3
0.6
1.4
1.7
2

Excel formula:

=RF_CLASSIFY({0;0.3;0.6;1.4;1.7;2}, {FALSE,FALSE,FALSE,TRUE,TRUE,TRUE}, 25, "gini", 3, 1, 0)

Expected output:

{"type":"Double","basicValue":1,"properties":{"accuracy":{"type":"Double","basicValue":1},"sample_count":{"type":"Double","basicValue":6},"feature_count":{"type":"Double","basicValue":1},"class_count":{"type":"Double","basicValue":2},"classes":{"type":"Array","elements":[[{"type":"Boolean","basicValue":false}],[{"type":"Boolean","basicValue":true}]]},"predictions":{"type":"Array","elements":[[{"type":"Boolean","basicValue":false}],[{"type":"Boolean","basicValue":false}],[{"type":"Boolean","basicValue":false}],[{"type":"Boolean","basicValue":true}],[{"type":"Boolean","basicValue":true}],[{"type":"Boolean","basicValue":true}]]},"prediction_counts":{"type":"Array","elements":[[{"type":"String","basicValue":"class"},{"type":"String","basicValue":"count"}],[{"type":"Boolean","basicValue":false},{"type":"Double","basicValue":3}],[{"type":"Boolean","basicValue":true},{"type":"Double","basicValue":3}]]},"probabilities":{"type":"Array","elements":[[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":1}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":1}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":1}]]},"feature_importances":{"type":"Array","elements":[[{"type":"Double","basicValue":1}]]},"estimator_count":{"type":"Double","basicValue":25}}}

Python Code

import numpy as np
from sklearn.ensemble import RandomForestClassifier as SklearnRandomForestClassifier

def rf_classify(data, target, n_estimators=100, rf_criterion='gini', max_depth=None, min_samples_leaf=1, random_state=None):
    """
    Fit a random forest classifier and return training predictions.

    See: https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html

    This example function is provided as-is without any representation of accuracy.

    Args:
        data (list[list]): 2D array of numeric feature data with rows as samples and columns as features.
        target (list[list]): Target labels as a single row, single column, or scalar when only one sample is present.
        n_estimators (int, optional): Number of trees in the forest. Default is 100.
        rf_criterion (str, optional): Split quality measure used by each decision tree. Valid options: 'gini', 'entropy', 'log_loss'. Default is 'gini'.
        max_depth (int, optional): Maximum depth of each tree. Leave blank for unconstrained depth. Default is None.
        min_samples_leaf (int, optional): Minimum number of samples required in each leaf. Default is 1.
        random_state (int, optional): Integer seed for reproducible tree sampling. Leave blank for the estimator default. Default is None.

    Returns:
        dict: Excel data type containing training accuracy, predictions, probabilities, and fitted feature importances.
    """
    def py(value):
        # Unwrap NumPy scalar types to plain Python values.
        return value.item() if isinstance(value, np.generic) else value

    def cell(value):
        value = py(value)
        if isinstance(value, bool):
            return {"type": "Boolean", "basicValue": bool(value)}
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            return {"type": "Double", "basicValue": float(value)}
        return {"type": "String", "basicValue": str(value)}

    def col(values):
        return [[cell(value)] for value in values]

    def mat(values):
        return [[cell(value) for value in row] for row in values]

    def parse_data(value):
        # Coerce a scalar to a 1x1 grid, then validate a rectangular, finite numeric matrix.
        value = [[value]] if not isinstance(value, list) else value
        if not isinstance(value, list) or not value or not all(isinstance(row, list) and row for row in value):
            return None, "Error: data must be a non-empty 2D list"
        if len({len(row) for row in value}) != 1:
            return None, "Error: data must be a rectangular 2D list"
        data_np = np.array(value, dtype=float)
        if data_np.ndim != 2 or data_np.size == 0:
            return None, "Error: data must be a non-empty 2D list"
        if not np.isfinite(data_np).all():
            return None, "Error: data must contain only finite numeric values"
        return data_np, None

    def parse_target(value, sample_count):
        # Accept a scalar, a flat list, a single row, or a single column of labels.
        if not isinstance(value, list):
            labels = [value]
        elif not value:
            return None, "Error: target must be non-empty"
        elif all(not isinstance(item, list) for item in value):
            labels = value
        elif len(value) == 1:
            labels = value[0]
        elif all(isinstance(row, list) and len(row) == 1 for row in value):
            labels = [row[0] for row in value]
        else:
            return None, "Error: target must be a single row or column"

        if len(labels) != sample_count:
            return None, "Error: target length must match sample count"

        parsed = []
        classes = []
        for item in labels:
            item = py(item)
            if isinstance(item, str):
                if not item.strip():
                    return None, "Error: target labels must not be blank"
            elif isinstance(item, bool):
                item = bool(item)
            elif isinstance(item, (int, float)) and not isinstance(item, bool):
                if not np.isfinite(float(item)):
                    return None, "Error: target labels must be finite"
                item = float(item) if isinstance(item, float) else int(item)
            else:
                return None, "Error: target labels must be scalar string, boolean, or numeric values"
            parsed.append(item)
            # Track distinct classes; the type check keeps e.g. True and 1.0 distinct.
            if not any(type(existing) is type(item) and existing == item for existing in classes):
                classes.append(item)

        if len(classes) < 2:
            return None, "Error: target must contain at least 2 classes"
        return parsed, None

    def count_table(predictions, classes):
        # Build the two-column class/count summary table.
        rows = [[{"type": "String", "basicValue": "class"}, {"type": "String", "basicValue": "count"}]]
        for class_label in classes:
            count = sum(type(prediction) is type(class_label) and prediction == class_label for prediction in predictions)
            rows.append([cell(class_label), {"type": "Double", "basicValue": float(count)}])
        return rows

    try:
        data_np, error = parse_data(data)
        if error:
            return error

        target_values, error = parse_target(target, data_np.shape[0])
        if error:
            return error

        if int(n_estimators) < 1:
            return "Error: n_estimators must be at least 1"
        criterion_value = str(rf_criterion).strip().lower()
        if criterion_value not in {"gini", "entropy", "log_loss"}:
            return "Error: rf_criterion must be 'gini', 'entropy', or 'log_loss'"
        depth = None if max_depth in (None, "") else int(max_depth)
        if depth is not None and depth < 1:
            return "Error: max_depth must be at least 1 when provided"
        if int(min_samples_leaf) < 1:
            return "Error: min_samples_leaf must be at least 1"

        fitted = SklearnRandomForestClassifier(
            n_estimators=int(n_estimators),
            criterion=criterion_value,
            max_depth=depth,
            min_samples_leaf=int(min_samples_leaf),
            random_state=None if random_state in (None, "") else int(random_state)
        ).fit(data_np, target_values)

        prediction_array = fitted.predict(data_np)
        predictions = [py(item) for item in prediction_array.tolist()]
        classes = [py(item) for item in fitted.classes_.tolist()]
        accuracy = float(np.mean([
            type(prediction) is type(actual) and prediction == actual
            for prediction, actual in zip(predictions, target_values)
        ]))

        return {
            "type": "Double",
            "basicValue": accuracy,
            "properties": {
                "accuracy": {"type": "Double", "basicValue": accuracy},
                "sample_count": {"type": "Double", "basicValue": float(data_np.shape[0])},
                "feature_count": {"type": "Double", "basicValue": float(data_np.shape[1])},
                "class_count": {"type": "Double", "basicValue": float(len(classes))},
                "classes": {"type": "Array", "elements": col(classes)},
                "predictions": {"type": "Array", "elements": col(predictions)},
                "prediction_counts": {"type": "Array", "elements": count_table(predictions, classes)},
                "probabilities": {"type": "Array", "elements": mat(fitted.predict_proba(data_np).tolist())},
                "feature_importances": {"type": "Array", "elements": col(fitted.feature_importances_.tolist())},
                "estimator_count": {"type": "Double", "basicValue": float(len(fitted.estimators_))}
            }
        }
    except Exception as e:
        return f"Error: {str(e)}"
