LINEAR_SVC

Linear support vector classification fits a max-margin linear decision boundary. The decision function for a sample x is defined as:

f(x) = w^T x + b

The parameters w and b are found by minimizing the squared norm of w plus a C-weighted hinge loss summed over the training samples (the default squared_hinge loss squares each hinge term):

\min_{w, b} \frac{1}{2} \|w\|^2 + C \sum_{i=1}^n \max(0, 1 - y_i(w^T x_i + b))
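
For two classes, the predicted label follows the sign of the decision function. With more than two classes, scikit-learn's LinearSVC fits one binary classifier per class (one-vs-rest) and predicts the class with the largest score, as the three-group example below illustrates:

\hat{y}(x) = \operatorname{sign}(f(x)) \quad \text{(binary)}, \qquad \hat{y}(x) = \arg\max_k \left( w_k^T x + b_k \right) \quad \text{(one-vs-rest)}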

This wrapper accepts rows as samples and a target supplied as a single row or single column. It returns training accuracy together with predicted labels, class counts, decision scores, and fitted coefficient arrays.
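
The decision_scores in the result can be reproduced from the returned coefficients and intercepts. The sketch below uses the coefficient and intercept values from Example 1 further down; the array names are illustrative placeholders, not part of the wrapper's API:

import numpy as np

# Values copied from the Example 1 output: one coefficient row and one
# intercept because the problem is binary.
coef = np.array([[0.42623, 0.42623]])   # shape (n_rows, n_features)
intercept = np.array([-1.01639])
X = np.array([[0, 0], [0, 1], [1, 0], [2, 2], [2, 3], [3, 2]], dtype=float)

# f(x) = w^T x + b, evaluated for every sample at once.
scores = X @ coef.T + intercept
print(scores.ravel())  # approx. [-1.016, -0.590, -0.590, 0.689, 1.115, 1.115]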

Excel Usage

=LINEAR_SVC(data, target, penalty, loss, C, max_iter, tol, fit_intercept, random_state)
  • data (list[list], required): 2D array of numeric feature data with rows as samples and columns as features.
  • target (list[list], required): Target labels as a single row, single column, or scalar when only one sample is present.
  • penalty (str, optional, default: "l2"): Norm used in the linear SVM penalty term; valid options are "l1" and "l2".
  • loss (str, optional, default: "squared_hinge"): Hinge-style loss function used during fitting; valid options are "hinge" and "squared_hinge". The combination penalty "l1" with loss "hinge" is not supported.
  • C (float, optional, default: 1): Inverse regularization strength. Smaller values apply stronger regularization.
  • max_iter (int, optional, default: 1000): Maximum number of optimization iterations.
  • tol (float, optional, default: 0.0001): Convergence tolerance for the optimizer.
  • fit_intercept (bool, optional, default: true): Whether to include an intercept term in the linear decision function.
  • random_state (int, optional, default: null): Integer seed used when the underlying solver shuffles data. Leave blank for the estimator default.

Returns (dict): Excel data type containing training accuracy, predictions, decision scores, and fitted coefficient arrays.

Example 1: Fit linear support vector classification for two string-labeled classes

Inputs:

data    target  penalty  loss           C  max_iter  tol     fit_intercept  random_state
0  0    cold    l2       squared_hinge  1  4000      0.0001  true           0
0  1    cold
1  0    cold
2  2    hot
2  3    hot
3  2    hot

Excel formula:

=LINEAR_SVC({0,0;0,1;1,0;2,2;2,3;3,2}, {"cold";"cold";"cold";"hot";"hot";"hot"}, "l2", "squared_hinge", 1, 4000, 0.0001, TRUE, 0)

Expected output:

{"type":"Double","basicValue":1,"properties":{"accuracy":{"type":"Double","basicValue":1},"sample_count":{"type":"Double","basicValue":6},"feature_count":{"type":"Double","basicValue":2},"class_count":{"type":"Double","basicValue":2},"classes":{"type":"Array","elements":[[{"type":"String","basicValue":"cold"}],[{"type":"String","basicValue":"hot"}]]},"predictions":{"type":"Array","elements":[[{"type":"String","basicValue":"cold"}],[{"type":"String","basicValue":"cold"}],[{"type":"String","basicValue":"cold"}],[{"type":"String","basicValue":"hot"}],[{"type":"String","basicValue":"hot"}],[{"type":"String","basicValue":"hot"}]]},"prediction_counts":{"type":"Array","elements":[[{"type":"String","basicValue":"class"},{"type":"String","basicValue":"count"}],[{"type":"String","basicValue":"cold"},{"type":"Double","basicValue":3}],[{"type":"String","basicValue":"hot"},{"type":"Double","basicValue":3}]]},"decision_scores":{"type":"Array","elements":[[{"type":"Double","basicValue":-1.01639}],[{"type":"Double","basicValue":-0.590164}],[{"type":"Double","basicValue":-0.590164}],[{"type":"Double","basicValue":0.688525}],[{"type":"Double","basicValue":1.11475}],[{"type":"Double","basicValue":1.11475}]]},"coefficients":{"type":"Array","elements":[[{"type":"Double","basicValue":0.42623},{"type":"Double","basicValue":0.42623}]]},"intercepts":{"type":"Array","elements":[[{"type":"Double","basicValue":-1.01639}]]}}}

Example 2: Use hinge loss for one-dimensional numeric labels

Inputs:

data  target  penalty  loss   C  max_iter  tol     fit_intercept  random_state
0     0       l2       hinge  1  4000      0.0001  true           0
0.2   0
0.4   0
1.2   1
1.4   1
1.6   1

Excel formula:

=LINEAR_SVC({0;0.2;0.4;1.2;1.4;1.6}, {0;0;0;1;1;1}, "l2", "hinge", 1, 4000, 0.0001, TRUE, 0)

Expected output:

{"type":"Double","basicValue":1,"properties":{"accuracy":{"type":"Double","basicValue":1},"sample_count":{"type":"Double","basicValue":6},"feature_count":{"type":"Double","basicValue":1},"class_count":{"type":"Double","basicValue":2},"classes":{"type":"Array","elements":[[{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1}]]},"predictions":{"type":"Array","elements":[[{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":0}],[{"type":"Double","basicValue":1}],[{"type":"Double","basicValue":1}],[{"type":"Double","basicValue":1}]]},"prediction_counts":{"type":"Array","elements":[[{"type":"String","basicValue":"class"},{"type":"String","basicValue":"count"}],[{"type":"Double","basicValue":0},{"type":"Double","basicValue":3}],[{"type":"Double","basicValue":1},{"type":"Double","basicValue":3}]]},"decision_scores":{"type":"Array","elements":[[{"type":"Double","basicValue":-1.00001}],[{"type":"Double","basicValue":-0.71429}],[{"type":"Double","basicValue":-0.428575}],[{"type":"Double","basicValue":0.714285}],[{"type":"Double","basicValue":1}],[{"type":"Double","basicValue":1.28572}]]},"coefficients":{"type":"Array","elements":[[{"type":"Double","basicValue":1.42858}]]},"intercepts":{"type":"Array","elements":[[{"type":"Double","basicValue":-1.00001}]]}}}

Example 3: Fit linear support vector classification for three groups

Inputs:

data      target  penalty  loss           C  max_iter  tol     fit_intercept  random_state
0    0    left    l2       squared_hinge  1  4000      0.0001  true           0
0.2  0.1  left
4    4    center
4.2  3.9  center
8    0    right
8.2  0.1  right

Excel formula:

=LINEAR_SVC({0,0;0.2,0.1;4,4;4.2,3.9;8,0;8.2,0.1}, {"left";"left";"center";"center";"right";"right"}, "l2", "squared_hinge", 1, 4000, 0.0001, TRUE, 0)

Expected output:

{"type":"Double","basicValue":1,"properties":{"accuracy":{"type":"Double","basicValue":1},"sample_count":{"type":"Double","basicValue":6},"feature_count":{"type":"Double","basicValue":2},"class_count":{"type":"Double","basicValue":3},"classes":{"type":"Array","elements":[[{"type":"String","basicValue":"center"}],[{"type":"String","basicValue":"left"}],[{"type":"String","basicValue":"right"}]]},"predictions":{"type":"Array","elements":[[{"type":"String","basicValue":"left"}],[{"type":"String","basicValue":"left"}],[{"type":"String","basicValue":"center"}],[{"type":"String","basicValue":"center"}],[{"type":"String","basicValue":"right"}],[{"type":"String","basicValue":"right"}]]},"prediction_counts":{"type":"Array","elements":[[{"type":"String","basicValue":"class"},{"type":"String","basicValue":"count"}],[{"type":"String","basicValue":"center"},{"type":"Double","basicValue":2}],[{"type":"String","basicValue":"left"},{"type":"Double","basicValue":2}],[{"type":"String","basicValue":"right"},{"type":"Double","basicValue":2}]]},"decision_scores":{"type":"Array","elements":[[{"type":"Double","basicValue":-0.801853},{"type":"Double","basicValue":0.811928},{"type":"Double","basicValue":-0.804821}],[{"type":"Double","basicValue":-0.75962},{"type":"Double","basicValue":0.744738},{"type":"Double","basicValue":-0.788505}],[{"type":"Double","basicValue":0.992554},{"type":"Double","basicValue":-0.971388},{"type":"Double","basicValue":-1.04172}],[{"type":"Double","basicValue":0.939812},{"type":"Double","basicValue":-0.994626},{"type":"Double","basicValue":-0.969078}],[{"type":"Double","basicValue":-1.01204},{"type":"Double","basicValue":-0.996617},{"type":"Double","basicValue":0.97425}],[{"type":"Double","basicValue":-0.969803},{"type":"Double","basicValue":-1.06381},{"type":"Double","basicValue":0.990566}]]},"coefficients":{"type":"Array","elements":[[{"type":"Double","basicValue":-0.0262729},{"type":"Double","basicValue":0.474875}],[{"type":"Double","basicValue":-0.226068},{"type":"Double","basicValue":-0.219761}],[{"type":"Double","basicValue":0.222384},{"type":"Double","basicValue":-0.281607}]]},"intercepts":{"type":"Array","elements":[[{"type":"Double","basicValue":-0.801853}],[{"type":"Double","basicValue":0.811928}],[{"type":"Double","basicValue":-0.804821}]]}}}

Example 4: Flatten a single-row boolean target range for linear support vector classification

Inputs:

data  target (single row)               penalty  loss           C  max_iter  tol     fit_intercept  random_state
0     false false false true true true  l2       squared_hinge  1  4000      0.0001  true           0
0.3
0.6
1.4
1.7
2

Excel formula:

=LINEAR_SVC({0;0.3;0.6;1.4;1.7;2}, {FALSE,FALSE,FALSE,TRUE,TRUE,TRUE}, "l2", "squared_hinge", 1, 4000, 0.0001, TRUE, 0)

Expected output:

{"type":"Double","basicValue":1,"properties":{"accuracy":{"type":"Double","basicValue":1},"sample_count":{"type":"Double","basicValue":6},"feature_count":{"type":"Double","basicValue":1},"class_count":{"type":"Double","basicValue":2},"classes":{"type":"Array","elements":[[{"type":"Boolean","basicValue":false}],[{"type":"Boolean","basicValue":true}]]},"predictions":{"type":"Array","elements":[[{"type":"Boolean","basicValue":false}],[{"type":"Boolean","basicValue":false}],[{"type":"Boolean","basicValue":false}],[{"type":"Boolean","basicValue":true}],[{"type":"Boolean","basicValue":true}],[{"type":"Boolean","basicValue":true}]]},"prediction_counts":{"type":"Array","elements":[[{"type":"String","basicValue":"class"},{"type":"String","basicValue":"count"}],[{"type":"Boolean","basicValue":false},{"type":"Double","basicValue":3}],[{"type":"Boolean","basicValue":true},{"type":"Double","basicValue":3}]]},"decision_scores":{"type":"Array","elements":[[{"type":"Double","basicValue":-0.918239}],[{"type":"Double","basicValue":-0.614465}],[{"type":"Double","basicValue":-0.310692}],[{"type":"Double","basicValue":0.499371}],[{"type":"Double","basicValue":0.803145}],[{"type":"Double","basicValue":1.10692}]]},"coefficients":{"type":"Array","elements":[[{"type":"Double","basicValue":1.01258}]]},"intercepts":{"type":"Array","elements":[[{"type":"Double","basicValue":-0.918239}]]}}}

Python Code

import numpy as np
from sklearn.svm import LinearSVC as SklearnLinearSVC

def linear_svc(data, target, penalty='l2', loss='squared_hinge', C=1, max_iter=1000, tol=0.0001, fit_intercept=True, random_state=None):
    """
    Fit a linear support vector classifier and return training predictions.

    See: https://scikit-learn.org/stable/modules/generated/sklearn.svm.LinearSVC.html

    This example function is provided as-is without any representation of accuracy.

    Args:
        data (list[list]): 2D array of numeric feature data with rows as samples and columns as features.
        target (list[list]): Target labels as a single row, single column, or scalar when only one sample is present.
        penalty (str, optional): Norm used in the linear SVM penalty term. Valid options: 'l2', 'l1'. Default is 'l2'.
        loss (str, optional): Hinge-style loss function used during fitting. Valid options: 'squared_hinge', 'hinge'. Default is 'squared_hinge'.
        C (float, optional): Inverse regularization strength. Smaller values apply stronger regularization. Default is 1.
        max_iter (int, optional): Maximum number of optimization iterations. Default is 1000.
        tol (float, optional): Convergence tolerance for the optimizer. Default is 0.0001.
        fit_intercept (bool, optional): Whether to include an intercept term in the linear decision function. Default is True.
        random_state (int, optional): Integer seed used when the underlying solver shuffles data. Leave blank for the estimator default. Default is None.

    Returns:
        dict: Excel data type containing training accuracy, predictions, decision scores, and fitted coefficient arrays.
    """
    def py(value):
        # Convert numpy scalar types to plain Python values.
        return value.item() if isinstance(value, np.generic) else value

    def cell(value):
        # Wrap a scalar as an Excel cell dict with the appropriate type tag.
        value = py(value)
        if isinstance(value, bool):
            return {"type": "Boolean", "basicValue": bool(value)}
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            return {"type": "Double", "basicValue": float(value)}
        return {"type": "String", "basicValue": str(value)}

    def col(values):
        # Render a flat list as a single-column Excel array.
        return [[cell(value)] for value in values]

    def mat(values):
        # Render a 2D list as an Excel array of cells.
        return [[cell(value) for value in row] for row in values]

    def parse_data(value):
        # Validate the feature matrix: non-empty, rectangular, finite numeric values only.
        value = [[value]] if not isinstance(value, list) else value
        if not isinstance(value, list) or not value or not all(isinstance(row, list) and row for row in value):
            return None, "Error: data must be a non-empty 2D list"
        if len({len(row) for row in value}) != 1:
            return None, "Error: data must be a rectangular 2D list"
        data_np = np.array(value, dtype=float)
        if data_np.ndim != 2 or data_np.size == 0:
            return None, "Error: data must be a non-empty 2D list"
        if not np.isfinite(data_np).all():
            return None, "Error: data must contain only finite numeric values"
        return data_np, None

    def parse_target(value, sample_count):
        # Accept a scalar, single row, or single column and flatten it to a list of labels.
        if not isinstance(value, list):
            labels = [value]
        elif not value:
            return None, "Error: target must be non-empty"
        elif all(not isinstance(item, list) for item in value):
            labels = value
        elif len(value) == 1:
            labels = value[0]
        elif all(isinstance(row, list) and len(row) == 1 for row in value):
            labels = [row[0] for row in value]
        else:
            return None, "Error: target must be a single row or column"

        if len(labels) != sample_count:
            return None, "Error: target length must match sample count"

        parsed = []
        classes = []
        for item in labels:
            item = py(item)
            if isinstance(item, str):
                if not item.strip():
                    return None, "Error: target labels must not be blank"
            elif isinstance(item, bool):
                item = bool(item)
            elif isinstance(item, (int, float)) and not isinstance(item, bool):
                if not np.isfinite(float(item)):
                    return None, "Error: target labels must be finite"
                item = float(item) if isinstance(item, float) else int(item)
            else:
                return None, "Error: target labels must be strings, booleans, or numbers"
            parsed.append(item)
            if not any(type(existing) is type(item) and existing == item for existing in classes):
                classes.append(item)

        if len(classes) < 2:
            return None, "Error: target must contain at least 2 classes"
        return parsed, None

    def count_table(predictions, classes):
        # Build a two-column [class, count] table of predicted label frequencies.
        rows = [[{"type": "String", "basicValue": "class"}, {"type": "String", "basicValue": "count"}]]
        for class_label in classes:
            rows.append([cell(class_label), {"type": "Double", "basicValue": float(sum(type(prediction) is type(class_label) and prediction == class_label for prediction in predictions))}])
        return rows

    try:
        data_np, error = parse_data(data)
        if error:
            return error

        target_values, error = parse_target(target, data_np.shape[0])
        if error:
            return error

        penalty_value = str(penalty).strip().lower()
        if penalty_value not in {"l1", "l2"}:
            return "Error: penalty must be 'l1' or 'l2'"
        loss_value = str(loss).strip().lower()
        if loss_value not in {"hinge", "squared_hinge"}:
            return "Error: loss must be 'hinge' or 'squared_hinge'"
        if penalty_value == "l1" and loss_value == "hinge":
            return "Error: penalty 'l1' cannot be combined with loss 'hinge'"
        if float(C) <= 0:
            return "Error: C must be greater than 0"
        if int(max_iter) < 1:
            return "Error: max_iter must be at least 1"
        if float(tol) <= 0:
            return "Error: tol must be greater than 0"

        # Fit scikit-learn's LinearSVC; dual="auto" lets the solver pick the dual or primal formulation.
        fitted = SklearnLinearSVC(
            penalty=penalty_value,
            loss=loss_value,
            C=float(C),
            max_iter=int(max_iter),
            tol=float(tol),
            fit_intercept=bool(fit_intercept),
            random_state=None if random_state in (None, "") else int(random_state),
            dual="auto"
        ).fit(data_np, target_values)

        prediction_array = fitted.predict(data_np)
        predictions = [py(item) for item in prediction_array.tolist()]
        classes = [py(item) for item in fitted.classes_.tolist()]
        scores = np.asarray(fitted.decision_function(data_np))
        score_rows = [[float(value)] for value in scores.tolist()] if scores.ndim == 1 else scores.tolist()
        accuracy = float(np.mean([type(prediction) is type(actual) and prediction == actual for prediction, actual in zip(predictions, target_values)]))

        return {
            "type": "Double",
            "basicValue": accuracy,
            "properties": {
                "accuracy": {"type": "Double", "basicValue": accuracy},
                "sample_count": {"type": "Double", "basicValue": float(data_np.shape[0])},
                "feature_count": {"type": "Double", "basicValue": float(data_np.shape[1])},
                "class_count": {"type": "Double", "basicValue": float(len(classes))},
                "classes": {"type": "Array", "elements": col(classes)},
                "predictions": {"type": "Array", "elements": col(predictions)},
                "prediction_counts": {"type": "Array", "elements": count_table(predictions, classes)},
                "decision_scores": {"type": "Array", "elements": mat(score_rows)},
                "coefficients": {"type": "Array", "elements": mat(np.atleast_2d(fitted.coef_).tolist())},
                "intercepts": {"type": "Array", "elements": col(np.atleast_1d(fitted.intercept_).tolist())}
            }
        }
    except Exception as e:
        return f"Error: {str(e)}"
