Key terms and concepts in AI and explainability.

  • Black Box System

    A system for which we can only observe the inputs and outputs, but not the internal workings.

  • Classification

    Classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs, on the basis of a training set of data containing observations (or instances) whose category membership is known.
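
    A minimal sketch of this idea in plain Python, using a hypothetical 1-nearest-neighbour classifier (the training instances and category labels are invented for illustration):

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Hypothetical training set: (feature vector, known category) pairs.
TRAINING = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((5.0, 5.0), "dog"),
    ((4.8, 5.2), "dog"),
]

def classify(observation):
    """Assign the new observation the category of its nearest
    training instance (1-nearest-neighbour classification)."""
    _, label = min(TRAINING, key=lambda pair: dist(pair[0], observation))
    return label
```

    For example, classify((1.1, 0.9)) returns "cat" because that point lies closest to the known cat instances.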

  • Confidence Level, Model Confidence

    The confidence level of a model is a statistical measure of how certain the model is about a particular prediction or outcome, often reported as a probability or score attached to the predicted class.
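
    One common (though not the only) way to obtain such a measure for a classifier is to take the largest softmax probability of its raw output scores; a sketch, with made-up scores:

```python
from math import exp

def softmax(scores):
    """Normalise raw scores into probabilities that sum to 1."""
    exps = [exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def confidence(scores):
    """Treat the largest softmax probability as the model's
    confidence in its predicted class (a common heuristic)."""
    return max(softmax(scores))
```

    Note that a high softmax value indicates the model is certain, not that the model is correct; calibration is a separate question.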

  • Convolutional Neural Network

    A class of deep neural networks, most commonly applied to analyzing visual imagery. The name “convolutional neural network” indicates that the network employs a mathematical operation called convolution. Convolution is a specialized kind of linear operation. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers.
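
    The convolution operation itself can be sketched in a few lines; this hypothetical 1-D, "valid"-mode version shows the sliding dot product that replaces general matrix multiplication:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: slide the (flipped) kernel across
    the signal and take dot products at each position."""
    k = kernel[::-1]  # convolution flips the kernel (vs. cross-correlation)
    n = len(signal) - len(k) + 1
    return [sum(signal[i + j] * k[j] for j in range(len(k))) for i in range(n)]
```

    A CNN layer applies many such kernels (in 2-D, for imagery) and learns their weights from data.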

  • Counterfactuals

    An explanation of why an instance was not classified as a given class, typically stated as the smallest change to its features that would have produced that classification.
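
    A toy sketch of a counterfactual search against a hypothetical black-box loan rule (the rule, the feature names, and the step size are all invented for illustration):

```python
def loan_approved(income, debt):
    # Hypothetical black-box decision rule.
    return income - 2 * debt > 10

def counterfactual_income(income, debt, step=1, limit=100):
    """Find the smallest income increase that flips a rejected
    application to approved: a one-feature counterfactual."""
    if loan_approved(income, debt):
        return 0
    for delta in range(step, limit + 1, step):
        if loan_approved(income + delta, debt):
            return delta
    return None  # no counterfactual found within the search limit
```

    The returned delta answers the question "why was this application not approved?" in terms of what would have to change.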

  • Decision Boundary

    The surface in feature space that separates the regions a classifier assigns to different classes (e.g., poisonous vs. safe).
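
    For a linear classifier in two dimensions the decision boundary is simply a line; a sketch with hypothetical weights and class names:

```python
def side_of_boundary(x, y, w=(1.0, 1.0), b=-5.0):
    """Classify a 2-D point by which side of the line
    w[0]*x + w[1]*y + b = 0 it falls on (a linear decision boundary)."""
    return "poisonous" if w[0] * x + w[1] * y + b > 0 else "safe"
```

    Points on one side of the line x + y = 5 are labelled one class, points on the other side the other; non-linear models draw curved boundaries instead.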

  • Decision Tree

    A decision tree is a flowchart-like structure in which each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents the outcome of the test, and each leaf node represents a class label (decision taken after computing all attributes).
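
    Such a tree maps directly onto nested if/else statements; a hand-written sketch with invented attributes and labels:

```python
def classify_fruit(color, diameter_cm):
    """A hand-written decision tree: each `if` is an internal node
    testing an attribute, each branch an outcome of that test, and
    each `return` a leaf node carrying a class label."""
    if diameter_cm >= 7:           # internal node: test on diameter
        return "grapefruit" if color == "orange" else "melon"
    else:
        if color == "red":         # internal node: test on color
            return "apple"
        return "lime"
```

    Learned decision trees have the same structure; the tests and thresholds are chosen automatically from training data.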

  • Deep Learning

    Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.

  • Deep Neural Network

    An artificial neural network (ANN) with multiple layers between the input and output layers. The DNN finds the correct mathematical manipulation to turn the input into the output, whether that relationship is linear or non-linear.

  • Ensemble Learning

    Ensemble learning is the process by which multiple models, such as classifiers or experts, are strategically generated and combined to solve a particular computational intelligence problem.
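
    A minimal sketch of the combination step, using three invented spam heuristics as the base models and a majority vote to merge their predictions:

```python
from statistics import mode

# Hypothetical base classifiers; a real ensemble would train these.
def is_spam_keywords(text):
    return "win" in text.lower()

def is_spam_length(text):
    return len(text) < 20

def is_spam_punctuation(text):
    return text.count("!") >= 2

MODELS = [is_spam_keywords, is_spam_length, is_spam_punctuation]

def ensemble_predict(text):
    """Combine the base models by majority vote."""
    return mode(m(text) for m in MODELS)
```

    The vote can outperform any single base model when their errors are not strongly correlated, which is the central idea behind ensembling.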

  • Generalized Additive Model

    A generalized linear model in which the linear predictor depends linearly on unknown smooth functions of some predictor variables, and interest focuses on inference about these smooth functions.
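
    A sketch of the idea with two invented smooth component functions (the shapes of f_age and f_income are purely illustrative):

```python
from math import exp, sin

def f_age(age):        # smooth effect of age (hypothetical shape)
    return sin(age / 10)

def f_income(income):  # smooth effect of income (hypothetical shape)
    return 1 - exp(-income / 50)

def gam_predict(age, income, intercept=0.2):
    """GAM linear predictor: a sum of smooth functions of each
    predictor, rather than a weighted sum of the raw predictors."""
    return intercept + f_age(age) + f_income(income)
```

    In a fitted GAM the component functions are estimated from data (e.g., as splines), and inspecting each one shows how that predictor influences the response.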

  • Generalized Linear Model

    A flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution.
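
    Logistic regression is the classic example: the linear predictor is passed through an inverse link function so the response is a probability rather than an unbounded value (the coefficients here are invented):

```python
from math import exp

def glm_predict(x, beta0, beta1):
    """Logistic regression as a GLM: the linear predictor
    eta = beta0 + beta1*x is mapped through the inverse logit
    link to a mean response in (0, 1)."""
    eta = beta0 + beta1 * x        # linear predictor
    return 1 / (1 + exp(-eta))     # inverse link (logistic function)
```

    Swapping the link function and error distribution (e.g., log link with Poisson errors) yields the other members of the GLM family.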

  • Neural Network

    Computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with task-specific rules.
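
    A minimal forward pass, with weights picked by hand to approximate XOR; in practice such weights are learned from examples, not programmed:

```python
from math import exp

def sigmoid(z):
    return 1 / (1 + exp(-z))

def forward(x1, x2):
    """Forward pass of a tiny 2-2-1 network. The hand-picked weights
    make the hidden units behave roughly like OR and NAND, and the
    output like their AND, so the network approximates XOR."""
    h1 = sigmoid(20 * x1 + 20 * x2 - 10)    # ~ OR
    h2 = sigmoid(-20 * x1 - 20 * x2 + 30)   # ~ NAND
    return sigmoid(20 * h1 + 20 * h2 - 30)  # ~ AND of the two
```

    Training replaces the hand-picked weights with values found by minimising prediction error over example inputs and outputs.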

  • Random Forest

    An ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees.
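
    A toy sketch of the recipe, bootstrap resampling plus majority vote, using one-split "stump" trees on an invented 1-D dataset (a real random forest also subsamples features and grows full trees):

```python
import random
from statistics import mode

# Hypothetical 1-D dataset: (feature, is_positive) pairs.
DATA = [(1, False), (2, False), (3, False), (7, True), (8, True), (9, True)]

def train_stump(sample):
    """Fit a one-split 'tree': choose the threshold t minimising the
    number of points where (x > t) disagrees with the true label."""
    best_t, best_err = None, None
    for t, _ in sample:
        err = sum((x > t) != y for x, y in sample)
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t

def random_forest(data, n_trees=15, seed=0):
    """Train each stump on a bootstrap resample of the data, then
    predict by taking the mode (majority vote) of the trees."""
    rng = random.Random(seed)
    thresholds = [train_stump([rng.choice(data) for _ in data])
                  for _ in range(n_trees)]
    return lambda x: mode(x > t for t in thresholds)
```

    Because each tree sees a different resample, their individual errors differ, and the vote averages them out.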

  • Rule-based Machine Learning

    Any machine learning method that identifies, learns, or evolves 'rules' to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system.
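
    A minimal sketch of such a rule set, with invented weather rules; in a real rule-based learner the (condition, label) pairs would be learned or evolved from data rather than written by hand:

```python
# Hypothetical rule set: ordered (condition, label) pairs.
RULES = [
    (lambda temp, humidity: temp > 30 and humidity > 70, "storm"),
    (lambda temp, humidity: temp > 30, "hot"),
    (lambda temp, humidity: True, "mild"),  # default rule
]

def apply_rules(temp, humidity):
    """Return the label of the first rule whose condition fires;
    the rule set *is* the knowledge captured by the system."""
    for condition, label in RULES:
        if condition(temp, humidity):
            return label
```

    Because the rules are explicit and inspectable, rule-based learners are often considered inherently interpretable, in contrast to black box systems.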