A glossary of core concepts for the MindLab platform.
Adaptive Computation: The ability of a system, such as a Heterogeneous Mixture-of-Experts model, to dynamically allocate computational resources based on the complexity of the input.
Agentic Orchestration Layer: The emerging layer in the AI technology stack responsible for managing and coordinating complex, multi-step workflows across a heterogeneous workforce of AI agents.
AI Bill of Materials: A formal record of the components, dependencies, and training data used to build an AI model or agent. In MindLab, the Capsule manifest serves this function.
All-to-All Communication: The collective communication primitive where every device in a distributed system sends a subset of its data to every other device. This is the primary communication pattern for expert parallelism in MoE models.
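The data movement can be sketched in plain Python (no distributed framework; the function name and device/chunk labels are illustrative): each of N devices holds N chunks, and after the exchange device j holds the j-th chunk from every device, as when tokens are routed to expert j.

```python
# Toy all-to-all exchange among N "devices" (plain Python, illustrative).
# per_device_chunks[i][j] is the chunk device i wants to send to device j.
def all_to_all(per_device_chunks):
    n = len(per_device_chunks)
    # After the exchange, device j holds the chunk each device i sent it.
    return [[per_device_chunks[i][j] for i in range(n)] for j in range(n)]

# Device i's outgoing chunks: the one destined for device j is labeled (i, j).
sent = [[(i, j) for j in range(3)] for i in range(3)]
received = all_to_all(sent)
print(received[1])  # device 1 holds (i, 1) from every device i: [(0, 1), (1, 1), (2, 1)]
```

In a real MoE system this exchange is a single collective call (e.g. an `all_to_all` primitive in a communication library), not a Python loop, but the data layout before and after is the same.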
Argumentation: A reasoning process based on constructing and comparing pro and con arguments, rather than propagating numeric certainty factors.
Capsule (Agent Pack): A self-contained, version-controlled, and distributable unit of work that encapsulates a team of specialized AI agents, their associated playbooks, governance policies (defined in a Model Card), and evaluation suites.
Chain-of-Thought (CoT) Prompting: A technique for improving the multi-step reasoning abilities of LLMs by providing demonstrations that include a series of intermediate reasoning steps.
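A minimal illustration of the technique: the demonstration in the prompt spells out intermediate reasoning steps rather than jumping to the answer (the example question is the commonly cited cafeteria-apples problem; the placeholder marks where a new question would go).

```python
# Illustrative chain-of-thought prompt: the worked demonstration includes
# intermediate reasoning steps, not just the final answer.
cot_prompt = (
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A: The cafeteria started with 23 apples. They used 20, leaving "
    "23 - 20 = 3. They bought 6 more, so 3 + 6 = 9. The answer is 9.\n"
    "Q: <new question goes here>\n"
    "A:"
)
print(cot_prompt)
```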
Co-Intelligence: A term coined by Ethan Mollick to describe the optimal human-AI relationship, where AI acts as a partner that augments and enhances human expertise.
Collective Brain: A concept from Joseph Henrich describing the distributed network of knowledge, skills, and practices held across a social group. The MindLab Marketplace is designed to create a “collective brain” for an industry.
Conditional Computation: The architectural principle of activating only the most relevant parts of a system for a given task.
Context Engineering: The disciplined practice of designing and managing the context provided to an LLM to ensure reliable and high-quality outputs.
Control Problem: The challenge of ensuring that advanced AI systems remain aligned with human values and intentions.
Counterfactual Fairness: A rigorous, causality-based definition of fairness. An algorithm is counterfactually fair if its decision for an individual would have been the same in a hypothetical world where that individual’s protected attributes were different.
Direct Preference Optimization (DPO): A simple, stable, and computationally efficient method for aligning language models with human preferences.
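The core of DPO is a per-example loss that pushes the policy to prefer the chosen response over the rejected one by more than a frozen reference model does. A minimal sketch, assuming the log-probabilities have already been computed (the numeric inputs below are illustrative):

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * margin), where the margin
    compares the policy's chosen-vs-rejected preference against a frozen
    reference model's. Inputs are response log-probabilities."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# The policy prefers the chosen response more than the reference does,
# so the margin is positive and the loss is small.
print(round(dpo_loss(-10.0, -14.0, -12.0, -13.0), 4))  # ≈ 0.5544
```

In practice the log-probabilities come from summing token log-likelihoods under the policy and reference models, and the loss is averaged over a batch; the sketch shows only the objective itself.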
Gating Network: The component in a Mixture-of-Experts system responsible for routing an input to the most appropriate expert(s). In MindLab, the Orchestrator functions as the gating network.
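A toy top-k gate makes the routing idea concrete: softmax over per-expert scores, keep the k highest, renormalize their weights (real routers add noise, capacity limits, and load-balancing losses; this sketch omits all of that).

```python
import math

def top_k_gate(logits, k=2):
    """Toy gating network: softmax over expert logits, keep the top-k
    experts, and renormalize their weights so they sum to 1."""
    shifted = [x - max(logits) for x in logits]          # numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    probs = [e / total for e in exps]
    top = sorted(range(len(probs)), key=lambda i: -probs[i])[:k]
    z = sum(probs[i] for i in top)
    return {i: probs[i] / z for i in top}                # expert index -> weight

weights = top_k_gate([2.0, 0.5, 1.0, -1.0], k=2)
print(weights)  # experts 0 and 2 are selected; their weights sum to 1
```

The input is then processed only by the selected experts, and their outputs are combined using these weights, which is what makes the computation conditional.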
Heterogeneous Mixture of Experts (HMoE): An advanced MoE architecture that uses a portfolio of “expert” sub-networks of varying sizes and capacities.
Improvising Mind: Nick Chater’s central thesis that the mind has no hidden depths but is a brilliant improviser, generating thoughts and actions in the moment based on a history of precedents.
Jagged Frontier: A term coined by Ethan Mollick to describe the unpredictable and counter-intuitive landscape of AI capabilities.
Knowledge Distillation: The process of transferring knowledge from a large “teacher” model to a smaller “student” model.
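A common formulation minimizes the KL divergence between temperature-softened teacher and student output distributions. A minimal sketch (illustrative; real setups typically also mix in the hard-label cross-entropy):

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Toy distillation loss: KL(teacher || student) on temperature-
    softened distributions. Higher temperature exposes more of the
    teacher's relative preferences among non-top classes."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(p * math.log(p / q) for p, q in zip(t, s))

# A student that matches the teacher's logits incurs zero loss.
print(distill_loss([3.0, 1.0, 0.2], [3.0, 1.0, 0.2]))  # 0.0
```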
Livewired: A term coined by David Eagleman to describe the brain as a dynamic, self-reconfiguring system whose physical structure is constantly being shaped by experience.
Model Card: A standardized documentation framework for reporting the performance, intended uses, and limitations of a trained AI model.
NIST AI RMF: The Artificial Intelligence Risk Management Framework developed by the U.S. National Institute of Standards and Technology.
Orchestrator: The user’s personal, persistent, and stateful AI assistant. It serves as the single point of interaction and governance.
Scaling Laws: A set of power laws that govern the performance of Transformer language models as a function of model size, dataset size, and training compute.
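The model-size term takes the power-law form L(N) = (N_c / N)^alpha. The sketch below uses the fitted constants reported by Kaplan et al. purely to show the shape of the curve; they are not universal and depend on the data and architecture.

```python
# Illustrative power-law scaling of loss with parameter count N:
#   L(N) = (N_c / N) ** alpha
# Constants are the fits reported by Kaplan et al. (assumed here only
# to illustrate the curve's shape).
def loss_from_params(n_params, n_c=8.8e13, alpha=0.076):
    return (n_c / n_params) ** alpha

# Each 10x increase in parameters yields a small, predictable drop in loss.
for n in (1e8, 1e9, 1e10):
    print(f"{n:.0e} params -> predicted loss {loss_from_params(n):.3f}")
```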
SpanBERT: A pre-training method that improves performance on span-selection tasks by masking contiguous spans of text and using a Span-Boundary Objective (SBO).
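The span-masking step can be sketched as follows: span lengths are drawn from a clipped geometric distribution and whole contiguous spans are masked until a target fraction of tokens is covered. This is a toy sketch in the spirit of SpanBERT, not the reference implementation (parameter values and the helper name are illustrative).

```python
import random

def mask_spans(tokens, mask_ratio=0.15, p=0.2, max_len=10, seed=0):
    """Toy contiguous-span masking: sample span lengths from a clipped
    geometric distribution and mask whole spans until roughly
    mask_ratio of the tokens are masked."""
    rng = random.Random(seed)
    out = list(tokens)
    budget = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < budget:
        # Geometric(p) span length, clipped to max_len and sequence length.
        length = 1
        while length < max_len and rng.random() > p:
            length += 1
        length = min(length, len(tokens))
        start = rng.randrange(0, len(tokens) - length + 1)
        for i in range(start, start + length):
            if out[i] != "[MASK]":
                masked += 1
            out[i] = "[MASK]"
    return out

tokens = ["an", "american", "football", "game", "is", "played", "here"]
print(mask_spans(tokens, mask_ratio=0.3))
```

The Span-Boundary Objective then trains the model to predict each masked token using only the representations of the tokens at the span's boundaries, which this sketch does not cover.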