AI Inspection Technology Glossary of Terms

Artificial intelligence inspection technology has introduced a specialized vocabulary that spans machine learning, computer vision, metrology, and industrial automation. This glossary defines the core terms used across platforms, standards, and regulatory frameworks in the US AI inspection sector. Precise terminology matters because miscommunication between engineers, compliance officers, and procurement teams leads to misapplied systems and failed audits. Each entry below reflects usage as defined or adopted by named standards bodies, including NIST, ISO, and ASTM International.


Definition and scope

AI inspection terminology encompasses the language used to describe automated systems that detect defects, measure dimensions, classify objects, or assess structural integrity using machine learning models and sensor-based hardware. The scope extends from factory floor machine vision to drone-mounted thermal imaging and cloud-based anomaly detection platforms.

Terms in this glossary are categorized across four domains:

  1. Model and algorithm terms — vocabulary related to how AI systems learn and infer
  2. Hardware and sensor terms — physical components that capture inspection data
  3. Performance and accuracy terms — metrics used to evaluate system reliability
  4. Compliance and standards terms — language drawn from regulatory and certification frameworks

The AI Inspection Technology Overview provides broader context for how these terms are applied in live deployments. NIST's AI Risk Management Framework (AI RMF 1.0) establishes foundational definitions for trustworthiness, reliability, and transparency that inform many of the model-related terms below.


How it works

Understanding AI inspection glossary terms requires mapping them to the functional pipeline of a typical inspection system. That pipeline moves through five discrete phases:

  1. Data acquisition — sensors (cameras, LiDAR, ultrasonic transducers) capture raw signals from the inspected asset
  2. Preprocessing — raw data is normalized, denoised, or augmented to prepare it for model input
  3. Inference — a trained model processes input and generates a prediction (defect present/absent, severity class, dimensional measurement)
  4. Post-processing — inference outputs are filtered using confidence thresholds, non-maximum suppression, or ensemble voting
  5. Output and action — results are logged, flagged for human review, or fed into a control system
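The five-phase pipeline above can be expressed as a chain of functions. The following is a minimal, illustrative Python sketch; every name and value here (the `Detection` class, the fake sensor values, the 0.8 threshold) is a hypothetical stand-in, not part of any real platform's API:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # e.g. "defect" (hypothetical label)
    confidence: float

def acquire() -> list[float]:
    # Phase 1: stand-in for a sensor read (camera frame, LiDAR return, ...)
    return [0.2, 0.9, 0.4, 0.95]

def preprocess(signal: list[float]) -> list[float]:
    # Phase 2: normalize the raw signal into [0, 1]
    peak = max(signal)
    return [s / peak for s in signal]

def infer(signal: list[float]) -> list[Detection]:
    # Phase 3: stand-in for a trained model producing scored predictions
    return [Detection("defect", s) for s in signal]

def postprocess(dets: list[Detection], threshold: float = 0.8) -> list[Detection]:
    # Phase 4: confidence-threshold filtering
    return [d for d in dets if d.confidence >= threshold]

def act(dets: list[Detection]) -> str:
    # Phase 5: flag for human review if any detection survived filtering
    return "flag_for_review" if dets else "pass"

result = act(postprocess(infer(preprocess(acquire()))))
```

Real deployments replace each stub with hardware drivers, model runtimes, and logging infrastructure, but the data flow between phases follows this shape.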

Key terms by phase:

  1. Data acquisition — LiDAR intensity return, point cloud, ultrasonic transducer
  2. Preprocessing — normalization, denoising, augmentation
  3. Inference — pixel classification, semantic segmentation, anomaly score
  4. Post-processing — confidence threshold, non-maximum suppression, ensemble voting
  5. Output and action — defect logging, human-review flagging, control-system handoff


Common scenarios

Different deployment environments activate different subsets of this terminology. Three representative scenarios illustrate how glossary terms translate into operational practice.

Manufacturing surface inspection: Terms such as pixel classification, semantic segmentation, and anomaly score dominate. ASTM E2905 provides standardized vocabulary for nondestructive evaluation (NDE) that overlaps with AI inspection language in this context. AI inspection in manufacturing contexts relies heavily on throughput-normalized metrics like defects-per-million-opportunities (DPMO).
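DPMO has a standard closed form: defects found, divided by the number of units inspected times the defect opportunities per unit, scaled to one million. A minimal sketch (the example figures are invented):

```python
def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities (DPMO)."""
    # Multiply before dividing so integer inputs yield an exact result
    return defects * 1_000_000 / (units * opportunities_per_unit)

# e.g. 15 defects across 2,000 units with 10 opportunities each:
# dpmo(15, 2000, 10) -> 750.0
```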

Infrastructure and utility inspection: Drone-based inspections of transmission lines or bridge decks invoke terms like orthomosaic, point cloud, LiDAR intensity return, and change detection. The FAA's Remote Pilot Certification framework governs the operational context in which these inspections occur.

Predictive maintenance: Terms shift toward feature extraction, time-series anomaly detection, remaining useful life (RUL), and degradation model. ISO 13381-1 defines RUL within the broader machinery prognostics vocabulary adopted by industrial AI platforms. Predictive-maintenance deployments frequently reference this standard in vendor documentation.
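To make the RUL and degradation-model terms concrete, here is a deliberately simple sketch: fit a straight line to a declining health indicator and extrapolate to a failure threshold. This linear model is an illustration only, not the prognostic method ISO 13381-1 prescribes; the function name and inputs are hypothetical:

```python
def estimate_rul(health: list[float], failure_threshold: float) -> float:
    """Estimate remaining useful life (in sample periods) by fitting a
    line to a health indicator and extrapolating to the threshold."""
    n = len(health)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(health) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, health))
    den = sum((x - x_mean) ** 2 for x in xs)
    slope = num / den
    if slope >= 0:
        return float("inf")  # no degradation trend detected
    intercept = y_mean - slope * x_mean
    crossing = (failure_threshold - intercept) / slope  # time axis crossing
    return max(crossing - (n - 1), 0.0)

# Health dropping 0.1 per period toward a 0.2 failure threshold:
# estimate_rul([1.0, 0.9, 0.8, 0.7], 0.2) -> roughly 5 periods remaining
```

Production prognostics replace the linear fit with learned degradation models and report RUL with confidence bounds, but the vocabulary maps onto the same extrapolate-to-threshold structure.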


Decision boundaries

Not all AI inspection terms are interchangeable, and misapplication creates compliance and procurement risk. Key distinctions:

Machine vision vs. AI inspection: Machine vision systems apply fixed, rule-based algorithms (e.g., pixel intensity thresholds). AI inspection systems use trained statistical models that generalize from labeled data. The distinction is covered in depth at Machine Vision vs. AI Inspection. ISO/IEC 22989:2022 (AI concepts and terminology) explicitly differentiates rule-based and learning-based systems.
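The rule-based versus learning-based distinction can be shown in a few lines. The contrast below is a toy sketch (all thresholds and sample data are invented): the machine-vision rule is fixed by hand, while the "learned" threshold is estimated from labeled examples:

```python
def machine_vision_rule(pixel_intensity: float) -> bool:
    # Rule-based: a fixed, hand-set intensity threshold
    return pixel_intensity > 0.6

def train_threshold(samples: list[tuple[float, bool]]) -> float:
    """Learning-based (toy): place the decision threshold midway
    between the class means of labeled (intensity, is_defect) data."""
    defect = [x for x, y in samples if y]
    good = [x for x, y in samples if not y]
    return (sum(defect) / len(defect) + sum(good) / len(good)) / 2

labeled = [(0.9, True), (0.85, True), (0.3, False), (0.2, False)]
learned_threshold = train_threshold(labeled)  # adapts if the data changes
```

The practical consequence: relabeling or adding data moves the learned boundary automatically, whereas the rule-based threshold must be re-engineered by hand.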

Accuracy vs. reliability: Accuracy measures correctness on a test dataset. Reliability — as defined in NIST AI RMF — measures consistent performance across operating conditions, including distribution shift, sensor degradation, and edge-case inputs. A system can be accurate under lab conditions and unreliable in field deployment.
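The lab-versus-field gap can be simulated directly. In this hypothetical sketch, a fixed classifier scores near-perfect accuracy on in-distribution "lab" data, then degrades when a simulated sensor drift shifts every reading (all distributions and the drift amount are invented for illustration):

```python
import random

random.seed(0)

def classify(x: float) -> bool:
    # Toy detector tuned for lab conditions: defect if reading > 0.5
    return x > 0.5

def accuracy(data: list[tuple[float, bool]]) -> float:
    return sum(classify(x) == y for x, y in data) / len(data)

# Lab conditions: defects cluster near 0.8, good parts near 0.2
lab = [(random.gauss(0.8, 0.05), True) for _ in range(100)] + \
      [(random.gauss(0.2, 0.05), False) for _ in range(100)]

# Field conditions: sensor drift shifts every reading down by 0.3
field = [(x - 0.3, y) for x, y in lab]

# accuracy(lab) is near 1.0; accuracy(field) drops sharply
```

Nothing about the model changed between the two evaluations; only the operating conditions did, which is exactly the gap the NIST reliability definition targets.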

Validation vs. verification: Verification confirms that a model was built correctly (code, architecture, training pipeline). Validation confirms that the correct model was built for the intended task. Both concepts are codified in FDA's AI/ML-Based Software as a Medical Device (SaMD) Action Plan and apply by analogy to non-medical inspection contexts.

Edge inference vs. cloud inference: Edge inference runs the model locally on embedded hardware; cloud inference transmits data to remote servers for processing. Latency, data sovereignty, and bandwidth constraints determine which architecture applies. AI Inspection Edge Computing and AI Inspection Cloud vs. On-Premise cover the tradeoffs in operational detail.
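The three constraints named above can be sketched as a toy decision rule. This is illustrative only; real architecture choices weigh many more factors, and every parameter name here is a hypothetical stand-in:

```python
def choose_architecture(latency_budget_ms: float,
                        data_must_stay_onsite: bool,
                        uplink_mbps: float,
                        frame_size_mb: float) -> str:
    """Toy edge-vs-cloud decision based on the three constraints
    named in the text: sovereignty, bandwidth, and latency."""
    if data_must_stay_onsite:
        return "edge"  # data sovereignty rules out transmission
    # Time just to ship one frame upstream (megabytes -> megabits)
    transfer_ms = frame_size_mb * 8 / uplink_mbps * 1000
    # If transmission alone exceeds the latency budget, stay on-device
    return "edge" if transfer_ms > latency_budget_ms else "cloud"
```

For example, a 2 MB frame over a 10 Mbps uplink takes 1,600 ms to transmit, which rules out cloud inference for any sub-second latency budget regardless of server-side speed.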


References

NIST AI Risk Management Framework (AI RMF 1.0), NIST AI 100-1
ISO/IEC 22989:2022, Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
ASTM E2905 (nondestructive evaluation, cited under Common scenarios)
ISO 13381-1, Condition monitoring and diagnostics of machines — Prognostics — Part 1: General guidelines
FDA, Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan
FAA, Remote Pilot Certification (14 CFR Part 107)