Crypto Cheat Sheet AI: Explaining 30 Common Slang Terms in One Shot
Original Title: "AI Insider Jargon Dictionary (March 2026 Edition), Recommended for Bookmarking"
Original Author: Golem, Odaily Planet Daily
These days, if you're in crypto and not paying attention to AI, you open yourself up to ridicule (yes, my friend, think about why you clicked on this article).
Are you completely clueless about basic AI concepts, asking Doubao to explain every acronym in a sentence? Are you lost in a sea of jargon at AI events, pretending you're not out of the loop?
While it's not realistic to dive deep into the AI industry overnight, it is worth knowing a digest of the high-frequency basics. Luckily, this article has you covered. We sincerely advise you to read it through and bookmark it.
Basic Vocabulary (12)
· LLM (Large Language Model)
An LLM is, at its core, a deep learning model trained on massive amounts of data that excels at understanding and generating language. It processes text and, increasingly, other types of content as well.
In contrast is the SLM (Small Language Model), which usually emphasizes lower cost, lighter deployment, and easier local hosting.
· AI Agent
An AI Agent is not just a "model that chats" but a system that can understand goals, invoke tools, execute tasks step by step, and plan and validate when necessary. Google defines an agent as software that can reason over multimodal input and act on behalf of the user.
· Multimodal
A multimodal AI model is not limited to text: it can simultaneously process multiple input and output forms, such as text, images, audio, and video. Google specifically defines multimodality as the ability to process and generate different types of content.
· Prompt
The user's input command to the model, the most basic form of human-machine interaction.
· Generative AI (Generative AI / AIGC)
Emphasizing AI "generation" rather than just classification or prediction, generative models can produce text, code, images, emojis, videos, etc., based on a prompt.
· Token
This is one of the AI-field concepts most analogous to crypto's "gas unit." Models do not process content word by word; they read input and produce output in tokens, and billing, context length, and response speed are usually closely tied to token counts.
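As a rough sketch of why token count drives billing, here is a toy tokenizer in Python. Real models use subword tokenizers such as BPE, so the exact splits and the price per 1K tokens below are hypothetical:

```python
# Toy illustration of tokenization. Real models use subword tokenizers
# (e.g. BPE); the 4-character chunking below only mimics that behavior.
def toy_tokenize(text):
    tokens = []
    for word in text.split():
        # Break long words into 4-character chunks, roughly the way
        # subword tokenizers split rare words into pieces.
        while len(word) > 4:
            tokens.append(word[:4])
            word = word[4:]
        tokens.append(word)
    return tokens

prompt = "Tokenization determines billing and context length"
tokens = toy_tokenize(prompt)
print(tokens)
print(f"token count: {len(tokens)}")

# Billing is typically proportional to tokens processed:
price_per_1k = 0.002  # hypothetical $/1K tokens
print(f"estimated cost: ${len(tokens) / 1000 * price_per_1k:.6f}")
```

The same text can therefore cost more or fewer tokens depending on the tokenizer, which is why providers document token counting separately from word counts.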
· Context Window
Refers to the total number of tokens a model can "see" and use at once, i.e. how many tokens the model can consider or "remember" in a single pass.
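A practical consequence of a finite context window is that chat history must be trimmed to fit. A minimal sketch, using a crude words-as-tokens approximation (real systems count tokens with the model's own tokenizer):

```python
# Sketch: keep only as many recent messages as fit in the context window.
def count_tokens(text):
    # Crude approximation: one word ~= one token (hypothetical).
    return len(text.split())

def trim_to_window(messages, max_tokens):
    kept, total = [], 0
    # Walk backwards so the most recent messages are kept first.
    for msg in reversed(messages):
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))

history = [
    "first message about setup",
    "second message with details",
    "latest user question here",
]
# With a budget of 8 "tokens", the oldest message gets dropped.
print(trim_to_window(history, max_tokens=8))
```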
· Memory
Allows a model or agent to retain user preferences, task context, and historical states.
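One simple way to picture agent memory is a key-value store of facts that gets prepended to future prompts. This is a toy sketch, not any vendor's actual memory implementation:

```python
# Toy agent memory: persist user preferences across turns as key-value
# entries that can be injected into later prompts as context.
class Memory:
    def __init__(self):
        self.facts = {}

    def remember(self, key, value):
        self.facts[key] = value

    def as_context(self):
        # Render stored facts as lines to prepend to the next prompt.
        return "\n".join(f"{k}: {v}" for k, v in self.facts.items())

mem = Memory()
mem.remember("preferred_language", "Chinese")
mem.remember("risk_tolerance", "low")
print(mem.as_context())
```

Production systems layer retrieval, expiry, and summarization on top of this basic idea.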
· Training
The process by which a model learns parameters from data.
· Inference
In contrast to training, inference is the process by which a deployed model receives input and generates output. The industry saying "training is expensive, but inference is even costlier" reflects that much of the cost in the real commercialization phase occurs at inference time. The training/inference distinction is also the basic framework mainstream vendors use when discussing deployment costs.
· Tool Use / Tool Calling
Means that a model not only outputs text but can also call tools such as search, code execution, databases, external APIs, etc. This has already been regarded as a key capability of agents.
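The basic loop behind tool calling can be sketched in a few lines: the model emits a structured request naming a tool and its arguments, the application runs the tool, and the result is fed back as context. The model output below is mocked as a JSON string (real APIs return a structured tool call of a similar shape):

```python
import json

# Placeholder tools the agent is allowed to call.
def search(query):
    return f"results for '{query}'"

def calculator(expression):
    # Demo only: eval with builtins stripped; never eval untrusted input.
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"search": search, "calculator": calculator}

# What a model might emit when it decides a tool is needed (hypothetical shape).
model_output = json.dumps(
    {"tool": "calculator", "arguments": {"expression": "21 * 2"}}
)

# The application dispatches the call and returns the result to the model.
call = json.loads(model_output)
result = TOOLS[call["tool"]](**call["arguments"])
print(result)
```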
· API
The infrastructure through which AI products, applications, and agents interact with third-party services.
Advanced Vocabulary (18)
· Transformer
A model architecture that makes AI better at understanding contextual relationships, and the technical foundation of most large language models today. Its key feature is that it can consider the relationships among all the words in a passage simultaneously.
· Attention
The central mechanism in Transformers, its role is to enable the model to automatically determine "which words are most worthy of attention" when reading a sentence.
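The computation behind attention is compact enough to show directly: score the query against every key, turn the scores into weights with a softmax, and mix the value vectors by those weights. The tiny 2-D vectors below are illustrative; real models use hundreds of dimensions and many attention heads:

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability, then normalize.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    d = len(query)
    # Similarity of the query with every key, scaled by sqrt(d).
    scores = [
        sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys
    ]
    weights = softmax(scores)  # "which positions are most worth attending to"
    # Weighted mix of the value vectors.
    return [
        sum(w * v[i] for w, v in zip(weights, values))
        for i in range(len(values[0]))
    ]

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
# The query matches the first key best, so the output leans toward
# the first value vector.
print(attention(q, keys, values))
```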
· Agentic / Agentic Workflow
This recently popular term means a system is no longer just "question and answer" but has a degree of autonomy: it can break down tasks, decide on next steps, and invoke external capabilities. Many vendors see it as the sign of moving "from chatbot to executable system."
· Subagents
An agent further broken down into multiple dedicated sub-agents, each handling a subtask.
· Skills
With the rise of OpenClaw, this term has become more common. It refers to installable, reusable, and composable capability units/instruction sets for an AI agent, though they also carry warnings about tool misuse and data exposure risks.
· Hallucination
Refers to a model confidently generating erroneous or absurd output by "perceiving patterns that don't exist": the result looks plausible but is actually wrong.
· Latency
The time it takes a model to process a request and produce output. This is one of the most common pieces of engineering jargon, appearing frequently in discussions of deployment and productization.
· Guardrails
Used to limit what a model/Agent can do, when to stop, and what content cannot be output.
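At its simplest, a guardrail is a check applied to a model's output before it reaches the user. Real guardrails combine classifiers, policies, and tool permissions; the keyword denylist below is only a toy stand-in:

```python
# Toy output guardrail: block responses containing denylisted terms.
DENYLIST = {"seed phrase", "private key"}

def guard(response):
    lowered = response.lower()
    if any(term in lowered for term in DENYLIST):
        # Replace the response rather than letting it through.
        return "[blocked by guardrail]"
    return response

print(guard("The weather is sunny today."))
print(guard("Sure, paste your private key here."))
```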
· Vibe Coding
Also one of the hottest AI slang terms today: users express their needs directly in conversation and the AI writes the code, without the user needing to know how to program.
· Parameters
The numerical values a model stores internally to encode its capabilities and knowledge, often used as a rough measure of a model's scale. Phrases like "hundreds of billions of parameters" are common boasts in the AI community.
· Reasoning Model
It usually refers to models that are better at multi-step reasoning, planning, validation, and complex task execution.
· MCP (Model Context Protocol)
This is a very hot new buzzword in the past year, serving as a common interface between models and external tools/data sources.
· Fine-tuning
Continuing training on a base model to make it more suitable for a specific task, style, or domain. Google's terminology directly considers tuning and fine-tuning as related concepts.
· Distillation
Transferring the capabilities of a large model to a smaller model, like having the "teacher" instruct the "student."
· RAG (Retrieval-Augmented Generation)
This has almost become a standard configuration in enterprise AI. Microsoft defines it as a "search + LLM" pattern, using external data to ground the answers, addressing issues such as outdated training data and lack of understanding of private knowledge bases. The goal is to base the answers on real documents and private knowledge rather than solely on the model's own recall.
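The "search + LLM" pattern can be sketched end to end: retrieve the most relevant documents, then assemble them into the prompt so the model answers from them. Real systems use vector search over embeddings; the word-overlap scoring, documents, and prompt wording below are all illustrative:

```python
import string

# A tiny stand-in knowledge base.
DOCS = [
    "The refund policy allows returns within 30 days.",
    "Shipping takes 5 business days on average.",
    "Support is available via email around the clock.",
]

def words(text):
    # Lowercase and strip punctuation so "policy?" matches "policy".
    table = str.maketrans("", "", string.punctuation)
    return set(text.lower().translate(table).split())

def retrieve(question, docs, k=1):
    q = words(question)
    # Rank documents by how many words they share with the question.
    return sorted(docs, key=lambda d: len(q & words(d)), reverse=True)[:k]

def build_prompt(question, docs):
    context = "\n".join(docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}"
    )

question = "What is the refund policy?"
grounding = retrieve(question, DOCS)
print(build_prompt(question, grounding))  # this grounded prompt goes to the LLM
```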
· Grounding
Often associated with RAG, it means ensuring that the model's answers are based on external sources such as documents, databases, web pages, rather than relying only on parameter memorization. Microsoft explicitly identifies grounding as a core value in the RAG documentation.
· Embedding (Vector Embedding / Semantic Vector)
Encoding textual, image, audio, and other content into high-dimensional numerical vectors for semantic similarity calculations.
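Semantic similarity between embeddings is usually measured with cosine similarity: vectors pointing in similar directions score near 1. The 3-D vectors below are hand-made for illustration; real embeddings come from a model and have hundreds to thousands of dimensions:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

cat = [0.9, 0.1, 0.0]      # hypothetical embedding of "cat"
kitten = [0.8, 0.2, 0.1]   # close in meaning, so close in direction
invoice = [0.0, 0.1, 0.9]  # unrelated concept, different direction

print(cosine_similarity(cat, kitten))   # high: semantically similar
print(cosine_similarity(cat, invoice))  # low: unrelated
```

Vector databases used in RAG pipelines are essentially this comparison run at scale over millions of stored embeddings.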
· Benchmark
An evaluation method that uses a standardized set of criteria to test a model's capabilities, often used by various models to "prove their strength" through leaderboard rankings.