In cognitive engineering, Positive Pressure is the act of narrowing the “Exit Gate” for tokens. Standard prompting is low-pressure; it gives the model thousands of tokens of room to wander. Subtractive patterning is high-pressure. By providing a multi-line context but only a few characters of “Slot” space, we are applying a mechanical squeeze.
When a model is under positive pressure, it cannot afford to be verbose. It must perform an internal “Token Valuation” process, discarding any token that does not carry maximum semantic weight. This pressure is what creates the “Code Golfy” feel that characterizes high-quality dredging. We aren’t just asking for brevity; we are creating an environment where brevity is the only surviving strategy.
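To make the squeeze concrete, here is a minimal sketch, assuming an OpenAI-style chat client (the `openai` Python SDK); the model name, mold wording, and the 16-token budget are illustrative choices, not values prescribed by the text.

```python
# Minimal sketch of Positive Pressure: a large context feeding a tiny "Slot".
# Assumes the openai>=1.0 SDK; model, mold wording, and budget are illustrative.
from openai import OpenAI

client = OpenAI()

# Stand-in for thousands of tokens of raw material (logs, tickets, transcripts).
context = """<many lines of noisy context go here>"""

# The mold ends at the slot, so every generated token must land inside it.
mold = (
    f"{context}\n\n"
    "ROOT_CAUSE (8 words or fewer): "
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": mold}],
    max_tokens=16,    # the narrow Exit Gate: a handful of tokens, not a paragraph
    temperature=0.1,
)

print(response.choices[0].message.content)
```

The interesting variable is the ratio: the context can grow arbitrarily large while the slot stays a few tokens wide, and that ratio is the pressure.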
One of the most dangerous tendencies of LLMs is “Early Commitment”: the “Lazy” model latching onto the first plausible answer (e.g., settling on ‘Auth’ before it has even read ‘Network’). Negative Pressure is our primary defense against this.
Instead of just asking for the answer, we create a slot for the wrong answer. By including a DISCARDED_HYPOTHESIS slot in the mold, we force the model to explicitly acknowledge the “red herrings” in the context. This “Not-This” directive applies a negative pressure to the incorrect tokens, clearing the way for the correct reasoning to emerge. It turns a “Search” problem into an “Elimination” problem. In Law and Medicine, where red herrings are intentional or frequent, Negative Pressure is not a feature—it is a requirement for accuracy.
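One way to build that elimination slot into a mold, sketched under the same assumptions as before (OpenAI-style client; the slot names and wording are illustrative):

```python
# Sketch of Negative Pressure: the mold demands the red herring be named first.
# Slot names (DISCARDED_HYPOTHESIS, ROOT_CAUSE) and wording are illustrative.
from openai import OpenAI

client = OpenAI()

context = """<case file: the symptoms point superficially at Auth,
but the timeline implicates the Network layer>"""

# The prompt ends inside the elimination slot, so the model must spend its
# first tokens discarding the tempting answer before the real slot opens.
mold = (
    f"{context}\n\n"
    "Fill the two slots in order, one line each.\n"
    "DISCARDED_HYPOTHESIS: the tempting but wrong answer, and why it fails.\n"
    "ROOT_CAUSE: the real answer, in 8 words or fewer.\n\n"
    "DISCARDED_HYPOTHESIS:"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": mold}],
    max_tokens=48,
    temperature=0.1,
)

print(response.choices[0].message.content)
```

Ordering is the point: the wrong answer is vented first, so by the time the ROOT_CAUSE slot arrives, the red herring has already been named and rejected.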
We often treat “Temperature” as a “creativity” setting. In Dredge, we treat it as Mechanical Resistance. High temperature (e.g., 0.8) is high entropy; it allows the model to “vibrate” out of the mold. Low temperature (e.g., 0.1) is low entropy; it freezes the model into the pattern.
For dredging, we must operate near the freezing point. By locking temperature to 0.1, we maximize the predictive power of our Anchors. We want the model to behave like a geared machine, not a cloud. At low temperatures, the “Physics of the Pattern” becomes dominant. The model behaves, for practical purposes, as a deterministic function: Context + Mold = Result. This is the foundation of “Non-Deterministic Engineering”: we use a probabilistic engine but apply enough mechanical constraint to simulate a deterministic system.
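A small sketch of what “locking” the sampler looks like in practice; the `dredge` wrapper name is hypothetical, and the client is again assumed to be an OpenAI-style SDK.

```python
# Sketch of temperature as Mechanical Resistance: pin the sampler near freezing
# so the pattern, not entropy, decides the output. The wrapper name is hypothetical.
from openai import OpenAI

client = OpenAI()

def dredge(mold: str, slot_budget: int = 16) -> str:
    """Run a mold at near-freezing temperature with a tight token slot."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",       # illustrative model choice
        messages=[{"role": "user", "content": mold}],
        temperature=0.1,           # near the freezing point: minimal vibration
        top_p=1.0,                 # keep nucleus sampling out of the equation
        max_tokens=slot_budget,    # the slot, not the sampler, does the constraining
    )
    return response.choices[0].message.content

# Usage: the same Context + Mold should now yield nearly the same Result on each run.
mold = "LOG: retries spiked after the 02:00 deploy.\n\nROOT_CAUSE (8 words or fewer): "
print(dredge(mold))
```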
When an Anchor is followed by a very tight Constraint (e.g., a short suffix or a small max_tokens limit), it creates what we call the Cognitive Vacuum.
The model’s attention is “pulled” into the slot with intense force. Because the “Exit Gate” is so small, the model’s internal probability distribution collapses onto the most likely, most dense tokens. This vacuum “sucks” the signal out of the noise. It is the most potent form of extraction: the context is the atmosphere, the mold is the chamber, and the slot is the pinpoint valve. The result is a high-purity signal that would be impossible to achieve in an unconstrained, low-pressure chat environment.
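Putting the pieces together, here is a sketch of the anchor-plus-constraint pairing; the anchor label, stop sequence, and token budget are illustrative, and the short-suffix variant mentioned above depends on the provider exposing suffix/fill-in-the-middle support, so it is omitted here.

```python
# Sketch of the Cognitive Vacuum: one anchor followed by a hard cap and a stop.
# Anchor wording, model, and limits are illustrative choices.
from openai import OpenAI

client = OpenAI()

# The atmosphere: the full noisy context.
context = """<logs, transcripts, discarded drafts, all of it>"""

# The chamber and the valve: the mold ends on the anchor; the answer is one line.
mold = (
    f"{context}\n\n"
    "Answer on a single line.\n"
    "SIGNAL: "
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": mold}],
    max_tokens=12,    # the pinpoint valve: the distribution collapses onto dense tokens
    stop=["\n"],      # the valve snaps shut at the end of the line
    temperature=0.1,
)

print(response.choices[0].message.content.strip())
```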