
AI Literacy Guide

Build intuition for how AI works and how to use it well

Learning to Work With AI

Using AI well is a skill. Like learning to search effectively or work with spreadsheets, it's about building intuition for how these systems behave.

This guide will help you:

  • Understand what AI is good at (and where it fails)
  • Work with it as a thinking partner
  • Know when to trust what you're reading

You don't need a technical background. If you can have a conversation, you can learn this. We'll introduce the real terminology along the way, so when you see these terms elsewhere, you'll know what they mean.

The Mental Model

These AI systems are called large language models (LLMs). They learned patterns from a massive amount of text (books, articles, code, conversations) and use those patterns to generate responses based on your question.

The process of generating text is called inference. The AI predicts what comes next, word by word. Technically it works in "tokens" (chunks of text), and each prediction is influenced by everything that came before it in the conversation.
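To make that concrete, here is a minimal sketch of that loop. The predict_next_token function is a stand-in for the real model, which scores probabilities over a huge vocabulary rather than using a lookup table; only the loop structure is the point.

    # Toy sketch of token-by-token generation.
    def predict_next_token(tokens):
        # Stand-in for the model. The real thing conditions on the full
        # sequence; this fake one just looks at the last token.
        table = {"The": "cat", "cat": "sat", "sat": "down", "down": "."}
        return table.get(tokens[-1], ".")

    def generate(prompt_tokens, max_tokens=10):
        tokens = list(prompt_tokens)
        for _ in range(max_tokens):
            next_token = predict_next_token(tokens)  # prediction sees everything so far
            tokens.append(next_token)                # ...and becomes part of the next input
            if next_token == ".":                    # a stop condition ends generation
                break
        return " ".join(tokens)

    print(generate(["The"]))  # -> The cat sat down .

Notice that each new token is appended before the next prediction is made. That is why everything earlier in the conversation can influence what comes next.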

This is more powerful than it sounds. The AI can:

  • Explain things it was never explicitly taught
  • Write solutions to problems that never appeared in its training data
  • Connect ideas across different fields

But it also means the AI doesn't have a natural "I don't know" signal. It will always generate something, and sometimes that something sounds right but isn't.

Know the Failure Modes

AI fails in predictable ways. Once you know the patterns, you can spot them.

Hallucination

This is the industry term for when AI invents facts, names, or citations that don't exist. The same mechanism that lets AI generalize and connect ideas it was never explicitly taught also makes it confidently fill in gaps that should stay empty.

What to do: For anything you'll act on, verify it. Ask the AI to show its reasoning so you can check the logic. Grounding responses in sources (when available) helps reduce this.

Context and Attention

The context window is how much text the AI can take into account at once. When conversations get long, older parts may fall outside this window or get compressed.

But even within the window, attention is limited. The more you load in, the thinner it spreads. Responses get diffuse, less focused on what matters.

These cause distinct problems:

  • Forgetting — earlier instructions or context drop out entirely
  • Bleeding — details from one part of the conversation leak into unrelated responses
  • Slipping — gradual drift from what you originally asked for, often subtle enough you don't notice until you're far off track
  • Diffusion — too much context and the response loses focus, tries to address everything, addresses nothing well

What to do: Keep it focused. For complex tasks, restate key requirements periodically. If responses feel scattered or drift from your goal, start fresh with just what matters.
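If you're curious what "falling outside the window" looks like mechanically, here is a simplified sketch. The fit_to_window helper is illustrative: real systems count tokens rather than characters, and some compress old turns instead of dropping them, but the shape is the same, a fixed budget where the oldest turns give way first.

    # Simplified sketch of context-window trimming.
    def fit_to_window(messages, budget=45):
        kept, used = [], 0
        for message in reversed(messages):  # walk from newest to oldest
            cost = len(message)             # stand-in for a token count
            if used + cost > budget:
                break                       # everything older falls outside the window
            kept.append(message)
            used += cost
        return list(reversed(kept))         # restore chronological order

    history = ["turn 1: setup...", "turn 2: details...", "turn 3: latest question"]
    print(fit_to_window(history))  # the oldest turn is the one that falls out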

Overconfidence

AI rarely says "I'm not confident about this." Models are trained to be helpful, which often means committing fully to answers even when uncertainty would be more honest.

What to do: Ask "What could be wrong here?" or "What am I assuming?" The AI can critique its own answers when prompted.

Sycophancy

This is the industry term for when AI mirrors your energy instead of staying objective. If you sound frustrated, it might become overly apologetic. If you push back on something correct, it might cave and agree with you. If you're pessimistic, some models lean pessimistic back; others swing optimistic regardless of the facts.

The model is pattern-matching on what response would be "helpful" or "agreeable," which isn't always what's accurate.

What to do: Notice if the AI shifts position after you push back. Ask it to steelman the other side. Be aware that confident disagreement from you can make the model fold even when it was right.

Knowledge Cutoff

Every model has a training cutoff—the date when its training data ends. It doesn't know recent news, updated prices, or what happened after that date. Some systems add web search to help with this, but it's not always available or reliable.

What to do: Check if your question depends on current information. When in doubt, ask when the AI's knowledge ends.

Thinking Together

AI works best when you treat the conversation as a shared thinking space.

What you send to the AI is called a prompt. A good prompt sets the frame: what matters, what doesn't, where the boundaries are, what you're actually trying to accomplish.

What Shapes the Response

When you communicate with AI, you're implicitly setting several things:

  • Context — the situation, background, what's already been tried
  • Goals — what you're trying to accomplish or understand
  • Constraints — what's off the table, what resources you have, what matters most
  • Perspective — your role, your level, what kind of response would actually help
  • Boundaries — how deep to go, what's in scope, when to stop

You don't need to spell all of these out explicitly. But the more the AI can infer about them, the more aligned its response will be with what you actually need. The sketch below shows one way to lay them out in a single prompt.
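The shape_prompt helper here is hypothetical, and the section labels are just one convention; any structure that makes these five things visible works as well.

    # Hypothetical helper that lays out the five ingredients as labeled
    # sections of a single prompt.
    def shape_prompt(context, goal, constraints, perspective, boundaries):
        return (
            f"Context: {context}\n"
            f"Goal: {goal}\n"
            f"Constraints: {constraints}\n"
            f"Perspective: {perspective}\n"
            f"Boundaries: {boundaries}"
        )

    print(shape_prompt(
        context="Migrating a small blog from WordPress to a static site.",
        goal="Pick a generator and outline the migration steps.",
        constraints="No budget for paid services; existing URLs must keep working.",
        perspective="Comfortable with HTML, new to command-line tools.",
        boundaries="High-level plan only, no code yet.",
    ))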

Communicate Like a Collaborator

Think of it like working with someone who knows only what you bring into the conversation. Capable, but dependent on you to surface the relevant parts of the problem.

Useful patterns for shaping the field:

  • "I'm working on X, which is part of a larger Y" — situates the problem
  • "I've already tried A and B, neither worked because..." — shows where you are
  • "The constraint here is..." — flags what matters
  • "I'm optimizing for X, not Y" — clarifies priorities

The more the AI knows, the more useful its reasoning becomes.

Orient Before Diving Deep

For complex topics, ask for the shape first: "Before we get into details—what are the main things I should understand here?"

This keeps responses from overwhelming you before you're oriented. Once you have the map, you can zoom into the parts that matter.

Build Understanding Together

If something doesn't make sense, say so: "Wait, I'm not following the part about X."

Good AI use is iterative. Understanding builds through back-and-forth, each exchange refining what you're both working with. This is called multi-turn conversation, and it's where these models shine.
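Under the hood, most chat systems represent this back-and-forth as a growing list of turns, each tagged by speaker, and the whole list is re-sent with every new message. A simplified sketch of that shape (the exact format varies by vendor):

    # Sketch of a multi-turn conversation as data: a list of turns.
    # On every new message, the accumulated list is what the model sees.
    conversation = [
        {"role": "user",      "content": "Explain context windows."},
        {"role": "assistant", "content": "A context window is how much text..."},
        {"role": "user",      "content": "Wait, I'm not following the part about attention."},
    ]

    # Each refinement appends a turn; the model re-reads everything above it.
    conversation.append(
        {"role": "assistant", "content": "Fair question. Attention is..."}
    )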

How This Works

Narufield runs the same AI models you can use elsewhere (Claude, GPT, Gemini) but through a reasoning architecture that shapes how they think.

The architecture tracks coherence as the AI reasons. It questions its own confidence, notices when it's drifting from what you asked, and surfaces uncertainty, all of it woven into the reasoning process itself.

Context is kept intentionally tight. While you can see your full chat history, only recent exchanges are passed to the model. This prevents the bleeding and drift problems described earlier and keeps attention sharp.
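The exact policy isn't spelled out here, but a simple message-count window captures the idea; visible_to_model below is an illustration, not Narufield's actual implementation.

    # Illustrative only: the user keeps the full history, while just the
    # tail of it is forwarded to the model.
    def visible_to_model(full_history, last_exchanges=3):
        # One exchange = one user turn plus one assistant turn.
        return full_history[-(last_exchanges * 2):]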

The result: thinking you can follow, with honest signals about confidence and uncertainty.

Products, Modes, and Vendors

λ-Core is the foundation—coherence tracking, uncertainty surfacing, grounded output. Handles most tasks well.

Synthergy Engine extends that with multi-pass evaluation, creative exploration, and deeper synthesis. For research, complex analysis, or when you need the AI to really work through something.

Modes (Light through Deep) select which tier to use. Light runs fast, efficient options. Deep runs the most capable reasoning engines. Each tier pulls from current, well-suited releases.

Vendors are the model providers: Anthropic (Claude), OpenAI (GPT), and Google (Gemini).

You can swap mid-conversation: get a response, switch vendors, and let a different model review the work or continue in its own style. Same reasoning field, different engine underneath.