Beginner Friendly Visual Notes
OpenAI Docs: Prompting and Embeddings
This page explains two big ideas in simple language:
- Prompting = how you ask the model
- Embeddings = how text meaning is turned into numbers for search and matching
1) Prompting
A prompt is the instruction you give the AI. Better instructions usually give better results.
Your request → Model thinks → Answer
Easy rule: clear input usually leads to clear output.
2) Good Prompt Formula
Role: who the model should act like
Task: what you want done
Context: useful background
Constraints: limits like length or format
Think: Role + Task + Context + Constraints
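The formula above can be sketched as a tiny helper that assembles the four parts into one prompt string. This is an illustrative sketch, not an official pattern from the docs; the function name and layout are made up.

```python
def build_prompt(role, task, context, constraints):
    """Assemble a prompt from the Role + Task + Context + Constraints formula."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Constraints: {constraints}"
    )

prompt = build_prompt(
    role="a patient tutor for beginners",
    task="explain transformers in simple words",
    context="the reader has no machine learning background",
    constraints="use 5 bullet points and one real-life analogy",
)
print(prompt)
```

Separating the four parts like this makes it easy to reuse the same structure for many different questions.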
3) Bad Prompt vs Better Prompt
Weak prompt:
"Explain transformers"
Better prompt:
"Explain transformers in simple words for a beginner, using 5 bullet points and one real-life analogy."
- The better prompt specifies the audience, tone, length, and format
- The model has less guessing to do
4) Few-Shot Prompting
You can show examples so the model copies the style.
Example 1 + Example 2 + Example 3 → New answer in same style
This is useful when you want a specific pattern, format, or tone.
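In a chat-style API, few-shot examples are usually passed as alternating user/assistant messages before the real question. A minimal sketch of that message structure, with made-up sentiment examples (no API call is made here):

```python
# Few-shot prompting: show input/output pairs, then ask the real question.
examples = [
    ("happy", "positive"),
    ("terrible", "negative"),
    ("okay, I guess", "neutral"),
]

messages = [{"role": "system", "content": "Label the sentiment of each phrase."}]
for text, label in examples:
    messages.append({"role": "user", "content": text})
    messages.append({"role": "assistant", "content": label})

# The new input the model should answer in the same style:
messages.append({"role": "user", "content": "fantastic"})

# `messages` would then be sent to a chat completion endpoint.
```

The model sees the pattern in the example pairs and tends to continue it for the final user message.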
5) Output Control
You can tell the model exactly how to answer.
- Bullet points
- JSON
- Table
- Short summary
- Step-by-step answer
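One common pattern is to demand JSON and then validate the reply in code. A sketch with a hard-coded stand-in for the model's reply (in a real app, `model_output` would come from the API):

```python
import json

# Ask for a strict format, then validate what comes back.
prompt = (
    "List 3 benefits of unit testing. "
    'Respond ONLY with JSON like {"benefits": ["...", "...", "..."]}.'
)

# Pretend this string came back from the model (hard-coded for illustration):
model_output = '{"benefits": ["catch bugs early", "enable refactoring", "document behavior"]}'

data = json.loads(model_output)  # fails loudly if the format was ignored
assert len(data["benefits"]) == 3
```

Parsing the output instead of trusting it lets your program catch format drift immediately.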
6) Why prompt quality matters
- Reduces confusion
- Reduces hallucination risk
- Makes answers more useful
- Helps keep output in the format you need
7) Embeddings
Embeddings turn text into numbers so a machine can compare meaning.
"dog"
→
[0.82, -0.14, ...]
Two pieces of text with similar meaning usually end up closer together in embedding space.
8) Visual: Similar meaning stays closer
dog · puppy · pet        car
dog ↔ puppy = close
dog ↔ car = far
The machine uses distance between vectors to estimate similarity.
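Similarity between vectors is often measured with cosine similarity. A toy sketch with made-up 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and the numbers below are invented for illustration):

```python
import math

def cosine_similarity(a, b):
    """Near 1.0 = same direction (similar meaning), near 0 or below = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up vectors just to illustrate the geometry:
dog   = [0.82, -0.14, 0.33]
puppy = [0.79, -0.10, 0.30]
car   = [-0.25, 0.91, 0.05]

print(cosine_similarity(dog, puppy))  # close to 1.0
print(cosine_similarity(dog, car))    # much lower
```

"Closer in embedding space" simply means a higher cosine similarity (or a smaller distance) between the two vectors.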
9) Embeddings in Search / RAG
Embeddings are often used for semantic search and retrieval-augmented generation (RAG).
Documents → Chunk text → Create embeddings → Store in vector DB
User question → Create embedding → Find nearest chunks → Send to LLM
- Chunking = split documents into smaller pieces
- Vector DB = database built for similarity search
- Nearest chunks = most relevant pieces by meaning
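The retrieval step above can be sketched in a few lines. The chunk texts and vectors below are invented stand-ins for a vector DB; a real system would call an embedding model and store high-dimensional vectors for every chunk.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Made-up chunk embeddings standing in for a vector DB:
chunks = {
    "Dogs are loyal pets.":        [0.9, 0.1, 0.0],
    "Cars need regular oil.":      [0.0, 0.2, 0.9],
    "Puppies need lots of sleep.": [0.7, 0.4, 0.1],
}

# Pretend embedding of the user question "Tell me about dogs":
question_embedding = [0.85, 0.2, 0.05]

# Rank chunks by similarity; the top ones are sent to the LLM as context.
ranked = sorted(
    chunks,
    key=lambda c: cosine_similarity(chunks[c], question_embedding),
    reverse=True,
)
print(ranked[0])
```

The top-ranked chunks are pasted into the prompt so the model answers from retrieved facts instead of memory alone.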
10) Prompting vs Embeddings
Prompting = how you ask
Embeddings = how text is represented for similarity
Easy final recap
- Prompt = instruction
- Good prompt = clear + specific + structured
- Embedding = text turned into meaning-based numbers
- RAG = retrieve relevant info first, then answer