Principles of Effective Prompting
What You'll Build Today
Welcome to Day 31. Today marks the beginning of Phase 4: Prompt Engineering.
Up until now, you have been learning how to set up the machinery—Python, API calls, and basic logic. Now, we need to learn how to talk to the machine.
Today, you are going to build a Prompt Optimizer. You have probably noticed that sometimes the AI gives you generic, boring, or completely incorrect answers. You might feel like you have to "fight" the model to get what you want.
We are going to automate that fight. You will build a tool where you input a lazy, vague request, and your program will use AI to rewrite that request into a highly engineered, professional prompt that guarantees better results.
Here is what you will master today:
* Clarity and Specificity: Why "Write a story" fails but "Write a 300-word sci-fi story about a sentient toaster" succeeds.
* Persona Prompting: Assigning a role to the AI to control tone and expertise.
* Constraints and Formatting: Forcing the AI to output clean data (like JSON or CSV) instead of chatty paragraphs.
* Zero-shot vs. Few-shot: The single most powerful technique to improve accuracy by providing examples.
* Task Decomposition: Breaking one big vague request into logical steps.
Let's turn the "black box" into a precision instrument.
The Problem
Imagine you are trying to build an app that summarizes news articles for busy executives. You write a Python script to call the OpenAI API.
Here is the code you write. It is technically correct, but the output is frustrating.
import os
from openai import OpenAI

# Assuming you have your API key set in your environment variables
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

article_text = """
Global markets shifted significantly today as the tech sector saw a
downturn of 4% following new regulatory announcements in the EU.
Meanwhile, renewable energy stocks surged by 2% due to new government
subsidies announced in the US infrastructure bill.
"""

# The "Lazy" Prompt
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": f"Summarize this: {article_text}"}
    ]
)

print(response.choices[0].message.content)
The Pain Points: the summary is too long for a busy executive, the emphasis shifts on every run, and there is no consistent format your app can rely on. You run this code, get a mediocre result, and think, "Maybe AI isn't ready for production yet."
The AI is ready. The instructions were just bad. "Summarize this" is the prompting equivalent of telling a chef "Cook food." You will get food, but probably not the specific dish you were hungry for.
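Here is a preview of where we are headed: the same call with clarity and specificity added. The exact wording is illustrative, not a canonical fix; we will formalize each technique below.

# A sketch of a more specific version of the same request
better_summary_prompt = f"""
You are an analyst briefing busy executives.
Summarize the article below in exactly 3 bullet points.
Each bullet must be under 20 words and focus on market impact.

Article: {article_text}
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": better_summary_prompt}]
)
print(response.choices[0].message.content)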
Let's Build It
We are going to evolve our prompting strategy step-by-step, moving from the "lazy" version to a sophisticated "Prompt Optimizer."
Step 1: The Baseline (Zero-Shot)
First, let's create a reusable function so we don't have to keep typing the boilerplate code. We will start with a basic request. This is called Zero-Shot prompting because we are giving the model zero examples of what we want.
import os
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        temperature=0.7  # Controls randomness
    )
    return response.choices[0].message.content

# The vague request
task = "Write a product description for a new ergonomic coffee mug."
print(f"--- BASELINE ---\n{get_completion(task)}\n")
Why this is weak: The output will likely be a generic marketing fluff piece. It doesn't know who the customer is, what the features are, or the tone we want.
Step 2: Adding Persona and Context
To fix the tone, we need to tell the AI who it is. This is done using the "system" message or by explicitly defining a persona in the prompt.
Let's update our function to accept a system instruction.
def get_completion_with_system(system_role, user_prompt):
    messages = [
        {"role": "system", "content": system_role},
        {"role": "user", "content": user_prompt}
    ]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        temperature=0.7
    )
    return response.choices[0].message.content

system_instruction = """
You are a senior marketing copywriter for a high-end luxury lifestyle brand.
Your tone is sophisticated, minimalist, and persuasive.
Avoid exclamation marks and cheesy buzzwords.
"""

product_details = "A coffee mug that keeps heat for 4 hours and fits in any car cup holder."
prompt = f"Write a product description for this item: {product_details}"

print(f"--- WITH PERSONA ---\n{get_completion_with_system(system_instruction, prompt)}\n")
Why this matters: The "system" role sets the behavior rules. By forbidding exclamation marks and defining the "luxury" persona, the output instantly becomes more usable for a specific brand voice.
Step 3: Constraints and Formatting
Now, let's solve the data problem. We don't just want text; we often want structured data. We can force the AI to output a specific format.
system_instruction = """
You are a data extraction assistant.
You must output your answer in valid JSON format only.
Do not include any conversational text outside the JSON.
The JSON keys must be: 'headline', 'body_text', 'target_audience', 'price_point_guess'.
"""
prompt = f"Write a description for: {product_details}"
print(f"--- WITH FORMATTING ---\n{get_completion_with_system(system_instruction, prompt)}\n")
Why this matters: If you are building software, you need predictable outputs. By adding the constraint "JSON format only," you can now parse the result directly into Python dictionaries or databases.
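To see this in practice, here is a minimal sketch of parsing the constrained output with Python's standard json module. The try/except is there because even with constraints, models occasionally emit stray text:

import json

raw_output = get_completion_with_system(system_instruction, prompt)

try:
    product_data = json.loads(raw_output)
    # Access fields like any Python dict (keys come from the system instruction)
    print(product_data.get("headline"))
    print(product_data.get("target_audience"))
except json.JSONDecodeError:
    # The model ignored the constraint; handle it gracefully
    print("Model returned invalid JSON:", raw_output)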
Step 4: Few-Shot Prompting
This is the most powerful concept for today. Instead of just telling the AI what to do, show it.
If you want a specific style of output, give it examples of input-output pairs. This is called Few-Shot Prompting.
system_instruction = """
You are a creative naming assistant.
I will give you a product description, and you will give me 3 catchy names.
Follow the pattern of the examples below.
"""
# We embed examples directly into the prompt to teach the pattern
few_shot_prompt = """
Input: A shoe that laces itself automatically.
Output:
AutoLace
SnugStep
FutureFit
Input: A noise-canceling window for apartments.
Output:
SilencePane
CityQuiet
ZenGlass
Input: A coffee mug that keeps heat for 4 hours and fits in any car cup holder.
Output:
"""
print(f"--- FEW-SHOT ---\n{get_completion_with_system(system_instruction, few_shot_prompt)}\n")
Why this matters: Notice we didn't have to explain "keep the names short and compound words." The AI inferred that pattern from the examples (AutoLace, SilencePane). Examples are often clearer than instructions.
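If you use few-shot prompting often, it helps to assemble the example block programmatically instead of hand-editing one big string. A small helper sketch (the Input/Output pair format here is just the pattern used above; adapt it to your task):

def build_few_shot_prompt(examples, new_input):
    # examples is a list of (input_text, output_text) tuples
    blocks = [f"Input: {inp}\nOutput:\n{out}" for inp, out in examples]
    blocks.append(f"Input: {new_input}\nOutput:\n")
    return "\n\n".join(blocks)

examples = [
    ("A shoe that laces itself automatically.", "AutoLace\nSnugStep\nFutureFit"),
    ("A noise-canceling window for apartments.", "SilencePane\nCityQuiet\nZenGlass"),
]
new_prompt = build_few_shot_prompt(examples, "A coffee mug that keeps heat for 4 hours.")
print(f"--- FEW-SHOT (HELPER) ---\n{get_completion_with_system(system_instruction, new_prompt)}\n")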
Step 5: The Prompt Optimizer (Putting it all together)
Now, let's build the main tool. We will create a "Meta-Prompt." This is a prompt that asks the AI to write a better prompt for us, utilizing the principles we just learned (Persona, Context, Constraints, Few-Shot).
def optimize_prompt(bad_prompt):
    system_role = "You are an expert Prompt Engineer."
    meta_prompt = f"""
    I have a vague, ineffective prompt: "{bad_prompt}"

    Your goal is to rewrite this into a highly effective prompt for an LLM.
    Apply these principles:
    1. Assign a specific Persona (e.g., "You are an expert in...").
    2. Add Context (why we need this).
    3. Add Constraints (length, formatting).
    4. Ask the model to think step-by-step.

    Output ONLY the optimized prompt text. Do not explain your reasoning.
    """
    return get_completion_with_system(system_role, meta_prompt)
# Let's test it
original_idea = "Write code to read a csv file."
print("--- ORIGINAL IDEA ---")
print(original_idea)
print("\n--- OPTIMIZED PROMPT GENERATED BY AI ---")
better_prompt = optimize_prompt(original_idea)
print(better_prompt)
print("\n--- RESULT OF OPTIMIZED PROMPT ---")
# Now we run the BETTER prompt to see the difference
print(get_completion(better_prompt))
What just happened?
You handed a vague one-line idea to the optimize_prompt function, and the AI rewrote it into a detailed prompt with a persona, context, and constraints. Running that optimized prompt produced a far more specific, useful result than the original ever would have.
Now You Try
You have the core logic. Now expand it to handle specific needs.
1. Modify the optimize_prompt function so that it doesn't just rewrite the prompt, but first lists three reasons why the original prompt was bad. Use a few-shot approach to show it how to critique.
2. Create a function called generate_data(topic, schema). A starting sketch follows this item.
* topic: e.g., "5 fictional planets"
* schema: e.g., "JSON with keys: name, gravity, main_export"
* The prompt must strictly enforce the schema using constraints.
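One possible starting point, reusing get_completion_with_system from Step 2 (a sketch, not the only design; harden the constraints if the model drifts from the schema):

def generate_data(topic, schema):
    # Constrain the model to the schema and forbid chatty filler
    system_role = f"""
    You are a synthetic data generator.
    You must output data matching this schema exactly: {schema}
    Do not include any conversational text outside the data.
    """
    return get_completion_with_system(system_role, f"Generate: {topic}")

print(generate_data("5 fictional planets", "JSON with keys: name, gravity, main_export"))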
3. Create a function that takes a user_prompt and a tone (e.g., "Funny", "Professional", "ELI5" - Explain Like I'm 5). Inject that tone into the system role dynamically before sending it to the AI. A minimal sketch follows.
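A minimal sketch for exercise 3 (the system-role wording is just one option):

def get_completion_with_tone(user_prompt, tone):
    # Inject the requested tone into the system role before the call
    system_role = f"You are a helpful assistant. Always respond in a {tone} tone."
    return get_completion_with_system(system_role, user_prompt)

print(get_completion_with_tone("Explain what an API is.", "ELI5"))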
Challenge Project: The Math Word Problem Fixer
Large Language Models are notoriously bad at math word problems because they predict the next word rather than actually "calculating" (unless they write code). They often hallucinate numbers.
The Challenge: Find a math word problem that GPT-3.5 gets wrong with a basic prompt. Then, write a script that uses Few-Shot Chain of Thought prompting to get it right.
Requirements:
1. Define a variable tricky_math_problem. (Hint: Look for problems involving multi-step logic or trick questions.)
2. Run get_completion(tricky_math_problem) and confirm it gives the wrong answer.
3. Build a few_shot_cot_prompt that teaches the model to reason before answering, following this pattern:

Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Thinking: Roger started with 5 balls. 2 cans * 3 balls per can = 6 new balls. 5 + 6 = 11. The answer is 11.

Q: [Your tricky problem here]
A:
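If you want a scaffold to start from, here is a minimal sketch. It reuses get_completion from Step 1, and the placeholder problem is an assumption: you must replace it with a problem the basic prompt actually fails on.

few_shot_cot_prompt_template = """
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Thinking: Roger started with 5 balls. 2 cans * 3 balls per can = 6 new balls. 5 + 6 = 11. The answer is 11.

Q: {problem}
A:
"""

# Placeholder: swap in a multi-step problem the model gets wrong
tricky_math_problem = "[Your tricky problem here]"

print("--- BASIC PROMPT (likely wrong) ---")
print(get_completion(tricky_math_problem))

print("\n--- FEW-SHOT CHAIN OF THOUGHT ---")
print(get_completion(few_shot_cot_prompt_template.format(problem=tricky_math_problem)))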
What You Learned
Today you moved from being a "user" of AI to a "director" of AI.
* Garbage In, Garbage Out: You saw that vague prompts lead to generic code and writing.
* Persona: You learned that telling the AI "who" it is changes "how" it speaks.
* Few-Shot: You discovered that giving 2-3 examples is often more effective than writing 2-3 paragraphs of instructions.
* Meta-Prompting: You built a tool that uses AI to improve its own instructions.
Why This Matters: In Phase 4, we are going to build complex applications. If your prompts are weak, your code will fail unpredictably. Effective prompting is the "API documentation" for the neural network.
Tomorrow: We are going to dive deeper into the logic you used in the Challenge Project. It is called Chain of Thought prompting, and we will use it to force the AI to reason through complex problems step-by-step before giving an answer.