Day 22 of 80

OpenAI API: Fundamentals

Phase 3: LLM Landscape & APIs

What You'll Build Today

Welcome to Day 22. Today is a massive milestone.

Up until now, your Python programs have been logical but "brainless." They could calculate numbers, loop through lists, and handle text files, but they couldn't understand anything. If you wanted them to answer a question, you had to hard-code the answer.

Today, we are going to give your code a brain.

We are going to build a Console-based AI Chatbot. It won't just spit out pre-written lines; it will understand context, remember what you said earlier in the conversation, and reply intelligently.

Here is what you will learn and why:

* The OpenAI Python SDK: You'll learn the standard toolset used by professional AI engineers to connect Python to GPT models.

* API Keys: You'll learn how to securely "login" to a paid service through code.

* The Request/Response Cycle: You'll see that AI isn't magic; it's just sending data out and getting data back (exactly like the functions from Day 7).

* Message Roles (System, User, Assistant): You'll learn how to control the AI's personality and how to feed it memory of past events.

* Temperature & Parameters: You'll learn the "knobs and dials" that control how creative or precise the AI becomes.

Get your API key ready. Let's write some intelligence.

---

The Problem

Let's think back to Day 7 when we learned about functions. A function is a machine: input goes in, processing happens, output comes out.

If you wanted to build a chatbot without an LLM (Large Language Model), you would have to write a function that anticipates every single thing a user might say. It is exhausting and brittle.

Here is what that "old school" code looks like. Read this and feel the frustration:

def old_school_chatbot(user_input):
    # We have to guess exactly what the user might type
    text = user_input.lower()

    if "hello" in text or "hi" in text:
        return "Hello there! How can I help?"
    elif "weather" in text:
        return "I don't know the weather, I'm just a simple script."
    elif "name" in text:
        return "I am a Python script with a lot of if-statements."
    elif "joke" in text:
        return "Why did the programmer quit? Because he didn't get arrays."
    else:
        # The dreaded catch-all failure
        return "I'm sorry, I don't understand that."

# Let's try to talk to it
print(old_school_chatbot("Hi there"))
print(old_school_chatbot("Tell me a funny joke please"))
print(old_school_chatbot("What is the capital of France?"))

The Pain Points:
  • Rigidity: If the user types "Greetings" instead of "Hello", the bot fails.
  • No Knowledge: It doesn't know the capital of France because we didn't explicitly program that fact.
  • No Memory: If you say "My name is Alice" and then ask "What is my name?", it won't know.
  • Endless Code: To make this smart, you would need billions of if statements.
There has to be a better way. We need a function that doesn't just match words, but understands intent. We need a function that runs on a supercomputer, not just our laptop.

That function is the OpenAI API.

---

Let's Build It

We are going to replace those brittle if statements with a call to GPT-4o-mini.

Prerequisites:

You need the openai library installed. If you haven't done this yet, run this in your terminal:

pip install openai

You also need an OpenAI API key.

Step 1: The Basic Connection

First, let's just prove we can talk to the model. We will send a single message and print the reply.

Note: In a production app, you should save your API key as an environment variable. For this exercise, you can paste it directly, but be careful never to share this code file with others if it contains your key.
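If you want to do it the production way from the start, here is a minimal sketch that reads the key from an environment variable named OPENAI_API_KEY (which is also the variable the SDK checks by default):

import os
from openai import OpenAI

# Set the variable in your terminal first, e.g.:
#   export OPENAI_API_KEY="sk-..."
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

In fact, if OPENAI_API_KEY is set in your environment, OpenAI() with no arguments will find it on its own.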
from openai import OpenAI

# Initialize the client with your API key
# Replace 'sk-...' with your actual key
client = OpenAI(api_key="sk-...")

print("Sending message to OpenAI...")

# This is the "function call" to the AI
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)

# Extracting the text from the complex response object
answer = response.choices[0].message.content
print(f"AI Answer: {answer}")

Why this matters:

This is the "Hello World" of AI. We created a client (our phone line to OpenAI), sent a message with the role "user" (that's us), and asked for a completion. The variable answer now holds the intelligence.

Step 2: Adding a Personality (System Role)

The messages list is powerful. It doesn't just take user input; it takes instructions. We use the "system" role to set the behavior before the user even speaks.

from openai import OpenAI

client = OpenAI(api_key="sk-...")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        # System role defines the AI's behavior
        {"role": "system", "content": "You are a helpful assistant that speaks only in rhymes."},
        {"role": "user", "content": "How do I bake a cake?"}
    ]
)

print(response.choices[0].message.content)

Run this code. You will see the AI gives you a recipe, but it forces it into a rhyming scheme. This proves you are in control of the "software's" behavior using plain English.

Step 3: Making it Interactive

Hardcoding the question "How do I bake a cake?" isn't very fun. Let's use Python's input() function so we can type our questions in the terminal.

from openai import OpenAI

client = OpenAI(api_key="sk-...")

print("--- AI Bot (Type 'quit' to stop) ---")

while True:
    user_input = input("\nYou: ")
    if user_input.lower() == "quit":
        break

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_input}
        ]
    )

    print(f"AI: {response.choices[0].message.content}")

The Problem with Step 3:

Run this. Say: "Hi, my name is Bob." The AI will say hello.

Then ask: "What is my name?"

The AI will likely say: "I don't know your name."

Why? Because every time the loop runs, we send a fresh request. We aren't sending the history of the conversation. The AI has amnesia.

Step 4: Adding Memory (The Final Code)

To fix the amnesia, we need to maintain a list called conversation_history. Every time we speak, we append our message to this list. Every time the AI replies, we append its message to the list. Then, we send the entire list back to OpenAI on the next turn.
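To make the mechanism concrete, here is what that list looks like after a single exchange (an illustration; the content strings are just examples):

conversation_history = [
    {"role": "system", "content": "You are a friendly, concise technical mentor."},
    {"role": "user", "content": "Hi, my name is Bob."},          # turn 1: us
    {"role": "assistant", "content": "Nice to meet you, Bob!"},  # turn 1: the AI
]
# On turn 2, this whole list (plus the new user message) is sent again.
# That is the only reason the model "remembers" that your name is Bob.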

Here is the complete, working chatbot:

from openai import OpenAI

# 1. Setup
client = OpenAI(api_key="sk-...")

# 2. Initialize history. We start with the system message.
conversation_history = [
    {"role": "system", "content": "You are a friendly, concise technical mentor."}
]

print("--- AI Mentor Bot (Type 'quit' to exit) ---")

while True:
    # 3. Get user input
    user_input = input("\nYou: ")
    if user_input.lower() in ["quit", "exit"]:
        print("Goodbye!")
        break

    # 4. Add the user message to the history
    conversation_history.append({"role": "user", "content": user_input})

    # 5. Send the ENTIRE history to the model
    try:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=conversation_history,
            temperature=0.7  # Controls creativity (0.0 = robotic, 1.0 = creative)
        )

        # 6. Get the AI's reply
        ai_reply = response.choices[0].message.content
        print(f"AI: {ai_reply}")

        # 7. CRITICAL: Add the AI message to history so it remembers next time
        conversation_history.append({"role": "assistant", "content": ai_reply})

    except Exception as e:
        print(f"An error occurred: {e}")

Run this.
  • Say "Hi, I'm [Your Name]."
  • Ask "What is the capital of France?"
  • Ask "What was my name again?"

It will remember you! You have successfully built a stateful application using a stateless API.
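One caveat before you move on: because the entire history is resent on every turn, requests grow (and cost more) as the conversation gets longer. A common mitigation, sketched here under the assumption that each turn is one user message plus one assistant message, is to keep the system message and only the most recent turns:

MAX_TURNS = 10  # hypothetical limit; tune it for your app

def trimmed_history(history, max_turns=MAX_TURNS):
    # Keep the system message (index 0) plus the last N user/assistant pairs
    return history[:1] + history[1:][-max_turns * 2:]

# Then call the API with the trimmed list instead of the full one:
#     messages=trimmed_history(conversation_history)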

---

Now You Try

You have a working chatbot. Now, let's experiment with the API parameters to see how they change the output.

1. The "Fact Checker" Bot

Modify the temperature parameter in the code above. Change it to 0.0. Change the system prompt to: "You are a rigid fact-checker. You answer only with 'True' or 'False' followed by a one-sentence explanation."

Test it: Ask it if the earth is flat. Ask it if Python is a snake. Notice how consistent and boring (in a good way) the answers become.

2. The "Short-Winded" Bot

Add a new parameter to the client.chat.completions.create call: max_tokens=20.

Test it: Ask the bot to explain quantum physics. It will start explaining and then get cut off mid-sentence. Why? This parameter controls cost and length. It forces you to handle incomplete data or set limits.
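You can also detect the cut-off in code rather than by eye. A sketch (finish_reason is part of the standard response object; "length" means the reply hit max_tokens, "stop" means it ended naturally):

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain quantum physics."}],
    max_tokens=20  # hard cap on the length of the reply
)

if response.choices[0].finish_reason == "length":
    print("(reply was truncated)")
print(response.choices[0].message.content)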

3. The "Translator"

Change the system prompt to: "You are a translator. No matter what the user says, translate it into Spanish."

Test it: Type normal English sentences. You've just built a translation app in 5 lines of code.

---

Challenge Project: The Split Personality

Your goal is to create a script that holds two separate conversations with two different personalities based on user selection.

Requirements:
  • When the program starts, ask the user: "Choose mode: (1) Pirate or (2) Lawyer".
  • Based on the choice, set a different system message.
      * Pirate: "You are a rude pirate searching for treasure."
      * Lawyer: "You are a formal, polite corporate attorney."
  • Enter the chat loop.
  • Ensure the AI stays in character for at least 3 turns of conversation.

Example Output:

Choose mode: (1) Pirate or (2) Lawyer: 1

You: Hello
AI: Arrgh! What brings ye to me ship, landlubber?

You: I want legal advice.
AI: The only law here is the law of the sea! Walk the plank!

Hint: You need to define the conversation_history list inside an if/else block after the user makes their choice, but before the while loop starts.
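If you get stuck, here is the shape of that setup (a skeleton only; the chat loop itself is the part you should write yourself):

mode = input("Choose mode: (1) Pirate or (2) Lawyer: ")

if mode == "1":
    system_prompt = "You are a rude pirate searching for treasure."
else:
    system_prompt = "You are a formal, polite corporate attorney."

conversation_history = [{"role": "system", "content": system_prompt}]

# ... now enter the same while True loop from Step 4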

---

Common Mistakes

1. The "Amnesia" Bug

* Mistake: Sending the request to OpenAI but forgetting to append the result back to your conversation_history list.

* Symptom: You ask "What is my name?" immediately after introducing yourself, and the AI doesn't know.

* Fix: Ensure you have the line conversation_history.append(...) for both the user input AND the AI response.

2. The "API Key" Error

* Mistake: Not setting the API key string correctly, or copying extra spaces.

* Symptom: AuthenticationError: Incorrect API key provided.

* Fix: Double-check your string. It should look like client = OpenAI(api_key="sk-...").

3. The Infinite Loop of Costs

* Mistake: Creating a while True loop without a break condition (like checking for "quit").

* Symptom: The program never ends. If you were automating this to run without input(), it could rack up costs rapidly.

* Fix: Always ensure you have a "kill switch" in your loops (the if user_input == "quit": break block).
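If you want your script to fail gracefully on mistake #2 instead of crashing, the SDK exposes typed exceptions you can catch (a sketch, assuming openai 1.x):

import openai
from openai import OpenAI

client = OpenAI(api_key="sk-...")

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "ping"}]
    )
except openai.AuthenticationError:
    print("Bad API key: check for typos or extra spaces.")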

---

Quick Quiz

Q1: Which role is used to give the AI its "personality" or initial instructions?

a) user
b) assistant
c) system
d) server

Q2: What happens if you set temperature=2.0 (or a very high number)?

a) The AI becomes extremely accurate.
b) The AI becomes very fast.
c) The AI outputs nonsense or hallucinates wildly.
d) The code crashes.

Q3: When you run client.chat.completions.create, where is the "thinking" happening?

a) On your computer's CPU.
b) On OpenAI's servers.
c) Inside the Python installation folder.
d) In your RAM.

Answers:

* Q1: c) The system role sets the context/behavior.

* Q2: c) High temperature makes the model erratic and nonsensical.

* Q3: b) It is an API call; the heavy lifting happens remotely.

---

What You Learned

Today you bridged the gap between basic scripting and Artificial Intelligence.

* The SDK: You used client.chat.completions.create to access a supercomputer.

* Roles: You learned that system defines the rules, user provides input, and assistant provides output.

* State: You learned that LLMs are stateless, so you must manage the conversation_history yourself in a Python list.

Why This Matters:

Every major AI application you see, from customer support bots to coding assistants, is built on this exact loop: Append History -> Send to API -> Get Response -> Append History. You now possess the core logic of modern AI apps.

Tomorrow:

We are going beyond text. Tomorrow, we will look at Vision (giving the AI eyes to see images) and Structured Output (forcing the AI to return JSON data instead of paragraphs). See you then!