Day 24 of 80

Phase 3: LLM Landscape & APIs

Day 24: Anthropic Claude API

Welcome to Day 24. Up until now, we have focused heavily on OpenAI's GPT models. They are the industry standard, but they are not the only player in town.

Today, we meet the strongest competitor: Anthropic's Claude.

Many developers (myself included) actually prefer Claude 3.5 Sonnet for coding tasks over GPT-4. Claude tends to be less "lazy," follows complex formatting instructions better, and writes more human-sounding text.

To be a true AI engineer, you cannot be dependent on a single company. If OpenAI goes down, or if their prices go up, or if a specific model just isn't working for your use case, you need to be able to switch gears immediately.

What You'll Build Today

We are going to build a "Code Doctor". This is a tool that takes a messy, broken, or hard-to-read piece of Python code and automatically refactors it into professional, clean, documented code.

Here is what you will master today:

* The Anthropic Python SDK: You will learn how to connect to Claude programmatically. We learn this because every AI provider speaks a slightly different language, and you need to be fluent in more than one.

* Claude 3.5 Sonnet vs. Opus: You will learn the difference between the "fast and smart" model (Sonnet) and the "deep thinker" model (Opus). We learn this so you can balance cost versus intelligence.

* The Messages API: You will learn Anthropic's specific message structure. We learn this because unlike OpenAI, Claude enforces strict roles to prevent errors.

* XML Tag Prompting: You will learn Claude's native language. We learn this because while ChatGPT understands loose text, Claude performs significantly better when you structure data with XML tags (like <code> and <instructions>).

The Problem

Imagine you are working on a team and a junior developer sends you this piece of code to review:

```python
def c(x,y):
    # this does math
    a = x+y
    if a > 10:
        print("big")
    else:
        print("small")
    return a
```

It works, but it is terrible. The variable names c, x, y, and a mean nothing. There are no type hints. The comments are useless.

You want to use AI to fix this. If you have been using OpenAI, you might try a prompt like this:

"Fix this code: [paste code]"

The Pain:
  • Chattiness: The AI often responds with: "Here is the fixed code. I changed the variable names because x and y were unclear..." It mixes the explanation with the code. If you are building software, you can't automatically copy-paste that response into a file because the conversational text will break your program.
  • Hallucination of Instructions: If you paste a long script, the AI might get confused about where the instructions end and the code begins.
  • Vendor Lock-in: You have built your entire mental model around OpenAI. If you try to switch to Claude because it's better at coding, your existing scripts will crash because the API methods are totally different.

We need a way to send code to an AI, have it understand exactly which part is data and which part is instruction, and get back only clean code.
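To see the lock-in concretely, here is a minimal sketch of how differently the same request is shaped for each provider. The helper functions are illustrative (they just build the keyword arguments you would pass to each SDK, rather than calling anything): note where the system prompt lives and which provider requires max_tokens.

```python
def build_openai_request(system_prompt: str, user_text: str) -> dict:
    # OpenAI style: the system prompt is just another message in the list,
    # and max_tokens is optional.
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    }

def build_anthropic_request(system_prompt: str, user_text: str) -> dict:
    # Anthropic style: the system prompt is a top-level parameter,
    # and max_tokens is required.
    return {
        "model": "claude-3-5-sonnet-20240620",
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [
            {"role": "user", "content": user_text},
        ],
    }

openai_req = build_openai_request("You are helpful.", "Hi!")
anthropic_req = build_anthropic_request("You are helpful.", "Hi!")
print(openai_req["messages"][0]["role"])  # the system prompt hides inside the list
print("system" in anthropic_req)          # here it is a top-level key
```

Same conversation, two incompatible shapes. That structural mismatch is exactly why we learn both APIs today.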

Let's Build It

We will use the anthropic library and the Claude 3.5 Sonnet model.

Step 1: Setup and Installation

First, you need the library.

```bash
pip install anthropic
```

You will also need an API Key from the Anthropic console. Once you have it, set it in your environment (just like you did with OpenAI).

Create a file named code_doctor.py.

```python
import os

import anthropic

# 1. Initialize the client
# Anthropic automatically looks for ANTHROPIC_API_KEY in your environment variables
# If you haven't set that, you can pass api_key="sk-..." directly (not recommended for sharing)
client = anthropic.Anthropic(
    api_key=os.environ.get("ANTHROPIC_API_KEY")
)

print("Client initialized!")
```

Step 2: Your First Message (The Hello World)

Anthropic uses a messages.create method. It looks similar to OpenAI, but there is a key difference: System prompts are a separate parameter, not part of the messages list.

```python
import os

import anthropic

client = anthropic.Anthropic(
    api_key=os.environ.get("ANTHROPIC_API_KEY")
)

# 2. Send a basic message
response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # This is the current "best for coding" model
    max_tokens=1024,  # Anthropic requires this parameter
    messages=[
        {"role": "user", "content": "Hello, Claude. Are you ready to code?"}
    ]
)

# 3. Print the result
# The structure of the response object is slightly different than OpenAI's
print(response.content[0].text)
```

Why this matters:

* model: We use Sonnet 3.5. It is faster and cheaper than Opus, and many developers find it stronger than GPT-4o on coding tasks.

* max_tokens: Unlike OpenAI, this parameter is required. It limits how much text Claude generates to prevent runaway costs.

* response.content[0].text: Anthropic returns a list of content blocks. We usually just want the text of the first block.

Step 3: System Prompts and Personas

Claude excels when given a "persona" using the system parameter. Note that we do not put the system prompt inside the messages list.

```python
import os

import anthropic

client = anthropic.Anthropic()

system_prompt = "You are a Senior Python Architect. You speak concisely and focus only on technical excellence."

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    system=system_prompt,  # <--- Top-level parameter!
    messages=[
        {"role": "user", "content": "Explain what a variable is."}
    ]
)

print(response.content[0].text)
```

Step 4: The Secret Weapon - XML Tags

This is the most important part of today. Claude is trained to pay special attention to XML tags.

If you mix instructions and data, AI gets confused.

* Bad: "Summarize this: [Article Text]..."

* Good (Claude Style): "Summarize the text found inside the <article> tags."

This clearly separates what you want done from the data you want it done to.

Let's write our messy code variable and wrap it in <code> tags.

```python
messy_code = """
def c(x,y):
    # this does math
    a = x+y
    if a > 10:
        print("big")
    else:
        print("small")
    return a
"""

# We wrap the code in <code> tags
prompt_content = f"""
Please refactor the python code located in the <code> tags.
Follow these rules:
- Use descriptive variable names.
- Add type hints.
- Add a docstring.
- Do not change the logic.

<code>
{messy_code}
</code>
"""

# Run this just to see how the string looks before we send it
print(prompt_content)
```

Step 5: The Complete Code Doctor

Now we combine everything. We will also ask Claude to output its answer in specific tags so we can parse it easily later if we want to.

```python
import os

import anthropic

client = anthropic.Anthropic()

def heal_code(bad_code):
    system_prompt = "You are an expert code refactoring tool. Output only valid Python code."

    # Construct the user message with XML tags
    user_message = f"""
Refactor the following Python code to meet PEP-8 standards.
Improve variable names and add type hinting.
Output the result inside <refactored_code> tags.

<code>
{bad_code}
</code>
"""

    print("--- Sending to Code Doctor ---")

    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=2048,
        system=system_prompt,
        messages=[
            {"role": "user", "content": user_message}
        ]
    )

    return response.content[0].text

# The messy input
my_messy_code = """
def calc(p, t):
    # p is price, t is tax
    tot = p + (p*t)
    return tot
"""

result = heal_code(my_messy_code)
print(result)
```

Run this code. You should see Claude return a beautiful, type-hinted function wrapped in <refactored_code> tags.

Why this works:

Because we used XML tags, Claude knew exactly where the code started and ended. It didn't try to "refactor" your instructions. It treated the content inside <code> as the data payload.
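Since we asked for the output inside <refactored_code> tags, a few lines of standard-library regex are enough to pull out just the code. This is a sketch that assumes Claude followed the instruction; real responses can occasionally omit the tags, so we fall back to the raw text.

```python
import re

def extract_refactored_code(response_text: str) -> str:
    # Grab everything between <refactored_code> and </refactored_code>
    match = re.search(
        r"<refactored_code>(.*?)</refactored_code>",
        response_text,
        re.DOTALL,
    )
    if match:
        return match.group(1).strip()
    # Fallback: the model did not use the tags, return the raw text
    return response_text.strip()

sample = "Here you go:\n<refactored_code>\ndef add(a: int, b: int) -> int:\n    return a + b\n</refactored_code>"
print(extract_refactored_code(sample))
```

Now the rest of your program can write the extracted string straight to a .py file without any conversational text sneaking in.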

Now You Try

You have a working Code Doctor. Now let's expand its capabilities.

  • The "Explain Why" Feature: Modify the prompt to ask Claude to provide a bulleted list of changes it made. Ask it to wrap that explanation in <explanation> tags, separate from the <refactored_code> tags.

  • Model Swap: Change the model string from claude-3-5-sonnet-20240620 to claude-3-opus-20240229. Opus is the "smartest" (and most expensive) model. Run the same code. Do you notice a difference in speed? Does the code quality change?

  • The Bug Hunter: Create a new function called find_bugs(code). Instead of refactoring, ask Claude to output a list of potential security vulnerabilities or logic errors found in the <code> tags.

Challenge Project: The Prompt Translator

One of the hardest parts of switching from OpenAI to Claude is rewriting your prompts. OpenAI handles "lazy" prompts well; Claude demands structure.

The Challenge:

Write a Python script that takes a "lazy" OpenAI-style prompt and converts it into a "structured" Claude-style prompt using Claude itself.

Requirements:

* Input: A string containing a basic instruction (e.g., "Here is a meeting transcript, summarize it and list action items: [transcript text]").

* Process: Send this to Claude with a meta-prompt (a prompt about prompting).

* Output: A new prompt string that uses XML tags (e.g., <instructions>, <transcript>, <output>).

Example Input: "Read this email and tell me if it's angry or happy: 'I hate this service, cancel my account.'"

Desired Output:

```xml
Please analyze the sentiment of the email found in the <email> tags.

<email>
I hate this service, cancel my account.
</email>

Output your analysis in <analysis> tags.
```

Hint:

Your system prompt for this challenge should be: "You are an expert Prompt Engineer specializing in Anthropic Claude XML formatting."
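To get you started without giving away the whole solution, here is a sketch of the meta-prompt builder. The function name and tag names are illustrative assumptions; the key idea is wrapping the lazy prompt in tags so Claude treats it as data to rewrite, not instructions to follow.

```python
def build_meta_prompt(lazy_prompt: str) -> str:
    # Wrap the lazy prompt in <lazy_prompt> tags so Claude rewrites it
    # instead of executing it
    return f"""Rewrite the prompt found in the <lazy_prompt> tags into a structured
Claude-style prompt. Separate the instructions from the data using XML tags,
and tell the model which tag to put its answer in.

<lazy_prompt>
{lazy_prompt}
</lazy_prompt>

Output only the rewritten prompt."""

meta = build_meta_prompt("Summarize this: quarterly revenue was flat.")
print(meta)
```

Send the returned string to Claude with messages.create (as in Step 5) and the system prompt from the hint above.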

What You Learned

Today you broke out of the OpenAI ecosystem. You now possess the ability to choose the best tool for the job.

* Anthropic SDK: You can initialize and send messages to Claude.

* System Parameters: You know that Claude keeps system instructions separate from the message history.

* XML Tagging: You learned that wrapping inputs in XML tags like <code> prevents confusion and improves performance.

* Model Selection: You used Claude 3.5 Sonnet, currently considered the gold standard for AI coding assistance.

Why This Matters:

In a real production application, you might use a cheap, fast OpenAI model for simple chat, but route all complex document analysis or code generation tasks to Claude 3.5 Sonnet. Being able to orchestrate multiple models makes you a powerful AI engineer.
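That routing idea can be sketched in a few lines. The task labels and the cheap fallback model below are illustrative assumptions, not part of either SDK; in a real app you would call the matching client with the returned model name.

```python
def pick_model(task: str) -> str:
    # Route heavyweight work to Claude 3.5 Sonnet, everything else to a
    # cheaper, faster model. Labels here are hypothetical examples.
    heavy_tasks = {"code_generation", "document_analysis", "refactoring"}
    if task in heavy_tasks:
        return "claude-3-5-sonnet-20240620"
    return "gpt-4o-mini"

print(pick_model("refactoring"))  # routed to Claude
print(pick_model("small_talk"))   # routed to the cheap model
```

The hard part in production is not the routing function itself but keeping two sets of request shapes and response parsing code side by side, which is exactly what you practiced today.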

Tomorrow:

We are going to look at the third giant in the room: Google Gemini. We will explore its massive context window (processing entire books at once) and its multimodal capabilities (video and audio).