PromptTemplate

What it is

A PromptTemplate is a blueprint for the text you send to a language model. It lets you define placeholders that are filled with actual values at runtime. Think of it as a reusable question format.

Why it exists

Without it, you would have to write a fresh prompt by hand every time you query the model. A PromptTemplate saves time, avoids errors, and keeps your prompts consistent across calls to an LLM.

Real-world analogy

Imagine a letter template:

Dear {name},  
Your appointment is on {date}.  

You can fill in name and date each time without rewriting the whole letter. That’s exactly what a PromptTemplate does for AI prompts.

Minimal beginner example

import os
from dotenv import load_dotenv
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import PromptTemplate

load_dotenv()
api_key = os.getenv("GEMINI_API_KEY")

llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash", api_key=api_key)

template = "Hello, my name is {name}. How are you today?"
prompt = PromptTemplate(template=template, input_variables=["name"])

final_prompt = prompt.format(name="Sai Kiran")
response = llm.invoke(final_prompt)

print(response.content)

Here, {name} is filled dynamically when calling .format().

Small LangChain workflow

  1. Create a PromptTemplate.

  2. Fill it with data.

  3. Send it to the LLM.

  4. Receive the AI response.

It’s the first step before building more complex things like FewShotPromptTemplate or Chains.

Common beginner mistakes

  • Forgetting to list all placeholders in input_variables.

  • Hardcoding values instead of using .format().

  • Making templates too long or unclear. LLMs perform better with concise, focused prompts.

When to use this vs alternatives

  • Use PromptTemplate for single, reusable prompts.

  • Use FewShotPromptTemplate if you need to provide multiple examples to guide the model.

  • Use ChatPromptTemplate when building structured chat-based prompts with roles (system, user, assistant).
