How to Write Effective Prompts in MagicSchool

Getting the most out of AI starts with how you ask. This guide walks you through the core principles of effective prompting, from basic structure to advanced techniques.

Why Prompting Matters

A vague prompt gets a vague answer. A well-crafted prompt gives the AI the context it needs to produce something genuinely useful. Think of it like giving directions: the more specific you are, the more likely you'll end up where you want to go.


The Building Blocks of a Good Prompt

Every strong prompt includes some combination of these elements:

  • Persona: Who should the AI act as? (e.g., "You are a High School Principal's Executive Secretary")

  • Task: What do you need done? (e.g., "Write a professional reminder notice about upcoming teacher evaluations")

  • Context: What details does the AI need to do the job well? (e.g., dates, locations, audiences, constraints)
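If you like to think of these building blocks as a fill-in template, the sketch below shows one way to assemble them. This is purely illustrative Python; the function name and wording are assumptions for the example, not part of MagicSchool:

```python
def build_prompt(persona: str, task: str, context: str) -> str:
    """Combine the three building blocks into a single prompt string."""
    return f"You are {persona}. {task}. Context: {context}"

prompt = build_prompt(
    persona="the High School Principal's Executive Secretary",
    task="Create a professional reminder notice about upcoming teacher evaluations",
    context="Evaluations run March 1-15; lesson plans are due February 15",
)
print(prompt)
```

The point of the template is simply that persona, task, and context each get filled in every time, so nothing important is left to chance.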

Example: Bad vs. Good

❌ Bad Prompt: "Make a notice about buses being late"

✅ Good Prompt: "You are an Administrative Assistant for an Elementary School. Write an email for parents about changes to afternoon dismissal procedures due to ongoing construction in the south parking lot. Car rider pick-up is relocating to the gymnasium entrance starting next Monday. Bus riders will continue using the main entrance."

❌ Bad Prompt: "You're a secretary, remind teachers about a meeting."

✅ Good Prompt: "You are the High School Principal's Executive Secretary. Create a professional reminder notice about upcoming teacher evaluations. Evaluations are scheduled for March 1–15, require 2 classroom observations, lesson plan submission by February 15th, and self-evaluation forms must be completed in the district portal."

The good prompts are longer, and that's the point. More context = better output.

Prompting for Data Analysis

When working with data, don't just ask the AI to "look at this." Give it a structured set of questions to answer. This produces far more actionable results.

Example: Analyzing Grant Expenditure Data

Instead of: "Look at this data and tell me what it means"


Try:

"Analyze the uploaded grant expenditure data with these parameters:

  1. Calculate monthly burn rate by category

  2. Identify spending patterns or anomalies

  3. Project end-of-year status based on current trends

  4. Flag any compliance risks

  5. Compare spending across funding sources"

A structured prompt like this surfaces specific findings, such as a spike in professional development spending, budget overruns by funding source, or missing documentation flags, instead of a generic summary.
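A numbered list of questions like the one above is easy to build programmatically if you reuse the same analysis prompt often. This Python sketch is illustrative only; the function name and intro text are assumptions, not anything MagicSchool provides:

```python
def analysis_prompt(intro: str, questions: list[str]) -> str:
    """Turn a flat list of questions into a numbered, structured prompt."""
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, start=1))
    return f"{intro}\n{numbered}"

questions = [
    "Calculate monthly burn rate by category",
    "Identify spending patterns or anomalies",
    "Project end-of-year status based on current trends",
]
print(analysis_prompt(
    "Analyze the uploaded grant expenditure data with these parameters:",
    questions,
))
```

Keeping the questions in a list also makes it easy to swap one out between runs without retyping the whole prompt.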

Prompting for Charts and Visualizations

Use this template when you need a chart or graph created:

"I need to create a [CHART TYPE] to show [DATA RELATIONSHIP] for [AUDIENCE]."

Key Details:

  1. Data Content: Describe the data

  2. Purpose: What you want to show

  3. Audience: Who will view it

  4. Key Message: The main insight to convey

  5. Interactivity Needs: Any interactive features required
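Because the chart template has fixed placeholders, it can be filled in mechanically. The sketch below uses Python's standard `string.Template` to do so; the specific chart, data, and audience values are made-up examples:

```python
from string import Template

# The bracketed placeholders from the template above, filled via substitution.
chart_request = Template(
    "I need to create a $chart_type to show $relationship for $audience."
).substitute(
    chart_type="stacked bar chart",
    relationship="monthly spending by funding source",
    audience="the school board",
)
print(chart_request)
```

Whichever way you fill it in, the template forces you to name the chart type, the relationship, and the audience before you ask.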

Choosing the Right Prompt Complexity

Not every task needs the same depth. Match your prompt complexity to the output you need:

  • Goal: Quick, simple output. Prompt style: short and direct. Example: "Draft a 50-word reminder about a TIA data submission deadline next Friday."

  • Goal: Targeted insight. Prompt style: focused questions. Example: "List two immediate improvements we can make in how we cross-check staff data between HRIS and PEIMS."

  • Goal: Strategic planning. Prompt style: comprehensive and detailed. Example: "Outline a district-wide plan to integrate HRIS, SIS, and TEA systems for seamless HR data reporting. Include steps for data governance, staff training, and ongoing quality assurance."

Advanced Technique: Self-Refine Prompting

Once you've built a prompt (or a document, plan, or email), ask the AI to critique and improve it:

"Based on what you know I'm trying to do, what edits or changes would you make to this prompt?"

This technique, sometimes called self-refine prompting, turns the AI into a collaborator that catches gaps you might have missed.


Pro Tips

  • Be specific about length. If you want something short, say so: "in under 80 words" or "a 5-item checklist."

  • Name your audience. Writing for parents? Principals? Teachers? The tone will shift accordingly.

  • Upload supporting documents. When doing deep research or analysis, attach the relevant files and reference them in your prompt.

  • Iterate. If the first output isn't right, refine your prompt rather than starting over. Add the missing context and try again.

Advanced Technique: Few-Shot Prompting

Show, Don't Just Tell: Using Examples to Guide the AI

Sometimes the best way to get the output you want isn't to describe it; it's to show the AI what it looks like.

This technique is called few-shot prompting. Instead of only writing instructions, you give the AI one or more examples of what a good output looks like and, optionally, what a bad one looks like too.

Think of it like teaching a student a new concept. You could hand them a worksheet, or you could show them a finished project and say "do it like this." The example does a lot of the explaining for you. It's the Gradual Release model, applied to the AI itself.


How to do it in MagicSchool:

When building or editing a custom tool, include examples directly in your instructions or prompt, or upload them as files. Be explicit about what they are:

"Here is an example of a strong response that should receive a 4 on the development of ideas section: [paste or upload example]. Here is an example of a weak response that should receive a 1: [paste or upload example]. Use these to calibrate how you score student writing."


You can also include a non-example to help the AI understand what to avoid:

"This is an example of feedback that is too vague — do not respond like this: 'Good job, but try to use more evidence.' Instead, responses should point to a specific sentence and suggest a specific fix."


The more targeted your examples, the better your results. If the AI is scoring a 4th grade argumentative essay, show it a 4th grade argumentative essay — not a general writing sample. The closer the example matches your actual use case, the more it will pull the AI's output in the right direction.
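A few-shot prompt is just an instruction followed by labeled examples (and, optionally, a non-example). The Python sketch below shows one way to assemble that structure; the function name, labels, and sample text are illustrative assumptions, not a MagicSchool feature:

```python
def few_shot_prompt(instruction, examples, non_example=""):
    """Assemble an instruction, labeled examples, and an optional non-example
    into a single few-shot prompt, separated by blank lines."""
    parts = [instruction]
    for label, text in examples:
        parts.append(f"Example ({label}):\n{text}")
    if non_example:
        parts.append(f"Non-example (do not respond like this):\n{non_example}")
    return "\n\n".join(parts)

print(few_shot_prompt(
    "Score this 4th grade argumentative essay using the rubric.",
    examples=[("should score a 4", "[paste strong essay here]"),
              ("should score a 1", "[paste weak essay here]")],
    non_example="Good job, but try to use more evidence.",
))
```

Note that the examples go in as-is: the structure labels them clearly, but the calibration work is done by the examples themselves, which is why matching them to your actual use case matters so much.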


Quick reference:

Without examples: "Score this using the STAAR rubric"
With examples: "Score this using the STAAR rubric. Here is a paper that should score a 3, and here is one that should score a 1."

Without examples: "Give feedback on this student's writing"
With examples: "Give feedback like this example response I've provided. Avoid vague feedback like this non-example."


The bottom line: if you find yourself saying "the AI isn't getting it right," try adding an example before changing anything else. It's often the fastest path to the output you're looking for.
