AI Transparency & Safety Guide

How Our AI Works

Inquisitive utilizes Google Gemini (via Vertex AI), a state-of-the-art Large Language Model (LLM), to assist teachers in grading and feedback.

  • Role of AI: The AI acts as a marking assistant. It analyzes student assessment responses against the marking rubric and suggests feedback and grades.

  • Teacher Control: The AI never finalizes a grade automatically. You, the teacher, are the "Human-in-the-Loop" responsible for reviewing, editing, and approving all outputs.

Data Privacy & Sovereignty

We prioritize the safety of student data above all else. Our integration with Google Cloud Platform (GCP) is configured as follows:

  • No Model Training: We use the Enterprise "Zero Data Retention" policy. Your students' work and your grading data are never used to train Google's AI models or our own models.

  • Data Residency: All AI processing and data storage occurs strictly within the United States.

  • Encryption: Data is encrypted in transit (while being sent to and from the AI) and at rest (while stored).

Known Limitations & Risks

Like all Generative AI tools, our system has limitations that teachers must be aware of:

  • Accuracy (Hallucinations): The AI may occasionally misinterpret a student's intent or "hallucinate" (invent) facts. Always verify the feedback before approving it.

  • Bias: While we use specific prompting to align with NGSS curriculum standards, AI models can sometimes reflect biases present in their training data. Please review feedback for fairness.

  • Context Window: The AI evaluates the specific responses provided. It does not know the student's full academic history or personal circumstances.

Teacher Best Practices (Do's & Don'ts)

DO

  • Review every piece of feedback. You are the final grader; the AI is just a tool.

  • Use the "Remark" button if the feedback feels generic or off-target.

  • Report any harmful or strange outputs to our support team immediately.

DON'T

  • Rely on the AI for high-stakes exams (e.g. State Standards Testing) without double-checking every line.

  • Input highly sensitive personal information (e.g. medical diagnoses, home addresses) into the grading text.

  • Assume the AI knows American slang or specific local context unless it is explicitly stated in the rubric.

Safety Mechanisms

To ensure appropriateness for PreK-5 education, we employ:

  1. System Prompting: The AI is strictly instructed to adopt the persona of a supportive Australian teacher and to ignore requests to generate non-educational content.

  2. Content Filtering: We utilize Google’s built-in safety filters to block hate speech, harassment, and sexually explicit content.

Data Requests

Email our AI Safety Officer at support@inquisitive.com to:

  • Request access to your data: a summary of the user information provided to the service and your input logs.

  • Report issues or request a clear explanation of any AI feedback.
