
AI for SAT Prep: Powerful Tool or Risky Shortcut?

  • Writer: Topher Roebuck
  • 3 days ago
  • 3 min read

Google recently announced a partnership with The Princeton Review to offer full-length SAT practice tests directly within its Gemini AI app at no cost. On the surface, this sounds like a game changer. But what seems too good to be true often is, and this is no exception.


I have a degree in Computer Science from Emory University, and I have reviewed this AI tool extensively to assess how it might affect our students. I want to share some concerns I have about relying on Gemini as a primary prep tool.


Flawed Questions Confuse and Mislead Students


One of the first red flags appears in the Reading/Writing section.


[Image: Multiple-choice grammar question about dedication to language learning, with option B, "practice; indeed," highlighted.]

In the example above, students are asked to choose the one option that conforms to Standard English conventions. However, two answer choices (B and D) are both grammatically correct!


On a real SAT, this never happens. The test is carefully constructed so that there is exactly one correct answer and three incorrect ones. Students come to expect that consistency, so preparing with inaccurate or ambiguous questions shakes their confidence and blurs the grammar rules they’ve worked so hard to learn.


Good practice builds clarity. Flawed questions introduce confusion, and student outcomes suffer.


Content Misalignment Muddles Student Expectations


The math section shows even deeper issues.


[Image: Math question on predicting video game points from linear data, with x-y pairs given. Options A: 213, B: 211 (highlighted), C: 220, D: 206.]

This linear regression question is presented in a way that simply does not reflect how the SAT tests this concept. Students would never be asked to compute a line of best fit.


The real exam is far more likely to:


  • Provide a regression equation and ask students to interpret it

  • Ask about slope meaning or intercept meaning

  • Test understanding of correlation in context

  • Focus on reasoning rather than raw calculation


If AI-generated practice presents concepts in a more complicated or computation-heavy way than the real test will, students will be led to study the wrong skills.


Difficulty Sequencing Is Off


When students do test prep, they aren’t just learning math. They’re learning how the SAT will test them in math.


The actual digital SAT carefully ramps up difficulty. Early questions tend to be accessible. Later questions challenge students.


Gemini’s practice test scrambles this, presenting students with a high-difficulty parabola problem as the fifth question. Meanwhile, the second-to-last question asks students to convert 6.2 kilograms to grams—a simple one-step calculation.


[Image: Quiz question asking the mass in grams of a 6.2 kg object, given 1 kg = 1000 g. Four choices are shown; "A. 6200" is highlighted.]

If difficulty sequencing is off:


  • Strong students get rattled early

  • Average students lose confidence immediately

  • Score predictions become unreliable


Test prep is as much strategy as academic preparation, and those strategies rely, at least in part, on the SAT’s consistency. Gemini falls short of replicating it.


Subject Weighting Doesn’t Reflect the Real Exam


The real SAT has a very specific blueprint. The College Board publishes clear guidance about how math questions are distributed across categories. 

Types of Math                      Question Distribution
Algebra                            ~35%
Advanced Math                      ~35%
Problem Solving & Data Analysis    ~15%
Geometry & Trigonometry            ~15%


In the Gemini-generated test, about 40% of the questions fell under Problem Solving & Data Analysis, compared to about 15% on the official SAT. 


If AI-generated practice systematically overemphasizes certain domains and underrepresents others, students may walk into test day overprepared in one category and underprepared in another. Additionally, score expectations may drift from reality.
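The drift is easy to quantify. Here is a minimal Python sketch of the kind of tally I ran, using illustrative question counts (not the actual test data) to stand in for Gemini’s math section:

```python
# College Board's published target distribution for digital SAT math.
BLUEPRINT = {
    "Algebra": 0.35,
    "Advanced Math": 0.35,
    "Problem Solving & Data Analysis": 0.15,
    "Geometry & Trigonometry": 0.15,
}

def category_shares(counts):
    """Return each category's share of the total question count."""
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

def blueprint_drift(counts, blueprint=BLUEPRINT):
    """Percentage-point gap between observed shares and the blueprint."""
    shares = category_shares(counts)
    return {cat: round((shares[cat] - target) * 100, 1)
            for cat, target in blueprint.items()}

# Illustrative counts only -- not the real tally from the Gemini test.
observed = {
    "Algebra": 12,
    "Advanced Math": 10,
    "Problem Solving & Data Analysis": 16,
    "Geometry & Trigonometry": 2,
}
print(blueprint_drift(observed))
# A positive number means the category is overrepresented relative
# to the blueprint; here Problem Solving & Data Analysis sits
# 25 percentage points above its ~15% target.
```

With counts like these, Problem Solving & Data Analysis lands at 40% of the section, mirroring the overweighting I observed.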


Explanations Are Messy and Unclear


Perhaps the most concerning issue appears in Gemini’s explanations.


[Image: Math solution for a parabola question using vertex-form equations, with text highlighting steps for recalculation. Marked as correct.]

The explanation for a parabola question shows the AI essentially working through the problem in real time, correcting itself mid-stream and revising its reasoning. You can literally see the model “thinking,” which is fascinating from a technology standpoint but confusing and unhelpful from a student’s.


“Wait, recalculating…” does not inspire confidence in this tool.


Should Students Use Gemini’s Test Prep?


AI absolutely has a place in SAT and ACT prep with tutor supervision, as it can provide extra reps after official material is exhausted. However, it should not replace official College Board practice tests, structured curriculum aligned to the SAT blueprint, or human coaching for strategy and error analysis.


AI is a supplement. It’s certainly not a substitute for the one-on-one, student-centered tutoring that Within Reach offers! 

 
 
© Within Reach Education, 2024
Independent Educational Consultants Association member