93% smaller than frontier models

6B active parameters per query

80–95% less energy per inference

5% of parameters active at once

General-purpose AI has a hidden cost

The largest AI models contain up to 1.8 trillion parameters. Routing every student submission through these systems carries a significant environmental footprint.

Research shows the most energy-intensive models consume 33+ Wh per long prompt, over 70× more than smaller, purpose-built alternatives.

Frontier models activate 37B+ parameters per query

Academic grading needs a fraction of that capacity

Over-engineering creates unnecessary emissions

GPT-4o estimated annual energy: 390,000–460,000 MWh/year (enough to power ~35,000 US homes)

Energy per long prompt, frontier model: 33+ Wh (70× higher than purpose-built alternatives)

Model Architecture

A model built for one thing: assessment

Eduface Model

120B total parameters

6B active per query

~5% activation ratio at inference

Sparse MoE architecture routes each task to only the relevant parameters, so no compute is wasted.

Frontier Models (est.)

~1.8T total parameters (GPT-4, reported)

37B+ active per query at inference

15–20× more compute

Designed for breadth across all knowledge domains, most of which are irrelevant to academic assessment.

Mixtral 8×7B activates 13B / 47B (28%): 3–5× more than Eduface

DeepSeek-V3 activates 37B / 671B (5.5%): 6× more than Eduface

Eduface activates 6B / 120B (5%): the domain-optimised minimum
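The activation ratios quoted above follow directly from each model's active and total parameter counts; a few lines reproduce them (figures as published for Mixtral and DeepSeek-V3, and as stated on this page for Eduface):

```python
# Active-parameter ratios for the models compared above.
# Values are (active params in B, total params in B).
models = {
    "Mixtral 8x7B": (13, 47),
    "DeepSeek-V3": (37, 671),
    "Eduface": (6, 120),
}

for name, (active, total) in models.items():
    # Fraction of the network that actually runs per query.
    print(f"{name}: {active}B / {total}B = {active / total:.1%} active")
```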

Sustainability data on request

We can provide energy consumption estimates per deployment, per course,

and per institution. Use our figures to support your institution's

sustainability reporting requirements.

Energy use per 1,000 student submissions

Estimated CO₂ equivalent vs. frontier models

ISO 14001-aligned environmental reporting

Annual estimate: 10,000-student institution

Submissions processed (18 per student avg.): 180,000

Energy use, Eduface (3 Wh per submission): ~540 kWh

Energy use, frontier equivalent (11× higher): ~5,940 kWh

CO₂ saved vs. frontier (NL grid intensity, 2025): ~2.4 tonnes
Estimates based on published energy benchmarks for frontier models (Luccioni et al., 2023; Hugging Face, 2024) and internal Eduface inference measurements.
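The annual estimate above is simple arithmetic and can be checked directly. The grid-intensity figure of ~0.45 kg CO₂e/kWh is our assumption for the Dutch grid in 2025; the page itself does not state the number it used:

```python
# Back-of-envelope check of the 10,000-student annual estimate.
students = 10_000
submissions = students * 18              # 18 per student -> 180,000
eduface_kwh = submissions * 3 / 1000     # 3 Wh each -> 540 kWh
frontier_kwh = eduface_kwh * 11          # 11x higher -> 5,940 kWh
saved_kwh = frontier_kwh - eduface_kwh   # 5,400 kWh avoided

# Assumed NL grid intensity, 2025: ~0.45 kg CO2e per kWh.
co2_tonnes = saved_kwh * 0.45 / 1000

print(submissions, eduface_kwh, frontier_kwh, round(co2_tonnes, 1))
# -> 180000 540.0 5940.0 2.4
```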

Sparse activation: only what is needed

Step 01

Submission arrives

A student's assessment is received by the Eduface platform. The task is classified by assignment type, rubric, and learning objectives.

Step 02

Router activates experts

Our MoE architecture routes the task to only the 6B relevant parameters, skipping the other 114B entirely. Domain-specific experts handle assessment.

Step 03

Minimal compute, precise output

Feedback is generated using a fraction of the energy of a general-purpose model, without compromising quality or alignment to learning objectives.
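The three steps above amount to classify, route, and run only the selected experts. A minimal sketch of that top-k routing pattern, with illustrative expert names (this is not Eduface's actual implementation):

```python
# Illustrative sketch of sparse MoE routing: score every expert,
# but run only the top-k, so most parameters stay inactive.
EXPERTS = ["essay_rubric", "code_review", "math_steps", "citation_check",
           "language_quality", "plagiarism_signals", "stem_concepts", "general"]

def route(task_scores, k=2):
    """Return the k experts most relevant to the classified task."""
    scores = {e: task_scores.get(e, 0.0) for e in EXPERTS}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def grade(task_scores):
    active = route(task_scores)  # only these experts would execute
    return f"activated {len(active)}/{len(EXPERTS)} experts: {active}"

# A written essay mostly needs the rubric and language experts:
print(grade({"essay_rubric": 0.9, "language_quality": 0.7}))
# -> activated 2/8 experts: ['essay_rubric', 'language_quality']
```

In a real MoE model the router is a learned layer scoring experts per token, but the effect is the same: compute scales with the active experts, not the total parameter count.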

Efficiency by design, not as an afterthought

Book a session to see how Eduface's infrastructure choices translate to measurable sustainability gains for your institution.
