Over the last twenty years, roughly half a dozen serious models have tried to describe how technology should be used in classrooms. They have different shapes (ladders, Venn diagrams, matrices, grids), different authors, and different jobs. They are also confusing to compare, because nobody writes about them all in one place. This page is my attempt to fix that.

I'll run through the six that matter, give each one a one-line verdict, and show you a quick table up front so you can see them side by side. If you want the punchline now: most schools need just one, and I'll explain which at the end.

The quick table

| Model | Shape | What it describes | Best for |
| --- | --- | --- | --- |
| RAT (2002) | 3-step ladder | What the technology does to the task | Quick personal reflection |
| TPACK (2006) | Venn diagram, 7 regions | What the teacher needs to know | Teacher training, CPD planning |
| SAMR (2010) | 4-rung ladder | What the technology does to the task | Individual planning reflection |
| LoTi (1994, revised) | 7-level scale | How advanced the technology use is | District-wide benchmarking |
| TIM (2005) | 5 × 5 matrix | Classroom characteristics and tech integration | Formal observation and accreditation |
| PICRAT (2020) | 3 × 3 grid | What teacher AND students are doing | Whole-school coaching and lesson design |

Now the longer version of each.

The six models, with a verdict on each

RAT (Hughes, 2002)

The original three-step

Before SAMR came along, Joan Hughes at the University of Texas proposed a three-level model: Replacement, Amplification, Transformation. The RAT axis at the bottom of PICRAT is essentially Hughes's original idea, borrowed and preserved almost unchanged.

RAT has the same weakness as SAMR (it describes the task, not the students), but it has the virtue of being only three buckets, which is exactly as many as a teacher can actually use. If SAMR had stopped at three rungs, RAT would still be the standard.

Verdict. The unsung hero of the field. If you want the cleanest possible "what is this technology actually doing?" question, RAT is it. Most people using it don't know they're using it, because PICRAT absorbed it in 2020.

TPACK (Mishra and Koehler, 2006)

Three overlapping circles, seven named regions

TPACK describes the knowledge a teacher needs to use technology well: Technological Knowledge, Pedagogical Knowledge, and Content Knowledge, plus all the overlaps. It was originally written as TPCK and later renamed for easier pronunciation.

It's a real contribution to teacher education. The insight (that tech + pedagogy + content is more than the sum of its parts) reshaped how initial teacher training works. The problem is that TPACK was never designed to be a lesson-design tool, and when schools have tried to use it that way it has collapsed under its own weight. It describes the teacher, not the classroom.

For a fuller comparison of where PICRAT and TPACK actually sit relative to each other, read the detailed comparison.

Verdict. Indispensable for teacher trainers. Much harder to use in a staff meeting. If you find yourself teaching teachers how to use technology, use TPACK. If you find yourself helping teachers see what they're doing in the classroom, use something else.

SAMR (Puentedura, 2010)

Four rungs on a ladder

Probably the best-known model in schools. Ruben Puentedura's ladder goes Substitution, Augmentation, Modification, Redefinition, and for most of the 2010s this was the shared vocabulary of technology integration.

It did an enormous amount of good simply by getting teachers to ask "is the tech actually doing anything?" That's a question worth asking. The trouble is that the middle two rungs are defined in a way teachers can't reliably apply. If you've sat through a department meeting arguing over whether shared Google Docs are Augmentation or Modification, you know what I mean.

A full treatment of where SAMR succeeds and where it struggles is in the PICRAT vs SAMR piece.

Verdict. Historically important, still widely used. If your school is already fluent in SAMR and it's working, don't tear it out. If you're starting from scratch, start with PICRAT and use SAMR as background reading.

LoTi (Moersch, 1994, revised multiple times)

Seven-level scale of technology implementation

Chris Moersch's Levels of Teaching Innovation scale runs from Level 0 (Non-use) through to Level 6 (Refinement). LoTi has a serious claim to being the most rigorously validated of the models on this list: thirty years of iteration, a large body of quantitative research behind it, and wide use in American school districts for benchmarking technology adoption at scale.

What it is not is a planning tool. If you need to answer the question "where is this whole district on the technology adoption curve?" LoTi will give you numbers the other models can't. If you want to help Mrs Jones think about tomorrow's science lesson, LoTi is the wrong shape of hammer.

Verdict. Useful if you're a superintendent or MAT lead trying to measure technology integration across forty schools at once. Not useful for individual teachers or individual lessons.

TIM (Florida Center for Instructional Technology, 2005)

Five-by-five matrix

The Technology Integration Matrix crosses five characteristics of a meaningful learning environment (Active, Collaborative, Constructive, Authentic, Goal-Directed) with five levels of technology integration (Entry, Adoption, Adaptation, Infusion, Transformation). Twenty-five cells in total, each with a paragraph of description.

TIM is the most theoretically complete of the models on this list. It's also the most demanding. Many US teacher preparation programmes use it, and I have a lot of respect for it. But twenty-five cells is too many to carry in your head, and in practice you need the matrix printed out in front of you to apply it. Most busy teachers won't do that.

Verdict. The model to reach for if you are writing a school's technology strategy, running formal observations, or needing to provide evidence to an accreditation body. Overkill for Tuesday's lesson planning.

PICRAT (Kimmons, Graham and West, 2020)

Three-by-three grid

The newest of the serious models, and the one I spend most of my professional life using. A two-axis grid that borrows the RAT axis from Hughes (2002) and adds a PIC axis describing what the students are actually doing: Passive, Interactive, or Creative.

The PIC axis is the bit that changes the conversation. Every other model on this list describes the teacher's use of technology in some way. PICRAT is the first one that puts students on the map as well. That sounds like a small thing until you try it: the moment you start asking "what were my kids actually doing?" most of your own lessons suddenly look different.

The grid is also small enough to hold in your head and move around in. Nine cells. Two axes. You can draw it on a Post-it. That's not a detail. It's the thing that makes the model survive contact with a real school.

A fair critique is that PICRAT is newer and therefore has less empirical research behind it than the models it is trying to improve on. That is true. If you need a model with decades of validation studies, LoTi or TPACK will give you more to cite. If you need a model that works in a staff meeting tomorrow, PICRAT will get you further.

Verdict. The cleanest model I've found for whole-school use. If you want one shared vocabulary for lesson design, observation, coaching and conversation, PICRAT is the one that sticks. I'm not a neutral observer here. I've built a suite of tools on top of it. But I've also tried the alternatives at scale, and this is the one that survives a Tuesday.

Which should your school actually use?

My recommendation, after running this conversation in a lot of schools, is simple. Use PICRAT.

It's the one that handles the real jobs of running a school: lesson design, classroom observation, coaching conversations, the shared vocabulary a staff can actually carry around in their heads. Everything the other models do well is either captured inside PICRAT already, or lives at an altitude most classroom teachers don't operate at. Hughes's RAT axis is now the bottom half of PICRAT. SAMR's insight about task transformation survives in the same place, minus the arguments over Modification. TPACK has real value in teacher preparation research and in initial training programmes, but it was never designed to guide tomorrow's lesson, and in my experience it doesn't.

The others are worth knowing about. If you're writing a dissertation on technology integration, you'll want to cite TPACK. If you're running a district-level benchmarking exercise, LoTi will give you the numbers. If you're building a formal accreditation framework, TIM is the most complete. For the everyday work of helping teachers use technology well, though, none of those alternatives earn their keep in the way PICRAT does.

The useful rule of thumb

If a model can't fit on a Post-it, it won't survive a staff meeting.

This isn't a rule about education. It's a rule about how busy people hold tools in their heads. PICRAT fits on a Post-it. That's not a small thing. It's the reason the model survives contact with real schools, real teachers, and real Monday mornings. It's also why, after twenty years of models trying to solve this problem, PICRAT is the one I've ended up building my professional life around.

If you want to see PICRAT applied to a real lesson you've just taught, Analyse will place it on the grid in about two minutes. If you want help planning one that lands in a particular cell, Generate works the other way round.

Andy Perryer is a global leader of digital learning and the creator of PICRAT Suite. Sources: Hughes (2002) on RAT, Mishra and Koehler (2006) on TPACK, Puentedura (2010) on SAMR, Moersch (1994) on LoTi, the Florida Center for Instructional Technology (2005) on TIM, and Kimmons, Graham and West (2020) on PICRAT.