PICRAT is a model for classifying how technology is being used in a teaching and learning context. It was introduced in 2020 by Royce Kimmons, Charles R. Graham and Richard E. West in a paper in the CITE Journal. This page is the reference version: what the model is, where it came from, and how it sits in the wider research literature.
If you're looking for the practical introduction with worked examples, the pillar guide is probably what you want. If you want to compare PICRAT to the model you're already using, the side-by-side page will help. This one is the encyclopedia entry.
Definition
PICRAT is a two-axis classification tool. The acronym joins the names of its two three-level axes.
The first, PIC, describes student engagement with the technology. It stands for Passive, Interactive, Creative. A student can be consuming information through the tech (Passive), responding to or through the tech (Interactive), or using the tech to produce something new (Creative).
The second, RAT, describes the instructional impact of the technology on the task. It stands for Replace, Amplify, Transform. The tech can replace a non-tech version of the task (Replace), enhance it in some meaningful way (Amplify), or redesign it into something that could not have existed without the tech (Transform).
Crossed, the two axes produce a three-by-three matrix of nine possible cells.
|             | Replace | Amplify | Transform |
|-------------|---------|---------|-----------|
| Creative    | CR      | CA      | CT        |
| Interactive | IR      | IA      | IT        |
| Passive     | PR      | PA      | PT        |

The PICRAT matrix. Student engagement on the vertical axis, instructional impact on the horizontal.
The originating paper
The model was formally introduced in a 2020 paper titled The PICRAT Model for Technology Integration in Teacher Preparation, published in Contemporary Issues in Technology and Teacher Education (CITE Journal), Volume 20, Issue 1.
The authors' explicit aim was to build a synthesis model. They surveyed roughly twenty existing frameworks for technology integration and noted that they fell into two broad camps: models that focus on how the technology changes the task (such as RAT and SAMR), and models that focus on what teachers need to know to integrate technology well (such as TPACK, originally published as TPCK). Neither camp, they argued, describes what students are actually doing with the technology in front of them, and that is the part of the classroom that matters most.
Their proposal was therefore to combine the two perspectives in a single diagram. The horizontal axis would borrow from Hughes's 2002 RAT model. The vertical axis would be new, naming three categories of student engagement that the authors had found consistently useful when observing classrooms: Passive, Interactive and Creative.
Why does this move matter? Before PICRAT, a teacher could report that she had moved from SAMR Substitution to SAMR Redefinition and still have sat a class through forty minutes of quiet compliance. The older models were not dishonest about this; they simply did not have a place on the diagram for it. PICRAT's contribution is to insist that "what is the technology doing?" and "what are the students doing?" are two questions, not one. That insistence is why the model now appears in teacher preparation courses, international school networks, and classroom observation studies that earlier models would not have reached.
The antecedents
PICRAT did not appear out of nowhere. The RAT half of the model is a direct inheritance from earlier work. The PIC half is new, but builds on a long line of thinking about active learning.
Hughes's RAT model (2002)
Joan Hughes of the University of Texas proposed RAT as a three-level framework for classifying the instructional effect of a technology: whether it Replaced a non-digital version of the activity, Amplified it, or Transformed it into something genuinely new. Hughes's paper was focused on teacher education and the challenge of helping trainee teachers see past surface-level technology use.
The bottom axis of PICRAT is Hughes's RAT, almost unchanged. Kimmons, Graham and West credit her work directly.
Puentedura's SAMR model (2010)
SAMR (Substitution, Augmentation, Modification, Redefinition) is the better-known cousin of RAT. Ruben Puentedura's four-rung ladder became the dominant vocabulary of technology integration through the 2010s. PICRAT's authors critique SAMR on two grounds: that the distinction between Modification and Redefinition cannot be applied reliably, and that SAMR, like RAT, describes only the teacher's use and ignores student engagement.
A detailed comparison of PICRAT and SAMR is in the dedicated comparison piece.
Mishra and Koehler's TPACK (2006)
TPACK describes the knowledge base a teacher needs in order to integrate technology meaningfully. Its circles of Technological, Pedagogical and Content Knowledge overlap to form seven named regions of expertise. Kimmons, Graham and West recognise TPACK as a significant contribution but argue that, as a model of teacher expertise, it describes the teacher's preparation rather than what is happening in the classroom itself.
A fuller treatment of how the two models relate is in the PICRAT vs TPACK piece.
Other models in the family
The 2020 paper also reviews TIM (the Technology Integration Matrix, from the Florida Center for Instructional Technology), LoTi (Levels of Teaching Innovation, from Moersch), TAM (the Technology Acceptance Model), TIP (Technology Integration Planning), and several others. The authors conclude that while each model captures something real, none does both jobs at once: describing the teacher's use of technology and the student's engagement with it in a single visual framework.
An overview of all the main models, with a verdict on each, is in the compare-models page.
What the two axes actually mean
The axes are worth unpacking more carefully because the labels hide some real subtlety.
The PIC axis: student engagement
- Passive. Students are receiving information through the technology. Watching a video, reading a digital text, listening to audio. This is not a pejorative category: a well-produced explainer video can be the best possible use of a lesson segment.
- Interactive. Students are responding to or through the technology. Answering questions, annotating, contributing to a shared document, using an adaptive learning tool.
- Creative. Students are producing something new with the technology. Building a presentation, writing a program, making a film, composing music, designing a prototype.
The distinction the authors draw is not about the quality of the learning but about the kind of cognitive activity the technology is supporting.
The RAT axis: instructional impact
- Replace. The technology does essentially the same job as the non-digital alternative. Same task, different medium.
- Amplify. The technology makes the task more efficient, more accessible, or more engaging, without fundamentally redesigning it.
- Transform. The task could not have happened without the technology. Its existence depends on the affordances of the tool.
One point often missed is that Transform does not mean "more advanced use of technology." It means "the task is different." A primary school class using video-calling software to interview an author on the other side of the world is in Transform territory, even though the technology (video call) is extremely simple.
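The two axes can be sketched as a small data structure. This is a minimal illustration, not anything from the 2020 paper: the enum names and the `picrat_cell` helper are invented here, though the two-letter cell labels (PR, IA, CT and so on) follow the paper's own convention of abbreviating each cell by its axis initials.

```python
from enum import Enum

class PIC(Enum):  # student engagement (vertical axis)
    PASSIVE = "Passive"
    INTERACTIVE = "Interactive"
    CREATIVE = "Creative"

class RAT(Enum):  # instructional impact (horizontal axis)
    REPLACE = "Replace"
    AMPLIFY = "Amplify"
    TRANSFORM = "Transform"

def picrat_cell(engagement: PIC, impact: RAT) -> str:
    """Label a cell by its axis initials, e.g. Creative x Transform -> 'CT'."""
    return engagement.value[0] + impact.value[0]

# One plausible reading of the author-interview example above: students are
# responding through the technology (Interactive) in a task that could not
# exist without it (Transform).
print(picrat_cell(PIC.INTERACTIVE, RAT.TRANSFORM))  # prints "IT"
```

Crossing the two three-level enums yields exactly the nine cells of the matrix; the point of the sketch is simply that a lesson's classification is a pair of judgements, one per axis, not a single score.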
How the model has been used since 2020
In the years since publication, PICRAT has been adopted in a growing number of settings. Its strongest uptake has been in three places. Initial teacher education programmes have used it as a planning frame during school placements. International school networks have used it as a shared vocabulary across multilingual teaching staff, which works particularly well because the model is visual and the labels are concrete. And a smaller group of researchers have used it as an observational coding scheme for classroom video studies.
The model is also now being applied beyond its original K-12 context, including in higher education, adult training, and (in the most recent literature) AI-enabled instructional design.
Known limitations
The authors are careful about what the model does not do, and it's worth repeating those limitations.
PICRAT does not measure learning outcomes. It describes what happened in a lesson from a particular angle. Two lessons in the same cell can produce very different learning; two lessons in different cells can produce similar learning. The model is a reflection tool, not an evaluation tool.
PICRAT does not prescribe a "best" cell. Creative-Transform is not superior to Passive-Replace. The model is deliberately non-hierarchical. The right cell for a lesson depends on the subject, the pupils, the learning objective and the time available.
PICRAT does not capture pedagogy in full. It describes the role of technology in the lesson. The quality of questioning, the handling of misconceptions, the pace, the relationships (all the things a good teacher actually does) are outside the model's scope.
PICRAT tells you what role the technology is playing. It leaves the question of whether the lesson worked to the teacher.
Where to go next
If you want to see the model applied to lessons you actually teach, Analyse classifies a lesson on the grid in about two minutes. If you want to plan a new lesson that lands in a specific cell, Generate does the reverse. Both are free, and both use the original definitions from the 2020 paper.
If you're writing up the model for a dissertation, a policy paper or a school strategy document, the single source to cite is Kimmons, Graham and West's 2020 CITE Journal paper linked above. Everything else, including this page, is downstream of that.
Andy Perryer is a global leader of digital learning and the creator of PICRAT Suite, an application of the PICRAT model for classroom use. The PICRAT framework was developed by Royce Kimmons, Charles R. Graham and Richard E. West in their 2020 paper in the CITE Journal.