This is the INSET I run when a department says it wants to know where it stands on technology integration. It works for a department of four and a department of twenty-four. The structure is the same. The only thing that changes is how long step four takes.
The output is a heatmap of every lesson the department has taught in the last half-term, three nudge commitments per teacher, and a vocabulary that the staffroom corridor will keep using afterwards. The framework that does the work is PICRAT, a 3x3 grid that classifies what the technology is doing for the lesson and what students are doing while it does it.
Before the session
Three things to set up.
Each teacher brings five lessons they have taught in the last half-term, written on five slips of paper or on a tablet they can scroll. Subject and year group on each. Any subject, any year group, any cell. The instruction is honest selection. The heatmap fails if everyone brings only their proudest moments.
The PICRAT grid printed on A2, taped to the longest wall in the room. Cells big enough to fit at least thirty sticky notes each. A pad of sticky notes per teacher, a different colour for each, so everyone can spot their own placements later without naming names.
A facilitator who can keep time and resist the urge to comment on placements during the silent steps. The framework does the work. The facilitator protects the silence.
0 to 10 min: The framework refresh
If the department is new to PICRAT, run a five-minute refresh. Put the PIC axis on one wall and the RAT axis on another. Run two thirty-second scenarios for each axis to illustrate the levels: passive, interactive, and creative on PIC; replace, amplify, and transform on RAT. Pair the two axes. The grid on the back wall is the result.
If the department already knows PICRAT, skip the refresh. Show the grid, name the placement question (what is the technology doing, and what are the students doing) and move on.
Ten minutes is a hard cap. If the framework refresh runs over, the conversation never reaches the audit.
10 to 30 min: Place the lessons
Twenty silent minutes.
Each teacher places their five lessons on the grid using their colour of sticky note. One placement per lesson. Subject and year group written on the note. No discussion yet.
The instruction is one sentence: place the lesson where it actually lives, not where you wish it lived. Honest placement is the only useful placement.
The facilitator's job is to circulate and answer single-lesson questions ("does this go in IA or IT?") with the placement test rather than an opinion. For the activity axis: are the students consuming, responding, or making? For the technology axis: take the technology away. Does the lesson still run? If yes, and just as well, R. If yes, but worse, A. If the lesson cannot run on paper, T.
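The two-question test above is mechanical enough to write down as a decision procedure. This is an illustrative sketch only; the function names and the three student-activity labels are my own shorthand, not part of the PICRAT paper.

```python
# A minimal sketch of the two-question placement test.
# Helper names and labels are invented for illustration.

def pic_level(students_are: str) -> str:
    """PIC axis: what are the students doing? consuming / responding / making."""
    return {"consuming": "P", "responding": "I", "making": "C"}[students_are]

def rat_level(runs_without_tech: bool, worse_without_tech: bool,
              possible_on_paper: bool) -> str:
    """RAT axis: take the technology away and ask what is lost."""
    if not possible_on_paper:
        return "T"  # the lesson cannot run on paper: Transform
    if runs_without_tech and not worse_without_tech:
        return "R"  # runs just as well without it: Replace
    return "A"      # runs, but worse: Amplify

def place(students_are, runs_without_tech, worse_without_tech, possible_on_paper):
    """Combine the two answers into a cell code such as PR, IA, or CT."""
    return pic_level(students_are) + rat_level(
        runs_without_tech, worse_without_tech, possible_on_paper)

# A recall starter on a slideshow: students consume, and the lesson
# runs just as well on paper.
print(place("consuming", True, False, True))  # PR
```

The point of the sketch is the order of the checks: "cannot run on paper" trumps everything on the technology axis, which is exactly how the facilitator should answer the IA-or-IT question.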
By minute 30, the grid is covered. Walk to the back of the room and look at the cluster.
30 to 45 min: Read the heatmap
Step back from the grid. Ask the room: where do the dots cluster? Most departments will see a heavy weighting in the bottom row (PR, PA, PT). That is the typical baseline.
The conversation now has three questions.
Is the cluster a deliberate choice or a drift? Some PR is the right placement: a recap, a recall starter, a reading task before a discussion. Most PR is drift; the lesson defaults there because it was easier to plan that way. A department audit is the moment to be honest about which lessons fall into which category.
Where is the missing middle? Most departments find that IA, the workhorse cell, is under-occupied. The conversation then becomes about why. Often it is because the move from PR to IA feels like more planning, when in practice it is one swap ("read this together" becomes "mark up this together").
Where are the unicorns? The CT outliers are often the best lessons in the department. Surface them. Let the teacher who placed a sticky note in CT describe what made it work. The rest of the department learns more from one detailed account of a working CT lesson than from any framework explainer.
The facilitator does not propose a target distribution. The grid is for the department to read for itself.
45 to 60 min: Pick three nudge commitments
The last fifteen minutes is the commitment step.
Each teacher picks one of their PR lessons and writes the smallest IA move on a fresh sticky note: the single change that would lift it up a row. Same lesson, different student activity. Stick the new note on top of the old one on the grid.
Then each teacher picks one of their IA lessons and writes the smallest CT move: the single change that would push the artefact to a real audience or a defensible argument. Same content, different output. Stick on top of the IA note.
Then each teacher picks one of their CT lessons (or, if they have none, one of their best IA lessons) and writes a share-back commitment: a five-minute slot at the next department meeting, a one-paragraph write-up, or a co-observation invitation. The point is to spread the unicorn across the team.
Three sticky notes. Three commitments per teacher. The grid now has the audit (the original placements) and the next moves (the commitments) layered on top.
Take a photograph of the grid before the room empties. Email it to the department by the end of the day. The photograph is the heatmap. The next half-term's department meetings now have a reference image.
What the department walks away with
Three things end up in the department's hands.
A visual baseline. Where the lessons currently sit. No opinions, no judgements, just placements.
Three nudge commitments per teacher. Realistic, specific, and visible to the rest of the department because they are on the grid in a different colour.
A vocabulary that survives the staffroom corridor for the next half-term. Teachers who have placed five lessons on a grid have internalised the language in a way that no slide deck can reproduce. The next time someone proposes a Creative Transform lesson, the room will know what is being asked.
If the department wants to deepen the audit between now and next half-term, drop a sample lesson into Review for an AI classification, or run Practice as a ten-minute warm-up at the start of the next department meeting. Coach is a useful follow-up for one-to-one conversations with teachers whose placements surprised them. All three reinforce the placement habit between INSETs.
Where the dots cluster, the conversation begins.
The audit is a recurring habit. The grid that comes out of the first session is the baseline. Six months later, run the same INSET with five new lessons per teacher, and compare the heatmaps. The drift is visible, the work is visible, and the conversation is shorter every time.
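If a department wants to compare the two photographs with something more precise than a squint, the tally is trivial to script. This is a hypothetical sketch; the cell codes follow the grid, but the counts are invented for illustration.

```python
# Hypothetical sketch: tally sticky-note placements into the 3x3 grid
# and measure the drift between two audits. Counts are invented.
from collections import Counter

PIC = ["P", "I", "C"]  # rows: Passive, Interactive, Creative
RAT = ["R", "A", "T"]  # columns: Replace, Amplify, Transform

def heatmap(placements):
    """Count lessons per cell, e.g. ["PR", "PR", "IA"] -> PR: 2, IA: 1."""
    counts = Counter(placements)
    return {p + r: counts.get(p + r, 0) for p in PIC for r in RAT}

def drift(baseline, followup):
    """Per-cell change between two audits; positive means growth."""
    return {cell: followup[cell] - baseline[cell] for cell in baseline}

term1 = heatmap(["PR"] * 12 + ["PA"] * 4 + ["IA"] * 3 + ["CT"])
term2 = heatmap(["PR"] * 7 + ["PA"] * 4 + ["IA"] * 7 + ["IT"] + ["CT"])
change = drift(term1, term2)
print(change["PR"], change["IA"])  # -5 4
```

Five lessons out of PR and four into IA is exactly the kind of sentence the second conversation should open with.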
Andy Perryer is the head of digital learning at a group of international schools and the creator of PICRAT Suite. The PICRAT framework was developed by Royce Kimmons, Charles Graham and Richard West in their 2020 paper in the CITE Journal. A printable INSET deck for this workshop is in development.