The Teachers Toolbox – Documents
Professor John Hattie’s Table of Effect Sizes
An effect size of 1.0 is equivalent to:
• advancing learners’ achievement by one year, or improving the rate of learning by 50%
• a correlation of approximately .50 between some variable (e.g., amount of homework) and achievement
• a two-grade leap at GCSE, e.g. from a C to an A grade
An effect size of 1.0 is clearly enormous! (It is defined as an increase of one standard deviation.)
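The “one standard deviation” definition can be sketched numerically. This is a minimal illustration, assuming the common pooled-standard-deviation form of the effect size (Cohen’s d); the `effect_size` helper and the score lists are hypothetical, not from Hattie:

```python
from statistics import mean, stdev

def effect_size(experimental, control):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n1, n2 = len(experimental), len(control)
    # Pool the two groups' variances, weighting each by its degrees of freedom.
    pooled_var = ((n1 - 1) * stdev(experimental) ** 2 +
                  (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2)
    return (mean(experimental) - mean(control)) / pooled_var ** 0.5

# Invented test scores: the taught group averages roughly one SD higher,
# so the effect size comes out close to 1.0.
control = [50, 55, 60, 65, 70]        # mean 60
experimental = [58, 63, 68, 73, 78]   # mean 68
print(round(effect_size(experimental, control), 2))
```

An effect size near 1.0 here means the average student in the experimental group scored about one standard deviation above the average control student.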
Below is a small selection of Hattie’s table of effect sizes. (Source: The Research of John Hattie, where you can access the full list.)
| Influence | Effect Size | Source of Influence |
|---|---|---|
| Teacher estimates of achievement | 1.29 | Teacher |
| Cognitive task analysis | 1.29 | Teacher |
| Student’s prior cognitive ability | 0.98 | Student |
| Behavioural intervention programs | 0.62 | Teacher |
| Challenge of goals | 0.59 | Teacher |
| Working memory strength | 0.57 | Student |
| Positive peer influences | 0.53 | Peers |
| Positive family / home dynamics | 0.52 | Parents |
| Teacher–student relationship | 0.52 | Teacher |
| Perceived task value | 0.46 | Teacher |
| Simulation & games | 0.34 | Teacher |
| Co- or team teaching | 0.19 | Teacher |
| Reducing class size | 0.16 | School |
| Student control over learning | 0.02 | Teacher |
| Student feeling disliked | -0.19 | Teacher/Peers |
• An effect size of 0.5 is equivalent to a one grade leap at GCSE
• An effect size of 1.0 is equivalent to a two grade leap at GCSE
• ‘Number of effects’ is the number of effect sizes from well-designed studies that have been averaged to produce the average effect size
• An effect size above 0.4 is above average for educational research
The effect sizes are averaged, and are a synthesis of research studies thought to be well designed and implemented by research reviewers. Hence they are the best guess we have about what has the greatest effect on student achievement.
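The averaging described above can be shown with a toy calculation. The study values below are invented purely for illustration, not Hattie’s data:

```python
# Each well-designed study contributes one effect size; the 'number of
# effects' is simply how many were averaged to give the average effect size.
study_effects = [0.45, 0.60, 0.52, 0.38, 0.55]  # hypothetical studies

number_of_effects = len(study_effects)
average_effect = sum(study_effects) / number_of_effects

print(number_of_effects, round(average_effect, 2))
```

Real syntheses usually weight studies (e.g. by sample size), but the simple mean captures the idea of pooling many studies into one headline figure.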
Some effect sizes are ‘Russian Dolls’ containing more than one strategy. For example, ‘Direct instruction’ is a strategy that includes active learning and structured reviews after one hour, five hours, and 20 hours of study. There is also immediate feedback for the learners, and some corrective work if this is necessary.
Hattie does not define most of the terms in his table. My understanding of them is:
Teacher estimates of achievement
Feedback: Hattie has made clear that ‘feedback’ includes telling students what they have done well (positive reinforcement) and what they need to do to improve (corrective work, targets, etc.), but it also includes clarifying goals. This means that giving students assessment criteria, for example, would be included in ‘feedback’. This may seem odd, but high-quality feedback is always given against explicit criteria, and so these would be included in ‘feedback’ experiments.
As well as feedback on the task, Hattie believes that students can get feedback on the processes they have used to complete the task, and on their ability to self-regulate their own learning. All of these have the capacity to increase achievement. Feedback on the ‘self’, such as ‘well done, you are good at this’, is not helpful. The feedback must be informative rather than evaluative.
Instructional quality: This is the student’s view of the teaching quality; the research was done mainly in HE institutions and colleges.
Instructional quantity: How many hours the student is taught for.
Direct instruction: Active learning in class; students’ work is marked in class and they may do corrective work. There are reviews after one hour, five hours, and 20 hours of study. See the separate handout.
Home factors: Issues such as social class, help with homework, the extent to which the learner’s education is thought important, etc.
Bilingual programs: Self-explanatory??
Mastery learning: A system of tests and retests of easy material with a high pass mark; if a student does not pass, they must do extra work and then take a retest on the material they were weak at. See Teaching Today by Geoffrey Petty.
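The mastery cycle just described (test, corrective work, retest until the high pass mark is reached) can be sketched as a short loop. The pass mark, the scores, and the `mastery_cycle` helper are all hypothetical illustrations:

```python
PASS_MARK = 90  # mastery learning deliberately sets a high threshold

def mastery_cycle(attempt_scores):
    """Return the attempt number on which the student reached mastery.

    attempt_scores simulates the score on each successive test; in a real
    classroom each retest would follow corrective work on the weak material.
    """
    for attempt, score in enumerate(attempt_scores, start=1):
        if score >= PASS_MARK:
            return attempt
        # Below the pass mark: the student does extra work, then retests
        # (modelled here as simply moving to the next score in the list).
    return None  # mastery not reached within the simulated attempts

print(mastery_cycle([70, 85, 93]))
```

The point of the high pass mark is that the loop only exits on genuine mastery; weaker first attempts trigger more corrective cycles rather than a lower grade.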
Questioning: Students being questioned. The most effective questions are high-order ‘why?’, ‘how?’ and ‘which is best?’ questions that really make students think. They need to be given time to think too, and can do better working in pairs than working alone.
Testing: Testing by itself is not as effective as remediation/feedback, where the test is used to find what the student needs to improve and they then do corrective work.
Effect sizes below 0.4: some of these add a lot of value in a short time, so don’t ignore them…
Programmed instruction: A form of instruction in which students are taught by a computer or a set of workbooks, working through a series of prescribed tasks. If the student gets an answer wrong, they are directed back to correct their misunderstanding. Devised by Skinner in the 1960s, but not much used now.
Finances/money: Funny… this seems to have a larger effect when paid to me…
Behavioural objectives: Having and using objectives in the form “The students should be able to…” immediately followed by an observable verb. For example, ‘explain’ is okay because you can listen to, or read, the student’s explanation. However, ‘understand’ isn’t behavioural because you can’t see or read the understanding.
Retention: Students who do not do well enough in one school year being kept back to do the year again.
• Surface learning (e.g. rote remembering without understanding) can produce high effect sizes in the short term for low cognitive skills such as remembering. For example, the use of mnemonics has an effect size of about 0.8. (There is more to learning than passing memory tests.)
• Most of the research was done in schools, though Hattie says effect sizes are remarkably stable and not much influenced by age.
• Some high-effect strategies are ‘Russian Dolls’ with other strategies ‘inside’.
• Some low effect sizes are not very time-consuming and are well worth trying for their additive effect.