Other fields are more advanced in their use of evidence. Learn from them.
Other, mainly quantitative, fields have hierarchies of evidence – learning analytics has not moved far up these hierarchies
Randomised control trials are not always appropriate
Sometimes you need to be confident that an approach will work
Even when you carry out a test, it can be misleading
For example, the Hawthorne Effect can suggest an intervention is working, when it is just the attention being paid to participants that is having the effect
This study of a dead salmon shows the danger of false positives
https://blogs.scientificamerican.com/scicurious-brain/ignobel-prize-in-neuroscience-the-dead-salmon-study/
And when you are talking about p values, you have to know what you mean
Beware of simply accepting the evidence that confirms your opinion
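The dead salmon result is exactly this trap: run enough uncorrected tests on pure noise and some will come out "significant". A minimal simulation (our illustration, not taken from the study) of that multiple-comparisons effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests = 10_000           # independent "voxels", none with a real effect
n_samples = 20             # observations per test

false_positives = 0
for _ in range(n_tests):
    sample = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    _, p = stats.ttest_1samp(sample, popmean=0.0)  # true mean really is 0
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_tests} pure-noise tests were 'significant' at p < 0.05")
# Expect roughly 5% (about 500) false positives; a correction such as
# Bonferroni (p < 0.05 / n_tests) is needed before calling anything a finding.
```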
Why do we have this problem?
Well, education is hard.
It’s not only hard to learn – it’s hard to understand learning
We can’t easily see and measure learning, we can only use proxies for learning
Like self-report, or pre- and post-test
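To make the proxy point concrete, here is an illustrative sketch (hypothetical scores, not real data) of how a pre/post-test proxy is typically analysed:

```python
import numpy as np
from scipy import stats

# Hypothetical scores for eight students (made up for illustration)
pre  = np.array([52, 61, 48, 70, 55, 63, 58, 66])
post = np.array([58, 64, 55, 74, 60, 62, 65, 71])

gain = post - pre
t, p = stats.ttest_rel(post, pre)   # paired t-test on the proxy measure
print(f"mean gain = {gain.mean():.1f} points, t = {t:.2f}, p = {p:.3f}")
# Even a 'significant' gain is a statement about the proxy, not about
# learning itself: test familiarity, attention and gaming all move scores.
```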
And once people know the proxies you are using, they start to game them
Like the PISA test
The Programme for International Student Assessment
Every three years, it tests students in randomly sampled schools worldwide on reading, science and maths
They have done a lot of work on the methodology and have responded to critiques
We should be able to use this information to compare performance on these tests
But several things go wrong – and more goes wrong as this is increasingly taken as a measure of countries’ education systems.
Sometimes the results are invalid because there is not enough evidence.
Sometimes they are invalid because the importance of the tests causes countries to cheat
Sometimes they are invalid because they are taken as a proxy for a country’s educational system as a whole
So, we have a problem
It’s difficult to define learning
It’s even more difficult to measure learning
We have a tendency to look for evidence that confirms our opinion
If our work is widely reported, it can get distorted
We in learning analytics have this problem particularly, and we should be able to do better.
So, on the LACE project, we set out to find the evidence that does exist about learning analytics
We set up an evidence hub – grouping published work in terms of these four statements
We asked partners from across Europe to contribute
We looked at LAK conferences and the Journal of Learning Analytics
We put a call out to the community
We prompted people at last year’s LAK to add their papers.
So it’s not all the evidence, but it is a lot of it (and you can add more, if you see a gap)
We found three main things:
There was no point classifying papers in terms of a hierarchy of evidence, because most of the work was exploratory, think pieces, or small scale
There was relatively little evidence. Lots of papers have nothing to say in relation to our four propositions
What evidence there was turned out to be overwhelmingly positive – which seemed unlikely, and prompted our Failathons
Lots of the papers don’t address the cycle.
No benefits shown for learners.
We looked closely at a load of papers.
The Course Signals paper was one of the best at this
There has been fairly wide agreement in the literature that the Course Signals work at Purdue University shows that learning analytics can support learning.
People who engaged with Course Signals were more likely to be retained by the university. They were more likely to get high grades.
Here was a (LAK12) paper that gave us real evidence.
But there have been criticisms of the paper – most notably, the chocolate box critique
And then we run into the problem that it is almost impossible to check the figures because the data are not freely available, and the researchers either no longer have access to them or are not assigned time to work on them.
We have highlighted the problems with the Purdue paper because it is so significant in the field
But we all make mistakes.
Here is a chart produced by the two of us and published at LAK and in the JLA
It was checked by both of us and, presumably, by two sets of reviewers and by proofreaders and editors.
Can you spot the mistake?
Yes, two mistakes.
And we can tell you about them, and we can issue a correction to the JLA
But how do we correct the conference proceedings?
How do we, as a community, stop the mistakes being propagated?
Does this simply mean that learning analytics is a disaster zone? No.
What can we do about it? It’s not about individuals.
Punitive approaches are terrible – let’s not tear ourselves apart like the psychologists
How can we improve our systems and structures to reduce mistakes, improve quality overall?
We are the superheroes who can save our field from the spectre of non-evidence
LAK: Use the peer review process to address the problem
LAK: Prioritise gaps in evidence in calls for papers
LAK: Strengthen the scrutiny of statistics in the review process
LAK: Aim to make findings more accessible to non-researchers
LAK: Review best practice from fields more advanced in the use of evidence
LAK: Ask authors to identify how their work fits into the learning analytics cycle
Bidding for grants? How could your work fill an existing gap in the evidence?
Bring together bodies of work and highlight main findings
Pathways to impact: how will you share your findings with those outside universities?
Establish expectations about quality of evidence
Help doctoral students to fill gaps and to fit their work into the learning analytics cycle