Feed aggregator

People are sticky

The EdTechie Martin Weller's Personal Blog - Mon, 15/08/2016 - 20:09

Do I win the “eeeuuuwww” blog post award? There’s a concept in web design about stickiness, ie content that has people returning or spending longer. So in web design this might be having up to date content, nice design, etc. In light of my previous post about OER (read the comments by the way, some great stuff from Pat Lockley, Jim Groom, Lorna Campbell and Alan Levine in there) I’ve been thinking about why we like blogs and are a bit meh about OER sometimes (some OER is great of course, and many blogs are woeful, but you get my drift).

Stickiness, for want of a better, less punchable phrase, may be the answer. Blogs are generally more personal, social content. People are sticky – we like reading certain people’s take on a subject precisely because it is human. I don’t want the BBC interpretation of a new technology, I want to know what Audrey Watters thinks about it. Two things about stickiness: it’s a continuum, not a binary; you don’t always want or need something to be sticky.

On the first point, people are good at being sticky (I’m already annoying myself with the term, so I can imagine how you feel). Indeed in a world where our jobs may be taken by robots, stickiness may be one of our defining attributes. It’s nebulous, shifting, personal and rooted in thousands of years of culture and millions of years of evolution. But a newspaper, project, organisation or website can also be sticky (because it is made up of good contributors). Some things are more sticky than others and to different people, so it’s a hard quality to pin down and provide a reproducible template for.

On the second point, you need to determine if stickiness is an attribute that is important. For example, if I’m creating an open textbook, it needs to be great for that course, but it doesn’t really need to be something that people want to come back repeatedly. This may get at the distinction Jim was making in the comments about why he likes people and not resources. So, “how much stickiness do we want?” is now a valid project question.

What if OER was blogging?

The EdTechie Martin Weller's Personal Blog - Wed, 10/08/2016 - 11:26

I work a lot in OER, and I do a lot of blogging, and I often blog about OER. But I don’t blog as OER. In this post I’m going to compare two things that are completely different – OER repositories and blogs – and so you can’t make any valid comparisons. But that’s the point of the post really, to see if there is a different way of looking at a topic.

I’ve been looking at the stats for various repositories recently, both OA publishing ones, and OER ones. Thanks to David Kernohan for pointing me at JISC’s IRUS service, which provides a breakdown of publication repositories from UK universities. You need to have a login from a UK university to access it, so I’m not sure how public the data is. But it does provide you with a breakdown across all unis. The figures vary wildly eg the number of deposits per institution ranges from just six to over 37,000. Average monthly downloads range from 0 to 174,000. But in general most institutions have a total number of deposits in the low 1000s, and monthly download figures between 5K and 20K.

If we look at the UK’s now retired nationwide OER repository, JORUM, the stats are quite strange. They vary wildly by month eg 9K in Feb 2015 and 463K just a few months later in June. They list “views” and “downloads” – my guess would have been that views would always exceed downloads (people tend to look at an item to assess it rather than download it, I thought). But this shows wide variation also – sometimes views far outstrip downloads (eg Sept 2015, 285K vs 80K) but other times the opposite occurs (eg Sept 2014, 8K vs 351K). It would be interesting if anyone has theories about this, but that’s not really the point of my post.

I’ve also seen the stats on a few institutional repositories (which I won’t name) – some are impressive with millions of hits and others really don’t get much traffic at all. I was thinking about this in relation to blog stats. This blog has reasonably high traffic, whereas my new blogs have zero visitors. Partly that is a function of having built up enough content in here that others have linked to, so it has some SEO juice. It is also a function of being caught by lots of bots, so the stats are not always reliable. Visitors (which I think is the more reliable figure) over the past year were 214K and visits (probably mainly bots) 3.3 million.

I offer these figures up not as a poorly disguised humble brag (ok, not that poorly disguised), but just because they’re the ones I have. I know plenty of other bloggers who far outstrip these. The point is, they are the type of access figures that are comparable to many big projects and which would happily be reported in impact statements. Now, as I said, I am deliberately comparing things which are not alike – a blog visit is not the same as an article download.

But the thing it set me thinking about was that the figures are in the same sort of league. And blogging is done in spare time, at little or zero cost to the institution. What if we started envisaging projects more in terms of the blog as the core element rather than the dissemination or engagement channel? When a project or an institution is tasked with building an OER repository, we all know what that looks like, and our default mode is to produce content, build a database, recruit a technical team, etc. But what if we said instead, we’re going to employ four bloggers (say), who will write engaging posts about the topics rather than produce academic content? Are those posts better accessed and used than formal OER?

I’m pretty sure someone (Jim Groom? Alan Levine?) has written on this before. And I’m not quite sure I know what I mean by it. But I think there is something in there about rethinking what we mean by OER to be content that is more socially embedded and personal. The impact stats suggest it might be a more successful route if number of eyeballs is our measure.

Dear reader, I blogged it

The EdTechie Martin Weller's Personal Blog - Mon, 08/08/2016 - 17:45

A couple of posts coming up about every blogger’s two favourite subjects: themselves and blogs. Since moving to Reclaim Hosting (slogan: We put the host in hosting) I’ve started creating blogs willy-nilly. Partly this is because I can, and it’s a fun thing to do on a rainy Saturday afternoon when you live on your own and have no friends. But I think it also reflects that I have a number of discrete interests now that qualify for blogs of their own.

It started when Blipfoto, where I posted my photo a day, began having financial difficulties. I didn’t like the thought of losing that three year catalogue of memories. They seem to have sorted themselves out now (and I recently stopped doing the photo a day thing anyway), but I liked creating a backup that I owned and could control.

Then last year I set myself the goal of seeing a current film every week. I decided to continue that this year, but also set up a blog to record it. I don’t exactly review the films – I go on the basis that people know the plot – but rather use it to talk about my personal reaction to a film. It’s quite fun, but I’m well aware it’s not that great. Writing about movies is tough beyond “I liked it/I didn’t like it”.

Last week I created (still messing with the themes) a new blog for the upcoming Cardiff Devils ice hockey season. This will be even harder to write about than films, I predict. It’s very difficult to write about sport without sinking into a quagmire of cliche, sentimentality and melodrama. Plus I’m not really grounded in hockey knowledge.

So why do it? I don’t really promote these other blogs (alright, this post is doing that I confess, but I don’t tweet them often or seek out traffic). I don’t particularly want anything from them – the sports and movies blogosphere is a crowded place, so you’re not going to make a dent there. It is this very difficulty with writing for these last two blogs in particular that is the point of it really. I think it improves my writing overall to stretch myself beyond the usual topic (I mean, I can write about OER until everyone starts crying). Blogging is how I get to grips with a subject. Making myself write about it, in a public forum (even if no-one beyond Jim Groom actually reads it) forces me to think about ways in which I can frame it, respond to it and analyse it, be that a game, a film or anything.

This is exactly what I did with ed tech blogging at the start. Blogging is a key aspect of how I engage with a topic and come to understand it. That is allied to twitter and other forms of social media also, but blogging is at the centre of it. Some of you will have read that piece in the Guardian about how using social media was not serious academic work. Although the writer is mainly sniffy about twitter and instagram, I imagine they lump blogging in there too. My feeling is the opposite – I can’t imagine being a serious (or otherwise) academic without blogging.

Revisiting my own (blog) past

The EdTechie Martin Weller's Personal Blog - Sun, 07/08/2016 - 11:57

Here’s a fun thing to try if you’ve been blogging for a while (Warning: may not actually be fun). Get a random date from when you started blogging until present (eg using this random date generator), find the post nearest that date and revisit it. The date I got was 27th October 2010 (remember those crazy days?). Luckily I had a post on that very date: An unbundled publishing business proposal.
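If you’d rather not rely on a web generator, a few lines of Python will do the same job. A minimal sketch, assuming only a rough start date for your blog:

import random
from datetime import date, timedelta

def random_blog_date(start, end=None):
    """Pick a uniformly random date between start and end (defaults to today)."""
    end = end or date.today()
    return start + timedelta(days=random.randint(0, (end - start).days))

# e.g. blogging since January 2003
print(random_blog_date(date(2003, 1, 1)))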

In revisiting it I set myself four questions:

1) What, if anything, is still relevant?
2) What has changed?
3) Does this reveal anything more generally about my discipline?
4) What is my personal reaction to it?

Answering questions 1) and 2) first, I was proposing an academic publishing model that allowed self publishing, but with a set of services. Authors paid for peer review and copy-editing, and perhaps most importantly, the prestige of it being ‘approved’ by a publisher. But they could then own the rights and distribute freely. I would suggest this is still relevant, and we haven’t really seen a model this ‘unbundled’ take off. Publishers such as Ubiquity offer a range of services, and they publish the book under a CC license, which is pretty close to the model I was suggesting (except I removed the publishing costs and used external services). Not much has changed really, except I think we have seen a gradual development of such models, and wider acceptance. But the traditional academic publishers still dominate and not owning your own work is still the norm for academics.

In terms of what it reveals about ed tech I think it shows that change happens slowly. There are lots of cultural issues around processes such as publishing and dissemination that are deeply embedded. The point I was trying to make was less about new publishing models and more about how we can rethink traditional academic practices by considering what core functions they provide. We publish books because we want to share knowledge, but we use publishers partly to handle the logistics and partly to give legitimacy to the work (it has passed an “is it worthy of publication?” test). Six years on I think we are probably as, if not more, conservative in our approach to publishing in academia.

In terms of my personal reaction, I was pleased it wasn’t too embarrassing (there are lots of such posts in my back catalogue). But I do think I was still a bit enamoured of the whole new shiny digital thing, and it might be a bit more nuanced if I wrote it today. I think I overlooked the value of marketing and the lock big publishers have on many channels. But generally I found the lack of exciting new, viable publishing models emerging in academia in the six years since I wrote it kind of depressing.

Anyway, revisiting your past posts is the equivalent of those episodes in long-running serials that consist of flashbacks. It’s cheap, but sort of fun.

The Open Flip

The EdTechie Martin Weller's Personal Blog - Mon, 01/08/2016 - 15:25

I wrote a piece for the Journal of Learning for Development recently, which expanded on an idea in a blog post, called the Open Flip. The basic idea is quite simple really (I’m a simple kinda guy) – it is that under certain conditions, there is an economic argument for shifting costs from purchasing copyrighted goods to producing openly licensed ones. Open Textbooks are an obvious example. This is a bit ‘no shit Sherlock’, but I think it’s worth exploring as a model in its own right. The paper only starts to do this really.

My argument is that most of the digital economic models, theories and ideologies haven’t really transferred across to education very successfully. This is either because the ideas themselves are rather poor (hello disruption) and don’t really transfer anywhere, or because the nature of education is different from a very straightforward consumer model. Education is structured differently, and is characterised by large grant or budget spends. In these circumstances that money can be reallocated, often leading to savings overall, and openly licensed content that can be adapted and used by all. The mythical win-win.

Apart from not being very good, one of my gripes with digital economic models is that they are often over-applied, way beyond the context where they might be suitable. So I wanted to set out some conditions as to when the open flip might be applicable. My list of conditions is:

  • There is large scale spending on the purchasing of resources that can be practically refocused through single channels. This does not apply to standard consumer purchases, for instance.
  • The resources are largely digital in nature, or production can be cheap. The main component in the purchase price relates not to the physical aspect but to the intellectual property. For instance, the wide range in prices for academic textbooks is not related to any physical characteristics of their production, which varies only by a small degree.
  • The initial production of the content is a task that can be financed. With open source software and many community driven approaches, it has been found that money is not an effective incentive. These community driven, peer based models are more adequately explained by Benkler’s model.
  • Open licensing offers a particular benefit beyond just cost. While cost savings may be the initial driver, it is the advantages offered by openly licensed material that often sustain a movement. For example, the pedagogic advantages of adapting open textbooks.

With these in mind, the open flip model I propose could have applications beyond education – for example, GM crops. I don’t want to go into the whole GM debate here, but beyond some of the irrational fears (“playing God”) I think a very real concern about GM is that large corporations will own the genetic code for useful crops. An open flip model could spend money on developing certain crops (for example, ones that might better survive extreme weather in developing nations) and release that code openly. Producing the seeds then is relatively cheap. The same is true for certain medicines – increasingly drug companies are reluctant to make the investment in drugs that actually cure people, since that’s a one-off purchase. Those that help ameliorate chronic conditions represent a better market. The current model puts the research costs onto Big Pharma, who will then recoup those costs through sales. But for some desired drugs different agencies might contribute to the research to produce an openly licensed drug, which is then cheap to produce. And so on. It won’t be applicable everywhere, but for certain problems, the open flip represents an economic model that utilises the advantages of the internet, digital solutions and open licences. That’s my argument anyway.

Adaptive Learners, Not Adaptive Learning

Some variation of adaptive or personalized learning is rumoured to “disrupt” education in the near future. Adaptive courseware providers have received extensive funding and this emerging marketplace has been referred to as the “holy grail” of education (Jose Ferreira at an EdTech Innovation conference that I hosted in Calgary in 2013). The prospects are tantalizing: each student receiving personal guidance (from software) about what she should learn next and support provided (by the teacher) when warranted. Students, in theory, will learn more effectively and at a pace that matches their knowledge needs, ensuring that everyone masters the main concepts.

The software “learns” from the students and adapts the content to each student. End result? Better learning gains, less time spent on irrelevant content, less time spent on reviewing content that the student already knows, reduced costs, tutor support when needed, and so on. These are important benefits in being able to teach to the back row. While early results are somewhat muted (pdf), universities, foundations, and startups are diving in eagerly to grow the potential of new adaptive/personalized learning approaches.

Today’s technological version of adaptive learning is at least partly an instantiation of Keller’s Personalized System of Instruction. Like the Keller Plan, a weakness of today’s adaptive learning software is the heavy emphasis on content and curriculum. Through ongoing evaluation of learner knowledge levels, the software presents next step or adjacent knowledge that the learner should learn.
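As a purely illustrative sketch (hypothetical names and thresholds, not any vendor’s actual algorithm), that content-selection loop amounts to estimating mastery per concept from responses and then serving the next unmastered concept in a prerequisite-ordered curriculum:

class LearnerModel:
    def __init__(self):
        self.mastery = {}  # concept -> estimated mastery, between 0 and 1

    def update(self, concept, correct, rate=0.3):
        """Nudge the mastery estimate towards 1 after a correct answer, towards 0 otherwise."""
        current = self.mastery.get(concept, 0.0)
        target = 1.0 if correct else 0.0
        self.mastery[concept] = current + rate * (target - current)

def next_concept(model, curriculum, threshold=0.8):
    """Return the first concept in the prerequisite-ordered curriculum not yet mastered."""
    for concept in curriculum:
        if model.mastery.get(concept, 0.0) < threshold:
            return concept
    return None  # everything mastered

model = LearnerModel()
curriculum = ["fractions", "ratios", "proportions"]
model.update("fractions", correct=True)
print(next_concept(model, curriculum))  # still "fractions" until the estimate crosses 0.8

Everything such software “knows” lives in the curriculum and the mastery estimates, which is exactly the content-and-curriculum emphasis in question.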

Content is the least stable and least valuable part of education. Reports continue to emphasize the automated future of work (pdf). The skills needed by 2020 are process attributes and not product skills. Process attributes involve being able to work with others, think creatively, self-regulate, set goals, and solve complex challenges. Product skills, in contrast, involve the ability to do a technical skill or perform routine tasks (anything routine is at risk for automation).

This is where adaptive learning fails today: the future of work is about process attributes whereas the focus of adaptive learning is on product skills and low-level memorizable knowledge. I’ll take it a step further: today’s adaptive software robs learners of the development of the key attributes needed for continual learning – metacognition, goal setting, and self-regulation – because it makes those decisions on behalf of the learner.

Here I’ll turn to a concept that my colleague Dragan Gasevic often emphasizes (we are currently writing a paper on this, right Dragan?!): What we need to do today is create adaptive learners rather than adaptive learning. Our software should develop those attributes of learners that are required to function with ambiguity and complexity. The future of work and life requires creativity and innovation, coupled with integrative thinking and an ability to function in a state of continual flux.

Basically, we have to shift education from focusing mainly on the acquisition of knowledge (the central underpinning of most adaptive learning software today) to the development of learner states of being (affect, emotion, self-regulation, goal setting, and so on). Adaptive learners are central to the future of work and society, whereas adaptive learning is more an attempt to make more efficient a system of learning that is no longer needed.

Doctor of Education: Athabasca University

Athabasca University has the benefit of offering one of the first doctor of education programs, fully online, in North America. The program is cohort-based and accepts 12 students annually. I’ve been teaching in the doctorate program for several years (Advanced Research Methods as well as, occasionally, Teaching & Learning in DE) and supervise 8 (?!) doctoral students currently.

Applications for the fall 2017 start are now being accepted with a January 15, 2017 deadline. Just in case you’re looking to get your doctorate. It really is a top program. Terrific faculty and tremendous students.

How edtech should react to the next Big Thing

The EdTechie Martin Weller's Personal Blog - Thu, 14/07/2016 - 09:32

This week has all been about Pokemon Go. Inevitably there are pieces about Pokemon Go for education. This happens with every technology that makes a popular breakthrough. I’m not going to comment on Pokemon here, I’m sure it’s fun, and it does raise lots of interesting sociological questions about the intersection of Augmented Reality and physical space. Instead though, after a good discussion on Twitter last night, I thought I’d look for more general principles regarding how educational technologists should react when the same thing happens again in three months’ time with some new piece of technology. Off the top of my head, here are my thoughts on what to do when the next “Future of learning” innovation arrives.

Pick the narrative battle carefully – a common reaction (well from me anyway) is to be dismissive. MOOCs, learning analytics, augmented reality – none of these are new. But just saying “it’s not new” doesn’t mean it’s not relevant, and can make you look a bit pompous. Sometimes though there are battles around narrative that are worth fighting. I bemoaned the other day the manner in which MOOCs are now seen as the first generation of online learning. The narrative here is worth defending not just for accuracy, but because the new narrative has implicit intentions: to establish the tech industry as innovators, not education; to promote commercialisation of education as a result; to control the narrative and therefore direction of development.

Extract what is actually interesting for learning – I feel there is a tendency to focus on surface characteristics, and rush off to replicate those. Instead, take a moment to reflect and think what is actually interesting about this development, and why it has people engaged. Then map that onto what we want to do with education (developing a generic “Aims of education” scoring sheet might be a useful thing here). It may be that, despite some surface similarities, once you do this, there isn’t much that is relevant for education. In which case, be prepared to ignore it.

Recognise the opportunity – while it is often the case that the things that make the headlines are not new (museums have been playing with AR for years), they do represent a breakthrough moment. There is no point decrying this, and saying “it should’ve been me (or this project over here)”. This sudden attention means things you might have wanted to do are now possible. Which brings me on to the next point.

Be experimental – the very worst thing to do is simply ape the commercial solution (hello MOOCs). So, just sticking Pokemon in your library might get some people through the door, but it won’t make them engage, and they’ll probably just leave litter in your nice atrium. Use the attention the new buzz has created to do different things that only universities can do.

I’m sure you will have other factors, but whatever they are, taking this higher level approach to every new technology will allow us to engage meaningfully, ignore hype and develop useful ed tech. I’m off now to capture a Jigglypuff in my garden.

Brexit silver linings

The EdTechie Martin Weller's Personal Blog - Mon, 27/06/2016 - 11:09

Ok, this is my attempt to get out of the pit with this one, and find some positives. I don’t suggest all of these things will happen, but they might, as a result of the Brexit decision. They largely arise from the fact that it has been a disaster. Within hours the country was in financial and constitutional crisis, there was a Tour de France of backpedalling from Leave campaigners on their promises, it became apparent there was no plan and Britain had become the laughing stock of the world. By lunchtime after the victory the Brexit dream was dead, making it a contender for the shortest lived revolution in history. It now looks as though Johnson will seek a Norway deal. My guess is this will end up costing as much as we currently give and involve free movement of labour. Which pretty much makes the whole thing a monumental waste of time, but from the crisis we’re in now, a monumental waste of time begins to look like a pretty good deal.

So what might be the positives then? Here’s my attempt at happy face:

Closer EU union – rather than emboldening many exit feelings across Europe, I think other countries will now have a concrete example to look at and be able to say “that was a disaster, maybe this being in Europe thing isn’t so bad”.

The US is saved from Trump – in the US they may have thought there was no way a populist campaign based on lies and targeting immigrants could be successful. Now they know it can, and so can learn how to combat it.

A retreat from racism – there have been reports of an increase in racism as those elements feel emboldened by the result. However, it’s possible that once people actually see it, they will feel repulsed by it and rather than seeing a rise in racism, we are actually witnessing its death rattle. Okay, maybe this one is wishful thinking.

Political engagement of the young – many young people have felt very upset by this result (my own daughter is very despondent), but I think it will be a defining moment for many of them. They have been betrayed by politicians who have blatantly lied and used their futures for their own ambition, so they will need to get engaged themselves.

The last hurrah of newspaper influence – many who voted leave are already feeling tricked by the newspapers that promised a bold new future. In the future, Brexit will become a by-word for being duplicitous with the public and people will be more wary.

Being nice – I have been deeply touched by the nice comments from people around the world, sympathising with us in the UK. As Jo Cox commented, we have more in common than that which divides us, and certainly I have felt this. At the same time of course there have been very painful divides and we will need to work hard to repair these. But to be reminded of decent humanity is a good thing.

The end of Europe as a topic – this has been such a divisive, unnecessary campaign that I don’t think anyone will want to go near the subject of Europe as a political topic for a generation. This will hopefully mean the end of Farage, one of the most despicable political figures in the last 50 years.

Now, I know there is quite a lot of wishful thinking in the above, and there is no need to tell me about all the negative issues, I’m very aware of them. But in the spirit of trying to have a group hug, my challenge is to post a positive possible outcome in the comments. We’ve got the rest of the internet to be angry in.

Yours, in despair

The EdTechie Martin Weller's Personal Blog - Fri, 24/06/2016 - 08:14


The unthinkable has happened and Britain has voted to leave the EU. The nation stared into the abyss last week and I had hoped that would be enough to make it pull back, but no, it seems that 52% of my fellow Brits decided the abyss looked just fine and plunged in. I feel for my European colleagues who work and live in the UK. They must feel very uncertain about their future now in a country that has shown itself to be so aggressively anti-European.

This is a personal post, I’m not going to dissect the campaigns or implications here. I feel lost. It is not just the decision itself, but what it has revealed about the country I live in. Every aspect of the Leave campaign has illustrated that Britain is now a place where you cannot feel any sense of belonging. It demonstrated that being openly racist was now a viable political tactic for the first time since the 1930s. It was anti-intellectual, as experts were widely dismissed in favour of slogans. It was distinctly Kafkaesque, with a rich city banker and an aristocrat talking about fighting the elite, and a Prime Ministerial hopeful proudly boasting “I don’t listen to experts”. It was post-truth, with deliberate lies told repeatedly and no rational argument or model proposed. It was selfish: most young people wanted to Remain, while the over-65s, who will be the least affected, voted to Leave.

As a liberal academic who tries to do evidence-based research with European colleagues, this is pretty much my anti-society. It feels very different to when your side doesn’t win in a general election. I could always understand, even if I didn’t agree with, those choices. But my country has just voted gleefully for hatred and economic ruin. What am I supposed to do with that fact?

There have been many casual Nazi references thrown around in this campaign. But the similarities are horrible – right wing demagogues coming to power by blaming the current financial problems on immigrants and employing hate based tactics. No-one in Britain ever gets to ask again “How did Nazi Germany happen?” In The Drowned and the Saved, Auschwitz survivor Primo Levi talks about letters he received from Germans. One of them seeks forgiveness, saying “Hitler appeared suspect to us, but decidedly the lesser of two evils. That all his beautiful words were falsehood and betrayal we did not understand at the beginning.” Levi replies angrily, highlighting that Hitler’s intentions were always obvious. This sentiment will be expressed by the people who voted Leave in a few years’ time, when the economy has worsened and things have lurched too far to the right even for them. “How could we have known we were being tricked?” they will cry. Yes, you were tricked, but only because you wanted to be. The facts were there but you chose to deliberately ignore them in favour of indulging self pity and rage. I will find it very difficult to forgive anyone who voted Leave for what they have done to this country and to my daughter’s future.

I know I should feel emboldened to fight on for the things I believe in, but at the moment I need to find personal tactics to get through it. This whole process has brought the full, boiling, rage of Brits to the surface and it’s been like living with YouTube commentators for the past few weeks. It has made me feel quite ill, and so I need to find tactics for dealing with the new reality, as the only thing I have at the moment is curling up in a ball in the corner. I’m taking a social media and news break for a while, I’ll walk my dog and try to tell my daughter that things will be ok.

Digital Learning Research Network Conference 2016

As part of the Digital Learning Research Network, we held our first conference at Stanford last year.

The conference focused on making sense of higher education. The discussions and presentations addressed many of the critical challenges faced by learners, educators, administrators, and others. The schedule and archive are available here.

This year, we are hosting the 2nd dLRN conference in downtown Fort Worth, October 21-22. The conference call for papers is now open. I’m interested in knowledge that exists in the gaps between domains. For dLRN15, we wanted to socialize/narrativize the scope of change that we face as a field.

The framework of changes can’t be understood through traditional research methods. The narrative builds the house. The research methods and approaches furnish it. Last year we started building the house. This year we are outfitting it through more traditional research methods. Please consider a submission (short, relatively pain free). Hope to see you in Fort Worth, in October!

We have updated our dLRN research website with the current projects and related partners…in case you’d like an overview of the type of research being conducted and that will be presented at #dLRN16. The eight projects we are working on:

1. Collaborative Reflection Activities Using Conversational Agents
2. Onboarding and Outcomes
3. Mindset and Affect in Statistical Courses
4. Online Readiness Modules and Student Success
5. Personal Learning Graphs
6. Supporting Team-Based Learning in MOOCs
7. Utilizing Datasets to Collaboratively Create Interventions
8. Using Learning Analytics to Design Tools for Supporting Academic Success in Higher Education

Waking up on a Brexit morning

The EdTechie Martin Weller's Personal Blog - Wed, 15/06/2016 - 12:17

In order to get people to think through complex issues, one technique is to get them to envisage waking up the day after it has happened and imagining their feelings. Bizarrely, inexplicably, insanely, it seems that a British exit from Europe might actually be on the cards, so here is my attempt to imagine how I would feel on the morning of the 24th if that did occur. Note it is not an attempt to make reasoned argument (the Leave campaign seems largely post-rational and immune to any factual arguments anyway), but entirely a personal assessment. I think the emotions I would experience are as follows.

Anxiety – most observers seem agreed there will be a short to long term negative impact on the UK economy, with possibly an extra two years of austerity. After eight years of austerity, the thought of a deeper recession fills me with dread. In terms of universities we have just about accommodated the impact of fees, which has hit part-time study particularly hard. More uncertainty and lack of finance is unlikely to be a good thing. In addition a good deal of research funding comes from Europe, and although promises have been made to compensate for this, I feel the same money has been promised several times over, and in the end university research will be at the back of a long queue. I will also feel anxious about social cohesion – if we do enter a long, deep recession as a result of this national self-immolation, it will be difficult not to resent those who brought it upon us for no real gain.

Shame – I did my PhD as part of a European project and have been engaged with numerous research projects over the years. I collaborate and communicate with European colleagues on a regular basis. These interactions have been socially, culturally and intellectually enriching. I will feel a sense of shame that my country has chosen to abandon the European project.

Isolation – if you’re a large nation (the US, China) you don’t need to be part of a larger group. But generally it helps to be part of a collective social, economic, geographical group. Snubbing our local neighbours will make us more isolated in the world, as a nation. As an individual I feel that the campaign has not been one of project fear, but project anger. I’ve been dismayed by the casual racism and small-minded mentality of many in the Leave camp (not all, there are justifiable reasons for being anti-EU). I will now feel trapped on a small island with angry people, grimly clutching their Tesco carrier bags and attempting to make a living by selling Royal Wedding souvenirs to each other. It doesn’t feel like a forward looking, progressive place to be.

Grief – like the end of a marriage there will be a sense of grief following the break-up. I am fully aware of the dubious history of Europe, but I do classify myself as a European. I like being with other Europeans. I appreciate that I am in a privileged position working in a university on joint research projects, so my experience is not the same as everyone else’s. Also I understand that the European Union isn’t devised for my entertainment. But in those European research projects is a microcosm of the grander European Project – people from different countries working on goals of joint interest, with shared values and celebrated differences. Whatever shape our relationships take with Europe following an exit, it will be much more difficult to realise this.

Of course Europe won’t disappear, I can still go on holiday there and attend conferences. But undeniably we will all wake up after a Brexit a lot less European. That is the point of it after all. And that fills me with sadness.

Announcing: aWEAR Conference: Wearables and Learning

Over the past year, I’ve been whining about how wearable technologies will have a bigger impact on how we learn, communicate, and function as a society than mobile devices have had to date. Fitness trackers, smart clothing, VR, heart rate monitors, and other devices hold promising potential in helping understand our learning and our health. They also hold potential for misuse (I don’t know the details behind this, but the connection between affective states and nudges for product purchases is troubling).

Over the past six months, we’ve been working on pulling together a conference to evaluate, highlight, explore, and engage with prominent trends in wearable technologies in the educational process. The aWEAR conference will be held Nov 14-15 at Stanford. The call for participation is now open. Short abstracts, 500 words, are due by July 31, 2016. We are soliciting conceptual, technological, research, and implementation papers. If you have questions or are interested in sponsoring or supporting the conference, please send me an email.

From the site:

The rapid development of mobile phones has contributed to increasingly personal engagement with our technology. Building on the success of mobile, wearables (watches, smart clothing, clinical-grade bands, fitness trackers, VR) are the next generation of technologies offering not only new communication opportunities, but more importantly, new ways to understand ourselves, our health, our learning, and personal and organizational knowledge development.

Wearables hold promise to greatly improve personal learning and the performance of teams and collaborative knowledge building through advanced data collection. For example, predictive models and learner profiles currently use log and clickstream data. Wearables capture a range of physiological and contextual data that can increase the sophistication of those models and improve learner self-awareness, regulation, and performance.

When combined with existing data such as social media and learning management systems, sophisticated awareness of individual and collaborative activity can be obtained. Wearables are developing quickly, including hardware such as fitness trackers, clothing, earbuds and contact lenses, and software, notably for the integration of data sets and analysis.

The 2016 aWEAR conference is the first international wearables in learning and education conference. It will be held at Stanford University and provide researchers and attendees with an overview of how these tools are being developed, deployed, and researched. Attendees will have opportunities to engage with different wearable technologies, explore various data collection practices, and evaluate case studies where wearables have been deployed.
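To make the predictive-model point above concrete, here is a purely illustrative sketch (hypothetical feature names and toy data, not anything specified by the aWEAR project) of clickstream counts and wearable readings feeding a single model:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical weekly features per learner:
# [logins, videos_watched, quiz_attempts] from the LMS clickstream,
# plus [avg_resting_heart_rate, avg_sleep_hours] from a fitness tracker.
X = np.array([
    [12, 8, 5, 62.0, 7.5],
    [ 3, 1, 0, 78.0, 5.0],
    [ 9, 6, 4, 65.0, 6.8],
    [ 1, 0, 1, 80.0, 4.5],
])
y = np.array([1, 0, 1, 0])  # 1 = completed the unit, 0 = did not

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[7, 4, 3, 70.0, 6.0]]))  # predicted completion probability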

What’s in a name?

The EdTechie Martin Weller's Personal Blog - Fri, 27/05/2016 - 09:39

Yesterday I had a bit of a pedant tantrum when, following the announcement about FutureLearn MOOCs offering credit, Leeds Uni tweeted that they were the first Russell Group university to offer credit for online courses. They deleted the tweet after I complained, because online courses aren’t the same as MOOCs, and of course many universities have been offering online courses for credit for years. I fully appreciate it was the demands of twitter and communications that caused this, there wasn’t anything sinister in their intent, and I apologise if I seemed a bit grumpy about it. But it was the latest example of a move to conflate MOOCs and ‘online courses’ that has a number of negative effects. It’s not just historical pedantry that wants this clarification; there are other issues at stake also. Here are the implications of this confusion:

It’s disrespectful – say you’ve been creating innovative online courses for years. Suddenly all of this work is dismissed because MOOCs represent a year zero for online education, and therefore everything you have done previously cannot be counted.

It’s a landgrab – some of this confusion is accidental (as I believe the case was with the Leeds tweet), but in other cases it is more deliberate. By claiming that MOOCs invented online learning they look to be the inheritors of its future.

It underplays the role of universities – this quote from a piece in the Times Higher captures this I think:

“If we have learned nothing else from the move by universities worldwide to be part of the massive open online course (Mooc) movement, it is that education or research development can easily be shared without the need for time and place dependencies.”

The piece has the title “Moocs prove that universities can and should embrace online learning”. I mean, really? Universities have been embracing online learning for at least 15 years. And yet this view makes it seem that we needed those silicon valley types to make us notice the internet. This adds to the landgrab. Similarly FutureLearn’s Simon Nelson stated “our platform means that they can achieve meaningful qualifications whilst still being able to work”. This rather seems to downplay the 40-year history of the OU, which was designed for that very purpose, and once again makes it appear as a MOOC invention.

It limits our options – if MOOCs and online courses are synonymous then MOOCs become the only way of doing online learning. Let’s not limit ourselves again, now that we’re just emerging from the VLE restrictions. You can see some of this in this NYT piece: “After Setbacks, Online Courses Are Rethought”.

This conflation of MOOC and online learning means that MOOC failures become the failure of all online learning, and MOOC future becomes the future of all online learning. It’s more important than that, so we shouldn’t cede the ground to lazy terminology. That’s why I’m pedantic about the use of the term. Or maybe I’m just pedantic.


Appropriate use of MOOCs

The EdTechie Martin Weller's Personal Blog - Thu, 26/05/2016 - 14:42

One of the unfortunate downsides of all the MOOC hype is that it pushed people into opposing camps – you either buy into it all or reject them absolutely. And of course, MOOCs are not going to kill every university, educate the whole world, liberate the masses. But they can be used for some purposes effectively.

Today the OU, FutureLearn and University of Leeds announced a mechanism by which you can gain credit for studying MOOCs and transfer this to count towards a degree. Getting this set up is the type of thing that just takes ages and lots of negotiation (we never cracked it with SocialLearn), so well done to all those involved.

Some will suggest this marks the beginning of the much heralded unbundling of higher education. But I am increasingly inclined to resist big claims, and instead focus on more modest, realisable ones. I don’t think this model will appeal to everyone, and it is unlikely to massively transform the university sector. But what it does allow is more flexibility in the higher education offering. One of the claims the OU has always made for OpenLearn, which is also working on accrediting learning, is that it helps smooth the transition into formal learning. For lots of learners, committing to a three year full time degree is off-putting. This was partly why the OU was invented in the first place. But even signing up for a course complete with fee commitment is a high threshold. MOOCs with a smaller accreditation fee offer a lower step still.

I suggested a while back that MOOCs might offer a first year replacement, thus reducing some of the financial barriers. The OU itself has run programs where students can study with us for two years and then complete on another campus. More of these hybrid models in education are generally a good thing – students come in many different shapes and sizes now, and will have different needs. But loads of students still want the traditional, three-year campus model. And that is the key – stop trying to replace one universal model with another one. It is less about blowing up the core and more about fraying the edges productively.


What does it mean to be human in a digital age?

It has been about 30 months now since I took on the role to lead the LINK Research Lab at UTA. (I have retained a cross appointment with Athabasca University and continue to teach and supervise doctoral students there).

It has taken a few years to get fully up and running – hardly surprising. I’ve heard explanations that a lab takes at least three years to move from creation to research identification to data collection to analysis to publication. This post summarizes some of our current research and other activities in the lab.

We, as a lab, have had a busy few years in terms of events. We’ve hosted numerous conferences and workshops and engaged in (too) many research talks and conference presentations. We’ve also grown significantly – from an early staff base of four people to an expected twenty-three within a few months. Most of these are doctoral or post doctoral students and we have a terrific core of administrative and support staff.

Finding our Identity

In trying to find our identity and focus our efforts, we’ve engaged in numerous activities including book clubs, writing retreats, innovation planning meetings, long slack/email exchanges, and a few testy conversations. We’ve brought in well over 20 established academics and passionate advocates as speakers to help us shape our mission/vision/goals. Members of our team have attended conferences globally, on topics as far ranging as economics, psychology, neuroscience, data science, mindfulness, and education. We’ve engaged with state, national, and international agencies, corporations, as well as the leadership of grant funding agencies and major foundations. Overall, an incredible period of learning as well as deepening existing relationships and building new ones. I love the intersections of knowledge domains. It’s where all the fun stuff happens.

As with many things in life, the most important things aren’t taught. In the past, I’ve owned businesses that have had an employee base of 100+ personnel. There are some lessons that I learned as a business owner that translate well into running a research lab, but with numerous caveats. Running a lab is an entrepreneurial activity. It’s the equivalent of creating a startup. The intent is to identify a key opportunity and then, driven by personal values and passion, meaningfully enact that opportunity through publications, grants, research projects, and collaborative networks. Success, rather than being measured in profits and VC funds, is measured by impact with the proxies being research funds and artifacts (papers, presentations, conferences, workshops). I find it odd when I hear about the need for universities to be more entrepreneurial as the lab culture is essentially a startup environment.

Early stages of establishing a lab are chaotic. Who are we? What do we care about? How do we intersect with the university? With external partners? What are our values? What is the future that we are trying to create through research? Who can we partner with? It took us a long time to identify our key research areas and our over-arching research mandate. We settled on these four areas: new knowledge processes, success for all learners, the future of employment, and new knowledge institutions. While technologies are often touted as equalizers that change the existing power structure by giving everyone a voice, the reality is different. In our society today, a degree is needed to get a job. In the USA, degrees are prohibitively expensive to many learners and the result is a type of poverty lock-in that essentially guarantees growing inequality. While it’s painful to think about, I expect a future of greater racial violence, public protests, and radicalized politicians and religious leaders and institutions. Essentially the economic makeup of our society is one where higher education now prevents, rather than enables, improving one’s lot in life.

What does it mean to be human in a digital age?

Last year, we settled on a defining question: What does it mean to be human in a digital age? So much of the discussion in society today is founded in a fetish to talk about change. The narrative in media is one of “look what’s changing”. Rarely does the discussion move past that surface-level assessment to ask “what are we becoming?”. It’s clear that there is much that is changing today: technology, religious upheaval, radicalization, social/ethnic/gender tensions, climate, and emerging super powers. It is an exciting and a terrifying time. The greatest generation created the most selfish generation. Public debt, failing social and health systems, and an eroding social fabric suggest humanity is entering a conflicted era of both turmoil and promise.

We can better heal than any other generation. We can also better kill, now from the comfort of a console. Globally, fewer people live in poverty than ever before. But income inequality is also approaching historical levels. This inequality will explode as automated technologies provide the wealthiest with a means to use capital without needing to pay for human labour. Technology is becoming a destroyer, not enabler, of jobs. The consequences to society will be enormous, reflective of the “spine of the implicit social contract” being snapped due to economic upheaval. The effects of uncertainty, anxiety, and fear are now being felt politically as reasonably sane electorates turn to solutionism founded in desire rather than reality (the Middle East, Austria, Trump in the US to highlight only a few).

In this milieu of social, technological, and economic transitions, I’m interested in understanding our humanity and what we are becoming. It is more than technology alone. While I often rant about this through the perspective of educational technology, the challenge has a scope that requires thinking integratively and across boundaries. It’s impossible to explore intractable problems meaningfully through many of the traditional research approaches where the emphasis is on reducing to variables and trying to identify interactions. Instead, a complex and connected view of both the problem space and the research space is required. Trying to explore phenomena through single-variable relationships is not going to be effective in planning.

Complex and connected explorations are often seen to be too grandiose. As a result, it takes time for individuals to see the value of integrative, connected, and complex answers to problems that also possess those attributes. Too many researchers are accustomed to working only within their lab or institutions. Coupled with the sound-bite narrative in media, sustained and nuanced exploration of complex social challenges seems almost unattainable. At LINK we’ve been actively trying to distribute research much like content and teaching has become distributed. For example, we have doctoral and post-doctoral students at Stanford, Columbia, and U of Edinburgh. Like teaching, learning, and living, knowledge is also networked and the walls of research need the same thinning that is happening to many classrooms. Learning to think in networks is critical and it takes time, especially for established academics and administrators. What I am most proud of with LINK is the progress we have made in modelling and enacting complex approaches to apprehending complex problems.

In the process of this work, we’ve had many successes, detailed below, but we’ve also encountered failures. I’m comfortable with that. Any attempt to innovate will produce failure. At LINK, we tried creating a grant writing network with faculty identified by deans. That bombed. We’ve put in hundreds of hours writing grants. Many of which were not funded. We were involved in a Texas state liberal arts consortium. That didn’t work so well. We’ve cancelled workshops because they didn’t find the resonance we were expecting. And hosted conferences that didn’t work out so well financially. Each failure though, produced valuable insight in sharpening our focus as a lab. While the first few years were primarily marked by exploration and expansion, we are now narrowing and focusing on those things that are most important to our central emphasis on understanding being human in a digital age.

Grants and Projects

It’s been hectic. And productive. And fun. It has required a growing team of exceptionally talented people – we’ll update bios and images on our site in the near future, but for now I want to emphasize the contributions of many members of LINK. It’s certainly not a solo task. Here’s what we’ve been doing:

1. Digital Learning Research Network. This $1.6m grant (Gates Foundation) best reflects my thinking on knowing at intersections and addressing complex problems through complex and nuanced solutions. Our goal here is to create research teams with R1 and state systems and to identify the most urgent research needs in helping under-represented students succeed.

2. Inspark Education. This $5.2m grant (Gates Foundation) involves multiple partners. LINK is researching the support system and adaptive feedback models required to help students become successful in studying science. The platform and model are the inspiration of the good people at Smart Sparrow and the BEST Network (medical education) in Australia and the Habworlds project at ASU.

3. Intel Education. This grant ($120k annually) funds several post doctoral students and evaluates the effectiveness of adaptive learning as well as the research evidence that supports the algorithms that drive adaptive learning.

4. Language in conflict. This project is being conducted with several universities in Israel and looks at how legacy conflict is reflected in current discourse. The goal is to create a model for discourse that enables boundary crossing. Currently, the pilot involves dialogue in highly contentious settings (Israeli and Palestinian students) and builds dialogue models in order to reduce the impact of legacy dialogue on current understanding. Sadly, I believe this work will have growing relevance in the US as race discourse continues to polarize rather than build shared spaces of understanding and respect.

5. Educational Discourse Research. This NSF grant ($254k) is conducted together with the University of Michigan. The project is concerned with evaluating the current state of discourse research and determining where this research is trending and what is needed to support this community.

6. Big Data: Collaborative Research. This NSF grant ($1.6m), together with CMU, evaluates how different architectures of knowledge spaces impact how individuals interact with one another and build knowledge. We are looking at spaces like Wikipedia, MOOCs, and Stack Overflow. Space drives knowledge production, even (or especially) when that space is digital.

7. aWEAR Project. This project will evaluate the use of wearables and technologies that collect physiological data as learners learn and live life. We’ll provide more information on this soon, in particular a conference that we are organizing at Stanford on this in November.

8. Predictive models for anticipating K-12 challenges. We are working with several school systems in Texas to share data and model challenges related to school violence, drop out, failure, and related emotional and social challenges. This project is still in its early stages, but holds promise in moving the mindset from one of addressing problems after they have occurred to one of creating positive, developmental, and supportive skillsets with learners and teachers.

9. A large initiative at the University of Texas Arlington is the formation of a new department called University Analytics (UA). This department is led by Prof Pete Smith and is a sister organization to LINK. UA will be the central data and learning analytics department at UTA. SIS, LMS, graduate attributes, employment, etc. will be analyzed by UA. The integration between UA and LINK is one of improving the practice-research-back-to-practice pipeline. Collaborations with SAS, Civitas, and other vendors are ongoing and will provide important research opportunities for LINK.

10. Personal Learning/Knowledge Graphs and Learner Profiles. PLeG is about understanding learners and giving them control over their profiles and their learning history. We've made progress on this over the past year, but we are not yet at a point where we can release a "prototype" of PLeG for others to test and engage with (a speculative sketch of a learner-controlled profile appears after this list).

11. Additional projects:
- InterLab – a distributed research lab; we'll announce more about this in a few weeks.
- CIRTL – teaching in STEM disciplines
- Coh-Metrix – improving usability of the language analysis tool
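
To make item 8 above a little more concrete, here is a minimal sketch of the kind of early-warning model such work might involve. Everything here is illustrative: the features, labels, and data are synthetic assumptions, not the Texas schools' data or the project's actual design.

    # Illustrative only: a toy dropout-risk model on synthetic data,
    # not the actual model or data used in the Texas project.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical features per student: absences, prior GPA,
    # discipline incidents, number of school changes.
    X = rng.normal(size=(500, 4))
    # Hypothetical label: 1 = student later dropped out.
    y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    # Rank students by predicted risk so that support can be offered
    # before problems surface, rather than after.
    risk = model.predict_proba(X_test)[:, 1]
    print("AUC:", round(roc_auc_score(y_test, risk), 2))

The modelling itself is the easy part; the shift described above is in using such scores proactively and supportively rather than punitively.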
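
Similarly, for item 10, here is a purely speculative sketch of what a learner-controlled profile record could look like. The field names and structure are illustrative assumptions for the sake of the example, not the PLeG design.

    # Speculative sketch of a learner-controlled profile; not the PLeG spec.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class LearnerProfile:
        learner_id: str
        # Concepts the learner has demonstrated, mapped to a confidence score (0-1).
        mastered_concepts: Dict[str, float] = field(default_factory=dict)
        # Identifiers of courses/activities in the learner's history.
        learning_history: List[str] = field(default_factory=list)
        # Institutions the learner has explicitly authorized to view the profile.
        shared_with: List[str] = field(default_factory=list)

        def grant_access(self, institution: str) -> None:
            # The learner, not the institution, decides who sees the profile.
            if institution not in self.shared_with:
                self.shared_with.append(institution)

    profile = LearnerProfile(learner_id="anon-123")
    profile.mastered_concepts["statistics:regression"] = 0.8
    profile.grant_access("example-university")

The key design point is that access control lives with the learner's record rather than with any single institution's systems.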

Going forward

I know I’ve missed several projects, but at least the above list provides an overview of what we’ve been doing. Our focus going forward is very much on the social and affective attributes of being human in our technological age.

Human history is marked by periods of explosive growth in knowledge: Alexandria, the Academy, the printing press, the scientific method, the industrial revolution, knowledge classification systems, and so on. The rumoured robotics era seems to be at our doorstep. We are the last generation that will be smarter than our technology. Work will be very different in the future. The prospect of mass unemployment due to automation is real. Technology is changing faster than we can evolve individually and faster than we can re-organize socially. Our future lies not in our intelligence but in our being.

But.

Sometimes when I let myself get a bit optimistic, I’m encouraged by the prospect of what can become of humanity when our lives aren’t defined by work. Perhaps this generation of technology will have the interesting effect of making us more human. Perhaps the next explosion of innovation will be a return to art, culture, music. Perhaps a more compassionate, kinder, and peaceful human being will emerge. At minimum, what it means to be human in a digital age has not been set in stone. The stunning scope of change before us provides a rare window to remake what it means to be human. The only approach that I can envision that will help us to understand our humanness in a technological age is one that recognizes nuance, complexity, and connectedness and that attempts to match solution to problem based on the intractability of the phenomena before us.

The Godfather: Gardner Campbell

Gardner Campbell looms large in educational technology. People who have met him in person know what I mean. He is brilliant. Compassionate. Passionate. And a rare visionary. He gives more than he takes in interactions with people. And he is years ahead of where technology deployment currently stands in classrooms and universities.

He is also a quiet innovator. Typically, his ideas are adopted by brasher, attention-seeking, or self-serving individuals. Go behind the bravado and you'll clearly see the Godfather: Gardner Campbell.

Gardner was an originator of what eventually became the DIY/edupunk movement. Unfortunately, his influence is rarely acknowledged.

He is also the vision behind personal domains for learners. I recall a presentation that Gardner did about 6 or 7 years ago where he talked about the idea of a cpanel for each student. Again, his vision has been appropriated by others with greater self-promotion instincts. Behind the scenes, however, you’ll see him as the intellectual originator.

Several years ago, when Gardner took on a new role at VCU, he was rightly applauded in a press release:

Gardner’s exceptional background in innovative teaching and learning strategies will ensure that the critical work of University College in preparing VCU students to succeed in their academic endeavors will continue and advance…Gardner has also been an acknowledged leader in the theory and practice of online teaching and education innovation in the digital age

And small wonder that VCU holds him in such high regard. Have a look at this talk:

Recently I heard some unsettling news about position changes at VCU relating to Gardner’s work. In true higher education fashion, very little information is forthcoming. If anyone has updates to share, anonymous comments are accepted on this post.

There are not many true innovators in our field. There are many who adopt the ideas of others and popularize them. But there are only a few genuinely original people doing important and critically consequential work: Ben Werdmuller, Audrey Watters, Stephen Downes, and Mike Caulfield. Gardner is part of this small group of true innovators. It is upsetting that the people who do the most important work – rather than those with the loudest and most self-promotional voices – are often not acknowledged. Does a system like VCU lack awareness of the depth and scope of change in the higher education sector? Is its appetite for change and innovation mainly a surface-level media narrative?

Leadership in universities has a responsibility to research and explore innovation. If we don't do it, we lose the narrative to consulting and VC firms. If we don't treat the university as an object of research, an increasingly unknown phenomenon that requires structured exploration, we essentially give up our ability to contribute to and control our fate. Instead of the best and brightest shaping our identity, the best marketers and most colourful personalities will shape it. We need to ensure that the true originators are recognized and promoted so that when narrow and short-sighted leaders make decisions, we can at least point them to those who are capable of lighting a path.

Thanks for your work and for being who you are Gardner.

The role of policy in open ed

The EdTechie Martin Weller's Personal Blog - Mon, 16/05/2016 - 20:14

I was invited to give a talk at the Department for Business, Innovation and Skills for a meeting organised by ALT, on the role of policy in open education. I looked at OER policies at the institutional, regional and national levels, as well as open access policies. I argued that open policies are a good example of how policy can influence practice, and also highlighted some of the issues. But the same applies to other areas you might want to consider. The Open Flip, I argued, will be significant, and policy offers us a means of reallocating resources and encouraging new models, such as the Open Library of Humanities.

Putting these slides together was a good example of what I was talking about in my last post. Creating a new talk forced me to pull together the different strands on open policy that I have gathered over the past year. The slidedeck is below:


The Future of Learning: Digital, Distributed, Data-Driven

Yesterday as I was traveling (with free wifi from the good folks at Norwegian Air, I might add), I caught this tweet from Jim Groom:

@dkernohan @cogdog @mweller A worthwhile think piece for sure, almost up there with "China is My Analytics Co-Pilot"

— Jim Groom (@jimgroom) May 11, 2016

The comment was in response to my previous post where I detailed my interest in understanding how learning analytics were progressing in Chinese education. My first internal response was going to be something snarky and generally defensive. We all build in different ways and toward different visions. It was upsetting to have an area of research interest be ridiculed. Cause I’m a baby like that. But I am more interested in learning than in defending myself and my interests. And I’m always willing to listen to the critique and insight that smart people have to offer. This comment stayed with me as I finalized my talk in Trondheim.

What is our obligation as educators and as researchers to explore research interests and knowledge spaces? What is our obligation to pursue questions about unsavoury topics that we disagree with or even find unethical?

Years ago, I had a long chat with Gardner Campbell, one of the smartest people in the edtech space, about the role of data and analytics. We both felt that analytics has a significant downside, one that can strip human agency and mechanize the learning experience. Where we differed was in my willingness to engage with the dark side. I’ve had similar conversations with Stephen Downes about change in education.

My view is that change happens on multiple strands. Some change from the outside. Some change from the inside. Some try to redirect the movement of a system; others try to create a new system altogether. My accommodating, Canadian, middle-child sentiment drives my belief that I can contribute by being involved in and helping to direct change as a researcher. As such, I feel learning analytics can play a role in education and that, regardless of what the naysayers say, analytics will continue to grow in influence. I can contribute by engaging with the data-centric aspects of education rather than ignoring them, and then attempting to influence analytics use and adoption so that it reflects the values that are important for learners and society.

Then, during the conference today, I heard numerous mentions of people like Ken Robinson and the narrative of creativity. Other speaking-circuit voices like Sugata Mitra were frequently raised as well. This led to reflection about how change happens and why many of the best ideas don't gain traction and don't make a systemic-level impact. We know the names: Vygotsky, Freire, Illich, Papert, and so on. We know the ideas. We know the vision of networks, of openness, of equity, and of a restructured system of learning that begins with learning and the learner rather than content and testing.

But why doesn’t the positive change happen?

The reason, I believe, is the lack of systems/network-level and integrative thinking that reflects both the passion of advocates AND the reality of how systems and networks function. It's not enough to stand and yell "creativity!" or "why don't we have five hours of dance each week like we have five hours of math?". Ideas that change things require an integrative awareness of systems, of multiple players, and of the motivations of different agents. We also need to be involved in the power-shaping networks that influence how education systems are structured, even when we don't like all of the players in the network.

I’m worried that those who have the greatest passion for an equitable world and a just society are not involved in the conversations that are shaping the future of learning. I continue to hear about the great unbundling of education. My fear is the re-bundling where new power brokers enter the education system with a mandate of profit, not quality of life.

We must be integrative thinkers, integrative doers. I’m interested in working and thinking with people who share my values, even when we have different visions of how to realize those values.

Slides from my talk today are below:

Slides: Future of Learning: Digital, distributed, and data-driven (from gsiemens)

The new or reused keynote dilemma

The EdTechie Martin Weller's Personal Blog - Thu, 12/05/2016 - 10:51

James Clay wrote a post about ‘the half life of a keynote‘ recently in which he pondered how long you should keep giving the same talk for. I know people who always create a new talk, and people who give the same one for almost their entire careers. This year I decided I would create new talks for every keynote, so it’s something I’ve been thinking about. I think the initial reaction is that creating new talks is better. But now I’m through my new talk phase, I’m less convinced. To add to James’s conversation then, here are my pros and cons.

The advantages of giving the same talk multiple times are:

You get better. As anyone who has seen me talk will attest, I’m not a great public speaker. Giving the same talk allows me to tighten it up, as the first version is often a bit rambling. You take bits out, strengthen other points, know which jokes work, etc. It’s a bit like a comedian going on tour, if you only give new talks each time then it is always the equivalent of the pre-tour show when material is being trialled, compared with the 15th night when it is finely honed.

People want that talk. I have given versions of my digital scholarship talk since 2011. I keep retiring it and then people ask “can you come and give that talk I saw, to my team”. It feels a bit like that group who had one hit in the 70s and every gig they play, people just want to hear the hit and not their electro jazz fusion material.

It saves time. This is not just me being lazy, but is a real consideration for people who have a substantive job. Creating a new talk can take a day, giving the talk takes at least a day out of your normal work, and if you don’t want to be rambling you will practice and refine the talk beforehand, which might be another day. That’s at least 3 days per talk. Most talks I give are unpaid or there is a small honorarium, but the OU doesn’t get anything. If I give 5-10 talks a year that is 15-30 days out of my job. Now there are benefits (see below) so it’s not all lost time, but even so, that is a sizeable chunk of workload. If you reuse talks then you can cut that amount down by half probably.

I don’t really have that much to say. I mean, come on, one or two decent ideas every couple of years is enough surely?

The advantages of giving new talks are:

It really helps pull together your thinking. Often you have lots of ideas and content but it’s not until you create a talk for others that it helps shape your thoughts. There is real scholarly benefit in creating a new talk.

It makes you think about the audience more. There is a danger when giving the same talk repeatedly (usually modified) that you don’t tailor it sufficiently to the audience.

It keeps you fresh. The flip side of the advantage given above of getting sharper with familiar material is that you can also be complacent and not really engaged with it.

It avoids repetition and gives you online content. Prior to the internet you probably could get away with giving the same talk forever. But now you share content on blogs and slideshare, or it is livestreamed. So people may have seen it in some form already before you even get there. Creating new talks helps feed the online beast, if that is important to you.

I’ve created new talks where I’ve been mildly incoherent, and given old talks where they have not really been appropriate, so there are merits to both. I usually come down in the middle and adapt and remix material from previous talks, but I’m finding this year of refreshing my presentation stock very useful and quite challenging.

