Can we have an #IndieWeb webmentions credentialing system? #OpenBadges

Many have come to know me as an inquisitive skeptic when it comes to badging. Yes, I believe we need to remediate assessment. Yet I know that introducing a measure changes learning, and I have always felt we used a huge system for what boils down to a dichotomous measure. You either earn a badge or you don’t. It is a checkbox with a bit of metadata.

Over the past few weeks I have discussed with Aaron Parecki and Tantek Çelik in the #IndieWeb channels whether we could use webmentions as a credentialing tool. When you think about it, an #openbadges credential boils down to two permalinks: the task, with criteria and evidence; and the learner artifact, with evidence of learning.

In this example the teacher would set the target URL to a task page that has markup for criteria and evidence, such as https://edu305.jgregorymcverry.com/moduleone.html.

They then enter the URL of the student work.

You add a badge image and indicate if it is awarded. The webmention would be sent to the student’s post and to the class website. The h-card of the teacher and the organization would act as an endorsement.
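
To make that concrete, here is a minimal sketch in Python of what the sending side could look like. The award-post and target URLs are hypothetical, and a real sender would also check the target’s HTML for rel="webmention" links, not just the HTTP Link header.

```python
import requests

def discover_endpoint(target):
    """Find the target's webmention endpoint via the HTTP Link header."""
    resp = requests.get(target, timeout=10)
    link = resp.links.get("webmention")  # requests parses Link: <...>; rel="webmention"
    return link["url"] if link else None

def send_webmention(source, target):
    """POST the two permalinks to the target's webmention endpoint."""
    endpoint = discover_endpoint(target)
    if endpoint is None:
        return None
    return requests.post(endpoint, data={"source": source, "target": target}, timeout=10)

# The award post (source) links to both permalinks, so we notify each target.
award_post = "https://edu305.jgregorymcverry.com/awards/module-one"  # hypothetical
for target in [
    "https://student.example/module-one-artifact",   # hypothetical student post
    "https://edu305.jgregorymcverry.com/class",      # hypothetical class website
]:
    send_webmention(award_post, target)
```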

Another Option for a Credential

In this example, which is a bit easier technically since only one website gets parsed, the teacher adds the URL to the learner evidence and puts the criteria and evidence for the credential in the body of the post.
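
Parsing that single post might look something like the sketch below, assuming the mf2py Python library and a post marked up as an h-entry. The URL and the "evidence" property are hypothetical; the community would still need to settle on vocabulary for criteria and evidence.

```python
import mf2py

# Parse the teacher's credential post for its microformats (hypothetical URL).
parsed = mf2py.parse(url="https://edu305.jgregorymcverry.com/credential-post")

for item in parsed["items"]:
    if "h-entry" in item["type"]:
        props = item["properties"]
        print("Credential:", props.get("name"))
        print("Criteria and evidence:", props.get("content"))
        print("Learner evidence:", props.get("evidence"))  # hypothetical property
```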

Then some kind of ledger or validator like my Bridgy page (above) or webmention.rocks (below) would create a permanent record outside of both the class website and the source of learner evidence.

What do people think? Can we do it? Should a badge just be two permalinks and not a gigantic protocol with quasi-open stewardship of the standards?

Big Impact with Big Data: Towards a New Research Design

This is part one of a two-part series on my opening keynote at the Big Data Smart Technology Forum held at Tianjin University of Technology on October 13.

I come to you today with a challenge. We have a problem in educational research and I hope we do not recreate inequities of the past in our era of Big Data.

For example, an examination of results from the National Assessment of Educational Progress (NAEP), a common Big Data source, provides a bleak picture of educational progress in the United States.

First, in the United States, centuries of systemic racism have left massive scars on our great nation. As you can see, your chances of being born into poverty often have more to do with ethnicity and race than any other factor.

These disparities translate into the classroom. As you can see, White and Asian students consistently outscore their peers on the NAEP assessment. In each of the three years highlighted, significant differences remain.

The United States Federal Government throws a lot of money at this problem. Last year the Department of Education spent over 15 billion dollars.

The Institute of Education Sciences alone invested over 600 million dollars in research.

Yet if you look at trends in NAEP scores, they remain virtually flat. We see some gains in early reading scores, but these fizzle in the upper grades, and poverty and race still play a major role in explaining variance in scores.

I ask you, “Why?” Many of you here today seek degrees in management statistics but you do not need a PhD to understand that 15 billion dollars and no change in progress is a bad Return on Investment.

We need a new paradigm for educational research and Big Data analytics overall. For too long our higher education system has reinforced inequities and concentrated wealth amongst the elite rather than the people. We chase citations rather than helping communities.

I ask each of you in your respective fields to think back to how many studies over the last forty years have been truly influential. Research that has changed lives? I know many of you can probably count these studies on one hand. Again, a bad ROI. So where does all this money go?

It doesn’t make it into the communities or our classrooms. Instead we create a false scarcity of intellectual capital. University professors, usually themselves from privileged backgrounds, apply for grants, train new PhD students, do research, and publish in journals that few folks will ever read. Our elitist economy runs on a currency of citation counts. The kicker… the public must pay exorbitant fees to publishers in order to read the research that their tax dollars already paid for. It is double taxation, cloistering money in the hands of the few while the many suffer.

I challenge you today to move our focus away from citations and into the community. In order to have a big impact with Big Data we need a new research paradigm. I turn to the field of community engaged scholarship, which grew out of the nursing field.

This methodological approach suggests we merge our research, teaching, and service into a common direction of helping the communities in which we live. Our research should focus on people, not participants. They must be involved in the work, not simply counted in a sample size. Significance should have as much to do with community impact as it does with p-values.


Yet in this era of Big Data I take community engaged scholarship a step further. Today I call us to an emerging field of digitally engaged scholarship. I define this as an interdisciplinary approach of design-based research using distributed talent and networked technologies to open source our knowledge creation for the greater good of both local and global communities.

Interdisciplinary Research

First, digitally engaged scholarship must take an interdisciplinary approach. The problems the world faces are too big for one person, or even one nation, to solve alone. Pollution, climate change, education. We must all work together.

We also need specialists when it comes to Big Data. If you try to master all of the fields necessary in Big Data you will be a master of none. In this room we have folks from public health, management, and statistics. Let’s work together.

We will need front-end and back-end engineers to help scrape, collect, and analyze data. We need management statistics teams well versed in Python, R, Hadoop, and libraries that someone in this room may soon develop.
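
As a toy illustration of the kind of analysis such a team might run, here is a short Python sketch using pandas. The file name and columns are hypothetical stand-ins for a real extract from a source like NAEP.

```python
import pandas as pd

# Hypothetical extract with columns: year, group, poverty, score.
scores = pd.read_csv("naep_scores.csv")

# Surface mean score gaps by demographic group across years.
gaps = scores.groupby(["year", "group"])["score"].mean().unstack()
print(gaps)

# How much of the story does poverty tell on its own?
print(scores.groupby("poverty")["score"].describe())
```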

You can’t do it alone.

Formative Design-Based Research

We can trace our empirical designs and scientific inquiry back to Kantian humanity. Our commitment to objectivity has guided science for centuries. I say it’s time to embrace our subjectivity. Have clear goals rather than just questions. Louis Pasteur wasn’t just searching for answers when he developed methods to ensure food safety. He had a problem to fix. Do the same.

Digitally engaged scholarship requires the use of Design-Based Research techniques. These methodologies go by many names, but I draw on the work of Reinking and Bradley and their understanding of Formative Design.

Overall, the goal of research should not be fidelity to models but forkability to local contexts. We need interventions centered in the community that utilize inclusive methodologies that allow for iteration.


Distributed Knowledge

I also challenge you to rethink our definition of memory and cognition. In our Western traditions we have placed great emphasis on the self. Yet what if knowledge does not reside inside my brain? What if my memories are situated in the interstices between us, our environment, and our communities?

As the web explodes in size, our external knowledge storage tools grow in size and complexity. Each of you has more computing power in your pocket than humankind first took to the moon. As China prepares for the next moon landing I challenge us to rely on the networks that distribute knowledge across the globe.

Networked Technologies

It took the book 800 years to spread across the globe. Moveable print emerged in both China and Europe. Still, in almost a millennium, literacy reached only a fraction of the world’s population. In contrast, the web has spread to a billion people in just under thirty years. A billion people, and in the next decade another billion will come online. No technology for reading and writing has spread with such speed.

We must take advantage of this opportunity while also protecting the way we read, write, and participate from emerging threats. Large multinational corporations suck up our data and sell it to the highest bidder. If we are not careful a new digital colonialism will emerge that will repeat the errors of our past. We must fight for a future in the world of Big Data where we empower people through privacy. You should control your data rather than handing it over to the Googles and Facebooks of the world.

Improving our Communities

Let’s use Big Data to build a better tomorrow, not just focus on the bottom line. Like community engaged scholars, digitally engaged scholars serve the greater good. Let us use Big Data not simply to understand the past but to light a beacon on where we should head next.

Now let’s look at a few examples of where Big Data can be applied in educational settings, and then you folks, as experts in your respective fields, can help me help the world. The second half of the talk is here.

Educators Teaching Educators: #EDcampCT

Image: the #edcampct sign-up board. All images taken by Tyler Varsell; license not specified.

On Friday, August 18th, close to a hundred teachers descended on the Ethel Walker School for a day of learning at #edcampct. For those who do not know, edcamps started as unconferences with no vendors and no proposal submissions. Instead, learning occurs on demand.

When you attend an unconference, teachers propose sessions by grabbing a sharpie and an index card. Teachers can either propose a session they want to attend or offer up their knowledge. You then, as a participant, “vote with your feet.” You attend the sessions that matter the most to your needs.

This year’s #edcampct, now in its seventh year, was phenomenal. The amazing Sara Edson led a team of volunteers who pulled off a wonderful event. The staff at the Ethel Walker School (especially the cooks) made us feel right at home. Ethel Walker has quickly become one of the premier institutions for exploring student-centered learning and rethinking professional development.

Exploring Meaning Making in New Spaces

The first session I attended was on badges and digital credentials. Over twenty people sat in a room and waited for a presenter. No one stood up. The session was requested by a participant rather than proposed by a presenter. So I stood up and offered what I knew of badges. Jeff Gilberto, who is exploring badges and professional development, also jumped in to help. I first hopped onto Slack and asked for materials. Doug Belshaw sent me a quick slide deck. We went over terminology and then I demonstrated the pathways for academic blogging I use in my writing classes. I used badgr.io to create a badge for folks and demonstrated how to issue a “Learning about Badges” badge.

The next session I attended was one I proposed called “Hacking Fake News.” We first defined fake news using a turn and talk and then discussed the difference between fake news and perspectives. I suggested that very little news is fake and that what we must focus on instead is understanding how perspectives shade truth. I also described my research demonstrating that teaching website credibility through checklists doesn’t work. We then discussed how having students create their own fake news can serve as a production-based tool for learning. I then demonstrated how to install and use Mozilla’s X-Ray Goggles. Every participant put themselves on the front page of their local paper.

Next I described strategies for hacking sources for credibility. We discussed markers that we can play with, such as author expertise and source credibility. We messed with making some authors more and other authors less credible. Then it was time for some really important learning. We took President Donald Trump’s response to Charlottesville and rewrote it to reflect what he should have said instead of providing cover to neo-nazis and hate groups.

At the end of the day I was pleased when a teacher got up and said my session made her rethink how she teaches using the web. A history teacher got up and explained, “The fake news session taught not checklists but making students create their own fake news (Mozilla X-Ray Goggles).”

Rethinking Assessment

Many in higher education see every problem as a nail and testing as the only hammer in our kit. Yet we stand at a time where we can re-mediate assessment using new technologies and old definitions of what it means to learn.

I serve on the Tech Fluency (TF) affinity group at Southern Connecticut State University. TF is a tier one competency in our Liberal Education Program (our general ed program but with more hoops and loftier goals). The goal is to ensure all students have the minimum tech skills they will require after college. In our current (permanent) budget crisis we have been asked to review the effectiveness of the LEP.

We developed a series of rubrics instructors could use in their classrooms. The newly appointed LEP assessment committee decided our approach was “too subjective.” They suggested a common task (filling out a spreadsheet was their recommendation) that students could complete in a controlled and supervised environment.

This set me off. I responded (probably with not enough delay and too much acerbic snark) with some of the following comments.

Objective Assessments are a Hoax


Image: “Subjective” flickr photo by EVRT Studio, shared under a Creative Commons (BY-NC-ND) license.

I am not anti-testing. Most of my research revolves around item design and testing. Yet I think what set me off was the belief that some assessments are objective. Those who rely on standard measures ignore the bias inherent in statistical models and in deciding “what counts as learning.” They look at rubric-scored items as being “too subjective” yet ignore the error variance, the noise, in their own models. I say bring the noise. It is in the outliers where we see interesting methods and learning.

Technology Assessments Lack Ecological Validity

We were asked to create a shared assessment. It’s just that the task sounds like 1996 wants its Computer Applications textbook back. We have to move beyond “these kids don’t know spreadsheets” as the only critique in our self-assessment. There is so much more in the competencies beyond the basics of Excel.

The idea that you do anything in tech under supervision, sitting alone in a crowded room, is the wrong approach to assessment. What we are calling cheating will be required collaboration for anyone doing anything with tech in any field.

This is why I think a digital credentialing platform is the correct path forward. If you begin by mapping pathways and rubrics similar to ours, or better yet even more fine-grained criteria, we could develop a system where faculty still have the freedom to design (hopefully co-design) a pathway for students, as in the sketch below.
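
Here is a minimal sketch, in Python and with entirely hypothetical names, of what such a fine-grained pathway might look like as data that faculty could co-design on top of:

```python
from dataclasses import dataclass, field

@dataclass
class Criterion:
    description: str
    evidence_url: str = ""  # permalink to learner evidence once submitted

@dataclass
class Competency:
    name: str
    criteria: list[Criterion] = field(default_factory=list)

@dataclass
class Pathway:
    name: str
    competencies: list[Competency] = field(default_factory=list)

    def complete(self) -> bool:
        """A pathway is met when every criterion has linked evidence."""
        return all(
            c.evidence_url
            for comp in self.competencies
            for c in comp.criteria
        )

# A hypothetical pathway faculty might design for our TF competency.
tech_fluency = Pathway("Tech Fluency", [
    Competency("Data literacy", [Criterion("Build and interpret a spreadsheet model")]),
    Competency("Networked writing", [Criterion("Publish an academic blog post")]),
])
```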

Students Should Drive Assessment

I think we should involve the students as stakeholders to a much higher degree in any assessment. We do this by helping students tell their story. This also shifts responsibility onto them to build the data trails we need.

Purpose of Assessment

I also took issue with the purpose of our assessment. If the goal is to evaluate the effectiveness of our LEP program, then why is our gut reaction to assess each student individually?

It’s not that the approach of a learning artifact (the spreadsheet assessment we were asked to develop) is the wrong path in terms of overall measurement. Let us as faculty assess the individual and let machines surface the patterns at the class, school, and system level.

Technological Solutions

I think the Academy, long term, should push off most system-wide assessment onto machines. It’s far more efficient, and machine scoring correlates highly with well-trained human raters.

If we could score a batch with high inter-rater reliability once, and laser-hone the criteria, much of this could be machine scored and credentialed with minimal faculty involvement, as the sketch below illustrates. Faculty could build whatever assessments they wanted on top of the task. It really wouldn’t matter to LEP assessment.

This isn’t fantasy. It’s usually $7-10 a user (for the scoring).
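
The reliability check I have in mind could be as simple as the Python sketch below, using scikit-learn’s cohen_kappa_score on one batch scored by two trained raters. The rubric scores here are made up for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Rubric scores from two trained faculty raters on the same batch (made up).
rater_a = [3, 2, 4, 4, 1, 3, 2, 4]
rater_b = [3, 2, 4, 3, 1, 3, 2, 4]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # look for strong agreement before automating
```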

Moving Forward

All measurement and all grades are subjective. Yet I think we have a chance to rethink the academy by empowering learners through assessment. It’s time to kill the Carnegie credit hour.

In fact, across the state of Connecticut we have been discussing how to seamlessly transfer students between seventeen community colleges and four universities. Plus, students would like to receive credit for work that demonstrates competencies in our Tier One classes. If we really wanted to think about transfer articulation we would forget about tracking credit hours and think of each student as an API. If we had the matching criteria, or even a crosswalk of offerings, it would be a matter of plugging and playing the assertions built into our credentialing platform (see the sketch below).

This would also allow students to apply previous work they completed in high school or outside of school and get credit for meeting the technology fluency competencies. We can use the new endorsement feature in the Open Badges 2.0 specification so local schools, computer clubs, or even boot camps could vouch for the independence of student work. The learning analytics can help us with our programmatic review and with tracking student knowledge growth.
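
Here is a loose Python sketch of that “student as an API” idea: matching the Open Badges-style assertions a student carries against a crosswalk of our competencies. Every badge permalink and competency name below is hypothetical.

```python
# Assertions the student carries, e.g., pulled from their badge backpack (made up).
student_assertions = [
    {"badge": "https://highschool.example/badges/spreadsheet-basics", "issuedOn": "2017-06-01"},
    {"badge": "https://codeclub.example/badges/web-publishing", "issuedOn": "2017-03-15"},
]

# Hypothetical crosswalk mapping external badge permalinks to LEP competencies.
crosswalk = {
    "https://highschool.example/badges/spreadsheet-basics": "TF: Data Literacy",
    "https://codeclub.example/badges/web-publishing": "TF: Networked Writing",
}

earned = {crosswalk[a["badge"]] for a in student_assertions if a["badge"] in crosswalk}
print("Competencies met via transfer:", earned)
```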

Some Examples

As examples, I threw together a quick prototype that could be used for technology fluency and one that could be used in our writing-intensive classes (I see very little daylight between writing and technology).