INTERTEXTrEVOLUTION

Make.Hack.Play.Learn

#QuestionTheWeb

Published by J. Gregory McVerry, Sandra Schamroth Abrams, and Troy Hicks


#QuestionTheWeb is a two-phase intervention that recognizes argumentation as a discourse. We first teach students to own their truth from their own domain and then to understand how perspectives shape truth through the use of biased, avatar-based read-alouds.

1.0 Significance

1.1 Statement of the Problem and Rationale for the Proposed Research

Almost twenty percent of the 21st century has passed, and authors of national reports continue to bemoan youths’ inability to read and write critically, underscoring the continued writing crisis in this country (Graham & Perin, 2007). The most recent NAEP report card for writing (NCES, 2011) reveals that only 27% of eighth graders performed at or above a proficient level, and only 3% performed at the advanced level. It comes as no surprise, therefore, when national performance statistics indicate that adolescents consistently “still do not read or write well enough to meet grade-level demands” (Graham & Hebert, 2010, p. 3). Furthermore, adolescents are not engaging in enough academic writing, with many students typically composing responses less than one paragraph in length (Wilcox & Jeffery, 2014).

Relatedly, for almost two decades, literacy researchers have noted the imminent need to teach critical evaluation skills. Studies have demonstrated that a generation of students could not or would not examine multiple sources beyond surface-level cues (Bråten, Strømsø, & Britt, 2015; Coiro & Dobler, 2007; Ey & Cupit, 2011; Goldman, Wiley, & Graesser, 2005; Kulikowich, 2008; Kuiper & Volman, 2008; Lawless & Schrader, 2008; Leu, et al., 2007; Mayer, 2008; Metzger et al., 2015; Rouet, 2006).

This Goal Two proposal, which centers on the #QuestionTheWeb instructional model, not only offers pathways for middle-school students to think critically, to write cogent and convincing arguments, and to be aware of bias, but also helps students hone collaborative, digital literacy practices.

To address these problems, we have developed an instructional model that focuses on students creating their own space on the web. More specifically, students will blog and maintain websites while engaging in writing, peer review, and reflective practice. This approach will (a) target national concerns about the deficient writing abilities of US youth, (b) support students’ development of necessary skills for source evaluation, and thus (c) help students hone their digital literacies so that they become critical thinkers, readers, and writers on the web.

As such, our #QuestionTheWeb: Supporting Argumentative Writing and Critical Evaluation intervention (QTW) is inspired by approaches to learning in which all learners complete activities from their own blogs, and student responses and class feeds can be aggregated and assessed. This approach has been used successfully in higher education since 2008 (Downes & Siemens, 2009) and modified for the K-12 level (McVerry et al., 2015).
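The proposal does not specify an implementation for aggregating student blogs into class feeds; as one illustrative sketch (all function names, feed formats, and domains here are our own assumptions, not part of the intervention design), each student's RSS feed could be parsed and merged into a single reverse-chronological class feed:

```python
import xml.etree.ElementTree as ET

def parse_feed(rss_xml):
    """Extract entries from a minimal RSS 2.0 string."""
    root = ET.fromstring(rss_xml)
    entries = []
    for item in root.iter("item"):
        entries.append({
            "title": item.findtext("title", ""),
            "link": item.findtext("link", ""),
            "date": item.findtext("pubDate", ""),
        })
    return entries

def build_class_feed(student_feeds):
    """Merge every student's entries into one class feed, newest first.

    `student_feeds` maps a student's domain to their RSS XML. For this
    sketch we assume dates are ISO 8601 strings, so a plain string sort
    orders them chronologically.
    """
    merged = []
    for domain, rss_xml in student_feeds.items():
        for entry in parse_feed(rss_xml):
            entry["author"] = domain  # attribute each post to its domain
            merged.append(entry)
    return sorted(merged, key=lambda e: e["date"], reverse=True)
```

A teacher-facing dashboard could then read the merged list to review and assess responses across the whole class without leaving students' own domains.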

Our aim is to conduct research that will lead to an intervention that can be effectively implemented and thus useful to school leaders, teachers, and policy makers. The instructional model will help school leaders define a vision of what could be accomplished with innovative approaches to an important challenge: how to support struggling adolescent learners who cannot write well while also honing critical digital literacy practices. Teachers also will have a testable model for teaching in the content areas using web-based technologies. For policy makers, it will provide new directions for thinking about how best to serve our adolescent youth in all contexts, especially economically challenged districts and students who typically struggle with reading and writing. Specifically, this Goal Two research project seeks to answer three questions:

  • What effect does writing from their own domain have on participants’ self-efficacy?
  • What effect does the QTW have on performance on the Analytic Writing Continuum-Source-Based Argument over time when controlling for prior internet use and writing ability?
  • What effect does the QTW have on performance on a measure of critical evaluation over time when controlling for prior internet use and writing ability?

At the conclusion of the study we will have the data and fully developed materials to investigate the effect sizes of our model when we scale the grant to a Goal Five measurement grant.

1.2 The Proposed Intervention: #QuestionTheWeb: Supporting Argumentative Writing and Critical Evaluation Skills

In Phase I of QTW, students will consider how their identity and the perspectives of others shape dominant narratives, or “truths,” online. They will practice sourcing skills, meaning the ability to evaluate and integrate multiple documents into one's understanding, and writing with claims and evidence when exploring the inquiry task, “What effect does social media have on me and society?” Students will focus on markers of credibility, examine an author's perspective, and identify how claims and evidence are used in mentor texts. Students will annotate and reflect on media across many different modes. They will simultaneously design “spoof” websites and a personal website to translate learning into text.

In Phase II of the study, students will encounter a variety of controversial topics that invite argumentative writing in the domain of social studies, history, or contemporary politics, depending on the standards deployed in each participating LEA.

Students will also encounter avatars that deliver biased read-alouds of sources. The avatars will read and annotate two sources they agree with and one source they disagree with. For each topic, the avatars will interject links into student feeds. These links will point to researcher-created websites hosted on our servers. A video file overlaid on each website will show an avatar “reading” and “mousing over” the site. How the avatars interact with the claims and evidence presented by the author will be determined by their assigned perspectives.
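The mechanism for interjecting avatar links into student feeds is not specified in the proposal; a minimal sketch of one possible placement policy (the function name, the every-third-post rule, and the list-based feed are all illustrative assumptions) might look like this:

```python
def interject_avatar_links(student_feed, avatar_entries, every_n=3):
    """Return a new feed with avatar entries inserted after every
    `every_n` student posts.

    This fixed-interval policy is chosen purely for illustration; the
    actual intervention could key placement to topics instead.
    """
    result = []
    avatars = iter(avatar_entries)
    for i, post in enumerate(student_feed, start=1):
        result.append(post)
        if i % every_n == 0:
            nxt = next(avatars, None)  # stop interjecting when exhausted
            if nxt is not None:
                result.append(nxt)
    return result
```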

As students complete their research, they will utilize interactive graphic organizers. These graphic organizers will provide formative assessment data while also scaffolding student knowledge growth. The final essay, for example, may utilize an argumentative Vee diagram (Nussbaum, 2003) and other graphic organizers such as Toulmin models (Toulmin, 2003).

As students compose a digital essay, they will publish reflective posts on the writing process and on how they are shaping their truth. They will reflect on key revisions they made or steps they took to shape media. These reflections will also provide novel assessment data on students' understanding of specific online research and media skills.

1.2.1 Sample

The intervention will focus on 7th graders in the domain of social studies. We chose to draw our sample from this population primarily because of the extensive use of the 4th- to 7th-grade age group in previous research (Kuhn, 1991, 2005; Midgette, Haria, & MacArthur, 2008; Page-Voth & Graham, 1999; Reznitskaya & Anderson, 2002; Schwarz, Neuman, & Biezuner, 2000). This allowed us to draw on previous research in our theory of change. From that bracket, we chose 7th grade based on the Common Core State Standards.

Seventh grade is the first time students are expected to address author perspective by the end of the year. Students need to “analyze how two or more authors write about the same subject,” “determine an author's point of view or purpose in a text,” “analyze how the author distinguishes his or her position from that of others,” and “analyze the structure an author uses to organize a text.” In the writing standards, students must “acknowledge alternate or opposing claims,” “use accurate, credible sources,” and “link to and cite sources.” This is a major instructional shift for students to face. By concentrating on 7th grade, we increase the feasibility of the grant and the strength of our theory of change.

1.3 Theory of Change

#QuestionTheWeb builds its theory of change from an overarching lens of Argumentation as Discourse (Applebee & Langer, 2013; Beach & Newell, 2015; Hillocks, 1986). Teaching students the norms of academic writing remains one of the most consistent challenges for teachers of the English language arts. The earliest comprehensive studies of writing instruction at the high school level suggested that most students were taught highly formulaic structures for producing academic argument (Applebee & Langer, 2013; Hillocks, 1986), and that trend has remained consistent over time (Newell et al., 2015). Teaching and learning academic argument, especially in middle and high schools, is often reduced to formulaic essay structures. In fact, Newell et al. suggest that we need to expand what counts in the teaching of argumentation:

To clarify: what counts as argumentative writing, indeed what counts as argumentation more generally, is not a given. It is not something that just exists. It is instead a set of social practices deeply embedded in our everyday lives and the social institutions in which we all participate. It is socially constructed through and exists only through teaching and learning. (Newell et al., 2015, p. 1)

From a theory of Argumentation as Discourse, we will reach our stated outcomes through two phases that draw on empirical research in their design. In the first phase, we focus on bridging the cognitive and social practices of argumentation by focusing on the writer as an agent as we operationalize our theory of change through blogging. Then, in the second phase, we draw on calls to unite the domains of reading and writing to teach argumentation by having students interact with read-alouds, biased avatars, and graphic organizers. Before we can begin to focus on reading and writing skills, however, we first address the social practices inherent in argumentation.

1.3.1 Bridging Cognitive Skills and Social Practices of Argumentation.

From Argumentation as Discourse, we begin by recognizing the need to bridge cognitive and social practices in writing instruction. Reznitskaya and Anderson (2002), for example, lay out methodologies and explain the reasoning for cognitive and social integration in Argument Schema Theory. Furthermore, in a review of research on argumentative writing, Newell et al. (2015) called for an integration of skills and social practices. We continue this effort by building our theory of change on self-efficacy, identity, dialogical discourse, and community.

1.3.1.1 Self-Efficacy

Studies have suggested that self-efficacy has a strong impact on learning (Bandura & Zimmerman, 1995). Research reveals that students with low self-efficacy do not engage in metacognitive practices around their learning (Walker, 2003). Yet metacognitive reflection is the cornerstone of much of the instruction and feedback in argumentative writing (Newell et al., 2015). If we want students to see writing as part of a universal search for knowledge, then we must improve their self-efficacy. Additionally, students with low self-efficacy in writing focus on extrinsic motivations (e.g., grades) instead of the overarching goal of improving writing (Linnenbrink & Pintrich, 2003). Students also need to consider their self-efficacy with digital writing tools, especially since belief in one's skills with digital tools can predict performance in inquiry tasks (Leu et al., 2015). We also know that some students approach digital writing with higher self-efficacy than the traditional paper genres taught in school (Meyers & Beach, 2008).

1.3.1.2 Agency

A theory of change based on Argumentation as Discourse recognizes the role of the learner. Building upon work by Scardamalia (2009), who argues for epistemic agency that gives students control over the full range of knowledge-building skills, we suggest that learners’ control of the topic, audience, and impact of writing will improve learning. Durst (2009) argues that students express agency by resisting curriculum and suggests that we should instead professionalize students' writing and make their goals central. Beyond resisting the curriculum, agency impacts writing ability; Sheehy (2003) found that, when students sensed a lack of agency, they did not invest effort in writing.

Teachers can encourage agency in the classroom. To negate the detrimental impact of a lack of agency, Sheehy worked with students to formalize the genre for specific audiences, and this led to a collaborative sense of agency and the exchange of strategies. Research suggests there are benefits to using digital texts to increase the agency of writers. O'Brien, Beach, and Scharber (2007) found that struggling readers and writers develop agency in digital texts. Multiple studies find that having students write for authentic purposes improves their writing. In fact, giving students critical questions leads to an increased frequency of claims in student writing (Nussbaum & Edwards, 2011). As educators channel learner agency, they help to strengthen student identities as writers.

1.3.1.3 Identity

Identity and agency are deeply intertwined. Once students sense ownership over their writing, educators can help them build on their identities as writers. Any theory of change that begins with Argumentation as Discourse must recognize the role identity plays in writing (Scardamalia & Bereiter, 2006). By focusing on individuals rather than outcomes, we hope to break from cycles of programmatic writing instruction. In fact, Trainor (2009) noted that we must think about our framing as students build identities, or they may construct identities in opposition to the classroom in response to spaces that provide no agency.

We also know that multimodal tools provide opportunities for students to try on identity kits (Gee, 2009), or play with different discourses that signify membership. Johnson (2007) demonstrated how students creatively played with dress and identity to navigate social norms. Similarly, Zoss et al. (2008) found that the multimodal construction of masks offered an avenue for identity construction with adolescent boys. These opportunities to try on different identities are essential as students learn to adopt different voices (Elbow, 2000) through classroom and teacher models (Lee, 2007). As students develop different voices, they can begin to engage in dialogical discourse.

1.3.1.4 Dialogical Discourse

An intervention based on a theory of Argumentation as Discourse recognizes Bakhtin's notion of double-voicing (Newell et al., 2015). Double-voicing refers to adopting different worldviews as a tool for building community. In writing research, this may mean speaking to an audience, to other students in the class, or even to one's own past through written reflection (Newell et al., 2015).

In argumentative writing research, attending to and adopting different perspectives has shown promise. Felton and Herko (2004) found that having students speak to their own claims and to those of their peers by graphing arguments led to greater use of counterclaims. Lunsford (2002) found, through double-voicing, that definitions of key terms in graphic organizers or models of argument, specifically Toulmin's, are not settled and evolve with student writing.

We also recognize the dialogical role of double-voicing through the creation of avatars, or temporary identities. Nystrand (2002) found that students construct personas with strong beliefs when writing. When students develop their own belief systems in argumentative writing, researchers have found it takes multiple iterations of writing; single persuasive essays do not allow for the internal dialogue required to understand the perspectives involved in an issue (van Eemeren & Grootendorst, 2004; Walton, 1998, 2007). Instead, students simply write papers to a school context rather than to a suggested audience (Smidt, 2002).

Studies have indicated that online interactions and role play may increase the efficacy of dialogical discourse in the classroom (Newell et al., 2015). Beach and Doerr-Stevens (2011) found that moving from online to real contexts allowed students to use double-voicing to change school policies. Online interactions have also been shown to shift student arguments (Morgan & Beaumont, 2003) and, while some studies have shown face-to-face arguments to be richer, different conditions did not translate to differences in writing (Joiner, Jones, & Doherty, 2008).

Others argue that we need a reciprocal relationship between student and teacher (Wallace & Ewald, 2000). Dialogical discourse can be encouraged through the use of feedback and reflective conversations (Gorzelsky, 2009). Feedback drives discourse, which in turn encourages peers to engage with each other and thus helps to build a community of writers.

1.3.1.5 Community of Writers

Community is essential to process writing (Applebee & Langer, 2013; Graham et al., 2016; Graham & Perin, 2007; National Council of Teachers of English, 2016; Troia & Olinghouse, 2013). In writing communities outside school, community drives all learning (Winn, 2015), and it is this community that has enabled the web to thrive. In fact, Scardamalia and Bereiter (2006) argue we should look to grow the knowledge of the community rather than that of the individual writer.

Atwell (1998) has long called for teachers to come out from behind their desks and join students as writers. This step starts to form a community that allows students to see themselves as writers by "changing knowledge, skill, and discourse as part of a developing identity" (Lave & Wenger, 1991, p. 12). Numerous studies have shown the power of building a community of writers to bridge cognitive skills with social practices (Broderick, 2015; Hall, 2015; Hull, Stornaiuolo, & Sahni, 2010; Wright & Mahiri, 2012).

Communities of writers have been essential in online spaces (O'Byrne, 2015). This is especially pronounced among English language learners (Yi, 2008), who turn to community for fanfiction writing (Black, 2005), identity work (Gee, 2015), and connections to home cultures. Online communities have also been shown to increase writing by giving students a place to share (Kehus, 2000).

At the same time, online spaces can create negative writing behaviors (Chandler-Olcott, 2014), and this requires a holistic approach to creating community (Hagood, 2011) that re-centers writing communities to consider online spaces (Grisham & Wolsey, 2005). By intentionally designing online spaces for writing communities, educators can take advantage of the affordances of technology while actively modeling civic engagement in these spaces and improving students' cognitive skills through the delivery of feedback.

1.3.2 Operationalizing Our Theory of Change in Phase I

When given the opportunity to develop pieces of writing for their own blog, students will engage in the social practices necessary for argumentative writing. In Phase I, students will explore markers of credibility and consider how an author’s point of view impacts the textual choices of the writer. Then, in the second phase of the intervention, students will encounter biased read alouds and reflective blog posts written by avatar characters who interact with students.

We can embed and reinforce these social practices through the use of an interactive reading and writing platform. Improving argumentative writing skills requires a focus on sourcing skills (Birkenstein & Graff, 2018; Harris, 2006). The social reader that emerges from this project will enable students to annotate sources for claims and evidence, review the claims of others, and leave comments for their peers. This will help students see an argument not as a thesis or position but as a socially complex text that unfolds in real time while being situated in history. Students encounter these texts across different networks with varying amplification and authority (McVerry, 2015). By focusing first on telling the story of oneself, we build on our theory of change of Argumentation as Discourse.
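The social reader described above has no published schema; purely as a sketch of the kind of data it might capture (the class names, fields, and grouping helper are hypothetical, not part of the platform), annotations marking claims and evidence, with room for peer comments, could be modeled like this:

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    """One highlight on a source: a claim or a piece of evidence."""
    student: str
    source_url: str
    quote: str
    kind: str                                      # "claim" or "evidence"
    comments: list = field(default_factory=list)   # peer feedback on this highlight

def claims_by_source(annotations):
    """Group claim annotations by source so a class can compare how
    different readers marked the same text."""
    grouped = {}
    for a in annotations:
        if a.kind == "claim":
            grouped.setdefault(a.source_url, []).append(a)
    return grouped
```

Grouping claims by source would let teachers surface disagreements between readers of the same page, the kind of formative data the proposal says the platform should yield.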

In the first phase of our intervention, we will operationalize our theory of change by (a) supporting the expression of agency and the opportunity to reflect on and create identities, (b) relying on dialogical discourse, and (c) building the self-efficacy of participants. This will mainly occur as students develop a personal website and blog. The learning activities included in Phase I all draw on the previous research underlying our theory of change.

In the early days of the internet, youth designed their identities in spaces they controlled (boyd, 2014). Now the spaces of reading and writing are algorithmically driven, and the design choices severely limited for the user (Best, Manktelow, & Taylor, 2014). Machines often determine which photographs students see from friends. The news youth read is promoted by bots across social media. As youth engage in these reading and writing spaces, it is imperative that they know to #QuestionTheWeb (Barker, 2009; Leu, 2015). In Phase II of the intervention, we will build dialogical discourse among students.

1.3.2.1 Domain Of One's Own.

We hope to build student agency and efficacy through multiple steps in our intervention by utilizing dialogical discourse about one's own identity. Dialogical discourse is at the heart of our theory of change, and we believe in providing students opportunities for multimodal identity construction (Zoss et al., 2008). Allowing participants to first question their own voice will prepare them to speak to claims made first by peers and then by others (Felton & Herko, 2004). Given the need for youth to hone critical evaluation skills, particularly in light of the effectiveness of writing communities and the rise of fake news, the proposed #QTW instructional model begins by focusing on spaces where students consume information and construct meaning. It is essential for students to “manage and consider the implications of becoming the owner of their own story” (Groom & Lamb, 2014), not only to understand themselves as readers and writers, but also because understanding how perspectives shape claims and evidence is a core critical literacy skill. In other words, it is much easier for students to understand how an author uses the affordances of digital tools to shape truths after they have experienced shaping their own.

1.3.2.2 Power of Student Blogging.

Teaching argumentation first through student blogging will create a sense of ownership leading to greater agency while providing the time and space to build self-efficacy skills with technology. Work done in content areas with Internet inquiry shows an initial drop in knowledge acquisition (Leu technical report), and studies of online learning find it often takes four weeks for students to adjust. By providing time to develop one's identity as a writer, we hope to take advantage of growing self-efficacy.

1.3.2.3 Critical Questions.

After students build their personal websites, they will choose from a set of critical questions (Nussbaum & Edwards, 2011) around the use of digital technology and social media in their lives. As they delve into these questions, students will learn to produce different types of multimedia compositions, such as podcasts, photos, videos, and infographics. This will allow us to teach critical multimodal composition skills (Newell et al., 2015) while also providing agency to those who might find refuge in digital texts (Beach & Meyers, 2008). As students engage with these questions, they will learn to annotate texts for markers of credibility.

1.3.2.4 Dialogical Reflections.

While students develop efficacy and knowledge with new forms of textual communication, they will also publish reflection posts explaining how they learn. Creating an internal reflective dialogue will encourage the double-voicing (Bakhtin, 1985) that students need when questioning different genres and perspectives. More importantly, we provide students an opportunity to learn how to learn through reflection. This will also professionalize their writing around their goals to improve agency (Durst, 2009).

1.3.2.5 Choice in Inquiry.

Finally, in Phase I, students will choose a personal challenge around technology or digital media. This could be a period of not using social media, a personal goal to write or shoot photos once a day, or a project to examine body image online. By allowing students choice in their expression, we hope students will learn to see themselves as writers with increased self-efficacy and agency.

Throughout the first intervention phase, we will also strive to create, and consistently evaluate, a space (both the classroom and online interactions) that encourages a community of writers. We will take lessons from out-of-school literacy spaces (Winn, 2015) and ensure students have a space to explore their own identities. Specifically, we will use a code of conduct, modeling, and video guides to create the community. We will take field notes on participatory practices and seek to advance these goals through constant iteration in our formative design.

1.3.3 Unifying Theories of Reading and Writing

In order to overcome the challenges of teaching argumentation, we must not only combine cognitive skills and social practices but also adopt an interactive theory of reading and writing (Flower, 1989; Newell et al., 2015). To this end, the second phase of our intervention uses dialogical discourse to improve both critical evaluation skills while reading sources and argumentative writing as students undertake activities in the reading of websites and the creation of argumentative texts. Through an interactive approach to reading and writing, we hope to see evidence of greater appropriation (Rogoff, 1995) of cognitive skills and social practices.

1.3.3.1 Reading and Writing Argumentative Texts

Cognitive approaches have identified many skills associated with argumentation, yet students still struggle. Researchers often find that readers ignore information that does not align with their perspective (Perkins et al., 1991) and that skills rarely transfer (Newell et al., 2011). In terms of sourcing, students rarely engage in critiquing author perspective or text (Brante & Strømsø, 2018). As they write, students do not make critical judgements (Perie, Grigg, & Donahue, 2005) and show a lack of analytical writing (Persky, Daane, & Jin, 2003). Our theory of change suggests that decontextualized skill instruction does not effectively teach argumentation.

1.3.3.2 Dialogical Interactions

Promising research has investigated interactions through Collaborative Reasoning (Anderson et al., 2001; Jadallah et al., 2011; Reznitskaya et al., 2007, 2009; Waggoner, Chinn, Yi, & Anderson, 1995), where the intersection of written models and student and teacher talk translates into rhetorical moves for investigating texts or crafting claims. These textual moves are greatest when the teacher consistently asks for evidence and provides feedback on the use of evidence (Jadallah, 2011). Therefore, any intervention must focus on encouraging interactions that solicit and deliver feedback. This should focus specifically on the process of evaluating and adopting perspectives. It must occur in a learning environment full of interactions that create dialogues with texts (Newell et al., 2015) and their authors.

1.3.3.3 Text Structure

Interventions asking readers and writers to evaluate text structure have long shown promise. Armbruster, Anderson, and Ostertag (1989) explored text frames with expository texts and demonstrated that attuning to structure improves comprehension. In argumentative writing, this has usually involved the use of graphic organizers (Brooks & Jeong, 2006; Easterday, Aleven, & Scheines, 2007; Nussbaum & Kardash, 2005). These tools have consistently helped encourage the exploration of counter-arguments (Newell et al., 2015), an essential step in considering multiple perspectives. Graphic organizers also provide rich data sources for evaluation. Any intervention focused on teaching the reading and writing of argumentation should utilize these tools.

1.3.3.4 Read Alouds

Reading aloud in secondary schools has recently shown promise (Braun, 2010; Fisher & Ivey, 2006; Ivey, 2003), including with argumentative texts (Hillocks, 2010). Ivey (1999) noted that struggling readers in middle school especially benefit from read-alouds and enjoy the performative nature of reading and writing. Thus, any intervention focused on teaching argumentation with struggling readers should provide scaffolds for reading aloud and should encourage read-alouds by the instructor.

Any intervention designed to teach argumentation must differentiate for reading ability. Including teacher read-alouds can help ensure the success of all students. These read-alouds can be small group, teacher-led, or even mediated using technology.

1.3.3.5 Mentor Texts

Providing mentor texts for students to read and analyze has moderate effects on improving writing ability, as students learn how well-written texts are crafted (Caswell & Duke, 1998; Duke & Kays, 1998; Maloch, 2008). Knudson (1989) found exposure to models of expository texts had the strongest effect (.25) of four different interventions. This trend, supported with young readers, holds with adolescents as well (Graham & Perin, 2007; Hillocks, 1986). Chambliss (1995) noted that in teaching argumentation, most focus on textual clues. Pytash et al. (2014), through self-report instruments, found mentor texts improved students' understanding of why authors write in economics.

The use of mentor texts in argumentation provides the chance for dialogical interactions and the joint teaching of reading and writing. Students not only identify elements of the craft but also gain an understanding of author perspective. Any intervention based on Argumentation as Discourse should widely deploy mentor texts.

1.3.3.6 Modeling

Teachers have long used models in the teaching of writing. Studies have found that struggling readers and writers benefit from teacher modeling (Fisher & Frey, 2012). Schunk (1991) noted that writing models work best when learners perceive the models to be at their level. Graham and Harris (1994) found teacher modeling improved writing. Braaksma et al. (2002) confirmed that similarity between the model and the learner's skills leads to the greatest improvements when compared to expert models. Sawyer (1992) found that models work better with skilled students, and studies of models and direct instruction with struggling students find that direct instruction will suffice in teaching writing strategies (Schultz, 1997).

Current research reveals that modeling of reading argumentative tasks may be more effective for struggling readers, while models of argumentative writing may be more effective with skilled students. In our theory of change, with an interactive approach to literacy, we believe the modeling of reading and writing cannot be separated from each other or from direct instruction. It is also clear from the literature that the difference in quality between the student work and the model impacts learning. Therefore, any intervention should utilize models written at the student's level while stressing important features such as signal words (Hiebert, 2017) through direct instruction.

1.3.3.7 Online Texts

Much of the research in argumentation has started to study texts in online spaces. Studies examining differences between offline and online argumentation have found that, while there may be some differences in face-to-face discussions, the writing products do not significantly vary in quality (Joiner, Jones, & Doherty, 2008). In fact, others have found that the unique affordances of online discussions lead to greater dialogical interactions (Beach & Doerr-Stevens, 2009). Given that the majority of students now consume more online than offline texts (Fox & Rainie), it is critical that we teach argumentation in online spaces. Furthermore, given the role of misinformation, we have a critical need as a nation to make sure students can evaluate online information.

Researchers have also taken advantage of computer-aided tools to scaffold learning (Scheuer et al., 2010). Students who read articles as text maps (Dwyer, Hogan, & Stewart, 2010) showed no difference from those reading a full text. Numerous studies examine the use of computer-generated graphic organizers (Dowell et al., 2009; Salminen, Marttunen, & Laurinen, 2010; Suthers et al., 2008). In these studies, computer-aided scaffolds led to an increased number of arguments in writing, but little is known about whether this transfers to other situations. Any study utilizing online writing spaces should allow for the use of graphic organizers. Further efforts should be made to see how graphic organizers increase dialogical interactions.

While the use of models and graphic organizers yields results similar to those with offline text, the role of authority shifts online (Newell et al., 2015) through the creation of believable personas or avatars (Bogost, 2007; Jamaludin, Chee, & Ho, 2007; Rourke et al., 1999; Swan, 2002). In their study of online role play to teach argumentative writing, Beach and Doerr-Stevens (2009) noted that students who created avatars with more credible personas were perceived as experts. Any study examining dialogical interactions in online spaces should attune learners to how markers of authority are amplified.

1.3.4 Operationalizing our Theory of Change in Phase II

Phase II of our intervention, based on Kuhn and Crowell's (2011) study, may lead to improvements in both critical evaluation skills and argumentative writing as we seek to integrate research from cognitive and social perspectives (Newell et al., 2011) while addressing the lack of research unifying the reading and writing of argumentative texts.

In Phase II, students will first begin what Kuhn and Crowell labeled the “pre-game” by completing a series of challenges around the critical evaluation of websites. Researchers will create short, ten-minute puzzles and class challenges around identifying authors, evaluating authority, and evaluating bias. Students will also complete challenges around identifying claims, sources, and evidence verification.

During the “game component” of the lesson (Kuhn & Crowell, 2011), participants will be put into dyads or triads and enter a simulated web environment. Webpages will be recreated following the methods of Leu et al. (2018) in the development of the IES-funded online reading comprehension assessment (see Appendix C). There will be pages around consequential topics, and students will choose from the list. Each topic will consist of two perspectives, with two websites representing each side of the issue and one neutral informational source; each perspective will thus draw on four sources.

As they enter a source, participants will be greeted by a video avatar in the lower right-hand corner, a character they can toggle on and off. As students read the website, the character, who has a background persona, will annotate the site from their biased perspective. For example, a representative from the Chamber of Commerce might call a “carbon tax” too expensive a solution to climate change and call into question the cost listed on a website from an environmental group. Each avatar will read two sources confirming their bias and one source they oppose. As the avatars discuss the text, they will pay close attention to textual features, such as signal words and rhetorical devices. Participants will trace the text structure of each source using a graphic organizer. Each group will publish two blog posts: one explaining why each character is right and another explaining why each is wrong.

In the “end game” (Kuhn & Crowell, 2011), students will then choose a different topic from the list. Using the same interactive graphic organizers, students will investigate the issue using the provided sources but without the avatar read-alouds. They will then publish a multimedia essay on the topic.

While the use of the topics and sources previously used by their peers may introduce a prior knowledge bias, the performance on this essay is not included in our final models. By limiting students to our simulated sources we reduce the need to include searching skills as a covariate. Finally, this also fits our theory of change given that argumentation around a subject should unfold over time in discussion, reading, and writing.

1.4 Summary of the Significance of This Proposal in Relation to the IES Review Criteria

The primary purpose of this grant is to increase writing proficiency among adolescent youth. We build on existing models and lessons learned from teaching student blogging over the last eleven years (Hicks & McVerry, 2007). There has never been a large-scale IES study to see what, if any, effect student blogging can have on argumentative writing scores.

We also address the need to explore alternative writing assessments. By teaming with the National Writing Project in the use of the Analytic Writing Continuum Source-Based Argument (AWC-SBA), we will provide insight into assessing writing.

As scholars we must also understand how to best shape hybrid learning environments to support developing writers. This study seeks to expand the boundaries of the classroom beyond typical school-day times and spaces, providing insight into how educators should use the affordances of technology to overcome the persistent failure in teaching argumentative writing.

Finally, we believe students cannot learn how to engage in formulating arguments without investing heavily in learning source evaluation. This study seeks to link the critical evaluation of online sources (Leu, Forzani, & Kennedy, 2015) with argumentative writing.

2.0 Research Plan

This developmental goal research grant will occur over four years. The first two years will be formative design studies. In Year Three, we conduct a switching replication design pilot study. Year Four involves data analysis and a cost/benefit study.

2.1 Research Goals

Year 1
Goal: Develop intervention learning activities and student blogging platform
Method: Formative design utilizing field notes and cognitive labs
Outcomes:
  • Establish baseline data (e.g., students’ initial writing sample, self-efficacy assessment, use of critical evaluation skills)
  • Revise existing Phase I learning activities
  • Refine existing social reader
  • Test student use of multiple features on blogging platform

Year 2
Goal: Iterative design of curriculum materials
Method: Formative design utilizing field notes, basic learning analytics, and cognitive labs
Outcomes:
  • Establish baseline data and track over time to ensure efficacy of our theory of change
  • Reliability estimates
  • Revise existing Phase II learning activities
  • Record, edit, and publish biased think-alouds

Year 3
Goal: Pilot study
Method: Switching replication design
Outcomes:
  • t-test for mean comparison
  • Repeated measures for effect of time and order

Year 4
Goal: Cost feasibility study
Method: Analyzing Year Three data
Outcomes:
  • Research ready for dissemination
  • Findings ready for a Level 5 Measurement grant

2.2 Baseline Assessments

2.2.1 Analytic Writing Continuum.

In 2013, NWP developed the AWC for Source-Based Argument (AWC-SBA) to focus on specific features of source-based argument writing. In adapting the AWC, NWP reviewed extant argument writing rubrics (e.g., the Smarter Balanced and PARCC rubrics). The AWC-SBA retains the AWC’s basic structure rooted in the “six traits” of writing, but each attribute has a particular focus on qualities related to source-based argument writing. The AWC-SBA measures four attributes: content (e.g., quality of reasoning and strength of evidence); structure (e.g., organization to enhance the argument); stance (e.g., tone, establishment of credibility); and conventions (e.g., control of usage, punctuation, spelling, capitalization, and paragraphing). The AWC-SBA has been used in three large-scale scorings (n > 5,000). Reliability estimates for the AWC-SBA ranged from 89%–92% on each attribute (Gallagher, Arshan, & Woodworth, 2017).

2.2.2 Critical Online Information Literacies Assessment

COIL is a previously validated multiple-choice assessment of website evaluation. The Critical Online Information Literacies (COIL) instrument is based on measures developed by Kiili, Laurinen, and Marttunen (2008), Brem, Russell, and Weems (2001), and Leu et al. (2010). The final instrument is delivered using an online survey tool. The items measure each of the following constructs: author, bias, publisher, and source. The COIL contains forced-response answers that include entire screenshots of websites. This violates basic principles of comprehension assessment that call for short distractors (Fuchs, Fuchs, & Maxwell, 1988; Keenan & Betjemann, 2008). However, in each administration of the COIL, the coefficient alpha for reliability exceeded .70 (McVerry, 2013), which meets the needs of a developmental research grant.
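To illustrate the internal-consistency check reported above, coefficient (Cronbach's) alpha can be computed directly from an item-response matrix. This is a minimal sketch; the six students and four dichotomous items below are hypothetical, not COIL data.

```python
def cronbach_alpha(scores):
    """Coefficient alpha for a set of respondents' item scores.

    scores: one row per respondent, each row a list of item scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    """
    k = len(scores[0])  # number of items

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical dichotomous (0/1) responses: six students, four items
responses = [
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(responses), 2))  # → 0.66
```

An alpha above .70, as in the COIL administrations cited, would indicate the items hang together well enough for a development-stage measure.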

2.2.3 Writing Self-Efficacy.

The Self-Efficacy for Writing Scale (SEWS) (Bruning et al., 2013) will be used to estimate writing efficacy. The measure contains items that load on three scales: (1) self-efficacy for writing ideation, (2) writing conventions, and (3) writing self-regulation. More specifically, following Bruning et al.’s design, the instrument will include 16 items that examine the three aforementioned dimensions of self-efficacy for writing: (1) Ideation (five items); (2) Conventions (five items); and (3) Self-regulation (six items). Following recommendations by Bandura (2006) and Pajares and Schunk (2001), the SEWS includes items answered on a scale from 0 (“I'm not sure I could do”) to 100 (“I'm totally sure I could do”).

2.2.4 Survey of Internet Use and Online Reading.

This online survey identifies frequency and type of Internet use inside and outside school, as well as assesses knowledge and skills related to Internet-based reading and writing activities. With funding from a previous IES project, it was developed, field tested, revised, and then administered to approximately 1,600 middle-grade students in several urban and rural school districts under the supervision of Dr. James Witte at Clemson University, a nationally recognized expert in online survey development. Samples appear in Appendix (X). This survey has been previously validated and used in numerous studies (Leu et al., 2004; Leu et al., 2007; Leu et al., 2016).

2.3 Formative Design in Year One

We will use a series of design experiments (Brown, 1992; Cobb et al., 2003) to build upon our existing model of teaching argumentative writing, with students composing texts and posting them to their own domain/blog. Our design experiments are distinctive in that they engage teachers as “insiders” who collaborate with the research team, using outcome data to revise the curriculum and tools before experimentation.

In Year One, we begin with our pedagogical goal: to improve the self-efficacy and ability of adolescents to critically evaluate sources and use this evidence in argumentative writing.

2.3.1 Year One Research Question.

In a formative experiment, we must identify essential components of the intervention to be studied a priori to implementation and data collection (Reinking & Bradley, 2008) and then iterate on this understanding as we identify variables and strategies that enhance our pedagogical goal. In Year One we will ask, “What are the essential components of a hybrid writing environment needed to improve argumentative writing when we give every student their own domain?”

2.3.2 Overview of Research Activities.

The formative design experiment in Year One of the study will follow a three phase model adapted from Reinking and Bradley (2004, 2008; see also Reinking & Watkins, 1998): (a) gathering baseline data; (b) implementing and developing the intervention based on data collection and analysis; (c) retrospective analysis.

Setting: The formative design study will take place in one school district in Connecticut in two classrooms. We will begin with a one-day training for teachers on the technology of blogging and setting up their own personal websites. We will then provide a second day overview on the interventions. Then we will follow an iterative design process.

The participants will have access to an online coach who will offer pre-recorded tutorials on an as needed basis. Teachers will be able to attend open office hours that are hosted online. These will be drop-in sessions using video chat software.

Phase 1: Gathering Baseline Data. We will begin by gathering background data on how the school previously taught source evaluation and argumentative writing. Interviews will be conducted with the teachers, and classroom observations will be completed.

We will then administer the baseline assessments of writing efficacy and the COIL assessment of critical evaluation. The COIL will be scored to identify key areas of instruction. The AWC, which is our student outcome measure, will be scored after the three-year data collection.

Phase II: Implementing and Developing the Intervention. Once the learning activities are developed, we will focus on the delivery of the learning activities. These activities will center on the role of identity and social media. Students will then practice sourcing skills as they read about “the effect of social media” while composing argumentative pieces across a variety of modes. They will then create “their most credible self” while building an About Me page.

The development of these lessons will occur over an eight-week period. The graduate student and project manager will work with the teacher and act as participant observers by intentionally designing lessons for hybrid learning spaces. We will establish a set of learning goals for Phase 1.

These goals will then be taken up by the research team, and proposed learning activities will be developed. Each activity will be based on a read, write, participate model (McVerry, Belshaw, & O’Byrne, 2015) where students read and annotate a text, write a reaction to the text, and then participate in an activity that requires the use of content from the text.

After the learning activities are developed they will be sent back to the scientific advisory board for content validity. Participants will be asked to identify the skill being taught, how important they believe the skill to be, and how sure they are in their answer. Using these data we will calculate a content validity ratio. We will also ask for general qualitative feedback on the learning activities.
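One common way to turn the panel judgments described above into a content validity ratio is Lawshe's formula, CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating the skill essential and N is the panel size. The sketch below assumes this formula; the panel of eight reviewers is hypothetical.

```python
def content_validity_ratio(ratings):
    """Lawshe-style content validity ratio for one learning activity.

    ratings: one judgment per panelist, True if the panelist rated the
    skill "essential". Ranges from -1 (no panelist says essential)
    to +1 (every panelist does).
    """
    n = len(ratings)
    n_essential = sum(ratings)
    return (n_essential - n / 2) / (n / 2)

# Hypothetical panel of eight SAB reviewers, six rating the skill essential
print(content_validity_ratio([True] * 6 + [False] * 2))  # → 0.5
```

Activities whose ratio falls below a chosen threshold would be flagged for revision before the next iteration.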

Along with the classroom teachers, we will then identify 2-4 writers in each class as focal students for cognitive labs. These students will offer feedback on the lessons and on the user experience of having a blog on their own domain and a hybrid learning environment that allows students to curate, read, and comment on the blog posts of their peers. We will also have students work through tutorials and lesson plans while thinking out loud.

We will then finalize the Phase I lessons. The collection of activities will then be refined and reviewed by the scientific advisory board. Next we will meet with teachers and identify any areas where they may need instructional support. The lessons will then be taught in their classrooms for a period of eight weeks, twice a week, for one class session.

After the first iteration, the PIs and graduate researchers will use the data collected to revise the learning activities. They will analyze field notes and teacher reflections. Student feedback about what they like will be collected and coded using a dichotomous plus/delta format. Student work will also be coded using methods suggested by Kuhn and Crowell (2011). Student writing will be parsed into idea units, and each idea unit will be classified into one of four categories (no argument, own-side argument, dual-perspective argument, or integrative-perspective argument). Using these tools, we will decide if lessons need revision to meet our pedagogical goals.

The lessons will then again be delivered to another class. However, these students will have had access to their websites from the beginning of the study. This will allow us to understand the impact new technology has on the learning environments and outcomes, meaning we will be able to compare our data between the two groups to see if pre-exposure to the tools leads to greater writing efficacy.

Phase II: Implementing and Developing the Technology. As we iterate on the lesson plans, we will develop the blogging platform for students to use. As stated, we are not creating a “new” platform or learning management system. Instead we are encouraging students to blog from their own web domain using a customized theme we create, including both the blogging tools and the updated h-feed reader.

While we are using mature web standards and will be using well-established, open source solutions, we will be doing some custom development in Year One. We will use industry standard methods to evaluate accessibility and use of the tools. In the Year One formative design, the goals will be to test and develop technologies that will make the platform available on any desktop or mobile device:

  • Functioning comment system called webmentions
  • Functioning annotation system called fragmentions
  • A coding vocabulary for markers of credibility
  • A social reader with an API for publishing writing, called micropub, and another for curating a collection of websites to read, called microsub

These blogging tools use very specific types of metadata, called microformats, directly in the HTML file of every post. Microformats have been used on the web for over thirteen years. Using this metadata, students will be able to send comments back and forth to each other from their own websites via a technology called webmentions. They will be able to highlight and annotate each other’s blogs from their own website using a technology called fragmentions.
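A minimal sketch, following the W3C Webmention specification, of what sending a comment between two student domains involves: the sender discovers the receiver's webmention endpoint (here read from an HTTP Link header) and POSTs a form-encoded source and target URL. All URLs are hypothetical, and the actual HTTP request is omitted.

```python
import re
from urllib.parse import urlencode

def discover_endpoint(link_header):
    """Pull the webmention endpoint out of an HTTP Link header,
    e.g. '<https://example.com/webmention>; rel="webmention"'."""
    for part in link_header.split(","):
        match = re.search(r'<([^>]+)>\s*;\s*rel="?([^";]*)"?', part)
        if match and "webmention" in match.group(2).split():
            return match.group(1)
    return None

def webmention_body(source, target):
    """Form-encoded body for the POST to the discovered endpoint."""
    return urlencode({"source": source, "target": target})

endpoint = discover_endpoint('<https://example.com/webmention>; rel="webmention"')
body = webmention_body(
    "https://student-a.example/reply-to-b",    # post containing the comment
    "https://student-b.example/original-post"  # post being commented on
)
print(endpoint)
print(body)
```

The receiving site verifies that the source page really links to the target before displaying the comment, which is what lets peer feedback travel between independently owned domains.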

The students also will have a social reader that enables them to curate all posts from their classmates and the teacher. They will be able to add other feeds using a syndication technology called h-feed. This tool will be catered to the purpose of source evaluation. For example, we will have a tag for evaluating the claims and evidence used by their peers. Students will highlight blog posts of their peers and then judge the credibility of the evidence.

The teacher will be given a social reader with additional tools for tracking progress. These include counts of publishing and commenting, the number of claims evaluated by students, and relational maps that illustrate which students interact together. The teacher will also be able to deliver feedback and comments to their students directly from their social reader.

The students will also have an annotation tool with codes they develop as well as codes used by the W3C Credibility Community Group in predictive models of website credibility (Hawke et al., 2018). These metadata vocabularies will allow us to triangulate student growth with our other measures.

In the Year One formative design study, we will work on refining existing tools through an iterative process that focuses on the environment of the user interaction as well as the user experience (see Appendix C). Weekly design sprints will be used based on a roadmap developed a priori but later revised based on the data collected. The overall software development goal for the Year One formative design is a refinement of existing technologies to meet our classroom needs.

Data Collection and Analysis. The graduate assistants will act as participant observers who will provide assistance to the classroom teacher, who will remain the primary instructor and deliver the lessons included in the intervention. A graduate assistant will be in attendance for the twice-weekly delivery. The principal investigators will provide just-in-time coaching and feedback.

Qualitative data will include (a) field notes recorded and analyzed by graduate research assistants who have had explicit training in gathering and analyzing qualitative data; (b) teacher logs with observations and reflections about #QuestionTheWeb, which are particularly useful for our awareness of possible effects and issues that arise during times when a member of the research team is not present; (c) weekly interviews/debriefings (15-30 minutes) with teachers conducted by a member of the research team, which will be recorded and transcribed; and (d) periodic focus-group interviews with students.

The protocol for collecting class observations for qualitative research will be developed during this phase. Specifically, the participant observers will rate a classroom on elements of participatory learning environments. A draft framework is included in the Appendix. Researchers will observe classes for elements that lead to greater dialogical interaction across four categories: leading, building, communicating, and thinking. The PIs will first work with research assistants to establish anchor observations and discuss differences in scale. The instrument will undergo iterative design.

Student artifacts such as digital presentations and various assignments will also be included in data collection and analysis. These data will primarily guide development, but will also include theoretical notes aimed at generating broader pedagogical theories and generalizations across contexts related to implementing digital tools to teach argumentation. As noted, the research team will meet at least every two weeks to conduct preliminary retrospective analyses aimed at generating broader theoretical understandings and at sharing observations across sites, focusing on determining (a) factors that enhance or inhibit #QTW success in achieving its goals, (b) what modifications or adaptations were made in response to the data, and (c) whether the modifications or adaptations produced desirable results.

Quantitative data collected during the intervention will include the administration of the COIL Assessment at three different time points: at the start of the intervention, the four-week mark, and the eight-week mark. The assessments will first be checked for internal consistency. Item characteristics will be tested for difficulty and discrimination indices. The students will also complete self-efficacy measures at three different time points during the intervention. As part of Year One, we will check our factor loadings to see if the hypothesized model holds. We will work with the Scientific Advisory Board (SAB), specifically Jonna Kulikowich, who has led previous IES research grants building online assessment environments, to understand the model design possibilities and decisions of what scales to use.

The frequency counts of student claims and their categorization will also be analyzed. The posts will be scored blindly by two raters, and a third rater will score a set of random posts to establish the reliability of the ratings. Raw percentage agreement and Cohen’s kappa will be calculated.
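The two inter-rater statistics named above can be sketched as follows. The category labels match the four idea-unit codes described for this phase; the ten paired ratings are hypothetical.

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Raw percentage agreement between two raters' code lists."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    p_observed = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement from each rater's marginal category proportions
    p_chance = sum((c1[c] / n) * (c2[c] / n) for c in set(r1) | set(r2))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codes from two raters for ten idea units
codes_a = ["own-side", "dual", "none", "own-side", "integrative",
           "dual", "own-side", "none", "dual", "integrative"]
codes_b = ["own-side", "dual", "none", "dual", "integrative",
           "dual", "own-side", "none", "own-side", "integrative"]
print(percent_agreement(codes_a, codes_b))  # → 0.8
print(round(cohens_kappa(codes_a, codes_b), 2))
```

Reporting both statistics is useful because raw agreement alone can look inflated when one category dominates; kappa discounts the agreement expected by chance.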

We will then run a correlation of all quantitative data at the four-week and eight-week time points. Our theory of change would suggest that as self-efficacy and agency rise, the frequency of claims and evidence will rise as well.

Phase III: Retrospective Analysis. During this phase the research team will collectively conduct a comprehensive and exhaustive retrospective analysis (Cobb et al., 2003; Gravemeijer & Cobb, 2006) of all available data. The object of this analysis will be to (a) determine the “active ingredients” of our intervention that are likely to be factors related to its success in any context, (b) consider adjustments that may enhance the intervention in the following years (Years 2 and 3), (c) evaluate findings in relation to existing theory and research, and (d) identify context-specific factors. The members of the Scientific Advisory Board will participate in a portion of these discussions via teleconferencing or online webinar applications. The research team will also determine the conference presentations, technical reports, and publications that will be the focus of each summer’s work.

2.3.4 Desired Outcomes. We will use intermittent assessments and triangulate the descriptive data with our field observations to ensure our theory of change is leading to the desired outcomes. We anticipate using a repeated measures ANOVA and expect an effect size of approximately Cohen’s d = 0.3 when examining the difference in mean frequencies of claims and evidence in student writing between the four-week mark and the eight-week mark of the intervention.
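As a simple illustration of the anticipated effect size, Cohen's d can be computed as the mean difference divided by the pooled standard deviation. This is only a sketch: the claim counts below are hypothetical, and a repeated-measures analysis like the one proposed may standardize by a different denominator (e.g., the standard deviation of change scores).

```python
def cohens_d(group1, group2):
    """Cohen's d: mean difference over the pooled standard deviation."""
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    n1, n2 = len(group1), len(group2)
    pooled_sd = (((n1 - 1) * var(group1) + (n2 - 1) * var(group2))
                 / (n1 + n2 - 2)) ** 0.5
    return (mean(group2) - mean(group1)) / pooled_sd

# Hypothetical counts of claims per essay at week four and week eight
week_four = [2, 3, 1, 4, 2, 3]
week_eight = [3, 4, 2, 4, 3, 4]
print(round(cohens_d(week_four, week_eight), 2))
```

A d of 0.3 would mean the eight-week mean sits roughly a third of a standard deviation above the four-week mean.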

Based on our theory of change, we expect to see a positive correlation between properties of participatory learning environments and frequency of argumentative idea units. We hypothesize that spaces which allow for greater participation will engage students in more dialogical interactions, which will improve critical evaluation and argumentative writing.

After the conclusion of the first year, we will have refined the blogging platform. We will also have collected data to check instrument reliability to ensure we can measure the effectiveness of our theory of change. Every student and the teacher will have an updated feed reader that allows them to comment and annotate on each other’s work while writing their own posts.

2.4 Formative Design in Year Two

In Year Two we will use a similar formative design approach. However, we will concentrate on refining the Phase II lesson plans and developing the biased read-aloud videos to assist students and teachers in source evaluation and argumentative writing.

2.4.1 Year Two Research Questions.

In Year Two we ask, “Can encountering biased think-alouds and receiving additional scaffolds improve student critical evaluation and argumentative writing skills?” We will shift our focus from developing learning activities about exploring how “I shape truth” to refining the learning activities and technologies for students to consider the implications of how “we shape truth” in online spaces.

2.4.2 Overview of Research Activities.

Year Two will be an iterative design process that proceeds through the same phases as Year One. However, in Year Two we will be in two classrooms in each state, CT and MI, during each of the eight-week periods. This means we will conduct our research in a total of eight classrooms in Year Two. The second phase will begin after the same Phase I baseline data from Year One are collected.

Setting and Participants: In Year Two of the formative design, the lessons will take place in four different classrooms in four different schools, two in each state. The same cohort of Year One teachers will be involved, as well as a new cohort. Each intervention period will last a total of four weeks. However, all students and teachers will be given access to the blogging platform and the social reader at the beginning of the year. This will let us explore the role teacher experience has in shaping hybrid learning environments to support argumentative learning.

The participants will have access to an online coach who will offer pre-recorded tutorials on an as-needed basis. Teachers will also be able to attend open office hours that are hosted online. These will be drop-in sessions using video chat software.

Material: The Phase II lesson plans revolve around students investigating contemporary issues in the social sciences and humanities. The topical choices will be developed by the research team and classroom teachers. We will first begin by examining the local curriculum with the teachers to identify possible inquiry topics. These choices will then be refined by the SAB. The lessons will consist of eight 45-minute to one-hour lessons delivered twice a week using a simulated internet environment. Each page will have a corresponding video where a character, or biased avatar, narrates and annotates the source from a particular perspective.

Phase II: Implementing and Developing the Learning Activities. Phase II development will begin with the Principal Investigators and Graduate Assistants drafting the learning activities. They will then present these drafts to the SAB. Once topics are chosen, the research team will create the websites.

Using methods adopted from Leu et al. (2014), the researchers will follow a protocol to create a simulated internet, given the challenges of scoring and assessing learning on the open web. Greg McVerry, Principal Investigator, led the effort to create the content for the Online Reading Comprehension Assessment IES grant (Kulikowich, 2013). First, researchers will conduct cognitive labs with students to identify search terms and patterns. Next, researchers will use these keywords to search out websites, and a list of websites will be developed. Reading levels will be calculated and qualitative components of text complexity will be considered. The SAB will then complete a survey indicating how they feel about each source. Finally, these webpages will be remixed into HTML and hosted on university servers.

Next, the scripts for the biased read-alouds will be recorded. They will include the use of five websites per social topic. Two sources will be chosen on each side of an issue, with one additional source considered to be the most neutral. Two avatars will be created, one for each of the positions. The avatars will be pop-up windows with an animated mouse that hovers and highlights. Each avatar will complete a think-aloud for two sites they disagree with and one site where they share an opinion. Each avatar in a given social set will read the source considered most neutral.

After each script is written, the avatar and screen recordings of website mouse clicks and cursor highlights of annotation will be recorded. The voices of the avatars will be provided by the PIs and research assistants. These videos will then be layered as proxies on top of the original sources so users can toggle them on and off in their social reader.

The avatars will then undergo a series of cognitive labs before deployment. In conjunction with the classroom teacher, students will be chosen to participate. Students will be asked to identify key decisions they make when summarizing the conclusions of sources through a think-aloud protocol. The participants will also be asked about the avatars and their perspectives.

Next, the Phase II lessons will be developed. We will begin by first designing the pre-game learning challenges. These are short, 15-minute live or web-based lessons designed to build background skills around searching and navigating author credibility. Using methods outlined in previous research (Castek, 2010; Coiro et al., 2015), we will develop machine-scorable activities around identifying an author, evaluating author authority, and considering author perspective. We will also have mini-lessons on rhetorical devices, use of claims and evidence, and annotating for credibility. The lessons will focus on text structure and annotation utilizing tools developed in Phase I. The curriculum development will follow the same model as Phase I, with content validity provided by the SAB.

    We will then develop the eight week unit for the “game” phase (Kuhn & Crowell, 2011) and corresponding learning materials in conjunction with the classroom teacher. These activities will introduce the new interactive graphic organizers, and explain the use of the simulated internet. As the students complete research they will be using annotation tools and taking notes on their personal domains.

    Finally, we will develop the prompts for the “end game” (Kuhn & Crowell, 2011) portion of the intervention. Given the role of agency in our student model we will provide dyads or triads of students a choice of four topics. They will then create a digital essay and write an argumentative piece.

    The classroom teachers will then receive a two-day training on supporting student blogging platforms in the classroom. Then, in each of the four settings, the participants will engage in the second phase of the learning activities included in the lessons. Similar to Year One, the interventions will take place in two 45-60 minute classes per week over eight weeks. After eight weeks, a second class in each of the four schools will conduct the intervention.

    Phase II: Implementing and Developing the Technology. As stated above, the Phase II technology development focuses on interactive graphic organizers. The interactive graphic organizers will rely heavily on JavaScript and the Micropub and Microsub APIs. In short, students will be able to visit the domain of the graphic organizer and “fill in a box”; this will then publish to their own domain. Alternatively, students can publish from their own domain, and this entry will pre-populate on their graphic organizers.
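    The publish step described above can be sketched as follows. This is a minimal illustration of a form-encoded Micropub create request; the endpoint URL and access token are hypothetical placeholders, since a real client would discover the endpoint from the student's homepage and obtain the token via IndieAuth.

```python
# Sketch of the Micropub flow: a graphic organizer cell is published
# as an h-entry post to the student's own domain.

def build_micropub_request(endpoint, token, content, categories=()):
    """Build the form-encoded POST a Micropub client would send."""
    headers = {"Authorization": f"Bearer {token}"}
    data = {"h": "entry", "content": content}
    if categories:
        data["category[]"] = list(categories)
    return endpoint, headers, data

endpoint, headers, data = build_micropub_request(
    "https://student.example.edu/micropub",  # hypothetical endpoint
    "EXAMPLE_TOKEN",                         # hypothetical IndieAuth token
    "Claim: the author cites peer-reviewed evidence.",
    categories=["QuestionTheWeb", "graphic-organizer"],
)
# A real client would now POST `data` with `headers` to `endpoint`,
# e.g. requests.post(endpoint, headers=headers, data=data)
```

    The social reader side would work in reverse: a Microsub server aggregates the students' feeds so entries published on personal domains pre-populate the shared organizer.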

    Data Collection and Analysis. The graduate assistants will act as participant observers who attend twice-weekly instruction and offer assistance to the classroom teacher, who remains the primary instructor and delivers the lessons included in the intervention. The principal investigators will provide just-in-time coaching and feedback.

    Qualitative data will be collected by research team members, who include the Principal and Co-Investigators, as well as graduate research assistants who have had explicit training in gathering and analyzing qualitative data. Data will stem from (a) field notes of classroom observations by the research team; (b) weekly interviews/debriefings (15-30 minutes) with teachers; (c) periodic focus-group interviews with students; (d) cognitive labs with individual students; and (e) teacher self-reports of each lesson (n=16) and reflections about #QTW, which are particularly useful for our awareness of possible effects and issues that arise during times when a member of the research team is not present.

    We will also examine field notes to continue our efforts to understand how space shapes the quality of hybrid learning environments. We will meet with the SAB to review results from Phase I. We will then develop a coding system to encourage the use of specific practices noted in Year One, with the goal of increasing the frequency of these techniques in Year Two.

    During the end game inquiry, all group discussions will be recorded using computer software. Two groups from each class will be randomly selected and their recordings transcribed. This data will then be coded using methods outlined by Jadallah et al. (2011), with each utterance considered a unit of analysis. Prompts will be coded for interrogating sources, addressing perspectives, use of evidence, feedback, and reflection.

    Quantitative data collected in Year Two will be very similar to the data collected in Year One. The same battery of baseline assessments from Phase I will be administered. We will add frequency counts of student annotations and the codes they assign to annotations to our correlation testing.

    Phase III Retrospective Analysis. As we finish Year Two, we will examine all the data collected in both formative design years to inform final changes before the pilot study. In Year Two we did not include the Phase I lessons, which provided an exploration of self while also scaffolding the blogging platform. A comparison of Year One and Year Two field notes will provide insight into the types of supports students and teachers will need. We will also compare demographic data between the two years. Specifically, we will seek to triangulate the claims and evidence students use with the specific artifacts from learning activities, field notes, and platform uses.

    The retrospective analysis will first occur in December, after the intervention concludes in the four classrooms that completed the lessons. Focus will be placed on analyzing the results of cognitive labs and revising the avatar scripts. Then, after the conclusion of Year Two, the scientific advisory board will gather one last time to make final revisions based on the second cycle of learning activities delivered in Year Two.

    2.4.3 Desired Outcomes. After the conclusion of the Year Two formative design experiment, we will have greater evidence that our theory of change leads to desired outcomes. First, with a sample size between 100 and 120 in Year One, we hope to derive reliability estimates of our measures before undertaking the pilot study in Year Three. This will help to ensure we can estimate effect sizes that would indicate we are progressing toward our outcomes in the theory of change.

    At the conclusion of Year Two, we will have developed and recorded avatar scripts for four different argumentative topics in the humanities and social studies, for a total of 20 different biased read alouds.

    2.5 Pilot Study in Year Three

    In Year Three of the grant, we will perform a pilot study testing both the learning activities and the blogging platform developed in the first two years of the project. The goals of the study will be to provide evidence on the relationship between source evaluation and argumentative writing, and on any possible effects our learning interventions may have.

    2.5.1 Participants.

    This will be an approximate sample size of 300 students. According to Tabachnick and Fidell (2001), a sample of 300 participants exceeds their guidelines for a t-test using a Cohen’s d effect size of 0.3 and also for any possible regression analysis of 100 + m, where m equals the number of predictors. This sample size is also adequate for a regression model with one dependent variable and three independent variables with α = .05, a desired power of 0.8, and an anticipated effect size of 0.15 (Sloper, 2010). This anticipated Cohen’s f² is a medium effect. The power level was chosen to ensure adequate sensitivity to detect the anticipated effect (Sloper, 2010).
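    The regression power figure above can be checked with a short script. This is a sketch rather than the cited Sloper (2010) calculator: it searches for the smallest total N at which a three-predictor regression with Cohen's f² = 0.15 reaches 80% power at α = .05, assuming the common λ = f²·N parameterization of the noncentral F distribution.

```python
# Minimal power-analysis sketch for a fixed-effects multiple regression.
from scipy.stats import f as f_dist, ncf

def required_n(f2=0.15, predictors=3, alpha=0.05, target_power=0.80):
    u = predictors
    for n in range(u + 2, 1000):          # need at least u + 2 observations
        v = n - u - 1                     # denominator degrees of freedom
        nc = f2 * n                       # noncentrality, lambda = f^2 * N
        f_crit = f_dist.ppf(1 - alpha, u, v)
        power = 1 - ncf.cdf(f_crit, u, v, nc)
        if power >= target_power:
            return n
    return None

print(required_n())  # roughly 77 under these assumptions
```

    Under these assumptions the minimum lands near the conventional figure of about 77 participants, so a pooled sample of 300 leaves ample headroom.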

    Setting. The pilot study will take place in four schools across two states, with two schools in each state. We have chosen to focus on middle school students due to federal child data protection laws and alignment with curriculum goals. There will be one classroom in each school assigned to one of three conditions: Condition A, Condition B, and control. The three groups will together comprise approximately 300 students. While this sample size is large for a Goal Two: Development grant, it will provide a balanced design to strengthen our estimations and allow us to estimate the reliability of our measures when estimating effect sizes to track whether our theory of change will lead to our hypothesized educational outcomes.

    2.5.2 Measures

    Assessments in the pilot study will include the AWC-SBA, the COIL, a survey of self-efficacy of writing, and the internet use survey. All reliability estimates will be conducted after the Year Two Formative Design. We will also have access to frequency counts of student annotations to establish validity of our instruments.

    2.5.3 Methods. The pilot study will use a switch replication design with two treatment conditions and a control. Both treatment groups will start with the assessment battery and then be given access to the blogging platform. Group A will begin the eight-week intervention while Group B completes normally scheduled classroom activities with the option of using the blogging platform. The control group will complete the battery of assessments in the third testing window. A research assistant will observe the control group four times throughout the intervention and interview the classroom teacher in order to capture regular classroom writing instruction practices.

    | Group | Test 1 | Period One (10 weeks) | Test 2 | Period Two (10 weeks) | Test 3 |
    | --- | --- | --- | --- | --- | --- |
    | Condition A | X | Students get blogging platform; complete Phase I and II | X |  | X |
    | Condition B | X | Students get blogging platform | X | Complete Phase I and II | X |
    | Control |  |  |  |  | X |

    2.5.4 Analysis

    The data analysis examining the effects of instruction on source evaluation and scores on the AWC-SBA will involve several models. We will begin by testing the classical assumptions for a repeated measures design. First, we will test whether either condition had learning gains over a control group, which will consist of the same sample size and be given the three assessments at the final time point.

    We will then calculate and run correlations and consider the intraclass correlations (ICCs) of our variables at each time point. Given that this is a time-series intervention, the scores will not be independent of each other: instruction that impacted scores on the first assessment may also impact scores on the third.
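    The ICC check can be illustrated with a one-way ANOVA estimate, ICC(1), computed over each student's scores across the repeated assessments. The scores below are invented for illustration, not project data.

```python
# One-way random-effects intraclass correlation, ICC(1).
import numpy as np

def icc_oneway(scores):
    """scores: n_students x k_timepoints array; returns ICC(1)."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    ms_between = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Students whose scores track together across timepoints yield a high ICC,
# signalling that the repeated scores are not independent.
scores = np.array([[10, 11, 12], [20, 21, 19], [30, 29, 31], [15, 16, 14]])
print(round(icc_oneway(scores), 3))  # prints 0.985
```

    A high ICC like this would argue for analysis methods that model the dependence between timepoints rather than treating scores as independent observations.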

    Next, we will conduct multiple regression models that test for mean difference scores on the AWC using each of the dependent variables while conditioning on two covariates: prior internet use and offline writing ability.

    If there is a significant difference between the means with a medium to large effect size, we will have evidence that our theory of change led to the expected outcomes. If we do not find a statistical difference, we will at least establish that our intervention did not hinder students’ progress when compared to their peers. If the control group has a significantly greater average, then we will have practical evidence that at some point our interventions did not lead to student growth.

    Then, to answer our research questions, we will use a series of t-tests at each time point. To answer research question one, the dependent variable will be the self-efficacy measure given at three timepoints (Time: 1, 2, 3), and the independent variable will be the order factor that represents the sequence of instruction for each student (Order: blogging platform with intervention, intervention after having blogging platform). Covariates will include a proxy of prior knowledge of the internet, using a self-report measure of internet frequency that has been used in previous studies (Leu et al., 2015), and the previous year’s scores on the Smarter Balanced Consortium writing assessment.
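    The per-timepoint comparison can be sketched as below, using synthetic placeholder scores rather than project data. Note that incorporating the covariates named above (internet use, prior writing scores) would require ANCOVA or regression rather than the plain t-tests shown here.

```python
# Sketch: compare the two order groups on a measure at Times 1-3.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# rows: students; columns: Time 1, 2, 3 (hypothetical self-efficacy scores)
order_a = rng.normal(loc=[3.0, 3.4, 3.8], scale=0.5, size=(50, 3))
order_b = rng.normal(loc=[3.0, 3.1, 3.6], scale=0.5, size=(50, 3))

for t in range(3):
    stat, p = ttest_ind(order_a[:, t], order_b[:, t])
    print(f"Time {t + 1}: t = {stat:.2f}, p = {p:.3f}")
```

    With three tests per research question, a multiple-comparison correction (e.g. Bonferroni) would be a sensible addition in the actual analysis.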

    To answer research question two, a repeated measures design will be used: the dependent variable will be the AWC measure given at three timepoints (Time: 1, 2, 3), and the independent variable will be the order factor that represents the sequence of instruction for each student (Order: blogging platform with intervention, intervention after having blogging platform). We will again use the survey of internet use and prior writing ability as covariates.

    To answer research question three, the dependent variable will be scores on the COIL given at three timepoints (Time: 1, 2, 3), and the independent variable will be the order factor that represents the sequence of instruction for each student (Order: blogging platform with intervention, intervention after having blogging platform). We will again use the survey of internet use as a covariate.

    In conjunction with the SAB, specifically Jonna Kulikowich, a variety of model fits will be tested to see what best explains variance in argumentative writing skills. This will allow us to estimate the effect of time. Other mediating variables that may be included in the model include prior knowledge, as measured using concept maps, and critical evaluation scores on the COIL. These data are intended to complement and extend the qualitative and quantitative data collected during the intervention phase. These statistical comparisons may be made, but they are not intended to carry the weight of a randomized experimental design or analysis using HLM procedures, which would be more consistent with a Goal Five project. Furthermore, we do not have the sample sizes to do so. However, it does allow us an opportunity to field-test these procedures, especially in Year Three, with an eye toward a subsequent Goal Three project. We also expect that these assessments may be useful, not just for data collection and analysis, but also as useful tools for teachers who wish to formatively assess students’ progress.

    A switch replication design was chosen for the study given our firm belief that all adolescents will benefit from writing on their own domain and receiving additional instruction in digital literacies. While a quasi-experimental study with a control group would allow us to estimate whether treatment groups performed better than no instruction at all, the required sample size for such a design is better suited to a Goal Three research project.

    Furthermore, a switch replication design lets us better understand any possible mediators between time and order of instruction. Previous research introducing technology found an initial drop in performance on measures of reading comprehension (Leu et al., 2016). This finding is mirrored in qualitative studies of distributed classrooms using blogging technologies similar to our #QTW intervention. Our proposed model will allow us to explore any mediators of time and order that should be further examined when we seek to scale up the intervention as a Goal Three project.

    2.6 Practicality, Fidelity, and Cost Study in Year Four

    It is essential that any instructional model be practical. Beyond the data analysis in Year Four, we will also focus on practicality, which we define as having reproducible fidelity and a low cost. Time and money are precious commodities during the school year. For this proposal, we have thought carefully about this issue, recognizing that no instructional model will be adopted unless it is ultimately practical to stakeholders. First, the implementation of any model must have fidelity to ensure its success, yet each classroom is a unique context. We address this tension by creating choice within a constrained system. We will develop a series of learning activities and a similar tech set-up, but students and teachers can adapt the blogging to their local platforms. To measure fidelity in the Year Three pilot study, the research team will use a checklist of “Qualities of Hybrid Writing Spaces” developed in conjunction with the scientific advisory board after the formative design experiments in Year Two.

    Second, we have already piloted an initial version of our model (McVerry, 2015) and draw on a long history of distributed courses. Scholars have already looked closely at the initial version of the model and made changes to make it more practical for a whole-school context.

    We will also conduct a cost analysis in Year Four. We have built an online coach into our model. While some see this as an expense districts could not afford, we believe coaches who are contracted rather than employed by districts provide a significant cost-saving opportunity. The cost of two districts splitting a part-time coach is vastly cheaper than each having a full-time employee with the associated fringe benefits.

    To complete a cost-benefit analysis in Year Four, we will first develop criteria for ascertaining what is considered a cost. If, for example, a graduate assistant or PI must teach in Year Three, the cost of one day for a secondary teacher will be added. The technology costs are fixed and easy to project to scale.

    We will then look at our benefits in terms of cost. In order to do this, we will examine the student learning outcomes as measured by the AWC. These will then be compared to other writing instruction curricula that have had similar outcomes.

    3.0 Resources

    3.0.1 Southern Connecticut State University is one of four regional public comprehensive universities that, together with twelve community colleges, form the Connecticut State Colleges and Universities (CSCU) system. In its capacity as a primarily undergraduate institution, SCSU holds NSF RUI, NIH AREA, and IES grants, and its Office of Sponsored Programs and Research manages approximately 200 active projects across all internal and external funders. SCSU will provide Dr. McVerry with the resources, space, graduate student support, and personnel support he requires throughout the administration of the project.

    3.0.2 Central Michigan University encourages research, scholarship, and creative activity and promotes the scholarly pursuit and dissemination of new knowledge, artistic production, and applied research. Through its support of research, the university enhances the learning opportunities of both its undergraduate and graduate students and promotes economic, cultural, and social development. The university will provide Dr. Hicks with the resources, computers, and recording equipment to conduct field work. The university will also provide the space and will host the essay raters. Graduate assistants can be drawn from both the master’s and doctoral programs in educational technology.

    3.0.3 St. John's University is a Doctoral Research Intensive institution, granting the Ph.D. in a number of fields, including Literacy and Curriculum and Instruction. Many classrooms and meeting rooms feature state-of-the-art collaboration and communication technologies that provide the means to collaborate with field-based education partners, including live telecasts. St. John’s University will also provide personal laptops and field equipment for research.

    4.0 Personnel

    4.1 Key Personnel

    4.1.1 Dr. J. Gregory McVerry, Jr. is a researcher and educator studying the impact of technology on the literacy skills of today’s young and adolescent readers. Dr. McVerry received the Joanne Finn Early Career Fellowship, through which he and Dr. Hicks developed early prototypes of the lessons to be refined in this study. Dr. McVerry served as a Neag Fellow at the University of Connecticut New Literacies Lab. During his time as a fellow, he worked on numerous IES research projects investigating online reading comprehension and online reading comprehension assessment. As part of this IES grant, Dr. McVerry oversaw the creation of a simulated web environment. He also serves on the W3C Credible Web Community Group. The W3C is the standards board governing the web, and the mission of the Credible Web Community Group is to help shift the web toward more trustworthy content without increasing censorship or social division. Dr. McVerry is also a well-respected researcher in the field of literacy; he has published research exploring pedagogy in distributed classrooms with MIT Press and research into instrument development in the Journal of Literacy Research. He has an extensive publication record in the creation and validation of instruments, from surveys to forced-response tests and observational rubrics. McVerry is also respected in open source technology communities. In 2015 he was recognized by Mozilla, the makers of Firefox, as one of the 50 most important people protecting the open web. In 2019 Mozilla again recognized Dr. McVerry as an open leader and is supporting the rollout of an Open Educational Resources (OER) network in Ghana.

    Dr. McVerry will work with Jonna Kulikowich, the scientific advisory board methodologist, to handle all data analysis. He will also work with Dr. Abrams on the data collection in Connecticut schools.

    4.1.2 Dr. Mary Brown served as full-time research assistant on NSF/NIH-funded research in cognitive psychology (Princeton University, 1989-1994), has written and managed grants including from US DOE and CT-OHE/TQP (2017-2018); has been a partner responsible for designing and carrying out assessment components and co-planning and facilitating a one-week institute on an IMLS-funded grant to Westport Library (2013-2015) and as administrator of the educational experience component and assessment coordinator on a US DOE/FIPSE-funded grant to Voices of September 11th (2009-2012).

    Dr. Brown will serve as the grant coordinator and handle all logistical tasks such as IRB forms, confidential data, and communications. She will coordinate the hiring and supervision of all student workers and GAs. Dr. Brown will also run any reliability and scoring trainings.

    4.1.3 Dr. Troy Hicks is a researcher and educator working at the intersection of digital literacies and teacher professional development. Dr. Hicks has published findings of his research in leading journals including English Journal, English Education, Research in the Teaching of English, and the Journal of Adolescent & Adult Literacy, and has presented at conferences including the National Council of Teachers of English, the International Society for Technology in Education, the Literacy Research Association, and the American Educational Research Association. Hicks directs CMU’s Chippewa River Writing Project, a site of the National Writing Project. He frequently conducts professional development workshops related to writing and technology, and has been the PI or Co-PI on over a dozen National Writing Project and Title II grants. Also, Hicks is author of Crafting Digital Writing (2013) as well as a co-author of Because Digital Writing Matters (Jossey-Bass, 2010), Create, Compose, Connect! (Routledge/Eye on Education, 2014), Connected Reading (NCTE, 2015), Research Writing Rewired (Corwin Literacy, 2015), and Argument in the Real World (Heinemann, 2017).

    He will serve as a co-principal investigator, focusing most of his attention on 1) developing lessons for teachers, 2) visiting classrooms for observations and think-aloud protocols with students, and 3) collaborating with the National Writing Project to administer the assessment of student work with the Analytic Writing Continuum.

    4.1.4 Dr. Sandra Schamroth Abrams is a thought leader and researcher of adolescents’ digital literacies. A hallmark of her work is the examination of meaning making in, across, and beyond digital and nondigital spaces. Her research has been featured in leading journals, including Teachers College Record, Journal of Literacy Research, the Journal of Adolescent & Adult Literacy, Language & Linguistics, and Educational Media International. She is the author of Integrating Virtual and Traditional Learning in 6-12 Classrooms: A Layered Literacies Approach to Multimodal Meaning Making (Routledge), co-author of Conducting Qualitative Research of Learning in Online Spaces (SAGE) and Managing Educational Technology: School Partnerships and Technology Integration (Routledge), and co-editor of Bridging Literacies with Videogames (Sense). Forthcoming collaborative publications include An Integrated Mixed Methods Approach to Nonverbal Communication Data: A Practical Guide to Collection and Analysis in Online and Offline Spaces (Routledge). Abrams also is a founding co-editor of the Gaming and Ecologies Series (Brill) and an Associate Editor of the International Journal of Multiple Research Approaches. Abrams was a Research Team Member for the Institute of Education Sciences Goal II Grant: Assess-As-You-Go [Scholar] Project, and she was a Research Team Member and Professional Development Facilitator for a related Literacy Courseware Pilot in a Bill & Melinda Gates-sponsored study. Abrams also served as a Technology Consultant and Assessment Coordinator for a New York City Department of Education Award: Learning and Technology Grant.

    Dr. Schamroth Abrams will serve as a co-principal investigator, focusing most of her efforts on (1) visiting classrooms for observations and think-aloud protocols with students, (2) analyzing data, and (3) writing and reporting findings.

    4.2 Key Consultants

    4.2.1 Marcus Povey will serve as lead developer on the project. He has previous full-stack development experience creating blogging platforms similar to the one we will use. Marcus took the development lead on KQED Teach, a lightweight course management platform he built for one of the largest public media broadcasters in the United States. He built large parts of the core platform, based on Known, integrating Open Badges, Elasticsearch, and many other enhancements, all technologies to be included in this platform.

    4.2.2 Alan Levine will serve as the online instructional coach. He has served in similar roles for the University of Mary Washington and the University of Ontario in support of their rollouts of Domain of One’s Own across college campuses. Levine has served as a faculty member teaching some of the first distributed courses on the web. Most notably, Alan was instrumental in developing Digital Storytelling 106 (#ds106), the longest continuously running class taught through blogging and RSS on the web.

    4.2.3 Jon Udell will serve as the annotation developer. Jon is the technical lead of Hypothesis, a nonprofit open annotation platform used by thousands of educators. In Year One, Jon will map the annotation spec to the student websites and social reader. In Year Two, Jon will apply Credible Web Community Group vocabularies to student annotations, run correlation studies, and, if ICCs are high, complete a multivariate regression analysis. In Year Four, he will help develop the cost estimates for districts that may want to use tools developed in the grant.

    4.3 Scientific Advisory Board

    Dr. Elyse Eidman-Aadahl is Executive Director of the National Writing Project (NWP), where she draws upon 15 years of experience designing and leading national programs, partnerships, and action-learning efforts for the NWP and other educational organizations. Her scholarship includes studies of literacy and learning in the context of our new digital, networked ecology. Dr. Eidman-Aadahl will advise the team in the use of the AWC-SBA.

    Dr. Kevin Leander is a Professor of Literacy and Technology at Vanderbilt University’s Peabody College. Dr. Leander has published dozens of articles that have appeared in leading literacy journals, such as Reading Research Quarterly and the Journal of Literacy Research. Dr. Leander’s current research focuses on literacy practices as embodied spaces. He will serve on the scientific advisory board to help us understand the role of space in shaping hybrid learning environments.

    Dr. Jonna M. Kulikowich is Professor of Educational Psychology at Pennsylvania State University and Associate Editor of the Journal of Educational Psychology. She is an international authority in reading comprehension assessment and research methods. She will serve as a consultant in these areas.

    Aaron Parecki is the author of OAuth 2.0 Simplified and maintains oauth.net. He regularly writes and gives talks about OAuth and online security. He is an editor of several W3C specs, and is the co-founder of IndieWebCamp, a yearly unconference focusing on data ownership and online identity. Aaron was the co-founder and CTO of Geoloqi, a location-based software company acquired by Esri in 2012. His work has been featured in Wired, Fast Company, and more.

    Dr. David Reinking is an emeritus Distinguished Professor of Teacher Education at Clemson University, currently with a courtesy appointment as an Adjunct Professor at the University of Georgia’s College of Education. He served as an editor of Reading Research Quarterly and the Journal of Literacy Research. He is a past-president of the Literacy Research Association and is an elected member of the Reading Hall of Fame. He is author of the book On Formative and Design Experiments (Teachers College Press) and widely recognized as the leading expert on formative and design experiments within the field of literacy education. Dr. Reinking will advise the team on their formative design.