Guzey et al. (2021) discuss their investigation of the impact of the tool Edpuzzle on achievement in an undergraduate Assessment and Measurement course. Edpuzzle is a tool that enables technology-enhanced assessment. The tool became more heavily used during the pandemic, when schools went remote and instructors wanted to track seat time and better understand how students were engaging with their content. The study employed an exploratory research design with a pre- and post-test control group. Guzey et al. (2021) explain that tools like Edpuzzle provide a practical approach to technology-enhanced assessment because instructors can create enhanced item types without coding. Users can quickly create new content or customize existing content based on their objectives. Edpuzzle also provides immediate feedback and scoring for students and enhanced reporting for the instructor. Guzey et al. (2021) review the affordances of using Edpuzzle that have been discussed in the literature. Specifically, the researchers claim that the research shows that Edpuzzle increases student motivation, helps students construct conceptual knowledge, improves academic achievement while reducing cognitive load, and supports self-regulation skills. Having identified Edpuzzle as a web 2.0 tool, the researchers also point out the positive impact web 2.0 tools have been linked to on student achievement in the literature. For this reason, this study sought to elaborate on the impact of using Edpuzzle, a web 2.0 tool, in an undergraduate course.
Guzey et al. (2021) took a pre- and post-test matched control groups design approach. One hundred sixty-six undergraduate students participated in the study. Groups were matched based on their teaching assignments and randomly assigned as control and experimental groups. The control group was made up of science and social studies teacher candidates, while the experimental group was made up of math and Turkish language teacher candidates. An achievement test from a nationwide public personnel selection exam used in earlier years was used to collect achievement data for the Assessment and Measurement course. Quizzes consisted of multiple-choice and short-response open-ended questions and were prepared for the following topics: fundamentals, statistics/central tendency, reliability, and validity. A rubric was used to evaluate students' performance on a performance assessment that incorporated writing items on a chosen topic and developing an achievement test. The analytics gathered from student engagement with the tool were also used for analysis. A pre-test was given to both groups before the experimental intervention. Those in the experimental group watched videos and took quizzes in Edpuzzle each week. Student performance was used to identify gaps in the learning. Students in the experimental group also received corrective feedback and were exposed to instructional strategies like group activities, presentations, discussions, peer learning, and question-answer sessions. In the control group, the instructor taught in a traditional way.
The results showed no significant difference between the control and experimental groups on their pre-test scores. However, there was a significant difference between the groups on their post-test scores (with a medium-level effect size). The experimental group also performed significantly better than the control group on the topics of fundamental concepts, central tendency, central dispersion, and validity. The groups did not perform significantly differently on the performance assessment task. Guzey et al. (2021) conclude by explaining that the results suggest that the group that utilized Edpuzzle as a web 2.0 tool achieved significantly better results than the control group that did not use the web 2.0 tool. Potential reasons for the non-significant difference in student achievement on the performance task are suggested as an avenue for future studies.
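To make the "medium-level effect size" reported for the post-test difference concrete, the sketch below computes Cohen's d, one common standardized effect-size measure for comparing two group means. This is purely illustrative and not the study's actual analysis or data; the score lists are made-up placeholder values.

```python
# Illustrative sketch (not from the study): Cohen's d for the
# difference between two groups' post-test scores.
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) /
                 (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

experimental = [72, 80, 85, 78, 90, 76, 84]  # hypothetical post-test scores
control      = [72, 76, 80, 74, 86, 73, 79]  # hypothetical post-test scores
d = cohens_d(experimental, control)
# By a common rule of thumb, d near 0.5 is considered a "medium" effect.
print(round(d, 2))
```

With these placeholder scores, d lands in the medium range, which is the magnitude of group difference the study reports for the post-test.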
Students in the experimental group received corrective feedback and were exposed to instructional strategies like group activities, presentations, discussions, peer learning, and question-answer sessions. In the control group, Guzey et al. (2021) state that the instructor taught in a traditional way. What "traditional" instruction is and looks like is not clearly defined by the researchers. The researchers also fail to acknowledge the limitations of the study. The research focused on adult learners participating in an undergraduate teaching course that incorporated opportunities for face-to-face learning experiences. These results cannot be generalized to adolescents or to online teaching environments. This could also be mentioned as an area for future research, since it was noted in the introduction that interest in the tool stemmed from a desire to make sure students were really watching videos during the pandemic. While this behavior is not limited to adolescents, it does seem more characteristic of them than of adult learners.
While web 2.0 tools are not an area of research I am interested in pursuing for my dissertation, I was interested in finding out what the research says about tools like Edpuzzle, Playposit, and Discovery Education Experiences video quizzes. Teachers have gravitated toward these tools because of their analytics, reporting, and progress-monitoring functionality, but little is discussed about their impact on teaching and learning. While this study does show that Edpuzzle had a positive impact on undergraduate students' achievement in an Assessment and Measurement course, I'm not convinced these same results could be replicated in a K-12 environment. I also think that some concepts lend themselves better to video instruction, and some learners prefer to engage with the learning through video while others do not. This leaves a great deal of gray area in terms of the value tools like Edpuzzle add to teaching and learning.
Sulak Güzey, S., Akdoğdu Yıldız, E., Demir, M. C., & Aksu-Dünya, B. (2022). Impact of EDpuzzle use on the Assessment and Measurement course achievement. Journal of Hasan Ali Yücel Faculty of Education / Hasan Ali Yücel Egitim Fakültesi Dergisi (HAYEF), 19(1), 52–60. https://doi.org/10.5152/hayef.2021.21045
A Project-based digital storytelling approach to improving students’ learning motivation, problem-solving competence and learning achievement
Researchers have found that several problems emerge when a project-based learning approach is applied in large classrooms: for instance, difficulty motivating students, helping them concentrate on the learning tasks, making connections between new content and prior knowledge, and effectively engaging in cooperative learning activities. While technology has resolved many of the problems related to the efficiency of cooperative learning activities, barriers to project-based learning such as promoting students' motivation and concentration on the learning tasks persist. Another challenge is the need to develop an effective instructional strategy for conducting project-based learning activities. Researchers have identified storytelling as an effective approach to overcoming these challenges. For this reason, Hung et al. (2012) explain that their study "aims to propose a project-based digital storytelling approach, which combines project-based learning and digital storytelling strategies to conduct learning activities for elementary school students" (p. 369). The experiment sought to verify the effects of a project-based digital storytelling approach on enhancing students' motivation, problem-solving skills, and overall achievement.
Through a review of the related literature, Hung et al. (2012) present several understandings about project-based learning. First, project-based learning is an instructional strategy that appeals to students. Second, project-based learning enhances collaboration and cooperation between group members, which in turn reinforces learning cognition and promotes learning achievement. In addition to student appeal and enhanced collaboration and cooperation, understandings about the design of project-based learning experiences are discussed as well. For instance, scholars note that situations need to be established to guide students when designing project learning activities. In this study, Hung et al. (2012) integrate the digital storytelling strategy into the project-based learning approach.
Digital storytelling as a strategy integrates multimedia applications and software with storytelling techniques to help learners become involved in a learning situation. Hung et al. (2012) note that in previous studies digital storytelling has been perceived as an effective strategy for encouraging cooperation and knowledge construction in classrooms. This study is significant because previous studies have not researched the effects of integrating digital storytelling and PBL on problem-solving competence and learning achievement. For this reason, this study investigates how digital storytelling can be implemented to develop PBL learning experiences that include taking pictures with digital cameras, developing a storyline using the captured images, producing a video with a soundtrack, and presenting the final product.
A pre-test and post-test quasi-experiment with non-equivalent groups was conducted. The different learning modes served as the independent variables, with the experimental group participating in PBL with digital storytelling and the control group experiencing the traditional approach to PBL. The science learning motivation scale, the problem-solving competence scale, and the science course achievement test represented the dependent variables. Regarding the effect of project-based learning with digital storytelling on science learning motivation, the score of the experimental group proved superior to that of the control group. Hung et al. (2012) explain that this score shows that project-based learning with digital storytelling can effectively enhance students' science learning motivation. Similarly, the effect of project-based learning with digital storytelling on problem-solving competence proved superior for the experimental group. From this result, Hung et al. (2012) again draw the conclusion that digital storytelling can effectively enhance students' problem-solving competence. Finally, the results reveal that project-based learning with digital storytelling can also effectively enhance students' science learning achievement. No significant differences were found when looking at the results in relation to gender.
Hung et al. (2012) conclude with a discussion of the limitations, noting it could be difficult to generalize the findings to other courses or subjects because this study was done in an elementary school science course. The software used in this study was also highly structured. More research would need to be done on the effects of PBL with digital storytelling using flexible software applications.
Hung et al. (2012) begin the introduction with a broad problem in the field. Specifically, they explain that "researchers have tried to develop various computerized systems or guiding strategies to assist students in improving their learning performance" (p. 368). However, effective instruction is required to cultivate students' key competences--especially instruction that leverages technology to promote student-centered learning. A comprehensive review of related project-based learning and digital storytelling literature from the preceding decade follows. The pre-test and post-test quasi-experiment with non-equivalent groups, which used the different learning modes as the independent variables, brings into question whether the characteristics of the participants in the elementary science course provide a different explanation for the outcomes. This is because in non-equivalent group designs researchers select groups that merely appear similar.
I found the investigation into the effects of a project-based digital storytelling approach on enhancing students' motivation, problem-solving skills, and overall achievement interesting. Having taught a digital literacy and media design course that embraced PBL for ten years, I could relate to the difficulties with implementing project-based learning that were outlined in the introduction and review of related literature. Project-based learning places the teacher in the role of the guide on the side while students engage in inquiry and grapple with complex problems. This type of learning environment can be very engaging for some students, while others tend to find the lack of structure overwhelming and experience difficulty focusing on the learning goals. Hung et al. (2012) note that the results of this study might be difficult to generalize because the technology used in the study was highly structured. Given that the ISTE Standards for Students promote students choosing their own technology applications based on a target audience and their message, I would be interested in finding out whether the results of this study could be replicated when students are able to choose the technology they use to engage in the process of digital storytelling.
Hung, C. M., Hwang, G. J., & Huang, I. (2012). A Project-based digital storytelling approach to improving students’ learning motivation, problem-solving competence and learning achievement. Educational Technology & Society, 15(4), 368–379.
Participants did not require any technical support or onboarding for designing hypermedia at the beginning of the study. The majority of students had graduated from computer departments in vocational or technical high schools. An interview schedule was developed to collect qualitative data on the students' opinions and perceptions of the use of hypermedia as a cognitive tool in a constructivist learning environment. This schedule consisted of sixteen questions around the following topics:
Eight groups were interviewed in total at the end of the semester. The eight interviews were transcribed into interview files for content analysis. A predetermined set of terms from the literature was used for coding, which in turn was used to classify and organize the data. Themes were then applied to make meaningful connections among the codes. Finally, the coded data were given a description under the themes of problems and difficulties faced by students, for interpretation and discussion.
Overall Assessment of the Constructivist Learning Environment
Learning Activities Conducted in the Classroom
Assessment of the Students' Performance
Based on these results, the study reveals that "students found using hypermedia as a cognitive tool to be effective for constructing an understanding of the content" (p. 115). Yildirim notes that the amount of content covered in a learning environment of this nature must be taken into account. Another result of this study is related to group performance: the results show that most of the students benefited from the group work. Yildirim suggests that instructors provide more learning activities to make the strategy more effective. He concludes with a suggestion for future research, stating that "further research studies are needed to examine the full effects of hypermedia as a cognitive tool for knowledge acquisition, in comparison to traditional classroom instruction and other computer-based cognitive tools" (p. 116).
Yildirim, Z. (2005). Hypermedia as a Cognitive Tool: Student Teachers’ Experiences in Learning by Doing. Journal of Educational Technology & Society, 8(2), 107–117.
Shapiro and Niederhauser (2004) explain that learning from hypertext is more complicated than learning from traditional text. While many of the elements are the same--character decoding, word recognition, sentence comprehension, etc.--research on hypertext in education focuses on the unique features that add complexity. First, hypertext is non-linear. For this reason, information within a hypertext may be consumed by each user in a unique sequence. This places greater metacognitive demands on the reader, because he or she must monitor comprehension, determine what information will be needed to close information gaps, and make decisions about where best to look for that information in the text. Second, user traits (goals, motivation, and prior knowledge) are also factors that interact with the characteristics of hypertext and influence learning outcomes. Shapiro and Niederhauser aim to "identify the variables that affect HAL most strongly and the mechanisms through which this occurs" (p. 605). They claim that the two theories that have had the greatest impact on research and our understanding of the process are the construction-integration model (CIM; Kintsch, 1998) and cognitive flexibility theory (CFT; Spiro, Coulson, Feltovich, & Anderson, 1988; Spiro, Feltovich, Jacobson, & Coulson, 1992).
The CIM of text processing (Kintsch, 1988) is a three-stage process of text comprehension:
According to Shapiro and Niederhauser, the third stage of the process--creation of the situation model--is significant to our understanding of learning from hypertext. It is also noted that hypertext promotes active learning because the learner must choose which links to click on to interact with the material. The construction-integration model has become many hypertext researchers' standard framework for understanding hypertext-assisted learning--specifically, user behaviors such as link choice, navigation patterns, and metacognitive practices.
Cognitive Flexibility Theory "is based on the supposition that real-world cases are each unique and multifaceted, thus requiring the learner to consider a variety of dimensions at once" (p. 606). In other words, the prior knowledge necessary to understand new knowledge is derived from aspects of a variety of combined prior experiences and applied to the new situation. The implication of this model, then, is that advanced learning takes place as a consequence of active learning, the use of prior knowledge, and the construction of new knowledge for each new problem. CFT is relevant to hypertext because a learner can access a single document from multiple other sites. In doing so, he or she will come to that document with multiple perspectives. In turn, the mental representations resulting from repeated exposure to ill-structured hypertext will be multifaceted, and therefore one's ability to use that knowledge should be more flexible. Shapiro and Niederhauser claim that CFT offers an explanation of meaningful learning on the part of advanced learners.
Numerous cognitive factors associated with reading and learning from hypertext show that there are distinct differences between reading traditional text and reading hypertext. Factors include--
Shapiro and Niederhauser summarize that the "nature of hypertext renders HAL a more cognitively demanding mode of learning" (p. 608). For this reason, the use of metacognitive strategies is all the more important in this context. However, several studies have shown that minimal training and/or automated prompts may be used to promote metacognitive strategies and influence learning outcomes with some degree of success.
Interest in learning with hypertext stems from the notion that hypertext information structures may mirror the semantic structures of human memory. There is little evidence, though, that the simple act of working with a hypertext designed to mirror an expert's conceptual understanding of a topic can lead to a direct transfer of expert-like mental representations to the reader. Research shows conflicting results about the effect of system structure (e.g., the organization of links on pages, maps, overviews, and indexes) on learning. While some studies have shown advantages to using a highly organized system structure such as a hierarchy, others have actually found advantages of working with ill-structured hypertexts. Yet other existing studies show the pitfalls of an ill-structured system design. Despite these contradictions, two general conclusions have been drawn from the literature and are said to explain the ways in which these variables interact to impact learning. First, "well-structured hypertexts may offer low-knowledge learners an introduction to the ways in which topics relate to one another and an easy-to-follow introduction to a domain" (p. 611). And second, ill-structured hypertexts are beneficial to advanced learning for active, engaged learners. To conclude, research on organizing tools and system structure has shown that well-defined structures like hierarchies are helpful if the goal is to achieve basic, factual knowledge, while ill-structured systems are often beneficial for deep learning--especially for advanced learners.
Researchers have also attempted to identify learning variables like individual knowledge and engagement, reading patterns, and learning goals. In regard to individual knowledge and engagement, those with limited prior knowledge are unable to establish information needs in advance. Shapiro and Niederhauser explain that individual differences in learning style are often important to learning outcomes because they interact with other factors such as system structure. As for reading patterns, researchers have sought to identify patterns of reader navigation as readers work through hypertext. They found that learner interest and domain knowledge had a notable impact on readers' navigational strategies. They also noted that knowledge seekers tend to learn more from the text than feature explorers.
Hypertext navigation is not always systematic and purposeful, though. A great deal of research has attempted to address what is called the keyhole phenomenon. In short, research on the keyhole phenomenon examines the effect of different types of user interfaces on user disorientation. Shapiro and Niederhauser summarize that the need to navigate through a hypertext is a defining characteristic that differentiates it from reading and learning with traditional text. Therefore, navigation strategies may influence what the reader learns from the text, which in turn may be influenced by the conceptual difficulty associated with the content and the learning task (p. 614). The literature also consistently shows that learning with hypertext is greatly enhanced when the learning goal is specific.
According to Shapiro and Niederhauser, the bulk of the related literature surrounds techniques in user modeling. User modeling refers to any methods used to gather information about users' knowledge, skills, motivation, or background. Characteristics of users are used to alter system features like links and document content. These studies suggest a need for further investigation into the educational effectiveness of adaptive systems to determine which characteristics are most effectively used in user modeling and which system characteristics are most important to adapt.
Several theoretical issues surround HAL research, as well as methodological issues. The methodological issues stem from the difficulty of comparing and reviewing hypertext research in the absence of a unified, coherent framework for studying hypertext. Shapiro and Niederhauser argue that this creates two problems when trying to understand the hypertext literature. First, the text-based reading research foundation is compromised when extensive graphics and audio and video components are included in the hypertext. Second, issues comparing research studies emerge when our language about the field lacks precision. It should also be noted that methodological flaws have been widely reported in the literature. While a great deal of excitement surrounds hypertext as an educational tool, Shapiro and Niederhauser conclude with a reminder that there is very little published research on the technology that is related to education and learning. They call for future research in this area to generate a well-grounded understanding of the processes underlying HAL, and for a standardization of terminology and methodology to be developed.
Shapiro, A., & Niederhauser, D. (2004). Learning from hypertext: Research issues and findings. In D. H. Jonassen (Ed), Handbook of Research for Educational Communications and Technology (pp. 605-620). New York: Macmillan.
In their article "Design Experiments in Educational Research," Cobb et al. (2003) draw on prior understandings about conducting design experiments to share characteristics of the methodology and to describe what conducting a design experiment entails. Design experiments are an iterative process in which the "designed context is subject to test and revision" (p. 9). Design experiments are conducted to develop theories that target domain-specific learning processes. Special emphasis is placed on theories to reflect the view that "explanations and understandings inherent in them are essential if educational improvement is to be a long-term, generative process" (p. 9). Design experiments are also said to ideally end in greater understanding of a learning ecology by designing the elements of the complex system and predicting how these elements interact to support learning. In this way, design experiments aptly represent the complexity of educational systems. Cobb et al. note that design experiments move beyond tinkering with effective designs by focusing on a design theory that explains why designs work and by making recommendations for how they might be adapted to new circumstances.
Five crosscutting features apply to design experiments:
Several issues must be addressed when preparing for a design experiment. First, before conducting a design experiment, one must answer the question: What is the point of the study? Research teams should also draw on and synthesize the prior research literature to "identify central organizing ideas for a domain" (Cobb et al., p. 11). Other preparations include clearly defining the conjectured starting points, elements of the trajectory, and prospective endpoints, as well as formulating a design that embodies testable conjectures. The size of the research team and its expertise will vary.
To conduct a design experiment, the team must simply have the collective expertise needed to carry out the preparation procedures and conduct the experiment. Cobb et al. identify four important functions that will require the team's direct engagement.
Successful design experiments will also attend to the problem of measure. To conclude, Cobb et al. reiterate that the five crosscutting features outlined in the article are defining characteristics of a genre of science that holds great potential if researchers appropriately manage the preparation for, and difficulties associated with, conducting design experiments.
Given that the potential for rapid pay-off is high with design experiments, the five crosscutting features and critical components for successfully planning and conducting this type of research are invaluable. Design experiments are also said to ideally end in greater understanding of a learning ecology by designing the elements of the complex system and predicting how these elements interact to support learning. Both the crosscutting features and the complex nature of a learning ecology are developed with detailed examples that make the article valuable to anyone looking to better understand the various methods of research in educational technology.
Design experiments are certainly an area of educational research that has piqued my interest now that I understand they ideally end in greater understanding of a learning ecology. Barron (2006) defined a learning ecology as a "set of contexts found in physical or virtual spaces that provide opportunities for learning." Each context consists of a unique blend of activities, resources, relationships, and developing interactions. The research discussed by Barron in "Interest and self-sustained learning as catalysts of development: A learning ecologies perspective" had strong connections to the ISTE Student Standards (Global Collaborator and Knowledge Constructor). These standards guide a portion of my work as an instructional technology consultant for grades K-12. For this reason, all discussions that lead to a greater understanding of a learning ecology are of interest to me at this point in my doctoral journey.
Barron, B. (2006). Interest and self-sustained learning as catalysts of development: A learning ecologies perspective. Human Development, 49, 193-224.
Cobb, P., Confrey, J., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9-13.
González-Sanmamed, M., Muñoz-Carril, P.-C., & Santos-Caamaño, F.-J. (2019). Key Components of Learning Ecologies: A Delphi Assessment. British Journal of Educational Technology, 50(4), 1639–1655.
Does technology make you smarter?
In the article "Do Technologies Make Us Smarter? Intellectual Amplification With, Of, and Through Technology," Salomon and Perkins offer a three-way framework to answer the question: whether, and in what senses, do technologies make us cognitively more capable?
Consider how each of the following themes represents a way in which cognitive technologies might "make us smarter":
Effects with Technology
Effects with technology transpire when technologies have functionality that enables them to mirror intellectual functions. The effects then enable the user to form a partnership with the technology that "frees the user from distractions of lower-level cognitive functions" (p. 74). When this occurs, the effects with technology likely lead to improved intellectual performance (Perkins, 1993).
So, does technology make us smarter? Salomon and Perkins say it boils down to this: "Cognitive technologies--technologies that afford substantial support of complex cognitive processing--make people smarter in the sense of enabling them to perform smarter" (p. 76).
Effects of Technology
According to Salomon and Perkins, it's also important to consider whether experiences with cognitive technologies can develop cognitive capabilities that remain available without the tool at hand. While effects of technology can be positive or negative, they must persist for a period after the technology is no longer in hand. To show studies in support of effects of technology, Salomon and Perkins point to several cases. Research conducted in the 1980s, for example, explored how learning computer programming might enhance thinking. While findings varied, Salomon and Perkins say the work shows clear examples of effects of technology.
Effects through Technology
Here Salomon and Perkins build on the first two themes--effects with and effects of--that were previously explained by Salomon, Perkins, and Globerson (1991) and present a third theme for discussion--effects through technology--which they posit is necessary to address the impact of "radically transformative" technologies. Here they consider how technologies have impacted warfare or the construction of communities. Through the use of technologies, effects that would have otherwise been unimaginable have been achieved. Salomon and Perkins point to how the internet has transformed the nature of teamwork: effects through technology have made it possible for people to collaborate regardless of their geographic location.
Salomon and Perkins conclude by comparing the three themes to pieces of a puzzle. In other words, the themes are worth putting together to determine what relative magnitudes of impact we can anticipate and how quickly we can expect such effects to emerge. When pace is the point of comparison, effects with excels. This is also true when comparing magnitude of impact, because of the immediate payoff of effects with and the improvements made over time. Salomon and Perkins note that effects of technology are lesser in terms of both magnitude of impact and the pace at which the effects emerge. For these reasons, Salomon and Perkins answer the question "does technology make us smarter?" with a "nuanced yes."
Salomon and Perkins's three-pronged approach to thinking about the impacts of technology on cognition provides a simple framework that invites innovators to begin thinking more deeply about the potential affordances of technologies. Salomon and Perkins note that "it takes time for innovators to see the possibilities, time for early trials, time for a kind of Darwinian sifting of those new ways of working that truly offer a lot, and time for the new ways of working to pass into widespread use" (p. 81). A limitation of the framework is discussed in the conclusion, where Salomon and Perkins point out that they have shown examples of how effects of technology, effects with technology, and effects through technology can positively impact cognition in a controlled environment, when in reality the three effects occur in complex systems. For this reason, the effects will take longer to realize their full potential.
The SAMR model is one of several technology integration models that exist to guide educators to be purposeful about technology integration. In their discussion of intellectual amplification with, of, and through technology, Salomon and Perkins explain that "learners need time and guidance to achieve the effects that many contemporary cognitive technologies afford" (p. 81). This got me thinking about how SAMR might provide a model that guides educators to facilitate the type of guidance students might need to achieve all three effects. The following connections can be seen between the two models:
Effects with Technology - Effects with technology transpire when technologies have functionality that enables them to mirror intellectual functions.
Effects through Technology - Through the use of technologies, effects that would have been otherwise unimaginable have been achieved.
While the second theme--effects of technology--does not have as strong a connection to SAMR, I can't help but wonder if, with time and lessons that purposefully apply the other two effects, more effects of technology will emerge. In other words, the cognitive residues that enhance performance even without the technology will become more observable.
Salomon, G., & Perkins, D. N. (2005). Do technologies make us smarter? Intellectual amplification with, of, and through technology.
In the article "Educational Technology Research That Makes a Difference: Series Introduction" M.D. Roblyer addresses the need for a series of how-to articles on writing educational technology research that make a "strong case for technology's pedagogical contributions" (2005). A number of authors, according to Roblyer, have cited weaknesses that include disjointed efforts to study technology resources and strategies, weak methods, methods that do not match research questions, and poor reporting that make attempts to replicate subsequent studies difficult at best. For this reason, Roblyer's provides five pillars or criteria educational technology research should adhere to in order to be helpful.
Pillar 1: The Significance Criterion
Helpful research must provide a "clear and compelling case" for why it exists. Specifically, technology researchers need to recognize what makes a study significant enough to take on in the context of education today.
Pillar 2: The Rationale Criterion
New research should seek to build on a foundation of theory. In doing so, helpful research will include a rationale that is grounded in theory and discusses expected effects drawn from past research.
Pillar 3: The Design Criterion
According to Roblyer, the Design Criterion is the most challenging. Here, the researcher has established research questions and must determine a suitable approach (e.g., experimental and quasi-experimental designs) and a method for measuring impact on the identified variables. Articles reporting technology research and meeting this criterion will have a well-developed methods section that shows a strong connection between the questions posed in the study and the designs and methods utilized.
Pillar 4: The Comprehensive Reporting Criterion
This criterion urges technology researchers to include a "structured abstract" with every research report. In doing so, researchers ensure that future researchers can use and build upon completed research. Structured abstracts will follow APA format and include the following elements in detail: background on the study, purpose, setting, subjects, intervention, research design, data collection and analysis, findings, and conclusion.
Pillar 5: The Cumulativity Criterion
The best research will be well situated between the past and the future. This means that the research will clearly state how the study fits within current or proposed lines of research and will pose next steps for future work.
To conclude, Roblyer provides four types of studies that move the field forward: research to establish relative advantage, improve implementation strategies, monitor impact on important societal goals, and monitor and report on common uses and shape desired directions.
Through the "Educational Technology Research That Makes a Difference: Series Introduction" Roblyer provides practical solutions to a significant problem with technology research--quality assurance. By providing a solution in the form of five criterion or "pillars" the article serves as the How-to guide it was intended to be. By providing detailed examples and clear actionable steps the series introduction provides would-be researchers a roadmap for developing technology research that moves the field of technology research in the culture of education today forward. Furthermore, Roblyer's conclusion is a clear call to action for would-be researchers and existing researchers alike to conduct and share good educational research in the hopes of ultimately finding a path to educational technology that makes a difference.
As noted in the readings this week, doctoral students choose to become researchers because they want to make a difference. For this reason, a how-to guide with clearly defined criteria for conducting and reporting on educational technology research that makes a difference is invaluable. The detailed descriptions of each criterion are especially helpful as I begin to think about what contributions I would like to make to educational technology research. Finally, I will be revisiting the "structured abstract" format outlined in Pillar 4: The Comprehensive Reporting Criterion in the future. I appreciate having a checklist of elements to include in my writing to ensure it is comprehensive.
Roblyer, M. D. (2005). Educational technology research that makes a difference: Series introduction.
Contemporary Issues in Technology and Teacher Education, 5(2), 192-201.
In the "Introduction" to The Cambridge Handbook of the Learning Sciences, R. Keith Sawyer argues that schools today do not reflect what research shows about the science of learning, but rather common sense assumptions that have been made about teaching and learning. For this reason, through his handbook Sawyer seeks to show key stakeholders how to design learning environments and classrooms that are rich with technology and reflect scientific research. According to Sawyer, citizens need to be able to move beyond memorizing facts to think critically about information and develop understandings that lead to innovations that solve real-world problems, but practices that reflect Instructionism function as an anchoring mechanism to such progress. Sawyer explains that by the 1970's researchers came to consensus on several key understandings about learning--
Sawyer provides a robust review of the related literature about Instructionism and the research findings on the science of learning with the help of two accomplished scholars who are both authorities on the learning sciences. Sawyer acknowledges that Roy Pea, a professor of Education and Learning Sciences at Stanford University, and Janet Kolodner, Editor-in-Chief Emerita of The Journal of the Learning Sciences, helped with the historical details. Sawyer uses the historical details to argue that schools today are not based on research, but rather on common-sense assumptions about teaching and learning. While the claim certainly holds some merit today, one would be remiss not to take the date of publication into consideration, as since 2006 many schools have redesigned curricula and adopted new practices that better align with the research findings related to the science of learning. Regardless of where schools are on the continuum of redesigning classrooms to promote better learning, Sawyer's concise list of the practical implications of the research about the science of learning will serve schools well.
Sawyer's recommendations for promoting better learning based on the research findings of the learning sciences could easily serve as a guide for curriculum directors and/or department teams seeking to update learning spaces or design new ones to better serve the needs of today's students and foster a culture of knowledge construction and innovation. Furthermore, for schools that have adopted the ISTE Standards for Educators and Students, the historical details provide a rationale for the educator and student shifts that were made between the NETS and the current ISTE standards. As such, the introduction can serve one of two functions. For some, the introduction may be a practical roadmap for designing spaces to promote better learning. And for others, this resource may simply be a starting point that prompts reflection and generates conversation for future planning.
Sawyer, R. K. (2006). Chapter 1 introduction: The new science of learning. In R. K. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences (pp. 1-16). New York: Cambridge University Press.