In the editorial "TPACK – time to reboot?," Saubern et al. (2020) argue that while TPACK has proven to be a highly popular foundation for educational technology research, the framework continues to be critiqued today. Specifically, critiques continue to address the definition and delineation of the construct components, the relationships between those components, measurement and validation, the predictive and prescriptive value of the framework, and the relationship between TPACK and practice (Angeli et al., 2016). The editorial reflects on how TPACK scholarship is continuing to evolve, with a focus on two key questions.
Searching the AJET website with the terms "TPACK" and "TPCK" returned 44 papers, which were narrowed to 20 that used TPACK substantially as a theoretical or methodological base. The research problem, purpose, and research question for each paper were identified, and the discussions and conclusions were analyzed in relation to the framework's original goals.
A common thread throughout the papers is that "to use technology effectively in teaching and learning, teachers must integrate knowledge of technology with knowledge of pedagogy and content" (p. 3). However, Saubern et al. (2020) explain that a shift in the research occurs between the identification and the application of the TPACK framework. Rather than focusing on the integrated knowledge base at the heart of the framework, much of the research examines the framework's components separately. They note that few papers, if any, have investigated what it means to develop the knowledge that can only emerge from the integration of all three components, or the relationship between this knowledge and what is considered effective classroom practice. The central idea of TPACK, Saubern et al. (2020) argue, seems to have slipped away. The editorial claims that the problem lies in the TPACK diagram, because it encourages study of the seven components "as if they are the thing that we are interested in" (p. 5). Analysis of the 20 papers reveals that a great deal of time and energy has been spent validating the structure of TPACK rather than focusing on the integrated knowledge base that emerges from combining the seven components. For this reason, the authors argue for a reboot in TPACK research: a refocusing on understanding the specialist form of knowledge that emerges when knowledge of technology is integrated with pedagogical knowledge and content knowledge. The editorial concludes with a call to action for researchers to engage with the critical question into which TPACK may provide insight: how best to improve teaching and learning with technology?
This editorial provides a strong meta-analysis of the existing research surrounding TPACK. The claims made by Saubern et al. (2020) are significant in that they highlight an area of research that has yet to be examined. Through careful analysis of the 20 papers, they make a convincing case that if researchers rebooted TPACK research to focus on how the seven components work together to allow a specialized, integrated knowledge base to emerge, the answer to how best to improve teaching and learning with technology may also emerge. The call to action in the conclusion provides future-facing language that is helpful for researchers looking to make new contributions to the field while building on the work of those who came before.
As the authors mention, those who have spent time examining best-practice models and frameworks for technology integration are familiar with TPACK. What I found particularly interesting was how spot-on the authors' argument about the use of the framework's diagram was. I have been introduced to the model by multiple instructors, and yet few have talked about the specialized, integrated knowledge that emerges at the intersection of all seven components. Instead, the all-too-common introduction is an oversimplified presentation of the three intersecting circles. While I'm not particularly interested in exploring how TPACK can be used to answer the question of how best to improve teaching and learning with technology, the meta-analysis of the 20 papers certainly makes the case for someone who is interested in pursuing this line of thinking.
Angeli, C., Valanides, N., & Christodoulou, A. (2016). Theoretical considerations of technological pedagogical content knowledge. In M. Herring, M. J. Koehler, & P. Mishra (Eds.), Handbook of technological pedagogical content knowledge (2nd ed., pp. 21–42). Routledge.
Saubern, R., Henderson, M., Heinrich, E., & Redmond, P. (2020). TPACK – time to reboot? Australasian Journal of Educational Technology, 36(3), 1–9.
Sulak Güzey et al. (2022) investigate the impact of the tool Edpuzzle on achievement in an undergraduate Assessment and Measurement course. Edpuzzle is a tool that enables technology-enhanced assessment. It became more heavily used during the pandemic, when schools went remote and instructors wanted to track seat time and better understand how students were engaging with their content. The study employed an exploratory research design with pre- and post-test control groups. The authors explain that tools like Edpuzzle provide a practical approach to technology-enhanced assessment because the instructor can create enhanced item types without coding. Users can quickly create new content or customize existing content based on their objectives. Edpuzzle also provides immediate feedback and scoring for students and enhanced reporting for the instructor. The authors summarize the affordances of Edpuzzle that have been discussed in the literature: specifically, they claim that research shows Edpuzzle increases student motivation, helps students construct conceptual knowledge, improves academic achievement while reducing cognitive load, and supports self-regulation skills. Having identified Edpuzzle as a web 2.0 tool, the researchers also point out the positive impact on student achievement that web 2.0 tools have been linked to in the literature. For these reasons, the study sought to elaborate on the impact of using Edpuzzle, a web 2.0 tool, in an undergraduate course.
Sulak Güzey et al. (2022) used a pre- and post-test matched control group design. One hundred sixty-six undergraduate students participated in the study. Groups were matched based on their teaching assignments and randomly assigned as control and experimental groups. The control group was made up of science and social studies teacher candidates, while the experimental group was made up of math and Turkish language teacher candidates. An achievement test drawn from a nationwide public personnel selection exam used in earlier years was used to collect achievement data for the Assessment and Measurement course. Quizzes consisted of multiple-choice and short-response open-ended questions and were prepared for the following topics: fundamentals, statistics/central tendency, reliability, and validity. A rubric was used to evaluate students' performance on a performance assessment that involved writing items on a chosen topic and developing an achievement test. The analytics gathered from student engagement with the tool were also used in the analysis. A pre-test was given to both groups before the experimental intervention. Those in the experimental group watched videos and took quizzes in Edpuzzle each week, and their performance was used to identify gaps in learning. Students in the experimental group also received corrective feedback and were exposed to instructional strategies such as group activities, presentations, discussions, peer learning, and question-and-answer sessions. In the control group, the instructor taught in a traditional way.
The results showed no significant difference between the control and experimental groups on their pre-test scores. However, there was a significant difference between groups on their post-test scores, with a medium-level effect size. The experimental group also performed significantly better than the control group on the topics of fundamental concepts, central tendency, central dispersion, and validity. The groups did not perform significantly differently on the performance assessment task. Sulak Güzey et al. (2022) conclude that the results suggest the group that utilized Edpuzzle as a web 2.0 tool achieved significantly better results than the control group that did not use the web 2.0 tool. Potential reasons for the non-significant difference in achievement on the performance task are suggested as an avenue for future studies.
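To ground what a "significant difference with a medium-level effect size" means in a design like this, the sketch below runs the kind of post-test comparison the study describes. This is purely illustrative: the independent-samples t-test, the Cohen's d calculation, and the simulated scores are my assumptions, not the paper's actual data or analysis.

```python
# Illustrative only: simulated post-test scores and an assumed analysis
# (independent-samples t-test + Cohen's d), not the paper's actual data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(loc=60, scale=10, size=83)       # hypothetical control-group post-test scores
experimental = rng.normal(loc=65, scale=10, size=83)  # hypothetical Edpuzzle-group post-test scores

# Independent-samples t-test on the post-test scores
t_stat, p_value = stats.ttest_ind(experimental, control)

# Cohen's d using a pooled standard deviation; ~0.5 is conventionally "medium"
pooled_sd = np.sqrt((control.var(ddof=1) + experimental.var(ddof=1)) / 2)
cohens_d = (experimental.mean() - control.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, Cohen's d = {cohens_d:.2f}")
```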
Students in the experimental group received corrective feedback and were exposed to instructional strategies such as group activities, presentations, discussions, peer learning, and question-and-answer sessions, while Sulak Güzey et al. (2022) state only that the instructor in the control group taught in a traditional way. What "traditional" instruction is and looks like is not clearly defined by the researchers. The researchers also fail to acknowledge the limitations of the study. The research focused on adult learners participating in an undergraduate teaching course that incorporated opportunities for face-to-face learning experiences, so the results cannot be generalized to adolescents or to online teaching environments. This could also be mentioned as an area for future research, since the introduction notes that interest in the tool stemmed from a desire to make sure students were really watching videos during the pandemic. While this behavior is not limited to adolescents, it does seem more characteristic of them than of adult learners.
While web 2.0 tools are not an area of research I am interested in pursuing for my dissertation, I was interested in finding out what the research says about tools like Edpuzzle, Playposit, and Discovery Education Experiences video quizzes. Teachers have gravitated toward these tools because of their analytics, reporting, and progress-monitoring functionality, but little is said about their impact on teaching and learning. While this study does show that Edpuzzle had a positive impact on undergraduate students' achievement in an Assessment and Measurement course, I'm not convinced the same results could be replicated in a K-12 environment. I also think that some concepts lend themselves better to video instruction than others, and that some learners prefer to engage with the learning through video while others do not. This leaves a great deal of gray area in terms of the value tools like Edpuzzle add to teaching and learning.
Sulak Güzey, S., Akdoğdu Yıldız, E., Demir, M. C., & Aksu-Dünya, B. (2022). Impact of EDpuzzle use on the Assessment and Measurement course achievement. Journal of Hasan Ali Yücel Faculty of Education / Hasan Ali Yücel Eğitim Fakültesi Dergisi (HAYEF), 19(1), 52–60. https://doi.org/10.5152/hayef.2021.21045
In the "Introduction" to The Cambridge Handbook of the Learning Sciences, R. Keith Sawyer argues that schools today do not reflect what research shows about the science of learning, but rather common sense assumptions that have been made about teaching and learning. For this reason, through his handbook Sawyer seeks to show key stakeholders how to design learning environments and classrooms that are rich with technology and reflect scientific research. According to Sawyer, citizens need to be able to move beyond memorizing facts to think critically about information and develop understandings that lead to innovations that solve real-world problems, but practices that reflect Instructionism function as an anchoring mechanism to such progress. Sawyer explains that by the 1970's researchers came to consensus on several key understandings about learning--
Sawyer provides a robust review of the related literature on instructionism and the research findings on the science of learning with the help of two accomplished scholars who are both authorities on the learning sciences: he acknowledges that Roy Pea, a professor of education and learning sciences at Stanford University, and Janet Kolodner, Editor-in-Chief Emerita of The Journal of the Learning Sciences, helped with the historical details. Sawyer uses those historical details to argue that schools today are not based on research, but rather on common-sense assumptions about teaching and learning. While the claim certainly holds some merit today, one would be remiss not to take the date of publication into consideration, as since 2006 the vast majority of schools have redesigned curriculum and adopted new practices that better align with the research findings on the science of learning. Regardless of where schools are on the continuum of redesigning classrooms to promote better learning, Sawyer's concise list of the practical implications of the research on the science of learning will serve schools well.
Sawyer's recommendations for promoting better learning based on the research findings of the learning sciences could easily serve as a guide for curriculum directors and department teams seeking to update learning spaces or design new ones that better serve the needs of today's students and foster a culture of knowledge construction and innovation. Furthermore, for schools that have adopted the ISTE Standards for Educators and Students, the historical details provide a rationale for the educator and student shifts that were made between the NETS and the current ISTE standards. As such, the introduction can serve one of two functions. For some, it may be a practical roadmap for designing spaces to promote better learning; for others, it may simply be a starting point that prompts reflection and generates conversation for future planning.
Sawyer, R. K. (2006). Introduction: The new science of learning. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 1–16). Cambridge University Press.