Shapiro and Niederhauser (2004) explain that learning from hypertext is more complicated than learning from traditional text. While many of the elements are the same--character decoding, word recognition, sentence comprehension, etc.--research on hypertext in education focuses on the unique features that add complexity. First, hypertext is non-linear, so each user may consume the information within a hypertext in a unique sequence. This places greater metacognitive demands on the reader, who must monitor comprehension, determine what information is needed to close information gaps, and decide where best to look for that information in the text. Second, user traits (goals, motivation, and prior knowledge) interact with the characteristics of hypertext and influence learning outcomes. Shapiro and Niederhauser aim to "identify the variables that affect HAL [hypertext-assisted learning] most strongly and the mechanisms through which this occurs" (pg. 605). They claim that the two theories that have had the greatest impact on research and our understanding of the process are the construction-integration model (CIM; Kintsch, 1998) and cognitive flexibility theory (CFT; Spiro, Coulson, Feltovich, & Anderson, 1988; Spiro, Feltovich, Jacobson, & Coulson, 1992).
The CIM of text processing (Kintsch, 1988) describes text comprehension as a three-stage process.
According to Shapiro and Niederhauser, the third stage of the process--creation of the situation model--is significant to our understanding of learning from hypertext. They also note that hypertext promotes active learning because the learner must choose which links to click in order to interact with the content. The construction-integration model has become the standard framework through which many hypertext researchers understand hypertext-assisted learning, particularly user behaviors such as link choice, navigation patterns, and metacognitive practices.
Cognitive Flexibility Theory "is based on the supposition that real-world cases are each unique and multifaceted, thus requiring the learner to consider a variety of dimensions at once" (pg. 606). In other words, the prior knowledge necessary to understand new knowledge is derived from aspects of a variety of combined prior experiences and applied to the new situation. The implication of this model, then, is that advanced learning takes place as a consequence of active learning, the use of prior knowledge, and the construction of new knowledge for each new problem. CFT is relevant to hypertext because a learner can access a single document from multiple other sites. In doing so, he or she will come to that document with multiple perspectives. In turn, the mental representations resulting from repeated exposure to ill-structured hypertext will be multifaceted, and therefore one's ability to use that knowledge should be more flexible. Shapiro and Niederhauser claim that CFT offers an explanation of meaningful learning on the part of advanced learners.
Numerous cognitive factors associated with reading and learning from hypertext reveal distinct differences between reading traditional text and reading hypertext. Factors include--
Shapiro and Niederhauser summarize that the "nature of hypertext renders HAL a more cognitively demanding mode of learning" (pg. 608). For this reason, the use of metacognitive strategies is all the more important in this context. However, several studies have shown that minimal training and/or automated prompts may be used to promote metacognitive strategies and influence learning outcomes with some degree of success.
Interest in learning with hypertext stems from the notion that hypertext information structures may mirror the semantic structures of human memory. There is little evidence, though, that simply working with a hypertext designed to mirror an expert's conceptual understanding of a topic can lead to a direct transfer of expert-like mental representations to the reader. Research shows conflicting results about the effect of system structure (e.g., organization of links on pages, maps, overviews, and indexes) on learning. While some studies have shown advantages to using a highly organized system structure such as a hierarchy, others have found advantages to working with ill-structured hypertexts, and still others show the pitfalls of an ill-structured system design. Despite these contradictions, two general conclusions have been drawn from the literature to explain how these variables interact to impact learning. First, "well-structured hypertexts may offer low-knowledge learners an introduction to the ways in which topics relate to one another and an easy-to-follow introduction to a domain" (pg. 611). Second, ill-structured hypertexts benefit advanced learning for active, engaged learners. In short, well-defined structures like hierarchies are helpful when the goal is basic, factual knowledge, while ill-structured systems are often beneficial for deep learning, especially for advanced learners.
Researchers have also attempted to identify learner variables such as individual knowledge and engagement, reading patterns, and learning goals. With regard to individual knowledge and engagement, those with limited prior knowledge are unable to establish information needs in advance. Shapiro and Niederhauser explain that individual differences in learning style are often important to learning outcomes because they interact with other factors such as system structure. As for reading patterns, researchers have sought to identify patterns of reader navigation through hypertext. They found that learner interest and domain knowledge had a notable impact on readers' navigational strategies, and that knowledge seekers tend to learn more from the text than feature explorers.
Hypertext navigation is not always systematic and purposeful, though. A great deal of research has attempted to address what is called the keyhole phenomenon; this research examines the effect of different types of user interfaces on user disorientation. Shapiro and Niederhauser summarize that the need to navigate through a hypertext is a defining characteristic that differentiates it from reading and learning with traditional text. Navigation strategies may therefore influence what the reader learns from the text, and this in turn may be influenced by the conceptual difficulty associated with the content and the learning task (pg. 614). The literature also consistently shows that learning with hypertext is greatly enhanced when the learning goal is specific.
According to Shapiro and Niederhauser, the bulk of the related literature centers on techniques in user modeling. User modeling refers to any method used to gather information about users' knowledge, skills, motivation, or background. These user characteristics are then used to adapt system features like links and document content. The studies suggest a need for further investigation into the educational effectiveness of adaptive systems to determine which user characteristics are most effectively modeled and which system characteristics are most important to adapt.
HAL research is surrounded by theoretical as well as methodological issues. The methodological issues stem from the difficulty of comparing and reviewing hypertext research in the absence of a unified, coherent framework for studying hypertext. Shapiro and Niederhauser argue that this creates two problems when trying to understand the hypertext literature. First, the text-based reading research foundation is compromised when extensive graphics, audio, and video components are included in the hypertext. Second, issues comparing research studies emerge when the field's language lacks precision. It should also be noted that methodological flaws have been widely reported in the literature. While a great deal of excitement surrounds hypertext as an educational tool, Shapiro and Niederhauser conclude with a reminder that very little published research on the technology relates to education and learning. They call for future research to generate a well-grounded understanding of the processes underlying HAL and for standardized terminology and methodology to be developed.
Shapiro, A., & Niederhauser, D. (2004). Learning from hypertext: Research issues and findings. In D. H. Jonassen (Ed.), Handbook of Research for Educational Communications and Technology (pp. 605-620). New York: Macmillan.
In their article "Design Experiments in Educational Research," Cobb et al. (2003) draw on prior understandings about conducting design experiments to share characteristics of the methodology and to describe what conducting a design experiment entails. Design experiments are an iterative process in which the "designed context is subject to test and revision" (pg. 9). They are conducted to develop theories that target domain-specific learning processes. Special emphasis is placed on theories to reflect the view that the "explanations and understandings inherent in them are essential if educational improvement is to be a long-term, generative process" (pg. 9). Design experiments also ideally end in greater understanding of a learning ecology: the researchers design the elements of a complex system and predict how those elements interact to support learning. In this way, design experiments aptly represent the complexity of educational systems. Cobb et al. note that design experiments move beyond tinkering with effective designs by focusing on a design theory that explains why designs work and offers recommendations for how they might be adapted to new circumstances.
Five crosscutting features apply to design experiments:
Several issues must be addressed when preparing for a design experiment. First, before conducting a design experiment one must answer the question: What is the point of the study? Research teams should also draw on and synthesize the prior research literature to "identify central organizing ideas for a domain" (Cobb et al., pg. 11). Other preparations include clearly defining the conjectured starting points, elements of the trajectory, and prospective endpoints, as well as formulating a design that embodies testable conjectures. The size of the research team and its expertise will vary.
In order to conduct a design experiment, the team must have the collective expertise needed to carry out the preparation procedures and conduct the experiment. Cobb et al. identify four important functions that will require the team's direct engagement.
Successful design experiments will also attend to the problem of measurement. To conclude, Cobb et al. reiterate that the five crosscutting features outlined in the article are defining characteristics of a genre of science that holds great potential if researchers appropriately manage the difficulties associated with preparing for and conducting design experiments.
Given that the potential for rapid pay-off is high with design experiments, the five crosscutting features and critical components for successfully planning and conducting this type of research are invaluable. Both the crosscutting features and the complex nature of a learning ecology are developed with detailed examples that make the article valuable to anyone looking to better understand the various methods of research in educational technology.
Design experiments are certainly an area of educational research that has piqued my interest now that I understand they ideally end in greater understanding of a learning ecology. Barron (2006) defined a learning ecology as a “set of contexts found in physical or virtual spaces that provide opportunities for learning.” Each context consists of a unique blend of activities, resources, relationships, and developing interactions. The research discussed by Barron in "Interest and self-sustained learning as catalysts of development: A learning ecologies perspective" had strong connections to the ISTE Student Standards (Global Collaborator and Knowledge Constructor). These standards guide a portion of my work as an instructional technology consultant for grades K-12. For this reason, all discussions that lead to a greater understanding of a learning ecology are of interest to me at this point in my doctoral journey.
Barron, B. (2006). Interest and self-sustained learning as catalysts of development: A learning ecologies perspective. Human Development, 49, 193-224.
Cobb, P., Confrey, J., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9-13.
González-Sanmamed, M., Muñoz-Carril, P.-C., & Santos-Caamaño, F.-J. (2019). Key Components of Learning Ecologies: A Delphi Assessment. British Journal of Educational Technology, 50(4), 1639–1655.
Does technology make you smarter?
In the article "Do Technologies Make Us Smarter? Intellectual Amplification With, Of, and Through Technology," Salomon and Perkins (2005) offer a three-way framework to answer the question of whether, and in what senses, technologies make us cognitively more capable.
Consider how each of the following themes represents a way in which cognitive technologies might "make us smarter":
Effects with Technology
Effects with technology transpire when technologies have functionality that enables them to mirror intellectual functions. The effects then enable the user to form a partnership with the technology that "frees the user from distractions of lower-level cognitive functions" (p. 74). When this occurs, the effects with technology likely lead to improved intellectual performance (Perkins, 1993).
So, does technology make us smarter? Salomon and Perkins say it boils down to this: "Cognitive technologies--technologies that afford substantial support of complex cognitive processing--make people smarter in the sense of enabling them to perform smarter" (p. 76).
Effects of Technology
According to Salomon and Perkins, it's also important to consider whether experiences with cognitive technologies can develop cognitive capabilities that remain available without the tool at hand. While effects of technology can be positive or negative, they must persist for a period after the technology is no longer in hand. In support of effects of technology, Salomon and Perkins point to several cases. Research conducted in the 1980s, for example, explored how learning computer programming might enhance thinking. While findings varied, Salomon and Perkins say the work shows clear examples of effects of technology.
Effects through Technology
Here Salomon and Perkins build on the first two themes--effects with and effects of--that were previously explained by Salomon, Perkins, and Globerson (1991) and present a third theme for discussion--effects through technology--which they posit is necessary to address the impact of "radically transformative" technologies. They consider, for example, how technologies have impacted warfare or the construction of communities. Through the use of technologies, effects that would otherwise have been unimaginable have been achieved. Salomon and Perkins point to how the internet has transformed the nature of teamwork: effects through technology have made it possible for people to collaborate regardless of their geographic location.
Salomon and Perkins conclude by comparing the three themes to pieces of a puzzle. In other words, the themes are worth putting together to determine what relative magnitudes of impact we can anticipate and how quickly we can expect such effects to emerge. When pace is the point of comparison, effects with excels. The same is true for magnitude of impact, because of the immediate payoff of effects with and the improvements made over time. Salomon and Perkins note that effects of technology rank lower in both magnitude of impact and the pace at which the effects emerge. For these reasons, Salomon and Perkins answer the question "does technology make us smarter" with a "nuanced yes."
Salomon and Perkins's three-pronged approach to thinking about the impacts of technology on cognition provides a simple framework that invites innovators to begin thinking more deeply about the potential affordances of technologies. Salomon and Perkins note that "it takes time for innovators to see the possibilities, time for early trials, time for a kind of Darwinian sifting of those new ways of working that truly offer a lot, and time for the new ways of working to pass into widespread use" (p. 81). A limitation of the framework is discussed in the conclusion, when Salomon and Perkins point out that they have shown examples of how effects of, with, and through technology can positively impact cognition in a controlled environment, when in reality the three effects occur in complex systems. For this reason, realizing their full potential will take longer.
The SAMR model is one of several technology integration models that exist to guide educators to be purposeful about technology integration. In their discussion of intellectual amplification with, of, and through technology, Salomon and Perkins explain that "learners need time and guidance to achieve the effects that many contemporary cognitive technologies afford" (p. 81). This got me thinking about how SAMR might guide educators in facilitating the type of guidance students need to achieve all three effects. The following connections can be seen between the two models:
Effects with Technology - Effects with technology transpire when technologies have functionality that enables them to mirror intellectual functions.
Effects through Technology - Through the use of technologies, effects that would have been otherwise unimaginable have been achieved.
While the second theme--effects of technology--does not have as strong a connection to SAMR, I can't help but wonder whether, with time and lessons that purposefully apply the other two effects, more effects of technology will emerge. In other words, the cognitive residues that enhance performance even without the technology will become more observable.
Salomon, G., & Perkins, D. N. (2005). Do Technologies Make Us Smarter? Intellectual Amplification With, Of, and Through Technology.
In the article "Educational Technology Research That Makes a Difference: Series Introduction," M. D. Roblyer addresses the need for a series of how-to articles on writing educational technology research that makes a "strong case for technology's pedagogical contributions" (2005). A number of authors, according to Roblyer, have cited weaknesses that include disjointed efforts to study technology resources and strategies, weak methods, methods that do not match research questions, and poor reporting that makes subsequent attempts to replicate studies difficult at best. For this reason, Roblyer provides five pillars, or criteria, that educational technology research should adhere to in order to be helpful.
Pillar 1: The Significance Criterion
Helpful research must provide a "clear and compelling case" for why it exists. Specifically, technology researchers need to recognize what makes a study significant enough to take on in the context of education today.
Pillar 2: The Rationale Criterion
New research should seek to build on a foundation of theory. In doing so, helpful research will include a rationale that is grounded in theory and discusses expected effects drawn from past research.
Pillar 3: The Design Criterion
According to Roblyer, the Design Criterion is the most challenging. Here, the researcher has established research questions and must determine a suitable approach (e.g., experimental or quasi-experimental designs) and a method for measuring impact on the identified variables. Articles reporting technology research that meet this criterion will have a well-developed methods section showing a strong connection between the questions posed in the study and the designs and methods utilized.
Pillar 4: The Comprehensive Reporting Criterion
This criterion urges technology researchers to include a "structured abstract" with every research report. In doing so, researchers ensure that future researchers can use and build upon completed work. Structured abstracts will follow APA format and include the following elements in detail: background on the study, purpose, setting, subjects, intervention, research design, data collection and analysis, findings, and conclusion.
Pillar 5: The Cumulativity Criterion
The best research will be well situated between the past and the future. This means the report will clearly state how the study fits within current or proposed research and will pose next steps for future research.
To conclude, Roblyer provides four types of studies that move the field forward: research to establish relative advantage, improve implementation strategies, monitor impact on important societal goals, and monitor and report on common uses and shape desired directions.
Through the "Educational Technology Research That Makes a Difference: Series Introduction," Roblyer provides practical solutions to a significant problem with technology research--quality assurance. By providing a solution in the form of five criteria, or "pillars," the article serves as the how-to guide it was intended to be. With detailed examples and clear, actionable steps, the series introduction gives would-be researchers a roadmap for developing technology research that moves the field forward in the culture of education today. Furthermore, Roblyer's conclusion is a clear call to action for would-be and existing researchers alike to conduct and share good educational research in the hopes of ultimately finding a path to educational technology that makes a difference.
As noted in the readings this week, doctoral students choose to become researchers because they want to make a difference. For this reason, a how-to guide with clearly defined criteria for conducting and reporting on educational technology research that makes a difference is invaluable. The detailed descriptions of each criterion are especially helpful as I begin to think about what contributions I would like to make to educational technology research. Finally, I will be revisiting the "structured abstract" format outlined in Pillar 4: The Comprehensive Reporting Criterion in the future. I appreciate having a checklist of elements to include in my writing to ensure it is comprehensive.
Roblyer, M. D. (2005). Educational technology research that makes a difference: Series introduction. Contemporary Issues in Technology and Teacher Education, 5(2), 192-201.
In the "Introduction" to The Cambridge Handbook of the Learning Sciences, R. Keith Sawyer argues that schools today do not reflect what research shows about the science of learning, but rather common-sense assumptions that have been made about teaching and learning. Through the handbook, Sawyer seeks to show key stakeholders how to design learning environments and classrooms that are rich with technology and reflect scientific research. According to Sawyer, citizens need to move beyond memorizing facts to think critically about information and develop understandings that lead to innovations that solve real-world problems, but practices that reflect Instructionism act as an anchor against such progress. Sawyer explains that by the 1970s researchers had come to consensus on several key understandings about learning--
Sawyer provides a robust review of the related literature about Instructionism and the research findings on the science of learning with the help of two accomplished scholars who are both authorities on the learning sciences. Sawyer acknowledges that Roy Pea, a professor of Education and Learning Sciences at Stanford University, and Janet Kolodner, former Editor-in-Chief of The Journal of the Learning Sciences, helped with the historical details. Sawyer uses those details to argue that schools today are not based on research, but rather on common-sense assumptions about teaching and learning. While the claim certainly holds some merit today, one would be remiss not to take the date of publication into consideration, as since 2006 many schools have redesigned curricula and adopted new practices that better align with research findings on the science of learning. Regardless of where schools are on the continuum of redesigning classrooms to promote better learning, Sawyer's concise list of the practical implications of the research will serve schools well.
Sawyer's recommendations for promoting better learning based on the research findings of the learning sciences could easily serve as a guide for curriculum directors and/or department teams seeking to update existing learning spaces or design new ones to better serve the needs of today's students and foster a culture of knowledge construction and innovation. Furthermore, for schools that have adopted the ISTE Standards for Educators and Students, the historical details provide a rationale for the educator and student shifts that were made between the NETS and the current ISTE standards. As such, the introduction can serve one of two functions. For some, it may be a practical roadmap for designing spaces that promote better learning. For others, it may simply be a starting point that prompts reflection and generates conversation for future planning.
Sawyer, R. K. (2006). Chapter 1 introduction: The new science of learning. In R. K. Sawyer (Ed.), The Cambridge Handbook of the Learning Sciences (pp. 1-16). New York: Cambridge University Press.