Prasad V
The achievements of the state of Kerala in education are comparable with those of the developed countries. This was a widely accepted fact until some years back. The role played by missionaries and progressive movements in the late 19th and early 20th centuries, supported by the rulers of the time, in establishing schools ushered in a new age in Kerala's education sector.
But the recent fall in the quality of education among school children is alarming. A survey conducted by the State Council for Educational Research and Training nearly two years back indicates that nearly 35% of 7th graders cannot read or write their mother tongue. Likewise, 85% of the students are weak in basic sciences and 73% in basic mathematics. More or less similar results were seen among the fourth graders also.
Many reports point to the deteriorating condition of school education. Some time back, Mathrubhumi, the most popular newspaper in Kerala, brought this to attention through its editorial. Recently, the Director of School Education circulated an order directing that all school children be taught the letters of their mother tongue, notwithstanding the State's declaration of complete literacy. Maybe this is the best certificate the present mode of education could receive from the Department of Education itself. After such a self-indicting order, there is little need to discuss further the deteriorating standard of education in Kerala.
But this deterioration in the standard of school education has not happened in Kerala alone. Throughout the world, wherever the same pedagogical methods were adopted, a more or less similar fall in the standard of education was visible. This has led to a worldwide debate on pedagogical methods, standards of education, and so on. It is important to keep an eye on such debates, because the Government of India is now in the process of extending constructivist pedagogy to the whole country. In some states the change has already happened, though to a lesser degree than in Kerala.
The former President of the United States of America, Mr Barack Obama, promised the public during his first-term election campaign that he would raise the standard of education in the US to the level of India's. Mr Obama was pushed into making such a promise by the fact that a large number of the highly skilled workers in different professions in the US were from India, a situation the native population was aggrieved by. As a matter of fact, Mr Obama could not raise the standard of US education. Nevertheless, the financial institutions under US dictation, like the IMF and the World Bank, were able to push a particular type of pedagogy, one that bases itself on the philosophy of constructivism, as a condition for the loan Kerala received for the reconstruction of its school education. Many facts and pieces of evidence in this regard were brought out by Mr Rajan Cherukkad in his well-known book 'Imperialism and Indian Education'.
Before entering the debate between different pedagogical methods and the recent research in this regard, let us examine what is being implemented in Kerala at present. The pedagogy being implemented at the school level in Kerala is gradually being extended to higher education as well. This system has been tried in different parts of the world at different times. The minimally guided approach being implemented in Kerala has gone by several names besides constructivist learning: discovery learning, problem-based learning, inquiry learning, experiential learning, and so on. On the contrary, providing information that fully explains the concepts and procedures that students are required to learn is defined as direct instructional guidance.
Before the destruction of the time-tested education system that existed in Kerala, some so-called progressive leaders criticized that system. According to their criticism, it had a reactionary content and had been implemented by Lord Macaulay. More than that, those superficial intellectuals alleged that the existing system was based on rote memory alone. In this way, they problematized the central role of memory in learning. Yet even after the reconstruction they did not change the reactionary content; what they changed was the time-tested pedagogical approach. Let us examine this through an example. Children learn letters, basic mathematics and so on through a pedagogical system. One who has learned the letters can write reactionary content as well as progressive content. That content changes in accordance with the production system existing in the society and the relative strength of people's movements and consciousness. But how the child absorbs it is a matter of the pedagogical system. That is part of natural science: it develops with human experience and comprehension and gets verified through the scientific means of experimentation and verification.
Here we have to keep one thing in mind. Compared with the 19th century, significant advances were made in the biological sciences during the 20th century, and one of the most important areas of study was the functioning of the central nervous system. Ivan Pavlov (1849-1936) is remembered for his theories of learning by conditioning, developed as a result of his acclaimed research into digestion. In particular, Pavlov's research during the 1890s and early 1900s used classical conditioning to demonstrate conditioned reflexes. The discoveries Pavlov made through his experiments were significant because his theory of conditioning can be applied to learning not just in dogs but in other species, including humans. Later, Pavlov's contemporaries extended his theory of conditioning further. B.F. Skinner (1904-1990) argued that operant conditioning could be used to modify behaviour when a schedule of reinforcement was used to promote or discourage an action. In this way, with the passage of time, the understanding of the central nervous system's role in learning has improved greatly. Over the last half century a great deal of research has been undertaken by influential scientists throughout the world on different areas of pedagogy. But the theory of the conditioned reflex remains the basis for explaining the higher nervous activities; no theory contradicting it has obtained any experimental evidence.
In philosophy, constructivism is a world outlook that rejects the objective basis of knowledge; it is a variety of relativism. A follower of this philosophy will argue as follows: All of us see the colour green. But how can we conclude that the green in one individual's brain is the same as the green in another's? There is no mechanism to examine that. Each person therefore constructs a green of their own; each green is different and depends on the mind. So knowledge is personal in nature and depends on the mind. But according to science, the colour green corresponds to a particular wavelength of light; by changing the wavelength, the colour can be changed. So for science the colour green has an objective basis, whereas for constructivism it is a mental construct. That is the difference. Following from this example, constructivism does not accept any knowledge that applies to all. This translates into the claim that each student constructs his own knowledge, and that there is no need for a teacher imposing his knowledge on the child. Hence the followers of constructivism in teaching do not even use the word 'teaching'.
According to Jean Piaget, an important propagandist of this philosophy, 'children can construct better solutions than the directives given by a teacher who is standing with his stick'. For constructivists, individual discovery, individual problem solving and free thinking are what matter. With regard to pedagogy, they put forward three ideas.
- Let the children construct their own questions; let them construct their own answers as well.
- There is no need to give explicit guidance to the children; even if they are mistaken, there is no need to correct them.
- There is no need to teach the traditional algorithms.
If the above arguments are the basis of instruction, then what can the teacher do to develop the inner capacity of the child? The constructivists suggest three steps.
- Accept the questions the children ask. They arise from the children's own level, so they are better for the children's development than the questions in standard textbooks.
- Imposing traditional algorithms on children erodes their capacity for free thinking. So encourage the children to develop their own problem-solving methods.
- The teacher need not comment on whether an answer is correct or incorrect; that would destroy the children's initiative and thinking. Instead, encourage the children only to support or oppose one another within their peer group.
But the empirical evidence from experimental studies over the last half century points to just the contrary. Those studies consistently indicate that minimally guided instruction is less effective and less efficient than instructional approaches that place a strong emphasis on guiding the student through the learning process. To understand this, we have to look into the structure of human cognitive architecture as well as what happens during the learning process.
Memory And Learning
Long-term memory and working memory are the components of human cognitive architecture; this will be discussed further below. Long-term memory is more than a knowledge archive. It provides the background information needed for understanding the world, by bringing the relevant knowledge into working memory as and when it is needed. Information in long-term memory can be stored indefinitely. That is why older people retain their childhood memories, and often their professional expertise, even in advanced age. Since nobody has ever complained that their memory was exhausted, long-term memory can be considered, for all practical purposes, limitless. Memory is kept in neuronal circuits and the connections between them; humans have billions of neurons that can make trillions of connections. The basic functional unit of memory is called a schema. Schemas are an efficient way to organize interrelated concepts in a meaningful way.
The difference between an expert and a novice is that the novice has not acquired the schemas of the expert. Learning requires a change in the schematic structures of long-term memory and is demonstrated by performance that progresses from clumsy, error-prone, slow and difficult to smooth and effortless. The change in performance occurs because, as the learner becomes increasingly familiar with the material, the cognitive characteristics associated with it are altered so that it can be handled more efficiently. This process is called the automatisation of schemas.
De Groot’s Chess Experiment And Memory
Adriaan de Groot's (1945/1965) work on chess expertise, followed by that of Chase and Simon (1973), has served as a major influence on the field's reconceptualization of the role of long-term memory. The question of how chess experts evaluate positions to find the best move has been studied for decades, dating back to the groundbreaking work of de Groot and later to work by William Chase and Herbert Simon.
De Groot interviewed several chess players as they evaluated positions, and he concluded that experts and weaker players tended to "look" about the same number of moves ahead and to evaluate similar numbers of moves at roughly similar speed. The relatively small differences between experts and novices suggested that the experts' advantage came not from brute-force calculation ability but from something else: memory based on previous experience. According to de Groot, the core of chess expertise is the ability to recognize a huge number of chess positions (or parts of positions) and to derive moves from them. In short, the experts' greater efficiency came not from evaluating more outcomes, but from considering only the better options. [Note: Some of the details of de Groot's claims, which he made before the appropriate statistical tests were in widespread use, did not hold up to later scrutiny: experts do consider somewhat more options, look a bit deeper, and process positions faster than less expert players (Holding, 1992). But de Groot was right about the limited nature of expert search and the importance of knowledge and pattern recognition in expert performance.]
In de Groot’s most famous demonstration, he showed several players images of chess positions for a few seconds and asked the players to reconstruct the positions from memory. The experts made relatively few mistakes even though they had seen the position only briefly. He found that grandmasters and masters were able to recall the location of 93% of the pieces, while the experts remembered 72% and the class players merely 51%.
Years later, Chase and Simon replicated de Groot’s finding with another expert (a master-level player) as well as an amateur and a novice. They also added a critical control: The players viewed both real chess positions and scrambled chess positions (that included pieces in implausible and even impossible locations). The expert excelled with the real positions, but performed no better than the amateur and novice for the scrambled positions (later studies showed that experts can perform slightly better than novices for random positions too if given enough time; Gobet & Simon, 1996). The expert advantage apparently comes from familiarity with real chess positions, something that allows more efficient encoding or retrieval of the positions.
The finding that expert chess players are far better able than novices to reproduce briefly seen board configurations taken from real games, but do not differ in reproducing random configurations, suggests that expert problem solvers derive their skill by drawing on the extensive experience stored in their long-term memory. Chess has always been considered a game of intellect, and in reality the efficiency of a chess player depends on his cognitive abilities. De Groot's experiment, and its further verification by later researchers, has made chess a game of memory. These experiments place memory at the basis of the higher cognitive abilities.
Human Cognitive Architecture
Human cognitive architecture is the manner in which human cognitive structures are functionally organised. Even though many models have been proposed, the sensory memory–working memory–long-term memory model is considered the basis of human cognitive architecture by modern researchers. Long-term memory and working memory are its components, with working memory as the processing part. The hippocampus and amygdala play the most important roles in processing information and sending it to long-term memory. Long-term memory is the central, dominant structure of human cognition. Everything we see, hear, and think about is critically dependent on and influenced by our long-term memory. Rather than a group of rote-learned facts, the contents of long-term memory are "sophisticated structures that permit us to perceive, think, and solve problems."
Working Memory Characteristics And Functions
Working memory is the cognitive structure in which conscious processing occurs. It has two well-known characteristics: when processing new information, it is very limited in both duration and capacity. It is very difficult for us to remember a number of 12 or 13 digits, because short-term memory has a capacity of about seven plus or minus two items. An item may be a letter, a word or a number. This was shown by George Miller in 1956, in his paper 'The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information'.
New ideas, information and procedures must first be processed in working memory before passing into long-term memory. Too many items to process at once leads to cognitive overload and impaired learning.
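Miller's limit applies to items, not raw symbols, which is why chunking helps: regrouping the same material into fewer, larger items brings it within the span. A minimal sketch (the digit string here is arbitrary, chosen only for illustration):

```python
# A 12-digit string treated digit-by-digit is 12 items: well beyond
# the seven-plus-or-minus-two span of working memory.
digits = "918476305214"

# Chunking regroups the same material into four 3-digit items,
# which fits comfortably within the span.
chunks = [digits[i:i + 3] for i in range(0, len(digits), 3)]

print(len(digits), len(chunks))  # prints "12 4": 12 items versus 4 chunks
```

Nothing in the material itself changes; only the unit of storage does, which is exactly the sense in which an item "varies in letters, words and numbers".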
The interactions between working memory and long-term memory may be even more important than the processing limitations of working memory. The limitations of working memory only apply to new, yet to be learned information that has not been stored in long-term memory. When dealing with previously learned information stored in long-term memory, these limitations disappear. In the sense that information can be brought back from long-term memory to working memory over indefinite periods of time, the sequential limits of working memory become irrelevant. Similarly, there are no known limits to the amount of such information that can be brought into working memory from long-term memory. Problem solving, analysis or any complex task draws on resources from long term memory.
Implications Of Human Cognitive Architecture For Constructivist Instruction
These memory structures and their relations have direct implications for instructional design. Inquiry-based instruction requires the learner to search a problem space for problem-relevant information, and all problem-based searching makes heavy demands on working memory. Furthermore, that working memory load does not contribute to the accumulation of knowledge in long-term memory, because while working memory is being used to search for problem solutions, it is not available for learning. Indeed, it is possible to search for extended periods of time with quite minimal alterations to long-term memory. The goal of instruction is rarely simply to search for or discover information. The goal is to give learners specific guidance about how to cognitively manipulate information in ways that are consistent with a learning goal, and to store the result in long-term memory.
Constructivism And Minimally Guided Instruction
Given the incompatibility of minimally guided instruction with our knowledge of human cognitive architecture, what has been the justification for these approaches? The most recent version of instruction with minimal guidance comes from constructivism, which appears to have been derived from the observation that knowledge is constructed by learners, so that (a) they need the opportunity to construct, by being presented with goals and only minimal information, and (b) learning is personal, and so a common instructional format or strategy is ineffective.
There is no evidence that presenting learners with partial information enhances their ability to construct a representation more than giving them full information. On the other hand, quite the reverse seems most often to be true. Complete information will result in a more accurate representation that is also more easily acquired.
Another consequence of attempts to implement constructivist theory is a shift of emphasis away from teaching a discipline as a body of knowledge toward an exclusive emphasis on the processes and procedures of the discipline. This change in focus was accompanied by a notion, shared by many educators, that knowledge can best (or only) be learned through experience based primarily on the procedures of the discipline. This point of view led to extensive practical or project work, and to the rejection of instruction based on the facts, laws, principles and theories that make up a discipline's content. Naturally, this notion is accompanied by the use of discovery and inquiry methods of instruction. A more vigorous emphasis on the practical application of inquiry and problem-solving skills seems very positive. Yet it may be a fundamental error to assume that the pedagogic content of the learning experience is identical to the methods and processes of the discipline being studied.
Scientific enquiry demands the ability to investigate systematically, together with substantial thinking capability. Without deep domain knowledge it is simply not possible, and deep domain knowledge can be obtained only through a systematic, formal process of teaching and learning.
Cognitive Load Theory
During the 1980s, cognitive load theory was born from the extensive research produced by John Sweller in the area of problem solving. Cognitive load theory suggests that learning happens best under conditions that suit the human's own cognitive structure. Cognitive load refers to the total amount of mental effort being used in working memory during learning. Here one has to keep in mind that working memory has a capacity of seven plus or minus two items; the learning process cannot, broadly, exceed this. John Sweller demonstrated that instructional design can be used to reduce cognitive load in learners so as to enhance the process of learning.
Cognitive load theory differentiates cognitive load into three types: intrinsic, extraneous, and germane. Intrinsic cognitive load is the inherent level of difficulty associated with a specific instructional topic; every topic has such an inherent difficulty. For example, calculating 1 + 1 imposes a much smaller load than solving a problem in integral calculus. This inherent difficulty usually cannot be altered by an instructor. However, by dividing the information into parts, many sub-schemata can be formed at the beginning and later combined into a comprehensive schema. Through this mechanism, the intrinsic load of grasping a particular piece of information can be reduced.
Extraneous cognitive load is generated by the manner in which information is presented to learners, and it is under the control of instructional designers. An example: an instructor can describe a square to a student in two ways, verbally or by showing a figure. By showing a figure, the child grasps what the teacher wants to say within seconds; by describing the square verbally, the child needs much more time and energy to understand, because the long verbal description increases the cognitive load. In this instance the visual medium is more efficient, since it does not unduly load the learner with unnecessary information. This unnecessary cognitive load is described as extraneous, and it can be attributed to the design of the instructional materials.
Germane cognitive load refers to the work put into creating a permanent store of knowledge, that is, the construction of knowledge structures (schemata) and their automation. By repeated use, a particular schema becomes automatised. Consider a child using the multiplication table in mathematics: with continuous use, the child becomes more and more familiar with it, and the information can be retrieved faster. This adds to the child's capacity to go further in mathematics. In other words, the automatisation of the schema representing the multiplication table creates a favourable condition for the formation of higher schemata that depend on it.
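A loose programming analogy, not a cognitive model, may make the point concrete: re-deriving 7 x 8 each time by repeated addition stands in for effortful processing, while a memorised table stands in for an automatised schema that is simply retrieved. The function names and table size below are mine, chosen only for illustration.

```python
# Effortful derivation: rebuilding the answer from first principles
# every time, like a child who has not yet automatised the table.
def derive(a, b):
    total = 0
    for _ in range(b):
        total += a
    return total

# "Automatised" knowledge: the full 10 x 10 table held ready-made,
# so an answer is a single retrieval rather than a computation.
TABLE = {(a, b): a * b for a in range(1, 11) for b in range(1, 11)}

def recall(a, b):
    return TABLE[(a, b)]

print(derive(7, 8), recall(7, 8))  # prints "56 56"; recall is one lookup
```

The retrieved answer is identical; what changes is how much processing it costs, which is the sense in which automatisation frees capacity for higher schemata.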
Consider working memory as a limited-capacity jug, as in the following diagram.
Intrinsic cognitive load is effectively fixed, but we should try to reduce it by breaking the task down into smaller parts. Extraneous cognitive load comes from the environment and the way we present the information; we should try to minimise it. Germane cognitive load is the processing that compares the new information with what we already know and encodes the new learning into long-term memory as schemas. The more we know about something, the lower the germane cognitive load will be.
So efficient learning can occur when working memory capacity is greater than the sum of the intrinsic, extraneous and germane cognitive loads. Today, with the development of the neurosciences and the invention of many new instruments, different methods are used to measure cognitive load.
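The jug picture can be written as a simple inequality. Here is a toy sketch of it, with all quantities in arbitrary illustrative units; the function name and the numbers are mine, not part of the theory's formal apparatus.

```python
def learning_possible(capacity, intrinsic, extraneous, germane):
    # Learning is modelled as possible only while the combined load
    # still fits inside the working-memory "jug".
    return capacity > intrinsic + extraneous + germane

# A well-designed lesson: low extraneous load leaves room for learning.
print(learning_possible(capacity=7, intrinsic=3, extraneous=1, germane=2))  # True

# The same content presented badly: extraneous load crowds learning out.
print(learning_possible(capacity=7, intrinsic=3, extraneous=4, germane=2))  # False
```

The two calls use the same intrinsic and germane values; only the presentation (the extraneous term) differs, which is the lever instructional design controls.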
The Cognitive load theory suggests that the free exploration of a highly complex environment may demand a heavy working memory load that is detrimental to learning. This suggestion is particularly important in the case of novice learners, who lack proper schemata to integrate the new information with their prior knowledge.
Mayer (2001) described an extended series of experiments in multimedia instruction that he and his colleagues have designed drawing on Sweller’s cognitive load theory. In all of the many studies he reported, guided instruction not only produced more immediate recall of facts than unguided approaches, but also longer term transfer and problem-solving skills.
Here, cognitive load theory predicts two successful instructional strategies:
1. Teacher guides students through the context in a highly structured way, pointing out what is important and what is irrelevant, modelling his thinking.
2. Simple contexts that are limited in scope are used early in learning and the teacher guides students towards more complex contexts.
Learning Effects Predicted By Cognitive Load Theory
Many learning effects are predicted by cognitive load theory: the worked-example effect, the expertise reversal effect, the split-attention effect, the imagination effect, and others. For the time being, let us concentrate on the worked-example effect alone, to keep the description short.
A worked example constitutes the epitome of strongly guided instruction, whereas discovering the solution to a problem in an information-rich environment constitutes the epitome of minimally guided discovery learning. The worked-example effect refers to the learning effect observed when worked examples are used as part of instruction, compared with other instructional techniques such as problem solving and discovery learning. According to Sweller, "The worked example effect is the best known and most widely studied of the cognitive load effects". The worked-example effect occurs when learners who are required to solve problems, as under a constructivist instructional strategy, perform worse than learners who study the equivalent worked examples. Accordingly, the worked-example effect, which has been replicated a number of times, provides some of the strongest evidence for the superiority of directly guided instruction over minimal guidance.
Why does the worked-example effect occur? It can be explained by cognitive load theory, which is grounded in the human cognitive architecture as discussed earlier. Solving a problem requires problem-solving search and search must occur using our limited working memory. Problem-solving search is an inefficient way of altering long-term memory because its function is to find a problem solution, not alter long-term memory. Indeed, problem-solving search can function perfectly with no learning whatsoever. Thus, problem-solving search overburdens limited working memory and requires working memory resources to be used for activities that are unrelated to learning. As a consequence, learners can engage in problem-solving activities for extended periods of time and learn almost nothing.
In contrast, studying a worked example both reduces working memory load, because search is reduced or eliminated, and directs attention (i.e., directs working memory resources) to learning the essential relations between problem-solving moves.
Researches Supporting Direct Guidance
Project Follow Through
Project Follow Through was the largest and most expensive experimental project in education ever funded by the U.S. federal government. The most extensive evaluation of Follow Through data covers the years 1968–1977. The project had its origin as an extension of Lyndon Johnson's 1960s "Head Start" program for pre-school children.
The idea behind the experiment was to improve the teaching of disadvantaged first graders. Many programs were tested. A large number were based upon constructivist and 'child-centred' ideas about learning that were popular with educationalists, and most of those programs continue to be popular still. Among them, a couple of interventions were labelled 'basic skills programs' by the researchers, and one of these was the DISTAR direct instruction program led by Zig Engelmann. Although there was large variation in the effectiveness of the programs from site to site, the 'basic skills' programs were clearly found to be the most effective, with direct instruction the most effective of all.
Direct instruction was labelled a ‘basic skills’ program because it emphasised things like basic arithmetic. The name has led to the misconception that direct instruction is good for teaching basic skills but not for things like problem solving and that it may harm motivation. In fact, direct instruction produced the largest gains in problem solving skills and in self-esteem in the Project Follow Through experiment.
Carl Bereiter and Midian Kurland put it well in "A constructive look at Follow Through results" (1981):
“When child-centred educators purport to increase the self-esteem of disadvantaged children and yet fail to show evidence of this on the Coopersmith Self-Concept Inventory, we may ask what real and substantial changes in self-esteem would one expect to occur that would not be reflected in changes on the Coopersmith? Similarly, for reasoning and problem-solving. If no evidence of effect shows on a test of non-verbal reasoning, or a reading comprehension test loaded with inferential questions, or on a mathematical problem solving test, we must ask why not? What kinds of real, fundamental improvements in logical reasoning abilities would fail to be reflected in any of these tests?”
A further line of evidence comes from John Hattie's meta-analyses of education research. Hattie's method is controversial because he groups together very different kinds of trials and calculates an 'effect size'; an effect size of zero represents no effect and 1 represents a very large effect. Because 'everything works', Hattie does not simply look for positive effect sizes; he looks for those above 0.4.
Instructional strategies that are predicted to be successful by cognitive load theory fare well in this analysis; the strategies criticised by Kirschner, Sweller and Clark did not fare well. For instance, direct instruction and mastery learning have high effect sizes (0.59 and 0.57) whereas inquiry-based learning and problem-based learning have low effect sizes (0.31 and 0.15). [Hattie, John. Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge, 2013.]
Hardiman, Pollatsek, and Weil (1986) and Brown and Campione (1994) noted that when students learn science in classrooms with pure-discovery methods and minimal feedback, they often become lost and frustrated, and their confusion can lead to misconceptions. Others (e.g., Carlson, Lundy, & Schneider, 1992; Schauble, 1990) found that because false starts are common in such learning situations, unguided discovery is most often inefficient.
Klahr and Nigam (2004), in a very important study, not only tested whether science learners learned more via a discovery versus direct instruction route but also, once learning had occurred, whether the quality of learning differed. Specifically, they tested whether those who had learned through discovery were better able to transfer their learning to new contexts. The findings were unambiguous. Direct instruction involving considerable guidance, including examples, resulted in vastly more learning than discovery. Those relatively few students who learned via discovery showed no signs of superior quality of learning.
In a 2004 paper, Kroesbergen and colleagues report on a trial conducted in the Netherlands. Low-achieving maths students were given either no intervention, a constructivist intervention where the students’ own strategies for solving problems were surfaced and explored, or an explicit intervention where a teacher directly taught problem-solving strategies. Students in both interventions improved but those given the explicit instruction improved the most.
Thus a number of reviews of empirical studies have established a solid research-based case against the use of instruction with minimal guidance. Mayer (2004) recently reviewed evidence from studies conducted from 1950 to the late 1980s comparing pure discovery learning, defined as unguided, problem-based instruction, with guided forms of instruction. He suggested that in each decade since the mid-1950s, when empirical studies provided solid evidence that the then popular unguided approach did not work, a similar approach popped up under a different name with the cycle then repeating itself. Each new set of advocates for unguided approaches seemed either unaware of or uninterested in previous evidence that unguided approaches had not been validated. This pattern produced discovery learning, which gave way to experiential learning, which gave way to problem-based and inquiry learning, which now gives way to constructivist instructional techniques. Mayer (2004) concluded that the "debate about discovery has been replayed many times in education but each time, the evidence has favoured a guided approach to learning".
Even though influential cognitive scientists have spent half a century of experimentation comparing constructivist pedagogy with direct instructional pedagogy, there is no experimental evidence from controlled studies to support constructivist pedagogy. On the contrary, the accumulated body of experimental evidence suggests that the direct instructional method is superior, and this applies not only to novices and intermediate learners but also to students with considerable prior knowledge. The constructivist approach is not only ineffective and inefficient; in many conditions it produces negative results, leading to misconceptions, frustration and incomplete or disorganized knowledge. In this short article we have entirely avoided examining the constructivist approach in language studies, which likewise produces only negative results.
Originally published in The Truth: Platform for Radical Voices of The Working Class (Issue 5/ September ’20)