Deep Help – Providing Multi-Level Assistance to Students in Online Learning Environments

Erik Epp

November 14, 2013 - November 16, 2013
Abstract: 

Usage data from online learning environments show that students work at all hours.  To provide assistance at all hours, these systems frequently offer hints, feedback, solutions, videos, eBooks, and similar problems.  This article discusses a new multi-level assistance framework (Deep Help) based on principles from knowledge space theory, the zone of proximal development, and cognitive load theory.

 


 

Introduction


     Unlike paper-based homework, online learning environments can be accessed at any hour of the day, and analytics are available to track student usage patterns.  As shown in Figure 1, student usage peaks in the evening, when instructors tend to be unavailable.

Figure 1. Relative usage of an online learning system over the course of a day

 

Online learning environments often incorporate various tools to provide real-time assistance to the student, including hints, feedback, solutions, videos, eBooks, and access to example problems.  Students in introductory-level courses frequently come from a diverse set of educational backgrounds and consequently require different types and levels of assistance.  This article describes the new Deep Help system, which provides on-demand, multi-level assistance to learners when and where they run into difficulty. The system was designed to best serve every student by drawing on principles from the zone of proximal development, knowledge space theory, and cognitive load theory; applied together, these theories provide the foundation for its design.

 

 

Foundational Frameworks

Zone of Proximal Development

     Vygotsky’s Zone of Proximal Development describes the range of tasks that a learner cannot perform independently but can perform with assistance. This zone, represented by the blue center area in Figure 2, lies between the tasks a learner can do without help and those the learner cannot do even with assistance.

Figure 2. Graphical representation of the zone of proximal development.

 

The role of a teacher is to provide guidance and assistance so that the learner can accomplish tasks in the center section. Teachers can use online learning systems as an extension of their role to provide additional assistance when they are not available. Organizing which tasks fall into each of these segments is further described by the next framework.

 

Knowledge Space Theory

     Knowledge space theory considers the dependency relationships between subsets of knowledge.  For instance, balancing the equation for a reaction is a necessary prerequisite for stoichiometry.  This relationship is also why it is rare to assess student understanding of stoichiometry with a problem involving a 1:1 ratio, since such a problem could be solved without meeting the prerequisites.

Figure 3a. Example of a portion of a knowledge space.

 

For instance, in Figure 3a, an understanding of B requires a prerequisite understanding of A. Similarly, D depends on B and C, indicating that to understand D, a learner must already comprehend A, B, and C.  Thus a knowledge state of ACE, FACE, or CAB would be possible, but FEB or DEC would not.
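That validity rule can be sketched in a few lines of Python. Only the B←A and D←{B, C} edges are stated explicitly in the text; the E←C and F←E prerequisites below are assumptions chosen so that the example states behave as described, since the full Figure 3a graph is not reproduced here:

```python
# Prerequisite map sketched from the relationships described in the text.
# B<-A and D<-{B, C} are stated; E<-C and F<-E are assumptions.
PREREQS = {"A": set(), "B": {"A"}, "C": set(),
           "D": {"B", "C"}, "E": {"C"}, "F": {"E"}}

def is_valid_state(state):
    """A knowledge state is valid if every mastered item's
    prerequisites are also mastered."""
    return all(PREREQS[item] <= set(state) for item in state)

for s in ("ACE", "FACE", "CAB", "FEB", "DEC"):
    # prints True for the first three states, False for the last two
    print(s, is_valid_state(s))
```

FEB fails, for example, because E is claimed without its prerequisite C, exactly as the dependency argument in the text predicts.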

 

Applying the Zone of Proximal Development to a knowledge space diagram requires that the three regions be aligned in ways allowed by the dependencies in the diagram.  A hypothetical example of this is illustrated in Figure 3b, with a learner able to accomplish A, B, C, E, and F without assistance, D and G with assistance, but unable to perform H.  The dependencies make clear that if, for instance, F required assistance, then G would require assistance as well.

Figure 3b. Application of the zone of proximal development to a knowledge space diagram

 

Cognitive Load Theory

     The major idea behind cognitive load theory is the assumption that a learner has a finite cognitive capacity. This capacity is spread among intrinsic, extraneous, and germane aspects of the activity being performed.  Intrinsic cognitive load comes from the difficulty and complexity of the concept; extraneous cognitive load relates to the means through which the concept is presented; and germane load addresses the construction of schemas.  The theory indicates that, for students to devote the majority of their cognitive abilities to learning, it is important to avoid extraneous tasks and distractions.

 

Deep Help Framework

Design

     The Deep Help framework was designed to provide stepped tutorials for prerequisite information. Students can dive deeper into the provided extra support as needed, until they fully understand all elements required to perform the original problem. The instructor has full control over student access to tutorials and can configure Deep Help to always be available or available only after a specified number of answer submissions.  While many interactive tutorials and other help tools are associated at the question level in a student’s assignment, the Deep Help system is associated with individual steps in a tutorial (Figure 4).

 

Figure 4.  Help systems associations

Cognitive load theory suggests a “just in time” paradigm: making help easy to find while the student is learning and limiting decision options so that less cognitive load is expended. Rather than presenting four tutorials to choose from, the Deep Help system gives each step of the tutorial (highlighted in orange) only one or two options for the learner to choose from.  The multi-step approach used in the tutorial helps students see which step they are having difficulty with and easily identify what Deep Help exists.

From the perspective of knowledge space theory and the zone of proximal development, we presume that instructors would assign questions that their students are able to do with assistance; however, we recognize that there are cases where this is not practical, such as before-class assignments or when students have missed class due to illness or other reasons.  To use the example shown in Figure 3, the Deep Help system allows a student to backtrack to activities they can accomplish with the assistance of the system (blue), which can expand their capabilities to tackle the original question.  It is important to note that students would not usually access a large portion of the Deep Help available for a given question, but would dive as deep as needed in an area in which they are having trouble.  For students who are completely lost, the tutorial offers a step-by-step breakdown of the problem.

 

Student View

     A student who is working on a problem has easy access to the relevant reference materials, lowering cognitive demands, as well as a link to the tutorial, which leads to the Deep Help (Figure 5).  A basic demo can be reviewed at: http://www.webassign.net/info/demo_assignment.html?deployment=2934

 

Figure 5. Student view of a question with part of the associated tutorials and Deep Help

 

Conclusion

By applying these learning theories in the design of the Deep Help system, instructors are able to offer additional support to students through an online instructional system that assists students in identifying where they are having trouble, provides multi-level assistance in those areas, and is always available.

Comments

I use an up to date Mac and

I use an up to date Mac and the demo didn't work.

Formative assessment

Erik,

You are dealing with a very important and difficult task.

Students think and solve problems in different ways, but there are basic concepts that are essential to understanding higher-level ideas, and most of us teach them in the same patterns. It is important to have tools like the one you presented to guide students as you described.

One tool you might know, which I like very much, is one that Minstrell and colleagues designed: http://www.ccce.divched.org/P7Fall2013CCCENL

They began by building a tool for physics; chemistry was next and will be available for free:

Chemistry Facets: Formative Assessment to Improve Student Understanding in Chemistry

http://www.facetinnovations.com/daisy-public-website/fihome/g1/6963-FI.html

Malka

Formative assessment

Malka,

The different ways that students process information and solve problems is a tricky one, but formative assessment, such as what you referenced, will definitely be a major part of identifying student traits to better serve them.  Thanks for bringing up this tool, I wasn't previously aware of it.

-Erik

Is Knowledge Space Theory an Ontology?

Erik, I doubt this is a question you can answer off the top of your head, but it is something I have been thinking about, and I think it is worth asking and discussing. Is Knowledge Space Theory, or should I say the “knowledge maps” (Figures 3a & 3b), a form of ontology?  And if so, could an ontology editor like Protégé be used to generate these?  Here is a quote from their website, http://protege.stanford.edu/ .

Protégé is a free, open source ontology editor and knowledge-base framework.

Now, forgive my ignorance, but my current (and hopefully improving) understanding of an ontology is that it is sort of a knowledge based taxonomy of universals (in contrast to specifics) that can be used to organize information, relate concepts and actually predict behavior and answer questions when applied to specifics. An easy example of an ontology for chemists to understand is the Periodic Table.  For example, through this ontology, you can explain why sodium forms a [+1] cation and chlorine a [-1] anion.

As for the “universals” that the periodic table is built upon, atoms have protons, neutrons and electrons, the electrons reside in orbitals, the orbitals can be defined by the three quantum numbers (for general chemistry), only two electrons can reside in an orbital (Pauli Exclusion), the ground state is the lowest energy state (Aufbau), different orbitals are shielded differently by other orbitals - depending on their shape and location (affecting effective nuclear charge), electrons in well shielded orbitals are easier to ionize, vacant orbitals that feel a strong nuclear charge want to gain electrons…  All of the above are “Universals” and can be set into an ontological framework, which in reality, I would argue, is what the periodic table is.

Now we apply the above ontology to a specific, sodium with 11 electrons, (should I say any atom with a ns1 valence configuration), will tend to form a [+1] cation, while chlorine with 17 electrons, (like any atom with a ns2np5 valence electron configuration) will tend to form [-1] anions. 

So is Knowledge Space Theory an ontology based on the curriculum? The WebAssign framework is the ontology, and the student’s personal knowledge map an application to a specific, (as chlorine and sodium are specific representations of the ontology based on the principles behind the periodic table)?

Do you know how WebAssign generates their knowledge maps? (I hope I am using a legitimate term [knowledge maps]). Do they use a program like Protégé? Has anyone on the list played around with curriculum delivery and development through ontological based semantic frameworks?

Is Knowledge Space Theory an Ontology?

Bob,
I would say that a knowledge space is a type of ontology.  To use my simple example in Figure 3a/b the completion of skill H by a student would allow us to infer that all prerequisites are able to be completed by that individual.  So the WebAssign knowledge space is the ontology and the portion that a student has completed is their knowledge state – I’m not sure that there is a parallel for knowledge state in terms of ontology. We used a few custom tools that we developed to create our knowledge space.
-Erik

Ontologies

Bob, thanks for including me in this discussion. Coming from a computing background I apologize for my naive chemistry, but I wanted to focus on the question of what is an ontology.

Think of an ontology as a set of terms (let's call them symbols?) and a set of relationships among them. So an example of a simple ontology might be:

electron is-a subatomic particle

atom has-a electron

molecule has-a atom

and so on ... I've tried to mention a few terms and some relationships among them.

One more brief comment, as long as you have names for the concepts you want to have in an ontology, you can do it. Universal concepts will work fine ... specific ones too, as long you can identify names and relationships among them.

Once you have an ontology, you could, say, write software to make inferences of various kinds. In the example above, you might infer that: molecule has-a electron. The reason is that has-a is transitive: if a person has-a hand and a hand has-a finger, then a person has-a finger. Other kinds of relations among terms might support other kinds of inferences. That's one reason ontologies are so useful - you can make inferences from them about various things.
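The transitive inference described above can be sketched in a few lines of Python. The triples and the has-a relation are taken from the comment; the closure routine is a generic illustration, not the API of any particular ontology tool such as Protégé:

```python
# Tiny ontology as (subject, relation, object) triples, from the comment.
TRIPLES = {("electron", "is-a", "subatomic particle"),
           ("atom", "has-a", "electron"),
           ("molecule", "has-a", "atom")}

def closure(triples, relation):
    """Transitive closure of one relation: if X rel Y and Y rel Z,
    infer X rel Z.  Repeats until no new pairs appear."""
    facts = {(s, o) for s, r, o in triples if r == relation}
    changed = True
    while changed:
        changed = False
        for a, b in list(facts):
            for c, d in list(facts):
                if b == c and (a, d) not in facts:
                    facts.add((a, d))
                    changed = True
    return facts

# -> True: "molecule has-a electron" is inferred, as in the comment
print(("molecule", "electron") in closure(TRIPLES, "has-a"))
```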

 

 

Future of Ontologies in Education

Thanks Dan. I think that in order to apply ontologies to science or education, the ontology probably needs to be based on universals, although I understand that ontologies can be applied to bounded systems. But science and education really need to be founded in “truth” (whatever that is), and so the ontology needs to be universal.  For example, if “you” have a finger, you have a hand, but if “you” have a hand, you do not necessarily have a finger.  So a finger is a “part of” a hand, but a hand is not part of a finger. Now if you bounded your system to only people who have fingers, as you implicitly did in your example, then if you have a finger, you have a hand.  The thing is, some people do not have fingers, but still have hands. So that ontology is not based on universals.

Now, multiplication is the repetition of addition, so one could call addition a prerequisite to [read “part of”] multiplication.  So, from a knowledge spaces perspective, a person who can multiply can add, but a person who can add may not necessarily be able to multiply. That is, the latter is not a universal. So I think that gets to Erik’s point about a parallel for a knowledge state in terms of ontology: it is the “hand”.  That is, if you have a hand, you know you have an arm, but are not sure if you have fingers. So to push the analogy, you teach the student about the finger, not the arm.

I do believe these (and social cognitive) technologies will evolve to enable us educators in the classroom to contribute our knowledge and experience to our students through ontological based extensible frameworks.  

I am adding an addendum to my comment (editing it) below:

I take back the above analogy for describing a knowledge state in the ontological sense that having a hand means you teach about the finger (a sort of inverted partsonomy) - it is wrong. I suspect the answer to the question of how you determine a knowledge space from an ontology lies in graph theory. I believe Erik’s color coding in diagram 3b conceptually describes a knowledge state, where you have what graph theory calls a directed graph.  The squares are the nodes and the arrows (edges in graph theory) are directed, indicating a dependency. From this visual representation, your personal knowledge state is represented by the green nodes, your distal knowledge is all nodes with dependencies not green, and fringe knowledge (which the article is calling the Zone of Proximal Development) is everything else. So you do not teach material for which a dependency is not mastered. I do not understand graph theory enough to answer this question, but suspect that if you can apply graph theory to an ontology, you can create an algorithm to describe "the parallel of a knowledge state in terms of an ontology" in a way that could direct students to their fringe knowledge.
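The graph-theoretic reading above can be made concrete with a short sketch. The prerequisite map below is hypothetical (the actual Figure 3b graph is not reproduced in the text); given a set of mastered nodes, the fringe is every unmastered node whose prerequisites are all mastered:

```python
# Hypothetical prerequisite graph (node -> set of its prerequisites);
# chosen to mirror the A-H labels in the article, not the real figure.
PREREQS = {"A": set(), "B": {"A"}, "C": set(), "D": {"B", "C"},
           "E": {"C"}, "F": {"E"}, "G": {"D", "F"}, "H": {"G"}}

def fringe(prereqs, mastered):
    """Nodes not yet mastered whose prerequisites are all mastered --
    the natural next targets for instruction."""
    return {n for n, deps in prereqs.items()
            if n not in mastered and deps <= mastered}

print(fringe(PREREQS, {"A", "B", "C", "E", "F"}))  # -> {'D'}
```

With this (assumed) graph, a learner who has mastered A, B, C, E, and F has D as their fringe; once D is mastered, G becomes reachable, which is the "direct students to their fringe knowledge" behavior described above.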

 

Semantic pitfalls....

...can plague ontologies.

Daniel has succinctly and lucidly described the structure and usefulness of ontologies. However, generic ontologies have not been very useful because a word rarely means the same thing to everybody in every context. To give an example, the meaning of has_a is different in

He has_a neck

He has_a disease

He has_a car

He has_a grudge

It is nearly impossible to construct ontologies_in_the_large without facing problems of this kind because we cannot anticipate such (and similar other) pitfalls.

Then again, it is difficult to modify an ontology in which such problems are discovered over time, because retrospective application of any change is a nightmare. Thus we cannot start on an experimental ontology and expect to refine and improve it over successive revisions.

All these problems can be easily (well, not very easily) handled for ontologies_in_the_small, because variation in meaning, change in meaning over time and such other issues are less troublesome in small spaces.

I have experimented with "stitching" small ontologies together (either by crossover apis or through maintained dictionaries) to encompass larger spaces, and prima facie such a stitched ontology is easier to handle.

~Milind

Extent of scaffolding?

There are many aspects of these tutorials that are nice because, as the paper made available by Erik says, a guided approach while learning is helpful to a novice.

The big question is what extent of scaffolding is appropriate?

Since amount in moles seems to be a theme of various comments, I worked through your tutorials both on calculating amount in moles and on limiting reagent.
You (or your company) obviously believe that the students should be guided (scaffolded) in a way that the thinking is done for them, and all they have to do is enter the numerical answers.

This includes entering the relative atomic masses from the periodic table and calculating molar masses even for the more advanced problem on limiting reagent.

Question 1:  Is there any evidence in the literature that this is appropriate scaffolding? OR were you limited to this type of scaffolding because of the system that you use to enter the scaffolded problems?

Irrespective of your answer to question 1, what you have done is a good start, and you should have access to data analysed which can be used to confirm or deny whether the amount of scaffolding used is appropriate.

For example, MAYBE most students (myself included) are falling down by putting in fewer than the number of significant figures required from the molar masses in the periodic table.  MAYBE most students, once they have got the hang of the number of significant figures, do not need to be led through calculation of the molar mass, and it is more appropriate to give them molar masses and focus on the real point of the problem.

Question 2:  Are you analysing the data to see which parts of your tutorial can be done by the majority of users and which parts are causing problems?

 My surmise, for the two problems that I worked through, is that the conversion factor and its orientation are keys to the successful solution.  The two numbers 1 and 23 and the two units mol and g mol-1 can be combined in four different ways to generate a conversion factor for sodium's molar mass.  Getting the right number with the right unit provides a challenge for some (a more likely challenge than adding up molar masses).

Question 3:  Are there any plans to construct tutorials where the students have to do some of the thinking (for example construct their own conversion factor) AS WELL AS entering the numbers?

There is a big emphasis on entering the appropriate number of significant figures for all answers, including those on the way to the final answer.
 In my experience if students round all of the intermediate answers to four sig figs (which they see on the screen) and then combine them to give a final answer in four sig figs, that answer may not be the same as the student who held all of the numbers in their calculator and didn't round early.

Question 4: Do you make allowance for this in calculated answers that are derived from those that appear on the screen?  Is the emphasis on significant figures for intermediate answers appropriate?

Finally, it struck me that the format for the tutorials could have been more compact for reviewing or printing on completion once the tutorial was complete.
I was somewhat concerned that the long-winded format could detract from the punchy part of the problem.

Question 5:  Did I miss a print-friendly form of the solved problem?  I would have thought that this would be handy to guide students in solving other problems.

Extent of scaffolding?

Sheila,
The approach we took was based around the idea of remediation.  The Deep Help tutorials are available to the student based on criteria set by the instructor, with the default being after a single attempt of the problem.  Since the primary audience would be students having difficulty at times when other assistance is unavailable, a more verbose approach was taken with the writing since we wanted to avoid the problem of students getting stuck and giving up.

Q1: The amount of scaffolding was determined based on a review process with five chemistry faculty, with each of the tutorials being reviewed by at least one of the faculty and several WebAssign chemists with teaching experience.  We recognize that this is a first pass, and that more data will allow us to improve the system.  If you have any suggestions for literature on this topic, I would be most appreciative as I no longer have access to the variety of literature that I did while in academia.

Q2: We expect to analyze the results via Markov chain modeling, but as of yet have insufficient data.  This was a drawback to releasing 1400+ tutorials simultaneously: with different instructors choosing different questions and only a fraction of the students accessing a given tutorial, we still need more data to start informing revisions based on student trajectories through a tutorial.
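As an illustration of the kind of Markov-chain analysis mentioned here (the step labels and trajectories below are hypothetical; this is not WebAssign's data or pipeline), transition probabilities between tutorial steps can be estimated directly from observed step sequences:

```python
from collections import Counter, defaultdict

# Hypothetical student trajectories through tutorial steps
# ("S1", "S2", ...; "done" = completion).  Not real data.
trajectories = [
    ["S1", "S2", "S3", "done"],
    ["S1", "S2", "S2", "S3", "done"],
    ["S1", "S1", "S2", "S3", "done"],
]

def transition_probs(paths):
    """Maximum-likelihood transition probabilities for a first-order
    Markov chain, estimated from observed step sequences."""
    counts = defaultdict(Counter)
    for path in paths:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

probs = transition_probs(trajectories)
# From S2: moves on to S3 with p = 0.75, repeats S2 with p = 0.25
print(probs["S2"])
```

A step where the self-transition (repeat) probability is high would flag the part of the tutorial causing the most trouble, which is the question being asked here.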

Q3: We presently are able to accept more complex answer forms (number and complex units in a single answer blank), but are still working on giving appropriate feedback for each of the several permutations you discuss and thus opted to use simpler answer blanks with better feedback.  We are also working towards technology that would allow for entering and checking of a student’s entire work for a problem.

Q4: Tolerances for numerical answers are part of our internal grading process, and the chemistry content specialists who program the questions ensure that they are reasonable for each question, including the more complex cases involving logarithms and exponents.

Q5: This isn’t available in the demo, but in normal usage the instructor can determine whether, and if so when, to release a solution for a student’s problems.  The solution uses whatever randomized values the student was assigned, making it easier to troubleshoot difficulties.

-Erik

basic demo

When using the basic demo after I hit submit nothing seems to happen?

 

basic demo

pankuch,

Would you please clarify what happened?  I just checked the demo and submitting was working.

Thanks,

Erik

Conceptual vs. Algorithmic Problem Solving

Personally, one of the joys of BCCE, second only to the after-dinner colloquia convened by Dr. Belford and his colleagues, has been questioning Dr. Epp on how cognitive science says we can help students learn chemistry.  Erik’s background in chem and cognitive psych balances both components of “chemistry education.” I appreciate his willingness to share his considerable expertise.

In chem ed, the great debate seems to be, do we encourage students to solve problems based on conceptual understanding or by fluent recall of facts and algorithms?  The cognitive load research you cite seems (to me) to say that, due to a brain that evolved to learn speech instinctively, undergrads must start by solving via facts and algorithms, and then over several years construct the conceptual framework needed to know what to use when.  I know that is not the answer we want, but it seems to be the answer that cognitive science is giving us. 

Q1:  In your view, is that the mainstream view in cognitive science (or did I mangle it)? 

Q2:  What articles might Erik recommend that summarize (without too much technical cog sci) the science of how the undergraduate brain solves problems in chemistry?

Two plain-English articles on these issues I found helpful were a 6 page PDF by Clark, Sweller, and Kirschner available here: 

 http://www.aft.org/pdfs/americaneducator/spring2012/Clark.pdf

(I think they added balance to the views in their oft-noted 2006 paper), and a summary by Herb Simon and colleagues addressing how the brain solves calculations, available here:

      http://act-r.psy.cmu.edu/papers/misapplied.html

My thanks to the CCCE Newsletter volunteers for all of their work on this forum.

-- Eric A. (rick) Nelson

 

Conceptual vs. Algorithmic Problem Solving

Rick,

Sorry about the delay in replying - you gave me a lot to think about, and I'm going to take a different sort of approach in answering your questions.

I can't speak for what the mainstream view is, so I'll speak to how I classify things.  There is a saying in business that "management is getting things done, leadership is making sure the right things get done" and I would say something similar about the difference between algorithmic (solving a problem) and conceptual (solving the right problem).  In both cases, the determination of the correct course of action is often valued higher.  I concur with your assessment that many cognitive science frameworks point to the idea of learning algorithms first, then determining the scope of their application.  A theme that I've noticed is that the aspect differentiating "experts" from "novices" is the emphasis of determining the problem to be solved.  This would point in the direction of work by Sandra McGuire of LSU on study and problem-solving strategies as being one of the better ways to improve student performance.

Let’s take a concrete example: the “classic” problem of the pH of a 10^-9 M HCl solution.  Here, the real problem is determining the relevant sources of hydronium ion in solution.  However, it is presented in the same way as a simple calculation of pH from hydronium ion concentration, so any student taking a quick read of the question, perhaps on a high-stakes test, might misinterpret it as such.  Once that algorithm has been applied and an answer of pH = 9 is obtained, only study and test-taking skills will prompt the student to check whether the answer is reasonable.  A basic pH for an acidic solution is not reasonable, and the hope is that students would recognize that the problem is other than it first appears, determine the correct problem to be solved, and solve that instead.
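As a check on that classic example, the full treatment (a short sketch; the charge balance and resulting quadratic are standard general-chemistry results, not part of the original discussion) gives a slightly acidic answer rather than pH 9:

```python
import math

# pH of 1e-9 M HCl, accounting for water autoionization.
# Charge balance: [H+] = [Cl-] + [OH-]  =>  h = C + Kw/h,
# which rearranges to the quadratic h**2 - C*h - Kw = 0.
C, Kw = 1e-9, 1e-14
h = (C + math.sqrt(C**2 + 4 * Kw)) / 2
print(round(-math.log10(h), 3))  # -> 6.998, just barely acidic
```

The hydronium from water dominates the added acid here, so the correct answer sits just below 7, not at the naive pH = 9.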

-Erik

psychological language

"For students who are completely lost, the tutorial offers a step-by-step breakdown of the problem."

I sympathize with that student. I am quite unfamiliar with the psychological language (dare I call it "jargon"?) here.  I feel like I need a Deep Help system myself.

My main question is: What exactly is Deep Help: is it an abstract design concept, a Web-based service, a software system that I install on my computer, or something else?

psychological language

Tom,

Deep Help is a framework for individualized assistance to students developed at WebAssign.  It has been integrated into several WebAssign offerings.

-Erik

Difference between ZPD and Fringe Knowledge

Dear Erik,

Thank you for this very interesting article on Deep Help and the application of learning theories to online tutorial systems.  I would like to start the discussion with two related questions, both of which I have often thought about.  The first deals with the difference between the “fringe knowledge” of Knowledge Space Theory and the ZPD (Zone of Proximal Development) of Vygotsky’s Social Cognitive Theory.  What is the difference?

The second deals with the ZPD and social cognitive development. It is my understanding that the ZPD evolved out of Vygotsky’s observation of children at play. An example of the ZPD I have read about related the ability to “pretend play” to the group of kids [read friends] a kid would play games with.  So a kid who could pretend a broomstick was a horse would be able to play with other kids a game of something like Cowboys and Indians (I am not sure that is the game they played in Russia, or that kids would play today, but it was a game I could have played in the early 60’s).  The thing is, in order to play in the game, the kid had to be able to pretend the broom was a horse in the context of play, but know the broom was really not a horse, but a broom. A kid who could not make this distinction could not play in that game, and would play with other kids.

Now I am finding it a bit difficult to articulate my second question, which does relate to the first. But what is the role of social interaction in the use of the ZPD, and is Deep Help taking the place of social interaction, that is, a sort of “human agent”?  Does Deep Help become a surrogate friend?  I like your idea of extending the ZPD to nonhuman factors in cognitive interactions, which is something I too have done in describing my own work (enhancing reading comprehension of documents in one’s distal knowledge space through a targeted expansion of the ZPD via the coupling of social definitions to canonical definitions in the WikiHyperGlossary). I also realize many educators have extended the concept of the ZPD to scaffolding of curriculum content, but are these “extensions” that I and others have made of the ZPD representative of an accurate understanding of the role of the ZPD in social cognition? I am thinking it might be more appropriate to describe this as the person’s interaction with the system to find their fringe knowledge than as a definition of their fringe knowledge. And, well, what is the difference between the ZPD and fringe knowledge? Are they the same? Does social interaction play a role in differentiating these? Do you have any thoughts on this?

Thank you for sharing with us this wonderful article.

Respectfully,
Bob Belford

 

Difference between ZPD and Fringe Knowledge

Bob,

There are parallels between aspects of the ZPD and fringe knowledge.  The biggest difference I see is that fringe knowledge consists of concepts for which all prerequisites in the knowledge space have been met, each of which can have a variable number of prerequisites.  This measure of complexity gives some indication of which pieces of fringe knowledge will be easiest to acquire, as they require the integration of fewer pieces of prior knowledge – a clear indication of intrinsic cognitive load.  The ZPD, on the other hand, does not draw distinctions to this degree.

With regard to your second question, the Deep Help system can be counted as a form of assistance to the student and as a social interaction mediated by computer, but it is not bidirectional at the student level.  So in that regard, it doesn’t really fit the idea of social cognition.
-Erik