## June 16, 2012

### Procedural Rhetoric

Thanks to Gail at The Female Perspective of Computer Science for her post, Procedural Rhetoric in Games.

I'd never heard of Procedural Rhetoric before I clicked into Gail's blog, where she links to Ian Bogost. I am super interested in the implementation details of Procedural Rhetoric because I want to know if some of the approaches are similar to what I've been gathering for my work on pedagogical modelling. Here's an old entry that kind of explains where I am on that: Strategy and Process.

However, I think I need to do a better job of articulating my findings. I've still got Anita Sarkeesian's videos on my brain, and how well she can articulate concepts and present an argument. I aspire to be able to explain my thoughts and ideas like her -- she's a great role model!

 Posted by Frozone Permalink on June 16, 2012 11:11 AM | Comments (1) categorized under Pedagogical modelling Tweet

## February 18, 2012

### Applied Graph Theory - Education Prerequisites

Hi, I just invented an applied graph theory concept which I call "Prerequisite Connectivity". I have not done a full literature survey, so I don't know if someone else has already invented this under a different name. Well, okay, fine, I did a *little* bit of research and cite the following as my literature review: "Graph Theory and its Applications in Educational Research: A Review and Integration" by Maurice M. Tatsuoka, published in *Review of Educational Research*, Fall 1986, Vol. 56, No. 3, pp. 291-329. In this work, the author describes the use of graph theory for designing "hierarchies of test items, instructional materials, and so forth". This brief discussion appears in the Concluding Remarks of the paper. Perhaps later work expands on the idea, but I have not yet found it.

All I know is that I need a theoretical structure to give me quantitative distances between nodes in a prerequisite graph, and I couldn't find an existing thing to help me, so I invented my own. And I am self publishing this concept on my own blog. What kind of rogue academic am I? LOL

Regular Graph Connectivity means that in your graph there is a path from every node to every other node. In my case, the nodes of my directed graph are Learning Objects, and a directed edge from node A to node B indicates that A is a prerequisite of B. If there is a path from A to B, then my definition of "Prerequisite Connectedness" is the same as regular "Graph Connectedness" between the two nodes, i.e. their degree of connectedness is simply the number of hops to get from A to B.

This changes when there is NO path from A to B. In regular Graph Connectivity, if there are no edges you can follow from A to reach B, the two nodes are simply disconnected, end of story. In Education, however, they CAN still be connected. For example, what if A and B share a common parent? Then Prerequisite Connectivity would say that there is 1 step required in order to get from an understanding of A to B: visit the parent first.

As you can see, the prerequisite connectivity changes the moment you have a student agent to compare against. In fact, I would say that Prerequisite Connectivity depends on agent involvement. I can give definitions for Prerequisite Connectivity assuming an imaginary student with NO history or background experience whatsoever, and that is what I will do here. But suppose I have a student who has already visited A, and I want to find the Prerequisite Distance to node B when there is no direct path from A to B, but A and B do share a parent. We can either assume that since the student agent has achieved A, perhaps they have already achieved the parent concept, which means that there are 0 steps required before they are eligible to work on B. Or, we can assume that the student has visited A but has not yet viewed the parent concept, so there is 1 step necessary before they are eligible for B: visiting the parent first.

My point is that the "Prerequisite Distance" is dependent on having a student model with knowledge of their browsing history and achievement levels on each learning object.

Here is my formal presentation of "Prerequisite Connectivity".

Definition. "Prerequisite Connectivity" is a quantitative representation of the learning distance between two nodes A and B, where nodes are Learning Objects in a prerequisite graph. (Do I have to define Prerequisite Graph, too? See the paragraphs above. It's a DAG.) In other words, the Prerequisite Distance is the number of other Learning Objects that you must study first in order to get from A (the "current Learning Object") to B (the "destination / objective Learning Object").

Definition: "Prerequisite Distance" (in pseudocode):

```java
public int getPrerequisiteDistance(Node a, Node b) {
    if (hasPathBetween(a, b)) {
        // Plain graph connectedness: the number of hops from a to b.
        return plainOldPathLengthBetween(a, b);
    }
    if (b.hasNoParents()) {
        // b has no prerequisites, so you can jump in and start learning it
        // anytime. That's why we return zero - there is nothing you have to
        // do first.
        return 0;
    }
    // b has parents, so...
    if (a.hasCommonAncestor(b)) {
        // Well, actually we should only return 1 if a and b are siblings
        // under the same parent. Truthfully, if we want to allow for
        // grandparents, change the 1 to something like
        // b.getNumberOfPrerequisitesUpToThatAncestor(). There is probably a
        // great way to do this recursively, but alas, I have run out of time.
        return 1;
    }
    // a and b have no common ancestors. In that case, the fact that the
    // client says we are starting at A is actually not very relevant (but
    // they didn't know that, so don't blame them). The number of steps
    // needed to get to B is just to visit all of B's own prerequisites.
    return b.getNumberOfPrerequisites();
}
```
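To see the definition run, here is a minimal executable sketch in Python on a toy prerequisite DAG. The node names and the breadth-first helpers are my own invention, and the common-ancestor case uses the simple "return 1" sibling approximation described in the comments above.

```python
from collections import deque

# Toy prerequisite DAG: an edge points from a prerequisite Learning Object
# to the Learning Object that depends on it. Node names are made up.
edges = {
    "counting": ["addition"],
    "addition": ["multiplication", "subtraction"],
    "multiplication": [],
    "subtraction": [],
    "spelling": [],          # unrelated to the arithmetic cluster
}

def parents(node):
    return [p for p, kids in edges.items() if node in kids]

def ancestors(node):
    """All prerequisites of node, direct and indirect."""
    found, stack = set(), [node]
    while stack:
        for p in parents(stack.pop()):
            if p not in found:
                found.add(p)
                stack.append(p)
    return found

def path_length(a, b):
    """BFS hop count from a to b along prerequisite edges, or None."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

def prerequisite_distance(a, b):
    hops = path_length(a, b)
    if hops is not None:
        return hops                   # plain graph connectedness
    if not parents(b):
        return 0                      # b has no prerequisites at all
    if ancestors(a) & ancestors(b):
        return 1                      # sibling approximation from above
    return len(ancestors(b))          # starting at a is irrelevant here

print(prerequisite_distance("counting", "multiplication"))    # 2: path exists
print(prerequisite_distance("multiplication", "subtraction")) # 1: common parent
print(prerequisite_distance("spelling", "multiplication"))    # 2: all of b's prereqs
print(prerequisite_distance("addition", "counting"))          # 0: counting has no prereqs
```

Note this sketch assumes the "no history" imaginary student; a real student model would shift the common-ancestor case between 0 and 1 as discussed above.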


 Posted by Frozone Permalink on February 18, 2012 12:40 PM | Comments (0) categorized under Pedagogical modelling Tweet

## December 03, 2011

### System Dynamics & Game Theory

For giggles, I Googled "system dynamics" "game theory". I found this cool paper by M. Rasouli from Sharif University of Technology in Tehran, Iran, which was presented at a systemdynamics.org conference. It's called A Game-Theoretic Frame Work for Studying Dynamics of Multi Decision-maker Systems.

Once upon a time, I was attracted to Artificial Intelligence (still am!). Then I learned about Cooperative Game Theory (recently - circa 2008). Then I learned about System Dynamics (very recently - Jan 2011). I am attracted to all of these things because the answer to my "building adaptive tutoring systems" problem might rest in here somewhere. Or, at least, I may find the start of a path.

Here's how I tried to find this path in Game Theory but am not quite there yet: One time, (blog entry) I explained how I consider the student to be some kind of agent and I am trying to build an adaptive environment that tries to optimize their learning. I thought I could take the strategies from Game Theory, and have the educational environment use them, as if the environment itself were an agent, too. Then the educational environment would execute its "moves" to change itself to adapt to the student.

However, I remain critical of Game Theory, even Cooperative Game Theory, because it seems designed for a different problem than mine. I even blogged one time that I should try to turn my equation inside out, because the equation was built to deal with "uncertainty" coming from a slightly different angle than the one I needed. (I tried to find that old entry, but haven't yet. Closest so far: Plan-space planning and the optimal policy calculation (June, 2009).)

I am enthralled with System Dynamics because even Cooperative Game Theory had this "competition" thing going on. I remember (blog entry) when I was looking for "details of cooperation", but in the text I only found "groups teaming up against other groups". The "cooperation" itself wasn't given any explicit attention. It was just competition re-packaged as a more complex thing involving multiple individuals.

But, System Dynamics has the "detailed look at the cooperation" that Game Theory was lacking. And that is why I am so excited.

The reason I am posting this is that this author's summary of the relationship between System Dynamics and Game Theory has helped me understand. He says that System Dynamics is usually for a single decision-maker who is seeking to construct a policy that will change the system's overall behaviour to match her desire. On the other hand, Game Theory is about multiple decision-makers finding a winning strategy.

There's an article in System Dynamics Review 1997 (A system dynamics model for a mixed-strategy game between police and driver by Dong-Hwan Kim and Doa Hoon Kim) about how SD and GT relate and work together. I should read more. (But I probably won't, even though I want to, because I've already picked my research topic and have finished the first round of my lit review. I can only continue working on things directly related to content sequencing in an educational curriculum. But this is good to know about.)

Edit: Jan 2012 - In emailing a fellow student in one of my classes I wrote something that I wanted to copy to this blog entry because it is relevant: "The reason I was so interested in System Dynamics was that I have been looking for a knowledge representation technique for educational systems. I wanted a way to represent the various knowledge levels of each individual, and to use the computer to help suggest which students should work together or independently, over time, to best support each other as they work on both their individual and common goals. I have found that Decision Theory and Game Theory are quite "competitive", but that System Dynamics is much better to represent the whole environment instead of only one perspective. It could show which areas need more cooperation and which areas are "too busy". "

 Posted by Frozone Permalink on December 03, 2011 08:01 AM | Comments (0) categorized under Pedagogical modelling Tweet

## July 30, 2011

### This is what I mean by "global coherence"

I originally wrote the following during correspondence with a colleague. But I like the wording enough to stamp it on this blog as well. :)

This is what I mean by "global coherence":

I am trying to mechanize the process of putting together a personalized curriculum that takes into account a person's background, preferences & experience. I would be using collaborative filtering (like Amazon recommendations). I am primarily interested in generating a curriculum with long-term coherence. By contrast, picture a ("bad") system that would have the learner jumping from "Ooh! Shiny object!" to "oh, another one!", and then to another, etc. If you jump from one Amazon book recommendation to the next, it does not take long to forget the original point of interest. For deeper learning, a longer stretch of coherence is necessary across a series of learning object recommendations.

To map this problem to artificial intelligence research, I suggest Model-Theoretic Planning.

 Posted by Frozone Permalink on July 30, 2011 05:19 PM | Comments (0) categorized under Pedagogical modelling Tweet

### Instructional Design as Process Modelling

I was reading a piece recommended to me by Prof. Rick Schwier: Teaching a Design Model vs. Developing Instructional Designers by Elizabeth Boling, Indiana University.

I couldn't help but fly like a moth to a flame to the term "process modelling". My favourite part of the paper was when I read this, on page 3: "..where the products of a professional's activity represent an intervention in the lives of others for an intended purpose." The author is talking about Instructional Design relative to other kinds of professional work. I don't know, would the practice of Medicine fall into the same class of human activity?

I have a history of studying "process modelling" and I believe that I invented a way to computationally represent a teaching technique that is abstracted from content. See: Language to articulate teaching strategy. It was really, really cool for me to see an actual legitimate person discuss process modelling in an educational context. Yay! Now I have something to relate to. I think my angle was more about "Let's encode this style of behaviour" and less "Let's actually figure out how to effectively teach people". I definitely need to draw from the latter, even though the product of my work is the former.

Anyway, my primary reason for posting this was to keep tabs on that link to Boling's paper.

 Posted by Frozone Permalink on July 30, 2011 01:45 PM | Comments (0) categorized under Pedagogical modelling Tweet

### Meet my friends: Ploo, Dip-lc, and Krott

I decided to give names to the trails of thought in my research. Please meet my friends, Ploo, Dip-lc, and Krott.

Ploo, or, P.L.O.O., stands for "Porous Learning Object Repository". Relevant entry: (one of them, anyway) topics at hand: graphical models in game theory, operations research and open learning object repositories. Basically, Ploo represents my efforts to build a system that could take a new learning object, and contextualize it for a particular person within a particular learning community. (note: a learning community always has an associated learning object repository of shared ideas & reference points for that community). Thanks to my supervisor Prof Gord McCalla for the adjective, "porous"!

Dip-lc, or D.I.P. - L.C.s, stands for "Distributed Instructional Planning amongst Learning Communities". Basically, Dip-lc represents my recent simulation model work. The WWW is explicitly represented as a thing that is constantly growing as a result of human activity: just as we shape it, it shapes us. (Thanks to Professor Nate Osgood for that wording! And for discussing these ideas with me for my 858 term project (link to paper on Scribd).)

Krott, or K.R., O. of T.T., stands for "Knowledge Representation, Ontology - Teaching Technique". Krott represents my quest to find a knowledge representation for teaching techniques. The word "ontology" is in there because my earliest work was to find an ontology for teaching techniques. But I really need a knowledge representation. See also: Language to articulate teaching strategy

 Posted by Frozone Permalink on July 30, 2011 11:37 AM | Comments (0) categorized under Pedagogical modelling Tweet

## July 23, 2011

### Model-Theoretic Planning

A couple of years ago, I said, "I'm trying to find a tool in AI that can help me model a process. Like, my knowledge is in this sort of shape: 'Generally, first you do A. Then, normally, you would do B. Next, most of the time you'd do C, but occasionally K happens, in which case you'd do D.'"

The chapter on Planning Based on Model Checking includes a discussion of "temporal goals", which state conditions along the execution path rather than only on its final state. Booyeah! See page 414 in Automated Planning: Theory and Practice by Ghallab, Nau & Traverso.

How would this relate to my work in cooperative game theory, then?

 Posted by Frozone Permalink on July 23, 2011 04:36 PM | Comments (0) categorized under Pedagogical modelling Tweet

### Language to articulate teaching strategy

For many years I have been trying to figure out how to represent the style of a teaching strategy. Finally, I invented a visual language that lets me articulate this. To articulate one teaching strategy, or thread of student/tutor interaction, draw a Cartesian graph, and label each point with a timestamp {t1, t2, t3, ...}. This will give a very quick way to precisely communicate the style of teaching, separate from content. I have not quite figured out the best way to define the axes, but they would be something like:
- X axis shows the degree of system vs. learner control: "System directs the activities (Cognitive tutoring / Model tracing?)" vs. "Free exploration". You could also think of this as the "direction" of conversation between student and system: who is providing, who is asking, who is listening. A "low" X means the student is maximally passive and the system is maximally directing.
- Y axis shows the content, roughly scaled according to threshold concepts, with "basic" prerequisite knowledge on the bottom and more advanced, built-up concepts up high.
- Z axis shows the spectrum from independent activity to group activity, or where the student would pour most of their energy (internal thinking vs. external collaboration).

Here is an example. Suppose it is a quiz and the system is "drilling" an individual student with repetitive questions about a very basic topic. This would appear as a line with a low X, a low Y, and the lowest Z. Then, if the conversation proceeds to a group quiz on the same thing, you'd keep the same X and Y but slide the Z around to show the changing number of learners involved. Then, if the conversation moved to "Everybody grill the teacher on a very advanced topic", the next point on the graph would be at a very high X (i.e. student control), the same Z (still the whole group rather than an individual), and a high Y. A representation like this could allow the system to "project" its intent for the direction of the teaching scenario, and give itself something to correct from if things go off track.
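To make the notation concrete, here is a small Python sketch of how that example trajectory might be recorded as data. The field names, the 0-to-1 axis ranges, and the drift measure are my own assumptions; the visual language itself leaves the scales open.

```python
from dataclasses import dataclass

# Each timestamped point places the interaction in the 3-axis space
# described above. Axis ranges of 0.0-1.0 are an assumption.
@dataclass
class TeachingPoint:
    t: int            # timestamp index (t1, t2, ...)
    control: float    # X: 0 = system directs, 1 = free exploration
    content: float    # Y: 0 = basic prerequisites, 1 = advanced concepts
    social: float     # Z: 0 = independent activity, 1 = group activity

# The worked example: individual drill on a basic topic, then a group quiz
# on the same topic, then "everybody grill the teacher" on an advanced one.
trajectory = [
    TeachingPoint(t=1, control=0.1, content=0.1, social=0.0),  # solo drill
    TeachingPoint(t=2, control=0.1, content=0.1, social=0.9),  # group quiz
    TeachingPoint(t=3, control=0.9, content=0.9, social=0.9),  # grill teacher
]

def drift(actual, intended):
    """Euclidean distance between two points in the teaching space, as one
    possible way for the system to measure how far things went off track."""
    return ((actual.control - intended.control) ** 2
            + (actual.content - intended.content) ** 2
            + (actual.social - intended.social) ** 2) ** 0.5
```

With a projected trajectory stored like this, the system could compare each observed point against its intended point and correct when the drift grows too large.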

---- Update -- just some more typing around this line of thought...
If you want to make a computer guide a person through a lesson in a certain way, following certain teaching styles and techniques, you need a language to express these styles. Last summer, (see: Language to articulate teaching strategy) I proposed that you can express these styles using a set of vectors, where each vector represents a dimension of teaching, and all the vectors put together fully describe a teaching technique.

Let's describe an example of a teaching style: give the student tons of freedom on what to look at, involve lots of people interacting with similar topics, and focus on learning the facts (as opposed to synthesizing, say, the social consequences of a certain fact).

Nov 2012: Another thought: maybe this would be a W axis, but maybe we should have a way to indicate the "precision of instruction to the student", for example giving them a vague instruction or a precise instruction. Or maybe not necessarily "instruction", but the precision of the activity. This is kind of like the X axis, except it's not about who's in control but rather the scope of the activity. Maybe these are related after all, because if the system is in control you should be giving very precise instructions, right? Or maybe it's possible for the system to be giving close guidance but have the learner still doing something broad. Specific, but broad. Hmm, I need an example, I think.

 Posted by Frozone Permalink on July 23, 2011 03:05 PM | Comments (0) categorized under Pedagogical modelling Tweet

## July 06, 2011

### Actually, Research topic: Teaching techniques have Shapes.

Do you remember when I got all excited recently about a TED talk, and I said "Oh, oh! THIS is my research topic!" (See previous entry, "Sweet, TED video showing my research area".)

Well, that was all well and good, but I have a better description now.

Imagine you have a whole bunch of recorded conversations between teachers and students: a group of students and one teacher, maybe several teachers and one student, or a single tutor and a single student. The topics discussed are of any and all kinds. And you try to get as many different shapes of conversations as possible. For example, one type might be "Drilling", where one person is continually and repeatedly prodding the other person about the same or a very similar topic. Another type might be where all parties are equally asking, providing, and listening.

So, you have a whole bunch of teaching scenarios. And you categorize them.

Here's the original part. You take the "shape" of the category and you turn it into a computational model that could be repeated on ANY topic, where any subset of the parties in the conversation (except the complete set) is played by a computer.

How? Well, I think that the first thing that you'd do is make a visual representation of each "shape". The Y axis could represent the topic, perhaps at levels of difficulty. Threshold concepts could be at the bottom, and background information under that. Above would be more advanced synthesis / analysis skills. I guess you would have to quantify difficulty level into one spectrum.

The X axis would represent the "direction" of the conversation: who is providing, who is asking, who is listening.

The line part in the middle would have dots representing time. So if you start with "Drilling" on a very advanced topic, you'd plot a line with a low X and a high Y. Then, if the conversation proceeds to a group quiz on the same thing, um, I think you'd have to use a Z axis. Then, if the conversation moved to "Grill the teacher" mode, the next line would be at a very high X and still the same Z. So your line could spiral and turn back in on itself and go forward and backward.

Now that we have a language to articulate teaching / interaction strategy, we can pick a tool from artificial intelligence to represent it.

If you have been following my blog, you will know that I've looked at the following to attempt this.
- Simulation / Process Modelling
- Cooperative Game Theory
- Decision Theory
- Bayesian Networks (ok, just a scratch)
- Constraint Satisfaction (again, I haven't done enough to claim effort, but by creating this list I also wish to articulate possibility for future reference!)
- Semantic Network

Once you have modelled a particular technique, or Teaching Shape, you need an algorithm that selects when to use each one. This will depend a LOT (maybe entirely?) on the student model and their recent activity.

A second issue arising is "How to Keep Content / Themes Coherent over the Long Term". I think that this means you have to have Teaching Shapes on top of Teaching Shapes, where some dimension within the Y axis is held constant. That is, when you overlap or build Teaching Shapes on top of each other, you have to select the Concept / Topic that you are keeping constant. Or maybe you aren't keeping the topic constant, but, you're Drilling over an array of stuff (like for an exam). But maybe in these cases, global coherence is less important.

Now that I have typed all that and re-read it, I wonder if I am the only person on this earth who would ever understand it. I could probably explain most of it to my supervisor. And I can think of a handful of others in my lab who would get it, if they were willing to spend several hours with me in front of a chalkboard.

I guess that's what this blog is for! Hopefully I will be able to continue to clarify, explain, expand, develop. :-D

 Posted by Frozone Permalink on July 06, 2011 06:36 AM | Comments (0) categorized under Pedagogical modelling Tweet

## March 14, 2011

### Knowledge is a resource that does not become depleted

Viara Popova and Alexei Sharpanskykh. Formal analysis of executions of organizational scenarios based on process-oriented specifications.

I liked reading about the kinds of references that are often used in process modelling systems: starting and finishing time points of processes; types and amounts of resources used / consumed / produced / broken; names of the actors who perform processes.

Next, I would like to address a thought that has been jumping up and down, screaming at me for attention for about a week. :-) In my area, many of the "Resources" do not get "consumed" per se. For example, in order to perform a Tutoring act, one of the things that you require is Knowledge. However, making reference to the knowledge alone does not cause it to become depleted. This is different from most of the applications in planning and operations research literature I have been studying.

At the same time, teaching requires a lot of careful planning. So, what is it that we are trying to optimize? What is the scarce resource? I would say: the limited size of the world in the Learner's head. You can only ask them to hold so many brand new facts in their head at one time as they plough through the lesson. The goal is to get the learner to apply the new concepts, to make them stick long-term, while also recognizing that not EVERYONE needs to be learning EVERYTHING deeply. There are some things (in fact, probably most things) for which the tutoring system does not need to work hard at helping the Learner cultivate long-term knowledge. The resources that a tutoring system has available are: Learning objects that the Learner has stumbled across before (perhaps they read something earlier that did not make sense then, but would now), Learning objects that were effective for similar learners in similar situations, or, as a last resort, a Web Search performed by the machine on behalf of the learner.

Anyway, I was not able to read the whole paper but am happily keeping a "bookmark" right here and hope to return to this knowledge again in a future cycle!

I find it a little bit shocking that the paper is dated 2009 but didn't show up in the journal until 2011. I don't get it. Well, I sort of do... I understand that before the Internet, it probably DID take that long to publish things. I also understand that it takes time for peer review to occur. So, now that I talk about it out loud, I am no longer shocked by the date discrepancy.

 Posted by Frozone Permalink on March 14, 2011 10:58 AM | Comments (0) categorized under Pedagogical modelling Tweet

## March 10, 2011

### Simulation as solution to decision theory dilemma

Last year, I explained a problem I was having with decision theory. This year, I learned how to use simulation environments, which are a more appropriate tool for the problem I am studying. This post explains why. There is also some game theory stuff mixed in, and I'm trying to get it all sorted out, but still have a long way to go!

First, I will begin by copy/pasting my articulation of the decision theory problem, and edit for clarity:

In my understanding, a pure strategy means you always play the same single strategy deterministically, while a mixed strategy means you randomize among your pure strategies, so the basis for your decisions can change with your circumstances.

If you can enumerate your set of strategies, then you can spread them over a probability distribution. That is, the probability that you will select one of the strategies from the set is 1. Each strategy will have its own probability of being selected, and the sum of the probabilities of each strategy is 1.

Let the set S = {s1, s2, ..., sn}, where each strategy is indexed by an integer. The set is denoted by the upper-case letter S, and each member of S by a lower-case s with a unique numerical subscript.

Let P(si) represent the probability that the agent will apply that particular strategy as they make their next decision. So,

$\sum_{i=1}^{n} P(s_i) = 1$

This is REALLY basic, but, I am new at this so I felt it was necessary to state all of this explicitly.
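As a concrete sketch, a mixed strategy like this can be sampled directly. The strategy names and weights below are invented purely for illustration.

```python
import random

# A mixed strategy over three (made-up) pedagogical strategies: the
# probabilities P(s_i) must sum to 1, and each decision is a draw from
# this distribution.
strategies = ["drill", "free_exploration", "group_quiz"]
probabilities = [0.5, 0.3, 0.2]

assert abs(sum(probabilities) - 1.0) < 1e-9

def next_strategy(rng):
    """Sample the agent's next strategy from the mixed-strategy distribution."""
    return rng.choices(strategies, weights=probabilities, k=1)[0]

# Over many decisions, the empirical frequencies should approximate the
# assigned probabilities.
rng = random.Random(0)
counts = {s: 0 for s in strategies}
for _ in range(10_000):
    counts[next_strategy(rng)] += 1
```

Running this, "drill" should come up roughly half the time, which is exactly the sense in which the distribution encodes the agent's tendencies rather than a fixed choice.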

As I identified earlier, my main problem with my attempt at applying Decision Theory is that its whole point is to work around the uncertainty about which si is going to happen next. Normally we assign each P(si) according to our best guess, updating as we go. This works really well for the right kinds of problems.

But my problem isn't exactly this shape. My uncertainty is not about P(si). The uncertainty in my problem comes from:

- anticipating user actions
- anticipating user goals
- guessing at the user's experience: what happened in their head, which ideas they processed, and defining utility according to what we can sniff about what they experienced and whether it followed our mechanics for significant learning experiences.

I talked about the third point in a more mathematical way in this other entry.

I would say that my goal is to "guess at the set of next actions the user will take". Let X be the set of all possible actions that the user can take within this learning environment. X could be multi-dimensional, based on the system's percepts (user-model sniffers like keystroke listeners, browsing history trackers, whatever). I wanted to look at the problem where my uncertainty surrounds the state. We don't know what the state is. And we don't know what is going to happen next, because the user - or the other player in the game - is going to influence the environment, which in turn influences us.

SO -- The point of decision theory was to give us a tool so that we can handle uncertain state transitions. I wanted a tool that could acknowledge that some state transitions are KNOWN, i.e. we can select a strategy.

Conclusion: A simulation environment is an excellent tool that lets you establish flow charts to represent the processes you do know, and to design agents and environments with other transition properties that you also know, and it allows you to let 'er rip and watch for any emergent properties in the system. A simulation environment lets you deal with uncertainty without having to assume that the uncertainty is about which state transition is going to be taken.
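Here is a minimal sketch of what that looks like in code, with every number invented: the adapt() policy is the known "flow chart" logic, the learner's action is the uncertain system input, and you let 'er rip and watch for emergent behaviour.

```python
import random

rng = random.Random(42)

def learner_action(skill):
    """Uncertain input: the learner succeeds more often as skill grows.
    The success model is an assumption for illustration only."""
    return "success" if rng.random() < skill else "failure"

def adapt(difficulty, outcome):
    """Known system logic: a deterministic transition we chose, nudging
    difficulty up after success and down after failure."""
    if outcome == "success":
        return min(1.0, difficulty + 0.05)
    return max(0.1, difficulty - 0.05)

skill, difficulty = 0.3, 0.5
history = []
for _ in range(200):
    outcome = learner_action(skill)
    difficulty = adapt(difficulty, outcome)
    if outcome == "success":
        skill = min(1.0, skill + 0.01)   # learning happens on success
    history.append(difficulty)
# Inspect `history` for emergent behaviour, e.g. whether difficulty ends up
# tracking the learner's growing skill instead of oscillating.
```

The point of the sketch is the separation: no probability distribution over strategies was needed, because the known transitions are written down as code and the uncertainty lives only in the simulated learner.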

Whew!

 Posted by Frozone Permalink on March 10, 2011 02:58 PM | Comments (0) categorized under Pedagogical modelling Tweet

### Why simulations are useful

Recently, I asked, "Why are simulations meaningful?"

I think I understand the answer now: it's because in the real world it is impossible to thoroughly explore all possibilities of a situation and observe outcomes. For scientific inquiry, you need a hypothesis. And you want to test the HYPOTHESIS, not pieces of it. And often the only way to thoroughly test the whole system is with a simulation.

My worry last time was that by creating the simulation, making assumptions, putting in starting parameters, that I was "making shit up". This becomes much less worrisome when you view the simulation as an articulation of your hypothesis. Of COURSE when you are trying to figure out the ways in which the world works, you will require creativity. You *have* to make shit up. And then you have to run the data through, see what it does, observe the trends, and see if they match up with real world observations. Karl Popper said that true science means that it has to be possible for your hypothesis to be proven wrong, and that successful science makes a discovery when a hypothesis IS proven wrong, and that progress happens when you keep making mistakes, which gradually steer you in the only direction that's left, i.e. the right one. Creativity is so important because that's what uncovers the possible new directions.

Anyway. Time to get back to reading!

 Posted by Frozone Permalink on March 10, 2011 01:05 PM | Comments (0) categorized under Pedagogical modelling Tweet

## February 12, 2011

### Instructional Planning 2.0?

One of the academic leaders of instructional planning is, of course, my supervisor, Dr. Gord McCalla, as well as one of his students, Dr. Barbara Wasson. Dr. Wasson was one of the first to distinguish Content vs Delivery planning in her PhD thesis.

Today, I was trying to design a simulation model for an eLearning environment, and I have been thinking a lot about content vs. delivery in the context of "individual vs group" learning.

I just wanted to note the observation that Content planning would normally be based on the negotiated Content learning goals of both the individual and the group, while Delivery planning would be based more on what is currently available (temporally, practically speaking) on the WWW and also on the individual's current preferences.

I guess I just wanted to voice my observation. That is, to distinguish that the "group" part -- i.e. where you have to negotiate the instructional plan to account for other people -- seems more heavily based on Content planning than Delivery planning.

 Posted by Frozone Permalink on February 12, 2011 03:47 PM | Comments (0) categorized under Pedagogical modelling Tweet

## November 16, 2010

### Path simulation from Operations Research in AIED

In Operations Research, you can do path simulations. Path simulations distinguish event nodes that *are* vs. *are not* under the system's control: system logic and system inputs, respectively. From my trusty and lately frequently quoted book, Stochastic Modeling: Analysis & Simulation by Barry L. Nelson, I understand that a major part of stochastic modelling (at least, the approach taken in the book) is this system-inputs vs. system-logic distinction.

The point of this entry is to acknowledge the terminology, "system input" and "system logic", and how it relates to the problem I've been working on. (This previous entry has a decent description: Sets of relationships over time in multi-agent influence diagrams.) I want to model pedagogy as separate from task domain knowledge. Pedagogy has an important dimension of "time", in that the order in which you present material to learners is significant. This is why I started looking into Operations Research and the reason I bought Nelson's book. Adopting this approach from operations research might mean putting "patterns of teaching" into the category of "system logic" and putting things like learner actions and currently-available learning objects into the category of "system input". (There remains another question that does not fit into this framework: system-generated, customized Learning Objects.)
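
Here's how I picture that split in code. Everything in this sketch is invented (the learning-object names, the 0.7 mastery chance); it's just to make the system-logic vs. system-input distinction concrete for myself:

```python
import random

def system_logic(state):
    """Under the system's control: a toy teaching policy that presents
    the first not-yet-mastered learning object."""
    remaining = [lo for lo in state["available"] if lo not in state["mastered"]]
    return remaining[0] if remaining else None

def system_input(rng):
    """NOT under the system's control: did the learner master the
    presented object? A random stand-in for real learner behaviour."""
    return rng.random() < 0.7

def simulate(available, seed=0, max_steps=50):
    """One sample path: system logic reacts to a stream of system inputs."""
    rng = random.Random(seed)
    state = {"available": list(available), "mastered": set()}
    path = []
    for _ in range(max_steps):
        lo = system_logic(state)
        if lo is None:
            break
        mastered = system_input(rng)
        path.append((lo, mastered))
        if mastered:
            state["mastered"].add(lo)
    return path
```

The trace that `simulate` returns is exactly the kind of "path" the simulation produces: policy decisions interleaved with inputs the policy cannot control.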

So, yeah. That is my point. Thanks for listening! :)

By the way, I wrote this entry while wearing blue eyeshadow and orange lipstick. Booyeah!

The reason I share this fact about my choice of cosmetics for today is that I was inspired by the following article by Melissa McEwan, "On shoez and getting personal, a.k.a. How are we supposed to take feminist bloggers seriously if they post about shoes?"

I think that McEwan's article may relate to Open Research Bloggers such as myself, who enjoy throwing in stories about parenting or other such things not obviously related to our research topics.

 Posted by Frozone Permalink on November 16, 2010 12:42 PM | Comments (0) categorized under Pedagogical modelling Tweet

## October 02, 2010

### Creation of an artifact, not

For the last several months, I've been trying to establish some kind of structure for my thesis work. It's hard to "get started" even though I have momentum already from my years building up this blog. For several weeks I've been saying to my supervisor, "I want to create an artifact!" Like, start the literature survey, or write a paper, or do a thesis outline, or put in place some architecture for the software I'm going to build for my thesis. But this has not been panning out. As passionate as I am about my work, I just couldn't make it happen without extrinsic motivation. And who knows if extrinsic motivation would have had any effect other than to stress me out.

So instead of that approach (i.e. trying to create an artifact), I'm going to throw all meta-research structures to the wind and instead focus on developing the idea. Even though the work you can see on this blog has not produced ANY publications, results, contributions, citations... it HAS given me experience and momentum. That means something.

Digression aside! Let's develop this thesis. The strongest themes that are surfacing for my research so far include:
- global coherence and local adaptivity
- pedagogical process modelling
- instructional planning
- agents, ecological approach
- ontology referencing
- strategy from game theory, modelling of relationships using graphical models and game theory
- utility as a meaningful experience

This is probably not a complete list, but, I trust that if I missed something it will pop up on its own later. ;)

Now, to help this thesis begin to mature and grow, my supervisor suggested I select the "ultra" research issue, fully knowing it can change, but we gotta start somewhere. Something that keeps popping up over and over is,

"adding new learning objects into an environment".

Usage data, meta data - these will be very important.

Next, I have to define this more thoroughly with the intent of designing an application that would really explore the boundaries of this thing.

Then, depending on our results, what could we say about a real-world problem we are solving? The real-world problem is that when a person sits down to try and learn something, it's REALLY REALLY hard to dig your own path to any level of depth. The idea is that if you had a system that was watching out for you in the long run, it could help you contextualize new materials you come across in your surfing.

 Posted by Frozone Permalink on October 02, 2010 10:58 AM | Comments (0) categorized under Pedagogical modelling Tweet

## September 23, 2010

### Glenn Shafer's talk and possibly bypassing the non-enumerable states problem

This morning, Glenn Shafer (yes, THE Shafer, as in Dempster-Shafer theory) visited the University of Saskatchewan and gave a seminar entitled, Game-theoretic probability and its applications.

I got a lot out of the talk. Earlier (Pen scratches: elementary game theory) I identified a problem with Decision Theory (DT) for my application purpose, namely, that DT is built around the uncertainty being directed at state TRANSITIONS, while the states themselves are presumed to be well-defined and enumerable. I think that the structure presented by Shafer this morning is freer of some of the baggage that was holding me back.

An important theme in the talk was to contrast two frameworks for probability: Measure Theory and Game Theory.

I might have this wrong, but, my understanding is:

In the Measure Theoretic sense, you basically count up all the leaf nodes in your probability tree and count up the favourable outcomes in order to discover the probability you are trying to measure.
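
I might be butchering it, but my reading of that measure-theoretic picture translates to code like this. The tree and weights are my own toy example (two fair coin flips), not anything from Shafer's talk:

```python
from fractions import Fraction

half = Fraction(1, 2)

# A probability tree: an internal node is a list of (branch probability,
# subtree) pairs, and a leaf is just an outcome label.
tree = [
    (half, [(half, "HH"), (half, "HT")]),
    (half, [(half, "TH"), (half, "TT")]),
]

def leaves(node, weight=Fraction(1)):
    """Enumerate every (outcome, path probability) pair at the leaves."""
    if isinstance(node, str):  # reached a leaf
        yield node, weight
    else:
        for p, child in node:
            yield from leaves(child, weight * p)

def prob(node, favourable):
    """Measure-theoretic reading: add up the weights of favourable leaves."""
    return sum(w for outcome, w in leaves(node) if favourable(outcome))
```

So P(at least one head) comes out by enumerating all four leaves and summing the three favourable ones.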

In the Game Theoretic sense, you basically calculate a "starting wager" (which is related to the measured probability above) which would identify the next branch in the game tree that leads to your desired outcome. (?) Your prior/given/assumed/wanting-to-test-THIS-result -type information basically points to the leaf nodes which identify the subtree.

Shafer's new point is that his presentation of Game Theoretic Probability can be used to construct proofs (i.e. find solutions) even when we don't have cases for everything.

My brain wanted to translate this to "even when we can't fully enumerate all possible states".

I have been trying to use formalisms to bring together "observables" (learner model attributes, etc.), pre-known processes / tricks-of-the-trade / even production rules, if you will -- and some other stuff -- to build an adaptive, personalized learning environment.

In an educational system, you CANNOT fully enumerate all possible states, so this is an interesting connection.

You can't fully enumerate all possible states in an education system because the state will represent the learner model (which you are TRYING to change anyway, see Payoff matrix) and will also represent the learning environment, which would also change -- new learning objects get imported, etc.. See also Strategy and Process. I believe my thinking here, and even ability to articulate it, was also influenced by some post-talk conversation with Gord. :)

 Posted by Frozone Permalink on September 23, 2010 12:15 PM | Comments (0) categorized under Pedagogical modelling Tweet

## September 19, 2010

### Two Frameworks (mathematical, rigorousCultural)

The textbook for my new class is making me sweat. It's presenting "familiar" information to me (i.e. pretty much the content of this blog -- my passion!) but within a framework I don't necessarily jive with. It's forcing me to struggle with my assumptions. This is very, very good, and is one of the main reasons I'm in grad school. :)

The book is Dr. B. Woolf's Building Intelligent Interactive Tutors (link to a Google Book of this).

Although each chapter screamed relevance ("You NEED to know me!"), I zeroed in on Chapter 4, Teaching Knowledge.

In my own head, "teaching" is presenting content, invitations, communication channels, tools, context and a presentation of goals based on the machine's knowledge about the student. Lately, I've been trying to articulate teaching mathematically, separately from the task domain ontology, by:

• exploring Bayesian event nodes for the environment and collecting "priors" on the fly as newly applied evidence for a dynamic Bayesian network (i.e. percepts),
• looking at how an expected value calculation or optimum policy calculation might be used to power the adaptiveness of the environment by driving the selection of the "next action" (which, in my head, was an obvious translation to "adapt the environment in this way by executing the next action 'a'"),
• and other tricks I'm sure... like Game Theory (example entry).
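
For the expected-value bullet, here is roughly what I have in mind for "pick the next action". The states, belief, and utilities are all numbers I made up for illustration:

```python
# P(learner state | evidence so far) -- an invented belief distribution.
belief = {"confused": 0.6, "confident": 0.4}

# utility[action][state]: payoff of taking `action` when the learner is
# actually in `state` (invented numbers).
utility = {
    "give_hint":      {"confused": 8, "confident": 2},
    "harder_problem": {"confused": 1, "confident": 9},
}

def expected_utility(action):
    return sum(belief[s] * utility[action][s] for s in belief)

def next_action():
    """Adapt the environment by executing the action with the best
    expected utility (the argmax over actions)."""
    return max(utility, key=expected_utility)
```

With this belief, hinting wins: 0.6*8 + 0.4*2 = 5.6 beats 0.6*1 + 0.4*9 = 4.2.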

Whew! So, given all of this intellectual overhead, now I am trying to absorb a presentation of a different world view, and it's making me sweat. The author presents a rigorous framework for analyzing the design of such systems. But the approach is not triggering very much of my own past experience, so it's harder to take in. The book does not employ formal notation, which makes sense because it's too early to know what we need to build, too early to abstract it back up into the math. But it's a different approach than what I've been taking on my own lately.

Where to next? I'm just going to have to acknowledge the gulf between my previous experience and that which is being presented before me. I'm going to make a commitment to grapple with this new stuff. Maybe my perspective will generate some new ideas. Stay tuned!

 Posted by Frozone Permalink on September 19, 2010 01:15 PM | Comments (0) categorized under Pedagogical modelling Tweet

## August 15, 2010

### An example of strategy

Earlier, I mentioned I'd spotted a paper about an actual research project that applied game theory. My motivation to study the paper was to discover:

1. how they implemented strategy

2. why equilibria were important

In this work, an agent's strategy was the formation of a subset. It wasn't a Markov Decision Process and it wasn't a graph traversal, like I'd been expecting. In this paper, the application was "community discovery", where many different agents belong to many different communities. From the first person, an agent could say, "my strategy is which communities I picked". The "strategy profile" of the game was a set of vectors, one vector per agent, each vector representing that agent's selection of communities.

#2 is related because an equilibrium, as I learned earlier, is a strategy profile (one that might have to meet certain conditions in order to be, say, a Nash equilibrium).

I was delighted to read about the utility function in this work because it showed how this too was related to strategy.
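
My (possibly off) reading of the setup, as a toy: each agent's strategy is its selection of communities, the strategy profile is one selection per agent, and the utility ties the two together. The agent names and the utility function here are mine, not the paper's:

```python
# Strategy profile: one community-selection per agent (invented example).
profile = {
    "alice": {"c1", "c2"},
    "bob":   {"c2"},
    "carol": {"c1", "c3"},
}

def utility(agent, profile):
    """Toy utility: one point per community shared with another agent."""
    return sum(len(profile[agent] & communities)
               for other, communities in profile.items() if other != agent)

def has_better_response(agent, new_strategy, profile):
    """Would `agent` gain by deviating? A profile where NO agent has a
    better response is a Nash equilibrium."""
    deviated = {**profile, agent: new_strategy}
    return utility(agent, deviated) > utility(agent, profile)
```

Here bob would gain by also joining c1, so this particular profile is not an equilibrium under my toy utility.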

 Posted by Frozone Permalink on August 15, 2010 09:54 AM | Comments (0) categorized under Pedagogical modelling Tweet

## July 28, 2010

### Conditions for action-application

This entry is a hasty record of a brainwave about strategy, process, and action generation.

How does a game theoretic agent generate its next action? I kept thinking that certainly the action-selection must be a choice for the agent: See, it should be given a list of POSSIBLE actions, and it has to pick the best one. (Perhaps it is another subsystem that generates this set of possible actions.)

I've devoted a fair bit of energy into "Process Modelling", and I keep coming back to the question: If you are following a known process, why is "action generation" such a problem? If you have a sequential process modelled ahead of time, isn't the next step obvious? There should be just 1 choice when you are following a pre-defined process, right?

Well, it's not so simple. Each action has conditions under which it should be applied, and you have to take the learner's current situation into account.

When you model process, you can't just list a sequence of actions. Also required is the set of conditions under which each action-selection would be best. This is strategy, and this is why I am reading up on game theory. I want to know about action selection, and the application of conditions -- how is strategy encoded? How can this map to pedagogical knowledge? (For more on computational modeling of teaching strategy, see this other entry, Revisiting: What is teaching? Some models)
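
The smallest encoding of that idea I can think of: a process as condition-action pairs instead of a bare sequence. The conditions and actions below are invented placeholders:

```python
# Each step carries the condition under which it applies; the "next step"
# is the first action whose condition matches the learner's situation.
process = [
    (lambda s: s["frustration"] > 0.7, "offer_encouragement"),
    (lambda s: not s["prereqs_met"],   "review_prerequisites"),
    (lambda s: s["mastery"] < 0.5,     "worked_example"),
    (lambda s: True,                   "practice_problem"),  # default step
]

def next_action(situation):
    """Action selection = applying conditions to the current situation."""
    for condition, action in process:
        if condition(situation):
            return action
```

Even a "pre-defined" process like this yields different action sequences for different learners, which is (I think) exactly why action generation stays a real problem.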

Gord asked me the other day, "So what does Constraint Satisfaction have to do with Game Theory?" Maybe this is it. Maybe the conditions for applying a particular action in a particular situation can be modelled with constraints; I am not sure how you would express the fact that there are many agents and that the constraints may deal with specific ones, relative to your current position.

 Posted by Frozone Permalink on July 28, 2010 05:53 PM | Comments (0) categorized under Pedagogical modelling Tweet

## July 21, 2010

### Payoff matrix

I don't think I've ever talked about Payoff Matrices before. They are an element from game theory. My advisor suggested they might be an interesting place to put things like:

• listeners from the learner model that know about learner motivation.

I was also thinking you could dump in:

• negotiated learning goals, pedagogical measures for attainment
• pedagogical rules for changing your "strategy" or "mode of interaction" with the learner -- conditions for switching, payoff...
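
To make the payoff matrix idea concrete for myself, here is a toy one for switching "mode of interaction". All of the strategies, states, and numbers are invented:

```python
# payoff[(tutor strategy, learner motivational state)] = (tutor payoff,
# learner payoff). Invented numbers for illustration only.
payoff = {
    ("coach",     "motivated"):  (3, 4),
    ("coach",     "disengaged"): (2, 2),
    ("challenge", "motivated"):  (4, 5),
    ("challenge", "disengaged"): (0, 1),
}

def best_tutor_strategy(learner_state):
    """A crude 'condition for switching': pick the row that pays best
    against the learner's (believed) motivational state."""
    strategies = {s for s, _ in payoff}
    return max(strategies, key=lambda s: payoff[(s, learner_state)][0])
```

The switching behaviour falls out of the matrix: challenge a motivated learner, coach a disengaged one.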

And how does constraint satisfaction relate to game theory?

And another thing I haven't thought about much in my other work is the "Many Humans (learners, instructors, etc.)" aspect. Mostly I've been working on {IndividualHumanLearner, ArtificialSupportAgents}.

And, in Education, don't forget that the GOAL is to CHANGE the environment, and even the behaviour (or experience and understanding, I guess) of other agents. Many robotics applications would be just as happy to model the universe through (limited) percepts, while adapting to change in what I would call a "passive" manner -- kind of a defensive approach. The sort of agents I'm interested in need to be much more aggressive, and strategic. Game Theory meets Planning, I guess.

 Posted by Frozone Permalink on July 21, 2010 08:01 AM | Comments (0) categorized under Pedagogical modelling Tweet

## July 04, 2010

### Sets of relationships over time in multi-agent influence diagrams

I'm having a hard time spitting out this idea: Sets of relationships over time in multi-agent influence diagrams

For a long time I've been thinking that you could use an ontology as a referral point between multiple agents. In other words, if you have multiple agents interacting in an environment, and you want to compare their strategies, you could compare each strategy to some shared ontology which effectively normalizes it and allows comparison.

Why would you want to compare agent strategies in a cooperative setting? Maybe so you could play them out separately and pick the best one (dynamic programming)? Or maybe this could fit into a Model Tracing kind of thing to help you figure out what another agent is trying to do, so you can jump in and help.

I remember also being preoccupied with projection (in the entry, Formal constructs for projection) and what "givens" might have in common with agent actions. In other words, how does Machine Learning relate to Planning?

Other related thoughts:

I am getting closer to building an influence-diagram-like abomination that has a normalized, built-in process. I want agents to be able to compare each other's strategies using an ontology, so Event nodes would have multiple dimensions.

By creating this thought experiment, I'm not trying to accomplish anything, really. I just want to see how the whole system works and to see if I can master it adequately to change it around.

Ultimately the encoded-process would be provided to the agent and they would be able to project it into the environment, on the fly in an adaptable sort of way. The agent could watch another agent interact with the "action-things" that it introduced into the environment, and thereby know how to select the next action, based on the learner's reactions.

I don't know, man. I definitely need to chew on this for a while. Also, I should share my abomination of an influence diagram with you, after I work out some interaction between the process and some agent's decision.

Oh, what the heck. Here it is. I am out of time now, so I will have to explain the example in another entry.

 Posted by Frozone Permalink on July 04, 2010 03:37 PM | Comments (0) categorized under Pedagogical modelling Tweet

## June 19, 2010

### Action=variable assignment

Let some subset S of the event variables in your influence diagram represent decision points from a normalized process (where I define a process as a pre-assignment of action event outcomes (decisions) spread out over normalized time). Decision nodes are then still free to represent the unknown actions of other agents. This way, the process modelling is kept separate from decision-making. Further, let "actions", which are timestamped instantiations of the process, manifest themselves as variable assignments in S.
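
An attempt to restate that in code; every name here is a stand-in I invented:

```python
# Event variables in the influence diagram.
event_vars = {"present_topic", "give_quiz", "peer_action", "learner_reply"}

# S: the subset pre-assigned by the normalized process. An "action" is a
# timestamped variable assignment.
S = {
    "present_topic": {"t": 0, "value": "fractions"},
    "give_quiz":     {"t": 2, "value": "quiz_1"},
}

# Whatever is NOT in S stays a free decision node (other agents' moves).
free_decisions = event_vars - S.keys()

def apply_action(assignments, var, t, value):
    """Instantiate one process step as a variable assignment."""
    assignments[var] = {"t": t, "value": value}
    return assignments
```

The point of the structure is visible in `free_decisions`: the process claims some variables, and the rest remain open for other agents.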

Next question: what advantages would this structure give us? All of this time, I wanted to model process, and I have just presented a thought about how it could be done.

Personally, I would be impressed if I could show that the set S is produced from a pre-defined model and worked into the "environment" (the influence diagram) on the fly.

(this entry is an elaboration on an earlier question, Givens and Action)

 Posted by Frozone Permalink on June 19, 2010 10:37 PM | Comments (0) categorized under Pedagogical modelling Tweet

## June 15, 2010

### Process ?= Algorithm

In my quest to learn about computational models for process, I mustn't forget the fundamental concept, "algorithm".

(It is presently 4:15 in the morning, so I hope it is forgivable that I speak the following in pseudo-mentalese. I hope I can elaborate later. :)

How is "algorithm" attached to the task domain and to influence diagrams? What exactly does the algorithm manipulate? The algorithm is what instantiates the teaching into the environment at hand.

(The word "teaching" is a little sullied. What I actually mean is, "whatever variety of technique employed to provide opportunity for the learner to output and test their own theories and become exposed to new information in a contextualized environment.")

 Posted by Frozone Permalink on June 15, 2010 04:22 AM | Comments (0) categorized under Pedagogical modelling Tweet

## June 07, 2010

### Planning + Learning Objectives

I was scanning my Google Reader yesterday about eLearning stuff. Someone wrote something to the effect of, "It is important that we give learners the opportunity to make decisions, and those decisions must be tied to learning objectives. Making decisions helps people learn because they get to observe the consequences, to test their own hypotheses. Tailoring the decisions that they make, i.e. by offering opportunity to make decisions in just the right situations, can help us make sure the decisions learners are being asked to make are *relevant* to the learning objectives of the course."

Actually, it wasn't anywhere near that elaborate, and I'm pretty sure that this is not the point the author was trying to make; it's just what my brain spat out from reading a vaguely related thought. Sigh. But I am still irritated that I cannot find whatever article it was that made me think that. Oh well.

I want to talk about how this could relate to AI and planning. (surprise, heh.)

Whenever I see the word "decision", I think, "decision nodes! influence diagrams!" But of course these are designed to help you program an agent (like, a robot). Typically, with an influence diagram you are trying to influence the direction of a robot; you, the god-like programmer, are trying to "make" it "smart". My approach is different. I want to design an enriched, tailored environment for a learner. But the technology from the former is very good, and I think that it can be adapted. (If you don't know what an influence diagram is, check out my earlier entry, Decision Theory for Teaching Strategies.)

Trying to adapt and extend this technology, I ask: How might decision nodes relate to learning objectives?

Suppose an educational environment generates some kind of Plan for the learner (where the plan is always changing and adapting, of course, while staying aware of overall themes, coherence, and meaningful experiences, yaddayadda). How would you represent "This is Where I Want to Give the Student a Chance to Make a Decision", with a link to the topic, the pedagogical strategy being followed, the learning objective attached to this decision, and some predictions about what to do next based on possible student reactions?

So the point of this entry boils down to one question:

What is the difference in mathematical notation between the traditional "decision node" and a "present learner with a decision" type action in an instructional plan?

Also I need to clarify in my head the difference between influence diagrams and plans. (like in this older entry, Plan-space planning)

 Posted by Frozone Permalink on June 07, 2010 10:51 PM | Comments (0) categorized under Pedagogical modelling Tweet

## June 05, 2010

### "Givens" and action

What is the difference between a "given" in a causal network and the action taken by an agent in a Game?

 Posted by Frozone Permalink on June 05, 2010 02:11 PM | Comments (0) categorized under Pedagogical modelling Tweet

## May 25, 2010

### Partial order plans, Greedy algorithms and Educational Objectives

I had a brainwave the other day about partial order plans. POPs are where you identify the steps needed to reach the goal, but the steps are not placed in a fixed order -- the agent is free to mix and match them according to its current situation.

(Hmm, I wonder if a greedy algorithm could be used to complete a partial plan, gradually taking the current "best value" step.)
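
That greedy-completion parenthetical, sketched out. The steps, values, and ordering constraints are all invented for illustration:

```python
# A partial-order plan: step values plus ordering constraints (a, b)
# meaning "a must come before b".
steps = {"read_intro": 3, "try_exercise": 5, "watch_demo": 4, "reflect": 2}
before = {("read_intro", "try_exercise"),
          ("watch_demo", "try_exercise"),
          ("try_exercise", "reflect")}

def greedy_linearize(steps, before):
    """Repeatedly take the highest-value step whose predecessors are done."""
    done, order = set(), []
    while len(done) < len(steps):
        ready = [s for s in steps if s not in done
                 and all(a in done for a, b in before if b == s)]
        best = max(ready, key=steps.get)  # greedy: best current value
        done.add(best)
        order.append(best)
    return order
```

The greedy choice fills in the ordering the POP left open, while the `before` constraints keep the completion legal.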

Anyway, my brainwave was about a visualization of "My Learning Objectives" in an online course as compared to "The Actual Structure Of the Course". For example, students are accustomed to seeing a course laid out kinda like: Module 1, Module 2, Module 3, etc... However, it would be important for my system to show "My Learning objective 1", "My learning objective 2", "My Learning objective 3", etc.. within the context of the former.

(I used LucidChart.com to make the image above.)

The point of this image is to show the student's learning objectives within the context of the Module hierarchy. I believe I need another dimension in my visualization in order to make this effective and easy to understand at-a-glance.

In this image, I'm showing that my learning objective is something quite specific - tied to a fine-grained item (exercise) in the overall hierarchy. In order to achieve this specific exercise, the student must pass through the overall "lesson", and then have some exercise-specific activity. This "narrow to broad to narrow" is just one type of teaching strategy.

I hope I am remembering this correctly, but I believe that a discussion about computational encodings of "narrow to big" or similar strategies appears in Wasson 1998, Facilitating dynamic pedagogical decision making: PEPE and GTE.

I am also remembering something about "the frontier" here from Etienne Wenger's book. As usual, I am writing this from my iPod at home and am unable to check my paper references at this time.

 Posted by Frozone Permalink on May 25, 2010 10:34 PM | Comments (0) categorized under Pedagogical modelling Tweet

## May 21, 2010

### It's not Time-Series Data

I love data and I think that visualizations are delicious things. I read SimpleComplexity and I enjoyed the latest ACM Queue article, A Tour through the Visualization Zoo.

Reading the latter, I loved the description of time-series data. Lately, anything about "Time" and "Data" is piquing my interest. (ex. recent entry, Decision-making over Time)

Today, I just wanted to note firmly that I realized my interest in process involves time, but that it is not time-series data. Time-series data is like an HMM -- one variable whose value changes over time. (see related entry with discussion about Hidden Markov Models, "It's about influencing the process.")

My fascination with process is about CHANGING RELATIONSHIPS over time. Using graphical models, this would be a change in the set of Edges over time, somehow.
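
The contrast I mean, in code. The graphs below are invented, just to show the shape of the data:

```python
# Time-series data: one variable's VALUE over time.
time_series = [0.2, 0.5, 0.4, 0.9]

# What I'm after: the graph's EDGE SET changing over time.
edges_over_time = [
    {("tutor", "learner")},
    {("tutor", "learner"), ("learner", "peer")},
    {("learner", "peer")},
]

def edge_changes(trace):
    """Edges added and removed between consecutive snapshots."""
    return [(new - old, old - new) for old, new in zip(trace, trace[1:])]
```

The interesting object isn't any single snapshot; it's the added/removed deltas, i.e. how the relationships themselves shift.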

So, yes. The point of this post was to clarify something in my mind about time-series data being about the "Nodes" (Events), and distinguishing my work as being moreso about the "Edges" (Relationships over time, strategy...).

With this new perspective, I would like to revisit my work in game theory and planning.

(On Twitter, I summarized this entry as: Contrasting time-series analysis with process/strategy as "changing relationships", not events)

 Posted by Frozone Permalink on May 21, 2010 08:40 AM | Comments (0) categorized under Pedagogical modelling Tweet

## May 20, 2010

### Guy Brousseau - Mathematics education

Thanks to Egan Chernoff for the reference to the work of Guy Brousseau. I found this article that gives a description of Brousseau's work.

According to my own new, and limited, understanding, Brousseau developed a theory about the situation of a teacher and a student interacting to have the student learn math. I am really excited about seeing SPECIFIC and THOROUGH work on teaching as I read into this work. As a computer scientist I want to further identify the dimensions of this interaction (i.e. task domain ontology, story, techniques, strategy, etc.) for the purpose of designing effective technology support for these things.

That's all for now; time to go to work...

 Posted by Frozone Permalink on May 20, 2010 07:39 AM | Comments (0) categorized under Pedagogical modelling Tweet

## May 19, 2010

### Decision-making over time

Lately I've been looking at applying game theory to my problem. (Previous entry: Strategy and Process)

Recently, I had an invigorating conversation with a former colleague (yo Dylan!) about AI, planning, memes, feedback/reinforcement, swarm intelligence (or mind as a set of autonomous agents), influence diagrams and decision theory, and many other things. It was super awesome.

The point of this post is to record a take-away thought from this conversation that I think is important. We had sketched out a sample influence diagram (sort of like this example from a previous post, Decision theory for teaching strategies) and pointed out Event nodes, Decision nodes, and Utility nodes. At the time, I couldn't remember how "Event" nodes took an agent's observations into account. I think we had been talking about agent sensors and actuators. Later, I remembered that "observables" take the form of "givens" in a conditional probability, and an event is a statement of conditional probability. I have talked about this before, too, in Learned some Stats lingo.

Anyway, the important point is that "Calculating Optimal Policy is Important OVER TIME." I figure that an influence diagram looks "frozen". If the givens in your Events are changing all the time, if the Utility function itself is changing, and if your decisions have to change accordingly... this is STRATEGY, and it takes the dimension of Time into account.

I look at a couple different optimal policy calculations in this previous entry, Conditional probabilities, and "the argmax thinggy". Notice how one of them takes time into account and the other does not. I would say that Time is a critical dimension in planning.
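
A tiny finite-horizon version of "optimal policy over time", done by backward induction. The two-state model and every number in it are invented:

```python
states = ["confused", "confident"]
actions = ["hint", "challenge"]

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward.
# All transition probabilities and rewards are made up for illustration.
P = {
    "confused":  {"hint":      [("confident", 0.6), ("confused", 0.4)],
                  "challenge": [("confident", 0.1), ("confused", 0.9)]},
    "confident": {"hint":      [("confident", 0.7), ("confused", 0.3)],
                  "challenge": [("confident", 0.8), ("confused", 0.2)]},
}
R = {"confused":  {"hint": 1.0, "challenge": 0.0},
     "confident": {"hint": 0.5, "challenge": 2.0}}

def plan(horizon):
    """Backward induction: the best action can differ at each time step,
    which is exactly the 'over time' part of optimal policy."""
    V = {s: 0.0 for s in states}
    policy = []
    for _ in range(horizon):
        Q = {s: {a: R[s][a] + sum(p * V[s2] for s2, p in P[s][a])
                 for a in actions} for s in states}
        policy.insert(0, {s: max(Q[s], key=Q[s].get) for s in states})
        V = {s: max(Q[s].values()) for s in states}
    return policy
```

Contrast this with the one-shot argmax: here the policy is a whole sequence, one state-to-action map per time step, built from the end of the horizon backward.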

The calculation in that post lacks an overall picture of process.

(On Twitter, I summarized this entry as: Decision theory- optimal policy over time)

 Posted by Frozone Permalink on May 19, 2010 12:05 PM | Comments (0) categorized under Pedagogical modelling Tweet

## May 17, 2010

### Rubrics

Generating rubrics http://rubistar.4teachers.org/ (Courtesy of ULC summer student Sarah via Liv)

This got me thinking about automation and gathering feedback for successful instructional plans.

 Posted by Frozone Permalink on May 17, 2010 03:47 PM | Comments (0) categorized under Pedagogical modelling Tweet

## May 08, 2010

### Strategy and Process

The reason I am interested in game theory is that I am interested in process. Game theory studies multiple agents who employ their own different strategies. The agents act according to their strategies, and they observe how the environment changes based on their own actions and actions of others.

In my field, the student performs actions in a learning environment, which adapts itself to best support the student(s). The student is following some unknown strategy, and the learning environment will have its own set of teaching-technique-strategies and will pick and choose from this arsenal as it continues to act both responsively and preemptively. A common language that student and learning environment will share is the task domain ontology (in other words, the subject matter that the student is studying.)

I have articulated this idea very well myself, from a slightly different angle. So, I will quote myself:

Why do I want to write a planner that employs ontological references? Because a planner instantiates "the organization of the delivery and directed coverage of content". I want to explore the interplay between ontology and methods and models of supported individual or group study. I feel that the best way to explore this interplay is to use mathematical models. These force you to be specific. Getting specific forces you to pinpoint subtlety, and deal with it. You have to put names to things, and you have to define criteria for decision-making. --Frozone, on a previous blog entry

So that is all. I just wanted to tie together some big ideas. Game Theory. Process. Planning. Strategy. Pedagogy.

(On Twitter, I summarized this entry as: Game theory, Process, Strategy, Planning and Educational environments)

 Posted by Frozone Permalink on May 08, 2010 11:16 AM | Comments (0) categorized under Pedagogical modelling Tweet

## May 05, 2010

### Discovery and assembly of objectives

I was thinking about artificial intelligence, and planning, and goals.

When you are studying, you often don't have a specific objective. You just want an idea of the overall picture, so that you can be prepared for whatever unknown questions the prof is going to put on the exam. A helpful study tool might be something that helps you formulate your own objectives and build them into a bigger picture. As you study, you will constantly be testing this framework you have constructed.

 Posted by Frozone Permalink on May 05, 2010 11:17 AM | Comments (0) categorized under Pedagogical modelling Tweet

## May 04, 2010

### Coalition or Non-cooperative games?

I am reading Essentials of Game Theory: A Concise Multidisciplinary Introduction by Kevin Leyton-Brown and Yoav Shoham. I'm enjoying it because it is helping me to learn about Game Theory faster than some of the other texts I have obtained. This text does not emphasize mathematical notation. Although I believe that mathematical notation is critical for clarity when working with complex systems or ideas, I have also found that it slows me down when I am still in the "broad sweeping" phase of research. It takes longer for me to extrapolate meaning from mathematical notation than it does for natural language.

Anyway, the purpose of this entry is to comment on the distinction between "Non-Cooperative" and "Cooperative/Coalitional" games. The text suggests that the distinction is about the units of study: individuals or groups.

This caused some eyebrow furrowing on my part. My self-proclaimed interest is Cooperative games. However, I would also tell you that I care more about the interactions between individual agents than I care about interactions between groups of agents.

At any rate, I will keep reading the text. My field is pretty new, so, perhaps all that is needed is an elaboration on some of these concepts.

 Posted by Frozone Permalink on May 04, 2010 06:48 AM | Comments (2) categorized under Pedagogical modelling Tweet

## April 17, 2010

### Planning with ontology references

My research keeps going in loops. As the application process for grad school comes to a close and I re-direct my efforts from that surprisingly arduous process towards actual research, I "decided" that I want to write a planner that references ontologies. I have looked at this before, in a previous loop of research.

I find that the best way to break these loops and transform them into progress is to collaborate - either by comparing your ideas to those in others' work - i.e. reading papers - or by chit chatting with real life colleagues.

Why do I want to write a planner that employs ontological references? Because a planner instantiates "the organization of the delivery and directed coverage of content". I want to explore the interplay between ontology and methods and models of supported individual or group study. I feel that the best way to explore this interplay is to use mathematical models. These force you to be specific. Getting specific forces you to pinpoint subtlety, and deal with it. You have to put names to things, and you have to define criteria for decision-making.

This is all I can say right now. For the rest of my free time today I'm going to poke aimlessly through my library of papers. Thinking about how to direct my research in the months to come, I'm toying with a set of "possible outcomes". I envision possible worlds resulting from varying answers to questions like: "What sort of tool will I build? What research methodology will I apply? What programming languages will I use?" I am fully aware that I will not build an all-encompassing, perfect eLearning tool; I have been strategic about picking a subcomponent that (I think) will unearth a lot of questions as I build it.

Also, I am desperately hungry for mentorship. This is one of the major reasons I have applied to grad school - for the opportunity to communicate with other researchers. I want to collaborate with more junior researchers so I can improve my own skills by sharing them with younger students. But, most of all (selfishly!) I want the opportunity to communicate with more senior researchers.

 Posted by Frozone Permalink on April 17, 2010 01:06 PM | Comments (0) categorized under Pedagogical modelling Tweet

## April 13, 2010

### Two perspectives

I'm working out a system design and am trying to articulate some assumptions. I have heard two perspectives and I'm trying to figure out if they are really just two ways of seeing the same thing, or if they are separate approaches. I will write it here now and intend to come back later.

1. Take what you know (read something new), and work to organize it around a bigger picture.

2. Take what you know (read something new), and articulate it in the context of what you already know -- the big picture is *already* in your head.

 Posted by Frozone Permalink on April 13, 2010 09:46 AM | Comments (0) categorized under Pedagogical modelling Tweet

## March 07, 2010

### Stochastic processes

The term stochastic process comes from probability theory. Suppose you have a domain X. A stochastic process can be described as a series of random variables, call this set V, that take values within X. Usually the stochastic process is an enumeration of V over time, like $V_{1}, V_{2}, V_{3}, \ldots, V_{n}$, where the subscript $t$ indexes a point in discrete time. Each $V_{t}$ is a random variable from X, so $V_{t} \in X$.

(I originally wrote $\varepsilon$ for "is an element of" -- the right symbol is $\in$. I am such a n00b.)
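To make the definition concrete for myself, here is a tiny Python sketch (my own toy example, not from any text) of the simplest stochastic process I know: a symmetric random walk where the domain X is the integers.

```python
import random

def random_walk(n_steps, seed=None):
    """A discrete-time stochastic process V_1, ..., V_n where the
    domain X is the integers: each step moves up or down by 1."""
    rng = random.Random(seed)
    v = 0
    path = []
    for _ in range(n_steps):
        v += rng.choice([-1, 1])  # each V_t is a random variable taking values in X
        path.append(v)
    return path

path = random_walk(10, seed=1)
print(path)  # ten integer samples, one per discrete time step
```

Each call draws one realization of the process; the randomness is in the values, not the time index.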

I'm surprised I'd never stumbled upon stochastic processes before, given my work in process modelling. But I shouldn't be surprised because there is SO MUCH out there. I will never master it all, but I hope to one day become articulate enough to be able to pitch into the collective effort with a contribution of my own.

How are stochastic processes related to my work?
- They represent continuity of something
- They model how multiple dimensions can affect this continuous something. (For example, check out the discussion about the Wiener process in the Wikipedia article on stochastic processes. The particle can be influenced by a) surrounding bumping particles and b) medium viscosity.)

How are stochastic processes unrelated to my problem?
- I want to advise the machine to act in a certain direction. We are not helpless here, trying to understand the "poor particle". We know about its past, we know some of its goals, and have some influence over its environment.

Why am I clinging so hard to wanting to model teaching processes? Is this really beyond the scope of AI? Teaching is an informed art. A skill. I think that we can do better than "big data, shallow reasoning". (Thanks to D. Pennock on Oddhead Blog for a good discussion on this topic.)

 Posted by Frozone Permalink on March 07, 2010 10:49 AM | Comments (0) categorized under Pedagogical modelling Tweet

## February 13, 2010

### Full circle

This is good stuff, here folks. I know it's only been 1 day since I've jumped back into programming, so this is gut reaction reflection. But I am a highly metacognitive, spiritual and introspective individual, so bear with me. ;-D

Over the last day or so, seeing the web-inf directories again, and the conf files, and the business logic objects, and re-living some of my old Model-View-Controller architectures, I'm recalling my headspace when I chose a career change. At the time I had a sense of mastery of web application architectures, in the Java world at least, and I wanted to move beyond modelling objects and explicitly building pages to support business processes. Everything had turned cookie-cutter, and I wanted to learn how to make adaptable processes, where the user could have some ownership in how they pivoted through the data, while the system could still offer guiding support in a presentation of options.

I learned a lot about AI. And I learned a lot about the delivery and maintenance of programs (I mean programs like "student leaders provide study sessions", not "computer programs").

I got to re-connect with people, to become deeply involved in a team, and to reflect about becoming a mentor myself, what this means to me and how I think I could improve.

What next? I have to choose how I want to grow, to choose how I spend my time.

I love the technology, I love the AI, I love the system design. This is my primary love (professionally speaking). I also think that technology design must come from real people to support real situations. That's why my work with people is so important: it keeps my work "real".

I am also remembering that specific, nitty-gritty technology can be a HUGE time sink. I am extremely busy and time is a precious resource. I believe that the "new" Coder-Stephanie will be highly critical about possible paths to follow during problem-solving. I will be using more prediction, foresight and metaevaluation than I had in my younger days. I hope that I do continue to blast into the unknown and continue acquiring the new skills that exploration and experience bring. It's just that I have become calculated about it.

 Posted by Frozone Permalink on February 13, 2010 10:53 AM | Comments (0) categorized under Pedagogical modelling Tweet

## February 11, 2010

Well, fuzzlewink. So one of the folks on my Twitter retweeted a link to a blog entry where I learned something interesting.

Apparently there is a standard for Business Process Model and Notation, organized by the Object Management Group, which I don't know much about. Most of the standards I work with are either from the W3C or the IEEE. I downloaded the 496-page document and was pleased as punch at the organization and detail.

(How do you get funding to be able to produce such things? 496 pages? I feel that such work is extremely valuable, but it is very difficult for me to explain to the people around me why I think such extensive detail is important. To me, producing a thorough, organized document means that the territory has been explored; it is a map of knowledge that makes it faster for other folks to learn and extend.)

It would be thrilling for me to pick through this document some more and find some overlaps with my field.

The question that tugs most at me is whether the specification addresses user environment issues: how to present choice, how to deal with the locus of control (as presented in this paper).

 Posted by Frozone Permalink on February 11, 2010 08:40 AM | Comments (0) categorized under Pedagogical modelling Tweet

## February 07, 2010

I don't know who these people are, but I sure feel like they are asking many of the same questions as I am. The company is ActionBase, and I understand that they build software, an "Action Tracking and Human Process Management System."

They're into business processes and I'm into learning processes, but, in just one of their blog posts (The Power of a Process Repository), I clued into a couple common abstract questions.

Reading, I was reminded of some of my earliest research into ontology discovery, as I sought for my system to learn good material presentation techniques using emergent AI. Back in those early days, I thought that a machine could learn good teaching techniques with some input from learning theory and some input from ecological data, and somehow magically a rich educational experience for the learner would arise. (snort, yeah right! ;-D)

Both fields struggle to model human endeavour in order to build environments that support it (i.e. environments that support learning, environments that support business development). Both have a history of attempting to model the "ideal", and both fields realized that most human endeavours are actually about the hacks or the in-between things that nobody thought to model.

It's like the lesson we were taught in our undergraduate software engineering course (or, at least, what I was taught as a third-year student in 2001): you can model your use cases, but you have to realize that MOST of the time, how the software is actually used is covered in exceptions or special circumstances of the use cases you designed for. Rarely is the system used in a typical or expected way.

I think that what I was taught still holds true.

So why am I seeking to model good teaching? Because I think it's about the strategy. I think that a machine - or better, an electronic environment - can be *strategic* in the way it supports you and your group.

It's just a matter of mashing out the math to separate this strategy from task domain ontology, from your learner models, from your UI, from probabilistic predictions feeding the UI, from affective user modeling, from planning, from material sequencing, from learner agent negotiations, tweedle doo tweedle dum.

 Posted by Frozone Permalink on February 07, 2010 08:11 PM | Comments (0) categorized under Pedagogical modelling Tweet

## January 31, 2010

### Multi-agent planning, critical resource

In my problem, I would say that the Learner's time and their screen real estate are "critical resources".

That is, if many agents (the learner, the system helping the learner reach their goals, maybe some learning objects themselves) are acting independently, but cooperatively, they all have to be aware of the shared critical resource which is the learner's attention. They have to cooperate to arrange options on the screen effectively. Ultimately the Learner has the Uberest power to procrastinate or follow their lessons diligently, but, the other agents might be aware of affective and motivational things, too.

This thought occurred to me as I was reading Larbi et al. 2007 Extending Classical Planning to the Multi-agent Case: A Game-Theoretic Approach.

 Posted by Frozone Permalink on January 31, 2010 10:19 PM | Comments (0) categorized under Pedagogical modelling Tweet

## December 27, 2009

### Minimax for planning

'Picking up from my previous entry, Planning as environment adaptation.

A strong theme in my recent work is the attempt to match up tools from AI to apply to my problem. For example, most recently, I tried to adapt Markov Decision Processes.

Mike suggested that I might want to look at my problem as a game. In game theory, an agent's decisions are affected by the decisions of another agent. A cooperative game is when utility goes up as all parties work towards common goals. This fits my problem. So I would like to explore it a little more.

When I think game theory, I think Minimax. And I keep coming back to planning. So I found this cool paper (1978) on Minimax and planning (Minimax Solutions to Stochastic Programs - An Aid to Planning Under Uncertainty). I enjoyed this paper because it was helpful for me to see an application of Minimax outside of a checkers game, chess game, etc. The take-away message I got from skimming the paper is that instead of using Minimax to anticipate the opponent's move, you are anticipating possible "future scenarios", good vs. bad futures, and you want to do everything you can to push towards the GOOD possible future.
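If I've read that take-away right, it amounts to a maximin rule: score each plan by its worst-case future scenario, and pick the plan whose worst case is best. A wee sketch of my reading (the plan names and payoff numbers are made up for illustration, they are not from the paper):

```python
def maximin_plan(payoffs):
    """payoffs: dict mapping plan -> {scenario: utility}.
    Returns the plan whose worst-case utility is highest."""
    return max(payoffs, key=lambda plan: min(payoffs[plan].values()))

# Hypothetical numbers: plan_a gambles, plan_b hedges.
payoffs = {
    "plan_a": {"boom": 10, "bust": -8},
    "plan_b": {"boom": 6, "bust": 1},
}
print(maximin_plan(payoffs))  # plan_b: its worst case (1) beats plan_a's (-8)
```

So instead of minimizing an opponent's best reply, you are "playing against" the set of possible futures.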

During my conversation with Gord, he identified the theme of Global Coherence vs Local Adaptivity in Instructional Planning.

In terms of human learning, it is important for a student to project forward their "theories" of how they believe the world works. That is, they form a theory that is some approximation of the actual task domain model. I think that this is where Global Coherence fits in. The learning environment must create the grounds for where the learner can manifest their theories.

Within these grounds, the learning environment must then provide opportunity for the learner to receive confirmation, affirmation, test their theories. This is how learning works. And this is the Local Adaptivity. This finer-grained territory might be where my process modelling fits in.

Blarg, baby's awake. And I was just getting started! :(

 Posted by Frozone Permalink on December 27, 2009 01:55 PM | Comments (0) categorized under Pedagogical modelling Tweet

## December 12, 2009

(This entry was written in the middle of November. I found it in my Drafts. 'Some neat ideas.)

I want to explore planning as environment adaptation. I have a book that presents planning in terms of student modeling, where the planner is trying to figure out what the student is doing so it can detect misconceptions and therefore provide useful hints. This approach is different than what I was thinking. By environment adaptation, I mean that I'm interested in planning as a way to lay out possibilities (hooks) in self-directed learning. Usually, when you are taking a course (or following a set of learning goals, however you want to phrase it) there are several concepts that you are trying to learn. Sometimes the concepts are related to each other and sometimes they aren't. The idea is to let the learner pick which concept they want to work on at any given time, to allow them to switch back and forth, while providing supports and good pedagogical activities or structures for them to follow. Also you want some sense of continuity: this can be accomplished using running themes or stories, running examples.

How do I measure and define a meaningful experience this way? The hooks are each steps in a teaching process and you step along each point at whatever pace you wish. I've talked about this before. So my interests are very much into cooperative planning, i.e. an interleaving of many mini plans where you have many mini processes and want your environment to support them. Deriving and applying examples and finding common ontology references is crucial.

Is it at all possible for me to approach this idea in some scientific manner? This the point where I usually ask myself if I'm out to lunch.

I think there is something worthwhile in the application of a task domain ontology in a lesson plan. The important thing for me is to take a tool from AI and apply it and expand it creatively.

 Posted by Frozone Permalink on December 12, 2009 09:20 AM | Comments (0) categorized under Pedagogical modelling Tweet

## November 20, 2009

### Significant learning experiences

I took a vacation day today so I could bring my daughter in for immunizations this afternoon. This morning, she is at my mother's house. So I have a couple hours of glorious, precious freedom. :-)

Also, did you know that if you accidentally make your chai tea latte too watery, you can fix it up with a jolt of nutmeg and a spoonful of coffee whitener? Mine tastes just lovely right now.

I recently purchased this book: Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses by L. Dee Fink, 2003. The website, www.significantlearning.org is down at the moment of this writing, so I will link instead to this other page which at least shows a picture of the book's cover.

Reading the book reminded me of one of my older posts about my attempts to measure, computationally, a significant learning experience. To me, Fink's book is valuable because he fully describes the meaning of "a significant learning experience", even presenting a formal taxonomy. And, as we know, statisticians and computer scientists LOVE models. (And I'm sure there are other fields that love models, too! What I'm getting at is that if an ethereal idea is explored enough to create a model, that allows scientists to apply our tools to it because we work with data that is structured or guided in some way.)

I haven't moved deep enough into the book to know if the author considers Anderson's work (which I mentioned in this post).

 Posted by Frozone Permalink on November 20, 2009 10:38 AM | Comments (0) categorized under Pedagogical modelling Tweet

## October 02, 2009

### Theories of learning

I am creating this entry for the purpose of listing theories of learning. The importance of these is glaringly obvious to me, and, using the search engine on my own blog, I am flabbergasted that I have not written about them before. One of those things that is late blooming, I guess.

Learning about the existence of these is what led me to investigate educational psychology, then cognitive science. These tangents have lasted years, but I always return to computer science as my home field, and "first love". :)

I will begin this list of learning theories with what I can recall off the top of my head, then I will continually return to this list to elaborate as memories return later, or are called up from reading papers, or even as I learn of the existence of new ones.

- behaviourism
- socio-constructivism
- constructivism

I want to identify the components of these theories that are most relevant for building a computational model of different pedagogical strategies, which would be referenced by an instructional planner.

 Posted by Frozone Permalink on October 02, 2009 05:14 PM | Comments (0) categorized under Pedagogical modelling Tweet

## September 20, 2009

### Process modelling, evaluation

Okay, maybe the raw fear of waking the baby up jogged my memory. I remember what was in my notebook, so I don't have to go into the baby's room after all.

I was looking into "Process Modelling", though I'm not sure if that's the correct term. I'm finding lots of stuff in software engineering literature, where work is going into building software that supports business processes. I am particularly interested in seeing how researchers have built a computational model for a process, where the general order of things is known but the system needs to adapt for special cases. (And in fact, where most of the time, you are operating in a "special" case! The world has a way of never going according to plan.) I want to look at this as an AI planning problem, but not all research takes this perspective. For a more thorough articulation of this problem, check out this older post: Whimsy and Smarty on Process Modelling.

Often a "good business process support" translates into "a good GUI". But this solution doesn't work in my domain because my purpose for having a process model is to help the machine anticipate a user action as well as create long-range plans during a tutoring session. In other words, the point of having the process model is to inform the machine. In my view, having a GUI sort of "hard codes" it; my solution cannot use a static model.

I also want to know how to evaluate such a model. Do you measure how close your model is to the real-world model? (No, I don't think so.) How do other researchers do it? I'm sure I've mentioned this before, but, I'm interested in looking at building a measure for putting the learner through "a meaningful experience". (Oh, yeah, I have talked about this before. First here, then here, and here, then here. I love how I can search through my own blog. But good grief, that idea has sure popped up in my head several times. I'm going nuts that I haven't done anything about it yet.) But there's still a lot of work to do.

Over and over again, I find I'm whapped across the back of the head and seeing stars when I stumble upon research that I think would be relevant, but the mathematical implementation is just too advanced for me. I wish there were an easier way to poll the scientists who work in my field unobtrusively. But right now all I have is direct email, and none of my questions seem important enough to warrant that.

I'm thinking too much, I think. I don't know. Too worn out, maybe, from doing this working mom thing. Anyway. We are going to dinner with some friends tonight, and I am looking forward to that. And I will think a little bit about the seemingly-related-but-too-advanced paper I read. Maybe I will email one of my research friends and discuss over coffee. Who knows, it won't hurt.

 Posted by Frozone Permalink on September 20, 2009 03:27 PM | Comments (2) categorized under Pedagogical modelling Tweet

## August 16, 2009

Taking a baby step forward after Whimsy and Smarty's conversation the other day, here are a few more thoughts.

What, precisely, is a Markov Decision Process? As far as I have gathered, it is defined by:

- P, the state transition probability function;
- R, the reward function;
- a finite set of states; and
- a finite set of actions.

The last two points pose a problem for my situation, but I think there must be some way to extend the MDP to operate over unbounded action and state spaces, probably by allowing some other restriction instead, somewhere else. Maybe you just need a more flexible algorithm during the decision making step.
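For concreteness, here is the textbook (S, A, P, R) tuple as a tiny Python structure, with a one-step expected-value computation. The states, actions, probabilities and rewards are a toy tutoring example I invented on the spot, not from any source:

```python
# Toy MDP: P[s][a] is a list of (probability, next_state); R[s][a] is the reward.
P = {
    "confused": {"hint":    [(0.7, "learning"), (0.3, "confused")],
                 "lecture": [(0.4, "learning"), (0.6, "confused")]},
    "learning": {"hint":    [(0.9, "learning"), (0.1, "confused")],
                 "lecture": [(0.8, "learning"), (0.2, "confused")]},
}
R = {
    "confused": {"hint": 0.0, "lecture": 0.0},
    "learning": {"hint": 1.0, "lecture": 1.0},
}

def one_step_value(s, a, value, gamma=0.9):
    """Expected immediate reward plus discounted value of the next state."""
    return R[s][a] + gamma * sum(p * value[s2] for p, s2 in P[s][a])

# With a guessed value function, which action looks best from "confused"?
value = {"confused": 0.0, "learning": 1.0}
best = max(P["confused"], key=lambda a: one_step_value("confused", a, value))
print(best)  # "hint": 0.63 expected value vs. 0.36 for "lecture"
```

Everything here is finite and fully enumerated in advance, which is exactly the property I'm complaining about.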

But to me, the most interesting part of the MDP is P and R, i.e. the first 2 points. Looking at P: It's like it assumes we have pre-defined the states and just don't know which one we're in. But this is not true for my situation. I am using an *incomplete* state, really. And my set of state transitions is fully known (if you choose to look at a teaching strategy as a series of steps, or, their own little state transitions in a way.)

So where is the uncertainty? (Where can we use probability?) How about: the uncertainty is "that my state space is in X", where X is some proposed enumeration. X can be my projection/anticipation of the learner's set of building materials, kind of. I talked about this more in this older post.

Other lingering thoughts:
- I want to talk about X some more, to explore this concept of the flexible set, that is defined in terms of preceptors rather than of subsets of some fully enumerated superset.
- 'Will have to re-read my older thoughts on the utility function.
- preceptors to specialize on subsets of the state

 Posted by Frozone Permalink on August 16, 2009 01:48 PM | Comments (0) categorized under Pedagogical modelling Tweet

## August 14, 2009

### Whimsy and Smarty on Process Modelling

I'm trying to find a tool in AI that can help me model a process. Like, my knowledge is in this sort of shape: "Generally, first you do A. Then, normally, you would do B. Next, most of the time you'd do C, but occasionally K happens, in which case you'd do D."

Then I was reminded of how you can get 2 characters to dialogue to help you explore your problem. Like Tortoise and Achilles in Hofstadter's Gödel, Escher, Bach. ;)

I will name my two characters Whimsy and Smarty.

And, so, for the rest of this post, I hope that you enjoy this conversation that I'm about to have with myself. LOL

Whimsy (repeat, above): I'm trying to find a tool in AI that can help me model a process. Like, my knowledge is in this sort of shape: "Generally, first you do A. Then, normally, you would do B. Next, most of the time you'd do C, but occasionally K happens, in which case you'd do D."

Smarty: Well, why don't you use a Markov Decision process?

Whimsy: I have trouble with that approach because I think of an MDP as being so SEQUENTIAL. I have multiple teaching strategies in my mind, and sometimes I'm interweaving them, like applying steps from one approach followed by steps from another approach, then making observations to test which approach looks the most promising. I guess in my head the MDP is only useful if you are applying only a single strategy.

Smarty: Have you considered interleaving several Markov chains, maybe?

Whimsy: No, actually. Can you elaborate on that idea?

Smarty: Consider your opening statement. The choice of actions, say, A-B-C-D. This could be modelled as a Markov chain. Suppose you have another teaching strategy, which would require that you execute actions P-Q-R; that could be a second Markov chain. Could you not interleave these using decision theory, where decision nodes A, B, C, D, P, Q, and R could be connected to all of the same observation nodes, and the best action chosen that way, like by developing a policy in this influence diagram?

Whimsy: I suppose, but doesn't that get kind of heavy, computationally?

Smarty: Don't worry about that. That's my job.

Whimsy: Okay. But isn't the optimal policy usually pre-computed? What if we don't even know what our decision nodes (A,B,C,D,P,Q, and R) are even going to be at a given step? Is it okay to re-compute at every step? And how do you dynamically add new decision nodes? And, for some processes, maybe different observables are more relevant than for other processes. For example, suppose you have observational or status nodes labelled 1,2,3,4,5. And maybe you know that the first Markov chain, A-B-C-D is very reliant on observations 1,2 and 3 but it doesn't care about 4 and 5. And maybe the chain P-Q-R is very reliant on observations 3,4, and 5 but doesn't care about 1 and 2. Does it even make sense to have all of these nodes in the same influence diagram? It's like we're deliberately making the computation more difficult.

Smarty: What about dynamic Bayesian networks?

Whimsy: I don't know very much about those. I should read up on them.

Smarty: Let's go back to the first fork in our conversation. Tell me more about why you think - or don't think - that MDPs would be a good tool to apply to your problem.

Whimsy: Well, during the decision making process, you would have to generate a tree of next possible actions. This enumeration enables you to assign value to alternatives and then pick the best one. I think that the strategy that you are using - like, A-B-C-D vs. P-Q-R - would influence the tree of next possible actions. It's like you have 2 trees, really, and you want to pick the best one. So maybe it is only 1 tree, but it's just really big. But maybe it would be computationally more efficient to evaluate the 2 trees separately, even though it's really the same decision.

Smarty: Or, a single meta hyperdimensional tree thing.

Whimsy: Now you're starting to sound like me!

Smarty: Let's not forget our original problem. You have a KNOWN PROCESS, and you're trying to optimize it, right?

Whimsy: Right. I think I articulated it fairly well in this previous entry.

Smarty: And part of the reason why we are having this conversation is that you're not confident that the tool you are studying - the Markov Decision Process - is suitable for the shape of your problem. This is because the strength of the MDP is that it handles situations in which the next step is not known. However, in your case, it IS known, sort of, because you have a library of known, effective teaching strategies to follow (to an extent...).

Whimsy: Right. So, is there a way we can flip the tool around, and use it upside down? Like, take the loose part and make it fixed, while taking the fixed part and making it loose?

Smarty: That is a creative idea. And I know you well enough that before you can go and turn your tools upside down that you need a definition of what they are first. So, let us review. A Markov Decision process is....... Okay, this is where I need to break out the LaTeX. But I am quickly running out of energy.

Whimsy: That's okay. I think you have said enough that we can pick up with the math next time. Now let me spit out my thoughts as well. So, the "loose" part, in a classic MDP, is represented by the probability of the state S being the next state. And the fixed part is... the utility function? Or, I know, it's the precomputed policy, where all the observable and state nodes are set in an influence diagram to "hold still" while the policy is computed. So, we want a new type of MDP where the sequence of steps is fixed, but we have flexibility in the relevance of observable and state nodes. And in the "new MDP", we still wish to measure the utility at the end, because we are trying to decide which process to follow.

Smarty: I think you're on to something. That's a creative idea, but we'll see if it holds up against the math. You're going to need a couple hours of uninterrupted time to work that out. Also don't forget to browse around the literature again. Someone has probably already solved this problem.

Whimsy: Yeah, I'm going to need some desk time, for sure. But it's time to pick up the baby from daycare now. And I have to return to my office job next week. Will I be able to come back to this head space? I hope, I hope, I hope so. This is fun. :)

Smarty: Good luck, you'll need it.

Whimsy: Thanks. Bye!

I have been trying to articulate this problem for a long time now. Here are my other entries following this train of thought.
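For the record, here is a rough sketch of Smarty's interleaving suggestion. It is a drastic simplification (a weighted-relevance score instead of a real influence-diagram policy), and all of the node names and weights below are made up to illustrate the shape of the idea:

```python
# Two chains propose their next steps ("B" from A-B-C-D, "Q" from P-Q-R),
# and a shared set of observation nodes scores the candidates.
# Hypothetical relevance weights: how strongly each observation,
# when it currently holds, supports each candidate action.
relevance = {
    "B": {"obs1": 0.6, "obs2": 0.3},   # chain A-B-C-D cares about obs 1-2
    "Q": {"obs3": 0.8, "obs4": 0.4},   # chain P-Q-R cares about obs 3-4
}

def score(action, observations):
    """Sum the weights of this action's relevant observations that hold now."""
    return sum(w for obs, w in relevance[action].items() if obs in observations)

def choose_next(candidates, observations):
    """Pick the candidate next step best supported by current observations."""
    return max(candidates, key=lambda a: score(a, observations))

print(choose_next(["B", "Q"], {"obs1", "obs2"}))  # "B": its observations hold
```

Whimsy's objection survives the sketch: each chain ignores most of the observation nodes, so jamming everything into one diagram does look wasteful.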

 Posted by Frozone Permalink on August 14, 2009 01:12 PM | Comments (0) categorized under Pedagogical modelling Tweet

## August 07, 2009

### What good learners think and do

I dropped off my daughter at daycare this morning, then I came home and slept from 9:30 a.m. - 1:00 p.m.. I'm definitely feeling better now than I was before! I was just digging through my "to read later, when I have time" email before going to pick her up, and stumbled on this interesting article about student retention that my boss had circulated to the staff, but that I had not yet read. I'm interested in student retention issues because I'm keeping an eye out for building in elements into educational software that keep the experience exciting & fulfilling. (Game designers do this too.)

From Tomorrow's Professor mailing list, Are Your Students On Course to Graduation?, here are 8 things that good learners think and do.

1. accept self-responsibility, seeing themselves as the primary cause of their outcomes and experiences;
2. discover self-motivation, finding purpose in their lives by discovering personally meaningful goals and dreams;
3. master self-management, consistently planning and taking purposeful actions in pursuit of their goals and dreams;
4. employ interdependence, building mutually supportive relationships that help them achieve their goals and dreams (while helping others to do the same);
5. gain self-awareness, consciously employing behaviors, beliefs, and attitudes that keep them on course;
6. adopt life-long learning, finding valuable lessons and wisdom in nearly every experience they have;
7. develop emotional intelligence, effectively managing their emotions in support of their goals and dreams; and
8. believe in themselves, seeing themselves capable, lovable, and unconditionally worthy as human beings. (oncourseworkshop.com)

Other entries, in chronological order, where I have logged these "pedagogy nuggets" are:

 Posted by Frozone Permalink on August 07, 2009 01:33 PM | Comments (1) categorized under Pedagogical modelling Tweet

## August 06, 2009

### A problem statement

I'm fixated on the thought that ideas can be represented spatially -- and by ideas, I mean concepts-to-be-explored-by-a-learner -- and that I might be able to use planning technology to help fold together these ideas in the art that is called tutoring. It requires empathy for the learner's context (empathy that can be enhanced using a computer's superior memory), as well as expertise in the subject matter, as well as knowledge of how to apply pedagogical techniques.

Advances in knowledge engineering give us tools to represent domain expertise. Lots of work is going into student modelling, and I believe there are whole conferences devoted to it. (Mental note to continue my quest to get a better "in" on the world of computer science conferences!) But I am still foggy on what to do with the pedagogical techniques, i.e. how it all fits in. I tried to squeeze it in using a utility function in decision theory. I looked at the idea of tagging micro-pieces of teaching techniques and assembling those into a pedagogical ontology, like this. Recently, I was inspired to look at pedagogy as a "mode".

Where am I now? I'm not sure. I'm in a lot of pain, not just from my throat, so I think I'll end here and maybe flip through some papers. But I'm glad to have articulated my problem from a broader vantage point. :)

(I have a sore throat. It is intensely painful. I can't sleep. Tea with honey seems to help. Since it's the middle of the night, I thought maybe I could at least use this rare time to myself (although pain-filled, alas) to play with some ideas.)

 Posted by Frozone Permalink on August 06, 2009 12:49 AM | Comments (0) categorized under Pedagogical modelling Tweet

## July 16, 2009

### Strategies as subsets of perceptors and actuators

This is the entry I was working on the other day when I lost half of it during a blip of the universe. The half that is missing was a tangent about how I got this idea. It involved finding a daycare for my daughter (YAY!) and about how she was sleeping when I arrived on the first day to pick her up (DOUBLE YAY! She was relaxed enough to actually fall asleep, at some point!) and I was sitting on a bench in the hallway reading The Emotion Machine by Marvin Minsky, because I didn't want to wake my baby up right away. I thought my tangent was worthy of sharing because it described some of my experience about finding a daycare, and that maybe if other scientists-who-are-the-primary-caregivers-of-young-children were reading, they might be interested in that as well. I was stressed out about finding a daycare (found one), and about whether the baby would be able to fall asleep in a stranger's arms (she did, eventually), and about whether she would survive even though she's unfamiliar with the daycare's type of sippy cups (ended up bringing a cup from home), etc, etc.. But there, I've just summarized the story in a single paragraph, anyway. Doubtless I will be relating other daycare stories in the future.

Getting back to the research. I'm only 10 pages into Minsky's book, but it has already made me try to relate the material to the work I'm doing. I asked myself: How would I define a teaching strategy as a set of activated perceptors and actuators? And how would these fit into decision theory?

And so continues my original entry........

Minsky proposes that our brains are like machines, and emotions are like "modes". For example, Minsky notes that when you're in love with someone, you don't see their faults. In this sense, love is the "mode", and it operates by turning off some of its perceptors, namely, the ones that would normally see imperfections in the other person.

I've talked before about how different teaching strategies require you to pay attention to different things in the student's behaviour. But I think Minsky's book made me think of a teaching strategy as a "mode". (Not that Minsky's way is necessarily the "right" way; it's just a perspective I hadn't seen before and therefore worth paying attention to! And more strongly so because of the author's prestige. :) ) At some level, I feel that the "teaching strategy as a MODE" idea relates to this older post, but I need some time to chew on it.

For the rest of this entry, I want to look at perceptors and actuators as things that feed into teaching strategies. Since I'm still such an AI newbie, I only have one big tool I know how to use, sort of, and that is decision theory. So I'm going to apply decision theory in the following analysis.

#### Perceptors

A perceptor is a thing that pays attention to something. So, we have a keystroke perceptor, we might have a "student's emotions" perceptor (affective computing), a perceptor for keeping track of how the student interacts with various learning objects, and multitudes more. We might even have electrodes attached to the student's head so we could monitor their brain waves! (tee hee, I wish I could find a screenshot of the scene from the movie Back to the Future, when Doc first meets Marty in 1955 and puts one of those suction-cup electrodes on his forehead. The expression on Marty's face is priceless.)

You can see how some perceptors might overlap with others (for example, the keystroke perceptor could team up with the student-emotion perceptor... a bored student might type slowly, while an excited student may type more quickly). Maybe there's even some kind of granular hierarchy where sharing of input can occur. I think that an important part of my problem is clarifying just WHAT you need to pay attention TO, in order to help you anticipate and do what you need to do to support the student in their endeavours.
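
If I were to sketch these observation-gathering components (I'll call them perceptors) in code, it might look something like this. Everything here -- the names, the threshold, the idea of a strategy holding a set of active perceptors -- is invented for illustration:

```python
# A sketch (all names and numbers invented) of perceptors as observation
# sources, where a teaching strategy activates only the subset it cares about.

class Perceptor:
    def __init__(self, name, read_fn):
        self.name = name
        self.read_fn = read_fn  # pulls one observation out of the environment

    def observe(self, environment):
        return self.name, self.read_fn(environment)

# Two toy perceptors sharing the same raw input (keystroke timing):
keystroke = Perceptor("keystroke_rate", lambda env: env["keys_per_min"])
emotion = Perceptor("engagement",
                    lambda env: "bored" if env["keys_per_min"] < 20 else "engaged")

# A strategy is (in part) a choice of which perceptors stay switched on:
strategy_perceptors = {
    "drill_practice": [keystroke],
    "open_exploration": [keystroke, emotion],
}

env = {"keys_per_min": 12}
observations = dict(p.observe(env) for p in strategy_perceptors["open_exploration"])
```

The nice part of framing it this way is that switching strategies literally changes which observations get collected at all.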

Looking at decision theory, where do we put our perceptors? I think they're the observables. I'm most familiar with them as part of the optimal policy calculation; in the equation below, the observables are the instantiated evidence in the form O = o, where uppercase "O" is a variable being observed, and lowercase "o" is the value of that variable that you actually observe.

$\delta^*(o) = \arg\max_{D} \sum_{S} p(S \mid O=o, D) \, U(S, O=o, D)$

It makes sense to limit these observations as much as possible, because you can simplify your computation by trimming out irrelevant data (observations). So, if your selected teaching strategy allows you to switch off some perceptors (and therefore disregard some subset of the observations available to you), then this is good!
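
Just to convince myself the equation computes something, here's a toy sketch of it in Python. Every number and name below is invented (two candidate tutoring actions, a binary confused/on-track state, one observation):

```python
# delta_star(o) = argmax over D of sum over S of p(S | O=o, D) * U(S, O=o, D)
# All probabilities and utilities are made up for illustration.

def delta_star(o, actions, states, p, U):
    """Return the action with the highest expected utility given observation o."""
    def expected_utility(d):
        return sum(p[(s, o, d)] * U[(s, o, d)] for s in states)
    return max(actions, key=expected_utility)

# Toy model: is the student confused or not, given that they type slowly?
states = ["confused", "on_track"]
actions = ["give_hint", "stay_quiet"]
o = "slow_typing"

p = {  # p(S | O=o, D) -- invented values
    ("confused", o, "give_hint"): 0.7, ("on_track", o, "give_hint"): 0.3,
    ("confused", o, "stay_quiet"): 0.7, ("on_track", o, "stay_quiet"): 0.3,
}
U = {  # U(S, O=o, D) -- invented utilities
    ("confused", o, "give_hint"): 5, ("on_track", o, "give_hint"): -1,
    ("confused", o, "stay_quiet"): -4, ("on_track", o, "stay_quiet"): 2,
}

best = delta_star(o, actions, states, p, U)
```

With these numbers, hinting wins (expected utility 3.2 versus -2.2), which matches the intuition that slow typing probably means confusion.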

#### Actuators

An actuator is a thing that you use to translate your intent into action; it's how the machine interacts with the world. It could be the wheels on your robot, it could be a display screen projecting images for the user to look at, it could be a software system that pushes data out into a database somewhere. Some sub-systems could have both perceptors (receiving data) AND actuators (sending data). It gets really abstract because you can't always see all the "ins" and "outs" of the system.

So, looking at decision theory, what are our actuators? Well, I see it as delta-star ($\delta^*$), the optimal policy, or in other words, the actions you execute. But I have a problem with this view because it seems a little flat; it doesn't separate the ACTUATOR from the ACTION -- the action-being-done from the limb-that-you-do-the-action-with. For example, to open a door, I can use my hand to turn the doorknob, or I could kick it open with my foot. The action (opening the door) is the same, but the actuator (hand, foot) is different.
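
Here's a rough sketch of what that separation might look like in code. All the names are invented, and the actuator-choosing rule is just a placeholder:

```python
# Sketch of separating the abstract action from the actuator that performs it:
# "open_door" stays one action, while hand vs foot are interchangeable actuators.

from dataclasses import dataclass

@dataclass(frozen=True)
class ActionInstance:
    action: str    # what is being done ("open_door")
    actuator: str  # which effector does it ("hand" or "foot")

def choose_actuator(action, available):
    """Placeholder rule: prefer the hand when it's free."""
    return "hand" if "hand" in available else available[0]

# My hands are full, so the door gets kicked open:
step = ActionInstance("open_door", choose_actuator("open_door", ["foot"]))
```

The point of the two-level structure is that the policy could first commit to an action and only then, separately, bind it to whichever actuator happens to be available.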

#### Conclusion

So there are my thoughts....... I'm going to end here partly because I'm running out of energy and partly because my time is UP and it's time to pick up my daughter from daycare. I get SO MUCH more done when she is at daycare. It feels SO GOOD! I hope that I'm not a horrible woman who is exploiting her daughter by causing her some discomfort in order to benefit myself. I don't think so. I know that I was (am) positively starved for "thinking time", and I know in my heart that she is not being damaged at daycare -- in fact, she is learning to interact with other kids and is hopefully bringing some happiness to some of the residents, which is a good & healthy thing. (My daughter's daycare is in the same building as a long-term care facility for older people or people with disabilities, and the daycare works to arrange daily activities with the children and the residents so that the residents don't get bored or lonely, and to give the children some fun & exciting things to do during the day!)

Following are a few other stray notes trying to relate to some of the other thoughts I had blogged about before.

#### Other notes

------------
I alluded to how different teaching strategies require you to pay attention to particular things here, when I thought about what could be part of a teaching model. This post lists types of teaching, or snippets of what might be involved with teaching. I think it could be helpful to look at this list again while asking yourself the question, "So what is the machine paying attention to when it is doing this action?" For example, if the machine is giving "advice", then it would be watching what the student is doing, would have to infer what they are trying to do, and then would offer some possible "in-between" steps. What the machine is paying attention to depends, obviously, on the activity the student is engaged in and on what their goals are. Gosh, I have to get much more specific.

At the same time, you have to measure whether or not you're putting the student through a meaningful experience. (See this post about the utility function.) This older post was interesting because it talked about "the fringes". Would your model of "the fringe" be affected by the teaching strategy being employed? Which perceptors and actuators are relevant in updating the fringe?

 Posted by Frozone Permalink on July 16, 2009 05:14 PM | Comments (0) categorized under Pedagogical modelling Tweet

## June 13, 2009

### Carrot ninja ball

I had a thought - quite a messy thought: I don't know if I'll be able to articulate it -- but I'm going to try here, clumsily. I don't even know how to describe it, so the title of this entry is just some random string of words. Mmmm, carrot ninja ball.

It started with my thinking about the job of the computer: to provide an environment for students to build1 on their new knowledge, to apply it and keep track of and evaluate it.

So the planner, or a part of it, would include a transition function, i.e. a mapping of possible future states, somehow related to the next-action-to-be-executed. I guess I'm thinking of the student model. The current state would include a definition of what we believe the student to "know". Or, looking at it another way, we could just have the current state keep track of any new knowledge that we just presented to the learner.2

The set of possible FUTURE states would describe the different ways that this new knowledge could manifest itself. My original scribble/articulation of this idea read, "Different ways that what-you-showed-them (i.e. when you introduced new material for the first time) could manifest itself." Right. So, future steps - what you are predicting - are all the ways that you think the student could take what you showed them, and, using the environment and the tools at hand, "build" it. (There's my learning theory side-thought again, hrm.)

There are a zillion ways to build the whole thing (i.e. ways that the student could put their knowledge into practice, moving from remembering to understanding to application, or some other sort of progression through levels of learning -- see this sorta relevant wikipedia entry for an idea of what I'm getting at; I'm thinking about Bloom's taxonomy or Anderson's refinement of it). Despite there being so many ways the student can let their new knowledge manifest itself in this physical environment, hopefully you can refine and narrow down the options as you gather more clues about what they are doing.3

Clarifying again. This is like 4 or 5 iterations now of the idea, heh.

The first step of the teaching strategy is to introduce the material. The crux of the plan is to predict how you think the student might create that new knowledge, given their current environment and the tools at hand. Why are we trying to anticipate the student's actions? So we can provide the right tools and prompts, according to the teaching strategy we are following.

So how does the machine's next step fit into the plan-state plan? It's like first there's the introduction, then the prediction of the student's reaction, then the establishment of the machine's next action.... and the machine's next action sort of folds up into the start again, doesn't it, somehow?? Like, the learner's own creation feeds in as input, and the machine's prediction of this and how it should provide the learner with like a mirror so they can see into their own mind/understanding....

I just had this vision of a ball, or that strange loop thing again ("carrot ninja ball -- of course it is orange like a carrot"), rolling from new material, to creation, to reflection... and the ball was rolling through task domain knowledge, because every step -- the introduction, the creation and the reflection -- involves ontology references. But it's like you're chewing them from different angles. Chewing and regurgitation, moving along in a crunchy path like pacman!

So the plan-state plan starts with the node that represents that you introduced the concept. Neighbour nodes represent the laying down of tools as the student creates. The machine has to ensure appropriate tools are available.

The teaching strategy could dictate the system's reactions to the learner laying down tools, by questioning or challenging or affirming, etc..

Err. I was right, that was clumsy. But I can't expect new ideas to come out perfectly. So there they are; hopefully I'll be able to work them into some structure, making ties to decision theory or AI planning to tame the beast and make it do its job. :)

And the baby is awake now, so that is good timing!

---
1 I'm having a side-thought about biases towards different learning theories: constructivism, etc.. I really should compose an entry about those, too, because I keep wanting to refer to the thought.

2 Another side-thought: I know that every state transition will not involve the latest action being "introduce new material". But I don't know if I want to restrict or define the "introduction of new material" to be an ACTION, per se. But I guess it is. Hrmm. Anyway.

3 How is this different from the "prediction of misconceptions and taking actions to correct them"?

 Posted by Frozone Permalink on June 13, 2009 09:53 AM | Comments (0) categorized under Pedagogical modelling Tweet

## June 09, 2009

### Ordering of steps

I often start writing entries with an excited feeling, because I like to explore ideas that I've never talked about before. But between those thoughts and the time I sit down at my computer and click "New Entry", these other questioning thoughts sorta peer out of the bushes chanting things like, "That idea is so obvious, and so silly, it makes you look like a n00b." To those little gremlins, I say, "shoo! shoo!". I've got work to do, here!

So I'm still working my way through that book I mentioned last time. I read about sub-goals, and how these transform into "adding a new action to the plan". I also read about action ordering, and about causality between actions. And naturally I'm trying to relate all of this to the process of teaching.

From what I recall from the literature, traditionally a sub-goal in instructional planning manifests itself as a discovery that the student is missing a piece of knowledge, so you kinda have to plan in a "catch-up" lesson to work through before continuing with your overall plan of teaching them something "bigger".
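
That "catch-up lesson" idea can be sketched crudely as a sub-goal that becomes a new action spliced into the plan. The lesson names and the splicing function here are invented for illustration:

```python
# Sketch (invented names): when a missing prerequisite is discovered,
# splice a remedial lesson into the plan before the step that needs it.

def add_catchup(plan, step_index, missing_prereq):
    """Insert a catch-up lesson immediately before plan[step_index]."""
    return plan[:step_index] + [f"teach:{missing_prereq}"] + plan[step_index:]

plan = ["teach:derivatives", "teach:integrals", "teach:FTC"]

# While about to teach integrals, we discover the student never learned limits:
plan = add_catchup(plan, 1, "limits")
```

This is the "adding a new action to the plan" move from the book, with the ordering constraint handled trivially by list position.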

I wanted to spend some time thinking about how else these "ordering of steps" (i.e. adding new actions (sub-goals), adding ordering constraints and other plan-specific things) might relate to my problem.

Earlier, I mentioned IMS LD, and I'm at a point now where I want to take a closer look at it. Can I pick out "activity ordering" from these technical specifications and discover how these might relate to AI planning? (Hasn't anyone written a paper about this already?)

On my first crack, I guess I'm looking at Learning Design Levels A and B.

And now the baby is awake. Sheesh! Grumble, grumble, snippet researcher.

 Posted by Frozone Permalink on June 09, 2009 02:15 PM | Comments (0) categorized under Pedagogical modelling Tweet

## June 03, 2009

### Planning as a projection of the learner's creation

So, picking up my thought from last time: the planning is a projection of what "I" (taking the perspective of the artificial learning-support system) want the student to create.

The basic flow of learning goes like this: the learner is introduced to new material, and it kinda sits in the back of their brain (literally, sensory input fires at the back of your head). Then, to "absorb" the knowledge, the learner has to re-assemble that input so that it fits into their contextual mind, and essentially re-create the knowledge by PRODUCING it in some way. For example, they could build a model of what they just learned, or do some practice questions, or something. This creation, or projecting forward, requires the learner to "push out", whether it be by speaking or writing or using their motor skills or something like that. Continuing the pretty picture, these "outward" functions such as motor skills, etc. cause neurons in the FRONT of your brain to fire off. Finally, the learner observes their creation (more sensory input at the back of the brain). They can compare their original input to the new input they received about their own creation -- it sort of creates a loop. From what little I understand about learning, it happens here, when they are comparing their own creation to what they were originally taught. Eventually they forget the original lesson, but they retain their "creation" skills, so that the knowledge is truly internalized.

The computer's job is to
1) present the new material to the learner in such a way that it is easy for them to remember/absorb it initially
2) provide some means for the learner to "create" this new knowledge themselves, i.e. by providing activities, tools, support, etc.
3) facilitate the learner's evaluation of their own creation, and to support refinement, questioning, deeper analysis, integration of even more new material, etc..
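
Those three jobs, read together, form one loop. A sketch of its shape (every function body here is an invented stub, just to show the cycle):

```python
# The machine's three jobs as one present -> create -> reflect cycle.
# All function bodies are placeholder stubs.

def present(concept):
    return f"lesson:{concept}"          # 1) introduce the material

def create(lesson):
    return f"artifact-from-{lesson}"    # 2) the learner produces something

def reflect(concept, artifact):
    # 3) compare the learner's creation back to the original concept
    return concept in artifact

def learning_cycle(concept):
    lesson = present(concept)
    artifact = create(lesson)
    return reflect(concept, artifact)

internalized = learning_cycle("fractions")
```

The stubs are silly, but the control flow is the point: the output of the learner's creation loops back in as input to the evaluation step.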

From an AI planning perspective, then, I guess the machine has to "stay ahead" of the learner so that it can anticipate where they're going so it can provide the proper tools. The machine's projections can be informed by what it believes to be their learning goals. 1

What else does the machine have to model?
- The set of next possible teaching-actions, which I talked about a few entries ago.
- The "story threads", which I also talked about a while ago.
- I should keep an overlay of the relevant task domain knowledge. "The fringes", to use lingo from the AIED field. :)

I could be suggesting building materials, but this is part of the subset of teaching actions, I think.

Hmm, so, ya. The journey continues.

--
1 How do these learning goals get entered into the system? Well, I assume that the student tells me their learning goals. But I also know that these goals can change without them telling me right away, or at all, so there will always be some uncertainty here, even about our GOALS!

 Posted by Frozone Permalink on June 03, 2009 11:00 AM | Comments (0) categorized under Pedagogical modelling Tweet

### Action selection

Yay! Found another paper about action selection:

Looking Ahead to Select Tutorial Actions: A Decision-Theoretic Approach

I did a quick skim and couldn't find the arg max thingy in this paper. This makes it a little harder for me to find something common to grasp and then compare to the work I've already done.

Once again I'm overwhelmed by the amount of detail that my problem considers. But it is nice to have one more "very relevant" paper to add to the pile.

 Posted by Frozone Permalink on June 03, 2009 10:17 AM | Comments (0) categorized under Pedagogical modelling Tweet

## June 01, 2009

### Following the steps, but not really

Some other random meanderings as I try to find my way again...

It seems so simple -- I just need an engine to "follow the steps", right? But of course the steps are not in a rigid order. There's a basic pattern, and at each step I have a set of choices and I want to compute the best one as I'm following the pattern, but I definitely have to veer off "track" for the sake of flexibility. It's like there's 2 loops going on: one "master pattern", and a smaller, more specific detail-oriented guide that sort of conversationally follows the big loop in a general way but has its own mini-goals to take care of. (watching for motivational cues, hints, etc.)

I feel like I'm way out of my league, here. Sorry mome wraths, I couldn't do it!

 Posted by Frozone Permalink on June 01, 2009 01:27 PM | Comments (0) categorized under Pedagogical modelling Tweet

## May 30, 2009

### It's an optimization problem

This entry is just a thought, like a little fishy swimming in one ear and out the other; I just wanted to catch it between activities!

The title of my last post was, "Is it really a planning problem?" and today I just thought of a new angle: it's an optimization problem!

I'm fixated on the successor function, or the decision of selecting the next action. Unlike with the robot crossing the room we saw last time, the selection of the next action isn't based on eliminating options (i.e. picking one action because the alternatives would lead to failure), or on what's POSSIBLE in order to transition to some desired world (i.e. pre-computing a sequence of actions to see if they will lead you to a desired state, as opposed to a sequence that would NOT). Rather, it's more of an optimization problem. Ahh, and I think my professor realized that, and told me, a couple of years ago, but I didn't really hear him until I re-figured it out for myself. About time it sunk in, eh?!

With a teaching process, the order in which you execute actions doesn't really matter that much (where actions are things like "show the student a diagram" or "ask the student a question" or "give the student some choices"). Sure, the order matters at some level, and the point IS to choose a sequence of actions that will lead the student to learn something, but choosing any one action at any one time is relatively low cost. The action selection is not where the big money is. (So, where is it?)

I've got to chop away at some of the ambiguity here and put some assumptions in place so I can get some traction. Maybe instead of being focused on the selection of a single action, I should choose some small set, and use the planning as a projection of what I want to help the student to create.

I've been chewing on this problem for years, and I'm still chewing.... but somehow I thought this brainwave was worth recording here. Hrrm.

And I haven't forgotten about the mome wraths! Or should I go dig up some examples of optimization problems to refresh my memory?

Anyway, I'll be back, doubtless. =D

 Posted by Frozone Permalink on May 30, 2009 05:12 PM | Comments (0) categorized under Pedagogical modelling Tweet

## May 28, 2009

### Is it really a planning problem?

I think of my work as being in "instructional planning", which is a subfield of AIED, which is a subfield of AI. Or, "instructional planning" is an adapted type of "planning" in this sense of the term from wikipedia.

But family trees of research aside, I'm really questioning whether I'm looking at this problem in the right way. I'm trying to model a natural process, where the order of things is usually known. The point is to have the machine select CONTENT, transform that content from an abstract/Platonic/metaphysical/ontological sort of format and give it CONTEXT by applying a particular teaching strategy, or appealing to ongoing themes in the student's course of study, taking advantage of transitivity laws by using familiar examples, and FILTERING out the currently unnecessary things from the reams of data at our fingertips.

I just keep bumping into a brick wall. I started writing a blog entry about designing a successor function using situation calculus. But I didn't get very far: I'm having trouble even concocting an example! I need an example where one thing changes in some way. Last time, I was a magical fairy who waved her wand and a variety of things could happen as a result. Let's see if we can upgrade this scenario into a planning problem. Say, maybe I'm a magical fairy with a GOAL. To, umm... I guess this should be parallel to my research somehow -- I know, to find a path to guide the mome wraths (reference needed for Alice in Wonderland) through the garden of knowledge.

In robotics, your goal could be to walk (or roll, or whatever) across a room full of obstacles. You rely on your sensors to tell you what's out there, and then you have to build a series of actions to execute in order to reach your goal. For example, maybe you would "walk" in the direction of the goal, but then come across an obstacle, so you execute a "climb" action, then continue the "walk" in the same direction until you reach the other side. Here the plan is: walk, climb, walk. In situation calculus, you would have a bunch of terms in the form do(action, state). So, I guess you would start in the "state of being at the wrong side of the room", call this state $S_{w}$, and your goal would be to get to the "state of being at the right side of the room", call this state $S_{r}$. So your plan would be like:

do(walk, $S_{w}$) - This means, when you are in the "state of being at the wrong side of the room", you should walk.

Then, define the state of being at the front of the obstacle as $S_{atobstacle}$ and the state of having overcome the obstacle as $S_{overobstacle}$

Then your next action would be:

do(climb, $S_{atobstacle}$) - This means, when you are in the "state of being in front of the obstacle", you should climb.

Finally,

do(walk, $S_{overobstacle}$) - This means, when you are in the "state of having overcome the obstacle", you should walk.

and maybe

do(stop, $S_{r}$). - This means, when you are in the state of having reached the right side of the room, you should stop walking.
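
The do(action, state) steps above can be sketched as a state-to-action table and run forward. The toy transition function here is my own invention, just matching the example:

```python
# The walk/climb/walk example as a state -> action table, run forward.
# State names mirror the ones in the text; transitions are invented to match.

policy = {
    "S_w": "walk",             # wrong side of the room
    "S_atobstacle": "climb",   # standing in front of the obstacle
    "S_overobstacle": "walk",  # obstacle overcome
    "S_r": "stop",             # right side of the room
}

transitions = {  # where each action takes you from each state
    ("S_w", "walk"): "S_atobstacle",
    ("S_atobstacle", "climb"): "S_overobstacle",
    ("S_overobstacle", "walk"): "S_r",
}

state, plan = "S_w", []
while policy[state] != "stop":
    action = policy[state]
    plan.append(action)
    state = transitions[(state, action)]
```

Writing it out this way makes the tedium concrete: every state had to be enumerated by hand before the three-action plan could even be expressed.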

Can you believe how TEDIOUS that is? The other thing that gets me is that I had to explicitly define the states of being at the start, being at the finish, being in front of the obstacle, and having overcome the obstacle. In the problem I want to solve, there is no way you can know all of the states ahead of time. You discover them as you go. I have to figure out how to deal with that. Anyway. Back to my beloved mome wraths.

The mome wraths live in the garden of knowledge, and they want some cupcakes. However, the cupcakes are located on the other side of The Fundamental Theorem of Calculus.

Gahhh! The baby is awake. So I shall have to put my magic wand away for now, and the mome wraths will have to wait for their cupcakes. This next example will be different because instead of navigating through a room with an obstacle across the middle, I'll have to look at my mome wraths' previous knowledge, look at a teaching strategy, and look at how the "ordering of actions" might be different. Hrrrm. I have no idea what I'm doing. LOL

See ya next time..............

 Posted by Frozone Permalink on May 28, 2009 01:19 PM | Comments (0) categorized under Pedagogical modelling Tweet

## May 22, 2009

### The plan equals the Markov chain

This seems obvious, but didn't really click for me until recently. We talked earlier about how a policy π is basically a set of decisions, one for each decision node in your influence diagram. In AI planning, this sequence of actions-to-execute is linear (although it can be revised). This step-by-step plan is your Markov chain.

In other words, a Markov Decision Process (MDP) is related to AI planning in that solving the MDP gives you a policy, following that policy traces out a Markov chain of states, and that chain equals your plan for execution.
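
A toy sketch of that collapse (all states, actions, and transitions below are invented): a deterministic policy plus deterministic transitions unrolls into a linear chain, which reads exactly like a step-by-step plan:

```python
# Unrolling a deterministic policy into the Markov chain of states it visits.
# Everything here is a made-up tutoring example.

policy = {"intro": "show_example", "practice": "ask_question", "review": "summarize"}
transition = {("intro", "show_example"): "practice",
              ("practice", "ask_question"): "review"}

def unroll(start, policy, transition):
    """Follow the policy until it falls off the known transitions."""
    chain, state = [start], start
    while (state, policy.get(state)) in transition:
        state = transition[(state, policy[state])]
        chain.append(state)
    return chain

chain = unroll("intro", policy, transition)
```

Of course, the whole point of an MDP is that real transitions are stochastic; the linear chain only appears once you fix which outcome actually happens at each step.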

I left off last time with an analysis of the state transition function, which I am still exploring. I want to continue my analysis, keeping a thumb on how it would be different if you knew you wanted to follow a certain type of pattern. Maybe there's room to expand the model, and usually "more knowledge" means a tradeoff where you can take some shortcuts elsewhere for computational (space/time) gains.

But I don't know if I will go in that much of a theoretical direction next, or if I should spend more time on measuring the effectiveness of the pedagogical technique. Do you compare it with a human tutor doing something similar? Suppose I fully develop this new extension to planning, how can I prove it works?

I have a lot of work to do to figure out how to measure such things and to set up proper experiments or simulations. But that is further down the road, and I think I would need a lot of help from an advisor with that part, or at least another researcher with more experience. =)

What I can do, though, is continue with my exercise in STRIPS and/or situation calculus, while exploring the boundaries of states, actions, observables, predicates, reward functions, utility functions, probabilities and so on.

Tweedledoo!

P.S. Note to self: If you know the sequence of actions (i.e. the chosen micro-teach technique), what are you planning for? Where is the uncertainty? What are you trying to DO? heh.

 Posted by Frozone Permalink on May 22, 2009 12:28 PM | Comments (0) categorized under Pedagogical modelling Tweet

## May 18, 2009

### It's about influencing the process

Slowly, here, I'm wiggling through notes and examples about the specifics of AI planning using Markov Decision Processes. I have an entry cooking about using STRIPS or situation calculus to examine the particulars of the state transition function so that I can later highlight the differences I need for my model... but that is a little out of reach yet.

First, I decided to review my notes about MDPs. At this point in time, whenever I thought "MDP", a similar thought triggered in my mind: "the milk in the fridge example". It took me a few days to find the time to dig through my files ("did I have that on paper? or was it a PDF?" etc.) to find it. But this morning, I did. And I also realized that the point of "the milk in the fridge example" was to illustrate Hidden Markov Models, which is slightly different.

So, that's what I wanted to note today: What I understand about Hidden Markov Models, and why this isn't exactly the right model for me. And maybe to make some further progress on the Markov Decision Process front.

A Hidden Markov Model (HMM) is named so because you use it when you want to make predictions or ask questions about a variable whose value you cannot observe directly. With the milk in the fridge, we know that if left too long it goes bad, but we can't exactly tell if the milk is bad until we open it and give it a whiff, or look at the best-before date. Also pretend that your roommate can randomly go buy milk, replacing the bad stuff with fresh stuff. Because of these two things -- going bad over time and the roommate replacing it -- you never know exactly, when you walk up to the fridge, whether you're going to be able to drink the milk or not.

The Markov part comes in when you add time. Say, every day (or, at each "step" in the "process"), the milk gets a little "badder" and the badness resets when your roommate replaces it.

The Hidden part comes in when you can't see the state of the milk directly, i.e. you can't tell if it's "good" or "bad", so instead we rely on other percepts that help us infer the value indirectly. For example, maybe we can measure the odor.

The HMM is useful if you want to talk about a value-that-you-cannot-observe-directly that changes over time. I don't know how, or if, this is applicable to planning. It might help you predict some unobservable obstacles, maybe.... but I don't think the HMM is directly useful in the computation of "the next step", or what I'm calling the transition function, or, selecting the next action based on the current state and previous actions.
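For my own future reference, the milk example can be sketched as a tiny filtering computation. This is just my illustration -- all the probabilities below are invented, not taken from anywhere:

```python
# A minimal sketch of the "milk in the fridge" HMM (all probabilities invented).
# Hidden state: the milk is "good" or "bad". Observation: whether it smells off.

# P(next state | current state): milk goes bad over time; the roommate
# occasionally replaces it, resetting "bad" back to "good".
TRANSITION = {
    "good": {"good": 0.7, "bad": 0.3},
    "bad":  {"good": 0.2, "bad": 0.8},
}

# P(observation | state): we can't see "bad" directly, only smell it.
EMISSION = {
    "good": {"smells_off": 0.1, "smells_fine": 0.9},
    "bad":  {"smells_off": 0.8, "smells_fine": 0.2},
}

def forward_step(belief, observation):
    """One step of HMM filtering: predict via TRANSITION, weight by EMISSION."""
    predicted = {
        s: sum(belief[prev] * TRANSITION[prev][s] for prev in belief)
        for s in ("good", "bad")
    }
    unnormalized = {s: predicted[s] * EMISSION[s][observation] for s in predicted}
    total = sum(unnormalized.values())
    return {s: p / total for s, p in unnormalized.items()}

belief = {"good": 0.5, "bad": 0.5}  # no idea on day 0
for obs in ["smells_fine", "smells_fine", "smells_off"]:
    belief = forward_step(belief, obs)
print(belief)  # after a whiff of something off, "bad" dominates
```

The point is that the belief is over the *hidden* variable; we only ever feed in observations.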

So that's probably all I'm going to say about HMMs for a while. I think this entry helped me distinguish HMMs from MDPs in my head a little, so now I can go back to whatever path I was following before. =D

Oh, and about the title of this entry -- I just wanted to emphasize that my problem is to find a way to choose the selection of the next-action while following a known process. I am interested in the order of the actions, and the patterns that this ordering creates as it interacts with the environment.

(mmm, I enjoyed articulating my research goal again! It still doesn't feel precise enough to the real problem, but I'm one iteration closer...)

 Posted by Frozone Permalink on May 18, 2009 11:04 AM | Comments (0) categorized under Pedagogical modelling Tweet

## May 05, 2009

### Generating actions and the transition function

I was reading this paper (Decision-Theoretic Planning for Playing Table Soccer [Tacke, Weigel & Nebel, 2004]) about an application of decision theory to a robotic planner that played foosball. I thought that the paper did a good job of explaining the "nuts and bolts" I've been looking for.

To put a name to it, I want to look at the transition function. Basically this function takes the current state and an action as input, and outputs the possible new states (with their probabilities). Then you pick the action whose possible outcomes give you the highest expected utility.
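To pin that down for myself, here's a toy version of the transition-function-plus-utility loop. The states, actions, probabilities, and utilities are all made up by me -- this is not the paper's actual model:

```python
# Toy sketch: a stochastic transition function plus expected-utility action choice.
# (state, action) -> list of (next_state, probability); everything here is invented.
TRANSITIONS = {
    ("ball_midfield", "pass"):  [("ball_forward", 0.6), ("ball_lost", 0.4)],
    ("ball_midfield", "shoot"): [("goal", 0.1), ("ball_lost", 0.9)],
}

UTILITY = {"ball_forward": 5.0, "ball_lost": -2.0, "goal": 100.0}

def expected_utility(state, action):
    """Average the utility of each possible outcome, weighted by its probability."""
    return sum(p * UTILITY[s2] for s2, p in TRANSITIONS[(state, action)])

def best_action(state, actions):
    return max(actions, key=lambda a: expected_utility(state, a))

print(best_action("ball_midfield", ["pass", "shoot"]))
```

Note that shooting wins here only because of the huge utility I invented for "goal" -- change the numbers and the chosen action changes, which is the whole game.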

In this paper, their system generated a tree of next possible actions, and for each of those, opponent reactions and corresponding consequences, with probabilities of those consequences. Next came another layer of the next possible states. It looks like they used a bit of naive Bayes and some minimax, which was odd because it was a decision-theoretic system, not game-theoretic. This is an art, really -- you have the tools and you can make them do whatever you want to suit your problem!!

I'm starting to get an idea of how planning works, but I need to look at a few more systems to compare and contrast them so I can pick out the bigger patterns.

I think that the teaching strategy will influence the next-possible actions. I don't know how incoming observations about the learner will fit in... maybe the probabilities of consequences? I also have to keep in mind the overall story -- i.e. my utility is "giving the learner an overall sense of a meaningful experience". I really have to define that mathematically. I want to put them through an introduction, a middle, and an end. And you can braid many of these together.

Well, baby's awake. Hope to come back next time with another example of the process of planning.

 Posted by Frozone Permalink on May 05, 2009 09:17 AM | Comments (0) categorized under Pedagogical modelling Tweet

## April 24, 2009

### A good survey-ish paper for my area

Floating around on my little feverish cloud, I found this snippet of a blog post that I can't believe I hadn't published yet. So here she be.

***

I found a niche of really hot papers for my topic. Many are mentioned in a previous post ("My pedagogical issues"). Another such paper is:

This paper describes a history of attempts to do what I'm trying to do. (Excellent for chasing references!) It also tries to specify what good teaching *is*; you need to know that before you can model it! I'll have to read the paper again to see if it answers the question about *when* to apply the different teaching strategies. It might be under the heading "Judging task difficulty and degree of assistance".

I feel like I need to better define what I want to do vs. what has already been done.

***

(2 months later) Ah ha! The answer to that last line is: "apply decision theory".

 Posted by Frozone Permalink on April 24, 2009 04:38 AM | Comments (0) categorized under Pedagogical modelling Tweet

### My friend, the utility function

I've been thinking more about planning lately and I had a thought the other day that tasted like a milestone in understanding to me. At the same time it felt obvious, but I wanted to push myself to articulate it here.

So my epiphany was about the utility function. I'll back up a bit. Decision theory, in a nutshell, to me, is that you lay out your problem in an influence diagram where you model the relevant factors such as the agent's allowable actions and other variables that affect the agent's decisions, then you build the utility function according to how you want the agent to act, and let the thing rip. More here.

Out there in the research world, I've often seen the utility function as a reflection of "how much the student learned". It's impossible to look inside the student's head and read this in as a variable. Instead, researchers have commonly used quizzes and so on as an indirect measure of this.

So the epiphany was this: Don't make the utility function based on the amount the student has learned. You can't measure this anyway. Instead, design your utility function to measure whether or not you have put the student through a meaningful experience, like a story with a beginning and an end, with a goal in mind. You can braid multiple story threads together, and any one activity can contribute towards multiple experiences. The main thing is to pull together a set of relevant activities that have long-term meaning in mind.
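A toy sketch of what such an "experience" utility might look like -- the thread names, phases, and scoring rule are all invented for illustration:

```python
# Hypothetical sketch of a "meaningful experience" utility: instead of scoring
# estimated learning (unmeasurable), score how far each braided story thread has
# progressed through beginning -> middle -> end.

PHASES = ["beginning", "middle", "end"]

def experience_utility(threads):
    """threads: dict mapping thread name -> set of phases completed so far."""
    score = 0.0
    for phases_done in threads.values():
        # reward contiguous progress through the arc, not isolated activities
        progress = 0
        for phase in PHASES:
            if phase not in phases_done:
                break
            progress += 1
        score += progress / len(PHASES)
    return score / len(threads)  # 1.0 means every thread reached its end

threads = {
    "rock_types":      {"beginning", "middle", "end"},
    "plate_tectonics": {"beginning"},
}
print(experience_utility(threads))
```

The key design choice (under my assumptions here): everything in this function is directly observable by the system, because it scores what the system *did*, not what the student internally learned.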

This take on the utility function is not mine. I'm sure I heard it before from one of my mentors, probably Gord or Jim, or possibly Mike or Gina. Anyway, I'm happy the thought stuck in my head, wherever it came from, so that it could come back now that I'm tackling the theory in more depth.

I'm a little tipsy on the idea of the utility function as something the programmer places there to influence their desired behaviour, vs. "discovering" good behaviour and then learning to reinforce it.

la la la...

 Posted by Frozone Permalink on April 24, 2009 04:25 AM | Comments (0) categorized under Pedagogical modelling Tweet

## April 20, 2009

### Revisiting: What is teaching? Some models

So I had a lovely chat with friend of many years Chris Brooks, who gave me 2 good leads on where to look for models of teaching/learning.

Lead #1 - Anderson's amendments to Bloom's taxonomy: I haven't gotten my hands on a copy of the book yet, but I found a link over at the University of Georgia that looks like it describes the basics pretty well.

Lead #2 - IMS LD. I'm frightened that my baby is going to wake up any second now, so I'm just going to post this link for now. (I'm not even sure this is the right page, heh. All I have in my head is "IMS LD" and some concept of what I am looking for, i.e. a computational model on the "How"s of different kinds of teaching.)

Over the next while I will be stewing about why I thought these were good leads, how they relate to my work in decision theory, and to maybe compare/contrast them a bit. I pray that I can churn this out in the form of a blog entry, soon. =)

 Posted by Frozone Permalink on April 20, 2009 01:10 PM | Comments (0) categorized under Pedagogical modelling Tweet

## April 15, 2009

### Ontologies: a subtlety

The word "ontology" has become a buzzword. It's too bad, because my research interests are veering in this direction, and I feel kinda... yucky... researching something "trendy". Oh well, it's minor; I can't stop following this path just because it's become more popular in recent years. =)

Anyway, I'm noticing that different papers use the term "ontology" to mean subtly different things, so I wanted to file away this mini post to clarify what I mean by the term in my work. I first learned the term in a philosophy/metaphysics course, and that will forever influence my use of it in computer science. Your ontology is what exists in your world. It is an organization of the concepts you can refer to.

I was just reading a paper that said something to the effect of, "the ontology is a description of educational goals". I disagree, and that's what prompted me to record this entry. I think that task domain ontology references form an important, even a central part of the educational goals. However I think that it's a different component that specifies *what level* you are shooting to have the student experience a concept. For example, your ontology might refer to concepts in geology such as types of rock (metamorphic, igneous, sedimentary), but what you want the student to do with that information (example: use the term in a matching game vs. describing the process to a fellow student) is part of a different representation. (Possibly employing another "ontology of levels of learning"!)
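A tiny sketch of the separation I'm arguing for, with toy structures standing in for real ontologies (all names here are hypothetical):

```python
# The task domain ontology names what exists; an educational goal pairs a
# concept from it with a level from a *separate* "levels of learning" ontology.

ROCK_ONTOLOGY = {
    "rock": ["metamorphic", "igneous", "sedimentary"],  # is-a relationships
}

LEVELS = ["recall", "explain", "apply"]  # a second, independent ontology

def make_goal(concept, level):
    assert any(concept == k or concept in kids for k, kids in ROCK_ONTOLOGY.items())
    assert level in LEVELS
    return {"concept": concept, "level": level}

# Same concept from the domain ontology, two different educational goals:
print(make_goal("igneous", "recall"))   # e.g. use the term in a matching game
print(make_goal("igneous", "explain"))  # e.g. describe the process to a peer
```

So the domain ontology is referenced *by* the goals, but the goals themselves live in a different representation.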

This whole issue is a little ironic, if you think about it. The point of an ontology is to help machines intercommunicate and have a way of letting them know if they are referring to the same concept. Yet, the word "ontology" itself has its ambiguities.

 Posted by Frozone Permalink on April 15, 2009 04:08 PM | Comments (0) categorized under Pedagogical modelling Tweet

## February 19, 2009

### My pedagogical issues

I found another paper (Dagger et al., "Developing Adaptive Pedagogy with the Adaptive Course Construction Toolkit (ACCT)") that finally does justice to the teaching strategies I've been wondering about. Unfortunately there is very little technical detail in this paper. They have a nice website though -- I'll have to go take a deeper look.

This paper identified several teaching strategies:

• case study
• problem/enquiry
• didactic
• web quest

I brainstormed a few more, below. I may come back to edit this entry later if I remember more. These should be categorized properly in the flavour of Jackpot! A pedagogical ontology.

• the jokester/challenger: who puts forward incorrect information with the intent that the student challenge them, increasing the student's confidence in their knowledge of the material. (Kinda like my character Poons from another project.)
• the interview: where you personify concepts from the task domain ontology and have them engage in a conversation with each other. This technique can be used to compare/contrast concepts that seem similar at first. You can use social cues like competition, pride, etc. to colour how the concepts interact with and differentiate each other. I observed this technique in use while reading "Head First Statistics" by Dawn Griffiths. Actually, the entire "Head First" series is excellent at employing pedagogic tricks.
• the fellow student who asks "dumb" questions
• introducing new material gently (I've seen this implemented as a "rule" in a system somewhere... citation needed)
• refinement
• review of prerequisites, linking back to previous knowledge & experiences, building up from there

My quest continues to actually find COMPUTATIONAL MODELS of these things. Too often I find them explicitly programmed into systems. Surely someone, somewhere has tried to abstract these into a model that can be executed on arbitrary task domain ontologies.

edit, Feb 20th/2009, more teaching strategies, from [Kumar, Greer & McCalla 2005 Assisting online helpers].

• cognitive apprenticeship
• successive refinement
• discovery learning
• abstraction
• practice
• Socratic diagnosis

(Update, Feb. 23, 2009) In the same paper, they include a table with more:

• analogy
• analyse
• assert (confirm)
• choose next problem
• clarify concepts
• clarify misconceptions
• diagnose
• encourage
• explain
• go to < resource >
• hint
• plan
• predict
• promote reflection
• provide evidence
• provide feedback
• question
• reason
• reflect
• reject
• relate
• rephrase
• request
• summarize
• suggest

I can see already how some of these are going to overlap. Hence the need for an organized ontology, mentioned above.

Again, it is also important to identify when, and more interestingly, HOW, such strategies would be applied.

Feb 25th 2009 -- More from a related paper [Kumar V.S., Greer, J.E., McCalla, G.I. (2000). Pedagogy in Peer-supported Helpdesks. In Sasikumar, M., Rao, D.D. & Prakash, P.R. (Eds.), Proceedings of KBCS 2000: International Conference on Knowledge Based Computer Systems, Mumbai, India. (pp. 205-216). New Delhi: Allied Publishers.]

• Provide evidence
• Browse models
• Observe help session
• Show example-part-of-solution
• Rephrase diagnosis
• Hint example-complete-solution

 Posted by Frozone Permalink on February 19, 2009 03:28 PM | Comments (0) categorized under Pedagogical modelling Tweet

## February 11, 2009

### Decision theory for teaching strategies

I'm trying to wrap my head around decision-making under uncertainty using decision theory and Markov decision processes. After a lot of tumbling and turning, I realized that I'm trying to compare and contrast two things:

1. optimal policy construction, and
2. Markov decision processes (MDPs).

These are 2 ways (not the only 2) to tackle decision-making. This post is going to be about #1 -- I'll tackle MDPs another day. As for policy construction -- I began my hunt with the notion that you would start with an influence diagram such as the one here.

Your decision problem is modelled as a graph. The square nodes are decision nodes. The diamond nodes are utility nodes. The circle nodes are the same as the nodes in a Bayesian network. Circles can point to diamonds and squares. Squares can point to diamonds and circles. Only one diamond is allowed per diagram.

A policy (I learned to represent policies with the Greek letter delta, δ1) is like a "rule of thumb" for the action you choose when faced with a decision. The optimal policy is the policy that gives you the greatest expected utility (from the diamond node). You can think of the policy as being connected to the decision nodes (the squares).

One thing that has confused me for a looong time is that your random variables (circle nodes) can be either "states" or "observables/evidence". Recently I had a little epiphany where I thought of the states as your ontology and your observables as your epistemology.

I'm perplexed about the application of a decision network like this. Would you use the same network over and over? I guess you would have to build a network for each type of decision you'll need to face. And, the only time you'd re-use the same network is if you face exactly the same sort of decision again. Although you could modify the values in the CPTs (conditional probability tables) if you had better information the next time around.

Anyway, a "policy" is something that you can apply to your decisions, and the policy tells you which direction to go. It is a function from states to actions. Basically, for all decision nodes in your network, (all squares), your policy is the set of decisions to make -- one decision per decision node. (So, is policy construction always an offline problem?)
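Here's a brute-force sketch of policy construction on a one-decision network -- the classic umbrella example, with probabilities and utilities I've invented for illustration:

```python
# Brute-force "optimal policy construction" for a single-decision network.
# Chance node: rain. Evidence node: forecast. Decision: umbrella or not.
from itertools import product

P_RAIN = 0.3
# P(forecast | rain): the forecast is the observable/evidence node
P_FORECAST = {True:  {"rainy": 0.8, "sunny": 0.2},
              False: {"rainy": 0.1, "sunny": 0.9}}
# Utility of (it_rained, action): all numbers invented
UTILITY = {(True, "umbrella"): 20, (True, "none"): -100,
           (False, "umbrella"): 10, (False, "none"): 50}

FORECASTS = ["rainy", "sunny"]
ACTIONS = ["umbrella", "none"]

def expected_utility(policy):
    """policy: dict forecast -> action. Sum over the joint rain x forecast outcomes."""
    eu = 0.0
    for rain in (True, False):
        p_rain = P_RAIN if rain else 1 - P_RAIN
        for f in FORECASTS:
            eu += p_rain * P_FORECAST[rain][f] * UTILITY[(rain, policy[f])]
    return eu

# A policy assigns one action per information state -- enumerate them all.
policies = [dict(zip(FORECASTS, acts))
            for acts in product(ACTIONS, repeat=len(FORECASTS))]
best = max(policies, key=expected_utility)
print(best)  # follow the forecast: umbrella when "rainy", none when "sunny"
```

Enumerating every policy like this obviously doesn't scale past toy problems, but it makes the definition concrete: a policy really is just a table from observations to actions, computed offline.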

With that background in mind, I want to return to the paper I was reading last time. Remember, my whole quest right now is to figure out, "What is teaching?".

In this paper, I think I was a little misled by their usage of "pedagogical strategy". I was thinking, "oh, are they modelling how to gently guide a student through material that's new to them, vs. challenging a student to get them even more familiar with material they've already been introduced to?" But after reading the paper a couple of times (and I could still be missing something -- the material was pretty dense and a lot of it very technical), I think what they meant by "pedagogical strategy" was "the order in which concepts are introduced". To me this is only a small dimension of a teaching strategy. It's like, this is the "content planning" without the "delivery planning".

I was also a little surprised to learn that these researchers used artificial students. I didn't understand what was being measured with the artificial students -- which part of the system was being "tweaked" by optimizing against different types of students, and where they got the "student types" from. (Thinking, 'hey, could I ever use artificial students in my experiments?').

I missed out on learning about "Reinforcement learning" and how they were using MDPs. I still have so much more to learn before I can really grasp a lot of the research that is going on.

On the bright side, this paper did force me to take a closer look at decision theory.

Anyway, my journey about finding teaching strategies continues. I also feel like I'm getting closer to picking a thesis topic. (HA! I know, I've been saying that for years..... ugh.... lol). But, I'm confident enough this time that I might put this statement on my "About Me" page: I'm interested in how to model teaching strategies such that an abstract task domain ontology can be taken and "filtered through" the teaching strategy. This way, you'd have a universal machine that can teach. Scientists all over the world can continue to make discoveries about physics or math or chemistry or astronomy or geology or medicine or anything, and any Jane Doe could learn about it if she wants because she'd have a(n artificial) tutor to help her explore the material whenever and however she wants. I'd like to figure out how to take a learning object and weave it into an instructional plan that is conscious of overall themes and stories that can stretch from lesson to lesson to create an enjoyable, meaningful experience.

1I have also seen pi (π) used to denote a policy. I don't know if there is a difference or if it's just inconsistent notation.

 Posted by Frozone Permalink on February 11, 2009 03:22 PM | Comments (0) categorized under Computer Science & AI Tweet

## February 09, 2009

### What is teaching?

I started reading a paper ([Iglesias et al., 2009] Learning teaching strategies in an Adaptive and Intelligent Educational System through Reinforcement Learning). On the first page, the article starts talking about how to define a teaching strategy, or rather what a pedagogical strategy specifies, exactly. The authors say that it "specifies how to sequence, how to provide feedback to students and how to show, explain or summarize the system content", and they reference the Murray paper (which I too have referenced before, indirectly!).

I stopped reading the article here because I wanted to brainstorm for myself what I thought a computational model of a teaching strategy ought to address.

• it's about what you're making the student do
• it could include the locus of control, a la [Vassileva & Wasson, 1996]
• the frequency of interaction between the system and the student (i.e. are you giving them lots of time to reflect & build on their own, or are you "holding their hand" by questioning and guiding a lot? This is related to the point above, I guess.)
• group dynamics: when different members are called to do different things, a la SI style
• a model of how to "step back" to let the student struggle, vs. how to jump in and hint, how the system figures out *what* to hint (vs. the point above, which was more about modelling the frequency of the interactions vs. how to model the interactions themselves)
• when and how to be the "trickster" and put forward a deliberately incorrect piece of information to give the student the chance to say "hey, there's something wrong here!" so they can build up a little bit of conviction

Okay, I'm going to keep reading the paper now. It'll be neat to see how much of my thoughts overlap with what these authors have done. I also appreciate how deeply they've gone into implementation details (as I flip through the rest of the paper) because that's where I've really been struggling: how to turn wishy-washy ideas into proper SCIENCE. (Reminded of my posting with the pink fairy at the end... lol, I'm such a kid!)

 Posted by Frozone Permalink on February 09, 2009 10:14 AM | Comments (0) categorized under Pedagogical modelling Tweet

## February 07, 2009

### Modelling teaching strategies

Modeling teaching strategies: this has been a tug at my research interests for a long time, but I don't know if I've ever actually tackled it head on before.

I keep reading about systems that teach *specific topics*, but I want to know if teaching *itself* has ever been modelled in its abstract form, then applied to a task domain ontology for flexible tutoring. Following the hack1 I developed over the last couple of posts, I'll come back and keep updating this entry as I find more. For now I'll sketch an outline:

• what is a teaching strategy: references from the educational world
• relevance of the student model
• specifically, how teaching appears in "flattened" models
• sweep of approaches

Following this, I'd like to take a look at how learning objects are the output of a teaching strategy plus task domain ontology, and how a machine might reverse-engineer these two factors. The point of this would be to allow you to take a learning object out of context and maybe use a chunk of it, or a dimension of it, in the execution of an instructional plan. The biggest goal is to maintain "themes" throughout your delivery plan.

1 hack: My desperate attempt to do research in 15 minute intervals while being a good mommy to my five month old. :) Namely, I publish a post one day, and keep editing it and going back and changing it over several days or weeks. I'm sure this is infuriating to a reader, and I feel bad about that, but then again, the point of this blog is to help me develop my ideas. If I ever publish anything academically, I guess those would be "finished" or "polished" pieces of work that would be more appropriate for actually sharing my work. Anywayz.

 Posted by Frozone Permalink on February 07, 2009 11:28 PM | Comments (0) categorized under Pedagogical modelling Tweet

## November 18, 2006

### Using OWL to reference constraints in tutoring systems

I wonder if there exists a system somewhere that:
- uses a constraint-based model for diagnosis of student misconceptions
- gets the constraints it needs from OWL-based ontologies

Further, when you update the ontology, would the system in turn adapt its teaching behaviour?

I guess this all depends on the role of the ontology. Is it used strictly to define domain knowledge? Can it also be used to model teaching strategies? Can the role of the ontology fall somewhere in between, where it maps out different ways to use specific teaching strategies in specific task domains?
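A speculative sketch of the simplest version of that first question -- a toy dict standing in for an OWL ontology, with all names invented:

```python
# Derive diagnosis constraints directly from an ontology-like structure, so that
# editing the ontology changes the system's diagnostic behaviour automatically.

ONTOLOGY = {"igneous": "rock", "sedimentary": "rock", "granite": "igneous"}

def diagnose(student_claims, ontology):
    """A claim (child, parent) is flagged as a misconception if the ontology disagrees."""
    return [(c, p) for c, p in student_claims
            if c in ontology and ontology[c] != p]

claims = [("granite", "sedimentary"), ("igneous", "rock")]
print(diagnose(claims, ONTOLOGY))  # flags only the granite claim

# Updating the ontology changes the diagnosis with no code changes:
ONTOLOGY["granite"] = "sedimentary"  # scientifically wrong -- just a demo
print(diagnose(claims, ONTOLOGY))  # now flags nothing
```

Obviously a real constraint-based tutor would need far richer constraints than is-a links, but this captures the appeal: the teaching behaviour adapts the moment the ontology does.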

 Posted by Frozone Permalink on November 18, 2006 01:33 PM | Comments (0) categorized under Pedagogical modelling Tweet

## June 15, 2006

I began my research about Education by starting with a question:

What would the College of Education teach a first-year student about how to be a teacher? What topics would a course like "Education 101" cover?

I hunted around, hoping to find a course website with assignments or even a reference to a textbook... but no luck. I even (very) briefly considered enrolling myself in the college and signing up for a couple of the courses, heh heh.

Somewhat defeated, I switched tracks back off of Education towards my home territory in Computer Science. I dug up this paper written by my favourite German researcher, Carsten Ullrich, and read about how his ActiveMath computer system models pedagogy in Pedagogical Rules in ActiveMath and their Pedagogical Foundations. He makes reference to an Educational Technology researcher named M. David Merrill, who has worked on an excellent and thorough review of instructional design theories in First Principles of Instruction. Hurray!

From skimming Merrill's work, I found these particularly interesting; in brackets I've marked Ullrich's hooks back into Computer Science terminology:

• Instructional design theories:
• Problem - "Learning is facilitated when learners are engaged in solving real-world problems."
• Activation - "Learning is facilitated when relevant previous experience is activated." (Learner recalls, describes, applies)
• Demonstration - "Learning is facilitated when the instruction demonstrates what is to be learned rather than merely telling information about what is to be learned." (Examples)
• Application - "Learning is facilitated when learners are required to use their new knowledge or skill to solve problems." (Feedback/error diagnosis)
• Integration - "Learning is facilitated when learners are encouraged to integrate (transfer) the new knowledge or skill into their everyday life." (Motivation)
• Student modelling
• Topic mastery levels / Learning outcomes
• Knowledge / Comprehension / Application
• Cognitive modelling
• individual tends to organise information into wholes or parts
• individual is inclined to represent information during thinking verbally or in mental pictures

• Overview Scenario - general overview of course concepts
• Guided Tour - can take 3 different angles on the course, one for each of Bloom's Knowledge/Comprehension/Application
- Knowledge Scenario - runs the student through a dimension of the course (or can be thought of as a separate course altogether) that enables the student to recall/describe/name concepts
- Comprehension Scenario - generates a course that enables the student to explain/identify/grasp concepts
- Application Scenario - generates a course that enables the student to apply/use concepts.
- Union Scenario - Honestly, I didn't understand this one. I quote from the paper, "The fourth scenario, in principle the union of the above scenarios, teaches the student about the chosen concepts without focusing on a cognitive domain. These scenarios use the ActiveMath extension competence-level of the OMDoc [6] metadata. Using this metadata, an author can encode whether the learning outcome of an element mainly targets knowledge, comprehension, or application." ...Perhaps this is the one that adapts each concept to the learner. (???)
• Exercises-only Scenario
• Concepts-only Scenario (exam preparation)
• Rehearsal Scenario - shows the learner learning objects that they've already seen
• Terse scenario - removes all well-mastered content
• Polya-style proof-presentation scenario - not much detail on this one

I think I'll go get another cup of coffee and read Merrill's paper more thoroughly.

 Posted by Frozone Permalink on June 15, 2006 07:40 AM | Comments (0) categorized under Pedagogical modelling Tweet

## May 24, 2006

### Jackpot! A pedagogical ontology

A German researcher named Carsten Ullrich has developed an instructional ontology that is looking VERY promising. He has linked his journal article describing the ontology, as well as an academic paper: Description of an Instructional Ontology and its Application in Web Services for Education.

I was amused to see my B.Sc. supervisor's name in the 3rd entry of Mr. Ullrich's list of publications, all of which look incredibly interesting.

Wow - what a breakthrough! 'Very excellent listing of related research.

 Posted by Frozone Permalink on May 24, 2006 06:27 PM | Comments (0) categorized under Pedagogical modelling Tweet

## Index to Steph's Notes

Feb. 24th 2007 - Weee! This new part of my website is not an entry, but rather a permanent fixture whose purpose is to "Look Down on All Those Notes With Some Grand Vision of Organization". Wish me luck. LOL
1. Representing meta-data (fuel) & the different kinds of "hooks" that intelligent systems can use (how fuel is injected into the motor of the engine)
1. Motivation: Semantic net / Rationalizable to a machine
2. Technology & Philosophy: RDF, modus ponens,
1. Predicates, Logic & situation calculus
3. What kinds of data? - What kinds of meta-data would an AIEd system possibly need, and how is it represented?
2. "is-prerequisite-to"-type knowledge
3. interactions with learning objects & other learners - (location, composition is-a/part-of, sequencing by restricting navigation, personalization, ontologies for LO context)
4. lesson plans, curriculum plans, practicing sessions (What is stored, what is generated on the fly? What is remembered?)
4. How to organize it - When is it stored in a database? Meta-data? Agent memory banks? Protocols? Repositories? XML files? Home-servers? WSDL services? Frameworks? Portable banks? P2P access?
1. Database of object-agent interactions
2. Concept of "Home" on a P2P network -- maybe the bulk of a learning object's usage data is on its home server and can be queried using WSDL or something? Similar homes for each student's usage history, etc. Baggage problem.
1. referring to a concept/relationship - ex. AgentOwl?
6. Generation of this data
1. Rationalization: For use by other AIEd systems
2. What is generated - discuss items under part I.C.
3. When it's generated - describe procedural model, which parts of the engine generate what (is-a/part-of data, XML feeds, web services, meta-data about groups and collaboration, protocols; example: Friend of a Friend (FOAF) project)
4. Technical notes of HOW it's generated: JENA, issues of implementation demo, my Hermione & Ron agent examples, lol
5. Usage of this generated data - see part IV. A.
2. Given the engine, who uses it?
1. Students / Learners / "Me"
1. instructional planning, student model, pre-requisites, tutoring, coaching, collaboration, constructivism
2. Teachers / Educators / "Me"
1. putting together lessons
2. be able to browse through task domain knowledge in an objective / encyclopaedia format, then be able to pick-and-choose what you need for your students
3. compose examples, design explanations, pull together diagrams, learning objects, etc. Haystack Relo?
3. Administration / Government / Structure / Crowd Control
1. as restrictions/obstacles/sand pit to the robot in agent environment
2. can't just have a swarm of students and teachers out there -- need structure of courses, curriculum, objectives, requirements (at least, we do in this day and age!) - Report cards, evaluation, feedback
3. government, marks, certificates, requirements, funding, curriculum, attendance, delinquent, non-attending, motivation
4. school's images, goals, strengths, payroll, HR, security, accounts, permissions, privacy
5. registration, failed courses
3. User Environment -- How does this engine work? What does the user see on the screen?
1. Introduction - Given a background in educational psychology, how does the system present itself -- what does the user see, and where does this data come from? (Links to thoughts from part I.)
2. Task Domain Browsing - Suppose you're just idly browsing through the "raw" content. How would it look when it's not wrapped around a learning context or lesson or tutorial or anything? 'Cross between browsing a raw task domain ontology and browsing a learning object repository.
1. Cleaning up the data -- Visualizing the data for humans to pick through the task domain and work on it. Suppose the "Subject Expert" discovers an advancement in science and needs to update the "world's" domain knowledge. (I used the "Subject Expert" terminology from Ontologies to Support Learning Design Context - Thanks Chris) How would they make corrections to ontologies and learning objects, or at least point the users of "old" objects towards adopting the newer ones.
2. "Modes" - Learning & Lessons / Checklist - Homework, Assignments, Courses being taken / Collaborative mode / Teaching mode / Calendar-email-administrative mode -- See also the different kinds of scenarios in the ActiveMath system
4. Evolution of this engine
1. target some key implementation hooks discussed in part I - design an experiment/demo
1. scrape a page - (Note: scraping can only give objective data, not in-context data)
2. LO repository - related to browsing the task domain?
3. a learner's "To Do" list - where does it come from? Assignments, courses.
4. sample group scenario
5. sample teacher lesson planning
6. sample data "left behind"
7. sample use of that data
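For the "scrape a page" experiment above, a first pass could be as crude as pulling headings out of a page and treating them as candidate task-domain terms -- purely objective data, with no learning context attached. A rough stdlib-only sketch (the sample HTML is invented; a real run would fetch a page first):

```python
from html.parser import HTMLParser

class HeadingScraper(HTMLParser):
    """Collect <h1>/<h2> text as candidate task-domain terms."""
    def __init__(self):
        super().__init__()
        self.in_heading = False
        self.terms = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2"):
            self.in_heading = True

    def handle_endtag(self, tag):
        if tag in ("h1", "h2"):
            self.in_heading = False

    def handle_data(self, data):
        if self.in_heading and data.strip():
            self.terms.append(data.strip())

# Invented sample page; a real experiment would fetch with urllib.request.
sample = "<h1>Graph Theory</h1><p>intro</p><h2>Connectivity</h2>"
scraper = HeadingScraper()
scraper.feed(sample)
print(scraper.terms)  # objective terms only -- no pedagogical context
```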
2. Data mining (for what? lol)
1. discovery / generation of ontologies - when do you need to hunt for them, and when do you have to have a solidly-known & predictable ontology?
3. I/O - where it happens, which languages, protocols, which agents perform I/O and when, percepts, actuators
1. Role Assignments
2. My Environment Adapts to me
1. Displaying feedback from the server on JSP pages (Software engineering considerations)
2. Sketching out a design (Content planning vs. Delivery planning)
3. agent negotiations / social structures / ummm... Web 2.0?
4. garbage collection of meta data
1. Artificial Intelligence & Evolution
2. open learning environments
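One way to read the "garbage collection of meta data" item is as mark-and-sweep over metadata records: keep only the records still reachable from live learning objects, sweep the rest. A toy sketch -- every object and record name here is hypothetical:

```python
# Hypothetical mark-and-sweep over learning-object metadata.
live_objects = {"lesson-1", "quiz-3"}   # objects still in use (the "mark" set)
metadata = {                            # record -> objects referencing it
    "meta-a": {"lesson-1"},
    "meta-b": {"lesson-9"},             # lesson-9 was deleted
    "meta-c": {"quiz-3", "lesson-9"},
}

def collect(metadata, live):
    """Drop metadata records referenced by no live object (the 'sweep')."""
    return {k: refs for k, refs in metadata.items() if refs & live}

metadata = collect(metadata, live_objects)
print(sorted(metadata))
```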
5. Agents, pets, grouping, Community modelling
1. Protocols - finding groups, cyber dollars, state diagrams (?)
2. "Community Studies" - graphs & communication hubs, types of communities (free-for-all, hierarchy of authority, etc.)
3. implications of joining a community - what do you share, which parts of your student model are relevant
4. Walls & sand traps -- deliberate restrictions as problem-solving for learning
5. Communication channels - individual-to-individual, individual-to-community, chat channels, agent-only "administrative" communications, ex. requests for related learning objects in a particular community, etc.
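The "graphs & communication hubs" idea above can be made concrete with simple degree counts over a community graph: whoever has the most links is a hub. A minimal sketch, with members and edges invented for illustration:

```python
from collections import Counter

# Invented community: undirected edges between members.
edges = [("ann", "bob"), ("ann", "cat"), ("ann", "dee"), ("bob", "cat")]

# Count each member's degree (number of links).
degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

hub = max(degree, key=degree.get)
print(hub, degree[hub])
```

Real community-studies work would use a proper centrality measure, but degree alone already distinguishes hub-and-spoke communities from free-for-alls.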
6. Educational/Pedagogical focus (this part probably shouldn't be its own section but rather be incorporated into the whole picture; it's separate for now because I'm still just starting to learn about it.)
1. Semantics - what there is to talk about in Education
1. ex. Merrill's First Principles of Instruction, linking educational terms to AI terms
2. Pedagogical skills for tutors -- supporting human *and* artificial tutors
3. Student modelling - what the machine needs to know about the student, pedagogically-speaking, about learning history/preferences
4. Roles - Simulated students, Coaches, Tutors, Teachers
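The student modelling item above could start as nothing fancier than a record of learning history and preferences keyed by topic. Everything in this sketch is a placeholder for whatever the pedagogical semantics eventually require:

```python
from dataclasses import dataclass, field

@dataclass
class StudentModel:
    """Hypothetical minimal student model: history + preferences only."""
    name: str
    history: dict = field(default_factory=dict)      # topic -> latest score
    preferences: dict = field(default_factory=dict)  # e.g. preferred media

    def record(self, topic, score):
        self.history[topic] = score

    def weakest_topic(self):
        """Topic with the lowest recorded score, or None if no history yet."""
        return min(self.history, key=self.history.get) if self.history else None

s = StudentModel("sample-student")
s.record("Connectivity", 0.9)
s.record("Shortest Paths", 0.4)
print(s.weakest_topic())
```

Even this much is enough for a tutor agent (human or artificial) to decide what to review next.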