## July 30, 2011

### This is what I mean by "global coherence"

I originally wrote the following during correspondence with a colleague. But I like the wording enough to stamp it on this blog as well. :)

This is what I mean by "global coherence":

I am trying to mechanize the process of putting together a personalized curriculum that takes into account a person's background, preferences, and experience. I would be using collaborative filtering (like Amazon recommendations). I am primarily interested in generating a curriculum with long-term coherence. By contrast, picture a "bad" system that sends a learner jumping from "Ooh! Shiny object!" to "oh, another one!", and then to another, and so on. If you jump from one Amazon book recommendation to the next, it does not take long to forget the original point of interest. For deeper learning, a longer stretch of coherence is necessary across a series of learning object recommendations.

To map this problem to artificial intelligence research, I suggest Model-Theoretic Planning.
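To make the coherence idea concrete, here is a toy sketch. Everything in it (the topic vectors, the 0.7 weight, the scoring rule) is an invented placeholder, not a worked-out design: each learning object gets a topic vector, and the recommender balances the learner's immediate interest against similarity to the running "theme" of what they've recently studied.

```python
# Hypothetical sketch: scoring candidate learning objects so that
# recommendations stay coherent with the learner's recent trajectory,
# instead of hopping between unrelated "shiny objects".
# All names, vectors, and weights here are invented for illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return sum(a * a for a in u) ** 0.5

def cosine(u, v):
    d = norm(u) * norm(v)
    return dot(u, v) / d if d else 0.0

def theme_vector(history):
    """Average the topic vectors of recently visited learning objects."""
    n = len(history)
    return [sum(v[i] for v in history) / n for i in range(len(history[0]))]

def recommend(candidates, history, interest, coherence_weight=0.7):
    """Pick the candidate balancing immediate interest against
    coherence with the running theme of the curriculum so far."""
    theme = theme_vector(history)
    def score(c):
        return ((1 - coherence_weight) * cosine(c, interest)
                + coherence_weight * cosine(c, theme))
    return max(candidates, key=score)

# Toy topic space: [calculus, game theory, pedagogy]
history = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.0]]   # learner has been doing calculus
interest = [0.2, 1.0, 0.0]                      # but just clicked something game-theoretic
candidates = [[1.0, 0.2, 0.0],                  # more calculus (coherent)
              [0.0, 1.0, 0.0]]                  # pure game theory (shiny object)
print(recommend(candidates, history, interest))  # → [1.0, 0.2, 0.0]
```

With the coherence weight high, the recommender resists the shiny object and stays on the calculus thread; drop the weight toward zero and you get the Amazon-style hopping described above.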

Posted by Frozone on July 30, 2011 05:19 PM | Comments (0) | Categorized under Pedagogical modelling

### Instructional Design as Process Modelling

I was reading a piece recommended to me by Prof. Rick Schwier: Teaching a Design Model vs. Developing Instructional Designers by Elizabeth Boling, Indiana University.

I couldn't help but fly like a moth to a flame to the term "process modelling". My favourite part of the paper was this, on page 3: "...where the products of a professional's activity represent an intervention in the lives of others for an intended purpose." The author is talking about Instructional Design relative to other kinds of professional work. I don't know, would the practice of Medicine fall into the same class of human activity?

I have a history of studying "process modelling" and I believe that I invented a way to computationally represent a teaching technique that is abstracted from content. See: Language to articulate teaching strategy. It was really, really cool for me to see an actual legitimate person discuss process modelling in an educational context. Yay! Now I have something to relate to. I think my angle was more about "Let's encode this style of behaviour" and less "Let's actually figure out how to effectively teach people". I definitely need to draw from the latter, even though the product of my work is the former.

Anyway, my primary reason for posting this was to keep tabs on that link to Boling's paper.

Posted by Frozone on July 30, 2011 01:45 PM | Comments (0) | Categorized under Pedagogical modelling

### Meet my friends: Ploo, Dip-lc, and Krott

I decided to give names to the trails of thought in my research. Please meet my friends, Ploo, Dip-lc, and Krott.

Ploo, or P.L.O.O., stands for "Porous Learning Object Repository". A relevant entry (one of them, anyway): topics at hand: graphical models in game theory, operations research and open learning object repositories. Basically, Ploo represents my efforts to build a system that could take a new learning object and contextualize it for a particular person within a particular learning community. (Note: a learning community always has an associated learning object repository of shared ideas and reference points for that community.) Thanks to my supervisor Prof. Gord McCalla for the adjective "porous"!

Dip-lc, or D.I.P. - L.C.s, stands for "Distributed Instructional Planning amongst Learning Communities". Basically, Dip-lc represents my recent simulation model work. The WWW is explicitly represented as a thing that is constantly growing as a result of human activity: just as we shape it, it shapes us. (Thanks to Professor Nate Osgood for that wording! And for discussing these ideas with me for my 858 term project (link to paper on Scribd).)

Krott, or K.R., O. of T.T., stands for "Knowledge Representation, Ontology - Teaching Technique". Krott represents my quest to find a knowledge representation for teaching techniques. The word "ontology" is in there because my earliest work was to find an ontology for teaching techniques. But I really need a knowledge representation. See also: Language to articulate teaching strategy

Posted by Frozone on July 30, 2011 11:37 AM | Comments (0) | Categorized under Pedagogical modelling

## July 06, 2011

### Actually, Research topic: Teaching techniques have Shapes.

Do you remember when I got all excited recently about a TED talk, and I said "Oh, oh! THIS is my research topic!" (See previous entry, "Sweet, TED video showing my research area".)

Well, that was all well and good, but I have a better description now.

Imagine you have a whole bunch of recorded conversations between teachers and students: a group of students and one teacher, maybe several teachers and one student, a single tutor and a single student. The topics discussed are of any and all kinds. And you try to get as many different shapes of conversations as possible. For example, one type might be "Drilling", where one person continually and repeatedly prods the other person about the same or a very similar topic. Another type might be one where all parties are equally asking, providing, and listening.

So, you have a whole bunch of teaching scenarios. And you categorize them.

Here's the original part. You take the "shape" of the category and turn it into a computational model that could be repeated on ANY topic, where any subset of the parties in the conversation (short of all of them) is played by a computer.

How? Well, I think that the first thing that you'd do is make a visual representation of each "shape". The Y axis could represent the topic, perhaps at levels of difficulty. Threshold concepts could be at the bottom, and background information under that. Above would be more advanced synthesis / analysis skills. I guess you would have to quantify difficulty level into one spectrum.

The X axis would represent the "direction" of the conversation: who is providing, who is asking, who is listening.

The line part in the middle would have dots representing time. So if you start with a "Drilling" on a very advanced topic, you'd plot a line with a low X and a high Y. Then, if the conversation proceeds to a group quiz on the same thing, um, I think you'd have to use a Z axis. Then, if the conversation moved to "Grill the teacher" mode, the next line would be at a very high X and still the same Z. So your line could spiral and turn back in on itself and go forward and backward.
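As a toy illustration of these axes (the numeric conventions here are invented, not settled), a conversation could be encoded as a sequence of turns, each a point in (direction, difficulty, time), and a "shape" category could be read off the trajectory:

```python
# A hypothetical encoding of a conversation "shape": each turn is a point
# (direction, difficulty, time) -- the X, Y, and the dots along the line.
# The axis conventions and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Turn:
    direction: float   # X: -1.0 = teacher provides ... +1.0 = student provides
    difficulty: float  # Y: 0.0 = background ... 1.0 = advanced synthesis
    time: int          # the "dots" along the line

def classify(shape):
    """Crude shape classifier: 'Drilling' if one party keeps providing
    on nearly the same difficulty level; 'Balanced' otherwise."""
    directions = [t.direction for t in shape]
    difficulties = [t.difficulty for t in shape]
    same_side = all(d < 0 for d in directions) or all(d > 0 for d in directions)
    narrow = max(difficulties) - min(difficulties) < 0.1
    return "Drilling" if same_side and narrow else "Balanced"

drilling = [Turn(-0.9, 0.8, t) for t in range(5)]               # teacher prods repeatedly
mixed = [Turn((-1) ** t * 0.5, 0.3 + 0.1 * t, t) for t in range(5)]  # parties alternate
print(classify(drilling))  # → Drilling
print(classify(mixed))     # → Balanced
```

The point isn't the classifier itself, which is a stand-in; it's that once the trajectory is data, a shape becomes something a machine can match, replay, and re-instantiate on a new topic.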

Now that we have a language to articulate teaching / interaction strategy, we can pick a tool from artificial intelligence to represent it.

If you have been following my blog, you will know that I've looked at the following to attempt this.
- Simulation / Process Modelling
- Cooperative Game Theory
- Decision Theory
- Bayesian Networks (ok, just a scratch)
- Constraint Satisfaction (again, I haven't done enough to claim effort, but by creating this list I also wish to articulate possibility for future reference!)
- Semantic Network

Once you have modelled a particular technique, or Teaching Shape, you have to have an algorithm that selects when to use each one. This will depend a LOT (maybe entirely?) on the student model and their recent activity.
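A hypothetical sketch of such a selector, with made-up rules standing in for a real student model (none of these thresholds or shape names are validated pedagogy):

```python
# Invented placeholder: pick a Teaching Shape from a toy student model.

def select_shape(student):
    """student: dict with 'mastery' (0..1) and 'recent_errors' (count)."""
    if student["recent_errors"] >= 3:
        return "Drilling"            # repeated prodding on the weak spot
    if student["mastery"] > 0.8:
        return "Grill the teacher"   # let the student drive the questions
    return "Balanced discussion"

print(select_shape({"mastery": 0.4, "recent_errors": 4}))  # → Drilling
print(select_shape({"mastery": 0.9, "recent_errors": 0}))  # → Grill the teacher
```

A real selector would presumably be learned or probabilistic rather than a rule table, but even this toy shows where the student model plugs in.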

A second issue arising is "How to Keep Content / Themes Coherent over the Long Term". I think that this means you have to have Teaching Shapes on top of Teaching Shapes, where some dimension within the Y axis is held constant. That is, when you overlap or build Teaching Shapes on top of each other, you have to select the Concept / Topic that you are keeping constant. Or maybe you aren't keeping the topic constant, but, you're Drilling over an array of stuff (like for an exam). But maybe in these cases, global coherence is less important.

Now that I have typed all that and re-read it, I wonder if I am the only person on this earth who would ever understand it. I could probably explain most of it to my supervisor. And I can think of a handful of others in my lab who would get it, if they were willing to spend several hours with me in front of a chalkboard.

I guess that's what this blog is for! Hopefully I will be able to continue to clarify, explain, expand, develop. :-D

Posted by Frozone on July 06, 2011 06:36 AM | Comments (0) | Categorized under Pedagogical modelling

## March 10, 2011

### Simulation as solution to decision theory dilemma

Last year, I explained a problem I was having with decision theory. This year, I learned how to use simulation environments, which are a more appropriate tool for the problem I am studying. This post explains why. There is also some game theory stuff mixed in, and I'm trying to get it all sorted out, but still have a long way to go!

First, I will begin by copy/pasting my articulation of the decision theory problem, and edit for clarity:

In my understanding, a pure strategy means you always take the same, deterministic action in a given situation. A mixed strategy means you choose among your pure strategies according to a probability distribution, so the action taken can change with your circumstances.

If you can enumerate your set of strategies, then you can spread them over a probability distribution. That is, the probability that you will select one of the strategies from the set is 1. Each strategy will have its own probability of being selected, and the sum of the probabilities of each strategy is 1.

Let the set S = {s1, s2, ... sn}, where each strategy is indexed by an integer. The set is written as an upper-case S; each member of S is written as a lower-case s with a unique numerical subscript.

Let P(s_i) represent the probability that the agent will apply strategy s_i as they make their next decision. So,

$\sum_{i=1}^{n}P(s_i) = 1$

This is REALLY basic, but, I am new at this so I felt it was necessary to state all of this explicitly.
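For the record, here's that distribution made concrete: sampling one strategy s_i according to P(s_i). The strategy names and probabilities are arbitrary stand-ins.

```python
# Toy mixed strategy: a probability distribution over an enumerated
# strategy set, with the probabilities summing to 1.

import random

strategies = ["s1", "s2", "s3"]
P = [0.5, 0.3, 0.2]                   # P(s_i) for each strategy; must sum to 1
assert abs(sum(P) - 1.0) < 1e-9       # the constraint from the formula above

random.seed(0)  # fixed seed, just so the example is repeatable
choice = random.choices(strategies, weights=P, k=1)[0]
print(choice)
```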

As I identified earlier, my main problem with my attempt at applying Decision Theory is that its whole point is to work around the uncertainty about which s_i is going to happen next. Normally we assign each P(s_i) according to our best guess, updating as we go. This works really well for the right kinds of problems.

But my problem isn't exactly this shape. My uncertainty is not about P(s_i). The uncertainty in my problem comes from:

- anticipating user actions
- anticipating user goals
- guessing at the user's experience: what happened in their head, which ideas they processed, and defining utility according to what we can sniff out about what they experienced and whether it followed our mechanics for significant learning experiences

I talked about the third point in a more mathematical way in this other entry.

I would say that my goal is to "guess at the set of next actions the user will take". Let X be the set of all possible actions that the user can take within this learning environment. And X could be multi-dimensional, based on the system's sensors (user-model sniffers like keystroke listeners, browsing history trackers, whatever). I wanted to look at the problem where my uncertainty is about the state. We don't know what the state is. And we don't know what is going to happen next, because the user - or the other player in the game - is going to influence the environment, which in turn influences us.

SO -- The point of decision theory was to give us a tool so that we can handle uncertain state transitions. I wanted a tool that could acknowledge that some state transitions are KNOWN, i.e. we can select a strategy.

Conclusion: A simulation environment is an excellent tool. It lets you establish flow charts to represent the processes you do know, lets you design agents and environments with the other transition properties you also know, and then lets you let 'er rip and watch for any emergent properties in the system. A simulation environment lets you deal with uncertainty without having to assume that the uncertainty is about which state transition will be taken.
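Here is a minimal toy of what I mean (the learner/resource dynamics are invented purely for illustration): each agent follows a known rule, yet the aggregate behaviour only shows up when you run it.

```python
# Minimal agent-based sketch: known per-agent transition rules, unknown
# aggregate behaviour -- run it and watch what emerges. The learner and
# resource dynamics here are an invented toy, not a real model.

import random

def step(learners, resources):
    """Each learner follows a known rule: study the currently most popular
    resource with probability 0.7, otherwise explore a random one."""
    for lid in learners:
        if random.random() < 0.7:
            target = max(resources, key=resources.get)
        else:
            target = random.choice(list(resources))
        resources[target] += 1    # activity reshapes the environment...
        learners[lid] += 1        # ...which reshapes future choices

random.seed(42)
learners = {i: 0 for i in range(10)}
resources = {"wiki": 1, "video": 1, "quiz": 1}
for _ in range(50):
    step(learners, resources)
print(resources)  # one resource snowballs: an emergent rich-get-richer effect
```

No single rule says "concentrate on one resource", yet the feedback loop produces exactly that: the emergent property is in the system, not in any one transition.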

Whew!

Posted by Frozone on March 10, 2011 02:58 PM | Comments (0) | Categorized under Pedagogical modelling

### Why simulations are useful

Recently, I asked, "Why are simulations meaningful?"

I think I understand the answer now: it's because in the real world it is impossible to thoroughly explore all possibilities of a situation and observe outcomes. For scientific inquiry, you need a hypothesis. And you want to test the HYPOTHESIS, not pieces of it. And often the only way to thoroughly test the whole system is with a simulation.

My worry last time was that by creating the simulation, making assumptions, and putting in starting parameters, I was "making shit up". This becomes much less worrisome when you view the simulation as an articulation of your hypothesis. Of COURSE when you are trying to figure out the ways in which the world works, you will require creativity. You *have* to make shit up. And then you have to run the data through, see what it does, observe the trends, and see if they match up with real-world observations. Karl Popper said that true science requires that it be possible for your hypothesis to be proven wrong; that successful science makes a discovery when a hypothesis IS proven wrong; and that progress happens when you keep making mistakes, which gradually steer you toward the only direction that's left, i.e. the right one. Creativity is so important because that's what uncovers the possible new directions.

Anyway. Time to get back to reading!

Posted by Frozone on March 10, 2011 01:05 PM | Comments (0) | Categorized under Pedagogical modelling

## August 15, 2010

### An example of strategy

Earlier, I mentioned I'd spotted a paper about an actual research project that applied game theory. My motivation to study the paper was to discover:

1. how they implemented strategy

2. why equilibria were important

In this work, an agent's strategy was the formation of a subset. It wasn't a Markov Decision Process and it wasn't a graph traversal, like I'd been expecting. In this paper, the application was "community discovery", where many different agents belong to many different communities. In the first person, an agent could say, "my strategy is which communities I picked". The "strategy profile" of the game was a set of vectors, one vector per agent, each vector representing that agent's selection of communities.

#2 is related because an equilibrium, as I learned earlier, is a strategy profile (one that might have to meet certain conditions in order to be, say, a Nash equilibrium).

I was delighted to read about the utility function in this work because it showed how this too was related to strategy.
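My reading of that setup, as a toy sketch (the communities, the 0/1 membership vectors, and the utility function are stand-ins I made up, not the paper's actual definitions):

```python
# Toy strategy profile for community discovery: each agent's strategy is
# a 0/1 membership vector over communities; the profile is the set of all
# such vectors. The utility is an invented stand-in, not the paper's.

communities = ["algebra", "pedagogy", "game_theory"]

# One membership vector per agent: that agent's chosen strategy.
profile = {
    "alice": [1, 1, 0],
    "bob":   [0, 1, 1],
    "carol": [1, 0, 1],
}

def utility(agent, profile):
    """Toy utility: how many community memberships the agent shares with
    the other agents (a crude proxy for having picked 'good' communities)."""
    mine = profile[agent]
    return sum(
        sum(a & b for a, b in zip(mine, other))
        for name, other in profile.items() if name != agent
    )

for agent in profile:
    print(agent, utility(agent, profile))
```

This makes the link to #2 visible: an equilibrium is just a profile like the dict above in which no agent can raise their own utility by flipping entries in their vector unilaterally.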

Posted by Frozone on August 15, 2010 09:54 AM | Comments (0) | Categorized under Pedagogical modelling

## June 15, 2010

### Process ?= Algorithm

In my quest to learn about computational models for process, I mustn't forget the fundamental concept, "algorithm".

(It is presently 4:15 in the morning, so I hope it is forgivable that I speak the following in pseudo-mentalese. I hope I can elaborate later. :)

How is "algorithm" attached to task domain and influence diagrams? What exactly does the algorithm manipulate? It is for instantiating the teaching algorithm into the environment at hand.

(The word "teaching" is a little sullied. What I actually mean is, "whatever variety of technique employed to provide opportunity for the learner to output and test their own theories and become exposed to new information in a contextualized environment.")

Posted by Frozone on June 15, 2010 04:22 AM | Comments (0) | Categorized under Pedagogical modelling

## May 19, 2010

### Decision-making over time

Lately I've been looking at applying game theory to my problem. (Previous entry: Strategy and Process)

Recently, I had an invigorating conversation with a former colleague (yo Dylan!) about AI, planning, memes, feedback/reinforcement, swarm intelligence (or mind as a set of autonomous agents), influence diagrams and decision theory, and many other things. It was super awesome.

The point of this post is to record a take-away thought from this conversation that I think is important. We had sketched out a sample influence diagram (sort of like this example from a previous post, Decision theory for teaching strategies) and pointed out Event nodes, Decision nodes, and Utility nodes. At the time, I couldn't remember how "Event" nodes took an agent's observations into account. I think we had been talking about agent sensors and actuators. Later, I remembered that "observables" take the form of "givens" in a conditional probability, and that an event is a statement of conditional probability. I have talked about this before, too, in Learned some Stats lingo.

Anyway, the important point is that "calculating optimal policy is important OVER TIME." I figure that an influence diagram looks "frozen". But if the givens in your Events are changing all the time, and the Utility function itself is changing, and your decisions have to change... this is STRATEGY, and it takes into account the dimension of Time.

I look at a couple different optimal policy calculations in this previous entry, Conditional probabilities, and "the argmax thinggy". Notice how one of them takes time into account and the other does not. I would say that Time is a critical dimension in planning.

The calculation in that post lacks an overall picture of process.

(On Twitter, I summarized this entry as: Decision theory- optimal policy over time)

Posted by Frozone on May 19, 2010 12:05 PM | Comments (0) | Categorized under Pedagogical modelling

## May 17, 2010

### Rubrics

Generating rubrics http://rubistar.4teachers.org/ (Courtesy of ULC summer student Sarah via Liv)

This got me thinking about automation and gathering feedback for successful instructional plans.

Posted by Frozone on May 17, 2010 03:47 PM | Comments (0) | Categorized under Pedagogical modelling

## April 17, 2010

### Planning with ontology references

My research keeps going in loops. As the application process for grad school comes to a close and I re-direct my efforts from that surprisingly arduous process towards actual research, I "decided" that I want to write a planner that references ontologies. I have looked at this before, in a previous loop of research.

I find that the best way to break these loops and transform them into progress is to collaborate - either by comparing your ideas to those in others' work - i.e. reading papers - or by chit chatting with real life colleagues.

Why do I want to write a planner that employs ontological references? Because a planner instantiates "the organization of the delivery and directed coverage of content". I want to explore the interplay between ontology and methods and models of supported individual or group study. I feel that the best way to explore this interplay is to use mathematical models. These force you to be specific. Getting specific forces you to pinpoint subtlety, and deal with it. You have to put names to things, and you have to define criteria for decision-making.

This is all I can say right now. For the rest of my free time today I'm going to poke aimlessly through my library of papers. Thinking about how to direct my research in months to come, I'm toying with a set of "possible outcomes". I envision possible worlds resulting from varying answers to questions like: "What sort of tool will I build? What research methodology will I apply? What programming languages will I use?" I am fully aware that I will not build an all-encompassing perfect eLearning tool; I have been strategic about picking a sub component that (I think) by building it will unearth a lot of questions.

Also, I am desperately hungry for mentorship. This is one of the major reasons I have applied to grad school - for the opportunity to communicate with other researchers. I want to collaborate with more junior researchers so I can improve my own skills by sharing them with younger students. But, most of all (selfishly!) I want the opportunity to communicate with more senior researchers.

Posted by Frozone on April 17, 2010 01:06 PM | Comments (0) | Categorized under Pedagogical modelling

## April 13, 2010

### Two perspectives

I'm working out a system design and am trying to articulate some assumptions. I have heard two perspectives and I'm trying to figure out if they are really just two ways of seeing the same thing, or if they are separate approaches. I will write it here now and intend to come back later.

1. Take what you know (read something new), and work to organize it around a bigger picture.

2. Take what you know (read something new), and articulate it in the context of what you already know -- the big picture is *already* in your head.

Posted by Frozone on April 13, 2010 09:46 AM | Comments (0) | Categorized under Pedagogical modelling

## November 20, 2009

### Significant learning experiences

I took a vacation day today so I could bring my daughter in for immunizations this afternoon. This morning, she is at my mother's house. So I have a couple hours of glorious, precious freedom. :-)

Also, did you know that if you accidentally make your chai tea latte too watery, you can fix it up with a jolt of nutmeg and a spoonful of coffee whitener? Mine tastes just lovely right now.

I recently purchased this book: Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses by L. Dee Fink, 2003. The website, www.significantlearning.org is down at the moment of this writing, so I will link instead to this other page which at least shows a picture of the book's cover.

Reading the book reminded me of one of my older posts about my attempts to measure, computationally, a significant learning experience. To me, Fink's book is valuable because he fully describes the meaning of "a significant learning experience", even presenting a formal taxonomy. And, as we know, statisticians and computer scientists LOVE models. (And I'm sure there are other fields that love models, too! What I'm getting at is that if an ethereal idea is explored enough to create a model, scientists can apply our tools to it, because we work with data that is structured or guided in some way.)

I haven't moved deep enough into the book to know if the author considers Anderson's work (which I mentioned in this post).

Posted by Frozone on November 20, 2009 10:38 AM | Comments (0) | Categorized under Pedagogical modelling

## September 20, 2009

### Process modelling, evaluation

Okay, maybe the raw fear of waking the baby up jogged my memory. I remember what was in my notebook, so I don't have to go into the baby's room after all.

I was looking into "Process Modelling", though I'm not sure if that's the correct term. I'm finding lots of stuff in software engineering literature, where work is going into building software that supports business processes. I am particularly interested in seeing how researchers have built a computational model for a process, where the general order of things is known but the system needs to adapt for special cases. (And in fact, where most of the time, you are operating in a "special" case! The world has a way of never going according to plan.) I want to look at this as an AI planning problem, but not all research takes this perspective. For a more thorough articulation of this problem, check out this older post: Whimsy and Smarty on Process Modelling.

Often "good business process support" translates into "a good GUI". But this solution doesn't work in my domain, because my purpose in having a process model is to help the machine anticipate user actions as well as create long-range plans during a tutoring session. In other words, the point of having the process model is to inform the machine. In my view, a GUI sort of "hard codes" it; my solution cannot use a static model.

I also want to know how to evaluate such a model. Do you measure how close your model is to the real-world model? (No, I don't think so.) How do other researchers do it? I'm sure I've mentioned this before, but, I'm interested in looking at building a measure for putting the learner through "a meaningful experience". (Oh, yeah, I have talked about this before. First here, then here, and here, then here. I love how I can search through my own blog. But good grief, that idea has sure popped up in my head several times. I'm going nuts that I haven't done anything about it yet.) But there's still a lot of work to do.

Over and over again, I find I'm whapped across the back of the head and am seeing stars when I stumble upon research that I think would be relevant, but the mathematical implementation is just too advanced for me. I wish there were an easier way to poll the group of scientists who work in my field in an unobtrusive way. But right now all I have is direct email, and, none of my questions seem so important to warrant that.

I'm thinking too much, I think. I don't know. Too worn out, maybe, from doing this working mom thing. Anyway. We are going to dinner with some friends tonight, and I am looking forward to that. And I will think a little bit about the seemingly-related-but-too-advanced paper I read. Maybe I will email one of my research friends and discuss over coffee. Who knows, it won't hurt.

Posted by Frozone on September 20, 2009 03:27 PM | Comments (2) | Categorized under Pedagogical modelling

## May 30, 2009

### It's an optimization problem

This entry is just a thought, like a little fishy swimming in one ear and out the other; I just wanted to catch it between activities!

The title of my last post was, "Is it really a planning problem?" and today I just thought of a new angle: it's an optimization problem!

I'm fixated on the successor function, or the decision of selecting the next action. Unlike with the robot crossing the room that we saw last time, the selection of the next action isn't based on eliminating options (i.e. picking one action because the alternatives would lead to failure), or on what's POSSIBLE in order to transition to some desired world (i.e. pre-computing a sequence of actions to see if they will lead you to a desired state, as opposed to selecting a sequence of actions that would NOT lead you to your desired state)... rather, it's more of an optimization problem. Ahh, and I think my professor realized that, and told me, a couple of years ago, but I didn't really hear him until I re-figured it out for myself. About time it sunk in, eh?!

With a teaching process, the order in which you execute actions doesn't really matter (where actions are things like "show the student a diagram" or "ask the student a question" or "give the student some choices"). Sure, the order matters at some level, and the point IS to choose a sequence of actions that will lead the student to learn something, but choosing any one action at any one time is relatively low cost. The action selection is not where the big money is. (So, where is it?)

I've got to chop away at some of the ambiguity here and put some assumptions in place so I can get some traction. Maybe instead of being focused on the selection of a single action, I should choose some small set, and use the planning as a projection of what I want to help the student to create.

I've been chewing on this problem for years, and I'm still chewing.... but somehow I thought this brainwave was worth recording here. Hrrm.

And I haven't forgotten about the mome wraths! Or should I go dig up some examples of optimization problems to refresh my memory?

Anyway, I'll be back, doubtless. =D

Posted by Frozone on May 30, 2009 05:12 PM | Comments (0) | Categorized under Pedagogical modelling

## May 28, 2009

### Is it really a planning problem?

I think of my work as being in "instructional planning", which is a subfield of AIED, which is a subfield of AI. Or, "instructional planning" is an adapted type of "planning" in this sense of the term from wikipedia.

But family trees of research aside, I'm really questioning whether I'm looking at this problem in the right way. I'm trying to model a natural process, where the order of things is usually known. The point is to have the machine select CONTENT and transform that content from an abstract/Platonic/metaphysical/ontological sort of format and to give it CONTEXT by applying a particular teaching strategy, or appealing to ongoing themes in the student's course of study, taking advantage of transitivity laws by using familiar examples, and FILTERING out the currently unnecessary things from the reams of data at our fingertips.

I just keep bumping into a brick wall. I started writing a blog entry about designing a successor function using situation calculus. But I didn't get very far: I'm having trouble even concocting an example! I need an example where one thing changes in some way. Last time, I was a magical fairy who waved her wand, and a variety of things could happen as a result. Let's see if we can upgrade this scenario into a planning problem. Say, maybe I'm a magical fairy with a GOAL. To, umm... I guess this should be parallel to my research somehow -- I know, to find a path to guide the mome wraths (the "mome raths" of Lewis Carroll's "Jabberwocky") through the garden of knowledge.

In robotics, your goal could be to walk (or roll, or whatever) across a room full of obstacles. You rely on your sensors to tell you what's out there, and then you have to build a series of actions to execute in order to reach your goal. For example, maybe you would "walk" in the direction of the goal, but then come across an obstacle, so you execute a "climb" action, then continue the "walk" in the same direction until you reach the other side. Here the plan is: walk, climb, walk. In situation calculus, you would have a bunch of predicates in the form do(action, state). So, I guess you would start in the "state of being at the wrong side of the room", call this state $S_{w}$ and your goal would be to get in the "state of being at the right side of the room", call this state $S_{r}$. So your plan would be like:

do(walk, $S_{w}$) - This means, when you are in the "state of being at the wrong side of the room", you should walk.

Then, define the state of being at the front of the obstacle as $S_{atobstacle}$ and the state of having overcome the obstacle as $S_{overobstacle}$

Then your next action would be:

do(climb, $S_{atobstacle}$) - This means, when you are in the "state of being in front of the obstacle", you should climb.

Finally,

do(walk, $S_{overobstacle}$) - This means, when you are in the "state of having overcome the obstacle", you should walk.

and maybe

do(stop, $S_{r}$). - This means, when you are in the state of having reached the right side of the room, you should stop walking.
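Reading those do(action, state) entries as "in this state, take this action" rules, the whole plan can be sketched as a lookup table plus a tiny executor. Note that the transition table is a toy, and that it bakes in exactly the assumption I'm unhappy about: every state is known and named in advance.

```python
# The plan above as state -> action rules. State names follow the post;
# the transition table is a toy with all states enumerated up front.

policy = {
    "S_w": "walk",            # wrong side of the room
    "S_atobstacle": "climb",  # in front of the obstacle
    "S_overobstacle": "walk", # obstacle overcome
    "S_r": "stop",            # right side of the room
}

# What the world does in response to each action (fully known here).
transition = {
    ("S_w", "walk"): "S_atobstacle",
    ("S_atobstacle", "climb"): "S_overobstacle",
    ("S_overobstacle", "walk"): "S_r",
}

state, plan = "S_w", []
while policy[state] != "stop":
    action = policy[state]
    plan.append(action)
    state = transition[(state, action)]
print(plan)  # → ['walk', 'climb', 'walk']
```

Ten lines of code for walk-climb-walk, and both dictionaries had to be written out by hand; the version where states are discovered as you go is the part that's still missing.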

Can you believe how TEDIOUS that is? The other thing that gets me is that I had to explicitly define the states of being at the start, being at the finish, being in front of the obstacle, and having overcome the obstacle. In the problem I want to solve, there is no way you can know all of the states ahead of time. You discover them as you go. I have to figure out how to deal with that. Anyway. Back to my beloved mome wraths.

The mome wraths live in the garden of knowledge, and they want some cupcakes. However, the cupcakes are located on the other side of The Fundamental Theorem of Calculus.

Gahhh! The baby is awake. So I shall have to put my magic wand away for now, and the mome wraths will have to wait for their cupcakes. This next example will be different because instead of navigating through a room with an obstacle across the middle, I'll have to look at my mome wraths' previous knowledge, look at a teaching strategy, and look at how the "ordering of actions" might be different. Hrrrm. I have no idea what I'm doing. LOL

See ya next time...

 Posted by Frozone Permalink on May 28, 2009 01:19 PM | Comments (0) categorized under Pedagogical modelling Tweet

## April 24, 2009

### A good survey-ish paper for my area

Floating around on my little feverish cloud, I found this snippet of a blog post that I can't believe I hadn't published yet. So here she be.

***

I found a niche of really hot papers for my topic. Many are mentioned in a previous post ("My pedagogical issues"). Another such paper is:

This paper describes a history of attempts to do what I'm trying to do. (Excellent for chasing references!) It also tries to specify what good teaching *is*; you need to know that before you can model it! I'll have to read the paper again to see if it answers the question about *when* to apply the different teaching strategies. It might be under the heading "Judging task difficulty and degree of assistance".

I feel like I need to better define what I want to do vs. what has already been done.

***

(2 months later) Ah, ha! The answer to that last line is: "apply decision theory".

 Posted by Frozone Permalink on April 24, 2009 04:38 AM | Comments (0) categorized under Pedagogical modelling Tweet

### My friend, the utility function

I've been thinking more about planning lately and I had a thought the other day that tasted like a milestone in understanding to me. At the same time it felt obvious, but I wanted to push myself to articulate it here.

So my epiphany was about the utility function. I'll back up a bit. Decision theory, in a nutshell (to me), goes like this: you lay out your problem in an influence diagram, modelling the relevant factors such as the agent's allowable actions and the other variables that affect its decisions; then you build the utility function according to how you want the agent to act, and let the thing rip. More here.
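To make the "let it rip" step concrete, here's the smallest possible version of the idea in Python: weigh each action's possible outcomes by probability and utility, then pick the action with the highest expected utility. The action names, probabilities, and utility numbers here are all made up for illustration.

```python
# Made-up model: two tutoring actions, each with a distribution over outcomes.
OUTCOME_PROBS = {
    "give_hint":    {"engaged": 0.7, "bored": 0.3},
    "give_example": {"engaged": 0.5, "bored": 0.5},
}

# The utility function: how much we value each outcome.
UTILITY = {"engaged": 10, "bored": -2}

def expected_utility(action):
    """Probability-weighted utility over the action's outcomes."""
    return sum(p * UTILITY[outcome]
               for outcome, p in OUTCOME_PROBS[action].items())

# "Letting it rip" = choosing the action that maximizes expected utility.
best = max(OUTCOME_PROBS, key=expected_utility)
# expected_utility("give_hint")    = 0.7*10 + 0.3*(-2) = 6.4
# expected_utility("give_example") = 0.5*10 + 0.5*(-2) = 4.0
# best -> "give_hint"
```

The interesting part is that whoever writes the UTILITY table decides what the agent cares about, which is exactly where my epiphany comes in.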

Out there in the research world, I've often seen the utility function defined as a reflection of "how much the student learned". It's impossible to look inside the student's head and read this in as a variable, so researchers have commonly used quizzes and the like as an indirect measure.

So the epiphany was this: Don't make the utility function based on the amount the student has learned. You can't measure this anyway. Instead, design your utility function to measure whether or not you have put the student through a meaningful experience, like a story with a beginning and an end, with a goal in mind. You can braid multiple story threads together, and any one activity can contribute towards multiple experiences. The main thing is to pull together a set of relevant activities that have long-term meaning in mind.
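Here's a hedged sketch of what such a utility function might look like: instead of scoring "amount learned", score whether a sequence of activities forms coherent experience threads. Everything here (the idea of tagging activities with thread labels, and the particular scoring rule) is my own illustrative invention, not an established measure.

```python
# Each activity is tagged with the "story threads" (experiences) it serves.
# The utility rewards consecutive activities that share at least one thread,
# so long coherent stretches score higher than "shiny object" hopping.

def utility(sequence):
    """Score a sequence of activities by thread coherence between neighbours."""
    score = 0
    for prev, curr in zip(sequence, sequence[1:]):
        shared = prev["threads"] & curr["threads"]
        score += len(shared) if shared else -1  # penalize abrupt jumps
    return score

coherent = [
    {"name": "limits intro", "threads": {"calculus"}},
    {"name": "derivatives",  "threads": {"calculus"}},
    {"name": "FTC",          "threads": {"calculus"}},
]
scattered = [
    {"name": "limits intro",   "threads": {"calculus"}},
    {"name": "haiku workshop", "threads": {"poetry"}},
    {"name": "knot tying",     "threads": {"scouting"}},
]

# utility(coherent) > utility(scattered)
```

Because an activity can carry several thread tags, one activity can contribute to multiple braided experiences at once, which matches the "braid multiple story threads" part of the idea.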

This take on the utility function is not mine. I'm sure I heard it before from one of my mentors, probably Gord or Jim, or possibly Mike or Gina. Anyway, I'm happy the thought stuck in my head, wherever it came from, so that it could come back now that I'm tackling the theory in more depth.

I'm a little tipsy on the idea of the utility function as something the programmer puts there to induce the behaviour they want, vs. "discovering" good behaviour and then learning to reinforce it.

la la la...

 Posted by Frozone Permalink on April 24, 2009 04:25 AM | Comments (0) categorized under Pedagogical modelling Tweet

## Index to Steph's Notes

Feb. 24th 2007 - Weee! This new part of my website is not an entry, but rather a permanent fixture whose purpose is to "Look Down on All Those Notes With Some Grand Vision of Organization". Wish me luck. LOL
1. Representing meta-data (fuel) & the different kinds of "hooks" that intelligent systems can use (how fuel is injected into the motor of the engine)
    1. Motivation: Semantic net / Rationalizable to a machine
    2. Technology & Philosophy: RDF, modus ponens
        1. Predicates, logic & situation calculus
    3. What kinds of data? - What kinds of meta-data would an AIEd system possibly need, and how is it represented?
        2. "is-prerequisite-to"-type knowledge
        3. interactions with learning objects & other learners - (location, composition is-a/part-of, sequencing by restricting navigation, personalization, ontologies for LO context)
        4. lesson plans, curriculum plans, practicing sessions (What is stored, what is generated on the fly? What is remembered?)
    4. How to organize it - When is it stored in a database? Meta-data? Agent memory banks? Protocols? Repositories? XML files? Home-servers? WSDL services? Frameworks? Portable banks? P2P access?
        1. Database of object-agent interactions
        2. Concept of "Home" on a P2P network -- maybe the bulk of a learning object's usage data is on its home server and can be queried using WSDL or something? Similar homes for each student's usage history, etc. Baggage problem.
            1. referring to a concept/relationship - ex. AgentOwl?
    6. Generation of this data
        1. Rationalization: for use by other AIEd systems
        2. What is generated - discuss items under part I.C.
        3. When it's generated - describe procedural model, which parts of the engine generate what (is-a/part-of data, XML feeds, web services, meta-data about groups and collaboration, protocols, examples: Friend Of A Friend (FOAF) project)
        4. Technical notes on HOW it's generated: JENA, issues of implementation demo, my Hermione & Ron agent examples, lol
        5. Usage of this generated data - see part IV.A.
2. Given the engine, who uses it?
    1. Students / Learners / "Me"
        1. instructional planning, student model, pre-requisites, tutoring, coaching, collaboration, constructivism
    2. Teachers / Educators / "Me"
        1. putting together lessons
        2. be able to browse through task domain knowledge in an objective / encyclopaedia format, then be able to pick-and-choose what you need for your students
        3. compose examples, design explanations, pull together diagrams, learning objects, etc. Haystack Relo?
    3. Administration / Government / Structure / Crowd Control
        1. as restrictions/obstacles/sand pit to the robot in the agent environment
        2. can't just have a swarm of students and teachers out there -- need the structure of courses, curriculum, objectives, requirements (at least, we do in this day and age!) - report cards, evaluation, feedback
        3. government, marks, certificates, requirements, funding, curriculum, attendance, delinquent, non-attending, motivation
        4. school's image, goals, strengths, payroll, HR, security, accounts, permissions, privacy
        5. registration, failed courses
3. User Environment -- How does this engine work? What does the user see on the screen?
    1. Introduction - Given a background in educational psychology, how does the system present itself -- what does the user see, and where does this data come from? (Links to thoughts from part I.)
    2. Task Domain Browsing - Suppose you're just idly browsing through the "raw" content. How would it look when it's not wrapped in a learning context or lesson or tutorial or anything? A cross between browsing a raw task domain ontology and browsing a learning object repository.
        1. Cleaning up the data -- Visualizing the data for humans to pick through the task domain and work on it. Suppose the "Subject Expert" discovers an advancement in science and needs to update the "world's" domain knowledge. (I borrowed the "Subject Expert" terminology from Ontologies to Support Learning Design Context - thanks Chris.) How would they make corrections to ontologies and learning objects, or at least point the users of "old" objects towards adopting the newer ones?
        2. "Modes" - Learning & Lessons / Checklist - Homework, Assignments, Courses being taken / Collaborative mode / Teaching mode / Calendar-email-administrative mode -- See also the different kinds of scenarios in the ActiveMath system
4. Evolution of this engine
    1. target some key implementation hooks discussed in part I - design an experiment/demo
        1. scrape a page - (note: scraping can only give objective data, not in-context data)
        2. LO repository - related to browsing the task domain?
        3. a learner's "To Do" list - where does it come from? Assignments, courses.
        4. sample group scenario
        5. sample teacher lesson planning
        6. sample data "left behind"
        7. sample use of that data
    2. Data mining (for what? lol)
        1. discovery / generation of ontologies - when do you need to hunt for them, and when do you have to have a solidly-known & predictable ontology?
    3. I/O - where it happens, which languages, protocols, which agents perform I/O and when, percepts, actuators
        1. Role Assignments
        2. My Environment Adapts to Me
            1. Displaying feedback from the server on JSP pages (software engineering considerations)
            2. Sketching out a design (content planning vs. delivery planning)
        3. agent negotiations / social structures / ummm... Web 2.0?
    4. garbage collection of meta-data
        1. Artificial Intelligence & Evolution
        2. open learning environments
5. Agents, pets, grouping, Community modelling
    1. Protocols - finding groups, cyber dollars, state diagrams (?)
    2. "Community Studies" - graphs & communication hubs, types of communities (free-for-all, hierarchy of authority, etc.)
    3. implications of joining a community - what do you share, which parts of your student model are relevant
    4. Walls & sand traps -- deliberate restrictions as problem-solving for learning
    5. Communication channels - individual-to-individual, individual-to-community, chat channels, agent-only "administrative" communications, e.g. requests for related learning objects in a particular community, etc.
6. Educational/Pedagogical focus (this part probably shouldn't be its own section but rather incorporated into the whole picture; it's separate for now because I'm still only just starting to learn about it.)
    1. Semantics - what there is to talk about in Education
        1. ex. Merrill's First Principles of Instruction, linking educational terms to AI terms
    2. Pedagogical skills for tutors -- supporting human *and* artificial tutors
    3. Student modelling - what the machine needs to know about the student, pedagogically speaking, about learning history/preferences
    4. Roles - Simulated students, Coaches, Tutors, Teachers