## May 30, 2009

### It's an optimization problem

This entry is just a thought, like a little fishy swimming in one ear and out the other; I just wanted to catch it between activities!

The title of my last post was, "Is it really a planning problem?" and today I just thought of a new angle: it's an optimization problem!

I'm fixated on the successor function, or the decision of selecting the next action. Unlike with the robot crossing the room we saw last time, the selection of the next action isn't based on eliminating options (i.e. picking one action because the alternatives would lead to failure), or on what's POSSIBLE in order to transition to some desired world (i.e. pre-computing a sequence of actions to see whether they will lead you to a desired state, as opposed to ruling out the sequences that would NOT).... rather, it's more of an optimization problem. Ahh, and I think my professor realized that, and told me so, a couple of years ago, but I didn't really hear him until I re-figured it out for myself. About time it sunk in, eh?!

With a teaching process, the order in which you execute actions really doesn't matter that much (where actions are things like, "show the student a diagram" or "ask the student a question" or "give the student some choices"). Sure, the order matters at some level, and the point IS to choose a sequence of actions that will lead the student to learn something, but choosing any one action at any one time is relatively low cost. The action selection is not where the big money is. (So, where is it?)

I've got to chop away at some of the ambiguity here and put some assumptions in place so I can get some traction. Maybe instead of being focused on the selection of a single action, I should choose some small set, and use the planning as a projection of what I want to help the student to create.

I've been chewing on this problem for years, and I'm still chewing.... but somehow I thought this brainwave was worth recording here. Hrrm.

And I haven't forgotten about the mome wraths! Or should I go dig up some examples of optimization problems to refresh my memory?

Anyway, I'll be back, doubtless. =D

Posted by Frozone on May 30, 2009 05:12 PM | Comments (0) categorized under Pedagogical modelling

## May 29, 2009

### Things I would like to Eat right now

alfredo noodles

rice pudding

chana masala

a baked potato loaded with all the toppings

ginger glazed carrots

salt & vinegar potato chips

polenta

Mmmmmmmmmm!

(No, I'm not pregnant again. =D )

Posted by Frozone on May 29, 2009 05:56 PM | Comments (2) categorized under Being at Home

## May 28, 2009

### Is it really a planning problem?

I think of my work as being in "instructional planning", which is a subfield of AIED, which is a subfield of AI. Or, "instructional planning" is an adapted type of "planning" in this sense of the term from wikipedia.

But family trees of research aside, I'm really questioning whether I'm looking at this problem in the right way. I'm trying to model a natural process, where the order of things is usually known. The point is to have the machine select CONTENT, take that content from an abstract/Platonic/metaphysical/ontological sort of format, and give it CONTEXT by applying a particular teaching strategy, or appealing to ongoing themes in the student's course of study, taking advantage of transitivity laws by using familiar examples, and FILTERING out the currently unnecessary things from the reams of data at our fingertips.

I just keep bumping into a brick wall. I started writing a blog entry about designing a successor function using situation calculus. But I didn't get very far: I'm having trouble even concocting an example! I need an example where one thing changes in some way. Last time, I was a magical fairy who waved her wand and a variety of things could happen as a result. Let's see if we can upgrade this scenario into a planning problem. Say, maybe I'm a magical fairy with a GOAL. To, umm... I guess this should be parallel to my research somehow -- I know, to find a path to guide the mome wraths (borrowed from Lewis Carroll's Alice in Wonderland) through the garden of knowledge.

In robotics, your goal could be to walk (or roll, or whatever) across a room full of obstacles. You rely on your sensors to tell you what's out there, and then you have to build a series of actions to execute in order to reach your goal. For example, maybe you would "walk" in the direction of the goal, but then come across an obstacle, so you execute a "climb" action, then continue the "walk" in the same direction until you reach the other side. Here the plan is: walk, climb, walk. In situation calculus, you would have a bunch of expressions of the form do(action, state) -- strictly speaking, do(a, s) is a function denoting the situation that results from performing action a in situation s, but for this sketch I'll read it loosely as "in state s, do action a". So, I guess you would start in the "state of being at the wrong side of the room", call this state $S_{w}$, and your goal would be to get into the "state of being at the right side of the room", call this state $S_{r}$. So your plan would be like:

do(walk, $S_{w}$) - This means, when you are in the "state of being at the wrong side of the room", you should walk.

Then, define the state of being at the front of the obstacle as $S_{atobstacle}$ and the state of having overcome the obstacle as $S_{overobstacle}$

Then your next action would be:

do(climb, $S_{atobstacle}$) - This means, when you are in the "state of being in front of the obstacle", you should climb.

Finally,

do(walk, $S_{overobstacle}$) - This means, when you are in the "state of having overcome the obstacle", you should walk.

and maybe

do(stop, $S_{r}$). - This means, when you are in the state of having reached the right side of the room, you should stop walking.

Can you believe how TEDIOUS that is? The other thing that gets me is that I had to explicitly define the states of being at the start, being at the finish, being in front of the obstacle, and having overcome the obstacle. In the problem I want to solve, there is no way you can know all of the states ahead of time. You discover them as you go. I have to figure out how to deal with that. Anyway. Back to my beloved mome wraths.
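For what it's worth, the same toy walk/climb/walk problem can be written as a tiny forward lookup in code. This is just a sketch of the example above; the state names (`wrong_side`, `at_obstacle`, etc.) and the transition table are mine, not any standard formalism or library:

```python
# Toy plan construction for the room-crossing example above.
# All state and action names are invented for illustration.

# Which action applies in which state, and the state it leads to.
transitions = {
    "wrong_side":    ("walk",  "at_obstacle"),
    "at_obstacle":   ("climb", "over_obstacle"),
    "over_obstacle": ("walk",  "right_side"),
    "right_side":    ("stop",  None),
}

def make_plan(start):
    """Follow the transition table until the terminal action fires."""
    plan, state = [], start
    while state is not None:
        action, next_state = transitions[state]
        plan.append(action)
        state = next_state
    return plan

print(make_plan("wrong_side"))  # ['walk', 'climb', 'walk', 'stop']
```

Of course, this hard-codes exactly the tedium complained about above: every state had to be enumerated by hand before the "planner" could do anything.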

The mome wraths live in the garden of knowledge, and they want some cupcakes. However, the cupcakes are located on the other side of The Fundamental Theorem of Calculus.

Gahhh! The baby is awake. So I shall have to put my magic wand away for now, and the mome wraths will have to wait for their cupcakes. This next example will be different because instead of navigating through a room with an obstacle across the middle, I'll have to look at my mome wraths' previous knowledge, look at a teaching strategy, and look at how the "ordering of actions" might be different. Hrrrm. I have no idea what I'm doing. LOL

See ya next time..............

Posted by Frozone on May 28, 2009 01:19 PM | Comments (0) categorized under Pedagogical modelling

## May 22, 2009

### The plan equals the Markov chain

This seems obvious, but didn't really click for me until recently. We talked earlier about how a policy π is basically a set of decisions, one for each decision node in your influence diagram. In AI planning, this sequence of actions-to-execute is linear (although it can be revised). This step-by-step plan is your Markov chain.

In other words, a Markov Decision process is related to AI planning in that the solution to the MDP is a Markov chain, and that chain equals your plan for execution.
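Here's a minimal sketch of that idea in code, with toy numbers and state/action names that are entirely mine (not from any real model): an MDP has one transition distribution per action, and fixing a policy (one action per state) leaves a single transition function, i.e. a plain Markov chain.

```python
# Toy MDP: 2 states, 2 actions; P[a][s] is the distribution over next states
# when you take action a in state s. All numbers invented for illustration.
P = {
    "teach":  {"confused": {"confused": 0.4, "confident": 0.6},
               "confident": {"confused": 0.1, "confident": 0.9}},
    "review": {"confused": {"confused": 0.7, "confident": 0.3},
               "confident": {"confused": 0.2, "confident": 0.8}},
}

# A policy picks one action per state...
policy = {"confused": "teach", "confident": "review"}

# ...which leaves a single transition function: an ordinary Markov chain.
chain = {s: P[policy[s]][s] for s in policy}

print(chain["confused"])   # {'confused': 0.4, 'confident': 0.6}
print(chain["confident"])  # {'confused': 0.2, 'confident': 0.8}
```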

I left off last time with an analysis of the state transition function, which I am still exploring. I want to continue my analysis, keeping a thumb on how it would be different if you knew you wanted to follow a certain type of pattern. Maybe there's room to expand the model, and usually "more knowledge" means a tradeoff where you can take some shortcuts elsewhere for computational (space/time) gains.

But I don't know if I will go in that much of a theoretical direction next, or if I should spend more time on measuring the effectiveness of the pedagogical technique. Do you compare it with a human tutor doing something similar? Supposing I fully develop this new extension to planning, how can I prove it works?

I have a lot of work to do to figure out how to measure such things and how to set up proper experiments or simulations. But that is further down the road, and I think I would need a lot of help from an advisor with that part, or at least from another researcher with more experience. =)

What I can do, though, is continue with my exercise in STRIPS and/or situation calculus, while exploring the boundaries of states, actions, observables, predicates, reward functions, utility functions, probabilities and so on.

Tweedledoo!

P.S. Note to self: If you know the sequence of actions (i.e. the chosen micro-teach technique), what are you planning for? Where is the uncertainty? What are you trying to DO? heh.

Posted by Frozone on May 22, 2009 12:28 PM | Comments (0) categorized under Pedagogical modelling

### The Little Things

My doorbell rang at about 9:30 this morning. I had just finished feeding baby her breakfast and was drinking my coffee. I was still in my pyjamas but thought, "oh well" (after having gone through the indignities of childbirth, having strangers see me in my jammies doesn't seem to have the effect it used to).

So anyway, baby on my hip, I answered the door and it was a delivery person with some washing machine parts that we were expecting. I accepted the cardboard box from him and rather clumsily set it on the floor while he held out the electronic signature thinggy that you have to sign to confirm that you received the delivery.

Just the way he held the electronic thinggy and steadied it with both of his hands so that I could sign it while holding the baby on my hip -- it was just a moment where I noticed his consideration, this extra effort to make things a little easier for me.... it meant a lot.

And now he is gone to make the next delivery, and I will probably never see this person again in my life, but I hope that someone does something nice to brighten HIS day in return.

Posted by Frozone on May 22, 2009 09:50 AM | Comments (0) categorized under Being at Home

## May 18, 2009

### It's about influencing the process

Slowly, here, I'm wiggling through notes and examples about the specifics of AI planning using Markov Decision Processes. I have an entry cooking about using STRIPS or situation calculus to examine the particulars of the state transition function so that I can later highlight the differences I need for my model... but that is a little out of reach yet.

First, I decided to review my notes about MDPs. At this point in time, whenever I thought, "MDP", a similar thought triggered in my mind: "the milk in the fridge example". It took me a few days to find the time to dig through my files ("did I have that on paper? or was it a PDF?" etc.) to find it. But this morning, I did. And I also realized that the point of "the milk in the fridge example" was to illustrate Hidden Markov Models, which is slightly different.

So, that's what I wanted to note today: What I understand about Hidden Markov Models, and why this isn't exactly the right model for me. And maybe to make some further progress on the Markov Decision Process front.

A Hidden Markov Model (HMM) is so named because you use it when you want to make predictions or ask questions about a variable whose value you cannot observe directly. With the milk in the fridge, we know that if left too long it goes bad, but you can't exactly tell whether the milk is bad until you open it and give it a whiff, or look at the best-before date. Also pretend that your roommate can randomly go buy milk, replacing the bad stuff with fresh stuff. Because of these two things -- going bad over time and roommate replacing it -- you never know exactly when you walk up to the fridge whether you're going to be able to drink the milk or not.

The Markov part comes in when you add time. Say, every day (or, at each "step" in the "process"), the milk gets a little "badder" and the badness resets when your roommate replaces it.

The Hidden part comes in when you can't see the state of the milk directly, i.e. you can't tell if it's "good" or "bad", so instead we rely on other percepts that help us infer the value indirectly. For example, maybe we can measure the odor.
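For the curious, the milk story can be sketched as a tiny HMM with a forward-algorithm belief update. Every number here is invented purely for illustration: hidden state good/bad, observed odor fine/funky.

```python
# Tiny HMM for the milk example. All probabilities are made up.
trans = {"good": {"good": 0.8, "bad": 0.2},   # milk goes bad over time,
         "bad":  {"good": 0.3, "bad": 0.7}}   # roommate sometimes replaces it
emit  = {"good": {"fine": 0.9, "funky": 0.1}, # odor is an indirect percept
         "bad":  {"fine": 0.2, "funky": 0.8}}

def forward_step(belief, odor):
    """One forward-algorithm step: predict, weight by the observation, renormalize."""
    predicted = {s: sum(belief[p] * trans[p][s] for p in belief) for s in trans}
    weighted = {s: predicted[s] * emit[s][odor] for s in predicted}
    total = sum(weighted.values())
    return {s: w / total for s, w in weighted.items()}

belief = {"good": 0.5, "bad": 0.5}
for odor in ["fine", "funky", "funky"]:   # smell the milk three days in a row
    belief = forward_step(belief, odor)
print(round(belief["bad"], 3))
```

Two funky whiffs in a row push the belief that the milk is bad above 0.9, even though "bad" was never observed directly.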

The HMM is useful if you want to talk about a value-that-you-cannot-observe-directly that changes over time. I don't know how, or if, this is applicable to planning. It might help you predict some unobservable obstacles, maybe.... but I don't think the HMM is directly useful in the computation of "the next step", or what I'm calling the transition function, or, selecting the next action based on the current state and previous actions.

So that's probably all I'm going to say about HMMs for a while. I think this entry helped me distinguish HMMs from MDPs in my head a little, so now I can go back to whatever path I was following before. =D

Oh, and about the title of this entry -- I just wanted to emphasize that my problem is to find a way to select the next action while following a known process. I am interested in the order of the actions, and the patterns that this ordering creates as it interacts with the environment.

(mmm, I enjoyed articulating my research goal again! It still doesn't feel precise enough to the real problem, but I'm one iteration closer...)

Posted by Frozone on May 18, 2009 11:04 AM | Comments (0) categorized under Pedagogical modelling

## May 15, 2009

### Conference proceedings

So I thought I was officially a hotshot researcher because I was staying up-to-date with all the latest publications in my area. I figured out how to subscribe to the RSS feeds to a bazillion different journals. Whenever my baby is nursing, I sit down with the ol' iPod touch and skim through paper titles, occasionally reading through the abstracts, and for the really relevant ones I star them and download the full paper into my Papers app. Doubly cool of me, I figured out how to dig deep down in my university library's many links in order to access the full PDF of just about any paper I want. This is working REALLY well. Even when I go back to work, and if baby is still interested, I can see myself keeping up this habit of skimming through research papers while nursing the baby every evening. Staying abreast of the latest research, with a baby abreast, if you will. (ho ho ho, I am so funny!)

So yes. As I was saying. I found this awesome research rhythm, using all the latest technology to stay ahead of the game. I was, how do you say, "the shiz"!

But then I read this article in the latest Communications of the ACM, "Conferences vs. Journals in Computing Research" by Moshe Y. Vardi. Basically, the article says that in Computer Science, journal publications are SECONDARY, and that the primary means for publishing research results is in CONFERENCE PROCEEDINGS.

So, I was like WHAT?? Am I missing out on some totally huge world of computing research right now because I only have journal pubs in my RSS reader?

And that is where I'm at right now. Free time over the next few days shall be dedicated to figuring out how to get me some conference proceedings.

UPDATE (July 2009) Okay, I learned about the existence of IJCAI. This makes TWO conferences I know about, so I will start this bulleted list and will continue to add conferences here as I learn about them.

Posted by Frozone on May 15, 2009 03:39 PM | Comments (4) categorized under academia & thesis

## May 13, 2009

### Going to sleep: oh, the pain!

Why do children have such a hard time going to sleep? Baby is sleeping fine in my arms, then I put her down, oh so gently... and... gah, she explodes into wails and shrieks. Why does this happen? Sleep is GOOD for babies, so, evolutionarily-speaking, wouldn't it be beneficial for them to be able to just go back to sleep nicely? Or maybe this is supposed to teach some kind of lesson to the parents?

Anyway. My parenting book says that for babies older than 6 months, if this happens to you -- i.e. they have fallen asleep in your arms but then when you put them down they freak out -- you should let them cry for 5 minutes and give them a chance to try to fall asleep on their own.

So that's what I did. And that is why I am ranting on my blog. And it has now been 6 minutes, and baby is still wailing. Sigh. I have a huge mess in the kitchen I want to clean up. I desperately need to sweep the floor. Not to mention having time for myself, but you can throw that idea out the window right now. So, off I go.........

I don't mean to sound bitter. It is only 3 days after Mother's Day. I love being a mom! Really! It is wonderful!

Ok bye, thanks all for listening to this mommy rant. =P =D

Posted by Frozone on May 13, 2009 01:40 PM | Comments (0) categorized under Parenting / Motherhood

## May 12, 2009

### Karl Popper & Bayesian inference

I was just cleaning out my starred items in my Google Reader and I stumbled upon this item from Andrew Gelman's blog about "the most important philosophical point of confusion about Bayesian inference". An excerpt:

From a philosophical point of view, I think the most important point of confusion about Bayesian inference is the idea that it's about computing the probability that a model is true. In all the areas I've ever worked on, the model is never true. But what you can do is find out that certain important aspects of the data are highly unlikely to be captured by the fitted model, which can facilitate a "model shift" moment. This sort of falsification is why I believe Popper's philosophy of science to be a good fit to Bayesian data analysis.

I was delighted, and it made me feel validated somehow, because this reminded me of a question I asked in my AI class once. I couldn't see how a Bayesian network could be used as a problem-solver without relying on "using evidence as proof", which would be contradictory to Popper's philosophy. I couldn't see how it was possible to put forth a query that could be refuted by evidence.

I remember thinking that this was a really huge question, and it actually upset me quite a bit that Bayesian inference may not hold up to Popper's definition of true scientific pursuit. I remember that my instructor offered up the idea of the "clarity test", where basically your theory is falsifiable when your variables can take on a value OTHER than the one that the theory proposed. I am probably totally bungling up the logic here, but... I guess the clarity test supposes that if you structure your question such that it could be answered by a magical all-knowing being, who could verify your theory as true or false, then it is falsifiable. In your Bayesian network you can enter the evidence you DO have, and make appropriate inferences based on the data.... So even though your inference is coming from evidence, it is still OK by Popper because it is possible that the evidence could have shown your theory to be false.

Hrrm, I'm not totally satisfied that I remembered that or explained it properly. But anyway, I just wanted to make a special little posting for my starred item and document the story behind it. :-)

As always, I welcome discussion in the comments if anyone would enjoy playing with these ideas some more to clarify!

Posted by Frozone on May 12, 2009 01:55 PM | Comments (0) categorized under Computer Science & AI

## May 11, 2009

### Probability as a projection on the selection of teaching strategy

The more I think about it, the more it makes sense to me that the probability in the argmax thinggy should be the probability that each teaching strategy will be the most effective at this point, given the student model and resources at hand. It fits that the distribution sums to 1, because surely you will choose ONE teaching strategy. Over time, each strategy will increase/decrease in probability of its effectiveness. For example, the longer you follow the same story thread, the more likely it is that the student will get bored, so the probability for this strand will gradually shrink with each time-step, showing our belief that the student is more and more likely to get bored and would like to side-track for a bit with a distraction. I don't know yet how to juggle the student's multi-tasking ability; I'm sure I can fit that in somehow as a balance of weights over the strands.
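One way to picture that "boredom decay" over strands is the following sketch, with an entirely made-up decay rule, strand names, and numbers (none of this comes from a real model): each time-step, the strand we keep following loses probability mass, and renormalizing keeps the belief a proper distribution that sums to 1.

```python
# Hypothetical "boredom decay" over teaching strands. Each step, the strand
# we keep following loses mass; renormalizing keeps the belief summing to 1.
def decay(dist, active, factor=0.8):
    out = {k: (p * factor if k == active else p) for k, p in dist.items()}
    total = sum(out.values())
    return {k: p / total for k, p in out.items()}

belief = {"story_thread": 0.6, "side_quest": 0.3, "quiz": 0.1}
for _ in range(3):                          # follow the same thread for 3 steps
    belief = decay(belief, "story_thread")

print({k: round(p, 3) for k, p in belief.items()})
print(round(sum(belief.values()), 6))       # still a distribution: 1.0
```

After three steps the story thread has slipped from 0.6 to roughly 0.43, so a side quest starts looking almost as attractive, which is exactly the "time to side-track" intuition.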

The point of this planning system is to help the student navigate through hoards of information and activities in a pedagogically meaningful way, taking advantage of the usability laws of transitivity, and exploiting our knowledge of how learning works - building up from what the student is already familiar with, etc. So my point is clear: selection of the best opportunities, based on our history. Flexible, personalized exploration/learning.

But now I have no idea what to do with the utility function... oh, right! It's supposed to reflect whether the path represents a "meaningful experience", which I understand in my head but have yet to define precisely. Conveniently (but I'm not sure how significantly!) this also plays well with the "feedback" thing you hear about in popular science, such as Jeff Hawkins's Hierarchical Temporal Memory where he emphasizes the importance of feedback in machine learning, and, in Douglas Hofstadter's I am a Strange Loop, where the emphasis is on self-referencing systems.

I am aware that I might be abandoning the greatest strength of my tools, here. I may have to search for a different model other than my new love, these Markov Decision Processes. Namely, the strength of the MDP is that it can model uncertainty when you don't know the result of your state transition, but you still want to be able to act confidently. (At least, I think that's the point and the power of the MDP. Gosh, I still feel like I don't know anything!) By using the probability distributions to measure the belief that various teaching strategies/strands will be effective at a given point in time, I am not modelling any uncertainty about state transitions. i.e. This probability distribution is NOT over a set of possible next-states. Wait a minute. It is, kinda. But the point is not to predict which one is coming next, the point is to pick the one I believe will be the most effective. Gosh, this is subtle. Will it mean the difference between adopting the MDP vs. abandoning it for a different model??? Or maybe I can just introduce new notation to distinguish this new subtlety.

I haven't yet figured out what my states are, so, I can't say for certain that the MDP is or isn't the right tool for me, i.e. whether I'm in the situation of knowing which state I'm in after an action. But at least I'm aware of this question so I can recognize it as I progress. And maybe in the future, I'll stumble upon a better tool or model that fits my problem better.

Posted by Frozone on May 11, 2009 09:34 AM | Comments (0) categorized under Computer Science & AI

## May 10, 2009

### State transitions: the impact of the probability distribution

Mental note, I have several posts now on state transitions. I should organize them all and create a new subsection on my research page.

Warning: This post doesn't make a lot of sense. I have a lot of "cleaning up" to do... really it is just a dump of convoluted ideas, but I hope that by "dumping" I give myself something to build up from for a future post. Sorry dear readers for the mess!

So, you're executing a plan. Your next action is chosen according to the option with the highest utility. For reference (and because I still get a little rush when I can post some sexy greek symbols on my blog, LOL, I hope that wears off soon and I can grow up already) here is the argmax thinggy again.

$\delta^*(o) = \arg\max_{D} \sum_{S} p(S \mid O=o, D) \, U(S, O=o, D)$

As we just said, the utility U(S,O,D) has a huge impact on our choice. But recall the other component of the argmax thing: the probability, P(S|O,D). What are these probabilities? I think they represent the probability of your action causing the referenced state to be the next actual state. In other words, the probabilities usually represent the fact that you don't know exactly how your chosen action will affect the next state in the state transition.
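Here's the argmax unrolled as code, with toy decisions, states, p's and U's that are all invented for illustration: for each decision D, sum p(S|O=o, D) * U(S, O=o, D) over states S, then keep the best D.

```python
# delta*(o) = argmax_D sum_S p(S | O=o, D) * U(S, O=o, D)
# Toy numbers for one fixed observation o; all names invented.
p = {  # p[D][S] = p(S | O=o, D)
    "show_diagram": {"learned": 0.7, "confused": 0.3},
    "ask_question": {"learned": 0.5, "confused": 0.5},
}
U = {  # U[D][S] = utility of landing in state S after decision D
    "show_diagram": {"learned": 10.0, "confused": -2.0},
    "ask_question": {"learned": 14.0, "confused": -1.0},
}

def expected_utility(D):
    return sum(p[D][S] * U[D][S] for S in p[D])

best = max(p, key=expected_utility)
print(best, expected_utility(best))  # ask_question 6.5
```

Note how the probabilities and the utilities pull in different directions: "show_diagram" is the safer bet (0.7 to reach "learned"), but "ask_question" wins anyway (6.5 vs. 6.4) because its payoff is higher.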

But as I was thinking about this, I felt like I was hooked on something -- I'm not at all satisfied with this structure, so I want to plough through the details a bit; I want to look at it in a different way.

Normally, you would have predefined probabilities of actions resulting in different outcomes, right? Like, part of modelling your problem before you let the robot rip is to pre-define the graph of state transitions. You would know, offline, which states are likely to follow which other states, with degrees of probability for each alternative. I'm still a little uncertain about how "actions" relate to the whole transition function thing. Are actions always specified as part of the state transition function?

Looking back... in STRIPS, it looks like the state transition function IS an action definition. i.e. The state transition function is a triple (action,preconditions,effect) where
- the action is defined to be some predicate like action_name(argument1,argument2),
- the preconditions are a set of predicates, and
- the effect is another set of predicates.
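That triple can be sketched directly in code (a hypothetical rendering, with invented predicate names; the effect is split into add and delete lists, as is conventional in STRIPS): an action applies in a state when its preconditions hold, and applying it updates the set of true predicates.

```python
# A STRIPS-style operator as (action, preconditions, effect),
# with the effect split into add/delete lists. All names invented.
from dataclasses import dataclass, field

@dataclass
class Operator:
    name: str
    preconditions: frozenset
    add: frozenset
    delete: frozenset = field(default_factory=frozenset)

    def applicable(self, state):
        # The action applies when every precondition holds in the state.
        return self.preconditions <= state

    def apply(self, state):
        # Effect: remove the delete list, then add the add list.
        return (state - self.delete) | self.add

climb = Operator(
    name="climb",
    preconditions=frozenset({"at_obstacle"}),
    add=frozenset({"over_obstacle"}),
    delete=frozenset({"at_obstacle"}),
)

state = frozenset({"at_obstacle", "goal_right_side"})
if climb.applicable(state):
    state = climb.apply(state)
print(sorted(state))  # ['goal_right_side', 'over_obstacle']
```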

(In situation calculus, it's similar.... Bah, but I can't explain this without an example. I have to go back to my fairy world and contrive something with the upside-down As. So there is an idea for my next post!)

Anyway, my point for now is that in all my past experiences, the actions HAVE been "tied in" as arguments somehow in the state transition functions.

So if you usually have predefined probabilities of transitions between states - such as what your Markov Decision Process would rely on -- then this means that we usually assume that you define your actions and their consequences ahead of time, with probabilities. Is that right? Am I beating this one to death? I'm trying to establish what "the norm" is here, so that I can propose an alternative. But man, am I ever building on shaky ground here! But for the sake of explorations, moving on...

So what if, instead of having predefined probabilities of transitions between states, you already know exactly how the "order of things" goes? You already have a repertoire of microteach strategies. So we don't have any uncertainty about "what happens next" but rather we are choosing the best action from a SET of KNOWN processes. It just occurred to me that I may be looking at the difference between partially-observable MDPs and regular MDPs, I think, maybe. Because in my situation, since I know "the order of things", doesn't that erase the "partially-observable" part? On top of that, I'm trying to braid together multiple MDP chains, representing different strands of interleaving learner goals.

So, suppose instead our job is to pick which strand to follow next. Usually, you want to follow a single, mainstream process so that the learner has a sense of continuity and purpose. However, for fun and variety you need to side-track onto other goals or interests and have some "side quests" going on. So what is the machine's role here? To anticipate valuable opportunities and present them to the learner. The machine is a filter on the opportunities out there. A facilitator. A provider of context.

Pulling this back to decision theory: your distribution should be a discretization of story threads, each with a learning goal and quest history. The utility is a function over each strand, I guess, telling you how much value to expect from choosing that option based on maximum relevance, keeping the balance of continuity and progress vs. variety -- the locus of control -- etc.

So the probabilities. What are these? How are these distinct from the utility function? Is it because we don't know what the learner will do next? We don't care, really, there is no sense trying to predict. Maybe it's based on what we assume will happen if they take that strand, because we don't exactly know what will happen when we go ahead and weave those learning objects into our story (where the learning objects are associated with the strand). I really like this thought, but I have to figure out how it fits into the math.

But 1 more problem: why would all our story strands have to sum to 1? I guess because you're bound to pick 1 of them, i.e. we know for certain that we will do SOMETHING. But that doesn't matter. We said already there is no sense trying to predict what the user is doing. Where is our uncertainty? It is on what our assembly will do to the learner. How do we distribute that over 1? Definitely I have some thinking to do...... and there we go, baby is awake, blog time is over!!!!!!!!!!!!

Posted by Frozone on May 10, 2009 09:42 AM | Comments (0) categorized under Computer Science & AI

## May 08, 2009

### This dichotomy

I was reading yet another paper about the application of Markov Decision Processes in AI planning (BI-POMDP: Bounded, Incremental Partially-Observable Markov-Model Planning [Washington, 1997]), and this one in particular had a clear and precise abstract. So clear, in fact, that it helped me pinpoint another subtlety that's been tugging at me since I began this journey - this dichotomy - between trying to define my problem and learning about the tools (i.e. concepts from AI) so that I can apply them in a solution. (hold that thought..!)

The other aspect to the dichotomy is that the body of research related to my problem has two "sides" with little overlap, and I'm trying to help bring them closer together.

One side is so mechanical that you see things like "good pedagogy" being defined as "minimizing the number of teacher actions required for the student to learn something". You can see how this is easy to measure -- the number of "teaching actions" is easily counted. But measuring whether the students "learn something" is much harder. The other side of the literature is much more general, the topics more wide-reaching. Too big for a computational model. I'm in the middle, working with some computational awesomeness, but with "heart". =D

Anyway, the subtlety I was talking about was related to "uncertainty about action outcomes" vs. "uncertainty about the current state you are in". Yes, your actions affect the state... but... is there a way you can have a model with certain state transitions, leaving uncertainties as fringe observables or something? They can still affect your state transition using weights or whatever to push in another direction, but why design your system so that your state is fuzzy? I shouldn't use that word to avoid confusion with fuzzy logic. Uncertain, fuzzy... unclear? Misty? Blurry? Shaky? Not discrete? Continuous? Elusive? Gah.

Anyway, I'll keep reading the paper. But I am gathering my details. Gaw, haw, haw!

I feel like I must have said this already. Oh well, there's the swirliness of human thought for ya; backtracking and solidifying can be good, too. :)

Posted by Frozone on May 08, 2009 09:03 AM | Comments (0) categorized under Computer Science & AI

## May 06, 2009

### The fiasco of the objectionable slides at the Ruby conference

So I heard about this incident at a Ruby conference where one of the presentations contained a bunch of pictures of scantily clad women. I guess it was supposed to be funny. I thought, "whatever," and didn't think much of it other than a pang of "well, that's not fair". Then Phizzle articulated the issue really well, and I took a moment to try and figure out what this situation meant to me, as a woman in computer science. This affects me. I owe it to myself to figure out where I stand.

(For more background, check out this article, or the thread on rubyrailways.com and some "reactions from actual women" over at hackety.org.)

So here was my reaction. When I look at that sexy bum on the front of the slides, I hear in my head: "YOU don't look like that. This image on the screen is up on a pedestal showing what is accepted as "the best" and what we want. This is what "good" is. This is what we value. If you are a man, you will understand how to be uber by following the advice in this presentation about size, multiple partners, scalability, etc.. If you are a woman, well, then you had better look like the hot ass in this picture otherwise we don't want you around."

So, there. Stuff like this makes me feel left out. Unwanted. Like I don't matter, like I'm not good enough.

Which is completely stupid, because I'm a very good engineer and a hard-working scientist.

I'm glad to see some of the spin-off that is coming as a result of this conflict. A LOT of developers are going, "what's the big deal? why are you getting all offended?" and I understand where they're coming from. It was all just supposed to be a joke. But I hope in my heart that the people who think there's nothing wrong with this take a moment, just a moment, to realize that there are minority groups in the room trying so hard to join in and contribute in positive ways, and that because of their small numbers they are much more sensitive to ostracism. Jokes like this can kill what little confidence those people have. And that's one reason why diversity is not thriving.

So ya. There's my two cents on this one. =) When I was reading the reactions of the other women over at hackety.org, I didn't see my own point of view repeated anywhere, so I wanted to voice it here, at least.

Posted by Frozone Permalink on May 06, 2009 09:17 PM | Comments (1) categorized under Community Networking

## May 05, 2009

### Generating actions and the transition function

I was reading this paper (Decision-Theoretic Planning for Playing Table Soccer [Tacke, Weigel & Nebel, 2004]) about an application of decision theory to a robotic planner that played foosball. I thought that the paper did a good job of explaining the "nuts and bolts" I've been looking for.

To put a name to it, I want to look at the transition function. Basically, this function takes the current state and an action as input, and outputs the possible new states. Then you pick the action whose possible outcomes have the highest expected utility.

In this paper, their system generated a tree of next possible actions, and for each of those, opponent reactions and corresponding consequences, with probabilities of those consequences. Next came another layer of the next possible states. It looks like they used a bit of naive Bayes and some minimax, which was odd because it was a decision-theoretic system, not game-theoretic. This is an art, really -- you have the tools and you can make them do whatever you want to suit your problem!!
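To make that loop concrete, here's a toy sketch of decision-theoretic action selection in the spirit of the foosball example. None of this is from the paper -- the states ("goal", "lost_ball", etc.), probabilities, and utility values are all made up for illustration:

```python
# A minimal sketch of decision-theoretic action selection. The transition
# model returns (probability, next_state) pairs; the agent picks the action
# with the highest expected utility over its possible outcomes.
# All states, probabilities, and utilities below are invented.

def transition(state, action):
    """Toy transition model: 'shoot' sometimes scores, 'pass' usually keeps the ball."""
    if action == "shoot":
        return [(0.3, "goal"), (0.7, "lost_ball")]
    else:  # "pass"
        return [(0.9, "teammate_has_ball"), (0.1, "lost_ball")]

UTILITY = {"goal": 10.0, "teammate_has_ball": 2.0, "lost_ball": -1.0}

def expected_utility(state, action):
    # Weight each possible next state's utility by its probability.
    return sum(p * UTILITY[s2] for p, s2 in transition(state, action))

def best_action(state, actions=("shoot", "pass")):
    return max(actions, key=lambda a: expected_utility(state, a))

print(best_action("i_have_ball"))  # shoot: EU 2.3 beats pass's EU 1.7
```

Layering this one step deeper -- generating the opponent's reactions to each of my possible actions, with their own probabilities -- is what gives you the tree the paper describes.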

I'm starting to get an idea of how planning works, but I need to look at a few more systems to compare and contrast them so I can pick out the bigger patterns.

I think that the teaching strategy will influence the next-possible actions. I don't know how incoming observations about the learner will fit in... maybe the probabilities of consequences? I also have to keep in mind the overall story -- i.e. my utility is "giving the learner an overall sense of a meaningful experience". I really have to define that mathematically. I want to put them through an introduction, a middle, and an end. And you can braid many of these together.

Well, baby's awake. Hope to come back next time with another example of the process of planning.

Posted by Frozone Permalink on May 05, 2009 09:17 AM | Comments (0) categorized under Pedagogical modelling

## May 03, 2009

### Shape trees

In an earlier post, I was talking about an analogy between comparing 2 visual things vs. comparing 2 abstract ideas for the purposes of compare/contrast in an educational setting. Basically, I'm trying to make some progress in the field of instructional planning by taking robotics research -- how machines figure out how to walk around obstacles to reach their destination -- and applying it in an analogous way: presenting ideas/activities/support to a learner as they work towards their own goals, where the obstacles are gaps in knowledge or crevasses caused by lack of context.

Anyway, I just wanted to say that at the end of that post, I mentioned that I couldn't find a particular paper that gave a document a "shape", and I wanted to see if those ideas could be used to give task domain ontologies their own "shapes". I found the paper: "Document identification using shape trees" [Henker & Petersohn, 2009]. I flagged it for reading!

Posted by Frozone Permalink on May 03, 2009 10:27 AM | Comments (0) categorized under Computer Science & AI

## May 01, 2009

### Programming languages for AI planning

What programming language do folks use for AI planning these days? Is STRIPS still the primary choice? Or is there a "modern" alternative?

Even though I'm following a lot of AI folks, I didn't get any leads. Strange! But then I thought, maybe I should try harder to send out more tweets and answer other people's questions... maybe then I would get some of my own questions answered... isn't that how the universe works?? :)

See, I really want to take a closer look at the nuts and bolts of PLANNING. Right now I'm doing a lot of math, and I thought that if I could figure out how people are building AI systems these days (i.e. which programming languages they are using...) then maybe I could learn some more details. No luck yet, though!

So in the meantime, I dug up some assignments from my AI class a couple years ago and will take another look at those. We used situation calculus and STRIPS. Maybe those are still state-of-the-art, I have no idea. 'Will also read some more papers on planning to see if I can figure out how MDPs get rolled into planning, and maybe take another look under my new perspective of conditional probability (see previous post about "conditioning").

Update: I just re-read a couple papers in my field ("instructional planning") and both used STRIPS. Then I went and flipped through some robotics journals and found a 2008 paper that used STRIPS. I don't want to make too strong of an induction, but it looks like STRIPS is still fairly state-of-the-art. Hrm...... okay....
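For my own notes, here's the core of the STRIPS representation in a few lines of Python: an operator is just preconditions, an add list, and a delete list over sets of facts. The teaching-flavoured facts and action below are my own invention, not from any of the papers:

```python
# A minimal sketch of STRIPS-style operators. A state is a frozenset of
# ground facts; an action is applicable when its preconditions are a subset
# of the state, and applying it removes the delete list and adds the add list.
# The example facts/action are hypothetical.

from typing import NamedTuple, FrozenSet

class Action(NamedTuple):
    name: str
    preconditions: FrozenSet[str]
    add_list: FrozenSet[str]
    delete_list: FrozenSet[str]

def applicable(state, action):
    return action.preconditions <= state

def apply_action(state, action):
    assert applicable(state, action), "preconditions not met"
    return (state - action.delete_list) | action.add_list

show_diagram = Action(
    name="show-diagram",
    preconditions=frozenset({"student-attentive"}),
    add_list=frozenset({"seen-diagram"}),
    delete_list=frozenset({"student-attentive"}),  # attention is "used up"
)

state = frozenset({"student-attentive"})
state = apply_action(state, show_diagram)
print(sorted(state))  # the new state after executing the action
```

A classical STRIPS planner is then "just" a search over sequences of applicable actions from the initial state to one satisfying the goal facts.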

Posted by Frozone Permalink on May 01, 2009 12:21 PM | Comments (0) categorized under Computer Science & AI

### Learned some stats lingo

I learned that "conditioning" means "to instantiate known variables" or, you could also say, "to apply the evidence to your influence diagram".

It clicked when I was reading this document (The Boxer, The Wrestler & The Coin Flip - PDF) by Andrew Gelman about the difficulties with Bayesian inference, and I read the sentence, "Figure 3(b) displays the posterior distribution after conditioning on the event X = Y."

And all I could think was, "What?" Then I remembered from CMPT 417 that your priors are variables without any "givens", i.e. those variables with no observations, no evidence applied. So a posterior must be the distribution you get over the remaining variables AFTER you HAVE introduced evidence into the network. So when he says "conditioning on the event X = Y", I remembered the probability notation P(A|B), pronounced "the probability of A given B", and that B is a "given", i.e. it is evidence, it is an observation, it is an instantiated variable. And that B is called "the condition" because P(A) -- the probability of A -- is affected by your knowledge of B. If you didn't know B, then your probability for A could be different, unless the events A and B are independent.

So when you say "after conditioning on X = Y", it means you've observed the event that X and Y came out equal (not necessarily which value they share). I'm still a little foggy on what exactly a "posterior distribution" is, and just how it is affected by your conditioning.
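Here's a tiny worked version of conditioning that I can actually compute. The setup is mine (much simpler than Gelman's boxer/wrestler example): X is a fair coin, Y is a coin heavily biased toward 1, and conditioning on X = Y just means keeping the worlds where the evidence holds and renormalizing:

```python
# Conditioning by enumeration: keep only the worlds consistent with the
# evidence, then renormalize. X is a fair coin; Y is biased toward 1.
# This toy model is illustrative, not from Gelman's paper.

from fractions import Fraction

worlds = {}  # (x, y) -> prior probability
for x in (0, 1):
    for y in (0, 1):
        px = Fraction(1, 2)
        py = Fraction(9, 10) if y == 1 else Fraction(1, 10)
        worlds[(x, y)] = px * py

def prob(event, given=lambda w: True):
    # P(event | given): sum kept worlds where event holds, divided by
    # the total probability of the kept worlds (the renormalization).
    z = sum(p for w, p in worlds.items() if given(w))
    return sum(p for w, p in worlds.items() if event(w) and given(w)) / z

prior = prob(lambda w: w[0] == 1)                                    # P(X=1) = 1/2
posterior = prob(lambda w: w[0] == 1, given=lambda w: w[0] == w[1])  # P(X=1 | X=Y) = 9/10

print(prior, posterior)
```

The shift from 1/2 to 9/10 is the posterior distribution at work: learning that X agreed with a biased coin drags your belief about X toward the bias.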

But at least I'm a step closer!

Posted by Frozone Permalink on May 01, 2009 08:52 AM | Comments (0) categorized under Computer Science & AI

## Index to Steph's Notes

Feb. 24th 2007 - Weee! This new part of my website is not an entry, but rather a permanent fixture whose purpose is to "Look Down on All Those Notes With Some Grand Vision of Organization". Wish me luck. LOL
1. Representing meta-data (fuel) & the different kinds of "hooks" that intelligent systems can use (how fuel is injected into the motor of the engine)
    1. Motivation: Semantic net / Rationalizable to a machine
    2. Technology & Philosophy: RDF, modus ponens
        1. Predicates, logic & situation calculus
    3. What kinds of data? - What kinds of meta-data would an AIEd system possibly need, and how is it represented?
        2. "is-prerequisite-to"-type knowledge
        3. interactions with learning objects & other learners - (location, composition, is-a/part-of, sequencing by restricting navigation, personalization, ontologies for LO context)
        4. lesson plans, curriculum plans, practicing sessions (What is stored, what is generated on the fly? What is remembered?)
    4. How to organize it - When is it stored in a database? Meta-data? Agent memory banks? Protocols? Repositories? XML files? Home-servers? WSDL services? Frameworks? Portable banks? P2P access?
        1. Database of object-agent interactions
        2. Concept of "Home" on a P2P network -- maybe the bulk of a learning object's usage data is on its home server and can be queried using WSDL or something? Similar homes for each student's usage history, etc. Baggage problem.
            1. referring to a concept/relationship - ex. AgentOwl?
    6. Generation of this data
        1. Rationalization: For use by other AIEd systems
        2. What is generated - discuss items under part I.C.
        3. When it's generated - describe procedural model, which parts of the engine generate what (is-a/part-of data, XML feeds, web services, meta-data about groups and collaboration, protocols, examples: Friend Of A Friend (FOAF) project)
        4. Technical notes on HOW it's generated: JENA, issues of implementation demo, my Hermione & Ron agent examples, lol
        5. Usage of this generated data - see part IV. A.
2. Given the engine, who uses it?
    1. Students / Learners / "Me"
        1. instructional planning, student model, pre-requisites, tutoring, coaching, collaboration, constructivism
    2. Teachers / Educators / "Me"
        1. putting together lessons
        2. be able to browse through task domain knowledge in an objective / encyclopaedia format, then be able to pick-and-choose what you need for your students
        3. compose examples, design explanations, pull together diagrams, learning objects, etc. Haystack Relo?
    3. Administration / Government / Structure / Crowd Control
        1. as restrictions/obstacles/sand pit to the robot in agent environment
        2. can't just have a swarm of students and teachers out there -- need structure of courses, curriculum, objectives, requirements (at least, we do in this day and age!) - report cards, evaluation, feedback
        3. government, marks, certificates, requirements, funding, curriculum, attendance, delinquent, non-attending, motivation
        4. school's images, goals, strengths, payroll, HR, security, accounts, permissions, privacy
        5. registration, failed courses
3. User Environment -- How does this engine work? What does the user see on the screen?
    1. Introduction - Given a background in educational psychology, how does the system present itself -- what does the user see, and where does this data come from? (Links to thoughts from part I.)
    2. Task Domain Browsing - Suppose you're just idly browsing through the "raw" content. How would it look when it's not wrapped around a learning-context or lesson or tutorial or anything? A cross between browsing a raw task domain ontology and browsing a learning object repository.
        1. Cleaning up the data -- Visualizing the data for humans to pick through the task domain and work on it. Suppose the "Subject Expert" discovers an advancement in science and needs to update the "world's" domain knowledge. (I used the "Subject Expert" terminology from Ontologies to Support Learning Design Context - Thanks Chris.) How would they make corrections to ontologies and learning objects, or at least point the users of "old" objects towards adopting the newer ones?
        2. "Modes" - Learning & Lessons / Checklist - Homework, Assignments, Courses being taken / Collaborative mode / Teaching mode / Calendar-email-administrative mode -- See also the different kinds of scenarios in the ActiveMath system
4. Evolution of this engine
    1. target some key implementation hooks discussed in part I - design an experiment/demo
        1. scrape a page - (Note: scraping can only give objective data, not in-context data)
        2. LO repository - related to browsing the task domain?
        3. a learner's "To Do" list - where does it come from? Assignments, courses.
        4. sample group scenario
        5. sample teacher lesson planning
        6. sample data "left behind"
        7. sample use of that data
    2. Data mining (for what? lol)
        1. discovery / generation of ontologies - when do you need to hunt for them, and when do you have to have a solidly-known & predictable ontology?
    3. I/O - where it happens, which languages, protocols, which agents perform I/O and when, percepts, actuators
        1. Role Assignments
        2. My Environment Adapts to Me
            1. Displaying feedback from the server on JSP pages (software engineering considerations)
            2. Sketching out a design (Content planning vs. Delivery planning)
        3. agent negotiations / social structures / ummm... Web 2.0?
    4. garbage collection of meta-data
        1. Artificial Intelligence & Evolution
        2. open learning environments
5. Agents, pets, grouping, Community modelling
    1. Protocols - finding groups, cyber dollars, state diagrams (?)
    2. "Community Studies" - graphs & communication hubs, types of communities (free-for-all, hierarchy of authority, etc.)
    3. implications of joining a community - what do you share, which parts of your student model are relevant
    4. Walls & sand traps -- deliberate restrictions as problem-solving for learning
    5. Communication channels - individual-to-individual, individual-to-community, chat channels, agent-only "administrative" communications, ex. requests for related learning objects in a particular community, etc.
6. Educational/Pedagogical focus (this part probably shouldn't be its own section but rather incorporated into the whole picture, but it's separate for me right now because I'm still only just starting to learn about it.)
    1. Semantics - what there is to talk about in Education
        1. ex. Merrill's First Principles of Instruction, linking educational terms to AI terms
    2. Pedagogical skills for tutors -- supporting human *and* artificial tutors
    3. Student modelling - what the machine needs to know about the student, pedagogically speaking, about learning history/preferences
    4. Roles - Simulated students, Coaches, Tutors, Teachers