Index - Student Modelling
- Lingo: "Student" or "Learner"? (January 14, 2012)
- Felder and Silverman (November 13, 2011)
- Likeness Matrix (September 17, 2011)
- Model-Tracing tutoring and Constraint-Based tutoring (November 18, 2006)
- Dealing with Ambiguity in Constraint-based modelling of student understanding (September 23, 2006)
- Deep Modelling of Thought (September 09, 2006)
- Implementing some baby AI agents (June 09, 2006)
- Java servlets as agents?? (June 06, 2006)
January 14, 2012
When I first started work in this field, I called the users of the educational systems I was working on "Students".
But then as I continued working I realized that educational systems are for EVERYONE, and not only for people formally registered with some sort of educational institution. Think: adult learners, corporate classrooms, hobbyists, etc. All of these people could be users of an educational software system, but the word "student" may not be appropriate.
Now it is a couple of years later and I'm definitely in the habit of saying "learners" all the time without thinking about it. But now that I'm firmly into grad school and have had to do a lot of denser writing about my work, I see that as I have to explain "learning objects" and "learners" a lot, it's hard on the eyes and hard on the mind to try and keep the two separate when you are reading quickly, because they both have the same sequence of letters "learn-" inside of them. Nobody has ever pointed this out to me, but it's something that's been bothering me lately as I've been re-reading my own work.
It's also surprisingly hard on me when I try to give a talk. Talk about tongue twisters!
So, as of today I am switching back to using "student". I shall declare to the world that even though I say "student" I mean anyone who wants to do any form of learning.
Skidoosh! (I just thought I would conclude my blog entry with a random reference to Kung Fu Panda. Awesome movie.)
November 13, 2011
As I've been reviewing papers for my literature review, I've noticed a handful of authors talking about Felder and Silverman.
The first time, I basically ignored it: "Whatever, some other study I don't know about."
Then I saw it again, and made note that it had something to do with psychology, maybe.
Then I saw it AGAIN and thought: "Hum, this is popular, whatever it is."
Finally, I realized it's a learning-styles model: basically an alternative to Myers-Briggs.
So, there. I learned something while doing my lit review.
September 17, 2011
Okay, so I'm building a collaborative filtering system. This means that I am trying to take 1 person and be able to say to them, "here are some good choices for you. I found these by looking at other people in the system who are similar to you and picking some of the things that they liked."
However, in order for a computer to do this, you need to find a way to compute the "who are similar to you" part.
This is called a likeness matrix. It is a set of numbers telling you how much "alike" each person is to every other person. So you create an N by N chart where N is the number of people. You X out the diagonal because it isn't useful to say how much a person is like themselves.
For example, suppose we have Anna, Bruce, Christy, and Derek. The likeness matrix might look something like the image below. Anna and Bruce have a lot in common so they are a 9. Anna and Christy have quite a lot in common, also, so they are an 8. Anna and Derek are nothing alike so they are a 1. And so on.
Computationally, you only need one half of the matrix, on either side of the diagonal, because the two halves mirror each other. What would be a better data structure than a matrix? Maybe a database? A whole bunch of triples: (person_a_id, person_b_id, likeness_value).
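Since I'm talking myself into the triple representation anyway, here's a quick sketch of how I might hold those triples in code before they ever hit a database. All the names (LikenessStore, setLikeness, getLikeness) are just things I made up for the sketch; the only real idea is storing each pair once, because the matrix is symmetric.

import java.util.HashMap;
import java.util.Map;

public class LikenessStore {
    // Keyed by "smallerId|largerId" so each pair is stored only once
    // (the matrix is symmetric, so half of it is enough).
    private final Map<String, Integer> likeness = new HashMap<>();

    private String key(String personA, String personB) {
        return personA.compareTo(personB) < 0
                ? personA + "|" + personB
                : personB + "|" + personA;
    }

    public void setLikeness(String personA, String personB, int value) {
        likeness.put(key(personA, personB), value);
    }

    public int getLikeness(String personA, String personB) {
        return likeness.getOrDefault(key(personA, personB), 0);
    }

    public static void main(String[] args) {
        LikenessStore store = new LikenessStore();
        store.setLikeness("Anna", "Bruce", 9);
        store.setLikeness("Anna", "Christy", 8);
        store.setLikeness("Anna", "Derek", 1);
        System.out.println(store.getLikeness("Bruce", "Anna")); // 9 -- same pair either way around
    }
}

The database table would just be the persistent version of that map: one row per (person_a_id, person_b_id, likeness_value) triple.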
Yeah, I think I'm going to use a database. Thanks for talking this through with me!
November 18, 2006
This site's stylesheet still needs a lot of work, but I'll have to save that for another day. Time is too fleeting for me to actually be able to do everything I want; instead, I have to "pick my battles", as they say.
I have always been confused between Case-based reasoning and Constraint-based modelling, simply because they both start with the letter C and contain the word "based". I had hoped that reading this paper would help clarify at least "Constraint-based modelling". As a bonus along the way, I also learned about Model-Tracing; I don't recall ever hearing about this particular approach before.
Model-tracing, as I understand it, involves setting up a set of rules about the Task Domain. (I am trying to use terminology from Kurt VanLehn's summary paper, The Behavior of Tutoring Systems.) In Model-tracing you also need to have rules about misconceptions and incorrect ways of doing things. Then, throughout the tutoring process, the system "watches" the student as they work on a problem and attempts to match up the series of rules that the student seems to be following. Once the system has identified the "incorrect rule(s)" that the student is "executing", it is able to offer focused, relevant support to help the student get back on track.
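To check whether I actually follow that, here's a toy sketch of the rule matching I have in my head. None of this comes from a real tutoring system; the names (Rule, ModelTracer, trace) and the silly string matching are all mine.

import java.util.List;

record Rule(String name, boolean isMisconception, String hint) {
    // Placeholder check: does this rule "explain" the student's latest step?
    // A real tracer would match against a production system, not a string.
    boolean matches(String studentStep) {
        return studentStep.contains(name);
    }
}

public class ModelTracer {
    static final List<Rule> RULES = List.of(
        new Rule("carry-the-one", false, ""),
        new Rule("forgot-to-carry", true, "Remember to carry when a column sums past 9.")
    );

    // Trace the student's step: if a misconception rule matches, offer its hint.
    static void trace(String studentStep) {
        for (Rule r : RULES) {
            if (r.matches(studentStep) && r.isMisconception()) {
                System.out.println("Hint: " + r.hint());
            }
        }
    }

    public static void main(String[] args) {
        trace("student applied forgot-to-carry on 27 + 15");
    }
}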
Maybe I am totally wrong about that, but, I'll keep reading.
Constraint-based modelling (CBM) was easier for me to understand because I've already read about it somewhat. I'm also seeing some of the same ideas in the earlier 1994 paper Granularity-Based Reasoning and Belief Revision in Student Models [McCalla, Greer].
Anyway, the gist of CBM (to my own limited understanding) is that, for a particular problem a student may be working through, the system understands a set of constraints that have to be met in order for the solution to be correct. At each step, the system can watch the student's solution and match up which constraints the student may be satisfying and which constraints may be missing. In this way - by examining the unsatisfied constraints - the system knows which areas of knowledge the student is missing and where it can focus on helping them.
The main difference, as pointed out in [Kodaganallur, Weitz & Rosenthal], is that Model-tracing is focused on what the student is *doing* and the processes they are following as they work away, while CBM is more focused on what the student is demonstrating that they *know* as they put stuff down on paper. Something is tugging at the back of my head about the differences between Behaviorism and the Cognitive perspective in educational psychology.
I wondered if maybe Model-tracing would be the best approach when teaching procedural knowledge. For example, in a pottery class - first you take the clay, then you mould it for a bit, then you add the water, then you push your thumbs in the middle to start to form the bowl shape... Meanwhile, CBM might be better for representing the overall picture, like - "the finished bowl must be shaped like a, well, bowl, and it also must be free of cracks". If the student's bowl is showing evidence of cracks, then the violation of that particular constraint leads the system to suggest to the student that perhaps they should dab a little water on their fingers to moisten the clay, and thereby eliminate the cracks.
Maybe those are crummy examples, I don't know!
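Crummy or not, the pottery example is concrete enough that I can sketch how I picture the constraint checking. Again, this is just my own doodle, not anything from the papers; Constraint, BowlSolution and the rest are made-up names.

import java.util.List;
import java.util.function.Predicate;

record BowlSolution(boolean bowlShaped, boolean hasCracks) {}

record Constraint(String description, Predicate<BowlSolution> satisfied, String feedback) {}

public class CbmChecker {
    static final List<Constraint> CONSTRAINTS = List.of(
        new Constraint("must be shaped like a bowl", s -> s.bowlShaped(),
                "Push your thumbs into the middle to start forming the bowl shape."),
        new Constraint("must be free of cracks", s -> !s.hasCracks(),
                "Dab a little water on your fingers to moisten the clay.")
    );

    // Check the student's current work against every constraint and
    // give feedback only for the ones that are violated.
    static void check(BowlSolution solution) {
        for (Constraint c : CONSTRAINTS) {
            if (!c.satisfied().test(solution)) {
                System.out.println("Violated: " + c.description() + " -> " + c.feedback());
            }
        }
    }

    public static void main(String[] args) {
        check(new BowlSolution(true, true)); // bowl-shaped, but cracked
    }
}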
Personally, I am thinking that the CBM approach has more potential for "intelligent" tutoring, but I'll keep reading -- this is good stuff!
September 23, 2006
As part of building a background for deep modelling of thought, I wanted to read a good paper on current techniques for diagnosis of student misconceptions. I read Constraint-based Modeling and Ambiguity [Menzel, 2006] -- discovered because of my new membership in the International Artificial Intelligence in Education Society. The society has been a great resource - I'm so happy!! :-) :-)
The paper gave a good review of Constraint-based Modelling (CBM) in general, but it was intended as more of a review for scientists already familiar with CBM, which I am not. 'Mental note to do further background reading.
After establishing these "basic" concepts of CBM using an example of a student struggling with an addition problem in mathematics, the author examines another problem, in second-language learning, that introduces ambiguity. Unlike math, which is relatively easy (er, well, easier) for a computer to model, in the second example the student attempts to construct a sentence in the English language but struggles with grammatical agreement, e.g. "These fish stinks". It is much harder for a computer to analyse ambiguous things like language-learning than it is to analyse mathematical-concept-learning. So, more advanced techniques are needed and are described in the paper.
Most of it went over my head, but I read through the whole paper anyway, grasping what I could. I remember that the author discussed weighted constraints and also looked at transitive relationships over the conditions that need to be satisfied. I had to pull out my old 2nd-year logic textbook because I forgot what a transitive relationship was. It's pretty easy:
if x > y
and y > z,
then you can apply the transitive property to conclude that x > z.
This is roughly applied in the sense that the demonstrative pronoun "These" and the verb "stinks" aren't matching up properly: "These" says the phrase is talking about a plural set of fish, "stinks" says it's a single fish, and that number has to stay consistent across the whole phrase. It's like knowing that x > y and y > z should force x > z, but the student's work claims x < z anyway. Or something like that.
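If I strip the linguistics right down, the constraint I'm picturing is just "the determiner and the verb have to agree in number". A toy version (the names and everything else here are my own, nothing from Menzel's paper):

enum GrammaticalNumber { SINGULAR, PLURAL }

public class AgreementCheck {
    // Toy constraint: the determiner and the verb must agree in number.
    static boolean agreementHolds(GrammaticalNumber determiner, GrammaticalNumber verb) {
        return determiner == verb;
    }

    public static void main(String[] args) {
        GrammaticalNumber these = GrammaticalNumber.PLURAL;    // "These" signals plural
        GrammaticalNumber stinks = GrammaticalNumber.SINGULAR; // "stinks" signals singular
        System.out.println(agreementHolds(these, stinks));     // false -> constraint violated
    }
}

The hard part the paper deals with, as far as I can tell, is that real student sentences don't come pre-labelled like this, so the system has to weigh several ambiguous readings at once.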
I also thought the idea of weighted constraints was interesting, but I don't understand enough about Constraint-based modelling to really appreciate it.
The author's discussion on the role of CBM in a general ITS architecture was also interesting. In my own set of assumptions, I held that CBM was useful as a diagnostic approach for student misconceptions. Apparently CBM can also be used as a student model and as a domain model, but I still don't really understand how to best classify the usefulness of CBM in a general ITS architecture.
The next paper that I'm reading takes a more theoretical approach to ITS architectures -- reminded me of Software Patterns in ITS Architectures -- maybe I should have taken more courses in software engineering at the U of S; I only went up to CMPT 370. Oh well. =D
Back to Constraint-based modelling: My old 862 notes indicated that a good introductory paper would be Evaluation of a Constraint-Based Tutor for a Database Language [Mitrovic & Ohlsson, 1999]. 'Quick highlight helped improve my understanding:
- "The purpose of constraint-based modeling (CBM) is to overcome the overspecificity problem via abstraction (Ohlsson, 1992)."
* keeps reading *
September 09, 2006
I'm barely grasping these wisps of magic, these faint traces of thought -- I'm able to identify them as I read from paper to paper, but I can't even name this *idea* that I'm trying to identify. The closest way I can describe it is a "deep modelling of thought".
I was made first aware of this idea while reading the 1994 paper Granularity-Based Reasoning and Belief Revision in Student Models. S-objects (which, in my own muddled understanding, I think of as "the things a student would want to learn about" or maybe "the strategies a student uses as they practice the things they are learning about") are connected with abstraction-refinement relations and aggregation-component relations. K-clusters are groupings of aggregation links of sub-objects, and L-clusters are groupings of abstraction links of sub-objects. There are observer objects thrown into the mix also, but I'm still having trouble grasping the big picture. (How amusing to watch my own broken understanding of reasoning and belief revision about Reasoning and Belief Revision.)
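For my own sake, here is the (quite possibly wrong) picture I have in my head of that structure, as a little data-structure doodle. The names (SObject, abstractions, components) and the arithmetic example are mine, not the paper's notation.

import java.util.ArrayList;
import java.util.List;

public class SObject {
    final String name;
    // abstraction-refinement links: more abstract views of this object
    final List<SObject> abstractions = new ArrayList<>();
    // aggregation-component links: parts that make up this object
    final List<SObject> components = new ArrayList<>();

    SObject(String name) { this.name = name; }

    public static void main(String[] args) {
        SObject columnAddition = new SObject("column addition strategy");
        SObject carrying = new SObject("carrying");
        SObject arithmetic = new SObject("arithmetic skill");
        columnAddition.components.add(carrying);     // aggregation link
        columnAddition.abstractions.add(arithmetic); // abstraction link
        System.out.println(columnAddition.name + " has "
                + columnAddition.components.size() + " component(s)");
    }
}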
I attempted to apply this notation to my running example of the unit on "The Atom" in Grade 11 Chemistry -- but I failed; my understanding is yet too limited to be able to apply the knowledge. So, I kept reading.
Another appearance of what I'm calling "deep modelling of thought" is in the generation of global and local content plans in the PEPE system. A cycle of student model checking, diagnosis and plan generation (I think?) uses an added dimension of knowledge levels (fact, analysis, synthesis) attached to different concepts (S-objects??) to build a learning path to aid the student. 'As in Determining the focus of instruction: content planning for intelligent tutoring systems / by Barbara Jane Brecht. I really have to read that work again -- all I need is to bury myself in the Special Collections office at the U of S once more. 'Will keep reading related works so as to better understand the thesis itself on those rare trips to the library that I'm able to make.
I see this pattern again in studies of the neocortex and the memory-prediction framework of intelligence as in On Intelligence. The neocortex is built as a hierarchy and it maps senses -- vision, touch, hearing, etc. -- to memories and predictions, which is the essence of intelligence. In my AI textbook, I can find computational techniques for achieving similar learning, prediction, and memory abilities in machines. I'm interested in tracing these computational structures from their bases in straightforward mathematical foundations that can be represented in, say, Prolog, through to the multi-dimensional structures in this list of research works I'm trying to scrape together.
The traces occur again in the 2006 paper, "Constraint-based Modeling and Ambiguity" by German researcher Wolfgang Menzel. It was neat for me to try and understand the 1994 paper (Granularity-Based Reasoning and Belief Revision in Student Models) from the perspective of this new 2006 paper. That is, now that I have the ability to begin comparing and contrasting, both papers started to make more sense.
I was about to try and elaborate and define this "sense" -- turning magic into science -- (or, rather, turning science into magic, as I see it) -- but my brain is not co-operating, even after 3 cups of coffee. Grah!
I promised myself 2 more hours of reading time. 'Here goes.
June 09, 2006
I'd really love to find a tutorial somewhere so I can try to write my own AI agents in hopes of learning something about them. How do I build their environment? How do I build an agent's memory? What is the programming representation of an agent's arms and legs so it can "move around" in the environment? (ex. data mining through RDF statements on the 'web.) How does one agent say "Hello" to another agent? Even better, is there a framework or library out there already that I can simply use, rather than having to start with a public static void main? (hehe)
I'm starting to become a little familiar with AI theory (knowledge representation, learning, problem solving by searching, planning, etc.), but I still have no idea how to tinker with it myself. When I ask myself, "Which programming language?", somehow my education at the U of S causes me to answer "Prolog", but the only things I've ever really done in Prolog are write search algorithms. I know that agents -use- search algorithms, but how do I build the agent itself?
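The closest thing to an answer I can give myself right now is the bare sense-decide-act loop from the textbook. Everything below (Percept, Action, Agent, Environment) is a placeholder name I invented, not any real agent framework:

public class AgentSketch {
    record Percept(String observation) {}
    record Action(String command) {}

    interface Agent {
        Action decide(Percept percept); // the agent's "brain"
    }

    static class Environment {
        Percept sense() { return new Percept("something seen"); }               // stub sensors
        void apply(Action a) { System.out.println("acting: " + a.command()); }  // stub arms and legs
    }

    public static void main(String[] args) {
        Environment env = new Environment();
        Agent reflexAgent = percept -> new Action("respond to " + percept.observation());
        // The environment feeds percepts to the agent and applies its actions, over and over.
        for (int step = 0; step < 3; step++) {
            env.apply(reflexAgent.decide(env.sense()));
        }
    }
}

That still doesn't answer the memory question or the "saying Hello to another agent" question, but at least it gives me something to tinker with.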
Further to my musings on java servlets as agents: Even if these are ill-suited to 'Learner' agents, perhaps servlets will work for 'Non-human Learning Object' agents. A java servlet may be inappropriate for representing a human student because of a servlet's inherent being-tied-to-a-physical-server-ness, and human learners are most certainly -not- tied to a particular server (unless you count their localhost, but this, of course, changes too. Computer labs, anyone?)
For servlets as non-human learning object agents: Suppose there's a learning object that was written and designed by a Teacher-Developer at the Cyber School, and this object (encapsulated by an AI-agent) would naturally have to be published on a server somewhere, in a predictable location so that this teacher's students can interact with it to assist their learning. So, perhaps object-agents are suited to java servlets.
I need to continue my search to see how to represent a human's agent. I wonder how they did it in I-Help.
June 06, 2006
Oh dear, I'm so lost - don't know anything about AI.
This whitepaper on Sun's site suggests that java servlets can be used for agents needing to communicate with each other. (Under "Many Ways to Use Servlets", third bullet.) Perhaps I'm assuming they mean 'agent' in the AI sense of the term, when they really mean 'agent' in more of a commercial sense.
My gut feeling is that java servlets are too heavy for being used as AI-agents, and, isn't a servlet tied to the web server it's running on? Don't agents need to "move around", so to speak, without being tied to a particular server? Perhaps I'll borrow my brother's CMPT 317 textbook. Hum.
Index to Steph's Notes
February 24, 2007
Weee! This new part of my website is not an entry, but rather a permanent fixture whose purpose is to "Look Down on All Those Notes With Some Grand Vision of Organization". Wish me luck. LOL
- Representing meta-data (fuel) & the different kinds of "hooks" that intelligent systems can use (how fuel is injected into the motor of the engine)
- Motivation: Semantic net / Rationalizable to a machine
- Semantic network
- Genetic graph
- Prerequisite AND/OR graph
- Constraint Satisfaction Problems
- Bayesian networks / causal graphs
- Technology & Philosophy: RDF, modus ponens,
- Predicates, Logic & situation calculus
- What kinds of data? - What kinds of meta-data would an AIEd system possibly need, and how is it represented?
- task domain knowledge
- "is-prerequisite-to"-type knowledge
- interactions with learning objects & other learners - (location, composition is-a/part-of, sequencing by restricting navigation, personalization, ontologies for LO context)
- lesson plans, curriculum plans, practicing sessions (What is stored, what is generated on the fly? What is remembered?)
- How to organize it - When is it stored in a database? Meta-data? Agent memory banks? Protocols? Repositories? XML files? Home-servers? WSDL services? Frameworks? Portable banks? P2P access?
- Database of object-agent interactions
- Concept of "Home" on a P2P network -- maybe the bulk of a learning object's usage data is on its home server and can be queried using WSDL or something ? Similar homes for each student's usage history, etc. Baggage problem.
- Links to the ontologies
- referring to a concept/relationship - ex. AgentOwl?
- Generation of this data
- Rationalization: For use by other AIEd systems
- What is generated - discuss items under part I.C.
- When it's generated - describe procedural model, which parts of the engine generate what (is-a/part-of data, XML feeds, web services, meta data about groups and collaboration, protocols, examples: Friend of a Friend (FOAF) project)
- Technical notes of HOW it's generated: JENA, issues of implementation demo, my Hermione & Ron agent examples, lol
- Usage of this generated data - see part IV. A.
- Given the engine, who uses it?
- Students / Learners / "Me"
- instructional planning, student model, pre-requisites, tutoring, coaching, collaboration, constructivism
- Teachers / Educators / "Me"
- putting together lessons
- be able to browse through task domain knowledge in an objective / encyclopaedia format, then be able to pick-and-choose what you need for your students
- compose examples, design explanations, pull together diagrams, learning objects, etc. Haystack Relo?
- Administration / Government / Structure / Crowd Control
- as restrictions/obstacles/sand pit to the robot in agent environment
- can't just have a swarm of students and teachers out there -- need structure of courses, curriculum, objectives, requirements (at least, we do in this day and age!) - Report cards, evaluation, feedback
- government, marks, certificates, requirements, funding, curriculum, attendance, delinquent, non-attending, motivation
- school's images, goals, strengths, payroll, HR, security, accounts, permissions, privacy
- registration, failed courses
- User Environment -- How does this engine work? What does the user see on the screen?
- Introduction - Given a background in educational psychology, how does the system present itself -- what does the user see, and where does this data come from? (Links to thoughts from part I.)
- Task Domain Browsing - Suppose you're just idly browsing through the "raw" content. How would it look when it's not wrapped around a learning-context or lesson or tutorial or anything. 'Cross between browsing a raw task domain ontology and browsing a learning object repository.
- Cleaning up the data -- Visualizing the data for humans to pick through the task domain and work on it. Suppose the "Subject Expert" discovers an advancement in science and needs to update the "world's" domain knowledge. (I used the "Subject Expert" terminology from Ontologies to Support Learning Design Context - Thanks Chris) How would they make corrections to ontologies and learning objects, or at least point the users of "old" objects towards adopting the newer ones?
- "Modes" - Learning & Lessons / Checklist - Homework, Assignments, Courses being taken / Collaborative mode / Teaching mode / Calendar-email-administrative mode -- See also the different kinds of scenarios in the ActiveMath system
- Evolution of this engine
- target some key implementation hooks discussed in part I - design an experiment/demo
- scrape a page - (Note, scraping can only give objective data, not in-context data)
- LO repository - related to browsing the task domain?
- a learner's "To Do" list - where does it come from? Assignments, courses.
- sample group scenario
- sample teacher lesson planning
- sample data "left behind"
- sample use of that data
- Data mining (for what? lol )
- discovery / generation of ontologies - when do you need to hunt for them, and when do you have to have a solidly-known & predictable ontology?
- I/O - where it happens, which languages, protocols, which agents perform I/O and when, percepts, actuators
- Role Assignments
- My Environment Adapts to me
- Displaying feedback from the server on JSP pages (Software engineering considerations)
- Sketching out a design (Content planning vs. Delivery planning)
- agent negotiations / social structures / ummm... Web 2.0 ?
- garbage collection of meta data
- Artificial Intelligence & Evolution
- Memory Culling: Necessary part of intelligence? (artificial or human)
- Applications for the Genetic/Evolutionary algorithm
- open learning environments
- Agents, pets, grouping, Community modelling
- Protocols - finding groups, cyber dollars, state diagrams (?)
- "Community Studies" - graphs & communication hubs, types of communities (free-for-all, hierarchy of authority, etc.)
- implications of joining a community - what do you share, which parts of your student model are relevant
- Walls & sand traps -- deliberate restrictions as problem-solving for learning
- Communication channels - individual-to-individual, individual-to-community, chat channels, agent-only "administrative" communications, ex. requests for related learning objects in a particular community, etc.
- Educational/Pedagogical focus (this part probably shouldn't be its own section but rather incorporated into the whole picture, but it's separate for me right now because I'm still only just starting to learn about it.)
- Semantics - what there is to talk about in Education
- ex. Merrill's First Principles of Instruction, linking educational terms to AI terms
- Pedagogical skills for tutors -- supporting human *and* artificial tutors
- Student modelling - what the machine needs to know about the student, pedagogically-speaking, about learning history/preferences
- Roles - Simulated students, Coaches, Tutors, Teachers,