March 31, 2009
Ontological comparisons as spatial relationships
Despite my new love for open notebook science, I have not been very loyal to the vision. All of my research notes lately are on paper. I blame it on having close-to-nil computer time. Alas! I will be so happy when mediatronic paper becomes cheap & affordable (heh, or even "in existence"!) so that I don't have to "wait for computer time" in order to share my thinking. I'd be able to carry around a crumpled up piece of paper in my pocket, write on it when I have a spare 15 seconds, and I would be able to categorize it and link it up properly in my blog from wherever I happen to be (walking the baby in the stroller, working in the kitchen, nursing, etc). Ah, yes. It would be grand.
Fubble wubbles; enough fantasizing, Frozone! =)
Lately, I've been thinking about how to squeeze my problem into a decision-theoretic framework. I laid out my problem as an influence diagram. This helped me identify where the information required for my problem comes from. Sources include:
- the learner's history
- assumptions projected onto the learner
- usage data from other, similar learners (this is the world of recommender systems)
All of these are "chance" nodes, which come in two types: states and observations. I figured that the states are your assumptions or projections about the world (bullet point #2 above), and that your observations come from hard inferences from real data (bullet points #1 and #3 above). (I tweeted about this a while ago; you probably saw it if you are following me on Twitter, heh.)
Decision nodes were where I found myself trying to build patterns of behaviour, such as teaching strategies. I thought that you could implement a teaching strategy as a policy (also discussed in my first decision theory entry).
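To keep myself honest about what all this would even look like in code, here's a minimal sketch (every name and number below is mine, purely illustrative -- not a real implementation): chance nodes split into states and observations, and a teaching strategy is a policy mapping observations to a decision.

```python
from dataclasses import dataclass

# Chance nodes come in two flavours: "state" nodes hold my projected
# assumptions about the learner, while "observation" nodes hold hard
# inferences from real data (the learner's history, usage data from
# similar learners).
@dataclass
class ChanceNode:
    name: str
    kind: str     # "state" or "observation"
    value: float  # e.g. estimated probability the learner has mastered a concept

# A teaching strategy as a policy: a rule mapping what we observe
# to the next teaching action (the decision node).
def review_or_advance_policy(observations):
    """Toy policy: advance when the observed mastery estimate is high."""
    mastery = next(o.value for o in observations if o.name == "mastery")
    return "advance" if mastery >= 0.7 else "review"

history = ChanceNode("mastery", "observation", 0.42)  # inferred from history
prior   = ChanceNode("aptitude", "state", 0.6)        # projected assumption

decision = review_or_advance_policy([history])
print(decision)  # -> review
```

The 0.7 threshold is arbitrary; the point is just the shape of the thing: observations flow into a policy, and the policy emits the decision.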
I've been using example teaching scenarios to try to lay everything out and abstract the "shape" of teaching. For some reason, I have this fixation in my head that ontological engineering will be an important tool here. Right now, I see it as a way to "rise above" and give my computation a little more subtlety.
A "Top Ontology" is an ontology that includes concepts common across domains, such as time, space, etc. This is introduced in Breuker et al., which I talk about in this other entry. What I'm getting at here is that ontologies have layers; if each ontology is itself a hierarchy, then categorizing the ontologies gives you a sort of hypergraph. (I'm not going to let my research veer in this direction yet because I'm in too much danger of sprouting fairy wings -- maybe I'll revisit when I have some mathematical foundations to keep me real.)
Getting back to the idea of the Top Ontology, I think that one (fundamental?) component that appears over and over in teaching is the ability to compare, contrast and present an idea as an example using a story or metaphor. It's like: how do you express one idea in terms of another? How do you compare two ontons? (I learned the term "onton" from someone else's blog, which I shared & commented on via my Google Reader. The title is Deciding, Learning.)
I'm trying to pick out my ontons for the process of teaching, and maybe these ontons will themselves form a hyper-ontology, sorta like the Top Ontology, but a level below that.
Being a mostly visual person, I naturally thought that the obvious way to compare two ideas is to compare their shapes. Contrasting points are represented by large distances, and similar points by short distances.
So this forks into two problems. First, how do you represent a subset of a task domain ontology as a shape? I was sure I'd found a paper that might give me a lead, but now I can't find it. Was this it? I thought the paper was more about document profiling. Gah, I'll come back here if I find it. (UPDATE: May 3rd, 2009 - found it! Document identification using shape trees by [Henker & Petersohn, 2009].)
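While I hunt around, here's a toy of my own just to make problem #1 concrete for myself (this is my invention, not from that paper): treat the "shape" of a subset of concepts as its matrix of pairwise graph distances within the ontology. Concepts that are many hops apart in the ontology end up far apart in the shape.

```python
from collections import deque

# Toy task-domain ontology as an adjacency list (edges = "related-to").
# All concepts and links here are made up for illustration.
ontology = {
    "fractions":      ["division", "ratios"],
    "division":       ["fractions", "multiplication"],
    "ratios":         ["fractions", "proportions"],
    "multiplication": ["division"],
    "proportions":    ["ratios"],
}

def shape_of(subset, graph):
    """The 'shape' of a subset of concepts: its matrix of pairwise
    shortest-path (hop) distances within the ontology graph."""
    def hops(src, dst):
        # Plain breadth-first search for the shortest hop count.
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            if node == dst:
                return d
            for nxt in graph.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
        return float("inf")  # disconnected concepts are "infinitely" far apart
    return {(a, b): hops(a, b) for a in subset for b in subset}

shape = shape_of(["fractions", "multiplication", "proportions"], ontology)
print(shape[("fractions", "multiplication")])  # -> 2
```

This throws away everything about the ontology except relative closeness, which is exactly the part I care about for contrasting ideas.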
Second, how do you compare the shapes and work them into your plan? The problem of spatial navigation is commonly tackled in robotics. This is where you'll find the nuances of different types of teaching -- how you handle the contrasts of the shapes.
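For comparing the shapes themselves, the robotics and vision folks already have standard measures; the Hausdorff distance is one I know of -- roughly, how far the worst-matched point of one shape is from the other shape. A quick sketch with made-up point sets (the "ideas" here are just coordinates I picked to illustrate):

```python
import math

def hausdorff(shape_a, shape_b):
    """Hausdorff distance between two point sets: the farthest any
    point in one shape is from its nearest neighbour in the other.
    Large value = strongly contrasting ideas; small = close ones."""
    def directed(xs, ys):
        return max(min(math.dist(x, y) for y in ys) for x in xs)
    return max(directed(shape_a, shape_b), directed(shape_b, shape_a))

idea_a = [(0, 0), (1, 0), (0, 1)]
idea_b = [(0, 0), (1, 0), (0, 4)]  # one strongly contrasting point

print(hausdorff(idea_a, idea_b))  # -> 3.0
```

The nice property for teaching: the two ideas agree almost everywhere, but the single contrasting point dominates the distance -- which feels right for "here's where this analogy breaks down".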
Gosh, I'm treading on shaky ground here. But then again, I always do, don't I? And I love it. LOL
Looking forward to the next post!
Index to Steph's Notes
Feb. 24th, 2007 - Weee! This new part of my website is not an entry, but rather a permanent fixture whose purpose is to "Look Down on All Those Notes With Some Grand Vision of Organization". Wish me luck. LOL
- Representing meta-data (fuel) & the different kinds of "hooks" that intelligent systems can use (how fuel is injected into the motor of the engine)
- Motivation: Semantic net / Rationalizable to a machine
- Semantic network
- Genetic graph
- Prerequisite AND/OR graph
- Constraint Satisfaction Problems
- Bayesian networks / causal graphs
- Technology & Philosophy: RDF, modus ponens,
- Predicates, Logic & situation calculus
- What kinds of data? - What kinds of meta-data would an AIEd system possibly need, and how is it represented?
- task domain knowledge
- "is-prerequisite-to"-type knowledge
- interactions with learning objects & other learners - (location, composition is-a/part-of, sequencing by restricting navigation, personalization, ontologies for LO context)
- lesson plans, curriculum plans, practicing sessions (What is stored, what is generated on the fly? What is remembered?)
- How to organize it - When is it stored in a database? Meta-data? Agent memory banks? Protocols? Repositories? XML files? Home-servers? WSDL services? Frameworks? Portable banks? P2P access?
- Database of object-agent interactions
- Concept of "Home" on a P2P network -- maybe the bulk of a learning object's usage data is on its home server and can be queried using WSDL or something? Similar homes for each student's usage history, etc. Baggage problem.
- Links to the ontologies
- referring to a concept/relationship - ex. AgentOwl?
- Generation of this data
- Rationalization: For use by other AIEd systems
- What is generated - discuss items under part I.C.
- When it's generated - describe procedural model, which parts of the engine generate what (is-a/part-of data, XML feeds, web services, metadata about groups and collaboration, protocols, examples: Friend of a Friend (FOAF) project)
- Technical notes of HOW it's generated: JENA, issues of implementation demo, my Hermione & Ron agent examples, lol
- Usage of this generated data - see part IV. A.
- Given the engine, who uses it?
- Students / Learners / "Me"
- instructional planning, student model, pre-requisites, tutoring, coaching, collaboration, constructivism
- Teachers / Educators / "Me"
- putting together lessons
- be able to browse through task domain knowledge in an objective / encyclopaedia format, then be able to pick-and-choose what you need for your students
- compose examples, design explanations, pull together diagrams, learning objects, etc. Haystack Relo?
- Administration / Government / Structure / Crowd Control
- as restrictions/obstacles/sand pit to the robot in agent environment
- can't just have a swarm of students and teachers out there -- need structure of courses, curriculum, objectives, requirements (at least, we do in this day and age!) - Report cards, evaluation, feedback
- government, marks, certificates, requirements, funding, curriculum, attendance, delinquency, non-attendance, motivation
- school's images, goals, strengths, payroll, HR, security, accounts, permissions, privacy
- registration, failed courses
- User Environment -- How does this engine work? What does the user see on the screen?
- Introduction - Given a background in educational psychology, how does the system present itself -- what does the user see, and where does this data come from? (Links to thoughts from part I.)
- Task Domain Browsing - Suppose you're just idly browsing through the "raw" content. How would it look when it's not wrapped in a learning context or lesson or tutorial or anything? 'Cross between browsing a raw task domain ontology and browsing a learning object repository.
- Cleaning up the data -- Visualizing the data for humans to pick through the task domain and work on it. Suppose the "Subject Expert" discovers an advancement in science and needs to update the "world's" domain knowledge. (I used the "Subject Expert" terminology from Ontologies to Support Learning Design Context - Thanks Chris) How would they make corrections to ontologies and learning objects, or at least point the users of "old" objects towards adopting the newer ones.
- "Modes" - Learning & Lessons / Checklist - Homework, Assignments, Courses being taken / Collaborative mode / Teaching mode / Calendar-email-administrative mode -- See also the different kinds of scenarios in the ActiveMath system
- Evolution of this engine
- target some key implementation hooks discussed in part I - design an experiment/demo
- scrape a page - (Note: scraping can only give objective data, not in-context data)
- LO repository - related to browsing the task domain?
- a learner's "To Do" list - where does it come from? Assignments, courses.
- sample group scenario
- sample teacher lesson planning
- sample data "left behind"
- sample use of that data
- Data mining (for what? lol)
- discovery / generation of ontologies - when do you need to hunt for them, and when do you have to have a solidly-known & predictable ontology?
- I/O - where it happens, which languages, protocols, which agents perform I/O and when, percepts, actuators
- Role Assignments
- My Environment Adapts to me
- Displaying feedback from the server on JSP pages (Software engineering considerations)
- Sketching out a design (Content planning vs. Delivery planning)
- agent negotiations / social structures / ummm... Web 2.0 ?
- garbage collection of meta data
- Artificial Intelligence & Evolution
- Memory Culling: Necessary part of intelligence? (artificial or human)
- Applications for the Genetic/Evolutionary algorithm
- open learning environments
- Agents, pets, grouping, Community modelling
- Protocols - finding groups, cyber dollars, state diagrams (?)
- "Community Studies" - graphs & communication hubs, types of communities (free-for-all, hierarchy of authority, etc.)
- implications of joining a community - what do you share, which parts of your student model are relevant
- Walls & sand traps -- deliberate restrictions as problem-solving for learning
- Communication channels - individual-to-individual, individual-to-community, chat channels, agent-only "administrative" communications, ex. requests for related learning objects in a particular community, etc.
- Educational/Pedagogical focus (this part probably shouldn't be its own section but rather be incorporated into the whole picture; it's separate for me right now because I'm still only just starting to learn about it.)
- Semantics - what there is to talk about in Education
- ex. Merrill's First Principles of Instruction, linking educational terms to AI terms
- Pedagogical skills for tutors -- supporting human *and* artificial tutors
- Student modelling - what the machine needs to know about the student, pedagogically-speaking, about learning history/preferences
- Roles - Simulated students, Coaches, Tutors, Teachers,