February 03, 2007
Memory Culling: Necessary part of intelligence? (artificial or human)
Speaking of manually imposed amnesia: I've been picking up on trends lately about how a truly intelligent being (artificial or human) has mechanisms for long-term memory retention. Given the huge overload of information available to both humans and machines (via sensory input, information on the web, agent-to-agent interactions, etc.), the only way to keep coherent and useful long-term memories is to use a mechanism for culling out only the minimum, key "hooks" to past events that will enable you (artificial or human) to put back together the information you need from your past experiences.
In Chapter 4 of Ray Kurzweil's The Age of Spiritual Machines, under the sub-heading "The Holographic Nature of Human Memory", he discusses how we humans don't store every memory of a friend's face as seen from different angles and under different lighting conditions; instead, our brains store her face as a series of synaptic strengths. I'm no biochemist, but I understand that the "immediate" or "what I'm experiencing now" recognition occurs in the cortex (outer layer) of the brain, while (according to what I read in Kurzweil's book) longer-term memories are stored deeper in the brain, chemically encoded in RNA or in peptides. Somehow, the human brain decides which experiences from the cortex get filtered down and stored more deeply for long-term memory.
In Edward Hallowell's CrazyBusy, he explains "Rhythm", a mechanism for putting some busy jobs in our lives on auto-pilot so that we may better focus on the unpredictable, impossible-to-prepare-for demands that come flying into our lives. Like riding a bike or playing the piano: at first, some actions are difficult for humans and require meticulous attention to every movement. This activity happens in the brain's frontal lobes. Eventually, though, as our brains learn the activities, the actions move back into the cerebellum. When this part of the brain is working on the activity, we don't have to pay attention to every little detail -- each finger movement on the piano, for example -- and our minds now have room to concentrate on other things, such as putting "expression and shading" into the song (as recounted in CrazyBusy), or carrying on a conversation with a friend as you ride your bike. This "clearing out" of the attention on meticulous frontal-lobe activity -- I wonder, is this also a function/ability required for intelligence?
Bringing all this back to the core of this blog -- AI in education: grant that "you" are an artificial agent representing, say, a learning object on the semantic web. Your job is to make yourself useful under different contexts as various learners and learning-facilitators come along with their instructional plans and lesson designs and query "you", along with millions and billions of other agents like yourself, to see if "you" will suit the purpose of the context at hand. Supposing that you do meet these criteria, you'll have to negotiate with other agents as you settle yourself into an arrangement with other learning objects for, say, a quick brush-up lesson for a student -- then you'll record your interactions with the other learning-object agents and also with the student's agent as the student breezes by on their learning journey. At the end of just this one learning interaction, you will be left with a snapshot of the student model and snapshots of how you related to other learning objects (as in, if I'm not too far off the mark, [McCalla 2004: The Ecological Approach to the Design of E-Learning Environments: Purpose-based Capture and Use of Information About Learners] and [Vassileva, McCalla and Greer, 2003: Multi-Agent Multi-User Modelling in I-Help]). That's a lot of memories. How do you figure out what to keep, so that you know how "you" are useful relative to other learning objects and to certain types of students, using your long-term memories? What do you keep, and what do you discard because you can re-assemble it later? What core things do you need in order to be able to re-assemble it later?
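One way to picture that culling decision: keep a compact "hook" per interaction and discard the bulky transcript once its estimated future value drops. Here's a minimal Python sketch under my own assumptions -- a relevance score that is just usefulness decayed by age -- and all the class and field names (Interaction, MemoryStore, etc.) are my invention, not anything from McCalla's or Vassileva's actual systems:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Interaction:
    partner: str      # the other agent (a learner's agent, another LO, ...)
    outcome: float    # how useful this object proved in that context (0..1)
    timestamp: float
    details: dict = field(default_factory=dict)  # full transcript: bulky

class MemoryStore:
    """Holds full recent episodes; culls them down to compact hooks."""

    def __init__(self, capacity=100):
        self.capacity = capacity
        self.episodes = []   # full short-term memories
        self.hooks = []      # compact long-term "hooks"

    def record(self, episode: Interaction):
        self.episodes.append(episode)
        if len(self.episodes) > self.capacity:
            self.cull()

    def cull(self):
        # Keep only a (partner, outcome, when) hook per episode and drop
        # the bulky details, betting that anything important can be
        # re-assembled later from the hooks plus queries to the partner
        # agents' own memories.
        now = time.time()
        for ep in self.episodes:
            age_days = (now - ep.timestamp) / 86400.0
            relevance = ep.outcome / (1.0 + age_days)  # decay with age
            if relevance > 0.1:
                self.hooks.append((ep.partner, round(ep.outcome, 2),
                                   ep.timestamp))
        self.episodes.clear()
```

The hook triple is exactly the "what core things do you need to re-assemble later" question in miniature: enough to know who you were useful to and how useful you were, with the detail left for someone else's home server.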
Memory Culling & intelligence. Hmmmm. Surely this is a topic in AI studies somewhere...? Ah, ha! I smell a trail. 'Time to go dig up some papers. =)
A great blog you've been hosting, truly!
I don't think I am the only reader here, but I guess I'm very welcome to post something :-)
The only thing I could understand 100% was the part about the CrazyBusy book you were talking about. Although I'm not sure how exactly the book helps, I would like to buy one and read it JUST to reduce the possibility of falling while walking across the damn slippery ice ... (watch out!)
Anyway, good luck with your paper search ;-) and I'll come back some time and dig up something cool
Posted by: yudi XUE at February 19, 2007 05:56 AM
Index to Steph's Notes
Feb. 24th 2007 - Weee! This new part of my website is not an entry, but rather a permanent fixture whose purpose is to "Look Down on All Those Notes With Some Grand Vision of Organization". Wish me luck. LOL
- Representing meta-data (fuel) & the different kinds of "hooks" that intelligent systems can use (how fuel is injected into the motor of the engine)
- Motivation: Semantic net / Rationalizable to a machine
- Semantic network
- Genetic graph
- Prerequisite AND/OR graph
- Constraint Satisfaction Problems
- Bayesian networks / causal graphs
- Technology & Philosophy: RDF, modus ponens,
- Predicates, Logic & situation calculus
- What kinds of data? - What kinds of meta-data would an AIEd system possibly need, and how is it represented?
- task domain knowledge
- "is-prerequisite-to"-type knowledge
- interactions with learning objects & other learners - (location, composition is-a/part-of, sequencing by restricting navigation, personalization, ontologies for LO context)
- lesson plans, curriculum plans, practicing sessions (What is stored, what is generated on the fly? What is remembered?)
- How to organize it - When is it stored in a database? Meta-data? Agent memory banks? Protocols? Repositories? XML files? Home-servers? WSDL services? Frameworks? Portable banks? P2P access?
- Database of object-agent interactions
- Concept of "Home" on a P2P network -- maybe the bulk of a learning object's usage data is on its home server and can be queried using WSDL or something ? Similar homes for each student's usage history, etc. Baggage problem.
- Links to the ontologies
- referring to a concept/relationship - ex. AgentOwl?
- Generation of this data
- Rationalization: For use by other AIEd systems
- What is generated - discuss items under part I.C.
- When it's generated - describe procedural model, which parts of the engine generate what (is-a/part-of data, XML feeds, web services, meta-data about groups and collaboration, protocols; examples: Friend of a Friend (FOAF) project)
- Technical notes of HOW it's generated: JENA, issues of implementation demo, my Hermione & Ron agent examples, lol
- Usage of this generated data - see part IV. A.
- Given the engine, who uses it?
- Students / Learners / "Me"
- instructional planning, student model, pre-requisites, tutoring, coaching, collaboration, constructivism
- Teachers / Educators / "Me"
- putting together lessons
- be able to browse through task domain knowledge in an objective / encyclopaedia format, then be able to pick-and-choose what you need for your students
- compose examples, design explanations, pull together diagrams, learning objects, etc. Haystack Relo?
- Administration / Government / Structure / Crowd Control
- as restrictions/obstacles/sand pit to the robot in agent environment
- can't just have a swarm of students and teachers out there -- need structure of courses, curriculum, objectives, requirements (at least, we do in this day and age!) - Report cards, evaluation, feedback
- government, marks, certificates, requirements, funding, curriculum, attendance, delinquent, non-attending, motivation
- school's images, goals, strengths, payroll, HR, security, accounts, permissions, privacy
- registration, failed courses
- User Environment -- How does this engine work? What does the user see on the screen?
- Introduction - Given a background in educational psychology, how does the system present itself -- what does the user see, and where does this data come from? (Links to thoughts from part I.)
- Task Domain Browsing - Suppose you're just idly browsing through the "raw" content. How would it look when it's not wrapped in a learning context or lesson or tutorial or anything? 'Cross between browsing a raw task domain ontology and browsing a learning object repository.
- Cleaning up the data -- Visualizing the data for humans to pick through the task domain and work on it. Suppose the "Subject Expert" discovers an advancement in science and needs to update the "world's" domain knowledge. (I used the "Subject Expert" terminology from Ontologies to Support Learning Design Context - Thanks Chris) How would they make corrections to ontologies and learning objects, or at least point the users of "old" objects towards adopting the newer ones.
- "Modes" - Learning & Lessons / Checklist - Homework, Assignments, Courses being taken / Collaborative mode / Teaching mode / Calendar / email / administrative mode -- See also the different kinds of scenarios in the ActiveMath system
- Evolution of this engine
- target some key implementation hooks discussed in part I - design an experiment/demo
- scrape a page - (Note: scraping can only give objective data, not in-context data)
- LO repository - related to browsing the task domain?
- a learner's "To Do" list - where does it come from? Assignments, courses.
- sample group scenario
- sample teacher lesson planning
- sample data "left behind"
- sample use of that data
- Data mining (for what? lol )
- discovery / generation of ontologies - when do you need to hunt for them, and when do you have to have a solidly-known & predictable ontology?
- I/O - where it happens, which languages, protocols, which agents perform i/o and when, precepts, actuators
- Role Assignments
- My Environment Adapts to me
- Displaying feedback from the server on JSP pages (Software engineering considerations)
- Sketching out a design (Content planning vs. Delivery planning)
- agent negotiations / social structures / ummm... Web 2.0 ?
- garbage collection of meta data
- Artificial Intelligence & Evolution
- Memory Culling: Necessary part of intelligence? (artificial or human)
- Applications for the Genetic/Evolutionary algorithm
- open learning environments
- Agents, pets, grouping, Community modelling
- Protocols - finding groups, cyber dollars, state diagrams (?)
- "Community Studies" - graphs & communication hubs, types of communities (free-for-all, hierarchy of authority, etc.)
- implications of joining a community - what do you share, which parts of your student model are relevant
- Walls & sand traps -- deliberate restrictions as problem-solving for learning
- Communication channels - individual-to-individual, individual-to-community, chat channels, agent-only "administrative" communications, ex. requests for related learning objects in a particular community, etc.
- Educational/Pedagogical focus (this part probably shouldn't be its own section but rather incorporated into the whole picture, but it's separate for me right now because I'm still only just starting to learn about it.)
- Semantics - what there is to talk about in Education
- ex. Merrill's First Principles of Instruction, linking educational terms to AI terms
- Pedagogical skills for tutors -- supporting human *and* artificial tutors
- Student modelling - what the machine needs to know about the student, pedagogically-speaking, about learning history/preferences
- Roles - Simulated students, Coaches, Tutors, Teachers