Cognitive Context

Eliza brings a couple of things to the table that other systems don’t, mostly because it offers a way to quickly load some structure into a system and then run test data against that structure. It’s a way to short-circuit starting from zero knowledge (a newborn infant) and bootstrap yourself a three-year-old. A simple example is extracting sentences from a paragraph. It can be used as a pre-parser, a post-parser, or a way of rephrasing data; rephrasing is a handy tool for testing validity.
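
As a minimal sketch of that idea in Python: the pattern/template pairs are the “structure” quickly loaded into the system, and the test data is run against them. The rules and example sentences below are invented for illustration, not taken from any real Eliza script.

```python
import re

# Pattern/template pairs: the "structure" loaded into the system.
# These rules are made up for illustration.
RULES = [
    (re.compile(r"(.*) is (.*)", re.IGNORECASE), r"Why do you say \1 is \2?"),
    (re.compile(r"I need (.*)", re.IGNORECASE), r"What would \1 give you?"),
]

def split_sentences(paragraph: str) -> list[str]:
    """Naive sentence extraction: the pre-parser step."""
    return [s.strip(" .!?") for s in re.split(r"(?<=[.!?])\s+", paragraph)
            if s.strip(" .!?")]

def rephrase(sentence: str) -> str | None:
    """Rephrase a sentence via the first matching rule, or return None."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return match.expand(template)
    return None

for sentence in split_sentences("The sky is blue. I need coffee."):
    print(sentence, "->", rephrase(sentence))
```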

It provides a vehicle for asking questions, but it also offers an approach to determining the relevance of information within the available context. I say available context because it’s often interesting to limit the information available to a cognitive process.

You often ask questions about statements you encounter: who, what, where, when, why.
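
A toy version of that questioning step might look like this: for each statement, queue the standard questions and keep only the ones the available context can’t yet answer. The context structure here is invented for illustration.

```python
# The five standard questions asked of each statement encountered.
FIVE_WS = ("who", "what", "where", "when", "why")

def open_questions(statement: str, context: dict[str, dict[str, str]]) -> list[str]:
    """Return the W-questions the context cannot yet answer for a statement."""
    known = context.get(statement, {})
    return [w for w in FIVE_WS if w not in known]

# A context that already answers "when" and "what" for one statement.
context = {"the server restarted": {"when": "03:00", "what": "restart"}}
print(open_questions("the server restarted", context))
# -> ['who', 'where', 'why']
```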

You’ll also have operational modes that you switch between: operational modes help to define how a cognitive process should approach a problem.

In the human model, think along the lines of how your state of mind changes based on the situational aspects of an encounter. The context of a situation can be external, reflective or constructed (there’s a small code sketch after the three descriptions below).

External contexts are where we are expected to respond, maybe not to all input, but to some. These are often the situations where an action or a consensus is required.
Reflective contexts are where information is absorbed and processed, generally to bring out understanding or knowledge, but also when a pattern is reverse-fitted: not proving a fact, but re-assimilating input so that it correlates.
Constructed contexts are the what-if situations and problem solving. They are similar to the reflective context, but more about adjusting previous input to test its fitness against something new while attempting to maintain its validity against other knowledge.
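
Here are those three modes made concrete in a toy enum. The behaviour of each branch is only a sketch of the descriptions above; the knowledge store and the what-if rule are stand-ins.

```python
from enum import Enum, auto

KNOWLEDGE: list[str] = []  # stand-in for the knowledge store

class Context(Enum):
    EXTERNAL = auto()     # expected to respond to (some) input; action or consensus
    REFLECTIVE = auto()   # absorb and re-assimilate input; no output required
    CONSTRUCTED = auto()  # what-if: adjust prior input and test its fitness

def handle(mode: Context, statement: str) -> str | None:
    """The same input is treated differently depending on the active mode."""
    if mode is Context.EXTERNAL:
        return f"response: {statement}"
    if mode is Context.REFLECTIVE:
        KNOWLEDGE.append(statement)                       # absorb quietly
        return None
    variant = statement.replace(" is ", " could be ")     # a crude what-if
    return variant if variant not in KNOWLEDGE else None

print(handle(Context.REFLECTIVE, "the sky is blue"))  # None: just absorbed
print(handle(Context.EXTERNAL, "the sky is blue"))    # produces a response
```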

You’ll often start in a reflective context as you assimilate information, then move into a constructed context to maximise knowledge domains. Then you’ll edge into the external context while running reflective contexts in the background. Periodically you’ll create constructed contexts to bootstrap knowledge domains and to learn from how knowledge domains are created (which in turn tunes how the reflective domains obtain information).
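
Reusing the `Context` and `handle` sketch above, that progression might be driven like this; the scheduling is purely illustrative.

```python
from itertools import cycle

# Foreground modes progress reflective -> constructed -> external, while a
# background reflective pass keeps absorbing the same input.
foreground = [Context.REFLECTIVE, Context.CONSTRUCTED, Context.EXTERNAL]
background = cycle([Context.REFLECTIVE])

for step, statement in enumerate(["fact one", "fact two", "a question"]):
    print(handle(foreground[min(step, len(foreground) - 1)], statement))
    handle(next(background), statement)   # background reflection, output ignored
```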

Essentially this is a lot of talk for saying that you don’t always need to provide an output.  🙂

Now, I mentioned at the beginning that it’s often interesting to limit the information available to an available context; often it’s not only interesting but also important. The available context is the set of prior knowledge, plus the rules (or the approach) for applying the relationships between that information and its surrounding knowledge.

If all knowledge is available to an available context, and the same approach is used for processing that information, then it’s hard for the system to determine the relevance or importance of the facts it should extract from data. In essence, the system can’t see the wood for the trees.
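
One way to picture an available context is as a deliberately limited slice of prior knowledge bundled with the approach used to relate new input to it. All names and the scoring rule below are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AvailableContext:
    knowledge: set[str]                       # the limited prior knowledge
    relate: Callable[[str, set[str]], float]  # the approach for scoring relevance

    def relevance(self, statement: str) -> float:
        return self.relate(statement, self.knowledge)

def word_overlap(statement: str, knowledge: set[str]) -> float:
    """Crude relevance: the fraction of a statement's words seen before."""
    words = set(statement.lower().split())
    seen = {w for fact in knowledge for w in fact.lower().split()}
    return len(words & seen) / len(words) if words else 0.0

ctx = AvailableContext({"the disk is full", "logs rotate nightly"}, word_overlap)
print(ctx.relevance("the disk filled up overnight"))  # 0.4
```

The point is that a different knowledge slice or a different `relate` approach gives a different answer for the same input, which is exactly what makes relevance determinable at all.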

Think about how you tackle a problem you encounter: you start with one approach based on your experience (so you’re selecting and limiting the tools you’re going to apply to the situation) and, based on how the interaction with the situation goes, you adjust. Sometimes you’ll find that you fall back to something very basic (keep it simple, stupid; one step at a time); at other times you’ll employ more complex toolsets.

The Eliza approach can be used not just as a processing engine, but also as a way of allowing cognitive systems to switch or activate the contexts I mentioned earlier. It’s also a handy pre-parser for input into SOAR.
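
For instance, Eliza-style rules can activate contexts instead of producing replies, with the routed statement then handed on (e.g. as pre-parsed input to a SOAR agent). The trigger patterns below reuse the `Context` enum from the earlier sketch and are purely illustrative.

```python
import re

# Matched patterns switch the operational mode rather than generate output.
CONTEXT_TRIGGERS = [
    (re.compile(r"^(what if|suppose)\b", re.IGNORECASE), Context.CONSTRUCTED),
    (re.compile(r"\?\s*$"), Context.EXTERNAL),
]

def route(statement: str) -> Context:
    for pattern, mode in CONTEXT_TRIGGERS:
        if pattern.search(statement):
            return mode
    return Context.REFLECTIVE   # default: absorb quietly

print(route("what if the cache is stale"))  # Context.CONSTRUCTED
print(route("is the cache stale?"))         # Context.EXTERNAL
print(route("the cache is stale"))          # Context.REFLECTIVE
```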

One of the reasons for these recent posts is a visit to zbr’s site and reading about his interest in NLP and cognition. I stumbled over his site when looking to understand more about POHMELFS, Elliptics and his DST implementation. I’ve been looking for a parallel, distributed storage mechanism that is fast and supports a decent approach to versioning for an NLP & MT approach. Distribution and parallelism are required because I implement a virtualised agent approach, which allows me to run modified instances of knowledge domains and/or rules to create dynamic contexts. Versioning is important because it allows working with information from earlier time periods, replaying the formation of rules and assumptions, and greatly helps to roll back processing should the current decision tree appear fruitless. In human cognitive terms, these act as subconscious processing domains.
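
A toy of that versioning requirement: snapshot the knowledge store before exploring a decision branch, and roll back if the branch proves fruitless. Everything below is in memory and invented for illustration; the real requirement is a fast, distributed, versioned store, hence the interest in Elliptics and POHMELFS.

```python
class VersionedStore:
    def __init__(self) -> None:
        self.facts: set[str] = set()
        self.snapshots: list[set[str]] = []

    def snapshot(self) -> int:
        """Record the current state and return its version number."""
        self.snapshots.append(set(self.facts))
        return len(self.snapshots) - 1

    def rollback(self, version: int) -> None:
        """Restore the state recorded at the given version."""
        self.facts = set(self.snapshots[version])

store = VersionedStore()
store.facts.add("assumption A")
v = store.snapshot()
store.facts.add("derived from A")   # explore a branch
store.rollback(v)                   # branch looked fruitless; roll back
print(store.facts)                  # {'assumption A'}
```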

Saturday 20th June, 2009 at 3:08 pm

