Steven Gibson
SuperAnt Computing, Los Angeles, USA

Introduction
This presentation addresses uncertainty in data sets gathered about animal responses, using multi-step observations and model selection. Uncertainty is ubiquitous in studies of animal response and behavior. This presentation proposes calculating animal choices using rule tables. We assume that animal behavior is responsive: a stimulus or sensory input is followed by a behavior choice or triggered result. The challenges include:
1. incomplete, imprecise, and uncertain data,
2. building rule sets, and
3. determining trigger points.
It is postulated that triggered behaviors produce classifiable posterior results.
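As an illustration of the rule-table idea, a stimulus-to-behavior lookup might be sketched as below. The state variables, thresholds, and behavior labels are hypothetical, invented for this sketch rather than taken from any collected data.

```python
# Hypothetical rule table: each rule maps an observed state to a
# triggered behavior. States and thresholds are illustrative only.

def lookup_behavior(state, rule_table):
    """Return the first behavior whose rule matches the state, or None."""
    for condition, behavior in rule_table:
        if condition(state):
            return behavior
    return None

# Example rules for a hypothetical stimulus/response table.
rules = [
    (lambda s: s["alertness"] > 0.8, "bark"),
    (lambda s: s["alertness"] > 0.4, "orient"),
    (lambda s: True, "rest"),  # default when no stimulus is strong enough
]

print(lookup_behavior({"alertness": 0.9}, rules))  # prints "bark"
```

Ordering the rules from most to least specific, with a catch-all last, keeps every observed state classifiable, which matters when the data are incomplete.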
“Is there any point to which you would wish to draw my attention?”

There are real results in observing animal behavior. We need to find their significance in measurable ways, even when the data are sparse or uncertain.
Uncertainty & Methods
Incompleteness and uncertainty are a regular part of data collection, and uncertainty is a fundamental component of model selection. Data uncertainty has several origins: imprecision due to a lack of valid observations, distortions in the observational process, or errors in transformations or calculations applied to the data after collection. Data uncertainty can reduce our ability to measure the quality of our predictions. Statistical inference with incomplete data faces real limits, and we need to take care with our assumptions. Fortunately, incompleteness and uncertainty can sometimes themselves represent knowledge or help build priors. As models are built for selection, the data and models need to be matched.
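One minimal way sparse observations can still inform a prior is a conjugate Beta-Binomial update for a binary behavior. This is a sketch under assumptions the text does not specify; the counts and the uniform starting prior are hypothetical.

```python
# Sketch: turning sparse counts into a Beta prior for a binary behavior
# (e.g. bark / no bark). All counts here are hypothetical.

def beta_posterior(successes, failures, prior_a=1.0, prior_b=1.0):
    """Update a Beta(prior_a, prior_b) prior with observed counts."""
    a = prior_a + successes
    b = prior_b + failures
    mean = a / (a + b)  # posterior mean of the behavior probability
    return a, b, mean

# Three barks observed in seven trials, starting from a uniform prior.
a, b, mean = beta_posterior(3, 4)
print(a, b, round(mean, 3))  # 4.0 5.0 0.444
```

The resulting posterior can serve as the prior for the next round of collection, which is one concrete sense in which incompleteness still "represents knowledge".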
The data embodied in the priors must be applicable to the data currently being measured.
Decision rules are used to serve predictive functions.
The goal is to select the models that give the best predictions of future observations generated from similar data.
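This selection criterion can be sketched as scoring each candidate model by its predictive log-likelihood on held-out observations and keeping the best. The two candidate models and the held-out outcomes below are invented for illustration.

```python
# Toy model selection: score candidate models by predictive
# log-likelihood on held-out binary outcomes, then pick the best.
import math

def log_score(model_prob, outcomes):
    """Sum of log predictive probabilities for binary outcomes."""
    return sum(math.log(model_prob if y else 1.0 - model_prob)
               for y in outcomes)

held_out = [1, 1, 0, 1, 0, 1]  # hypothetical future observations
candidates = {"class 1": 0.7, "class 2": 0.3}  # each model's P(behavior)

best = max(candidates, key=lambda m: log_score(candidates[m], held_out))
print(best)  # prints "class 1": it scores higher on this held-out set
```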
Animal Behavior
We use a thought experiment of dog behavior, based on a hypothetical dog choosing to bark. Animal behaviors often seem computationally and practically unpredictable. Animal behavior has a combinatorial property that requires repeated model selection. Behaviors involve cycles of needs, desires, responses, and triggers. They are highly non-linear and subject to variation and uncertainty in most priors. But we believe a large enough set of observations will exhibit stochastic characteristics that point to repeated patterns. The behaviors can be built into classifiers within multi-level models for model selection. These behavior models demonstrate patterns that can only be predicted using multi-level selection. The difficulty in selecting models for animal behavior may be due in part to the combinatorial nature of observed behavior.
Our hypothetical example uses Canis lupus familiaris, the common dog. We postulate that some behaviors are instinctual while others are modified by learning. Our single example addresses the prediction of a dog barking based on observed data limited to sleep, relaxation, alertness, and degrees of barking. Using survey data, we postulate that the trigger for barking is high alertness or previous barking. We use a two-level model selection to produce the prediction.
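The two-level scheme might be sketched as a first level that classifies the dog's observed state and a second level that applies the stated trigger rule ("high alertness or previous barking"). The numeric thresholds separating sleep, relaxation, and alertness are hypothetical.

```python
# Sketch of the two-level prediction for the hypothetical dog.

def classify_state(alertness):
    """Level 1: map a numeric alertness reading to a coarse state."""
    if alertness < 0.2:
        return "sleep"
    if alertness < 0.6:
        return "relaxed"
    return "alert"

def predict_bark(alertness, was_barking):
    """Level 2: the trigger rule -- high alertness or previous barking."""
    return classify_state(alertness) == "alert" or was_barking

print(predict_bark(0.9, False))  # True: high alertness triggers barking
print(predict_bark(0.3, True))   # True: previous barking triggers barking
print(predict_bark(0.3, False))  # False: relaxed and quiet
```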
Behavior predictions result from models chosen based upon and mediated by trigger rules.
This is NOT an attempt to model cognitive processes. The model selection is limited to prior data and posterior data.
Priors To Decisions
In building and selecting models, priors are used instead of likelihoods. These models are meant to predict outcomes. The prior knowledge is used to develop classifiers and derive predictions: given data belongs either to class 1 or to class 2. Priors are collected on behaviors of the animal such as alertness, barking, and sleep.
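A minimal sketch of this two-class decision, assuming invented class priors and per-class likelihoods for a single observation:

```python
# Sketch: Bayes' rule over two behavior classes. The priors and
# likelihoods below are hypothetical, for illustration only.

def posterior_classes(likelihood_c1, likelihood_c2, prior_c1=0.5):
    """Return (P(class 1 | data), P(class 2 | data))."""
    prior_c2 = 1.0 - prior_c1
    evidence = likelihood_c1 * prior_c1 + likelihood_c2 * prior_c2
    return (likelihood_c1 * prior_c1 / evidence,
            likelihood_c2 * prior_c2 / evidence)

# An observation three times as likely under class 1 as under class 2.
p1, p2 = posterior_classes(0.6, 0.2, prior_c1=0.5)
print(round(p1, 2), round(p2, 2))  # 0.75 0.25
```

The data is assigned to whichever class has the larger posterior; with equal priors this reduces to comparing the likelihoods.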
Inference
Rule Tables
We base the classifiers on observed data. Of course, we have selected a subset of all possible data, and not all data is observable. The important assumption is that with repeated runs we will obtain accurate predictive ability even without filling in the missing data. The key is the empirical nature of the cycle: concerned only with the posterior results, we should move closer and closer to accuracy with repetition. Our predictive inference is made with a sub-sample of the data. The steps involve building a set of utility models to be selected; then, based on outcomes, the models are refined.
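The cycle of sub-sampling, prediction, and refinement can be sketched as a loop. The refinement step here is a naive blend of the current model with each sub-sample's empirical rate, and the simulated data stream is hypothetical; it only illustrates how repetition moves the model toward the observed pattern.

```python
# Sketch of the empirical refinement cycle: repeatedly sub-sample the
# observations and nudge the model's bark-rate estimate toward each
# sub-sample's empirical rate. Data are simulated for illustration.
import random

random.seed(0)
true_rate = 0.7  # hidden "real" bark probability, used only to simulate
observations = [random.random() < true_rate for _ in range(500)]

estimate = 0.5  # initial model: bark and no-bark equally likely
for cycle in range(10):
    sample = random.sample(observations, 50)   # sub-sample each cycle
    empirical = sum(sample) / len(sample)      # posterior-only signal
    estimate = 0.5 * estimate + 0.5 * empirical  # refine the model

print(round(estimate, 2))  # close to the underlying rate after 10 cycles
```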
This hypothetical data is meant to represent repeated data collection of animal behaviors that help choose the models. Either class 1 or class 2 will fit the data best.
Discussion & Future Work
The problem of incomplete data is pervasive in statistical practice. Our ability to validate conclusions can be limited by incomplete and uncertain priors. The process of using levels of model selection recreates the scientific process: repeating cycles of evidence collection, context-dependent interpretation, evidence weighting, and production of coherent results. Uncertainty is inherent in this approach, so misleading predictions can be produced. Special care should be taken to document the assumptions underlying models built with missing data. Other animal behavior models should be built and tested.
More info: http://superant.livejournal.com
Presented at: 9th World Conference of the International Society for Bayesian Analysis (ISBA), Hamilton Island, Australia, July 2008