I've been thinking a lot about how to model knowledge and how to let a system work with it programmatically.
Other articles in the AI Knowledge Based Reasoning series on this site:
- Knowledge based reasoning in .net c#
- Reasoning in Open or Closed models
- Logic Expression evaluation with open-world assumption
- Expression evaluation on object based models
- Expression evaluation over time
- Expression evaluation over time - Was?
Closed-world model
The closed-world model is where I am at the moment, both in my thinking and in the coding of my current project, mostly because it is quite straightforward to implement in code.
public bool Evaluate(Expression expression);
In the closed-world model, everything that you do not find an answer for in your model is assumed to be false.
At first, this seems like an OK thing to do, assuming that your model covers everything. For example in games, where the AI engine has access to all information, this is the way to go. But in a situation where the model does not cover everything, I find it lacking. My current project tries to interface with the real world, and when its reasoning returns False for everything it does not know, the end results are quite far off the mark.
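As a minimal sketch of what the closed-world evaluator looks like in code (the fact store, the string-based facts and the class name here are illustrative assumptions, simplifying an expression down to a single fact lookup):

using System.Collections.Generic;

// Closed-world sketch: anything not present in the model is treated as false.
public class ClosedWorldEvaluator
{
    private readonly HashSet<string> knownFacts = new HashSet<string>();

    public void Assert(string fact) => knownFacts.Add(fact);

    public bool Evaluate(string fact)
    {
        // No distinction between "known to be false" and "unknown":
        // absence from the model simply means false.
        return knownFacts.Contains(fact);
    }
}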
Open-world assumption
So instead of just the boolean result of true or false, the open-world assumption introduces a third option: the NotSure result of an evaluation.
public EvaluationResult Evaluate(Expression expression);

public enum EvaluationResult
{
    True,
    False,
    NotSure
}
So far quite easy: just convert your Evaluation method to return NotSure when it's not sure.
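As a rough sketch of that conversion (again simplifying an expression to a single fact lookup, with two hypothetical sets of facts known to be true or known to be false):

using System.Collections.Generic;

// Open-world sketch: a fact can be known true, known false, or simply unknown.
public class OpenWorldEvaluator
{
    private readonly HashSet<string> knownTrue = new HashSet<string>();
    private readonly HashSet<string> knownFalse = new HashSet<string>();

    public EvaluationResult Evaluate(string fact)
    {
        if (knownTrue.Contains(fact)) return EvaluationResult.True;
        if (knownFalse.Contains(fact)) return EvaluationResult.False;

        // Absence of knowledge is no longer treated as False.
        return EvaluationResult.NotSure;
    }
}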
But the question is, what to do when the system is not sure about something?
Options are:
- Do nothing and just wait until it is sure. This could be OK for systems that receive a lot of information; just assume that the missing information will arrive at a later date.
- Formulate a question about the missing piece of information. Break the evaluated expression into pieces, find out what was missing, and ask a user or two to provide that input.
- Figure out how much of an expression is uncertain. Is it OK to still act on a result with 75% knowledge and 25% gaps? Maybe the AI should figure out the accepted level of certainty by trial and error. (See the sketch after this list.)
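To make the third option a bit more concrete, here is a sketch of how sub-results could be combined with Kleene-style three-valued logic and how a simple certainty ratio could be computed. The class, the helper names and the 75% threshold are just assumptions for illustration:

using System.Collections.Generic;
using System.Linq;

public static class ThreeValuedLogic
{
    // Kleene-style AND: False dominates, then NotSure, then True.
    public static EvaluationResult And(EvaluationResult a, EvaluationResult b)
    {
        if (a == EvaluationResult.False || b == EvaluationResult.False) return EvaluationResult.False;
        if (a == EvaluationResult.NotSure || b == EvaluationResult.NotSure) return EvaluationResult.NotSure;
        return EvaluationResult.True;
    }

    // Fraction of sub-results that are something other than NotSure.
    public static double Certainty(IReadOnlyCollection<EvaluationResult> parts)
    {
        if (parts.Count == 0) return 0.0;
        return (double)parts.Count(r => r != EvaluationResult.NotSure) / parts.Count;
    }
}

// Example policy: only act when at least 75% of the expression is known.
// bool shouldAct = ThreeValuedLogic.Certainty(partResults) >= 0.75;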
Conclusions
As I wrote in the beginning, I'm not sure yet how to implement this kind of reasoning myself. The first step in converting the closed-world system that I have now to an open-world one is to go with the "do nothing" approach: basically just returning NotSure and then not acting on it, for starters. That could not be worse than asserting a falsehood, which is what the system does now.
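In code, that first step could be as small as only acting on a definite True (Act here is just a placeholder for whatever the system would do with a confirmed result):

// First step: treat NotSure as "do nothing yet" and only act on a definite True.
var result = Evaluate(expression);
if (result == EvaluationResult.True)
{
    Act();   // placeholder for the action taken on a confirmed result
}
// False and NotSure both lead to no action for now; later, NotSure could
// trigger one of the other options, such as asking a question.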
Sources
https://en.wikipedia.org/wiki/Closed-world_assumption
https://en.wikipedia.org/wiki/Open-world_assumption