Iterative Refinement of Possibility Distributions by Learning for Pixel-Based Classification
Abstract
In addition, the proposed iterative refinement of possibility distributions by learning (IRPDL) approach requires fewer parameters for its mathematical representation and presents a reduced computational complexity.
This is mainly due to the lack of solid models capturing the representative constraints of the available knowledge.
The quality of this “representativity” can significantly influence the performance of the classifier applied to the image.
Starting with limited initial prior knowledge, an efficient classifier is assumed to be capable of extracting additional knowledge with a high degree of confidence while preserving the previously acquired knowledge. Various models can be used to represent different forms of knowledge imperfection.
Belief functions, i.e., Dempster-Shafer theory, have been used to express imprecise and uncertain knowledge.
A set of possibility distributions can be used to represent thematic classes present in an image.
Defining these distributions in terms of standard shapes with their associated parameters is generally not an easy task, especially with limited prior knowledge. The approach proposed in this paper addresses this difficulty.
Starting from limited initial prior knowledge provided by the expert and represented as possibility distributions via Dubois-Prade’s probability-possibility transformation, the proposed approach iteratively refines these distributions through learning.
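As a point of reference (not taken from the paper itself), Dubois-Prade’s probability-possibility transformation assigns to each event the sum of all probabilities not exceeding its own, yielding the most specific possibility distribution that dominates the probability distribution. A minimal sketch:

```python
def dubois_prade_transform(probabilities):
    """Dubois-Prade probability-possibility transformation.

    For each probability p_i, the possibility degree is the sum of all
    probabilities q_j with q_j <= p_i; the most probable event thus
    receives possibility 1 for a normalized input distribution, and
    events with equal probability receive equal possibility degrees.
    """
    return [sum(q for q in probabilities if q <= p) for p in probabilities]
```

For instance, the probability distribution [0.5, 0.25, 0.25] maps to the possibility distribution [1.0, 0.5, 0.5].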
The proposed approach requires no expert intervention beyond the selection of the initial possibility distributions (learning areas).
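The refinement loop itself is not detailed in this abstract. The following is a generic sketch of the principle it describes — extract high-confidence pixels, preserve previously acquired labels, re-estimate the class distributions — and is not the paper’s IRPDL algorithm; all function names and the confidence threshold are illustrative assumptions:

```python
def iterative_refinement(pixels, seed_labels, estimate_distributions,
                         possibility, threshold=0.75, max_iters=10):
    """Generic iterative-refinement loop (illustrative, not the paper's IRPDL).

    pixels: iterable of pixel feature values; seed_labels: pixel -> class,
    from the expert's learning areas; estimate_distributions: labeled dict ->
    one distribution per class; possibility(dist, pixel) -> degree in [0, 1].
    """
    labeled = dict(seed_labels)
    for _ in range(max_iters):
        # Re-estimate one possibility distribution per thematic class
        # from the currently labeled pixels.
        distributions = estimate_distributions(labeled)
        added = 0
        for px in pixels:
            if px in labeled:
                continue  # previously acquired knowledge is preserved
            scores = {c: possibility(d, px) for c, d in distributions.items()}
            best = max(scores, key=scores.get)
            runner_up = max((s for c, s in scores.items() if c != best),
                            default=0.0)
            # Accept only pixels labeled with a high degree of confidence:
            # one class clearly possible, all others nearly impossible.
            if scores[best] >= threshold and runner_up <= 1 - threshold:
                labeled[px] = best
                added += 1
        if added == 0:
            break  # no more confident pixels: distributions have stabilized
    return labeled
```

The loop reaches a fixed point when an iteration adds no new pixels, so the refined distributions stop changing.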
Synthetic images as well as real images have been used to evaluate the IRPDL performance.
The performance analysis has been carried out using the possibilistic recognition rate criterion; IRPDL compares favorably with three reference methods: region growing, semi-supervised fuzzy pattern matching, and Markov random fields.