The evaluation was carried out according to two scenarios. For each scenario, this page presents the raw results obtained. For further details, we refer the reader to this report and to the slides presented at the Ontology Matching workshop.
The results submitted by the three participants are available in the data section of the track.
| Participant (links evaluated) | Precision | Coverage |
|---|---|---|
| TaxoMap (exactMatch only) | 88.1 | 41.1 |
| TaxoMap (non-exactMatch, strength above 0.5) | 20 (±11) | NA |
| TaxoMap (non-exactMatch, all) | 25.1 (±8.3) | NA |
These are the results for the automated evaluation, which uses a gold standard of books indexed against both the GTT and Brinkman thesauri:
| Participant | Precision (book level) | Recall (book level) | Precision (annotation level) | Recall (annotation level) | Jaccard (annotation level) |
|---|---|---|---|---|---|
| TaxoMap (exactMatch + broadMatch) | 46.68 | 19.81 | 40.90 | 13.84 | 12.52 |
| TaxoMap (exactMatch + broadMatch + narrowMatch) | 45.57 | 20.23 | 39.51 | 14.12 | 12.67 |
| TaxoMap (exactMatch + broadMatch + narrowMatch + relatedMatch) | 45.51 | 20.24 | 39.45 | 14.13 | 12.67 |
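The annotation-level columns above compare, for each book, the set of annotations produced via the mapping against the gold-standard annotations. As a minimal sketch of how such set-overlap metrics are commonly computed (the function name and the example annotation identifiers below are invented for illustration, not taken from the track's evaluation code):

```python
def annotation_metrics(predicted, gold):
    """Precision, recall, and Jaccard overlap between two annotation sets."""
    predicted, gold = set(predicted), set(gold)
    inter = predicted & gold           # annotations found in both sets
    union = predicted | gold           # all annotations in either set
    precision = len(inter) / len(predicted) if predicted else 0.0
    recall = len(inter) / len(gold) if gold else 0.0
    jaccard = len(inter) / len(union) if union else 0.0
    return precision, recall, jaccard

# Made-up annotation sets for a single book:
pred = {"economie", "geschiedenis", "politiek"}
gold = {"economie", "politiek", "recht", "bestuur"}
p, r, j = annotation_metrics(pred, gold)
print(round(p, 2), round(r, 2), round(j, 2))  # 0.67 0.5 0.4
```

In an evaluation like this one, such per-book scores would then be averaged over the gold-standard book collection to obtain table figures like those above.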