Evaluate

Evaluates model performance using the given metrics.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `metrics` | `list` | The evaluation metrics to compute. | *required* |
| `dataset_id` | `str` | The ID of the dataset to evaluate on. | `None` |
| `multi_column_roots` | `list` | A list of column roots. | `None` |
| `input_text_col_index` | `int` | The index of the input text column. | `None` |
| `document_files` | `list` | List of paths to document files, if any. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `dict` | A dictionary containing the evaluation results. |

Sample usage

```python
from anote import EvaluationMetric

metrics = [
    EvaluationMetric.PRECISION,
    EvaluationMetric.RECALL,
    EvaluationMetric.F1,
]

# `anote`, `task_type`, `dataset_id`, `multi_column_roots`, and
# `input_text_col_index` are assumed to be defined earlier
# (e.g. from a prior upload or training step).
evaluation = anote.evaluate(
    metrics=metrics,
    task_type=task_type,
    report_name="evaluation_report",
    dataset_id=dataset_id,
    multi_column_roots=multi_column_roots,
    input_text_col_index=input_text_col_index,
)
print(f"Evaluation Results: {evaluation}")
```
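
As a sanity check on the returned scores, precision, recall, and F1 can be computed directly from predictions. A minimal sketch, independent of the Anote SDK (the function name and label lists below are illustrative, not part of the API):

```python
def precision_recall_f1(y_true, y_pred):
    """Compute binary precision, recall, and F1 from parallel label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (
        2 * precision * recall / (precision + recall)
        if precision + recall
        else 0.0
    )
    return precision, recall, f1


# Example: one false negative out of three positives.
p, r, f = precision_recall_f1([1, 1, 0, 1], [1, 0, 0, 1])
print(p, r, f)
```

Comparing such hand-computed values against the SDK's report is a quick way to confirm the dataset columns were wired up correctly.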