openfactcheck.evaluator.CheckerEvaluator#

class openfactcheck.evaluator.CheckerEvaluator(ofc)[source]#

This class is used to evaluate the performance of a FactChecker.

Parameters:
  • input_path (Union[str, pd.DataFrame]) – The path to the CSV file or the DataFrame containing the FactChecker responses. The CSV file should have the following three columns:
      - label: The label assigned by the FactChecker. This should be a boolean value.
      - time: The time taken by the FactChecker to respond.
      - cost: The cost of the FactChecker response.

  • eval_type (str) – The type of evaluation to perform. Either “claim” or “document”.

  • gold_path (str) – Optional. The path to the gold standard file. If not provided, the default gold standard file will be used. This is useful when evaluating the FactChecker on a different dataset.

  • ofc (OpenFactCheck)
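
As a concrete illustration of the expected input, here is a hypothetical responses DataFrame with the three required columns; the values are invented, and only the column names and types follow the description above:

```python
import pandas as pd

# Hypothetical example of the responses input accepted by
# CheckerEvaluator.evaluate: one row per FactChecker response,
# with the three required columns described above.
responses = pd.DataFrame(
    {
        "label": [True, False, True],   # boolean verdict assigned by the FactChecker
        "time": [1.2, 0.8, 1.5],        # time taken by the FactChecker to respond
        "cost": [0.002, 0.001, 0.003],  # cost of the FactChecker response
    }
)
print(list(responses.columns))
```

The same data could equally be stored as a CSV file and passed as a path string.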

input_path#

The path to the CSV file or the DataFrame containing the FactChecker responses.

Type:

Union[str, pd.DataFrame]

gold_path#

The path to the gold standard file.

Type:

str

eval_type#

The type of evaluation to perform. Either “claim” or “document”.

Type:

str

results#

The evaluation results.

Type:

dict

confusion_matrix#

The confusion matrix of the evaluation.

Type:

numpy.ndarray

classification_report#

The classification report of the evaluation.

Type:

dict

evaluate(input_path: Union[str, pd.DataFrame], eval_type: str, gold_path: str = "")

Evaluate the performance of the FactChecker.

evaluate_binary_classification(y_true, y_pred, pos_label="yes")

Evaluate the performance of a binary classification task.
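
The binary-classification evaluation is only summarized here. A minimal pure-Python sketch of what such a metric computation typically involves — counting true/false positives and negatives against pos_label, then deriving the standard metrics. This is an illustrative assumption, not the library's actual implementation:

```python
def evaluate_binary_classification(y_true, y_pred, pos_label="yes"):
    # Sketch: count the confusion-matrix cells relative to pos_label.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == pos_label and p == pos_label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != pos_label and p == pos_label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == pos_label and p != pos_label)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != pos_label and p != pos_label)

    # Derive the standard binary-classification metrics, guarding
    # against division by zero when a class is never predicted.
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

metrics = evaluate_binary_classification(
    ["yes", "no", "yes", "yes"],
    ["yes", "no", "no", "yes"],
)
# metrics["accuracy"] is 0.75: three of the four predictions match.
```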

__init__(ofc)[source]#

Initialize the CheckerEvaluator object.

Parameters:

ofc (OpenFactCheck)

Methods

__init__(ofc)

Initialize the CheckerEvaluator object.

evaluate(input_path, eval_type[, gold_path])

Evaluate the performance of the FactChecker.

evaluate_binary_classification(y_true, y_pred[, pos_label])

Evaluate the performance of a binary classification task.