DQC Toolkit
DQC Toolkit is a Python library and framework designed to facilitate the improvement of Machine Learning models by identifying and mitigating label errors in the training dataset. Currently, DQC Toolkit offers `CrossValCurate` and `LLMCurate`. `CrossValCurate` can be used for label error detection / correction in text classification (binary / multi-class) based on cross validation. `LLMCurate` extends *PEDAL: Enhancing Greedy Decoding with Large Language Models using Diverse Exemplars* to compute LLM-based confidence scores for free-text labels.
Installation
DQC Toolkit can be installed as shown below -
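Assuming the package is published on PyPI under the name dqc-toolkit, a single pip command handles the install:

```
pip install dqc-toolkit
```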
Quick Start
CrossValCurate
Assuming your text classification data is stored as a pandas dataframe `data`, with each sample represented by the `text` column and its corresponding noisy label represented by the `label` column, here is how you use `CrossValCurate` -
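A minimal sketch of the call follows; the `dqc` import path, the default hyperparameters, and the `fit_transform` signature are assumptions here, so please check the API documentation for the exact interface:

```python
from dqc import CrossValCurate  # assumed import path

# 'data' is the pandas dataframe described above, with 'text' and 'label' columns
cvc = CrossValCurate()
data_curated = cvc.fit_transform(data[['text', 'label']])
```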
`data_curated` is a pandas dataframe similar to `data` with the following columns -

```python
>>> data_curated.columns
['text', 'label', 'label_correctness_score', 'is_label_correct', 'predicted_label', 'prediction_probability']
```

- `label_correctness_score` represents a normalized score quantifying the correctness of `label`.
- `is_label_correct` is a boolean flag indicating whether the given `label` is correct (`True`) or incorrect (`False`).
- `predicted_label` and `prediction_probability` represent the curation model's prediction and the corresponding probability score.
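For instance, the curated columns can be used to filter or rank the training data; this is a usage sketch based purely on the columns listed above:

```python
# Keep only samples whose existing labels are flagged as correct
clean_data = data_curated[data_curated['is_label_correct']]

# Or rank samples by label correctness to review the most suspicious ones first
suspicious = data_curated.sort_values('label_correctness_score').head(20)
```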
LLMCurate
Assuming `data` is a pandas dataframe containing samples with our target text for curation under column `column_to_curate`, here is how you use `LLMCurate` -
```python
from dqc import LLMCurate  # assumed import path

llmc = LLMCurate(model, tokenizer)
ds = llmc.run(
    data,
    column_to_curate,
    ds_column_mapping,
    prompt_variants,
    llm_response_cleaned_column_list,
    answer_start_token,
    answer_end_token,
    batch_size,
    max_new_tokens
)
```
- `model` and `tokenizer` are the instantiated LLM model and tokenizer objects respectively.
- `ds_column_mapping` is the dictionary mapping of entities used in the LLM prompt to the corresponding columns in `data`. For example, `ds_column_mapping={'INPUT' : 'input_column'}` would imply that text under `input_column` in `data` is passed to the LLM in the format `"[INPUT]row['input_column'][/INPUT]"` for each `row` in `data` (example argument values are sketched after this list).
- `prompt_variants` is the list of LLM prompts to be used to curate `column_to_curate`, and `llm_response_cleaned_column_list` is the corresponding list of column names to store the reference responses generated using each prompt.
- `answer_start_token` and `answer_end_token` are optional text phrases representing the start and end of the answer respectively.
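For illustration, the arguments above could look like the following; every value here (checkpoint name, entity tags, prompt text, and column names) is hypothetical and not prescribed by the library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical LLM checkpoint used to build the model and tokenizer objects
model_name = "your-llm-checkpoint"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Map the 'INPUT' tag used in the prompts to the dataframe column holding the input text
ds_column_mapping = {'INPUT': 'input_column'}

# Two prompt variants whose responses serve as references for the confidence score
prompt_variants = [
    "Label the text enclosed in [INPUT][/INPUT] tags.",
    "You are an expert annotator. Assign a label to the text in [INPUT][/INPUT] tags.",
]
llm_response_cleaned_column_list = ['response_prompt1', 'response_prompt2']

# Optional markers around the answer in the LLM response
answer_start_token, answer_end_token = "<ANS>", "</ANS>"
batch_size, max_new_tokens = 8, 32
```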
`ds` is a dataset object with the following additional features -

- A feature for each column name in `llm_response_cleaned_column_list`
- LLM confidence score for each text in `column_to_curate`
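If `ds` is a Hugging Face `Dataset` (an assumption here; the exact feature names are documented in the API reference), the added features can be inspected directly:

```python
print(ds.column_names)   # the new reference-response and confidence score features
df = ds.to_pandas()      # convert back to pandas for downstream filtering
```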
For more details regarding the different hyperparameters available in `CrossValCurate` and `LLMCurate`, please refer to the API documentation.