Judging Facts, Judging Norms: Training Machine Learning Models to Judge Humans Requires a New Approach to Labeling Data
An examination of data labeling practices for normative applications.
Run the following commands to clone this repo and create the Conda environment:
```bash
git clone repo
cd repo/
conda env create -f environment.yml
conda activate label_exp
```
We provide the `Clothing`, `Meal`, `Pet`, and `Comment` datasets as `.csv` files in this repository.
To reproduce the experiments in the paper, which involve training grids of models with different hyperparameters, refer to the files in the `image_models` and `text_models` folders. Run:

```bash
bash {folder}/{bash_script}
```
where:

- `folder` is either `image_models` or `text_models`
- `bash_script` is the script used on the compute cluster
Sample bash scripts showing this command can be found in `bash_scripts/`.
Jobs can also be launched using the `sweep.py` script in `image_models`:

```bash
python sweep.py launch \
    --experiment_name {experiment_name} \
    --output_dir {output_root} \
    --command_launcher {launcher}
```
We aggregate results and generate tables using the aggregation scripts in `lib`.
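As an illustration of the kind of aggregation these scripts perform, the sketch below collapses per-run results into a summary table with pandas. The run records, field names, and metric are hypothetical stand-ins; the actual scripts in `lib` read their own result files and formats.

```python
import pandas as pd

# Hypothetical per-run results from a hyperparameter grid; the real
# aggregation scripts in lib/ consume their own output files instead.
runs = [
    {"dataset": "Clothing", "lr": 1e-3, "accuracy": 0.81},
    {"dataset": "Clothing", "lr": 1e-4, "accuracy": 0.84},
    {"dataset": "Pet", "lr": 1e-3, "accuracy": 0.77},
    {"dataset": "Pet", "lr": 1e-4, "accuracy": 0.79},
]

df = pd.DataFrame(runs)
# Best accuracy per dataset across the grid of learning rates.
table = df.groupby("dataset")["accuracy"].max()
print(table)
```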
Data and model sheets for all analyses can be found in the `data_model_sheets` folder.
If you use this code in your research, please cite the corresponding publication.