To "See" is to Stereotype: Image Tagging Algorithms, Gender Recognition, and the Accuracy-Fairness Trade-off (Paper 6033)
Pinar Barlas, Kyriakos Kyriakou, Olivia Guest, Styliani Kleanthous, Jahna Otterbacher
CSCW 2020

Video: https://www.loom.com/embed/9753b8b67b3444a1a51f18cb3a6f7a81 (5 min 13 s)
Preprint: https://zenodo.org/record/4028263

Abstract: Machine-learned computer vision algorithms for tagging images are increasingly used by developers and researchers, having become popularized as easy-to-use "cognitive services." Yet these tools struggle with gender recognition, particularly when processing images of women, people of color and non-binary individuals. Socio-technical researchers have cited data bias as a key problem; training datasets often over-represent images of people and contexts that convey social stereotypes. The social psychology literature explains that people learn social stereotypes, in part, by observing others in particular roles and contexts, and can inadvertently learn to associate gender with scenes, occupations and activities. Thus, we study the extent to which image tagging algorithms mimic this phenomenon. We design a controlled experiment to examine the interdependence between algorithmic recognition of context and the depicted person's gender. In the spirit of auditing to understand machine behaviors, we create a highly controlled dataset of people images, imposed on gender-stereotyped backgrounds. Our methodology is reproducible and our code is publicly available. Evaluating five proprietary algorithms, we find that in three, gender inference is hindered when a background is introduced. Of the two that "see" both backgrounds and gender, it is the one whose output is most consistent with human stereotyping processes that is superior in recognizing gender. We discuss the accuracy-fairness trade-off, as well as the importance of auditing black boxes in better understanding this double-edged sword.
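The audit described in the abstract rests on one core preparation step: compositing person images onto gender-stereotyped backgrounds before submitting them to each tagging service. Below is a minimal sketch of that compositing step, assuming Pillow and hypothetical file names (a person cutout with an alpha channel and a background scene); it is an illustration of the general technique, not the authors' published code, which is linked from the preprint.

```python
# Sketch: overlay a person cutout (transparent background) onto a scene
# image, centered horizontally and anchored to the bottom edge.
# File names below are hypothetical placeholders.
from PIL import Image

def composite(person_path: str, background_path: str, out_path: str) -> None:
    background = Image.open(background_path).convert("RGBA")
    person = Image.open(person_path).convert("RGBA")
    x = (background.width - person.width) // 2   # center horizontally
    y = background.height - person.height        # anchor to bottom edge
    background.paste(person, (x, y), person)     # person's alpha channel as mask
    background.convert("RGB").save(out_path, "JPEG")

composite("person.png", "kitchen.jpg", "person_on_kitchen.jpg")
```

Holding the person image fixed while varying only the background is what lets the experiment isolate the background's effect on the algorithms' gender inferences.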