LGBT groups denounce dangerous artificial intelligence that uses your face to guess sexuality

Published at 2017-09-09 18:09:00

Two prominent LGBT groups have criticized a Stanford study as "junk science."

A Stanford University study showing that artificial intelligence (AI) can accurately guess whether people are gay or straight based on their faces has sparked a swift backlash from LGBT rights activists who fear this kind of technology could be used to harm queer people.
The research, which went viral this week, used a sample of online dating photos, limited only to white users, to demonstrate that an algorithm could correctly distinguish between gay and straight men 81% of the time, and between gay and straight women 74% of the time, suggesting machines can potentially have much better “gaydar” than humans.
The Human Rights Campaign (HRC) and Glaad, two of the most prominent LGBTQ organizations in the US, slammed the study on Friday as “dangerous and flawed … junk science” that could be used to out gay people across the globe and put them at risk. The advocates also criticized the study for excluding people of color and bisexual and transgender people, and claimed the research made overly broad and inaccurate assumptions about gender and sexuality.
Michal Kosinski, co-author of the study and an assistant professor at Stanford, told the Guardian that he was perplexed by the criticisms, arguing that the machine-learning technology already exists and that a driving force behind the study was to expose potentially dangerous applications of AI and push for privacy safeguards and regulations.

“One of my obligations as a scientist is that if I know something that can potentially protect people from falling prey to such risks, I should publish it,” he said, adding that his critics were encouraging people to ignore the real risks of this technology by trying to discredit his work. “Rejecting the results because you don’t agree with them on an ideological level … you might be harming the very people that you care about.”

The study, first reported in the Economist, has sparked heated debate about the biological origins of sexual orientation and the ethics of facial-detection technology, which is becoming increasingly advanced and prevalent in society.

“Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay,” Ashland Johnson, HRC’s director of public education and research, said in a statement. “Stanford should distance itself from such junk science rather than lending its name and credibility to research that is dangerously flawed and leaves the world – and in this case, millions of people’s lives – worse and less secure than before.”

Kosinski has not actually released any AI program that the public could use (and declined the Guardian’s request to test the algorithm). He also noted that the findings support LGBT rights by providing further evidence that sexual orientation has biological roots and that being gay is not a choice.

“It’s a strong argument against all of those religious groups and other demagogues who say, ‘Why don’t you just change or just conform?’ You can’t stop, because you’re born this way,” he said.
But the LGBT groups argued the study was too narrow by only using photos that people chose to post on dating profiles and by failing to test a diverse pool.
Kosinski and his co-author Yilun Wang acknowledged these limitations in the paper, claiming that they could not find sufficient numbers of non-white gay people. The authors have not disclosed the dating site they used, but Kosinski said in an interview that the majority of profiles they were reviewing were white.
Even though the researchers claimed they couldn’t find enough queer people of color (despite polls suggesting that non-white people are more likely to identify as LGBT than white people), Kosinski said he suspected his algorithm would perform fairly accurately across different races.
Asked about the calls for Stanford to denounce the study, Paul Pfleiderer, senior associate dean for academic affairs at the graduate school of business, said in a statement: “Publication of research findings in academic journals allows the interpretation of those findings and the research methodologies used to obtain them to be scrutinized by academics in the field and are appropriately a matter for discussion and debate.”

Kosinski said he seriously weighed the risk of publishing the study at all: “There is a kind of moral question here. Should we publish it and make people upset and even potentially give some bad guys some ideas, or just not publish it and warn people?” Given that private corporations and governments are already using this type of software, he said he felt obligated to move forward with the paper.
Kosinski added that he would be pleased if another researcher debunked his work: “I hope that someone will go and fail to replicate this study … I would be the happiest person in the world if I was wrong.”

Source: feedblitz.com
