
Question 457941201745108

📅 2017 · 🏢 IF SUL - MG · 🎯 IF Sul - MG · 📚 English Language
#Reading Comprehension

This question was administered in 2017 by the IF SUL - MG examining board in the selection process for IF Sul - MG. It covers the English Language discipline, specifically Reading Comprehension.

This is a multiple-choice question with 4 alternatives. Test your knowledge and select the correct answer.

Associated text
AI Picks Up Racial and Gender Biases When Learning from What Humans Write

AI¹ picks up racial and gender biases² when learning language from text, researchers say. Without any supervision, a machine learning algorithm learns to associate female names more with family words than career words, and black names as being more unpleasant than white names.

For a study published today in Science, researchers tested the bias of a common AI model, and then matched the results against a well-known psychological test that measures bias in humans. The team replicated in the algorithm all the psychological biases they tested, according to study co-author Aylin Caliskan, a post-doc at Princeton University. Because machine learning algorithms are so common, influencing everything from translation to scanning names on resumes, this research shows that the biases are pervasive, too.

An algorithm is a set of instructions that humans write to help computers learn. Think of it like a recipe, says Zachary Lipton, an AI researcher at UC San Diego who was not involved in the study. Because algorithms use existing materials — like books or text on the internet — it’s obvious that AI can pick up biases if the materials themselves are biased. (For example, Google Photos tagged black users as gorillas.) We’ve known for a while, for instance, that language algorithms learn to associate the word “man” with “professor” and the word “woman” with “assistant professor.” But this paper is interesting because it incorporates previous work done in psychology on human biases, Lipton says.

For today’s study, Caliskan’s team created a test that resembles the Implicit Association Test (IAT), which is commonly used in psychology to measure how biased people are (though there has been some controversy over its accuracy). In the IAT, subjects are presented with two images — say, a white man and a black man — and words like “pleasant” or “unpleasant.” The IAT calculates how quickly you match up “white man” and “pleasant” versus “black man” and “pleasant,” and vice versa. The idea is that the longer it takes you to match up two concepts, the more trouble you have associating them.

The test developed by the researchers also calculates bias, but instead of measuring “response time”, it measures the mathematical distance between two words. In other words, if there’s a bigger numerical distance between a black name and the concept of “pleasant” than a white name and “pleasant”, the model’s association between the two isn’t as strong. The further apart the words are, the less the algorithm associates them together.

Caliskan’s team then tested their method on one particular algorithm: Global Vectors for Word Representation (GLoVe) from Stanford University. GLoVe basically crawls the web to find data and learns associations between billions of words. The researchers found that, in GLoVe, female words are more associated with arts than with math or science, and black names are seen as more unpleasant than white names. That doesn’t mean there’s anything wrong with the AI system, per se, or how the AI is learning — there’s something wrong with the material.

¹AI: Artificial Intelligence
²bias: prejudice; preconception

Available at <http://www.theverge.com/>
Regarding the test developed by the researchers to calculate bias, select the correct alternative.
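
To make the distance-based test described in the passage concrete, here is a minimal sketch in Python. It assumes cosine distance as the "mathematical distance" between word vectors, a standard choice for word embeddings; the two name vectors and the "pleasant" vector are made-up toy values for illustration, not real GLoVe embeddings.

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- invented values for illustration only;
# real GLoVe vectors have 50 to 300 dimensions and are learned from web text.
vectors = {
    "pleasant": np.array([0.9, 0.1, 0.3, 0.0]),
    "name_a":   np.array([0.8, 0.2, 0.4, 0.1]),  # hypothetical name
    "name_b":   np.array([0.1, 0.9, 0.2, 0.7]),  # hypothetical name
}

def cosine_distance(u, v):
    """1 - cosine similarity: a larger distance means a weaker association."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

for name in ("name_a", "name_b"):
    d = cosine_distance(vectors[name], vectors["pleasant"])
    print(f"{name} -> pleasant: distance = {d:.3f}")
# name_b ends up further from "pleasant", so the model associates the two
# less strongly, which is exactly the reading the passage describes.
```

A full version of such a test would average these distances over whole sets of names and attribute words drawn from pretrained embeddings such as GLoVe, rather than comparing single toy vectors.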