The National Institute of Standards and Technology (NIST) has issued new instructions to scientists who partner with the US Artificial Intelligence Safety Institute (AISI) that eliminate mention of "AI safety," "responsible AI," and "AI fairness," and introduce a request to prioritize reducing "ideological bias."
The changes come as part of an updated cooperative research and development agreement for AI Safety Institute consortium members, sent out in early March. Previously, that agreement encouraged researchers to contribute technical work that could help identify and fix discriminatory model behavior related to gender, race, age, or wealth inequality. Such biases matter enormously because they can directly affect end users and disproportionately harm minorities and economically disadvantaged groups.
The new agreement removes mention of developing tools "for authenticating content and tracking its provenance" as well as "labeling synthetic content," signaling less interest in tracking misinformation and deepfakes. It also adds an America-first emphasis, asking one working group to develop testing tools "to expand America's global AI position."
"The Trump administration has removed safety, fairness, misinformation, and responsibility as things it values for AI, which I think speaks for itself," says one researcher at an organization that works with the AI Safety Institute.
Researchers believe that ignoring these issues could hurt ordinary users by allowing algorithms that discriminate based on income or other demographics to go unchecked. "Unless you're a tech billionaire, this is going to lead to a worse future for you and the people you care about. Expect AI to be unfair, discriminatory, unsafe, and deployed irresponsibly," the researcher argues.
"It's wild," says another researcher who has worked with the AI Safety Institute in the past. "What does it even mean for humans to flourish?"
Elon Musk, who is currently leading a controversial effort to slash government spending and bureaucracy on behalf of President Trump, has criticized AI models built by OpenAI and Google. Last February, he posted a meme on X in which Gemini and OpenAI were labeled "racist" and "woke." He often cites an incident in which one of Google's models debated whether it would be wrong to misgender someone, even if doing so would prevent a nuclear apocalypse (a highly unlikely scenario). Besides Tesla and SpaceX, Musk runs xAI, an AI company that competes directly with OpenAI and Google. Researchers who advise xAI recently developed a new technique for altering the political leanings of large language models, as reported by Wired.
A growing body of research shows that political bias in AI models can affect both liberals and conservatives. For example, a study of Twitter's recommendation algorithm published in 2021 showed that users were more likely to be shown right-leaning perspectives on the platform.
Since January, Musk's so-called Department of Government Efficiency (DOGE) has swept through the US government, effectively firing civil servants, pausing spending, and creating an environment thought to be hostile to those who might oppose the Trump administration's aims. Some government agencies, such as the Department of Education, have archived and deleted documents that mention DEI. In recent weeks, DOGE has also targeted NIST, the parent organization of AISI, and dozens of employees have been fired.