Artificial Intelligence Can Be Influenced By Censorship

According to researchers, artificial intelligence algorithms learn by associating words with other words. AI is now hard to avoid anywhere around the globe: it is used in universities, schools, governments, and businesses, and it helps individuals develop their ideas, inventions, talents, and motivations. But, at the end of the day, these systems are ultimately created by humans. Hence, they can also reflect the cultural divides of the societies that produce them.

A new study into Artificial Intelligence has revealed that censorship of the data AI is trained on can affect the algorithms it learns, and in turn the devices that run on those algorithms. The study was conducted by a Ph.D. student at UC San Diego and Margaret Roberts, a professor of political science. Their subjects were the Chinese-language Wikipedia and Baidu Baike, a similar website operated by Baidu, China's leading search engine, which is subject to the Chinese government's censorship.

The Language Of Artificial Intelligence

The researchers wanted to know whether Artificial Intelligence is affected by the censorship of words or phrases, and whether censored text could still shape what an algorithm learns once it is fed into software. This influences the language a voice assistant uses, the suggestions an autocomplete feature offers, and the sentences a translation program forms. The algorithm analyzes how certain words appear together across a large body of text. Like connected nodes, words that frequently appear close to each other are treated as having similar or related meanings.
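The idea that words appearing in similar contexts end up with similar meanings can be sketched with a toy co-occurrence model. This is not the researchers' actual method (they used trained word embeddings on real corpora); it is a minimal illustration with an invented four-sentence corpus, where every sentence is an assumption made up for the example.

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for articles from a source like Wikipedia or
# Baidu Baike. Every sentence here is invented for illustration only.
corpus = [
    "democracy brings stability and freedom",
    "democracy supports freedom and stability",
    "censorship brings chaos and control",
    "censorship causes chaos and control",
]

# Build co-occurrence vectors: each word is represented by counts of the
# other words that appear in the same sentence with it.
vectors = {}
for sentence in corpus:
    words = sentence.split()
    for w in words:
        ctx = vectors.setdefault(w, Counter())
        for other in words:
            if other != w:
                ctx[other] += 1

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)  # Counter returns 0 for missing keys
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Words that appear in similar contexts score higher than words that do not.
print(cosine(vectors["democracy"], vectors["stability"]))
print(cosine(vectors["democracy"], vectors["chaos"]))
```

In this tiny corpus, "democracy" scores higher similarity with "stability" than with "chaos" purely because of which words surround it. The same mechanism, applied to a censored corpus, would pull a word toward whatever neighbors the censored text provides.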

According to the research, a word like "democracy" appeared close in meaning to "stability" on the Chinese-language Wikipedia, while Baidu Baike placed the same word closer to "chaos." Recent research has also shown that racial and gender biases linger in Artificial Intelligence.