AI governance around the world, from China to Brazil

Artificial intelligence has quickly moved from computer science textbooks to the mainstream, generating excitement with novelties such as reproductions of celebrity voices and chatbots ready to host meandering conversations.

But the technology, which refers to machines trained to perform intelligent tasks, also threatens major disruption to social norms, entire industries and the fortunes of tech companies. It has huge potential to change everything from diagnosing patients to predicting weather patterns – but it could also put millions of people out of work or even surpass human intelligence, some experts say.

Last week, the Pew Research Center released a survey in which a majority of Americans – 52 percent – said they are more worried than excited about the increased use of artificial intelligence, citing concerns about personal privacy and human control over the new technologies.

This year’s proliferation of next-generation AI models such as ChatGPT, Bard and Bing, all of which are publicly available, has brought artificial intelligence to the fore. Now, governments from China to Brazil to Israel are trying to figure out how to harness the technology’s transformative power while reining in its worst excesses, and are drafting rules for its use in everyday life.

Some countries, including Israel and Japan, have responded to the technology’s lightning-fast growth by clarifying existing data protection, privacy and copyright rules – in each case clearing the way for copyrighted content to be used to train AI. Others, such as the United Arab Emirates, have issued vague, sweeping pronouncements around AI strategy, launched working groups on AI best practices, or published draft legislation for public review and comment.

Still others have taken a wait-and-see approach, even as industry leaders, including OpenAI, the creator of the viral chatbot ChatGPT, have urged international cooperation on regulation and oversight. In a statement in May, the company’s CEO and two co-founders warned of the “possibility of an existential threat” associated with superintelligence, a hypothetical entity whose cognitive abilities would exceed human performance.

“Stopping it would require something like a global monitoring regime, and even that is not guaranteed to work,” the statement said.

However, few concrete laws anywhere in the world are aimed specifically at AI regulation. Here are some of the ways lawmakers in different countries are grappling with questions about its use.

Brazil has a draft AI law that is the result of three years of proposed (and stalled) bills on the subject. The document – released late last year as part of a 900-page Senate committee report on AI – details the rights of users who interact with AI systems and provides guidelines for classifying different types of AI based on the risk they pose to society.

The law’s focus on consumer rights puts the onus on AI providers to give users information about their AI products. Users have a right to know they are interacting with an AI – and a right to an explanation of how an AI made a particular decision or recommendation. Users can also contest AI decisions or demand human intervention, particularly if the decision is likely to have a significant impact on them, as with systems related to self-driving cars, hiring, credit evaluation or biometric identification.

AI developers must also conduct risk assessments before bringing an AI product to market. The highest risk classification covers any AI systems that employ “subliminal” techniques or exploit users in ways harmful to their health or safety; these are banned outright. The draft AI law also outlines possible “high-risk” AI implementations, including AI used in healthcare, biometric identification and credit scoring, among other applications. Risk assessments for “high-risk” AI products will be published in a government database.

All AI developers are liable for damage caused by their AI systems, and developers of high-risk products are held to an even higher standard of liability.

China has released a draft regulation for generative AI and is seeking public comment on the new rules. Unlike most other countries, however, China’s draft states that generative AI must reflect “Socialist Core Values.”

In their current form, the draft rules say developers are “responsible” for the output created by their AI, according to a translation of the document by Stanford University’s DigiChina Project. The rules also place limits on the sourcing of training data; developers are legally liable if their training data infringes on someone else’s intellectual property. The regulation further stipulates that AI services must be designed to generate only “true and accurate” content.

These proposed rules build on existing legislation covering deepfakes, recommendation algorithms and data security, giving China a head start over countries that are drafting new laws from scratch. The country’s internet regulator also announced restrictions on facial recognition technology in August.

China has set ambitious goals for its tech and AI industries: in the “Next Generation Artificial Intelligence Development Plan,” a sweeping 2017 document published by the Chinese government, the authors write that by 2030, “China’s AI theories, technologies and applications should achieve world-class standards.”

In June, the European Parliament voted to approve the so-called “AI Act.” Like the draft Brazilian legislation, the AI Act classifies AI into three risk categories: unacceptable, high and limited.

AI systems deemed unacceptable are those considered a “threat” to society. (The European Parliament offers “voice-activated toys that encourage dangerous behavior in children” as one example.) Such systems are banned under the AI Act. High-risk AI must be approved by European officials before going to market, and throughout the product’s life cycle. These include AI products related to law enforcement, border management and employment screening, among others.

AI systems deemed limited-risk must be appropriately disclosed to users so they can make informed decisions about their interactions with the AI. Otherwise, these products mostly avoid regulatory scrutiny.

The Act still needs to be approved by the European Council, although lawmakers hope that process will be completed later this year.

In 2022, Israel’s Ministry of Innovation, Science and Technology published a draft policy on AI management. The document’s authors describe it as “a moral and business-oriented compass for any company, organization or government agency involved in the field of artificial intelligence,” and emphasize its focus on “responsible innovation.”

Israel’s draft policy states that the development and use of AI should respect “the rule of law, fundamental rights and public interests and, in particular, [maintain] human dignity and privacy.” Elsewhere, it vaguely states that “reasonable steps must be taken in accordance with accepted professional concepts” to ensure that AI products are safe to use.

In general, the draft policy encourages self-regulation and a “soft” approach to government intervention in AI development. Rather than proposing uniform, industry-wide legislation, the document encourages sector-specific regulators to consider tailored interventions where appropriate, and urges the government to aim for compatibility with global AI best practices.

In March, Italy briefly banned ChatGPT, citing concerns about how — and how much — user data was being collected by the chatbot.

Since then, Italy has allocated around $33 million to support workers at risk of being left behind by digital transformation – including but not limited to AI. About a third of that amount will be used to train workers whose jobs may disappear due to automation. The remaining money will be directed toward teaching digital skills to unemployed or economically inactive people, in the hope of encouraging them to enter the labor market.

Japan, like Israel, has adopted a “soft law” approach to AI regulation: the country has no prescriptive rules governing specific ways AI can and cannot be used. Instead, Japan has chosen to wait and see how AI develops, citing a desire to avoid stifling innovation.

For now, AI developers in Japan must rely on adjacent laws – such as those related to data protection – for guidance. For example, in 2018, Japanese lawmakers revised the country’s Copyright Act to allow copyrighted content to be used for data analysis. Since then, lawmakers have clarified that the revision also applies to AI training data, clearing the way for AI companies to train their algorithms on other companies’ intellectual property. (Israel has taken a similar approach.)

Regulation is not at the forefront of every country’s approach to AI.

In the United Arab Emirates’ National Strategy for Artificial Intelligence, for example, the country’s regulatory intentions are given just a few paragraphs. In short, the Artificial Intelligence and Blockchain Council will “review national approaches to issues such as data governance, ethics and cybersecurity,” and observe and integrate global best practices on AI.

The remainder of the 46-page document is dedicated to promoting AI development in the UAE by attracting AI talent and integrating the technology into key sectors such as energy, tourism and healthcare. This strategy, according to the document’s executive summary, aligns with the UAE’s efforts to become “the best country in the world by 2071.”
