Why Japan is Opting for a Softer Approach in its AI Governance

In the late 2010s, governments, international organizations, and research institutions around the world published a number of principles for human-centric artificial intelligence (AI). What started as general ideas is now evolving into more precise regulation.

In 2021, the European Commission issued the draft Artificial Intelligence Act, which sorts AI systems into four risk categories and lays out corresponding requirements, including enhanced security, transparency, and accountability measures. In the United States, the Algorithmic Accountability Act of 2022 was introduced in both houses of Congress. Canada's proposed Artificial Intelligence and Data Act (AIDA), introduced in 2022, would make risk management and disclosure mandatory for high-impact AI systems.

Although regulating AI is partly necessary to mitigate threats to fundamental values, there is concern that the burden of compliance and uncertainty about what the legislation will require may discourage innovation. Regulatory fragmentation would also impose significant costs on both society and business. How to manage the risks of AI while promoting beneficial innovation and adoption is one of the most challenging issues facing governments, particularly the leaders of the Group of Seven (G7).

Compared with the European Union, Japan is considering adopting more lenient rules for the use of AI.

Different from the EU’s Approach

The nation wants to use this technology to expand its position as a global leader in advanced chips and boost economic growth.

The EU’s rules include demands that companies disclose the copyrighted material used to train AI systems that generate content such as text and images; a milder Japanese approach could hinder the EU’s ambition to establish those standards as a global benchmark.

Meanwhile, Thierry Breton, the European Union’s industry chief, is currently in Tokyo to promote the EU’s approach to AI regulation and to strengthen collaboration on semiconductors.

Japan’s rules are expected to differ from the EU’s in several areas.

One member of Japan’s AI strategy council has explained why: the EU’s rules are strict, and it is particularly difficult to identify the copyrighted content used in deep learning.

Regardless, the potential of AI, along with other technologies such as advanced semiconductors and quantum computers, has opened the field to greater competition. The United States and its allied industrial democracies now compete with China over their development.

In Japan, AI could increase demand for advanced chips, which is a clear reason the country aims to harness the technology while boosting its economic growth. Japan plans to draw up an AI policy by the end of the year, and it is expected to be closer to the approach taken by the United States than to the stringent rules promoted by the European Union.

However, experts point out that the country lacks the computing power, particularly the graphics processing units (GPUs), needed to train AI models.

Let’s take a peek into Japan’s regulations on AI.

No Particular Restrictions

There are no laws restricting the use of AI in Japan.

A report by the Ministry of Economy, Trade and Industry (METI) concluded that legally binding, horizontal requirements for AI systems are unnecessary at the moment. The reasoning is that AI progresses so quickly that regulation struggles to keep pace, and prescriptive, static, or highly detailed rules could hinder AI innovation. METI therefore recommended that the government allow companies to run their own AI governance while providing flexible guidance to support their efforts.

As for sector-specific legislation, none of it specifically forbids the use of AI; rather, these laws require companies to take appropriate precautions and to report risks.

For instance, the Digital Platform Transparency Act requires large online marketplaces, app stores, and digital advertising businesses to ensure fairness and transparency in their dealings with business users, including by disclosing the main factors that determine search rankings.

On the other hand, some regulations are applicable to the creation and use of AI even though they do not directly regulate AI systems.

The Japan Fair Trade Commission examined, from the perspective of fair competition, the potential risks of cartels and unfair trade practices carried out by algorithms, and concluded that most of these issues can be addressed under the existing Antimonopoly Act.

Government Helps Companies Adopt Suitable AI Governance Procedures

In addition, some AI projects have been abandoned because of social criticism rather than because they broke any law, most notably in the area of privacy. In response to such concerns, the government offers a number of tools to help companies voluntarily implement suitable AI governance procedures.

Several guidelines have already been established for the use and protection of data. METI and the Ministry of Internal Affairs and Communications (MIC) jointly developed the Guidebook on Corporate Governance for Privacy in Digital Transformation and the Guidebook for Utilization of Camera Images, both of which advise companies on handling personal data not only by complying with the Act on the Protection of Personal Information (APPI) but also by taking appropriate action based on communication with stakeholders.

Japan has chosen to approach AI regulation in a way that respects companies' voluntary governance, offering non-binding guidance to support it while imposing transparency requirements on certain major digital platforms. Japan is also working on reforms that would allow AI to be used both to achieve regulatory goals and to generate positive social impact. What kind of AI will truly satisfy these expectations is still up in the air, though. Global norms should be taken into account, and that is where international cooperation on AI regulation is needed.