Issues of cooperation in regulating computing power, jointly insuring AI products, and determining the social status and protection of persons with a hybrid nervous system (hybrid people)
Dear Colleagues and Friends,
In furtherance of the discussion on the topic «Key technology policy issues [we] will be grappling with in 2025», let me outline some reflections at the intersection of the theory and practice of computer science, law, and neuroethics.
«Our society and thus each individual has the possibility to (help) decide how the world, in which we want to live with artificial intelligence in the future, should look. Philosophy, law, and technology play a central role in the discourse that has to be conducted for this purpose» (from Fraunhofer IAIS's report «Trustworthy use of AI»).
The essential idea of these proposals is to draw on the deep technical expertise of the computing community in order to provide policy leaders and stakeholders with nonpartisan theses on policy gaps in neuroethics and in the development and implementation of ML/AI tools. It is preferable that this regulation be a harmonious symbiosis of legal norms at the state level and the norms of socially responsible professional communities.
The approach mentioned in paragraph 6.2 of The CEN-CENELEC Focus Group Report: Road Map on Artificial Intelligence (AI), which in effect relies on autonomous self-regulation of AI tools (systems), seems inadequate to current risks and threats: "An alternative approach is that the system itself ensures that modifications of its functionality due to self-learning have no negative impact on assessment topics like safety or fairness". Further, in the same place, the potential ineffectiveness of the certification system for AI tools is effectively acknowledged: "For artificial intelligence systems that learn as they are used, the problem is that the behavior of the system changes continuously and will require a new conformity assessment each time."
In these circumstances, the opinion of Professor Paolo Missier (quoted from the ETPC mailing) sounds very convincing: «Trustworthiness mechanisms and measures are being advanced in AI regulations and standards that may not actually increase trust».
Trust in AI tools is not helped by the fact that IT giants behave selfishly and unceremoniously, like new conquistadors: exploiting gaps in legislative regulation, they arbitrarily accumulate the resources of all mankind as if these were their own assets, and pass judgment on mankind's inefficiency (spending billions on creating simulacra of workers in mass-employment sectors without balanced retraining of the workers being displaced).
The above suggests an approach to regulating ML/AI tools that is both more granular and more human-centric.
Key Insights

Based on the content of the broad discussion on the regulation of new computing tools, and taking into account the normative provisions already formulated at the level of the UN, the EU, and individual states on both sides of the Atlantic, it is important to achieve consensus between regulators, inventors, and users (in a broad sense) on the following issues:

· legislative recognition of AI/ML tools as a source of increased danger, and the extension of strict-liability rules to all cycles of circulation of AI/ML tools as both an IT product and a commercial product (at the international, conventional, and national levels of ML/AI regulation);
· quotas for the computing power used to create ML/AI-based tools; where the legislatively established threshold is exceeded, quotas would be allocated strictly on an auction and public basis;
· licensing of the activities of vendors, owners, distributors, and users of AI tools whenever the legislatively established threshold for the potential impact of AI/ML tools on a local or global population is exceeded;
· implementation of an online monitoring system for compliance with license conditions and energy quotas (with the necessary equipment placed at the expense of, and on the networks of, license and quota recipients);
· establishment of insurance funds for the joint participation of inventors, owners, sellers, suppliers, and users of ML/AI tools in co-insurance programs for AI/ML products, or imposing on insurance companies the obligation to provide compulsory liability insurance covering the creation, ownership, and use of ML/AI tools;
· anticipating objections from the insurance community, three basic criteria are proposed for determining the size of the insurance premium: the minimum/maximum volume of the insured's computing power; the minimum/maximum number of clients/subscribers/users; and the minimum/maximum size of the population at risk of exposure to (including unintentional use of) the insured product (a minimal illustrative calculation follows this list);
· sale of compulsory insurance policies for ML/AI products and civil liability through the IAI insurance store (modelled on the Estonian digital store for public services in the IT startup sector);
· legislative establishment of a percentage deducted from insurance premiums (for compulsory and voluntary insurance of ML/AI tools) and directed to NPOs created to finance the transition away from primitive physical employment amid the mass release of low-skilled workers, and the retraining of those released workers;
· establishing/recognizing the legal status of individuals with a hybrid nervous system (hybrid people) and ensuring the implementation of their rights and obligations, including the right to lifelong insurance coverage for risks of harm to the health and property of both hybrid people and third parties (as a result of incidents caused by the impact of experimental instruments, including brain-computer interfaces, on hybrid people, as well as the inoperability of experimental instruments identified during or upon completion of an experiment);
· development of standards and programs for reverse socialization, understood here as the process of restoring the social activity of hybrid persons, associated with determining the degree of their ability, and the conditions under which they can act, independently and on their own behalf (in physiological, emotional, social, and legal aspects), in the context of experiments by Neuralink and other flagship ML/AI tools for "reviving" people with spinal cord injuries and other neurological and physical disabilities and limitations;
· developing and 'teaching multidisciplinary courses on AI and social responsibility and building networks with industry practitioners, government policymakers, and community partners to produce AI technologies and governance mechanisms that are responsive to community needs, rather than driven solely by business interests', as suggested by Michael C. Loui et al. in the Opinion 'Artificial Intelligence, Social Responsibility, and the Roles of the University' (Communications of the ACM, August 2024, Vol. 67, No. 8, pp. 22-25).
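To make the three premium criteria above concrete, here is a minimal illustrative sketch in Python. Every base value, rate, and unit in it is an invented assumption for discussion, not a figure from any regulation, standard, or actuarial model.

```python
# Hypothetical premium sketch: all constants below are illustrative
# assumptions, not values from any regulation or actuarial table.

def premium(compute_pflops: float, users: int, population_at_risk: int) -> float:
    """Estimate an annual co-insurance premium from the three proposed
    criteria: computing power, client/user count, and the size of the
    population potentially exposed to the insured ML/AI product."""
    BASE = 10_000.0          # assumed base premium, in EUR
    RATE_COMPUTE = 500.0     # assumed loading per PFLOP/s of computing power
    RATE_USERS = 0.05        # assumed loading per client/subscriber/user
    RATE_POPULATION = 0.01   # assumed loading per person at risk of exposure
    return (BASE
            + RATE_COMPUTE * compute_pflops
            + RATE_USERS * users
            + RATE_POPULATION * population_at_risk)

# Example: a mid-sized vendor with 20 PFLOP/s of compute, 100,000 users,
# and an estimated exposure population of 2 million people.
print(f"{premium(20, 100_000, 2_000_000):,.2f} EUR")  # 45,000.00 EUR
```

In such a scheme the legislatively established minimum/maximum bounds for each criterion would cap the inputs; the linear form chosen here is only the simplest possible illustration of how the three criteria could be combined.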
Key Takeaways

Taking as a basis the principles of safe release of products to the market ('making available on the market', Art. R1, R4, R5 of Annex I to Decision 768/2008/EC), complemented by rules for licensing computing power and co-insuring AI products among AI market participants (inventors, owners, vendors, providers, and users of AI/ML tools), we can arrive at a human-centric approach to the release and use of artificial intelligence products. This approach increases the chances to:

· avoid regulating AI for the sake of regulation itself;
· create mechanisms to protect users of AI products, and society as a whole, from both unintentional and intentional damage (inflicted for selfish, hooligan, or terrorist purposes using ML/AI tools);
· limit irresponsible and unprofessional experiments with ML/AI tools (open-source ones in particular), establishing and enforcing the principle of not putting ML/AI tools into the wrong hands;
· reduce the risk of exponential growth in irreparable damage from the release of AI products to the market, both through the licensing of computing power and the co-insurance of ML/AI products mentioned above, and through the legislative criminalization of the activities of individuals who ignore the system of computing-power licensing and joint insurance of artificial intelligence products;
· establish a regime of effective sanctions based not on difficult-to-prove violations of ML/AI regulation/certification, but on a formal basis: entering the market with AI/ML products without the necessary insurance coverage of risks and/or exceeding the established/purchased capacity quotas or license conditions (a minimal formal-check sketch follows the Key Takeaways).

With this approach, we get:

· stable protection of society from abuses in the AI sphere, without the need to form and maintain a bloated bureaucratic staff to assess the risks of AI/ML products;
· a reduced risk of exponential growth in uncompensated damage from the release of AI/ML products to the market, both through the licensing of/quotas for computing power and the joint-insurance measures mentioned above, and through the legislative criminalization of the activities of persons who ignore them.

It seems that regulation based on the stated fundamental principles can help to harmonize the interests of, on the one hand, IT inventors and AI development enthusiasts, and, on the other, an alarmed society, government institutions, and each individual.
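As an illustration of sanctioning "on a formal basis", the following Python sketch checks only two formally verifiable facts: whether valid insurance coverage is on file, and whether metered computing power stays within the licensed quota. All field names and values are hypothetical assumptions, not part of any existing regime.

```python
# Hypothetical sketch of the proposed formal compliance check:
# a market entry is lawful only if insurance coverage exists and
# metered computing power stays within the licensed/purchased quota.

from dataclasses import dataclass

@dataclass
class MarketEntry:
    vendor: str
    insured: bool        # valid compulsory insurance policy on file
    quota_pflops: float  # licensed/purchased computing-power quota
    used_pflops: float   # metered consumption from online monitoring

def violations(entry: MarketEntry) -> list[str]:
    """Return the formal violations that would trigger sanctions, with
    no need to prove harm or assess the product's internal behaviour."""
    found = []
    if not entry.insured:
        found.append("market entry without required insurance coverage")
    if entry.used_pflops > entry.quota_pflops:
        found.append("computing power exceeds licensed quota")
    return found

print(violations(MarketEntry("ExampleVendor", insured=False,
                             quota_pflops=10.0, used_pflops=12.5)))
```

The design point is that both conditions are objectively observable (a policy register and a metering feed), which is what makes enforcement cheap compared with assessing the behaviour of a continuously learning system.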
Insurance      | Licensing      | Quotas
Ø Purchase     | Ø Purchase     | Ø Purchase
Ø Renewal      | Ø Renewal      | Ø Renewal
Ø Cancellation | Ø Cancellation | Ø Cancellation

Fig.1 A UI prototype of the IAI online service
Functionality
• insurance service — mandatory and voluntary liability insurance
• licensing service — registration/extension of licenses and payment of license fees
• quota service — obtaining and purchasing quotas (including via auction sales, surcharges for exceeding quotas, and extension/renewal)
Fig.2 A UI prototype of the IAI online service
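A minimal sketch of how the IAI store's three services and their shared Purchase/Renewal/Cancellation operations (Fig. 1-2) might be modelled in code. Every identifier here is an assumption made for illustration; the proposals define no API.

```python
# Hypothetical data model for the IAI online service: three services,
# each exposing the same purchase/renewal/cancellation operations.

from enum import Enum

class Service(Enum):
    INSURANCE = "insurance"  # compulsory/voluntary liability policies
    LICENSING = "licensing"  # license registration/extension, license fees
    QUOTAS = "quotas"        # auction purchases, overage surcharges

class Operation(Enum):
    PURCHASE = "purchase"
    RENEWAL = "renewal"
    CANCELLATION = "cancellation"

def submit(service: Service, op: Operation, account: str) -> dict:
    """Build a request record for the IAI store backend (illustrative)."""
    return {"service": service.value, "operation": op.value,
            "account": account}

# Example: a vendor buying a computing-power quota at auction.
print(submit(Service.QUOTAS, Operation.PURCHASE, "vendor-42"))
```

Keeping the operation set identical across the three services, as the prototype panels suggest, would let one workflow (and one audit trail) serve insurance, licensing, and quotas alike.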
Of course, the presented theses are only rough drafts of conceptual proposals for discussion by the professional community.
Link to the original article: https://habr.com/ru/articles/856600/