AI4People’s third conversation on Sandboxes

The conversation was the third in the cycle of conversations organized by AI4People on relevant topics emerging from the Artificial Intelligence Act Proposal, a fundamental part of the EU Digital Agenda.

The sandboxes were the main topic of this conversation. They are regulated by Title V of the AI Act Proposal ('Measures in support of innovation'), Articles 53, 54 and 55. The proposal allows regulatory sandboxes to be established by one or more Member States' competent authorities, or by the European Data Protection Supervisor, to provide a controlled experimental environment that facilitates the development, testing and validation of innovative AI systems. Any significant risk to health, safety or fundamental rights identified during this phase leads to immediate mitigation, or to temporary suspension of development, before the systems are introduced to the market.

The conversation started with Dragos Tudorache, Member of the European Parliament and Chair of the Special Committee on Artificial Intelligence in a Digital Age, who gave some important updates on the proposal's progress in the European Parliament. Although there have been some delays in the legislative process, due to the complexity of the subject and the amendments that were tabled, a vote is expected in February and the trilogues could start in March; work on the proposal may therefore come to an end by the end of next year. Notably, almost every member agreed on the establishment of sandboxes, since they are an instrument that allows best practices from different Member States to be compared. Thanks to this comparison, the regulation can stay as close as possible to reality, an important advantage for institutions and enterprises alike. Direct feedback from private and public stakeholders is therefore very valuable, and both groups welcomed the idea of establishing sandboxes.

This was confirmed by the second speaker, Dr. Chandrima Ganguly, Senior Researcher in the AI Ethics team at Fujitsu, who pointed out that private entities, such as technology corporations, also consider sandboxes useful because they promote partnership and dialogue with other stakeholders. In developing ethical AI, they try to make it as human-centric as possible, and it is important to involve civil society from the sandbox phase onwards. In doing so, it becomes easier to guarantee greater transparency to users, citizens and consumers and to protect their social values and fundamental rights. The role of civil society is also essential in ensuring good privacy-preserving protocols. It also emerged that the sandbox phase reflects the fact that most technology companies do not want the algorithms they develop to reach the market too quickly, precisely to limit the risk of bias.

The third intervention came from Virginia Dignum, Professor of Computer Science at Umeå University and Wallenberg Chair on Responsible Artificial Intelligence, who addressed the view, widespread among some innovators, that regulation is an obstacle to free innovation. She believes this opinion rests on a misunderstanding: regulation can be a very beneficial instrument, and sandboxes are a great way to combine innovation and regulation. An experimental approach such as sandboxes, together with cooperation between innovators and regulators, makes it possible to identify more suitable implementations. Once sandboxes have been used in several Member States of the European Union, the different experiences (including international ones) can be integrated, best practices extracted from them, and a harmonized and effective technical and regulatory framework developed. An experimental phase like this can contribute to greater transparency in algorithms and increase society's trust in AI, always bearing in mind that fully de-biasing a system is very difficult, if not impossible.

During the conversation, Angeliki Dedopoulou, Public Policy Manager for AI and Fintech at Meta, pointed out that sandboxes are not the only form of regulatory and innovation experimentation. A similar instrument, the policy prototype, is used by several private entities and could also serve as an alternative to sandboxes. Policy prototypes operate in contexts where no legislation yet exists, and they allow policy experimentation while a new regulatory framework is being contemplated; here too, stakeholders are involved. Meta has also started a global strategic initiative, establishing a consortium of different stakeholders to promote experimental regulatory efforts in the field of new and emerging technologies. Although policy prototypes can be a valid method of experimentation, the substantial difference between them and sandboxes, highlighted by Dragos Tudorache, is that sandboxes involve the regulator directly and allow different experiences to be shared. Nevertheless, the policy-prototype model also supports the thesis that a participatory and decentralized approach can be very useful for regulation and innovation.

The conversation continued with the intervention of Yordanka Ivanova, Legal and Policy Officer on AI at DG Connect, who highlighted that the AI Act will be the first piece of legislation that tries to create a common framework around a key tool like sandboxes. To reassure private entities about the use of sandboxes and, more generally, about artificial intelligence regulation, it is worth observing that EU institutions and their legislation have proved innovation-friendly on several occasions and firmly believe in this process. The sandbox period is as important for institutions as for private entities, because it ensures that AI-based systems meet all the legal requirements and, no less important, builds greater legal certainty. For these reasons, Ivanova underlined that it might be useful to launch regulatory sandboxes even before the adoption of the AI Act, as in the Spanish pilot case, in order to test its requirements and prevent possible deficiencies in the regulation.

Related to this, Robert Madelin, former Director-General of DG Connect, pointed out an issue that should not be underestimated: access to sandboxes is limited, which might lead demand to exceed supply. In this regard, integrating private experimentation could help ensure more participation and decentralization. A regulatory sandbox will always require the presence of a regulator, but that need not in any way preclude distributed ownership and governance of sandboxes.

In the discussion, there was also debate on the best ways to ensure that, from day one, no Member State on EU territory was left behind. Various Commission initiatives were underway, but more was needed, not excluding peer-to-peer reviews and learning. In the very short term, too, all national regulators could partner around existing sandboxes.

Given these observations, a positive and proactive attitude can be inferred on the part of the stakeholders. However, the sandbox model also received some criticism during the discussion, especially from Detlef Eckert, Special Advisor at FIPRA International. He acknowledged that sandboxes have beneficial aspects, such as partnerships, but, as mentioned before, their number is limited; they could therefore be considered a contribution rather than an effective solution. One benefit Eckert did recognize in sandboxes is that they make clear how complex AI regulation is. However, he wondered whether it would not be better to involve mainly small and medium-sized enterprises (SMEs) rather than big companies, which have more resources of their own to make their own assessments of regulation and innovation. Another criticism is that SMEs involved in these kinds of programs often give up on them because of economic and bureaucratic problems such as additional costs and overregulation. According to Eckert, a further disincentive to participating in sandboxes is the secrecy of the algorithms that companies develop, which are protected by forms of intellectual property. It may not be convenient for most enterprises to openly share the datasets behind their algorithms, because this could make them less competitive in the market. For these reasons, some companies may choose to operate directly on the market, or to participate in private experimental projects that do not involve excessive dataset sharing or too much liability. It should be remembered that the secrecy of algorithms and their copyright is a highly controversial topic that has generated several transparency problems; the most prominent case concerned the COMPAS algorithm used in American predictive justice.

As mentioned before, all the participants in the discussion highlighted the fundamental role of civil society in the process, especially when it comes to removing bias from algorithms. Nevertheless, Eckert considered that particular attention should be paid to this involvement, which must not turn into heavy lobbying; to avoid that, civil society representatives should openly specify which interests they are protecting as civil society.

Robert Madelin drew the conclusions of the meeting, emphasizing the positive reception of sandboxes by stakeholders and the need to pay attention to details such as full access. The hope is that sandboxes will be an initiative that is as distributed and widespread as possible, ideally extending to every Member State, also through a peer-to-peer approach that respects national differences. It is important that sandboxes involve not only regulators and the regulated, but also third parties, such as experts, to mediate the conversation. A final, and no less important, point is that sufficient resources need to be allocated.
