Will states open the way to AI regulation?


2024 was a busy year for legislators (and lobbyists) concerned about AI, particularly in California, where Gavin Newsom signed 18 new AI laws while also vetoing high-profile AI legislation.

And 2025 could be just as active, especially at the state level, according to Mark Weatherford. Weatherford has, in his words, seen how the policy and legislative sausage gets made at both the state and federal level; he served as chief information security officer for the states of California and Colorado, as well as deputy undersecretary for cybersecurity under President Barack Obama.

Weatherford said that he has held a variety of job titles over the years, but his role usually boils down to figuring out "how we raise the level of the conversation around security and privacy so that we can help influence how policy is made." Last fall, he joined synthetic data company Gretel as vice president of policy and standards.

So I was excited to talk with him about what he thinks comes next in AI regulation, and why he thinks states are likely to lead the way.

This interview has been edited for length and clarity.

That goal of raising the level of the conversation will probably resonate with many people in the tech industry, who have perhaps watched congressional hearings about social media or related topics in the past and winced at what some elected officials did and didn't know. How optimistic are you that lawmakers can get the context they need to make informed decisions about regulation?

Well, I'm very confident they can get there. What I'm less confident about is the timeline to get there. You know, AI is changing daily. It blows my mind that issues we were talking about just a month ago have already evolved into something else. So I am confident that the government will get there, but they need people to help guide them and educate them.

Earlier this week, the U.S. House of Representatives task force on artificial intelligence, which started about a year ago, released its report — well, it took a year to do it. It's a 230-page report; I'm scrolling through it right now. (Weatherford and I first spoke in December.)

(When it comes to) the policy and legislative sausage-making, you have two very partisan organizations trying to come together and create something that makes everybody happy, which means everything gets watered down just a little bit. It just takes a long time, and now, as we move into a new administration, everything is up in the air on how much attention certain things are going to get.

It sounds like your view is that we could see more regulatory action at the state level in 2025 than at the federal level. Is that right?

I absolutely believe that. I mean, in California, I think Governor (Gavin) Newsom, just within the last couple of months, signed 12 pieces of legislation that had something to do with AI. (Again, it's 18 by TechCrunch's count.) He vetoed the big AI bill, which would have really required AI companies to invest much more in testing and would have slowed things down.

In fact, I gave a talk yesterday in Sacramento at the California Cybersecurity Education Summit, and I talked a little bit about the legislation that is happening across the United States, across all the states, and it's something like over 400 different pieces of state-level legislation that have been introduced in just the past 12 months. So there's a lot going on there.

And I think one of the big concerns — it's a big concern in technology in general and in cybersecurity, but we're seeing it on the artificial intelligence side right now — is that there's a need for harmonization. Harmonization is the word that (the Department of Homeland Security) and Harry Coker at the (Biden) White House have used to (refer to): How do we harmonize all of these rules and regulations around these different things so that we don't have this (situation) of everybody doing their own thing, which drives companies crazy.

I think there's going to be a lot more activity on the state side, and hopefully we can harmonize it a little bit so there isn't a wildly diverse set of regulations that companies have to comply with.

I hadn't heard that term, but that was going to be my next question: I imagine most people would agree that harmonization is a good goal, but are there mechanisms by which it's actually happening? What incentive do the states have to make sure their laws and regulations are in line with each other?

Honestly, there aren't a lot of incentives to harmonize regulations, except that I can see the same kind of language popping up in different states — which to me indicates that they're all watching what each other is doing.

But from a purely, like, "let's take a strategic plan approach to this across all the states" — that's not going to happen; I don't have any high hopes of it happening.

Do you think other states might follow California's lead in terms of the general approach?

A lot of people don't like to hear this, but California kind of pushes the envelope (in tech legislation) in a way that helps people come along, because they do all the heavy lifting, they do a lot of the research that goes into some of that legislation.

The 12 bills that Governor Newsom just signed were across the map, everything from pornography to the use of data for training to all different kinds of things. They have been pretty comprehensive about leaning into it.

Though my understanding is that they passed the more targeted, specific measures, and then the biggest regulation, which attracted most of the attention, Governor Newsom ultimately vetoed.

I could see both sides of it. There's the privacy component that was driving the bill initially, but then you have to consider the cost of doing these things, and the requirements that risk keeping AI companies from innovating. So there's a balance there.

I would fully expect (in 2025) that California is going to pass something a little bit more strict than what they did (in 2024).

And is your sense that at the federal level, there's certainly interest — like the House report you mentioned — but it's not necessarily going to be as big a priority, and that we won't necessarily see major legislation (in 2025)?

Well, I don't know. It depends on how much emphasis the (new) Congress brings in. I think we're going to see. I mean, you read what I read, and what I read is that there's going to be an emphasis on less regulation. But technology in many respects, certainly around privacy and cybersecurity, is kind of a bipartisan issue — it's good for everybody.

I'm not a big fan of regulation; there's a lot of duplication and a lot of wasted resources that happen with so much different legislation. But at the same time, when the safety and security of society is at stake, as it is with AI, I think there's definitely a place for more regulation.

You mentioned it being a bipartisan issue. My sense is that when there is a split, it's not always predictable — it isn't just all the Republican votes versus all the Democratic votes.

That's a great point. Geography matters, whether we like to admit it or not, and that's why places like California are really out ahead in some of their legislation compared to some other states.

Obviously, this is an area where Gretel works, but it sounds like you believe, or the company believes, that as there's more regulation, it pushes the industry in the direction of more synthetic data.

Maybe. One of the reasons I'm here is, I believe synthetic data is the future of AI. Without data, there's no AI, and the quality of data is becoming more of an issue as the pool of data gets used up or shrinks. There's going to be more and more of a need for high-quality synthetic data that ensures privacy and eliminates bias and takes care of all of those kinds of nontechnical, soft issues. We believe that synthetic data is the answer to that. In fact, I'm 100% convinced of it.

This is less directly about policy, though I think it has sort of policy implications, but I'd love to hear more about what brought you around to that point of view. I think there are other folks who recognize the problems you're talking about, but think of synthetic data as potentially amplifying whatever biases or problems were in the original data, rather than solving the problem.

Sure, that's the technical part of the conversation. Our customers feel like we have solved that, and there is this concept of the flywheel of data generation — that if you generate bad data, it gets worse and worse, but by building controls into this flywheel that validate that the data is not getting worse, that it's staying the same or getting better each time the flywheel comes around. That's the problem Gretel has solved.

Many Trump-aligned figures in Silicon Valley have warned about AI "censorship" — the various weights and guardrails that companies put around content created by generative AI. Do you think that's likely to be regulated? Should it be?

Regarding concerns about AI censorship, the government has a number of administrative levers they can pull, and when there is a perceived risk to society, it's almost certain they will take action.

However, finding that sweet spot between reasonable content moderation and restrictive censorship will be a challenge. The incoming administration has been pretty clear that "less regulation is better" will be the modus operandi, so whether through formal legislation, executive order, or less formal means like (National Institute of Standards and Technology) guidelines and frameworks or joint statements via interagency coordination, we should expect some guidance.

I want to go back to this question of what good regulation might look like. There's this big spread in how people talk about AI, like it's either going to save the world or destroy the world, it's the most amazing technology or it's wildly overhyped. There are so many divergent opinions about the technology's potential and its risks. How can a single piece, or even multiple pieces, of AI regulation encompass that?

I think we have to be very careful about managing the sprawl of AI. We have already seen with deepfakes and some of the really negative aspects — it's concerning to see young kids now in high school and even younger who are generating deepfakes that are getting them in trouble with the law. So I think there is a place for legislation that controls how people can use artificial intelligence in ways that don't violate what may be an existing law — we create a new law that reinforces the current law, but just adds the AI component to it.

I think we — those of us who have been in the technology space — all have to remember, a lot of this stuff that we consider just second nature to us, when I talk to my family members and some of my friends who are not in technology, they literally don't have a clue what I'm talking about most of the time. We don't want people to feel like big government is over-regulating, but it's important to talk about these things in language that non-technologists can understand.

On the other hand, as you can probably tell just from talking to me, I'm giddy about the future of AI. I see so much goodness coming. I think we're going to have a couple of bumpy years as people get more in tune with it and understand it more, and legislation is going to have a place there — to both let people understand what AI means to them and to put some guardrails around AI.
