AI specialist Alan Mosca explains that coordinated effort is needed to craft an AI governance framework that ensures responsible innovation.
Artificial intelligence (AI) has grown at an exciting and unprecedented pace, and it’s not showing signs of slowing down.
AI systems are becoming increasingly advanced and far-reaching, raising important questions around ethics, safety, and accountability that regulation may help address.
There’s an active debate around whether AI should be regulated on a technology-specific basis or if existing laws can be extended to cover AI systems sufficiently.
Both approaches have merits and challenges.
Historically, regulation has lagged technological innovation significantly. We’ve seen this happen across the centuries, as legislators have always found themselves having to be reactive to innovation.
However, it does appear that governments around the world, spurred by public opinion, are keen to move fast on AI regulation, and understandably so.
We’ve seen this already with the EU’s proposed AI Act and the UK’s AI Summit, and we will certainly see more regulatory action over the next couple of years.
The potential of AI is simply enormous.
With that comes the risk of misuse and uncontrolled expansion if it remains unregulated.
For example, while the development of autonomous weapons systems raises understandable concerns, sensible governance should keep the notion of ‘killer robots’ roaming the streets firmly in the realm of science fiction.
More realistically, though, regulators will need to focus their attention on a number of key areas of responsible AI development.
For civil engineering and construction, some of these key areas will be vital.
AI could make a mistake that impacts the stability and/or quality of a structure or building.
As such, ensuring that models aren’t harmful, and that the accountability for any error is well defined, will help prevent many potentially bad outcomes.
This will likely involve escalation to an expert human, such as a civil engineer, who will need to correct any potential mistakes.
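The escalation workflow described above can be sketched as a simple confidence-threshold gate. Everything in this sketch is hypothetical illustration (the class names, the 0.95 threshold, the idea of a self-reported confidence score); it is not an established standard or the author's proposal, just one minimal way such a human-in-the-loop check might look:

```python
# Hypothetical sketch: route low-confidence AI predictions about a
# structural parameter to a human engineer for review, rather than
# accepting them automatically. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Prediction:
    value: float        # e.g. a predicted load capacity in kN
    confidence: float   # the model's self-reported confidence, 0..1

def review_route(pred: Prediction, threshold: float = 0.95) -> str:
    """Decide who signs off on this prediction."""
    if pred.confidence >= threshold:
        return "auto-accept"           # high confidence: logged and accepted
    return "escalate-to-engineer"      # otherwise a qualified engineer reviews

print(review_route(Prediction(value=1200.0, confidence=0.99)))  # auto-accept
print(review_route(Prediction(value=1200.0, confidence=0.60)))  # escalate-to-engineer
```

In practice the hard part is not the gate itself but defining accountability: who is liable when a prediction passes the threshold and is still wrong, which is exactly the question regulation would need to answer.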
For this regulatory approach to work, governments and AI companies worldwide will need to cooperate closely.
Many of these rules can be completely sidestepped if there’s one place where bad actors are allowed to operate without consequences.
This will be the single hardest element to put in place. Precedents exist in financial regulation, where most regulators collaborate and coordinate in order to oversee global markets.
It’s increasingly likely that we will see dedicated AI regulators that operate in similar ways to the US Securities and Exchange Commission (SEC) and the UK Financial Conduct Authority (FCA).
An international or national AI regulatory body could be modelled on financial regulators like the SEC to enable coordinated oversight of the AI industry across borders.
This regulator would be responsible for creating and enforcing a common set of rules, standards, and compliance mechanisms related to development, testing, and deployment of AI systems.
Companies releasing AI products and services would need to comply with these rules.
The regulator would have the authority to investigate violations and to impose penalties such as fines, product bans, or criminal charges for wilful noncompliance.
Staffed by subject matter experts in AI ethics, technical standards, and policy, this regulator would aim to balance innovation with managing risks by taking a data-driven, adaptive approach.
With participation from industry, academia, and the public, an SEC-style model could enforce responsible AI development on a global scale.
Only time will tell what the eventual outcome will be, but there isn’t much time left to start proactively steering towards desirable results.
The UK has an abundance of AI experts who can help formulate policies, laws, and standards that promote innovation while managing risks, paving the way for a future where AI is a positive advance for all.
The path forward will require navigating complex challenges: balancing innovation with risk prevention, maintaining public trust, and building international alignment.
But the diligent work we put in today to steer AI's development and use toward positive ends will pay dividends far into the future.
If we make responsible AI regulation a priority now, we can build toward a world where AI empowers people and progress for generations to come.