AI specialist Alan Mosca explains that coordinated efforts are needed to craft an AI governance framework that ensures responsible innovation.
Artificial intelligence (AI) has grown at an exciting and unprecedented pace, and it’s not showing signs of slowing down.
AI systems are becoming increasingly advanced and far-reaching, raising important questions around ethics, safety, and accountability that regulation may help address.
When it comes to AI, governments are keen to move fast
There’s an active debate around whether AI should be regulated on a technology-specific basis or whether existing laws can be extended sufficiently to cover AI systems.
Both approaches have merits and challenges.
Historically, regulation has lagged significantly behind technological innovation. We’ve seen this across the centuries, as legislators have repeatedly found themselves reacting to innovation after the fact.
However, it does appear that governments around the world, spurred by public opinion, are keen to move fast on AI regulation, and understandably so.
We’ve seen this already with the EU’s proposed AI Act and the UK’s AI Safety Summit, and we will certainly see more regulatory action over the next couple of years.
Not letting science fiction become reality
The potential of AI is simply enormous.
With that comes the risk of misuse and uncontrolled expansion if it remains unregulated.
For example, while the development of autonomous weapons systems raises understandable concerns, sensible governance should keep the notion of ‘killer robots’ roaming the streets firmly in the realm of science fiction.
More realistically, though, regulators will need to focus their attention on several key areas of responsible AI development:
- Transparency around use of training data: requiring disclosure of the sources and types of data used to train AI systems, to help detect possible biases or a lack of diversity (a minimal sketch of such a disclosure record follows this list).
- Responsible use of personal data: regulators will need to ensure AI systems use personal data ethically, with full transparency and consent. This includes setting boundaries around access to and retention of user data.
- Responsible development of new capabilities: guidelines will be important to steer the development of new AI capabilities in a direction that benefits society broadly, not just commercially.
- Censorship of harmful models: there will likely need to be reasonable boundaries on what types of abilities AI systems should and shouldn’t possess to prevent harm.
- Prevention of malicious applications: safeguards must be in place to prevent the misuse of AI for harmful purposes, whether accidental or intentional. Audits and impact assessments may help here.
- Ability to escalate an output to a human for review: even advanced AI systems should have appropriate human oversight mechanisms, particularly for high-stakes decisions that significantly impact human lives and welfare.
- Accountability for errors: just like humans, AI is imperfect and may make mistakes. The boundaries of responsibility between developers and operators, and the limits of that responsibility, need to be defined explicitly and precisely.
- Protections for marginalised groups: extra safeguards against AI harm for vulnerable demographic groups.
- Considerations around emerging capabilities: as AI advances, regulations will need updating to cover new capabilities, such as deepfakes, that could require oversight.
- Public education: promoting awareness and understanding among the general public on AI development and regulation.
- International cooperation: broad multinational buy-in will be key to making any framework effective, so it deserves particular emphasis.
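To make the transparency point concrete, here is a minimal sketch, in Python, of what a machine-readable training-data disclosure record might look like. The DatasetDisclosure schema, its field names, and the review heuristic are illustrative assumptions on my part, not part of any existing regulation or standard.

```python
from dataclasses import dataclass, field

# Illustrative only: neither this schema nor its field names come from any
# existing regulation; they sketch what a disclosure record *might* capture.
@dataclass
class DatasetDisclosure:
    name: str               # human-readable dataset name
    source: str             # where the data was obtained
    licence: str            # terms under which it may be used
    collection_period: str  # when the data was gathered
    personal_data: bool     # does it contain personal data?
    known_gaps: list[str] = field(default_factory=list)  # declared coverage/bias gaps

def flag_for_review(disclosures: list[DatasetDisclosure]) -> list[str]:
    """Return the datasets a regulator might want to examine first:
    those containing personal data or with declared coverage gaps."""
    return [d.name for d in disclosures if d.personal_data or d.known_gaps]

if __name__ == "__main__":
    corpus = [
        DatasetDisclosure("public-engineering-specs", "open standards bodies",
                          "CC-BY-4.0", "2015-2023", personal_data=False),
        DatasetDisclosure("site-inspection-reports", "partner contractors",
                          "private agreement", "2019-2023", personal_data=True,
                          known_gaps=["few reports from small residential projects"]),
    ]
    print(flag_for_review(corpus))  # ['site-inspection-reports']
```

Even a simple structured record like this would let auditors query disclosures at scale rather than reading free-text statements.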
AI regulation in civil engineering
For civil engineering and construction, some of these key areas will be vital.
AI could make a mistake that affects the stability or quality of a structure or building.
As such, ensuring that models aren’t harmful, and that accountability for any error is well defined, will help prevent many potentially bad outcomes.
This will likely involve escalation to an expert human, such as a civil engineer, who can review and correct any potential mistakes.
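As a rough illustration of how such an escalation mechanism might work, here is a minimal sketch assuming the AI system reports a self-assessed confidence score with each output. The Assessment structure, the route function, and the 0.95 threshold are hypothetical choices, not a prescribed mechanism.

```python
from dataclasses import dataclass

# Minimal sketch of human-in-the-loop escalation. The threshold and all
# names here are illustrative assumptions, not a real system's design.
CONFIDENCE_THRESHOLD = 0.95  # below this, a human engineer must review

@dataclass
class Assessment:
    element: str       # e.g. "bridge deck, span 3"
    verdict: str       # model's conclusion, e.g. "within tolerance"
    confidence: float  # model's self-reported confidence, 0.0-1.0

def route(assessment: Assessment) -> str:
    """Accept high-confidence outputs; escalate everything else to a
    qualified civil engineer for review and sign-off."""
    if assessment.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-accept: {assessment.element} ({assessment.verdict})"
    return f"escalate to engineer: {assessment.element} ({assessment.verdict})"

print(route(Assessment("bridge deck, span 3", "within tolerance", 0.99)))
print(route(Assessment("retaining wall, section B", "possible defect", 0.72)))
```

In practice, self-reported confidence is often poorly calibrated, so a real system would also need calibration checks and an audit trail of every decision to support the accountability requirements discussed above.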
Collaboration is essential
For this regulatory approach to work, governments and AI companies worldwide will need to cooperate closely.
Many of these rules could be completely sidestepped if there were even one jurisdiction where bad actors could operate without consequences.
This will be the single hardest element to put in place, but precedents exist in financial regulation, where most national regulators collaborate and coordinate to oversee global markets.
It’s increasingly likely that we will see dedicated AI regulator entities that operate in similar ways to the US Securities and Exchange Commission (SEC) and the UK Financial Conduct Authority (FCA).
What would AI regulators look like?
An international or national AI regulatory body could be modelled after financial regulators like the SEC to enable coordinated oversight of the AI industry across borders.
This regulator would be responsible for creating and enforcing a common set of rules, standards, and compliance mechanisms related to development, testing, and deployment of AI systems.
Companies releasing AI products and services would need to:
- register with the regulator;
- submit algorithmic models and data for audit;
- disclose capabilities and limitations; and
- adhere to codes of conduct.
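As a sketch of what a machine-readable registration filing could contain, the following mirrors the four obligations above. The AIProductFiling schema and the completeness check are assumptions for illustration; no regulator currently mandates this format.

```python
from dataclasses import dataclass, field

# Hypothetical filing format: it simply mirrors the four obligations
# listed above, not any real regulator's schema.
@dataclass
class AIProductFiling:
    company: str
    product: str
    model_audit_ref: str                # reference to model/data submitted for audit
    capabilities: list[str]             # disclosed capabilities
    limitations: list[str]              # disclosed limitations
    codes_of_conduct: list[str] = field(default_factory=list)

def is_complete(filing: AIProductFiling) -> bool:
    """A filing is complete only if every disclosure obligation is met."""
    return all([
        filing.model_audit_ref,
        filing.capabilities,
        filing.limitations,
        filing.codes_of_conduct,
    ])

filing = AIProductFiling(
    company="ExampleAI Ltd",
    product="StructureCheck",
    model_audit_ref="audit-2024-001",
    capabilities=["flag potential structural defects from survey data"],
    limitations=["not validated for post-earthquake assessment"],
    codes_of_conduct=["responsible-ai-code-v1"],
)
print(is_complete(filing))  # True
```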
The regulator would have the authority to investigate violations and impose penalties such as fines, product bans, or criminal charges for wilful noncompliance.
Staffed by subject matter experts in AI ethics, technical standards, and policy, this regulator would aim to balance innovation with risk management by taking a data-driven, adaptive approach.
With participation from industry, academia, and the public, an SEC-style model could enforce responsible AI development on a global scale.
Regulation will enable positive innovation
Only time will tell what the eventual outcome will be, but we don’t have much time to start steering proactively towards desirable results.
The UK has an abundance of AI experts who can help formulate policies, laws, and standards that promote innovation while managing risks - paving the way for a future where AI is a positive advancement for all.
The path forward will require navigating complex challenges: balancing innovation with risk prevention, maintaining public trust, and achieving international alignment.
But the diligent work we put in today to steer AI’s development and use toward positive ends will pay dividends far into the future.
If we make responsible AI regulation a priority now, we can build toward a world where AI empowers people and progress for generations to come.