Symposium on AI-Assisted Formal Specifications at ICSE 2025

How do we trust that AI-generated software does what humans intend?

This symposium brings together the most relevant voices and projects in AI-assisted formal specifications and provides an update on the cross-functional efforts between academia and industry.

Increasingly capable AI brings benefits and risks

Risk

AI entrenches biases from an unequal world, while hiding them under a veneer of objectivity.

Solution

We need to empower human deliberation to set rules for AI outputs.

Risk

Future AI could pursue our directives in dangerous and unpredictably inhuman ways.

Solution

We need to be able to set objective constraints on AI systems.

Risk

Stopping progress toward artificial general intelligence (AGI) prolongs all the problems that AGI could solve.

Solution

We need to build safe, non-agentic tools to solve today’s and tomorrow’s problems.

How Safeguarded AI works

Users, engineers, and regulators should be able to express enforceable constraints on AI outputs, such as ensuring that a generated software program won't crash, or that a designed molecule won't react with specified parts of a cell.
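To give a flavor of what such a constraint can look like, here is a minimal sketch in Lean 4 (an illustrative example of the general pattern, not the programme's actual tooling), in which a "won't crash" property is enforced by the type system itself:

```lean
-- Hypothetical example: an array lookup that provably cannot crash.
-- The index type `Fin xs.size` carries a proof that the index is in
-- bounds, so out-of-bounds access is impossible by construction, and
-- the compiler rejects any caller that cannot supply such a proof.
def safeGet (xs : Array Nat) (i : Fin xs.size) : Nat :=
  xs[i]

-- The literal index 1 is checked against the known size before the
-- program ever runs.
#eval safeGet #[10, 20, 30] ⟨1, by decide⟩  -- 20
```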

Safeguarded AI separates specification from solution generation: specification generation tools help users state the desired properties of outputs, independently of the solution generators that propose candidate outputs. Proof generators then use models of the world to produce objective evidence that proposed solutions have all the desired properties.
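To make that separation concrete, here is a deliberately tiny illustration in Lean 4 (again, a sketch of the general pattern rather than the programme's tooling), in which the specification, the proposed solution, and the machine-checked evidence are three distinct artifacts:

```lean
-- Specification: a desired property of outputs, stated independently
-- of any particular solution.
def PreservesLength (f : List Nat → List Nat) : Prop :=
  ∀ xs, (f xs).length = xs.length

-- Proposed solution: in Safeguarded AI, this could come from an
-- untrusted AI solution generator.
def double (xs : List Nat) : List Nat :=
  xs.map (· * 2)

-- Objective evidence: a proof, checkable by a small trusted kernel,
-- that the solution satisfies the specification.
theorem double_meets_spec : PreservesLength double := by
  intro xs
  simp [double]
```

Because the specification and the evidence stand apart from the solution, the solution generator itself never needs to be trusted.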

Provable safety properties enable automated compliance, letting humans focus on what an AI system should do rather than on checking whether it is doing it.
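Continuing the hypothetical Lean sketch above, automated compliance can be pictured as a deployment gate that accepts only solutions bundled with evidence, so the proof checker itself becomes the compliance check:

```lean
-- Hypothetical pattern (reusing `PreservesLength` from the sketch
-- above; not the programme's actual interface): a solution can only
-- be packaged for deployment together with a proof that it meets
-- the specification.
structure Verified (Spec : (List Nat → List Nat) → Prop) where
  solution : List Nat → List Nat
  evidence : Spec solution

-- Anything of type `Verified PreservesLength` has, by construction,
-- already passed compliance checking.
def deploy (v : Verified PreservesLength) : List Nat → List Nat :=
  v.solution
```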

For more details, you can read our executive summary 2-pager, which describes our overall strategy, or our pitch-deck-style slides.

Our first priority: Scaling formal methods

Formal verification is the highest standard for assuring the safety of software systems, but today it is too costly for widespread use. We aim to show that progress in AI and formal methods can dramatically lower these costs.

Read our 2-pager on scaling safety proofs of software, or check out the projects page to see our other efforts.