As generative AI adoption grows, responsibly deploying AI systems becomes even more crucial. Here, automated reasoning can make a key difference: it is a technique used in computer science to prove, through mathematical proofs, that a system performs according to its specifications and intentions.
AWS has been using automated reasoning for years. We caught up with Anoop Deoras, director of Applied Sciences for Generative AI Applications and Developer Experiences at AWS, to dive into automated reasoning and why it’s a must-have in the era of gen AI.
An edited transcript of our conversation follows.
The AI Innovator: Please introduce yourself. What do you do at AWS?
Anoop Deoras: I’m a director of Applied Sciences here at AWS. I run an organization of machine learning scientists and machine learning engineers who work on foundational generative AI technology as well as its applications in the developer space. Our org is called the Next Generation Developer Experience. In short, we are working towards accelerating each stage of the software development life cycle through generative AI-powered tools and technology.
We do everything from code completions to interacting with code, scanning code to detect security vulnerabilities and remediate them, doing multistep planning and reasoning that can transform code from one version to another, building new software features and fixing issues in code repositories, and even answering customer questions about their AWS resource configurations and billing information and trends.
We work on the entire SDLC (Software Development Life Cycle) stack to accelerate each stage and improve the productivity of developers and the software organization at large.
One of the areas you work on is automated reasoning. What is automated reasoning?
Automated reasoning is a specific discipline of artificial intelligence that applies logical deduction to computer systems to provide assurances about what a system or program will do or will never do. Now, these assurances are based on mathematical proof.
People solve many logical problems in mathematics, science and computation by using logical strategies like theorems and deductions. Automated reasoning puts those same tools in the hands of computers to solve complex challenges. Automated reasoning tools attempt to answer questions about a program, or even a logic formula, by using known techniques from mathematics. So the tools help you check what’s true about a statement or expression. That’s automated reasoning, in short.
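To make this concrete, here is a tiny illustrative sketch (not AWS’s actual tooling): a decision procedure for propositional logic that answers a question about a formula by exhaustively checking every possible assignment, either proving the formula holds or producing a counterexample.

```python
from itertools import product

def is_tautology(formula, variables):
    """Prove a propositional formula true under every assignment,
    or return a falsifying assignment as a counterexample."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if not formula(**assignment):
            return False, assignment  # counterexample found
    return True, None  # proved: holds in all 2^n cases

# De Morgan's law: not (p and q) is equivalent to (not p) or (not q)
ok, _ = is_tautology(
    lambda p, q: (not (p and q)) == ((not p) or (not q)), ["p", "q"]
)
print(ok)  # True: proved for every assignment

# A non-theorem: "p or q implies p and q"
ok, cex = is_tautology(
    lambda p, q: (not (p or q)) or (p and q), ["p", "q"]
)
print(ok, cex)  # False {'p': False, 'q': True}
```

This is the essence of the approach: rather than testing a few inputs, the tool covers the entire space of cases, so its verdict is a proof, not a sample. Industrial tools such as SAT and SMT solvers do this far more cleverly than brute-force enumeration.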
Can you share two use cases for this technique? What were the outcomes?
The ability to make logical inferences step-by-step is quite helpful in several areas. By using automated reasoning, you can actually prove security, compliance, availability, durability and even safety properties of large-scale architectures such as AWS. Today, AWS processes over a billion mathematical queries per day that power AWS Identity and Access Management (IAM) Access Analyzer, Amazon S3 Block Public Access, and other security offerings. I have to say that AWS is the first and only cloud provider to use automated reasoning at this scale.
We routinely apply automated reasoning across our systems and service areas, such as storage and networking and virtualization and identity and cryptography. Amazon scientists and engineers also use automated reasoning to prove the correctness of critical internal systems. For example, Amazon Inspector Classic has a network reachability feature that automatically analyzes EC2 instances for potential security misconfigurations and vulnerabilities. AWS IAM Access Analyzer manages account permissions with automated reasoning. And we use it to safeguard data stored and processed on S3.
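Network reachability analysis of this kind can be understood as exhaustive search over a model of the configuration rather than over live traffic: if no path exists in the model, no packet can ever take one. The following is a simplified sketch with entirely made-up resource names and rules, not how Amazon Inspector actually works.

```python
from collections import deque

# Hypothetical, simplified network model: nodes are resources, edges are
# connections the (made-up) security-group rules would allow.
allowed = {
    "internet": ["load-balancer"],
    "load-balancer": ["web-server"],
    "web-server": ["app-server"],
    "app-server": ["database"],
    "admin-bastion": ["database"],
}

def reachable(src, graph):
    """Every node a connection could ever reach from src, by exhaustive
    breadth-first search over the configuration model."""
    seen, queue = {src}, deque([src])
    while queue:
        for nxt in graph.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# The database IS transitively reachable from the internet in this model --
# the kind of finding a reachability analyzer would surface for review:
print("database" in reachable("internet", allowed))  # True
```

Because the search covers the whole model, a negative answer ("unreachable") is a guarantee about all possible traffic, which is something no amount of packet-level testing can provide.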
Since 2019, Amazon Prime Video has been creating software development tools that use automated reasoning to give Amazon developers greater confidence in the code that they write for the Prime Video application. It’s the application that controls video playback for all the Prime Video content.
Let me talk about one other use case that is close to my heart, and that’s actually where I come in. We are combining our deep expertise of automated reasoning and generative AI to have them work together and create more advanced solutions. In Amazon Q Developer, we use automated reasoning and generative AI to scan code for hard-to-detect security vulnerabilities, and we propose remediations.
AI already uses automated reasoning. How do you see automated reasoning advancing in the age of LLMs?
Automated reasoning is a specific discipline of artificial intelligence that applies logical deductions to computer systems. AI is a science that teaches computers to think like people. When computers are solving problems, automated reasoning differs from other AI technologies such as, say, natural language processing, which focuses on training computers to understand written or spoken language. Instead, automated reasoning uses logical models and proofs to reason about the possible behavior of a system or a program, including states it can or will never reach. So, in my opinion, automated reasoning feels like a very good complement to artificial intelligence.
When it comes to the deployment of AI and ML in high-stakes situations, it is critical to ensure we are developing tools and techniques that can detect failure modes. For example, for AI and ML in, let’s say, autonomous vehicles, a failure mode might be that the system is not able to detect pedestrians, stop signs, etc. So in a sense, what we really require is that the AI and ML algorithms behave as per a predefined set of specifications.
Now, due to the nature of how these ML and AI models are trained, it is (impractical) to test these models on every possible combination of inputs. So having formal verification models or automated reasoning specifically designed for AI algorithms such as, say, large language models or large vision models, can guarantee safe execution and deployment of these models.
How does this affect an informed development of agentic AI?
That’s an interesting question. I think of agentic AI systems as a type of AI system that can act independently to achieve goals, make decisions and learn from experiences. As an example, in Amazon Q, we use agentic AI systems to build automated software engineering solutions to do code transformations, such as Java version upgrades or even writing new features and net new code from developer requests.
The generative AI application industry at large would benefit from having tools and technology at their disposal, which would allow them to build guardrails and verifiers, guaranteeing that the agentic AI systems are behaving as per the predefined specifications. … As the agentic AI systems evolve and we have a need to do more reasoning in our workflows, that is where we would have to depend even more on the automated reasoning toolbox, if you will.
You said that automated reasoning is used at a broad level in Amazon Q Developer for security scanning. Can you tell me a little bit more about the security scanning capability?
Amazon Q Developer provides code security scanning capabilities that outperform leading public benchmarks. … Security scanning is used by tens of thousands of internal Amazon developers every week to provide code reviews and remediations.
What is the impact of automated reasoning on security, productivity, sustainability and cost savings?
Let me start with security. Automated reasoning tools help detect security vulnerabilities in software code. So this helps businesses build more secure systems and reduces the risk of attack. By building mathematical proofs that ensure a certain program or piece of critical code works as per the specification, it guarantees the security and safety of customer code as well as their data. As for productivity, … automated reasoning can help detect and remediate code vulnerabilities, thus reducing debugging time and leading to improved productivity for developers.
With developers spending less time debugging, this is where the productivity gains start to kick in. On sustainability, automated reasoning helps to optimize code and (offers) strong verification guarantees – and these can lead to reduced energy needs that would otherwise be required for labor-intensive testing or debugging. Security, productivity and sustainability combined … have a direct positive impact on businesses saving on costs, either due to time saved in debugging and securing the system from attack or due to the compute saved.
What challenges has Amazon faced in automated reasoning technologies and how have you addressed them?
AWS is the first and only cloud provider to use automated reasoning at such a large scale. That means there haven’t been existing examples or templates for us to follow and learn from, which can be difficult at times. But that is also what attracts some of the brightest minds in automated reasoning to come work for Amazon because we get to experiment, identify best practices and create industry standards at such a large scale. So there are the challenges, but at the same time, these challenges are bringing tremendous opportunities.
How do you ensure the use of automated reasoning aligns with ethical standards and responsible AI practices?
As we think about this new world where more and more applications are being powered by AI as well as generative AI, the industry at large is looking for solutions that can ensure their applications are built with ethical standards and responsible AI practices. Automated reasoning is already solidifying our stance on building AI applications and systems with ethical standards and responsible AI practices.
Previously, we talked about how we use automated reasoning to make the developer code more secure, safer and also remove vulnerabilities because automated reasoning uses logical models and proofs to reason about the possible behaviors of a system or a program including states. … It is very well positioned to make AI systems secure and responsible.
How do you balance the use of automated reasoning with human oversight?
It is important to be thinking deeply about how we incorporate human judgment, expertise and oversight into AI workflows to ensure that systems function and behave as per the design characteristics or specifications. Automated reasoning therefore helps us scale the verification systems. However, the building of the verifier needs to be evaluated by human subject-matter experts. As a concrete example, when we build our code security scanning models using automated reasoning in Amazon Q, we routinely evaluate both our automated reasoning models and also their responses. Having human oversight or human experts in the loop is critical for scaling AI applications and the technologies, such as automated reasoning, that support them.
Are there any upcoming projects involving automated reasoning that you’re excited about?
We are continuing to invest in making automated reasoning tools more extensible to more complex coding and software development scenarios, such as, say, fixing an issue in complex projects and very large repositories with complex dependencies. I would also say that using automated reasoning requires our builders to have a different mindset: instead of thinking about all possible input scenarios and how they might go wrong, we define how the system should work and identify all the conditions that must be met in order for it to behave correctly. Then we can verify that those conditions are true using mathematical proof – this promotes a growth mindset. And I’m excited to see how automated reasoning will propel innovation. …
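That specify-then-verify mindset can be illustrated with bounded exhaustive checking: first write down the conditions the code must satisfy, then prove them over the entire (small) input domain rather than a handful of hand-picked tests. The function and specification below are made up for illustration.

```python
def clamp(x, lo, hi):
    """Restrict x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

# Specification: the result always lies within the bounds, and x passes
# through unchanged whenever it was already in range.
def spec_holds(x, lo, hi):
    y = clamp(x, lo, hi)
    in_bounds = lo <= y <= hi
    identity = (y == x) if lo <= x <= hi else True
    return in_bounds and identity

# Verify the specification over every case in a bounded domain --
# the whole input space, not a sample of it.
domain = range(-50, 51)
verified = all(
    spec_holds(x, lo, hi)
    for x in domain for lo in domain for hi in domain
    if lo <= hi
)
print(verified)  # True
```

For small finite domains this enumeration already constitutes a proof; for unbounded domains, automated reasoning tools replace enumeration with symbolic techniques that reach the same kind of all-cases conclusion.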
Where do you see the field of automated reasoning heading in the next few years?
I envision automated reasoning finding even more applications in the complex AI use cases involving multi-agentic AI, generative AI systems, complex coding scenarios, and things like that. … I’m quite excited about the role of automated reasoning in agentic AI development. Automated reasoning can help us build strong verification models, which can act as guardrails to ensure large AI systems powered by these AI agents and others function as per the specifications.
When we talk to our customers, we find that organizations want generative AI that can do more advanced and complex tasks. … AI that executes tasks like an agent is the advanced frontier today.
Next is the AI that can predict, is goal-seeking and is outcome-driven. That type of generative AI requires automated reasoning. So automated reasoning is poised to play a much larger role in emerging areas of generative AI and its applications.