Credit: Pikisuperstar on Freepik

Is This the Future of Financial Research?

Financial research and analysis are areas ripe for disruption by AI. One company doing just that is Brightwave. Its platform uses AI to quickly analyze huge amounts of information — every line and footnote of company reports, earnings calls, and the like — and turns it into actionable insights for clients including hedge funds and wealth managers.

The AI Innovator caught up with CEO and co-founder Mike Conover, who previously led Databricks’ open-source LLM engineering group and created Dolly, an LLM trained for less than $30.

The following is an edited transcript of our interview.

The AI Innovator: What does your company do?

Mike Conover: Brightwave is an AI-driven financial research platform designed to transform how investment professionals — whether at private equity firms, long-only funds, or other institutional asset managers — gather, analyze, and act on insights. The Brightwave platform transforms every buried detail into clear, actionable insight — connecting critical data points across thousands of pages of content in minutes. Instead of drowning in manual reviews, our customers get to focus on what truly matters: spotting hidden risks and opportunities, strengthening investment theses, and moving deals forward faster and with greater confidence.

My journey to founding Brightwave is rooted in over 15 years of expertise in machine learning, where I’ve architected AI systems that tackle complex challenges — from mapping global labor flows to optimizing content relevance for LinkedIn’s newsfeed. At Brightwave, we’re channeling this deep technical knowledge into a singular mission: enabling investment firms to move faster, dig deeper, and act with greater conviction in an increasingly data-driven market landscape.

There is a lot of talk about AI agents. What are you doing in this area and what frameworks or architectures do you need to enable them?

AI agents are systems that process information, execute tasks, and improve over time in a self-directed or semi-autonomous way. Our focus is on creating agents that expedite core investing workflows and enhance what finance professionals can achieve within their existing resources and time constraints.

To do this, we rely on a modular systems-of-systems architecture that orchestrates the interactions of many purpose-built machine learning models. These include language models for understanding and producing text, along with components for error correction and fact-checking, search and information retrieval, synthesis, citations, and more. Each component is tuned for specific tasks, and they work together seamlessly because we’ve invested so heavily in our evaluation frameworks. This layered approach lets us build robust, engineering-first designs which, although the entire system is not differentiable, can be globally optimized.
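
As a rough illustration of what a systems-of-systems layout like this can look like in code, here is a minimal Python sketch. The component names and placeholder logic are hypothetical, not Brightwave’s implementation; real components would be backed by models rather than string matching.

from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str
    sources: list[str] = field(default_factory=list)
    verified: bool = False

class Retriever:
    """Search component: narrows a large corpus down to relevant passages."""
    def search(self, query: str, corpus: list[str]) -> list[str]:
        terms = query.lower().split()
        return [p for p in corpus if any(t in p.lower() for t in terms)]

class Synthesizer:
    """Language-model component: drafts a claim from retrieved passages."""
    def draft(self, query: str, passages: list[str]) -> Finding:
        return Finding(claim=f"Draft answer to: {query}", sources=passages)

class FactChecker:
    """Verification component: checks the draft against its cited sources."""
    def verify(self, finding: Finding) -> Finding:
        finding.verified = bool(finding.sources)
        return finding

def run_pipeline(query: str, corpus: list[str]) -> Finding:
    # The orchestrator wires purpose-built components into one workflow;
    # each stage can be evaluated and improved in isolation.
    passages = Retriever().search(query, corpus)
    draft = Synthesizer().draft(query, passages)
    return FactChecker().verify(draft)

The point of the sketch is the shape, not the internals: because each stage has a narrow contract, it can be swapped out or re-tuned against its own evaluation suite without retraining the whole system.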

What are the biggest obstacles you’ve encountered when trying to design or implement an agentic architecture?

The biggest hurdle a lot of teams face, and one I’m pleased to say we’ve cleared, is ensuring reliability in high-stakes environments like finance. A small mistake in a given processing step or a subtle misinterpretation of data can lead to significant downstream consequences.

Early on, we believed that increasing the context window size of foundation models would be enough to handle large datasets like private equity data rooms or large collections of SEC filings and earnings calls. Empirically, context size alone isn’t sufficient — multistep reasoning that handles complex relationships and fact patterns requires new methods.

Additionally, the UI considerations are non-trivial. Everybody’s familiar with chat-based interfaces at this point, but the design problem we’re solving is how to reveal the thought process of a system that has considered thousands of pages of text to an end user in a way that’s useful and easy to understand. It’s a lot of net-new product and design thinking, and that’s one of the things that makes this space so exciting.

How do current limitations in AI, such as reasoning or planning abilities, impact the development of agentic architectures?

Reasoning and planning are still areas where AI struggles. Many current systems are excellent at pattern recognition but fall short when it comes to understanding causality or making complex, multi-step deductions. I’m very bullish on advancements in foundation models’ planning capabilities and their ability to make use of complex sequences of function calls; it’s really a question of when that will come online. The current suite of models tends to optimize for a greedy approach to searching through a plan space. This is going to change with the next family of foundation models, but we’ll see how substantial the shift is.
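
A toy example of the greedy-search point, using a made-up two-step plan space rather than anything tied to a real model: the locally best first move forfeits the better overall outcome, which is exactly the behavior that stronger planning would avoid.

# Hypothetical plan space: each node maps to {next_step: immediate_reward}.
PLAN_SPACE = {
    "start": {"safe_step": 5, "risky_step": 1},
    "safe_step": {"finish_a": 2},    # greedy path: 5 + 2 = 7
    "risky_step": {"finish_b": 20},  # lookahead path: 1 + 20 = 21
}

def greedy_total(node: str) -> int:
    # Always take the step with the highest immediate reward.
    total = 0
    while node in PLAN_SPACE:
        node, reward = max(PLAN_SPACE[node].items(), key=lambda kv: kv[1])
        total += reward
    return total

def best_total(node: str) -> int:
    # Exhaustive lookahead: willing to take a locally suboptimal step.
    if node not in PLAN_SPACE:
        return 0
    return max(reward + best_total(nxt) for nxt, reward in PLAN_SPACE[node].items())

assert greedy_total("start") == 7
assert best_total("start") == 21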

To work within the constraints of the systems as they exist today, we’ve designed agents for specific, well-defined tasks. Examples could include synthesizing a comprehensive debt financing timeline across documents or combing through legal contracts to understand specific covenants. These tasks play to AI’s strengths without requiring autonomous levels of advanced reasoning. Over time, as reasoning and planning capabilities improve, we’ll be able to layer more sophisticated workflows on top of these foundational systems.
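
To make the "specific, well-defined task" idea concrete, here is a simplified sketch of the timeline example. A regular expression stands in for the model-driven extraction a real system would use, and the names are invented for illustration.

import re
from dataclasses import dataclass

@dataclass
class FinancingEvent:
    date: str          # ISO-style date found in the text
    description: str   # sentence describing the event
    source_doc: str    # which document it came from

DATE = re.compile(r"\b(\d{4}-\d{2}-\d{2})\b")
KEYWORDS = ("term loan", "credit facility", "notes due", "revolver")

def extract_events(doc_name: str, text: str) -> list[FinancingEvent]:
    # Scan one document for sentences describing dated financing events.
    events = []
    for sentence in text.split("."):
        if any(k in sentence.lower() for k in KEYWORDS):
            match = DATE.search(sentence)
            if match:
                events.append(FinancingEvent(match.group(1), sentence.strip(), doc_name))
    return events

def build_timeline(documents: dict[str, str]) -> list[FinancingEvent]:
    # Merge events across every document and order them chronologically.
    all_events = [e for name, text in documents.items() for e in extract_events(name, text)]
    return sorted(all_events, key=lambda e: e.date)

The task is narrow enough that its accuracy can be measured directly, which is what makes it a good fit for today’s systems.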

What are the key differences between designing an AI system for a specific task versus designing one with more general agency?

Designing for a specific task is pretty straightforward. You design the system to perform a well-defined function — like extracting financial metrics from SEC filings — with high accuracy and reliability. General agency, on the other hand, requires a system to adapt dynamically across different tasks without extensive retraining and to self-correct when intermediate work products are low quality. General agency also requires the ability to make locally suboptimal moves, as in chess or Go, in order to achieve a globally optimal outcome. It’s unclear whether the current state of the art is there.

We use a modular approach to bridge this gap. Each component — be it a language model or a search subsystem — is optimized for a particular role but is designed to work in concert with other components. This allows the system to flexibly synthesize information across different contexts, which is crucial in high-information-load industries like finance. The key is maintaining balance: ensuring adaptability without sacrificing reliability or accuracy.

How do you approach the issue of safety and control when developing an AI agent that can operate autonomously?

Safety and control are fundamental, especially in a high-stakes field like finance. From the start, we’ve designed Brightwave’s systems to be transparent and easy to oversee. Every output is fully traceable, and we’ve added real-time feedback loops so users can tweak and refine results as they go.
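
As an illustration of what "fully traceable" can mean in practice, here is a hedged sketch of an output record that carries its provenance and a reviewer’s decision with it. The field names are hypothetical, not the product’s actual schema.

from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Citation:
    document_id: str   # e.g. a filing or transcript identifier
    page: int
    excerpt: str       # the exact source span the statement rests on

@dataclass
class TracedStatement:
    text: str
    citations: tuple[Citation, ...]
    reviewer_accepted: Optional[bool] = None  # filled in by the human feedback loop

def review(statement: TracedStatement, accept: bool) -> TracedStatement:
    # Record the human decision alongside the output so every result
    # remains auditable end to end.
    return TracedStatement(statement.text, statement.citations, reviewer_accepted=accept)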

Our CTO, Brandon Kotara, has operated a federally regulated derivatives exchange, and that’s really shaped how we think about compliance, security and accountability. By keeping humans in control of key decisions and making the system fully auditable, we strike the right balance between leveraging AI and maintaining oversight.

What are the ethical considerations that arise when building agentic architectures, and how do you address them in your work?

Ethical considerations like transparency, fairness, and accountability are at the heart of everything we build. In finance, AI-driven decisions can ripple across markets and portfolios, so it’s essential that our outputs are accurate, explainable, and free from bias.

We tackle this by carefully selecting and validating the data our models rely on, and by building systems that put humans in the driver’s seat. Brightwave isn’t a substitute for good judgement — it’s here to enhance it.

How do you envision agentic architectures being used in the future, and what impact might they have on society?

Agentic architectures — modular systems that orchestrate multiple, purpose-built AI models — will fundamentally reshape how we digest and act on complex information. In finance, we’re already seeing these frameworks accelerate diligence, reveal hidden risks, and let teams operate with greater speed and confidence.

Over time, as reasoning and planning capabilities improve, we’ll move beyond narrow tasks into more flexible, adaptive workflows. … These architectures will democratize sophisticated analysis, allowing even lean teams to tackle challenges once reserved for armies of specialists.

On a broader level, the societal impact will be about raising the standard of decision-making. Agentic architectures not only streamline operations but also embed traceability, transparency, and accountability into the process. As users come to trust and rely on these systems, they’ll demand explainability and ethical rigor, making it easier to identify biases and correct course early.

The end result will be a world where professionals — from investors to policymakers — can confidently leverage AI insights without sacrificing human judgment, creating a more efficient, responsible, and opportunity-rich landscape for everyone involved.
