Vision AI Models Improve Decision Making in Manufacturing, Energy, and Finance
Generative artificial intelligence (AI) is best known for creating images and text. Now, it is helping industries make better planning decisions.
Georgia Tech researchers have created a new AI model for decision-focused learning (DFL), called Diffusion-DFL. Recent tests showed it makes more accurate decisions than current approaches.
Along with optimizing industrial output, Diffusion-DFL lowers costs and reduces risk. Experiments also showed that it achieves these gains across different fields.
Diffusion-DFL doesn’t just surpass current methods; its advantage also grows as problem sizes increase. Despite this high performance, the model requires less computing power, making it more accessible to smaller enterprises.
Diffusion-DFL runs on diffusion models, the same technology that powers DALL-E and other AI image generators. It is the first DFL framework based on diffusion models.
“Anyone who makes high-stakes decisions under uncertainty, including supply chain managers, energy operators, and financial planners, benefits from Diffusion-DFL,” said Zihao Zhao, a Georgia Tech Ph.D. student who led the project.
“Instead of optimizing around a single forecast, the model evaluates many possible scenarios, so decisions account for real-world risk and become more robust.”
To test Diffusion-DFL, the team ran experiments based on real-world settings, including:
- Factory manufacturing to meet product demand
- Power grid scheduling to meet energy demand
- Stock market portfolio optimization
In each case, Diffusion-DFL made more accurate decisions than current methods. It also performed better as problems became larger and more complex. These results confirm the model’s ability to make important decisions in real-world scenarios with noisy data and uncertainty.
The experiments also show that Diffusion-DFL is practical, not just accurate. Training diffusion models is expensive, so the team developed a way to reduce memory use during training, cutting it by more than 99.7%. As a result, Diffusion-DFL can reach more researchers and practitioners.
“Our score-function estimator cuts GPU memory from over 60 gigabytes to 0.13, with almost no loss in decision quality, reducing the requirement for massive computing resources,” Zhao said. “I hope this expands Diffusion-DFL into other domains, like healthcare, where decisions must be made quickly under complex uncertainty.”
Beyond decision-making applications, Diffusion-DFL marks a shift in DFL techniques and in the broader use of generative AI models.
Consider supply chain management, where planners estimate future demand before deciding how much product to stock. DFL treats prediction and decision as one problem: engineers train machine learning (ML) models directly toward predetermined decision objectives, like minimizing risk or reducing costs, rather than toward forecast accuracy alone.
One flaw of existing DFL methods is that they optimize around a single, deterministic prediction of an uncertain future.
Diffusion-DFL takes a different approach. Instead of making a single guess, it generates a range of possible outcomes. Decisions are then based on many likely scenarios rather than on a single assumed future.
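The difference can be illustrated with a toy newsvendor-style stocking problem. This is a sketch, not the paper's formulation: the costs, the stand-in demand distribution, and the grid search are all invented for illustration, and Diffusion-DFL would draw its scenarios from a learned diffusion model rather than a fixed Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical asymmetric costs: running short is worse than overstocking.
OVERSTOCK_COST = 1.0   # cost per unsold unit
SHORTAGE_COST = 4.0    # cost per unit of unmet demand

def decision_cost(stock, demand):
    """Cost of a stocking decision once demand is revealed."""
    over = np.maximum(stock - demand, 0.0)
    short = np.maximum(demand - stock, 0.0)
    return OVERSTOCK_COST * over + SHORTAGE_COST * short

# Point-forecast approach: optimize against one predicted demand value.
point_forecast = 100.0
stock_point = point_forecast  # optimal if that single forecast were exact

# Scenario-based approach: sample many plausible demands (a stand-in
# distribution here) and pick the stock level with the lowest average cost.
scenarios = rng.normal(loc=100.0, scale=30.0, size=5000).clip(min=0.0)
candidates = np.linspace(0.0, 250.0, 501)
avg_costs = [decision_cost(s, scenarios).mean() for s in candidates]
stock_scenario = candidates[int(np.argmin(avg_costs))]

# Because shortages cost more, the scenario-based decision hedges by
# stocking above the point forecast.
print(stock_point, stock_scenario)
```

The point-forecast decision ignores the asymmetry between the two costs; averaging over scenarios bakes that real-world risk directly into the decision.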
To do this, the framework uses diffusion models. These generative AI models create high-quality images, text, and audio from random noise.
The forward diffusion process gradually adds noise to data until it becomes pure noise. A diffusion model is trained to reverse this process: starting from pure noise, it generates realistic samples that resemble its training data.
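The forward process can be sketched in a few lines. This assumes a standard DDPM-style variance-preserving noise schedule, not necessarily the schedule used in Diffusion-DFL:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed DDPM-style linear noise schedule over T steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # per-step noise amounts
alphas_bar = np.cumprod(1.0 - betas)    # cumulative fraction of signal kept

def forward_diffuse(x0, t):
    """Jump straight from clean data x0 to noised step t:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = np.sin(np.linspace(0, 2 * np.pi, 64))  # toy clean "data"
x_early = forward_diffuse(x0, 10)           # still mostly signal
x_late = forward_diffuse(x0, T - 1)         # essentially pure noise

# Correlation with the clean signal decays as noise accumulates.
print(np.corrcoef(x0, x_early)[0, 1], np.corrcoef(x0, x_late)[0, 1])
```

Training teaches the model to undo each of these noising steps, which is what lets it start from pure noise at generation time and work backward to a realistic sample.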
Real-world data is often noisy and uncertain. Traditional DFL methods struggle in these conditions, but diffusion models are designed to handle them.
Because of this, Diffusion-DFL can explore many possible outcomes and choose better actions. Like image-generation AI, the model works well with complex data from different sources. This enables its use across different industries.
“Diffusion models have achieved significant success in generative AI and image synthesis, but our work shows their potential extends far beyond that,” said Kai Wang, an assistant professor in the School of Computational Science and Engineering (CSE).
“What makes Diffusion-DFL unique is that the specific downstream application guides how the model learns to handle uncertainty.
“Whether we are scheduling energy for power grids, balancing risk in financial portfolios, or developing early warning systems in healthcare, we can explicitly train these highly expressive models to navigate the unique complexities of each domain.”
Zhao and Wang collaborated with Caltech Ph.D. candidate Christopher Yeh and Harvard University postdoctoral fellow Lingkai Kong on Diffusion-DFL. Kong earned his Ph.D. in CSE from Georgia Tech in 2024.
Wang will present Diffusion-DFL on behalf of the group at the upcoming International Conference on Learning Representations (ICLR 2026). Held April 23-27 in Rio de Janeiro, ICLR is one of the world’s most prestigious conferences dedicated to artificial intelligence research.
“ICLR is the perfect stage for Diffusion-DFL because it brings together the exact community that needs to see the bridge between generative modeling and high-stakes decision-making for real-world applications,” Wang said.
“Presenting Diffusion-DFL allows us to challenge the traditional training framework of diffusion models. It’s about sparking a broader conversation on how we can align the training objectives of generative AI directly with actual, downstream decision-making needs.”