AI Framework for Multimodal Scene Construction and Data Generation - SBIR Topic DAF26BZ01-DV005
Disclaimer:
This topic was temporarily posted by the Department of War SBIR Program on March 2nd, 2026, and removed the following day.
We believe this topic will be re-released once the SBIR program is reauthorized; however, it may ultimately be modified or withdrawn.
Sign up below to be notified as soon as this topic is released again. In the meantime, we recommend you start planning a response if it falls within your capabilities.
Funding Amount:
Est. $140,000
Deadline to Apply:
Est. April 29th, 2026.
Objective:
The objective is to develop a capability for generating geo-specific, sensor-independent scenes for multimodal (RF and EO/IR) synthetic data generation by leveraging geo-spatial information, time-of-day, seasonal data, and measured databases, overcoming limitations in existing models and radiometric data.
Description:
The technology within this topic is restricted under the International Traffic in Arms Regulation (ITAR), 22 CFR Parts 120-130, which controls the export and import of defense-related material and services, including export of sensitive technical data, or the Export Administration Regulation (EAR), 15 CFR Parts 730-774, which controls dual use items. Offerors must disclose any proposed use of foreign nationals (FNs), their country(ies) of origin, the type of visa or work permit possessed, and the statement of work (SOW) tasks intended for accomplishment by the FN(s) in accordance with section 3.5 of the Announcement. Offerors are advised foreign nationals proposed to perform on this topic may be restricted due to the technical data under US Export Control Laws.
The DoD requires large-scale, high-fidelity background scenes to advance autonomous systems and Artificial Intelligence and Machine Learning (AI/ML) capabilities. These scenes are critical for providing realistic, context-rich environments that enable AI/ML and/or autonomous systems to learn, adapt, and perform effectively in real-world, dynamic conditions. A critical component of this effort is the ability to generate dynamic, high-fidelity background scenes that realistically model operational environments. Unlike traditional synthetic data generation, which often focuses on isolated sensor outputs, scene generation must create a coherent, interactive world where autonomous agents can navigate, perceive, and process imagery based on their movement and decision-making.
This presents several challenges. First, scene generation requires accurate modeling of complex environmental factors such as terrain variation, urban structures, vegetation, weather conditions, and electromagnetic propagation—all of which impact sensor performance. Additionally, ensuring spatial and temporal consistency across multimodal data (e.g., RF and EO/IR) is far more demanding than simply generating independent synthetic datasets. Autonomous systems rely on their ability to interpret changes in the environment dynamically, requiring realistic physics-based interactions between sensors and the scene. Further, aligning RF and EO/IR perspectives within the same scenario for sensor fusion introduces an added layer of complexity, demanding precise calibration of sensor viewpoints, occlusions, and atmospheric effects.
To accurately model such complex environments, scene generation tools must not only produce synthetic RF and EO/IR data but also ensure that these representations align with real-world sensor measurements. When the underlying environment is well-characterized, scene generation tools can generate multimodal imagery alongside ground truth labels, providing ready-made datasets for AI/ML models and autonomous agents. However, their effectiveness is often constrained by the availability of accurate models and measured databases that capture the necessary radiometric and electromagnetic characteristics of the environment. Addressing these limitations requires the development of software that integrates geospatial data, time-of-day, seasonal variations, measured databases, and land cover data to generate detailed representations of the environment. Furthermore, this software must support standardized scene formats compatible with existing simulation tools such as FLITES (EO/IR) and Xpatch (RF), allowing for flexible resolution and fidelity adjustments based on scenario requirements. Finally, a structured approach should be proposed to refine synthetic scene renderings as real-world measurements become available, improving realism and scene fidelity over time.
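As an illustration only, a scene definition of the kind the topic describes might bundle geospatial, temporal, and seasonal parameters with a fidelity setting so a single scene can drive both EO/IR and RF generation consistently. The class, field names, and structure below are hypothetical assumptions for the sketch; they are not a real FLITES or Xpatch interface.

```python
from dataclasses import dataclass, asdict

# Hypothetical sketch: a sensor-independent scene descriptor carrying the
# inputs named in the topic (geospatial location, time of day, season, land
# cover) plus a flexible-resolution knob, so one scene definition can feed
# both an EO/IR renderer and an RF solver while staying spatially and
# temporally consistent. All names here are illustrative assumptions.

@dataclass
class SceneDescriptor:
    lat_deg: float          # geo-specific anchor point (latitude)
    lon_deg: float          # geo-specific anchor point (longitude)
    time_of_day_utc: str    # e.g. "14:30" -> drives illumination/thermal state
    season: str             # e.g. "winter" -> drives vegetation/radiometry
    land_cover: str         # e.g. "urban", "forest", "desert"
    resolution_m: float     # fidelity adjustment: ground sample distance

    def for_modality(self, modality: str) -> dict:
        """Return a modality-tagged copy of the shared scene definition,
        keeping RF and EO/IR views aligned to the same underlying scene."""
        if modality not in ("EO/IR", "RF"):
            raise ValueError(f"unknown modality: {modality}")
        record = asdict(self)
        record["modality"] = modality
        return record

scene = SceneDescriptor(34.05, -118.25, "14:30", "winter", "urban", 0.5)
eoir_view = scene.for_modality("EO/IR")
rf_view = scene.for_modality("RF")
```

The point of the sketch is the design constraint from the topic: both modality views derive from one shared geospatial/temporal core, so spatial and temporal consistency for sensor fusion is enforced by construction rather than by post-hoc calibration of independently generated datasets.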
Who will win?
If you can achieve the objective above better than any other company on the market, you have a very high likelihood of success and should apply.
Who is eligible to apply?
Any company that meets the following criteria:
For-profit company
U.S.-owned and controlled
500 or fewer employees (including affiliates)
How Can BW&CO Help?
1) End-to-end support, including strategy, writing of the full proposal, and administrative & compliance support.
2) Proposal strategy and review.
3) Administrative & compliance support.
Request to talk with a member of our team by completing the form below: