Drones fail outside the environments they were designed in, and no one talks about it…
In recent years, drones have been deployed across environments that share little beyond the fact that they involve flight. Dense urban airspace in Israel, frozen battlefields in Ukraine, humid agricultural regions in Southeast Asia, tightly regulated skies over Northern Europe, and long-range smuggling corridors used by organized crime each impose distinct operational constraints. Many systems that performed well in demonstrations struggled once exposed to these realities. The issue was not a lack of technological sophistication, but a mismatch between design assumptions and the environments in which the systems were ultimately expected to operate.
This pattern has been observed well beyond any single theater. As The Wall Street Journal reported in its coverage of drone incursions over European infrastructure and airspace, systems that appear reliable in testing often reveal weaknesses once regulatory scrutiny, detection requirements, and attribution pressures are introduced. In physical systems, readiness is not an abstract property. Environmental conditions are not edge cases; they are primary variables. A drone does not fail in theory. It fails in a specific place, under specific stresses, over time.
Most early-stage drone investments still rely on generalized evaluations. Performance metrics, simulations, and controlled demonstrations are treated as indicators of readiness. In software, such abstractions often hold. In physical systems, they rarely do.
Zone A: Middle Eastern Skies
Israel’s operating environment illustrates this clearly. On one border lie dense urban centers shaped by vertical construction, RF congestion, and constant civilian proximity. On the other are open areas where GNSS degradation and denial are common. Since October 7th, these field conditions, combined with demanding pilot-skill requirements, have brought down thousands of drones on both the red and blue sides.
RF congestion introduces latency and packet loss. Wind behavior around buildings is unpredictable. Fabrics, metals, and collapsed structures are far from optimal landing surfaces for commercial systems. Civilian proximity imposes ethical and legal constraints on failure that do not exist in controlled test ranges. Analysts at the NATO Science and Technology Organization and at MITRE have repeatedly warned that urban autonomy behaves fundamentally differently from autonomy in open terrain, particularly when GNSS reliability degrades.
Systems optimized for open environments often exhibit subtle but compounding instability under these conditions. Small navigation errors near structures become significant risks for ground forces. Operator trust erodes long before a system technically “fails.” In this context, success is not defined by peak performance, but by predictability, controlled degradation, and repeatable behavior under constraint — and by the company’s ability to learn and improve quickly.

Zone B: Sub-Zero Interference
Ukraine presents a different challenge. There, weather is not an inconvenience but a persistent adversary. Sub-zero temperatures reduce battery efficiency, while snow, mud, and moisture degrade sensors and mechanical components. A continuous operational tempo leaves little margin for maintenance or recalibration.
Many systems that met performance targets in temperate testing environments experienced rapid degradation when exposed to prolonged cold and moisture. These outcomes were not surprises to those familiar with field operations. They were the result of assumptions carried forward from earlier testing phases that did not reflect where the systems would ultimately be used.
Layered on top of this is a constant electronic warfare barrier, driven by both Russian forces and Ukraine’s own rapid adaptation efforts. GNSS disruption, jamming, and spoofing are not occasional events but standing conditions. As military analyst Michael Kofman has noted in his writing for War on the Rocks, “the side that adapts faster tends to prevail, not the side that begins with the most advanced systems.” In this environment, autonomy is not a feature; it is a survival requirement.
Research from the RAND Corporation reinforces this point, showing that many unmanned-system failures in contested environments stem not from design flaws, but from sustainment, integration, and environmental assumptions that were never stress-tested early.

Zone C: The Rainforest
In Southeast Asia, climate becomes the dominant factor. High humidity, heavy rainfall, and biological contamination introduce slow, cumulative failures. Connectors corrode. Optics fog internally. Electronics degrade not through sudden breakdown, but through gradual loss of reliability.
In Thailand and Cambodia, where drones are deployed across agriculture, logistics, surveillance, and humanitarian operations, durability matters more than peak capability. Systems that require careful handling or controlled storage struggle in environments where maintenance practices are informal and conditions are unforgiving. Cambodia has experimented with swarms and commercial drones for mine-clearing operations, while Thailand has unveiled a long-range loitering munition inspired by the Iranian Shahed platform. These use cases expose endurance, sustainment, and environmental tolerance as decisive factors.
As The Economist observed in its coverage of drone proliferation, modern conflict and dual-use deployment increasingly favor systems that can survive neglect and adaptation over those optimized solely for ideal conditions. In Episode 7 of the Autonomous podcast, Roy discusses the region and its challenges.

Across these contexts, a consistent pattern emerges. Systems are often funded before they are tested in the environments that will define their success or failure. Founders test where access is easiest. Investors evaluate what can be observed quickly. Reality asserts itself later, once architecture, supply chains, and capital structures are already fixed.
At that point, deficiencies are no longer design questions. They become financial and operational liabilities. As the Financial Times has noted in its reporting on defense and dual-use technologies, capital frequently flows faster than operational understanding, creating a gap that money alone cannot close.
This raises an uncomfortable but necessary question: who is responsible for testing systems where they are actually meant to operate, and when should that testing occur? In practice, the answer is often no one — or too late. Yet the cost of early exposure is modest compared to the cost of post-investment correction. Field testing before capital deployment does not eliminate risk, but it reveals it while it can still shape design.
In hardware, confidence should follow exposure, not precede it. Capital is most effective when it arrives after a system has already been shaped by the environment it will face.