The Human Role in GeoINT During the Age of Automation

Key Takeaways

  • GeoINT agencies collect immense amounts of visual data that, traditionally, must be manually analyzed by humans.
  • Automation, machine learning, and AI can help expedite workflows, but over-reliance on machines could diminish analysis quality. An automated eye in the sky can generate images of what’s on the ground, but an understanding of that image’s significance – tactically, economically, socially, or otherwise – is something that requires a human brain. 
  • Systems that balance human analysts and AI in a complementary way retain the best of both automation and human cognition.

In 2017, Robert Cardillo, then Director of the National Geospatial-Intelligence Agency, announced the NGA’s intention to automate 75% of its image analysis. In 2017 alone, the agency produced 12 million images and 50 million indexed observations, which required immense human work hours. A combination of artificial intelligence, augmentation, and automation was seen as a path toward saving time, money, and effort, while also increasing productivity and accuracy.

Clear benefits aside, automating geospatial intelligence is a more delicate balancing act than it might initially seem.

GeoINT is a discipline that divines meaning from data over time. An automated sweep of an image to run facial recognition is one thing, but repeatedly analyzing a space over time, while considering everything from economics and culture to architectural trends, is something that requires human brains and their advanced, flexible cognitive abilities.

An example of GeoINT analysis with taxi trips and destinations in New York City. Image courtesy of OmniSci.

In our last post, we referenced how the workout tracking app Strava pivots user data into usable information for city planners. Its data provides remarkable insights, in part, because it is generated by human users keenly interested in finding the ‘best’ routes through a given city. Analysts working on that data know this, too, and it bolsters the value of their end product.

An example of human-generated data in Strava Metro. Image courtesy of Cycling Industry News.

Rather than designing systems of automation that circumvent the human element, it is important to keep human eyes and brains in the process as verifiers. As this excellent Trajectory Magazine article puts it, an ideal system follows a ‘human-in-the-loop’ model of analysis.
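To make the idea concrete, here is a minimal sketch of what human-in-the-loop triage could look like in code. The `Detection` type, confidence threshold, and `review` callback are hypothetical illustrations, not any agency’s actual system: the machine auto-accepts confident detections and routes the ambiguous ones to an analyst.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    image_id: str
    label: str         # e.g. "vehicle", "new structure"
    confidence: float  # model confidence, 0.0-1.0

def triage(detections: List[Detection],
           review: Callable[[Detection], bool],
           threshold: float = 0.9) -> List[Detection]:
    """Auto-accept high-confidence detections; route the rest to a human."""
    verified = []
    for det in detections:
        if det.confidence >= threshold:
            verified.append(det)      # machine handles the routine cases
        elif review(det):             # human verifies the ambiguous ones
            verified.append(det)
    return verified

# Example: an analyst callback that (hypothetically) confirms everything.
approved = triage([Detection("img_001", "vehicle", 0.97),
                   Detection("img_002", "vehicle", 0.62)],
                  review=lambda det: True)
print(len(approved))  # 2: one auto-accepted, one human-verified
```

Raising the threshold sends more work to human reviewers; lowering it trusts the model more. That dial is exactly the balancing act this post describes.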

A diagram showing the “Human-in-the-Loop” analysis model. Image courtesy of Trajectory.

Human cognition’s flexibility means our brains can process unexpected information or occurrences. Humans can divine context, see causal relationships inside a space, and draw information from seemingly disparate fields together to form a well-reasoned conclusion. We accrue expertise through experience. On the other hand, human brains are subject to mental biases and distractibility. Machines, by contrast, can tirelessly perform repetitive tasks and find patterns with ease.

In short: leave the monotony to machines, and free human analysts up to handle the high-level analysis for which their brains are built.

Geospatial Intelligence for All

In 1961, the National Reconnaissance Office (NRO) was established and tasked with maintaining the United States’ intelligence satellite fleet, everything from drawing-board conception to data collection. The geospatial intelligence gleaned from this fleet has been used across the US intelligence community, including the other four of the ‘big five’ agencies: the Central Intelligence Agency, Defense Intelligence Agency, National Security Agency, and National Geospatial-Intelligence Agency.

The establishment of the NRO pointed to a then-obvious fact of Geospatial Intelligence (GeoINT): governments held the means of data collection, creation, and dissemination.

GeoINT image showing LiDAR flood depth data overlaid on a satellite image of New Orleans following Hurricane Katrina. Image courtesy of Penn State University, NOAA, and ESA.

A recent restructuring of responsibilities indicates a shift in that idea. In 2017, the NRO took over responsibility for imagery acquisition from the National Geospatial-Intelligence Agency (NGA). The NGA still dictates what imagery is needed and the NRO collects it, but the NRO now utilizes Requests for Information (RFIs) from industry in that pursuit.

In 2019, the NRO awarded significant contracts to Maxar, BlackSky, and Planet in an effort to better understand the quality, quantity, and kinds of available commercial data. As these kinds of interactions between the US government and commercial entities continue, the intelligence community will learn more about what commercial capabilities exist and the commercial sector will hone its understanding of what imagery and intelligence the NRO might require next.

This signals a sea-change from government-generated GeoINT to commercially produced data and analytics.

Why the shift, though? As spatial data, machine learning, and other aspects of GeoINT have grown in the commercial sector, the government sees potential for data superior to that generated by government departments.

Analysis tools in programs like Maxar’s SecureWatch (pictured here) enable users to perform multi-spectral analysis of different events, like this failed missile launch at Semnan Space Facility in Iran in 2019. Image courtesy of Maxar.

This isn’t just a federal-level dynamic; municipalities working on city transportation plans provide a clear example of the shift from public to private geospatial data generation. In the past, when a city decided to build new roads or modify some aspect of its transportation system, a mapping survey team might have gone out to collect raw data. Today, that data will likely come from a private company’s vast stores of user-generated geospatial data.

Strava Metro, a product of the workout tracking app Strava, uses aggregates of user data (stripped of identifiers) to illustrate popular walking, running, and biking routes through cities. Individual athletes can use this data to find new routes (or, in the age of Covid, routes that avoid other runners). In the hands of municipalities, however, the data can better inform city planning efforts when new bike lanes and recreation loops are in the works. Strava Metro’s data gets as granular as which direction people travel down certain streets. Cyclomedia, a Dutch company providing street-level data created with LiDAR and traditional imaging methods, takes a similar approach, marketing its information to utility companies.
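As a rough sketch of how that kind of aggregation works, the snippet below counts de-identified trips per street segment and direction. The trip format and segment IDs are invented for illustration and are not Strava Metro’s actual schema.

```python
from collections import Counter

# Each trip is a de-identified sequence of (street_segment_id, direction)
# steps; no user identifiers survive into the aggregate.
trips = [
    [("5th_ave_100blk", "N"), ("5th_ave_200blk", "N")],
    [("5th_ave_100blk", "S")],
    [("5th_ave_100blk", "N")],
]

segment_flow = Counter()
for trip in trips:
    for segment, direction in trip:
        segment_flow[(segment, direction)] += 1

# Planners see only counts per segment and direction, e.g.:
# [(('5th_ave_100blk', 'N'), 2), (('5th_ave_100blk', 'S'), 1), ...]
print(segment_flow.most_common())
```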

The same is true for data originating from commercial efforts to automate vehicles. To ‘teach’ Cadillacs to drive autonomously on highways, Ushr, Inc. collected data on road slope, lane delineations, and more. In-city autonomous driving would require equipping luxury vehicles with cumbersome LiDAR devices, which violate Cadillac’s aesthetic principles, but city buses have more freedom in that regard. The data Ushr generated could very well be used to make a fleet of city vehicles autonomous.

In city environments increasingly rich with active pedestrians, autonomous vehicles, and an enormous amount of user-generated, geo-tagged data, it seems more and more likely that planners at every level of government will turn to privately created data and services to continue building the cities and communities of the future.

The Bright Future & Potential Impacts of SAR

As SAR (synthetic aperture radar) data becomes more affordable and accessible, the geospatial industry will adapt with it. Precisely how SAR’s impact will be felt remains an open question at this stage. Nonetheless, the excitement is palpable. Still, plenty of new technology gets hyped and fades away; what makes new tech become standard tech is its ability to replace what came before it.

SAR has that ability, considering its edge over optical imagery with regard to weather factors and nighttime imaging capabilities.

SAR image of the Aswan Dam in Egypt. Image courtesy of Satellite Imaging Corp. and Airbus Defence & Space.

For years, SAR technology and data were mostly the domain of governments. Their capabilities and usefulness are well-proven, but principally for entities with enormous resources at their disposal. For example, imagery analysts from the National Geospatial-Intelligence Agency (NGA) design workflows to regularly ingest SAR data and make it useful. That kind of work is only now beginning in the private sector.

Thanks to companies like Capella Space, which is continuing to launch its own constellation of SAR microsatellites, the technology is starting to creep out from behind government curtains.

East View Geospatial is an early North American reseller for Capella Space, which now offers the highest-resolution commercial SAR imagery in the world. Four months ago, Capella launched its first operational satellite, Capella-2, a 107 kg microsatellite that goes to space the size of a washing machine before unfurling in orbit. Data from that satellite can produce radar images at 50 cm x 50 cm resolution – a level of detail that, when coupled with worldwide tasking capacity, unlocks even more potential and opportunities.

A SAR image of Tiangang Lake Solar Farm in China. Image courtesy of Capella Space.

The most obvious opportunity is nighttime imaging. Current commercial satellites are designed with sun-synchronous orbits that generally image during peak sunlight hours – around 10:00 a.m. or 1:00 p.m. local time. This makes nocturnal changes difficult to track. With SAR satellites dotting the sky, capabilities for nighttime research and data collection, we believe, will blossom.

The shift will not occur overnight, however. Most industry professionals are trained in optical imagery and will have to learn to use SAR and its associated analytical tools. As companies like Capella launch more and more satellites, though, we predict an explosion of research and development around SAR data and analytics. This will eventually lead to academics crafting new theories and testing new applications and, on the commercial side, to companies developing new and innovative tools for big data and SAR-focused algorithms.

SAR: Seeing Beyond the Visible

The ability to ‘see’ through darkness and weather events from space makes SAR data an invaluable asset in wide-ranging endeavors, from tracking changes in topography to sharpening responses to cyclonic storms, floods, earthquakes, oil spills, and other disasters.

Most recently, satellites operated by NASA’s Jet Propulsion Laboratory are providing SAR data for NASA’s effort to ‘see’ the COVID-19 pandemic from space. The initiative tracks changes in human activity as lockdowns are enacted and lifted across the globe.

This is a prime example of SAR data’s ability to enhance change detection. Because human activity occurs more or less regardless of light or weather conditions, optical data alone falls short in this project. SAR data’s unblinking look at patterns of human movement over time, however, offers clear benefits.
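One common change-detection approach, sketched below, compares the backscatter of two co-registered SAR images with a log-ratio test. The threshold and toy arrays here are illustrative; this is a textbook technique, not NASA’s actual pipeline.

```python
import numpy as np

def log_ratio_change(before: np.ndarray, after: np.ndarray,
                     threshold: float = 1.0) -> np.ndarray:
    """Flag pixels whose backscatter changed strongly between two
    co-registered SAR amplitude images (a standard log-ratio test)."""
    eps = 1e-6  # guard against log(0) on dark pixels
    ratio = np.log((after + eps) / (before + eps))
    return np.abs(ratio) > threshold  # boolean change mask

# Toy example: a bright feature appears in the "after" scene.
before = np.ones((4, 4))
after = before.copy()
after[1:3, 1:3] = 10.0
print(log_ratio_change(before, after).astype(int))
```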

SAR data is also capable of tracking change over much shorter periods than the recent lockdowns. In 2015, days after a magnitude 8.3 earthquake shook Chile, Alaska Satellite Facility scientists used SAR data from the Sentinel-1A satellite to create an image depicting 8.5 cm of line-of-sight motion.
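For a sense of scale, interferometric SAR measures motion in fractions of the radar wavelength, with each interferogram fringe corresponding to half a wavelength of line-of-sight motion. The quick calculation below, assuming Sentinel-1’s roughly 5.55 cm C-band wavelength, shows that 8.5 cm of motion amounts to about three fringes.

```python
import math

wavelength_cm = 5.55   # Sentinel-1 C-band radar wavelength (~5.55 cm)
los_motion_cm = 8.5    # line-of-sight motion reported above

# One interferogram fringe corresponds to half a wavelength of
# line-of-sight motion; accumulated phase = 4*pi*d / wavelength.
fringes = los_motion_cm / (wavelength_cm / 2)
phase_rad = 4 * math.pi * los_motion_cm / wavelength_cm

print(f"{fringes:.1f} fringes, {phase_rad:.1f} rad of phase")  # ~3.1 fringes
```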

An image created using SAR data to show motion from a 2015 earthquake. Image courtesy of the Alaska Satellite Facility, F. Meyer, & W. Gong (2015).

In 2010, SAR technology was instrumental in NASA’s efforts to support responses to the Deepwater Horizon oil rig blowout. The data was used to (among other things) characterize oil in open water and track its progress toward coasts and marshlands. In this instance, too, the ability to function irrespective of sunlight or weather was a clear benefit.

That ability does come with challenges. Namely, raw SAR data yields noisy images that require training and software to interpret. As more SAR data makes its way into the hands of users and experts, however, processes are being developed to make it increasingly usable for those with questions it might answer.
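The dominant noise source is speckle, and one of the simplest remedies is multilooking: averaging blocks of pixels to trade spatial resolution for a cleaner image. The sketch below is a minimal illustration; production SAR processors use far more sophisticated filters.

```python
import numpy as np

def multilook(amplitude: np.ndarray, looks: int = 4) -> np.ndarray:
    """Reduce speckle by averaging looks-by-looks blocks of pixels,
    trading spatial resolution for a cleaner image."""
    h, w = amplitude.shape
    h, w = h - h % looks, w - w % looks  # crop to a multiple of the block
    blocks = amplitude[:h, :w].reshape(h // looks, looks, w // looks, looks)
    return blocks.mean(axis=(1, 3))

# Toy example: a noisy 8x8 scene becomes a smoother 2x2 image. An
# exponential intensity distribution is a common single-look speckle model.
rng = np.random.default_rng(0)
noisy = rng.exponential(scale=1.0, size=(8, 8))
print(multilook(noisy, looks=4))
```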

Studies, like this one by the Remote Sensing Division of Brazil’s Instituto Nacional de Pesquisas Espaciais, work to alleviate issues of stabilization when imaging forests. That work enables faster progress on early warning systems for deforestation, which are key components of governmental deforestation-reduction policies.

A figure showing how SAR data is used to detect deforestation in the Amazon. Image courtesy of Remote Sensing Division of Instituto Nacional de Pesquisas Espaciais.

Another clear example of the progress being made is SAR data’s use in analyzing icebergs. In 2007, researchers tracked sea ice in the Bering Strait with RADARSAT-1 data. A decade later, advances in deep learning made even more detailed analysis of SAR data possible. In October 2017, the Statoil/C-CORE Iceberg Classifier Challenge used SAR data to teach machines the difference between icebergs and ships, part of an effort to drive down the cost of operating safely at sea. In a similar challenge, efforts utilizing data fusion are underway to map power grids.
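For readers curious what such a classifier looks like, here is a minimal sketch of a convolutional network over two-band (HH/HV) 75 x 75 pixel SAR chips, the shape used in the challenge’s public dataset. The architecture itself is illustrative, not any winning entry.

```python
import torch
import torch.nn as nn

# Two-channel (HH, HV) 75x75 SAR chips, as in the challenge's public data.
model = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),              # 75 -> 37
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),              # 37 -> 18
    nn.AdaptiveAvgPool2d(1),      # global average pooling
    nn.Flatten(),
    nn.Linear(32, 1),             # single logit: iceberg vs. ship
)

chips = torch.randn(4, 2, 75, 75)    # a random batch standing in for chips
probs = torch.sigmoid(model(chips))  # per-chip P(iceberg)
print(probs.shape)                   # torch.Size([4, 1])
```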

As innovations by experts cause end-user costs to fall and accuracy to rise, we can’t help but wonder what the growth of SAR data means for industries far and wide. Next week’s post will explore some of the technology’s potential impacts and legacies.  

Geospatial Terms Made Easy: What is MTM?

East View Geospatial (EVG) has been a provider and producer of top-of-the-line maps, charts, geospatial datasets, and models for over 25 years. Spatial data and its derivative products have countless applications, and some of those applications require these products to meet very specific criteria. Maps, charts, and geospatial data used in military operations must adhere to “MTM standards”. We asked Jason Sjolander, our GIS Technical Manager, about MTM specifications and why adhering to standardization is important in the realm of defense mapping. 

A 1:50,000 scale MTM map of Tonga produced by East View Geospatial.

What is MTM?

“MTM is an acronym for a defense mapping product. Its full name is Multinational Geospatial Co-Production Program Topographic Map. The Multinational Geospatial Co-Production Program, or MGCP, is a group of 32+ member nations across the world who contribute information to a central database. This database, among many things, contains GIS data which is used to make MTMs. Many mapping programs across the world have tools that run exclusively on this MGCP data, most notably ESRI and their Defense Mapping toolset.”

How does this apply to your work at EVG?

“At several points during my career at East View Geospatial, I have created MTMs at both the 50K and 100K scales. Having a standardized map setup like MTM allows for a more efficient workflow and production time and delivers a ready-to-use product to end users.”

Producing MTM maps is just one of the many things we do at EVG. We know how important it is to create maps, charts, datasets, and other geospatial products that meet our clients’ exact needs. We’ll work with you to ensure that the product you need is created to exact specifications no matter the end-use application. Contact us to find out how we can help you utilize maps, charts, and geospatial products today!

Jason Sjolander, GIS Technical Manager