LocationIntelligence.US

Market Research on Geospatial Data & Location Technology

Precisely's DataLink & Microsoft's Planetary Computer Pro Partners

In this week's market research note for June 23, 2025, I wanted to bring attention to a webinar last week where Precisely announced DataLink, a program to connect multiple data providers under a single linking mechanism. This "linking" of multiple data entities has been brewing in the industry for several years because too many companies were offering their own version of a unique and persistent ID for their data. Precisely has the PreciselyID, Overture Maps Foundation has the Global Entity Reference System (GERS), and there are several others, such as Cotality with CLIP. Each wanted its clients to standardize on its ID, with the intent of becoming the industry standard that everyone would fall in line behind. It didn't happen.


The result is an admission by Precisely that a different model was needed. Precisely has adopted GERS and launched DataLink, which will also facilitate linking to REGRID's land parcel ID, Dun & Bradstreet's DUNS number, and GeoX's building ID.


For now, Precisely is linking only to Overture's Places data, which itself includes 70 million POIs, inclusive of Microsoft's Places, Facebook's business listings, and others. Overture has other foundational data sets, including Addresses, Transportation, Buildings, Divisions (boundaries), and its Base layer.


The benefit of linking to Overture's GERS is that the data carries an open license, shareable under CC-BY, though some data sets carry the ODbL license. While Precisely's data is restricted as proprietary and licensable, the company can share the GERS ID with clients who also use GERS, making a seamless connection to data purchased from Precisely. DataLink becomes the linkage between the PreciselyID and the other companies that have adopted GERS.


Overture Maps is also delivering a data artifact called a Bridge file that allows users to join tables between its data sources and the external datasets that inform them. For example, if you are using OpenStreetMap data, you download the Bridge file and link to GERS via a join table.
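
The Bridge-file join described above can be sketched in a few lines. This is a hypothetical illustration, not Overture's actual schema: the table layouts, column names, IDs, and attribute values below are all invented. It shows the mechanic only: a bridge table maps a source identifier (here, an OSM-style ID) to a GERS ID, which then keys into a separately licensed attribute table.

```python
import sqlite3

# Hypothetical sketch of a GERS bridge-file join. All table names,
# columns, IDs, and values are illustrative, not Overture's schema.
con = sqlite3.connect(":memory:")
cur = con.cursor()

cur.execute("CREATE TABLE osm_pois (osm_id TEXT, name TEXT)")
cur.execute("CREATE TABLE bridge (osm_id TEXT, gers_id TEXT)")   # the "Bridge file"
cur.execute("CREATE TABLE vendor (gers_id TEXT, risk_score REAL)")  # licensed data

cur.executemany("INSERT INTO osm_pois VALUES (?, ?)",
                [("node/101", "Main St Diner"), ("node/102", "Oak Ave Garage")])
cur.executemany("INSERT INTO bridge VALUES (?, ?)",
                [("node/101", "gers-aaa"), ("node/102", "gers-bbb")])
cur.executemany("INSERT INTO vendor VALUES (?, ?)",
                [("gers-aaa", 0.12), ("gers-bbb", 0.87)])

# The bridge table is the join table between the two ID systems.
rows = cur.execute("""
    SELECT p.name, v.risk_score
    FROM osm_pois p
    JOIN bridge b ON b.osm_id = p.osm_id
    JOIN vendor v ON v.gers_id = b.gers_id
    ORDER BY p.name
""").fetchall()
print(rows)
```

Once every party keys its records to the same persistent ID, the join itself is trivial; the hard part, which DataLink is meant to absorb, is maintaining the ID mappings as the underlying data changes.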


In the end, Precisely's strategy is to engender an ecosystem of data providers and act as the pivot point. The assumption by Precisely and Overture is that adoption of one of their persistent IDs yields better data integration. Not to overstate the obvious, but Overture is a non-profit and Precisely is not, and neither are the other companies mentioned here. Every one of those for-profit companies sees the value in Overture's model and recognizes that all parties can profit from it. The question is whether Overture's rising tide lifts all data boats.


Microsoft Planetary Computer Pro

And then there is Microsoft, which announced a slew of partnerships through its Planetary Computer Pro platform. Esri is integrating ArcGIS Pro for image analysis; Xoople wants to leverage the platform's ingestion engine and deliver the data via Azure; Space Intelligence needed scale to access image data for deforestation analysis; Impact Observatory is running its models on the platform for land use and land cover mapping; and EY Consulting is now leveraging its geospatial expertise (who knew?) for its analytics practice.

Snowflake LLMs, GeoHealth & Azure's Planetary Pro

In this week's market research note for June 16, 2025, I want to highlight three announcements that augment some of the latest research notes below.

  1. Amine Kriaa from Korem noted in a LinkedIn article the ease of working with Snowflake LLMs on geospatial data, highlighting the ability to use natural language statements and to generate structured spatial SQL queries. This highlights two important milestones. The first is the use of natural language queries, which allows non-GIS-conversant users to leverage geospatial information. While it isn't always acknowledged, "normal" people ask geospatial questions every day; the example Kriaa used was a simple query insurance companies run to know the proximity of fire stations to properties so that a policy can be adequately underwritten for risk. The second is the ability to generate SQL statements from natural language queries. The bane of many GIS users is crafting the correct spatial query in SQL. Anyone who has used the query dialog box in GIS software knows this well. (For me it was MapInfo's query function, and I routinely got the syntax wrong.) Now, if Snowflake is the preferred data and analysis platform of choice, the ability to ask and get answers opens up geospatial processing to an entirely new user community. Kriaa also points to the cost of running these queries, and this should not be overlooked.
  2. In the journal Annals of Epidemiology, a study noted how home address data is used to model and predict children's health. Electronic health records were used to identify low-income households and children with asthma. The study pointed to historically segregated neighborhoods that had been subject to discriminatory policies, "redlining," and "broad disinvestment" in the community. The result was that more children had developed asthma, and their mortality rate was significantly higher than for those not living in these disadvantaged neighborhoods. The study's author, Patricia Fabian, noted that, "Every hospital and clinic collects electronic health records for their patients, which means this approach is scalable to most populations in the world, as long as records are kept consistently … New advances in satellite, housing and environmental data collection are expanding our ability to connect health data to geospatial data worldwide, as well. Any health outcome that is connected to risk factors related to housing can be studied using similar methods." Mapping disease is obviously not new to the geospatial community, but the use of extensive volumes of electronically collected data holds the promise of more insights such as these.
  3. Last week, InfoWorld published an article titled "Use geospatial data in Microsoft Azure with Planetary Computer Pro." According to the article, the Planetary Computer project was built with open-source tools and holds 50 petabytes of data in 120 data sets. From the article: "At Build 2025, Microsoft announced a preview of a new enterprise-focused version of this service, called Planetary Computer Pro. It's wrapped as an Azure service and suitable for use across your applications or as part of machine learning services. The service can be managed using familiar Azure tools, including the portal and CLI, as well as with its own SDK and APIs." While projects of this type are not uncommon and perhaps more suitable for academic endeavors, it is enticing to think of a "geographic internet of things," in their words, which the article points out will support applications for digital twins and precision agriculture.
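
To make the underwriting question in item 1 concrete, here is a minimal sketch of what a text-to-SQL layer ultimately has to answer: "which fire stations are within 5 km of this property?" A Snowflake-style tool would emit a spatial SQL predicate; the equivalent check below is done directly with a haversine great-circle distance. The coordinates and station names are made up for illustration.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

# Made-up property and fire-station locations.
property_loc = (35.98, -78.51)
fire_stations = {
    "Station 1": (35.99, -78.52),  # roughly a kilometre and a half away
    "Station 2": (36.20, -78.90),  # tens of kilometres away
}

# The underwriting filter: stations within 5 km of the property.
nearby = {name: round(haversine_km(*property_loc, *coords), 1)
          for name, coords in fire_stations.items()
          if haversine_km(*property_loc, *coords) <= 5.0}
print(nearby)
```

The value of the natural-language layer is that a non-GIS user never sees the trigonometry or the SQL; they simply ask the proximity question and get the filtered list back.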

CONTACT LOCATIONINTELLIGENCE.US FOR MORE INFORMATION AND ANALYSIS.

Large Geospatial Models: AWS & Niantic Spatial

In this week's market research note for June 2, 2025, I want to highlight two announcements: The launch of Niantic Spatial and their development of large geospatial models for more accurate positioning and AWS's support for geospatial foundation models (GeoFM) specifically for Earth observation data.


Background
Niantic was the company behind the crowd-sourced game Pokémon Go, led by John Hanke. Hanke was the visionary behind Keyhole, the company purchased by Google to create Google Earth. Niantic recently sold its gaming division to Scopely for $3.5 billion, and the announcement last week was the launch of Niantic Spatial. The company intends to create Large Geospatial Models (LGMs) by combining its accumulation of Pokémon Go camera images, used for positional accuracy, with satellite imagery for land and infrastructure features. The result would be the ability to leverage this integration to answer semantic queries about the Earth the way you would enter text into an LLM like ChatGPT.


AWS's announcement uses the Clay Foundation Model, an open-source model trained on EO satellite data, which must be tuned after an initial supervised classification of raw imagery to identify real-world objects. The objective is to take a large volume of satellite data, train on it using the Clay Foundation Model, and then use the GeoFM to encode and then classify pixels, for land use categories as an example. Granted, this is a simplified explanation, but the intent is to speed the classification process for very large volumes of data. See the listed resource links.
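
The encode-then-classify step can be illustrated with a deliberately tiny sketch. In the real pipeline the foundation model produces high-dimensional learned embeddings per pixel or chip; here the embeddings and class centroids are made-up three-dimensional vectors, and the downstream classifier is a simple nearest-centroid rule standing in for whatever lightweight model is fitted on labeled samples.

```python
from math import dist

# Toy stand-in for encode-then-classify: embeddings and centroids are
# invented 3-d vectors, not real Clay/GeoFM output.
centroids = {                  # fitted from a small labeled sample
    "forest": (0.9, 0.1, 0.0),
    "water":  (0.0, 0.2, 0.9),
    "urban":  (0.4, 0.8, 0.3),
}

def classify(embedding):
    """Assign the land-use label of the nearest class centroid."""
    return min(centroids, key=lambda label: dist(embedding, centroids[label]))

# "Encoded" pixel chips awaiting classification.
chips = [(0.85, 0.15, 0.05), (0.05, 0.25, 0.8), (0.45, 0.75, 0.35)]
labels = [classify(c) for c in chips]
print(labels)
```

The point of the architecture is that the expensive encoding is done once at scale; downstream classifiers like this one are cheap, so archived imagery can be re-queried for new categories without reprocessing the pixels.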


Is this significant?

It's significant for two reasons. First, both advancements could take the volume of Earth observation imagery and ground-based camera data with location-based metadata and generate solutions that would otherwise be difficult, because so much archived image data isn't, and can't be, used given its sheer size. I've argued in the past that, with the number of EO satellites currently in orbit, most of the data captured goes unused and archived. In other words, we have too many pixels. The challenge has been to take the accumulated data, now with persistent surveillance of nearly the entire globe, and extract information on demand.

The second is that these are two companies freed from the historical trappings of GIS and the thinking behind traditional geospatial applications. Niantic Spatial is starting with $250 million in funding, which would make it one of the largest pure geospatial companies today. While that's not $250 million in sales or market capitalization, Niantic's financial strength would catapult the company into a position that would challenge nearly every geospatial company in terms of overall value and the ability to deliver products to market. There are a mere handful of geospatial companies (not including those with substantial geospatial teams and investments, like Google or Microsoft) that could rival Niantic Spatial.


In the case of Amazon, the challenge is not merely the ability to identify changes to forest cover, which was the example provided in its announcement. As the announcement mentions, there was significant pre-training of data and use of 10-meter Sentinel-2 imagery. The resulting classification of data and identification of textural changes using "pixel chips" (their phrasing) is a good step forward but not game-changing. Yes, scaling this type of processing is going to be beneficial. However, for many applications, higher-resolution imagery will be required.


Why Higher Resolution is Needed

In a use case that was part of my recent work, I was attempting to identify changes to land cover in residential housing. That would require approximately 1-meter spatial resolution, and images would need to be captured weekly (at a minimum) to understand the rate of change. The ability to understand not just land use changes but changes to property ownership is highly relevant for insurance, real estate, and retail applications. The objective is to capture "land in transition" so that local and regional growth can be captured before and during the process of building new housing. Niantic Spatial's library of "real world" imagery combined with high-resolution satellite data holds much promise in this use case. But there are many other use cases where more frequent image data is required.


Bottom Line

Are there too many pixels? I think these two announcements have the potential to maximize the value of EO data such that we may now not have enough pixels. Over the course of more than 50 years of satellite imagery collection, i.e., since the launch of Landsat-1 in 1972, we've not been able to utilize the full value of the data. Having been a member of the teams that did some of the original image data science work at the USGS EROS Data Center, I know we archived much of the data, and I suspect the same is true for the many EO satellites currently orbiting the Earth. It's possible AWS and Niantic Spatial have given the geospatial community a significant leap forward.


CONTACT LOCATIONINTELLIGENCE.US FOR MORE INFORMATION AND ANALYSIS.

ICEYE Partners with Safran

At the 2025 GEOINT conference, ICEYE and Safran released separate press statements about a continuation of their partnership, which leverages ICEYE's synthetic aperture radar (SAR) data and Safran's Earth observation (EO) satellite image analysis and persistent monitoring solutions.


Why?

Setting aside the public statements of each company, the primary reason for integrating SAR and EO imagery is to enhance the signature of the objects under surveillance and, therefore, increase the potential to better detect and recognize real-world objects like ships, planes, cars, and houses. In other words, you want to improve the sometimes noisy, high-reflectivity signature of SAR with the spectral depth of satellite imagery. In this case the result is the integration of two different types of remote sensing technologies: SAR and electro-optical (EO) data. The more common practice of integrating lower-resolution EO spectral imagery with higher-resolution EO spatial image products is often referred to as "pan-sharpening."


Now, this is not new; the integration of SAR and satellite imagery is common practice. Before SAR was launched on satellites, side-looking airborne radar (SLAR) was used when EO satellites had resolutions of tens of meters (the original Landsat was 80m; Thematic Mapper, 30m), not the sub-meter spatial resolution you can find on today's birds. There are other technological advantages as well, such as SAR's ability to better map surface texture and to penetrate cloud cover. In military and IC applications, there is now the possibility of offering persistent surveillance regardless of weather or time of day. However, what the partnership may signify is that SAR, on its own, is at best supplemental to electro-optical. Is the industry recognizing that SAR alone is insufficient for complete image analysis? That is, has SAR reached the peak of inflated expectations?
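
For readers unfamiliar with the pan-sharpening mentioned above, here is a minimal per-pixel sketch of the Brovey transform, one classic approach: each low-resolution multispectral band is rescaled by the ratio of the high-resolution panchromatic value to the mean of the multispectral bands, so the spatial detail of the pan band is injected while band ratios are preserved. The pixel values are illustrative only.

```python
# Minimal sketch of Brovey-style pan-sharpening for one pixel.
# Values are illustrative; real products operate on whole rasters.

def brovey(pan, r, g, b):
    """Rescale each band by the ratio of pan intensity to the band mean."""
    mean = (r + g + b) / 3.0
    scale = pan / mean if mean else 0.0
    return (r * scale, g * scale, b * scale)

# One pixel: a bright panchromatic detail over duller multispectral values.
sharpened = brovey(pan=0.9, r=0.3, g=0.6, b=0.3)
print(sharpened)
```

SAR/EO fusion pursues the same goal by different means: instead of sharpening one optical product with another, it combines SAR's texture and all-weather coverage with the spectral depth of optical imagery.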


Why now?

It appears that Safran's footprint in the defense industry and ICEYE's work with international governments were converging, and each could leverage the other's strengths in those customer segments. Safran was already providing ICEYE with downlink capabilities for data. Both press statements mentioned work in the defense and intelligence community (IC). ICEYE's press statement was more detailed than Safran's in pointing out that "SAR satellite imagery allows the detection and classification of objects of interest at any time of the day and in all weather conditions, making it an invaluable resource for defense and intelligence applications, and particularly for monitoring military activities." This seems to imply a joint military use case or a specific client that would likely be unidentifiable due to security concerns. ICEYE has also been providing satellites-as-a-service to governments in need of a different type of sensor imagery to augment multispectral data, another aspect of government work that Safran may want to pursue.


AI?
Of course, AI had to be mentioned. And it is Safran's subsidiary, Safran.AI, that entered into the collaboration. Safran.AI was originally constituted when Safran acquired Preligens for $250 million in September 2024. Preligens provides "AI analytics solutions for high-resolution imagery, full motion video and acoustic signals." Certainly, the advantage of multi-sensor integration provides the means for more accurate feature and change detection, the most common use case in practice. As such, the objective is to more quickly detect and identify objects in near real-time.


Bottom Line

As always with partnerships of this kind, you want to follow the money trail. Safran.AI is a subsidiary of Safran Electronics & Defense, a $123 billion public company listed on the Euronext Paris exchange. I suspect that there are several clients in the defense industry that would welcome the output of an integrated image product. It may also foretell a future acquisition of ICEYE by Safran. Today, there is no financial relationship between the two companies. ICEYE has raised over $500M from various private equity firms, including Solidium Oy, a Finnish state-owned holding company that backs other Finnish firms, and several others, perhaps most notably BlackRock, one of the largest U.S.-based asset managers in the world.


CONTACT LOCATIONINTELLIGENCE.US FOR MORE INFORMATION AND ANALYSIS.

Nearmap Acquires itel

Why does the acquisition make sense now?
Nearmap's acquisition of itel is an attempt to shorten the claims process, reduce fraud, and, perhaps more significantly, disrupt the property intelligence market. The quicker you can pay out claims by basing the veracity of the claim on "ground truth," i.e., recent image validation, the more you've reduced the marginal cost of policy underwriting, and you've captured one additional image for AI-crafted damage assessment models.


Why do the acquisition in the first place?

Competition in property intelligence is heating up, specifically among Nearmap's own clients, like Cotality (formerly CoreLogic), which is dominant in mortgage and proptech but also active in insurance. The acquisition also tries to thwart similar partnerships, such as Verisk and EagleView's. It now appears more likely that Nearmap wants to take market share from Cotality's insurance practice. Nearmap's acquisition of Betterview in 2023 had already positioned it squarely against partners like Precisely, which has a substantial footprint in the insurance industry.


This is also a play toward going head to head with the Geospatial Insurance Consortium (GIC) and its partner, Vexcel Imaging, which offers gray-sky imagery to subscribers at the time of catastrophic events. Vexcel clearly wants its differentiation to continue with UltraCam, a suite of very high spatial and spectral resolution digital cameras.


Nearmap is also now a private company, having been acquired by Thoma Bravo in 2022 with the clear intent of pivoting toward offering more geospatial services to the insurance industry.


More than a Data Company

Historically, Nearmap was an image acquisition company. As such, it was merely a data company with an archive of imagery that, if unused, is a sunk cost. With the acquisitions of Betterview and itel, it is clear that Nearmap and its PE underwriter, Thoma Bravo, want to move the company deeper into insurance, where it has a footprint, and potentially into other markets like real estate.


In addition, the ability to capture rooftop measurements, while a much-needed capability, was becoming commoditized, table stakes for an image-based product. There are only so many pixels you can capture before you have too many. However, Nearmap now has many further capabilities, including automated feature extraction (e.g., pools, trampolines, sidewalks, vegetation), roof condition assessment, and detection of other property modifications.


Bottom Line

The entire move by Nearmap is to disrupt the current proptech market and become more than just a "data company." It therefore positions the company to leverage artificial intelligence, geospatial data, and image extraction to offer a more comprehensive solution to a market that has an increasing appetite for data. As a private company, it can now scale its operations without the pressure of quarterly earnings calls. It can continue to invest in technology and, by the obvious intention of its Thoma Bravo advisors, will continue on a course of looking at other acquisitions to scale where it sees opportunity.

LocationIntelligence.US

Wake Forest, NC 28587 United States

Copyright © 2025 LocationIntelligence.US - All Rights Reserved.
