Spatial AI (base model)

While the pre-trained Agents provide quick answers to common tasks, Geolava also supports a flexible inference endpoint for unstructured or more advanced spatial queries. Instead of naming a specific pre-trained Agent (e.g., “valuation”), you can ask the system for the insights you need and let it reason over combined data in near real time.

Overview

  • Prompt-Style Requests: Submit a freeform query about properties or entire regions.

  • Contextual Awareness: The system leverages the underlying Spatial Embedding (including multi-sensor imagery, historical data, and location data).

  • Open-Ended Output: Get a textual or JSON response describing the system’s best inference.

As with the pre-trained Agents, you can (the sketch below illustrates how each scope might look in a request):

  • Specify a single property,

  • Provide a city, state, or bounding polygon, or

  • Upload your own data (multiple addresses) and ask a general question covering them all.
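
The documented request schema shows only a prompt field (see the examples below). The following sketch is a hypothetical illustration of how the three scopes might be expressed; the address, region, and propertyIds field names are assumptions for illustration, not documented parameters.

# Hypothetical request payloads for the three scoping options.
# Only "prompt" appears in the documented examples; "address",
# "region", and "propertyIds" are assumed field names.

single_property_query = {
    "prompt": "Does this property show visible roof damage?",
    "address": "123 Example St, Los Angeles, CA 90012",  # assumed field
}

regional_query = {
    "prompt": "Identify blighted properties with high sales potential",
    "region": {"city": "Los Angeles", "state": "CA"},  # assumed field
}

portfolio_query = {
    "prompt": "Which of these properties have likely code violations?",
    "propertyIds": [  # assumed field; IDs copied from the example response
        "67a771e11bcf52fcce866d6a",
        "67a771e11bcf52fcce866d6b",
    ],
}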

Endpoint

POST /v1/spatial-reasoning

(API design still subject to change.)

Example Request

{
  "prompt": "Identify blighted properties in LA with boarded windows or doors but high sales potential"
}

Example Response

{
  "insight": "Found 12 candidate properties in Los Angeles that match the description. Four are near downtown with boarded windows, yet valuations are trending upward in these neighborhoods.",
  "matched_properties": [
    {
      "propertyId": "67a771e11bcf52fcce866d6a",
      "attributes_detected": ["boarded windows"],
      "estimated_value": 310000,
      "confidence": 0.82
    },
    {
      "propertyId": "67a771e11bcf52fcce866d6b",
      "attributes_detected": ["boarded doors, broken fence"],
      "estimated_value": 425000,
      "confidence": 0.79
    }
  ],
  "total_matched": 12,
  "notes": "Neighborhood uptrend suggests strong resale demand despite disrepair."
}
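
Putting the pieces together, here is a minimal Python sketch of calling the endpoint with the example prompt above. The base URL and bearer-token auth header are assumptions (neither is documented on this page); the fields read from the response are the ones shown in the example response.

import os

import requests

# Assumed base URL and auth scheme; substitute your real values.
BASE_URL = "https://api.geolava.com"        # hypothetical
API_KEY = os.environ["GEOLAVA_API_KEY"]     # hypothetical env var

resp = requests.post(
    f"{BASE_URL}/v1/spatial-reasoning",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "prompt": (
            "Identify blighted properties in LA with boarded windows "
            "or doors but high sales potential"
        )
    },
    timeout=60,
)
resp.raise_for_status()
result = resp.json()

# Fields shown in the example response.
print(result["insight"])
print("Total matched:", result["total_matched"])
for prop in result["matched_properties"]:
    print(prop["propertyId"], prop["estimated_value"], prop["confidence"])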

Use Cases

  1. Multi-factor Queries – E.g., “List properties with severe roof damage but near new developments.”

  2. Regional Summaries – Provide bounding boxes or city definitions: “In these 5 ZIP codes, show me flood-prone properties with stable valuations.” (See the prompt-composition sketch after this list.)

  3. Hypothetical Questions – “If an ADU is added, how does that affect the valuation or compliance status?”

  4. Portfolio/Custom Data – Upload your data set of addresses, then ask: “Which properties in my dataset have known code violations and might be undervalued?”
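
For the regional-summary case (item 2 above), one simple approach is to fold the region definition and criteria into the prompt string itself. A sketch, with placeholder ZIP codes:

# Compose a regional-summary prompt from a list of ZIP codes and criteria.
# The ZIP codes below are placeholders.
zip_codes = ["90001", "90002", "90003", "90011", "90044"]
criteria = "flood-prone properties with stable valuations"

payload = {
    "prompt": (
        f"In these {len(zip_codes)} ZIP codes ({', '.join(zip_codes)}), "
        f"show me {criteria}."
    )
}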

Limits & Best Practices

  • Clarity: The more specific your prompt, the more targeted the analysis.

  • Rate Limits: This flexible endpoint can be heavier in compute usage than a pre-trained Agent call. For large data sets, consider region-based or file-upload strategies instead of many individual requests.

  • Confidence Scores: Because queries are handled in an open-ended manner, parse the included confidence (or confidence_score) field on each match to gauge reliability (see the helper sketch below).
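
A small helper sketch for the last point: filter the matched properties by their confidence field before acting on them. The 0.8 default threshold is an arbitrary choice, not a documented recommendation.

def high_confidence_matches(result: dict, threshold: float = 0.8) -> list[dict]:
    """Return matched properties at or above a confidence threshold.

    `result` is the parsed JSON response from POST /v1/spatial-reasoning;
    the 0.8 default is an arbitrary cutoff, not a documented value.
    """
    return [
        prop
        for prop in result.get("matched_properties", [])
        if prop.get("confidence", 0.0) >= threshold
    ]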