Very different query plans with and without LIMIT in PostgreSQL.

I have the following query which I am using to create vector tiles for a web map. I’m pulling together just the geometries within the selected tile using a CTE, then joining it to get additional attribute information.

WITH lots AS (
  -- Geometries that intersect the requested tile; $z, $x, $y are
  -- placeholders substituted by the tile server.
  SELECT pams_pin, shape
    FROM parcels
   WHERE ST_Transform(shape, 3857) && TileBBox($z, $x, $y)
)
-- Join the per-tile parcels to the attribute table on the parcel ID.
SELECT p.pams_pin, v.property_location, v.property_class
     , v.owner_name, v.owner_address, v.owner_city, v.owner_zip
     , ST_AsGeoJSON(ST_Transform(p.shape, 4326), 7) AS shape
  FROM lots p
  LEFT JOIN mv_modiv_sr1a v ON p.pams_pin = v.pams_pin
;

This is incredibly slow, taking several seconds to return a result. However, if I take this same query and add LIMIT 500 to the end, it returns results almost immediately.
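For reference, the fast variant is the identical query with the limit appended:

WITH lots AS (
  SELECT pams_pin, shape
    FROM parcels
   WHERE ST_Transform(shape, 3857) && TileBBox($z, $x, $y)
)
SELECT p.pams_pin, v.property_location, v.property_class
     , v.owner_name, v.owner_address, v.owner_city, v.owner_zip
     , ST_AsGeoJSON(ST_Transform(p.shape, 4326), 7) AS shape
  FROM lots p
  LEFT JOIN mv_modiv_sr1a v ON p.pams_pin = v.pams_pin
 LIMIT 500;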

Here’s the output from EXPLAIN ANALYZE, both with and without the LIMIT.

Why does the query planner take such a different path when a LIMIT clause is added? Is there any way to force the more efficient plan without using LIMIT? In this case LIMIT 500 should not drop any records, but I’d like a better understanding of why this is occurring, and whether there’s a way to hint the planner to drive the query from the small set produced by the CTE instead of starting from the ~3 million record table and throwing a massive amount of data away.
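One idea I’ve been considering, assuming the server is PostgreSQL 12 or newer (where the planner is allowed to inline a CTE into the outer query), is to declare the CTE as MATERIALIZED so it acts as an optimization fence and is evaluated first. This is just a sketch of what I mean, not something I’ve confirmed fixes the plan:

WITH lots AS MATERIALIZED (
  -- MATERIALIZED (PostgreSQL 12+) forces this CTE to be computed first,
  -- so the outer join should be driven by the small per-tile result set
  -- rather than the ~3 million record attribute table. Before version 12,
  -- CTEs were always materialized and the keyword is unnecessary.
  SELECT pams_pin, shape
    FROM parcels
   WHERE ST_Transform(shape, 3857) && TileBBox($z, $x, $y)
)
SELECT p.pams_pin, v.property_location, v.property_class
     , v.owner_name, v.owner_address, v.owner_city, v.owner_zip
     , ST_AsGeoJSON(ST_Transform(p.shape, 4326), 7) AS shape
  FROM lots p
  LEFT JOIN mv_modiv_sr1a v ON p.pams_pin = v.pams_pin
;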

