Channel: Question and Answer » postgresql
Viewing all 1138 articles

SymmetricDS fails to start for PostgreSQL 9.2


I’m trying to get SymmetricDS up and running with PostgreSQL.
I’ve followed the tutorial (almost) exactly.
(I have not set up a separate node yet since, for my purposes, I need that separate node to be truly separate and on a different VM.)
Unfortunately, I am not able to get the database import step to function, as SymmetricDS will not connect to the database.

Following advice from Connecting to local instance of PostgreSQL from JDBC, I ensured that the second non-comment line in pg_hba.conf was sensible;
PostgreSQL should accept all connections made over TCP/IP on the loopback interface, using ident (client-side) authentication.
(The linked answer does call for md5 as opposed to ident; this had no visible effect and, judging by the stack trace, is probably not what JDBC is expecting.)
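Given the Ident failure in the stack trace below, one commonly suggested fix is to switch the loopback lines in pg_hba.conf from ident to md5 (password) authentication and reload the server. A sketch of the change (path and service name assumed for this CentOS 9.2 install):

```
# /var/lib/pgsql/9.2/data/pg_hba.conf — password auth for TCP loopback
host    all             all             127.0.0.1/32            md5
host    all             all             ::1/128                 md5
```

followed by `service postgresql-9.2 reload` (or a SIGHUP) so the change takes effect; if the error still mentions Ident after editing the file, the server most likely was not reloaded.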

I’ve ensured that symmetricds is a system user and is a user registered with PostgreSQL.
If memory serves, I did this with something like

CREATE USER symmetricds WITH PASSWORD 'sds-pass';
GRANT ALL PRIVILEGES ON DATABASE test TO symmetricds;

(or something to this effect? I’m very new to databases.)
If I had to guess where I went wrong, it’d be here.

I edited the engine file corp-000.properties to use the PostgreSQL versions of connection details (the file comes set for MySQL) and
I filled in the appropriate credentials.


As far as I know, this is all that is needed to get SymmetricDS up and running (at least for the import step).
Obviously, something went wrong; a stack trace is included below.
What did I miss?

Shell log:

[root@dbrepa samples]# cat /var/lib/pgsql/9.2/data/pg_hba.conf
# PostgreSQL Client Authentication Configuration File
# ===================================================
# ...

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            ident
# IPv6 local connections:
host    all             all             ::1/128                 ident
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            ident
#host    replication     postgres        ::1/128                 ident


[root@dbrepa samples]# grep symmetricds /etc/passwd
symmetricds:x:501:501::/home/symmetricds:/bin/bash


[root@dbrepa samples]# service psql start
Starting psql service:                                     [  OK  ]


[root@dbrepa samples]# su - symmetricds
[symmetricds@dbrepa ~]$ psql test
psql (9.2.4)
Type "help" for help.

test=> \l
                                   List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |    Access privileges
-----------+----------+----------+-------------+-------------+--------------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres             +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres             +
           |          |          |             |             | postgres=CTc/postgres
 test      | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =Tc/postgres            +
           |          |          |             |             | postgres=CTc/postgres   +
           |          |          |             |             | symmetricds=CTc/postgres
(4 rows)

test=> \q
[symmetricds@dbrepa ~]$ exit
logout


[root@dbrepa samples]# cat ../engines/corp-000.properties
#
# Licensed to JumpMind Inc under one or more contributor
# ...
#
# You should have received a copy of the GNU General Public License,
# version 3.0 (GPLv3) along with this library; if not, see
# <http://www.gnu.org/licenses/>.
#
# ...
#

engine.name=corp-000

# The class name for the JDBC Driver
db.driver=org.postgresql.Driver

# The JDBC URL used to connect to the database
db.url=jdbc:postgresql://localhost/corp?stringtype=unspecified

# The user to login as who can create and update tables
db.user=symmetricds

# The password for the user to login as
db.password=sds-pass

registration.url=
sync.url=http://localhost:8080/sync/corp-000

# Do not change these for running the demo
group.id=corp
external.id=000

# Don't muddy the waters with purge logging
job.purge.period.time.ms=7200000

# This is how often the routing job will be run in milliseconds
job.routing.period.time.ms=5000
# This is how often the push job will be run.
job.push.period.time.ms=10000
# This is how often the pull job will be run.
job.pull.period.time.ms=10000


[root@dbrepa samples]# ../bin/dbimport --engine corp-000 --format XML create_sample.xml
Log output will be written to ../logs/symmetric.log
[] - AbstractCommandLauncher - Option: name=engine, value={corp-000}
[] - AbstractCommandLauncher - Option: name=format, value={XML}
-------------------------------------------------------------------------------
An exception occurred.  Please see the following for details:
-------------------------------------------------------------------------------
org.postgresql.util.PSQLException: FATAL: Ident authentication failed for user "symmetricds"
        at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:398)
        at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:173)
        at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
        at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:136)
        at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:29)
        at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:21)
        at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:31)
        at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:24)
        at org.postgresql.Driver.makeConnection(Driver.java:393)
        at org.postgresql.Driver.connect(Driver.java:267)
        at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38)
        at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582)
        at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556)
        at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545)
 [wrapped] org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (FATAL: Ident authentication failed for user "symmetricds")
        at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1549)
        at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388)
        at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044)
        at org.jumpmind.symmetric.AbstractCommandLauncher.testConnection(AbstractCommandLauncher.java:325)
 [wrapped] java.lang.RuntimeException: org.apache.commons.dbcp.SQLNestedException: Cannot create PoolableConnectionFactory (FATAL: Ident authentication failed for user "symmetricds")
        at org.jumpmind.symmetric.AbstractCommandLauncher.testConnection(AbstractCommandLauncher.java:329)
        at org.jumpmind.symmetric.AbstractCommandLauncher.getDatabasePlatform(AbstractCommandLauncher.java:336)
        at org.jumpmind.symmetric.DbImportCommand.executeWithOptions(DbImportCommand.java:113)
        at org.jumpmind.symmetric.AbstractCommandLauncher.execute(AbstractCommandLauncher.java:130)
        at org.jumpmind.symmetric.DbImportCommand.main(DbImportCommand.java:72)
-------------------------------------------------------------------------------



Merge two tables on partly matching column


I have the two tables below.

old_name    old_value
Tom1        100
Kate1       80
Jim1        70

new_name   new_value
Tom2       70
Kate2      100
Jim2       80

I want to make a new table that looks like

old_name    old_value       new_name    new_value
Tom1        100             Tom2        70
Kate1       80              Kate2       100
Jim1        70              Jim2        80

How can I do that in Postgres?
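One sketch, assuming the pairing rule is "same name, different trailing digit" (Tom1 ↔ Tom2) and table names old_table / new_table (both assumed):

```sql
-- Join on the name with its last character (the digit) stripped.
CREATE TABLE merged AS
SELECT o.old_name, o.old_value, n.new_name, n.new_value
FROM old_table o
JOIN new_table n ON left(o.old_name, -1) = left(n.new_name, -1);
```

If the rows instead pair purely by position, number each side with `row_number() OVER (ORDER BY ...)` in a subquery and join on that number.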

Cannot get my slave – replication server to start


My replication server will not start. I have followed the instructions here:
http://opensourcedbms.com/dbms/setup-replication-with-postgres-9-2-on-centos-6redhat-el6fedora/

As well as several other places including the Postgres Wiki as they all have the same information.

Here is what happens:
I do a full backup of the /9.2/data folder, move it to the replication/slave server, and untar it. I can start PostgreSQL as well as pgAdmin and access all data with no problems.

I then move on to the instructions on editing pg_hba.conf and postgresql.conf for the slave server. I attempt to start it, and it fails (an error in red: [FAIL]). I cannot find any logs anywhere to give me a hint as to why.

I even verified there was no postmaster.pid in the data folder.

Also, I cannot find any log files. Do I need to “activate” a log file in the configuration?

So, if anyone wants to take a stab in the dark on my vague description, I’d love to hear any suggestions. I can put my conf files on pastebin if that will help.
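On the log question: the server writes its own log files only if logging is enabled, otherwise the startup error goes to stderr. Two common options, with paths assumed for a CentOS 9.2 install:

```
# postgresql.conf — enable the built-in log collector, then restart
logging_collector = on
log_directory = 'pg_log'            # relative to the data directory
log_filename = 'postgresql-%a.log'
```

Alternatively, starting the server in the foreground with `pg_ctl -D /var/lib/pgsql/9.2/data start` prints the startup error straight to the terminal, which is usually the quickest way to see why a standby refuses to start.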

Replication slave re-reading WAL on restart


I have a standard streaming replication setup going with PostgreSQL 9.1. It works fine. However, if I stop the slave cleanly (to move it or perform other maintenance) and restart it a short time later, it insists on re-reading all the WAL files it has generated before connecting to the master, which is a lot of them and takes hours. I would expect it instead to have noted the last WAL position it got from the master and restart from there.

I feel like I’m missing something here. Is this expected behavior?

My slave’s config has the usual suspects:

listen_addresses = '*'  
wal_level = hot_standby  
max_wal_senders = 10  
wal_keep_segments = 24  
hot_standby = on

Postgresql Streaming Replication – pgpool2 – failover


In my scenario I want pgpool to forward read-only queries to the slaves when the master goes down – I want my app to be in "read-only mode".

How do I set up pgpool to accept read-only queries when the master fails (streaming replication)?

Currently, when the master goes down, pgpool waits for the master and doesn’t forward any queries to the slaves.

Pgrouting functions and geoms type not found. Install failed?


I have installed a postgresql 9.1 and postgis 2.0 from source.

I couldn’t run this:

# Add pgRouting launchpad repository ("stable" or "unstable")
sudo add-apt-repository ppa:georepublic/pgrouting[-unstable]
sudo apt-get update

# Install pgRouting packages
sudo apt-get install postgresql-9.1-pgrouting

So I compiled and installed pgrouting 2.0 (after some hours searching for dependencies).
I created the extension on my database in postgresql.

I loaded the functions from pgrouting.sql, but the functions I need are in pgrouting_legacy.sql and pgrouting_dd_legacy.sql.

When I try to load them, the error I get is: psql:/usr/share/postgresql/9.1/contrib/pgrouting-2.0/pgrouting_legacy.sql:299: ERROR: type "geoms" does not exist

Postgresql and Postgis are working fine…

What did I do wrong? Maybe I forgot something, or the install failed?

I followed this documentation : http://pgrouting.org/docs/1.x/install.html

http://www.bostongis.com/PrinterFriendly.aspx?content_name=pgrouting_osm2po_1

How to install PostgreSQL 9.2 with PostGIS 2.0 on Ubuntu 11.10 (or higher)?

postgis: difference with && operator and st_intersects


I found that there is a difference in result between using the A && B operator and one of the geo functions like st_intersects(A,B) or st_overlaps(A,B).

While the st_intersects and st_overlaps functions return query results without errors, the operator version SOMETIMES fails with a SRID error:

ERROR: Operation on two geometries with different SRIDs
SQL state: XX000

All of the records seem to be OK, having the correct SRID, none missing.

The query is generated by the MapServer WMS server, so what is there to do?

Sample query:

Operator &&

select encode(AsBinary(force_collection(force_2d("geometry")),'NDR'),'hex') as geom, "id" 
from (select * from soils.vw_zones) as vw
where geometry && GeomFromText('POLYGON((75722.1945223652   437331.342330005,75722.1945223652 503295.946741193,169438.897876772 503295.946741193,169438.897876772 437331.342330005,75722.1945223652 437331.342330005))',find_srid('','soils.vw_zones','geometry')) 

ST_INTERSECTS

select encode(AsBinary(force_collection(force_2d("geometry")),'NDR'),'hex') as geom, "id" 
from (select * from soils.vw_zones) as vw
where st_intersects(geometry,GeomFromText('POLYGON((75722.1945223652 437331.342330005,75722.1945223652 503295.946741193,169438.897876772 503295.946741193,169438.897876772 437331.342330005,75722.1945223652 437331.342330005))',find_srid('','soils.vw_zones','geometry')))

Transform an Irish grid reference (29903) into UK grid reference (27700) on insert


I have an insert script to input several hundred points; however, some of the points use the Irish Grid reference system. I have separated these out, but was unsure how to convert (or transform?) them into 27700 on insert. I have the line below that builds the UK reference, and the PostGIS column is set to 27700.

st_geomfromtext('POINT(" . addslashes($data[6]) . ")',27700)
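A hedged sketch: rather than converting coordinates in PHP, the Irish Grid points can be built in their native SRID and reprojected by PostGIS on insert (table/column names and the point are placeholders; spatial_ref_sys must contain 29903):

```sql
-- Build in Irish Grid (EPSG:29903), reproject to British National Grid (27700)
INSERT INTO my_points (geom)   -- table/column names assumed
VALUES (ST_Transform(ST_GeomFromText('POINT(315904 234671)', 29903), 27700));
```

The same `ST_Transform(..., 27700)` wrapper can be applied around the existing `st_geomfromtext(...)` call in the PHP-built statement for the separated-out Irish rows.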

What maintenance should I do for a Postgres database?


I’m getting familiarized with using Postgres for ArcGIS. I’ve only ever used SQL Server. I was successful in setting up an SDE database within Postgres 9.2, but I’m not sure what type of maintenance I should be doing or what the syntax is.

If someone could lead me in the right direction, I’d appreciate it.
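A minimal starting point, hedged: PostgreSQL 9.2 runs autovacuum by default, so routine vacuuming is usually automatic, but the commands below are the manual equivalents (the table name is an example):

```sql
VACUUM ANALYZE;                 -- reclaim dead rows and refresh planner statistics
REINDEX TABLE my_busy_table;    -- occasionally, for heavily updated tables (name assumed)
```

Beyond that, the usual upkeep is the same as any server database: regular base backups (pg_dump or pg_basebackup) and keeping an eye on log files and disk space.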

Thanks.

How to do a minor upgrade of PostgreSQL 9.3.0 to 9.3.1 on Windows?


What is the recommended way to perform an upgrade from PostgreSQL 9.3.0 to 9.3.1 (minor upgrade) using the Enterprise DB built windows installer? Should I uninstall first or just install over the existing installation?

The current installation was performed with postgresql-9.3.0-1-windows-x64.exe. Now I want to upgrade using postgresql-9.3.1-1-windows-x64.exe.

Insert a group of consistent item with foreign key but colliding with existing items


Is there a way to insert a group of items that are mutually dependent and consistent (unique primary keys plus foreign keys between them) when those keys may collide with rows already in the database?

For example, given a table A:

id  primary key,
name

and a table B:

 id primary key,
 name,
 id_a  -- foreign key on A

I want to insert:

INSERT INTO A(id, name) VALUES (1, 'a');
INSERT INTO B(id, name, id_a) VALUES (1, 'b', 1);

But the keys A.id or B.id could be taken already. Is there a way to insert my elements, with the keys and foreign keys set automatically, in a consistent way that doesn’t collide with existing elements?
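One sketch, for PostgreSQL 9.1+ with writable CTEs, assuming a.id and b.id are (or are altered to be) backed by sequences so the database hands out fresh keys:

```sql
-- Let the database assign both keys: insert the parent, capture its generated
-- id with RETURNING, and feed it to the child in the same statement.
WITH new_a AS (
  INSERT INTO a (name) VALUES ('a')
  RETURNING id
)
INSERT INTO b (name, id_a)
SELECT 'b', id FROM new_a;
```

Omitting the id columns lets nextval() pick values that cannot collide, provided existing rows also came from the sequences (otherwise bump them first with setval).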

PostgreSQL 9.2 (PostGIS) performance problem


A month ago I got acquainted with PostgreSQL, and only now have I noticed that some of my queries are pretty slow.
For example, I have this table:

CREATE TABLE sometable
(
  gid serial NOT NULL,
  geom geometry(LineString) NOT NULL,
  CONSTRAINT sometable_pkey PRIMARY KEY (gid)
)
WITH (
  OIDS=FALSE
);
CREATE INDEX sometable_geom_gist
  ON sometable
  USING gist
  (geom);

The table contains about 150 000 rows, and when I try to get all rows with SELECT * FROM sometable it takes 10 seconds! I doubt that’s normal for PG.

When I try to use spatial indexes by querying something like

SELECT ST_AsBinary(geom) FROM sometable
    WHERE   ST_Intersects(geom,
            ST_MakeEnvelope(-10000, -20000, 10000, 15000))

It takes 9 sec. (100 000 rows).
And

SELECT ST_AsBinary(geom) from sometable

takes 13 sec.


EXPLAIN ANALYZE
SELECT * FROM sometable
Seq Scan on sometable (cost=0.00..3406.47 rows=151547 width=69)
(actual time=0.021..102.959 rows=151547 loops=1) 
Total runtime: 165.174 ms

EXPLAIN ANALYZE
SELECT ST_AsBinary(geom) FROM sometable
Seq Scan on sometable (cost=0.00..3785.34 rows=151547
width=65) (actual time=0.030..234.241 rows=151547 loops=1) 
Total runtime: 296.704 ms

EXPLAIN ANALYZE
SELECT ST_AsBinary(geom) FROM sometable
        WHERE   ST_Intersects(geom,
                ST_MakeEnvelope(-10000, -20000, 10000, 15000))
Bitmap Heap Scan on sometable (cost=3142.47..30805.90 rows=32623 width=65) (actual time=38.644..1066.704 rows=98469 loops=1)
  Recheck Cond: (geom && '01030000000100000005000000000000A2941A6DC2000000A2941A6DC2000000A2941A6DC2000000A2941A6D42000000A2941A6D42000000A2941A6D42000000A2941A6D42000000A2941A6DC2000000A2941A6DC2000000A2941A6DC2'::geometry)
  Filter: _st_intersects(geom, '01030000000100000005000000000000A2941A6DC2000000A2941A6DC2000000A2941A6DC2000000A2941A6D42000000A2941A6D42000000A2941A6D42000000A2941A6D42000000A2941A6DC2000000A2941A6DC2000000A2941A6DC2'::geometry)
  ->  Bitmap Index Scan on sometable_geom_gist  (cost=0.00..3134.31 rows=97870 width=0) (actual time=38.109..38.109 rows=98469 loops=1)
        Index Cond: (geom && '01030000000100000005000000000000A2941A6DC2000000A2941A6DC2000000A2941A6DC2000000A2941A6D42000000A2941A6D42000000A2941A6D42000000A2941A6D42000000A2941A6DC2000000A2941A6DC2000000A2941A6DC2'::geometry)
Total runtime: 1125.249 ms

Well, would you be kind enough to give me some tips on how to solve the problem, or explain the situation? Thank you.
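A note grounded in the EXPLAIN output above: the server finishes the scans in roughly 0.1–1.1 s, so most of the 9–13 s wall-clock time is plausibly spent shipping ~150 000 geometries to the client. If full vertex detail isn't needed at the current scale, one mitigation (the tolerance value is an assumption to tune for the data) is:

```sql
-- Send simplified shapes instead of full-detail geometries
SELECT ST_AsBinary(ST_Simplify(geom, 10.0)) AS geom
FROM sometable
WHERE ST_Intersects(geom, ST_MakeEnvelope(-10000, -20000, 10000, 15000));
```

This trades geometry fidelity for transfer size; it does not change the query plan, which already looks healthy.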

PostgreSQL 9.2 – Partitioning: Constraint Exclusion on SELECT not kicking in [closed]


My partitioned table SELECT queries include all partitioned tables even though checks are in place and constraint_exclusion = on.

The insert trigger works fine and new rows are inserted into the correct tables. The SELECT however runs over all tables regardless of my WHERE clause.

Here is my config:

constraint_exclusion = on (both in postgresql.conf and also tried with "ALTER DATABASE bigtable SET constraint_exclusion=on;")

Master Table:

CREATE TABLE bigtable (
    id bigserial NOT NULL,
    userid integer NOT NULL,
    inserttime timestamp with time zone NOT NULL DEFAULT now()
)

Child Table 1:

CREATE TABLE bigtable_2013_11 (CHECK ( inserttime >= DATE '2013-11-01' AND inserttime < DATE '2013-12-01' )) INHERITS (bigtable);        

Child Table 2:

CREATE TABLE bigtable_2013_12 (CHECK ( inserttime >= DATE '2013-12-01' AND inserttime < DATE '2014-01-01' )) INHERITS (bigtable);    

Stored Procedure:

CREATE OR REPLACE FUNCTION bigtable_insert_function()
RETURNS TRIGGER AS $$
BEGIN

    IF ( NEW.inserttime >= DATE '2013-11-01' AND NEW.inserttime < DATE '2013-12-01' ) THEN
        INSERT INTO bigtable_2013_11 VALUES (NEW.*);
    ELSEIF (NEW.inserttime >= DATE '2013-12-01' AND NEW.inserttime < DATE '2014-01-01' ) THEN
        INSERT INTO bigtable_2013_12 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'Bigtable insert date is out of range!';
    END IF;

    RETURN NULL;
END;
$$
LANGUAGE plpgsql;

Trigger:

CREATE TRIGGER bigtable_insert_trigger BEFORE INSERT ON bigtable FOR EACH ROW EXECUTE PROCEDURE bigtable_insert_function();

It’s pretty much the text book setup. The insert works fine:

INSERT INTO bigtable (userid, inserttime) VALUES ('1', now());

The insert above results in the new row being inserted correctly into bigtable_2013_11 only.

However, I can’t get the SELECT to exclude the irrelevant tables; all SELECTs always scan all tables. I would expect bigtable_2013_12 to be excluded when the following SELECT queries are used:

SELECT * FROM bigtable WHERE inserttime >= DATE '2013-11-01'::date AND inserttime < '2013-12-01'::date;

SELECT * FROM bigtable WHERE EXTRACT(MONTH FROM inserttime) = 11 AND EXTRACT (YEAR FROM inserttime) = 2013;

However the result is always this:

"Result  (cost=0.00..68.90 rows=17 width=20)"
"  ->  Append  (cost=0.00..68.90 rows=17 width=20)"
"        ->  Seq Scan on bigtable  (cost=0.00..0.00 rows=1 width=20)"
"              Filter: ((inserttime >= '2013-11-02'::date) AND (inserttime < '2013-11-30'::date))"
"        ->  Seq Scan on bigtable_2013_11 bigtable  (cost=0.00..34.45 rows=8 width=20)"
"              Filter: ((inserttime >= '2013-11-02'::date) AND (inserttime < '2013-11-30'::date))"
"        ->  Seq Scan on bigtable_2013_12 bigtable  (cost=0.00..34.45 rows=8 width=20)"
"              Filter: ((inserttime >= '2013-11-02'::date) AND (inserttime < '2013-11-30'::date))"

Why are my checks not kicking in? I am out of ideas; everything seems to be set up correctly. Did I miss anything? Any help would be greatly appreciated.
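Two hedged observations. First, the EXTRACT() form can never be proven against the CHECK constraints, so that query will always scan every partition. Second, comparing the timestamptz column against DATE constants goes through a timezone-dependent (stable, not immutable) cast that the planner cannot use in its plan-time exclusion proof; using same-type timestamptz constants in both the checks and the queries avoids this. A sketch (the auto-generated constraint name is an assumption):

```sql
-- Redefine the check with timestamptz constants so the comparison is same-type
ALTER TABLE bigtable_2013_11
  DROP CONSTRAINT bigtable_2013_11_inserttime_check;  -- name assumed
ALTER TABLE bigtable_2013_11
  ADD CHECK (inserttime >= TIMESTAMPTZ '2013-11-01'
         AND inserttime <  TIMESTAMPTZ '2013-12-01');

-- Query with the same constant type
SELECT * FROM bigtable
WHERE inserttime >= TIMESTAMPTZ '2013-11-01'
  AND inserttime <  TIMESTAMPTZ '2013-12-01';
```

Run EXPLAIN on the query afterwards; with constraint_exclusion on, the plan should then append only bigtable and bigtable_2013_11.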

Extract a LineString from LineString by two Points


I am using PostGIS 2.0 and PostgreSQL 9.3. I have a LineString in spatial reference 4326, like

geom:= ST_GEOMETRYFROMTEXT('LINESTRING(60.7014631515719 56.8441322356241,60.7023117507097 56.8445673405349,60.702948200063 56.8447993944193,60.703902874093 56.8448574076656,60.706236521722 56.8447993944193,60.7094187684889 56.8449444273664,60.7121236782406 56.8450894597515)',4326);

And 2 points in spatial reference 4326, like

point1:= ST_GEOMETRYFROMTEXT('POINT(60.703902874093 56.8448574076656)', 4326);
point2:= ST_GEOMETRYFROMTEXT('POINT(60.7023117507097 56.8445673405349)', 4326);

Both points have been extracted from the given LineString, so it is guaranteed that both points lie on it; no merely nearby points will be used.

It is not guaranteed that the points appear in the same order as in the given LineString (see the example: point1 is point #4 and point2 is point #2 of the LineString). They can be ascending or descending.

Is there any function or operator to extract a LineString from a given LineString by the points?

I want to do something like

sub_geom:= ST_???(geom, point1, point2);

to get a LineString geometry that contains all Points from the given geometry from point1 to point2 (points are included)?

Expected result:

sub_geom == ST_GEOMETRYFROMTEXT('LINESTRING(60.7023117507097 56.8445673405349,60.702948200063 56.8447993944193,60.703902874093 56.8448574076656)',4326)

The order of the points of the result geometry does not matter. It can be ascending or descending. The result is only used for visualisation.

It would be nice if you could show me a solution to my problem.
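A sketch using the linear-referencing functions (names as spelled in PostGIS 2.0; later releases rename them ST_LineLocatePoint / ST_LineSubstring). The least/greatest pair handles the points arriving in either order:

```sql
SELECT ST_Line_Substring(
         geom,
         least(ST_Line_Locate_Point(geom, point1),
               ST_Line_Locate_Point(geom, point2)),
         greatest(ST_Line_Locate_Point(geom, point1),
                  ST_Line_Locate_Point(geom, point2))
       ) AS sub_geom;
```

Since both points are vertices of the line, the cut falls exactly on existing vertices, matching the expected result above.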


Is it possible to have run-time dynamic table updates to visualization using javascript interacting with postgis and geoserver


I have an idea of (and have seen implementations of) dynamic map retrieval/updating over WMS in OpenLayers, driven by updates to a PostGIS database hooked into GeoServer with a refresh policy. I have also seen examples done using cURL with PHP and Python. Is this possible using JavaScript without OpenLayers? The reason I don’t want to use OpenLayers is that my “base map” is a Google Earth plugin, and I’ve heard they don’t really get along well.

In any case, how would I update a PostGIS database connected to GeoServer so that I can essentially move a point’s coordinates and have that displayed accordingly? I know GeoWebCache is involved in how GeoServer reflects updates from the databases it is connected to, but I have heard of issues regarding this before; what is the process for making sure GeoServer picks up updates? Preferably I’d like to do this in JavaScript without some kind of wrapper.

Postgresql vs MySQL – Which is better for join queries & writing data(inserts) [closed]


I have to design a database which will end up with 50M records in a single table(there will be other tables with lesser number of records). I’m more concerned with join queries & writing data(inserts) to the database. There will be less updates and deletes queries.

I have read this article on performace comparison of Postgresql vs MySQL.

Also I have gone through below links as well.

http://stackoverflow.com/questions/8181604/postgres-9-1-vs-mysql-5-6-innodb

http://stackoverflow.com/questions/110927/would-you-recommend-postgresql-over-mysql

http://stackoverflow.com/questions/724867/how-different-is-postgresql-to-mysql

MySQL vs PostgreSQL Wiki

MySQL vs PostgreSQL: Why MySQL Is Superior To PostgreSQL

My problem is that some of the links on Stack Overflow are outdated. Some people say that MySQL is better, and vice versa.

Since I’m more concerned with join queries and writing data to the database, which is better for me: PostgreSQL or MySQL? What approaches should I take to design a database like this?

Given that, please don’t consider this another PostgreSQL vs MySQL question. I have done my research, and I’m only concerned with the join-queries and data-writing scenario. I also got to know that PostgreSQL is better for GIS data.

Postgresql function taking long time on newly restored database


I have developed a function in my PostgreSQL database. It was taking 12–15 seconds to execute.

Now I have restored a new PostgreSQL 9.0 dump using following command:

pg_restore -hx.x.x.x -p5432 -d xxx -Ux -Fc -v </path/pgbackup.backup

but on this newly restored database it takes 2 minutes to execute the same function.

Please suggest possible ways to sort out this problem.
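One likely cause, hedged: pg_restore loads the data but not the planner statistics, so freshly restored tables are effectively unanalyzed until autovacuum catches up, and plans can be far worse in the meantime. Collecting statistics manually before re-timing the function is a cheap first step:

```sql
-- On the newly restored database: refresh planner statistics
VACUUM ANALYZE;   -- or plain ANALYZE; to gather statistics only
```

If the function is still slow afterwards, comparing EXPLAIN ANALYZE output of its queries on the old and new databases would show where the plans diverge.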

Missing libraries when upgrading to PostGIS 2.1 and PostgreSQL 9.3.1 using homebrew


In the process of upgrading my PostgreSQL from version 9.2.4 to 9.3.1 (via homebrew on OS X) I came across an odd problem. These are the steps I took so far:

  • PostgreSQL, PostGIS and required libraries installed (no errors)
  • run initdb on the new database
  • stopped both servers
  • running pg_upgrade

pg_upgrade performs the necessary checks and creates dumps of the old cluster, but when importing into the new cluster I get the following error:

> ./pg_upgrade -b /usr/local/Cellar/postgresql/9.2.4/bin/ -B /usr/local/Cellar/postgresql/9.3.1/bin -d /usr/local/var/postgres/ -D /usr/local/var/postgres9.3.1 -u postgres
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is a superuser                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 fatal

Your installation references loadable libraries that are missing from the
new installation.  You can add these libraries to the new installation,
or remove the functions using them from the old installation.  A list of
problem libraries is in the file:
    loadable_libraries.txt

Failure, exiting

It appears as though PostgreSQL 9.3.1 tries to use PostGIS 2.0, which is not compatible:

Could not load library "$libdir/postgis-2.0"
ERROR:  could not access file "$libdir/postgis-2.0": No such file or directory

Could not load library "$libdir/rtpostgis-2.0"
ERROR:  could not access file "$libdir/rtpostgis-2.0": No such file or directory

Has anyone run into the same problem?

PostgreSQL transaction locked database table: “idle in transaction”


I have a web application interacting with PostgreSQL (v8.4 on CentOS Linux) which suddenly started locking some of the database’s tables. I still have no idea what happened, since the code is not new, has run many times, and was tested beforehand.

I am trying to see what transaction might have caused it, or whether a combination of it with an autovacuum process could have. But in the meantime I would like to unlock the database tables. I tried restarting the PostgreSQL service, thus terminating the processes in “idle in transaction” state which were locking the tables, but it didn’t work: the next time my application performed the same call, the tables were locked again.

Any ideas on what can I do to unlock the database gracefully?
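A sketch for inspecting and killing the offenders from SQL instead of restarting the whole service (column names as in 8.4; the pid is an example value):

```sql
-- Sessions stuck in an open transaction, oldest first (8.4 catalog columns)
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
ORDER BY xact_start;

-- Terminate one offending backend; its locks are released immediately
SELECT pg_terminate_backend(12345);  -- procpid from the query above (example)
```

This unlocks the tables gracefully, but the application path that leaves a transaction open will recreate the situation until a COMMIT or ROLLBACK is added after the failing call.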
