Channel: Question and Answer » postgresql
Viewing all 1138 articles

Umlauts in DBeaver


I have a postgres Database set to UTF-8. I am working with DBeaver 3.1.4, and it seems to use a different encoding than UTF-8 (I suppose Latin-1 or something), which leads to

1) wrong display in the views
2) sending queries in the wrong encoding, which makes text search difficult

Where can I set the encoding properly?



Shell script to execute psql command [closed]


I want to make an automated script that creates a database user and password in PostgreSQL and also imports some databases. When I execute my script below, it stops somewhere, and when I log out (Ctrl+D or the exit command) it tries to import the database and says:

psql: FATAL:  role "username" does not exist

At the end it doesn't go to /tmp.
I'm using Ubuntu 14.10 and here is my script:

#!/bin/bash -x
#################
# Database
#################
printf 'CREATE USER koko WITH NOCREATEDB NOCREATEROLE NOSUPERUSER ENCRYPTED PASSWORD 'kokopass';\nCREATE DATABASE kokodb WITH OWNER koko;' > cartaro.sql
su postgres
psql -f cartaro.sql
echo "Running postgis.sql"
psql -d "kokodb" -f /usr/share/postgresql/9.4/contrib/postgis-2.1/postgis.sql
echo "Running postgis_comments.sql"
psql -d "kokodb" -f /usr/share/postgresql/9.4/contrib/postgis-2.1/postgis_comments.sql
echo "Running spatial_ref_sys.sql"
psql -d "kokodb" -f /usr/share/postgresql/9.4/contrib/postgis-2.1/spatial_ref_sys.sql
psql -d "kokodb" -c 'grant all on geometry_columns to "koko";'
psql -d "kokodb" -c 'grant all on spatial_ref_sys to "koko";'
echo "Finished Database section"
exit

That is what I get when I execute the script.
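Two things in the script are worth flagging: the single-quoted printf swallows the password quoting, and `su postgres` starts an interactive shell, so everything after it waits until that shell exits and then runs as the original user (hence the role error). A hedged sketch of an alternative, assuming sudo and a peer-authenticated postgres account as on stock Ubuntu:

```shell
#!/bin/bash
# Double quotes let the inner single quotes reach the SQL file intact,
# and printf interprets \n as a newline.
printf "CREATE USER koko WITH NOCREATEDB NOCREATEROLE NOSUPERUSER ENCRYPTED PASSWORD 'kokopass';\nCREATE DATABASE kokodb WITH OWNER koko;\n" > cartaro.sql

# Instead of 'su postgres' (which blocks on an interactive shell), run
# each command directly as the postgres user:
# sudo -u postgres psql -f cartaro.sql
# sudo -u postgres psql -d kokodb -f /usr/share/postgresql/9.4/contrib/postgis-2.1/postgis.sql

cat cartaro.sql
```

The same `sudo -u postgres` prefix works for the remaining psql invocations in the script.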

How to connect to PostgreSQL 9 from Drupal 7?


I want to connect to PostgreSQL 9 from Drupal 7.

  1. Install the Drupal files using the following steps:

    wget http://ftp.drupal.org/files/projects/drupal-7.15.tar.gz      
    tar zxvf drupal-7.15.tar.gz    
    sudo mv drupal-7.15/* /var/www/     
    cd /var/www/    
    cp sites/default/default.settings.php sites/default/settings.php
    chmod a+w sites/default/settings.php  
    chmod a+w sites/default
    
  2. Install the PostgreSQL database:

    createuser --pwprompt --encrypted --no-adduser --no-createdb username
    createdb --encoding=UNICODE --owner=username databasename
    
  3. Open my_domain in Firefox. It ran across an error message.

Maybe the problem is that the Drupal installation program cannot access the PostgreSQL database. How can I fix it?

Problem with a PostGIS trigger


I'm trying to update lat and lon columns each time I insert a point, using a trigger, but I don't know why I get an error. This is the syntax I'm writing:

CREATE OR REPLACE FUNCTION update_tg2()
RETURNS trigger AS
$$
BEGIN
update places set new.lat  = new.st_y(geo::geometry);
RETURN NEW;
END;
$$
LANGUAGE 'plpgsql';

DROP TRIGGER IF EXISTS triger_coords2 on places;
create trigger triger_coords2 after insert or update on places
for each row execute procedure update_tg2();

And when I insert:

insert into places (id, nombre, geo) values
(18, 'my place', ST_GeomFromText('POINT(-0.42154 38.38000)', 4326));

I’m getting the error:

ERROR:  column "geo" does not exist
LINE 1: SELECT new.st_y(geo::geometry)

Of course, the column exists. If I run select st_y(geo::geometry) from places, I can see its Y coordinate.

Any idea what could be failing? Or maybe a more efficient way to solve this? I think this is not a difficult trigger, but I'm quite new to functions.

Thanks.
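For reference, a hedged sketch of how this kind of trigger is usually written (it assumes the places table has both lat and lon columns): in a row-level trigger you assign to NEW directly rather than issuing an UPDATE, and the trigger must fire BEFORE the write so the modified row is what gets stored:

```sql
CREATE OR REPLACE FUNCTION update_tg2() RETURNS trigger AS
$$
BEGIN
    -- Assign to NEW directly; no separate UPDATE statement needed.
    NEW.lat := ST_Y(NEW.geo::geometry);
    NEW.lon := ST_X(NEW.geo::geometry);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS triger_coords2 ON places;
CREATE TRIGGER triger_coords2 BEFORE INSERT OR UPDATE ON places
    FOR EACH ROW EXECUTE PROCEDURE update_tg2();
```

The original error comes from `new.st_y(geo::geometry)`: there is no function st_y on NEW, and inside the function body the bare column geo is not in scope, only NEW.geo is.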

11.2GB text file import MySQL or PostgreSQL


I am a bit out of my depth here, but I submit this query hoping someone has experience importing a large (11.2 GB) tab-delimited text file (a 2.2 GB tar.gz) into either MySQL 5.5.41 or PostgreSQL 9.3.6. Double precision will be required for the fields, as it is spatial data (latitude, longitude and elevation).

I have MySQL and PostgreSQL set up on Ubuntu 14.04, using phpMyAdmin and phpPgAdmin to interact with the servers. However, I realise that command-line interaction will probably be better.

I have read a bit and there seems to be a way to split data for MySQL (SQLDumpSplitter2 – http://www.rusiczki.net/2007/01/24/sql-dump-file-splitter/).

Any suggestions or alternative options greatly appreciated.

Thanks
Paul
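On the PostgreSQL side there is no need to split the file: COPY (or psql's client-side \copy) streams a tab-delimited file directly into a table. A sketch, with the table and column names invented for illustration:

```sql
CREATE TABLE elevation_points (
    latitude  double precision,
    longitude double precision,
    elevation double precision
);

-- Run inside psql.  \copy reads the file on the client side, so no
-- server filesystem access is required; tab is the default delimiter
-- for the text format.
\copy elevation_points FROM 'data.tsv' WITH (FORMAT text)
```

Extract the tar.gz first, and expect the load to be I/O-bound; dropping indexes until after the load generally helps at this size.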

Postgres: difference between CTE and temporary table


In Postgres, is there a difference between a CTE and a temporary table, other than the fact that the CTE exists just for the context of one statement?

The documentation says that

Common Table Expressions or CTEs, can be thought of as defining temporary tables that exist just for one query.

Does "can be thought of" mean that they are identical?
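They behave differently in practice; a sketch of the two forms (table names invented) to make the operational difference concrete:

```sql
-- CTE: exists only for this one statement.  In PostgreSQL 9.x it is
-- evaluated as an optimization fence and cannot be indexed or ANALYZEd.
WITH big_orders AS (
    SELECT * FROM orders WHERE amount > 1000
)
SELECT count(*) FROM big_orders;

-- Temporary table: lives until the end of the session (or transaction),
-- can be indexed and ANALYZEd, and can be reused by later statements.
CREATE TEMP TABLE big_orders AS
    SELECT * FROM orders WHERE amount > 1000;
ANALYZE big_orders;
SELECT count(*) FROM big_orders;
```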

Postgres equivalent of SQL Server recovery model Simple


I am migrating an SQL Server database to PostgreSQL.

The original database's recovery model is Simple, mainly because log size and performance are concerns.

What are the recovery models of Postgres (9.4)? Is there any option to have a Simple recovery model as MSSQL has?

Rely on .pgpass in CREATE USER MAPPING


I am trying to create a script that sets up a postgres_fdw connection between two Postgres 9.4 databases. The script (which is checked in under version control) has been relying on .pgpass for other tasks. Is there any option I can use to request that the password be looked up in .pgpass? More generally, where is the documentation on which options are available for CREATE USER MAPPING? The reference just says that the options depend on the server.
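For context, a sketch of the documented shape of a postgres_fdw mapping (the server name and credentials are placeholders; per the postgres_fdw documentation, the accepted options are libpq connection options, with credentials going in the user mapping rather than the server):

```sql
CREATE SERVER remote_db FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'db.example.com', dbname 'remote', port '5432');

CREATE USER MAPPING FOR current_user SERVER remote_db
    OPTIONS (user 'app_role', password 'app_secret');
```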


Amazon RDS non superuser create function in C


I have been asked to restore a Rails application to a PostgreSQL RDS instance. There are many functions in C, but it looks like only a superuser can create functions in C, and I could not get superuser access on the RDS instance, so I am somewhat stuck.

Any suggestions?

Postgresql trigger on a table that updates a field


Depending on the values of two fields, A and B, I would like a field C to be updated or filled with a specific value.

For example :

create or replace function periode() returns trigger as
$BODY$
begin
IF A >= x AND B <= y
THEN insert into C values ('z')
END IF;
RETURN NULL ;
END ;
$BODY$

LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS trg_periode ON temp ;
CREATE TRIGGER trg_periode AFTER INSERT or UPDATE on temp
FOR EACH ROW EXECUTE PROCEDURE periode();

It does not work: 'ERROR: column A doesn't exist'. I can't figure out why this message appears. Maybe there is a better way to create this trigger? Thanks
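For comparison, a hedged sketch of how such a trigger is usually written (keeping x, y and z as in the question, and assuming A, B and C are columns of the same table, here temp): columns of the row being written are reached through NEW, and the trigger must be a BEFORE trigger returning NEW for the assignment to stick:

```sql
CREATE OR REPLACE FUNCTION periode() RETURNS trigger AS
$BODY$
BEGIN
    -- Bare "A" is not in scope inside the function; use NEW.A.
    IF NEW.A >= x AND NEW.B <= y THEN
        NEW.C := 'z';   -- assign the column, rather than INSERT INTO C
    END IF;
    RETURN NEW;
END;
$BODY$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS trg_periode ON temp;
CREATE TRIGGER trg_periode BEFORE INSERT OR UPDATE ON temp
    FOR EACH ROW EXECUTE PROCEDURE periode();
```

The original 'column A doesn't exist' error comes from referencing A without the NEW. prefix.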

How to correctly use PostgreSQL to limit multiple and/or concurrent executions of a task


Let’s assume you have a task FOO that can be queued once every minute, and a pool of 50 workers that can be paused. The queue is paused for 10 minutes, and 10 FOO tasks are queued. When the queue is resumed, the 10 FOO tasks will be executed almost concurrently (because there are more workers than tasks).

In this case, I need to ensure that no more than 1 FOO task per minute (time can vary) is performed.

One solution, using Redis, is to take advantage of Redis's atomic operations and key TTLs. When a FOO task starts, it checks whether the key worker:FOO exists. If it does, the task exits; if it does not, the task sets the value with a TTL equal to the maximum frequency. This is easy to achieve using SETNX worker:FOO whatever and then setting the TTL on worker:FOO if the previous command returned 1.

Because SETNX is atomic, I won’t fall into the case where two FOO tasks are executed because of the race condition between the GET and the SET.

Now the question is: what is the correct way to achieve the same result using PostgreSQL? I can have a table with a key and an executed_on timestamp column, but how can I ensure that two FOO tasks are never both executed because of the delay between when one task checks the record and when it writes the lock?
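One common pattern, sketched here with an invented schema, is to keep one row per throttled task and rely on a conditional UPDATE: the UPDATE takes a row lock, so concurrent workers serialize on that row and at most one of them can see the old timestamp, which closes the read-then-write race:

```sql
-- Hypothetical schema: one row per throttled task.
CREATE TABLE task_lock (
    key         text PRIMARY KEY,
    executed_on timestamptz NOT NULL DEFAULT '-infinity'
);
INSERT INTO task_lock (key) VALUES ('FOO');

-- Each worker runs this before doing the work.  Only one concurrent
-- worker per interval gets a row back; the others get zero rows and
-- skip the task.
UPDATE task_lock
SET    executed_on = now()
WHERE  key = 'FOO'
  AND  executed_on < now() - interval '1 minute'
RETURNING key;
```

This check-and-set is atomic for the same reason SETNX is: the row lock makes the WHERE test and the write a single indivisible step.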

Replication Lag on Postgres AWS RDS Service


We have a single-master, single-streaming-replica Postgres 9.3 setup on AWS. The load is not terribly high – this is a development/staging environment (production shows similar metrics). The point is that the "ReplicaLag" metric shown in CloudWatch oscillates wildly during the day, between 0 and 200 seconds. I've changed max_wal_senders from 5 to 10 with no change.

Any suggestions for diagnosing this?

[CloudWatch "Replica Lag" graph]

(This is a t2.small master and t2.small replica; however, the production instance is large and exhibits the same issue. CPU is < 2%, connections < 60, IOPS only about 5–15/second.)

Get a point value with psycopg2 and PPyGIS


I'm trying to set up a basic working PostGIS configuration with the Python PPyGIS package.

>>> import psycopg2
>>> import ppygis
>>> connection = psycopg2.connect(database='spre', user='postgres')
>>> cursor = connection.cursor()
>>> cursor.execute('CREATE TABLE test (geometry GEOMETRY)')
>>> cursor.execute('INSERT INTO test VALUES(%s)', (ppygis.Point(1.0, 2.0),))
>>> cursor.execute('SELECT * from test')
>>> point = cursor.fetchone()[0]
>>> print point
0101000000000000000000F03F0000000000000040
>>>

I should have got a Python object with separate X and Y coordinates, something like:

>>> Point(X: 1.0, Y: 2.0)

What am I doing wrong? I’m following http://www.fabianowski.eu/projects/ppygis/usage.html#installation


For posterity: I couldn't make it work, so this is what I did to get the point data instead:

>>> import json
>>> cursor.execute('SELECT ST_AsGeoJSON(geometry) FROM test')
>>> point = cursor.fetchone()[0]
>>> print json.loads(point)['coordinates']
[1,2]
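The hex string printed earlier is simply the raw WKB the server returned, undecoded by the driver. Purely to show what it contains (not a substitute for PPyGIS or ST_AsGeoJSON), a minimal stdlib sketch that unpacks the point by hand:

```python
import struct

# Hex WKB as printed by the SELECT in the question
wkb = bytes.fromhex("0101000000000000000000F03F0000000000000040")

# Byte 0: endianness flag (1 = little-endian)
order = "<" if wkb[0] == 1 else ">"
# Bytes 1-4: geometry type (1 = Point)
(geom_type,) = struct.unpack(order + "I", wkb[1:5])
# Bytes 5-20: two IEEE-754 doubles, X then Y
x, y = struct.unpack(order + "2d", wkb[5:21])

print(geom_type, x, y)  # 1 1.0 2.0
```

Getting the bare hex back usually means the decoding hook (here, PPyGIS's type casting) was never registered on the connection.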

From PostgreSQL to SQL Server (No User/Pass)


Some of you MAY be able to crack a really tough nut…

I have to move all the data from a PostgreSQL (9.1, Windows) database to SQL Server, because the company I am contracting with is upgrading the old program to a newer .NET version.

The real nut is that the user/password for Postgres is baked into the old program… Of course, the guy who wrote it is gone. No source code or any way to contact him. (Smaaaart)

I was able to obtain the WHOLE Postgres directory (bin, data… the whole thing), and I hope something can be done.

Is there a way out of this mess, or should I just blow off the contract and look for something else?

THANK YOU!
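Since the whole data directory is available, one avenue (a sketch; it assumes you can start the copied cluster yourself, away from the original program) is to point a matching Postgres binary at the copied data directory and loosen its pg_hba.conf so no password is needed:

```
# pg_hba.conf on the copied cluster: "trust" accepts the listed
# connections without a password, which is enough to pg_dump everything.
local  all  all                trust
host   all  all  127.0.0.1/32  trust
```

After reloading, pg_dump can extract the data for loading into SQL Server. Only do this on a private copy, never on the live system.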

FATAL: terminating walreceiver due to timeout


1/ DESCRIPTION:

  • Machine 1 (slave): Centos 6.6 , x64 , installed PostgreSQL 9.3 (on Local)
  • Machine 2 (master): Centos 6.6 , x64 , installed PostgreSQL 9.3 (on Cloud)

Machine 1 (slave) and machine 2 (master) are in a cluster (streaming replication). Sometimes I see "FATAL: terminating walreceiver due to timeout" in the slave log.

Here are the full detailed logs:

Slave

2015-03-03 02:01:53 UTC 19693   LOG:  database system is ready to accept read only connections
2015-03-03 02:01:53 UTC 19699   LOG:  started streaming WAL from primary at 0/8000000 on timeline 1
2015-03-03 02:02:15 UTC 19695   LOG:  redo starts at 0/8F04530
2015-03-03 02:39:26 UTC 19699   FATAL:  terminating walreceiver due to timeout
2015-03-03 02:39:26 UTC 19695   LOG:  invalid record length at 0/8F080F8
2015-03-03 02:39:41 UTC 21065   LOG:  started streaming WAL from primary at 0/8000000 on timeline 1
2015-03-03 03:19:12 UTC 21065   FATAL:  terminating walreceiver due to timeout
2015-03-03 03:19:12 UTC 19695   LOG:  invalid record length at 0/9D488F8
2015-03-03 03:19:27 UTC 22489   LOG:  started streaming WAL from primary at 0/9000000 on timeline 1

Master

2015-03-03 02:02:40 UTC 1718   LOG:  database system is ready to accept connections
2015-03-03 02:02:40 UTC 1724   LOG:  autovacuum launcher started
2015-03-03 02:02:42 UTC 1726 [unknown] [unknown]LOG:  invalid length of startup packet
2015-03-03 02:02:42 UTC 1726 [unknown] [unknown]LOG:  connection failed during start up processing: user= database=
2015-03-03 02:35:45 UTC 1788 pgAdmin III - Query Tool enterprisedbERROR:  column "username" does not exist at character 18
2015-03-03 02:35:45 UTC 1788 pgAdmin III - Query Tool enterprisedbSTATEMENT:
        select datname, username, client_addr, client_port, query from pg_stat_activity;
2015-03-03 02:41:03 UTC 1748 walreceiver enterprisedbLOG:  terminating walsender process due to replication timeout
2015-03-03 02:51:42 UTC 3184 ::1 psql.bin enterprisedbERROR:  unrecognized configuration parameter "replication_timeout"
2015-03-03 02:51:42 UTC 3184 ::1 psql.bin enterprisedbSTATEMENT:  show replication_timeout;
2015-03-03 02:51:54 UTC 3184 ::1 psql.bin enterprisedbERROR:  relation "pg_setting" does not exist at character 15
2015-03-03 02:51:54 UTC 3184 ::1 psql.bin enterprisedbSTATEMENT:  select * from pg_setting;
2015-03-03 02:58:33 UTC 3388 [unknown] [unknown]LOG:  invalid length of startup packet
2015-03-03 02:58:33 UTC 3388 [unknown] [unknown]LOG:  connection failed during start up processing: user= database=
2015-03-03 02:58:57 UTC 3390 [unknown] [unknown]LOG:  incomplete startup packet
2015-03-03 02:58:57 UTC 3390 [unknown] [unknown]LOG:  connection failed during start up processing: user= database=
2015-03-03 03:15:04 UTC 3967 [unknown] enterprisedbFATAL:  database "enterprisedb" does not exist
2015-03-03 03:20:53 UTC 2884 walreceiver enterprisedbLOG:  terminating walsender process due to replication timeout

2/ QUESTION:

What is this "FATAL: terminating walreceiver due to timeout" problem about? How can I fix it?
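One detail visible in your own master log: `show replication_timeout` failed because that parameter was renamed in PostgreSQL 9.3. A sketch of the relevant postgresql.conf settings (the values are examples, not recommendations):

```
# postgresql.conf -- PostgreSQL 9.3
wal_sender_timeout   = 60s   # on the master; replaces replication_timeout
wal_receiver_timeout = 60s   # on the standby; standby-side counterpart
```

If the cloud link between master and standby stalls for longer than these timeouts, both ends drop the connection and the standby reconnects, which matches the pattern in your logs. (Also, the other log line failed because the catalog view is pg_settings, not pg_setting.)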


How to improve ArcGIS Desktop 10.2.2 and Postgres 9.2 PostGIS geometry performance?


I'm experiencing severe performance issues on a polygon feature class (PG_GEOMETRY) in a PostgreSQL 9.2 database from an ArcGIS 10.2.2 client, even when zoomed to an area with only 10–20 features within the extent.

I've narrowed it down to a call from ArcGIS Desktop:

DECLARE sdecur_508_23777 BINARY CURSOR WITH HOLD FOR select st_asewkb(ST_setSRID(zzzztablenamezzzz.shape,-1)) AS shape from sde.fooschema.zzzztablenamezzz

and then the subsequent fetches (e.g. 2015-03-13 19:52:26 GMT LOG statement: FETCH FORWARD 1000 from sdecur_508_23777).

The data frame and the spatial reference (SRID 54017) are the same, so I'm not seeing why a query that brings back the entire table's geometry as SRID-less WKB needs to be executed (approx. 900,000 polygons).

There's no fancy rendering – I just added the feature class and zoomed in to a small extent. The feature class has a spatial index that performs well.

Can I select data inserted in the same uncommited transaction?


Maybe this is a dumb beginner question, but I cannot find an answer anywhere. Everything I read is about transaction isolation, which addresses the visibility of data across concurrent transactions. My concern is the behavior within a single transaction.

If I start a transaction and insert some data, am I going to be able to select that data right after – still within the same, yet uncommitted, transaction?
If yes, can this behavior be changed, in a way similar to the transaction isolation settings for concurrent transactions?

To be specific, I'm targeting PostgreSQL 9.4.
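Yes: a transaction always sees its own uncommitted writes (read-your-own-writes), and no isolation setting turns that off. To make the behavior concrete without a running Postgres server, here is a sketch using SQLite from the Python stdlib, which gives the same guarantee:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (id INTEGER)")

# The INSERT opens an implicit transaction; commit() has NOT been called.
cur.execute("INSERT INTO t VALUES (1)")

# A SELECT on the same connection still sees the uncommitted row.
rows = cur.execute("SELECT id FROM t").fetchall()
print(rows)  # [(1,)]
```

In PostgreSQL the same holds: only *other* sessions are affected by the isolation level; your own session always reads what it has written.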

PostgreSQL security threats in external modules or PL languages


In reading the answer to another question, I came across a paragraph indicating that some PostgreSQL PL languages and/or contrib modules can be used to compromise the security of the system.

  1. Is this only an issue if they get postgres db user or superuser access?
  2. Is this because they could in essence create any PL/Python (or other PL/*) function script to run against the system? Example?
  3. Is there any way to mitigate this, other than not installing PL/*?
  4. The paragraph mentions other contrib modules. What other contrib modules could be a postgres/system security risk?

The paragraph in question:

Note that if PL/Python, PL/Perl, etc. are installed, or some contrib modules are present, then it might be fairly easy to escalate access from the postgres database user (or another superuser) to the postgres shell user. From there you can reconfigure the database to enable logging, install extensions, inject C code, etc. There is no guarantee you would detect such an exploit, so they might be logging activity for some time.

How to block messages from Londiste?


This question is part Postgres and part Londiste.

I have successfully upgraded PostgreSQL from 9.1 to 9.3 using Londiste. However, in the PostgreSQL 9.3 log, I am still getting some messages related to the old Londiste replication in my Postgres 9.1 environment.

select pgq.next_batch('some_oldqueue_name', 'some_old_consumer_name')
2015-02-16 19:16:19 GMT ERROR: Not subscriber to queue: some_old_queue_name/some_old_consumer_name

The pgq.next_batch function looks like this:

select sub_queue, sub_consumer, sub_id, sub_last_tick, sub_batch into sub
from pgq.queue q, pgq.consumer c, pgq.subscription s
where q.queue_name = x_queue_name
and c.co_name = x_consumer_name
and s.sub_queue = q.queue_id
and s.sub_consumer = c.co_id;
if not found then
errmsg := 'Not subscriber to queue: '

The some_old_queue_name and some_old_consumer_name are not in the database. So, the error is very obvious.

So my question is: is there any way I can stop this error message from the Londiste side? I tried to unregister the old consumer name, but in vain.

If not, can I filter out the message from the Postgresql log?

Thanks

Finding census block for given address using Tiger geocoder


I have a geocoder based on TIGER/Line data running on PostGIS. I need to find the census block for a given address. How can I do it?

Like I can do for the census tract:
SELECT get_tract(ST_Point(-71.101375, 42.31376) ) As tract_name;

What would be a similar query for finding the census block and block group?
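I'm not aware of a packaged get_block() counterpart to get_tract(); one hedged approach, assuming the TIGER schema loaded by the geocoder includes the tabblock table with geometry stored in NAD83 (SRID 4269), is a plain point-in-polygon lookup:

```sql
-- Assumes tiger.tabblock exists with a tabblock_id column; block group
-- tables, if loaded, can be queried the same way.
SELECT tabblock_id
FROM   tiger.tabblock
WHERE  ST_Contains(the_geom,
                   ST_SetSRID(ST_Point(-71.101375, 42.31376), 4269));
```

The table and column names here depend on which TIGER layers you loaded, so check your schema before relying on this.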
