Channel: Question and Answer » postgresql

Performance of primary key IDs with gaps (but in sequential order)


I know that having non-sequential IDs is bad for index performance. But assuming all my IDs are created in the correct order, only with large gaps, e.g.:

154300000
283700000
351300000
464200000

…will the performance be any worse than having gapless auto_increment IDs?

I’ll be using MySQL, or perhaps PostgreSQL. The gaps between the IDs would not be even. They’ll be BIGINTs with a unix timestamp at the start (left side) of the number and the rest of the digits mostly random, as discussed in another question I asked here:

http://stackoverflow.com/questions/6338956/mysql-primary-keys-uuid-guid-vs-bigint-timestamprandom/6339581
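
For reference, a minimal sketch of how such a timestamp-prefixed BIGINT might be generated in PostgreSQL (the multiplier and the width of the random part are illustrative assumptions, not taken from the linked question):

SELECT (extract(epoch FROM now())::bigint * 100000000)
       + (random() * 100000000)::bigint AS gapped_id;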


Replicate single table from Postgres 9.3 replica


I have an existing Postgres 9.3 server acting as a read-only replica using the built-in streaming replication.

I need to replicate a single table from a database on that server to another physical server.

I’ve tried using Bucardo, but it doesn’t like talking to read-only databases.

Is there a way to trigger an update on the remote server from the replica?

How to setup a SQL trigger in CartoDB


I need to set up 2 SQL triggers in CartoDB that will update my inland and coastal tables each time my obs table is updated. I have never set up a SQL trigger, but I have looked at the documentation and it appears that I need a function stored somewhere that the trigger activates.

So far my first trigger looks like this:

CREATE TRIGGER update_inland
AFTER INSERT ON obs
FOR EACH ROW
EXECUTE PROCEDURE update_inland()

But I get an error message that “function update_inland() does not exist”. I want my function to be this:

UPDATE inland SET lt_dispatch_level = obs.named_lt_dispatch_level 
FROM obs 
WHERE obs.created_at = (SELECT MAX(created_at) FROM obs) 
AND inland.cartodb_id = 1

How can I create this function and subsequent trigger in CartoDB? I have been reading the documentation for both Postgres triggers and functions but I need some help.
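
A minimal sketch of what this could look like, assuming plain PostgreSQL trigger syntax is accepted by CartoDB: the UPDATE above is wrapped in a plpgsql trigger function, which the trigger then references:

CREATE OR REPLACE FUNCTION update_inland() RETURNS trigger AS $$
BEGIN
    UPDATE inland SET lt_dispatch_level = obs.named_lt_dispatch_level
    FROM obs
    WHERE obs.created_at = (SELECT MAX(created_at) FROM obs)
    AND inland.cartodb_id = 1;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER update_inland
AFTER INSERT ON obs
FOR EACH ROW
EXECUTE PROCEDURE update_inland();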

Can't list users from heroku psql


I have a Heroku Postgres add-on database with a user table. However, when I attempt to select * from user, all I get is:

  current_user  
----------------
 rtsjlhdfptlaqd
(1 row)

The table name is definitely user. There is no users table. There are definitely a bunch of users in my app, but I can’t seem to list them.

Thoughts?
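
For what it’s worth, user is a reserved word in PostgreSQL, so an unquoted reference resolves to the built-in current_user rather than the table; quoting the identifier targets the table itself (a sketch, assuming the table really is named user as stated):

-- user is reserved; the quoted identifier refers to the table, not current_user
SELECT * FROM "user";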

Restrict two specific column values from existing at the same time


I have an example PostgreSQL table in which at most one row that is not of state ‘c’ should be allowed.

I would appreciate any help creating a constraint that will enforce this.

CREATE TABLE example
(
  example_id    serial PRIMARY KEY,
  example_state CHAR(1) NOT NULL
);

ALTER TABLE example ADD CONSTRAINT 
  example_constraint
CHECK (example_state = 'a' OR example_state = 'b' OR example_state = 'c');
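
One common way to express this, offered here only as a sketch for this schema: a partial unique index over a constant expression allows at most one row whose state is not ‘c’:

-- every non-'c' row indexes the same value (true), so only one such row can exist
CREATE UNIQUE INDEX example_one_non_c
    ON example ((example_state <> 'c'))
    WHERE example_state <> 'c';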

Multiuser testing in web GIS


We have built a web-based application using AspMap and a PostgreSQL (spatial) database.

While checking our app with multi-user testing, we found that the results overlap for the parameters selected by the two users.

The rendering method we use is this: we have one column in the database, we update it, and based on that field we do the colour rendering on the map.

Now, if one user selects product x and another selects product y, and both fire the same query at the same time, both of them get mixed results on the map, because the same field in the spatial table is updated immediately one after the other.

Is this the right approach for a web application, or are there alternatives that can handle such a situation?
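
A sketch of one alternative, with purely illustrative names: key the rendering state by user session instead of sharing a single column, so concurrent selections cannot overwrite each other:

-- each session gets its own rendering state rows instead of a shared column
CREATE TABLE render_state (
    session_id   text,
    feature_id   bigint,
    render_value text,
    PRIMARY KEY (session_id, feature_id)
);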

OpenGeo postgres installation not working from command line but works from PGadminIII on fresh install


Problem:

I just did a default install of OpenGeo Suite (4.1) on Ubuntu 12.04 and followed the instructions exactly. The problem is that I can connect to my PostgreSQL just fine through pgAdmin III (1.18.1), but I can’t connect on the command line. Has anyone encountered this, and if so, how did you fix it?

Background:

As I said above, I am installing OpenGeo Suite (4.1) on a 32-bit Ubuntu 12.04 installation. In fact I did it twice, once at home and once at work, and I get the same error, so it is repeatable.

Basically, I followed all the instructions on the OpenGeo website (http://suite.opengeo.org/opengeo-docs/installation/ubuntu/install.html).

I am sure I am using the 12.04 repository to download the software. I changed postgresql.conf and pg_hba.conf to listen on my machine, accept outside connections, and use md5 authentication.

I created a password for the postgres user manually, as it instructs. However, when it comes time to test the command line, it doesn’t authenticate. I get the error:

~$ psql -U postgres -W 
Password for user postgres: 
psql: FATAL:  Peer authentication failed for user "postgres"

The problem is that I can connect just fine through pgAdmin III (1.18.1). I can run the PostGIS extension and it creates my PostGIS database. I can even edit files I created in QGIS. But I can’t log in via the command line, so I can’t run shp2pgsql to load any preexisting data. This is a fresh installation and, as I mentioned, I tried it on two separate machines (both running Ubuntu 12.04 x32).

My pg_hba.conf file:

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             192.168.0.0/16          md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local   replication     postgres                                peer
#host    replication     postgres        127.0.0.1/32            md5
#host    replication     postgres        ::1/128                 md5
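
Reading the rules above against the error: a bare psql call goes over the Unix-domain socket, so it matches the local … peer line, while pgAdmin connects over TCP and hits one of the md5 host lines. A sketch of forcing psql onto TCP instead (which host line it matches depends on the address used, so treat the host below as an assumption):

~$ psql -h ::1 -U postgres -W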

ArcGIS 10.2.1 crashes during connection with ArcSDE 10.0 inside PostgreSQL 8.0.1


I’m currently working in an infrastructure built with ArcGIS 10.0 (Desktop and Server) [SP2, build 3200]. Because some functionality (ArcGIS for Desktop extensions) requires a newer version, I’m testing how ArcGIS for Desktop 10.2.1 copes with an infrastructure built with 10.0. So far it is going quite poorly…

I guess my biggest problem is the following: there’s an ArcSDE geodatabase (10.0) set up in PostgreSQL 8.0.1 which works perfectly with the 10.0 version. However, if I want to access (through a direct connection) any raster stored in the geodatabase from ArcGIS for Desktop 10.2.1 (ArcMap, ArcCatalog), it crashes without any warning or error.

Has anyone encountered such behaviour? The weird part is that I can list the ArcSDE db and use tables and vector layers without any problems. The problem starts when I want to access any rasters in ArcCatalog or ArcMap. The stored rasters are standard ones like orthophotomaps and DEMs.

The operating systems are Windows 7 SP1 x86 for the 10.0 version and Windows XP SP3 for the 10.2.1 version. I have the PostgreSQL client files installed. I’ve tried the 9.0.5 client files, as that is the oldest PostgreSQL version listed under client files for the 10.2.1 version, and also the old 10.0 client files, which I have for the ArcGIS 10.0 version.

I do not have the RDBMS files installed, as I’m not sure what they are for.


Using PgBarman for PostgreSQL 9.3 backup


I’m setting up a backup strategy for a PostgreSQL 9.3 db. My job includes:

  • take db backups every day/week/month
  • use a Synology NAS as remote backup device

I’m thinking of using Postgres [base_backup + WAL archiving] + [rsync to NAS] to accomplish the task. The db is a low-activity one, so I would have to force WAL archiving via pg_switch_xlog or similar. One consequence is that I would end up with partially empty WAL files whose size I would like to reduce (pg_clearxlogtail has been suggested to me).
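
For context, a rough sketch of the [base_backup + WAL archiving] part as postgresql.conf settings (the archive destination and rsync options are placeholders, not from the post):

# archive each completed WAL segment to the NAS mount
wal_level = archive
archive_mode = on
archive_command = 'rsync -a %p /mnt/nas/wal_archive/%f'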

Searching the Internet I found PgBarman, which looks like a good aid for the task, but the documentation on its site assumes that:

  • I have Postgres installed on the backup machine
  • I install PgBarman on the backup machine

I don’t think I can install either of the two on the NAS (am I wrong?).

So I’m rather planning to:

  • install PgBarman on the actual Postgres server machine
  • use it to back up the db locally and manage any recovery
  • archive the base backup and WAL files to the remote NAS

The main goal of the backup is not failure recovery, as the Postgres server is hosted by a cloud provider which ensures availability and no data loss, but rather recovery from human error.

Does this strategy make sense? Can anyone point me to a good resource on how to implement it, or would there be a better solution? Any suggestions on implementing it, or on alternative approaches, would be greatly appreciated.

(Streaming replication has already been suggested to me many times, but I don’t think I need it, and moreover I wouldn’t know how to set it up on a NAS.)

Postgres database dump and restore on different database


I have two Postgres databases on the same server with the same schemas.
The goal is to have DB1 as production and DB2 as the database that receives all the data migrated from a MySQL db, and then to use the dump created from DB2 after the migration to restore DB1. In other words, dump DB2 and use this dump to perform a restore on DB1.

This would keep the production DB1 “always” available even while the migration process is taking place on DB2.

My question is: is it possible to use the DB2 dump to restore DB1, or should a different strategy be used, like renaming the databases?
Thank you.
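
A bare sketch of the dump-and-restore step being asked about, using placeholder database names (pg_dump’s custom format plus pg_restore --clean is just one reasonable set of options):

pg_dump -Fc db2 -f db2.dump
pg_restore --clean -d db1 db2.dump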

Could not open file "pg_clog/0000": No such file or directory


My local PostgreSQL 9.2 database won’t start anymore this morning. I am on Windows 7. Yesterday I performed a Windows Update before going to bed.

I have checked the log and found:

2014-05-28 10:16:03 CEST LOG:  database system was interrupted while in recovery at 2014-05-28 10:11:41 CEST
2014-05-28 10:16:03 CEST HINT:  This probably means that some data is corrupted and you will have to use the last backup for recovery.
2014-05-28 10:16:03 CEST LOG:  database system was not properly shut down; automatic recovery in progress
2014-05-28 10:16:03 CEST LOG:  redo starts at 0/25544D08
2014-05-28 10:16:03 CEST LOG:  file "pg_clog/0000" doesn't exist, reading as zeroes
2014-05-28 10:16:03 CEST CONTEXT:  xlog redo commit: 2014-04-24 18:48:48.775+02; rels: base/16855/140563 base/16855/140562 base/16855/140561 base/16855/140560 base/16855/140559 base/16855/140558 base/16855/140557 base/16855/140556
2014-05-28 10:16:03 CEST FATAL:  the database system is starting up
2014-05-28 10:16:04 CEST FATAL:  the database system is starting up
[... the same FATAL line repeats roughly once per second until 10:16:37 ...]
2014-05-28 10:16:38 CEST FATAL:  could not access status of transaction 0
2014-05-28 10:16:38 CEST DETAIL:  Could not open file "pg_clog/0000": No such file or directory.
2014-05-28 10:16:38 CEST CONTEXT:  xlog redo zeropage: 4
2014-05-28 10:16:38 CEST LOG:  startup process (PID 8024) exited with exit code 1
2014-05-28 10:16:38 CEST LOG:  aborting startup due to startup process failure

It is not a real issue if I have lost the database content, because I use it for development purposes only.

How should I proceed? Should I reinstall PostgreSQL 9.2? Is there a command to restore it to a proper state?
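
Since the content is disposable, the simplest route is often to drop the cluster and re-run initdb rather than reinstalling. A commonly suggested workaround for the missing clog segment specifically is sketched below; the data directory path is a placeholder and this is not a supported recovery procedure:

:: Sketch only: create a zero-filled 256 KB clog segment (262144 bytes) so redo
:: can read it as zeroes; replace the path with your actual data directory.
fsutil file createnew "C:\path\to\data\pg_clog\0000" 262144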

Why does the behavior of array syntax differ from '(?,?)' syntax when updating a point field and that field is NULL?


I’m using PostgreSQL 9.3.5.

Suppose I have the following table:

CREATE TEMPORARY TABLE point_test ("name" varchar(255), "pt" point);

I then insert a row, leaving pt NULL:

INSERT INTO point_test (name) VALUES ('me');

Then, I want to update pt using the array-like syntax:

UPDATE point_test SET pt[0] = 42, pt[1] = -42 WHERE name = 'me';

which appears to succeed (UPDATE 1) – EXPLAIN VERBOSE shows the following:

Update on pg_temp_65.point_test  (cost=0.00..11.75 rows=1 width=538)
  ->  Seq Scan on pg_temp_65.point_test  (cost=0.00..11.75 rows=1 width=538)
        Output: name, (pt[0] := 42::double precision)[1] := (-42)::double precision, ctid
        Filter: ((point_test.name)::text = 'me'::text)

However, pt is still NULL. If I use a slightly different syntax, it works in this case:

UPDATE point_test SET pt = '(42,-42)' WHERE name = 'me';

results in the point (42,-42) as expected.

Further, now that there is something in the field, I can update the point using the first syntax:

UPDATE point_test SET pt[0] = 84, pt[1] = -84 WHERE name = 'me';

results in the point (84,-84) as expected.

Why do the behaviors of the two syntaxes differ only when the pt column is NULL?

Connect Jira to Postgresql with Pgpool-II


I have a problem connecting Jira 6 to Pgpool-II, which is responsible for load balancing and replication across two PostgreSQL instances.


Pgpool is started on port 9999. I just want to connect to it at this step of the Jira installation, but Jira reports that the connection attempt failed.

How to connect C# to PostgreSQL on host j.layershift.co.uk


I have installed a PostgreSQL database on the host http://postgres-project-1241043.j.layershift.co.uk/.

I want to connect to the database using C#. I use Npgsql with the following connection string.

connectionString = @ "Server = postgres-project-1241043.j.layershift.co.uk, Port = 5432, User Id = postgre; Password = abcdef; Database = dbluanvantn;";

But I am not able to connect to the server, and I get this error:

Npgsql.NpgsqlException: Failed to a connection to 'postgres-project1241043.j.layershift.co.uk'.

Am I using the correct connection string? Please help me fix it.
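
For comparison, a sketch of the conventional Npgsql key/value form, in which every parameter is separated by a semicolon (the values are the ones from the post; whether the user should be postgres rather than postgre is something only the poster can check):

// semicolons between all parameters, no commas
string connectionString =
    "Server=postgres-project-1241043.j.layershift.co.uk;Port=5432;" +
    "User Id=postgre;Password=abcdef;Database=dbluanvantn;";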

Debugging ogr2ogr "AddGeometryColumn failed" and "Terminating translation prematurely after failed translation of layer"


As noted in the title, I’m getting the following error:

ERROR 1: AddGeometryColumn failed for layer pretty_polys, layer creation has
failed.
ERROR 1: Terminating translation prematurely after failed
translation of layer mybeautifulshapefile (use -skipfailures to
skip errors)

I’m just adding a shapefile to PostGIS using ogr2ogr (PG connection data is all
fake placeholders, obviously):

ogr2ogr \
    -f PostgreSQL \
    PG:"host='000.00.000.00' port='5432' user='nullislandpatriot' password='nullisland4eva' dbname='data_i_like'" \
    "data/mybeautifulshapefile.shp" \
    -nln pretty_polys \
    -nlt POLYGON

I’ve tried to identify many possible sources for the error, but I can’t seem to
figure out what’s wrong, and the error isn’t descriptive enough.

So my question is, how can I drill down into this error to find out why AddGeometryColumn is failing?

Here are some of the things I have already tried:

  • using different files. I receive this error with any file, including the
    natural earth countries shapefile
  • checking the permissions in PostgreSQL. I have read and write access to
    the public schema that is being used. Though I should note that I am not
    the superuser, and we had to give explicit permission to allow me to access
    spatial_ref_sys, and geometry_columns.
  • editing the -nlt option to make sure I’m using the correct geometry type.
  • checking different arrangements of -a_srs, -s_srs, -t_srs, and not
    using them at all.
  • lots of other things before I tried the natural earth data, like
    PRECISION=NO, different database schemas, text encoding, …

My assumption is that the error has something to do with permissions or
settings in PostgreSQL or the syntax for the connection info, since I get the
same error for well-known shapefiles like Natural Earth. I’ve tried the
connection info without the interior single quotes.

I also tried CPL_DEBUG=ON, which shows me a bit more of what is working:

Shape: DBF Codepage = UTF-8 for
/Users/bgolder/Downloads/ne_10m_admin_0_countries/ne_10m_admin_0_countries.shp
Shape: Treating as encoding 'UTF-8'.
OGR:
OGROpen(/Users/bgolder/Downloads/ne_10m_admin_0_countries/ne_10m_admin_0_countries.shp/0x7f9a79d00720)
succeeded as ESRI Shapefile.
PG: DBName="'data_i_like'"
PG: PostgreSQL version string : 'PostgreSQL 9.1.9 on x86_64-unknown-linux-gnu,
compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit'
PG: PostGIS version string : '2.0 USE_GEOS=1 USE_PROJ=1 USE_STATS=1'
OGR_PG_NOTICE: NOTICE:  CREATE TABLE will create implicit sequence
"countries_ogc_fid_seq" for serial column "countries.ogc_fid"

OGR_PG_NOTICE: NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index
"countries_pkey" for table "countries"

ERROR 1: AddGeometryColumn failed for layer countries, layer creation has
failed.
ERROR 1: Terminating translation prematurely after failed
translation of layer ne_10m_admin_0_countries (use -skipfailures to skip
errors)

GDAL: In GDALDestroy - unloading GDAL shared library.

Specs:

  • PostGIS 2.0, with PostgreSQL 9.1.9 on Ubuntu 12.04
  • GDAL 1.11.0 (on OS X 10.9.3, installed via Homebrew)
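
One way to drill further into this, sketched under the assumption that the half-created target table from the ogr2ogr run still exists (the column name and SRID here are illustrative): call AddGeometryColumn by hand in psql with the same parameters, so PostGIS reports the underlying error (often a permissions problem) directly rather than through ogr2ogr:

-- run as the same PostgreSQL user ogr2ogr connects with
SELECT AddGeometryColumn('public', 'pretty_polys', 'wkb_geometry', 4326, 'POLYGON', 2);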

Should one always VACUUM ANALYZE before REINDEXing in PostgreSQL 8.4?


Early in the morning every day a pgAgent job refreshes the contents of table A from table B on my PostgreSQL 8.4 database. Table A contains around 140k records across 91 columns and has two indexes – one as part of the PRIMARY KEY and the other a GIST index on a POINT PostGIS geometry column.

To make the process go a little faster, the job drops the index on the geometry column before deleting the records in table A and inserting the records from table B; the index is then recreated. With all that done, the autovacuum daemon gets to work when it feels like it (after ten minutes or so, judging by the job completion time in the job stats against the autovacuum run time in the table stats).
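
For clarity, a sketch of the nightly refresh as described, with invented index and column names:

DROP INDEX table_a_geom_gist;
DELETE FROM table_a;
INSERT INTO table_a SELECT * FROM table_b;
-- (the question: should an explicit VACUUM ANALYZE go here, before rebuilding?)
CREATE INDEX table_a_geom_gist ON table_a USING GIST (geom);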

Upon checking the table this morning after all this had happened, the table stats told me the table size was 272 MB, the TOAST table size was 8192 bytes, and the index size was 23 MB. This seemed quite large, so I issued a REINDEX command on the table and the index size came down to 9832 kB.

My questions are these:

Why does the REINDEX apparently reduce the size of the indexes so much when the indexes (or at least the geometry column index) have been built anew from scratch? Should I make sure that the table has been vacuumed/analyzed before the indexes are built? Is not dropping the index on the primary key a factor in this? What am I missing?

Update SQL n:n table with multiple associations


I have done this before, but before I mindlessly repeat something that I consider a hack, I’m asking here.

I have 3 tables (details left out for clarity):

inv_items

id bigserial (PK)
sku character varying(24)
name character varying(32)
...

inv_item_groups

id bigserial (PK)
name character varying(32)
...

inv_item_group_members

item_group_id bigint (FK -> inv_item_groups)
item_id bigint (FK -> inv_items)

Now, in my code, I have an object like so (in pseudo-code):

class ItemGroup
   id:long
   groupName:String
   items:long[]

and these objects can be modified and then need to be updated. Since I want to preserve key integrity, I need the inv_item_group_members table (otherwise I would’ve used other solutions).

Now, the usual way I was doing this was to

DELETE FROM inv_item_group_members WHERE item_group_id = $1
-- where $1 is the object's id

then, for each item in the object:

INSERT INTO inv_item_group_members (item_group_id, item_id) VALUES ($1, $2)

Is there a better solution? What are the alternatives? I’m thinking of an SQL function, but I’m not really sure what the best approach is here (I’m not very experienced with PostgreSQL yet). I have read about writable CTEs, but they do not address the case where elements are removed from the array (i.e. an association is removed).
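
For what it’s worth, a sketch of how a writable CTE could cover the removal case too, assuming $1 is the group id and $2 a bigint[] of the object’s item ids (parameter numbering follows the snippets above):

-- delete associations no longer in the array, then insert the missing ones
WITH removed AS (
    DELETE FROM inv_item_group_members
    WHERE item_group_id = $1
      AND item_id <> ALL ($2::bigint[])
)
INSERT INTO inv_item_group_members (item_group_id, item_id)
SELECT $1, i
FROM unnest($2::bigint[]) AS i
WHERE NOT EXISTS (
    SELECT 1 FROM inv_item_group_members m
    WHERE m.item_group_id = $1 AND m.item_id = i
);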

Does PostgreSQL support multi-threaded replication?


Is PostgreSQL replication single-threaded? Are there any tools to achieve multi-threaded replication?

I’m asking because MariaDB supports this, and I’m currently learning both of these databases.

Bulk update performance


I have several processes that aggregate statistics on items (how much each was shown, clicked, … around 30 factors).
Once every 5 minutes I flush the aggregated data to Postgres (9.1).
I have 250K aggregated items with statistics.
The table key is:
item_id, process_id (process_id is part of the key in order to avoid locking), channel, (2 more fields).

The process is like this:

  • open a transaction
  • try to update (most of the time the item already exists in the db, so the update succeeds); the update is done like this: update statistics set counter_a = counter_a + 1 … where id = x and channel = y and …;
  • insert if the update failed
  • commit

This takes 15 minutes for 250K updates.
Any advice?
I saw this question:
Optimizing bulk update performance in Postgresql
But that case is not relevant here.

I can’t drop the indexes because I need them for the update.
Is there any way to avoid the MVCC overhead when I know that this is the only process that will write the data, and there is no need for isolation, or even a transaction in case of failure?

The db machine runs on an Amazon “hi1.4xlarge” instance (64 GB RAM, 8 cores).
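
One pattern that is often suggested for this kind of workload, sketched here with invented column names and without the insert-if-missing branch: load each 5-minute batch into a temporary table with one COPY and apply a single set-based UPDATE instead of 250K individual statements:

BEGIN;
CREATE TEMP TABLE batch (LIKE statistics) ON COMMIT DROP;
COPY batch FROM STDIN;   -- bulk-load the aggregated deltas for this window

UPDATE statistics s
SET counter_a = s.counter_a + b.counter_a
FROM batch b
WHERE s.item_id = b.item_id
  AND s.process_id = b.process_id
  AND s.channel = b.channel;
COMMIT;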

Explicitly granting permissions to update the sequence for a serial column necessary?


Recently I created a table as a superuser, including a serial id column, e.g.:

create table my_table
(
    id serial primary key,
    data integer
);

As I wanted my non-superuser user to have write access to that table, I granted it permissions:

grant select, update, insert, delete on table my_table to writer;

At a random point in time after doing so, the insertions made by that user started to fail because the user lacked permission to modify the sequence my_table_id_seq associated with the serial column. Unfortunately, I can’t reproduce that on my current database.

I worked around this by giving the user the required permission, like this:

grant all on table my_table_id_seq to writer;

Can someone help me understand

  • why, at some point, the previously sufficient permissions might start to fail?
  • what is the proper way to grant write permission for a table with a serial column?
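
For reference, a narrower grant that is commonly used for the sequence behind a serial column, sketched here rather than taken from the post (nextval() needs USAGE or UPDATE on the sequence object itself):

-- grant on the sequence, not the table; USAGE covers nextval()
GRANT USAGE, SELECT ON SEQUENCE my_table_id_seq TO writer;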