
Recovery from Live to a new Slave Server – PostgreSQL – ERROR


I’ve started setting up a new SLAVE PostgreSQL server.

master: 192.168.100.1

slave1: 192.168.100.2

slave2 (NEW SLAVE): 192.168.100.3

* NOTE: I ran pg_basebackup from another STANDBY server, not from the MASTER.

1 – screen -t basebackup

2 – su – postgres

3 – cd ~/9.2/data/

4 – ssh postgres@slave1 'pg_basebackup --pgdata=- --format=tar --label=bb_master --progress --host=localhost --port=5432 --username=replicator --xlog | pv --quiet --rate-limit 100M' | tar -x --no-same-owner

5 – I’ve commented out “primary_conninfo =” and “standby_mode =” so the slave could restore the files from the WAL archive

6 – After that, I got these logs:

postgres(iostreams)[10037]:       2016-01-09 00:07:26.604 UTC|10085|LOG:  database system is ready to accept read only connections

7 – After the server finished replaying the WAL archive, I turned on replication from the MASTER in recovery.conf:

recovery.conf on the New Slave:

restore_command = 'exec nice -n 19 ionice -c 2 -n 7 ../../bin/restore_wal_segment.bash "../wal_archive/%f" "%p"'
archive_cleanup_command = 'exec nice -n 19 ionice -c 2 -n 7 ../../bin/pg_archivecleaup_mv.bash -d "../wal_archive" "%r"'
recovery_target_timeline = 'latest'
standby_mode = on
primary_conninfo = 'host=192.168.100.2 port=5432 user=replicator application_name=replication_slave02'

But once I restarted PostgreSQL, I got this error:

WAL segment `../wal_archive/00000005.history` not found
2016-01-09 01:13:39.183 UTC|774|FATAL:  timeline 2 of the primary does not match recovery target timeline 4

ls /var/lib/pgsql/9.2/wal_archive

postgres postgres 0000000200000C6900000065
postgres postgres 0000000200000C6900000066

restore_wal_segment.bash:

#!/bin/bash -eu


# multi version switch for the pg_archivecleanup tool
# package postgresql92-contrib doesn't create an /etc/alternatives link for pg_archivecleanup, and probably
# it's not desirable either. This command switches between different versions of the command based on the
# path it's been invoked from


declare -r -x PATH='/usr/local/bin:/usr/bin:/bin';




declare -i TS_THRESHOLD=0;
declare -i WAL_MTIME=-1;

declare PG_VERSION='';



    # invocation check
    if [ ${#} -lt 2 ] || ([ ${#} -ge 3 ] && (! [[ "${3}" =~ ^[0-9]{1,10}$ ]])); then
        printf 'Usage:\n' 1>&2;
        printf '\t%s { wal_segment_file } { target_file } [ minimum_age_in_seconds ]\n' "${0}" 1>&2;
        exit 2;
    fi;



    # make sure we're running in a PG cluster and extract the version
    if 
        [ -f 'PG_VERSION' ] && 
        [ -f 'postmaster.opts' ] && 
        [ -a 'pg_xlog' ] 
    ; then
        PG_VERSION="$(0<'PG_VERSION')";
        if (! [[ "${PG_VERSION}" =~ ^[0-9]+. ]]); then
            printf 'PG cluster advertises an invalid PostgreSQL version (%s)n' "${PG_VERSION}" 1>&2;
            exit 3;
        fi;
    else
        printf 'Current path does not look like a PostgreSQL cluster directory. Aborting\n' 1>&2;
        exit 3;
    fi;

    # This is race-condition prone but we're really trusting postgres not to start multiple instances of this
    # routine in parallel
    if (! ([ -f "${1}" ] || [ -h "${1}" ])); then
        printf 'WAL segment `%s` not found\n' "${1}" 1>&2
        exit 4;
    fi;

    if [ ${#} -ge 3 ]; then
        # we refuse to restore files newer than the threshold with error 7
        TS_THRESHOLD=$(($(date +'%s') - ${3}));
        WAL_MTIME=$(stat --format='%Y' "${1}");

        if [ ${WAL_MTIME} -gt ${TS_THRESHOLD} ]; then
            printf 'Archived WAL segment `%s` is newer than the configured delay (%d seconds)\n' "${1}" ${3} 1>&2
            exit 7;
        fi;
    fi;

    exec cat 0<${1} 1>"${2}";

pg_archivecleaup_mv.bash:

#!/bin/bash -eu


# multi version switch for the pg_archivecleanup tool
# package postgresql92-contrib doesn't create an /etc/alternatives link for pg_archivecleanup, and probably
# it's not desirable either. This command switches between different versions of the command based on the
# path it's been invoked from


declare -r -x PATH='/usr/local/bin:/usr/bin:/bin';


declare PG_VERSION='';


    # make sure we're running in a PG cluster and extract the version
    if 
        [ -f 'PG_VERSION' ] && 
        [ -f 'postmaster.opts' ] && 
        [ -a 'pg_xlog' ] 
    ; then
        PG_VERSION="$(0<'PG_VERSION')";
        if (! [[ "${PG_VERSION}" =~ ^[0-9]+. ]]); then
            printf 'PG cluster advertises an invalid PostgreSQL version (%s)n' "${PG_VERSION}" 1>&2;
            exit 3;
        fi;
    else
        printf 'Current path does not look like a PostgreSQL cluster directory. Aborting\n' 1>&2;
        exit 3;
    fi;

    # we got the version. we just try to run it. -u is unset so that we can pass "${@}" even if it's not set
    set +u;
    exec "/usr/pgsql-${PG_VERSION}/bin/pg_archivecleanup" "${@}";

What can I do to solve the problem? It’s really important, as this is a new production slave. Thank you!


Postgres Can't Add Foreign Key Constraint


I have a table with about 220 million records :( and I need to add a foreign key constraint.

My command looks something like this:

ALTER TABLE events 
ADD CONSTRAINT events_visitor_id_fkey 
FOREIGN KEY (visitor_id) 
REFERENCES visitors(id) 
ON DELETE CASCADE;

It’s been running for probably an hour now.

I ran this beforehand:

set maintenance_work_mem='1GB';

What’s the fastest way to do this, and about how long should it take? The table it references only has about 25 million rows.

I’m running it on an RDS instance of db.r3.large (15 GB of RAM).

EDIT:

Just cancelled the command and got this:

ERROR:  canceling statement due to user request
CONTEXT:  SQL statement "SELECT fk."visitor_id" FROM ONLY "public"."events" fk LEFT OUTER JOIN ONLY "public"."visitors" pk ON ( pk."id" OPERATOR(pg_catalog.=) fk."visitor_id") WHERE pk."id" IS NULL AND (fk."visitor_id" IS NOT NULL)"
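
Not an authoritative answer, but a pattern often suggested for large tables is to split the work: add the constraint as NOT VALID (which returns quickly and checks only new rows), then validate it in a separate step. An index on the referencing column is not required for the validation itself, but it is usually wanted anyway so that ON DELETE CASCADE does not have to scan events. A minimal sketch (the index name is made up):

CREATE INDEX CONCURRENTLY events_visitor_id_idx ON events (visitor_id);

ALTER TABLE events
  ADD CONSTRAINT events_visitor_id_fkey
  FOREIGN KEY (visitor_id) REFERENCES visitors(id)
  ON DELETE CASCADE
  NOT VALID;            -- fast: existing rows are not checked yet

ALTER TABLE events
  VALIDATE CONSTRAINT events_visitor_id_fkey;   -- the long scan happens here, with a weaker lock on recent versions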

After truncating a table in Postgres, why does QGIS lose track of updates/features?


I’m running a python script which drip-feeds data slowly into a postgres database with postgis extension. I’m using autocommit, and committing one row at a time. Horrendously slow, but I need to do it this way for a good reason :)

Once I add a postgres layer, QGIS seems to poll the database every so often and the number of features increases. This is great, and gives me visual feedback that my script is working.

If I stop my script, TRUNCATE the table using pgAdminIII and restart my script, QGIS correctly clears the display (it notices that there are no features). However, it doesn’t seem to track subsequent changes to the database, and the feature count sticks at the number of rows there were, rather than 0. I need to add the postgres layer again, which can take a while.

Is this a bug, a feature, or am I doing something wrong?

(Environment: QGIS 2.12.1 Pisa, Postgres 9.3.10, PostGIS 2.1.2, Ubuntu Tahr 32 bit)

Update

It seems that once the number of features exceeds the count that was in the table before I truncated it, QGIS starts tracking changes again.

Get the 2 points immediately before and after a given point in PostGIS


In the picture below, the given point is C. I would like to select A and B, but not Z (which is closer to C than B, but is not the vertex immediately after C in the LineString).

Note that C is always ON the line but not a vertex of the line.

[Image: a LineString whose vertices include A, B and Z, with the point C lying on the line between A and B]
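
For what it’s worth, here is a rough sketch of the usual approach (assuming PostGIS 2.1+ for ST_LineLocatePoint; the table my_lines with columns line and pt_c is hypothetical): locate C as a fraction along the line, locate every vertex the same way, and pick the nearest vertex below and above that fraction.

-- sketch: A = last vertex before C, B = first vertex after C (C lies on the line)
WITH c AS (
  SELECT line, ST_LineLocatePoint(line, pt_c) AS frac      -- fraction of C along the line
  FROM my_lines                                            -- hypothetical table holding the line and point C
),
verts AS (
  SELECT d.geom AS vertex,
         ST_LineLocatePoint(c.line, d.geom) AS vfrac       -- fraction of each vertex
  FROM c, LATERAL ST_DumpPoints(c.line) AS d
)
(SELECT vertex FROM verts, c WHERE vfrac < c.frac ORDER BY vfrac DESC LIMIT 1)   -- A
UNION ALL
(SELECT vertex FROM verts, c WHERE vfrac > c.frac ORDER BY vfrac ASC  LIMIT 1);  -- B

On versions older than 2.1 the same function is spelled ST_Line_Locate_Point; the sketch also assumes a single, non-self-intersecting LineString.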

What role do plpythonu functions have in the file system?


As a preliminary test for further work, I’m trying to use a simple plpythonu function in PostgreSQL 9.2 to create a folder in my filesystem. So I have this code:

CREATE OR REPLACE FUNCTION "mkdir_test"() 
RETURNS void AS $BODY$ 

import os

dir = os.path.dirname('/tmp/areas/testdir/')
if not os.path.exists(dir):
    os.makedirs(dir)


$BODY$

LANGUAGE plpythonu
COST 100
CALLED ON NULL INPUT
SECURITY INVOKER
VOLATILE;
ALTER FUNCTION "mkdir_test"() OWNER TO "chewbacca";

It works, but the created directory ‘testdir’ belongs to _postgres and has permissions 700, meaning it is forbidden to anyone but postgres. How can I change this so that the user triggering the function owns the created file/folder? (Currently I’m doing this on Mac OS X 10.11, but the objective is to have this working on any OS.)
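
For what it’s worth, the server process (and therefore PL/Python) runs as the server’s OS account, so anything it creates is owned by that account; handing ownership to the database user who called the function would need OS privileges the server normally does not have. What can be done from PL/Python is relaxing the permissions, e.g. (a sketch with a hypothetical function name):

CREATE OR REPLACE FUNCTION "mkdir_test_open"(path text)
RETURNS void AS $BODY$

import os

if not os.path.exists(path):
    # still created by the server's OS user (here _postgres); only the mode can be opened up
    os.makedirs(path)
    os.chmod(path, 0o777)

$BODY$ LANGUAGE plpythonu VOLATILE;

If true per-user ownership is required, the usual route is to do the filesystem work in the client application rather than in a server-side function.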

Send Keyboard Interrupt in PostgreSQL psql console?


Is it possible to send a keyboard interrupt signal to the PostgreSQL psql console that takes me back to a fresh prompt? Sometimes I type a command incorrectly, hit Enter, and the console just sits there. If I hit ^C or ^D to try to interrupt the command, the console session ends and I have to restart it, which can be a pain if I had a lot of things going on in that session.

Thanks.

Group and count array elements using intarray


I am working on a Postgres 9.4 project with the intarray extension enabled. We have a table that looks like this:

items
-------------------------------------
id    name                  tag_ids  
--------------------------------------
1     a car                 {1,4}
2     a room to rent        {1}
3     a boat                {1,2,4,11}
4     a wine                {2}
5     emily                 {3}

I’d like to group the tag ids if possible, e.g. get a count of rows for each of the elements in ‘{1,2,4,11}’:

tag_id  count
1       3
2       2
4       2
11      1

Is this possible? I would think an intersection like this:

select * from items where tag_ids && '{1,2,4,11}'

But I need to group by the individual array elements inside the intersection result. If I group by tag_ids, I just get each distinct array value.

How would I do it?
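
A hedged sketch of one way to do it: keep the && filter to pick the matching rows, then unnest the arrays and count only the elements you asked about (works on 9.3+, where a set-returning function in FROM can reference the preceding table):

SELECT tag_id, count(*) AS count
FROM items, unnest(tag_ids) AS tag_id           -- one output row per array element
WHERE tag_ids && '{1,2,4,11}'::int[]            -- only rows overlapping the wanted tags
  AND tag_id = ANY ('{1,2,4,11}'::int[])        -- and only the wanted elements themselves
GROUP BY tag_id
ORDER BY tag_id;

For the sample data this returns counts of 3, 2, 2 and 1 for tags 1, 2, 4 and 11, matching the expected output.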

Improving performance when importing file geodatabase into PostgreSQL?


I have a file geodatabase created with ArcMap 10.1 that contains one feature class with 650 million points. The feature class contains the shape field and an identifier. The .gdb is approximately 26GB.

I’m using ogr2ogr to import the .gdb into a PostgreSQL database. I started the process and the features are being inserted into the database. I verified this with a SELECT COUNT(*) FROM <TABLE> and the number of rows is increasing.

Based on my rudimentary approach to timing – watch the PC clock and use the above SELECT – I estimate the import is progressing at 7 million features per hour. Some quick mental math and I’m looking at 90+ hours to import the .gdb – if the rate remains the same and doesn’t degrade over time. This is roughly 2000 features per second.

The database is an out of the box PostgreSQL 9.4.5 / PostGIS 2.2 installation on a Windows 10 PC with a SSD (400 GB free / 512GB total) and 16GB RAM. No PostgreSQL configuration changes have been made yet as I’m not sure what, if anything, I should do.

Is there anything I can do to increase the performance of the .gdb import? Perhaps settings that maximize write throughput, which I can then revert after the imports? I could let this process run until completion, but there will be other imports I need to perform and I’d rather not wait four days for each one. I have full control of this PC, so I’ll cautiously say I’m open to any suggestions.

Update:
I’m using the following command and options. I don’t know how to check whether this uses the open-source or the ESRI FileGDB driver. The version of GDAL is 1.12 (95% sure) and it was installed with QGIS 2.12 Lyon (100% sure of this).

ogr2ogr -f "PostgreSQL" PG:"host=localhost port=5432 dbname=spatial_playground user=postgres password=********" c:gis_datapointdatapoints.gdb 

The database and the .gdb are on the same hard drive, so disk contention is probably a factor; I considered putting the .gdb on a USB 3.0 hard drive. The ogr2ogr command runs on the same machine too, so everything is being done on one machine and no network is involved yet.
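
Not an authoritative answer, but two knobs that are commonly suggested for bulk loads of this kind: make sure GDAL uses COPY instead of per-row INSERTs (the --config PG_USE_COPY YES option to ogr2ogr), and temporarily relax some server settings for the duration of the load. A sketch for 9.4 (values are illustrative; revert them afterwards):

ALTER SYSTEM SET synchronous_commit = off;       -- don't wait for a WAL flush on every commit
ALTER SYSTEM SET maintenance_work_mem = '1GB';   -- faster index builds after the load
ALTER SYSTEM SET checkpoint_segments = 64;       -- fewer checkpoints during the load (9.4; max_wal_size in 9.5+)
SELECT pg_reload_conf();

Loading into a table without indexes and creating the spatial index afterwards is also usually cheaper than maintaining the index row by row during the import.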


What are the consequences of dropping or changing the user name for the postgres DB account?


I feel confident I understand what’s involved at the db level:

  • If dropping, I’d need to create users for each sys admin, give them SUPERUSER, and then drop the postgres user.
  • If changing the name, I’d need to alter (rename) the postgres account, roughly as sketched below.
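
Purely to illustrate the mechanics of the second option (the replacement role name and passwords below are made up), the rename path looks like this; note that renaming a role invalidates its MD5 password, so the password has to be set again:

-- run this while connected as a different superuser than the one being renamed
CREATE ROLE alice LOGIN SUPERUSER PASSWORD 'change-me';    -- personal admin account

ALTER ROLE postgres RENAME TO dbadmin;                     -- hypothetical new name
ALTER ROLE dbadmin PASSWORD 'change-me-too';               -- the old MD5 hash included the old role name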

My problem, though, is that I’m not sure what the consequences of this action would be. For example, what would it mean from the point of view of the Linux postgres user? When I su - postgres, I can run psql and I’m able to connect to the database. I assume that capability goes away if the postgres DB user is gone.

Also, the database is started using the postgres linux user. Is there some issue if I remove the postgres user that would make it so the database wouldn’t even be able to start?

From ps:

/usr/pgsql-9.3/bin/postgres -D /var/lib/pgsql/9.3/data PG_GRANDPARENT_PID=1 USER=postgres PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin PWD=/ PGLOCALEDIR=/usr/pgsql-9.3/share/locale LANG=en_US.UTF-8 PGSYSCONFDIR=/etc/sysconfig/pgsql SHLVL=1 HOME=/var/lib/pgsql LOGNAME=postgres PGDATA=/var/lib/pgsql/9.3/data _=/usr/pgsql-9.3/bin/postgres

I’m running PostgreSQL 9.3 on CentOS 7.

PostgreSQL dump import only imports tables and not data


I’m trying to import a PostgreSQL dump into my DB. The dump is saved in a .sql file. I use the following command:

psql -U postgres -d dhris < dbDump.sql

and the table was created before.

The import does happen, but there are tables without any data inside. What is the problem? I am using PostgreSQL 9.3.

Import output:

SET
SET
SET
SET
SET
SET
CREATE EXTENSION
COMMENT
SET
CREATE TYPE
ALTER TYPE
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
SET
SET
CREATE TABLE
ALTER TABLE
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE FUNCTION
ALTER FUNCTION
CREATE AGGREGATE
ALTER AGGREGATE
CREATE AGGREGATE
ALTER AGGREGATE
CREATE AGGREGATE
ALTER AGGREGATE
CREATE OPERATOR
ALTER OPERATOR
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
COMMENT
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE SEQUENCE
ALTER TABLE
ALTER SEQUENCE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE VIEW
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
CREATE TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
COPY 15
COPY 5662
COPY 0
COPY 7463
COPY 111302
COPY 0
COPY 97639
COPY 114589
COPY 0
COPY 0
COPY 245
COPY 0
COPY 2
COPY 2
COPY 2
COPY 6
COPY 1
COPY 1
COPY 150440
COPY 88738
COPY 81140
COPY 328
COPY 112917
COPY 1
COPY 4
COPY 2
COPY 385037
COPY 110107
COPY 0
COPY 16
COPY 164
COPY 3
COPY 328
COPY 19
COPY 1
COPY 79861
COPY 75210
COPY 0
COPY 0
COPY 0
COPY 2249567
COPY 901494
COPY 31478
COPY 22
COPY 73
COPY 5789
COPY 4468
COPY 0
COPY 18
COPY 36
COPY 5780
COPY 5
COPY 220
COPY 25
COPY 3
COPY 4
COPY 1
COPY 3
 setval 
--------
    624
(1 row)

COPY 0
COPY 0
COPY 201027
COPY 58
COPY 15643
COPY 74222
COPY 164038
COPY 462554
COPY 0
COPY 161628

Which indexes should I create?


I need assistance with which indexes to create on a table, and the best approach for querying it.
We use PostgreSQL.

Here’s the table:

create table myt (
    the_time timestamp, 
    oid int, 
    counter int, 
    other_counter decimal(8,2) 
);

The ‘key’ is the timestamp + oid.
We slice the table into 30-minute periods.
The counter and other_counter columns are aggregated per period.

Here’s a data sample from the table:

insert into myt 
values 
('2012/01/01 00:00', 1, 10, 2.5),
('2012/01/01 00:30', 1, 5, 1.5),
('2012/01/01 00:30', 2, 8, 13.5),
('2012/01/01 01:00', 1, 15, 4),
('2012/01/01 01:00', 2, 10, 2.25),
('2012/01/01 01:00', 3, 2, 4.5),
('2012/01/01 01:30', 1, 10, 3.75),
('2012/01/01 02:00', 2, 30, 1.5),
('2012/01/01 02:30', 3, 10, 22.5),

('2012/01/02 00:00', 1, 10, 34.5),
('2012/01/02 00:30', 1, 70, 1.25),
('2012/01/02 01:00', 1, 20, 12.0),

('2012/01/03 00:00', 2, 40, 50),
('2012/01/03 00:30', 2, 90, 10);

I need to query the table based on the_time and oid and sum the counters.

This is the basic query:

select
  date_trunc('day', the_time) as interv_start,
  date_trunc('day', the_time) + interval '1 day' as interv_end,
  sum(counter), 
  sum(other_counter)
from myt
where oid in (1, 2)
  and the_time >= date '2012/01/02'
group by date_trunc('day', the_time)
order by interv_start;

We’ll probably set the time limit with BETWEEN. Sometimes we’ll filter on the_time = … as well.

I would like to know which indexes to create.

I probably need this index:

CREATE UNIQUE INDEX idx ON myt (the_time, oid);

However, do I need the following indexes?

create index time_trunc_Idx on myt (date_trunc('day', the_time));
create index oidIdx on myt (oid);
create index time_idx on myt (the_time);

Here are the execution plan results for the different index sets.
For one index: http://sqlfiddle.com/#!15/01ae6/1/0

Sort  (cost=24.42..24.43 rows=1 width=26)
    Sort Key: (date_trunc('day'::text, the_time))
    ->  HashAggregate  (cost=24.40..24.41 rows=1 width=26)
          ->  Bitmap Heap Scan on myt  (cost=7.55..24.36 rows=5 width=26)
                Recheck Cond: (the_time >= '2012-01-02'::date)
                Filter: (oid = ANY ('{1,2}'::integer[]))
                ->  Bitmap Index Scan on idx  (cost=0.00..7.55 rows=453 width=0)
                      Index Cond: (the_time >= '2012-01-02'::date)

And here with 4 indexes,

http://sqlfiddle.com/#!15/e34c7/1

Sort  (cost=19.08..19.09 rows=1 width=26)
    Sort Key: (date_trunc('day'::text, the_time))
    ->  HashAggregate  (cost=19.06..19.07 rows=1 width=26)
          ->  Bitmap Heap Scan on myt  (cost=8.41..19.02 rows=5 width=26)
                Recheck Cond: (oid = ANY ('{1,2}'::integer[]))
                Filter: (the_time >= '2012-01-02'::date)
                ->  Bitmap Index Scan on oididx  (cost=0.00..8.41 rows=14 width=0)
                      Index Cond: (oid = ANY ('{1,2}'::integer[]))

I’d appreciate your input.


Edit:
Here are the three EXPLAIN plans (no index, one index, four indexes).

No Index:

Sort  (cost=30.48..30.48 rows=1 width=26) (actual time=0.037..0.037 rows=2 loops=1)
   Sort Key: (date_trunc('day'::text, the_time))
   Sort Method: quicksort  Memory: 25kB
   Buffers: shared hit=1
   ->  HashAggregate  (cost=30.45..30.47 rows=1 width=26) (actual time=0.031..0.032 rows=2 loops=1)
         Group Key: date_trunc('day'::text, the_time)
         Buffers: shared hit=1
         ->  Seq Scan on myt  (cost=0.00..30.41 rows=5 width=26) (actual time=0.012..0.015 rows=5 loops=1)
               Filter: ((oid = ANY ('{1,2}'::integer[])) AND (the_time >= '2012-01-02'::date))
               Rows Removed by Filter: 9
               Buffers: shared hit=1

One Index:

Sort  (cost=24.43..24.43 rows=1 width=26) (actual time=0.061..0.062 rows=2 loops=1)
   Sort Key: (date_trunc('day'::text, the_time))
   Sort Method: quicksort  Memory: 25kB
   Buffers: shared hit=8
   ->  HashAggregate  (cost=24.40..24.42 rows=1 width=26) (actual time=0.040..0.042 rows=2 loops=1)
         Group Key: date_trunc('day'::text, the_time)
         Buffers: shared hit=5
         ->  Bitmap Heap Scan on myt2  (cost=7.55..24.36 rows=5 width=26) (actual time=0.020..0.024 rows=5 loops=1)
               Recheck Cond: (the_time >= '2012-01-02'::date)
               Filter: (oid = ANY ('{1,2}'::integer[]))
               Heap Blocks: exact=1
               Buffers: shared hit=5
               ->  Bitmap Index Scan on idx  (cost=0.00..7.55 rows=453 width=0) (actual time=0.007..0.007 rows=5 loops=1)
                     Index Cond: (the_time >= '2012-01-02'::date)
                     Buffers: shared hit=4

Four Indexes:

Sort  (cost=19.09..19.09 rows=1 width=26) (actual time=0.046..0.046 rows=2 loops=1)
   Sort Key: (date_trunc('day'::text, the_time))
   Sort Method: quicksort  Memory: 25kB
   Buffers: shared hit=3 dirtied=1
   ->  HashAggregate  (cost=19.06..19.08 rows=1 width=26) (actual time=0.040..0.041 rows=2 loops=1)
         Group Key: date_trunc('day'::text, the_time)
         Buffers: shared hit=3 dirtied=1
         ->  Bitmap Heap Scan on myt3  (cost=8.41..19.02 rows=5 width=26) (actual time=0.029..0.032 rows=5 loops=1)
               Recheck Cond: (oid = ANY ('{1,2}'::integer[]))
               Filter: (the_time >= '2012-01-02'::date)
               Rows Removed by Filter: 7
               Heap Blocks: exact=1
               Buffers: shared hit=3 dirtied=1
               ->  Bitmap Index Scan on oididx  (cost=0.00..8.41 rows=14 width=0) (actual time=0.004..0.004 rows=12 loops=1)
                     Index Cond: (oid = ANY ('{1,2}'::integer[]))
                     Buffers: shared hit=2
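
One more candidate that may be worth adding to the comparison (just a sketch; whether it wins can only be judged on realistic data volumes, not on the fiddle’s handful of rows): a single multicolumn index with oid first, because it is filtered with =/IN, and the_time second for the range condition, so both predicates can be applied within one index scan.

CREATE INDEX myt_oid_time_idx ON myt (oid, the_time);

With only a few rows the planner will often still prefer a sequential scan, so the EXPLAIN output only becomes meaningful at realistic table sizes.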

Selecting latest row for user grouped by day help


Hoping someone can help me here as I can’t quite wrap my head around the best way to do this.

I’m trying to SUM() some numbers from a JSON block, which I have working just fine; however, there could be multiple records per day per user, and I only want to SUM the numbers of the latest record per day per user.

So, essentially using the record that matches MAX(created_at) per user, per day.

Some sample data

id  | user_id  | scan_id  | data           | created_at
1   | 1        | 100      | {"score": 40}  | 2015-11-06 22:15:27 
2   | 1        | 101      | {"score": 50}  | 2015-11-06 22:18:27
3   | 3        | 102      | {"score": 20}  | 2015-11-06 22:15:27 
4   | 3        | 103      | {"score": 70}  | 2015-11-06 22:12:27 
5   | 5        | 104      | {"score": 40}  | 2015-11-06 22:15:27 
6   | 6        | 105      | {"score": 10}  | 2015-12-06 22:15:27 

In the above data, I want to SUM the values from data->’score’, but you can see the first 4 rows are from two users. I only want to use the LATEST record in the SUM, so that would be record ids 2 and 3, but not 1 and 4 (as they are older than the other records).

Record 6 would fall under its own day, as it’s on a different date.

So, this query works but does not restrict to the latest record; I would like to know how to alter it to only use the latest record per user per day.

SELECT
 SUM((DATA ->> 'score')::integer) AS score, 
 count(*) as count,
 created_at::date
FROM
 scores
GROUP BY created_at::date
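
For reference, one common shape for “latest row per user per day” in PostgreSQL is DISTINCT ON in a subquery, with the existing aggregation on top. A sketch against the sample scores table:

SELECT
  SUM((data ->> 'score')::integer) AS score,
  count(*) AS count,
  created_at::date
FROM (
  -- one row per (user_id, day): the one with the greatest created_at
  SELECT DISTINCT ON (user_id, created_at::date) *
  FROM scores
  ORDER BY user_id, created_at::date, created_at DESC
) latest
GROUP BY created_at::date;

For the sample data this sums rows 2, 3 and 5 for 2015-11-06 and row 6 for 2015-12-06.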

Relationship based access control (ReBAC)


In my application, access control is based on relationships between entities.
Consider this simplified schema:

CREATE TABLE "user" (name TEXT PRIMARY KEY /* ... */);

CREATE TABLE "group" (name TEXT PRIMARY KEY /* ... */);

CREATE TABLE group_members(
  group_name  TEXT NOT NULL,
  member_name TEXT NOT NULL,
  PRIMARY KEY (group_name, member_name)
);
ALTER TABLE group_members ADD FOREIGN KEY (group_name) REFERENCES "group"(name);
ALTER TABLE group_members ADD FOREIGN KEY (member_name) REFERENCES "user"(name);

A user can only add someone else to a group if he is himself a member of the group. Another example: A user should be able to control who can read the posts he made: everyone, friends, friends of friends, …
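
Just to make that first rule concrete, here is a minimal sketch of enforcing “only members may add members” with a trigger, assuming (hypothetically) that the application identifies the acting user through a session setting such as app.current_user (e.g. SET app.current_user = 'alice' right after connecting):

CREATE OR REPLACE FUNCTION check_adder_is_member() RETURNS trigger AS $$
BEGIN
  IF NOT EXISTS (SELECT 1
                 FROM group_members
                 WHERE group_name  = NEW.group_name
                   AND member_name = current_setting('app.current_user')) THEN
    RAISE EXCEPTION 'only members of group % may add new members', NEW.group_name;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER group_members_rebac
  BEFORE INSERT ON group_members
  FOR EACH ROW EXECUTE PROCEDURE check_adder_is_member();

A trigger like this rejects a bad INSERT no matter which application issued it, whereas checks done only at the application level can be bypassed by anyone with direct SQL access; the bootstrap case (adding the very first member of an empty group) would need an explicit exception.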

My question is: How do I control access to those resources in a ReBAC model?

Specifically:

  • Is it OK to query the database every time before I execute an operation and enforce access control at the application level?
  • Should I use triggers?
  • Is there anything else I’m not aware of?

Thanks

How can update.php be disabled from checking for Postgres databases?


Whenever I try to run the database update script update.php after upgrading modules, a WSOD appears and the PHP error log shows this message:

PHP Fatal error: Class ‘DatabaseTasks_postgresql’ not found in <site root>/includes/install.inc on line 1338.

The only reason I can think of is that some time ago I set up a Feed to import some records from a Postgres database into Drupal, and it probably configured Drupal to check for Postgres. I am not sure that is the real reason; it could be something else.

Is there some module I need to reinstall for the failure to go away?

These are the offending lines in install.inc.

/**
 * Ensures the environment for a Drupal database on a predefined connection.
 *
 * This will run tasks that check that Drupal can perform all of the functions
 * on a database, that Drupal needs. Tasks include simple checks like CREATE
 * TABLE to database specific functions like stored procedures and client
 * encoding.
 */
function db_run_tasks($driver) {
  db_installer_object($driver)->runTasks();
  return TRUE;
}

/**
 * Returns a database installer object.
 *
 * @param $driver
 *   The name of the driver.
 */
function db_installer_object($driver) {
  Database::loadDriverFile($driver, array('install.inc'));
  $task_class = 'DatabaseTasks_' . $driver;
  return new $task_class();
}

update.php triggers a call to this piece of code and causes the error. Any ideas how the code may be bypassed or Drupal reconfigured to fix it?

Foreign key for array column


I have a table:

    CREATE TABLE methods
    (
        method_id serial PRIMARY KEY,
        method_name varchar(100)
    );

I want now to create a table with the following columns:

CREATE TABLE experiments 
(
    method integer[] REFERENCES methods(method_id),
    trials integer
);

I get an error:

Key columns “method” and “method_id” are of incompatible types: integer[] and integer.

I understand that the columns have to be of the same type, and I also saw that some have already tried to tackle the foreign-key-on-array issue: http://blog.2ndquadrant.com/postgresql-9-3-development-array-element-foreign-keys/
Some posts propose using a junction/join table (“Foreign key constraint on array member?”), as sketched below. I am an absolute beginner and I could not figure it out yet.
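
Since a plain array column cannot reference another table element by element in stock PostgreSQL, the junction/join-table approach mentioned above usually looks like this (a sketch; the experiment_id key is an assumption, as the original experiments definition has no primary key):

CREATE TABLE experiments
(
    experiment_id serial PRIMARY KEY,
    trials        integer
);

-- one row per (experiment, method) pair replaces the integer[] column
CREATE TABLE experiment_methods
(
    experiment_id integer NOT NULL REFERENCES experiments(experiment_id),
    method_id     integer NOT NULL REFERENCES methods(method_id),
    PRIMARY KEY (experiment_id, method_id)
);

The methods used by one experiment are then the rows of experiment_methods for that experiment_id, and each of them is constrained by an ordinary foreign key.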


PL/pgSQL issues when function used twice (caching problem?)


I am facing an absolutely weird problem that feels much more like a Postgres bug than an algorithm problem.

I have this function:

CREATE FUNCTION sp_connect(mail character varying, passwd character varying, role character varying)
  RETURNS json LANGUAGE plpgsql STABLE AS
$$
DECLARE
    user_info record;
BEGIN
  IF role = 'Role1' THEN
    SELECT u.id, r.name INTO user_info
    FROM users u
    INNER JOIN users_roles ur ON ur.user_id = u.id
    INNER JOIN roles r ON ur.role_id = r.id
    WHERE u.email = mail
      AND u.password = encode(digest(CONCAT(passwd, u.password_salt), 'sha512'), 'hex')
      AND r.name = 'Role1';
  ELSIF role = 'Role2' THEN
    SELECT h.id, 'Role1' AS name INTO user_info
    FROM history h
    WHERE h.email = mail
      AND h.password = encode(digest(CONCAT(passwd, h.password_salt), 'sha512'), 'hex');
  ELSE
    RAISE 'USER_NOT_FOUND';
  END IF;

  IF NOT FOUND THEN
      RAISE 'USER_NOT_FOUND';
  ELSE
      RETURN row_to_json(row) FROM (SELECT user_info.id AS id, user_info.name AS role) row;
  END IF;
END;
$$;

The problem I’m facing is that when I use this function to log in with a Role1 user and then use it with a Role2 user, I get this error message:

type of parameter 7 (character varying) does not match that when preparing the plan (unknown)

Which is… well, I just don’t understand where it comes from. If you wipe the database and change the login order (i.e. Role2 then Role1), this time Role1 gets the error.

Strange issue, strange solutions… If I just run ALTER FUNCTION sp_connect without modifying anything inside the function, then magically the two roles can log in without any problem. I also tried this solution:

  IF NOT FOUND THEN
      RAISE 'USER_NOT_FOUND';
  ELSE
      IF role = 'Seeker'
      THEN
          RETURN row_to_json(row) FROM (SELECT user_info.id AS id, user_info.name AS role) row;
      ELSE
          RETURN row_to_json(row) FROM (SELECT user_info.id AS id, user_info.name AS role) row;
      END IF;
  END IF;

And by adding an IF/ELSE that is absolutely useless and uses the same RETURN clause in both branches, no error is triggered.

I know DBA StackExchange is not for developers, but this kind of problem seems to be more of a caching problem. Can somebody tell me if I am doing something wrong with PostgreSQL functions, or where I may get help with this weird problem?
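
Not a definitive diagnosis, but the error text is consistent with the user_info record having differently typed columns depending on which branch filled it: r.name is character varying, while the literal 'Role1' in the second branch has type unknown, and the plan cached for the final RETURN query was prepared with whichever type it saw first. One hedged workaround is to make both branches produce the same type explicitly:

  ELSIF role = 'Role2' THEN
    -- cast the constant so user_info.name always has the same type as r.name
    SELECT h.id, 'Role1'::character varying AS name INTO user_info
    FROM history h
    WHERE h.email = mail
      AND h.password = encode(digest(CONCAT(passwd, h.password_salt), 'sha512'), 'hex');

This would also explain the ALTER FUNCTION “fix”: any change to the function’s pg_proc row makes PL/pgSQL recompile it and discard the cached plans.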

Aggregate discarding values in one column that haven't a match in another column


Say I have a table representing colored and labeled items inside numbered boxes.
Each box cannot contain more than one item with a particular label, but items with the same label (and the same or a different color) may exist in other boxes.

Oversimplifying, and using PostgreSQL, we can take the following table:

CREATE TABLE items (
    label character varying,
    color character varying,
    box_number integer
);
INSERT INTO items VALUES
  ('a','red',1),
  ('b','blue',1),
  ('c','blue',1),
  ('a','red',2),
  ('c','green',2),
  ('d','blue',2),
  ('b','red',3),
  ('d','green',3);

I want to know the label and the color of all the items inside box number 3, and also all the box numbers where an item with the same label can be found. In other words, I’m trying this:

SELECT label, boxes
FROM (
  SELECT label, array_agg(DISTINCT box_number) AS boxes
  FROM items
  GROUP BY label
) AS sub1
WHERE 3 = ANY(boxes);

But I also need to return the color column, showing only the color of the item inside box number 3.

For the example data, the output should be this:

label | color | boxes
------+-------+------
b     | red   | 1,3
d     | green | 2,3

Here’s an SQL Fiddle for the example.
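
For what it’s worth, one way to get the color as well is to aggregate the box numbers per label first and then join back to the row that sits in box 3 (a sketch against the example data):

SELECT i.label, i.color, sub.boxes
FROM items AS i
JOIN (
  SELECT label, array_agg(DISTINCT box_number ORDER BY box_number) AS boxes
  FROM items
  GROUP BY label
) AS sub USING (label)
WHERE i.box_number = 3
ORDER BY i.label;

For the example data this returns exactly the two rows shown above (b/red/{1,3} and d/green/{2,3}).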

DO INSTEAD in PostgreSQL rules


I have a simple table like this:

CREATE TABLE gateway_text (
  id         BIGSERIAL                                  NOT NULL PRIMARY KEY,
  text       TEXT                                       NOT NULL,
  hash       char(40)                                   UNIQUE,
  created_at TIMESTAMP WITHOUT TIME ZONE DEFAULT now()  NOT NULL
);

CREATE RULE "gateway_text_on_duplicate_ignore" AS ON INSERT TO "gateway_text"
  WHERE EXISTS(SELECT 1 FROM gateway_text WHERE hash=NEW.hash)
  DO INSTEAD SELECT * FROM gateway_text WHERE hash=NEW.hash;

SQL Fiddle

As you can see, the rule should only apply when the WHERE condition passes (the hash is already in the database), so the first insert of a given hash is not the one DO INSTEAD is supposed to run for.
But the problem is that every time I run an INSERT, I get the SELECT result back, even for the first insert of a hash.
In my case this is OK, very OK actually, but I don’t know whether this is correct behaviour or a bug. Can I simply accept this as normal and rely on it?
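
As an aside (not an answer to whether the rule behaviour is guaranteed): on PostgreSQL 9.5 and later this “insert unless the hash already exists” pattern is usually written without rules, which sidesteps the question entirely. A sketch with made-up values:

INSERT INTO gateway_text (text, hash)
VALUES ('some text', 'da39a3ee5e6b4b0d3255bfef95601890afd80709')   -- hypothetical values
ON CONFLICT (hash) DO NOTHING
RETURNING id, hash;

With DO NOTHING, the RETURNING clause produces no row when the hash already exists, so the caller can tell the two cases apart.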

QGIS as Spatial Database RAD or Hierarchical Attribute Viewer/Editor?


I have designed a geological observational spatial database in PostgreSQL/PostGIS to manage geochemical samples, QA/QC and geological map objects. Now comes the not-so-fun/easy part of designing a front end to manage all the 1:1 and 1:many related tables. I have extensive experience with MS Access database forms design, but besides that my GUI programming is limited.

I am looking for a multi-platform database RAD tool (Windows/Linux/iOS/etc.), so I am leaning towards something in Python. I was looking at Dabo (http://dabodev.com/) at first, but its development seems quite slow to stagnant.

So what are your opinions on using QGIS as the front-end manager for the spatial database attributes, and of course for spatial display & editing? How difficult would it be to design a relatively complicated database application in QGIS?

On the other hand, one really nice feature of ArcGIS is its hierarchical attribute information & editing tool for dealing with multiple 1:1 & 1:many related tables. Does QGIS have anything similar? That would be nice for beginning the database model testing.

Fastest way to process OSM


I’ve been working with imposm and Python; it’s fairly easy to parallelise. I’ve hit a performance problem when trying to associate the coordinates with the ways.

Creating a dictionary from coordinate_id to lat,long and then using it turns out to be really slow.

Do you know of any alternatives that I can use?

Has anyone used a C++ library, or even Scala, or PostgreSQL?


By the way, I’ve been able to iterate through 39 GB of PBF (almost the whole globe) in 17 minutes on a 32-core machine. That’s the performance I’m looking for.
