Channel: Question and Answer » postgresql

Syntax error at or near “1” — PostGIS in Action — Chapter 11 SQL data dump

I’m following along with PostGIS in Action, 2nd ed., chapter 11. While attempting to load the SQL dump from the book’s data files, I get the following error:

ERROR:  syntax error at or near "1"
LINE 189: 1 0101000020E610000060C77F81A01E6340132C0E677E293BC0 0 0 0 4... 

I’ve already checked the errata, Google, and the PostgreSQL docs (the 9.2 vs. 9.5 COPY differences), and double-checked that the data is tab-separated, as COPY ... FROM stdin expects.

--
-- PostgreSQL database dump
--

-- Dumped from database version 9.2.4
-- Dumped by pg_dump version 9.3beta2

-- TOC entry 3966 (class 0 OID 10834271)
-- Dependencies: 342
-- Data for Name: aussie_track_points; Type: TABLE DATA; Schema: ch11; Owner: postgres
--

COPY aussie_track_points (ogc_fid, geom, track_fid, track_seg_id, track_seg_point_id, ele, "time", course, speed, magvar, geoidheight, name, cmt, "desc", src, url, urlname, sym, type, fix, sat, hdop, vdop, pdop, ageofdgpsdata, dgpsid) FROM stdin;
-- 1    0101000020E610000060C77F81A01E6340132C0E677E293BC0  0   0   040.899999999999999 2009-07-18 04:30:00-04  N  N  N  N  N  N  N  N  N  N  N  N  3d  N  0.96999999999999997 2.1699999999999999  2.3799999999999999  N  N
-- 2    0101000020E61000000723F609A01E6340F51263997E293BC0  0   0   1   40.399999999999999  2009-07-18 04:30:14-04  N  N  N  N  N  N  N  N  N  N  N  N  3d  N  0.91000000000000003 0.88    1.27    N  N
-- 3    0101000020E6100000A9C1340C9F1E6340E7C8CA2F83293BC0  0   0   2   41.799999999999997  2009-07-18 04:30:20-04  N  N  N  N  N  N  N  N  N  N  N  N  3d  N  0.96999999999999997 1.8999999999999999  2.1299999999999999  N  N
-- 4    0101000020E610000038DBDC989E1E6340B610E4A084293BC0  0   0   3   41.299999999999997  2009-07-18 04:30:26-04  N  N  N  N  N  N  N  N  N  N  N  N  3d  N  0.91000000000000003 0.88    1.27    N  N
-- 5    0101000020E6100000020D36759E1E6340F52F49658A293BC0  0   0   4   41.299999999999997  2009-07-18 04:30:43-04  N  N  N  N  N  N  N  N  N  N  N  N  3d  N  0.91000000000000003 0.88    1.27    N  N
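For reference, a minimal COPY block of the shape psql accepts (a sketch using a hypothetical three-column subset of the table; fields are separated by literal tab characters, NULL is \N, and the data must end with a \. line):

COPY aussie_track_points (ogc_fid, geom, track_fid) FROM stdin;
1	0101000020E610000060C77F81A01E6340132C0E677E293BC0	0
\.

One possibility worth checking: if the COPY statement itself fails up front (for example because the ch11 schema is not on the search_path), psql falls back to interpreting the following data lines as SQL, which produces exactly a syntax error at or near "1".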

Postgres query to return JSON object keys as array

Is it possible to return a JSON object’s keys as an array of values in PostgreSQL?

In JavaScript, this would simply be Object.keys(obj), which returns an array of strings.

For example, if I have a table like this:

tbl_items
---------
id bigserial NOT NULL
obj json NOT NULL

And if there’s a row like this:

id      obj
-----   -------------------------
123     '{"foo":1,"bar":2}'

How can I have a query to return:

id      keys
-----   ------------------
123     '{"foo","bar"}'
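A sketch of one approach that may work, using the set-returning function json_object_keys with array_agg (an implicit LATERAL join, so PostgreSQL 9.3 or later is assumed; table and column names as above):

SELECT id, array_agg(key) AS keys
FROM tbl_items, json_object_keys(obj) AS key
GROUP BY id;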

Avoid double query on the same table

I have two queries that do basically the same thing but with different grouping: the first (query 1) is used to populate a chart, and the second to populate a table.

Query 1:

SELECT key_id
       ,sum(salary)
       ,sum(bonus)
       ,created_at
FROM table
WHERE emp_id = 1
GROUP BY key_id, created_at

Query 2:

SELECT key_id
       ,sum(salary)
       ,sum(bonus)
       ,count(*) OVER() AS full_count
FROM table
WHERE emp_id = 1
GROUP BY key_id

I have created a function that returns those two queries in a JSON format, chart: [...], table: [...]. The problem is that I have to query the same table twice because of the grouping. Is there any way of dealing with this situation?
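One possible way out, sketched: compute the finer grouping once in a CTE and derive the coarser grouping from it in the same statement, so the base table is scanned only once (sums of sums remain correct here; payroll stands in for the question’s table, since table is a keyword, and json_build_object assumes PostgreSQL 9.4+):

WITH chart AS (
    SELECT key_id
           ,created_at
           ,sum(salary) AS salary
           ,sum(bonus) AS bonus
    FROM payroll
    WHERE emp_id = 1
    GROUP BY key_id, created_at
), tbl AS (
    SELECT key_id
           ,sum(salary) AS salary
           ,sum(bonus) AS bonus
           ,count(*) OVER() AS full_count
    FROM chart
    GROUP BY key_id
)
SELECT json_build_object(
    'chart', (SELECT json_agg(c) FROM chart c),
    'table', (SELECT json_agg(t) FROM tbl t)
);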

How to use SSL/TLS between server and client with Postgres?

I don’t know much when it comes to security. I’m deploying a virtual machine with my database in a cloud service. I would like to encrypt all the communication between this server and any clients. How can I do that?

More information:
I have read a lot of material from the Postgres documentation [1,2], but I can’t understand much of it. I have followed this tutorial and now I have these files:

valter@eniac:test$ ll
total 28
drwxr-xr-x 2 valter valter 4096 Jan 25 12:54 ./
drwxr-xr-x 3 root   root   4096 Jan 25 12:50 ../
-rw-rw-r-- 1 valter valter 1834 Jan 25 12:53 privkey.pem
-rw-rw-r-- 1 valter valter 4783 Jan 25 12:54 server.crt
-rw------- 1 valter valter 1675 Jan 25 12:53 server.key
-rw-rw-r-- 1 valter valter 3672 Jan 25 12:53 server.req

But I don’t know what to do with them. Where are the root.crt and root.crl files? How can I generate them? Where do I put these generated files? And what should I do on the client side?
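In case the context helps, my current understanding assembled from the docs (file locations, hostname, and names below are assumptions, not verified): server.crt and server.key go into the data directory and are enabled in postgresql.conf, while root.crt is only needed when the client wants to verify the server’s certificate or for client-certificate authentication:

# postgresql.conf on the server
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file  = 'server.key'

# on the client, request encryption explicitly
psql "host=db.example.com dbname=mydb user=valter sslmode=require"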

DENY insert,update,delete to user (PostgreSQL)

I wonder if Postgres supports the DENY command. I searched the documentation but couldn’t find anything about it.

I have GRANT ALL ON table1 TO user1 WITH GRANT OPTION,
but I don’t want user1 to be able to grant insert/update/delete to user2.
(He can do whatever he wants with everyone else!!!)

I want to use this command DENY INSERT,UPDATE,DELETE ON table1 TO user2;

Is there any way to accomplish this?
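For reference, a sketch of the closest substitute: PostgreSQL’s privilege model is additive, so there is no DENY; the usual approach is to revoke and simply never grant the unwanted rights:

REVOKE INSERT, UPDATE, DELETE ON table1 FROM user2;

Note that this cannot stop user1 from granting those rights back to user2 later, since user1 holds the GRANT OPTION.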

select/union output from two columns into three

I’m sure there is an easy answer to this, but my PostgreSQL isn’t up to it at the moment.

I have a table hostname_facts with three columns:

hostname – a list of fully-qualified server hostnames.
name – contains things like role (a TLA defining a server’s role in the environment), env (part of an FQDN), memorysize, eth1_ip, fqdn, etc.
value – the actual value of role, env, memorysize, eth1_ip, fqdn, etc.

There are about 800 hostnames in this table.

From this table I’ve been asked to create a flat file that can be used as an inventory list for Ansible, which needs to be in the format:

[role.env]
hostname, value_of_role, value_of_env
hostname, value_of_role, value_of_env
hostname, value_of_role, value_of_env
...

So, I need to pull out hostname sorted by the value of name = 'role' as well as name = 'env'. This is stumping me!

I used:

select hostname,value from hostname_facts
where name = 'role'
union
select hostname,value from hostname_facts
where name = 'env'
order by value ASC

But that returns the ‘name’ values all in one long column, ‘role’ followed by ‘env’, with all hostnames repeated twice (obviously).

Can anyone suggest how to get just one set of hostnames with ‘role’ and ‘env’ as two columns alongside, rather than one underneath the other?
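If it helps, a sketch of one way to pivot the two names into columns via conditional aggregation (FILTER assumes PostgreSQL 9.4+; max(CASE WHEN name = 'role' THEN value END) does the same on older versions):

SELECT hostname
       ,max(value) FILTER (WHERE name = 'role') AS role
       ,max(value) FILTER (WHERE name = 'env')  AS env
FROM hostname_facts
WHERE name IN ('role', 'env')
GROUP BY hostname
ORDER BY role, env;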

PostgreSQL: flatten JSON array data

From my current query, I obtain this jsonb data:

values: "a1", ["b1", "b2"]

And I want to flatten it on one level only, like this:

values: "a1", "b1", "b2"

Here is a simplified way to get the data in a query (only 2 levels are possible, never more):

SELECT * 
FROM jsonb_array_elements('{"test": ["a1", ["b1", "b2"]]}'::jsonb->'test');

I tried to use jsonb_array_elements, but my problem is that I don’t know whether each element is a JSON array or not! Not being an SQL expert, I did not find a way to write something like:

SELECT
    IF (is_json_array(list)) 
        jsonb_array_elements(list)
    ELSE
        list
    ENDIF
FROM jsonb_array_elements('{"test": ["a1", ["b1", "b2"]]}'::jsonb->'test');

For a “zoom out” view of my current data, here is a table-free working test:

with recursive search_key_recursive (jsonlevel) as(
    values ('{"fr": {"WantedKey": "a1", "Sub": [{"WantedKey": ["b1", "b2"]}], "AnotherSub": [{"WantedKey": "c1"}]}}'::jsonb)
    union all
    select 
        case jsonb_typeof(jsonlevel)           
            when 'object' then (jsonb_each(jsonlevel)).value        
            when 'array' then jsonb_array_elements(jsonlevel)   
        end as jsonlevel
    from search_key_recursive where jsonb_typeof(jsonlevel) in ('object', 'array')
)
select search_key_recursive.jsonlevel->'WantedKey'
from search_key_recursive
where jsonlevel ? 'WantedKey';

Afterwards, I will use the result in an INSERT statement:

INSERT INTO table1 
SELECT 'someText', value 
FROM jsonb_array_elements('{"test": ["a1", "b1", "c1"]}'::jsonb->'test');
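A sketch of one way to branch on the element type without ever calling jsonb_array_elements on a scalar (which raises an error): the LEFT JOIN LATERAL feeds the function an empty array for non-array elements, so scalars pass through unchanged (jsonb plus LATERAL assumes PostgreSQL 9.4+):

SELECT CASE WHEN jsonb_typeof(elem) = 'array' THEN inner_elem ELSE elem END AS flat
FROM jsonb_array_elements('{"test": ["a1", ["b1", "b2"]]}'::jsonb->'test') AS t(elem)
LEFT JOIN LATERAL jsonb_array_elements(
    CASE WHEN jsonb_typeof(elem) = 'array' THEN elem ELSE '[]'::jsonb END
) AS u(inner_elem) ON true;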

GeoServer Sql view setup error?

The problem goes like this: I am trying to route OSM data with the help of this tutorial (http://workshop.pgrouting.org), but in the step where I create the SQL view and add the parameters, it fails to calculate the layer boundaries.

This is the error I get:

java.lang.RuntimeException: java.io.IOException: Error occured calculating bounds for pgrouting
at org.geotools.jdbc.JDBCFeatureSource.getBoundsInternal(JDBCFeatureSource.java:540) ...
Caused by: java.io.IOException: Error occured calculating bounds for pgrouting
at org.geotools.jdbc.JDBCDataStore.getBounds(JDBCDataStore.java:1309)
at org.geotools.jdbc.JDBCFeatureSource.getBoundsInternal(JDBCFeatureSource.java:533)
… 116 more
Caused by: org.postgresql.util.PSQLException: ERROR: Support for id,source,target columns only of type: integer. Support for Cost: double precision
Where: PL/pgSQL function pgr_dijkstra(text,bigint,bigint,boolean,boolean) line 6 at assignment
PL/pgSQL function pgr_fromatob(character varying,double precision,double precision,double precision,double precision) line 35 at FOR over EXECUTE statement
at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2062)

Does anyone have a clue?


osm2pgsql password problem

I’m totally new to PostgreSQL.

I followed this guide to set up PostgreSQL and PostGIS:
http://learnosm.org/it/osm-data/setting-up-postgresql/

I created an OSM database and the PostGIS extension. I tried to connect to it with QGIS and succeeded, but the DB is empty.

So I downloaded a small OSM file from the Geofabrik site: malta-latest.osm.pbf

I then downloaded the Cygwin package for osm2pgsql, following this guide:
http://learnosm.org/it/osm-data/osm2pgsql/

I have osm2pgsql in the system environment path.

I typed the command reported in the guide, but I got an error about the password.
I looked up this error and tried adding the -W parameter; when it prompts me to enter the password I can’t type... the keyboard doesn’t respond.

I googled for a while and I understand there is something to set up in the environment variables, but I don’t know how to do this.
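In case it matters, the workaround I am reading about (a sketch for the Windows command prompt; libpq-based tools such as osm2pgsql read the password from the PGPASSWORD environment variable, which avoids the prompt entirely):

set PGPASSWORD=yourpassword
osm2pgsql -U postgres -d osm malta-latest.osm.pbf

Apparently the prompt itself may also be fine: like most password prompts it hides typed characters, so a keyboard that seems dead may simply be echoing nothing.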

ID duplicates where none should exist in postgresql

I have to do an assignment with the StackExchange database.

I get the error shown in the picture.

[screenshot: My Problem]

PostgreSQL 9.4: Index Only Used For Lower INET Ranges

For some reason, queries on the high range of the IP address space are extremely slow and queries on the low range are extremely fast.

My index is an adapted version of the answer in Optimizing queries on a range of timestamps (two columns):
CREATE INDEX idx_ip_range_inversed ON ip_range_domains(low, high); but it only works for low ranges for some reason.

Table

CREATE TABLE IF NOT EXISTS ip_range_domains (
  ip_range_domain_id     BIGSERIAL PRIMARY KEY,
  domain_id              BIGINT REFERENCES domains                        NOT NULL,
  source_type_id         INTEGER REFERENCES source_types                  NOT NULL,
  low                    INET                                             NOT NULL,
  high                   INET                                             NOT NULL,
  auto_high_conf         BOOLEAN                                          NOT NULL   DEFAULT FALSE,
  invalidation_reason_id INTEGER REFERENCES invalidation_reasons                     DEFAULT NULL,
  invalidated_at         TIMESTAMP WITHOUT TIME ZONE                                 DEFAULT NULL,
  created_at             TIMESTAMP WITHOUT TIME ZONE                      NOT NULL   DEFAULT current_timestamp
);
CREATE INDEX domain_id_btree ON ip_range_domains (domain_id);
CREATE INDEX idx_ip_range_inversed ON ip_range_domains(low, high);

This is fast:

=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM ip_range_domains WHERE '8.8.8.8'::INET BETWEEN low AND high;
                                                                QUERY PLAN                                                                
------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on ip_range_domains  (cost=25411.02..278369.96 rows=948529 width=55) (actual time=61.514..61.567 rows=55 loops=1)
   Recheck Cond: (('8.8.8.8'::inet >= low) AND ('8.8.8.8'::inet <= high))
   Heap Blocks: exact=23
   Buffers: shared hit=3613
   ->  Bitmap Index Scan on idx_ip_range_inversed  (cost=0.00..25173.89 rows=948529 width=0) (actual time=61.493..61.493 rows=55 loops=1)
         Index Cond: (('8.8.8.8'::inet >= low) AND ('8.8.8.8'::inet <= high))
         Buffers: shared hit=3590
 Planning time: 0.537 ms
 Execution time: 61.631 ms

This is slow:

=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM ip_range_domains WHERE '200.8.8.8'::INET BETWEEN low AND high;
                                                          QUERY PLAN                                                          
------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on ip_range_domains  (cost=0.00..586084.02 rows=1016801 width=55) (actual time=14090.840..21951.343 rows=1 loops=1)
   Filter: (('200.8.8.8'::inet >= low) AND ('200.8.8.8'::inet <= high))
   Rows Removed by Filter: 23156868
   Buffers: shared hit=21232 read=217499
 Planning time: 0.111 ms
 Execution time: 21951.376 ms

After a bit of manual investigation I found that 74.181.234.146 uses the index but 74.181.234.147 does not. Interestingly, as I get higher, the queries that use the index start taking 600-700ms. Perhaps that’s just an issue of finding the data on disk. That’s an acceptable response time but faster would be better.

=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM ip_range_domains WHERE '74.181.234.146'::INET BETWEEN low AND high;
                                                                 QUERY PLAN                                                                  
---------------------------------------------------------------------------------------------------------------------------------------------
 Bitmap Heap Scan on ip_range_domains  (cost=256258.42..580278.67 rows=5685950 width=55) (actual time=593.066..593.068 rows=3 loops=1)
   Recheck Cond: (('74.181.234.146'::inet >= low) AND ('74.181.234.146'::inet <= high))
   Heap Blocks: exact=3
   Buffers: shared hit=38630
   ->  Bitmap Index Scan on idx_ip_range_inversed  (cost=0.00..254836.93 rows=5685950 width=0) (actual time=593.057..593.057 rows=3 loops=1)
         Index Cond: (('74.181.234.146'::inet >= low) AND ('74.181.234.146'::inet <= high))
         Buffers: shared hit=38627
 Planning time: 0.108 ms
 Execution time: 593.094 ms

The lowest address for which the query doesn’t use the index:

=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM ip_range_domains WHERE '74.181.234.147'::INET BETWEEN low AND high;
                                                         QUERY PLAN                                                          
-----------------------------------------------------------------------------------------------------------------------------
 Seq Scan on ip_range_domains  (cost=0.00..586084.02 rows=5685950 width=55) (actual time=5723.461..21914.826 rows=3 loops=1)
   Filter: (('74.181.234.147'::inet >= low) AND ('74.181.234.147'::inet <= high))
   Rows Removed by Filter: 23156866
   Buffers: shared hit=21864 read=216867
 Planning time: 0.108 ms
 Execution time: 21914.850 ms

Version:

=> SELECT version();

                                                    version                                                    
---------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit
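A direction that may help, sketched (it assumes the ip4r extension is available and the data is IPv4-only, neither of which is given in the question): a b-tree on (low, high) can only narrow the scan on the leading column, so for an address high in the space almost every row satisfies low <= addr and the index degenerates; a single range-valued column with a GiST index supports containment directly:

CREATE EXTENSION ip4r;
ALTER TABLE ip_range_domains ADD COLUMN range ip4r;
UPDATE ip_range_domains SET range = ip4r(low::ip4, high::ip4);
CREATE INDEX idx_ip_range_gist ON ip_range_domains USING gist (range);

-- containment now hits the GiST index wherever the address falls
SELECT * FROM ip_range_domains WHERE range >>= '200.8.8.8'::ip4;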

Why are correlated subqueries sometimes faster than joins in Postgres?

This relates to the dataset described in Postgres is performing sequential scan instead of index scan.

I’ve started work on adapting the import logic to work with a more normalised schema – no surprises here: it’s faster and more compact – but I’ve hit a roadblock updating the existing data: adding and updating the relevant foreign keys is taking an age.

UPDATE pages
SET id_site = id FROM sites
WHERE sites.url = pages."urlShort"
AND "labelDate" = '2015-01-15'

NB pages.“urlShort” and sites.url are text fields; both are indexed but currently have no explicit relationship.

There are around 500,000 rows for each date value and updates like this are taking around 2h30. :-(

I looked at what the underlying query might look like:

select * from pages
join sites on
sites.url = pages."urlShort"
where "labelDate" = '2015-01-01'

This takes around 6 minutes to run and has a query plan like this:

"Hash Join  (cost=80226.81..934763.02 rows=493018 width=365)"
"  Hash Cond: ((pages."urlShort")::text = sites.url)"
"  ->  Bitmap Heap Scan on pages  (cost=13549.32..803595.26 rows=493018 width=315)"
"        Recheck Cond: ("labelDate" = '2015-01-01'::date)"
"        ->  Bitmap Index Scan on "pages_labelDate_idx"  (cost=0.00..13426.07 rows=493018 width=0)"
"              Index Cond: ("labelDate" = '2015-01-01'::date)"
"  ->  Hash  (cost=30907.66..30907.66 rows=1606466 width=50)"
"        ->  Seq Scan on sites  (cost=0.00..30907.66 rows=1606466 width=50)"

Based on some help in the past on related subjects I decided to compare this with a similar query that used a correlated subquery instead of a join.

SELECT "urlShort" AS url
FROM pages
WHERE 
"labelDate" = '2015-01-01'
and id_site is NULL
AND EXISTS
(SELECT * FROM sites
     WHERE sites.url = pages."urlShort")

This query only takes about 15s to run and has the following query plan:

"Hash Join  (cost=64524.36..860389.62 rows=423223 width=27)"
"  Hash Cond: ((pages."urlShort")::text = sites.url)"
"  ->  Bitmap Heap Scan on pages  (cost=13535.88..803581.81 rows=423223 width=27)"
"        Recheck Cond: ("labelDate" = '2015-01-01'::date)"
"        Filter: (id_site IS NULL)"
"        ->  Bitmap Index Scan on "pages_labelDate_idx"  (cost=0.00..13430.07 rows=493018 width=0)"
"              Index Cond: ("labelDate" = '2015-01-01'::date)"
"  ->  Hash  (cost=30907.66..30907.66 rows=1606466 width=27)"
"        ->  Seq Scan on sites  (cost=0.00..30907.66 rows=1606466 width=27)"

There are two things I’d like to know:

1) Can I adjust the update to run faster based on the above?
2) What parts of the query plan are telltales for running slow? Or do you always have to run EXPLAIN ANALYZE to find out?
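For question 1, a sketch of what I would try (the id_site IS NULL predicate is an assumption that only unset rows need the update, mirroring the fast EXISTS query):

UPDATE pages
SET id_site = sites.id
FROM sites
WHERE sites.url = pages."urlShort"
AND pages."labelDate" = '2015-01-15'
AND pages.id_site IS NULL;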

psql variable holding text with multiple lines

This works as expected,

\set x '''2\n3'''

but this doesn’t

\set x '''2
3'''

Is there a workaround for the latter example so I don’t have to use \n instead of a real newline?
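One possible workaround, sketched (it relies on psql’s backquote expansion keeping embedded newlines while stripping only the trailing one, and on a Unix-ish shell being available; the quoted interpolation :'x' needs a reasonably recent psql):

\set x `printf '2\n3'`
SELECT :'x';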

Debian Server failover setup

I need to set up two servers with failover capability (active-passive model) sharing a common PostgreSQL database running over NAS. Could you please suggest an approach for this?

psycopg2 – drawbacks of reusing cursors?

I am new to Python and the psycopg2 module.

context

I’m working on a script that selects data from MySQL, performs some operations (data-type conversions and other “transformations”), and finally inserts the data into PostgreSQL using the psycopg2 module.

main problem

I’ve read on the official website of psycopg2 that it is better to instantiate new cursors whenever possible:

When should I save and re-use a cursor as opposed to creating a new one as needed?

Cursors are lightweight objects and creating lots of them should not pose any kind of problem. But note that cursors used to fetch result sets will cache the data and use memory in proportion to the result set size. Our suggestion is to almost always create a new cursor and dispose old ones as soon as the data is not required anymore (call close() on them.) The only exception are tight loops where one usually use the same cursor for a whole bunch of INSERTs or UPDATEs.

but the nature of my script seems to require the use of only one cursor for a whole bunch of inserts, since I’ve added a check that the dataset is not empty before trying to insert into psql:

for query, dataset in dict.iteritems():    # 'dict' here is my own mapping of queries to datasets
    if dataset:    # this is the check that the dataset is not empty
        try:
            cur_psql.execute(query + dataset)
        except psycopg2.Error as e:
            print "Cannot execute that query", e.pgerror
            cnx_psql.rollback()
            sys.exit("Rollback! And leaving early this lucky script, find out what is wrong")
    else:
        print "The dataset for " + query + " is empty, skipping..."

This code inserts data into a different psql table at each iteration with:

cur_psql.execute(query + dataset)

question

My doubt is whether, in my scenario, I will encounter any drawbacks using the same cursor for all the inserts. I was not able to work out whether I fall into the category where

The only exception are tight loops where one usually use the same cursor for a whole bunch of INSERTs or UPDATEs.


Why am I getting old data?

Today I discovered something weird on PostgreSQL 9.5 (I have no idea whether this is because it’s a beta or not). When I fetch data, the query returns old, deleted data. If I then run VACUUM FULL, I get the proper data (which is empty).

Am I missing something here? What might be the reason that PostgreSQL returns old data?

Note: Autovacuum is ON.

PostgreSQL: access right problems

I have a “production” user and database on which I want to give read access to all the users belonging to a production group.

Unfortunately, I don’t succeed; I keep getting the access denied error.

Can you have a look at the example script below and tell me what I missed (production user/database is toto and production group is toto_group)?

Some comments:

  • The use case is: the data loading is already in production. Developers (titi, etc.) now only have to read the data, and we want them to read it from the production database in order to avoid having to duplicate everything.
  • I’ve initially tried on Linux, but I did the test script on windows. Same results.
  • I’m using postgresql 9.5
  • I have also tried to grant the access to the tables instead of the schema. Same results.

Thanks a lot.

\c postgres postgres

DROP DATABASE IF EXISTS toto;
DROP TABLESPACE IF EXISTS toto_ts;
DROP ROLE IF EXISTS toto;
DROP DATABASE IF EXISTS titi;
DROP TABLESPACE IF EXISTS titi_ts;
DROP ROLE IF EXISTS titi;
DROP ROLE IF EXISTS toto_group;

CREATE ROLE toto_group NOLOGIN;

CREATE ROLE toto WITH PASSWORD 'toto' LOGIN;
CREATE TABLESPACE toto_ts OWNER toto LOCATION 'd:/pg_ts/toto';
CREATE DATABASE toto TABLESPACE=toto_ts TEMPLATE=template0 OWNER=toto;
GRANT toto_group TO toto;

CREATE ROLE titi WITH PASSWORD 'titi' LOGIN;
CREATE TABLESPACE titi_ts OWNER titi LOCATION 'd:/pg_ts/titi';
CREATE DATABASE titi TABLESPACE=titi_ts TEMPLATE=template0 OWNER=titi;
GRANT toto_group TO titi;

COMMIT;

\c toto

DROP SCHEMA public;
CREATE SCHEMA s_test;
SET SCHEMA 's_test';
GRANT SELECT ON ALL TABLES IN SCHEMA s_test TO toto_group;
CREATE TABLE t_test (id INTEGER);

COMMIT;

\c titi

DROP SCHEMA public;
CREATE SCHEMA s_test;
SET SCHEMA 's_test';
GRANT SELECT ON ALL TABLES IN SCHEMA s_test TO toto_group;
CREATE TABLE t_test (id INTEGER);

COMMIT;
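One thing worth noting here (the line below is a sketch, untested against this exact setup): GRANT ... ON ALL TABLES IN SCHEMA only affects tables that exist at the moment it runs, and in the script above the grant is issued before CREATE TABLE t_test. Covering future tables is what default privileges are for:

ALTER DEFAULT PRIVILEGES IN SCHEMA s_test GRANT SELECT ON TABLES TO toto_group;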

CREATE DATABASE … WITH TEMPLATE seems to lose relations in PostgreSQL

I’m trying to Create a copy of a database in postgresql per the SO answer from 2009, but running into problems.

In Postgres 9.3.9, this creates a database with no relations (the gcis db exists and has tables and data):

postgres=# CREATE DATABASE gcis_rollback WITH TEMPLATE gcis OWNER postgres;
CREATE DATABASE
postgres=# \c gcis_rollback
You are now connected to database "gcis_rollback" as user "postgres".
gcis_rollback=# \d
No relations found.

I get the same using the command-line createdb:

~$ createdb -O postgres -T gcis gcis_rollback2
~$ psql gcis_rollback2
psql (9.3.9)
Type "help" for help.

gcis_rollback2=# \d
No relations found.

Why don’t I see a full copy of this DB?

Background – This is a dev server, where I can take down the connections to make a copy. What I want is just a local copy for ease of rollback purposes while developing/testing DB schema changes using the Perl framework Module::Build::Database to build a patch.

Additional info:

gcis=# \l+ gcis
                                               List of databases
 Name |  Owner   | Encoding |   Collate   |    Ctype    | Access privileges | Size  | Tablespace | Description 
------+----------+----------+-------------+-------------+-------------------+-------+------------+-------------
 gcis | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                   | 35 MB | pg_default | 


gcis=# \d
                        List of relations
    Schema     |             Name             |   Type   | Owner  
---------------+------------------------------+----------+--------
 gcis_metadata | _report_editor               | table    | ubuntu
...
(57 rows)
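One detail that may be relevant (a guess, not verified against this setup): the source database’s relations live in the gcis_metadata schema, and per-database settings such as a search_path set with ALTER DATABASE ... SET are not copied by CREATE DATABASE ... TEMPLATE. So the tables may well exist in the copy and simply be invisible to an unqualified \d:

SET search_path TO gcis_metadata;
\d   -- hypothetically, the 57 relations should now be listed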

How autocommit = 1 impacts statements between BEGIN/COMMIT?

Let’s say I have the following code:

begin;
set autocommit = 1;
update tbl1 set col1 = 1;
update tbl1 set col1 = 2;
commit;

What would be the difference if autocommit (line 2) was set to 0?
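For what it’s worth, my understanding from the docs (corrections welcome): the server-side autocommit parameter was removed back in PostgreSQL 7.4, so today autocommit is purely a client-side notion, e.g. in psql:

\set AUTOCOMMIT off
-- with AUTOCOMMIT off, psql issues BEGIN implicitly before the first statement;
-- between an explicit BEGIN and COMMIT the setting makes no practical
-- difference, since nothing is committed until COMMIT either way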

How to return a set of rows from this function?

I am new to Postgres functions and have a table with the following structure:

CREATE TABLE options.options (
  delta double precision,
  gamma double precision,
  rho double precision,
  theta double precision,
  impvol double precision,
  value double precision,
  vega double precision,
  id bigserial NOT NULL,
  date bigint,
  ticker text,
  callput text,
  chg double precision,
  maturity integer,
  symbol text,
  strike double precision,
  implied double precision,
  last double precision,
  vol double precision,
  ask double precision,
  bid double precision,
  CONSTRAINT options_pkey PRIMARY KEY (id)
);

And I am trying to build the following function:

CREATE OR REPLACE FUNCTION generate_term_structure_by_moneyness(arg_ticker text
                                  ,arg_date integer
                                  ,arg_underlying float
                                  ,arg_lower float
                                  ,arg_higher  float)
  RETURNS SETOF varchar(250) AS -- declare return type!
$BODY$
BEGIN -- required for plpgsql

   RETURN QUERY
    select maturity,avg(Implied) from options.options 
    where ticker=arg_ticker and date=arg_date and strike/arg_underlying>arg_lower 
    and strike/arg_underlying<arg_higher group by maturity order by maturity asc;

END; -- required for plpgsql
$BODY$ LANGUAGE plpgsql;

The issue I am having is the following error when trying to run this query:

Query

select * from generate_term_structure_by_moneyness('TQQQ',20151221,120.2699,.98,1.02)

Error

ERROR:  structure of query does not match function result type
SQL state: 42804
DETAIL:  Returned type integer does not match expected type character varying in column 1.
CONTEXT:  PL/pgSQL function generate_term_structure_by_moneyness(text,integer,double precision,double precision,double precision) line 5 at RETURN QUERY

I believe the error is coming from RETURNS SETOF, but I am unsure how to declare the right result type.

In pseudocode, the returned data is expected in the form of:

[int,float; int, float; ...]
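A sketch of how the declaration could match the query (avg over double precision itself returns double precision; the output columns are qualified inside the body so they don’t collide with the declared column names):

CREATE OR REPLACE FUNCTION generate_term_structure_by_moneyness(arg_ticker text
                                  ,arg_date integer
                                  ,arg_underlying float
                                  ,arg_lower float
                                  ,arg_higher float)
  RETURNS TABLE (maturity integer, avg_implied double precision) AS
$BODY$
BEGIN
   RETURN QUERY
    SELECT o.maturity, avg(o.implied)
    FROM options.options o
    WHERE o.ticker = arg_ticker AND o.date = arg_date
      AND o.strike/arg_underlying > arg_lower
      AND o.strike/arg_underlying < arg_higher
    GROUP BY o.maturity
    ORDER BY o.maturity ASC;
END;
$BODY$ LANGUAGE plpgsql;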