Tuesday, December 7, 2010

CREATE FUNCTION: return types

In the last post we covered the basic anatomy of a SQL function or stored procedure with CREATE FUNCTION:

CREATE FUNCTION function_name (argument_name argument_type, argument_name argument_type, ...) RETURNS return_type AS $$
     function_body_query;
$$ language language_name;

In this post we'll add some additional options and features related to the function's output.

VOID FUNCTIONS
Functions can return nothing. To do that we declare them as "RETURNS void". This is appropriate for non-SELECT commands such as INSERT:

CREATE FUNCTION addsalesdata(name varchar, month varchar, value decimal) returns void as $$
    insert into salesfigures values($1,$2,$3);
$$ language 'sql';


To execute such a function one still uses SELECT function_name(...); it performs the INSERT and returns no useful value.
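Assuming the salesfigures table from the other posts, a call might look like this (the argument values are made up for illustration):

```sql
-- Runs the INSERT inside the function; the result column comes back empty
select addsalesdata('joe', 'mar', 25);
```

In psql this returns a single row with an empty addsalesdata column.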

FUNCTIONS THAT RETURN A SINGLE VALUE
Functions can return one value (one int, one decimal, one text etc.). For these functions one can call SELECT function_name(...) or SELECT * FROM function_name(...).

CREATE FUNCTION totalsalesformonth(month varchar) returns decimal as $$
    select sum(sales) from salesfigures where month=$1;
$$ language 'sql';


Another option for defining a function's output is to use OUT parameters. Here we do not use RETURNS; instead we list the order, names and types of the output fields alongside the input arguments, preceding each output field with "OUT".

CREATE FUNCTION salesforpersonmonth4(month varchar, OUT name varchar, OUT value decimal) as $$
    select name,sales from salesfigures where month=$1;
$$ language 'sql';

aggs=# select * from salesforpersonmonth4('jan');
 name | value
------+-------
 joe  |    34
(1 row)


Notice that this returns just one record even though the query itself would produce multiple rows. Without SETOF, a SQL function returns only the first row of its result.
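If we do want every row while keeping the OUT parameters, one option (a sketch; the function name is made up) is to combine them with RETURNS SETOF record:

```sql
-- The OUT parameters define the row shape; SETOF record returns all rows
CREATE FUNCTION salesforpersonmonth5(month varchar, OUT name varchar, OUT value decimal)
returns SETOF record as $$
    select name, sales from salesfigures where month=$1;
$$ language 'sql';
```

Calling select * from salesforpersonmonth5('jan') would then return both joe's and bob's rows.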

FUNCTIONS THAT RETURN A SETOF VALUES
Functions can also return a set of values of the same type. That is, imagine it returning a column of decimals or a column of text. To specify this we use the SETOF modifier. Here is a function that returns the sales figures for each salesperson for January.

CREATE FUNCTION salesformonth(month varchar) returns SETOF decimal as $$
    select sales from salesfigures where month=$1;
$$ language 'sql';


aggs=# select * from salesformonth('jan');
 salesformonth
---------------
            34
            18
(2 rows)

FUNCTIONS THAT RETURN A SETOF ROWS
There are 4 options to return a set of rows.
1) RETURNS table
PostgreSQL functions (version 8.4 and later) can also return a table. That is, we define the fields of the SETOF rows that the function will return. Here is that last function but with the person's name as well as their sales:

CREATE FUNCTION salesforpersonformonth(month varchar) returns table(name text, sales decimal) as $$
    select name, sales from salesfigures where month=$1;
$$ language 'sql';

aggs=# select * from salesforpersonformonth('jan');
 name | sales
------+-------
 joe  |    34
 bob  |    18
(2 rows)

The return type of a function can be a standard PostgreSQL type, but it can also be derived from an existing column or table, or be a type that the user defines.

2) Existing table
Here we have a table with a name and sales value:

aggs=# create table personsales(name varchar, sales decimal);
CREATE TABLE

A function can then return that type:

CREATE FUNCTION salesforpersonmonth(month varchar) returns SETOF personsales as $$
    select name,sales from salesfigures where month=$1;
$$ language 'sql';

aggs=# select * from salesforpersonmonth('jan');     
 name | sales
------+-------
 joe  |    34
 bob  |    18
(2 rows)

3) Column of existing table
Here is a function that returns a SETOF values of a type defined by the column of an existing table:

CREATE FUNCTION salesforpersonmonth2(month varchar) returns SETOF personsales.sales%TYPE as $$
    select sales from salesfigures where month=$1;
$$ language 'sql';

aggs=# select * from salesforpersonmonth2('jan');
 salesforpersonmonth2
----------------------
                   34
                   18
(2 rows)

4) Defined type
We can create a new type by using CREATE TYPE. CREATE TYPE has a lot of options but the basic use is

CREATE TYPE typename AS (fieldname1 type1, fieldname2 type2,...);

For instance,

CREATE TYPE personsales3 AS (name varchar, sales decimal);

Then, one can define

CREATE FUNCTION salesforpersonmonth3(month varchar) returns SETOF personsales3 as $$
    select name,sales from salesfigures where month=$1;
$$ language 'sql';

CREATE FUNCTION - basic anatomy

Functions or stored procedures are a very powerful feature of PostgreSQL. Functions allow one to modularize code, create "units" that do useful tasks and hide the complexity of the underlying code, here a query. They also reduce the risk of making a mistake by typing the same query over and over again, such as running a daily or monthly report.

PostgreSQL supports a particularly large variety of programming languages. By using these procedural language extensions one can write functions in Java (PL/Java), Python (PL/Python), R (PL/R), Perl (PL/Perl) and many others to interact with a PostgreSQL database. In this and the next post, however, we are going to cover simple functions in SQL only. In later posts we will cover PL/pgSQL, a more powerful and flexible procedural language that provides features such as control flow (IF...THEN...ELSE), local variables and multiple queries.

We'll start with a simple SQL function. In the next post, we will expand on this and detail some additional options and features that CREATE FUNCTION supports.

Here are the sales data from the last post:

aggs=# select * from salesfigures;
 name | month | sales
------+-------+-------
 joe  | jan   |    34
 joe  | feb   |    23
 bob  | jan   |    18
 bob  | feb   |    30
(4 rows)

If we want to get the total sales for a given month we can do

aggs=# select sum(sales) from salesfigures where month='jan';
 sum
-----
  52
(1 row)

but if we know that we are going to have to do this each month (and we are likely to forget the query by next month) it might make sense to create a function called totalsalesformonth() that one can call and pass in the month of interest. Here is how we do that. Don't worry if this looks complicated for now. We will step through each part.

CREATE FUNCTION totalsalesformonth(month varchar) returns decimal as $$
    select sum(sales) from salesfigures where month=$1;
$$ language 'sql';


and we can run it as

aggs=# select totalsalesformonth('jan');
 totalsalesformonth
--------------------
                 52
(1 row)

and get the same result as the plain query. Let us break this function definition down.

COMMAND
CREATE FUNCTION is the command that runs to the final semicolon. It is a statement like any other, so we can paste it into the psql command line or into pgAdmin the same as any other query.

It defines a function called totalsalesformonth. PostgreSQL ignores capitalization and folds unquoted identifiers to lower case, so this call is also recognized

aggs=# select totalsalesForMonth('jan');
 totalsalesformonth
--------------------
                 52
(1 row)

NAME
The function name must be a single string. It cannot be "total sales for month" but it can be total_sales_for_month. Choose a sensible descriptive name for the function. You and any other users will thank you later.

PARAMETER LIST
We are specifying that it takes one argument or parameter (technically this is an IN parameter) called "month" and is of type varchar.
We can have more than one argument. These are specified as a comma-delimited list.
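As a sketch (with a hypothetical function name), a two-parameter version of our sales function might look like:

```sql
-- $1 is the name, $2 the month, matching the parameter order
CREATE FUNCTION totalsalesforperson(name varchar, month varchar) returns decimal as $$
    select sum(sales) from salesfigures where name=$1 and month=$2;
$$ language 'sql';
```

One would call it as select totalsalesforperson('joe', 'jan');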

RETURN TYPE
The function is declared to return a single value of type decimal. We will discuss return types further in the next post.

FUNCTION BODY
The "$$" says to PostgreSQL that we are starting the code that constitutes the body of the function.
The function body should be a single statement, i.e. only one semicolon. PostgreSQL will allow multiple statements but it is only going to return the last. That is,

CREATE FUNCTION totalsalesformonth(month varchar) returns decimal as $$
    select sum(sales) from salesfigures where month=$1;
    select 3.;
$$ language 'sql';

is valid and will return 3 when called.
The names of the parameters in the parameter list are ignored inside the function body. Instead we use $1 in place of the first parameter, $2 for the second parameter etc. So, when we call

select totalsalesForMonth('jan');

PostgreSQL takes the value 'jan' and replaces $1 with this to produce

select sum(sales) from salesfigures where month='jan';

which is then executed.

Importantly, one cannot use a passed-in parameter for the name of a field or the name of a table. For instance, this function will not compile:

CREATE FUNCTION totalsales(tablename varchar) returns decimal as $$
    select sum(sales) from $1;                                       -- WRONG!
$$ language 'sql';

PostgreSQL complains of a syntax error. Why is this?
PostgreSQL is strongly typed. That means it checks that the output of the query matches the types that the function is declared to return. In this case, the parser cannot know whether the table passed in has a sales field, nor, if it does, whether that field is numeric. There are too many unknowns for PostgreSQL to be happy that this function will work as intended. If you do need to create dynamic queries such as this, you can achieve it with PL/pgSQL.
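For the curious, here is a sketch of how such a dynamic query could be written in PL/pgSQL (the function name is made up; EXECUTE builds and runs the query string at run time):

```sql
CREATE FUNCTION totalsales2(tablename varchar) returns decimal as $$
DECLARE
    result decimal;
BEGIN
    -- quote_ident guards against badly formed or malicious table names
    EXECUTE 'select sum(sales) from ' || quote_ident(tablename) INTO result;
    RETURN result;
END;
$$ language 'plpgsql';
```

We will cover PL/pgSQL properly in later posts.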

Although the names are ignored, it is important to use good descriptive names for documentation purposes. I don't know about you, but I can forget what I coded last week and appreciate the help of good documentation; besides, you might not be the only user of these functions. I would strongly discourage the use of generic names such as "arg0", "arg1", "x", "y" unless there is good reason. (For instance, x and y might be the best choice for a function that processes a point in the Cartesian plane.)

LANGUAGE
Finally, we tell PostgreSQL which language we are using.

In summary, our function is

CREATE FUNCTION function_name (argument_name argument_type, argument_name argument_type, ...) RETURNS return_type AS $$
     function_body_query;
$$ language language_name;

This covers the basic anatomy of a SQL function. CREATE FUNCTION is a sophisticated command and supports a richer set of options. We'll start on those in the next post.

Thursday, December 2, 2010

schema

In this post we'll examine PostgreSQL's schema support.

We encountered the default schema called "public" in an earlier post when exploring pgAdmin. Colloquially, when someone talks about a database's schema they typically mean the structure of the tables (that this table has these fields with these types). Here, a schema is a higher level of organization, a container or namespace within a database. A schema contains a set of tables, views, stored procedures and triggers and so on, all the usual components of a database. However, one can have multiple schema within a PostgreSQL database. To clarify this, let's start with a concrete example and consider the pros and cons of different ways of organizing the data.

Suppose we have a supermarket company, SuperFoods Inc., who have two stores, say Sausalito, CA and Petaluma, CA. (Clearly they have recovered from the bad economy and layoffs of this earlier post.) Each of the two stores has a set of information: employees and contact details, details about the physical store itself, replenishment orders and order history etc.

Imagine the data are stored in two separate databases. The CEO of SuperFoods asks one of his minions to produce a report on the sales history of the two stores. The problem here is that databases are independent silos. One cannot write a SQL query that will collate the data from the two stores. psql can only connect to one database at a time. We can use external tools, such as a JDBC driver in Java that will make a connection to each of the two databases and collect the data, but we cannot query the two databases simultaneously from the psql command line.

Imagine now that we take the two databases and put them inside a single SuperFoods database. We create a container, a schema, for each of the two original databases. Each schema has a unique name, "sausalito" and "petaluma", and this causes each of the tables in the schema to have a qualified name that includes the schema name.

That is, we have

sausalito.employees
sausalito.sales

petaluma.employees
petaluma.sales

(Technically, it would include the database name too and be "SuperFoods".sausalito.employees etc. under the hood but the user doesn't see this or need to see it.)

Importantly, when we connect to the SuperFoods database we can now access the sales report tables from both stores. We cannot do

select * from sales;

because PostgreSQL is going to complain. There is an ambiguity here. There are multiple sales tables. Which do you mean? Thus, one has to qualify it with the schema name:

select * from sausalito.sales;


to specify the Sausalito sales and

select * from petaluma.sales;


to access the Petaluma sales.

We can now combine those queries with operators such as UNION and UNION ALL (covered in this post) to get something closer to what the CEO wants:

select 'sausalito', sum(sales) from sausalito.sales where year=2010
UNION ALL
select 'petaluma', sum(sales) from petaluma.sales where year=2010;

Great. Goal achieved.

OK, now consider this structure from the perspective of a human resources person at the Sausalito store. She needs to update the employees table for her store and she isn't interested in the Petaluma tables. Writing 'sausalito.' before each table in the queries is a bit of a nuisance. PostgreSQL helps us out with the notion of the search path. The search path says where to look, similar to classpaths in most programming languages.

Let's connect to a database, set the search path and create a new table.

psql -d SuperFoods

SuperFoods=# set search_path TO sausalito;
SET

SuperFoods=# \d
              List of relations
  Schema   |   Name    | Type  |    Owner    
-----------+-----------+-------+--------------
 sausalito | employees | table | postgresuser
 sausalito | sales     | table | postgresuser
(2 rows)

Notice we see sales and employees not sausalito.sales. When we set the search path it means, work on the database from the perspective of the sausalito schema. Now when one does

select * from sales;

it does not complain. It knows that you mean sausalito.sales. There is no ambiguity now because you cleared that up earlier when you set the search path.

You can even select the data from the other schema using the qualified name

select * from petaluma.sales;

from within the sausalito schema.

Creating tables once one has set the search_path has the same implicit name qualification:

SuperFoods=# create table promotions(name varchar, value int);
CREATE TABLE
SuperFoods=# \d
               List of relations
  Schema   |    Name    | Type  |    Owner    
-----------+------------+-------+--------------
 sausalito | employees  | table | postgresuser
 sausalito | promotions | table | postgresuser
 sausalito | sales      | table | postgresuser
(3 rows)

How do we create a schema? That is a cinch:

SuperFoods=# create schema sausalito;
CREATE SCHEMA
SuperFoods=# create schema petaluma;
CREATE SCHEMA

Further, you can dump the data from a single schema from the operating system command line:

localhost:~ postgresuser$ pg_dump --schema sausalito SuperFoods > sausalito.sql

We can see that using schema gives us all the tools that exist for single databases, but we can collect related tables together (in SuperFoods, by store) and still compare data among those different collections. Now that we understand more about schema we can refine our notion of them: they are a namespace within a database. PostgreSQL only sees one database here: SuperFoods.

Schema can be a very powerful and useful way of partitioning data. However, they are typically used for more complex sets of tables, more complex than those typically needed for beginner exercises. For simpler cases, an additional field that defines the subset of data might suffice. Imagine that we include an additional 'store' field in the sales table. A single table could then be queried for Sausalito sales, Petaluma sales or both.
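As a sketch (the table and column names here are made up), that simpler design might look like:

```sql
create table allsales(store varchar, name varchar, month varchar, sales decimal);

-- sales for one store
select sum(sales) from allsales where store='sausalito';

-- compare the stores in a single query, no schema qualification required
select store, sum(sales) from allsales group by store;
```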

Also, imagine that SuperFoods assigns a regional manager to the two stores. That manager is effectively an employee at both stores. We want to avoid replicating data, so we would not want to include his personal details in both schema. In this case a single, more unified database structure might be more appropriate. Those caveats aside, schema are a valuable addition to PostgreSQL and worth considering when designing one's database.

Wednesday, December 1, 2010

Aggregates

PostgreSQL supports a number of different aggregate functions (a tutorial appears here). These are functions that perform some mathematical or statistical calculations on a set of values. Common examples are sum, min, max and count. The values are often numeric but they can be boolean, string, or XML too and count accepts anything.

Let's start with a simple example. Suppose we have the following table of sales per month for different sales people

aggs=# select * from salesfigures;
 name | month | sales
------+-------+-------
 joe  | jan   |    34
 joe  | feb   |    23
 bob  | jan   |    18
 bob  | feb   |    30
(4 rows)

We can get the total sales for the sales team for the year to date as

aggs=# select sum(sales) from salesfigures;
 sum
-----
 105
(1 row)

Notice that sum is a function that we can specify in the select list (the part before the FROM) and it takes one argument. We are supplying it the values in the sales column. It adds 34 + 23 + 18 + 30 to get 105. Easy.

Filtering by WHERE
As the aggregate gets computed after any WHERE clause is applied we can easily apply filters on the data that is supplied to the aggregate. For instance, to get the sales for January

aggs=# select sum(sales) from salesfigures where month='jan';
 sum
-----
  52
(1 row)

(34 + 18 = 52)

Filtering by GROUP
Similarly, aggregates are computed after GROUP BY clauses. This is a really powerful feature. So, if we want the total sales for each month we can run

aggs=# select month, sum(sales) from salesfigures group by month;
 month | sum
-------+-----
 feb   |  53
 jan   |  52
(2 rows)

Clearly, using strings for month is not the best table design here as they cannot sensibly be ordered or compared. I used them to keep the example simple.

We can call aggregates multiple times in the select list. Here we get the min and max sales values for each person

aggs=# select name, min(sales), max(sales) from salesfigures group by name;
 name | min | max
------+-----+-----
 bob  |  18 |  30
 joe  |  23 |  34
(2 rows)

GOTCHA
If we wanted to know who had the most sales in any month we might be tempted to query

select name from salesfigures where sales=max(sales);         WRONG!

We could imagine this as computing the maximum value in the sales column and then selecting the one or more names that match this value. This will not work. If one runs this query, PostgreSQL complains

aggs=# select name from salesfigures where sales=max(sales);
ERROR:  aggregates not allowed in WHERE clause
LINE 1: select name from salesfigures where sales=max(sales);
                                                  ^
aggs=#

This is a typical beginner mistake. I've done it. You'll probably do it too. As the error message says, aggregates are not allowed in the WHERE clause. This is because the WHERE clause is used to generate the set of rows to be passed to the select list and any aggregates. What we have here is a circular relationship: max depends on which rows it is given and that is determined by WHERE, which depends on max... What you have to do is obtain the maximum value as one query and then pass that into another query.

aggs=# select name from salesfigures where sales=(select max(sales) from salesfigures);
 name
------
 joe
(1 row)

(select max(sales) from salesfigures) is called a subquery, something we will cover in more detail in a later post. For now, just know that it computes this subquery first to obtain a value, which will be 34. We then effectively have a simple query

select name from salesfigures where sales=34;

which is OK as the circularity has been eliminated.

Filtering by HAVING
OK, so we cannot have aggregates in a WHERE clause. However, we can have them in a HAVING clause. If you recall our earlier post, a GROUP BY can be accompanied by an optional HAVING clause, which filters the groups after the WHERE clause and the grouping have been applied.

Who is underperforming? Which salesperson(s) have had less than 20 sales in any one month?

aggs=# select name from salesfigures group by name having min(sales)<20;
 name
------
 bob
(1 row)

This query groups the rows by name, then applies the min aggregate to get the minimum per person. It then keeps those with a minimum less than 20. Finally, since we asked for SELECT name, it returns the one name left, Bob's.

COUNT
COUNT is a commonly used aggregate. SELECT count(*) FROM tablename returns the number of rows; the * just means "everything", every column in this context. If we ran SELECT count(*) from salesfigures it would return 4, as there are four rows, and so would SELECT count(sales) from salesfigures since the sales column has no NULLs. In general, count(*) counts rows while count(expression) counts the non-NULL values. Don't confuse count with sum!
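The distinction matters when a column contains NULLs. A small sketch with a throwaway table:

```sql
create table t(x int);
insert into t values (1), (NULL), (3);

-- count(*) counts rows: 3. count(x) counts non-NULL values: 2.
select count(*), count(x) from t;
```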

Unfortunately, COUNT is fairly slow in PostgreSQL. It can take seconds or sometimes tens of seconds on really large tables. The reason is that when a row is deleted from a table, it is not really deleted, just flagged as no longer needed. (This is the same in some operating systems when one deletes a file.) Also, the rows of a table may be scattered through storage. Count has to scan each row of the table sequentially and work out which rows are "dead" in order to return the total row count. Other databases can do this more efficiently, and this is a frequent complaint about PostgreSQL's performance. However, it excels over other databases in many other respects.

The PostgreSQL documentation lists the various aggregates that PostgreSQL provides. As well as count, min, max and sum mentioned above, other commonly used aggregates are avg (average) and stddev (standard deviation, a statistical measure of variability). Unfortunately, median is not provided.

Let's end with a non numeric example. The EVERY aggregate checks whether each and every row is true

aggs=# select * from clients;
     name     | paidbill
--------------+----------
 acme corp.   | t
 lehman bros. | f
 GM           | t
(3 rows)

aggs=# select every(paidbill) from clients;
 every
-------
 f
(1 row)

Saturday, November 27, 2010

SQL: outer joins

Last time we covered INNER JOINs, including two special cases: CROSS JOINs and self joins. With INNER JOINs the result set contains only the rows that match the join criteria (if any). With OUTER JOINs, the result may contain both matched and unmatched rows. It is for this reason that beginners might find these joins a little more confusing. However, the logic is really quite straightforward.

There are three types of outer joins: LEFT, RIGHT, and FULL. Let's start with a LEFT OUTER JOIN.

In a left join when joining table t1 with table t2 PostgreSQL first does a "normal" inner join. It then looks to see if there are any rows from t1 that are not in the result set. If so, it adds in those rows placing NULLs for all the fields of t2. Thus, this guarantees that each row of t1 will appear at least once in the results.

Here are our data:

joindb=# select * from clients;
 id | name 
----+-------
  0 | joe
  1 | bob
  2 | alice
(3 rows)

joindb=# select * from orders;
 id | value
----+-------
  1 | val1
  2 | val2
  3 | val3
(3 rows)

An inner join on ids matches on id=1 and id=2

joindb=# select * from clients join orders on clients.id=orders.id;
 id | name  | id | value
----+-------+----+-------
  1 | bob   |  1 | val1
  2 | alice |  2 | val2
(2 rows)

but a left outer join has that inner join result set plus the row for id=0 from clients:

joindb=# select * from clients left join orders on clients.id=orders.id;
 id | name  | id | value
----+-------+----+-------
  0 | joe   |    |
  1 | bob   |  1 | val1
  2 | alice |  2 | val2
(3 rows)

Left joins are implicitly OUTER JOINs but we can add that keyword in explicitly

select * from clients left outer join orders on clients.id=orders.id;

We can obtain the same result set if we use the USING syntax: select * from clients left join orders using(id);

That's it. Simple isn't it?

Why might we want something like this? Suppose we have a list of clients and we have purchase data from the last month. We might want a report of the sales for all our clients, even those that didn't buy anything in that month. This sort of report might be useful for the sales staff to target those clients.

OK, if a LEFT JOIN does an inner join and adds in any additional "missing" rows from t1, what do you think a RIGHT JOIN does? Exactly. Do an INNER JOIN and add in any missing rows from t2.

joindb=# select * from clients right join orders on clients.id=orders.id;
 id | name  | id | value
----+-------+----+-------
  1 | bob   |  1 | val1
  2 | alice |  2 | val2
    |       |  3 | val3
(3 rows)

Here we can see the extra row for id=3 from orders.

The last type is a FULL OUTER JOIN. That is a left and a right outer join combined. Do an inner join and add in any missing rows from both t1 and t2.

joindb=# select * from clients full join orders on clients.id=orders.id;
 id | name  | id | value
----+-------+----+-------
  0 | joe   |    |
  1 | bob   |  1 | val1
  2 | alice |  2 | val2
    |       |  3 | val3
(4 rows)

Again, we can use the ON join_criteria or the USING (join_column_list) syntax and we can include or omit the OUTER keyword.

We can also do a NATURAL join for left, right and full outer joins. Recall that with natural joins we ask PostgreSQL to match up the columns that share the same name.

joindb=# select * from clients natural full join orders;
 id | name  | value
----+-------+-------
  0 | joe   |
  1 | bob   | val1
  2 | alice | val2
  3 |       | val3
(4 rows)

Finally, in this and the last post we only joined two tables. We can in fact join as many tables as we wish in a single query by chaining JOIN clauses (parentheses can be used to control the nesting of outer joins). For inner joins, however, one can simply do

select * from tablename1 t1, tablename2 t2, tablename3 t3,... where t1.id=t2.id and t1.id=t3.id and...;

Friday, November 26, 2010

SQL: inner, cross and self joins

Understanding how table joins work is one of the key skills of a database beginner. In an earlier post, we normalized our ceos table into two tables, one table to list the set of unique people and a second table to associate a person with a term as CEO. What we didn't cover then was how to combine those tables now those data had been broken into separate tables. This is the work of join queries. When we join tables we take two or more tables and combine them using zero, one or more join criteria to create a set of resulting rows.

Suppose that we have two tables:

joindb=# select * from clients;
 id | name 
----+-------
 0  | joe
 1  | bob
 2  | alice
(3 rows)

joindb=# select * from orders;
 id | value
----+-------
  1 | val1
  2 | val2
  3 | val3
(3 rows)


CROSS JOIN
The simplest type of join is a cross join. This says: create a result set of all possible combinations of the rows from the two tables.

joindb=# select * from clients, orders;
 id | name  | id | value
----+-------+----+-------
 0  | joe   |  1 | val1
 1  | bob   |  1 | val1
 2  | alice |  1 | val1
 0  | joe   |  2 | val2
 1  | bob   |  2 | val2
 2  | alice |  2 | val2
 0  | joe   |  3 | val3
 1  | bob   |  3 | val3
 2  | alice |  3 | val3
(9 rows)

The first two columns are all the fields of the first table listed in the query, clients, followed by all the fields of the second table, orders. We can see that it takes the clients table and combines all its rows with the first row of orders, then it takes the clients table again and combines it with the second row of orders, and finally does the same for the third row of orders to produce 3 * 3 rows.

We could have produced the same set of results using the more explicit CROSS JOIN terminology:

joindb=# select * from clients cross join orders;
 id | name  | id | value
----+-------+----+-------
 0  | joe   |  1 | val1
 1  | bob   |  1 | val1
 2  | alice |  1 | val1
 0  | joe   |  2 | val2
 1  | bob   |  2 | val2
 2  | alice |  2 | val2
 0  | joe   |  3 | val3
 1  | bob   |  3 | val3
 2  | alice |  3 | val3
(9 rows)

INNER JOIN
Cross joins can be useful but more typically we restrict the set of combinations based on some join criteria. Suppose we want to match up the client and order Ids:

joindb=# select * from clients join orders on clients.id=orders.id;
 id | name  | id | value
----+-------+----+-------
  1 | bob   |  1 | val1
  2 | alice |  2 | val2
(2 rows)

This says output only the rows where there is a matching ID for both clients and orders so we only obtain 2 rows. (0,'joe') of clients doesn't have a match so that is not output. Similarly for (3,'val3') for orders. These criteria are boolean criteria. They must return true or false for each row combination. This is called an INNER JOIN and we can supply that keyword explicitly. This query produces the same results as the last query: select * from clients inner join orders on clients.id=orders.id;.

It is not unusual to have a large number of join criteria. It can be tedious to keep writing out the table names in all these criteria, so we can shorten them by using aliases. An alias is a short, temporary "nickname" we can give to each table. Let's give clients the alias t1 and orders t2.
We assign the name by putting the alias after each of our tables in the from section of our query. We can then use them in the later part of the query:

joindb=# select * from clients t1 join orders t2 on t1.id=t2.id;
 id | name  | id | value
----+-------+----+-------
  1 | bob   |  1 | val1
  2 | alice |  2 | val2
(2 rows)

If you use aliases you must use them in the join criteria otherwise PostgreSQL will complain:

joindb=# select * from clients t1 join orders t2 on clients.id=orders.id;
ERROR:  invalid reference to FROM-clause entry for table "clients"
LINE 1: select * from clients t1 join orders t2 on clients.id=orders...
                                                   ^
HINT:  Perhaps you meant to reference the table alias "t1".

Rather than using ON we can specify a field list after USING

joindb=# select * from clients join orders using (id);
 id | name  | value
----+-------+-------
  1 | bob   | val1
  2 | alice | val2
(2 rows)

We can actually drop the JOIN keyword altogether here if we use WHERE instead of ON:

joindb=# select * from clients t1, orders t2 where t1.id=t2.id;
 id | name  | id | value
----+-------+----+-------
  1 | bob   |  1 | val1
  2 | alice |  2 | val2
(2 rows)

NATURAL JOIN
Often we give fields the same name for the same entity type across different tables. That is, if we have a person field in different tables we might use the same "person_id" name in each of them. Such conventions are an aspect of good database design and can avoid some confusion when designing queries. We can ask PostgreSQL to make use of this convention in a NATURAL join, a special type of INNER join in which PostgreSQL matches up on all columns that have the same name. In the following query

joindb=# select * from clients natural join orders;
 id | name  | value
----+-------+-------
  1 | bob   | val1
  2 | alice | val2
(2 rows)

PostgreSQL works out that we have an id column in each table and implicitly does an INNER JOIN on that column.

SELF JOIN
The tables we are joining don't have to be different tables. We can join a table with itself.  This is called a self join. In this case we have to use aliases for the table otherwise PostgreSQL will not know which id column of which table instance we mean

joindb=# select * from clients t1, clients t2 where t1.id=t2.id;
 id | name  | id | name 
----+-------+----+-------
  0 | joe   |  0 | joe
  1 | bob   |  1 | bob
  2 | alice |  2 | alice
(3 rows)

To summarize, we can join two tables t1 and t2 using a cross join:

T1 CROSS JOIN T2
T1, T2

a self join (note that the aliases are required here, otherwise the table name is ambiguous)

T1 a CROSS JOIN T1 b
T1 a, T1 b

or a qualified inner join:

T1 [INNER] JOIN T2 ON boolean_expression
T1 [INNER] JOIN T2 USING ( join column list )
T1 NATURAL [INNER] JOIN T2
T1, T2 WHERE ....

and we can use table aliases

tablename t1, tablename t2...

In the next post we will cover OUTER JOINs.

Sunday, November 21, 2010

SQL: UNION, EXCEPT and INTERSECT

In this post we shall examine the very useful PostgreSQL operators UNION, EXCEPT and INTERSECT, each of which can be followed by ALL.

A hot startup is holding a special event and wants to send out invites to some of their best clients and also to some VIPs. Some of the VIPs are actually very supportive of the site and are clients too. What query will provide the complete set of people to invite, avoiding duplicate records? For this, we want UNION. UNION is an OR operator for multiple result sets. Here is our data:

create database hotstartup;
\c hotstartup;
create table clients(name varchar);
insert into clients values('John Smith');
insert into clients values('Ashton Kutcher');
insert into clients values('Joe Doe');
insert into clients values('Stephen Fry');

create table vips(name varchar);
insert into vips values('Ashton Kutcher');
insert into vips values('Stephen Fry');
insert into vips values('Demi Moore');
insert into vips values('Hugh Laurie');

and the query will be

hotstartup=# select * from clients union select * from vips;
      name     
----------------
 Ashton Kutcher
 Hugh Laurie
 Joe Doe
 Demi Moore
 John Smith
 Stephen Fry
(6 rows)

The query says: get the rows from clients, append the rows of vips, and remove any duplicates. Here we end up with 6 unique names from the initial set of 8 rows.

There are times when we do not want to remove duplicates. For that, we can append ALL after UNION:

hotstartup=# select * from clients union all select * from vips;
      name     
----------------
 John Smith
 Ashton Kutcher
 Joe Doe
 Stephen Fry
 Ashton Kutcher
 Stephen Fry
 Demi Moore
 Hugh Laurie
(8 rows)

The head party planner sees the list of 6 invites and says "No, no, no. We need to send out two sets of invites: one set to the VIPs that will get them into the VIP tent, and ordinary invites for everyone else". Clearly, select * from vips will return the names for the VIP invite list. However, what will get us the list for the remaining invites? We want everyone on the clients list EXCEPT those on the VIP list. Unsurprisingly, we use EXCEPT:

hotstartup=# select * from clients except select * from vips;
    name   
------------
 Joe Doe
 John Smith
(2 rows)

Finally, if we want to get the list of people who are both clients and VIP we can use INTERSECT.

hotstartup=# select * from clients intersect select * from vips;
      name     
----------------
 Ashton Kutcher
 Stephen Fry
(2 rows)

Both EXCEPT and INTERSECT can be followed by an optional ALL. To examine its effect we need to add some duplicate data. Let's insert a duplicate name into clients:

hotstartup=# select * from clients;
      name     
----------------
 John Smith
 Ashton Kutcher
 Joe Doe
 Stephen Fry
 Stephen Fry
(5 rows)

hotstartup=# select * from vips;
      name     
----------------
 Ashton Kutcher
 Demi Moore
 Hugh Laurie
 Stephen Fry
(4 rows)

Compare the results of these two

hotstartup=# select * from clients except select * from vips;
    name   
------------
 Joe Doe
 John Smith
(2 rows)

versus

hotstartup=# select * from clients except all select * from vips;
    name    
-------------
 Joe Doe
 John Smith
 Stephen Fry
(3 rows)

The latter query is essentially saying: take each row of clients and look for a matching row in vips. If there is a match, remove the pair of rows from the result set. The clients table has Stephen Fry twice but vips has him only once, so one pair is removed and one copy survives.

For INTERSECT, let's insert Stephen Fry into vips so that he is duplicated in both lists:

hotstartup=# select * from clients;
      name     
----------------
 John Smith
 Ashton Kutcher
 Joe Doe
 Stephen Fry
 Stephen Fry
(5 rows)

hotstartup=# select * from vips;
      name     
----------------
 Ashton Kutcher
 Stephen Fry
 Demi Moore
 Hugh Laurie
 Stephen Fry
(5 rows)

Then we can see that while INTERSECT removes duplicates, INTERSECT ALL does not, i.e.

hotstartup=# select * from clients intersect select * from vips;
      name     
----------------
 Ashton Kutcher
 Stephen Fry
(2 rows)

versus

hotstartup=# select * from clients intersect all select * from vips;
      name     
----------------
 Ashton Kutcher
 Stephen Fry
 Stephen Fry
(3 rows)

The latter query is saying: find matching pairs of rows. Stephen Fry appears in both tables twice, so we find two matching pairs for him and hence two rows appear in the results.

For all three of these operators the columns of the two select statements must match. We cannot have the first select statement returning three fields and the second returning two. We can demonstrate that with our ceos table:

apple=# select * from ceos;
       name       | year
------------------+------
 Steve Jobs       | 1976
 Mike Markkula    | 1977
 Mike Scott       | 1978
 John Sculley     | 1983
 Michael Spindler | 1993
 Gil Amelio       | 1996
 Steve Jobs       | 1997
(7 rows)

apple=# select * from ceos where year between 1970 and 1979 union select * from ceos where year=1977;
     name      | year
---------------+------
 Mike Scott    | 1978
 Steve Jobs    | 1976
 Mike Markkula | 1977
(3 rows)

This is OK because both statements return *, which, as they refer to the same table, means the same set of fields.

This, however, is not acceptable

apple=# select * from ceos where year between 1970 and 1979 union select name from ceos where year=1977;
ERROR:  each UNION query must have the same number of columns
LINE 1: ...os where year  between 1970 and 1979 union select name from ...

because the first statement returns all fields whereas the second returns one field only.

SQL: DISTINCT, DISTINCT ON and ALL

It is not uncommon to have duplicate data in the results of a query. We can use the DISTINCT clause of SELECT to remove those and return only the unique results.

Returning to our trusty ceos table, a simple select shows 7 rows:

apple=# select * from ceos;
       name       | year
------------------+------
 Steve Jobs       | 1976
 Mike Markkula    | 1977
 Mike Scott       | 1978
 John Sculley     | 1983
 Michael Spindler | 1993
 Gil Amelio       | 1996
 Steve Jobs       | 1997
(7 rows)

If we only want the distinct set of people who are or have been Apple's CEOs we can use distinct to give the six names:

apple=# select distinct(name) from ceos;
       name      
------------------
 Michael Spindler
 John Sculley
 Gil Amelio
 Mike Markkula
 Mike Scott
 Steve Jobs
(6 rows)

Similarly select distinct(year) from ceos; will provide a list of years of a CEO changeover.

DISTINCT ON
DISTINCT ON (expression) will return the "first" row of each set of rows where the expression is equal. The following says: get the name and year from ceos for the set of rows where name is distinct.

apple=# select distinct on (name) name,year from ceos;
       name       | year
------------------+------
 Gil Amelio       | 1996
 John Sculley     | 1983
 Michael Spindler | 1993
 Mike Markkula    | 1977
 Mike Scott       | 1978
 Steve Jobs       | 1976
(6 rows)

You can see that it returned Jobs' first term as CEO.

I put "first" in quotes because it is not always obvious what that means; without an ordering the results can be unpredictable. To be safe it is good practice to combine DISTINCT ON with ORDER BY to make the choice explicit.

apple=# select distinct on (name) name, year from ceos order by name,year;
       name       | year
------------------+------
 Gil Amelio       | 1996
 John Sculley     | 1983
 Michael Spindler | 1993
 Mike Markkula    | 1977
 Mike Scott       | 1978
 Steve Jobs       | 1976
(6 rows)

DISTINCT ON can handle multiple fields. This says get all the rows where the combination of name and year is distinct:

apple=# select distinct on (name,year) name,year from ceos;
       name       | year
------------------+------
 Gil Amelio       | 1996
 John Sculley     | 1983
 Michael Spindler | 1993
 Mike Markkula    | 1977
 Mike Scott       | 1978
 Steve Jobs       | 1976
 Steve Jobs       | 1997
(7 rows)

In this trivial example it returns all rows, but one can imagine how this can be powerful with additional fields. Note that DISTINCT ON is not part of the SQL standard.

Incidentally, this could have been achieved with GROUP BY and ordering results

apple=# select name,year from ceos group by name,year order by name;
       name       | year
------------------+------
 Gil Amelio       | 1996
 John Sculley     | 1983
 Michael Spindler | 1993
 Mike Markkula    | 1977
 Mike Scott       | 1978
 Steve Jobs       | 1976
 Steve Jobs       | 1997
(7 rows)

Finally, ALL is the opposite of DISTINCT. As its name suggests, it explicitly specifies that all rows must be returned. ALL is the default mode, so select name from ceos is equivalent to select ALL name from ceos. As such it doesn't sound useful. However, whereas DISTINCT overrides the default behavior to strip out duplicate rows, ALL can be used with constructs such as UNION, EXCEPT and INTERSECT, whose default behavior is to remove duplicates, to override them and return all rows, including duplicates. We will cover those three keywords in the next post.

Thursday, November 18, 2010

Create table and constraints

Today we shall cover creating tables. So far we have only created the simplest tables: one or two fields, no constraints, no foreign keys, no primary keys and so on. CREATE TABLE is one of the core SQL commands and one of the more complex, with a host of options. This post is a quick introduction to highlight the basic ideas, common patterns and where to find more detailed explanations in the PostgreSQL documentation. Importantly, as all of this material is essentially standard SQL, the reader can easily find a whole host of additional material covering all levels of detail on the web.

When creating a table we want to define the fields (columns), their data type (int, text, date etc.) and any constraints such as whether that field can be empty (NULL), unique, or whether it must contain a value found in another table (foreign key) or that the value must be in a certain range (say, check that price > 0.01).

In its simplest form the CREATE TABLE command is a comma-delimited list of fieldnames and their types:

CREATE TABLE tablename (
  fieldname1 datatype1,
  fieldname2 datatype2,
  ...
  fieldname3 datatype3
);

such as the Apple CEOs table of an earlier post:

create table ceos (name varchar, year int);

PostgreSQL supports a large number of datatypes. Here are some of the more common ones:
varchar(n): string of at most n characters. If fewer than n are provided, only those characters are stored.
char(n): string of exactly n characters. If fewer than n are provided it will be space padded.
text: variable-length string.
int: integer, 4 bytes.
bigint: integer, 8 bytes. This is the type returned by count, as in "select count(*) from...".
decimal or numeric: an exact numeric value of specifiable precision. For instance, 12.3456 fits a numeric(6,4).
date: date with year, month and day.
timestamp, timestamptz: date and time without and with a time zone.
boolean: true or false.

This is, however, just a sample. There are types for money, shapes, arrays and many more.

Decide upon the types for each of your fields, but do so with a consideration to storage size. Don't use more than you need. For instance, if you know that a column holds two-character data, say a state abbreviation ('AZ', 'CA', etc.), a varchar(2) is a better choice than text. On the other hand, make sure that you will have enough for what you might need in the future. For instance, numeric(4,2) might suffice for most daily temperature readings in San Francisco, but very rarely they can exceed 100 F. If you try to store a reading of 103 F in a numeric(4,2) PostgreSQL will throw an error ("ERROR:  numeric field overflow. DETAIL:  A field with precision 4, scale 2 must round to an absolute value less than 10^2"). With numeric(5,2) or decimal you will be safe.

Having decided upon field types for a table we then need to consider the PRIMARY KEY. This is a special field or combination of fields that must be unique across all rows. This key becomes a unique identifier for each row. Typically these are integer IDs (product ID, order ID etc) which can be supplied explicitly or can be provided by PostgreSQL automatically by declaring a field SERIAL. Let's examine this further.

To remind you, here is our table of Apple's CEOs and the year that they started that role:

apple=# \d ceos
          Table "public.ceos"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 name   | character varying |
 year   | integer           |

apple=# select * from ceos;
       name       | year
------------------+------
 Steve Jobs       | 1976
 Mike Markkula    | 1977
 Mike Scott       | 1978
 John Sculley     | 1983
 Michael Spindler | 1993
 Gil Amelio       | 1996
 Steve Jobs       | 1997
(7 rows)


A table doesn't have to have a primary key. We didn't declare one for the ceos table, but then I knew that I was going to do simple selects only and would not be adding any new data. Often, however, we don't know what additional data might be added to a table, so we have to design it to handle the edge cases of that future data. Suppose Steve Jobs steps down and his post is taken over by Mike Scott. This is a relatively common name, so what if it is not the same Mike Scott who ran the company in the 1970s? Now we have a problem: with the current table schema we would not be able to distinguish between the two Mikes. We need to assign each Mike a unique code. We could normalize the schema: create one table to define the different people (a people table) and another (a ceos table) to define the association between a person and the year they became CEO.

create table people(
   id int PRIMARY KEY,
   name varchar NOT NULL
);
insert into people values(0,'Steve Jobs');
insert into people values(1,'Mike Markkula');
insert into people values(2,'Mike Scott');
insert into people values(3,'John Sculley');
insert into people values(4,'Michael Spindler');
insert into people values(5,'Gil Amelio');
insert into people values(6,'Mike Scott');

(This is just a simple illustrative example. In a real database a person would likely have separate last name and first name fields as well as other attribute fields.)
With this we have solved two problems. First, we have an id for each of the Mikes: 2 and 6. Second, we have removed some redundancy from the database by defining Steve Jobs only once. Recall that he appears twice in the ceos table. It is better design to have an entity defined once. Imagine that Apple had a female CEO who got married and changed her name. We would have to update all the rows where her name appears; in a normalized schema like the one above we only have to update a single row. This doesn't make a great difference in our trivial ceos example, but in a large database there may be lots of tables associated with a person. Imagine a payroll database where a person is associated with an address, a job, pay rates, bonuses and so on, all of which might be in different tables; there this can be a huge issue.

In the people schema, we defined an id column of type integer and declared it to be the PRIMARY KEY. This means that it must be unique and must not be NULL. Next we declared a name column and said that it cannot be NULL. If we are defining people by a single name field it doesn't make sense to allow a NULL name; otherwise what is the point of the row?

In the table above we supplied the ids ourselves. If we don't care what id is used for a person, so long as each person gets a unique one, we can ask PostgreSQL to make up the ids for us.

create table people(
  id SERIAL PRIMARY KEY,
  name varchar NOT NULL
);


Adding the keyword SERIAL tells PostgreSQL to generate the id automatically, starting from 1 and incrementing by 1 as we insert data. We can then insert data as

insert into people (name) values('Steve Jobs');
insert into people (name) values('Mike Markkula');
insert into people (name) values('Mike Scott');
insert into people (name) values('John Sculley');
insert into people (name) values('Michael Spindler');
insert into people (name) values('Gil Amelio');
insert into people (name) values('Mike Scott');

apple=# select * from people;
 id |       name      
----+------------------
  1 | Steve Jobs
  2 | Mike Markkula
  3 | Mike Scott
  4 | John Sculley
  5 | Michael Spindler
  6 | Gil Amelio
  7 | Mike Scott
(7 rows)

With our revised people table complete let us rework the ceo table.

create table ceo(
   id int,
   year int NOT NULL,
   PRIMARY KEY(id,year)
);
insert into ceo values(1,1976);
insert into ceo values(2,1977);
insert into ceo values(3,1978);
insert into ceo values(4,1983);
insert into ceo values(5,1993);
insert into ceo values(6,1996);
insert into ceo values(1,1997);
ALTER TABLE ceo ADD CONSTRAINT ceo_foreign_id FOREIGN KEY (id) REFERENCES people (id);

We have an id column of type int. We intend that to be the same id as in the people table. To achieve that we add a foreign key constraint with the ALTER TABLE statement. This says: add a constraint to the ceo table such that a value in its id column must be present in the id field of the people table. (We could have written "id int references people", which would add a foreign key constraint referencing the primary key of the people table. It is a matter of taste, but the ALTER TABLE command is more explicit.)

We want the year to be NOT NULL. We might also want to make sure that any year supplied is sensible. After all, -5 is a NOT NULL int but doesn't make sense as a year. Moreover, this is not just any year: it must be a year since Apple was founded, so 1976 or later. Thus, we could add a check to that field:

year int NOT NULL check(year >= 1976),

If we try to enter a year less than 1976 PostgreSQL will complain with

ERROR:  new row for relation "ceo" violates check constraint "ceo_year_check"

Finally, we have a primary key comprising both the id and the year. This allows multiple people to become CEO in the same year but not the same person multiple times in a year. When we execute the create table query PostgreSQL outputs

NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index "ceo_pkey" for table "ceo"
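As a quick sketch of what the composite key buys us (the exact error text may vary by PostgreSQL version):

```sql
-- the same (id, year) pair twice is rejected by the primary key
insert into ceo values(1, 1997);
insert into ceo values(1, 1997);
-- ERROR:  duplicate key value violates unique constraint "ceo_pkey"

-- but the same person returning as CEO in a different year is fine
insert into ceo values(1, 2000);
```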

Database design is a tricky exercise and certainly a little confusing for beginners. There are many tradeoffs to be made in terms of storage size, speed and efficiency of loading and querying, maintainability, current versus known or unknown future data, and so on. However, hopefully this post has conveyed a sense of some of the things to consider and the rich set of types and constraints that PostgreSQL provides. The PostgreSQL CREATE TABLE documentation details many more options, but the basics covered here will actually get you fairly far and cover a lot of needs.

Monday, November 15, 2010

Productivity tip: psql tab completion and history

Like many other shells, psql has tab completion. The ability to tab complete commands and relations such as tablenames can be a huge productivity boost.

Type 'S' and hit tab and the commands starting with 'S' appear:

SuperFoods=# S
SELECT     SET        SHOW       START      SAVEPOINT 

Type the next letter say 'e' and only the subset of commands appear:

SuperFoods=# SE
SET     SELECT 


(psql automatically changes these primary commands to upper case, which can take an extra tab and is a little annoying and unnecessary.)

This continues until there is no ambiguity and the command completes.

Type 'set sea' and tab and it completes to

SuperFoods-# set search_path

Type space and then tab again and it completes to

SuperFoods-# set search_path TO

and so on.

The same behavior is true with tablenames. If you have a table called 'employees' and no other table starts with 'e' then tab will complete the whole tablename. If there are two tables starting with 'e' then psql will list those tables in the same manner as the commands above.

It is not always obvious what will and will not complete. If one types 'SEL' and tab it completes to 'SELECT'. Then, typing ' * FR' and tab, one might expect it to complete the FROM but it does not. The psql documentation does, however, state that "the completion logic makes no claim to be an SQL parser".

If you have a blank line and type tab twice it brings up the primary commands:

SuperFoods-#
ABORT        ALTER        ANALYZE      BEGIN        CHECKPOINT   CLOSE        CLUSTER      COMMENT      COMMIT       COPY        
CREATE       DEALLOCATE   DECLARE      DELETE FROM  DISCARD      DO           DROP         END          EXECUTE      EXPLAIN     
FETCH        GRANT        INSERT       LISTEN       LOAD         LOCK         MOVE         NOTIFY       PREPARE      REASSIGN    
REINDEX      RELEASE      RESET        REVOKE       ROLLBACK     SAVEPOINT    SELECT       SET          SHOW         START       
TABLE        TRUNCATE     UPDATE       VACUUM       VALUES       WITH         UNLISTEN    

Finally, like other shells one can scroll through the command history using page up/down or cursor up/down keys. History retains up to the last 500 commands.

Modifying data: UPDATE and DELETE

In the last post we covered getting data into tables using INSERT and COPY. In this post we focus on modifying data in table using UPDATE and DELETE.

UPDATE, as one might imagine, updates or modifies values in a table, including NULLs. The basic idea is that we UPDATE a certain table, SETting the new values of one or more fields WHERE the rows meet some condition.

SuperFoods supermarket has a table for their employees:

create database "SuperFoods";
\c "SuperFoods";
create table employees (lastname varchar, firstname varchar, title varchar);
insert into employees values('Doe','Jane', 'cashier');
insert into employees values('Frost','Mary', 'manager');
insert into employees values('Smith','John', 'shift leader');

SuperFoods=# select * from employees;
 lastname | firstname |    title    
----------+-----------+--------------
 Doe      | Jane      | cashier
 Frost    | Mary      | manager
 Smith    | John      | shift leader
(3 rows)

John Smith has worked hard and has been promoted from shift leader to assistant manager. HR needs to update the employees table and set the title to be 'assistant manager' where the row corresponds to John Smith. We do that with:

SuperFoods=# update employees SET title='assistant manager' WHERE lastname='Smith' AND firstname='John';
UPDATE 1
SuperFoods=# select * from employees;
 lastname | firstname |       title      
----------+-----------+-------------------
 Doe      | Jane      | cashier
 Frost    | Mary      | manager
 Smith    | John      | assistant manager
(3 rows)


We don't have to update one row at a time. We can update many rows simultaneously. Suppose SuperFoods is having a promotion and are going to reduce the price of all their wine from a certain winery by 10%.

create table wines (winery varchar, wine varchar, price decimal);
insert into wines values('Vinny''s Vino','Smooth Syrah',9.99);
insert into wines values('Wacky Wines','Zany Zin',8.99);
insert into wines values('Wacky Wines','Magic Merlot',19.99);
insert into wines values('Oregon Oeno','Perky Pinot',14.50);

SuperFoods=# select * from wines;
    winery    |     wine     | price
--------------+--------------+-------
 Vinny's Vino | Smooth Syrah |  9.99
 Wacky Wines  | Zany Zin     |  8.99
 Wacky Wines  | Magic Merlot | 19.99
 Oregon Oeno  | Perky Pinot  | 14.50
(4 rows)

Notice that for "Vinny's Vino" we needed to double up the single quote inside the string. Without that, PostgreSQL would get confused, think the string is 'Vinny' and then complain that the rest of the statement starting with 's Vino' is not valid.

OK, let's reduce the price on all Wacky Wines:

update wines set price=0.9*price where winery='Wacky Wines';

SuperFoods=# update wines set price=0.9*price where winery='Wacky Wines';
UPDATE 2
SuperFoods=# select * from wines;
    winery    |     wine     | price 
--------------+--------------+--------
 Vinny's Vino | Smooth Syrah |   9.99
 Oregon Oeno  | Perky Pinot  |  14.50
 Wacky Wines  | Zany Zin     |  8.091
 Wacky Wines  | Magic Merlot | 17.991
(4 rows)

(Note: decimal is not the best type for a price but is fine for illustrative purposes here. Notice too that the updated rows are now at the end of the table.)

We are setting price field for any rows where winery='Wacky Wines' to be 0.9 times the original price and this leads to 2 rows being updated.

If we wanted to update the price of all wines, it would have been very simple indeed: update wines set price=0.9*price; as there would have been no need for a WHERE clause.

Usefully, we can update multiple columns in a single query by putting the SET components in a comma separated list:

update tablename set col1='newvalue1', col2='newvalue2' WHERE some_condition;
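For instance, a hypothetical tweak to our wines table (both new values here are made up for illustration) could rename a wine and reprice it in one statement:

```sql
-- update two columns of one row in a single statement
update wines
set wine  = 'Zany Zinfandel',
    price = 7.99
where winery = 'Wacky Wines' and wine = 'Zany Zin';
```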

With UPDATE we are updating individual fields of some or all of the rows of a table. With DELETE, however, we are deleting complete rows; there is no need to specify any columns. (Deleting complete columns is not achieved with DELETE; for that we use ALTER TABLE...DROP COLUMN, covered later.) Thus, we just need to specify the table of interest and the focal set of rows:

delete from table_name where some_condition;

Despite SuperFoods' wine promotion, they are not bringing in enough customers. Times are tough and they have to let one of their cashiers go. To delete Jane Doe from the employees table we can do:

SuperFoods=# delete from employees where lastname='Doe' and firstname='Jane';
DELETE 1

If we don't specify a WHERE condition, say delete from wines, what will happen? All rows are of interest and the table will be emptied. The table itself will remain; it is not deleted from the database (for that we would need the DROP TABLE command), but it will not contain any rows until you do an INSERT or COPY. If you do want to empty a table it is actually better to use a special command, TRUNCATE, which is more efficient.
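As a sketch, both of these empty the wines table; only the mechanism differs:

```sql
delete from wines;   -- removes every row; the table itself remains
truncate wines;      -- same end state, but faster on large tables
```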

You can also specify that you want to see the set of rows that were deleted. To do this, tack on a "returning *" to the end of the query:

SuperFoods=# delete from wines where price=9.99 returning *;
    winery    |     wine     | price
--------------+--------------+-------
 Vinny's Vino | Smooth Syrah |  9.99
(1 row)

DELETE 1

Few databases are completely static. Data change and need to be cleaned, modified and purged. UPDATE and DELETE are both simple yet powerful. There isn't, however, a simple undo command if you make a mistake, so use them with caution. If in doubt, make a copy of the table (create table new_table as select * from old_table), test out your queries, and then run them against the intended table.
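That safety workflow, sketched against our wines table (the price condition is just an example):

```sql
-- 1. snapshot the table
create table wines_backup as select * from wines;

-- 2. rehearse the destructive query on the copy and inspect the result
delete from wines_backup where price < 10;
select * from wines_backup;

-- 3. once satisfied, run it against the real table
delete from wines where price < 10;
```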
 

Friday, November 12, 2010

Populating a database

A database isn't very useful without data. Let's examine how one can populate a database.

So far we have used simple INSERT statements such as

insert into ceos values('Steve Jobs',1976);

We don't have to fill all the fields in an insert. We can specify some subset in parentheses before VALUES. That is, suppose we have a table:

apple=# create table mytable (col1 int, col2 int, col3 int);
CREATE TABLE


We can insert into the last two columns only with:

apple=# insert into mytable (col2, col3) values(2,3);
INSERT 0 1
apple=# select * from mytable;
 col1 | col2 | col3
------+------+------
      |    2 |    3
(1 row)

We can also use the NULL keyword to fill in empty data:

apple=# insert into mytable values(1,2,NULL);
INSERT 0 1
apple=# select * from mytable;
 col1 | col2 | col3
------+------+------
      |    2 |    3
    1 |    2 |    
(2 rows)

and PostgreSQL also allows us to omit fields and it will fill all the remaining fields with NULLs:

apple=# insert into mytable values(11,12);
INSERT 0 1
apple=# select * from mytable;
 col1 | col2 | col3
------+------+------
      |    2 |    3
    1 |    2 |    
   11 |   12 |    
(3 rows)

This behavior actually depends on whether we specified any default values in the table schema. Here, we define that col1 has a default value of 0. PostgreSQL will fill in missing data with that value rather than NULL.

apple=# create table mytable2 (col1 int default 0, col2 int);
CREATE TABLE

apple=# insert into mytable2 (col2) values(1);
INSERT 0 1
apple=# select * from mytable2;
 col1 | col2
------+------
    0 |    1
(1 row)

The PostgreSQL documentation contains more information about how we can control that behavior.

Usefully, we can insert into a table using the result of a SELECT:

INSERT INTO table_name SELECT some_query_expression;

e.g.

apple=# insert into mytable (col1) SELECT year from ceos where year > 1996;
INSERT 0 1
apple=# select * from mytable;
 col1 | col2 | col3
------+------+------
      |    2 |    3
    1 |    2 |    
   11 |   12 |    
 1997 |      |    
(4 rows)

Lastly, we can insert multiple rows of data using a single INSERT statement. This is quicker and more efficient than multiple single-row INSERT statements.

apple=# insert into mytable2 values(1,2),(3,4),(5,6);
INSERT 0 3
apple=# select * from mytable2;
 col1 | col2
------+------
    0 |    1
    1 |    2
    3 |    4
    5 |    6
(4 rows)

If we had a file of multiple INSERT statements, which is how many databases including PostgreSQL can export a database to file, we could run those statements using psql's \i meta-command that we covered earlier.

If we have a CSV file that includes a header and we want to import it into PostgreSQL we can use the COPY command. The query for this is

COPY table_name FROM filepath WITH CSV HEADER;

(PostgreSQL has many more options for COPY, detailed in the documentation.)

That is, suppose we have a file /Users/postgresuser/ceos.csv:

name,year
Steve Jobs, 1976
Mike Markkula,1977
Mike Scott,1978
John Sculley,1983
Michael Spindler,1993
Gil Amelio,1996
Steve Jobs,1997

then we can import into a table ceos as

apple=# create table ceos (name varchar, year int);
CREATE TABLE
apple=# copy ceos from '/Users/postgresuser/ceos.csv' with csv header;
COPY 7
apple=# select * from ceos;
       name       | year
------------------+------
 Steve Jobs       | 1976
 Mike Markkula    | 1977
 Mike Scott       | 1978
 John Sculley     | 1983
 Michael Spindler | 1993
 Gil Amelio       | 1996
 Steve Jobs       | 1997
(7 rows)

Note: the table must be created before the COPY otherwise PostgreSQL complains:

apple=# copy newceos from '/Users/postgresuser/ceos.csv' with csv header;
ERROR:  relation "newceos" does not exist

TIP: if we have an existing table with the same table structure that we want for our new table we can create that new empty table as

create table your_new_table_name as select * from old_table_name limit 0;

Importantly, COPY is much faster than INSERT. If you have a large amount of data to load, say millions of rows, and performance is important, you should use COPY in conjunction with some of the other strategies suggested by the PostgreSQL documentation.

Thursday, November 11, 2010

Constraining rows of output: LIMIT and OFFSET

Database users often want to take a quick look at just a few rows of tables to get a sense of the data within them. We can do that in PostgreSQL with the LIMIT clause. Let's examine this table:

create table data(name varchar, value int);
insert into data values('joe',1);
insert into data values('bob',2);
insert into data values('jane',3);
insert into data values('anne',4);
insert into data values('kate',5);

apple=# select * from data;
 name | value
------+-------
 joe  |     1
 bob  |     2
 jane |     3
 anne |     4
 kate |     5
(5 rows)

Now, let's select just the first two rows. (Without an ORDER BY the row order is not guaranteed; here PostgreSQL happens to return them in insertion order.)

apple=# select * from data limit 2;
 name | value
------+-------
 joe  |     1
 bob  |     2
(2 rows)


LIMIT is especially important when dealing with large tables. If you do select * from [very large table] in psql, it will generate the full result set and then show the first screen of rows. You can then press space to see the next screen of results or 'q' to quit and return to the psql command line. Generating that result set may take a while because there are so many rows, even if you only want to see the first screen. An explicit LIMIT, by contrast, returns results much more quickly.

What if we wanted a set of rows that occurred later in the sequence? For this we can use OFFSET in combination with LIMIT. Neither is part of the SQL standard, so support varies across databases: MySQL also uses LIMIT, Oracle uses ROWNUM <= number and SQL Server uses TOP. So, if we wanted the 4th and 5th rows, we OFFSET the first 3 so that the 4th row becomes the start of our set, and we specify a LIMIT to say that we only want 2 rows from that point:

apple=# select * from data limit 2 offset 3;
 name | value
------+-------
 anne |     4
 kate |     5
(2 rows)

The order of LIMIT and OFFSET does not matter. This gives the same result:

apple=# select * from data offset 3 limit 2;
 name | value
------+-------
 anne |     4
 kate |     5
(2 rows)

We can use these two in combination with ORDER BY. Let's order by name:

apple=# select * from data order by name;
 name | value
------+-------
 anne |     4
 bob  |     2
 jane |     3
 joe  |     1
 kate |     5
(5 rows)

Now if we offset 3 and limit 2 we get the following instead:

apple=# select * from data order by name limit 2 offset 3;
 name | value
------+-------
 joe  |     1
 kate |     5
(2 rows)

Clearly, then, the ORDER BY is applied first and the LIMIT...OFFSET afterwards.
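This ORDER BY plus LIMIT plus OFFSET combination is the classic recipe for pagination: for page p (counting from 1) with n rows per page, use LIMIT n OFFSET (p - 1) * n. For example, page 2 of our data at two rows per page is LIMIT 2 OFFSET 2:

apple=# select * from data order by name limit 2 offset 2;
 name | value
------+-------
 jane |     3
 joe  |     1
(2 rows)

The ORDER BY is essential here: without it there is no guaranteed row order, so consecutive "pages" could overlap or skip rows.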

Wednesday, November 10, 2010

Basic SQL

So far we have only covered the simplest SELECT SQL queries: select * from [tablename];. That is not going to get us very far. Let's broaden our repertoire. The material in this post is standard SQL. That is, it is not specific to PostgreSQL, so I would encourage you to read around the wealth of other online resources. Ultimately, however, the best way to learn SQL is to use it and practice.

Copy and paste the following SQL into psql; it builds a list of Apple Computer CEOs and the year each started in that position:

create database apple;
\c apple
create table ceos (name varchar, year int);
insert into ceos values('Steve Jobs',1976);
insert into ceos values('Mike Markkula',1977);
insert into ceos values('Mike Scott',1978);
insert into ceos values('John Sculley',1983);
insert into ceos values('Michael Spindler',1993);
insert into ceos values('Gil Amelio',1996);
insert into ceos values('Steve Jobs',1997);
select * from ceos;

We can now start adding some WHERE clauses to our queries to filter down results. Who was an Apple CEO before 1984? That is a simple WHERE clause:

apple=# select name from ceos where year < 1984;
     name     
---------------
 Steve Jobs
 Mike Markkula
 Mike Scott
 John Sculley
(4 rows)


Not surprisingly, < is not the only operator available. We have all the usual >, >=, <, <=, =, != as well as many more, including those for string operations and regular expressions.
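For example, LIKE does simple pattern matching, with % standing for any sequence of characters (and ~ matches against a full POSIX regular expression). Which CEOs were named Mike?

apple=# select name from ceos where name like 'Mike%';
     name      
---------------
 Mike Markkula
 Mike Scott
(2 rows)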

In SQL, the "WHERE x BETWEEN y AND z" clause means look for rows where some field x has a value >= y and value <= z. We can use that to query who was a CEO in the 1970s:

apple=# select name from ceos where year between 1970 and 1979;
     name     
---------------
 Steve Jobs
 Mike Markkula
 Mike Scott
(3 rows)

A SELECT statement can have any number of filtering clauses. The first is preceded with a "WHERE" and later ones are preceded with an AND or an OR — for example, select name from ceos where year <= 1970 OR year >= 1980; — and can involve parentheses, subqueries and other constructs which we will cover later.
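As a small preview of parentheses, they let us control how ANDs and ORs combine. Of Jobs and Amelio, who became CEO in the 1990s?

apple=# select * from ceos where year >= 1990 and (name = 'Steve Jobs' or name = 'Gil Amelio');
       name       | year
------------------+------
 Gil Amelio       | 1996
 Steve Jobs       | 1997
(2 rows)

Without the parentheses the AND would bind more tightly than the OR, and the query would also return every Steve Jobs row regardless of year.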

A powerful additional clause type is a GROUP BY. Let's get the name and number of times each person has been a CEO:

apple=# select name, count(*) from ceos group by name;
       name       | count
------------------+-------
 Michael Spindler |     1
 John Sculley     |     1
 Gil Amelio       |     1
 Mike Markkula    |     1
 Mike Scott       |     1
 Steve Jobs       |     2
(6 rows)

When we group by, I like to imagine PostgreSQL creating a bunch of virtual minitables as an interim step, one per grouping, here by name. Thus, one could think of PostgreSQL thinking "ah, Steve Jobs has two rows, let's create a minitable for him with those two rows, and everyone else gets a single-row table":

       name       | year
------------------+------
 Steve Jobs       | 1976
 Steve Jobs       | 1997

       name       | year
------------------+------
 Mike Markkula    | 1977


       name       | year
------------------+------
 Mike Scott       | 1978


       name       | year
------------------+------
 John Sculley     | 1983

       name       | year
------------------+------
 Michael Spindler | 1993


       name       | year
------------------+------
 Gil Amelio       | 1996

Then it will look at the part of the query before the FROM, which is called the select list. What does the user want from these grouped data? They want the name. We grouped by name so the name will be the same for each row within a minitable. OK, output the name field from each minitable, the first row will do. What else do they want? A count(*) meaning count all the rows. That is 2 for Jobs, 1 for everyone else.
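Other aggregates work on each minitable in exactly the same way. For example, min(year) gives each person's first year as CEO (as before, the order of the groups in the output is arbitrary):

apple=# select name, count(*), min(year) from ceos group by name;
       name       | count | min
------------------+-------+------
 Michael Spindler |     1 | 1993
 John Sculley     |     1 | 1983
 Gil Amelio       |     1 | 1996
 Mike Markkula    |     1 | 1977
 Mike Scott       |     1 | 1978
 Steve Jobs       |     2 | 1976
(6 rows)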

When we GROUP BY we can also add a HAVING clause after it, which filters the groups themselves. (A GROUP BY doesn't need a HAVING, but a HAVING is almost always paired with a GROUP BY.) Let's look at an example. Who was a CEO more than once?

apple=# select name from ceos group by name having count(*) > 1;
    name   
------------
 Steve Jobs
(1 row)

This means: group first, then evaluate the HAVING clause, which must return true or false for each group. Only after that does the select list determine the output for each surviving group.

PostgreSQL will group as in the set of minitables above, then count the rows in each; only groups with more than one row qualify. Finally, since we asked for the name only, it returns the name from that one surviving minitable: "Steve Jobs".

This has been a taste of some of the richness that SQL offers. We will cover additional query constructs later.