Install SQL packages:
# !conda install -y psycopg2
# !conda install -y postgresql
# !pip install ipython-sql
# !pip install sqlalchemy
Standard imports + sqlalchemy
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import sqlalchemy
%matplotlib inline
%load_ext sql
Establish a database connection to the Postgres database running on my machine (localhost), using the ds100 database:
postgresql_uri = "postgres://jegonzal:@localhost:5432/ds100"
sqlite_uri = "sqlite:///data/ds100.db"
default_db = postgresql_uri
%%sql $postgresql_uri
-- Need to drop views to prevent integrity constraint violations later.
DROP VIEW IF EXISTS date_stats;
The following example works through some basic table operations, including CREATE TABLE and DROP TABLE.
To start, we are going to define a toy relation (a.k.a. table), populate it with some toy data, and work through some basic SQL. Deeper stuff coming soon though, I promise!
First, let's create the table of students
%%sql $default_db
-- Drop the table if it already exists
DROP TABLE IF EXISTS students;
-- Create the students table
CREATE TABLE students(
name TEXT PRIMARY KEY,
gpa FLOAT CHECK (gpa >= 0.0 and gpa <= 4.0),
age INTEGER,
dept TEXT,
gender CHAR);
Note that each column has a fixed data type.
The DBMS will enforce these types as data is inserted.
Note also the definition of a primary key, as we discussed in the EDA lecture.
The DBMS will enforce the uniqueness of values in the key columns.
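For instance, an insert that violates the gpa CHECK constraint should be rejected by the DBMS (a quick sketch with a made-up student, left commented out so the notebook runs cleanly):

# %%sql $default_db
# -- Should fail: a gpa of 5.0 violates the CHECK constraint
# INSERT INTO students VALUES ('Test Student', 5.0, 20, 'CS', 'F');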
To see what we've done, let's run our first query, dumping out the content of the table: every column for each row. We denote every column with *:
%%sql $default_db
SELECT * FROM students;
... it's funny, believe me.
Now let's manually insert some values into the table.
%%sql $default_db
INSERT INTO students VALUES
('Sergey Brin', 2.8, 40, 'CS', 'M'),
('Danah Boyd', 3.9, 35, 'CS', 'F'),
('Bill Gates', 1.0, 60, 'CS', 'M'),
('Hillary Mason', 4.0, 35, 'DATASCI', 'F'),
('Mike Olson', 3.7, 50, 'CS', 'M'),
('Mark Zuckerberg', 4.0, 30, 'CS', 'M'),
('Cheryl Sandberg', 4.0, 47, 'BUSINESS', 'F'),
('Susan Wojcicki', 4.0, 46, 'BUSINESS', 'F'),
('Marissa Meyer', 4.0, 45, 'BUSINESS', 'F');
Note that strings in SQL must be quoted with a single quote (') character.
Note how insertions need to have values in the same order as the columns in the CREATE TABLE statement! Let's make sure our data is there:
%%sql $default_db
SELECT * FROM students;
What happens if we try to insert another record with the same primary key (name)? Uncomment the following lines to see: the DBMS rejects the insert with a primary-key (uniqueness) violation.
# %%sql $default_db
# INSERT INTO students VALUES ('Bill Gates', 4.0, 60, 'BUSINESS', 'M')
We can populate the database using Pandas as well:
tips_df = sns.load_dataset("tips")
tips_df.head()
Create a connection with the database
engine = sqlalchemy.create_engine(default_db)
Drop the table if it already exists and then upload the table to the database.
_ = engine.execute("DROP TABLE IF EXISTS tips;")
with engine.connect() as conn:
tips_df.to_sql("tips", conn)
We can also download tables directly into pandas:
with engine.connect() as conn:
students = pd.read_sql("SELECT * FROM students", conn)
students
There is no mechanism in standard SQL to access the schema associated with a database; each database management system provides its own mechanism. Here we use the corresponding client tools:
!sqlite3 data/ds100.db ".schema students"
!psql ds100 -c "\d students"
I found the following SQL Alchemy Quick Reference Sheet to be very helpful.
engine = sqlalchemy.create_engine(postgresql_uri)
inspector = sqlalchemy.inspect(engine)
for col in inspector.get_columns("students"):
print(col)
engine = sqlalchemy.create_engine(sqlite_uri)
inspector = sqlalchemy.inspect(engine)
for col in inspector.get_columns("students"):
print(col)
What is Bill Gates' GPA?
%%sql $default_db
SELECT * FROM students
WHERE name LIKE '%Bill%' -- LIKE pattern matching: '%' is a wildcard (not a full regex)
Wow, Bill has a low GPA; let's lend him a hand.
%%sql $default_db
UPDATE students
SET gpa = 1.0 + gpa
WHERE LOWER(name) = 'bill gates';
And let's check the table now:
%%sql $default_db
SELECT * FROM students
WHERE name ~'^Bil.'; -- Regular expression
Suppose Mark logged into the database and tried to give himself a 5.0? Uncomment the following line to see what happens:
# %%sql
# UPDATE students
# SET gpa = 1.0 + gpa
# WHERE LOWER(name) LIKE '%zuck%';
The above code fails. Why? (Check the CHECK constraint on gpa: Mark's gpa is already 4.0, so adding 1.0 would push it past the allowed maximum.)
Reviewing our table
%%sql $default_db
SELECT * FROM students
Notice two things about the UPDATE statement:

- We decide which rows get updated based entirely on the values in each row, as checked by the WHERE clause.
- There is no notion of any information outside the values in the row -- e.g., there are no "object identifiers" or "row numbers" ... everything is just the data and only the data.

We can delete rows in much the same way we update rows:
%%sql $default_db
DELETE FROM students
WHERE name = 'Sergey Brin'
%%sql $default_db
SELECT * FROM students;
Restoring Sergey
%%sql $default_db
INSERT INTO students VALUES
('Sergey Brin', 4.0, 40, 'CS', 'M');
Now let's start looking at some slightly more interesting queries. The canonical SQL query block includes the following clauses, in the order they appear. Square brackets indicate optional clauses.
SELECT ...
FROM ...
[WHERE ...]
[GROUP BY ...]
[HAVING ...]
[ORDER BY ...]
[LIMIT ...];
Query blocks can reference one or more tables, and be nested in various ways. Before we worry about multi-table queries or nested queries, we'll work our way through examples that exercise all of these clauses on a single table.
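As a preview, here is a single query over our students table that exercises all of these clauses at once (a quick sketch; each clause is explained one at a time below):

%%sql $default_db
SELECT dept, AVG(gpa) AS avg_gpa -- SELECT list, with an aggregate
FROM students                    -- input table
WHERE gender = 'F'               -- filter rows
GROUP BY dept                    -- form groups
HAVING COUNT(*) >= 1             -- filter groups
ORDER BY avg_gpa DESC            -- sort the output
LIMIT 5;                         -- cap the number of rows returned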
The SELECT List
The SELECT list determines which columns to include in the output.
%%sql $default_db
SELECT name
FROM students;
SQL has a wide range of functions that can be applied to each attribute in the SELECT list. Notice that we can alias (rename) the columns with AS. The complete list of built-in PostgreSQL functions is available here.
%%sql $default_db
SELECT UPPER(name) AS n, LOWER(dept) as d, gpa * 4.0 AS four_gpa
FROM students;
As we know, SQL is a multiset logic, preserving the meaning of the number of duplicates in query results. Sometimes, however, we don't want to keep the duplicates; we want to eliminate them. This is done simply by adding the keyword DISTINCT after the SELECT keyword:
%%sql $default_db
SELECT DISTINCT dept
FROM students
Which rows are used when taking the distinct entries? Does it really matter?
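Note that DISTINCT applies to the entire row of the SELECT list, not to any single column. With two columns, each distinct (dept, gender) combination appears once (a quick check):

%%sql $default_db
SELECT DISTINCT dept, gender
FROM students;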
The WHERE Clause
The WHERE clause determines which rows to include by specifying a predicate (a boolean expression). Rows (tuples) that satisfy this expression are returned.
%%sql $default_db
SELECT name, gpa
FROM students
WHERE dept = 'CS'
And of course we can specify both rows and columns explicitly. If we have a primary key, we can filter things down to even the cell level via a SELECT list of one column, and a WHERE clause checking equality on the primary key columns:
%%sql $default_db
SELECT gpa
FROM students
WHERE name = 'Bill Gates';
Note that even this "single-celled" response still has a uniform data type of a relation.
SQL is Closed Over Tables: SQL expressions take in tables and always produce tables. How does this compare to Pandas?
Now that you can slice and dice tables into columns, rows and cells, you have enough knowledge to poke around in a database. Let's move on to skills that you'll need as a data scientist.
GROUP BY aggregation in SQL is a lot like the group by in Pandas. SQL provides a family of aggregate functions for use in the SELECT clause. In the simplest form, queries with aggregates in the SELECT clause generate a single row of output, with each aggregate function performing a summary of all the rows of input. You can have many aggregate functions in your SELECT clause. A list of built-in aggregate functions in PostgreSQL is here.
In the following we compute the average GPA as well as the number of students in each department:
%%sql $default_db
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*)
FROM students
GROUP BY dept
We can use the HAVING clause to apply a predicate to groups:
%%sql $default_db
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*)
FROM students
GROUP BY dept
HAVING COUNT(*) >= 2
%%sql $default_db
SELECT dept, AVG(gpa) as avg_gpa, COUNT(*)
FROM students
WHERE gender = 'F'
GROUP BY dept
HAVING COUNT(*) >= 2
As a nicety, SQL allows you to order your output rows, in either ascending (ASC) or descending (DESC) order of the values in columns. For example:
%%sql $default_db
SELECT *
FROM students
ORDER BY gpa;
%%sql $default_db
SELECT *
FROM students
ORDER BY gpa, age;
%%sql $default_db
SELECT *
FROM students
ORDER BY gpa DESC, age ASC;
The LIMIT clause limits the number of rows returned. Which rows are returned? That depends on the order of the rows, which can be arbitrary beyond anything specified by the ORDER BY clause.
Is this a random sample? NO
Why do we use the LIMIT clause? Often the database we are querying is massive, and retrieving the entire table while we are debugging the query can be costly in time and system resources. However, we should avoid using LIMIT when constructing a sample of the data.
%%sql
SELECT * FROM students LIMIT 3
It is often assumed that when working with a database, all relations (tables) must come from outside or be derived from other sources of data. But it is also possible to construct tables directly in SQL.
Sometimes it's useful to auto-generate data in queries, rather than examine data in the database. This is nice for testing, but also can be useful to play some computational tricks as you'll see in your homework.
SQL has a simple scalar function called random that returns a random value between 0.0 and 1.0. You can use this if you need to generate a column of random numbers. (The PostgreSQL manual doesn't promise much about the statistical properties of this random number generator.)
Let's roll a 6-sided die for each of the students:
%%sql $postgresql_uri
SELECT *, ROUND(RANDOM() * 6) as roll_dice
FROM students;
Is this a good implementation of a fair 6-sided die? (Hint: ROUND(RANDOM() * 6) can produce the values 0 through 6, and the two endpoints occur only half as often as the other values. We examine the distribution below.)
Suppose we want to generate a whole bunch of random numbers, not tied to any particular stored table -- can we do that in SQL?
SQL has a notion of table-valued functions: functions that return tables, and hence can be used in the FROM clause of a query. The standard table-valued function is called generate_series, and it's much like numpy's arange:
%%sql $postgresql_uri
SELECT *
FROM generate_series(1,5);
%%sql $postgresql_uri
SELECT *
FROM generate_series(1,10, 2);
So to generate 5 random real numbers between 0 and 6, we might use this SQL:
%%sql $postgresql_uri
SELECT trial, (6*RANDOM()) AS rando
FROM generate_series(1, 5) AS flip(trial);
Let's test the distribution of our earlier generator:
%%sql $postgresql_uri
SELECT ROUND(6*RANDOM()) AS rando, COUNT(*)
FROM generate_series(1, 100000) AS flip(trial)
GROUP BY rando
ORDER BY count
And if we want integers, we can use a PostgreSQL typecast operator (postfix ::<type>):
%%sql $postgresql_uri
-- NOTE WE ALSO TAKE THE CEIL
SELECT CEIL(6*RANDOM())::INTEGER AS rando, COUNT(*)
FROM generate_series(1, 100000) AS flip(trial)
GROUP BY rando
ORDER BY count
Now suppose we want to populate a "matrix" relation my_matrix(x, y, val) full of random values. In Python during Lecture 7 we used np.random.randn(3,2).
import numpy as np
# normally distributed random numbers, mean 0 variance 1
np.random.randn(3,2)
In this relational version we need to explicitly generate the x and y values. We can do this via SQL's built-in cartesian product!
%%sql $postgresql_uri
SELECT rows.x, columns.y, random() AS val
FROM generate_series(0,2) AS rows(x),
generate_series(0,1) AS columns(y);
We may want to store a matrix as a table—in which case we should set up the schema properly to ensure that it remains a legal matrix.
%%sql $postgresql_uri
DROP TABLE IF EXISTS my_matrix;
CREATE TABLE my_matrix(x INTEGER, y INTEGER, val FLOAT, PRIMARY KEY(x,y));
INSERT INTO my_matrix
SELECT rows.x, columns.y, random() AS val
FROM generate_series(0,2) AS rows(x),
generate_series(0,1) AS columns(y);
SELECT * FROM my_matrix;
A few take-aways from the previous cell:

- The schema of my_matrix reflects the fact that val is a function of the row (x) and column (y) IDs.
- Notice the INSERT statement, which contains a SELECT query rather than the VALUES we saw before. You might want to experiment and see what would happen if the SELECT query produces a different schema than my_matrix: try having it produce too few columns, too many columns, columns in different orders, etc.
- In the INSERT...SELECT statement, notice the definition of output column names via the AS in the SELECT clause. Is that necessary here?
- In the INSERT...SELECT statement, notice the definition of table and column names in the FROM clause via AS, and the way they get referenced in the SELECT clause. Do we need the table names specified in the SELECT clause? Try it and see!

Sometimes we may want a custom scalar function that isn't built into SQL. Some database systems allow you to register your own user-defined functions (UDFs) in one or more programming languages. Conveniently, PostgreSQL allows us to register user-defined functions written in Python. Be aware of two things:
1. Calling Python for each row in a query is quite a bit slower than using the pre-compiled built-in functions in SQL ... this is akin to the use of Python loops instead of numpy calls. If you can avoid using Python UDFs, you should do so to get better performance.
2. Python is a full-featured programming language with access to your operating system's functionality, which means it can reach outside the scope of the query and wreak havoc, including running arbitrary UNIX commands. (PostgreSQL refers to this as an untrusted language.) Be very careful with the Python UDFs you use in your Postgres queries! If you want to be safer, write UDFs in a trusted language. PostgreSQL has a number of other languages to choose from, including Java and even R!
First we tell PostgreSQL we want to use the plpythonu extension (so named because of "pl" for "programming language", "u" for "untrusted"):
%%sql $postgresql_uri
CREATE EXTENSION IF NOT EXISTS plpythonu;
Now let's write some trivial Python code and register it as a UDF using the CREATE FUNCTION command. Since SQL is a typed language, we need to specify the SQL types for the input and output to our function, in addition to the code (within $$ delimiters) and the language:
%%sql $postgresql_uri
DROP FUNCTION IF EXISTS fib(x INTEGER);
CREATE FUNCTION fib(x INTEGER) RETURNS INTEGER
AS $$
def fib(x):
if x < 2:
return x
else:
return fib(x-1) + fib(x-2)
return fib(x)
$$ LANGUAGE plpythonu;
%%sql $postgresql_uri
SELECT x, fib(x)
FROM generate_series(1,10) AS row(x);
It is possible to create transactions that isolate changes. This is done by starting a transaction with BEGIN. We can then proceed to make changes to the database; during this time others will not be able to see our changes, until we end the transaction with ROLLBACK or COMMIT:
BEGIN;
UPDATE profs SET luckynumber = 888 WHERE lastname = 'Gonzalez';
SELECT * FROM profs;
ROLLBACK;
SELECT * FROM profs;
Try running this in the postgres shell...
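To keep the change instead, we would end the transaction with COMMIT (the same sketch, again assuming the profs table from lecture exists):

BEGIN;
UPDATE profs SET luckynumber = 888 WHERE lastname = 'Gonzalez';
COMMIT;
-- the update is now permanent and visible to other sessions
SELECT * FROM profs;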
Statistics doesn't deal with individuals; it deals with groups: distributions, populations, samples and the like. As such, computing statistics in SQL focuses heavily on aggregation functions.
All SQL systems have simple descriptive statistics built in as aggregation functions:

- min, max
- count
- sum
- avg
- stddev and variance, the sample standard deviation and variance
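For example, a quick sketch applying several of these at once to our students table from earlier:

%%sql $default_db
SELECT COUNT(*), MIN(gpa), MAX(gpa), AVG(gpa), STDDEV(gpa)
FROM students;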
PostgreSQL offers many more. Some handy ones include:

- stddev_pop and var_pop: the population standard deviation and variance, which you should use rather than stddev and variance if you know your data is the full population, not a sample.
- covar_samp and covar_pop: sample and population covariance
- corr, Pearson's correlation coefficient

You'll notice that a number of handy statistics are missing from this list, including the median and quartiles. That's because those are order statistics: they are defined based on an ordering of the values in a column.
SQL provides for this by allowing what it calls "ordered set functions", which require a WITHIN GROUP (ORDER BY <columns>) clause to accompany the order-statistic aggregate. For example, to compute the 50th percentile (median) in SQL, we can use the following:
%%sql $postgresql_uri
SELECT
percentile_cont(0.5) WITHIN GROUP (ORDER BY x)
FROM generate_series(1,10) AS data(x);
There are two versions of the percentile function:

- percentile_cont (continuous): interpolates
- percentile_disc (discrete): returns an entry from the table

What will the following expressions return?
%%sql $postgresql_uri
SELECT
percentile_disc(0.5) WITHIN GROUP (ORDER BY x)
FROM generate_series(1,10) AS data(x);
We can compute the edges and middle of the box in a box plot:
%%sql $postgresql_uri
SELECT
percentile_disc(0.25) WITHIN GROUP (ORDER BY x) as lower_quartile,
percentile_disc(0.5) WITHIN GROUP (ORDER BY x) as median,
percentile_disc(0.75) WITHIN GROUP (ORDER BY x) as upper_quartile
FROM generate_series(1,10) AS data(x);
psql
In a separate notebook (load_fec.ipynb) you'll find the commands to load publicly-available campaign finance data from the Federal Election Commission into a PostgreSQL database.
To see what we have in the database, it's simplest to use the PostgreSQL shell command psql to interact with the database. You can run man psql to learn more about it. A few handy tips:
- psql supports some useful non-SQL "meta-"commands, which you access via backslash (\). To find out about them, run psql in a bash shell, and at the prompt type \?.
- psql has built-in documentation for SQL. To see that, at the psql prompt type \help.
- psql is an interactive SQL shell, and so not suitable for use inside a Jupyter notebook. If you want to invoke it within a Jupyter notebook, you should use !psql -c <SQL statement> -- the -c flag tells psql to run the SQL statement and then exit:

!psql ds100 -c "select * from students;"
Let's see what tables we have in our database after loading the FEC data:
!psql ds100 -c "\d"
And let's have a look at the individual table's schema:
!psql ds100 -c "\d individual"
If you are curious about the meaning of these columns, check out the FEC data description.
How big is this table?
%%sql $postgresql_uri
SELECT COUNT(*)
FROM individual
LIMIT and sampling
This is not the first topic usually taught in SQL, but it's extremely useful for exploration.
OK, now we have some serious data loaded and we're ready to explore it.
Database tables are often big--hence the use of a database system. When browsing them at first, we may want to look at exemplary rows: e.g., an arbitrary number of rows, or a random sample of the rows.
To look at all of the data in the individual table, we would simply write:
SELECT *
FROM individual;
But that would return 20,347,829 rows into our Jupyter notebook's memory, and perhaps overflow the RAM in your computer. Instead, we could limit the size of the output to the first few rows as follows:
%%sql $postgresql_uri
SELECT *
FROM individual
LIMIT 4;
Beyond the limit clause: random sampling
As data scientists, we should be concerned about spending much time looking at a biased subset of our data. Instead, we might want an i.i.d. random sample of the rows in the table. There are various methods for sampling from a table. A simple one built into many database systems including PostgreSQL is Bernoulli sampling, in which the decision to return each row is made randomly and independently. As a metaphor, the database engine "flips a coin" for each row to decide whether to return it. We can influence the sampling rate by choosing the probability of a "true" result of the coin flip.
This is done on a per-table basis in the FROM clause of the query like so:
%%sql $postgresql_uri
SELECT *
FROM individual TABLESAMPLE BERNOULLI(.00001) REPEATABLE(42);
To learn more about the TABLESAMPLE clause, check out the SELECT docs. Note that there is a second sampling method called block sampling, which is a lot like cluster sampling at the level of pages on disk!
Three things to note relative to our previous limit construct:

- The rows returned are chosen randomly rather than arbitrarily; the REPEATABLE clause seeds the randomness so the same sample comes back on each run.
- Bernoulli sampling does not return a fixed number of rows: each row is included independently, so the output size varies around the expected fraction.
- The sampling still requires scanning the whole table, so it is not necessarily cheap on a big table.

For these reasons, if we want a proper i.i.d. sample, it's a good idea to compute a nice-sized sample and store it, keeping it reasonably large for more general use. Since we will not be updating any rows in our individual table, we can do this without worrying that the sample will get "out of date" with respect to the content of individual.
We can use the CREATE TABLE AS SELECT ... (a.k.a. CTAS) pattern to create a table that saves the output of a query:
%%sql $postgresql_uri
DROP TABLE IF EXISTS indiv_sample;
CREATE TABLE indiv_sample AS
SELECT *
FROM individual TABLESAMPLE BERNOULLI(.1) REPEATABLE(42);
Here is a more manual way to construct a random sample of a fixed size. Note that this is not as efficient; it can take several minutes to complete.
# %%sql $postgresql_uri
# SELECT SETSEED(0.5);
# DROP TABLE IF EXISTS indiv_sample2;
# CREATE TABLE indiv_sample2 AS
# SELECT *, RANDOM() AS u
# FROM individual
# ORDER BY u
# LIMIT 20000;
%%sql $postgresql_uri
SELECT COUNT(*) FROM indiv_sample2
%%sql $postgresql_uri
SELECT * FROM indiv_sample2 LIMIT 5
OK, we already had a peek at the individual table. Now let's look at specific attributes (columns) related to who is donating how much.
In addition to referencing the columns of individual in the SELECT clause, we can also derive new columns by writing field-level (so-called "scalar") functions. Typically we reference some table columns in those functions.
In our case, let's compute the log of transaction_amt for subsequent plotting. SQL comes with many typical functions you can use in this way, and PostgreSQL is particularly rich on this front; see the PostgreSQL manual for details.
We'll look at indiv_sample rather than individual while we're just exploring.
%%sql $postgresql_uri
SELECT name, state, cmte_id,
transaction_amt, log(transaction_amt)
FROM indiv_sample
LIMIT 10;
We can combine SQL with Python in the following way:
query = """
SELECT transaction_amt AS amt
FROM indiv_sample
WHERE transaction_amt > 0;
"""
result = %sql $postgresql_uri $query
sns.distplot(result.DataFrame()['amt'])
query = """
SELECT LOG(transaction_amt) AS log_amt
FROM indiv_sample
WHERE transaction_amt > 0;
"""
result = %sql $postgresql_uri $query
sns.distplot(result.DataFrame()['log_amt'])
scales = np.array([1,10,20, 100, 500, 1000, 5000])
_ = plt.xticks(np.log10(scales), scales)
CASE statements: SQL conditionals in the SELECT clause
What about smaller donations?
# %%sql $postgresql_uri
# SELECT name, state, cmte_id,
# transaction_amt, LOG(transaction_amt)
# FROM indiv_sample
# WHERE transaction_amt < 10
# LIMIT 10;
Uh oh, log is not defined for numbers <= 0! We need a conditional statement in the SELECT clause to decide what function to call. We can use SQL's CASE construct for that.
%%sql $postgresql_uri
SELECT name, state, cmte_id, transaction_amt,
CASE WHEN transaction_amt > 0 THEN log(transaction_amt)
WHEN transaction_amt = 0 THEN 0
ELSE -1*(log(abs(transaction_amt)))
END AS log_magnitude
FROM indiv_sample
WHERE transaction_amt < 10
LIMIT 10;
WHERE clauses
We can choose which rows do and do not appear in the query by putting boolean-valued expressions ("predicates") in the WHERE clause, right after the FROM clause. For example, we might be looking for big donations greater than $1000:
%%sql $postgresql_uri
-- Notice that as we are more selective we return to the full data
SELECT name, city, state, transaction_amt
FROM individual
WHERE transaction_amt > 1000
limit 10;
We can form more complex predicates using Boolean connectives AND, OR and NOT:
%%sql $postgresql_uri
SELECT name, city, state, transaction_amt
FROM individual
WHERE transaction_amt > 1000
AND (state = 'WI' OR state = 'IL')
AND NOT (city = 'CHICAGO')
LIMIT 10;
Finally, by combining ORDER BY and LIMIT, we can identify top campaign contributors.
%%sql $postgresql_uri
SELECT name, ROUND(SUM(transaction_amt)/100.0, 2) total_amt
FROM individual
WHERE city = 'SAN FRANCISCO'
GROUP BY name
ORDER BY total_amt DESC
LIMIT 20;
Note how the combination of ORDER BY and LIMIT gives you the "top k" results. That's often handy!
What's the granularity of our individual table? Transactions? Examining the schema, it doesn't look like there's a key for the donor. Maybe the image_num is a key? Or the file_num?
To determine this, we need to count up the total number of rows, and the number of distinct values that occur in the image_num column, using the aggregate functions introduced earlier. To start with, we'll run the query on our sample for a sanity check:
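A minimal sketch of such a query (comparing the total row count against the distinct image_num count; if the two differ, image_num cannot be a key):

%%sql $postgresql_uri
SELECT COUNT(*) AS total_rows,
       COUNT(DISTINCT image_num) AS distinct_image_nums
FROM indiv_sample;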
Up to now we've looked at a single query at a time. SQL also allows us to nest queries in various ways. In this section we look at the cleaner examples of how to do this in SQL: views and Common Table Expressions (CTEs).
In earlier examples, we created new tables and populated them from the result of queries over stored tables. There are two main drawbacks of that approach that may concern us in some cases:

- If the stored tables we derived from are updated later, our derived table becomes stale and no longer reflects the underlying data.
- The derived table takes up storage space, even though it is redundant with data that is already stored.

For this reason, SQL provides a notion of logical views: these are basically named queries that are re-evaluated upon each reference.
The syntax is straightforward:
CREATE VIEW <name> AS
<SELECT statement>;
The resulting view <name> can be used in a SELECT query, but not in an INSERT, DELETE or UPDATE query!
As an example, we might want a view that stores just some summary statistics of transaction_amt for each date:
%%sql $postgresql_uri
DROP VIEW IF EXISTS date_stats;
CREATE VIEW date_stats AS
SELECT
to_date(transaction_dt, 'MMDDYYYY') as day, -- Date Parsing
min(transaction_amt),
avg(transaction_amt),
stddev(transaction_amt),
max(transaction_amt)
FROM indiv_sample
GROUP BY transaction_dt
ORDER BY day;
%%sql
SELECT * from date_stats limit 5;
Notice that this did not create a table:
!psql ds100 -c "\dt"
Instead it created a view:
!psql ds100 -c "\dv"
We can list more about the view using the \d+ option:
!psql ds100 -c "\d+ date_stats"
Let's create a view of random numbers, and we will even seed the random number generator.
%%sql $postgresql_uri
SELECT setseed(0.3);
DROP VIEW IF EXISTS rando;
CREATE VIEW rando(rownum, rnd) AS
SELECT rownum, round(random())::INTEGER
FROM generate_series(1,50) AS ind(rownum)
What is the sum of the rows in rando?
%%sql $postgresql_uri
SELECT SUM(rnd) FROM rando;
What was that value again?
%%sql $postgresql_uri
SELECT SUM(rnd) FROM rando;
The value changes with each invocation! Because rando is a view, its defining query (including the calls to random()) is re-evaluated every time the view is referenced.
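If we wanted the numbers to stay fixed, we could materialize them into a table using the CTAS pattern from earlier, rather than defining a view (a quick sketch; rando_frozen is a made-up name):

%%sql $postgresql_uri
DROP TABLE IF EXISTS rando_frozen;
CREATE TABLE rando_frozen AS
SELECT * FROM rando; -- snapshot the view's current output
SELECT SUM(rnd) FROM rando_frozen; -- repeated queries now return the same sum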
Views can help organize complex queries by naming intermediate results. Problem: the view names accumulate in the database -- temp1, temp1_joey, temp1_joey_fixed, ... We need a mechanism to decompose a query into views for the scope of a single query.
Common Table Expressions (WITH)
Think of these as views that exist only during the query.
If we're only going to use a view within a single query, it is a little inelegant to CREATE it, and then have to DROP it later to recycle the view name.
Common Table Expressions (CTEs) are like views that we use on-the-fly. (If you know about lambdas in Python, you can think of CTEs as lambda views.) The syntax for CTEs is to use a WITH clause in front of the query:
WITH <name> [(renamed columns)] AS
(<SELECT statement>)
[, <name2> AS (<SELECT statement>)...]
If you need multiple CTEs, you separate them with commas. We can rewrite our query above without a view as follows:
%%sql $postgresql_uri
WITH per_day_stats AS (
SELECT
to_date(transaction_dt, 'MMDDYYYY') as day, -- Date Parsing
min(transaction_amt),
avg(transaction_amt),
stddev(transaction_amt),
max(transaction_amt)
FROM indiv_sample
GROUP BY transaction_dt
)
SELECT day, stddev
FROM per_day_stats
WHERE stddev IS NOT NULL
ORDER by stddev DESC
LIMIT 1;
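Later CTEs (separated by commas) can reference earlier ones. For instance, a quick sketch chaining two CTEs (busy_days and the $1000 threshold are made up here):

%%sql $postgresql_uri
WITH per_day_stats AS (
    SELECT to_date(transaction_dt, 'MMDDYYYY') AS day,
           avg(transaction_amt) AS avg_amt
    FROM indiv_sample
    GROUP BY transaction_dt
),
busy_days AS (
    SELECT day FROM per_day_stats WHERE avg_amt > 1000
)
SELECT COUNT(*) AS num_busy_days FROM busy_days;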
We can also combine the ordered-set aggregates from earlier with GROUP BY, for example to compute per-state quartiles of transaction amounts:
%%sql $postgresql_uri
SELECT state,
percentile_cont(0.25) WITHIN GROUP (ORDER BY transaction_amt) as lower_quartile,
percentile_cont(0.5) WITHIN GROUP (ORDER BY transaction_amt) as median,
percentile_cont(0.75) WITHIN GROUP (ORDER BY transaction_amt) as upper_quartile
FROM indiv_sample
GROUP BY state
ORDER BY upper_quartile DESC
LIMIT 10;