This architecture shows how a SQL query flows from the client through various processing layers before reaching the actual data on disk, with each component playing a crucial role in ensuring efficient, reliable database operations.
SQL Clients, Web Apps, BI Tools
These are the programs that users interact with to send SQL queries to the database. They connect to the RDBMS server using protocols like JDBC, ODBC, or native database drivers (Python, Node.js, Java, etc.).
Checks SQL syntax and converts queries into an internal format the database can understand. It validates that tables and columns exist and that the user has proper permissions.
Determines the most efficient way to execute a query by analyzing different execution plans. It considers factors like available indexes, table statistics, and join methods to minimize resource usage. This is a big part of what makes PostgreSQL so efficient.
Carries out the optimized query plan by coordinating with other components to retrieve or modify data according to the SQL command.
EXPLAIN
EXPLAIN
SELECT * FROM trees tr
JOIN taxonomy ta on ta.id = tr.taxonomy_id
JOIN tree_species sp on sp.id = ta.species_id
WHERE tr.circumference > 50 and sp.species = 'tomentosa';
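To see not only the chosen plan but also the real execution times and row counts, PostgreSQL offers EXPLAIN ANALYZE, which actually runs the query (same trees schema as above):
EXPLAIN ANALYZE
SELECT * FROM trees tr
JOIN taxonomy ta ON ta.id = tr.taxonomy_id
JOIN tree_species sp ON sp.id = ta.species_id
WHERE tr.circumference > 50 AND sp.species = 'tomentosa';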
Manages how data is physically stored and retrieved from disk.
It handles different storage structures like heap files, B-trees
for indexes, and implements algorithms for efficient data access.
Ensures database transactions are processed reliably and completely, preserving the ACID properties even when errors or crashes occur.
Memory Cache
Concurrency Control
Data Files - Store the actual table records and rows containing your business data.
Index Files - Contain sorted pointers to data locations, speeding up queries by avoiding full table scans.
Log Files - Record all database changes for recovery purposes, allowing the system to restore data after crashes or rollback uncommitted transactions.
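As a quick illustration of index files at work: creating an index on a frequently filtered column lets the engine skip the full table scan (the index and table names here are just hypothetical):
-- Hypothetical example: index a frequently filtered column
CREATE INDEX idx_trees_circumference ON trees (circumference);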
The primary language for interacting with the database from within. Users and applications use SQL commands (SELECT, INSERT, UPDATE, DELETE) to query and modify data directly.
Tools and utilities for moving data in and out of the database. This includes bulk loading tools, data migration utilities, and export functions for backing up or transferring data to other systems.
In PostgreSQL we have command-line executables: createdb, dropdb, pg_dump, pg_restore, and pg_dumpall (there is no pg_restoreall; a pg_dumpall dump is restored with psql). These utilities are in the same folder as your psql executable.
Organization and Optimization
Administrative functions that maintain database performance. This involves organizing data structures, managing storage allocation, and optimizing query execution paths to ensure fast response times.
In PostgreSQL, we have for instance:
Autovacuum - reclaims storage space by cleaning up deleted/updated rows.
TOAST (oversized attribute storage) - manages large field values.
VACUUM / REINDEX / CLUSTER - manual commands to reorganize data and indexes for better performance.
SQL Layer - The query processing interface that interprets SQL commands, validates syntax, checks permissions, and translates high-level queries into low-level operations the database engine can execute.
Compacting and Cold Recovery - Maintenance operations that reclaim unused space (compacting) and restore the database from backups or after unexpected shutdowns (cold recovery). These ensure data integrity and efficient storage usage.
Database Core (Tables, Views, Indexes)
The logical database structure:
plus functions, triggers, sequences, etc.
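For instance, two of these extra logical objects, sketched with made-up names:
-- A view over a hypothetical trees table
CREATE VIEW large_trees AS
SELECT * FROM trees WHERE circumference > 100;

-- A standalone sequence
CREATE SEQUENCE invoice_number_seq START 1000;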
NTFS, NFS, etc. - The underlying operating system file systems where database files are physically stored. The database management system interfaces with these file systems to read and write data to disk. Common examples include:
Master SQL, and you’ll never lack job opportunities
SQL was created in the early 1970s. Here are the major features added through its evolution:
SQL-86 (SQL-87) - First ANSI Standard
SQL-89 (SQL1)
SQL-92 (SQL2)
SQL:1999 (SQL3)
SQL:2003
SQL:2006
SQL:2008
SQL:2011
SQL:2016
SQL:2023 (Latest)
# Python changes every few years
print "Hello" # Python 2 (dead)
print("Hello") # Python 3
# JavaScript changes constantly
var x = 5; // Old way
let x = 5; // New way
const x = 5; // Newer way
-- SQL from 1990s still works today!
SELECT * FROM users WHERE age > 18;
SQL is the COBOL that actually stayed relevant.
Imperative (Python/Java/C++): “HOW to do it”
results = []
for user in users:
    if user.age > 21 and user.country == 'France':
        results.append(user.name)
return sorted(results)
Declarative (SQL): “WHAT you want”
You describe the result, the database figures out HOW
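The same request as the imperative snippet above, written declaratively (assuming a users table with name, age, and country columns):
SELECT name
FROM users
WHERE age > 21
  AND country = 'France'
ORDER BY name;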
Declarative means you describe what you want, not how to do it.
In SQL, you state the result you want, and the database (query planner) figures out how to get it.
SELECT name FROM students WHERE grade > 15;
✅ You focus on what, not how — that’s why SQL is declarative.
SQL isn’t just for database admins anymore:
# Modern data stack
df = pd.read_sql("SELECT * FROM events WHERE date > '2024-01-01'", conn)
model.train(df)
Standard SQL | PostgreSQL | MySQL | SQL Server |
---|---|---|---|
SUBSTRING() | SUBSTRING() | SUBSTRING() | SUBSTRING() |
❌ | LIMIT 10 | LIMIT 10 | TOP 10 |
❌ | STRING_AGG() | GROUP_CONCAT() | STRING_AGG() |
❌ | ::TEXT | ❌ | CAST AS VARCHAR |
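For example, the row-limiting line of that table as concrete queries (users is just an example table name):
-- PostgreSQL / MySQL
SELECT * FROM users LIMIT 10;

-- SQL Server
SELECT TOP 10 * FROM users;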
Core SQL (90%) is identical everywhere. Fancy features (10%) vary.
SQL
├── DDL (Data Definition Language)
│ └── CREATE, ALTER, DROP
├── DML (Data Manipulation Language)
│ └── INSERT, UPDATE, DELETE
├── DQL (Data Query Language)
│ └── SELECT, FROM, WHERE, JOIN
├── DCL (Data Control Language)
│ └── GRANT, REVOKE
└── TCL (Transaction Control Language)
└── COMMIT, ROLLBACK, BEGIN
You’ll use DML and DQL 90% of the time
Some database texts combine DQL with DML, since SELECT statements can be considered data manipulation, resulting in 4 sub-languages instead of 5.
Defines and manages database structure
create database epitadb;
create table students (
id serial primary key,
name varchar(100) not null,
age int not null,
grade int not null
);
ALTER table students ADD COLUMN graduation_year int;
DROP table students;
CREATE TABLE <table_name> (
id serial primary key,
<column_name> <data_type> <constraints>,
<column_name> <data_type> <constraints>,
<column_name> <data_type> <constraints>,
<column_name> <data_type> <constraints>
);
with index and keys:
CREATE TABLE <table_name> (
id serial primary key,
<column_name> <data_type> <constraints>,
<column_name> <data_type> <constraints>,
<column_name> <data_type> <constraints>,
<column_name> <data_type> <constraints>,
INDEX <index_name> (<column_name>),
UNIQUE INDEX <index_name> (<column_name>),
FOREIGN KEY (<column_name>) REFERENCES <table_name>(<column_name>)
);
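Note that inline INDEX entries inside CREATE TABLE, as in the generic template above, are MySQL-style syntax; in PostgreSQL, indexes are created with a separate statement (same placeholders):
CREATE INDEX <index_name> ON <table_name> (<column_name>);
CREATE UNIQUE INDEX <index_name> ON <table_name> (<column_name>);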
Let’s look at the postgresql documentation for the create table statement:
https://www.postgresql.org/docs/current/sql-createtable.html
In PostgreSQL:
CREATE TABLE employees (
id SERIAL PRIMARY KEY,
first_name VARCHAR(50) NOT NULL,
last_name VARCHAR(50) NOT NULL,
email VARCHAR(100) UNIQUE NOT NULL,
hire_date DATE NOT NULL,
salary NUMERIC(10, 2) NOT NULL,
department_id INT NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
It is good practice to use default values for date/time columns and, for traceability purposes, to always include the two columns created_at and updated_at. Note that the last column definition does not end with a comma.
Manages data within tables
Aggregate Functions (DQL)
COUNT(), SUM(), AVG(), MAX(), MIN()
GROUP BY for summarizing data

Scalar Functions (DQL)
COALESCE() - Handles NULL values
CAST() / CONVERT() - Type conversion
SUBSTRING(), CONCAT(), TRIM() - String manipulation
DATEADD(), DATEDIFF() - Date operations
ROUND(), FLOOR(), CEILING() - Math functions

Clauses & Operations (DQL)
GROUP BY - Groups rows for aggregation
HAVING - Filters grouped results
ORDER BY - Sorts result sets
DISTINCT - Removes duplicates
UNION/INTERSECT/EXCEPT - Set operations

Window Functions (DQL)
ROW_NUMBER(), RANK(), DENSE_RANK()
LAG(), LEAD() - Access other rows

Conditional Logic (DQL)
CASE WHEN - Conditional expressions
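A couple of these in action, as a sketch against the courses table used later in this section (the column values are assumed purely for illustration):
-- Aggregates with GROUP BY and HAVING
SELECT semester,
       COUNT(*)   AS nb_courses,
       AVG(hours) AS avg_hours
FROM courses
GROUP BY semester
HAVING COUNT(*) > 2;

-- A window function: rank courses by hours within each semester
SELECT title,
       semester,
       RANK() OVER (PARTITION BY semester ORDER BY hours DESC) AS hours_rank
FROM courses;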
INSERT INTO <table_name> (
<column_name>,
<column_name>,
<column_name>
)
VALUES (
<value>,
<value>,
<value>
);
for instance
INSERT INTO courses (
title,
description,
semester,
hours
)
VALUES (
"intro to databse",
"the best database course in the universe ",
"fall 2025",
123
);
id is the primary key and is auto-incremented, so we do not list it in the INSERT.
Columns that are not listed receive NULL (or their default value); make sure every NOT NULL column without a default gets a value.
you can insert multiple rows in one query
INSERT INTO <table_name> (
<column_name>,
<column_name>,
<column_name>
)
VALUES
(<value>, <value>, <value>),
(<value>, <value>, <value>),
(<value>, <value>, <value>);
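For instance, three courses inserted in a single statement (the values are made up):
INSERT INTO courses (title, description, semester, hours)
VALUES
    ('Intro to Databases', 'Fundamentals of relational databases', 'fall 2025', 4),
    ('Advanced SQL', 'Window functions, CTEs and more', 'spring 2026', 3),
    ('Data Modeling', 'From ER diagrams to normalized schemas', 'fall 2025', 2);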
UPDATE <table_name>
SET <column_name> = <value>,
<column_name> = <value>
WHERE <condition>;
for instance:
UPDATE courses
SET title = "intro to databse",
description = "the best intro to database course in the universe",
semester = "fall 2025",
hours = 4
WHERE id = 1;
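Careful: without a WHERE clause, UPDATE modifies every row of the table. In PostgreSQL you can also append a RETURNING clause to see exactly which rows were changed; a small sketch on the same table:
UPDATE courses
SET hours = 4
WHERE id = 1
RETURNING id, title, hours;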
Retrieves data from the database
SELECT <column_name>, <column_name>
FROM <table_name>
WHERE <condition>;
for instance:
SELECT title, description
FROM courses
WHERE semester = "fall 2025";
joining on the teacher id:
SELECT title, description, teachers.name
FROM courses
JOIN teachers ON courses.teacher_id = teachers.id
WHERE semester = "fall 2025";
Controls access and permissions
GRANT SELECT, INSERT, UPDATE, DELETE ON <table_name> TO <user_name>;
for instance:
GRANT SELECT, INSERT, UPDATE, DELETE ON courses TO Alexis;
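The opposite of GRANT is REVOKE; for instance, to take the write permissions back and leave only read access (same table and role as above):
REVOKE INSERT, UPDATE, DELETE ON courses FROM Alexis;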
GRANT does not modify pg_hba.conf. These control different layers of access:
pg_hba.conf (Connection Level) - decides who may connect, from where, and how they authenticate.
GRANT (Permission Level) - decides what a connected role may do on which objects.
Example flow: the user first connects (allowed or refused by pg_hba.conf), then runs SELECT * FROM products → GRANT checks if they have SELECT permission.
Where GRANT info is stored: inspect it with \dp (table privileges) or \l (database privileges) in psql.
You need both: pg_hba.conf to allow connection + GRANT to allow operations!
Manages database transactions
BEGIN TRANSACTION;
-- Update all course credits by 1
UPDATE courses SET credits = credits + 1;
-- Give all teachers a 10% raise (you wish )
UPDATE teachers SET salary = salary * 1.10;
-- Check if any teacher salary is now over 20,000
IF EXISTS (SELECT 1 FROM teachers WHERE salary > 20000)
BEGIN
-- Something went wrong, undo everything
ROLLBACK;
PRINT 'Rolled back - salary too high! (for France)';
END
ELSE
BEGIN
-- Everything looks good, save the changes
COMMIT;
PRINT 'Changes saved successfully!';
END
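Note that the IF ... BEGIN ... END / PRINT construct above is SQL Server (T-SQL) syntax. A rough PostgreSQL equivalent, sketched with a DO block on the same hypothetical tables: raising an exception aborts the transaction, so nothing is committed.
BEGIN;

UPDATE courses SET credits = credits + 1;
UPDATE teachers SET salary = salary * 1.10;

-- If any salary is now too high, raise an error;
-- the error aborts the transaction, so the COMMIT below
-- effectively becomes a rollback.
DO $$
BEGIN
    IF EXISTS (SELECT 1 FROM teachers WHERE salary > 20000) THEN
        RAISE EXCEPTION 'salary too high! (for France)';
    END IF;
END $$;

COMMIT;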
Both COMMIT and ROLLBACK are "terminal" commands for a transaction. They save/undo the work AND close the transaction in one go. There's no "END TRANSACTION" command in SQL: once you COMMIT or ROLLBACK, you're done!
-- Create the structure
CREATE TABLE songs (
id INTEGER PRIMARY KEY,
title TEXT NOT NULL,
artist TEXT NOT NULL,
duration_seconds INTEGER,
release_date DATE
);
-- Modify the structure
ALTER TABLE songs ADD COLUMN genre TEXT;
-- Destroy the structure
DROP TABLE songs; -- ⚠️ Everything gone!
The everyday SQL
-- Create: Add new data
INSERT INTO songs (title, artist, duration_seconds)
VALUES ('Flowers', 'Miley Cyrus', 200);
-- Read: Query data
SELECT title, artist FROM songs WHERE duration_seconds < 180;
-- Update: Modify existing data
UPDATE songs SET genre = 'Pop' WHERE artist = 'Taylor Swift';
-- Delete: Remove data
DELETE FROM songs WHERE release_date < '2000-01-01';
This pattern solves the large majority of your data questions
SELECT -- What columns?
FROM -- What table?
JOIN -- What other tables?
WHERE -- What rows?
GROUP BY -- How to group?
HAVING -- Filter groups?
ORDER BY -- What order?
LIMIT -- How many?
-- Almost natural language
SELECT name, age
FROM students
WHERE grade > 15
ORDER BY age DESC
LIMIT 10;
Translates to: “Show me the names and ages of students with grades above 15, sorted by age (oldest first), but only the top 10”
This is why SQL survived: humans can read it
Without JOIN (multiple queries):
# Get user
user = db.query("SELECT * FROM users WHERE id = 42")

# Get their orders
orders = db.query(f"SELECT * FROM orders WHERE user_id = {user.id}")

# Get order details
for order in orders:
    items = db.query(f"SELECT * FROM items WHERE order_id = {order.id}")
With JOIN (one query):
SELECT u.name, o.date, i.product
FROM users u
JOIN orders o ON u.id = o.user_id
JOIN items i ON o.id = i.order_id
WHERE u.id = 42;
A JOIN combines rows from two or more tables based on a related column between them.
Basic JOIN Syntax:
SELECT columns
FROM table1
JOIN table2
ON table1.column = table2.column;
Joins usually match on keys (a primary key on one side, a foreign key on the other).
using aliases: trees : tr, taxonomy : ta, tree_species : sp
SELECT * FROM trees tr
JOIN taxonomy ta on ta.id = tr.taxonomy_id
JOIN tree_species sp on sp.id = ta.species_id
WHERE tr.circumference > 50 and sp.species = 'tomentosa';
instead of
SELECT * FROM trees
JOIN taxonomy on taxonomy.id = trees.taxonomy_id
JOIN tree_species on tree_species.id = taxonomy.species_id
WHERE trees.circumference > 50 and tree_species.species = 'tomentosa';
1. INNER JOIN (Default)
Returns only matching records from both tables.
SELECT customers.name, orders.order_date
FROM customers
INNER JOIN orders
ON customers.customer_id = orders.customer_id;
INNER JOIN
SELECT
m.movie_id,
m.release_year,
m.title,
a.actor_name
FROM movies m
JOIN actors a ON m.movie_id = a.movie_id;
Result example:
movie_id | title | release_year | actor_name |
---|---|---|---|
1 | The Matrix | 1999 | Keanu Reeves |
1 | The Matrix | 1999 | Laurence F. |
2 | Inception | 2010 | Leonardo D. |
Movies with no actors will not be shown by an INNER JOIN. (With an outer join, they would appear with NULL values in the actor columns, like this:)
movie_id | title | release_year | actor_name | role |
---|---|---|---|---|
3 | Documentary XYZ | 2023 | NULL | NULL |
4 | Silent Film | 1920 | NULL | NULL |
Similarly, the actors with no movies will not be shown.
2. LEFT JOIN (LEFT OUTER JOIN)
Returns all records from the left table (the one listed right after FROM) and the matched records from the right table (the one listed after JOIN).
SELECT
m.movie_id,
m.title,
m.release_year,
a.actor_name,
a.role
FROM movies m
LEFT JOIN actors a ON m.movie_id = a.movie_id
ORDER BY m.title;
Result example:
also returns the movies with no actors
movie_id | title | release_year | actor_name | role |
---|---|---|---|---|
1 | The Matrix | 1999 | Keanu Reeves | Neo |
1 | The Matrix | 1999 | Laurence F. | Morpheus |
2 | Inception | 2010 | Leonardo D. | Cobb |
3 | Documentary XYZ | 2023 | NULL | NULL |
4 | Silent Film | 1920 | NULL | NULL |
The key point: LEFT JOIN keeps ALL movies, even if they have no actors in the actors table. Those movies will show NULL values for the actor columns.
This can be useful, for example, for finding rows that have no match in the other table:
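A sketch on the same hypothetical movies/actors schema: keep only the movies whose actor columns came back NULL, i.e. movies without any actor.
SELECT m.movie_id, m.title
FROM movies m
LEFT JOIN actors a ON m.movie_id = a.movie_id
WHERE a.movie_id IS NULL;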
3. RIGHT JOIN (RIGHT OUTER JOIN)
Returns all records from the right table, and matched records from the left table.
SELECT
m.movie_id,
m.title,
m.release_year,
a.actor_name,
a.role
FROM movies m
RIGHT JOIN actors a ON m.movie_id = a.movie_id
ORDER BY m.title;
NULL values appear for actors with no movie
4. FULL OUTER JOIN
Returns all records when there’s a match in either table.
SELECT
m.movie_id,
m.title,
m.release_year,
a.actor_name,
a.role
FROM movies m
FULL OUTER JOIN actors a ON m.movie_id = a.movie_id
ORDER BY m.title;
Result example:
All movies AND all actors, even unmatched ones
movie_id | title | release_year | actor_name | role |
---|---|---|---|---|
NULL | NULL | NULL | Alphonse | NULL |
NULL | NULL | NULL | Nephew of Alphonse | NULL |
1 | The Matrix | 1999 | Keanu Reeves | Neo |
2 | Inception | 2010 | Leonardo D. | Cobb |
3 | Documentary XYZ | 2023 | NULL | NULL |
4 | Silent Film | 1920 | NULL | NULL |
The key difference: FULL OUTER JOIN keeps EVERYTHING from both tables.
This creates three groups in your results: rows that match on both sides, movies without actors (NULL in the actor columns), and actors without movies (NULL in the movie columns).
5. CROSS JOIN
When you do not have an ON clause, it is a cross join.
Returns the Cartesian product of both tables (every row paired with every row): with 100 customers and 50 products, the result would contain 5,000 rows.
SELECT customers.name, products.product_name
FROM customers
CROSS JOIN products;
Projection
When we select a set of columns and not all the columns, we are doing a projection.
This returns all the columns of the table:
select * from movies;
This returns only the title and released_year columns:
select title, released_year from movies;
We can apply functions to the columns during a projection, for instance:
select avg(IMDB_Rating), max(IMDB_Rating), min(IMDB_Rating) from movies;
When we select a set of rows and not all the rows, we are filtering.
select title from movies where IMDB_Rating > 8;
we can use multiple conditions
select title
from movies
where IMDB_Rating > 8
AND released_year > 2000;
select title
from movies
where IMDB_Rating > 8
OR released_year > 2000;