Compare Database Schema
compare_database_schema: Full Description
<use_case> Use this tool to compare the schema of a database between two branches. The output of the tool is a JSON object with one field: diff.
<example>
{
"diff": "--- a/neondb
+++ b/neondb
@@ -27,7 +27,10 @@
CREATE TABLE public.users (
id integer NOT NULL,
- username character varying(50) NOT NULL
+ username character varying(50) NOT NULL,
+ is_deleted boolean DEFAULT false NOT NULL,
+ created_at timestamp with time zone DEFAULT now() NOT NULL,
+ updated_at timestamp with time zone
);
@@ -79,6 +82,13 @@
--
+-- Name: users_created_at_idx; Type: INDEX; Schema: public; Owner: neondb_owner
+--
+
+CREATE INDEX users_created_at_idx ON public.users USING btree (created_at DESC) WHERE (is_deleted = false);
+
+
+--
-- Name: DEFAULT PRIVILEGES FOR SEQUENCES; Type: DEFAULT ACL; Schema: public; Owner: cloud_admin
--
"
}
</example>
This field contains the difference between the two schemas as a unified diff (git-style patch) string. The diff represents the changes required to make the parent branch schema match the child branch schema.
You MUST be able to generate a zero-downtime migration from the diff and apply it to the parent branch. (This branch is a child and has a parent. You can get the parent ID by querying the branch details.) </use_case>
<important_notes> To generate a schema diff, you MUST SPECIFY the database_name. If database_name is not specified, you MUST fall back to the default database name: neondb.
- You MUST TAKE INTO ACCOUNT the PostgreSQL version. The PostgreSQL version is the same for both branches.
- You MUST ASK for user consent before running each generated SQL query.
- You SHOULD USE the run_sql tool to run each generated SQL query.
- You SHOULD suggest creating a backup or point-in-time restore before running the migration. Generated queries change the schema of the parent branch and MIGHT BE dangerous to execute.
- Generated SQL migrations SHOULD be idempotent where possible (i.e., safe to run multiple times without failure) and include IF NOT EXISTS / IF EXISTS where applicable.
- You SHOULD include comments in generated SQL linking back to diff hunks (e.g., -- from diff @@ -27,7 +27,10 @@) to make audits easier.
- Generated SQL SHOULD be reviewed for dependencies (e.g., foreign key order) before execution. </important_notes>
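For instance, an idempotent statement derived from the diff hunk in the example above might look like this (illustrative only):
<example>
-- from diff @@ -27,7 +27,10 @@
ALTER TABLE public.users ADD COLUMN IF NOT EXISTS is_deleted boolean DEFAULT false NOT NULL;
</example>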
<next_steps> After executing this tool, you MUST follow these steps: 1. Review the schema diff and suggest generating a zero-downtime migration. 2. Follow these instructions to respond to the client:
<response_instructions> <instructions> Provide brief information about the changes:
- Tables
- Views
- Indexes
- Ownership
- Constraints
- Triggers
- Policies
- Extensions
- Schemas
- Sequences
- Tablespaces
- Users
- Roles
- Privileges
</instructions> </response_instructions>
3. If a migration fails, you SHOULD guide the user on how to revert the schema changes, for example by using backups, point-in-time restore, or generating reverse SQL statements (if safe). </next_steps>
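For the example diff above, a reverse migration might look like this (illustrative; dropping columns discards data, so confirm with the user that the data is expendable before running it):
<example>
-- Reverse of the example diff: remove the added index and columns
DROP INDEX CONCURRENTLY IF EXISTS public.users_created_at_idx;
ALTER TABLE public.users
  DROP COLUMN IF EXISTS updated_at,
  DROP COLUMN IF EXISTS created_at,
  DROP COLUMN IF EXISTS is_deleted;
</example>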
This tool: 1. Generates a diff between the child branch and its parent. 2. Generates a SQL migration from the diff. 3. Suggests generating a zero-downtime migration.
<workflow>
1. The user asks you to generate a diff between two branches.
2. You suggest generating a SQL migration from the diff.
3. Ensure the generated migration is zero-downtime; otherwise, warn the user.
4. Ensure the suggested migration also matches the PostgreSQL version.
5. Use the run_sql tool to run each generated SQL query, and ask for the user's consent before running it. Before requesting consent, present a summary of all generated SQL statements along with their potential impact (e.g., table rewrites, lock risks, validation steps) so the user can make an informed decision.
6. Propose rerunning the schema diff tool to verify that the migration was applied correctly.
7. If the diff is empty, confirm that the parent schema now matches the child schema.
8. If the diff is not empty after migration, warn the user and assist in resolving the remaining differences.
</workflow>
<hints> <hint> Adding a column with a constant DEFAULT value takes only a brief lock and does not rewrite the table. If the DEFAULT calls a volatile function, PostgreSQL must rewrite the whole table and holds an exclusive lock for the duration.
<example>
-- No table rewrite, minimal lock time
ALTER TABLE users ADD COLUMN status text DEFAULT 'active';
</example>
An example where the default is a volatile function and forces a table rewrite:
<example>
-- Table rewrite, potentially longer lock time
ALTER TABLE users ADD COLUMN created_at timestamptz DEFAULT clock_timestamp();
</example>
The fix is to split the change into two steps:
<example>
-- Add a nullable column first (no rewrite, brief lock)
ALTER TABLE users ADD COLUMN created_at timestamptz;
-- Then backfill the existing rows
UPDATE users SET created_at = now();
</example> </hint>
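<hint> On large tables, backfilling with a single UPDATE holds row locks for the whole statement and generates bloat; batching keeps each transaction short. A sketch, assuming a hypothetical integer primary key id:
<example>
-- Repeat until the UPDATE reports 0 rows
UPDATE users SET created_at = now()
WHERE id IN (SELECT id FROM users WHERE created_at IS NULL LIMIT 10000);
</example> </hint>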
<hint> Adding constraints in two phases (including foreign keys)
<example>
-- Step 1: Add constraint without validating existing data
-- Fast - only blocks briefly to update catalog
ALTER TABLE users ADD CONSTRAINT users_age_positive
CHECK (age > 0) NOT VALID;
-- Step 2: Validate existing data (can take time but doesn't block writes)
-- Uses SHARE UPDATE EXCLUSIVE lock - allows reads/writes
ALTER TABLE users VALIDATE CONSTRAINT users_age_positive;
</example>
<example>
-- Step 1: Add foreign key without validation
-- Fast - only updates catalog, doesn't validate existing data
ALTER TABLE orders ADD CONSTRAINT orders_user_id_fk
FOREIGN KEY (user_id) REFERENCES users(id) NOT VALID;
-- Step 2: Validate existing relationships
-- Can take time but allows concurrent operations
ALTER TABLE orders VALIDATE CONSTRAINT orders_user_id_fk;
</example> </hint>
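<hint> If VALIDATE CONSTRAINT fails, some existing rows violate the constraint; you can locate them before validating. A sketch for the check constraint above (assumes the users.age column from that example):
<example>
-- Rows that would fail validation of CHECK (age > 0)
SELECT count(*) FROM users WHERE NOT (age > 0);
</example> </hint>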
<hint> Setting columns to NOT NULL
<example>
-- Step 1: Add a check constraint (fast with NOT VALID)
ALTER TABLE users ADD CONSTRAINT users_email_not_null
CHECK (email IS NOT NULL) NOT VALID;
-- Step 2: Validate the constraint (allows concurrent operations)
ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null;
-- Step 3: Set NOT NULL (fast since constraint guarantees no nulls)
ALTER TABLE users ALTER COLUMN email SET NOT NULL;
-- Step 4: Drop the redundant check constraint
ALTER TABLE users DROP CONSTRAINT users_email_not_null;
</example>
<example> For PostgreSQL 18+ (to get the PostgreSQL version, use the describe_project tool, or run SELECT version(); via the run_sql tool)
-- PostgreSQL 18+ simplified approach: a named NOT NULL constraint can be added as NOT VALID
ALTER TABLE users ADD CONSTRAINT users_email_not_null NOT NULL email NOT VALID;
ALTER TABLE users VALIDATE CONSTRAINT users_email_not_null;
</example> </hint>
<hint> In some cases, you need to combine two approaches to achieve a zero-downtime migration.
<example>
-- Step 1: Adding a nullable column first
ALTER TABLE users ADD COLUMN created_at timestamptz;
-- Step 2: Update all existing rows with the default value
UPDATE users SET created_at = now() WHERE created_at IS NULL;
-- Step 3: Creating a not null constraint
ALTER TABLE users ADD CONSTRAINT users_created_at_not_null
CHECK (created_at IS NOT NULL) NOT VALID;
-- Step 4: Validating the constraint
ALTER TABLE users VALIDATE CONSTRAINT users_created_at_not_null;
-- Step 5: Setting the column to NOT NULL
ALTER TABLE users ALTER COLUMN created_at SET NOT NULL;
-- Step 6: Dropping the redundant check constraint
ALTER TABLE users DROP CONSTRAINT users_created_at_not_null;
-- Step 7: Adding the default value
ALTER TABLE users ALTER COLUMN created_at SET DEFAULT now();
</example>
For PostgreSQL v18+ <example>
-- Step 1: Adding a nullable column first
ALTER TABLE users ADD COLUMN created_at timestamptz;
-- Step 2: Update all existing rows with the default value
UPDATE users SET created_at = now() WHERE created_at IS NULL;
-- Step 3: Add a NOT NULL constraint without validating existing rows
ALTER TABLE users ADD CONSTRAINT users_created_at_not_null NOT NULL created_at NOT VALID;
-- Step 4: Validating the constraint
ALTER TABLE users VALIDATE CONSTRAINT users_created_at_not_null;
-- Step 5: Adding the default value
ALTER TABLE users ALTER COLUMN created_at SET DEFAULT now();
</example> </hint>
<hint> Create index CONCURRENTLY
<example>
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
</example> </hint>
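<hint> CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and a failed build leaves an INVALID index behind that must be dropped before retrying:
<example>
-- Clean up after a failed concurrent build, then retry
DROP INDEX CONCURRENTLY IF EXISTS idx_users_email;
CREATE INDEX CONCURRENTLY idx_users_email ON users (email);
</example> </hint>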
<hint> Drop index CONCURRENTLY
<example>
DROP INDEX CONCURRENTLY idx_users_email;
</example> </hint>
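<hint> DROP INDEX CONCURRENTLY cannot drop an index that backs a UNIQUE or PRIMARY KEY constraint; drop the constraint instead (this takes a brief ACCESS EXCLUSIVE lock). A sketch, assuming a hypothetical users_email_key constraint:
<example>
ALTER TABLE users DROP CONSTRAINT users_email_key;
</example> </hint>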
<hint> Create materialized view WITH NO DATA
<example>
CREATE MATERIALIZED VIEW mv_users AS SELECT name FROM users WITH NO DATA;
</example> </hint>
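<hint> A materialized view created WITH NO DATA cannot be queried until its first refresh, and that first refresh cannot use CONCURRENTLY. REFRESH ... CONCURRENTLY also requires a unique index on the view. A sketch for the mv_users view above (the index name is illustrative):
<example>
-- First population must be non-concurrent
REFRESH MATERIALIZED VIEW mv_users;
-- A unique index is required before CONCURRENTLY can be used
CREATE UNIQUE INDEX CONCURRENTLY mv_users_name_idx ON mv_users (name);
</example> </hint>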
<hint> Refresh materialized view CONCURRENTLY
<example>
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_users;
</example> </hint> </hints>
Parameters (3 required)
branchId (string): The ID of the branch
databaseName (string): The name of the database. If not provided, the default neondb or the first available database is used.
projectId (string): The ID of the project