instance_id (int64, 0-199) | selected_database (string, 12 values) | query (string, 92-2.94k chars) | error_sql (list, 1-4 items) | sol_sql (list, empty) | preprocess_sql (list, 0-10 items) | clean_up_sql (list, 0-5 items) | test_cases (list, empty) | external_data (null, 6 values) | efficiency (null)
---|---|---|---|---|---|---|---|---|---
0 | financial | In the financial database, we have a table named 'order' that records details about orders given to clients. Each order is associated with an order_id and has attributes such as account_id, bank_to, account_to, and amount. We need to find all accounts that have placed at least two orders such that the difference between the highest and lowest amount for those orders exceeds 12000. This query aims to find such accounts, but the initial attempt produced incorrect results. | [
"SELECT account_id, MAX(payments) AS max_payment, MIN(payments) AS min_payment FROM loan GROUP BY account_id HAVING COUNT(account_id) > 1 AND (MAX(payments) - MIN(payments)) > 2;"
] | [] | [] | [] | [] | null | null |
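One way the intended query might be written, sketched here as a hedged example against the `order` table described above (not the dataset's official solution); the table name must be double-quoted because ORDER is a reserved word:

```sql
-- Sketch: accounts with at least two orders whose amount spread exceeds 12000.
SELECT account_id
FROM "order"
GROUP BY account_id
HAVING COUNT(*) >= 2
   AND MAX(amount) - MIN(amount) > 12000;
```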
1 | codebase_community | I have a table named 'comments' in the 'codebase_community' database with a column 'CreationDate' of type 'datetime'. I want to extract only the 'hh:mm:ss' part from this column. My desired result should look like this:
0:00:00
10:00:00
04:00:00
However, when I tried to use the following SQL query, it didn't give me the expected result:
```sql
SELECT CreationDate::time FROM comments;
```
This query returns the time part but includes leading zeros, which I don't want. How can I modify my query to achieve the desired result? | [
"SELECT CreationDate::time FROM comments;"
] | [] | [] | [] | [] | null | null |
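A plausible fix, sketched as a hedged example using `to_char`: in PostgreSQL the `FM` prefix modifies only the next pattern, so only the hour loses its leading zero while minutes and seconds stay zero-padded:

```sql
-- '2010-07-19 04:00:00' -> '4:00:00'
SELECT to_char(CreationDate, 'FMHH24:MI:SS') FROM comments;
```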
2 | financial | I'm exploring triggers and want to create one that fires after an UPDATE event on the `status` column in the `loan` table. The column contains text values representing loan statuses, so a user may update the loan status. I want the trigger function to calculate the number of loans with status 'A' for the affected account and then update `total_loan_count` in a `loan_summary` table. Here is my trigger (which is not working and I want to figure out why): | [
"CREATE OR REPLACE FUNCTION total_loans()\n RETURNS TRIGGER \n AS $$ \n BEGIN \n UPDATE loan_summary \n SET total_loan_count = (SELECT COUNT(CASE WHEN status = 'A' THEN 1 END) FROM loan WHERE loan_summary.account_id = loan.account_id) WHERE account_id = NEW.account_id; RETURN NEW; \n END; \n $$ LANGUAGE plpgsql;",
"\n CREATE TRIGGER tr_total_loans AFTER UPDATE OF status FOR EACH ROW EXECUTE PROCEDURE total_loans();\n "
] | [] | [
"DROP TABLE IF EXISTS loan_summary;",
"CREATE TABLE loan_summary (account_id INT PRIMARY KEY, total_loan_count INT);",
"INSERT INTO loan_summary (account_id, total_loan_count) SELECT l.account_id, COUNT(*) FROM loan l WHERE l.status = 'A' GROUP BY l.account_id;"
] | [] | [] | null | null |
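The CREATE TRIGGER statement above is missing its `ON <table>` clause, which is a syntax error. A hedged sketch of the likely intended definition, assuming the trigger should fire on the `loan` table:

```sql
CREATE TRIGGER tr_total_loans
AFTER UPDATE OF status ON loan
FOR EACH ROW
EXECUTE PROCEDURE total_loans();
```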
3 | european_football_2 | In the context of managing team attributes in the European Football database, a user attempted to add a new value 'Very Fast' to an existing ENUM type for 'buildupplayspeedclass' in the 'team_attributes' table. The user tried renaming the existing ENUM, creating a new one with the additional value, and switching the column's data type in place. The approach resulted in locks that caused application downtime, especially considering the table's size in the millions of rows. The user is seeking a solution that avoids such downtime, possibly by considering a different approach than using ENUMs. | [
"ALTER TYPE buildupplayspeedclass RENAME TO buildupplayspeedclass_old;",
"CREATE TYPE buildupplayspeedclass AS ENUM ('Slow', 'Balanced', 'Fast', 'Very Fast');",
"ALTER TABLE Team_Attributes ALTER COLUMN buildupplayspeedclass SET DATA TYPE buildupplayspeedclass USING buildupplayspeedclass::text::buildupplayspeedclass;",
"DROP TYPE buildupplayspeedclass;"
] | [] | [
"CREATE TYPE buildupplayspeedclass_enum AS ENUM ('Balanced', 'Fast', 'Slow');",
"\n ALTER TABLE team_attributes\n ALTER COLUMN buildupplayspeedclass\n TYPE buildupplayspeedclass_enum\n USING buildupplayspeedclass::buildupplayspeedclass_enum;"
] | [] | [] | null | null |
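One downtime-avoiding alternative, sketched as a hedged example against the ENUM created in preprocess_sql: PostgreSQL can append to an existing ENUM in place, which avoids the table rewrite and long lock of the rename-and-recreate approach (before PostgreSQL 12 this could not run inside a transaction block):

```sql
-- Extends the ENUM without rewriting team_attributes.
ALTER TYPE buildupplayspeedclass_enum ADD VALUE 'Very Fast';
```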
4 | student_club | In the student_club database, I created a unique index on the `event` table using the following queries 'CREATE UNIQUE INDEX unique_name ON event(event_name, event_date) where event_name is not null; CREATE UNIQUE INDEX unique_location ON event(location, event_date) where location is not null;'. However, when I attempt to insert a new record using an UPSERT operation using the query 'insert into event (event_id, event_name, location, event_date) values('test1', 'test_name', 'test_location', 'test_date')on conflict (event_name, location, event_date) do update set event_id = 'test1', event_name = 'test_name', location = 'test_location', event_date = 'test_date'', I encounter an error stating that there is no unique or exclusion constraint matching the ON CONFLICT specification. | [
"CREATE UNIQUE INDEX unique_name ON event(event_name, event_date) where event_name is not null;CREATE UNIQUE INDEX unique_location ON event(location, event_date) where location is not null;"
] | [] | [] | [] | [] | null | null |
5 | debit_card_specializing | In the following SQL, how could I make the `RETURNING` clause join to something else and return the joined row(s)? Here it only returns the row from `transactions_1k` that was updated, but I'd like it to return that row joined to something in another table, e.g. joined to the `customers` table to get both the `transactions_1k.transactionid` and `customers.Segment` columns. | [
"\n UPDATE transactions_1k \n SET Amount = 100 \n FROM ( SELECT TransactionID FROM transactions_1k WHERE Amount = 50 ORDER BY Date LIMIT 100 FOR UPDATE ) sub \n WHERE transactions_1k.TransactionID = sub.TransactionID RETURNING *;\n "
] | [] | [] | [] | [] | null | null |
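A hedged sketch of one approach, assuming `transactions_1k` carries a `CustomerID` column: joining `customers` in the `FROM` list makes its columns visible to `RETURNING`:

```sql
UPDATE transactions_1k t
SET Amount = 100
FROM (
    SELECT TransactionID, CustomerID
    FROM transactions_1k
    WHERE Amount = 50
    ORDER BY Date
    LIMIT 100
    FOR UPDATE
) sub
JOIN customers c ON c.CustomerID = sub.CustomerID
WHERE t.TransactionID = sub.TransactionID
RETURNING t.TransactionID, c.Segment;
```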
6 | codebase_community | I have a query that calculates the number of referrals each user has made. However, I want to count a referral only if the referred user has activated their premium account. How can I achieve this? | [
"SELECT users.Id, COUNT(posts.Id) as answered FROM users LEFT JOIN posts ON users.Id = posts.OwnerUserId GROUP BY users.Id ORDER BY answered DESC;"
] | [] | [] | [] | [] | null | null |
7 | codebase_community | I want to drop the 'users' table from the 'codebase_community' database. However, when I attempt to drop the table using the SQL command `DROP TABLE IF EXISTS users;`, I encounter an error message stating: 'cannot drop table users because other objects depend on it'. This issue arises because the 'users' table is referenced by foreign keys in other tables such as 'badges', 'comments', 'postHistory', 'posts', and 'votes'. I am seeking a solution to drop the 'users' table without having to remove all dependent tables or data. | [
"DROP TABLE IF EXISTS users;"
] | [] | [] | [] | [] | null | null |
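A hedged sketch of the usual remedy: `CASCADE` drops the dependent foreign-key constraints on `badges`, `comments`, `postHistory`, `posts`, and `votes`, not those tables or their data:

```sql
DROP TABLE IF EXISTS users CASCADE;
```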
8 | student_club | In database student_club, there is a set of users. A student can have multiple users, but ref1 and ref2 might be alike and can therefore link users together. ref1 and ref2 do not overlap: a value in ref1 never appears in ref2. A user can own multiple assets. I want to "merge" users that have one or more refs in common and then count how many assets they own together. There could be missing entries in the user table; in that case I just want to propagate the owner into ref2 and set the asset_count and asset_ids. | [
"SELECT ARRAY_AGG(DISTINCT u.id) AS ids, ARRAY_AGG(DISTINCT u.username) AS usernames, ARRAY_AGG(DISTINCT u.ref1) AS refs1, ARRAY_AGG(DISTINCT u.ref2) AS refs2, COUNT(DISTINCT a.id) AS asset_count FROM assets a JOIN users u ON a.owner = u.ref1 OR a.owner = u.ref2 GROUP BY a.owner ORDER BY MIN(a.id);"
] | [] | [
"CREATE TABLE assets (id serial, name text, owner text, PRIMARY KEY(id));",
"CREATE TABLE users (id serial, username text, ref1 text, ref2 text, PRIMARY KEY(id));",
"INSERT INTO assets (name, owner) VALUES ('#1', 'a'), ('#2', 'b'), ('#3', 'c'), ('#4', 'a'), ('#5', 'c'), ('#6', 'd'), ('#7', 'e'), ('#8', 'd'), ('#9', 'a'), ('#10', 'a'), ('#11', 'z');",
"INSERT INTO users (username, ref1, ref2) VALUES ('bobo', 'a', 'd'), ('toto', 'b', 'e'), ('momo', 'c', 'd'), ('lolo', 'a', 'f'), ('popo', 'c', 'f');"
] | [
"drop table if exists users;",
"drop table if exists assets;"
] | [] | null | null |
9 | student_club | I am trying to compare the number of attendees for each event between two different tables: 'attendance' and 'budget'. I want to find events where the number of attendees in the 'attendance' table does not match the number of attendees recorded in the 'budget' table. My query follows this structure: | [
"WITH CTE AS ( SELECT link_to_event, COUNT(link_to_member) AS count FROM attendance GROUP BY link_to_event ) SELECT CTE.link_to_event, CTE.count AS newCount, budget.count AS oldCount FROM budget JOIN CTE ON budget.link_to_event = CTE.link_to_event WHERE budget.count != CTE.count;"
] | [] | [] | [] | [] | null | null |
10 | student_club | In the student_club database, we have a scenario where a member can attend multiple events, and an event can have multiple attendees. However, a member can only attend an event once. If a member attempts to attend the same event again, the system should update the attendance record with new information, such as the attend status. The current approach is to use an INSERT statement, but it fails when the member already has an attendance record for the event. We need to implement an insert statement that updates the existing record if a conflict occurs based on the combination of member_id and event_id. | [
"INSERT INTO attendance VALUES ('recEVTik3MlqbvLFi', 'rec280Sk7o31iG0Tx', 1)"
] | [] | [
"ALTER TABLE attendance ADD COLUMN attend INTEGER DEFAULT 0;"
] | [
"ALTER TABLE attendance DROP COLUMN attend;"
] | [] | null | null |
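A hedged upsert sketch, assuming a unique constraint or index exists on `(link_to_event, link_to_member)`:

```sql
INSERT INTO attendance (link_to_event, link_to_member, attend)
VALUES ('recEVTik3MlqbvLFi', 'rec280Sk7o31iG0Tx', 1)
ON CONFLICT (link_to_event, link_to_member)
DO UPDATE SET attend = EXCLUDED.attend;
```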
11 | financial | In the financial database, there is a need to convert the data from a `BIGINT` column to a `TIMESTAMP` column. The `date` column in the `account` table is currently stored as a `BIGINT` representing the date in the format YYMMDD. The goal is to update this column to a `TIMESTAMP` type to store the date and time information. | [
"\n UPDATE account\n SET date__timestamp = date__bigint::timestamp;\n "
] | [] | [
"\n ALTER TABLE account\n ALTER COLUMN date\n TYPE BIGINT\n USING to_char(date, 'YYYYMMDD')::bigint;\n "
] | [] | [] | null | null |
12 | card_games | In the card_games database, there is a table named 'cards'. Each card is uniquely identified by an id and includes details about artists and bordercolors. The user wants to group the cards by their 'artist' attribute to get a distinct result for each group. However, when the user tries to use the query `SELECT * FROM cards GROUP BY artist;` to achieve this, it results in an error or incorrect output. The user understands that this query is incorrect because it does not group by all the columns that need to be shown. The user is seeking a solution to this problem. | [
"\n SELECT * FROM cards GROUP BY artist;\n "
] | [] | [
"\n DELETE FROM cards\n WHERE artist NOT IN ('Ralph Horsley', 'Daarken');\n ",
"\n DELETE FROM cards\n WHERE artist IS NULL;\n ",
"\n CREATE TABLE cards_new AS\n SELECT id, artist, bordercolor\n FROM cards;\n DROP TABLE cards;\n ALTER TABLE cards_new\n RENAME TO cards;\n "
] | [] | [] | null | null |
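One idiomatic PostgreSQL answer, sketched as a hedged example: `DISTINCT ON` returns a single representative row per artist without having to group every column:

```sql
SELECT DISTINCT ON (artist) id, artist, bordercolor
FROM cards
ORDER BY artist, id;
```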
13 | debit_card_specializing | I'm trying to create an SQL query that checks if a SELECT query on the 'transactions_1k' table returns no rows based on a specific criteria involving 'CustomerID' and 'Date'. If no rows are returned, it should then execute another SELECT query with a different criteria. Here's what I mean:
```sql
IF SELECT * FROM transactions_1k WHERE CustomerID = 3 AND Date = '2012-08-24' RETURNS NO ROWS
THEN SELECT * FROM transactions_1k WHERE CustomerID = 7626 AND Date = '2012-08-24'
```
Is this possible? I'm not sure if an empty result set counts as 'null', which is causing me some trouble. | [
"IF SELECT * FROM transactions_1k WHERE CustomerID = 3 AND Date = '2012-08-24' RETURNS NO ROWS\nTHEN SELECT * FROM transactions_1k WHERE CustomerID = 7626 AND Date = '2012-08-24'"
] | [] | [] | [] | [] | null | null |
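A hedged sketch of a set-based equivalent: the fallback branch emits rows only when the primary criteria match nothing:

```sql
SELECT * FROM transactions_1k
WHERE CustomerID = 3 AND Date = '2012-08-24'
UNION ALL
SELECT * FROM transactions_1k
WHERE CustomerID = 7626 AND Date = '2012-08-24'
  AND NOT EXISTS (
      SELECT 1 FROM transactions_1k
      WHERE CustomerID = 3 AND Date = '2012-08-24'
  );
```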
14 | financial | I need to compare the 'account' table with another table, but there are some columns in the 'account' table that I don't need to compare. Specifically, I want to exclude the 'account_id' and 'date' columns from the comparison. I tried to dynamically generate a SQL query to select all columns except these two, but the output SQL was incorrect. Here's the problematic SQL I used: | [
"SELECT 'SELECT ' || array_to_string(ARRAY(SELECT 'o' || '.' || c.column_name\n FROM information_schema.columns As c\n WHERE table_name = 'account' \n AND c.column_name NOT IN('account_id', 'date')\n), ',') || ' FROM accountAs o' As sqlstmt"
] | [] | [] | [] | [] | null | null |
15 | financial | I have two tables: `account` and `loan`. I need to display the first 6 accounts from a specific district that has loans in the last 48 hours then the rest of the accounts. This works great but I get duplicates from the second query where I repeat these accounts again. I want to make sure `account.account_id` is unique. | [
"(\n SELECT\n account.account_id,\n account.frequency,\n l.loan_id,\n l.date AS loan_date,\n 0 AS priority\n FROM account\n LEFT JOIN loan l\n ON account.account_id = l.account_id\n WHERE account.district_id = '18'\n AND l.date >= (NOW() - INTERVAL '48 hours')\n ORDER BY l.date DESC NULLS LAST\n LIMIT 6\n)\nUNION\n(\n SELECT\n account.account_id,\n account.frequency,\n l.loan_id,\n l.date AS loan_date,\n 1 AS priority\n FROM account\n LEFT JOIN loan l\n ON account.account_id = l.account_id\n WHERE account.district_id = '18'\n ORDER BY account.date DESC\n);"
] | [] | [] | [] | [] | null | null |
16 | student_club | In the student_club database, there is a table named 'attendance' that records the attendance of members at various events. Each record in this table contains a 'link_to_event', which is a unique identifier for the event, and a 'link_to_member', which is a unique identifier for the member. The goal is to generate an output that aggregates the attendance records by event, where each event's attendance is represented as an array of member objects. Each member object should contain the member's unique identifier ('link_to_member') and the event's unique identifier ('link_to_event'). The desired output should be an array of these event-based arrays. However, the user encountered an issue where the output was interpreted as text, introducing undesired escape characters, and the outer array was missing. The user's query was adapted from a suggestion on another post, but it did not produce the desired result. | [
"SELECT Array_agg(rw) FROM (SELECT link_to_event, (SELECT To_(Array_agg(Row_to_(t))) FROM (SELECT link_to_member FROM public.attendance WHERE link_to_event = b.link_to_event) t) rw FROM attendance b GROUP BY link_to_event);"
] | [] | [
""
] | [
""
] | [] | null | null |
17 | financial | In the financial database, we need to generate a list of all years between two given dates from the 'loan' table. The dates are extracted from the 'date' column, which represents the approval date of loans. The goal is to generate all years between the earliest and latest loan approval dates, regardless of the interval between them. For instance, if the earliest loan was approved on '1994-01-05' and the latest on '1997-12-08', we should get a list of years including '1994', '1995', '1996', and '1997'. However, the initial query only returns the starting year if the interval between the dates is less than a year, which is not the desired outcome. | [
"SELECT to_char(generate_series, 'YYYY') FROM generate_series(MIN(date)::timestamptz, MAX(date)::timestamptz, '1 year') FROM loan;"
] | [] | [
""
] | [
""
] | [] | null | null |
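A hedged sketch of one correct shape: aggregate the bounds first, truncate them to year boundaries, then enumerate with `generate_series`:

```sql
SELECT to_char(y, 'YYYY') AS year
FROM (
    SELECT date_trunc('year', MIN(date)::timestamp) AS lo,
           date_trunc('year', MAX(date)::timestamp) AS hi
    FROM loan
) b,
generate_series(b.lo, b.hi, interval '1 year') AS y;
```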
18 | financial | In the financial database, there is a table named 'loan' that records details of loans given to clients. Each loan is associated with an account, and the table contains columns such as 'loan_id', 'account_id', 'date', 'amount', 'duration', 'payments', and 'status'. The 'amount' column represents the loan amount in USD. The task is to retrieve all rows from the 'loan' table, along with an additional column that shows the maximum loan amount per account. This will help in understanding the highest loan amount each account has taken. However, the user attempted to use the ROW_NUMBER() window function to achieve this, which resulted in incorrect results. | [
"SELECT account_id, amount FROM (SELECT account_id, amount, ROW_NUMBER() OVER(PARTITION BY account_id ORDER BY amount DESC) AS rn FROM loan) AS a WHERE rn = 1;"
] | [] | [
""
] | [
""
] | [] | null | null |
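A hedged sketch of the usual fix: a window `MAX` keeps every row while adding the per-account maximum, whereas `ROW_NUMBER` with `rn = 1` collapses each account to a single row:

```sql
SELECT loan_id, account_id, date, amount, duration, payments, status,
       MAX(amount) OVER (PARTITION BY account_id) AS max_amount_per_account
FROM loan;
```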
19 | financial | In the financial database, we need to create a table to store detailed information about clients, including their first name, last name, and a full name that is automatically generated from the first and last names. The full name should be stored as a generated column. However, when attempting to create the table with a generated column using the CONCAT function, an error occurs indicating that the generation expression is not immutable. | [
"CREATE TABLE client_information ( client_id smallserial NOT NULL, first_name character varying(50), last_name character varying(50), full_name character varying(100) GENERATED ALWAYS AS (concat(first_name, ' ', last_name)) STORED, PRIMARY KEY (client_id) );"
] | [] | [] | [
"DROP TABLE IF EXISTS client_information;"
] | [] | null | null |
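A hedged sketch of the standard workaround: the `||` operator on character types is immutable, unlike `concat()` (note that `||` yields NULL if either name is NULL):

```sql
CREATE TABLE client_information (
    client_id smallserial NOT NULL,
    first_name character varying(50),
    last_name character varying(50),
    full_name character varying(100)
        GENERATED ALWAYS AS (first_name || ' ' || last_name) STORED,
    PRIMARY KEY (client_id)
);
```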
20 | card_games | In the context of the card_games database, I frequently need to get a card's row based on its unique UUID, and if it does not exist, I want to create it and return its ID. For example, my table might be the 'cards' table. Suppose I want to insert a card with a specific UUID and name, and if the UUID already exists, I want to return the existing card's ID without modifying the row. However, using the following SQL statement, I encounter issues as it does not return the ID when the row already exists:
```sql
INSERT INTO cards(uuid, name) VALUES ('5f8287b1-5bb6-5f4c-ad17-316a40d5bb0c', 'Ancestor''s Chosen')
ON CONFLICT DO NOTHING RETURNING id;
```
This statement does not return the ID of the existing row. I need a solution that returns the ID whether the row is inserted or already exists. | [
"INSERT INTO cards(uuid, name) VALUES ('5f8287b1-5bb6-5f4c-ad17-316a40d5bb0c', 'Ancestor''s Chosen') ON CONFLICT DO NOTHING RETURNING id;"
] | [] | [] | [] | [] | null | null |
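A hedged sketch of the common pattern, assuming a unique constraint on `uuid`: the second branch supplies the id when the row already existed and the INSERT returned nothing:

```sql
WITH ins AS (
    INSERT INTO cards (uuid, name)
    VALUES ('5f8287b1-5bb6-5f4c-ad17-316a40d5bb0c', 'Ancestor''s Chosen')
    ON CONFLICT (uuid) DO NOTHING
    RETURNING id
)
SELECT id FROM ins
UNION ALL
SELECT id FROM cards
WHERE uuid = '5f8287b1-5bb6-5f4c-ad17-316a40d5bb0c'
LIMIT 1;
```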
21 | financial | In the financial database, I have a table `account` where I need to insert new records or update existing ones based on the `account_id`. The `date` column should be updated to the current date if the record already exists. I want to know whether an `INSERT` or an `UPDATE` operation was performed. I attempted to use an `ON CONFLICT..DO UPDATE` clause but encountered issues with determining the type of operation. I considered adding an `is_update` column to track this, but it feels unnecessary as it is not related to the data itself. | [
"INSERT INTO account (account_id, district_id, frequency, date) VALUES (1, 18, 'POPLATEK MESICNE', CURRENT_DATE) ON CONFLICT (account_id) DO UPDATE SET date = CURRENT_DATE"
] | [] | [] | [
"UPDATE account SET date = '1995-03-24'",
"DELETE FROM account WHERE account_id = 22222"
] | [] | null | null |
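A widely cited trick, sketched here with the caveat that it relies on PostgreSQL internals: the system column `xmax` is 0 for a freshly inserted row, so `RETURNING` can report which operation happened:

```sql
INSERT INTO account (account_id, district_id, frequency, date)
VALUES (1, 18, 'POPLATEK MESICNE', CURRENT_DATE)
ON CONFLICT (account_id) DO UPDATE SET date = CURRENT_DATE
RETURNING account_id, (xmax = 0) AS inserted;
```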
22 | card_games | I am analyzing the release dates of Magic: The Gathering card sets to identify periods of consecutive releases. The data includes multiple entries for the same release date due to different printings or variations. I want to find the longest consecutive release periods along with their start and end dates. Here is the structure of the relevant table:
- id SERIAL, releaseDate DATE, setCode VARCHAR(50)
The data could have equal release date entries:
- id 1, releaseDate 2019-12-28, setCode '10E'
- id 2, releaseDate 2019-12-28, setCode '10E'
- id 3, releaseDate 2019-12-29, setCode '10E'
- id 4, releaseDate 2019-12-29, setCode '10E'
- id 5, releaseDate 2019-12-31, setCode '10E'
- id 6, releaseDate 2019-12-31, setCode '10E'
- id 7, releaseDate 2020-01-01, setCode '10E'
- id 8, releaseDate 2020-01-01, setCode '10E'
- id 9, releaseDate 2020-01-02, setCode '10E'
- id 10, releaseDate 2020-01-03, setCode '10E'
- id 11, releaseDate 2020-01-04, setCode '10E'
- id 12, releaseDate 2020-01-04, setCode '10E'
- id 13, releaseDate 2020-01-05, setCode '10E'
- id 14, releaseDate 2020-01-22, setCode '10E'
- id 15, releaseDate 2020-01-29, setCode '10E'
- id 16, releaseDate 2020-01-30, setCode '10E'
I am interested in getting the consecutive release periods with the start and end dates, an output like this:
- count | date MIN | date MAX
- (6, 2019-12-31, 2020-01-05)
- (2, 2019-12-28, 2019-12-29)
- (2, 2020-01-29, 2020-01-30)
I tried the following SQL query, but it gives incorrect counts and mismatched start/end dates: | [
"SELECT COUNT(*) -1 AS count, MAX(releaseDate), MIN(releaseDate) FROM (SELECT *, date(releaseDate) - row_number() OVER (PARTITION BY releaseDate ORDER BY date(releaseDate)) * INTERVAL '1 day' AS filter FROM sets_releaseInfo ) t1 GROUP BY filter HAVING COUNT(*) -1 > 0 ORDER BY count DESC"
] | [] | [
"CREATE TEMP TABLE sets_releaseInfo (id SERIAL, releaseDate DATE, setCode VARCHAR(50));",
"INSERT INTO sets_releaseInfo (releaseDate, setCode) VALUES ('2019-12-28', '10E'), ('2019-12-28', '10E'), ('2019-12-29', '10E'), ('2019-12-29', '10E'), ('2019-12-31', '10E'), ('2019-12-31', '10E'), ('2020-01-01', '10E'), ('2020-01-01', '10E'), ('2020-01-02', '10E'), ('2020-01-03', '10E'), ('2020-01-04', '10E'), ('2020-01-04', '10E'), ('2020-01-05', '10E'), ('2020-01-22', '10E'), ('2020-01-29', '10E'), ('2020-01-30', '10E');"
] | [
"DROP TABLE IF EXISTS sets_releaseInfo;"
] | [] | null | null |
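A hedged gaps-and-islands sketch: deduplicate the dates first, then the classic `date - row_number()` trick assigns consecutive days to the same group. Against the sample data this yields (6, 2019-12-31, 2020-01-05), (2, 2019-12-28, 2019-12-29), and (2, 2020-01-29, 2020-01-30):

```sql
SELECT COUNT(*) AS count,
       MIN(releaseDate) AS start_date,
       MAX(releaseDate) AS end_date
FROM (
    SELECT releaseDate,
           releaseDate - (ROW_NUMBER() OVER (ORDER BY releaseDate))::int AS grp
    FROM (SELECT DISTINCT releaseDate FROM sets_releaseInfo) d
) t
GROUP BY grp
HAVING COUNT(*) > 1
ORDER BY count DESC;
```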
23 | card_games | In the card_games database, we have a table named 'collection' where each card can have a reference to another card through the 'nextCardId' column. This column represents the ID of the next card in a sequence. We want to generate a sequence path for each card starting from the card that has no previous card (i.e., no card points to it) and ending at the card that has no next card (i.e., its 'nextCardId' is NULL). The path should be represented as a string of card IDs separated by ' --> '.
For example, if we have the following data:
| id | nextCardId |
|-----|------------|
| 1 | 5 |
| 2 | NULL |
| 3 | 6 |
| 4 | 7 |
| 5 | 8 |
| 6 | 9 |
| 7 | NULL |
| 8 | NULL |
| 9 | 10 |
| 10 | NULL |
We want to get the following paths:
1 --> 5 --> 8;
2;
3 --> 6 --> 9 --> 10;
4 --> 7;
However, when we run the following SQL query, we get incorrect results that include incomplete paths:
```sql
WITH RECURSIVE path_cte AS (
    SELECT id, nextCardId, id::TEXT AS Path
    FROM collection
    WHERE nextCardId IS NULL
    UNION ALL
    SELECT collection.id, collection.nextCardId, collection.id || ' --> ' || cte.Path
    FROM collection
    JOIN path_cte cte ON collection.nextCardId = cte.id
)
SELECT Path
FROM path_cte
ORDER BY id;
```
We need to correct this query to get only the complete paths starting from the cards that have no previous card and ending at the cards that have no next card. | [
"WITH RECURSIVE path_cte AS (SELECT id, nextCardId, id::TEXT AS Path FROM collection WHERE nextCardId IS NULL UNION ALL SELECT collection.id, collection.nextCardId, collection.id || ' --> ' || cte.Path FROM collection JOIN path_cte cte ON collection.nextCardId = cte.id) SELECT Path FROM path_cte ORDER BY id;"
] | [] | [
"CREATE TABLE collection (id INTEGER NOT NULL PRIMARY KEY, nextCardId INTEGER)",
"INSERT INTO collection (id, nextCardId) VALUES (1, 5), (2, NULL), (3, 6), (4, 7), (5, 8), (6, 9), (7, NULL), (8, NULL), (9, 10), (10, NULL);"
] | [
"DROP TABLE IF EXISTS collection"
] | [] | null | null |
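A hedged sketch of one correction: anchor the recursion on the heads (ids no other row points to), walk forward, and keep only the rows that reached a terminal card:

```sql
WITH RECURSIVE path_cte AS (
    SELECT id, nextCardId, id::text AS path
    FROM collection c
    WHERE NOT EXISTS (SELECT 1 FROM collection p WHERE p.nextCardId = c.id)
    UNION ALL
    SELECT c.id, c.nextCardId, cte.path || ' --> ' || c.id
    FROM collection c
    JOIN path_cte cte ON cte.nextCardId = c.id
)
SELECT path
FROM path_cte
WHERE nextCardId IS NULL
ORDER BY path;
```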
24 | financial | In the financial database, I need to classify transactions by quarter, but I want the quarters to start at a configurable month. If I set the quarter to start in April, then April, May, and June should be the first quarter. I think I need a function what_quarter_is(date_in, start_month). For example, what_quarter_is('1995-07-23', 4) = 2. The default EXTRACT(QUARTER FROM date) function in PostgreSQL starts quarters in January, which does not meet my requirements. | [
"SELECT EXTRACT(QUARTER FROM TIMESTAMP '2001-02-16 20:38:40');"
] | [] | [] | [
"DROP FUNCTION what_quarter_is(date, integer);"
] | [] | null | null |
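A hedged sketch of such a function, with the signature chosen to match the clean_up_sql `DROP FUNCTION what_quarter_is(date, integer)`: shift the month back by `start_month - 1`, then map to quarters 1-4:

```sql
CREATE FUNCTION what_quarter_is(date_in date, start_month integer)
RETURNS integer
LANGUAGE sql IMMUTABLE AS $$
    SELECT ((12 + EXTRACT(MONTH FROM date_in)::integer - start_month) % 12) / 3 + 1;
$$;

SELECT what_quarter_is('1995-07-23', 4);  -- returns 2
```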
25 | codebase_community | In the codebase_community database, I have a table named 'users' with a primary key of 'id'. I need to find all tables, columns, and constraints that reference the 'users' table regardless of which column in 'users' is referenced. For example, if there is a table named 'posts' with a foreign key constraint as follows:
CREATE TABLE posts (
  id bigint NOT NULL,
  owneruserid bigint NULL,
  lasteditoruserid bigint NULL,
  PRIMARY KEY (id),
  FOREIGN KEY (owneruserid) REFERENCES users(id),
  FOREIGN KEY (lasteditoruserid) REFERENCES users(id)
);
I should get back rows like the following:
base_table | base_col | referencing_table | referencing_col | constraint_sql
users | id | posts | owneruserid | CONSTRAINT posts_owneruserid_fkey FOREIGN KEY (owneruserid) REFERENCES users(id)
users | id | posts | lasteditoruserid | CONSTRAINT posts_lasteditoruserid_fkey FOREIGN KEY (lasteditoruserid) REFERENCES users(id)
Non-primary key references should also be listed and it should handle compound keys. | [
"SELECT (select r.relname from pg_class r where r.oid = c.confrelid) as base_table,\\n a.attname as base_col,\\n (select r.relname from pg_class r where r.oid = c.conrelid) as referencing_table,\\n UNNEST((select array_agg(attname) from pg_attribute where attrelid = c.conrelid and array[attnum] <@ c.conkey)) as referencing_col,\\n pg_get_constraintdef(c.oid) contraint_sql FROM pg_constraint c join pg_attribute a on c.confrelid=a.attrelid and a.attnum = ANY(confkey)\\n WHERE c.confrelid = (select oid from pg_class where relname = 'users')\\n AND c.confrelid!=c.conrelid;"
] | [] | [] | [] | [] | null | null |
26 | financial | We have a table 'trans' that records all transactions made by clients in various accounts. Each transaction has a 'trans_id', 'account_id', 'date', 'type', 'operation', 'amount', 'balance', 'k_symbol', 'bank', and 'account'. We need to add a new column 'next_bank' to the 'trans' table that indicates the next non-null 'bank' value for each transaction, ordered by 'date' within each 'account_id'. For example, if a transaction has a null 'bank', the 'next_bank' should be the 'bank' of the next transaction in the same account that has a non-null 'bank'. The user attempted to use the following SQL query, which fails in PostgreSQL due to the lack of support for the 'ignore nulls' clause in the window function. The query is as follows: | [
"SELECT first_value(bank ignore nulls) over (partition by account_id order by date rows unbounded following) as next_bank FROM trans;"
] | [] | [
"ALTER TABLE trans ADD COLUMN next_amount int;"
] | [
"ALTER TABLE trans DROP COLUMN next_amount;"
] | [] | null | null |
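A hedged sketch of one workaround for the missing IGNORE NULLS support, under the reading that `next_bank` is the bank of the next later transaction in the same account that has a non-null bank:

```sql
SELECT t.trans_id,
       (SELECT t2.bank
        FROM trans t2
        WHERE t2.account_id = t.account_id
          AND t2.date > t.date
          AND t2.bank IS NOT NULL
        ORDER BY t2.date
        LIMIT 1) AS next_bank
FROM trans t;
```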
27 | european_football_2 | I have two separate queries that I want to combine. The first query retrieves the team_api_id and short names of teams from the Team table. The second query retrieves the buildUpPlaySpeed from the Team_Attributes table, based on the team_api_id. I want to combine these two queries into a single query that outputs the team_api_id, team long name, and the corresponding buildUpPlaySpeed. I have tried the following SQL:
```sql
SELECT team_api_id, team_short_name FROM Team as data FULL OUTER JOIN ( SELECT buildUpPlaySpeed, team_api_id FROM Team_Attributes ta WHERE team_api_id = data.team_api_id ) AS subquery_alias ON data.team_api_id = subquery_alias.team_api_id;
```
However, when I ran this query, I encountered an error: There is an entry for table 'data' but it cannot be referenced from this part of the query. How can I modify my query so that it properly combines the results of the two queries? | [
"SELECT team_api_id, team_short_name FROM Team as data FULL OUTER JOIN (SELECT buildUpPlaySpeed, team_api_id FROM Team_Attributes ta WHERE team_api_id = data.team_api_id) AS subquery_alias ON data.team_api_id = subquery_alias.team_api_id;"
] | [] | [] | [] | [] | null | null |
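A hedged sketch of the usual fix: `LATERAL` makes the outer alias visible inside the subquery, which a plain `FULL OUTER JOIN` cannot do:

```sql
SELECT data.team_api_id, data.team_long_name, sub.buildUpPlaySpeed
FROM Team AS data
LEFT JOIN LATERAL (
    SELECT ta.buildUpPlaySpeed
    FROM Team_Attributes ta
    WHERE ta.team_api_id = data.team_api_id
) sub ON true;
```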
28 | financial | We have two tables in our financial database: `trans` and `loan`. The `trans` table records all transactions made by clients, while the `loan` table records all loans issued to clients. Each transaction and loan has a timestamp indicating when it occurred. We want to combine these two tables into a single dataset, without worrying about clashing IDs, and then count the number of actions (transactions and loans) per year. The goal is to produce a result set that shows the total number of actions in each year (order by year). I attempted to write a query but encountered an error related to the GROUP BY clause. | [
"WITH one AS ( SELECT date_trunc('year', date) as timeOne, COUNT(*) as trans_count FROM trans ORDER BY timeOne ), two AS ( SELECT date_trunc('year', date) as timeTwo, COUNT(*) as loan_count FROM loan ORDER BY timeTwo ) SELECT timeOne as year, SUM(trans_count, loan_count) as count FROM one, two ORDER BY 1;"
] | [] | [] | [] | [] | null | null |
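A hedged sketch of the usual shape: `UNION ALL` the two date columns into one set, then group once by the extracted year:

```sql
SELECT EXTRACT(YEAR FROM date) AS year, COUNT(*) AS count
FROM (
    SELECT date FROM trans
    UNION ALL
    SELECT date FROM loan
) combined
GROUP BY 1
ORDER BY 1;
```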
29 | debit_card_specializing | In the context of the debit_card_specializing database, we need to draw the first place to fifth place winners from a pool of customers based on their transaction amounts. A customer can't win multiple places. If a customer hasn't placed, then all of their transaction amounts must be considered in the draw. The goal is to draw all five place winners efficiently without repeating the query multiple times. The transactions_1k table contains the necessary data with columns such as CustomerID and Amount. The user initially attempted to draw one winner but couldn't extend the logic to draw all five winners without eliminating previous winners in each subsequent draw. | [
"WITH gen_transactions AS (SELECT CustomerID, Amount FROM transactions_1k CROSS JOIN LATERAL generate_series(1, CAST(Amount AS INTEGER))), shuffle AS (SELECT CustomerID, Amount, row_number() OVER (ORDER BY random()) AS rn FROM gen_transactions) SELECT * FROM shuffle ORDER BY RANDOM() LIMIT 1;"
] | [] | [] | [] | [] | null | null |
30 | card_games | The data in the table "card_information" includes one column named "price". I am using postgres and I have multiple entries of jsonb inside an array in a single column called price. They're input as the card names and corresponding prices. There are multiple rows, with multiple json elements inside each one of them. I would like to combine them into one big entry in one row, so that I will just have one row of one column as a result. | [
"\nINSERT INTO card_information(price) SELECT jsonb_agg(price) FROM (SELECT price FROM card_information) AS subquery; SELECT * FROM card_information;\n"
] | [] | [
"\nCREATE TABLE card_information (price JSONB); \nINSERT INTO card_information (price) VALUES \n('[{\"a\": 1}, {\"b\": 2}, {\"c\": 0.5}]'::jsonb), \n('[{\"d\": 2.2}, {\"e\": 2.4}, {\"f\": 3.5}]'::jsonb), \n('[{\"g\": 1.7}, {\"h\": 5.4}, {\"i\": 8.9}]'::jsonb);\nSELECT * FROM card_information;\n"
] | [
"DROP TABLE card_information;"
] | [] | null | null |
31 | financial | In the financial database, I have two tables: `trans` and `account`. The `trans` table contains transaction details including the `account_id`, `date`, `type`, `operation`, `amount`, `balance`, `k_symbol`, `bank`, and `account`. The `account` table contains account details including `account_id`, `district_id`, `frequency`, and `date`. For each transaction in the `trans` table that matches a specific `account_id` and `type`, I want to join the corresponding record in the `account` table with the minimum transaction date. I want to group the results by `k_symbol` and extract the `k_symbol`, `operation`, `amount`, `balance`, and `frequency` from the selected transaction record. | [
"SELECT t.k_symbol, t.operation, t.amount, t.balance, a.frequency FROM trans t INNER JOIN account a ON t.account_id = a.account_id WHERE t.account_id = 1 AND t.type = 'PRIJEM' GROUP BY t.k_symbol -- and t.date is the minimum for each group;"
] | [] | [
""
] | [
""
] | [] | null | null |
32 | card_games | I am trying to analyze the purchasing behavior of users in our card_games database to find out the count of sequential monthly purchases and their lengths for each user. I want to identify the longest streaks of consecutive monthly purchases for each user and then count how many users have each longest streak length. For example, if a user made purchases in March, April, May, and June, that would be a streak of 4 months. If another user made purchases in January, February, and March, that would be a streak of 3 months. I need to find the longest streak for each user and then count how many users have the longest streak of a certain length. The expected result should show the streak length and the number of users who have that longest streak length. | [
"\nSELECT user_id, COUNT(*) AS num_consecutive_months FROM (SELECT user_id, purchase_date, DATE_TRUNC('month', TO_DATE(purchase_date || '-01', 'YYYY-MM-DD')) AS month_date, ROW_NUMBER() OVER(PARTITION BY user_id ORDER BY DATE_TRUNC('month', TO_DATE(purchase_date || '-01', 'YYYY-MM-DD'))) - ROW_NUMBER() OVER(PARTITION BY user_id, DATE_TRUNC('month', TO_DATE(purchase_date || '-01', 'YYYY-MM-DD')) - INTERVAL '1 month' * ROW_NUMBER() OVER(PARTITION BY user_id ORDER BY DATE_TRUNC('month', TO_DATE(purchase_date || '-01', 'YYYY-MM-DD')))) AS grp FROM purchase) sub GROUP BY user_id, grp ORDER BY COUNT(*) DESC LIMIT 1;\n"
] | [] | [
"\nCREATE TABLE purchase ( purchase_date VARCHAR(255), user_id VARCHAR(255) ); INSERT INTO purchase(purchase_date, user_id) VALUES('2020-03', 'alex01'), ('2020-04', 'alex01'), ('2020-05', 'alex01'), ('2020-06', 'alex01'), ('2020-12', 'alex01'), ('2021-01', 'alex01'), ('2021-02', 'alex01'), ('2021-03', 'alex01'), ('2020-04', 'jon03'), ('2020-05', 'jon03'), ('2020-06', 'jon03'), ('2020-09', 'jon03'), ('2021-11', 'jon03'), ('2021-12', 'jon03'), ('2022-01', 'jon03'), ('2022-02', 'jon03'), ('2020-05', 'mark05'), ('2020-06', 'mark05'), ('2020-07', 'mark05'), ('2020-08', 'mark05'), ('2020-09', 'mark05');\n"
] | [
"DROP TABLE purchase;"
] | [] | null | null |
33 | financial | I am working with a table (card_info) containing card IDs, company names, and types. My task is to extract the pure card id without company information, "pure_cardid", from the cardid field by removing the substring between the first and second hyphens. Afterward, I need to retrieve the minimum value of type for each unique "pure_cardid", considering that multiple records may exist for the same "pure_cardid". My main challenge is how to correctly perform both the string manipulation and the aggregation in a single query. | [
"\nWITH tab_with_cardid AS (\n select split(cardid, '-', 3)ivm_arr,\n\n type,\n last_refresh_date\n FROM db.scema.table\n), ranked_visits AS (\n SELECT *, ROW_NUMBER() OVER(PARTITION BY CONCAT(ivm_arr[2],item) as temp ORDER BY type) AS rn\n FROM tab_with_cardid\n)\nSELECT cardid, pure_cardid\nFROM ranked_visits\nWHERE rn = 1\n"
] | [] | [
"\nCREATE TABLE card_info (\n cardid VARCHAR(50),\n company VARCHAR(10),\n type CHAR(1)\n);\n\nINSERT INTO card_info (cardid, company, type) VALUES\n('1234-5678-HIJK', '1234', 'A'),\n('1234-9012-HIJK', '1234', 'B'),\n('56457-12456-DF-GH-TC', '56457', 'D');\n"
] | [
"DROP TABLE card_info;"
] | [] | null | null |
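A hedged sketch of one way to do both steps at once: `regexp_replace` drops the segment between the first and second hyphens ('1234-5678-HIJK' becomes '1234-HIJK'), and the aggregate runs over the result:

```sql
SELECT regexp_replace(cardid, '^([^-]*)-[^-]*-', '\1-') AS pure_cardid,
       MIN(type) AS min_type
FROM card_info
GROUP BY pure_cardid;
```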
34 | european_football_2 | Suppose we have the following table in the 'european_football_2' database that records the overall rating of players over time:
| player_api_id | date | overall_rating |
|---------------|------------|----------------|
| 505942 | 2016-02-18 | 67 |
| 505942 | 2015-11-19 | 67 |
| 505942 | 2015-09-21 | 62 |
| 155782 | 2016-03-15 | 75 |
| 155782 | 2015-12-10 | 74 |
| 162549 | 2016-01-20 | 70 |
| 162549 | 2015-10-25 | 68 |
For each player, we want the latest overall rating based on the date. The final table would be:
| player_api_id | date | overall_rating |
|---------------|------------|----------------|
| 505942 | 2016-02-18 | 67 |
| 155782 | 2016-03-15 | 75 |
| 162549 | 2016-01-20 | 70 |
I attempted to group by player_api_id while ordering by date and then getting the first value:
```sql
SELECT player_api_id, MAX(date), FIRST(overall_rating)
FROM Player_Attributes
GROUP BY player_api_id
ORDER BY date desc
```
But this doesn't work. | [
"SELECT player_api_id, MAX(date), FIRST(overall_rating) FROM Player_Attributes GROUP BY player_api_id ORDER BY date desc;"
] | [] | [
""
] | [
""
] | [] | null | null |
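A hedged sketch of the idiomatic PostgreSQL answer: `DISTINCT ON` with a descending date sort keeps the newest row per player:

```sql
SELECT DISTINCT ON (player_api_id) player_api_id, date, overall_rating
FROM Player_Attributes
ORDER BY player_api_id, date DESC;
```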
35 | codebase_community | I am using a tool that allows querying user data in our local database using the PostgreSQL interface. I am running a simple query to print all ages of the users on our platform. However, I am getting an error message that says 'ERROR: invalid input syntax for type numeric: "text"'. I am not sure why I am getting this error. Can you help me understand why this error is occurring and how I can fix it? | [
"SELECT Age::numeric FROM users;"
] | [] | [
"ALTER TABLE users ALTER COLUMN Age SET DATA TYPE text; INSERT INTO users VALUES (1212121,3150,'2010-07-19 19:09:39','JMS','2014-09-13 04:03:25',NULL,NULL,NULL,257,138,7,134002,'Invalid Age',NULL);"
] | [
"DELETE FROM users WHERE id = 1212121; ALTER TABLE users ALTER COLUMN age SET DATA TYPE integer USING age::integer;"
] | [] | null | null |
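A hedged sketch of a defensive cast, matching the preprocess step that plants a non-numeric 'Invalid Age' value: rows that do not look numeric become NULL instead of raising the error:

```sql
SELECT CASE WHEN Age ~ '^[0-9]+$' THEN Age::numeric END AS age
FROM users;
```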
36 | codebase_community | In our local database, we have two tables `users` and `profiles`. When a new user is added to the `users` table, we want to automatically create a corresponding profile in the `profiles` table. The `profiles` table has three columns: `id`, `CreationDate`, and `WebsiteUrl`. The `WebsiteUrl` should be derived from the user's WebsiteUrl by taking the part before the '.com' and after the 'http://'. For example, 'http://stackoverflow.com' should become 'stackoverflow'. To achieve this, I created a trigger on the `users` table with the following function:
```sql
begin
  insert into profiles (Id, CreationDate, WebsiteUrl)
  select new.id, new.WebsiteUrl, left(replace(new.WebsiteUrl, '.', '-'), charindex('@', replace(new.WebsiteUrl, '.', '-')) - 1);
  return new;
end;
```
However, when a new user is added, I encounter the error: ERROR: function charindex(unknown, text) does not exist (SQLSTATE 42883) | [
"begin insert into profiles (Id, CreationDate, WebsiteUrl) select new.Id, new.CreationDate, left(replace(new.WebsiteUrl, '.', '-'), charindex('@', replace(new.WebsiteUrl, '.', '-')) - 1); return new; end;"
] | [] | [
"DROP TABLE IF EXISTS profiles; CREATE TABLE profiles (id varchar(256) NOT NULL, CreationDate text, WebsiteUrl text, PRIMARY KEY (id));"
] | [] | [] | null | null |
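PostgreSQL has no `charindex()`; a hedged sketch of one native alternative for the host extraction described above, using `split_part`:

```sql
SELECT split_part(split_part('http://stackoverflow.com', '//', 2), '.', 1);
-- -> 'stackoverflow'
```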
37 | financial | We have a large transaction table in our financial database with over 180 million rows and 20 GB in size. The table is structured to store detailed transaction records for various accounts. We are running a query to retrieve specific transactions based on a list of account IDs, a specific bank, and a range of transaction types. The query is taking an unexpectedly long time to execute when the shared buffers are cold, around 9 seconds, but only 25 ms when the data is cached. We suspect that the query planner is not choosing the most optimal execution plan. We have tried adding a covering index and forcing a Bitmap Heap Scan, but we would like to understand why the planner is not making the best choice and find a more permanent solution to improve performance to around 1-2 seconds. | [
"SELECT t.trans_id, t.account_id, t.date, t.type, t.amount FROM trans t JOIN account a ON t.account_id = a.account_id WHERE a.district_id = 18 AND t.bank = 'AB' AND t.type IN ('PRIJEM', 'VYDAJ')"
] | [] | [] | [] | [] | null | null |
38 | card_games | A user is working with a table named `cards` in the `card_games` database. They want to find card records that match specific criteria: `availability` is 'paper', `bordercolor` is 'black', `rarity` is 'uncommon', and `type` is 'Creature'. They can write a query to get rows that match all these conditions. However, they also want to find cards that meet 3 out of these 4 criteria. Can this be done in a single SQL query? | [
"SELECT * FROM cards WHERE availability = 'paper' AND bordercolor = 'black' AND rarity = 'uncommon' AND types = 'Creature';"
] | [] | [] | [] | [] | null | null |
39 | student_club | I want to insert a new event into the 'event' table and, in case of a duplicate event ID (which is unique), log the failure in the 'failure' table with the specific event ID and member ID indicating the error. For example, I want to insert an event with the ID 'recAlAwtBZ0Fqbr5K' and name 'Annual Gala'. If the insert fails due to a duplicate event ID, log the failure with the member ID 'rec280Sk7o31iG0Tx'. My current SQL statement is producing an error: syntax error at or near 'insert'. | [
"insert into event (event_id, event_name, event_date, type, notes, location, status) values ('recAlAwtBZ0Fqbr5K', 'Annual Gala', '2023-12-15T19:00:00', 'Social', 'Annual Gala for club members', 'Grand Ballroom', 'Open') on conflict (event_id) do insert into failure (event, member) values ('recAlAwtBZ0Fqbr5K', 'rec280Sk7o31iG0Tx');"
] | [] | [
"CREATE TABLE failure (event VARCHAR(255) NOT NULL, member VARCHAR(255) NOT NULL, PRIMARY KEY (event, member));"
] | [
"DROP TABLE IF EXISTS failure;"
] | [] | null | null |
40 | european_football_2 | I am new to functions and triggers in PostgreSQL. I am trying to create a trigger function to log changes in the player's name in the Player table. I followed a tutorial but encountered an error. The code block and the error are provided below. The Player table contains detailed information about players. The player_audits table is intended to keep track of any changes to the player's name along with the timestamp of the change. | [
"CREATE OR REPLACE FUNCTION log_player_name_changes() RETURNS trigger AS $BODY$ BEGIN IF NEW.player_name <> OLD.player_name THEN INSERT INTO player_audits(player_id, old_player_name, changed_on) VALUES(OLD.id, OLD.player_name, now()); END IF; RETURN NEW; END; $BODY$ CREATE TRIGGER tr_change_playername AFTER UPDATE OF player_name ON player FOR EACH ROW EXECUTE PROCEDURE log_player_name_changes();"
] | [] | [
"CREATE TABLE player_audits (player_id int, old_player_name text, changed_on timestamp );"
] | [
"DROP TABLE IF EXISTS player_audits;"
] | [] | null | null |
41 | student_club |
I have an event_attendance table, and what I am trying to build should be one row for each member.
Column definitions of the expected output:
Game_AttendanceDate: Latest attendance date where EventType = 'Game'
Game_Attendances: Total number of Game events attended by each member.
Workshop_AttendanceDate: Latest attendance date where EventType = 'Workshop'
Workshop_Attendances: Total number of Workshop events attended by each member.
Total_Attendances: Total events attended by each member.
I tried this for one category, but doing the same calculation for two more categories would add another two subqueries. Is there any way to optimize the SQL code?
| [
"\nSELECT\n COALESCE(a.MemberID, b.MemberID) AS MemberID,\n a.AttendanceDate AS Latest_Game_Date,\n a.Game_Attendance AS Total_Game_Attendance,\n b.AttendanceDate AS Latest_Workshop_Date,\n b.Workshop_Attendance AS Total_Workshop_Attendance,\n a.Game_Attendance + b.Workshop_Attendance AS Total_Attendance\nFROM \n(\n SELECT \n MemberID, \n EventType,\n AttendanceDate,\n COUNT(EventID) OVER(PARTITION BY MemberID, EventType) AS Game_Attendance,\n ROW_NUMBER() OVER(PARTITION BY MemberID, EventType ORDER BY AttendanceDate DESC) AS RNUM\n FROM event_attendance\n WHERE EventType = 'Game'\n) a\nFULL JOIN \n(\n SELECT \n MemberID, \n EventType,\n AttendanceDate,\n COUNT(EventID) OVER(PARTITION BY MemberID, EventType) AS Workshop_Attendance,\n ROW_NUMBER() OVER(PARTITION BY MemberID, EventType ORDER BY AttendanceDate DESC) AS RNUM\n FROM event_attendance\n WHERE EventType = 'Workshop'\n) b\nON a.MemberID = b.MemberID\nWHERE (a.RNUM = 1 OR a.RNUM IS NULL) AND (b.RNUM = 1 OR b.RNUM IS NULL);\n"
] | [] | [
"\nCREATE TABLE event_attendance (MemberID int, EventID int, EventType text, AttendanceDate date); INSERT INTO event_attendance (MemberID, EventID, EventType, AttendanceDate) VALUES (1, 101, 'Game', '2023-01-01'), (1, 102, 'Game', '2023-01-10'), (1, 103, 'Game', '2023-02-15'), (1, 104, 'Game', '2023-02-20'), (1, 105, 'Workshop', '2023-03-01'), (1, 106, 'Workshop', '2023-03-20'), (2, 107, 'Game', '2023-01-15'), (2, 108, 'Workshop', '2023-02-06');\n"
] | [
"DROP TABLE event_attendance;"
] | [] | null | null |
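A hedged sketch of the usual flattening: aggregate `FILTER` clauses compute every category in a single pass, replacing the per-category subqueries:

```sql
SELECT MemberID,
       MAX(AttendanceDate) FILTER (WHERE EventType = 'Game')     AS Game_AttendanceDate,
       COUNT(*)            FILTER (WHERE EventType = 'Game')     AS Game_Attendances,
       MAX(AttendanceDate) FILTER (WHERE EventType = 'Workshop') AS Workshop_AttendanceDate,
       COUNT(*)            FILTER (WHERE EventType = 'Workshop') AS Workshop_Attendances,
       COUNT(*) AS Total_Attendances
FROM event_attendance
GROUP BY MemberID;
```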
42 | codebase_community |
I'm working with a table called `preference_tag`, which contains a `userid` and an array of tags in the `tag` column.
I need to find rows in the user's tag preference table where the array contains the corresponding tags.
For example, when querying with `ARRAY['friend', 'cat']`, it works as expected, returning the rows where the array contains both 'friend' and 'cat'.
However, when I try to use wildcard symbols (e.g., `ARRAY['%friend%', '%cat%']`), it doesn't return the expected results.
The issue seems to be related to the `%` symbols, as I want to match any values that contain substrings like 'friend' or 'cat', but I don't need an exact match.
| [
"\nSELECT DISTINCT userid, tag\nFROM preference_tag\nWHERE tag @> (ARRAY['friend', 'cat']::VARCHAR[]);\n"
] | [] | [
"\nCREATE TABLE preference_tag (\n userid INT PRIMARY KEY,\n tag TEXT[]\n);\n\nINSERT INTO preference_tag (userid, tag) VALUES\n(1, ARRAY['friend', 'apple', 'cat']),\n(2, ARRAY['cat', 'friend', 'dog']),\n(3, ARRAY['pasta', 'best-friend', 'lizard']),\n(4, ARRAY['wildcat', 'potato', 'alices-friend']);\n\n"
] | [
"DROP TABLE preference_tag;"
] | [] | null | null |
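A hedged sketch of substring matching against array elements: `unnest` the array and `LIKE`-match each element, with one `EXISTS` per required substring:

```sql
SELECT DISTINCT userid, tag
FROM preference_tag p
WHERE EXISTS (SELECT 1 FROM unnest(p.tag) AS t(val) WHERE t.val LIKE '%friend%')
  AND EXISTS (SELECT 1 FROM unnest(p.tag) AS t(val) WHERE t.val LIKE '%cat%');
```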
43 | financial | In the financial database, there is a table named 'account_info' that stores the detailed information of accounts. Each row in the table includes an array in the 'condition' column, which contains various conditions related to the account. We need to find all qualifying accounts where the 'condition' column contains a condition with a specific 'rootcompanyid' value of 5. The current query is only returning the last row that matches the condition, but we need all rows that have this 'rootcompanyid' value in any part of the array. | [
"SELECT * FROM account_info WHERE ((condition->0->>'conditions')::json->>'rootcompanyid')::json->>'$in' = '[5]';"
] | [] | [
"CREATE TABLE IF NOT EXISTS account_info (account_id INTEGER, condition JSONB);",
"INSERT INTO account_info (account_id, condition) VALUES (1, '[{\"action\":\"read\",\"subject\":\"rootcompany\",\"conditions\":{\"rootcompanyid\":{\"$in\":[35,20,5,6]}}}]'::jsonb), (2, '[{\"action\":\"read\",\"subject\":\"rootcompany\",\"conditions\":{\"rootcompanyid\":{\"$in\":[1,4,2,3,6]}}}]'::jsonb), (3, '[{\"action\":\"read\",\"subject\":\"rootcompany\",\"conditions\":{\"rootcompanyid\":{\"$in\":[5]}}}]'::jsonb);"
] | [
"DROP TABLE IF EXISTS account_info;"
] | [] | null | null |
44 | superhero | I am working on a superhero database and have a table called 'hero_power' that records the powers of each superhero. Currently, the combination of 'hero_id' and 'power_id' is supposed to be unique, meaning that a superhero cannot have the same power listed more than once. However, this is not quite what I want. Instead, I would want the combination 'hero_id' and 'power_id' to be unique only in cases where the power is currently active. In other words, a superhero should be able to have multiple instances of the same power listed if the power is inactive, but should not be allowed to have duplicates that are active. Is there a way to enforce this in this table? | [
"ALTER TABLE hero_power ADD CONSTRAINT unique_active_hero_power UNIQUE (hero_id, power_id);"
] | [] | [
"ALTER TABLE hero_power ADD COLUMN active BOOLEAN DEFAULT TRUE;"
] | [
"ALTER TABLE hero_power DROP COLUMN IF EXISTS active;",
"DROP INDEX IF EXISTS idx_hero_power_active;"
] | [] | null | null |
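A hedged sketch of the standard answer (the index name matches the one the clean_up_sql already drops): a partial unique index enforces uniqueness only among active rows, so inactive duplicates are allowed:

```sql
CREATE UNIQUE INDEX idx_hero_power_active
    ON hero_power (hero_id, power_id)
    WHERE active;
```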
45 | toxicology | In the toxicology database, we have a table named `orders` that records the purchases made by users. Each record includes the `user_id`, `email`, `segment` (type of purchase), `destination` (location of purchase), and `revenue` (amount spent). We need to identify users who meet specific criteria based on their purchase history:
1) Users who have made a purchase in the `luxury` segment with a `destination` of `New York`.
2) Users who have made a purchase in the `luxury` segment with a `destination` of `London`.
3) Users who have made purchases in the `basic` segment with a `destination` of `New York` and the total revenue from these purchases exceeds $2,000.
4) Users who have never made a purchase with a `destination` of `Miami`.
Given the sample data, we expect to retrieve the following users:
user_id | email
3 | [email protected]
4 | [email protected]
5 | [email protected]
The user attempted to use the following SQL query to get part of the required results, but it did not account for conditions 3 and 4:
```sql
SELECT DISTINCT(user_id), email FROM orders o WHERE (o.segment = 'luxury' AND o.destination = 'New York') OR (o.segment = 'luxury' AND o.destination = 'London')
```
| [
"SELECT DISTINCT(user_id), email FROM orders o WHERE (o.segment = 'luxury' AND o.destination = 'New York') OR (o.segment = 'luxury' AND o.destination = 'London')"
] | [] | [
"CREATE TABLE orders (user_id INT, email TEXT, segment TEXT, destination TEXT, revenue NUMERIC); INSERT INTO orders (user_id, email, segment, destination, revenue) VALUES (1, '[email protected]', 'basic', 'New York', 500), (1, '[email protected]', 'luxury', 'London', 750), (1, '[email protected]', 'luxury', 'London', 500), (1, '[email protected]', 'basic', 'New York', 625), (1, '[email protected]', 'basic', 'Miami', 925), (1, '[email protected]', 'basic', 'Los Angeles', 218), (1, '[email protected]', 'basic', 'Sydney', 200), (2, '[email protected]', 'basic', 'Chicago', 375), (2, '[email protected]', 'luxury', 'New York', 1500), (2, '[email protected]', 'basic', 'Toronto', 2800), (2, '[email protected]', 'basic', 'Miami', 750), (2, '[email protected]', 'basic', 'New York', 500), (2, '[email protected]', 'basic', 'New York', 625), (3, '[email protected]', 'luxury', 'New York', 650), (3, '[email protected]', 'basic', 'New York', 875), (4, '[email protected]', 'luxury', 'Chicago', 1300), (4, '[email protected]', 'basic', 'New York', 1200), (4, '[email protected]', 'basic', 'New York', 1000), (4, '[email protected]', 'luxury', 'Sydney', 725), (5, '[email protected]', 'basic', 'London', 500), (5, '[email protected]', 'luxury', 'London', 750);"
] | [
"DROP TABLE orders;"
] | [] | null | null |
46 | formula_1 | In the Formula 1 database, there is a table named 'cars' which contains information about cars. Each entry includes a 'version' column that records the version of the car used by the driver in the race. The version numbers are in a format similar to '3.0.5-1-test-dev' and need to be sorted correctly to determine the latest version used in a race. However, the current sorting method does not handle multi-digit numbers correctly and fails when the version includes additional string information after the numeric version. The task is to write a query that correctly sorts the versions. If the table is sorted, I can get the latest version by selecting the first row. | [
"SELECT version FROM cars ORDER BY SUBSTRING(version, '^[0-9]+') DESC, SUBSTRING(version, '[0-9]+\\.[0-9]+\\.([0-9]+)-') DESC, CAST(SUBSTRING(version, '[0-9]+\\.[0-9]+\\.[0-9]+-([0-9]+)') AS INTEGER) DESC, SUBSTRING(version, '[0-9]+\\.[0-9]+\\.[0-9]+-[0-9]+\\.([0-9]+)') DESC"
] | [] | [
"CREATE TABLE cars (version varchar(100))",
"INSERT INTO cars (version) VALUES ('3.0.5-1-test-dev'), ('3.0.6-1'), ('3.0.7-1-test'), ('3.0.8-1-test-dev-test23'), ('3.0.9-1'), ('3.0.13-2'), ('3.0.4-1-1'), ('3.0.10-1'), ('3.0.11-2'), ('3.0.11-1')"
] | [
"DROP TABLE cars;"
] | [] | null | null |
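A hedged sketch of a natural-version sort, assuming the dotted prefix is always numeric and the token after the first hyphen, when present, is an integer (true of the sample data):

```sql
SELECT version
FROM cars
ORDER BY string_to_array(split_part(version, '-', 1), '.')::int[] DESC,
         NULLIF(split_part(version, '-', 2), '')::int DESC NULLS LAST
LIMIT 1;  -- latest version; '3.0.13-2' for the sample data
```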
47 | thrombosis_prediction | In the thrombosis_prediction database, we have a set of normalized tables representing patients, medications, and their prescriptions. Each patient can be prescribed multiple medications, and each medication can be prescribed to multiple patients. For reporting purposes, we need a highly denormalized view that shows each patient's name and a list of all medications they are prescribed. However, when we filter the list to show only patients who are prescribed a specific medication (e.g., Aspirin), we lose the information about other medications those patients are prescribed. We want to filter by a specific medication but still get a list of all medications that a patient is prescribed in one row. | [
"SELECT prescriptions.patient_id, array_agg(DISTINCT prescriptions.medication_id ORDER BY prescriptions.medication_id) AS medications FROM prescriptions INNER JOIN prescriptions AS Aspirin_filter ON prescriptions.patient_id = Aspirin_filter.patient_id AND Aspirin_filter.medication_id = 1 GROUP BY prescriptions.patient_id;"
] | [] | [
"CREATE TABLE patients ( patient_id SERIAL PRIMARY KEY, patient_name TEXT NOT NULL );",
"CREATE TABLE medications ( medication_id SERIAL PRIMARY KEY, medication_name TEXT NOT NULL );",
"CREATE TABLE prescriptions ( patient_id INT REFERENCES patients (patient_id), medication_id INT REFERENCES medications (medication_id), PRIMARY KEY (patient_id, medication_id) );",
"INSERT INTO patients (patient_name) VALUES ('Alice'), ('Bob'), ('Charlie');",
"INSERT INTO medications (medication_name) VALUES ('Aspirin'), ('Ibuprofen'), ('Paracetamol'), ('Warfarin');",
"INSERT INTO prescriptions (patient_id, medication_id) VALUES (1, 1), (1, 2), (1, 3);",
"INSERT INTO prescriptions (patient_id, medication_id) VALUES (2, 2);",
"INSERT INTO prescriptions (patient_id, medication_id) VALUES (3, 2), (3, 1), (3, 3), (3, 4);"
] | [] | [] | null | null |
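A hedged sketch of the usual trick: aggregate everything per patient first, then filter whole groups with `HAVING`, so the Aspirin filter (medication_id 1) no longer hides the other medications:

```sql
SELECT p.patient_id,
       array_agg(DISTINCT p.medication_id ORDER BY p.medication_id) AS medications
FROM prescriptions p
GROUP BY p.patient_id
HAVING bool_or(p.medication_id = 1);
```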
48 | formula_1 | In the context of Formula 1 racing data, I have two tables: `races` and `results`. The `races` table contains information about each race, including the `raceId` which uniquely identifies each race. The `results` table contains detailed information about the results of each race, including the `raceId` to link back to the `races` table, `driverId` to identify the driver, and `points` which represent the points scored by the driver in that race. I need to calculate the total points scored by each driver across all races, but only for races where the driver has participated. If a driver has not participated in any races, their total points should be `0`. I attempted to write a query to achieve this but encountered issues with grouping and ensuring that drivers who haven't participated in any races are included with a total of `0` points. | [
"SELECT r.driverId, ((SELECT COALESCE(SUM(r.points), 0) FROM results r WHERE r.raceId = races.raceId) - (SELECT COALESCE(SUM(r.points), 0) FROM results r WHERE r.raceId = races.raceId)) AS total_points FROM results r GROUP BY r.driverId"
] | [] | [
""
] | [
""
] | [] | null | null |
49 | superhero | In the context of the superhero database, I need to calculate the total count of superheroes by their alignment and also display the count of superheroes for each specific alignment and race combination. I attempted to write a query to achieve this but it doesn't provide the total count by alignment as I expected. Here's what I tried: | [
"select count(S.id), A.alignment, count(R.race), R.race from superhero S, alignment A, race R where S.alignment_id=A.id and S.race_id=R.id group by A.alignment, R.race;"
] | [] | [
""
] | [
""
] | [] | null | null |
50 | formula_1 | In the context of analyzing Formula 1 race results, I'm trying to understand the behavior of window functions in PostgreSQL. Specifically, I'm looking at the `array_agg` function with and without an `ORDER BY` clause within a window function. I expect both to return the same result since no filtering is applied, but they don't. Here's the scenario: I have a table of race results, and I want to aggregate the driver IDs in two ways: one with an order by the points they scored in the race, and another without any order. The results seem to suggest that ordering the partition affects the aggregation, which is confusing. Here's the SQL I used: | [
"select driverId, points, lead(driverId) over (order by points asc) as \"lead(driverId) with order\", array_agg(driverId) over (order by points asc) as \"array_agg(driverId) with order\", lead(driverId) over () as \"lead(driverId) without order\", array_agg(driverId) over () as \"array_agg(driverId) without order\" from results where raceId = 19 order by driverId asc"
] | [] | [
""
] | [
""
] | [] | null | null |
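The observed behavior is expected: with an ORDER BY, the default window frame is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW, so `array_agg` only sees rows up to the current one. A sketch that keeps the ordering but aggregates the whole partition by widening the frame:

```sql
SELECT driverId, points,
       array_agg(driverId) OVER (
         ORDER BY points ASC
         ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
       ) AS "array_agg(driverId), ordered, full frame"
FROM results
WHERE raceId = 19
ORDER BY driverId ASC;
```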
51 | formula_1 | In the context of Formula 1 racing data analysis, a user is attempting to calculate the total duration of pit stops for each race day based on the difference between consecutive pit stop times recorded in the same column. The user has a table that records pit stop details including race ID, driver ID, stop number, lap number, pit stop time, and duration. The user's initial approach was to calculate the maximum and minimum pit stop times for each race day and then find the difference between these times to estimate the total pit stop duration. However, this approach misses the intermediate pit stops, leading to an inaccurate total duration calculation. The user is seeking a method to accurately calculate the total pit stop duration by considering all consecutive pit stop times for each race day. | [
"SELECT \n raceId,\n MAX(time::time) AS end_time,\n MIN(time::time) AS start_time,\n (MAX(time::time) - MIN(time::time)) AS total_duration\nFROM pitStops\nWHERE raceId = 842\nGROUP BY raceId;"
] | [] | [
""
] | [
""
] | [] | null | null |
52 | toxicology | In the toxicology database, I'm attempting to retrieve a specific data structure from a query. My data is structured in a way that each molecule has atoms connected by bonds, and each molecule is labeled as either carcinogenic (+) or not carcinogenic (-). I want to return a JSON object that groups molecules by their label and lists the atoms and bonds for each molecule. The desired output format is a JSON object where each key is a label, and the value is an array of objects, each representing a molecule with its atoms and bonds. Here's the SQL query I have so far, but it doesn't produce the desired output structure: | [
"select label, JSON_AGG(JSON_BUILD_OBJECT(atom.molecule_id, atom.atom_id)) AS groupedMolecules FROM molecule JOIN atom ON molecule.molecule_id = atom.molecule_id GROUP BY label"
] | [] | [
""
] | [
""
] | [] | null | null |
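A sketch of the desired shape, assuming the standard toxicology `bond` table (`bond_id`, `molecule_id`): build one object per molecule with correlated subqueries, aggregate molecules per label, then fold the labels into a single JSON object.

```sql
SELECT json_object_agg(label, molecules) AS grouped_molecules
FROM (
  SELECT m.label,
         json_agg(json_build_object(
           'molecule_id', m.molecule_id,
           'atoms', (SELECT json_agg(a.atom_id) FROM atom a WHERE a.molecule_id = m.molecule_id),
           'bonds', (SELECT json_agg(b.bond_id) FROM bond b WHERE b.molecule_id = m.molecule_id)
         )) AS molecules
  FROM molecule m
  GROUP BY m.label
) per_label;
```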
53 | toxicology | In the context of a toxicology database, I have a `molecule` table that tracks molecules and their carcinogenic status, and an `atom` table that records atoms within these molecules. Each atom is identified by a unique `atom_id` and belongs to a molecule identified by `molecule_id`. The `element` column in the `atom` table specifies the chemical element of the atom. I need to count the number of sodium (`na`) and carbon (`c`) or chlorine (`cl`) atoms for each molecule. However, if both carbon (`c`) and chlorine (`cl`) elements exist within the same molecule, they should be counted as one. Here's the SQL query I attempted, but it counts each atom individually, even if they are of the same element within the same molecule: | [
"SELECT molecule_id, COALESCE(SUM(CASE WHEN element = 'na' THEN 1 ELSE 0 END), 0) na_atoms, COALESCE(SUM(CASE WHEN element = 'c' OR element = 'cl' THEN 1 ELSE 0 END), 0) c_atoms FROM atom GROUP BY molecule_id;"
] | [] | [
""
] | [
""
] | [] | null | null |
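Under one reading of the requirement, `c`/`cl` should contribute at most 1 per molecule, i.e. a presence flag rather than an atom count. A sketch using FILTERed aggregates under that assumption:

```sql
SELECT molecule_id,
       COUNT(*) FILTER (WHERE element = 'na') AS na_atoms,
       -- collapses any number of c/cl atoms to a single 1
       (COUNT(*) FILTER (WHERE element IN ('c', 'cl')) > 0)::int AS c_atoms
FROM atom
GROUP BY molecule_id;
```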
54 | european_football_2 | In the context of analyzing football match data, I'm attempting to calculate the average number of goals scored by each team, grouped by the hour of the match. The goal is to understand the performance trends of teams at different times of the day without resorting to external scripting. Here's the initial approach I took, which unfortunately resulted in an error due to incorrect handling of the timestamp data. | [
"SELECT home_team_api_id, AVG(home_team_goal) as avg_home_goals, AVG(away_team_goal) as avg_away_goals, SUM(home_team_goal) as total_home_goals, SUM(away_team_goal) as total_away_goals, MAX(home_team_goal) as max_home_goals, MIN(home_team_goal) as min_home_goals, COUNT(home_team_api_id) as count FROM Match GROUP BY home_team_api_id, date_part('hour', date);"
] | [] | [
""
] | [
""
] | [] | null | null |
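A sketch of the likely intent: the hour expression must also appear in the select list, and every non-aggregated expression must be grouped. Assuming `match.date` is stored as text (as in this schema), a cast is needed before `date_part`.

```sql
SELECT home_team_api_id,
       date_part('hour', date::timestamp) AS match_hour,
       AVG(home_team_goal) AS avg_home_goals,
       AVG(away_team_goal) AS avg_away_goals,
       COUNT(*) AS match_count
FROM match
GROUP BY home_team_api_id, date_part('hour', date::timestamp);
```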
55 | debit_card_specializing | In the table clients_to_groups, we need to identify clients who have made transactions at gas stations that belong to specific groups. Specifically, we want to find clients who have made transactions at gas stations that are either in the group 1 or 3 AND also in group 5 or 6. For example, a client who has made transactions at a gas station in the group 5 and another transaction at a gas station in the group 1 should be included in the results, but a client who has only made transactions at gas stations in the group 5 should not be included. | [
"SELECT DISTINCT c.id FROM clients c INNER JOIN clients_to_groups at1 ON c.id = at1.client_id INNER JOIN clients_to_groups at2 ON c.id = at2.client_id WHERE at1.group_id IN (5, 6) AND at2.group_id IN (1, 3);"
] | [] | [
"CREATE TABLE clients (id INT NOT NULL);",
"CREATE TABLE groups (id INT NOT NULL);",
"CREATE TABLE clients_to_groups (id serial, group_id INT, client_id INT);",
"INSERT INTO clients(id) VALUES (0), (1), (2), (3);",
"INSERT INTO groups(id) VALUES (1), (3), (5), (6);",
"INSERT INTO clients_to_groups(client_id, group_id) VALUES (0, 1), (0, 5), (1, 1), (1, 90), (2, 1), (3, 3), (3, 5), (3, 90);",
"INSERT INTO clients (id) SELECT random() from generate_series(1,2000);",
"INSERT INTO clients_to_groups(client_id, group_id) SELECT random(), random() from generate_series(1,2000);"
] | [
"DROP TABLE clients;",
"DROP TABLE groups;",
"DROP TABLE clients_to_groups;"
] | [] | null | null |
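A sketch that states the two-sided requirement directly with two EXISTS predicates, which also tends to plan better than a double self-join:

```sql
SELECT c.id
FROM clients c
WHERE EXISTS (SELECT 1 FROM clients_to_groups g
              WHERE g.client_id = c.id AND g.group_id IN (1, 3))
  AND EXISTS (SELECT 1 FROM clients_to_groups g
              WHERE g.client_id = c.id AND g.group_id IN (5, 6));
```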
56 | european_football_2 | In the context of the 'european_football_2' database, consider a table that records daily financial transactions for football clubs. Each transaction includes the date, the club name, and the amount of money involved, which can be positive (income) or negative (expense). The goal is to group these transactions by club and sign (positive or negative) and sum the amounts for consecutive transactions of the same sign for each club. For example, if a club has consecutive positive transactions, they should be summed up into a single transaction. The user attempted to use window functions but encountered issues with their query, which did not produce the desired output. | [
"SELECT transaction_date AS date, club_name, sum(amount) over (partition by club_name, sign(amount) order by transaction_date) from club_transactions"
] | [] | [
"CREATE TABLE club_transactions (transaction_date DATE, club_name VARCHAR(50), amount INTEGER);",
"INSERT INTO club_transactions (transaction_date, club_name, amount) VALUES ('2023-01-01', 'Manchester United', 3), ('2023-01-02', 'Manchester United', 2), ('2023-01-03', 'Manchester United', 1), ('2023-01-04', 'Manchester United', -5), ('2023-01-05', 'Manchester United', 1), ('2023-01-01', 'Liverpool', 2), ('2023-01-02', 'Liverpool', -1), ('2023-01-03', 'Liverpool', -6);"
] | [
"DROP TABLE club_transactions;"
] | [] | null | null |
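This is a gaps-and-islands problem; a running window sum alone cannot merge consecutive runs. A sketch using the row-number difference trick to label each run of same-sign transactions per club, then summing per run:

```sql
WITH marked AS (
  SELECT transaction_date, club_name, amount,
         ROW_NUMBER() OVER (PARTITION BY club_name ORDER BY transaction_date)
       - ROW_NUMBER() OVER (PARTITION BY club_name, sign(amount)
                            ORDER BY transaction_date) AS run_id
  FROM club_transactions
)
SELECT MIN(transaction_date) AS date,
       club_name,
       SUM(amount) AS amount
FROM marked
GROUP BY club_name, sign(amount), run_id
ORDER BY club_name, date;
```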
57 | california_schools | I have a table in Postgres that returns flat data. But I would like it to be returned to me in a Json ordered with its children as follows, and I have not been able to solve it.Is there a way in postgresql to order the parent modules with their child modules, I attach an example "[{"children":[{"id_module":4,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":1},{"id_module":3,"desc_module":"C","module_code":"232","name_module":"C","id_parent_module":1},{"id_module":2,"desc_module":"B","module_code":"011.002","name_module":"B","id_parent_module":1}],"id_module":1,"desc_module":"A","module_code":"001","name_module":"A","id_parent_module":null},{"children":[{"id_module":14,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":5}],"id_module":5,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":null},{"children":[{"id_module":22,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":6},{"id_module":8,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":6},{"id_module":7,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":6}],"id_module":6,"desc_module":"qw","module_code":"23","name_module":"asdf","id_parent_module":null},{"children":[{"id_module":21,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":9},{"id_module":20,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":9}],"id_module":9,"desc_module":"asdfsad","module_code":"asdf","name_module":"asdf","id_parent_module":null},{"children":[{"id_module":13,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":10},{"id_module":12,"desc_module":"asdfsf","module_code":"asdf","name_module":"asdf","id_parent_module":10},{"id_module":11,"desc_module":"asdf","module_code":"sadf","name_module":"asdf","id_parent_module":10}],"id_module":10,"desc_module":"asdf","module_code":"asdf","name_module":"asdf","id_parent_module":null}]" | [
"SELECT array_to_json(array_agg(row_to_json(alias))) FROM (select * from modules ) alias"
] | [] | [
"create table modules (id_module int, id_parent_module int, module_code text, name_module text, desc_module text);",
"insert into modules values (1, null, '001', 'A', 'A'), (2, 1, '011.002', 'B', 'B'), (3, 1, '232', 'C', 'C'), (4, 1, 'asdf', 'asdf', 'asdf'), (5, null, 'asdf', 'asdf', 'asdf'), (14, 5, 'asdf', 'asdf', 'asdf'), (6, null, '23', 'asdf', 'qw'), (7, 6, 'asdf', 'asdf', 'asdf'), (8, 6, 'asdf', 'asdf', 'asdf'), (22, 6, 'asdf', 'asdf', 'asdf'), (9, null, 'asdf', 'asdf', 'asdfsad'), (20, 9, 'asdf', 'asdf', 'asdf'), (21, 9, 'asdf', 'asdf', 'asdf'), (10, null, 'asdf', 'asdf', 'asdf'), (11, 10, 'sadf', 'asdf', 'asdf'), (12, 10, 'asdf', 'asdf', 'asdfsf'), (13, 10, 'asdf', 'asdf', 'asdf');"
] | [
"DROP TABLE modules;"
] | [] | null | null |
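A sketch that nests children under their parents using `to_jsonb` plus a correlated aggregate; top-level modules are the rows with a NULL parent:

```sql
SELECT jsonb_agg(
         to_jsonb(p) || jsonb_build_object(
           'children',
           COALESCE((SELECT jsonb_agg(to_jsonb(c))
                     FROM modules c
                     WHERE c.id_parent_module = p.id_module),
                    '[]'::jsonb)
         )
       ) AS modules_tree
FROM modules p
WHERE p.id_parent_module IS NULL;
```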
58 | toxicology | In the toxicology database, we have a table named 'atom_edits' that records updates to the 'atom' table. Users can update the 'element' or 'molecule_id' of an atom. If a field is not updated, it retains a NULL value. Here's an example of three edits touching two separate atoms. Atom with ID 'TR000_1' received two updates: the first one is updating the 'element' field, the second one touches the 'molecule_id'. Atom with ID 'TR000_2' received one update that changes the 'element'. We need to merge this table such that in the resulting table there's one row per atom, giving the cumulative edits. | [
"SELECT atom_id, (ARRAY_REMOVE(ARRAY_AGG(element ORDER BY edit_id DESC), NULL))[1] AS element, (ARRAY_REMOVE(ARRAY_AGG(molecule_id ORDER BY edit_id DESC), NULL))[1] AS molecule_id FROM atom_edits GROUP BY atom_id;"
] | [] | [
"CREATE TABLE atom_edits (edit_id SERIAL PRIMARY KEY, atom_id TEXT, element TEXT, molecule_id TEXT); INSERT INTO atom_edits (atom_id, element, molecule_id) VALUES ('TR000_1', 'cl', NULL), ('TR000_1', NULL, 'TR001'), ('TR000_2', 'c', NULL);"
] | [
"DROP TABLE atom_edits;"
] | [] | null | null |
59 | debit_card_specializing | We are trying to bulk insert a large number of customer records into the `customers` table using an `INSERT` statement with an `ON CONFLICT` clause. The goal is to get the `CustomerID` back for all rows, whether they are already existing or not. The `customers` table has a composite unique constraint on `Segment` and `Currency`. We are encountering an error when trying to run the SQL through Django's cursor. The error message indicates that the `ON CONFLICT DO UPDATE` command cannot affect a row a second time due to duplicate constrained values in the `VALUES` list. We need to handle this situation to ensure that we can insert new records and retrieve the IDs of both new and existing records. | [
"INSERT INTO customers (customerid, segment, currency) VALUES (3, 'SME', 'EUR'), (1, 'KAM', 'CZK'), (3, 'SME', 'EUR') ON CONFLICT (customerid, segment, currency) DO UPDATE SET Currency = customers.Currency RETURNING CustomerID;"
] | [] | [
"ALTER TABLE customers\nADD CONSTRAINT customers_customerid_segment_currency_uk\nUNIQUE (customerid, segment, currency);"
] | [
"DROP TABLE customers;"
] | [] | null | null |
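A sketch of the usual workaround: de-duplicate the VALUES list before the upsert, so no constrained combination appears twice within one statement.

```sql
WITH input_rows (customerid, segment, currency) AS (
  VALUES (3, 'SME', 'EUR'), (1, 'KAM', 'CZK'), (3, 'SME', 'EUR')
)
INSERT INTO customers (customerid, segment, currency)
SELECT DISTINCT customerid, segment, currency
FROM input_rows
ON CONFLICT (customerid, segment, currency)
DO UPDATE SET currency = EXCLUDED.currency   -- no-op update so RETURNING covers existing rows
RETURNING customerid;
```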
60 | financial | In the financial database, there are two tables: 'client' and 'disp'. The 'disp' table contains a JSONB column named 'addresses' which stores address information for each client. I attempted to join the 'client' and 'disp' tables on the 'client_id' field and then use jsonb_array_elements to extract address details. However, I encountered an error 'cannot extract elements from a scalar' because some entries in the 'addresses' column are not arrays. I need to handle these cases properly to extract the 'PostCode' from the addresses JSONB column for a specific client with client_id = 12345. | [
"SELECT \n client.client_id, \n client.gender, \n disp.disp_id, \n address ->> 'PostCode' AS PostCode\nFROM client\nFULL JOIN disp ON (client.client_id = disp.client_id),\njsonb_array_elements(disp.addresses) AS address\nWHERE disp.client_id = 12345;"
] | [] | [
"ALTER TABLE disp \nADD COLUMN addresses jsonb;",
"INSERT INTO disp (disp_id, client_id, account_id, addresses) VALUES\n (324124, 32323432, 4342443141, '[{\"PostCode\":\"12345\"}]'),\n (43244241, 3455566, 645634, '[null]'),\n (42342436, 12345, 5346574, 'null');"
] | [
"\n DELETE FROM disp \n WHERE disp_id IN (324124, 43244241, 42342436);\n ",
"\n ALTER TABLE disp \n DROP COLUMN addresses;\n "
] | [] | null | null |
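A sketch that guards the set-returning call so scalar and null entries no longer raise "cannot extract elements from a scalar": substitute an empty array whenever `jsonb_typeof` says the value is not an array.

```sql
SELECT client.client_id,
       client.gender,
       disp.disp_id,
       address ->> 'PostCode' AS postcode
FROM client
JOIN disp ON client.client_id = disp.client_id
LEFT JOIN LATERAL jsonb_array_elements(
    CASE WHEN jsonb_typeof(disp.addresses) = 'array'
         THEN disp.addresses
         ELSE '[]'::jsonb END
  ) AS address ON true
WHERE disp.client_id = 12345;
```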
61 | financial | In the financial database, I want to update the 'amount' in the 'loan' table for a specific 'account_id' and 'date' if it exists, or insert a new record if it does not. However, I do not want the 'loan_id' to increment if an update occurs because it is an auto-incrementing SERIAL column. The 'loan_id' should only increment when a new record is inserted to maintain a sequential order without gaps. | [
"\nINSERT INTO loan (\n loan_id, \n account_id, \n date, \n amount, \n duration, \n payments, \n status\n)\nVALUES (\n DEFAULT, \n 2, \n '1996-04-29', \n 30276, \n 12, \n 2523.0, \n 'B'\n)\nON CONFLICT (loan_id, account_id, date)\nDO UPDATE\n SET amount = loan.amount + 1000;"
] | [] | [
"CREATE TABLE IF NOT EXISTS loan (loan_id SERIAL PRIMARY KEY, account_id int NOT NULL, date date NOT NULL, amount int NOT NULL, duration int NOT NULL, payments double NOT NULL, status text NOT NULL, UNIQUE(account_id, date)); INSERT INTO loan (loan_id, account_id, date, amount, duration, payments, status) VALUES (134411, 2, '1994-01-05', 80952, 24, 3373.0, 'A');",
"\n DELETE FROM loan t1\n USING loan t2\n WHERE t1.account_id = t2.account_id\n AND t1.date = t2.date\n AND t1.loan_id > t2.loan_id;\n ",
"ALTER TABLE loan\n ADD CONSTRAINT loan_accountid_date_uk\n UNIQUE (account_id, date);"
] | [
"DROP TABLE IF EXISTS loan;"
] | [] | null | null |
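A sketch of a sequence-friendly pattern: attempt the UPDATE first in a data-modifying CTE, and only INSERT (which is what consumes a `loan_id`) when no row was updated. Concurrency races are out of scope for this sketch.

```sql
WITH upd AS (
  UPDATE loan
  SET amount = amount + 1000
  WHERE account_id = 2 AND date = '1996-04-29'
  RETURNING loan_id
)
INSERT INTO loan (account_id, date, amount, duration, payments, status)
SELECT 2, '1996-04-29', 30276, 12, 2523.0, 'B'
WHERE NOT EXISTS (SELECT 1 FROM upd);  -- insert only if the update touched nothing
```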
62 | card_games | In our card_games database, we have a large table named cards which contains detailed information about each card. We also have two smaller tables, norm1 and norm2, which contain a subset of the cards based on certain criteria. The goal is to delete rows from the cards table where the combination of (uuid, setCode, rarity, manaCost) does not exist in either norm1 or norm2. The current query uses two separate NOT IN clauses, which is both verbose and potentially inefficient. We need to rewrite this query to make it more concise and performant. | [
"DELETE FROM cards WHERE (uuid, setCode, rarity, manaCost) NOT IN (SELECT uuid, setCode, rarity, manaCost FROM norm1 WHERE uuid IS NOT NULL AND setCode IS NOT NULL AND rarity IS NOT NULL AND manaCost IS NOT NULL) AND (uuid, setCode, rarity, manaCost) NOT IN (SELECT uuid, setCode, rarity, manaCost FROM norm2 WHERE uuid IS NOT NULL AND setCode IS NOT NULL AND rarity IS NOT NULL AND manaCost IS NOT NULL);"
] | [] | [
"\nCREATE TABLE norm1 AS SELECT uuid, setCode, rarity, manaCost FROM cards WHERE id % 2 = 0; CREATE TABLE norm2 AS SELECT uuid, setCode, rarity, manaCost FROM cards WHERE id % 3 = 0;\n"
] | [
"\nDROP TABLE norm1; DROP TABLE norm2;\n"
] | [] | null | null |
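A sketch using NOT EXISTS with NULL-safe row comparison, which folds the two anti-joins into readable predicates and avoids the NULL pitfalls of NOT IN:

```sql
DELETE FROM cards c
WHERE NOT EXISTS (
        SELECT 1 FROM norm1 n
        WHERE (n.uuid, n.setCode, n.rarity, n.manaCost)
              IS NOT DISTINCT FROM (c.uuid, c.setCode, c.rarity, c.manaCost))
  AND NOT EXISTS (
        SELECT 1 FROM norm2 n
        WHERE (n.uuid, n.setCode, n.rarity, n.manaCost)
              IS NOT DISTINCT FROM (c.uuid, c.setCode, c.rarity, c.manaCost));
```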
63 | financial | In the financial database, I want to apply a forward fill function to all nullable columns of a table. The forward fill function should be applied to each column dynamically, given the table name, an ID column, and a row number column. For example, using the 'trans' table, I want to apply the forward fill to all nullable columns, partitioned by 'account_id' and ordered by 'date'. The function should handle any table with nullable columns and apply the forward fill accordingly. However, my initial attempt at writing the function resulted in a syntax error. I need a corrected version of the function that works for any table with nullable columns. | [
"CREATE OR REPLACE FUNCTION f_gap_fill_update(tbl text, id text, row_num text) RETURNS void LANGUAGE plpgsql AS $func$ DECLARE tmp text[]; col text; BEGIN select array ( select column_name from information_schema.columns c where table_name = tbl ) into tmp; foreach col in array tmp loop execute 'update '||tbl||' set '||col||' = gapfill('||col||') OVER w AS '||col||' where '||tbl||'.row_num = '||col||'.row_num window w as (PARTITION BY '||id||' ORDER BY '||row_num||') returning *;'; end loop; end $func$;"
] | [] | [
"CREATE OR REPLACE FUNCTION gap_fill_internal(s anyelement, v anyelement) RETURNS anyelement LANGUAGE plpgsql AS $func$ BEGIN RETURN COALESCE(v, s); END $func$; CREATE AGGREGATE gap_fill(anyelement) ( SFUNC = gap_fill_internal, STYPE = anyelement );"
] | [
""
] | [] | null | null |
64 | financial | In the financial database, there is a table named 'card' that records details of issued cards. Each card is identified by a 'card_id' and is associated with a 'disp_id', along with other details like 'type' and 'issued'. Let's say we want to change the order of a specific 'disp_id' within the same 'type'. For instance, we want to set the 'disp_id' of a card with 'disp_id' = 41 to 1. This change should reorder the 'disp_id' values of all affected cards within the same 'type'. The expected result is that the card with 'disp_id' = 41 should now have 'disp_id' = 1, and the other cards' 'disp_id' values should be incremented accordingly. | [
"UPDATE card SET disp_id = 1 WHERE disp_id = 41;"
] | [] | [
""
] | [
""
] | [] | null | null |
65 | financial | I have created the following custom SQL function on a PostgreSQL 16.1 server to generate a series of monthly dates between two given dates for analyzing transaction trends over time:\nCREATE OR REPLACE FUNCTION public.generate_series_monthly(a date, b date)\nRETURNS SETOF date LANGUAGE SQL IMMUTABLE PARALLEL SAFE ROWS 12 AS $function$\nselect generate_series(date_trunc('month', a), date_trunc('month', b), '1 month')\n$function$;\nSpecifically, I have added the row estimate parameter, and as expected, I am seeing this estimate in some simple queries:\nexplain select generate_series_monthly('2023-01-01', '2023-12-01');\nHowever, in some uses in queries, I see it falling back to the default of 1000:\nexplain select * from generate_series_monthly('2023-01-01', '2023-12-01');\nI would expect this second query to also use the 12 row estimate. Why is it resorting to 1000? | [
"CREATE OR REPLACE FUNCTION public.generate_series_monthly(a date, b date) RETURNS SETOF date LANGUAGE SQL IMMUTABLE PARALLEL SAFE ROWS 10 AS $function$ select generate_series(date_trunc('month', a), date_trunc('month', b), '1 month') $function$; EXPLAIN SELECT generate_series_monthly('2024-01-01', '2024-05-01'); EXPLAIN SELECT * FROM generate_series_monthly('2024-01-01', '2024-05-01');"
] | [] | [
""
] | [
""
] | [] | null | null |
66 | european_football_2 | In the context of european_football_2 database whose match table contains columns such as season, date, home_team_goal, away_team_goal, etc. Now, suppose you want to treat any match ending in a draw (home_team_goal = away_team_goal) as if an invoice were being issued (similar to setting Invoiced = 1). Between two such draws, you might have several other matches that do not end in a draw (equivalent to Invoiced = 0), and for each of those matches, you want to treat the total goals scored (i.e., home_team_goal + away_team_goal) like a running amount you accumulate. Finally, you only want to keep the draw rows, and each of those rows should carry the sum of total goals scored since the last draw. | [
"SELECT \n m.id,\n m.date,\n CASE WHEN m.home_team_goal = m.away_team_goal THEN 1 ELSE 0 END AS invoiced,\n SUM(m.home_team_goal + m.away_team_goal)\n OVER (PARTITION BY (CASE WHEN m.home_team_goal = m.away_team_goal THEN 1 ELSE 0 END)\n ORDER BY m.id, m.date) AS amount\nFROM match AS m\nORDER BY m.id, m.date;"
] | [] | [
""
] | [
""
] | [] | null | null |
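A sketch under one reading of the requirement: number each row by how many draws precede it, so every non-draw stretch plus its closing draw share a group; the per-group sum then gives the goals accumulated since the previous draw (the draw's own goals included here).

```sql
WITH g AS (
  SELECT m.id, m.date, m.home_team_goal, m.away_team_goal,
         COUNT(*) FILTER (WHERE m.home_team_goal = m.away_team_goal)
           OVER (ORDER BY m.date, m.id
                 ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS grp
  FROM match m
)
SELECT id, date,
       SUM(home_team_goal + away_team_goal) OVER (PARTITION BY grp) AS amount
FROM g
WHERE home_team_goal = away_team_goal   -- keep only the draw rows
ORDER BY id, date;
```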
67 | debit_card_specializing | We have a table called transactions_1k that contains transaction details for multiple customers across different gas stations. Each row in this table has:
1. transaction date
2. transaction time
3. customerid (the ID of the customer)
4. gasstationid (the ID of the gas station)
5. productid (the product involved)
6. amount (the quantity, e.g., liters purchased)
7. price (the cost)
We want to filter these transactions under the following rules, per customer:
1. Only the last transaction at each gas station should be considered.
2. If the customer has any transaction where amount < 10 (which indicates a potential issue), display the first gas station on which that issue occurred.
3. If the customer has no transactions with amount < 10, then display the last gas station on which the customer had a transaction with amount >= 10.
Given some sample data, we expect the final output to show only:
1. The last transaction for each gas station where amount >= 10.
2. The first transaction for each gas station where amount < 10.
We attempted the following SQL query in PostgreSQL to achieve this, but it does not return the desired results. Instead, it only picks the gas station with the maximum gasstationid for each customer and does not correctly determine the earliest occurrence of amount < 10 chronologically. In other words, this query fails to implement "the last transaction per gas station" and "the first station where amount < 10" correctly. | [
"WITH DataSource AS (\n SELECT\n *,\n MIN(CASE WHEN amount < 10 THEN gasstationid END) \n OVER (PARTITION BY customerid) AS first_issue_gasstation,\n ROW_NUMBER() OVER (PARTITION BY customerid ORDER BY gasstationid DESC) AS gasstation_id\n FROM transactions_1k\n WHERE gasstationid = (\n SELECT MAX(gasstationid)\n FROM transactions_1k\n WHERE customerid = transactions_1k.customerid\n )\n)\nSELECT \n customerid,\n transactionid,\n gasstationid,\n amount\nFROM DataSource\nWHERE\n (first_issue_gasstation IS NULL AND gasstation_id = 1)\n OR (first_issue_gasstation = gasstationid);"
] | [] | [
""
] | [
""
] | [] | null | null |
68 | superhero | In the superhero database, we have a directed acyclic graph representing the lineage of superheroes. Each superhero has a unique identifier and a parent identifier, which points to their predecessor in the lineage. Given two superheroes, 'Superhero A' and 'Superhero B', we need to find their common ancestor in the lineage. The provided query is inefficient as it traverses the entire lineage until it finds the root, which is not optimal when the common segment of the lineage is large. We need to find an efficient way to determine the common ancestor with a complexity of O(A+B) where A and B are the number of nodes in the lineages of 'Superhero A' and 'Superhero B', respectively. | [
"WITH RECURSIVE linked_list(id, parent_id) AS (SELECT id, parent_id FROM lineage WHERE id = 1001 OR id = 1201 UNION ALL SELECT g.id, g.parent_id FROM lineage g INNER JOIN linked_list ll ON ll.parent_id = g.id) SELECT string_agg(id::TEXT, ',') AS ids, parent_id FROM linked_list GROUP BY parent_id HAVING COUNT(DISTINCT id) > 1;"
] | [] | [
"CREATE TABLE lineage (id INT PRIMARY KEY, parent_id INT);",
"INSERT INTO lineage (id, parent_id) SELECT i, CASE WHEN i = 1 THEN NULL ELSE i - 1 END FROM generate_series(1, 1000) AS i;",
"INSERT INTO lineage (id, parent_id) SELECT 1000 + i, 1000 + i - 1 FROM generate_series(1, 200) AS i;",
"INSERT INTO lineage (id, parent_id) SELECT 1200 + i, 1000 + i - 1 FROM generate_series(1, 200) AS i;"
] | [
"DROP TABLE lineage;"
] | [] | null | null |
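A sketch that walks both lineages once each, depth-tagged, and picks the shallowest shared node, which is O(A+B) over the two paths; the ids 1001 and 1201 are the example starting points from this row.

```sql
WITH RECURSIVE path_a AS (
  SELECT id, parent_id, 0 AS depth FROM lineage WHERE id = 1001
  UNION ALL
  SELECT l.id, l.parent_id, a.depth + 1
  FROM lineage l JOIN path_a a ON a.parent_id = l.id
),
path_b AS (
  SELECT id, parent_id, 0 AS depth FROM lineage WHERE id = 1201
  UNION ALL
  SELECT l.id, l.parent_id, b.depth + 1
  FROM lineage l JOIN path_b b ON b.parent_id = l.id
)
SELECT a.id AS common_ancestor
FROM path_a a
JOIN path_b b ON a.id = b.id
ORDER BY a.depth     -- the first shared node along A's path upward
LIMIT 1;
```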
69 | card_games | In a digital card trading platform, users perform various actions such as `LOGIN`, `SEARCH`, and `BUY`. An abandoned `SEARCH` action is defined as when a user `LOGIN`s, performs one or more `SEARCH` actions, and does not perform a `BUY` action before the next `LOGIN`. Given a table `user_actions` that records `user_id`, `action`, and `action_time`, determine all abandoned `SEARCH` actions. | [
"SELECT c1.user_id, COUNT(*) FROM user_actions c1 LEFT JOIN (SELECT user_id, action, action_time FROM user_actions WHERE action = 'LOGIN') c2 ON c1.user_id = c2.user_id AND c2.action_time > c1.action_time LEFT JOIN (SELECT user_id, action, action_time FROM user_actions WHERE action = 'BUY') c3 ON c1.user_id = c3.user_id AND c3.action_time > c1.action_time AND c3.action_time < c2.action_time WHERE c1.action = 'SEARCH' AND c2.user_id IS NOT NULL AND c3.user_id IS NULL GROUP BY 1"
] | [] | [
"CREATE TABLE user_actions(user_id VARCHAR(1) NOT NULL, action VARCHAR(6) NOT NULL, action_time DATE NOT NULL);",
"INSERT INTO user_actions(user_id, action, action_time) VALUES ('A', 'LOGIN', '2023-05-01'), ('A', 'SEARCH', '2023-05-02'), ('A', 'SEARCH', '2023-05-03'), ('A', 'BUY', '2023-05-04'), ('B', 'LOGIN', '2023-05-01'), ('B', 'SEARCH', '2023-05-02'), ('B', 'SEARCH', '2023-05-03'), ('B', 'LOGIN', '2023-05-04'), ('B', 'SEARCH', '2023-05-05')"
] | [
"DROP TABLE user_actions"
] | [] | null | null |
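A sketch that sessionizes by a running count of LOGINs per user, then keeps SEARCH rows from sessions that contain no BUY:

```sql
WITH sessions AS (
  SELECT user_id, action, action_time,
         -- each LOGIN starts a new session number
         COUNT(*) FILTER (WHERE action = 'LOGIN')
           OVER (PARTITION BY user_id ORDER BY action_time) AS session_no
  FROM user_actions
)
SELECT s.user_id, s.action, s.action_time
FROM sessions s
WHERE s.action = 'SEARCH'
  AND NOT EXISTS (
        SELECT 1 FROM sessions b
        WHERE b.user_id = s.user_id
          AND b.session_no = s.session_no
          AND b.action = 'BUY');
```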
70 | card_games | In the card_games database, there is a table named 'cards' which contains various details about each card, including a unique identifier 'id' and the card's name 'name'. Another table named 'decks' stores information about different decks, where each deck has a unique identifier 'id' and an array 'card_order' that lists the 'id's of the cards in the deck in the order they should be played. When a user selects a deck, they want to see the cards in the order they are listed in the 'card_order' array. However, the current SQL query does not preserve the order of the cards as specified in the 'card_order' array. The user's current SQL query is provided below and it does not maintain the order of the cards. | [
"SELECT c.id, c.name FROM cards c WHERE c.id IN (SELECT unnest(card_order) FROM decks WHERE id = 1);"
] | [] | [
"CREATE TABLE decks (id bigint PRIMARY KEY, card_order bigint[]);",
"INSERT INTO decks (id, card_order) VALUES (1, ARRAY[3, 6, 1]), (2, ARRAY[5, 2, 4]);"
] | [
"DROP TABLE decks;"
] | [] | null | null |
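A sketch using `unnest ... WITH ORDINALITY`, which carries each card's position in the array so the join result can be ordered back into deck order:

```sql
SELECT c.id, c.name
FROM decks d
CROSS JOIN LATERAL unnest(d.card_order) WITH ORDINALITY AS u(card_id, ord)
JOIN cards c ON c.id = u.card_id
WHERE d.id = 1
ORDER BY u.ord;
```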
71 | card_games | In the context of the card_games database, we have two tables: 'card_prices' and 'order_cards'. The 'card_prices' table records the price of each card at different start dates, and the 'order_cards' table records the cards ordered by customers on specific dates. We need to join these two tables to get the price of each card at the time it was ordered. However, the initial attempt to join the tables resulted in duplicate records for some orders. Here are the tables and the problematic query:\nTable 'card_prices':\n| start_date | card_id | price |\n|------------|---------|-------|\n| 2023-04-01 | 1 | 10.0 |\n| 2023-04-15 | 1 | 20.0 |\n| 2023-04-01 | 2 | 20.0 |\nTable 'order_cards':\n| order_date | order_id | card_id |\n|------------|----------|---------|\n| 2023-04-01 | 10001 | 1 |\n| 2023-04-01 | 10001 | 2 |\n| 2023-04-02 | 10002 | 1 |\n| 2023-04-02 | 10002 | 2 |\n| 2023-04-16 | 10003 | 1 |\n| 2023-04-16 | 10003 | 2 |\n\nThe desired result is:\n| order_date | order_id | card_id | price |\n|------------|----------|---------|-------|\n| 2023-04-01 | 10001 | 1 | 10.0 |\n| 2023-04-01 | 10001 | 2 | 20.0 |\n| 2023-04-02 | 10002 | 1 | 10.0 |\n| 2023-04-02 | 10002 | 2 | 20.0 |\n| 2023-04-16 | 10003 | 1 | 20.0 |\n| 2023-04-16 | 10003 | 2 | 20.0 |\nHowever, the initial attempt resulted in duplicate records for some orders.\n | [
"SELECT ord.order_date, ord.order_id, ord.card_id, prd.price FROM order_cards ord LEFT JOIN (SELECT * FROM card_prices ORDER BY start_date ASC) AS prd ON ord.card_id = prd.card_id AND ord.order_date >= prd.start_date"
] | [] | [
"CREATE TABLE card_prices (start_date DATE, card_id BIGINT, price NUMERIC);",
"INSERT INTO card_prices (start_date, card_id, price) VALUES ('2023-04-01', 1, 10.0), ('2023-04-15', 1, 20.0), ('2023-04-01', 2, 20.0);",
"CREATE TABLE order_cards (order_date DATE, order_id BIGINT, card_id BIGINT);",
"INSERT INTO order_cards (order_date, order_id, card_id) VALUES ('2023-04-01', 10001, 1), ('2023-04-01', 10001, 2), ('2023-04-02', 10002, 1), ('2023-04-02', 10002, 2), ('2023-04-16', 10003, 1), ('2023-04-16', 10003, 2);"
] | [
"DROP TABLE card_prices;",
"DROP TABLE order_cards;"
] | [] | null | null |
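A sketch with a LATERAL "top 1 price as of the order date" lookup, which eliminates the duplicates by construction:

```sql
SELECT o.order_date, o.order_id, o.card_id, p.price
FROM order_cards o
LEFT JOIN LATERAL (
  SELECT cp.price
  FROM card_prices cp
  WHERE cp.card_id = o.card_id
    AND cp.start_date <= o.order_date
  ORDER BY cp.start_date DESC   -- latest price effective at order time
  LIMIT 1
) p ON true
ORDER BY o.order_date, o.order_id, o.card_id;
```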
72 | european_football_2 | In the database 'european_football_2', there is a table named 'player_stats' that records the performance statistics of football players across different matches. Each row in the table represents a player's performance in a specific match. The table has two columns, 'stats_keys' and 'stats_values', which store the performance metrics and their corresponding values as comma-separated strings. For example, 'stats_keys' might contain 'goals,assists,yellow_cards' and 'stats_values' might contain '2,1,0'. The task is to transform this table into a format where each performance metric is a separate column, with the corresponding values filled in for each player's match performance. | [
"select player_id, stats_keys, stats_values from player_stats"
] | [] | [
"CREATE TABLE player_stats (player_id INT, stats_keys TEXT, stats_values TEXT);",
"INSERT INTO player_stats (player_id, stats_keys, stats_values) VALUES (1, 'goals,assists,yellow_cards', '2,1,0'), (2, 'assists,yellow_cards', '0,1'), (3, 'goals,yellow_cards', '1,0'), (4, 'assists,yellow_cards,red_cards', '2,1,0');"
] | [
"DROP TABLE player_stats;"
] | [] | null | null |
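A sketch that zips the two comma-separated lists with multi-argument `unnest` and pivots with FILTERed aggregates; the column list is fixed to the metrics seen in the sample data.

```sql
SELECT ps.player_id,
       MAX(kv.v) FILTER (WHERE kv.k = 'goals')        AS goals,
       MAX(kv.v) FILTER (WHERE kv.k = 'assists')      AS assists,
       MAX(kv.v) FILTER (WHERE kv.k = 'yellow_cards') AS yellow_cards,
       MAX(kv.v) FILTER (WHERE kv.k = 'red_cards')    AS red_cards
FROM player_stats ps
CROSS JOIN LATERAL unnest(string_to_array(ps.stats_keys, ','),
                          string_to_array(ps.stats_values, ',')) AS kv(k, v)
GROUP BY ps.player_id;
```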
73 | european_football_2 | In the 'european_football_2' database, there is a table named 'teams_config' which holds information about various football teams. Each team has a 'configurations' column of type jsonb that stores an array of objects representing different team settings. Each object in the array has an 'id', 'name', and 'settings'. For example, one row in the 'teams_config' table might have the following 'configurations':
[
{
"id": 100,
"name": "testOne",
"settings": "settingOne"
},
{
"id": 101,
"name": "testTwo",
"settings": "settingTwo"
  }
]
The goal is to remove the object whose "id" is 101 from the 'configurations' array. | [
"UPDATE teams_config SET configurations = jsonb_set(configurations, '{settings}', (configurations->'id') - (SELECT DISTINCT position - 1 FROM teams_config, jsonb_array_elements(configurations) WITH ORDINALITY arr(elem, position) WHERE elem->>'id' = '101')::int);"
] | [] | [
"CREATE TABLE teams_config (configurations jsonb);",
"INSERT INTO teams_config VALUES ('[{\"id\": 100, \"name\": \"testOne\", \"settings\": \"settingOne\"}, {\"id\": 101, \"name\": \"testTwo\", \"settings\": \"settingTwo\"}]');"
] | [
"DROP TABLE teams_config"
] | [] | null | null |
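Assuming the goal stated above (dropping the object with id 101), a sketch that rebuilds the array without it instead of fighting jsonb_set:

```sql
UPDATE teams_config
SET configurations = (
  SELECT COALESCE(jsonb_agg(elem), '[]'::jsonb)
  FROM jsonb_array_elements(configurations) AS elem
  WHERE (elem ->> 'id')::int <> 101
);
```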
74 | formula_1 | I have a table race_dates which stores the begin_date and end_date of races, e.g. '2022-01-03' and '2022-03-04', is there any neat way to calculate ONLY the completed full calendar months between these dates? Some examples with their requested outputs: '2022-01-03' and '2022-03-04' full calendar months = 1 since only February was a full calendar month between this timespan. '2022-01-01' and '2022-05-30' full calendar months = 4 since May has 31 days total. '2022-01-31' and '2022-05-31' full calendar months = 3 since the month of May is not completed. I tried subtracting the dates but it gives me the days difference between these dates. I also tried the function AGE() but it is based also in the days difference, since it is using days to calculate years months etc. | [
"SELECT begin_date, end_date, age(CASE WHEN end_date = date_trunc('month', end_date) + interval '1 month - 1 day' THEN end_date + interval '1 day' ELSE date_trunc('month', end_date) END::date, CASE WHEN begin_date = date_trunc('month', begin_date) THEN begin_date ELSE date_trunc('month', begin_date) + interval '1 month' END::date) AS calculated_months FROM race_dates;"
] | [] | [
"CREATE TABLE race_dates (begin_date DATE NOT NULL, end_date DATE NOT NULL)",
"INSERT INTO race_dates (begin_date, end_date) VALUES ('2022-01-03', '2022-03-04'), ('2022-01-01', '2022-05-30'), ('2022-01-31', '2022-05-31'), ('2021-11-15', '2022-02-10'), ('2021-12-01', '2022-05-31');"
] | [
"DROP TABLE race_dates"
] | [] | null | null |
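Reading the examples, the begin month counts only when begin_date is the 1st, and the end month never counts (even '2022-05-31' leaves May incomplete). A sketch under that reading, counting whole months between the two boundaries:

```sql
SELECT begin_date, end_date,
       GREATEST(0,
         (EXTRACT(year  FROM age(fin, ini)) * 12
        + EXTRACT(month FROM age(fin, ini)))::int) AS full_months
FROM race_dates
CROSS JOIN LATERAL (
  SELECT CASE WHEN begin_date = date_trunc('month', begin_date)::date
              THEN begin_date
              ELSE (date_trunc('month', begin_date) + interval '1 month')::date
         END AS ini,                                   -- first fully covered month
         date_trunc('month', end_date)::date AS fin    -- end month never counts
) b;
```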
75 | student_club | In the student_club database, I am trying to insert an attendance record that tracks when a member attends an event. The goal is to ensure there are no duplicate entries for the same member (link_to_member) attending the same event (link_to_event). If an attendance record for the member and event already exists, the date column should be updated to reflect the most recent attendance timestamp. If no such record exists, a new record should be created. I have tried using the ON CONFLICT clause with a WHERE condition to achieve this, but it doesn't seem to work.
Here is one of the many permutations I've tried:
sql
INSERT INTO new_attendance (link_to_event, link_to_member, date)
VALUES ('reciRZdAqNIKuMC96', 'recL94zpn6Xh6kQii', NOW())
ON CONFLICT
WHERE link_to_member='recL94zpn6Xh6kQii' DO NOTHING
The link_to_member column does not have any constraints, so the simpler syntax:
sql
ON CONFLICT (link_to_member) DO NOTHING
throws database errors. My hope is this is a simple syntax issue. | [
"\n INSERT INTO new_attendance (link_to_event, link_to_member, date)\n VALUES ('reciRZdAqNIKuMC96', 'recL94zpn6Xh6kQii', NOW())\n ON CONFLICT\n WHERE link_to_member='recL94zpn6Xh6kQii' DO NOTHING;\n "
] | [] | [
"\n DROP TABLE IF EXISTS new_attendance;\n ",
"\n CREATE TABLE new_attendance AS\n SELECT DISTINCT link_to_event, link_to_member, NOW() AS date\n FROM attendance;\n ",
"\n ALTER TABLE new_attendance\n ADD CONSTRAINT unique_event_member UNIQUE (link_to_event, link_to_member);\n "
] | [] | [] | null | null |
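Given the unique constraint added in preprocessing on (link_to_event, link_to_member), a sketch of the working syntax: ON CONFLICT takes the conflict target's columns, not a bare WHERE clause.

```sql
INSERT INTO new_attendance (link_to_event, link_to_member, date)
VALUES ('reciRZdAqNIKuMC96', 'recL94zpn6Xh6kQii', NOW())
ON CONFLICT (link_to_event, link_to_member)
DO UPDATE SET date = EXCLUDED.date;   -- refresh timestamp on repeat attendance
```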
76 | financial | I'm migrating from Oracle to PostgreSQL. In Oracle, I used the following call to acquire a lock with a timeout: `lkstat := DBMS_LOCK.REQUEST(lkhndl, DBMS_LOCK.X_MODE, lktimeout, true);`. This function tries to acquire the lock `lkhndl` and returns 1 if it fails to get it after `lktimeout` seconds. In PostgreSQL, I tried using `pg_advisory_xact_lock(lkhndl);`, but it seems to wait indefinitely for the lock. I need a way to implement a timeout version of lock acquiring in PostgreSQL named pg_try_advisory_lock_with_timeout. The function pg_try_advisory_lock_with_timeout(key bigint) is designed to attempt to acquire a PostgreSQL advisory lock with a timeout of 1 second. If the lock is unavailable due to contention or deadlock detection, it will return false instead of waiting indefinitely. | [
"\n pg_advisory_xact_lock(lkhndl);\n "
] | [] | [
"\n DROP FUNCTION IF EXISTS pg_try_advisory_lock_with_timeout(bigint);\n "
] | [] | [] | null | null |
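A sketch of the requested function, assuming `lock_timeout` plus exception handling is acceptable; `lock_not_available` and `deadlock_detected` are standard PL/pgSQL condition names.

```sql
CREATE OR REPLACE FUNCTION pg_try_advisory_lock_with_timeout(key bigint)
RETURNS boolean
LANGUAGE plpgsql
AS $$
BEGIN
  -- Wait at most 1 second for the advisory lock.
  SET LOCAL lock_timeout = '1s';
  PERFORM pg_advisory_lock(key);
  RETURN true;
EXCEPTION
  WHEN lock_not_available OR deadlock_detected THEN
    RETURN false;
END;
$$;
```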
77 | student_club | I'm trying to rank club members based on the hours they have attended at events, rounded to the nearest 10. I need to produce a descending ranking of members by total hours attended, including a column with the rank using the `RANK()` window function, and sort the result by the rank. However, my rounding logic seems to be incorrect, as it produces different results compared to the expected output. | [
"\n SELECT\n link_to_member,\n CASE\n WHEN (SUBSTRING(ROUND(SUM(hours)::NUMERIC, 0)::TEXT FROM '.{1}$') IN ('5', '6', '7', '8', '9', '0')) \n THEN CEIL(SUM(hours) / 10) * 10\n ELSE FLOOR(SUM(hours) / 10) * 10\n END AS rounded_hours,\n RANK() OVER (ORDER BY \n CASE\n WHEN (SUBSTRING(ROUND(SUM(hours)::NUMERIC, 0)::TEXT FROM '.{1}$') IN ('5', '6', '7', '8', '9', '0')) \n THEN CEIL(SUM(hours) / 10) * 10\n ELSE FLOOR(SUM(hours) / 10) * 10\n END DESC\n ) AS rank\n FROM attendance\n GROUP BY link_to_member\n ORDER BY rank, link_to_member; \n "
] | [] | [
"\n ALTER TABLE attendance\n ADD COLUMN hours NUMERIC;\n ",
"\n TRUNCATE TABLE attendance;\n ",
"\n INSERT INTO attendance (link_to_event, link_to_member, hours)\n VALUES \n ('event_1', 'member_1', 64.5),\n ('event_2', 'member_1', 60.0),\n ('event_2', 'member_2', 210.5),\n ('event_3', 'member_3', 237.6);\n "
] | [] | [] | null | null |
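The string surgery is unnecessary: numeric ROUND already rounds halves away from zero, so nearest-10 is round(x / 10) * 10. A sketch:

```sql
SELECT link_to_member,
       ROUND(SUM(hours) / 10) * 10 AS rounded_hours,
       RANK() OVER (ORDER BY ROUND(SUM(hours) / 10) * 10 DESC) AS rank
FROM attendance
GROUP BY link_to_member
ORDER BY rank, link_to_member;
```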
78 | financial | I need to create an index named ix_account on the 'account' table for the columns 'district_id', 'frequency', and 'date'. I want to ensure that the index does not already exist before attempting to create it. How can I check for the existence of this index? Return True if the index exists. Otherwise return False. | [
"\n CREATE INDEX ix_account ON account USING btree (district_id, frequency, date); \n "
] | [] | [
"\n CREATE INDEX ix_account ON account USING btree (district_id, frequency, date); \n "
] | [] | [] | null | null |
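A sketch of the existence check against the pg_indexes catalog view, with the schema assumed to be 'public':

```sql
SELECT EXISTS (
  SELECT 1
  FROM pg_indexes
  WHERE schemaname = 'public'
    AND tablename  = 'account'
    AND indexname  = 'ix_account'
) AS index_exists;
```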
79 | european_football_2 | I am trying to create a view that counts the records where home team goal is 2 in a specific season. I have a function `findteam(text)` that returns a float representing the count for a given season. However, when I try to use this function in my view, I encounter an error stating 'cannot change data type of view column `team_count` from integer to double precision'. I am new to SQL and do not understand why this is happening or how to fix it. | [
"\n create or replace view findcount(season, team_count) as\n select\n season,\n findteam(season) as team_count\n from (\n select distinct season\n from match\n where season >= '2008/2009' \n ) seasons;\n "
] | [] | [
"\n DROP VIEW IF EXISTS findcount;\n DROP FUNCTION IF EXISTS findteam;\n ",
"\n create or replace function findteam(text) returns float as $$\n select cast(count(*) as float)\n from match m\n where m.home_team_goal = 2 and m.season = $1;\n $$ language sql;\n ",
"\n CREATE VIEW findcount AS\n SELECT season, CAST(10 AS INTEGER) AS team_count\n from (\n select distinct season\n from match\n where season >= '2008/2009' \n ) seasons;\n "
] | [] | [] | null | null |
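CREATE OR REPLACE VIEW cannot change an existing column's type. A sketch of one fix: drop the placeholder view and recreate it, casting the function result back to integer (keeping float consistently would also work).

```sql
DROP VIEW IF EXISTS findcount;
CREATE VIEW findcount(season, team_count) AS
SELECT season,
       findteam(season)::int AS team_count
FROM (SELECT DISTINCT season
      FROM match
      WHERE season >= '2008/2009') seasons;
```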
80 | codebase_community | In the context of the 'codebase_community' database, a user has a table named 'posts' containing various posts made by users. Each post has a 'tags' column that lists the tags associated with the post. Specifically, the user is interested in identifying the number of posts that include the keywords 'bayesian' or 'distributions' for each post type. The user attempted to implement this in PostgreSQL but encountered errors in his SQL query. | [
"\n select posttypeid\n case when tags like ('%bayesian%','%distributions%') \n then 1 else 0 end as keyword_count\n from posts\n "
] | [] | [
"\n ALTER TABLE posts RENAME TO posts_backup;\n ",
"\n CREATE TABLE posts (\n id INT PRIMARY KEY,\n posttypeid INT,\n tags TEXT\n );\n ",
"\n INSERT INTO posts (id, posttypeid, tags)\n VALUES \n (1, 1, '<bayesian><prior><elicitation>'),\n (2, 1, '<distributions><normality>'),\n (3, 1, '<software><open-source>'),\n (4, 2, '<distributions>'),\n (5, 2, '<book><code>');\n ",
"\n DROP TABLE IF EXISTS posts_backup;\n "
] | [] | [] | null | null |
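LIKE takes a single pattern, and the attempt is also missing a comma and a GROUP BY. A sketch using a FILTERed count per post type:

```sql
SELECT posttypeid,
       COUNT(*) FILTER (WHERE tags LIKE '%bayesian%'
                           OR tags LIKE '%distributions%') AS keyword_count
FROM posts
GROUP BY posttypeid;
```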
81 | debit_card_specializing | I have a table of transactions for multiple customers, where each transaction has a unique transaction number, along with an amount, a type, and a transaction record. Some transactions for a single customerid share the same combination of these attributes. I want to update the first_transaction column with the transaction number of the earliest transaction for each unique combination of attributes within each customer. My current method uses a LATERAL JOIN but is extremely slow on my small server. I process one customer at a time and commit after each iteration, but the query remains inefficient. How can I optimize this process? | [
"SELECT a.customerid, a.transaction, (SELECT b.transaction FROM transaction_info b WHERE b.customerid = a.customerid AND b.amount = a.amount AND b.type = a.type ORDER BY b.transaction LIMIT 1) AS first_transaction, a.amount, a.type, a.transactionid FROM transaction_info a ORDER BY a.customerid, a.transaction"
] | [] | [
"\nCREATE TABLE transaction_info (\n customerid int,\n transaction int,\n first_transaction varchar(10),\n amount numeric,\n type numeric,\n transactionid text\n);\nINSERT INTO transaction_info (customerid, transaction, first_transaction, amount, type, transactionid) VALUES\n(1, 1, 'na', 65250.78, 700000.52, '01010000206A0000000000F0C02E458A4400000000F03F'),\n(1, 2, 'na', 65250.78, 700000.52, '01010000206A0000000000F0C02E458A4400000000F03F'),\n(1, 3, 'na', 65250.78, 700000.52, '01010000206A0000000000F0C02E458A4400000000F03F'),\n(1, 4, 'na', 65999.00, 700555.00, '01010000455A000000000010C03F478A4400000010F03F'),\n(1, 5, 'na', 65999.00, 700555.00, '01010000455A000000000010C03F478A4400000010F03F'),\n(1, 6, 'na', 65999.00, 700555.00, '01010000455A000000000010C03F478A4400000010F03F'); \n"
] | [
"\nDROP TABLE test;\n"
] | [] | null | null |
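A sketch that replaces the per-row correlated subquery with a single window MIN over each (customerid, amount, type) combination, one pass over the table:

```sql
SELECT customerid,
       transaction,
       MIN(transaction) OVER (PARTITION BY customerid, amount, type) AS first_transaction,
       amount, type, transactionid
FROM transaction_info
ORDER BY customerid, transaction;
```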
82 | toxicology | In the toxicology database, I have two tables, 'bond' and 'molecule'. The 'bond' table contains information about bonds within molecules, including a foreign key 'molecule_id' that references the 'molecule' table. I need to construct a query that selects count(*), molecule_id, and the most recent update timestamp, grouping the bonds by 'molecule_id' and sorting the results by molecule_id and the most recent bond entry (assuming we have a timestamp column added to the 'bond' table for this purpose). However, I've tried the following query and it doesn't work as expected: | [
"SELECT count(bond_id), molecule_id FROM bond GROUP BY molecule_id ORDER BY molecule_id last_update DESC;"
] | [] | [
"ALTER TABLE bond ADD COLUMN last_update TIMESTAMP DEFAULT CURRENT_TIMESTAMP;"
] | [
"ALTER TABLE bond DROP COLUMN last_update;"
] | [] | null | null |
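A sketch fixing the ORDER BY (a missing comma and an un-aggregated column) by sorting on the aggregated most-recent timestamp:

```sql
SELECT COUNT(bond_id) AS bond_count,
       molecule_id,
       MAX(last_update) AS most_recent_update
FROM bond
GROUP BY molecule_id
ORDER BY molecule_id, most_recent_update DESC;
```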
83 | european_football_2 | In the context of the 'european_football_2' database, we have a table that logs changes to player statistics over time. Each row in the 'player_stats_changes' table represents a change to a specific player's attribute (such as height or weight) at a particular timestamp. We want to generate a cumulative view of these changes, where each row shows the player's current height and weight at each timestamp, filling in any missing values with the most recent known value. | [
"SELECT entity_id, coalesce(change->'height', lag(change->'height', 1, null) over (partition by entity_id order by updated_at)) as height, coalesce(change->'weight', lag(change->'weight', 1, null) over (partition by entity_id order by updated_at)) as weight, updated_at FROM ( SELECT entity_id, json_object_agg(column_id, value) as change, updated_at FROM player_stats_changes GROUP BY entity_id, updated_at) as changes;"
] | [] | [
"CREATE TABLE IF NOT EXISTS player_stats_changes ( entity_id TEXT NOT NULL, column_id TEXT NOT NULL, value JSONB NOT NULL, updated_at TIMESTAMP NOT NULL );",
"INSERT INTO player_stats_changes VALUES ('1', 'height', to_jsonb(140), '01-01-2021 00:00:00'::TIMESTAMP), ('1', 'weight', to_jsonb(30), '01-01-2021 00:00:00'::TIMESTAMP), ('1', 'height', to_jsonb(145), '01-02-2021 00:00:00'::TIMESTAMP), ('1', 'weight', to_jsonb(34), '01-03-2021 00:00:00'::TIMESTAMP);"
] | [
"DROP TABLE IF EXISTS player_stats_changes;"
] | [] | null | null |
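PostgreSQL's lag() has no IGNORE NULLS, so a sketch of the usual two-step fill: a running count of non-null values forms sub-partitions, and first_value() within each sub-partition carries the last known reading forward.

```sql
WITH agg AS (
  SELECT entity_id,
         jsonb_object_agg(column_id, value) AS change,
         updated_at
  FROM player_stats_changes
  GROUP BY entity_id, updated_at
), marked AS (
  SELECT entity_id, updated_at,
         change -> 'height' AS height,
         change -> 'weight' AS weight,
         -- COUNT(expr) ignores NULLs, so each non-null value opens a new group
         COUNT(change -> 'height') OVER w AS h_grp,
         COUNT(change -> 'weight') OVER w AS w_grp
  FROM agg
  WINDOW w AS (PARTITION BY entity_id ORDER BY updated_at)
)
SELECT entity_id,
       first_value(height) OVER (PARTITION BY entity_id, h_grp ORDER BY updated_at) AS height,
       first_value(weight) OVER (PARTITION BY entity_id, w_grp ORDER BY updated_at) AS weight,
       updated_at
FROM marked
ORDER BY entity_id, updated_at;
```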
84 | superhero | In the superhero database, I have two separate queries (q1, q2) joining across multiple tables assigning the same superheroes to different groups (I call these subgroups) based on different criteria. I get query results 1 and 2 (qr1, qr2). An item might appear in one or both, but within a result it is unique. I want to assign a new group id based on both subgroups, assigning the same group id if the subgroups share one or more items. | [
"with qr1(item, subgroup) AS (SELECT id, subgroup1 FROM superhero_group WHERE subgroup1 IS NOT NULL), qr2(item, subgroup) AS (SELECT id, subgroup2 FROM superhero_group WHERE subgroup2 IS NOT NULL) select item, subgroup1, subgroup2, dense_rank() over (order by item) as group from (select qr1.item, qr1.subgroup as subgroup1, qr2.subgroup as subgroup2 from qr1 full outer join qr2 on qr1.item = qr2.item) as combined"
] | [] | [
"CREATE TABLE superhero_group (id INTEGER PRIMARY KEY, subgroup1 INTEGER, subgroup2 INTEGER)",
"INSERT INTO superhero_group VALUES (1,1,5), (2,1,null), (3,2,null), (4,3,null), (5,3,6), (6,4,6), (7,null,7), (8,null,5), (10,null,5)"
] | [] | [] | null | null |
85 | superhero | In the superhero database, a user is allowed to view details of a superhero if their user_id matches the superhero's publisher_id or if there is an entry in the 'hero_access' table where their user_id is in the 'read_acl' column (an array with a GIN index). Both tables have about 2 million rows. The query is slow, especially when using an OR clause. Is there a way to improve the performance significantly? | [
"select * from superhero where publisher_id = 1 or exists (select * from hero_access f where superhero.id = f.superhero_id and '{1}' && read_acl) order by superhero.id limit 10;"
] | [] | [
"CREATE TABLE hero_access (superhero_id bigint, read_acl text[]);",
"CREATE INDEX idx_hero_access_read_acl ON hero_access USING gin (read_acl);",
"INSERT INTO hero_access (superhero_id, read_acl) SELECT id, ARRAY['1'] FROM superhero ORDER BY random() LIMIT 10;"
] | [
"DROP TABLE hero_access;"
] | [] | null | null |
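A sketch of the standard rewrite: split the OR into two index-friendly branches, each with its own ORDER BY/LIMIT, and merge the short result sets:

```sql
(SELECT s.*
 FROM superhero s
 WHERE s.publisher_id = 1
 ORDER BY s.id
 LIMIT 10)
UNION
(SELECT s.*
 FROM superhero s
 JOIN hero_access f ON f.superhero_id = s.id
 WHERE f.read_acl && ARRAY['1']   -- overlap operator can use the GIN index
 ORDER BY s.id
 LIMIT 10)
ORDER BY id
LIMIT 10;
```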
86 | superhero | I have two tables and I want to merge them. Table utm is the source (main) table and table report contains data for utm rows. What I need: take id and the utm_* fields from the utm table and add stats from table report at the proper granularity. In table utm I've a row: (24611609, 'myTarget', 'Media', 'Social', NULL, NULL) and in table report I've 2 rows:
(24611609, '2022-08-01', 200, 150, 15, 'myTarget', 'Media', 'Social', 'premium', 'subcribe'),
(24611609, '2022-08-01', 25, 10, 1, 'myTarget', 'Media', 'Social', 'free', 'subcribe')
The common part is: 'myTarget', 'Media', 'Social'.
The proper granularity level is id, utm_campaign, utm_source, utm_medium, so I need to SUM and GROUP these two rows by those keys. I don't know how to deal with all possible granularity combinations. My idea was just to use different JOIN variations and merge the results with UNION, but that's impractical: I would need to create > 1000 unions and joins. Any tips? | [
"WITH r AS (SELECT id, date_of_visit, SUM(sessions) AS sessions, SUM(pageviews) AS pageviews, SUM(bounces) AS bounce, COALESCE(utm_campaign, '') AS utm_campaign, COALESCE(utm_source, '') AS utm_source, COALESCE(utm_medium, '') AS utm_medium, COALESCE(utm_content, '') AS utm_content, COALESCE(utm_term, '') AS utm_term FROM report GROUP BY id, date_of_visit, utm_campaign, utm_source, utm_medium, utm_content, utm_term UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), COALESCE(utm_campaign, ''), COALESCE(utm_source, ''), '' AS utm_medium, '' AS utm_content, '' AS utm_term FROM report GROUP BY id, date_of_visit, utm_campaign, utm_source UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), COALESCE(utm_campaign, ''), '' AS utm_source, COALESCE(utm_medium, ''), '' AS utm_content, '' AS utm_term FROM report GROUP BY id, date_of_visit, utm_campaign, utm_medium UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), COALESCE(utm_campaign, ''), '' AS utm_source, '' AS utm_medium, COALESCE(utm_content, ''), '' AS utm_term FROM report GROUP BY id, date_of_visit, utm_campaign, utm_content UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), COALESCE(utm_campaign, ''), '' AS utm_source, '' AS utm_medium, '' AS utm_content, COALESCE(utm_term, '') FROM report GROUP BY id, date_of_visit, utm_campaign, utm_term UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), '' AS utm_campaign, COALESCE(utm_source, ''), COALESCE(utm_medium, ''), '' AS utm_content, '' AS utm_term FROM report GROUP BY id, date_of_visit, utm_source, utm_medium UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), '' AS utm_campaign, COALESCE(utm_source, ''), '' AS utm_medium, COALESCE(utm_content, ''), '' AS utm_term FROM report GROUP BY id, date_of_visit, utm_source, utm_content UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), '' AS utm_campaign, COALESCE(utm_source, ''), '' AS utm_medium, '' AS utm_content, COALESCE(utm_term, '') FROM report GROUP BY id, date_of_visit, utm_source, utm_term UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), '' AS utm_campaign, '' AS utm_source, COALESCE(utm_medium, ''), COALESCE(utm_content, ''), '' AS utm_term FROM report GROUP BY id, date_of_visit, utm_medium, utm_content UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), '' AS utm_campaign, '' AS utm_source, COALESCE(utm_medium, ''), '' AS utm_content, COALESCE(utm_term, '') FROM report GROUP BY id, date_of_visit, utm_medium, utm_term UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), '' AS utm_campaign, '' AS utm_source, '' AS utm_medium, COALESCE(utm_content, ''), COALESCE(utm_term, '') FROM report GROUP BY id, date_of_visit, utm_content, utm_term UNION SELECT id, date_of_visit, SUM(sessions), SUM(pageviews), SUM(bounces), '' AS utm_campaign, '' AS utm_source, '' AS utm_medium, '' AS utm_content, '' AS utm_term FROM report GROUP BY id, date_of_visit) SELECT r.* FROM r JOIN utm AS u ON r.id = u.row_id AND (r.utm_campaign = u.utm_campaign OR (r.utm_campaign = '' AND u.utm_campaign IS NULL)) AND (r.utm_source = u.utm_source OR (r.utm_source = '' AND u.utm_source IS NULL)) AND (r.utm_medium = u.utm_medium OR (r.utm_medium = '' AND u.utm_medium IS NULL)) AND (r.utm_content = u.utm_content OR (r.utm_content = '' AND u.utm_content IS NULL)) AND (r.utm_term = u.utm_term OR (r.utm_term = '' AND u.utm_term IS NULL)) WHERE 'NA' NOT IN (r.utm_campaign, r.utm_source, r.utm_medium, r.utm_content, r.utm_term);"
] | [] | [
"CREATE TABLE utm (row_id int8 NOT NULL, utm_campaign text NULL, utm_source text NULL, utm_medium text NULL, utm_content text NULL, utm_term text NULL);",
"INSERT INTO utm (row_id, utm_campaign, utm_source, utm_medium, utm_content, utm_term) VALUES (24611609, 'myTarget', 'Media', 'Social', NULL, NULL), (28573041, 'shop_ smartfony', 'my_beeline', 'banner', NULL, NULL), (28573041, 'Beeline_uppers_2022', NULL, NULL, NULL, NULL), (24611609, 'campaign', 'source', 'medium', 'content', 'term');",
"CREATE TABLE report (id int8 NOT NULL, date_of_visit date NOT NULL, sessions numeric NULL, pageviews numeric NULL, bounces numeric NULL, utm_campaign text NULL, utm_source text NULL, utm_medium text NULL, utm_content text NULL, utm_term text NULL);",
"INSERT INTO report (id, date_of_visit, sessions, pageviews, bounces, utm_campaign, utm_source, utm_medium, utm_content, utm_term) VALUES (24611609, '2022-08-01', 200, 150, 15, 'myTarget', 'Media', 'Social', 'premium', 'subcribe'), (24611609, '2022-08-01', 25, 10, 1, 'myTarget', 'Media', 'Social', 'free', 'subcribe'), (28573041, '2022-08-01', 900, 885, 34, 'shop_ smartfony', 'my_beeline', 'banner', NULL, NULL), (28573041, '2022-08-01', 1000, 900, 10, 'Beeline_uppers_2022', NULL, NULL, NULL, NULL), (21781121, '2022-08-01', 500, 50, 5, 'vodafone', 'google', NULL, NULL, NULL), (21781121, '2022-08-01', 55, 50, 3, 'vodafone', 'google', 'youtube', NULL, NULL), (24611609, '2022-08-01', 1, 1, 0, 'campaign', 'source', 'medium', 'content', 'term');"
] | [
"DROP TABLE utm;",
"DROP TABLE report"
] | [] | null | null |
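A sketch that avoids enumerating grouping combinations altogether: treat each NULL utm field in `utm` as a wildcard and aggregate the matching report rows per utm row.

```sql
SELECT u.row_id, u.utm_campaign, u.utm_source, u.utm_medium, u.utm_content, u.utm_term,
       r.date_of_visit,
       SUM(r.sessions)  AS sessions,
       SUM(r.pageviews) AS pageviews,
       SUM(r.bounces)   AS bounces
FROM utm u
JOIN report r
  ON r.id = u.row_id
 AND (u.utm_campaign IS NULL OR r.utm_campaign = u.utm_campaign)
 AND (u.utm_source   IS NULL OR r.utm_source   = u.utm_source)
 AND (u.utm_medium   IS NULL OR r.utm_medium   = u.utm_medium)
 AND (u.utm_content  IS NULL OR r.utm_content  = u.utm_content)
 AND (u.utm_term     IS NULL OR r.utm_term     = u.utm_term)
GROUP BY u.row_id, u.utm_campaign, u.utm_source, u.utm_medium, u.utm_content, u.utm_term,
         r.date_of_visit;
```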
87 | card_games | I have a local PostgreSQL database named card_games, with a table called cards that contains many columns. One of these columns is named text, which stores details about each card's abilities or effects. Sometimes, the text field contains one or more curly-brace expressions indicating costs or actions. For example:
"{{T}}: Target creature gains haste until end of turn."
"{{1}}{{W}}: Prevent the next 2 damage that would be dealt to any target."
"{{2}}{{U}}{{U}}: Draw two cards, then discard a card."
"Flying (This creature can't be blocked except by creatures with flying or reach) {{G}}{{1}}"
I want to extract all the bracketed tokens (i.e., everything in {{...}}) from the text field, potentially returning them in a separate column or combining them into a list. Additionally, some rows may contain multiple occurrences of these curly-brace expressions, separated by normal text.
How can I write a SQL query (using PostgreSQL features like regexp_matches, substring, or similar) to:
1. Find all bracketed tokens within each row's text column,
2. Return them in a format where I can see each token (e.g., {{T}}, {{1}}{{W}}, etc.) separately or as an array,
3. Handle rows that have multiple bracketed tokens or none at all,
4. Optionally count how many curly-brace expressions appear per row?
I'm specifically looking for a solution that runs purely in SQL (e.g. using regexp_replace, regexp_matches, or other built-in PostgreSQL string functions). How should I structure my query to achieve this? Are there any caveats with capturing multiple matches from the same row in PostgreSQL? | [
"SELECT\n id,\n text,\n REGEXP_MATCHES(\n text,\n '\\{.*?\\}',\n 'g'\n ) AS bracketed_tokens\nFROM cards;"
] | [] | [
""
] | [
""
] | [] | null | null |
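A sketch using regexp_matches with the 'g' flag inside a LATERAL subquery, aggregating all brace tokens per row; the pattern assumes single-brace tokens like {T}, so widen it if the stored text really doubles the braces.

```sql
SELECT c.id,
       c.text,
       t.tokens,
       COALESCE(array_length(t.tokens, 1), 0) AS token_count
FROM cards c
LEFT JOIN LATERAL (
  SELECT array_agg(m[1]) AS tokens          -- one array of tokens per card
  FROM regexp_matches(c.text, '(\{[^{}]*\})', 'g') AS m
) t ON true;
```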
88 | superhero | I am porting the queries from InfluxDB to TimescaleDB (PostgreSQL). I am currently stuck with the equivalent of InfluxDB's TOP and BOTTOM functions. Specifically, I need to find the top 5 and bottom 5 races within each gender_id group, ranked by the number of superheroes. If multiple races have the same count, they should share the same rank. In InfluxDB, I would use TOP(count(race_id), 5) in each group with the same gender_id. How can I achieve this in PostgreSQL? | [
"SELECT race_id, top(count(*), 5) as cnt FROM superhero group by gender_id"
] | [] | [] | [] | [] | null | null |
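A sketch that emulates InfluxDB's TOP/BOTTOM with two RANK() windows over the grouped counts; ties share ranks as required.

```sql
WITH counts AS (
  SELECT gender_id, race_id, COUNT(*) AS hero_count,
         RANK() OVER (PARTITION BY gender_id ORDER BY COUNT(*) DESC) AS top_rank,
         RANK() OVER (PARTITION BY gender_id ORDER BY COUNT(*) ASC)  AS bottom_rank
  FROM superhero
  GROUP BY gender_id, race_id
)
SELECT gender_id, race_id, hero_count, top_rank, bottom_rank
FROM counts
WHERE top_rank <= 5 OR bottom_rank <= 5
ORDER BY gender_id, hero_count DESC;
```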
89 | card_games | I have this SQL query to get the top 3 rulings for each uuid in the given list: 6d268c95-c176-5766-9a46-c14f739aba1c, 56f4935b-f6c5-59b9-88bf-9bcce20247ce, 8dfc67e9-8323-5d1f-9e25-9f9394abd5a0, 5ac794d2-4c66-5332-afb1-54b24bc11823, 60f49caf-3583-5f85-b4b3-08dca73a8628, ranked by the number of rulings. However, my current query is not working correctly and is not returning the expected results. | [
"SELECT rulings.id, rulings.date, rulings.text, rulings.uuid FROM rulings WHERE rulings.uuid IN ('6d268c95-c176-5766-9a46-c14f739aba1c', '56f4935b-f6c5-59b9-88bf-9bcce20247ce', '8dfc67e9-8323-5d1f-9e25-9f9394abd5a0', '5ac794d2-4c66-5332-afb1-54b24bc11823', '60f49caf-3583-5f85-b4b3-08dca73a8628') GROUP BY rulings.id ORDER BY rulings.id LIMIT 3"
] | [] | [] | [] | [] | null | null |
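Under the reading "at most 3 rulings per uuid", a sketch that numbers rulings within each uuid and keeps the first three (the ordering key, `id`, is an assumption):

```sql
SELECT id, date, text, uuid
FROM (
  SELECT r.*,
         ROW_NUMBER() OVER (PARTITION BY r.uuid ORDER BY r.id) AS rn
  FROM rulings r
  WHERE r.uuid IN ('6d268c95-c176-5766-9a46-c14f739aba1c',
                   '56f4935b-f6c5-59b9-88bf-9bcce20247ce',
                   '8dfc67e9-8323-5d1f-9e25-9f9394abd5a0',
                   '5ac794d2-4c66-5332-afb1-54b24bc11823',
                   '60f49caf-3583-5f85-b4b3-08dca73a8628')
) t
WHERE rn <= 3
ORDER BY uuid, rn;
```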
90 | formula_1 | I am analyzing Formula 1 race data to rank drivers based on their total points across multiple races. Each driver earns points for their position in each race. I want to retain the discrete race scoring while also ranking the drivers in the series. For example, considering a sub-query that returns this:\n| Driver ID | Driver Name | Total Points | Race Points | Race ID |\n| --------- | ----------- | ------------ | ----------- | ------- |\n| 1 | Lewis | 50 | 10 | 101 |\n| 1 | Lewis | 50 | 20 | 102 |\n| 1 | Lewis | 50 | 20 | 103 |\n| 2 | Nico | 40 | 20 | 101 |\n| 2 | Nico | 40 | 20 | 102 |\nYou can see Lewis has 50 points, as he won 10, 20, and 20 points in three races. Nico has 40 points, as he won 20 and 20 points in two races.\nNow for the ranking, what I'd like is:\n| Place | Driver ID | Driver Name | Total Points | Race Points | Race ID |\n| --------- | --------- | ----------- | ------------ | ----------- | ------- |\n| 1 | 1 | Lewis | 50 | 10 | 101 |\n| 1 | 1 | Lewis | 50 | 20 | 102 |\n| 1 | 1 | Lewis | 50 | 20 | 103 |\n| 2 | 2 | Nico | 40 | 20 | 101 |\n| 2 | 2 | Nico | 40 | 20 | 102 |\nBut if I use `rank()` and `order by Total Points`, I get:\n| Place | Driver ID | Driver Name | Total Points | Race Points | Race ID |\n| --------- | --------- | ----------- | ------------ | ----------- | ------- |\n| 1 | 1 | Lewis | 50 | 10 | 101 |\n| 1 | 1 | Lewis | 50 | 20 | 102 |\n| 1 | 1 | Lewis | 50 | 20 | 103 |\n| 4 | 2 | Nico | 40 | 20 | 101 |\n| 4 | 2 | Nico | 40 | 20 | 102 |\nWhich makes sense, since there are three 'ties' at 50 points.\n`dense_rank()` solves this problem, but if there are legitimate ties across different drivers, I want there to be gaps in the rank (e.g if Lewis and Nico both had 50 points, they are both the first place and the next driver would be in third place, no second).\nThe easiest way to solve this I think would be to issue two queries, one with the 'duplicate' drivers eliminated, and then a second one where I can retain the individual race data, which I need for the points break down display.\nI can also probably, given enough effort, think of a way to do this in a single query, but I'm wondering if I'm not just missing something really obvious that could accomplish this in a single, relatively simple query.\nAny suggestions? | [
"select rank() over (order by total_points desc) as place, id, name, total_points, race_points, raceId from racers"
] | [] | [
"CREATE TABLE racers (id integer, name text, total_points integer, race_points integer, raceId integer);",
"INSERT INTO racers (id, name, total_points, race_points, raceId) VALUES (1, 'Lewis', 50, 10, 123), (1, 'Lewis', 50, 20, 234), (1, 'Lewis', 50, 20, 345), (2, 'Nico', 40, 20, 123), (2, 'Nico', 40, 20, 234), (3, 'Dave', 50, 30, 123), (3, 'Dave', 50, 20, 234);"
] | [
"DROP TABLE racers;"
] | [] | null | null |
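One single-query approach (a sketch against the demo `racers` table created above, not the withheld ground-truth): rank one row per driver, so a driver's own duplicate rows cannot inflate the gaps, then join the ranks back to the per-race rows:

```sql
WITH per_driver AS (
    SELECT DISTINCT id, total_points FROM racers   -- one row per driver
), ranked AS (
    SELECT id, RANK() OVER (ORDER BY total_points DESC) AS place
    FROM per_driver                                -- gaps only for real cross-driver ties
)
SELECT rk.place, r.id, r.name, r.total_points, r.race_points, r.raceId
FROM racers r
JOIN ranked rk USING (id)
ORDER BY rk.place, r.id, r.raceId;
```

With the demo data, Lewis and Dave (both on 50 points) share place 1 and Nico takes place 3, while each driver's individual race rows are preserved.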
91 | california_schools | In the context of the 'california_schools' database, we have three tables: 'schools', 'satscores', and 'frpm'. The 'schools' table contains detailed information about each school, the 'satscores' table contains SAT scores for each school, and the 'frpm' table contains information about the free and reduced-price meal eligibility for each school. We want to determine, for each school in the 'satscores' table, whether there is any corresponding entry in the 'frpm' table for the same school. The user has written a query that checks for the existence of such entries but believes it is inefficient, as it traverses the 'frpm' table twice. Is there a better way? | [
"SELECT s.cds, true FROM satscores s WHERE EXISTS (SELECT 1 FROM frpm f WHERE s.cds = f.cdscode) UNION SELECT s.cds, false FROM satscores s WHERE NOT EXISTS (SELECT 1 FROM frpm f WHERE s.cds = f.cdscode) ORDER BY cds"
] | [] | [] | [] | [] | null | null |
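A single-scan alternative (a sketch, assuming `cds` and `cdscode` are the join keys as described): use `EXISTS` as a boolean expression instead of unioning two complementary queries:

```sql
SELECT s.cds,
       EXISTS (SELECT 1 FROM frpm f WHERE f.cdscode = s.cds) AS has_frpm
FROM satscores s
ORDER BY s.cds;  -- one probe per school instead of two full passes over frpm
```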
92 | card_games | In the card_games database, we have a table named 'orders' that records the purchase history of customers. Each entry in the 'orders' table includes the month and year of the purchase, the order ID, and the customer ID. We want to analyze the purchase behavior of our customers to identify repeat customers. A repeat customer is defined as a customer who has made at least one purchase in the past and makes another purchase in a subsequent month. We aim to count the number of repeat customers per month. For example, if a customer made their first purchase in January 2022, then any purchase they make in February 2022 or later should be counted as a repeat purchase in that respective month. The user attempted to write a query to count repeat customers but encountered issues with the query logic, which only counted customers who made more than one purchase in the same month rather than those who made a purchase in a subsequent month after their first purchase. | [
"SELECT month_year, COUNT(DISTINCT customer_id) FROM orders GROUP BY month_year HAVING COUNT(order_id) > 1"
] | [] | [
"CREATE TABLE orders (month_year text, order_id text, customer_id bigint);",
"INSERT INTO orders (month_year, order_id, customer_id) VALUES ('2016-04', '0001', 24662), ('2016-05', '0002', 24662), ('2016-05', '0002', 24662), ('2016-07', '0003', 24662), ('2016-07', '0003', 24662), ('2016-07', '0004', 24662), ('2016-07', '0004', 24662), ('2016-08', '0005', 24662), ('2016-08', '0006', 24662), ('2016-08', '0007', 24662), ('2016-08', '0008', 24662), ('2016-08', '0009', 24662), ('2016-08', '0010', 11372), ('2016-08', '0011', 11372), ('2016-09', '0012', 24662), ('2016-10', '0013', 24662), ('2016-10', '0014', 11372), ('2016-11', '0015', 24662), ('2016-11', '0016', 11372), ('2016-12', '0017', 11372), ('2017-01', '0018', 11372), ('2017-01', '0019', 11372);"
] | [
"DROP TABLE orders;"
] | [] | null | null |
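A sketch of one possible fix against the demo `orders` table above (not the withheld ground-truth): derive each customer's first purchase month, then count customers appearing in any later month. Note that the `'YYYY-MM'` text format sorts correctly under plain comparison:

```sql
WITH firsts AS (
    SELECT customer_id, MIN(month_year) AS first_month
    FROM orders
    GROUP BY customer_id
)
SELECT o.month_year,
       COUNT(DISTINCT o.customer_id) AS repeat_customers
FROM orders o
JOIN firsts f ON f.customer_id = o.customer_id
WHERE o.month_year > f.first_month        -- any month after the first purchase
GROUP BY o.month_year
ORDER BY o.month_year;
```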
93 | european_football_2 | In the database european_football_2, there is a table named player_movements that records the movements of football players joining and leaving teams. Each row in the table includes the player's name, the event (either 'Join' or 'Leave'), and the timestamp of the event. The goal is to transform this data into a format that shows the periods during which each player was part of a team. Specifically, we want to create a table that lists each player, the date they joined a team, and the date they left the team. This will allow us to easily query the data to determine if a player was part of a team on a specific date. For example, we should be able to find out if Player A was on Team X on a given date by using a query like: SELECT * FROM transformed_table WHERE player_name = 'Player A' AND '2022-01-01' BETWEEN joined_date AND left_date. However, the user attempted to write a query that did not produce the desired results. | [
"SELECT player_name, event_date as join_date, (SELECT event_date FROM player_movements pm1 WHERE pm1.player_name = pm.player_name AND pm1.event = 'Leave' AND pm1.event_date > pm.event_date) as leave_date FROM player_movements pm WHERE event = 'Join'"
] | [] | [
"CREATE TABLE player_movements (player_name VARCHAR(100), event VARCHAR(10), event_date DATE);",
"INSERT INTO player_movements (player_name, event, event_date) VALUES ('Player A', 'Join', '2022-01-01'), ('Player A', 'Leave', '2022-01-02'), ('Player A', 'Join', '2022-01-31'), ('Player A', 'Leave', '2022-02-01'), ('Player B', 'Join', '2022-01-31'), ('Player B', 'Leave', '2022-02-01');"
] | [
"DROP TABLE player_movements;"
] | [] | null | null |
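One way to pair each 'Join' with the following 'Leave' (a sketch over the demo table above; it assumes events strictly alternate per player, as in the sample data) is `LEAD()` ordered by date, filtered down to the rows that open a stay:

```sql
WITH ordered AS (
    SELECT player_name, event, event_date,
           LEAD(event_date) OVER (PARTITION BY player_name
                                  ORDER BY event_date) AS next_date
    FROM player_movements
)
SELECT player_name,
       event_date AS joined_date,
       next_date  AS left_date
FROM ordered
WHERE event = 'Join';  -- each Join row picks up the Leave that follows it
```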
94 | european_football_2 | We have a table named 'player_attributes' that records various performance metrics for players. Each record includes the following metrics: gk_diving, gk_handling, gk_kicking, gk_positioning, gk_reflexes. For example, if a player has gk_diving = 6, gk_handling = 10, gk_kicking = 11, gk_positioning = 8, gk_reflexes = 8, the average after removing the highest (11) and lowest (6) values would be (8 + 8 + 10) / 3 = 8.6667. The method I currently use, shown below, is very bloated and its execution time is too long. Is there a neater way to meet my needs? | [
"SELECT id, (SUM(gk) - MAX(gk) - MIN(gk)) / (COUNT(gk) - 2) AS adjusted_average FROM (SELECT id, gk_diving AS gk FROM player_attributes UNION ALL SELECT id, gk_handling AS gk FROM player_attributes UNION ALL SELECT id, gk_kicking AS gk FROM player_attributes UNION ALL SELECT id, gk_positioning AS gk FROM player_attributes UNION ALL SELECT id, gk_reflexes AS gk FROM player_attributes) subquery GROUP BY id ORDER BY id;"
] | [] | [] | [] | [] | null | null |
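Since exactly five metrics are involved, a flat sketch with `GREATEST`/`LEAST` avoids the five-way `UNION ALL` entirely (assuming the metric columns are non-NULL, as in the example):

```sql
SELECT id,
       (gk_diving + gk_handling + gk_kicking + gk_positioning + gk_reflexes
        - GREATEST(gk_diving, gk_handling, gk_kicking, gk_positioning, gk_reflexes)
        - LEAST(gk_diving, gk_handling, gk_kicking, gk_positioning, gk_reflexes)
       ) / 3.0 AS adjusted_average   -- divide by 3.0 to keep fractional precision
FROM player_attributes
ORDER BY id;
```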
95 | financial | In the financial database, there is a table named 'trans' which records all transactions made by clients. Each transaction has a unique transaction ID, the account ID associated with the transaction, the date of the transaction, the type of transaction (credit or debit), the operation performed, the amount involved, the balance after the transaction, and additional details such as the bank and account of the partner. The table has more than 1,000,000 rows and is growing rapidly. I need to extract the most recent transaction (based on the transaction date) for each account ID. I have also tried avoiding a subquery but did not notice a significant difference. Any idea how I could optimize this query? | [
"DROP INDEX IF EXISTS idx_a;",
"SELECT DISTINCT ON (t.account_id) t.trans_id, t.account_id, t.date, t.type, t.amount, t.balance FROM trans t ORDER BY t.account_id, t.date DESC, trans_id;"
] | [] | [] | [
"DROP INDEX IF EXISTS idx_a;"
] | [] | null | null |
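The `DISTINCT ON` form is already the idiomatic shape here; the usual lever is an index whose column order and sort directions match the `ORDER BY` exactly, so PostgreSQL can read each account's newest row straight off the index. A sketch (the index name `idx_a` mirrors the one the clean-up step drops):

```sql
CREATE INDEX idx_a ON trans (account_id, date DESC, trans_id);

SELECT DISTINCT ON (t.account_id)
       t.trans_id, t.account_id, t.date, t.type, t.amount, t.balance
FROM trans t
ORDER BY t.account_id, t.date DESC, t.trans_id;  -- matches the index ordering
```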
96 | financial | A financial analyst is tasked with analyzing transaction data to summarize daily financial activities for each client. They need to calculate the total amount of transactions, total balance changes, and the number of transactions for each client, partitioned by date. The analyst writes a query that repeats the same window partition definition for multiple result columns and wants to eliminate the redundancy by declaring one PARTITION definition for all the window function calls. | [
"\n select account_id, date, \n sum(amount) OVER (PARTITION BY account_id, date) as total_amount, \n sum(balance) OVER (PARTITION BY account_id, date) as total_balance, \n count(trans_id) OVER (PARTITION BY account_id, date) as total_transactions\n from trans \n "
] | [] | [] | [] | [] | null | null |
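PostgreSQL's named `WINDOW` clause does exactly this: declare the partition once and reference it from every window call. A minimal sketch:

```sql
SELECT account_id, date,
       SUM(amount)     OVER w AS total_amount,
       SUM(balance)    OVER w AS total_balance,
       COUNT(trans_id) OVER w AS total_transactions
FROM trans
WINDOW w AS (PARTITION BY account_id, date);  -- one definition, three uses
```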
97 | debit_card_specializing | I need to retrieve transactions from the `transactions_1k` table based on a lexicographic ordering on multiple columns, where the direction of the sort on some columns is reversed. Specifically, I want to find transactions that occurred before a certain date, or on the same date but after a certain time, or on the same date and time but with a transaction amount less than a specified value. For example, I want to find transactions that occurred before '2012-08-24', or on '2012-08-24' but after '10:00:00', or on '2012-08-24' at '10:00:00' but with an amount less than 20. Is there a straightforward way to do this using tuples or a similar approach in PostgreSQL? Note that I cannot rely on tricks that apply only to integers, as some columns are of type date and text. | [
"\n SELECT * FROM transactions_1k WHERE (Date, Time, Amount) < ('2012-08-24', '10:00:00', 20);\n "
] | [] | [] | [] | [] | null | null |
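Row-value (tuple) comparison applies the same direction to every column, so the mixed ordering here has to be spelled out explicitly. A sketch of the expanded form:

```sql
SELECT *
FROM transactions_1k
WHERE Date < '2012-08-24'
   OR (Date = '2012-08-24' AND Time > '10:00:00')
   OR (Date = '2012-08-24' AND Time = '10:00:00' AND Amount < 20);
```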
98 | financial | A financial analyst is trying to generate a report that includes the client's name, the loan details, and the account details for loans that were issued in the year 1996. The analyst has written a query to join the `client`, `disp`, `account`, and `loan` tables. However, the query is failing with an error related to a missing FROM-clause entry. The analyst needs to retrieve the client's name, loan amount, loan duration, and account creation date for loans issued in 1996. | [
"\n SELECT client.gender, loan.amount, loan.duration, account.date FROM loan JOIN account ON loan.account_id = account.account_id JOIN client ON disp.client_id = client.client_id JOIN disp ON account.account_id = disp.account_id WHERE loan.date BETWEEN '1996-01-01' AND '1996-12-31';\n "
] | [] | [] | [] | [] | null | null |
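The error comes from referencing `disp` in a join condition before `disp` itself has been joined. Reordering the joins so each table is introduced before it is used fixes it (a sketch, keeping the original column list):

```sql
SELECT client.gender, loan.amount, loan.duration, account.date
FROM loan
JOIN account ON loan.account_id = account.account_id
JOIN disp    ON account.account_id = disp.account_id   -- join disp before client needs it
JOIN client  ON disp.client_id = client.client_id
WHERE loan.date BETWEEN '1996-01-01' AND '1996-12-31';
```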
99 | codebase_community | We are analyzing user engagement with posts on a community forum. Specifically, we want to determine if a user's view of a post had a significant impact based on the duration of the view and the percentage of the post viewed. The 'has_impact' field should be set to true if the difference between the view end time and the view start time is greater than 3 seconds and the view percentage is greater than 0.8, otherwise it should be false. We have a table 'view_logs' that logs each view event with the session_id, post_id, timestamp (ts), event_name (either 'view_start' or 'view_end'), and view percentage (view_perc). We need to populate the 'has_impact' field based on these criteria. | [
"with cte as (select pv1.session_id, pv1.post_id, pv2.view_perc, pv1.ts as start_time, min(pv2.ts) as end_time from view_logs pv1 join view_logs pv2 on pv1.session_id = pv2.session_id and pv1.post_id = pv2.post_id and pv1.event_name <> pv2.event_name and pv1.ts < pv2.ts group by pv1.session_id, pv1.post_id, pv2.view_perc, pv1.ts) select session_id, post_id, start_time, end_time, case when (end_time - start_time > 3 and view_perc > 0.8 )then 'yes' else 'no' end as has_meaningful_view from cte;"
] | [] | [
"create table view_logs (session_id varchar(10), post_id int, ts int, event_name varchar(50), view_perc float);",
"insert into view_logs(session_id, post_id, ts, event_name, view_perc) values ('m1', 1000, 1524600, 'view_start', null), ('m1', 1000, 1524602, 'view_end', 0.85), ('m1', 1000, 1524650, 'view_start', null), ('m1', 1000, 1524654, 'view_end', 0.9), ('m1', 2000, 1524700, 'view_start', null), ('m1', 2000, 1524707, 'view_end', 0.3), ('m1', 2000, 1524710, 'view_start', null), ('m1', 2000, 1524713, 'view_end', 0.9);"
] | [
"DROP TABLE IF EXISTS view_logs;"
] | [] | null | null |
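A sketch of one way to compute `has_impact` over the demo `view_logs` table (assuming, as in the sample data, that `view_start` and `view_end` rows alternate per session/post): pair each start with the next row via `LEAD()`, then apply both criteria:

```sql
WITH paired AS (
    SELECT session_id, post_id, event_name,
           ts AS start_time,
           LEAD(ts)        OVER w AS end_time,
           LEAD(view_perc) OVER w AS view_perc
    FROM view_logs
    WINDOW w AS (PARTITION BY session_id, post_id ORDER BY ts)
)
SELECT session_id, post_id, start_time, end_time,
       (end_time - start_time > 3 AND view_perc > 0.8) AS has_impact
FROM paired
WHERE event_name = 'view_start';  -- keep only the rows that open a view
```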
BIRD-CRITIC-1.0-Flash
BIRD-Critic is the first SQL debugging benchmark designed to answer a critical question:
Can large language models (LLMs) fix user issues in real-world database applications?
Each task in BIRD-CRITIC has been verified by human experts on the following dimensions:
- Reproduction of errors in the BIRD environment to prevent data leakage.
- Carefully curated test case functions for each task.
- Soft EX: This metric can evaluate SELECT-only tasks.
- Soft EX + Parsing: This metric can evaluate tasks with user-specific requirements or refinements.
- Test Case: For DBA tasks, such as CRUD (CREATE, READ, UPDATE, DELETE), test cases are provided to verify the correct logic. This is also effective for user issues that require multiple sequential SQLs to resolve.
- Query Execution Plan: For user tasks involving efficiency improvements or runtime errors, a QEP can be introduced to evaluate solution SQLs at the algorithm level.
- Fast evaluation sandbox via PostgreSQL templates & Docker.
- Newly created RDBs at different scales and in professional domains.
We are releasing a lite version of BIRD-Critic, bird-critic-1.0-flash-exp, which includes 200 high-quality user issues on PostgreSQL encountered when developing real-world applications. We curate tasks by:
- Collecting and understanding realistic user issues.
- Distilling problem definitions and SQL knowledge.
- Reproducing bugs and solutions in the BIRD environment.
- Designing test cases for evaluation.
Model Performance Results
| Rank | Model Name | Score | Level |
|---|---|---|---|
| 1 | o1-preview-2024-09-12 | 38.5 | Leading |
| 2 | deepseek-reasoner (r1) | 34.0 | Elite |
| 3 | gpt-4o-2024-11-20 | 29.0 | Elite |
| 4 | o1-mini | 28.0 | Superior |
| 5 | deepseek-V3 | 27.5 | Superior |
| 6 | phi-4 | 24.5 | Superior |
| 7 | claude-3-5-sonnet | 24.0 | Advanced |
| 8 | gemini-2.0-flash-exp | 24.0 | Advanced |
| 9 | Qwen2.5-Coder-32B-Instruct | 23.5 | Advanced |
| 10 | gemini-2.0-flash-thinking-exp | 19.5 | Advanced |
| 11 | Meta-Llama-3.3-70B-Instruct | 18.5 | Standard |
| 12 | Codestral-22B-v0.1 | 18.0 | Standard |
| 13 | gemma-2-27b-it | 18.0 | Standard |
| 14 | QwQ-32B-Preview | 17.5 | Standard |
| 15 | starcoder2-15b-instruct-v0.1 | 15.5 | Standard |
| 16 | DeepSeek-Coder-V2-Lite-Instruct | 12.5 | Basic |
| 17 | Mixtral-8x7B-Instruct-v0.1 | 11.5 | Basic |
| 18 | gemma-2-9b-it | 10.5 | Basic |
| 19 | Yi-1.5-34B-Chat-16K | 10.5 | Basic |
| 20 | CodeLlama-34b-Instruct-hf | 10.0 | Basic |
| 21 | CodeLlama-13b-Instruct-hf | 8.5 | Basic |
| 22 | Mistral-7B-Instruct-v0.2 | 3.5 | Basic |
Tier Classification (By Ranking):
- Leading: The Best!
- Elite: Top 15%
- Superior: Top 30%
- Advanced: Top 45%
- Standard: Top 70%
- Basic: Bottom 30%
Dataset Details
Uses
To avoid data leakage by auto-crawling, we do not include the ground-truth solution SQLs and test cases with the data. Please email [email protected] or [email protected] for the full set, which will be sent automatically.
Code Sources
- Repository: Coming soon.
Dataset Structure
Below is a description of the dataset fields and additional information about the structure:
- db_id: The name of the database.
- query: The user query, rewritten in the BIRD environment.
- error_sql: The buggy SQL query written by the user.
- sol_sql: The ground truth SQL solution.
- preprocess_sql: SQL queries to run before executing the solution or prediction.
- clean_up_sql: SQL queries to run after the test cases to revert any changes made to the database.
- test_cases: A set of test cases to validate the predicted corrected SQL.
- efficiency: True if the question requires optimization; the cost is measured via the Query Execution Plan (QEP).
- external_data: External JSON data, if present.
Todo Lists
- Release lite version, bird-critic-1.0-flash (200).
- Open source code, leaderboard page.
- Release Full bird-critic-1.0-open (600 w/ 5 dialects).
- Release Full bird-critic-1.0-postgresql (600 pg tasks).
- Update agent baselines.
- BIRD-Pro v0.5
- BIRD-CRITIC 1.5 / 2.0 on track!
Posted by:
BIRD Team & Google Cloud