feat(api): add tables and files v1 REST API with OpenAPI docs #3422
waleedlatif1 wants to merge 1 commit into feat/mothership-copilot from
PR Summary (Medium Risk)
Extends the v1 API surface with workspace file listing, multipart upload (100MB limit + duplicate-name conflict handling), binary download with metadata headers, and deletion with audit logging. Updates OpenAPI + docs navigation to include the new endpoints.
Written by Cursor Bugbot for commit 0680b17.
Cursor Bugbot has reviewed your changes and found 3 potential issues.
```ts
.min(1, 'Limit must be at least 1')
.max(TABLE_LIMITS.MAX_QUERY_LIMIT, `Limit cannot exceed ${TABLE_LIMITS.MAX_QUERY_LIMIT}`)
.optional()
.default(100),
```
List rows fails when limit query parameter omitted
High Severity
The limit field uses z.coerce.number() with .min(1) and .optional().default(100). When limit is not provided as a query parameter, searchParams.get('limit') returns null. z.coerce.number() converts null to Number(null) which is 0. Since 0 is not undefined, .optional() does not intercept it, and .min(1) rejects it with a validation error. This makes the limit parameter effectively required despite being documented and intended as optional with a default of 100. The offset parameter has the same coercion pattern but is unaffected because .min(0) accepts 0.
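The failure mode and one fix (coalescing `null` to `undefined` before the schema sees it) can be sketched outside Zod, since the root cause is plain JavaScript coercion; the names here are illustrative, not the PR's actual code:

```typescript
// Simulate URLSearchParams.get('limit') when the query parameter is absent:
// it returns null, not undefined.
const params = new URLSearchParams('offset=0')
const rawLimit = params.get('limit') // null

// z.coerce.number() effectively runs Number(input). Number(null) is 0,
// so .optional()/.default(100) never engage and .min(1) rejects the request.
console.log(Number(rawLimit)) // 0

// One fix: normalize null to undefined before parsing, so the schema's
// .default(100) applies as documented.
const normalizedLimit = params.get('limit') ?? undefined
console.log(normalizedLimit) // undefined
```

The same normalization would make the `offset` parameter's behavior explicit as well, even though `.min(0)` happens to accept the coerced `0` today.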
```ts
const errorMessage = error instanceof Error ? error.message : String(error)
const detailedError = `Failed to upsert row: ${errorMessage}`

return NextResponse.json({ error: detailedError }, { status: 500 })
```
Upsert error response leaks internal error details
Medium Severity
The upsert endpoint's 500 error handler constructs a detailed error message including the raw error.message (e.g., database errors, connection strings, internal state). Every other new endpoint in this PR returns a generic message like 'Failed to delete rows' for 500 errors. Exposing internal error details in a public v1 API response is a security concern and inconsistent with the rest of the PR.
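A minimal sketch of the usual pattern, assuming the endpoint's `requestId` is available for log correlation (function and logging shape are illustrative, not the PR's code): keep the detail in server logs, return only the generic message.

```typescript
// Log the detailed error server-side, return a generic message to the caller,
// matching the other v1 endpoints ('Failed to delete rows', etc.).
function toClientError(error: unknown, requestId: string): { error: string } {
  const detail = error instanceof Error ? error.message : String(error)
  // Internal detail stays in server logs, keyed by requestId for correlation.
  console.error(`[${requestId}] Failed to upsert row: ${detail}`)
  return { error: 'Failed to upsert row' }
}

console.log(toClientError(new Error('connection refused at db:5432'), 'req-1'))
// → { error: 'Failed to upsert row' }
```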
```ts
uploadedBy: userId,
uploadedAt: new Date().toISOString(),
},
message: 'File uploaded successfully',
```
Upload response fabricates timestamp instead of using record
Low Severity
The upload response constructs uploadedAt: new Date().toISOString() instead of using the actual timestamp from the userFile record returned by uploadWorkspaceFile. All other fields (id, name, size, type, key) come from the record, and the list endpoint correctly uses f.uploadedAt from the database. This inconsistency means the upload response may show a different timestamp than a subsequent list call.
Greptile Summary
This PR introduces v1 REST API endpoints for Tables (CRUD, row operations, bulk ops, upsert) and Files (list, upload, download, delete), along with full OpenAPI 3.1.0 documentation and rate-limit/auth middleware integration for the new endpoints. The implementation follows the existing v1 API pattern established for workflows, logs, and audit logs. Key concerns are noted in the inline comments below.
Confidence Score: 2/5
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant MW as V1 Middleware
    participant RL as RateLimiter
    participant Route as Route Handler
    participant DB as Database
    Client->>MW: HTTP Request with API Key header
    MW->>RL: checkRateLimitWithSubscription(userId)
    RL-->>MW: allowed / denied result
    alt Denied
        MW-->>Client: 401 or 429 response
    end
    MW-->>Route: userId passed through
    alt Tables CRUD
        Route->>DB: getUserEntityPermissions check
        Route->>DB: listTables or createTable or deleteTable
        DB-->>Route: result
        Route-->>Client: 200 success response
    else Rows Insert/Update/Delete
        Route->>DB: checkAccess for tableId
        Route->>DB: validate row data against schema
        Route->>DB: transactional batch write
        DB-->>Route: updated rows
        Route-->>Client: 200 success response
    else Upsert Row
        Route->>DB: checkAccess for tableId
        Route->>DB: SELECT existing row by unique columns
        Note over Route,DB: Row count checked outside transaction
        Route->>DB: UPDATE or INSERT in transaction
        DB-->>Route: upserted row
        Route-->>Client: 200 with operation type
    else Files Upload/Download/Delete
        Route->>DB: getUserEntityPermissions check
        Route->>DB: uploadWorkspaceFile or downloadWorkspaceFile
        DB-->>Route: file record
        Route-->>Client: 200 success response
    end
```
```ts
if (!existingRow && table.rowCount >= table.maxRows) {
  return NextResponse.json(
    { error: `Table row limit reached (${table.maxRows} rows max)` },
    { status: 400 }
  )
}
```
TOCTOU race condition on row-count limit
table.rowCount is read from the checkAccess result, which was fetched before the transaction begins. Under concurrent requests, multiple callers can simultaneously pass this guard and all proceed to INSERT, exceeding the table's maxRows limit.
The capacity check needs to happen inside the transaction with a row-count query that uses a SELECT … FOR UPDATE or a conditional insert so the limit is enforced atomically. At minimum, re-query the current count inside the transaction before inserting:
```ts
const upsertResult = await db.transaction(async (trx) => {
  if (existingRow) {
    // ... existing update path
  }
  // Re-check capacity inside the transaction
  const [{ count: currentCount }] = await trx
    .select({ count: sql<number>`count(*)` })
    .from(userTableRows)
    .where(eq(userTableRows.tableId, tableId))
  if (Number(currentCount) >= table.maxRows) {
    throw new Error(`Table row limit reached (${table.maxRows} rows max)`)
  }
  // ... existing insert path
})
```

```ts
const uniqueFilters = uniqueColumns.map((col) => {
  const value = rowData[col.name]
  if (value === undefined || value === null) {
    return null
  }
  return sql`${userTableRows.data}->>${col.name} = ${String(value)}`
})

const validUniqueFilters = uniqueFilters.filter((f): f is Exclude<typeof f, null> => f !== null)

if (validUniqueFilters.length === 0) {
  return NextResponse.json(
    {
      error: `Upsert requires values for at least one unique field: ${uniqueColumns.map((c) => c.name).join(', ')}`,
    },
    { status: 400 }
  )
}

const [existingRow] = await db
  .select()
  .from(userTableRows)
  .where(
    and(
      eq(userTableRows.tableId, tableId),
      eq(userTableRows.workspaceId, validated.workspaceId),
      or(...validUniqueFilters)
    )
  )
  .limit(1)
```
OR-based unique-column lookup can match the wrong row
When a table has multiple unique columns (e.g. email and username) and the caller provides values for all of them, the OR filter matches any row that satisfies any of the unique conditions. If two different existing rows each satisfy one condition (row A has the given email; row B has the given username), .limit(1) arbitrarily picks one and updates it — silently leaving the other row with a now-duplicate value in one of its unique columns, or producing a confusing result.
Industry-standard upsert semantics match on a single, designated conflict target (like PostgreSQL's ON CONFLICT (column)). Consider requiring callers to specify which unique column is the conflict target, or switch to the database-native ON CONFLICT clause via Drizzle's .onConflictDoUpdate() to get atomic, correct semantics.
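The hazard can be shown with a small in-memory model (plain arrays standing in for the Drizzle query; names are illustrative): two distinct rows each satisfy one of the OR'd unique conditions, so `LIMIT 1` picks one arbitrarily.

```typescript
// Two existing rows, each unique on a different column.
type Row = { id: number; email: string; username: string }
const rows: Row[] = [
  { id: 1, email: 'a@x.com', username: 'alice' }, // matches the input's email
  { id: 2, email: 'b@x.com', username: 'bob' },   // matches the input's username
]
const input = { email: 'a@x.com', username: 'bob' }

// OR across unique columns: both rows qualify, so which one gets
// updated under LIMIT 1 is arbitrary.
const matches = rows.filter((r) => r.email === input.email || r.username === input.username)
console.log(matches.length) // 2

// Matching on a single designated conflict target removes the ambiguity.
const byEmail = rows.filter((r) => r.email === input.email)
console.log(byEmail.length) // 1
```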
```ts
'Content-Length': String(buffer.length),
'X-File-Id': fileRecord.id,
'X-File-Name': fileRecord.name,
'X-Uploaded-At':
```
Non-ASCII filenames will corrupt the X-File-Name header
HTTP/1.1 header values must be ISO-8859-1 (Latin-1) printable characters. Storing a raw fileRecord.name that contains Unicode characters (e.g. Japanese, Arabic, emoji) directly in a header value will violate the spec and may cause parsing errors or garbled output in HTTP clients and proxies.
Either percent-encode or Base64-encode the value, or drop this header entirely since the filename is already included in Content-Disposition:
```diff
- 'X-File-Name': fileRecord.name,
+ 'X-File-Name': encodeURIComponent(fileRecord.name),
```
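As a quick check of the percent-encoding approach (filename here is made up for illustration): `encodeURIComponent` maps every non-ASCII byte to `%XX` escapes, so the header value stays within the printable-ASCII subset that HTTP field values require, and clients can recover the original name with `decodeURIComponent`.

```typescript
const name = '請求書_2024.pdf' // a filename with non-Latin-1 characters
const encoded = encodeURIComponent(name)

// The encoded value contains only printable ASCII, safe for a header.
console.log(/^[\x20-\x7e]*$/.test(encoded)) // true

// The transformation is lossless for the client.
console.log(decodeURIComponent(encoded) === name) // true
```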
```ts
key: userFile.key,
uploadedBy: userId,
uploadedAt: new Date().toISOString(),
},
```
uploadedAt reflects server time, not the actual DB record timestamp
The response for a successful upload sets uploadedAt: new Date().toISOString() using the current wall-clock time rather than the timestamp stored by uploadWorkspaceFile. If the storage layer assigns its own timestamp (e.g. a DB default), the value returned to the caller will differ from the value returned by a subsequent GET /api/v1/files listing, leading to inconsistent data.
Return userFile.uploadedAt instead, with the same defensive coercion used in the GET handler:
```diff
- uploadedAt: new Date().toISOString(),
+ uploadedAt:
+   userFile.uploadedAt instanceof Date
+     ? userFile.uploadedAt.toISOString()
+     : String(userFile.uploadedAt),
```
```ts
async function handleBatchInsert(
  requestId: string,
  tableId: string,
  body: z.infer<typeof BatchInsertRowsSchema>,
  userId: string
): Promise<NextResponse> {
  const validated = BatchInsertRowsSchema.parse(body)
```
handleBatchInsert re-parses the already-validated body
handleBatchInsert (line 103) calls BatchInsertRowsSchema.parse(body) even though the caller in POST has already confirmed that body has the rows array shape. Because handleBatchInsert is awaited, a ZodError thrown inside it still reaches the outer POST catch block's ZodError handler (line 361), so the behavior is safe; the double parse is simply wasteful and obscures intent. Note also that the outer handler at line 305 calls InsertRowSchema.parse(body) only after the batch check fails. Consider having handleBatchInsert accept a pre-validated value so each request body is parsed exactly once, at the route boundary.
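The parse-once pattern the comment suggests can be sketched without Zod (a hand-rolled validator stands in for `BatchInsertRowsSchema.parse`; all names are illustrative): validate at the route boundary, then pass the already-typed value into the helper.

```typescript
// The validated shape the helper expects.
type BatchInsertBody = { rows: Record<string, unknown>[] }

let parseCount = 0 // instrumented so we can see how often validation runs

// Boundary validation: the only place the raw body is inspected.
function parseBatchInsert(body: unknown): BatchInsertBody {
  parseCount++
  const candidate = body as { rows?: unknown }
  if (typeof body !== 'object' || body === null || !Array.isArray(candidate.rows)) {
    throw new Error('Invalid body: expected { rows: [...] }')
  }
  return body as BatchInsertBody
}

// The helper takes the pre-validated type, so it never re-parses.
function handleBatchInsert(validated: BatchInsertBody): number {
  return validated.rows.length
}

const validated = parseBatchInsert({ rows: [{ a: 1 }, { a: 2 }] })
console.log(handleBatchInsert(validated), parseCount) // 2 1
```

Typing the helper's parameter as the validated shape (rather than `unknown` or the raw body) lets the compiler enforce that callers go through the boundary parse first.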
Summary
Type of Change
Testing
Tested manually
Checklist