M16 Capstone — audit remediation (v5.3.0) (#17)
Conversation
Zero-knowledge code extraction, critical assessment, roadmap reconciliation, and prescriptive blueprint for @git-stunts/git-cas. Covers all 31 source files, 61 test files, and 12 CLI files.
Add milestone M16 (Capstone, v5.3.0) to ROADMAP.md with 13 task cards addressing all negative findings from the CODE-EVAL.md forensic audit.

P0 (Critical):
- 16.1 Crypto adapter behavioral normalization (C8)
- 16.2 Memory restore guard (C1)
- 16.11 Passphrase input security (C5 + V6)

P1 (Important):
- 16.3 Web Crypto encryption buffer guard (C4)
- 16.5 Encrypt-then-chunk dedup warning (C10)
- 16.6 Chunk size upper bound (C3)
- 16.10 Orphaned blob tracking (C2)

P2 (Housekeeping):
- 16.4 FixedChunker pre-allocated buffer (C9)
- 16.7 Lifecycle method naming (deprecate deleteAsset)
- 16.8 CasError portability guard
- 16.9 Pre-commit hook + hooks directory
- 16.12 KDF brute-force awareness (C6)
- 16.13 GCM nonce collision documentation (C7)

Also registers new CasError codes RESTORE_TOO_LARGE and ENCRYPTION_BUFFER_EXCEEDED, adds concerns C8–C10 to the concerns section, and cross-references all concerns to their task cards in the summary table.
CasError now checks for Error.captureStackTrace before calling it. This prevents a TypeError on runtimes where captureStackTrace is not defined (e.g. Firefox, older Deno builds). Task: 16.8
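The fix reduces to a one-line feature check. A minimal standalone sketch of the pattern — the `CasError` constructor shape here (message, code, meta) is inferred from the error codes discussed elsewhere in this thread, not copied from the source:

```javascript
// Portable custom error: Error.captureStackTrace is a V8-only API,
// so guard it before calling to avoid a TypeError on other runtimes.
class CasError extends Error {
  constructor(message, code, meta = {}) {
    super(message);
    this.name = 'CasError';
    this.code = code;
    this.meta = meta;
    if (typeof Error.captureStackTrace === 'function') {
      // On V8 this trims the constructor frame from the stack;
      // on Firefox or older Deno the branch is simply skipped.
      Error.captureStackTrace(this, CasError);
    }
  }
}
```

On V8 runtimes the call removes the constructor frame from the captured stack; elsewhere the guard makes the whole statement a no-op instead of a crash.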
- git mv scripts/git-hooks/pre-push → scripts/hooks/pre-push
- New scripts/hooks/pre-commit runs `pnpm run lint`
- install-hooks.sh updated to point at scripts/hooks/

Task: 16.9
- CasError.js: add curly braces around the captureStackTrace guard
- CasError.test.js: remove unused vi and afterEach imports
- NodeCryptoAdapter.encryptBuffer() now async (returns a Promise)
- decryptBuffer() validates the key on both Node and Web adapters
- NodeCryptoAdapter.createEncryptionStream finalize() guards with STREAM_NOT_CONSUMED before stream consumption
- New CryptoAdapter.conformance.test.js asserts identical contracts

Task: 16.1
CasService._restoreBuffered now checks total chunk size against
maxRestoreBufferSize (default 512 MiB) before buffering encrypted or
compressed data. Throws RESTORE_TOO_LARGE with {size, limit} meta
when exceeded. Unencrypted streaming restore is unaffected.
Task: 16.2
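The guard described above can be sketched as a standalone helper. The 512 MiB default and the RESTORE_TOO_LARGE code with `{size, limit}` meta follow the description; the function name and CasError shape are illustrative:

```javascript
class CasError extends Error {
  constructor(message, code, meta) {
    super(message);
    this.code = code;
    this.meta = meta;
  }
}

const DEFAULT_MAX_RESTORE_BUFFER = 512 * 1024 * 1024; // 512 MiB

// Sum chunk sizes up front and refuse to buffer past the configured cap.
function assertRestoreWithinLimit(chunks, limit = DEFAULT_MAX_RESTORE_BUFFER) {
  const size = chunks.reduce((sum, c) => sum + c.length, 0);
  if (size > limit) {
    throw new CasError(
      `Restore is ${size} bytes (limit: ${limit})`,
      'RESTORE_TOO_LARGE',
      { size, limit },
    );
  }
  return size;
}
```

Note this checks pre-decompression bytes; as the review comments later in the thread point out, a compressed asset also needs a second check after inflation.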
- New bin/ui/passphrase-prompt.js with promptPassphrase() and readPassphraseFile()
- --vault-passphrase-file <path> option on store, restore, and vault init
- resolvePassphrase() now async with priority chain: file → flag → env → TTY prompt → undefined
- vault init uses confirm: true for passphrase confirmation

Task: 16.11
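A minimal sketch of that priority chain, with all I/O injected for illustration. The option names follow the flags above and GIT_CAS_PASSPHRASE appears later in this thread; the dependency-injection shape is an assumption, not the real CLI wiring:

```javascript
// Resolution order: file → flag → env → TTY prompt → undefined.
async function resolvePassphrase(opts, { readFile, env, isTTY, prompt }) {
  if (opts.vaultPassphraseFile) {
    const content = await readFile(opts.vaultPassphraseFile);
    // Strip one trailing (CR)LF so Windows-edited files still match.
    return content.replace(/\r?\n$/, '');
  }
  if (opts.vaultPassphrase) return opts.vaultPassphrase;
  if (env.GIT_CAS_PASSPHRASE) return env.GIT_CAS_PASSPHRASE;
  if (isTTY) return prompt({ confirm: opts.confirm === true });
  return undefined; // non-interactive and no source: caller decides what to do
}
```

Returning `undefined` rather than throwing lets the caller distinguish "no passphrase supplied" from "bad passphrase", which matters for unencrypted vaults.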
CasService, FixedChunker, and CdcChunker constructors now throw when chunk size exceeds 100 MiB. CasService warns when > 10 MiB. Prevents accidental creation of excessively large blobs. Task: 16.6
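A sketch of the bounds check, assuming the 100 MiB hard cap and 10 MiB warning threshold stated above; the injectable `warn` callback stands in for the observability port:

```javascript
const MAX_CHUNK_SIZE = 100 * 1024 * 1024; // hard cap: 100 MiB
const WARN_CHUNK_SIZE = 10 * 1024 * 1024; // soft threshold: 10 MiB

function validateChunkSize(chunkSize, warn = () => {}) {
  // Reject NaN, Infinity, 0, negatives, and non-integers up front:
  // a zero or NaN chunk size would otherwise loop forever in a chunker.
  if (!Number.isInteger(chunkSize) || chunkSize < 1) {
    throw new RangeError(`chunkSize must be a positive integer, got ${chunkSize}`);
  }
  if (chunkSize > MAX_CHUNK_SIZE) {
    throw new RangeError(`chunkSize ${chunkSize} exceeds ${MAX_CHUNK_SIZE} bytes`);
  }
  if (chunkSize > WARN_CHUNK_SIZE) {
    warn(`chunkSize ${chunkSize} exceeds 10 MiB; blobs may be unwieldy`);
  }
  return chunkSize;
}
```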
WebCryptoAdapter now accepts maxEncryptionBufferSize (default 512 MiB). Streaming encryption throws ENCRYPTION_BUFFER_EXCEEDED when the accumulated plaintext exceeds the limit, since Web Crypto AES-GCM is a one-shot API that buffers all data. Refactored encrypt generator into a static private method. Task: 16.3
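The core of the guard is an accumulator that enforces the cap before handing anything to the one-shot API. A simplified sketch — `makeAccumulator` and its shape are illustrative, not the real WebCryptoAdapter surface:

```javascript
// Web Crypto AES-GCM encrypts in one shot, so streamed plaintext must be
// buffered; this enforces the cap on every push, before encryption runs.
function makeAccumulator(limit) {
  const chunks = [];
  let total = 0;
  return {
    push(chunk) {
      total += chunk.length;
      if (total > limit) {
        const err = new Error(`Buffered ${total} bytes (limit: ${limit})`);
        err.code = 'ENCRYPTION_BUFFER_EXCEEDED';
        throw err;
      }
      chunks.push(chunk);
    },
    // Concatenate only once, at finalize time.
    finalize() {
      return Buffer.concat(chunks);
    },
  };
}
```

Failing on `push` rather than at finalize means an over-limit stream is rejected as soon as the cap is crossed, not after the whole payload is held in memory.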
CDC deduplication is ineffective with encryption since ciphertext is pseudorandom — content-defined boundaries provide no dedup benefit. CasService.store() now emits an observability warning for this case. Task: 16.5
STREAM_ERROR now includes meta.orphanedBlobs — an array of OIDs for blobs that were successfully persisted before the stream failed. The error metric also reports the orphanedBlobs count for observability. Resolves task 16.10.
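The mechanics can be sketched as follows — `storeChunks`, `persistBlob`, and the oid strings are illustrative stand-ins for the real blob-persistence path:

```javascript
// Track every successfully persisted blob so a mid-stream failure can
// report what was written (and is now orphaned) via error meta.
async function storeChunks(chunks, persistBlob) {
  const persisted = [];
  try {
    for await (const chunk of chunks) {
      persisted.push(await persistBlob(chunk));
    }
    return persisted;
  } catch (cause) {
    const err = new Error(`Stream failed after ${persisted.length} blob(s): ${cause.message}`);
    err.code = 'STREAM_ERROR';
    err.meta = { orphanedBlobs: [...persisted] };
    throw err;
  }
}
```

A caller (or a metrics layer) can then read `err.meta.orphanedBlobs.length` for observability, or feed the OIDs to a cleanup pass.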
FixedChunker.chunk() now uses a pre-allocated Buffer.allocUnsafe(chunkSize) working buffer with a copy+offset pattern, matching CdcChunker's approach. Eliminates O(n²/chunkSize) total copies when the source yields many small buffers. Resolves task 16.4.
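A sketch of the copy+offset pattern: each incoming buffer is copied into one pre-allocated working buffer per emitted chunk, instead of repeatedly concatenating a growing tail. Names are illustrative:

```javascript
// Fixed-size chunking with a single working buffer per output chunk.
// Each source byte is copied exactly once, so many small input buffers
// no longer trigger O(n²/chunkSize) total copying.
function* fixedChunks(buffers, chunkSize) {
  let work = Buffer.allocUnsafe(chunkSize);
  let offset = 0;
  for (const buf of buffers) {
    let pos = 0;
    while (pos < buf.length) {
      const n = Math.min(chunkSize - offset, buf.length - pos);
      buf.copy(work, offset, pos, pos + n);
      offset += n;
      pos += n;
      if (offset === chunkSize) {
        yield work;
        work = Buffer.allocUnsafe(chunkSize); // fresh buffer; yielded one is owned by caller
        offset = 0;
      }
    }
  }
  if (offset > 0) yield work.subarray(0, offset); // final partial chunk
}
```

`allocUnsafe` is safe here because every yielded byte is overwritten by `copy` before it escapes.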
Add inspectAsset() and collectReferencedChunks() as canonical names for deleteAsset() and findOrphanedChunks() respectively. The old names were misleading — neither method performs destructive operations. Old names are preserved as deprecated aliases that emit observability warnings. Updated on CasService, facade, and all .d.ts files. Resolves task 16.7.
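The alias pattern can be sketched generically — `deprecateAlias` and the `warn` hook are illustrative; the real code routes the warning through the observability port:

```javascript
// Install a deprecated alias that forwards to the canonical method and
// warns once per process on first use.
function deprecateAlias(target, oldName, newName, warn) {
  let warned = false;
  target[oldName] = function (...args) {
    if (!warned) {
      warned = true;
      warn(`${oldName}() is deprecated; use ${newName}()`);
    }
    return this[newName](...args);
  };
}
```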
CasService emits a decryption_failed metric with slug context on every INTEGRITY_ERROR during encrypted restore, providing an audit trail for monitoring failed passphrase attempts. The CLI layer adds a 1-second delay after INTEGRITY_ERROR to slow brute-force attacks. The library API itself imposes no rate-limiting — callers manage their own policy. Resolves task 16.12.
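A sketch of the CLI-layer slowdown, with the sleep function injectable so tests can avoid real wall-clock delays (the same concern the review raises about fake timers). The 1000 ms value matches the description; everything else is illustrative:

```javascript
// Wrap an operation so that an integrity failure (bad passphrase) is
// re-thrown only after a fixed delay, throttling brute-force loops.
async function withBruteForceDelay(fn, { delayMs = 1000, sleep } = {}) {
  const wait = sleep ?? ((ms) => new Promise((resolve) => setTimeout(resolve, ms)));
  try {
    return await fn();
  } catch (err) {
    if (err.code === 'INTEGRITY_ERROR') {
      await wait(delayMs); // slow down repeated wrong-passphrase attempts
    }
    throw err;
  }
}
```

Keeping the delay in the CLI (not the library) matches the note above that the library imposes no rate-limiting policy of its own.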
Move docs/SECURITY.md to project root with new sections covering GCM nonce bounds (2^32 NIST limit), recommended key rotation frequency, KDF parameter guidance (PBKDF2/scrypt), and passphrase entropy. VaultService now tracks encryptionCount in vault metadata, incremented on each addToVault when the vault has encryption configured. An observability warning fires when the count exceeds 2^31, providing a safety margin before the NIST 2^32 limit. VaultService accepts an optional observability port (no-op default for backward compat). Resolves task 16.13.
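The counter logic can be sketched as a pure helper. The 2^31 warning threshold (half the NIST 2^32 AES-GCM invocation bound) follows the description; the metadata shape and function name are assumptions. The shallow copy also reflects the retry double-increment fix noted later in this thread:

```javascript
const NONCE_WARN_THRESHOLD = 2 ** 31; // safety margin below the NIST 2^32 GCM bound

// Return new metadata with the count bumped; never mutate the input,
// so a retried CAS write cannot double-increment shared state.
function bumpEncryptionCount(metadata, warn) {
  const next = { ...metadata, encryptionCount: (metadata.encryptionCount ?? 0) + 1 };
  if (next.encryptionCount > NONCE_WARN_THRESHOLD) {
    warn(`encryptionCount ${next.encryptionCount} exceeds the 2^31 safety margin; rotate keys`);
  }
  return next;
}
```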
📝 Walkthrough

Implements M16 hardening: CLI config/passphrase resolution, restore/encryption buffer guards, chunk-size bounds, crypto adapter streaming normalization, lifecycle method renames with deprecated aliases, orphaned-blob tracking, encryption-count observability, git-hook, tests, and audit/docs additions.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as git-cas CLI
    participant Resolver as PassphraseResolver
    participant FS as FileSystem
    participant Env as Process/TTY
    participant Vault as VaultService
    User->>CLI: Run vault command (opts)
    CLI->>Resolver: resolvePassphrase(opts, extra)
    alt file provided
        Resolver->>FS: readPassphraseFile(path)
        FS-->>Resolver: passphrase
    else flag provided
        Resolver-->>Resolver: use flag value
    else env var set
        Resolver->>Env: read env var
        Env-->>Resolver: passphrase
    else TTY available
        Resolver->>Env: promptPassphrase({confirm})
        Env-->>User: prompt (stderr)
        User-->>Env: enter passphrase (hidden)
        Env-->>Resolver: passphrase
    else
        Resolver-->>Resolver: undefined
    end
    Resolver-->>CLI: resolved passphrase
    CLI->>Vault: perform operation with key/undefined
    Vault-->>CLI: result
    CLI-->>User: complete
```
```mermaid
sequenceDiagram
    participant Client as CasService / Caller
    participant Adapter as CryptoAdapter
    participant WebAPI as Web Crypto API
    participant State as SharedState
    Client->>Adapter: createEncryptionStream(key)
    Adapter->>Adapter: validate key, init nonce, state
    Adapter-->>Client: { encrypt(chunk)->cipher, finalize()->meta }
    loop stream chunks
        Client->>Adapter: encrypt(chunk)
        alt buffer > maxEncryptionBufferSize
            Adapter-->>Client: throw ENCRYPTION_BUFFER_EXCEEDED
        else
            Adapter->>WebAPI: encrypt chunk with nonce/state
            WebAPI-->>Adapter: ciphertext (+ tag on finalize)
            Adapter->>State: record tag/consumption
            Adapter-->>Client: yield ciphertext chunk
        end
    end
    Client->>Adapter: finalize()
    alt stream not fully consumed
        Adapter-->>Client: throw STREAM_NOT_CONSUMED
    else
        Adapter->>Adapter: assemble meta (nonce, tag)
        Adapter-->>Client: return meta
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
@coderabbitai review please 🥕
✅ Actions performed: review triggered.
Actionable comments posted: 10
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
bin/git-cas.js (1)
112-125: ⚠️ Potential issue | 🟠 Major — Avoid passphrase prompting before confirming vault encryption.
`resolvePassphrase()` is invoked before metadata lookup, so interactive runs can prompt even when the vault is not encrypted.

Proposed fix
```diff
 async function resolveEncryptionKey(cas, opts) {
   if (opts.keyFile) {
     return readKeyFile(opts.keyFile);
   }
-  const passphrase = await resolvePassphrase(opts);
-  if (!passphrase) {
-    return undefined;
-  }
   const metadata = await cas.getVaultMetadata();
-  if (metadata?.encryption) {
-    return deriveVaultKey(cas, metadata, passphrase);
+  const hasExplicitPassphraseSource = Boolean(
+    opts.vaultPassphraseFile || opts.vaultPassphrase || process.env.GIT_CAS_PASSPHRASE,
+  );
+  if (!metadata?.encryption) {
+    if (hasExplicitPassphraseSource) {
+      process.stderr.write('warning: passphrase ignored (vault is not encrypted)\n');
+    }
+    return undefined;
   }
-  process.stderr.write('warning: passphrase ignored (vault is not encrypted)\n');
-  return undefined;
+  const passphrase = await resolvePassphrase(opts);
+  if (!passphrase) {
+    return undefined;
+  }
+  return deriveVaultKey(cas, metadata, passphrase);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/git-cas.js` around lines 112 - 125, The function resolveEncryptionKey currently calls resolvePassphrase before checking vault metadata which can prompt the user unnecessarily; change the flow in resolveEncryptionKey so it first returns readKeyFile(opts.keyFile) if provided, then calls cas.getVaultMetadata() and only if metadata?.encryption is truthy call resolvePassphrase and then deriveVaultKey(cas, metadata, passphrase); if metadata shows no encryption, write the warning and return undefined without prompting. Ensure you still handle the case where passphrase resolution returns falsy by returning undefined.

src/domain/services/CasService.js (1)
500-525: ⚠️ Potential issue | 🟠 Major — Compressed restores can exceed the configured memory cap.
At Line 500, the limit check uses pre-decompression chunk bytes. At Line 524, decompression can inflate far beyond that, bypassing the intended cap for compressed assets.
Immediate mitigation
```diff
 if (manifest.compression) {
   buffer = await this._decompress(buffer);
+  if (buffer.length > this.maxRestoreBufferSize) {
+    throw new CasError(
+      `Decompressed restore is ${buffer.length} bytes (limit: ${this.maxRestoreBufferSize})`,
+      'RESTORE_TOO_LARGE',
+      { size: buffer.length, limit: this.maxRestoreBufferSize },
+    );
+  }
 }
```

A stronger long-term fix is bounded streaming decompression so over-limit payloads are rejected before full allocation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/domain/services/CasService.js` around lines 500 - 525, The current size check uses pre-decompression chunk sizes (totalSize) but decompression in _decompress can inflate the payload beyond this limit; update the restore flow to enforce the maxRestoreBufferSize after decompression (and/or during streaming decompression): after Buffer.concat(await this._readAndVerifyChunks(manifest.chunks)) and after calling this._decompress(buffer), verify buffer.length <= this.maxRestoreBufferSize and throw a CasError ('RESTORE_TOO_LARGE', include size and limit) if it exceeds; for a long-term fix replace _decompress with a bounded streaming decompressor that rejects/aborts when output exceeds this.maxRestoreBufferSize to avoid unbounded memory allocation.
🧹 Nitpick comments (4)
src/infrastructure/adapters/NodeCryptoAdapter.js (1)
28-44: JSDoc return type is inconsistent with async implementation.

The JSDoc on line 32 documents a synchronous return type `{ buf: Buffer, meta: ... }`, but the method is now `async` (line 34), which means it returns `Promise<{ buf: Buffer, meta: ... }>`. This mismatch can cause confusion for consumers relying on type hints.

📝 Proposed fix to update JSDoc
```diff
 /**
  * @override
  * @param {Buffer|Uint8Array} buffer - Plaintext to encrypt.
  * @param {Buffer|Uint8Array} key - 32-byte encryption key.
- * @returns {{ buf: Buffer, meta: import('../../ports/CryptoPort.js').EncryptionMeta }}
+ * @returns {Promise<{ buf: Buffer, meta: import('../../ports/CryptoPort.js').EncryptionMeta }>}
  */
 async encryptBuffer(buffer, key) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/infrastructure/adapters/NodeCryptoAdapter.js` around lines 28 - 44, The JSDoc for encryptBuffer documents a synchronous return but the function is async; update the `@returns` annotation for the encryptBuffer method to reflect it returns a Promise of the object (e.g., Promise<{ buf: Buffer, meta: import('../../ports/CryptoPort.js').EncryptionMeta }>) so the types match the async implementation in encryptBuffer and consumers get correct hints.

test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js (2)
35-60: Consider consolidating error assertions.

The pattern of using both `expect(...).rejects.toThrow(CasError)` followed by a separate try/catch to verify `err.code` is verbose. Vitest's `.rejects` can be chained with property matchers.

♻️ Proposed refactor for cleaner assertions
```diff
 it('decryptBuffer rejects INVALID_KEY_TYPE for string key', async () => {
   const { buf, meta } = await adapter.encryptBuffer(Buffer.from('test'), key);
-  await expect(
-    Promise.resolve().then(() => adapter.decryptBuffer(buf, 'not-a-buffer', meta)),
-  ).rejects.toThrow(CasError);
-
-  try {
-    await Promise.resolve().then(() => adapter.decryptBuffer(buf, 'not-a-buffer', meta));
-  } catch (err) {
-    expect(err.code).toBe('INVALID_KEY_TYPE');
-  }
+  await expect(
+    Promise.resolve().then(() => adapter.decryptBuffer(buf, 'not-a-buffer', meta)),
+  ).rejects.toMatchObject({ code: 'INVALID_KEY_TYPE' });
 });

 it('decryptBuffer rejects INVALID_KEY_LENGTH for 16-byte key', async () => {
   const shortKey = Buffer.alloc(16, 0xcc);
   const { buf, meta } = await adapter.encryptBuffer(Buffer.from('test'), key);
-  await expect(
-    Promise.resolve().then(() => adapter.decryptBuffer(buf, shortKey, meta)),
-  ).rejects.toThrow(CasError);
-
-  try {
-    await Promise.resolve().then(() => adapter.decryptBuffer(buf, shortKey, meta));
-  } catch (err) {
-    expect(err.code).toBe('INVALID_KEY_LENGTH');
-  }
+  await expect(
+    Promise.resolve().then(() => adapter.decryptBuffer(buf, shortKey, meta)),
+  ).rejects.toMatchObject({ code: 'INVALID_KEY_LENGTH' });
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js` around lines 35 - 60, Replace the two-step rejection checks in the 'decryptBuffer rejects INVALID_KEY_TYPE for string key' and 'decryptBuffer rejects INVALID_KEY_LENGTH for 16-byte key' tests with a single consolidated assertion: call adapter.decryptBuffer via Promise.resolve().then(...) and use await expect(...).rejects.toMatchObject({ code: 'INVALID_KEY_TYPE' }) (and { code: 'INVALID_KEY_LENGTH' } for the other test) instead of the separate expect(...).rejects.toThrow(CasError) + try/catch; keep references to adapter.decryptBuffer, CasError, and the key/shortKey variables so the tests still validate the correct error code.
62-71: Same consolidation opportunity for synchronous throw assertion.

The try/catch after `expect(() => finalize()).toThrow(CasError)` is also redundant.

♻️ Proposed refactor
```diff
 it('createEncryptionStream.finalize() throws STREAM_NOT_CONSUMED before consumption', () => {
   const { finalize } = adapter.createEncryptionStream(key);
-  expect(() => finalize()).toThrow(CasError);
-
-  try {
-    finalize();
-  } catch (err) {
-    expect(err.code).toBe('STREAM_NOT_CONSUMED');
-  }
+  expect(() => finalize()).toThrow(
+    expect.objectContaining({ code: 'STREAM_NOT_CONSUMED' }),
+  );
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js` around lines 62 - 71, The test redundantly calls finalize() twice and wraps the second call in a try/catch after already asserting expect(() => finalize()).toThrow(CasError); instead capture the thrown error once and assert both its type and code in one flow: call adapter.createEncryptionStream(key).finalize via a sync wrapper that returns the caught error (e.g., const err = (() => { try { finalize(); } catch (e) { return e } })()), then assert err is instance of CasError and err.code === 'STREAM_NOT_CONSUMED'; reference createEncryptionStream and finalize to locate the test and remove the duplicate expect/try-catch.

index.js (1)
56-75: Document `maxRestoreBufferSize` in constructor JSDoc and `#config` type.

Line 68 adds `maxRestoreBufferSize`, but the constructor JSDoc and Line 74 type annotation don't include it.

Proposed fix
```diff
 /**
  * @param {Object} options
@@
  * @param {import('./src/ports/ChunkingPort.js').default} [options.chunker] - Pre-built ChunkingPort instance (advanced).
+ * @param {number} [options.maxRestoreBufferSize] - Max buffered restore size in bytes for encrypted/compressed restores.
  */
@@
-/** @type {{ plumbing: *, chunkSize?: number, codec?: *, policy?: *, crypto?: *, observability?: *, merkleThreshold?: number, concurrency?: number, chunking?: *, chunker?: * }} */
+/** @type {{ plumbing: *, chunkSize?: number, codec?: *, policy?: *, crypto?: *, observability?: *, merkleThreshold?: number, concurrency?: number, chunking?: *, chunker?: *, maxRestoreBufferSize?: number }} */
 #config;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@index.js` around lines 56 - 75, The constructor now accepts maxRestoreBufferSize but the JSDoc param list and the inline `#config` type comment are missing it; update the constructor JSDoc to add a `@param` entry for maxRestoreBufferSize (e.g., number and optional, with a short description) and extend the `#config` type annotation comment to include maxRestoreBufferSize?: number so the documented type matches the actual constructor arguments (refer to the constructor signature and the `#config` comment to locate where to add these).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/git-cas.js`:
- Around line 209-211: The conflict check inside the action handler currently
calls resolvePassphrase() which may prompt/consume stdin; change the validation
in the runAction(...) handler so it only inspects opts flags (e.g.,
opts.recipient, opts.keyFile, opts.vaultPassphrase flag) without awaiting
resolvePassphrase(), and defer calling await resolvePassphrase(opts) until after
validation and when you actually need the passphrase for encryption setup;
update the error message logic in the same anonymous action function so it
throws when both recipient and a key/passphrase flag are present, but does not
invoke resolvePassphrase() during that check.
In `@bin/ui/passphrase-prompt.js`:
- Around line 40-43: Normalize Windows CRLF by replacing the current regex that
only strips '\n' with one that also removes a preceding '\r' — update both
places where you call .replace(/\n$/, '') (the
Buffer.concat(chunks).toString('utf8').replace(...) and the content.replace(...)
after readFile(filePath, 'utf8')) to use .replace(/\r?\n$/, '') (or equivalent
trim of trailing CR/LF) so a trailing '\r' from CRLF is not left in the
passphrase.
In `@ROADMAP.md`:
- Line 281: The heading "### 16.1 — Crypto Adapter Behavioral Normalization
*(P0)* — C8" jumps from a top-level "#" section and should use the next
incremental level; change the leading "###" to "##" (or match the appropriate
level consistent with the surrounding M16 section headers) so the heading levels
are properly incremental and MD001 is resolved.
In `@src/domain/services/CasService.js`:
- Around line 38-75: The constructor currently accepts maxRestoreBufferSize
without validation; add validation to ensure it's an integer within safe bounds
(e.g., >= 1024 bytes and <= a sensible cap such as 2 * 1024 * 1024 * 1024) and
throw a clear Error for invalid values. Implement this by extending the existing
static `#validateConstructorArgs` to accept maxRestoreBufferSize (or add a new
private static method like `#validateRestoreBufferSize`) and check
Number.isInteger(maxRestoreBufferSize) && maxRestoreBufferSize >= 1024 &&
maxRestoreBufferSize <= MAX_RESTORE_BUFFER_SIZE (define MAX_RESTORE_BUFFER_SIZE
constant), then update the constructor to call the validator before assigning
this.maxRestoreBufferSize and surface a descriptive error message on failure.
In `@src/infrastructure/adapters/WebCryptoAdapter.js`:
- Around line 19-22: The constructor of WebCryptoAdapter currently assigns the
maxEncryptionBufferSize directly to the private field (`#maxEncryptionBufferSize`)
without validation; add input validation in the constructor to ensure the
provided maxEncryptionBufferSize is a finite, positive integer (e.g.,
Number.isFinite and Number.isInteger and > 0) and not Infinity/NaN/0/negative;
if the value is invalid, either throw a TypeError or fall back to the safe
default (512 * 1024 * 1024) before assigning to this.#maxEncryptionBufferSize so
the guard cannot be bypassed.
In `@src/infrastructure/chunkers/FixedChunker.js`:
- Around line 20-25: The constructor of FixedChunker must validate that the
provided chunkSize is a positive finite integer and not zero; update the
FixedChunker constructor (where `#chunkSize` is set) to throw a RangeError if
chunkSize is not a number, is NaN, is not finite, is less than 1, or is not an
integer (in addition to the existing upper-bound check), so the class never
accepts 0 or invalid numeric values that would produce empty chunks.
In `@test/unit/cli/actions.test.js`:
- Around line 126-143: The timing assertions in the tests ("delays ~1s on
INTEGRITY_ERROR before writing output" and "no delay for non-INTEGRITY_ERROR
codes") are flaky because they use Date.now() to measure wall-clock time; change
the tests to use Vitest's fake timers (vi.useFakeTimers(),
vi.advanceTimersByTimeAsync()) to deterministically advance timers around the
runAction(...) invocation and assert elapsed behavior without real time;
specifically, replace Date.now() measurements in those two it blocks and drive
the setTimeout-based delay via vi.advanceTimersByTimeAsync(1000) (or appropriate
values) and restore timers after each test, but first validate that
vi.useFakeTimers()/advanceTimersByTimeAsync work reliably on Node.js 22.x, Bun,
and Deno before committing this refactor.
In `@test/unit/domain/services/CasService.kdfBruteForce.test.js`:
- Around line 94-102: The test currently only measures elapsed time when calling
service.restore({ manifest, encryptionKey: wrongKey }) but doesn't assert that
decryption actually failed; modify the test to capture the thrown error (e.g.,
assign the caught error in the catch block or use await
expect(service.restore(...)).rejects.toEqual(INTEGRITY_ERROR)) and assert it
equals INTEGRITY_ERROR (reference the INTEGRITY_ERROR constant), ensuring the
test fails if no error is thrown, while keeping the existing timing assertion
that elapsed is less than 500ms.
In `@test/unit/domain/services/CasService.orphanedBlobs.test.js`:
- Around line 55-81: The three tests invoking service.store with failingSource
currently use try/catch but lack a guarantee the call throws, so add an explicit
failure if service.store resolves; either convert each to use Jest's rejection
matchers (e.g., await expect(service.store({ source: failingSource(n), slug:
'fail', filename: 'f.bin' })).rejects.toHaveProperty('code', 'STREAM_ERROR') and
then assert on err.meta.orphanedBlobs) or keep the try/catch and insert a fail()
(or throw new Error('Expected error')) immediately after the await
service.store(...) call so the test fails when no error is thrown; reference the
service.store call and failingSource helper when applying the change.
In `@test/unit/domain/services/CasService.restoreGuard.test.js`:
- Around line 98-109: The test "error meta includes size and limit" currently
only asserts inside the catch block so it can pass silently if no error is
thrown; after invoking service.restoreStream(...).next() (the call to
restoreStream on the service in this test), add an explicit failure assertion
immediately after the awaited call (e.g., fail('expected RESTORE_TOO_LARGE
error') or throw new Error) so the test fails when no error is thrown, keeping
the existing catch block assertions for err.code and err.meta unchanged.
---
Outside diff comments:
In `@bin/git-cas.js`:
- Around line 112-125: The function resolveEncryptionKey currently calls
resolvePassphrase before checking vault metadata which can prompt the user
unnecessarily; change the flow in resolveEncryptionKey so it first returns
readKeyFile(opts.keyFile) if provided, then calls cas.getVaultMetadata() and
only if metadata?.encryption is truthy call resolvePassphrase and then
deriveVaultKey(cas, metadata, passphrase); if metadata shows no encryption,
write the warning and return undefined without prompting. Ensure you still
handle the case where passphrase resolution returns falsy by returning
undefined.
In `@src/domain/services/CasService.js`:
- Around line 500-525: The current size check uses pre-decompression chunk sizes
(totalSize) but decompression in _decompress can inflate the payload beyond this
limit; update the restore flow to enforce the maxRestoreBufferSize after
decompression (and/or during streaming decompression): after Buffer.concat(await
this._readAndVerifyChunks(manifest.chunks)) and after calling
this._decompress(buffer), verify buffer.length <= this.maxRestoreBufferSize and
throw a CasError ('RESTORE_TOO_LARGE', include size and limit) if it exceeds;
for a long-term fix replace _decompress with a bounded streaming decompressor
that rejects/aborts when output exceeds this.maxRestoreBufferSize to avoid
unbounded memory allocation.
---
Nitpick comments:
In `@index.js`:
- Around line 56-75: The constructor now accepts maxRestoreBufferSize but the
JSDoc param list and the inline `#config` type comment are missing it; update the
constructor JSDoc to add a `@param` entry for maxRestoreBufferSize (e.g., number
and optional, with a short description) and extend the `#config` type annotation
comment to include maxRestoreBufferSize?: number so the documented type matches
the actual constructor arguments (refer to the constructor signature and the
`#config` comment to locate where to add these).
In `@src/infrastructure/adapters/NodeCryptoAdapter.js`:
- Around line 28-44: The JSDoc for encryptBuffer documents a synchronous return
but the function is async; update the `@returns` annotation for the encryptBuffer
method to reflect it returns a Promise of the object (e.g., Promise<{ buf:
Buffer, meta: import('../../ports/CryptoPort.js').EncryptionMeta }>) so the
types match the async implementation in encryptBuffer and consumers get correct
hints.
In `@test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js`:
- Around line 35-60: Replace the two-step rejection checks in the 'decryptBuffer
rejects INVALID_KEY_TYPE for string key' and 'decryptBuffer rejects
INVALID_KEY_LENGTH for 16-byte key' tests with a single consolidated assertion:
call adapter.decryptBuffer via Promise.resolve().then(...) and use await
expect(...).rejects.toMatchObject({ code: 'INVALID_KEY_TYPE' }) (and { code:
'INVALID_KEY_LENGTH' } for the other test) instead of the separate
expect(...).rejects.toThrow(CasError) + try/catch; keep references to
adapter.decryptBuffer, CasError, and the key/shortKey variables so the tests
still validate the correct error code.
- Around line 62-71: The test redundantly calls finalize() twice and wraps the
second call in a try/catch after already asserting expect(() =>
finalize()).toThrow(CasError); instead capture the thrown error once and assert
both its type and code in one flow: call
adapter.createEncryptionStream(key).finalize via a sync wrapper that returns the
caught error (e.g., const err = (() => { try { finalize(); } catch (e) { return
e } })()), then assert err is instance of CasError and err.code ===
'STREAM_NOT_CONSUMED'; reference createEncryptionStream and finalize to locate
the test and remove the duplicate expect/try-catch.
ℹ️ Review info
Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 315149ad-ba54-4dd7-9a41-511654d0c827
📒 Files selected for processing (35)
- CHANGELOG.md
- CODE-EVAL.md
- README.md
- ROADMAP.md
- SECURITY.md
- bin/actions.js
- bin/git-cas.js
- bin/ui/passphrase-prompt.js
- index.d.ts
- index.js
- scripts/hooks/pre-commit
- scripts/hooks/pre-push
- scripts/install-hooks.sh
- src/domain/errors/CasError.js
- src/domain/services/CasService.d.ts
- src/domain/services/CasService.js
- src/domain/services/VaultService.js
- src/infrastructure/adapters/NodeCryptoAdapter.js
- src/infrastructure/adapters/WebCryptoAdapter.js
- src/infrastructure/chunkers/CdcChunker.js
- src/infrastructure/chunkers/FixedChunker.js
- test/unit/cli/actions.test.js
- test/unit/cli/passphrase-prompt.test.js
- test/unit/domain/errors/CasError.test.js
- test/unit/domain/services/CasService.chunkSizeBound.test.js
- test/unit/domain/services/CasService.dedupWarning.test.js
- test/unit/domain/services/CasService.kdfBruteForce.test.js
- test/unit/domain/services/CasService.lifecycle.test.js
- test/unit/domain/services/CasService.orphanedBlobs.test.js
- test/unit/domain/services/CasService.restoreGuard.test.js
- test/unit/domain/services/VaultService.encryptionCount.test.js
- test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js
- test/unit/infrastructure/adapters/WebCryptoAdapter.bufferGuard.test.js
- test/unit/infrastructure/chunkers/ChunkerBounds.test.js
- test/unit/infrastructure/chunkers/FixedChunker.test.js
VaultService.test.js now passes a mockObservability() port (vi.fn() mocks) to all 46 tests instead of relying on the silent no-op default. rotateVaultPassphrase.test.js now passes SilentObserver explicitly. If observability wiring breaks, the test suite will catch it.
- resolveEncryptionKey now checks vault metadata before calling resolvePassphrase, avoiding unnecessary TTY prompts for unencrypted vaults.
- The store action's recipient-conflict check uses hasPassphraseSource() (flag/env inspection) instead of awaiting resolvePassphrase(), which could consume stdin as a side effect.
- readPassphraseFile strips trailing CRLF (\r\n) in addition to LF, preventing mismatched passphrases from Windows-edited files.
- CasService: validate maxRestoreBufferSize (integer >= 1024). #validateConstructorArgs now accepts an object to stay within the max-params lint rule.
- WebCryptoAdapter: validate maxEncryptionBufferSize (finite, positive).
- FixedChunker: validate the chunkSize lower bound (positive integer). Prevents infinite loops from chunkSize=0 or NaN.
_restoreBuffered already checked pre-decompression chunk sizes against maxRestoreBufferSize, but decompression can inflate far beyond that bound. A second check after _decompress now throws RESTORE_TOO_LARGE when the decompressed buffer exceeds the configured limit.
- orphanedBlobs: add expect.unreachable after store() in 3 error tests.
- restoreGuard: add expect.unreachable in the size/limit meta test.
- kdfBruteForce: assert the INTEGRITY_ERROR code, not just timing.
- conformance: consolidate double try/catch into single toMatchObject / objectContaining assertions; remove the unused CasError import.
…ding level

- NodeCryptoAdapter.encryptBuffer @returns wrapped in Promise<>.
- index.js constructor JSDoc and #config type include maxRestoreBufferSize.
- ROADMAP.md: add a ## Task Cards heading to fix the MD001 heading-level jump.
@coderabbitai review please 🥕

✅ Actions performed: Review triggered.
Major fixes:
- CasService.d.ts: add missing maxRestoreBufferSize and chunker to the CasServiceOptions interface, preventing type drift for ./service consumers
- VaultService.addToVault: shallow-copy state.metadata before mutation, preventing encryptionCount from double-incrementing on CAS retries

Minor fixes:
- CasService: add Number.isInteger check for chunkSize validation
- CasService: attach orphanedBlobs meta to CasError on re-throw in _chunkAndStore instead of discarding it
- CasService: add @param JSDoc for maxRestoreBufferSize
- VaultService: fix observability type annotation (ObservabilityPort)
- bin/git-cas.js: fix orphaned JSDoc block for resolvePassphrase
- passphrase-prompt: reject empty passphrases, warn on insecure file permissions, add error/close listeners to readline
- WebCryptoAdapter: document static #makeEncryptGenerator pattern
- actions.test.js: use fake timers instead of a real 1s wall-clock delay
- CHANGELOG: re-categorize features/refactors misplaced under Fixed
- ROADMAP: fix version plan table sort order
- GUIDE: update chunkSize error message to match implementation
Actionable comments posted: 4
🧹 Nitpick comments (1)
test/unit/domain/services/CasService.orphanedBlobs.test.js (1)
86-95: Consider adding `expect.unreachable()` for consistency with other error-path tests.

Unlike the other tests in this suite (lines 55-62, 65-74, 77-84), this test uses an empty `catch` block without verifying that an error was actually thrown. If `service.store()` unexpectedly resolves, the test would still pass.

♻️ Proposed fix

```diff
 it('emits metric with orphaned blob count', async () => {
   try {
     await service.store({ source: failingSource(2), slug: 'fail', filename: 'f.bin' });
+    expect.unreachable('should have thrown STREAM_ERROR');
   } catch {
     // expected
   }
   const errorMetrics = observability.metric.mock.calls.filter((c) => c[0] === 'error');
   expect(errorMetrics.length).toBeGreaterThan(0);
   expect(errorMetrics[0][1]).toHaveProperty('orphanedBlobs', 2);
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/domain/services/CasService.orphanedBlobs.test.js` around lines 86 - 95, The test swallows errors and will pass if service.store unexpectedly resolves; after calling service.store({ source: failingSource(2), ... }) add an assertion to fail the test when no error is thrown (e.g., call expect.unreachable() immediately after the await) so the test only succeeds when the call actually throws; reference the service.store call and failingSource(2) in the test to locate where to insert the assertion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@GUIDE.md`:
- Around line 1567-1570: Update the FAQ entry for the "Chunk size must be an
integer >= 1024 bytes" error to mention both the lower and new upper bounds:
state that chunkSize must be >= 1 KiB and there is a hard cap at 100 MiB (and
that setting chunkSize > 10 MiB will trigger a warning), reference the parameter
name chunkSize and the 16.6 change, and advise users to increase the value only
up to 100 MiB (or reduce it if they hit the cap) while noting the >10 MiB
warning behavior.
In `@ROADMAP.md`:
- Line 231: The M16 "Capstone" estimates are inconsistent between the milestone
row labeled "M16| Capstone" and the summary table entries for the same task;
pick the correct values (authoritative estimate) and make them identical in both
places by editing the "M16| Capstone" row and the summary table rows that list
the same task cards so that the LoC and hours match (search for the "M16",
"Capstone", and the summary table entries to locate both occurrences); ensure
the final numbers are updated in both spots and keep the format "| M16| Capstone
| Audit remediation | v5.3.0 | 13 | ~XXX | ~YYh | 🔲 OPEN |".
In `@src/domain/services/VaultService.js`:
- Around line 398-410: The code increments metadata.encryptionCount inside
addToVault() even though addToVault() merely updates the index and doesn't
perform encryption; move the increment out of addToVault() and instead increment
metadata.encryptionCount at the actual encryption boundary in the store layer
where a tree is encrypted and persisted (the function that performs the vault
encryption/serialization for a treeOid), and keep using
VaultService.ENCRYPTION_COUNT_WARN and metadata.encryptionCount for the warning;
update readState()/state saving logic so the increment is applied to the same
metadata object persisted by VaultService rather than being incremented on
re-indexing or re-adding existing trees.
In `@test/unit/cli/passphrase-prompt.test.js`:
- Around line 45-60: The test relies on writeFile(..., { mode }) which is
affected by the process umask; ensure deterministic permissions by explicitly
forcing the desired mode after creation (e.g., call fs.chmodSync(tmpPath, 0o644)
in the "warning for world-readable file" test and fs.chmodSync(tmpPath, 0o600)
in the "no warning for restricted file permissions" test) or temporarily
set/restore process.umask around writeFile; update the tests that call writeFile
and readPassphraseFile and keep the existing writeSpy/origWrite handling and
tmpPath usage.
---
Nitpick comments:
In `@test/unit/domain/services/CasService.orphanedBlobs.test.js`:
- Around line 86-95: The test swallows errors and will pass if service.store
unexpectedly resolves; after calling service.store({ source: failingSource(2),
... }) add an assertion to fail the test when no error is thrown (e.g., call
expect.unreachable() immediately after the await) so the test only succeeds when
the call actually throws; reference the service.store call and failingSource(2)
in the test to locate where to insert the assertion.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: fdd6ec3c-901a-4fea-a198-8b49eeea00b8
📒 Files selected for processing (23)
- CHANGELOG.md
- GUIDE.md
- ROADMAP.md
- bin/git-cas.js
- bin/ui/passphrase-prompt.js
- index.js
- src/domain/services/CasService.d.ts
- src/domain/services/CasService.js
- src/domain/services/VaultService.js
- src/infrastructure/adapters/NodeCryptoAdapter.js
- src/infrastructure/adapters/WebCryptoAdapter.js
- src/infrastructure/chunkers/FixedChunker.js
- test/unit/cli/actions.test.js
- test/unit/cli/passphrase-prompt.test.js
- test/unit/domain/services/CasService.errors.test.js
- test/unit/domain/services/CasService.kdfBruteForce.test.js
- test/unit/domain/services/CasService.orphanedBlobs.test.js
- test/unit/domain/services/CasService.restoreGuard.test.js
- test/unit/domain/services/rotateVaultPassphrase.test.js
- test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js
- test/unit/infrastructure/adapters/WebCryptoAdapter.bufferGuard.test.js
- test/unit/infrastructure/chunkers/ChunkerBounds.test.js
- test/unit/vault/VaultService.test.js
🚧 Files skipped from review as they are similar to previous changes (8)
- test/unit/infrastructure/chunkers/ChunkerBounds.test.js
- src/infrastructure/chunkers/FixedChunker.js
- test/unit/domain/services/CasService.kdfBruteForce.test.js
- test/unit/domain/services/CasService.restoreGuard.test.js
- test/unit/infrastructure/adapters/CryptoAdapter.conformance.test.js
- bin/ui/passphrase-prompt.js
- test/unit/infrastructure/adapters/WebCryptoAdapter.bufferGuard.test.js
- test/unit/cli/actions.test.js
```js
// Shallow copy to avoid mutating readState()'s object on CAS retries.
const metadata = { ...(state.metadata || { version: 1 }) };
if (metadata.encryption) {
  // Tracks nonce-relevant operations: every addToVault on an encrypted
  // vault implies an encryption occurred at the store layer.
  metadata.encryptionCount = (metadata.encryptionCount || 0) + 1;
  if (metadata.encryptionCount >= VaultService.ENCRYPTION_COUNT_WARN) {
    this.observability.log(
      'warn',
      `Vault encryption count (${metadata.encryptionCount}) exceeds ` +
        `${VaultService.ENCRYPTION_COUNT_WARN} — rotate your key`,
      { encryptionCount: metadata.encryptionCount },
    );
```
Track encryptionCount where encryption actually happens.
addToVault() only updates the vault index; it has no evidence that the referenced treeOid was freshly encrypted with the vault key. This will overcount for adopted existing trees, re-adds of the same tree, and direct VaultService usage, so the rotation warning can fire without real key usage.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/domain/services/VaultService.js` around lines 398 - 410, The code
increments metadata.encryptionCount inside addToVault() even though addToVault()
merely updates the index and doesn't perform encryption; move the increment out
of addToVault() and instead increment metadata.encryptionCount at the actual
encryption boundary in the store layer where a tree is encrypted and persisted
(the function that performs the vault encryption/serialization for a treeOid),
and keep using VaultService.ENCRYPTION_COUNT_WARN and metadata.encryptionCount
for the warning; update readState()/state saving logic so the increment is
applied to the same metadata object persisted by VaultService rather than being
incremented on re-indexing or re-adding existing trees.
@coderabbitai Thanks for flagging. This is by design — pushing back with context.
Why the counter lives in addToVault(): VaultService manages vault state and has no visibility into whether the caller encrypted data. Moving the counter to the store layer would require VaultService awareness inside CasService's store path, crossing the hexagonal architecture boundary (VaultService is a domain service; CasService orchestrates it).
Why overcounting is safe: The counter is a conservative heuristic, not a precise measurement. Every addToVault() on an encrypted vault is a reasonable proxy for "an encryption probably happened." The warning threshold at 2^31 is half the NIST 2^32 GCM limit, providing ample margin for overcounting. The practical overcounting scenarios (re-adds with force: true without fresh encryption) are rare in normal usage — the CLI's store --tree always encrypts, and key rotation / recipient operations do use nonces for DEK re-wrapping.
The tradeoff: Precise tracking would require either (a) leaking encryption awareness into VaultService, or (b) adding a counter at the CasService store layer that VaultService reads back — both add coupling for minimal practical benefit given the 2x safety margin.
Leaving as-is. Happy to discuss further if there's a concrete scenario where the overcounting margin is insufficient.
@flyingrobots, that's a well-reasoned justification. The architectural boundary argument is compelling — having VaultService reach into CasService's encryption path (or vice versa) would be a genuine violation of the hexagonal design. The 2× safety margin below the NIST GCM limit makes the conservative heuristic approach sound, and the overcounting scenarios you describe (force re-adds without fresh encryption) are indeed edge cases in normal usage.
Acknowledged — leaving as-is is the right call here.
ฅ(=`ω´=)ฅ
✏️ Learnings added
Learnt from: flyingrobots
Repo: git-stunts/git-cas PR: 17
File: src/domain/services/VaultService.js:398-410
Timestamp: 2026-03-08T18:29:36.612Z
Learning: In `src/domain/services/VaultService.js`, `encryptionCount` is intentionally incremented inside `addToVault()` rather than at the actual encryption boundary. This is a deliberate design decision: VaultService is a pure domain service with no visibility into whether the caller (CasService) actually performed encryption. Incrementing here is a conservative heuristic — every `addToVault()` on an encrypted vault is treated as a proxy for "an encryption probably happened." The warning threshold is `VaultService.ENCRYPTION_COUNT_WARN = 2^31`, which is half the NIST GCM nonce limit of `2^32`, providing a 2× safety margin that accommodates the overcounting. Moving the counter to the CasService store layer would cross the hexagonal architecture boundary and add coupling for minimal practical benefit.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
… file

Add CLI flags to `git cas store`: --gzip, --strategy, --chunk-size, --concurrency, --codec, --merkle-threshold, --target-chunk-size, --min-chunk-size, --max-chunk-size.

Add CLI flags to `git cas restore`: --concurrency, --max-restore-buffer.

Introduce `.casrc` JSON config file support — placed at the repo root, it provides defaults for all CLI flags. CLI flags always take precedence.

Fix ROADMAP.md M16 milestone summary LoC/hours discrepancy (~430/~28h corrected to ~698/~21h to match the detailed task breakdown).
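The flags-over-`.casrc` precedence can be sketched as below; the key list is illustrative only, and the real mergeConfig in bin/config.js covers more options than shown here.

```javascript
// Minimal sketch of CLI-flag-over-config-file precedence, assuming a
// mergeConfig-style helper; not the actual bin/config.js implementation.
function mergeConfigSketch(cliOpts, fileConfig) {
  const casConfig = {};
  for (const key of ['chunkSize', 'concurrency', 'merkleThreshold']) {
    const value = cliOpts[key] ?? fileConfig[key]; // CLI flag wins when set
    if (value !== undefined) casConfig[key] = value;
  }
  return casConfig;
}
```

Keys absent from both sources stay absent, so CasService's own defaults still apply downstream.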
…9–V12

All 10 concerns (C1–C10) now marked ✅ MITIGATED with status lines referencing their implementing tasks. Vision 6 (passphrase prompt) marked ✅ DONE — subsumed by Task 16.11.

New visions added for remaining CLI/UX gaps:
- V9: `vault status` command (metadata, encryptionCount, nonce health)
- V10: `gc` command (collectReferencedChunks + git gc)
- V11: KDF parameter tuning via `.casrc`
- V12: File-level passphrase CLI flag

Summary table updated with Status column. Backlog section updated to reflect CLI parity achievement (.casrc + store/restore flags).
🧹 Nitpick comments (3)
bin/git-cas.js (1)
344-353: Consider reusing `mergeConfig` for restore command config handling.

The restore command manually merges `concurrency` and `maxRestoreBufferSize` instead of using `mergeConfig`. While the current approach works, it introduces potential drift if `mergeConfig` is extended with additional restore-relevant options.

♻️ Suggested refactor to reuse mergeConfig

```diff
 const config = loadConfig(opts.cwd);
-/** @type {Record<string, any>} */
-const casConfig = {};
-const concurrency = opts.concurrency ?? config.concurrency;
-const maxRestoreBufferSize = opts.maxRestoreBuffer ?? config.maxRestoreBufferSize;
-if (concurrency !== undefined) { casConfig.concurrency = concurrency; }
-if (maxRestoreBufferSize !== undefined) { casConfig.maxRestoreBufferSize = maxRestoreBufferSize; }
-
-const cas = createCas(opts.cwd, { observability: observer, ...casConfig });
+const { casConfig } = mergeConfig(opts, config);
+// Override maxRestoreBufferSize with CLI-specific flag name
+if (opts.maxRestoreBuffer !== undefined) {
+  casConfig.maxRestoreBufferSize = opts.maxRestoreBuffer;
+}
+const cas = createCas(opts.cwd, { observability: observer, ...casConfig });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/git-cas.js` around lines 344 - 353, The restore command builds casConfig manually which duplicates logic and can drift from mergeConfig; replace the manual merging of concurrency and maxRestoreBufferSize by calling mergeConfig(opts, config) (or the appropriate mergeConfig signature used elsewhere) to produce the final config used when creating the CAS; then pass the merged result (plus observability: observer) into createCas instead of casConfig—update references around loadConfig, opts, config, casConfig, createCas, concurrency and maxRestoreBufferSize to use the merged config.

test/unit/cli/config.test.js (1)
114-122: Consider adding CLI override tests for `merkleThreshold` and `maxRestoreBufferSize`.

The tests verify these values propagate from config, but there's no test confirming CLI flags take precedence over config values for these specific fields.
🧪 Suggested additional tests
```diff
   it('maxRestoreBufferSize from config', () => {
     const { casConfig } = mergeConfig({}, { maxRestoreBufferSize: 1024 * 1024 });
     expect(casConfig.maxRestoreBufferSize).toBe(1024 * 1024);
   });
+
+  it('merkleThreshold CLI overrides config', () => {
+    const { casConfig } = mergeConfig({ merkleThreshold: 100 }, { merkleThreshold: 500 });
+    expect(casConfig.merkleThreshold).toBe(100);
+  });
+
+  it('maxRestoreBufferSize CLI overrides config', () => {
+    const { casConfig } = mergeConfig({ maxRestoreBuffer: 2048 * 1024 }, { maxRestoreBufferSize: 1024 * 1024 });
+    // Note: CLI uses maxRestoreBuffer, config uses maxRestoreBufferSize
+  });
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/cli/config.test.js` around lines 114 - 122, Add unit tests to assert CLI flags override config values for merkleThreshold and maxRestoreBufferSize by calling mergeConfig with CLI-like first-argument overrides and the config object as the second argument; specifically, create tests that pass { merkleThreshold: <value> } and { maxRestoreBufferSize: <value> } as the first parameter to mergeConfig (instead of the empty {} used in current tests) and assert that the returned casConfig.merkleThreshold and casConfig.maxRestoreBufferSize equal the CLI-provided values (use the same test names as the existing ones to keep style consistent).

bin/config.js (1)
78-92: Clarify behavior when `strategy` is `'fixed'` but `chunkSize` is `undefined`.

When `strategy === 'fixed'` but neither CLI nor config provides `chunkSize`, `resolveChunking` returns `undefined`. This delegates to CasService's default chunker, which is correct — but the silent fallback could surprise users who explicitly set `strategy: 'fixed'` expecting it to take effect.

Consider whether a warning or explicit handling would be helpful here, though the current behavior is defensible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/config.js` around lines 78 - 92, The function resolveChunking currently returns undefined when strategy === 'fixed' but chunkSize is undefined; add an explicit warning so users aren't silently stomped by CasService's default chunker: inside resolveChunking, detect strategy === 'fixed' and chunkSize == null, and emit a clear console.warn (or use an existing logger if available) referencing resolveChunking, the 'fixed' strategy, and the missing chunkSize, then return undefined as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@bin/config.js`:
- Around line 78-92: The function resolveChunking currently returns undefined
when strategy === 'fixed' but chunkSize is undefined; add an explicit warning so
users aren't silently stomped by CasService's default chunker: inside
resolveChunking, detect strategy === 'fixed' and chunkSize == null, and emit a
clear console.warn (or use an existing logger if available) referencing
resolveChunking, the 'fixed' strategy, and the missing chunkSize, then return
undefined as before.
In `@bin/git-cas.js`:
- Around line 344-353: The restore command builds casConfig manually which
duplicates logic and can drift from mergeConfig; replace the manual merging of
concurrency and maxRestoreBufferSize by calling mergeConfig(opts, config) (or
the appropriate mergeConfig signature used elsewhere) to produce the final
config used when creating the CAS; then pass the merged result (plus
observability: observer) into createCas instead of casConfig—update references
around loadConfig, opts, config, casConfig, createCas, concurrency and
maxRestoreBufferSize to use the merged config.
In `@test/unit/cli/config.test.js`:
- Around line 114-122: Add unit tests to assert CLI flags override config values
for merkleThreshold and maxRestoreBufferSize by calling mergeConfig with
CLI-like first-argument overrides and the config object as the second argument;
specifically, create tests that pass { merkleThreshold: <value> } and {
maxRestoreBufferSize: <value> } as the first parameter to mergeConfig (instead
of the empty {} used in current tests) and assert that the returned
casConfig.merkleThreshold and casConfig.maxRestoreBufferSize equal the
CLI-provided values (use the same test names as the existing ones to keep style
consistent).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ddacdaaa-1724-4ab5-aebd-71ef0d39d4fa
📒 Files selected for processing (7)
- CHANGELOG.md
- GUIDE.md
- README.md
- ROADMAP.md
- bin/config.js
- bin/git-cas.js
- test/unit/cli/config.test.js
🚧 Files skipped from review as they are similar to previous changes (1)
- GUIDE.md
The runtime VaultService constructor already accepts an optional observability port, but the .d.ts declaration was missing it. Add the parameter to keep types in sync with the implementation.
Replace || with ?? so that empty-string CLI values (which are falsy but intentional) do not silently fall through to .casrc defaults.
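The difference the commit describes is directly observable: `||` treats every falsy value (including an intentional empty string) as "unset", while `??` only falls back on null or undefined.

```javascript
// Demonstrates why ?? is the right operator for flag/config fallback:
// an explicitly-passed empty string should not be replaced by the default.
const fromFlag = '';              // user explicitly passed an empty value
const fromCasrc = 'config-value'; // hypothetical .casrc default

const withOr = fromFlag || fromCasrc;      // '' is falsy, so it is discarded
const withNullish = fromFlag ?? fromCasrc; // '' is not nullish, so it is kept
```

The same reasoning applies to numeric flags: `0` would also silently fall through with `||`.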
readPassphraseFile now throws when a file or stdin yields an empty string after newline stripping. resolvePassphrase validates the --vault-passphrase flag and GIT_CAS_PASSPHRASE env var with a .trim() check.
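The validation described above can be sketched as follows; the helper name is hypothetical, and the real resolvePassphrase in bin/git-cas.js wires this check into flags, env vars, files, and stdin.

```javascript
// Hedged sketch of empty-passphrase rejection after newline stripping;
// assertNonEmptyPassphrase is an illustrative name, not the real API.
function assertNonEmptyPassphrase(value, source) {
  if (typeof value !== 'string' || value.trim() === '') {
    throw new Error(`Passphrase from ${source} is empty`);
  }
  return value;
}
```

Naming the source in the error message matters here, since the passphrase can arrive via several channels.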
vault rotate now accepts --old-passphrase-file and --new-passphrase-file flags, bringing it to parity with the store/restore passphrase-file support. The old --old-passphrase and --new-passphrase flags are retained as non-required options.
Add validateKdfAlgorithm() guard before vault init and vault rotate pass untrusted strings to the KDF. Add validateConfig() to loadConfig() that rejects out-of-range and wrong-type values for all .casrc keys. Also fix passphrase-prompt tests to write temp files with mode 0o600, eliminating spurious "insecure permissions" warnings during test runs.
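A range-checking helper in the spirit of the `assertInt` mentioned in the review prompts might look like this; the constant name and actual signature in bin/config.js are assumptions.

```javascript
// Sketch of a .casrc range validator; FILENAME and the options shape are
// assumed for illustration and may differ from bin/config.js.
const FILENAME = '.casrc';

function assertInt(value, name, { min = 1, max = Number.MAX_SAFE_INTEGER } = {}) {
  if (value === undefined) return; // optional keys may simply be absent
  if (!Number.isInteger(value) || value < min || value > max) {
    throw new Error(`${FILENAME}: ${name} must be an integer between ${min} and ${max}`);
  }
}
```

Failing fast in loadConfig with the offending key name gives a clearer error than letting CasService or CdcChunker reject the value later.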
- README/GUIDE: deleteAsset → inspectAsset, findOrphanedChunks → collectReferencedChunks, with deprecation notes
- SECURITY.md: add RESTORE_TOO_LARGE and ENCRYPTION_BUFFER_EXCEEDED error code sections
- CHANGELOG: add entries for all PR #17 review fixes
- SECURITY.md: fix ENCRYPTION_BUFFER_EXCEEDED meta field name (accumulated, not size) to match the WebCryptoAdapter implementation
- vault rotate: reject double-stdin (--old-passphrase-file - and --new-passphrase-file - simultaneously); extract a helper to keep action complexity within the lint threshold
- CHANGELOG.md: merge duplicate ### Added headings under [Unreleased]
- passphrase-prompt tests: add a 0-byte empty-file rejection test
- GUIDE.md FAQ: mention the 100 MiB upper bound and >10 MiB warning for chunk size, not just the lower bound
- passphrase-prompt tests: use chmod() after writeFile() instead of relying on the mode parameter, which is filtered by the process umask and fails on hardened CI systems with a restrictive umask
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/config.js`:
- Around line 64-89: validateConfig/validateCdc must enforce the same chunk-size
bounds that CasService/CdcChunker expect: add range checks using the existing
assertInt/assertEnum helpers so chunkSize is between 1 and 100*1024*1024 (100
MiB), and in validateCdc verify minChunkSize, targetChunkSize, maxChunkSize are
present integers within that same 1..100 MiB range and that minChunkSize <=
targetChunkSize <= maxChunkSize (reject otherwise); ensure any undefined cdc
fields are handled consistently and error messages identify the failing field
(e.g., cdc.minChunkSize, cdc.targetChunkSize, cdc.maxChunkSize) so loadConfig
fails early instead of deferring to CasService/CdcChunker.
In `@bin/git-cas.js`:
- Around line 258-259: The error message thrown in the recipient/credential
guard inside the code that checks opts.recipient and (opts.keyFile ||
hasPassphraseSource(opts)) is outdated; update the throw string in that branch
(the throw new Error(...) in bin/git-cas.js) to list the new passphrase sources
or use a generic phrase like "passphrase source" so it covers
--vault-passphrase-file and the GIT_CAS_PASSPHRASE env var as well as
--vault-passphrase and --key-file (e.g. "Provide
--key-file/--vault-passphrase/--vault-passphrase-file/GIT_CAS_PASSPHRASE or
--recipient, not both" or "Provide --key-file or a passphrase source (e.g.
--vault-passphrase, --vault-passphrase-file, GIT_CAS_PASSPHRASE) or --recipient,
not both").
- Around line 230-235: The parseIntFlag helper allows partially-numeric strings
(e.g., "64MiB", "123abc") which silently parse incorrectly; update parseIntFlag
to reject any input that is not a pure integer string by first validating the
full input (e.g., with a regex such as /^[+-]?\d+$/ or by comparing
String(Number(v)) to v) and only then parsing and returning the integer,
otherwise throw a descriptive Error; this change affects parseIntFlag used for
flags like --chunk-size, --target-chunk-size, --min-chunk-size,
--max-chunk-size, --concurrency, --max-restore-buffer, and --merkle-threshold.
In `@SECURITY.md`:
- Around line 682-685: The doc currently recommends reducing chunkSize to
mitigate ENCRYPTION_BUFFER_EXCEEDED, but this is incorrect because the
WebCryptoAdapter hits the limit before chunking; update the SECURITY.md guidance
to instead advise increasing maxEncryptionBufferSize in the WebCryptoAdapter
constructor or switching to NodeCryptoAdapter for streaming encryption, and
remove or correct the recommendation to reduce chunkSize (reference
ENCRYPTION_BUFFER_EXCEEDED, WebCryptoAdapter, maxEncryptionBufferSize,
NodeCryptoAdapter, and chunkSize so readers can find the related code paths).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: daf564ef-8de1-4b47-834a-0325f06b8f26
📒 Files selected for processing (10)
- CHANGELOG.md
- GUIDE.md
- README.md
- SECURITY.md
- bin/config.js
- bin/git-cas.js
- bin/ui/passphrase-prompt.js
- index.d.ts
- test/unit/cli/config.test.js
- test/unit/cli/passphrase-prompt.test.js
🚧 Files skipped from review as they are similar to previous changes (5)
- test/unit/cli/passphrase-prompt.test.js
- test/unit/cli/config.test.js
- README.md
- bin/ui/passphrase-prompt.js
- GUIDE.md
Replace hardcoded setTimeout in runAction with an injectable delay
parameter. Tests now inject a spy instead of patching globals with
vi.useFakeTimers(), making the INTEGRITY_ERROR rate-limit tests
deterministic across all supported runtimes.
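A minimal sketch of the injectable-delay pattern, assuming the `runAction(fn, isTty, { delay })` shape that appears in this PR's tests; the real bin/actions.js also handles output formatting and other error codes.

```javascript
// Sketch of an action wrapper with an injectable delay. Production code
// defaults to a real sleep; tests inject a spy instead of faking timers.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

function runAction(fn, isTty, { delay = sleep } = {}) {
  return async (...args) => {
    try {
      await fn(...args);
    } catch (err) {
      // Rate-limit passphrase guessing: pause before reporting bad-key errors.
      if (err && err.code === 'INTEGRITY_ERROR') await delay(1000);
      process.exitCode = 1;
    }
  };
}
```

Because the delay is a parameter rather than a global, tests stay deterministic on runtimes where timer mocking behaves differently.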
Add test/CONVENTIONS.md documenting cross-runtime testing rules:
inject time dependencies, use chmod() over writeFile({ mode }),
prefer dependency injection over global monkey-patching.
- parseIntFlag: reject partial numeric strings (e.g., "64MiB") using regex validation and a Number.isSafeInteger check
- .casrc validation: add a 100 MiB upper bound for chunkSize and CDC sub-keys, matching CasService runtime constraints
- Error message: update recipient conflict text to mention all passphrase sources (--vault-passphrase-file, GIT_CAS_PASSPHRASE)
- SECURITY.md: remove incorrect recommendation to reduce chunk size for ENCRYPTION_BUFFER_EXCEEDED (WebCrypto buffers the entire stream, not individual chunks)
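The strict-parsing fix above can be sketched as follows; the error wording is illustrative and the real parseIntFlag in bin/git-cas.js may differ.

```javascript
// Sketch of strict integer flag parsing: the whole string must be digits
// (optionally signed), and the result must be a safe integer.
function parseIntFlag(value, flagName) {
  if (!/^[+-]?\d+$/.test(value) || !Number.isSafeInteger(Number(value))) {
    throw new Error(`${flagName} expects a plain integer, got "${value}"`);
  }
  return Number(value);
}
```

Validating the full string first is what closes the `parseInt('64MiB') === 64` hole: partial numeric prefixes are rejected instead of silently truncated.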
@coderabbitai review

✅ Actions performed: Review triggered.
🧹 Nitpick comments (3)
bin/config.js (1)
68-76: CDC parameter range validation added, but inter-field relationships not enforced.

The validation now correctly bounds individual CDC fields (min: 1, max: 100 MiB). However, the relationships between fields (e.g., minChunkSize <= targetChunkSize <= maxChunkSize) are not validated here — invalid combinations will fail later in `CdcChunker`. This is acceptable since runtime validation exists downstream, but early validation would provide clearer error messages.

Optional: Add inter-field relationship validation

```diff
 function validateCdc(config) {
   if (config.cdc === undefined) { return; }
   if (typeof config.cdc !== 'object' || config.cdc === null || Array.isArray(config.cdc)) {
     throw new Error(`${FILENAME}: cdc must be an object`);
   }
   for (const key of ['minChunkSize', 'targetChunkSize', 'maxChunkSize']) {
     assertInt(config.cdc[key], `cdc.${key}`, { min: 1, max: MAX_CHUNK_SIZE });
   }
+  const { minChunkSize, targetChunkSize, maxChunkSize } = config.cdc;
+  if (minChunkSize !== undefined && maxChunkSize !== undefined && minChunkSize > maxChunkSize) {
+    throw new Error(`${FILENAME}: cdc.minChunkSize must not exceed cdc.maxChunkSize`);
+  }
+  if (targetChunkSize !== undefined && minChunkSize !== undefined && targetChunkSize < minChunkSize) {
+    throw new Error(`${FILENAME}: cdc.targetChunkSize must be >= cdc.minChunkSize`);
+  }
+  if (targetChunkSize !== undefined && maxChunkSize !== undefined && targetChunkSize > maxChunkSize) {
+    throw new Error(`${FILENAME}: cdc.targetChunkSize must be <= cdc.maxChunkSize`);
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/config.js` around lines 68 - 76, validateCdc currently checks per-field ranges but doesn't enforce inter-field ordering, so add checks inside validateCdc (after the assertInt loop) to ensure config.cdc.minChunkSize <= config.cdc.targetChunkSize <= config.cdc.maxChunkSize; if any relation fails, throw a clear Error mentioning the fields (e.g., "cdc.minChunkSize must be <= cdc.targetChunkSize <= cdc.maxChunkSize"). Use the same FILENAME context in the error message and reference the validateCdc function and the keys minChunkSize, targetChunkSize, maxChunkSize to locate where to add these checks.

CHANGELOG.md (1)
17-17: Clarify "new architectural concerns" wording.

The phrasing "Three new architectural concerns identified by the audit" may confuse readers. These concerns (C8–C10) were identified by the CODE-EVAL.md audit as part of the existing 10 concerns (C1–C10), not introduced by this PR. Consider rephrasing to emphasize that these concerns are now being documented/addressed rather than newly discovered:

```diff
-**Concerns C8–C10** — Three new architectural concerns identified by the audit: crypto adapter LSP violation (C8), FixedChunker quadratic allocation (C9), encrypt-then-chunk dedup loss (C10).
+**Concerns C8–C10** — Three architectural concerns from the CODE-EVAL.md audit now documented: crypto adapter LSP violation (C8), FixedChunker quadratic allocation (C9), encrypt-then-chunk dedup loss (C10).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` at line 17, Reword the CHANGELOG entry so it makes clear that C8–C10 were identified by the existing CODE-EVAL.md audit and are now being documented here, not newly discovered by this PR; update the sentence referencing "Three new architectural concerns identified by the audit" to something like "Three architectural concerns (C8–C10) from the CODE-EVAL.md audit are now documented: crypto adapter LSP violation (C8), FixedChunker quadratic allocation (C9), encrypt-then-chunk dedup loss (C10)." Ensure the entry references CODE-EVAL.md and the concern IDs (C8–C10) so readers understand these are existing audit findings being recorded in the changelog.

test/unit/cli/actions.test.js (1)
126-134: Assert that the delay is awaited, not just invoked.

This still passes if `runAction` changes from `await delay(1000)` to `delay(1000)`, because the spy resolves immediately. Hold the injected promise open, assert `stderr` and `process.exitCode` stay untouched, then release it.
```diff
-  it('calls delay(1000) on INTEGRITY_ERROR', async () => {
-    const delaySpy = vi.fn().mockResolvedValue(undefined);
+  it('awaits delay(1000) before writing INTEGRITY_ERROR output', async () => {
+    let releaseDelay = () => {};
+    const delaySpy = vi.fn().mockImplementation(() => new Promise((resolve) => {
+      releaseDelay = resolve;
+    }));
     const err = Object.assign(new Error('bad key'), { code: 'INTEGRITY_ERROR' });
-    const action = runAction(async () => { throw err; }, () => false, { delay: delaySpy });
-    await action();
+    const actionPromise = runAction(() => { throw err; }, () => false, { delay: delaySpy })();
+
     expect(delaySpy).toHaveBeenCalledWith(1000);
-    expect(process.exitCode).toBe(1);
-    expect(stderrSpy).toHaveBeenCalled();
+    expect(stderrSpy).not.toHaveBeenCalled();
+    expect(process.exitCode).toBeUndefined();
+
+    releaseDelay();
+    await actionPromise;
+    expect(stderrSpy).toHaveBeenCalled();
+    expect(process.exitCode).toBe(1);
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/unit/cli/actions.test.js` around lines 126 - 134, The test currently uses a delay spy that resolves immediately, so it cannot verify that runAction awaits the delay; change delaySpy to return a controllable pending promise (a deferred) so you can assert stderrSpy and process.exitCode remain unchanged before resolving, then resolve the deferred and assert delaySpy was awaited (called with 1000), stderrSpy was called and process.exitCode became 1 after resolution; reference the runAction invocation, delaySpy, stderrSpy and process.exitCode when adding the pending promise and the pre-/post-resolution assertions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@bin/config.js`:
- Around line 68-76: validateCdc currently checks per-field ranges but doesn't
enforce inter-field ordering, so add checks inside validateCdc (after the
assertInt loop) to ensure config.cdc.minChunkSize <= config.cdc.targetChunkSize
<= config.cdc.maxChunkSize; if any relation fails, throw a clear Error
mentioning the fields (e.g., "cdc.minChunkSize must be <= cdc.targetChunkSize <=
cdc.maxChunkSize"). Use the same FILENAME context in the error message and
reference the validateCdc function and the keys minChunkSize, targetChunkSize,
maxChunkSize to locate where to add these checks.
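The ordering check the reviewer asks for can be sketched as follows. This is a hedged sketch: the `validateCdc` function name, the `FILENAME` constant, and the config shape follow the review comment, and the real `bin/config.js` may differ.

```javascript
// Assumed context filename for error messages (per the review comment).
const FILENAME = '.casrc';

// Hypothetical sketch of validateCdc with the added inter-field ordering check.
function validateCdc(config) {
  const { minChunkSize, targetChunkSize, maxChunkSize } = config.cdc;
  // (per-field assertInt range checks would already have run here)
  if (!(minChunkSize <= targetChunkSize && targetChunkSize <= maxChunkSize)) {
    throw new Error(
      `${FILENAME}: cdc.minChunkSize must be <= cdc.targetChunkSize <= cdc.maxChunkSize ` +
        `(got ${minChunkSize}, ${targetChunkSize}, ${maxChunkSize})`,
    );
  }
}

// Ordered values pass silently; an inverted pair throws.
validateCdc({ cdc: { minChunkSize: 2048, targetChunkSize: 8192, maxChunkSize: 65536 } });
```

Running the per-field range checks first keeps the ordering error message from firing on values that are individually out of range.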
In `@CHANGELOG.md`:
- Line 17: Reword the CHANGELOG entry so it makes clear that C8–C10 were
identified by the existing CODE-EVAL.md audit and are now being documented here,
not newly discovered by this PR; update the sentence referencing "Three new
architectural concerns identified by the audit" to something like "Three
architectural concerns (C8–C10) from the CODE-EVAL.md audit are now documented:
crypto adapter LSP violation (C8), FixedChunker quadratic allocation (C9),
encrypt-then-chunk dedup loss (C10)." Ensure the entry references CODE-EVAL.md
and the concern IDs (C8–C10) so readers understand these are existing audit
findings being recorded in the changelog.
In `@test/unit/cli/actions.test.js`:
- Around line 126-134: The test currently uses a delay spy that resolves
immediately, so it cannot verify that runAction awaits the delay; change
delaySpy to return a controllable pending promise (a deferred) so you can assert
stderrSpy and process.exitCode remain unchanged before resolving, then resolve
the deferred and assert delaySpy was awaited (called with 1000), stderrSpy was
called and process.exitCode became 1 after resolution; reference the runAction
invocation, delaySpy, stderrSpy and process.exitCode when adding the pending
promise and the pre-/post-resolution assertions.
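The "controllable pending promise" the prompt describes is the classic deferred pattern. A minimal sketch, with an illustrative `deferred` helper name that is not from the codebase:

```javascript
// A deferred: a promise plus its externally held resolve/reject handles.
function deferred() {
  let resolve;
  let reject;
  const promise = new Promise((res, rej) => {
    resolve = res;
    reject = rej;
  });
  return { promise, resolve, reject };
}

// Stand-in for runAction awaiting an injected delay before a side effect.
async function demo() {
  const d = deferred();
  let wroteError = false;
  const pending = (async () => {
    await d.promise; // plays the role of `await delay(1000)`
    wroteError = true; // plays the role of the stderr write
  })();

  const before = wroteError; // must still be false while the delay is held open
  d.resolve();
  await pending;
  return { before, after: wroteError };
}
```

In the test, inject `() => d.promise` as the delay, assert the "before" state, then call `d.resolve()` and await the action before asserting the "after" state.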
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bf87617a-63cc-41c7-b1b7-a0ee00e7aa16
📒 Files selected for processing (13)
- CHANGELOG.md
- GUIDE.md
- README.md
- SECURITY.md
- bin/actions.js
- bin/config.js
- bin/git-cas.js
- bin/ui/passphrase-prompt.js
- index.d.ts
- test/CONVENTIONS.md
- test/unit/cli/actions.test.js
- test/unit/cli/config.test.js
- test/unit/cli/passphrase-prompt.test.js
🚧 Files skipped from review as they are similar to previous changes (2)
- test/unit/cli/passphrase-prompt.test.js
- bin/ui/passphrase-prompt.js
Replace hand-rolled validateKdfAlgorithm() and free-text --strategy, --codec, --algorithm options with Commander's built-in Option.choices() API. Commander rejects invalid values at parse time with a clear error message, eliminating custom validation code.
- CDC inter-field ordering validation in .casrc (min ≤ target ≤ max)
- Tighten delay test to verify runAction awaits the delay via deferred
- Clarify CHANGELOG C8–C10 wording (existing audit findings, not new)
Fixed in 4c13b33: Addressed CodeRabbit review round 4 nitpicks.
@coderabbitai review
✅ Actions performed: Review triggered.
Summary
Implements all 13 task cards from Milestone 16 (Capstone) — a full audit remediation pass addressing 9 architectural flaws and 10 concerns identified in CODE-EVAL.md.
Safety Nets (Phase 1)
- `CasError` portability guard (`Error.captureStackTrace`)
- `scripts/hooks/` rename
- Crypto adapter normalization (async `encryptBuffer`, `STREAM_NOT_CONSUMED` guards)
- Memory restore guard (`maxRestoreBufferSize`, `RESTORE_TOO_LARGE`)
- Orphaned blob tracking (`STREAM_ERROR`, `meta.orphanedBlobs`)

Correctness (Phase 2)
- Passphrase input security (`--vault-passphrase-file`, TTY prompt)
- Encryption buffer guard (`maxEncryptionBufferSize`)

Polish (Phase 3)
- Lifecycle method naming (`inspectAsset`, `collectReferencedChunks` + deprecated aliases)
- KDF brute-force awareness (`decryption_failed` metric, 1s CLI delay)
- `encryptionCount` in vault metadata

Backlog
- `encryptionCount` auto-rotation

Stats
Test plan
- `npx eslint .` — 0 errors
- `npm test` — 895 tests pass (Node)
- `docker compose run --build --rm test-bun bunx vitest run test/unit` — Bun unit tests
- `docker compose run --build --rm test-deno deno run -A npm:vitest run test/unit` — Deno unit tests
- `npm pack --dry-run` — clean
- `npx jsr publish --dry-run --allow-dirty` — clean

Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Deprecations
Chores / Tests