
feat: improve concurrency safety, cross-platform support and dynamic versioning#6

Merged
Fahl-Design merged 17 commits into main from feat/concurrency-and-versioning
Apr 29, 2026

Conversation

@Fahl-Design
Member

@Fahl-Design Fahl-Design commented Mar 6, 2026

Summary of Changes

1. Shell Update & Session Isolation (Root Cause Fix)

  • Fixed PVM_DIR Export: The pvm env command now explicitly exports PVM_DIR, ensuring the shell wrapper can correctly locate its configuration files.
  • Session-Specific Updates: Implemented isolated environment update files using the shell's PID (e.g., .env_update_1234). This prevents race conditions and "version bleeding" between different terminal tabs.
  • Safety Guards: Added directory presence checks in shell wrappers to prevent rm or cat operations on invalid paths.
  • Silent Operations: Bypassed user-defined aliases by using command rm and redirected all temporary file cleanup output to /dev/null.
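The session-isolation logic above can be sketched in Rust. This is an illustrative reconstruction, not the actual pvm source; the PVM_ENV_UPDATE_PATH and PVM_SHELL_PID variable names are taken from this PR's description.

```rust
use std::env;
use std::path::{Path, PathBuf};

// Sketch of PID-scoped env-update path resolution. An explicit override
// from the shell wrapper wins; otherwise the file name is scoped to the
// calling shell's PID so two terminal tabs never share an update file.
fn env_update_path(pvm_dir: &Path) -> PathBuf {
    if let Ok(p) = env::var("PVM_ENV_UPDATE_PATH") {
        return PathBuf::from(p);
    }
    // Fall back to the wrapper-provided shell PID, or our own PID.
    let pid = env::var("PVM_SHELL_PID").unwrap_or_else(|_| std::process::id().to_string());
    pvm_dir.join(format!(".env_update_{pid}"))
}

fn main() {
    println!("{}", env_update_path(Path::new("/tmp/pvm")).display());
}
```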

2. Concurrency & Reliability

  • File Locking: Integrated the fs4 crate to implement advisory file locking on shared state files (.env_update, .update_check_guard, remote_cache.json). This ensures data integrity when multiple PVM instances run simultaneously.
  • JSON API Integration: Refactored network.rs to use the remote JSON API instead of HTML scraping. This is faster, more robust, and allowed for the removal of the scraper dependency.
  • Enhanced Progress Reporting: Added a real-time download progress bar with accurate byte counts and ETA for PHP version installations.
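The locked-write pattern for shared state files can be sketched as follows. This is a simplified illustration: the actual locking in the PR comes from the fs4 crate, whose calls appear only as comments here so the sketch stays dependency-free. The key point is to open without truncating, take the lock, and only then clear the file.

```rust
use std::fs::OpenOptions;
use std::io::{BufWriter, Write};
use std::path::Path;

// Rewrite a shared state file under an (elided) exclusive lock.
// Truncating before locking would race with a concurrent reader,
// so the file is cleared with set_len(0) only once the lock is held.
fn rewrite_state_file(path: &Path, contents: &str) -> std::io::Result<()> {
    let file = OpenOptions::new()
        .create(true)
        .read(true)
        .write(true)
        // Deliberately no .truncate(true) here.
        .open(path)?;
    // file.lock_exclusive()?;  // fs4::FileExt, as used in the PR
    file.set_len(0)?; // clear only while the lock is held
    let mut writer = BufWriter::new(&file);
    writer.write_all(contents.as_bytes())?;
    writer.flush()?;
    // file.unlock()?;          // fs4::FileExt
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("pvm_demo.env_update");
    rewrite_state_file(&path, "export PVM_DIR='/home/user/.pvm'\n")?;
    print!("{}", std::fs::read_to_string(&path)?);
    Ok(())
}
```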

3. Cross-Platform Support

  • Dynamic Detection: Replaced hardcoded "linux-x86_64" strings with dynamic OS and Architecture detection (std::env::consts). PVM now natively supports macOS (Intel/Silicon) and ARM64 Linux (e.g., Raspberry Pi).
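A minimal sketch of the detection, assuming the target string is assembled as os-arch; the exact tarball naming scheme pvm uses is not reproduced here.

```rust
// Dynamic platform detection replacing a hardcoded "linux-x86_64":
// std::env::consts::OS and ARCH are fixed at compile time per target.
fn target_triple() -> String {
    let os = std::env::consts::OS;     // "linux", "macos", "windows", ...
    let arch = std::env::consts::ARCH; // "x86_64", "aarch64", ...
    format!("{os}-{arch}")
}

fn main() {
    println!("{}", target_triple());
}
```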

4. Modernization & Maintainability

  • Dynamic Versioning: Created a build.rs script that injects the current Git commit ID and build timestamp into the version string.
  • Robust Versioning: Integrated the semver crate for standard-compliant version parsing and sorting, replacing brittle manual string splitting.
  • Centralized Constants: Moved all magic strings and environment variable names into a single src/constants.rs file.
  • Dependency Audit: Updated all dependencies to their latest stable versions and removed unused crates (scraper, regex, thiserror, log).
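A minimal build.rs sketch of this kind of version injection, assuming the PVM_VERSION variable name from this PR; the graceful fallback when git is unavailable is illustrative, not necessarily how the real script behaves.

```rust
use std::process::Command;

// Best-effort short commit hash; falls back when git is missing
// (e.g. building from a source tarball).
fn git_short_hash() -> String {
    Command::new("git")
        .args(["rev-parse", "--short", "HEAD"])
        .output()
        .ok()
        .filter(|o| o.status.success())
        .map(|o| String::from_utf8_lossy(&o.stdout).trim().to_string())
        .unwrap_or_else(|| "unknown".to_string())
}

fn main() {
    let base = option_env!("CARGO_PKG_VERSION").unwrap_or("0.0.0");
    let version = format!("{base}+{}", git_short_hash());
    // Exposed to the crate at compile time via env!("PVM_VERSION").
    println!("cargo:rustc-env=PVM_VERSION={version}");
    // Avoid rerunning on every build; only when HEAD moves.
    println!("cargo:rerun-if-changed=.git/HEAD");
}
```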

Summary by CodeRabbit

  • New Features

    • Per-directory PHP version files; installer now supports multi-select package installs and shows package lists in listings.
    • CLI reports a build-time version string including tag/commit and timestamp.
  • Improvements

    • Remote listings switched to JSON; downloads stream with progress and clearer corruption/errors.
    • Atomic, cross-platform env/PATH updates with locked writes; shell wrappers emit safe, quoted env updates per-invocation.
    • Version sorting refined; cache and guard logic hardened with locking.
  • Tests / Docs / Chores

    • Toolchain pinned, docs expanded, tests extended, ignore rules updated.

@Fahl-Design Fahl-Design self-assigned this Mar 6, 2026
@Fahl-Design Fahl-Design force-pushed the feat/concurrency-and-versioning branch from 9d28092 to e213c57 on March 6, 2026 at 20:53
@coderabbitai

coderabbitai Bot commented Mar 6, 2026

Warning

Rate limit exceeded

@Fahl-Design has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 43 minutes and 51 seconds before requesting another review.

To keep reviews running without waiting, you can enable the usage-based add-on for your organization. This allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 6bae8b15-bf32-4950-a8fb-bb5183e3b36f

📥 Commits

Reviewing files that changed from the base of the PR and between cf7dc57 and 9a007ce.

📒 Files selected for processing (5)
  • GEMINI.md
  • src/commands/install.rs
  • src/commands/ls_remote.rs
  • src/commands/use_cmd.rs
  • src/network.rs
📝 Walkthrough

Walkthrough

Adds a build script exporting a runtime PVM_VERSION, centralizes runtime names as public constants, replaces HTML scraping with a JSON remote feed, implements PID-scoped atomic env-update files with locking, adds semver-aware version sorting and package discovery, and updates commands, shell wrappers, tests, toolchain, and dependencies.
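The semver-aware sorting mentioned above can be illustrated via its numeric fallback. The real utils::sort_versions tries the semver crate first; the std-only sketch below shows only the fallback comparison for loose strings like "8.2".

```rust
// Numeric fallback comparison: split on '.', parse each component,
// and compare the resulting integer vectors lexicographically, so
// "8.10.0" correctly sorts after "8.9" (unlike a string sort).
fn sort_versions(versions: &mut Vec<String>) {
    versions.sort_by(|a, b| {
        let parts = |s: &str| -> Vec<u32> {
            s.split('.').filter_map(|p| p.parse().ok()).collect()
        };
        parts(a).cmp(&parts(b))
    });
}

fn main() {
    let mut v = vec!["8.10.0".into(), "8.2.29".into(), "8.9".into()];
    sort_versions(&mut v);
    println!("{v:?}");
}
```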

Changes

Cohort / File(s) Summary
Build & Toolchain
build.rs, Cargo.toml, rust-toolchain.toml, src/cli.rs
Adds build.rs that emits PVM_VERSION (git/timestamp, CI-aware); bumps and adjusts dependencies (adds futures-util, semver, chrono build-dep; adjusts reqwest features); sets CLI version to PVM_VERSION; pins toolchain to 1.95.0.
Constants & Modules
src/constants.rs, src/main.rs
New src/constants.rs exposing environment/file-name constants (PVM_DIR_VAR, MULTISHELL_PATH_VAR, ENV_UPDATE_FILE, REMOTE_CACHE_FILE, UPDATE_CHECK_GUARD_FILE, PHP_VERSION_FILE, BASE_URL); main.rs registers constants and utils modules.
Filesystem & Env Utilities
src/fs.rs, src/utils.rs
Adds VersionItem.packages, get_installed_packages, get_env_update_path(Option<PathBuf>) (override → env → PID-scoped path), and write_env_file_locked (exclusive locked atomic write). Moves version sorting to utils::sort_versions using semver + numeric fallback.
Commands: env / install / use
src/commands/env.rs, src/commands/install.rs, src/commands/use_cmd.rs
Commands now use constants for env/filenames, query packages during install with multi-select, write env updates via get_env_update_path + write_env_file_locked, propagate errors instead of ignoring, and use OS-aware PATH separators; use updates .php-version and env propagation accordingly.
Network / Remote listing & download
src/network.rs
Replaces HTML scraping with JSON listing (?format=json) into (version, packages) tuples, filters by OS/arch, caches JSON to REMOTE_CACHE_FILE with file locks and 24h expiry, downloads package-specific tarballs with streaming and progress, broadens Unix executable bit handling, and returns per-version package lists.
Shell Wrappers
src/shell.rs
Switches to shell-safe single-quote escaping, derives PATH from PVM_DIR_VAR, creates per-invocation PID-scoped unique env-update files, sets PVM_ENV_UPDATE_PATH, conditionally sources/evals and cleans the file, and returns wrapped pvm exit code; tests updated for quoting.
Update Guarding
src/update.rs
Uses UPDATE_CHECK_GUARD_FILE and an exclusive-locked guard file to rate-limit update checks (~24h); reads/writes timestamp under lock and logs resolution failures.
Listing & Init
src/commands/ls.rs, src/commands/ls_remote.rs, src/commands/init.rs
ls/ls-remote display per-version package lists in output; remote listing adapted to tuple shape; init iterates over (version, _) tuples.
Tests & Repo
tests/cli.rs, .gitignore, GEMINI.md
Test asserts export PVM_DIR= in bash env output; .php-version added to .gitignore; GEMINI.md expanded/edited guidance.
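As a rough illustration of the per-target filtering described for src/network.rs, here is a std-only sketch. The tarball naming pattern (php-VERSION-cli-TARGET.tar.gz) is an assumption for illustration, not taken from the real feed.

```rust
// Keep only files matching the current target's CLI tarball suffix
// and strip both suffix and prefix to recover the bare version string.
fn versions_for_target<'a>(files: &[&'a str], target: &str) -> Vec<&'a str> {
    let suffix = format!("-cli-{target}.tar.gz");
    files
        .iter()
        .filter_map(|f| f.strip_suffix(suffix.as_str()))
        .filter_map(|stem| stem.strip_prefix("php-"))
        .collect()
}

fn main() {
    let files = [
        "php-8.3.1-cli-linux-x86_64.tar.gz",
        "php-8.2.0-cli-macos-aarch64.tar.gz",
    ];
    println!("{:?}", versions_for_target(&files, "linux-x86_64"));
}
```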

Sequence Diagram

sequenceDiagram
    participant Shell as Shell Wrapper
    participant PVM as pvm CLI
    participant FS as File System
    participant Lock as File Lock

    Shell->>PVM: invoke pvm (sets PVM_SHELL_PID / PVM_ENV_UPDATE_PATH)
    PVM->>FS: get_env_update_path(override=None)
    FS-->>PVM: return PID-scoped env-update path
    PVM->>Lock: open env-update file (exclusive)
    Lock-->>PVM: lock acquired
    PVM->>FS: write environment exports (buffered)
    PVM->>Lock: flush & release lock
    Lock-->>PVM: lock released
    PVM-->>Shell: exit code returned
    Shell->>FS: read env-update file (if exists)
    FS-->>Shell: env export lines
    Shell->>Shell: eval/source exports into shell environment
    Shell->>FS: remove env-update file
    FS-->>Shell: cleanup complete
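The env exports that the shell evals in this flow must be quote-safe. A minimal sketch of the single-quote escaping idiom (close the quote, emit an escaped quote, reopen) that the src/shell.rs change describes; the helper name is illustrative.

```rust
// Wrap a value in single quotes for safe shell eval, using the
// standard '\'' idiom to embed literal single quotes.
fn shell_single_quote(s: &str) -> String {
    format!("'{}'", s.replace('\'', r"'\''"))
}

fn main() {
    // A path containing a single quote stays intact after shell eval.
    println!("export PVM_DIR={}", shell_single_quote("/home/o'brien/.pvm"));
}
```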

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 I dug a build from git and time, a tiny version seed,
I locked the burrow's env updates so every write proceeds,
From HTML crumbs to JSON bakes I hopped with semver in paw,
Shells pass PIDs and share the feast — the PATHs are set, hurrah!
A carrot for CI, a nibble for devs — hop, test, and mew!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 30.95%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: improve concurrency safety, cross-platform support and dynamic versioning' directly summarizes the main changes: file locking for concurrency, platform detection for cross-platform support, and build-time version injection for dynamic versioning.
  • Linked Issues Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Review rate limit: 0/1 reviews remaining, refill in 43 minutes and 51 seconds.

Comment @coderabbitai help to get the list of available commands and usage tips.

@Fahl-Design
Member Author

@coderabbitai review

@coderabbitai

coderabbitai Bot commented Mar 6, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
Cargo.toml (1)

4-4: ⚠️ Potential issue | 🟡 Minor

Ensure CI and local toolchains support Rust 1.85.0 or later.

The edition = "2024" requires rustc 1.85.0 (stable) or later. Verify your CI pipeline and development environment are configured to use this minimum version.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@Cargo.toml` at line 4, The Cargo.toml now sets edition = "2024" which
requires rustc 1.85.0+; update your toolchain configuration and CI to pin/use
Rust 1.85.0 or later (e.g., update any rust-toolchain, GitHub Actions workflow
steps, CI images, and Dockerfiles) so local and CI builds match the new edition;
verify rustup/toolchain settings and CI matrix entries reference 1.85.0+ and
rerun the pipeline to confirm compatibility.
🧹 Nitpick comments (3)
build.rs (2)

20-20: CI detection is limited to GitHub Actions.

The GITHUB_ACTIONS check excludes other CI environments (GitLab CI, CircleCI, Jenkins, etc.). Consider using the more universal CI environment variable, which is set by most CI systems.

💡 Proposed broader CI detection
-    let is_ci = std::env::var("GITHUB_ACTIONS").is_ok();
+    // CI env var is set by most CI systems (GitHub Actions, GitLab CI, CircleCI, Travis, etc.)
+    let is_ci = std::env::var("CI").is_ok() || std::env::var("GITHUB_ACTIONS").is_ok();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@build.rs` at line 20, The CI detection currently sets is_ci by checking
GITHUB_ACTIONS only; update it to detect a wider range of CI environments by
checking the generic "CI" env var (and optionally fall back to other CI-specific
vars) instead of or in addition to "GITHUB_ACTIONS" so is_ci becomes true for
most CI systems; locate the declaration of the is_ci variable and replace the
std::env::var("GITHUB_ACTIONS").is_ok() expression with a check for
std::env::var("CI").is_ok() (or a small OR chain of known CI vars) to broaden
detection.

3-41: Add cargo:rerun-if-changed directives for build efficiency.

Without rerun directives, this build script executes on every compilation, invoking git and date commands unnecessarily. Adding directives improves incremental build times.

⚡ Proposed fix to add rerun directives
 fn main() {
+    // Only rerun if git state or Cargo.toml changes
+    println!("cargo:rerun-if-changed=.git/HEAD");
+    println!("cargo:rerun-if-changed=.git/refs/heads/");
+    println!("cargo:rerun-if-changed=Cargo.toml");
+
     let commit_hash = Command::new("git")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@build.rs` around lines 3 - 41, In main(), add cargo rerun directives so the
build script only reruns when relevant inputs change: emit
println!("cargo:rerun-if-changed=.git/HEAD") and
println!("cargo:rerun-if-changed=.git/refs/heads") (or other git files you rely
on) to avoid invoking git/date on every compile, and emit environment-change
directives like println!("cargo:rerun-if-env-changed=GITHUB_ACTIONS") and
println!("cargo:rerun-if-env-changed=CARGO_PKG_VERSION") so changes to those
vars or package version trigger a rerun; place these near the top of main()
before computing commit_hash, build_time, and version.
src/utils.rs (1)

10-18: Mixed semver/non-semver comparison may produce inconsistent ordering.

When one string is valid semver and the other is not, both are compared using the numeric fallback, discarding valid semver metadata. For example, comparing "8.2.0-alpha" (valid semver with prerelease) against "8.2" (invalid semver) uses the fallback, yielding [8,2,0] vs [8,2], treating the prerelease as greater—contradicting semver semantics where 8.2.0-alpha < 8.2.0.

Consider handling the mixed case explicitly or documenting this limitation.

💡 Proposed fix to handle mixed case
         match (a_sem, b_sem) {
             (Ok(av), Ok(bv)) => av.cmp(&bv),
+            (Ok(_), Err(_)) => std::cmp::Ordering::Greater, // valid semver sorts after non-semver
+            (Err(_), Ok(_)) => std::cmp::Ordering::Less,
             _ => {
                 // Fallback for non-semver strings (e.g., "8.2")
                 let a_parts: Vec<u32> = a.split('.').filter_map(|s| s.parse().ok()).collect();
                 let b_parts: Vec<u32> = b.split('.').filter_map(|s| s.parse().ok()).collect();
                 a_parts.cmp(&b_parts)
             }
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils.rs` around lines 10 - 18, The current match over (a_sem, b_sem)
falls through to the numeric fallback for mixed semver/non-semver cases and
loses semver prerelease semantics; update the match to handle mixed cases
explicitly by adding arms for (Ok(av), Err(_)) and (Err(_), Ok(bv)): when one
side parsed as semver, split the non-semver string into numeric parts and
compare major/minor/patch from av (use av.major,av.minor,av.patch) to those
numeric parts; if numeric parts are equal but av has a prerelease, treat the
semver prerelease as less than the non-semver “release” (i.e., return Less), and
mirror the logic for the reversed case using bv, ensuring a and b variables and
the existing match block are used to locate the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.php-version:
- Line 1: The repository includes a committed .php-version file which is
machine-specific and can confuse contributors or cause merge conflicts; remove
the tracked .php-version from the repo (git rm --cached .php-version or delete
and commit), add ".php-version" to .gitignore to prevent re-committing, and if
keeping it intentionally, add a short note in the project README explaining why
this specific PHP version is required; reference the .php-version file when
making these changes.

In `@build.rs`:
- Around line 12-18: The current build_time computation uses
Command::new("date") (see build_time variable in build.rs) which is POSIX-only
and breaks on Windows; replace this shell call with a cross-platform timestamp
from Rust (add chrono as a build-dependency and compute build_time via
chrono::Utc::now().format(...).to_string()) so the timestamp is produced
reliably on all platforms and remove the Command::new("date") logic.

In `@src/commands/install.rs`:
- Around line 81-94: The current install path in install.rs swallows all I/O
errors when writing the env-update file (using .ok() on lock_exclusive,
writeln!, writer.flush, and unlock), causing a reported "successful switch" even
when the write fails; change this block to treat the write as fallible like
src/commands/use_cmd.rs: use crate::fs::get_env_update_path() as you already do,
but propagate errors (replace the .ok() calls with proper Result handling and
use ? to return the error or map it into the function's error type) for
file.open(), file.lock_exclusive(), writeln!(...), writer.flush(), and
file.unlock() so the install command returns an error if the env-update write
fails.

In `@src/commands/use_cmd.rs`:
- Around line 92-98: The code currently calls std::fs::write(PHP_VERSION_FILE,
&version).ok() and then unconditionally prints a success eprintln; change this
to check the Result from std::fs::write (e.g., match or if let Err(e)) so you
only print the success message when write returns Ok, and print or warn
(eprintln) with the error and mention PHP_VERSION_FILE when it fails so the user
knows the pin update was not written; update the write call handling and the
eprintln block around PHP_VERSION_FILE and version.bold() accordingly.

In `@src/fs.rs`:
- Around line 64-72: get_env_update_path currently rebuilds the .env_update
filename from PVM_SHELL_PID which still collides for concurrent backgrounded
invocations; instead accept a per-invocation path from the caller. Change the
signature of get_env_update_path (and any callers in src/shell.rs) to accept an
Option<PathBuf> or &Path (e.g., get_env_update_path(override: Option<&Path>))
and if provided return that path immediately, otherwise fall back to the
existing pvm_dir + PVM_SHELL_PID logic; update the shell wrapper code that
spawns each invocation to create a unique temp filename and pass that PathBuf
into get_env_update_path (or set and pass through an explicit env var like
PVM_ENV_UPDATE_PATH) so each invocation uses a unique .env_update file instead
of reconstructing from PVM_SHELL_PID.

In `@src/network.rs`:
- Around line 109-116: The cache file is currently truncated by File::create
before acquiring the exclusive lock, creating a race; replace the
File::create(&cache_path) call with opening via std::fs::OpenOptions
(create=true, read=true, write=true) so the file is not truncated on open, then
call file.lock_exclusive(), call file.set_len(0) to clear the file while holding
the lock, then create the std::io::BufWriter from the locked file and
write_all/flush, and finally call file.unlock(); keep references to
serde_json::to_string(&versions), pvm_dir, cache_path, lock_exclusive, set_len,
BufWriter, write_all, flush, and unlock when making the change.

In `@src/update.rs`:
- Around line 18-45: The guard file is being truncated at open which makes
read_to_string() always return empty; remove truncate(true) from the OpenOptions
chain used to open the file (the let file =
std::fs::OpenOptions::new()...open(&guard_file)? block) so the existing
timestamp can be read, then after deciding to update, call file.set_len(0) and
rewind before writing the new timestamp (the current set_len(0),
BufWriter::new(&file), writeln!(writer, "{}", now) and writer.flush() sequence)
and keep the existing lock_exclusive()/unlock() usage around the
read-and-possibly-write sequence.

---

Outside diff comments:
In `@Cargo.toml`:
- Line 4: The Cargo.toml now sets edition = "2024" which requires rustc 1.85.0+;
update your toolchain configuration and CI to pin/use Rust 1.85.0 or later
(e.g., update any rust-toolchain, GitHub Actions workflow steps, CI images, and
Dockerfiles) so local and CI builds match the new edition; verify
rustup/toolchain settings and CI matrix entries reference 1.85.0+ and rerun the
pipeline to confirm compatibility.

---

Nitpick comments:
In `@build.rs`:
- Line 20: The CI detection currently sets is_ci by checking GITHUB_ACTIONS
only; update it to detect a wider range of CI environments by checking the
generic "CI" env var (and optionally fall back to other CI-specific vars)
instead of or in addition to "GITHUB_ACTIONS" so is_ci becomes true for most CI
systems; locate the declaration of the is_ci variable and replace the
std::env::var("GITHUB_ACTIONS").is_ok() expression with a check for
std::env::var("CI").is_ok() (or a small OR chain of known CI vars) to broaden
detection.
- Around line 3-41: In main(), add cargo rerun directives so the build script
only reruns when relevant inputs change: emit
println!("cargo:rerun-if-changed=.git/HEAD") and
println!("cargo:rerun-if-changed=.git/refs/heads") (or other git files you rely
on) to avoid invoking git/date on every compile, and emit environment-change
directives like println!("cargo:rerun-if-env-changed=GITHUB_ACTIONS") and
println!("cargo:rerun-if-env-changed=CARGO_PKG_VERSION") so changes to those
vars or package version trigger a rerun; place these near the top of main()
before computing commit_hash, build_time, and version.

In `@src/utils.rs`:
- Around line 10-18: The current match over (a_sem, b_sem) falls through to the
numeric fallback for mixed semver/non-semver cases and loses semver prerelease
semantics; update the match to handle mixed cases explicitly by adding arms for
(Ok(av), Err(_)) and (Err(_), Ok(bv)): when one side parsed as semver, split the
non-semver string into numeric parts and compare major/minor/patch from av (use
av.major,av.minor,av.patch) to those numeric parts; if numeric parts are equal
but av has a prerelease, treat the semver prerelease as less than the non-semver
“release” (i.e., return Less), and mirror the logic for the reversed case using
bv, ensuring a and b variables and the existing match block are used to locate
the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 632cbd06-1e14-494d-8cb9-2e803f4dbca3

📥 Commits

Reviewing files that changed from the base of the PR and between 544d4d8 and e213c57.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (15)
  • .php-version
  • Cargo.toml
  • build.rs
  • src/cli.rs
  • src/commands/env.rs
  • src/commands/install.rs
  • src/commands/use_cmd.rs
  • src/constants.rs
  • src/fs.rs
  • src/main.rs
  • src/network.rs
  • src/shell.rs
  • src/update.rs
  • src/utils.rs
  • tests/cli.rs

Comment threads:
  • .php-version (outdated)
  • build.rs (outdated)
  • src/commands/install.rs (outdated)
  • src/commands/use_cmd.rs (outdated)
  • src/fs.rs (outdated)
  • src/network.rs
  • src/update.rs (outdated)

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (1)
src/shell.rs (1)

49-55: ⚠️ Potential issue | 🟠 Major

The wrapper temp filename can still collide.

In Bash and Zsh, $$ is the shell PID even in subshells, and Fish's $fish_pid is also the shell PID. With only a second-resolution timestamp added, concurrent pvm invocations from the same shell can still generate the same env_file and clobber/remove each other's exports. Generate the path with mktemp (or another per-invocation random suffix) in each wrapper instead of assembling it manually. (gnu.org)

Example pattern for Bash/Zsh
-      local env_file="${PVM_DIR}/.env_update_$(date +%s)_$$"
-      [[ -f "$env_file" ]] && command rm -f "$env_file" 2>/dev/null
+      local env_file
+      env_file="$(mktemp "${PVM_DIR}/.env_update.XXXXXX")" || return 1

Also applies to: 104-110, 157-165

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shell.rs` around lines 49 - 55, The temp env file generation using local
env_file=\"${{{}}}/{}_$(date +%s)_$$\" is racy; replace the manual filename
assembly with a per-invocation secure temp file created by mktemp (e.g. call
mktemp to produce env_file), use that path for PVM_ENV_UPDATE_PATH before
invoking command pvm, and keep the existing eval/cleanup logic but remove the
old collision-prone construction; update the same pattern wherever env_file is
built (the blocks referencing env_file, PVM_ENV_UPDATE_PATH and command pvm) and
ensure you still remove the temp file after eval and on error (and add a trap if
needed) so each wrapper invocation gets a unique file.
🧹 Nitpick comments (2)
src/update.rs (1)

39-45: Consider propagating errors from critical I/O operations.

Using .ok() on set_len, rewind, and flush silently ignores potential I/O failures. While unlikely, if these fail, the guard file may be left in an inconsistent state. At minimum, propagating the unlock() error (or logging failures) would improve debuggability.

♻️ Optional: Propagate errors for better reliability
     // Write the new timestamp to prevent spam on next commands
-    file.set_len(0).ok();
-    file.rewind().ok();
+    file.set_len(0)?;
+    file.rewind()?;
     let mut writer = std::io::BufWriter::new(&file);
-    writeln!(writer, "{}", now).ok();
-    writer.flush().ok();
-    file.unlock().ok();
+    writeln!(writer, "{}", now)?;
+    writer.flush()?;
+    file.unlock()?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/update.rs` around lines 39 - 45, Replace the silent .ok() calls on
critical I/O operations so errors are propagated or at least surfaced: remove
.ok() on file.set_len, file.rewind, writer.flush, and file.unlock in the guard
update logic (the block creating BufWriter and writing the timestamp) and use
the ? operator to return a Result from the containing function (or map_err/log
the error) so failures aren’t ignored; ensure the containing function signature
is updated to return Result if needed and specifically propagate or handle
file.unlock() error instead of discarding it.
src/network.rs (1)

178-180: Consider handling missing Content-Length header gracefully.

Some servers or proxies may not provide a Content-Length header (e.g., with chunked transfer encoding). Currently, this causes a hard error. Consider falling back to an indeterminate progress bar instead.

♻️ Optional: Support downloads without Content-Length
-    let total_size = response
-        .content_length()
-        .context("Failed to get content length from server")?;
-
-    let pb = ProgressBar::new(total_size);
-    pb.set_style(ProgressStyle::default_bar()
-        .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
-        .progress_chars("#>-"));
+    let total_size = response.content_length();
+
+    let pb = if let Some(size) = total_size {
+        let pb = ProgressBar::new(size);
+        pb.set_style(ProgressStyle::default_bar()
+            .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
+            .progress_chars("#>-"));
+        pb
+    } else {
+        let pb = ProgressBar::new_spinner();
+        pb.set_style(ProgressStyle::default_spinner()
+            .template("{spinner:.green} [{elapsed_precise}] {bytes}")?);
+        pb
+    };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 178 - 180, The code currently unwraps
response.content_length() into total_size which errors when Content-Length is
missing; change total_size to an Option<u64> (e.g., let total_size =
response.content_length().ok(); or use .context only for unexpected errors) and
adjust the progress bar creation/usage (the progress bar variable, e.g., pb) to
accept an indeterminate/unknown length when total_size is None (create spinner
or progress bar with unknown length) and ensure the download/read loop updates
bytes read without assuming a known total; update any places that used
total_size as u64 to handle the Option (match or if let Some(total) { set length
} else { use indeterminate mode }).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/commands/install.rs`:
- Around line 73-74: The file src/commands/install.rs is not formatted; run
rustfmt (cargo fmt) to fix style so the Release workflow passes; specifically
format the code around the calls to crate::fs::resolve_local_version and
crate::fs::get_version_bin_dir in the install command (e.g., the let v =
crate::fs::resolve_local_version(&resolved_version)?; and let bin_dir =
crate::fs::get_version_bin_dir(&v)?; lines) and commit the reformatted file.
- Around line 82-91: Open the file write sequence in src/commands/install.rs and
remove the truncate(true) flag from OpenOptions so the file is opened without
truncation, then acquire the advisory lock via file.lock_exclusive() before
clearing the file and call file.set_len(0)? to truncate while holding the lock,
followed by writing the contents (using the existing BufWriter/writeln!/flush)
and file.unlock()?; apply the same change to the duplicated sequence in
src/commands/use_cmd.rs or refactor both into a shared helper (e.g., a
write_env_file_locked function) to centralize opening (without truncate),
locking (file.lock_exclusive()), truncating (file.set_len(0)?), writing
(writeln!/flush), and unlocking (file.unlock()?).

In `@src/network.rs`:
- Around line 49-61: The nested conditional block around the cache check
(variables read_res, cache_path, modified, elapsed, CACHE_DURATION, and the
serde_json::from_str call that returns Ok(versions)) is misformatted; run
rustfmt (cargo fmt) or reformat that block so it matches rustfmt style (reflow
indentation and spacing in the if-let chains) to satisfy cargo fmt --check in
CI.
- Around line 93-112: The code block around get_target_triple()/suffix/versions
has formatting that fails cargo fmt; run `cargo fmt` or reformat the snippet so
it matches rustfmt style (fix spacing/indentation around the let target =
get_target_triple()?; let suffix = format!("-cli-{}.tar.gz", target); the for
file in res { ... } block and the call to crate::utils::sort_versions(&mut
versions)) to satisfy rustfmt and CI.

---

Duplicate comments:
In `@src/shell.rs`:
- Around line 49-55: The temp env file generation using local
env_file=\"${{{}}}/{}_$(date +%s)_$$\" is racy; replace the manual filename
assembly with a per-invocation secure temp file created by mktemp (e.g. call
mktemp to produce env_file), use that path for PVM_ENV_UPDATE_PATH before
invoking command pvm, and keep the existing eval/cleanup logic but remove the
old collision-prone construction; update the same pattern wherever env_file is
built (the blocks referencing env_file, PVM_ENV_UPDATE_PATH and command pvm) and
ensure you still remove the temp file after eval and on error (and add a trap if
needed) so each wrapper invocation gets a unique file.

---

Nitpick comments:
In `@src/network.rs`:
- Around line 178-180: The code currently unwraps response.content_length() into
total_size which errors when Content-Length is missing; change total_size to an
Option<u64> (e.g., let total_size = response.content_length().ok(); or use
.context only for unexpected errors) and adjust the progress bar creation/usage
(the progress bar variable, e.g., pb) to accept an indeterminate/unknown length
when total_size is None (create spinner or progress bar with unknown length) and
ensure the download/read loop updates bytes read without assuming a known total;
update any places that used total_size as u64 to handle the Option (match or if
let Some(total) { set length } else { use indeterminate mode }).

In `@src/update.rs`:
- Around line 39-45: Replace the silent .ok() calls on critical I/O operations
so errors are propagated or at least surfaced: remove .ok() on file.set_len,
file.rewind, writer.flush, and file.unlock in the guard update logic (the block
creating BufWriter and writing the timestamp) and use the ? operator to return a
Result from the containing function (or map_err/log the error) so failures
aren’t ignored; ensure the containing function signature is updated to return
Result if needed and specifically propagate or handle file.unlock() error
instead of discarding it.
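The shape of that fix, propagating with `?` instead of discarding with `.ok()`, can be sketched as follows. The guard-file logic from src/update.rs is paraphrased, not copied, and the real code would also propagate the `unlock()` result the same way.

```rust
use std::fs::File;
use std::io::{BufWriter, Seek, SeekFrom, Write};

// Sketch of the suggested fix: return io::Result and use `?` so I/O
// failures surface to the caller instead of being silently swallowed.
fn write_guard_timestamp(file: &File, timestamp: u64) -> std::io::Result<()> {
    file.set_len(0)?; // was: file.set_len(0).ok();
    let mut writer = BufWriter::new(file);
    writer.seek(SeekFrom::Start(0))?; // was: file.rewind().ok();
    writeln!(writer, "{timestamp}")?;
    writer.flush()?; // was: writer.flush().ok();
    Ok(())
}
```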

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 843fd8fc-6613-4da5-b1f4-6e91e5b0a9eb

📥 Commits

Reviewing files that changed from the base of the PR and between e213c57 and af0ef46.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
  • .gitignore
  • Cargo.toml
  • build.rs
  • rust-toolchain.toml
  • src/commands/install.rs
  • src/commands/use_cmd.rs
  • src/fs.rs
  • src/network.rs
  • src/shell.rs
  • src/update.rs
  • src/utils.rs
✅ Files skipped from review due to trivial changes (1)
  • .gitignore
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/utils.rs
  • build.rs


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
src/network.rs (2)

177-179: Download will fail if the server omits the Content-Length header.

Some servers or CDN configurations may not provide a Content-Length header, especially for dynamically served content. The current code requires it and will error out with "Failed to get content length from server".

Consider falling back to an indeterminate progress bar (spinner) when content_length() returns None:

Suggested approach
-    let total_size = response
-        .content_length()
-        .context("Failed to get content length from server")?;
-
-    let pb = ProgressBar::new(total_size);
-    pb.set_style(ProgressStyle::default_bar()
-        .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
-        .progress_chars("#>-"));
+    let total_size = response.content_length();
+    
+    let pb = if let Some(size) = total_size {
+        let bar = ProgressBar::new(size);
+        bar.set_style(ProgressStyle::default_bar()
+            .template("{spinner:.green} [{elapsed_precise}] [{wide_bar:.cyan/blue}] {bytes}/{total_bytes} ({eta})")?
+            .progress_chars("#>-"));
+        bar
+    } else {
+        let bar = ProgressBar::new_spinner();
+        bar.set_style(ProgressStyle::default_spinner()
+            .template("{spinner:.green} [{elapsed_precise}] {bytes} downloaded")?);
+        bar
+    };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 177 - 179, The code currently unwraps
response.content_length() into total_size which errors when the header is
missing; change the logic to treat content_length() as an Option: if Some(len)
keep using total_size as before, otherwise create an indeterminate progress
indicator (spinner) instead of a length-based progress bar. Update all uses of
total_size (and the progress bar creation/updates, e.g., where you construct the
ProgressBar or call set_length) to handle the None case (use spinner.start() /
spinner.inc() or a ProgressStyle::template for unknown length) and avoid
returning an error from the download function when content_length() is None so
downloads succeed against servers without Content-Length.

186-196: Minor: the `downloaded` variable is redundant.

The `downloaded` variable tracks the same value as `buffer.len()`. You can simplify by using `buffer.len() as u64` directly:

Simplified approach
-    let mut downloaded: u64 = 0;
     let mut stream = response.bytes_stream();
     let mut buffer = Vec::new();

     while let Some(item) = stream.next().await {
         let chunk = item.context("Error while downloading chunk")?;
         buffer.extend_from_slice(&chunk);
-        let new = std::cmp::min(downloaded + (chunk.len() as u64), total_size);
-        downloaded = new;
-        pb.set_position(new);
+        pb.set_position(buffer.len() as u64);
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 186 - 196, Remove the redundant downloaded
variable and compute progress directly from buffer.len(); specifically delete
the declaration "let mut downloaded: u64 = 0" and the assignment "downloaded =
new", and within the loop replace the min/update logic with let new =
std::cmp::min(buffer.len() as u64 + (chunk.len() as u64), total_size) (or simply
compute new = std::cmp::min(buffer.len() as u64, total_size) after extending
buffer) and call pb.set_position(new); keep the existing stream iteration
(response.bytes_stream(), stream.next().await) and error handling (context)
unchanged.
src/update.rs (1)

60-64: Silently ignoring network errors may hide connectivity issues.

When network::resolve_version fails, the function returns Ok(None) without any indication of the failure. While this is acceptable for an optional update check, users with persistent network issues won't receive any feedback.

Consider logging a debug message when the resolution fails, or at minimum document this silent-failure behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/update.rs` around lines 60 - 64, The call to
network::resolve_version(&minor_prefix).await currently ignores errors and
returns Ok(None) silently; update the handling so that on Err(e) you emit a
debug/log message (e.g., using tracing::debug! or log::debug!) that includes the
error and context (minor_prefix and target_version) and then continue to return
Ok(None) to preserve behavior; modify the match/if-let around
network::resolve_version to capture Err(e) and log it rather than dropping it
silently.
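The surface-but-continue behavior the comment describes can be sketched without changing the function's contract. The function name and `Result` types here are illustrative, not the actual `resolve_version` signature; `eprintln!` is used to stay dependency-free where the review suggests `log::debug!` or `tracing::debug!`.

```rust
// Sketch: log the failure at debug level, but keep returning None so the
// optional update check never aborts the user's command.
fn latest_or_none(result: Result<String, String>, minor_prefix: &str) -> Option<String> {
    match result {
        Ok(version) => Some(version),
        Err(e) => {
            // was: silently mapped to None
            eprintln!("pvm: update check for {minor_prefix} failed: {e}");
            None
        }
    }
}
```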
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/commands/use_cmd.rs`:
- Around line 120-130: The current pattern in use_cmd.rs opens the env file with
.truncate(true) before calling file.lock_exclusive(), which causes a race where
the file is truncated prior to acquiring the exclusive lock; remove the
.truncate(true) flag from the OpenOptions call, call file.lock_exclusive()
immediately after open, then call file.set_len(0)? to truncate the file while
holding the lock, write and flush via the existing writer, and finally unlock;
additionally, factor the logic into a shared helper (e.g., write_env_file_locked
in src::fs) used by both use_cmd.rs and install.rs to avoid duplication and
ensure the correct locking/truncation sequence.

---

Nitpick comments:
In `@src/network.rs`:
- Around line 177-179: The code currently unwraps response.content_length() into
total_size which errors when the header is missing; change the logic to treat
content_length() as an Option: if Some(len) keep using total_size as before,
otherwise create an indeterminate progress indicator (spinner) instead of a
length-based progress bar. Update all uses of total_size (and the progress bar
creation/updates, e.g., where you construct the ProgressBar or call set_length)
to handle the None case (use spinner.start() / spinner.inc() or a
ProgressStyle::template for unknown length) and avoid returning an error from
the download function when content_length() is None so downloads succeed against
servers without Content-Length.
- Around line 186-196: Remove the redundant downloaded variable and compute
progress directly from buffer.len(); specifically delete the declaration "let
mut downloaded: u64 = 0" and the assignment "downloaded = new", and within the
loop replace the min/update logic with let new = std::cmp::min(buffer.len() as
u64 + (chunk.len() as u64), total_size) (or simply compute new =
std::cmp::min(buffer.len() as u64, total_size) after extending buffer) and call
pb.set_position(new); keep the existing stream iteration
(response.bytes_stream(), stream.next().await) and error handling (context)
unchanged.

In `@src/update.rs`:
- Around line 60-64: The call to network::resolve_version(&minor_prefix).await
currently ignores errors and returns Ok(None) silently; update the handling so
that on Err(e) you emit a debug/log message (e.g., using tracing::debug! or
log::debug!) that includes the error and context (minor_prefix and
target_version) and then continue to return Ok(None) to preserve behavior;
modify the match/if-let around network::resolve_version to capture Err(e) and
log it rather than dropping it silently.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d8ce650f-1cfb-48f0-8541-1861082d4c4c

📥 Commits

Reviewing files that changed from the base of the PR and between af0ef46 and 07cf984.

📒 Files selected for processing (4)
  • src/commands/install.rs
  • src/commands/use_cmd.rs
  • src/network.rs
  • src/update.rs


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/commands/use_cmd.rs (2)

58-63: ⚠️ Potential issue | 🟠 Major

This flow ignores the answer to install's "use now" prompt.

Install::call() already asks whether the new version should be used immediately in src/commands/install.rs (lines 57-97). `version = newer_version` happens unconditionally afterward, so use_cmd still switches even if the user answered "no" there. This path needs a non-interactive install helper, or install should return whether it already performed the switch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/use_cmd.rs` around lines 58 - 63, Install::call currently
prompts the user and the caller in use_cmd.rs ignores its answer; modify
Install::call (or add a new method on Install) to return a Result<bool, _> (or
similar) that indicates whether the install performed the "use now" switch, then
update the call site where install_cmd.call().await is invoked to capture that
boolean and only set version = newer_version when it is true; ensure the new
return type is propagated or provide a non-interactive helper like
Install::call_non_interactive that performs install without prompting and
returns whether it switched so use_cmd can decide whether to update its version
variable.
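A hypothetical sketch of that contract: install reports whether it already performed the "use now" switch, and use_cmd only adopts the newer version when that flag is true. `InstallOutcome`, `install`, and `resolve_active_version` are illustrative names, not the project's API.

```rust
// Illustrative return-value contract for the suggested fix.
struct InstallOutcome {
    switched: bool, // did install already activate the version?
}

fn install(_version: &str, user_says_yes: bool) -> InstallOutcome {
    // ... download + unpack would happen here ...
    InstallOutcome { switched: user_says_yes }
}

fn resolve_active_version(requested: &str, newer: &str, user_says_yes: bool) -> String {
    let outcome = install(newer, user_says_yes);
    if outcome.switched {
        newer.to_string() // install already switched; track the new version
    } else {
        requested.to_string() // user declined; keep the requested version
    }
}
```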

119-131: ⚠️ Potential issue | 🔴 Critical

Remove the unsafe environment mutation; rely on the env-update file approach already being written.

The unsafe block here uses the same unsound std::env::set_var pattern as in install.rs. In Rust 2024, std::env::set_var is explicitly marked unsafe because it mutates process-wide global state without proving single-threaded or exclusive access—the standard library docs state that on non-Windows platforms in multithreaded programs, the only safe option is to not use set_var/remove_var at all, since no thread can reliably guarantee exclusive access to the environment.

The code already correctly writes env exports to the file (lines 112–116), which will be evaluated by the shell hook wrapper. Remove the unsafe block (lines 119–131) and let child processes read the environment from the file instead of mutating the current process's global environment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/use_cmd.rs` around lines 119 - 131, Remove the unsafe
process-wide environment mutation in use_cmd.rs: delete the unsafe block that
calls std::env::set_var for MULTISHELL_PATH_VAR and PATH (the code that reads
std::env::var_os("PATH"), builds new_path with bin_dir, and calls
std::env::set_var("PATH", ...)). Rely on the env-update file already written
earlier (the env exports written at lines ~112–116) so child shells pick up
bin_dir via the shell hook wrapper instead of mutating the current process; do
not replace it with any other std::env::set_var/remove_var calls.
src/fs.rs (1)

188-229: ⚠️ Potential issue | 🟠 Major

Env-override tests use unsafe mutation without proper multi-threaded protection.

In Rust 2024, std::env::set_var and std::env::remove_var require that no other thread concurrently reads or writes the environment. A local Mutex only serializes access within the two tests that use it; it cannot prevent other tests running in parallel (which is the default in cargo test) or library code from reading PVM_DIR concurrently. On non-Windows platforms, this is unsound.

Move these tests to subprocess-based integration tests (e.g., using assert_cmd), serialize the entire test suite with --test-threads=1, or refactor the code under test to accept configuration without mutating the process environment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/fs.rs` around lines 188 - 229, The tests
test_get_pvm_dir_with_env_override and test_list_installed_versions mutate the
process env unsafely via ENV_LOCK and unsafe std::env::set_var/remove_var;
change them into subprocess-based integration tests instead: remove the ENV_LOCK
and unsafe env mutations from src/fs.rs tests and create integration tests
(e.g., in tests/ or using assert_cmd) that spawn a child process with PVM_DIR
set via Command::env("PVM_DIR", temp_dir.path()), create the versions dirs in
that temp_dir, run a small test helper or the binary which calls get_pvm_dir(),
get_versions_dir(), and list_installed_versions(), and assert on its stdout/exit
code; this avoids global env mutation and resolves the Rust 2024 thread-safety
issue.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/network.rs`:
- Line 38: The remote cache file currently uses a single key (REMOTE_CACHE_FILE)
at cache_path (let cache_path = pvm_dir.join(REMOTE_CACHE_FILE)) causing
cross-target pollution; modify caching in the functions that read/write the
versions cache to include the target triple from get_target_triple() (e.g.,
append or prepend it to REMOTE_CACHE_FILE to form a target-scoped filename) or
embed the target triple inside the cached payload and validate it on load
(reject and refresh if mismatched), and update all places that construct
cache_path and the load/save logic so the cached versions used by functions
handling versions are always scoped per target triple.
- Around line 80-91: The HTTP clients are created with Client::new() with no
timeouts; replace those calls with a reqwest::Client built via
reqwest::Client::builder() (ClientBuilder) and set both connect_timeout(...) and
timeout(...) to reasonable durations, then call build() to obtain the Client;
apply this change to both locations in this file that currently call
Client::new() so both the connect and overall request timeouts are enforced.

In `@src/update.rs`:
- Around line 31-35: The throttle check using `now - last_check` can underflow
if `last_check` is in the future; change the subtraction to a safe form (e.g.,
use `now.saturating_sub(last_check)` or `checked_sub` and handle None) when
comparing against `86400` in the block that parses `last_check` from `contents`,
so the guard in the `if let Ok(last_check)` branch correctly skips the update
when appropriate and still unlocks the file via
`fs4::fs_std::FileExt::unlock(&file)` before returning `Ok(None)`.
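The underflow-safe comparison can be isolated as a tiny helper. This is a sketch of the arithmetic only; the constant name and function are illustrative, and the real code also handles the file locking around it.

```rust
// If the stored timestamp is in the future (clock skew, hand-edited file),
// `now - last_check` would panic in debug builds or wrap in release;
// saturating_sub yields 0 instead, so the guard treats the check as
// "just done" and skips it.
const CHECK_INTERVAL_SECS: u64 = 86_400;

fn should_check_for_update(now: u64, last_check: u64) -> bool {
    now.saturating_sub(last_check) >= CHECK_INTERVAL_SECS
}
```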

---

Outside diff comments:
In `@src/commands/use_cmd.rs`:
- Around line 58-63: Install::call currently prompts the user and the caller in
use_cmd.rs ignores its answer; modify Install::call (or add a new method on
Install) to return a Result<bool, _> (or similar) that indicates whether the
install performed the "use now" switch, then update the call site where
install_cmd.call().await is invoked to capture that boolean and only set version
= newer_version when it is true; ensure the new return type is propagated or
provide a non-interactive helper like Install::call_non_interactive that
performs install without prompting and returns whether it switched so use_cmd
can decide whether to update its version variable.
- Around line 119-131: Remove the unsafe process-wide environment mutation in
use_cmd.rs: delete the unsafe block that calls std::env::set_var for
MULTISHELL_PATH_VAR and PATH (the code that reads std::env::var_os("PATH"),
builds new_path with bin_dir, and calls std::env::set_var("PATH", ...)). Rely on
the env-update file already written earlier (the env exports written at lines
~112–116) so child shells pick up bin_dir via the shell hook wrapper instead of
mutating the current process; do not replace it with any other
std::env::set_var/remove_var calls.

In `@src/fs.rs`:
- Around line 188-229: The tests test_get_pvm_dir_with_env_override and
test_list_installed_versions mutate the process env unsafely via ENV_LOCK and
unsafe std::env::set_var/remove_var; change them into subprocess-based
integration tests instead: remove the ENV_LOCK and unsafe env mutations from
src/fs.rs tests and create integration tests (e.g., in tests/ or using
assert_cmd) that spawn a child process with PVM_DIR set via
Command::env("PVM_DIR", temp_dir.path()), create the versions dirs in that
temp_dir, run a small test helper or the binary which calls get_pvm_dir(),
get_versions_dir(), and list_installed_versions(), and assert on its stdout/exit
code; this avoids global env mutation and resolves the Rust 2024 thread-safety
issue.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: af88eca4-d900-479a-95d9-af23f75828b9

📥 Commits

Reviewing files that changed from the base of the PR and between 07cf984 and 29f0b8e.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (6)
  • Cargo.toml
  • src/commands/install.rs
  • src/commands/use_cmd.rs
  • src/fs.rs
  • src/network.rs
  • src/update.rs

Fahl-Design and others added 9 commits April 28, 2026 22:30
…versioning

This change introduces:
- File locking for shared state files using fs4
- Dynamic versioning with commit ID and build time via build.rs
- Cross-platform support for macOS and ARM architectures
- Session-specific environment updates using PVM_SHELL_PID
- Safety guards and silent removal of temp files in shell wrappers
- Centralized constants and semantic version sorting logic
- Bug fixes in interactive menu and shell file discovery
| datasource | package | from   | to     |
| ---------- | ------- | ------ | ------ |
| crate      | tar     | 0.4.44 | 0.4.45 |

Signed-off-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
…versioning

This change introduces:
- File locking for shared state files using fs4
- Dynamic versioning with commit ID and build time via build.rs
- Cross-platform support for macOS and ARM architectures
- Session-specific environment updates using PVM_SHELL_PID
- Safety guards and silent removal of temp files in shell wrappers
- Centralized constants and semantic version sorting logic
- Bug fixes in interactive menu and shell file discovery

# Conflicts:
#	Cargo.toml
Upgraded all dependencies to their latest major versions using cargo-edit, updated the rust-toolchain to 1.95.0, applied clippy fixes (collapsed nested if statements, added truncate options to OpenOptions), and fixed breaking API changes in fs4.
@Fahl-Design Fahl-Design force-pushed the feat/concurrency-and-versioning branch from 29f0b8e to 6568efe Compare April 28, 2026 21:03

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (3)
src/network.rs (2)

38-38: ⚠️ Potential issue | 🟠 Major

Scope the cache file by target triple.

get_available_versions() now returns a target-filtered list, but the cache key is still the single remote_cache.json. Sharing one PVM_DIR across macOS/Linux or x86_64/aarch64 can reuse the wrong list for 24 hours and make resolve_version() pick a version whose tarball later 404s.

Suggested fix
-    let cache_path = pvm_dir.join(REMOTE_CACHE_FILE);
+    let target = get_target_triple()?;
+    let cache_path = pvm_dir.join(format!("{REMOTE_CACHE_FILE}.{target}"));
...
-    let target = get_target_triple()?;
     let suffix = format!("-cli-{}.tar.gz", target);

Also applies to: 89-90, 111-116

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` at line 38, The remote cache file is currently a single
REMOTE_CACHE_FILE (variable) stored via cache_path, causing cross-target reuse;
update cache key to include the target triple (e.g., append or interpolate the
target string into REMOTE_CACHE_FILE when computing cache_path in all places
where cache_path is constructed) so get_available_versions() and
resolve_version() read/write a target-scoped file; locate usages that build
cache_path (the occurrences around cache_path = pvm_dir.join(REMOTE_CACHE_FILE)
and the similar constructions later in the file) and change them to incorporate
the runtime target triple (from the same variable/parameter used to filter
versions) so each target has its own remote_cache.<target>.json.

75-86: ⚠️ Potential issue | 🟠 Major

Add bounded timeouts to both HTTP clients.

Both network paths still use Client::new(), so connect time and total request time are unbounded. A stalled API or download host can hang pvm indefinitely.

Suggested fix
-    let client = Client::new();
+    let client = Client::builder()
+        .connect_timeout(Duration::from_secs(10))
+        .timeout(Duration::from_secs(300))
+        .build()
+        .context("Failed to initialize HTTP client")?;
#!/bin/bash
# Verify the call sites still create reqwest clients without timeout configuration.
rg -n -C2 'Client::new\(\)|connect_timeout\(|\.timeout\(' src/network.rs

Also applies to: 164-171

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 75 - 86, The code calls Client::new() (e.g., the
client used to GET json_url built from BASE_URL) which leaves connect and total
request time unbounded; replace Client::new() with a configured reqwest::Client
built via Client::builder() and set at least a connect_timeout (e.g., 5–15s) and
a request timeout (.timeout(), e.g., 30–60s) before .build(), then use that
client for the .get(json_url) call (and any other call sites that currently call
Client::new(), including the other network path referenced); ensure you
propagate the same configured client to both code paths so stalled hosts cannot
hang the process.
src/shell.rs (1)

49-50: ⚠️ Potential issue | 🟠 Major

The env-update filename is still not unique enough.

Lines 49, 104, and 157 use $(date +%s)_$$ / $fish_pid as the unique suffix. Two backgrounded pvm calls started from the same shell within the same second will still target the same file, so one invocation can overwrite or remove the other's update. Please switch to a real temp-file primitive such as mktemp, or add a per-invocation random component.

Also applies to: 104-105, 157-157

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shell.rs` around lines 49 - 50, The env-update temp filename generation
(the local env_file variable used in the shell generation) is not unique enough;
replace the ad-hoc "$(date +%s)_$$" / "$fish_pid" suffix with a real temp-file
primitive (e.g., call mktemp with a template like mktemp
"${TMPDIR:-/tmp}/pvm_env_XXXXXX" or mktemp -t pvm_env on macOS) or append a
secure random component per invocation, then use that mktemp-created path for
env_file and ensure existing cleanup still runs; update every occurrence where
env_file is built (the local env_file assignments in the shell script templates)
and keep the removal logic that checks and rm -f the generated file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/commands/env.rs`:
- Around line 22-30: The env output currently interpolates raw values (PVM_DIR
via PVM_DIR_VAR) into s.set_env_var without escaping, so embedded quotes, $, or
backticks can break eval; update the set_env_var implementations in src/shell.rs
to first shell-escape the value according to the target shell (bash/zsh/fish) —
e.g. add a helper like escape_shell_value(value: &str, shell: Shell) and call it
from each set_env_var implementation before wrapping/formatting the export/let
statement so all special characters (", $, `, \, etc.) are safely escaped.
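For the POSIX shells (bash/zsh), single-quote escaping is the standard way to make a value inert under eval: everything between single quotes is literal, and embedded quotes are spliced back in as `'\''`. A minimal sketch, assuming the helper name from the review comment (fish needs a different rule, escaping `\` and `'`, and is not covered here):

```rust
// Sketch of escape_shell_value for bash/zsh: wrap in single quotes so
// $, backticks, spaces, and double quotes are all literal, and replace
// each embedded single quote with the '\'' splice.
fn escape_shell_value(value: &str) -> String {
    format!("'{}'", value.replace('\'', r#"'\''"#))
}
```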

In `@src/fs.rs`:
- Around line 49-50: get_aliased_versions is re-sorting using the old
split-and-parse comparator which diverges from the shared ordering used in
list_installed_versions; replace that local comparator with a call to the shared
sorter instead. Locate get_aliased_versions and remove the custom split/parse
comparator logic, collect the candidate versions into a Vec, call
crate::utils::sort_versions(&mut versions) to apply the canonical semver
ordering (same as list_installed_versions), then proceed to compute "latest" and
minor aliases from that sorted list so aliases always match the shared semver
order.

In `@src/network.rs`:
- Around line 189-200: The code currently accumulates the entire response into
memory via buffer/extend_from_slice; instead, create a temporary file (e.g. with
tempfile::NamedTempFile or tempfile::tempfile()) and stream each chunk from
response.bytes_stream() into that file (using an async or blocking write
consistent with the surrounding code), updating pb.set_position with the file's
current byte position; after the stream finishes, seek the temp file back to
start and pass the File (not a Cursor<Vec>) into the extraction logic (replace
uses of cursor and buffer), and ensure the temp file lives long enough for
extraction and is cleaned up on error/success.
- Around line 42-48: The code currently ignores lock acquisition errors (using
.ok()) and proceeds with cache I/O, reintroducing races; modify the read and
write paths so that File::open followed by file.lock_shared() (read path) and
file.lock() (write path) check the Result and abort the cache access if the lock
call fails — do not call read_to_string, set_len, or write_all when lock
acquisition returns Err, and ensure file.unlock() is only called when the
corresponding lock succeeded; update the logic around
lock_shared/read_to_string/unlock and lock/set_len/write_all/unlock to handle
and propagate or log the lock error instead of swallowing it.

---

Duplicate comments:
In `@src/network.rs`:
- Line 38: The remote cache file is currently a single REMOTE_CACHE_FILE
(variable) stored via cache_path, causing cross-target reuse; update cache key
to include the target triple (e.g., append or interpolate the target string into
REMOTE_CACHE_FILE when computing cache_path in all places where cache_path is
constructed) so get_available_versions() and resolve_version() read/write a
target-scoped file; locate usages that build cache_path (the occurrences around
cache_path = pvm_dir.join(REMOTE_CACHE_FILE) and the similar constructions later
in the file) and change them to incorporate the runtime target triple (from the
same variable/parameter used to filter versions) so each target has its own
remote_cache.<target>.json.
- Around line 75-86: The code calls Client::new() (e.g., the client used to GET
json_url built from BASE_URL) which leaves connect and total request time
unbounded; replace Client::new() with a configured reqwest::Client built via
Client::builder() and set at least a connect_timeout (e.g., 5–15s) and a request
timeout (.timeout(), e.g., 30–60s) before .build(), then use that client for the
.get(json_url) call (and any other call sites that currently call Client::new(),
including the other network path referenced); ensure you propagate the same
configured client to both code paths so stalled hosts cannot hang the process.

In `@src/shell.rs`:
- Around line 49-50: The env-update temp filename generation (the local env_file
variable used in the shell generation) is not unique enough; replace the ad-hoc
"$(date +%s)_$$" / "$fish_pid" suffix with a real temp-file primitive (e.g.,
call mktemp with a template like mktemp "${TMPDIR:-/tmp}/pvm_env_XXXXXX" or
mktemp -t pvm_env on macOS) or append a secure random component per invocation,
then use that mktemp-created path for env_file and ensure existing cleanup still
runs; update every occurrence where env_file is built (the local env_file
assignments in the shell script templates) and keep the removal logic that
checks and rm -f the generated file.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 99b8411f-9448-4ceb-8953-4b474b279741

📥 Commits

Reviewing files that changed from the base of the PR and between 29f0b8e and 6568efe.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (16)
  • .gitignore
  • Cargo.toml
  • build.rs
  • rust-toolchain.toml
  • src/cli.rs
  • src/commands/env.rs
  • src/commands/install.rs
  • src/commands/use_cmd.rs
  • src/constants.rs
  • src/fs.rs
  • src/main.rs
  • src/network.rs
  • src/shell.rs
  • src/update.rs
  • src/utils.rs
  • tests/cli.rs
✅ Files skipped from review due to trivial changes (7)
  • .gitignore
  • rust-toolchain.toml
  • src/main.rs
  • src/cli.rs
  • tests/cli.rs
  • src/constants.rs
  • build.rs
🚧 Files skipped from review as they are similar to previous changes (4)
  • src/utils.rs
  • src/commands/install.rs
  • Cargo.toml
  • src/update.rs

Comment thread src/commands/env.rs
Comment thread src/fs.rs
Comment thread src/network.rs
Comment thread src/network.rs Outdated
- Switch to dl.static-php.dev/static-php-cli/bulk JSON endpoint
- Add support for selecting and installing cli, fpm, and micro packages
- Update ls-remote and list commands to display package availability
- Update GEMINI.md with new endpoint findings and dev rules
@Fahl-Design Fahl-Design force-pushed the feat/concurrency-and-versioning branch from 63eea06 to 4bfbff4 Compare April 28, 2026 22:08

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (5)
src/network.rs (4)

44-48: ⚠️ Potential issue | 🟠 Major

Do not proceed with cache I/O when lock acquisition fails.

Line 44 and Line 135 ignore lock errors and continue reading/writing anyway. That defeats the concurrency protection and can corrupt cache state.

As per coding guidelines: "Use fs4 file locking when writing to .env or remote_cache.json files."

Also applies to: 135-140

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 44 - 48, The code currently ignores the result
of file.lock_shared() (and the corresponding exclusive lock at lines ~135-140)
and proceeds with I/O, risking cache corruption; change both places to check the
Result from file.lock_shared() / file.lock_exclusive() and abort the cache
read/write when locking fails (e.g., return an Err or early return from the
surrounding function), logging the lock error via the existing logger and
performing no further I/O on the file; ensure file.unlock() is only called if the
lock succeeded so that read_to_string / write_all are not invoked on an unlocked
file descriptor.

75-75: ⚠️ Potential issue | 🟠 Major

Add bounded HTTP timeouts on both reqwest clients.

Line 75 and Line 197 use Client::new() without explicit connect/overall timeouts, so hangs can block indefinitely.

#!/bin/bash
# Verify both client constructions are still unbounded Client::new() calls.
rg -n -C2 '\bClient::new\s*\(' src/network.rs

Also applies to: 197-197

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` at line 75, The reqwest clients are created with
Client::new() (variable `client` in src/network.rs) which leaves connect/overall
timeouts unbounded; replace those constructions with reqwest::Client::builder()
to set a sensible connect_timeout and overall timeout (e.g., Duration::from_secs
values) via .connect_timeout(...) and .timeout(...), call .build() and handle
the Result, and use the resulting client variable where `client` is currently
used (apply the same change to the second Client::new() occurrence elsewhere in
this file).
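The actual fix would build the client once via `reqwest::Client::builder().connect_timeout(Duration::from_secs(10)).timeout(Duration::from_secs(60)).build()` and share it across both call sites. Since reqwest is not available in a standalone sketch, here is the same bounded-deadline principle at the socket level with `std::net` only (address and durations illustrative):

```rust
use std::net::{TcpStream, ToSocketAddrs};
use std::time::Duration;

/// Bounded connect + read deadlines: the std::net analogue of reqwest's
/// .connect_timeout(..) and .timeout(..), so a stalled host errors out
/// instead of hanging the process indefinitely.
fn connect_bounded(addr: &str) -> std::io::Result<TcpStream> {
    let sock_addr = addr
        .to_socket_addrs()?
        .next()
        .ok_or_else(|| std::io::Error::new(std::io::ErrorKind::NotFound, "no address"))?;
    let stream = TcpStream::connect_timeout(&sock_addr, Duration::from_secs(5))?;
    stream.set_read_timeout(Some(Duration::from_secs(30)))?; // bound reads too
    Ok(stream)
}

fn main() {
    // 127.0.0.1:1 is almost never listening, so this fails fast rather
    // than hanging -- exactly the behaviour an explicit timeout buys.
    assert!(connect_bounded("127.0.0.1:1").is_err());
}
```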

38-38: ⚠️ Potential issue | 🟠 Major

Scope cache entries by target triple.

Line 38 still uses a single remote_cache.json, while results are target-filtered at Lines 89–90. Sharing PVM_DIR across architectures can serve stale/wrong target data for 24h.

Also applies to: 89-90

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` at line 38, Cache is currently a single file
(REMOTE_CACHE_FILE) at pvm_dir leading to cross-target contamination; change
cache_path construction to include the target triple (e.g.,
format!("{}-{}.json", REMOTE_CACHE_FILE_BASE, target_triple)) so each
architecture has its own remote_cache file. Update all places that read/write
remote cache (the code that builds cache_path and the logic that filters results
at the target-filtering site) to derive the filename from the same target_triple
variable so reads and writes use the same per-target file instead of a shared
remote_cache.json.

225-237: ⚠️ Potential issue | 🟠 Major

Avoid buffering full archives in memory during download.

Lines 226–237 materialize the entire tarball in Vec<u8>. This scales memory with artifact size and can fail on low-memory hosts; stream to a temp file and extract from file handle.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 225 - 237, The code currently accumulates the
entire response into a Vec<u8> (buffer) and then creates a Cursor/GzDecoder from
it; change this to stream the response directly to a temporary file to avoid
OOM: create a tempfile (e.g., tempfile::NamedTempFile or std::fs::File in /tmp),
write each chunk from stream (the variable stream and chunk handling) into that
file while updating pb.set_position(...), flush and seek the file back to start,
then create the GzDecoder from the file handle instead of Cursor(buffer) and
proceed with extraction (remember to remove the temp file after successful
extraction or on error). Ensure you keep the same progress updates using pb and
retain the package context in the finish_with_message call.
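The spool-to-disk pattern can be sketched with std-only types (the real code would feed the returned handle to `GzDecoder::new` and update the progress bar where the comment indicates; path and chunk size are illustrative):

```rust
use std::fs::File;
use std::io::{self, Read, Seek, SeekFrom, Write};

/// Stream a download-style reader to a file in fixed-size chunks, then
/// rewind for extraction. Peak memory stays at the chunk size instead
/// of scaling with the archive size.
fn spool_to_file<R: Read>(mut src: R, path: &str) -> io::Result<File> {
    let mut file = File::create(path)?;
    let mut chunk = [0u8; 8192];
    loop {
        let n = src.read(&mut chunk)?;
        if n == 0 {
            break;
        }
        file.write_all(&chunk[..n])?; // pb.set_position(..) would go here
    }
    file.flush()?;
    // Reopen for reading and rewind so the decoder starts at byte 0.
    let mut file = File::open(path)?;
    file.seek(SeekFrom::Start(0))?;
    Ok(file)
}

fn main() -> io::Result<()> {
    let payload = vec![42u8; 100_000]; // stand-in for the response body
    let mut out = spool_to_file(&payload[..], "/tmp/pvm_demo_spool.bin")?;
    let mut back = Vec::new();
    out.read_to_end(&mut back)?;
    assert_eq!(back.len(), 100_000);
    std::fs::remove_file("/tmp/pvm_demo_spool.bin")?;
    Ok(())
}
```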
src/shell.rs (1)

18-20: ⚠️ Potential issue | 🟠 Major

Escape env var values before emitting shell commands.

Lines 19, 77, and 132 still inject raw values into shell-quoted strings. This is unsafe for eval "$(pvm env)" when values contain ", $, `, or backslashes.

#!/bin/bash
# Verify the current implementation still interpolates raw values.
rg -n -C2 'fn set_env_var|format!\("export \{\}=\\"\{\}\\""\)|format!\("set -gx \{\} \\"\{\}\\""\)' src/shell.rs

Also applies to: 76-78, 131-133

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/shell.rs` around lines 18 - 20, The emitted shell commands currently
interpolate raw env values (see set_env_var and the other format! calls that
produce export "..." and set -gx "..." strings), so implement a single helper
(e.g., escape_shell_value) that escapes backslashes, double quotes, dollar signs
and backticks (replace \ -> \\, " -> \", $ -> \$, ` -> \`) and use it wherever
values are injected (the functions that call format!("export {}=\"{}\"", ...),
format!("set -gx {} \"{}\"", ...) and the similar format! at the third site) so
all emitted shell/fish commands pass the escaped value instead of the raw
string.
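A sketch of the escaping helper the comment describes, covering exactly the four characters that stay special inside a double-quoted POSIX string (the later commit on this branch went with single-quote quoting instead, which is an equally valid choice):

```rust
/// Escape a value for interpolation inside a double-quoted POSIX shell
/// string: backslash, double quote, dollar sign, and backtick are the
/// only characters with special meaning there.
fn escape_shell_value(value: &str) -> String {
    let mut out = String::with_capacity(value.len());
    for c in value.chars() {
        if matches!(c, '\\' | '"' | '$' | '`') {
            out.push('\\');
        }
        out.push(c);
    }
    out
}

fn main() {
    let v = escape_shell_value(r#"a"b$c`d\e"#);
    assert_eq!(v, r#"a\"b\$c\`d\\e"#);
    // Safe even for hostile PVM_DIR values:
    println!("export PVM_DIR=\"{}\"", v);
}
```

Every `format!("export {}=\"{}\"", ...)` and `format!("set -gx {} \"{}\"", ...)` site would pass its value through this helper before emitting.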
🧹 Nitpick comments (1)
src/commands/install.rs (1)

129-132: Use the resolved version in the “use later” command hint.

Lines 129–132 currently print pvm use {version} (user input). For aliases/prefixes, this is non-deterministic; prefer resolved_version so the hint references the exact version just installed.

Suggested patch
-            format!("pvm use {}", version).bold()
+            format!("pvm use {}", resolved_version).bold()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/install.rs` around lines 129 - 132, The hint currently prints
the user-provided variable version in the "pvm use {version}" message; change it
to use the resolved_version so the hint references the exact installed version.
Locate the format! call that builds "pvm use {}" (using the variable version)
and replace format!("pvm use {}", version) with format!("pvm use {}",
resolved_version), ensuring resolved_version is in scope where the message is
constructed (and if not, use the resolved value already computed in the install
flow).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@GEMINI.md`:
- Line 31: Update the user-facing platform name casing in the documentation by
replacing the backticked token `macos` with the canonical `macOS` in the
Supported OS line (the line that currently reads "**Supported OS:** `linux`,
`macos`."); ensure the surrounding formatting and backticks remain unchanged so
the line becomes "**Supported OS:** `linux`, `macOS`."

In `@src/commands/ls_remote.rs`:
- Around line 16-25: The version-prefix filter using v.starts_with(prefix) is
too fuzzy (e.g., "8.2" matches "8.20.x"); update the retain closure around
versions_info (used after network::get_available_versions()) to perform
segment-aware matching: split prefix and v on '.' into segments and require all
prefix segments to equal the corresponding version segments (or allow exact
equality v == prefix), e.g. compute prefix_segments = prefix.split('.') and in
retain check that v.split('.') has at least as many segments and zip+all to
compare segments instead of using v.starts_with(prefix).

In `@src/shell.rs`:
- Around line 49-50: The per-call temporary filename uses date +%s and PID
(env_file) which can collide; change the creation of env_file in the shell
generation logic to use a secure unique temp name (e.g., mktemp-style or include
nanoseconds/random suffix) instead of "${{}}/_$(date +%s)_$$"; update all
occurrences that set env_file (the env_file variable assignments around the
blocks where env files are created) to produce a uniquely-named file per call to
avoid races and remove the reliance on second-resolution timestamps.

---

Duplicate comments:
In `@src/network.rs`:
- Around line 44-48: The code currently ignores the result of file.lock_shared()
(and the corresponding exclusive lock at lines ~135-140) and proceeds with I/O,
risking cache corruption; change both places to check the Result from
file.lock_shared() / file.lock_exclusive() and abort the cache read/write when
locking fails (e.g., return an Err or early return from the surrounding
function), logging the lock error via the existing logger and performing no
further I/O on the file; ensure file.unlock() is only called if the lock succeeded
so that read_to_string / write_all are not invoked on an unlocked file
descriptor.
- Line 75: The reqwest clients are created with Client::new() (variable `client`
in src/network.rs) which leaves connect/overall timeouts unbounded; replace
those constructions with reqwest::Client::builder() to set a sensible
connect_timeout and overall timeout (e.g., Duration::from_secs values) via
.connect_timeout(...) and .timeout(...), call .build() and handle the Result,
and use the resulting client variable where `client` is currently used (apply
the same change to the second Client::new() occurrence elsewhere in this file).
- Line 38: Cache is currently a single file (REMOTE_CACHE_FILE) at pvm_dir
leading to cross-target contamination; change cache_path construction to include
the target triple (e.g., format!("{}-{}.json", REMOTE_CACHE_FILE_BASE,
target_triple)) so each architecture has its own remote_cache file. Update all
places that read/write remote cache (the code that builds cache_path and the
logic that filters results at the target-filtering site) to derive the filename
from the same target_triple variable so reads and writes use the same per-target
file instead of a shared remote_cache.json.
- Around line 225-237: The code currently accumulates the entire response into a
Vec<u8> (buffer) and then creates a Cursor/GzDecoder from it; change this to
stream the response directly to a temporary file to avoid OOM: create a tempfile
(e.g., tempfile::NamedTempFile or std::fs::File in /tmp), write each chunk from
stream (the variable stream and chunk handling) into that file while updating
pb.set_position(...), flush and seek the file back to start, then create the
GzDecoder from the file handle instead of Cursor(buffer) and proceed with
extraction (remember to remove the temp file after successful extraction or on
error). Ensure you keep the same progress updates using pb and retain the
package context in the finish_with_message call.

In `@src/shell.rs`:
- Around line 18-20: The emitted shell commands currently interpolate raw env
values (see set_env_var and the other format! calls that produce export "..."
and set -gx "..." strings), so implement a single helper (e.g.,
escape_shell_value) that escapes backslashes, double quotes, dollar signs and
backticks (replace \ -> \\, " -> \", $ -> \$, ` -> \`) and use it wherever
values are injected (the functions that call format!("export {}=\"{}\"", ...),
format!("set -gx {} \"{}\"", ...) and the similar format! at the third site) so
all emitted shell/fish commands pass the escaped value instead of the raw
string.

---

Nitpick comments:
In `@src/commands/install.rs`:
- Around line 129-132: The hint currently prints the user-provided variable
version in the "pvm use {version}" message; change it to use the
resolved_version so the hint references the exact installed version. Locate the
format! call that builds "pvm use {}" (using the variable version) and replace
format!("pvm use {}", version) with format!("pvm use {}", resolved_version),
ensuring resolved_version is in scope where the message is constructed (and if
not, use the resolved value already computed in the install flow).
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 41bb5702-7006-4bf2-a607-0a4bf9cd15c9

📥 Commits

Reviewing files that changed from the base of the PR and between 29f0b8e and 6bdd79c.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (20)
  • .gitignore
  • Cargo.toml
  • GEMINI.md
  • build.rs
  • rust-toolchain.toml
  • src/cli.rs
  • src/commands/env.rs
  • src/commands/init.rs
  • src/commands/install.rs
  • src/commands/ls.rs
  • src/commands/ls_remote.rs
  • src/commands/use_cmd.rs
  • src/constants.rs
  • src/fs.rs
  • src/main.rs
  • src/network.rs
  • src/shell.rs
  • src/update.rs
  • src/utils.rs
  • tests/cli.rs
✅ Files skipped from review due to trivial changes (6)
  • rust-toolchain.toml
  • tests/cli.rs
  • src/cli.rs
  • .gitignore
  • src/main.rs
  • build.rs
🚧 Files skipped from review as they are similar to previous changes (4)
  • src/utils.rs
  • src/constants.rs
  • src/update.rs
  • src/fs.rs

Comment thread GEMINI.md Outdated
Comment thread src/commands/ls_remote.rs
Comment thread src/shell.rs Outdated
- shell: fish wrapper used bash-only ${VAR} and $(...) syntax; switch
  to fish-native $VAR with string-concat command sub, replace
  'eval (cat ...)' with 'source'
- network: gate cache read/write on successful lock_shared/lock so
  unsynchronized I/O cannot proceed when locking fails
- use_cmd: propagate .php-version write errors via with_context
  instead of swallowing them
- install: correct stale "first-package only" comment to reflect
  actual atomic cleanup behavior
- build: check git command exit status and skip empty stdout; watch
  the resolved HEAD ref file instead of the refs/heads/ directory
- chore: drop unused fs4 dep; std::fs::File::{lock, unlock, lock_shared}
  have been stable since Rust 1.89

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (6)
GEMINI.md (1)

31-31: ⚠️ Potential issue | 🟡 Minor

Fix user-facing platform casing: macos → macOS.

static-php-cli Integration lists Supported OS as macos, but the canonical casing is macOS. This was also requested in a prior review, so please update it again for consistency.

✅ Proposed fix
- - **Supported OS:** `linux`, `macos`.
+ - **Supported OS:** `linux`, `macOS`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@GEMINI.md` at line 31, Update the user-facing platform casing in GEMINI.md by
changing the Supported OS entry that currently reads "**Supported OS:** `linux`,
`macos`" to use the canonical casing "**Supported OS:** `linux`, `macOS`" (this
is the "Supported OS" line in the static-php-cli Integration block).
src/fs.rs (1)

123-128: ⚠️ Potential issue | 🟡 Minor

Reuse the shared sorter for aliases too.

This local split/parse comparator can still disagree with crate::utils::sort_versions() for semver edge cases, so latest and minor aliases can diverge from list_installed_versions().

♻️ Minimal fix
-    // Sort semantic versions cleanly
-    installed.sort_by(|a, b| {
-        let a_parts: Vec<u32> = a.split('.').filter_map(|s| s.parse().ok()).collect();
-        let b_parts: Vec<u32> = b.split('.').filter_map(|s| s.parse().ok()).collect();
-        a_parts.cmp(&b_parts)
-    });
+    crate::utils::sort_versions(&mut installed);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/fs.rs` around lines 123 - 128, The local comparator used in
installed.sort_by should be replaced with the shared sorter
crate::utils::sort_versions so semver edge cases are consistent; call
crate::utils::sort_versions(&mut installed) (or the equivalent public function)
instead of manually splitting/parsing, and likewise use that same function when
creating alias lists (e.g., where latest/minor aliases are computed) to ensure
list_installed_versions() and alias resolution use identical ordering; add an
import if necessary and remove the split/parse closure.
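The shared sorter's internals are not shown in the diff; a plausible stdlib sketch of what a common `sort_versions` could look like, so both call sites agree on ordering (`unwrap_or(0)` for non-numeric segments is an assumption made here):

```rust
/// Numeric sort for dotted version strings, shared by the installed-version
/// listing and alias resolution so "latest"/"8.3" can never diverge.
/// Comparing segment-wise as u32 avoids the lexicographic trap where
/// "8.10.0" would sort before "8.2.7".
fn sort_versions(versions: &mut Vec<String>) {
    versions.sort_by_key(|v| {
        v.split('.')
            .map(|s| s.parse::<u32>().unwrap_or(0))
            .collect::<Vec<u32>>()
    });
}

fn main() {
    let mut vs: Vec<String> = vec!["8.10.0".into(), "8.2.7".into(), "8.2.12".into()];
    sort_versions(&mut vs);
    assert_eq!(vs, vec!["8.2.7", "8.2.12", "8.10.0"]);
}
```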
src/network.rs (3)

75-86: ⚠️ Potential issue | 🟠 Major

Add explicit timeouts to both HTTP clients.

Both call sites use Client::new(), so slow connects or stalled bodies can hang these commands indefinitely. Build the client with bounded connect_timeout() and timeout() at both sites.

In reqwest latest docs, what are the default connect and overall request timeouts for reqwest::Client::new(), and what Client::builder() calls should be used to set both explicitly?

Also applies to: 197-207


225-237: ⚠️ Potential issue | 🟠 Major

Stream the archive to disk instead of buffering it all in memory.

buffer.extend_from_slice() makes peak memory proportional to the tarball size, so larger installs can fail on lower-memory machines. A temp file keeps memory flat and still works with GzDecoder/Archive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 225 - 237, Currently the code collects the
entire response into a Vec buffer (buffer.extend_from_slice) and then creates a
Cursor for GzDecoder, which makes peak memory proportional to the tarball;
instead stream chunks directly to a temporary file (e.g.,
tempfile::NamedTempFile or create a File), write each chunk inside the while let
Some(item) = response.bytes_stream().next().await loop and call pb.set_position
with the file length, then finish the progress bar and seek the temp file back
to start (or reopen it) before passing the file (or a BufReader<File>) into
GzDecoder::new instead of GzDecoder::new(cursor); remove the in-memory buffer
and ensure the temp file is persisted long enough for Archive to read.

38-38: ⚠️ Potential issue | 🟠 Major

Scope the remote cache by target triple.

A single remote_cache.json can serve a filtered Linux list back to macOS (or vice versa) for 24 hours, because the target filter is applied before caching but the cache key is still global.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` at line 38, The cache file is currently global (let
cache_path = pvm_dir.join(REMOTE_CACHE_FILE)) so different target triples share
one remote_cache.json; change this to include the target triple in the filename
(e.g. build a filename like format!("{}-{}.json", REMOTE_CACHE_FILE_BASE,
target_triple) or format!("remote_cache-{}.json", target_triple)) and use
pvm_dir.join(...) instead of REMOTE_CACHE_FILE; reference the existing
cache_path, pvm_dir and REMOTE_CACHE_FILE symbols and ensure the target_triple
value you use (from the existing context or passed into the function) is
sanitized for filesystem use.
src/commands/ls_remote.rs (1)

18-20: ⚠️ Potential issue | 🟠 Major

Make the version-prefix filter segment-aware.

starts_with(prefix) still lets 8.2 match 8.20.x. This can install or display the wrong release for common prefixes.

🛠️ Minimal fix
         if let Some(prefix) = &self.version_prefix {
-            versions_info.retain(|(v, _)| v.starts_with(prefix) || v == prefix);
+            versions_info.retain(|(v, _)| v == prefix || v.starts_with(&format!("{prefix}.")));
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/ls_remote.rs` around lines 18 - 20, The current filter in
versions_info.retain(|(v, _)| v.starts_with(prefix) || v == prefix) is not
segment-aware (e.g., "8.2" matches "8.20.x"); change the closure to split both
the candidate version string `v` and `self.version_prefix` on '.' and compare
only whole segments: compute prefix_segments =
prefix.split('.').collect::<Vec<_>>() and for each (v, _) check either v ==
prefix or v.split('.').take(prefix_segments.len()).collect::<Vec<_>>() ==
prefix_segments; update the retain call in the ls_remote.rs function that uses
self.version_prefix accordingly so prefix matching compares dot-separated
segments rather than raw string starts_with.
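The segment-aware matching described above, as a self-contained sketch (the function name is invented for illustration):

```rust
/// True when `prefix` matches whole dot-separated segments of `version`,
/// so "8.2" matches "8.2.7" but never "8.20.1".
fn matches_prefix(version: &str, prefix: &str) -> bool {
    if version == prefix {
        return true;
    }
    let p: Vec<&str> = prefix.split('.').collect();
    let v: Vec<&str> = version.split('.').collect();
    v.len() >= p.len() && v[..p.len()] == p[..]
}

fn main() {
    assert!(matches_prefix("8.2.7", "8.2"));
    assert!(matches_prefix("8.2", "8.2"));
    assert!(!matches_prefix("8.20.1", "8.2")); // the starts_with false positive
    assert!(!matches_prefix("8.2.7", "8.3"));
}
```

The minimal diff shown in the earlier comment (`v == prefix || v.starts_with(&format!("{prefix}."))`) is an equivalent, shorter form when versions are plain dotted numbers.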
🧹 Nitpick comments (1)
GEMINI.md (1)

13-13: Nit: tighten the informal “until the end of electronics” phrasing.

Line 13 is a bit jokey for a “CRITICAL SAFETY DIRECTIVE”. Consider replacing it with straightforward wording like “forever” / “permanently” to reduce ambiguity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@GEMINI.md` at line 13, The phrase "until the end of electronics" in GEMINI.md
(CRITICAL SAFETY DIRECTIVE at line 13) is informal and ambiguous; replace it
with a clear, unambiguous term such as "permanently" or "indefinitely" (e.g.,
change the sentence to use "permanently") to tighten tone and remove jokey
wording.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/commands/install.rs`:
- Around line 61-79: The code currently creates/uses dest =
versions_dir.join(&resolved_version) and on any package download failure calls
std::fs::remove_dir_all(&dest), which can wipe a previously working install;
instead implement an atomic install by creating a temporary staging directory
(e.g., dest_temp = dest.with_extension("tmp") or dest.join(".tmp_install")),
call std::fs::create_dir_all(&dest_temp), pass dest_temp to
network::download_and_extract in place of dest, and on each package failure
remove only the staging dir (std::fs::remove_dir_all(&dest_temp).ok()) and bail;
after all packages succeed, rename/move the staging dir into place
(std::fs::rename(&dest_temp, &dest) or replace existing with an atomic swap) and
remove any old temp; update error handling in the block that calls
network::download_and_extract to reference dest_temp instead of dest and avoid
deleting dest.
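The stage-then-rename flow can be sketched as follows; `fs::write` stands in for `network::download_and_extract`, and the staging-name scheme is one illustrative choice, not the code's actual layout:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Hidden sibling directory used as the staging area for `dest`.
fn staging_dir(dest: &Path) -> PathBuf {
    let name = dest.file_name().and_then(|n| n.to_str()).unwrap_or("version");
    dest.with_file_name(format!(".{name}.tmp"))
}

/// Stage-then-rename install: packages land in the staging dir, and
/// `dest` is only replaced after every package succeeded, so a failed
/// follow-up package can never wipe a previously working install.
fn install_atomically(dest: &Path, packages: &[&str]) -> io::Result<()> {
    let staging = staging_dir(dest);
    fs::create_dir_all(&staging)?;
    for pkg in packages {
        // stand-in for network::download_and_extract(pkg, &staging)
        if let Err(e) = fs::write(staging.join(pkg), b"binary") {
            let _ = fs::remove_dir_all(&staging); // drop staging only, never dest
            return Err(e);
        }
    }
    if dest.exists() {
        fs::remove_dir_all(dest)?; // replace only after all packages succeeded
    }
    fs::rename(&staging, dest) // near-atomic swap on the same filesystem
}

fn main() -> io::Result<()> {
    let dest = Path::new("/tmp/pvm_demo_versions/8.3.1");
    install_atomically(dest, &["cli", "fpm"])?;
    assert!(dest.join("cli").exists() && dest.join("fpm").exists());
    fs::remove_dir_all("/tmp/pvm_demo_versions")?;
    Ok(())
}
```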

In `@src/fs.rs`:
- Around line 1-4: Add fs4 to Cargo.toml with the sync feature enabled, import
the FileExt trait from fs4::fs_std::FileExt in src/fs.rs (so std::fs::File gains
the locking methods), and replace any calls to lock() on File with
lock_exclusive() (e.g., where .env or remote_cache.json are written).
Specifically update the top-of-file imports to include FileExt and change the
file locking calls in the functions that write those files to use
lock_exclusive() so compilation succeeds and fs4 locking is used.

In `@src/network.rs`:
- Around line 5-6: Import the fs4 extension trait by adding use
fs4::fs_std::FileExt; at the top of src/network.rs and replace any calls to
.lock() on std::fs::File instances with .lock_exclusive() (e.g., the call
currently at line ~134 and other occurrences in the ranges ~41–48 and ~128–140);
ensure the file handles are of a type that implements the fs4 extension so the
new method resolves and the exclusive lock is applied when writing .env or
remote_cache.json.

---

Duplicate comments:
In `@GEMINI.md`:
- Line 31: Update the user-facing platform casing in GEMINI.md by changing the
Supported OS entry that currently reads "**Supported OS:** `linux`, `macos`" to
use the canonical casing "**Supported OS:** `linux`, `macOS`" (this is the
"Supported OS" line in the static-php-cli Integration block).

In `@src/commands/ls_remote.rs`:
- Around line 18-20: The current filter in versions_info.retain(|(v, _)|
v.starts_with(prefix) || v == prefix) is not segment-aware (e.g., "8.2" matches
"8.20.x"); change the closure to split both the candidate version string `v` and
`self.version_prefix` on '.' and compare only whole segments: compute
prefix_segments = prefix.split('.').collect::<Vec<_>>() and for each (v, _)
check either v == prefix or
v.split('.').take(prefix_segments.len()).collect::<Vec<_>>() == prefix_segments;
update the retain call in the ls_remote.rs function that uses
self.version_prefix accordingly so prefix matching compares dot-separated
segments rather than raw string starts_with.

In `@src/fs.rs`:
- Around line 123-128: The local comparator used in installed.sort_by should be
replaced with the shared sorter crate::utils::sort_versions so semver edge cases
are consistent; call crate::utils::sort_versions(&mut installed) (or the
equivalent public function) instead of manually splitting/parsing, and likewise
use that same function when creating alias lists (e.g., where latest/minor
aliases are computed) to ensure list_installed_versions() and alias resolution
use identical ordering; add an import if necessary and remove the split/parse
closure.

In `@src/network.rs`:
- Around line 225-237: Currently the code collects the entire response into a
Vec buffer (buffer.extend_from_slice) and then creates a Cursor for GzDecoder,
which makes peak memory proportional to the tarball; instead stream chunks
directly to a temporary file (e.g., tempfile::NamedTempFile or create a File),
write each chunk inside the while let Some(item) =
response.bytes_stream().next().await loop and call pb.set_position with the file
length, then finish the progress bar and seek the temp file back to start (or
reopen it) before passing the file (or a BufReader<File>) into GzDecoder::new
instead of GzDecoder::new(cursor); remove the in-memory buffer and ensure the
temp file is persisted long enough for Archive to read.
- Line 38: The cache file is currently global (let cache_path =
pvm_dir.join(REMOTE_CACHE_FILE)) so different target triples share one
remote_cache.json; change this to include the target triple in the filename
(e.g. build a filename like format!("{}-{}.json", REMOTE_CACHE_FILE_BASE,
target_triple) or format!("remote_cache-{}.json", target_triple)) and use
pvm_dir.join(...) instead of REMOTE_CACHE_FILE; reference the existing
cache_path, pvm_dir and REMOTE_CACHE_FILE symbols and ensure the target_triple
value you use (from the existing context or passed into the function) is
sanitized for filesystem use.

---

Nitpick comments:
In `@GEMINI.md`:
- Line 13: The phrase "until the end of electronics" in GEMINI.md (CRITICAL
SAFETY DIRECTIVE at line 13) is informal and ambiguous; replace it with a clear,
unambiguous term such as "permanently" or "indefinitely" (e.g., change the
sentence to use "permanently") to tighten tone and remove jokey wording.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: c567fe57-b8c6-446d-9e2e-cf22fb2e2fd7

📥 Commits

Reviewing files that changed from the base of the PR and between 29f0b8e and 65f0e3f.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (20)
  • .gitignore
  • Cargo.toml
  • GEMINI.md
  • build.rs
  • rust-toolchain.toml
  • src/cli.rs
  • src/commands/env.rs
  • src/commands/init.rs
  • src/commands/install.rs
  • src/commands/ls.rs
  • src/commands/ls_remote.rs
  • src/commands/use_cmd.rs
  • src/constants.rs
  • src/fs.rs
  • src/main.rs
  • src/network.rs
  • src/shell.rs
  • src/update.rs
  • src/utils.rs
  • tests/cli.rs
✅ Files skipped from review due to trivial changes (7)
  • .gitignore
  • src/main.rs
  • rust-toolchain.toml
  • src/cli.rs
  • build.rs
  • src/update.rs
  • src/shell.rs
🚧 Files skipped from review as they are similar to previous changes (4)
  • tests/cli.rs
  • src/commands/env.rs
  • src/constants.rs
  • Cargo.toml

Comment thread src/commands/install.rs
Comment thread src/commands/install.rs Outdated
Comment thread src/fs.rs
Comment thread src/network.rs
Filename used $(date +%s)_$$ which collides when same shell PID launches
two concurrent pvm invocations within the same second (e.g. background
jobs). Add $RANDOM$RANDOM (bash/zsh) and (random)(random) (fish) so two
parallel invocations cannot land on the same path and clobber each
other's env file.
- network: add shared http_client() helper with bounded
  connect_timeout + total timeout; replace bare Client::new()
- network: stream tarball into a tempfile and extract from there
  so peak memory no longer scales with archive size
- update: use saturating_sub for the 24h throttle so a future
  guard timestamp (clock skew, restored snapshot, manual edit)
  cannot underflow and disable the check
- ls_remote: make the version-prefix filter segment-aware so
  "8.2" no longer matches "8.20.x"
- install: do not wipe a pre-existing version dir if a follow-up
  package install fails; only the cli package is offered for
  immediate "use now"
- shell: shell-escape values in set_env_var/path via single-quote
  quoting so embedded "$, backticks, quotes in PVM_DIR cannot
  break eval; add coverage for hostile inputs
- docs: GEMINI.md uses canonical macOS casing
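The saturating_sub throttle from the update bullet above can be sketched like this (function name and constant are illustrative):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

const DAY_SECS: u64 = 24 * 60 * 60;

/// 24h throttle that cannot underflow: with saturating_sub, a guard
/// timestamp from the future (clock skew, restored snapshot, manual
/// edit) yields an elapsed time of 0 and simply keeps the check
/// throttled, instead of panicking or wrapping and disabling it.
fn should_check(now_secs: u64, guard_secs: u64) -> bool {
    now_secs.saturating_sub(guard_secs) >= DAY_SECS
}

fn main() {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .map(|d| d.as_secs())
        .unwrap_or(0);
    assert!(should_check(now, 0)); // never checked before: run it
    assert!(!should_check(now, now)); // just checked: throttled
    assert!(!should_check(now, now + 3600)); // future guard: still throttled
}
```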
@Fahl-Design
Member Author

@coderabbitai review

@coderabbitai

coderabbitai Bot commented Apr 29, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/commands/use_cmd.rs (1)

112-123: ⚠️ Potential issue | 🟠 Major

Avoid process-global env mutation from async command code.

Line 112 updates PATH and PVM_MULTISHELL_PATH with unsafe std::env::set_var. On Unix this is only sound if no other threads can read or write the environment, which is hard to guarantee in a Tokio CLI. Prefer keeping shell state in the env-update file and set env explicitly on spawned Commands instead of mutating the global process env.

#!/bin/bash
# Verify the runtime flavor and all process-global env mutations before keeping this unsafe block.
rg -n 'tokio::main|Builder::new_multi_thread|Builder::new_current_thread|set_var\(' src
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/use_cmd.rs` around lines 112 - 123, The unsafe block mutates
process-global env (std::env::set_var for MULTISHELL_PATH_VAR and PATH) which is
unsound in async/Tokio contexts; instead, remove the unsafe set_var calls in the
block and (1) persist the bin_dir value to the existing env-update file logic
(write PVM_MULTISHELL_PATH / MULTISHELL_PATH_VAR there) and (2) when launching
child processes, set the environment explicitly on each std::process::Command
using Command::env or Command::envs (e.g., where commands are spawned in this
module), adding PATH with bin_dir prepended and PVM_MULTISHELL_PATH set for that
Command only; keep global env untouched. Ensure references to
MULTISHELL_PATH_VAR and any functions that spawn commands (the
Command::spawn/Command::status call sites in this file) are updated to pass
these env entries.
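The per-child approach the review recommends can be sketched with std only. This is a minimal illustration, not PVM's actual code: `bin_dir`, the helper name, and the `PVM_MULTISHELL_PATH` variable usage here are stand-ins for whatever `use_cmd.rs` actually does.

```rust
use std::path::Path;
use std::process::Command;

/// Build a Command with PATH prepended and the multishell variable set
/// for that child only, leaving the parent process environment untouched.
/// Illustrative sketch; the helper name and variable layout are assumptions.
fn command_with_pvm_env(program: &str, bin_dir: &Path) -> Command {
    let old_path = std::env::var("PATH").unwrap_or_default();
    let new_path = format!("{}:{}", bin_dir.display(), old_path);
    let mut cmd = Command::new(program);
    // Command::env scopes the mutation to the spawned child; no
    // process-global std::env::set_var is involved.
    cmd.env("PATH", new_path).env("PVM_MULTISHELL_PATH", bin_dir);
    cmd
}

fn main() {
    let out = command_with_pvm_env("sh", Path::new("/tmp/pvm-demo/bin"))
        .arg("-c")
        .arg("printf %s \"$PVM_MULTISHELL_PATH\"")
        .output()
        .expect("spawn sh");
    // The child sees the variable; the parent env stays clean.
    println!("child sees: {}", String::from_utf8_lossy(&out.stdout));
    println!(
        "parent unset: {}",
        std::env::var("PVM_MULTISHELL_PATH").is_err()
    );
}
```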
♻️ Duplicate comments (2)
src/network.rs (2)

47-48: ⚠️ Potential issue | 🟠 Major

Scope the remote cache to the target triple.

The remote list is filtered by get_target_triple(), but the cache key is still the single remote_cache.json. Reusing one PVM_DIR across macOS/Linux or x86_64/aarch64 can serve the wrong package list for up to 24 hours.

💡 Suggested direction
-    let cache_path = pvm_dir.join(REMOTE_CACHE_FILE);
+    let target = get_target_triple()?;
+    let cache_path = pvm_dir.join(format!("{REMOTE_CACHE_FILE}.{target}"));
@@
-    let target = get_target_triple()?;
     let suffix = format!("-{}.tar.gz", target);

Also applies to: 99-100

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 47 - 48, The cache filename is not scoped to the
target triple, causing cross-target cache reuse; change creation of cache_path
by calling get_target_triple() (the same function used when filtering the remote
list) and append or interpolate that triple into the REMOTE_CACHE_FILE name
(e.g., remote_cache-{triple}.json) so pvm_dir.join(...) uses the target-scoped
filename; update all other places that reference REMOTE_CACHE_FILE (including
the other occurrences that build cache_path) to use the new target-scoped
filename creation logic so read/write use the same per-target key.
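For context, the dynamic detection the PR summary describes (`std::env::consts`) combined with a target-scoped cache name can be sketched as below. The triple format and the helper name mirror the review's wording but are assumptions, not PVM's actual implementation.

```rust
use std::env::consts::{ARCH, OS};

/// Hypothetical get_target_triple()-style helper built on std::env::consts.
/// Produces e.g. "linux-x86_64" or "macos-aarch64"; the exact naming scheme
/// PVM's remote uses is an assumption here.
fn get_target_triple() -> String {
    format!("{}-{}", OS, ARCH)
}

fn main() {
    let triple = get_target_triple();
    // Per-target cache filename, as the review suggests, so a shared
    // PVM_DIR cannot serve one target's package list to another.
    let cache_file = format!("remote_cache-{}.json", triple);
    println!("{cache_file}");
}
```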

5-6: ⚠️ Potential issue | 🔴 Critical

Import fs4::fs_std::FileExt and switch the writer path to lock_exclusive().

std::fs::File does not provide lock_shared, lock, or unlock on its own, and fs4 does not expose a plain lock() method. As written, the cache-locking path does not compile.

🔧 Minimal fix
 use futures_util::StreamExt;
+use fs4::fs_std::FileExt;
 use reqwest::Client;
@@
-            && file.lock().is_ok()
+            && file.lock_exclusive().is_ok()
#!/bin/bash
# Verify the missing trait import and the lock calls used in src/network.rs.
sed -n '1,30p' src/network.rs
rg -n 'lock_shared\(|\.lock\(|unlock\(' src/network.rs

Also applies to: 52-58, 138-150

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/network.rs` around lines 5 - 6, Add the fs4 file-ext trait and switch to
the exclusive lock method: import the trait with use fs4::fs_std::FileExt; then
replace any calls to writer.lock() with writer.lock_exclusive(), and ensure
existing calls to lock_shared() and unlock() use the FileExt trait methods
(lock_shared(), lock_exclusive(), unlock()) so the cache-locking paths (where
writer is used) compile correctly.
🧹 Nitpick comments (1)
src/commands/ls_remote.rs (1)

19-19: Optional: avoid repeated string allocation in the retain predicate.

format!("{}.", prefix) is rebuilt for every element. You can precompute it once before retain for a small efficiency/readability win.

Suggested tweak
 if let Some(prefix) = &self.version_prefix {
-    versions_info.retain(|(v, _)| v == prefix || v.starts_with(&format!("{}.", prefix)));
+    let dotted_prefix = format!("{prefix}.");
+    versions_info.retain(|(v, _)| v == prefix || v.starts_with(&dotted_prefix));
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/ls_remote.rs` at line 19, The retain closure rebuilds
format!("{}.", prefix) for every element causing repeated allocations; fix this
by computing a single String (e.g., let prefix_dot = format!("{}.", prefix))
before calling versions_info.retain and then use versions_info.retain(|(v, _)| v
== prefix || v.starts_with(&prefix_dot)) so the format call is done once and the
predicate uses the precomputed value (refer to the versions_info.retain call and
the prefix variable).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@GEMINI.md`:
- Around line 16-20: Docs mismatch: update the filesystem section to reflect
runtime behavior by noting that get_pvm_dir() uses dirs::data_local_dir() (so on
macOS pvm lives under ~/Library/Application Support rather than ~/.local/share)
and replace references to a single .env with the actual env update filename
pattern returned by get_env_update_path() (e.g., .env_update or
.env_update_<pid>); adjust the bullet items for $PVM_DIR and the env file entry
accordingly and ensure any other mentions (like the one around line 49) use the
same naming/pattern.

---

Outside diff comments:
In `@src/commands/use_cmd.rs`:
- Around line 112-123: The unsafe block mutates process-global env
(std::env::set_var for MULTISHELL_PATH_VAR and PATH) which is unsound in
async/Tokio contexts; instead, remove the unsafe set_var calls in the block and
(1) persist the bin_dir value to the existing env-update file logic (write
PVM_MULTISHELL_PATH / MULTISHELL_PATH_VAR there) and (2) when launching child
processes, set the environment explicitly on each std::process::Command using
Command::env or Command::envs (e.g., where commands are spawned in this module),
adding PATH with bin_dir prepended and PVM_MULTISHELL_PATH set for that Command
only; keep global env untouched. Ensure references to MULTISHELL_PATH_VAR and
any functions that spawn commands (the Command::spawn/Command::status call sites
in this file) are updated to pass these env entries.

---

Duplicate comments:
In `@src/network.rs`:
- Around line 47-48: The cache filename is not scoped to the target triple,
causing cross-target cache reuse; change creation of cache_path by calling
get_target_triple() (the same function used when filtering the remote list) and
append or interpolate that triple into the REMOTE_CACHE_FILE name (e.g.,
remote_cache-{triple}.json) so pvm_dir.join(...) uses the target-scoped
filename; update all other places that reference REMOTE_CACHE_FILE (including
the other occurrences that build cache_path) to use the new target-scoped
filename creation logic so read/write use the same per-target key.
- Around line 5-6: Add the fs4 file-ext trait and switch to the exclusive lock
method: import the trait with use fs4::fs_std::FileExt; then replace any calls
to writer.lock() with writer.lock_exclusive(), and ensure existing calls to
lock_shared() and unlock() use the FileExt trait methods (lock_shared(),
lock_exclusive(), unlock()) so the cache-locking paths (where writer is used)
compile correctly.

---

Nitpick comments:
In `@src/commands/ls_remote.rs`:
- Line 19: The retain closure rebuilds format!("{}.", prefix) for every element
causing repeated allocations; fix this by computing a single String (e.g., let
prefix_dot = format!("{}.", prefix)) before calling versions_info.retain and
then use versions_info.retain(|(v, _)| v == prefix ||
v.starts_with(&prefix_dot)) so the format call is done once and the predicate
uses the precomputed value (refer to the versions_info.retain call and the
prefix variable).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 289c983b-ec5b-4a79-9237-cd7c9c637237

📥 Commits

Reviewing files that changed from the base of the PR and between 29f0b8e and cf7dc57.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (20)
  • .gitignore
  • Cargo.toml
  • GEMINI.md
  • build.rs
  • rust-toolchain.toml
  • src/cli.rs
  • src/commands/env.rs
  • src/commands/init.rs
  • src/commands/install.rs
  • src/commands/ls.rs
  • src/commands/ls_remote.rs
  • src/commands/use_cmd.rs
  • src/constants.rs
  • src/fs.rs
  • src/main.rs
  • src/network.rs
  • src/shell.rs
  • src/update.rs
  • src/utils.rs
  • tests/cli.rs
✅ Files skipped from review due to trivial changes (6)
  • rust-toolchain.toml
  • src/main.rs
  • tests/cli.rs
  • src/cli.rs
  • .gitignore
  • build.rs
🚧 Files skipped from review as they are similar to previous changes (5)
  • src/constants.rs
  • src/update.rs
  • src/commands/install.rs
  • src/fs.rs
  • src/shell.rs

Follow-up commit on this branch addressing the review comments (the GEMINI.md comment thread is now outdated):
- network: scope remote_cache filename per target triple so a shared
  PVM_DIR (NFS, dotfile sync) cannot serve linux-x86_64 entries to an
  aarch64 host
- use_cmd, install: remove unsafe std::env::set_var of PATH and
  PVM_MULTISHELL_PATH; the wrapper sources env_file into the parent
  shell on exit, and set_var is unsound under the multi-threaded tokio
  runtime
- ls_remote: hoist 'prefix.' allocation outside retain so the predicate
  does not reallocate per element
- docs: fix GEMINI.md PVM_DIR default for macOS, env file naming, and
  the data-integrity guideline to point at std::fs::File locking
  (fs4 was dropped in an earlier commit)
@Fahl-Design Fahl-Design enabled auto-merge (rebase) April 29, 2026 20:23
@Fahl-Design Fahl-Design disabled auto-merge April 29, 2026 20:23
@Fahl-Design Fahl-Design merged commit 0f9c7f0 into main Apr 29, 2026
3 of 4 checks passed
Fahl-Design added a commit that referenced this pull request Apr 29, 2026
- network: add shared http_client() helper with bounded
  connect_timeout + total timeout; replace bare Client::new()
- network: stream tarball into a tempfile and extract from there
  so peak memory no longer scales with archive size
- update: use saturating_sub for the 24h throttle so a future
  guard timestamp (clock skew, restored snapshot, manual edit)
  cannot underflow and disable the check
- ls_remote: make the version-prefix filter segment-aware so
  "8.2" no longer matches "8.20.x"
- install: do not wipe a pre-existing version dir if a follow-up
  package install fails; only the cli package is offered for
  immediate "use now"
- shell: shell-escape values in set_env_var/path via single-quote
  quoting so embedded "$, backticks, quotes in PVM_DIR cannot
  break eval; add coverage for hostile inputs
- docs: GEMINI.md uses canonical macOS casing
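The single-quote quoting described in the shell commit above can be sketched as a small helper: wrap the value in single quotes and replace each embedded `'` with `'\''`, so `$`, backticks, and double quotes are taken literally by `eval`. The function name is illustrative, not PVM's actual `shell.rs` API.

```rust
/// POSIX single-quote escaping for shell values. Inside single quotes
/// nothing is expanded, so only the quote character itself needs the
/// close-escape-reopen dance ('\''). Illustrative helper, not PVM's API.
fn shell_single_quote(value: &str) -> String {
    format!("'{}'", value.replace('\'', r"'\''"))
}

fn main() {
    // A hostile PVM_DIR value with expansion and quoting characters.
    let hostile = r#"/tmp/$(rm -rf ~)/`echo pwned`/it's"#;
    println!("export PVM_DIR={}", shell_single_quote(hostile));
}
```

Round-tripping such a value through `sh -c 'printf %s …'` should reproduce it byte-for-byte, which is the property the "coverage for hostile inputs" tests would assert.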
@Fahl-Design Fahl-Design deleted the feat/concurrency-and-versioning branch April 29, 2026 20:23
webproject-bot Bot pushed a commit that referenced this pull request Apr 29, 2026
## [1.1.0](v1.0.4...v1.1.0) (2026-04-29)

### Features

* improve concurrency safety, cross-platform support and dynamic versioning ([85e6c80](85e6c80))
* improve concurrency safety, cross-platform support and dynamic versioning ([3bc8179](3bc8179))
* support multiple PHP packages via new bulk API ([5f20206](5f20206))

### Bug Fixes

* address CodeRabbit review findings ([05f9621](05f9621))
* address remaining CodeRabbit findings on PR [#6](#6) ([e6130ed](e6130ed))
* address second-round CodeRabbit review on PR [#6](#6) ([0f9c7f0](0f9c7f0))
* resolve clippy::collapsible-if lint in network.rs ([4923dc1](4923dc1))
* **shell:** add RANDOM entropy to env_file names ([19efe4c](19efe4c))
@webproject-bot

🎉 This PR is included in version 1.1.0 🎉

The release is available on:

Your semantic-release bot 📦🚀

