
bug: Missing node-llama-cpp dependency in HAOS add-on (Local Embeddings failure) #136

@XBold

Description


Pre-flight checks

  • I updated to the latest add-on version and restarted it.
  • I checked the docs/troubleshooting section first.

What happened?

Running openclaw memory status fails with an error because the node-llama-cpp package is missing from the environment. This prevents use of the default "local" memory search provider.
Because this is a managed HAOS add-on, users cannot easily install missing npm packages into the system directory (/usr/lib/node_modules/) by hand.

What did you expect to happen?

Since local embeddings are a primary feature and the configuration defaults to the local provider, the necessary bindings (node-llama-cpp) should be bundled within the add-on container image.
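
One way to meet this expectation is to build and bundle node-llama-cpp when the add-on container image is built. The fragment below is a hypothetical sketch, not the add-on's actual build: the base image, the build-tool package list, and the global install layout are all assumptions. Installing node-llama-cpp globally places it as a sibling of openclaw under /usr/lib/node_modules, where Node's module resolution walk can find it.

```dockerfile
# Hypothetical sketch only — base image, packages, and paths are assumptions,
# not the add-on's actual Dockerfile.
FROM node:24-alpine

# Toolchain needed to compile the llama.cpp native bindings.
RUN apk add --no-cache python3 make g++ cmake git

# Install OpenClaw and node-llama-cpp side by side under
# /usr/lib/node_modules so the embeddings module can resolve the package.
RUN npm install -g openclaw node-llama-cpp
```

Baking the dependency into the image also avoids the permission problems seen when users attempt an install into /config/.node_global at runtime.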

Steps to reproduce

  1. Install or update the OpenClaw Assistant add-on on HAOS.
  2. Open the Web Terminal from the add-on page.
  3. Run the command: openclaw memory status
  4. Observe the stack trace indicating Cannot find package 'node-llama-cpp'.

Alternatively, simply run node-llama-cpp directly inside the Web Terminal on the add-on page; it fails with a permission error (see the logs below).

Add-on version

0.5.72

OpenClaw version (if known)

2026.5.2

Access mode

custom

Relevant add-on configuration (redacted)

"agents": {
  "defaults": {
    "memorySearch": {
      "enabled": true,
      "provider": "local",
      "local": {
        "modelPath": "/config/.openclaw/models/embeddinggemma-300M-Q8_0.gguf",
        "contextSize": 2048
      }
    }
  }
}

Add-on logs

root@17e0cc66-openclaw-assistant:/# openclaw memory status 

🦞 OpenClaw 2026.5.2 (8b2a6e5) — It's not "failing," it's "discovering new ways to configure the same thing wrong."

[openclaw] Failed to start CLI: Error: Local embeddings unavailable.
Reason: optional dependency node-llama-cpp is missing (or failed to install).
Detail: Cannot find package 'node-llama-cpp' imported from /usr/lib/node_modules/openclaw/dist/memory-core-host-engine-embeddings-CWg_CuWu.js
Did you mean to import "node-llama-cpp/dist/index.js"?
To enable local embeddings:
1) Use Node 24 (recommended for installs/updates; Node 22 LTS, currently 22.14+, remains supported)
2) Install node-llama-cpp next to the OpenClaw package or source checkout
3) If you use pnpm: pnpm approve-builds (select node-llama-cpp), then pnpm rebuild node-llama-cpp
Or set agents.defaults.memorySearch.provider = "github-copilot" (remote).
Or set agents.defaults.memorySearch.provider = "openai" (remote).
Or set agents.defaults.memorySearch.provider = "gemini" (remote).
Or set agents.defaults.memorySearch.provider = "voyage" (remote).
Or set agents.defaults.memorySearch.provider = "mistral" (remote).
Or set agents.defaults.memorySearch.provider = "deepinfra" (remote).
Or set agents.defaults.memorySearch.provider = "bedrock" (remote).
    at createEmbeddingProvider (file:///usr/lib/node_modules/openclaw/dist/manager-CgRVbrYO.js:117:19)
    at async MemoryIndexManager.loadProviderResult (file:///usr/lib/node_modules/openclaw/dist/manager-CgRVbrYO.js:2601:10)
    at async file:///usr/lib/node_modules/openclaw/dist/manager-CgRVbrYO.js:2705:27
    at async MemoryIndexManager.ensureProviderInitialized (file:///usr/lib/node_modules/openclaw/dist/manager-CgRVbrYO.js:2715:4)
    at async MemoryIndexManager.probeVectorAvailability (file:///usr/lib/node_modules/openclaw/dist/manager-CgRVbrYO.js:3071:3)
    at async Object.run (file:///usr/lib/node_modules/openclaw/dist/cli.runtime-CufqL73u.js:434:11)
    at async withManager (file:///usr/lib/node_modules/openclaw/dist/cli-utils-BCrh4eAL.js:10:3)
    at async withMemoryManagerForAgent (file:///usr/lib/node_modules/openclaw/dist/cli.runtime-CufqL73u.js:242:2)
    at async Module.runMemoryStatus (file:///usr/lib/node_modules/openclaw/dist/cli.runtime-CufqL73u.js:388:34)
    at async runMemoryStatus (file:///usr/lib/node_modules/openclaw/dist/cli-A84sFaRL.js:13:2)
root@17e0cc66-openclaw-assistant:/# node-llama-cpp
bash: /config/.node_global/bin/node-llama-cpp: Permission denied

Additional context

This is critical for users who want to keep their memory/embeddings entirely local to their Home Assistant instance without relying on external APIs like OpenAI or Gemini. In a containerized HAOS environment, the dependency management must be handled by the add-on image build process.
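
As an interim workaround (at the cost of the local-only goal described above), the remote-provider suggestion from the error message maps directly onto the configuration shown earlier. A sketch, assuming an API key for the chosen provider is already configured for the add-on:

```json
"agents": {
  "defaults": {
    "memorySearch": {
      "enabled": true,
      "provider": "openai"
    }
  }
}
```

Any of the remote providers listed in the error output (github-copilot, openai, gemini, voyage, mistral, deepinfra, bedrock) can be substituted for "openai" here.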


Labels

bug (Something isn't working)
