# Troubleshooting
Common issues and how to fix them.
## Prerequisites

### Node.js is Required
MyLocalCLI requires Node.js 18 or higher.
Check your version:

```bash
node --version
```
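The command prints a `v`-prefixed version; anything at `v18` or above works. For example (your exact output will differ):

```bash
$ node --version
v20.11.0
```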
Install Node.js:

- Download from nodejs.org (LTS version recommended)
- Or use a version manager (see the example below):
  - Windows: nvm-windows
  - macOS/Linux: nvm
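With a version manager, a sketch like the following installs and activates the current LTS release (standard nvm usage on macOS/Linux; nvm-windows syntax differs slightly, e.g. `nvm install lts`):

```bash
# Install and switch to the latest LTS release
nvm install --lts
nvm use --lts

# Confirm the active version is 18 or higher
node --version
```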
## Windows Issues

### PowerShell Execution Policy Error
Error:

```
mlc : File C:\...\mlc.ps1 cannot be loaded because running scripts is disabled on this system.
```

Fix: Run PowerShell as Administrator and execute:

```powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
```
Then try `mlc` again.
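If it still fails, confirm the policy change took effect by listing the effective policies; this is a standard PowerShell cmdlet, not specific to MyLocalCLI:

```powershell
# The CurrentUser row should now read RemoteSigned
Get-ExecutionPolicy -List
```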
### 'mlc' is not recognized

Error:

```
'mlc' is not recognized as an internal or external command
```
Causes & Fixes:

- **npm not in PATH:**
  - Close and reopen your terminal
  - Or run `npm config get prefix` and add that path to your system PATH (see the check below)
- **npm bin not linked globally:**

  ```powershell
  npm config set prefix "C:\Users\YourName\AppData\Roaming\npm"
  ```

- **Try using npx instead:**

  ```powershell
  npx mylocalcli
  npx mylocalcli init
  ```
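To see where (if anywhere) the command resolves, you can ask the shell directly; if the lookup comes back empty and the npm prefix directory is missing from your PATH, that confirms the cause:

```powershell
# PowerShell: prints the full path to mlc if it is on PATH
Get-Command mlc

# cmd.exe equivalent (use where.exe from PowerShell)
where.exe mlc

# The directory this prints must appear in your PATH
npm config get prefix
```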
### Permission Denied on Install

Error:

```
EACCES: permission denied
```

Fix: Don't use sudo with npm. Instead, fix npm permissions:

```bash
# Option 1: Change npm's default directory
mkdir ~/.npm-global
npm config set prefix '~/.npm-global'

# Add to ~/.bashrc or ~/.zshrc:
export PATH=~/.npm-global/bin:$PATH
```
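After adding the export line, reload your shell configuration and retry the install without sudo (package name as used with npx above):

```bash
# Reload the config file you edited (use ~/.zshrc on zsh)
source ~/.bashrc

# Retry the global install; it now writes to ~/.npm-global
npm install -g mylocalcli
```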
## Connection Issues

### "Connection refused" Error

Error:

```
Error: connect ECONNREFUSED 127.0.0.1:1234
```
Causes & Fixes:

- **LM Studio server not running:**
  - Open LM Studio → Load a model → Start Local Server
- **Ollama not running:**

  ```bash
  ollama serve
  ```

- **Wrong port configured:**
  - LM Studio default: `1234`
  - Ollama default: `11434`
  - Run `mlc init` to reconfigure
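A quick way to tell which side is at fault is to probe the server directly; both tools expose an HTTP API on their default port, so a plain request distinguishes "server not running" from "CLI pointed at the wrong port" (endpoints shown are the standard LM Studio OpenAI-compatible API and the Ollama API):

```bash
# LM Studio: returns a JSON list of loaded models when the server is up
curl http://127.0.0.1:1234/v1/models

# Ollama: returns the installed models as JSON when the server is up
curl http://127.0.0.1:11434/api/tags
```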
### API Key Invalid (OpenRouter/Groq)

Error:

```
Error: 401 Unauthorized
```
Fix:

- Check your API key is correct
- Run `mlc init` and re-enter your API key
- Verify the key works on the provider's website (see the direct test below)
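You can also test the key outside the CLI with a direct request, assuming the providers' standard endpoints (replace `$API_KEY` with your key); a 401 here confirms the key itself is bad:

```bash
# Groq: the model list requires a valid key
curl -H "Authorization: Bearer $API_KEY" https://api.groq.com/openai/v1/models

# OpenRouter: returns metadata about the key itself
curl -H "Authorization: Bearer $API_KEY" https://openrouter.ai/api/v1/auth/key
```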
## Model Issues

### "Model not found" Error

For LM Studio:

- Make sure a model is loaded in LM Studio before starting the server
For Ollama:

```bash
# List available models
ollama list

# Pull a model if needed
ollama pull llama3.2
```
### Slow Responses
Possible causes:

- **Model too large for your hardware:**
  - Try a smaller model (7B instead of 70B)
  - Use quantized versions (Q4_K_M)
- **Not using GPU:**
  - LM Studio: Check the GPU layers setting
  - Ollama: Ensure CUDA/Metal is enabled (see the check below)
- **Context too long:**
  - Run `/clear` to reset the conversation
  - Use smaller files in context
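For the GPU case with Ollama, `ollama ps` (a standard Ollama command) shows whether a running model is actually offloaded:

```bash
# The PROCESSOR column shows the CPU/GPU split;
# "100% GPU" means the model is fully offloaded
ollama ps
```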
## Configuration Issues

### Config File Corrupted
Fix: Delete and recreate:

```
# Windows
del %USERPROFILE%\.mylocalcli\config.json

# macOS/Linux
rm ~/.mylocalcli/config.json

# Then reconfigure
mlc init
```
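If you want to confirm the file is actually corrupted before deleting it, Node (already required by the CLI) can try to parse it; a thrown error pinpoints the problem (macOS/Linux path shown):

```bash
# Prints "ok" if the config parses as JSON, throws with details otherwise
node -e "JSON.parse(require('fs').readFileSync(process.env.HOME + '/.mylocalcli/config.json', 'utf8')); console.log('ok')"
```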
### Skills Not Loading
Check:

- Ensure the `src/skills/builtin/` folder exists
- Restart the CLI after adding custom skills
- Run `/skills` to see loaded skills
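To verify the folder is there and populated, list it from the directory you run the CLI from (assuming `src/skills/builtin/` is relative to the project root, as the path suggests):

```bash
# Should list the built-in skill files
ls src/skills/builtin/
```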
## Getting More Help
- GitHub Issues: Report a bug
- README: Full documentation
## Quick Fixes Summary
| Problem | Quick Fix |
|---|---|
| Node.js not installed | Download from nodejs.org |
| PowerShell blocks scripts | `Set-ExecutionPolicy RemoteSigned -Scope CurrentUser` |
| `mlc` not found | Use `npx mylocalcli` or fix npm PATH |
| Connection refused | Start the LM Studio/Ollama server |
| API key invalid | Run `mlc init` and re-enter the key |
| Config corrupted | Delete `~/.mylocalcli/config.json` |