- Overview
- Key Features
- Quick Start
- Installation
- Usage Examples
- Configuration
- Supported AI Models
- How It Works
- Comparison
- Roadmap
- Building and Distribution
- Contributing
- Community & Support
- System Requirements
- Author
- Troubleshooting
- License
CodeModel CLI (`cw`) provides a single unified interface for working with multiple AI code models from different providers. Instead of learning a different CLI for each AI service, you define profiles for your preferred provider/model combinations and switch between them seamlessly, maintaining a consistent workflow.
Whether you're using OpenAI's GPT-4, Anthropic's Claude, Google's Gemini, or any other AI coding assistant, CodeModel CLI simplifies your development process with AI.
- Multi-Provider Support: Work with any AI code model provider through a single interface
- Profile Management: Easily create, update, and switch between different AI model configurations
- Interactive Selection: Choose profiles via an intuitive command-line interface
- Automatic Installation: Backend dependencies are installed automatically when needed
- Workflow Integration: Seamlessly integrate with your existing development tools
- Cross-Platform Compatibility: Works on macOS, Linux, and Windows
- Extensive Documentation: Comprehensive guides and examples for all use cases
1️⃣ Install globally:
npm install -g codemodel-cli
2️⃣ Create your first profile:
cw add myprofile --interactive
3️⃣ Start using it:
cw "Write a function that sorts an array"
That's it! You're now using CodeModel CLI! 🚀
CodeModel CLI requires one of the following backend AI CLI tools to be installed:
| Backend | Description | Installation |
|---|---|---|
| OpenAI CLI | Official OpenAI command-line interface | `npm install -g openai` |
| GPT CLI | Command-line interface for GPT models | `npm install -g gpt3-cli` |
| Anthropic CLI | Command-line interface for Claude models | `npm install -g @anthropic-ai/cli` |
You don't need to install these beforehand - CodeModel CLI can automatically install the appropriate backend based on your chosen provider.
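If you prefer to check for a backend yourself before relying on auto-install, a small guard like the following works. It assumes the OpenAI backend installs a binary named `openai`, which may differ on your system:

```shell
# Check whether the `openai` backend binary is on PATH, and suggest the
# documented install command if it is missing.
if command -v openai >/dev/null 2>&1; then
  echo "openai backend found"
else
  echo "openai backend missing; run: cw backend install openai"
fi
```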
# Install globally
npm install -g codemodel-cli
# Set up a backend (optional, will be done automatically if needed)
cw backend install openai
- Download the latest DMG file from the releases page
- Open the DMG file
- Run the `install.sh` script by double-clicking it
# Clone the repository
git clone https://github.com/cristianoaredes/codemodel-cli.git
cd codemodel-cli
# Install dependencies
npm install
# Link for development
npm link
# Install a backend and set up default profiles (optional)
cw backend install openai
./scripts/setup-profiles.sh
# Add a new profile with specific provider and model
cw add openai-profile --provider openai --model gpt-4.1
# Add a profile interactively with guided prompts
cw add claude-profile --interactive
# List all available profiles
cw list
# Set a profile as active
cw use openai-profile
# Select a profile interactively
cw select
# Remove a profile
cw remove old-profile
# List available backend CLI tools
cw backend list
# Install a specific backend
cw backend install openai
# Set the active backend
cw backend set anthropic
# Show information about the current backend
cw backend info
# Run with a specific backend just once
cw run --backend openai "Write a function to sort an array"
# Generate code with active profile
cw "Create a React component that shows a countdown timer"
# Compare outputs from different models
cw use openai-profile
cw "Implement a binary search tree in Python" > openai-solution.py
cw use claude-profile
cw "Implement a binary search tree in Python" > claude-solution.py
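Once both files exist, a plain `diff` makes the comparison concrete. The guard below keeps it a no-op until the two solution files have actually been generated:

```shell
# Compare the two generated solutions line by line (unified diff format).
if [ -f openai-solution.py ] && [ -f claude-solution.py ]; then
  diff -u openai-solution.py claude-solution.py
fi
```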
# Run with specific profile for one-time use
cw run --profile deepseek-profile "Optimize this function for performance: function fib(n) { ... }"
# In your shell scripts
function ask_ai() {
  cw use openai-profile
  cw "$1" > "$2"
}
# Usage
ask_ai "Create a unit test for this function" test.js
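Note that `cw use` switches the active profile globally, which can surprise other shells sharing the same config. A variant of the helper that scopes the profile to a single call via the documented `cw run --profile` flag might look like this (the function name here is just an example):

```shell
# ask_ai_with <profile> <prompt> <output-file>
# Uses `cw run --profile` so the globally active profile is left untouched.
ask_ai_with() {
  local profile="$1" prompt="$2" outfile="$3"
  cw run --profile "$profile" "$prompt" > "$outfile"
}
```

Usage: `ask_ai_with openai-profile "Create a unit test for this function" test.js`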
Configuration is stored in `~/.codemodel-cli/config.yaml` with a simple structure:
# Currently active profile
active: openai-profile

# Backend configuration
backend:
  active: openai   # Currently active backend CLI
  custom: {}       # Custom backend configurations

# Profile definitions
profiles:
  openai-profile:
    provider: openai
    model: gpt-4.1
  claude-profile:
    provider: anthropic
    model: claude-3.7-sonnet
  # Additional profiles...
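Because the file is plain YAML, quick checks don't need the CLI at all. The sketch below recreates a minimal sample config of the same shape and pulls out the active profile with `grep`; on a real install you would read `~/.codemodel-cli/config.yaml` instead:

```shell
# Build a minimal sample config and extract the currently active profile.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
active: openai-profile
profiles:
  openai-profile:
    provider: openai
    model: gpt-4.1
EOF
grep '^active:' "$cfg"   # prints: active: openai-profile
rm -f "$cfg"
```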
CodeModel CLI automatically selects an appropriate backend for each provider:
| Provider | Default Backend |
|---|---|
| OpenAI | openai |
| Anthropic | anthropic |
| Google Gemini | gpt |
| DeepSeek | openai |
| Mistral | openai |
| Qwen | openai |
| OpenRouter | openai |
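The mapping above can be pictured as a simple lookup. This sketch is illustrative only, using the names from the table rather than the project's actual source code:

```shell
# Map a provider name to its default backend CLI, mirroring the table above.
backend_for() {
  case "$1" in
    anthropic)                               echo anthropic ;;
    google|gemini)                           echo gpt ;;
    openai|deepseek|mistral|qwen|openrouter) echo openai ;;
    *)                                       echo unknown ;;
  esac
}

backend_for mistral   # prints: openai
```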
Most providers use OpenAI-compatible APIs, so the OpenAI CLI works as a backend for many providers.
| Provider | Default Model | Best For | Response Speed | Code Quality |
|---|---|---|---|---|
| OpenAI | gpt-4.1 | Complex code generation | | ⭐⭐⭐⭐⭐ |
| Google | gemini-2.5-pro | Scientific coding | | ⭐⭐⭐⭐⭐ |
| Anthropic | claude-3.7-sonnet | Code explanation | ⚡ Fast | ⭐⭐⭐⭐⭐ |
| DeepSeek | deepseek-coder-v3 | Algorithmic solutions | ⚡ Fast | ⭐⭐⭐⭐ |
| Mistral | mistral-codestral-2501 | Low-latency tasks | ⚡ Very fast | ⭐⭐⭐⭐ |
| Qwen | qwen2.5-coder-32b | Multilingual code | | ⭐⭐⭐⭐ |
| OpenRouter | agentica-org/deepcoder-14b-preview | Free tier usage | ⚡ Fast | ⭐⭐⭐⭐ |
CodeModel CLI acts as a central hub for all your AI code model interactions:
- User Interface: Provides a consistent command-line experience
- Profile Management: Stores and manages your provider/model configurations
- Provider Routing: Directs your requests to the appropriate AI service
- Response Handling: Returns formatted responses from the AI models
| Feature | CodeModel CLI | Provider-Specific CLIs | Other Wrappers |
|---|---|---|---|
| Multi-provider support | ✅ | ❌ | |
| Profile management | ✅ | ❌ | |
| Interactive selection | ✅ | ❌ | |
| Auto-installation | ✅ | ❌ | ❌ |
| DMG installer | ✅ | ❌ | ❌ |
| Open source | ✅ | | |
| Customization options | ✅ | | |
- API key management and secure storage
- Provider-specific configuration options
- Performance benchmarks for model comparison
- Batch processing mode
- Plugin system for extended functionality
- Web UI for configuration management
- Response formatting options (JSON, Markdown, etc.)
- Local model support (Ollama, llama.cpp)
To build a DMG installer for macOS distribution:
npm run build-dmg
The DMG file will be created in the `dist` directory.
Contributions are welcome! Here's how you can help improve CodeModel CLI:
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Make your changes and commit them: `git commit -m 'Add amazing feature'`
- Push to the branch: `git push origin feature/amazing-feature`
- Open a pull request
Please see our contributing guidelines for more details.
Join our community to get help, share ideas, and contribute:
- Node.js: v14.0.0 or higher
- npm: v6.0.0 or higher
- Operating Systems: macOS, Linux, Windows
- Disk Space: Minimal (<5MB)
- Dependencies: Internet connection for AI model access
Cristiano Aredes
- GitHub: @cristianoaredes
- LinkedIn: @cristianoaredes
Q: The `cw` command is not found after installation
A: Ensure your global npm bin directory is in your PATH. You can check it with `npm bin -g` (on npm 9 and later, where `npm bin` was removed, use `npm prefix -g` and look in its `bin` subdirectory).
Q: I get an error about no supported backend CLI tools found
A: You need to install at least one backend CLI tool. Run `cw backend install openai` to install the OpenAI CLI.
Q: The backend command fails to execute
A: Make sure you have the appropriate API keys set for your backend. Each backend CLI tool requires its own environment variables (e.g., `OPENAI_API_KEY` for the OpenAI CLI).
Q: I'm getting authentication errors with the backend
A: Check that you've set up the correct environment variables for your chosen provider's API key.
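For example, for the OpenAI and Anthropic backends the conventional variables are `OPENAI_API_KEY` and `ANTHROPIC_API_KEY`. The values below are placeholders; substitute your real keys:

```shell
# Placeholder keys — replace with your own before running cw.
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
```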
Q: How do I update to the latest version?
A: Run `npm update -g codemodel-cli`.
Q: Can I use multiple profiles in a single script?
A: Yes! You can specify a different profile for each command using `--profile name`.
Q: Can I use a different backend for each provider?
A: Yes! Set the default backend with `cw backend set <name>`, or specify a different backend for a single command with `cw run --backend <name>`.
This project is licensed under the MIT License - see the LICENSE file for details.