NppOpenAI
"Because true nerds shouldn't have to reach for the mouse" 🧠⚡
Transform your Notepad++ into a command-line style AI assistant that responds at the speed of thought. Built for developers who value efficiency and keyboard-driven workflows.
# SELECT TEXT -> Ctrl+Shift+O -> GET RESULT
# No menus. No dialogs. No mouse.
- Code Generation: `Ctrl+Shift+O` on a comment → get working code
- Instant Translation: `Ctrl+Shift+O` on foreign text → English output
- Document Analysis: `Ctrl+Shift+O` on long text → concise summary
- Command Memory: Your last used prompt stays active - chain commands efficiently
- Keyboard Navigation: Pop-up remembers your last prompt, navigate with arrow keys, activate with Enter - zero mouse required
# Quick code fix:
// BUG: This function sometimes returns NaN
function add(a,b) { return a+b }
[SELECT] → [Ctrl+Shift+O]
# Fast translation:
Denne tekst skal oversættes hurtigt.
[SELECT] → [Ctrl+Shift+O]
# Drop-in YAML repair workflow:
1. Ctrl+V (paste problematic YAML)
2. Ctrl+A (select all)
3. Ctrl+Shift+O
=> BOOM! Correctly formatted YAML replaces broken content
# Generate SQL from natural language:
Create a query that finds all users who logged in this week
[SELECT] → [Ctrl+Shift+O]
# Complete fix-and-paste workflow:
1. Ctrl+V (paste problematic code/config)
2. Ctrl+A (select all)
3. Ctrl+Shift+O (fix automatically)
4. Ctrl+A (select fixed content)
5. Ctrl+C (copy solution)
6. Ctrl+V (paste back in source system)
=> Job done in seconds!
Edit your `NppOpenAI.ini` directly for maximum control:
[API]
secret_key=sk-...
api_url=https://api.openai.com/v1/ # Base URL for API
route_chat_completions=chat/completions # Endpoint path
response_type=openai # Response format (openai, ollama, simple)
model=gpt-4o-mini
temperature=0.7
show_reasoning=0 # Show (1) or hide (0) <think>...</think> reasoning sections
[PLUGIN]
keep_question=0 # Replace text vs. append responses
Connect directly to any LLM API without intermediary adapters or proxies. The plugin automatically handles request formatting, authentication, and response parsing for each backend type.
# Standard OpenAI API setup
api_url=https://api.openai.com/v1/
route_chat_completions=chat/completions
response_type=openai
🔄 Multiple API Format Support: This plugin intelligently adapts to different LLM APIs through the `response_type` parameter:
- `openai`: OpenAI/Azure format with choices array and message content
- `ollama`: Ollama's native API with system and prompt fields
- `claude`: Anthropic Claude API with content array structure
- `simple`: Simple completion format for lightweight backends

Each format has specialized request formatters, authentication methods, and response parsers.
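For intuition, the sketch below shows roughly how the request body and authentication differ for each `response_type`, based on the public request formats of these APIs. It is illustrative only; the `build_request` helper and its parameter names are hypothetical and not the plugin's own source code.

```python
# Illustrative sketch only (hypothetical helper, not the plugin's source code):
# how request shape and authentication differ per response_type.

def build_request(response_type: str, model: str, prompt: str, api_key: str):
    if response_type == "openai":        # OpenAI / Azure-style chat completions
        headers = {"Authorization": f"Bearer {api_key}"}
        body = {"model": model,
                "messages": [{"role": "user", "content": prompt}]}
    elif response_type == "ollama":      # Ollama's native /api/generate endpoint
        headers = {}                     # a local Ollama server needs no auth by default
        body = {"model": model, "system": "", "prompt": prompt, "stream": False}
    elif response_type == "claude":      # Anthropic Messages API
        headers = {"x-api-key": api_key,
                   "anthropic-version": "2023-06-01"}
        body = {"model": model, "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}]}
    else:                                # "simple": bare prompt for lightweight backends
        headers = {}
        body = {"prompt": prompt}
    return headers, body
```

The response side differs in the same way: OpenAI-style replies put the text in `choices[0].message.content`, Ollama's native API returns a `response` field, and Claude returns a `content` array, which is why `response_type` has to match your server's output format.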
The plugin constructs API URLs by combining two parameters:
- `api_url` - the base URL of the AI service
- `route_chat_completions` - the specific endpoint path

Final URL = `api_url` + `route_chat_completions` (a short sketch of this joining rule follows the notes below)
Examples:
# OpenAI API (default)
api_url=https://api.openai.com/v1/
route_chat_completions=chat/completions
response_type=openai
# Ollama with native API format
api_url=http://localhost:11434/
route_chat_completions=api/generate
response_type=ollama
model=qwen3:1.7b
# Anthropic Claude API
api_url=https://api.anthropic.com/v1/
route_chat_completions=messages
response_type=claude
# OpenAI-compatible server (like LiteLLM or vLLM)
api_url=http://localhost:8000/
route_chat_completions=v1/chat/completions
response_type=openai
Notes:
- Trailing slashes are handled automatically
- Choose the response_type to match your server's output format
- The plugin automatically formats requests correctly for each API type
- Authentication headers are customized for each provider (Bearer token or API key)
- For non-OpenAI API formats, you don't need an adapter or compatibility layer
- You can directly connect to different LLM backends regardless of their API format
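As a quick illustration of the joining rule and the trailing-slash handling described above (a sketch, not the plugin's actual implementation), stripping the extra slashes and concatenating reproduces the example URLs:

```python
# Sketch of the api_url + route_chat_completions joining rule described above
# (illustrative only; not the plugin's actual implementation).

def build_endpoint(api_url: str, route: str) -> str:
    return api_url.rstrip("/") + "/" + route.lstrip("/")

assert build_endpoint("https://api.openai.com/v1/", "chat/completions") == \
    "https://api.openai.com/v1/chat/completions"
assert build_endpoint("http://localhost:11434/", "api/generate") == \
    "http://localhost:11434/api/generate"
```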
Shortcut | Action | Time Saved |
---|---|---|
Ctrl+Shift+O | Process selected text | ~15s vs. copy/paste to browser |
Arrow keys + Enter | Navigate and select prompts without mouse | ~8s per prompt selection |
Ctrl+V, Ctrl+A, Ctrl+Shift+O, Ctrl+A, Ctrl+C | Complete paste→fix→copy workflow | ~45s per troubleshooting cycle |
Last prompt memory | Automatic reuse of previous prompt | ~5s per operation |
Context-aware responses | Get exactly what you need | Countless minutes |
Replace-mode | Responses replace queries for seamless editing | ~3s per edit |
1. Plugins → NppOpenAI → Edit Config
2. secret_key=YOUR_API_KEY
3. Ctrl+S
4. Ready to use
Create keyboard-accessible AI personas in your instructions file:
[Prompt:sql]
Convert this description into a PostgreSQL query.
[Prompt:cpp]
Optimize this C++ code for performance.
[Prompt:regex]
Create a regular expression that matches the described pattern.
Check out our advanced prompt examples for more sophisticated AI interactions, including technical writing, code fixing, and Node-RED function development.
- Keep instructions file open in one tab for reference
- Toggle replacement mode to seamlessly integrate AI into editing
- Chain prompts for multi-step processing
- Watch the token count in the status bar to optimize prompts
- Control reasoning visibility with `show_reasoning=1|0` to show or hide the AI's thinking process
"When keyboard warriors meet AI acceleration"
Picture this: A senior developer faces a complex refactoring task involving legacy code in an unfamiliar language, documentation that needs translation, and configuration files that require debugging.
Instead of context-switching between browsers, translators, and documentation sites, they unleash the full power of NppOpenAI with Notepad++'s native features:
- Multi-Tab AI Orchestra: With config files in one tab, code in another, and docs in a third, they rapidly switch contexts without losing focus
- Prompt Chaining Sequence:
  - [Tab 1] Select error message → Ctrl+Shift+O (Fix) → Ctrl+Z (Compare) → Ctrl+Y (Reapply)
  - [Tab 2] Select foreign comment → Ctrl+Shift+O (Translate) → Arrow down → Enter (Switch prompt) → Ctrl+Shift+O (Analyze)
  - [Tab 3] Select entire file → Ctrl+Shift+O (Reformat) → Select section → Ctrl+Shift+O (Optimize)
- Trial and Error Without Fear: When one AI approach doesn't yield perfect results, Ctrl+Z steps back, then a slight prompt adjustment with arrow keys and Enter tries a new angle - all without touching the mouse or losing context
In minutes, what would have taken hours of research and context-switching is done. The developer has translated documentation, fixed configuration errors, and optimized code—all through rapid-fire keyboard commands, fluid tab navigation, and AI augmentation working in perfect harmony.
Control how much of the AI's reasoning you can see with the `show_reasoning` configuration option:
[API]
show_reasoning=0 # Default: Hide thinking sections
AI models trained to show their reasoning may include `<think>...</think>` sections in their responses. These sections contain the AI's step-by-step thought process.
- With `show_reasoning=0`: these sections are automatically filtered out for clean responses
- With `show_reasoning=1`: the thinking sections remain visible, giving insight into how the AI arrived at its conclusions
This feature is especially useful for:
- Learning how the AI solves complex problems
- Debugging responses that seem incorrect
- Refining your prompts based on the AI's reasoning path
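Conceptually, filtering a complete (non-streamed) response comes down to one regular-expression pass. The snippet below is a hedged Python sketch of that idea, not the plugin's actual code:

```python
import re

# Conceptual sketch of show_reasoning=0 on a complete response:
# drop every <think>...</think> block and keep the final answer.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_reasoning(response: str, show_reasoning: bool = False) -> str:
    if show_reasoning:
        return response                      # show_reasoning=1: keep the thinking visible
    return THINK_BLOCK.sub("", response)     # show_reasoning=0: return only the answer

print(strip_reasoning("<think>2 + 2 = 4</think>The answer is 4."))
# -> "The answer is 4."
```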
Note: When streaming mode is enabled (`streaming=1`), thinking sections that span across multiple response chunks may be partially retained. For complete filtering of thinking sections during streaming, you may need to process the full response in non-streaming mode.
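This caveat exists because, in streaming mode, a `<think>` tag can open in one chunk and close several chunks later, so a per-chunk filter never sees the whole section at once. Complete filtering would need to carry state across chunks, roughly as in this illustrative sketch (again, not the plugin's implementation):

```python
class StreamingThinkFilter:
    """Illustrative, stateful sketch: suppress text between <think> and </think>
    even when the two tags arrive in different streaming chunks.
    (Tags split mid-tag, e.g. "<thi" + "nk>", would still leak through,
    which is exactly the partial-retention effect described above.)"""

    def __init__(self) -> None:
        self.in_think = False

    def feed(self, chunk: str) -> str:
        out = []
        rest = chunk
        while rest:
            if self.in_think:
                end = rest.find("</think>")
                if end == -1:                    # closing tag not in this chunk yet
                    return "".join(out)
                rest = rest[end + len("</think>"):]
                self.in_think = False
            else:
                start = rest.find("<think>")
                if start == -1:                  # plain text: pass it through
                    out.append(rest)
                    return "".join(out)
                out.append(rest[:start])
                rest = rest[start + len("<think>"):]
                self.in_think = True
        return "".join(out)

f = StreamingThinkFilter()
print(f.feed("<think>step 1,") + f.feed(" step 2</think>Answer: 4"))
# -> "Answer: 4"
```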
while(coding){ useAI(); improveCode(); keepHands(ON_KEYBOARD); }