Your API key is stored locally in your browser and never sent to any server other than your configured OpenAI-compatible LLM API provider.
Share Link Limits: Practical limits for shared links vary by platform and usage.
Browser Limits: Browser URLs are limited to roughly 2000 bytes (varies by browser). QR codes should stay under roughly 1500 bytes for reliable scanning.
Platform Recommendations: Mobile devices: Keep under 1000 bytes for best compatibility. Email sharing: Under 2000 bytes to avoid truncation. SMS/messaging: Under 500 bytes recommended.
Link Length Bar: The link length bar below shows the estimated size relative to browser limits.
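As a rough illustration of how a link could be checked against the limits above, here is a minimal sketch (the function names are hypothetical, not hacka.re's actual implementation; the thresholds are the ones listed above):

```javascript
// Size thresholds from the guidance above, smallest first.
const LIMITS = [
  { name: "SMS/messaging", bytes: 500 },
  { name: "mobile", bytes: 1000 },
  { name: "QR code", bytes: 1500 },
  { name: "browser URL / email", bytes: 2000 },
];

function checkLinkSize(url) {
  // Measure byte length, not character count: multi-byte UTF-8
  // characters make these two numbers differ.
  const size = new TextEncoder().encode(url).length;
  const exceeded = LIMITS.filter((l) => size > l.bytes).map((l) => l.name);
  return { size, exceeded };
}
```

A 1200-byte link, for example, would exceed the SMS and mobile thresholds but still fit in a QR code or browser URL.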
Password / Session Key: A password/session key is used to encrypt shared links and local data. There are no accounts or logins; hacka.re is a static, client-side application with no server component.
Privacy: Your API keys and data never leave your browser except when communicating directly with your chosen LLM provider. The encrypted data in shared links is processed entirely in the browser and never reaches any server.
Security: All data stored in your browser's localStorage is also encrypted using this key. The security of your data relies entirely on the strength of this password/session key.
Agent Management: Save and manage your hacka.re configurations as reusable agents.
Save Current: Save your current API settings, prompts, functions, and MCP connections as a named agent.
Load Agent: Apply a saved agent's configuration to quickly switch contexts.
Quickly save your current settings as a named agent
Your saved agent configurations
Check the boxes for the prompts to include in the system prompt
Function Calling: Create JavaScript functions that can be called by AI models through the OpenAI-compatible API. (The underlying API mechanism is known as Tool Calling in OpenAI's architecture.)
JavaScript Functions: Define functions with parameters that the AI can call to perform actions or retrieve information.
Function Tagging: By default, all functions are callable. If any function is tagged with @callable or @tool, then only tagged functions will be callable.
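The tagging rule above can be sketched as follows (the data shapes and function names here are illustrative, not hacka.re's internals):

```javascript
// Hypothetical representation of user-defined functions and their source.
const functions = [
  { name: "getWeather", source: "/** @callable */ function getWeather(city) {}" },
  { name: "formatDate", source: "function formatDate(d) {}" },
];

function callableFunctions(fns) {
  // A function is "tagged" if its source mentions @callable or @tool.
  const tagged = fns.filter((f) => /@(callable|tool)\b/.test(f.source));
  // If nothing is tagged, everything is callable by default.
  return tagged.length > 0 ? tagged : fns;
}
```

With the example above, only `getWeather` would be exposed to the model; remove its tag and both functions become callable again.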
Privacy: Functions are stored locally in your browser. No data is sent to external servers unless your function explicitly does so.
Check the boxes for the functions you want to enable for function calling (also known as tool calling or tool use).
No functions defined. Create a function below.
What is MCP?: Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. It enables AI assistants to interact with local services, APIs, and databases through a standardized protocol.
Built-in Servers: hacka.re includes MCP servers for GitHub, Gmail, and Shodan as proof-of-concept examples. These servers are not thoroughly tested but serve to demonstrate how hacka.re's architecture can be extended with external integrations.
Built-in MCP tool for creating secure share links with selected content.
Built-in MCP tool for exploring hacka.re source code and architecture.
Previously executed MCP server commands. Click one to run it again.
No command history yet. Start a server to build your history.
What is RAG?: Retrieval-Augmented Generation (RAG) enhances AI responses by searching through a knowledge base to find relevant context. It combines the power of semantic search with language models to provide more accurate and contextual answers.
How it Works: hacka.re includes pre-generated embeddings for three example EU regulatory documents. Using this index requires an OpenAI API key, since queries must use the same embedding model that generated the index. When enabled, RAG automatically searches these documents to include relevant regulatory context in AI responses.
When enabled, chat responses will be enhanced with relevant context from your indexed knowledge base.
⚠️ RAG is only available with OpenAI provider
OpenAI provides the embeddings API required for vector search. Please switch to OpenAI in Settings to use RAG.
Pre-configured regulatory documents for testing RAG capabilities. Configure chunking parameters and refresh embeddings as needed.
Number of tokens to include in context. The system will select the best matching chunks up to this limit, automatically including gap-filling chunks for coherence.
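One plausible way to read "select the best matching chunks up to this limit" is a greedy selection under a token budget, sketched below (gap-filling omitted for brevity; names are hypothetical):

```javascript
// Pick the highest-scoring chunks that fit within maxTokens,
// then return them in document order for coherent context.
function selectChunks(scored, maxTokens) {
  const byScore = [...scored].sort((a, b) => b.score - a.score);
  const picked = [];
  let used = 0;
  for (const c of byScore) {
    if (used + c.tokens <= maxTokens) {
      picked.push(c);
      used += c.tokens;
    }
  }
  return picked.sort((a, b) => a.index - b.index);
}
```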
Automatically derive multiple search terms from your question for better results.
Model used to derive search terms from your question.
Number of tokens per chunk. Tokens are approximated as characters/4.
Percentage of overlap between consecutive chunks.
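The two chunking parameters above (chunk size in tokens, with a token approximated as 4 characters, and percentage overlap) can be sketched like this; the function is illustrative, not hacka.re's actual chunker:

```javascript
// Split text into fixed-size chunks with a percentage overlap.
function chunkText(text, chunkTokens, overlapPercent) {
  const chunkChars = chunkTokens * 4; // token ~= 4 characters
  const step = Math.max(1, Math.round(chunkChars * (1 - overlapPercent / 100)));
  const chunks = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkChars));
    if (start + chunkChars >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

For example, with 128-token chunks and 25% overlap, each 512-character chunk shares its last 128 characters with the next chunk.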
OpenAI embedding model to use for vector generation.
hacka.re: Privacy-focused chat client that stores API keys, conversations, and settings locally in your browser.
Features: Serverless, offline-capable, MIT licensed. Create encrypted, password-protected GPTs with custom prompts.
Get started: Configure with base URL (OpenAI-compatible) and API key.
Vibe-Coded with Claude Sonnet | About | Development | Disclaimer
Welcome to hacka.re! Start a conversation with AI models.