About
Privacy Notice: This is a GitHub Pages site. All conversational data is sent directly to your configured API provider's servers. While your data is stored locally in your browser, all chat interactions are processed through your API provider's services. For enhanced privacy, consider using local LLMs that run entirely on your own hardware.
Overview
hacka.re is a highly portable, low-dependency, privacy-first chat interface for Large Language Models. It is designed to be a lightweight, static web UI that runs entirely client-side with no server-side rendering. The application is built using pure HTML, CSS, and JavaScript, ensuring that it can be run locally without any server setup. The project is open-source and licensed under the MIT No Attribution license, allowing for easy modification and redistribution.
Client-Side Architecture
hacka.re runs entirely client-side with no server-side rendering; its only external dependency is an OpenAI-compatible API for inference. Your API key never leaves your browser except in calls to your configured OpenAI-compatible model provider, and none of your conversations or configurations ever touch hacka.re servers.
Secure Sharing Mechanism
Even the encrypted blob in a share link, which can contain a complete GPT configuration, never touches any web server. This allows strongly encrypted GenAI provider credentials, system prompts, model choice, and conversation history to be shared securely over less secure communication channels. You can pack an entire self-contained GPT - with API endpoint, API key, system prompts, and conversation history - into a strongly encrypted link, and even print that link on paper as a QR code.
Portable Deployment
This static architecture allows GPTs to be shared securely over otherwise insecure channels without touching any servers aside from the LLM API endpoint(s) involved for inference. The entire hacka.re site can simply be downloaded and run from disk as static files by opening it in your favorite web browser, or extended and re-published freely, as it is licensed under the MIT No Attribution license.
Swedish Origins
The name "hacka.re" comes from "hackare" which translates to "whitehat hacker" in Swedish, reflecting the project's ethos: a tool built by whitehat hackers, for whitehat hackers. The tagline "Free, open, för hackare, av hackare" translates to "free, open, for whitehat hackers, by whitehat hackers."
Privacy-First Approach
Unlike many commercial chat interfaces, hacka.re prioritizes user privacy by storing all data locally in your browser. Your API key and conversation history never leave your device except when making direct requests to your configured API provider. This approach gives users more control over their data while still providing access to state-of-the-art AI models. Note that ALL conversational data is still exposed to your configured API provider's servers and is therefore subject to their privacy policy.
Key Features
High-Performance Models
Access to models available through your chosen OpenAI-compatible API provider.
Privacy-Focused
Your API key and conversations are encrypted and stored in your browser's localStorage.
Context Window Visualization
Real-time display of token usage within the model's context limit to optimize your conversations.
Markdown Support
Rich formatting for AI responses including code blocks with syntax highlighting.
Persistent History
Conversation history is saved locally between sessions for continuity.
Comprehensive Secure Sharing
Create encrypted password-protected shareable links to securely share your API key, system prompt, active model, and conversation data with trusted individuals.
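The context window visualization above can be approximated with a simple heuristic. The sketch below assumes roughly four characters per token, a common rule of thumb; the function names are illustrative and not hacka.re's actual tokenizer, which may count tokens more precisely.

```javascript
// Hypothetical helpers for the context-window meter (assumed heuristic:
// ~4 characters per token, a rough average for English text).
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

function contextUsage(messages, contextLimit) {
  // Sum estimated tokens across all chat messages and report the
  // fraction of the model's context window they occupy.
  const used = messages.reduce((n, m) => n + estimateTokens(m.content), 0);
  return {
    used,
    limit: contextLimit,
    percent: Math.min(100, Math.round((100 * used) / contextLimit)),
  };
}
```

A meter like this helps users trim old messages before the model silently loses early context.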
Supported Models
hacka.re provides access to all models available through your configured OpenAI-compatible API provider. Supporting only text (not even images), and only OpenAI-compatible providers, is a deliberate design choice, since the OpenAI API is the de-facto standard. Support for additional features - such as image rendering and additional LLM provider API interfaces - can easily be added with a few prompts using the tools and method shown on the Development page.
Different API providers offer various models with different capabilities, context window sizes, and performance characteristics. The specific models available to you will depend entirely on which API provider you choose to connect with.
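Because the OpenAI API shape is the de-facto standard, the available models can be discovered the same way from any compatible provider via its `GET /v1/models` endpoint. A minimal sketch (function names are illustrative; `baseUrl` and `apiKey` are whatever the user configured):

```javascript
// Build the standard OpenAI-compatible model-listing request.
// Hypothetical helper names; the endpoint shape is the de-facto standard.
function modelsRequest(baseUrl, apiKey) {
  const base = baseUrl.endsWith('/') ? baseUrl : baseUrl + '/';
  return {
    url: new URL('models', base).href,
    headers: { Authorization: `Bearer ${apiKey}` },
  };
}

async function listModels(baseUrl, apiKey) {
  const { url, headers } = modelsRequest(baseUrl, apiKey);
  const res = await fetch(url, { headers });
  const body = await res.json(); // standard shape: { data: [{ id: ... }, ...] }
  return body.data.map(m => m.id);
}
```

Any provider that answers this endpoint with the standard `{ data: [...] }` shape will populate the model picker.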
Technical Architecture
hacka.re is built as a pure client-side application using vanilla JavaScript, HTML, and CSS with no server-side rendering or processing. This static approach eliminates the need for a backend server, ensuring that your data remains on your device. All code is interpreted and executed entirely in your browser.
The application communicates directly with the configured OpenAI-compatible API using your corresponding API key stored in your browser's localStorage. All message processing, including markdown rendering and syntax highlighting, happens locally in your browser.
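That direct browser-to-provider call is an ordinary `POST /v1/chat/completions` request. The sketch below is an assumed minimal version, not hacka.re's actual service code; the endpoint path, headers, and response shape follow the OpenAI-compatible standard.

```javascript
// Sketch of the direct, serverless call path: the browser talks straight
// to the configured OpenAI-compatible endpoint. Helper names are illustrative.
function chatRequest(baseUrl, apiKey, model, messages) {
  return {
    url: `${baseUrl.replace(/\/$/, '')}/chat/completions`,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ model, messages }),
    },
  };
}

async function sendChat(baseUrl, apiKey, model, messages) {
  const { url, options } = chatRequest(baseUrl, apiKey, model, messages);
  const res = await fetch(url, options);
  const data = await res.json();
  // Standard response shape: choices[0].message.content holds the reply.
  return data.choices[0].message.content;
}
```

Since the request is issued by the browser itself, the API key appears only in this one outbound call and nowhere else.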
For detailed information about the technical implementation and architecture, including code examples and security considerations, visit the Architecture page.
Secure Sharing
hacka.re includes a feature to securely share various aspects of your configuration with others through session key-protected URL-based sharing.
Sharing options:
- API Provider: Any OpenAI-compatible API works
- API Key: Share your API key for access to models
- Active Model: Share your selected model preference with automatic fallback if unavailable
- System Prompt: Share your custom system prompt for consistent AI behavior
- Conversation Data: Share recent conversation history with configurable message count
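Before encryption, the options above amount to a small plaintext payload assembled from whichever items the user chooses to share. The field names below are hypothetical, not hacka.re's actual schema; the sketch only illustrates how selective sharing with a configurable message count might compose the payload.

```javascript
// Hypothetical share-payload builder (illustrative field names, not the
// app's real schema): include only the configuration the user opted into.
function buildSharePayload(cfg, opts) {
  const payload = {};
  if (opts.provider) payload.baseUrl = cfg.baseUrl;
  if (opts.apiKey) payload.apiKey = cfg.apiKey;
  if (opts.model) payload.model = cfg.model;
  if (opts.systemPrompt) payload.systemPrompt = cfg.systemPrompt;
  if (opts.messageCount > 0) {
    // Only the most recent N messages travel with the link.
    payload.messages = cfg.messages.slice(-opts.messageCount);
  }
  return payload;
}
```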
For detailed information about the secure sharing implementation, including code examples, security considerations, and team collaboration features, visit the Architecture page.
Privacy Considerations
Privacy is a core principle of hacka.re. However, it's important to understand the data flow:
- This is a GitHub Pages site - the application is hosted on GitHub's servers
- Stores your API key encrypted in your browser's localStorage
- Keeps conversation history encrypted locally on your device
- All chat content is sent to your configured API provider's servers for processing
- Your conversations are subject to your API provider's privacy policy
- Does not use analytics, tracking, or telemetry
- Has no custom backend server that could potentially log your data
- All external JavaScript libraries are hosted locally to prevent third-party CDN connections
While this approach gives you more control over your data than many commercial alternatives, please be aware that your conversation content is processed by your API provider's cloud services. Never share sensitive personal information, credentials, or confidential data in conversations with API endpoints you do not fully trust.
Use Cases
hacka.re can be used in various scenarios, from personal AI assistance to team collaboration:
- Local Development: Download and run the entire application locally without any server setup
- API Provider Flexibility: Use with any OpenAI-compatible API provider
- Secure Team Collaboration: Share API keys, system prompts, and conversations securely with team members
- Cross-Device Usage: Continue conversations across different devices using the secure sharing feature
Getting Started
To use hacka.re, you'll need an API key from a compatible provider.
Once you have your API key, simply:
- Visit hacka.re
- Open settings
- Select your API provider
- Paste your API key
- Select your preferred model
- Start chatting with LLMs
Your API key and conversations will be saved locally in your browser for future sessions.
License
hacka.re is released under the MIT No Attribution license, allowing you to freely use, modify, and distribute the software without attribution requirements.