
Architecture

Technical Implementation

hacka.re is built as a pure client-side application using static HTML, CSS, and vanilla JavaScript with no server-side rendering or processing. All code is interpreted and executed entirely in your browser. This static approach eliminates the need for a backend server aside from the necessary OpenAI-compatible API endpoint for GenAI model access.

hacka.re is Vibe-Coded

99%+ of hacka.re's code was created through LLM-assisted development using Claude 3.7 Sonnet. Check out the Development page to see a screen recording of the vibe-coding process in action.


The application communicates directly with your configured provider using your API key, which is stored in your browser's localStorage. All message processing, including markdown rendering and syntax highlighting, happens locally in your browser. As a static web application, hacka.re never sends your data to any server except when making direct API calls to your chosen LLM provider.
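As an illustration of this direct-to-provider flow, a minimal sketch might look like the following. The storage key names, the model name, and the plain-text localStorage reads are simplifications for illustration only; hacka.re stores these values encrypted through its storage services.

```javascript
// Illustrative direct browser-to-provider call; no intermediate server is involved.
async function sendChat(messages) {
  const baseUrl = localStorage.getItem('base_url') || 'https://api.openai.com/v1'; // hypothetical key name
  const apiKey = localStorage.getItem('api_key');                                  // hypothetical key name

  const response = await fetch(`${baseUrl}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ model: 'gpt-4o-mini', messages }),  // model name is just an example
  });
  return response.json();
}
```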


MCP Integration: For extended functionality, hacka.re includes a local MCP (Model Context Protocol) stdio proxy that enables AI models to access external tools like filesystem operations, databases, and custom integrations. The proxy runs locally and maintains hacka.re's privacy-first approach by avoiding external data transmission.


A small set of dependencies limits the attack surface and thus increases resilience to various attacks. We depend only on `marked` for rendering markdown, `dompurify` to prevent cross-site scripting, `tweetnacl` for minimal-complexity strong in-browser encryption, `qrcode` to make QR codes out of these self-contained GPT links, and `highlight.js` for syntax highlighting of code blocks. Everything else, including all of the UI components and logic, has been 99%+ vibe-coded using the Claude 3.7 Sonnet model. hacka.re is by design pretty bare-bones but allows for arbitrary expansion of purpose-specific features through further LLM-assisted development with limited time and effort investment.
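A rough sketch of how these libraries can compose into the local message-rendering pipeline (shown here with ES module imports for brevity; the exact wiring in hacka.re may differ):

```javascript
import { marked } from 'marked';
import DOMPurify from 'dompurify';
import hljs from 'highlight.js';

// Markdown -> HTML -> sanitized HTML -> highlighted code blocks,
// all performed locally in the browser.
function renderMessage(markdownText) {
  const rawHtml = marked.parse(markdownText);      // render markdown
  const safeHtml = DOMPurify.sanitize(rawHtml);    // strip XSS vectors
  const container = document.createElement('div');
  container.innerHTML = safeHtml;
  container.querySelectorAll('pre code').forEach((block) => hljs.highlightElement(block));
  return container;
}
```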

Modular Storage Architecture

hacka.re implements a modular storage architecture divided into the following components (a sketch of how they compose follows the list):

  1. EncryptionService: Handles all encryption and decryption operations
  2. NamespaceService: Manages namespaces for storage isolation based on title/subtitle
  3. CoreStorageService: Provides basic storage operations with optional encryption
  4. DataService: Implements data-type-specific storage operations
  5. StorageService: High-level facade exposing storage APIs
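A rough sketch of how these layers might compose; the method names and the base64 stand-in for encryption are illustrative only, not hacka.re's actual API:

```javascript
// Minimal stand-ins for the real services, for illustration only.
const NamespaceService = { prefix: (key) => `ns_default_${key}` };
const EncryptionService = {
  encrypt: (plaintext) => btoa(plaintext),   // the real service uses tweetnacl, not base64
  decrypt: (ciphertext) => atob(ciphertext),
};

// CoreStorageService sits between the data layer and localStorage,
// delegating namespacing and encryption to the dedicated services.
const CoreStorageService = {
  setValue(key, value) {
    localStorage.setItem(NamespaceService.prefix(key),
                         EncryptionService.encrypt(JSON.stringify(value)));
  },
  getValue(key) {
    const ciphertext = localStorage.getItem(NamespaceService.prefix(key));
    return ciphertext ? JSON.parse(EncryptionService.decrypt(ciphertext)) : null;
  },
};

// DataService and StorageService then build type-specific helpers on top, e.g.:
const DataService = {
  saveApiKey: (key) => CoreStorageService.setValue('api_key', key),
  getApiKey: () => CoreStorageService.getValue('api_key'),
};
```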

MCP Data Storage:

  • mcp-connections: Encrypted storage of MCP server configurations and connection details
  • mcp-command-history: Encrypted storage of command history with deduplication and time tracking
  • Namespace Isolation: MCP data is isolated per namespace (title/subtitle) for multi-tenant usage
  • Privacy Protection: Command history excluded from shared links to protect sensitive information

Comprehensive Secure Sharing - Technical Details

hacka.re includes a feature to securely share various aspects of your configuration with others through session-key-protected, URL-based sharing. Note that links created with the share feature are susceptible to brute-force attacks and rely entirely on a strong password or session key to be resilient. The 12 random alphanumeric characters generated by default should be strong enough for most real-world applications, but resistance could of course be made arbitrarily stronger simply by increasing the number of rounds used for key derivation, at the expense of compute and application responsiveness.
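For illustration, generating a 12-character alphanumeric session key and deriving an encryption key from it by iterated hashing could be sketched as below; the round count, the use of `nacl.hash` (SHA-512), and the absence of a salt are simplifications, not necessarily what hacka.re does.

```javascript
import nacl from 'tweetnacl';

// A 12-character alphanumeric key gives roughly 71 bits of entropy
// (the slight modulo bias is ignored for brevity).
function generateSessionKey(length = 12) {
  const alphabet = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  const bytes = crypto.getRandomValues(new Uint8Array(length));
  return Array.from(bytes, (b) => alphabet[b % alphabet.length]).join('');
}

// Derive a 32-byte secretbox key by hashing repeatedly; more rounds make
// brute-forcing slower at the cost of compute and responsiveness.
function deriveKey(sessionKey, rounds = 10000) {
  let material = new TextEncoder().encode(sessionKey);
  for (let i = 0; i < rounds; i++) {
    material = nacl.hash(material);                        // SHA-512
  }
  return material.slice(0, nacl.secretbox.keyLength);      // first 32 bytes
}
```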


Comprehensive sharing options:

  1. Model Provider: Share your OpenAI-compatible API provider endpoint URL
  2. API Key: Share your API key for access to models
  3. System Prompt: Share your custom system prompt for consistent AI behavior
  4. Active Model: Share your selected model preference with automatic fallback if unavailable
  5. Conversation Data: Share recent conversation history with configurable message count

How it works:

  1. When you create a shareable link, you select what to include (base URL, API key, model, system prompt, conversation data)
  2. A real-time link length indicator shows the estimated size of the generated link
  3. You can generate a strong random session key or provide your own. A weak secret will produce insecure links.
  4. Your selected data is encrypted using a key derived from your session key with cryptographically sound methods (a code sketch follows this list)
  5. Only the encrypted data is included in the URL after the # symbol (the encryption key is NOT included)
  6. When someone opens the link, they're prompted to enter the session key/password
  7. The application derives the decryption key from the entered session key/password and attempts to decrypt the data
  8. If successful, the data is applied to their session and the URL is cleared from the browser history
  9. For model preferences, the system verifies availability with the recipient's API key and falls back gracefully if needed
  10. If unsuccessful (wrong session key/password), they're prompted to try again
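Building on the key-derivation sketch above, the create/open flow could look roughly like this; the `#gpt=` fragment format and base64 helpers are illustrative, not hacka.re's actual encoding:

```javascript
import nacl from 'tweetnacl';

const toBase64 = (bytes) => btoa(String.fromCharCode(...bytes));
const fromBase64 = (str) => Uint8Array.from(atob(str), (c) => c.charCodeAt(0));

// Creating a link: encrypt the selected data and place only the ciphertext
// after the '#'. The key itself never appears in the URL.
function createShareLink(payload, key /* 32-byte key derived from the session key */) {
  const nonce = nacl.randomBytes(nacl.secretbox.nonceLength);
  const box = nacl.secretbox(new TextEncoder().encode(JSON.stringify(payload)), nonce, key);
  return `${location.origin}${location.pathname}#gpt=${toBase64(nonce)}.${toBase64(box)}`;
}

// Opening a link: decrypt the fragment, then clear it from the URL and history.
function openShareLink(key) {
  const [nonce, box] = location.hash.replace(/^#gpt=/, '').split('.').map(fromBase64);
  const plaintext = nacl.secretbox.open(box, nonce, key);
  if (!plaintext) return null;                           // wrong session key: prompt again
  history.replaceState(null, '', location.pathname);     // remove the fragment from the URL
  return JSON.parse(new TextDecoder().decode(plaintext));
}
```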

Team collaboration with session keys (a sketch follows the list):

  1. Teams can agree on a common session key to use for sharing links
  2. Each team member can enter and lock this session key in their sharing settings
  3. When a team member receives a link created with the team's session key, the system automatically tries the locked session key first
  4. If the session key works, the shared data is applied without prompting for the session key
  5. This allows seamless sharing among team members without repeatedly entering the same session key
  6. The session key should be shared through a secure channel separate from the links themselves
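A minimal sketch of this auto-unlock behaviour, reusing `deriveKey` and `openShareLink` from the sketches above (`loadLockedSessionKey` and `promptForSessionKey` are hypothetical helpers):

```javascript
// Try the locked team session key first; only prompt the user if it fails.
function applySharedLink(loadLockedSessionKey, promptForSessionKey) {
  const lockedKey = loadLockedSessionKey();
  if (lockedKey) {
    const data = openShareLink(deriveKey(lockedKey));
    if (data) return data;                                // team key worked, no prompt shown
  }
  return openShareLink(deriveKey(promptForSessionKey())); // fall back to asking the user
}
```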

Technical implementation: The feature uses TweetNaCl.js for encryption, combined with session-key-based key derivation. The key services involved are:

  • LinkSharingService: Manages creation and parsing of shareable links
  • ShareService: High-level facade for share operations
  • PromptsService: Handles prompt template storage and retrieval
  • ApiToolsService: Provides utilities for API interactions

Security considerations:

  • True session key-based encryption: The encryption key is derived from the session key and is never included in the URL
  • Multiple hashing iterations: The key derivation process uses multiple iterations to increase security
  • URL fragment (#) usage: The data after # is not sent to servers when requesting a page, providing protection against server logging
  • Intended for trusted sharing: Still only share these links with people you trust, as they will have access to your API provider account if you share the API key
  • Temporary usage: Consider revoking your API key after sharing if you're concerned about unauthorized access

When to use each sharing option:

  • OpenAI-compatible API provider: Share to have them use the same API provider endpoint URL
  • API Key: Only share the API key if you want them to use your configured API provider account
  • System Prompt: Share your custom instructions to ensure consistent AI behavior
  • Active Model: Share your preferred model selection for consistent results
  • Conversation Data: Share messages and replies from your current conversation to continue a discussion

QR code generation:

  1. After generating a shareable link, a QR code is automatically created for easy mobile sharing
  2. The QR code encodes the complete shareable link including the encrypted data
  3. The system monitors the link length and provides warnings when approaching QR code capacity limits
  4. Standard QR codes can typically handle up to 1500-2000 bytes of data
  5. When links exceed this limit (common with large system prompts or conversation history), a warning is displayed
  6. The QR code uses error correction level L (low) to maximize data capacity (see the sketch below)
  7. Recipients can scan the QR code with any standard QR code scanner to open the link
  8. They will still need the session key to decrypt the data
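A sketch of QR code generation with a capacity warning, assuming the `qrcode` library's promise-based `toCanvas` API; the warning threshold and message are illustrative:

```javascript
import QRCode from 'qrcode';

const QR_CAPACITY_WARNING_BYTES = 1500;   // illustrative threshold, matching the guidance above

async function renderShareQr(shareLink, canvas, showWarning) {
  if (shareLink.length > QR_CAPACITY_WARNING_BYTES) {
    showWarning(`Link is ${shareLink.length} characters and may exceed QR code capacity.`);
  }
  // Error correction level 'L' trades redundancy for maximum data capacity.
  await QRCode.toCanvas(canvas, shareLink, { errorCorrectionLevel: 'L' });
}
```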

MCP (Model Context Protocol) Integration

hacka.re implements comprehensive Model Context Protocol (MCP) integration that allows AI models to access external tools and resources through MCP servers. The integration is built as a pure client-side implementation with no external dependencies, following hacka.re's privacy-focused architecture.

MCP Components:

  1. MCPClientService: Zero-dependency MCP client implementation with JSON-RPC 2.0 protocol support
  2. MCPManager: UI component for managing MCP server connections and configurations
  3. MCP Stdio Proxy: Bridge between browser-based client and stdio-based MCP servers
  4. Function Integration Layer: Automatic tool registration with hacka.re's existing function calling system

Transport Layer Support:

  • Stdio Transport: Local process communication via mcp-stdio-proxy for servers like filesystem, database, and custom tools
  • SSE Transport: HTTP-based MCP servers with Server-Sent Events for web-based integrations
  • Real-time Communication: Progress callbacks and live status updates during tool execution

Tool Integration Flow:

  1. User configures MCP server command through the UI interface
  2. Proxy starts server process with stdio communication channel
  3. MCPClientService establishes JSON-RPC connection and capability negotiation
  4. Server capabilities (tools, resources, prompts) are automatically discovered
  5. Tools are dynamically registered as JavaScript functions in hacka.re's function calling system (sketched after this list)
  6. AI models can seamlessly call MCP tools through the standard function calling interface
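For illustration, discovery and registration might look roughly like this; the proxy URL, the `FunctionToolsService.register` call, and the `callMcpTool` helper are assumptions, while `tools/list` and `inputSchema` come from the MCP specification:

```javascript
// Discover the tools a connected MCP server exposes and register each one
// with the existing function calling system.
async function discoverAndRegisterTools(serverId) {
  const response = await fetch(`http://localhost:3001/mcp/${serverId}`, {   // hypothetical proxy route
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jsonrpc: '2.0', id: 1, method: 'tools/list' }),
  });
  const { result } = await response.json();

  for (const tool of result.tools) {
    FunctionToolsService.register({                                // hypothetical registration API
      name: tool.name,
      description: tool.description,
      parameters: tool.inputSchema,                                // JSON Schema from the MCP server
      handler: (args) => callMcpTool(serverId, tool.name, args),   // hypothetical helper
    });
  }
}
```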

Security and Privacy:

  • Encrypted Storage: All MCP configurations and command history stored using CoreStorageService encryption
  • Local Processing: MCP proxy runs locally on localhost only, no external network access unless explicitly configured
  • Process Isolation: Each MCP server runs in its own sandboxed process environment
  • Privacy-First: Command history excluded from shared links to protect sensitive information

This architecture enables powerful extensions to AI capabilities while maintaining hacka.re's core principles of privacy, security, and client-side operation. MCP tools appear as native functions to the AI models, creating a seamless experience for enhanced functionality.

Function Calling Architecture

hacka.re implements a comprehensive function calling system that allows users to define JavaScript functions that can be called by AI models through the OpenAI-compatible API. This system supports both user-defined functions and external tools through MCP integration.

Function Calling Components:

  1. FunctionCallingManager: Handles the UI for creating, editing, and managing functions
  2. FunctionToolsService: Manages JavaScript function registration, validation, and execution
  3. ApiToolsService: Handles tool declarations and execution for OpenAI-compatible API tool calling
  4. MCP Integration Layer: Automatically registers MCP server tools as callable functions

Function Definition and Metadata Extraction:

  • Functions are defined using standard JavaScript syntax with JSDoc comments
  • The system automatically extracts function names, parameters, and metadata from the code
  • JSDoc comments are parsed to extract rich metadata:
    • @description tags provide function descriptions
    • @param {type} name - description tags define parameter types and descriptions
  • This metadata is used to generate OpenAI-compatible tool definitions (see the example below)
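For example, a JSDoc-annotated function like the one below could be translated into an OpenAI-compatible tool definition roughly as shown; the exact mapping hacka.re produces may differ in detail:

```javascript
/**
 * @description Look up the current weather for a city
 * @param {string} city - Name of the city to look up
 * @param {string} units - Either "metric" or "imperial"
 */
function getWeather(city, units) {
  // user-defined implementation runs locally in the browser
  return { city, units, temperature: 21 };
}

// Tool definition generated from the metadata above (approximate shape):
const getWeatherTool = {
  type: 'function',
  function: {
    name: 'getWeather',
    description: 'Look up the current weather for a city',
    parameters: {
      type: 'object',
      properties: {
        city:  { type: 'string', description: 'Name of the city to look up' },
        units: { type: 'string', description: 'Either "metric" or "imperial"' },
      },
      required: ['city', 'units'],
    },
  },
};
```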

Security Measures:

  • Sandboxed Execution: Functions run in a limited scope with controlled access to browser APIs
  • Timeout Protection: Execution is limited to 30 seconds to prevent infinite loops (illustrated below)
  • Error Handling: Detailed error messages based on error type
  • Result Validation: Ensures function results are JSON-serializable
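A minimal sketch of the timeout and serialization checks; note that a plain `Promise.race` cannot interrupt a synchronous infinite loop, so hacka.re's actual sandboxing is necessarily more involved than this:

```javascript
async function executeWithTimeout(fn, args, timeoutMs = 30000) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error('Function execution timed out')), timeoutMs)
  );
  const result = await Promise.race([Promise.resolve(fn(...args)), timeout]);
  JSON.stringify(result);   // throws on circular references; a fuller check would reject more cases
  return result;
}
```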

Integration with API Service:

  • Tool definitions are included in API requests when function calling is enabled
  • The system processes tool calls from the API response
  • Tool calls are routed to the appropriate handler based on the function name
  • Function results are returned to the API for follow-up responses (see the sketch after this list)
  • MCP Tool Integration: External MCP tools are seamlessly integrated and appear as native functions to AI models
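A sketch of the tool-call round trip with an OpenAI-compatible API; it assumes the assistant message containing `tool_calls` has already been appended to the conversation, and `executeFunction` dispatches to either a user-defined function or an MCP tool:

```javascript
async function handleToolCalls(assistantMessage, messages, executeFunction) {
  for (const toolCall of assistantMessage.tool_calls ?? []) {
    const args = JSON.parse(toolCall.function.arguments);        // arguments arrive as a JSON string
    const result = await executeFunction(toolCall.function.name, args);
    messages.push({
      role: 'tool',
      tool_call_id: toolCall.id,
      content: JSON.stringify(result),                           // results must be JSON-serializable
    });
  }
  return messages;   // sent back to the API for the follow-up response
}
```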

This architecture allows for powerful extensions to the AI's capabilities while maintaining security and privacy. User-defined functions execute locally in the browser, while MCP tools run in isolated local processes, ensuring no external data transmission beyond direct API communication.