
So what are we building & why it matters?
HyperMink Desktop, Update #1

You might have seen System-1, our first product, where we tested whether Large Language Models (LLMs) and Large Vision-Language Models (LVLMs) could operate efficiently on everyday consumer devices using just the CPU. To achieve this, we employed a technique called quantisation. This process shrinks the LLM without sacrificing its intelligence, so despite being smaller, it remains capable of providing intelligent responses to questions. Essentially, it's like carrying a compressed version of the entirety of human knowledge with you.
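To make the idea of quantisation a little more concrete, here is a minimal, illustrative sketch of symmetric int8 weight quantisation in Python (using NumPy). This is not System-1's actual implementation, just a toy version of the general technique: store weights as 8-bit integers plus a scale, instead of 32-bit floats.

```python
import numpy as np

def quantise_int8(weights: np.ndarray):
    """Symmetric int8 quantisation: map float weights to 8-bit integers plus one scale."""
    # The largest absolute weight maps to 127; everything else scales proportionally.
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 values."""
    return q.astype(np.float32) * scale

# Toy example: a weight matrix stored in 8 bits instead of 32 bits (roughly 4x smaller).
w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantise_int8(w)
w_approx = dequantise_int8(q, scale)
print("max reconstruction error:", np.abs(w - w_approx).max())
```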

With that established, we set out to develop a tool that aids without detracting from the core human trait of exploration and discovery. It's convenient to receive an immediate answer to a question, but that same answer is delivered to millions of others, making you just one among many.

For factual queries like 'what's the capital of Micronesia?', a uniform response works well. However, when faced with questions like 'How to make/build/solve ____?', we believe it's essential to invest time in exploring, ideally yielding unique solutions that reflect your individual thought process and values.

We set out to develop a tool that aids without detracting from the core human trait of exploration and discovery.

Let's move beyond merely generating arbitrary text, images, videos, and other media, and harness this vast knowledge to accomplish something truly useful.

Imagine a desktop, browser-like research tool with a built-in LLM: no servers, no SaaS, no subscriptions. It seamlessly combines the vast knowledge compressed into the local LLM with the context that's important and personal to you, all while maintaining privacy. Picture having a hundred website tabs open, or adding any number of local files to the context, and then posing a question. You'll receive answers tailored precisely to that context, and best of all, your data never leaves your computer.
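As a rough illustration of that flow, here is a minimal sketch of how text from open tabs and local files could be stitched into the prompt handed to an on-device model. The function names and the generate_locally placeholder are assumptions for the sake of the example, not HyperMink's actual code; the only point is that everything stays on your machine.

```python
from pathlib import Path

def generate_locally(prompt: str) -> str:
    # Placeholder for an on-device LLM runtime (for example, a llama.cpp binding).
    # It exists only so this sketch runs on its own; no network call happens anywhere.
    return f"[local model would answer here, given {len(prompt)} characters of prompt]"

def gather_context(tab_texts: list[str], file_paths: list[str]) -> str:
    """Combine the text of open tabs and local files into one context block."""
    chunks = list(tab_texts)
    for path in file_paths:
        chunks.append(Path(path).read_text(encoding="utf-8", errors="ignore"))
    return "\n\n---\n\n".join(chunks)

def answer(question: str, tab_texts: list[str], file_paths: list[str] = ()) -> str:
    """Answer a question grounded only in the user's own context."""
    context = gather_context(tab_texts, list(file_paths))
    prompt = (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate_locally(prompt)

print(answer("What did I read about quantisation?", ["Quantisation shrinks models..."]))
```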

HyperMink Desktop will come with an ephemeral mode as its default setting. This means that when you close the app, all data, including your browsing history, disappears. However, with a simple click, you can switch it to a persistent mode, transforming it into a local knowledge hub. It redefines bookmarking by allowing you to seamlessly add entire webpages and files to your local knowledge base as you browse and read, enabling continuous expansion.
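One way to picture the ephemeral/persistent split is the difference between an in-memory store that vanishes the moment the app closes and an on-disk store that keeps accumulating into a knowledge base. The sketch below uses SQLite purely as an illustration under assumed names; it is not the real implementation.

```python
import sqlite3

def open_store(persistent: bool, path: str = "hypermink_knowledge.db") -> sqlite3.Connection:
    """Ephemeral mode keeps everything in RAM; persistent mode writes to disk."""
    # A ":memory:" database disappears when the connection is closed,
    # which mirrors the ephemeral behaviour described above.
    conn = sqlite3.connect(path if persistent else ":memory:")
    conn.execute("CREATE TABLE IF NOT EXISTS pages (url TEXT, title TEXT, body TEXT)")
    return conn

def remember_page(conn: sqlite3.Connection, url: str, title: str, body: str) -> None:
    """'Bookmarking' here means storing the page's full text, not just its URL."""
    conn.execute("INSERT INTO pages VALUES (?, ?, ?)", (url, title, body))
    conn.commit()
```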

So, why now?

Recent advancements in the CPU and GPU capabilities of personal computers, combined with advances in running large AI models efficiently on consumer devices, have made this the right moment to build HyperMink Desktop. Although not widely covered in the media, a small group of researchers has successfully adapted large AI models for consumer-grade hardware. Moreover, it's crucial to address the privacy concerns around cloud-based AI services before the erosion of privacy becomes an accepted norm in a new AI-driven world. Therefore, taking action now is imperative.

🤯
There is so much more to share, but that's the gist of it for now.

We will post updates as we make progress. You will also see updates to the existing System-1 as we continue to work on HyperMink Desktop.


Once again, thank you for trying out System-1 and for all your valuable feedback. As always, feel free to reach out to us anytime.