Experimental Research Lab

Every
Idea
In Lab.

AI-powered research infrastructure. Building next-generation RAG pipelines, vector search systems, and intelligent document processing at the intersection of language and computation.

RAG: Core Architecture
Chroma v2: Vector DB
24G: RAM Capacity
LLM: Local Inference

// 01 — Research Areas

What We
Build Here

Focused experiments at the frontier of applied AI. Each project is a controlled environment for validating real-world hypotheses.

01

Document Intelligence

Semantic chunking, hybrid retrieval, and re-ranking pipelines for intelligent document Q&A systems built on Spring AI.
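As an illustration of the chunking stage, here is a minimal sentence-window chunker in plain Java. It is a simplified sketch, not the splitter the pipeline actually uses; the class name and the size/overlap parameters are arbitrary.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative chunker: groups sentences into overlapping windows,
// a simplified stand-in for the semantic chunking stage of a RAG pipeline.
public class Chunker {

    // Group `size` sentences per chunk, carrying `overlap` sentences forward
    // so context is preserved across chunk boundaries (assumes overlap < size).
    public static List<String> chunk(String text, int size, int overlap) {
        String[] sentences = text.split("(?<=[.!?])\\s+");
        List<String> chunks = new ArrayList<>();
        for (int start = 0; start < sentences.length; start += size - overlap) {
            int end = Math.min(start + size, sentences.length);
            chunks.add(String.join(" ", Arrays.copyOfRange(sentences, start, end)));
            if (end == sentences.length) break;
        }
        return chunks;
    }

    public static void main(String[] args) {
        String doc = "RAG grounds answers in documents. Chunks are embedded. "
                   + "Similar chunks are retrieved. A re-ranker orders them.";
        // With size=2, overlap=1, consecutive chunks share one sentence.
        System.out.println(chunk(doc, 2, 1));
    }
}
```

In a real pipeline the window boundaries would be driven by embeddings or token counts rather than sentence counts, but the overlap idea is the same.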

02

Vector Search

ChromaDB-powered embedding storage and retrieval with cosine similarity search and metadata filtering at scale.
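The retrieval step described above can be sketched in plain Java as an in-memory analogue of what ChromaDB does on disk: cosine-similarity scoring combined with a metadata filter. The `Doc` record and `query` helper are illustrative names, not ChromaDB's API.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative in-memory vector search: cosine similarity plus
// metadata filtering, mirroring a filtered top-k query against a vector DB.
public class VectorSearch {

    public record Doc(String id, double[] vec, Map<String, String> meta) {}

    public static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Top-k doc ids by cosine similarity, restricted to docs whose
    // metadata entry `key` equals `value`.
    public static List<String> query(List<Doc> docs, double[] q,
                                     String key, String value, int k) {
        return docs.stream()
                .filter(d -> value.equals(d.meta().get(key)))
                .sorted(Comparator.comparingDouble((Doc d) -> -cosine(d.vec(), q)))
                .limit(k)
                .map(Doc::id)
                .collect(Collectors.toList());
    }
}
```

A production store replaces the linear scan with an approximate nearest-neighbor index; the scoring and filtering semantics stay the same.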

03

Local LLM Inference

On-premise language model deployment via Ollama. Private, low-latency inference with no per-token API fees and no cloud dependency.
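A minimal client sketch against Ollama's `/api/generate` endpoint, which accepts a JSON body with `model`, `prompt`, and `stream` fields. The model name and prompt below are placeholders; any locally pulled model works, and the request only succeeds with the Ollama daemon running on its default port.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Illustrative Ollama client using only the JDK's built-in HTTP client.
public class OllamaClient {

    // Build the JSON payload for a non-streaming generation request.
    // (Naive string concatenation; a real client would JSON-escape inputs.)
    public static String payload(String model, String prompt) {
        return "{\"model\":\"" + model + "\",\"prompt\":\"" + prompt
             + "\",\"stream\":false}";
    }

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        payload("llama3", "Why run inference locally?")))
                .build();
        // Requires a running Ollama daemon with the model already pulled.
        HttpResponse<String> res =
                client.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(res.body());
    }
}
```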

// 02 — Infrastructure

Production-Grade
from Day One

Built on Oracle Cloud with ARM architecture, Nginx reverse proxy, and Let's Encrypt SSL. Every component designed for reliability and observability.
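A reverse-proxy server block of the kind described here might look like the following sketch; the server name, certificate paths, and upstream port are placeholders, not the lab's actual configuration.

```nginx
# Illustrative Nginx reverse proxy with Let's Encrypt certificates.
server {
    listen 443 ssl;
    server_name example.com;  # placeholder domain

    # Certificate paths as issued by certbot (Let's Encrypt)
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8080;  # placeholder upstream app port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```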

All Systems Operational
ubuntu@everyinlab ~ zsh
# System status check
$ systemctl status chromadb
● chromadb.service — Active (running)
 
# Vector DB heartbeat
$ curl localhost:8000/api/v2/heartbeat
{"nanosecond heartbeat": 1774329346}
 
# Nginx proxy status
$ nginx -t && systemctl reload nginx
✓ nginx: configuration test successful
 
$

// 03 — Technology Stack

Built With
Precision

Every tool chosen for a reason. No unnecessary complexity, no vendor lock-in.

Spring Boot
Spring AI
ChromaDB
Ollama
Nginx
Oracle Cloud