Llaminal
Bringing local LLM workflows into the terminal
Challenge: local model tooling often feels fragmented or too low-level for fast daily use.
Approach: built a polished CLI layer over Ollama with markdown rendering and practical command ergonomics.
Outcome: faster iteration for prompt experimentation and shell-native AI workflows.
View Repository
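A CLI layer like this ultimately talks to Ollama's local HTTP API. The sketch below shows the kind of non-streaming request such a wrapper might send to Ollama's documented `/api/generate` endpoint; the function names (`build_request`, `ask`) and the model name are illustrative, not taken from the Llaminal codebase.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for Ollama's /api/generate endpoint."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )


def ask(model: str, prompt: str) -> str:
    """Send the prompt and return the response text (requires a running Ollama daemon)."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]
```

A real CLI would add streaming output and markdown rendering on top of this core round trip.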
latex-mcp
MCP utility for math rendering in messaging flows
Challenge: quickly rendering clean mathematical output for assistant and chat use cases.
Approach: implemented a focused MCP server that converts LaTeX directly to PNG assets.
Outcome: reduced friction for sharing math output in automated pipelines.
View Repository
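One common way to turn a bare formula into a PNG is to wrap it in a minimal standalone TeX document, compile it, and rasterize the result. The sketch below uses a `pdflatex` + `pdftoppm` pipeline as one possible toolchain; the actual latex-mcp implementation may render differently, and the helper names here are assumptions.

```python
import subprocess
import tempfile
from pathlib import Path


def wrap_formula(tex: str) -> str:
    """Wrap a bare formula in a minimal standalone document for rendering."""
    return (
        "\\documentclass[preview,border=2pt]{standalone}\n"
        "\\usepackage{amsmath}\n"
        "\\begin{document}\n"
        f"$\\displaystyle {tex}$\n"
        "\\end{document}\n"
    )


def render_png(tex: str, out_stem: str = "formula") -> Path:
    """Render a formula to <out_stem>.png (requires pdflatex and pdftoppm installed)."""
    out_root = Path(out_stem).resolve()
    with tempfile.TemporaryDirectory() as tmp:
        (Path(tmp) / "f.tex").write_text(wrap_formula(tex))
        # Compile quietly; batchmode keeps pdflatex from prompting on errors.
        subprocess.run(["pdflatex", "-interaction=batchmode", "f.tex"],
                       cwd=tmp, check=True, capture_output=True)
        # Rasterize the single-page PDF to a 300 dpi PNG outside the temp dir.
        subprocess.run(["pdftoppm", "-png", "-r", "300", "-singlefile",
                        "f.pdf", str(out_root)], cwd=tmp, check=True)
    return out_root.with_suffix(".png")
```

An MCP server would expose a function like `render_png` as a tool and return the image path or bytes to the calling assistant.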
Atomic-1Bit
Exploring ultra-light ternary inference patterns
Challenge: inference on constrained hardware needs aggressive simplification.
Approach: prototyped a 1.58-bit ternary inference direction with bare-metal constraints in mind.
Outcome: established a base for further low-footprint AI experimentation.
View Repository
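The core idea behind 1.58-bit inference is constraining weights to {-1, 0, +1} so that matrix products reduce to additions and subtractions. A minimal sketch of absmean ternary quantization in that spirit (the BitNet b1.58-style scheme; function names here are illustrative, not from the repo):

```python
def ternary_quantize(weights):
    """Quantize weights to {-1, 0, +1} using absmean scaling."""
    # Scale by the mean absolute weight; fall back to 1.0 for an all-zero vector.
    scale = sum(abs(w) for w in weights) / len(weights) or 1.0
    # Round to nearest integer, then clip to the ternary range.
    q = [max(-1, min(1, round(w / scale))) for w in weights]
    return q, scale


def ternary_dot(q, scale, x):
    """Dot product with ternary weights: only adds/subtracts, one final multiply."""
    acc = 0.0
    for w, xi in zip(q, x):
        if w == 1:
            acc += xi
        elif w == -1:
            acc -= xi
    return acc * scale
```

Because the inner loop never multiplies, this maps well onto constrained or bare-metal targets where multipliers are scarce.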