Location: Washington, D.C. (open to relocation, but prefer an initial remote period)
Remote: Yes
Willing to relocate: Yes (June 2026 and beyond)
Technologies: Python, Rust, Elixir | Neo4j, Postgres, Redis, K8s | PyTorch, MLX, LoRA/QLoRA, Modal A100s, H100s | FastAPI, Phoenix, Rails
Résumé/CV: https://linkedin.com/in/arthurcolle
Email: arthur@distributed.systems
Background:
Deep knowledge of the OpenAI and Anthropic LLM APIs. Comfortable running models on GPUs and on Apple Silicon unified memory. I have trained and deployed agentic coding, tool-calling, and multi-turn agent models in two prior roles.
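To make that concrete, the core of a multi-turn tool-calling agent over the OpenAI API looks roughly like this (a minimal sketch, not a real deployment; the model name and the get_time tool are placeholders):

    import json
    from openai import OpenAI

    client = OpenAI()

    def get_time(timezone: str) -> str:
        # Stub tool for illustration only.
        return f"12:00 in {timezone}"

    tools = [{
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get the current time in a timezone.",
            "parameters": {
                "type": "object",
                "properties": {"timezone": {"type": "string"}},
                "required": ["timezone"],
            },
        },
    }]

    messages = [{"role": "user", "content": "What time is it in Tokyo?"}]
    while True:
        resp = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
        msg = resp.choices[0].message
        messages.append(msg)
        if not msg.tool_calls:  # no tool requests -> model produced a final answer
            print(msg.content)
            break
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            result = get_time(**args)  # dispatch to the local tool
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})

Most agent scaffolding ends up being variations on what wraps that loop: sandboxing, retries, eval hooks.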
Building autonomous agent infrastructure. I currently run a small R&D shop (Distributed, Systems & Co.) doing distributed AGI and AI safety research, where I am developing graphsub (graph substrate), a graph-native service mesh for agents, along with eval harnesses and self-modifying DSLs.
Before that: 4 years at Goldman Sachs, deep in SecDB/Slang (the reactive dependency-graph system). As an engineer in the technology division on the interest rate products trading desk, I worked on collateralized mortgage obligations, automated parts of the downstream IR-swap lifecycle, and handled CDS lifecycle management for novations and other lifecycle events. I also had a hand in managing and working on our internal TriOptima notional compression cycles. I was promoted from tech into structuring. Series 7.
Recent work: 15-service LLM stack (zero downtime across 98 deploys), fine-tuned Llama-3.1-8B (23% hallucination reduction), Rust daemons and TUIs. 3k+ GitHub contributions last year.
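For context on the fine-tuning side, the adapter setup for a LoRA run on Llama-3.1-8B looks roughly like this (a sketch using peft; the hyperparameters and target modules here are illustrative, not the configuration of that run):

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-3.1-8B"
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer = AutoTokenizer.from_pretrained(base)

    lora = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.05,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only the adapter weights train; the base model stays frozen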
Looking for: AI labs, defense tech, or startups building agentic systems / foundation model infra / AI safety tooling. IC or research roles.
I can run SFT and RL experiments and train models, and I'd love to train agentic models, ideally on large GPU clusters. The last two years have been a focused period of building "agentic scaffolding" and scaling up multi-agent systems; if any of this sounds appealing, I can definitely be of help to your org!
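The SFT side of those experiments follows the standard TRL shape (a sketch; the dataset, output path, and hyperparameters are placeholders, not an actual run):

    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer
    from peft import LoraConfig

    # Placeholder dataset; expects e.g. a "text" column of training examples.
    dataset = load_dataset("json", data_files="sft_data.jsonl", split="train")

    trainer = SFTTrainer(
        model="meta-llama/Llama-3.1-8B",
        train_dataset=dataset,
        args=SFTConfig(output_dir="sft-out", per_device_train_batch_size=2, num_train_epochs=1),
        peft_config=LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"),
    )
    trainer.train()

The RL experiments follow the same shape with TRL's RL trainers (PPO and friends) swapped in.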