About Work Infrastructure Numbers Philosophy Contact
Live study: 44+ participants · Pearson r = 0.311 · TPU Research Cloud ✓ · Phoenix, AZ

Bryan Leonard

Consciousness researcher. ML engineer. Builder.

Co-founder, Qira LLC · Phoenix, Arizona · imagineqira.com

Qira Intelligent systems for complex networks. AI-powered platforms for traffic, cognition, and language.
PTI — Live Traffic Intelligence · EGC — Cognitive Research · LOLM — Language Model
imagineqira.com →
About
Bryan Leonard

Independent researcher building from zero.

I study the gap between what people know and what they can say. That question led me to build Expression-Gated Consciousness — a formal framework for measuring how expression gates transparency — and to design LOLM, a novel language model architecture from scratch.

Co-founder of Qira LLC with my brother Brandyn Leonard. Self-taught full-stack engineer. Currently training models at billion-parameter scale on Google’s TPU Research Cloud while shipping production SaaS, Web3 infrastructure, and autonomous systems.

No university. No lab. No venture capital. Just work.

Open to: Municipal technology partnerships, research collaboration, compute grants, and teams solving hard problems.
PyTorch Python React Next.js TypeScript Solana FastAPI NLP CUDA Supabase
View Resume →
The Work

Codey — AI Coding Intelligence

A coding platform that understands your codebase as a connected network. Structural analysis, dependency mapping, and intelligent code generation.

SaaS · GPT-4o + 11 providers · VS Code extension · CLI
In Development

Live Infrastructure

Every system listed here is running in production right now. Not demos. Not mockups. Live services on real data, self-funded from Phoenix.

Live 24/7

Phoenix Traffic Intelligence

Real-time predictive cascade model for the I-10 and I-17 corridors. Ingests live AZ-511 data, computes Level of Service grades, and generates AI crew-dispatch recommendations every 2 minutes across 8 Phoenix freeway corridors.

375K+ corridor snapshots · 10-second polling · AI sweep every 2 min
View live public dashboard →
Training

LOLM — Custom Transformer

Original language model architecture designed from scratch. Not fine-tuning an existing model — building the architecture, tokenizer, training loop, and evaluation pipeline. Running on Google TPU Research Cloud infrastructure.

10B–100B parameter target · Google TPU Research Cloud grant
View architecture on GitHub →
Live Study

EGC — Consciousness Research

Live empirical psychology study with 44+ participants. Original mathematical framework (g(K) = 4K(1−K), by Brandyn Leonard) measuring how emotional knowledge gates conscious expression. Preprint on Zenodo.

N=44+ · Pearson r=0.311 · 3 confirmed response types · Peer-reviewable
Read the preprint →
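At the core of the framework is the quadratic gating function g(K) = 4K(1−K), which is zero at K = 0 and K = 1 and peaks at K = 0.5. A minimal sketch of that curve — the interpretation of K as normalized emotional knowledge follows the preprint, and the code itself is purely illustrative:

```python
def g(K: float) -> float:
    """Expression gate g(K) = 4K(1-K): vanishes at K=0 and K=1, maximal at K=0.5."""
    if not 0.0 <= K <= 1.0:
        raise ValueError("K must be normalized to [0, 1]")
    return 4.0 * K * (1.0 - K)

# The gate is symmetric about K = 0.5 and bounded above by 1.
assert g(0.0) == 0.0
assert g(0.5) == 1.0
assert g(0.25) == g(0.75) == 0.75
```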
In Development

Codey — AI Coding Agent

Full-stack SaaS coding platform with structural codebase analysis. Multi-provider AI routing (GPT-4o, Groq, Gemini, Claude), real code execution, autonomous repo monitoring, and health scoring.

FastAPI + Next.js + PostgreSQL + Redis + Celery · 11 AI providers
Try the beta →
Live

Compute at Scale

13B+ AI tokens processed across all projects in 48 days. $9,100+ in equivalent API compute, entirely self-funded. Every token tracked and auditable through a custom usage dashboard.

31x the Anthropic Max plan · 48 days · Zero institutional backing
View usage dashboard →
Live

Full-Stack Server Infrastructure

DigitalOcean droplet running all production services simultaneously. FastAPI backends, Cloudflare tunnels, SQLite + PostgreSQL databases, Celery workers, systemd services, UFW firewall, fail2ban, SSH key-only auth.

4 production services · 8GB RAM · Ubuntu 24.04 · NYC3
View all repos →
The Numbers

What building at this scale actually looks like.

13.0B
AI tokens processed
That's roughly twice the total word count of English Wikipedia — the AI compute Bryan used across 4 projects over 48 days.
$9,112
est. compute (API equiv.)
Estimated API-equivalent cost over 48 days based on token counts from local session logs and published Anthropic pricing. 31x the Max plan. Self-funded from Phoenix.
44+
research subjects
Real people who completed a live psychology study. Each one wrote authentic responses that are now part of a peer-reviewable dataset on consciousness and expression.
0.311
Pearson correlation
A statistically significant correlation between how much someone expresses and how comfortable they feel doing it. This number has held steady from 14 subjects to 44.
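For readers who want to sanity-check figures like this against the public dataset, the Pearson coefficient is just the covariance of the two variables normalized by their standard deviations. A hedged sketch on made-up data — the names `expression` and `comfort` are illustrative, not the study's actual column names:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation: cov(x, y) / (stdev(x) * stdev(y))."""
    n = len(xs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (statistics.stdev(xs) * statistics.stdev(ys))

# Illustrative toy data, not the study's dataset.
expression = [2, 4, 5, 7, 9]
comfort = [1, 3, 4, 6, 10]
r = pearson_r(expression, comfort)
assert 0.9 < r <= 1.0  # strongly correlated toy data
```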
10B–100B
parameter target
The size of the custom language model being built from scratch. For context, GPT-2 was 1.5B. This targets the scale where real language understanding begins to emerge.
375,000+
traffic snapshots
Continuous monitoring data from 8 Phoenix freeway corridors collected every 10 seconds. Real AZ-511 data from ADOT.
4
active projects
Running simultaneously: a consciousness study, a language model, a coding platform, and a traffic intelligence system. No team. One person.
0
institutional backing
No university, no lab, no research grant (yet), no venture capital. Every project is self-funded and self-directed from a home office in Phoenix.

From Phoenix, Arizona. Without permission. Without a lab.

Philosophy

Real work over everything. Not the appearance of progress. The actual thing. If the numbers don't confirm it, the idea doesn't survive.

Shoot for the scale that makes people uncomfortable when you say it out loud. 10 billion parameter models. Peer-reviewed consciousness research. From Phoenix, without permission.

The gap between knowing and saying is the most interesting problem in existence. We are closing it. That is what all of this is.

Contact

Collaboration, research, and impossible ideas.

For Municipal & Transportation Leaders
We built a live predictive cascade model for the I-10 and I-17 corridors that generates crew-dispatch recommendations every 2 minutes. It's running right now on real AZ-511 data.
See the live Phoenix traffic dashboard →
For Researchers & Academics
The EGC framework proposes that expression gates consciousness through a measurable function. The study is still live and the preprint is on Zenodo. Your data would matter.
Take the study or read the preprint →
For Compute & Cloud Providers
We are pushing $9K+ equivalent in API compute and training custom architectures without institutional backing. We are looking for compute grants to scale LOLM to the frontier class.
View our compute usage dashboard →
For Engineers & Builders
Everything here was built by two people from a home office in Phoenix. Every repo is public. If you're building something hard and want to collaborate, reach out.
Browse all repos on GitHub →

The study is still live. Your data would matter.

Take the EGC study →