A generative avatar collection for Posthuman validator stakers. My scope split into two problems — designing a layered visual architecture that could hold its character across billions of combinations, and growing the team that could ship it through a small in-house school.
Context
Posthuman is a Cosmos validator with decentralized governance. The collection serves stakers — to mint an avatar, a holder first needs a Posthuman Sphere SBT, awarded for staking, with six rarity tiers.
The mint mechanic, Sphere SBT utility, and per-tier rarity distribution were designed by the core team. My scope was the visual architecture, asset production, and the team that produced it. Developed in parallel with Galactic Odyssey — they informed each other, neither was the foundation of the other.
Visual decision
I moved the entire collection to pixel art. It gave hand-level control over each slot and let manual polish actually happen at scale — neural generation alone couldn't hold form consistency tightly enough for a true layered constructor.

The style anchored to Posthuman's post-human philosophy — sci-fi frame, with bodies spanning humans, aliens, and robots across inclusive genders and skin variants.
Layer architecture
I designed 12 layer categories — 340 items total, ranging from 1 chain variant to 76 hair variants — and the structure of how items compose without colliding visually. The per-tier probability distribution that runs across this architecture was set by the core team. The resulting combination space, on their tier-rarity wiring:

Common - ~8 billion combinations. Stripped base — face, eyes, hair, clothes, and facial hair. No hats, masks, or badges.
Bronze - ~33 billion combinations. Same shape as Common, slightly more clothing variation.
Silver - ~49 trillion combinations. Hats, eye patches, full mask range, chains, and badges all unlock.
Gold - ~10 quadrillion combinations. All layers fully unlocked.
Platinum - ~17 quadrillion combinations. Widest space — broader background and mask palettes.
Brilliant - ~10 trillion combinations. Curated alien-only top tier — 6 base heads, 10 hair variants, 1 wrinkle option.

My job was making sure the architecture could carry that decision without breaking — every Sphere tier had to read as a coherent set, not a stripped or over-loaded version of another tier.
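The tier counts above fall out of multiplying per-slot variant counts over the slots a tier unlocks. A minimal sketch of that arithmetic — the slot names and variant counts here are hypothetical placeholders, not the actual tables, which were set by the core team:

```python
import math

# Hypothetical per-slot variant counts (illustrative, not the real tables).
SLOT_VARIANTS = {
    "background": 12, "face": 20, "eyes": 30, "hair": 76,
    "clothes": 40, "facial_hair": 15, "hat": 25, "mask": 18,
    "chain": 1, "badge": 10, "eye_patch": 4, "wrinkles": 3,
}

# Which slots each tier unlocks, mirroring the Common/Silver split described above.
TIER_SLOTS = {
    "Common": ["background", "face", "eyes", "hair", "clothes", "facial_hair"],
    "Silver": list(SLOT_VARIANTS),  # hats, masks, chains, badges all unlock
}

def combinations(tier: str) -> int:
    """Total distinct avatars a tier can mint: product of its slots' variant counts."""
    return math.prod(SLOT_VARIANTS[s] for s in TIER_SLOTS[tier])

print(combinations("Common"))  # 12*20*30*76*40*15 = 328,320,000
```

Unlocking a slot multiplies the whole space by that slot's variant count, which is why a handful of extra categories moves a tier from billions to trillions.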
Constraint
Same neural-form-consistency problem Galactic Odyssey ran into — layered composition needs near-100% form consistency across slots, and any deviation breaks the constructor. In Galactic Odyssey we pivoted away from a layered constructor into prompt-structure combinatorics. Posthuman Avatars couldn't pivot — the constructor was the spec.
The other constraint was production volume against team size: 12 layer categories with dozens to hundreds of variants each, on a small core team. One stray pixel cascades through every combination it ever appears in.
Pipeline
Research. Worked the ComfyUI setup until the figure held consistently enough across runs to support layered composition while leaving real variation room. Gating step — without that base, nothing downstream worked.
Production loop. Variant generation (some categories per color — hair, for instance, ran separately for 11 colors), selection against the spec, and Photoshop polish to align every asset to the layer grid and clean the silhouette. This is where the production hours actually live.
On-chain handoff. Sliced cleaned assets to the layer schema, named every trait for on-chain metadata, and supported the team's per-tier rarity-table population. Designed the landing page in-house. Worked with OmniFlix on the mint surface — Sphere SBT check, on-click generation, on-chain commit. The mint logic was engineering's; my role was the visual surface.
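The handoff step amounts to turning a sliced-layer schema into per-mint trait metadata. A hedged sketch of that shape — the slot names, trait names, and record structure here are illustrative, not the actual on-chain schema:

```python
import random

# Illustrative layer schema: slot -> trait names, listed in paint order.
SCHEMA = {
    "background": ["void", "nebula"],
    "face": ["human_01", "alien_01", "robot_01"],
    "eyes": ["round", "visor"],
    "hair": ["buzz", "mohawk", "none"],
}

def roll_avatar(rng: random.Random) -> dict:
    """Pick one trait per slot and emit the metadata record for a mint commit."""
    traits = {slot: rng.choice(variants) for slot, variants in SCHEMA.items()}
    return {"traits": traits, "layer_order": list(SCHEMA)}

record = roll_avatar(random.Random(42))
```

Keeping paint order inside the schema is what lets a renderer rebuild any minted avatar from its metadata alone — the same property the constructor spec depends on.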
Formally, the collection doesn't exist on-chain right now. The pipeline that produced it is the same shape I keep extending.
The team and the school
Core team of about five, with two to four community contributors. Most of the visual production sat with me, and we needed hands. I ran what was effectively a small in-house school for about three months — lectures on controlled image generation in ComfyUI, recorded video tutorials on the Photoshop polish workflow, walk-throughs of the broader pipeline, and on-call consultancy while contributors worked. Two contributors landed work that shipped into the final collection. When they left, they took working ComfyUI and production-Photoshop skill with them.
Production trade-off
One decision I'd revisit: the working canvas was 1024×1024 with each "pixel" an 8×8 block, instead of building at native 128×128 and scaling at display time. Every edit had to respect the pseudo-pixel grid by hand. The collection shipped clean, but at a higher cost than it needed to.
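The cost of the pseudo-pixel approach is that any edit can silently break the 8×8 grid. A check like the following, run over exported frames, would catch misaligned blocks automatically — a pure-Python sketch over a nested pixel list, since the real assets were Photoshop files:

```python
def grid_aligned(pixels, block=8):
    """True if every block x block cell holds exactly one color,
    i.e. the oversized canvas still reads as clean native pixel art."""
    h, w = len(pixels), len(pixels[0])
    for y in range(0, h, block):
        for x in range(0, w, block):
            cell = {pixels[y + dy][x + dx] for dy in range(block) for dx in range(block)}
            if len(cell) != 1:  # mixed colors inside one pseudo-pixel
                return False
    return True

# A 16x16 canvas of 8x8 blocks: two pseudo-pixels per side, each uniform.
clean = [[(y // 8, x // 8) for x in range(16)] for y in range(16)]
print(grid_aligned(clean))  # True
```

Building at native 128×128 and scaling with nearest-neighbor at display time would have made this invariant hold by construction instead of by hand.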
Outcome
340 compatible assets shipped across 12 layer categories — form-consistency held across every one of them, no layer collisions in any permitted combination. The system was built to extend: new variants slot into the existing layer structure without re-architecting it.
The constructor mechanic was working end-to-end before the platform pause: Sphere SBT check, on-click generation, on-chain mint, trait commitment.