How's Artificial Conditioning different from AI today?
AI today is pre-trained reasoning, while conditioning is like a puppy learning to sit. The best AI today is a hallucination-prone black box, pre-trained by design, as deeply as the P in GPT. Conditioning is continuous learning grounded in transparent history.
We're an applied research venture, fresh from research at UC Berkeley, building Artificial Conditioning as the inevitable layer after deep learning (LLMs and similar), as certain a part of AI's future as conditioning is of human intelligence. We see it as the way to build AI whose understanding we can trust; at the very least, AI we can re-train is AI we can trust.
Key Features for Developers
No gap between training and inference
Train from end-users, deploy at the edge
Outputs trace back to conditioning history
Our first application is built for NetBox, the premier source of truth powering network automation, to answer a recurring question network admins face: "What role is this new device likely to have in my local network?"
There is no general answer: each network's setup is specific and constantly changes as devices are added and removed, an impossible challenge for LLMs trained on general language. Our API solves this by connecting a unique AI Agent to a NetBox account and its associated network instance, offering device role predictions grounded in the local context (the current list of devices).
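As a rough illustration of what "grounded in the local context" means in practice, here is a minimal sketch of such a prediction request. The endpoint path and every field name below are assumptions for illustration, not the documented API.

```python
import json

# Hypothetical base URL: a placeholder, not the real service address.
API_BASE = "https://api.example.com/v1"

def build_role_prediction_request(netbox_devices, new_device_name):
    """Build a prediction request grounded in local context:
    the current list of devices and their known roles."""
    return {
        "url": f"{API_BASE}/agents/netbox/predict-role",  # assumed path
        "payload": {
            # Local context: the devices already in this network instance.
            "context": [
                {"name": d["name"], "role": d["role"]} for d in netbox_devices
            ],
            # The new device whose role we want predicted.
            "input": {"name": new_device_name},
        },
    }

devices = [
    {"name": "core-sw-01", "role": "core-switch"},
    {"name": "edge-fw-01", "role": "firewall"},
]
req = build_role_prediction_request(devices, "edge-fw-02")
print(json.dumps(req["payload"], indent=2))
```

The key design point is that the context travels with every request, so the agent's answer depends on your network, not on patterns from everyone else's.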
Try it below with the Netbox demo instance or your own account if you're a Netbox user. We're keen for your feedback.
Use-Case #2: Per-User IoT & Smart Devices AI
The next evolution after Nest-like AI: personal AI controlled directly by its end-users, constantly trained and tuned intuitively, like our pets, instead of AI that pattern-matches you to pre-training on other people's massed preferences.
Use-Case #3: Per-User Recommender AI
Each user gets their own AI Agent, which they train to fetch content, replacing a fixed base model serving all users and siloing us into filter bubbles or propagating Momo-like meme crazes.
We're talking "I like horror movies on Tuesday nights" level of granularity and personalization.
The Big Picture
A New Layer of AI Trained on Local Context
Status quo AI is increasingly smart but lacks local context and awareness. We're building that layer of the stack.
Interested in these use-cases or have ideas?
We'll help you build them.
See examples of Agent architectures here.
Post inputs to get outputs.
To train, include an output as a label, or provide instinct-like triggers for self-training (conditioning).
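The two training modes above can be sketched as follows. This is a hypothetical illustration: the function, payload fields, and trigger format are assumptions, not the documented API.

```python
def build_training_request(inputs, label=None, triggers=None):
    """Sketch of the two training modes: supply an explicit output
    as a label, OR supply instinct-like triggers for self-training."""
    if (label is None) == (triggers is None):
        raise ValueError("Provide exactly one of: label, triggers")
    payload = {"inputs": inputs}
    if label is not None:
        payload["label"] = label        # supervised: the output is the label
    else:
        payload["triggers"] = triggers  # conditioning: self-training signals
    return payload

# Mode 1: train with an explicit label.
supervised = build_training_request(
    {"device": "edge-fw-02"}, label="firewall"
)

# Mode 2: train via a trigger (hypothetical signal/reward fields).
conditioned = build_training_request(
    {"device": "edge-fw-02"},
    triggers=[{"signal": "admin_accepted_prediction", "reward": 1.0}],
)
```

Either payload would be posted the same way as an inference request; the presence of a label or triggers is what turns an interaction into training.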
Think Differently About Thinking