Research & Development Beyond Pre-Trained AI
Neural AI Agents That Are Conditioned Like Animals
a library for continuously learning Agents based on an alternative to deep learning
Conditioning is a part of human intelligence that is completely absent from AI like GPTs, which remain pre-trained by design. We're building what's missing from first principles.
Why does this matter? The hallucination problem, or more generally the inability of AI to understand the meaning behind information, is a result of the fixed training paradigm. Humans and animals are continuously learning; by contrast, the intelligence in AI is fixed through pre-training, devoid of its own sense of meaning.
Our code is at version 0.1.1, available through a library and a hosted API, working at a level of intelligence comparable to a basic animal like a clam. We're building well outside the deep learning paradigm, so while we use a form of neural networks, we are building from scratch, slowly evolving more complex agent designs to support higher levels of cognition.
Join us. Try the code and reach out.
Use-Case #1 Context-Aware AI Autocomplete for Relational Data
Our first application of this use-case is built for Netbox, the premier source of truth powering network automation, to answer a recurring question faced by network admins: "what role is this new device likely to have in my local network?"
There is no general answer, since each network setup is highly specific and constantly changes as devices are added and removed (an impossible challenge for LLMs trained on general language). Our API solves this problem by connecting a unique AI Agent to a Netbox account and its associated network instance, offering device role predictions grounded in the local context (the current list of devices).
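To make the idea concrete, here is a minimal sketch of how a client might bundle the local context (the current device list) with a new device for a role prediction. The payload shape and field names are illustrative assumptions, not AO Labs' actual API schema:

```python
# Hypothetical sketch -- the payload shape and field names below are
# illustrative assumptions, not AO Labs' actual API schema.
import json


def build_role_prediction_payload(known_devices, new_device):
    """Bundle the local context (the network's current devices and their
    roles) with the new device whose role the Agent should predict."""
    return {
        "context": [
            {"name": d["name"], "role": d["role"]} for d in known_devices
        ],
        # The new device has no role yet; that's what the Agent predicts.
        "input": {"name": new_device["name"]},
    }


# Example: a small network the Agent has already been conditioned on
known_devices = [
    {"name": "sw-core-01", "role": "core-switch"},
    {"name": "fw-edge-01", "role": "firewall"},
]
payload = build_role_prediction_payload(known_devices, {"name": "sw-access-03"})
print(json.dumps(payload, indent=2))
# The payload would then be POSTed to the hosted API endpoint.
```

The key design point is that the context travels with every request, so predictions stay grounded in *your* network rather than in patterns learned from other people's networks.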
Try it below with the Netbox demo instance or your own account if you're a Netbox user. We're keen for your feedback.
Use-Case #2 Per-User IoT & Smart Devices AI
The next evolution beyond Nest-like AI: personal AI controlled directly by its end-users, constantly trained and tuned intuitively like our pets, instead of AI that pattern-matches you to its pre-training on other people's massed preferences.
Use-Case #3 Per-User Recommender AI
Each user gets their own AI Agent which they train to fetch content, replacing a fixed base model serving all users that silos us into filter bubbles or propagates Momo-like meme crazes.
We're talking "I like horror movies on Tuesday nights" level of granularity and personalization.
The Big Picture
A New Layer of AI Trained on Local Context
Status quo AI is increasingly smart but lacks local context and awareness. We're building that layer for the stack.
Interested in these use-cases or have ideas?
We'll help you build them.
How It Works in 2 Steps
1. One config to build custom Agents
See examples of Agent Archs here.
2. One method to train & query as many Agents as unique users, locally or via our Agents-as-a-Service hosted API
Post inputs to get outputs.
To train, include an output as a label OR provide instinct-like triggers for self-training (conditioning).
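The two-step workflow above can be sketched in a few lines. The class and method names here (`Agent`, `next_state`) and the toy memory-lookup learning rule are illustrative assumptions standing in for the library's real internals, which the source does not show:

```python
# Hypothetical sketch of the two-step workflow: one config builds an Agent,
# one method both trains and queries it. Names and internals are
# illustrative assumptions, not the library's actual API.
class Agent:
    def __init__(self, arch_config):
        self.arch = arch_config   # Step 1: one config defines the Agent
        self.memory = {}          # stands in for the Agent's learned state

    def next_state(self, inputs, label=None):
        """Step 2: one method to train & query.

        - Query: call with inputs only; returns a prediction.
        - Train: also pass `label` (or, in the real system, an
          instinct-like trigger for self-training) to learn in place.
        """
        key = tuple(inputs)
        if label is not None:       # supervised call: learn this association
            self.memory[key] = label
        return self.memory.get(key, "unknown")


agent = Agent(arch_config={"inputs": 4, "outputs": 1})
agent.next_state([1, 0, 1, 1], label="role-A")   # training call
print(agent.next_state([1, 0, 1, 1]))            # query call, prints: role-A
```

The point of the sketch is the calling convention: there is no separate fit/predict split, so the Agent keeps learning for as long as it runs, one user-specific instance per user.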
FAQs
What base model is running under-the-hood?
How is this different from Reinforcement Learning?
What's meant by AI "trained like animals"?
What's the problem with backpropagation or with AI as it is?
How does this solve AI hallucination or bias?
Why does this matter if "AGI" or superintelligence is imminent?
Is your aim to replace deep learning or Generative AI / LLMs?
How do I contribute?
Is AO Labs a startup or research org?
Have we published any papers?
Are we hiring?
We are thankful to awesome collaborators from some great places
Think Differently About Thinking