• What's Next in AI: Artificial Conditioning

    Build AI Agents That Are Conditioned Like Animals

    An API to build and deploy continuously (and locally) learning AI Agents, trained per end-user

     

     

  • How's Artificial Conditioning different from AI today?

     

    AI today is pre-trained reasoning, while conditioning is like a puppy learning to sit. Even the best AI today is a hallucinating black box, pre-trained by design, as deeply as the P in GPT. Conditioning is continuous learning that's grounded in transparent history.

     

    We're an applied research venture, fresh out of UC Berkeley, building Artificial Conditioning as the layer after deep learning (LLMs and similar), an inevitability for AI as certain as conditioning is a part of human intelligence. We see it as the way to build AI whose understanding we can trust; at the very least, AI we can re-train is AI we can trust.

     

    This is radically different. With our library and API, we're now working with select pilot customers; join us. Try the code and reach out if you see potential for your application. We're here to explore use-cases and implementation.

  • Key Features for Developers

    Continuous Learning

    No gap between training and inference

    Local Training

    Train from end-users, deploy at the edge

    Transparent Understanding

    Outputs trace back to conditioning history

  • Use Cases

    Use-Case #1: Context-Aware AI Autocomplete for Relational Data

     

    Our first application of this use-case is built for Netbox, the premier source of truth powering network automation, to answer a recurring question faced by network admins: "What role is this new device likely to have in my local network?"

     

    There is no general answer, since each network setup is highly specific and constantly changes as devices are added and removed (an impossible challenge for LLMs trained on general language). Our API solves this problem by connecting a unique AI Agent to a Netbox account and its associated network instance, offering device-role predictions grounded in the local context (the current list of devices), as sketched below.
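
    To make the integration concrete, here's a minimal sketch of querying a per-instance Agent for a device-role prediction. The base URL, route, headers, and JSON field names below are illustrative assumptions, not the documented aolabs.ai API; see the docs for the real interface.

    # Minimal sketch (assumed endpoint and field names, for illustration only).
    import requests

    API_URL = "https://api.aolabs.ai"   # assumed base URL
    API_KEY = "YOUR_API_KEY"            # placeholder credential

    def predict_device_role(agent_id, new_device, current_devices):
        """Ask the Agent tied to one Netbox instance which role a new device likely has."""
        payload = {
            "agent_id": agent_id,            # one Agent per Netbox instance
            "INPUT": {
                "device": new_device,        # e.g. {"device_type": "...", "site": "..."}
                "context": current_devices,  # the local context: the current device list
            },
        }
        resp = requests.post(f"{API_URL}/agent/invoke",
                             headers={"X-API-KEY": API_KEY},
                             json=payload,
                             timeout=30)
        resp.raise_for_status()
        return resp.json()                   # assumed to include the predicted role

    Because the prediction is grounded in the current device list, the same call against two different Netbox instances can return different roles.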

     

    Try it below with the Netbox demo instance or your own account if you're a Netbox user. We're keen for your feedback.

     

    Use-Case #2: Per-User IoT & Smart Devices AI

     

    The next evolution after Nest-like AI: personal AI controlled directly by its end-users, constantly trained and tuned intuitively like our pets, instead of AI that pattern-matches you against its pre-training on other people's aggregated preferences.


    Use-Case #3: Per-User Recommender AI

     

    Each user gets their own AI Agent, which they train to fetch content, replacing a fixed base model serving all users and siloing us into filter bubbles or propagating Momo-like meme crazes.

     

    We're talking "I like horror movies on Tuesday nights" level of granularity and personalization.
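
    As a toy sketch of that granularity (a stand-in class written for illustration, not the published library), each user's Agent is conditioned only on that user's own feedback and then queried with the same kind of context:

    # Toy per-user recommender conditioning; illustrative only, not the library API.
    from collections import defaultdict

    class ToyConditionedAgent:
        """Learns (context, item) -> reinforcement counts from one user's feedback."""
        def __init__(self):
            self.weights = defaultdict(int)

        def train(self, context, item, liked):
            # Positive feedback reinforces the association, negative feedback weakens it.
            self.weights[(context, item)] += 1 if liked else -1

        def recommend(self, context, candidates):
            # Return the candidate most reinforced in this context so far.
            return max(candidates, key=lambda item: self.weights[(context, item)])

    # One Agent per user, trained only on that user's history.
    alice = ToyConditionedAgent()
    alice.train("tuesday-night", "horror", liked=True)
    alice.train("tuesday-night", "comedy", liked=False)
    print(alice.recommend("tuesday-night", ["horror", "comedy", "news"]))  # -> horror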

    The Big Picture

    A New Layer of AI Trained on Local Context

     

    Status quo AI is increasingly smart but lacks local context and awareness. We're building that layer for the stack.

     


    Interested in these use-cases or have ideas?

     

    We'll help you build them.

     

  • How It Works in 2 Steps

    More at GitHub or docs.aolabs.ai.

    1. One config to build custom Agents

     

    See examples of Agent Archs here.
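
    For illustration only, a config might look something like the dictionary below; the field names are assumptions, not the documented Agent Arch schema.

    # Illustrative Agent config (assumed field names, not the documented Arch schema).
    agent_config = {
        "arch_name": "netbox-device-roles",    # hypothetical identifier
        "input_size": 10,                      # width of the binary input channel
        "output_size": 4,                      # width of the output / label channel
        "instincts": ["reinforce_on_accept"],  # example instinct-like trigger
        "description": "Device-role autocomplete grounded in local Netbox context",
    }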

    2. One method to train & query as many Agents as unique users, locally or via our Agents-as-a-Service API

     

    Post inputs to get outputs.

     

    To train, include an output as a label OR provide instinct-like triggers for self-training (conditioning).
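
    Here's a minimal sketch of that single method, assuming a REST-style endpoint; the route and field names are placeholders for illustration, not the documented API.

    # Sketch of step 2: one call both queries and trains an Agent
    # (assumed endpoint and field names, for illustration only).
    import requests

    API = "https://api.aolabs.ai/agent/invoke"   # assumed endpoint
    API_KEY = "YOUR_API_KEY"                     # placeholder credential

    def invoke(agent_id, inputs, label=None, instincts=None):
        payload = {"agent_id": agent_id, "INPUT": inputs}
        if label is not None:
            payload["LABEL"] = label          # train by supplying the desired output as a label
        if instincts is not None:
            payload["INSTINCTS"] = instincts  # or provide instinct-like triggers for self-training
        resp = requests.post(API, headers={"X-API-KEY": API_KEY}, json=payload, timeout=30)
        resp.raise_for_status()
        return resp.json()

    # Query: post inputs, get outputs.
    invoke("user-42-agent", inputs=[1, 0, 1, 1])
    # Train: the same call, plus a label (or instinct triggers).
    invoke("user-42-agent", inputs=[1, 0, 1, 1], label=[0, 1])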

  • FAQs

     

  • We are thankful to awesome alpha and beta testers from some great places

  • Think Differently About Thinking