AI chatbots that actually answer the question.
Chatbots for site search, customer support, FAQ, lead qualification, and intake. Built on Anthropic Claude or OpenAI GPT, with citations to your published policy. Custom UI, no embedded vendor widget.
What an AI chatbot does on a small business site.
Most chatbot deployments on small business sites are vendor widgets that say "I am sorry, I cannot help with that." We build custom chatbots with real access to your published policy, services, hours, pricing, and FAQ content. The chatbot reads from the same content you publish to the public site, so its answers stay consistent with what your customers see on the page.
The chatbot can do four things: answer operational questions like hours and pricing, qualify a lead by gathering name, contact details, and basic intake, route an urgent message to the owner's phone or email, and capture a structured intake for staff to follow up on the next business day. It does not give legal advice, medical advice, or quote pricing on custom work. We build those boundaries into the deployment.
The architecture under the hood.
A small business chatbot is a Retrieval-Augmented Generation (RAG) system. The architecture: an embedding model converts your published content into vectors stored in a vector database. When a customer asks a question, the question is converted to a vector and the closest-matching content chunks are retrieved. Those chunks are passed to a large language model (Claude or GPT) along with the question and a system prompt that constrains the response. The LLM writes the answer with citations.
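The retrieve-then-prompt loop above can be sketched in a few lines. This is a toy: it uses bag-of-words cosine similarity in place of a real embedding model and an in-memory dict in place of a vector database, and the chunk ids and texts are invented for illustration.

```python
import math
import re
from collections import Counter

# Toy embedding: bag-of-words term counts. A real deployment calls an
# embedding model and stores the vectors in pgvector or Pinecone.
def embed(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Published content, chunked. Chunk ids double as citation anchors.
chunks = {
    "hours": "Our hours are Monday to Friday, 9am to 5pm.",
    "pricing": "Pricing: standard consultations are billed at a flat rate.",
}

def retrieve(question: str, k: int = 1) -> list[tuple[str, str]]:
    """Return the k chunks closest to the question."""
    q = embed(question)
    ranked = sorted(chunks.items(),
                    key=lambda kv: cosine(q, embed(kv[1])),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble the retrieved chunks and question for the LLM call."""
    context = "\n".join(f"[{cid}] {text}" for cid, text in retrieve(question))
    return ("Answer ONLY from the sources below and cite the chunk id.\n"
            f"Sources:\n{context}\n\nQuestion: {question}")

print(build_prompt("What are your hours?"))
```

The assembled prompt is what actually goes to Claude or GPT; swapping the toy `embed` for a real embedding API is the only structural change a production version needs.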
The system prompt is where most of the work lives. It tells the model what the business does, which topics it can discuss, which topics it must defer to a human, what tone to use, and how to format citations. We tune the system prompt against expected questions and edge cases before launch, then iterate on real customer interactions over the first thirty days.
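As a rough illustration, a system prompt for this kind of deployment tends to take a shape like the following. The business name, topics, and rules here are placeholders, not a template we ship verbatim.

```text
You are the assistant for <BUSINESS NAME>. You answer questions about
hours, location, services, and published pricing ONLY.

Rules:
- Answer only from the retrieved sources. Cite the source id in brackets.
- If the answer is not in the sources, say so and offer the contact form.
- Never give legal or medical advice. Never quote pricing for custom
  work; collect a name, email, and short description instead.
- Tone: friendly, concise, plain language.
```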
Sources: Anthropic Claude documentation, OpenAI platform docs, pgvector, Pinecone RAG primer.
What we build with.
Stack choices are made per project, but the default stack we ship is below.
LLM
Embeddings
Vector DB
UI
Industries that benefit most.
Tax and accounting
Client portal Q&A, intake automation, IRS publication retrieval.
Medical clinics
Front desk question deflection, address, hours, accepted insurance.
Insurance
Policy explainer, certificate request, claim status, payment links.
E-commerce
Order status, returns, shipping, FAQ deflection.
What a chatbot costs.
The chatbot can be added to any existing site for a quoted price, or bundled into a new build at the Professional tier or above. Hosting on Bubbles plus model API costs run between $30 and $200 per month depending on traffic.
Chatbot FAQ.
Will the chatbot make things up?
A properly architected RAG system is constrained to answer from the indexed content and cites the source chunk each answer drew from, which sharply limits invented answers. We test for hallucination before launch and tune the system prompt to defer when the answer is not in the indexed content.
What if a customer asks something I do not want answered?
We define topic boundaries in the system prompt at deployment. The chatbot defers and points to a human for anything outside those boundaries.
How does it handle PII?
For workflows that involve PII (medical, financial, legal) we use BAA-compliant infrastructure and do not send PII to public LLM APIs. For non-sensitive workflows we redact PII at intake.
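For the non-sensitive redaction path, the idea can be sketched with a couple of regular expressions. The patterns below are illustrative only and far narrower than what a production redactor needs (names, addresses, and account numbers also need coverage), and redaction must run before any text leaves the server.

```python
import re

# Illustrative patterns only: US-style phone numbers and simple emails.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Call me at 555-123-4567 or jane@example.com"))
# Call me at [PHONE] or [EMAIL]
```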
What about cost runaway?
We set token caps per session and per day. Anomalies trigger an alert. Cost per resolved query is tracked and visible.
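The per-session and per-day caps can be sketched as a small counter that refuses further work once a limit is hit. The cap values and the `TokenBudget` class here are invented for illustration; a real deployment reads limits from config and persists counters in the session store, not process memory.

```python
from dataclasses import dataclass

# Illustrative limits, not our production defaults.
SESSION_CAP = 8_000    # tokens per conversation
DAILY_CAP = 500_000    # tokens per day across all sessions

@dataclass
class TokenBudget:
    session_used: int = 0
    daily_used: int = 0

    def charge(self, tokens: int) -> bool:
        """Record usage; return False once a cap is exceeded."""
        self.session_used += tokens
        self.daily_used += tokens
        if self.session_used > SESSION_CAP or self.daily_used > DAILY_CAP:
            # In production: fire the anomaly alert and disable the session.
            return False
        return True

budget = TokenBudget()
print(budget.charge(2_000))  # within caps
print(budget.charge(7_000))  # pushes the session past 8,000 tokens
```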
Can I export the conversation log?
Yes. All conversations are logged with timestamps and customer identifiers (where consented). Logs export to CSV or to your CRM.
Ready to scope a Chatbots project?
Free audit comes first. We confirm scope, lock the timeline, and quote any add ons before you sign.