The one-year anniversary of OpenAI’s ChatGPT is almost upon us (Nov. 30) and somehow the rise of generative AI still feels like the world’s longest New Year’s Eve celebration.

Millions of people are using ChatGPT and services from fast followers such as Google Bard, Microsoft Bing Chat and Anthropic Claude to create text, image, audio and video content or even software scripts.

As one of its next acts, OpenAI made available blueprints allowing people to create personalized chatbots. People have already hacked together bots that channel Shakespeare or serve as creative writing coaches and technology advisors, among other virtual assistants.

Organizations meanwhile are racing to leverage GenAI to bolster their productivity and operational efficiency. Seventy-five percent of organizations reported an increase in budgets to pursue AI initiatives, according to a recent Dell survey.1

Build Your Own Chatbot

Yet even as businesses look for opportunities to gain a competitive advantage with GenAI, many have struggled with an essential question: How can they apply GenAI to the data living and breathing throughout their own enterprise systems and make it actionable for employees or even customers?

One emerging use case involves using GenAI to build domain-specific chatbots, or virtual assistants that surface information contextualized for an individual business.

Previously, organizations would have to painstakingly code their own chatbots, or add their special sauce to a generic chatbot service. Such work comes at considerable financial and human capital cost, challenging overworked IT staff. And most companies lack the talent and other resources to build such applications.

However, thanks to the broad availability of open-source large and small language models and some richly detailed recipes, organizations can mine their own data silos to create highly targeted chatbots. Such assistants can surface rich information about the company, from answers to technical support questions to product details, sales motions and even human resources policies. They may even unearth data that has long lain dormant in the organization.

This is happening in such domains as clinical drug trials, customer service and government services.

Moreover, some repeatable processes are emerging from these business scenarios.

Wash Your Data With RAG

One approach in particular is gaining steam: Augmenting a pre-trained LLM with context-specific data and tailoring it to a specific industry domain. This helps limit the scope of data while boosting relevancy and accuracy. It’s also less time-consuming than training a model from scratch.

This technique utilizes retrieval-augmented generation (RAG), which retrieves data from structured and unstructured sources, such as documents, databases and web pages, that the language model cannot access on its own.

The target data is broken into smaller chunks and converted into a format the retrieval mechanism can work with. Converting those chunks into vector representations (embeddings) and storing them in a vector database lets the system capture semantic relationships between pieces of data, improving both the accuracy and the speed of retrieval.
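The chunk-and-retrieve flow described above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: it uses a toy bag-of-words embedding where a real system would use a trained embedding model, and a plain in-memory list where a real system would use a vector database. All function names here are hypothetical.

```python
import math
import re
from collections import Counter

def chunk_text(text, max_words=50):
    """Split a document into word-bounded chunks the retriever can index."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use a trained embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(count * b[term] for term, count in a.items() if term in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Stand-in for a vector database: a list of (chunk, vector) pairs.
index = []

def add_document(doc):
    """Chunk a document, embed each chunk and store it in the index."""
    for chunk in chunk_text(doc):
        index.append((chunk, embed(chunk)))

def retrieve(query, k=2):
    """Return the k stored chunks most similar to the query."""
    query_vec = embed(query)
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]
```

A query such as "How do I get VPN access?" would then pull back the indexed chunks whose vectors sit closest to the query vector, and those chunks become the context handed to the language model.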

Anchoring the model with relevant documentation ensures that answers remain up to date and contain information unique to the organization. For organizations, an open-source LLM such as Llama 2 paired with RAG has created something of a GenAI easy button.
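The "anchoring" step typically amounts to prompt assembly: the retrieved chunks are placed in front of the user's question, with instructions that keep the model inside that context. A minimal sketch, assuming a hypothetical `build_prompt` helper (the exact instruction wording varies by deployment):

```python
def build_prompt(question, retrieved_chunks):
    """Assemble a RAG prompt: retrieved context first, then the question,
    with instructions that constrain the model to the supplied context."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The resulting string is what actually gets sent to the LLM, which is why RAG answers stay current: refreshing the indexed documents changes the context without retraining the model.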

A Blueprint to Build Your Own Easy Button

Implementing an LLM with RAG provides a simple path forward, but deciding where to run your GenAI chatbot requires deliberation.

Applications and services work best when located close to the data they leverage to provide value, and that data is often best served by localized compute and storage. Building GenAI on premises also offers the best opportunity to meet control, security and governance requirements with respect to data locality.

Even so, most organizations operate more complex environments with a significant distribution of applications. Indeed, 82% of IT decision makers surveyed by Dell prefer an on-premises or hybrid model for deploying GenAI applications, reflecting the importance of supporting a multicloud architecture that maximizes the value of data while minimizing complexity.2 Moreover, managing workloads with flexibility and choice allows organizations to optimize costs, enhance performance and ensure data compliance while reducing the risk of data or IP leakage.

Fortunately, Dell offers a playbook and platform for building a chatbot customized with your business's data in the comfort of your corporate datacenter. The Dell Validated Design for Red Hat OpenShift AI on APEX Cloud Platform offers organizations a simple guide for deploying a virtual assistant that answers questions and performs simple tasks utilizing an LLM and RAG framework on premises.

Your IT assets are precious. You shouldn't exhaust them building a business-specific chatbot from scratch, nor should you entrust its care and feeding to someone else. Take the DIY approach to bringing AI to your data securely on premises.

Learn how Dell Generative AI Solutions help you bring AI to your data.

1 Generative AI Pulse Survey, Dell Technologies, Sept. 2023

2 Generative AI Pulse Survey, Dell Technologies, Sept. 2023