How to Create a Personalized AI Assistant with OpenAI

Imagine having your own digital assistant, kind of like J.A.R.V.I.S. from the Iron Man films, but customized to your needs. This AI assistant is designed to help you tackle routine tasks or anything else you train it to handle.

In this article, we’ll show you an example of what a trained AI assistant can achieve. We’re going to create an AI that can provide basic insights into our website’s content, helping us manage both the site and its content more effectively.

To build this, we’ll use three main pieces: OpenAI, LangChain, and Next.js.

OpenAI

OpenAI, in case you don’t already know, is an AI research organization known for ChatGPT, which can generate human-like responses. They also provide an API that allows developers to access these AI capabilities to build their own applications.

To get your API key, sign up on the OpenAI Platform. After signing up, you can create a key from the API keys section of your dashboard.

API keys section on the OpenAI Platform dashboard.

Once you’ve generated an API key, you need to store it on your computer as an environment variable named OPENAI_API_KEY. This is a conventional name that libraries like OpenAI and LangChain look for, so you won’t have to pass the key around manually later.

Note that Windows, macOS, and Linux each have their own way of setting an environment variable.

Windows
  1. Right-click on “This PC” or “My Computer” and select “Properties”.
  2. Click on “Advanced system settings” in the left sidebar.
  3. In the System Properties window, click the “Environment Variables” button.
  4. Under “System variables” or “User variables”, click “New” and enter the name, OPENAI_API_KEY, and the value of the environment variable.
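
Alternatively, you can set it from Command Prompt with the setx command. This is a quick sketch: your-api-key below is a placeholder for your actual key, and you’ll need to open a new terminal window for the change to take effect.


setx OPENAI_API_KEY "your-api-key"
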
macOS and Linux

To set a permanent variable, add the following to your shell configuration file, such as ~/.bash_profile, ~/.bashrc, or ~/.zshrc:


export OPENAI_API_KEY=value
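
Then reload your shell configuration (or open a new terminal) so the variable becomes available; for example, if you use zsh:


source ~/.zshrc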

LangChain

LangChain is a framework that helps computers understand and work with human language. In our case, it provides the tools that help us convert text documents into numbers.

You might wonder: why do we need to do this?

Basically, AI, machines, and computers are good at working with numbers but not with words, sentences, and their meanings. So we need to convert words into numbers.

This process is known as embedding.

It makes it easier for computers to analyze and find patterns in language data, and it helps capture the semantics of the information they’re given in human language.

A diagram showing the process of embedding words 'fancy cars' into numbers from left to right

For example, let’s say a user sends a query about “fancy cars”. Rather than searching for those exact words in the data source, it can likely understand that you’re trying to search for Ferrari, Maserati, Aston Martin, Mercedes-Benz, and so on.
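
To make this more concrete, here is a minimal sketch of what generating an embedding looks like with LangChain’s OpenAIEmbeddings class. It assumes the @langchain/openai package (installed later in this tutorial) and the OPENAI_API_KEY environment variable set above.


import { OpenAIEmbeddings } from '@langchain/openai';

// Convert a piece of text into a vector of numbers (an embedding).
const embeddings = new OpenAIEmbeddings();
const vector = await embeddings.embedQuery('fancy cars');

// The result is a long array of floating-point numbers, e.g. [0.012, -0.031, ...].
// Texts with similar meanings produce vectors that sit close together,
// which is what makes semantic search possible.
console.log(vector.length);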

Next.js

We need a framework to create a user interface so users can interact with our chatbot.

In our case, Next.js has everything we need to get our chatbot up and running for end users. We’ll build the interface using a React.js UI library, shadcn/ui, and use the Next.js route system to create an API endpoint.

There’s also an SDK, the ai package, that makes it easier and faster to build chat user interfaces.

Data and Other Prerequisites

We’ll also need to prepare some data. It will be processed, stored in a vector store, and sent to OpenAI to provide additional context for the prompt.

In this example, to keep things simple, I’ve made a JSON file containing a list of blog post titles. You can find it in the repository. Ideally, you’d want to retrieve this information directly from a database.
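
To give you an idea, a hypothetical src/lib/data.ts helper (the readData function that the API route imports later) and its JSON source might look something like this; the file path and the titles are placeholders, and the real file lives in the repository.


// src/lib/data.ts -- hypothetical sketch; the real data file is in the repository
import titles from './data.json';

// Returns the list of blog post titles that will be embedded and searched.
export const readData = (): string[] => titles;

// ./data.json (placeholder titles):
// ["How to Set Up a Local Web Server", "10 CSS Tricks You Should Know", "Getting Started with Git"]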

I assume you have a good understanding of JavaScript, React.js, and NPM, since we’ll use all of them to build our chatbot.

Also, make sure you have Node.js installed on your computer. You can check whether it’s installed by typing:

node -v

If you don’t have Node.js installed, you can follow the instructions on the official website.

How’s Everything Going to Work?

To make it easy to follow, here’s a high-level overview of how everything is going to work:

  1. The user enters a question or query into the chatbot.
  2. LangChain retrieves documents related to the user’s query.
  3. The prompt, the query, and the related documents are sent to the OpenAI API to get a response.
  4. The response is displayed to the user.

Now that we have a high-level overview of how everything will work, let’s get started!

Installing Dependencies

Let’s start by installing the packages required to build the user interface for our chatbot. Type the following command:


npx create-next-app@latest ai-assistant --typescript --tailwind --eslint

This command will install and set up Next.js with TypeScript, Tailwind CSS, and ESLint. It may ask you a few questions during setup; in this case, you can pick the default options.
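
Note that the command above doesn’t set up shadcn/ui itself. The shadcn/ui CLI usually needs to be initialized once in the project before its components can be added, which you can most likely do with:


npx shadcn-ui@latest init

This should create the components.json configuration and the ui directory that the components are added to in the next section.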

Once the installation is complete, navigate to the project directory:


cd ai-assistant

Next, we need to install a few additional dependencies, such as ai, openai, and langchain, which weren’t included in the previous command.


npm i ai openai langchain @langchain/openai remark-gfm

Building the Chat Interface

To create the chat interface, we’ll use some pre-built components from shadcn/ui, such as the button, avatar, and input. Fortunately, adding these components is easy with shadcn/ui. Just type:


npx shadcn-ui@latest add scroll-area button avatar card input

This command will automatically pull the components and add them to the ui directory.

Next, let’s make a new file named Chat.tsx in the src/components directory. This file will hold our chat interface.

We’ll use the ai package to handle tasks such as capturing user input, sending queries to the API, and receiving responses from the AI.

The OpenAI response can be plain text, HTML, or Markdown. To format it into proper HTML, we’ll use the remark-gfm package.

We’ll also need to display avatars within the chat interface. For this tutorial, I’m using Avatartion to generate avatars for both the AI and the user. These avatars are saved in the public directory.

Below is the code we’ll add to this file.


'use client';

import { Avatar, AvatarFallback, AvatarImage } from '@/ui/avatar';
import { Button } from '@/ui/button';
import {
    Card,
    CardContent,
    CardFooter,
    CardHeader,
    CardTitle,
} from '@/ui/card';
import { Input } from '@/ui/input';
import { ScrollArea } from '@/ui/scroll-area';
import { useChat } from 'ai/react';
import { Send } from 'lucide-react';
import { FunctionComponent, memo } from 'react';
import { ErrorBoundary } from 'react-error-boundary';
import ReactMarkdown, { Options } from 'react-markdown';
import remarkGfm from 'remark-gfm';

/**
 * Memoized ReactMarkdown component.
 * The component is memoized to prevent unnecessary re-renders.
 */
const MemoizedReactMarkdown: FunctionComponent<Options> = memo(
    ReactMarkdown,
    (prevProps, nextProps) =>
        prevProps.children === nextProps.children &&
        prevProps.className === nextProps.className
);

/**
 * Represents a chat component that allows users to interact with a chatbot.
 * The component displays a chat interface with messages exchanged between the user and the chatbot.
 * Users can type their questions and receive responses from the chatbot.
 */
export const Chat = () => {
    const { handleInputChange, handleSubmit, input, messages } = useChat({
        api: '/api/chat',
    });

    return (
        <Card className="w-full max-w-xl">
            <CardHeader>
                <CardTitle>AI Assistant</CardTitle>
            </CardHeader>
            <CardContent>
                <ScrollArea className="h-[400px] pr-4">
                    {messages.map((message) => {
                        return (
                            <div key={message.id} className="mb-4 flex items-start gap-3">
                                {/* The avatar image paths are placeholders; point them at your files in /public. */}
                                {message.role === 'user' && (
                                    <Avatar>
                                        <AvatarImage src="/user.png" alt="User avatar" />
                                        <AvatarFallback>U</AvatarFallback>
                                    </Avatar>
                                )}
                                {message.role === 'assistant' && (
                                    <Avatar>
                                        <AvatarImage src="/ai.png" alt="AI avatar" />
                                        <AvatarFallback>AI</AvatarFallback>
                                    </Avatar>
                                )}
                                <div>
                                    <div className="font-semibold">
                                        {message.role === 'user' ? 'User' : 'AI'}
                                    </div>
                                    <ErrorBoundary
                                        fallback={<div>{message.content}</div>}
                                    >
                                        <MemoizedReactMarkdown remarkPlugins={[remarkGfm]}>
                                            {message.content}
                                        </MemoizedReactMarkdown>
                                    </ErrorBoundary>
                                </div>
                            </div>
                        );
                    })}
                </ScrollArea>
            </CardContent>
            <CardFooter>
                <form onSubmit={handleSubmit} className="flex w-full gap-2">
                    <Input
                        value={input}
                        onChange={handleInputChange}
                        placeholder="Ask a question..."
                    />
                    <Button type="submit" size="icon">
                        <Send className="h-4 w-4" />
                    </Button>
                </form>
            </CardFooter>
        </Card>
    );
};
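
The Chat component also needs to be rendered on a page before we can see it. Here’s a minimal sketch, assuming the default App Router entry page at src/app/page.tsx and the @/components import alias that create-next-app sets up:


import { Chat } from '@/components/Chat';

// Render the chat interface on the home page.
export default function Home() {
    return (
        <main className="flex min-h-screen items-center justify-center p-4">
            <Chat />
        </main>
    );
}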

Let’s take a look at the UI. First, we need to run the following command to start the Next.js development server:

npm run dev

By default, the Next.js development server runs at localhost:3000. Here’s how our chatbot interface will appear in the browser:

Setting Up the API Endpoint

Next, we need to set up the API endpoint that the UI will call when the user submits their query. To do this, create a new file named route.ts in the src/app/api/chat directory. Below is the code that goes into the file.


import { readData } from '@/lib/data';
import { OpenAIEmbeddings } from '@langchain/openai';
import { OpenAIStream, StreamingTextResponse } from 'ai';
import { Document } from 'langchain/document';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';
import OpenAI from 'openai';

/**
 * Create a vector store from a list of documents using OpenAI embeddings.
 */
const createStore = () => {
    const data = readData();

    return MemoryVectorStore.fromDocuments(
        data.map((title) => {
            return new Document({
                pageContent: `Title: ${title}`,
            });
        }),
        new OpenAIEmbeddings()
    );
};
const openai = new OpenAI();

export async function POST(req: Request) {
    const { messages } = (await req.json()) as {
        messages: { content: string; role: 'assistant' | 'user' }[];
    };
    const store = await createStore();
    const results = await store.similaritySearch(messages[0].content, 100);
    const questions = messages
        .filter((m) => m.role === 'user')
        .map((m) => m.content);
    const latestQuestion = questions[questions.length - 1] || '';
    const response = await openai.chat.completions.create({
        messages: [
            {
                content: `You're a helpful assistant. You're here to help me with my questions.`,
                role: 'assistant',
            },
            {
                content: `
                Please answer the following question using the provided context.
                If the context is not provided, please simply say that you're not able to answer
                the question.

            Question:
                ${latestQuestion}

            Context:
                ${results.map((r) => r.pageContent).join('\n')}
                `,
                role: 'user',
            },
        ],
        model: 'gpt-4',
        stream: true,
        temperature: 0,
    });
    const stream = OpenAIStream(response);

    return new StreamingTextResponse(stream);
}

Let’s break down some important parts of the code to understand what’s happening, as this code is key to making our chatbot work.

First, the following code allows the endpoint to receive a POST request. It takes the messages argument, which is automatically constructed by the ai package running on the front end.


export async function POST(req: Request) {
    const { messages } = (await req.json()) as {
        messages: { content: string; role: 'assistant' | 'user' }[];
    };
    // ...
}

In this part of the code, we process the entries from the JSON file and store them in a vector store.


const createStore = () => {
    const data = readData();

    return MemoryVectorStore.fromDocuments(
        data.map((title) => {
            return new Document({
                pageContent: `Title: ${title}`,
            });
        }),
        new OpenAIEmbeddings()
    );
};

For the sake of simplicity in this tutorial, we store the vectors in memory. Ideally, you’d want to store them in a dedicated vector database; LangChain integrates with a number of options you can choose from.

Then we retrieve the relevant pieces from the documents based on the user’s query.


const store = await createStore();
// Search the store for the documents most similar to the user's message.
// The second argument caps how many matches are returned.
const results = await store.similaritySearch(messages[0].content, 100);

Finally, we send the user’s query and the related documents to the OpenAI API to get a response, and then return that response to the user. In this tutorial, we use the GPT-4 model, which at the time of writing is one of OpenAI’s most capable models.


const latestQuestion = questions[questions.length - 1] || '';
const response = await openai.chat.completions.create({
    messages: [
        {
            content: `You're a helpful assistant. You're here to help me with my questions.`,
            role: 'assistant',
        },
        {
            content: `
            Please answer the following question using the provided context.
            If the context is not provided, please simply say that you're not able to answer
            the question.

        Question:
            ${latestQuestion}

        Context:
            ${results.map((r) => r.pageContent).join('\n')}
            `,
            role: 'user',
        },
    ],
    model: 'gpt-4',
    stream: true,
    temperature: 0,
});

We use a very simple prompt. We first tell OpenAI to evaluate the user’s query and respond using the provided context. We also use the gpt-4 model and set the temperature to 0. Our goal is to ensure that the AI only responds within the scope of the context, instead of getting creative, which can sometimes lead to hallucination.

And that’s it. Now we can try chatting with the chatbot, our digital personal assistant.
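
If you’d like to test the endpoint directly, you can also call it from the terminal while the dev server is running. This is just a quick sketch; the JSON payload mirrors the messages array that the ai package sends from the front end, and the question is only an example.


curl -N -X POST http://localhost:3000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"What blog posts do you have about CSS?"}]}'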

Wrapping Up

We’ve just built a simple chatbot! There’s certainly room to make it more advanced. As mentioned in this tutorial, if you plan to use it in production, you should store your vector data in a proper database instead of in memory. You may also want to add more data to provide better context for answering user queries, and you can try tweaking the prompt to improve the AI’s responses.

Overall, I hope this helps you get started with building your next AI-powered application.

