You could probably ask ChatGPT how to get started with GPT, but I think a lot of people are building apps that take advantage of GPT's fine-tuning capability. https://platform.openai.com/docs/guides/fine-tuning
If I use this to let my customers ask questions about our product documentation, how do I limit questions to those about my product documentation? I don't want to be paying OpenAI for questions unrelated to my product.
One idea would be to use a much cheaper (and faster) classifier that returns a "yes" or "no" indicating whether the question is about your product documentation.
Using Ada or Babbage is about 1% of the cost of Davinci (and Curie is 10% the cost of Davinci).
Without any real tuning, this responds quite promptly (and, in the various tests I've done, correctly):
curl https://api.openai.com/v1/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "text-ada-001",
"prompt": "Identify if the following question is about the Olympic Games. Answer with {yes}, {no}, {maybe}.\n\nWhat category is pole vaulting in?",
"temperature": 0.7,
"max_tokens": 38,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0
}'
I was just after something simple from its training set to demonstrate the kind of lightweight classification that could be used as a check before passing the request on to a more expensive fine-tuned model.
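To sketch how that check might sit in front of the expensive model, here is a minimal Python example built around the same completions endpoint and `text-ada-001` prompt used in the curl call above. The helper names (`classify`, `answer`), the low `temperature`, and the fallback message are my own assumptions, not anything prescribed by the API; it is a sketch of the gating idea, not a definitive implementation.

```python
import json
import urllib.request

# Same classifier prompt as the curl example, with the user's question appended.
CLASSIFIER_PROMPT = (
    "Identify if the following question is about the Olympic Games. "
    "Answer with {{yes}}, {{no}}, {{maybe}}.\n\n{question}"
)

def parse_classifier_answer(completion_text):
    """Treat only an explicit 'yes' as on-topic; 'no' and 'maybe' both fail the gate."""
    text = completion_text.strip().lower()
    return "yes" in text and "no" not in text

def classify(question, api_key):
    """Ask the cheap model whether the question is on-topic."""
    body = json.dumps({
        "model": "text-ada-001",
        "prompt": CLASSIFIER_PROMPT.format(question=question),
        "temperature": 0,   # deterministic output suits a yes/no check (assumption)
        "max_tokens": 5,
    }).encode()
    req = urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return parse_classifier_answer(data["choices"][0]["text"])

def answer(question, api_key, expensive_model_call):
    """Only forward on-topic questions to the fine-tuned (expensive) model."""
    if not classify(question, api_key):
        return "Sorry, I can only answer questions about our product documentation."
    return expensive_model_call(question)
```

The point is that the cheap classifier call fails closed: anything that doesn't come back as an explicit "yes" never reaches the fine-tuned model, so you only pay the higher per-token rate for questions that pass the gate.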
This is really a good question.
It would also be interesting to know the best method for providing GPT with product documentation and restricting questions to that documentation and nothing else.