How It Works

  1. User Question: The assistant receives a natural language question like “What happened to Bitcoin today?”

  2. Get Knowledge Context: The Pine Context API analyzes the question and returns relevant information formatted as LLM-ready context.

  3. Combine with LLM: The relevant context is passed along with the original question to OpenAI’s API, which generates a natural language response.
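The three steps can be sketched as a single end-to-end pipeline. The two helpers below are stubs with made-up return values; they are implemented for real in Steps 3 and 4:

```python
# Sketch of the pipeline. Both helpers are stubbed here with
# placeholder data; the real implementations appear in the steps below.
def get_general_context_and_sources(question):
    # Stub: the real version calls the Pine Context API (Step 3)
    return "Bitcoin rose today on ETF inflows.", [{"url": "https://example.com"}]

def ask_open_ai_with_context(question, context):
    # Stub: the real version calls OpenAI's chat completions API (Step 4)
    return f"Answer derived from context: {context}"

def answer_question(question):
    # 1. Fetch LLM-ready context (and sources) for the question
    context, sources = get_general_context_and_sources(question)
    # 2. Let the LLM answer using that context
    answer = ask_open_ai_with_context(question, context)
    return answer, sources
```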

Step 1: Set API Keys

For this example you will need both a Pine API key and an OpenAI API key. After obtaining the keys, export them as environment variables or set them directly in your Python code.

export PINE_API_KEY="sk-cj9I..."
export OPENAI_API_KEY="sk-A4Yk4E..."
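If you use environment variables, the keys can be read back in Python with os.getenv. A small sketch (the helper name is ours) that fails fast if either key is missing:

```python
import os

def load_api_keys():
    """Read the Pine and OpenAI keys from the environment, failing fast if absent."""
    pine_key = os.getenv("PINE_API_KEY")
    openai_key = os.getenv("OPENAI_API_KEY")
    if not pine_key or not openai_key:
        raise RuntimeError("Set PINE_API_KEY and OPENAI_API_KEY before running.")
    return pine_key, openai_key
```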

Step 2: Install Dependencies

For this example we will need the openai and requests packages.

pip3 install openai
pip3 install requests

Step 3: Get Context and Sources from Pine

We write a function which calls the Pine Context API and returns the context along with its sources:

def get_general_context_and_sources(question):
    response = requests.post(
        'https://api.pine.dev/context',
        headers={
            'Authorization': f'Bearer {PINE_API_KEY}',
            'Content-Type': 'application/json'
        },
        json={
            'query': question
        }
    )
    response_json = response.json()
    return (response_json["markdown"], response_json["sources"])
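For production use you may want a timeout and explicit HTTP error checking. A hedged variant of the same call (the `_safe` suffix is ours):

```python
import requests

PINE_API_KEY = "sk-..."  # placeholder; load from your environment in practice

def get_general_context_and_sources_safe(question):
    """Like the function above, but with a timeout and HTTP status checking."""
    response = requests.post(
        'https://api.pine.dev/context',
        headers={
            'Authorization': f'Bearer {PINE_API_KEY}',
            'Content-Type': 'application/json'
        },
        json={'query': question},
        timeout=30,  # avoid hanging indefinitely on a slow network
    )
    response.raise_for_status()  # surface 4xx/5xx errors immediately
    response_json = response.json()
    return (response_json["markdown"], response_json["sources"])
```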

Step 4: Ask OpenAI with Context

We write a function which prompts OpenAI to answer our query with Pine’s context:


LLM_SYSTEM_PROMPT = """
You are a direct response engine.
You will receive context information followed by a question.
Use only the provided context to answer the question.
Answer concisely and naturally, as a knowledgeable expert would in conversation.
If the context doesn't contain enough information to fully answer the question, only use what is available in the context.
Do not mention that you used context, just reply with the answer.
"""

def ask_open_ai_with_context(question, context):
    client = OpenAI(api_key=OPENAI_API_KEY)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": LLM_SYSTEM_PROMPT},
            {"role": "user", "content": f"Question: {question}\n\nContext: {context}"}
        ]
    )

    return response.choices[0].message.content
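The payload sent to the chat API is just two messages: the system prompt, and a single user turn that embeds both the question and the context. A small helper (name and abridged prompt are ours) makes the assembly explicit:

```python
SYSTEM_PROMPT = "You are a direct response engine. Use only the provided context."  # abridged

def build_messages(question, context):
    """Assemble the two-message payload for the chat completions call."""
    user_content = f"Question: {question}\n\nContext: {context}"
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_content},
    ]
```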

Step 5: Putting It Together

We now combine our functions:

question = sys.argv[1]  # the first command-line argument is our question
context, sources = get_general_context_and_sources(question)
response = ask_open_ai_with_context(question, context)

print(response)

print("\nSources:")
for source in sources:
    print(f' - {source["url"]}')
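Since the question comes from sys.argv, it is worth guarding against a missing argument before calling the APIs. A minimal sketch (the helper name is ours):

```python
import sys

def read_question(argv):
    """Return the question from the argument list, or exit with a usage message."""
    if len(argv) < 2:
        print(f'Usage: python3 {argv[0]} "<question>"', file=sys.stderr)
        sys.exit(1)
    return argv[1]
```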

Final Script

The complete script should look like this:

search.py
from openai import OpenAI
import requests
import os
import sys

PINE_API_KEY = os.getenv('PINE_API_KEY')
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

LLM_SYSTEM_PROMPT = """
You are a direct response engine.
You will receive context information followed by a question.
Use only the provided context to answer the question.
Answer concisely and naturally, as a knowledgeable expert would in conversation.
If the context doesn't contain enough information to fully answer the question, only use what is available in the context.
Do not mention that you used context, just reply with the answer.
"""

def get_general_context_and_sources(question):
    response = requests.post(
        'https://api.pine.dev/context',
        headers={
            'Authorization': f'Bearer {PINE_API_KEY}',
            'Content-Type': 'application/json'
        },
        json={
            'query': question
        }
    )
    response_json = response.json()
    return (response_json["markdown"], response_json["sources"])

def ask_open_ai_with_context(question, context):
    client = OpenAI(api_key=OPENAI_API_KEY)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": LLM_SYSTEM_PROMPT},
            {"role": "user", "content": f"Question: {question}\n\nContext: {context}"}
        ]
    )

    return response.choices[0].message.content

question = sys.argv[1]
context, sources = get_general_context_and_sources(question)
response = ask_open_ai_with_context(question, context)

print(response)

print("\nSources:")
for source in sources:
    print(f' - {source["url"]}')

You can now run the script via:

python3 search.py "How much does it cost to enter the Encinitas Turkey Trot tomorrow?"

Sample Responses

“How much does it cost to enter the Encinitas Turkey Trot tomorrow?”

The cost to enter the Encinitas Turkey Trot tomorrow is $77.82 for the 5k and $88.42 for the 10k.

Sources:
 - https://encinitasturkeytrot.org/
 - https://www.facebook.com/EncinitasTurkeyTrot/
 - https://raceroster.com/events/2024/79414/encinitas-turkey-trot
 - https://www.instagram.com/encinitasturkeytrot/

“How’s the surf in Santa Cruz today?”

The surf in Santa Cruz today is 3-4+ feet and clean, with primary swells coming from the SSW and secondary swells from the WSW. Winds are light, ranging from 2 to 7 mph, contributing to the clean conditions. The water temperature is around 55°F.

Sources:
 - https://www.surfline.com/surf-report/santa-cruz/584204204e65fad6a77099d9
 - https://surfcaptain.com/forecast/santa-cruz-california
 - https://www.surfline.com/surf-report/pleasure-point/5842041f4e65fad6a7708807
 - https://deepswell.com/surf-report/US/Santa-Cruz-County/Pleasure-Point/1061