
https://medium.com/@sohaibshaheen/train-chatgpt-with-custom-data-and-create-your-own-chat-bot-using-macos-fb78c2f9646d

I was using this tutorial to create my language model. I overlooked the UPDATE notice at the bottom on my first run, so I ran into an error saying gpt_index can't be found. I then updated the code to import from llama_index instead and added my secret key. Now my problem is that when I run "python3 app.py" I get:

can't open file '/Users/my-Name/app.py': [Errno 2] No such file or directory

I tried moving the docs folder and app.py (created with TextEdit on macOS) into the Users directory, but then I get:

Traceback (most recent call last):
  File "/Users/my-Name/app.py", line 1, in <module>
    from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex, LLMPredictor, ServiceContext
ImportError: cannot import name 'GPTSimpleVectorIndex' from 'llama_index' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/llama_index/__init__.py)

I want to know what I did wrong, and why it still fails even when app.py is placed directly in the Users directory.

1 Answer

I was also following this article and found that much of the code was out of date: several of the features it relies on have been deprecated. After a lot of trial and error, and a review of the documentation, I now have a working custom LLM (on macOS; the software and packages may differ on other operating systems).

The following software and packages are required:

  1. Install Python from https://www.python.org/downloads/
  2. python3 -m pip install -U pip
  3. pip3 install openai
  4. pip3 install llama-index
  5. pip3 install PyPDF2
  6. pip3 install gradio
  • Imports now come from llama_index.core
  • GPTSimpleVectorIndex is deprecated in favour of GPTVectorStoreIndex
  • LLMPredictor and ServiceContext are deprecated in favour of Settings, StorageContext and load_index_from_storage
  • index.save_to_disk is deprecated in favour of index.storage_context.persist()
  • index.query is deprecated in favour of a query engine created with index.as_query_engine()
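Put side by side, the old tutorial calls map onto the new API roughly like this (a sketch based on my working code below; the exact names assume llama-index 0.10 or later):

```python
# Old (tutorial / early llama_index)                New (llama_index.core)
# from llama_index import GPTSimpleVectorIndex  ->  from llama_index.core import GPTVectorStoreIndex
# index.save_to_disk("index.json")              ->  index.storage_context.persist("storage")
# GPTSimpleVectorIndex.load_from_disk("f.json") ->  load_index_from_storage(
#                                                       StorageContext.from_defaults(persist_dir="storage"))
# index.query(text)                             ->  index.as_query_engine().query(text)
```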

My updated and functioning code now reads as follows:

from llama_index.core import SimpleDirectoryReader, GPTVectorStoreIndex, Settings, StorageContext, load_index_from_storage
from llama_index.llms.openai import OpenAI
import gradio as gr
import os

os.environ["OPENAI_API_KEY"] = '--enter API Key---'

# Build a vector index from the documents in directory_path and persist it to ./storage
def construct_index(directory_path):
    num_outputs = 512

    Settings.llm = OpenAI(temperature=0.7, model="gpt-3.5-turbo", max_tokens=num_outputs)

    docs = SimpleDirectoryReader(directory_path).load_data()

    index = GPTVectorStoreIndex.from_documents(docs)

    index.storage_context.persist("storage")

    return index

# Reload the persisted index from ./storage and answer the query
def chatbot(input_text):
    storage_context = StorageContext.from_defaults(persist_dir="storage")
    index = load_index_from_storage(storage_context)
    query_engine = index.as_query_engine()
    response = query_engine.query(input_text)
    return response.response

iface = gr.Interface(fn=chatbot, inputs=gr.Textbox(lines=7, label="Enter your text"), outputs="text", title="Custom LLM Bot")

iface.launch(share=True)
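Note that nothing in the script above ever calls construct_index, so on a fresh machine the storage directory that chatbot reads from will not exist yet. A minimal guard you can place before iface.launch (assuming, as in the tutorial, that your documents sit in a folder named docs next to app.py) is:

```python
import os

# Build and persist the index on the first run only;
# later runs reuse the existing ./storage directory.
if not os.path.isdir("storage"):
    construct_index("docs")
```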