
Bad environment variable in README #100

Open
AIUser2324 opened this issue Aug 2, 2024 · 7 comments
Comments

@AIUser2324

The instructions for using a self-hosted LLM in the README file say that you need to set the OPENAI_API_BASE variable. This should be OPENAI_BASE_URL to work properly.
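A minimal sketch of the fix being reported (the endpoint URL below is a placeholder; Ollama's default OpenAI-compatible endpoint is assumed, so substitute your own server's URL):

```shell
# Variable that actually took effect in this setup.
# The URL is a placeholder: Ollama's default OpenAI-compatible endpoint is assumed.
export OPENAI_BASE_URL="http://localhost:11434/v1"

# Variable the README currently documents, which did not take effect here:
# export OPENAI_API_BASE="http://localhost:11434/v1"

echo "Base URL: $OPENAI_BASE_URL"
```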

@aantn
Contributor

aantn commented Aug 2, 2024

Hey, which holmes version are you running and with which LLM model?

The latest holmes version uses LiteLLM under the hood, which reads OPENAI_API_BASE according to the docs.

@AIUser2324
Author

I used the latest brew installation. holmes version gives me: HEAD -> master-fd086e5. I used the Llama3.1 model from Ollama.

@aantn
Contributor

aantn commented Aug 2, 2024

Thanks, you're definitely on the latest version using LiteLLM.

What was the exact --model flag that you passed holmes?

@AIUser2324
Author

AIUser2324 commented Aug 2, 2024

When I set the environment variable with export OPENAI_API_BASE=<url-here>

and I then call holmes ask --model=llama3.1:8b-instruct-q8_0 "what pods are unhealthy and why?"

I get the following error: NotFoundError: Error code: 404 - {'error': {'message': 'The model `llama3.1:8b-instruct-q8_0` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

When I go with --model=openai/llama3.1:8b-instruct-q8_0 I get: BadRequestError: Error code: 400 - {'error': {'message': 'invalid model ID', 'type': 'invalid_request_error', 'param': None, 'code': None}}

@aantn
Contributor

aantn commented Aug 4, 2024

Got it, thanks. And to clarify, this works if you go with OPENAI_BASE_URL?

@AIUser2324
Author

Yes, correct. If I use OPENAI_BASE_URL it works.
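For readers skimming the thread, the combination reported as working can be summarized as follows (the endpoint URL is a placeholder for a local Ollama server; the model tag is the one from the original report):

```shell
# Working setup reported in this thread (URL is a placeholder, not confirmed here):
export OPENAI_BASE_URL="http://localhost:11434/v1"
holmes ask --model=llama3.1:8b-instruct-q8_0 "what pods are unhealthy and why?"

# Setting OPENAI_API_BASE instead (per the README) produced the 404 quoted above.
```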

@aantn
Contributor

aantn commented Sep 10, 2024

@AIUser2324, do either of the updated instructions for Ollama here work for you? https://github.com/robusta-dev/holmesgpt/pull/133/files#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5

On my side, Holmes is able to connect in both cases, but I'm not getting good results. Perhaps that is because I'm not using the instruct model?

In any event, are you able to get decent results with either:

  1. The steps you mentioned in your original post
  2. The new instructions (linked in the PR)
