Issues: ollama/ollama
- #4926: fail to upload models due to max try [bug], opened Jun 8, 2024 by taozhiyuai
- #4924: Dictionary learning and concept extraction for model tuning [feature request], opened Jun 8, 2024 by IgorAlexey
- #4923: Is the server that `ollama download` pulls from open source? Downloads are slow from inside China; I'd like to set up something similar. [feature request], opened Jun 8, 2024 by papandadj
- #4920: Update docs/tutorials/windows.md for Windows Uninstall [feature request], opened Jun 7, 2024 by Suvoo
- #4918: /api/show params showing as deformed string [bug], opened Jun 7, 2024 by royjhan
- #4914: Request Ollama Web API to fetch all models data in the remote [feature request], opened Jun 7, 2024 by edwinjhlee
- #4913: Ollama model download fails on Kubernetes [bug], opened Jun 7, 2024 by samyIO
- #4912: Error: llama runner process has terminated: signal: aborted (core dumped) [bug], opened Jun 7, 2024 by mikestut
- #4908: integration with microsoft/aici for deterministic prompt outputs [feature request], opened Jun 7, 2024 by bvelker
- #4904: Need Support: Local Model Parameters Override Like Llama.cpp, opened Jun 7, 2024 by DirtyKnightForVi
- #4903: Intel/neural-chat-7b-v3 prompts itself [bug], opened Jun 7, 2024 by 0x2E16CF0F
- #4902: Performance issue with CPU-only inference, from 0.1.39 to the latest version to date [bug], opened Jun 7, 2024 by raymond-infinitecode
- #4901: Error: pull model manifest: ssh: no key found [bug], opened Jun 7, 2024 by 674316
- #4899: Failed to get max tokens for LLM with name qwen2:7b-instruct-fp16 with ollama [bug], opened Jun 7, 2024 by wenlong1234
- #4895: Add "use_mmap" to environment variable [feature request], opened Jun 7, 2024 by sisi399
- #4894: Feature: Allow setting OLLAMA_NUM_PARALLEL per model [feature request], opened Jun 7, 2024 by sammcj
- #4893: Error: error loading llama server" error="llama runner process has terminated: exit status 0xc0000409 [bug], opened Jun 7, 2024 by Hsiayukoo