🏠 HTTP 301 - moved permanently to feddit.org 🏠

  • 1 Post
  • 28 Comments
Joined 1 year ago
Cake day: June 18th, 2023


  • You can get different results, sometimes better, sometimes worse, and most of the time differently phrased (e.g. the Gemma models by Google like to make bullet lists and sometimes tell me where they got their information from). There are models specifically trained / finetuned for different tasks (mostly coding, but also writing stories, answering medical questions, describing what is in a picture, speaking different languages, running on smaller / bigger hardware, etc.). Have a look at Ollama's library of models, which is outright tiny compared to e.g. Hugging Face.

    Also, I don't trust OpenAI and others to keep company data or code snippets from work that I feed them confidential.
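
    If you want to experiment, pulling a model from the Ollama library is a single command. A minimal sketch (the model names are examples of what the library offers, check https://ollama.com/library for what is actually available and how big it is):

    # pull a few models tuned for different tasks (names are examples)
    ollama pull gemma        # general chat model by Google, likes bullet lists
    ollama pull codellama    # finetuned for coding
    # then chat with one of them interactively
    ollama run gemma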


  • If you're lucky you just set it to the wrong version; mine uses 10.3.0 (see below).

    I tried running the Docker container first as well, but gave up since there are separate versions for CUDA and ROCm, which come packaged with it as well and therefore make the image unnecessarily big.

    I am running it on Fedora natively. I installed it with the setup script from the top of the docs:

    curl -fsSL https://ollama.com/install.sh | sh

    After that I created a service file (also described in the linked docs) so that it starts at boot time (so I can just boot my PC and forget it, without needing to log in).

    The crucial part for the GPU in question (RX 6700 XT) was this line under the [Service] section:

    Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"

    As you stated, this sets the environment variable for ROCm. Also, to be able to reach it from outside of localhost (for my server):

    Environment="OLLAMA_HOST=0.0.0.0"




  • passepartout@feddit.de to Memes@lemmy.ml · The signs are aligned.

    It is great that you were curious enough to bust the soy myth.

    There is no moral consumption of animal products. Many people a lot smarter than both of us (or at least more dedicated, better funded, or doing this as their job) have done the research and come to this conclusion as well.

    Most people who oppose this fact feel attacked at first, because it can't coexist with their own behaviour. It is the same as with every debate where emotion gets brought in as reasoning (e.g. refugees, climate change, homeopathy, etc.).


  • passepartout@feddit.de to Selfhosted@lemmy.world · Self hosted LLM

    Yes, since we have similar GPUs you could try the following to run it in a Docker container on Linux, taken from here and slightly modified:

    #!/bin/bash
    
    model=microsoft/phi-2
    # share a volume with the Docker container to avoid downloading weights every run
    volume=<path-to-your-data-directory>/data
    
    # HSA_OVERRIDE_GFX_VERSION makes ROCm treat the card as gfx1030,
    # PYTORCH_ROCM_ARCH is the actual architecture of the RX 6700 XT,
    # and --device mounts the GPU into the container
    docker run \
      -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
      -e PYTORCH_ROCM_ARCH="gfx1031" \
      --device /dev/kfd --device /dev/dri \
      --shm-size 1g \
      -p 8080:80 \
      -v "$volume":/data \
      ghcr.io/huggingface/text-generation-inference:1.4-rocm \
      --model-id "$model"
    

    Note how the ROCm version has a different tag and that you need to mount your GPU devices into the container. The two environment variables are specific to my (and maybe also your) GPU architecture. It will need a while to download the weights though.
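
    Once the container is up, you can check that it answers requests. A minimal sketch against TGI's generate endpoint on the port mapped above (prompt and parameters are just examples):

    curl http://localhost:8080/generate \
      -X POST \
      -H 'Content-Type: application/json' \
      -d '{"inputs": "What is Deep Learning?", "parameters": {"max_new_tokens": 50}}'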


  • Hugging Face TGI is just a piece of software serving the models, like gpt4all. Here is a list of models officially supported by TGI, although they state that you can try different ones as well. You follow the link and look at the files section. The size of the model files (safetensors or pickle binaries) gives a good estimate of how much VRAM you will need. Sadly this is more than most consumer graphics cards have, except for santacoder and microsoft phi.
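
    A rough back-of-the-envelope estimate (my own rule of thumb, not from TGI's docs): VRAM needed ≈ parameter count × bytes per weight, plus some overhead for activations and the KV cache. For example:

    # microsoft/phi-2 has ~2.7B parameters; at fp16 (2 bytes per weight) that is roughly:
    python3 -c 'print(2.7e9 * 2 / 1e9, "GB of weights")'   # ~5.4 GB, plus overhead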





  • It's true that you shouldn't open ports to the internet. If you still want your services to be accessible from outside the local network, you can install a WireGuard server on your thin client that has access to the services you want. And if you really want to harden it, you can block the WireGuard clients from SSH and other admin things.

    You will need to open one port on the router to your WireGuard server though. WireGuard is UDP and silently drops packets that don't come from an authenticated peer, so attackers will not even know there is an open port on your router.
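
    A minimal sketch of what the server side of such a setup could look like (interface name, addresses, port and keys are placeholders, not from the original post):

    # /etc/wireguard/wg0.conf on the thin client
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>
    
    [Peer]
    # one block like this per client
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32

    On the router you would then forward UDP port 51820 to the thin client; hardening (e.g. blocking SSH from the VPN subnet) would be done with the firewall on the thin client itself.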

    Edit: Tailscale and ZeroTier are good external solutions for this as well, without needing to open a port at all.