# How to deploy a local AI model

<img src="../../assets/ia/ia.png">
## Prerequisites
- [Docker & compose](https://git.legaragenumerique.fr:GARAGENUM/docker-install) :whale:
```bash
git clone https://git.legaragenumerique.fr:GARAGENUM/docker-install.git
cd docker-install/
sudo ./docker-install.sh
```
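
Once the script finishes, a quick sanity check (not part of the original script) is to confirm that both the Docker engine and the Compose plugin respond:

```bash
# confirm the Docker engine and the Compose plugin are installed
docker --version
docker compose version
```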
## Installation
- Deploy the model:
```bash
nano docker-compose.yml
```
- Copy the following code (start-up commands are sketched after the block):
```yml
# a top-level services: key is required for the file to work with Compose v2
services:
  ollama:
    image: ollama/ollama
    container_name: ollama
    restart: always
    ports:
      # Ollama's HTTP API listens on port 11434
      - 11434:11434
    volumes:
      # persist downloaded models on the host
      - ./ollama:/root/.ollama
```
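
The guide does not show the start-up step explicitly, but with this compose file saved you would bring the container up and pull the model that the Continue configuration below refers to (`llama3` here):

```bash
# start the Ollama container in the background
docker compose up -d

# download the llama3 model inside the running container
docker exec -it ollama ollama pull llama3

# optional: list the models Ollama now has available
docker exec -it ollama ollama list
```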
- Install the `continue` plugin in VSCodium (a CLI alternative is sketched after the screenshots):
![continue](../../assets/ia/continue-1.png)
![continue](../../assets/ia/continue-2.png)
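
If you prefer the command line, VSCodium can also install the extension directly (assuming the extension ID is `Continue.continue`, as published on the Open VSX registry):

```bash
# install the Continue extension via VSCodium's CLI
codium --install-extension Continue.continue
```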
## Configuration
- Edit Continue's `config.json` (usually found at `~/.continue/config.json`):
```json
{
  "models": [
    {
      "title": "Ollama",
      "provider": "ollama",
      "model": "llama3:latest",
      "apiBase": "http://localhost:11434/"
    }
  ]
}
```
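
Before going back to the editor, you can verify that the `apiBase` endpoint actually answers; this is the same API the plugin will call (a quick check, not part of the original guide):

```bash
# ask the local Ollama API for a short, non-streamed completion
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Say hello", "stream": false}'
```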
![continue](../../assets/ia/continue-3.png)