Compare commits
12 Commits
| Author | SHA1 | Date |
|---|---|---|
|  | cb7e573d26 |  |
|  | 6ef0ad2f93 |  |
|  | 31be0a0b54 |  |
|  | 71ddaa3345 |  |
|  | 775a0b5adb |  |
|  | 1a983e945b |  |
|  | b152359706 |  |
|  | a844f6cc30 |  |
|  | 8e6b5a9bab |  |
|  | cd4695ee68 |  |
|  | 5bd7958f09 |  |
|  | 8fc9170c9c |  |
CHANGELOG.md (42 lines changed)

@@ -1,10 +1,36 @@

Before:

```markdown
### Bug fixes
- fatal error when trying to retrieve compute boot disk if former does not have one
- ignored timeouts
- wrong handling of errors when attaching network interfaces and disks to kvmvm

### New features
- parameter iotune in disk
- migrated to terraform SDKv2
- admin mode (activated by environment variable DECORT_ADMIN_MODE) for resources: account, k8s, image, disk, resgroup, kvmvm, vins
- parameters sep_id and pool in kvmvm
```

After:

```markdown
### New data sources

- decort_disk_snapshot_list
- decort_disk_snapshot
- decort_disk_list_deleted
- decort_disk_list_unattached
- decort_disk_list_types
- decort_disk_list_types_detailed

### New resources

- decort_disk_snapshot

### New features
- add dockerfile for creating an image for the tf provider
- change behaviour of the disk resource: check the disk status during update of the tf state
- add disks block to kvmvm resource

### New articles on wiki

- [Building the terraform provider into an image](https://github.com/rudecs/terraform-provider-decort/wiki/04.05-Сборка-terraform-провайдера-в-образ)
- [Bulk creation of resources. Meta-arguments](https://github.com/rudecs/terraform-provider-decort/wiki/05.04-Массовое-создание-ресурсов.-Мета-аргументы)
- [Deleting resources](https://github.com/rudecs/terraform-provider-decort/wiki/05.05-Удаление-ресурсов)
- [Managing a disk snapshot](https://github.com/rudecs/terraform-provider-decort/wiki/07.01.19-Resource-функция-decort_disk_snapshot-управление-снимком-диска)
- [Getting the list of disk types](https://github.com/rudecs/terraform-provider-decort/wiki/06.01.39-Data-функция-decort_disk_list_types-получение-списка-типов-диска)
- [Extended information about supported disk types](https://github.com/rudecs/terraform-provider-decort/wiki/06.01.40-Data-функция-decort_disk_list_types_detailed-расширенное-получение-информации-о-поддерживаемых-типах-дисков)
- [Getting information about deleted disks](https://github.com/rudecs/terraform-provider-decort/wiki/06.01.41-Data-функция-decort_disk_list_deleted-получение-информации-об-удаленных-дисках)
- [Getting information about unattached disks](https://github.com/rudecs/terraform-provider-decort/wiki/06.01.42-Data-функция-decort_disk_list_unattached-получение-информации-о-неподключенных-дисках)
- [Getting the list of disk snapshots](https://github.com/rudecs/terraform-provider-decort/wiki/06.01.43-Data-функция-decort_disk_snapshot_list-получение-списка-снимков-состояния-диска)
- [Getting information about a disk snapshot](https://github.com/rudecs/terraform-provider-decort/wiki/06.01.44-Data-функция-decort_disk_snapshot-получение-информации-о-снимке-состояния)

### Updated articles

- [Managing disk resources](https://github.com/rudecs/terraform-provider-decort/wiki/07.01.03-Resource-функция-decort_disk-управление-дисковыми-ресурсами)
- [Managing KVM-based virtual servers](https://github.com/rudecs/terraform-provider-decort/wiki/07.01.01-Resource-функция-decort_kvmvm-управление-виртуальными-машинами-на-базе-KVM)
```
Dockerfile (new file, 10 lines)

```dockerfile
FROM docker.io/hashicorp/terraform:latest

WORKDIR /opt/decort/tf/
COPY provider.tf ./
COPY terraform-provider-decort ./terraform.d/plugins/digitalenergy.online/decort/decort/3.1.1/linux_amd64/
RUN terraform init

WORKDIR /tf
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh", "/bin/terraform"]
```
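A possible way to build and run this image, mirroring the Makefile's `image` target below; mounting the working directory at `/tf` is an assumption about the intended usage (the entrypoint copies the pre-initialized provider files into `/tf` and then execs terraform):

```bash
# build the linux provider binary and the image (same steps as the Makefile's `image` target)
GOOS=linux GOARCH=amd64 go build -o terraform-provider-decort ./cmd/decort/
docker build . -t rudecs/tf:3.1.1

# run terraform from the image against the configuration in the current directory (illustrative)
docker run --rm -it -v "$(pwd)":/tf rudecs/tf:3.1.1 init
```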
Makefile (20 lines changed)

```diff
@@ -4,15 +4,31 @@ NAMESPACE=decort
 NAME=terraform-provider-decort
 #BINARY=terraform-provider-${NAME}
 BINARY=${NAME}.exe
+WORKPATH= ./examples/terraform.d/plugins/${HOSTNAME}/${NAMESPACE}/${NAMESPACE}/${VERSION}/${OS_ARCH}
 MAINPATH = ./cmd/decort/
-VERSION=1.1
+VERSION=3.1.1
 #OS_ARCH=darwin_amd64
-OS_ARCH=windows_amd64
+#OS_ARCH=windows_amd64
+#OS_ARCH=linux_amd64

 default: install

+image:
+	GOOS=linux GOARCH=amd64 go build -o terraform-provider-decort ./cmd/decort/
+	docker build . -t rudecs/tf:3.1.1
+	rm terraform-provider-decort
+
+lint:
+	golangci-lint run --timeout 600s
+
+st:
+	go build -o ${BINARY} ${MAINPATH}
+	cp ${BINARY} ${WORKPATH}
+	rm ${BINARY}
+
 build:
 	go build -o ${BINARY} ${MAINPATH}

 release:
 	GOOS=darwin GOARCH=amd64 go build -o ./bin/${BINARY}_${VERSION}_darwin_amd64
 	GOOS=freebsd GOARCH=386 go build -o ./bin/${BINARY}_${VERSION}_freebsd_386
```
README.md (60 lines changed)

@@ -1,22 +1,27 @@
# terraform-provider-decort

Terraform provider for the Digital Energy Cloud Orchestration Technology (DECORT) platform

Note: provider version 3.x is designed for DECORT API 3.8.x.
For older API versions you can use:

- DECORT API 3.7.x - provider version rc-1.25
- DECORT API 3.6.x - provider version rc-1.10
- DECORT API before 3.6.0 - the terraform DECS provider (https://github.com/rudecs/terraform-provider-decs)

## Working modes

The provider can work in two modes:

- user mode,
- administrator mode.

Use the DECORT_ADMIN_MODE flag to switch between the modes.
Project wiki: https://github.com/rudecs/terraform-provider-decort/wiki

## Provider features

- Work with Compute instances,
- Work with disks,
- Work with k8s,
- Work with images,
- Work with resource groups,

@@ -29,18 +34,23 @@
- Work with vgpu,
- Work with bservice,
- Work with extnets,
- Work with locations,
- Work with load balancers.

Project wiki: https://github.com/rudecs/terraform-provider-decort/wiki

## Getting started

You can start in one of two ways:

1. Installation from pre-built packages.
2. Manual installation.

### Installation from pre-built packages

1. Download and install terraform from: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
2. Create a `main.tf` file and add the following block to it.

```terraform
provider "decort" {
  authenticator = "oauth2"
@@ -51,45 +61,62 @@
  allow_unverified_ssl = true
}
```

3. Run the command

```
terraform init
```

The provider will be installed on your machine automatically from the terraform registry.

### Manual installation

1. Download and install Go from: https://go.dev/dl/
2. Download and install terraform from: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
3. Clone the provider repository:

```bash
git clone https://github.com/rudecs/terraform-provider-decort.git
```

4. Change into the cloned provider directory and run

```bash
go build -o terraform-provider-decort
```

If you know how the _makefile_ works, you can adjust the parameters in `Makefile` for your OS and run

```bash
make build
```

5. Place the resulting binary into:
Linux:

```bash
~/.terraform.d/plugins/${host_name}/${namespace}/${type}/${version}/${target}
```

Windows:

```powershell
%APPDATA%\terraform.d\plugins\${host_name}\${namespace}\${type}\${version}\${target}
```

NOTE: on Windows, `%APPDATA%` is the directory where the terraform files will be placed.
Where:

- host_name - name of the host serving the provider, e.g. digitalenergy.online
- namespace - host namespace, e.g. decort
- type - provider type, may match the namespace, e.g. decort
- version - provider version, e.g. 1.2
- target - OS/architecture, e.g. windows_amd64
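For example, assembled from the example values above, the Windows plugin directory resolves to (illustrative version number):

```powershell
%APPDATA%\terraform.d\plugins\digitalenergy.online\decort\decort\1.2\windows_amd64\
```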
6. After that, create a `main.tf` file.
7. Add the following block to it

```terraform
terraform {
  required_providers {
@@ -100,32 +127,39 @@
    }
  }
}
```

The `version` field specifies the provider version.
Required parameter.
Field type - string.
NOTE: the version in this block and the version of the provider placed into the plugin directory must match!

The `source` field holds the path to the repository with the version, in the form:

```bash
${host_name}/${namespace}/${type}
```

NOTE: all parameters must match the path of the repository where the provider binary was placed.
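Put together, a minimal sketch of the whole block using the example values from this guide (the version is illustrative and must match the one you placed into the plugin directory):

```terraform
terraform {
  required_providers {
    decort = {
      source  = "digitalenergy.online/decort/decort"
      version = "1.2"
    }
  }
}
```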
8. Run the command in the console

```bash
terraform init
```

9. If everything went well, there will be no errors.

More details about building the provider: https://learn.hashicorp.com/tutorials/terraform/provider-use?in=terraform/providers

## Usage examples

Usage examples can be found:

- on the project wiki: https://github.com/rudecs/terraform-provider-decort/wiki
- in the `samples` folder

Terraform schemas are available:

- in the `docs` folder

Good work!
README_EN.md (57 lines changed)

@@ -1,22 +1,26 @@
# terraform-provider-decort

Terraform provider for the Digital Energy Cloud Orchestration Technology (DECORT) platform

NOTE: provider 3.x is designed for DECORT API 3.8.x. For older API versions please use:

- DECORT API 3.7.x versions - provider version rc-1.25
- DECORT API 3.6.x versions - provider version rc-1.10
- DECORT API versions prior to 3.6.0 - Terraform DECS provider (https://github.com/rudecs/terraform-provider-decs)

## Working modes

The provider supports two working modes:

- User mode,
- Administrator mode.

Use the DECORT_ADMIN_MODE flag to switch between the modes.
See the user guide at https://github.com/rudecs/terraform-provider-decort/wiki

## Features

- Work with Compute instances,
- Work with disks,
- Work with k8s,
- Work with images,
- Work with resource groups,

@@ -29,21 +33,25 @@
- Work with vgpu,
- Work with bservice,
- Work with extnets,
- Work with locations,
- Work with load balancers.

This provider supports Import operations on pre-existing resources.

See the user guide at https://github.com/rudecs/terraform-provider-decort/wiki

## Get Started

There are two ways to start:

1. Installing via binary packages
2. Manual installation

### Installing via binary packages

1. Download and install terraform: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
2. Create a `main.tf` file and add the following block to it.

```terraform
provider "decort" {
  authenticator = "oauth2"
@@ -54,45 +62,62 @@
  allow_unverified_ssl = true
}
```

3. Run the following command

```
terraform init
```

The provider will be installed on your machine automatically from the terraform registry.

### Manual installation

1. Download and install the Go programming language: https://go.dev/dl/
2. Download and install terraform: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
3. Clone the provider's repository:

```bash
git clone https://github.com/rudecs/terraform-provider-decort.git
```

4. Change into the cloned provider directory and run

```bash
go build -o terraform-provider-decort
```

If you have experience with _makefile_, you can adjust the `Makefile` parameters for your OS and run

```bash
make build
```

5. Move the compiled file to:
Linux:

```bash
~/.terraform.d/plugins/${host_name}/${namespace}/${type}/${version}/${target}
```

Windows:

```powershell
%APPDATA%\terraform.d\plugins\${host_name}\${namespace}\${type}\${version}\${target}
```

NOTE: on Windows, `%APPDATA%` is the directory where the terraform files will be placed.
Example:

- host_name - digitalenergy.online
- namespace - decort
- type - decort
- version - 1.2
- target - windows_amd64

6. After that, create a `main.tf` file.
7. Add the following code block to it

```terraform
terraform {
  required_providers {
@@ -103,18 +128,22 @@
    }
  }
}
```

`version` - field for the provider's version
Required
String
Note: the version in the code block and the version in the plugin directory must be equal!

`source` - path to the repository with the provider's version

```bash
${host_name}/${namespace}/${type}
```

NOTE: all parameters must be equal to the repository path!
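For reference, a minimal sketch of the whole block assembled from the example values above (the version is illustrative and must match your plugin directory):

```terraform
terraform {
  required_providers {
    decort = {
      source  = "digitalenergy.online/decort/decort"
      version = "1.2"
    }
  }
}
```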
8. Execute the command in your terminal

```bash
terraform init
```

@@ -124,10 +153,12 @@
More details about the provider's building process: https://learn.hashicorp.com/tutorials/terraform/provider-use?in=terraform/providers

## Examples and Samples

- Examples: https://github.com/rudecs/terraform-provider-decort/wiki
- Samples: see the `samples` folder in the repository

Terraform schemas:

- see the `docs` folder in the repository

Good work!
docs/data-sources/lb.md (new file, 167 lines)

```markdown
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_lb Data Source - decort"
subcategory: ""
description: |-

---

# decort_lb (Data Source)

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `lb_id` (Number)

### Optional

- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))

### Read-Only

- `backends` (List of Object) (see [below for nested schema](#nestedatt--backends))
- `created_by` (String)
- `created_time` (Number)
- `deleted_by` (String)
- `deleted_time` (Number)
- `desc` (String)
- `dp_api_user` (String)
- `extnet_id` (Number)
- `frontends` (List of Object) (see [below for nested schema](#nestedatt--frontends))
- `gid` (Number)
- `guid` (Number)
- `ha_mode` (Boolean)
- `id` (String) The ID of this resource.
- `image_id` (Number)
- `milestones` (Number)
- `name` (String)
- `primary_node` (List of Object) (see [below for nested schema](#nestedatt--primary_node))
- `rg_id` (Number)
- `rg_name` (String)
- `secondary_node` (List of Object) (see [below for nested schema](#nestedatt--secondary_node))
- `status` (String)
- `tech_status` (String)
- `updated_by` (String)
- `updated_time` (Number)
- `vins_id` (Number)

<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

Optional:

- `default` (String)
- `read` (String)

<a id="nestedatt--backends"></a>
### Nested Schema for `backends`

Read-Only:

- `algorithm` (String)
- `guid` (String)
- `name` (String)
- `server_default_settings` (List of Object) (see [below for nested schema](#nestedobjatt--backends--server_default_settings))
- `servers` (List of Object) (see [below for nested schema](#nestedobjatt--backends--servers))

<a id="nestedobjatt--backends--server_default_settings"></a>
### Nested Schema for `backends.server_default_settings`

Read-Only:

- `downinter` (Number)
- `fall` (Number)
- `guid` (String)
- `inter` (Number)
- `maxconn` (Number)
- `maxqueue` (Number)
- `rise` (Number)
- `slowstart` (Number)
- `weight` (Number)

<a id="nestedobjatt--backends--servers"></a>
### Nested Schema for `backends.servers`

Read-Only:

- `address` (String)
- `check` (String)
- `guid` (String)
- `name` (String)
- `port` (Number)
- `server_settings` (List of Object) (see [below for nested schema](#nestedobjatt--backends--servers--server_settings))

<a id="nestedobjatt--backends--servers--server_settings"></a>
### Nested Schema for `backends.servers.server_settings`

Read-Only:

- `downinter` (Number)
- `fall` (Number)
- `guid` (String)
- `inter` (Number)
- `maxconn` (Number)
- `maxqueue` (Number)
- `rise` (Number)
- `slowstart` (Number)
- `weight` (Number)

<a id="nestedatt--frontends"></a>
### Nested Schema for `frontends`

Read-Only:

- `backend` (String)
- `bindings` (List of Object) (see [below for nested schema](#nestedobjatt--frontends--bindings))
- `guid` (String)
- `name` (String)

<a id="nestedobjatt--frontends--bindings"></a>
### Nested Schema for `frontends.bindings`

Read-Only:

- `address` (String)
- `guid` (String)
- `name` (String)
- `port` (Number)

<a id="nestedatt--primary_node"></a>
### Nested Schema for `primary_node`

Read-Only:

- `backend_ip` (String)
- `compute_id` (Number)
- `frontend_ip` (String)
- `guid` (String)
- `mgmt_ip` (String)
- `network_id` (Number)

<a id="nestedatt--secondary_node"></a>
### Nested Schema for `secondary_node`

Read-Only:

- `backend_ip` (String)
- `compute_id` (Number)
- `frontend_ip` (String)
- `guid` (String)
- `mgmt_ip` (String)
- `network_id` (Number)
```
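A minimal usage sketch for this data source, based on the schema above (the ID and output are illustrative):

```terraform
data "decort_lb" "example" {
  lb_id = 1111 # illustrative load balancer ID
}

output "lb_status" {
  value = data.decort_lb.example.status
}
```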
docs/data-sources/lb_list.md (new file, 175 lines)

```markdown
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_lb_list Data Source - decort"
subcategory: ""
description: |-

---

# decort_lb_list (Data Source)

<!-- schema generated by tfplugindocs -->
## Schema

### Optional

- `includedeleted` (Boolean)
- `page` (Number)
- `size` (Number)
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))

### Read-Only

- `id` (String) The ID of this resource.
- `items` (List of Object) (see [below for nested schema](#nestedatt--items))

<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

Optional:

- `default` (String)
- `read` (String)

<a id="nestedatt--items"></a>
### Nested Schema for `items`

Read-Only:

- `backends` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends))
- `created_by` (String)
- `created_time` (Number)
- `deleted_by` (String)
- `deleted_time` (Number)
- `desc` (String)
- `dp_api_password` (String)
- `dp_api_user` (String)
- `extnet_id` (Number)
- `frontends` (List of Object) (see [below for nested schema](#nestedobjatt--items--frontends))
- `gid` (Number)
- `guid` (Number)
- `ha_mode` (Boolean)
- `image_id` (Number)
- `lb_id` (Number)
- `milestones` (Number)
- `name` (String)
- `primary_node` (List of Object) (see [below for nested schema](#nestedobjatt--items--primary_node))
- `rg_id` (Number)
- `rg_name` (String)
- `secondary_node` (List of Object) (see [below for nested schema](#nestedobjatt--items--secondary_node))
- `status` (String)
- `tech_status` (String)
- `updated_by` (String)
- `updated_time` (Number)
- `vins_id` (Number)

<a id="nestedobjatt--items--backends"></a>
### Nested Schema for `items.backends`

Read-Only:

- `algorithm` (String)
- `guid` (String)
- `name` (String)
- `server_default_settings` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--server_default_settings))
- `servers` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--servers))

<a id="nestedobjatt--items--backends--server_default_settings"></a>
### Nested Schema for `items.backends.server_default_settings`

Read-Only:

- `downinter` (Number)
- `fall` (Number)
- `guid` (String)
- `inter` (Number)
- `maxconn` (Number)
- `maxqueue` (Number)
- `rise` (Number)
- `slowstart` (Number)
- `weight` (Number)

<a id="nestedobjatt--items--backends--servers"></a>
### Nested Schema for `items.backends.servers`

Read-Only:

- `address` (String)
- `check` (String)
- `guid` (String)
- `name` (String)
- `port` (Number)
- `server_settings` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--servers--server_settings))

<a id="nestedobjatt--items--backends--servers--server_settings"></a>
### Nested Schema for `items.backends.servers.server_settings`

Read-Only:

- `downinter` (Number)
- `fall` (Number)
- `guid` (String)
- `inter` (Number)
- `maxconn` (Number)
- `maxqueue` (Number)
- `rise` (Number)
- `slowstart` (Number)
- `weight` (Number)

<a id="nestedobjatt--items--frontends"></a>
### Nested Schema for `items.frontends`

Read-Only:

- `backend` (String)
- `bindings` (List of Object) (see [below for nested schema](#nestedobjatt--items--frontends--bindings))
- `guid` (String)
- `name` (String)

<a id="nestedobjatt--items--frontends--bindings"></a>
### Nested Schema for `items.frontends.bindings`

Read-Only:

- `address` (String)
- `guid` (String)
- `name` (String)
- `port` (Number)

<a id="nestedobjatt--items--primary_node"></a>
### Nested Schema for `items.primary_node`

Read-Only:

- `backend_ip` (String)
- `compute_id` (Number)
- `frontend_ip` (String)
- `guid` (String)
- `mgmt_ip` (String)
- `network_id` (Number)

<a id="nestedobjatt--items--secondary_node"></a>
### Nested Schema for `items.secondary_node`

Read-Only:

- `backend_ip` (String)
- `compute_id` (Number)
- `frontend_ip` (String)
- `guid` (String)
- `mgmt_ip` (String)
- `network_id` (Number)
```
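A sketch of listing load balancers with this data source (page/size values are illustrative):

```terraform
data "decort_lb_list" "all" {
  includedeleted = false
  page           = 1
  size           = 50
}

output "lb_names" {
  value = data.decort_lb_list.all.items[*].name
}
```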
docs/data-sources/lb_list_deleted.md (new file, 174 lines)

```markdown
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_lb_list_deleted Data Source - decort"
subcategory: ""
description: |-

---

# decort_lb_list_deleted (Data Source)

<!-- schema generated by tfplugindocs -->
## Schema

### Optional

- `page` (Number)
- `size` (Number)
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))

### Read-Only

- `id` (String) The ID of this resource.
- `items` (List of Object) (see [below for nested schema](#nestedatt--items))

<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

Optional:

- `default` (String)
- `read` (String)

<a id="nestedatt--items"></a>
### Nested Schema for `items`

Read-Only:

- `backends` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends))
- `created_by` (String)
- `created_time` (Number)
- `deleted_by` (String)
- `deleted_time` (Number)
- `desc` (String)
- `dp_api_password` (String)
- `dp_api_user` (String)
- `extnet_id` (Number)
- `frontends` (List of Object) (see [below for nested schema](#nestedobjatt--items--frontends))
- `gid` (Number)
- `guid` (Number)
- `ha_mode` (Boolean)
- `image_id` (Number)
- `lb_id` (Number)
- `milestones` (Number)
- `name` (String)
- `primary_node` (List of Object) (see [below for nested schema](#nestedobjatt--items--primary_node))
- `rg_id` (Number)
- `rg_name` (String)
- `secondary_node` (List of Object) (see [below for nested schema](#nestedobjatt--items--secondary_node))
- `status` (String)
- `tech_status` (String)
- `updated_by` (String)
- `updated_time` (Number)
- `vins_id` (Number)

<a id="nestedobjatt--items--backends"></a>
### Nested Schema for `items.backends`

Read-Only:

- `algorithm` (String)
- `guid` (String)
- `name` (String)
- `server_default_settings` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--server_default_settings))
- `servers` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--servers))

<a id="nestedobjatt--items--backends--server_default_settings"></a>
### Nested Schema for `items.backends.server_default_settings`

Read-Only:

- `downinter` (Number)
- `fall` (Number)
- `guid` (String)
- `inter` (Number)
- `maxconn` (Number)
- `maxqueue` (Number)
- `rise` (Number)
- `slowstart` (Number)
- `weight` (Number)

<a id="nestedobjatt--items--backends--servers"></a>
### Nested Schema for `items.backends.servers`

Read-Only:

- `address` (String)
- `check` (String)
- `guid` (String)
- `name` (String)
- `port` (Number)
- `server_settings` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--servers--server_settings))

<a id="nestedobjatt--items--backends--servers--server_settings"></a>
### Nested Schema for `items.backends.servers.server_settings`

Read-Only:

- `downinter` (Number)
- `fall` (Number)
- `guid` (String)
- `inter` (Number)
- `maxconn` (Number)
- `maxqueue` (Number)
- `rise` (Number)
- `slowstart` (Number)
- `weight` (Number)

<a id="nestedobjatt--items--frontends"></a>
### Nested Schema for `items.frontends`

Read-Only:

- `backend` (String)
- `bindings` (List of Object) (see [below for nested schema](#nestedobjatt--items--frontends--bindings))
- `guid` (String)
- `name` (String)

<a id="nestedobjatt--items--frontends--bindings"></a>
### Nested Schema for `items.frontends.bindings`

Read-Only:

- `address` (String)
- `guid` (String)
- `name` (String)
- `port` (Number)

<a id="nestedobjatt--items--primary_node"></a>
### Nested Schema for `items.primary_node`

Read-Only:

- `backend_ip` (String)
- `compute_id` (Number)
- `frontend_ip` (String)
- `guid` (String)
- `mgmt_ip` (String)
- `network_id` (Number)

<a id="nestedobjatt--items--secondary_node"></a>
### Nested Schema for `items.secondary_node`

Read-Only:

- `backend_ip` (String)
- `compute_id` (Number)
- `frontend_ip` (String)
- `guid` (String)
- `mgmt_ip` (String)
- `network_id` (Number)
```
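The deleted-list variant is used the same way, minus `includedeleted` (sketch, illustrative values):

```terraform
data "decort_lb_list_deleted" "recycled" {
  page = 1
  size = 50
}
```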
docs/resources/image_virtual.md (new file, 87 lines)

```markdown
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_image_virtual Resource - decort"
subcategory: ""
description: |-

---

# decort_image_virtual (Resource)

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `link_to` (Number) ID of real image to link this virtual image to upon creation
- `name` (String) Name of the rescue disk

### Optional

- `permanently` (Boolean) whether to completely delete the image
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))

### Read-Only

- `account_id` (Number)
- `acl` (String)
- `architecture` (String)
- `boot_type` (String)
- `bootable` (Boolean)
- `ckey` (String)
- `compute_ci_id` (Number)
- `deleted_time` (String)
- `desc` (String)
- `drivers` (List of String)
- `enabled` (Boolean)
- `gid` (Number)
- `guid` (Number)
- `history` (List of Object) (see [below for nested schema](#nestedatt--history))
- `hot_resize` (Boolean)
- `id` (String) The ID of this resource.
- `image_id` (Number) Image id
- `image_name` (String)
- `last_modified` (Number)
- `milestones` (Number)
- `password` (String)
- `pool_name` (String)
- `provider_name` (String)
- `purge_attempts` (Number)
- `res_id` (String)
- `rescuecd` (Boolean)
- `sep_id` (Number)
- `shared_with` (List of Number)
- `size` (Number)
- `status` (String)
- `tech_status` (String)
- `type` (String)
- `unc_path` (String)
- `username` (String)
- `version` (String)

<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

Optional:

- `create` (String)
- `default` (String)
- `delete` (String)
- `read` (String)
- `update` (String)

<a id="nestedatt--history"></a>
### Nested Schema for `history`

Read-Only:

- `guid` (String)
- `id` (Number)
- `timestamp` (Number)
```
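A minimal sketch of creating a virtual image from the schema above (`link_to` must point at an existing real image; the values are illustrative):

```terraform
resource "decort_image_virtual" "example" {
  name    = "rescue-virtual-image" # illustrative name
  link_to = 2222                   # ID of an existing real image (illustrative)
}
```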
```diff
@@ -29,8 +29,12 @@ description: |-
 - `cloud_init` (String) Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases.
 - `description` (String) Optional text description of this compute instance.
+- `detach_disks` (Boolean)
 - `extra_disks` (Set of Number) Optional list of IDs of extra disks to attach to this compute. You may specify several extra disks.
+- `ipa_type` (String) compute purpose
+- `is` (String) system name
 - `network` (Block Set, Max: 8) Optional network connection(s) for this compute. You may specify several network blocks, one for each connection. (see [below for nested schema](#nestedblock--network))
+- `permanently` (Boolean)
 - `pool` (String) Pool to use if sepId is set, can be also empty if needed to be chosen by system.
 - `sep_id` (Number) ID of SEP to create bootDisk on. Uses image's sepId if not set.
 - `started` (Boolean) Is compute started.
```
docs/resources/lb.md (new file, 176 lines)

```markdown
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_lb Resource - decort"
subcategory: ""
description: |-

---

# decort_lb (Resource)

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `extnet_id` (Number)
- `name` (String)
- `rg_id` (Number)
- `start` (Boolean)
- `vins_id` (Number)

### Optional

- `config_reset` (Boolean)
- `desc` (String)
- `enable` (Boolean)
- `permanently` (Boolean)
- `restart` (Boolean)
- `restore` (Boolean)
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))

### Read-Only

- `backends` (List of Object) (see [below for nested schema](#nestedatt--backends))
- `created_by` (String)
- `created_time` (Number)
- `deleted_by` (String)
- `deleted_time` (Number)
- `dp_api_user` (String)
- `frontends` (List of Object) (see [below for nested schema](#nestedatt--frontends))
- `gid` (Number)
- `guid` (Number)
- `ha_mode` (Boolean)
- `id` (String) The ID of this resource.
- `image_id` (Number)
- `lb_id` (Number)
- `milestones` (Number)
- `primary_node` (List of Object) (see [below for nested schema](#nestedatt--primary_node))
- `rg_name` (String)
- `secondary_node` (List of Object) (see [below for nested schema](#nestedatt--secondary_node))
- `status` (String)
- `tech_status` (String)
- `updated_by` (String)
- `updated_time` (Number)

<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

Optional:

- `create` (String)
- `default` (String)
- `delete` (String)
- `read` (String)
- `update` (String)

<a id="nestedatt--backends"></a>
### Nested Schema for `backends`

Read-Only:

- `algorithm` (String)
- `guid` (String)
- `name` (String)
- `server_default_settings` (List of Object) (see [below for nested schema](#nestedobjatt--backends--server_default_settings))
- `servers` (List of Object) (see [below for nested schema](#nestedobjatt--backends--servers))

<a id="nestedobjatt--backends--server_default_settings"></a>
### Nested Schema for `backends.server_default_settings`

Read-Only:

- `downinter` (Number)
- `fall` (Number)
- `guid` (String)
- `inter` (Number)
- `maxconn` (Number)
- `maxqueue` (Number)
- `rise` (Number)
- `slowstart` (Number)
- `weight` (Number)

<a id="nestedobjatt--backends--servers"></a>
### Nested Schema for `backends.servers`

Read-Only:

- `address` (String)
- `check` (String)
- `guid` (String)
- `name` (String)
- `port` (Number)
- `server_settings` (List of Object) (see [below for nested schema](#nestedobjatt--backends--servers--server_settings))

<a id="nestedobjatt--backends--servers--server_settings"></a>
### Nested Schema for `backends.servers.server_settings`

Read-Only:

- `downinter` (Number)
- `fall` (Number)
- `guid` (String)
- `inter` (Number)
- `maxconn` (Number)
- `maxqueue` (Number)
- `rise` (Number)
- `slowstart` (Number)
- `weight` (Number)

<a id="nestedatt--frontends"></a>
### Nested Schema for `frontends`

Read-Only:

- `backend` (String)
- `bindings` (List of Object) (see [below for nested schema](#nestedobjatt--frontends--bindings))
- `guid` (String)
- `name` (String)

<a id="nestedobjatt--frontends--bindings"></a>
### Nested Schema for `frontends.bindings`

Read-Only:

- `address` (String)
- `guid` (String)
- `name` (String)
- `port` (Number)

<a id="nestedatt--primary_node"></a>
### Nested Schema for `primary_node`

Read-Only:

- `backend_ip` (String)
- `compute_id` (Number)
- `frontend_ip` (String)
- `guid` (String)
- `mgmt_ip` (String)
- `network_id` (Number)

<a id="nestedatt--secondary_node"></a>
### Nested Schema for `secondary_node`

Read-Only:

- `backend_ip` (String)
- `compute_id` (Number)
- `frontend_ip` (String)
- `guid` (String)
- `mgmt_ip` (String)
- `network_id` (Number)
```
docs/resources/lb_backend.md
Normal file
88
docs/resources/lb_backend.md
Normal file
@@ -0,0 +1,88 @@
|
|||||||
|
---
|
||||||
|
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||||
|
page_title: "decort_lb_backend Resource - decort"
|
||||||
|
subcategory: ""
|
||||||
|
description: |-
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
# decort_lb_backend (Resource)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
<!-- schema generated by tfplugindocs -->
|
||||||
|
## Schema
|
||||||
|
|
||||||
|
### Required
|
||||||
|
|
||||||
|
- `lb_id` (Number) ID of the LB instance to backendCreate
|
||||||
|
- `name` (String) Must be unique among all backends of this LB - name of the new backend to create
|
||||||
|
|
||||||
|
### Optional
|
||||||
|
|
||||||
|
- `algorithm` (String)
|
||||||
|
- `downinter` (Number)
|
||||||
|
- `fall` (Number)
|
||||||
|
- `inter` (Number)
|
||||||
|
- `maxconn` (Number)
|
||||||
|
- `maxqueue` (Number)
|
||||||
|
- `rise` (Number)
|
||||||
|
- `servers` (Block List) (see [below for nested schema](#nestedblock--servers))
|
||||||
|
- `slowstart` (Number)
|
||||||
|
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||||
|
- `weight` (Number)
|
||||||
|
|
||||||
|
### Read-Only
|
||||||
|
|
||||||
|
- `guid` (String)
|
||||||
|
- `id` (String) The ID of this resource.
|
||||||
|
|
||||||
|
<a id="nestedblock--servers"></a>
|
||||||
|
### Nested Schema for `servers`
|
||||||
|
|
||||||
|
Optional:
|
||||||
|
|
||||||
|
- `address` (String)
|
||||||
|
- `check` (String)
|
||||||
|
- `name` (String)
|
||||||
|
- `port` (Number)
|
||||||
|
- `server_settings` (Block List) (see [below for nested schema](#nestedblock--servers--server_settings))
|
||||||
|
|
||||||
|
Read-Only:
|
||||||
|
|
||||||
|
- `guid` (String)
|
||||||
|
|
||||||
|
<a id="nestedblock--servers--server_settings"></a>
|
||||||
|
### Nested Schema for `servers.server_settings`
|
||||||
|
|
||||||
|
Optional:
|
||||||
|
|
||||||
|
- `downinter` (Number)
|
||||||
|
- `fall` (Number)
|
||||||
|
- `inter` (Number)
|
||||||
|
- `maxconn` (Number)
|
||||||
|
- `maxqueue` (Number)
|
||||||
|
- `rise` (Number)
|
||||||
|
- `slowstart` (Number)
|
||||||
|
- `weight` (Number)
|
||||||
|
|
||||||
|
Read-Only:
|
||||||
|
|
||||||
|
- `guid` (String)
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
<a id="nestedblock--timeouts"></a>
|
||||||
|
### Nested Schema for `timeouts`
|
||||||
|
|
||||||
|
Optional:
|
||||||
|
|
||||||
|
- `create` (String)
|
||||||
|
- `default` (String)
|
||||||
|
- `delete` (String)
|
||||||
|
- `read` (String)
|
||||||
|
- `update` (String)
|
||||||
|
|
||||||
|
|
||||||
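A sketch of a backend attached to the load balancer above; the `algorithm` value is an assumption, since the schema does not list the accepted options:

```terraform
resource "decort_lb_backend" "web" {
  lb_id     = decort_lb.example.lb_id
  name      = "web-backend"
  algorithm = "roundrobin" # assumed value; not taken from this schema
}
```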
docs/resources/lb_backend_server.md (new file, 55 lines)

```markdown
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_lb_backend_server Resource - decort"
subcategory: ""
description: |-

---

# decort_lb_backend_server (Resource)

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `address` (String) IP address of the server.
- `backend_name` (String) Must be unique among all backends of this LB - name of the new backend to create
- `lb_id` (Number) ID of the LB instance to backendCreate
- `name` (String) Must be unique among all servers defined for this backend - name of the server definition to add.
- `port` (Number) Port number on the server

### Optional

- `check` (String) set to disabled if this server should be used regardless of its state.
- `downinter` (Number)
- `fall` (Number)
- `inter` (Number)
- `maxconn` (Number)
- `maxqueue` (Number)
- `rise` (Number)
- `slowstart` (Number)
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
- `weight` (Number)

### Read-Only

- `guid` (String)
- `id` (String) The ID of this resource.

<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

Optional:

- `create` (String)
- `default` (String)
- `delete` (String)
- `read` (String)
- `update` (String)
```
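A sketch of registering a server in that backend (address and port are illustrative):

```terraform
resource "decort_lb_backend_server" "web1" {
  lb_id        = decort_lb.example.lb_id
  backend_name = decort_lb_backend.web.name
  name         = "web-node-1"
  address      = "192.168.5.25" # illustrative backend server IP
  port         = 8080
}
```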
docs/resources/lb_frontend.md (new file, 56 lines)

```markdown
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_lb_frontend Resource - decort"
subcategory: ""
description: |-

---

# decort_lb_frontend (Resource)

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `backend_name` (String)
- `lb_id` (Number) ID of the LB instance to backendCreate
- `name` (String)

### Optional

- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))

### Read-Only

- `bindings` (List of Object) (see [below for nested schema](#nestedatt--bindings))
- `guid` (String)
- `id` (String) The ID of this resource.

<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

Optional:

- `create` (String)
- `default` (String)
- `delete` (String)
- `read` (String)
- `update` (String)

<a id="nestedatt--bindings"></a>
### Nested Schema for `bindings`

Read-Only:

- `address` (String)
- `guid` (String)
- `name` (String)
- `port` (Number)
```
docs/resources/lb_frontend_bind.md (Normal file, 46 lines)
@@ -0,0 +1,46 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_lb_frontend_bind Resource - decort"
subcategory: ""
description: |-

---

# decort_lb_frontend_bind (Resource)

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `address` (String)
- `frontend_name` (String) Must be unique among all backends of this LB - name of the new backend to create
- `lb_id` (Number) ID of the LB instance to backendCreate
- `name` (String)
- `port` (Number)

### Optional

- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))

### Read-Only

- `guid` (String)
- `id` (String) The ID of this resource.

<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

Optional:

- `create` (String)
- `default` (String)
- `delete` (String)
- `read` (String)
- `update` (String)
entrypoint.sh (Normal file, 4 lines)
@@ -0,0 +1,4 @@
#!/bin/sh

cp -aL /opt/decort/tf/* /opt/decort/tf/.* ./
exec "$@"
@@ -25,4 +25,6 @@ import "time"
 var Timeout30s = time.Second * 30
 var Timeout60s = time.Second * 60
 var Timeout180s = time.Second * 180
+var Timeout300s = time.Second * 300
+var Timeout600s = time.Second * 600
 var Timeout20m = time.Minute * 20
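The two new constants back the longer resource timeouts introduced further down in this change set. A minimal sketch of how they are consumed, following the pattern used by the account and bservice resources below (the resource itself is illustrative, not part of the provider):

```go
package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

// exampleResource is a placeholder resource; only the Timeouts block matters here.
// SDKv2 takes each timeout by address, which is why the constants are vars.
func exampleResource() *schema.Resource {
	return &schema.Resource{
		Timeouts: &schema.ResourceTimeout{
			Create:  &constants.Timeout600s, // new constant
			Read:    &constants.Timeout300s, // new constant
			Update:  &constants.Timeout300s,
			Delete:  &constants.Timeout300s,
			Default: &constants.Timeout300s,
		},
		Schema: map[string]*schema.Schema{},
	}
}
```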
@@ -27,6 +27,7 @@ import (
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/extnet"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/image"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
+	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/lb"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/locations"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/rg"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/snapshot"
@@ -43,6 +44,12 @@ func NewDataSourcesMap() map[string]*schema.Resource {
 		"decort_disk":      disks.DataSourceDisk(),
 		"decort_disk_list": disks.DataSourceDiskList(),
 		"decort_rg_list":   rg.DataSourceRgList(),
+		"decort_disk_list_types_detailed": disks.DataSourceDiskListTypesDetailed(),
+		"decort_disk_list_types":          disks.DataSourceDiskListTypes(),
+		"decort_disk_list_deleted":        disks.DataSourceDiskListDeleted(),
+		"decort_disk_list_unattached":     disks.DataSourceDiskListUnattached(),
+		"decort_disk_snapshot":            disks.DataSourceDiskSnapshot(),
+		"decort_disk_snapshot_list":       disks.DataSourceDiskSnapshotList(),
 		"decort_account_list":          account.DataSourceAccountList(),
 		"decort_account_computes_list": account.DataSourceAccountComputesList(),
 		"decort_account_disks_list":    account.DataSourceAccountDisksList(),
@@ -69,6 +76,9 @@ func NewDataSourcesMap() map[string]*schema.Resource {
 		"decort_location_url": locations.DataSourceLocationUrl(),
 		"decort_image_list":   image.DataSourceImageList(),
 		"decort_image":        image.DataSourceImage(),
+		"decort_lb":              lb.DataSourceLB(),
+		"decort_lb_list":         lb.DataSourceLBList(),
+		"decort_lb_list_deleted": lb.DataSourceLBListDeleted(),
 		// "decort_pfw": dataSourcePfw(),
 	}
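For readers unfamiliar with the SDKv2 wiring: the map returned here is keyed by data source type name and handed to the provider. The surrounding provider.go is not part of this diff, so the following is only a hedged sketch of that hand-off:

```go
package example

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

// newProvider shows where the data-source and resource maps end up: the
// schema.Provider looks entries up by type name (e.g. "decort_disk_snapshot_list")
// whenever Terraform asks for a schema or performs a read.
func newProvider(dataSources, resources map[string]*schema.Resource) *schema.Provider {
	return &schema.Provider{
		DataSourcesMap: dataSources, // e.g. the result of NewDataSourcesMap()
		ResourcesMap:   resources,   // e.g. the result of NewRersourcesMap()
	}
}
```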
@@ -27,6 +27,7 @@ import (
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/image"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/k8s"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
+	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/lb"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/pfw"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/rg"
 	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/snapshot"
@@ -35,18 +36,24 @@ import (

 func NewRersourcesMap() map[string]*schema.Resource {
 	return map[string]*schema.Resource{
 		"decort_resgroup":       rg.ResourceResgroup(),
 		"decort_kvmvm":          kvmvm.ResourceCompute(),
 		"decort_disk":           disks.ResourceDisk(),
+		"decort_disk_snapshot":  disks.ResourceDiskSnapshot(),
 		"decort_vins":           vins.ResourceVins(),
 		"decort_pfw":            pfw.ResourcePfw(),
 		"decort_k8s":            k8s.ResourceK8s(),
 		"decort_k8s_wg":         k8s.ResourceK8sWg(),
 		"decort_snapshot":       snapshot.ResourceSnapshot(),
 		"decort_account":        account.ResourceAccount(),
 		"decort_bservice":       bservice.ResourceBasicService(),
 		"decort_bservice_group": bservice.ResourceBasicServiceGroup(),
 		"decort_image":          image.ResourceImage(),
 		"decort_image_virtual":  image.ResourceImageVirtual(),
+		"decort_lb":                lb.ResourceLB(),
+		"decort_lb_backend":        lb.ResourceLBBackend(),
+		"decort_lb_backend_server": lb.ResourceLBBackendServer(),
+		"decort_lb_frontend":       lb.ResourceLBFrontend(),
+		"decort_lb_frontend_bind":  lb.ResourceLBFrontendBind(),
 	}
 }
@@ -786,15 +786,15 @@ func ResourceAccount() *schema.Resource {
 		DeleteContext: resourceAccountDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout60s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout60s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: resourceAccountSchemaMake(),
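The importer change above (and the identical changes in the two bservice resources that follow) swaps the deprecated State field for the context-aware StateContext. The passthrough importer just hands the given ID back; a minimal equivalent, shown only to illustrate the contract:

```go
package example

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// passthroughImport mirrors what schema.ImportStatePassthroughContext does:
// accept the import ID as-is and let the resource's ReadContext populate the
// rest of the state on the next refresh.
func passthroughImport(ctx context.Context, d *schema.ResourceData, m interface{}) ([]*schema.ResourceData, error) {
	return []*schema.ResourceData{d}, nil
}
```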
@@ -511,15 +511,15 @@ func ResourceBasicService() *schema.Resource {
 		DeleteContext: resourceBasicServiceDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout60s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout60s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: resourceBasicServiceSchemaMake(),
@@ -616,15 +616,15 @@ func ResourceBasicServiceGroup() *schema.Resource {
 		DeleteContext: resourceBasicServiceGroupDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout60s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout60s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: resourceBasicServiceGroupSchemaMake(),
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
 Authors:
 Petr Krutov, <petr.krutov@digitalenergy.online>
 Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -31,11 +32,19 @@ Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

 package disks

-const disksCreateAPI = "/restmachine/cloudapi/disks/create"
-const disksGetAPI = "/restmachine/cloudapi/disks/get"
-const disksListAPI = "/restmachine/cloudapi/disks/list"
-const disksResizeAPI = "/restmachine/cloudapi/disks/resize2"
-const disksRenameAPI = "/restmachine/cloudapi/disks/rename"
-const disksDeleteAPI = "/restmachine/cloudapi/disks/delete"
-const disksIOLimitAPI = "/restmachine/cloudapi/disks/limitIO"
-const disksRestoreAPI = "/restmachine/cloudapi/disks/restore"
+const (
+	disksCreateAPI         = "/restmachine/cloudapi/disks/create"
+	disksGetAPI            = "/restmachine/cloudapi/disks/get"
+	disksListAPI           = "/restmachine/cloudapi/disks/list"
+	disksResizeAPI         = "/restmachine/cloudapi/disks/resize2"
+	disksRenameAPI         = "/restmachine/cloudapi/disks/rename"
+	disksDeleteAPI         = "/restmachine/cloudapi/disks/delete"
+	disksIOLimitAPI        = "/restmachine/cloudapi/disks/limitIO"
+	disksRestoreAPI        = "/restmachine/cloudapi/disks/restore"
+	disksListTypesAPI      = "/restmachine/cloudapi/disks/listTypes"
+	disksListDeletedAPI    = "/restmachine/cloudapi/disks/listDeleted"
+	disksListUnattachedAPI = "/restmachine/cloudapi/disks/listUnattached"
+
+	disksSnapshotDeleteAPI   = "/restmachine/cloudapi/disks/snapshotDelete"
+	disksSnapshotRollbackAPI = "/restmachine/cloudapi/disks/snapshotRollback"
+)
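Each of these constants is a raw endpoint path handed to the controller. A sketch of the call pattern, modeled on the utility helpers added later in this change set; the function name and the empty parameter set are illustrative, not the provider's actual helper:

```go
package disks

import (
	"context"
	"encoding/json"
	"net/url"

	"github.com/rudecs/terraform-provider-decort/internal/controller"
)

// listDeletedDisksSketch shows the shape of a call against one of the new
// endpoints: POST the (possibly empty) url.Values, then unmarshal the JSON
// reply into the matching model type (DisksList is reused here for brevity).
func listDeletedDisksSketch(ctx context.Context, c *controller.ControllerCfg) (DisksList, error) {
	urlValues := &url.Values{}

	raw, err := c.DecortAPICall(ctx, "POST", disksListDeletedAPI, urlValues)
	if err != nil {
		return nil, err
	}

	list := DisksList{}
	if err := json.Unmarshal([]byte(raw), &list); err != nil {
		return nil, err
	}
	return list, nil
}
```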
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
 Authors:
 Petr Krutov, <petr.krutov@digitalenergy.online>
 Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -94,7 +95,7 @@ func dataSourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface
 	d.Set("sep_type", disk.SepType)
 	d.Set("size_max", disk.SizeMax)
 	d.Set("size_used", disk.SizeUsed)
-	d.Set("snapshots", flattendDiskSnapshotList(disk.Snapshots))
+	d.Set("snapshots", flattenDiskSnapshotList(disk.Snapshots))
 	d.Set("status", disk.Status)
 	d.Set("tech_status", disk.TechStatus)
 	d.Set("type", disk.Type)
@@ -106,68 +107,83 @@ func dataSourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface
 func dataSourceDiskSchemaMake() map[string]*schema.Schema {
 	rets := map[string]*schema.Schema{
 		"disk_id": {
 			Type:     schema.TypeInt,
 			Required: true,
+			Description: "The unique ID of the subscriber-owner of the disk",
 		},
 		"account_id": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "The unique ID of the subscriber-owner of the disk",
 		},
 		"account_name": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "The name of the subscriber '(account') to whom this disk belongs",
 		},
 		"acl": {
 			Type:     schema.TypeString,
 			Computed: true,
 		},
 		"boot_partition": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Number of disk partitions",
 		},
 		"compute_id": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Compute ID",
 		},
 		"compute_name": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Compute name",
 		},
 		"created_time": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Created time",
 		},
 		"deleted_time": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Deleted time",
 		},
 		"desc": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Description of disk",
 		},
 		"destruction_time": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Time of final deletion",
 		},
 		"devicename": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Name of the device",
 		},
 		"disk_path": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Disk path",
 		},
 		"gid": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "ID of the grid (platform)",
 		},
 		"guid": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Disk ID on the storage side",
 		},
 		"image_id": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Image ID",
 		},
 		"images": {
 			Type:     schema.TypeList,
@@ -175,6 +191,7 @@ func dataSourceDiskSchemaMake() map[string]*schema.Schema {
 			Elem: &schema.Schema{
 				Type: schema.TypeString,
 			},
+			Description: "IDs of images using the disk",
 		},
 		"iotune": {
 			Type:     schema.TypeList,
@@ -182,143 +199,177 @@ func dataSourceDiskSchemaMake() map[string]*schema.Schema {
 			Elem: &schema.Resource{
 				Schema: map[string]*schema.Schema{
 					"read_bytes_sec": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Number of bytes to read per second",
 					},
 					"read_bytes_sec_max": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Maximum number of bytes to read",
 					},
 					"read_iops_sec": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Number of io read operations per second",
 					},
 					"read_iops_sec_max": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Maximum number of io read operations",
 					},
 					"size_iops_sec": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Size of io operations",
 					},
 					"total_bytes_sec": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Total size bytes per second",
 					},
 					"total_bytes_sec_max": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Maximum total size of bytes per second",
 					},
 					"total_iops_sec": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Total number of io operations per second",
 					},
 					"total_iops_sec_max": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Maximum total number of io operations per second",
 					},
 					"write_bytes_sec": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Number of bytes to write per second",
 					},
 					"write_bytes_sec_max": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Maximum number of bytes to write per second",
 					},
 					"write_iops_sec": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Number of write operations per second",
 					},
 					"write_iops_sec_max": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Maximum number of write operations per second",
 					},
 				},
 			},
 		},
 		"iqn": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Disk IQN",
 		},
 		"login": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Login to access the disk",
 		},
 		"milestones": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Milestones",
 		},
 		"disk_name": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Name of disk",
 		},
 		"order": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Disk order",
 		},
 		"params": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Disk params",
 		},
 		"parent_id": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "ID of the parent disk",
 		},
 		"passwd": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Password to access the disk",
 		},
 		"pci_slot": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "ID of the pci slot to which the disk is connected",
 		},
 		"pool": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Pool for disk location",
 		},
 		"purge_attempts": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Number of deletion attempts",
 		},
 		"purge_time": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Time of the last deletion attempt",
 		},
 		"reality_device_number": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Reality device number",
 		},
 		"reference_id": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "ID of the reference to the disk",
 		},
 		"res_id": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Resource ID",
 		},
 		"res_name": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Name of the resource",
 		},
 		"role": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Disk role",
 		},
 		"sep_id": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Storage endpoint provider ID to create disk",
 		},
 		"sep_type": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Type SEP. Defines the type of storage system and contains one of the values set in the cloud platform",
 		},
 		"size_max": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Size in GB",
 		},
 		"size_used": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Number of used space, in GB",
 		},
 		"snapshots": {
 			Type:     schema.TypeList,
@@ -326,47 +377,57 @@ func dataSourceDiskSchemaMake() map[string]*schema.Schema {
 			Elem: &schema.Resource{
 				Schema: map[string]*schema.Schema{
 					"guid": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "ID of the snapshot",
 					},
 					"label": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Name of the snapshot",
 					},
 					"res_id": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Reference to the snapshot",
 					},
 					"snap_set_guid": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "The set snapshot ID",
 					},
 					"snap_set_time": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "The set time of the snapshot",
 					},
 					"timestamp": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Snapshot time",
 					},
 				},
 			},
 		},
 		"status": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Disk status",
 		},
 		"tech_status": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "Technical status of the disk",
 		},
 		"type": {
 			Type:     schema.TypeString,
 			Computed: true,
+			Description: "The type of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
 		},
 		"vmid": {
 			Type:     schema.TypeInt,
 			Computed: true,
+			Description: "Virtual Machine ID (Deprecated)",
 		},
 	}
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
 Authors:
 Petr Krutov, <petr.krutov@digitalenergy.online>
 Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -109,7 +110,7 @@ func flattenDiskList(dl DisksList) []map[string]interface{} {
 			"sep_type":  disk.SepType,
 			"size_max":  disk.SizeMax,
 			"size_used": disk.SizeUsed,
-			"snapshots": flattendDiskSnapshotList(disk.Snapshots),
+			"snapshots": flattenDiskSnapshotList(disk.Snapshots),
 			"status":      disk.Status,
 			"tech_status": disk.TechStatus,
 			"type":        disk.Type,
@@ -121,7 +122,7 @@ func flattenDiskList(dl DisksList) []map[string]interface{} {

 }

-func flattendDiskSnapshotList(sl SnapshotList) []interface{} {
+func flattenDiskSnapshotList(sl SnapshotList) []interface{} {
 	res := make([]interface{}, 0)
 	for _, snapshot := range sl {
 		temp := map[string]interface{}{
@@ -140,7 +141,7 @@ func flattendDiskSnapshotList(sl SnapshotList) []interface{} {
 }

 func dataSourceDiskListRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
-	diskList, err := utilityDiskListCheckPresence(ctx, d, m)
+	diskList, err := utilityDiskListCheckPresence(ctx, d, m, disksListAPI)
 	if err != nil {
 		return diag.FromErr(err)
 	}
@@ -180,68 +181,83 @@ func dataSourceDiskListSchemaMake() map[string]*schema.Schema {
 			Elem: &schema.Resource{
 				Schema: map[string]*schema.Schema{
 					"account_id": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "The unique ID of the subscriber-owner of the disk",
 					},
 					"account_name": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "The name of the subscriber '(account') to whom this disk belongs",
 					},
 					"acl": {
 						Type:     schema.TypeString,
 						Computed: true,
 					},
 					"boot_partition": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Number of disk partitions",
 					},
 					"compute_id": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Compute ID",
 					},
 					"compute_name": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Compute name",
 					},
 					"created_time": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Created time",
 					},
 					"deleted_time": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Deleted time",
 					},
 					"desc": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Description of disk",
 					},
 					"destruction_time": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Time of final deletion",
 					},
 					"devicename": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Name of the device",
 					},
 					"disk_path": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Disk path",
 					},
 					"gid": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "ID of the grid (platform)",
 					},
 					"guid": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Disk ID on the storage side",
 					},
 					"disk_id": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "The unique ID of the subscriber-owner of the disk",
 					},
 					"image_id": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Image ID",
 					},
 					"images": {
 						Type:     schema.TypeList,
@@ -249,6 +265,7 @@ func dataSourceDiskListSchemaMake() map[string]*schema.Schema {
 						Elem: &schema.Schema{
 							Type: schema.TypeString,
 						},
+						Description: "IDs of images using the disk",
 					},
 					"iotune": {
 						Type:     schema.TypeList,
@@ -256,151 +273,187 @@ func dataSourceDiskListSchemaMake() map[string]*schema.Schema {
 						Elem: &schema.Resource{
 							Schema: map[string]*schema.Schema{
 								"read_bytes_sec": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Number of bytes to read per second",
 								},
 								"read_bytes_sec_max": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Maximum number of bytes to read",
 								},
 								"read_iops_sec": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Number of io read operations per second",
 								},
 								"read_iops_sec_max": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Maximum number of io read operations",
 								},
 								"size_iops_sec": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Size of io operations",
 								},
 								"total_bytes_sec": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Total size bytes per second",
 								},
 								"total_bytes_sec_max": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Maximum total size of bytes per second",
 								},
 								"total_iops_sec": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Total number of io operations per second",
 								},
 								"total_iops_sec_max": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Maximum total number of io operations per second",
 								},
 								"write_bytes_sec": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Number of bytes to write per second",
 								},
 								"write_bytes_sec_max": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Maximum number of bytes to write per second",
 								},
 								"write_iops_sec": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Number of write operations per second",
 								},
 								"write_iops_sec_max": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Maximum number of write operations per second",
 								},
 							},
 						},
 					},
 					"iqn": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Disk IQN",
 					},
 					"login": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Login to access the disk",
 					},
 					"machine_id": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Machine ID",
 					},
 					"machine_name": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Machine name",
 					},
 					"milestones": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Milestones",
 					},
 					"disk_name": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Name of disk",
 					},
 					"order": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Disk order",
 					},
 					"params": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Disk params",
 					},
 					"parent_id": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "ID of the parent disk",
 					},
 					"passwd": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Password to access the disk",
 					},
 					"pci_slot": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "ID of the pci slot to which the disk is connected",
 					},
 					"pool": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Pool for disk location",
 					},
 					"purge_attempts": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Number of deletion attempts",
 					},
 					"purge_time": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Time of the last deletion attempt",
 					},
 					"reality_device_number": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Reality device number",
 					},
 					"reference_id": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "ID of the reference to the disk",
 					},
 					"res_id": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Resource ID",
 					},
 					"res_name": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Name of the resource",
 					},
 					"role": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Disk role",
 					},
 					"sep_id": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Storage endpoint provider ID to create disk",
 					},
 					"sep_type": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Type SEP. Defines the type of storage system and contains one of the values set in the cloud platform",
 					},
 					"size_max": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Size in GB",
 					},
 					"size_used": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Number of used space, in GB",
 					},
 					"snapshots": {
 						Type:     schema.TypeList,
@@ -408,47 +461,57 @@ func dataSourceDiskListSchemaMake() map[string]*schema.Schema {
 						Elem: &schema.Resource{
 							Schema: map[string]*schema.Schema{
 								"guid": {
 									Type:     schema.TypeString,
 									Computed: true,
+									Description: "ID of the snapshot",
 								},
 								"label": {
 									Type:     schema.TypeString,
 									Computed: true,
+									Description: "Name of the snapshot",
 								},
 								"res_id": {
 									Type:     schema.TypeString,
 									Computed: true,
+									Description: "Reference to the snapshot",
 								},
 								"snap_set_guid": {
 									Type:     schema.TypeString,
 									Computed: true,
+									Description: "The set snapshot ID",
 								},
 								"snap_set_time": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "The set time of the snapshot",
 								},
 								"timestamp": {
 									Type:     schema.TypeInt,
 									Computed: true,
+									Description: "Snapshot time",
 								},
 							},
 						},
 					},
 					"status": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Disk status",
 					},
 					"tech_status": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "Technical status of the disk",
 					},
 					"type": {
 						Type:     schema.TypeString,
 						Computed: true,
+						Description: "The type of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
 					},
 					"vmid": {
 						Type:     schema.TypeInt,
 						Computed: true,
+						Description: "Virtual Machine ID (Deprecated)",
 					},
 				},
 			},
 		},
@@ -0,0 +1,82 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func dataSourceDiskListTypesRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	listTypes, err := utilityDiskListTypesCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	id := uuid.New()
	d.SetId(id.String())
	d.Set("types", listTypes)
	return nil
}

func dataSourceDiskListTypesSchemaMake() map[string]*schema.Schema {
	res := map[string]*schema.Schema{
		"types": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Schema{
				Type: schema.TypeString,
			},
			Description: "The types of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
		},
	}
	return res
}

func DataSourceDiskListTypes() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		ReadContext:   dataSourceDiskListTypesRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dataSourceDiskListTypesSchemaMake(),
	}
}
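dataSourceDiskListTypesRead calls utilityDiskListTypesCheckPresence, which lives in a utility file that is not part of this excerpt. A hedged sketch of what such a helper plausibly looks like, reusing disksListTypesAPI and the controller pattern seen elsewhere in this diff; the return type and the "detailed" parameter are assumptions, not taken from the source:

```go
package disks

import (
	"context"
	"encoding/json"
	"net/url"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
)

// utilityDiskListTypesCheckPresenceSketch fetches the plain list of disk
// types. The real helper may differ; this only illustrates the call shape.
func utilityDiskListTypesCheckPresenceSketch(ctx context.Context, d *schema.ResourceData, m interface{}) ([]interface{}, error) {
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("detailed", "false") // assumption: the non-detailed variant of listTypes

	raw, err := c.DecortAPICall(ctx, "POST", disksListTypesAPI, urlValues)
	if err != nil {
		return nil, err
	}

	types := make([]interface{}, 0)
	if err := json.Unmarshal([]byte(raw), &types); err != nil {
		return nil, err
	}
	return types, nil
}
```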
@@ -0,0 +1,133 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func flattenDiskListTypesDetailed(tld TypesDetailedList) []map[string]interface{} {
	res := make([]map[string]interface{}, 0)
	for _, typeListDetailed := range tld {
		temp := map[string]interface{}{
			"pools":  flattenListTypesDetailedPools(typeListDetailed.Pools),
			"sep_id": typeListDetailed.SepID,
		}
		res = append(res, temp)
	}
	return res
}

func flattenListTypesDetailedPools(pools PoolList) []interface{} {
	res := make([]interface{}, 0)
	for _, pool := range pools {
		temp := map[string]interface{}{
			"name":  pool.Name,
			"types": pool.Types,
		}
		res = append(res, temp)
	}

	return res
}

func dataSourceDiskListTypesDetailedRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	listTypesDetailed, err := utilityDiskListTypesDetailedCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	id := uuid.New()
	d.SetId(id.String())
	d.Set("items", flattenDiskListTypesDetailed(listTypesDetailed))
	return nil
}

func dataSourceDiskListTypesDetailedSchemaMake() map[string]*schema.Schema {
	res := map[string]*schema.Schema{
		"items": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"pools": {
						Type:     schema.TypeList,
						Computed: true,
						Elem: &schema.Resource{
							Schema: map[string]*schema.Schema{
								"name": {
									Type:        schema.TypeString,
									Computed:    true,
									Description: "Pool name",
								},
								"types": {
									Type:     schema.TypeList,
									Computed: true,
									Elem: &schema.Schema{
										Type: schema.TypeString,
									},
									Description: "The types of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
								},
							},
						},
					},
					"sep_id": {
						Type:        schema.TypeInt,
						Computed:    true,
						Description: "Storage endpoint provider ID to create disk",
					},
				},
			},
		},
	}
	return res
}

func DataSourceDiskListTypesDetailed() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		ReadContext:   dataSourceDiskListTypesDetailedRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dataSourceDiskListTypesDetailedSchemaMake(),
	}
}
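The flatten helpers above rely on TypesDetailedList and PoolList model types declared elsewhere (the models file is not included in this excerpt). For orientation, a hedged sketch of the minimal shape those types would need for the code above to compile; field names follow the usage, while the JSON tags are guesses rather than facts from the source:

```go
package disks

// Pool and TypeDetailed are illustrative only; they mirror the fields read by
// flattenDiskListTypesDetailed and flattenListTypesDetailedPools above.
type Pool struct {
	Name  string   `json:"name"`  // assumed tag
	Types []string `json:"types"` // assumed tag
}

type PoolList []Pool

type TypeDetailed struct {
	Pools PoolList `json:"pools"` // assumed tag
	SepID int      `json:"sepId"` // assumed tag
}

type TypesDetailedList []TypeDetailed
```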
@@ -0,0 +1,485 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"
	"encoding/json"
	"net/url"
	"strconv"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
	"github.com/rudecs/terraform-provider-decort/internal/flattens"
	log "github.com/sirupsen/logrus"
)

func utilityDiskListUnattachedCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (UnattachedList, error) {
	unattachedList := UnattachedList{}
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	// The schema key is "account_id"; the API parameter name stays "accountId".
	if accountId, ok := d.GetOk("account_id"); ok {
		urlValues.Add("accountId", strconv.Itoa(accountId.(int)))
	}

	log.Debugf("utilityDiskListUnattachedCheckPresence: load unattached disk list")
	unattachedListRaw, err := c.DecortAPICall(ctx, "POST", disksListUnattachedAPI, urlValues)
	if err != nil {
		return nil, err
	}
	err = json.Unmarshal([]byte(unattachedListRaw), &unattachedList)
	if err != nil {
		return nil, err
	}
	return unattachedList, nil
}

func flattenDiskListUnattached(ul UnattachedList) []map[string]interface{} {
	res := make([]map[string]interface{}, 0)
	for _, unattachedDisk := range ul {
		unattachedDiskAcl, _ := json.Marshal(unattachedDisk.Acl)
		tmp := map[string]interface{}{
			"_ckey":                 unattachedDisk.Ckey,
			"_meta":                 flattens.FlattenMeta(unattachedDisk.Meta),
			"account_id":            unattachedDisk.AccountID,
			"account_name":          unattachedDisk.AccountName,
			"acl":                   string(unattachedDiskAcl),
			"boot_partition":        unattachedDisk.BootPartition,
			"created_time":          unattachedDisk.CreatedTime,
			"deleted_time":          unattachedDisk.DeletedTime,
			"desc":                  unattachedDisk.Desc,
			"destruction_time":      unattachedDisk.DestructionTime,
			"disk_path":             unattachedDisk.DiskPath,
			"gid":                   unattachedDisk.GridID,
			"guid":                  unattachedDisk.GUID,
			"disk_id":               unattachedDisk.ID,
			"image_id":              unattachedDisk.ImageID,
			"images":                unattachedDisk.Images,
			"iotune":                flattenIOTune(unattachedDisk.IOTune),
			"iqn":                   unattachedDisk.IQN,
			"login":                 unattachedDisk.Login,
			"milestones":            unattachedDisk.Milestones,
			"disk_name":             unattachedDisk.Name,
			"order":                 unattachedDisk.Order,
			"params":                unattachedDisk.Params,
			"parent_id":             unattachedDisk.ParentID,
			"passwd":                unattachedDisk.Passwd,
			"pci_slot":              unattachedDisk.PciSlot,
			"pool":                  unattachedDisk.Pool,
			"purge_attempts":        unattachedDisk.PurgeAttempts,
			"purge_time":            unattachedDisk.PurgeTime,
			"reality_device_number": unattachedDisk.RealityDeviceNumber,
			"reference_id":          unattachedDisk.ReferenceID,
			"res_id":                unattachedDisk.ResID,
			"res_name":              unattachedDisk.ResName,
			"role":                  unattachedDisk.Role,
			"sep_id":                unattachedDisk.SepID,
			"size_max":              unattachedDisk.SizeMax,
			"size_used":             unattachedDisk.SizeUsed,
			"snapshots":             flattenDiskSnapshotList(unattachedDisk.Snapshots),
			"status":                unattachedDisk.Status,
			"tech_status":           unattachedDisk.TechStatus,
			"type":                  unattachedDisk.Type,
			"vmid":                  unattachedDisk.VMID,
		}
		res = append(res, tmp)
	}
	return res
}

func dataSourceDiskListUnattachedRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	diskListUnattached, err := utilityDiskListUnattachedCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	id := uuid.New()
	d.SetId(id.String())
	d.Set("items", flattenDiskListUnattached(diskListUnattached))

	return nil
}

func DataSourceDiskListUnattached() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		ReadContext: dataSourceDiskListUnattachedRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dataSourceDiskListUnattachedSchemaMake(),
	}
}

func dataSourceDiskListUnattachedSchemaMake() map[string]*schema.Schema {
	res := map[string]*schema.Schema{
		"account_id": {
			Type:        schema.TypeInt,
			Optional:    true,
			Description: "ID of the account the disks belong to",
		},

		"items": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"_ckey":            {Type: schema.TypeString, Computed: true, Description: "CKey"},
					"_meta":            {Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, Description: "Meta parameters"},
					"account_id":       {Type: schema.TypeInt, Computed: true, Description: "ID of the account the disks belong to"},
					"account_name":     {Type: schema.TypeString, Computed: true, Description: "The name of the subscriber ('account') to whom this disk belongs"},
					"acl":              {Type: schema.TypeString, Computed: true},
					"boot_partition":   {Type: schema.TypeInt, Computed: true, Description: "Number of disk partitions"},
					"created_time":     {Type: schema.TypeInt, Computed: true, Description: "Created time"},
					"deleted_time":     {Type: schema.TypeInt, Computed: true, Description: "Deleted time"},
					"desc":             {Type: schema.TypeString, Computed: true, Description: "Description of disk"},
					"destruction_time": {Type: schema.TypeInt, Computed: true, Description: "Time of final deletion"},
					"disk_path":        {Type: schema.TypeString, Computed: true, Description: "Disk path"},
					"gid":              {Type: schema.TypeInt, Computed: true, Description: "ID of the grid (platform)"},
					"guid":             {Type: schema.TypeInt, Computed: true, Description: "Disk ID on the storage side"},
					"disk_id":          {Type: schema.TypeInt, Computed: true, Description: "The unique ID of the subscriber-owner of the disk"},
					"image_id":         {Type: schema.TypeInt, Computed: true, Description: "Image ID"},
					"images":           {Type: schema.TypeList, Computed: true, Elem: &schema.Schema{Type: schema.TypeString}, Description: "IDs of images using the disk"},
					"iotune": {
						Type:     schema.TypeList,
						Computed: true,
						Elem: &schema.Resource{
							Schema: map[string]*schema.Schema{
								"read_bytes_sec":      {Type: schema.TypeInt, Computed: true, Description: "Number of bytes to read per second"},
								"read_bytes_sec_max":  {Type: schema.TypeInt, Computed: true, Description: "Maximum number of bytes to read"},
								"read_iops_sec":       {Type: schema.TypeInt, Computed: true, Description: "Number of io read operations per second"},
								"read_iops_sec_max":   {Type: schema.TypeInt, Computed: true, Description: "Maximum number of io read operations"},
								"size_iops_sec":       {Type: schema.TypeInt, Computed: true, Description: "Size of io operations"},
								"total_bytes_sec":     {Type: schema.TypeInt, Computed: true, Description: "Total size bytes per second"},
								"total_bytes_sec_max": {Type: schema.TypeInt, Computed: true, Description: "Maximum total size of bytes per second"},
								"total_iops_sec":      {Type: schema.TypeInt, Computed: true, Description: "Total number of io operations per second"},
								"total_iops_sec_max":  {Type: schema.TypeInt, Computed: true, Description: "Maximum total number of io operations per second"},
								"write_bytes_sec":     {Type: schema.TypeInt, Computed: true, Description: "Number of bytes to write per second"},
								"write_bytes_sec_max": {Type: schema.TypeInt, Computed: true, Description: "Maximum number of bytes to write per second"},
								"write_iops_sec":      {Type: schema.TypeInt, Computed: true, Description: "Number of write operations per second"},
								"write_iops_sec_max":  {Type: schema.TypeInt, Computed: true, Description: "Maximum number of write operations per second"},
							},
						},
					},
					"iqn":                   {Type: schema.TypeString, Computed: true, Description: "Disk IQN"},
					"login":                 {Type: schema.TypeString, Computed: true, Description: "Login to access the disk"},
					"milestones":            {Type: schema.TypeInt, Computed: true, Description: "Milestones"},
					"disk_name":             {Type: schema.TypeString, Computed: true, Description: "Name of disk"},
					"order":                 {Type: schema.TypeInt, Computed: true, Description: "Disk order"},
					"params":                {Type: schema.TypeString, Computed: true, Description: "Disk params"},
					"parent_id":             {Type: schema.TypeInt, Computed: true, Description: "ID of the parent disk"},
					"passwd":                {Type: schema.TypeString, Computed: true, Description: "Password to access the disk"},
					"pci_slot":              {Type: schema.TypeInt, Computed: true, Description: "ID of the pci slot to which the disk is connected"},
					"pool":                  {Type: schema.TypeString, Computed: true, Description: "Pool for disk location"},
					"purge_attempts":        {Type: schema.TypeInt, Computed: true, Description: "Number of deletion attempts"},
					"purge_time":            {Type: schema.TypeInt, Computed: true, Description: "Time of the last deletion attempt"},
					"reality_device_number": {Type: schema.TypeInt, Computed: true, Description: "Reality device number"},
					"reference_id":          {Type: schema.TypeString, Computed: true, Description: "ID of the reference to the disk"},
					"res_id":                {Type: schema.TypeString, Computed: true, Description: "Resource ID"},
					"res_name":              {Type: schema.TypeString, Computed: true, Description: "Name of the resource"},
					"role":                  {Type: schema.TypeString, Computed: true, Description: "Disk role"},
					"sep_id":                {Type: schema.TypeInt, Computed: true, Description: "Storage endpoint provider ID to create disk"},
					"size_max":              {Type: schema.TypeInt, Computed: true, Description: "Size in GB"},
					"size_used":             {Type: schema.TypeInt, Computed: true, Description: "Number of used space, in GB"},
					"snapshots": {
						Type:     schema.TypeList,
						Computed: true,
						Elem: &schema.Resource{
							Schema: map[string]*schema.Schema{
								"guid":          {Type: schema.TypeString, Computed: true, Description: "ID of the snapshot"},
								"label":         {Type: schema.TypeString, Computed: true, Description: "Name of the snapshot"},
								"res_id":        {Type: schema.TypeString, Computed: true, Description: "Reference to the snapshot"},
								"snap_set_guid": {Type: schema.TypeString, Computed: true, Description: "The set snapshot ID"},
								"snap_set_time": {Type: schema.TypeInt, Computed: true, Description: "The set time of the snapshot"},
								"timestamp":     {Type: schema.TypeInt, Computed: true, Description: "Snapshot time"},
							},
						},
					},
					"status":      {Type: schema.TypeString, Computed: true, Description: "Disk status"},
					"tech_status": {Type: schema.TypeString, Computed: true, Description: "Technical status of the disk"},
					"type":        {Type: schema.TypeString, Computed: true, Description: "The type of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'"},
					"vmid":        {Type: schema.TypeInt, Computed: true, Description: "Virtual Machine ID (Deprecated)"},
				},
			},
		},
	}
	return res
}
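As a quick sanity check of the flattening logic above, a minimal test sketch could look like the following; it assumes it sits in the same disks package, and all field values are invented for illustration.

// Sketch only: invented values, placed alongside the package for illustration.
package disks

import "testing"

func TestFlattenDiskListUnattachedSketch(t *testing.T) {
	list := UnattachedList{
		{ID: 123, Name: "unattached-data-disk", SizeMax: 10, Status: "CREATED"},
	}

	items := flattenDiskListUnattached(list)
	if len(items) != 1 {
		t.Fatalf("expected 1 item, got %d", len(items))
	}
	if items[0]["disk_id"] != 123 {
		t.Errorf("unexpected disk_id: %v", items[0]["disk_id"])
	}
}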
internal/service/cloudapi/disks/data_source_disk_snapshot.go (new file, 129 lines)
@@ -0,0 +1,129 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func dataSourceDiskSnapshotRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}
	snapshots := disk.Snapshots
	snapshot := Snapshot{}
	label := d.Get("label").(string)
	for _, sn := range snapshots {
		if label == sn.Label {
			snapshot = sn
			break
		}
	}
	if label != snapshot.Label {
		return diag.Errorf("Snapshot with label \"%v\" not found", label)
	}

	id := uuid.New()
	d.SetId(id.String())
	d.Set("timestamp", snapshot.TimeStamp)
	d.Set("guid", snapshot.Guid)
	d.Set("res_id", snapshot.ResId)
	d.Set("snap_set_guid", snapshot.SnapSetGuid)
	d.Set("snap_set_time", snapshot.SnapSetTime)
	return nil
}

func DataSourceDiskSnapshot() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		ReadContext: dataSourceDiskSnapshotRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dataSourceDiskSnapshotSchemaMake(),
	}
}

func dataSourceDiskSnapshotSchemaMake() map[string]*schema.Schema {
	rets := map[string]*schema.Schema{
		"disk_id": {
			Type:        schema.TypeInt,
			Required:    true,
			Description: "The unique ID of the subscriber-owner of the disk",
		},
		"label": {
			Type:        schema.TypeString,
			Required:    true,
			Description: "Name of the snapshot",
		},
		"guid": {
			Type:        schema.TypeString,
			Computed:    true,
			Description: "ID of the snapshot",
		},
		"timestamp": {
			Type:        schema.TypeInt,
			Computed:    true,
			Description: "Snapshot time",
		},
		"res_id": {
			Type:        schema.TypeString,
			Computed:    true,
			Description: "Reference to the snapshot",
		},
		"snap_set_guid": {
			Type:        schema.TypeString,
			Computed:    true,
			Description: "The set snapshot ID",
		},
		"snap_set_time": {
			Type:        schema.TypeInt,
			Computed:    true,
			Description: "The set time of the snapshot",
		},
	}
	return rets
}
@@ -0,0 +1,121 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func dataSourceDiskSnapshotListRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}

	id := uuid.New()
	d.SetId(id.String())
	d.Set("items", flattenDiskSnapshotList(disk.Snapshots))
	return nil
}

func DataSourceDiskSnapshotList() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		ReadContext: dataSourceDiskSnapshotListRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dataSourceDiskSnapshotListSchemaMake(),
	}
}

func dataSourceDiskSnapshotListSchemaMake() map[string]*schema.Schema {
	rets := map[string]*schema.Schema{
		"disk_id": {
			Type:        schema.TypeInt,
			Required:    true,
			Description: "The unique ID of the subscriber-owner of the disk",
		},
		"items": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"label":         {Type: schema.TypeString, Computed: true, Description: "Name of the snapshot"},
					"guid":          {Type: schema.TypeString, Computed: true, Description: "ID of the snapshot"},
					"timestamp":     {Type: schema.TypeInt, Computed: true, Description: "Snapshot time"},
					"res_id":        {Type: schema.TypeString, Computed: true, Description: "Reference to the snapshot"},
					"snap_set_guid": {Type: schema.TypeString, Computed: true, Description: "The set snapshot ID"},
					"snap_set_time": {Type: schema.TypeInt, Computed: true, Description: "The set time of the snapshot"},
				},
			},
		},
	}
	return rets
}
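Both snapshot data sources locate a snapshot by its label with a plain linear scan over disk.Snapshots and treat a label mismatch as "not found". A minimal, self-contained test sketch of that lookup pattern follows; the Snapshot field names are taken from the usage above, and the values are invented.

// Sketch only: exercises the label-lookup pattern used by the snapshot data sources.
package disks

import "testing"

func TestSnapshotLabelLookupSketch(t *testing.T) {
	snapshots := []Snapshot{{Label: "daily", Guid: "snap-1"}}

	found := Snapshot{}
	label := "daily"
	for _, sn := range snapshots {
		if sn.Label == label {
			found = sn
			break
		}
	}
	if found.Label != label {
		t.Fatalf("snapshot with label %q not found", label)
	}
}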
internal/service/cloudapi/disks/data_source_list_deleted.go (new file, 69 lines)
@@ -0,0 +1,69 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func dataSourceDiskListDeletedRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	diskList, err := utilityDiskListCheckPresence(ctx, d, m, disksListDeletedAPI)
	if err != nil {
		return diag.FromErr(err)
	}

	id := uuid.New()
	d.SetId(id.String())
	d.Set("items", flattenDiskList(diskList))

	return nil
}

func DataSourceDiskListDeleted() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		ReadContext:   dataSourceDiskListDeletedRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dataSourceDiskListSchemaMake(),
	}
}
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -109,3 +110,66 @@ type IOTune struct {
	WriteIopsSec    int `json:"write_iops_sec"`
	WriteIopsSecMax int `json:"write_iops_sec_max"`
}
+
+type Pool struct {
+	Name  string   `json:"name"`
+	Types []string `json:"types"`
+}
+
+type PoolList []Pool
+
+type TypeDetailed struct {
+	Pools []Pool `json:"pools"`
+	SepID int    `json:"sepId"`
+}
+
+type TypesDetailedList []TypeDetailed
+
+type TypesList []string
+
+type Unattached struct {
+	Ckey                string                 `json:"_ckey"`
+	Meta                []interface{}          `json:"_meta"`
+	AccountID           int                    `json:"accountId"`
+	AccountName         string                 `json:"accountName"`
+	Acl                 map[string]interface{} `json:"acl"`
+	BootPartition       int                    `json:"bootPartition"`
+	CreatedTime         int                    `json:"createdTime"`
+	DeletedTime         int                    `json:"deletedTime"`
+	Desc                string                 `json:"desc"`
+	DestructionTime     int                    `json:"destructionTime"`
+	DiskPath            string                 `json:"diskPath"`
+	GridID              int                    `json:"gid"`
+	GUID                int                    `json:"guid"`
+	ID                  int                    `json:"id"`
+	ImageID             int                    `json:"imageId"`
+	Images              []int                  `json:"images"`
+	IOTune              IOTune                 `json:"iotune"`
+	IQN                 string                 `json:"iqn"`
+	Login               string                 `json:"login"`
+	Milestones          int                    `json:"milestones"`
+	Name                string                 `json:"name"`
+	Order               int                    `json:"order"`
+	Params              string                 `json:"params"`
+	ParentID            int                    `json:"parentId"`
+	Passwd              string                 `json:"passwd"`
+	PciSlot             int                    `json:"pciSlot"`
+	Pool                string                 `json:"pool"`
+	PurgeAttempts       int                    `json:"purgeAttempts"`
+	PurgeTime           int                    `json:"purgeTime"`
+	RealityDeviceNumber int                    `json:"realityDeviceNumber"`
+	ReferenceID         string                 `json:"referenceId"`
+	ResID               string                 `json:"resId"`
+	ResName             string                 `json:"resName"`
+	Role                string                 `json:"role"`
+	SepID               int                    `json:"sepId"`
+	SizeMax             int                    `json:"sizeMax"`
+	SizeUsed            int                    `json:"sizeUsed"`
+	Snapshots           []Snapshot             `json:"snapshots"`
+	Status              string                 `json:"status"`
+	TechStatus          string                 `json:"techStatus"`
+	Type                string                 `json:"type"`
+	VMID                int                    `json:"vmid"`
+}
+
+type UnattachedList []Unattached
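The new model types map directly onto the JSON returned by the platform through their struct tags. The following is a small, self-contained sketch of decoding such a payload; the sample JSON values are invented, and the struct shapes simply mirror Pool and TypeDetailed above.

// Sketch only: invented payload, local copies of the model shapes for illustration.
package main

import (
	"encoding/json"
	"fmt"
)

type Pool struct {
	Name  string   `json:"name"`
	Types []string `json:"types"`
}

type TypeDetailed struct {
	Pools []Pool `json:"pools"`
	SepID int    `json:"sepId"`
}

func main() {
	raw := []byte(`[{"sepId": 1, "pools": [{"name": "data01", "types": ["D", "B"]}]}]`)

	var detailed []TypeDetailed
	if err := json.Unmarshal(raw, &detailed); err != nil {
		panic(err)
	}
	fmt.Printf("SEP %d: pool %q supports types %v\n",
		detailed[0].SepID, detailed[0].Pools[0].Name, detailed[0].Pools[0].Types)
}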
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -41,6 +42,7 @@ import (

	"github.com/rudecs/terraform-provider-decort/internal/constants"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
+	"github.com/rudecs/terraform-provider-decort/internal/status"
	log "github.com/sirupsen/logrus"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
@@ -119,6 +121,9 @@ func resourceDiskCreate(ctx context.Context, d *schema.ResourceData, m interface
}

func resourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
+	urlValues := &url.Values{}
+	c := m.(*controller.ControllerCfg)
+
	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		d.SetId("")
@@ -128,6 +133,28 @@ func resourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface{}
		return nil
	}

+	if disk.Status == status.Destroyed || disk.Status == status.Purged {
+		d.Set("disk_id", 0)
+		return resourceDiskCreate(ctx, d, m)
+	} else if disk.Status == status.Deleted {
+		urlValues.Add("diskId", d.Id())
+		urlValues.Add("reason", d.Get("reason").(string))
+
+		_, err := c.DecortAPICall(ctx, "POST", disksRestoreAPI, urlValues)
+		if err != nil {
+			return diag.FromErr(err)
+		}
+		urlValues = &url.Values{}
+		disk, err = utilityDiskCheckPresence(ctx, d, m)
+		if disk == nil {
+			d.SetId("")
+			if err != nil {
+				return diag.FromErr(err)
+			}
+			return nil
+		}
+	}
+
	diskAcl, _ := json.Marshal(disk.Acl)

	d.Set("account_id", disk.AccountID)
@@ -169,7 +196,7 @@ func resourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface{}
	d.Set("sep_type", disk.SepType)
	d.Set("size_max", disk.SizeMax)
	d.Set("size_used", disk.SizeUsed)
-	d.Set("snapshots", flattendDiskSnapshotList(disk.Snapshots))
+	d.Set("snapshots", flattenDiskSnapshotList(disk.Snapshots))
	d.Set("status", disk.Status)
	d.Set("tech_status", disk.TechStatus)
	d.Set("type", disk.Type)
@@ -179,9 +206,27 @@ func resourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface{}
}

func resourceDiskUpdate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
+	disk, err := utilityDiskCheckPresence(ctx, d, m)
+	if disk == nil {
+		if err != nil {
+			return diag.FromErr(err)
+		}
+		return nil
+	}
+	if disk.Status == status.Destroyed || disk.Status == status.Purged {
+		return resourceDiskCreate(ctx, d, m)
+	} else if disk.Status == status.Deleted {
+		urlValues.Add("diskId", d.Id())
+		urlValues.Add("reason", d.Get("reason").(string))
+
+		_, err := c.DecortAPICall(ctx, "POST", disksRestoreAPI, urlValues)
+		if err != nil {
+			return diag.FromErr(err)
+		}
+		urlValues = &url.Values{}
+	}
+
	if d.HasChange("size_max") {
		oldSize, newSize := d.GetChange("size_max")
@@ -238,26 +283,10 @@ func resourceDiskUpdate(ctx context.Context, d *schema.ResourceData, m interface
		urlValues = &url.Values{}
	}

-	if d.HasChange("restore") {
-		if d.Get("restore").(bool) {
-			urlValues.Add("diskId", d.Id())
-			urlValues.Add("reason", d.Get("reason").(string))
-
-			_, err := c.DecortAPICall(ctx, "POST", disksRestoreAPI, urlValues)
-			if err != nil {
-				return diag.FromErr(err)
-			}
-
-			urlValues = &url.Values{}
-		}
-
-	}
-
	return resourceDiskRead(ctx, d, m)
}

func resourceDiskDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		if err != nil {
@@ -265,7 +294,9 @@ func resourceDiskDelete(ctx context.Context, d *schema.ResourceData, m interface
		}
		return nil
	}
+	if disk.Status == status.Destroyed || disk.Status == status.Purged {
+		return nil
+	}
	params := &url.Values{}
	params.Add("diskId", d.Id())
	params.Add("detach", strconv.FormatBool(d.Get("detach").(bool)))
@@ -277,126 +308,141 @@ func resourceDiskDelete(ctx context.Context, d *schema.ResourceData, m interface
	if err != nil {
		return diag.FromErr(err)
	}

	return nil
}

func resourceDiskSchemaMake() map[string]*schema.Schema {
	rets := map[string]*schema.Schema{
		"account_id": {Type: schema.TypeInt, Required: true, ForceNew: true, Description: "The unique ID of the subscriber-owner of the disk"},
		"disk_name":  {Type: schema.TypeString, Required: true, Description: "Name of disk"},
		"size_max":   {Type: schema.TypeInt, Required: true, Description: "Size in GB"},
		"gid":        {Type: schema.TypeInt, Required: true, ForceNew: true, Description: "ID of the grid (platform)"},
		"pool":       {Type: schema.TypeString, Optional: true, Computed: true, Description: "Pool for disk location"},
		"sep_id":     {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Storage endpoint provider ID to create disk"},
		"desc":       {Type: schema.TypeString, Optional: true, Computed: true, Description: "Description of disk"},
		"type": {
			Type:         schema.TypeString,
			Optional:     true,
			Computed:     true,
			ValidateFunc: validation.StringInSlice([]string{"D", "B", "T"}, false),
			Description:  "The type of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
		},

-		"detach":      {Type: schema.TypeBool, Optional: true, Default: false, Description: "detach disk from machine first"},
+		"detach":      {Type: schema.TypeBool, Optional: true, Default: false, Description: "Detaching the disk from compute"},
-		"permanently": {Type: schema.TypeBool, Optional: true, Default: false, Description: "whether to completely delete the disk, works only with non attached disks"},
+		"permanently": {Type: schema.TypeBool, Optional: true, Default: false, Description: "Whether to completely delete the disk, works only with non attached disks"},
-		"reason":      {Type: schema.TypeString, Optional: true, Default: "", Description: "reason for an action"},
+		"reason":      {Type: schema.TypeString, Optional: true, Default: "", Description: "Reason for deletion"},
-		"restore":     {Type: schema.TypeBool, Optional: true, Default: false, Description: "restore deleting disk"},

		"disk_id":          {Type: schema.TypeInt, Computed: true, Description: "Disk ID. Duplicates the value of the ID parameter"},
		"account_name":     {Type: schema.TypeString, Computed: true, Description: "The name of the subscriber ('account') to whom this disk belongs"},
		"acl":              {Type: schema.TypeString, Computed: true},
		"boot_partition":   {Type: schema.TypeInt, Computed: true, Description: "Number of disk partitions"},
		"compute_id":       {Type: schema.TypeInt, Computed: true, Description: "Compute ID"},
		"compute_name":     {Type: schema.TypeString, Computed: true, Description: "Compute name"},
		"created_time":     {Type: schema.TypeInt, Computed: true, Description: "Created time"},
		"deleted_time":     {Type: schema.TypeInt, Computed: true, Description: "Deleted time"},
		"destruction_time": {Type: schema.TypeInt, Computed: true, Description: "Time of final deletion"},
		"devicename":       {Type: schema.TypeString, Computed: true, Description: "Name of the device"},
		"disk_path":        {Type: schema.TypeString, Computed: true, Description: "Disk path"},
		"guid":             {Type: schema.TypeInt, Computed: true, Description: "Disk ID on the storage side"},
		"image_id":         {Type: schema.TypeInt, Computed: true, Description: "Image ID"},
		"images": {
			Type:     schema.TypeList,
@@ -404,6 +450,7 @@ func resourceDiskSchemaMake() map[string]*schema.Schema {
			Elem: &schema.Schema{
				Type: schema.TypeString,
			},
+			Description: "IDs of images using the disk",
		},
		"iotune": {
			Type: schema.TypeList,
@@ -413,143 +460,171 @@ func resourceDiskSchemaMake() map[string]*schema.Schema {
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"read_bytes_sec":      {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Number of bytes to read per second"},
					"read_bytes_sec_max":  {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Maximum number of bytes to read"},
					"read_iops_sec":       {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Number of io read operations per second"},
					"read_iops_sec_max":   {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Maximum number of io read operations"},
					"size_iops_sec":       {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Size of io operations"},
					"total_bytes_sec":     {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Total size bytes per second"},
					"total_bytes_sec_max": {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Maximum total size of bytes per second"},
					"total_iops_sec":      {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Total number of io operations per second"},
					"total_iops_sec_max":  {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Maximum total number of io operations per second"},
					"write_bytes_sec":     {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Number of bytes to write per second"},
					"write_bytes_sec_max": {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Maximum number of bytes to write per second"},
					"write_iops_sec":      {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Number of write operations per second"},
					"write_iops_sec_max":  {Type: schema.TypeInt, Optional: true, Computed: true, Description: "Maximum number of write operations per second"},
				},
			},
		},
		"iqn":        {Type: schema.TypeString, Computed: true, Description: "Disk IQN"},
		"login":      {Type: schema.TypeString, Computed: true, Description: "Login to access the disk"},
		"milestones": {Type: schema.TypeInt, Computed: true, Description: "Milestones"},

		"order":     {Type: schema.TypeInt, Computed: true, Description: "Disk order"},
		"params":    {Type: schema.TypeString, Computed: true, Description: "Disk params"},
		"parent_id": {Type: schema.TypeInt, Computed: true, Description: "ID of the parent disk"},
		"passwd":    {Type: schema.TypeString, Computed: true, Description: "Password to access the disk"},
		"pci_slot":  {Type: schema.TypeInt, Computed: true, Description: "ID of the pci slot to which the disk is connected"},

		"purge_attempts":        {Type: schema.TypeInt, Computed: true, Description: "Number of deletion attempts"},
		"purge_time":            {Type: schema.TypeInt, Computed: true, Description: "Time of the last deletion attempt"},
		"reality_device_number": {Type: schema.TypeInt, Computed: true, Description: "Reality device number"},
		"reference_id":          {Type: schema.TypeString, Computed: true, Description: "ID of the reference to the disk"},
		"res_id":                {Type: schema.TypeString, Computed: true, Description: "Resource ID"},
		"res_name":              {Type: schema.TypeString, Computed: true, Description: "Name of the resource"},
		"role":                  {Type: schema.TypeString, Computed: true, Description: "Disk role"},

		"sep_type":  {Type: schema.TypeString, Computed: true, Description: "Type SEP. Defines the type of storage system and contains one of the values set in the cloud platform"},
		"size_used": {Type: schema.TypeInt, Computed: true, Description: "Number of used space, in GB"},
		"snapshots": {
			Type:     schema.TypeList,
@@ -557,43 +632,52 @@ func resourceDiskSchemaMake() map[string]*schema.Schema {
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"guid":          {Type: schema.TypeString, Computed: true, Description: "ID of the snapshot"},
					"label":         {Type: schema.TypeString, Computed: true, Description: "Name of the snapshot"},
					"res_id":        {Type: schema.TypeString, Computed: true, Description: "Reference to the snapshot"},
					"snap_set_guid": {Type: schema.TypeString, Computed: true, Description: "The set snapshot ID"},
					"snap_set_time": {Type: schema.TypeInt, Computed: true, Description: "The set time of the snapshot"},
					"timestamp":     {Type: schema.TypeInt, Computed: true, Description: "Snapshot time"},
				},
			},
		},
		"status":      {Type: schema.TypeString, Computed: true, Description: "Disk status"},
		"tech_status": {Type: schema.TypeString, Computed: true, Description: "Technical status of the disk"},
		"vmid":        {Type: schema.TypeInt, Computed: true, Description: "Virtual Machine ID (Deprecated)"},
	}
@@ -610,15 +694,15 @@ func ResourceDisk() *schema.Resource {
		DeleteContext: resourceDiskDelete,

		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
		},

		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout180s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout180s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
		},

		Schema: resourceDiskSchemaMake(),
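The new timeouts reference shared duration values such as constants.Timeout300s and constants.Timeout600s. A minimal sketch of how such values are typically declared follows; the actual internal/constants package may define them differently.

// Sketch only: the real internal/constants package may declare these differently.
package constants

import "time"

var (
	Timeout30s  = 30 * time.Second
	Timeout60s  = 60 * time.Second
	Timeout180s = 180 * time.Second
	Timeout300s = 300 * time.Second
	Timeout600s = 600 * time.Second
)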
internal/service/cloudapi/disks/resource_disk_snapshot.go (new file, +246)
@@ -0,0 +1,246 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"
	"net/url"
	"strconv"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"
)

func resourceDiskSnapshotCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	urlValues := &url.Values{}
	c := m.(*controller.ControllerCfg)
	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}
	snapshots := disk.Snapshots
	snapshot := Snapshot{}
	label := d.Get("label").(string)
	for _, sn := range snapshots {
		if label == sn.Label {
			snapshot = sn
			break
		}
	}
	if label != snapshot.Label {
		return diag.Errorf("Snapshot with label \"%v\" not found", label)
	}
	if rollback := d.Get("rollback").(bool); rollback {
		urlValues.Add("diskId", strconv.Itoa(d.Get("disk_id").(int)))
		urlValues.Add("label", label)
		urlValues.Add("timestamp", strconv.Itoa(d.Get("timestamp").(int)))
		log.Debugf("resourceDiskCreate: Snapshot rollback with label", label)
		_, err := c.DecortAPICall(ctx, "POST", disksSnapshotRollbackAPI, urlValues)
		if err != nil {
			return diag.FromErr(err)
		}
		urlValues = &url.Values{}
	}
	return resourceDiskSnapshotRead(ctx, d, m)
}

func resourceDiskSnapshotRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}
	snapshots := disk.Snapshots
	snapshot := Snapshot{}
	label := d.Get("label").(string)
	for _, sn := range snapshots {
		if label == sn.Label {
			snapshot = sn
			break
		}
	}
	if label != snapshot.Label {
		return diag.Errorf("Snapshot with label \"%v\" not found", label)
	}

	d.SetId(d.Get("label").(string))
	d.Set("timestamp", snapshot.TimeStamp)
	d.Set("guid", snapshot.Guid)
	d.Set("res_id", snapshot.ResId)
	d.Set("snap_set_guid", snapshot.SnapSetGuid)
	d.Set("snap_set_time", snapshot.SnapSetTime)
	return nil
}

func resourceDiskSnapshotUpdate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	urlValues := &url.Values{}
	c := m.(*controller.ControllerCfg)
	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}
	snapshots := disk.Snapshots
	snapshot := Snapshot{}
	label := d.Get("label").(string)
	for _, sn := range snapshots {
		if label == sn.Label {
			snapshot = sn
			break
		}
	}
	if label != snapshot.Label {
		return diag.Errorf("Snapshot with label \"%v\" not found", label)
	}
	if d.HasChange("rollback") && d.Get("rollback").(bool) == true {
		urlValues.Add("diskId", strconv.Itoa(d.Get("disk_id").(int)))
		urlValues.Add("label", label)
		urlValues.Add("timestamp", strconv.Itoa(d.Get("timestamp").(int)))
		log.Debugf("resourceDiskUpdtae: Snapshot rollback with label", label)
		_, err := c.DecortAPICall(ctx, "POST", disksSnapshotRollbackAPI, urlValues)
		if err != nil {
			return diag.FromErr(err)
		}
		urlValues = &url.Values{}
	}

	return resourceDiskSnapshotRead(ctx, d, m)
}

func resourceDiskSnapshotDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	c := m.(*controller.ControllerCfg)

	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil { //if disk not exits, can't call snapshotDelete
		d.SetId("")
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}

	params := &url.Values{}
	params.Add("diskId", strconv.Itoa(d.Get("disk_id").(int)))
	params.Add("label", d.Get("label").(string))

	_, err = c.DecortAPICall(ctx, "POST", disksSnapshotDeleteAPI, params)
	if err != nil {
		return diag.FromErr(err)
	}
	return nil
}

func resourceDiskSnapshotSchemaMake() map[string]*schema.Schema {
	rets := map[string]*schema.Schema{
		"disk_id": {
			Type:        schema.TypeInt,
			Required:    true,
			ForceNew:    true,
			Description: "The unique ID of the subscriber-owner of the disk",
		},
		"label": {
			Type:        schema.TypeString,
			Required:    true,
			ForceNew:    true,
			Description: "Name of the snapshot",
		},
		"rollback": {
			Type:        schema.TypeBool,
			Optional:    true,
			Default:     false,
			Description: "Needed in order to make a snapshot rollback",
		},
		"guid": {
			Type:        schema.TypeString,
			Computed:    true,
			Description: "ID of the snapshot",
		},
		"timestamp": {
			Type:        schema.TypeInt,
			Optional:    true,
			Computed:    true,
			Description: "Snapshot time",
		},
		"res_id": {
			Type:        schema.TypeString,
			Computed:    true,
			Description: "Reference to the snapshot",
		},
		"snap_set_guid": {
			Type:        schema.TypeString,
			Computed:    true,
			Description: "The set snapshot ID",
		},
		"snap_set_time": {
			Type:        schema.TypeInt,
			Computed:    true,
			Description: "The set time of the snapshot",
		},
	}
	return rets
}

func ResourceDiskSnapshot() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		CreateContext: resourceDiskSnapshotCreate,
		ReadContext:   resourceDiskSnapshotRead,
		UpdateContext: resourceDiskSnapshotUpdate,
		DeleteContext: resourceDiskSnapshotDelete,

		Importer: &schema.ResourceImporter{
			StateContext: schema.ImportStatePassthroughContext,
		},

		Timeouts: &schema.ResourceTimeout{
			Create:  &constants.Timeout600s,
			Read:    &constants.Timeout300s,
			Update:  &constants.Timeout300s,
			Delete:  &constants.Timeout300s,
			Default: &constants.Timeout300s,
		},

		Schema: resourceDiskSnapshotSchemaMake(),
	}
}
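Every handler in this new file repeats the same lookup-by-label scan over disk.Snapshots before acting. A minimal, self-contained sketch of that pattern follows; the Snapshot type here is a reduced stand-in and the helper name is illustrative, not part of the commit.

package main

import "fmt"

// Snapshot is a reduced stand-in for the provider's snapshot model,
// carrying only the fields the lookup needs.
type Snapshot struct {
	Label     string
	TimeStamp int
}

// findSnapshotByLabel mirrors the loop used by the create/read/update
// handlers: scan the disk's snapshot list and report an error when no
// entry carries the requested label.
func findSnapshotByLabel(snapshots []Snapshot, label string) (Snapshot, error) {
	for _, sn := range snapshots {
		if sn.Label == label {
			return sn, nil
		}
	}
	return Snapshot{}, fmt.Errorf("snapshot with label %q not found", label)
}

func main() {
	snaps := []Snapshot{{Label: "daily", TimeStamp: 1660000000}}
	if sn, err := findSnapshotByLabel(snaps, "daily"); err == nil {
		fmt.Println(sn.TimeStamp)
	}
}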
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
|
|||||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||||
)
|
)
|
||||||
|
|
||||||
func utilityDiskListCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (DisksList, error) {
|
func utilityDiskListCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}, api string) (DisksList, error) {
|
||||||
diskList := DisksList{}
|
diskList := DisksList{}
|
||||||
c := m.(*controller.ControllerCfg)
|
c := m.(*controller.ControllerCfg)
|
||||||
urlValues := &url.Values{}
|
urlValues := &url.Values{}
|
||||||
@@ -63,7 +64,7 @@ func utilityDiskListCheckPresence(ctx context.Context, d *schema.ResourceData, m
|
|||||||
}
|
}
|
||||||
|
|
||||||
log.Debugf("utilityDiskListCheckPresence: load disk list")
|
log.Debugf("utilityDiskListCheckPresence: load disk list")
|
||||||
diskListRaw, err := c.DecortAPICall(ctx, "POST", disksListAPI, urlValues)
|
diskListRaw, err := c.DecortAPICall(ctx, "POST", api, urlValues)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, err
|
return nil, err
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -0,0 +1,62 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"
	"encoding/json"
	"net/url"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"
)

func utilityDiskListTypesDetailedCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (TypesDetailedList, error) {
	listTypesDetailed := TypesDetailedList{}
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("detailed", "true")
	log.Debugf("utilityDiskListTypesDetailedCheckPresence: load disk list Types Detailed")
	diskListRaw, err := c.DecortAPICall(ctx, "POST", disksListTypesAPI, urlValues)
	if err != nil {
		return nil, err
	}

	err = json.Unmarshal([]byte(diskListRaw), &listTypesDetailed)
	if err != nil {
		return nil, err
	}

	return listTypesDetailed, nil
}
internal/service/cloudapi/disks/utility_disk_types_list.go (new file, +62)
@@ -0,0 +1,62 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"
	"encoding/json"
	"net/url"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"
)

func utilityDiskListTypesCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (TypesList, error) {
	typesList := TypesList{}
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("detailed", "false")
	log.Debugf("utilityDiskListTypesCheckPresence: load disk list Types Detailed")
	diskListRaw, err := c.DecortAPICall(ctx, "POST", disksListTypesAPI, urlValues)
	if err != nil {
		return nil, err
	}

	err = json.Unmarshal([]byte(diskListRaw), &typesList)
	if err != nil {
		return nil, err
	}

	return typesList, nil
}
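The two list-types helpers above differ only in the value of the detailed flag they post to the same listTypes endpoint. The sketch below shows the request payload each variant produces; the helper name and sample usage are illustrative, not part of the commit.

package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// listTypesPayload builds the form values the helpers send:
// detailed=true is used by the detailed types data source,
// detailed=false by the plain types list.
func listTypesPayload(detailed bool) string {
	urlValues := &url.Values{}
	urlValues.Add("detailed", strconv.FormatBool(detailed))
	return urlValues.Encode()
}

func main() {
	fmt.Println(listTypesPayload(true))  // detailed=true
	fmt.Println(listTypesPayload(false)) // detailed=false
}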
@@ -31,7 +31,7 @@ Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package image

-const imageCreateAPI = "/restmachine/cloudapi/image/createImage"
+const imageCreateAPI = "/restmachine/cloudapi/image/create"
const imageCreateVirtualAPI = "/restmachine/cloudapi/image/createVirtual"
const imageGetAPI = "/restmachine/cloudapi/image/get"
const imageListGetAPI = "/restmachine/cloudapi/image/list"
@@ -33,6 +33,7 @@ package image

import (
	"context"
+	"encoding/json"
	"net/url"
	"strconv"

@@ -52,7 +53,7 @@ func resourceImageCreate(ctx context.Context, d *schema.ResourceData, m interfac
	urlValues.Add("url", d.Get("url").(string))
	urlValues.Add("gid", strconv.Itoa(d.Get("gid").(int)))
	urlValues.Add("boottype", d.Get("boot_type").(string))
-	urlValues.Add("imagetype", d.Get("image_type").(string))
+	urlValues.Add("imagetype", d.Get("type").(string))

	tstr := d.Get("drivers").([]interface{})
	temp := ""
@@ -94,11 +95,25 @@ func resourceImageCreate(ctx context.Context, d *schema.ResourceData, m interfac
	if architecture, ok := d.GetOk("architecture"); ok {
		urlValues.Add("architecture", architecture.(string))
	}
+	/* uncomment then OK
	imageId, err := c.DecortAPICall(ctx, "POST", imageCreateAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}
+	*/
+	//innovation
+	res, err := c.DecortAPICall(ctx, "POST", imageCreateAPI, urlValues)
+	if err != nil {
+		return diag.FromErr(err)
+	}
+
+	i := make([]interface{}, 0)
+	err = json.Unmarshal([]byte(res), &i)
+	if err != nil {
+		return diag.FromErr(err)
+	}
+	imageId := strconv.Itoa(int(i[1].(float64)))
+	// end innovation
+
	d.SetId(imageId)
	d.Set("image_id", imageId)
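The reworked create path above reads the image ID out of a JSON array returned by the create call, taking the second element. A self-contained sketch of that parsing step follows; the sample response body is invented for illustration.

package main

import (
	"encoding/json"
	"fmt"
	"strconv"
)

// imageIDFromResponse mirrors the "innovation" block above: unmarshal the
// raw response into a generic slice and convert the second element
// (a JSON number, decoded as float64) into the resource ID string.
func imageIDFromResponse(res string) (string, error) {
	i := make([]interface{}, 0)
	if err := json.Unmarshal([]byte(res), &i); err != nil {
		return "", err
	}
	if len(i) < 2 {
		return "", fmt.Errorf("unexpected response shape: %v", i)
	}
	id, ok := i[1].(float64)
	if !ok {
		return "", fmt.Errorf("unexpected response shape: %v", i)
	}
	return strconv.Itoa(int(id)), nil
}

func main() {
	id, err := imageIDFromResponse(`["task-guid", 1234]`)
	if err != nil {
		panic(err)
	}
	fmt.Println(id) // 1234
}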
@@ -229,15 +244,15 @@ func ResourceImage() *schema.Resource {
	DeleteContext: resourceImageDelete,

	Importer: &schema.ResourceImporter{
-		State: schema.ImportStatePassthrough,
+		StateContext: schema.ImportStatePassthroughContext,
	},

	Timeouts: &schema.ResourceTimeout{
-		Create:  &constants.Timeout60s,
-		Read:    &constants.Timeout30s,
-		Update:  &constants.Timeout60s,
-		Delete:  &constants.Timeout60s,
-		Default: &constants.Timeout60s,
+		Create:  &constants.Timeout600s,
+		Read:    &constants.Timeout300s,
+		Update:  &constants.Timeout300s,
+		Delete:  &constants.Timeout300s,
+		Default: &constants.Timeout300s,
	},

	Schema: resourceImageSchemaMake(dataSourceImageExtendSchemaMake()),
@@ -116,15 +116,15 @@ func ResourceImageVirtual() *schema.Resource {
	DeleteContext: resourceImageDelete,

	Importer: &schema.ResourceImporter{
-		State: schema.ImportStatePassthrough,
+		StateContext: schema.ImportStatePassthroughContext,
	},

	Timeouts: &schema.ResourceTimeout{
-		Create:  &constants.Timeout60s,
-		Read:    &constants.Timeout30s,
-		Update:  &constants.Timeout60s,
-		Delete:  &constants.Timeout60s,
-		Default: &constants.Timeout60s,
+		Create:  &constants.Timeout600s,
+		Read:    &constants.Timeout300s,
+		Update:  &constants.Timeout300s,
+		Delete:  &constants.Timeout300s,
+		Default: &constants.Timeout300s,
	},

	Schema: resourceImageVirtualSchemaMake(dataSourceImageExtendSchemaMake()),
@@ -376,15 +376,15 @@ func ResourceK8s() *schema.Resource {
	DeleteContext: resourceK8sDelete,

	Importer: &schema.ResourceImporter{
-		State: schema.ImportStatePassthrough,
+		StateContext: schema.ImportStatePassthroughContext,
	},

	Timeouts: &schema.ResourceTimeout{
-		Create:  &constants.Timeout20m,
-		Read:    &constants.Timeout30s,
-		Update:  &constants.Timeout20m,
-		Delete:  &constants.Timeout60s,
-		Default: &constants.Timeout60s,
+		Create:  &constants.Timeout600s,
+		Read:    &constants.Timeout300s,
+		Update:  &constants.Timeout300s,
+		Delete:  &constants.Timeout300s,
+		Default: &constants.Timeout300s,
	},

	Schema: resourceK8sSchemaMake(),
@@ -228,15 +228,15 @@ func ResourceK8sWg() *schema.Resource {
	DeleteContext: resourceK8sWgDelete,

	Importer: &schema.ResourceImporter{
-		State: schema.ImportStatePassthrough,
+		StateContext: schema.ImportStatePassthroughContext,
	},

	Timeouts: &schema.ResourceTimeout{
-		Create:  &constants.Timeout20m,
-		Read:    &constants.Timeout30s,
-		Update:  &constants.Timeout20m,
-		Delete:  &constants.Timeout60s,
-		Default: &constants.Timeout60s,
+		Create:  &constants.Timeout600s,
+		Read:    &constants.Timeout300s,
+		Update:  &constants.Timeout300s,
+		Delete:  &constants.Timeout300s,
+		Default: &constants.Timeout300s,
	},

	Schema: resourceK8sWgSchemaMake(),
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -31,16 +32,21 @@ Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package kvmvm

-const KvmX86CreateAPI = "/restmachine/cloudapi/kvmx86/create"
-const KvmPPCCreateAPI = "/restmachine/cloudapi/kvmppc/create"
-const ComputeGetAPI = "/restmachine/cloudapi/compute/get"
-const RgListComputesAPI = "/restmachine/cloudapi/rg/listComputes"
-const ComputeNetAttachAPI = "/restmachine/cloudapi/compute/netAttach"
-const ComputeNetDetachAPI = "/restmachine/cloudapi/compute/netDetach"
-const ComputeDiskAttachAPI = "/restmachine/cloudapi/compute/diskAttach"
-const ComputeDiskDetachAPI = "/restmachine/cloudapi/compute/diskDetach"
-const ComputeStartAPI = "/restmachine/cloudapi/compute/start"
-const ComputeStopAPI = "/restmachine/cloudapi/compute/stop"
-const ComputeResizeAPI = "/restmachine/cloudapi/compute/resize"
-const DisksResizeAPI = "/restmachine/cloudapi/disks/resize2"
-const ComputeDeleteAPI = "/restmachine/cloudapi/compute/delete"
+const (
+	KvmX86CreateAPI      = "/restmachine/cloudapi/kvmx86/create"
+	KvmPPCCreateAPI      = "/restmachine/cloudapi/kvmppc/create"
+	ComputeGetAPI        = "/restmachine/cloudapi/compute/get"
+	RgListComputesAPI    = "/restmachine/cloudapi/rg/listComputes"
+	ComputeNetAttachAPI  = "/restmachine/cloudapi/compute/netAttach"
+	ComputeNetDetachAPI  = "/restmachine/cloudapi/compute/netDetach"
+	ComputeDiskAttachAPI = "/restmachine/cloudapi/compute/diskAttach"
+	ComputeDiskDetachAPI = "/restmachine/cloudapi/compute/diskDetach"
+	ComputeStartAPI      = "/restmachine/cloudapi/compute/start"
+	ComputeStopAPI       = "/restmachine/cloudapi/compute/stop"
+	ComputeResizeAPI     = "/restmachine/cloudapi/compute/resize"
+	DisksResizeAPI       = "/restmachine/cloudapi/disks/resize2"
+	ComputeDeleteAPI     = "/restmachine/cloudapi/compute/delete"
+	ComputeUpdateAPI     = "/restmachine/cloudapi/compute/update"
+	ComputeDiskAddAPI    = "/restmachine/cloudapi/compute/diskAdd"
+	ComputeDiskDeleteAPI = "/restmachine/cloudapi/compute/diskDel"
+)
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -113,6 +114,36 @@ func parseComputeInterfacesToNetworks(ifaces []InterfaceRecord) []interface{} {
	return result
}

+func findInExtraDisks(DiskId uint, ExtraDisks []interface{}) bool {
+	for _, ExtraDisk := range ExtraDisks {
+		if DiskId == uint(ExtraDisk.(int)) {
+			return true
+		}
+	}
+	return false
+}
+
+func flattenComputeDisksDemo(disksList []DiskRecord, extraDisks []interface{}) []map[string]interface{} {
+	res := make([]map[string]interface{}, 0)
+	for _, disk := range disksList {
+		if disk.Name == "bootdisk" || findInExtraDisks(disk.ID, extraDisks) { //skip main bootdisk and extraDisks
+			continue
+		}
+		temp := map[string]interface{}{
+			"disk_name": disk.Name,
+			"disk_id":   disk.ID,
+			"disk_type": disk.Type,
+			"sep_id":    disk.SepID,
+			"pool":      disk.Pool,
+			"desc":      disk.Desc,
+			"image_id":  disk.ImageID,
+			"size":      disk.SizeMax,
+		}
+		res = append(res, temp)
+	}
+	return res
+}
+
func flattenCompute(d *schema.ResourceData, compFacts string) error {
	// This function expects that compFacts string contains response from API compute/get,
	// i.e. detailed information about compute instance.
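The new flattenComputeDisksDemo helper above hides the boot disk and any disk already tracked through extra_disks from the disks block. A reduced, runnable sketch of that filtering rule follows; DiskRecord is cut down here to the fields the filter touches, so it is a stand-in rather than the provider's real model type.

package main

import "fmt"

// DiskRecord stand-in with only the fields the filter reads.
type DiskRecord struct {
	ID   uint
	Name string
}

// findInExtraDisks reproduces the membership check used above.
func findInExtraDisks(diskID uint, extraDisks []interface{}) bool {
	for _, ed := range extraDisks {
		if diskID == uint(ed.(int)) {
			return true
		}
	}
	return false
}

// dataDisks applies the same rule as flattenComputeDisksDemo: skip the
// boot disk and anything listed in extra_disks, keep the rest.
func dataDisks(disks []DiskRecord, extraDisks []interface{}) []DiskRecord {
	res := make([]DiskRecord, 0)
	for _, d := range disks {
		if d.Name == "bootdisk" || findInExtraDisks(d.ID, extraDisks) {
			continue
		}
		res = append(res, d)
	}
	return res
}

func main() {
	disks := []DiskRecord{{1, "bootdisk"}, {2, "data-1"}, {3, "data-2"}}
	fmt.Println(dataDisks(disks, []interface{}{3})) // only data-1 remains
}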
@@ -139,7 +170,11 @@ func flattenCompute(d *schema.ResourceData, compFacts string) error {
	d.Set("cpu", model.Cpu)
	d.Set("ram", model.Ram)
	// d.Set("boot_disk_size", model.BootDiskSize) - bootdiskSize key in API compute/get is always zero, so we set boot_disk_size in another way
-	d.Set("image_id", model.ImageID)
+	if model.VirtualImageID != 0 {
+		d.Set("image_id", model.VirtualImageID)
+	} else {
+		d.Set("image_id", model.ImageID)
+	}
	d.Set("description", model.Desc)
	d.Set("cloud_init", "applied") // NOTE: for existing compute we hard-code this value as an indicator for DiffSuppress fucntion
	// d.Set("status", model.Status)
@@ -158,12 +193,12 @@ func flattenCompute(d *schema.ResourceData, compFacts string) error {
	d.Set("sep_id", bootDisk.SepID)
	d.Set("pool", bootDisk.Pool)

-	if len(model.Disks) > 0 {
-		log.Debugf("flattenCompute: calling parseComputeDisksToExtraDisks for %d disks", len(model.Disks))
-		if err = d.Set("extra_disks", parseComputeDisksToExtraDisks(model.Disks)); err != nil {
-			return err
-		}
-	}
+	//if len(model.Disks) > 0 {
+	//log.Debugf("flattenCompute: calling parseComputeDisksToExtraDisks for %d disks", len(model.Disks))
+	//if err = d.Set("extra_disks", parseComputeDisksToExtraDisks(model.Disks)); err != nil {
+	//return err
+	//}
+	//}

	if len(model.Interfaces) > 0 {
		log.Debugf("flattenCompute: calling parseComputeInterfacesToNetworks for %d interfaces", len(model.Interfaces))
@@ -179,6 +214,11 @@ func flattenCompute(d *schema.ResourceData, compFacts string) error {
		}
	}

+	err = d.Set("disks", flattenComputeDisksDemo(model.Disks, d.Get("extra_disks").(*schema.Set).List()))
+	if err != nil {
+		return err
+	}
+
	return nil
}

@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -92,6 +93,14 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
		urlValues.Add("pool", pool.(string))
	}

+	if ipaType, ok := d.GetOk("ipa_type"); ok {
+		urlValues.Add("ipaType", ipaType.(string))
+	}
+
+	if IS, ok := d.GetOk("is"); ok {
+		urlValues.Add("IS", IS.(string))
+	}
+
	/*
	sshKeysVal, sshKeysSet := d.GetOk("ssh_keys")
	if sshKeysSet {
@@ -123,6 +132,7 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
	if err != nil {
		return diag.FromErr(err)
	}
+	urlValues = &url.Values{}
	// Compute create API returns ID of the new Compute instance on success

	d.SetId(apiResp) // update ID of the resource to tell Terraform that the resource exists, albeit partially
@@ -140,6 +150,7 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
			log.Errorf("resourceComputeCreate: could not delete compute after failed creation: %v", err)
		}
		d.SetId("")
+		urlValues = &url.Values{}
	}
	}()

@@ -181,13 +192,50 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
		}
	}

+	if !cleanup {
+		if disks, ok := d.GetOk("disks"); ok {
+			log.Debugf("resourceComputeCreate: Create disks on ComputeID: %d", compId)
+			addedDisks := disks.([]interface{})
+			if len(addedDisks) > 0 {
+				for _, disk := range addedDisks {
+					diskConv := disk.(map[string]interface{})
+
+					urlValues.Add("computeId", d.Id())
+					urlValues.Add("diskName", diskConv["disk_name"].(string))
+					urlValues.Add("size", strconv.Itoa(diskConv["size"].(int)))
+					if diskConv["disk_type"].(string) != "" {
+						urlValues.Add("diskType", diskConv["disk_type"].(string))
+					}
+					if diskConv["sep_id"].(int) != 0 {
+						urlValues.Add("sepId", strconv.Itoa(diskConv["sep_id"].(int)))
+					}
+					if diskConv["pool"].(string) != "" {
+						urlValues.Add("pool", diskConv["pool"].(string))
+					}
+					if diskConv["desc"].(string) != "" {
+						urlValues.Add("desc", diskConv["desc"].(string))
+					}
+					if diskConv["image_id"].(int) != 0 {
+						urlValues.Add("imageId", strconv.Itoa(diskConv["image_id"].(int)))
+					}
+					_, err := c.DecortAPICall(ctx, "POST", ComputeDiskAddAPI, urlValues)
+					if err != nil {
+						cleanup = true
+						return diag.FromErr(err)
+					}
+					urlValues = &url.Values{}
+				}
+			}
+		}
+	}
+
	log.Debugf("resourceComputeCreate: new Compute ID %d, name %s creation sequence complete", compId, d.Get("name").(string))

	// We may reuse dataSourceComputeRead here as we maintain similarity
	// between Compute resource and Compute data source schemas
	// Compute read function will also update resource ID on success, so that Terraform
	// will know the resource exists
-	return dataSourceComputeRead(ctx, d, m)
+	return resourceComputeRead(ctx, d, m)
}

func resourceComputeRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
@@ -228,32 +276,32 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
	*/

	// 1. Resize CPU/RAM
-	params := &url.Values{}
+	urlValues := &url.Values{}
	doUpdate := false
-	params.Add("computeId", d.Id())
+	urlValues.Add("computeId", d.Id())

	oldCpu, newCpu := d.GetChange("cpu")
	if oldCpu.(int) != newCpu.(int) {
-		params.Add("cpu", fmt.Sprintf("%d", newCpu.(int)))
+		urlValues.Add("cpu", fmt.Sprintf("%d", newCpu.(int)))
		doUpdate = true
	} else {
-		params.Add("cpu", "0") // no change to CPU allocation
+		urlValues.Add("cpu", "0") // no change to CPU allocation
	}

	oldRam, newRam := d.GetChange("ram")
	if oldRam.(int) != newRam.(int) {
-		params.Add("ram", fmt.Sprintf("%d", newRam.(int)))
+		urlValues.Add("ram", fmt.Sprintf("%d", newRam.(int)))
		doUpdate = true
	} else {
-		params.Add("ram", "0")
+		urlValues.Add("ram", "0")
	}

	if doUpdate {
		log.Debugf("resourceComputeUpdate: changing CPU %d -> %d and/or RAM %d -> %d",
			oldCpu.(int), newCpu.(int),
			oldRam.(int), newRam.(int))
-		params.Add("force", "true")
-		_, err := c.DecortAPICall(ctx, "POST", ComputeResizeAPI, params)
+		urlValues.Add("force", "true")
+		_, err := c.DecortAPICall(ctx, "POST", ComputeResizeAPI, urlValues)
		if err != nil {
			return diag.FromErr(err)
		}
@@ -276,15 +324,27 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
	}

	// 3. Calculate and apply changes to data disks
-	err := utilityComputeExtraDisksConfigure(ctx, d, m, true) // pass do_delta = true to apply changes, if any
+	if d.HasChange("extra_disks") {
+		err := utilityComputeExtraDisksConfigure(ctx, d, m, true) // pass do_delta = true to apply changes, if any
+		if err != nil {
+			return diag.FromErr(err)
+		}
+	}
+
+	// 4. Calculate and apply changes to network connections
+	err := utilityComputeNetworksConfigure(ctx, d, m, true) // pass do_delta = true to apply changes, if any
	if err != nil {
		return diag.FromErr(err)
	}

-	// 4. Calculate and apply changes to network connections
-	err = utilityComputeNetworksConfigure(ctx, d, m, true) // pass do_delta = true to apply changes, if any
-	if err != nil {
-		return diag.FromErr(err)
+	if d.HasChange("description") || d.HasChange("name") {
+		updateParams := &url.Values{}
+		updateParams.Add("computeId", d.Id())
+		updateParams.Add("name", d.Get("name").(string))
+		updateParams.Add("desc", d.Get("description").(string))
+		if _, err := c.DecortAPICall(ctx, "POST", ComputeUpdateAPI, updateParams); err != nil {
+			return diag.FromErr(err)
+		}
	}

	if d.HasChange("started") {
@@ -301,9 +361,108 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
		}
	}

+	urlValues = &url.Values{}
+	if d.HasChange("disks") {
+		deletedDisks := make([]interface{}, 0)
+		addedDisks := make([]interface{}, 0)
+
+		oldDisks, newDisks := d.GetChange("disks")
+		oldConv := oldDisks.([]interface{})
+		newConv := newDisks.([]interface{})
+
+		for _, el := range oldConv {
+			if !isContainsDisk(newConv, el) {
+				deletedDisks = append(deletedDisks, el)
+			}
+		}
+
+		for _, el := range newConv {
+			if !isContainsDisk(oldConv, el) {
+				addedDisks = append(addedDisks, el)
+			}
+		}
+
+		if len(deletedDisks) > 0 {
+			urlValues.Add("computeId", d.Id())
+			urlValues.Add("force", "false")
+			_, err := c.DecortAPICall(ctx, "POST", ComputeStopAPI, urlValues)
+			if err != nil {
+				return diag.FromErr(err)
+			}
+			urlValues = &url.Values{}
+
+			for _, disk := range deletedDisks {
+				diskConv := disk.(map[string]interface{})
+				if diskConv["disk_name"].(string) == "bootdisk" {
+					continue
+				}
+				urlValues.Add("computeId", d.Id())
+				urlValues.Add("diskId", strconv.Itoa(diskConv["disk_id"].(int)))
+				urlValues.Add("permanently", strconv.FormatBool(diskConv["permanently"].(bool)))
+				_, err := c.DecortAPICall(ctx, "POST", ComputeDiskDeleteAPI, urlValues)
+				if err != nil {
+					return diag.FromErr(err)
+				}
+
+				urlValues = &url.Values{}
+			}
+			urlValues.Add("computeId", d.Id())
+			urlValues.Add("altBootId", "0")
+			_, err = c.DecortAPICall(ctx, "POST", ComputeStartAPI, urlValues)
+			if err != nil {
+				return diag.FromErr(err)
+			}
+			urlValues = &url.Values{}
+		}
+
+		if len(addedDisks) > 0 {
+			for _, disk := range addedDisks {
+				diskConv := disk.(map[string]interface{})
+				if diskConv["disk_name"].(string) == "bootdisk" {
+					continue
+				}
+				urlValues.Add("computeId", d.Id())
+				urlValues.Add("diskName", diskConv["disk_name"].(string))
+				urlValues.Add("size", strconv.Itoa(diskConv["size"].(int)))
+				if diskConv["disk_type"].(string) != "" {
+					urlValues.Add("diskType", diskConv["disk_type"].(string))
+				}
+				if diskConv["sep_id"].(int) != 0 {
+					urlValues.Add("sepId", strconv.Itoa(diskConv["sep_id"].(int)))
+				}
+				if diskConv["pool"].(string) != "" {
+					urlValues.Add("pool", diskConv["pool"].(string))
+				}
+				if diskConv["desc"].(string) != "" {
+					urlValues.Add("desc", diskConv["desc"].(string))
+				}
+				if diskConv["image_id"].(int) != 0 {
+					urlValues.Add("imageId", strconv.Itoa(diskConv["image_id"].(int)))
+				}
+				_, err := c.DecortAPICall(ctx, "POST", ComputeDiskAddAPI, urlValues)
+				if err != nil {
+					return diag.FromErr(err)
+				}
+
+				urlValues = &url.Values{}
+			}
+		}
+	}
+
	// we may reuse dataSourceComputeRead here as we maintain similarity
	// between Compute resource and Compute data source schemas
-	return dataSourceComputeRead(ctx, d, m)
+	return resourceComputeRead(ctx, d, m)
+}
+
+func isContainsDisk(els []interface{}, el interface{}) bool {
+	for _, elOld := range els {
+		elOldConv := elOld.(map[string]interface{})
+		elConv := el.(map[string]interface{})
+		if elOldConv["disk_name"].(string) == elConv["disk_name"].(string) {
+			return true
+		}
+	}
+	return false
}

func resourceComputeDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
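The update handler above diffs the old and new disks blocks by disk_name: entries only in the old list are removed, entries only in the new list are added. A minimal sketch of that set diff, reduced to plain name slices for illustration:

package main

import "fmt"

// diffByName returns the names present only in oldNames (to delete) and
// the names present only in newNames (to add), mirroring the
// deletedDisks/addedDisks calculation in resourceComputeUpdate.
func diffByName(oldNames, newNames []string) (deleted, added []string) {
	oldSet := map[string]bool{}
	newSet := map[string]bool{}
	for _, n := range oldNames {
		oldSet[n] = true
	}
	for _, n := range newNames {
		newSet[n] = true
	}
	for _, n := range oldNames {
		if !newSet[n] {
			deleted = append(deleted, n)
		}
	}
	for _, n := range newNames {
		if !oldSet[n] {
			added = append(added, n)
		}
	}
	return deleted, added
}

func main() {
	del, add := diffByName([]string{"bootdisk", "data-1"}, []string{"bootdisk", "data-2"})
	fmt.Println(del, add) // [data-1] [data-2]
}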
@@ -318,8 +477,8 @@ func resourceComputeDelete(ctx context.Context, d *schema.ResourceData, m interf

	params := &url.Values{}
	params.Add("computeId", d.Id())
-	params.Add("permanently", "1")
-	params.Add("detachDisks", "1")
+	params.Add("permanently", strconv.FormatBool(d.Get("permanently").(bool)))
+	params.Add("detachDisks", strconv.FormatBool(d.Get("detach_disks").(bool)))

	if _, err := c.DecortAPICall(ctx, "POST", ComputeDeleteAPI, params); err != nil {
		return diag.FromErr(err)
@@ -328,6 +487,244 @@ func resourceComputeDelete(ctx context.Context, d *schema.ResourceData, m interf
	return nil
}

+func ResourceComputeSchemaMake() map[string]*schema.Schema {
+	rets := map[string]*schema.Schema{
+		"name": {
+			Type:        schema.TypeString,
+			Required:    true,
+			Description: "Name of this compute. Compute names are case sensitive and must be unique in the resource group.",
+		},
+
+		"rg_id": {
+			Type:         schema.TypeInt,
+			Required:     true,
+			ValidateFunc: validation.IntAtLeast(1),
+			Description:  "ID of the resource group where this compute should be deployed.",
+		},
+
+		"driver": {
+			Type:         schema.TypeString,
+			Required:     true,
+			ForceNew:     true,
+			StateFunc:    statefuncs.StateFuncToUpper,
+			ValidateFunc: validation.StringInSlice([]string{"KVM_X86", "KVM_PPC"}, false), // observe case while validating
+			Description:  "Hardware architecture of this compute instance.",
+		},
+
+		"cpu": {
+			Type:         schema.TypeInt,
+			Required:     true,
+			ValidateFunc: validation.IntBetween(1, constants.MaxCpusPerCompute),
+			Description:  "Number of CPUs to allocate to this compute instance.",
+		},
+
+		"ram": {
+			Type:         schema.TypeInt,
+			Required:     true,
+			ValidateFunc: validation.IntAtLeast(constants.MinRamPerCompute),
+			Description:  "Amount of RAM in MB to allocate to this compute instance.",
+		},
+
+		"image_id": {
+			Type:        schema.TypeInt,
+			Required:    true,
+			ForceNew:    true,
+			Description: "ID of the OS image to base this compute instance on.",
+		},
+
+		"boot_disk_size": {
+			Type:        schema.TypeInt,
+			Required:    true,
+			Description: "This compute instance boot disk size in GB. Make sure it is large enough to accomodate selected OS image.",
+		},
+
+		"disks": {
+			Type:     schema.TypeList,
+			Computed: true,
+			Optional: true,
+			Elem: &schema.Resource{
+				Schema: map[string]*schema.Schema{
+					"disk_name": {
+						Type:        schema.TypeString,
+						Required:    true,
+						Description: "Name for disk",
+					},
+					"size": {
+						Type:        schema.TypeInt,
+						Required:    true,
+						Description: "Disk size in GiB",
+					},
+					"disk_type": {
+						Type:         schema.TypeString,
+						Computed:     true,
+						Optional:     true,
+						ValidateFunc: validation.StringInSlice([]string{"B", "D"}, false),
+						Description:  "The type of disk in terms of its role in compute: 'B=Boot, D=Data'",
+					},
+					"sep_id": {
+						Type:        schema.TypeInt,
+						Computed:    true,
+						Optional:    true,
+						Description: "Storage endpoint provider ID; by default the same with boot disk",
+					},
+					"pool": {
+						Type:        schema.TypeString,
+						Computed:    true,
+						Optional:    true,
+						Description: "Pool name; by default will be chosen automatically",
+					},
+					"desc": {
+						Type:        schema.TypeString,
+						Computed:    true,
+						Optional:    true,
+						Description: "Optional description",
+					},
+					"image_id": {
+						Type:        schema.TypeInt,
+						Computed:    true,
+						Optional:    true,
+						Description: "Specify image id for create disk from template",
+					},
+					"disk_id": {
+						Type:        schema.TypeInt,
+						Computed:    true,
+						Description: "Disk ID",
+					},
+					"permanently": {
+						Type:        schema.TypeBool,
+						Optional:    true,
+						Default:     false,
+						Description: "Disk deletion status",
+					},
+				},
+			},
+		},
+		"sep_id": {
+			Type:        schema.TypeInt,
+			Optional:    true,
+			Computed:    true,
+			ForceNew:    true,
+			Description: "ID of SEP to create bootDisk on. Uses image's sepId if not set.",
+		},
+
+		"pool": {
+			Type:        schema.TypeString,
+			Optional:    true,
+			Computed:    true,
+			ForceNew:    true,
+			Description: "Pool to use if sepId is set, can be also empty if needed to be chosen by system.",
+		},
+
+		"extra_disks": {
+			Type:     schema.TypeSet,
+			Optional: true,
+			MaxItems: constants.MaxExtraDisksPerCompute,
+			Elem: &schema.Schema{
+				Type: schema.TypeInt,
+			},
+			Description: "Optional list of IDs of extra disks to attach to this compute. You may specify several extra disks.",
+		},
+
+		"network": {
+			Type:     schema.TypeSet,
+			Optional: true,
+			MaxItems: constants.MaxNetworksPerCompute,
+			Elem: &schema.Resource{
+				Schema: networkSubresourceSchemaMake(),
+			},
+			Description: "Optional network connection(s) for this compute. You may specify several network blocks, one for each connection.",
+		},
+
+		/*
+			"ssh_keys": {
+				Type:     schema.TypeList,
+				Optional: true,
+				MaxItems: MaxSshKeysPerCompute,
+				Elem: &schema.Resource{
+					Schema: sshSubresourceSchemaMake(),
+				},
+				Description: "SSH keys to authorize on this compute instance.",
+			},
+		*/
+
+		"description": {
+			Type:        schema.TypeString,
+			Optional:    true,
+			Description: "Optional text description of this compute instance.",
+		},
+
+		"cloud_init": {
+			Type:             schema.TypeString,
+			Optional:         true,
+			Default:          "applied",
+			DiffSuppressFunc: cloudInitDiffSupperss,
+			Description:      "Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases.",
+		},
+
+		// The rest are Compute properties, which are "computed" once it is created
+		"rg_name": {
+			Type:        schema.TypeString,
+			Computed:    true,
+			Description: "Name of the resource group where this compute instance is located.",
+		},
+
+		"account_id": {
+			Type:        schema.TypeInt,
+			Computed:    true,
+			Description: "ID of the account this compute instance belongs to.",
+		},
+
+		"account_name": {
+			Type:        schema.TypeString,
+			Computed:    true,
+			Description: "Name of the account this compute instance belongs to.",
+		},
+
+		"boot_disk_id": {
+			Type:        schema.TypeInt,
+			Computed:    true,
+			Description: "This compute instance boot disk ID.",
+		},
+
+		"os_users": {
+			Type:     schema.TypeList,
+			Computed: true,
+			Elem: &schema.Resource{
+				Schema: osUsersSubresourceSchemaMake(),
+			},
+			Description: "Guest OS users provisioned on this compute instance.",
+		},
+
+		"started": {
+			Type:        schema.TypeBool,
+			Optional:    true,
+			Default:     true,
+			Description: "Is compute started.",
+		},
+		"detach_disks": {
+			Type:     schema.TypeBool,
+			Optional: true,
+			Default:  true,
+		},
+		"permanently": {
+			Type:     schema.TypeBool,
+			Optional: true,
+			Default:  true,
+		},
+		"is": {
+			Type:        schema.TypeString,
+			Optional:    true,
+			Description: "system name",
+		},
+		"ipa_type": {
+			Type:        schema.TypeString,
+			Optional:    true,
+			Description: "compute purpose",
+		},
+	}
+	return rets
+}
+
func ResourceCompute() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
@@ -338,169 +735,17 @@ func ResourceCompute() *schema.Resource {
		DeleteContext: resourceComputeDelete,

		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
		},

		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout180s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout180s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
		},

-		Schema: map[string]*schema.Schema{
-			"name": {
-				Type:        schema.TypeString,
-				Required:    true,
-				Description: "Name of this compute. Compute names are case sensitive and must be unique in the resource group.",
-			},
-
-			"rg_id": {
-				Type:         schema.TypeInt,
-				Required:     true,
-				ValidateFunc: validation.IntAtLeast(1),
-				Description:  "ID of the resource group where this compute should be deployed.",
-			},
-
-			"driver": {
-				Type:         schema.TypeString,
-				Required:     true,
-				ForceNew:     true,
-				StateFunc:    statefuncs.StateFuncToUpper,
-				ValidateFunc: validation.StringInSlice([]string{"KVM_X86", "KVM_PPC"}, false), // observe case while validating
-				Description:  "Hardware architecture of this compute instance.",
-			},
-
-			"cpu": {
-				Type:         schema.TypeInt,
-				Required:     true,
-				ValidateFunc: validation.IntBetween(1, constants.MaxCpusPerCompute),
-				Description:  "Number of CPUs to allocate to this compute instance.",
-			},
-
-			"ram": {
-				Type:         schema.TypeInt,
-				Required:     true,
-				ValidateFunc: validation.IntAtLeast(constants.MinRamPerCompute),
-				Description:  "Amount of RAM in MB to allocate to this compute instance.",
-			},
-
-			"image_id": {
-				Type:        schema.TypeInt,
-				Required:    true,
-				ForceNew:    true,
-				Description: "ID of the OS image to base this compute instance on.",
-			},
-
-			"boot_disk_size": {
-				Type:        schema.TypeInt,
-				Required:    true,
-				Description: "This compute instance boot disk size in GB. Make sure it is large enough to accomodate selected OS image.",
-			},
-
-			"sep_id": {
-				Type:        schema.TypeInt,
-				Optional:    true,
-				Computed:    true,
-				ForceNew:    true,
-				Description: "ID of SEP to create bootDisk on. Uses image's sepId if not set.",
-			},
-
-			"pool": {
-				Type:        schema.TypeString,
-				Optional:    true,
-				Computed:    true,
-				ForceNew:    true,
-				Description: "Pool to use if sepId is set, can be also empty if needed to be chosen by system.",
-			},
-
-			"extra_disks": {
-				Type:     schema.TypeSet,
-				Optional: true,
-				MaxItems: constants.MaxExtraDisksPerCompute,
-				Elem: &schema.Schema{
-					Type: schema.TypeInt,
-				},
-				Description: "Optional list of IDs of extra disks to attach to this compute. You may specify several extra disks.",
-			},
-
-			"network": {
-				Type:     schema.TypeSet,
-				Optional: true,
-				MaxItems: constants.MaxNetworksPerCompute,
-				Elem: &schema.Resource{
-					Schema: networkSubresourceSchemaMake(),
-				},
-				Description: "Optional network connection(s) for this compute. You may specify several network blocks, one for each connection.",
-			},
-
-			/*
-				"ssh_keys": {
-					Type:     schema.TypeList,
-					Optional: true,
-					MaxItems: MaxSshKeysPerCompute,
-					Elem: &schema.Resource{
-						Schema: sshSubresourceSchemaMake(),
-					},
-					Description: "SSH keys to authorize on this compute instance.",
-				},
-			*/
-
-			"description": {
-				Type:        schema.TypeString,
-				Optional:    true,
-				Description: "Optional text description of this compute instance.",
-			},
-
-			"cloud_init": {
-				Type:             schema.TypeString,
-				Optional:         true,
-				Default:          "applied",
-				DiffSuppressFunc: cloudInitDiffSupperss,
-				Description:      "Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases.",
-			},
-
-			// The rest are Compute properties, which are "computed" once it is created
-			"rg_name": {
-				Type:        schema.TypeString,
-				Computed:    true,
-				Description: "Name of the resource group where this compute instance is located.",
-			},
-
-			"account_id": {
-				Type:        schema.TypeInt,
-				Computed:    true,
-				Description: "ID of the account this compute instance belongs to.",
-			},
-
-			"account_name": {
-				Type:        schema.TypeString,
-				Computed:    true,
-				Description: "Name of the account this compute instance belongs to.",
+		Schema: ResourceComputeSchemaMake(),
|
|
||||||
},
|
|
||||||
|
|
||||||
"boot_disk_id": {
|
|
||||||
Type: schema.TypeInt,
|
|
||||||
Computed: true,
|
|
||||||
Description: "This compute instance boot disk ID.",
|
|
||||||
},
|
|
||||||
|
|
||||||
"os_users": {
|
|
||||||
Type: schema.TypeList,
|
|
||||||
Computed: true,
|
|
||||||
Elem: &schema.Resource{
|
|
||||||
Schema: osUsersSubresourceSchemaMake(),
|
|
||||||
},
|
|
||||||
Description: "Guest OS users provisioned on this compute instance.",
|
|
||||||
},
|
|
||||||
|
|
||||||
"started": {
|
|
||||||
Type: schema.TypeBool,
|
|
||||||
Optional: true,
|
|
||||||
Default: true,
|
|
||||||
Description: "Is compute started.",
|
|
||||||
},
|
|
||||||
},
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
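Since the inline compute schema above is extracted into ResourceComputeSchemaMake(), a quick way to guard the refactor is a schema sanity test. The sketch below is not part of the commit; the package name kvmvm and the exact key list are assumptions based on the surrounding code (the keys asserted are the ones visible in the new schema fragment above).

```go
package kvmvm

import "testing"

// Minimal sanity check for the extracted schema helper: only keys that are
// visible in the diff above are asserted, nothing else.
func TestResourceComputeSchemaMakeKeys(t *testing.T) {
	s := ResourceComputeSchemaMake()
	for _, key := range []string{"boot_disk_id", "os_users", "started", "detach_disks", "permanently"} {
		if _, ok := s[key]; !ok {
			t.Errorf("expected key %q in the compute resource schema", key)
		}
	}
}
```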
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
 Authors:
 Petr Krutov, <petr.krutov@digitalenergy.online>
 Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -91,16 +92,33 @@ func utilityComputeExtraDisksConfigure(ctx context.Context, d *schema.ResourceDa
 	detach_set := old_set.(*schema.Set).Difference(new_set.(*schema.Set))
 	log.Debugf("utilityComputeExtraDisksConfigure: detach set has %d items for Compute ID %s", detach_set.Len(), d.Id())
-	for _, diskId := range detach_set.List() {
+	if detach_set.Len() > 0 {
 		urlValues := &url.Values{}
 		urlValues.Add("computeId", d.Id())
-		urlValues.Add("diskId", fmt.Sprintf("%d", diskId.(int)))
-		_, err := c.DecortAPICall(ctx, "POST", ComputeDiskDetachAPI, urlValues)
+		urlValues.Add("force", "false")
+		_, err := c.DecortAPICall(ctx, "POST", ComputeStopAPI, urlValues)
 		if err != nil {
-			// failed to detach disk - there will be partial resource update
-			log.Errorf("utilityComputeExtraDisksConfigure: failed to detach disk ID %d from Compute ID %s: %s", diskId.(int), d.Id(), err)
-			apiErrCount++
-			lastSavedError = err
+			return err
 		}
+		for _, diskId := range detach_set.List() {
+			urlValues := &url.Values{}
+			urlValues.Add("computeId", d.Id())
+			urlValues.Add("diskId", fmt.Sprintf("%d", diskId.(int)))
+			_, err := c.DecortAPICall(ctx, "POST", ComputeDiskDetachAPI, urlValues)
+			if err != nil {
+				// failed to detach disk - there will be partial resource update
+				log.Errorf("utilityComputeExtraDisksConfigure: failed to detach disk ID %d from Compute ID %s: %s", diskId.(int), d.Id(), err)
+				apiErrCount++
+				lastSavedError = err
+			}
+		}
+		urlValues = &url.Values{}
+		urlValues.Add("computeId", d.Id())
+		urlValues.Add("altBootId", "0")
+		_, err = c.DecortAPICall(ctx, "POST", ComputeStartAPI, urlValues)
+		if err != nil {
+			return err
+		}
 	}
 }
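Restated for readability: the new flow stops the compute without force, detaches every disk in the diff set while recording per-disk errors for a partial-update report, and then starts the compute again with altBootId set to "0". The condensed sketch below is an abbreviation of the hunk above, not additional code in the commit; it reuses the variables and API constants visible there and elides the rest of the function.

```go
// Abbreviated sketch of the reworked body of utilityComputeExtraDisksConfigure.
if detach_set.Len() > 0 {
	// 1. Stop the compute first; "force" is passed as "false", as in the hunk above.
	stop := &url.Values{}
	stop.Add("computeId", d.Id())
	stop.Add("force", "false")
	if _, err := c.DecortAPICall(ctx, "POST", ComputeStopAPI, stop); err != nil {
		return err
	}

	// 2. Detach each disk; errors are recorded but do not abort the loop.
	for _, diskId := range detach_set.List() {
		v := &url.Values{}
		v.Add("computeId", d.Id())
		v.Add("diskId", fmt.Sprintf("%d", diskId.(int)))
		if _, err := c.DecortAPICall(ctx, "POST", ComputeDiskDetachAPI, v); err != nil {
			apiErrCount++
			lastSavedError = err
		}
	}

	// 3. Start the compute again, passing altBootId as "0", exactly as in the hunk above.
	start := &url.Values{}
	start.Add("computeId", d.Id())
	start.Add("altBootId", "0")
	if _, err := c.DecortAPICall(ctx, "POST", ComputeStartAPI, start); err != nil {
		return err
	}
}
```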
@@ -267,12 +285,12 @@ func utilityComputeCheckPresence(ctx context.Context, d *schema.ResourceData, m
 	// and RG ID
 	computeName, argSet := d.GetOk("name")
 	if !argSet {
-		return "", fmt.Errorf("Cannot locate compute instance if name is empty and no compute ID specified")
+		return "", fmt.Errorf("cannot locate compute instance if name is empty and no compute ID specified")
 	}

 	rgId, argSet := d.GetOk("rg_id")
 	if !argSet {
-		return "", fmt.Errorf("Cannot locate compute by name %s if no resource group ID is set", computeName.(string))
+		return "", fmt.Errorf("cannot locate compute by name %s if no resource group ID is set", computeName.(string))
 	}

 	urlValues.Add("rgId", fmt.Sprintf("%d", rgId))
internal/service/cloudapi/lb/api.go (new file)
@@ -0,0 +1,57 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package lb

const lbListAPI = "/restmachine/cloudapi/lb/list"
const lbListDeletedAPI = "/restmachine/cloudapi/lb/listDeleted"
const lbGetAPI = "/restmachine/cloudapi/lb/get"
const lbCreateAPI = "/restmachine/cloudapi/lb/create"
const lbDeleteAPI = "/restmachine/cloudapi/lb/delete"
const lbDisableAPI = "/restmachine/cloudapi/lb/disable"
const lbEnableAPI = "/restmachine/cloudapi/lb/enable"
const lbUpdateAPI = "/restmachine/cloudapi/lb/update"
const lbStartAPI = "/restmachine/cloudapi/lb/start"
const lbStopAPI = "/restmachine/cloudapi/lb/stop"
const lbRestartAPI = "/restmachine/cloudapi/lb/restart"
const lbRestoreAPI = "/restmachine/cloudapi/lb/restore"
const lbConfigResetAPI = "/restmachine/cloudapi/lb/configReset"
const lbBackendCreateAPI = "/restmachine/cloudapi/lb/backendCreate"
const lbBackendDeleteAPI = "/restmachine/cloudapi/lb/backendDelete"
const lbBackendUpdateAPI = "/restmachine/cloudapi/lb/backendUpdate"
const lbBackendServerAddAPI = "/restmachine/cloudapi/lb/backendServerAdd"
const lbBackendServerDeleteAPI = "/restmachine/cloudapi/lb/backendServerDelete"
const lbBackendServerUpdateAPI = "/restmachine/cloudapi/lb/backendServerUpdate"
const lbFrontendCreateAPI = "/restmachine/cloudapi/lb/frontendCreate"
const lbFrontendDeleteAPI = "/restmachine/cloudapi/lb/frontendDelete"
const lbFrontendBindAPI = "/restmachine/cloudapi/lb/frontendBind"
const lbFrontendBindDeleteAPI = "/restmachine/cloudapi/lb/frontendBindDelete"
const lbFrontendBindUpdateAPI = "/restmachine/cloudapi/lb/frontendBindingUpdate"
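These constants are plain endpoint paths; the handlers in the files below pass one of them to controller.ControllerCfg.DecortAPICall together with url.Values. The self-contained snippet below is only an illustration of that call pattern (the helper name exampleLBGet is made up for this sketch; the parameter name and call signature mirror resource_lb.go further down).

```go
package lb

import (
	"context"
	"net/url"
	"strconv"

	"github.com/rudecs/terraform-provider-decort/internal/controller"
)

// exampleLBGet is a hypothetical helper, not part of the commit; it only shows
// how the endpoint constants above are consumed via DecortAPICall.
func exampleLBGet(ctx context.Context, c *controller.ControllerCfg, lbID int) (string, error) {
	urlValues := &url.Values{}
	urlValues.Add("lbId", strconv.Itoa(lbID)) // parameter name as used in resource_lb.go
	return c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
}
```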
internal/service/cloudapi/lb/data_source_lb.go (new file)
@@ -0,0 +1,96 @@
/* Copyright / license header and provider description comment: identical to api.go above. */

package lb

import (
	"context"
	"strconv"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func flattenLB(d *schema.ResourceData, lb *LoadBalancer) {
	d.Set("ha_mode", lb.HAMode)
	d.Set("backends", flattenLBBackends(lb.Backends))
	d.Set("created_by", lb.CreatedBy)
	d.Set("created_time", lb.CreatedTime)
	d.Set("deleted_by", lb.DeletedBy)
	d.Set("deleted_time", lb.DeletedTime)
	d.Set("desc", lb.Description)
	d.Set("dp_api_user", lb.DPAPIUser)
	d.Set("extnet_id", lb.ExtnetId)
	d.Set("frontends", flattenFrontends(lb.Frontends))
	d.Set("gid", lb.GID)
	d.Set("guid", lb.GUID)
	d.Set("image_id", lb.ImageId)
	d.Set("milestones", lb.Milestones)
	d.Set("name", lb.Name)
	d.Set("primary_node", flattenNode(lb.PrimaryNode))
	d.Set("rg_id", lb.RGID)
	d.Set("rg_name", lb.RGName)
	d.Set("secondary_node", flattenNode(lb.SecondaryNode))
	d.Set("status", lb.Status)
	d.Set("tech_status", lb.TechStatus)
	d.Set("updated_by", lb.UpdatedBy)
	d.Set("updated_time", lb.UpdatedTime)
	d.Set("vins_id", lb.VinsId)
}

func dataSourceLBRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	lb, err := utilityLBCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	d.SetId(strconv.FormatUint(lb.ID, 10))

	flattenLB(d, lb)

	return nil
}

func DataSourceLB() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		ReadContext: dataSourceLBRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dsLBSchemaMake(),
	}
}
internal/service/cloudapi/lb/data_source_lb_list.go (new file)
@@ -0,0 +1,103 @@
/* Copyright / license header and provider description comment: identical to api.go above. */

package lb

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func flattenLBList(lbl LBList) []map[string]interface{} {
	res := make([]map[string]interface{}, 0, len(lbl))
	for _, lb := range lbl {
		temp := map[string]interface{}{
			"ha_mode":         lb.HAMode,
			"backends":        flattenLBBackends(lb.Backends),
			"created_by":      lb.CreatedBy,
			"created_time":    lb.CreatedTime,
			"deleted_by":      lb.DeletedBy,
			"deleted_time":    lb.DeletedTime,
			"desc":            lb.Description,
			"dp_api_user":     lb.DPAPIUser,
			"dp_api_password": lb.DPAPIPassword,
			"extnet_id":       lb.ExtnetId,
			"frontends":       flattenFrontends(lb.Frontends),
			"gid":             lb.GID,
			"guid":            lb.GUID,
			"image_id":        lb.ImageId,
			"milestones":      lb.Milestones,
			"name":            lb.Name,
			"primary_node":    flattenNode(lb.PrimaryNode),
			"rg_id":           lb.RGID,
			"rg_name":         lb.RGName,
			"secondary_node":  flattenNode(lb.SecondaryNode),
			"status":          lb.Status,
			"tech_status":     lb.TechStatus,
			"updated_by":      lb.UpdatedBy,
			"updated_time":    lb.UpdatedTime,
			"vins_id":         lb.VinsId,
		}
		res = append(res, temp)
	}
	return res
}

func dataSourceLBListRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	lbList, err := utilityLBListCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}
	id := uuid.New()
	d.SetId(id.String())
	d.Set("items", flattenLBList(lbList))

	return nil
}

func DataSourceLBList() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		ReadContext: dataSourceLBListRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dsLBListSchemaMake(),
	}
}
internal/service/cloudapi/lb/data_source_lb_list_deleted.go (new file)
@@ -0,0 +1,68 @@
/* Copyright / license header and provider description comment: identical to api.go above. */

package lb

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func dataSourceLBListDeletedRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	lbList, err := utilityLBListDeletedCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}
	id := uuid.New()
	d.SetId(id.String())
	d.Set("items", flattenLBList(lbList))

	return nil
}

func DataSourceLBListDeleted() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		ReadContext: dataSourceLBListDeletedRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dsLBListDeletedSchemaMake(),
	}
}
internal/service/cloudapi/lb/flattens.go (new file)
@@ -0,0 +1,128 @@
/* Copyright / license header and provider description comment: identical to api.go above. */

package lb

func flattenNode(node Node) []map[string]interface{} {
	temp := make([]map[string]interface{}, 0)
	n := map[string]interface{}{
		"backend_ip":  node.BackendIp,
		"compute_id":  node.ComputeId,
		"frontend_ip": node.FrontendIp,
		"guid":        node.GUID,
		"mgmt_ip":     node.MGMTIp,
		"network_id":  node.NetworkId,
	}

	temp = append(temp, n)

	return temp
}

func flattendBindings(bs []Binding) []map[string]interface{} {
	temp := make([]map[string]interface{}, 0, len(bs))
	for _, b := range bs {
		t := map[string]interface{}{
			"address": b.Address,
			"guid":    b.GUID,
			"name":    b.Name,
			"port":    b.Port,
		}
		temp = append(temp, t)
	}
	return temp
}

func flattenFrontends(fs []Frontend) []map[string]interface{} {
	temp := make([]map[string]interface{}, 0, len(fs))
	for _, f := range fs {
		t := map[string]interface{}{
			"backend":  f.Backend,
			"bindings": flattendBindings(f.Bindings),
			"guid":     f.GUID,
			"name":     f.Name,
		}
		temp = append(temp, t)
	}

	return temp
}

func flattenServers(servers []Server) []map[string]interface{} {
	temp := make([]map[string]interface{}, 0, len(servers))
	for _, server := range servers {
		t := map[string]interface{}{
			"address":         server.Address,
			"check":           server.Check,
			"guid":            server.GUID,
			"name":            server.Name,
			"port":            server.Port,
			"server_settings": flattenServerSettings(server.ServerSettings),
		}

		temp = append(temp, t)
	}
	return temp
}

func flattenServerSettings(defSet ServerSettings) []map[string]interface{} {
	temp := map[string]interface{}{
		"downinter": defSet.DownInter,
		"fall":      defSet.Fall,
		"guid":      defSet.GUID,
		"inter":     defSet.Inter,
		"maxconn":   defSet.MaxConn,
		"maxqueue":  defSet.MaxQueue,
		"rise":      defSet.Rise,
		"slowstart": defSet.SlowStart,
		"weight":    defSet.Weight,
	}

	res := make([]map[string]interface{}, 0)
	res = append(res, temp)
	return res
}

func flattenLBBackends(backends []Backend) []map[string]interface{} {
	temp := make([]map[string]interface{}, 0, len(backends))
	for _, item := range backends {
		t := map[string]interface{}{
			"algorithm":               item.Algorithm,
			"guid":                    item.GUID,
			"name":                    item.Name,
			"server_default_settings": flattenServerSettings(item.ServerDefaultSettings),
			"servers":                 flattenServers(item.Servers),
		}

		temp = append(temp, t)
	}
	return temp
}
internal/service/cloudapi/lb/lb_data_subresource.go (new file)
@@ -0,0 +1,101 @@
/* Copyright / license header and provider description comment: identical to api.go above. */

package lb

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

func dsLBSchemaMake() map[string]*schema.Schema {
	sch := createLBSchema()
	sch["lb_id"] = &schema.Schema{
		Type:     schema.TypeInt,
		Required: true,
	}
	return sch
}

func dsLBListDeletedSchemaMake() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"page": {
			Type:     schema.TypeInt,
			Optional: true,
			Default:  0,
		},
		"size": {
			Type:     schema.TypeInt,
			Optional: true,
			Default:  0,
		},
		"items": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: dsLBItemSchemaMake(),
			},
		},
	}
}

func dsLBListSchemaMake() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"includedeleted": {
			Type:     schema.TypeBool,
			Optional: true,
			Default:  false,
		},
		"page": {
			Type:     schema.TypeInt,
			Optional: true,
			Default:  0,
		},
		"size": {
			Type:     schema.TypeInt,
			Optional: true,
			Default:  0,
		},
		"items": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: dsLBItemSchemaMake(),
			},
		},
	}
}

func dsLBItemSchemaMake() map[string]*schema.Schema {
	sch := createLBSchema()
	sch["dp_api_password"] = &schema.Schema{
		Type:     schema.TypeString,
		Computed: true,
	}
	return sch
}
internal/service/cloudapi/lb/lb_resource_subresource.go (new file)
@@ -0,0 +1,91 @@
/* Copyright / license header and provider description comment: identical to api.go above. */

package lb

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

func lbResourceSchemaMake() map[string]*schema.Schema {
	sch := createLBSchema()
	sch["rg_id"] = &schema.Schema{
		Type:     schema.TypeInt,
		Required: true,
	}
	sch["name"] = &schema.Schema{
		Type:     schema.TypeString,
		Required: true,
	}

	sch["extnet_id"] = &schema.Schema{
		Type:     schema.TypeInt,
		Required: true,
	}

	sch["vins_id"] = &schema.Schema{
		Type:     schema.TypeInt,
		Required: true,
	}
	sch["start"] = &schema.Schema{
		Type:     schema.TypeBool,
		Required: true,
	}
	sch["desc"] = &schema.Schema{
		Type:     schema.TypeString,
		Optional: true,
		Computed: true,
	}

	sch["enable"] = &schema.Schema{
		Type:     schema.TypeBool,
		Optional: true,
	}

	sch["restart"] = &schema.Schema{
		Type:     schema.TypeBool,
		Optional: true,
	}

	sch["restore"] = &schema.Schema{
		Type:     schema.TypeBool,
		Optional: true,
	}

	sch["config_reset"] = &schema.Schema{
		Type:     schema.TypeBool,
		Optional: true,
	}

	sch["permanently"] = &schema.Schema{
		Type:     schema.TypeBool,
		Optional: true,
	}
	return sch
}
internal/service/cloudapi/lb/lb_schema.go (new file)
@@ -0,0 +1,367 @@
/* Copyright / license header and provider description comment: identical to api.go above. */

package lb

import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

func createLBSchema() map[string]*schema.Schema {
	return map[string]*schema.Schema{
		"ha_mode": {Type: schema.TypeBool, Computed: true},
		"backends": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"algorithm": {Type: schema.TypeString, Computed: true},
					"guid":      {Type: schema.TypeString, Computed: true},
					"name":      {Type: schema.TypeString, Computed: true},
					"server_default_settings": {
						Type:     schema.TypeList,
						Computed: true,
						Elem: &schema.Resource{
							Schema: map[string]*schema.Schema{
								"downinter": {Type: schema.TypeInt, Computed: true},
								"fall":      {Type: schema.TypeInt, Computed: true},
								"guid":      {Type: schema.TypeString, Computed: true},
								"inter":     {Type: schema.TypeInt, Computed: true},
								"maxconn":   {Type: schema.TypeInt, Computed: true},
								"maxqueue":  {Type: schema.TypeInt, Computed: true},
								"rise":      {Type: schema.TypeInt, Computed: true},
								"slowstart": {Type: schema.TypeInt, Computed: true},
								"weight":    {Type: schema.TypeInt, Computed: true},
							},
						},
					},
					"servers": {
						Type:     schema.TypeList,
						Computed: true,
						Elem: &schema.Resource{
							Schema: map[string]*schema.Schema{
								"address": {Type: schema.TypeString, Computed: true},
								"check":   {Type: schema.TypeString, Computed: true},
								"guid":    {Type: schema.TypeString, Computed: true},
								"name":    {Type: schema.TypeString, Computed: true},
								"port":    {Type: schema.TypeInt, Computed: true},
								"server_settings": {
									Type:     schema.TypeList,
									Computed: true,
									Elem: &schema.Resource{
										Schema: map[string]*schema.Schema{
											"downinter": {Type: schema.TypeInt, Computed: true},
											"fall":      {Type: schema.TypeInt, Computed: true},
											"guid":      {Type: schema.TypeString, Computed: true},
											"inter":     {Type: schema.TypeInt, Computed: true},
											"maxconn":   {Type: schema.TypeInt, Computed: true},
											"maxqueue":  {Type: schema.TypeInt, Computed: true},
											"rise":      {Type: schema.TypeInt, Computed: true},
											"slowstart": {Type: schema.TypeInt, Computed: true},
											"weight":    {Type: schema.TypeInt, Computed: true},
										},
									},
								},
							},
						},
					},
				},
			},
		},
		"created_by":   {Type: schema.TypeString, Computed: true},
		"created_time": {Type: schema.TypeInt, Computed: true},
		"deleted_by":   {Type: schema.TypeString, Computed: true},
		"deleted_time": {Type: schema.TypeInt, Computed: true},
		"desc":         {Type: schema.TypeString, Computed: true},
		"dp_api_user":  {Type: schema.TypeString, Computed: true},
		"extnet_id":    {Type: schema.TypeInt, Computed: true},
		"frontends": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"backend": {Type: schema.TypeString, Computed: true},
					"bindings": {
						Type:     schema.TypeList,
						Computed: true,
						Elem: &schema.Resource{
							Schema: map[string]*schema.Schema{
								"address": {Type: schema.TypeString, Computed: true},
								"guid":    {Type: schema.TypeString, Computed: true},
								"name":    {Type: schema.TypeString, Computed: true},
								"port":    {Type: schema.TypeInt, Computed: true},
							},
						},
					},
					"guid": {Type: schema.TypeString, Computed: true},
					"name": {Type: schema.TypeString, Computed: true},
				},
			},
		},
		"gid":        {Type: schema.TypeInt, Computed: true},
		"guid":       {Type: schema.TypeInt, Computed: true},
		"lb_id":      {Type: schema.TypeInt, Computed: true},
		"image_id":   {Type: schema.TypeInt, Computed: true},
		"milestones": {Type: schema.TypeInt, Computed: true},
		"name":       {Type: schema.TypeString, Computed: true},
		"primary_node": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"backend_ip":  {Type: schema.TypeString, Computed: true},
					"compute_id":  {Type: schema.TypeInt, Computed: true},
					"frontend_ip": {Type: schema.TypeString, Computed: true},
					"guid":        {Type: schema.TypeString, Computed: true},
					"mgmt_ip":     {Type: schema.TypeString, Computed: true},
					"network_id":  {Type: schema.TypeInt, Computed: true},
				},
			},
		},
		"rg_id":   {Type: schema.TypeInt, Computed: true},
		"rg_name": {Type: schema.TypeString, Computed: true},
		"secondary_node": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"backend_ip":  {Type: schema.TypeString, Computed: true},
					"compute_id":  {Type: schema.TypeInt, Computed: true},
					"frontend_ip": {Type: schema.TypeString, Computed: true},
					"guid":        {Type: schema.TypeString, Computed: true},
					"mgmt_ip":     {Type: schema.TypeString, Computed: true},
					"network_id":  {Type: schema.TypeInt, Computed: true},
				},
			},
		},
		"status":       {Type: schema.TypeString, Computed: true},
		"tech_status":  {Type: schema.TypeString, Computed: true},
		"updated_by":   {Type: schema.TypeString, Computed: true},
		"updated_time": {Type: schema.TypeInt, Computed: true},
		"vins_id":      {Type: schema.TypeInt, Computed: true},
	}
}
internal/service/cloudapi/lb/models.go (new file)
@@ -0,0 +1,120 @@
/* Copyright / license header and provider description comment: identical to api.go above. */

package lb

type LoadBalancer struct {
	HAMode        bool        `json:"HAmode"`
	ACL           interface{} `json:"acl"`
	Backends      []Backend   `json:"backends"`
	CreatedBy     string      `json:"createdBy"`
	CreatedTime   uint64      `json:"createdTime"`
	DeletedBy     string      `json:"deletedBy"`
	DeletedTime   uint64      `json:"deletedTime"`
	Description   string      `json:"desc"`
	DPAPIUser     string      `json:"dpApiUser"`
	ExtnetId      uint64      `json:"extnetId"`
	Frontends     []Frontend  `json:"frontends"`
	GID           uint64      `json:"gid"`
	GUID          uint64      `json:"guid"`
	ID            uint64      `json:"id"`
	ImageId       uint64      `json:"imageId"`
	Milestones    uint64      `json:"milestones"`
	Name          string      `json:"name"`
	PrimaryNode   Node        `json:"primaryNode"`
	RGID          uint64      `json:"rgId"`
	RGName        string      `json:"rgName"`
	SecondaryNode Node        `json:"secondaryNode"`
	Status        string      `json:"status"`
	TechStatus    string      `json:"techStatus"`
	UpdatedBy     string      `json:"updatedBy"`
	UpdatedTime   uint64      `json:"updatedTime"`
	VinsId        uint64      `json:"vinsId"`
}

type LoadBalancerDetailed struct {
	DPAPIPassword string `json:"dpApiPassword"`
	LoadBalancer
}

type Backend struct {
	Algorithm             string         `json:"algorithm"`
	GUID                  string         `json:"guid"`
	Name                  string         `json:"name"`
	ServerDefaultSettings ServerSettings `json:"serverDefaultSettings"`
	Servers               []Server       `json:"servers"`
}

type LBList []LoadBalancerDetailed

type ServerSettings struct {
	Inter     uint64 `json:"inter"`
	GUID      string `json:"guid"`
	DownInter uint64 `json:"downinter"`
	Rise      uint   `json:"rise"`
	Fall      uint   `json:"fall"`
	SlowStart uint64 `json:"slowstart"`
	MaxConn   uint   `json:"maxconn"`
	MaxQueue  uint   `json:"maxqueue"`
	Weight    uint   `json:"weight"`
}

type Server struct {
	Address        string         `json:"address"`
	Check          string         `json:"check"`
	GUID           string         `json:"guid"`
	Name           string         `json:"name"`
	Port           uint           `json:"port"`
	ServerSettings ServerSettings `json:"serverSettings"`
}

type Node struct {
	BackendIp  string `json:"backendIp"`
	ComputeId  uint64 `json:"computeId"`
	FrontendIp string `json:"frontendIp"`
	GUID       string `json:"guid"`
	MGMTIp     string `json:"mgmtIp"`
	NetworkId  uint64 `json:"networkId"`
}

type Frontend struct {
	Backend  string    `json:"backend"`
	Bindings []Binding `json:"bindings"`
	GUID     string    `json:"guid"`
	Name     string    `json:"name"`
}

type Binding struct {
	Address string `json:"address"`
	GUID    string `json:"guid"`
	Name    string `json:"name"`
	Port    uint   `json:"port"`
}
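The json tags above map the cloudapi field names onto the Go models. The following self-contained sketch is not part of the commit; the payload values are invented for illustration and only show that the declared tags decode as expected.

```go
package lb

import (
	"encoding/json"
	"fmt"
)

// Hypothetical example: decode a fragment of an lb/get-style response into the
// LoadBalancer model using the json tags declared above.
func ExampleLoadBalancerDecode() {
	raw := []byte(`{"id": 42, "name": "demo-lb", "status": "ENABLED", "rgId": 7, "vinsId": 101, "HAmode": false}`)

	var lb LoadBalancer
	if err := json.Unmarshal(raw, &lb); err != nil {
		panic(err)
	}
	fmt.Println(lb.ID, lb.Name, lb.Status, lb.RGID)
	// Output: 42 demo-lb ENABLED 7
}
```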
281
internal/service/cloudapi/lb/resource_lb.go
Normal file
281
internal/service/cloudapi/lb/resource_lb.go
Normal file
@@ -0,0 +1,281 @@
|
|||||||
|
/*
|
||||||
|
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||||
|
Authors:
|
||||||
|
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||||
|
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||||
|
|
||||||
|
Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
you may not use this file except in compliance with the License.
|
||||||
|
You may obtain a copy of the License at
|
||||||
|
|
||||||
|
http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
|
||||||
|
Unless required by applicable law or agreed to in writing, software
|
||||||
|
distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
See the License for the specific language governing permissions and
|
||||||
|
limitations under the License.
|
||||||
|
*/
|
||||||
|
|
||||||
|
/*
|
||||||
|
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||||
|
Orchestration Technology) with Terraform by Hashicorp.
|
||||||
|
|
||||||
|
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||||
|
|
||||||
|
Please see README.md to learn where to place source code so that it
|
||||||
|
builds seamlessly.
|
||||||
|
|
||||||
|
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||||
|
*/
|
||||||
|
|
||||||
|
package lb
|
||||||
|
|
||||||
|
import (
|
||||||
|
"context"
|
||||||
|
"net/url"
|
||||||
|
"strconv"
|
||||||
|
|
||||||
|
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||||
|
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||||
|
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||||
|
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||||
|
log "github.com/sirupsen/logrus"
|
||||||
|
)
|
||||||
|
|
||||||
|
func resourceLBCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||||
|
log.Debugf("resourceLBCreate")
|
||||||
|
|
||||||
|
c := m.(*controller.ControllerCfg)
|
||||||
|
urlValues := &url.Values{}
|
||||||
|
urlValues.Add("name", d.Get("name").(string))
|
||||||
|
urlValues.Add("rgId", strconv.Itoa(d.Get("rg_id").(int)))
|
||||||
|
urlValues.Add("extnetId", strconv.Itoa(d.Get("extnet_id").(int)))
|
||||||
|
urlValues.Add("vinsId", strconv.Itoa(d.Get("vins_id").(int)))
|
||||||
|
urlValues.Add("start", strconv.FormatBool((d.Get("start").(bool))))
|
||||||
|
|
||||||
|
if desc, ok := d.GetOk("desc"); ok {
|
||||||
|
urlValues.Add("desc", desc.(string))
|
||||||
|
}
|
||||||
|
|
||||||
|
lbId, err := c.DecortAPICall(ctx, "POST", lbCreateAPI, urlValues)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
d.SetId(lbId)
|
||||||
|
d.Set("lb_id", lbId)
|
||||||
|
|
||||||
|
_, err = utilityLBCheckPresence(ctx, d, m)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
diagnostics := resourceLBRead(ctx, d, m)
|
||||||
|
if diagnostics != nil {
|
||||||
|
return diagnostics
|
||||||
|
}
|
||||||
|
|
||||||
|
urlValues = &url.Values{}
|
||||||
|
|
||||||
|
if enable, ok := d.GetOk("enable"); ok {
|
||||||
|
api := lbDisableAPI
|
||||||
|
if enable.(bool) {
|
||||||
|
api = lbEnableAPI
|
||||||
|
}
|
||||||
|
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||||
|
|
||||||
|
_, err := c.DecortAPICall(ctx, "POST", api, urlValues)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
urlValues = &url.Values{}
|
||||||
|
}
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func resourceLBRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||||
|
log.Debugf("resourceLBRead")
|
||||||
|
|
||||||
|
lb, err := utilityLBCheckPresence(ctx, d, m)
|
||||||
|
if lb == nil {
|
||||||
|
d.SetId("")
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
d.Set("ha_mode", lb.HAMode)
|
||||||
|
d.Set("backends", flattenLBBackends(lb.Backends))
|
||||||
|
d.Set("created_by", lb.CreatedBy)
|
||||||
|
d.Set("created_time", lb.CreatedTime)
|
||||||
|
d.Set("deleted_by", lb.DeletedBy)
|
||||||
|
d.Set("deleted_time", lb.DeletedTime)
|
||||||
|
d.Set("desc", lb.Description)
|
||||||
|
d.Set("dp_api_user", lb.DPAPIUser)
|
||||||
|
d.Set("extnet_id", lb.ExtnetId)
|
||||||
|
d.Set("frontends", flattenFrontends(lb.Frontends))
|
||||||
|
d.Set("gid", lb.GID)
|
||||||
|
d.Set("guid", lb.GUID)
|
||||||
|
d.Set("lb_id", lb.ID)
|
||||||
|
d.Set("image_id", lb.ImageId)
|
||||||
|
d.Set("milestones", lb.Milestones)
|
||||||
|
d.Set("name", lb.Name)
|
||||||
|
d.Set("primary_node", flattenNode(lb.PrimaryNode))
|
||||||
|
d.Set("rg_id", lb.RGID)
|
||||||
|
d.Set("rg_name", lb.RGName)
|
||||||
|
d.Set("secondary_node", flattenNode(lb.SecondaryNode))
|
||||||
|
d.Set("status", lb.Status)
|
||||||
|
d.Set("tech_status", lb.TechStatus)
|
||||||
|
d.Set("updated_by", lb.UpdatedBy)
|
||||||
|
d.Set("updated_time", lb.UpdatedTime)
|
||||||
|
d.Set("vins_id", lb.VinsId)
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func resourceLBDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||||
|
log.Debugf("resourceLBDelete")
|
||||||
|
|
||||||
|
lb, err := utilityLBCheckPresence(ctx, d, m)
|
||||||
|
if lb == nil {
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
c := m.(*controller.ControllerCfg)
|
||||||
|
urlValues := &url.Values{}
|
||||||
|
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||||
|
|
||||||
|
if permanently, ok := d.GetOk("permanently"); ok {
|
||||||
|
urlValues.Add("permanently", strconv.FormatBool(permanently.(bool)))
|
||||||
|
}
|
||||||
|
|
||||||
|
_, err = c.DecortAPICall(ctx, "POST", lbDeleteAPI, urlValues)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
d.SetId("")
|
||||||
|
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
func resourceLBEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||||
|
log.Debugf("resourceLBEdit")
|
||||||
|
c := m.(*controller.ControllerCfg)
|
||||||
|
urlValues := &url.Values{}
|
||||||
|
|
||||||
|
if d.HasChange("enable") {
|
||||||
|
api := lbDisableAPI
|
||||||
|
enable := d.Get("enable").(bool)
|
||||||
|
if enable {
|
||||||
|
api = lbEnableAPI
|
||||||
|
}
|
||||||
|
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||||
|
|
||||||
|
_, err := c.DecortAPICall(ctx, "POST", api, urlValues)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
urlValues = &url.Values{}
|
||||||
|
}
|
||||||
|
|
||||||
|
if d.HasChange("start") {
|
||||||
|
api := lbStopAPI
|
||||||
|
start := d.Get("start").(bool)
|
||||||
|
if start {
|
||||||
|
api = lbStartAPI
|
||||||
|
}
|
||||||
|
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||||
|
|
||||||
|
_, err := c.DecortAPICall(ctx, "POST", api, urlValues)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
urlValues = &url.Values{}
|
||||||
|
}
|
||||||
|
|
||||||
|
if d.HasChange("desc") {
|
||||||
|
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||||
|
urlValues.Add("desc", d.Get("desc").(string))
|
||||||
|
|
||||||
|
_, err := c.DecortAPICall(ctx, "POST", lbUpdateAPI, urlValues)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
urlValues = &url.Values{}
|
||||||
|
}
|
||||||
|
|
||||||
|
if d.HasChange("restart") {
|
||||||
|
restart := d.Get("restart").(bool)
|
||||||
|
if restart {
|
||||||
|
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||||
|
_, err := c.DecortAPICall(ctx, "POST", lbRestartAPI, urlValues)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
urlValues = &url.Values{}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if d.HasChange("restore") {
|
||||||
|
restore := d.Get("restore").(bool)
|
||||||
|
if restore {
|
||||||
|
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||||
|
_, err := c.DecortAPICall(ctx, "POST", lbRestoreAPI, urlValues)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
urlValues = &url.Values{}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if d.HasChange("config_reset") {
|
||||||
|
cfgReset := d.Get("config_reset").(bool)
|
||||||
|
if cfgReset {
|
||||||
|
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||||
|
_, err := c.DecortAPICall(ctx, "POST", lbConfigResetAPI, urlValues)
|
||||||
|
if err != nil {
|
||||||
|
return diag.FromErr(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
urlValues = &url.Values{}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
//TODO: перенести backend и frontend из ресурсов сюда
|
||||||
|
|
||||||
|
return resourceLBRead(ctx, d, m)
|
||||||
|
}
|
||||||
|
|
||||||
|
func ResourceLB() *schema.Resource {
|
||||||
|
return &schema.Resource{
|
||||||
|
SchemaVersion: 1,
|
||||||
|
|
||||||
|
CreateContext: resourceLBCreate,
|
||||||
|
ReadContext: resourceLBRead,
|
||||||
|
UpdateContext: resourceLBEdit,
|
||||||
|
DeleteContext: resourceLBDelete,
|
||||||
|
|
||||||
|
Importer: &schema.ResourceImporter{
|
||||||
|
StateContext: schema.ImportStatePassthroughContext,
|
||||||
|
},
|
||||||
|
|
||||||
|
Timeouts: &schema.ResourceTimeout{
|
||||||
|
Create: &constants.Timeout600s,
|
||||||
|
Read: &constants.Timeout300s,
|
||||||
|
Update: &constants.Timeout300s,
|
||||||
|
Delete: &constants.Timeout300s,
|
||||||
|
Default: &constants.Timeout300s,
|
||||||
|
},
|
||||||
|
|
||||||
|
Schema: lbResourceSchemaMake(),
|
||||||
|
}
|
||||||
|
}
|
||||||
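For orientation only: the resource constructors above (and the ones in the files that follow) are meant to be registered in the provider's ResourcesMap, but that registration is not part of this diff. The sketch below is a minimal, assumed illustration of that wiring; the "decort_lb*" map keys, the package name and the helper function are assumptions, not code from this changeset.

package provider // hypothetical location of the provider registration

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/lb"
)

// lbResources is an illustrative helper (not in this diff) showing how the new
// LB resource constructors could be merged into the provider's ResourcesMap.
func lbResources() map[string]*schema.Resource {
	return map[string]*schema.Resource{
		"decort_lb":                lb.ResourceLB(),
		"decort_lb_backend":        lb.ResourceLBBackend(),
		"decort_lb_backend_server": lb.ResourceLBBackendServer(),
		"decort_lb_frontend":       lb.ResourceLBFrontend(),
		"decort_lb_frontend_bind":  lb.ResourceLBFrontendBind(),
	}
}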
373  internal/service/cloudapi/lb/resource_lb_backend.go  Normal file
@@ -0,0 +1,373 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package lb

import (
	"context"
	"net/url"
	"strconv"
	"strings"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"
)

func resourceLBBackendCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBBackendCreate")

	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("backendName", d.Get("name").(string))
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))

	if algorithm, ok := d.GetOk("algorithm"); ok {
		urlValues.Add("algorithm", algorithm.(string))
	}
	if inter, ok := d.GetOk("inter"); ok {
		urlValues.Add("inter", strconv.Itoa(inter.(int)))
	}
	if downinter, ok := d.GetOk("downinter"); ok {
		urlValues.Add("downinter", strconv.Itoa(downinter.(int)))
	}
	if rise, ok := d.GetOk("rise"); ok {
		urlValues.Add("rise", strconv.Itoa(rise.(int)))
	}
	if fall, ok := d.GetOk("fall"); ok {
		urlValues.Add("fall", strconv.Itoa(fall.(int)))
	}
	if slowstart, ok := d.GetOk("slowstart"); ok {
		urlValues.Add("slowstart", strconv.Itoa(slowstart.(int)))
	}
	if maxconn, ok := d.GetOk("maxconn"); ok {
		urlValues.Add("maxconn", strconv.Itoa(maxconn.(int)))
	}
	if maxqueue, ok := d.GetOk("maxqueue"); ok {
		urlValues.Add("maxqueue", strconv.Itoa(maxqueue.(int)))
	}
	if weight, ok := d.GetOk("weight"); ok {
		urlValues.Add("weight", strconv.Itoa(weight.(int)))
	}

	_, err := c.DecortAPICall(ctx, "POST", lbBackendCreateAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}

	d.SetId(strconv.Itoa(d.Get("lb_id").(int)) + "#" + d.Get("name").(string))

	_, err = utilityLBBackendCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	diagnostics := resourceLBBackendRead(ctx, d, m)
	if diagnostics != nil {
		return diagnostics
	}

	return nil
}

func resourceLBBackendRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBBackendRead")

	b, err := utilityLBBackendCheckPresence(ctx, d, m)
	if b == nil {
		d.SetId("")
		return diag.FromErr(err)
	}

	lbId, _ := strconv.ParseInt(strings.Split(d.Id(), "#")[0], 10, 32)

	d.Set("lb_id", lbId)
	d.Set("name", b.Name)
	d.Set("algorithm", b.Algorithm)
	d.Set("guid", b.GUID)
	d.Set("downinter", b.ServerDefaultSettings.DownInter)
	d.Set("fall", b.ServerDefaultSettings.Fall)
	d.Set("inter", b.ServerDefaultSettings.Inter)
	d.Set("maxconn", b.ServerDefaultSettings.MaxConn)
	d.Set("maxqueue", b.ServerDefaultSettings.MaxQueue)
	d.Set("rise", b.ServerDefaultSettings.Rise)
	d.Set("slowstart", b.ServerDefaultSettings.SlowStart)
	d.Set("weight", b.ServerDefaultSettings.Weight)
	d.Set("servers", flattenServers(b.Servers))

	return nil
}

func resourceLBBackendDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBBackendDelete")

	lb, err := utilityLBBackendCheckPresence(ctx, d, m)
	if lb == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}

	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	urlValues.Add("backendName", d.Get("name").(string))

	_, err = c.DecortAPICall(ctx, "POST", lbBackendDeleteAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}
	d.SetId("")

	return nil
}

func resourceLBBackendEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBBackendEdit")
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	urlValues.Add("backendName", d.Get("name").(string))
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))

	if d.HasChange("algorithm") {
		urlValues.Add("algorithm", d.Get("algorithm").(string))
	}
	if d.HasChange("inter") {
		urlValues.Add("inter", strconv.Itoa(d.Get("inter").(int)))
	}
	if d.HasChange("downinter") {
		urlValues.Add("downinter", strconv.Itoa(d.Get("downinter").(int)))
	}
	if d.HasChange("rise") {
		urlValues.Add("rise", strconv.Itoa(d.Get("rise").(int)))
	}
	if d.HasChange("fall") {
		urlValues.Add("fall", strconv.Itoa(d.Get("fall").(int)))
	}
	if d.HasChange("slowstart") {
		urlValues.Add("slowstart", strconv.Itoa(d.Get("slowstart").(int)))
	}
	if d.HasChange("maxconn") {
		urlValues.Add("maxconn", strconv.Itoa(d.Get("maxconn").(int)))
	}
	if d.HasChange("maxqueue") {
		urlValues.Add("maxqueue", strconv.Itoa(d.Get("maxqueue").(int)))
	}
	if d.HasChange("weight") {
		urlValues.Add("weight", strconv.Itoa(d.Get("weight").(int)))
	}

	_, err := c.DecortAPICall(ctx, "POST", lbBackendUpdateAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}

	//TODO: move servers here

	return resourceLBBackendRead(ctx, d, m)
}

func ResourceLBBackend() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		CreateContext: resourceLBBackendCreate,
		ReadContext:   resourceLBBackendRead,
		UpdateContext: resourceLBBackendEdit,
		DeleteContext: resourceLBBackendDelete,

		Importer: &schema.ResourceImporter{
			StateContext: schema.ImportStatePassthroughContext,
		},

		Timeouts: &schema.ResourceTimeout{
			Create:  &constants.Timeout600s,
			Read:    &constants.Timeout300s,
			Update:  &constants.Timeout300s,
			Delete:  &constants.Timeout300s,
			Default: &constants.Timeout300s,
		},

		Schema: map[string]*schema.Schema{
			"lb_id": {
				Type:        schema.TypeInt,
				Required:    true,
				Description: "ID of the LB instance to create the backend for",
			},
			"name": {
				Type:        schema.TypeString,
				Required:    true,
				Description: "Name of the new backend to create; must be unique among all backends of this LB",
			},
			"algorithm": {
				Type:         schema.TypeString,
				Optional:     true,
				Computed:     true,
				ValidateFunc: validation.StringInSlice([]string{"roundrobin", "static-rr", "leastconn"}, false),
			},
			"guid": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"downinter": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"fall": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"inter": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"maxconn": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"maxqueue": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"rise": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"slowstart": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"weight": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"servers": {
				Type:     schema.TypeList,
				Optional: true,
				Computed: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"address": {
							Type:     schema.TypeString,
							Optional: true,
							Computed: true,
						},
						"check": {
							Type:     schema.TypeString,
							Optional: true,
							Computed: true,
						},
						"guid": {
							Type:     schema.TypeString,
							Computed: true,
						},
						"name": {
							Type:     schema.TypeString,
							Optional: true,
							Computed: true,
						},
						"port": {
							Type:     schema.TypeInt,
							Optional: true,
							Computed: true,
						},
						"server_settings": {
							Type:     schema.TypeList,
							Optional: true,
							Computed: true,
							Elem: &schema.Resource{
								Schema: map[string]*schema.Schema{
									"downinter": {
										Type:     schema.TypeInt,
										Optional: true,
										Computed: true,
									},
									"fall": {
										Type:     schema.TypeInt,
										Optional: true,
										Computed: true,
									},
									"guid": {
										Type:     schema.TypeString,
										Computed: true,
									},
									"inter": {
										Type:     schema.TypeInt,
										Optional: true,
										Computed: true,
									},
									"maxconn": {
										Type:     schema.TypeInt,
										Optional: true,
										Computed: true,
									},
									"maxqueue": {
										Type:     schema.TypeInt,
										Optional: true,
										Computed: true,
									},
									"rise": {
										Type:     schema.TypeInt,
										Optional: true,
										Computed: true,
									},
									"slowstart": {
										Type:     schema.TypeInt,
										Optional: true,
										Computed: true,
									},
									"weight": {
										Type:     schema.TypeInt,
										Optional: true,
										Computed: true,
									},
								},
							},
						},
					},
				},
			},
		},
	}
}
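The backend resource above stores its Terraform ID as the string "<lbID>#<backendName>" (see the d.SetId call in resourceLBBackendCreate) and later recovers the parts with strings.Split. The helper below is a minimal illustrative sketch of that convention, not code from this changeset; the function name is made up.

package lb

import (
	"fmt"
	"strconv"
	"strings"
)

// parseBackendID (hypothetical helper) splits the "<lbID>#<backendName>"
// state/import ID produced by resourceLBBackendCreate.
func parseBackendID(id string) (int, string, error) {
	parts := strings.Split(id, "#")
	if len(parts) != 2 {
		return 0, "", fmt.Errorf("expected ID in the form <lbID>#<backendName>, got %q", id)
	}
	lbID, err := strconv.Atoi(parts[0])
	if err != nil {
		return 0, "", fmt.Errorf("invalid lbID %q: %w", parts[0], err)
	}
	return lbID, parts[1], nil
}

Because the resource uses ImportStatePassthroughContext, the same string is what terraform import would accept as the ID (the resource type name it is registered under is not shown in this diff).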
314  internal/service/cloudapi/lb/resource_lb_backend_server.go  Normal file
@@ -0,0 +1,314 @@
// Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
// Authors: Petr Krutov <petr.krutov@digitalenergy.online>, Stanislav Solovev <spsolovev@digitalenergy.online>
// Licensed under the Apache License, Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0
//
// Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
// Orchestration Technology) with Terraform by Hashicorp.
// Source code: https://github.com/rudecs/terraform-provider-decort
// Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package lb

import (
	"context"
	"net/url"
	"strconv"
	"strings"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"
)

func resourceLBBackendServerCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBBackendServerCreate")

	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("backendName", d.Get("backend_name").(string))
	urlValues.Add("serverName", d.Get("name").(string))
	urlValues.Add("address", d.Get("address").(string))
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	urlValues.Add("port", strconv.Itoa(d.Get("port").(int)))

	if check, ok := d.GetOk("check"); ok {
		urlValues.Add("check", check.(string))
	}

	if inter, ok := d.GetOk("inter"); ok {
		urlValues.Add("inter", strconv.Itoa(inter.(int)))
	}
	if downinter, ok := d.GetOk("downinter"); ok {
		urlValues.Add("downinter", strconv.Itoa(downinter.(int)))
	}
	if rise, ok := d.GetOk("rise"); ok {
		urlValues.Add("rise", strconv.Itoa(rise.(int)))
	}
	if fall, ok := d.GetOk("fall"); ok {
		urlValues.Add("fall", strconv.Itoa(fall.(int)))
	}
	if slowstart, ok := d.GetOk("slowstart"); ok {
		urlValues.Add("slowstart", strconv.Itoa(slowstart.(int)))
	}
	if maxconn, ok := d.GetOk("maxconn"); ok {
		urlValues.Add("maxconn", strconv.Itoa(maxconn.(int)))
	}
	if maxqueue, ok := d.GetOk("maxqueue"); ok {
		urlValues.Add("maxqueue", strconv.Itoa(maxqueue.(int)))
	}
	if weight, ok := d.GetOk("weight"); ok {
		urlValues.Add("weight", strconv.Itoa(weight.(int)))
	}

	_, err := c.DecortAPICall(ctx, "POST", lbBackendServerAddAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}

	d.SetId(strconv.Itoa(d.Get("lb_id").(int)) + "#" + d.Get("backend_name").(string) + "#" + d.Get("name").(string))

	_, err = utilityLBBackendServerCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	diagnostics := resourceLBBackendServerRead(ctx, d, m)
	if diagnostics != nil {
		return diagnostics
	}

	return nil
}

func resourceLBBackendServerRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBBackendServerRead")

	s, err := utilityLBBackendServerCheckPresence(ctx, d, m)
	if s == nil {
		d.SetId("")
		return diag.FromErr(err)
	}

	lbId, _ := strconv.ParseInt(strings.Split(d.Id(), "#")[0], 10, 32)
	backendName := strings.Split(d.Id(), "#")[1]

	d.Set("lb_id", lbId)
	d.Set("backend_name", backendName)
	d.Set("name", s.Name)
	d.Set("port", s.Port)
	d.Set("address", s.Address)
	d.Set("check", s.Check)
	d.Set("guid", s.GUID)
	d.Set("downinter", s.ServerSettings.DownInter)
	d.Set("fall", s.ServerSettings.Fall)
	d.Set("inter", s.ServerSettings.Inter)
	d.Set("maxconn", s.ServerSettings.MaxConn)
	d.Set("maxqueue", s.ServerSettings.MaxQueue)
	d.Set("rise", s.ServerSettings.Rise)
	d.Set("slowstart", s.ServerSettings.SlowStart)
	d.Set("weight", s.ServerSettings.Weight)

	return nil
}

func resourceLBBackendServerDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBBackendServerDelete")

	lb, err := utilityLBBackendServerCheckPresence(ctx, d, m)
	if lb == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}

	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	urlValues.Add("serverName", d.Get("name").(string))
	urlValues.Add("backendName", d.Get("backend_name").(string))

	_, err = c.DecortAPICall(ctx, "POST", lbBackendServerDeleteAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}
	d.SetId("")

	return nil
}

func resourceLBBackendServerEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBBackendServerEdit")
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	urlValues.Add("backendName", d.Get("backend_name").(string))
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	urlValues.Add("serverName", d.Get("name").(string))
	urlValues.Add("address", d.Get("address").(string))
	urlValues.Add("port", strconv.Itoa(d.Get("port").(int)))

	if d.HasChange("check") {
		urlValues.Add("check", d.Get("check").(string))
	}
	if d.HasChange("inter") {
		urlValues.Add("inter", strconv.Itoa(d.Get("inter").(int)))
	}
	if d.HasChange("downinter") {
		urlValues.Add("downinter", strconv.Itoa(d.Get("downinter").(int)))
	}
	if d.HasChange("rise") {
		urlValues.Add("rise", strconv.Itoa(d.Get("rise").(int)))
	}
	if d.HasChange("fall") {
		urlValues.Add("fall", strconv.Itoa(d.Get("fall").(int)))
	}
	if d.HasChange("slowstart") {
		urlValues.Add("slowstart", strconv.Itoa(d.Get("slowstart").(int)))
	}
	if d.HasChange("maxconn") {
		urlValues.Add("maxconn", strconv.Itoa(d.Get("maxconn").(int)))
	}
	if d.HasChange("maxqueue") {
		urlValues.Add("maxqueue", strconv.Itoa(d.Get("maxqueue").(int)))
	}
	if d.HasChange("weight") {
		urlValues.Add("weight", strconv.Itoa(d.Get("weight").(int)))
	}

	_, err := c.DecortAPICall(ctx, "POST", lbBackendServerUpdateAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}

	//TODO: move servers here

	return resourceLBBackendServerRead(ctx, d, m)
}

func ResourceLBBackendServer() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		CreateContext: resourceLBBackendServerCreate,
		ReadContext:   resourceLBBackendServerRead,
		UpdateContext: resourceLBBackendServerEdit,
		DeleteContext: resourceLBBackendServerDelete,

		Importer: &schema.ResourceImporter{
			StateContext: schema.ImportStatePassthroughContext,
		},

		Timeouts: &schema.ResourceTimeout{
			Create:  &constants.Timeout600s,
			Read:    &constants.Timeout300s,
			Update:  &constants.Timeout300s,
			Delete:  &constants.Timeout300s,
			Default: &constants.Timeout300s,
		},

		Schema: map[string]*schema.Schema{
			"lb_id": {
				Type:        schema.TypeInt,
				Required:    true,
				Description: "ID of the LB instance this server definition belongs to",
			},
			"backend_name": {
				Type:        schema.TypeString,
				Required:    true,
				Description: "Name of the backend to add this server definition to",
			},
			"name": {
				Type:        schema.TypeString,
				Required:    true,
				Description: "Must be unique among all servers defined for this backend - name of the server definition to add.",
			},
			"address": {
				Type:        schema.TypeString,
				Required:    true,
				Description: "IP address of the server.",
			},
			"port": {
				Type:        schema.TypeInt,
				Required:    true,
				Description: "Port number on the server",
			},
			"check": {
				Type:         schema.TypeString,
				Optional:     true,
				Computed:     true,
				ValidateFunc: validation.StringInSlice([]string{"disabled", "enabled"}, false),
				Description:  "Set to \"disabled\" if this server should be used regardless of its state.",
			},
			"guid": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"downinter": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"fall": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"inter": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"maxconn": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"maxqueue": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"rise": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"slowstart": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
			"weight": {
				Type:     schema.TypeInt,
				Optional: true,
				Computed: true,
			},
		},
	}
}
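The backend server resource extends the same delimiter convention to three parts: its state ID is "<lbID>#<backendName>#<serverName>" (see d.SetId in resourceLBBackendServerCreate). The sketch below only makes that format explicit; the helper names are made up and are not part of this changeset.

package lb

import (
	"fmt"
	"strconv"
	"strings"
)

// backendServerStateID (hypothetical) builds the three-part ID used above.
func backendServerStateID(lbID int, backendName, serverName string) string {
	return strconv.Itoa(lbID) + "#" + backendName + "#" + serverName
}

// parseBackendServerID (hypothetical) is the inverse, as used implicitly by
// resourceLBBackendServerRead when it splits d.Id() on "#".
func parseBackendServerID(id string) (int, string, string, error) {
	parts := strings.Split(id, "#")
	if len(parts) != 3 {
		return 0, "", "", fmt.Errorf("expected <lbID>#<backendName>#<serverName>, got %q", id)
	}
	lbID, err := strconv.Atoi(parts[0])
	if err != nil {
		return 0, "", "", fmt.Errorf("invalid lbID %q: %w", parts[0], err)
	}
	return lbID, parts[1], parts[2], nil
}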
192  internal/service/cloudapi/lb/resource_lb_frontend.go  Normal file
@@ -0,0 +1,192 @@
// Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
// Authors: Petr Krutov <petr.krutov@digitalenergy.online>, Stanislav Solovev <spsolovev@digitalenergy.online>
// Licensed under the Apache License, Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0
//
// Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
// Orchestration Technology) with Terraform by Hashicorp.
// Source code: https://github.com/rudecs/terraform-provider-decort
// Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package lb

import (
	"context"
	"net/url"
	"strconv"
	"strings"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"
)

func resourceLBFrontendCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBFrontendCreate")

	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("backendName", d.Get("backend_name").(string))
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	urlValues.Add("frontendName", d.Get("name").(string))

	_, err := c.DecortAPICall(ctx, "POST", lbFrontendCreateAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}

	d.SetId(strconv.Itoa(d.Get("lb_id").(int)) + "#" + d.Get("name").(string))

	_, err = utilityLBFrontendCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	diagnostics := resourceLBFrontendRead(ctx, d, m)
	if diagnostics != nil {
		return diagnostics
	}

	return nil
}

func resourceLBFrontendRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBFrontendRead")

	f, err := utilityLBFrontendCheckPresence(ctx, d, m)
	if f == nil {
		d.SetId("")
		return diag.FromErr(err)
	}

	lbId, _ := strconv.ParseInt(strings.Split(d.Id(), "#")[0], 10, 32)
	d.Set("lb_id", lbId)
	d.Set("backend_name", f.Backend)
	d.Set("name", f.Name)
	d.Set("guid", f.GUID)
	d.Set("bindings", flattendBindings(f.Bindings))

	return nil
}

func resourceLBFrontendDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBFrontendDelete")

	lb, err := utilityLBFrontendCheckPresence(ctx, d, m)
	if lb == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}

	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	urlValues.Add("frontendName", d.Get("name").(string))

	_, err = c.DecortAPICall(ctx, "POST", lbFrontendDeleteAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}
	d.SetId("")

	return nil
}

func resourceLBFrontendEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {

	//TODO: move bindings here

	return nil
}

func ResourceLBFrontend() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		CreateContext: resourceLBFrontendCreate,
		ReadContext:   resourceLBFrontendRead,
		UpdateContext: resourceLBFrontendEdit,
		DeleteContext: resourceLBFrontendDelete,

		Importer: &schema.ResourceImporter{
			StateContext: schema.ImportStatePassthroughContext,
		},

		Timeouts: &schema.ResourceTimeout{
			Create:  &constants.Timeout600s,
			Read:    &constants.Timeout300s,
			Update:  &constants.Timeout300s,
			Delete:  &constants.Timeout300s,
			Default: &constants.Timeout300s,
		},

		Schema: map[string]*schema.Schema{
			"lb_id": {
				Type:        schema.TypeInt,
				Required:    true,
				Description: "ID of the LB instance to create the frontend on",
			},
			"backend_name": {
				Type:     schema.TypeString,
				Required: true,
			},
			"name": {
				Type:     schema.TypeString,
				Required: true,
			},
			"bindings": {
				Type:     schema.TypeList,
				Computed: true,
				Elem: &schema.Resource{
					Schema: map[string]*schema.Schema{
						"address": {
							Type:     schema.TypeString,
							Computed: true,
						},
						"guid": {
							Type:     schema.TypeString,
							Computed: true,
						},
						"name": {
							Type:     schema.TypeString,
							Computed: true,
						},
						"port": {
							Type:     schema.TypeInt,
							Computed: true,
						},
					},
				},
			},
			"guid": {
				Type:     schema.TypeString,
				Computed: true,
			},
		},
	}
}
201  internal/service/cloudapi/lb/resource_lb_frontend_bind.go  Normal file
@@ -0,0 +1,201 @@
// Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
// Authors: Petr Krutov <petr.krutov@digitalenergy.online>, Stanislav Solovev <spsolovev@digitalenergy.online>
// Licensed under the Apache License, Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0
//
// Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
// Orchestration Technology) with Terraform by Hashicorp.
// Source code: https://github.com/rudecs/terraform-provider-decort
// Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package lb

import (
	"context"
	"net/url"
	"strconv"
	"strings"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"
)

func resourceLBFrontendBindCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBFrontendBindCreate")

	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("frontendName", d.Get("frontend_name").(string))
	urlValues.Add("bindingName", d.Get("name").(string))
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	urlValues.Add("bindingAddress", d.Get("address").(string))
	urlValues.Add("bindingPort", strconv.Itoa(d.Get("port").(int)))

	_, err := c.DecortAPICall(ctx, "POST", lbFrontendBindAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}

	d.SetId(strconv.Itoa(d.Get("lb_id").(int)) + "#" + d.Get("frontend_name").(string) + "#" + d.Get("name").(string))

	_, err = utilityLBFrontendBindCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	diagnostics := resourceLBFrontendBindRead(ctx, d, m)
	if diagnostics != nil {
		return diagnostics
	}

	return nil
}

func resourceLBFrontendBindRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBFrontendBindRead")

	b, err := utilityLBFrontendBindCheckPresence(ctx, d, m)
	if b == nil {
		d.SetId("")
		return diag.FromErr(err)
	}

	lbId, _ := strconv.ParseInt(strings.Split(d.Id(), "#")[0], 10, 32)
	frontendName := strings.Split(d.Id(), "#")[1]

	d.Set("lb_id", lbId)
	d.Set("frontend_name", frontendName)
	d.Set("name", b.Name)
	d.Set("address", b.Address)
	d.Set("guid", b.GUID)
	d.Set("port", b.Port)

	return nil
}

func resourceLBFrontendBindDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBFrontendBindDelete")

	b, err := utilityLBFrontendBindCheckPresence(ctx, d, m)
	if b == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}

	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	urlValues.Add("bindingName", d.Get("name").(string))
	urlValues.Add("frontendName", d.Get("frontend_name").(string))

	_, err = c.DecortAPICall(ctx, "POST", lbFrontendBindDeleteAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}
	d.SetId("")

	return nil
}

func resourceLBFrontendBindEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	log.Debugf("resourceLBFrontendBindEdit")
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	urlValues.Add("frontendName", d.Get("frontend_name").(string))
	urlValues.Add("bindingName", d.Get("name").(string))
	urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))

	if d.HasChange("address") {
		urlValues.Add("bindingAddress", d.Get("address").(string))
	}

	if d.HasChange("port") {
		urlValues.Add("bindingPort", strconv.Itoa(d.Get("port").(int)))
	}

	_, err := c.DecortAPICall(ctx, "POST", lbFrontendBindUpdateAPI, urlValues)
	if err != nil {
		return diag.FromErr(err)
	}

	return resourceLBFrontendBindRead(ctx, d, m)
}

func ResourceLBFrontendBind() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,

		CreateContext: resourceLBFrontendBindCreate,
		ReadContext:   resourceLBFrontendBindRead,
		UpdateContext: resourceLBFrontendBindEdit,
		DeleteContext: resourceLBFrontendBindDelete,

		Importer: &schema.ResourceImporter{
			StateContext: schema.ImportStatePassthroughContext,
		},

		Timeouts: &schema.ResourceTimeout{
			Create:  &constants.Timeout600s,
			Read:    &constants.Timeout300s,
			Update:  &constants.Timeout300s,
			Delete:  &constants.Timeout300s,
			Default: &constants.Timeout300s,
		},

		Schema: map[string]*schema.Schema{
			"lb_id": {
				Type:        schema.TypeInt,
				Required:    true,
				Description: "ID of the LB instance the frontend belongs to",
			},
			"frontend_name": {
				Type:        schema.TypeString,
				Required:    true,
				Description: "Name of the frontend to bind to",
			},
			"address": {
				Type:     schema.TypeString,
				Required: true,
			},
			"guid": {
				Type:     schema.TypeString,
				Computed: true,
			},
			"name": {
				Type:     schema.TypeString,
				Required: true,
			},
			"port": {
				Type:     schema.TypeInt,
				Required: true,
			},
		},
	}
}
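The frontend and binding resources reuse the same "#"-delimited state ID convention as the backend resources. A compact illustration (the concrete names are made up, not from this changeset):

package lb

// Illustrative only: state/import IDs produced by the d.SetId calls above.
const (
	exampleFrontendID     = "1234#my-frontend"            // <lbID>#<frontendName>
	exampleFrontendBindID = "1234#my-frontend#my-binding" // <lbID>#<frontendName>#<bindingName>
)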
70  internal/service/cloudapi/lb/utility_lb.go  Normal file
@@ -0,0 +1,70 @@
// Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
// Authors: Petr Krutov <petr.krutov@digitalenergy.online>, Stanislav Solovev <spsolovev@digitalenergy.online>
// Licensed under the Apache License, Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0
//
// Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
// Orchestration Technology) with Terraform by Hashicorp.
// Source code: https://github.com/rudecs/terraform-provider-decort
// Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package lb

import (
	"context"
	"encoding/json"
	"fmt"
	"net/url"
	"strconv"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
)

func utilityLBCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*LoadBalancer, error) {
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	if (d.Get("lb_id").(int)) != 0 {
		urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	} else {
		urlValues.Add("lbId", d.Id())
	}

	resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
	if err != nil {
		return nil, err
	}

	if resp == "" {
		return nil, nil
	}

	lb := &LoadBalancer{}
	if err := json.Unmarshal([]byte(resp), lb); err != nil {
		return nil, fmt.Errorf("can not unmarshal data to lb: %s %+v", resp, lb)
	}

	return lb, nil
}
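Note the convention utilityLBCheckPresence establishes and the resource functions above rely on: a return of (nil, nil) means the platform answered with an empty body (the LB no longer exists), while a non-nil error is a real API or decoding failure. A minimal sketch of that call pattern as a wrapper; this helper is illustrative only and not part of the changeset.

package lb

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// readLBOrForget (hypothetical) shows how a caller distinguishes "gone" from
// "failed": on a genuine error it returns diagnostics, on a missing LB it
// clears the resource ID so Terraform drops it from state.
func readLBOrForget(ctx context.Context, d *schema.ResourceData, m interface{}) (*LoadBalancer, diag.Diagnostics) {
	lb, err := utilityLBCheckPresence(ctx, d, m)
	if err != nil {
		return nil, diag.FromErr(err) // API or decoding error
	}
	if lb == nil {
		d.SetId("") // LB no longer exists; remove it from state
		return nil, nil
	}
	return lb, nil
}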
82  internal/service/cloudapi/lb/utility_lb_backend.go  Normal file
@@ -0,0 +1,82 @@
// Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
// Authors: Petr Krutov <petr.krutov@digitalenergy.online>, Stanislav Solovev <spsolovev@digitalenergy.online>
// Licensed under the Apache License, Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0
//
// Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
// Orchestration Technology) with Terraform by Hashicorp.
// Source code: https://github.com/rudecs/terraform-provider-decort
// Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package lb

import (
	"context"
	"encoding/json"
	"fmt"
	"net/url"
	"strconv"
	"strings"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
)

func utilityLBBackendCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*Backend, error) {
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	bName := d.Get("name").(string)

	if (d.Get("lb_id").(int)) != 0 {
		urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	} else {
		parameters := strings.Split(d.Id(), "#")
		urlValues.Add("lbId", parameters[0])
		bName = parameters[1]
	}

	resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
	if err != nil {
		return nil, err
	}

	if resp == "" {
		return nil, nil
	}

	lb := &LoadBalancer{}
	if err := json.Unmarshal([]byte(resp), lb); err != nil {
		return nil, fmt.Errorf("can not unmarshal data to lb: %s %+v", resp, lb)
	}

	backends := lb.Backends
	for _, b := range backends {
		if b.Name == bName {
			return &b, nil
		}
	}

	return nil, fmt.Errorf("can not find backend with name: %s for lb: %d", bName, lb.ID)
}
95  internal/service/cloudapi/lb/utility_lb_backend_server.go  Normal file
@@ -0,0 +1,95 @@
// Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
// Authors: Petr Krutov <petr.krutov@digitalenergy.online>, Stanislav Solovev <spsolovev@digitalenergy.online>
// Licensed under the Apache License, Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0
//
// Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
// Orchestration Technology) with Terraform by Hashicorp.
// Source code: https://github.com/rudecs/terraform-provider-decort
// Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package lb

import (
	"context"
	"encoding/json"
	"fmt"
	"net/url"
	"strconv"
	"strings"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
)

func utilityLBBackendServerCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*Server, error) {
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	bName := d.Get("backend_name").(string)
	sName := d.Get("name").(string)

	if (d.Get("lb_id").(int)) != 0 {
		urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	} else {
		parameters := strings.Split(d.Id(), "#")
		urlValues.Add("lbId", parameters[0])
		bName = parameters[1]
		sName = parameters[2]
	}

	resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
	if err != nil {
		return nil, err
	}

	if resp == "" {
		return nil, nil
	}

	lb := &LoadBalancer{}
	if err := json.Unmarshal([]byte(resp), lb); err != nil {
		return nil, fmt.Errorf("can not unmarshal data to lb: %s %+v", resp, lb)
	}

	backend := &Backend{}
	backends := lb.Backends
	for i, b := range backends {
		if b.Name == bName {
			backend = &backends[i]
			break
		}
	}
	if backend.Name == "" {
		return nil, fmt.Errorf("can not find backend with name: %s for lb: %d", bName, lb.ID)
	}

	for _, s := range backend.Servers {
		if s.Name == sName {
			return &s, nil
		}
	}

	return nil, fmt.Errorf("can not find server with name: %s for backend: %s for lb: %d", sName, bName, lb.ID)
}
82  internal/service/cloudapi/lb/utility_lb_frontend.go  Normal file
@@ -0,0 +1,82 @@
// Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
// Authors: Petr Krutov <petr.krutov@digitalenergy.online>, Stanislav Solovev <spsolovev@digitalenergy.online>
// Licensed under the Apache License, Version 2.0; see http://www.apache.org/licenses/LICENSE-2.0
//
// Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
// Orchestration Technology) with Terraform by Hashicorp.
// Source code: https://github.com/rudecs/terraform-provider-decort
// Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package lb

import (
	"context"
	"encoding/json"
	"fmt"
	"net/url"
	"strconv"
	"strings"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
)

func utilityLBFrontendCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*Frontend, error) {
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	fName := d.Get("name").(string)

	if (d.Get("lb_id").(int)) != 0 {
		urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	} else {
		parameters := strings.Split(d.Id(), "#")
		urlValues.Add("lbId", parameters[0])
		fName = parameters[1]
	}

	resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
	if err != nil {
		return nil, err
	}

	if resp == "" {
		return nil, nil
	}

	lb := &LoadBalancer{}
	if err := json.Unmarshal([]byte(resp), lb); err != nil {
		return nil, fmt.Errorf("can not unmarshal data to lb: %s %+v", resp, lb)
	}

	frontends := lb.Frontends
	for _, f := range frontends {
		if f.Name == fName {
			return &f, nil
		}
	}

	return nil, fmt.Errorf("can not find frontend with name: %s for lb: %d", fName, lb.ID)
}
95
internal/service/cloudapi/lb/utility_lb_frontend_bind.go
Normal file
95
internal/service/cloudapi/lb/utility_lb_frontend_bind.go
Normal file
@@ -0,0 +1,95 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package lb

import (
	"context"
	"encoding/json"
	"fmt"
	"net/url"
	"strconv"
	"strings"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
)

// utilityLBFrontendBindCheckPresence resolves a binding either from the lb_id,
// frontend_name and name attributes or, on import, from the composite resource
// ID "<lbID>#<frontendName>#<bindingName>".
func utilityLBFrontendBindCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*Binding, error) {
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	fName := d.Get("frontend_name").(string)
	bName := d.Get("name").(string)

	if (d.Get("lb_id").(int)) != 0 {
		urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
	} else {
		parameters := strings.Split(d.Id(), "#")
		urlValues.Add("lbId", parameters[0])
		fName = parameters[1]
		bName = parameters[2]
	}

	resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
	if err != nil {
		return nil, err
	}

	if resp == "" {
		return nil, nil
	}

	lb := &LoadBalancer{}
	if err := json.Unmarshal([]byte(resp), lb); err != nil {
		return nil, fmt.Errorf("can not unmarshal data to lb: %s %+v", resp, lb)
	}

	frontend := &Frontend{}
	frontends := lb.Frontends
	for i, f := range frontends {
		if f.Name == fName {
			frontend = &frontends[i]
			break
		}
	}
	if frontend.Name == "" {
		return nil, fmt.Errorf("can not find frontend with name: %s for lb: %d", fName, lb.ID)
	}

	for _, b := range frontend.Bindings {
		if b.Name == bName {
			return &b, nil
		}
	}

	return nil, fmt.Errorf("can not find bind with name: %s for frontend: %s for lb: %d", bName, fName, lb.ID)
}
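For a binding, the utility reads `lb_id`, `frontend_name` and `name`, and on import expects the composite ID `<lbID>#<frontendName>#<bindName>`. A hedged sketch of the `decort_lb_frontend_bind` resource follows; the listening address and port are assumptions, since they are not visible in this helper:

```hcl
resource "decort_lb_frontend_bind" "example" {
  lb_id         = 1111          # load balancer ID
  frontend_name = "my-frontend" # frontend the binding belongs to
  name          = "my-bind"     # binding name

  # assumed extra arguments - verify against the resource schema
  # address = "192.168.5.4"
  # port    = 8080
}
```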
74  internal/service/cloudapi/lb/utility_lb_list.go  Normal file
@@ -0,0 +1,74 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package lb

import (
	"context"
	"encoding/json"
	"net/url"
	"strconv"

	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// utilityLBListCheckPresence loads the list of load balancers, honouring the
// optional includedeleted, page and size arguments of the data source.
func utilityLBListCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (LBList, error) {
	lbList := LBList{}
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	if includedeleted, ok := d.GetOk("includedeleted"); ok {
		urlValues.Add("includedeleted", strconv.FormatBool(includedeleted.(bool)))
	}

	if page, ok := d.GetOk("page"); ok {
		urlValues.Add("page", strconv.Itoa(page.(int)))
	}
	if size, ok := d.GetOk("size"); ok {
		urlValues.Add("size", strconv.Itoa(size.(int)))
	}

	log.Debugf("utilityLBListCheckPresence: load lb list")
	lbListRaw, err := c.DecortAPICall(ctx, "POST", lbListAPI, urlValues)
	if err != nil {
		return nil, err
	}

	err = json.Unmarshal([]byte(lbListRaw), &lbList)
	if err != nil {
		return nil, err
	}

	return lbList, nil
}
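The helper above forwards three optional arguments, `includedeleted`, `page` and `size`, to the lbList API call. A hedged sketch of the matching `decort_lb_list` data source (the data-source name is assumed from the sample list later in this diff):

```hcl
data "decort_lb_list" "all" {
  # optional: also return load balancers that were deleted to trash
  includedeleted = true

  # optional paging
  page = 1
  size = 50
}

output "lb_list" {
  value = data.decort_lb_list.all
}
```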
70  internal/service/cloudapi/lb/utility_lb_list_deleted.go  Normal file
@@ -0,0 +1,70 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package lb

import (
	"context"
	"encoding/json"
	"net/url"
	"strconv"

	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// utilityLBListDeletedCheckPresence loads the list of deleted load balancers,
// honouring the optional page and size arguments of the data source.
func utilityLBListDeletedCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (LBList, error) {
	lbList := LBList{}
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	if page, ok := d.GetOk("page"); ok {
		urlValues.Add("page", strconv.Itoa(page.(int)))
	}
	if size, ok := d.GetOk("size"); ok {
		urlValues.Add("size", strconv.Itoa(size.(int)))
	}

	log.Debugf("utilityLBListDeletedCheckPresence: load lb list")
	lbListRaw, err := c.DecortAPICall(ctx, "POST", lbListDeletedAPI, urlValues)
	if err != nil {
		return nil, err
	}

	err = json.Unmarshal([]byte(lbListRaw), &lbList)
	if err != nil {
		return nil, err
	}

	return lbList, nil
}
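The deleted-list helper only honours the optional `page` and `size` arguments and calls the lbListDeleted API. A hedged sketch of the corresponding `decort_lb_list_deleted` data source:

```hcl
data "decort_lb_list_deleted" "trash" {
  # optional paging
  page = 1
  size = 10
}

output "deleted_lbs" {
  value = data.decort_lb_list_deleted.trash
}
```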
@@ -181,15 +181,15 @@ func ResourcePfw() *schema.Resource {
 		DeleteContext: resourcePfwDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create: &constants.Timeout60s,
-			Read: &constants.Timeout30s,
-			Update: &constants.Timeout60s,
-			Delete: &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create: &constants.Timeout600s,
+			Read: &constants.Timeout300s,
+			Update: &constants.Timeout300s,
+			Delete: &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: resourcePfwSchemaMake(),

@@ -307,15 +307,15 @@ func ResourceResgroup() *schema.Resource {
 		DeleteContext: resourceResgroupDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create: &constants.Timeout180s,
-			Read: &constants.Timeout30s,
-			Update: &constants.Timeout180s,
-			Delete: &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create: &constants.Timeout600s,
+			Read: &constants.Timeout300s,
+			Update: &constants.Timeout300s,
+			Delete: &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: map[string]*schema.Schema{

@@ -178,15 +178,15 @@ func ResourceSnapshot() *schema.Resource {
 		DeleteContext: resourceSnapshotDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create: &constants.Timeout60s,
-			Read: &constants.Timeout30s,
-			Update: &constants.Timeout60s,
-			Delete: &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create: &constants.Timeout600s,
+			Read: &constants.Timeout300s,
+			Update: &constants.Timeout300s,
+			Delete: &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: resourceSnapshotSchemaMake(),

@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
 Authors:
 Petr Krutov, <petr.krutov@digitalenergy.online>
 Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.

@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
 Authors:
 Petr Krutov, <petr.krutov@digitalenergy.online>
 Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>

 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.

@@ -292,15 +292,15 @@ func ResourceVins() *schema.Resource {
 		DeleteContext: resourceVinsDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create: &constants.Timeout180s,
-			Read: &constants.Timeout30s,
-			Update: &constants.Timeout180s,
-			Delete: &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create: &constants.Timeout600s,
+			Read: &constants.Timeout300s,
+			Update: &constants.Timeout300s,
+			Delete: &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: resourceVinsSchemaMake(),
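The hunks above, together with the importer-only hunks that follow, switch every resource from the deprecated `State: schema.ImportStatePassthrough` importer to `StateContext: schema.ImportStatePassthroughContext` and raise the default timeouts to 600 s for create and 300 s for read, update, delete and the default case. Because the resources declare `Timeouts`, a configuration can still override these defaults per operation; a hedged sketch, with `decort_kvmvm` used only as a placeholder resource type:

```hcl
resource "decort_kvmvm" "example" {
  # ... required compute arguments omitted ...

  # optional per-operation overrides of the provider defaults
  # (create now defaults to 600s, the other operations to 300s)
  timeouts {
    create = "15m"
    delete = "10m"
  }
}
```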
@@ -455,7 +455,7 @@ func ResourceAccount() *schema.Resource {
 		DeleteContext: resourceAccountDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -610,7 +610,7 @@ func ResourceDisk() *schema.Resource {
 		DeleteContext: resourceDiskDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -381,7 +381,7 @@ func ResourceCDROMImage() *schema.Resource {
 		DeleteContext: resourceCDROMImageDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -126,7 +126,7 @@ func ResourceDeleteImages() *schema.Resource {
 		DeleteContext: resourceDeleteListImages,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -665,7 +665,7 @@ func ResourceImage() *schema.Resource {
 		DeleteContext: resourceImageDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -328,7 +328,7 @@ func ResourceVirtualImage() *schema.Resource {
 		DeleteContext: resourceImageDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -376,7 +376,7 @@ func ResourceK8s() *schema.Resource {
 		DeleteContext: resourceK8sDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -228,7 +228,7 @@ func ResourceK8sWg() *schema.Resource {
 		DeleteContext: resourceK8sWgDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -338,7 +338,7 @@ func ResourceCompute() *schema.Resource {
 		DeleteContext: resourceComputeDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -229,7 +229,7 @@ func ResourcePcidevice() *schema.Resource {
 		DeleteContext: resourcePcideviceDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -181,7 +181,7 @@ func ResourcePfw() *schema.Resource {
 		DeleteContext: resourcePfwDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -307,7 +307,7 @@ func ResourceResgroup() *schema.Resource {
 		DeleteContext: resourceResgroupDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -492,7 +492,7 @@ func ResourceSep() *schema.Resource {
 		DeleteContext: resourceSepDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -188,7 +188,7 @@ func ResourceSepConfig() *schema.Resource {
 		DeleteContext: resourceSepConfigDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -178,7 +178,7 @@ func ResourceSnapshot() *schema.Resource {
 		DeleteContext: resourceSnapshotDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{

@@ -292,7 +292,7 @@ func ResourceVins() *schema.Resource {
 		DeleteContext: resourceVinsDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
32  internal/status/status.go  Normal file
@@ -0,0 +1,32 @@
package status

type Status = string

var (
	// The disk is attached to a Compute
	Assigned Status = "ASSIGNED"

	// An object model has been created in the database
	Modeled Status = "MODELED"

	// In the process of creation
	Creating Status = "CREATING"

	// Created
	Created Status = "CREATED"

	// Physical resources are allocated for the object
	Allocated Status = "ALLOCATED"

	// The object has released (returned to the platform) the physical resources it occupied
	Unallocated Status = "UNALLOCATED"

	// Permanently deleted
	Destroyed Status = "DESTROYED"

	// Deleted to trash
	Deleted Status = "DELETED"

	// Deleted from storage
	Purged Status = "PURGED"
)
9  provider.tf  Normal file
@@ -0,0 +1,9 @@
terraform {
  required_providers {
    decort = {
      source  = "digitalenergy.online/decort/decort"
      version = "3.1.1"
    }
  }
}
@@ -1,7 +1,10 @@
 # Examples of using terraform-provider-decort resources
+
 Each file is annotated with comments that briefly describe the capabilities and parameters of the resource.
 A working terraform installation is required.
+
 ## Resources covered by the examples
+
 - cloudapi:
   - data:
     - image
@@ -37,6 +40,15 @@
     - vins_list
     - locations_list
     - location_url
+    - lb
+    - lb_list
+    - lb_list_deleted
+    - disk_list_deleted
+    - disk_list_unattached
+    - disk_list_types
+    - disk_list_types_detailed
+    - disk_snapshot_list
+    - disk_snapshot
   - resources:
     - image
     - virtual_image
@@ -49,6 +61,12 @@
     - account
     - bservice
     - bservice_group
+    - lb
+    - lb_frontend
+    - lb_backend
+    - lb_frontend_bind
+    - lb_backend_server
+    - disk_snapshot
 - cloudbroker:
   - data:
     - grid
@@ -94,13 +112,14 @@
     - vins

 ## How to use the examples

 1. Install terraform.
 2. Install terraform-provider-decort with the `terraform init` command (runs automatically) or manually.
-3. Replace the *controller_url* parameter with your own.
+3. Replace the _controller_url_ parameter with your own.
-4. Replace the *oauth2* parameter with your own.
+4. Replace the _oauth2_ parameter with your own.
 5. Add the keys
-   *DECORT_APP_SECRET* and *DECORT_APP_ID*
+   _DECORT_APP_SECRET_ and _DECORT_APP_ID_
    as environment variables, or
    add `app_id` and `app_secret`
    to the `provider` block, which is unsafe because the credentials
    may be stolen when the file is shared.
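As step 5 above recommends, the client credentials are best supplied through the DECORT_APP_SECRET and DECORT_APP_ID environment variables rather than written into the configuration. A sketch of a provider block that relies on those variables, using the same URLs as the samples below:

```hcl
provider "decort" {
  authenticator        = "oauth2"
  controller_url       = "https://ds1.digitalenergy.online"
  oauth2_url           = "https://sso.digitalenergy.online"
  allow_unverified_ssl = true

  # app_id and app_secret are deliberately omitted; the provider reads them
  # from the DECORT_APP_ID and DECORT_APP_SECRET environment variables.
}
```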
54  samples/cloudapi/data_disk_list_deleted/main.tf  Normal file
@@ -0,0 +1,54 @@
/*
Usage example
Retrieve the list of disks with status DELETED
*/

# Uncomment this block and adjust the version and source path
# to work with a provider that was installed manually
# (not through the hashicorp provider registry).
/*
terraform {
  required_providers {
    decort = {
      version = "1.1"
      source  = "digitalenergy.online/decort/decort"
    }
  }
}
*/

provider "decort" {
  authenticator = "oauth2"
  #controller_url = <DECORT_CONTROLLER_URL>
  controller_url = "https://ds1.digitalenergy.online"
  #oauth2_url = <DECORT_SSO_URL>
  oauth2_url           = "https://sso.digitalenergy.online"
  allow_unverified_ssl = true
}

data "decort_disk_list_deleted" "dld" {
  # account id whose disks should be listed
  # optional parameter
  # type - integer
  #account_id = 11111

  # disk type
  # optional parameter
  # type - string
  # possible values: "b" - boot_disk, "d" - data_disk
  #type = "d"

  # page number to return
  # optional parameter
  # type - integer
  #page = 1

  # page size
  # optional parameter
  # type - integer
  #size = 1
}

output "test" {
  value = data.decort_disk_list_deleted.dld
}
39  samples/cloudapi/data_disk_list_types/main.tf  Normal file
@@ -0,0 +1,39 @@
/*
Usage example
Retrieve the list of disk types
*/

# Uncomment this block and adjust the version and source path
# to work with a provider that was installed manually
# (not through the hashicorp provider registry).
/*
terraform {
  required_providers {
    decort = {
      version = "1.1"
      source  = "digitalenergy.online/decort/decort"
    }
  }
}
*/

provider "decort" {
  authenticator = "oauth2"
  #controller_url = <DECORT_CONTROLLER_URL>
  controller_url = "https://ds1.digitalenergy.online"
  #oauth2_url = <DECORT_SSO_URL>
  oauth2_url           = "https://sso.digitalenergy.online"
  allow_unverified_ssl = true
}

data "decort_disk_list_types" "dlt" {
  # no input parameters

  # output parameter
  # type - list of strings
  #types {}
}

output "test" {
  value = data.decort_disk_list_types.dlt
}
52  samples/cloudapi/data_disk_list_types_detailed/main.tf  Normal file
@@ -0,0 +1,52 @@
/*
Usage example
Retrieve the list of disk types, in detail
*/

# Uncomment this block and adjust the version and source path
# to work with a provider that was installed manually
# (not through the hashicorp provider registry).
/*
terraform {
  required_providers {
    decort = {
      version = "1.1"
      source  = "digitalenergy.online/decort/decort"
    }
  }
}
*/

provider "decort" {
  authenticator = "oauth2"
  #controller_url = <DECORT_CONTROLLER_URL>
  controller_url = "https://ds1.digitalenergy.online"
  #oauth2_url = <DECORT_SSO_URL>
  oauth2_url           = "https://sso.digitalenergy.online"
  allow_unverified_ssl = true
}

data "decort_disk_list_types_detailed" "dltd" {
  # no input parameters

  # output parameter
  # type - list of type entries
  # items {}

  # output parameter
  # list of pools
  # pools

  # output parameter
  # name
  # name

  # output parameter
  # list of types
  #types
}

output "test" {
  value = data.decort_disk_list_types_detailed.dltd
}
39  samples/cloudapi/data_disk_list_unattached/main.tf  Normal file
@@ -0,0 +1,39 @@
/*
Usage example
Retrieve the list of available unattached disks
*/

# Uncomment this block and adjust the version and source path
# to work with a provider that was installed manually
# (not through the hashicorp provider registry).
/*
terraform {
  required_providers {
    decort = {
      version = "1.1"
      source  = "digitalenergy.online/decort/decort"
    }
  }
}
*/

provider "decort" {
  authenticator = "oauth2"
  #controller_url = <DECORT_CONTROLLER_URL>
  controller_url = "https://ds1.digitalenergy.online"
  #oauth2_url = <DECORT_SSO_URL>
  oauth2_url           = "https://sso.digitalenergy.online"
  allow_unverified_ssl = true
}

data "decort_disk_list_unattached" "dlu" {
  # account ID
  # optional parameter
  # type - integer
  account_id = 100
}

output "test" {
  value = data.decort_disk_list_unattached.dlu
}
Some files were not shown because too many files have changed in this diff.