Compare commits

12 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | 0adf28daf6 |  |
|  | 9402d6f291 |  |
|  | cb7e573d26 |  |
|  | 6ef0ad2f93 |  |
|  | 31be0a0b54 |  |
|  | 71ddaa3345 |  |
|  | 775a0b5adb |  |
|  | 1a983e945b |  |
|  | b152359706 |  |
|  | a844f6cc30 |  |
|  | 8e6b5a9bab |  |
|  | cd4695ee68 |  |
18
CHANGELOG.md
@@ -1,10 +1,10 @@
### Bug fixes

- fatal error when trying to retrieve compute boot disk if the former does not have one
- ignored timeouts
- wrong handling of errors when attaching network interfaces and disks to kvmvm

### Version 3.2.2

### New features

- parameter iotune in disk
- migrated to terraform SDKv2
- admin mode (activated by environment variable DECORT_ADMIN_MODE) for resources: account, k8s, image, disk, resgroup, kvmvm, vins
- parameters sep_id and pool in kvmvm

### Bug fixes

- Fix bug with getting kvmvm data_source

### Features

- Add enable/disable functionality for kvmvm resource
- Add status checker for kvmvm resource
10
Dockerfile
Normal file
@@ -0,0 +1,10 @@
FROM docker.io/hashicorp/terraform:latest

WORKDIR /opt/decort/tf/
COPY provider.tf ./
COPY terraform-provider-decort ./terraform.d/plugins/digitalenergy.online/decort/decort/3.2.2/linux_amd64/
RUN terraform init

WORKDIR /tf
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh", "/bin/terraform"]
13
Makefile
@@ -4,18 +4,31 @@ NAMESPACE=decort
NAME=terraform-provider-decort
#BINARY=terraform-provider-${NAME}
BINARY=${NAME}.exe
WORKPATH= ./examples/terraform.d/plugins/${HOSTNAME}/${NAMESPACE}/${NAMESPACE}/${VERSION}/${OS_ARCH}
MAINPATH = ./cmd/decort/
VERSION=1.1
#OS_ARCH=darwin_amd64
OS_ARCH=windows_amd64
#OS_ARCH=linux_amd64

default: install

image:
	GOOS=linux GOARCH=amd64 go build -o terraform-provider-decort ./cmd/decort/
	docker build . -t rudecs/tf:3.2.2
	rm terraform-provider-decort

lint:
	golangci-lint run --timeout 600s

st:
	go build -o ${BINARY} ${MAINPATH}
	cp ${BINARY} ${WORKPATH}
	rm ${BINARY}

build:
	go build -o ${BINARY} ${MAINPATH}

release:
	GOOS=darwin GOARCH=amd64 go build -o ./bin/${BINARY}_${VERSION}_darwin_amd64
	GOOS=freebsd GOARCH=386 go build -o ./bin/${BINARY}_${VERSION}_freebsd_386
60
README.md
@@ -1,22 +1,27 @@
# terraform-provider-decort

Terraform provider для платформы Digital Energy Cloud Orchestration Technology (DECORT)

Внимание: провайдер версии 3.x разработан для DECORT API 3.8.x.
Для более старых версий можно использовать:

- DECORT API 3.7.x - версия провайдера rc-1.25
- DECORT API 3.6.x - версия провайдера rc-1.10
- DECORT API до 3.6.0 - terraform DECS provider (https://github.com/rudecs/terraform-provider-decs)

## Режимы работы

Провайдер позволяет работать в двух режимах:

- Режим пользователя,
- Режим администратора.

Для переключения между режимами используйте флаг DECORT_ADMIN_MODE.
Вики проекта: https://github.com/rudecs/terraform-provider-decort/wiki

## Возможности провайдера

- Работа с Compute instances,
- Работа с disks,
- Работа с k8s,
- Работа с image,
- Работа с resource groups,
@@ -29,18 +34,23 @@ Terraform provider для платформы Digital Energy Cloud Orchestration
- Работа с vgpu,
- Работа с bservice,
- Работа с extnets,
- Работа с locations,
- Работа с load balancer.

Вики проекта: https://github.com/rudecs/terraform-provider-decort/wiki

## Начало

Старт возможен по двум путям:

1. Установка через собранные пакеты.
2. Ручная установка.

### Установка через собранные пакеты

1. Скачайте и установите terraform по ссылке: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
2. Создайте файл `main.tf` и добавьте в него следующий блок.

```terraform
provider "decort" {
  authenticator = "oauth2"
@@ -51,45 +61,62 @@
  allow_unverified_ssl = true
}
```

3. Выполните команду

```
terraform init
```

Провайдер автоматически будет установлен на ваш компьютер из terraform registry.

### Ручная установка

1. Скачайте и установите Go по ссылке: https://go.dev/dl/
2. Скачайте и установите terraform по ссылке: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
3. Склонируйте репозиторий с провайдером, выполнив команду:

```bash
git clone https://github.com/rudecs/terraform-provider-decort.git
```

4. Перейдите в скачанную папку с провайдером и выполните команду

```bash
go build -o terraform-provider-decort
```

Если вы знаете, как устроен _makefile_, то можно изменить в файле `Makefile` параметры под вашу ОС и выполнить команду

```bash
make build
```

5. Полученный файл необходимо поместить:

Linux:

```bash
~/.terraform.d/plugins/${host_name}/${namespace}/${type}/${version}/${target}
```

Windows:

```powershell
%APPDATA%\terraform.d\plugins\${host_name}\${namespace}\${type}\${version}\${target}
```

ВНИМАНИЕ: для ОС Windows `%APPDATA%` является каталогом, в котором будут помещены будущие файлы terraform.

Где:

- host_name - имя хоста, держателя провайдера, например, digitalenergy.online
- namespace - пространство имен хоста, например decort
- type - тип провайдера, может совпадать с пространством имен, например, decort
- version - версия провайдера, например 1.2
- target - версия ОС, например windows_amd64

6. После этого создайте файл `main.tf`.
7. Добавьте в него следующий блок

```terraform
terraform {
  required_providers {
@@ -100,32 +127,39 @@ terraform {
  }
}
```

В поле `version` указывается версия провайдера.
Обязательный параметр.
Тип поля - строка.
ВНИМАНИЕ: версии в блоке и в репозитории, в который был помещен провайдер, должны совпадать!

В поле `source` помещается путь до репозитория с версией вида:

```bash
${host_name}/${namespace}/${type}
```

ВНИМАНИЕ: все параметры должны совпадать с путем репозитория, в котором помещен провайдер.
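Для наглядности ниже приведен примерный вид такого блока; значения `source` и `version` условные и соответствуют примерам выше (digitalenergy.online/decort/decort, версия 1.2):

```terraform
terraform {
  required_providers {
    decort = {
      source  = "digitalenergy.online/decort/decort"
      version = "1.2"
    }
  }
}
```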
8. В консоли выполните команду

```bash
terraform init
```

9. Если все прошло хорошо, ошибок не будет.

Более подробно о сборке провайдера можно найти по ссылке: https://learn.hashicorp.com/tutorials/terraform/provider-use?in=terraform/providers

## Примеры работы

Примеры работы можно найти:

- На вики проекта: https://github.com/rudecs/terraform-provider-decort/wiki
- В папке `samples`

Схемы к terraform'у доступны:

- В папке `docs`

Хорошей работы!
57
README_EN.md
@@ -1,22 +1,26 @@
# terraform-provider-decort

Terraform provider for Digital Energy Cloud Orchestration Technology (DECORT) platform

NOTE: provider 3.x is designed for DECORT API 3.8.x. For older API versions please use:

- DECORT API 3.7.x versions - provider version rc-1.25
- DECORT API 3.6.x versions - provider version rc-1.10
- DECORT API versions prior to 3.6.0 - Terraform DECS provider (https://github.com/rudecs/terraform-provider-decs)

## Working modes

The provider supports two working modes:

- User mode,
- Administrator mode.

Use the DECORT_ADMIN_MODE flag for switching between modes.
See user guide at https://github.com/rudecs/terraform-provider-decort/wiki

## Features

- Work with Compute instances,
- Work with disks,
- Work with k8s,
- Work with image,
- Work with resource groups,
@@ -29,21 +33,25 @@ See user guide at https://github.com/rudecs/terraform-provider-decort/wiki
- Work with vgpu,
- Work with bservice,
- Work with extnets,
- Work with locations,
- Work with load balancers.

This provider supports Import operations on pre-existing resources.

See user guide at https://github.com/rudecs/terraform-provider-decort/wiki

## Get Started

There are two ways to get started:

1. Installing via binary packages
2. Manual installation

### Installing via binary packages

1. Download and install terraform: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
2. Create a `main.tf` file and add the following block to it.

```terraform
provider "decort" {
  authenticator = "oauth2"
@@ -54,45 +62,62 @@
  allow_unverified_ssl = true
}
```

3. Run the following command

```
terraform init
```

The provider will be installed on your computer automatically from the Terraform registry.

### Manual installation

1. Download and install the Go programming language: https://go.dev/dl/
2. Download and install terraform: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
3. Clone the provider's repository:

```bash
git clone https://github.com/rudecs/terraform-provider-decort.git
```

4. Change into the cloned provider directory and run

```bash
go build -o terraform-provider-decort
```

If you are familiar with _makefile_, you can adjust the `Makefile` parameters for your OS and run

```bash
make build
```

5. Move the compiled file to:

Linux:

```bash
~/.terraform.d/plugins/${host_name}/${namespace}/${type}/${version}/${target}
```

Windows:

```powershell
%APPDATA%\terraform.d\plugins\${host_name}\${namespace}\${type}\${version}\${target}
```

NOTE: on Windows, `%APPDATA%` is the directory where the terraform files will be placed.

Example:

- host_name - digitalenergy.online
- namespace - decort
- type - decort
- version - 1.2
- target - windows_amd64

6. After that, create a `main.tf` file.
7. Add the following code block to the file

```terraform
terraform {
  required_providers {
@@ -103,18 +128,22 @@ terraform {
  }
}
```

`version` - the provider's version.
Required.
String.
Note: the version in this block and the version of the repository where the provider is placed must match!

`source` - the path to the repository with the provider, of the form:

```bash
${host_name}/${namespace}/${type}
```

NOTE: all parameters must match the repository path!
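For clarity, a minimal sketch of such a block is shown below; the `source` and `version` values are illustrative and match the example values above (digitalenergy.online/decort/decort, version 1.2):

```terraform
terraform {
  required_providers {
    decort = {
      source  = "digitalenergy.online/decort/decort"
      version = "1.2"
    }
  }
}
```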
8. Run the following command in your terminal

```bash
terraform init
```

@@ -124,10 +153,12 @@ terraform init

More details about the provider's building process: https://learn.hashicorp.com/tutorials/terraform/provider-use?in=terraform/providers

## Examples and Samples

- Examples: https://github.com/rudecs/terraform-provider-decort/wiki
- Samples: see the `samples` directory in the repository

Terraform schemas:

- See the `docs` directory in the repository

Good work!
167
docs/data-sources/lb.md
Normal file
@@ -0,0 +1,167 @@
|
||||
---
|
||||
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||
page_title: "decort_lb Data Source - decort"
|
||||
subcategory: ""
|
||||
description: |-
|
||||
|
||||
---
|
||||
|
||||
# decort_lb (Data Source)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- schema generated by tfplugindocs -->
|
||||
## Schema
|
||||
|
||||
### Required
|
||||
|
||||
- `lb_id` (Number)
|
||||
|
||||
### Optional
|
||||
|
||||
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||
|
||||
### Read-Only
|
||||
|
||||
- `backends` (List of Object) (see [below for nested schema](#nestedatt--backends))
|
||||
- `created_by` (String)
|
||||
- `created_time` (Number)
|
||||
- `deleted_by` (String)
|
||||
- `deleted_time` (Number)
|
||||
- `desc` (String)
|
||||
- `dp_api_user` (String)
|
||||
- `extnet_id` (Number)
|
||||
- `frontends` (List of Object) (see [below for nested schema](#nestedatt--frontends))
|
||||
- `gid` (Number)
|
||||
- `guid` (Number)
|
||||
- `ha_mode` (Boolean)
|
||||
- `id` (String) The ID of this resource.
|
||||
- `image_id` (Number)
|
||||
- `milestones` (Number)
|
||||
- `name` (String)
|
||||
- `primary_node` (List of Object) (see [below for nested schema](#nestedatt--primary_node))
|
||||
- `rg_id` (Number)
|
||||
- `rg_name` (String)
|
||||
- `secondary_node` (List of Object) (see [below for nested schema](#nestedatt--secondary_node))
|
||||
- `status` (String)
|
||||
- `tech_status` (String)
|
||||
- `updated_by` (String)
|
||||
- `updated_time` (Number)
|
||||
- `vins_id` (Number)
|
||||
|
||||
<a id="nestedblock--timeouts"></a>
|
||||
### Nested Schema for `timeouts`
|
||||
|
||||
Optional:
|
||||
|
||||
- `default` (String)
|
||||
- `read` (String)
|
||||
|
||||
|
||||
<a id="nestedatt--backends"></a>
|
||||
### Nested Schema for `backends`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `algorithm` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `server_default_settings` (List of Object) (see [below for nested schema](#nestedobjatt--backends--server_default_settings))
|
||||
- `servers` (List of Object) (see [below for nested schema](#nestedobjatt--backends--servers))
|
||||
|
||||
<a id="nestedobjatt--backends--server_default_settings"></a>
|
||||
### Nested Schema for `backends.server_default_settings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `guid` (String)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `weight` (Number)
|
||||
|
||||
|
||||
<a id="nestedobjatt--backends--servers"></a>
|
||||
### Nested Schema for `backends.servers`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `address` (String)
|
||||
- `check` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
- `server_settings` (List of Object) (see [below for nested schema](#nestedobjatt--backends--servers--server_settings))
|
||||
|
||||
<a id="nestedobjatt--backends--servers--server_settings"></a>
|
||||
### Nested Schema for `backends.servers.server_settings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `guid` (String)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `weight` (Number)
|
||||
|
||||
|
||||
|
||||
|
||||
<a id="nestedatt--frontends"></a>
|
||||
### Nested Schema for `frontends`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend` (String)
|
||||
- `bindings` (List of Object) (see [below for nested schema](#nestedobjatt--frontends--bindings))
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
|
||||
<a id="nestedobjatt--frontends--bindings"></a>
|
||||
### Nested Schema for `frontends.bindings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `address` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
|
||||
|
||||
|
||||
<a id="nestedatt--primary_node"></a>
|
||||
### Nested Schema for `primary_node`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend_ip` (String)
|
||||
- `compute_id` (Number)
|
||||
- `frontend_ip` (String)
|
||||
- `guid` (String)
|
||||
- `mgmt_ip` (String)
|
||||
- `network_id` (Number)
|
||||
|
||||
|
||||
<a id="nestedatt--secondary_node"></a>
|
||||
### Nested Schema for `secondary_node`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend_ip` (String)
|
||||
- `compute_id` (Number)
|
||||
- `frontend_ip` (String)
|
||||
- `guid` (String)
|
||||
- `mgmt_ip` (String)
|
||||
- `network_id` (Number)
|
||||
|
||||
|
||||
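Based on the schema above, a minimal usage sketch for this data source looks like the following; the `lb_id` value is a placeholder:

```terraform
# Look up an existing load balancer by its ID (placeholder value).
data "decort_lb" "example" {
  lb_id = 1111
}

# Read-only attributes can then be referenced elsewhere in the configuration.
output "lb_status" {
  value = data.decort_lb.example.status
}
```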
175
docs/data-sources/lb_list.md
Normal file
@@ -0,0 +1,175 @@
|
||||
---
|
||||
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||
page_title: "decort_lb_list Data Source - decort"
|
||||
subcategory: ""
|
||||
description: |-
|
||||
|
||||
---
|
||||
|
||||
# decort_lb_list (Data Source)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- schema generated by tfplugindocs -->
|
||||
## Schema
|
||||
|
||||
### Optional
|
||||
|
||||
- `includedeleted` (Boolean)
|
||||
- `page` (Number)
|
||||
- `size` (Number)
|
||||
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||
|
||||
### Read-Only
|
||||
|
||||
- `id` (String) The ID of this resource.
|
||||
- `items` (List of Object) (see [below for nested schema](#nestedatt--items))
|
||||
|
||||
<a id="nestedblock--timeouts"></a>
|
||||
### Nested Schema for `timeouts`
|
||||
|
||||
Optional:
|
||||
|
||||
- `default` (String)
|
||||
- `read` (String)
|
||||
|
||||
|
||||
<a id="nestedatt--items"></a>
|
||||
### Nested Schema for `items`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backends` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends))
|
||||
- `created_by` (String)
|
||||
- `created_time` (Number)
|
||||
- `deleted_by` (String)
|
||||
- `deleted_time` (Number)
|
||||
- `desc` (String)
|
||||
- `dp_api_password` (String)
|
||||
- `dp_api_user` (String)
|
||||
- `extnet_id` (Number)
|
||||
- `frontends` (List of Object) (see [below for nested schema](#nestedobjatt--items--frontends))
|
||||
- `gid` (Number)
|
||||
- `guid` (Number)
|
||||
- `ha_mode` (Boolean)
|
||||
- `image_id` (Number)
|
||||
- `lb_id` (Number)
|
||||
- `milestones` (Number)
|
||||
- `name` (String)
|
||||
- `primary_node` (List of Object) (see [below for nested schema](#nestedobjatt--items--primary_node))
|
||||
- `rg_id` (Number)
|
||||
- `rg_name` (String)
|
||||
- `secondary_node` (List of Object) (see [below for nested schema](#nestedobjatt--items--secondary_node))
|
||||
- `status` (String)
|
||||
- `tech_status` (String)
|
||||
- `updated_by` (String)
|
||||
- `updated_time` (Number)
|
||||
- `vins_id` (Number)
|
||||
|
||||
<a id="nestedobjatt--items--backends"></a>
|
||||
### Nested Schema for `items.backends`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `algorithm` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `server_default_settings` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--server_default_settings))
|
||||
- `servers` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--servers))
|
||||
|
||||
<a id="nestedobjatt--items--backends--server_default_settings"></a>
|
||||
### Nested Schema for `items.backends.server_default_settings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `guid` (String)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `weight` (Number)
|
||||
|
||||
|
||||
<a id="nestedobjatt--items--backends--servers"></a>
|
||||
### Nested Schema for `items.backends.servers`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `address` (String)
|
||||
- `check` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
- `server_settings` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--servers--server_settings))
|
||||
|
||||
<a id="nestedobjatt--items--backends--servers--server_settings"></a>
|
||||
### Nested Schema for `items.backends.servers.server_settings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `guid` (String)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `weight` (Number)
|
||||
|
||||
|
||||
|
||||
|
||||
<a id="nestedobjatt--items--frontends"></a>
|
||||
### Nested Schema for `items.frontends`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend` (String)
|
||||
- `bindings` (List of Object) (see [below for nested schema](#nestedobjatt--items--frontends--bindings))
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
|
||||
<a id="nestedobjatt--items--frontends--bindings"></a>
|
||||
### Nested Schema for `items.frontends.bindings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `address` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
|
||||
|
||||
|
||||
<a id="nestedobjatt--items--primary_node"></a>
|
||||
### Nested Schema for `items.primary_node`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend_ip` (String)
|
||||
- `compute_id` (Number)
|
||||
- `frontend_ip` (String)
|
||||
- `guid` (String)
|
||||
- `mgmt_ip` (String)
|
||||
- `network_id` (Number)
|
||||
|
||||
|
||||
<a id="nestedobjatt--items--secondary_node"></a>
|
||||
### Nested Schema for `items.secondary_node`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend_ip` (String)
|
||||
- `compute_id` (Number)
|
||||
- `frontend_ip` (String)
|
||||
- `guid` (String)
|
||||
- `mgmt_ip` (String)
|
||||
- `network_id` (Number)
|
||||
|
||||
|
||||
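A minimal usage sketch based on the optional arguments above; the paging values are placeholders:

```terraform
# List load balancers page by page.
data "decort_lb_list" "example" {
  includedeleted = false
  page           = 1
  size           = 10
}

# Pull names out of the returned items list.
output "lb_names" {
  value = [for lb in data.decort_lb_list.example.items : lb.name]
}
```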
174
docs/data-sources/lb_list_deleted.md
Normal file
@@ -0,0 +1,174 @@
|
||||
---
|
||||
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||
page_title: "decort_lb_list_deleted Data Source - decort"
|
||||
subcategory: ""
|
||||
description: |-
|
||||
|
||||
---
|
||||
|
||||
# decort_lb_list_deleted (Data Source)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- schema generated by tfplugindocs -->
|
||||
## Schema
|
||||
|
||||
### Optional
|
||||
|
||||
- `page` (Number)
|
||||
- `size` (Number)
|
||||
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||
|
||||
### Read-Only
|
||||
|
||||
- `id` (String) The ID of this resource.
|
||||
- `items` (List of Object) (see [below for nested schema](#nestedatt--items))
|
||||
|
||||
<a id="nestedblock--timeouts"></a>
|
||||
### Nested Schema for `timeouts`
|
||||
|
||||
Optional:
|
||||
|
||||
- `default` (String)
|
||||
- `read` (String)
|
||||
|
||||
|
||||
<a id="nestedatt--items"></a>
|
||||
### Nested Schema for `items`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backends` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends))
|
||||
- `created_by` (String)
|
||||
- `created_time` (Number)
|
||||
- `deleted_by` (String)
|
||||
- `deleted_time` (Number)
|
||||
- `desc` (String)
|
||||
- `dp_api_password` (String)
|
||||
- `dp_api_user` (String)
|
||||
- `extnet_id` (Number)
|
||||
- `frontends` (List of Object) (see [below for nested schema](#nestedobjatt--items--frontends))
|
||||
- `gid` (Number)
|
||||
- `guid` (Number)
|
||||
- `ha_mode` (Boolean)
|
||||
- `image_id` (Number)
|
||||
- `lb_id` (Number)
|
||||
- `milestones` (Number)
|
||||
- `name` (String)
|
||||
- `primary_node` (List of Object) (see [below for nested schema](#nestedobjatt--items--primary_node))
|
||||
- `rg_id` (Number)
|
||||
- `rg_name` (String)
|
||||
- `secondary_node` (List of Object) (see [below for nested schema](#nestedobjatt--items--secondary_node))
|
||||
- `status` (String)
|
||||
- `tech_status` (String)
|
||||
- `updated_by` (String)
|
||||
- `updated_time` (Number)
|
||||
- `vins_id` (Number)
|
||||
|
||||
<a id="nestedobjatt--items--backends"></a>
|
||||
### Nested Schema for `items.backends`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `algorithm` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `server_default_settings` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--server_default_settings))
|
||||
- `servers` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--servers))
|
||||
|
||||
<a id="nestedobjatt--items--backends--server_default_settings"></a>
|
||||
### Nested Schema for `items.backends.server_default_settings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `guid` (String)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `weight` (Number)
|
||||
|
||||
|
||||
<a id="nestedobjatt--items--backends--servers"></a>
|
||||
### Nested Schema for `items.backends.servers`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `address` (String)
|
||||
- `check` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
- `server_settings` (List of Object) (see [below for nested schema](#nestedobjatt--items--backends--servers--server_settings))
|
||||
|
||||
<a id="nestedobjatt--items--backends--servers--server_settings"></a>
|
||||
### Nested Schema for `items.backends.servers.server_settings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `guid` (String)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `weight` (Number)
|
||||
|
||||
|
||||
|
||||
|
||||
<a id="nestedobjatt--items--frontends"></a>
|
||||
### Nested Schema for `items.frontends`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend` (String)
|
||||
- `bindings` (List of Object) (see [below for nested schema](#nestedobjatt--items--frontends--bindings))
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
|
||||
<a id="nestedobjatt--items--frontends--bindings"></a>
|
||||
### Nested Schema for `items.frontends.bindings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `address` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
|
||||
|
||||
|
||||
<a id="nestedobjatt--items--primary_node"></a>
|
||||
### Nested Schema for `items.primary_node`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend_ip` (String)
|
||||
- `compute_id` (Number)
|
||||
- `frontend_ip` (String)
|
||||
- `guid` (String)
|
||||
- `mgmt_ip` (String)
|
||||
- `network_id` (Number)
|
||||
|
||||
|
||||
<a id="nestedobjatt--items--secondary_node"></a>
|
||||
### Nested Schema for `items.secondary_node`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend_ip` (String)
|
||||
- `compute_id` (Number)
|
||||
- `frontend_ip` (String)
|
||||
- `guid` (String)
|
||||
- `mgmt_ip` (String)
|
||||
- `network_id` (Number)
|
||||
|
||||
|
||||
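A minimal usage sketch, mirroring `decort_lb_list` but for deleted load balancers; the paging values are placeholders:

```terraform
# List deleted load balancers.
data "decort_lb_list_deleted" "example" {
  page = 1
  size = 10
}

output "deleted_lb_ids" {
  value = [for lb in data.decort_lb_list_deleted.example.items : lb.lb_id]
}
```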
87
docs/resources/image_virtual.md
Normal file
@@ -0,0 +1,87 @@
|
||||
---
|
||||
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||
page_title: "decort_image_virtual Resource - decort"
|
||||
subcategory: ""
|
||||
description: |-
|
||||
|
||||
---
|
||||
|
||||
# decort_image_virtual (Resource)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- schema generated by tfplugindocs -->
|
||||
## Schema
|
||||
|
||||
### Required
|
||||
|
||||
- `link_to` (Number) ID of real image to link this virtual image to upon creation
|
||||
- `name` (String) Name of the rescue disk
|
||||
|
||||
### Optional
|
||||
|
||||
- `permanently` (Boolean) whether to completely delete the image
|
||||
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||
|
||||
### Read-Only
|
||||
|
||||
- `account_id` (Number)
|
||||
- `acl` (String)
|
||||
- `architecture` (String)
|
||||
- `boot_type` (String)
|
||||
- `bootable` (Boolean)
|
||||
- `ckey` (String)
|
||||
- `compute_ci_id` (Number)
|
||||
- `deleted_time` (String)
|
||||
- `desc` (String)
|
||||
- `drivers` (List of String)
|
||||
- `enabled` (Boolean)
|
||||
- `gid` (Number)
|
||||
- `guid` (Number)
|
||||
- `history` (List of Object) (see [below for nested schema](#nestedatt--history))
|
||||
- `hot_resize` (Boolean)
|
||||
- `id` (String) The ID of this resource.
|
||||
- `image_id` (Number) Image id
|
||||
- `image_name` (String)
|
||||
- `last_modified` (Number)
|
||||
- `milestones` (Number)
|
||||
- `password` (String)
|
||||
- `pool_name` (String)
|
||||
- `provider_name` (String)
|
||||
- `purge_attempts` (Number)
|
||||
- `res_id` (String)
|
||||
- `rescuecd` (Boolean)
|
||||
- `sep_id` (Number)
|
||||
- `shared_with` (List of Number)
|
||||
- `size` (Number)
|
||||
- `status` (String)
|
||||
- `tech_status` (String)
|
||||
- `type` (String)
|
||||
- `unc_path` (String)
|
||||
- `username` (String)
|
||||
- `version` (String)
|
||||
|
||||
<a id="nestedblock--timeouts"></a>
|
||||
### Nested Schema for `timeouts`
|
||||
|
||||
Optional:
|
||||
|
||||
- `create` (String)
|
||||
- `default` (String)
|
||||
- `delete` (String)
|
||||
- `read` (String)
|
||||
- `update` (String)
|
||||
|
||||
|
||||
<a id="nestedatt--history"></a>
|
||||
### Nested Schema for `history`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `guid` (String)
|
||||
- `id` (Number)
|
||||
- `timestamp` (Number)
|
||||
|
||||
|
||||
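A minimal usage sketch based on the required arguments above; the `link_to` image ID is a placeholder:

```terraform
# Create a virtual image linked to an existing real image.
resource "decort_image_virtual" "example" {
  name    = "example-virtual-image"
  link_to = 1111
}
```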
@@ -29,8 +29,12 @@ description: |-
|
||||
|
||||
- `cloud_init` (String) Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases.
|
||||
- `description` (String) Optional text description of this compute instance.
|
||||
- `detach_disks` (Boolean)
|
||||
- `extra_disks` (Set of Number) Optional list of IDs of extra disks to attach to this compute. You may specify several extra disks.
|
||||
- `ipa_type` (String) compute purpose
|
||||
- `is` (String) system name
|
||||
- `network` (Block Set, Max: 8) Optional network connection(s) for this compute. You may specify several network blocks, one for each connection. (see [below for nested schema](#nestedblock--network))
|
||||
- `permanently` (Boolean)
|
||||
- `pool` (String) Pool to use if sepId is set, can be also empty if needed to be chosen by system.
|
||||
- `sep_id` (Number) ID of SEP to create bootDisk on. Uses image's sepId if not set.
|
||||
- `started` (Boolean) Is compute started.
|
||||
|
||||
176
docs/resources/lb.md
Normal file
@@ -0,0 +1,176 @@
|
||||
---
|
||||
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||
page_title: "decort_lb Resource - decort"
|
||||
subcategory: ""
|
||||
description: |-
|
||||
|
||||
---
|
||||
|
||||
# decort_lb (Resource)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- schema generated by tfplugindocs -->
|
||||
## Schema
|
||||
|
||||
### Required
|
||||
|
||||
- `extnet_id` (Number)
|
||||
- `name` (String)
|
||||
- `rg_id` (Number)
|
||||
- `start` (Boolean)
|
||||
- `vins_id` (Number)
|
||||
|
||||
### Optional
|
||||
|
||||
- `config_reset` (Boolean)
|
||||
- `desc` (String)
|
||||
- `enable` (Boolean)
|
||||
- `permanently` (Boolean)
|
||||
- `restart` (Boolean)
|
||||
- `restore` (Boolean)
|
||||
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||
|
||||
### Read-Only
|
||||
|
||||
- `backends` (List of Object) (see [below for nested schema](#nestedatt--backends))
|
||||
- `created_by` (String)
|
||||
- `created_time` (Number)
|
||||
- `deleted_by` (String)
|
||||
- `deleted_time` (Number)
|
||||
- `dp_api_user` (String)
|
||||
- `frontends` (List of Object) (see [below for nested schema](#nestedatt--frontends))
|
||||
- `gid` (Number)
|
||||
- `guid` (Number)
|
||||
- `ha_mode` (Boolean)
|
||||
- `id` (String) The ID of this resource.
|
||||
- `image_id` (Number)
|
||||
- `lb_id` (Number)
|
||||
- `milestones` (Number)
|
||||
- `primary_node` (List of Object) (see [below for nested schema](#nestedatt--primary_node))
|
||||
- `rg_name` (String)
|
||||
- `secondary_node` (List of Object) (see [below for nested schema](#nestedatt--secondary_node))
|
||||
- `status` (String)
|
||||
- `tech_status` (String)
|
||||
- `updated_by` (String)
|
||||
- `updated_time` (Number)
|
||||
|
||||
<a id="nestedblock--timeouts"></a>
|
||||
### Nested Schema for `timeouts`
|
||||
|
||||
Optional:
|
||||
|
||||
- `create` (String)
|
||||
- `default` (String)
|
||||
- `delete` (String)
|
||||
- `read` (String)
|
||||
- `update` (String)
|
||||
|
||||
|
||||
<a id="nestedatt--backends"></a>
|
||||
### Nested Schema for `backends`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `algorithm` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `server_default_settings` (List of Object) (see [below for nested schema](#nestedobjatt--backends--server_default_settings))
|
||||
- `servers` (List of Object) (see [below for nested schema](#nestedobjatt--backends--servers))
|
||||
|
||||
<a id="nestedobjatt--backends--server_default_settings"></a>
|
||||
### Nested Schema for `backends.server_default_settings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `guid` (String)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `weight` (Number)
|
||||
|
||||
|
||||
<a id="nestedobjatt--backends--servers"></a>
|
||||
### Nested Schema for `backends.servers`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `address` (String)
|
||||
- `check` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
- `server_settings` (List of Object) (see [below for nested schema](#nestedobjatt--backends--servers--server_settings))
|
||||
|
||||
<a id="nestedobjatt--backends--servers--server_settings"></a>
|
||||
### Nested Schema for `backends.servers.server_settings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `guid` (String)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `weight` (Number)
|
||||
|
||||
|
||||
|
||||
|
||||
<a id="nestedatt--frontends"></a>
|
||||
### Nested Schema for `frontends`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend` (String)
|
||||
- `bindings` (List of Object) (see [below for nested schema](#nestedobjatt--frontends--bindings))
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
|
||||
<a id="nestedobjatt--frontends--bindings"></a>
|
||||
### Nested Schema for `frontends.bindings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `address` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
|
||||
|
||||
|
||||
<a id="nestedatt--primary_node"></a>
|
||||
### Nested Schema for `primary_node`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend_ip` (String)
|
||||
- `compute_id` (Number)
|
||||
- `frontend_ip` (String)
|
||||
- `guid` (String)
|
||||
- `mgmt_ip` (String)
|
||||
- `network_id` (Number)
|
||||
|
||||
|
||||
<a id="nestedatt--secondary_node"></a>
|
||||
### Nested Schema for `secondary_node`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `backend_ip` (String)
|
||||
- `compute_id` (Number)
|
||||
- `frontend_ip` (String)
|
||||
- `guid` (String)
|
||||
- `mgmt_ip` (String)
|
||||
- `network_id` (Number)
|
||||
|
||||
|
||||
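A minimal usage sketch based on the required arguments above; all IDs are placeholders:

```terraform
# Create a load balancer in an existing resource group, external network and ViNS.
resource "decort_lb" "example" {
  name      = "example-lb"
  rg_id     = 1111
  extnet_id = 2222
  vins_id   = 3333
  start     = true
}
```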
88
docs/resources/lb_backend.md
Normal file
@@ -0,0 +1,88 @@
|
||||
---
|
||||
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||
page_title: "decort_lb_backend Resource - decort"
|
||||
subcategory: ""
|
||||
description: |-
|
||||
|
||||
---
|
||||
|
||||
# decort_lb_backend (Resource)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- schema generated by tfplugindocs -->
|
||||
## Schema
|
||||
|
||||
### Required
|
||||
|
||||
- `lb_id` (Number) ID of the LB instance to backendCreate
|
||||
- `name` (String) Must be unique among all backends of this LB - name of the new backend to create
|
||||
|
||||
### Optional
|
||||
|
||||
- `algorithm` (String)
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `servers` (Block List) (see [below for nested schema](#nestedblock--servers))
|
||||
- `slowstart` (Number)
|
||||
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||
- `weight` (Number)
|
||||
|
||||
### Read-Only
|
||||
|
||||
- `guid` (String)
|
||||
- `id` (String) The ID of this resource.
|
||||
|
||||
<a id="nestedblock--servers"></a>
|
||||
### Nested Schema for `servers`
|
||||
|
||||
Optional:
|
||||
|
||||
- `address` (String)
|
||||
- `check` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
- `server_settings` (Block List) (see [below for nested schema](#nestedblock--servers--server_settings))
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `guid` (String)
|
||||
|
||||
<a id="nestedblock--servers--server_settings"></a>
|
||||
### Nested Schema for `servers.server_settings`
|
||||
|
||||
Optional:
|
||||
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `weight` (Number)
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `guid` (String)
|
||||
|
||||
|
||||
|
||||
<a id="nestedblock--timeouts"></a>
|
||||
### Nested Schema for `timeouts`
|
||||
|
||||
Optional:
|
||||
|
||||
- `create` (String)
|
||||
- `default` (String)
|
||||
- `delete` (String)
|
||||
- `read` (String)
|
||||
- `update` (String)
|
||||
|
||||
|
||||
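A minimal usage sketch based on the required arguments above; the load balancer reference assumes a `decort_lb.example` resource like the sketch shown for the `decort_lb` resource:

```terraform
# Create a backend on an existing load balancer.
resource "decort_lb_backend" "example" {
  lb_id = decort_lb.example.lb_id
  name  = "example-backend"
}
```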
55
docs/resources/lb_backend_server.md
Normal file
@@ -0,0 +1,55 @@
|
||||
---
|
||||
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||
page_title: "decort_lb_backend_server Resource - decort"
|
||||
subcategory: ""
|
||||
description: |-
|
||||
|
||||
---
|
||||
|
||||
# decort_lb_backend_server (Resource)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- schema generated by tfplugindocs -->
|
||||
## Schema
|
||||
|
||||
### Required
|
||||
|
||||
- `address` (String) IP address of the server.
|
||||
- `backend_name` (String) Must be unique among all backends of this LB - name of the new backend to create
|
||||
- `lb_id` (Number) ID of the LB instance to backendCreate
|
||||
- `name` (String) Must be unique among all servers defined for this backend - name of the server definition to add.
|
||||
- `port` (Number) Port number on the server
|
||||
|
||||
### Optional
|
||||
|
||||
- `check` (String) set to disabled if this server should be used regardless of its state.
|
||||
- `downinter` (Number)
|
||||
- `fall` (Number)
|
||||
- `inter` (Number)
|
||||
- `maxconn` (Number)
|
||||
- `maxqueue` (Number)
|
||||
- `rise` (Number)
|
||||
- `slowstart` (Number)
|
||||
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||
- `weight` (Number)
|
||||
|
||||
### Read-Only
|
||||
|
||||
- `guid` (String)
|
||||
- `id` (String) The ID of this resource.
|
||||
|
||||
<a id="nestedblock--timeouts"></a>
|
||||
### Nested Schema for `timeouts`
|
||||
|
||||
Optional:
|
||||
|
||||
- `create` (String)
|
||||
- `default` (String)
|
||||
- `delete` (String)
|
||||
- `read` (String)
|
||||
- `update` (String)
|
||||
|
||||
|
||||
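A minimal usage sketch based on the required arguments above; the ID, address and names are placeholders:

```terraform
# Register a server in an existing backend.
resource "decort_lb_backend_server" "example" {
  lb_id        = 1111
  backend_name = "example-backend"
  name         = "server-1"
  address      = "10.0.0.10"
  port         = 8080
}
```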
56
docs/resources/lb_frontend.md
Normal file
@@ -0,0 +1,56 @@
|
||||
---
|
||||
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||
page_title: "decort_lb_frontend Resource - decort"
|
||||
subcategory: ""
|
||||
description: |-
|
||||
|
||||
---
|
||||
|
||||
# decort_lb_frontend (Resource)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- schema generated by tfplugindocs -->
|
||||
## Schema
|
||||
|
||||
### Required
|
||||
|
||||
- `backend_name` (String)
|
||||
- `lb_id` (Number) ID of the LB instance to backendCreate
|
||||
- `name` (String)
|
||||
|
||||
### Optional
|
||||
|
||||
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||
|
||||
### Read-Only
|
||||
|
||||
- `bindings` (List of Object) (see [below for nested schema](#nestedatt--bindings))
|
||||
- `guid` (String)
|
||||
- `id` (String) The ID of this resource.
|
||||
|
||||
<a id="nestedblock--timeouts"></a>
|
||||
### Nested Schema for `timeouts`
|
||||
|
||||
Optional:
|
||||
|
||||
- `create` (String)
|
||||
- `default` (String)
|
||||
- `delete` (String)
|
||||
- `read` (String)
|
||||
- `update` (String)
|
||||
|
||||
|
||||
<a id="nestedatt--bindings"></a>
|
||||
### Nested Schema for `bindings`
|
||||
|
||||
Read-Only:
|
||||
|
||||
- `address` (String)
|
||||
- `guid` (String)
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
|
||||
|
||||
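A minimal usage sketch based on the required arguments above; the ID and names are placeholders:

```terraform
# Create a frontend attached to an existing backend.
resource "decort_lb_frontend" "example" {
  lb_id        = 1111
  backend_name = "example-backend"
  name         = "example-frontend"
}
```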
46
docs/resources/lb_frontend_bind.md
Normal file
@@ -0,0 +1,46 @@
|
||||
---
|
||||
# generated by https://github.com/hashicorp/terraform-plugin-docs
|
||||
page_title: "decort_lb_frontend_bind Resource - decort"
|
||||
subcategory: ""
|
||||
description: |-
|
||||
|
||||
---
|
||||
|
||||
# decort_lb_frontend_bind (Resource)
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
<!-- schema generated by tfplugindocs -->
|
||||
## Schema
|
||||
|
||||
### Required
|
||||
|
||||
- `address` (String)
|
||||
- `frontend_name` (String) Must be unique among all backends of this LB - name of the new backend to create
|
||||
- `lb_id` (Number) ID of the LB instance to backendCreate
|
||||
- `name` (String)
|
||||
- `port` (Number)
|
||||
|
||||
### Optional
|
||||
|
||||
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
|
||||
|
||||
### Read-Only
|
||||
|
||||
- `guid` (String)
|
||||
- `id` (String) The ID of this resource.
|
||||
|
||||
<a id="nestedblock--timeouts"></a>
|
||||
### Nested Schema for `timeouts`
|
||||
|
||||
Optional:
|
||||
|
||||
- `create` (String)
|
||||
- `default` (String)
|
||||
- `delete` (String)
|
||||
- `read` (String)
|
||||
- `update` (String)
|
||||
|
||||
|
||||
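A minimal usage sketch based on the required arguments above; the ID, address and names are placeholders:

```terraform
# Bind an address/port pair to an existing frontend.
resource "decort_lb_frontend_bind" "example" {
  lb_id         = 1111
  frontend_name = "example-frontend"
  name          = "example-bind"
  address       = "10.0.0.5"
  port          = 80
}
```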
4
entrypoint.sh
Normal file
@@ -0,0 +1,4 @@
#!/bin/sh

cp -aL /opt/decort/tf/* /opt/decort/tf/.* ./
exec "$@"
60
go.mod
@@ -3,68 +3,68 @@ module github.com/rudecs/terraform-provider-decort
|
||||
go 1.18
|
||||
|
||||
require (
|
||||
github.com/golang-jwt/jwt/v4 v4.4.2
|
||||
github.com/golang-jwt/jwt/v4 v4.4.3
|
||||
github.com/google/uuid v1.3.0
|
||||
github.com/hashicorp/terraform-plugin-docs v0.13.0
|
||||
github.com/hashicorp/terraform-plugin-sdk/v2 v2.19.0
|
||||
github.com/hashicorp/terraform-plugin-sdk/v2 v2.24.1
|
||||
github.com/sirupsen/logrus v1.9.0
|
||||
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2
|
||||
golang.org/x/net v0.4.0
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/Masterminds/goutils v1.1.1 // indirect
|
||||
github.com/Masterminds/semver/v3 v3.1.1 // indirect
|
||||
github.com/Masterminds/sprig/v3 v3.2.2 // indirect
|
||||
github.com/agext/levenshtein v1.2.2 // indirect
|
||||
github.com/Masterminds/semver/v3 v3.2.0 // indirect
|
||||
github.com/Masterminds/sprig/v3 v3.2.3 // indirect
|
||||
github.com/agext/levenshtein v1.2.3 // indirect
|
||||
github.com/apparentlymart/go-textseg/v13 v13.0.0 // indirect
|
||||
github.com/armon/go-radix v1.0.0 // indirect
|
||||
github.com/bgentry/speakeasy v0.1.0 // indirect
|
||||
github.com/fatih/color v1.13.0 // indirect
|
||||
github.com/golang/protobuf v1.5.2 // indirect
|
||||
github.com/google/go-cmp v0.5.8 // indirect
|
||||
github.com/google/go-cmp v0.5.9 // indirect
|
||||
github.com/hashicorp/errwrap v1.1.0 // indirect
|
||||
github.com/hashicorp/go-checkpoint v0.5.0 // indirect
|
||||
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
|
||||
github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 // indirect
|
||||
github.com/hashicorp/go-hclog v1.2.1 // indirect
|
||||
github.com/hashicorp/go-hclog v1.4.0 // indirect
|
||||
github.com/hashicorp/go-multierror v1.1.1 // indirect
|
||||
github.com/hashicorp/go-plugin v1.4.4 // indirect
|
||||
github.com/hashicorp/go-plugin v1.4.8 // indirect
|
||||
github.com/hashicorp/go-uuid v1.0.3 // indirect
|
||||
github.com/hashicorp/go-version v1.6.0 // indirect
|
||||
github.com/hashicorp/hc-install v0.4.0 // indirect
|
||||
github.com/hashicorp/hcl/v2 v2.13.0 // indirect
|
||||
github.com/hashicorp/hcl/v2 v2.15.0 // indirect
|
||||
github.com/hashicorp/logutils v1.0.0 // indirect
|
||||
github.com/hashicorp/terraform-exec v0.17.2 // indirect
|
||||
github.com/hashicorp/terraform-exec v0.17.3 // indirect
|
||||
github.com/hashicorp/terraform-json v0.14.0 // indirect
|
||||
github.com/hashicorp/terraform-plugin-go v0.12.0 // indirect
|
||||
github.com/hashicorp/terraform-plugin-log v0.6.0 // indirect
|
||||
github.com/hashicorp/terraform-registry-address v0.0.0-20220623143253-7d51757b572c // indirect
|
||||
github.com/hashicorp/terraform-plugin-go v0.14.2 // indirect
|
||||
github.com/hashicorp/terraform-plugin-log v0.7.0 // indirect
|
||||
github.com/hashicorp/terraform-registry-address v0.1.0 // indirect
|
||||
github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734 // indirect
|
||||
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d // indirect
|
||||
github.com/huandu/xstrings v1.3.2 // indirect
|
||||
github.com/hashicorp/yamux v0.1.1 // indirect
|
||||
github.com/huandu/xstrings v1.4.0 // indirect
|
||||
github.com/imdario/mergo v0.3.13 // indirect
|
||||
github.com/mattn/go-colorable v0.1.12 // indirect
|
||||
github.com/mattn/go-isatty v0.0.14 // indirect
|
||||
github.com/mitchellh/cli v1.1.4 // indirect
|
||||
github.com/mattn/go-colorable v0.1.13 // indirect
|
||||
github.com/mattn/go-isatty v0.0.16 // indirect
|
||||
github.com/mitchellh/cli v1.1.5 // indirect
|
||||
github.com/mitchellh/copystructure v1.2.0 // indirect
|
||||
github.com/mitchellh/go-testing-interface v1.14.1 // indirect
|
||||
github.com/mitchellh/go-wordwrap v1.0.0 // indirect
|
||||
github.com/mitchellh/go-wordwrap v1.0.1 // indirect
|
||||
github.com/mitchellh/mapstructure v1.5.0 // indirect
|
||||
github.com/mitchellh/reflectwalk v1.0.2 // indirect
|
||||
github.com/oklog/run v1.0.0 // indirect
|
||||
github.com/oklog/run v1.1.0 // indirect
|
||||
github.com/posener/complete v1.2.3 // indirect
|
||||
github.com/russross/blackfriday v1.6.0 // indirect
|
||||
github.com/shopspring/decimal v1.3.1 // indirect
|
||||
github.com/spf13/cast v1.5.0 // indirect
|
||||
github.com/vmihailenco/msgpack v4.0.4+incompatible // indirect
|
||||
github.com/vmihailenco/msgpack/v4 v4.3.12 // indirect
|
||||
github.com/vmihailenco/tagparser v0.1.1 // indirect
|
||||
github.com/zclconf/go-cty v1.10.0 // indirect
|
||||
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d // indirect
|
||||
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8 // indirect
|
||||
golang.org/x/text v0.3.7 // indirect
|
||||
google.golang.org/appengine v1.6.6 // indirect
|
||||
google.golang.org/genproto v0.0.0-20200711021454-869866162049 // indirect
|
||||
google.golang.org/grpc v1.48.0 // indirect
|
||||
google.golang.org/protobuf v1.28.0 // indirect
|
||||
github.com/vmihailenco/tagparser v0.1.2 // indirect
|
||||
github.com/zclconf/go-cty v1.12.1 // indirect
|
||||
golang.org/x/crypto v0.4.0 // indirect
|
||||
golang.org/x/sys v0.3.0 // indirect
|
||||
golang.org/x/text v0.5.0 // indirect
|
||||
google.golang.org/appengine v1.6.7 // indirect
|
||||
google.golang.org/genproto v0.0.0-20221207170731-23e4bf6bdc37 // indirect
|
||||
google.golang.org/grpc v1.51.0 // indirect
|
||||
google.golang.org/protobuf v1.28.1 // indirect
|
||||
)
|
||||
|
||||
81
go.sum
@@ -6,9 +6,14 @@ github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJ
|
||||
github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
|
||||
github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
|
||||
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
|
||||
github.com/Masterminds/semver/v3 v3.2.0 h1:3MEsd0SM6jqZojhjLWWeBY+Kcjy9i6MQAeY7YgDP83g=
|
||||
github.com/Masterminds/semver/v3 v3.2.0/go.mod h1:qvl/7zhW3nngYb5+80sSMF+FG2BjYrf8m9wsX0PNOMQ=
|
||||
github.com/Masterminds/sprig/v3 v3.2.0/go.mod h1:tWhwTbUTndesPNeF0C900vKoq283u6zp4APT9vaF3SI=
|
||||
github.com/Masterminds/sprig/v3 v3.2.1/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=
|
||||
github.com/Masterminds/sprig/v3 v3.2.2 h1:17jRggJu518dr3QaafizSXOjKYp94wKfABxUmyxvxX8=
|
||||
github.com/Masterminds/sprig/v3 v3.2.2/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=
|
||||
github.com/Masterminds/sprig/v3 v3.2.3 h1:eL2fZNezLomi0uOLqjQoN6BfsDD+fyLtgbJMAj9n6YA=
|
||||
github.com/Masterminds/sprig/v3 v3.2.3/go.mod h1:rXcFaZ2zZbLRJv/xSysmlgIM1u11eBaRMhvYXJNkGuM=
|
||||
github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
|
||||
github.com/Microsoft/go-winio v0.4.16 h1:FtSW/jqD+l4ba5iPBj9CODVtgfYAD8w2wS923g/cFDk=
|
||||
github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
|
||||
@@ -18,9 +23,12 @@ github.com/acomagu/bufpipe v1.0.3 h1:fxAGrHZTgQ9w5QqVItgzwj235/uYZYgbXitB+dLupOk
|
||||
github.com/acomagu/bufpipe v1.0.3/go.mod h1:mxdxdup/WdsKVreO5GpW4+M/1CE2sMG4jeGJ2sYmHc4=
|
||||
github.com/agext/levenshtein v1.2.2 h1:0S/Yg6LYmFJ5stwQeRp6EeOcCbj7xiqQSdNelsXvaqE=
|
||||
github.com/agext/levenshtein v1.2.2/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
|
||||
github.com/agext/levenshtein v1.2.3 h1:YB2fHEn0UJagG8T1rrWknE3ZQzWM06O8AMAatNn7lmo=
|
||||
github.com/agext/levenshtein v1.2.3/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
|
||||
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
|
||||
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
|
||||
github.com/apparentlymart/go-dump v0.0.0-20190214190832-042adf3cf4a0 h1:MzVXffFUye+ZcSR6opIgz9Co7WcDx6ZcY+RjfFHoA0I=
|
||||
github.com/apparentlymart/go-textseg v1.0.0 h1:rRmlIsPEEhUTIKQb7T++Nz/A5Q6C9IuX2wFoYVvnCs0=
|
||||
github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk=
|
||||
github.com/apparentlymart/go-textseg/v12 v12.0.0/go.mod h1:S/4uRK2UtaQttw1GenVJEynmyUenKwP++x/+DdGV/Ec=
|
||||
github.com/apparentlymart/go-textseg/v13 v13.0.0 h1:Y+KvPE1NYz0xl601PVImeQfFyEy6iT90AvPUL1NNfNw=
|
||||
@@ -70,6 +78,8 @@ github.com/go-git/go-git/v5 v5.4.2/go.mod h1:gQ1kArt6d+n+BGd+/B/I74HwRTLhth2+zti
|
||||
github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=
|
||||
github.com/golang-jwt/jwt/v4 v4.4.2 h1:rcc4lwaZgFMCZ5jxF9ABolDcIHdBytAFgqFPbSJQAYs=
|
||||
github.com/golang-jwt/jwt/v4 v4.4.2/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
|
||||
github.com/golang-jwt/jwt/v4 v4.4.3 h1:Hxl6lhQFj4AnOX6MLrsCb/+7tCj7DxP7VA+2rDIq5AU=
|
||||
github.com/golang-jwt/jwt/v4 v4.4.3/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
|
||||
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
|
||||
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
|
||||
github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
@@ -98,6 +108,8 @@ github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
|
||||
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
|
||||
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
|
||||
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
|
||||
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
|
||||
@@ -116,11 +128,15 @@ github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320 h1:1/D3zfFHttUK
|
||||
github.com/hashicorp/go-cty v1.4.1-0.20200414143053-d3edf31b6320/go.mod h1:EiZBMaudVLy8fmjf9Npq1dq9RalhveqZG5w/yz3mHWs=
|
||||
github.com/hashicorp/go-hclog v1.2.1 h1:YQsLlGDJgwhXFpucSPyVbCBviQtjlHv3jLTlp8YmtEw=
|
||||
github.com/hashicorp/go-hclog v1.2.1/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
|
||||
github.com/hashicorp/go-hclog v1.4.0 h1:ctuWFGrhFha8BnnzxqeRGidlEcQkDyL5u8J8t5eA11I=
|
||||
github.com/hashicorp/go-hclog v1.4.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
|
||||
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
|
||||
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
|
||||
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
|
||||
github.com/hashicorp/go-plugin v1.4.4 h1:NVdrSdFRt3SkZtNckJ6tog7gbpRrcbOjQi/rgF7JYWQ=
|
||||
github.com/hashicorp/go-plugin v1.4.4/go.mod h1:viDMjcLJuDui6pXb8U4HVfb8AamCWhHGUjr2IrTF67s=
|
||||
github.com/hashicorp/go-plugin v1.4.8 h1:CHGwpxYDOttQOY7HOWgETU9dyVjOXzniXDqJcYJE1zM=
|
||||
github.com/hashicorp/go-plugin v1.4.8/go.mod h1:viDMjcLJuDui6pXb8U4HVfb8AamCWhHGUjr2IrTF67s=
|
||||
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
|
||||
github.com/hashicorp/go-uuid v1.0.3 h1:2gKiV6YVmrJ1i2CKKa9obLvRieoRGviZFL26PcT/Co8=
|
||||
github.com/hashicorp/go-uuid v1.0.3/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
|
||||
@@ -132,29 +148,46 @@ github.com/hashicorp/hc-install v0.4.0 h1:cZkRFr1WVa0Ty6x5fTvL1TuO1flul231rWkGH9
|
||||
github.com/hashicorp/hc-install v0.4.0/go.mod h1:5d155H8EC5ewegao9A4PUTMNPZaq+TbOzkJJZ4vrXeI=
|
||||
github.com/hashicorp/hcl/v2 v2.13.0 h1:0Apadu1w6M11dyGFxWnmhhcMjkbAiKCv7G1r/2QgCNc=
|
||||
github.com/hashicorp/hcl/v2 v2.13.0/go.mod h1:e4z5nxYlWNPdDSNYX+ph14EvWYMFm3eP0zIUqPc2jr0=
|
||||
github.com/hashicorp/hcl/v2 v2.15.0 h1:CPDXO6+uORPjKflkWCCwoWc9uRp+zSIPcCQ+BrxV7m8=
|
||||
github.com/hashicorp/hcl/v2 v2.15.0/go.mod h1:JRmR89jycNkrrqnMmvPDMd56n1rQJ2Q6KocSLCMCXng=
|
||||
github.com/hashicorp/logutils v1.0.0 h1:dLEQVugN8vlakKOUE3ihGLTZJRB4j+M2cdTm/ORI65Y=
|
||||
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
|
||||
github.com/hashicorp/terraform-exec v0.17.2 h1:EU7i3Fh7vDUI9nNRdMATCEfnm9axzTnad8zszYZ73Go=
|
||||
github.com/hashicorp/terraform-exec v0.17.2/go.mod h1:tuIbsL2l4MlwwIZx9HPM+LOV9vVyEfBYu2GsO1uH3/8=
|
||||
github.com/hashicorp/terraform-exec v0.17.3 h1:MX14Kvnka/oWGmIkyuyvL6POx25ZmKrjlaclkx3eErU=
|
||||
github.com/hashicorp/terraform-exec v0.17.3/go.mod h1:+NELG0EqQekJzhvikkeQsOAZpsw0cv/03rbeQJqscAI=
|
||||
github.com/hashicorp/terraform-json v0.14.0 h1:sh9iZ1Y8IFJLx+xQiKHGud6/TSUCM0N8e17dKDpqV7s=
|
||||
github.com/hashicorp/terraform-json v0.14.0/go.mod h1:5A9HIWPkk4e5aeeXIBbkcOvaZbIYnAIkEyqP2pNSckM=
|
||||
github.com/hashicorp/terraform-plugin-docs v0.13.0 h1:6e+VIWsVGb6jYJewfzq2ok2smPzZrt1Wlm9koLeKazY=
|
||||
github.com/hashicorp/terraform-plugin-docs v0.13.0/go.mod h1:W0oCmHAjIlTHBbvtppWHe8fLfZ2BznQbuv8+UD8OucQ=
|
||||
github.com/hashicorp/terraform-plugin-go v0.12.0 h1:6wW9mT1dSs0Xq4LR6HXj1heQ5ovr5GxXNJwkErZzpJw=
|
||||
github.com/hashicorp/terraform-plugin-go v0.12.0/go.mod h1:kwhmaWHNDvT1B3QiSJdAtrB/D4RaKSY/v3r2BuoWK4M=
|
||||
github.com/hashicorp/terraform-plugin-go v0.14.2 h1:rhsVEOGCnY04msNymSvbUsXfRLKh9znXZmHlf5e8mhE=
|
||||
github.com/hashicorp/terraform-plugin-go v0.14.2/go.mod h1:Q12UjumPNGiFsZffxOsA40Tlz1WVXt2Evh865Zj0+UA=
|
||||
github.com/hashicorp/terraform-plugin-log v0.6.0 h1:/Vq78uSIdUSZ3iqDc9PESKtwt8YqNKN6u+khD+lLjuw=
|
||||
github.com/hashicorp/terraform-plugin-log v0.6.0/go.mod h1:p4R1jWBXRTvL4odmEkFfDdhUjHf9zcs/BCoNHAc7IK4=
|
||||
github.com/hashicorp/terraform-plugin-log v0.7.0 h1:SDxJUyT8TwN4l5b5/VkiTIaQgY6R+Y2BQ0sRZftGKQs=
|
||||
github.com/hashicorp/terraform-plugin-log v0.7.0/go.mod h1:p4R1jWBXRTvL4odmEkFfDdhUjHf9zcs/BCoNHAc7IK4=
|
||||
github.com/hashicorp/terraform-plugin-sdk/v2 v2.19.0 h1:7gDAcfto/C4Cjtf90SdukQshsxdMxJ/P69QxiF3digI=
|
||||
github.com/hashicorp/terraform-plugin-sdk/v2 v2.19.0/go.mod h1:/WYikYjhKB7c2j1HmXZhRsAARldRb4M38bLCLOhC3so=
|
||||
github.com/hashicorp/terraform-plugin-sdk/v2 v2.24.1 h1:zHcMbxY0+rFO9gY99elV/XC/UnQVg7FhRCbj1i5b7vM=
|
||||
github.com/hashicorp/terraform-plugin-sdk/v2 v2.24.1/go.mod h1:+tNlb0wkfdsDJ7JEiERLz4HzM19HyiuIoGzTsM7rPpw=
|
||||
github.com/hashicorp/terraform-registry-address v0.0.0-20220623143253-7d51757b572c h1:D8aRO6+mTqHfLsK/BC3j5OAoogv1WLRWzY1AaTo3rBg=
|
||||
github.com/hashicorp/terraform-registry-address v0.0.0-20220623143253-7d51757b572c/go.mod h1:Wn3Na71knbXc1G8Lh+yu/dQWWJeFQEpDeJMtWMtlmNI=
|
||||
github.com/hashicorp/terraform-registry-address v0.1.0 h1:W6JkV9wbum+m516rCl5/NjKxCyTVaaUBbzYcMzBDO3U=
|
||||
github.com/hashicorp/terraform-registry-address v0.1.0/go.mod h1:EnyO2jYO6j29DTHbJcm00E5nQTFeTtyZH3H5ycydQ5A=
|
||||
github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734 h1:HKLsbzeOsfXmKNpr3GiT18XAblV0BjCbzL8KQAMZGa0=
|
||||
github.com/hashicorp/terraform-svchost v0.0.0-20200729002733-f050f53b9734/go.mod h1:kNDNcF7sN4DocDLBkQYz73HGKwN1ANB1blq4lIYLYvg=
|
||||
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d h1:kJCB4vdITiW1eC1vq2e6IsrXKrZit1bv/TDYFGMp4BQ=
|
||||
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
|
||||
github.com/hashicorp/yamux v0.1.1 h1:yrQxtgseBDrq9Y652vSRDvsKCJKOUD+GzTS4Y0Y8pvE=
|
||||
github.com/hashicorp/yamux v0.1.1/go.mod h1:CtWFDAQgb7dxtzFs4tWbplKIe2jSi3+5vKbgIO0SLnQ=
|
||||
github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
|
||||
github.com/huandu/xstrings v1.3.2 h1:L18LIDzqlW6xN2rEkpdV8+oL/IXWJ1APd+vsdYy4Wdw=
|
||||
github.com/huandu/xstrings v1.3.2/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
|
||||
github.com/huandu/xstrings v1.3.3/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
|
||||
github.com/huandu/xstrings v1.4.0 h1:D17IlohoQq4UcpqD7fDk80P7l+lwAmlFaBHgOipl2FU=
|
||||
github.com/huandu/xstrings v1.4.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
|
||||
github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
|
||||
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
|
||||
github.com/imdario/mergo v0.3.13 h1:lFzP57bqS/wsqKssCGmtLAb8A0wKjLGrve2q3PPVcBk=
|
||||
@@ -180,12 +213,18 @@ github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaO
|
||||
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
|
||||
github.com/mattn/go-colorable v0.1.12 h1:jF+Du6AlPIjs2BiUiQlKOX0rt3SujHxPnksPKZbaA40=
|
||||
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
|
||||
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
|
||||
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
|
||||
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
|
||||
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
|
||||
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
|
||||
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
|
||||
github.com/mattn/go-isatty v0.0.16 h1:bq3VjFmv/sOjHtdEhmkEV4x1AJtvUvOJ2PFAZ5+peKQ=
|
||||
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
|
||||
github.com/mitchellh/cli v1.1.4 h1:qj8czE26AU4PbiaPXK5uVmMSM+V5BYsFBiM9HhGRLUA=
|
||||
github.com/mitchellh/cli v1.1.4/go.mod h1:vTLESy5mRhKOs9KDp0/RATawxP1UqBmdrpVRMnpcvKQ=
|
||||
github.com/mitchellh/cli v1.1.5 h1:OxRIeJXpAMztws/XHlN2vu6imG5Dpq+j61AzAX5fLng=
|
||||
github.com/mitchellh/cli v1.1.5/go.mod h1:v8+iFts2sPIKUV1ltktPXMCC8fumSKFItNcD2cLtRR4=
|
||||
github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
|
||||
github.com/mitchellh/copystructure v1.2.0 h1:vpKXTN4ewci03Vljg/q9QvCGUDttBOGBIa15WveJJGw=
|
||||
github.com/mitchellh/copystructure v1.2.0/go.mod h1:qLl+cE2AmVv+CoeAwDPye/v+N2HKCj9FbZEVFJRxO9s=
|
||||
@@ -195,6 +234,8 @@ github.com/mitchellh/go-testing-interface v1.14.1 h1:jrgshOhYAUVNMAJiKbEu7EqAwgJ
|
||||
github.com/mitchellh/go-testing-interface v1.14.1/go.mod h1:gfgS7OtZj6MA4U1UrDRp04twqAjfvlZyCfX3sDjEym8=
|
||||
github.com/mitchellh/go-wordwrap v1.0.0 h1:6GlHJ/LTGMrIJbwgdqdl2eEH8o+Exx/0m8ir9Gns0u4=
|
||||
github.com/mitchellh/go-wordwrap v1.0.0/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
|
||||
github.com/mitchellh/go-wordwrap v1.0.1 h1:TLuKupo69TCn6TQSyGxwI1EblZZEsQ0vMlAFQflz0v0=
|
||||
github.com/mitchellh/go-wordwrap v1.0.1/go.mod h1:R62XHJLzvMFRBbcrT7m7WgmE1eOyTSsCt+hzestvNj0=
|
||||
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
|
||||
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
|
||||
github.com/mitchellh/reflectwalk v1.0.0/go.mod h1:mSTlrgnPZtwu0c4WaC2kGObEpuNDbx0jmZXqmk4esnw=
|
||||
@@ -204,6 +245,8 @@ github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLA
|
||||
github.com/nsf/jsondiff v0.0.0-20200515183724-f29ed568f4ce h1:RPclfga2SEJmgMmz2k+Mg7cowZ8yv4Trqw9UsJby758=
|
||||
github.com/oklog/run v1.0.0 h1:Ru7dDtJNOyC66gQ5dQmaCa0qIsAUFY3sFpK1Xk8igrw=
|
||||
github.com/oklog/run v1.0.0/go.mod h1:dlhp/R75TPv97u0XWUtDeV/lRKWPKSdTuV0TZvrmrQA=
|
||||
github.com/oklog/run v1.1.0 h1:GEenZ1cK0+q0+wsJew9qUg/DyD8k3JzYsZAi5gYi2mA=
|
||||
github.com/oklog/run v1.1.0/go.mod h1:sVPdnTZT1zYwAJeCMu2Th4T21pA3FPOQRfWjQlk7DVU=
|
||||
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
@@ -245,12 +288,17 @@ github.com/vmihailenco/msgpack/v4 v4.3.12 h1:07s4sz9IReOgdikxLTKNbBdqDMLsjPKXwvC
|
||||
github.com/vmihailenco/msgpack/v4 v4.3.12/go.mod h1:gborTTJjAo/GWTqqRjrLCn9pgNN+NXzzngzBKDPIqw4=
|
||||
github.com/vmihailenco/tagparser v0.1.1 h1:quXMXlA39OCbd2wAdTsGDlK9RkOk6Wuw+x37wVyIuWY=
|
||||
github.com/vmihailenco/tagparser v0.1.1/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI=
|
||||
github.com/vmihailenco/tagparser v0.1.2 h1:gnjoVuB/kljJ5wICEEOpx98oXMWPLj22G67Vbd1qPqc=
|
||||
github.com/vmihailenco/tagparser v0.1.2/go.mod h1:OeAg3pn3UbLjkWt+rN9oFYB6u/cQgqMEUPoW2WPyhdI=
|
||||
github.com/xanzy/ssh-agent v0.3.0 h1:wUMzuKtKilRgBAD1sUb8gOwwRr2FGoBVumcjoOACClI=
|
||||
github.com/xanzy/ssh-agent v0.3.0/go.mod h1:3s9xbODqPuuhK9JV1R321M/FlMZSBvE5aY6eAcqrDh0=
|
||||
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
|
||||
github.com/zclconf/go-cty v1.1.0/go.mod h1:xnAOWiHeOqg2nWS62VtQ7pbOu17FtxJNW8RLEih+O3s=
|
||||
github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
|
||||
github.com/zclconf/go-cty v1.10.0 h1:mp9ZXQeIcN8kAwuqorjH+Q+njbJKjLrvB2yIh4q7U+0=
|
||||
github.com/zclconf/go-cty v1.10.0/go.mod h1:vVKLxnk3puL4qRAv72AO+W99LUD4da90g3uUAzyuvAk=
|
||||
github.com/zclconf/go-cty v1.12.1 h1:PcupnljUm9EIvbgSHQnHhUr3fO6oFmkOrvs2BAFNXXY=
|
||||
github.com/zclconf/go-cty v1.12.1/go.mod h1:s9IfD1LK5ccNMSWCVFCE2rJfHiZgi7JijgeWIMfhLvA=
|
||||
github.com/zclconf/go-cty-debug v0.0.0-20191215020915-b22d67c1ba0b/go.mod h1:ZRKQfBXbGkpdV6QMzT3rU1kSTAnfu1dO8dPKjYprgj8=
|
||||
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
|
||||
golang.org/x/crypto v0.0.0-20190219172222-a4c6cb3142f2/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
|
||||
@@ -261,12 +309,17 @@ golang.org/x/crypto v0.0.0-20200820211705-5c72a883971a/go.mod h1:LzIPMQfyMNhhGPh
|
||||
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
|
||||
golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
|
||||
golang.org/x/crypto v0.0.0-20210616213533-5ff15b29337e/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
|
||||
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d h1:sK3txAijHtOK88l68nt020reeT1ZdKLIYetKl95FzVY=
|
||||
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
|
||||
golang.org/x/crypto v0.3.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
|
||||
golang.org/x/crypto v0.4.0 h1:UVQgzMY87xqpKNgb+kDsll2Igd33HszWHFLmpaRMq/8=
|
||||
golang.org/x/crypto v0.4.0/go.mod h1:3quD/ATkf6oY+rnes5c3ExXTbLc8mueNue5/DoinL80=
|
||||
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
|
||||
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
|
||||
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
|
||||
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
|
||||
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
|
||||
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
@@ -275,6 +328,7 @@ golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73r
|
||||
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20191009170851-d66e71096ffb/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200301022130-244492dfa37a/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
|
||||
@@ -284,6 +338,10 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
|
||||
golang.org/x/net v0.0.0-20210326060303-6b1517762897/go.mod h1:uSPa2vr4CLtc/ILN5odXGNXS6mhrKVzTaCXzk9m6W3k=
|
||||
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 h1:CIJ76btIcR3eFI5EgSo6k1qKw9KJexJuRLI9G7Hp5wE=
|
||||
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
|
||||
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
|
||||
golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
|
||||
golang.org/x/net v0.4.0 h1:Q5QPcMlvfxFTAPV0+07Xz/MpK9NTXu2VDUuy0FeMfaU=
|
||||
golang.org/x/net v0.4.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
|
||||
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
|
||||
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
|
||||
@@ -291,6 +349,7 @@ golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJ
|
||||
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
@@ -311,20 +370,34 @@ golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBc
|
||||
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8 h1:0A+M6Uqn+Eje4kHMK80dtF3JCXC4ykBgQG4Fe06QRhQ=
|
||||
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.3.0 h1:w8ZOecv6NaNa/zC8944JTU3vz4u6Lagfk4RPQxv92NQ=
|
||||
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
|
||||
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
|
||||
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
|
||||
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
|
||||
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
|
||||
golang.org/x/text v0.5.0 h1:OLmvp0KP+FVG99Ct/qFiL/Fhk4zp4QQnZ7b2U+5piUM=
|
||||
golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
|
||||
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
|
||||
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
@@ -332,12 +405,16 @@ google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7
|
||||
google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
||||
google.golang.org/appengine v1.6.6 h1:lMO5rYAqUxkmaj76jAkRUvt5JZgFymx/+Q5Mzfivuhc=
|
||||
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
||||
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
|
||||
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
|
||||
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
|
||||
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
|
||||
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
|
||||
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
|
||||
google.golang.org/genproto v0.0.0-20200711021454-869866162049 h1:YFTFpQhgvrLrmxtiIncJxFXeCyq84ixuKWVCaCAi9Oc=
|
||||
google.golang.org/genproto v0.0.0-20200711021454-869866162049/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
|
||||
google.golang.org/genproto v0.0.0-20221207170731-23e4bf6bdc37 h1:jmIfw8+gSvXcZSgaFAGyInDXeWzUhvYH57G/5GKMn70=
|
||||
google.golang.org/genproto v0.0.0-20221207170731-23e4bf6bdc37/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM=
|
||||
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
|
||||
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
|
||||
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
|
||||
@@ -346,6 +423,8 @@ google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTp
|
||||
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
|
||||
google.golang.org/grpc v1.48.0 h1:rQOsyJ/8+ufEDJd/Gdsz7HG220Mh9HAhFHRGnIjda0w=
|
||||
google.golang.org/grpc v1.48.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
|
||||
google.golang.org/grpc v1.51.0 h1:E1eGv1FTqoLIdnBCZufiSHgKjlqG6fKFf6pPWtMTh8U=
|
||||
google.golang.org/grpc v1.51.0/go.mod h1:wgNDFcnuBGmxLKI/qn4T+m5BtEBYXJPvibbUPsAIPww=
|
||||
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
|
||||
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
|
||||
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
|
||||
@@ -361,6 +440,8 @@ google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQ
|
||||
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
|
||||
google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscLw=
|
||||
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
|
||||
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
|
||||
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
@@ -25,4 +25,7 @@ import "time"
var Timeout30s = time.Second * 30
var Timeout60s = time.Second * 60
var Timeout180s = time.Second * 180
var Timeout300s = time.Second * 300
var Timeout600s = time.Second * 600
var Timeout20m = time.Minute * 20
var Timeout30m = time.Minute * 30
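These package-level durations are consumed by taking their address in each resource's Timeouts block, as the resource hunks further down in this changeset do. A minimal sketch of that wiring, not part of the diff itself (the resource name is hypothetical; the import paths match those used elsewhere in this changeset):

package example

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

// resourceExample is a hypothetical resource showing how the shared
// timeout variables above are referenced by address in a Timeouts block.
func resourceExample() *schema.Resource {
	return &schema.Resource{
		Timeouts: &schema.ResourceTimeout{
			Create:  &constants.Timeout600s,
			Read:    &constants.Timeout300s,
			Default: &constants.Timeout300s,
		},
	}
}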
@@ -26,7 +26,9 @@ import (
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/disks"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/extnet"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/image"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/k8s"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/lb"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/locations"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/rg"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/snapshot"
@@ -38,11 +40,22 @@ func NewDataSourcesMap() map[string]*schema.Resource {
"decort_account": account.DataSourceAccount(),
"decort_resgroup": rg.DataSourceResgroup(),
"decort_kvmvm": kvmvm.DataSourceCompute(),
"decort_k8s": k8s.DataSourceK8s(),
"decort_k8s_list": k8s.DataSourceK8sList(),
"decort_k8s_list_deleted": k8s.DataSourceK8sListDeleted(),
"decort_k8s_wg": k8s.DataSourceK8sWg(),
"decort_k8s_wg_list": k8s.DataSourceK8sWgList(),
"decort_vins": vins.DataSourceVins(),
"decort_snapshot_list": snapshot.DataSourceSnapshotList(),
"decort_disk": disks.DataSourceDisk(),
"decort_disk_list": disks.DataSourceDiskList(),
"decort_rg_list": rg.DataSourceRgList(),
"decort_disk_list_types_detailed": disks.DataSourceDiskListTypesDetailed(),
"decort_disk_list_types": disks.DataSourceDiskListTypes(),
"decort_disk_list_deleted": disks.DataSourceDiskListDeleted(),
"decort_disk_list_unattached": disks.DataSourceDiskListUnattached(),
"decort_disk_snapshot": disks.DataSourceDiskSnapshot(),
"decort_disk_snapshot_list": disks.DataSourceDiskSnapshotList(),
"decort_account_list": account.DataSourceAccountList(),
"decort_account_computes_list": account.DataSourceAccountComputesList(),
"decort_account_disks_list": account.DataSourceAccountDisksList(),
@@ -69,6 +82,9 @@ func NewDataSourcesMap() map[string]*schema.Resource {
"decort_location_url": locations.DataSourceLocationUrl(),
"decort_image_list": image.DataSourceImageList(),
"decort_image": image.DataSourceImage(),
"decort_lb": lb.DataSourceLB(),
"decort_lb_list": lb.DataSourceLBList(),
"decort_lb_list_deleted": lb.DataSourceLBListDeleted(),
// "decort_pfw": dataSourcePfw(),
}
@@ -27,6 +27,7 @@ import (
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/image"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/k8s"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/lb"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/pfw"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/rg"
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/snapshot"
@@ -35,18 +36,24 @@ import (

func NewRersourcesMap() map[string]*schema.Resource {
return map[string]*schema.Resource{
"decort_resgroup": rg.ResourceResgroup(),
"decort_kvmvm": kvmvm.ResourceCompute(),
"decort_disk": disks.ResourceDisk(),
"decort_vins": vins.ResourceVins(),
"decort_pfw": pfw.ResourcePfw(),
"decort_k8s": k8s.ResourceK8s(),
"decort_k8s_wg": k8s.ResourceK8sWg(),
"decort_snapshot": snapshot.ResourceSnapshot(),
"decort_account": account.ResourceAccount(),
"decort_bservice": bservice.ResourceBasicService(),
"decort_bservice_group": bservice.ResourceBasicServiceGroup(),
"decort_image": image.ResourceImage(),
"decort_image_virtual": image.ResourceImageVirtual(),
"decort_resgroup": rg.ResourceResgroup(),
"decort_kvmvm": kvmvm.ResourceCompute(),
"decort_disk": disks.ResourceDisk(),
"decort_disk_snapshot": disks.ResourceDiskSnapshot(),
"decort_vins": vins.ResourceVins(),
"decort_pfw": pfw.ResourcePfw(),
"decort_k8s": k8s.ResourceK8s(),
"decort_k8s_wg": k8s.ResourceK8sWg(),
"decort_snapshot": snapshot.ResourceSnapshot(),
"decort_account": account.ResourceAccount(),
"decort_bservice": bservice.ResourceBasicService(),
"decort_bservice_group": bservice.ResourceBasicServiceGroup(),
"decort_image": image.ResourceImage(),
"decort_image_virtual": image.ResourceImageVirtual(),
"decort_lb": lb.ResourceLB(),
"decort_lb_backend": lb.ResourceLBBackend(),
"decort_lb_backend_server": lb.ResourceLBBackendServer(),
"decort_lb_frontend": lb.ResourceLBFrontend(),
"decort_lb_frontend_bind": lb.ResourceLBFrontendBind(),
}
}
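Both maps are consumed when the provider schema is assembled. The provider entry point itself is not part of this changeset, so the following is only a sketch of the assumed SDKv2 wiring; package qualifiers are omitted (in the repository the two constructors may live in separate packages) and the existing constructor names, including the NewRersourcesMap spelling, are kept as-is:

package provider // assumption: a provider-level package that can see both constructors

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Provider is a sketch of the assumed SDKv2 entry point; the real provider.go
// is not shown in this diff, so all other fields are left out.
func Provider() *schema.Provider {
	return &schema.Provider{
		DataSourcesMap: NewDataSourcesMap(), // from the data sources hunk above
		ResourcesMap:   NewRersourcesMap(),  // from the resources hunk above
	}
}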
@@ -786,15 +786,15 @@ func ResourceAccount() *schema.Resource {
DeleteContext: resourceAccountDelete,

Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
StateContext: schema.ImportStatePassthroughContext,
},

Timeouts: &schema.ResourceTimeout{
Create: &constants.Timeout60s,
Read: &constants.Timeout30s,
Update: &constants.Timeout60s,
Delete: &constants.Timeout60s,
Default: &constants.Timeout60s,
Create: &constants.Timeout600s,
Read: &constants.Timeout300s,
Update: &constants.Timeout300s,
Delete: &constants.Timeout300s,
Default: &constants.Timeout300s,
},

Schema: resourceAccountSchemaMake(),
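Raising these values only helps if the CRUD functions respect them; in SDKv2 the configured duration is read back with d.Timeout. A minimal illustrative sketch, not taken from the provider's actual implementation:

package account // placement is illustrative only

import (
	"context"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// createWithTimeout shows how a Create function can honour the Timeouts block
// configured above; the real polling/waiter logic is not reproduced here.
func createWithTimeout(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	// Effective duration for this operation: constants.Timeout600s unless the
	// user overrides it in the configuration's timeouts block.
	createTimeout := d.Timeout(schema.TimeoutCreate)

	ctx, cancel := context.WithTimeout(ctx, createTimeout)
	defer cancel()

	// ... perform the API call and poll for completion using ctx ...
	_ = ctx
	return nil
}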
@@ -511,15 +511,15 @@ func ResourceBasicService() *schema.Resource {
DeleteContext: resourceBasicServiceDelete,

Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
StateContext: schema.ImportStatePassthroughContext,
},

Timeouts: &schema.ResourceTimeout{
Create: &constants.Timeout60s,
Read: &constants.Timeout30s,
Update: &constants.Timeout60s,
Delete: &constants.Timeout60s,
Default: &constants.Timeout60s,
Create: &constants.Timeout600s,
Read: &constants.Timeout300s,
Update: &constants.Timeout300s,
Delete: &constants.Timeout300s,
Default: &constants.Timeout300s,
},

Schema: resourceBasicServiceSchemaMake(),
@@ -616,15 +616,15 @@ func ResourceBasicServiceGroup() *schema.Resource {
DeleteContext: resourceBasicServiceGroupDelete,

Importer: &schema.ResourceImporter{
State: schema.ImportStatePassthrough,
StateContext: schema.ImportStatePassthroughContext,
},

Timeouts: &schema.ResourceTimeout{
Create: &constants.Timeout60s,
Read: &constants.Timeout30s,
Update: &constants.Timeout60s,
Delete: &constants.Timeout60s,
Default: &constants.Timeout60s,
Create: &constants.Timeout600s,
Read: &constants.Timeout300s,
Update: &constants.Timeout300s,
Delete: &constants.Timeout300s,
Default: &constants.Timeout300s,
},

Schema: resourceBasicServiceGroupSchemaMake(),
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -31,11 +32,19 @@ Documentation: https://github.com/rudecs/terraform-provider-decort/wiki

package disks

const disksCreateAPI = "/restmachine/cloudapi/disks/create"
const disksGetAPI = "/restmachine/cloudapi/disks/get"
const disksListAPI = "/restmachine/cloudapi/disks/list"
const disksResizeAPI = "/restmachine/cloudapi/disks/resize2"
const disksRenameAPI = "/restmachine/cloudapi/disks/rename"
const disksDeleteAPI = "/restmachine/cloudapi/disks/delete"
const disksIOLimitAPI = "/restmachine/cloudapi/disks/limitIO"
const disksRestoreAPI = "/restmachine/cloudapi/disks/restore"
const (
disksCreateAPI = "/restmachine/cloudapi/disks/create"
disksGetAPI = "/restmachine/cloudapi/disks/get"
disksListAPI = "/restmachine/cloudapi/disks/list"
disksResizeAPI = "/restmachine/cloudapi/disks/resize2"
disksRenameAPI = "/restmachine/cloudapi/disks/rename"
disksDeleteAPI = "/restmachine/cloudapi/disks/delete"
disksIOLimitAPI = "/restmachine/cloudapi/disks/limitIO"
disksRestoreAPI = "/restmachine/cloudapi/disks/restore"
disksListTypesAPI = "/restmachine/cloudapi/disks/listTypes"
disksListDeletedAPI = "/restmachine/cloudapi/disks/listDeleted"
disksListUnattachedAPI = "/restmachine/cloudapi/disks/listUnattached"

disksSnapshotDeleteAPI = "/restmachine/cloudapi/disks/snapshotDelete"
disksSnapshotRollbackAPI = "/restmachine/cloudapi/disks/snapshotRollback"
)
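The constants above are relative REST paths on the DECORT controller; a cloudapi call is a form-encoded POST to the controller URL plus one of these paths. The sketch below strips this down to a raw net/http call purely for illustration. The provider itself goes through its own authenticated controller client (not shown in this diff), and the bearer-token header and accountId parameter here are assumptions:

package example

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
)

const disksListAPI = "/restmachine/cloudapi/disks/list"

// listDisksRaw shows what a call against one of the endpoint constants
// boils down to: a form-encoded POST to <controller URL> + path.
// Authentication scheme and parameter names are illustrative only.
func listDisksRaw(controllerURL, token string, accountID int) (string, error) {
	params := url.Values{}
	params.Add("accountId", fmt.Sprint(accountID)) // hypothetical filter parameter

	req, err := http.NewRequest(http.MethodPost, controllerURL+disksListAPI, strings.NewReader(params.Encode()))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.Header.Set("Authorization", "bearer "+token) // assumption: bearer token auth

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	return string(body), err
}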
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -94,7 +95,7 @@ func dataSourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface
d.Set("sep_type", disk.SepType)
d.Set("size_max", disk.SizeMax)
d.Set("size_used", disk.SizeUsed)
d.Set("snapshots", flattendDiskSnapshotList(disk.Snapshots))
d.Set("snapshots", flattenDiskSnapshotList(disk.Snapshots))
d.Set("status", disk.Status)
d.Set("tech_status", disk.TechStatus)
d.Set("type", disk.Type)
@@ -106,68 +107,83 @@ func dataSourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface
|
||||
func dataSourceDiskSchemaMake() map[string]*schema.Schema {
|
||||
rets := map[string]*schema.Schema{
|
||||
"disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "The unique ID of the subscriber-owner of the disk",
|
||||
},
|
||||
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The unique ID of the subscriber-owner of the disk",
|
||||
},
|
||||
"account_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The name of the subscriber '(account') to whom this disk belongs",
|
||||
},
|
||||
"acl": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"boot_partition": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of disk partitions",
|
||||
},
|
||||
"compute_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Compute ID",
|
||||
},
|
||||
"compute_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Compute name",
|
||||
},
|
||||
"created_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Created time",
|
||||
},
|
||||
"deleted_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Deleted time",
|
||||
},
|
||||
"desc": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Description of disk",
|
||||
},
|
||||
"destruction_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Time of final deletion",
|
||||
},
|
||||
"devicename": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the device",
|
||||
},
|
||||
"disk_path": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk path",
|
||||
},
|
||||
"gid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the grid (platform)",
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk ID on the storage side",
|
||||
},
|
||||
"image_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Image ID",
|
||||
},
|
||||
"images": {
|
||||
Type: schema.TypeList,
|
||||
@@ -175,6 +191,7 @@ func dataSourceDiskSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
Description: "IDs of images using the disk",
|
||||
},
|
||||
"iotune": {
|
||||
Type: schema.TypeList,
|
||||
@@ -182,143 +199,177 @@ func dataSourceDiskSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"read_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of bytes to read per second",
|
||||
},
|
||||
"read_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of bytes to read",
|
||||
},
|
||||
"read_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of io read operations per second",
|
||||
},
|
||||
"read_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of io read operations",
|
||||
},
|
||||
"size_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Size of io operations",
|
||||
},
|
||||
"total_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Total size bytes per second",
|
||||
},
|
||||
"total_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum total size of bytes per second",
|
||||
},
|
||||
"total_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Total number of io operations per second",
|
||||
},
|
||||
"total_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum total number of io operations per second",
|
||||
},
|
||||
"write_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of bytes to write per second",
|
||||
},
|
||||
"write_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of bytes to write per second",
|
||||
},
|
||||
"write_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of write operations per second",
|
||||
},
|
||||
"write_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of write operations per second",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"iqn": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk IQN",
|
||||
},
|
||||
"login": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Login to access the disk",
|
||||
},
|
||||
"milestones": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Milestones",
|
||||
},
|
||||
"disk_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of disk",
|
||||
},
|
||||
"order": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk order",
|
||||
},
|
||||
"params": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk params",
|
||||
},
|
||||
"parent_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the parent disk",
|
||||
},
|
||||
"passwd": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Password to access the disk",
|
||||
},
|
||||
"pci_slot": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the pci slot to which the disk is connected",
|
||||
},
|
||||
"pool": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Pool for disk location",
|
||||
},
|
||||
"purge_attempts": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of deletion attempts",
|
||||
},
|
||||
"purge_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Time of the last deletion attempt",
|
||||
},
|
||||
"reality_device_number": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Reality device number",
|
||||
},
|
||||
"reference_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the reference to the disk",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Resource ID",
|
||||
},
|
||||
"res_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the resource",
|
||||
},
|
||||
"role": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk role",
|
||||
},
|
||||
"sep_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Storage endpoint provider ID to create disk",
|
||||
},
|
||||
"sep_type": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Type SEP. Defines the type of storage system and contains one of the values set in the cloud platform",
|
||||
},
|
||||
"size_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Size in GB",
|
||||
},
|
||||
"size_used": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of used space, in GB",
|
||||
},
|
||||
"snapshots": {
|
||||
Type: schema.TypeList,
|
||||
@@ -326,47 +377,57 @@ func dataSourceDiskSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the snapshot",
|
||||
},
|
||||
"label": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the snapshot",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Reference to the snapshot",
|
||||
},
|
||||
"snap_set_guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The set snapshot ID",
|
||||
},
|
||||
"snap_set_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The set time of the snapshot",
|
||||
},
|
||||
"timestamp": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Snapshot time",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk status",
|
||||
},
|
||||
"tech_status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Technical status of the disk",
|
||||
},
|
||||
"type": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The type of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
|
||||
},
|
||||
"vmid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Virtual Machine ID (Deprecated)",
|
||||
},
|
||||
}
|
||||
|
||||
|
||||
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -109,7 +110,7 @@ func flattenDiskList(dl DisksList) []map[string]interface{} {
"sep_type": disk.SepType,
"size_max": disk.SizeMax,
"size_used": disk.SizeUsed,
"snapshots": flattendDiskSnapshotList(disk.Snapshots),
"snapshots": flattenDiskSnapshotList(disk.Snapshots),
"status": disk.Status,
"tech_status": disk.TechStatus,
"type": disk.Type,
@@ -121,7 +122,7 @@ func flattenDiskList(dl DisksList) []map[string]interface{} {

}

func flattendDiskSnapshotList(sl SnapshotList) []interface{} {
func flattenDiskSnapshotList(sl SnapshotList) []interface{} {
res := make([]interface{}, 0)
for _, snapshot := range sl {
temp := map[string]interface{}{
@@ -140,7 +141,7 @@ func flattendDiskSnapshotList(sl SnapshotList) []interface{} {
}

func dataSourceDiskListRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
diskList, err := utilityDiskListCheckPresence(ctx, d, m)
diskList, err := utilityDiskListCheckPresence(ctx, d, m, disksListAPI)
if err != nil {
return diag.FromErr(err)
}
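The hunks above rename the snapshot flattener and thread the endpoint constant into the list helper; for orientation, a condensed sketch of the surrounding list data source pattern follows. The "items" attribute name and the UUID-based ID are assumptions made for illustration (the uuid import does appear in the new files added later in this changeset), and d.Set error handling is written out explicitly:

package disks

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// dataSourceDiskListReadSketch condenses the pattern used in this file:
// fetch the list, flatten it, d.Set it, then give the data source a
// synthetic ID so Terraform can track it.
func dataSourceDiskListReadSketch(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	diskList, err := utilityDiskListCheckPresence(ctx, d, m, disksListAPI)
	if err != nil {
		return diag.FromErr(err)
	}

	if err := d.Set("items", flattenDiskList(diskList)); err != nil {
		return diag.FromErr(err)
	}

	d.SetId(uuid.New().String()) // assumption: synthetic UUID id
	return nil
}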
@@ -180,68 +181,83 @@ func dataSourceDiskListSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The unique ID of the subscriber-owner of the disk",
|
||||
},
|
||||
"account_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The name of the subscriber '(account') to whom this disk belongs",
|
||||
},
|
||||
"acl": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"boot_partition": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of disk partitions",
|
||||
},
|
||||
"compute_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Compute ID",
|
||||
},
|
||||
"compute_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Compute name",
|
||||
},
|
||||
"created_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Created time",
|
||||
},
|
||||
"deleted_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Deleted time",
|
||||
},
|
||||
"desc": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Description of disk",
|
||||
},
|
||||
"destruction_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Time of final deletion",
|
||||
},
|
||||
"devicename": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the device",
|
||||
},
|
||||
"disk_path": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk path",
|
||||
},
|
||||
"gid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the grid (platform)",
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk ID on the storage side",
|
||||
},
|
||||
"disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The unique ID of the subscriber-owner of the disk",
|
||||
},
|
||||
"image_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Image ID",
|
||||
},
|
||||
"images": {
|
||||
Type: schema.TypeList,
|
||||
@@ -249,6 +265,7 @@ func dataSourceDiskListSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
Description: "IDs of images using the disk",
|
||||
},
|
||||
"iotune": {
|
||||
Type: schema.TypeList,
|
||||
@@ -256,151 +273,187 @@ func dataSourceDiskListSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"read_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of bytes to read per second",
|
||||
},
|
||||
"read_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of bytes to read",
|
||||
},
|
||||
"read_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of io read operations per second",
|
||||
},
|
||||
"read_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of io read operations",
|
||||
},
|
||||
"size_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Size of io operations",
|
||||
},
|
||||
"total_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Total size bytes per second",
|
||||
},
|
||||
"total_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum total size of bytes per second",
|
||||
},
|
||||
"total_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Total number of io operations per second",
|
||||
},
|
||||
"total_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum total number of io operations per second",
|
||||
},
|
||||
"write_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of bytes to write per second",
|
||||
},
|
||||
"write_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of bytes to write per second",
|
||||
},
|
||||
"write_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of write operations per second",
|
||||
},
|
||||
"write_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of write operations per second",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"iqn": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk IQN",
|
||||
},
|
||||
"login": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Login to access the disk",
|
||||
},
|
||||
"machine_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Machine ID",
|
||||
},
|
||||
"machine_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Machine name",
|
||||
},
|
||||
"milestones": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Milestones",
|
||||
},
|
||||
"disk_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of disk",
|
||||
},
|
||||
"order": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk order",
|
||||
},
|
||||
"params": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk params",
|
||||
},
|
||||
"parent_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the parent disk",
|
||||
},
|
||||
"passwd": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Password to access the disk",
|
||||
},
|
||||
"pci_slot": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the pci slot to which the disk is connected",
|
||||
},
|
||||
"pool": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Pool for disk location",
|
||||
},
|
||||
"purge_attempts": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of deletion attempts",
|
||||
},
|
||||
"purge_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Time of the last deletion attempt",
|
||||
},
|
||||
"reality_device_number": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Reality device number",
|
||||
},
|
||||
"reference_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the reference to the disk",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Resource ID",
|
||||
},
|
||||
"res_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the resource",
|
||||
},
|
||||
"role": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk role",
|
||||
},
|
||||
"sep_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Storage endpoint provider ID to create disk",
|
||||
},
|
||||
"sep_type": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Type SEP. Defines the type of storage system and contains one of the values set in the cloud platform",
|
||||
},
|
||||
"size_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Size in GB",
|
||||
},
|
||||
"size_used": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of used space, in GB",
|
||||
},
|
||||
"snapshots": {
|
||||
Type: schema.TypeList,
|
||||
@@ -408,47 +461,57 @@ func dataSourceDiskListSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the snapshot",
|
||||
},
|
||||
"label": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the snapshot",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Reference to the snapshot",
|
||||
},
|
||||
"snap_set_guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The set snapshot ID",
|
||||
},
|
||||
"snap_set_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The set time of the snapshot",
|
||||
},
|
||||
"timestamp": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Snapshot time",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk status",
|
||||
},
|
||||
"tech_status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Technical status of the disk",
|
||||
},
|
||||
"type": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The type of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
|
||||
},
|
||||
"vmid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Virtual Machine ID (Deprecated)",
|
||||
},
|
||||
},
|
||||
},
|
||||
|
||||
@@ -0,0 +1,82 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func dataSourceDiskListTypesRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	listTypes, err := utilityDiskListTypesCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	id := uuid.New()
	d.SetId(id.String())
	d.Set("types", listTypes)
	return nil
}

func dataSourceDiskListTypesSchemaMake() map[string]*schema.Schema {
	res := map[string]*schema.Schema{
		"types": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Schema{
				Type: schema.TypeString,
			},
			Description: "The types of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
		},
	}
	return res
}

func DataSourceDiskListTypes() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		ReadContext:   dataSourceDiskListTypesRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dataSourceDiskListTypesSchemaMake(),
	}
}
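A minimal usage sketch for the new disk-types data source. The registration name is not shown in this diff, so "decort_disk_list_types" is an assumption:

data "decort_disk_list_types" "types" {}

output "disk_types" {
  # Expected to hold the list of role letters, e.g. ["B", "D", "T"]
  value = data.decort_disk_list_types.types.types
}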
@@ -0,0 +1,133 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func flattenDiskListTypesDetailed(tld TypesDetailedList) []map[string]interface{} {
	res := make([]map[string]interface{}, 0)
	for _, typeListDetailed := range tld {
		temp := map[string]interface{}{
			"pools":  flattenListTypesDetailedPools(typeListDetailed.Pools),
			"sep_id": typeListDetailed.SepID,
		}
		res = append(res, temp)
	}
	return res
}

func flattenListTypesDetailedPools(pools PoolList) []interface{} {
	res := make([]interface{}, 0)
	for _, pool := range pools {
		temp := map[string]interface{}{
			"name":  pool.Name,
			"types": pool.Types,
		}
		res = append(res, temp)
	}

	return res
}

func dataSourceDiskListTypesDetailedRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	listTypesDetailed, err := utilityDiskListTypesDetailedCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	id := uuid.New()
	d.SetId(id.String())
	d.Set("items", flattenDiskListTypesDetailed(listTypesDetailed))
	return nil
}

func dataSourceDiskListTypesDetailedSchemaMake() map[string]*schema.Schema {
	res := map[string]*schema.Schema{
		"items": {
			Type:     schema.TypeList,
			Computed: true,
			Elem: &schema.Resource{
				Schema: map[string]*schema.Schema{
					"pools": {
						Type:     schema.TypeList,
						Computed: true,
						Elem: &schema.Resource{
							Schema: map[string]*schema.Schema{
								"name": {
									Type:        schema.TypeString,
									Computed:    true,
									Description: "Pool name",
								},
								"types": {
									Type:     schema.TypeList,
									Computed: true,
									Elem: &schema.Schema{
										Type: schema.TypeString,
									},
									Description: "The types of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
								},
							},
						},
					},
					"sep_id": {
						Type:        schema.TypeInt,
						Computed:    true,
						Description: "Storage endpoint provider ID to create disk",
					},
				},
			},
		},
	}
	return res
}

func DataSourceDiskListTypesDetailed() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		ReadContext:   dataSourceDiskListTypesDetailedRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dataSourceDiskListTypesDetailedSchemaMake(),
	}
}
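A hedged sketch of how the detailed variant might be consumed, assuming it is registered as "decort_disk_list_types_detailed" (name not shown in this diff); each item carries a SEP ID plus the pools and disk types allowed in each pool:

data "decort_disk_list_types_detailed" "dt" {}

output "first_sep_pools" {
  # Pools (with their allowed types) of the first storage endpoint provider returned
  value = data.decort_disk_list_types_detailed.dt.items[0].pools
}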
@@ -0,0 +1,485 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package disks
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"net/url"
|
||||
"strconv"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/flattens"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func utilityDiskListUnattachedCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (UnattachedList, error) {
|
||||
unattachedList := UnattachedList{}
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
if accountId, ok := d.GetOk("accountId"); ok {
|
||||
urlValues.Add("accountId", strconv.Itoa(accountId.(int)))
|
||||
}
|
||||
|
||||
log.Debugf("utilityDiskListUnattachedCheckPresence: load disk Unattached list")
|
||||
unattachedListRaw, err := c.DecortAPICall(ctx, "POST", disksListUnattachedAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = json.Unmarshal([]byte(unattachedListRaw), &unattachedList)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return unattachedList, nil
|
||||
}
|
||||
|
||||
func flattenDiskListUnattached(ul UnattachedList) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
for _, unattachedDisk := range ul {
|
||||
unattachedDiskAcl, _ := json.Marshal(unattachedDisk.Acl)
|
||||
tmp := map[string]interface{}{
|
||||
"_ckey": unattachedDisk.Ckey,
|
||||
"_meta": flattens.FlattenMeta(unattachedDisk.Meta),
|
||||
"account_id": unattachedDisk.AccountID,
|
||||
"account_name": unattachedDisk.AccountName,
|
||||
"acl": string(unattachedDiskAcl),
|
||||
"boot_partition": unattachedDisk.BootPartition,
|
||||
"created_time": unattachedDisk.CreatedTime,
|
||||
"deleted_time": unattachedDisk.DeletedTime,
|
||||
"desc": unattachedDisk.Desc,
|
||||
"destruction_time": unattachedDisk.DestructionTime,
|
||||
"disk_path": unattachedDisk.DiskPath,
|
||||
"gid": unattachedDisk.GridID,
|
||||
"guid": unattachedDisk.GUID,
|
||||
"disk_id": unattachedDisk.ID,
|
||||
"image_id": unattachedDisk.ImageID,
|
||||
"images": unattachedDisk.Images,
|
||||
"iotune": flattenIOTune(unattachedDisk.IOTune),
|
||||
"iqn": unattachedDisk.IQN,
|
||||
"login": unattachedDisk.Login,
|
||||
"milestones": unattachedDisk.Milestones,
|
||||
"disk_name": unattachedDisk.Name,
|
||||
"order": unattachedDisk.Order,
|
||||
"params": unattachedDisk.Params,
|
||||
"parent_id": unattachedDisk.ParentID,
|
||||
"passwd": unattachedDisk.Passwd,
|
||||
"pci_slot": unattachedDisk.PciSlot,
|
||||
"pool": unattachedDisk.Pool,
|
||||
"purge_attempts": unattachedDisk.PurgeAttempts,
|
||||
"purge_time": unattachedDisk.PurgeTime,
|
||||
"reality_device_number": unattachedDisk.RealityDeviceNumber,
|
||||
"reference_id": unattachedDisk.ReferenceID,
|
||||
"res_id": unattachedDisk.ResID,
|
||||
"res_name": unattachedDisk.ResName,
|
||||
"role": unattachedDisk.Role,
|
||||
"sep_id": unattachedDisk.SepID,
|
||||
"size_max": unattachedDisk.SizeMax,
|
||||
"size_used": unattachedDisk.SizeUsed,
|
||||
"snapshots": flattenDiskSnapshotList(unattachedDisk.Snapshots),
|
||||
"status": unattachedDisk.Status,
|
||||
"tech_status": unattachedDisk.TechStatus,
|
||||
"type": unattachedDisk.Type,
|
||||
"vmid": unattachedDisk.VMID,
|
||||
}
|
||||
res = append(res, tmp)
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
func dataSourceDiskListUnattachedRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
diskListUnattached, err := utilityDiskListUnattachedCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
id := uuid.New()
|
||||
d.SetId(id.String())
|
||||
d.Set("items", flattenDiskListUnattached(diskListUnattached))
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func DataSourceDiskListUnattached() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceDiskListUnattachedRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dataSourceDiskListUnattachedSchemaMake(),
|
||||
}
|
||||
}
|
||||
|
||||
func dataSourceDiskListUnattachedSchemaMake() map[string]*schema.Schema {
|
||||
res := map[string]*schema.Schema{
|
||||
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Description: "ID of the account the disks belong to",
|
||||
},
|
||||
|
||||
"items": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"_ckey": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "CKey",
|
||||
},
|
||||
"_meta": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
Description: "Meta parameters",
|
||||
},
|
||||
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the account the disks belong to",
|
||||
},
|
||||
"account_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The name of the subscriber '(account') to whom this disk belongs",
|
||||
},
|
||||
"acl": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"boot_partition": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of disk partitions",
|
||||
},
|
||||
"created_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Created time",
|
||||
},
|
||||
"deleted_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Deleted time",
|
||||
},
|
||||
"desc": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Description of disk",
|
||||
},
|
||||
"destruction_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Time of final deletion",
|
||||
},
|
||||
"disk_path": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk path",
|
||||
},
|
||||
"gid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the grid (platform)",
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk ID on the storage side",
|
||||
},
|
||||
"disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The unique ID of the subscriber-owner of the disk",
|
||||
},
|
||||
"image_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Image ID",
|
||||
},
|
||||
"images": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
Description: "IDs of images using the disk",
|
||||
},
|
||||
"iotune": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"read_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of bytes to read per second",
|
||||
},
|
||||
"read_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of bytes to read",
|
||||
},
|
||||
"read_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of io read operations per second",
|
||||
},
|
||||
"read_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of io read operations",
|
||||
},
|
||||
"size_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Size of io operations",
|
||||
},
|
||||
"total_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Total size bytes per second",
|
||||
},
|
||||
"total_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum total size of bytes per second",
|
||||
},
|
||||
"total_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Total number of io operations per second",
|
||||
},
|
||||
"total_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum total number of io operations per second",
|
||||
},
|
||||
"write_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of bytes to write per second",
|
||||
},
|
||||
"write_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of bytes to write per second",
|
||||
},
|
||||
"write_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of write operations per second",
|
||||
},
|
||||
"write_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Maximum number of write operations per second",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"iqn": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk IQN",
|
||||
},
|
||||
"login": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Login to access the disk",
|
||||
},
|
||||
"milestones": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Milestones",
|
||||
},
|
||||
"disk_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of disk",
|
||||
},
|
||||
"order": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk order",
|
||||
},
|
||||
"params": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk params",
|
||||
},
|
||||
"parent_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the parent disk",
|
||||
},
|
||||
"passwd": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Password to access the disk",
|
||||
},
|
||||
"pci_slot": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the pci slot to which the disk is connected",
|
||||
},
|
||||
"pool": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Pool for disk location",
|
||||
},
|
||||
"purge_attempts": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of deletion attempts",
|
||||
},
|
||||
"purge_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Time of the last deletion attempt",
|
||||
},
|
||||
"reality_device_number": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Reality device number",
|
||||
},
|
||||
"reference_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the reference to the disk",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Resource ID",
|
||||
},
|
||||
"res_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the resource",
|
||||
},
|
||||
"role": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk role",
|
||||
},
|
||||
"sep_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Storage endpoint provider ID to create disk",
|
||||
},
|
||||
"size_max": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Size in GB",
|
||||
},
|
||||
"size_used": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of used space, in GB",
|
||||
},
|
||||
"snapshots": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the snapshot",
|
||||
},
|
||||
"label": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the snapshot",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Reference to the snapshot",
|
||||
},
|
||||
"snap_set_guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The set snapshot ID",
|
||||
},
|
||||
"snap_set_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The set time of the snapshot",
|
||||
},
|
||||
"timestamp": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Snapshot time",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk status",
|
||||
},
|
||||
"tech_status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Technical status of the disk",
|
||||
},
|
||||
"type": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The type of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
|
||||
},
|
||||
"vmid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Virtual Machine ID (Deprecated)",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
return res
|
||||
}
|
||||
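A usage sketch for the unattached-disk list, assuming the data source name "decort_disk_list_unattached". Note that the schema exposes an optional "account_id" argument while the utility function reads the "accountId" key, so the filter may not be applied as written:

data "decort_disk_list_unattached" "unattached" {
  account_id = 123 # hypothetical account ID
}

output "unattached_disk_ids" {
  value = [for d in data.decort_disk_list_unattached.unattached.items : d.disk_id]
}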
129 internal/service/cloudapi/disks/data_source_disk_snapshot.go Normal file
@@ -0,0 +1,129 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package disks
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
)
|
||||
|
||||
func dataSourceDiskSnapshotRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
disk, err := utilityDiskCheckPresence(ctx, d, m)
|
||||
if disk == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
snapshots := disk.Snapshots
|
||||
snapshot := Snapshot{}
|
||||
label := d.Get("label").(string)
|
||||
for _, sn := range snapshots {
|
||||
if label == sn.Label {
|
||||
snapshot = sn
|
||||
break
|
||||
}
|
||||
}
|
||||
if label != snapshot.Label {
|
||||
return diag.Errorf("Snapshot with label \"%v\" not found", label)
|
||||
}
|
||||
|
||||
id := uuid.New()
|
||||
d.SetId(id.String())
|
||||
d.Set("timestamp", snapshot.TimeStamp)
|
||||
d.Set("guid", snapshot.Guid)
|
||||
d.Set("res_id", snapshot.ResId)
|
||||
d.Set("snap_set_guid", snapshot.SnapSetGuid)
|
||||
d.Set("snap_set_time", snapshot.SnapSetTime)
|
||||
return nil
|
||||
}
|
||||
|
||||
func DataSourceDiskSnapshot() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceDiskSnapshotRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dataSourceDiskSnapshotSchemaMake(),
|
||||
}
|
||||
}
|
||||
|
||||
func dataSourceDiskSnapshotSchemaMake() map[string]*schema.Schema {
|
||||
rets := map[string]*schema.Schema{
|
||||
"disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "The unique ID of the subscriber-owner of the disk",
|
||||
},
|
||||
"label": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Name of the snapshot",
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the snapshot",
|
||||
},
|
||||
"timestamp": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Snapshot time",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Reference to the snapshot",
|
||||
},
|
||||
"snap_set_guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The set snapshot ID",
|
||||
},
|
||||
"snap_set_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The set time of the snapshot",
|
||||
},
|
||||
}
|
||||
return rets
|
||||
}
|
||||
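A sketch of reading a single snapshot by disk ID and label, assuming the data source is registered as "decort_disk_snapshot"; both "disk_id" and "label" are required by the schema above, and the read fails if no snapshot with that label exists:

data "decort_disk_snapshot" "snap" {
  disk_id = 20100   # hypothetical disk ID
  label   = "daily" # snapshot label to look up
}

output "snap_timestamp" {
  value = data.decort_disk_snapshot.snap.timestamp
}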
@@ -0,0 +1,121 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package disks
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
)
|
||||
|
||||
func dataSourceDiskSnapshotListRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
disk, err := utilityDiskCheckPresence(ctx, d, m)
|
||||
if disk == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
id := uuid.New()
|
||||
d.SetId(id.String())
|
||||
d.Set("items", flattenDiskSnapshotList(disk.Snapshots))
|
||||
return nil
|
||||
}
|
||||
|
||||
func DataSourceDiskSnapshotList() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceDiskSnapshotListRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dataSourceDiskSnapshotListSchemaMake(),
|
||||
}
|
||||
}
|
||||
|
||||
func dataSourceDiskSnapshotListSchemaMake() map[string]*schema.Schema {
|
||||
rets := map[string]*schema.Schema{
|
||||
"disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "The unique ID of the subscriber-owner of the disk",
|
||||
},
|
||||
"items": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"label": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the snapshot",
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the snapshot",
|
||||
},
|
||||
"timestamp": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Snapshot time",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Reference to the snapshot",
|
||||
},
|
||||
"snap_set_guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The set snapshot ID",
|
||||
},
|
||||
"snap_set_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The set time of the snapshot",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
return rets
|
||||
}
|
||||
69 internal/service/cloudapi/disks/data_source_list_deleted.go Normal file
@@ -0,0 +1,69 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package disks

import (
	"context"

	"github.com/google/uuid"
	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
	"github.com/rudecs/terraform-provider-decort/internal/constants"
)

func dataSourceDiskListDeletedRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	diskList, err := utilityDiskListCheckPresence(ctx, d, m, disksListDeletedAPI)
	if err != nil {
		return diag.FromErr(err)
	}

	id := uuid.New()
	d.SetId(id.String())
	d.Set("items", flattenDiskList(diskList))

	return nil
}

func DataSourceDiskListDeleted() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		ReadContext:   dataSourceDiskListDeletedRead,

		Timeouts: &schema.ResourceTimeout{
			Read:    &constants.Timeout30s,
			Default: &constants.Timeout60s,
		},

		Schema: dataSourceDiskListSchemaMake(),
	}
}
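A short sketch for listing deleted disks, assuming the data source name "decort_disk_list_deleted"; it reuses the regular disk-list schema, so each item exposes the same fields as the main list:

data "decort_disk_list_deleted" "deleted" {}

output "deleted_disk_names" {
  value = [for d in data.decort_disk_list_deleted.deleted.items : d.disk_name]
}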
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -109,3 +110,66 @@ type IOTune struct {
	WriteIopsSec    int `json:"write_iops_sec"`
	WriteIopsSecMax int `json:"write_iops_sec_max"`
}

type Pool struct {
	Name  string   `json:"name"`
	Types []string `json:"types"`
}

type PoolList []Pool

type TypeDetailed struct {
	Pools []Pool `json:"pools"`
	SepID int    `json:"sepId"`
}

type TypesDetailedList []TypeDetailed

type TypesList []string

type Unattached struct {
	Ckey                string                 `json:"_ckey"`
	Meta                []interface{}          `json:"_meta"`
	AccountID           int                    `json:"accountId"`
	AccountName         string                 `json:"accountName"`
	Acl                 map[string]interface{} `json:"acl"`
	BootPartition       int                    `json:"bootPartition"`
	CreatedTime         int                    `json:"createdTime"`
	DeletedTime         int                    `json:"deletedTime"`
	Desc                string                 `json:"desc"`
	DestructionTime     int                    `json:"destructionTime"`
	DiskPath            string                 `json:"diskPath"`
	GridID              int                    `json:"gid"`
	GUID                int                    `json:"guid"`
	ID                  int                    `json:"id"`
	ImageID             int                    `json:"imageId"`
	Images              []int                  `json:"images"`
	IOTune              IOTune                 `json:"iotune"`
	IQN                 string                 `json:"iqn"`
	Login               string                 `json:"login"`
	Milestones          int                    `json:"milestones"`
	Name                string                 `json:"name"`
	Order               int                    `json:"order"`
	Params              string                 `json:"params"`
	ParentID            int                    `json:"parentId"`
	Passwd              string                 `json:"passwd"`
	PciSlot             int                    `json:"pciSlot"`
	Pool                string                 `json:"pool"`
	PurgeAttempts       int                    `json:"purgeAttempts"`
	PurgeTime           int                    `json:"purgeTime"`
	RealityDeviceNumber int                    `json:"realityDeviceNumber"`
	ReferenceID         string                 `json:"referenceId"`
	ResID               string                 `json:"resId"`
	ResName             string                 `json:"resName"`
	Role                string                 `json:"role"`
	SepID               int                    `json:"sepId"`
	SizeMax             int                    `json:"sizeMax"`
	SizeUsed            int                    `json:"sizeUsed"`
	Snapshots           []Snapshot             `json:"snapshots"`
	Status              string                 `json:"status"`
	TechStatus          string                 `json:"techStatus"`
	Type                string                 `json:"type"`
	VMID                int                    `json:"vmid"`
}

type UnattachedList []Unattached
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -41,6 +42,7 @@ import (

	"github.com/rudecs/terraform-provider-decort/internal/constants"
	"github.com/rudecs/terraform-provider-decort/internal/controller"
	"github.com/rudecs/terraform-provider-decort/internal/status"
	log "github.com/sirupsen/logrus"

	"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
@@ -119,6 +121,9 @@ func resourceDiskCreate(ctx context.Context, d *schema.ResourceData, m interface
}

func resourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	urlValues := &url.Values{}
	c := m.(*controller.ControllerCfg)

	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		d.SetId("")
@@ -128,6 +133,28 @@ func resourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface{}
		return nil
	}

	if disk.Status == status.Destroyed || disk.Status == status.Purged {
		d.Set("disk_id", 0)
		return resourceDiskCreate(ctx, d, m)
	} else if disk.Status == status.Deleted {
		urlValues.Add("diskId", d.Id())
		urlValues.Add("reason", d.Get("reason").(string))

		_, err := c.DecortAPICall(ctx, "POST", disksRestoreAPI, urlValues)
		if err != nil {
			return diag.FromErr(err)
		}
		urlValues = &url.Values{}
		disk, err = utilityDiskCheckPresence(ctx, d, m)
		if disk == nil {
			d.SetId("")
			if err != nil {
				return diag.FromErr(err)
			}
			return nil
		}
	}

	diskAcl, _ := json.Marshal(disk.Acl)

	d.Set("account_id", disk.AccountID)
@@ -169,7 +196,7 @@ func resourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface{}
	d.Set("sep_type", disk.SepType)
	d.Set("size_max", disk.SizeMax)
	d.Set("size_used", disk.SizeUsed)
-	d.Set("snapshots", flattendDiskSnapshotList(disk.Snapshots))
+	d.Set("snapshots", flattenDiskSnapshotList(disk.Snapshots))
	d.Set("status", disk.Status)
	d.Set("tech_status", disk.TechStatus)
	d.Set("type", disk.Type)
@@ -179,9 +206,27 @@ func resourceDiskRead(ctx context.Context, d *schema.ResourceData, m interface{}
}

func resourceDiskUpdate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {

	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		if err != nil {
			return diag.FromErr(err)
		}
		return nil
	}
	if disk.Status == status.Destroyed || disk.Status == status.Purged {
		return resourceDiskCreate(ctx, d, m)
	} else if disk.Status == status.Deleted {
		urlValues.Add("diskId", d.Id())
		urlValues.Add("reason", d.Get("reason").(string))

		_, err := c.DecortAPICall(ctx, "POST", disksRestoreAPI, urlValues)
		if err != nil {
			return diag.FromErr(err)
		}
		urlValues = &url.Values{}
	}

	if d.HasChange("size_max") {
		oldSize, newSize := d.GetChange("size_max")
@@ -238,26 +283,10 @@ func resourceDiskUpdate(ctx context.Context, d *schema.ResourceData, m interface
		urlValues = &url.Values{}
	}

	if d.HasChange("restore") {
		if d.Get("restore").(bool) {
			urlValues.Add("diskId", d.Id())
			urlValues.Add("reason", d.Get("reason").(string))

			_, err := c.DecortAPICall(ctx, "POST", disksRestoreAPI, urlValues)
			if err != nil {
				return diag.FromErr(err)
			}

			urlValues = &url.Values{}
		}

	}

	return resourceDiskRead(ctx, d, m)
}

func resourceDiskDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {

	disk, err := utilityDiskCheckPresence(ctx, d, m)
	if disk == nil {
		if err != nil {
@@ -265,7 +294,9 @@ func resourceDiskDelete(ctx context.Context, d *schema.ResourceData, m interface
		}
		return nil
	}

	if disk.Status == status.Destroyed || disk.Status == status.Purged {
		return nil
	}
	params := &url.Values{}
	params.Add("diskId", d.Id())
	params.Add("detach", strconv.FormatBool(d.Get("detach").(bool)))
@@ -277,126 +308,141 @@ func resourceDiskDelete(ctx context.Context, d *schema.ResourceData, m interface
	if err != nil {
		return diag.FromErr(err)
	}

	return nil
}

func resourceDiskSchemaMake() map[string]*schema.Schema {
	rets := map[string]*schema.Schema{
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "The unique ID of the subscriber-owner of the disk",
|
||||
},
|
||||
"disk_name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Name of disk",
|
||||
},
|
||||
"size_max": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "Size in GB",
|
||||
},
|
||||
"gid": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "ID of the grid (platform)",
|
||||
},
|
||||
"pool": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Pool for disk location",
|
||||
},
|
||||
"sep_id": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Storage endpoint provider ID to create disk",
|
||||
},
|
||||
"desc": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Description of disk",
|
||||
},
|
||||
"type": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
ValidateFunc: validation.StringInSlice([]string{"D", "B", "T"}, false),
|
||||
Description: "The type of disk in terms of its role in compute: 'B=Boot, D=Data, T=Temp'",
|
||||
},
|
||||
|
||||
"detach": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: false,
|
||||
Description: "detach disk from machine first",
|
||||
Description: "Detaching the disk from compute",
|
||||
},
|
||||
"permanently": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: false,
|
||||
Description: "whether to completely delete the disk, works only with non attached disks",
|
||||
Description: "Whether to completely delete the disk, works only with non attached disks",
|
||||
},
|
||||
"reason": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Default: "",
|
||||
Description: "reason for an action",
|
||||
},
|
||||
"restore": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: false,
|
||||
Description: "restore deleting disk",
|
||||
Description: "Reason for deletion",
|
||||
},
|
||||
|
||||
"disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk ID. Duplicates the value of the ID parameter",
|
||||
},
|
||||
"account_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The name of the subscriber '(account') to whom this disk belongs",
|
||||
},
|
||||
"acl": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"boot_partition": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of disk partitions",
|
||||
},
|
||||
"compute_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Compute ID",
|
||||
},
|
||||
"compute_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Compute name",
|
||||
},
|
||||
"created_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Created time",
|
||||
},
|
||||
"deleted_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Deleted time",
|
||||
},
|
||||
"destruction_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Time of final deletion",
|
||||
},
|
||||
"devicename": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the device",
|
||||
},
|
||||
"disk_path": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk path",
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk ID on the storage side",
|
||||
},
|
||||
"image_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Image ID",
|
||||
},
|
||||
"images": {
|
||||
Type: schema.TypeList,
|
||||
@@ -404,6 +450,7 @@ func resourceDiskSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
Description: "IDs of images using the disk",
|
||||
},
|
||||
"iotune": {
|
||||
Type: schema.TypeList,
|
||||
@@ -413,143 +460,171 @@ func resourceDiskSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"read_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Number of bytes to read per second",
|
||||
},
|
||||
"read_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Maximum number of bytes to read",
|
||||
},
|
||||
"read_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Number of io read operations per second",
|
||||
},
|
||||
"read_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Maximum number of io read operations",
|
||||
},
|
||||
"size_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Size of io operations",
|
||||
},
|
||||
"total_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Total size bytes per second",
|
||||
},
|
||||
"total_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Maximum total size of bytes per second",
|
||||
},
|
||||
"total_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Total number of io operations per second",
|
||||
},
|
||||
"total_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Maximum total number of io operations per second",
|
||||
},
|
||||
"write_bytes_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Number of bytes to write per second",
|
||||
},
|
||||
"write_bytes_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Maximum number of bytes to write per second",
|
||||
},
|
||||
"write_iops_sec": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Number of write operations per second",
|
||||
},
|
||||
"write_iops_sec_max": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Maximum number of write operations per second",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"iqn": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk IQN",
|
||||
},
|
||||
"login": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Login to access the disk",
|
||||
},
|
||||
"milestones": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Milestones",
|
||||
},
|
||||
|
||||
"order": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk order",
|
||||
},
|
||||
"params": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk params",
|
||||
},
|
||||
"parent_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the parent disk",
|
||||
},
|
||||
"passwd": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Password to access the disk",
|
||||
},
|
||||
"pci_slot": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the pci slot to which the disk is connected",
|
||||
},
|
||||
|
||||
"purge_attempts": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of deletion attempts",
|
||||
},
|
||||
"purge_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Time of the last deletion attempt",
|
||||
},
|
||||
"reality_device_number": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Reality device number",
|
||||
},
|
||||
"reference_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the reference to the disk",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Resource ID",
|
||||
},
|
||||
"res_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the resource",
|
||||
},
|
||||
"role": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk role",
|
||||
},
|
||||
|
||||
"sep_type": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Type SEP. Defines the type of storage system and contains one of the values set in the cloud platform",
|
||||
},
|
||||
"size_used": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of used space, in GB",
|
||||
},
|
||||
"snapshots": {
|
||||
Type: schema.TypeList,
|
||||
@@ -557,43 +632,52 @@ func resourceDiskSchemaMake() map[string]*schema.Schema {
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the snapshot",
|
||||
},
|
||||
"label": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the snapshot",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Reference to the snapshot",
|
||||
},
|
||||
"snap_set_guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The set snapshot ID",
|
||||
},
|
||||
"snap_set_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The set time of the snapshot",
|
||||
},
|
||||
"timestamp": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Snapshot time",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Disk status",
|
||||
},
|
||||
"tech_status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Technical status of the disk",
|
||||
},
|
||||
"vmid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Virtual Machine ID (Deprecated)",
|
||||
},
|
||||
}
|
||||
|
||||
@@ -610,15 +694,15 @@ func ResourceDisk() *schema.Resource {
|
||||
 		DeleteContext: resourceDiskDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout180s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout180s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: resourceDiskSchemaMake(),
|
||||
|
||||
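For context, the Timeout600s and Timeout300s values referenced above come from the internal constants package, which is not shown in this diff. A minimal sketch of the assumed shape of those values (the names are taken from the diff; the definitions themselves are an assumption):

package constants

import "time"

// Assumed shape of the shared timeout values used in the ResourceTimeout
// blocks above. Pointers are taken at the call sites (&constants.Timeout600s),
// which is why these are vars rather than consts.
var (
	Timeout300s = time.Second * 300
	Timeout600s = time.Second * 600
)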
246 internal/service/cloudapi/disks/resource_disk_snapshot.go (new file)
@@ -0,0 +1,246 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package disks
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/url"
|
||||
"strconv"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func resourceDiskSnapshotCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
urlValues := &url.Values{}
|
||||
c := m.(*controller.ControllerCfg)
|
||||
disk, err := utilityDiskCheckPresence(ctx, d, m)
|
||||
if disk == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
snapshots := disk.Snapshots
|
||||
snapshot := Snapshot{}
|
||||
label := d.Get("label").(string)
|
||||
for _, sn := range snapshots {
|
||||
if label == sn.Label {
|
||||
snapshot = sn
|
||||
break
|
||||
}
|
||||
}
|
||||
if label != snapshot.Label {
|
||||
return diag.Errorf("Snapshot with label \"%v\" not found", label)
|
||||
}
|
||||
if rollback := d.Get("rollback").(bool); rollback {
|
||||
urlValues.Add("diskId", strconv.Itoa(d.Get("disk_id").(int)))
|
||||
urlValues.Add("label", label)
|
||||
urlValues.Add("timestamp", strconv.Itoa(d.Get("timestamp").(int)))
|
||||
log.Debugf("resourceDiskCreate: Snapshot rollback with label", label)
|
||||
_, err := c.DecortAPICall(ctx, "POST", disksSnapshotRollbackAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
return resourceDiskSnapshotRead(ctx, d, m)
|
||||
}
|
||||
|
||||
func resourceDiskSnapshotRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
disk, err := utilityDiskCheckPresence(ctx, d, m)
|
||||
if disk == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
snapshots := disk.Snapshots
|
||||
snapshot := Snapshot{}
|
||||
label := d.Get("label").(string)
|
||||
for _, sn := range snapshots {
|
||||
if label == sn.Label {
|
||||
snapshot = sn
|
||||
break
|
||||
}
|
||||
}
|
||||
if label != snapshot.Label {
|
||||
return diag.Errorf("Snapshot with label \"%v\" not found", label)
|
||||
}
|
||||
|
||||
d.SetId(d.Get("label").(string))
|
||||
d.Set("timestamp", snapshot.TimeStamp)
|
||||
d.Set("guid", snapshot.Guid)
|
||||
d.Set("res_id", snapshot.ResId)
|
||||
d.Set("snap_set_guid", snapshot.SnapSetGuid)
|
||||
d.Set("snap_set_time", snapshot.SnapSetTime)
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceDiskSnapshotUpdate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
urlValues := &url.Values{}
|
||||
c := m.(*controller.ControllerCfg)
|
||||
disk, err := utilityDiskCheckPresence(ctx, d, m)
|
||||
if disk == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
snapshots := disk.Snapshots
|
||||
snapshot := Snapshot{}
|
||||
label := d.Get("label").(string)
|
||||
for _, sn := range snapshots {
|
||||
if label == sn.Label {
|
||||
snapshot = sn
|
||||
break
|
||||
}
|
||||
}
|
||||
if label != snapshot.Label {
|
||||
return diag.Errorf("Snapshot with label \"%v\" not found", label)
|
||||
}
|
||||
if d.HasChange("rollback") && d.Get("rollback").(bool) == true {
|
||||
urlValues.Add("diskId", strconv.Itoa(d.Get("disk_id").(int)))
|
||||
urlValues.Add("label", label)
|
||||
urlValues.Add("timestamp", strconv.Itoa(d.Get("timestamp").(int)))
|
||||
log.Debugf("resourceDiskUpdtae: Snapshot rollback with label", label)
|
||||
_, err := c.DecortAPICall(ctx, "POST", disksSnapshotRollbackAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
|
||||
return resourceDiskSnapshotRead(ctx, d, m)
|
||||
}
|
||||
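The label lookup above is repeated verbatim in resourceDiskSnapshotCreate, resourceDiskSnapshotRead and resourceDiskSnapshotUpdate. A hypothetical helper (not part of this changeset) that could replace the three copies; the []Snapshot element type is inferred from the Snapshot{} literal used in those functions and is an assumption:

// findSnapshotByLabel returns the snapshot with the given label, if any.
func findSnapshotByLabel(snapshots []Snapshot, label string) (Snapshot, bool) {
	for _, sn := range snapshots {
		if sn.Label == label {
			return sn, true
		}
	}
	return Snapshot{}, false
}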
|
||||
func resourceDiskSnapshotDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
|
||||
disk, err := utilityDiskCheckPresence(ctx, d, m)
|
||||
	if disk == nil { // if the disk no longer exists, its snapshots cannot be deleted either
|
||||
d.SetId("")
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
params := &url.Values{}
|
||||
params.Add("diskId", strconv.Itoa(d.Get("disk_id").(int)))
|
||||
params.Add("label", d.Get("label").(string))
|
||||
|
||||
_, err = c.DecortAPICall(ctx, "POST", disksSnapshotDeleteAPI, params)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceDiskSnapshotSchemaMake() map[string]*schema.Schema {
|
||||
rets := map[string]*schema.Schema{
|
||||
"disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "The unique ID of the subscriber-owner of the disk",
|
||||
},
|
||||
"label": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "Name of the snapshot",
|
||||
},
|
||||
"rollback": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: false,
|
||||
Description: "Needed in order to make a snapshot rollback",
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "ID of the snapshot",
|
||||
},
|
||||
"timestamp": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Snapshot time",
|
||||
},
|
||||
"res_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Reference to the snapshot",
|
||||
},
|
||||
"snap_set_guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "The set snapshot ID",
|
||||
},
|
||||
"snap_set_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "The set time of the snapshot",
|
||||
},
|
||||
}
|
||||
return rets
|
||||
}
|
||||
|
||||
func ResourceDiskSnapshot() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
CreateContext: resourceDiskSnapshotCreate,
|
||||
ReadContext: resourceDiskSnapshotRead,
|
||||
UpdateContext: resourceDiskSnapshotUpdate,
|
||||
DeleteContext: resourceDiskSnapshotDelete,
|
||||
|
||||
Importer: &schema.ResourceImporter{
|
||||
StateContext: schema.ImportStatePassthroughContext,
|
||||
},
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Create: &constants.Timeout600s,
|
||||
Read: &constants.Timeout300s,
|
||||
Update: &constants.Timeout300s,
|
||||
Delete: &constants.Timeout300s,
|
||||
Default: &constants.Timeout300s,
|
||||
},
|
||||
|
||||
Schema: resourceDiskSnapshotSchemaMake(),
|
||||
}
|
||||
}
|
||||
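For the new resource to be usable it also has to be registered in the provider's resource map, which lives outside this diff. A sketch of that wiring; the decort_disk_snapshot and decort_disk type names are assumptions based on the provider's decort_* naming convention:

// Hypothetical excerpt of the provider's resource registration.
func providerResources() map[string]*schema.Resource {
	return map[string]*schema.Resource{
		"decort_disk":          disks.ResourceDisk(),
		"decort_disk_snapshot": disks.ResourceDiskSnapshot(),
	}
}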
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
|
||||
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
@@ -44,7 +45,7 @@ import (
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
)
|
||||
|
||||
-func utilityDiskListCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (DisksList, error) {
+func utilityDiskListCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}, api string) (DisksList, error) {
|
||||
diskList := DisksList{}
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
@@ -63,7 +64,7 @@ func utilityDiskListCheckPresence(ctx context.Context, d *schema.ResourceData, m
|
||||
}
|
||||
|
||||
log.Debugf("utilityDiskListCheckPresence: load disk list")
|
||||
diskListRaw, err := c.DecortAPICall(ctx, "POST", disksListAPI, urlValues)
|
||||
diskListRaw, err := c.DecortAPICall(ctx, "POST", api, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
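With the new api parameter, the same helper can back both the regular and the deleted-disk list data sources, mirroring the k8s list helpers later in this changeset. A sketch of two thin callers; disksListAPI exists in this package, while disksListDeletedAPI is an assumed analogous constant not shown in the diff:

// Hypothetical wrappers over the parameterised helper above.
func utilityDiskListCheck(ctx context.Context, d *schema.ResourceData, m interface{}) (DisksList, error) {
	return utilityDiskListCheckPresence(ctx, d, m, disksListAPI)
}

func utilityDiskListDeletedCheck(ctx context.Context, d *schema.ResourceData, m interface{}) (DisksList, error) {
	return utilityDiskListCheckPresence(ctx, d, m, disksListDeletedAPI)
}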
|
||||
@@ -0,0 +1,62 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package disks
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"net/url"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func utilityDiskListTypesDetailedCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (TypesDetailedList, error) {
|
||||
listTypesDetailed := TypesDetailedList{}
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("detailed", "true")
|
||||
log.Debugf("utilityDiskListTypesDetailedCheckPresence: load disk list Types Detailed")
|
||||
diskListRaw, err := c.DecortAPICall(ctx, "POST", disksListTypesAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = json.Unmarshal([]byte(diskListRaw), &listTypesDetailed)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return listTypesDetailed, nil
|
||||
}
|
||||
62 internal/service/cloudapi/disks/utility_disk_types_list.go (new file)
@@ -0,0 +1,62 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package disks
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"net/url"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func utilityDiskListTypesCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (TypesList, error) {
|
||||
typesList := TypesList{}
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("detailed", "false")
|
||||
log.Debugf("utilityDiskListTypesCheckPresence: load disk list Types Detailed")
|
||||
diskListRaw, err := c.DecortAPICall(ctx, "POST", disksListTypesAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
err = json.Unmarshal([]byte(diskListRaw), &typesList)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return typesList, nil
|
||||
}
|
||||
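The two disk-type helpers above differ only in the value they pass for the detailed flag. A possible consolidation, not part of this changeset, is a shared raw fetch that each caller unmarshals into its own list type; this assumes DecortAPICall returns the response body as a string, as the surrounding code suggests, and would need "strconv" in the import block:

// Sketch of a shared fetch for both type-list helpers.
func utilityDiskListTypesRaw(ctx context.Context, m interface{}, detailed bool) (string, error) {
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}
	urlValues.Add("detailed", strconv.FormatBool(detailed))
	log.Debugf("utilityDiskListTypesRaw: load disk types list, detailed=%t", detailed)
	return c.DecortAPICall(ctx, "POST", disksListTypesAPI, urlValues)
}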
@@ -31,7 +31,7 @@ Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
|
||||
package image
|
||||
|
||||
const imageCreateAPI = "/restmachine/cloudapi/image/createImage"
|
||||
const imageCreateAPI = "/restmachine/cloudapi/image/create"
|
||||
const imageCreateVirtualAPI = "/restmachine/cloudapi/image/createVirtual"
|
||||
const imageGetAPI = "/restmachine/cloudapi/image/get"
|
||||
const imageListGetAPI = "/restmachine/cloudapi/image/list"
|
||||
|
||||
@@ -33,6 +33,7 @@ package image
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"net/url"
|
||||
"strconv"
|
||||
|
||||
@@ -94,11 +95,25 @@ func resourceImageCreate(ctx context.Context, d *schema.ResourceData, m interfac
|
||||
if architecture, ok := d.GetOk("architecture"); ok {
|
||||
urlValues.Add("architecture", architecture.(string))
|
||||
}
|
||||
|
||||
	/* previous implementation, to be restored once the plain image ID response is confirmed:
|
||||
imageId, err := c.DecortAPICall(ctx, "POST", imageCreateAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
*/
|
||||
	// new behaviour: the endpoint returns a JSON array; its second element holds the numeric image ID
|
||||
res, err := c.DecortAPICall(ctx, "POST", imageCreateAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
i := make([]interface{}, 0)
|
||||
err = json.Unmarshal([]byte(res), &i)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
imageId := strconv.Itoa(int(i[1].(float64)))
|
||||
	// end of new response handling
|
||||
|
||||
d.SetId(imageId)
|
||||
d.Set("image_id", imageId)
|
||||
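The new parsing above indexes the second element of the decoded array without checking its length or type, which would panic on an unexpected response. A hedged defensive variant of just that step, under the same assumption that image/create returns a JSON array whose second element is the numeric image ID:

	// Defensive variant of the response parsing above.
	var payload []interface{}
	if err := json.Unmarshal([]byte(res), &payload); err != nil {
		return diag.FromErr(err)
	}
	if len(payload) < 2 {
		return diag.Errorf("unexpected response from image create API: %s", res)
	}
	idFloat, ok := payload[1].(float64)
	if !ok {
		return diag.Errorf("unexpected image ID type in response: %s", res)
	}
	imageId := strconv.Itoa(int(idFloat))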
@@ -229,15 +244,15 @@ func ResourceImage() *schema.Resource {
|
||||
 		DeleteContext: resourceImageDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout60s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout60s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: resourceImageSchemaMake(dataSourceImageExtendSchemaMake()),
|
||||
|
||||
@@ -116,15 +116,15 @@ func ResourceImageVirtual() *schema.Resource {
|
||||
 		DeleteContext: resourceImageDelete,

 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},

 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout60s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout60s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},

 		Schema: resourceImageVirtualSchemaMake(dataSourceImageExtendSchemaMake()),
|
||||
|
||||
@@ -31,19 +31,23 @@ Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
|
||||
package k8s
|
||||
|
||||
-const K8sCreateAPI = "/restmachine/cloudapi/k8s/create"
-const K8sGetAPI = "/restmachine/cloudapi/k8s/get"
-const K8sUpdateAPI = "/restmachine/cloudapi/k8s/update"
-const K8sDeleteAPI = "/restmachine/cloudapi/k8s/delete"
+const (
+	K8sCreateAPI      = "/restmachine/cloudapi/k8s/create"
+	K8sGetAPI         = "/restmachine/cloudapi/k8s/get"
+	K8sUpdateAPI      = "/restmachine/cloudapi/k8s/update"
+	K8sDeleteAPI      = "/restmachine/cloudapi/k8s/delete"
+	K8sListAPI        = "/restmachine/cloudapi/k8s/list"
+	K8sListDeletedAPI = "/restmachine/cloudapi/k8s/listDeleted"

-const K8sWgCreateAPI = "/restmachine/cloudapi/k8s/workersGroupAdd"
-const K8sWgDeleteAPI = "/restmachine/cloudapi/k8s/workersGroupDelete"
+	K8sWgCreateAPI = "/restmachine/cloudapi/k8s/workersGroupAdd"
+	K8sWgDeleteAPI = "/restmachine/cloudapi/k8s/workersGroupDelete"

-const K8sWorkerAddAPI = "/restmachine/cloudapi/k8s/workerAdd"
-const K8sWorkerDeleteAPI = "/restmachine/cloudapi/k8s/deleteWorkerFromGroup"
+	K8sWorkerAddAPI    = "/restmachine/cloudapi/k8s/workerAdd"
+	K8sWorkerDeleteAPI = "/restmachine/cloudapi/k8s/deleteWorkerFromGroup"

-const K8sGetConfigAPI = "/restmachine/cloudapi/k8s/getConfig"
+	K8sGetConfigAPI = "/restmachine/cloudapi/k8s/getConfig"

-const LbGetAPI = "/restmachine/cloudapi/lb/get"
+	LbGetAPI = "/restmachine/cloudapi/lb/get"

-const AsyncTaskGetAPI = "/restmachine/cloudapi/tasks/get"
+	AsyncTaskGetAPI = "/restmachine/cloudapi/tasks/get"
+)
|
||||
|
||||
419 internal/service/cloudapi/k8s/data_source_k8s.go (new file)
@@ -0,0 +1,419 @@
|
||||
package k8s
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"net/url"
|
||||
"strconv"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
log "github.com/sirupsen/logrus"
|
||||
|
||||
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
|
||||
)
|
||||
|
||||
func dataSourceK8sRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
k8s, err := utilityDataK8sCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
d.SetId(strconv.FormatUint(k8s.ID, 10))
|
||||
|
||||
k8sList, err := utilityK8sListCheckPresence(ctx, d, m, K8sListAPI)
|
||||
if err != nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
curK8s := K8SItem{}
|
||||
for _, k8sCluster := range k8sList {
|
||||
if k8sCluster.ID == k8s.ID {
|
||||
curK8s = k8sCluster
|
||||
}
|
||||
}
|
||||
if curK8s.ID == 0 {
|
||||
return diag.Errorf("Cluster with id %d not found in List clusters", k8s.ID)
|
||||
}
|
||||
d.Set("vins_id", curK8s.VINSID)
|
||||
|
||||
masterComputeList := make([]kvmvm.ComputeGetResp, 0, len(k8s.K8SGroups.Masters.DetailedInfo))
|
||||
workersComputeList := make([]kvmvm.ComputeGetResp, 0, len(k8s.K8SGroups.Workers))
|
||||
for _, masterNode := range k8s.K8SGroups.Masters.DetailedInfo {
|
||||
compute, err := utilityComputeCheckPresence(ctx, d, m, masterNode.ID)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
masterComputeList = append(masterComputeList, *compute)
|
||||
}
|
||||
for _, worker := range k8s.K8SGroups.Workers {
|
||||
for _, info := range worker.DetailedInfo {
|
||||
compute, err := utilityComputeCheckPresence(ctx, d, m, info.ID)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
workersComputeList = append(workersComputeList, *compute)
|
||||
}
|
||||
}
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("k8sId", d.Id())
|
||||
kubeconfig, err := c.DecortAPICall(ctx, "POST", K8sGetConfigAPI, urlValues)
|
||||
if err != nil {
|
||||
log.Warnf("could not get kubeconfig: %v", err)
|
||||
}
|
||||
d.Set("kubeconfig", kubeconfig)
|
||||
|
||||
urlValues = &url.Values{}
|
||||
urlValues.Add("lbId", strconv.FormatUint(k8s.LBID, 10))
|
||||
resp, err := c.DecortAPICall(ctx, "POST", LbGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
var lb LbRecord
|
||||
if err := json.Unmarshal([]byte(resp), &lb); err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
d.Set("extnet_id", lb.ExtNetID)
|
||||
d.Set("lb_ip", lb.PrimaryNode.FrontendIP)
|
||||
|
||||
flattenK8sData(d, *k8s, masterComputeList, workersComputeList)
|
||||
return nil
|
||||
}
|
||||
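dataSourceK8sRead above logs a warning when the kubeconfig request fails but still stores whatever value DecortAPICall returned; whether that value is empty on error depends on code not shown in this diff. A hedged sketch of a stricter variant of just that step, making the fallback explicit:

	// Stricter kubeconfig handling: store an empty string on failure.
	kubeconfig, err := c.DecortAPICall(ctx, "POST", K8sGetConfigAPI, urlValues)
	if err != nil {
		log.Warnf("could not get kubeconfig: %v", err)
		kubeconfig = ""
	}
	d.Set("kubeconfig", kubeconfig)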
|
||||
func aclListSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"explicit": {
|
||||
Type: schema.TypeBool,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"right": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"type": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"user_group_id": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func aclGroupSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"account_acl": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: aclListSchemaMake(),
|
||||
},
|
||||
},
|
||||
"k8s_acl": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: aclListSchemaMake(),
|
||||
},
|
||||
},
|
||||
"rg_acl": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: aclListSchemaMake(),
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func detailedInfoSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"compute_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"tech_status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"interfaces": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: interfacesSchemaMake(),
|
||||
},
|
||||
},
|
||||
"natable_vins_ip": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"natable_vins_network": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func interfacesSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"def_gw": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"ip_address": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func masterGroupSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"cpu": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"detailed_info": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: detailedInfoSchemaMake(),
|
||||
},
|
||||
},
|
||||
"disk": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"master_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"num": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"ram": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func k8sGroupListSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"annotations": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"cpu": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"detailed_info": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: detailedInfoSchemaMake(),
|
||||
},
|
||||
},
|
||||
"disk": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"labels": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"num": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"ram": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"taints": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func dataSourceK8sSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"k8s_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
},
|
||||
|
||||
"acl": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: aclGroupSchemaMake(),
|
||||
},
|
||||
},
|
||||
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"account_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"bservice_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"k8sci_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"created_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"created_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"deleted_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"deleted_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"extnet_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the external network to connect workers to. If omitted network will be chosen by the platfom.",
|
||||
},
|
||||
"k8s_ci_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"masters": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: masterGroupSchemaMake(),
|
||||
},
|
||||
},
|
||||
"workers": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: k8sGroupListSchemaMake(),
|
||||
},
|
||||
},
|
||||
"lb_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"lb_ip": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "IP address of default load balancer.",
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"rg_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"rg_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"tech_status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"updated_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"updated_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"kubeconfig": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Kubeconfig for cluster access.",
|
||||
},
|
||||
"vins_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func DataSourceK8s() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceK8sRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dataSourceK8sSchemaMake(),
|
||||
}
|
||||
}
|
||||
268 internal/service/cloudapi/k8s/data_source_k8s_list.go (new file)
@@ -0,0 +1,268 @@
|
||||
package k8s
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
)
|
||||
|
||||
func dataSourceK8sListRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
k8sList, err := utilityK8sListCheckPresence(ctx, d, m, K8sListAPI)
|
||||
if err != nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
id := uuid.New()
|
||||
d.SetId(id.String())
|
||||
flattenK8sList(d, k8sList)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func serviceAccountSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"password": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"username": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func k8sWorkersGroupsSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"annotations": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"cpu": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"detailed_info": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: detailedInfoSchemaMake(),
|
||||
},
|
||||
},
|
||||
"disk": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"detailed_info_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"labels": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"num": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"ram": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"taints": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func createK8sListSchema() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"includedeleted": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
},
|
||||
"page": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
},
|
||||
"size": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
},
|
||||
|
||||
"items": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"account_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"acl": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"bservice_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"ci_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"config": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"created_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"created_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"deleted_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"deleted_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"desc": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"extnet_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"gid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"k8s_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"lb_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"milestones": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"k8s_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"rg_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"rg_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"service_account": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: serviceAccountSchemaMake(),
|
||||
},
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"tech_status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"updated_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"updated_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"vins_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"workers_groups": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: k8sWorkersGroupsSchemaMake(),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func dataSourceK8sListSchemaMake() map[string]*schema.Schema {
|
||||
k8sListSchema := createK8sListSchema()
|
||||
return k8sListSchema
|
||||
}
|
||||
|
||||
func DataSourceK8sList() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceK8sListRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dataSourceK8sListSchemaMake(),
|
||||
}
|
||||
}
|
||||
@@ -0,0 +1,45 @@
|
||||
package k8s
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
)
|
||||
|
||||
func dataSourceK8sListDeletedRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
k8sList, err := utilityK8sListCheckPresence(ctx, d, m, K8sListDeletedAPI)
|
||||
if err != nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
id := uuid.New()
|
||||
d.SetId(id.String())
|
||||
flattenK8sList(d, k8sList)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func dataSourceK8sListDeletedSchemaMake() map[string]*schema.Schema {
|
||||
k8sListDeleted := createK8sListSchema()
|
||||
delete(k8sListDeleted, "includedeleted")
|
||||
return k8sListDeleted
|
||||
}
|
||||
|
||||
func DataSourceK8sListDeleted() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceK8sListDeletedRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dataSourceK8sListDeletedSchemaMake(),
|
||||
}
|
||||
}
|
||||
129 internal/service/cloudapi/k8s/data_source_k8s_wg_list.go (new file)
@@ -0,0 +1,129 @@
|
||||
package k8s
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"net/url"
|
||||
"strconv"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
|
||||
)
|
||||
|
||||
func flattenWgList(wgList K8SGroupList, computesMap map[uint64][]kvmvm.ComputeGetResp) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
for _, wg := range wgList {
|
||||
computes := computesMap[wg.ID]
|
||||
temp := map[string]interface{}{
|
||||
"annotations": wg.Annotations,
|
||||
"cpu": wg.CPU,
|
||||
"wg_id": wg.ID,
|
||||
"detailed_info": flattenDetailedInfo(wg.DetailedInfo, computes),
|
||||
"disk": wg.Disk,
|
||||
"guid": wg.GUID,
|
||||
"labels": wg.Labels,
|
||||
"name": wg.Name,
|
||||
"num": wg.Num,
|
||||
"ram": wg.RAM,
|
||||
"taints": wg.Taints,
|
||||
}
|
||||
|
||||
res = append(res, temp)
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenItemsWg(d *schema.ResourceData, wgList K8SGroupList, computes map[uint64][]kvmvm.ComputeGetResp) {
|
||||
d.Set("items", flattenWgList(wgList, computes))
|
||||
}
|
||||
|
||||
func utilityK8sWgListCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (K8SGroupList, error) {
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("k8sId", strconv.Itoa(d.Get("k8s_id").(int)))
|
||||
|
||||
resp, err := c.DecortAPICall(ctx, "POST", K8sGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if resp == "" {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
var k8s K8SRecord
|
||||
if err := json.Unmarshal([]byte(resp), &k8s); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return k8s.K8SGroups.Workers, nil
|
||||
}
|
||||
|
||||
func dataSourceK8sWgListRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
wgList, err := utilityK8sWgListCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.SetId(strconv.Itoa(d.Get("k8s_id").(int)))
|
||||
|
||||
workersComputeList := make(map[uint64][]kvmvm.ComputeGetResp)
|
||||
for _, worker := range wgList {
|
||||
workersComputeList[worker.ID] = make([]kvmvm.ComputeGetResp, 0, len(worker.DetailedInfo))
|
||||
for _, info := range worker.DetailedInfo {
|
||||
compute, err := utilityComputeCheckPresence(ctx, d, m, info.ID)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
workersComputeList[worker.ID] = append(workersComputeList[worker.ID], *compute)
|
||||
}
|
||||
}
|
||||
flattenItemsWg(d, wgList, workersComputeList)
|
||||
return nil
|
||||
}
|
||||
|
||||
func wgSchemaMake() map[string]*schema.Schema {
|
||||
wgSchema := dataSourceK8sWgSchemaMake()
|
||||
delete(wgSchema, "k8s_id")
|
||||
wgSchema["wg_id"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of k8s worker Group.",
|
||||
}
|
||||
return wgSchema
|
||||
}
|
||||
|
||||
func dataSourceK8sWgListSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"k8s_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
},
|
||||
"items": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: wgSchemaMake(),
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func DataSourceK8sWgList() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceK8sWgListRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dataSourceK8sWgListSchemaMake(),
|
||||
}
|
||||
}
|
||||
161 internal/service/cloudapi/k8s/data_source_wg.go (new file)
@@ -0,0 +1,161 @@
|
||||
package k8s
|
||||
|
||||
import (
|
||||
"context"
|
||||
"strconv"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func flattenWgData(d *schema.ResourceData, wg K8SGroup, computes []kvmvm.ComputeGetResp) {
|
||||
d.Set("annotations", wg.Annotations)
|
||||
d.Set("cpu", wg.CPU)
|
||||
d.Set("detailed_info", flattenDetailedInfo(wg.DetailedInfo, computes))
|
||||
d.Set("disk", wg.Disk)
|
||||
d.Set("guid", wg.GUID)
|
||||
d.Set("labels", wg.Labels)
|
||||
d.Set("name", wg.Name)
|
||||
d.Set("num", wg.Num)
|
||||
d.Set("ram", wg.RAM)
|
||||
d.Set("taints", wg.Taints)
|
||||
}
|
||||
|
||||
func dataSourceK8sWgRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("dataSourceK8sWgRead: called with k8s id %d", d.Get("k8s_id").(int))
|
||||
|
||||
k8s, err := utilityDataK8sCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
d.SetId(strconv.Itoa(d.Get("wg_id").(int)))
|
||||
|
||||
var id int
|
||||
if d.Id() != "" {
|
||||
id, err = strconv.Atoi(d.Id())
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
} else {
|
||||
id = d.Get("wg_id").(int)
|
||||
}
|
||||
|
||||
curWg := K8SGroup{}
|
||||
for _, wg := range k8s.K8SGroups.Workers {
|
||||
if wg.ID == uint64(id) {
|
||||
curWg = wg
|
||||
break
|
||||
}
|
||||
}
|
||||
if curWg.ID == 0 {
|
||||
return diag.Errorf("Not found wg with id: %v in k8s cluster: %v", id, k8s.ID)
|
||||
}
|
||||
|
||||
	workersComputeList := make([]kvmvm.ComputeGetResp, 0, len(curWg.DetailedInfo))
|
||||
for _, info := range curWg.DetailedInfo {
|
||||
compute, err := utilityComputeCheckPresence(ctx, d, m, info.ID)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
workersComputeList = append(workersComputeList, *compute)
|
||||
}
|
||||
|
||||
flattenWgData(d, curWg, workersComputeList)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func dataSourceK8sWgSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"k8s_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "ID of k8s instance.",
|
||||
},
|
||||
"wg_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "ID of k8s worker Group.",
|
||||
},
|
||||
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the worker group.",
|
||||
},
|
||||
|
||||
"num": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Number of worker nodes to create.",
|
||||
},
|
||||
|
||||
"cpu": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Worker node CPU count.",
|
||||
},
|
||||
|
||||
"ram": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Worker node RAM in MB.",
|
||||
},
|
||||
|
||||
"disk": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Worker node boot disk size. If unspecified or 0, size is defined by OS image size.",
|
||||
},
|
||||
"detailed_info": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: detailedInfoSchemaMake(),
|
||||
},
|
||||
},
|
||||
"labels": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"annotations": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"taints": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func DataSourceK8sWg() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceK8sWgRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dataSourceK8sWgSchemaMake(),
|
||||
}
|
||||
}
|
||||
247 internal/service/cloudapi/k8s/flattens.go (new file)
@@ -0,0 +1,247 @@
|
||||
package k8s
|
||||
|
||||
import (
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
|
||||
)
|
||||
|
||||
func flattenAclList(aclList ACLList) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
for _, acl := range aclList {
|
||||
temp := map[string]interface{}{
|
||||
"explicit": acl.Explicit,
|
||||
"guid": acl.GUID,
|
||||
"right": acl.Right,
|
||||
"status": acl.Status,
|
||||
"type": acl.Type,
|
||||
"user_group_id": acl.UserGroupID,
|
||||
}
|
||||
res = append(res, temp)
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenAcl(acl ACLGroup) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
temp := map[string]interface{}{
|
||||
"account_acl": flattenAclList(acl.AccountACL),
|
||||
"k8s_acl": flattenAclList(acl.K8SACL),
|
||||
"rg_acl": flattenAclList(acl.RGACL),
|
||||
}
|
||||
|
||||
res = append(res, temp)
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenInterfaces(interfaces []kvmvm.InterfaceRecord) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
for _, interfaceCompute := range interfaces {
|
||||
temp := map[string]interface{}{
|
||||
"def_gw": interfaceCompute.DefaultGW,
|
||||
"ip_address": interfaceCompute.IPAddress,
|
||||
}
|
||||
res = append(res, temp)
|
||||
}
|
||||
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenDetailedInfo(detailedInfoList DetailedInfoList, computes []kvmvm.ComputeGetResp) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
if computes != nil {
|
||||
for i, detailedInfo := range detailedInfoList {
|
||||
temp := map[string]interface{}{
|
||||
"compute_id": detailedInfo.ID,
|
||||
"name": detailedInfo.Name,
|
||||
"status": detailedInfo.Status,
|
||||
"tech_status": detailedInfo.TechStatus,
|
||||
"interfaces": flattenInterfaces(computes[i].Interfaces),
|
||||
"natable_vins_ip": computes[i].NatableVinsIP,
|
||||
"natable_vins_network": computes[i].NatableVinsNet,
|
||||
}
|
||||
res = append(res, temp)
|
||||
}
|
||||
} else {
|
||||
for _, detailedInfo := range detailedInfoList {
|
||||
temp := map[string]interface{}{
|
||||
"compute_id": detailedInfo.ID,
|
||||
"name": detailedInfo.Name,
|
||||
"status": detailedInfo.Status,
|
||||
"tech_status": detailedInfo.TechStatus,
|
||||
}
|
||||
res = append(res, temp)
|
||||
}
|
||||
}
|
||||
|
||||
return res
|
||||
}
|
||||
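flattenDetailedInfo above assumes that, when computes is non-nil, it is index-aligned with detailedInfoList; computes[i] would panic if a caller ever passed a shorter slice. A bounds-checked variant is sketched below as a possible hardening, not as part of this changeset:

// flattenDetailedInfoSafe falls back to the short form for entries that have
// no matching compute record.
func flattenDetailedInfoSafe(detailedInfoList DetailedInfoList, computes []kvmvm.ComputeGetResp) []map[string]interface{} {
	res := make([]map[string]interface{}, 0, len(detailedInfoList))
	for i, detailedInfo := range detailedInfoList {
		temp := map[string]interface{}{
			"compute_id":  detailedInfo.ID,
			"name":        detailedInfo.Name,
			"status":      detailedInfo.Status,
			"tech_status": detailedInfo.TechStatus,
		}
		if i < len(computes) {
			temp["interfaces"] = flattenInterfaces(computes[i].Interfaces)
			temp["natable_vins_ip"] = computes[i].NatableVinsIP
			temp["natable_vins_network"] = computes[i].NatableVinsNet
		}
		res = append(res, temp)
	}
	return res
}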
|
||||
func flattenMasterGroup(mastersGroup MasterGroup, masters []kvmvm.ComputeGetResp) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
temp := map[string]interface{}{
|
||||
"cpu": mastersGroup.CPU,
|
||||
"detailed_info": flattenDetailedInfo(mastersGroup.DetailedInfo, masters),
|
||||
"disk": mastersGroup.Disk,
|
||||
"master_id": mastersGroup.ID,
|
||||
"name": mastersGroup.Name,
|
||||
"num": mastersGroup.Num,
|
||||
"ram": mastersGroup.RAM,
|
||||
}
|
||||
|
||||
res = append(res, temp)
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenK8sGroup(k8SGroupList K8SGroupList, workers []kvmvm.ComputeGetResp) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
for _, k8sGroup := range k8SGroupList {
|
||||
temp := map[string]interface{}{
|
||||
"annotations": k8sGroup.Annotations,
|
||||
"cpu": k8sGroup.CPU,
|
||||
"detailed_info": flattenDetailedInfo(k8sGroup.DetailedInfo, workers),
|
||||
"disk": k8sGroup.Disk,
|
||||
"guid": k8sGroup.GUID,
|
||||
"id": k8sGroup.ID,
|
||||
"labels": k8sGroup.Labels,
|
||||
"name": k8sGroup.Name,
|
||||
"num": k8sGroup.Num,
|
||||
"ram": k8sGroup.RAM,
|
||||
"taints": k8sGroup.Taints,
|
||||
}
|
||||
|
||||
res = append(res, temp)
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenK8sGroups(k8sGroups K8SGroups, masters []kvmvm.ComputeGetResp, workers []kvmvm.ComputeGetResp) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
temp := map[string]interface{}{
|
||||
"masters": flattenMasterGroup(k8sGroups.Masters, masters),
|
||||
"workers": flattenK8sGroup(k8sGroups.Workers, workers),
|
||||
}
|
||||
res = append(res, temp)
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenK8sData(d *schema.ResourceData, k8s K8SRecord, masters []kvmvm.ComputeGetResp, workers []kvmvm.ComputeGetResp) {
|
||||
d.Set("acl", flattenAcl(k8s.ACL))
|
||||
d.Set("account_id", k8s.AccountID)
|
||||
d.Set("account_name", k8s.AccountName)
|
||||
d.Set("bservice_id", k8s.BServiceID)
|
||||
d.Set("k8sci_id", k8s.CIID)
|
||||
d.Set("created_by", k8s.CreatedBy)
|
||||
d.Set("created_time", k8s.CreatedTime)
|
||||
d.Set("deleted_by", k8s.DeletedBy)
|
||||
d.Set("deleted_time", k8s.DeletedTime)
|
||||
d.Set("k8s_ci_name", k8s.K8CIName)
|
||||
d.Set("masters", flattenMasterGroup(k8s.K8SGroups.Masters, masters))
|
||||
d.Set("workers", flattenK8sGroup(k8s.K8SGroups.Workers, workers))
|
||||
d.Set("lb_id", k8s.LBID)
|
||||
d.Set("name", k8s.Name)
|
||||
d.Set("rg_id", k8s.RGID)
|
||||
d.Set("rg_name", k8s.RGName)
|
||||
d.Set("status", k8s.Status)
|
||||
d.Set("tech_status", k8s.TechStatus)
|
||||
d.Set("updated_by", k8s.UpdatedBy)
|
||||
d.Set("updated_time", k8s.UpdatedTime)
|
||||
}
|
||||
|
||||
func flattenServiceAccount(serviceAccount ServiceAccount) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
temp := map[string]interface{}{
|
||||
"guid": serviceAccount.GUID,
|
||||
"password": serviceAccount.Password,
|
||||
"username": serviceAccount.Username,
|
||||
}
|
||||
res = append(res, temp)
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenWorkersGroup(workersGroups K8SGroupList) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
for _, worker := range workersGroups {
|
||||
temp := map[string]interface{}{
|
||||
"annotations": worker.Annotations,
|
||||
"cpu": worker.CPU,
|
||||
"detailed_info": flattenDetailedInfo(worker.DetailedInfo, nil),
|
||||
"disk": worker.Disk,
|
||||
"guid": worker.GUID,
|
||||
"detailed_info_id": worker.ID,
|
||||
"labels": worker.Labels,
|
||||
"name": worker.Name,
|
||||
"num": worker.Num,
|
||||
"ram": worker.RAM,
|
||||
"taints": worker.Taints,
|
||||
}
|
||||
res = append(res, temp)
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenConfig(config interface{}) map[string]interface{} {
|
||||
return config.(map[string]interface{})
|
||||
}
|
||||
|
||||
func flattenK8sItems(k8sItems K8SList) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
for _, item := range k8sItems {
|
||||
temp := map[string]interface{}{
|
||||
"account_id": item.AccountID,
|
||||
"account_name": item.Name,
|
||||
"acl": item.ACL,
|
||||
"bservice_id": item.BServiceID,
|
||||
"ci_id": item.CIID,
|
||||
"created_by": item.CreatedBy,
|
||||
"created_time": item.CreatedTime,
|
||||
"deleted_by": item.DeletedBy,
|
||||
"deleted_time": item.DeletedTime,
|
||||
"desc": item.Description,
|
||||
"extnet_id": item.ExtNetID,
|
||||
"gid": item.GID,
|
||||
"guid": item.GUID,
|
||||
"k8s_id": item.ID,
|
||||
"lb_id": item.LBID,
|
||||
"milestones": item.Milestones,
|
||||
"k8s_name": item.Name,
|
||||
"rg_id": item.RGID,
|
||||
"rg_name": item.RGName,
|
||||
"service_account": flattenServiceAccount(item.ServiceAccount),
|
||||
"status": item.Status,
|
||||
"tech_status": item.TechStatus,
|
||||
"updated_by": item.UpdatedBy,
|
||||
"updated_time": item.UpdatedTime,
|
||||
"vins_id": item.VINSID,
|
||||
"workers_groups": flattenWorkersGroup(item.WorkersGroup),
|
||||
}
|
||||
|
||||
res = append(res, temp)
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenK8sList(d *schema.ResourceData, k8sItems K8SList) {
|
||||
d.Set("items", flattenK8sItems(k8sItems))
|
||||
}
|
||||
|
||||
func flattenResourceK8s(d *schema.ResourceData, k8s K8SRecord, masters []kvmvm.ComputeGetResp, workers []kvmvm.ComputeGetResp) {
|
||||
d.Set("acl", flattenAcl(k8s.ACL))
|
||||
d.Set("account_id", k8s.AccountID)
|
||||
d.Set("account_name", k8s.AccountName)
|
||||
d.Set("bservice_id", k8s.BServiceID)
|
||||
d.Set("created_by", k8s.CreatedBy)
|
||||
d.Set("created_time", k8s.CreatedTime)
|
||||
d.Set("deleted_by", k8s.DeletedBy)
|
||||
d.Set("deleted_time", k8s.DeletedTime)
|
||||
d.Set("k8s_ci_name", k8s.K8CIName)
|
||||
d.Set("masters", flattenMasterGroup(k8s.K8SGroups.Masters, masters))
|
||||
d.Set("workers", flattenK8sGroup(k8s.K8SGroups.Workers, workers))
|
||||
d.Set("lb_id", k8s.LBID)
|
||||
d.Set("rg_id", k8s.RGID)
|
||||
d.Set("rg_name", k8s.RGName)
|
||||
d.Set("status", k8s.Status)
|
||||
d.Set("tech_status", k8s.TechStatus)
|
||||
d.Set("updated_by", k8s.UpdatedBy)
|
||||
d.Set("updated_time", k8s.UpdatedTime)
|
||||
d.Set("default_wg_id", k8s.K8SGroups.Workers[0].ID)
|
||||
}
|
||||
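The last assignment in flattenResourceK8s indexes Workers[0] for default_wg_id, which would panic for a cluster that has no worker groups. A guarded version of that single line, shown here as a sketch rather than as part of the diff:

	// Only set default_wg_id when at least one worker group exists.
	if len(k8s.K8SGroups.Workers) > 0 {
		d.Set("default_wg_id", k8s.K8SGroups.Workers[0].ID)
	}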
@@ -64,8 +64,11 @@ type K8sRecord struct {
|
||||
Name string `json:"name"`
|
||||
RgID int `json:"rgId"`
|
||||
RgName string `json:"rgName"`
|
||||
VinsID int `json:"vinsId"`
|
||||
}
|
||||
|
||||
type K8sRecordList []K8sRecord
|
||||
|
||||
//LbRecord represents load balancer instance
|
||||
type LbRecord struct {
|
||||
ID int `json:"id"`
|
||||
@@ -129,3 +132,124 @@ type SshKeyConfig struct {
|
||||
SshKey string
|
||||
UserShell string
|
||||
}
|
||||
|
||||
// Structures below are ported from the DECORT SDK
|
||||
type K8SGroup struct {
|
||||
Annotations []string `json:"annotations"`
|
||||
CPU uint64 `json:"cpu"`
|
||||
DetailedInfo DetailedInfoList `json:"detailedInfo"`
|
||||
Disk uint64 `json:"disk"`
|
||||
GUID string `json:"guid"`
|
||||
ID uint64 `json:"id"`
|
||||
Labels []string `json:"labels"`
|
||||
Name string `json:"name"`
|
||||
Num uint64 `json:"num"`
|
||||
RAM uint64 `json:"ram"`
|
||||
Taints []string `json:"taints"`
|
||||
}
|
||||
|
||||
type K8SGroupList []K8SGroup
|
||||
|
||||
type DetailedInfo struct {
|
||||
ID uint64 `json:"id"`
|
||||
Name string `json:"name"`
|
||||
Status string `json:"status"`
|
||||
TechStatus string `json:"techStatus"`
|
||||
}
|
||||
|
||||
type DetailedInfoList []DetailedInfo
|
||||
|
||||
type K8SRecord struct {
|
||||
ACL ACLGroup `json:"ACL"`
|
||||
AccountID uint64 `json:"accountId"`
|
||||
AccountName string `json:"accountName"`
|
||||
BServiceID uint64 `json:"bserviceId"`
|
||||
CIID uint64 `json:"ciId"`
|
||||
CreatedBy string `json:"createdBy"`
|
||||
CreatedTime uint64 `json:"createdTime"`
|
||||
DeletedBy string `json:"deletedBy"`
|
||||
DeletedTime uint64 `json:"deletedTime"`
|
||||
ID uint64 `json:"id"`
|
||||
K8CIName string `json:"k8ciName"`
|
||||
K8SGroups K8SGroups `json:"k8sGroups"`
|
||||
LBID uint64 `json:"lbId"`
|
||||
Name string `json:"name"`
|
||||
RGID uint64 `json:"rgId"`
|
||||
RGName string `json:"rgName"`
|
||||
Status string `json:"status"`
|
||||
TechStatus string `json:"techStatus"`
|
||||
UpdatedBy string `json:"updatedBy"`
|
||||
UpdatedTime uint64 `json:"updatedTime"`
|
||||
}
|
||||
|
||||
type K8SRecordList []K8SRecord
|
||||
|
||||
type K8SGroups struct {
|
||||
Masters MasterGroup `json:"masters"`
|
||||
Workers K8SGroupList `json:"workers"`
|
||||
}
|
||||
|
||||
type MasterGroup struct {
|
||||
CPU uint64 `json:"cpu"`
|
||||
DetailedInfo DetailedInfoList `json:"detailedInfo"`
|
||||
Disk uint64 `json:"disk"`
|
||||
ID uint64 `json:"id"`
|
||||
Name string `json:"name"`
|
||||
Num uint64 `json:"num"`
|
||||
RAM uint64 `json:"ram"`
|
||||
}
|
||||
|
||||
type ACLGroup struct {
|
||||
AccountACL ACLList `json:"accountAcl"`
|
||||
K8SACL ACLList `json:"k8sAcl"`
|
||||
RGACL ACLList `json:"rgAcl"`
|
||||
}
|
||||
|
||||
type ACL struct {
|
||||
Explicit bool `json:"explicit"`
|
||||
GUID string `json:"guid"`
|
||||
Right string `json:"right"`
|
||||
Status string `json:"status"`
|
||||
Type string `json:"type"`
|
||||
UserGroupID string `json:"userGroupId"`
|
||||
}
|
||||
|
||||
type ACLList []ACL
|
||||
|
||||
type K8SItem struct {
|
||||
AccountID uint64 `json:"accountId"`
|
||||
AccountName string `json:"accountName"`
|
||||
ACL []interface{} `json:"acl"`
|
||||
BServiceID uint64 `json:"bserviceId"`
|
||||
CIID uint64 `json:"ciId"`
|
||||
Config interface{} `json:"config"`
|
||||
CreatedBy string `json:"createdBy"`
|
||||
CreatedTime uint64 `json:"createdTime"`
|
||||
DeletedBy string `json:"deletedBy"`
|
||||
DeletedTime uint64 `json:"deletedTime"`
|
||||
Description string `json:"desc"`
|
||||
ExtNetID uint64 `json:"extnetId"`
|
||||
GID uint64 `json:"gid"`
|
||||
GUID uint64 `json:"guid"`
|
||||
ID uint64 `json:"id"`
|
||||
LBID uint64 `json:"lbId"`
|
||||
Milestones uint64 `json:"milestones"`
|
||||
Name string `json:"name"`
|
||||
RGID uint64 `json:"rgId"`
|
||||
RGName string `json:"rgName"`
|
||||
ServiceAccount ServiceAccount `json:"serviceAccount"`
|
||||
Status string `json:"status"`
|
||||
TechStatus string `json:"techStatus"`
|
||||
UpdatedBy string `json:"updatedBy"`
|
||||
UpdatedTime uint64 `json:"updatedTime"`
|
||||
VINSID uint64 `json:"vinsId"`
|
||||
WorkersGroup K8SGroupList `json:"workersGroups"`
|
||||
}
|
||||
|
||||
type ServiceAccount struct {
|
||||
GUID string `json:"guid"`
|
||||
Password string `json:"password"`
|
||||
Username string `json:"username"`
|
||||
}
|
||||
|
||||
type K8SList []K8SItem
|
||||
|
||||
@@ -103,3 +103,59 @@ func nodeK8sSubresourceSchemaMake() map[string]*schema.Schema {
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func mastersSchemaMake() map[string]*schema.Schema {
|
||||
masters := masterGroupSchemaMake()
|
||||
masters["num"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "Number of nodes to create.",
|
||||
}
|
||||
masters["cpu"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "Node CPU count.",
|
||||
}
|
||||
masters["ram"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "Node RAM in MB.",
|
||||
}
|
||||
masters["disk"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "Node boot disk size in GB.",
|
||||
}
|
||||
return masters
|
||||
}
|
||||
|
||||
func workersSchemaMake() map[string]*schema.Schema {
|
||||
workers := k8sGroupListSchemaMake()
|
||||
workers["num"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "Number of nodes to create.",
|
||||
}
|
||||
workers["cpu"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "Node CPU count.",
|
||||
}
|
||||
workers["ram"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "Node RAM in MB.",
|
||||
}
|
||||
workers["disk"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "Node boot disk size in GB.",
|
||||
}
|
||||
return workers
|
||||
}
|
||||
|
||||
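// Optional refactoring sketch (not part of this change): the num/cpu/ram/disk overrides
// added to mastersSchemaMake and workersSchemaMake above are identical, so they could be
// shared. The helper name and the dropped per-field descriptions are assumptions.
func withNodeSizingOverrides(s map[string]*schema.Schema) map[string]*schema.Schema {
	s["num"] = &schema.Schema{
		Type:        schema.TypeInt,
		Required:    true,
		Description: "Number of nodes to create.",
	}
	for _, key := range []string{"cpu", "ram", "disk"} {
		s[key] = &schema.Schema{
			Type:     schema.TypeInt,
			Required: true,
			ForceNew: true,
		}
	}
	return s
}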
@@ -44,6 +44,7 @@ import (
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
@@ -79,10 +80,11 @@ func resourceK8sCreate(ctx context.Context, d *schema.ResourceData, m interface{
|
||||
urlValues.Add("workerRam", strconv.Itoa(workerNode.Ram))
|
||||
urlValues.Add("workerDisk", strconv.Itoa(workerNode.Disk))
|
||||
|
||||
//if withLB, ok := d.GetOk("with_lb"); ok {
|
||||
//urlValues.Add("withLB", strconv.FormatBool(withLB.(bool)))
|
||||
//}
|
||||
urlValues.Add("withLB", strconv.FormatBool(true))
|
||||
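// Note (comment only, not in the original change): d.GetOk reports ok=false for the zero
// value, so an explicit `with_lb = false` in the configuration also takes the else branch
// below and still sends withLB=true; a plain d.Get("with_lb").(bool) would honour it.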
if withLB, ok := d.GetOk("with_lb"); ok {
|
||||
urlValues.Add("withLB", strconv.FormatBool(withLB.(bool)))
|
||||
} else {
|
||||
urlValues.Add("withLB", strconv.FormatBool(true))
|
||||
}
|
||||
|
||||
if extNet, ok := d.GetOk("extnet_id"); ok {
|
||||
urlValues.Add("extnetId", strconv.Itoa(extNet.(int)))
|
||||
@@ -90,9 +92,9 @@ func resourceK8sCreate(ctx context.Context, d *schema.ResourceData, m interface{
|
||||
urlValues.Add("extnetId", "0")
|
||||
}
|
||||
|
||||
//if desc, ok := d.GetOk("desc"); ok {
|
||||
//urlValues.Add("desc", desc.(string))
|
||||
//}
|
||||
if desc, ok := d.GetOk("desc"); ok {
|
||||
urlValues.Add("desc", desc.(string))
|
||||
}
|
||||
|
||||
resp, err := c.DecortAPICall(ctx, "POST", K8sCreateAPI, urlValues)
|
||||
if err != nil {
|
||||
@@ -126,59 +128,57 @@ func resourceK8sCreate(ctx context.Context, d *schema.ResourceData, m interface{
|
||||
time.Sleep(time.Second * 10)
|
||||
}
|
||||
|
||||
k8s, err := utilityK8sCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.Set("default_wg_id", k8s.Groups.Workers[0].ID)
|
||||
|
||||
urlValues = &url.Values{}
|
||||
urlValues.Add("lbId", strconv.Itoa(k8s.LbID))
|
||||
|
||||
resp, err = c.DecortAPICall(ctx, "POST", LbGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
var lb LbRecord
|
||||
if err := json.Unmarshal([]byte(resp), &lb); err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
d.Set("extnet_id", lb.ExtNetID)
|
||||
d.Set("lb_ip", lb.PrimaryNode.FrontendIP)
|
||||
|
||||
urlValues = &url.Values{}
|
||||
urlValues.Add("k8sId", d.Id())
|
||||
kubeconfig, err := c.DecortAPICall(ctx, "POST", K8sGetConfigAPI, urlValues)
|
||||
if err != nil {
|
||||
log.Warnf("could not get kubeconfig: %v", err)
|
||||
}
|
||||
d.Set("kubeconfig", kubeconfig)
|
||||
|
||||
return nil
|
||||
return resourceK8sRead(ctx, d, m)
|
||||
}
|
||||
|
||||
func resourceK8sRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceK8sRead: called with id %s, rg %d", d.Id(), d.Get("rg_id").(int))
|
||||
//log.Debugf("resourceK8sRead: called with id %s, rg %d", d.Id(), d.Get("rg_id").(int))
|
||||
|
||||
k8s, err := utilityK8sCheckPresence(ctx, d, m)
|
||||
if k8s == nil {
|
||||
k8s, err := utilityDataK8sCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
k8sList, err := utilityK8sListCheckPresence(ctx, d, m, K8sListAPI)
|
||||
if err != nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
curK8s := K8SItem{}
|
||||
for _, k8sCluster := range k8sList {
|
||||
if k8sCluster.ID == k8s.ID {
|
||||
curK8s = k8sCluster
|
||||
}
|
||||
}
|
||||
if curK8s.ID == 0 {
|
||||
return diag.Errorf("Cluster with id %d not found", k8s.ID)
|
||||
}
|
||||
d.Set("vins_id", curK8s.VINSID)
|
||||
|
||||
d.Set("name", k8s.Name)
|
||||
d.Set("rg_id", k8s.RgID)
|
||||
d.Set("k8sci_id", k8s.CI)
|
||||
d.Set("wg_name", k8s.Groups.Workers[0].Name)
|
||||
d.Set("masters", nodeToResource(k8s.Groups.Masters))
|
||||
d.Set("workers", nodeToResource(k8s.Groups.Workers[0]))
|
||||
d.Set("default_wg_id", k8s.Groups.Workers[0].ID)
|
||||
masterComputeList := make([]kvmvm.ComputeGetResp, 0, len(k8s.K8SGroups.Masters.DetailedInfo))
|
||||
workersComputeList := make([]kvmvm.ComputeGetResp, 0, len(k8s.K8SGroups.Workers))
|
||||
for _, masterNode := range k8s.K8SGroups.Masters.DetailedInfo {
|
||||
compute, err := utilityComputeCheckPresence(ctx, d, m, masterNode.ID)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
masterComputeList = append(masterComputeList, *compute)
|
||||
}
|
||||
for _, worker := range k8s.K8SGroups.Workers {
|
||||
for _, info := range worker.DetailedInfo {
|
||||
compute, err := utilityComputeCheckPresence(ctx, d, m, info.ID)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
workersComputeList = append(workersComputeList, *compute)
|
||||
}
|
||||
}
|
||||
|
||||
flattenResourceK8s(d, *k8s, masterComputeList, workersComputeList)
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("lbId", strconv.Itoa(k8s.LbID))
|
||||
urlValues.Add("lbId", strconv.FormatUint(k8s.LBID, 10))
|
||||
|
||||
resp, err := c.DecortAPICall(ctx, "POST", LbGetAPI, urlValues)
|
||||
if err != nil {
|
||||
@@ -225,21 +225,21 @@ func resourceK8sUpdate(ctx context.Context, d *schema.ResourceData, m interface{
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
wg := k8s.Groups.Workers[0]
|
||||
wg := k8s.K8SGroups.Workers[0]
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("k8sId", d.Id())
|
||||
urlValues.Add("workersGroupId", strconv.Itoa(wg.ID))
|
||||
urlValues.Add("workersGroupId", strconv.FormatUint(wg.ID, 10))
|
||||
|
||||
newWorkers := parseNode(d.Get("workers").([]interface{}))
|
||||
|
||||
if newWorkers.Num > wg.Num {
|
||||
urlValues.Add("num", strconv.Itoa(newWorkers.Num-wg.Num))
|
||||
if uint64(newWorkers.Num) > wg.Num {
|
||||
urlValues.Add("num", strconv.FormatUint(uint64(newWorkers.Num)-wg.Num, 10))
|
||||
if _, err := c.DecortAPICall(ctx, "POST", K8sWorkerAddAPI, urlValues); err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
} else {
|
||||
for i := wg.Num - 1; i >= newWorkers.Num; i-- {
|
||||
urlValues.Set("workerId", strconv.Itoa(wg.DetailedInfo[i].ID))
|
||||
for i := int(wg.Num) - 1; i >= newWorkers.Num; i-- {
|
||||
urlValues.Set("workerId", strconv.FormatUint(wg.DetailedInfo[i].ID, 10))
|
||||
if _, err := c.DecortAPICall(ctx, "POST", K8sWorkerDeleteAPI, urlValues); err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
@@ -306,10 +306,11 @@ func resourceK8sSchemaMake() map[string]*schema.Schema {
|
||||
"masters": {
|
||||
Type: schema.TypeList,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
ForceNew: true,
|
||||
MaxItems: 1,
|
||||
Elem: &schema.Resource{
|
||||
Schema: nodeK8sSubresourceSchemaMake(),
|
||||
Schema: mastersSchemaMake(),
|
||||
},
|
||||
Description: "Master node(s) configuration.",
|
||||
},
|
||||
@@ -317,20 +318,21 @@ func resourceK8sSchemaMake() map[string]*schema.Schema {
|
||||
"workers": {
|
||||
Type: schema.TypeList,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
MaxItems: 1,
|
||||
Elem: &schema.Resource{
|
||||
Schema: nodeK8sSubresourceSchemaMake(),
|
||||
Schema: workersSchemaMake(),
|
||||
},
|
||||
Description: "Worker node(s) configuration.",
|
||||
},
|
||||
|
||||
//"with_lb": {
|
||||
//Type: schema.TypeBool,
|
||||
//Optional: true,
|
||||
//ForceNew: true,
|
||||
//Default: true,
|
||||
//Description: "Create k8s with load balancer if true.",
|
||||
//},
|
||||
"with_lb": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
ForceNew: true,
|
||||
Default: true,
|
||||
Description: "Create k8s with load balancer if true.",
|
||||
},
|
||||
|
||||
"extnet_id": {
|
||||
Type: schema.TypeInt,
|
||||
@@ -340,17 +342,80 @@ func resourceK8sSchemaMake() map[string]*schema.Schema {
|
||||
Description: "ID of the external network to connect workers to. If omitted network will be chosen by the platfom.",
|
||||
},
|
||||
|
||||
//"desc": {
|
||||
//Type: schema.TypeString,
|
||||
//Optional: true,
|
||||
//Description: "Text description of this instance.",
|
||||
//},
|
||||
"desc": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Description: "Text description of this instance.",
|
||||
},
|
||||
|
||||
"acl": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: aclGroupSchemaMake(),
|
||||
},
|
||||
},
|
||||
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"account_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"bservice_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"created_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"created_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"deleted_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"deleted_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"k8s_ci_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"lb_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"lb_ip": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "IP address of default load balancer.",
|
||||
},
|
||||
"rg_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"tech_status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"updated_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"updated_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
|
||||
"default_wg_id": {
|
||||
Type: schema.TypeInt,
|
||||
@@ -363,6 +428,11 @@ func resourceK8sSchemaMake() map[string]*schema.Schema {
|
||||
Computed: true,
|
||||
Description: "Kubeconfig for cluster access.",
|
||||
},
|
||||
"vins_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of default vins for this instace.",
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
@@ -376,15 +446,15 @@ func ResourceK8s() *schema.Resource {
|
||||
DeleteContext: resourceK8sDelete,
|
||||
|
||||
Importer: &schema.ResourceImporter{
|
||||
State: schema.ImportStatePassthrough,
|
||||
StateContext: schema.ImportStatePassthroughContext,
|
||||
},
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Create: &constants.Timeout20m,
|
||||
Read: &constants.Timeout30s,
|
||||
Update: &constants.Timeout20m,
|
||||
Delete: &constants.Timeout60s,
|
||||
Default: &constants.Timeout60s,
|
||||
Create: &constants.Timeout30m,
|
||||
Read: &constants.Timeout300s,
|
||||
Update: &constants.Timeout300s,
|
||||
Delete: &constants.Timeout300s,
|
||||
Default: &constants.Timeout300s,
|
||||
},
|
||||
|
||||
Schema: resourceK8sSchemaMake(),
|
||||
|
||||
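// Assumed for context (defined in internal/constants, not shown in this diff): the new
// timeout values referenced above are presumably package-level time.Duration variables
// along these lines, since their addresses are taken via &constants.TimeoutXXX:
var (
	Timeout30m  = 30 * time.Minute
	Timeout600s = 600 * time.Second
	Timeout300s = 300 * time.Second
)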
@@ -35,11 +35,13 @@ import (
|
||||
"context"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
@@ -91,23 +93,51 @@ func resourceK8sWgCreate(ctx context.Context, d *schema.ResourceData, m interfac
|
||||
//time.Sleep(time.Second * 5)
|
||||
//}
|
||||
|
||||
return nil
|
||||
return resourceK8sWgRead(ctx, d, m)
|
||||
}
|
||||
|
||||
func resourceK8sWgRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceK8sWgRead: called with k8s id %d", d.Get("k8s_id").(int))
|
||||
log.Debugf("resourceK8sWgRead: called with %v", d.Id())
|
||||
|
||||
wg, err := utilityK8sWgCheckPresence(ctx, d, m)
|
||||
if wg == nil {
|
||||
d.SetId("")
|
||||
k8s, err := utilityDataK8sCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.Set("name", wg.Name)
|
||||
d.Set("num", wg.Num)
|
||||
d.Set("cpu", wg.Cpu)
|
||||
d.Set("ram", wg.Ram)
|
||||
d.Set("disk", wg.Disk)
|
||||
var id int
|
||||
if d.Id() != "" {
|
||||
id, err = strconv.Atoi(strings.Split(d.Id(), "#")[0])
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
} else {
|
||||
id = d.Get("wg_id").(int)
|
||||
}
|
||||
|
||||
curWg := K8SGroup{}
|
||||
for _, wg := range k8s.K8SGroups.Workers {
|
||||
if wg.ID == uint64(id) {
|
||||
curWg = wg
|
||||
break
|
||||
}
|
||||
}
|
||||
if curWg.ID == 0 {
|
||||
return diag.Errorf("Not found wg with id: %v in k8s cluster: %v", id, k8s.ID)
|
||||
}
|
||||
|
||||
workersComputeList := make([]kvmvm.ComputeGetResp, 0, 0)
|
||||
for _, info := range curWg.DetailedInfo {
|
||||
compute, err := utilityComputeCheckPresence(ctx, d, m, info.ID)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
workersComputeList = append(workersComputeList, *compute)
|
||||
}
|
||||
|
||||
d.SetId(strings.Split(d.Id(), "#")[0])
|
||||
d.Set("k8s_id", k8s.ID)
|
||||
d.Set("wg_id", curWg.ID)
|
||||
flattenWgData(d, curWg, workersComputeList)
|
||||
|
||||
return nil
|
||||
}
|
||||
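// Implied by the ID handling above (not stated elsewhere in this diff): with
// ImportStatePassthroughContext a worker group is imported with a composite ID of the
// form "<wg_id>#<k8s_id>", e.g. `terraform import decort_k8s_wg.example 1234#567`;
// resourceK8sWgRead then strips the "#<k8s_id>" part and keeps the bare wg id, while
// utilityDataK8sCheckPresence uses the part after "#" to locate the parent cluster.
// The resource address and IDs in the example are hypothetical.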
@@ -126,15 +156,15 @@ func resourceK8sWgUpdate(ctx context.Context, d *schema.ResourceData, m interfac
|
||||
urlValues.Add("k8sId", strconv.Itoa(d.Get("k8s_id").(int)))
|
||||
urlValues.Add("workersGroupId", d.Id())
|
||||
|
||||
if newNum := d.Get("num").(int); newNum > wg.Num {
|
||||
urlValues.Add("num", strconv.Itoa(newNum-wg.Num))
|
||||
if newNum := d.Get("num").(int); uint64(newNum) > wg.Num {
|
||||
urlValues.Add("num", strconv.FormatUint(uint64(newNum)-wg.Num, 10))
|
||||
_, err := c.DecortAPICall(ctx, "POST", K8sWorkerAddAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
} else {
|
||||
for i := wg.Num - 1; i >= newNum; i-- {
|
||||
urlValues.Set("workerId", strconv.Itoa(wg.DetailedInfo[i].ID))
|
||||
for i := int(wg.Num) - 1; i >= newNum; i-- {
|
||||
urlValues.Set("workerId", strconv.FormatUint(wg.DetailedInfo[i].ID, 10))
|
||||
_, err := c.DecortAPICall(ctx, "POST", K8sWorkerDeleteAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
@@ -159,7 +189,7 @@ func resourceK8sWgDelete(ctx context.Context, d *schema.ResourceData, m interfac
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("k8sId", strconv.Itoa(d.Get("k8s_id").(int)))
|
||||
urlValues.Add("workersGroupId", strconv.Itoa(wg.ID))
|
||||
urlValues.Add("workersGroupId", strconv.FormatUint(wg.ID, 10))
|
||||
|
||||
_, err = c.DecortAPICall(ctx, "POST", K8sWgDeleteAPI, urlValues)
|
||||
if err != nil {
|
||||
@@ -215,6 +245,43 @@ func resourceK8sWgSchemaMake() map[string]*schema.Schema {
|
||||
Default: 0,
|
||||
Description: "Worker node boot disk size. If unspecified or 0, size is defined by OS image size.",
|
||||
},
|
||||
"wg_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of k8s worker Group.",
|
||||
},
|
||||
"detailed_info": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: detailedInfoSchemaMake(),
|
||||
},
|
||||
},
|
||||
"labels": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"annotations": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
"taints": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
@@ -228,15 +295,15 @@ func ResourceK8sWg() *schema.Resource {
|
||||
DeleteContext: resourceK8sWgDelete,
|
||||
|
||||
Importer: &schema.ResourceImporter{
|
||||
State: schema.ImportStatePassthrough,
|
||||
StateContext: schema.ImportStatePassthroughContext,
|
||||
},
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Create: &constants.Timeout20m,
|
||||
Read: &constants.Timeout30s,
|
||||
Update: &constants.Timeout20m,
|
||||
Delete: &constants.Timeout60s,
|
||||
Default: &constants.Timeout60s,
|
||||
Create: &constants.Timeout600s,
|
||||
Read: &constants.Timeout300s,
|
||||
Update: &constants.Timeout300s,
|
||||
Delete: &constants.Timeout300s,
|
||||
Default: &constants.Timeout300s,
|
||||
},
|
||||
|
||||
Schema: resourceK8sWgSchemaMake(),
|
||||
|
||||
@@ -35,12 +35,15 @@ import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/kvmvm"
|
||||
)
|
||||
|
||||
func utilityK8sCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*K8sRecord, error) {
|
||||
func utilityK8sCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*K8SRecord, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("k8sId", d.Id())
|
||||
@@ -54,10 +57,76 @@ func utilityK8sCheckPresence(ctx context.Context, d *schema.ResourceData, m inte
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
var k8s K8sRecord
|
||||
k8s := K8SRecord{}
|
||||
if err := json.Unmarshal([]byte(resp), &k8s); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return &k8s, nil
|
||||
}
|
||||
|
||||
func utilityComputeCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}, computeID uint64) (*kvmvm.ComputeGetResp, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
urlValues.Add("computeId", strconv.FormatUint(computeID, 10))
|
||||
|
||||
computeRaw, err := c.DecortAPICall(ctx, "POST", kvmvm.ComputeGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
compute := &kvmvm.ComputeGetResp{}
|
||||
err = json.Unmarshal([]byte(computeRaw), compute)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return compute, nil
|
||||
}
|
||||
|
||||
func utilityDataK8sCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*K8SRecord, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
if d.Get("k8s_id") != 0 && d.Get("k8s_id") != nil {
|
||||
urlValues.Add("k8sId", strconv.Itoa(d.Get("k8s_id").(int)))
|
||||
} else if id := d.Id(); id != "" {
|
||||
if strings.Contains(id, "#") {
|
||||
urlValues.Add("k8sId", strings.Split(d.Id(), "#")[1])
|
||||
} else {
|
||||
urlValues.Add("k8sId", d.Id())
|
||||
}
|
||||
}
|
||||
k8sRaw, err := c.DecortAPICall(ctx, "POST", K8sGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
k8s := &K8SRecord{}
|
||||
err = json.Unmarshal([]byte(k8sRaw), k8s)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return k8s, nil
|
||||
}
|
||||
|
||||
func utilityK8sListCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}, api string) (K8SList, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("includedeleted", "false")
|
||||
urlValues.Add("page", "0")
|
||||
urlValues.Add("size", "0")
|
||||
|
||||
k8sListRaw, err := c.DecortAPICall(ctx, "POST", api, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
k8sList := K8SList{}
|
||||
|
||||
err = json.Unmarshal([]byte(k8sListRaw), &k8sList)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return k8sList, nil
|
||||
}
|
||||
|
||||
@@ -34,6 +34,7 @@ package k8s
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"strconv"
|
||||
|
||||
@@ -41,7 +42,7 @@ import (
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
)
|
||||
|
||||
func utilityK8sWgCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*K8sNodeRecord, error) {
|
||||
func utilityK8sWgCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*K8SGroup, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("k8sId", strconv.Itoa(d.Get("k8s_id").(int)))
|
||||
@@ -52,24 +53,29 @@ func utilityK8sWgCheckPresence(ctx context.Context, d *schema.ResourceData, m in
|
||||
}
|
||||
|
||||
if resp == "" {
|
||||
return nil, nil
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var k8s K8sRecord
|
||||
var k8s K8SRecord
|
||||
if err := json.Unmarshal([]byte(resp), &k8s); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
id, err := strconv.Atoi(d.Id())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
var id int
|
||||
if d.Id() != "" {
|
||||
id, err = strconv.Atoi(d.Id())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
} else {
|
||||
id = d.Get("wg_id").(int)
|
||||
}
|
||||
|
||||
for _, wg := range k8s.Groups.Workers {
|
||||
if wg.ID == id {
|
||||
for _, wg := range k8s.K8SGroups.Workers {
|
||||
if wg.ID == uint64(id) {
|
||||
return &wg, nil
|
||||
}
|
||||
}
|
||||
|
||||
return nil, nil
|
||||
return nil, fmt.Errorf("Not found wg with id: %v in k8s cluster: %v", id, k8s.ID)
|
||||
}
|
||||
|
||||
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
@@ -31,16 +32,24 @@ Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
|
||||
package kvmvm
|
||||
|
||||
const KvmX86CreateAPI = "/restmachine/cloudapi/kvmx86/create"
|
||||
const KvmPPCCreateAPI = "/restmachine/cloudapi/kvmppc/create"
|
||||
const ComputeGetAPI = "/restmachine/cloudapi/compute/get"
|
||||
const RgListComputesAPI = "/restmachine/cloudapi/rg/listComputes"
|
||||
const ComputeNetAttachAPI = "/restmachine/cloudapi/compute/netAttach"
|
||||
const ComputeNetDetachAPI = "/restmachine/cloudapi/compute/netDetach"
|
||||
const ComputeDiskAttachAPI = "/restmachine/cloudapi/compute/diskAttach"
|
||||
const ComputeDiskDetachAPI = "/restmachine/cloudapi/compute/diskDetach"
|
||||
const ComputeStartAPI = "/restmachine/cloudapi/compute/start"
|
||||
const ComputeStopAPI = "/restmachine/cloudapi/compute/stop"
|
||||
const ComputeResizeAPI = "/restmachine/cloudapi/compute/resize"
|
||||
const DisksResizeAPI = "/restmachine/cloudapi/disks/resize2"
|
||||
const ComputeDeleteAPI = "/restmachine/cloudapi/compute/delete"
|
||||
const (
|
||||
KvmX86CreateAPI = "/restmachine/cloudapi/kvmx86/create"
|
||||
KvmPPCCreateAPI = "/restmachine/cloudapi/kvmppc/create"
|
||||
ComputeGetAPI = "/restmachine/cloudapi/compute/get"
|
||||
RgListComputesAPI = "/restmachine/cloudapi/rg/listComputes"
|
||||
ComputeNetAttachAPI = "/restmachine/cloudapi/compute/netAttach"
|
||||
ComputeNetDetachAPI = "/restmachine/cloudapi/compute/netDetach"
|
||||
ComputeDiskAttachAPI = "/restmachine/cloudapi/compute/diskAttach"
|
||||
ComputeDiskDetachAPI = "/restmachine/cloudapi/compute/diskDetach"
|
||||
ComputeStartAPI = "/restmachine/cloudapi/compute/start"
|
||||
ComputeStopAPI = "/restmachine/cloudapi/compute/stop"
|
||||
ComputeResizeAPI = "/restmachine/cloudapi/compute/resize"
|
||||
DisksResizeAPI = "/restmachine/cloudapi/disks/resize2"
|
||||
ComputeDeleteAPI = "/restmachine/cloudapi/compute/delete"
|
||||
ComputeUpdateAPI = "/restmachine/cloudapi/compute/update"
|
||||
ComputeDiskAddAPI = "/restmachine/cloudapi/compute/diskAdd"
|
||||
ComputeDiskDeleteAPI = "/restmachine/cloudapi/compute/diskDel"
|
||||
ComputeRestoreAPI = "/restmachine/cloudapi/compute/restore"
|
||||
ComputeEnableAPI = "/restmachine/cloudapi/compute/enable"
|
||||
ComputeDisableAPI = "/restmachine/cloudapi/compute/disable"
|
||||
)
|
||||
|
||||
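// Minimal usage sketch for the newly added enable/disable endpoints, mirroring the call
// pattern used elsewhere in this changeset; the helper name is hypothetical and the
// necessary imports (context, net/url, controller) are assumed.
func setComputeEnabled(ctx context.Context, c *controller.ControllerCfg, computeID string, enabled bool) error {
	api := ComputeDisableAPI
	if enabled {
		api = ComputeEnableAPI
	}
	urlValues := &url.Values{}
	urlValues.Add("computeId", computeID)
	_, err := c.DecortAPICall(ctx, "POST", api, urlValues)
	return err
}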
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
@@ -39,10 +40,12 @@ import (
|
||||
// "net/url"
|
||||
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/status"
|
||||
log "github.com/sirupsen/logrus"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
|
||||
// "github.com/hashicorp/terraform-plugin-sdk/helper/validation"
|
||||
)
|
||||
|
||||
@@ -113,6 +116,36 @@ func parseComputeInterfacesToNetworks(ifaces []InterfaceRecord) []interface{} {
|
||||
return result
|
||||
}
|
||||
|
||||
func findInExtraDisks(diskID uint, extraDisks []interface{}) bool {
	for _, extraDisk := range extraDisks {
		if diskID == uint(extraDisk.(int)) {
			return true
		}
	}
	return false
}
|
||||
|
||||
func flattenComputeDisksDemo(disksList []DiskRecord, extraDisks []interface{}) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0)
|
||||
for _, disk := range disksList {
|
||||
if disk.Name == "bootdisk" || findInExtraDisks(disk.ID, extraDisks) { //skip main bootdisk and extraDisks
|
||||
continue
|
||||
}
|
||||
temp := map[string]interface{}{
|
||||
"disk_name": disk.Name,
|
||||
"disk_id": disk.ID,
|
||||
"disk_type": disk.Type,
|
||||
"sep_id": disk.SepID,
|
||||
"pool": disk.Pool,
|
||||
"desc": disk.Desc,
|
||||
"image_id": disk.ImageID,
|
||||
"size": disk.SizeMax,
|
||||
}
|
||||
res = append(res, temp)
|
||||
}
|
||||
return res
|
||||
}
|
||||
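// Illustration only (sample values are made up): given a boot disk, a data disk and one
// disk already tracked in extra_disks, flattenComputeDisksDemo keeps just the data disk.
//
//   disks := []DiskRecord{
//       {ID: 10, Name: "bootdisk", SizeMax: 20},
//       {ID: 11, Name: "data-1", SizeMax: 50},
//       {ID: 12, Name: "data-2", SizeMax: 80},
//   }
//   res := flattenComputeDisksDemo(disks, []interface{}{12}) // only disk 11 remains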
|
||||
func flattenCompute(d *schema.ResourceData, compFacts string) error {
|
||||
// This function expects that compFacts string contains response from API compute/get,
|
||||
// i.e. detailed information about compute instance.
|
||||
@@ -145,14 +178,17 @@ func flattenCompute(d *schema.ResourceData, compFacts string) error {
|
||||
d.Set("image_id", model.ImageID)
|
||||
}
|
||||
d.Set("description", model.Desc)
|
||||
d.Set("enabled", false)
|
||||
if model.Status == status.Enabled {
|
||||
d.Set("enabled", true)
|
||||
}
|
||||
|
||||
d.Set("cloud_init", "applied") // NOTE: for existing compute we hard-code this value as an indicator for DiffSuppress fucntion
|
||||
// d.Set("status", model.Status)
|
||||
// d.Set("tech_status", model.TechStatus)
|
||||
|
||||
d.Set("started", false)
|
||||
if model.TechStatus == "STARTED" {
|
||||
d.Set("started", true)
|
||||
} else {
|
||||
d.Set("started", false)
|
||||
}
|
||||
|
||||
bootDisk := findBootDisk(model.Disks)
|
||||
@@ -162,12 +198,12 @@ func flattenCompute(d *schema.ResourceData, compFacts string) error {
|
||||
d.Set("sep_id", bootDisk.SepID)
|
||||
d.Set("pool", bootDisk.Pool)
|
||||
|
||||
if len(model.Disks) > 0 {
|
||||
log.Debugf("flattenCompute: calling parseComputeDisksToExtraDisks for %d disks", len(model.Disks))
|
||||
if err = d.Set("extra_disks", parseComputeDisksToExtraDisks(model.Disks)); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
//if len(model.Disks) > 0 {
|
||||
//log.Debugf("flattenCompute: calling parseComputeDisksToExtraDisks for %d disks", len(model.Disks))
|
||||
//if err = d.Set("extra_disks", parseComputeDisksToExtraDisks(model.Disks)); err != nil {
|
||||
//return err
|
||||
//}
|
||||
//}
|
||||
|
||||
if len(model.Interfaces) > 0 {
|
||||
log.Debugf("flattenCompute: calling parseComputeInterfacesToNetworks for %d interfaces", len(model.Interfaces))
|
||||
@@ -183,6 +219,11 @@ func flattenCompute(d *schema.ResourceData, compFacts string) error {
|
||||
}
|
||||
}
|
||||
|
||||
err = d.Set("disks", flattenComputeDisksDemo(model.Disks, d.Get("extra_disks").(*schema.Set).List()))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -234,6 +275,12 @@ func DataSourceCompute() *schema.Resource {
|
||||
Description: "ID of the resource group where this compute instance is located.",
|
||||
},
|
||||
|
||||
"enabled": {
|
||||
Type: schema.TypeBool,
|
||||
Computed: true,
|
||||
Description: "If true - enable the compute, else - disable",
|
||||
},
|
||||
|
||||
"rg_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
@@ -303,16 +350,67 @@ func DataSourceCompute() *schema.Resource {
|
||||
Description: "IDs of the extra disk(s) attached to this compute.",
|
||||
},
|
||||
|
||||
/*
|
||||
"disks": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: dataSourceDiskSchemaMake(), // ID, type, name, size, account ID, SEP ID, SEP type, pool, status, tech status, compute ID, image ID
|
||||
"disks": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"disk_name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Name for disk",
|
||||
},
|
||||
"size": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "Disk size in GiB",
|
||||
},
|
||||
"disk_type": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
ValidateFunc: validation.StringInSlice([]string{"B", "D"}, false),
|
||||
Description: "The type of disk in terms of its role in compute: 'B=Boot, D=Data'",
|
||||
},
|
||||
"sep_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Description: "Storage endpoint provider ID; by default the same with boot disk",
|
||||
},
|
||||
"pool": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Description: "Pool name; by default will be chosen automatically",
|
||||
},
|
||||
"desc": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Description: "Optional description",
|
||||
},
|
||||
"image_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Description: "Specify image id for create disk from template",
|
||||
},
|
||||
"disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk ID",
|
||||
},
|
||||
"permanently": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: false,
|
||||
Description: "Disk deletion status",
|
||||
},
|
||||
},
|
||||
Description: "Detailed specification for all disks attached to this compute instance (including bood disk).",
|
||||
},
|
||||
*/
|
||||
},
|
||||
|
||||
"network": {
|
||||
Type: schema.TypeSet,
|
||||
@@ -348,7 +446,7 @@ func DataSourceCompute() *schema.Resource {
|
||||
"started": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: true,
|
||||
Computed: true,
|
||||
Description: "Is compute started.",
|
||||
},
|
||||
},
|
||||
|
||||
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
|
||||
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
|
||||
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
|
||||
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
@@ -33,6 +34,7 @@ package kvmvm
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"strconv"
|
||||
@@ -40,6 +42,7 @@ import (
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/statefuncs"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/status"
|
||||
log "github.com/sirupsen/logrus"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
@@ -75,9 +78,8 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
urlValues.Add("cpu", fmt.Sprintf("%d", d.Get("cpu").(int)))
|
||||
urlValues.Add("ram", fmt.Sprintf("%d", d.Get("ram").(int)))
|
||||
urlValues.Add("imageId", fmt.Sprintf("%d", d.Get("image_id").(int)))
|
||||
urlValues.Add("bootDisk", fmt.Sprintf("%d", d.Get("boot_disk_size").(int)))
|
||||
urlValues.Add("netType", "NONE") // at the 1st step create isolated compute
|
||||
urlValues.Add("start", "0") // at the 1st step create compute in a stopped state
|
||||
urlValues.Add("netType", "NONE")
|
||||
urlValues.Add("start", "0") // at the 1st step create compute in a stopped state
|
||||
|
||||
argVal, argSet := d.GetOk("description")
|
||||
if argSet {
|
||||
@@ -92,6 +94,31 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
urlValues.Add("pool", pool.(string))
|
||||
}
|
||||
|
||||
if ipaType, ok := d.GetOk("ipa_type"); ok {
|
||||
urlValues.Add("ipaType", ipaType.(string))
|
||||
}
|
||||
|
||||
if bootSize, ok := d.GetOk("boot_disk_size"); ok {
|
||||
urlValues.Add("bootDisk", fmt.Sprintf("%d", bootSize.(int)))
|
||||
}
|
||||
|
||||
if IS, ok := d.GetOk("is"); ok {
|
||||
urlValues.Add("IS", IS.(string))
|
||||
}
|
||||
if networks, ok := d.GetOk("network"); ok {
|
||||
if networks.(*schema.Set).Len() > 0 {
|
||||
ns := networks.(*schema.Set).List()
|
||||
defaultNetwork := ns[0].(map[string]interface{})
|
||||
urlValues.Set("netType", defaultNetwork["net_type"].(string))
|
||||
urlValues.Add("netId", fmt.Sprintf("%d", defaultNetwork["net_id"].(int)))
|
||||
ipaddr, ipSet := defaultNetwork["ip_address"] // "ip_address" key is optional
|
||||
if ipSet {
|
||||
urlValues.Add("ipAddr", ipaddr.(string))
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
/*
|
||||
sshKeysVal, sshKeysSet := d.GetOk("ssh_keys")
|
||||
if sshKeysSet {
|
||||
@@ -123,6 +150,7 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
urlValues = &url.Values{}
|
||||
// Compute create API returns ID of the new Compute instance on success
|
||||
|
||||
d.SetId(apiResp) // update ID of the resource to tell Terraform that the resource exists, albeit partially
|
||||
@@ -140,6 +168,7 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
log.Errorf("resourceComputeCreate: could not delete compute after failed creation: %v", err)
|
||||
}
|
||||
d.SetId("")
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
}()
|
||||
|
||||
@@ -161,7 +190,7 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
argVal, argSet = d.GetOk("network")
|
||||
if argSet && argVal.(*schema.Set).Len() > 0 {
|
||||
log.Debugf("resourceComputeCreate: calling utilityComputeNetworksConfigure to attach %d network(s)", argVal.(*schema.Set).Len())
|
||||
err = utilityComputeNetworksConfigure(ctx, d, m, false) // do_delta=false, as we are working on a new compute
|
||||
err = utilityComputeNetworksConfigure(ctx, d, m, false, true) // do_delta=false, as we are working on a new compute
|
||||
if err != nil {
|
||||
log.Errorf("resourceComputeCreate: error when attaching networks to a new Compute ID %d: %s", compId, err)
|
||||
cleanup = true
|
||||
@@ -181,19 +210,72 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
}
|
||||
}
|
||||
|
||||
if enabled, ok := d.GetOk("enabled"); ok {
|
||||
api := ComputeDisableAPI
|
||||
if enabled.(bool) {
|
||||
api = ComputeEnableAPI
|
||||
}
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("computeId", fmt.Sprintf("%d", compId))
|
||||
log.Debugf("resourceComputeCreate: enable=%t Compute ID %d after completing its resource configuration", compId, enabled)
|
||||
if _, err := c.DecortAPICall(ctx, "POST", api, urlValues); err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
if !cleanup {
|
||||
if disks, ok := d.GetOk("disks"); ok {
|
||||
log.Debugf("resourceComputeCreate: Create disks on ComputeID: %d", compId)
|
||||
addedDisks := disks.([]interface{})
|
||||
if len(addedDisks) > 0 {
|
||||
for _, disk := range addedDisks {
|
||||
diskConv := disk.(map[string]interface{})
|
||||
|
||||
urlValues.Add("computeId", d.Id())
|
||||
urlValues.Add("diskName", diskConv["disk_name"].(string))
|
||||
urlValues.Add("size", strconv.Itoa(diskConv["size"].(int)))
|
||||
if diskConv["disk_type"].(string) != "" {
|
||||
urlValues.Add("diskType", diskConv["disk_type"].(string))
|
||||
}
|
||||
if diskConv["sep_id"].(int) != 0 {
|
||||
urlValues.Add("sepId", strconv.Itoa(diskConv["sep_id"].(int)))
|
||||
}
|
||||
if diskConv["pool"].(string) != "" {
|
||||
urlValues.Add("pool", diskConv["pool"].(string))
|
||||
}
|
||||
if diskConv["desc"].(string) != "" {
|
||||
urlValues.Add("desc", diskConv["desc"].(string))
|
||||
}
|
||||
if diskConv["image_id"].(int) != 0 {
|
||||
urlValues.Add("imageId", strconv.Itoa(diskConv["image_id"].(int)))
|
||||
}
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeDiskAddAPI, urlValues)
|
||||
if err != nil {
|
||||
cleanup = true
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
log.Debugf("resourceComputeCreate: new Compute ID %d, name %s creation sequence complete", compId, d.Get("name").(string))
|
||||
|
||||
// We may reuse dataSourceComputeRead here as we maintain similarity
|
||||
// between Compute resource and Compute data source schemas
|
||||
// Compute read function will also update resource ID on success, so that Terraform
|
||||
// will know the resource exists
|
||||
return dataSourceComputeRead(ctx, d, m)
|
||||
return resourceComputeRead(ctx, d, m)
|
||||
}
|
||||
|
||||
func resourceComputeRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceComputeRead: called for Compute name %s, RG ID %d",
|
||||
d.Get("name").(string), d.Get("rg_id").(int))
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
|
||||
compFacts, err := utilityComputeCheckPresence(ctx, d, m)
|
||||
if compFacts == "" {
|
||||
if err != nil {
|
||||
@@ -203,6 +285,49 @@ func resourceComputeRead(ctx context.Context, d *schema.ResourceData, m interfac
|
||||
return nil
|
||||
}
|
||||
|
||||
compute := &ComputeGetResp{}
|
||||
err = json.Unmarshal([]byte(compFacts), compute)
|
||||
|
||||
log.Debugf("resourceComputeRead: compute is: %+v", compute)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
switch compute.Status {
|
||||
case status.Deleted:
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("computeId", d.Id())
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeRestoreAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
_, err = c.DecortAPICall(ctx, "POST", ComputeEnableAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
case status.Destroyed:
|
||||
d.SetId("")
|
||||
return resourceComputeCreate(ctx, d, m)
|
||||
case status.Disabled:
	log.Debugf("The compute is in status %s; subsequent operations may fail. Please enable the compute first.", compute.Status)
case status.Redeploying, status.Deleting, status.Destroying:
	return diag.Errorf("The compute is in a transient status (%s), please wait for the current operation to finish", compute.Status)
case status.Modeled:
	return diag.Errorf("The compute is in status %s, please contact support for more information", compute.Status)
|
||||
}
|
||||
|
||||
compFacts, err = utilityComputeCheckPresence(ctx, d, m)
|
||||
log.Debugf("resourceComputeRead: after changes compute is: %s", compFacts)
|
||||
if compFacts == "" {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
// Compute with such name and RG ID was not found
|
||||
return nil
|
||||
}
|
||||
|
||||
if err = flattenCompute(d, compFacts); err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
@@ -219,6 +344,56 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
|
||||
computeRaw, err := utilityComputeCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
compute := &ComputeGetResp{}
|
||||
err = json.Unmarshal([]byte(computeRaw), compute)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
if d.HasChange("enabled") {
|
||||
enabled := d.Get("enabled")
|
||||
api := ComputeDisableAPI
|
||||
if enabled.(bool) {
|
||||
api = ComputeEnableAPI
|
||||
}
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("computeId", d.Id())
|
||||
log.Debugf("resourceComputeUpdate: enable=%t Compute ID %s after completing its resource configuration", d.Id(), enabled)
|
||||
if _, err := c.DecortAPICall(ctx, "POST", api, urlValues); err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
}
|
||||
|
||||
// check compute statuses
|
||||
switch compute.Status {
|
||||
case status.Deleted:
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("computeId", d.Id())
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeRestoreAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
_, err = c.DecortAPICall(ctx, "POST", ComputeEnableAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
case status.Destroyed:
|
||||
d.SetId("")
|
||||
return resourceComputeCreate(ctx, d, m)
|
||||
case status.Disabled:
	log.Debugf("The compute is in status %s; the update may fail. Please enable the compute first.", compute.Status)
case status.Redeploying, status.Deleting, status.Destroying:
	return diag.Errorf("The compute is in a transient status (%s), please wait for the current operation to finish", compute.Status)
case status.Modeled:
	return diag.Errorf("The compute is in status %s, please contact support for more information", compute.Status)
|
||||
}
|
||||
|
||||
/*
|
||||
1. Resize CPU/RAM
|
||||
2. Resize (grow) boot disk
|
||||
@@ -228,32 +403,32 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
*/
|
||||
|
||||
// 1. Resize CPU/RAM
|
||||
params := &url.Values{}
|
||||
urlValues := &url.Values{}
|
||||
doUpdate := false
|
||||
params.Add("computeId", d.Id())
|
||||
urlValues.Add("computeId", d.Id())
|
||||
|
||||
oldCpu, newCpu := d.GetChange("cpu")
|
||||
if oldCpu.(int) != newCpu.(int) {
|
||||
params.Add("cpu", fmt.Sprintf("%d", newCpu.(int)))
|
||||
urlValues.Add("cpu", fmt.Sprintf("%d", newCpu.(int)))
|
||||
doUpdate = true
|
||||
} else {
|
||||
params.Add("cpu", "0") // no change to CPU allocation
|
||||
urlValues.Add("cpu", "0") // no change to CPU allocation
|
||||
}
|
||||
|
||||
oldRam, newRam := d.GetChange("ram")
|
||||
if oldRam.(int) != newRam.(int) {
|
||||
params.Add("ram", fmt.Sprintf("%d", newRam.(int)))
|
||||
urlValues.Add("ram", fmt.Sprintf("%d", newRam.(int)))
|
||||
doUpdate = true
|
||||
} else {
|
||||
params.Add("ram", "0")
|
||||
urlValues.Add("ram", "0")
|
||||
}
|
||||
|
||||
if doUpdate {
|
||||
log.Debugf("resourceComputeUpdate: changing CPU %d -> %d and/or RAM %d -> %d",
|
||||
oldCpu.(int), newCpu.(int),
|
||||
oldRam.(int), newRam.(int))
|
||||
params.Add("force", "true")
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeResizeAPI, params)
|
||||
urlValues.Add("force", "true")
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeResizeAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
@@ -276,15 +451,27 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
}
|
||||
|
||||
// 3. Calculate and apply changes to data disks
|
||||
err := utilityComputeExtraDisksConfigure(ctx, d, m, true) // pass do_delta = true to apply changes, if any
|
||||
if d.HasChange("extra_disks") {
|
||||
err := utilityComputeExtraDisksConfigure(ctx, d, m, true) // pass do_delta = true to apply changes, if any
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
}
|
||||
|
||||
// 4. Calculate and apply changes to network connections
|
||||
err = utilityComputeNetworksConfigure(ctx, d, m, true, false) // pass do_delta = true to apply changes, if any
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
// 4. Calculate and apply changes to network connections
|
||||
err = utilityComputeNetworksConfigure(ctx, d, m, true) // pass do_delta = true to apply changes, if any
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
if d.HasChange("description") || d.HasChange("name") {
|
||||
updateParams := &url.Values{}
|
||||
updateParams.Add("computeId", d.Id())
|
||||
updateParams.Add("name", d.Get("name").(string))
|
||||
updateParams.Add("desc", d.Get("description").(string))
|
||||
if _, err := c.DecortAPICall(ctx, "POST", ComputeUpdateAPI, updateParams); err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
}
|
||||
|
||||
if d.HasChange("started") {
|
||||
@@ -301,9 +488,108 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
|
||||
}
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
if d.HasChange("disks") {
|
||||
deletedDisks := make([]interface{}, 0)
|
||||
addedDisks := make([]interface{}, 0)
|
||||
|
||||
oldDisks, newDisks := d.GetChange("disks")
|
||||
oldConv := oldDisks.([]interface{})
|
||||
newConv := newDisks.([]interface{})
|
||||
|
||||
for _, el := range oldConv {
|
||||
if !isContainsDisk(newConv, el) {
|
||||
deletedDisks = append(deletedDisks, el)
|
||||
}
|
||||
}
|
||||
|
||||
for _, el := range newConv {
|
||||
if !isContainsDisk(oldConv, el) {
|
||||
addedDisks = append(addedDisks, el)
|
||||
}
|
||||
}
|
||||
|
||||
if len(deletedDisks) > 0 {
|
||||
urlValues.Add("computeId", d.Id())
|
||||
urlValues.Add("force", "false")
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeStopAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
urlValues = &url.Values{}
|
||||
|
||||
for _, disk := range deletedDisks {
|
||||
diskConv := disk.(map[string]interface{})
|
||||
if diskConv["disk_name"].(string) == "bootdisk" {
|
||||
continue
|
||||
}
|
||||
urlValues.Add("computeId", d.Id())
|
||||
urlValues.Add("diskId", strconv.Itoa(diskConv["disk_id"].(int)))
|
||||
urlValues.Add("permanently", strconv.FormatBool(diskConv["permanently"].(bool)))
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeDiskDeleteAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
urlValues.Add("computeId", d.Id())
|
||||
urlValues.Add("altBootId", "0")
|
||||
_, err = c.DecortAPICall(ctx, "POST", ComputeStartAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
|
||||
if len(addedDisks) > 0 {
|
||||
for _, disk := range addedDisks {
|
||||
diskConv := disk.(map[string]interface{})
|
||||
if diskConv["disk_name"].(string) == "bootdisk" {
|
||||
continue
|
||||
}
|
||||
urlValues.Add("computeId", d.Id())
|
||||
urlValues.Add("diskName", diskConv["disk_name"].(string))
|
||||
urlValues.Add("size", strconv.Itoa(diskConv["size"].(int)))
|
||||
if diskConv["disk_type"].(string) != "" {
|
||||
urlValues.Add("diskType", diskConv["disk_type"].(string))
|
||||
}
|
||||
if diskConv["sep_id"].(int) != 0 {
|
||||
urlValues.Add("sepId", strconv.Itoa(diskConv["sep_id"].(int)))
|
||||
}
|
||||
if diskConv["pool"].(string) != "" {
|
||||
urlValues.Add("pool", diskConv["pool"].(string))
|
||||
}
|
||||
if diskConv["desc"].(string) != "" {
|
||||
urlValues.Add("desc", diskConv["desc"].(string))
|
||||
}
|
||||
if diskConv["image_id"].(int) != 0 {
|
||||
urlValues.Add("imageId", strconv.Itoa(diskConv["image_id"].(int)))
|
||||
}
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeDiskAddAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// we may reuse dataSourceComputeRead here as we maintain similarity
|
||||
// between Compute resource and Compute data source schemas
|
||||
return dataSourceComputeRead(ctx, d, m)
|
||||
return resourceComputeRead(ctx, d, m)
|
||||
}
|
||||
|
||||
func isContainsDisk(els []interface{}, el interface{}) bool {
|
||||
for _, elOld := range els {
|
||||
elOldConv := elOld.(map[string]interface{})
|
||||
elConv := el.(map[string]interface{})
|
||||
if elOldConv["disk_name"].(string) == elConv["disk_name"].(string) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
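// Sketch of an alternative matcher (not part of this change): isContainsDisk compares by
// disk_name only, so renaming a disk is seen as "remove old + add new". Once disk_id is
// known it could be preferred; the helper below is hypothetical.
func isSameDisk(a, b map[string]interface{}) bool {
	if idA, okA := a["disk_id"].(int); okA && idA != 0 {
		if idB, okB := b["disk_id"].(int); okB && idA == idB {
			return true
		}
	}
	return a["disk_name"].(string) == b["disk_name"].(string)
}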
|
||||
func resourceComputeDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
@@ -318,8 +604,8 @@ func resourceComputeDelete(ctx context.Context, d *schema.ResourceData, m interf
|
||||
|
||||
params := &url.Values{}
|
||||
params.Add("computeId", d.Id())
|
||||
params.Add("permanently", "1")
|
||||
params.Add("detachDisks", "1")
|
||||
params.Add("permanently", strconv.FormatBool(d.Get("permanently").(bool)))
|
||||
params.Add("detachDisks", strconv.FormatBool(d.Get("detach_disks").(bool)))
|
||||
|
||||
if _, err := c.DecortAPICall(ctx, "POST", ComputeDeleteAPI, params); err != nil {
|
||||
return diag.FromErr(err)
|
||||
@@ -328,6 +614,252 @@ func resourceComputeDelete(ctx context.Context, d *schema.ResourceData, m interf
|
||||
return nil
|
||||
}
|
||||
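// Assumption for context: resourceComputeDelete above now reads "permanently" and
// "detach_disks" from state, so ResourceComputeSchemaMake below presumably declares them
// roughly as follows (defaults here are guesses, not taken from this diff):
//
//   "permanently": {
//       Type:        schema.TypeBool,
//       Optional:    true,
//       Default:     false,
//       Description: "If true, delete the compute permanently instead of moving it to the recycle bin.",
//   },
//   "detach_disks": {
//       Type:        schema.TypeBool,
//       Optional:    true,
//       Default:     true,
//       Description: "If true, detach data disks before the compute is deleted.",
//   },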
|
||||
func ResourceComputeSchemaMake() map[string]*schema.Schema {
|
||||
rets := map[string]*schema.Schema{
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Name of this compute. Compute names are case sensitive and must be unique in the resource group.",
|
||||
},
|
||||
|
||||
"rg_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ValidateFunc: validation.IntAtLeast(1),
|
||||
Description: "ID of the resource group where this compute should be deployed.",
|
||||
},
|
||||
|
||||
"driver": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
StateFunc: statefuncs.StateFuncToUpper,
|
||||
ValidateFunc: validation.StringInSlice([]string{"KVM_X86", "KVM_PPC"}, false), // observe case while validating
|
||||
Description: "Hardware architecture of this compute instance.",
|
||||
},
|
||||
|
||||
"cpu": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ValidateFunc: validation.IntBetween(1, constants.MaxCpusPerCompute),
|
||||
Description: "Number of CPUs to allocate to this compute instance.",
|
||||
},
|
||||
|
||||
"ram": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ValidateFunc: validation.IntAtLeast(constants.MinRamPerCompute),
|
||||
Description: "Amount of RAM in MB to allocate to this compute instance.",
|
||||
},
|
||||
|
||||
"image_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "ID of the OS image to base this compute instance on.",
|
||||
},
|
||||
|
||||
"boot_disk_size": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Description: "This compute instance boot disk size in GB. Make sure it is large enough to accomodate selected OS image.",
|
||||
},
|
||||
|
||||
"disks": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"disk_name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Name for disk",
|
||||
},
|
||||
"size": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "Disk size in GiB",
|
||||
},
|
||||
"disk_type": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
ValidateFunc: validation.StringInSlice([]string{"B", "D"}, false),
|
||||
Description: "The type of disk in terms of its role in compute: 'B=Boot, D=Data'",
|
||||
},
|
||||
"sep_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Description: "Storage endpoint provider ID; by default the same with boot disk",
|
||||
},
|
||||
"pool": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Description: "Pool name; by default will be chosen automatically",
|
||||
},
|
||||
"desc": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Description: "Optional description",
|
||||
},
|
||||
"image_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Optional: true,
|
||||
Description: "Specify image id for create disk from template",
|
||||
},
|
||||
"disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "Disk ID",
|
||||
},
|
||||
"permanently": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: false,
|
||||
Description: "Disk deletion status",
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"sep_id": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
ForceNew: true,
|
||||
Description: "ID of SEP to create bootDisk on. Uses image's sepId if not set.",
|
||||
},
|
||||
|
||||
"pool": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
ForceNew: true,
|
||||
Description: "Pool to use if sepId is set, can be also empty if needed to be chosen by system.",
|
||||
},
|
||||
|
||||
"extra_disks": {
|
||||
Type: schema.TypeSet,
|
||||
Optional: true,
|
||||
MaxItems: constants.MaxExtraDisksPerCompute,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
},
|
||||
Description: "Optional list of IDs of extra disks to attach to this compute. You may specify several extra disks.",
|
||||
},
|
||||
|
||||
"network": {
|
||||
Type: schema.TypeSet,
|
||||
Optional: true,
|
||||
MinItems: 1,
|
||||
MaxItems: constants.MaxNetworksPerCompute,
|
||||
Elem: &schema.Resource{
|
||||
Schema: networkSubresourceSchemaMake(),
|
||||
},
|
||||
Description: "Optional network connection(s) for this compute. You may specify several network blocks, one for each connection.",
|
||||
},
|
||||
|
||||
/*
|
||||
"ssh_keys": {
|
||||
Type: schema.TypeList,
|
||||
Optional: true,
|
||||
MaxItems: MaxSshKeysPerCompute,
|
||||
Elem: &schema.Resource{
|
||||
Schema: sshSubresourceSchemaMake(),
|
||||
},
|
||||
Description: "SSH keys to authorize on this compute instance.",
|
||||
},
|
||||
*/
|
||||
|
||||
"description": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Description: "Optional text description of this compute instance.",
|
||||
},
|
||||
|
||||
"cloud_init": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Default: "applied",
|
||||
DiffSuppressFunc: cloudInitDiffSupperss,
|
||||
Description: "Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases.",
|
||||
},
|
||||
|
||||
"enabled": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "If true - enable compute, else - disable",
|
||||
},
|
||||
|
||||
// The rest are Compute properties, which are "computed" once it is created
|
||||
"rg_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the resource group where this compute instance is located.",
|
||||
},
|
||||
|
||||
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the account this compute instance belongs to.",
|
||||
},
|
||||
|
||||
"account_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the account this compute instance belongs to.",
|
||||
},
|
||||
|
||||
"boot_disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "This compute instance boot disk ID.",
|
||||
},
|
||||
|
||||
"os_users": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: osUsersSubresourceSchemaMake(),
|
||||
},
|
||||
Description: "Guest OS users provisioned on this compute instance.",
|
||||
},
|
||||
|
||||
"started": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Description: "Is compute started.",
|
||||
},
|
||||
"detach_disks": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: true,
|
||||
},
|
||||
"permanently": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: true,
|
||||
},
|
||||
"is": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Description: "system name",
|
||||
},
|
||||
"ipa_type": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Description: "compute purpose",
|
||||
},
|
||||
}
|
||||
return rets
|
||||
}
|
||||
|
||||
func ResourceCompute() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
@@ -338,169 +870,17 @@ func ResourceCompute() *schema.Resource {
|
||||
DeleteContext: resourceComputeDelete,
|
||||
|
||||
Importer: &schema.ResourceImporter{
|
||||
State: schema.ImportStatePassthrough,
|
||||
StateContext: schema.ImportStatePassthroughContext,
|
||||
},
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Create: &constants.Timeout180s,
|
||||
Read: &constants.Timeout30s,
|
||||
Update: &constants.Timeout180s,
|
||||
Delete: &constants.Timeout60s,
|
||||
Default: &constants.Timeout60s,
|
||||
Create: &constants.Timeout600s,
|
||||
Read: &constants.Timeout300s,
|
||||
Update: &constants.Timeout300s,
|
||||
Delete: &constants.Timeout300s,
|
||||
Default: &constants.Timeout300s,
|
||||
},
|
||||
|
||||
Schema: map[string]*schema.Schema{
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Name of this compute. Compute names are case sensitive and must be unique in the resource group.",
|
||||
},
|
||||
|
||||
"rg_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ValidateFunc: validation.IntAtLeast(1),
|
||||
Description: "ID of the resource group where this compute should be deployed.",
|
||||
},
|
||||
|
||||
"driver": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
StateFunc: statefuncs.StateFuncToUpper,
|
||||
ValidateFunc: validation.StringInSlice([]string{"KVM_X86", "KVM_PPC"}, false), // observe case while validating
|
||||
Description: "Hardware architecture of this compute instance.",
|
||||
},
|
||||
|
||||
"cpu": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ValidateFunc: validation.IntBetween(1, constants.MaxCpusPerCompute),
|
||||
Description: "Number of CPUs to allocate to this compute instance.",
|
||||
},
|
||||
|
||||
"ram": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ValidateFunc: validation.IntAtLeast(constants.MinRamPerCompute),
|
||||
Description: "Amount of RAM in MB to allocate to this compute instance.",
|
||||
},
|
||||
|
||||
"image_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
ForceNew: true,
|
||||
Description: "ID of the OS image to base this compute instance on.",
|
||||
},
|
||||
|
||||
"boot_disk_size": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "This compute instance boot disk size in GB. Make sure it is large enough to accomodate selected OS image.",
|
||||
},
|
||||
|
||||
"sep_id": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
ForceNew: true,
|
||||
Description: "ID of SEP to create bootDisk on. Uses image's sepId if not set.",
|
||||
},
|
||||
|
||||
"pool": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
ForceNew: true,
|
||||
Description: "Pool to use if sepId is set, can be also empty if needed to be chosen by system.",
|
||||
},
|
||||
|
||||
"extra_disks": {
|
||||
Type: schema.TypeSet,
|
||||
Optional: true,
|
||||
MaxItems: constants.MaxExtraDisksPerCompute,
|
||||
Elem: &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
},
|
||||
Description: "Optional list of IDs of extra disks to attach to this compute. You may specify several extra disks.",
|
||||
},
|
||||
|
||||
"network": {
|
||||
Type: schema.TypeSet,
|
||||
Optional: true,
|
||||
MaxItems: constants.MaxNetworksPerCompute,
|
||||
Elem: &schema.Resource{
|
||||
Schema: networkSubresourceSchemaMake(),
|
||||
},
|
||||
Description: "Optional network connection(s) for this compute. You may specify several network blocks, one for each connection.",
|
||||
},
|
||||
|
||||
/*
|
||||
"ssh_keys": {
|
||||
Type: schema.TypeList,
|
||||
Optional: true,
|
||||
MaxItems: MaxSshKeysPerCompute,
|
||||
Elem: &schema.Resource{
|
||||
Schema: sshSubresourceSchemaMake(),
|
||||
},
|
||||
Description: "SSH keys to authorize on this compute instance.",
|
||||
},
|
||||
*/
|
||||
|
||||
"description": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Description: "Optional text description of this compute instance.",
|
||||
},
|
||||
|
||||
"cloud_init": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Default: "applied",
|
||||
DiffSuppressFunc: cloudInitDiffSupperss,
|
||||
Description: "Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases.",
|
||||
},
|
||||
|
||||
// The rest are Compute properties, which are "computed" once it is created
|
||||
"rg_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the resource group where this compute instance is located.",
|
||||
},
|
||||
|
||||
"account_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "ID of the account this compute instance belongs to.",
|
||||
},
|
||||
|
||||
"account_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
Description: "Name of the account this compute instance belongs to.",
|
||||
},
|
||||
|
||||
"boot_disk_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
Description: "This compute instance boot disk ID.",
|
||||
},
|
||||
|
||||
"os_users": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: osUsersSubresourceSchemaMake(),
|
||||
},
|
||||
Description: "Guest OS users provisioned on this compute instance.",
|
||||
},
|
||||
|
||||
"started": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: true,
|
||||
Description: "Is compute started.",
|
||||
},
|
||||
},
|
||||
Schema: ResourceComputeSchemaMake(),
|
||||
}
|
||||
}
|
||||
|
||||
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
Kasim Baybikov, <kmbaybikov@basistech.ru>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
@@ -91,16 +92,33 @@ func utilityComputeExtraDisksConfigure(ctx context.Context, d *schema.ResourceDa
|
||||
|
||||
detach_set := old_set.(*schema.Set).Difference(new_set.(*schema.Set))
|
||||
log.Debugf("utilityComputeExtraDisksConfigure: detach set has %d items for Compute ID %s", detach_set.Len(), d.Id())
|
||||
for _, diskId := range detach_set.List() {
|
||||
|
||||
if detach_set.Len() > 0 {
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("computeId", d.Id())
|
||||
urlValues.Add("diskId", fmt.Sprintf("%d", diskId.(int)))
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeDiskDetachAPI, urlValues)
|
||||
urlValues.Add("force", "false")
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeStopAPI, urlValues)
|
||||
if err != nil {
|
||||
// failed to detach disk - there will be partial resource update
|
||||
log.Errorf("utilityComputeExtraDisksConfigure: failed to detach disk ID %d from Compute ID %s: %s", diskId.(int), d.Id(), err)
|
||||
apiErrCount++
|
||||
lastSavedError = err
|
||||
return err
|
||||
}
|
||||
for _, diskId := range detach_set.List() {
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("computeId", d.Id())
|
||||
urlValues.Add("diskId", fmt.Sprintf("%d", diskId.(int)))
|
||||
_, err := c.DecortAPICall(ctx, "POST", ComputeDiskDetachAPI, urlValues)
|
||||
if err != nil {
|
||||
// failed to detach disk - there will be partial resource update
|
||||
log.Errorf("utilityComputeExtraDisksConfigure: failed to detach disk ID %d from Compute ID %s: %s", diskId.(int), d.Id(), err)
|
||||
apiErrCount++
|
||||
lastSavedError = err
|
||||
}
|
||||
}
|
||||
urlValues = &url.Values{}
|
||||
urlValues.Add("computeId", d.Id())
|
||||
urlValues.Add("altBootId", "0")
|
||||
_, err = c.DecortAPICall(ctx, "POST", ComputeStartAPI, urlValues)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
@@ -128,7 +146,7 @@ func utilityComputeExtraDisksConfigure(ctx context.Context, d *schema.ResourceDa
|
||||
return nil
|
||||
}
|
||||
|
||||
func utilityComputeNetworksConfigure(ctx context.Context, d *schema.ResourceData, m interface{}, do_delta bool) error {
|
||||
func utilityComputeNetworksConfigure(ctx context.Context, d *schema.ResourceData, m interface{}, do_delta bool, skip_zero bool) error {
|
||||
// "d" is filled with data according to computeResource schema, so extra networks config is retrieved via "network" key
|
||||
// If do_delta is true, this function will identify changes between new and existing specs for network and try to
|
||||
// update compute configuration accordingly
|
||||
@@ -147,7 +165,10 @@ func utilityComputeNetworksConfigure(ctx context.Context, d *schema.ResourceData
|
||||
return nil
|
||||
}
|
||||
|
||||
for _, runner := range new_set.(*schema.Set).List() {
|
||||
for i, runner := range new_set.(*schema.Set).List() {
|
||||
if i == 0 && skip_zero {
|
||||
continue
|
||||
}
|
||||
urlValues := &url.Values{}
|
||||
net_data := runner.(map[string]interface{})
|
||||
urlValues.Add("computeId", d.Id())
|
||||
@@ -267,12 +288,12 @@ func utilityComputeCheckPresence(ctx context.Context, d *schema.ResourceData, m
|
||||
// and RG ID
|
||||
computeName, argSet := d.GetOk("name")
|
||||
if !argSet {
|
||||
return "", fmt.Errorf("Cannot locate compute instance if name is empty and no compute ID specified")
|
||||
return "", fmt.Errorf("cannot locate compute instance if name is empty and no compute ID specified")
|
||||
}
|
||||
|
||||
rgId, argSet := d.GetOk("rg_id")
|
||||
if !argSet {
|
||||
return "", fmt.Errorf("Cannot locate compute by name %s if no resource group ID is set", computeName.(string))
|
||||
return "", fmt.Errorf("cannot locate compute by name %s if no resource group ID is set", computeName.(string))
|
||||
}
|
||||
|
||||
urlValues.Add("rgId", fmt.Sprintf("%d", rgId))
|
||||
|
||||
57
internal/service/cloudapi/lb/api.go
Normal file
@@ -0,0 +1,57 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
const lbListAPI = "/restmachine/cloudapi/lb/list"
|
||||
const lbListDeletedAPI = "/restmachine/cloudapi/lb/listDeleted"
|
||||
const lbGetAPI = "/restmachine/cloudapi/lb/get"
|
||||
const lbCreateAPI = "/restmachine/cloudapi/lb/create"
|
||||
const lbDeleteAPI = "/restmachine/cloudapi/lb/delete"
|
||||
const lbDisableAPI = "/restmachine/cloudapi/lb/disable"
|
||||
const lbEnableAPI = "/restmachine/cloudapi/lb/enable"
|
||||
const lbUpdateAPI = "/restmachine/cloudapi/lb/update"
|
||||
const lbStartAPI = "/restmachine/cloudapi/lb/start"
|
||||
const lbStopAPI = "/restmachine/cloudapi/lb/stop"
|
||||
const lbRestartAPI = "/restmachine/cloudapi/lb/restart"
|
||||
const lbRestoreAPI = "/restmachine/cloudapi/lb/restore"
|
||||
const lbConfigResetAPI = "/restmachine/cloudapi/lb/configReset"
|
||||
const lbBackendCreateAPI = "/restmachine/cloudapi/lb/backendCreate"
|
||||
const lbBackendDeleteAPI = "/restmachine/cloudapi/lb/backendDelete"
|
||||
const lbBackendUpdateAPI = "/restmachine/cloudapi/lb/backendUpdate"
|
||||
const lbBackendServerAddAPI = "/restmachine/cloudapi/lb/backendServerAdd"
|
||||
const lbBackendServerDeleteAPI = "/restmachine/cloudapi/lb/backendServerDelete"
|
||||
const lbBackendServerUpdateAPI = "/restmachine/cloudapi/lb/backendServerUpdate"
|
||||
const lbFrontendCreateAPI = "/restmachine/cloudapi/lb/frontendCreate"
|
||||
const lbFrontendDeleteAPI = "/restmachine/cloudapi/lb/frontendDelete"
|
||||
const lbFrontendBindAPI = "/restmachine/cloudapi/lb/frontendBind"
|
||||
const lbFrontendBindDeleteAPI = "/restmachine/cloudapi/lb/frontendBindDelete"
|
||||
const lbFrontendBindUpdateAPI = "/restmachine/cloudapi/lb/frontendBindingUpdate"
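These endpoint constants are consumed by the lb data sources and the lb resource further below via controller.DecortAPICall. A minimal call sketch, assuming the usual context, net/url, strconv and controller imports; the helper name is illustrative:

// exampleGetLB is illustrative only: it mirrors how the CRUD code below builds
// url.Values and POSTs them to one of the endpoints declared above.
func exampleGetLB(ctx context.Context, c *controller.ControllerCfg, lbId int) (string, error) {
	urlValues := &url.Values{}
	urlValues.Add("lbId", strconv.Itoa(lbId))
	return c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
}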
|
||||
96
internal/service/cloudapi/lb/data_source_lb.go
Normal file
@@ -0,0 +1,96 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"strconv"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
)
|
||||
|
||||
func flattenLB(d *schema.ResourceData, lb *LoadBalancer) {
|
||||
d.Set("ha_mode", lb.HAMode)
|
||||
d.Set("backends", flattenLBBackends(lb.Backends))
|
||||
d.Set("created_by", lb.CreatedBy)
|
||||
d.Set("created_time", lb.CreatedTime)
|
||||
d.Set("deleted_by", lb.DeletedBy)
|
||||
d.Set("deleted_time", lb.DeletedTime)
|
||||
d.Set("desc", lb.Description)
|
||||
d.Set("dp_api_user", lb.DPAPIUser)
|
||||
d.Set("extnet_id", lb.ExtnetId)
|
||||
d.Set("frontends", flattenFrontends(lb.Frontends))
|
||||
d.Set("gid", lb.GID)
|
||||
d.Set("guid", lb.GUID)
|
||||
d.Set("image_id", lb.ImageId)
|
||||
d.Set("milestones", lb.Milestones)
|
||||
d.Set("name", lb.Name)
|
||||
d.Set("primary_node", flattenNode(lb.PrimaryNode))
|
||||
d.Set("rg_id", lb.RGID)
|
||||
d.Set("rg_name", lb.RGName)
|
||||
d.Set("secondary_node", flattenNode(lb.SecondaryNode))
|
||||
d.Set("status", lb.Status)
|
||||
d.Set("tech_status", lb.TechStatus)
|
||||
d.Set("updated_by", lb.UpdatedBy)
|
||||
d.Set("updated_time", lb.UpdatedTime)
|
||||
d.Set("vins_id", lb.VinsId)
|
||||
}
|
||||
|
||||
func dataSourceLBRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
lb, err := utilityLBCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.SetId(strconv.FormatUint(lb.ID, 10))
|
||||
|
||||
flattenLB(d, lb)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func DataSourceLB() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceLBRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dsLBSchemaMake(),
|
||||
}
|
||||
}
|
||||
103
internal/service/cloudapi/lb/data_source_lb_list.go
Normal file
@@ -0,0 +1,103 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
)
|
||||
|
||||
func flattenLBList(lbl LBList) []map[string]interface{} {
|
||||
res := make([]map[string]interface{}, 0, len(lbl))
|
||||
for _, lb := range lbl {
|
||||
temp := map[string]interface{}{
|
||||
"ha_mode": lb.HAMode,
|
||||
"backends": flattenLBBackends(lb.Backends),
|
||||
"created_by": lb.CreatedBy,
|
||||
"created_time": lb.CreatedTime,
|
||||
"deleted_by": lb.DeletedBy,
|
||||
"deleted_time": lb.DeletedTime,
|
||||
"desc": lb.Description,
|
||||
"dp_api_user": lb.DPAPIUser,
|
||||
"dp_api_password": lb.DPAPIPassword,
|
||||
"extnet_id": lb.ExtnetId,
|
||||
"frontends": flattenFrontends(lb.Frontends),
|
||||
"gid": lb.GID,
|
||||
"guid": lb.GUID,
|
||||
"image_id": lb.ImageId,
|
||||
"milestones": lb.Milestones,
|
||||
"name": lb.Name,
|
||||
"primary_node": flattenNode(lb.PrimaryNode),
|
||||
"rg_id": lb.RGID,
|
||||
"rg_name": lb.RGName,
|
||||
"secondary_node": flattenNode(lb.SecondaryNode),
|
||||
"status": lb.Status,
|
||||
"tech_status": lb.TechStatus,
|
||||
"updated_by": lb.UpdatedBy,
|
||||
"updated_time": lb.UpdatedTime,
|
||||
"vins_id": lb.VinsId,
|
||||
}
|
||||
res = append(res, temp)
|
||||
}
|
||||
return res
|
||||
}
|
||||
|
||||
func dataSourceLBListRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
lbList, err := utilityLBListCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
id := uuid.New()
|
||||
d.SetId(id.String())
|
||||
d.Set("items", flattenLBList(lbList))
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func DataSourceLBList() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceLBListRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dsLBListSchemaMake(),
|
||||
}
|
||||
}
|
||||
68
internal/service/cloudapi/lb/data_source_lb_list_deleted.go
Normal file
@@ -0,0 +1,68 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
)
|
||||
|
||||
func dataSourceLBListDeletedRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
lbList, err := utilityLBListDeletedCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
id := uuid.New()
|
||||
d.SetId(id.String())
|
||||
d.Set("items", flattenLBList(lbList))
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func DataSourceLBListDeleted() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
ReadContext: dataSourceLBListDeletedRead,
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Read: &constants.Timeout30s,
|
||||
Default: &constants.Timeout60s,
|
||||
},
|
||||
|
||||
Schema: dsLBListDeletedSchemaMake(),
|
||||
}
|
||||
}
|
||||
128
internal/service/cloudapi/lb/flattens.go
Normal file
@@ -0,0 +1,128 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
func flattenNode(node Node) []map[string]interface{} {
|
||||
temp := make([]map[string]interface{}, 0)
|
||||
n := map[string]interface{}{
|
||||
"backend_ip": node.BackendIp,
|
||||
"compute_id": node.ComputeId,
|
||||
"frontend_ip": node.FrontendIp,
|
||||
"guid": node.GUID,
|
||||
"mgmt_ip": node.MGMTIp,
|
||||
"network_id": node.NetworkId,
|
||||
}
|
||||
|
||||
temp = append(temp, n)
|
||||
|
||||
return temp
|
||||
}
|
||||
|
||||
func flattendBindings(bs []Binding) []map[string]interface{} {
|
||||
temp := make([]map[string]interface{}, 0, len(bs))
|
||||
for _, b := range bs {
|
||||
t := map[string]interface{}{
|
||||
"address": b.Address,
|
||||
"guid": b.GUID,
|
||||
"name": b.Name,
|
||||
"port": b.Port,
|
||||
}
|
||||
temp = append(temp, t)
|
||||
}
|
||||
return temp
|
||||
}
|
||||
|
||||
func flattenFrontends(fs []Frontend) []map[string]interface{} {
|
||||
temp := make([]map[string]interface{}, 0, len(fs))
|
||||
for _, f := range fs {
|
||||
t := map[string]interface{}{
|
||||
"backend": f.Backend,
|
||||
"bindings": flattendBindings(f.Bindings),
|
||||
"guid": f.GUID,
|
||||
"name": f.Name,
|
||||
}
|
||||
temp = append(temp, t)
|
||||
}
|
||||
|
||||
return temp
|
||||
}
|
||||
|
||||
func flattenServers(servers []Server) []map[string]interface{} {
|
||||
temp := make([]map[string]interface{}, 0, len(servers))
|
||||
for _, server := range servers {
|
||||
t := map[string]interface{}{
|
||||
"address": server.Address,
|
||||
"check": server.Check,
|
||||
"guid": server.GUID,
|
||||
"name": server.Name,
|
||||
"port": server.Port,
|
||||
"server_settings": flattenServerSettings(server.ServerSettings),
|
||||
}
|
||||
|
||||
temp = append(temp, t)
|
||||
}
|
||||
return temp
|
||||
}
|
||||
|
||||
func flattenServerSettings(defSet ServerSettings) []map[string]interface{} {
|
||||
temp := map[string]interface{}{
|
||||
"downinter": defSet.DownInter,
|
||||
"fall": defSet.Fall,
|
||||
"guid": defSet.GUID,
|
||||
"inter": defSet.Inter,
|
||||
"maxconn": defSet.MaxConn,
|
||||
"maxqueue": defSet.MaxQueue,
|
||||
"rise": defSet.Rise,
|
||||
"slowstart": defSet.SlowStart,
|
||||
"weight": defSet.Weight,
|
||||
}
|
||||
|
||||
res := make([]map[string]interface{}, 0)
|
||||
res = append(res, temp)
|
||||
return res
|
||||
}
|
||||
|
||||
func flattenLBBackends(backends []Backend) []map[string]interface{} {
|
||||
temp := make([]map[string]interface{}, 0, len(backends))
|
||||
for _, item := range backends {
|
||||
t := map[string]interface{}{
|
||||
"algorithm": item.Algorithm,
|
||||
"guid": item.GUID,
|
||||
"name": item.Name,
|
||||
"server_default_settings": flattenServerSettings(item.ServerDefaultSettings),
|
||||
"servers": flattenServers(item.Servers),
|
||||
}
|
||||
|
||||
temp = append(temp, t)
|
||||
}
|
||||
return temp
|
||||
}
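As a reading aid (not part of the diff), a minimal sketch of how these flatteners nest; the field values are made up:

// exampleFlattenBackend is illustrative only: one Backend with one Server
// flattens into a single map whose "servers" and "server_default_settings"
// keys hold the nested lists built by flattenServers and flattenServerSettings.
func exampleFlattenBackend() []map[string]interface{} {
	b := Backend{
		Algorithm: "roundrobin",
		GUID:      "backend-guid",
		Name:      "http-backend",
		Servers:   []Server{{Address: "10.0.0.5", Name: "srv1", Port: 8080}},
	}
	// result[0]["servers"].([]map[string]interface{})[0]["address"] == "10.0.0.5"
	return flattenLBBackends([]Backend{b})
}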
|
||||
101
internal/service/cloudapi/lb/lb_data_subresource.go
Normal file
@@ -0,0 +1,101 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
|
||||
func dsLBSchemaMake() map[string]*schema.Schema {
|
||||
sch := createLBSchema()
|
||||
sch["lb_id"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
}
|
||||
return sch
|
||||
}
|
||||
|
||||
func dsLBListDeletedSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"page": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Default: 0,
|
||||
},
|
||||
"size": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Default: 0,
|
||||
},
|
||||
"items": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: dsLBItemSchemaMake(),
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func dsLBListSchemaMake() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"includedeleted": {
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
Default: false,
|
||||
},
|
||||
"page": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Default: 0,
|
||||
},
|
||||
"size": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Default: 0,
|
||||
},
|
||||
"items": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: dsLBItemSchemaMake(),
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
func dsLBItemSchemaMake() map[string]*schema.Schema {
|
||||
sch := createLBSchema()
|
||||
sch["dp_api_password"] = &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
}
|
||||
return sch
|
||||
}
|
||||
91
internal/service/cloudapi/lb/lb_resource_subresource.go
Normal file
@@ -0,0 +1,91 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
|
||||
func lbResourceSchemaMake() map[string]*schema.Schema {
|
||||
sch := createLBSchema()
|
||||
sch["rg_id"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
}
|
||||
sch["name"] = &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
}
|
||||
|
||||
sch["extnet_id"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
}
|
||||
|
||||
sch["vins_id"] = &schema.Schema{
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
}
|
||||
sch["start"] = &schema.Schema{
|
||||
Type: schema.TypeBool,
|
||||
Required: true,
|
||||
}
|
||||
sch["desc"] = &schema.Schema{
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
}
|
||||
|
||||
sch["enable"] = &schema.Schema{
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
}
|
||||
|
||||
sch["restart"] = &schema.Schema{
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
}
|
||||
|
||||
sch["restore"] = &schema.Schema{
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
}
|
||||
|
||||
sch["config_reset"] = &schema.Schema{
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
}
|
||||
|
||||
sch["permanently"] = &schema.Schema{
|
||||
Type: schema.TypeBool,
|
||||
Optional: true,
|
||||
}
|
||||
return sch
|
||||
}
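A plausible wiring of this schema into a resource, modeled on DataSourceLB above and on the CRUD functions in resource_lb.go below; this constructor is only a sketch and is not taken from the diff:

// exampleResourceLB is hypothetical and only shows where lbResourceSchemaMake
// plugs in; the actual resource constructor may differ.
func exampleResourceLB() *schema.Resource {
	return &schema.Resource{
		SchemaVersion: 1,
		CreateContext: resourceLBCreate,
		ReadContext:   resourceLBRead,
		UpdateContext: resourceLBEdit,
		DeleteContext: resourceLBDelete,
		Schema:        lbResourceSchemaMake(),
	}
}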
|
||||
367
internal/service/cloudapi/lb/lb_schema.go
Normal file
@@ -0,0 +1,367 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
|
||||
func createLBSchema() map[string]*schema.Schema {
|
||||
return map[string]*schema.Schema{
|
||||
"ha_mode": {
|
||||
Type: schema.TypeBool,
|
||||
Computed: true,
|
||||
},
|
||||
"backends": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"algorithm": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"server_default_settings": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"downinter": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"fall": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"inter": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"maxconn": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"maxqueue": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"rise": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"slowstart": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"weight": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"servers": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"address": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"check": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"port": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"server_settings": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"downinter": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"fall": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"inter": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"maxconn": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"maxqueue": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"rise": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"slowstart": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"weight": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"created_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"created_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"deleted_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"deleted_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"desc": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"dp_api_user": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"extnet_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"frontends": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"backend": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"bindings": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"address": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"port": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"gid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"lb_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"image_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"milestones": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"primary_node": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"backend_ip": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"compute_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"frontend_ip": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"mgmt_ip": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"network_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"rg_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"rg_name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"secondary_node": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"backend_ip": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"compute_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"frontend_ip": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"mgmt_ip": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"network_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"tech_status": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"updated_by": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"updated_time": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
"vins_id": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
}
|
||||
}
|
||||
120
internal/service/cloudapi/lb/models.go
Normal file
@@ -0,0 +1,120 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
type LoadBalancer struct {
|
||||
HAMode bool `json:"HAmode"`
|
||||
ACL interface{} `json:"acl"`
|
||||
Backends []Backend `json:"backends"`
|
||||
CreatedBy string `json:"createdBy"`
|
||||
CreatedTime uint64 `json:"createdTime"`
|
||||
DeletedBy string `json:"deletedBy"`
|
||||
DeletedTime uint64 `json:"deletedTime"`
|
||||
Description string `json:"desc"`
|
||||
DPAPIUser string `json:"dpApiUser"`
|
||||
ExtnetId uint64 `json:"extnetId"`
|
||||
Frontends []Frontend `json:"frontends"`
|
||||
GID uint64 `json:"gid"`
|
||||
GUID uint64 `json:"guid"`
|
||||
ID uint64 `json:"id"`
|
||||
ImageId uint64 `json:"imageId"`
|
||||
Milestones uint64 `json:"milestones"`
|
||||
Name string `json:"name"`
|
||||
PrimaryNode Node `json:"primaryNode"`
|
||||
RGID uint64 `json:"rgId"`
|
||||
RGName string `json:"rgName"`
|
||||
SecondaryNode Node `json:"secondaryNode"`
|
||||
Status string `json:"status"`
|
||||
TechStatus string `json:"techStatus"`
|
||||
UpdatedBy string `json:"updatedBy"`
|
||||
UpdatedTime uint64 `json:"updatedTime"`
|
||||
VinsId uint64 `json:"vinsId"`
|
||||
}
|
||||
|
||||
type LoadBalancerDetailed struct {
|
||||
DPAPIPassword string `json:"dpApiPassword"`
|
||||
LoadBalancer
|
||||
}
|
||||
|
||||
type Backend struct {
|
||||
Algorithm string `json:"algorithm"`
|
||||
GUID string `json:"guid"`
|
||||
Name string `json:"name"`
|
||||
ServerDefaultSettings ServerSettings `json:"serverDefaultSettings"`
|
||||
Servers []Server `json:"servers"`
|
||||
}
|
||||
|
||||
type LBList []LoadBalancerDetailed
|
||||
|
||||
type ServerSettings struct {
|
||||
Inter uint64 `json:"inter"`
|
||||
GUID string `json:"guid"`
|
||||
DownInter uint64 `json:"downinter"`
|
||||
Rise uint `json:"rise"`
|
||||
Fall uint `json:"fall"`
|
||||
SlowStart uint64 `json:"slowstart"`
|
||||
MaxConn uint `json:"maxconn"`
|
||||
MaxQueue uint `json:"maxqueue"`
|
||||
Weight uint `json:"weight"`
|
||||
}
|
||||
|
||||
type Server struct {
|
||||
Address string `json:"address"`
|
||||
Check string `json:"check"`
|
||||
GUID string `json:"guid"`
|
||||
Name string `json:"name"`
|
||||
Port uint `json:"port"`
|
||||
ServerSettings ServerSettings `json:"serverSettings"`
|
||||
}
|
||||
|
||||
type Node struct {
|
||||
BackendIp string `json:"backendIp"`
|
||||
ComputeId uint64 `json:"computeId"`
|
||||
FrontendIp string `json:"frontendIp"`
|
||||
GUID string `json:"guid"`
|
||||
MGMTIp string `json:"mgmtIp"`
|
||||
NetworkId uint64 `json:"networkId"`
|
||||
}
|
||||
|
||||
type Frontend struct {
|
||||
Backend string `json:"backend"`
|
||||
Bindings []Binding `json:"bindings"`
|
||||
GUID string `json:"guid"`
|
||||
Name string `json:"name"`
|
||||
}
|
||||
|
||||
type Binding struct {
|
||||
Address string `json:"address"`
|
||||
GUID string `json:"guid"`
|
||||
Name string `json:"name"`
|
||||
Port uint `json:"port"`
|
||||
}
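These structs mirror the JSON payloads returned by the lb/* endpoints above. A minimal decoding sketch, assuming encoding/json; the function name is illustrative and the raw bytes would come from DecortAPICall:

// exampleDecodeLB is illustrative only: it decodes a raw lb/get reply into the
// LoadBalancer model using the json tags declared above.
func exampleDecodeLB(raw []byte) (*LoadBalancer, error) {
	lb := &LoadBalancer{}
	if err := json.Unmarshal(raw, lb); err != nil {
		return nil, err
	}
	return lb, nil
}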
|
||||
281
internal/service/cloudapi/lb/resource_lb.go
Normal file
@@ -0,0 +1,281 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/url"
|
||||
"strconv"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func resourceLBCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBCreate")
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("name", d.Get("name").(string))
|
||||
urlValues.Add("rgId", strconv.Itoa(d.Get("rg_id").(int)))
|
||||
urlValues.Add("extnetId", strconv.Itoa(d.Get("extnet_id").(int)))
|
||||
urlValues.Add("vinsId", strconv.Itoa(d.Get("vins_id").(int)))
|
||||
urlValues.Add("start", strconv.FormatBool((d.Get("start").(bool))))
|
||||
|
||||
if desc, ok := d.GetOk("desc"); ok {
|
||||
urlValues.Add("desc", desc.(string))
|
||||
}
|
||||
|
||||
lbId, err := c.DecortAPICall(ctx, "POST", lbCreateAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.SetId(lbId)
|
||||
d.Set("lb_id", lbId)
|
||||
|
||||
_, err = utilityLBCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
diagnostics := resourceLBRead(ctx, d, m)
|
||||
if diagnostics != nil {
|
||||
return diagnostics
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
|
||||
if enable, ok := d.GetOk("enable"); ok {
|
||||
api := lbDisableAPI
|
||||
if enable.(bool) {
|
||||
api = lbEnableAPI
|
||||
}
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", api, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBRead")
|
||||
|
||||
lb, err := utilityLBCheckPresence(ctx, d, m)
|
||||
if lb == nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.Set("ha_mode", lb.HAMode)
|
||||
d.Set("backends", flattenLBBackends(lb.Backends))
|
||||
d.Set("created_by", lb.CreatedBy)
|
||||
d.Set("created_time", lb.CreatedTime)
|
||||
d.Set("deleted_by", lb.DeletedBy)
|
||||
d.Set("deleted_time", lb.DeletedTime)
|
||||
d.Set("desc", lb.Description)
|
||||
d.Set("dp_api_user", lb.DPAPIUser)
|
||||
d.Set("extnet_id", lb.ExtnetId)
|
||||
d.Set("frontends", flattenFrontends(lb.Frontends))
|
||||
d.Set("gid", lb.GID)
|
||||
d.Set("guid", lb.GUID)
|
||||
d.Set("lb_id", lb.ID)
|
||||
d.Set("image_id", lb.ImageId)
|
||||
d.Set("milestones", lb.Milestones)
|
||||
d.Set("name", lb.Name)
|
||||
d.Set("primary_node", flattenNode(lb.PrimaryNode))
|
||||
d.Set("rg_id", lb.RGID)
|
||||
d.Set("rg_name", lb.RGName)
|
||||
d.Set("secondary_node", flattenNode(lb.SecondaryNode))
|
||||
d.Set("status", lb.Status)
|
||||
d.Set("tech_status", lb.TechStatus)
|
||||
d.Set("updated_by", lb.UpdatedBy)
|
||||
d.Set("updated_time", lb.UpdatedTime)
|
||||
d.Set("vins_id", lb.VinsId)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBDelete")
|
||||
|
||||
lb, err := utilityLBCheckPresence(ctx, d, m)
|
||||
if lb == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
|
||||
if permanently, ok := d.GetOk("permanently"); ok {
|
||||
urlValues.Add("permanently", strconv.FormatBool(permanently.(bool)))
|
||||
}
|
||||
|
||||
_, err = c.DecortAPICall(ctx, "POST", lbDeleteAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
d.SetId("")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBEdit")
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
if d.HasChange("enable") {
|
||||
api := lbDisableAPI
|
||||
enable := d.Get("enable").(bool)
|
||||
if enable {
|
||||
api = lbEnableAPI
|
||||
}
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", api, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
|
||||
if d.HasChange("start") {
|
||||
api := lbStopAPI
|
||||
start := d.Get("start").(bool)
|
||||
if start {
|
||||
api = lbStartAPI
|
||||
}
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", api, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
|
||||
if d.HasChange("desc") {
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
urlValues.Add("desc", d.Get("desc").(string))
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbUpdateAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
|
||||
if d.HasChange("restart") {
|
||||
restart := d.Get("restart").(bool)
|
||||
if restart {
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbRestartAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
}
|
||||
|
||||
if d.HasChange("restore") {
|
||||
restore := d.Get("restore").(bool)
|
||||
if restore {
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbRestoreAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
}
|
||||
|
||||
if d.HasChange("config_reset") {
|
||||
cfgReset := d.Get("config_reset").(bool)
|
||||
if cfgReset {
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbConfigResetAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
urlValues = &url.Values{}
|
||||
}
|
||||
}
|
||||
|
||||
//TODO: move backend and frontend management from their own resources into this one
|
||||
|
||||
return resourceLBRead(ctx, d, m)
|
||||
}
|
||||
|
||||
func ResourceLB() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
CreateContext: resourceLBCreate,
|
||||
ReadContext: resourceLBRead,
|
||||
UpdateContext: resourceLBEdit,
|
||||
DeleteContext: resourceLBDelete,
|
||||
|
||||
Importer: &schema.ResourceImporter{
|
||||
StateContext: schema.ImportStatePassthroughContext,
|
||||
},
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Create: &constants.Timeout600s,
|
||||
Read: &constants.Timeout300s,
|
||||
Update: &constants.Timeout300s,
|
||||
Delete: &constants.Timeout300s,
|
||||
Default: &constants.Timeout300s,
|
||||
},
|
||||
|
||||
Schema: lbResourceSchemaMake(),
|
||||
}
|
||||
}
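For orientation only: constructors like ResourceLB() above are normally registered in the provider's resource map so Terraform can address them by type name. Below is a minimal sketch of that wiring; the map keys (e.g. "decort_lb") and the function name are assumptions for illustration, not taken from this changeset.

```go
// Hypothetical wiring sketch -- the provider's real provider.go may differ.
package provider

import (
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"

	"github.com/rudecs/terraform-provider-decort/internal/service/cloudapi/lb"
)

// newResourcesMap shows how the lb package's constructors could be exposed
// as Terraform resource types; the keys below are illustrative only.
func newResourcesMap() map[string]*schema.Resource {
	return map[string]*schema.Resource{
		"decort_lb":                lb.ResourceLB(),
		"decort_lb_backend":        lb.ResourceLBBackend(),
		"decort_lb_backend_server": lb.ResourceLBBackendServer(),
		"decort_lb_frontend":       lb.ResourceLBFrontend(),
		"decort_lb_frontend_bind":  lb.ResourceLBFrontendBind(),
	}
}
```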
|
||||
373
internal/service/cloudapi/lb/resource_lb_backend.go
Normal file
@@ -0,0 +1,373 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func resourceLBBackendCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBBackendCreate")
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("backendName", d.Get("name").(string))
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
|
||||
if algorithm, ok := d.GetOk("algorithm"); ok {
|
||||
urlValues.Add("algorithm", algorithm.(string))
|
||||
}
|
||||
if inter, ok := d.GetOk("inter"); ok {
|
||||
urlValues.Add("inter", strconv.Itoa(inter.(int)))
|
||||
}
|
||||
if downinter, ok := d.GetOk("downinter"); ok {
|
||||
urlValues.Add("downinter", strconv.Itoa(downinter.(int)))
|
||||
}
|
||||
if rise, ok := d.GetOk("rise"); ok {
|
||||
urlValues.Add("rise", strconv.Itoa(rise.(int)))
|
||||
}
|
||||
if fall, ok := d.GetOk("fall"); ok {
|
||||
urlValues.Add("fall", strconv.Itoa(fall.(int)))
|
||||
}
|
||||
if slowstart, ok := d.GetOk("slowstart"); ok {
|
||||
urlValues.Add("slowstart", strconv.Itoa(slowstart.(int)))
|
||||
}
|
||||
if maxconn, ok := d.GetOk("maxconn"); ok {
|
||||
urlValues.Add("maxconn", strconv.Itoa(maxconn.(int)))
|
||||
}
|
||||
if maxqueue, ok := d.GetOk("maxqueue"); ok {
|
||||
urlValues.Add("maxqueue", strconv.Itoa(maxqueue.(int)))
|
||||
}
|
||||
if weight, ok := d.GetOk("weight"); ok {
|
||||
urlValues.Add("weight", strconv.Itoa(weight.(int)))
|
||||
}
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbBackendCreateAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.SetId(strconv.Itoa(d.Get("lb_id").(int)) + "#" + d.Get("name").(string))
|
||||
|
||||
_, err = utilityLBBackendCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
diagnostics := resourceLBBackendRead(ctx, d, m)
|
||||
if diagnostics != nil {
|
||||
return diagnostics
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBBackendRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBBackendRead")
|
||||
|
||||
b, err := utilityLBBackendCheckPresence(ctx, d, m)
|
||||
if b == nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
lbId, _ := strconv.ParseInt(strings.Split(d.Id(), "#")[0], 10, 32)
|
||||
|
||||
d.Set("lb_id", lbId)
|
||||
d.Set("name", b.Name)
|
||||
d.Set("algorithm", b.Algorithm)
|
||||
d.Set("guid", b.GUID)
|
||||
d.Set("downinter", b.ServerDefaultSettings.DownInter)
|
||||
d.Set("fall", b.ServerDefaultSettings.Fall)
|
||||
d.Set("inter", b.ServerDefaultSettings.Inter)
|
||||
d.Set("maxconn", b.ServerDefaultSettings.MaxConn)
|
||||
d.Set("maxqueue", b.ServerDefaultSettings.MaxQueue)
|
||||
d.Set("rise", b.ServerDefaultSettings.Rise)
|
||||
d.Set("slowstart", b.ServerDefaultSettings.SlowStart)
|
||||
d.Set("weight", b.ServerDefaultSettings.Weight)
|
||||
d.Set("servers", flattenServers(b.Servers))
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBBackendDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBBackendDelete")
|
||||
|
||||
lb, err := utilityLBBackendCheckPresence(ctx, d, m)
|
||||
if lb == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
urlValues.Add("backendName", d.Get("name").(string))
|
||||
|
||||
_, err = c.DecortAPICall(ctx, "POST", lbBackendDeleteAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
d.SetId("")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBBackendEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBBackendEdit")
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
urlValues.Add("backendName", d.Get("name").(string))
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
|
||||
if d.HasChange("algorithm") {
|
||||
urlValues.Add("algorithm", d.Get("algorithm").(string))
|
||||
}
|
||||
if d.HasChange("inter") {
|
||||
urlValues.Add("inter", strconv.Itoa(d.Get("inter").(int)))
|
||||
}
|
||||
if d.HasChange("downinter") {
|
||||
urlValues.Add("downinter", strconv.Itoa(d.Get("downinter").(int)))
|
||||
}
|
||||
if d.HasChange("rise") {
|
||||
urlValues.Add("rise", strconv.Itoa(d.Get("rise").(int)))
|
||||
}
|
||||
if d.HasChange("fall") {
|
||||
urlValues.Add("fall", strconv.Itoa(d.Get("fall").(int)))
|
||||
}
|
||||
if d.HasChange("slowstart") {
|
||||
urlValues.Add("slowstart", strconv.Itoa(d.Get("slowstart").(int)))
|
||||
}
|
||||
if d.HasChange("maxconn") {
|
||||
urlValues.Add("maxconn", strconv.Itoa(d.Get("maxconn").(int)))
|
||||
}
|
||||
if d.HasChange("maxqueue") {
|
||||
urlValues.Add("maxqueue", strconv.Itoa(d.Get("maxqueue").(int)))
|
||||
}
|
||||
if d.HasChange("weight") {
|
||||
urlValues.Add("weight", strconv.Itoa(d.Get("weight").(int)))
|
||||
}
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbBackendUpdateAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
//TODO: move servers here
|
||||
|
||||
return resourceLBBackendRead(ctx, d, m)
|
||||
}
|
||||
|
||||
func ResourceLBBackend() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
CreateContext: resourceLBBackendCreate,
|
||||
ReadContext: resourceLBBackendRead,
|
||||
UpdateContext: resourceLBBackendEdit,
|
||||
DeleteContext: resourceLBBackendDelete,
|
||||
|
||||
Importer: &schema.ResourceImporter{
|
||||
StateContext: schema.ImportStatePassthroughContext,
|
||||
},
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Create: &constants.Timeout600s,
|
||||
Read: &constants.Timeout300s,
|
||||
Update: &constants.Timeout300s,
|
||||
Delete: &constants.Timeout300s,
|
||||
Default: &constants.Timeout300s,
|
||||
},
|
||||
|
||||
Schema: map[string]*schema.Schema{
|
||||
"lb_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "ID of the LB instance to backendCreate",
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Must be unique among all backends of this LB - name of the new backend to create",
|
||||
},
|
||||
"algorithm": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
ValidateFunc: validation.StringInSlice([]string{"roundrobin", "static-rr", "leastconn"}, false),
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"downinter": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"fall": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"inter": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"maxconn": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"maxqueue": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"rise": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"slowstart": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"weight": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"servers": {
|
||||
Type: schema.TypeList,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"address": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"check": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"port": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"server_settings": {
|
||||
Type: schema.TypeList,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"downinter": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"fall": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"inter": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"maxconn": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"maxqueue": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"rise": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"slowstart": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"weight": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
314
internal/service/cloudapi/lb/resource_lb_backend_server.go
Normal file
@@ -0,0 +1,314 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func resourceLBBackendServerCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBBackendServerCreate")
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("backendName", d.Get("backend_name").(string))
|
||||
urlValues.Add("serverName", d.Get("name").(string))
|
||||
urlValues.Add("address", d.Get("address").(string))
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
urlValues.Add("port", strconv.Itoa(d.Get("port").(int)))
|
||||
|
||||
if check, ok := d.GetOk("check"); ok {
|
||||
urlValues.Add("check", check.(string))
|
||||
}
|
||||
|
||||
if inter, ok := d.GetOk("inter"); ok {
|
||||
urlValues.Add("inter", strconv.Itoa(inter.(int)))
|
||||
}
|
||||
if downinter, ok := d.GetOk("downinter"); ok {
|
||||
urlValues.Add("downinter", strconv.Itoa(downinter.(int)))
|
||||
}
|
||||
if rise, ok := d.GetOk("rise"); ok {
|
||||
urlValues.Add("rise", strconv.Itoa(rise.(int)))
|
||||
}
|
||||
if fall, ok := d.GetOk("fall"); ok {
|
||||
urlValues.Add("fall", strconv.Itoa(fall.(int)))
|
||||
}
|
||||
if slowstart, ok := d.GetOk("slowstart"); ok {
|
||||
urlValues.Add("slowstart", strconv.Itoa(slowstart.(int)))
|
||||
}
|
||||
if maxconn, ok := d.GetOk("maxconn"); ok {
|
||||
urlValues.Add("maxconn", strconv.Itoa(maxconn.(int)))
|
||||
}
|
||||
if maxqueue, ok := d.GetOk("maxqueue"); ok {
|
||||
urlValues.Add("maxqueue", strconv.Itoa(maxqueue.(int)))
|
||||
}
|
||||
if weight, ok := d.GetOk("weight"); ok {
|
||||
urlValues.Add("weight", strconv.Itoa(weight.(int)))
|
||||
}
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbBackendServerAddAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.SetId(strconv.Itoa(d.Get("lb_id").(int)) + "#" + d.Get("backend_name").(string) + "#" + d.Get("name").(string))
|
||||
|
||||
_, err = utilityLBBackendServerCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
diagnostics := resourceLBBackendServerRead(ctx, d, m)
|
||||
if diagnostics != nil {
|
||||
return diagnostics
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBBackendServerRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBBackendServerRead")
|
||||
|
||||
s, err := utilityLBBackendServerCheckPresence(ctx, d, m)
|
||||
if s == nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
lbId, _ := strconv.ParseInt(strings.Split(d.Id(), "#")[0], 10, 32)
|
||||
backendName := strings.Split(d.Id(), "#")[1]
|
||||
|
||||
d.Set("lb_id", lbId)
|
||||
d.Set("backend_name", backendName)
|
||||
d.Set("name", s.Name)
|
||||
d.Set("port", s.Port)
|
||||
d.Set("address", s.Address)
|
||||
d.Set("check", s.Check)
|
||||
d.Set("guid", s.GUID)
|
||||
d.Set("downinter", s.ServerSettings.DownInter)
|
||||
d.Set("fall", s.ServerSettings.Fall)
|
||||
d.Set("inter", s.ServerSettings.Inter)
|
||||
d.Set("maxconn", s.ServerSettings.MaxConn)
|
||||
d.Set("maxqueue", s.ServerSettings.MaxQueue)
|
||||
d.Set("rise", s.ServerSettings.Rise)
|
||||
d.Set("slowstart", s.ServerSettings.SlowStart)
|
||||
d.Set("weight", s.ServerSettings.Weight)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBBackendServerDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBBackendServerDelete")
|
||||
|
||||
lb, err := utilityLBBackendServerCheckPresence(ctx, d, m)
|
||||
if lb == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
urlValues.Add("serverName", d.Get("name").(string))
|
||||
urlValues.Add("backendName", d.Get("backend_name").(string))
|
||||
|
||||
_, err = c.DecortAPICall(ctx, "POST", lbBackendServerDeleteAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
d.SetId("")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBBackendServerEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBBackendServerEdit")
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
urlValues.Add("backendName", d.Get("backend_name").(string))
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
urlValues.Add("serverName", d.Get("name").(string))
|
||||
urlValues.Add("address", d.Get("address").(string))
|
||||
urlValues.Add("port", strconv.Itoa(d.Get("port").(int)))
|
||||
|
||||
if d.HasChange("check") {
|
||||
urlValues.Add("check", d.Get("check").(string))
|
||||
}
|
||||
if d.HasChange("inter") {
|
||||
urlValues.Add("inter", strconv.Itoa(d.Get("inter").(int)))
|
||||
}
|
||||
if d.HasChange("downinter") {
|
||||
urlValues.Add("downinter", strconv.Itoa(d.Get("downinter").(int)))
|
||||
}
|
||||
if d.HasChange("rise") {
|
||||
urlValues.Add("rise", strconv.Itoa(d.Get("rise").(int)))
|
||||
}
|
||||
if d.HasChange("fall") {
|
||||
urlValues.Add("fall", strconv.Itoa(d.Get("fall").(int)))
|
||||
}
|
||||
if d.HasChange("slowstart") {
|
||||
urlValues.Add("slowstart", strconv.Itoa(d.Get("slowstart").(int)))
|
||||
}
|
||||
if d.HasChange("maxconn") {
|
||||
urlValues.Add("maxconn", strconv.Itoa(d.Get("maxconn").(int)))
|
||||
}
|
||||
if d.HasChange("maxqueue") {
|
||||
urlValues.Add("maxqueue", strconv.Itoa(d.Get("maxqueue").(int)))
|
||||
}
|
||||
if d.HasChange("weight") {
|
||||
urlValues.Add("weight", strconv.Itoa(d.Get("weight").(int)))
|
||||
}
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbBackendServerUpdateAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
//TODO: move servers here
|
||||
|
||||
return resourceLBBackendServerRead(ctx, d, m)
|
||||
}
|
||||
|
||||
func ResourceLBBackendServer() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
CreateContext: resourceLBBackendServerCreate,
|
||||
ReadContext: resourceLBBackendServerRead,
|
||||
UpdateContext: resourceLBBackendServerEdit,
|
||||
DeleteContext: resourceLBBackendServerDelete,
|
||||
|
||||
Importer: &schema.ResourceImporter{
|
||||
StateContext: schema.ImportStatePassthroughContext,
|
||||
},
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Create: &constants.Timeout600s,
|
||||
Read: &constants.Timeout300s,
|
||||
Update: &constants.Timeout300s,
|
||||
Delete: &constants.Timeout300s,
|
||||
Default: &constants.Timeout300s,
|
||||
},
|
||||
|
||||
Schema: map[string]*schema.Schema{
|
||||
"lb_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "ID of the LB instance to backendCreate",
|
||||
},
|
||||
"backend_name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Must be unique among all backends of this LB - name of the new backend to create",
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Must be unique among all servers defined for this backend - name of the server definition to add.",
|
||||
},
|
||||
"address": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "IP address of the server.",
|
||||
},
|
||||
"port": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "Port number on the server",
|
||||
},
|
||||
"check": {
|
||||
Type: schema.TypeString,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
ValidateFunc: validation.StringInSlice([]string{"disabled", "enabled"}, false),
|
||||
Description: "set to disabled if this server should be used regardless of its state.",
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"downinter": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"fall": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"inter": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"maxconn": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"maxqueue": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"rise": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"slowstart": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
"weight": {
|
||||
Type: schema.TypeInt,
|
||||
Optional: true,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
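A note on the composite IDs used by the sub-resources above: resourceLBBackendServerCreate stores the Terraform ID as "<lbID>#<backendName>#<serverName>", and the read/check-presence helpers split it on "#" to recover each part (which is also what lets ImportStatePassthroughContext work). A standalone sketch of that round trip, with purely hypothetical values:

```go
// Illustrative only: mirrors the strings.Split(d.Id(), "#") parsing used by
// the lb package's read and check-presence functions.
package main

import (
	"fmt"
	"strings"
)

func main() {
	id := "777#web-backend#srv-01" // hypothetical lbID, backend and server names
	parts := strings.Split(id, "#")
	fmt.Printf("lbId=%s backend=%s server=%s\n", parts[0], parts[1], parts[2])
}
```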
|
||||
192
internal/service/cloudapi/lb/resource_lb_frontend.go
Normal file
@@ -0,0 +1,192 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func resourceLBFrontendCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBFrontendCreate")
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("backendName", d.Get("backend_name").(string))
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
urlValues.Add("frontendName", d.Get("name").(string))
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbFrontendCreateAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.SetId(strconv.Itoa(d.Get("lb_id").(int)) + "#" + d.Get("name").(string))
|
||||
|
||||
_, err = utilityLBFrontendCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
diagnostics := resourceLBFrontendRead(ctx, d, m)
|
||||
if diagnostics != nil {
|
||||
return diagnostics
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBFrontendRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBFrontendRead")
|
||||
|
||||
f, err := utilityLBFrontendCheckPresence(ctx, d, m)
|
||||
if f == nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
lbId, _ := strconv.ParseInt(strings.Split(d.Id(), "#")[0], 10, 32)
|
||||
d.Set("lb_id", lbId)
|
||||
d.Set("backend_name", f.Backend)
|
||||
d.Set("name", f.Name)
|
||||
d.Set("guid", f.GUID)
|
||||
d.Set("bindings", flattendBindings(f.Bindings))
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBFrontendDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBFrontendDelete")
|
||||
|
||||
lb, err := utilityLBFrontendCheckPresence(ctx, d, m)
|
||||
if lb == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
urlValues.Add("frontendName", d.Get("name").(string))
|
||||
|
||||
_, err = c.DecortAPICall(ctx, "POST", lbFrontendDeleteAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
d.SetId("")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBFrontendEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
|
||||
//TODO: move bindings here
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func ResourceLBFrontend() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
CreateContext: resourceLBFrontendCreate,
|
||||
ReadContext: resourceLBFrontendRead,
|
||||
UpdateContext: resourceLBFrontendEdit,
|
||||
DeleteContext: resourceLBFrontendDelete,
|
||||
|
||||
Importer: &schema.ResourceImporter{
|
||||
StateContext: schema.ImportStatePassthroughContext,
|
||||
},
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Create: &constants.Timeout600s,
|
||||
Read: &constants.Timeout300s,
|
||||
Update: &constants.Timeout300s,
|
||||
Delete: &constants.Timeout300s,
|
||||
Default: &constants.Timeout300s,
|
||||
},
|
||||
|
||||
Schema: map[string]*schema.Schema{
|
||||
"lb_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "ID of the LB instance to backendCreate",
|
||||
},
|
||||
"backend_name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
},
|
||||
"bindings": {
|
||||
Type: schema.TypeList,
|
||||
Computed: true,
|
||||
Elem: &schema.Resource{
|
||||
Schema: map[string]*schema.Schema{
|
||||
"address": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"port": {
|
||||
Type: schema.TypeInt,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
201
internal/service/cloudapi/lb/resource_lb_frontend_bind.go
Normal file
@@ -0,0 +1,201 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/constants"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
log "github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func resourceLBFrontendBindCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBFrontendBindCreate")
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("frontendName", d.Get("frontend_name").(string))
|
||||
urlValues.Add("bindingName", d.Get("name").(string))
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
urlValues.Add("bindingAddress", d.Get("address").(string))
|
||||
urlValues.Add("bindingPort", strconv.Itoa(d.Get("port").(int)))
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbFrontendBindAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
d.SetId(strconv.Itoa(d.Get("lb_id").(int)) + "#" + d.Get("frontend_name").(string) + "#" + d.Get("name").(string))
|
||||
|
||||
_, err = utilityLBFrontendBindCheckPresence(ctx, d, m)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
diagnostics := resourceLBFrontendBindRead(ctx, d, m)
|
||||
if diagnostics != nil {
|
||||
return diagnostics
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBFrontendBindRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBFrontendBindRead")
|
||||
|
||||
b, err := utilityLBFrontendBindCheckPresence(ctx, d, m)
|
||||
if b == nil {
|
||||
d.SetId("")
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
lbId, _ := strconv.ParseInt(strings.Split(d.Id(), "#")[0], 10, 32)
|
||||
frontendName := strings.Split(d.Id(), "#")[1]
|
||||
|
||||
d.Set("lb_id", lbId)
|
||||
d.Set("frontend_name", frontendName)
|
||||
d.Set("name", b.Name)
|
||||
d.Set("address", b.Address)
|
||||
d.Set("guid", b.GUID)
|
||||
d.Set("port", b.Port)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBFrontendBindDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBFrontendBindDelete")
|
||||
|
||||
b, err := utilityLBFrontendBindCheckPresence(ctx, d, m)
|
||||
if b == nil {
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
urlValues.Add("bindingName", d.Get("name").(string))
|
||||
urlValues.Add("frontendName", d.Get("frontend_name").(string))
|
||||
|
||||
_, err = c.DecortAPICall(ctx, "POST", lbFrontendBindDeleteAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
d.SetId("")
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func resourceLBFrontendBindEdit(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
|
||||
log.Debugf("resourceLBFrontendBindEdit")
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
urlValues.Add("frontendName", d.Get("frontend_name").(string))
|
||||
urlValues.Add("bindingName", d.Get("name").(string))
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
|
||||
if d.HasChange("address") {
|
||||
urlValues.Add("bindingAddress", d.Get("address").(string))
|
||||
}
|
||||
|
||||
if d.HasChange("port") {
|
||||
urlValues.Add("bindingPort", strconv.Itoa(d.Get("port").(int)))
|
||||
}
|
||||
|
||||
_, err := c.DecortAPICall(ctx, "POST", lbFrontendBindUpdateAPI, urlValues)
|
||||
if err != nil {
|
||||
return diag.FromErr(err)
|
||||
}
|
||||
|
||||
return resourceLBFrontendBindRead(ctx, d, m)
|
||||
}
|
||||
|
||||
func ResourceLBFrontendBind() *schema.Resource {
|
||||
return &schema.Resource{
|
||||
SchemaVersion: 1,
|
||||
|
||||
CreateContext: resourceLBFrontendBindCreate,
|
||||
ReadContext: resourceLBFrontendBindRead,
|
||||
UpdateContext: resourceLBFrontendBindEdit,
|
||||
DeleteContext: resourceLBFrontendBindDelete,
|
||||
|
||||
Importer: &schema.ResourceImporter{
|
||||
StateContext: schema.ImportStatePassthroughContext,
|
||||
},
|
||||
|
||||
Timeouts: &schema.ResourceTimeout{
|
||||
Create: &constants.Timeout600s,
|
||||
Read: &constants.Timeout300s,
|
||||
Update: &constants.Timeout300s,
|
||||
Delete: &constants.Timeout300s,
|
||||
Default: &constants.Timeout300s,
|
||||
},
|
||||
|
||||
Schema: map[string]*schema.Schema{
|
||||
"lb_id": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
Description: "ID of the LB instance to backendCreate",
|
||||
},
|
||||
"frontend_name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
Description: "Must be unique among all backends of this LB - name of the new backend to create",
|
||||
},
|
||||
"address": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
},
|
||||
"guid": {
|
||||
Type: schema.TypeString,
|
||||
Computed: true,
|
||||
},
|
||||
"name": {
|
||||
Type: schema.TypeString,
|
||||
Required: true,
|
||||
},
|
||||
"port": {
|
||||
Type: schema.TypeInt,
|
||||
Required: true,
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
70
internal/service/cloudapi/lb/utility_lb.go
Normal file
@@ -0,0 +1,70 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"strconv"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
)
|
||||
|
||||
func utilityLBCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*LoadBalancer, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
if (d.Get("lb_id").(int)) != 0 {
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
} else {
|
||||
urlValues.Add("lbId", d.Id())
|
||||
}
|
||||
|
||||
resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if resp == "" {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
lb := &LoadBalancer{}
|
||||
if err := json.Unmarshal([]byte(resp), lb); err != nil {
|
||||
return nil, fmt.Errorf("can not unmarshall data to lb: %s %+v", resp, lb)
|
||||
}
|
||||
|
||||
return lb, nil
|
||||
}
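All of the check-presence helpers in this package follow the same pattern as utilityLBCheckPresence above: call lbGetAPI, treat an empty response as "not found", and unmarshal the JSON payload into the package's LoadBalancer model (defined in a file not shown in this diff). The reduced, self-contained sketch below illustrates that unmarshalling step; the field set and JSON tags are assumptions inferred from the read functions, not copied from the real model:

```go
// Reduced sketch of the unmarshalling contract; not the provider's real model.
package main

import (
	"encoding/json"
	"fmt"
)

type lbSketch struct {
	ID       int    `json:"id"`     // assumed tag
	Name     string `json:"name"`   // assumed tag
	Status   string `json:"status"` // assumed tag
	Backends []struct {
		Name string `json:"name"` // assumed tag
	} `json:"backends"` // assumed tag
}

func main() {
	raw := `{"id":777,"name":"demo-lb","status":"ENABLED","backends":[{"name":"web-backend"}]}`
	lb := &lbSketch{}
	if err := json.Unmarshal([]byte(raw), lb); err != nil {
		fmt.Println("cannot unmarshal data to lb:", err)
		return
	}
	fmt.Println(lb.Name, len(lb.Backends), "backend(s)")
}
```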
|
||||
82
internal/service/cloudapi/lb/utility_lb_backend.go
Normal file
@@ -0,0 +1,82 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
)
|
||||
|
||||
func utilityLBBackendCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*Backend, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
bName := d.Get("name").(string)
|
||||
|
||||
if (d.Get("lb_id").(int)) != 0 {
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
} else {
|
||||
parameters := strings.Split(d.Id(), "#")
|
||||
urlValues.Add("lbId", parameters[0])
|
||||
bName = parameters[1]
|
||||
}
|
||||
|
||||
resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if resp == "" {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
lb := &LoadBalancer{}
|
||||
if err := json.Unmarshal([]byte(resp), lb); err != nil {
|
||||
return nil, fmt.Errorf("can not unmarshall data to lb: %s %+v", resp, lb)
|
||||
}
|
||||
|
||||
backends := lb.Backends
|
||||
for _, b := range backends {
|
||||
if b.Name == bName {
|
||||
return &b, nil
|
||||
}
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("can not find backend with name: %s for lb: %d", bName, lb.ID)
|
||||
}
|
||||
95
internal/service/cloudapi/lb/utility_lb_backend_server.go
Normal file
@@ -0,0 +1,95 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
)
|
||||
|
||||
func utilityLBBackendServerCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*Server, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
bName := d.Get("backend_name").(string)
|
||||
sName := d.Get("name").(string)
|
||||
|
||||
if (d.Get("lb_id").(int)) != 0 {
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
} else {
|
||||
parameters := strings.Split(d.Id(), "#")
|
||||
urlValues.Add("lbId", parameters[0])
|
||||
bName = parameters[1]
|
||||
sName = parameters[2]
|
||||
}
|
||||
|
||||
resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if resp == "" {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
lb := &LoadBalancer{}
|
||||
if err := json.Unmarshal([]byte(resp), lb); err != nil {
|
||||
return nil, fmt.Errorf("can not unmarshall data to lb: %s %+v", resp, lb)
|
||||
}
|
||||
|
||||
backend := &Backend{}
|
||||
backends := lb.Backends
|
||||
for i, b := range backends {
|
||||
if b.Name == bName {
|
||||
backend = &backends[i]
|
||||
break
|
||||
}
|
||||
}
|
||||
if backend.Name == "" {
|
||||
return nil, fmt.Errorf("can not find backend with name: %s for lb: %d", bName, lb.ID)
|
||||
}
|
||||
|
||||
for _, s := range backend.Servers {
|
||||
if s.Name == sName {
|
||||
return &s, nil
|
||||
}
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("can not find server with name: %s for backend: %s for lb: %d", sName, bName, lb.ID)
|
||||
}
|
||||
82
internal/service/cloudapi/lb/utility_lb_frontend.go
Normal file
@@ -0,0 +1,82 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
)
|
||||
|
||||
func utilityLBFrontendCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*Frontend, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
fName := d.Get("name").(string)
|
||||
|
||||
if (d.Get("lb_id").(int)) != 0 {
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
} else {
|
||||
parameters := strings.Split(d.Id(), "#")
|
||||
urlValues.Add("lbId", parameters[0])
|
||||
fName = parameters[1]
|
||||
}
|
||||
|
||||
resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if resp == "" {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
lb := &LoadBalancer{}
|
||||
if err := json.Unmarshal([]byte(resp), lb); err != nil {
|
||||
return nil, fmt.Errorf("can not unmarshall data to lb: %s %+v", resp, lb)
|
||||
}
|
||||
|
||||
frontends := lb.Frontends
|
||||
for _, f := range frontends {
|
||||
if f.Name == fName {
|
||||
return &f, nil
|
||||
}
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("can not find frontend with name: %s for lb: %d", fName, lb.ID)
|
||||
}
|
||||
95
internal/service/cloudapi/lb/utility_lb_frontend_bind.go
Normal file
@@ -0,0 +1,95 @@
|
||||
/*
|
||||
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
|
||||
Authors:
|
||||
Petr Krutov, <petr.krutov@digitalenergy.online>
|
||||
Stanislav Solovev, <spsolovev@digitalenergy.online>
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
/*
|
||||
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
|
||||
Orchestration Technology) with Terraform by Hashicorp.
|
||||
|
||||
Source code: https://github.com/rudecs/terraform-provider-decort
|
||||
|
||||
Please see README.md to learn where to place source code so that it
|
||||
builds seamlessly.
|
||||
|
||||
Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
|
||||
*/
|
||||
|
||||
package lb
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
|
||||
"github.com/rudecs/terraform-provider-decort/internal/controller"
|
||||
)
|
||||
|
||||
func utilityLBFrontendBindCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (*Binding, error) {
|
||||
c := m.(*controller.ControllerCfg)
|
||||
urlValues := &url.Values{}
|
||||
|
||||
fName := d.Get("frontend_name").(string)
|
||||
bName := d.Get("name").(string)
|
||||
|
||||
if (d.Get("lb_id").(int)) != 0 {
|
||||
urlValues.Add("lbId", strconv.Itoa(d.Get("lb_id").(int)))
|
||||
} else {
|
||||
parameters := strings.Split(d.Id(), "#")
|
||||
urlValues.Add("lbId", parameters[0])
|
||||
fName = parameters[1]
|
||||
bName = parameters[2]
|
||||
}
|
||||
|
||||
resp, err := c.DecortAPICall(ctx, "POST", lbGetAPI, urlValues)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if resp == "" {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
lb := &LoadBalancer{}
|
||||
if err := json.Unmarshal([]byte(resp), lb); err != nil {
|
||||
return nil, fmt.Errorf("can not unmarshall data to lb: %s %+v", resp, lb)
|
||||
}
|
||||
|
||||
frontend := &Frontend{}
|
||||
frontends := lb.Frontends
|
||||
for i, f := range frontends {
|
||||
if f.Name == fName {
|
||||
frontend = &frontends[i]
|
||||
break
|
||||
}
|
||||
}
|
||||
if frontend.Name == "" {
|
||||
return nil, fmt.Errorf("can not find frontend with name: %s for lb: %d", fName, lb.ID)
|
||||
}
|
||||
|
||||
for _, b := range frontend.Bindings {
|
||||
if b.Name == bName {
|
||||
return &b, nil
|
||||
}
|
||||
}
|
||||
|
||||
return nil, fmt.Errorf("can not find bind with name: %s for frontend: %s for lb: %d", bName, fName, lb.ID)
|
||||
}
|
||||
74
internal/service/cloudapi/lb/utility_lb_list.go
Normal file
@@ -0,0 +1,74 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package lb

import (
	"context"
	"encoding/json"
	"net/url"
	"strconv"

	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// utilityLBListCheckPresence loads the list of load balancers, honoring the optional
// includedeleted, page and size arguments of the data source.
func utilityLBListCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (LBList, error) {
	lbList := LBList{}
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	if includedeleted, ok := d.GetOk("includedeleted"); ok {
		urlValues.Add("includedeleted", strconv.FormatBool(includedeleted.(bool)))
	}

	if page, ok := d.GetOk("page"); ok {
		urlValues.Add("page", strconv.Itoa(page.(int)))
	}
	if size, ok := d.GetOk("size"); ok {
		urlValues.Add("size", strconv.Itoa(size.(int)))
	}

	log.Debugf("utilityLBListCheckPresence: load lb list")
	lbListRaw, err := c.DecortAPICall(ctx, "POST", lbListAPI, urlValues)
	if err != nil {
		return nil, err
	}

	err = json.Unmarshal([]byte(lbListRaw), &lbList)
	if err != nil {
		return nil, err
	}

	return lbList, nil
}
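For orientation only: a data source read function would typically wrap this utility roughly as sketched below. The wrapper name, the flattenLBList helper and the "items" attribute are assumptions for illustration and are not part of this diff; the sketch also assumes the github.com/hashicorp/terraform-plugin-sdk/v2/diag and github.com/google/uuid packages are imported.

// Hypothetical wrapper in package lb; not taken from this changeset.
func dataSourceLBListReadSketch(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
	lbList, err := utilityLBListCheckPresence(ctx, d, m)
	if err != nil {
		return diag.FromErr(err)
	}

	d.SetId(uuid.New().String())
	if err := d.Set("items", flattenLBList(lbList)); err != nil { // flattenLBList is assumed, not shown here
		return diag.FromErr(err)
	}
	return nil
}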
70	internal/service/cloudapi/lb/utility_lb_list_deleted.go	Normal file
@@ -0,0 +1,70 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.

Source code: https://github.com/rudecs/terraform-provider-decort

Please see README.md to learn where to place source code so that it
builds seamlessly.

Documentation: https://github.com/rudecs/terraform-provider-decort/wiki
*/

package lb

import (
	"context"
	"encoding/json"
	"net/url"
	"strconv"

	"github.com/rudecs/terraform-provider-decort/internal/controller"
	log "github.com/sirupsen/logrus"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// utilityLBListDeletedCheckPresence loads the list of deleted load balancers,
// honoring the optional page and size arguments of the data source.
func utilityLBListDeletedCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (LBList, error) {
	lbList := LBList{}
	c := m.(*controller.ControllerCfg)
	urlValues := &url.Values{}

	if page, ok := d.GetOk("page"); ok {
		urlValues.Add("page", strconv.Itoa(page.(int)))
	}
	if size, ok := d.GetOk("size"); ok {
		urlValues.Add("size", strconv.Itoa(size.(int)))
	}

	log.Debugf("utilityLBListDeletedCheckPresence: load lb list")
	lbListRaw, err := c.DecortAPICall(ctx, "POST", lbListDeletedAPI, urlValues)
	if err != nil {
		return nil, err
	}

	err = json.Unmarshal([]byte(lbListRaw), &lbList)
	if err != nil {
		return nil, err
	}

	return lbList, nil
}
@@ -181,15 +181,15 @@ func ResourcePfw() *schema.Resource {
 		DeleteContext: resourcePfwDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout60s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout60s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},
 
 		Schema: resourcePfwSchemaMake(),
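The timeout values referenced in this and the following hunks come from the provider's internal constants package. Their exact declarations are not shown in this diff; taking their address with & implies addressable time.Duration variables, roughly as in the sketch below (names taken from the hunks, durations inferred from the names and therefore an assumption):

package constants

import "time"

// Sketch only: inferred from the usage &constants.Timeout600s etc. above;
// the real file may declare more values or different durations.
var (
	Timeout30s  = time.Second * 30
	Timeout60s  = time.Second * 60
	Timeout180s = time.Second * 180
	Timeout300s = time.Second * 300
	Timeout600s = time.Second * 600
)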
@@ -307,15 +307,15 @@ func ResourceResgroup() *schema.Resource {
 		DeleteContext: resourceResgroupDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout180s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout180s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},
 
 		Schema: map[string]*schema.Schema{
@@ -178,15 +178,15 @@ func ResourceSnapshot() *schema.Resource {
 		DeleteContext: resourceSnapshotDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout60s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout60s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},
 
 		Schema: resourceSnapshotSchemaMake(),
@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
 Authors:
 Petr Krutov, <petr.krutov@digitalenergy.online>
 Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.

@@ -3,6 +3,7 @@ Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
 Authors:
 Petr Krutov, <petr.krutov@digitalenergy.online>
 Stanislav Solovev, <spsolovev@digitalenergy.online>
+Kasim Baybikov, <kmbaybikov@basistech.ru>
 
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
@@ -292,15 +292,15 @@ func ResourceVins() *schema.Resource {
 		DeleteContext: resourceVinsDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{
-			Create:  &constants.Timeout180s,
-			Read:    &constants.Timeout30s,
-			Update:  &constants.Timeout180s,
-			Delete:  &constants.Timeout60s,
-			Default: &constants.Timeout60s,
+			Create:  &constants.Timeout600s,
+			Read:    &constants.Timeout300s,
+			Update:  &constants.Timeout300s,
+			Delete:  &constants.Timeout300s,
+			Default: &constants.Timeout300s,
 		},
 
 		Schema: resourceVinsSchemaMake(),
@@ -455,7 +455,7 @@ func ResourceAccount() *schema.Resource {
 		DeleteContext: resourceAccountDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{

@@ -610,7 +610,7 @@ func ResourceDisk() *schema.Resource {
 		DeleteContext: resourceDiskDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{

@@ -381,7 +381,7 @@ func ResourceCDROMImage() *schema.Resource {
 		DeleteContext: resourceCDROMImageDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{

@@ -126,7 +126,7 @@ func ResourceDeleteImages() *schema.Resource {
 		DeleteContext: resourceDeleteListImages,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{

@@ -665,7 +665,7 @@ func ResourceImage() *schema.Resource {
 		DeleteContext: resourceImageDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{

@@ -328,7 +328,7 @@ func ResourceVirtualImage() *schema.Resource {
 		DeleteContext: resourceImageDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{

@@ -376,7 +376,7 @@ func ResourceK8s() *schema.Resource {
 		DeleteContext: resourceK8sDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{

@@ -228,7 +228,7 @@ func ResourceK8sWg() *schema.Resource {
 		DeleteContext: resourceK8sWgDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{

@@ -338,7 +338,7 @@ func ResourceCompute() *schema.Resource {
 		DeleteContext: resourceComputeDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{

@@ -229,7 +229,7 @@ func ResourcePcidevice() *schema.Resource {
 		DeleteContext: resourcePcideviceDelete,
 
 		Importer: &schema.ResourceImporter{
-			State: schema.ImportStatePassthrough,
+			StateContext: schema.ImportStatePassthroughContext,
 		},
 
 		Timeouts: &schema.ResourceTimeout{
Some files were not shown because too many files have changed in this diff.