main 1.3.0
asteam 2 days ago
parent 5382579a5f
commit ddbb12996d

@@ -1,13 +1,110 @@
## Version 1.3.0
### Added
#### account
| Task<br>ID | Description |
| --- | --- |
| BATF-900 | Optional field `desc` in resource `dynamix_account` |
| BATF-900 | Computed field `desc` in datasources `dynamix_account_list`, `dynamix_account_deleted_list`, `dynamix_account` and `dynamix_account_rg_list` |
| BATF-904 | Optional field `reason` in resource `dynamix_account` |
#### bservice
| Task<br>ID | Description |
| --- | --- |
| BATF-905 | Optional field `chipset` in resource `dynamix_bservice_group` |
| BATF-906 | Computed field `chipset` in datasource `dynamix_bservice_group` |
#### disks
| Task<br>ID | Description |
| --- | --- |
| BATF-914 | Computed fields `size_available`, `updated_by`, `deleted_by`, `created_by`, `updated_time` and `milestones` in datasources `dynamix_disk_list` and `dynamix_disk_list_deleted` |
| BATF-914 | Computed fields `size_available`, `updated_by`, `deleted_by`, `created_by`, `updated_time`, `machine_id`, `machine_name` and `milestones` in datasource `dynamix_disk` |
| BATF-914 | Computed fields `machine_id`, `machine_name`, `updated_by`, `deleted_by`, `created_by` and `updated_time` in resource `dynamix_disk` |
#### extnet
| Task<br>ID | Description |
| --- | --- |
| BATF-907 | Optional field `ovs_bridge` in datasource `dynamix_extnet_list` |
| BATF-911 | Computed field `ntp` in datasource `dynamix_extnet` |
#### image
| Task<br>ID | Description |
| --- | --- |
| BATF-903 | Computed field `snapshot_id` in datasource `dynamix_image` and resource `dynamix_image` |
| BATF-942 | Computed field `snapshot_id` in resource `resource_image` |
| BATF-942 | Computed fields `cd_presented_to`, `snapshot_id` and `network_interface_naming` in resource `resource_image_virtual` |
#### k8s
| Task<br>ID | Description |
| --- | --- |
| BATF-900 | Computed field `desc` in datasource `dynamix_k8s` |
#### kvmvm
| Task<br>ID | Description |
| --- | --- |
| BATF-901 | Optional fields `loader_type`, `boot_type`, `hot_resize`, `network_interface_naming` in resource `dynamix_kvmvm` |
| BATF-901 | Computed fields `loader_type`, `boot_type`, `hot_resize`, `network_interface_naming` in datasources `dynamix_kvmvm`, `dynamix_kvmvm_list`, `dynamix_kvmvm_list_deleted` |
| BATF-913 | Computed field `size_available` in datasource `dynamix_kvmvm` |
| BATF-954 | Optional field `snapshot_delete_async` in resource `dynamix_kvmvm` |
#### rg
| Task<br>ID | Description |
| --- | --- |
| BATF-945 | Computed fields `created_by` and `created_time` in resource `dynamix_resgroup` |
#### vins
| Task<br>ID | Description |
| --- | --- |
| BATF-898 | Optional field `status` in datasource `dynamix_vins_list` in cloudapi/vins |
### Removed
#### kvmvm
| Task<br>ID | Description |
| --- | --- |
| BATF-938 | Optional fields `stateless`, `cd` in resource `dynamix_kvmvm` |
#### rg
| Task<br>ID | Description |
| --- | --- |
| BATF-910 | Optional field `register_computes` in resource `resource_rg` |
| BATF-910 | Computed field `register_computes` in datasources `rdata_rg`, `rdata_rg_list` and `rdata_rg_list_deleted` |
| BATF-912 | Optional fields `auto_start` and `data_disks` in resource `dynamix_account` |
### Fixed
#### general changes
| Task<br>ID | Description |
| --- | --- |
| BATF-883 | Fixed incorrect imports of existing resources |
| BATF-984 | Set default values for boolean fields in existing resources |
#### disks
| Task<br>ID | Description |
| --- | --- |
| BATF-899 | Changed field `type` from Optional to Computed in resource `dynamix_disk` |
| BATF-902 | Changed type of field `present_to` from []int to map[string]int in datasources `dynamix_disk`, `dynamix_disk_list`, `dynamix_disk_list_deleted`, `dynamix_disk_replication` and in resources `dynamix_disk`, `dynamix_disk_replication` |
| BATF-908 | Changed field `gid` from Required to Computed in resource `dynamix_disk` |
| BATF-988 | Changed type of field `images` from []string to []int in resources `dynamix_disk`, `dynamix_disk_replication` |
| BATF-988 | Changed type of field `images` from []string to []int in datasources `dynamix_disk`, `dynamix_disk_list`, `dynamix_disk_replication`, `dynamix_disk_list_deleted`, `dynamix_disk_list_unattached` |
#### image
| Task<br>ID | Description |
| --- | --- |
| BATF-902 | Changed type of field `present_to` from []int to map[string]int in datasource `dynamix_image`, resources `dynamix_virtual_image`, `dynamix_image` |
| BATF-915 | Changed possible value of field `image_type` from `other` to `unknown` in resource `dynamix_image` |
#### kvmvm
| Task<br>ID | Description |
| --- | --- |
| BATF-902 | Changed type of field `present_to` from []int to map[string]int in datasource `dynamix_kvmvm`, resource `dynamix_kvmvm` |
| BATF-948 | Changed field `mac` in block `network` from `computed` to `optional` in resource `dynamix_kvmvm` |
| BATF-938 | Fixed import of optional and required values in resource `dynamix_kvmvm` |
| BATF-938 | Changed field `cd_image_id` from `computed` to `optional` in resource `dynamix_kvmvm` |
| BATF-920 | Virtual machine restart when detaching disks in resource `dynamix_kvmvm` |
| BATF-938 | Removed the limits on the number of disks and networks in resource `dynamix_kvmvm` |
| BATF-931 | Changed type of field `vgpus` from []int to []struct in resource `dynamix_kvmvm` and in datasource `dynamix_kvmvm` |
| BATF-988 | Changed type of field `images` from []string to []int in resource `dynamix_kvmvm` |

@@ -8,7 +8,7 @@ ZIPDIR = ./zip
BINARY=${NAME}
WORKPATH= ./examples/terraform.d/plugins/${HOSTNAME}/${NAMESPACE}/${SECONDNAMESPACE}/${VERSION}/${OS_ARCH}
MAINPATH = ./cmd/dynamix/
VERSION=1.3.0
OS_ARCH=$(shell go env GOHOSTOS)_$(shell go env GOHOSTARCH)
FILES = ${BINARY}_${VERSION}_darwin_amd64\

@@ -12,7 +12,7 @@
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_account_resource_consumption_list | consumed, reserved |
| data source dynamix_account_resource_consumption_get | consumed, reserved, resource_limits |
| data source dynamix_account_rg_list | computes, reserved, resource_limits, limits, reserved |
@@ -34,13 +34,13 @@
The following fields had the List type in terraform-provider-decort and have the Set type in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| resource dynamix_bservice | snapshots |
#### Disks (disks)
| Resource name | Schema fields | Change compared to terraform-provider-decort | Comment |
|------------------|------------|---------------------------------------------------|-------------|
| resource dynamix_disk | iotune, shareable | If, when a disk resource is created, the limit-tuning operation (field iotune) and/or the disk-sharing operation (field shareable) fails, the resource is now created with warnings (Warnings). Previously the resource was created with errors (Errors). | This change applies only to resource creation. Resource updates behave as before: if an iotune and/or shareable change operation fails, errors (Errors) are returned. |
| resource dynamix_disk | - | The automatic disk restore operation (for a disk in the recycle bin) now takes place when the resource is read. Previously it took place when the resource was updated. | |
| data source dynamix_disk_list_unattached | ckey, meta | Field names changed from "_ckey" to "ckey" and from "_meta" to "meta". | |
@@ -48,7 +48,7 @@
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_disk_list_unattached | iotune |
| data source dynamix_disk | iotune |
| data source dynamix_disk_list | iotune |
@@ -58,7 +58,7 @@
#### Resource group (rg)
| Resource name | Schema fields | Change compared to terraform-provider-decort | Comment |
|------------------|------------|---------------------------------------------------|-------------|
| resource dynamix_resgroup | def_net_type | The field def_net_type is now Optional only (not Computed). If the def_net block is not set, it reflects the current def_net_type status. If the def_net block is set, the current default network type is held in the field def_net.net_type. | With a different implementation errors occur, because two different structures access the computed field def_net_type at once, and the framework raises an error whenever the plan and the platform disagree, which is inevitable when two different structures access the same field. |
| resource dynamix_resgroup | def_net, access, quota | The def_net, access and quota blocks became attributes. In the resource configuration they are set as attributes (with an equals sign).<br/>Now: def_net = {}.<br/>Before: def_net {}. | |
| resource dynamix_resgroup | force, permanently | New default values: true.<br/>Old default values: false. | |
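The block-to-attribute change above can be sketched in a minimal config; the `name`, `account_id` and `gid` values here are purely illustrative, only the `def_net` syntax change is taken from the table:

```terraform
resource "dynamix_resgroup" "example" {
  name       = "my-rg" # illustrative values
  account_id = 123
  gid        = 212

  # terraform-provider-decort (block syntax, no longer accepted):
  # def_net {
  #   net_type = "PRIVATE"
  # }

  # terraform-provider-dynamix (attribute syntax, assigned with "="):
  def_net = {
    net_type = "PRIVATE"
  }
}
```

The same `name {} → name = {}` rewrite applies to the `access` and `quota` blocks.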
@@ -69,7 +69,7 @@
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_rg_resource_consumption_list | consumed, reserved, resource_limits |
| data source dynamix_rg | resource_limits |
| data source dynamix_rg_get_resource_consumption | consumed, reserved, resource_limits |
@@ -83,7 +83,7 @@
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_extnet | default_qos, vnfs |
#### Clusters (k8s)
@@ -91,7 +91,7 @@
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_k8s | acl, masters |
| data source dynamix_k8s_list | service_account |
| data source dynamix_k8s_list_deleted | service_account |
@@ -104,7 +104,7 @@
#### Internal networks (vins)
| Resource name | Schema fields | Change compared to terraform-provider-decort | Comment |
|------------------|------------|---------------------------------------------------|-------------|
| data source dynamix_vins | ckey | Computed field renamed from "_ckey" to "ckey". | The rename is due to terraform framework restrictions. |
| resource dynamix_vins | ckey | Computed field renamed from "_ckey" to "ckey". | The rename is due to terraform framework restrictions. |
| resource dynamix_vins | ext_net_id, ext_ip_addr | Computed fields ext_net_id and ext_ip_addr removed. | The block ext_net {ext_net_id int; ext_net_ip string} is used when creating and changing external networks. |
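The `ext_net` shape from the comment above can be sketched as follows; the `name` and `rg_id` values are illustrative, and the block syntax mirrors the `ext_net {…}` notation used in the table:

```terraform
resource "dynamix_vins" "example" {
  name  = "my-vins" # illustrative values
  rg_id = 123

  # The former computed fields ext_net_id and ext_ip_addr are gone;
  # the external network connection is described by the ext_net block instead.
  ext_net {
    ext_net_id = 456         # int: ID of the external network
    ext_net_ip = "10.0.0.10" # string: requested external IP address
  }
}
```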
@@ -116,29 +116,32 @@
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_vins | vnf_dev, config, mgmt, resources, qos, default_qos, vnfs, dhcp, devices, primary, gw, nat, libvirt_settings |
| resource dynamix_vins | ext_net, vnf_dev, config, mgmt, resources, qos, default_qos, vnfs, dhcp, devices, primary, gw, nat, libvirt_settings |
#### KVM virtual machines (kvmvm)
| Resource name | Schema fields | Change compared to terraform-provider-decort | Comment |
|------------------|------------|---------------------------------------------------|-------------|
| resource dynamix_kvmvm | disks | Field removed | Eliminates duplication of disk management, which can be done via the dynamix_disk resource |
| resource dynamix_kvmvm | affinity_rules, anti_affinity_rules | Type changed from List to Set | |
| resource dynamix_kvmvm | force, permanently | New default values: true.<br/>Old default values: false. | |
| resource dynamix_kvmvm | restore | New default value: true. | |
| resource dynamix_kvmvm | acl | New attribute with a nested structure added | |
| resource dynamix_kvmvm | image_name | New attribute added | |
| resource dynamix_kvmvm | user_data | New attribute added | |
| resource dynamix_kvmvm | - | The automatic disk restore operation (for a disk in the recycle bin) now takes place when the resource is read. Previously it took place when the resource was updated. | |
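The changed defaults above can be made explicit in configuration to keep the old decort behavior; the `name`, `rg_id`, `cpu` and `ram` values in this sketch are illustrative:

```terraform
resource "dynamix_kvmvm" "example" {
  name  = "my-vm" # illustrative values
  rg_id = 123
  cpu   = 2
  ram   = 2048

  # force and permanently now default to true (decort defaulted to false);
  # set them explicitly to preserve the old deletion behavior.
  force       = false
  permanently = false

  # restore also defaults to true now; set false to opt out.
  restore = false
}
```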
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_kvmvm | acl, iotune, replication, qos, libvirt_settings |
| data source dynamix_kvmvm_list | qos, libvirt_settings |
| data source dynamix_kvmvm_list_deleted | qos, libvirt_settings |
| data source dynamix_kvmvm_user_list | items |
| resource dynamix_kvmvm | rollback, cd, boot_disk, acl, qos, iotune, replication, libvirt_settings |
#### Load balancers (lb)
@@ -147,7 +150,7 @@
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_lb | server_default_settings, server_settings, primary_node, secondary_node |
| data source dynamix_lb_list | server_default_settings, server_settings, primary_node, secondary_node |
| data source dynamix_lb_list_deleted | server_default_settings, server_settings, primary_node, secondary_node |
@@ -161,7 +164,7 @@
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_cb_account_resource_consumption_list | consumed, reserved |
| data source dynamix_cb_account_resource_consumption_get | consumed, reserved, resource_limits |
| data source dynamix_cb_account_rg_list | computes, reserved, resource_limits, limits, reserved |
@@ -172,12 +175,12 @@
| data source dynamix_cb_disk_replication | iotune, replication |
| data source dynamix_cb_disk | iotune |
| resource dynamix_cb_disk_replication | iotune, replication |
| resource dynamix_cb_disk | iotune |
#### Disks (disks)
| Resource name | Schema fields | Change compared to terraform-provider-decort | Comment |
|------------------|------------|---------------------------------------------------|-------------|
| resource dynamix_cb_disk | iotune, shareable | If, when a disk resource is created, the limit-tuning operation (field iotune) and/or the disk-sharing operation (field shareable) fails, the resource is now created with warnings (Warnings). Previously the resource was created with errors (Errors). | This change applies only to resource creation. Resource updates behave as before: if an iotune and/or shareable change operation fails, errors (Errors) are returned. |
| resource dynamix_cb_disk | - | The automatic disk restore operation (for a disk in the recycle bin) now takes place when the resource is read. Previously it took place when the resource was updated. | |
| data source dynamix_cb_disk_list_unattached | ckey, meta | Field names changed from "_ckey" to "ckey" and from "_meta" to "meta". | |
@@ -185,7 +188,7 @@
The following fields had the List type in terraform-provider-decort and have the Single type (a single structure) in terraform-provider-dynamix:
| Resource name | Schema fields |
|------------------|------------|
| data source dynamix_cb_disk_list_unattached | iotune |
| data source dynamix_cb_disk | iotune |
| data source dynamix_cb_disk_list | iotune |

@@ -40,6 +40,7 @@ description: |-
- `deactivation_time` (Number)
- `deleted_by` (String)
- `deleted_time` (Number)
- `desc` (String)
- `displayname` (String)
- `guid` (Number)
- `id` (String) The ID of this resource.

@@ -1,12 +1,12 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "dynamix_account_deleted_list Data Source - terraform-provider-dynamix"
subcategory: ""
description: |-
---
# dynamix_account_deleted_list (Data Source)
@@ -50,6 +50,7 @@ Read-Only:
- `compute_features` (List of String)
- `created_time` (Number)
- `deleted_time` (Number)
- `desc` (String)
- `status` (String)
- `updated_time` (Number)

@@ -51,6 +51,7 @@ Read-Only:
- `compute_features` (List of String)
- `created_time` (Number)
- `deleted_time` (Number)
- `desc` (String)
- `status` (String)
- `updated_time` (Number)

@@ -55,6 +55,7 @@ Read-Only:
- `created_time` (Number)
- `deleted_by` (String)
- `deleted_time` (Number)
- `desc` (String)
- `milestones` (Number)
- `resources` (Attributes) (see [below for nested schema](#nestedatt--items--resources))
- `rg_id` (Number)

@@ -70,6 +70,7 @@ Optional:
Read-Only:
- `chipset` (String)
- `id` (Number)
- `ip_addresses` (List of String)
- `name` (String)

@@ -29,7 +29,9 @@ description: |-
- `account_name` (String)
- `acl` (String)
- `computes` (Attributes List) (see [below for nested schema](#nestedatt--computes))
- `created_by` (String)
- `created_time` (Number)
- `deleted_by` (String)
- `deleted_time` (Number)
- `desc` (String)
- `destruction_time` (Number)
@@ -38,14 +40,17 @@ description: |-
- `gid` (Number)
- `id` (String) The ID of this resource.
- `image_id` (Number)
- `images` (List of Number)
- `iotune` (Attributes) (see [below for nested schema](#nestedatt--iotune))
- `machine_id` (Number)
- `machine_name` (String)
- `milestones` (Number)
- `order` (Number)
- `params` (String)
- `parent_id` (Number)
- `pci_slot` (Number)
- `pool` (String)
- `present_to` (Map of Number)
- `purge_time` (Number)
- `res_id` (String)
- `res_name` (String)
@@ -53,12 +58,15 @@ description: |-
- `sep_id` (Number)
- `sep_type` (String)
- `shareable` (Boolean)
- `size_available` (Number)
- `size_max` (Number)
- `size_used` (Number)
- `snapshots` (Attributes List) (see [below for nested schema](#nestedatt--snapshots))
- `status` (String)
- `tech_status` (String)
- `type` (String)
- `updated_by` (String)
- `updated_time` (Number)
- `vmid` (Number)
<a id="nestedblock--timeouts"></a>

@@ -55,7 +55,9 @@ Read-Only:
 - `account_name` (String)
 - `acl` (String)
 - `computes` (Attributes List) (see [below for nested schema](#nestedatt--items--computes))
+- `created_by` (String)
 - `created_time` (Number)
+- `deleted_by` (String)
 - `deleted_time` (Number)
 - `desc` (String)
 - `destruction_time` (Number)
@@ -64,16 +66,17 @@ Read-Only:
 - `disk_name` (String)
 - `gid` (Number)
 - `image_id` (Number)
-- `images` (List of String)
+- `images` (List of Number)
 - `iotune` (Attributes) (see [below for nested schema](#nestedatt--items--iotune))
 - `machine_id` (Number)
 - `machine_name` (String)
+- `milestones` (Number)
 - `order` (Number)
 - `params` (String)
 - `parent_id` (Number)
 - `pci_slot` (Number)
 - `pool` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `purge_time` (Number)
 - `res_id` (String)
 - `res_name` (String)
@@ -81,12 +84,15 @@ Read-Only:
 - `sep_id` (Number)
 - `sep_type` (String)
 - `shareable` (Boolean)
+- `size_available` (Number)
 - `size_max` (Number)
 - `size_used` (Number)
 - `snapshots` (Attributes List) (see [below for nested schema](#nestedatt--items--snapshots))
 - `status` (String)
 - `tech_status` (String)
 - `type` (String)
+- `updated_by` (String)
+- `updated_time` (Number)
 - `vmid` (Number)
 <a id="nestedatt--items--computes"></a>

@@ -52,7 +52,9 @@ Read-Only:
 - `account_name` (String)
 - `acl` (String)
 - `computes` (Attributes List) (see [below for nested schema](#nestedatt--items--computes))
+- `created_by` (String)
 - `created_time` (Number)
+- `deleted_by` (String)
 - `deleted_time` (Number)
 - `desc` (String)
 - `destruction_time` (Number)
@@ -61,16 +63,17 @@ Read-Only:
 - `disk_name` (String)
 - `gid` (Number)
 - `image_id` (Number)
-- `images` (List of String)
+- `images` (List of Number)
 - `iotune` (Attributes) (see [below for nested schema](#nestedatt--items--iotune))
 - `machine_id` (Number)
 - `machine_name` (String)
+- `milestones` (Number)
 - `order` (Number)
 - `params` (String)
 - `parent_id` (Number)
 - `pci_slot` (Number)
 - `pool` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `purge_time` (Number)
 - `res_id` (String)
 - `res_name` (String)
@@ -78,12 +81,15 @@ Read-Only:
 - `sep_id` (Number)
 - `sep_type` (String)
 - `shareable` (Boolean)
+- `size_available` (Number)
 - `size_max` (Number)
 - `size_used` (Number)
 - `snapshots` (Attributes List) (see [below for nested schema](#nestedatt--items--snapshots))
 - `status` (String)
 - `tech_status` (String)
 - `type` (String)
+- `updated_by` (String)
+- `updated_time` (Number)
 - `vmid` (Number)
 <a id="nestedatt--items--computes"></a>

@@ -64,7 +64,7 @@ Read-Only:
 - `gid` (Number)
 - `guid` (Number)
 - `image_id` (Number)
-- `images` (List of String)
+- `images` (List of Number)
 - `iotune` (Attributes) (see [below for nested schema](#nestedatt--items--iotune))
 - `iqn` (String)
 - `login` (String)

@@ -38,14 +38,14 @@ description: |-
 - `disk_name` (String)
 - `gid` (Number)
 - `image_id` (Number)
-- `images` (List of String)
+- `images` (List of Number)
 - `iotune` (Attributes) (see [below for nested schema](#nestedatt--iotune))
 - `order` (Number)
 - `params` (String)
 - `parent_id` (Number)
 - `pci_slot` (Number)
 - `pool` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `purge_time` (Number)
 - `replication` (Attributes) (see [below for nested schema](#nestedatt--replication))
 - `res_id` (String)

@@ -43,6 +43,7 @@ description: |-
 - `net_name` (String)
 - `network` (String)
 - `network_id` (Number)
+- `ntp` (List of String)
 - `pre_reservations_num` (Number)
 - `prefix` (Number)
 - `pri_vnf_dev_id` (Number)

@@ -21,6 +21,7 @@ description: |-
 - `by_id` (Number) find by id
 - `name` (String) find by name
 - `network` (String) find by network ip address
+- `ovs_bridge` (String) find by ovs_bridge
 - `page` (Number) page number
 - `size` (Number) page size
 - `sort_by` (String) sort by one of supported fields, format +|-(field)
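
The new `ovs_bridge` filter above can be combined with the existing paging arguments. A minimal sketch (the datasource name `dynamix_extnet_list` comes from the changelog; the bridge name and paging values are illustrative):

```terraform
# Find external networks attached to a given OVS bridge
# (bridge name is illustrative).
data "dynamix_extnet_list" "by_bridge" {
  ovs_bridge = "br-ex"

  page = 1
  size = 50
}
```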

@@ -51,7 +51,7 @@ description: |-
 - `network_interface_naming` (String)
 - `password` (String)
 - `pool_name` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `provider_name` (String)
 - `purge_attempts` (Number)
 - `res_id` (String)
@@ -59,6 +59,7 @@ description: |-
 - `sep_id` (Number)
 - `shared_with` (List of Number)
 - `size` (Number)
+- `snapshot_id` (String)
 - `status` (String)
 - `tech_status` (String)
 - `unc_path` (String)

@@ -34,6 +34,7 @@ description: |-
 - `created_time` (Number)
 - `deleted_by` (String)
 - `deleted_time` (Number)
+- `desc` (String)
 - `extnet_id` (Number)
 - `extnet_only` (Boolean)
 - `ha_mode` (Boolean)

@@ -35,6 +35,7 @@ description: |-
 - `arch` (String)
 - `auto_start_w_node` (Boolean)
 - `boot_order` (List of String)
+- `boot_type` (String)
 - `bootdisk_size` (Number)
 - `cd_image_id` (Number)
 - `chipset` (String)
@@ -54,11 +55,13 @@ description: |-
 - `driver` (String)
 - `gid` (Number)
 - `guid` (Number)
+- `hot_resize` (Boolean)
 - `hp_backed` (Boolean)
 - `id` (String) The ID of this resource.
 - `image_id` (Number)
 - `image_name` (String)
 - `interfaces` (Attributes List) (see [below for nested schema](#nestedatt--interfaces))
+- `loader_type` (String)
 - `lock_status` (String)
 - `manager_id` (Number)
 - `manager_type` (String)
@@ -71,6 +74,7 @@ description: |-
 - `natable_vins_network` (String)
 - `natable_vins_network_name` (String)
 - `need_reboot` (Boolean)
+- `network_interface_naming` (String)
 - `numa_affinity` (String)
 - `numa_node_id` (Number)
 - `os_users` (Attributes List) (see [below for nested schema](#nestedatt--os_users))
@@ -94,7 +98,7 @@ description: |-
 - `updated_time` (Number)
 - `user_data` (String)
 - `user_managed` (Boolean)
-- `vgpus` (List of Number)
+- `vgpus` (Attributes List) (see [below for nested schema](#nestedatt--vgpus))
 - `virtual_image_id` (Number)
 - `virtual_image_name` (String)
 - `vnc_password` (String)
@@ -213,7 +217,7 @@ Read-Only:
 - `passwd` (String)
 - `pci_slot` (Number)
 - `pool` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `purge_time` (Number)
 - `reality_device_number` (Number)
 - `reference_id` (String)
@@ -222,6 +226,7 @@ Read-Only:
 - `role` (String)
 - `sep_id` (Number)
 - `shareable` (Boolean)
+- `size_available` (Number)
 - `size_max` (Number)
 - `size_used` (Number)
 - `snapshots` (Attributes List) (see [below for nested schema](#nestedatt--disks--snapshots))
@@ -352,3 +357,29 @@ Read-Only:
 - `guid` (String)
 - `label` (String)
 - `timestamp` (Number)
+<a id="nestedatt--vgpus"></a>
+### Nested Schema for `vgpus`
+Read-Only:
+- `account_id` (Number)
+- `bus_number` (Number)
+- `created_time` (Number)
+- `deleted_time` (Number)
+- `gid` (Number)
+- `guid` (Number)
+- `id` (Number)
+- `last_claimed_by` (Number)
+- `last_update_time` (Number)
+- `mode` (String)
+- `pci_slot` (Number)
+- `pgpuid` (Number)
+- `profile_id` (Number)
+- `ram` (Number)
+- `reference_id` (String)
+- `rg_id` (Number)
+- `status` (String)
+- `type` (String)
+- `vmid` (Number)

@@ -63,6 +63,7 @@ Read-Only:
 - `arch` (String)
 - `auto_start_w_node` (Boolean)
 - `boot_order` (List of String)
+- `boot_type` (String)
 - `bootdisk_size` (Number)
 - `cd_image_id` (Number)
 - `chipset` (String)
@@ -83,9 +84,11 @@ Read-Only:
 - `driver` (String)
 - `gid` (Number)
 - `guid` (Number)
+- `hot_resize` (Boolean)
 - `hp_backed` (Boolean)
 - `image_id` (Number)
 - `interfaces` (Attributes List) (see [below for nested schema](#nestedatt--items--interfaces))
+- `loader_type` (String)
 - `lock_status` (String)
 - `manager_id` (Number)
 - `manager_type` (String)
@@ -93,6 +96,7 @@ Read-Only:
 - `milestones` (Number)
 - `name` (String)
 - `need_reboot` (Boolean)
+- `network_interface_naming` (String)
 - `numa_affinity` (String)
 - `numa_node_id` (Number)
 - `pinned` (Boolean)

@@ -61,6 +61,7 @@ Read-Only:
 - `arch` (String)
 - `auto_start_w_node` (Boolean)
 - `boot_order` (List of String)
+- `boot_type` (String)
 - `bootdisk_size` (Number)
 - `cd_image_id` (Number)
 - `chipset` (String)
@@ -81,9 +82,11 @@ Read-Only:
 - `driver` (String)
 - `gid` (Number)
 - `guid` (Number)
+- `hot_resize` (Boolean)
 - `hp_backed` (Boolean)
 - `image_id` (Number)
 - `interfaces` (Attributes List) (see [below for nested schema](#nestedatt--items--interfaces))
+- `loader_type` (String)
 - `lock_status` (String)
 - `manager_id` (Number)
 - `manager_type` (String)
@@ -91,6 +94,7 @@ Read-Only:
 - `milestones` (Number)
 - `name` (String)
 - `need_reboot` (Boolean)
+- `network_interface_naming` (String)
 - `numa_affinity` (String)
 - `numa_node_id` (Number)
 - `pinned` (Boolean)

@@ -46,7 +46,6 @@ description: |-
 - `lock_status` (String)
 - `milestones` (Number)
 - `name` (String)
-- `register_computes` (Boolean)
 - `res_types` (List of String)
 - `resource_limits` (Attributes) (see [below for nested schema](#nestedatt--resource_limits))
 - `secret` (String)

@@ -69,7 +69,6 @@ Read-Only:
 - `lock_status` (String)
 - `milestones` (Number)
 - `name` (String)
-- `register_computes` (Boolean)
 - `resource_limits` (Attributes) (see [below for nested schema](#nestedatt--items--resource_limits))
 - `resource_types` (List of String)
 - `rg_id` (Number)

@@ -67,7 +67,6 @@ Read-Only:
 - `lock_status` (String)
 - `milestones` (Number)
 - `name` (String)
-- `register_computes` (Boolean)
 - `resource_limits` (Attributes) (see [below for nested schema](#nestedatt--items--resource_limits))
 - `resource_types` (List of String)
 - `rg_id` (Number)

@@ -26,6 +26,7 @@ description: |-
 - `rg_id` (Number) Filter by RG ID
 - `size` (Number) Page size
 - `sort_by` (String) sort by one of supported fields, format +|-(field)
+- `status` (String) Status
 - `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
 - `vnf_dev_id` (Number) Filter by VNF Device id

@@ -18,18 +18,18 @@ description: |-
 ### Required
 - `account_name` (String) name of the account
-- `username` (String) username of owner the account
 ### Optional
-- `emailaddress` (String) email
+- `desc` (String) description of the account
 - `enable` (Boolean) enable/disable account
 - `permanently` (Boolean) whether to completely delete the account
+- `reason` (String) reason to disable
 - `resource_limits` (Attributes) (see [below for nested schema](#nestedatt--resource_limits))
 - `restore` (Boolean) restore a deleted account
 - `send_access_emails` (Boolean) if true send emails when a user is granted access to resources
 - `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
-- `users` (Attributes List) (see [below for nested schema](#nestedatt--users))
+- `users` (Attributes Set) (see [below for nested schema](#nestedatt--users))
 ### Read-Only
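
Per the changelog, the account hunk above adds optional `desc` and `reason` arguments to resource `dynamix_account`. A minimal sketch of how they might be used (all values are illustrative):

```terraform
resource "dynamix_account" "example" {
  account_name = "demo-account"

  # New in 1.3.0: optional description of the account
  desc = "Account for the demo environment"

  # New in 1.3.0: reason, recorded when the account is disabled
  enable = false
  reason = "suspended pending review"
}
```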

@@ -28,6 +28,7 @@ description: |-
 ### Optional
+- `chipset` (String) Type of the emulated system, Q35 or i440fx
 - `cloud_init` (String)
 - `compgroup_id` (Number)
 - `extnets` (List of Number)
@@ -83,6 +84,7 @@ Optional:
 Read-Only:
+- `chipset` (String) Type of the emulated system, Q35 or i440fx
 - `id` (Number)
 - `ip_addresses` (List of String)
 - `name` (String)
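
A sketch of the new optional `chipset` argument on resource `dynamix_bservice_group` (the resource's required arguments are unchanged by this diff and omitted here):

```terraform
resource "dynamix_bservice_group" "example" {
  # ... existing required arguments of the group are unchanged and omitted ...

  # New in 1.3.0: type of the emulated system, Q35 or i440fx
  chipset = "Q35"
}
```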

@@ -19,7 +19,6 @@ description: |-
 - `account_id` (Number) ID of the account
 - `disk_name` (String) Iname of disk
-- `gid` (Number) ID of the grid (platform)
 - `size_max` (Number) size in GB, default is 10
 ### Optional
@@ -32,27 +31,31 @@ description: |-
 - `sep_id` (Number) Storage endpoint provider ID to create disk
 - `shareable` (Boolean) share disk
 - `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
-- `type` (String) (B;D;T) B=Boot;D=Data;T=Temp
 ### Read-Only
 - `account_name` (String)
 - `acl` (String)
 - `computes` (Attributes List) (see [below for nested schema](#nestedatt--computes))
+- `created_by` (String)
 - `created_time` (Number)
+- `deleted_by` (String)
 - `deleted_time` (Number)
 - `destruction_time` (Number)
 - `devicename` (String)
 - `disk_id` (Number)
+- `gid` (Number) ID of the grid (platform)
 - `id` (String) The ID of this resource.
 - `image_id` (Number)
-- `images` (List of String)
+- `images` (List of Number)
 - `last_updated` (String) Timestamp of the last Terraform update of the disk resource.
+- `machine_id` (Number)
+- `machine_name` (String)
 - `order` (Number)
 - `params` (String)
 - `parent_id` (Number)
 - `pci_slot` (Number)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `purge_time` (Number)
 - `res_id` (String)
 - `res_name` (String)
@@ -62,6 +65,9 @@ description: |-
 - `snapshots` (Attributes List) (see [below for nested schema](#nestedatt--snapshots))
 - `status` (String)
 - `tech_status` (String)
+- `type` (String) (B;D;T) B=Boot;D=Data;T=Temp
+- `updated_by` (String)
+- `updated_time` (Number)
 - `vmid` (Number)
 <a id="nestedatt--iotune"></a>

@@ -45,14 +45,14 @@ description: |-
 - `gid` (Number)
 - `id` (String) The ID of this resource.
 - `image_id` (Number)
-- `images` (List of String)
+- `images` (List of Number)
 - `iotune` (Attributes) (see [below for nested schema](#nestedatt--iotune))
 - `order` (Number)
 - `params` (String)
 - `parent_id` (Number)
 - `pci_slot` (Number)
 - `pool` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `purge_time` (Number)
 - `replica_disk_id` (Number)
 - `replication` (Attributes) (see [below for nested schema](#nestedatt--replication))

@@ -21,7 +21,7 @@ description: |-
 - `boot_type` (String) Boot type of image bios or uefi
 - `drivers` (List of String) List of types of compute suitable for image. Example: [ "KVM_X86" ]
 - `image_name` (String) Name of the rescue disk
-- `image_type` (String) Image type linux, windows or other
+- `image_type` (String) Image type linux, windows or unknown
 - `url` (String) URL where to download media from
 ### Optional
@@ -56,13 +56,14 @@ description: |-
 - `link_to` (Number)
 - `milestones` (Number)
 - `network_interface_naming` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `provider_name` (String)
 - `purge_attempts` (Number)
 - `res_id` (String)
 - `rescuecd` (Boolean)
 - `shared_with` (List of Number)
 - `size` (Number)
+- `snapshot_id` (String)
 - `status` (String)
 - `tech_status` (String)
 - `unc_path` (String)

@@ -31,6 +31,7 @@ description: |-
 - `architecture` (String)
 - `boot_type` (String)
 - `bootable` (Boolean)
+- `cd_presented_to` (String)
 - `ckey` (String)
 - `compute_ci_id` (Number)
 - `deleted_time` (Number)
@@ -47,9 +48,10 @@ description: |-
 - `last_modified` (Number)
 - `last_updated` (String) Timestamp of the last Terraform update of the order.
 - `milestones` (Number)
+- `network_interface_naming` (String)
 - `password` (String)
 - `pool_name` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `provider_name` (String)
 - `purge_attempts` (Number)
 - `res_id` (String)
@@ -57,6 +59,7 @@ description: |-
 - `sep_id` (Number)
 - `shared_with` (List of Number)
 - `size` (Number)
+- `snapshot_id` (String)
 - `status` (String)
 - `tech_status` (String)
 - `unc_path` (String)

@@ -32,7 +32,6 @@ description: |-
 - `ram` (Number) Worker node RAM in MB.
 - `taints` (List of String)
 - `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
-- `worker_chipset` (String) Type of the emulated system of worker nodes
 - `worker_sep_id` (Number)
 - `worker_sep_pool` (String)

@ -28,26 +28,28 @@ description: |-
- `affinity_label` (String) Set affinity label for compute - `affinity_label` (String) Set affinity label for compute
- `affinity_rules` (Attributes Set) (see [below for nested schema](#nestedatt--affinity_rules)) - `affinity_rules` (Attributes Set) (see [below for nested schema](#nestedatt--affinity_rules))
- `anti_affinity_rules` (Attributes Set) (see [below for nested schema](#nestedatt--anti_affinity_rules)) - `anti_affinity_rules` (Attributes Set) (see [below for nested schema](#nestedatt--anti_affinity_rules))
- `auto_start` (Boolean) Flag for redeploy compute
- `auto_start_w_node` (Boolean) Flag for start compute after node exits from MAINTENANCE state - `auto_start_w_node` (Boolean) Flag for start compute after node exits from MAINTENANCE state
- `boot_disk_size` (Number) This compute instance boot disk size in GB. Make sure it is large enough to accomodate selected OS image. - `boot_disk_size` (Number) This compute instance boot disk size in GB. Make sure it is large enough to accomodate selected OS image.
- `cd` (Attributes) (see [below for nested schema](#nestedatt--cd)) - `boot_type` (String) Type of image upload
- `cd_image_id` (Number)
- `chipset` (String) Type of the emulated system, Q35 or i440fx - `chipset` (String) Type of the emulated system, Q35 or i440fx
- `cloud_init` (String) Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases. - `cloud_init` (String) Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases.
- `cpu_pin` (Boolean) Run VM on dedicated CPUs. To use this feature, the system must be pre-configured by allocating CPUs on the physical node. - `cpu_pin` (Boolean) Run VM on dedicated CPUs. To use this feature, the system must be pre-configured by allocating CPUs on the physical node.
- `custom_fields` (String) custom fields for Compute. Must be dict - `custom_fields` (String) custom fields for Compute. Must be dict
- `data_disks` (String) Flag for redeploy compute
- `description` (String) Optional text description of this compute instance. - `description` (String) Optional text description of this compute instance.
- `detach_disks` (Boolean) - `detach_disks` (Boolean)
- `enabled` (Boolean) If true - enable compute, else - disable - `enabled` (Boolean) If true - enable compute, else - disable
- `extra_disks` (Set of Number) Optional list of IDs of extra disks to attach to this compute. You may specify several extra disks. - `extra_disks` (Set of Number) Optional list of IDs of extra disks to attach to this compute. You may specify several extra disks.
- `force_resize` (Boolean) Flag for resize compute - `force_resize` (Boolean) Flag for resize compute
- `force_stop` (Boolean) Flag for redeploy compute - `force_stop` (Boolean) Flag for redeploy compute
- `hot_resize` (Boolean) Changing the size of a VM
- `hp_backed` (Boolean) Use Huge Pages to allocate RAM of the virtual machine. The system must be pre-configured by allocating Huge Pages on the physical node. - `hp_backed` (Boolean) Use Huge Pages to allocate RAM of the virtual machine. The system must be pre-configured by allocating Huge Pages on the physical node.
- `image_id` (Number) ID of the OS image to base this compute instance on. - `image_id` (Number) ID of the OS image to base this compute instance on.
- `ipa_type` (String) compute purpose - `ipa_type` (String) compute purpose
- `is` (String) system name - `is` (String) system name
- `loader_type` (String) Type of VM
- `network` (Attributes Set) Optional network connection(s) for this compute. You may specify several network blocks, one for each connection. (see [below for nested schema](#nestedatt--network)) - `network` (Attributes Set) Optional network connection(s) for this compute. You may specify several network blocks, one for each connection. (see [below for nested schema](#nestedatt--network))
- `network_interface_naming` (String) Name of the network interface
- `numa_affinity` (String) Rule for VM placement with NUMA affinity. - `numa_affinity` (String) Rule for VM placement with NUMA affinity.
- `pause` (Boolean) - `pause` (Boolean)
- `pci_devices` (Set of Number) ID of the connected pci devices - `pci_devices` (Set of Number) ID of the connected pci devices
@ -61,8 +63,8 @@ description: |-
- `rollback` (Attributes) (see [below for nested schema](#nestedatt--rollback)) - `rollback` (Attributes) (see [below for nested schema](#nestedatt--rollback))
- `sep_id` (Number) ID of SEP to create bootDisk on. Uses image's sepId if not set. - `sep_id` (Number) ID of SEP to create bootDisk on. Uses image's sepId if not set.
- `snapshot` (Attributes Set) (see [below for nested schema](#nestedatt--snapshot)) - `snapshot` (Attributes Set) (see [below for nested schema](#nestedatt--snapshot))
- `snapshot_delete_async` (Boolean) Flag for deleting snapshots asynchronously
- `started` (Boolean) Is compute started. - `started` (Boolean) Is compute started.
- `stateless` (Boolean) Compute will be stateless (SVA_KVM_X86) if set to True
- `tags` (Attributes Set) (see [below for nested schema](#nestedatt--tags)) - `tags` (Attributes Set) (see [below for nested schema](#nestedatt--tags))
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts)) - `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
- `user_access` (Attributes Set) (see [below for nested schema](#nestedatt--user_access)) - `user_access` (Attributes Set) (see [below for nested schema](#nestedatt--user_access))
@ -78,7 +80,6 @@ description: |-
- `boot_disk` (Attributes) (see [below for nested schema](#nestedatt--boot_disk)) - `boot_disk` (Attributes) (see [below for nested schema](#nestedatt--boot_disk))
- `boot_disk_id` (Number) - `boot_disk_id` (Number)
- `boot_order` (List of String) - `boot_order` (List of String)
- `cd_image_id` (Number)
- `clone_reference` (Number) - `clone_reference` (Number)
- `clones` (List of Number) - `clones` (List of Number)
- `compute_id` (Number) - `compute_id` (Number)
@ -122,7 +123,7 @@ description: |-
- `updated_time` (Number) - `updated_time` (Number)
- `user_data` (String) - `user_data` (String)
- `user_managed` (Boolean) - `user_managed` (Boolean)
- `vgpus` (List of Number) - `vgpus` (Attributes List) (see [below for nested schema](#nestedatt--vgpus))
- `virtual_image_id` (Number) - `virtual_image_id` (Number)
- `virtual_image_name` (String) - `virtual_image_name` (String)
- `vnc_password` (String) - `vnc_password` (String)
@@ -157,14 +158,6 @@ Optional:
 - `value` (String) value that must match the key to be taken into account when analyzing this rule
-
-<a id="nestedatt--cd"></a>
-### Nested Schema for `cd`
-
-Required:
-
-- `cdrom_id` (Number)
-
 <a id="nestedatt--network"></a>
 ### Nested Schema for `network`
@@ -176,13 +169,10 @@ Required:
 Optional:
 - `ip_address` (String) Optional IP address to assign to this connection. This IP should belong to the selected network and be free for use.
-- `mac` (String) MAC address associated with this connection. The MAC address is assigned automatically.
 - `mtu` (Number) Maximum transmission unit; used only for the DPDK type, must be 1-9216
 - `weight` (Number) Weight of the network, used to sort the network list; the smallest weight is attached first, zero or null weight is attached last
+
+Read-Only:
+
+- `mac` (String) MAC address associated with this connection. The MAC address is assigned automatically.

 <a id="nestedatt--port_forwarding"></a>
 ### Nested Schema for `port_forwarding`
@@ -301,7 +291,6 @@ Read-Only:
 - `acl` (String)
 - `boot_partition` (Number)
 - `bus_number` (Number)
-- `ckey` (String)
 - `created_time` (Number)
 - `deleted_time` (Number)
 - `desc` (String)
@@ -311,7 +300,7 @@ Read-Only:
 - `gid` (Number)
 - `guid` (Number)
 - `image_id` (Number)
-- `images` (List of String)
+- `images` (List of Number)
 - `iotune` (Attributes) (see [below for nested schema](#nestedatt--boot_disk--iotune))
 - `iqn` (String)
 - `login` (String)
@@ -323,7 +312,7 @@ Read-Only:
 - `passwd` (String)
 - `pci_slot` (Number)
 - `pool` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `purge_time` (Number)
 - `reality_device_number` (Number)
 - `reference_id` (String)
@@ -397,7 +386,6 @@ Read-Only:
 - `acl` (String)
 - `boot_partition` (Number)
 - `bus_number` (Number)
-- `ckey` (String)
 - `created_time` (Number)
 - `deleted_time` (Number)
 - `desc` (String)
@@ -407,7 +395,7 @@ Read-Only:
 - `gid` (Number)
 - `guid` (Number)
 - `image_id` (Number)
-- `images` (List of String)
+- `images` (List of Number)
 - `iotune` (Attributes) (see [below for nested schema](#nestedatt--disks--iotune))
 - `iqn` (String)
 - `login` (String)
@@ -419,7 +407,7 @@ Read-Only:
 - `passwd` (String)
 - `pci_slot` (Number)
 - `pool` (String)
-- `present_to` (List of Number)
+- `present_to` (Map of Number)
 - `purge_time` (Number)
 - `reality_device_number` (Number)
 - `reference_id` (String)
@@ -558,3 +546,29 @@ Read-Only:
 - `guid` (String)
 - `label` (String)
 - `timestamp` (Number)
+
+<a id="nestedatt--vgpus"></a>
+### Nested Schema for `vgpus`
+
+Read-Only:
+
+- `account_id` (Number)
+- `bus_number` (Number)
+- `created_time` (Number)
+- `deleted_time` (Number)
+- `gid` (Number)
+- `guid` (Number)
+- `id` (Number)
+- `last_claimed_by` (Number)
+- `last_update_time` (Number)
+- `mode` (String)
+- `pci_slot` (Number)
+- `pgpuid` (Number)
+- `profile_id` (Number)
+- `ram` (Number)
+- `reference_id` (String)
+- `rg_id` (Number)
+- `status` (String)
+- `type` (String)
+- `vmid` (Number)

@@ -23,7 +23,7 @@ description: |-
 ### Optional

-- `access` (Attributes List) Grant/revoke user or group access to the Resource group as specified (see [below for nested schema](#nestedatt--access))
+- `access` (Attributes Set) Grant/revoke user or group access to the Resource group as specified (see [below for nested schema](#nestedatt--access))
 - `def_net` (Attributes) Set default network for attach associated VMs (see [below for nested schema](#nestedatt--def_net))
 - `def_net_type` (String) type of the default network for this RG. VMs created in this RG will be by default connected to this network. Allowed values are PRIVATE, PUBLIC, NONE.
 - `description` (String) User-defined text description of this resource group.
@@ -35,7 +35,6 @@ description: |-
 - `owner` (String) username - owner of this RG. Leave blank to set current user as owner
 - `permanently` (Boolean) Set to True if you want force delete non-empty RG
 - `quota` (Attributes) Quota settings for this resource group. (see [below for nested schema](#nestedatt--quota))
-- `register_computes` (Boolean) Register computes in registration system
 - `restore` (Boolean)
 - `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
 - `uniq_pools` (List of String) List of strings with pools. Applies only when updating
@@ -47,6 +46,8 @@ description: |-
 - `compute_features` (List of String)
 - `cpu_allocation_parameter` (String)
 - `cpu_allocation_ratio` (Number)
+- `created_by` (String)
+- `created_time` (Number)
 - `def_net_id` (Number)
 - `deleted_by` (String)
 - `deleted_time` (Number)

@@ -9,7 +9,7 @@ require (
 	github.com/hashicorp/terraform-plugin-framework-validators v0.12.0
 	github.com/hashicorp/terraform-plugin-log v0.9.0
 	github.com/sirupsen/logrus v1.9.3
-	repository.basistech.ru/BASIS/decort-golang-sdk v1.10.2
+	repository.basistech.ru/BASIS/decort-golang-sdk v1.11.5
 )

 require (

@@ -100,5 +100,7 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-repository.basistech.ru/BASIS/decort-golang-sdk v1.10.2 h1:sA/ZngL4xvkyz8lVGkqbi2RBi4CrHJjho2WV21KX918=
-repository.basistech.ru/BASIS/decort-golang-sdk v1.10.2/go.mod h1:OaUynHHuSjWMzpfyoL4au6oLcUogqUkPPBKA15pbHWo=
+repository.basistech.ru/BASIS/decort-golang-sdk v1.11.4 h1:OEFgSEGjzut+vVMGeNgoNq3dtk63FbXB6yGLTywtAas=
+repository.basistech.ru/BASIS/decort-golang-sdk v1.11.4/go.mod h1:OaUynHHuSjWMzpfyoL4au6oLcUogqUkPPBKA15pbHWo=
+repository.basistech.ru/BASIS/decort-golang-sdk v1.11.5 h1:gZEH9+qdulvrPQFLMQOxcY97ef9BlYRuItDYMYf5d4U=
+repository.basistech.ru/BASIS/decort-golang-sdk v1.11.5/go.mod h1:OaUynHHuSjWMzpfyoL4au6oLcUogqUkPPBKA15pbHWo=

@@ -6,12 +6,6 @@ const LimitMaxVinsPerResgroup = 4
 // MaxSshKeysPerCompute sets maximum number of user:ssh_key pairs to authorize when creating new compute
 const MaxSshKeysPerCompute = 12

-// MaxExtraDisksPerCompute sets maximum number of extra disks that can be added when creating new compute
-const MaxExtraDisksPerCompute = 12
-
-// MaxNetworksPerCompute sets maximum number of vNICs per compute
-const MaxNetworksPerCompute = 8
-
 // MaxCpusPerCompute sets maximum number of vCPUs per compute
 const MaxCpusPerCompute = 128

@@ -17,3 +17,12 @@ func FlattenSimpleTypeToList(ctx context.Context, elementType attr.Type, element
 	}
 	return res
 }
+
+// FlattenSimpleTypeToSet convert a slice of simple type to a types.Set
+func FlattenSimpleTypeToSet(ctx context.Context, elementType attr.Type, elements any) types.Set {
+	res, diags := types.SetValueFrom(ctx, elementType, elements)
+	if diags != nil {
+		tflog.Error(ctx, fmt.Sprint("Error FlattenSimpleTypeToSet", diags))
+	}
+	return res
+}
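The new `FlattenSimpleTypeToSet` delegates the real work to `types.SetValueFrom`. As a minimal stdlib-only sketch of why several attributes in this commit move from List to Set, the program below models set semantics (duplicates collapse, order carries no meaning) with plain strings; `flattenToSet` is an illustrative name, not part of the provider.

```go
package main

import (
	"fmt"
	"sort"
)

// flattenToSet demonstrates set semantics: each element is kept once,
// and the result is put into a canonical order because set order is
// not significant. This is a conceptual model only.
func flattenToSet(elements []string) []string {
	seen := make(map[string]struct{}, len(elements))
	out := make([]string, 0, len(elements))
	for _, e := range elements {
		if _, ok := seen[e]; ok {
			continue // duplicate: a set keeps one copy
		}
		seen[e] = struct{}{}
		out = append(out, e)
	}
	sort.Strings(out)
	return out
}

func main() {
	fmt.Println(flattenToSet([]string{"b", "a", "b"})) // prints: [a b]
}
```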

@@ -76,7 +76,7 @@ func (d *dataSourceAccountListDeleted) Schema(ctx context.Context, _ datasource.
 }

 func (d *dataSourceAccountListDeleted) Metadata(_ context.Context, req datasource.MetadataRequest, resp *datasource.MetadataResponse) {
-	resp.TypeName = req.ProviderTypeName + "_account_list_deleted"
+	resp.TypeName = req.ProviderTypeName + "_account_deleted_list"
 }

 // Configure adds the provider configured client to the data source.

@@ -51,6 +51,7 @@ func AccountDataSource(ctx context.Context, state *models.DataSourceAccountModel
 		DeactivationTime: types.Float64Value(recordAccount.DeactivationTime),
 		DeletedBy: types.StringValue(recordAccount.DeletedBy),
 		DeletedTime: types.Int64Value(int64(recordAccount.DeletedTime)),
+		Desc: types.StringValue(recordAccount.Description),
 		DisplayName: types.StringValue(recordAccount.DisplayName),
 		GUID: types.Int64Value(int64(recordAccount.GUID)),
 		Machines: flattenMachines(ctx, recordAccount.Machines),

@@ -52,6 +52,7 @@ func AccountListDataSource(ctx context.Context, state *models.DataSourceAccountL
 			AccountName: types.StringValue(item.Name),
 			Status: types.StringValue(item.Status),
 			UpdatedTime: types.Int64Value(int64(item.UpdatedTime)),
+			Desc: types.StringValue(item.Description),
 		}
 		i.ComputeFeatures, diags = types.ListValueFrom(ctx, types.StringType, item.ComputeFeatures)
 		if diags.HasError() {

@@ -48,6 +48,7 @@ func AccountListDeletedDataSource(ctx context.Context, state *models.DataSourceA
 		i := models.ItemAccountListDeletedModel{
 			CreatedTime: types.Int64Value(int64(item.CreatedTime)),
 			DeletedTime: types.Int64Value(int64(item.DeletedTime)),
+			Desc: types.StringValue(item.Description),
 			AccountID: types.Int64Value(int64(item.ID)),
 			AccountName: types.StringValue(item.Name),
 			Status: types.StringValue(item.Status),

@@ -90,6 +90,7 @@ func AccountRGListDataSource(ctx context.Context, state *models.DataSourceAccoun
 			CreatedTime: types.Int64Value(int64(item.CreatedTime)),
 			DeletedBy: types.StringValue(item.DeletedBy),
 			DeletedTime: types.Int64Value(int64(item.DeletedTime)),
+			Desc: types.StringValue(item.Description),
 			RGID: types.Int64Value(int64(item.RGID)),
 			Milestones: types.Int64Value(int64(item.Milestones)),
 			RGName: types.StringValue(item.RGName),

@@ -7,6 +7,7 @@ import (
 	"github.com/hashicorp/terraform-plugin-framework/types/basetypes"

 	"repository.basistech.ru/BASIS/terraform-provider-dynamix/internal/client"
+	"repository.basistech.ru/BASIS/terraform-provider-dynamix/internal/flattens"

 	"github.com/hashicorp/terraform-plugin-framework/diag"
 	"github.com/hashicorp/terraform-plugin-framework/types"
@@ -47,13 +48,13 @@ func AccountResource(ctx context.Context, state *models.ResourceAccountModel, c
 	*state = models.ResourceAccountModel{
 		// request fields
 		AccountName: types.StringValue(recordAccount.Name),
-		Username: state.Username,
-		EmailAddress: state.EmailAddress,
-		SendAccessEmails: state.SendAccessEmails,
-		Users: state.Users,
+		SendAccessEmails: types.BoolValue(recordAccount.SendAccessEmails),
+		Users: flattenUsers(ctx, recordAccount.ACL),
 		Restore: state.Restore,
 		Permanently: state.Permanently,
+		Desc: types.StringValue(recordAccount.Description),
 		Enable: state.Enable,
+		Reason: state.Reason,
 		ResourceLimits: flattenResourceLimitsInAccountResource(ctx, recordAccount.ResourceLimits, state),
 		Timeouts: state.Timeouts,
@@ -67,6 +68,7 @@ func AccountResource(ctx context.Context, state *models.ResourceAccountModel, c
 		Company: types.StringValue(recordAccount.Company),
 		CompanyURL: types.StringValue(recordAccount.CompanyURL),
 		Computes: flattenComputes(ctx, recordAccount.Computes),
+		ComputeFeatures: flattens.FlattenSimpleTypeToList(ctx, types.StringType, recordAccount.ComputeFeatures),
 		CPUAllocationParameter: types.StringValue(recordAccount.CPUAllocationParameter),
 		CPUAllocationRatio: types.Float64Value(recordAccount.CPUAllocationRatio),
 		CreatedBy: types.StringValue(recordAccount.CreatedBy),
@@ -74,23 +76,22 @@ func AccountResource(ctx context.Context, state *models.ResourceAccountModel, c
 		DeactivationTime: types.Float64Value(recordAccount.DeactivationTime),
 		DeletedBy: types.StringValue(recordAccount.DeletedBy),
 		DeletedTime: types.Int64Value(int64(recordAccount.DeletedTime)),
-		//Description: types.StringValue(recordAccount.Description),
 		DisplayName: types.StringValue(recordAccount.DisplayName),
 		GUID: types.Int64Value(int64(recordAccount.GUID)),
 		Machines: flattenMachines(ctx, recordAccount.Machines),
 		Status: types.StringValue(recordAccount.Status),
 		UpdatedTime: types.Int64Value(int64(recordAccount.UpdatedTime)),
 		Version: types.Int64Value(int64(recordAccount.Version)),
+		VINS: flattens.FlattenSimpleTypeToList(ctx, types.Int64Type, recordAccount.VINS),
 		VINSes: types.Int64Value(int64(recordAccount.VINSes)),
 	}

-	state.VINS, diags = types.ListValueFrom(ctx, types.Int64Type, recordAccount.VINS)
-	if diags.HasError() {
-		tflog.Error(ctx, fmt.Sprint("flattens.AccountResource: cannot flatten recordAccount.VINS to state.VINS", diags))
-	}
-	state.ComputeFeatures, diags = types.ListValueFrom(ctx, types.StringType, recordAccount.ComputeFeatures)
-	if diags.HasError() {
-		tflog.Error(ctx, fmt.Sprint("flattens.AccountResource: cannot flatten recordAccount.ComputeFeatures to state.ComputeFeatures", diags))
-	}
+	if state.Enable == types.BoolNull() {
+		state.Enable = types.BoolValue(false)
+		if recordAccount.Status == "CONFIRMED" {
+			state.Enable = types.BoolValue(true)
+		}
+	}

 	tflog.Info(ctx, "flattens.AccountResource: after flatten", map[string]any{"account_id": state.Id.ValueString()})
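The hunk above derives `enable` from the account status when the practitioner left it unset. A stdlib-only sketch of that fallback, with a `*bool` standing in for `types.Bool` and its null state (names are illustrative, not the provider's API):

```go
package main

import "fmt"

// deriveEnable mirrors the new fallback in AccountResource: when `enable`
// was not set in the plan (nil here models types.BoolNull()), the value is
// inferred from the account status - true only for "CONFIRMED".
func deriveEnable(planned *bool, status string) bool {
	if planned != nil {
		return *planned // an explicit value always wins
	}
	return status == "CONFIRMED"
}

func main() {
	fmt.Println(deriveEnable(nil, "CONFIRMED"), deriveEnable(nil, "DISABLED")) // prints: true false
}
```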
@@ -137,3 +138,30 @@ func flattenResourceLimitsInAccountResource(ctx context.Context, limits account.
 	tflog.Info(ctx, "End flattenResourceLimitsInAccountResource")
 	return res
 }
+
+func flattenUsers(ctx context.Context, aclList []account.RecordACL) types.Set {
+	tflog.Info(ctx, "Start flattenUsers")
+	tempSlice := make([]types.Object, 0, len(aclList))
+	for i, item := range aclList {
+		if i == 0 {
+			continue
+		}
+		temp := models.UsersModel{
+			UserID: types.StringValue(item.UgroupID),
+			AccessType: types.StringValue(item.Rights),
+		}
+		obj, diags := types.ObjectValueFrom(ctx, models.ItemUsersResource, temp)
+		if diags != nil {
+			tflog.Error(ctx, fmt.Sprint("Error flattenUsers struct to obj", diags))
+		}
+		tempSlice = append(tempSlice, obj)
+	}
+
+	res, diags := types.SetValueFrom(ctx, types.ObjectType{AttrTypes: models.ItemUsersResource}, tempSlice)
+	if diags != nil {
+		tflog.Error(ctx, fmt.Sprint("Error flattenUsers", diags))
+	}
+
+	tflog.Info(ctx, "End flattenUsers")
+	return res
+}
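The new `flattenUsers` skips the first ACL entry before mapping the rest into `users`. A stdlib-only sketch of that shape with plain structs; the field names and the assumption that entry 0 is the account owner's own ACL record (and therefore not managed through `users`) are illustrative, not confirmed by the diff:

```go
package main

import "fmt"

// recordACL stands in for account.RecordACL; only the two fields used
// here are modeled, and their names are assumptions for illustration.
type recordACL struct {
	UgroupID string
	Rights   string
}

type userModel struct {
	UserID     string
	AccessType string
}

// flattenUsersSketch mirrors the loop shape of the real flattenUsers:
// drop index 0, map every remaining ACL entry to a user model.
func flattenUsersSketch(acl []recordACL) []userModel {
	out := make([]userModel, 0, len(acl))
	for i, item := range acl {
		if i == 0 {
			continue // first entry is skipped, as in the real flattenUsers
		}
		out = append(out, userModel{UserID: item.UgroupID, AccessType: item.Rights})
	}
	return out
}

func main() {
	acl := []recordACL{{"owner", "CXDRAU"}, {"alice", "R"}, {"bob", "RCX"}}
	fmt.Println(flattenUsersSketch(acl)) // prints: [{alice R} {bob RCX}]
}
```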

@@ -27,6 +27,7 @@ type DataSourceAccountModel struct {
 	DeactivationTime types.Float64 `tfsdk:"deactivation_time"`
 	DeletedBy types.String `tfsdk:"deleted_by"`
 	DeletedTime types.Int64 `tfsdk:"deleted_time"`
+	Desc types.String `tfsdk:"desc"`
 	DisplayName types.String `tfsdk:"displayname"`
 	GUID types.Int64 `tfsdk:"guid"`
 	Machines types.Object `tfsdk:"machines"`

@@ -31,6 +31,7 @@ type ItemAccountListModel struct {
 	Status types.String `tfsdk:"status"`
 	UpdatedTime types.Int64 `tfsdk:"updated_time"`
 	ComputeFeatures types.List `tfsdk:"compute_features"`
+	Desc types.String `tfsdk:"desc"`
 }

 type RecordACLModel struct {

@@ -25,6 +25,7 @@ type ItemAccountListDeletedModel struct {
 	ACL []RecordACLModel `tfsdk:"acl"`
 	CreatedTime types.Int64 `tfsdk:"created_time"`
 	DeletedTime types.Int64 `tfsdk:"deleted_time"`
+	Desc types.String `tfsdk:"desc"`
 	AccountID types.Int64 `tfsdk:"account_id"`
 	AccountName types.String `tfsdk:"account_name"`
 	Status types.String `tfsdk:"status"`

@@ -31,6 +31,7 @@ type ItemAccountRGModel struct {
 	CreatedTime types.Int64 `tfsdk:"created_time"`
 	DeletedBy types.String `tfsdk:"deleted_by"`
 	DeletedTime types.Int64 `tfsdk:"deleted_time"`
+	Desc types.String `tfsdk:"desc"`
 	RGID types.Int64 `tfsdk:"rg_id"`
 	Milestones types.Int64 `tfsdk:"milestones"`
 	RGName types.String `tfsdk:"rg_name"`

@@ -9,17 +9,16 @@ import (
 type ResourceAccountModel struct {
 	// request fields - required
 	AccountName types.String `tfsdk:"account_name"`
-	Username types.String `tfsdk:"username"`

 	// request fields - optional
-	EmailAddress types.String `tfsdk:"emailaddress"`
 	SendAccessEmails types.Bool `tfsdk:"send_access_emails"`
-	Users types.List `tfsdk:"users"`
+	Users types.Set `tfsdk:"users"`
 	Restore types.Bool `tfsdk:"restore"`
 	Permanently types.Bool `tfsdk:"permanently"`
 	Enable types.Bool `tfsdk:"enable"`
 	ResourceLimits types.Object `tfsdk:"resource_limits"`
 	Timeouts timeouts.Value `tfsdk:"timeouts"`
+	Desc types.String `tfsdk:"desc"`

 	// response fields
 	Id types.String `tfsdk:"id"`
@@ -42,6 +41,7 @@ type ResourceAccountModel struct {
 	DisplayName types.String `tfsdk:"displayname"`
 	GUID types.Int64 `tfsdk:"guid"`
 	Machines types.Object `tfsdk:"machines"`
+	Reason types.String `tfsdk:"reason"`
 	Status types.String `tfsdk:"status"`
 	UpdatedTime types.Int64 `tfsdk:"updated_time"`
 	Version types.Int64 `tfsdk:"version"`
@@ -54,6 +54,11 @@ type UsersModel struct {
 	AccessType types.String `tfsdk:"access_type"`
 }

+var ItemUsersResource = map[string]attr.Type{
+	"user_id": types.StringType,
+	"access_type": types.StringType,
+}
+
 type ResourceLimitsInAccountResourceModel struct {
 	CUC types.Float64 `tfsdk:"cu_c"`
 	CUD types.Float64 `tfsdk:"cu_d"`

@@ -137,7 +137,7 @@ func (r *resourceAccount) Update(ctx context.Context, req resource.UpdateRequest
 	// enable/disable account
 	if !plan.Enable.Equal(state.Enable) && !plan.Enable.IsNull() {
-		resp.Diagnostics.Append(utilities.EnableDisableAccount(ctx, uint64(accountId), plan.Enable.ValueBool(), r.client)...)
+		resp.Diagnostics.Append(utilities.EnableDisableAccount(ctx, uint64(accountId), &plan, r.client)...)
 		if resp.Diagnostics.HasError() {
 			tflog.Error(ctx, "Update resourceAccount: Error enabling/disabling account")
 			return
@@ -204,9 +204,6 @@ func (r *resourceAccount) Delete(ctx context.Context, req resource.DeleteRequest
 	defer cancel()

 	permanently := state.Permanently.ValueBool()
-	if state.Permanently.IsNull() {
-		permanently = true
-	} // default true

 	// Delete existing resource group
 	delReq := account.DeleteRequest{

@@ -73,6 +73,9 @@ func MakeSchemaDataSourceAccount() map[string]schema.Attribute {
 		"deleted_by": schema.StringAttribute{
 			Computed: true,
 		},
+		"desc": schema.StringAttribute{
+			Computed: true,
+		},
 		"deleted_time": schema.Int64Attribute{
 			Computed: true,
 		},

@@ -83,6 +83,9 @@ func MakeSchemaDataSourceAccountList() map[string]schema.Attribute {
 		"account_id": schema.Int64Attribute{
 			Computed: true,
 		},
+		"desc": schema.StringAttribute{
+			Computed: true,
+		},
 		"account_name": schema.StringAttribute{
 			Computed: true,
 		},

@@ -72,6 +72,9 @@ func MakeSchemaDataSourceAccountListDeleted() map[string]schema.Attribute {
 		"deleted_time": schema.Int64Attribute{
 			Computed: true,
 		},
+		"desc": schema.StringAttribute{
+			Computed: true,
+		},
 		"account_id": schema.Int64Attribute{
 			Computed: true,
 		},

@@ -201,6 +201,9 @@ func MakeSchemaDataSourceAccountRGList() map[string]schema.Attribute {
 		"deleted_time": schema.Int64Attribute{
 			Computed: true,
 		},
+		"desc": schema.StringAttribute{
+			Computed: true,
+		},
 		"rg_id": schema.Int64Attribute{
 			Computed: true,
 		},

@@ -2,6 +2,7 @@ package schemas

 import (
 	"github.com/hashicorp/terraform-plugin-framework/resource/schema"
+	"github.com/hashicorp/terraform-plugin-framework/resource/schema/booldefault"
 	"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
 	"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
 	"github.com/hashicorp/terraform-plugin-framework/types"
@@ -14,21 +15,19 @@ func MakeSchemaResourceAccount() map[string]schema.Attribute {
 			Required: true,
 			Description: "name of the account",
 		},
-		"username": schema.StringAttribute{
-			Required: true,
-			Description: "username of owner the account",
-		},
 		// optional attributes
-		"emailaddress": schema.StringAttribute{
-			Optional: true,
-			Description: "email",
-		},
 		"send_access_emails": schema.BoolAttribute{
 			Optional: true,
+			Computed: true,
+			Default: booldefault.StaticBool(false),
 			Description: "if true send emails when a user is granted access to resources",
 		},
-		"users": schema.ListNestedAttribute{
+		"desc": schema.StringAttribute{
+			Optional: true,
+			Description: "description of the account",
+		},
+		"users": schema.SetNestedAttribute{
 			Optional: true,
 			NestedObject: schema.NestedAttributeObject{
 				Attributes: map[string]schema.Attribute{
@@ -43,16 +42,25 @@ func MakeSchemaResourceAccount() map[string]schema.Attribute {
 		},
 		"restore": schema.BoolAttribute{
 			Optional: true,
+			Computed: true,
 			Description: "restore a deleted account",
+			Default: booldefault.StaticBool(true),
 		},
 		"permanently": schema.BoolAttribute{
 			Optional: true,
+			Computed: true,
 			Description: "whether to completely delete the account",
-			// default is false
+			Default: booldefault.StaticBool(true),
 		},
 		"enable": schema.BoolAttribute{
 			Optional: true,
+			Computed: true,
 			Description: "enable/disable account",
+			Default: booldefault.StaticBool(true),
+		},
+		"reason": schema.StringAttribute{
+			Optional: true,
+			Description: "reason to disable",
 		},
 		"resource_limits": schema.SingleNestedAttribute{
 			Optional: true,
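The hunk above moves defaulting for `restore`, `permanently` and `enable` into the schema via `booldefault.StaticBool`, replacing ad-hoc defaulting such as the `IsNull()` check removed from `Delete`. A stdlib-only sketch of the resulting behavior (`resolveBool` is an illustrative model, not the framework's API):

```go
package main

import "fmt"

// resolveBool models how an Optional+Computed bool attribute with a static
// default behaves: a value set in the configuration wins; otherwise the
// schema default is written into the plan.
func resolveBool(configured *bool, schemaDefault bool) bool {
	if configured != nil {
		return *configured
	}
	return schemaDefault
}

func main() {
	f := false
	// unset -> schema default true; explicit false -> false
	fmt.Println(resolveBool(nil, true), resolveBool(&f, true)) // prints: true false
}
```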

@@ -120,10 +120,11 @@ func RestoreAccount(ctx context.Context, accountId uint64, c *client.Client) dia
 // EnableDisableAccount performs account Enable/Disable request.
 // Returns error in case of failures.
-func EnableDisableAccount(ctx context.Context, accountId uint64, enable bool, c *client.Client) diag.Diagnostics {
+func EnableDisableAccount(ctx context.Context, accountId uint64, plan *models.ResourceAccountModel, c *client.Client) diag.Diagnostics {
 	tflog.Info(ctx, "Start EnableDisableAccount", map[string]any{"account_id": accountId})
 	diags := diag.Diagnostics{}

+	enable := plan.Enable.ValueBool()
 	if enable {
 		tflog.Info(ctx, "EnableDisableAccount: before calling CloudAPI().Account().Enable", map[string]any{"account_id": accountId})
@@ -141,7 +142,8 @@ func EnableDisableAccount(ctx context.Context, accountId uint64, enable bool, c
 	}

 	tflog.Info(ctx, "EnableDisableAccount: before calling CloudAPI().Account().Disable", map[string]any{"account_id": accountId})
-	res, err := c.CloudAPI().Account().Disable(ctx, account.DisableEnableRequest{AccountID: accountId})
+	reason := plan.Reason.ValueString()
+	res, err := c.CloudAPI().Account().Disable(ctx, account.DisableEnableRequest{AccountID: accountId, Reason: reason})
 	if err != nil {
 		diags.AddError(
 			"EnableDisableAccount: cannot disable account",
@@ -172,6 +174,12 @@ func UpdateAccount(ctx context.Context, accountId uint64, plan, state *models.Re
 		updateNeeded = true
 	}

+	// check if description was changed
+	if !plan.Desc.Equal(state.Desc) {
+		updateReq.Description = plan.Desc.ValueString()
+		updateNeeded = true
+	}
+
 	// check if resource_limits were changed
 	if !plan.ResourceLimits.Equal(state.ResourceLimits) && !plan.ResourceLimits.IsUnknown() {
 		tflog.Info(ctx, "UpdateAccount: new ResourceLimits specified", map[string]any{"account_id": accountId})

@ -83,6 +83,7 @@ func BServiceGroupDataSource(ctx context.Context, state *models.RecordGroupModel
ID: types.Int64Value(int64(v.ID)),
IPAddresses: flattens.FlattenSimpleTypeToList(ctx, types.StringType, ipAddresses),
Name: types.StringValue(v.Name),
Chipset: types.StringValue(v.Chipset),
OSUsers: osUsers,
}
computesList = append(computesList, temp)

@ -57,6 +57,7 @@ func BServiceGroupResource(ctx context.Context, plan *models.ResourceRecordGroup
RAM: types.Int64Value(int64(recordResourceGroup.RAM)),
Disk: types.Int64Value(int64(recordResourceGroup.Disk)),
ImageID: types.Int64Value(int64(recordResourceGroup.ImageID)),
Chipset: plan.Chipset,
Driver: types.StringValue(recordResourceGroup.Driver),
SEPID: types.Int64Value(int64(recordResourceGroup.SEPID)),
SepPool: types.StringValue(recordResourceGroup.PoolName),
@ -112,6 +113,7 @@ func flattenGroupComputes(ctx context.Context, items bservice.ListGroupComputes)
ID: types.Int64Value(int64(v.ID)),
Name: types.StringValue(v.Name),
IPAddresses: flattens.FlattenSimpleTypeToList(ctx, types.StringType, v.IPAddresses),
Chipset: types.StringValue(v.Chipset),
OSUsers: flattenOSuser(ctx, v.OSUsers),
}
obj, diags := types.ObjectValueFrom(ctx, models.ResourceItemGroupCompute, temp)

@ -48,6 +48,7 @@ type ItemGroupComputeModel struct {
ID types.Int64 `tfsdk:"id"`
IPAddresses types.List `tfsdk:"ip_addresses"`
Name types.String `tfsdk:"name"`
Chipset types.String `tfsdk:"chipset"`
OSUsers []ItemOSUserModel `tfsdk:"os_users"`
}

@ -22,6 +22,7 @@ type ResourceRecordGroupModel struct {
SepPool types.String `tfsdk:"sep_pool"`
CloudInit types.String `tfsdk:"cloud_init"`
Role types.String `tfsdk:"role"`
Chipset types.String `tfsdk:"chipset"`
TimeoutStart types.Int64 `tfsdk:"timeout_start"`
VINSes types.List `tfsdk:"vinses"`
ExtNets types.List `tfsdk:"extnets"`
@ -61,6 +62,7 @@ type ResourceItemGroupComputeModel struct {
ID types.Int64 `tfsdk:"id"`
IPAddresses types.List `tfsdk:"ip_addresses"`
Name types.String `tfsdk:"name"`
Chipset types.String `tfsdk:"chipset"`
OSUsers types.List `tfsdk:"os_users"`
}
@ -68,6 +70,7 @@ var ResourceItemGroupCompute = map[string]attr.Type{
"id": types.Int64Type,
"ip_addresses": types.ListType{ElemType: types.StringType},
"name": types.StringType,
"chipset": types.StringType,
"os_users": types.ListType{ElemType: types.ObjectType{AttrTypes: ResourceItemOSUser}},
}

@ -33,6 +33,9 @@ func MakeSchemaDataSourceBServiceGroup() map[string]schema.Attribute {
"name": schema.StringAttribute{
Computed: true,
},
"chipset": schema.StringAttribute{
Computed: true,
},
"os_users": schema.ListNestedAttribute{
Computed: true,
NestedObject: schema.NestedAttributeObject{

@ -2,6 +2,7 @@ package schemas
import (
"github.com/hashicorp/terraform-plugin-framework/resource/schema"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/booldefault"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
)
@ -24,15 +25,23 @@ func MakeSchemaResourceBService() map[string]schema.Attribute {
},
"permanently": schema.BoolAttribute{
Optional: true,
Computed: true,
Default: booldefault.StaticBool(true),
},
"enable": schema.BoolAttribute{
Optional: true,
Computed: true,
Default: booldefault.StaticBool(true),
},
"restore": schema.BoolAttribute{
Optional: true,
Computed: true,
Default: booldefault.StaticBool(false),
},
"start": schema.BoolAttribute{
Optional: true,
Computed: true,
Default: booldefault.StaticBool(false),
},
"service_id": schema.Int64Attribute{
Optional: true,

@ -4,6 +4,7 @@ import (
"github.com/hashicorp/terraform-plugin-framework-validators/int64validator"
"github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator"
"github.com/hashicorp/terraform-plugin-framework/resource/schema"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/booldefault"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
"github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
"github.com/hashicorp/terraform-plugin-framework/schema/validator"
@ -42,6 +43,13 @@ func MakeSchemaResourceBServiceGroup() map[string]schema.Attribute {
"driver": schema.StringAttribute{
Required: true,
},
"chipset": schema.StringAttribute{
Optional: true,
Validators: []validator.String{
stringvalidator.OneOf("i440fx", "Q35"),
},
Description: "Type of the emulated system, Q35 or i440fx",
},
"sep_id": schema.Int64Attribute{
Optional: true,
Computed: true,
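`stringvalidator.OneOf("i440fx", "Q35")` constrains the new `chipset` attribute at plan time. The membership test it performs amounts to the following (a plain-Go sketch; the real validator also handles null and unknown values):

```go
package main

import "fmt"

// oneOf reports an error unless value is in the allowed set — the same
// membership test stringvalidator.OneOf applies to "chipset".
func oneOf(value string, allowed ...string) error {
	for _, a := range allowed {
		if value == a {
			return nil
		}
	}
	return fmt.Errorf("value %q must be one of %v", value, allowed)
}

func main() {
	fmt.Println(oneOf("Q35", "i440fx", "Q35")) // passes: <nil>
	fmt.Println(oneOf("q35", "i440fx", "Q35")) // rejected: check is case-sensitive
}
```

Note the mixed casing in the allowed set ("i440fx" lowercase, "Q35" uppercase): the comparison is case-sensitive, so a user writing `q35` in their configuration gets a plan-time validation error.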
@ -79,12 +87,18 @@ func MakeSchemaResourceBServiceGroup() map[string]schema.Attribute {
},
"start": schema.BoolAttribute{
Optional: true,
Computed: true,
Default: booldefault.StaticBool(false),
},
"force_stop": schema.BoolAttribute{
Optional: true,
Computed: true,
Default: booldefault.StaticBool(false),
},
"force_update": schema.BoolAttribute{
Optional: true,
Computed: true,
Default: booldefault.StaticBool(false),
},
"parents": schema.ListAttribute{
Optional: true,
@ -119,6 +133,10 @@ func MakeSchemaResourceBServiceGroup() map[string]schema.Attribute {
"name": schema.StringAttribute{
Computed: true,
},
"chipset": schema.StringAttribute{
Computed: true,
Description: "Type of the emulated system, Q35 or i440fx",
},
"os_users": schema.ListNestedAttribute{
Computed: true,
NestedObject: schema.NestedAttributeObject{

@ -48,6 +48,12 @@ func BServiceGroupResourceCreate(ctx context.Context, plan *models.ResourceRecor
req.SEPID = uint64(plan.SEPID.ValueInt64())
}
if plan.Chipset.IsNull() {
req.Chipset = "i440fx"
} else {
req.Chipset = plan.Chipset.ValueString()
}
if !plan.SepPool.IsNull() {
req.SEPPool = plan.SepPool.ValueString()
}
@ -131,6 +137,12 @@ func BServiceGroupResize(ctx context.Context, plan *models.ResourceRecordGroupMo
Mode: plan.Mode.ValueString(),
}
if plan.Chipset.IsNull() {
req.Chipset = "i440fx"
} else {
req.Chipset = plan.Chipset.ValueString()
}
tflog.Info(ctx, "BServiceGroupResize: before calling CloudAPI().BService().GroupResize", map[string]any{"req": req})
_, err = c.CloudAPI().BService().GroupResize(ctx, req)
if err != nil {

@ -39,18 +39,23 @@ func DiskDataSource(ctx context.Context, state *models.DataSourceDiskModel, c *c
Timeouts: state.Timeouts,
// computed fields
ID: types.StringValue(id.String()),
AccountID: types.Int64Value(int64(recordDisk.AccountID)),
AccountName: types.StringValue(recordDisk.AccountName),
ACL: types.StringValue(string(diskAcl)),
Computes: flattenComputes(ctx, recordDisk.Computes),
CreatedBy: types.StringValue(recordDisk.CreatedBy),
CreatedTime: types.Int64Value(int64(recordDisk.CreatedTime)),
DeletedBy: types.StringValue(recordDisk.DeletedBy),
DeletedTime: types.Int64Value(int64(recordDisk.DeletedTime)),
Description: types.StringValue(recordDisk.Description),
DestructionTime: types.Int64Value(int64(recordDisk.DestructionTime)),
DeviceName: types.StringValue(recordDisk.DeviceName),
GID: types.Int64Value(int64(recordDisk.GID)),
ImageID: types.Int64Value(int64(recordDisk.ImageID)),
MachineID: types.Int64Value(int64(recordDisk.MachineID)),
MachineName: types.StringValue(recordDisk.MachineName),
Milestones: types.Int64Value(int64(recordDisk.Milestones)),
Name: types.StringValue(recordDisk.Name),
Order: types.Int64Value(int64(recordDisk.Order)),
Params: types.StringValue(recordDisk.Params),
@ -64,20 +69,23 @@ func DiskDataSource(ctx context.Context, state *models.DataSourceDiskModel, c *c
SepID: types.Int64Value(int64(recordDisk.SepID)),
SepType: types.StringValue(recordDisk.SepType),
Shareable: types.BoolValue(recordDisk.Shareable),
SizeAvailable: types.Float64Value(recordDisk.SizeAvailable),
SizeMax: types.Int64Value(int64(recordDisk.SizeMax)),
SizeUsed: types.Float64Value(recordDisk.SizeUsed),
Snapshots: flattenSnapshots(ctx, recordDisk.Snapshots),
Status: types.StringValue(recordDisk.Status),
TechStatus: types.StringValue(recordDisk.TechStatus),
Type: types.StringValue(recordDisk.Type),
UpdatedBy: types.StringValue(recordDisk.UpdatedBy),
UpdatedTime: types.Int64Value(int64(recordDisk.UpdatedTime)),
VMID: types.Int64Value(int64(recordDisk.VMID)),
}
state.Images, diags = types.ListValueFrom(ctx, types.Int64Type, recordDisk.Images)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskDataSource: cannot flatten recordDisk.Images to state.Images", diags))
}
state.PresentTo, diags = types.MapValueFrom(ctx, types.Int64Type, recordDisk.PresentTo)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskDataSource: cannot flatten recordDisk.PresentTo to state.PresentTo", diags))
}

@ -47,7 +47,7 @@ func DiskListDataSource(ctx context.Context, state *models.DataSourceDiskListMod
Timeouts: state.Timeouts,
// computed fields
ID: types.StringValue(id.String()),
EntryCount: types.Int64Value(int64(diskList.EntryCount)),
}
@ -59,7 +59,9 @@ func DiskListDataSource(ctx context.Context, state *models.DataSourceDiskListMod
AccountName: types.StringValue(recordDisk.AccountName),
ACL: types.StringValue(string(diskAcl)),
Computes: flattenComputes(ctx, recordDisk.Computes),
CreatedBy: types.StringValue(recordDisk.CreatedBy),
CreatedTime: types.Int64Value(int64(recordDisk.CreatedTime)),
DeletedBy: types.StringValue(recordDisk.DeletedBy),
DeletedTime: types.Int64Value(int64(recordDisk.DeletedTime)),
Description: types.StringValue(recordDisk.Description),
DestructionTime: types.Int64Value(int64(recordDisk.DestructionTime)),
@ -68,6 +70,9 @@ func DiskListDataSource(ctx context.Context, state *models.DataSourceDiskListMod
ImageID: types.Int64Value(int64(recordDisk.ImageID)),
DiskId: types.Int64Value(int64(recordDisk.ID)),
DiskName: types.StringValue(recordDisk.Name),
MachineID: types.Int64Value(int64(recordDisk.MachineID)),
MachineName: types.StringValue(recordDisk.MachineName),
Milestones: types.Int64Value(int64(recordDisk.Milestones)),
Order: types.Int64Value(int64(recordDisk.Order)),
Params: types.StringValue(recordDisk.Params),
ParentID: types.Int64Value(int64(recordDisk.ParentID)),
@ -80,20 +85,23 @@ func DiskListDataSource(ctx context.Context, state *models.DataSourceDiskListMod
SepID: types.Int64Value(int64(recordDisk.SepID)),
SepType: types.StringValue(recordDisk.SepType),
Shareable: types.BoolValue(recordDisk.Shareable),
SizeAvailable: types.Float64Value(recordDisk.SizeAvailable),
SizeMax: types.Int64Value(int64(recordDisk.SizeMax)),
SizeUsed: types.Float64Value(recordDisk.SizeUsed),
Snapshots: flattenSnapshots(ctx, recordDisk.Snapshots),
Status: types.StringValue(recordDisk.Status),
TechStatus: types.StringValue(recordDisk.TechStatus),
Type: types.StringValue(recordDisk.Type),
UpdatedBy: types.StringValue(recordDisk.UpdatedBy),
UpdatedTime: types.Int64Value(int64(recordDisk.UpdatedTime)),
VMID: types.Int64Value(int64(recordDisk.VMID)),
}
d.Images, diags = types.ListValueFrom(ctx, types.Int64Type, recordDisk.Images)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskListDataSource: cannot flatten recordDisk.Images to d.Images", diags))
}
d.PresentTo, diags = types.MapValueFrom(ctx, types.Int64Type, recordDisk.PresentTo)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskListDataSource: cannot flatten recordDisk.PresentTo to d.PresentTo", diags))
}

@ -44,7 +44,7 @@ func DiskListDeletedDataSource(ctx context.Context, state *models.DataSourceDisk
Timeouts: state.Timeouts,
// computed fields
ID: types.StringValue(id.String()),
EntryCount: types.Int64Value(int64(diskList.EntryCount)),
}
@ -56,7 +56,9 @@ func DiskListDeletedDataSource(ctx context.Context, state *models.DataSourceDisk
AccountName: types.StringValue(recordDisk.AccountName),
ACL: types.StringValue(string(diskAcl)),
Computes: flattenComputes(ctx, recordDisk.Computes),
CreatedBy: types.StringValue(recordDisk.CreatedBy),
CreatedTime: types.Int64Value(int64(recordDisk.CreatedTime)),
DeletedBy: types.StringValue(recordDisk.DeletedBy),
DeletedTime: types.Int64Value(int64(recordDisk.DeletedTime)),
Description: types.StringValue(recordDisk.Description),
DestructionTime: types.Int64Value(int64(recordDisk.DestructionTime)),
@ -65,6 +67,7 @@ func DiskListDeletedDataSource(ctx context.Context, state *models.DataSourceDisk
ImageID: types.Int64Value(int64(recordDisk.ImageID)),
DiskId: types.Int64Value(int64(recordDisk.ID)),
DiskName: types.StringValue(recordDisk.Name),
Milestones: types.Int64Value(int64(recordDisk.Milestones)),
Order: types.Int64Value(int64(recordDisk.Order)),
Params: types.StringValue(recordDisk.Params),
ParentID: types.Int64Value(int64(recordDisk.ParentID)),
@ -77,20 +80,23 @@ func DiskListDeletedDataSource(ctx context.Context, state *models.DataSourceDisk
SepID: types.Int64Value(int64(recordDisk.SepID)),
SepType: types.StringValue(recordDisk.SepType),
Shareable: types.BoolValue(recordDisk.Shareable),
SizeAvailable: types.Float64Value(recordDisk.SizeAvailable),
SizeMax: types.Int64Value(int64(recordDisk.SizeMax)),
SizeUsed: types.Float64Value(recordDisk.SizeUsed),
Snapshots: flattenSnapshots(ctx, recordDisk.Snapshots),
Status: types.StringValue(recordDisk.Status),
TechStatus: types.StringValue(recordDisk.TechStatus),
Type: types.StringValue(recordDisk.Type),
UpdatedBy: types.StringValue(recordDisk.UpdatedBy),
UpdatedTime: types.Int64Value(int64(recordDisk.UpdatedTime)),
VMID: types.Int64Value(int64(recordDisk.VMID)),
}
d.Images, diags = types.ListValueFrom(ctx, types.Int64Type, recordDisk.Images)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskListDeletedDataSource: cannot flatten recordDisk.Images to d.Images", diags))
}
d.PresentTo, diags = types.MapValueFrom(ctx, types.Int64Type, recordDisk.PresentTo)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskListDeletedDataSource: cannot flatten recordDisk.PresentTo to d.PresentTo", diags))
}

@ -97,7 +97,7 @@ func DiskListUnattachedDataSource(ctx context.Context, state *models.DataSourceD
VMID: types.Int64Value(int64(recordDisk.VMID)),
}
d.Images, diags = types.ListValueFrom(ctx, types.Int64Type, recordDisk.Images)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskListUnattachedDataSource: cannot flatten recordDisk.Images to d.Images", diags))
}

@ -47,14 +47,13 @@ func DiskReplicationDataSource(ctx context.Context, state *models.RecordDiskMode
DestructionTime: types.Int64Value(int64(recordDisk.DestructionTime)),
GID: types.Int64Value(int64(recordDisk.GID)),
ImageID: types.Int64Value(int64(recordDisk.ImageID)),
Images: flattens.FlattenSimpleTypeToList(ctx, types.Int64Type, recordDisk.Images),
Name: types.StringValue(recordDisk.Name),
Order: types.Int64Value(int64(recordDisk.Order)),
Params: types.StringValue(recordDisk.Params),
ParentID: types.Int64Value(int64(recordDisk.ParentID)),
PCISlot: types.Int64Value(int64(recordDisk.PCISlot)),
Pool: types.StringValue(recordDisk.Pool),
PurgeTime: types.Int64Value(int64(recordDisk.PurgeTime)),
Replication: &models.ItemReplicationModel{},
ResID: types.StringValue(recordDisk.ResID),
@ -73,6 +72,11 @@ func DiskReplicationDataSource(ctx context.Context, state *models.RecordDiskMode
VMID: types.Int64Value(int64(recordDisk.VMID)),
}
state.PresentTo, diags = types.MapValueFrom(ctx, types.Int64Type, recordDisk.PresentTo)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskReplicationDataSource: cannot flatten recordDisk.PresentTo to state.PresentTo", diags))
}
iotune := models.DiskReplicationIOTune{
ReadBytesSec: types.Int64Value(int64(recordDisk.IOTune.ReadBytesSec)),
ReadBytesSecMax: types.Int64Value(int64(recordDisk.IOTune.ReadBytesSecMax)),

@ -23,7 +23,7 @@ func DiskResource(ctx context.Context, plan *models.ResourceDiskModel, c *client
diags := diag.Diagnostics{}
diskId, err := strconv.ParseUint(plan.ID.ValueString(), 10, 64)
if err != nil {
diags.AddError("flattens.DiskResource: Cannot parse disk ID from state", err.Error())
return diags
@ -43,13 +43,11 @@ func DiskResource(ctx context.Context, plan *models.ResourceDiskModel, c *client
AccountID: types.Int64Value(int64(recordDisk.AccountID)),
DiskName: types.StringValue(recordDisk.Name),
SizeMax: types.Int64Value(int64(recordDisk.SizeMax)),
// optional fields
Description: types.StringValue(recordDisk.Description),
Pool: types.StringValue(recordDisk.Pool),
SEPID: types.Int64Value(int64(recordDisk.SepID)),
Detach: plan.Detach,
Permanently: plan.Permanently,
Shareable: plan.Shareable,
@ -57,16 +55,21 @@ func DiskResource(ctx context.Context, plan *models.ResourceDiskModel, c *client
// computed fields
LastUpdated: plan.LastUpdated,
ID: types.StringValue(strconv.Itoa(int(recordDisk.ID))),
DiskID: types.Int64Value(int64(recordDisk.ID)),
AccountName: types.StringValue(recordDisk.AccountName),
ACL: types.StringValue(string(diskAcl)),
Computes: flattenComputes(ctx, recordDisk.Computes),
CreatedBy: types.StringValue(recordDisk.CreatedBy),
CreatedTime: types.Int64Value(int64(recordDisk.CreatedTime)),
DeletedBy: types.StringValue(recordDisk.DeletedBy),
DeletedTime: types.Int64Value(int64(recordDisk.DeletedTime)),
DestructionTime: types.Int64Value(int64(recordDisk.DestructionTime)),
DeviceName: types.StringValue(recordDisk.DeviceName),
ImageID: types.Int64Value(int64(recordDisk.ImageID)),
MachineID: types.Int64Value(int64(recordDisk.MachineID)),
MachineName: types.StringValue(recordDisk.MachineName),
GID: types.Int64Value(int64(recordDisk.GID)),
Order: types.Int64Value(int64(recordDisk.Order)),
Params: types.StringValue(recordDisk.Params),
ParentID: types.Int64Value(int64(recordDisk.ParentID)),
@ -80,14 +83,17 @@ func DiskResource(ctx context.Context, plan *models.ResourceDiskModel, c *client
Snapshots: flattenSnapshots(ctx, recordDisk.Snapshots),
Status: types.StringValue(recordDisk.Status),
TechStatus: types.StringValue(recordDisk.TechStatus),
Type: types.StringValue(recordDisk.Type),
UpdatedBy: types.StringValue(recordDisk.UpdatedBy),
UpdatedTime: types.Int64Value(int64(recordDisk.UpdatedTime)),
VMID: types.Int64Value(int64(recordDisk.VMID)),
}
plan.Images, diags = types.ListValueFrom(ctx, types.Int64Type, recordDisk.Images)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskResource: cannot flatten recordDisk.Images to plan.Images", diags))
}
plan.PresentTo, diags = types.MapValueFrom(ctx, types.Int64Type, recordDisk.PresentTo)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskResource: cannot flatten recordDisk.PresentTo to plan.PresentTo", diags))
}
@ -129,7 +135,7 @@ func DiskResource(ctx context.Context, plan *models.ResourceDiskModel, c *client
}
plan.IOTune = obj
tflog.Info(ctx, "flattens.DiskResource: after flatten", map[string]any{"disk_id": plan.ID.ValueString()})
tflog.Info(ctx, "End flattens.DiskResource")
return nil

@ -57,13 +57,12 @@ func DiskReplicationResource(ctx context.Context, state *models.ResourceRecordDi
DestructionTime: types.Int64Value(int64(recordDisk.DestructionTime)),
GID: types.Int64Value(int64(recordDisk.GID)),
ImageID: types.Int64Value(int64(recordDisk.ImageID)),
Images: flattens.FlattenSimpleTypeToList(ctx, types.Int64Type, recordDisk.Images),
Order: types.Int64Value(int64(recordDisk.Order)),
Params: types.StringValue(recordDisk.Params),
ParentID: types.Int64Value(int64(recordDisk.ParentID)),
PCISlot: types.Int64Value(int64(recordDisk.PCISlot)),
Pool: types.StringValue(recordDisk.Pool),
PurgeTime: types.Int64Value(int64(recordDisk.PurgeTime)),
ResID: types.StringValue(recordDisk.ResID),
ResName: types.StringValue(recordDisk.ResName),
@ -80,6 +79,11 @@ func DiskReplicationResource(ctx context.Context, state *models.ResourceRecordDi
VMID: types.Int64Value(int64(recordDisk.VMID)), VMID: types.Int64Value(int64(recordDisk.VMID)),
} }
state.PresentTo, diags = types.MapValueFrom(ctx, types.Int64Type, recordDisk.PresentTo)
if diags != nil {
tflog.Error(ctx, fmt.Sprint("flattens.DiskDataSource: cannot flatten recordDisk.PresentTo to state.PresentTo", diags))
}
iotune := models.ResourceDiskReplicationIOTuneModel{ iotune := models.ResourceDiskReplicationIOTuneModel{
ReadBytesSec: types.Int64Value(int64(recordDisk.IOTune.ReadBytesSec)), ReadBytesSec: types.Int64Value(int64(recordDisk.IOTune.ReadBytesSec)),
ReadBytesSecMax: types.Int64Value(int64(recordDisk.IOTune.ReadBytesSecMax)), ReadBytesSecMax: types.Int64Value(int64(recordDisk.IOTune.ReadBytesSecMax)),

@@ -22,12 +22,6 @@ func resourceDiskCreateInputChecks(ctx context.Context, plan *models.ResourceDis
 		diags.AddError(fmt.Sprintf("Cannot get info about account with ID %v", accountId), err.Error())
 	}
-	gid := uint64(plan.GID.ValueInt64())
-	tflog.Info(ctx, "resourceDiskCreateInputChecks: exist gid check", map[string]any{"gid": gid})
-	err = ic.ExistGID(ctx, gid, c)
-	if err != nil {
-		diags.AddError(fmt.Sprintf("Cannot get info about GID %v", gid), err.Error())
-	}
 	return diags
 }
@@ -62,7 +56,7 @@ func resourceDiskUpdateInputChecks(ctx context.Context, plan, state *models.Reso
 			fmt.Sprintf("cannot change description from %s to %s for disk id %s",
 				state.Description.ValueString(),
 				plan.Description.ValueString(),
-				plan.Id.ValueString()))
+				plan.ID.ValueString()))
 	}
 	// check pool
@@ -72,7 +66,7 @@ func resourceDiskUpdateInputChecks(ctx context.Context, plan, state *models.Reso
 			fmt.Sprintf("cannot change pool from %s to %s for disk id %s",
 				state.Pool.ValueString(),
 				plan.Pool.ValueString(),
-				plan.Id.ValueString()))
+				plan.ID.ValueString()))
 	}
 	// check sep_id
@@ -82,17 +76,7 @@ func resourceDiskUpdateInputChecks(ctx context.Context, plan, state *models.Reso
 			fmt.Sprintf("cannot change sep_id from %d to %d for disk id %s",
 				state.SEPID.ValueInt64(),
 				plan.SEPID.ValueInt64(),
-				plan.Id.ValueString()))
-	}
-	// check type
-	if !plan.Type.Equal(state.Type) && !plan.Type.IsUnknown() {
-		diags.AddError(
-			"resourceDiskUpdateInputChecks: type change is not allowed",
-			fmt.Sprintf("cannot change type from %s to %s for disk id %s",
-				state.Type.ValueString(),
-				plan.Type.ValueString(),
-				plan.Id.ValueString()))
+				plan.ID.ValueString()))
 	}
 	return diags

@@ -11,12 +11,14 @@ type DataSourceDiskModel struct {
 	Timeouts timeouts.Value `tfsdk:"timeouts"`
 	// response fields
-	Id types.String `tfsdk:"id"`
+	ID types.String `tfsdk:"id"`
 	ACL types.String `tfsdk:"acl"`
 	AccountID types.Int64 `tfsdk:"account_id"`
 	AccountName types.String `tfsdk:"account_name"`
 	Computes types.List `tfsdk:"computes"`
+	CreatedBy types.String `tfsdk:"created_by"`
 	CreatedTime types.Int64 `tfsdk:"created_time"`
+	DeletedBy types.String `tfsdk:"deleted_by"`
 	DeletedTime types.Int64 `tfsdk:"deleted_time"`
 	DeviceName types.String `tfsdk:"devicename"`
 	Description types.String `tfsdk:"desc"`
@@ -25,13 +27,16 @@ type DataSourceDiskModel struct {
 	ImageID types.Int64 `tfsdk:"image_id"`
 	Images types.List `tfsdk:"images"`
 	IOTune types.Object `tfsdk:"iotune"`
+	MachineID types.Int64 `tfsdk:"machine_id"`
+	MachineName types.String `tfsdk:"machine_name"`
+	Milestones types.Int64 `tfsdk:"milestones"`
 	Name types.String `tfsdk:"disk_name"`
 	Order types.Int64 `tfsdk:"order"`
 	Params types.String `tfsdk:"params"`
 	ParentID types.Int64 `tfsdk:"parent_id"`
 	PCISlot types.Int64 `tfsdk:"pci_slot"`
 	Pool types.String `tfsdk:"pool"`
-	PresentTo types.List `tfsdk:"present_to"`
+	PresentTo types.Map `tfsdk:"present_to"`
 	PurgeTime types.Int64 `tfsdk:"purge_time"`
 	ResID types.String `tfsdk:"res_id"`
 	ResName types.String `tfsdk:"res_name"`
@@ -39,11 +44,14 @@ type DataSourceDiskModel struct {
 	SepType types.String `tfsdk:"sep_type"`
 	SepID types.Int64 `tfsdk:"sep_id"`
 	Shareable types.Bool `tfsdk:"shareable"`
+	SizeAvailable types.Float64 `tfsdk:"size_available"`
 	SizeMax types.Int64 `tfsdk:"size_max"`
 	SizeUsed types.Float64 `tfsdk:"size_used"`
 	Snapshots types.List `tfsdk:"snapshots"`
 	Status types.String `tfsdk:"status"`
 	TechStatus types.String `tfsdk:"tech_status"`
 	Type types.String `tfsdk:"type"`
+	UpdatedBy types.String `tfsdk:"updated_by"`
+	UpdatedTime types.Int64 `tfsdk:"updated_time"`
 	VMID types.Int64 `tfsdk:"vmid"`
 }

@@ -23,7 +23,7 @@ type DataSourceDiskListModel struct {
 	Timeouts timeouts.Value `tfsdk:"timeouts"`
 	// response fields
-	Id types.String `tfsdk:"id"`
+	ID types.String `tfsdk:"id"`
 	Items []ItemDiskModel `tfsdk:"items"`
 	EntryCount types.Int64 `tfsdk:"entry_count"`
 }
@@ -33,7 +33,9 @@ type ItemDiskModel struct {
 	AccountName types.String `tfsdk:"account_name"`
 	ACL types.String `tfsdk:"acl"`
 	Computes types.List `tfsdk:"computes"`
+	CreatedBy types.String `tfsdk:"created_by"`
 	CreatedTime types.Int64 `tfsdk:"created_time"`
+	DeletedBy types.String `tfsdk:"deleted_by"`
 	DeletedTime types.Int64 `tfsdk:"deleted_time"`
 	Description types.String `tfsdk:"desc"`
 	DestructionTime types.Int64 `tfsdk:"destruction_time"`
@@ -46,12 +48,13 @@ type ItemDiskModel struct {
 	MachineName types.String `tfsdk:"machine_name"`
 	DiskId types.Int64 `tfsdk:"disk_id"`
 	DiskName types.String `tfsdk:"disk_name"`
+	Milestones types.Int64 `tfsdk:"milestones"`
 	Order types.Int64 `tfsdk:"order"`
 	Params types.String `tfsdk:"params"`
 	ParentID types.Int64 `tfsdk:"parent_id"`
 	PCISlot types.Int64 `tfsdk:"pci_slot"`
 	Pool types.String `tfsdk:"pool"`
-	PresentTo types.List `tfsdk:"present_to"`
+	PresentTo types.Map `tfsdk:"present_to"`
 	PurgeTime types.Int64 `tfsdk:"purge_time"`
 	ResID types.String `tfsdk:"res_id"`
 	ResName types.String `tfsdk:"res_name"`
@@ -59,11 +62,14 @@ type ItemDiskModel struct {
 	SepID types.Int64 `tfsdk:"sep_id"`
 	SepType types.String `tfsdk:"sep_type"`
 	Shareable types.Bool `tfsdk:"shareable"`
+	SizeAvailable types.Float64 `tfsdk:"size_available"`
 	SizeMax types.Int64 `tfsdk:"size_max"`
 	SizeUsed types.Float64 `tfsdk:"size_used"`
 	Snapshots types.List `tfsdk:"snapshots"`
 	Status types.String `tfsdk:"status"`
 	TechStatus types.String `tfsdk:"tech_status"`
 	Type types.String `tfsdk:"type"`
+	UpdatedBy types.String `tfsdk:"updated_by"`
+	UpdatedTime types.Int64 `tfsdk:"updated_time"`
 	VMID types.Int64 `tfsdk:"vmid"`
 }

@@ -20,7 +20,7 @@ type DataSourceDiskListDeletedModel struct {
 	Timeouts timeouts.Value `tfsdk:"timeouts"`
 	// response fields
-	Id types.String `tfsdk:"id"`
+	ID types.String `tfsdk:"id"`
 	Items []ItemDiskModel `tfsdk:"items"`
 	EntryCount types.Int64 `tfsdk:"entry_count"`
 }

@@ -31,7 +31,7 @@ type RecordDiskModel struct {
 	ParentID types.Int64 `tfsdk:"parent_id"`
 	PCISlot types.Int64 `tfsdk:"pci_slot"`
 	Pool types.String `tfsdk:"pool"`
-	PresentTo types.List `tfsdk:"present_to"`
+	PresentTo types.Map `tfsdk:"present_to"`
 	PurgeTime types.Int64 `tfsdk:"purge_time"`
 	Replication *ItemReplicationModel `tfsdk:"replication"`
 	ResID types.String `tfsdk:"res_id"`

@@ -11,7 +11,6 @@ type ResourceDiskModel struct {
 	AccountID types.Int64 `tfsdk:"account_id"`
 	DiskName types.String `tfsdk:"disk_name"`
 	SizeMax types.Int64 `tfsdk:"size_max"`
-	GID types.Int64 `tfsdk:"gid"`
 	// request fields - optional
 	Description types.String `tfsdk:"desc"`
@@ -25,23 +24,28 @@ type ResourceDiskModel struct {
 	Timeouts timeouts.Value `tfsdk:"timeouts"`
 	// response fields
-	Id types.String `tfsdk:"id"`
+	ID types.String `tfsdk:"id"`
 	LastUpdated types.String `tfsdk:"last_updated"`
 	ACL types.String `tfsdk:"acl"`
 	AccountName types.String `tfsdk:"account_name"`
 	Computes types.List `tfsdk:"computes"`
+	CreatedBy types.String `tfsdk:"created_by"`
 	CreatedTime types.Int64 `tfsdk:"created_time"`
+	DeletedBy types.String `tfsdk:"deleted_by"`
 	DeletedTime types.Int64 `tfsdk:"deleted_time"`
 	DeviceName types.String `tfsdk:"devicename"`
 	DestructionTime types.Int64 `tfsdk:"destruction_time"`
-	DiskId types.Int64 `tfsdk:"disk_id"`
+	DiskID types.Int64 `tfsdk:"disk_id"`
 	ImageID types.Int64 `tfsdk:"image_id"`
 	Images types.List `tfsdk:"images"`
+	MachineID types.Int64 `tfsdk:"machine_id"`
+	MachineName types.String `tfsdk:"machine_name"`
+	GID types.Int64 `tfsdk:"gid"`
 	Order types.Int64 `tfsdk:"order"`
 	Params types.String `tfsdk:"params"`
 	ParentID types.Int64 `tfsdk:"parent_id"`
 	PCISlot types.Int64 `tfsdk:"pci_slot"`
-	PresentTo types.List `tfsdk:"present_to"`
+	PresentTo types.Map `tfsdk:"present_to"`
 	PurgeTime types.Int64 `tfsdk:"purge_time"`
 	ResID types.String `tfsdk:"res_id"`
 	ResName types.String `tfsdk:"res_name"`
@@ -51,6 +55,8 @@ type ResourceDiskModel struct {
 	Snapshots types.List `tfsdk:"snapshots"`
 	Status types.String `tfsdk:"status"`
 	TechStatus types.String `tfsdk:"tech_status"`
+	UpdatedBy types.String `tfsdk:"updated_by"`
+	UpdatedTime types.Int64 `tfsdk:"updated_time"`
 	VMID types.Int64 `tfsdk:"vmid"`
 }

@@ -40,7 +40,7 @@ type ResourceRecordDiskReplicationModel struct {
 	ParentID types.Int64 `tfsdk:"parent_id"`
 	PCISlot types.Int64 `tfsdk:"pci_slot"`
 	Pool types.String `tfsdk:"pool"`
-	PresentTo types.List `tfsdk:"present_to"`
+	PresentTo types.Map `tfsdk:"present_to"`
 	PurgeTime types.Int64 `tfsdk:"purge_time"`
 	Replication types.Object `tfsdk:"replication"`
 	ResID types.String `tfsdk:"res_id"`

@@ -51,7 +51,6 @@ func (r *resourceDisk) Create(ctx context.Context, req resource.CreateRequest, r
 		"account_id": plan.AccountID.ValueInt64(),
 		"disk_name": plan.DiskName.ValueString(),
 		"size_max": plan.SizeMax.ValueInt64(),
-		"gid": plan.GID.ValueInt64(),
 	}
 	tflog.Info(ctx, "Create resourceDisk: got plan successfully", contextCreateMap)
 	tflog.Info(ctx, "Create resourceDisk: start creating", contextCreateMap)
@@ -67,7 +66,6 @@ func (r *resourceDisk) Create(ctx context.Context, req resource.CreateRequest, r
 		"account_id": plan.AccountID.ValueInt64(),
 		"disk_name": plan.DiskName.ValueString(),
 		"size_max": plan.SizeMax.ValueInt64(),
-		"gid": plan.GID.ValueInt64(),
 		"createTimeout": createTimeout})
 	ctx, cancel := context.WithTimeout(ctx, createTimeout)
@@ -93,7 +91,7 @@ func (r *resourceDisk) Create(ctx context.Context, req resource.CreateRequest, r
 		)
 		return
 	}
-	plan.Id = types.StringValue(strconv.Itoa(int(diskId)))
+	plan.ID = types.StringValue(strconv.Itoa(int(diskId)))
 	tflog.Info(ctx, "Create resourceDisk: disk created", map[string]any{"diskId": diskId, "disk_name": plan.DiskName.ValueString()})
 	// additional settings after disk creation: in case of failures, warnings are added to resp.Diagnostics,
@@ -137,7 +135,7 @@ func (r *resourceDisk) Read(ctx context.Context, req resource.ReadRequest, resp
 		tflog.Error(ctx, "Read resourceDisk: Error get state")
 		return
 	}
-	tflog.Info(ctx, "Read resourceDisk: got state successfully", map[string]any{"disk_id": state.Id.ValueString()})
+	tflog.Info(ctx, "Read resourceDisk: got state successfully", map[string]any{"disk_id": state.ID.ValueString()})
 	// Set timeouts
 	readTimeout, diags := state.Timeouts.Read(ctx, constants.Timeout300s)
@@ -147,7 +145,7 @@ func (r *resourceDisk) Read(ctx context.Context, req resource.ReadRequest, resp
 		return
 	}
 	tflog.Info(ctx, "Read resourceDisk: set timeouts successfully", map[string]any{
-		"disk_id": state.Id.ValueString(),
+		"disk_id": state.ID.ValueString(),
 		"readTimeout": readTimeout})
 	ctx, cancel := context.WithTimeout(ctx, readTimeout)
@@ -185,7 +183,7 @@ func (r *resourceDisk) Update(ctx context.Context, req resource.UpdateRequest, r
 		tflog.Error(ctx, "Update resourceDisk: Error receiving the plan")
 		return
 	}
-	tflog.Info(ctx, "Update resourceDisk: got plan successfully", map[string]any{"disk_id": plan.Id.ValueString()})
+	tflog.Info(ctx, "Update resourceDisk: got plan successfully", map[string]any{"disk_id": plan.ID.ValueString()})
 	// Retrieve values from state
 	var state models.ResourceDiskModel
@@ -194,7 +192,7 @@ func (r *resourceDisk) Update(ctx context.Context, req resource.UpdateRequest, r
 		tflog.Error(ctx, "Update resourceDisk: Error receiving the state")
 		return
 	}
-	tflog.Info(ctx, "Update resourceDisk: got state successfully", map[string]any{"disk_id": state.Id.ValueString()})
+	tflog.Info(ctx, "Update resourceDisk: got state successfully", map[string]any{"disk_id": state.ID.ValueString()})
 	// Set timeouts
 	updateTimeout, diags := plan.Timeouts.Update(ctx, constants.Timeout300s)
@@ -204,22 +202,22 @@ func (r *resourceDisk) Update(ctx context.Context, req resource.UpdateRequest, r
 		return
 	}
 	tflog.Info(ctx, "Update resourceDisk: set timeouts successfully", map[string]any{
-		"disk_id": state.Id.ValueString(),
+		"disk_id": state.ID.ValueString(),
 		"updateTimeout": updateTimeout})
 	ctx, cancel := context.WithTimeout(ctx, updateTimeout)
 	defer cancel()
 	// Checking if inputs are valid
-	tflog.Info(ctx, "Update resourceDisk: starting input checks", map[string]any{"disk_id": plan.Id.ValueString()})
+	tflog.Info(ctx, "Update resourceDisk: starting input checks", map[string]any{"disk_id": plan.ID.ValueString()})
 	resp.Diagnostics.Append(resourceDiskUpdateInputChecks(ctx, &plan, &state, r.client)...)
 	if resp.Diagnostics.HasError() {
 		tflog.Error(ctx, "Update resourceDisk: Error input checks")
 		return
 	}
-	tflog.Info(ctx, "Update resourceDisk: input checks successful", map[string]any{"disk_id": state.Id.ValueString()})
+	tflog.Info(ctx, "Update resourceDisk: input checks successful", map[string]any{"disk_id": state.ID.ValueString()})
-	diskId, err := strconv.Atoi(state.Id.ValueString())
+	diskId, err := strconv.Atoi(state.ID.ValueString())
 	if err != nil {
 		resp.Diagnostics.AddError("Update resourceDisk: Cannot parse disk ID from state", err.Error())
 		return
@@ -261,7 +259,7 @@ func (r *resourceDisk) Update(ctx context.Context, req resource.UpdateRequest, r
 		}
 	}
-	tflog.Info(ctx, "Update resourceDisk: disk update is completed", map[string]any{"disk_id": plan.Id.ValueString()})
+	tflog.Info(ctx, "Update resourceDisk: disk update is completed", map[string]any{"disk_id": plan.ID.ValueString()})
 	// Map response body to schema and populate Computed attribute values
 	resp.Diagnostics.Append(flattens.DiskResource(ctx, &plan, r.client)...)
@@ -288,7 +286,7 @@ func (r *resourceDisk) Delete(ctx context.Context, req resource.DeleteRequest, r
 		tflog.Error(ctx, "Delete resourceDisk: Error get state")
 		return
 	}
-	tflog.Info(ctx, "Delete resourceDisk: got state successfully", map[string]any{"disk_id": state.Id.ValueString()})
+	tflog.Info(ctx, "Delete resourceDisk: got state successfully", map[string]any{"disk_id": state.ID.ValueString()})
 	// Set timeouts
 	deleteTimeout, diags := state.Timeouts.Delete(ctx, constants.Timeout300s)
@@ -298,7 +296,7 @@ func (r *resourceDisk) Delete(ctx context.Context, req resource.DeleteRequest, r
 		return
 	}
 	tflog.Info(ctx, "Delete resourceDisk: set timeouts successfully", map[string]any{
-		"disk_id": state.Id.ValueString(),
+		"disk_id": state.ID.ValueString(),
 		"deleteTimeout": deleteTimeout})
 	ctx, cancel := context.WithTimeout(ctx, deleteTimeout)
@@ -306,7 +304,7 @@ func (r *resourceDisk) Delete(ctx context.Context, req resource.DeleteRequest, r
 	// Delete existing resource group
 	delReq := disks.DeleteRequest{
-		DiskID: uint64(state.DiskId.ValueInt64()),
+		DiskID: uint64(state.DiskID.ValueInt64()),
 		Detach: state.Detach.ValueBool(), // default false
 		Permanently: state.Permanently.ValueBool(), // default false
 	}
@@ -318,7 +316,7 @@ func (r *resourceDisk) Delete(ctx context.Context, req resource.DeleteRequest, r
 		return
 	}
-	tflog.Info(ctx, "End delete resourceDisk", map[string]any{"disk_id": state.Id.ValueString()})
+	tflog.Info(ctx, "End delete resourceDisk", map[string]any{"disk_id": state.ID.ValueString()})
 }
 // Schema defines the schema for the resource.

@@ -38,9 +38,15 @@ func MakeSchemaDataSourceDisk() map[string]schema.Attribute {
 				},
 			},
 		},
+		"created_by": schema.StringAttribute{
+			Computed: true,
+		},
 		"created_time": schema.Int64Attribute{
 			Computed: true,
 		},
+		"deleted_by": schema.StringAttribute{
+			Computed: true,
+		},
 		"deleted_time": schema.Int64Attribute{
 			Computed: true,
 		},
@@ -61,7 +67,7 @@ func MakeSchemaDataSourceDisk() map[string]schema.Attribute {
 		},
 		"images": schema.ListAttribute{
 			Computed: true,
-			ElementType: types.StringType,
+			ElementType: types.Int64Type,
 		},
 		"iotune": schema.SingleNestedAttribute{
 			Computed: true,
@@ -110,6 +116,15 @@ func MakeSchemaDataSourceDisk() map[string]schema.Attribute {
 		"disk_name": schema.StringAttribute{
 			Computed: true,
 		},
+		"machine_id": schema.Int64Attribute{
+			Computed: true,
+		},
+		"machine_name": schema.StringAttribute{
+			Computed: true,
+		},
+		"milestones": schema.Int64Attribute{
+			Computed: true,
+		},
 		"order": schema.Int64Attribute{
 			Computed: true,
 		},
@@ -125,7 +140,7 @@ func MakeSchemaDataSourceDisk() map[string]schema.Attribute {
 		"pool": schema.StringAttribute{
 			Computed: true,
 		},
-		"present_to": schema.ListAttribute{
+		"present_to": schema.MapAttribute{
 			Computed: true,
 			ElementType: types.Int64Type,
 		},
@@ -150,6 +165,9 @@ func MakeSchemaDataSourceDisk() map[string]schema.Attribute {
 		"shareable": schema.BoolAttribute{
 			Computed: true,
 		},
+		"size_available": schema.Float64Attribute{
+			Computed: true,
+		},
 		"size_max": schema.Int64Attribute{
 			Computed: true,
 		},
@@ -190,6 +208,12 @@ func MakeSchemaDataSourceDisk() map[string]schema.Attribute {
 		"type": schema.StringAttribute{
 			Computed: true,
 		},
+		"updated_by": schema.StringAttribute{
+			Computed: true,
+		},
+		"updated_time": schema.Int64Attribute{
+			Computed: true,
+		},
 		"vmid": schema.Int64Attribute{
 			Computed: true,
 		},

@@ -91,9 +91,15 @@ func MakeSchemaDataSourceDiskList() map[string]schema.Attribute {
 				},
 			},
 		},
+		"created_by": schema.StringAttribute{
+			Computed: true,
+		},
 		"created_time": schema.Int64Attribute{
 			Computed: true,
 		},
+		"deleted_by": schema.StringAttribute{
+			Computed: true,
+		},
 		"deleted_time": schema.Int64Attribute{
 			Computed: true,
 		},
@@ -117,7 +123,7 @@ func MakeSchemaDataSourceDiskList() map[string]schema.Attribute {
 		},
 		"images": schema.ListAttribute{
 			Computed: true,
-			ElementType: types.StringType,
+			ElementType: types.Int64Type,
 		},
 		"iotune": schema.SingleNestedAttribute{
 			Computed: true,
@@ -169,6 +175,9 @@ func MakeSchemaDataSourceDiskList() map[string]schema.Attribute {
 		"machine_name": schema.StringAttribute{
 			Computed: true,
 		},
+		"milestones": schema.Int64Attribute{
+			Computed: true,
+		},
 		"disk_name": schema.StringAttribute{
 			Computed: true,
 		},
@@ -187,7 +196,7 @@ func MakeSchemaDataSourceDiskList() map[string]schema.Attribute {
 		"pool": schema.StringAttribute{
 			Computed: true,
 		},
-		"present_to": schema.ListAttribute{
+		"present_to": schema.MapAttribute{
 			Computed: true,
 			ElementType: types.Int64Type,
 		},
@@ -212,6 +221,9 @@ func MakeSchemaDataSourceDiskList() map[string]schema.Attribute {
 		"shareable": schema.BoolAttribute{
 			Computed: true,
 		},
+		"size_available": schema.Float64Attribute{
+			Computed: true,
+		},
 		"size_max": schema.Int64Attribute{
 			Computed: true,
 		},
@@ -252,6 +264,12 @@ func MakeSchemaDataSourceDiskList() map[string]schema.Attribute {
 		"type": schema.StringAttribute{
 			Computed: true,
 		},
+		"updated_by": schema.StringAttribute{
+			Computed: true,
+		},
+		"updated_time": schema.Int64Attribute{
+			Computed: true,
+		},
 		"vmid": schema.Int64Attribute{
 			Computed: true,
 		},

@@ -79,9 +79,15 @@ func MakeSchemaDataSourceDiskListDeleted() map[string]schema.Attribute {
 				},
 			},
 		},
+		"created_by": schema.StringAttribute{
+			Computed: true,
+		},
 		"created_time": schema.Int64Attribute{
 			Computed: true,
 		},
+		"deleted_by": schema.StringAttribute{
+			Computed: true,
+		},
 		"deleted_time": schema.Int64Attribute{
 			Computed: true,
 		},
@@ -105,7 +111,7 @@ func MakeSchemaDataSourceDiskListDeleted() map[string]schema.Attribute {
 		},
 		"images": schema.ListAttribute{
 			Computed: true,
-			ElementType: types.StringType,
+			ElementType: types.Int64Type,
 		},
 		"iotune": schema.SingleNestedAttribute{
 			Computed: true,
@@ -157,6 +163,9 @@ func MakeSchemaDataSourceDiskListDeleted() map[string]schema.Attribute {
 		"machine_name": schema.StringAttribute{
 			Computed: true,
 		},
+		"milestones": schema.Int64Attribute{
+			Computed: true,
+		},
 		"disk_name": schema.StringAttribute{
 			Computed: true,
 		},
@@ -175,7 +184,7 @@ func MakeSchemaDataSourceDiskListDeleted() map[string]schema.Attribute {
 		"pool": schema.StringAttribute{
 			Computed: true,
 		},
-		"present_to": schema.ListAttribute{
+		"present_to": schema.MapAttribute{
 			Computed: true,
 			ElementType: types.Int64Type,
 		},
@@ -200,6 +209,9 @@ func MakeSchemaDataSourceDiskListDeleted() map[string]schema.Attribute {
 		"shareable": schema.BoolAttribute{
 			Computed: true,
 		},
+		"size_available": schema.Float64Attribute{
+			Computed: true,
+		},
 		"size_max": schema.Int64Attribute{
 			Computed: true,
 		},
@@ -240,6 +252,12 @@ func MakeSchemaDataSourceDiskListDeleted() map[string]schema.Attribute {
 		"type": schema.StringAttribute{
 			Computed: true,
 		},
+		"updated_by": schema.StringAttribute{
+			Computed: true,
+		},
+		"updated_time": schema.Int64Attribute{
+			Computed: true,
+		},
 		"vmid": schema.Int64Attribute{
 			Computed: true,
 		},

@@ -109,7 +109,7 @@ func MakeSchemaDataSourceDiskListUnattached() map[string]schema.Attribute {
 		},
 		"images": schema.ListAttribute{
 			Computed: true,
-			ElementType: types.StringType,
+			ElementType: types.Int64Type,
 		},
 		"iotune": schema.SingleNestedAttribute{
 			Computed: true,

@@ -66,7 +66,7 @@ func MakeSchemaDataSourceDiskReplication() map[string]schema.Attribute {
 		},
 		"images": schema.ListAttribute{
 			Computed: true,
-			ElementType: types.StringType,
+			ElementType: types.Int64Type,
 		},
 		"iotune": schema.SingleNestedAttribute{
 			Computed: true,
@@ -130,7 +130,7 @@ func MakeSchemaDataSourceDiskReplication() map[string]schema.Attribute {
 		"pool": schema.StringAttribute{
 			Computed: true,
 		},
-		"present_to": schema.ListAttribute{
+		"present_to": schema.MapAttribute{
 			Computed: true,
 			ElementType: types.Int64Type,
 		},

@@ -1,11 +1,10 @@
 package schemas
 import (
-"github.com/hashicorp/terraform-plugin-framework-validators/stringvalidator"
 "github.com/hashicorp/terraform-plugin-framework/resource/schema"
+"github.com/hashicorp/terraform-plugin-framework/resource/schema/booldefault"
 "github.com/hashicorp/terraform-plugin-framework/resource/schema/planmodifier"
 "github.com/hashicorp/terraform-plugin-framework/resource/schema/stringplanmodifier"
-"github.com/hashicorp/terraform-plugin-framework/schema/validator"
 "github.com/hashicorp/terraform-plugin-framework/types"
 )
@@ -24,10 +23,6 @@ func MakeSchemaResourceDisk() map[string]schema.Attribute {
 Required: true,
 Description: "size in GB, default is 10",
 },
-"gid": schema.Int64Attribute{
-Required: true,
-Description: "ID of the grid (platform)",
-},
 // optional attributes
 "desc": schema.StringAttribute{
@@ -46,23 +41,21 @@ func MakeSchemaResourceDisk() map[string]schema.Attribute {
 Description: "Storage endpoint provider ID to create disk",
 },
 "type": schema.StringAttribute{
-Optional: true,
 Computed: true,
-Validators: []validator.String{
-stringvalidator.OneOf("B", "D", "T"), // case is not ignored
-},
 Description: "(B;D;T) B=Boot;D=Data;T=Temp",
 // default is D
 },
 "detach": schema.BoolAttribute{
 Optional: true,
+Computed: true,
 Description: "Detaching the disk from compute",
-// default is false
+Default: booldefault.StaticBool(true),
 },
 "permanently": schema.BoolAttribute{
 Optional: true,
+Computed: true,
 Description: "Whether to completely delete the disk, works only with non attached disks",
-// default is false
+Default: booldefault.StaticBool(true),
 },
 "shareable": schema.BoolAttribute{
 Optional: true,
@@ -170,9 +163,15 @@ func MakeSchemaResourceDisk() map[string]schema.Attribute {
 },
 },
 },
+"created_by": schema.StringAttribute{
+Computed: true,
+},
 "created_time": schema.Int64Attribute{
 Computed: true,
 },
+"deleted_by": schema.StringAttribute{
+Computed: true,
+},
 "deleted_time": schema.Int64Attribute{
 Computed: true,
 },
@@ -187,12 +186,22 @@ func MakeSchemaResourceDisk() map[string]schema.Attribute {
 },
 "images": schema.ListAttribute{
 Computed: true,
-ElementType: types.StringType,
+ElementType: types.Int64Type,
+},
+"gid": schema.Int64Attribute{
+Computed: true,
+Description: "ID of the grid (platform)",
 },
 "last_updated": schema.StringAttribute{
 Computed: true,
 Description: "Timestamp of the last Terraform update of the disk resource.",
 },
+"machine_id": schema.Int64Attribute{
+Computed: true,
+},
+"machine_name": schema.StringAttribute{
+Computed: true,
+},
 "order": schema.Int64Attribute{
 Computed: true,
 },
@@ -205,7 +214,7 @@ func MakeSchemaResourceDisk() map[string]schema.Attribute {
 "pci_slot": schema.Int64Attribute{
 Computed: true,
 },
-"present_to": schema.ListAttribute{
+"present_to": schema.MapAttribute{
 Computed: true,
 ElementType: types.Int64Type,
 },
@@ -258,6 +267,12 @@ func MakeSchemaResourceDisk() map[string]schema.Attribute {
 "tech_status": schema.StringAttribute{
 Computed: true,
 },
+"updated_by": schema.StringAttribute{
+Computed: true,
+},
+"updated_time": schema.Int64Attribute{
+Computed: true,
+},
 "vmid": schema.Int64Attribute{
 Computed: true,
 },
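`detach` and `permanently` now pair `Optional: true` with `Computed: true` and `Default: booldefault.StaticBool(true)`; in the plugin framework an attribute must be Computed for a `Default` to apply. Note the defaults also flip from the previously documented `false` to `true`, which changes behavior for configurations that leave these attributes unset. The resolution rule can be sketched in plain Go (illustrative names, not the framework's API):

```go
package main

// resolveBool is a sketch of how an Optional+Computed bool attribute with a
// static default is resolved during planning: an explicitly configured value
// wins, otherwise the default fills in. The framework does this internally;
// this helper only mirrors the semantics.
func resolveBool(configured *bool, def bool) bool {
	if configured != nil {
		return *configured
	}
	return def
}

func main() {
	f := false
	_ = resolveBool(&f, true) // configured false overrides the default
	_ = resolveBool(nil, true) // unset in config -> default true
}
```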

@@ -97,7 +97,7 @@ func MakeSchemaResourceDiskReplication() map[string]schema.Attribute {
 },
 "images": schema.ListAttribute{
 Computed: true,
-ElementType: types.StringType,
+ElementType: types.Int64Type,
 },
 "iotune": schema.SingleNestedAttribute{
 Computed: true,
@@ -158,7 +158,7 @@ func MakeSchemaResourceDiskReplication() map[string]schema.Attribute {
 "pool": schema.StringAttribute{
 Computed: true,
 },
-"present_to": schema.ListAttribute{
+"present_to": schema.MapAttribute{
 Computed: true,
 ElementType: types.Int64Type,
 },

@@ -17,16 +17,16 @@ import (
 "repository.basistech.ru/BASIS/terraform-provider-dynamix/internal/service/cloudapi/disks/models"
 )
-// DiskCheckPresence checks if disk with diskId exists
-func DiskCheckPresence(ctx context.Context, diskId uint64, c *client.Client) (*disks.RecordDisk, error) {
-tflog.Info(ctx, fmt.Sprintf("Get info about disk with ID - %v", diskId))
-diskRecord, err := c.CloudAPI().Disks().Get(ctx, disks.GetRequest{DiskID: diskId})
+// DiskCheckPresence checks if disk with diskID exists
+func DiskCheckPresence(ctx context.Context, diskID uint64, c *client.Client) (*disks.RecordDisk, error) {
+tflog.Info(ctx, fmt.Sprintf("Get info about disk with ID - %v", diskID))
+diskRecord, err := c.CloudAPI().Disks().Get(ctx, disks.GetRequest{DiskID: diskID})
 if err != nil {
 return nil, fmt.Errorf("cannot get info about disk with error: %w", err)
 }
-tflog.Info(ctx, "DiskCheckPresence resourceDisk: response from CloudAPI().Disks().Get", map[string]any{"disk_id": diskId, "response": diskRecord})
+tflog.Info(ctx, "DiskCheckPresence resourceDisk: response from CloudAPI().Disks().Get", map[string]any{"disk_id": diskID, "response": diskRecord})
 return diskRecord, err
 }
@@ -37,7 +37,6 @@ func CreateRequestResourceDisk(ctx context.Context, plan *models.ResourceDiskMod
 "account_id": plan.AccountID.ValueInt64(),
 "disk_name": plan.DiskName.ValueString(),
 "size_max": plan.SizeMax.ValueInt64(),
-"gid": plan.GID.ValueInt64(),
 })
 // set up required parameters in disk create request
@@ -45,14 +44,8 @@ func CreateRequestResourceDisk(ctx context.Context, plan *models.ResourceDiskMod
 AccountID: uint64(plan.AccountID.ValueInt64()),
 Name: plan.DiskName.ValueString(),
 Size: uint64(plan.SizeMax.ValueInt64()),
-GID: uint64(plan.GID.ValueInt64()),
 }
-if plan.Type.IsUnknown() {
-createReq.Type = "D" // default value
-} else {
-createReq.Type = plan.Type.ValueString()
-}
 if !plan.SEPID.IsUnknown() {
 createReq.SEPID = uint64(plan.SEPID.ValueInt64())
 }
@@ -68,16 +61,16 @@ func CreateRequestResourceDisk(ctx context.Context, plan *models.ResourceDiskMod
 // LimitIOCreateDisk sets IO limits that user specified in iotune field for created resource.
 // In case of failure returns warnings.
-func LimitIOCreateDisk(ctx context.Context, diskId uint64, plan *models.ResourceDiskModel, c *client.Client) diag.Diagnostics {
+func LimitIOCreateDisk(ctx context.Context, diskID uint64, plan *models.ResourceDiskModel, c *client.Client) diag.Diagnostics {
 diags := diag.Diagnostics{}
 limitIOReq := disks.LimitIORequest{
-DiskID: diskId,
+DiskID: diskID,
 }
 var iotunePlan models.IOTuneModel
 // plan.IOTune is not null as it was checked before call
-tflog.Info(ctx, "LimitIOCreateDisk: new iotune specified", map[string]any{"disk_id": diskId})
+tflog.Info(ctx, "LimitIOCreateDisk: new iotune specified", map[string]any{"disk_id": diskID})
 diags.Append(plan.IOTune.As(ctx, &iotunePlan, basetypes.ObjectAsOptions{})...)
 if diags.HasError() {
 tflog.Error(ctx, "LimitIOCreateDisk: cannot populate iotune with plan.IOTune object element")
@@ -103,7 +96,7 @@ func LimitIOCreateDisk(ctx context.Context, diskId uint64, plan *models.Resource
 limitIOReq.WriteIOPSSecMax = uint64(iotunePlan.WriteIOPSSecMax.ValueInt64())
 tflog.Info(ctx, "LimitIOCreateDisk: before calling CloudAPI().Disks().LimitIO", map[string]any{
-"disk_id": diskId,
+"disk_id": diskID,
 "limitIOReq": limitIOReq})
 res, err := c.CloudAPI().Disks().LimitIO(ctx, limitIOReq)
 if err != nil {
@@ -111,7 +104,7 @@ func LimitIOCreateDisk(ctx context.Context, diskId uint64, plan *models.Resource
 err.Error())
 }
 tflog.Info(ctx, "LimitIOCreateDisk: response from CloudAPI().Disks().LimitIO", map[string]any{
-"disk_id": diskId,
+"disk_id": diskID,
 "response": res})
 return diags
@@ -119,17 +112,17 @@ func LimitIOCreateDisk(ctx context.Context, diskId uint64, plan *models.Resource
 // ShareableCreateDisk shares disk.
 // In case of failure returns warnings.
-func ShareableCreateDisk(ctx context.Context, diskId uint64, c *client.Client) diag.Diagnostics {
+func ShareableCreateDisk(ctx context.Context, diskID uint64, c *client.Client) diag.Diagnostics {
 diags := diag.Diagnostics{}
-tflog.Info(ctx, "ShareableCreateDisk: before calling CloudAPI().Disks().Share", map[string]any{"disk_id": diskId})
-res, err := c.CloudAPI().Disks().Share(ctx, disks.ShareRequest{DiskID: diskId})
+tflog.Info(ctx, "ShareableCreateDisk: before calling CloudAPI().Disks().Share", map[string]any{"disk_id": diskID})
+res, err := c.CloudAPI().Disks().Share(ctx, disks.ShareRequest{DiskID: diskID})
 if err != nil {
 diags.AddWarning("ShareableCreateDisk: Unable to share Disk",
 err.Error())
 }
 tflog.Info(ctx, "ShareableCreateDisk: response from CloudAPI().Disks().Share", map[string]any{
-"disk_id": diskId,
+"disk_id": diskID,
 "response": res})
 return diags
@@ -139,17 +132,17 @@ func ShareableCreateDisk(ctx context.Context, diskId uint64, c *client.Client) d
 // Deleted status.
 // In case of failure returns errors.
 func DiskReadStatus(ctx context.Context, state *models.ResourceDiskModel, c *client.Client) diag.Diagnostics {
-tflog.Info(ctx, "DiskReadStatus: Read status disk with ID", map[string]any{"disk_id": state.Id.ValueString()})
+tflog.Info(ctx, "DiskReadStatus: Read status disk with ID", map[string]any{"disk_id": state.ID.ValueString()})
 diags := diag.Diagnostics{}
-diskId, err := strconv.ParseUint(state.Id.ValueString(), 10, 64)
+diskID, err := strconv.ParseUint(state.ID.ValueString(), 10, 64)
 if err != nil {
 diags.AddError("DiskReadStatus: Cannot parse disk ID from state", err.Error())
 return diags
 }
-recordDisk, err := DiskCheckPresence(ctx, diskId, c)
+recordDisk, err := DiskCheckPresence(ctx, diskID, c)
 if err != nil {
 diags.AddError("DiskReadStatus: Unable to Read Disk before status check", err.Error())
 return diags
@@ -168,17 +161,17 @@ func DiskReadStatus(ctx context.Context, state *models.ResourceDiskModel, c *cli
 tflog.Info(ctx, "DiskReadStatus: disk with status.Deleted is being read, attempt to restore it", map[string]any{
 "disk_id": recordDisk.ID,
 "status": recordDisk.Status})
-diags.Append(RestoreDisk(ctx, diskId, c)...)
+diags.Append(RestoreDisk(ctx, diskID, c)...)
 if diags.HasError() {
 tflog.Error(ctx, "DiskReadStatus: cannot restore disk")
 return diags
 }
-tflog.Info(ctx, "DiskReadStatus: disk restored successfully", map[string]any{"disk_id": diskId})
+tflog.Info(ctx, "DiskReadStatus: disk restored successfully", map[string]any{"disk_id": diskID})
 state.LastUpdated = types.StringValue(time.Now().Format(time.RFC850))
 case status.Destroyed, status.Purged:
 diags.AddError(
 "DiskReadStatus: Disk is in status Destroyed or Purged",
-fmt.Sprintf("the resource with disk_id %d cannot be read because it has been destroyed or purged", diskId),
+fmt.Sprintf("the resource with disk_id %d cannot be read because it has been destroyed or purged", diskID),
 )
 return diags
 }
@@ -188,14 +181,14 @@ func DiskReadStatus(ctx context.Context, state *models.ResourceDiskModel, c *cli
 // RestoreDisk performs disk Restore request.
 // Returns error in case of failures.
-func RestoreDisk(ctx context.Context, diskId uint64, c *client.Client) diag.Diagnostics {
+func RestoreDisk(ctx context.Context, diskID uint64, c *client.Client) diag.Diagnostics {
 diags := diag.Diagnostics{}
 restoreReq := disks.RestoreRequest{
-DiskID: diskId,
+DiskID: diskID,
 }
-tflog.Info(ctx, "RestoreDisk: before calling CloudAPI().Disks().Restore", map[string]any{"diskId": diskId, "req": restoreReq})
+tflog.Info(ctx, "RestoreDisk: before calling CloudAPI().Disks().Restore", map[string]any{"diskID": diskID, "req": restoreReq})
 res, err := c.CloudAPI().Disks().Restore(ctx, restoreReq)
 if err != nil {
@@ -205,18 +198,18 @@ func RestoreDisk(ctx context.Context, diskId uint64, c *client.Client) diag.Diag
 )
 return diags
 }
-tflog.Info(ctx, "RestoreDisk: response from CloudAPI().Disks().Restore", map[string]any{"disk_id": diskId, "response": res})
+tflog.Info(ctx, "RestoreDisk: response from CloudAPI().Disks().Restore", map[string]any{"disk_id": diskID, "response": res})
 return nil
 }
 // SizeMaxUpdateDisk resizes disk.
 // Returns error in case of failures.
-func SizeMaxUpdateDisk(ctx context.Context, diskId uint64, plan, state *models.ResourceDiskModel, c *client.Client) diag.Diagnostics {
+func SizeMaxUpdateDisk(ctx context.Context, diskID uint64, plan, state *models.ResourceDiskModel, c *client.Client) diag.Diagnostics {
 var diags diag.Diagnostics
 resizeReq := disks.ResizeRequest{
-DiskID: diskId,
+DiskID: diskID,
 }
 // check if resize request is valid
@@ -224,7 +217,7 @@ func SizeMaxUpdateDisk(ctx context.Context, diskId uint64, plan, state *models.R
 diags.AddError(
 "SizeMaxUpdateDisk: reducing disk size is not allowed",
 fmt.Sprintf("disk with id %s has state size %d, plan size %d",
-plan.Id.ValueString(),
+plan.ID.ValueString(),
 state.SizeMax.ValueInt64(),
 plan.SizeMax.ValueInt64()))
 return diags
@@ -233,7 +226,7 @@ func SizeMaxUpdateDisk(ctx context.Context, diskId uint64, plan, state *models.R
 resizeReq.Size = uint64(plan.SizeMax.ValueInt64())
 tflog.Info(ctx, "SizeMaxUpdateDisk: before calling CloudAPI().Disks().Resize2", map[string]any{
-"disk_id": plan.Id.ValueString(),
+"disk_id": plan.ID.ValueString(),
 "size_max_state": state.SizeMax.ValueInt64(),
 "size_max_plan": plan.SizeMax.ValueInt64(),
 "req": resizeReq,
@@ -247,7 +240,7 @@ func SizeMaxUpdateDisk(ctx context.Context, diskId uint64, plan, state *models.R
 }
 tflog.Info(ctx, "SizeMaxUpdateDisk: response from CloudAPI().Disks().Resize2", map[string]any{
-"disk_id": plan.Id.ValueString(),
+"disk_id": plan.ID.ValueString(),
 "response": res})
 return nil
@@ -255,16 +248,16 @@ func SizeMaxUpdateDisk(ctx context.Context, diskId uint64, plan, state *models.R
 // NameUpdateDisk renames disk.
 // Returns error in case of failures.
-func NameUpdateDisk(ctx context.Context, diskId uint64, plan *models.ResourceDiskModel, c *client.Client) diag.Diagnostics {
+func NameUpdateDisk(ctx context.Context, diskID uint64, plan *models.ResourceDiskModel, c *client.Client) diag.Diagnostics {
 var diags diag.Diagnostics
 renameReq := disks.RenameRequest{
-DiskID: diskId,
+DiskID: diskID,
 Name: plan.DiskName.ValueString(),
 }
 tflog.Info(ctx, "NameUpdateDisk: before calling CloudAPI().Disks().Rename", map[string]any{
-"disk_id": plan.Id.ValueString(),
+"disk_id": plan.ID.ValueString(),
 "disk_name_plan": plan.DiskName.ValueString(),
 "req": renameReq,
 })
@@ -277,7 +270,7 @@ func NameUpdateDisk(ctx context.Context, diskId uint64, plan *models.ResourceDis
 }
 tflog.Info(ctx, "NameUpdateDisk: response from CloudAPI().Disks().Rename", map[string]any{
-"disk_id": plan.Id.ValueString(),
+"disk_id": plan.ID.ValueString(),
 "response": res})
 return nil
@@ -285,16 +278,16 @@ func NameUpdateDisk(ctx context.Context, diskId uint64, plan *models.ResourceDis
 // LimitIOUpdateDisk changes IO limits that user specified in iotune field for updated resource.
 // In case of failure returns errors.
-func LimitIOUpdateDisk(ctx context.Context, diskId uint64, plan *models.ResourceDiskModel, c *client.Client) diag.Diagnostics {
+func LimitIOUpdateDisk(ctx context.Context, diskID uint64, plan *models.ResourceDiskModel, c *client.Client) diag.Diagnostics {
 diags := diag.Diagnostics{}
 limitIOReq := disks.LimitIORequest{
-DiskID: diskId,
+DiskID: diskID,
 }
 var iotunePlan models.IOTuneModel
 // plan.IOTune is not null as it was checked before call
-tflog.Info(ctx, "LimitIOUpdateDisk: new iotune specified", map[string]any{"disk_id": diskId})
+tflog.Info(ctx, "LimitIOUpdateDisk: new iotune specified", map[string]any{"disk_id": diskID})
 diags.Append(plan.IOTune.As(ctx, &iotunePlan, basetypes.ObjectAsOptions{})...)
 if diags.HasError() {
 tflog.Error(ctx, "LimitIOUpdateDisk: cannot populate iotune with plan.IOTune object element")
@@ -320,7 +313,7 @@ func LimitIOUpdateDisk(ctx context.Context, diskId uint64, plan *models.Resource
 limitIOReq.WriteIOPSSecMax = uint64(iotunePlan.WriteIOPSSecMax.ValueInt64())
 tflog.Info(ctx, "LimitIOUpdateDisk: before calling CloudAPI().Disks().LimitIO", map[string]any{
-"disk_id": diskId,
+"disk_id": diskID,
 "limitIOReq": limitIOReq})
 res, err := c.CloudAPI().Disks().LimitIO(ctx, limitIOReq)
 if err != nil {
@@ -329,7 +322,7 @@ func LimitIOUpdateDisk(ctx context.Context, diskId uint64, plan *models.Resource
 return diags
 }
 tflog.Info(ctx, "LimitIOUpdateDisk: response from CloudAPI().Disks().LimitIO", map[string]any{
-"disk_id": diskId,
+"disk_id": diskID,
 "response": res})
 return nil
@@ -337,34 +330,34 @@ func LimitIOUpdateDisk(ctx context.Context, diskId uint64, plan *models.Resource
 // ShareableUpdateDisk shares or unshares disk.
 // In case of failure returns errors.
-func ShareableUpdateDisk(ctx context.Context, diskId uint64, share bool, c *client.Client) diag.Diagnostics {
+func ShareableUpdateDisk(ctx context.Context, diskID uint64, share bool, c *client.Client) diag.Diagnostics {
 diags := diag.Diagnostics{}
 // share
 if share {
-tflog.Info(ctx, "ShareableUpdateDisk: before calling CloudAPI().Disks().Share", map[string]any{"disk_id": diskId})
-res, err := c.CloudAPI().Disks().Share(ctx, disks.ShareRequest{DiskID: diskId})
+tflog.Info(ctx, "ShareableUpdateDisk: before calling CloudAPI().Disks().Share", map[string]any{"disk_id": diskID})
+res, err := c.CloudAPI().Disks().Share(ctx, disks.ShareRequest{DiskID: diskID})
 if err != nil {
 diags.AddError("ShareableUpdateDisk: Unable to share Disk",
 err.Error())
 return diags
 }
 tflog.Info(ctx, "ShareableUpdateDisk: response from CloudAPI().Disks().Share", map[string]any{
-"disk_id": diskId,
+"disk_id": diskID,
 "response": res})
 }
 // unshare
 if !share {
-tflog.Info(ctx, "ShareableUpdateDisk: before calling CloudAPI().Disks().Unshare", map[string]any{"disk_id": diskId})
-res, err := c.CloudAPI().Disks().Unshare(ctx, disks.UnshareRequest{DiskID: diskId})
+tflog.Info(ctx, "ShareableUpdateDisk: before calling CloudAPI().Disks().Unshare", map[string]any{"disk_id": diskID})
+res, err := c.CloudAPI().Disks().Unshare(ctx, disks.UnshareRequest{DiskID: diskID})
 if err != nil {
 diags.AddError("ShareableUpdateDisk: Unable to unshare Disk",
 err.Error())
 return diags
 }
 tflog.Info(ctx, "ShareableUpdateDisk: response from CloudAPI().Disks().Unshare", map[string]any{
-"disk_id": diskId,
+"disk_id": diskID,
 "response": res})
 }
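Most of the churn above is the `diskId` → `diskID` spelling cleanup, but `DiskReadStatus` still performs a real conversion: Terraform stores the resource ID as a string, while the CloudAPI client expects `uint64`. A self-contained sketch of that parsing step:

```go
package main

import "strconv"

// parseDiskID mirrors the state-ID parsing in DiskReadStatus: the string ID
// from Terraform state is converted to the uint64 the disks client expects,
// and a malformed ID surfaces as an error before any API call is made.
func parseDiskID(id string) (uint64, error) {
	return strconv.ParseUint(id, 10, 64)
}

func main() {
	if v, err := parseDiskID("42"); err == nil {
		_ = v
	}
}
```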

@@ -58,6 +58,7 @@ func ExtNetDataSource(ctx context.Context, state *models.DataSourceExtNetModel,
 NetName: types.StringValue(recordExtNet.Name),
 Network: types.StringValue(recordExtNet.Network),
 NetworkID: types.Int64Value(int64(recordExtNet.NetworkID)),
+NTP: flattens.FlattenSimpleTypeToList(ctx, types.StringType, recordExtNet.NTP),
 PreReservationsNum: types.Int64Value(int64(recordExtNet.PreReservationsNum)),
 Prefix: types.Int64Value(int64(recordExtNet.Prefix)),
 PriVNFDevID: types.Int64Value(int64(recordExtNet.PriVNFDevID)),

@@ -36,6 +36,7 @@ func ExtNetListDataSource(ctx context.Context, state *models.DataSourceExtNetLis
 Network: state.Network,
 VLANID: state.VLANID,
 VNFDevID: state.VNFDevID,
+OVSBridge: state.OVSBridge,
 Status: state.Status,
 Page: state.Page,
 Size: state.Size,

@@ -30,6 +30,7 @@ type DataSourceExtNetModel struct {
 NetName types.String `tfsdk:"net_name"`
 Network types.String `tfsdk:"network"`
 NetworkID types.Int64 `tfsdk:"network_id"`
+NTP types.List `tfsdk:"ntp"`
 PreReservationsNum types.Int64 `tfsdk:"pre_reservations_num"`
 Prefix types.Int64 `tfsdk:"prefix"`
 PriVNFDevID types.Int64 `tfsdk:"pri_vnf_dev_id"`

@@ -13,6 +13,7 @@ type DataSourceExtNetListModel struct {
 Network types.String `tfsdk:"network"`
 VLANID types.Int64 `tfsdk:"vlan_id"`
 VNFDevID types.Int64 `tfsdk:"vnfdev_id"`
+OVSBridge types.String `tfsdk:"ovs_bridge"`
 Status types.String `tfsdk:"status"`
 Page types.Int64 `tfsdk:"page"`
 Size types.Int64 `tfsdk:"size"`

@@ -31,6 +31,10 @@ func MakeSchemaDataSourceExtNet() map[string]schema.Attribute {
 "default": schema.BoolAttribute{
 Computed: true,
 },
+"ntp": schema.ListAttribute{
+Computed: true,
+ElementType: types.StringType,
+},
 "default_qos": schema.SingleNestedAttribute{
 Computed: true,
 Attributes: map[string]schema.Attribute{

@@ -31,6 +31,10 @@ func MakeSchemaDataSourceExtNetList() map[string]schema.Attribute {
 Optional: true,
 Description: "find by vnfdevices id",
 },
+"ovs_bridge": schema.StringAttribute{
+Optional: true,
+Description: "find by ovs_bridge",
+},
 "status": schema.StringAttribute{
 Optional: true,
 Description: "find by status",

@@ -34,6 +34,9 @@ func ExtNetListCheckPresence(ctx context.Context, plan *models.DataSourceExtNetL
 if !plan.VNFDevID.IsNull() {
 extnetListReq.VNFDevID = uint64(plan.VNFDevID.ValueInt64())
 }
+if !plan.OVSBridge.IsNull() {
+extnetListReq.OVSBridge = plan.OVSBridge.ValueString()
+}
 if !plan.Status.IsNull() {
 extnetListReq.Status = plan.Status.ValueString()
 }
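The new `ovs_bridge` filter follows the same guard pattern as the other list filters in `ExtNetListCheckPresence`: the request field is populated only when the user actually supplied a value, so an empty filter is never sent to the API. A stdlib-only sketch with an illustrative request struct (not the SDK's), using a nil pointer to stand in for a null Terraform value:

```go
package main

// extnetListReq is an illustrative stand-in for the SDK's list request.
type extnetListReq struct {
	OVSBridge string
}

// addOVSBridgeFilter sets the filter only for a non-nil plan value,
// mirroring the !plan.OVSBridge.IsNull() guard in the diff.
func addOVSBridgeFilter(req *extnetListReq, planned *string) {
	if planned != nil {
		req.OVSBridge = *planned
	}
}

func main() {
	req := extnetListReq{}
	br := "br-ex" // hypothetical bridge name
	addOVSBridgeFilter(&req, &br)
}
```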

@@ -33,16 +33,16 @@ func FlipgroupResource(ctx context.Context, plan *models.ResourceFLIPGroupModel,
 }
 *plan = models.ResourceFLIPGroupModel{
-AccountID: plan.AccountID,
-Name: plan.Name,
-NetType: plan.NetType,
-NetID: plan.NetID,
-ClientType: plan.ClientType,
+AccountID: types.Int64Value(int64(recordFG.AccountID)),
+Name: types.StringValue(recordFG.Name),
+NetType: types.StringValue(recordFG.NetType),
+NetID: types.Int64Value(int64(recordFG.NetID)),
+ClientType: types.StringValue(recordFG.ClientType),
 Timeouts: plan.Timeouts,
-Description: plan.Description,
-ClientIDs: plan.ClientIDs,
+Description: types.StringValue(recordFG.Description),
+ClientIDs: flattens.FlattenSimpleTypeToList(ctx, types.Int64Type, recordFG.ClientIDs),
 ID: plan.ID,
-IP: plan.IP,
+IP: types.StringValue(recordFG.IP),
 AccountName: types.StringValue(recordFG.AccountName),
 ConnID: types.Int64Value(int64(recordFG.ConnID)),
@@ -69,10 +69,6 @@ func FlipgroupResource(ctx context.Context, plan *models.ResourceFLIPGroupModel,
 plan.IP = types.StringValue(recordFG.IP)
 }
-if plan.ClientIDs.IsUnknown() {
-plan.ClientIDs = flattens.FlattenSimpleTypeToList(ctx, types.Int64Type, recordFG.ClientIDs)
-}
 tflog.Info(ctx, "End flattens.FlipgroupResource", map[string]any{"flipgroup_id": plan.ID.ValueString()})
 return nil
 }

@@ -62,6 +62,7 @@ func DataSourceImage(ctx context.Context, state *models.RecordImageModel, c *cli
 ResID: types.StringValue(image.ResID),
 RescueCD: types.BoolValue(image.RescueCD),
 SepID: types.Int64Value(int64(image.SepID)),
+SnapshotID: types.StringValue(image.SnapshotID),
 Size: types.Int64Value(int64(image.Size)),
 Status: types.StringValue(image.Status),
 TechStatus: types.StringValue(image.TechStatus),
@@ -77,7 +78,7 @@ func DataSourceImage(ctx context.Context, state *models.RecordImageModel, c *cli
 if diags != nil {
 tflog.Error(ctx, fmt.Sprint("Error flattenDrivers", diags))
 }
-state.PresentTo, diags = types.ListValueFrom(ctx, types.Int64Type, image.PresentTo)
+state.PresentTo, diags = types.MapValueFrom(ctx, types.Int64Type, image.PresentTo)
 if diags != nil {
 tflog.Error(ctx, fmt.Sprint("Error flattenPresentTo", diags))
 }

@@ -71,6 +71,7 @@ func ResourceImage(ctx context.Context, plan *models.ImageResourceModel, c *clie
 ResID: types.StringValue(image.ResID),
 RescueCD: types.BoolValue(image.RescueCD),
 Size: types.Int64Value(int64(image.Size)),
+SnapshotID: types.StringValue(image.SnapshotID),
 Status: types.StringValue(image.Status),
 TechStatus: types.StringValue(image.TechStatus),
 Version: types.StringValue(image.Version),
@@ -84,7 +85,7 @@ func ResourceImage(ctx context.Context, plan *models.ImageResourceModel, c *clie
 if diags != nil {
 tflog.Error(ctx, fmt.Sprint("Error flattenDrivers", diags))
 }
-plan.PresentTo, diags = types.ListValueFrom(ctx, types.Int64Type, image.PresentTo)
+plan.PresentTo, diags = types.MapValueFrom(ctx, types.Int64Type, image.PresentTo)
 if diags != nil {
 tflog.Error(ctx, fmt.Sprint("Error flattenPresentTo", diags))
 }

@ -2,6 +2,7 @@ package flattens
 import (
 "context"
+"encoding/json"
 "fmt"
 "strconv"
@ -30,6 +31,7 @@ func ResourceImageVirtual(ctx context.Context, plan *models.ImageVirtualResource
 return diags
 }
+cdPresentedTo, _ := json.Marshal(image.CdPresentedTo)
 *plan = models.ImageVirtualResourceModel{
 ImageName: types.StringValue(image.Name),
 LinkTo: types.Int64Value(int64(image.LinkTo)),
@ -42,6 +44,7 @@ func ResourceImageVirtual(ctx context.Context, plan *models.ImageVirtualResource
 Architecture: types.StringValue(image.Architecture),
 BootType: types.StringValue(image.BootType),
 Bootable: types.BoolValue(image.Bootable),
+CdPresentedTo: types.StringValue(string(cdPresentedTo)),
 ComputeCIID: types.Int64Value(int64(image.ComputeCIID)),
 DeletedTime: types.Int64Value(int64(image.DeletedTime)),
 Description: types.StringValue(image.Description),
@ -52,6 +55,7 @@ func ResourceImageVirtual(ctx context.Context, plan *models.ImageVirtualResource
 HotResize: types.BoolValue(image.HotResize),
 LastModified: types.Int64Value(int64(image.LastModified)),
 Milestones: types.Int64Value(int64(image.Milestones)),
+NetworkInterfaceNaming: types.StringValue(image.NetworkInterfaceNaming),
 ImageId: types.Int64Value(int64(image.ID)),
 ImageType: types.StringValue(image.Type),
 Password: types.StringValue(image.Password),
@ -62,6 +66,7 @@ func ResourceImageVirtual(ctx context.Context, plan *models.ImageVirtualResource
 RescueCD: types.BoolValue(image.RescueCD),
 SepID: types.Int64Value(int64(image.SepID)),
 Size: types.Int64Value(int64(image.Size)),
+SnapshotID: types.StringValue(image.SnapshotID),
 Status: types.StringValue(image.Status),
 TechStatus: types.StringValue(image.TechStatus),
 Username: types.StringValue(image.Username),
@ -76,7 +81,7 @@ func ResourceImageVirtual(ctx context.Context, plan *models.ImageVirtualResource
 if diags != nil {
 tflog.Error(ctx, fmt.Sprint("Error flattenDrivers", diags))
 }
-plan.PresentTo, diags = types.ListValueFrom(ctx, types.Int64Type, image.PresentTo)
+plan.PresentTo, diags = types.MapValueFrom(ctx, types.Int64Type, image.PresentTo)
 if diags != nil {
 tflog.Error(ctx, fmt.Sprint("Error flattenPresentTo", diags))
 }

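The `cd_presented_to` attribute above is flattened by serializing the raw API value once with `json.Marshal` and storing the result as a single string. A self-contained sketch of that step (the `[]uint64` element type is an assumption; the diff does not show `CdPresentedTo`'s declared type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// flattenCdPresentedTo mirrors the diff's
// `cdPresentedTo, _ := json.Marshal(image.CdPresentedTo)` step: the raw
// slice is serialized and kept in state as one string attribute.
func flattenCdPresentedTo(raw []uint64) string {
	b, _ := json.Marshal(raw) // marshalling a slice of integers cannot fail
	return string(b)
}

func main() {
	fmt.Println(flattenCdPresentedTo([]uint64{101, 202})) // [101,202]
	fmt.Println(flattenCdPresentedTo([]uint64{}))         // []
}
```

Storing the serialized form keeps the schema a simple `types.String` at the cost of making the value opaque to Terraform expressions; consumers would need `jsondecode()` to index into it.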
@ -37,12 +37,13 @@ type RecordImageModel struct {
 Password types.String `tfsdk:"password"`
 NetworkInterfaceNaming types.String `tfsdk:"network_interface_naming"`
 PoolName types.String `tfsdk:"pool_name"`
-PresentTo types.List `tfsdk:"present_to"`
+PresentTo types.Map `tfsdk:"present_to"`
 ProviderName types.String `tfsdk:"provider_name"`
 PurgeAttempts types.Int64 `tfsdk:"purge_attempts"`
 ResID types.String `tfsdk:"res_id"`
 RescueCD types.Bool `tfsdk:"rescuecd"`
 SepID types.Int64 `tfsdk:"sep_id"`
+SnapshotID types.String `tfsdk:"snapshot_id"`
 SharedWith types.List `tfsdk:"shared_with"`
 Size types.Int64 `tfsdk:"size"`
 Status types.String `tfsdk:"status"`

@ -43,12 +43,13 @@ type ImageResourceModel struct {
 LinkTo types.Int64 `tfsdk:"link_to"`
 Milestones types.Int64 `tfsdk:"milestones"`
 ImageId types.Int64 `tfsdk:"image_id"`
-PresentTo types.List `tfsdk:"present_to"`
+PresentTo types.Map `tfsdk:"present_to"`
 ProviderName types.String `tfsdk:"provider_name"`
 PurgeAttempts types.Int64 `tfsdk:"purge_attempts"`
 ResID types.String `tfsdk:"res_id"`
 RescueCD types.Bool `tfsdk:"rescuecd"`
 SharedWith types.List `tfsdk:"shared_with"`
+SnapshotID types.String `tfsdk:"snapshot_id"`
 Size types.Int64 `tfsdk:"size"`
 Status types.String `tfsdk:"status"`
 TechStatus types.String `tfsdk:"tech_status"`

@ -20,6 +20,7 @@ type ImageVirtualResourceModel struct {
 Architecture types.String `tfsdk:"architecture"`
 BootType types.String `tfsdk:"boot_type"`
 Bootable types.Bool `tfsdk:"bootable"`
+CdPresentedTo types.String `tfsdk:"cd_presented_to"`
 ComputeCIID types.Int64 `tfsdk:"compute_ci_id"`
 DeletedTime types.Int64 `tfsdk:"deleted_time"`
 Description types.String `tfsdk:"desc"`
@ -33,9 +34,10 @@ type ImageVirtualResourceModel struct {
 Milestones types.Int64 `tfsdk:"milestones"`
 ImageId types.Int64 `tfsdk:"image_id"`
 ImageType types.String `tfsdk:"image_type"`
+NetworkInterfaceNaming types.String `tfsdk:"network_interface_naming"`
 Password types.String `tfsdk:"password"`
 PoolName types.String `tfsdk:"pool_name"`
-PresentTo types.List `tfsdk:"present_to"`
+PresentTo types.Map `tfsdk:"present_to"`
 ProviderName types.String `tfsdk:"provider_name"`
 PurgeAttempts types.Int64 `tfsdk:"purge_attempts"`
 ResID types.String `tfsdk:"res_id"`
@ -43,6 +45,7 @@ type ImageVirtualResourceModel struct {
 SepID types.Int64 `tfsdk:"sep_id"`
 SharedWith types.List `tfsdk:"shared_with"`
 Size types.Int64 `tfsdk:"size"`
+SnapshotID types.String `tfsdk:"snapshot_id"`
 Status types.String `tfsdk:"status"`
 TechStatus types.String `tfsdk:"tech_status"`
 Username types.String `tfsdk:"username"`

Some files were not shown because too many files have changed in this diff.
