2026-04-23 12:46:24 +03:00
parent 156b0a2d0c
commit af79f6ab3e
16 changed files with 624 additions and 53 deletions


@@ -1,38 +1,8 @@
-## Version 4.11.1
+## Version 4.11.2
### Added
-#### disks
-| Task<br>ID | Description |
-| --- | --- |
-| BATF-1216 | Computed field `blk_discard` in resource `resource_disk` in cloudapi/disks |
#### kvmvm
| Task<br>ID | Description |
| --- | --- |
-| BATF-1216 | Computed field `blk_discard` in resource `resource_kvmvm` in cloudapi/kvmvm |
+| BATF-1270 | Optional field `iotune` in the `disks` block in resource `decort_kvmvm` in cloudapi/kvmvm and in resource `decort_cb_kvmvm` in cloudbroker/kvmvm |
-### Fixed
-#### disks
-| Task<br>ID | Description |
-| --- | --- |
-| BATF-1186 | Type of the `cache` field changed from optional to computed in resource `decort_disk` in cloudapi/disks |
-#### kvmvm
-| Task<br>ID | Description |
-| --- | --- |
-| BATF-1220 | Errors when applying a new configuration after import in resource `decort_kvmvm` in cloudapi/kvmvm and in resource `decort_cb_kvmvm` in cloudbroker/kvmvm |
-#### rg
-| Task<br>ID | Description |
-| --- | --- |
-| BATF-1214 | Optimized creation of resource groups with the `def_net_type` field specified in `decort_resgroup` in cloudapi/rg and in `decort_cb_rg` in cloudbroker/rg |
-| BATF-1219 | Display of the `updated_by` and `updated_time` fields in datasource `decort_rg_list_computes` in cloudapi/rg |
-#### zone
-| Task<br>ID | Description |
-| --- | --- |
-| BATF-1192 | Display error in datasource `decort_zone` in cloudapi/zone |


@@ -7,7 +7,7 @@ ZIPDIR = ./zip
BINARY=${NAME}
WORKPATH= ./examples/terraform.d/plugins/${HOSTNAME}/${NAMESPACE}/${NAMESPACE}/${VERSION}/${OS_ARCH}
MAINPATH = ./cmd/decort/
-VERSION=4.11.1
+VERSION=4.11.2
OS_ARCH=$(shell go env GOHOSTOS)_$(shell go env GOHOSTARCH)
FILES = ${BINARY}_${VERSION}_darwin_amd64\


@@ -17,6 +17,7 @@ description: |-
### Required
- `authenticator` (String) Authentication mode to use when connecting to DECORT cloud API. Should be one of 'decs3o', 'legacy', 'jwt' or 'bvs'.
+- `controller_url` (String) URL of DECORT Cloud controller to use. API calls will be directed to this URL.
### Optional
@@ -25,7 +26,6 @@ description: |-
- `app_secret` (String) Application secret to access DECORT cloud API in 'decs3o' and 'bvs' authentication mode.
- `bvs_password` (String) User password for DECORT cloud API operations in 'bvs' authentication mode.
- `bvs_user` (String) User name for DECORT cloud API operations in 'bvs' authentication mode.
-- `controller_url` (String) URL of DECORT Cloud controller to use. API calls will be directed to this URL.
- `domain` (String) User password for DECORT cloud API operations in 'bvs' authentication mode.
- `jwt` (String) JWT to access DECORT cloud API in 'jwt' authentication mode.
- `oauth2_url` (String) OAuth2 application URL in 'decs3o' and 'bvs' authentication mode.


@@ -188,6 +188,7 @@ Optional:
- `desc` (String) Optional description
- `disk_type` (String) The type of disk in terms of its role in compute: 'B=Boot, D=Data'
- `image_id` (Number) Specify image id for create disk from template
- `iotune` (Block List, Max: 1) (see [below for nested schema](#nestedblock--disks--iotune))
- `node_ids` (Set of Number)
- `permanently` (Boolean) Disk deletion status
- `pool` (String) Pool name; by default will be chosen automatically
@@ -208,6 +209,26 @@ Read-Only:
- `to_clean` (Boolean)
- `update_time` (Number)
<a id="nestedblock--disks--iotune"></a>
### Nested Schema for `disks.iotune`
Optional:
- `read_bytes_sec` (Number)
- `read_bytes_sec_max` (Number)
- `read_iops_sec` (Number)
- `read_iops_sec_max` (Number)
- `size_iops_sec` (Number)
- `total_bytes_sec` (Number)
- `total_bytes_sec_max` (Number)
- `total_iops_sec` (Number)
- `total_iops_sec_max` (Number)
- `write_bytes_sec` (Number)
- `write_bytes_sec_max` (Number)
- `write_iops_sec` (Number)
- `write_iops_sec_max` (Number)
<a id="nestedblock--libvirt_settings"></a>
### Nested Schema for `libvirt_settings`


@@ -179,6 +179,7 @@ Optional:
- `desc` (String) Optional description
- `disk_type` (String) The type of disk in terms of its role in compute: 'B=Boot, D=Data'
- `image_id` (Number) Specify image id for create disk from template
- `iotune` (Block List, Max: 1) (see [below for nested schema](#nestedblock--disks--iotune))
- `permanently` (Boolean) Disk deletion status
- `pool` (String) Pool name; by default will be chosen automatically
- `sep_id` (Number) Storage endpoint provider ID; by default the same with boot disk
@@ -200,6 +201,26 @@ Read-Only:
- `to_clean` (Boolean)
- `updated_time` (Number)
<a id="nestedblock--disks--iotune"></a>
### Nested Schema for `disks.iotune`
Optional:
- `read_bytes_sec` (Number)
- `read_bytes_sec_max` (Number)
- `read_iops_sec` (Number)
- `read_iops_sec_max` (Number)
- `size_iops_sec` (Number)
- `total_bytes_sec` (Number)
- `total_bytes_sec_max` (Number)
- `total_iops_sec` (Number)
- `total_iops_sec_max` (Number)
- `write_bytes_sec` (Number)
- `write_bytes_sec_max` (Number)
- `write_iops_sec` (Number)
- `write_iops_sec_max` (Number)
<a id="nestedblock--network"></a>
### Nested Schema for `network`
@@ -311,6 +332,7 @@ Read-Only:
- `disk_name` (String)
- `disk_type` (String)
- `image_id` (Number)
- `iotune` (List of Object) (see [below for nested schema](#nestedobjatt--boot_disk--iotune))
- `permanently` (Boolean)
- `pool` (String)
- `present_to` (Map of Number)
@@ -323,6 +345,26 @@ Read-Only:
- `to_clean` (Boolean)
- `updated_time` (Number)
<a id="nestedobjatt--boot_disk--iotune"></a>
### Nested Schema for `boot_disk.iotune`
Read-Only:
- `read_bytes_sec` (Number)
- `read_bytes_sec_max` (Number)
- `read_iops_sec` (Number)
- `read_iops_sec_max` (Number)
- `size_iops_sec` (Number)
- `total_bytes_sec` (Number)
- `total_bytes_sec_max` (Number)
- `total_iops_sec` (Number)
- `total_iops_sec_max` (Number)
- `write_bytes_sec` (Number)
- `write_bytes_sec_max` (Number)
- `write_iops_sec` (Number)
- `write_iops_sec_max` (Number)
<a id="nestedatt--interfaces"></a>
### Nested Schema for `interfaces`

go.mod

@@ -9,7 +9,7 @@ require (
github.com/hashicorp/terraform-plugin-sdk/v2 v2.38.1
github.com/sirupsen/logrus v1.9.0
golang.org/x/net v0.44.0
-repository.basistech.ru/BASIS/decort-golang-sdk v1.13.8
+repository.basistech.ru/BASIS/decort-golang-sdk v1.13.9
)
require (

go.sum

@@ -318,5 +318,5 @@ gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-repository.basistech.ru/BASIS/decort-golang-sdk v1.13.8 h1:Uc+GBfbEg1dQPMuFfqNhKZmMO19N7OvdCNHIFnLLkp0=
+repository.basistech.ru/BASIS/decort-golang-sdk v1.13.9 h1:jrfwiJBuHbt3JlVwD6DWF3E/H9pyDOJOvb8F5sQ/mhM=
-repository.basistech.ru/BASIS/decort-golang-sdk v1.13.8/go.mod h1:S/f7GxwWcE88eFpORV+I9xqEf8zDW5srQHpG2XQCLZM=
+repository.basistech.ru/BASIS/decort-golang-sdk v1.13.9/go.mod h1:S/f7GxwWcE88eFpORV+I9xqEf8zDW5srQHpG2XQCLZM=


@@ -335,6 +335,7 @@ func flattenComputeDisksDemo(disksList compute.ListComputeDisks, disksBlocks, ex
"permanently": pernamentlyValue, "permanently": pernamentlyValue,
"cache": disk.Cache, "cache": disk.Cache,
"blk_discard": disk.BLKDiscard, "blk_discard": disk.BLKDiscard,
"iotune": flattenIotune(disk.IOTune),
} }
res = append(res, temp) res = append(res, temp)
indexDataDisks++ indexDataDisks++


@@ -390,6 +390,12 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if _, ok := d.GetOk("disks"); ok {
if err := utilityComputeCreateIOTune(ctx, d, m); err != nil {
warnings.Add(err)
}
}
if !cleanup {
if enabled, ok := d.GetOk("enabled"); ok {
@@ -1124,6 +1130,7 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
resizedDisks := make([]interface{}, 0)
renamedDisks := make([]interface{}, 0)
changeStoragePolicyDisks := make([]interface{}, 0)
iotuneUpdatedDisks := make([]interface{}, 0)
oldDisks, newDisks := d.GetChange("disks")
oldConv := oldDisks.([]interface{})
@@ -1164,6 +1171,9 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
if isChangeStoragePolicy(oldConv, el) {
changeStoragePolicyDisks = append(changeStoragePolicyDisks, el)
}
if isChangeIOTuneDisk(oldConv, el) {
iotuneUpdatedDisks = append(iotuneUpdatedDisks, el)
}
}
if len(deletedDisks) > 0 { if len(deletedDisks) > 0 {
@@ -1216,10 +1226,33 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
if diskConv["image_id"].(int) != 0 {
req.ImageID = uint64(diskConv["image_id"].(int))
}
-_, err := c.CloudAPI().Compute().DiskAdd(ctx, req)
+diskID, err := c.CloudAPI().Compute().DiskAdd(ctx, req)
if err != nil {
return diag.FromErr(err)
}
if iotuneRaw, ok := diskConv["iotune"].([]interface{}); ok && len(iotuneRaw) > 0 {
iotuneMap := iotuneRaw[0].(map[string]interface{})
limitReq := disks.LimitIORequest{
DiskID: diskID,
ReadBytesSec: uint64(iotuneMap["read_bytes_sec"].(int)),
ReadBytesSecMax: uint64(iotuneMap["read_bytes_sec_max"].(int)),
ReadIOPSSec: uint64(iotuneMap["read_iops_sec"].(int)),
ReadIOPSSecMax: uint64(iotuneMap["read_iops_sec_max"].(int)),
SizeIOPSSec: uint64(iotuneMap["size_iops_sec"].(int)),
TotalBytesSec: uint64(iotuneMap["total_bytes_sec"].(int)),
TotalBytesSecMax: uint64(iotuneMap["total_bytes_sec_max"].(int)),
TotalIOPSSec: uint64(iotuneMap["total_iops_sec"].(int)),
TotalIOPSSecMax: uint64(iotuneMap["total_iops_sec_max"].(int)),
WriteBytesSec: uint64(iotuneMap["write_bytes_sec"].(int)),
WriteBytesSecMax: uint64(iotuneMap["write_bytes_sec_max"].(int)),
WriteIOPSSec: uint64(iotuneMap["write_iops_sec"].(int)),
WriteIOPSSecMax: uint64(iotuneMap["write_iops_sec_max"].(int)),
}
_, err = c.CloudAPI().Disks().LimitIO(ctx, limitReq)
if err != nil {
return diag.FromErr(err)
}
}
}
}
@@ -1273,6 +1306,44 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
}
}
if len(iotuneUpdatedDisks) > 0 {
for _, disk := range iotuneUpdatedDisks {
diskConv := disk.(map[string]interface{})
if diskConv["disk_type"].(string) == "B" {
continue
}
diskID := uint64(diskConv["disk_id"].(int))
if diskID == 0 {
continue
}
iotuneRaw, ok := diskConv["iotune"].([]interface{})
if !ok || len(iotuneRaw) == 0 {
continue
}
iotuneMap := iotuneRaw[0].(map[string]interface{})
req := disks.LimitIORequest{
DiskID: diskID,
ReadBytesSec: uint64(iotuneMap["read_bytes_sec"].(int)),
ReadBytesSecMax: uint64(iotuneMap["read_bytes_sec_max"].(int)),
ReadIOPSSec: uint64(iotuneMap["read_iops_sec"].(int)),
ReadIOPSSecMax: uint64(iotuneMap["read_iops_sec_max"].(int)),
SizeIOPSSec: uint64(iotuneMap["size_iops_sec"].(int)),
TotalBytesSec: uint64(iotuneMap["total_bytes_sec"].(int)),
TotalBytesSecMax: uint64(iotuneMap["total_bytes_sec_max"].(int)),
TotalIOPSSec: uint64(iotuneMap["total_iops_sec"].(int)),
TotalIOPSSecMax: uint64(iotuneMap["total_iops_sec_max"].(int)),
WriteBytesSec: uint64(iotuneMap["write_bytes_sec"].(int)),
WriteBytesSecMax: uint64(iotuneMap["write_bytes_sec_max"].(int)),
WriteIOPSSec: uint64(iotuneMap["write_iops_sec"].(int)),
WriteIOPSSecMax: uint64(iotuneMap["write_iops_sec_max"].(int)),
}
_, err := c.CloudAPI().Disks().LimitIO(ctx, req)
if err != nil {
return diag.FromErr(err)
}
}
}
}
if d.HasChange("affinity_label") {
@@ -1862,6 +1933,40 @@ func isChangeStoragePolicy(els []interface{}, el interface{}) bool {
return false
}
func isChangeIOTuneDisk(els []interface{}, el interface{}) bool {
for _, elOld := range els {
elOldConv := elOld.(map[string]interface{})
elConv := el.(map[string]interface{})
if elOldConv["disk_id"].(int) != elConv["disk_id"].(int) {
continue
}
oldIOTune := elOldConv["iotune"].([]interface{})
newIOTune := elConv["iotune"].([]interface{})
if len(oldIOTune) == 0 && len(newIOTune) == 0 {
return false
}
if len(oldIOTune) == 0 || len(newIOTune) == 0 {
return true
}
oldMap := oldIOTune[0].(map[string]interface{})
newMap := newIOTune[0].(map[string]interface{})
return oldMap["read_bytes_sec"].(int) != newMap["read_bytes_sec"].(int) ||
oldMap["read_bytes_sec_max"].(int) != newMap["read_bytes_sec_max"].(int) ||
oldMap["read_iops_sec"].(int) != newMap["read_iops_sec"].(int) ||
oldMap["read_iops_sec_max"].(int) != newMap["read_iops_sec_max"].(int) ||
oldMap["size_iops_sec"].(int) != newMap["size_iops_sec"].(int) ||
oldMap["total_bytes_sec"].(int) != newMap["total_bytes_sec"].(int) ||
oldMap["total_bytes_sec_max"].(int) != newMap["total_bytes_sec_max"].(int) ||
oldMap["total_iops_sec"].(int) != newMap["total_iops_sec"].(int) ||
oldMap["total_iops_sec_max"].(int) != newMap["total_iops_sec_max"].(int) ||
oldMap["write_bytes_sec"].(int) != newMap["write_bytes_sec"].(int) ||
oldMap["write_bytes_sec_max"].(int) != newMap["write_bytes_sec_max"].(int) ||
oldMap["write_iops_sec"].(int) != newMap["write_iops_sec"].(int) ||
oldMap["write_iops_sec_max"].(int) != newMap["write_iops_sec_max"].(int)
}
return false
}
func isContainsDisk(els []interface{}, el interface{}) bool {
for _, elOld := range els {
elOldConv := elOld.(map[string]interface{})
@@ -1967,6 +2072,81 @@ func disksSubresourceSchemaMake() map[string]*schema.Schema {
Optional: true,
Description: "Disk deletion status",
},
"iotune": {
Type: schema.TypeList,
Optional: true,
Computed: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"read_bytes_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"read_bytes_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"read_iops_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"read_iops_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"size_iops_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"total_bytes_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"total_bytes_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"total_iops_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"total_iops_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"write_bytes_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"write_bytes_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"write_iops_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"write_iops_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
},
},
},
"disk_id": {
Type: schema.TypeInt,
Computed: true,


@@ -43,6 +43,7 @@ import (
"github.com/hashicorp/go-cty/cty"
log "github.com/sirupsen/logrus"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudapi/compute"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudapi/disks"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/controller"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
@@ -628,3 +629,84 @@ func enabledNetwork(rawNetworkConfig cty.Value, netID uint64, netType string) bo
return false
}
func getComputeDiskIDsAPI(disksList compute.ListComputeDisks, disksBlocks, extraDisks []interface{}, bootDiskId uint64) []interface{} {
res := make([]interface{}, 0)
if len(disksBlocks) == 0 {
return res
}
sort.Slice(disksList, func(i, j int) bool {
return disksList[i].ID < disksList[j].ID
})
for _, disk := range disksList {
if disk.ID == bootDiskId || findInExtraDisks(uint(disk.ID), extraDisks) {
continue
}
res = append(res, disk.ID)
}
return res
}
func utilityComputeCreateIOTune(ctx context.Context, d *schema.ResourceData, m interface{}) error {
c := m.(*controller.ControllerCfg)
diskList := d.Get("disks").([]interface{})
iotuneArr := make([]interface{}, 0, len(diskList))
hasAny := false
for _, elem := range diskList {
diskVal := elem.(map[string]interface{})
iotune := diskVal["iotune"].([]interface{})
iotuneArr = append(iotuneArr, iotune)
if len(iotune) > 0 {
hasAny = true
}
}
if !hasAny {
return nil
}
computeRec, err := utilityComputeCheckPresence(ctx, d, m)
if err != nil {
return err
}
bootDisk := findBootDisk(computeRec.Disks)
computeDisksIDs := getComputeDiskIDsAPI(computeRec.Disks, diskList, d.Get("extra_disks").(*schema.Set).List(), bootDisk.ID)
for i, diskID := range computeDisksIDs {
if i >= len(iotuneArr) {
continue
}
iotune, ok := iotuneArr[i].([]interface{})
if !ok || len(iotune) == 0 {
continue
}
iotuneMap := iotune[0].(map[string]interface{})
req := disks.LimitIORequest{
DiskID: diskID.(uint64),
ReadBytesSec: uint64(iotuneMap["read_bytes_sec"].(int)),
ReadBytesSecMax: uint64(iotuneMap["read_bytes_sec_max"].(int)),
ReadIOPSSec: uint64(iotuneMap["read_iops_sec"].(int)),
ReadIOPSSecMax: uint64(iotuneMap["read_iops_sec_max"].(int)),
SizeIOPSSec: uint64(iotuneMap["size_iops_sec"].(int)),
TotalBytesSec: uint64(iotuneMap["total_bytes_sec"].(int)),
TotalBytesSecMax: uint64(iotuneMap["total_bytes_sec_max"].(int)),
TotalIOPSSec: uint64(iotuneMap["total_iops_sec"].(int)),
TotalIOPSSecMax: uint64(iotuneMap["total_iops_sec_max"].(int)),
WriteBytesSec: uint64(iotuneMap["write_bytes_sec"].(int)),
WriteBytesSecMax: uint64(iotuneMap["write_bytes_sec_max"].(int)),
WriteIOPSSec: uint64(iotuneMap["write_iops_sec"].(int)),
WriteIOPSSecMax: uint64(iotuneMap["write_iops_sec_max"].(int)),
}
_, err := c.CloudAPI().Disks().LimitIO(ctx, req)
if err != nil {
return err
}
}
return nil
}


@@ -289,6 +289,7 @@ func flattenComputeDisks(disksList compute.ListDisks, disksBlocks, extraDisks []
"update_time": disk.UpdatedTime,
"cache": disk.Cache,
"blk_discard": disk.BLKDiscard,
"iotune": flattenIOTune(disk.IOTune),
}
res = append(res, temp)
indexDataDisks++


@@ -677,6 +677,9 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
if err != nil {
warnings.Add(err)
}
if err := utilityComputeCreateIOTune(ctx, d, m); err != nil {
warnings.Add(err)
}
}
if readOnly, ok := d.GetOk("read_only"); ok {


@@ -3793,6 +3793,81 @@ func resourceComputeSchemaMake() map[string]*schema.Schema {
Optional: true,
Description: "Disk deletion status",
},
"iotune": {
Type: schema.TypeList,
Optional: true,
Computed: true,
MaxItems: 1,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"read_bytes_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"read_bytes_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"read_iops_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"read_iops_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"size_iops_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"total_bytes_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"total_bytes_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"total_iops_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"total_iops_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"write_bytes_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"write_bytes_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"write_iops_sec": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
"write_iops_sec_max": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
},
},
},
},
"disk_id": {
Type: schema.TypeInt,
Computed: true,


@@ -262,6 +262,7 @@ func utilityComputeUpdateDisks(ctx context.Context, d *schema.ResourceData, m in
changeStoragePolicyDisks := make([]interface{}, 0)
cacheUpdatedDisks := make([]interface{}, 0)
blkDiscardUpdatedDisks := make([]interface{}, 0)
iotuneUpdatedDisks := make([]interface{}, 0)
migratedDisks := make([]interface{}, 0)
presentNewDisks := make([]interface{}, 0)
presentOldDisks := make([]interface{}, 0)
@@ -320,6 +321,10 @@ func utilityComputeUpdateDisks(ctx context.Context, d *schema.ResourceData, m in
if isChangeBLKDiscardDisk(oldConv, el) {
blkDiscardUpdatedDisks = append(blkDiscardUpdatedDisks, el)
}
if isChangeIOTuneDisk(oldConv, el) {
iotuneUpdatedDisks = append(iotuneUpdatedDisks, el)
}
}
if len(deletedDisks) > 0 {
@@ -393,10 +398,31 @@ func utilityComputeUpdateDisks(ctx context.Context, d *schema.ResourceData, m in
}
}
}
if iotuneRaw, ok := diskConv["iotune"].([]interface{}); ok && len(iotuneRaw) > 0 {
iotuneMap := iotuneRaw[0].(map[string]interface{})
limitReq := disks.LimitIORequest{
DiskID: diskID,
ReadBytesSec: uint64(iotuneMap["read_bytes_sec"].(int)),
ReadBytesSecMax: uint64(iotuneMap["read_bytes_sec_max"].(int)),
ReadIOPSSec: uint64(iotuneMap["read_iops_sec"].(int)),
ReadIOPSSecMax: uint64(iotuneMap["read_iops_sec_max"].(int)),
SizeIOPSSec: uint64(iotuneMap["size_iops_sec"].(int)),
TotalBytesSec: uint64(iotuneMap["total_bytes_sec"].(int)),
TotalBytesSecMax: uint64(iotuneMap["total_bytes_sec_max"].(int)),
TotalIOPSSec: uint64(iotuneMap["total_iops_sec"].(int)),
TotalIOPSSecMax: uint64(iotuneMap["total_iops_sec_max"].(int)),
WriteBytesSec: uint64(iotuneMap["write_bytes_sec"].(int)),
WriteBytesSecMax: uint64(iotuneMap["write_bytes_sec_max"].(int)),
WriteIOPSSec: uint64(iotuneMap["write_iops_sec"].(int)),
WriteIOPSSecMax: uint64(iotuneMap["write_iops_sec_max"].(int)),
}
_, err := c.CloudBroker().Disks().LimitIO(ctx, limitReq)
if err != nil {
return err
}
}
}
}
if len(resizedDisks) > 0 {
@@ -495,6 +521,44 @@ func utilityComputeUpdateDisks(ctx context.Context, d *schema.ResourceData, m in
}
}
if len(iotuneUpdatedDisks) > 0 {
for _, disk := range iotuneUpdatedDisks {
diskConv := disk.(map[string]interface{})
if diskConv["disk_type"].(string) == "B" {
continue
}
diskID := uint64(diskConv["disk_id"].(int))
if diskID == 0 {
continue
}
iotuneRaw, ok := diskConv["iotune"].([]interface{})
if !ok || len(iotuneRaw) == 0 {
continue
}
iotuneMap := iotuneRaw[0].(map[string]interface{})
req := disks.LimitIORequest{
DiskID: diskID,
ReadBytesSec: uint64(iotuneMap["read_bytes_sec"].(int)),
ReadBytesSecMax: uint64(iotuneMap["read_bytes_sec_max"].(int)),
ReadIOPSSec: uint64(iotuneMap["read_iops_sec"].(int)),
ReadIOPSSecMax: uint64(iotuneMap["read_iops_sec_max"].(int)),
SizeIOPSSec: uint64(iotuneMap["size_iops_sec"].(int)),
TotalBytesSec: uint64(iotuneMap["total_bytes_sec"].(int)),
TotalBytesSecMax: uint64(iotuneMap["total_bytes_sec_max"].(int)),
TotalIOPSSec: uint64(iotuneMap["total_iops_sec"].(int)),
TotalIOPSSecMax: uint64(iotuneMap["total_iops_sec_max"].(int)),
WriteBytesSec: uint64(iotuneMap["write_bytes_sec"].(int)),
WriteBytesSecMax: uint64(iotuneMap["write_bytes_sec_max"].(int)),
WriteIOPSSec: uint64(iotuneMap["write_iops_sec"].(int)),
WriteIOPSSecMax: uint64(iotuneMap["write_iops_sec_max"].(int)),
}
_, err := c.CloudBroker().Disks().LimitIO(ctx, req)
if err != nil {
return err
}
}
}
if len(migratedDisks) > 0 {
if err := utilityComputeMigrateDisks(ctx, d, m, migratedDisks, oldConv); err != nil {
return err
@@ -2077,6 +2141,100 @@ func isChangeBLKDiscardDisk(els []interface{}, el interface{}) bool {
return false
}
func isChangeIOTuneDisk(els []interface{}, el interface{}) bool {
for _, elOld := range els {
elOldConv := elOld.(map[string]interface{})
elConv := el.(map[string]interface{})
if elOldConv["disk_id"].(int) != elConv["disk_id"].(int) {
continue
}
oldIOTune := elOldConv["iotune"].([]interface{})
newIOTune := elConv["iotune"].([]interface{})
if len(oldIOTune) == 0 && len(newIOTune) == 0 {
return false
}
if len(oldIOTune) == 0 || len(newIOTune) == 0 {
return true
}
oldMap := oldIOTune[0].(map[string]interface{})
newMap := newIOTune[0].(map[string]interface{})
return oldMap["read_bytes_sec"].(int) != newMap["read_bytes_sec"].(int) ||
oldMap["read_bytes_sec_max"].(int) != newMap["read_bytes_sec_max"].(int) ||
oldMap["read_iops_sec"].(int) != newMap["read_iops_sec"].(int) ||
oldMap["read_iops_sec_max"].(int) != newMap["read_iops_sec_max"].(int) ||
oldMap["size_iops_sec"].(int) != newMap["size_iops_sec"].(int) ||
oldMap["total_bytes_sec"].(int) != newMap["total_bytes_sec"].(int) ||
oldMap["total_bytes_sec_max"].(int) != newMap["total_bytes_sec_max"].(int) ||
oldMap["total_iops_sec"].(int) != newMap["total_iops_sec"].(int) ||
oldMap["total_iops_sec_max"].(int) != newMap["total_iops_sec_max"].(int) ||
oldMap["write_bytes_sec"].(int) != newMap["write_bytes_sec"].(int) ||
oldMap["write_bytes_sec_max"].(int) != newMap["write_bytes_sec_max"].(int) ||
oldMap["write_iops_sec"].(int) != newMap["write_iops_sec"].(int) ||
oldMap["write_iops_sec_max"].(int) != newMap["write_iops_sec_max"].(int)
}
return false
}
func utilityComputeCreateIOTune(ctx context.Context, d *schema.ResourceData, m interface{}) error {
c := m.(*controller.ControllerCfg)
diskList := d.Get("disks").([]interface{})
iotuneArr := make([]interface{}, 0, len(diskList))
hasAny := false
for _, elem := range diskList {
diskVal := elem.(map[string]interface{})
iotune := diskVal["iotune"].([]interface{})
iotuneArr = append(iotuneArr, iotune)
if len(iotune) > 0 {
hasAny = true
}
}
if !hasAny {
return nil
}
computeRec, err := utilityComputeCheckPresence(ctx, d, m)
if err != nil {
return err
}
bootDisk := findBootDisk(computeRec.Disks)
computeDisksIDs := getComputeDiskIDs(computeRec.Disks, diskList, d.Get("extra_disks").(*schema.Set).List(), bootDisk.ID)
for i, diskID := range computeDisksIDs {
if i >= len(iotuneArr) {
continue
}
iotune, ok := iotuneArr[i].([]interface{})
if !ok || len(iotune) == 0 {
continue
}
iotuneMap := iotune[0].(map[string]interface{})
req := disks.LimitIORequest{
DiskID: diskID.(uint64),
ReadBytesSec: uint64(iotuneMap["read_bytes_sec"].(int)),
ReadBytesSecMax: uint64(iotuneMap["read_bytes_sec_max"].(int)),
ReadIOPSSec: uint64(iotuneMap["read_iops_sec"].(int)),
ReadIOPSSecMax: uint64(iotuneMap["read_iops_sec_max"].(int)),
SizeIOPSSec: uint64(iotuneMap["size_iops_sec"].(int)),
TotalBytesSec: uint64(iotuneMap["total_bytes_sec"].(int)),
TotalBytesSecMax: uint64(iotuneMap["total_bytes_sec_max"].(int)),
TotalIOPSSec: uint64(iotuneMap["total_iops_sec"].(int)),
TotalIOPSSecMax: uint64(iotuneMap["total_iops_sec_max"].(int)),
WriteBytesSec: uint64(iotuneMap["write_bytes_sec"].(int)),
WriteBytesSecMax: uint64(iotuneMap["write_bytes_sec_max"].(int)),
WriteIOPSSec: uint64(iotuneMap["write_iops_sec"].(int)),
WriteIOPSSecMax: uint64(iotuneMap["write_iops_sec_max"].(int)),
}
_, err := c.CloudBroker().Disks().LimitIO(ctx, req)
if err != nil {
return err
}
}
return nil
}
func isChangeStoragePolicy(els []interface{}, el interface{}) bool {
for _, elOld := range els {
elOldConv := elOld.(map[string]interface{})


@@ -189,6 +189,25 @@ resource "decort_kvmvm" "comp" {
#optional parameter
#type - bool
#permanently = false
#block for managing disk IO limits
#optional parameter
#type - block
#iotune {
#read_bytes_sec = 0
#read_bytes_sec_max = 0
#read_iops_sec = 0
#read_iops_sec_max = 0
#size_iops_sec = 0
#total_bytes_sec = 0
#total_bytes_sec_max = 0
#total_iops_sec = 3000
#total_iops_sec_max = 0
#write_bytes_sec = 0
#write_bytes_sec_max = 0
#write_iops_sec = 0
#write_iops_sec_max = 0
#}
#}
#affinity rules


@@ -229,6 +229,25 @@ resource "decort_cb_kvmvm" "comp" {
#type - bool
#default - false
#blk_discard = false
#block for managing disk IO limits
#optional parameter
#type - block
#iotune {
#read_bytes_sec = 0
#read_bytes_sec_max = 0
#read_iops_sec = 0
#read_iops_sec_max = 0
#size_iops_sec = 0
#total_bytes_sec = 0
#total_bytes_sec_max = 0
#total_iops_sec = 3000
#total_iops_sec_max = 0
#write_bytes_sec = 0
#write_bytes_sec_max = 0
#write_iops_sec = 0
#write_iops_sec_max = 0
#}
#}
#affinity rules