Compare commits

...

8 Commits
4.7.1 ... 4.8.x

Author SHA1 Message Date
f33b8e3e7d 4.8.5 2025-08-12 12:05:17 +03:00
asteam
5d15e83d56 4.8.4 2025-07-31 15:51:28 +03:00
2c70109d2d 4.8.3 2025-04-16 11:39:57 +03:00
asteam
efe0c88556 4.8.2 2025-03-28 12:36:42 +03:00
5496073a0c 4.8.1 2025-02-07 11:30:15 +03:00
dc39a6412e 4.8.0 2024-12-27 12:00:59 +03:00
asteam
de8857b1d5 4.7.3 2024-12-04 12:13:55 +03:00
782afe70da 4.7.2 2024-11-22 12:31:21 +03:00
2608 changed files with 2659 additions and 273372 deletions

3
.gitignore vendored
View File

@@ -4,4 +4,5 @@ url_scrapping/
terraform-provider-decort*
.vscode/
.DS_Store
vendor/
.idea/

View File

@@ -1,4 +1,8 @@
## Version 4.7.1
## Version 4.8.5
### Bugfix
- Fix bug with create resource in decort_cb_pcidevice in cloudbroker/pcidevice
### Fixed
#### kvmvm
| Task<br>ID | Description |
| --- | --- |
| BATF-1053 | Virtual machine reboot when the `ram` and `cpu` fields are changed in the `decort_kvmvm` and `decort_cb_kvmvm` resources in cloudapi/kvmvm and cloudbroker/kvmvm |

View File

@@ -7,7 +7,7 @@ ZIPDIR = ./zip
BINARY=${NAME}
WORKPATH= ./examples/terraform.d/plugins/${HOSTNAME}/${NAMESPACE}/${NAMESPACE}/${VERSION}/${OS_ARCH}
MAINPATH = ./cmd/decort/
VERSION=4.7.1
VERSION=4.8.5
OS_ARCH=$(shell go env GOHOSTOS)_$(shell go env GOHOSTARCH)
FILES = ${BINARY}_${VERSION}_darwin_amd64\

View File

@@ -6,6 +6,7 @@ Terraform provider for the Digital Energy Cloud Orchestration platform
| DECORT API version | Terraform provider version |
| ------ | ------ |
| 4.2.0 | 4.8.x |
| 4.1.0 | 4.7.x |
| 4.0.0 | 4.6.x |
| 3.8.9 | 4.5.x |

View File

@@ -1,168 +0,0 @@
# terraform-provider-decort
Terraform provider for Digital Energy Cloud Orchestration Technology (DECORT) platform
## Mapping of platform versions with provider versions
| DECORT API version | Terraform provider version |
| ------ | ------ |
| 3.8.5 | 3.4.x |
| 3.8.0 - 3.8.4 | 3.3.1 |
| 3.7.x | rc-1.25 |
| 3.6.x | rc-1.10 |
| before 3.6.0 | [terraform-provider-decs](https://github.com/rudecs/terraform-provider-decs) |
## Working modes
The provider supports two working modes:
- User mode,
- Administrator mode.
Use the DECORT_ADMIN_MODE flag to switch between modes.
See the user guide at https://repository.basistech.ru/BASIS/terraform-provider-decort/wiki
## Features
- Work with Compute instances,
- Work with disks,
- Work with k8s,
- Work with image,
- Work with resource groups,
- Work with VINS,
- Work with pfw,
- Work with accounts,
- Work with snapshots,
- Work with pcidevice,
- Work with sep,
- Work with vgpu,
- Work with bservice,
- Work with extnets,
- Work with locations,
- Work with load balancers.
This provider supports Import operations on pre-existing resources.
See user guide at https://repository.basistech.ru/BASIS/terraform-provider-decort/wiki
## Get Started
There are two ways to get started:
1. Installing via binary packages
2. Manual installation
### Installing via binary packages
1. Download and install Terraform: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
2. Create a file `main.tf` and add the following section to it.
```terraform
provider "decort" {
authenticator = "decs3o"
#controller_url = <DECORT_CONTROLLER_URL>
controller_url = "https://ds1.digitalenergy.online"
#oauth2_url = <DECORT_SSO_URL>
oauth2_url = "https://sso.digitalenergy.online"
allow_unverified_ssl = true
}
```
3. Execute the following command
```
terraform init
```
The provider will be installed on your computer automatically from the Terraform registry.
### Manual installation
1. Download and install the Go programming language: https://go.dev/dl/
2. Download and install Terraform: https://learn.hashicorp.com/tutorials/terraform/install-cli?in=terraform/aws-get-started
3. Clone the provider's repository:
```bash
git clone https://github.com/rudecs/terraform-provider-decort.git
```
4. Change into the cloned provider directory and execute the following command
```bash
go build -o terraform-provider-decort
```
If you have experience with _makefile_, you can adjust the `Makefile` parameters and execute the following command
```bash
make build
```
5. Now move the compiled file to:
Linux:
```bash
~/.terraform.d/plugins/${host_name}/${namespace}/${type}/${version}/${target}
```
Windows:
```powershell
%APPDATA%\terraform.d\plugins\${host_name}\${namespace}\${type}\${version}\${target}
```
NOTE: on Windows, `%APPDATA%` is the directory where the Terraform files will be placed.
Example:
- host_name - digitalenergy.online
- namespace - decort
- type - decort
- version - 1.2
- target - windows_amd64
6. Finally, create a file `main.tf`.
7. Add the following code section to the file
```terraform
terraform {
required_providers {
decort = {
version = "1.2"
source = "digitalenergy.online/decort/decort"
}
}
}
```
`version` - the provider version field (Required, String)
Note: the version in the code section and the version in the repository must match!
`source` - the path to the repository with the provider:
```bash
${host_name}/${namespace}/${type}
```
NOTE: all parameters must match the repository path!
8. Execute the following command in your terminal
```bash
terraform init
```
9. If everything is all right, you will see a green success message in your terminal!
More details about the provider's building process: https://learn.hashicorp.com/tutorials/terraform/provider-use?in=terraform/providers
## Examples and Samples
- Examples: https://repository.basistech.ru/BASIS/terraform-provider-decort/wiki
- Samples: see the `samples` directory in the repository
Terraform schemas:
- See the `docs` directory in the repository
Good work!

View File

@@ -93,6 +93,7 @@ Read-Only:
Read-Only:
- `account_id` (Number)
- `client_type` (String)
- `desc` (String)
- `domain_name` (String)

View File

@@ -0,0 +1,61 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_cb_extnet_reserved_ip_list Data Source - terraform-provider-decort"
subcategory: ""
description: |-
---
# decort_cb_extnet_reserved_ip_list (Data Source)
<!-- schema generated by tfplugindocs -->
## Schema
### Required
- `account_id` (Number)
### Optional
- `extnet_id` (Number)
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
### Read-Only
- `id` (String) The ID of this resource.
- `items` (List of Object) (see [below for nested schema](#nestedatt--items))
<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`
Optional:
- `default` (String)
- `read` (String)
<a id="nestedatt--items"></a>
### Nested Schema for `items`
Read-Only:
- `extnet_id` (Number)
- `reservations` (List of Object) (see [below for nested schema](#nestedobjatt--items--reservations))
<a id="nestedobjatt--items--reservations"></a>
### Nested Schema for `items.reservations`
Read-Only:
- `account_id` (Number)
- `client_type` (String)
- `domain_name` (String)
- `hostname` (String)
- `ip` (String)
- `mac` (String)
- `type` (String)
- `vm_id` (Number)
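
Below is a minimal usage sketch for this new data source; the account and external network IDs are placeholders and must be replaced with real values from your platform.

```terraform
data "decort_cb_extnet_reserved_ip_list" "reserved" {
  # required: the account whose reservations are listed (placeholder ID)
  account_id = 123

  # optional: limit the result to a single external network (placeholder ID)
  extnet_id = 10
}

output "cb_reserved_ips" {
  # each item holds an extnet_id and its list of reservations
  value = data.decort_cb_extnet_reserved_ip_list.reserved.items
}
```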

View File

@@ -17,6 +17,7 @@ description: |-
### Required
- `file_path` (String)
- `gid` (Number)
### Optional
@@ -25,7 +26,6 @@ description: |-
### Read-Only
- `diagnosis` (String)
- `id` (String) The ID of this resource.
<a id="nestedblock--timeouts"></a>

View File

@@ -1,37 +0,0 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_cb_grid_post_diagnosis Data Source - terraform-provider-decort"
subcategory: ""
description: |-
---
# decort_cb_grid_post_diagnosis (Data Source)
<!-- schema generated by tfplugindocs -->
## Schema
### Required
- `gid` (Number)
### Optional
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
### Read-Only
- `diagnosis` (String)
- `id` (String) The ID of this resource.
<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`
Optional:
- `default` (String)
- `read` (String)

View File

@@ -59,6 +59,7 @@ description: |-
- `rescuecd` (Boolean)
- `sep_id` (Number) storage endpoint provider ID
- `size` (Number) image size
- `snapshot_id` (String) snapshot id
- `status` (String) status
- `tech_status` (String) tech status
- `unc_path` (String) unc path

View File

@@ -89,6 +89,7 @@ Read-Only:
- `sep_id` (Number)
- `shared_with` (List of Number)
- `size` (Number)
- `snapshot_id` (String)
- `status` (String)
- `tech_status` (String)
- `unc_path` (String)

View File

@@ -33,6 +33,7 @@ description: |-
- `affinity_weight` (Number)
- `anti_affinity_rules` (List of Object) (see [below for nested schema](#nestedatt--anti_affinity_rules))
- `arch` (String)
- `auto_start_w_node` (Boolean)
- `boot_disk_id` (Number)
- `boot_disk_size` (Number)
- `boot_order` (List of String)
@@ -77,6 +78,7 @@ description: |-
- `pci_devices` (List of Number)
- `pinned` (Boolean)
- `pool` (String)
- `preferred_cpu` (List of Number)
- `ram` (Number)
- `reference_id` (String)
- `registered` (Boolean)
@@ -100,6 +102,7 @@ description: |-
- `vgpus` (List of Number)
- `virtual_image_id` (Number)
- `virtual_image_name` (String)
- `vnc_password` (String)
<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

View File

@@ -19,6 +19,7 @@ description: |-
- `account_id` (Number) Find by AccountID
- `by_id` (Number) Find by ID
- `cd_image_id` (Number) Find by CD image ID
- `extnet_id` (Number) Find by Extnet ID
- `extnet_name` (String) Find by Extnet name
- `ignore_k8s` (Boolean) If set to true, ignores any VMs associated with any k8s cluster
@@ -64,6 +65,7 @@ Read-Only:
- `affinity_weight` (Number)
- `anti_affinity_rules` (List of Object) (see [below for nested schema](#nestedobjatt--items--anti_affinity_rules))
- `arch` (String)
- `auto_start_w_node` (Boolean)
- `boot_order` (List of String)
- `bootdisk_size` (Number)
- `cd_image_id` (Number)
@@ -99,6 +101,7 @@ Read-Only:
- `numa_node_id` (Number)
- `os_users` (List of Object) (see [below for nested schema](#nestedobjatt--items--os_users))
- `pinned` (Boolean)
- `preferred_cpu` (List of Number)
- `ram` (Number)
- `reference_id` (String)
- `registered` (Boolean)

View File

@@ -60,6 +60,7 @@ Read-Only:
- `affinity_weight` (Number)
- `anti_affinity_rules` (List of Object) (see [below for nested schema](#nestedobjatt--items--anti_affinity_rules))
- `arch` (String)
- `auto_start_w_node` (Boolean)
- `boot_order` (List of String)
- `bootdisk_size` (Number)
- `cd_image_id` (Number)
@@ -94,6 +95,7 @@ Read-Only:
- `numa_node_id` (Number)
- `os_users` (List of Object) (see [below for nested schema](#nestedobjatt--items--os_users))
- `pinned` (Boolean)
- `preferred_cpu` (List of Number)
- `ram` (Number)
- `reference_id` (String)
- `registered` (Boolean)

View File

@@ -28,12 +28,15 @@ description: |-
- `consumption` (List of Object) (see [below for nested schema](#nestedatt--consumption))
- `cpu_allocation_ratio` (Number)
- `cpu_info` (List of Object) (see [below for nested schema](#nestedatt--cpu_info))
- `dpdk` (List of Object) (see [below for nested schema](#nestedatt--dpdk))
- `gid` (Number)
- `id` (String) The ID of this resource.
- `ipaddr` (List of String)
- `isolated_cpus` (List of String)
- `name` (String)
- `need_reboot` (Boolean)
- `net_addr` (List of Object) (see [below for nested schema](#nestedatt--net_addr))
- `network_mode` (String)
- `nic_info` (List of Object) (see [below for nested schema](#nestedatt--nic_info))
- `numa_topology` (List of Object) (see [below for nested schema](#nestedatt--numa_topology))
- `reserved_cpus` (List of String)
@@ -41,6 +44,10 @@ description: |-
- `sriov_enabled` (Boolean)
- `stack_id` (Number)
- `status` (String)
- `to_active` (List of Object) (see [below for nested schema](#nestedatt--to_active))
- `to_installing` (List of Object) (see [below for nested schema](#nestedatt--to_installing))
- `to_maintenance` (List of Object) (see [below for nested schema](#nestedatt--to_maintenance))
- `to_restricted` (List of Object) (see [below for nested schema](#nestedatt--to_restricted))
- `version` (String)
<a id="nestedblock--timeouts"></a>
@@ -109,6 +116,42 @@ Read-Only:
- `phys_count` (Number)
<a id="nestedatt--dpdk"></a>
### Nested Schema for `dpdk`
Read-Only:
- `bridges` (List of Object) (see [below for nested schema](#nestedobjatt--dpdk--bridges))
- `hp_memory` (Map of Number)
- `pmd_cpu` (List of Number)
<a id="nestedobjatt--dpdk--bridges"></a>
### Nested Schema for `dpdk.bridges`
Read-Only:
- `backplane1` (List of Object) (see [below for nested schema](#nestedobjatt--dpdk--bridges--backplane1))
<a id="nestedobjatt--dpdk--bridges--backplane1"></a>
### Nested Schema for `dpdk.bridges.backplane1`
Read-Only:
- `interfaces` (List of String)
- `numa_node` (Number)
<a id="nestedatt--net_addr"></a>
### Nested Schema for `net_addr`
Read-Only:
- `ip` (List of String)
- `name` (String)
<a id="nestedatt--nic_info"></a>
### Nested Schema for `nic_info`
@@ -156,3 +199,45 @@ Read-Only:
- `one_g` (Number)
- `total` (Number)
- `two_m` (Number)
<a id="nestedatt--to_active"></a>
### Nested Schema for `to_active`
Read-Only:
- `actor` (String)
- `reason` (String)
- `time` (Number)
<a id="nestedatt--to_installing"></a>
### Nested Schema for `to_installing`
Read-Only:
- `actor` (String)
- `reason` (String)
- `time` (Number)
<a id="nestedatt--to_maintenance"></a>
### Nested Schema for `to_maintenance`
Read-Only:
- `actor` (String)
- `reason` (String)
- `time` (Number)
<a id="nestedatt--to_restricted"></a>
### Nested Schema for `to_restricted`
Read-Only:
- `actor` (String)
- `reason` (String)
- `time` (Number)

View File

@@ -52,6 +52,7 @@ Read-Only:
- `additional_pkgs` (List of String)
- `cpu_info` (List of Object) (see [below for nested schema](#nestedobjatt--items--cpu_info))
- `description` (String)
- `dpdk` (List of Object) (see [below for nested schema](#nestedobjatt--items--dpdk))
- `gid` (Number)
- `guid` (String)
- `hostkey` (String)
@@ -86,6 +87,7 @@ Read-Only:
- `status` (String)
- `tags` (List of String)
- `type` (String)
- `uefi_firmware_file` (String)
- `version` (String)
<a id="nestedobjatt--items--cpu_info"></a>
@@ -98,6 +100,33 @@ Read-Only:
- `phys_count` (Number)
<a id="nestedobjatt--items--dpdk"></a>
### Nested Schema for `items.dpdk`
Read-Only:
- `bridges` (List of Object) (see [below for nested schema](#nestedobjatt--items--dpdk--bridges))
- `hp_memory` (Map of Number)
- `pmd_cpu` (List of Number)
<a id="nestedobjatt--items--dpdk--bridges"></a>
### Nested Schema for `items.dpdk.bridges`
Read-Only:
- `backplane1` (List of Object) (see [below for nested schema](#nestedobjatt--items--dpdk--bridges--backplane1))
<a id="nestedobjatt--items--dpdk--bridges--backplane1"></a>
### Nested Schema for `items.dpdk.bridges.backplane1`
Read-Only:
- `interfaces` (List of String)
- `numa_node` (Number)
<a id="nestedobjatt--items--net_addr"></a>
### Nested Schema for `items.net_addr`

View File

@@ -25,15 +25,14 @@ description: |-
### Read-Only
- `ckey` (String) ckey
- `config` (String) config
- `consumed_by` (Set of Number) consumed by
- `desc` (String) description
- `gid` (Number) gid
- `guid` (Number) guid
- `id` (String) The ID of this resource.
- `meta` (List of String) meta
- `milestones` (Number) milestones
- `multipath_num` (Number) multipath_num
- `name` (String) name
- `obj_status` (String) object status
- `provided_by` (List of Number) provided by

View File

@@ -49,14 +49,13 @@ Optional:
Read-Only:
- `ckey` (String)
- `config` (String)
- `consumed_by` (Set of Number)
- `desc` (String)
- `gid` (Number)
- `guid` (Number)
- `meta` (List of String)
- `milestones` (Number)
- `multipath_num` (Number)
- `name` (String)
- `obj_status` (String)
- `provided_by` (List of Number)

View File

@@ -101,6 +101,7 @@ Read-Only:
- `tech_status` (String)
- `type` (String)
- `vins` (List of Number)
- `vnc_password` (String)
<a id="nestedobjatt--vnf_dev--config"></a>
### Nested Schema for `vnf_dev.config`
@@ -240,10 +241,7 @@ Read-Only:
Read-Only:
- `client_type` (String)
- `description` (String)
- `domain_name` (String)
- `host_name` (String)
- `account_id` (Number)
- `ip` (String)
- `mac` (String)
- `type` (String)

View File

@@ -90,6 +90,7 @@ Read-Only:
Read-Only:
- `account_id` (Number)
- `client_type` (String)
- `desc` (String)
- `domainname` (String)

View File

@@ -0,0 +1,61 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "decort_extnet_reserved_ip_list Data Source - terraform-provider-decort"
subcategory: ""
description: |-
---
# decort_extnet_reserved_ip_list (Data Source)
<!-- schema generated by tfplugindocs -->
## Schema
### Required
- `account_id` (Number)
### Optional
- `extnet_id` (Number)
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
### Read-Only
- `id` (String) The ID of this resource.
- `items` (List of Object) (see [below for nested schema](#nestedatt--items))
<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`
Optional:
- `default` (String)
- `read` (String)
<a id="nestedatt--items"></a>
### Nested Schema for `items`
Read-Only:
- `extnet_id` (Number)
- `reservations` (List of Object) (see [below for nested schema](#nestedobjatt--items--reservations))
<a id="nestedobjatt--items--reservations"></a>
### Nested Schema for `items.reservations`
Read-Only:
- `account_id` (Number)
- `client_type` (String)
- `domain_name` (String)
- `hostname` (String)
- `ip` (String)
- `mac` (String)
- `type` (String)
- `vm_id` (Number)
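
A minimal usage sketch for the cloudapi variant of this data source; as above, the IDs are placeholders.

```terraform
data "decort_extnet_reserved_ip_list" "reserved" {
  # required: the account whose reserved IPs are listed (placeholder ID)
  account_id = 123

  # optional: restrict the listing to one external network (placeholder ID)
  extnet_id = 10
}

output "reserved_ips" {
  value = data.decort_extnet_reserved_ip_list.reserved.items
}
```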

View File

@@ -20,7 +20,7 @@ description: |-
- `account_id` (Number) Account id
- `by_id` (Number) Filter by ID
- `by_ip` (String) Filter by IP-address
- `client_ids` (List of String) client_ids
- `client_ids` (List of Number) client_ids
- `conn_id` (Number) Conn id
- `extnet_id` (Number) Filter by ExtNetID
- `name` (String) Filter by Name
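
The `client_ids` filter now takes numbers instead of strings. A hedged sketch of the updated usage, assuming this hunk belongs to the `decort_flipgroup_list` data source (the file name is not shown in this excerpt) and using placeholder IDs:

```terraform
data "decort_flipgroup_list" "by_clients" {
  # client_ids is now a list of numbers rather than strings
  client_ids = [101, 102]
}
```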

View File

@@ -33,6 +33,7 @@ description: |-
- `affinity_weight` (Number)
- `anti_affinity_rules` (List of Object) (see [below for nested schema](#nestedatt--anti_affinity_rules))
- `arch` (String)
- `auto_start_w_node` (Boolean)
- `boot_order` (List of String)
- `bootdisk_size` (Number)
- `cd_image_id` (Number)
@@ -74,6 +75,7 @@ description: |-
- `os_users` (List of Object) (see [below for nested schema](#nestedatt--os_users))
- `pci_devices` (List of Number)
- `pinned` (Boolean)
- `preferred_cpu` (List of Number)
- `ram` (Number)
- `reference_id` (String)
- `registered` (Boolean)
@@ -94,6 +96,7 @@ description: |-
- `vgpus` (List of Number)
- `virtual_image_id` (Number)
- `virtual_image_name` (String)
- `vnc_password` (String)
<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`

View File

@@ -62,6 +62,7 @@ Read-Only:
- `affinity_weight` (Number)
- `anti_affinity_rules` (List of Object) (see [below for nested schema](#nestedobjatt--items--anti_affinity_rules))
- `arch` (String)
- `auto_start_w_node` (Boolean)
- `boot_order` (List of String)
- `bootdisk_size` (Number)
- `cd_image_id` (Number)
@@ -96,6 +97,7 @@ Read-Only:
- `numa_affinity` (String)
- `numa_node_id` (Number)
- `pinned` (Boolean)
- `preferred_cpu` (List of Number)
- `ram` (Number)
- `reference_id` (String)
- `registered` (Boolean)

View File

@@ -60,6 +60,7 @@ Read-Only:
- `affinity_weight` (Number)
- `anti_affinity_rules` (List of Object) (see [below for nested schema](#nestedobjatt--items--anti_affinity_rules))
- `arch` (String)
- `auto_start_w_node` (Boolean)
- `boot_order` (List of String)
- `bootdisk_size` (Number)
- `cd_image_id` (Number)
@@ -94,6 +95,7 @@ Read-Only:
- `numa_affinity` (String)
- `numa_node_id` (Number)
- `pinned` (Boolean)
- `preferred_cpu` (List of Number)
- `ram` (Number)
- `reference_id` (String)
- `registered` (Boolean)

View File

@@ -108,6 +108,7 @@ Read-Only:
- `tech_status` (String)
- `type` (String)
- `vins` (List of Number)
- `vnc_password` (String)
- `vnf_id` (Number)
- `vnf_name` (String)
@@ -248,10 +249,7 @@ Read-Only:
Read-Only:
- `client_type` (String)
- `desc` (String)
- `domainname` (String)
- `hostname` (String)
- `account_id` (Number)
- `ip` (String)
- `mac` (String)
- `type` (String)

View File

@@ -62,6 +62,7 @@ description: |-
- `res_name` (String)
- `rescuecd` (Boolean)
- `size` (Number) image size
- `snapshot_id` (String) snapshot id
- `status` (String) status
- `tech_status` (String) tech status
- `unc_path` (String) unc path

View File

@@ -37,6 +37,7 @@ description: |-
- `ntp` (List of String) List of NTP addresses
- `ovs_bridge` (String) Open vSwitch bridge name for ExtNet connection
- `pre_reservations_num` (Number) Number of pre-created reservations
- `reserved_ip` (Block Set) (see [below for nested schema](#nestedblock--reserved_ip))
- `restart` (Boolean) restart extnet vnf device
- `set_default` (Boolean) Set current extnet as default (can not be undone)
- `shared_with` (Set of Number)
@@ -88,6 +89,19 @@ Required:
- `ip_start` (String)
<a id="nestedblock--reserved_ip"></a>
### Nested Schema for `reserved_ip`
Required:
- `account_id` (Number)
Optional:
- `ip_count` (Number)
- `ips` (Set of String)
<a id="nestedblock--timeouts"></a>
### Nested Schema for `timeouts`
@@ -119,6 +133,7 @@ Read-Only:
Read-Only:
- `account_id` (Number)
- `client_type` (String)
- `desc` (String)
- `domain_name` (String)
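
A hedged sketch of the new `reserved_ip` block, assuming this hunk belongs to the `decort_cb_extnet` resource documentation (the file name is not shown in this excerpt); the IDs and counts are placeholders, and the other arguments of the external network are omitted:

```terraform
resource "decort_cb_extnet" "example" {
  # ...other required arguments of the external network omitted in this sketch...

  # reserved_ip is a new block: account_id is required,
  # ip_count or an explicit ips set is optional
  reserved_ip {
    account_id = 123 # placeholder account ID
    ip_count   = 2   # reserve two addresses chosen by the platform
    # ips      = ["10.0.0.10"]  # or reserve specific addresses explicitly
  }
}
```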

View File

@@ -67,6 +67,7 @@ description: |-
- `res_name` (String)
- `rescuecd` (Boolean)
- `size` (Number) image size
- `snapshot_id` (String) snapshot id
- `status` (String) status
- `tech_status` (String) tech status
- `unc_path` (String) unc path

View File

@@ -64,6 +64,7 @@ description: |-
- `res_name` (String)
- `rescuecd` (Boolean)
- `size` (Number)
- `snapshot_id` (String) snapshot id
- `status` (String)
- `tech_status` (String)
- `unc_path` (String)

View File

@@ -64,6 +64,7 @@ description: |-
- `res_name` (String)
- `rescuecd` (Boolean)
- `size` (Number)
- `snapshot_id` (String) snapshot id
- `status` (String)
- `tech_status` (String)
- `unc_path` (String)

View File

@@ -29,14 +29,13 @@ description: |-
- `affinity_rules` (Block List) (see [below for nested schema](#nestedblock--affinity_rules))
- `alt_boot_id` (Number) ID of CD-ROM live image to boot
- `anti_affinity_rules` (Block List) (see [below for nested schema](#nestedblock--anti_affinity_rules))
- `auto_start` (Boolean) Flag for redeploy compute
- `auto_start_w_node` (Boolean)
- `boot_disk_size` (Number) This compute instance boot disk size in GB. Make sure it is large enough to accommodate the selected OS image.
- `cd` (Block Set, Max: 1) (see [below for nested schema](#nestedblock--cd))
- `chipset` (String) Type of the emulated system.
- `cloud_init` (String) Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases.
- `cpu_pin` (Boolean) Run VM on dedicated CPUs. To use this feature, the system must be pre-configured by allocating CPUs on the physical node.
- `custom_fields` (String)
- `data_disks` (String) Flag for redeploy compute
- `depresent` (Boolean) whether to depresent compute disks from node or not
- `description` (String) Optional text description of this compute instance.
- `detach_disks` (Boolean)
@@ -59,6 +58,7 @@ description: |-
- `pin_to_stack` (Boolean)
- `pool` (String) Pool to use if sepId is set, can be also empty if needed to be chosen by system.
- `port_forwarding` (Block Set) (see [below for nested schema](#nestedblock--port_forwarding))
- `preferred_cpu` (List of Number) Recommended isolated CPUs. Field is ignored if compute.cpupin=False or compute.pinned=False
- `reset` (Boolean)
- `restore` (Boolean)
- `rollback` (Block Set, Max: 1) (see [below for nested schema](#nestedblock--rollback))
@@ -67,7 +67,6 @@ description: |-
- `stack_id` (Number) ID of stack to start compute
- `started` (Boolean) Is compute started.
- `tags` (Block Set) (see [below for nested schema](#nestedblock--tags))
- `target_stack_id` (Number)
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
- `user_access` (Block Set) (see [below for nested schema](#nestedblock--user_access))
- `without_boot_disk` (Boolean) If True, the imageId, bootDisk, sepId, pool parameters are ignored and the compute is created without a boot disk in the stopped state.
@@ -128,6 +127,7 @@ description: |-
- `vgpus` (List of Number)
- `virtual_image_id` (Number)
- `virtual_image_name` (String)
- `vnc_password` (String)
<a id="nestedblock--affinity_rules"></a>
### Nested Schema for `affinity_rules`
@@ -198,7 +198,8 @@ Read-Only:
Required:
- `mac` (String)
- `net_id` (Number) ID of the network
- `net_type` (String) Type of the network
Optional:
@@ -221,6 +222,7 @@ Required:
Optional:
- `ip_address` (String) Optional IP address to assign to this connection. This IP should belong to the selected network and free for use.
- `mtu` (Number) Maximum transmission unit, used only for DPDK type, must be 1-9216
- `weight` (Number) Network weight used to sort the network list; the smallest weight is attached first, zero or null weight is attached last
Read-Only:

View File

@@ -37,11 +37,10 @@ description: |-
### Read-Only
- `ckey` (String) ckey
- `guid` (Number) guid
- `id` (String) The ID of this resource.
- `meta` (List of String) meta
- `milestones` (Number) milestones
- `multipath_num` (Number) multipath_num
- `obj_status` (String) object status
- `tech_status` (String) tech status

View File

@@ -167,6 +167,7 @@ Read-Only:
- `tech_status` (String)
- `type` (String)
- `vins` (List of Number)
- `vnc_password` (String)
<a id="nestedobjatt--vnf_dev--config"></a>
### Nested Schema for `vnf_dev.config`
@@ -306,10 +307,7 @@ Read-Only:
Read-Only:
- `client_type` (String)
- `description` (String)
- `domain_name` (String)
- `host_name` (String)
- `account_id` (Number)
- `ip` (String)
- `mac` (String)
- `type` (String)

View File

@@ -24,11 +24,11 @@ description: |-
### Optional
- `compute_ids` (List of Number)
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
### Read-Only
- `compute_ids` (List of Number)
- `guid` (String)
- `id` (String) The ID of this resource.
- `route_id` (Number) Unique ID of the static route

View File

@@ -61,6 +61,7 @@ description: |-
- `rescuecd` (Boolean)
- `sep_id` (Number) storage endpoint provider ID
- `size` (Number) image size
- `snapshot_id` (String) snapshot id
- `status` (String) status
- `tech_status` (String) tech status
- `unc_path` (String) unc path

View File

@@ -28,14 +28,13 @@ description: |-
- `affinity_label` (String) Set affinity label for compute
- `affinity_rules` (Block List) (see [below for nested schema](#nestedblock--affinity_rules))
- `anti_affinity_rules` (Block List) (see [below for nested schema](#nestedblock--anti_affinity_rules))
- `auto_start` (Boolean) Flag for redeploy compute
- `auto_start_w_node` (Boolean) Flag for start compute after node exits from MAINTENANCE state
- `boot_disk_size` (Number) This compute instance boot disk size in GB. Make sure it is large enough to accommodate the selected OS image.
- `cd` (Block Set, Max: 1) (see [below for nested schema](#nestedblock--cd))
- `chipset` (String) Type of the emulated system.
- `cloud_init` (String) Optional cloud_init parameters. Applied when creating new compute instance only, ignored in all other cases.
- `cpu_pin` (Boolean) Run VM on dedicated CPUs. To use this feature, the system must be pre-configured by allocating CPUs on the physical node.
- `custom_fields` (String)
- `data_disks` (String) Flag for redeploy compute
- `description` (String) Optional text description of this compute instance.
- `detach_disks` (Boolean)
- `disks` (Block List) (see [below for nested schema](#nestedblock--disks))
@@ -55,6 +54,7 @@ description: |-
- `pin_to_stack` (Boolean)
- `pool` (String) Pool to use if sepId is set, can be also empty if needed to be chosen by system.
- `port_forwarding` (Block Set) (see [below for nested schema](#nestedblock--port_forwarding))
- `preferred_cpu` (List of Number) Recommended isolated CPUs. Field is ignored if compute.cpupin=False or compute.pinned=False
- `reset` (Boolean)
- `restore` (Boolean)
- `rollback` (Block Set, Max: 1) (see [below for nested schema](#nestedblock--rollback))
@@ -121,6 +121,7 @@ description: |-
- `vgpus` (List of Number)
- `virtual_image_id` (Number)
- `virtual_image_name` (String)
- `vnc_password` (String)
<a id="nestedblock--affinity_rules"></a>
### Nested Schema for `affinity_rules`
@@ -196,6 +197,7 @@ Required:
Optional:
- `ip_address` (String) Optional IP address to assign to this connection. This IP should belong to the selected network and free for use.
- `mtu` (Number) Maximum transmission unit, used only for DPDK type, must be 1-9216
- `weight` (Number) Network weight used to sort the network list; the smallest weight is attached first, zero or null weight is attached last
Read-Only:
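
A hedged sketch pulling the new arguments together, assuming this hunk belongs to the `decort_kvmvm` resource (the cloudbroker resource doc above gains the same fields); the IDs and CPU numbers are placeholders and the other required arguments are omitted:

```terraform
resource "decort_kvmvm" "example" {
  # ...name, rg_id, cpu, ram, image_id and other required arguments omitted...

  # preferred_cpu is honored only when the VM runs on dedicated CPUs
  cpu_pin       = true
  preferred_cpu = [2, 3]

  # start the compute automatically after its node leaves MAINTENANCE
  pin_to_stack      = true
  auto_start_w_node = true

  network {
    # other network attributes are omitted in this sketch
    net_type = "DPDK"
    net_id   = 99   # placeholder network ID
    mtu      = 9000 # used only for DPDK networks, must be 1-9216
  }
}
```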

View File

@@ -77,7 +77,7 @@ description: |-
Optional:
- `ext_net_id` (Number)
- `ext_net_ip` (Number)
- `ext_net_ip` (String)
<a id="nestedblock--ip"></a>
@@ -163,6 +163,7 @@ Read-Only:
- `tech_status` (String)
- `type` (String)
- `vins` (List of Number)
- `vnc_password` (String)
- `vnf_id` (Number)
- `vnf_name` (String)
@@ -303,10 +304,7 @@ Read-Only:
Read-Only:
- `client_type` (String)
- `desc` (String)
- `domainname` (String)
- `hostname` (String)
- `account_id` (Number)
- `ip` (String)
- `mac` (String)
- `type` (String)

View File

@@ -24,12 +24,12 @@ description: |-
### Optional
- `compute_ids` (List of Number)
- `route_id` (Number)
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))
### Read-Only
- `compute_ids` (List of Number)
- `guid` (String)
- `id` (String) The ID of this resource.

2
go.mod
View File

@@ -8,7 +8,7 @@ require (
github.com/hashicorp/terraform-plugin-sdk/v2 v2.33.0
github.com/sirupsen/logrus v1.9.0
golang.org/x/net v0.23.0
repository.basistech.ru/BASIS/decort-golang-sdk v1.9.0
repository.basistech.ru/BASIS/decort-golang-sdk v1.10.2
)
require (

4
go.sum
View File

@@ -273,5 +273,5 @@ gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
repository.basistech.ru/BASIS/decort-golang-sdk v1.9.0 h1:RLOWSc7EJ6O37aPHQI9gkJ2JfuZMzGonF2PKeWN6sXw=
repository.basistech.ru/BASIS/decort-golang-sdk v1.9.0/go.mod h1:OaUynHHuSjWMzpfyoL4au6oLcUogqUkPPBKA15pbHWo=
repository.basistech.ru/BASIS/decort-golang-sdk v1.10.2 h1:sA/ZngL4xvkyz8lVGkqbi2RBi4CrHJjho2WV21KX918=
repository.basistech.ru/BASIS/decort-golang-sdk v1.10.2/go.mod h1:OaUynHHuSjWMzpfyoL4au6oLcUogqUkPPBKA15pbHWo=

View File

@@ -137,6 +137,7 @@ func newDataSourcesMap() map[string]*schema.Resource {
"decort_extnet_computes_list": extnet.DataSourceExtnetComputesList(),
"decort_extnet": extnet.DataSourceExtnet(),
"decort_extnet_default": extnet.DataSourceExtnetDefault(),
"decort_extnet_reserved_ip_list": extnet.DataSourceExtnetReservedIp(),
"decort_locations_list": locations.DataSourceLocationsList(),
"decort_location_url": locations.DataSourceLocationUrl(),
"decort_image_list": image.DataSourceImageList(),
@@ -180,6 +181,7 @@ func newDataSourcesMap() map[string]*schema.Resource {
"decort_cb_extnet": cb_extnet.DataSourceExtnetCB(),
"decort_cb_extnet_list": cb_extnet.DataSourceExtnetListCB(),
"decort_cb_extnet_default": cb_extnet.DataSourceExtnetDefaultCB(),
"decort_cb_extnet_reserved_ip_list": cb_extnet.DataSourceExtnetReservedIp(),
"decort_cb_extnet_static_route_list": cb_extnet.DataSourceStaticRouteList(),
"decort_cb_extnet_static_route": cb_extnet.DataSourceStaticRoute(),
"decort_cb_image": cb_image.DataSourceImage(),
@@ -187,7 +189,6 @@ func newDataSourcesMap() map[string]*schema.Resource {
"decort_cb_grid_get_status": cb_grid.DataSourceGridGetStatus(),
"decort_cb_grid_post_status": cb_grid.DataSourceGridPostStatus(),
"decort_cb_grid_get_diagnosis": cb_grid.DataSourceGridGetDiagnosis(),
"decort_cb_grid_post_diagnosis": cb_grid.DataSourceGridPostDiagnosis(),
"decort_cb_grid_get_settings": cb_grid.DataSourceGridGetSettings(),
"decort_cb_grid_list": cb_grid.DataSourceGridList(),
"decort_cb_grid_list_emails": cb_grid.DataSourceGridListEmails(),

View File

@@ -204,6 +204,10 @@ func dataSourceExtnetSchemaMake() map[string]*schema.Schema {
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"account_id": {
Type: schema.TypeInt,
Computed: true,
},
"client_type": {
Type: schema.TypeString,
Computed: true,

View File

@@ -0,0 +1,137 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>
Tim Tkachev, <tvtkachev@basistech.ru>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.
Source code: https://repository.basistech.ru/BASIS/terraform-provider-decort
Please see README.md to learn where to place source code so that it
builds seamlessly.
Documentation: https://repository.basistech.ru/BASIS/terraform-provider-decort/wiki
*/
package extnet
import (
"context"
"github.com/google/uuid"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/constants"
)
func dataSourceExtnetReservedIpRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
reservedList, err := utilityExtnetReservedIpCheckPresence(ctx, d, m)
if err != nil {
return diag.FromErr(err)
}
id := uuid.New()
d.SetId(id.String())
d.Set("items", flattenExtnetReservedIp(reservedList))
return nil
}
func dataSourceExtnetReservedIpSchemaMake() map[string]*schema.Schema {
res := map[string]*schema.Schema{
"account_id": {
Type: schema.TypeInt,
Required: true,
},
"extnet_id": {
Type: schema.TypeInt,
Optional: true,
},
"items": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"extnet_id": {
Type: schema.TypeInt,
Computed: true,
},
"reservations": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"account_id": {
Type: schema.TypeInt,
Computed: true,
},
"client_type": {
Type: schema.TypeString,
Computed: true,
},
"domain_name": {
Type: schema.TypeString,
Computed: true,
},
"hostname": {
Type: schema.TypeString,
Computed: true,
},
"ip": {
Type: schema.TypeString,
Computed: true,
},
"mac": {
Type: schema.TypeString,
Computed: true,
},
"type": {
Type: schema.TypeString,
Computed: true,
},
"vm_id": {
Type: schema.TypeInt,
Computed: true,
},
},
},
},
},
},
},
}
return res
}
func DataSourceExtnetReservedIp() *schema.Resource {
return &schema.Resource{
SchemaVersion: 1,
ReadContext: dataSourceExtnetReservedIpRead,
Timeouts: &schema.ResourceTimeout{
Read: &constants.Timeout30s,
Default: &constants.Timeout60s,
},
Schema: dataSourceExtnetReservedIpSchemaMake(),
}
}

View File

@@ -54,6 +54,7 @@ func flattenExtnetReservations(ers extnet.ListReservations) []map[string]interfa
res := make([]map[string]interface{}, 0, len(ers))
for _, er := range ers {
temp := map[string]interface{}{
"account_id": er.AccountID,
"client_type": er.ClientType,
"domainname": er.DomainName,
"hostname": er.Hostname,
@@ -135,3 +136,29 @@ func flattenExtnetList(el *extnet.ListExtNets) []map[string]interface{} {
}
return res
}
func flattenExtnetReservedIp(el []extnet.RecordReservedIP) []map[string]interface{} {
res := make([]map[string]interface{}, 0, len(el))
for _, e := range el {
reservations := make([]map[string]interface{}, 0, len(e.Reservations))
for _, r := range e.Reservations {
temp := map[string]interface{}{
"account_id": r.AccountID,
"client_type": r.ClientType,
"domain_name": r.DomainName,
"hostname": r.Hostname,
"ip": r.IP,
"mac": r.Mac,
"type": r.Type,
"vm_id": r.VMID,
}
reservations = append(reservations, temp)
}
item := map[string]interface{}{
"extnet_id": e.ExtnetID,
"reservations": reservations,
}
res = append(res, item)
}
return res
}

View File

@@ -0,0 +1,61 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.
Source code: https://repository.basistech.ru/BASIS/terraform-provider-decort
Please see README.md to learn where to place source code so that it
builds seamlessly.
Documentation: https://repository.basistech.ru/BASIS/terraform-provider-decort/wiki
*/
package extnet
import (
"context"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
log "github.com/sirupsen/logrus"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudapi/extnet"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/controller"
)
func utilityExtnetReservedIpCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) ([]extnet.RecordReservedIP, error) {
c := m.(*controller.ControllerCfg)
req := extnet.GetReservedIP{
AccountID: uint64(d.Get("account_id").(int)),
}
if extNetID, ok := d.GetOk("extnet_id"); ok {
req.ExtNetID = uint64(extNetID.(int))
}
log.Debugf("utilityExtnetReservedIpCheckPresence")
res, err := c.CloudAPI().ExtNet().GetReservedIP(ctx, req)
if err != nil {
return nil, err
}
return res, nil
}

View File

@@ -124,7 +124,7 @@ func dataSourceFlipgroupListSchemaMake() map[string]*schema.Schema {
Type: schema.TypeList,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
Type: schema.TypeInt,
},
Description: "client_ids",
},

View File

@@ -189,7 +189,7 @@ func utilityFlipgroupListCheckPresence(ctx context.Context, d *schema.ResourceDa
if cliensId, ok := d.GetOk("client_ids"); ok {
cliensIds := cliensId.([]interface{})
for _, elem := range cliensIds {
req.ClientIDs = append(req.ClientIDs, (elem.(string)))
req.ClientIDs = append(req.ClientIDs, uint64(elem.(int)))
}
}
if status, ok := d.GetOk("status"); ok {

View File

@@ -37,6 +37,7 @@ import (
"fmt"
"strconv"
"strings"
"time"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
@@ -44,6 +45,7 @@ import (
log "github.com/sirupsen/logrus"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudapi/compute"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudapi/k8s"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudapi/tasks"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/constants"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/controller"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/validators"
@@ -105,7 +107,33 @@ func resourceK8sWgCreate(ctx context.Context, d *schema.ResourceData, m interfac
return diag.FromErr(err)
}
d.SetId(fmt.Sprintf("%d#%d", d.Get("k8s_id").(int), resp))
taskReq := tasks.GetRequest{
AuditID: strings.Trim(resp, `"`),
}
for {
task, err := c.CloudAPI().Tasks().Get(ctx, taskReq)
if err != nil {
return diag.FromErr(err)
}
log.Debugf("resourceK8sWgCreate: instance creating - %s", task.Stage)
if task.Completed {
if task.Error != "" {
return diag.FromErr(fmt.Errorf("cannot create k8s wg instance: %v", task.Error))
}
id, err := task.Result.ID()
if err != nil {
return diag.FromErr(err)
}
d.SetId(fmt.Sprint(d.Get("k8s_id").(int), "#", strconv.Itoa(id)))
break
}
time.Sleep(time.Second * 20)
}
return resourceK8sWgRead(ctx, d, m)
}

View File

@@ -703,6 +703,10 @@ func dataSourceComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"auto_start_w_node": {
Type: schema.TypeBool,
Computed: true,
},
"chipset": {
Type: schema.TypeString,
Computed: true,
@@ -881,6 +885,13 @@ func dataSourceComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeBool,
Computed: true,
},
"preferred_cpu": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
},
"ram": {
Type: schema.TypeInt,
Computed: true,
@@ -950,6 +961,10 @@ func dataSourceComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"vnc_password": {
Type: schema.TypeString,
Computed: true,
},
"vgpus": {
Type: schema.TypeList,
Computed: true,

View File

@@ -121,6 +121,10 @@ func itemComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"auto_start_w_node": {
Type: schema.TypeBool,
Computed: true,
},
"boot_order": {
Type: schema.TypeList,
Computed: true,
@@ -269,6 +273,13 @@ func itemComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeBool,
Computed: true,
},
"preferred_cpu": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
},
"ram": {
Type: schema.TypeInt,
Computed: true,

View File

@@ -183,6 +183,7 @@ func flattenComputeList(computes *compute.ListComputes) []map[string]interface{}
"affinity_weight": compute.AffinityWeight,
"anti_affinity_rules": flattenListRules(compute.AntiAffinityRules),
"arch": compute.Architecture,
"auto_start_w_node": compute.AutoStart,
"boot_order": compute.BootOrder,
"bootdisk_size": compute.BootDiskSize,
"chipset": compute.Chipset,
@@ -217,6 +218,7 @@ func flattenComputeList(computes *compute.ListComputes) []map[string]interface{}
"numa_affinity": compute.NumaAffinity,
"numa_node_id": compute.NumaNodeId,
"pinned": compute.Pinned,
"preferred_cpu": compute.PreferredCPU,
"ram": compute.RAM,
"reference_id": compute.ReferenceID,
"registered": compute.Registered,
@@ -316,6 +318,7 @@ func flattenNetwork(networks []interface{}, interfaces compute.ListInterfaces) [
"net_type": network.NetType,
"ip_address": network.IPAddress,
"mac": network.MAC,
"mtu": network.MTU,
"weight": flattenNetworkWeight(networks, network.NetID, network.NetType),
}
res = append(res, temp)
@@ -361,6 +364,7 @@ func flattenCompute(d *schema.ResourceData, computeRec compute.RecordCompute, pc
d.Set("account_name", computeRec.AccountName)
d.Set("affinity_weight", computeRec.AffinityWeight)
d.Set("arch", computeRec.Architecture)
d.Set("auto_start_w_node", computeRec.AutoStart)
d.Set("boot_order", computeRec.BootOrder)
// we intentionally use the SizeMax field, do not change it until the BootDiskSize field is fixed on the platform
d.Set("boot_disk_size", bootDisk.SizeMax)
@@ -412,6 +416,7 @@ func flattenCompute(d *schema.ResourceData, computeRec compute.RecordCompute, pc
return err
}
d.Set("pinned", computeRec.Pinned)
d.Set("preferred_cpu", computeRec.PreferredCPU)
d.Set("ram", computeRec.RAM)
d.Set("reference_id", computeRec.ReferenceID)
d.Set("registered", computeRec.Registered)
@@ -428,6 +433,7 @@ func flattenCompute(d *schema.ResourceData, computeRec compute.RecordCompute, pc
d.Set("updated_by", computeRec.UpdatedBy)
d.Set("updated_time", computeRec.UpdatedTime)
d.Set("user_managed", computeRec.UserManaged)
d.Set("vnc_password", computeRec.VNCPassword)
d.Set("vgpus", computeRec.VGPUs)
d.Set("virtual_image_id", computeRec.VirtualImageID)
d.Set("virtual_image_name", computeRec.VirtualImageName)
@@ -612,6 +618,7 @@ func flattenDataCompute(d *schema.ResourceData, computeRec compute.RecordCompute
d.Set("affinity_rules", flattenAffinityRules(computeRec.AffinityRules))
d.Set("affinity_weight", computeRec.AffinityWeight)
d.Set("anti_affinity_rules", flattenListRules(computeRec.AntiAffinityRules))
d.Set("auto_start_w_node", computeRec.AutoStart)
d.Set("arch", computeRec.Architecture)
d.Set("chipset", computeRec.Chipset)
d.Set("boot_order", computeRec.BootOrder)
@@ -654,6 +661,7 @@ func flattenDataCompute(d *schema.ResourceData, computeRec compute.RecordCompute
d.Set("natable_vins_network_name", computeRec.NatableVINSNetworkName)
d.Set("os_users", flattenOsUsers(computeRec.OSUsers))
d.Set("pinned", computeRec.Pinned)
d.Set("preferred_CPU", computeRec.PreferredCPU)
d.Set("ram", computeRec.RAM)
d.Set("reference_id", computeRec.ReferenceID)
d.Set("registered", computeRec.Registered)
@@ -671,6 +679,7 @@ func flattenDataCompute(d *schema.ResourceData, computeRec compute.RecordCompute
d.Set("updated_time", computeRec.UpdatedTime)
d.Set("user_managed", computeRec.UserManaged)
d.Set("userdata", string(userdata))
d.Set("vnc_password", computeRec.VNCPassword)
d.Set("vgpus", computeRec.VGPUs)
d.Set("virtual_image_id", computeRec.VirtualImageID)
d.Set("virtual_image_name", computeRec.VirtualImageName)

View File

@@ -158,6 +158,15 @@ func networkSubresourceSchemaMake() map[string]*schema.Schema {
Computed: true,
Description: "weight the network if you need to sort network list, the smallest attach first. zero or null weight attach last",
},
"mtu": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
//Default: 1500,
ValidateFunc: validation.IntBetween(1, 9216),
Description: "Maximum transmission unit, used only for DPDK type, must be 1-9216",
},
}
return rets
}

View File

@@ -162,6 +162,10 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
NetID: uint64(netInterfaceVal["net_id"].(int)),
}
if reqInterface.NetType == "DPDK" {
reqInterface.MTU = uint64(netInterfaceVal["mtu"].(int))
}
ipaddr, ipSet := netInterfaceVal["ip_address"]
if ipSet {
reqInterface.IPAddr = ipaddr.(string)
@@ -244,6 +248,16 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
createReqX86.HPBacked = d.Get("hp_backed").(bool)
createReqX86.Chipset = d.Get("chipset").(string)
if preferredCPU, ok := d.GetOk("preferred_cpu"); ok {
preferredList := preferredCPU.([]interface{})
if len(preferredList) > 0 {
for _, v := range preferredList {
cpuNum := v.(int)
createReqX86.PreferredCPU = append(createReqX86.PreferredCPU, int64(cpuNum))
}
}
}
log.Debugf("resourceComputeCreate: creating Compute of type KVM VM x86")
apiResp, err := c.CloudAPI().KVMX86().Create(ctx, createReqX86)
if err != nil {
@@ -274,6 +288,25 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
log.Debugf("resourceComputeCreate: new simple Compute ID %d, name %s created", computeId, d.Get("name").(string))
if ars, ok := d.GetOk("pci_devices"); ok {
log.Debugf("resourceComputeCreate: add pci devices on ComputeID: %d", computeId)
addedPciDevices := ars.(*schema.Set).List()
if len(addedPciDevices) > 0 {
for _, v := range addedPciDevices {
devicesConv := v.(int)
req := compute.AttachPCIDeviceRequest{
ComputeID: computeId,
DeviceID: uint64(devicesConv),
}
_, err := c.CloudAPI().Compute().AttachPCIDevice(ctx, req)
if err != nil {
warnings.Add(err)
}
}
}
}
argVal, ok = d.GetOk("extra_disks")
if ok && argVal.(*schema.Set).Len() > 0 {
log.Debugf("resourceComputeCreate: calling utilityComputeExtraDisksConfigure to attach %d extra disk(s)", argVal.(*schema.Set).Len())
@@ -303,6 +336,17 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if d.Get("pin_to_stack").(bool) {
req := compute.PinToStackRequest{
ComputeID: computeId,
}
req.AutoStart = d.Get("auto_start_w_node").(bool)
_, err := c.CloudAPI().Compute().PinToStack(ctx, req)
if err != nil {
warnings.Add(err)
}
}
// Note bene: we created compute in a STOPPED state (this is required to properly attach 1st network interface),
// now we need to start it before we report the sequence complete
if start, ok := d.GetOk("started"); ok {
@@ -483,11 +527,14 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if d.Get("pin_to_stack").(bool) {
req := compute.PinToStackRequest{
if !d.Get("pin_to_stack").(bool) && d.Get("auto_start_w_node").(bool) {
req := compute.UpdateRequest{
ComputeID: computeId,
AutoStart: d.Get("auto_start_w_node").(bool),
CPUPin: d.Get("cpu_pin").(bool),
HPBacked: d.Get("hp_backed").(bool),
}
_, err := c.CloudAPI().Compute().PinToStack(ctx, req)
_, err := c.CloudAPI().Compute().Update(ctx, req)
if err != nil {
warnings.Add(err)
}
@@ -503,24 +550,6 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if ars, ok := d.GetOk("pci_devices"); ok {
log.Debugf("resourceComputeCreate: add pci devices on ComputeID: %d", computeId)
addedPciDevices := ars.(*schema.Set).List()
if len(addedPciDevices) > 0 {
for _, v := range addedPciDevices {
devicesConv := v.(int)
req := compute.AttachPCIDeviceRequest{
ComputeID: computeId,
DeviceID: uint64(devicesConv),
}
_, err := c.CloudAPI().Compute().AttachPCIDevice(ctx, req)
if err != nil {
warnings.Add(err)
}
}
}
}
}
log.Debugf("resourceComputeCreate: new Compute ID %d, name %s creation sequence complete", computeId, d.Get("name").(string))
@@ -615,7 +644,7 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
if !hasRG {
return diag.Errorf("resourceComputeUpdate: can't update Compute bacause rgID %d not allowed or does not exist", d.Get("rg_id").(int))
return diag.Errorf("resourceComputeUpdate: can't update Compute because rgID %d not allowed or does not exist", d.Get("rg_id").(int))
}
hasImage, err := existImageId(ctx, d, m)
@@ -624,7 +653,7 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
if !hasImage {
return diag.Errorf("resourceComputeUpdate: can't update Compute bacause imageID %d not allowed or does not exist", d.Get("image_id").(int))
return diag.Errorf("resourceComputeUpdate: can't update Compute because imageID %d not allowed or does not exist", d.Get("image_id").(int))
}
if disks, ok := d.GetOk("disks"); ok {
@@ -776,46 +805,8 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
}
doUpdate := false
resizeReq := compute.ResizeRequest{
ComputeID: computeRec.ID,
}
forceResize, ok := d.GetOk("force_resize")
if ok {
resizeReq.Force = forceResize.(bool)
}
warnings := dc.Warnings{}
oldCpu, newCpu := d.GetChange("cpu")
if oldCpu.(int) > newCpu.(int) && !forceResize.(bool) {
return diag.Errorf("Cannot resize compute ID %d: enable 'force_resize' to reduce compute vCPUs", computeRec.ID)
}
if oldCpu.(int) != newCpu.(int) {
resizeReq.CPU = uint64(newCpu.(int))
doUpdate = true
} else {
resizeReq.CPU = 0
}
oldRam, newRam := d.GetChange("ram")
if oldRam.(int) != newRam.(int) {
resizeReq.RAM = uint64(newRam.(int))
doUpdate = true
} else {
resizeReq.RAM = 0
}
if doUpdate {
log.Debugf("resourceComputeUpdate: changing CPU %d -> %d and/or RAM %d -> %d",
oldCpu.(int), newCpu.(int),
oldRam.(int), newRam.(int))
_, err := c.CloudAPI().Compute().Resize(ctx, resizeReq)
if err != nil {
return diag.FromErr(err)
}
}
oldSize, newSize := d.GetChange("boot_disk_size")
if oldSize.(int) < newSize.(int) {
req := compute.DiskResizeRequest{ComputeID: computeRec.ID}
@@ -851,14 +842,101 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if d.HasChange("network") {
err = utilityComputeNetworksConfigure(ctx, d, m)
if d.HasChange("pin_to_stack") {
oldPin, newPin := d.GetChange("pin_to_stack")
if !newPin.(bool) {
req := compute.UnpinFromStackRequest{
ComputeID: computeRec.ID,
}
_, err := c.CloudAPI().Compute().UnpinFromStack(ctx, req)
if err != nil {
return diag.FromErr(err)
}
}
if !oldPin.(bool) {
req := compute.PinToStackRequest{
ComputeID: computeRec.ID,
}
req.AutoStart = d.Get("auto_start_w_node").(bool)
_, err := c.CloudAPI().Compute().PinToStack(ctx, req)
if err != nil {
return diag.FromErr(err)
}
}
}
// Note bene: numa_affinity, cpu_pin, old_cpu > new_cpu and hp_backed are not allowed to be changed for compute in STARTED tech status.
// If STARTED, we need to stop it before update
var isStopRequired bool
if d.HasChanges("numa_affinity", "cpu_pin", "hp_backed", "chipset", "preferred_cpu") && d.Get("started").(bool) {
isStopRequired = true
}
old, new := d.GetChange("cpu")
if old.(int) > new.(int) && d.Get("started").(bool) {
isStopRequired = true
}
if isStopRequired {
if _, err := c.CloudAPI().Compute().Stop(ctx, compute.StopRequest{ComputeID: computeRec.ID}); err != nil {
return diag.FromErr(err)
}
}
doUpdate := false
resizeReq := compute.ResizeRequest{
ComputeID: computeRec.ID,
}
forceResize, ok := d.GetOk("force_resize")
if ok {
resizeReq.Force = forceResize.(bool)
}
oldCpu, newCpu := d.GetChange("cpu")
if oldCpu.(int) > newCpu.(int) && !forceResize.(bool) {
return diag.Errorf("Cannot resize compute ID %d: enable 'force_resize' to reduce compute vCPUs", computeRec.ID)
}
if oldCpu.(int) != newCpu.(int) {
resizeReq.CPU = uint64(newCpu.(int))
doUpdate = true
} else {
resizeReq.CPU = 0
}
if resizeReq.CPU != 0 {
if preferredCPU, ok := d.GetOk("preferred_cpu"); ok {
preferredList := preferredCPU.([]interface{})
if len(preferredList) > 0 {
for _, v := range preferredList {
cpuNum := v.(int)
resizeReq.PreferredCPU = append(resizeReq.PreferredCPU, int64(cpuNum))
}
}
}
oldPCPU, newPCPU := d.GetChange("preferred_cpu")
if len(oldPCPU.([]interface{})) != 0 && len(newPCPU.([]interface{})) == 0 {
resizeReq.PreferredCPU = []int64{-1}
}
}
oldRam, newRam := d.GetChange("ram")
if oldRam.(int) != newRam.(int) {
resizeReq.RAM = uint64(newRam.(int))
doUpdate = true
} else {
resizeReq.RAM = 0
}
if doUpdate {
log.Debugf("resourceComputeUpdate: changing CPU %d -> %d and/or RAM %d -> %d",
oldCpu.(int), newCpu.(int),
oldRam.(int), newRam.(int))
_, err := c.CloudAPI().Compute().Resize(ctx, resizeReq)
if err != nil {
return diag.FromErr(err)
}
}
if d.HasChanges("description", "name", "numa_affinity", "cpu_pin", "hp_backed") {
if d.HasChanges("description", "name", "numa_affinity", "cpu_pin", "hp_backed", "chipset", "auto_start_w_node", "preferred_cpu") {
req := compute.UpdateRequest{
ComputeID: computeRec.ID,
}
@@ -872,39 +950,46 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
if d.HasChange("numa_affinity") {
req.NumaAffinity = d.Get("numa_affinity").(string)
}
if d.HasChange("cpu_pin") {
req.CPUPin = d.Get("cpu_pin").(bool)
}
if d.HasChange("hp_backed") {
req.HPBacked = d.Get("hp_backed").(bool)
}
if d.HasChange("chipset") {
req.Chipset = d.Get("chipset").(string)
}
// Note bene: numa_affinity, cpu_pin and hp_backed are not allowed to be changed for compute in STARTED tech status.
// If STARTED, we need to stop it before update
var isStopRequired bool
if d.HasChanges("numa_affinity", "cpu_pin", "hp_backed") && d.Get("started").(bool) {
isStopRequired = true
}
if isStopRequired {
if _, err := c.CloudAPI().Compute().Stop(ctx, compute.StopRequest{ComputeID: computeRec.ID}); err != nil {
return diag.FromErr(err)
if d.HasChange("preferred_cpu") {
if preferredCPU, ok := d.GetOk("preferred_cpu"); ok {
preferredList := preferredCPU.([]interface{})
if len(preferredList) > 0 {
for _, v := range preferredList {
cpuNum := v.(int)
req.PreferredCPU = append(req.PreferredCPU, int64(cpuNum))
}
}
}
oldPCPU, newPCPU := d.GetChange("preferred_cpu")
if len(oldPCPU.([]interface{})) != 0 && len(newPCPU.([]interface{})) == 0 {
req.PreferredCPU = []int64{-1}
}
}
req.CPUPin = d.Get("cpu_pin").(bool)
req.HPBacked = d.Get("hp_backed").(bool)
req.AutoStart = d.Get("auto_start_w_node").(bool)
// perform update
if _, err := c.CloudAPI().Compute().Update(ctx, req); err != nil {
return diag.FromErr(err)
}
// If used to be STARTED, we need to start it after update
if isStopRequired {
if _, err := c.CloudAPI().Compute().Start(ctx, compute.StartRequest{ComputeID: computeRec.ID}); err != nil {
return diag.FromErr(err)
}
}
// If used to be STARTED, we need to start it after update
if isStopRequired {
if _, err := c.CloudAPI().Compute().Start(ctx, compute.StartRequest{ComputeID: computeRec.ID}); err != nil {
return diag.FromErr(err)
}
}
if d.HasChange("network") {
err = utilityComputeNetworksConfigure(ctx, d, m)
if err != nil {
return diag.FromErr(err)
}
}
@@ -1062,24 +1147,6 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if d.HasChange("started") {
if d.Get("started").(bool) {
req := compute.StartRequest{
ComputeID: computeRec.ID,
}
if _, err := c.CloudAPI().Compute().Start(ctx, req); err != nil {
return diag.FromErr(err)
}
} else {
req := compute.StopRequest{
ComputeID: computeRec.ID,
}
if _, err := c.CloudAPI().Compute().Stop(ctx, req); err != nil {
return diag.FromErr(err)
}
}
}
if d.HasChange("affinity_label") {
affinityLabel := d.Get("affinity_label").(string)
if affinityLabel == "" {
@@ -1461,30 +1528,6 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if d.HasChange("pin_to_stack") {
oldPin, newPin := d.GetChange("pin_to_stack")
if !newPin.(bool) {
req := compute.UnpinFromStackRequest{
ComputeID: computeRec.ID,
}
_, err := c.CloudAPI().Compute().UnpinFromStack(ctx, req)
if err != nil {
return diag.FromErr(err)
}
}
if !oldPin.(bool) {
req := compute.PinToStackRequest{
ComputeID: computeRec.ID,
}
_, err := c.CloudAPI().Compute().PinToStack(ctx, req)
if err != nil {
return diag.FromErr(err)
}
}
}
if d.HasChange("pause") {
oldPause, newPause := d.GetChange("pause")
if !newPause.(bool) {
@@ -1527,6 +1570,9 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
ComputeID: computeRec.ID,
Force: false,
}
if forceStop, ok := d.GetOk("force_stop"); ok {
stopReq.Force = forceStop.(bool)
}
_, err := c.CloudAPI().Compute().Stop(ctx, stopReq)
if err != nil {
@@ -1537,15 +1583,13 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
req := compute.RedeployRequest{
ComputeID: computeRec.ID,
ImageID: uint64(newImage.(int)),
DataDisks: "KEEP",
}
if diskSize, ok := d.GetOk("boot_disk_size"); ok {
req.DiskSize = uint64(diskSize.(int))
}
if dataDisks, ok := d.GetOk("data_disks"); ok {
req.DataDisks = dataDisks.(string)
}
if autoStart, ok := d.GetOk("auto_start"); ok {
if autoStart, ok := d.GetOk("started"); ok {
req.AutoStart = autoStart.(bool)
}
if forceStop, ok := d.GetOk("force_stop"); ok {
@@ -1661,6 +1705,16 @@ func resourceComputeDelete(ctx context.Context, d *schema.ResourceData, m interf
c := m.(*controller.ControllerCfg)
computeId, _ := strconv.ParseUint(d.Id(), 10, 64)
if start, ok := d.GetOk("started"); ok {
if start.(bool) {
req := compute.StopRequest{ComputeID: computeId}
log.Debugf("resourceComputeDelete: stoping Compute ID %d", computeId)
if _, err := c.CloudAPI().Compute().Stop(ctx, req); err != nil {
return diag.FromErr(err)
}
}
}
pciList, ok := d.GetOk("pci_devices")
if d.Get("permanently").(bool) && ok {
@@ -2119,11 +2173,11 @@ func ResourceComputeSchemaMake() map[string]*schema.Schema {
Optional: true,
Default: false,
},
"auto_start": {
"auto_start_w_node": {
Type: schema.TypeBool,
Optional: true,
Default: false,
Description: "Flag for redeploy compute",
Computed: true,
Description: "Flag for start compute after node exits from MAINTENANCE state",
},
"force_stop": {
Type: schema.TypeBool,
@@ -2137,13 +2191,6 @@ func ResourceComputeSchemaMake() map[string]*schema.Schema {
Default: false,
Description: "Flag for resize compute",
},
"data_disks": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{"KEEP", "DETACH", "DESTROY"}, false),
Default: "DETACH",
Description: "Flag for redeploy compute",
},
"started": {
Type: schema.TypeBool,
Optional: true,
@@ -2189,6 +2236,15 @@ func ResourceComputeSchemaMake() map[string]*schema.Schema {
Default: false,
Description: "Use Huge Pages to allocate RAM of the virtual machine. The system must be pre-configured by allocating Huge Pages on the physical node.",
},
"preferred_cpu": {
Type: schema.TypeList,
Optional: true,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
Description: "Recommended isolated CPUs. Field is ignored if compute.cpupin=False or compute.pinned=False",
},
"pci_devices": {
Type: schema.TypeSet,
Optional: true,
@@ -2407,6 +2463,10 @@ func ResourceComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeBool,
Computed: true,
},
"vnc_password": {
Type: schema.TypeString,
Computed: true,
},
"vgpus": {
Type: schema.TypeList,
Computed: true,
@@ -2439,6 +2499,18 @@ func ResourceCompute() *schema.Resource {
StateContext: schema.ImportStatePassthroughContext,
},
CustomizeDiff: func(ctx context.Context, diff *schema.ResourceDiff, i interface{}) error {
if diff.HasChanges() || diff.HasChanges("chipset", "pin_to_stack", "auto_start_w_node", "network", "affinity_rules", "anti_affinity_rules",
"extra_disks", "tags", "port_forwarding", "user_access", "snapshot", "pci_devices", "preferred_cpu") {
diff.SetNewComputed("updated_time")
diff.SetNewComputed("updated_by")
}
if diff.HasChanges("pin_to_stack") {
diff.SetNewComputed("pinned")
}
return nil
},
Timeouts: &schema.ResourceTimeout{
Create: &constants.Timeout600s,
Read: &constants.Timeout300s,

View File

@@ -224,7 +224,7 @@ func utilityComputeNetworksConfigure(ctx context.Context, d *schema.ResourceData
needStart := false
if oldSet.(*schema.Set).Len() == len(detachMap) || oldSet.(*schema.Set).Len() == 0 {
if oldSet.(*schema.Set).Len() == len(detachMap) || oldSet.(*schema.Set).Len() == 0 || hasDPDKnetwork(attachMap) {
computeId, _ := strconv.ParseUint(d.Id(), 10, 64)
if err := utilityComputeStop(ctx, computeId, m); err != nil {
apiErrCount++
@@ -259,6 +259,10 @@ func utilityComputeNetworksConfigure(ctx context.Context, d *schema.ResourceData
req.IPAddr = netData["ip_address"].(string)
}
if req.NetType == "DPDK" {
req.MTU = uint64(netData["mtu"].(int))
}
_, err := c.CloudAPI().Compute().NetAttach(ctx, req)
if err != nil {
log.Errorf("utilityComputeNetworksConfigure: failed to attach net ID %d of type %s to Compute ID %s: %s",
@@ -285,6 +289,15 @@ func utilityComputeNetworksConfigure(ctx context.Context, d *schema.ResourceData
return nil
}
func hasDPDKnetwork(networkAttachMap []map[string]interface{}) bool {
for _, elem := range networkAttachMap {
if elem["net_type"].(string) == "DPDK" {
return true
}
}
return false
}
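// Editor's note (not part of the diff): a hedged usage sketch of hasDPDKnetwork.
// The caller above stops the compute before attaching when any entry in the
// attach list is of type DPDK; that DPDK attach requires a stopped compute is
// inferred from this code, not from platform documentation.
func exampleHasDPDK() bool {
    attachMap := []map[string]interface{}{
        {"net_type": "VINS", "net_id": 101},
        {"net_type": "DPDK", "net_id": 7, "mtu": 1600},
    }
    return hasDPDKnetwork(attachMap) // true -> utilityComputeStop is called before NetAttach
}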
func utilityComputeCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (compute.RecordCompute, error) {
c := m.(*controller.ControllerCfg)
computeId, _ := strconv.ParseUint(d.Id(), 10, 64)
@@ -374,12 +387,12 @@ func differenceNetwork(oldList, newList []interface{}) (detachMap, changeIpMap,
found := false
for _, newNetwork := range newList {
newMap := newNetwork.(map[string]interface{})
if newMap["net_type"] == oldMap["net_type"] && newMap["net_id"] == oldMap["net_id"] && newMap["weight"] == oldMap["weight"] {
if (newMap["net_type"].(string) == "EXTNET" || newMap["net_type"].(string) == "VINS") && newMap["ip_address"] != oldMap["ip_address"] {
if newMap["net_type"] == oldMap["net_type"] && newMap["net_id"] == oldMap["net_id"] && newMap["weight"] == oldMap["weight"] && (newMap["mtu"] == oldMap["mtu"] || newMap["mtu"].(int) == 0) {
if (newMap["net_type"].(string) == "EXTNET" || newMap["net_type"].(string) == "VINS") && (newMap["ip_address"] != oldMap["ip_address"] && newMap["ip_address"].(string) != "") {
changeIpMap = append(changeIpMap, newMap)
found = true
break
} else if newMap["ip_address"] == oldMap["ip_address"] {
} else if newMap["ip_address"] == oldMap["ip_address"] || newMap["ip_address"].(string) == "" {
found = true
break
}
@@ -396,8 +409,10 @@ func differenceNetwork(oldList, newList []interface{}) (detachMap, changeIpMap,
found := false
for _, oldNetwork := range oldList {
oldMap := oldNetwork.(map[string]interface{})
if newMap["net_type"] == oldMap["net_type"] && newMap["net_id"] == oldMap["net_id"] && newMap["weight"] == oldMap["weight"] {
if newMap["ip_address"] == oldMap["ip_address"] || ((newMap["net_type"].(string) == "EXTNET" || newMap["net_type"].(string) == "VINS") && newMap["ip_address"] != oldMap["ip_address"]) {
if newMap["net_type"] == oldMap["net_type"] && newMap["net_id"] == oldMap["net_id"] && newMap["weight"] == oldMap["weight"] && (newMap["mtu"] == oldMap["mtu"] || newMap["mtu"].(int) == 0) {
if newMap["ip_address"] == oldMap["ip_address"] || newMap["ip_address"].(string) == "" ||
((newMap["net_type"].(string) == "EXTNET" || newMap["net_type"].(string) == "VINS") &&
newMap["ip_address"] != oldMap["ip_address"] && newMap["ip_address"].(string) != "") {
found = true
break
}

View File

@@ -347,6 +347,10 @@ func vnfDevSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"vnc_password": {
Type: schema.TypeString,
Computed: true,
},
"vins": {
Type: schema.TypeList,
Computed: true,
@@ -372,20 +376,8 @@ func vinsComputeSchemaMake() map[string]*schema.Schema {
func reservationSchemaMake() map[string]*schema.Schema {
return map[string]*schema.Schema{
"client_type": {
Type: schema.TypeString,
Computed: true,
},
"desc": {
Type: schema.TypeString,
Computed: true,
},
"domainname": {
Type: schema.TypeString,
Computed: true,
},
"hostname": {
Type: schema.TypeString,
"account_id": {
Type: schema.TypeInt,
Computed: true,
},
"ip": {

View File

@@ -151,6 +151,7 @@ func flattenVNFDev(vnfDev vins.RecordVNFDev) []map[string]interface{} {
"status": vnfDev.Status,
"tech_status": vnfDev.TechStatus,
"type": vnfDev.Type,
"vnc_password": vnfDev.VNCPassword,
"vins": vnfDev.VINS,
}
@@ -176,14 +177,11 @@ func flattenReservations(reservations vins.ListReservations) []map[string]interf
res := make([]map[string]interface{}, 0, len(reservations))
for _, reservation := range reservations {
temp := map[string]interface{}{
"client_type": reservation.ClientType,
"desc": reservation.Description,
"domainname": reservation.DomainName,
"hostname": reservation.Hostname,
"ip": reservation.IP,
"mac": reservation.MAC,
"type": reservation.Type,
"vm_id": reservation.VMID,
"account_id": reservation.AccountID,
"ip": reservation.IP,
"mac": reservation.MAC,
"type": reservation.Type,
"vm_id": reservation.VMID,
}
res = append(res, temp)
}

View File

@@ -1,277 +1,194 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.
Source code: https://repository.basistech.ru/BASIS/terraform-provider-decort
Please see README.md to learn where to place source code so that it
builds seamlessly.
Documentation: https://repository.basistech.ru/BASIS/terraform-provider-decort/wiki
*/
package vins
import (
"context"
"fmt"
"strconv"
"strings"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudapi/vins"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/constants"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/controller"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/dc"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)
func resourceStaticRouteCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
c := m.(*controller.ControllerCfg)
if _, ok := d.GetOk("vins_id"); ok {
haveVinsID, err := existVinsID(ctx, d, m)
if err != nil {
return diag.FromErr(err)
}
if !haveVinsID {
return diag.Errorf("resourceStaticRouteCreate: can't create Static Route because Vins ID %d is not allowed or does not exist", d.Get("vins_id").(int))
}
}
req := vins.StaticRouteAddRequest{
VINSID: uint64(d.Get("vins_id").(int)),
Destination: d.Get("destination").(string),
Netmask: d.Get("netmask").(string),
Gateway: d.Get("gateway").(string),
}
if computesIDS, ok := d.GetOk("compute_ids"); ok {
ids := computesIDS.([]interface{})
res := make([]uint64, 0, len(ids))
for _, id := range ids {
computeId := uint64(id.(int))
res = append(res, computeId)
}
req.ComputeIds = res
}
_, err := c.CloudAPI().VINS().StaticRouteAdd(ctx, req)
if err != nil {
return diag.FromErr(err)
}
staticRouteData, err := getStaticRouteData(ctx, d, m)
if err != nil {
d.SetId("")
return diag.FromErr(err)
}
d.SetId(fmt.Sprintf("%d#%d", req.VINSID, staticRouteData.ID))
return resourceStaticRouteRead(ctx, d, m)
}
func resourceStaticRouteRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
warnings := dc.Warnings{}
staticRouteData, err := utilityDataStaticRouteCheckPresence(ctx, d, m)
if err != nil {
d.SetId("")
return diag.FromErr(err)
}
flattenStaticRouteData(d, staticRouteData)
return warnings.Get()
}
func resourceStaticRouteUpdate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
c := m.(*controller.ControllerCfg)
warnings := dc.Warnings{}
if _, ok := d.GetOk("vins_id"); ok {
haveVinsID, err := existVinsID(ctx, d, m)
if err != nil {
return diag.FromErr(err)
}
if !haveVinsID {
return diag.Errorf("resourceVinsUpdate: can't update Static Route because VinsID %d is not allowed or does not exist", d.Get("vins_id").(int))
}
}
staticRouteData, err := utilityDataStaticRouteCheckPresence(ctx, d, m)
if err != nil {
d.SetId("")
return diag.FromErr(err)
}
if d.HasChange("compute_ids") {
deletedIds := make([]uint64, 0)
addedIds := make([]uint64, 0)
oldComputeIds, newComputeIds := d.GetChange("compute_ids")
oldComputeIdsSlice := oldComputeIds.([]interface{})
newComputeIdsSlice := newComputeIds.([]interface{})
for _, el := range oldComputeIdsSlice {
if !isContainsIds(newComputeIdsSlice, el) {
convertedEl := uint64(el.(int))
deletedIds = append(deletedIds, convertedEl)
}
}
for _, el := range newComputeIdsSlice {
if !isContainsIds(oldComputeIdsSlice, el) {
convertedEl := uint64(el.(int))
addedIds = append(addedIds, convertedEl)
}
}
if len(deletedIds) > 0 {
req := vins.StaticRouteAccessRevokeRequest{
VINSID: uint64(d.Get("vins_id").(int)),
RouteId: staticRouteData.ID,
ComputeIds: deletedIds,
}
_, err := c.CloudAPI().VINS().StaticRouteAccessRevoke(ctx, req)
if err != nil {
warnings.Add(err)
}
}
if len(addedIds) > 0 {
req := vins.StaticRouteAccessGrantRequest{
VINSID: uint64(d.Get("vins_id").(int)),
RouteId: staticRouteData.ID,
ComputeIds: addedIds,
}
_, err := c.CloudAPI().VINS().StaticRouteAccessGrant(ctx, req)
if err != nil {
warnings.Add(err)
}
}
}
return append(warnings.Get(), resourceStaticRouteRead(ctx, d, m)...)
}
func resourceStaticRouteDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
c := m.(*controller.ControllerCfg)
arr := strings.Split(d.Id(), "#")
if len(arr) != 2 {
return diag.FromErr(fmt.Errorf("broken state id"))
}
vinsId, _ := strconv.ParseUint(arr[0], 10, 64)
routeId, _ := strconv.ParseUint(arr[1], 10, 64)
req := vins.StaticRouteDelRequest{
VINSID: vinsId,
RouteId: routeId,
}
_, err := c.CloudAPI().VINS().StaticRouteDel(ctx, req)
if err != nil {
return diag.FromErr(err)
}
d.SetId("")
return nil
}
func resourceStaticRouteSchemaMake() map[string]*schema.Schema {
rets := dataSourceStaticRouteSchemaMake()
rets["route_id"] = &schema.Schema{
Type: schema.TypeInt,
Computed: true,
Optional: true,
}
rets["compute_ids"] = &schema.Schema{
Type: schema.TypeList,
Optional: true,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
}
rets["destination"] = &schema.Schema{
Type: schema.TypeString,
Required: true,
}
rets["gateway"] = &schema.Schema{
Type: schema.TypeString,
Required: true,
}
rets["netmask"] = &schema.Schema{
Type: schema.TypeString,
Required: true,
}
return rets
}
func isContainsIds(els []interface{}, el interface{}) bool {
convEl := el.(int)
for _, elOld := range els {
if convEl == elOld.(int) {
return true
}
}
return false
}
func ResourceStaticRoute() *schema.Resource {
return &schema.Resource{
SchemaVersion: 1,
CreateContext: resourceStaticRouteCreate,
ReadContext: resourceStaticRouteRead,
UpdateContext: resourceStaticRouteUpdate,
DeleteContext: resourceStaticRouteDelete,
Importer: &schema.ResourceImporter{
StateContext: schema.ImportStatePassthroughContext,
},
Timeouts: &schema.ResourceTimeout{
Create: &constants.Timeout20m,
Read: &constants.Timeout600s,
Update: &constants.Timeout20m,
Delete: &constants.Timeout600s,
Default: &constants.Timeout600s,
},
Schema: resourceStaticRouteSchemaMake(),
}
}
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.
Source code: https://repository.basistech.ru/BASIS/terraform-provider-decort
Please see README.md to learn where to place source code so that it
builds seamlessly.
Documentation: https://repository.basistech.ru/BASIS/terraform-provider-decort/wiki
*/
package vins
import (
"context"
"fmt"
"strconv"
"strings"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudapi/vins"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/constants"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/controller"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/dc"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)
func resourceStaticRouteCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
c := m.(*controller.ControllerCfg)
if _, ok := d.GetOk("vins_id"); ok {
haveVinsID, err := existVinsID(ctx, d, m)
if err != nil {
return diag.FromErr(err)
}
if !haveVinsID {
return diag.Errorf("resourceStaticRouteCreate: can't create Static Route because Vins ID %d is not allowed or does not exist", d.Get("vins_id").(int))
}
}
req := vins.StaticRouteAddRequest{
VINSID: uint64(d.Get("vins_id").(int)),
Destination: d.Get("destination").(string),
Netmask: d.Get("netmask").(string),
Gateway: d.Get("gateway").(string),
}
_, err := c.CloudAPI().VINS().StaticRouteAdd(ctx, req)
if err != nil {
return diag.FromErr(err)
}
staticRouteData, err := getStaticRouteData(ctx, d, m)
if err != nil {
d.SetId("")
return diag.FromErr(err)
}
d.SetId(fmt.Sprintf("%d#%d", req.VINSID, staticRouteData.ID))
return resourceStaticRouteRead(ctx, d, m)
}
func resourceStaticRouteRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
warnings := dc.Warnings{}
staticRouteData, err := utilityDataStaticRouteCheckPresence(ctx, d, m)
if err != nil {
d.SetId("")
return diag.FromErr(err)
}
flattenStaticRouteData(d, staticRouteData)
return warnings.Get()
}
func resourceStaticRouteUpdate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
return nil
}
func resourceStaticRouteDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
c := m.(*controller.ControllerCfg)
arr := strings.Split(d.Id(), "#")
if len(arr) != 2 {
return diag.FromErr(fmt.Errorf("broken state id"))
}
vinsId, _ := strconv.ParseUint(arr[0], 10, 64)
routeId, _ := strconv.ParseUint(arr[1], 10, 64)
req := vins.StaticRouteDelRequest{
VINSID: vinsId,
RouteId: routeId,
}
_, err := c.CloudAPI().VINS().StaticRouteDel(ctx, req)
if err != nil {
return diag.FromErr(err)
}
d.SetId("")
return nil
}
func resourceStaticRouteSchemaMake() map[string]*schema.Schema {
rets := dataSourceStaticRouteSchemaMake()
rets["route_id"] = &schema.Schema{
Type: schema.TypeInt,
Computed: true,
Optional: true,
}
rets["compute_ids"] = &schema.Schema{
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
}
rets["destination"] = &schema.Schema{
Type: schema.TypeString,
Required: true,
}
rets["gateway"] = &schema.Schema{
Type: schema.TypeString,
Required: true,
}
rets["netmask"] = &schema.Schema{
Type: schema.TypeString,
Required: true,
}
return rets
}
func isContainsIds(els []interface{}, el interface{}) bool {
convEl := el.(int)
for _, elOld := range els {
if convEl == elOld.(int) {
return true
}
}
return false
}
func ResourceStaticRoute() *schema.Resource {
return &schema.Resource{
SchemaVersion: 1,
CreateContext: resourceStaticRouteCreate,
ReadContext: resourceStaticRouteRead,
UpdateContext: resourceStaticRouteUpdate,
DeleteContext: resourceStaticRouteDelete,
Importer: &schema.ResourceImporter{
StateContext: schema.ImportStatePassthroughContext,
},
Timeouts: &schema.ResourceTimeout{
Create: &constants.Timeout20m,
Read: &constants.Timeout600s,
Update: &constants.Timeout20m,
Delete: &constants.Timeout600s,
Default: &constants.Timeout600s,
},
Schema: resourceStaticRouteSchemaMake(),
}
}
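// Editor's note (not part of the diff): a minimal sketch of how this resource
// encodes and decodes its state ID as "<vins_id>#<route_id>", matching the
// fmt.Sprintf in Create and the strings.Split in Delete above. Assumes the
// "fmt", "strings" and "strconv" imports already present in this file.
func exampleStaticRouteID() (uint64, uint64) {
    id := fmt.Sprintf("%d#%d", 777, 12) // what d.SetId stores: "777#12"
    parts := strings.Split(id, "#")
    vinsID, _ := strconv.ParseUint(parts[0], 10, 64)  // 777
    routeID, _ := strconv.ParseUint(parts[1], 10, 64) // 12
    return vinsID, routeID
}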

View File

@@ -706,7 +706,7 @@ func extNetSchemaMake() map[string]*schema.Schema {
Optional: true,
},
"ext_net_ip": {
Type: schema.TypeInt,
Type: schema.TypeString,
Optional: true,
Default: "",
},

View File

@@ -0,0 +1,71 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>
Tim Tkachev, <tvtkachev@basistech.ru>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.
Source code: https://repository.basistech.ru/BASIS/terraform-provider-decort
Please see README.md to learn where to place source code so that it
builds seamlessly.
Documentation: https://repository.basistech.ru/BASIS/terraform-provider-decort/wiki
*/
package extnet
import (
"context"
"github.com/google/uuid"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/constants"
)
func dataSourceExtnetReservedIpRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
reservedList, err := utilityExtnetReservedIpCheckPresence(ctx, d, m)
if err != nil {
return diag.FromErr(err)
}
id := uuid.New()
d.SetId(id.String())
d.Set("items", flattenExtnetReservedIp(reservedList))
return nil
}
func DataSourceExtnetReservedIp() *schema.Resource {
return &schema.Resource{
SchemaVersion: 1,
ReadContext: dataSourceExtnetReservedIpRead,
Timeouts: &schema.ResourceTimeout{
Read: &constants.Timeout30s,
Default: &constants.Timeout60s,
},
Schema: dataSourceExtnetReservedIpSchemaMake(),
}
}

View File

@@ -133,7 +133,6 @@ func flattenRecordExtnetResource(d *schema.ResourceData, recNet *extnet.RecordEx
d.Set("routes", flattenStaticRouteList(staticRouteList))
}
func flattenExtnetExcluded(ers extnet.ListReservations) []map[string]interface{} {
res := make([]map[string]interface{}, 0)
for _, er := range ers {
@@ -157,6 +156,7 @@ func flattenExtnetReservations(ers extnet.ListReservations) []map[string]interfa
res := make([]map[string]interface{}, 0)
for _, er := range ers {
temp := map[string]interface{}{
"account_id": er.AccountID,
"client_type": er.ClientType,
"domain_name": er.DomainName,
"hostname": er.Hostname,
@@ -217,4 +217,30 @@ func flattenStaticRouteData(d *schema.ResourceData, route *extnet.ItemRoutes) {
d.Set("netmask", route.Netmask)
d.Set("compute_ids", route.ComputeIds)
d.Set("route_id", route.ID)
}
}
func flattenExtnetReservedIp(el []extnet.RecordReservedIP) []map[string]interface{} {
res := make([]map[string]interface{}, 0, len(el))
for _, e := range el {
reservations := make([]map[string]interface{}, 0, len(e.Reservations))
for _, r := range e.Reservations {
temp := map[string]interface{}{
"account_id": r.AccountID,
"client_type": r.ClientType,
"domain_name": r.DomainName,
"hostname": r.Hostname,
"ip": r.IP,
"mac": r.Mac,
"type": r.Type,
"vm_id": r.VMID,
}
reservations = append(reservations, temp)
}
item := map[string]interface{}{
"extnet_id": e.ExtnetID,
"reservations": reservations,
}
res = append(res, item)
}
return res
}
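// Editor's note (not part of the diff): an illustrative, hypothetical value of
// the structure produced by flattenExtnetReservedIp for a single extnet with
// one reservation; all field values below are invented for the example.
var exampleReservedIPItems = []map[string]interface{}{
    {
        "extnet_id": 42,
        "reservations": []map[string]interface{}{
            {
                "account_id":  7,
                "client_type": "compute",
                "domain_name": "",
                "hostname":    "",
                "ip":          "185.12.0.10",
                "mac":         "52:54:00:aa:bb:cc",
                "type":        "DHCP",
                "vm_id":       1001,
            },
        },
    },
}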

View File

@@ -54,6 +54,9 @@ func resourceExtnetCreate(ctx context.Context, d *schema.ResourceData, m interfa
if err := ic.ExistGID(ctx, uint64(d.Get("gid").(int)), c); err != nil {
return diag.FromErr(err)
}
if err := checkReserveIp(ctx, d, c); err != nil {
return diag.FromErr(err)
}
req := extnet.CreateRequest{
Name: d.Get("name").(string),
@@ -191,6 +194,34 @@ func resourceExtnetCreate(ctx context.Context, d *schema.ResourceData, m interfa
}
}
// for reserve IP extnet must be enabled
if d.Get("reserved_ip").(*schema.Set).Len() > 0 {
for _, reservedIP := range d.Get("reserved_ip").(*schema.Set).List() {
reservedIPMap := reservedIP.(map[string]interface{})
req := extnet.AddReserveIPRequest{
AccountID: uint64(reservedIPMap["account_id"].(int)),
ExtNetID: netID,
}
if ipCount, ok := reservedIPMap["ip_count"]; ok {
req.IPCount = uint64(ipCount.(int))
}
if reservedIPMap["ips"].(*schema.Set).Len() > 0 {
ips := reservedIPMap["ips"].(*schema.Set).List()
for i, ip := range ips {
if i >= int(req.IPCount) {
break
}
req.IPs = append(req.IPs, ip.(string))
}
}
_, err := c.CloudBroker().ExtNet().AddReserveIP(ctx, req)
if err != nil {
w.Add(err)
}
}
}
return resourceExtnetRead(ctx, d, m)
}
@@ -215,6 +246,10 @@ func resourceExtnetUpdate(ctx context.Context, d *schema.ResourceData, m interfa
log.Debugf("cloudbroker: resourceExtnetUpdate called with id %s", d.Id())
c := m.(*controller.ControllerCfg)
if err := checkReserveIp(ctx, d, c); err != nil {
return diag.FromErr(err)
}
recNet, err := utilityExtnetCheckPresence(ctx, d, m)
if err != nil {
d.SetId("")
@@ -267,6 +302,12 @@ func resourceExtnetUpdate(ctx context.Context, d *schema.ResourceData, m interfa
}
}
if d.HasChange("reserved_ip") {
if err := reservedIPsUpdate(ctx, d, c, recNet); err != nil {
return diag.FromErr(err)
}
}
if d.HasChange("shared_with") {
if err := handleSharedWithUpdate(ctx, d, c); err != nil {
return diag.FromErr(err)
@@ -330,6 +371,8 @@ func ResourceExtnetCB() *schema.Resource {
Default: &constants.Timeout300s,
},
CustomizeDiff: validateReserveIPs,
Schema: resourceExtnetSchemaMake(),
}
}

View File

@@ -1,6 +1,9 @@
package extnet
import "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/validation"
)
func dataSourceExtnetDefaultSchemaMake() map[string]*schema.Schema {
return map[string]*schema.Schema{
@@ -474,6 +477,10 @@ func dataSourceExtnetSchemaMake() map[string]*schema.Schema {
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"account_id": {
Type: schema.TypeInt,
Computed: true,
},
"client_type": {
Type: schema.TypeString,
Computed: true,
@@ -728,6 +735,30 @@ func resourceExtnetSchemaMake() map[string]*schema.Schema {
Type: schema.TypeInt,
},
},
"reserved_ip": {
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"account_id": {
Type: schema.TypeInt,
Required: true,
},
"ip_count": {
Type: schema.TypeInt,
Optional: true,
ValidateFunc: validation.IntBetween(1, 255),
},
"ips": {
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
},
},
},
"ckey": {
Type: schema.TypeString,
Computed: true,
@@ -868,6 +899,10 @@ func resourceExtnetSchemaMake() map[string]*schema.Schema {
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"account_id": {
Type: schema.TypeInt,
Computed: true,
},
"client_type": {
Type: schema.TypeString,
Computed: true,
@@ -905,3 +940,69 @@ func resourceExtnetSchemaMake() map[string]*schema.Schema {
},
}
}
func dataSourceExtnetReservedIpSchemaMake() map[string]*schema.Schema {
res := map[string]*schema.Schema{
"account_id": {
Type: schema.TypeInt,
Required: true,
},
"extnet_id": {
Type: schema.TypeInt,
Optional: true,
},
"items": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"extnet_id": {
Type: schema.TypeInt,
Computed: true,
},
"reservations": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"account_id": {
Type: schema.TypeInt,
Computed: true,
},
"client_type": {
Type: schema.TypeString,
Computed: true,
},
"domain_name": {
Type: schema.TypeString,
Computed: true,
},
"hostname": {
Type: schema.TypeString,
Computed: true,
},
"ip": {
Type: schema.TypeString,
Computed: true,
},
"mac": {
Type: schema.TypeString,
Computed: true,
},
"type": {
Type: schema.TypeString,
Computed: true,
},
"vm_id": {
Type: schema.TypeInt,
Computed: true,
},
},
},
},
},
},
},
}
return res
}

View File

@@ -0,0 +1,61 @@
/*
Copyright (c) 2019-2022 Digital Energy Cloud Solutions LLC. All Rights Reserved.
Authors:
Petr Krutov, <petr.krutov@digitalenergy.online>
Stanislav Solovev, <spsolovev@digitalenergy.online>
Kasim Baybikov, <kmbaybikov@basistech.ru>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Terraform DECORT provider - manage resources provided by DECORT (Digital Energy Cloud
Orchestration Technology) with Terraform by Hashicorp.
Source code: https://repository.basistech.ru/BASIS/terraform-provider-decort
Please see README.md to learn where to place source code so that it
builds seamlessly.
Documentation: https://repository.basistech.ru/BASIS/terraform-provider-decort/wiki
*/
package extnet
import (
"context"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
log "github.com/sirupsen/logrus"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudbroker/extnet"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/controller"
)
func utilityExtnetReservedIpCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) ([]extnet.RecordReservedIP, error) {
c := m.(*controller.ControllerCfg)
req := extnet.GetReservedIP{
AccountID: uint64(d.Get("account_id").(int)),
}
if extNetID, ok := d.GetOk("extnet_id"); ok {
req.ExtNetID = uint64(extNetID.(int))
}
log.Debugf("utilityExtnetReservedIpCheckPresence")
res, err := c.CloudBroker().ExtNet().GetReservedIP(ctx, req)
if err != nil {
return nil, err
}
return res, nil
}

View File

@@ -35,6 +35,8 @@ package extnet
import (
"context"
"errors"
"fmt"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
log "github.com/sirupsen/logrus"
@@ -341,3 +343,255 @@ func handleMigrateUpdate(ctx context.Context, d *schema.ResourceData, c *control
return nil
}
func checkReserveIp(ctx context.Context, d *schema.ResourceData, c *controller.ControllerCfg) error {
var err error
if d.Get("reserved_ip").(*schema.Set).Len() > 0 {
reservedIPList := d.Get("reserved_ip").(*schema.Set).List()
accountMap := make(map[int]struct{}, len(reservedIPList))
for _, reservedIP := range reservedIPList {
reservedIPMap := reservedIP.(map[string]interface{})
accountId := reservedIPMap["account_id"].(int)
if _, ok := accountMap[accountId]; ok {
err = errors.Join(err, fmt.Errorf("checkReserveIp: you must have only one block with id %d", accountId))
}
accountMap[accountId] = struct{}{}
_, okCount := reservedIPMap["ip_count"]
if !okCount && reservedIPMap["ips"].(*schema.Set).Len() == 0 {
err = errors.Join(err, fmt.Errorf("checkReserveIp: either ip_count or set of ips must be specified"))
}
existErr := ic.ExistAccount(ctx, uint64(accountId), c)
if existErr != nil {
err = errors.Join(err, existErr)
}
}
}
return err
}
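// Editor's note (not part of the diff): checkReserveIp relies on the
// errors.Join accumulation pattern (Go 1.20+) so that every problem found in
// the reserved_ip blocks is reported at once instead of failing on the first.
// A minimal sketch using the "errors" and "fmt" imports of this file:
func exampleJoinValidation(problems []string) error {
    var errs error
    for _, p := range problems {
        errs = errors.Join(errs, fmt.Errorf("checkReserveIp: %s", p))
    }
    return errs // nil when problems is empty
}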
func reservedIPsUpdate(ctx context.Context, d *schema.ResourceData, c *controller.ControllerCfg, recNet *extnet.RecordExtNet) error {
addSet, delSet, err := differenceIPReserved(ctx, d, c)
if err != nil {
return err
}
if len(delSet) > 0 {
for _, del := range delSet {
delMap := del.(map[string]interface{})
log.Debugf("reservedIPsUpdate: removing reserved IPs for account %d", delMap["account_id"].(int))
req := extnet.DelReserveIPRequest{
AccountID: uint64(delMap["account_id"].(int)),
ExtNetID: recNet.ID,
}
if ipCount, ok := delMap["ip_count"]; ok {
req.IPCount = uint64(ipCount.(int))
}
if delIPs, ok := delMap["ips"]; ok {
ips := delIPs.(*schema.Set).List()
for _, ip := range ips {
req.IPs = append(req.IPs, ip.(string))
}
}
_, err := c.CloudBroker().ExtNet().DelReserveIP(ctx, req)
if err != nil {
return err
}
}
}
if len(addSet) > 0 {
for _, add := range addSet {
addMap := add.(map[string]interface{})
log.Debugf("reservedIPsUpdate: add reserved IPs for account %d", addMap["account_id"].(int))
req := extnet.AddReserveIPRequest{
AccountID: uint64(addMap["account_id"].(int)),
ExtNetID: recNet.ID,
}
if ipCount, ok := addMap["ip_count"]; ok {
req.IPCount = uint64(ipCount.(int))
}
if addIPs, ok := addMap["ips"]; ok {
ips := addIPs.(*schema.Set).List()
for _, ip := range ips {
req.IPs = append(req.IPs, ip.(string))
}
}
_, err := c.CloudBroker().ExtNet().AddReserveIP(ctx, req)
if err != nil {
return err
}
}
}
return nil
}
func differenceIPReserved(ctx context.Context, d *schema.ResourceData, c *controller.ControllerCfg) (addList, delList []interface{}, errs error) {
addList = make([]interface{}, 0)
delList = make([]interface{}, 0)
oldSet, newSet := d.GetChange("reserved_ip")
oldList := oldSet.(*schema.Set).List()
newList := newSet.(*schema.Set).List()
for _, oldReservedIP := range oldList {
oldMap := oldReservedIP.(map[string]interface{})
found := false
for _, newReservedIP := range newList {
newMap := newReservedIP.(map[string]interface{})
if newMap["account_id"] == oldMap["account_id"] && newMap["ip_count"] == oldMap["ip_count"] && newMap["ips"] == oldMap["ips"] {
found = true
break
}
if newMap["account_id"] == oldMap["account_id"] {
add := make(map[string]interface{}, 0)
del := make(map[string]interface{}, 0)
delSet := oldMap["ips"].(*schema.Set).Difference(newMap["ips"].(*schema.Set))
addSet := newMap["ips"].(*schema.Set).Difference(oldMap["ips"].(*schema.Set))
if delSet.Len() > 0 {
del["account_id"] = oldMap["account_id"].(int)
del["ips"] = delSet
if oldMap["ip_count"].(int)-newMap["ip_count"].(int) >= delSet.Len() {
del["ip_count"] = oldMap["ip_count"].(int) - newMap["ip_count"].(int)
} else {
del["ip_count"] = delSet.Len()
newMap["ip_count"] = newMap["ip_count"].(int) + delSet.Len()
}
} else if newMap["ip_count"].(int) < oldMap["ip_count"].(int) {
del["account_id"] = oldMap["account_id"].(int)
del["ip_count"] = oldMap["ip_count"].(int) - newMap["ip_count"].(int)
}
if addSet.Len() > 0 {
add["account_id"] = oldMap["account_id"].(int)
add["ips"] = addSet
add["ip_count"] = newMap["ip_count"].(int) - oldMap["ip_count"].(int)
if add["ip_count"].(int) < 0 {
add["ip_count"] = 0
}
if add["ip_count"].(int)-addSet.Len() < 0 {
del["account_id"] = oldMap["account_id"].(int)
if _, ok := del["ip_count"]; ok {
del["ip_count"] = del["ip_count"].(int) + addSet.Len()
} else {
del["ip_count"] = addSet.Len()
}
}
} else if newMap["ip_count"].(int)-oldMap["ip_count"].(int) > 0 {
add["account_id"] = oldMap["account_id"].(int)
add["ip_count"] = newMap["ip_count"].(int) - oldMap["ip_count"].(int)
}
if _, ok := add["account_id"]; ok {
addList = append(addList, add)
}
if _, ok := del["account_id"]; ok {
ipsLen := 0
if _, ok := del["ips"]; ok {
ipsLen = del["ips"].(*schema.Set).Len()
}
freeCount := del["ip_count"].(int) - ipsLen
if freeCount > 0 {
req := extnet.GetReservedIP{
AccountID: uint64(del["account_id"].(int)),
ExtNetID: uint64(d.Get("extnet_id").(int)),
}
resIpsList, err := c.CloudBroker().ExtNet().GetReservedIP(ctx, req)
if err != nil {
errs = errors.Join(errs, err)
}
freeIPs := getFreeIps(resIpsList[0].Reservations, newMap["ips"], del["ips"])
if _, ok := del["ips"]; !ok {
del["ips"] = schema.NewSet(schema.HashString, []interface{}{})
}
for i := 0; i < freeCount; i++ {
del["ips"].(*schema.Set).Add(freeIPs[i])
}
}
delList = append(delList, del)
}
found = true
break
}
}
if found {
continue
}
delList = append(delList, oldReservedIP)
}
for _, newReservedIP := range newList {
newMap := newReservedIP.(map[string]interface{})
found := false
for _, oldReservedIP := range oldList {
oldMap := oldReservedIP.(map[string]interface{})
if newMap["account_id"] == oldMap["account_id"] {
found = true
break
}
}
if found {
continue
}
addList = append(addList, newReservedIP)
}
if errs != nil {
d.Set("reserved_ip", oldSet)
}
return
}
// getFreeIps returns the reserved IP addresses that can be released
func getFreeIps(reserved []extnet.Reservations, newIps, delIps interface{}) []string {
newIpsList := make([]interface{}, 0)
delIpsList := make([]interface{}, 0)
if newIps != nil {
newIpsList = newIps.(*schema.Set).List()
}
if delIps != nil {
delIpsList = delIps.(*schema.Set).List()
}
newIpsMap := make(map[string]struct{}, len(newIpsList))
delIpsMap := make(map[string]struct{}, len(delIpsList))
freeIPs := make([]string, 0)
for _, ip := range newIpsList {
newIpsMap[ip.(string)] = struct{}{}
}
for _, ip := range delIpsList {
delIpsMap[ip.(string)] = struct{}{}
}
for _, item := range reserved {
if _, ok := newIpsMap[item.IP]; ok {
continue
}
if _, ok := delIpsMap[item.IP]; ok {
continue
}
freeIPs = append(freeIPs, item.IP)
}
return freeIPs
}
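// Editor's note (not part of the diff): a hedged sketch of getFreeIps. Only
// the IP field of extnet.Reservations is relied on here (as in the function
// above); the addresses are invented for illustration.
func exampleGetFreeIps() []string {
    reserved := []extnet.Reservations{{IP: "185.12.0.10"}, {IP: "185.12.0.11"}, {IP: "185.12.0.12"}}
    keep := schema.NewSet(schema.HashString, []interface{}{"185.12.0.10"})   // stays in the new plan
    queued := schema.NewSet(schema.HashString, []interface{}{"185.12.0.11"}) // already queued for deletion
    return getFreeIps(reserved, keep, queued) // -> []string{"185.12.0.12"}
}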
func validateReserveIPs(ctx context.Context, d *schema.ResourceDiff, m interface{}) error {
list := d.Get("reserved_ip").(*schema.Set).List()
var errs error
for _, reservedIP := range list {
reservedIPMap := reservedIP.(map[string]interface{})
var countIP, ipsLen int
if _, ok := reservedIPMap["ip_count"]; ok {
countIP = reservedIPMap["ip_count"].(int)
}
if _, ok := reservedIPMap["ips"]; ok {
ipsLen = reservedIPMap["ips"].(*schema.Set).Len()
}
if ipsLen > countIP {
errs = errors.Join(errs, fmt.Errorf("for the reserved_ip block with account_id %d the count parameter must be greater than or equal to len the ips array", reservedIPMap["account_id"].(int)))
}
}
return errs
}

View File

@@ -34,21 +34,45 @@ package grid
import (
"context"
"strconv"
"os"
"github.com/google/uuid"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
log "github.com/sirupsen/logrus"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/constants"
)
func dataSourceGridGetDiagnosisRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
filePath := "diagnosis.tar.gz"
if userPath, ok := d.GetOk("file_path"); ok {
filePath = userPath.(string)
}
log.Debugf("dataSourceGridGetDiagnosisRead: create file with name: %s", filePath)
file, err := os.Create(filePath)
if err != nil {
d.SetId("") // ensure ID is empty in this case
return diag.FromErr(err)
}
defer file.Close()
diagnosis, err := utilityGridGetDiagnosisCheckPresence(ctx, d, m)
if err != nil {
d.SetId("")
return diag.FromErr(err)
}
d.SetId(strconv.Itoa(d.Get("gid").(int)))
d.Set("diagnosis", diagnosis)
log.Debugf("dataSourceGridGetDiagnosisRead: write data to file with name: %s", filePath)
_, err = file.WriteString(diagnosis)
if err != nil {
d.SetId("") // ensure ID is empty in this case
return diag.FromErr(err)
}
id := uuid.New()
d.SetId(id.String())
return nil
}
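// Editor's note (not part of the diff): with this change the diagnosis archive
// is no longer kept in state; it is written to file_path (default
// "diagnosis.tar.gz"). A hedged sketch of reading it back, assuming only the
// standard "os" import already used in this file:
func exampleReadDiagnosis(filePath string) (int, error) {
    data, err := os.ReadFile(filePath)
    if err != nil {
        return 0, err
    }
    return len(data), nil // archive size in bytes
}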
@@ -66,29 +90,3 @@ func DataSourceGridGetDiagnosis() *schema.Resource {
Schema: dataSourceGridGetDiagnosisSchemaMake(),
}
}
func dataSourceGridPostDiagnosisRead(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
diagnosis, err := utilityGridPostDiagnosisCheckPresence(ctx, d, m)
if err != nil {
d.SetId("")
return diag.FromErr(err)
}
d.SetId(strconv.Itoa(d.Get("gid").(int)))
d.Set("diagnosis", diagnosis)
return nil
}
func DataSourceGridPostDiagnosis() *schema.Resource {
return &schema.Resource{
SchemaVersion: 1,
ReadContext: dataSourceGridPostDiagnosisRead,
Timeouts: &schema.ResourceTimeout{
Read: &constants.Timeout30s,
Default: &constants.Timeout60s,
},
Schema: dataSourceGridPostDiagnosisSchemaMake(),
}
}

View File

@@ -586,23 +586,10 @@ func dataSourceGridGetDiagnosisSchemaMake() map[string]*schema.Schema {
Type: schema.TypeInt,
Required: true,
},
"diagnosis": {
"file_path": {
Type: schema.TypeString,
Computed: true,
},
}
}
func dataSourceGridPostDiagnosisSchemaMake() map[string]*schema.Schema {
return map[string]*schema.Schema{
"gid": {
Type: schema.TypeInt,
Required: true,
},
"diagnosis": {
Type: schema.TypeString,
Computed: true,
},
}
}

View File

@@ -62,23 +62,3 @@ func utilityGridGetDiagnosisCheckPresence(ctx context.Context, d *schema.Resourc
return gridGetDiagnosis, nil
}
func utilityGridPostDiagnosisCheckPresence(ctx context.Context, d *schema.ResourceData, m interface{}) (string, error) {
c := m.(*controller.ControllerCfg)
req := grid.GetDiagnosisRequest{}
if d.Id() != "" {
id, _ := strconv.ParseUint(d.Id(), 10, 64)
req.GID = id
} else {
req.GID = uint64(d.Get("gid").(int))
}
log.Debugf("utilityGridPostDiagnosisCheckPresence: load grid post diagnosis")
gridPostDiagnosis, err := c.CloudBroker().Grid().GetDiagnosis(ctx, req)
if err != nil {
return "", err
}
return gridPostDiagnosis, nil
}

View File

@@ -48,6 +48,7 @@ func flattenImage(d *schema.ResourceData, img *image.RecordImage) {
d.Set("sep_id", img.SEPID)
d.Set("shared_with", img.SharedWith)
d.Set("size", img.Size)
d.Set("snapshot_id", img.SnapshotID)
d.Set("status", img.Status)
d.Set("tech_status", img.TechStatus)
d.Set("image_type", img.Type)
@@ -92,47 +93,48 @@ func flattenImageList(il *image.ListImages) []map[string]interface{} {
cdPresentedTo, _ := json.Marshal(item.CdPresentedTo)
temp := map[string]interface{}{
"image_id": item.ID,
"unc_path": item.UNCPath,
"account_id": item.AccountID,
"acl": flattenAcl(item.ACL),
"architecture": item.Architecture,
"boot_type": item.BootType,
"bootable": item.Bootable,
"computeci_id": item.ComputeCIID,
"cd_presented_to": string(cdPresentedTo),
"deleted_time": item.DeletedTime,
"desc": item.Description,
"drivers": item.Drivers,
"enabled": item.Enabled,
"gid": item.GID,
"guid": item.GUID,
"history": flattenHistory(item.History),
"hot_resize": item.HotResize,
"last_modified": item.LastModified,
"link_to": item.LinkTo,
"milestones": item.Milestones,
"name": item.Name,
"image_id": item.ID,
"unc_path": item.UNCPath,
"account_id": item.AccountID,
"acl": flattenAcl(item.ACL),
"architecture": item.Architecture,
"boot_type": item.BootType,
"bootable": item.Bootable,
"computeci_id": item.ComputeCIID,
"cd_presented_to": string(cdPresentedTo),
"deleted_time": item.DeletedTime,
"desc": item.Description,
"drivers": item.Drivers,
"enabled": item.Enabled,
"gid": item.GID,
"guid": item.GUID,
"history": flattenHistory(item.History),
"hot_resize": item.HotResize,
"last_modified": item.LastModified,
"link_to": item.LinkTo,
"milestones": item.Milestones,
"name": item.Name,
"network_interface_naming": item.NetworkInterfaceNaming,
"password": item.Password,
"pool_name": item.Pool,
"present_to": item.PresentTo,
"provider_name": item.ProviderName,
"purge_attempts": item.PurgeAttempts,
"reference_id": item.ReferenceID,
"res_id": item.ResID,
"res_name": item.ResName,
"rescuecd": item.RescueCD,
"sep_id": item.SEPID,
"shared_with": item.SharedWith,
"size": item.Size,
"status": item.Status,
"tech_status": item.TechStatus,
"image_type": item.Type,
"url": item.URL,
"username": item.Username,
"version": item.Version,
"virtual": item.Virtual,
"password": item.Password,
"pool_name": item.Pool,
"present_to": item.PresentTo,
"provider_name": item.ProviderName,
"purge_attempts": item.PurgeAttempts,
"reference_id": item.ReferenceID,
"res_id": item.ResID,
"res_name": item.ResName,
"rescuecd": item.RescueCD,
"sep_id": item.SEPID,
"shared_with": item.SharedWith,
"size": item.Size,
"snapshot_id": item.SnapshotID,
"status": item.Status,
"tech_status": item.TechStatus,
"image_type": item.Type,
"url": item.URL,
"username": item.Username,
"version": item.Version,
"virtual": item.Virtual,
}
res = append(res, temp)
}

View File

@@ -622,6 +622,11 @@ func dataSourceImageListSchemaMake() map[string]*schema.Schema {
Computed: true,
Description: "image size",
},
"snapshot_id": {
Type: schema.TypeString,
Computed: true,
Description: "snapshot id",
},
"status": {
Type: schema.TypeString,
Computed: true,
@@ -875,6 +880,11 @@ func dataSourceImageSchemaMake() map[string]*schema.Schema {
Computed: true,
Description: "image size",
},
"snapshot_id": {
Type: schema.TypeString,
Computed: true,
Description: "snapshot id",
},
"status": {
Type: schema.TypeString,
Computed: true,
@@ -1124,6 +1134,11 @@ func resourceCDROMImageSchemaMake() map[string]*schema.Schema {
Type: schema.TypeInt,
},
},
"snapshot_id": {
Type: schema.TypeString,
Computed: true,
Description: "snapshot id",
},
"status": {
Type: schema.TypeString,
Computed: true,
@@ -1415,6 +1430,11 @@ func resourceImageSchemaMake() map[string]*schema.Schema {
Type: schema.TypeInt,
},
},
"snapshot_id": {
Type: schema.TypeString,
Computed: true,
Description: "snapshot id",
},
"status": {
Type: schema.TypeString,
Computed: true,
@@ -1689,6 +1709,11 @@ func resourceVirtualImageSchemaMake() map[string]*schema.Schema {
Computed: true,
Description: "image size",
},
"snapshot_id": {
Type: schema.TypeString,
Computed: true,
Description: "snapshot id",
},
"status": {
Type: schema.TypeString,
Computed: true,
@@ -1967,6 +1992,11 @@ func resourceImageFromBlankComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeInt,
Computed: true,
},
"snapshot_id": {
Type: schema.TypeString,
Computed: true,
Description: "snapshot id",
},
"status": {
Type: schema.TypeString,
Computed: true,
@@ -2241,6 +2271,11 @@ func resourceImageFromPlatformDiskSchemaMake() map[string]*schema.Schema {
Type: schema.TypeInt,
Computed: true,
},
"snapshot_id": {
Type: schema.TypeString,
Computed: true,
Description: "snapshot id",
},
"status": {
Type: schema.TypeString,
Computed: true,

View File

@@ -34,6 +34,7 @@ func flattenCompute(d *schema.ResourceData, computeRec *compute.RecordCompute, p
d.Set("affinity_rules", flattenAffinityRules(computeRec.AffinityRules))
d.Set("anti_affinity_rules", flattenAffinityRules(computeRec.AntiAffinityRules))
d.Set("arch", computeRec.Arch)
d.Set("auto_start_w_node", computeRec.AutoStart)
d.Set("boot_order", computeRec.BootOrder)
d.Set("boot_disk_id", bootDisk.ID)
// we intentionally use the SizeMax field, do not change it until the BootDiskSize field is fixed on the platform
@@ -75,6 +76,7 @@ func flattenCompute(d *schema.ResourceData, computeRec *compute.RecordCompute, p
d.Set("numa_node_id", computeRec.NumaNodeId)
d.Set("os_users", flattenOSUsers(computeRec.OSUsers))
d.Set("pinned", computeRec.Pinned)
d.Set("preferred_cpu", computeRec.PreferredCPU)
d.Set("reference_id", computeRec.ReferenceID)
d.Set("registered", computeRec.Registered)
d.Set("res_name", computeRec.ResName)
@@ -92,6 +94,7 @@ func flattenCompute(d *schema.ResourceData, computeRec *compute.RecordCompute, p
d.Set("updated_time", computeRec.UpdatedTime)
d.Set("user_data", string(userData))
d.Set("user_managed", computeRec.UserManaged)
d.Set("vnc_password", computeRec.VNCPassword)
d.Set("vgpus", computeRec.VGPUs)
d.Set("virtual_image_id", computeRec.VirtualImageID)
d.Set("virtual_image_name", computeRec.VirtualImageName)
@@ -294,6 +297,7 @@ func flattenComputeList(computes *compute.ListComputes) []map[string]interface{}
"affinity_weight": computeItem.AffinityWeight,
"anti_affinity_rules": flattenListRules(computeItem.AntiAffinityRules),
"arch": computeItem.Arch,
"auto_start_w_node": computeItem.AutoStart,
"chipset": computeItem.Chipset,
"cd_image_id": computeItem.CdImageId,
"boot_order": computeItem.BootOrder,
@@ -329,6 +333,7 @@ func flattenComputeList(computes *compute.ListComputes) []map[string]interface{}
"numa_node_id": computeItem.NumaNodeId,
"os_users": flattenOSUsers(computeItem.OSUsers),
"pinned": computeItem.Pinned,
"preferred_cpu": computeItem.PreferredCPU,
"ram": computeItem.RAM,
"reference_id": computeItem.ReferenceID,
"registered": computeItem.Registered,
@@ -613,6 +618,7 @@ func flattenDataCompute(d *schema.ResourceData, compFacts *compute.RecordCompute
d.Set("affinity_weight", compFacts.AffinityWeight)
d.Set("anti_affinity_rules", flattenAffinityRules(compFacts.AntiAffinityRules))
d.Set("arch", compFacts.Arch)
d.Set("auto_start_w_node", compFacts.AutoStart)
d.Set("boot_order", compFacts.BootOrder)
d.Set("chipset", compFacts.Chipset)
d.Set("cd_image_id", compFacts.CdImageId)
@@ -652,6 +658,7 @@ func flattenDataCompute(d *schema.ResourceData, compFacts *compute.RecordCompute
d.Set("numa_node_id", compFacts.NumaNodeId)
d.Set("os_users", flattenOSUsers(compFacts.OSUsers))
d.Set("pinned", compFacts.Pinned)
d.Set("preferred_cpu", compFacts.PreferredCPU)
d.Set("ram", compFacts.RAM)
d.Set("reference_id", compFacts.ReferenceID)
d.Set("registered", compFacts.Registered)
@@ -671,6 +678,7 @@ func flattenDataCompute(d *schema.ResourceData, compFacts *compute.RecordCompute
d.Set("updated_time", compFacts.UpdatedTime)
d.Set("user_data", string(userData))
d.Set("user_managed", compFacts.UserManaged)
d.Set("vnc_password", compFacts.VNCPassword)
d.Set("vgpus", compFacts.VGPUs)
d.Set("virtual_image_id", compFacts.VirtualImageID)
d.Set("virtual_image_name", compFacts.VirtualImageName)
@@ -695,7 +703,6 @@ func parseComputeInterfacesToNetworks(networks []interface{}, ifaces compute.Lis
log.Debugf("parseComputeInterfacesToNetworks: called for %d ifaces", length)
result := []interface{}{}
for _, value := range ifaces {
elem := make(map[string]interface{})
// Keys in this map should correspond to the Schema definition for "network"
@@ -703,6 +710,7 @@ func parseComputeInterfacesToNetworks(networks []interface{}, ifaces compute.Lis
elem["net_type"] = value.NetType
elem["ip_address"] = value.IPAddress
elem["mac"] = value.MAC
elem["mtu"] = value.MTU
elem["weight"] = flattenNetworkWeight(networks, value.NetID, value.NetType)
result = append(result, elem)

View File

@@ -75,10 +75,6 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
createReqX86.StackID = uint64(stackID.(int))
}
if start, ok := d.GetOk("started"); ok {
createReqX86.Start = start.(bool)
}
if ipaType, ok := d.GetOk("ipa_type"); ok {
createReqX86.IPAType = ipaType.(string)
}
@@ -116,6 +112,10 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
NetID: uint64(netInterfaceVal["net_id"].(int)),
}
if reqInterface.NetType == "DPDK" {
reqInterface.MTU = uint64(netInterfaceVal["mtu"].(int))
}
ipaddr, ipSet := netInterfaceVal["ip_address"]
if ipSet {
reqInterface.IPAddr = ipaddr.(string)
@@ -197,6 +197,16 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
createReqX86.HPBacked = d.Get("hp_backed").(bool)
createReqX86.Chipset = d.Get("chipset").(string)
if preferredCPU, ok := d.GetOk("preferred_cpu"); ok {
preferredList := preferredCPU.([]interface{})
if len(preferredList) > 0 {
for _, v := range preferredList {
cpuNum := v.(int)
createReqX86.PreferredCPU = append(createReqX86.PreferredCPU, int64(cpuNum))
}
}
}
log.Debugf("resourceComputeCreate: creating Compute of type KVM VM x86")
apiResp, err := c.CloudBroker().KVMX86().Create(ctx, createReqX86)
if err != nil {
@@ -208,6 +218,11 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
warnings := dc.Warnings{}
simpleCompRec, err := utilityComputeCheckPresence(ctx, d, m)
if err != nil {
warnings.Add(err)
}
cleanup := false
defer func() {
if cleanup {
@@ -227,6 +242,25 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
log.Debugf("resourceComputeCreate: new simple Compute ID %d, name %s created", computeId, d.Get("name").(string))
if ars, ok := d.GetOk("pci_devices"); ok {
log.Debugf("resourceComputeCreate: add pci devices on ComputeID: %d", computeId)
addedPciDevices := ars.(*schema.Set).List()
if len(addedPciDevices) > 0 {
for _, v := range addedPciDevices {
devicesConv := v.(int)
req := compute.AttachPCIDeviceRequest{
ComputeID: computeId,
DeviceID: uint64(devicesConv),
}
_, err := c.CloudBroker().Compute().AttachPCIDevice(ctx, req)
if err != nil {
warnings.Add(err)
}
}
}
}
extraDisks, ok := d.GetOk("extra_disks")
if ok && extraDisks.(*schema.Set).Len() > 0 {
log.Debugf("resourceComputeCreate: calling utilityComputeExtraDisksConfigure to attach %d extra disk(s)", extraDisks.(*schema.Set).Len())
@@ -255,6 +289,61 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if pin, ok := d.GetOk("pin_to_stack"); ok && pin.(bool) {
req := compute.PinToStackRequest{
ComputeID: computeId,
TargetStackID: uint64(d.Get("stack_id").(int)),
}
if force, ok := d.Get("force_pin").(bool); ok {
req.Force = force
}
if autoStart, ok := d.Get("auto_start_w_node").(bool); ok {
req.AutoStart = autoStart
}
_, err := c.CloudBroker().Compute().PinToStack(ctx, req)
if err != nil {
warnings.Add(err)
}
}
if libvirtSettings, ok := d.GetOk("libvirt_settings"); ok {
if libvirtSettings.(*schema.Set).Len() > 0 {
lvs := libvirtSettings.(*schema.Set).List()
for _, elem := range lvs {
netLibvirtMap := elem.(map[string]interface{})
netType := netLibvirtMap["net_type"].(string)
netId := uint64(netLibvirtMap["net_id"].(int))
var mac string
for _, iface := range simpleCompRec.Interfaces {
if iface.NetID == netId && iface.NetType == netType {
mac = iface.MAC
break
}
}
log.Debugf("resourceComputeCreate: Configure libvirt virtio interface parameters on Network with type %s and id %d", netType, netId)
req := compute.SetNetConfigRequest{
ComputeID: computeId,
MAC: mac,
TXMode: netLibvirtMap["txmode"].(string),
IOEventFD: netLibvirtMap["ioeventfd"].(string),
EventIDx: netLibvirtMap["event_idx"].(string),
Queues: uint64(netLibvirtMap["queues"].(int)),
RXQueueSize: uint64(netLibvirtMap["rx_queue_size"].(int)),
TXQueueSize: uint64(netLibvirtMap["tx_queue_size"].(int)),
}
_, err := c.CloudBroker().Compute().SetNetConfig(ctx, req)
if err != nil {
warnings.Add(err)
}
}
}
}
if start, ok := d.GetOk("started"); ok && start.(bool) {
req := compute.StartRequest{ComputeID: computeId}
log.Debugf("resourceComputeCreate: starting Compute ID %d after completing its resource configuration", computeId)
@@ -434,17 +523,14 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if pin, ok := d.GetOk("pin_to_stack"); ok && pin.(bool) {
req := compute.PinToStackRequest{
ComputeID: computeId,
TargetStackID: uint64(d.Get("target_stack_id").(int)),
if !d.Get("pin_to_stack").(bool) && d.Get("auto_start_w_node").(bool) {
req := compute.UpdateRequest{
ComputeID: computeId,
AutoStart: d.Get("auto_start_w_node").(bool),
CPUPin: d.Get("cpu_pin").(bool),
HPBacked: d.Get("hp_backed").(bool),
}
if force, ok := d.Get("force_pin").(bool); ok {
req.Force = force
}
_, err := c.CloudBroker().Compute().PinToStack(ctx, req)
_, err := c.CloudBroker().Compute().Update(ctx, req)
if err != nil {
warnings.Add(err)
}
@@ -467,49 +553,6 @@ func resourceComputeCreate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if ars, ok := d.GetOk("pci_devices"); ok {
log.Debugf("resourceComputeCreate: add pci devices on ComputeID: %d", computeId)
addedPciDevices := ars.(*schema.Set).List()
if len(addedPciDevices) > 0 {
for _, v := range addedPciDevices {
devicesConv := v.(int)
req := compute.AttachPCIDeviceRequest{
ComputeID: computeId,
DeviceID: uint64(devicesConv),
}
_, err := c.CloudBroker().Compute().AttachPCIDevice(ctx, req)
if err != nil {
warnings.Add(err)
}
}
}
}
if ars, ok := d.GetOk("libvirt_settings"); ok {
log.Debugf("resourceComputeCreate: Configure libvirt virtio interface parameters on ComputeID: %d", computeId)
settings := ars.(*schema.Set).List()
if len(settings) > 0 {
for _, v := range settings {
settingsConv := v.(map[string]interface{})
req := compute.SetNetConfigRequest{
ComputeID: computeId,
MAC: settingsConv["mac"].(string),
TXMode: settingsConv["txmode"].(string),
IOEventFD: settingsConv["ioeventfd"].(string),
EventIDx: settingsConv["event_idx"].(string),
Queues: uint64(settingsConv["queues"].(int)),
RXQueueSize: uint64(settingsConv["rx_queue_size"].(int)),
TXQueueSize: uint64(settingsConv["tx_queue_size"].(int)),
}
_, err := c.CloudBroker().Compute().SetNetConfig(ctx, req)
if err != nil {
warnings.Add(err)
}
}
}
}
}
log.Debugf("resourceComputeCreate: new Compute ID %d, name %s creation sequence complete", computeId, d.Get("name").(string))
@@ -664,19 +707,25 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if d.HasChange("network") {
err = utilityComputeNetworksConfigure(ctx, d, m) // pass do_delta = true to apply changes, if any
if err != nil {
if d.HasChange("pin_to_stack") {
if err := utilityComputePinToStack(ctx, d, m); err != nil {
return diag.FromErr(err)
}
}
if d.HasChanges("description", "name", "numa_affinity", "cpu_pin", "hp_backed") {
if d.HasChanges("description", "name", "numa_affinity", "cpu_pin", "hp_backed", "chipset", "auto_start_w_node", "preferred_cpu") {
if err := utilityComputeUpdate(ctx, d, m); err != nil {
return diag.FromErr(err)
}
}
if d.HasChanges("network", "libvirt_settings") {
err = utilityComputeNetworksConfigure(ctx, d, m) // pass do_delta = true to apply changes, if any
if err != nil {
return diag.FromErr(err)
}
}
if d.HasChange("disks") {
if err := utilityComputeUpdateDisks(ctx, d, m); err != nil {
return diag.FromErr(err)
@@ -737,12 +786,6 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if d.HasChange("pin_to_stack") {
if err := utilityComputePinToStack(ctx, d, m); err != nil {
return diag.FromErr(err)
}
}
if d.HasChange("pause") {
if err := utilityComputePause(ctx, d, m); err != nil {
return diag.FromErr(err)
@@ -773,12 +816,6 @@ func resourceComputeUpdate(ctx context.Context, d *schema.ResourceData, m interf
}
}
if d.HasChange("libvirt_settings") {
if err := utilityComputeUpdateLibvirtSettings(ctx, d, m); err != nil {
return diag.FromErr(err)
}
}
return append(resourceComputeRead(ctx, d, m), warnings.Get()...)
}
@@ -793,6 +830,16 @@ func resourceComputeDelete(ctx context.Context, d *schema.ResourceData, m interf
c := m.(*controller.ControllerCfg)
computeId, _ := strconv.ParseUint(d.Id(), 10, 64)
if start, ok := d.GetOk("started"); ok {
if start.(bool) {
req := compute.StopRequest{ComputeID: computeId}
log.Debugf("resourceComputeDelete: stoping Compute ID %d", computeId)
if _, err := c.CloudBroker().Compute().Stop(ctx, req); err != nil {
// do not drop the stop error silently; deletion itself is still attempted
log.Warnf("resourceComputeDelete: failed to stop Compute ID %d: %s", computeId, err)
}
}
}
pciList, ok := d.GetOk("pci_devices")
if d.Get("permanently").(bool) && ok {
@@ -829,6 +876,23 @@ func ResourceCompute() *schema.Resource {
return &schema.Resource{
SchemaVersion: 1,
CustomizeDiff: func(ctx context.Context, diff *schema.ResourceDiff, i interface{}) error {
if diff.HasChanges() || diff.HasChanges("chipset", "pin_to_stack", "auto_start_w_node", "libvirt_settings", "network", "affinity_rules", "anti_affinity_rules",
"extra_disks", "tags", "port_forwarding", "user_access", "snapshot", "pci_devices", "preferred_cpu") {
diff.SetNewComputed("updated_time")
diff.SetNewComputed("updated_by")
}
if diff.HasChanges("pin_to_stack") {
diff.SetNewComputed("pinned")
}
if diff.HasChanges("started") {
diff.SetNewComputed("tech_status")
diff.SetNewComputed("updated_time")
diff.SetNewComputed("updated_by")
}
return nil
},
CreateContext: resourceComputeCreate,
ReadContext: resourceComputeRead,
UpdateContext: resourceComputeUpdate,
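For reference, a minimal hedged sketch of how the reworked pinning arguments combine in configuration. With this change the target stack is taken from `stack_id` (the former `target_stack_id` argument is gone), `auto_start_w_node` is passed to `PinToStack`, and when `pin_to_stack` is not set the flag is applied through `compute.Update` instead. IDs are placeholders; the required arguments are assumed to match the bundled `decort_cb_kvmvm` examples.

```terraform
resource "decort_cb_kvmvm" "pinned" {
  name           = "tf-pinned-vm"
  driver         = "KVM_X86"
  rg_id          = 1111    # placeholder resource group ID
  cpu            = 2
  ram            = 2048
  boot_disk_size = 10
  image_id       = 1234    # placeholder image ID

  # Pin the compute to its node; the target stack comes from stack_id.
  pin_to_stack      = true
  force_pin         = true
  # Start the compute automatically when its node is restarted.
  auto_start_w_node = true
}
```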

View File

@@ -134,6 +134,10 @@ func dataSourceComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"auto_start_w_node": {
Type: schema.TypeBool,
Computed: true,
},
"boot_order": {
Type: schema.TypeList,
Computed: true,
@@ -788,6 +792,13 @@ func dataSourceComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeBool,
Computed: true,
},
"preferred_cpu": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
},
"ram": {
Type: schema.TypeInt,
Computed: true,
@@ -902,6 +913,10 @@ func dataSourceComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeBool,
Computed: true,
},
"vnc_password": {
Type: schema.TypeString,
Computed: true,
},
"vgpus": {
Type: schema.TypeList,
Computed: true,
@@ -990,6 +1005,11 @@ func dataSourceComputeListSchemaMake() map[string]*schema.Schema {
Optional: true,
Description: "Find by image ID",
},
"cd_image_id": {
Type: schema.TypeInt,
Optional: true,
Description: "Find by CD image ID",
},
"extnet_name": {
Type: schema.TypeString,
Optional: true,
@@ -1145,6 +1165,10 @@ func dataSourceComputeListSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"auto_start_w_node": {
Type: schema.TypeBool,
Computed: true,
},
"boot_order": {
Type: schema.TypeList,
Computed: true,
@@ -1474,6 +1498,13 @@ func dataSourceComputeListSchemaMake() map[string]*schema.Schema {
Type: schema.TypeBool,
Computed: true,
},
"preferred_cpu": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
},
"ram": {
Type: schema.TypeInt,
Computed: true,
@@ -1802,6 +1833,10 @@ func dataSourceComputeListDeletedSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"auto_start_w_node": {
Type: schema.TypeBool,
Computed: true,
},
"boot_order": {
Type: schema.TypeList,
Computed: true,
@@ -2083,6 +2118,13 @@ func dataSourceComputeListDeletedSchemaMake() map[string]*schema.Schema {
Type: schema.TypeInt,
Computed: true,
},
"preferred_cpu": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
},
"reference_id": {
Type: schema.TypeString,
Computed: true,
@@ -3012,13 +3054,11 @@ func resourceComputeSchemaMake() map[string]*schema.Schema {
ValidateFunc: validation.StringInSlice([]string{"EXTNET", "VINS", "VFNIC", "DPDK"}, false), // observe case while validating
Description: "Type of the network for this connection, either EXTNET or VINS.",
},
"net_id": {
Type: schema.TypeInt,
Required: true,
Description: "ID of the network for this connection.",
},
"ip_address": {
Type: schema.TypeString,
Optional: true,
@@ -3026,24 +3066,82 @@ func resourceComputeSchemaMake() map[string]*schema.Schema {
DiffSuppressFunc: networkSubresIPAddreDiffSupperss,
Description: "Optional IP address to assign to this connection. This IP should belong to the selected network and free for use.",
},
"mac": {
Type: schema.TypeString,
Computed: true,
Description: "MAC address associated with this connection. MAC address is assigned automatically.",
},
"weight": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
Description: "weight the network if you need to sort network list, the smallest attach first. zero or null weight attach last",
},
"mtu": {
Type: schema.TypeInt,
Optional: true,
Computed: true,
//Default: 1500,
ValidateFunc: validation.IntBetween(1, 9216),
Description: "Maximum transmission unit, used only for DPDK type, must be 1-9216",
},
},
},
Description: "Optional network connection(s) for this compute. You may specify several network blocks, one for each connection.",
},
"libvirt_settings": {
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"net_type": {
Type: schema.TypeString,
Required: true,
StateFunc: statefuncs.StateFuncToUpper,
ValidateFunc: validation.StringInSlice([]string{"VINS", "VFNIC", "DPDK"}, false), // observe case while validating
Description: "Type of the network",
},
"net_id": {
Type: schema.TypeInt,
Required: true,
Description: "ID of the network",
},
"txmode": {
Type: schema.TypeString,
Default: "",
Optional: true,
},
"ioeventfd": {
Type: schema.TypeString,
Default: "",
Optional: true,
},
"event_idx": {
Type: schema.TypeString,
Default: "",
Optional: true,
},
"queues": {
Type: schema.TypeInt,
Default: 0,
Optional: true,
},
"rx_queue_size": {
Type: schema.TypeInt,
Default: 0,
Optional: true,
},
"tx_queue_size": {
Type: schema.TypeInt,
Default: 0,
Optional: true,
},
},
},
Description: "Configure libvirt virtio interface parameters. You can only delete values locally. Data on the platform cannot be deleted.",
},
"affinity_label": {
Type: schema.TypeString,
Optional: true,
@@ -3319,9 +3417,10 @@ func resourceComputeSchemaMake() map[string]*schema.Schema {
Optional: true,
Default: false,
},
"target_stack_id": {
Type: schema.TypeInt,
"auto_start_w_node": {
Type: schema.TypeBool,
Optional: true,
Computed: true,
},
"force_pin": {
Type: schema.TypeBool,
@@ -3349,12 +3448,6 @@ func resourceComputeSchemaMake() map[string]*schema.Schema {
Optional: true,
Default: false,
},
"auto_start": {
Type: schema.TypeBool,
Optional: true,
Default: false,
Description: "Flag for redeploy compute",
},
"force_stop": {
Type: schema.TypeBool,
Optional: true,
@@ -3367,13 +3460,6 @@ func resourceComputeSchemaMake() map[string]*schema.Schema {
Default: false,
Description: "Flag for resize compute",
},
"data_disks": {
Type: schema.TypeString,
Optional: true,
ValidateFunc: validation.StringInSlice([]string{"KEEP", "DETACH", "DESTROY"}, false),
Default: "DETACH",
Description: "Flag for redeploy compute",
},
"detach_disks": {
Type: schema.TypeBool,
Optional: true,
@@ -3403,6 +3489,15 @@ func resourceComputeSchemaMake() map[string]*schema.Schema {
Default: false,
Description: "Use Huge Pages to allocate RAM of the virtual machine. The system must be pre-configured by allocating Huge Pages on the physical node.",
},
"preferred_cpu": {
Type: schema.TypeList,
Optional: true,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
Description: "Recommended isolated CPUs. Field is ignored if compute.cpupin=False or compute.pinned=False",
},
"pci_devices": {
Type: schema.TypeSet,
Optional: true,
@@ -3412,43 +3507,6 @@ func resourceComputeSchemaMake() map[string]*schema.Schema {
},
Description: "ID of the connected pci devices",
},
"libvirt_settings": {
Type: schema.TypeSet,
Optional: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"mac": {
Type: schema.TypeString,
Required: true,
},
"txmode": {
Type: schema.TypeString,
Optional: true,
},
"ioeventfd": {
Type: schema.TypeString,
Optional: true,
},
"event_idx": {
Type: schema.TypeString,
Optional: true,
},
"queues": {
Type: schema.TypeInt,
Optional: true,
},
"rx_queue_size": {
Type: schema.TypeInt,
Optional: true,
},
"tx_queue_size": {
Type: schema.TypeInt,
Optional: true,
},
},
},
Description: "Configure libvirt virtio interface parameters. You can only delete values locally. Data on the platform cannot be deleted.",
},
// Computed properties
"account_id": {
Type: schema.TypeInt,
@@ -3887,6 +3945,10 @@ func resourceComputeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeBool,
Computed: true,
},
"vnc_password": {
Type: schema.TypeString,
Computed: true,
},
"vgpus": {
Type: schema.TypeList,
Computed: true,
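A hedged sketch of the reworked `libvirt_settings` block from the configuration side: it is now keyed by `net_type`/`net_id` instead of `mac`, and the provider resolves the MAC address from the compute's interfaces itself. The `txmode`/`ioeventfd`/`event_idx` values below are illustrative libvirt settings, not values mandated by this schema; required compute arguments are omitted (see the sketch above).

```terraform
resource "decort_cb_kvmvm" "tuned" {
  # ... required arguments as in the previous sketch ...

  network {
    net_type = "VINS"
    net_id   = 4321    # placeholder ViNS ID
  }

  libvirt_settings {
    net_type      = "VINS"
    net_id        = 4321
    txmode        = "iothread"   # assumed example value
    ioeventfd     = "on"         # assumed example value
    event_idx     = "on"         # assumed example value
    queues        = 4
    rx_queue_size = 512
    tx_queue_size = 512
  }
}
```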

View File

@@ -114,6 +114,22 @@ func utilityComputeResize(ctx context.Context, d *schema.ResourceData, m interfa
c := m.(*controller.ControllerCfg)
computeId, _ := strconv.ParseUint(d.Id(), 10, 64)
var isStopRequired bool
old, new := d.GetChange("cpu")
if d.Get("started").(bool) && (old.(int) > new.(int)) {
isStopRequired = true
}
if isStopRequired {
stopReq := compute.StopRequest{
ComputeID: computeId,
Force: false,
}
if _, err := c.CloudBroker().Compute().Stop(ctx, stopReq); err != nil {
return err
}
}
resizeReq := compute.ResizeRequest{
ComputeID: computeId,
}
@@ -135,6 +151,22 @@ func utilityComputeResize(ctx context.Context, d *schema.ResourceData, m interfa
resizeReq.CPU = 0
}
if resizeReq.CPU != 0 {
if preferredCPU, ok := d.GetOk("preferred_cpu"); ok {
preferredList := preferredCPU.([]interface{})
if len(preferredList) > 0 {
for _, v := range preferredList {
cpuNum := v.(int)
resizeReq.PreferredCPU = append(resizeReq.PreferredCPU, int64(cpuNum))
}
}
}
oldPCPU, newPCPU := d.GetChange("preferred_cpu")
if len(oldPCPU.([]interface{})) != 0 && len(newPCPU.([]interface{})) == 0 {
resizeReq.PreferredCPU = []int64{-1}
}
}
oldRam, newRam := d.GetChange("ram")
if oldRam.(int) != newRam.(int) {
resizeReq.RAM = uint64(newRam.(int))
@@ -153,6 +185,12 @@ func utilityComputeResize(ctx context.Context, d *schema.ResourceData, m interfa
}
}
if isStopRequired {
if _, err := c.CloudBroker().Compute().Start(ctx, compute.StartRequest{ComputeID: computeId}); err != nil {
return err
}
}
return nil
}
@@ -629,7 +667,11 @@ func utilityComputeNetworksConfigure(ctx context.Context, d *schema.ResourceData
needStart := false
if oldSet.(*schema.Set).Len() == len(detachMap) || oldSet.(*schema.Set).Len() == 0 {
oldLibvirtSet, newLibvirtSet := d.GetChange("libvirt_settings")
addedLibvirtSettings := (newLibvirtSet.(*schema.Set).Difference(oldLibvirtSet.(*schema.Set))).List()
libvirtSettingsMap := addAttachedNetwork(addedLibvirtSettings, newLibvirtSet.(*schema.Set).List(), attachMap)
if oldSet.(*schema.Set).Len() == len(detachMap) || oldSet.(*schema.Set).Len() == 0 || len(libvirtSettingsMap) > 0 || hasDPDKnetwork(attachMap) {
if err := utilityComputeStop(ctx, d, m); err != nil {
apiErrCount++
lastSavedError = err
@@ -659,6 +701,10 @@ func utilityComputeNetworksConfigure(ctx context.Context, d *schema.ResourceData
NetID: uint64(netData["net_id"].(int)),
}
if req.NetType == "DPDK" {
req.MTU = uint64(netData["mtu"].(int))
}
if netData["ip_address"].(string) != "" {
req.IPAddr = netData["ip_address"].(string)
}
@@ -672,6 +718,51 @@ func utilityComputeNetworksConfigure(ctx context.Context, d *schema.ResourceData
}
}
if len(libvirtSettingsMap) > 0 {
computeId, _ := strconv.ParseUint(d.Id(), 10, 64)
computeRec, err := utilityComputeCheckPresence(ctx, d, m)
if err != nil {
log.Errorf("utilityComputeNetworksConfigure: failed to read information about compute with ID %s: %s",
d.Id(), err)
apiErrCount++
lastSavedError = err
}
if computeRec != nil {
log.Debugf("utilityComputeNetworksConfigure: libvirt virtio set has %d items for Compute ID %s", len(attachMap), d.Id())
for _, libvirtSetting := range libvirtSettingsMap {
netType := libvirtSetting["net_type"].(string)
netId := uint64(libvirtSetting["net_id"].(int))
var mac string
for _, iface := range computeRec.Interfaces {
if iface.NetID == netId && iface.NetType == netType {
mac = iface.MAC
break
}
}
log.Debugf("utilityComputeNetworksConfigure: Configure libvirt virtio interface parameters on Network with type %s and id %d", netType, netId)
req := compute.SetNetConfigRequest{
ComputeID: computeId,
MAC: mac,
TXMode: libvirtSetting["txmode"].(string),
IOEventFD: libvirtSetting["ioeventfd"].(string),
EventIDx: libvirtSetting["event_idx"].(string),
Queues: uint64(libvirtSetting["queues"].(int)),
RXQueueSize: uint64(libvirtSetting["rx_queue_size"].(int)),
TXQueueSize: uint64(libvirtSetting["tx_queue_size"].(int)),
}
_, err := c.CloudBroker().Compute().SetNetConfig(ctx, req)
if err != nil {
log.Errorf("utilityComputeNetworksConfigure: failed to set net config to net ID %d of type %s to Compute ID %s: %s",
netId, netType, d.Id(), err)
apiErrCount++
lastSavedError = err
}
}
}
}
if needStart {
computeId, _ := strconv.ParseUint(d.Id(), 10, 64)
if numErr, err := utilityComputeStart(ctx, computeId, m); err != nil {
@@ -698,12 +789,12 @@ func differenceNetwork(oldList, newList []interface{}) (detachMap, changeIpMap,
found := false
for _, newNetwork := range newList {
newMap := newNetwork.(map[string]interface{})
if newMap["net_type"] == oldMap["net_type"] && newMap["net_id"] == oldMap["net_id"] && newMap["weight"] == oldMap["weight"] {
if (newMap["net_type"].(string) == "EXTNET" || newMap["net_type"].(string) == "VINS") && newMap["ip_address"] != oldMap["ip_address"] {
if newMap["net_type"] == oldMap["net_type"] && newMap["net_id"] == oldMap["net_id"] && newMap["weight"] == oldMap["weight"] && (newMap["mtu"] == oldMap["mtu"] || newMap["mtu"].(int) == 0) {
if (newMap["net_type"].(string) == "EXTNET" || newMap["net_type"].(string) == "VINS") && (newMap["ip_address"] != oldMap["ip_address"] && newMap["ip_address"].(string) != "") {
changeIpMap = append(changeIpMap, newMap)
found = true
break
} else if newMap["ip_address"] == oldMap["ip_address"] {
} else if newMap["ip_address"] == oldMap["ip_address"] || newMap["ip_address"].(string) == "" {
found = true
break
}
@@ -720,8 +811,10 @@ func differenceNetwork(oldList, newList []interface{}) (detachMap, changeIpMap,
found := false
for _, oldNetwork := range oldList {
oldMap := oldNetwork.(map[string]interface{})
if newMap["net_type"] == oldMap["net_type"] && newMap["net_id"] == oldMap["net_id"] && newMap["weight"] == oldMap["weight"] {
if newMap["ip_address"] == oldMap["ip_address"] || ((newMap["net_type"].(string) == "EXTNET" || newMap["net_type"].(string) == "VINS") && newMap["ip_address"] != oldMap["ip_address"]) {
if newMap["net_type"] == oldMap["net_type"] && newMap["net_id"] == oldMap["net_id"] && newMap["weight"] == oldMap["weight"] && (newMap["mtu"] == oldMap["mtu"] || newMap["mtu"].(int) == 0) {
if newMap["ip_address"] == oldMap["ip_address"] || newMap["ip_address"].(string) == "" ||
((newMap["net_type"].(string) == "EXTNET" || newMap["net_type"].(string) == "VINS") &&
newMap["ip_address"] != oldMap["ip_address"] && newMap["ip_address"].(string) != "") {
found = true
break
}
@@ -736,6 +829,49 @@ func differenceNetwork(oldList, newList []interface{}) (detachMap, changeIpMap,
return
}
func hasDPDKnetwork(networkAttachMap []map[string]interface{}) bool {
for _, elem := range networkAttachMap {
if elem["net_type"].(string) == "DPDK" {
return true
}
}
return false
}
// addAttachedNetwork merges libvirt_settings entries that belong to newly attached networks into the list of added libvirt settings
func addAttachedNetwork(addedLibvirtSettings []interface{}, newLibvirtSettings []interface{}, networkAttachMap []map[string]interface{}) (addedLibvirtSettingsMap []map[string]interface{}) {
addedLibvirtSettingsMap = make([]map[string]interface{}, 0)
for _, attach := range networkAttachMap {
found := false
for _, elem := range addedLibvirtSettings {
addedLVSMap := elem.(map[string]interface{})
if attach["net_id"] == addedLVSMap["net_id"] && attach["net_type"] == addedLVSMap["net_type"] {
found = true
break
}
}
if found {
continue
}
for _, elem := range newLibvirtSettings {
newVirtSettingMap := elem.(map[string]interface{})
if attach["net_id"] == newVirtSettingMap["net_id"] && attach["net_type"] == newVirtSettingMap["net_type"] {
addedLibvirtSettingsMap = append(addedLibvirtSettingsMap, newVirtSettingMap)
found = true
break
}
}
}
for _, elem := range addedLibvirtSettings {
addedLVSMap := elem.(map[string]interface{})
addedLibvirtSettingsMap = append(addedLibvirtSettingsMap, addedLVSMap)
}
return
}
func utilityComputeUpdate(ctx context.Context, d *schema.ResourceData, m interface{}) error {
c := m.(*controller.ControllerCfg)
@@ -754,21 +890,33 @@ func utilityComputeUpdate(ctx context.Context, d *schema.ResourceData, m interfa
if d.HasChange("numa_affinity") {
req.NumaAffinity = d.Get("numa_affinity").(string)
}
if d.HasChange("cpu_pin") {
req.CPUPin = d.Get("cpu_pin").(bool)
}
if d.HasChange("hp_backed") {
req.HPBacked = d.Get("hp_backed").(bool)
}
if d.HasChange("chipset") {
req.Chipset = d.Get("chipset").(string)
}
req.CPUPin = d.Get("cpu_pin").(bool)
req.HPBacked = d.Get("hp_backed").(bool)
req.AutoStart = d.Get("auto_start_w_node").(bool)
if d.HasChange("preferred_cpu") {
if preferredCPU, ok := d.GetOk("preferred_cpu"); ok {
preferredList := preferredCPU.([]interface{})
if len(preferredList) > 0 {
for _, v := range preferredList {
cpuNum := v.(int)
req.PreferredCPU = append(req.PreferredCPU, int64(cpuNum))
}
}
}
oldPCPU, newPCPU := d.GetChange("preferred_cpu")
if len(oldPCPU.([]interface{})) != 0 && len(newPCPU.([]interface{})) == 0 {
req.PreferredCPU = []int64{-1}
}
}
// Note bene: numa_affinity, cpu_pin and hp_backed are not allowed to be changed for compute in STARTED tech status.
// If STARTED, we need to stop it before update
var isStopRequired bool
if d.HasChanges("numa_affinity", "cpu_pin", "hp_backed") && d.Get("started").(bool) {
if d.HasChanges("numa_affinity", "cpu_pin", "hp_backed", "chipset", "preferred_cpu") && d.Get("started").(bool) {
isStopRequired = true
}
if isStopRequired {
@@ -1015,36 +1163,6 @@ func utilityComputeUpdatePciDevices(ctx context.Context, d *schema.ResourceData,
return nil
}
func utilityComputeUpdateLibvirtSettings(ctx context.Context, d *schema.ResourceData, m interface{}) error {
c := m.(*controller.ControllerCfg)
computeId, _ := strconv.ParseUint(d.Id(), 10, 64)
oldSet, newSet := d.GetChange("libvirt_settings")
added := (newSet.(*schema.Set).Difference(oldSet.(*schema.Set))).List()
if len(added) > 0 {
for _, v := range added {
settingsConv := v.(map[string]interface{})
req := compute.SetNetConfigRequest{
ComputeID: computeId,
MAC: settingsConv["mac"].(string),
TXMode: settingsConv["txmode"].(string),
IOEventFD: settingsConv["ioeventfd"].(string),
EventIDx: settingsConv["event_idx"].(string),
Queues: uint64(settingsConv["queues"].(int)),
RXQueueSize: uint64(settingsConv["rx_queue_size"].(int)),
TXQueueSize: uint64(settingsConv["tx_queue_size"].(int)),
}
_, err := c.CloudBroker().Compute().SetNetConfig(ctx, req)
if err != nil {
return err
}
}
}
return nil
}
func utilityComputeUpdateTags(ctx context.Context, d *schema.ResourceData, m interface{}) error {
c := m.(*controller.ControllerCfg)
@@ -1360,12 +1478,17 @@ func utilityComputePinToStack(ctx context.Context, d *schema.ResourceData, m int
if !oldPin.(bool) && newPin.(bool) {
req := compute.PinToStackRequest{
ComputeID: computeId,
TargetStackID: uint64(d.Get("target_stack_id").(int)),
TargetStackID: uint64(d.Get("stack_id").(int)),
}
if force, ok := d.Get("force_pin").(bool); ok {
req.Force = force
}
if autoStart, ok := d.Get("auto_start_w_node").(bool); ok {
req.AutoStart = autoStart
}
_, err := c.CloudBroker().Compute().PinToStack(ctx, req)
if err != nil {
return err
@@ -1433,6 +1556,9 @@ func utilityComputeUpdateImage(ctx context.Context, d *schema.ResourceData, m in
if depresent, ok := d.Get("depresent").(bool); ok {
stopReq.Depresent = depresent
}
if forceStop, ok := d.GetOk("force_stop"); ok {
stopReq.Force = forceStop.(bool)
}
_, err := c.CloudBroker().Compute().Stop(ctx, stopReq)
if err != nil {
@@ -1443,15 +1569,13 @@ func utilityComputeUpdateImage(ctx context.Context, d *schema.ResourceData, m in
req := compute.RedeployRequest{
ComputeID: computeId,
ImageID: uint64(newImage.(int)),
DataDisks: "KEEP",
}
if diskSize, ok := d.GetOk("boot_disk_size"); ok {
req.DiskSize = uint64(diskSize.(int))
}
if dataDisks, ok := d.GetOk("data_disks"); ok {
req.DataDisks = dataDisks.(string)
}
if autoStart, ok := d.GetOk("auto_start"); ok {
if autoStart, ok := d.GetOk("started"); ok {
req.AutoStart = autoStart.(bool)
}
if forceStop, ok := d.GetOk("force_stop"); ok {
@@ -1510,7 +1634,7 @@ func utilityComputeStop(ctx context.Context, d *schema.ResourceData, m interface
req.Depresent = depresent
}
log.Debugf("utilityComputeNetworksConfigure: stopping compute %d", req.ComputeID)
log.Debugf("utilityComputeStop: stopping compute %d", req.ComputeID)
_, err := c.CloudBroker().Compute().Stop(ctx, req)
if err != nil {
return err
@@ -1522,7 +1646,7 @@ func utilityComputeStart(ctx context.Context, computeID uint64, m interface{}) (
c := m.(*controller.ControllerCfg)
startReq := compute.StartRequest{ComputeID: computeID}
log.Debugf("utilityComputeNetworksConfigure: starting compute %d", computeID)
log.Debugf("utilityComputeStart: starting compute %d", computeID)
_, err := c.CloudBroker().Compute().Start(ctx, startReq)
if err != nil {
return 1, err
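The resize path above stops a started compute before shrinking `cpu` and starts it again afterwards, and it translates an emptied `preferred_cpu` list into a single `-1` value to reset the recommendation on the platform. A hedged sketch of the user-facing side of this behaviour (placeholder values, required arguments as in the first sketch):

```terraform
resource "decort_cb_kvmvm" "resized" {
  # ... required arguments as in the first sketch ...

  cpu     = 4
  ram     = 4096
  cpu_pin = true

  # Recommended isolated cores for vcpu pinning; the number of cores should
  # match cpu. Removing this attribute later clears the recommendation
  # (the provider sends a single -1 internally).
  preferred_cpu = [0, 1, 2, 3]
}
```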

View File

@@ -75,6 +75,9 @@ func utilityDataComputeListCheckPresence(ctx context.Context, d *schema.Resource
if imageID, ok := d.GetOk("image_id"); ok {
req.ImageID = imageID.(uint64)
}
if cdImageID, ok := d.GetOk("cd_image_id"); ok {
req.CDImageID = cdImageID.(uint64)
}
if extNetName, ok := d.GetOk("extnet_name"); ok {
req.ExtNetName = extNetName.(string)
}

View File

@@ -46,11 +46,14 @@ func flattenNode(d *schema.ResourceData, item *node.RecordNode) {
d.Set("consumption", flattenConsumption(item.Consumption))
d.Set("cpu_info", flattenCpuInfo(item.CpuInfo))
d.Set("cpu_allocation_ratio", item.CPUAllocationRatio)
d.Set("dpdk", flattenDPDKItem(item.DPDK))
d.Set("gid", item.GID)
d.Set("ipaddr", item.IPAddr)
d.Set("isolated_cpus", flattenNodeItem(item.IsolatedCpus))
d.Set("name", item.Name)
d.Set("need_reboot", item.NeedReboot)
d.Set("net_addr", flattenGetNetAddr(item.NetAddr))
d.Set("network_mode", item.NetworkMode)
d.Set("nic_info", flattenNicInfo(item.NicInfo))
d.Set("numa_topology", flattenNumaTopology(item.NumaTopology))
d.Set("reserved_cpus", flattenNodeItem(item.ReservedCPUs))
@@ -58,6 +61,10 @@ func flattenNode(d *schema.ResourceData, item *node.RecordNode) {
d.Set("sriov_enabled", item.SriovEnabled)
d.Set("stack_id", item.StackID)
d.Set("status", item.Status)
d.Set("to_active", flattenRole(item.ToActive))
d.Set("to_installing", flattenRole(item.ToInstalling))
d.Set("to_maintenance", flattenRole(item.ToMaintenance))
d.Set("to_restricted", flattenRole(item.ToRestricted))
d.Set("version", item.Version)
}
@@ -106,44 +113,46 @@ func flattenNodeList(nodes *node.ListNodes) []map[string]interface{} {
res := make([]map[string]interface{}, 0, len(nodes.Data))
for _, item := range nodes.Data {
temp := map[string]interface{}{
"additional_pkgs": flattenNodeItem(item.AdditionalPkgs),
"cpu_info": flattenCpuInfo(item.CpuInfo),
"description": item.Description,
"gid": item.GID,
"guid": item.GUID,
"hostkey": item.HostKey,
"node_id": item.ID,
"ipaddr": item.IPAddr,
"isolated_cpus": flattenNodeItem(item.IsolatedCpus),
"lastcheck": item.LastCheck,
"machine_guid": item.MachineGUID,
"mainboard_sn": item.MainboardSN,
"memory": item.Memory,
"milestones": item.Milestones,
"model": item.Model,
"name": item.Name,
"need_reboot": item.NeedReboot,
"net_addr": flattenNetAddr(item.NetAddr),
"network_mode": item.NetworkMode,
"nic_info": flattenNicInfo(item.NicInfo),
"node_uuid": item.NodeUUID,
"numa_topology": flattenNumaTopology(item.NumaTopology),
"peer_backup": item.PeerBackup,
"peer_log": item.PeerLog,
"peer_stats": item.PeerStats,
"pgpus": item.Pgpus,
"public_keys": item.PublicKeys,
"release": item.Release,
"reserved_cpus": flattenNodeItem(item.ReservedCPUs),
"roles": item.Roles,
"seps": item.Seps,
"serial_num": item.SerialNum,
"sriov_enabled": item.SriovEnabled,
"stack_id": item.StackID,
"status": item.Status,
"tags": item.Tags,
"type": item.Type,
"version": item.Version,
"additional_pkgs": flattenNodeItem(item.AdditionalPkgs),
"cpu_info": flattenCpuInfo(item.CpuInfo),
"description": item.Description,
"dpdk": flattenDPDKItem(item.DPDK),
"gid": item.GID,
"guid": item.GUID,
"hostkey": item.HostKey,
"node_id": item.ID,
"ipaddr": item.IPAddr,
"isolated_cpus": flattenNodeItem(item.IsolatedCpus),
"lastcheck": item.LastCheck,
"machine_guid": item.MachineGUID,
"mainboard_sn": item.MainboardSN,
"memory": item.Memory,
"milestones": item.Milestones,
"model": item.Model,
"name": item.Name,
"need_reboot": item.NeedReboot,
"net_addr": flattenNetAddr(item.NetAddr),
"network_mode": item.NetworkMode,
"nic_info": flattenNicInfo(item.NicInfo),
"node_uuid": item.NodeUUID,
"numa_topology": flattenNumaTopology(item.NumaTopology),
"peer_backup": item.PeerBackup,
"peer_log": item.PeerLog,
"peer_stats": item.PeerStats,
"pgpus": item.Pgpus,
"public_keys": item.PublicKeys,
"release": item.Release,
"reserved_cpus": flattenNodeItem(item.ReservedCPUs),
"roles": item.Roles,
"seps": item.Seps,
"serial_num": item.SerialNum,
"sriov_enabled": item.SriovEnabled,
"stack_id": item.StackID,
"status": item.Status,
"tags": item.Tags,
"type": item.Type,
"uefi_firmware_file": item.UEFIFirmwareFile,
"version": item.Version,
}
res = append(res, temp)
}
@@ -205,9 +214,9 @@ func flattenVFList(vfList []interface{}) []map[string]interface{} {
return res
}
func flattenNetAddr(adresses node.ListNetAddr) []map[string]interface{} {
res := make([]map[string]interface{}, 0, len(adresses))
for _, item := range adresses {
func flattenNetAddr(addresses node.ListNetAddr) []map[string]interface{} {
res := make([]map[string]interface{}, 0, len(addresses))
for _, item := range addresses {
temp := map[string]interface{}{
"cidr": item.CIDR,
"index": item.Index,
@@ -221,6 +230,16 @@ func flattenNetAddr(adresses node.ListNetAddr) []map[string]interface{} {
return res
}
func flattenGetNetAddr(address node.NetAddr) []map[string]interface{} {
res := make([]map[string]interface{}, 1)
temp := map[string]interface{}{
"ip": address.IP,
"name": address.Name,
}
res[0] = temp
return res
}
func flattenCpuInfo(info node.CpuInfo) []map[string]interface{} {
res := make([]map[string]interface{}, 1)
temp := map[string]interface{}{
@@ -250,3 +269,36 @@ func flattenNodeItem(m []interface{}) []string {
}
return output
}
func flattenDPDKItem(dpdk node.DPDK) []map[string]interface{} {
res := make([]map[string]interface{}, 1)
bridges := make([]map[string]interface{}, 1)
backplane := make([]map[string]interface{}, 1)
backplane[0] = map[string]interface{}{
"interfaces": dpdk.Bridges.Backplane1.Interfaces,
"numa_node": dpdk.Bridges.Backplane1.NumaNode,
}
bridges[0] = map[string]interface{}{
"backplane1": backplane,
}
res[0] = map[string]interface{}{
"bridges": bridges,
"hp_memory": dpdk.HPMemory,
"pmd_cpu": dpdk.PMDCPU,
}
return res
}
func flattenRole(role node.Role) []map[string]interface{} {
res := make([]map[string]interface{}, 1)
temp := map[string]interface{}{
"actor": role.Actor,
"reason": role.Reason,
"time": role.Time,
}
res[0] = temp
return res
}

View File

@@ -105,6 +105,55 @@ func dataSourceNodeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeInt,
Computed: true,
},
"dpdk": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"bridges": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"backplane1": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"interfaces": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"numa_node": {
Type: schema.TypeInt,
Computed: true,
},
},
},
},
},
},
},
"hp_memory": {
Type: schema.TypeMap,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
},
"pmd_cpu": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
},
},
},
},
"gid": {
Type: schema.TypeInt,
Computed: true,
@@ -131,6 +180,29 @@ func dataSourceNodeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeBool,
Computed: true,
},
"net_addr": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"ip": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"name": {
Type: schema.TypeString,
Computed: true,
},
},
},
},
"network_mode": {
Type: schema.TypeString,
Computed: true,
},
"nic_info": {
Type: schema.TypeList,
Computed: true,
@@ -253,6 +325,86 @@ func dataSourceNodeSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"to_active": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"actor": {
Type: schema.TypeString,
Computed: true,
},
"reason": {
Type: schema.TypeString,
Computed: true,
},
"time": {
Type: schema.TypeInt,
Computed: true,
},
},
},
},
"to_installing": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"actor": {
Type: schema.TypeString,
Computed: true,
},
"reason": {
Type: schema.TypeString,
Computed: true,
},
"time": {
Type: schema.TypeInt,
Computed: true,
},
},
},
},
"to_maintenance": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"actor": {
Type: schema.TypeString,
Computed: true,
},
"reason": {
Type: schema.TypeString,
Computed: true,
},
"time": {
Type: schema.TypeInt,
Computed: true,
},
},
},
},
"to_restricted": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"actor": {
Type: schema.TypeString,
Computed: true,
},
"reason": {
Type: schema.TypeString,
Computed: true,
},
"time": {
Type: schema.TypeInt,
Computed: true,
},
},
},
},
"version": {
Type: schema.TypeString,
Computed: true,
@@ -349,6 +501,55 @@ func dataSourceNodeListSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"dpdk": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"bridges": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"backplane1": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"interfaces": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"numa_node": {
Type: schema.TypeInt,
Computed: true,
},
},
},
},
},
},
},
"hp_memory": {
Type: schema.TypeMap,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
},
"pmd_cpu": {
Type: schema.TypeList,
Computed: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},
},
},
},
},
"gid": {
Type: schema.TypeInt,
Computed: true,
@@ -631,6 +832,10 @@ func dataSourceNodeListSchemaMake() map[string]*schema.Schema {
Type: schema.TypeString,
Computed: true,
},
"uefi_firmware_file": {
Type: schema.TypeString,
Computed: true,
},
"version": {
Type: schema.TypeString,
Computed: true,
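The node data source now also exposes the `dpdk` bridge layout, the management `net_addr` record, `network_mode`, the `to_active`/`to_installing`/`to_maintenance`/`to_restricted` role transitions and `uefi_firmware_file`. A hedged sketch of reading them; the data source and argument names (`decort_cb_node`, `node_id`) are assumed here, as they are not visible in this hunk.

```terraform
# Hypothetical data source / argument names, see the note above.
data "decort_cb_node" "node" {
  node_id = 5    # placeholder node ID
}

output "node_dpdk" {
  value = data.decort_cb_node.node.dpdk
}

output "node_net_addr" {
  value = data.decort_cb_node.node.net_addr
}

output "node_to_maintenance" {
  value = data.decort_cb_node.node.to_maintenance
}
```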

View File

@@ -37,18 +37,16 @@ import (
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudbroker/sep"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/flattens"
)
func flattenSep(d *schema.ResourceData, desSep *sep.RecordSEP) {
d.Set("ckey", desSep.CKey)
d.Set("meta", flattens.FlattenMeta(desSep.Meta))
d.Set("consumed_by", desSep.ConsumedBy)
d.Set("desc", desSep.Description)
d.Set("gid", desSep.GID)
d.Set("guid", desSep.GUID)
d.Set("sep_id", desSep.ID)
d.Set("milestones", desSep.Milestones)
d.Set("multipath_num", desSep.MultipathNum)
d.Set("name", desSep.Name)
d.Set("obj_status", desSep.ObjStatus)
d.Set("provided_by", desSep.ProvidedBy)
@@ -64,21 +62,20 @@ func flattenSepListItems(sl *sep.ListSEP) []map[string]interface{} {
for _, item := range sl.Data {
data, _ := json.Marshal(item.Config)
temp := map[string]interface{}{
"ckey": item.CKey,
"meta": flattens.FlattenMeta(item.Meta),
"consumed_by": item.ConsumedBy,
"desc": item.Description,
"gid": item.GID,
"guid": item.GUID,
"sep_id": item.ID,
"milestones": item.Milestones,
"name": item.Name,
"obj_status": item.ObjStatus,
"provided_by": item.ProvidedBy,
"shared_with": item.SharedWith,
"tech_status": item.TechStatus,
"type": item.Type,
"config": string(data),
"consumed_by": item.ConsumedBy,
"desc": item.Description,
"gid": item.GID,
"guid": item.GUID,
"sep_id": item.ID,
"milestones": item.Milestones,
"name": item.Name,
"multipath_num": item.MultipathNum,
"obj_status": item.ObjStatus,
"provided_by": item.ProvidedBy,
"shared_with": item.SharedWith,
"tech_status": item.TechStatus,
"type": item.Type,
"config": string(data),
}
res = append(res, temp)
}

View File

@@ -9,19 +9,6 @@ func dataSourceSepCSchemaMake() map[string]*schema.Schema {
Required: true,
Description: "sep type des id",
},
"ckey": {
Type: schema.TypeString,
Computed: true,
Description: "ckey",
},
"meta": {
Type: schema.TypeList,
Computed: true,
Description: "meta",
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"config": {
Type: schema.TypeString,
Computed: true,
@@ -50,6 +37,11 @@ func dataSourceSepCSchemaMake() map[string]*schema.Schema {
Computed: true,
Description: "guid",
},
"multipath_num": {
Type: schema.TypeInt,
Computed: true,
Description: "multipath_num",
},
"milestones": {
Type: schema.TypeInt,
Computed: true,
@@ -295,19 +287,6 @@ func dataSourceSepListSchemaMake() map[string]*schema.Schema {
Description: "sep list",
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"ckey": {
Type: schema.TypeString,
Computed: true,
Description: "ckey",
},
"meta": {
Type: schema.TypeList,
Computed: true,
Description: "meta",
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"config": {
Description: "config",
Type: schema.TypeString,
@@ -346,6 +325,11 @@ func dataSourceSepListSchemaMake() map[string]*schema.Schema {
Computed: true,
Description: "milestones",
},
"multipath_num": {
Type: schema.TypeInt,
Computed: true,
Description: "multipath_num",
},
"name": {
Type: schema.TypeString,
Computed: true,
@@ -607,19 +591,6 @@ func resourceSepSchemaMake() map[string]*schema.Schema {
},
},
},
"ckey": {
Type: schema.TypeString,
Computed: true,
Description: "ckey",
},
"meta": {
Type: schema.TypeList,
Computed: true,
Description: "meta",
Elem: &schema.Schema{
Type: schema.TypeString,
},
},
"config": {
Type: schema.TypeString,
Required: true,
@@ -651,6 +622,11 @@ func resourceSepSchemaMake() map[string]*schema.Schema {
Computed: true,
Description: "milestones",
},
"multipath_num": {
Type: schema.TypeInt,
Computed: true,
Description: "multipath_num",
},
"obj_status": {
Type: schema.TypeString,
Computed: true,

View File

@@ -150,6 +150,7 @@ func flattenVinsVNFDev(vd vins.VNFDev) []map[string]interface{} {
"status": vd.Status,
"tech_status": vd.TechStatus,
"type": vd.Type,
"vnc_password": vd.VNCPassword,
"vins": vd.VINS,
}
res = append(res, temp)
@@ -371,14 +372,11 @@ func flattenVinsListReservations(li vins.ListReservations) []map[string]interfac
res := make([]map[string]interface{}, 0, len(li))
for _, v := range li {
temp := map[string]interface{}{
"client_type": v.ClientType,
"description": v.Description,
"domain_name": v.DomainName,
"host_name": v.Hostname,
"ip": v.IP,
"mac": v.MAC,
"type": v.Type,
"vm_id": v.VMID,
"account_id": v.AccountID,
"ip": v.IP,
"mac": v.MAC,
"type": v.Type,
"vm_id": v.VMID,
}
res = append(res, temp)
}

View File

@@ -38,14 +38,12 @@ import (
"strconv"
"strings"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
log "github.com/sirupsen/logrus"
"repository.basistech.ru/BASIS/decort-golang-sdk/pkg/cloudbroker/vins"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/constants"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/controller"
"repository.basistech.ru/BASIS/terraform-provider-decort/internal/dc"
"github.com/hashicorp/terraform-plugin-sdk/v2/diag"
"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)
func resourceStaticRouteCreate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
@@ -64,19 +62,6 @@ func resourceStaticRouteCreate(ctx context.Context, d *schema.ResourceData, m in
Gateway: d.Get("gateway").(string),
}
if computesIDS, ok := d.GetOk("compute_ids"); ok {
ids := computesIDS.([]interface{})
res := make([]uint64, 10)
for _, id := range ids {
computeId := uint64(id.(int))
res = append(res, computeId)
}
req.ComputeIds = res
}
_, err := c.CloudBroker().VINS().StaticRouteAdd(ctx, req)
if err != nil {
d.SetId("")
@@ -108,30 +93,7 @@ func resourceStaticRouteRead(ctx context.Context, d *schema.ResourceData, m inte
}
func resourceStaticRouteUpdate(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
log.Debugf("resourceStaticRouteUpdate: called for static route id %s", d.Id())
c := m.(*controller.ControllerCfg)
if diags := checkParamsExistenceStaticRoute(ctx, d, c); diags != nil {
return diags
}
staticRouteData, err := utilityDataStaticRouteCheckPresence(ctx, d, m)
if err != nil {
d.SetId("")
return diag.FromErr(err)
}
warnings := dc.Warnings{}
if d.HasChange("compute_ids") {
if errs := resourceStaticRouteChangeComputeIds(ctx, d, m, staticRouteData); len(errs) != 0 {
for _, err := range errs {
warnings.Add(err)
}
}
}
return append(warnings.Get(), resourceStaticRouteRead(ctx, d, m)...)
return nil
}
func resourceStaticRouteDelete(ctx context.Context, d *schema.ResourceData, m interface{}) diag.Diagnostics {
@@ -158,61 +120,6 @@ func resourceStaticRouteDelete(ctx context.Context, d *schema.ResourceData, m in
return nil
}
func resourceStaticRouteChangeComputeIds(ctx context.Context, d *schema.ResourceData, m interface{}, staticRouteData *vins.ItemRoutes) []error {
c := m.(*controller.ControllerCfg)
var errs []error
vinsId := uint64(d.Get("vins_id").(int))
deletedIds := make([]uint64, 0)
addedIds := make([]uint64, 0)
oldComputeIds, newComputeIds := d.GetChange("compute_ids")
oldComputeIdsSlice := oldComputeIds.([]interface{})
newComputeIdsSlice := newComputeIds.([]interface{})
for _, el := range oldComputeIdsSlice {
if !isContainsIds(newComputeIdsSlice, el) {
convertedEl := uint64(el.(int))
deletedIds = append(deletedIds, convertedEl)
}
}
for _, el := range newComputeIdsSlice {
if !isContainsIds(oldComputeIdsSlice, el) {
convertedEl := uint64(el.(int))
addedIds = append(addedIds, convertedEl)
}
}
if len(deletedIds) > 0 {
req := vins.StaticRouteAccessRevokeRequest{
VINSID: vinsId,
RouteId: staticRouteData.ID,
ComputeIds: deletedIds,
}
if _, err := c.CloudBroker().VINS().StaticRouteAccessRevoke(ctx, req); err != nil {
errs = append(errs, err)
}
}
if len(addedIds) > 0 {
req := vins.StaticRouteAccessGrantRequest{
VINSID: vinsId,
RouteId: staticRouteData.ID,
ComputeIds: addedIds,
}
if _, err := c.CloudBroker().VINS().StaticRouteAccessGrant(ctx, req); err != nil {
errs = append(errs, err)
}
}
return errs
}
func isContainsIds(els []interface{}, el interface{}) bool {
convEl := el.(int)
for _, elOld := range els {

View File

@@ -341,6 +341,10 @@ func dataSourceVinsSchemaMake() map[string]*schema.Schema {
Computed: true,
Description: "type",
},
"vnc_password": {
Type: schema.TypeString,
Computed: true,
},
"vins": {
Type: schema.TypeList,
Computed: true,
@@ -586,26 +590,11 @@ func dataSourceVinsSchemaMake() map[string]*schema.Schema {
Description: "reservations",
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"client_type": {
Type: schema.TypeString,
"account_id": {
Type: schema.TypeInt,
Computed: true,
Description: "client type",
},
"description": {
Type: schema.TypeString,
Computed: true,
Description: "description",
},
"domain_name": {
Type: schema.TypeString,
Computed: true,
Description: "domain name",
},
"host_name": {
Type: schema.TypeString,
Computed: true,
Description: "host name",
},
"ip": {
Type: schema.TypeString,
Computed: true,
@@ -2509,6 +2498,10 @@ func resourceVinsSchemaMake() map[string]*schema.Schema {
Computed: true,
Description: "type",
},
"vnc_password": {
Type: schema.TypeString,
Computed: true,
},
"vins": {
Type: schema.TypeList,
Computed: true,
@@ -2699,26 +2692,11 @@ func resourceVinsSchemaMake() map[string]*schema.Schema {
Description: "reservations",
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"client_type": {
Type: schema.TypeString,
"account_id": {
Type: schema.TypeInt,
Computed: true,
Description: "client type",
},
"description": {
Type: schema.TypeString,
Computed: true,
Description: "description",
},
"domain_name": {
Type: schema.TypeString,
Computed: true,
Description: "domain name",
},
"host_name": {
Type: schema.TypeString,
Computed: true,
Description: "host name",
},
"ip": {
Type: schema.TypeString,
Computed: true,
@@ -2745,7 +2723,6 @@ func resourceVinsSchemaMake() map[string]*schema.Schema {
},
},
},
"created_time": {
Type: schema.TypeInt,
Computed: true,
@@ -3389,7 +3366,6 @@ func resourceStaticRouteSchemaMake() map[string]*schema.Schema {
"compute_ids": {
Type: schema.TypeList,
Computed: true,
Optional: true,
Elem: &schema.Schema{
Type: schema.TypeInt,
},

View File

@@ -43,6 +43,7 @@
- extnet_computes_list
- extnet_default
- extnet_list
- extnet_reserved_ip_list
- flipgroup
- flipgroup_list
- image
@@ -157,6 +158,7 @@
- cb_extnet
- cb_extnet_default
- cb_extnet_list
- cb_extnet_reserved_ip_list
- cb_extnet_static_route
- cb_extnet_static_route_list
- cb_flipgroup
@@ -169,7 +171,6 @@
- cb_grid_list
- cb_grid_list_consumption
- cb_grid_list_emails
- cb_grid_post_diagnosis
- cb_grid_post_status
- cb_image
- cb_image_list

View File

@@ -1,6 +1,6 @@
/*
Пример использования
Получение снимка платформы с дополнительной диагностической информацией, такой как журналы и т.д.
Получение информации о зарезервированных IP адресах или пуле адресов
*/
#Раскомментируйте этот код,
@@ -26,14 +26,18 @@ provider "decort" {
allow_unverified_ssl = true
}
data "decort_cb_grid_post_diagnosis" "grid" {
#id grid для получения информации
#обязательный параметр
data "decort_extnet_reserved_ip_list" "ex_reserved_ip" {
#идентификатор аккаунта, для которого зарезервированы ресурсы
#обязательный параметр
#тип - целое число
gid = 215
account_id = 1111
#идентификатор сети
#опциональный параметр
#тип - целое число
#extnet_id = 1111
}
output "test" {
value = data.decort_cb_grid_post_diagnosis.grid
value = data.decort_extnet_reserved_ip_list.ex_reserved_ip
}

View File

@@ -93,8 +93,8 @@ data "decort_flipgroup_list" "fg" {
#фильтр по id клиентов
#опциональный параметр
#тип - массив строк
#client_ids = ["10","11"]
#тип - массив целых чисел
#client_ids = [10,11]
#фильтр по статусу
#опциональный параметр

View File

@@ -269,7 +269,7 @@ resource "decort_kvmvm" "comp" {
#тип сети
#обязательный параметр
#тип - строка
#возможные значения - "VINS", "EXTNET", "VFNIC", "DPDK"
#возможные значения - "VINS", "EXTNET", "VFNIC", "DPDK" (при выборе типа DPDK, необходимо указать hp_backed = true)
net_type = "VINS"
#id сети
@@ -288,6 +288,13 @@ resource "decort_kvmvm" "comp" {
#опциональный параметр
#тип - целое число
weight = 15
#максимальный объём данных, который может быть передан за одну итерацию
#используется только с сетями типа "DPDK"
#возможные значения - 1-9216
#опциональный параметр
#тип - целое число
mtu = 1500
}
#добавление и удаление тэгов
@@ -389,6 +396,17 @@ resource "decort_kvmvm" "comp" {
#тип - булев
pin_to_stack = true
#список ядер для использования в механизме vcpupinning. Количество указанных ядер должно быть равно количеству виртуальных процессоров ВМ
#игнорируется если cpu_pin=false или pin_to_stack=false
#опциональный параметр
#тип - массив целых чисел
preferred_cpu = [1234, 456]
#флаг для старта компьюта при рестарте ноды
#опциональный параметр
#тип - булев
auto_start_w_node = true
#флаг доступности компьюта для проведения с ним операций
#опциональный параметр
#тип - булев
@@ -409,11 +427,6 @@ resource "decort_kvmvm" "comp" {
#тип - булев
restore = true
#флаг для редеплоя компьюта
#опциональный параметр
#тип - булев
auto_start = true
#флаг для редеплоя компьюта
#опциональный параметр
#тип - булев
@@ -424,11 +437,6 @@ resource "decort_kvmvm" "comp" {
#тип - булев
force_resize = true
#поле для редеплоя компьюта
#опциональный параметр
#тип - строка
data_disks = "KEEP"
#запуск/стоп компьюта
#опциональный параметр
#тип - булев
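Since the example above uses `net_type = "VINS"`, here is an additional hedged fragment showing the DPDK case that the comments describe: a DPDK connection requires `hp_backed = true` and may set the new `mtu` argument (1-9216). IDs are placeholders, other required arguments as in the example above.

```terraform
resource "decort_kvmvm" "dpdk_comp" {
  # ... required arguments as in the example above ...

  # DPDK networks require huge-pages backed RAM
  hp_backed = true

  network {
    net_type = "DPDK"
    net_id   = 4242    # placeholder DPDK network ID
    mtu      = 9000    # optional, 1-9216, DPDK only
  }
}
```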

View File

@@ -52,12 +52,6 @@ resource "decort_vins_static_route" "sr" {
#обязательный параметр
#тип - строка
gateway = "192.168.201.40"
#список виртуальных машин, которым будет предоставлен доступ к роуту
#опциональный параметр
#тип - массив целых чисел
compute_ids = [111, 222]
}
output "sr" {

View File

@@ -0,0 +1,43 @@
/*
Пример использования
Получение информации о зарезервированных IP адресах или пуле адресов
*/
#Раскомментируйте этот код,
#и внесите необходимые правки в версию и путь,
#чтобы работать с установленным вручную (не через hashicorp provider registry) провайдером
/*
terraform {
required_providers {
decort = {
source = "basis/decort/decort"
version = "<VERSION>"
}
}
}
*/
provider "decort" {
authenticator = "decs3o"
#controller_url = <DECORT_CONTROLLER_URL>
controller_url = "https://ds1.digitalenergy.online"
#oauth2_url = <DECORT_SSO_URL>
oauth2_url = "https://sso.digitalenergy.online"
allow_unverified_ssl = true
}
data "decort_cb_extnet_reserved_ip_list" "ex_reserved_ip" {
#идентификатор аккаунта, для которого зарезервированы ресурсы
#обязательный параметр
#тип - целое число
account_id = 1111
#идентификатор сети
#опциональный параметр
#тип - целое число
#extnet_id = 1111
}
output "test" {
value = data.decort_cb_extnet_reserved_ip_list.ex_reserved_ip
}

View File

@@ -168,6 +168,26 @@ resource "decort_cb_extnet" "new_extnet" {
#e_rate = 0
}
#список зарезервированных IP или пула адресов
#опциональный параметр
#тип - блок
reserved_ip {
#идентификатор аккаунта, для которого резервируются ресурсы
#обязательный параметр
#тип - целое число
account_id = 11111
#количество резервируемых IP
#опциональный параметр
#тип - целое число
ip_count = 15
#список резервируемых IP
#опциональный параметр
#тип - массив строк
ips = ["192.168.10.10", "192.168.10.20"]
}
#ID stack на который происходит миграция
#опциональный параметр
#тип - целое число

View File

@@ -32,6 +32,10 @@ data "decort_cb_grid_get_diagnosis" "grid" {
#тип - целое число
gid = 215
#путь, где будет создан архив, если не указан, создается в директории с main.tf с именем "diagnosis.tar.gz"
#обязательный параметр
#тип - строка
file_path = "abcdefg.tar.gz"
}
output "test" {

View File

@@ -66,6 +66,21 @@ data "decort_cb_kvmvm_list" "compute_list" {
#тип - строка
#ip_address = "test"
#фильтр по stack id
#опциональный параметр
#тип - целое число
#stack_id = 123
#фильтр по image id
#опциональный параметр
#тип - целое число
#image_id = 123
#фильтр по cd image id
#опциональный параметр
#тип - целое число
#cd_image_id = 123
#фильтр по имени extnet
#опциональный параметр
#тип - строка

View File

@@ -281,9 +281,8 @@ resource "decort_cb_kvmvm" "comp" {
#network {
#тип сети
#обязательный параметр
#возможные значения - "VINS", "EXTNET", "VFNIC", "DPDK"
#возможные значения - "VINS", "EXTNET", "VFNIC", "DPDK" (при выборе типа DPDK, необходимо указать hp_backed = true)
#тип - строка
#net_type = "VINS"
#ID сети
@@ -302,6 +301,13 @@ resource "decort_cb_kvmvm" "comp" {
#опциональный параметр
#тип - целое число
#weight = 15
#максимальный объём данных, который может быть передан за одну итерацию
#используется только с сетями типа "DPDK"
#возможные значения - 1-9216
#опциональный параметр
#тип - целое число
#mtu = 1500
#}
#добавление и удаление тэгов
@@ -403,10 +409,16 @@ resource "decort_cb_kvmvm" "comp" {
#тип - булев
#pin_to_stack = true
#id stack для добавления компьюта на этот стэк
#список ядер для использования в механизме vcpupinning. Количество указанных ядер должно быть равно количеству виртуальных процессоров ВМ
#игнорируется если cpu_pin=false или pin_to_stack=false
#опциональный параметр
#тип - целое число
#target_stack_id = 1
#тип - массив целых чисел
#preferred_cpu = [1234, 456]
#флаг для старта компьюта при рестарте ноды
#опциональный параметр
#тип - булев
#auto_start_w_node = true
#флаг для принудительного добавления компьюта на стэк
#опциональный параметр
@@ -438,11 +450,6 @@ resource "decort_cb_kvmvm" "comp" {
#тип - булев
#restore = true
#флаг для редеплоя компьюта
#опциональный параметр
#тип - булев
#auto_start = true
#флаг для редеплоя компьюта
#опциональный параметр
#тип - булев
@@ -459,11 +466,6 @@ resource "decort_cb_kvmvm" "comp" {
#тип - булев
#force_resize = true
#поле для редеплоя компьюта
#опциональный параметр
#тип - строка
#data_disks = "KEEP"
#запуск/стоп компьюта
#опциональный параметр
#тип - булев
@@ -485,10 +487,16 @@ resource "decort_cb_kvmvm" "comp" {
#удаление блока удалит настройки только локально, состояние на платформе не изменится
#тип - блок
#libvirt_settings {
#mac адреc
#тип сети
#обязательный параметр
#возможные значения - "VINS", "VFNIC", "DPDK"
#тип - строка
#mac = "52:54:00:00:19:e1"
#net_type = "VINS"
#ID сети
#обязательный параметр
#тип - целое число
#net_id = 1234
#tx mode
#опциональный параметр

View File

@@ -42,7 +42,7 @@ resource "decort_cb_sep" "s" {
#тип sep
#обязательный параметр
#возможные значения - des, dorado, tatlin, hitachi
#возможные значения - des, dorado, tatlin, hitachi, ovs, local, shared
#тип - строка
type = "des"

View File

@@ -51,12 +51,6 @@ resource "decort_cb_vins_static_route" "sr" {
#обязательный параметр
#тип - строка
gateway = "192.168.201.40"
#список виртуальных машин, которым будет предоставлен доступ к роуту
#опциональный параметр
#тип - массив целых чисел
#compute_ids = [111,222]
}
output "sr" {

1
wiki/.gitignore vendored
View File

@@ -1 +0,0 @@
.idea/

View File

@@ -1,7 +0,0 @@
DECORT Terraform Provider позволяет управлять облачными ресурсами на платформе Digital Energy Cloud Orchestration Technology (DECORT) версии 3.7.x и выше посредством Terraform.
С помощью данного провайдера можно организовать программное управление вычислительными ресурсами (_compute_), ресурсными группами, сетевыми и дисковыми ресурсами, образами дисков, кластером, а также другими параметрами облачной платформы DECORT.
Если вы хорошо знакомы с инструментом Terraform и хотите максимально быстро начать использовать платформу DECORT в своих Terraform-проектах, то можете сразу перейти к разделу [Пример работы](https://repository.basistech.ru/BASIS/terraform-provider-decort/src/branch/main/wiki/4.5.2/02.-Пример-работы.md), где приведён подробно откомментированный пример работы с основными видами ресурсов платформы. Если у вас всё же возникнут вопросы по облачной платформе DECORT и порядку авторизации в ней, то обратитесь к главе [«Обзор облачной платформы DECORT»](https://repository.basistech.ru/BASIS/terraform-provider-decort/src/branch/main/wiki/4.5.2/03.-Обзор-облачной-платформы-DECORT.md). Также может оказаться полезной глава [«Инициализация Terraform провайдера DECORT»](https://repository.basistech.ru/BASIS/terraform-provider-decort/src/branch/main/wiki/4.5.2/04.02-Инициализация-Terraform-провайдера-DECORT.md).
Если вы только начинаете использовать инструмент Terraform и облачную платформу DECORT, то рекомендуем вам начать с главы [«Обзор облачной платформы DECORT»](https://repository.basistech.ru/BASIS/terraform-provider-decort/src/branch/main/wiki/4.5.2/03.-Обзор-облачной-платформы-DECORT.md), после чего изучить главы [«_Data source_ функции Terraform провайдера DECORT»](https://repository.basistech.ru/BASIS/terraform-provider-decort/src/branch/main/wiki/4.5.2/06.-Data-source-функции-Terraform-провайдера-DECORT.md) и [«_Resource_ функции Terraform провайдера DECORT»](https://repository.basistech.ru/BASIS/terraform-provider-decort/src/branch/main/wiki/4.5.2/07.-Resource-функции-Terraform-провайдера-DECORT.md). Примеры, приведенные в этих разделах, помогут вам быстро освоить базовые приёмы работы с инструментом Terraform и провайдером DECORT.

View File

@@ -1,92 +0,0 @@
Данный раздел предназначен для тех, кто хорошо знаком с инструментом Terraform, а также имеет представление об основных понятиях и способах авторизации в облачной платформе DECORT.
Ниже приведён подробно откомментированный пример, показывающий, как создать виртуальный сервер (aka _compute_ на базе системы виртуализации KVM x86) в облачной платформе DECORT с помощью соответствующего Terraform провайдера. Сервер будет создан в новой ресурсной группе, к нему будет подключён один предварительно созданный диск, у сервера будет прямое сетевое подключение во внешнюю сеть.
Идентификатор образа операционной системы, на базе которого должен быть создан виртуальный сервер, считывается из облачной платформы с помощью _data source_ функции `decort_image`.
Далее мы с помощью _resource_ функции `decort_resgroup` создаём новую ресурсную группу, в которую будет помещён этот виртуальный сервер. В качестве альтернативы, для получения информации об уже имеющейся ресурсной группе можно использовать _data source_ функцию с таким же названием.
Затем с помощью _resource_ функции `decort_disk` создаётся диск, который будет подключён к виртуальному серверу в качестве дополнительного. Помимо этого дополнительного диска у сервера будет также и загрузочный диск, на который в процессе создания сервера клонируется выбранный образ операционной системы.
Виртуальный сервер - в данном примере на базе системы виртуализации KVM x86 - создаётся посредством _resource_ функции `decort_kvmvm`.
Только авторизованные в контроллере облачной платформы пользователи могут управлять облачными ресурсами. Подробнее о способах авторизации см. [Обзор облачной платформы DECORT](https://repository.basistech.ru/BASIS/terraform-provider-decort/src/branch/main/wiki/4.5.2/03.-Обзор-облачной-платформы-DECORT.md).
```terraform
# 1. Initialize DECORT plugin and connection to DECORT cloud controller
# NOTE: in this example credentials are expected to come from
# DECORT_APP_ID and DECORT_APP_SECRET environmental variables - set them
# in the shell before calling terraform.
# Alternatively you may define plugin parameters app_id and app_secret in
# the TF file, however, this may not be secure if you plan to share this TF
# file with others.
provider "decort" {
  authenticator  = "decs3o"
  controller_url = "<<DECORT_CONTROLLER_URL>>" # specify correct DECORT controller URL, e.g. "https://ds1.digitalenergy.online"
  oauth2_url     = "<<DECORT_SSO_URL>>"        # specify corresponding DECORT SSO URL, e.g. "https://sso.digitalenergy.online"
  app_id         = "<<DECORT_APP_ID>>"         # application ID to access DECORT cloud API in 'decs3o' and 'bvs' authentication mode
  app_secret     = "<<DECORT_APP_SECRET>>"     # application secret to access DECORT cloud API in 'decs3o' and 'bvs' authentication mode, e.g. "ewqfrvea7s890avw804389qwguf234h0otfi3w4eiu"
  # allow_unverified_ssl = true
}

# 2. Load account to use - new VM will belong to this account
data "decort_account" "my_account" {
  account_id = <ACCOUNT_ID> # Specify account ID
}

# 3. Load OS image to use for VM deployment
data "decort_image" "os_image" {
  image_id = <OS_IMAGE_ID> # Specify OS image id, e.g. 1234. You can get accessible image id from data source "decort_image_list"
}

# 4. Create new Resource Group in the selected account, new VM will be created in this RG
resource "decort_resgroup" "my_rg" {
  name       = "NewRgByTF"
  account_id = data.decort_account.my_account.account_id
  gid        = <GRID_ID> # Grid (platform) ID
  # if you want to set resource quota on this Resource Group, uncomment
  # the following code fragment
  # quota {
  #   cpu  = 8    # CPU limit
  #   ram  = 8912 # RAM limit in MB
  #   disk = 96   # disk volume limit in GB
  # }
}

# 5. Create extra disk, which will be attached to the new VM.
# This step is optional - if you do not want extra disks on your VM, skip it
# and comment out extra_disks parameter when creating VM below.
resource "decort_disk" "extra_disk" {
  disk_name  = "extra-disk-for-vm"
  account_id = data.decort_account.my_account.account_id
  gid        = <GRID_ID> # Grid (platform) ID
  size_max   = 5   # disk size in GB
  type       = "D" # disk type, always use "D" for extra disks
  sep_id     = data.decort_image.os_image.sep_id # use the same SEP ID as the OS image
  pool       = "<<DATA_POOL_NAME>>" # consult your DECORT platform admin for configured storage pool names
}

# 6. Create virtual machine (a compute of type KVM VM x86 in this example)
# Now that we have all necessary components at hand, we may create a virtual machine.
# This VM will be based on the previously obtained OS image, located in the specified
# Resource Group, directly connected to an external network, have a boot disk of
# specified size and one extra disk attached.
resource "decort_kvmvm" "my_new_vm" {
  name           = "tf-managed-vm"
  driver         = "KVM_X86" # Compute virtualization driver
  rg_id          = decort_resgroup.my_rg.id
  cpu            = 1    # CPU count
  ram            = 1024 # RAM size in MB, must be even number, ideally a power of 2
  boot_disk_size = 10   # Boot disk size in GB
  image_id       = data.decort_image.os_image.image_id
  description    = "Test KVM VM Compute managed by Terraform"
  extra_disks    = [decort_disk.extra_disk.id]

  network {
    net_type = "EXTNET"
    net_id   = <<EXT_NET_ID>> # specify external network ID to use, consult your DECORT platform admin for correct IDs
    # ip_address = "<<SOME VALID AND FREE IP ADDRESS>>" # you may optionally request a specific IP address
  }
}
```
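With the placeholders above filled in, the standard Terraform workflow applies. A minimal sketch, assuming the decs3o credentials are passed via the environment variables mentioned in the comments:

```bash
export DECORT_APP_ID="<your application ID>"
export DECORT_APP_SECRET="<your application secret>"
terraform init    # download and initialize the DECORT provider
terraform plan    # preview the resource group, disk and VM to be created
terraform apply   # create them in the platform
```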


@@ -1,31 +0,0 @@
## Core concepts
The core concepts are listed below, together with the corresponding arguments of the DECORT Terraform provider.
1. **DECORT cloud platform controller** is the management application that handles user authorization and orchestration of cloud resources.
- The controller address is set in the mandatory argument `controller_url` when the DECORT Terraform provider is initialized, for example `controller_url = "https://ds1.digitalenergy.online"`.
2. **Authorization provider** is an application working over the OAuth2 protocol that issues and validates access tokens for the cloud platform controller in the corresponding authorization modes. All actions in the platform must be performed by authorized users, and the authorization application issues a time-limited access token whose possession confirms successful authorization.
- The authorization provider address is set in the argument `oauth2_url` when the DECORT Terraform provider is initialized, for example `oauth2_url = "https://sso.digitalenergy.online"`.
3. **Subscriber** (_account_) is an entity used to group cloud resources by their association with a particular customer for consumption accounting and billing purposes.
- The subscriber name is set via the `account_name` argument when calling the provider's _resource_ or _data_ functions. Alternatively, the numeric subscriber ID can be specified in the `account_id` argument.
4. **User** (_user_) is a cloud infrastructure user represented by an account record. To be able to manage cloud resources (for example, to create virtual servers or disks), the user must be associated with one or more subscribers and have the corresponding permissions defined by the role model adopted in the DECORT cloud platform. To access the platform, the user must authorize using one of the methods described below in the "Authorization methods" section.
5. **Resource group** (_resource group_) is a way of grouping compute resources (for example, virtual servers grouped by function or by belonging to the same project). A resource group can be thought of as a small personal data centre hosting one or more servers and virtual network segments. A resource group is identified by the combination of the `account` and `name` parameters. Note that the resource group name is unique only within the same `account`.
6. **Compute** (_compute_) is the universal abstraction of a user server in the DECORT platform. Thanks to this abstraction you can, for example, create one virtual machine based on KVM Intel x86 and another based on KVM IBM Power, and then manage them in the same way (change the CPU/RAM allocation, attach or detach disks, and so on) without worrying about their architectural differences. At the same time, since resource typing in Terraform does not support inheritance, the different compute types available in the DECORT platform and abstracted by the unified _compute_ concept are represented in Terraform by different resource types (for example, one type for KVM-based virtual servers and another for upcoming x86-compatible bare metal servers).
7. **Storage resource** (_disk_) is the universal abstraction of a disk resource in the DECORT platform. The platform supports different types of storage systems, yet disks created on different storage systems are managed through a unified set of actions, for example "attach a disk to a _compute_", "increase disk size", "take a disk snapshot", "tune disk performance parameters".
8. **Virtual server** is a _compute_ instance whose technical implementation is a virtual machine running in the DECORT cloud and accessible over the network. A virtual server is characterized by the number of CPUs allocated to it (the `cpu` argument), the amount of RAM (`ram`), and the boot disk size (`boot_disk_size`). When a virtual server is created, the OS image specified in the `image_id` argument is installed onto the boot disk. In addition to the boot disk, several disks for application data can be attached to the virtual server; their list is set by the `extra_disks` argument. A virtual server is identified by the combination of the `name` (server name) and `rg_id` (resource group ID) arguments. Note that the virtual server `name` is unique only within the same resource group.
9. **Virtual network segment** (_Virtual Network Segment_, or _ViNS_) is a network segment together with the virtual infrastructure that makes it work, which a user can create for their own needs at the resource group or subscriber (_account_) level. A ViNS can be created fully isolated from external networks (see _External Network_ below) or with a connection to an external network. A DHCP service runs inside a ViNS and manages the IP addresses of the _compute_ instances connected to it.
10. **External network** (_External Network_) is a network segment through which the DECORT platform communicates with network resources outside of it. For example, in a public DECORT-based cloud the external network is the Internet. Unlike a ViNS, the platform does not manage the external network; it only uses its resources. Several external networks with different IP address ranges can be configured in the platform, and there is a mechanism for controlling user access to external networks.
11. Network access to a _compute_ instance (virtual server) is provided by connecting it to a ViNS and/or directly to an external network (_External Network_). The same _compute_ instance can have several simultaneous connections to different ViNS and/or different external networks (see the sketch below).
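To make the mapping between these concepts and provider blocks concrete, here is a minimal sketch that looks up an account, creates a resource group and a ViNS in it, and attaches a compute both to the ViNS and to an external network. All IDs are placeholders, and the `decort_vins` argument names (`ext_net_id` in particular) are given for illustration only; consult the _Resource_ functions chapter for the authoritative schemas.

```terraform
# Account (subscriber) that will own everything below.
data "decort_account" "demo" {
  account_id = 12345 # placeholder account ID
}

# Resource group: a small personal data centre; its name is unique only within the account.
resource "decort_resgroup" "demo" {
  name       = "demo-rg"
  account_id = data.decort_account.demo.account_id
  gid        = 212 # placeholder Grid (platform) ID
}

# ViNS with a DHCP service for the computes in this resource group.
# Omit the external network connection to keep it fully isolated.
resource "decort_vins" "demo" {
  name       = "demo-vins"
  rg_id      = decort_resgroup.demo.id
  ext_net_id = 6 # placeholder external network ID; argument name is illustrative
}

# Compute connected both to the ViNS and directly to an external network.
resource "decort_kvmvm" "demo" {
  name           = "demo-vm" # unique only within the resource group
  driver         = "KVM_X86"
  rg_id          = decort_resgroup.demo.id
  cpu            = 2
  ram            = 2048
  boot_disk_size = 10
  image_id       = 1234 # placeholder OS image ID

  network {
    net_type = "VINS"
    net_id   = decort_vins.demo.id
  }

  network {
    net_type = "EXTNET"
    net_id   = 6 # placeholder external network ID
  }
}
```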
## Authorization methods
The DECORT cloud platform supports three basic authorization types:
1. Via an authorization provider working over the _OAuth2_ protocol. This is the preferred method, as it provides greater flexibility and security. To authorize in this mode, specify the `oauth2_url` and `controller_url` parameters when initializing the DECORT Terraform provider, and additionally provide one of the following:
- The Application ID & Application secret pair corresponding to the user on whose behalf cloud resources will be managed in the current session. While validating the provided Application ID & Application secret, the module obtains a token (JSON Web Token, JWT) from the authorization provider, which is then used to access the specified DECORT controller. To authorize this way, set the argument `authenticator = "decs3o"` and the arguments `app_id` and `app_secret` (or define the corresponding environment variables `DECORT_APP_ID` and `DECORT_APP_SECRET`) when initializing the DECORT Terraform provider.
- A JSON Web Token: an access token obtained in advance from the authorization provider and associated with the specific user on whose behalf cloud resources will be managed in the current session. To authorize this way, set the argument `authenticator = "jwt"` and the argument `jwt` (or define the environment variable `DECORT_JWT`) when initializing the DECORT Terraform provider.
2. Via a _username : password_ combination. This mode does not use any external authorization provider and assumes that a user with this combination is registered directly on the DECORT cloud platform controller specified in the `controller_url` parameter.
- For the provider to authorize this way, set the argument `authenticator = "legacy"` and the arguments `user` and `password` (or define the corresponding environment variables `DECORT_USER` and `DECORT_PASSWORD`) when initializing it.
3. Via an authorization provider working over the _OAuth2_oidc_ protocol. To authorize in this mode, when initializing the DECORT Terraform provider specify the `oauth2_url` and `controller_url` parameters, the Application ID & Application secret, the _username and password_ of the user on whose behalf cloud resources will be managed in the current session, and the _domain name_. While validating the provided Application ID & Application secret and the username/password pair, the module obtains a token (JSON Web Token, JWT) from the authorization provider, which is then used to access the specified DECORT controller. To authorize this way, set the argument `authenticator = "bvs"`, set the arguments `app_id` and `app_secret` (or define the corresponding environment variables `DECORT_APP_ID` and `DECORT_APP_SECRET`), `bvs_user` and `bvs_password` (or the environment variables `DECORT_BVS_USER` and `DECORT_BVS_PASSWORD`), and also specify `domain` (or the environment variable `DECORT_DOMAIN`). A configuration sketch for this mode is given below.
After successful authorization the user (or the client application) gets access to the resources managed by the corresponding DECORT controller. Access is granted within the subscribers (_account_) this user (_user_) is associated with, and according to the roles assigned to them.
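For illustration, here is a hedged sketch of a provider block for the bvs mode, built only from the arguments listed above (the URLs and credential values are placeholders; in practice it is safer to supply the secrets via the corresponding `DECORT_*` environment variables). For the legacy mode the block reduces to `authenticator = "legacy"` plus `user` and `password` (or `DECORT_USER` / `DECORT_PASSWORD`), with no `oauth2_url`.

```terraform
provider "decort" {
  authenticator  = "bvs"
  controller_url = "https://ds1.digitalenergy.online" # placeholder controller URL
  oauth2_url     = "https://sso.digitalenergy.online" # placeholder authorization provider URL
  app_id         = "<APP_ID>"       # or set DECORT_APP_ID
  app_secret     = "<APP_SECRET>"   # or set DECORT_APP_SECRET
  bvs_user       = "<BVS_USER>"     # or set DECORT_BVS_USER
  bvs_password   = "<BVS_PASSWORD>" # or set DECORT_BVS_PASSWORD
  domain         = "<DOMAIN>"       # or set DECORT_DOMAIN
  # allow_unverified_ssl = true     # only for installations without valid certificates
}
```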
## User and administrative API groups
The user API group is the group of DECORT platform APIs that allows operations to be performed with regular user permissions. It covers most tasks.
The administrative API group is the group of DECORT platform APIs that allows operations to be performed with elevated permissions. These permissions imply an extended set of operations on resources, an extended set of resources, and extended information. Administrator rights are required to interact with this API group.
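A hedged note on switching between the two groups: in a number of provider versions the switch is controlled outside the configuration, via the `DECORT_ADMIN_MODE` environment flag rather than a provider argument. Treat the snippet below as an assumption and check the installation chapter for your provider version for the authoritative mechanism.

```bash
# Assumption: the provider reads the DECORT_ADMIN_MODE environment variable.
# Set it before running terraform to work through the administrative API group;
# unset it to fall back to the user API group.
export DECORT_ADMIN_MODE=1
terraform plan
```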


@@ -1,6 +0,0 @@
This section describes:
- System requirements
- Provider installation
- Provider initialization
- Switching the working mode between the different API groups
- Obtaining the gid/grid_id of the site


@@ -1,150 +0,0 @@
## System requirements
To run the provider you need a machine with Terraform installed.
In addition, since Terraform changed the way it discovers and initializes local providers starting with version 0.12, configuring this provider for Terraform 0.12 or newer requires a few extra steps. For details see [8.3 Configuring a local provider for newer Terraform versions](https://repository.basistech.ru/BASIS/terraform-provider-decort/src/branch/main/wiki/4.5.2/08.-Полезные-советы#user-content-8-3-настройка-локального-провайдера-для-работы-с-новыми-версиями-terraform.md).
## Installation
Starting with provider version `4.3.0`, the release archive contains installer scripts.
To perform the installation:
1. Go to https://repository.basistech.ru/BASIS/terraform-provider-decort/releases
2. Choose the required provider version matching your operating system.
3. Download the archive.
4. Unpack the archive.
5. Run the installer script, `install.sh`, or `install.bat` on Windows.<br/>
*Before running `install.sh`, do not forget to make the file executable:*
```bash
chmod u+x install.sh
```
6. Wait for the success message. The installer prints the up-to-date provider configuration block; copy it:
```bash
DECORT provider version 4.3.0 has been successfully installed
Copy this provider configuration to main.tf file:
terraform {
required_providers {
decort = {
version = "4.3.0"
source = "basis/decort/decort"
}
}
}
```
7. After that, create a `main.tf` file in a working directory, which can be located wherever is convenient for you.
In this example, the working directory with the `main.tf` file is located at:
```bash
~/work/tfdir/main.tf
```
8. Paste into `main.tf` the provider configuration block printed by the installer:
```terraform
terraform {
required_providers {
decort = {
version = "4.3.0"
source = "basis/decort/decort"
}
}
}
```
9. Add the provider initialization block to the file:
```terraform
provider "decort" {
authenticator = "decs3o"
controller_url = "https://mr4.digitalenergy.online"
oauth2_url = "https://sso.digitalenergy.online"
allow_unverified_ssl = true
}
```
10. Run the following command in the console:
```bash
terraform init
```
11. If the installation is successful, Terraform initializes the provider and is ready for further work.
## Installation from releases
The DECORT Terraform provider has compiled release versions, available at [Releases](https://repository.basistech.ru/BASIS/terraform-provider-decort/releases).
To install from a release:
1. Go to https://repository.basistech.ru/BASIS/terraform-provider-decort/releases
2. Choose the required provider version matching your operating system.
3. Download the archive.
4. Unpack the archive.
5. Place the resulting file (from the `bin/` directory) into:
Linux:
```bash
~/.terraform.d/plugins/${host_name}/${namespace}/${type}/${version}/${target}
```
Windows:
```powershell
%APPDATA%\terraform.d\plugins\${host_name}\${namespace}\${type}\${version}\${target}
```
Where:
- host_name is the host name of the provider holder, e.g. basis
- namespace is the host namespace, e.g. decort
- type is the provider type; it may coincide with the namespace, e.g. decort
- version is the provider version, e.g. 4.3.0
- target is the operating system architecture, e.g. windows_amd64
The example below shows the provider path on a Linux machine:
```bash
~/.terraform.d/plugins/basis/decort/decort/4.3.0/linux_amd64/tf-provider
#
# basis       - host_name
# decort      - namespace
# decort      - type
# 4.3.0       - version
# linux_amd64 - target
# tf-provider - executable file
```
6. After that, create a `main.tf` file in a working directory, which can be located wherever is convenient for you.
In this example, the working directory with the `main.tf` file is located at:
```bash
~/work/tfdir/main.tf
```
7. Add the following block to `main.tf`:
```terraform
terraform {
required_providers {
decort = {
version = "4.3.0"
source = "basis/decort/decort"
}
}
}
```
The `version` field specifies the provider version.
<br/>
**ATTENTION: The version in this block and the version in the provider executable path must match!**
The `source` field contains the repository path of the form:
```bash
${host_name}/${namespace}/${type}
```
8. Add the provider initialization block to the file:
```terraform
provider "decort" {
authenticator = "decs3o"
controller_url = "https://mr4.digitalenergy.online"
oauth2_url = "https://sso.digitalenergy.online"
allow_unverified_ssl = true
}
```
9. Run the following command in the console:
```bash
terraform init
```
10. If the installation is successful, Terraform initializes the provider and is ready for further work.
