diff --git a/CHANGELOG.md b/CHANGELOG.md
index ec16da8..a0e10f2 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,11 +1,170 @@
-# Changelog for version 10.0.1
+# Changelog for version 11.0.0
## Added
+### Global
+| Task ID | Description |
+| --- | --- |
+| BANS-909 | Added the Python library `dynamix_sdk` to the system requirements. |
+| BANS-918 | Added the `ignore_api_compatibility` parameter, common to all modules. |
+| BANS-913 | Added the `ignore_sdk_version_check` parameter, common to all modules. |
+| BANS-954 | Added the `decort_vm` module, renamed from `decort_kvmvm`. |
+| BANS-953 | Added the `decort_image` module, renamed from `decort_osimage`. |
+| BANS-997 | Added the `decort_security_group_list` module, which lists the available security groups. |
+| BANS-884 | Added the `decort_disk_list` module, which lists the available disks. |
+| BANS-936 | Added the `decort_rg_list` module, which lists the available resource groups. |
+| BANS-949 | Added the `decort_vins_list` module, which lists the available internal networks. |
+| BANS-940 | Added the `decort_vm_list` module, which lists the available virtual machines. |
+| BANS-959 | Added the `decort_flip_group_list` module, which lists the available floating IP groups. |
+| BANS-952 | Added the `decort_image_list` module, which lists the available images. |
+| BANS-983 | Added the `decort_account_list` module, which lists the available accounts. |
+| BANS-985 | Added the `decort_audit_list` module, which lists audits. |
+| BANS-988 | Added the `decort_trunk_list` module, which lists the available trunk ports. |
+| BANS-987 | Added the `decort_zone_list` module, which lists the available zones. |
+| BANS-989 | Added the `decort_storage_policy_list` module, which lists storage policies. |
+| BANS-945 | Added the `decort_user` module, renamed from `decort_user_info`. |
-## Removed
+### Module decort_vm
+| Task ID | Description |
+| --- | --- |
+| BANS-926 | Added the default value `Q35` for the `chipset` parameter when creating a VM. |
+| BANS-933 | Added the return value `pinned_to_node`, renamed from `pinned_to_stack`. |
+| BANS-934 | Added the return value `read_only`. |
+| BANS-994 | Added the ability to set the `mtu` parameter when creating a network interface for a TRUNK network, and to change `mtu` on an existing interface connected to a TRUNK network. |
+| BANS-991 | Added the ability to specify the `ip_addr` parameter when attaching and modifying a `DPDK` network. |
+| BANS-1017 | Added the return value `disks.cache`. |
+| BANS-1034 | Added the ability to specify the `ip_addr` parameter when attaching and modifying a `VFNIC` network. |
+| BANS-992 | Added the `networks.net_prefix` parameter. |
-## Fixed
### Module decort_group
| Task ID | Description |
| --- | --- |
-| BANS-941 | Fixed a bug that prevented a group from starting after it was created with the `timeoutStart` parameter specified. |
+| BANS-927 | Added the default value `Q35` for the `chipset` parameter when creating a group. |
+
+### Module decort_k8s
+| Task ID | Description |
+| --- | --- |
+| BANS-928 | Added the default value `Q35` for the `chipset` parameter when creating a cluster. |
+
+### Module decort_account
+| Task ID | Description |
+| --- | --- |
+| BANS-966 | Added the `get_resource_consumption` parameter and the `resource_consumption` return value. |
+
+### Module decort_trunk
+| Task ID | Description |
+| --- | --- |
+| BANS-993 | Added the return value `mtu`. |
+| BANS-976 | Added the return values `created_datetime`, `deleted_datetime`, `updated_datetime`. |
+
+### Module decort_zone
+| Task ID | Description |
+| --- | --- |
+| BANS-970 | Added the return values `created_datetime` and `updated_datetime`, as well as the return values `account_ids`, `bservice_ids`, `vm_ids`, `extnet_ids`, `k8s_ids`, `lb_ids`, `vins_ids`, renamed from `accountIds`, `bserviceIds`, `computeIds`, `extnetIds`, `k8sIds`, `lbIds`, `vinsIds`. |
+| BANS-1024 | Added the return value `node_auto_start`. |
+
+### Module decort_vm_snapshot
+| Task ID | Description |
+| --- | --- |
+| BANS-978 | Added the return value `datetime`. |
+
+### Module decort_storage_policy
+| Task ID | Description |
+| --- | --- |
+| BANS-977 | Added the return values `sep_name`, `sep_tech_status`. |
+
+### Module decort_disk
+| Task ID | Description |
+| --- | --- |
+| BANS-1019 | Added the return value `cache_mode`. |
+| BANS-1050 | Added the return value `blkdiscard`. |
+
+## Removed
+### Global
+| Task ID | Description |
+| --- | --- |
+| BANS-954 | Removed the `decort_kvmvm` module, renamed to `decort_vm`. |
+| BANS-969 | The `decort_account_info` module has been split up; its functionality has been moved to the modules `decort_disk_list`, `decort_rg_list`, `decort_vins_list`, `decort_vm_list`, `decort_flip_group_list`, `decort_image_list`, and `decort_account`. |
+| BANS-953 | Removed the `decort_osimage` module, renamed to `decort_image`. |
+| BANS-945 | Removed the `decort_user_info` module, renamed to `decort_user`. |
+
+### Module decort_account
+| Task ID | Description |
+| --- | --- |
+| BANS-924 | Removed the `quotas.ext_traffic` parameter. |
+| BANS-998 | Removed the default value of the `state` parameter. |
+
+### Module decort_rg
+| Task ID | Description |
+| --- | --- |
+| BANS-925 | Removed the `quotas.net_transfer` parameter. |
+
+### Module decort_vm
+| Task ID | Description |
+| --- | --- |
+| BANS-926 | Removed the default value `i440fx` of the `chipset` parameter when creating a VM. |
+| BANS-933 | Removed the return value `pinned_to_stack`, renamed to `pinned_to_node`. |
+| BANS-961 | The `storage_policy_id` parameter is no longer required when re-creating a boot disk. |
+
+### Module decort_group
+| Task ID | Description |
+| --- | --- |
+| BANS-927 | Removed the default value `i440fx` of the `chipset` parameter when creating a group. |
+| BANS-1027 | Removed the `driver` parameter. |
+
+### Module decort_k8s
+| Task ID | Description |
+| --- | --- |
+| BANS-928 | Removed the default value `i440fx` of the `chipset` parameter when creating a cluster. |
+
+### Module decort_user
+| Task ID | Description |
+| --- | --- |
+| BANS-983 | Removed the `accounts` parameter and the `accounts` return value; this functionality has been moved to the `decort_account_list` module. |
+| BANS-985 | Removed the `audits` parameter and the `audits` return value; this functionality has been moved to the `decort_audit_list` module. |
+| BANS-988 | Removed the `trunks` parameter and the `trunks` return value; this functionality has been moved to the `decort_trunk_list` module. |
+| BANS-987 | Removed the `zones` parameter and the `zones` return value; this functionality has been moved to the `decort_zone_list` module. |
+| BANS-989 | Removed the `storage_policies` parameter and the `storage_policies` return value; this functionality has been moved to the `decort_storage_policy_list` module. |
+| BANS-997 | Removed the `security_groups` parameter and the `security_groups` return value; this functionality has been moved to the `decort_security_group_list` module. |
+
+### Module decort_zone
+| Task ID | Description |
+| --- | --- |
+| BANS-970 | Removed the return values `accountIds`, `bserviceIds`, `computeIds`, `extnetIds`, `k8sIds`, `lbIds`, `vinsIds`, renamed to `account_ids`, `bservice_ids`, `vm_ids`, `extnet_ids`, `k8s_ids`, `lb_ids`, `vins_ids`. |
+
+### Module decort_disk
+| Task ID | Description |
+| --- | --- |
+| BANS-1004 | Removed the `reason` parameter. |
+
+### Module decort_security_group
+| Task ID | Description |
+| --- | --- |
+| BANS-1000 | Removed the return value `rules.remote_ip_prefix`, renamed to `rules.remote_net_cidr`. |
+| BANS-1013 | Removed the parameter `rules.objects.remote_ip_prefix`, renamed to `rules.objects.remote_net_cidr`. |
+
+### Module decort_vm_snapshot
+| Task ID | Description |
+| --- | --- |
+| BANS-1012 | Removed the return value `disks`, renamed to `disk_ids`. |
+
+## Fixed
+### Module decort_vm
+| Task ID | Description |
+| --- | --- |
+| BANS-996 | The `mac`, `security_groups`, `enable_secgroups`, and `enabled` parameters of a DPDK network interface could change when `mtu` was modified. |
+| BANS-1052 | The `numa_affinity`, `cpu_pin`, and `hp_backed` parameters were not applied when creating a VM without an image. |
+
+### Module decort_bservice
+| Task ID | Description |
+| --- | --- |
+| BANS-389 | After creating a basic service, the module did not return information about the created object. |
+
+### Module decort_vm_snapshot
+| Task ID | Description |
+| --- | --- |
+| BANS-1022 | Information about a snapshot was not returned after it was created. |
+
+### Module decort_k8s
+| Task ID | Description |
+| --- | --- |
+| BANS-1033 | The module made an unnecessary request to the `/cloudapi/k8s/update` API endpoint, passing parameters that caused no changes. |
diff --git a/Makefile b/Makefile
deleted file mode 100644
index d0864a0..0000000
--- a/Makefile
+++ /dev/null
@@ -1,3 +0,0 @@
-dev:
- pip install -r requirements-dev.txt
- pre-commit install
diff --git a/README.md b/README.md
index 58f1c1e..1100576 100644
--- a/README.md
+++ b/README.md
@@ -5,7 +5,8 @@
| Platform version | Ansible modules version |
|:----------------:|:--------------------------:|
-| 4.4.0 | 10.0.x |
+| 4.5.0 | 11.0.x |
+| 4.4.0 | 10.0.x |
| 4.4.0 build 963 | 9.0.x |
| 4.3.0 | 8.0.x |
| 4.2.0 | 7.0.x, 7.1.x, 7.2.x |
diff --git a/examples/decort_osimage/create-osimage.yaml b/examples/decort_image/create-image.yaml
similarity index 92%
rename from examples/decort_osimage/create-osimage.yaml
rename to examples/decort_image/create-image.yaml
index ae616e4..ee81b0e 100644
--- a/examples/decort_osimage/create-osimage.yaml
+++ b/examples/decort_image/create-image.yaml
@@ -1,11 +1,11 @@
---
#
-# DECORT osimage module example
+# DECORT image module example
#
- hosts: localhost
tasks:
- name: create
- decort_osimage:
+ decort_image:
authenticator: oauth2
verify_ssl: False
controller_url: "https://ds1.digitalenergy.online"
diff --git a/examples/decort_osimage/create-virtual-osimage.yaml b/examples/decort_image/create-virtual-image.yaml
similarity index 66%
rename from examples/decort_osimage/create-virtual-osimage.yaml
rename to examples/decort_image/create-virtual-image.yaml
index 5d193e3..93dc79c 100644
--- a/examples/decort_osimage/create-virtual-osimage.yaml
+++ b/examples/decort_image/create-virtual-image.yaml
@@ -1,14 +1,14 @@
---
#
-# DECORT osimage module example
+# DECORT image module example
#
- hosts: localhost
tasks:
- - name: create_virtual_osimage
- decort_osimage:
+ - name: create_virtual_image
+ decort_image:
authenticator: oauth2
controller_url: "https://ds1.digitalenergy.online"
image_name: "alpine_linux_3.14.0"
virt_name: "alpine_last"
delegate_to: localhost
- register: osimage
+ register: image
diff --git a/examples/decort_osimage/get-osimage.yaml b/examples/decort_image/get-image.yaml
similarity index 75%
rename from examples/decort_osimage/get-osimage.yaml
rename to examples/decort_image/get-image.yaml
index 3e61612..445dcce 100644
--- a/examples/decort_osimage/get-osimage.yaml
+++ b/examples/decort_image/get-image.yaml
@@ -1,11 +1,11 @@
---
#
-# DECORT osimage module example
+# DECORT image module example
#
- hosts: localhost
tasks:
- - name: get_osimage
- decort_osimage:
+ - name: get_image
+ decort_image:
authenticator: oauth2
controller_url: "https://ds1.digitalenergy.online"
image_name: "alpine_linux_3.14.0"
diff --git a/examples/decort_osimage/rename-osimage.yaml b/examples/decort_image/rename-image.yaml
similarity index 68%
rename from examples/decort_osimage/rename-osimage.yaml
rename to examples/decort_image/rename-image.yaml
index 0a9dcc3..4401dd0 100644
--- a/examples/decort_osimage/rename-osimage.yaml
+++ b/examples/decort_image/rename-image.yaml
@@ -1,14 +1,14 @@
---
#
-# DECORT osimage module example
+# DECORT image module example
#
- hosts: localhost
tasks:
- - name: rename_osimage
- decort_osimage:
+ - name: rename_image
+ decort_image:
authenticator: oauth2
controller_url: "https://ds1.digitalenergy.online"
image_name: "alpine_linux_3.14.0v2.0"
image_id: 54321
delegate_to: localhost
- register: osimage
+ register: image
diff --git a/library/decort_account.py b/library/decort_account.py
index bc7f444..c7d143f 100644
--- a/library/decort_account.py
+++ b/library/decort_account.py
@@ -62,6 +62,10 @@ class DecortAccount(DecortController):
name=dict(
type='str',
),
+ get_resource_consumption=dict(
+ type='bool',
+ default=False,
+ ),
quotas=dict(
type='dict',
options=dict(
@@ -71,9 +75,6 @@ class DecortAccount(DecortController):
disks_size=dict(
type='int',
),
- ext_traffic=dict(
- type='int',
- ),
gpu=dict(
type='int',
),
@@ -94,7 +95,6 @@ class DecortAccount(DecortController):
'disabled',
'present',
],
- default='present',
),
sep_pools=dict(
type='list',
@@ -131,7 +131,7 @@ class DecortAccount(DecortController):
"""
arg_state = self.aparams['state']
- if 'absent' in arg_state:
+ if arg_state is not None and 'absent' in arg_state:
# Parameters or combinations of parameters that can
# cause changing the object.
changing_params = [
@@ -175,6 +175,7 @@ class DecortAccount(DecortController):
if check_error:
self.exit(fail=True)
+ @DecortController.handle_sdk_exceptions
def run(self):
self.get_info()
self.check_amodule_args_for_change()
@@ -187,6 +188,7 @@ class DecortAccount(DecortController):
self.acc_id, self._acc_info = self.account_find(
account_name=self.aparams['name'],
account_id=self.aparams['id'],
+ resource_consumption=self.aparams['get_resource_consumption'],
)
# If this is a repeated getting info
else:
@@ -195,6 +197,9 @@ class DecortAccount(DecortController):
if not self.amodule.check_mode:
self.acc_id, self._acc_info = self.account_find(
account_id=self.acc_id,
+ resource_consumption=(
+ self.aparams['get_resource_consumption']
+ ),
)
self.facts = self.acc_info
@@ -361,7 +366,6 @@ class DecortAccount(DecortController):
quotas_naming = [
['cpu', 'CU_C', 'cpu_quota'],
['disks_size', 'CU_DM', 'disks_size_quota'],
- ['ext_traffic', 'CU_NP', 'ext_traffic_quota'],
['gpu', 'gpu_units', 'gpu_quota'],
['public_ip', 'CU_I', 'public_ip_quota'],
['ram', 'CU_M', 'ram_quota'],
diff --git a/library/decort_account_info.py b/library/decort_account_info.py
index ce47859..b71679f 100644
--- a/library/decort_account_info.py
+++ b/library/decort_account_info.py
@@ -8,565 +8,46 @@ description: See L(Module Documentation,https://repository.basistech.ru/BASIS/de
'''
from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.decort_utils import DecortController
-
-
-class DecortAccountInfo(DecortController):
- def __init__(self):
- super().__init__(AnsibleModule(**self.amodule_init_args))
-
- @property
- def amodule_init_args(self) -> dict:
- return self.pack_amodule_init_args(
- argument_spec=dict(
- audits=dict(
- type='bool',
- default=False
- ),
- computes=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- ext_net_id=dict(
- type='int',
- ),
- ext_net_name=dict(
- type='str'
- ),
- id=dict(
- type='int',
- ),
- ip=dict(
- type='str'
- ),
- name=dict(
- type='str'
- ),
- rg_id=dict(
- type='int',
- ),
- rg_name=dict(
- type='str'
- ),
- tech_status=dict(
- type='str',
- choices=self.COMPUTE_TECH_STATUSES,
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- required=True,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=self.FIELDS_FOR_SORTING_ACCOUNT_COMPUTE_LIST, # noqa: E501
- required=True,
- ),
- ),
- ),
- ),
- ),
- disks=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- id=dict(
- type='int',
- ),
- name=dict(
- type='str',
- ),
- size=dict(
- type='int',
- ),
- type=dict(
- type='str',
- choices=self.DISK_TYPES,
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- required=True,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=self.FIELDS_FOR_SORTING_ACCOUNT_DISK_LIST, # noqa: E501
- required=True,
- ),
- ),
- ),
- ),
- ),
- flip_groups=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- ext_net_id=dict(
- type='int',
- ),
- id=dict(
- type='int',
- ),
- ip=dict(
- type='str',
- ),
- name=dict(
- type='str',
- ),
- vins_id=dict(
- type='int',
- ),
- vins_name=dict(
- type='str',
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- required=True,
- ),
- ),
- ),
- ),
- ),
- id=dict(
- type='int',
- ),
- images=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- id=dict(
- type='int',
- ),
- name=dict(
- type='str',
- ),
- type=dict(
- type='str',
- choices=self.IMAGE_TYPES,
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- required=True,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=self.FIELDS_FOR_SORTING_ACCOUNT_IMAGE_LIST, # noqa: E501
- required=True,
- ),
- ),
- ),
- ),
- ),
- name=dict(
- type='str',
- ),
- resource_groups=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- id=dict(
- type='int',
- ),
- name=dict(
- type='str'
- ),
- status=dict(
- type='str',
- choices=self.RESOURCE_GROUP_STATUSES,
- ),
- vins_id=dict(
- type='int'
- ),
- vm_id=dict(
- type='int'
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- required=True,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=self.FIELDS_FOR_SORTING_ACCOUNT_RG_LIST, # noqa: E501
- required=True,
- ),
- ),
- ),
- ),
- ),
- resource_consumption=dict(
- type='bool',
- default=False
- ),
- vinses=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- ext_ip=dict(
- type='str',
- ),
- id=dict(
- type='int',
- ),
- name=dict(
- type='str'
- ),
- rg_id=dict(
- type='int',
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- required=True,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=self.FIELDS_FOR_SORTING_ACCOUNT_VINS_LIST, # noqa: E501
- required=True,
- ),
- ),
- ),
- ),
- ),
- ),
- mutually_exclusive=[
- ('id', 'name')
- ],
- required_one_of=[
- ('id', 'name')
- ],
- supports_check_mode=True,
- )
-
- @property
- def mapped_computes_args(self) -> None | dict:
- """
- Map the module argument `computes` to
- arguments dictionary for the method
- `DecortController.account_computes`
- (excluding for `account_id`).
- """
-
- input_args = self.aparams['computes']
- if not input_args:
- return input_args
-
- mapped_args = {}
- if input_args['filter']:
- mapped_args['compute_id'] = input_args['filter']['id']
- mapped_args['compute_ip'] = input_args['filter']['ip']
- mapped_args['compute_name'] = input_args['filter']['name']
- mapped_args['compute_tech_status'] =\
- input_args['filter']['tech_status']
- mapped_args['ext_net_id'] = input_args['filter']['ext_net_id']
- mapped_args['ext_net_name'] =\
- input_args['filter']['ext_net_name']
- mapped_args['rg_id'] = input_args['filter']['rg_id']
- mapped_args['rg_name'] = input_args['filter']['rg_name']
- if input_args['pagination']:
- mapped_args['page_number'] =\
- input_args['pagination']['number']
- mapped_args['page_size'] =\
- input_args['pagination']['size']
- if input_args['sorting']:
- mapped_args['sort_by_asc'] =\
- input_args['sorting']['asc']
- mapped_args['sort_by_field'] =\
- input_args['sorting']['field']
-
- return mapped_args
-
- @property
- def mapped_disks_args(self) -> None | dict:
- """
- Map the module argument `disks` to
- arguments dictionary for the method
- `DecortController.account_disks`
- (excluding for `account_id`).
- """
-
- input_args = self.aparams['disks']
- if not input_args:
- return input_args
-
- mapped_args = {}
- if input_args['filter']:
- mapped_args['disk_id'] = input_args['filter']['id']
- mapped_args['disk_name'] = input_args['filter']['name']
- mapped_args['disk_size'] = input_args['filter']['size']
- mapped_args['disk_type'] = input_args['filter']['type']
- if input_args['pagination']:
- mapped_args['page_number'] =\
- input_args['pagination']['number']
- mapped_args['page_size'] =\
- input_args['pagination']['size']
- if input_args['sorting']:
- mapped_args['sort_by_asc'] =\
- input_args['sorting']['asc']
- mapped_args['sort_by_field'] =\
- input_args['sorting']['field']
-
- return mapped_args
-
- @property
- def mapped_flip_groups_args(self) -> None | dict:
- """
- Map the module argument `flip_groups` to
- arguments dictionary for the method
- `DecortController.account_flip_groups`
- (excluding for `account_id`).
- """
-
- input_args = self.aparams['flip_groups']
- if not input_args:
- return input_args
-
- mapped_args = {}
- if input_args['filter']:
- mapped_args['ext_net_id'] = input_args['filter']['ext_net_id']
- mapped_args['flig_group_id'] = input_args['filter']['id']
- mapped_args['flig_group_ip'] = input_args['filter']['ip']
- mapped_args['flig_group_name'] = input_args['filter']['name']
- mapped_args['vins_id'] = input_args['filter']['vins_id']
- mapped_args['vins_name'] = input_args['filter']['vins_name']
- if input_args['pagination']:
- mapped_args['page_number'] =\
- input_args['pagination']['number']
- mapped_args['page_size'] =\
- input_args['pagination']['size']
-
- return mapped_args
-
- @property
- def mapped_images_args(self) -> None | dict:
- """
- Map the module argument `images` to
- arguments dictionary for the method
- `DecortController.account_images`
- (excluding for `account_id`).
- """
-
- input_args = self.aparams['images']
- if not input_args:
- return input_args
-
- mapped_args = {}
- if input_args['filter']:
- mapped_args['image_id'] = input_args['filter']['id']
- mapped_args['image_name'] = input_args['filter']['name']
- mapped_args['image_type'] = input_args['filter']['type']
- if input_args['pagination']:
- mapped_args['page_number'] =\
- input_args['pagination']['number']
- mapped_args['page_size'] =\
- input_args['pagination']['size']
- if input_args['sorting']:
- mapped_args['sort_by_asc'] =\
- input_args['sorting']['asc']
- mapped_args['sort_by_field'] =\
- input_args['sorting']['field']
-
- return mapped_args
-
- @property
- def mapped_rg_args(self) -> None | dict:
- """
- Map the module argument `resource_groups` to
- arguments dictionary for the method
- `DecortController.account_resource_groups`
- (excluding for `account_id`).
- """
-
- input_args = self.aparams['resource_groups']
- if not input_args:
- return input_args
-
- mapped_args = {}
- if input_args['filter']:
- mapped_args['rg_id'] =\
- input_args['filter']['id']
- mapped_args['rg_name'] =\
- input_args['filter']['name']
- mapped_args['rg_status'] =\
- input_args['filter']['status']
- mapped_args['vins_id'] =\
- input_args['filter']['vins_id']
- mapped_args['vm_id'] =\
- input_args['filter']['vm_id']
- if input_args['pagination']:
- mapped_args['page_number'] =\
- input_args['pagination']['number']
- mapped_args['page_size'] =\
- input_args['pagination']['size']
- if input_args['sorting']:
- mapped_args['sort_by_asc'] =\
- input_args['sorting']['asc']
- mapped_args['sort_by_field'] =\
- input_args['sorting']['field']
-
- return mapped_args
-
- @property
- def mapped_vinses_args(self) -> None | dict:
- """
- Map the module argument `vinses` to
- arguments dictionary for the method
- `DecortController.account_vinses`
- (excluding for `account_id`).
- """
-
- input_args = self.aparams['vinses']
- if not input_args:
- return input_args
-
- mapped_args = {}
- if input_args['filter']:
- mapped_args['vins_id'] = input_args['filter']['id']
- mapped_args['vins_name'] = input_args['filter']['name']
- mapped_args['ext_ip'] = input_args['filter']['ext_ip']
- mapped_args['rg_id'] = input_args['filter']['rg_id']
- if input_args['pagination']:
- mapped_args['page_number'] =\
- input_args['pagination']['number']
- mapped_args['page_size'] =\
- input_args['pagination']['size']
- if input_args['sorting']:
- mapped_args['sort_by_asc'] =\
- input_args['sorting']['asc']
- mapped_args['sort_by_field'] =\
- input_args['sorting']['field']
-
- return mapped_args
-
- def run(self):
- self.get_info()
- self.exit()
-
- def get_info(self):
- self.id, self.facts = self.account_find(
- account_name=self.aparams['name'],
- account_id=self.aparams['id'],
- audits=self.aparams['audits'],
- computes_args=self.mapped_computes_args,
- disks_args=self.mapped_disks_args,
- flip_groups_args=self.mapped_flip_groups_args,
- images_args=self.mapped_images_args,
- resource_consumption=self.aparams['resource_consumption'],
- resource_groups_args=self.mapped_rg_args,
- vinses_args=self.mapped_vinses_args,
- fail_if_not_found=True,
- )
def main():
- DecortAccountInfo().run()
+ module = AnsibleModule(
+ argument_spec=dict(
+ app_id=dict(type='raw'),
+ app_secret=dict(type='raw'),
+ authenticator=dict(type='raw'),
+ controller_url=dict(type='raw'),
+ domain=dict(type='raw'),
+ jwt=dict(type='raw'),
+ oauth2_url=dict(type='raw'),
+ password=dict(type='raw'),
+ username=dict(type='raw'),
+ verify_ssl=dict(type='raw'),
+ ignore_api_compatibility=dict(type='raw'),
+ ignore_sdk_version_check=dict(type='raw'),
+ audits=dict(type='raw'),
+ computes=dict(type='raw'),
+ disks=dict(type='raw'),
+ flip_groups=dict(type='raw'),
+ id=dict(type='raw'),
+ images=dict(type='raw'),
+ name=dict(type='raw'),
+ resource_groups=dict(type='raw'),
+ resource_consumption=dict(type='raw'),
+ vinses=dict(type='raw'),
+ ),
+ )
+
+ module.fail_json(
+ msg=(
+ 'The functionality of the module has been moved to the modules '
+ '"decort_disk_list", "decort_rg_list", "decort_vm_list", '
+ '"decort_vins_list", "decort_image_list", '
+ '"decort_flip_group_list", "decort_account".'
+ '\nPlease use the new modules to get information about the objects'
+ ' available to the account.'
+ ),
+ )
if __name__ == '__main__':
diff --git a/library/decort_account_list.py b/library/decort_account_list.py
new file mode 100644
index 0000000..fada940
--- /dev/null
+++ b/library/decort_account_list.py
@@ -0,0 +1,130 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_account_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortAccountList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ access_type=dict(
+ type='str',
+ choices=sdk_types.AccessType._member_names_,
+ ),
+ id=dict(
+ type='int',
+ ),
+ name=dict(
+ type='str',
+ ),
+ status=dict(
+ type='str',
+ choices=sdk_types.AccountStatus._member_names_,
+ ),
+ zone_id=dict(
+ type='int',
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=(
+ sdk_types.AccountForCAAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+ aparam_access_type: str | None = aparam_filter['access_type']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.AccountForCAAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.account.list(
+ access_type=(
+ sdk_types.AccessType[aparam_access_type]
+ if aparam_access_type else None
+ ),
+ id=aparam_filter['id'],
+ name=aparam_filter['name'],
+ status=(
+ sdk_types.AccountStatus[aparam_status]
+ if aparam_status else None
+ ),
+ zone_id=aparam_filter['zone_id'],
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortAccountList().run()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/library/decort_audit_list.py b/library/decort_audit_list.py
new file mode 100644
index 0000000..f433789
--- /dev/null
+++ b/library/decort_audit_list.py
@@ -0,0 +1,168 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_audit_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortAuditList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ account_id=dict(
+ type='int',
+ ),
+ api_url_path=dict(
+ type='str',
+ ),
+ bservice_id=dict(
+ type='int',
+ ),
+ exclude_audit_lines=dict(
+ type='bool',
+ ),
+ flip_group_id=dict(
+ type='int',
+ ),
+ request_id=dict(
+ type='str',
+ ),
+ k8s_id=dict(
+ type='int',
+ ),
+ lb_id=dict(
+ type='int',
+ ),
+ max_status_code=dict(
+ type='int',
+ ),
+ min_status_code=dict(
+ type='int',
+ ),
+ request_timestamp_end=dict(
+ type='int',
+ ),
+ request_timestamp_start=dict(
+ type='int',
+ ),
+ rg_id=dict(
+ type='int',
+ ),
+ sep_id=dict(
+ type='int',
+ ),
+ user_name=dict(
+ type='str',
+ ),
+ vins_id=dict(
+ type='int',
+ ),
+ vm_id=dict(
+ type='int',
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=(
+ sdk_types.AuditAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.AuditAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.audit.list(
+ account_id=aparam_filter['account_id'],
+ api_url_path=aparam_filter['api_url_path'],
+ bservice_id=aparam_filter['bservice_id'],
+ exclude_audit_lines=aparam_filter['exclude_audit_lines'] or False,
+ flip_group_id=aparam_filter['flip_group_id'],
+ request_id=aparam_filter['request_id'],
+ k8s_id=aparam_filter['k8s_id'],
+ lb_id=aparam_filter['lb_id'],
+ max_status_code=aparam_filter['max_status_code'],
+ min_status_code=aparam_filter['min_status_code'],
+ request_timestamp_end=aparam_filter['request_timestamp_end'],
+ request_timestamp_start=aparam_filter['request_timestamp_start'],
+ rg_id=aparam_filter['rg_id'],
+ sep_id=aparam_filter['sep_id'],
+ user_name=aparam_filter['user_name'],
+ vins_id=aparam_filter['vins_id'],
+ vm_id=aparam_filter['vm_id'],
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortAuditList().run()
+
+
+if __name__ == '__main__':
+ main()
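The `get_info` method above builds its `sort_by` argument by prefixing the resolved field alias with `+` or `-`. A minimal standalone sketch of that composition (a plain field name stands in for the `get_alias` lookup, which is assumed here):

```python
def build_sort_by(field: str, asc: bool = True) -> str:
    # '+' requests ascending order, '-' descending, as in the list modules above
    prefix = '+' if asc else '-'
    return f'{prefix}{field}'

print(build_sort_by('name'))             # +name
print(build_sort_by('size', asc=False))  # -size
```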
diff --git a/library/decort_bservice.py b/library/decort_bservice.py
index b34e75c..34703ab 100644
--- a/library/decort_bservice.py
+++ b/library/decort_bservice.py
@@ -114,6 +114,8 @@ class decort_bservice(DecortController):
if self.bservice_id:
_, self.bservice_info = self.bservice_get_by_id(self.bservice_id)
self.bservice_state(self.bservice_info, self.aparams['state'])
+
+ self.bservice_should_exist = True
return
def action(self,d_state):
@@ -259,76 +261,80 @@ class decort_bservice(DecortController):
if check_errors:
self.exit(fail=True)
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ amodule = self.amodule
+
+ if self.amodule.check_mode:
+ self.result['changed'] = False
+ if self.bservice_id:
+ self.result['failed'] = False
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
+ # we exit the module at this point
+ else:
+ self.result['failed'] = True
+ self.result['msg'] = ("Cannot locate B-service name '{}'. Other arguments are: B-service ID {}, "
+ "RG name '{}', RG ID {}, Account '{}'.").format(amodule.params['name'],
+ amodule.params['id'],
+ amodule.params['rg_name'],
+ amodule.params['rg_id'],
+ amodule.params['account_name'])
+ amodule.fail_json(**self.result)
+ pass
+
+
+ #MAIN MANAGE PART
+
+ if self.bservice_id:
+ if self.bservice_info['status'] in ("DELETING","DESTROYNG","RECONFIGURING","DESTROYING",
+ "ENABLING","DISABLING","RESTORING","MODELED"):
+ self.error()
+ elif self.bservice_info['status'] == "DELETED":
+ if amodule.params['state'] in (
+ 'disabled', 'enabled', 'present', 'started', 'stopped'
+ ):
+ self.restore(self.bservice_id)
+ self.action(amodule.params['state'])
+ if amodule.params['state'] == 'absent':
+ self.nop()
+ elif self.bservice_info['status'] in (
+ 'ENABLED', 'DISABLED', 'CREATED',
+ ):
+ if amodule.params['state'] == 'absent':
+ self.destroy()
+ else:
+ self.action(amodule.params['state'])
+ elif self.bservice_info['status'] == "DESTROYED":
+ if amodule.params['state'] in ('present','enabled'):
+ self.create()
+ self.action(amodule.params['state'])
+ if amodule.params['state'] == 'absent':
+ self.nop()
+ else:
+ state = amodule.params['state']
+ if state is None:
+ state = 'present'
+ if state == 'absent':
+ self.nop()
+ if state in ('present','started'):
+ self.create()
+ elif state in ('stopped', 'disabled','enabled'):
+ self.error()
+
+ if self.result['failed']:
+ amodule.fail_json(**self.result)
+ else:
+ if self.bservice_should_exist:
+ _, self.bservice_info = self.bservice_get_by_id(self.bservice_id)
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
+ else:
+ amodule.exit_json(**self.result)
def main():
- subj = decort_bservice()
- amodule = subj.amodule
-
- if subj.amodule.check_mode:
- subj.result['changed'] = False
- if subj.bservice_id:
- subj.result['failed'] = False
- subj.result['facts'] = subj.package_facts(amodule.check_mode)
- amodule.exit_json(**subj.result)
- # we exit the module at this point
- else:
- subj.result['failed'] = True
- subj.result['msg'] = ("Cannot locate B-service name '{}'. Other arguments are: B-service ID {}, "
- "RG name '{}', RG ID {}, Account '{}'.").format(amodule.params['name'],
- amodule.params['id'],
- amodule.params['rg_name'],
- amodule.params['rg_id'],
- amodule.params['account_name'])
- amodule.fail_json(**subj.result)
- pass
-
-
- #MAIN MANAGE PART
+ decort_bservice().run()
- if subj.bservice_id:
- if subj.bservice_info['status'] in ("DELETING","DESTROYNG","RECONFIGURING","DESTROYING",
- "ENABLING","DISABLING","RESTORING","MODELED"):
- subj.error()
- elif subj.bservice_info['status'] == "DELETED":
- if amodule.params['state'] in (
- 'disabled', 'enabled', 'present', 'started', 'stopped'
- ):
- subj.restore(subj.bservice_id)
- subj.action(amodule.params['state'])
- if amodule.params['state'] == 'absent':
- subj.nop()
- elif subj.bservice_info['status'] in (
- 'ENABLED', 'DISABLED', 'CREATED',
- ):
- if amodule.params['state'] == 'absent':
- subj.destroy()
- else:
- subj.action(amodule.params['state'])
- elif subj.bservice_info['status'] == "DESTROYED":
- if amodule.params['state'] in ('present','enabled'):
- subj.create()
- subj.action(amodule.params['state'])
- if amodule.params['state'] == 'absent':
- subj.nop()
- else:
- state = amodule.params['state']
- if state is None:
- state = 'present'
- if state == 'absent':
- subj.nop()
- if state in ('present','started'):
- subj.create()
- elif state in ('stopped', 'disabled','enabled'):
- subj.error()
- if subj.result['failed']:
- amodule.fail_json(**subj.result)
- else:
- if subj.bservice_should_exist:
- _, subj.bservice_info = subj.bservice_get_by_id(subj.bservice_id)
- subj.result['facts'] = subj.package_facts(amodule.check_mode)
- amodule.exit_json(**subj.result)
- else:
- amodule.exit_json(**subj.result)
-if __name__ == "__main__":
+if __name__ == '__main__':
main()
diff --git a/library/decort_disk.py b/library/decort_disk.py
index ac1856b..11cf023 100644
--- a/library/decort_disk.py
+++ b/library/decort_disk.py
@@ -73,81 +73,124 @@ class decort_disk(DecortController):
else:
self.check_amodule_args_for_create()
+ def compare_iotune_params(self, new_iotune: dict, current_iotune: dict):
+ io_fields = sdk_types.IOTuneAPIResultNM.model_fields.keys()
+
+ for field in io_fields:
+ new_value = new_iotune.get(field)
+ current_value = current_iotune.get(field)
+
+ if new_value != current_value:
+ return False
+
+ return True
+
+ def limit_io(self, aparam_limit_io: dict):
+ self.sdk_checkmode(self.api.cloudapi.disks.limit_io)(
+ disk_id=self.disk_id,
+ read_bytes_sec_max=(aparam_limit_io.get('read_bytes_sec_max')),
+ read_bytes_sec=(aparam_limit_io.get('read_bytes_sec')),
+ read_iops_sec_max=(aparam_limit_io.get('read_iops_sec_max')),
+ read_iops_sec=(aparam_limit_io.get('read_iops_sec')),
+ size_iops_sec=(aparam_limit_io.get('size_iops_sec')),
+ total_bytes_sec_max=(aparam_limit_io.get('total_bytes_sec_max')),
+ total_bytes_sec=(aparam_limit_io.get('total_bytes_sec')),
+ total_iops_sec_max=(aparam_limit_io.get('total_iops_sec_max')),
+ total_iops_sec=(aparam_limit_io.get('total_iops_sec')),
+ write_bytes_sec_max=(aparam_limit_io.get('write_bytes_sec_max')),
+ write_bytes_sec=(aparam_limit_io.get('write_bytes_sec')),
+ write_iops_sec_max=(aparam_limit_io.get('write_iops_sec_max')),
+ write_iops_sec=(aparam_limit_io.get('write_iops_sec')),
+ )
+
def create(self):
- self.disk_id = self.disk_create(
- accountId=self.acc_id,
- name = self.amodule.params['name'],
- description=self.amodule.params['description'],
- size=self.amodule.params['size'],
- sep_id=self.amodule.params['sep_id'],
- pool=self.amodule.params['pool'],
+ self.disk_id = self.sdk_checkmode(self.api.cloudapi.disks.create)(
+ account_id=self.acc_id,
+ name=self.amodule.params['name'],
+ size_gb=self.amodule.params['size'],
storage_policy_id=self.aparams['storage_policy_id'],
+ description=self.amodule.params['description'],
+ sep_id=self.amodule.params['sep_id'],
+ sep_pool_name=self.amodule.params['pool'],
)
#IO tune
- if self.amodule.params['limitIO']:
- self.disk_limitIO(disk_id=self.disk_id,
- limits=self.amodule.params['limitIO'])
+ aparam_limit_io: dict[str, int | None] = self.amodule.params['limitIO']
+ self.limit_io(aparam_limit_io=aparam_limit_io)
#set share status
if self.amodule.params['shareable']:
- self.disk_share(self.disk_id,self.amodule.params['shareable'])
+ self.sdk_checkmode(self.api.cloudapi.disks.share)(
+ disk_id=self.disk_id,
+ )
return
def action(self,restore=False):
#restore never be done
if restore:
- self.disk_restore(self.disk_id)
+ self.sdk_checkmode(self.api.cloudapi.disks.restore)(
+ disk_id=self.disk_id,
+ )
#rename if id present
if (
self.amodule.params['name'] is not None
and self.amodule.params['name'] != self.disk_info['name']
):
- self.disk_rename(disk_id=self.disk_id,
- name=self.amodule.params['name'])
+ self.rename()
#resize
if (
self.amodule.params['size'] is not None
and self.amodule.params['size'] != self.disk_info['sizeMax']
):
- self.disk_resize(self.disk_info,self.amodule.params['size'])
+ self.sdk_checkmode(self.api.cloudapi.disks.resize2)(
+ disk_id=self.disk_id,
+ disk_size_gb=self.amodule.params['size'],
+ )
#IO TUNE
- if self.amodule.params['limitIO']:
- clean_io = [param for param in self.amodule.params['limitIO'] \
- if self.amodule.params['limitIO'][param] == None]
- for key in clean_io: del self.amodule.params['limitIO'][key]
- if self.amodule.params['limitIO'] != self.disk_info['iotune']:
- self.disk_limitIO(self.disk_id,self.amodule.params['limitIO'])
+ aparam_limit_io: dict[str, int | None] = self.amodule.params['limitIO']
+ if aparam_limit_io:
+ if not self.compare_iotune_params(
+ new_iotune=aparam_limit_io,
+ current_iotune=self.disk_info['iotune'],
+ ):
+ self.limit_io(aparam_limit_io=aparam_limit_io)
+
#share check/update
#raise Exception(self.amodule.params['shareable'])
- if self.amodule.params['shareable'] != self.disk_info['shareable']:
- self.disk_share(self.disk_id,self.amodule.params['shareable'])
+ if self.amodule.params['shareable'] != self.disk_info['shareable']:
+ if self.amodule.params['shareable']:
+ self.sdk_checkmode(self.api.cloudapi.disks.share)(
+ disk_id=self.disk_id,
+ )
+ else:
+ self.sdk_checkmode(self.api.cloudapi.disks.unshare)(
+ disk_id=self.disk_id,
+ )
aparam_storage_policy_id = self.aparams['storage_policy_id']
if (
aparam_storage_policy_id is not None
and aparam_storage_policy_id != self.disk_info['storage_policy_id']
):
- self.disk_change_storage_policy(
+ self.sdk_checkmode(self.api.ca.disks.change_disk_storage_policy)(
disk_id=self.disk_id,
storage_policy_id=aparam_storage_policy_id,
)
-
return
-
+
def delete(self):
- self.disk_delete(disk_id=self.disk_id,
- detach=self.amodule.params['force_detach'],
- permanently=self.amodule.params['permanently'],
- reason=self.amodule.params['reason'])
+ self.sdk_checkmode(self.api.cloudapi.disks.delete)(
+ disk_id=self.disk_id,
+ detach=self.amodule.params['force_detach'],
+ permanently=self.amodule.params['permanently'],
+ )
self.disk_id, self.disk_info = self._disk_get_by_id(self.disk_id)
return
def rename(self):
-
-
- self.disk_rename(diskId = self.disk_id,
- name = self.amodule.params['name'])
- self.disk_info['name'] = self.amodule.params['name']
+ self.sdk_checkmode(self.api.cloudapi.disks.rename)(
+ disk_id=self.disk_id,
+ name=self.amodule.params['name'],
+ )
return
def nop(self):
@@ -194,6 +237,8 @@ class decort_disk(DecortController):
ret_dict['size_used'] = self.disk_info['sizeUsed']
ret_dict['storage_policy_id'] = self.disk_info['storage_policy_id']
ret_dict['to_clean'] = self.disk_info['to_clean']
+ ret_dict['cache_mode'] = self.disk_info['cache']
+ ret_dict['blkdiscard'] = self.disk_info['blkdiscard']
return ret_dict
@@ -240,6 +285,7 @@ class decort_disk(DecortController):
),
limitIO=dict(
type='dict',
+ apply_defaults=True,
options=dict(
total_bytes_sec=dict(
type='int',
@@ -290,10 +336,6 @@ class decort_disk(DecortController):
type='bool',
default=False,
),
- reason=dict(
- type='str',
- default='Managed by Ansible decort_disk',
- ),
state=dict(
type='str',
default='present',
@@ -379,57 +421,60 @@ class decort_disk(DecortController):
return not check_errors
-def main():
- decon = decort_disk()
- amodule = decon.amodule
- #
- #Full range of Disk status is as follows:
- #
- # "ASSIGNED","MODELED", "CREATING","CREATED","DELETED", "DESTROYED","PURGED",
- #
- if decon.disk_id:
- #disk exist
- if decon.disk_info['status'] in ["MODELED", "CREATING"]:
- decon.result['failed'] = True
- decon.result['changed'] = False
- decon.result['msg'] = ("No change can be done for existing Disk ID {} because of its current "
- "status '{}'").format(decon.disk_id, decon.disk_info['status'])
- # "ASSIGNED","CREATED","DELETED","PURGED", "DESTROYED"
- elif decon.disk_info['status'] in ["ASSIGNED","CREATED"]:
- if amodule.params['state'] == 'absent':
- decon.delete()
- elif amodule.params['state'] == 'present':
- decon.action()
- elif decon.disk_info['status'] in ["PURGED", "DESTROYED"]:
- #re-provision disk
- if amodule.params['state'] in ('present'):
- decon.create()
- else:
- decon.nop()
- elif decon.disk_info['status'] == "DELETED":
- if amodule.params['state'] in ('present'):
- decon.action(restore=True)
- elif (amodule.params['state'] == 'absent' and
- amodule.params['permanently']):
- decon.delete()
- else:
- decon.nop()
- else:
- # preexisting Disk was not found
- if amodule.params['state'] == 'absent':
- decon.nop()
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ #
+ #Full range of Disk status is as follows:
+ #
+ # "ASSIGNED","MODELED", "CREATING","CREATED","DELETED", "DESTROYED","PURGED",
+ #
+ amodule = self.amodule
+ if self.disk_id:
+            # disk exists
+ if self.disk_info['status'] in ["MODELED", "CREATING"]:
+ self.result['failed'] = True
+ self.result['changed'] = False
+ self.result['msg'] = ("No change can be done for existing Disk ID {} because of its current "
+ "status '{}'").format(self.disk_id, self.disk_info['status'])
+ # "ASSIGNED","CREATED","DELETED","PURGED", "DESTROYED"
+ elif self.disk_info['status'] in ["ASSIGNED","CREATED"]:
+ if amodule.params['state'] == 'absent':
+ self.delete()
+ elif amodule.params['state'] == 'present':
+ self.action()
+ elif self.disk_info['status'] in ["PURGED", "DESTROYED"]:
+ #re-provision disk
+                if amodule.params['state'] == 'present':
+ self.create()
+ else:
+ self.nop()
+ elif self.disk_info['status'] == "DELETED":
+                if amodule.params['state'] == 'present':
+ self.action(restore=True)
+ elif (amodule.params['state'] == 'absent' and
+ amodule.params['permanently']):
+ self.delete()
+ else:
+ self.nop()
else:
- decon.create()
+ # preexisting Disk was not found
+ if amodule.params['state'] == 'absent':
+ self.nop()
+ else:
+ self.create()
- if decon.result['failed']:
- amodule.fail_json(**decon.result)
- else:
- if decon.result['changed']:
- _, decon.disk_info = decon.disk_find(decon.disk_id)
- decon.result['facts'] = decon.package_facts(amodule.check_mode)
- amodule.exit_json(**decon.result)
+ if self.result['failed']:
+ amodule.fail_json(**self.result)
+ else:
+ if self.result['changed']:
+ _, self.disk_info = self.disk_find(self.disk_id)
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
-if __name__ == "__main__":
+
+def main():
+ decort_disk().run()
+
+
+if __name__ == '__main__':
main()
-
-#SHARE
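The new `compare_iotune_params` helper in `decort_disk.py` reduces to a field-by-field equality check. A self-contained sketch (an explicit field list stands in for `sdk_types.IOTuneAPIResultNM.model_fields`, which is an assumption here):

```python
# Illustrative subset of I/O tuning fields, not the full SDK model
IO_FIELDS = ('read_bytes_sec', 'write_bytes_sec', 'total_iops_sec')

def iotune_params_equal(new_iotune: dict, current_iotune: dict) -> bool:
    # Missing keys compare as None on both sides, mirroring dict.get() above
    return all(
        new_iotune.get(field) == current_iotune.get(field)
        for field in IO_FIELDS
    )
```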
diff --git a/library/decort_disk_list.py b/library/decort_disk_list.py
new file mode 100644
index 0000000..bb194bc
--- /dev/null
+++ b/library/decort_disk_list.py
@@ -0,0 +1,155 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_disk_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortDiskList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ account_id=dict(
+ type='int',
+ ),
+ account_name=dict(
+ type='str',
+ ),
+ id=dict(
+ type='int',
+ ),
+ name=dict(
+ type='str',
+ ),
+ sep_id=dict(
+ type='int',
+ ),
+ sep_pool_name=dict(
+ type='str',
+ ),
+ shared=dict(
+ type='bool',
+ ),
+ disk_max_size_gb=dict(
+ type='int',
+ ),
+ status=dict(
+ type='str',
+ choices=sdk_types.DiskStatus._member_names_,
+ ),
+ storage_policy_id=dict(
+ type='int',
+ ),
+ type=dict(
+ type='str',
+ choices=sdk_types.DiskType._member_names_,
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=(
+ sdk_types.DiskForListAndListDeletedAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+ aparam_type: str | None = aparam_filter['type']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.DiskForListAndListDeletedAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.disks.list(
+ account_id=aparam_filter['account_id'],
+ account_name=aparam_filter['account_name'],
+ disk_max_size_gb=aparam_filter['disk_max_size_gb'],
+ id=aparam_filter['id'],
+ name=aparam_filter['name'],
+ sep_id=aparam_filter['sep_id'],
+ sep_pool_name=aparam_filter['sep_pool_name'],
+ shared=aparam_filter['shared'],
+ status=(
+ sdk_types.DiskStatus[aparam_status]
+ if aparam_status else None
+ ),
+ storage_policy_id=aparam_filter['storage_policy_id'],
+ type=(
+ sdk_types.DiskType[aparam_type]
+ if aparam_type else None
+ ),
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortDiskList().run()
+
+
+if __name__ == '__main__':
+ main()
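`decort_disk_list` translates the string `status` and `type` parameters into SDK enum members before calling the API. The lookup pattern can be sketched independently (the enum below is a hypothetical stand-in for `sdk_types.DiskStatus`; its real member set is assumed):

```python
from enum import Enum

class DiskStatus(Enum):  # hypothetical stand-in for sdk_types.DiskStatus
    ASSIGNED = 'ASSIGNED'
    CREATED = 'CREATED'
    DELETED = 'DELETED'

def to_disk_status(name):
    # Enum lookup by member name; None passes through untouched
    return DiskStatus[name] if name else None
```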
diff --git a/library/decort_flip_group_list.py b/library/decort_flip_group_list.py
new file mode 100644
index 0000000..63bec52
--- /dev/null
+++ b/library/decort_flip_group_list.py
@@ -0,0 +1,150 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_flip_group_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortFlipGroupList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ account_id=dict(
+ type='int',
+ ),
+ client_ids=dict(
+ type='list',
+ elements='int',
+ ),
+ conn_id=dict(
+ type='int',
+ ),
+ ext_net_id=dict(
+ type='int',
+ ),
+ id=dict(
+ type='int',
+ ),
+ include_deleted=dict(
+ type='bool',
+ default=False,
+ ),
+ ip_addr=dict(
+ type='str',
+ ),
+ name=dict(
+ type='str',
+ ),
+ status=dict(
+ type='str',
+ choices=sdk_types.FlipGroupStatus._member_names_,
+ ),
+ vins_id=dict(
+ type='int',
+ ),
+ vins_name=dict(
+ type='str',
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=(
+ sdk_types.FlipGroupForListAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.FlipGroupForListAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.flipgroup.list(
+ account_id=aparam_filter['account_id'],
+ client_ids=aparam_filter['client_ids'],
+ conn_id=aparam_filter['conn_id'],
+ ext_net_id=aparam_filter['ext_net_id'],
+ id=aparam_filter['id'],
+ ip_addr=aparam_filter['ip_addr'],
+ name=aparam_filter['name'],
+ status=(
+ sdk_types.FlipGroupStatus[aparam_status]
+ if aparam_status else None
+ ),
+ vins_id=aparam_filter['vins_id'],
+ vins_name=aparam_filter['vins_name'],
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortFlipGroupList().run()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/library/decort_group.py b/library/decort_group.py
index 00a5c41..d92debd 100644
--- a/library/decort_group.py
+++ b/library/decort_group.py
@@ -81,24 +81,13 @@ class decort_group(DecortController):
def create(self):
chipset = self.aparams['chipset']
if chipset is None:
- chipset = 'i440fx'
+ chipset = 'Q35'
self.message(
msg=f'Chipset not specified, '
f'default value "{chipset}" will be used.',
warning=True,
)
- driver = self.aparams['driver']
- if driver is None:
- driver = 'KVM_X86'
- self.message(
- msg=self.MESSAGES.default_value_used(
- param_name='driver',
- default_value=driver,
- ),
- warning=True,
- )
-
self.group_id=self.group_provision(
bs_id=self.bservice_id,
arg_name=self.amodule.params['name'],
@@ -112,7 +101,6 @@ class decort_group(DecortController):
arg_timeout=self.amodule.params['timeoutStart'],
chipset=chipset,
storage_policy_id=self.aparams['storage_policy_id'],
- driver=driver,
)
if self.amodule.params['state'] in ('started','present'):
@@ -296,9 +284,6 @@ class decort_group(DecortController):
storage_policy_id=dict(
type='int',
),
- driver=dict(
- type='str',
- ),
),
supports_check_mode=True,
required_one_of=[
@@ -354,17 +339,6 @@ class decort_group(DecortController):
f'disk ID {disk['id']}'
)
- aparam_driver = self.aparams['driver']
- if (
- aparam_driver is not None
- and aparam_driver != self.group_info['driver']
- ):
- check_errors = True
- self.message(
- msg='Check for parameter "driver" failed: '
- 'driver can not be changed'
- )
-
if check_errors:
self.exit(fail=True)
@@ -405,54 +379,59 @@ class decort_group(DecortController):
self.exit(fail=True)
-def main():
- subj = decort_group()
- amodule = subj.amodule
-
- if amodule.params['state'] == 'check':
- subj.result['changed'] = False
- if subj.group_id:
- # cluster is found - package facts and report success to Ansible
- subj.result['failed'] = False
- subj.result['facts'] = subj.package_facts(amodule.check_mode)
- amodule.exit_json(**subj.result)
- # we exit the module at this point
- else:
- subj.result['failed'] = True
- subj.result['msg'] = ("Cannot locate Group name '{}'. "
- "B-service ID {}").format(amodule.params['name'],
- amodule.params['bservice_id'],)
- amodule.fail_json(**subj.result)
-
- if subj.group_id:
- if subj.group_info['status'] in ("DELETING","DESTROYNG","CREATING","DESTROYING",
- "ENABLING","DISABLING","RESTORING","MODELED","DESTROYED"):
- subj.error()
- elif subj.group_info['status'] in ("DELETED","DESTROYED"):
- if amodule.params['state'] == 'absent':
- subj.nop()
- if amodule.params['state'] in ('present','started','stopped'):
- subj.create()
- elif subj.group_info['techStatus'] in ("STARTED","STOPPED"):
- if amodule.params['state'] == 'absent':
- subj.destroy()
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ amodule = self.amodule
+
+ if amodule.params['state'] == 'check':
+ self.result['changed'] = False
+ if self.group_id:
+ # cluster is found - package facts and report success to Ansible
+ self.result['failed'] = False
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
+ # we exit the module at this point
else:
- subj.action()
+ self.result['failed'] = True
+ self.result['msg'] = ("Cannot locate Group name '{}'. "
+ "B-service ID {}").format(amodule.params['name'],
+ amodule.params['bservice_id'],)
+ amodule.fail_json(**self.result)
+
+ if self.group_id:
+ if self.group_info['status'] in ("DELETING","DESTROYNG","CREATING","DESTROYING",
+ "ENABLING","DISABLING","RESTORING","MODELED","DESTROYED"):
+ self.error()
+ elif self.group_info['status'] in ("DELETED","DESTROYED"):
+ if amodule.params['state'] == 'absent':
+ self.nop()
+ if amodule.params['state'] in ('present','started','stopped'):
+ self.create()
+ elif self.group_info['techStatus'] in ("STARTED","STOPPED"):
+ if amodule.params['state'] == 'absent':
+ self.destroy()
+ else:
+ self.action()
- else:
- if amodule.params['state'] == 'absent':
- subj.nop()
- if amodule.params['state'] in ('present','started','stopped'):
- subj.create()
-
- if subj.result['failed']:
- amodule.fail_json(**subj.result)
- else:
- if subj.group_should_exist:
- subj.result['facts'] = subj.package_facts(amodule.check_mode)
- amodule.exit_json(**subj.result)
else:
- amodule.exit_json(**subj.result)
+ if amodule.params['state'] == 'absent':
+ self.nop()
+ if amodule.params['state'] in ('present','started','stopped'):
+ self.create()
+
+ if self.result['failed']:
+ amodule.fail_json(**self.result)
+ else:
+ if self.group_should_exist:
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
+ else:
+ amodule.exit_json(**self.result)
-if __name__ == "__main__":
+
+def main():
+ decort_group().run()
+
+
+if __name__ == '__main__':
main()
diff --git a/library/decort_image.py b/library/decort_image.py
new file mode 100644
index 0000000..f8b8a18
--- /dev/null
+++ b/library/decort_image.py
@@ -0,0 +1,558 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_image
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home).
+'''
+
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.basic import env_fallback
+
+from ansible.module_utils.decort_utils import *
+
+
+class decort_image(DecortController):
+ def __init__(self):
+        super().__init__(AnsibleModule(**self.amodule_init_args))
+ amodule = self.amodule
+
+ self.validated_image_id = 0
+ self.validated_virt_image_id = 0
+ self.validated_image_name = amodule.params['image_name']
+ self.validated_virt_image_name = None
+ self.image_info: dict
+ self.virt_image_info: dict
+ if amodule.params['account_name']:
+ self.validated_account_id, _ = self.account_find(amodule.params['account_name'])
+ else:
+ self.validated_account_id = amodule.params['account_id']
+
+ if self.validated_account_id == 0:
+ # we failed either to find or access the specified account - fail the module
+ self.result['failed'] = True
+ self.result['changed'] = False
+ self.result['msg'] = ("Cannot find account '{}'").format(amodule.params['account_name'])
+ amodule.fail_json(**self.result)
+ self.acc_id = self.validated_account_id
+
+ if (
+ self.aparams['virt_id'] != 0
+ or self.aparams['virt_name'] is not None
+ ):
+ self.validated_virt_image_id, self.virt_image_info = (
+ self.decort_virt_image_find(amodule)
+ )
+ if self.virt_image_info:
+ _, linked_image_info = self._image_get_by_id(
+ image_id=self.virt_image_info['linkTo']
+ )
+ self.acc_id = linked_image_info['accountId']
+ if (
+ self.aparams['virt_name'] is not None
+ and self.aparams['virt_name']
+ != self.virt_image_info['name']
+ ):
+ self.decort_virt_image_rename(amodule)
+ self.result['msg'] = 'Virtual image renamed successfully'
+ elif (
+ self.aparams['image_id'] != 0
+ or self.aparams['image_name'] is not None
+ ):
+ self.validated_image_id, self.image_info = (
+ self.decort_image_find(amodule)
+ )
+ if self.image_info:
+ self.acc_id = self.image_info['accountId']
+ if (
+ amodule.params['image_name']
+ and amodule.params['image_name'] != self.image_info['name']
+ ):
+                    self.decort_image_rename(amodule)
+ self.result['msg'] = ("Image renamed successfully")
+
+ if self.validated_image_id:
+ self.check_amodule_args_for_change()
+ elif self.validated_virt_image_id:
+ self.check_amodule_args_for_change_virt_image()
+ elif self.aparams['virt_name']:
+ self.check_amodule_args_for_create_virt_image()
+ else:
+ self.check_amodule_args_for_create_image()
+
+ def decort_image_find(self, amodule):
+ # function that finds the OS image
+ image_id, image_facts = self.image_find(image_id=amodule.params['image_id'], image_name=self.validated_image_name,
+ account_id=self.validated_account_id, rg_id=0,
+ sepid=amodule.params['sep_id'],
+ pool=amodule.params['pool'])
+ return image_id, image_facts
+
+ def decort_virt_image_find(self, amodule):
+ # function that finds a virtual image
+ image_id, image_facts = self.virt_image_find(image_id=amodule.params['virt_id'],
+ account_id=self.validated_account_id, rg_id=0,
+ sepid=amodule.params['sep_id'],
+ virt_name=amodule.params['virt_name'],
+ pool=amodule.params['pool'])
+ return image_id, image_facts
+
+
+
+ def decort_image_create(self,amodule):
+ aparam_boot = self.aparams['boot']
+ boot_mode = 'bios'
+ loader_type = 'unknown'
+ if aparam_boot is not None:
+ if aparam_boot['mode'] is None:
+ self.message(
+ msg=self.MESSAGES.default_value_used(
+ param_name='boot.mode',
+ default_value=boot_mode
+ ),
+ warning=True,
+ )
+ else:
+ boot_mode = aparam_boot['mode']
+
+ if aparam_boot['loader_type'] is None:
+ self.message(
+ msg=self.MESSAGES.default_value_used(
+ param_name='boot.loader_type',
+ default_value=loader_type
+ ),
+ warning=True,
+ )
+ else:
+ loader_type = aparam_boot['loader_type']
+
+ network_interface_naming = self.aparams['network_interface_naming']
+ if network_interface_naming is None:
+ network_interface_naming = 'ens'
+ self.message(
+ msg=self.MESSAGES.default_value_used(
+ param_name='network_interface_naming',
+ default_value=network_interface_naming
+ ),
+ warning=True,
+ )
+
+ hot_resize = self.aparams['hot_resize']
+ if hot_resize is None:
+ hot_resize = False
+ self.message(
+ msg=self.MESSAGES.default_value_used(
+ param_name='hot_resize',
+ default_value=hot_resize
+ ),
+ warning=True,
+ )
+
+ # function that creates OS image
+ image_facts = self.image_create(
+ img_name=self.validated_image_name,
+ url=amodule.params['url'],
+ boot_mode=boot_mode,
+ boot_loader_type=loader_type,
+ hot_resize=hot_resize,
+ username=amodule.params['image_username'],
+ password=amodule.params['image_password'],
+ account_id=self.validated_account_id,
+ usernameDL=amodule.params['usernameDL'],
+ passwordDL=amodule.params['passwordDL'],
+ sepId=amodule.params['sepId'],
+ poolName=amodule.params['poolName'],
+ network_interface_naming=network_interface_naming,
+ storage_policy_id=amodule.params['storage_policy_id'],
+ )
+ self.result['changed'] = True
+ return image_facts
+
+ def decort_virt_image_link(self, amodule):
+ # function that links an OS image to a virtual one
+ self.virt_image_link(imageId=self.validated_virt_image_id, targetId=self.target_image_id)
+ image_id, image_facts = self.decort_virt_image_find(amodule)
+ self.result['facts'] = self.decort_image_package_facts(image_facts, amodule.check_mode)
+ self.result['msg'] = "Image '{}' linked to virtual image '{}'".format(self.target_image_id,
+ self.decort_image_package_facts(image_facts)['id'])
+ return image_id, image_facts
+
+ def decort_image_delete(self, amodule):
+ # function that removes an image
+ self.image_delete(imageId=amodule.image_id_delete)
+ _, image_facts = self._image_get_by_id(amodule.image_id_delete)
+ self.result['facts'] = self.decort_image_package_facts(image_facts, amodule.check_mode)
+ return
+
+ def decort_virt_image_create(self, amodule):
+ # function that creates a virtual image
+ self.virt_image_create(
+ name=amodule.params['virt_name'],
+ target_id=self.target_image_id,
+ account_id=self.aparams['account_id'],
+ )
+ image_id, image_facts = self.decort_virt_image_find(amodule)
+ self.result['facts'] = self.decort_image_package_facts(image_facts, amodule.check_mode)
+ return image_id, image_facts
+
+ def decort_image_rename(self, amodule):
+ # image renaming function
+ self.image_rename(imageId=self.validated_image_id, name=amodule.params['image_name'])
+ self.result['msg'] = "Image renamed successfully"
+ image_id, image_facts = self.decort_image_find(amodule)
+ return image_id, image_facts
+
+ def decort_virt_image_rename(self, amodule):
+ # virtual image renaming function
+ self.image_rename(imageId=self.validated_virt_image_id,
+ name=amodule.params['virt_name'])
+ self.result['msg'] = "Virtual image renamed successfully"
+ image_id, image_facts = self.decort_virt_image_find(amodule)
+ return image_id, image_facts
+
+ @staticmethod
+ def decort_image_package_facts(
+ arg_image_facts: dict | None,
+ arg_check_mode=False,
+ ):
+ """Package OS image facts into a dictionary according to the decort_image module specification. This
+ dictionary will be returned to the upstream Ansible engine at the completion of the module run.
+
+ @param arg_image_facts: dictionary with OS image facts as returned by API call to .../images/list
+ @param arg_check_mode: boolean that tells if this Ansible module is run in check mode.
+
+ @return: dictionary with OS image specs populated from arg_image_facts.
+ """
+
+ ret_dict = dict(id=0,
+ name="none",
+ size=0,
+ type="none",
+ state="CHECK_MODE")
+
+ if arg_check_mode:
+ # in check mode return immediately with the default values
+ return ret_dict
+
+ if arg_image_facts is None:
+ # if void facts provided - change state value to ABSENT and return
+ ret_dict['state'] = "ABSENT"
+ return ret_dict
+
+ ret_dict['id'] = arg_image_facts['id']
+ ret_dict['name'] = arg_image_facts['name']
+ ret_dict['size'] = arg_image_facts['size']
+ # ret_dict['arch'] = arg_image_facts['architecture']
+ ret_dict['sep_id'] = arg_image_facts['sepId']
+ ret_dict['pool'] = arg_image_facts['pool']
+ ret_dict['state'] = arg_image_facts['status']
+ ret_dict['linkto'] = arg_image_facts['linkTo']
+ ret_dict['accountId'] = arg_image_facts['accountId']
+ ret_dict['boot_mode'] = arg_image_facts['bootType']
+
+ ret_dict['boot_loader_type'] = ''
+ match arg_image_facts['type']:
+ case 'cdrom' | 'virtual' as image_type:
+ ret_dict['type'] = image_type
+ case _ as boot_loader_type:
+ ret_dict['type'] = 'template'
+ ret_dict['boot_loader_type'] = boot_loader_type
+
+ ret_dict['network_interface_naming'] = arg_image_facts[
+ 'networkInterfaceNaming'
+ ]
+ ret_dict['hot_resize'] = arg_image_facts['hotResize']
+ ret_dict['storage_policy_id'] = arg_image_facts['storage_policy_id']
+ ret_dict['to_clean'] = arg_image_facts['to_clean']
+ return ret_dict
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ pool=dict(
+ type='str',
+ default='',
+ ),
+ sep_id=dict(
+ type='int',
+ default=0,
+ ),
+ account_name=dict(
+ type='str',
+ ),
+ account_id=dict(
+ type='int',
+ ),
+ image_name=dict(
+ type='str',
+ ),
+ image_id=dict(
+ type='int',
+ default=0,
+ ),
+ virt_id=dict(
+ type='int',
+ default=0,
+ ),
+ virt_name=dict(
+ type='str',
+ ),
+ state=dict(
+ type='str',
+ default='present',
+ choices=[
+ 'absent',
+ 'present',
+ ],
+ ),
+ url=dict(
+ type='str',
+ ),
+ sepId=dict(
+ type='int',
+ default=0,
+ ),
+ poolName=dict(
+ type='str',
+ ),
+ hot_resize=dict(
+ type='bool',
+ ),
+ image_username=dict(
+ type='str',
+ ),
+ image_password=dict(
+ type='str',
+ ),
+ usernameDL=dict(
+ type='str',
+ ),
+ passwordDL=dict(
+ type='str',
+ ),
+ boot=dict(
+ type='dict',
+ options=dict(
+ mode=dict(
+ type='str',
+ choices=[
+ 'bios',
+ 'uefi',
+ ],
+ ),
+ loader_type=dict(
+ type='str',
+ choices=[
+ 'windows',
+ 'linux',
+ 'unknown',
+ ],
+ ),
+ ),
+ ),
+ network_interface_naming=dict(
+ type='str',
+ choices=[
+ 'ens',
+ 'eth',
+ ],
+ ),
+ storage_policy_id=dict(
+ type='int',
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ def check_amodule_args_for_change(self):
+ check_errors = False
+
+ aparam_storage_policy_id = self.aparams['storage_policy_id']
+ if (
+ aparam_storage_policy_id is not None
+ and aparam_storage_policy_id
+ not in self.acc_info['storage_policy_ids']
+ ):
+ check_errors = True
+ self.message(
+ msg='Check for parameter "storage_policy_id" failed: '
+ f'Account ID {self.acc_id} does not have access to '
+ f'storage_policy_id {aparam_storage_policy_id}'
+ )
+
+ if check_errors:
+ self.exit(fail=True)
+
+ def check_amodule_args_for_change_virt_image(self):
+ check_errors = False
+
+ aparam_storage_policy_id = self.aparams['storage_policy_id']
+ if (
+ aparam_storage_policy_id is not None
+ and (
+ aparam_storage_policy_id
+ != self.virt_image_info['storage_policy_id']
+ )
+ ):
+ check_errors = True
+ self.message(
+ msg='Check for parameter "storage_policy_id" failed: '
+ 'storage_policy_id can not be changed in virtual image'
+ )
+
+ if check_errors:
+ self.exit(fail=True)
+
+ def check_amodule_args_for_create_image(self):
+ check_errors = False
+
+ aparam_account_id = self.aparams['account_id']
+ if aparam_account_id is None:
+ check_errors = True
+ self.message(
+ msg='Check for parameter "account_id" failed: '
+ 'account_id must be specified when creating '
+ 'a new image'
+ )
+
+ aparam_storage_policy_id = self.aparams['storage_policy_id']
+ if aparam_storage_policy_id is None:
+ check_errors = True
+ self.message(
+ msg='Check for parameter "storage_policy_id" failed: '
+ 'storage_policy_id must be specified when creating '
+ 'a new image'
+ )
+ elif (
+ aparam_storage_policy_id
+ not in self.acc_info['storage_policy_ids']
+ ):
+ check_errors = True
+ self.message(
+ msg='Check for parameter "storage_policy_id" failed: '
+ f'Account ID {self.acc_id} does not have access to '
+ f'storage_policy_id {aparam_storage_policy_id}'
+ )
+
+ if check_errors:
+ self.exit(fail=True)
+
+ def check_amodule_args_for_create_virt_image(self):
+ check_errors = False
+
+ aparam_storage_policy_id = self.aparams['storage_policy_id']
+ if aparam_storage_policy_id is not None:
+ check_errors = True
+ self.message(
+ msg='Check for parameter "storage_policy_id" failed: '
+ 'storage_policy_id can not be specified when creating '
+ 'virtual image'
+ )
+
+ if check_errors:
+ self.exit(fail=True)
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ amodule = self.amodule
+ if amodule.params['virt_name'] or amodule.params['virt_id']:
+
+ image_id, image_facts = self.decort_virt_image_find(amodule)
+ if amodule.params['image_name'] or amodule.params['image_id']:
+ self.target_image_id, _ = self.decort_image_find(amodule)
+ else:
+ self.target_image_id = 0
+ if self.decort_image_package_facts(image_facts)['id'] > 0:
+ self.result['facts'] = self.decort_image_package_facts(image_facts, amodule.check_mode)
+ self.validated_virt_image_id = self.decort_image_package_facts(image_facts)['id']
+ self.validated_virt_image_name = self.decort_image_package_facts(image_facts)['name']
+
+ if self.decort_image_package_facts(image_facts)['id'] == 0 and amodule.params['state'] == "present" and self.target_image_id > 0:
+ image_id, image_facts = self.decort_virt_image_create(amodule)
+ self.result['msg'] = ("Virtual image '{}' created").format(self.decort_image_package_facts(image_facts)['id'])
+ self.result['changed'] = True
+ elif self.decort_image_package_facts(image_facts)['id'] == 0 and amodule.params['state'] == "present" and self.target_image_id == 0:
+ self.result['msg'] = "Cannot find OS image"
+ amodule.fail_json(**self.result)
+
+ if self.validated_virt_image_id:
+ if (
+ self.target_image_id
+ and self.decort_image_package_facts(image_facts)[
+ 'linkto'
+ ] != self.target_image_id
+ ):
+ self.decort_virt_image_link(amodule)
+ self.result['changed'] = True
+ amodule.exit_json(**self.result)
+ if (
+ amodule.params['storage_policy_id'] is not None
+ and amodule.params['storage_policy_id']
+ != image_facts['storage_policy_id']
+ ):
+ self.image_change_storage_policy(
+ image_id=self.validated_virt_image_id,
+ storage_policy_id=amodule.params['storage_policy_id'],
+ )
+
+ if amodule.params['state'] == "absent" and self.validated_virt_image_id:
+ amodule.image_id_delete = self.validated_virt_image_id
+ image_id, image_facts = self.decort_virt_image_find(amodule)
+ if image_facts['status'] != 'PURGED':
+ self.decort_image_delete(amodule)
+
+ elif amodule.params['image_name'] or amodule.params['image_id']:
+ image_id, image_facts = self.decort_image_find(amodule)
+ self.validated_image_id = self.decort_image_package_facts(image_facts)['id']
+ if self.decort_image_package_facts(image_facts)['id'] > 0:
+ self.result['facts'] = self.decort_image_package_facts(image_facts, amodule.check_mode)
+
+ if amodule.params['state'] == "present" and self.validated_image_id == 0 and amodule.params['image_name'] and amodule.params['url']:
+ self.decort_image_create(amodule)
+ self.result['changed'] = True
+ image_id, image_facts = self.decort_image_find(amodule)
+ self.result['msg'] = ("OS image '{}' created").format(self.decort_image_package_facts(image_facts)['id'])
+ self.result['facts'] = self.decort_image_package_facts(image_facts, amodule.check_mode)
+ self.validated_image_id = self.decort_image_package_facts(image_facts)['id']
+
+ elif amodule.params['state'] == "absent" and self.validated_image_id:
+ amodule.image_id_delete = self.validated_image_id
+ image_id, image_facts = self.decort_image_find(amodule)
+ if image_facts['status'] != 'DESTROYED':
+ self.decort_image_delete(amodule)
+
+ if self.validated_image_id:
+ if (
+ amodule.params['storage_policy_id'] is not None
+ and amodule.params['storage_policy_id']
+ != image_facts['storage_policy_id']
+ ):
+ self.image_change_storage_policy(
+ image_id=self.validated_image_id,
+ storage_policy_id=amodule.params['storage_policy_id'],
+ )
+
+ if self.result['failed']:
+ # we failed to find the specified image - fail the module
+ self.result['changed'] = False
+ amodule.fail_json(**self.result)
+ else:
+ if self.validated_image_id:
+ _, image_facts = self.decort_image_find(amodule=amodule)
+ elif self.validated_virt_image_id:
+ _, image_facts = self.decort_virt_image_find(amodule=amodule)
+ self.result['facts'] = self.decort_image_package_facts(
+ arg_image_facts=image_facts,
+ arg_check_mode=amodule.check_mode,
+ )
+
+ amodule.exit_json(**self.result)
+
+
+def main():
+ decort_image().run()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/library/decort_image_list.py b/library/decort_image_list.py
new file mode 100644
index 0000000..918b909
--- /dev/null
+++ b/library/decort_image_list.py
@@ -0,0 +1,162 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_image_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortImageList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ bootable=dict(
+ type='bool',
+ ),
+ id=dict(
+ type='int',
+ ),
+ enabled=dict(
+ type='bool',
+ ),
+ hot_resize=dict(
+ type='bool',
+ ),
+ size_gb=dict(
+ type='int',
+ ),
+ name=dict(
+ type='str',
+ ),
+ public=dict(
+ type='bool',
+ ),
+ sep_id=dict(
+ type='int',
+ ),
+ sep_name=dict(
+ type='str',
+ ),
+ sep_pool_name=dict(
+ type='str',
+ ),
+ status=dict(
+ type='str',
+ choices=sdk_types.ImageStatus._member_names_,
+ ),
+ type=dict(
+ type='str',
+ choices=sdk_types.ImageType._member_names_,
+ ),
+ storage_policy_id=dict(
+ type='int',
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=(
+ sdk_types.ImageForListAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+ aparam_type: str | None = aparam_filter['type']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.ImageForListAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.ca.image.list(
+ bootable=aparam_filter['bootable'],
+ enabled=aparam_filter['enabled'],
+ hot_resize=aparam_filter['hot_resize'],
+ id=aparam_filter['id'],
+ name=aparam_filter['name'],
+ public=aparam_filter['public'],
+ sep_id=aparam_filter['sep_id'],
+ sep_name=aparam_filter['sep_name'],
+ sep_pool_name=aparam_filter['sep_pool_name'],
+ size_gb=aparam_filter['size_gb'],
+ status=(
+ sdk_types.ImageStatus[aparam_status]
+ if aparam_status else None
+ ),
+ storage_policy_id=aparam_filter['storage_policy_id'],
+ type=(
+ sdk_types.ImageType[aparam_type]
+ if aparam_type else None
+ ),
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortImageList().run()
+
+
+if __name__ == '__main__':
+ main()
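For reviewers: the `sort_by` string built in `get_info` above follows a simple convention, sketched here in isolation (the helper name `build_sort_by` is illustrative, not part of the module):

```python
def build_sort_by(field: str, asc: bool = True) -> str:
    """Compose an API-style sort expression: a '+' prefix requests
    ascending order, '-' descending, followed by the SDK field alias."""
    prefix = '+' if asc else '-'
    return f'{prefix}{field}'
```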
diff --git a/library/decort_jwt.py b/library/decort_jwt.py
index 80f7903..1ef067a 100644
--- a/library/decort_jwt.py
+++ b/library/decort_jwt.py
@@ -25,6 +25,7 @@ class DecortJWT(DecortController):
return amodule_init_args
+ @DecortController.handle_sdk_exceptions
def run(self):
self.result['jwt'] = self.jwt
self.amodule.exit_json(**self.result)
diff --git a/library/decort_k8s.py b/library/decort_k8s.py
index 1313ab9..b320b6b 100644
--- a/library/decort_k8s.py
+++ b/library/decort_k8s.py
@@ -33,10 +33,10 @@ class decort_k8s(DecortController):
'taints': [],
'annotations': [],
'ci_user_data': {},
- 'chipset': 'i440fx',
+ 'chipset': 'Q35',
}
- if arg_amodule.params['name'] == "" and arg_amodule.params['id'] is None:
+ if arg_amodule.params['name'] is None and arg_amodule.params['id'] is None:
self.result['failed'] = True
self.result['changed'] = False
self.result['msg'] = "Cannot manage k8s cluster when its ID is 0 and name is empty."
@@ -158,7 +158,7 @@ class decort_k8s(DecortController):
def create(self):
master_chipset = self.amodule.params['master_chipset']
if master_chipset is None:
- master_chipset = 'i440fx'
+ master_chipset = 'Q35'
target_wgs = deepcopy(self.amodule.params['workers'])
for wg in target_wgs:
@@ -301,7 +301,6 @@ class decort_k8s(DecortController):
),
name=dict(
type='str',
- default='',
),
id=dict(
type='int',
@@ -598,67 +597,72 @@ class decort_k8s(DecortController):
self.exit(fail=True)
-def main():
- subj = decort_k8s()
- amodule = subj.amodule
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ amodule = self.amodule
- if subj.amodule.check_mode:
- subj.result['changed'] = False
- if subj.k8s_id:
- # cluster is found - package facts and report success to Ansible
- subj.result['failed'] = False
- subj.result['facts'] = subj.package_facts(amodule.check_mode)
- amodule.exit_json(**subj.result)
- # we exit the module at this point
- else:
- subj.result['failed'] = True
- subj.result['msg'] = ("Cannot locate K8s cluster name '{}'. "
- "RG ID {}").format(amodule.params['name'],
- amodule.params['rg_id'],)
- amodule.fail_json(**subj.result)
-
- if subj.k8s_id:
- if subj.k8s_info['status'] in ("DELETING","DESTROYNG","CREATING","DESTROYING",
- "ENABLING","DISABLING","RESTORING","MODELED"):
- subj.error()
- elif subj.k8s_info['status'] == "DELETED":
- if amodule.params['state'] in (
- 'disabled', 'enabled', 'present', 'started', 'stopped'
- ):
- subj.k8s_restore(subj.k8s_id)
- subj.action(disared_state=amodule.params['state'],
- preupdate=True)
- if amodule.params['state'] == 'absent':
- if amodule.params['permanent']:
- subj.destroy()
- else:
- subj.nop()
- elif subj.k8s_info['status'] in ('ENABLED', 'DISABLED'):
- if amodule.params['state'] == 'absent':
- subj.destroy()
+ if self.amodule.check_mode:
+ self.result['changed'] = False
+ if self.k8s_id:
+ # cluster is found - package facts and report success to Ansible
+ self.result['failed'] = False
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
+ # we exit the module at this point
else:
- subj.action(disared_state=amodule.params['state'])
- elif subj.k8s_info['status'] == "DESTROYED":
- if amodule.params['state'] in ('present','enabled'):
- subj.create()
- if amodule.params['state'] == 'absent':
- subj.nop()
- else:
- if amodule.params['state'] == 'absent':
- subj.nop()
- if amodule.params['state'] in ('present','started'):
- subj.create()
- elif amodule.params['state'] in ('stopped', 'disabled','enabled'):
- subj.error()
-
- if subj.result['failed']:
- amodule.fail_json(**subj.result)
- else:
- if subj.k8s_should_exist:
- subj.result['facts'] = subj.package_facts(amodule.check_mode)
- amodule.exit_json(**subj.result)
- else:
- amodule.exit_json(**subj.result)
+ self.result['failed'] = True
+ self.result['msg'] = ("Cannot locate K8s cluster name '{}'. "
+ "RG ID {}").format(amodule.params['name'],
+ amodule.params['rg_id'],)
+ amodule.fail_json(**self.result)
-if __name__ == "__main__":
+ if self.k8s_id:
+ if self.k8s_info['status'] in ("DELETING","DESTROYNG","CREATING","DESTROYING",
+ "ENABLING","DISABLING","RESTORING","MODELED"):
+ self.error()
+ elif self.k8s_info['status'] == "DELETED":
+ if amodule.params['state'] in (
+ 'disabled', 'enabled', 'present', 'started', 'stopped'
+ ):
+ self.k8s_restore(self.k8s_id)
+ self.action(disared_state=amodule.params['state'],
+ preupdate=True)
+ if amodule.params['state'] == 'absent':
+ if amodule.params['permanent']:
+ self.destroy()
+ else:
+ self.nop()
+ elif self.k8s_info['status'] in ('ENABLED', 'DISABLED'):
+ if amodule.params['state'] == 'absent':
+ self.destroy()
+ else:
+ self.action(disared_state=amodule.params['state'])
+ elif self.k8s_info['status'] == "DESTROYED":
+ if amodule.params['state'] in ('present','enabled'):
+ self.create()
+ if amodule.params['state'] == 'absent':
+ self.nop()
+ else:
+ if amodule.params['state'] == 'absent':
+ self.nop()
+ if amodule.params['state'] in ('present','started'):
+ self.create()
+ elif amodule.params['state'] in ('stopped', 'disabled','enabled'):
+ self.error()
+
+ if self.result['failed']:
+ amodule.fail_json(**self.result)
+ else:
+ if self.k8s_should_exist:
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
+ else:
+ amodule.exit_json(**self.result)
+
+
+def main():
+ decort_k8s().run()
+
+
+if __name__ == '__main__':
main()
diff --git a/library/decort_kvmvm.py b/library/decort_kvmvm.py
index 75f8f79..29a8113 100644
--- a/library/decort_kvmvm.py
+++ b/library/decort_kvmvm.py
@@ -1,2492 +1,87 @@
#!/usr/bin/python
-import re
-from typing import Sequence, Any, TypeVar
DOCUMENTATION = r'''
---
module: decort_kvmvm
-description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home).
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
'''
from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.basic import env_fallback
-from ansible.module_utils.decort_utils import *
-DefaultT = TypeVar('DefaultT')
-
-
-class decort_kvmvm(DecortController):
- is_vm_stopped_or_will_be_stopped: None | bool = None
- guest_agent_exec_result: None | str = None
-
- def __init__(self):
- # call superclass constructor first
- super(decort_kvmvm, self).__init__(AnsibleModule(**self.amodule_init_args))
- arg_amodule = self.amodule
-
- self.aparam_networks_has_dpdk = None
-
- self.check_amodule_args()
-
- self.comp_should_exist = False
- # This following flag is used to avoid extra (and unnecessary) get of compute details prior to
- # packaging facts before the module completes. As ""
- self.skip_final_get = False
- self.force_final_get = False
- self.comp_id = 0
- self.comp_info = None
- self.rg_id = 0
- self.aparam_image = None
-
- validated_acc_id = 0
- validated_rg_id = 0
- validated_rg_facts = None
-
- self.vm_to_clone_id = 0
- self.vm_to_clone_info = None
-
- if self.aparams['get_snapshot_merge_status']:
- self.force_final_get = True
-
- if arg_amodule.params['clone_from'] is not None:
- self.vm_to_clone_id, self.vm_to_clone_info, _ = (
- self._compute_get_by_id(
- comp_id=self.aparams['clone_from']['id'],
- )
- )
- self.rg_id = self.vm_to_clone_info['rgId']
- if not self.vm_to_clone_id:
- self.message(
- f'Check for parameter "clone_from.id" failed: '
- f'VM ID {self.aparams["clone_from"]["id"]} does not exist.'
- )
- self.exit(fail=True)
- elif self.vm_to_clone_info['status'] in ('DESTROYED', 'DELETED'):
- self.message(
- f'Check for parameter "clone_from.id" failed: '
- f'VM ID {self.aparams["clone_from"]["id"]} is in '
- f'{self.vm_to_clone_info["status"]} state and '
- f'cannot be cloned.'
- )
- self.exit(fail=True)
-
- clone_id, clone_dict, _ = self.compute_find(
- comp_name=self.aparams['name'],
- rg_id=self.rg_id,
- )
- self.check_amodule_args_for_clone(
- clone_id=clone_id,
- clone_dict=clone_dict,
- )
- self.check_amodule_args_for_change()
-
- if not clone_id:
- clone_id = self.clone()
- if self.amodule.check_mode:
- self.exit()
-
- self.comp_id, self.comp_info, self.rg_id = self._compute_get_by_id(
- comp_id=clone_id,
- need_custom_fields=True,
- need_console_url=self.aparams['get_console_url'],
- )
-
- return
-
- comp_id = arg_amodule.params['id']
-
- # Analyze Compute name & ID, RG name & ID and build arguments to compute_find accordingly.
- if arg_amodule.params['name'] == "" and comp_id == 0:
- self.result['failed'] = True
- self.result['changed'] = False
- self.result['msg'] = "Cannot manage Compute when its ID is 0 and name is empty."
- self.fail_json(**self.result)
- # fail the module - exit
-
- if not comp_id: # manage Compute by name -> need RG identity
- if not arg_amodule.params['rg_id']: # RG ID is not set -> locate RG by name -> need account ID
- validated_acc_id, self._acc_info = self.account_find(arg_amodule.params['account_name'],
- arg_amodule.params['account_id'])
- if not validated_acc_id:
- self.result['failed'] = True
- self.result['changed'] = False
- self.result['msg'] = ("Current user does not have access to the account ID {} / "
- "name '{}' or non-existent account specified.").format(arg_amodule.params['account_id'],
- arg_amodule.params['account_name'])
- self.fail_json(**self.result)
- # fail the module -> exit
- # now validate RG
- validated_rg_id, validated_rg_facts = self.rg_find(validated_acc_id,
- arg_amodule.params['rg_id'],
- arg_amodule.params['rg_name'])
- if not validated_rg_id:
- self.result['failed'] = True
- self.result['changed'] = False
- self.result['msg'] = "Cannot find RG ID {} / name '{}'.".format(arg_amodule.params['rg_id'],
- arg_amodule.params['rg_name'])
- self.fail_json(**self.result)
- # fail the module - exit
-
- self.rg_id = validated_rg_id
- arg_amodule.params['rg_id'] = validated_rg_id
- arg_amodule.params['rg_name'] = validated_rg_facts['name']
- self.acc_id = validated_rg_facts['accountId']
-
- # at this point we are ready to locate Compute, and if anything fails now, then it must be
- # because this Compute does not exist or something goes wrong in the upstream API
- # We call compute_find with check_state=False as we also consider the case when a Compute
- # specified by account / RG / compute name never existed and will be created for the first time.
- self.comp_id, self.comp_info, self.rg_id = self.compute_find(comp_id=comp_id,
- comp_name=arg_amodule.params['name'],
- rg_id=validated_rg_id,
- check_state=False,
- need_custom_fields=True,
- need_console_url=self.aparams['get_console_url'])
-
- if self.comp_id:
- self.comp_should_exist = True
- self.acc_id = self.comp_info['accountId']
- self.rg_id = self.comp_info['rgId']
- self.check_amodule_args_for_change()
- else:
- if self.amodule.params['state'] != 'absent':
- self.check_amodule_args_for_create()
-
- return
-
- def check_amodule_args(self):
- """
- Additional Ansible Module arguments validation that
- cannot be implemented using Ansible Argument spec.
- """
-
- check_error = False
-
- # Check parameter "networks"
- aparam_nets = self.aparams['networks']
- if aparam_nets:
- net_types = {net['type'] for net in aparam_nets}
- # DPDK and other networks
- self.aparam_networks_has_dpdk = False
- if self.VMNetType.DPDK.value in net_types:
- self.aparam_networks_has_dpdk = True
- if not net_types.issubset(
- {self.VMNetType.DPDK.value, self.VMNetType.EMPTY.value}
- ):
- check_error = True
- self.message(
- 'Check for parameter "networks" failed:'
- ' a compute cannot be connected to a DPDK network and'
- ' a network of another type at the same time.'
- )
-
- if (
- self.aparams['hp_backed'] is not None
- and not self.aparams['hp_backed']
- ):
- check_error = True
- self.message(
- 'Check for parameter "networks" failed: '
- 'hp_backed must be set to True to connect a compute '
- 'to a DPDK network.'
- )
- for net in aparam_nets:
- net_type = net['type']
-
- if (
- net['type'] not in (
- self.VMNetType.SDN.value,
- self.VMNetType.EMPTY.value,
- )
- and not isinstance(net['id'], int)
- ):
- check_error = True
- self.message(
- 'Check for parameter "networks" failed: '
- 'Type of parameter "id" must be integer for '
- f'{net["type"]} network type'
- )
-
- # MTU
- net_mtu = net['mtu']
- if net_mtu is not None:
- mtu_net_types = (
- self.VMNetType.DPDK.value,
- self.VMNetType.EXTNET.value,
- )
-
- # Allowed network types for set MTU
- if net_type not in mtu_net_types:
- check_error = True
- self.message(
- 'Check for parameter "networks" failed:'
- ' MTU can be specifed'
- ' only for DPDK or EXTNET network'
- ' (remove parameter "mtu" for network'
- f' {net["type"]} with ID {net["id"]}).'
- )
-
- # Maximum MTU
- MAX_MTU = 9216
- if net_type in mtu_net_types and net_mtu > MAX_MTU:
- check_error = True
- self.message(
- 'Check for parameter "networks" failed:'
- f' MTU must be no more than {MAX_MTU}'
- ' (change value for parameter "mtu" for network'
- f' {net["type"]} with ID {net["id"]}).'
- )
-
- # EXTNET minimum MTU
- EXTNET_MIN_MTU = 1500
- if (
- net_type == self.VMNetType.EXTNET.value
- and net_mtu < EXTNET_MIN_MTU
- ):
- check_error = True
- self.message(
- 'Check for parameter "networks" failed:'
- f' MTU for {self.VMNetType.EXTNET.value} network'
- f' must be at least {EXTNET_MIN_MTU}'
- ' (change value for parameter "mtu" for network'
- f' {net["type"]} with ID {net["id"]}).'
- )
-
- # DPDK minimum MTU
- DPDK_MIN_MTU = 1
- if (
- net_type == self.VMNetType.DPDK.value
- and net_mtu < DPDK_MIN_MTU
- ):
- check_error = True
- self.message(
- 'Check for parameter "networks" failed:'
- f' MTU for {self.VMNetType.DPDK.value} network'
- f' must be at least {DPDK_MIN_MTU}'
- ' (change value for parameter "mtu" for network'
- f' {net["type"]} with ID {net["id"]}).'
- )
-
- # MAC address
- if net['mac'] is not None:
- if net['type'] == self.VMNetType.EMPTY.value:
- check_error = True
- self.message(
- 'Check for parameter "networks.mac" failed: '
- 'MAC-address cannot be specified for an '
- 'EMPTY type network.'
- )
-
- mac_validation_result = re.match(
- '[0-9a-f]{2}([:]?)[0-9a-f]{2}(\\1[0-9a-f]{2}){4}$',
- net['mac'].lower(),
- )
- if not mac_validation_result:
- check_error = True
- self.message(
- 'Check for parameter "networks.mac" failed: '
- f'MAC-address for network ID {net["id"]} must be '
- 'specified in quotes and in the format '
- '"XX:XX:XX:XX:XX:XX".'
- )
- if self.VMNetType.SDN.value in net_types:
- if not net_types.issubset(
- {
- self.VMNetType.SDN.value,
- self.VMNetType.EMPTY.value,
- self.VMNetType.VFNIC.value,
- }
- ):
- check_error = True
- self.message(
- 'Check for parameter "networks" failed: '
- 'a compute can be connected to a SDN network and '
- 'only to VFNIC, EMPTY networks at the same time.'
- )
- aparam_custom_fields = self.aparams['custom_fields']
- if aparam_custom_fields is not None:
- if (
- aparam_custom_fields['disable']
- and aparam_custom_fields['fields'] is not None
- ):
- check_error = True
- self.message(
- 'Check for parameter "custom_fields" failed: '
- '"fields" cannot be set if "disable" is True.'
- )
-
- aparam_pref_cpu_cores = self.aparams['preferred_cpu_cores']
- if (
- aparam_pref_cpu_cores
- and len(set(aparam_pref_cpu_cores)) != len(aparam_pref_cpu_cores)
- ):
- check_error = True
- self.message(
- 'Check for parameter "preferred_cpu_cores" failed: '
- 'the list must contain only unique elements.'
- )
-
- aparam_state = self.aparams['state']
- new_state = None
- match aparam_state:
- case 'halted' | 'poweredoff':
- new_state = 'stopped'
- case 'poweredon':
- new_state = 'started'
-
- if new_state:
- self.message(
- msg=f'"{aparam_state}" state is deprecated and might be '
- f'removed in newer versions. '
- f'Please use "{new_state}" instead.',
- warning=True,
- )
-
- if check_error:
- self.exit(fail=True)
-
- def nop(self):
- """No operation (NOP) handler for Compute management by decort_kvmvm module.
- This function is intended to be called from the main switch construct of the module
- when current state -> desired state change logic does not require any changes to
- the actual Compute state.
- """
- self.result['failed'] = False
- self.result['changed'] = False
- if self.comp_id:
- self.result['msg'] = ("No state change required for Compute ID {} because of its "
- "current status '{}'.").format(self.comp_id, self.comp_info['status'])
- else:
- self.result['msg'] = ("No state change to '{}' can be done for "
- "non-existent Compute instance.").format(self.amodule.params['state'])
- return
-
- def error(self):
- """Error handler for Compute instance management by decort_kvmvm module.
- This function is intended to be called when an invalid state change is requested.
-        Invalid means that the current state does not allow any operations on the Compute,
-        or that the transition from the current to the desired state is not technically possible.
- """
- self.result['failed'] = True
- self.result['changed'] = False
- if self.comp_id:
- self.result['msg'] = ("Invalid target state '{}' requested for Compute ID {} in the "
- "current status '{}'.").format(self.comp_id,
- self.amodule.params['state'],
- self.comp_info['status'])
- else:
- self.result['msg'] = ("Invalid target state '{}' requested for non-existent Compute name '{}' "
- "in RG ID {} / name '{}'").format(self.amodule.params['state'],
- self.amodule.params['name'],
- self.amodule.params['rg_id'],
- self.amodule.params['rg_name'])
- return
-
- def create(self):
- """New Compute instance creation handler for decort_kvmvm module.
- This function checks for the presence of required parameters and deploys a new KVM VM
- Compute instance with the specified characteristics into the target Resource Group.
- The target RG must exist.
- """
-        # the following parameters must be present: cpu, ram
-        # each of the following calls will abort if the argument is missing
- self.check_amodule_argument('cpu')
- self.check_amodule_argument('ram')
-
- aparam_boot = self.aparams['boot']
- validated_bdisk_size = 0
- boot_mode = 'bios'
- loader_type = 'unknown'
- if aparam_boot is not None:
- validated_bdisk_size = self.amodule.params['boot'].get(
- 'disk_size', 0
- )
-
- if aparam_boot['mode'] is None:
- self.message(
- msg=self.MESSAGES.default_value_used(
- param_name='boot.mode',
- default_value=boot_mode
- ),
- warning=True,
- )
- else:
- boot_mode = aparam_boot['mode']
-
- if aparam_boot['loader_type'] is None:
- self.message(
- msg=self.MESSAGES.default_value_used(
- param_name='boot.loader_type',
- default_value=loader_type
- ),
- warning=True,
- )
-
- else:
- loader_type = aparam_boot['loader_type']
-
- image_id, image_facts = None, None
- if self.aparam_image:
- if (
- self.check_amodule_argument('image_id', abort=False)
- and self.amodule.params['image_id'] > 0
- ):
- # find image by image ID and account ID
- # image_find(self, image_id, account_id, rg_id=0, sepid=0, pool=""):
- image_id, image_facts = self.image_find(
- image_id=self.amodule.params['image_id'],
- account_id=self.acc_id)
-
- if validated_bdisk_size <= image_facts['size']:
- # adjust disk size to the minimum allowed by OS image, which will be used to spin off this Compute
- validated_bdisk_size = image_facts['size']
-
-        # NOTE: due to a libvirt "feature" that impacts management of a VM created without any network interfaces,
-        # we create KVM VM in HALTED state.
-        # Consequently, if the desired state is different from 'halted' or 'poweredoff', we should explicitly start it
-        # in the upstream code.
-        # See the corresponding NOTE below for another place where this "feature" is worked around.
- #
- # Once this "feature" is fixed, make sure VM is created according to the actual desired state
- #
- start_compute = False # change this once a workaround for the aforementioned libvirt "feature" is implemented
- if self.amodule.params['state'] in ('halted', 'poweredoff', 'stopped'):
- start_compute = False
-
- if self.amodule.params['ssh_key'] and self.amodule.params['ssh_key_user'] and not self.amodule.params['ci_user_data']:
- cloud_init_params = {'users': [
- {"name": self.amodule.params['ssh_key_user'],
- "ssh-authorized-keys": [self.amodule.params['ssh_key']],
- "shell": '/bin/bash'}
- ]}
- elif self.amodule.params['ci_user_data']:
- cloud_init_params = self.amodule.params['ci_user_data']
- else:
- cloud_init_params = None
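
The branch above gives an explicitly supplied `ci_user_data` dict precedence over a user entry generated from the SSH key. Extracted as a standalone helper (the function name is illustrative, not part of the module):

```python
def build_cloud_init_params(ssh_key=None, ssh_key_user=None, ci_user_data=None):
    """Mirror the precedence used above: explicit cloud-init user data
    wins; otherwise a minimal 'users' entry is built from the SSH key
    and user name; otherwise no user data is passed at all."""
    if ssh_key and ssh_key_user and not ci_user_data:
        return {'users': [
            {'name': ssh_key_user,
             'ssh-authorized-keys': [ssh_key],
             'shell': '/bin/bash'}
        ]}
    if ci_user_data:
        return ci_user_data
    return None
```

Note that supplying `ci_user_data` suppresses the generated SSH-key entry entirely rather than merging with it.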
-
- cpu_pin = self.aparams['cpu_pin']
- if cpu_pin is None:
- cpu_pin = False
-
- hp_backed = self.aparams['hp_backed']
- if hp_backed is None:
- hp_backed = False
-
- numa_affinity = self.aparams['numa_affinity']
- if numa_affinity is None:
- numa_affinity = 'none'
-
- chipset = self.amodule.params['chipset']
- if chipset is None:
- chipset = 'i440fx'
- self.message(
- msg=f'Chipset not specified, '
- f'default value "{chipset}" will be used.',
- warning=True,
- )
-
- network_interface_naming = self.aparams['network_interface_naming']
- if network_interface_naming is None:
- network_interface_naming = 'ens'
- self.message(
- msg=self.MESSAGES.default_value_used(
- param_name='network_interface_naming',
- default_value=network_interface_naming
- ),
- warning=True,
- )
-
- hot_resize = self.aparams['hot_resize']
- if hot_resize is None:
- hot_resize = False
- self.message(
- msg=self.MESSAGES.default_value_used(
- param_name='hot_resize',
- default_value=hot_resize
- ),
- warning=True,
- )
- # if we get through here, all parameters required to create new Compute instance should be at hand
-
- # NOTE: KVM VM is created in HALTED state and must be explicitly started
- self.comp_id = self.kvmvm_provision(
- rg_id=self.rg_id,
- comp_name=self.amodule.params['name'],
- cpu=self.amodule.params['cpu'],
- ram=self.amodule.params['ram'],
- boot_disk_size=validated_bdisk_size,
- image_id=image_id,
- description=self.amodule.params['description'],
- userdata=cloud_init_params,
-            sep_id=self.amodule.params['sep_id'] if "sep_id" in self.amodule.params else None,
-            pool_name=self.amodule.params['pool'] if "pool" in self.amodule.params else None,
- start_on_create=start_compute,
- chipset=chipset,
- cpu_pin=cpu_pin,
- hp_backed=hp_backed,
- numa_affinity=numa_affinity,
- preferred_cpu_cores=self.amodule.params['preferred_cpu_cores'],
- boot_mode=boot_mode,
- boot_loader_type=loader_type,
- network_interface_naming=network_interface_naming,
- hot_resize=hot_resize,
- zone_id=self.aparams['zone_id'],
- storage_policy_id=self.aparams['storage_policy_id'],
- os_version=self.aparams['os_version'],
- )
- self.comp_should_exist = True
-
- # Originally we would have had to re-read comp_info after VM was provisioned
- # _, self.comp_info, _ = self.compute_find(self.comp_id)
-
- # However, to avoid extra call to compute/get API we need to construct comp_info so that
- # the below calls to compute_networks and compute_data_disks work properly.
- #
- # Here we are imitating comp_info structure as if it has been returned by a real call
- # to API compute/get
- self.comp_info = {
- 'id': self.comp_id,
- 'accountId': self.acc_id,
- 'status': "ENABLED",
- 'techStatus': "STOPPED",
- 'interfaces': [], # new compute instance is created network-less
- 'disks': [], # new compute instance is created without any data disks attached
- 'tags': {},
- 'affinityLabel': "",
- 'affinityRules': [],
- 'antiAffinityRules': [],
- }
-
- #
- # Compute was created
- #
- # Setup network connections
- if self.amodule.params['networks'] is not None:
- self.compute_networks(
- comp_dict=self.comp_info,
- new_networks=self.amodule.params['networks'],
- )
- # Next manage data disks
- if self.amodule.params['disks'] is not None:
- self.compute_disks(
- comp_dict=self.comp_info,
- aparam_disks_dict=self.amodule.params['disks'],
- )
-
- self.compute_affinity(self.comp_info,
- self.amodule.params['tag'],
- self.amodule.params['aff_rule'],
- self.amodule.params['aaff_rule'],
- label=self.amodule.params['affinity_label'],)
- # NOTE: see NOTE above regarding libvirt "feature" and new VMs created in HALTED state
- if self.aparam_image:
- if self.amodule.params['state'] in ('poweredon', 'started'):
- self.compute_powerstate(self.comp_info, 'started')
-
- if self.aparams['custom_fields'] is None:
- custom_fields_disable = True
- custom_fields_fields = None
- else:
- custom_fields_disable = self.aparams['custom_fields']['disable']
- custom_fields_fields = self.aparams['custom_fields']['fields']
- if not custom_fields_disable:
- self.compute_set_custom_fields(
- compute_id=self.comp_info['id'],
- custom_fields=custom_fields_fields,
- )
-
- # read in Compute facts once more after all initial setup is complete
- _, self.comp_info, _ = self.compute_find(
- comp_id=self.comp_id,
- need_custom_fields=True,
- need_console_url=self.amodule.params['get_console_url'],
- )
-
- if self.compute_update_args:
- self.compute_update(
- compute_id=self.comp_info['id'],
- **self.compute_update_args,
- )
- else:
- self.skip_final_get = True
-
- return
-
- def destroy(self):
- """Compute destroy handler for VM management by decort_kvmvm module.
- Note that this handler deletes the VM permanently together with all assigned disk resources.
- """
- self.compute_delete(comp_id=self.comp_id, permanently=True)
- self.comp_id, self.comp_info, _ = self._compute_get_by_id(self.comp_id)
- return
-
- def restore(self):
- """Compute restore handler for Compute instance management by decort_kvmvm module.
- Note that restoring Compute is only possible if it is in DELETED state. If called on a
- Compute instance in any other state, the method will throw an error and abort the execution
- of the module.
- """
- self.compute_restore(comp_id=self.comp_id)
- # TODO - do we need updated comp_info to manage port forwards and size after VM is restored?
- _, self.comp_info, _ = self.compute_find(
- comp_id=self.comp_id,
- need_custom_fields=True,
- need_console_url=self.amodule.params['get_console_url'],
- )
- self.modify()
- self.comp_should_exist = True
- return
-
- def modify(self, arg_wait_cycles=0):
- """Compute modify handler for KVM VM management by decort_kvmvm module.
- This method is a convenience wrapper that calls individual Compute modification functions from
- DECORT utility library (module).
-
- Note that it does not modify power state of KVM VM.
- """
- if self.compute_update_args:
- self.compute_update(
- compute_id=self.comp_info['id'],
- **self.compute_update_args,
- )
-
- if self.amodule.params['rollback_to'] is not None:
- self.compute_rollback(
- compute_id=self.comp_info['id'],
- snapshot_label=self.amodule.params['rollback_to'],
- )
-
- if self.amodule.params['networks'] is not None:
- self.compute_networks(
- comp_dict=self.comp_info,
- new_networks=self.aparams['networks'],
- order_changing=self.aparams['network_order_changing'],
- )
-
- if self.amodule.params['disks'] is not None:
- self.compute_disks(
- comp_dict=self.comp_info,
- aparam_disks_dict=self.amodule.params['disks'],
- )
-
- aparam_boot = self.amodule.params['boot']
- if aparam_boot is not None:
- aparam_disk_id = aparam_boot['disk_id']
- if aparam_disk_id is not None:
- for disk in self.comp_info['disks']:
- if disk['id'] == aparam_disk_id and disk['type'] != 'B':
- self.compute_boot_disk(
- comp_id=self.comp_info['id'],
- boot_disk=aparam_disk_id,
- )
- break
-
- boot_disk_new_size = aparam_boot['disk_size']
- if boot_disk_new_size:
- self.compute_bootdisk_size(self.comp_info, boot_disk_new_size)
-
- boot_order = aparam_boot['order']
- if (
- boot_order is not None
- and self.comp_info['bootOrder'] != boot_order
- ):
- self.compute_set_boot_order(
- vm_id=self.comp_id,
- order=boot_order,
- )
-
- disk_redeploy = aparam_boot['disk_redeploy']
- if disk_redeploy:
- auto_start = False
- if self.aparams['state'] is None:
- if self.comp_info['techStatus'] == 'STARTED':
- auto_start = True
- else:
- if self.aparams['state'] == 'started':
- auto_start = True
-
- disk_size = None
- if (
- aparam_boot is not None
- and aparam_boot['disk_size'] is not None
- ):
- disk_size = aparam_boot['disk_size']
- elif self.aparams['image_id'] is not None:
- _, image_facts = self.image_find(
- image_id=self.aparams['image_id'],
- )
- disk_size = image_facts['size']
-
- os_version = None
- if (
- self.aparams['image_id'] is None
- or self.aparams['image_id'] == self.comp_info['imageId']
- ):
- if self.aparams['os_version'] is None:
- os_version = self.comp_info['os_version']
- else:
- os_version = self.aparams['os_version']
- elif self.aparams['image_id'] != self.comp_info['imageId']:
- os_version = self.aparams['os_version']
-
- self.compute_disk_redeploy(
- vm_id=self.comp_id,
- storage_policy_id=self.aparams['storage_policy_id'],
- image_id=self.aparams['image_id'],
- disk_size=disk_size,
- auto_start=auto_start,
- os_version=os_version,
- )
-
- self.compute_resize(self.comp_info,
- self.amodule.params['cpu'], self.amodule.params['ram'],
- wait_for_state_change=arg_wait_cycles)
-
- self.compute_affinity(self.comp_info,
- self.amodule.params['tag'],
- self.amodule.params['aff_rule'],
- self.amodule.params['aaff_rule'],
- label=self.amodule.params['affinity_label'])
-
- aparam_custom_fields = self.amodule.params['custom_fields']
- if aparam_custom_fields is not None:
- compute_custom_fields = self.comp_info['custom_fields']
- if aparam_custom_fields['disable']:
- if compute_custom_fields is not None:
- self.compute_disable_custom_fields(
- compute_id=self.comp_info['id'],
- )
- else:
- if compute_custom_fields != aparam_custom_fields['fields']:
- self.compute_set_custom_fields(
- compute_id=self.comp_info['id'],
- custom_fields=aparam_custom_fields['fields'],
- )
-
- aparam_zone_id = self.aparams['zone_id']
- if aparam_zone_id is not None and aparam_zone_id != self.comp_info['zoneId']:
- self.compute_migrate_to_zone(
- compute_id=self.comp_id,
- zone_id=aparam_zone_id,
- )
-
- aparam_guest_agent = self.aparams['guest_agent']
- if aparam_guest_agent is not None:
- if aparam_guest_agent['enabled'] is not None:
- if (
- aparam_guest_agent['enabled']
- and not self.comp_info['qemu_guest']['enabled']
- ):
- self.compute_guest_agent_enable(vm_id=self.comp_id)
- elif (
- aparam_guest_agent['enabled'] is False
- and self.comp_info['qemu_guest']['enabled']
- ):
- self.compute_guest_agent_disable(vm_id=self.comp_id)
-
- if aparam_guest_agent['update_available_commands']:
- self.compute_guest_agent_feature_update(vm_id=self.comp_id)
-
- aparam_guest_agent_exec = aparam_guest_agent['exec']
- if aparam_guest_agent_exec is not None:
- self.guest_agent_exec_result = (
- self.compute_guest_agent_execute(
- vm_id=self.comp_id,
- cmd=aparam_guest_agent_exec['cmd'],
- args=aparam_guest_agent_exec['args'],
- )
- )
-
- aparam_cdrom = self.aparams['cdrom']
- if aparam_cdrom is not None:
- mode = aparam_cdrom['mode']
- image_id = aparam_cdrom['image_id']
- if (
- mode == 'insert'
- and self.comp_info['cdImageId'] != image_id
- ):
- self.compute_cd_insert(
- vm_id=self.comp_id,
- image_id=image_id,
- )
- elif mode == 'eject':
- self.compute_cd_eject(
- vm_id=self.comp_id,
- )
-
- if self.aparams['abort_cloning']:
- self.compute_clone_abort(
- vm_id=self.comp_id,
- )
-
- return
-
- @property
- def compute_update_args(self) -> dict:
- result_args = {}
-
- params_to_check = {
- 'name': 'name',
- 'chipset': 'chipset',
- 'cpu_pin': 'cpupin',
- 'hp_backed': 'hpBacked',
- 'numa_affinity': 'numaAffinity',
- 'description': 'desc',
- 'auto_start': 'autoStart',
- 'preferred_cpu_cores': 'preferredCpu',
- 'boot.mode': 'bootType',
- 'boot.loader_type': 'loaderType',
- 'network_interface_naming': 'networkInterfaceNaming',
- 'hot_resize': 'hotResize',
- 'os_version': 'os_version',
- }
-
- def get_nested_value(
- d: dict,
- keys: Sequence[str],
- default: DefaultT | None = None,
- ) -> Any | DefaultT:
- if not keys:
- raise ValueError
-
- key = keys[0]
- if key not in d:
- return default
- value = d[key]
-
- if len(keys) > 1:
- if isinstance(value, dict):
- nested_d = value
- return get_nested_value(
- d=nested_d,
- keys=keys[1:],
- default=default,
- )
- if value is None:
- return default
- raise ValueError(
- f'The key {key} found, but its value is not a dictionary.'
- )
- return value
-
- for aparam_name, comp_field_name in params_to_check.items():
- aparam_value = get_nested_value(
- d=self.aparams,
- keys=aparam_name.split('.'),
- )
- comp_value = get_nested_value(
- d=self.comp_info,
- keys=comp_field_name.split('.'),
- )
-
- if aparam_value is not None and aparam_value != comp_value:
-            # If disk_redeploy is True, there is no need to update os_version
-            # here; it is updated through compute_disk_redeploy instead.
- if (
- aparam_name == 'os_version'
- and self.aparams['boot'] is not None
- and self.aparams['boot']['disk_redeploy']
- ):
- continue
- result_args[aparam_name.replace('.', '_')] = (
- aparam_value
- )
-
- return result_args
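
`get_nested_value` is what lets the dotted keys in `params_to_check` (e.g. `boot.mode` mapped to `bootType`) address flat and nested dicts uniformly. A self-contained version for reference (a standalone sketch, without the module's `DefaultT` typing):

```python
from typing import Any, Sequence

def get_nested_value(d: dict, keys: Sequence[str], default: Any = None) -> Any:
    """Walk a nested dict along keys; return default when a key is
    absent or an intermediate value is None. A non-dict, non-None
    intermediate value is an error."""
    if not keys:
        raise ValueError('keys must not be empty')
    key = keys[0]
    if key not in d:
        return default
    value = d[key]
    if len(keys) > 1:
        if isinstance(value, dict):
            return get_nested_value(value, keys[1:], default)
        if value is None:
            return default
        raise ValueError(f'The key {key} found, but its value is not a dictionary.')
    return value
```

For example, `get_nested_value(params, 'boot.mode'.split('.'))` returns the nested value, while a `boot` set to `None` simply yields the default instead of raising.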
-
- def package_facts(self, check_mode=False):
- """Package a dictionary of KVM VM facts according to the decort_kvmvm module specification.
- This dictionary will be returned to the upstream Ansible engine at the completion of decort_kvmvm
- module run.
-
- @param check_mode: boolean that tells if this Ansible module is run in check mode
-
-        @return: dictionary of KVM VM facts, containing sufficient information to manage the KVM VM in
- subsequent Ansible tasks.
- """
-
- ret_dict = dict(id=0,
- name="",
- arch="",
- cpu="",
- ram="",
- disk_size=0,
- state="CHECK_MODE",
- tech_status="",
- account_id=0,
- rg_id=0,
- username="",
- password="",
- public_ips=[], # direct IPs; this list can be empty
- private_ips=[], # IPs on ViNSes; usually, at least one IP is listed
- nat_ip="", # IP of the external ViNS interface; can be empty.
- tags={},
- chipset="",
- interfaces=[],
- cpu_pin="",
- hp_backed="",
- numa_affinity="",
- custom_fields={},
- vnc_password="",
- snapshots=[],
- preferred_cpu_cores=[],
- clones=[],
- clone_reference=0,
- )
-
- if check_mode or self.comp_info is None:
-            # if in check mode (or no facts provided) return immediately with the default values
- return ret_dict
-
- # if not self.comp_should_exist:
- # ret_dict['state'] = "ABSENT"
- # return ret_dict
-
- ret_dict['id'] = self.comp_info['id']
- ret_dict['name'] = self.comp_info['name']
- ret_dict['arch'] = self.comp_info['arch']
- ret_dict['state'] = self.comp_info['status']
- ret_dict['tech_status'] = self.comp_info['techStatus']
- ret_dict['account_id'] = self.comp_info['accountId']
- ret_dict['rg_id'] = self.comp_info['rgId']
- if self.comp_info['tags']:
- ret_dict['tags'] = self.comp_info['tags']
-        # if the VM is an imported VM, then the 'osUsers' list may be empty,
-        # so check for this case before trying to access login and password values
- if len(self.comp_info['osUsers']):
- ret_dict['username'] = self.comp_info['osUsers'][0]['login']
- ret_dict['password'] = self.comp_info['osUsers'][0]['password']
-
- if self.comp_info['interfaces']:
-            # We need a list of all ViNSes in the account that owns this Compute
-            # to find a ViNS that may have an active external connection. Then
-            # we will save the external IP address of that connection in ret_dict['nat_ip']
-
- for iface in self.comp_info['interfaces']:
- if iface['connType'] == "VXLAN": # This is ViNS connection
- ret_dict['private_ips'].append(iface['ipAddress'])
- # Now we need to check if this ViNS has GW function and external connection.
- # If it does - save public IP address of GW VNF in ret_dict['nat_ip']
- elif iface['connType'] == "VLAN": # This is direct external network connection
- ret_dict['public_ips'].append(iface['ipAddress'])
-
- iface['security_group_mode'] = iface.pop('enable_secgroups')
- iface['security_group_ids'] = iface.pop('security_groups')
-
- ret_dict['cpu'] = self.comp_info['cpus']
- ret_dict['ram'] = self.comp_info['ram']
-
- ret_dict['image_id'] = self.comp_info['imageId']
-
- ret_dict['disks'] = self.comp_info['disks']
- for disk in ret_dict['disks']:
- if disk['type'] == 'B':
- # if it is a boot disk - store its size
- ret_dict['disk_size'] = disk['sizeMax']
-
- ret_dict['chipset'] = self.comp_info['chipset']
-
- ret_dict['interfaces'] = self.comp_info['interfaces']
-
- ret_dict['cpu_pin'] = self.comp_info['cpupin']
- ret_dict['hp_backed'] = self.comp_info['hpBacked']
- ret_dict['numa_affinity'] = self.comp_info['numaAffinity']
-
- ret_dict['custom_fields'] = self.comp_info['custom_fields']
-
- ret_dict['vnc_password'] = self.comp_info['vncPasswd']
-
- ret_dict['auto_start'] = self.comp_info['autoStart']
-
- ret_dict['snapshots'] = self.comp_info['snapSets']
-
- ret_dict['preferred_cpu_cores'] = self.comp_info['preferredCpu']
-
- if self.amodule.params['get_console_url']:
- ret_dict['console_url'] = self.comp_info['console_url']
-
- ret_dict['clones'] = self.comp_info['clones']
- ret_dict['clone_reference'] = self.comp_info['cloneReference']
-
- ret_dict['boot_mode'] = self.comp_info['bootType']
- ret_dict['boot_loader_type'] = self.comp_info['loaderType']
- ret_dict['network_interface_naming'] = self.comp_info[
- 'networkInterfaceNaming'
- ]
- ret_dict['hot_resize'] = self.comp_info['hotResize']
-
- ret_dict['pinned_to_stack'] = self.comp_info['pinnedToStack']
-
- ret_dict['affinity_label'] = self.comp_info['affinityLabel']
- ret_dict['affinity_rules'] = self.comp_info['affinityRules']
- ret_dict['anti_affinity_rules'] = self.comp_info['antiAffinityRules']
-
- ret_dict['zone_id'] = self.comp_info['zoneId']
-
- ret_dict['guest_agent'] = self.comp_info['qemu_guest']
-
- if self.guest_agent_exec_result:
- ret_dict['guest_agent']['exec_result'] = self.guest_agent_exec_result # noqa: E501
-
- if self.amodule.params['get_snapshot_merge_status']:
- ret_dict['snapshot_merge_status'] = (
- self.comp_info['snapshot_merge_status']
- )
-
- ret_dict['cd_image_id'] = self.comp_info['cdImageId']
-
- ret_dict['boot_order'] = self.comp_info['bootOrder']
-
- ret_dict['os_version'] = self.comp_info['os_version']
-
- ret_dict['boot_loader_metaiso'] = self.comp_info['loaderMetaIso']
- if self.comp_info['loaderMetaIso'] is not None:
- ret_dict['boot_loader_metaiso'] = {
- 'device_name': self.comp_info['loaderMetaIso']['devicename'],
- 'path': self.comp_info['loaderMetaIso']['path'],
- }
-
- if self.amodule.params['get_cloning_status']:
- ret_dict['cloning_status'] = self.compute_get_clone_status(
- vm_id=self.comp_id,
- )
-
- return ret_dict
-
- def check_amodule_args_for_create(self):
- check_errors = False
- # Check for unacceptable parameters for a blank Compute
- if self.aparams['image_id'] is not None:
- self.aparam_image = True
- for param in (
- 'network_interface_naming',
- 'hot_resize',
- ):
- if self.aparams[param] is not None:
- check_errors = True
- self.message(
- f'Check for parameter "{param}" failed: '
- 'parameter can be specified only for a blank VM.'
- )
-
- if self.aparams['boot'] is not None:
- for param in ('mode', 'loader_type'):
- if self.aparams['boot'][param] is not None:
- check_errors = True
- self.message(
- f'Check for parameter "boot.{param}" failed: '
- 'parameter can be specified only for a blank VM.'
- )
-
- else:
- self.aparam_image = False
- if (
- self.aparams['state'] is not None
- and self.aparams['state'] not in (
- 'present',
- 'poweredoff',
- 'halted',
- 'stopped',
- )
- ):
- check_errors = True
- self.message(
- 'Check for parameter "state" failed: '
- 'state for a blank Compute must be either '
- '"present" or "stopped".'
- )
-
- for parameter in (
- 'ssh_key',
- 'ssh_key_user',
- 'ci_user_data',
- ):
- if self.aparams[parameter] is not None:
- check_errors = True
- self.message(
- f'Check for parameter "{parameter}" failed: '
- f'"image_id" must be specified '
- f'to set {parameter}.'
- )
-
-        if (
-            self.aparams['sep_id'] is not None
-            and (
-                self.aparams['boot'] is None
-                or self.aparams['boot']['disk_size'] is None
-            )
-        ):
- check_errors = True
- self.message(
- 'Check for parameter "sep_id" failed: '
- '"image_id" or "boot.disk_size" '
- 'must be specified to set sep_id.'
- )
-
- if self.aparams['rollback_to'] is not None:
- check_errors = True
- self.message(
- 'Check for parameter "rollback_to" failed: '
- 'rollback_to can be specified only for existing compute.'
- )
-
- if self.aparam_networks_has_dpdk and not self.aparams['hp_backed']:
- check_errors = True
- self.message(
- 'Check for parameter "networks" failed:'
- ' hp_backed must be set to True to connect a compute'
- ' to a DPDK network.'
- )
-
- if self.check_aparam_zone_id() is False:
- check_errors = True
-
- if self.aparams['guest_agent'] is not None:
- check_errors = True
- self.message(
- 'Check for parameter "guest_agent" failed: '
- 'guest_agent can be specified only for existing VM.'
- )
-
- if self.aparams['get_snapshot_merge_status']:
- check_errors = True
- self.message(
- 'Check for parameter "get_snapshot_merge_status" failed: '
- 'snapshot merge status can be retrieved only for existing VM.'
- )
-
- aparam_networks = self.aparams['networks']
- if aparam_networks is not None:
- net_types = {net['type'] for net in aparam_networks}
- if self.VMNetType.TRUNK.value in net_types:
- if self.check_aparam_networks_trunk() is False:
- check_errors = True
-
- if self.aparams['cdrom'] is not None:
- check_errors = True
- self.message(
- 'Check for parameter "cdrom" failed: '
- 'cdrom can be specified only for existing compute.'
- )
-
- aparam_storage_policy_id = self.aparams['storage_policy_id']
- if aparam_storage_policy_id is None:
- check_errors = True
- self.message(
- msg='Check for parameter "storage_policy_id" failed: '
- 'storage_policy_id must be specified when creating '
- 'a new compute'
- )
- elif (
- aparam_storage_policy_id
- not in self.rg_info['storage_policy_ids']
- ):
- check_errors = True
- self.message(
- msg='Check for parameter "storage_policy_id" failed: '
- f'RG ID {self.rg_id} does not have access to '
- f'storage_policy_id {aparam_storage_policy_id}'
- )
-
- if self.aparams['abort_cloning'] is not None:
- check_errors = True
- self.message(
- 'Check for parameter "abort_cloning" failed: '
- 'abort_cloning can be specified only for existing compute.'
- )
-
- if check_errors:
- self.exit(fail=True)
-
- @property
- def amodule_init_args(self) -> dict:
- return self.pack_amodule_init_args(
- argument_spec=dict(
- account_id=dict(
- type='int',
- default=0,
- ),
- account_name=dict(
- type='str',
- default='',
- ),
- description=dict(
- type='str',
- ),
- boot=dict(
- type='dict',
- options=dict(
- disk_id=dict(
- type='int',
- ),
- disk_size=dict(
- type='int',
- ),
- mode=dict(
- type='str',
- choices=[
- 'bios',
- 'uefi',
- ],
- ),
- loader_type=dict(
- type='str',
- choices=[
- 'windows',
- 'linux',
- 'unknown',
- ],
- ),
- from_cdrom=dict(
- type='int',
- ),
- order=dict(
- type='list',
- elements='str',
- choices=[
- e.value for e in self.VMBootDevice
- ],
- ),
- disk_redeploy=dict(
- type='bool',
- ),
- ),
- ),
- sep_id=dict(
- type='int',
- ),
- pool=dict(
- type='str',
- ),
- controller_url=dict(
- type='str',
- required=True,
- ),
- cpu=dict(
- type='int',
- ),
- disks=dict(
- type='dict',
- options=dict(
- mode=dict(
- type='str',
- choices=[
- 'update',
- 'detach',
- 'delete',
- 'match',
- ],
- default='update',
- ),
- objects=dict(
- type='list',
- elements='dict',
- options=dict(
- id=dict(
- type='int',
- required=True,
- ),
- pci_slot_num_hex=dict(
- type='str',
- ),
- bus_num_hex=dict(
- type='str',
- ),
- ),
- required_together=[
- ('pci_slot_num_hex', 'bus_num_hex'),
- ],
- ),
- ),
- ),
- id=dict(
- type='int',
- default=0,
- ),
- image_id=dict(
- type='int',
- ),
- name=dict(
- type='str',
- ),
- networks=dict(
- type='list',
- elements='dict',
- options=dict(
- type=dict(
- type='str',
- required=True,
- choices=[
- 'VINS',
- 'EXTNET',
- 'VFNIC',
- 'DPDK',
- 'TRUNK',
- 'SDN',
- 'EMPTY',
- ],
- ),
- id=dict(
- type='raw',
- ),
- ip_addr=dict(
- type='str',
- ),
- mtu=dict(
- type='int',
- ),
- mac=dict(
- type='str',
- ),
- security_group_ids=dict(
- type='list',
- elements='int',
- ),
- security_group_mode=dict(
- type='bool',
- ),
- enabled=dict(
- type='bool',
- ),
- ),
- required_if=[
- ('type', 'VINS', ('id',)),
- ('type', 'EXTNET', ('id',)),
- ('type', 'VFNIC', ('id',)),
- ('type', 'DPDK', ('id',)),
- ('type', 'TRUNK', ('id',)),
- ('type', 'SDN', ('id',)),
- ],
- ),
- network_order_changing=dict(
- type='bool',
- default=False,
- ),
- ram=dict(
- type='int',
- ),
- rg_id=dict(
- type='int',
- default=0,
- ),
- rg_name=dict(
- type='str',
- default='',
- ),
- ssh_key=dict(
- type='str',
- ),
- ssh_key_user=dict(
- type='str',
- ),
- tag=dict(
- type='dict',
- ),
- affinity_label=dict(
- type='str',
- ),
- aff_rule=dict(
- type='list',
- ),
- aaff_rule=dict(
- type='list',
- ),
- ci_user_data=dict(
- type='dict',
- ),
- state=dict(
- type='str',
- choices=[
- 'absent',
- 'paused',
- 'poweredoff',
- 'halted',
- 'poweredon',
- 'stopped',
- 'started',
- 'present',
- ],
- ),
- tags=dict(
- type='str',
- ),
- chipset=dict(
- type='str',
- choices=[
- 'Q35',
- 'i440fx',
- ]
- ),
- cpu_pin=dict(
- type='bool',
- ),
- hp_backed=dict(
- type='bool',
- ),
- numa_affinity=dict(
- type='str',
- choices=[
- 'strict',
- 'loose',
- 'none',
- ],
- ),
- custom_fields=dict(
- type='dict',
- options=dict(
- fields=dict(
- type='dict',
- ),
- disable=dict(
- type='bool',
- ),
- ),
- ),
- auto_start=dict(
- type='bool',
- ),
- rollback_to=dict(
- type='str',
- ),
- preferred_cpu_cores=dict(
- type='list',
- elements='int',
- ),
- get_console_url=dict(
- type='bool',
- default=False,
- ),
- clone_from=dict(
- type='dict',
- options=dict(
- id=dict(
- type='int',
- required=True,
- ),
- force=dict(
- type='bool',
- default=False,
- ),
- snapshot=dict(
- type='dict',
- options=dict(
- name=dict(
- type='str',
- ),
- timestamp=dict(
- type='int',
- ),
- datetime=dict(
- type='str',
- ),
- ),
- mutually_exclusive=[
- ('name', 'timestamp', 'datetime'),
- ],
- ),
- sep_pool_name=dict(
- type='str',
- ),
- sep_id=dict(
- type='int',
- ),
- storage_policy_id=dict(
- type='int',
-                        required=True,
- ),
- ),
- ),
- network_interface_naming=dict(
- type='str',
- choices=[
- 'ens',
- 'eth',
- ],
- ),
- hot_resize=dict(
- type='bool',
- ),
- zone_id=dict(
- type='int',
- ),
- guest_agent=dict(
- type='dict',
- options=dict(
- enabled=dict(
- type='bool',
- ),
- exec=dict(
- type='dict',
- options=dict(
- cmd=dict(
- type='str',
- required=True,
- ),
- args=dict(
- type='dict',
- default={},
- ),
- ),
- ),
- update_available_commands=dict(
- type='bool',
- ),
- ),
- ),
- get_snapshot_merge_status=dict(
- type='bool',
- ),
- cdrom=dict(
- type='dict',
- options=dict(
- mode=dict(
- type='str',
- choices=[
- 'insert',
- 'eject',
- ],
- default='insert',
- ),
- image_id=dict(
- type='int',
- ),
- ),
- ),
- storage_policy_id=dict(
- type='int',
- ),
- os_version=dict(
- type='str',
- ),
- get_cloning_status=dict(
- type='bool',
- ),
- abort_cloning=dict(
- type='bool',
- ),
- ),
- supports_check_mode=True,
- required_one_of=[
- ('id', 'name'),
- ],
- required_by={
- 'clone_from': 'name',
- },
- )
-
- def check_amodule_args_for_change(self):
- check_errors = False
-
- comp_info = self.vm_to_clone_info or self.comp_info
- comp_id = comp_info['id']
-
- self.is_vm_stopped_or_will_be_stopped = (
- (
- comp_info['techStatus'] == 'STOPPED'
- and (
- self.amodule.params['state'] is None
- or self.amodule.params['state'] in (
- 'halted', 'poweredoff', 'present', 'stopped',
- )
- )
- )
- or (
- comp_info['techStatus'] != 'STOPPED'
- and self.amodule.params['state'] in (
- 'halted', 'poweredoff', 'stopped',
- )
- )
- )
-
- aparam_boot = self.amodule.params['boot']
- if aparam_boot is not None:
- aparam_disks = self.amodule.params['disks']
- aparam_boot_disk_id = aparam_boot['disk_id']
- comp_disk_ids = [disk['id'] for disk in self.comp_info['disks']]
- if aparam_disks is None:
- if (
- aparam_boot_disk_id is not None
- and aparam_boot_disk_id not in comp_disk_ids
- ):
- check_errors = True
- self.message(
- f'Check for parameter "boot.disk_id" failed: '
- f'disk {aparam_boot_disk_id} is not attached to '
- f'Compute ID {self.comp_id}.'
- )
- else:
- match aparam_disks['mode']:
- case 'update':
- if (
- aparam_boot_disk_id not in comp_disk_ids
- and aparam_boot_disk_id not in aparam_disks['ids']
- ):
- check_errors = True
- self.message(
- f'Check for parameter "boot.disk_id" failed: '
- f'disk {aparam_boot_disk_id} is not attached '
- f'to Compute ID {self.comp_id}.'
- )
- case 'match':
- if aparam_boot_disk_id not in aparam_disks['ids']:
- check_errors = True
- self.message(
- f'Check for parameter "boot.disk_id" failed: '
- f'disk {aparam_boot_disk_id} is not in '
- f'disks.ids'
- )
- case 'detach' | 'delete':
- if aparam_boot_disk_id in aparam_disks['ids']:
- check_errors = True
- self.message(
- f'Check for parameter "boot.disk_id" failed: '
- f'disk {aparam_boot_disk_id} cannot be set as the '
- f'boot disk because it is scheduled to be detached or deleted.'
- )
- elif aparam_boot_disk_id not in comp_disk_ids:
- check_errors = True
- self.message(
- f'Check for parameter "boot.disk_id" failed: '
- f'disk {aparam_boot_disk_id} is not attached '
- f'to Compute ID {self.comp_id}.'
- )
-
- if self.check_aparam_boot_disk_redeploy() is False:
- check_errors = True
-
- new_boot_disk_size = aparam_boot['disk_size']
- if new_boot_disk_size is not None:
- boot_disk_size = 0
- for disk in self.comp_info['disks']:
- if disk['type'] == 'B':
- boot_disk_size = disk['sizeMax']
- break
- else:
- if aparam_boot is None or aparam_boot['disk_id'] is None:
- check_errors = True
- self.message(
- f'Cannot set boot disk size for Compute '
- f'{comp_id}, because it does not '
- f'have a boot disk.'
- )
-
- if new_boot_disk_size < boot_disk_size:
- check_errors = True
- self.message(
- f'New boot disk size {new_boot_disk_size} is less'
- f' than current {boot_disk_size} for Compute ID '
- f'{comp_id}'
- )
-
- cd_rom_image_id = aparam_boot['from_cdrom']
- if cd_rom_image_id is not None:
- if not (
- self.comp_info['techStatus'] == 'STOPPED'
- and self.aparams['state'] == 'started'
- ):
- check_errors = True
- self.message(
- f'Check for parameter "boot.from_cdrom" failed: '
- f'VM ID {self.comp_id} must be stopped and "state" '
- 'must be "started" to boot from CD-ROM.'
- )
- _, image_info = self._image_get_by_id(
- image_id=cd_rom_image_id,
- )
- if image_info is None:
- check_errors = True
- self.message(
- 'Check for parameter "boot.from_cdrom" failed: '
- f'Image ID {cd_rom_image_id} not found.'
- )
- elif image_info['type'] != 'cdrom':
- check_errors = True
- self.message(
- 'Check for parameter "boot.from_cdrom" failed: '
- f'Image ID {cd_rom_image_id} is not a cd-rom type.'
- )
-
- boot_order_list = aparam_boot['order']
- if boot_order_list is not None:
- boot_order_duplicates = {
- boot_dev for boot_dev in boot_order_list
- if boot_order_list.count(boot_dev) > 1
- }
- if boot_order_duplicates:
- check_errors = True
- self.message(
- 'Check for parameter "boot.order" failed: '
- 'List of boot devices has duplicates: '
- f'{boot_order_duplicates}.'
- )
-
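The duplicate scan above calls `list.count` inside a comprehension, which is quadratic in the number of boot devices. An equivalent linear sketch (a standalone illustration, not the module's code) using `collections.Counter`:

```python
from collections import Counter

def find_duplicate_boot_devices(boot_order: list[str]) -> set[str]:
    # Count each device once, then keep those seen more than once.
    return {dev for dev, n in Counter(boot_order).items() if n > 1}
```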
- if (
- not comp_info['imageId']
- and self.amodule.params['state'] in (
- 'poweredon', 'paused', 'started',
- )
- ):
- check_errors = True
- self.message(
- 'Check for parameter "state" failed: '
- 'state for a blank Compute cannot be "started" or "paused".'
- )
-
- if self.amodule.params['rollback_to'] is not None:
- if not self.is_vm_stopped_or_will_be_stopped:
- check_errors = True
- self.message(
- 'Check for parameter "rollback_to" failed: '
- 'VM must be stopped to rollback.'
- )
-
- vm_snapshot_labels = [
- snapshot['label'] for snapshot in comp_info['snapSets']
- ]
- if self.amodule.params['rollback_to'] not in vm_snapshot_labels:
- check_errors = True
- self.message(
- f'Check for parameter "rollback_to" failed: '
- f'snapshot with label '
- f'{self.amodule.params["rollback_to"]} does not exist '
- f'for VM ID {comp_id}.'
- )
-
- params_to_check = {
- 'chipset': 'chipset',
- 'cpu_pin': 'cpupin',
- 'hp_backed': 'hpBacked',
- 'numa_affinity': 'numaAffinity',
- 'hot_resize': 'hotResize',
- }
- for param_name, comp_field_name in params_to_check.items():
- if (
- self.aparams[param_name] is not None
- and comp_info[comp_field_name] != self.aparams[param_name]
- and not self.is_vm_stopped_or_will_be_stopped
- ):
- check_errors = True
- self.message(
- f'Check for parameter "{param_name}" failed: '
- f'VM must be stopped to change {param_name}.'
- )
-
- if self.aparams['preferred_cpu_cores'] is not None:
- if not self.is_vm_stopped_or_will_be_stopped:
- check_errors = True
- self.message(
- 'Check for parameter "preferred_cpu_cores" failed: '
- 'VM must be stopped to change preferred_cpu_cores.'
- )
-
- if (
- self.aparam_networks_has_dpdk
- and not comp_info['hpBacked']
- and not self.aparams['hp_backed']
- ):
- check_errors = True
- self.message(
- 'Check for parameter "networks" failed: '
- 'hp_backed must be set to True to connect a compute '
- 'to a DPDK network.'
- )
-
- is_vm_started_or_will_be_started = (
- (
- comp_info['techStatus'] == 'STARTED'
- and (
- self.amodule.params['state'] is None
- or self.amodule.params['state'] in (
- 'poweredon', 'present', 'started',
- )
- )
- )
- or (
- comp_info['techStatus'] != 'STARTED'
- and self.amodule.params['state'] in ('poweredon', 'started')
- )
- )
-
- if self.amodule.params['get_console_url']:
- if not is_vm_started_or_will_be_started:
- check_errors = True
- self.message(
- 'Check for parameter "get_console_url" failed: '
- 'VM must be started to get console url.'
- )
-
- aparam_disks_dict = self.aparams['disks']
- if aparam_disks_dict is not None:
- aparam_disks = aparam_disks_dict.get('objects', [])
- aparam_disks_ids = [disk['id'] for disk in aparam_disks]
- comp_boot_disk_id = None
- for comp_disk in self.comp_info['disks']:
- if comp_disk['type'] == 'B':
- comp_boot_disk_id = comp_disk['id']
- break
- disks_to_detach = []
- match aparam_disks_dict['mode']:
- case 'detach' | 'delete':
- disks_to_detach = aparam_disks_ids
- case 'match':
- comp_disk_ids = {
- disk['id'] for disk in self.comp_info['disks']
- }
- disks_to_detach = comp_disk_ids - set(aparam_disks_ids)
- if (
- comp_boot_disk_id is not None
- and comp_boot_disk_id in disks_to_detach
- and not self.is_vm_stopped_or_will_be_stopped
- ):
- check_errors = True
- self.message(
- f'Check for parameter "disks" failed: '
- f'VM ID {comp_id} must be stopped to detach '
- f'boot disk ID {comp_boot_disk_id}.'
- )
- if self.comp_info['snapSets'] and disks_to_detach:
- check_errors = True
- self.message(
- f'Check for parameter "disks" failed: '
- f'cannot detach disks {disks_to_detach} from '
- f'Compute ID {self.comp_id} while snapshots exist.'
- )
-
- if aparam_disks_dict['mode'] in ('delete', 'detach'):
- for disk in aparam_disks:
- for param, value in disk.items():
- if param != 'id' and value is not None:
- check_errors = True
- self.message(
- msg='Check for parameter "disks.objects" '
- 'failed: only disk id can be specified if '
- 'disks.mode is "delete" or "detach"'
- )
- break
-
- if (
- (
- self.aparams['cpu'] is not None
- and self.aparams['cpu'] != comp_info['cpus']
- ) or (
- self.aparams['ram'] is not None
- and self.aparams['ram'] != comp_info['ram']
- )
- ) and not (self.aparams['hot_resize'] or comp_info['hotResize']):
- check_errors = True
- self.message(
- 'Check for parameters "cpu" and "ram" failed: '
- 'Hot resize must be enabled to change CPU or RAM.'
- )
-
- if self.check_aparam_zone_id() is False:
- check_errors = True
-
- if self.check_aparam_guest_agent() is False:
- check_errors = True
-
- if self.check_aparam_get_snapshot_merge_status() is False:
- check_errors = True
-
- aparam_networks = self.aparams['networks']
- if aparam_networks is not None:
- vm_networks = self.comp_info['interfaces']
- if (
- not vm_networks
- and not self.is_vm_stopped_or_will_be_stopped
- ):
- check_errors = True
- self.message(
- 'Check for parameter "networks" failed: '
- 'VM must be stopped before attaching its first network.'
- )
- vm_networks_ids = [
- network['netId'] for network in vm_networks
- if network['type'] != self.VMNetType.EMPTY.value
- ]
- aparam_networks_ids = [
- network['id'] for network in aparam_networks
- if network['type'] != self.VMNetType.EMPTY.value
- ]
- new_networks = list(
- set(aparam_networks_ids) - set(vm_networks_ids)
- )
- net_types = {net['type'] for net in aparam_networks}
- if new_networks:
- if not (
- len(new_networks) == 1
- and self.VMNetType.DPDK.value in net_types
- ) and not self.is_vm_stopped_or_will_be_stopped:
- check_errors = True
- self.message(
- 'Check for parameter "networks" failed: '
- 'VM must be stopped to attach a non-DPDK network.'
- )
-
- if self.VMNetType.TRUNK.value in net_types:
- if self.check_aparam_networks_trunk() is False:
- check_errors = True
-
- for network in aparam_networks:
- if (
- network['enabled'] is not None
- and network['type'] not in [
- self.VMNetType.VINS.value,
- self.VMNetType.EXTNET.value,
- self.VMNetType.DPDK.value,
- self.VMNetType.SDN.value,
- self.VMNetType.TRUNK.value,
- ]
- ):
- check_errors = True
- self.message(
- 'Check for parameter "networks.enabled" failed: '
- 'Can not enable or disable network '
- f'ID {network["id"]} and type {network["type"]}. '
- 'Only networks of type VINS, EXTNET, DPDK, SDN, TRUNK '
- 'can be enabled or disabled.'
- )
-
- if self.check_aparam_cdrom() is False:
- check_errors = True
-
- if self.check_aparam_storage_policy_id() is False:
- check_errors = True
-
- if self.check_aparam_image_id() is False:
- check_errors = True
-
- if check_errors:
- self.exit(fail=True)
-
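The method above follows a collect-then-fail pattern: every failed check appends a message and flips a flag, and the module exits only after all checks have run, so the user sees every problem at once instead of one per re-run. A minimal sketch of the pattern (a hypothetical helper, not part of the module):

```python
class CheckCollector:
    """Accumulate validation failures instead of failing fast."""

    def __init__(self) -> None:
        self.errors: list[str] = []

    def require(self, condition: bool, msg: str) -> None:
        # Record the message when the condition does not hold.
        if not condition:
            self.errors.append(msg)

    @property
    def failed(self) -> bool:
        return bool(self.errors)
```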
- def check_amodule_args_for_clone(self, clone_id: int, clone_dict: dict):
- check_errors = False
- aparam_clone_from = self.aparams['clone_from']
-
- if (
- clone_id
- and clone_dict['cloneReference'] != self.vm_to_clone_id
- ):
- check_errors = True
- self.message(
- 'Check for parameter "name" failed: '
- f'VM with name {self.aparams["name"]} '
- f'already exists.'
- )
- if (
- self.vm_to_clone_info['techStatus'] == 'STARTED'
- and not aparam_clone_from['force']
- ):
- check_errors = True
- self.message(
- 'Check for parameter "clone_from.force" failed: '
- 'VM must be stopped or parameter "force" must be True '
- 'to clone it.'
- )
-
- aparam_snapshot = aparam_clone_from['snapshot']
- snapshot_timestamps = [
- snapshot['timestamp']
- for snapshot in self.vm_to_clone_info['snapSets']
- ]
- if aparam_snapshot is not None:
- if (
- aparam_snapshot['name'] is not None
- and aparam_snapshot['name'] not in (
- snapshot['label']
- for snapshot in self.vm_to_clone_info['snapSets']
- )
- ):
- check_errors = True
- self.message(
- 'Check for parameter "clone_from.snapshot.name" '
- 'failed: snapshot with name '
- f'{aparam_snapshot["name"]} does not exist for VM ID '
- f'{self.vm_to_clone_id}.'
- )
- if (
- aparam_snapshot['timestamp'] is not None
- and aparam_snapshot['timestamp'] not in snapshot_timestamps
- ):
- check_errors = True
- self.message(
- 'Check for parameter "clone_from.snapshot.timestamp" '
- 'failed: snapshot with timestamp '
- f'{aparam_snapshot["timestamp"]} does not exist for '
- f'VM ID {self.vm_to_clone_id}.'
- )
- if aparam_snapshot['datetime'] is not None:
- timestamp_from_dt_str = self.dt_str_to_sec(
- dt_str=aparam_snapshot['datetime']
- )
- if timestamp_from_dt_str not in snapshot_timestamps:
- check_errors = True
- self.message(
- 'Check for parameter "clone_from.snapshot.datetime" '
- 'failed: snapshot with datetime '
- f'{aparam_snapshot["datetime"]} does not exist for '
- f'VM ID {self.vm_to_clone_id}.'
- )
-
- if check_errors:
- self.exit(fail=True)
-
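`dt_str_to_sec` converts the user-supplied datetime string into epoch seconds so it can be compared against the snapshot timestamps. Its real definition lives in the shared controller code; a sketch under the assumption that ISO 8601 input is accepted:

```python
from datetime import datetime, timezone

def dt_str_to_sec(dt_str: str) -> int:
    # Parse an ISO 8601 string; treat a naive datetime as UTC.
    dt = datetime.fromisoformat(dt_str)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return int(dt.timestamp())
```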
- def clone(self):
- clone_from_snapshot = self.aparams['clone_from']['snapshot']
- snapshot_timestamp, snapshot_name, snapshot_datetime = None, None, None
- if clone_from_snapshot:
- snapshot_timestamp = clone_from_snapshot['timestamp']
- snapshot_name = clone_from_snapshot['name']
- snapshot_datetime = clone_from_snapshot['datetime']
- clone_id = self.compute_clone(
- compute_id=self.vm_to_clone_id,
- name=self.aparams['name'],
- force=self.aparams['clone_from']['force'],
- snapshot_timestamp=snapshot_timestamp,
- snapshot_name=snapshot_name,
- snapshot_datetime=snapshot_datetime,
- sep_pool_name=self.aparams['clone_from']['sep_pool_name'],
- sep_id=self.aparams['clone_from']['sep_id'],
- storage_policy_id=self.aparams['clone_from']['storage_policy_id'],
- )
- return clone_id
-
- def check_aparam_guest_agent(self) -> bool:
- check_errors = False
- aparam_guest_agent = self.aparams['guest_agent']
- if aparam_guest_agent:
- if self.is_vm_stopped_or_will_be_stopped:
- if aparam_guest_agent['update_available_commands']:
- check_errors = True
- self.message(
- 'Check for parameter '
- '"guest_agent.update_available_commands" failed: '
- f'VM ID {self.comp_id} must be started to update '
- 'available commands.'
- )
-
- is_guest_agent_enabled_or_will_be_enabled = (
- (
- self.comp_info['qemu_guest']['enabled']
- and aparam_guest_agent['enabled'] is not False
- )
- or (
- self.comp_info['qemu_guest']['enabled'] is False
- and aparam_guest_agent['enabled']
- )
- )
-
- aparam_guest_agent_exec = aparam_guest_agent['exec']
- if aparam_guest_agent_exec is not None:
- if self.is_vm_stopped_or_will_be_stopped:
- check_errors = True
- self.message(
- 'Check for parameter "guest_agent.exec" failed: '
- f'VM ID {self.comp_id} must be started '
- 'to execute commands.'
- )
-
- if not is_guest_agent_enabled_or_will_be_enabled:
- check_errors = True
- self.message(
- 'Check for parameter "guest_agent.exec" failed: '
- f'Guest agent for VM ID {self.comp_id} must be enabled'
- ' to execute commands.'
- )
-
- aparam_exec_cmd = aparam_guest_agent_exec['cmd']
- available_commands = (
- self.comp_info['qemu_guest']['enabled_agent_features']
- )
- if aparam_exec_cmd not in available_commands:
- check_errors = True
- self.message(
- 'Check for parameter "guest_agent.exec.cmd" failed: '
- f'Command "{aparam_exec_cmd}" is not '
- f'available for VM ID {self.comp_id}.'
- )
-
- return not check_errors
-
- def check_aparam_get_snapshot_merge_status(self) -> bool | None:
- check_errors = False
- if self.aparams['get_snapshot_merge_status']:
- vm_has_shared_sep_disk = False
- vm_disk_ids = [disk['id'] for disk in self.comp_info['disks']]
- for disk_id in vm_disk_ids:
- _, disk_info = self._disk_get_by_id(disk_id=disk_id)
- if disk_info['sepType'] == 'SHARED':
- vm_has_shared_sep_disk = True
- break
-
- if not vm_has_shared_sep_disk:
- check_errors = True
- self.message(
- 'Check for parameter "get_snapshot_merge_status" failed: '
- f'VM ID {self.comp_id} must have at least one disk with '
- 'SEP type SHARED to retrieve snapshot merge status.'
- )
-
- return not check_errors
-
- def check_aparam_cdrom(self) -> bool | None:
- check_errors = False
- aparam_cdrom = self.aparams['cdrom']
- if aparam_cdrom is not None:
- mode = aparam_cdrom['mode']
- if self.is_vm_stopped_or_will_be_stopped:
- check_errors = True
- self.message(
- 'Check for parameter "cdrom" failed: '
- f'VM ID {self.comp_id} must be started to {mode} '
- f'CD-ROM.'
- )
- image_id = aparam_cdrom['image_id']
- match mode:
- case 'insert':
- if image_id is None:
- check_errors = True
- self.message(
- 'Check for parameter "cdrom.image_id" failed: '
- f'cdrom.image_id must be specified '
- f'if cdrom.mode is "insert".'
- )
- _, image_info = self._image_get_by_id(
- image_id=image_id,
- )
- if image_info is None:
- check_errors = True
- self.message(
- 'Check for parameter "cdrom.image_id" failed: '
- f'Image ID {image_id} not found.'
- )
- elif image_info['type'] != 'cdrom':
- check_errors = True
- self.message(
- 'Check for parameter "cdrom.image_id" failed: '
- f'Image ID {image_id} is not a CD-ROM type.'
- )
- case 'eject':
- if image_id is not None:
- check_errors = True
- self.message(
- 'Check for parameter "cdrom.image_id" failed: '
- f'cdrom.image_id must not be specified '
- f'if cdrom.mode is "eject".'
- )
- if not self.comp_info['cdImageId']:
- check_errors = True
- self.message(
- 'Check for parameter "cdrom.mode" failed: '
- f'VM ID {self.comp_id} does not have CD-ROM '
- 'to eject.'
- )
-
- return not check_errors
-
- def check_aparam_storage_policy_id(self) -> bool:
- check_errors = False
-
- aparam_storage_policy_id = self.aparams['storage_policy_id']
- if aparam_storage_policy_id is not None:
- for disk in self.comp_info['disks']:
- if aparam_storage_policy_id != disk['storage_policy_id']:
- check_errors = True
- self.message(
- msg='Check for parameter "storage_policy_id" failed: '
- 'storage_policy_id cannot be changed for compute '
- f'ID {self.comp_id} disk ID {disk["id"]}.'
- )
-
- return not check_errors
-
- def check_aparam_boot_disk_redeploy(self) -> bool:
- check_errors = False
-
- disk_redeploy = self.aparams['boot']['disk_redeploy']
- if disk_redeploy:
- if self.aparams['storage_policy_id'] is None:
- check_errors = True
- self.message(
- 'Check for parameter "storage_policy_id" failed: '
- '"storage_policy_id" must be specified to redeploy.'
- )
-
- vm_has_boot_disk = False
- for disk in self.comp_info['disks']:
- if disk['type'] == 'B':
- vm_has_boot_disk = True
- break
- if not vm_has_boot_disk:
- check_errors = True
- self.message(
- 'Check for parameter "boot.disk_redeploy" failed: '
- 'VM does not have a boot disk to redeploy.'
- )
-
- aparam_disks = self.amodule.params['disks']
- if aparam_disks is not None and aparam_disks['mode'] == 'match':
- check_errors = True
- self.message(
- 'Check for parameter "disks.mode" failed: '
- '"disks.mode" must not be "match" to redeploy.'
- )
-
- return not check_errors
-
- def check_aparam_image_id(self) -> bool:
- check_errors = False
-
- aparam_image_id = self.aparams['image_id']
- if aparam_image_id is not None:
- if aparam_image_id != self.comp_info['imageId']:
- if (
- self.aparams['boot'] is None
- or self.aparams['boot']['disk_redeploy'] is None
- ):
- check_errors = True
- self.message(
- 'Check for parameter "image_id" failed: '
- '"boot.disk_redeploy" must be set to True to change '
- 'VM image.'
- )
-
- return not check_errors
-
- def find_networks_tags_intersections(
- self,
- trunk_networks: list,
- extnet_networks: list,
- ) -> bool:
- has_intersections = False
-
- def parse_trunk_tags(trunk_tags_string: str):
- trunk_tags = set()
- for part in trunk_tags_string.split(','):
- if '-' in part:
- start, end = part.split('-')
- trunk_tags.update(range(int(start), int(end) + 1))
- else:
- trunk_tags.add(int(part))
- return trunk_tags
-
- trunk_tags_dicts = []
- for trunk_network in trunk_networks:
- trunk_tags_dicts.append({
- 'id': trunk_network['id'],
- 'tags_str': trunk_network['trunkTags'],
- 'tags': parse_trunk_tags(
- trunk_tags_string=trunk_network['trunkTags']
- ),
- 'native_vlan_id': trunk_network['nativeVlanId'],
- })
-
- # look for trunk tag intersections with other networks
- for i in range(len(trunk_tags_dicts)):
- for j in range(i + 1, len(trunk_tags_dicts)):
- intersection = (
- trunk_tags_dicts[i]['tags']
- & trunk_tags_dicts[j]['tags']
- )
- if intersection:
- has_intersections = True
- self.message(
- 'Check for parameter "networks" failed: '
- f'Trunk tags {trunk_tags_dicts[i]["tags_str"]} '
- f'of trunk ID {trunk_tags_dicts[i]["id"]} '
- f'overlaps with trunk tags '
- f'{trunk_tags_dicts[j]["tags_str"]} of trunk ID '
- f'{trunk_tags_dicts[j]["id"]}'
- )
- for extnet in extnet_networks:
- if extnet['vlanId'] in trunk_tags_dicts[i]['tags']:
- has_intersections = True
- self.message(
- 'Check for parameter "networks" failed: '
- f'Trunk tags {trunk_tags_dicts[i]["tags_str"]} '
- f'of trunk ID {trunk_tags_dicts[i]["id"]} '
- f'overlaps with tag {extnet["vlanId"]} of extnet ID '
- f'{extnet["id"]}'
- )
- if extnet['vlanId'] == trunk_tags_dicts[i]['native_vlan_id']:
- has_intersections = True
- self.message(
- 'Check for parameter "networks" failed: '
- f'Trunk native vlan ID '
- f'{trunk_tags_dicts[i]["native_vlan_id"]} of trunk ID '
- f'{trunk_tags_dicts[i]["id"]} '
- f'overlaps with vlan ID {extnet["vlanId"]} of extnet '
- f'ID {extnet["id"]}'
- )
-
- return has_intersections
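The range grammar accepted by `parse_trunk_tags` ('100-102,200' expands to tags 100, 101, 102 and 200) drives all of the overlap checks above; the parser extracted as a standalone sketch:

```python
def parse_trunk_tags(trunk_tags_string: str) -> set[int]:
    # Expand 'a-b' ranges and single tags into one set of VLAN tags.
    tags: set[int] = set()
    for part in trunk_tags_string.split(','):
        if '-' in part:
            start, end = part.split('-')
            tags.update(range(int(start), int(end) + 1))
        else:
            tags.add(int(part))
    return tags
```

Two trunks overlap exactly when the intersection of their parsed sets is non-empty, which is the check the method performs pairwise.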
-
- def check_aparam_networks_trunk(self) -> bool | None:
- check_errors = False
-
- # check if account has vm feature "trunk"
- if not self.check_account_vm_features(vm_feature=self.VMFeature.trunk):
- check_errors = True
- self.message(
- 'Check for parameter "networks" failed: '
- f'Account ID {self.acc_id} must have feature "trunk" to use '
- 'trunk type networks.'
- )
- # check if rg has vm feature "trunk"
- if not self.check_rg_vm_features(vm_feature=self.VMFeature.trunk):
- check_errors = True
- self.message(
- 'Check for parameter "networks" failed: '
- f'RG ID {self.rg_id} must have feature "trunk" to use '
- 'trunk type networks.'
- )
-
- aparam_trunk_networks = []
- aparam_extnet_networks = []
- for net in self.aparams['networks']:
- if net['type'] == self.VMNetType.TRUNK.value:
- aparam_trunk_networks.append(net)
- elif net['type'] == self.VMNetType.EXTNET.value:
- aparam_extnet_networks.append(net)
-
- trunk_networks_info = []
- # check that account has access to all specified trunks
- for trunk_network in aparam_trunk_networks:
- trunk_info = self.trunk_get(id=trunk_network['id'])
- trunk_networks_info.append(trunk_info)
- if (
- trunk_info['accountIds'] is None
- or self.acc_id not in trunk_info['accountIds']
- ):
- check_errors = True
- self.message(
- 'Check for parameter "networks" failed: '
- f'Account ID {self.acc_id} does not have access to '
- f'trunk ID {trunk_info['id']}'
- )
-
- extnet_networks_info = []
- for extnet_network in aparam_extnet_networks:
- extnet_networks_info.append(
- self.extnet_get(id=extnet_network['id'])
- )
- # check that trunk tags do not overlap with each other
- # and with extnet VLAN IDs
- if self.find_networks_tags_intersections(
- trunk_networks=trunk_networks_info,
- extnet_networks=extnet_networks_info,
- ):
- check_errors = True
-
- return not check_errors
-
-# Workflow digest:
-# 1) authenticate to DECORT controller & validate authentication by issuing API call - done when creating DECSController
-# 2) check if the VM with the specified id or rg_name:name exists
-# 3) if VM does not exist, check if there is enough resources to deploy it in the target account / vdc
-# 4) if VM exists: check desired state, desired configuration -> initiate action accordingly
-# 5) VM does not exist: check desired state -> initiate action accordingly
-# - create VM: check if target VDC exists, create VDC as necessary, create VM
-# - delete VM: delete VM
-# - change power state: change as required
-# - change guest OS state: change as required
-# 6) report result to Ansible
-
def main():
- # Initialize DECORT KVM VM instance object
- # This object does not necessarily represent an existing KVM VM
- subj = decort_kvmvm()
- amodule = subj.amodule
+ module = AnsibleModule(
+ argument_spec=dict(
+ app_id=dict(type='raw'),
+ app_secret=dict(type='raw'),
+ authenticator=dict(type='raw'),
+ controller_url=dict(type='raw'),
+ domain=dict(type='raw'),
+ jwt=dict(type='raw'),
+ oauth2_url=dict(type='raw'),
+ password=dict(type='raw'),
+ username=dict(type='raw'),
+ verify_ssl=dict(type='raw'),
+ ignore_api_compatibility=dict(type='raw'),
+ ignore_sdk_version_check=dict(type='raw'),
+ account_id=dict(type='raw'),
+ account_name=dict(type='raw'),
+ description=dict(type='raw'),
+ boot=dict(type='raw'),
+ sep_id=dict(type='raw'),
+ pool=dict(type='raw'),
+ cpu=dict(type='raw'),
+ disks=dict(type='raw'),
+ id=dict(type='raw'),
+ image_id=dict(type='raw'),
+ name=dict(type='raw'),
+ networks=dict(type='raw'),
+ network_order_changing=dict(type='raw'),
+ ram=dict(type='raw'),
+ rg_id=dict(type='raw'),
+ rg_name=dict(type='raw'),
+ ssh_key=dict(type='raw'),
+ ssh_key_user=dict(type='raw'),
+ tag=dict(type='raw'),
+ affinity_label=dict(type='raw'),
+ aff_rule=dict(type='raw'),
+ aaff_rule=dict(type='raw'),
+ ci_user_data=dict(type='raw'),
+ state=dict(type='raw'),
+ tags=dict(type='raw'),
+ chipset=dict(type='raw'),
+ cpu_pin=dict(type='raw'),
+ hp_backed=dict(type='raw'),
+ numa_affinity=dict(type='raw'),
+ custom_fields=dict(type='raw'),
+ auto_start=dict(type='raw'),
+ rollback_to=dict(type='raw'),
+ preferred_cpu_cores=dict(type='raw'),
+ get_console_url=dict(type='raw'),
+ clone_from=dict(type='raw'),
+ network_interface_naming=dict(type='raw'),
+ hot_resize=dict(type='raw'),
+ zone_id=dict(type='raw'),
+ guest_agent=dict(type='raw'),
+ get_snapshot_merge_status=dict(type='raw'),
+ cdrom=dict(type='raw'),
+ storage_policy_id=dict(type='raw'),
+ os_version=dict(type='raw'),
+ get_cloning_status=dict(type='raw'),
+ abort_cloning=dict(type='raw'),
+ ),
+ supports_check_mode=True,
+ )
- if subj.comp_id:
- if subj.comp_info['status'] in ("MIGRATING", "DELETING", "DESTROYING", "ERROR", "REDEPLOYING"):
- # cannot do anything on the existing Compute in the listed states
- subj.error() # was subj.nop()
- elif subj.comp_info['status'] in ("ENABLED", "DISABLED"):
- if amodule.params['state'] == 'absent':
- subj.destroy()
- else:
- if amodule.params['state'] in (
- 'paused', 'poweredon', 'poweredoff',
- 'halted', 'started', 'stopped',
- ):
- subj.compute_powerstate(
- comp_facts=subj.comp_info,
- target_state=amodule.params['state'],
- )
- subj.modify(arg_wait_cycles=7)
- elif subj.comp_info['status'] == "DELETED":
- if amodule.params['state'] in ('present', 'poweredon', 'started'):
- # TODO - check if restore API returns VM ID (similarly to VM create API)
- subj.compute_restore(comp_id=subj.comp_id)
- # TODO - do we need updated comp_info to manage port forwards and size after VM is restored?
- _, subj.comp_info, _ = subj.compute_find(
- comp_id=subj.comp_id,
- need_custom_fields=True,
- need_console_url=amodule.params['get_console_url'],
- )
- subj.modify()
- elif amodule.params['state'] == 'absent':
- # subj.nop()
- # subj.comp_should_exist = False
- subj.destroy()
- elif amodule.params['state'] in (
- 'paused', 'poweredoff', 'halted', 'stopped'
- ):
- subj.error()
- elif subj.comp_info['status'] == "DESTROYED":
- if amodule.params['state'] in (
- 'present', 'poweredon', 'poweredoff',
- 'halted', 'started', 'stopped',
- ):
- subj.create() # this call will also handle data disk & network connection
- elif amodule.params['state'] == 'absent':
- subj.nop()
- subj.comp_should_exist = False
- elif amodule.params['state'] == 'paused':
- subj.error()
- else:
- state = amodule.params['state']
- if state is None:
- state = 'present'
- # Preexisting Compute of specified identity was not found.
- # If requested state is 'absent' - nothing to do
- if state == 'absent':
- subj.nop()
- elif state in (
- 'present', 'poweredon', 'poweredoff',
- 'halted', 'started', 'stopped',
- ):
- subj.create() # this call will also handle data disk & network connection
- elif state == 'paused':
- subj.error()
-
- if subj.result['failed']:
- amodule.fail_json(**subj.result)
- else:
- # prepare Compute facts to be returned as part of decon.result and then call exit_json(...)
- rg_facts = None
- if subj.comp_should_exist:
- if (
- (subj.result['changed'] and not subj.skip_final_get)
- or subj.force_final_get
- ):
- # There were changes to the Compute - refresh Compute facts.
- _, subj.comp_info, _ = subj.compute_find(
- comp_id=subj.comp_id,
- need_custom_fields=True,
- need_console_url=amodule.params['get_console_url'],
- need_snapshot_merge_status=amodule.params['get_snapshot_merge_status'], # noqa: E501
- )
- #
- # We no longer need to re-read RG facts, as all network info is now available inside
- # compute structure
- # _, rg_facts = subj.rg_find(arg_account_id=0, arg_rg_id=subj.rg_id)
- subj.result['facts'] = subj.package_facts(amodule.check_mode)
- amodule.exit_json(**subj.result)
+ module.fail_json(
+ msg=(
+ 'The module "decort_kvmvm" has been renamed to "decort_vm". '
+ 'Please update your playbook to use "decort_vm" '
+ 'instead of "decort_kvmvm".'
+ ),
+ )
-if __name__ == "__main__":
+if __name__ == '__main__':
main()
diff --git a/library/decort_lb.py b/library/decort_lb.py
index 0603649..8befa87 100644
--- a/library/decort_lb.py
+++ b/library/decort_lb.py
@@ -361,56 +361,60 @@ class decort_lb(DecortController):
if check_errors:
self.exit(fail=True)
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ amodule = self.amodule
+ if self.lb_id:
+ if self.lb_facts['status'] in ["MODELED", "DISABLING", "ENABLING", "DELETING", "DESTROYING", "RESTORING"]:
+ self.result['failed'] = True
+ self.result['changed'] = False
+ self.result['msg'] = ("No change can be done for existing LB ID {} because of its current "
+ "status '{}'").format(self.lb_id, self.lb_facts['status'])
+ elif self.lb_facts['status'] in ('DISABLED', 'ENABLED', 'CREATED'):
+ if amodule.params['state'] == 'absent':
+ self.delete()
+ else:
+ self.action(d_state=amodule.params['state'])
+ elif self.lb_facts['status'] == "DELETED":
+ if amodule.params['state'] == 'present':
+ self.action(restore=True)
+ elif amodule.params['state'] == 'enabled':
+ self.action(d_state='enabled', restore=True)
+ elif (amodule.params['state'] == 'absent' and
+ amodule.params['permanently']):
+ self.delete()
+ elif amodule.params['state'] == 'disabled':
+ self.error()
+ elif self.lb_facts['status'] == "DESTROYED":
+ if amodule.params['state'] in ('present', 'enabled'):
+ self.create()
+ elif amodule.params['state'] == 'absent':
+ self.nop()
+ elif amodule.params['state'] == 'disabled':
+ self.error()
+ else:
+ state = amodule.params['state']
+ if state is None:
+ state = 'present'
+ if state == 'absent':
+ self.nop()
+ elif state in ('present', 'enabled', 'stopped', 'started'):
+ self.create()
+ elif state == 'disabled':
+ self.error()
+
+ if self.result['failed']:
+ amodule.fail_json(**self.result)
+ else:
+ if self.result['changed']:
+ _, self.lb_facts = self.lb_find(lb_id=self.lb_id)
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
+
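`@DecortController.handle_sdk_exceptions` presumably turns SDK errors raised inside `run()` into a clean module failure instead of an unhandled traceback. Its actual definition lives in the shared `decort_utils` code; a purely illustrative sketch of such a decorator:

```python
import functools

def handle_sdk_exceptions(method):
    """Record any exception from the wrapped method on self.result
    (illustrative sketch; the real decorator likely narrows the types)."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        try:
            return method(self, *args, **kwargs)
        except Exception as exc:
            self.result['failed'] = True
            self.result['msg'] = str(exc)
    return wrapper
```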
def main():
- decon = decort_lb()
- amodule = decon.amodule
- if decon.lb_id:
- if decon.lb_facts['status'] in ["MODELED", "DISABLING", "ENABLING", "DELETING","DESTROYING","RESTORING"]:
- decon.result['failed'] = True
- decon.result['changed'] = False
- decon.result['msg'] = ("No change can be done for existing LB ID {} because of its current "
- "status '{}'").format(decon.lb_id, decon.lb_facts['status'])
- elif decon.lb_facts['status'] in ('DISABLED', 'ENABLED', 'CREATED'):
- if amodule.params['state'] == 'absent':
- decon.delete()
- else:
- decon.action(d_state=amodule.params['state'])
- elif decon.lb_facts['status'] == "DELETED":
- if amodule.params['state'] == 'present':
- decon.action(restore=True)
- elif amodule.params['state'] == 'enabled':
- decon.action(d_state='enabled', restore=True)
- elif (amodule.params['state'] == 'absent' and
- amodule.params['permanently']):
- decon.delete()
- elif amodule.params['state'] == 'disabled':
- decon.error()
- elif decon.lb_facts['status'] == "DESTROYED":
- if amodule.params['state'] in ('present', 'enabled'):
- decon.create()
- elif amodule.params['state'] == 'absent':
- decon.nop()
- elif amodule.params['state'] == 'disabled':
- decon.error()
- else:
- state = amodule.params['state']
- if state is None:
- state = 'present'
- if state == 'absent':
- decon.nop()
- elif state in ('present', 'enabled', 'stopped', 'started'):
- decon.create()
- elif state == 'disabled':
- decon.error()
+ decort_lb().run()
- if decon.result['failed']:
- amodule.fail_json(**decon.result)
- else:
- if decon.result['changed']:
- _, decon.lb_facts = decon.lb_find(lb_id=decon.lb_id)
- decon.result['facts'] = decon.package_facts(amodule.check_mode)
- amodule.exit_json(**decon.result)
-if __name__ == "__main__":
+if __name__ == '__main__':
main()
diff --git a/library/decort_osimage.py b/library/decort_osimage.py
index 5e7f286..6f93b5a 100644
--- a/library/decort_osimage.py
+++ b/library/decort_osimage.py
@@ -4,552 +4,59 @@ DOCUMENTATION = r'''
---
module: decort_osimage
-description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home).
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
'''
from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.basic import env_fallback
-
-from ansible.module_utils.decort_utils import *
-
-
-class decort_osimage(DecortController):
- def __init__(self):
- super(decort_osimage, self).__init__(AnsibleModule(**self.amodule_init_args))
- amodule = self.amodule
-
- self.validated_image_id = 0
- self.validated_virt_image_id = 0
- self.validated_image_name = amodule.params['image_name']
- self.validated_virt_image_name = None
- self.image_info: dict
- self.virt_image_info: dict
- if amodule.params['account_name']:
- self.validated_account_id, _ = self.account_find(amodule.params['account_name'])
- else:
- self.validated_account_id = amodule.params['account_id']
-
- if self.validated_account_id == 0:
- # we failed either to find or access the specified account - fail the module
- self.result['failed'] = True
- self.result['changed'] = False
- self.result['msg'] = ("Cannot find account '{}'").format(amodule.params['account_name'])
- amodule.fail_json(**self.result)
- self.acc_id = self.validated_account_id
-
- if (
- self.aparams['virt_id'] != 0
- or self.aparams['virt_name'] is not None
- ):
- self.validated_virt_image_id, self.virt_image_info = (
- self.decort_virt_image_find(amodule)
- )
- if self.virt_image_info:
- _, linked_image_info = self._image_get_by_id(
- image_id=self.virt_image_info['linkTo']
- )
- self.acc_id = linked_image_info['accountId']
- if (
- self.aparams['virt_name'] is not None
- and self.aparams['virt_name']
- != self.virt_image_info['name']
- ):
- self.decort_virt_image_rename(amodule)
- self.result['msg'] = 'Virtual image renamed successfully'
- elif (
- self.aparams['image_id'] != 0
- or self.aparams['image_name'] is not None
- ):
- self.validated_image_id, self.image_info = (
- self.decort_image_find(amodule)
- )
- if self.image_info:
- self.acc_id = self.image_info['accountId']
- if (
- amodule.params['image_name']
- and amodule.params['image_name'] != self.image_info['name']
- ):
- decort_osimage.decort_image_rename(self,amodule)
- self.result['msg'] = ("Image renamed successfully")
-
- if self.validated_image_id:
- self.check_amodule_args_for_change()
- elif self.validated_virt_image_id:
- self.check_amodule_args_for_change_virt_image()
- elif self.aparams['virt_name']:
- self.check_amodule_args_for_create_virt_image()
- else:
- self.check_amodule_args_for_create_image()
-
- def decort_image_find(self, amodule):
- # function that finds the OS image
- image_id, image_facts = self.image_find(image_id=amodule.params['image_id'], image_name=self.validated_image_name,
- account_id=self.validated_account_id, rg_id=0,
- sepid=amodule.params['sep_id'],
- pool=amodule.params['pool'])
- return image_id, image_facts
-
- def decort_virt_image_find(self, amodule):
- # function that finds a virtual image
- image_id, image_facts = self.virt_image_find(image_id=amodule.params['virt_id'],
- account_id=self.validated_account_id, rg_id=0,
- sepid=amodule.params['sep_id'],
- virt_name=amodule.params['virt_name'],
- pool=amodule.params['pool'])
- return image_id, image_facts
-
-
-
- def decort_image_create(self,amodule):
- aparam_boot = self.aparams['boot']
- boot_mode = 'bios'
- loader_type = 'unknown'
- if aparam_boot is not None:
- if aparam_boot['mode'] is None:
- self.message(
- msg=self.MESSAGES.default_value_used(
- param_name='boot.mode',
- default_value=boot_mode
- ),
- warning=True,
- )
- else:
- boot_mode = aparam_boot['mode']
-
- if aparam_boot['loader_type'] is None:
- self.message(
- msg=self.MESSAGES.default_value_used(
- param_name='boot.loader_type',
- default_value=loader_type
- ),
- warning=True,
- )
- else:
- loader_type = aparam_boot['loader_type']
-
- network_interface_naming = self.aparams['network_interface_naming']
- if network_interface_naming is None:
- network_interface_naming = 'ens'
- self.message(
- msg=self.MESSAGES.default_value_used(
- param_name='network_interface_naming',
- default_value=network_interface_naming
- ),
- warning=True,
- )
-
- hot_resize = self.aparams['hot_resize']
- if hot_resize is None:
- hot_resize = False
- self.message(
- msg=self.MESSAGES.default_value_used(
- param_name='hot_resize',
- default_value=hot_resize
- ),
- warning=True,
- )
-
- # function that creates OS image
- image_facts = self.image_create(
- img_name=self.validated_image_name,
- url=amodule.params['url'],
- boot_mode=boot_mode,
- boot_loader_type=loader_type,
- hot_resize=hot_resize,
- username=amodule.params['image_username'],
- password=amodule.params['image_password'],
- account_id=self.validated_account_id,
- usernameDL=amodule.params['usernameDL'],
- passwordDL=amodule.params['passwordDL'],
- sepId=amodule.params['sepId'],
- poolName=amodule.params['poolName'],
- network_interface_naming=network_interface_naming,
- storage_policy_id=amodule.params['storage_policy_id'],
- )
- self.result['changed'] = True
- return image_facts
-
- def decort_virt_image_link(self,amodule):
- # function that links an OS image to a virtual one
- self.virt_image_link(imageId=self.validated_virt_image_id, targetId=self.target_image_id)
- image_id, image_facts = decort_osimage.decort_virt_image_find(self, amodule)
- self.result['facts'] = decort_osimage.decort_osimage_package_facts(image_facts, amodule.check_mode)
- self.result['msg'] = ("Image '{}' linked to virtual image '{}'").format(self.target_image_id,
- decort_osimage.decort_osimage_package_facts(image_facts)['id'],)
- return image_id, image_facts
-
- def decort_image_delete(self,amodule):
- # function that removes an image
- self.image_delete(imageId=amodule.image_id_delete)
- _, image_facts = decort_osimage._image_get_by_id(self, amodule.image_id_delete)
- self.result['facts'] = decort_osimage.decort_osimage_package_facts(image_facts, amodule.check_mode)
- return
-
- def decort_virt_image_create(self,amodule):
- # function that creates a virtual image
- image_facts = self.virt_image_create(
- name=amodule.params['virt_name'],
- target_id=self.target_image_id,
- account_id=self.aparams['account_id'],
- )
- image_id, image_facts = decort_osimage.decort_virt_image_find(self, amodule)
- self.result['facts'] = decort_osimage.decort_osimage_package_facts(image_facts, amodule.check_mode)
- return image_id, image_facts
-
- def decort_image_rename(self,amodule):
- # image renaming function
- image_facts = self.image_rename(imageId=self.validated_image_id, name=amodule.params['image_name'])
- self.result['msg'] = ("Image renamed successfully")
- image_id, image_facts = decort_osimage.decort_image_find(self, amodule)
- return image_id, image_facts
-
- def decort_virt_image_rename(self, amodule):
- image_facts = self.image_rename(imageId=self.validated_virt_image_id,
- name=amodule.params['virt_name'])
- self.result['msg'] = ("Virtual image renamed successfully")
- image_id, image_facts = self.decort_virt_image_find(amodule)
- return image_id, image_facts
-
- @staticmethod
- def decort_osimage_package_facts(
- arg_osimage_facts: dict | None,
- arg_check_mode=False,
- ):
- """Package a dictionary of OS image according to the decort_osimage module specification. This
- dictionary will be returned to the upstream Ansible engine at the completion of the module run.
-
- @param arg_osimage_facts: dictionary with OS image facts as returned by API call to .../images/list
- @param arg_check_mode: boolean that tells if this Ansible module is run in check mode.
-
- @return: dictionary with OS image specs populated from arg_osimage_facts.
- """
-
- ret_dict = dict(id=0,
- name="none",
- size=0,
- type="none",
- state="CHECK_MODE", )
-
- if arg_check_mode:
- # in check mode return immediately with the default values
- return ret_dict
-
- if arg_osimage_facts is None:
- # if void facts provided - change state value to ABSENT and return
- ret_dict['state'] = "ABSENT"
- return ret_dict
-
- ret_dict['id'] = arg_osimage_facts['id']
- ret_dict['name'] = arg_osimage_facts['name']
- ret_dict['size'] = arg_osimage_facts['size']
- # ret_dict['arch'] = arg_osimage_facts['architecture']
- ret_dict['sep_id'] = arg_osimage_facts['sepId']
- ret_dict['pool'] = arg_osimage_facts['pool']
- ret_dict['state'] = arg_osimage_facts['status']
- ret_dict['linkto'] = arg_osimage_facts['linkTo']
- ret_dict['accountId'] = arg_osimage_facts['accountId']
- ret_dict['boot_mode'] = arg_osimage_facts['bootType']
-
- ret_dict['boot_loader_type'] = ''
- match arg_osimage_facts['type']:
- case 'cdrom' | 'virtual' as type:
- ret_dict['type'] = type
- case _ as boot_loader_type:
- ret_dict['type'] = 'template'
- ret_dict['boot_loader_type'] = boot_loader_type
-
- ret_dict['network_interface_naming'] = arg_osimage_facts[
- 'networkInterfaceNaming'
- ]
- ret_dict['hot_resize'] = arg_osimage_facts['hotResize']
- ret_dict['storage_policy_id'] = arg_osimage_facts['storage_policy_id']
- ret_dict['to_clean'] = arg_osimage_facts['to_clean']
- return ret_dict
-
- @property
- def amodule_init_args(self) -> dict:
- return self.pack_amodule_init_args(
- argument_spec=dict(
- pool=dict(
- type='str',
- default='',
- ),
- sep_id=dict(
- type='int',
- default=0,
- ),
- account_name=dict(
- type='str',
- ),
- account_id=dict(
- type='int',
- ),
- image_name=dict(
- type='str',
- ),
- image_id=dict(
- type='int',
- default=0,
- ),
- virt_id=dict(
- type='int',
- default=0,
- ),
- virt_name=dict(
- type='str',
- ),
- state=dict(
- type='str',
- default='present',
- choices=[
- 'absent',
- 'present',
- ],
- ),
- url=dict(
- type='str',
- ),
- sepId=dict(
- type='int',
- default=0,
- ),
- poolName=dict(
- type='str',
- ),
- hot_resize=dict(
- type='bool',
- ),
- image_username=dict(
- type='str',
- ),
- image_password=dict(
- type='str',
- ),
- usernameDL=dict(
- type='str',
- ),
- passwordDL=dict(
- type='str',
- ),
- boot=dict(
- type='dict',
- options=dict(
- mode=dict(
- type='str',
- choices=[
- 'bios',
- 'uefi',
- ],
- ),
- loader_type=dict(
- type='str',
- choices=[
- 'windows',
- 'linux',
- 'unknown',
- ],
- ),
- ),
- ),
- network_interface_naming=dict(
- type='str',
- choices=[
- 'ens',
- 'eth',
- ],
- ),
- storage_policy_id=dict(
- type='int',
- ),
- ),
- supports_check_mode=True,
- )
-
- def check_amodule_args_for_change(self):
- check_errors = False
-
- aparam_storage_policy_id = self.aparams['storage_policy_id']
- if (
- aparam_storage_policy_id is not None
- and aparam_storage_policy_id
- not in self.acc_info['storage_policy_ids']
- ):
- check_errors = True
- self.message(
- msg='Check for parameter "storage_policy_id" failed: '
- f'Account ID {self.acc_id} does not have access to '
- f'storage_policy_id {aparam_storage_policy_id}'
- )
-
- if check_errors:
- self.exit(fail=True)
-
- def check_amodule_args_for_change_virt_image(self):
- check_errors = False
-
- aparam_storage_policy_id = self.aparams['storage_policy_id']
- if (
- aparam_storage_policy_id is not None
- and (
- aparam_storage_policy_id
- != self.virt_image_info['storage_policy_id']
- )
- ):
- check_errors = True
- self.message(
- msg='Check for parameter "storage_policy_id" failed: '
- 'storage_policy_id can not be changed in virtual image'
- )
-
- if check_errors:
- self.exit(fail=True)
-
- def check_amodule_args_for_create_image(self):
- check_errors = False
-
- aparam_account_id = self.aparams['account_id']
- if aparam_account_id is None:
- check_errors = True
- self.message(
- msg='Check for parameter "account_id" failed: '
- 'account_id must be specified when creating '
- 'a new image'
- )
-
- aparam_storage_policy_id = self.aparams['storage_policy_id']
- if aparam_storage_policy_id is None:
- check_errors = True
- self.message(
- msg='Check for parameter "storage_policy_id" failed: '
- 'storage_policy_id must be specified when creating '
- 'a new image'
- )
- elif (
- aparam_storage_policy_id
- not in self.acc_info['storage_policy_ids']
- ):
- check_errors = True
- self.message(
- msg='Check for parameter "storage_policy_id" failed: '
- f'Account ID {self.acc_id} does not have access to '
- f'storage_policy_id {aparam_storage_policy_id}'
- )
-
- if check_errors:
- self.exit(fail=True)
-
- def check_amodule_args_for_create_virt_image(self):
- check_errors = False
-
- aparam_storage_policy_id = self.aparams['storage_policy_id']
- if aparam_storage_policy_id is not None:
- check_errors = True
- self.message(
- msg='Check for parameter "storage_policy_id" failed: '
- 'storage_policy_id can not be specified when creating '
- 'virtual image'
- )
-
- if check_errors:
- self.exit(fail=True)
def main():
- decon = decort_osimage()
- amodule = decon.amodule
- if amodule.params['virt_name'] or amodule.params['virt_id']:
+ module = AnsibleModule(
+ argument_spec=dict(
+ app_id=dict(type='raw'),
+ app_secret=dict(type='raw'),
+ authenticator=dict(type='raw'),
+ controller_url=dict(type='raw'),
+ domain=dict(type='raw'),
+ jwt=dict(type='raw'),
+ oauth2_url=dict(type='raw'),
+ password=dict(type='raw'),
+ username=dict(type='raw'),
+ verify_ssl=dict(type='raw'),
+ ignore_api_compatibility=dict(type='raw'),
+ ignore_sdk_version_check=dict(type='raw'),
+ pool=dict(type='raw'),
+ sep_id=dict(type='raw'),
+ account_name=dict(type='raw'),
+ account_id=dict(type='raw'),
+ image_name=dict(type='raw'),
+ image_id=dict(type='raw'),
+ virt_id=dict(type='raw'),
+ virt_name=dict(type='raw'),
+ state=dict(type='raw'),
+ url=dict(type='raw'),
+ sepId=dict(type='raw'),
+ poolName=dict(type='raw'),
+ hot_resize=dict(type='raw'),
+ image_username=dict(type='raw'),
+ image_password=dict(type='raw'),
+ usernameDL=dict(type='raw'),
+ passwordDL=dict(type='raw'),
+ boot=dict(type='raw'),
+ network_interface_naming=dict(type='raw'),
+ storage_policy_id=dict(type='raw'),
+ ),
+ supports_check_mode=True,
+ )
- image_id, image_facts = decort_osimage.decort_virt_image_find(decon, amodule)
- if amodule.params['image_name'] or amodule.params['image_id']:
- decon.target_image_id, _ = decort_osimage.decort_image_find(decon, amodule)
- else:
- decon.target_image_id = 0
- if decort_osimage.decort_osimage_package_facts(image_facts)['id'] > 0:
- decon.result['facts'] = decort_osimage.decort_osimage_package_facts(image_facts, amodule.check_mode)
- decon.validated_virt_image_id = decort_osimage.decort_osimage_package_facts(image_facts)['id']
- decon.validated_virt_image_name = decort_osimage.decort_osimage_package_facts(image_facts)['name']
-
- if decort_osimage.decort_osimage_package_facts(image_facts)['id'] == 0 and amodule.params['state'] == "present" and decon.target_image_id > 0:
- image_id, image_facts = decort_osimage.decort_virt_image_create(decon,amodule)
- decon.result['msg'] = ("Virtual image '{}' created").format(decort_osimage.decort_osimage_package_facts(image_facts)['id'])
- decon.result['changed'] = True
- elif decort_osimage.decort_osimage_package_facts(image_facts)['id'] == 0 and amodule.params['state'] == "present" and decon.target_image_id == 0:
- decon.result['msg'] = ("Cannot find OS image")
- amodule.fail_json(**decon.result)
-
- if decon.validated_virt_image_id:
- if (
- decon.target_image_id
- and decort_osimage.decort_osimage_package_facts(image_facts)[
- 'linkto'
- ] != decon.target_image_id
- ):
- decort_osimage.decort_virt_image_link(decon,amodule)
- decon.result['changed'] = True
- amodule.exit_json(**decon.result)
- if (
- amodule.params['storage_policy_id'] is not None
- and amodule.params['storage_policy_id']
- != image_facts['storage_policy_id']
- ):
- decon.image_change_storage_policy(
- image_id=decon.validated_virt_image_id,
- storage_policy_id=amodule.params['storage_policy_id'],
- )
-
- if amodule.params['state'] == "absent" and decon.validated_virt_image_id:
- amodule.image_id_delete = decon.validated_virt_image_id
- image_id, image_facts = decort_osimage.decort_virt_image_find(decon, amodule)
- if image_facts['status'] != 'PURGED':
- decort_osimage.decort_image_delete(decon,amodule)
-
- elif amodule.params['image_name'] or amodule.params['image_id']:
- image_id, image_facts = decort_osimage.decort_image_find(decon, amodule)
- decon.validated_image_id = decort_osimage.decort_osimage_package_facts(image_facts)['id']
- if decort_osimage.decort_osimage_package_facts(image_facts)['id'] > 0:
- decon.result['facts'] = decort_osimage.decort_osimage_package_facts(image_facts, amodule.check_mode)
-
- if amodule.params['state'] == "present" and decon.validated_image_id == 0 and amodule.params['image_name'] and amodule.params['url']:
- decort_osimage.decort_image_create(decon,amodule)
- decon.result['changed'] = True
- image_id, image_facts = decort_osimage.decort_image_find(decon, amodule)
- decon.result['msg'] = ("OS image '{}' created").format(decort_osimage.decort_osimage_package_facts(image_facts)['id'])
- decon.result['facts'] = decort_osimage.decort_osimage_package_facts(image_facts, amodule.check_mode)
- decon.validated_image_id = decort_osimage.decort_osimage_package_facts(image_facts)['id']
-
- elif amodule.params['state'] == "absent" and decon.validated_image_id:
- amodule.image_id_delete = decon.validated_image_id
- image_id, image_facts = decort_osimage.decort_image_find(decon, amodule)
- if image_facts['status'] != 'DESTROYED':
- decort_osimage.decort_image_delete(decon,amodule)
-
- if decon.validated_image_id:
- if (
- amodule.params['storage_policy_id'] is not None
- and amodule.params['storage_policy_id']
- != image_facts['storage_policy_id']
- ):
- decon.image_change_storage_policy(
- image_id=decon.validated_image_id,
- storage_policy_id=amodule.params['storage_policy_id'],
- )
-
- if decon.result['failed'] == True:
- # we failed to find the specified image - fail the module
- decon.result['changed'] = False
- amodule.fail_json(**decon.result)
- else:
- if decon.validated_image_id:
- _, image_facts = decon.decort_image_find(amodule=amodule)
- elif decon.validated_virt_image_id:
- _, image_facts = decon.decort_virt_image_find(amodule=amodule)
- decon.result['facts'] = decort_osimage.decort_osimage_package_facts(
- arg_osimage_facts=image_facts,
- arg_check_mode=amodule.check_mode,
- )
-
- amodule.exit_json(**decon.result)
+ module.fail_json(
+ msg=(
+ 'The module "decort_osimage" has been renamed to "decort_image". '
+ 'Please update your playbook to use "decort_image" '
+ 'instead of "decort_osimage".'
+ ),
+ )
-if __name__ == "__main__":
+if __name__ == '__main__':
main()
diff --git a/library/decort_pfw.py b/library/decort_pfw.py
index 34b7f03..15eef74 100644
--- a/library/decort_pfw.py
+++ b/library/decort_pfw.py
@@ -86,60 +86,65 @@ class decort_pfw(DecortController):
return
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ amodule = self.amodule
+
+ pfw_facts = None # will hold PFW facts as returned by pfw_configure
+
+ #
+ # Validate module arguments:
+ # 1) specified Compute instance exists in correct state
+ # 2) specified ViNS exists
+ # 3) ViNS has GW function
+ # 4) Compute is connected to this ViNS
+ #
+
+ validated_comp_id, comp_facts, rg_id = self.compute_find(amodule.params['compute_id'])
+ if not validated_comp_id:
+ self.result['failed'] = True
+ self.result['msg'] = "Cannot find specified Compute ID {}.".format(amodule.params['compute_id'])
+ amodule.fail_json(**self.result)
+
+ validated_vins_id, vins_facts = self.vins_find(amodule.params['vins_id'])
+ if not validated_vins_id:
+ self.result['failed'] = True
+ self.result['msg'] = "Cannot find specified ViNS ID {}.".format(amodule.params['vins_id'])
+ amodule.fail_json(**self.result)
+
+ gw_vnf_facts = vins_facts['vnfs'].get('GW')
+ if not gw_vnf_facts or gw_vnf_facts['status'] == "DESTROYED":
+ self.result['failed'] = True
+ self.result['msg'] = "ViNS ID {} does not have a configured external connection.".format(validated_vins_id)
+ amodule.fail_json(**self.result)
+
+ #
+ # Initial validation of module arguments is complete
+ #
+
+ if amodule.params['state'] == 'absent':
+ # ignore amodule.params['rules'] and remove all rules associated with this Compute
+ pfw_facts = self.pfw_configure(comp_facts, vins_facts, None)
+ elif amodule.params['rules'] is not None:
+ # manage PFW rules according to the module arguments
+ pfw_facts = self.pfw_configure(comp_facts, vins_facts, amodule.params['rules'])
+ else:
+ pfw_facts = self._pfw_get(comp_facts['id'], vins_facts['id'])
+
+ #
+ # complete module run
+ #
+ if self.result['failed']:
+ amodule.fail_json(**self.result)
+ else:
+ # prepare PFW facts to be returned as part of self.result and then call exit_json(...)
+ self.result['facts'] = self.decort_pfw_package_facts(comp_facts, vins_facts, pfw_facts, amodule.check_mode)
+ amodule.exit_json(**self.result)
+
+
def main():
- decon = decort_pfw()
- amodule = decon.amodule
+ decort_pfw().run()
- pfw_facts = None # will hold PFW facts as returned by pfw_configure
- #
- # Validate module arguments:
- # 1) specified Compute instance exists in correct state
- # 2) specified ViNS exists
- # 3) ViNS has GW function
- # 4) Compute is connected to this ViNS
- #
-
- validated_comp_id, comp_facts, rg_id = decon.compute_find(amodule.params['compute_id'])
- if not validated_comp_id:
- decon.result['failed'] = True
- decon.result['msg'] = "Cannot find specified Compute ID {}.".format(amodule.params['compute_id'])
- amodule.fail_json(**decon.result)
-
- validated_vins_id, vins_facts = decon.vins_find(amodule.params['vins_id'])
- if not validated_vins_id:
- decon.result['failed'] = True
- decon.result['msg'] = "Cannot find specified ViNS ID {}.".format(amodule.params['vins_id'])
- amodule.fail_json(**decon.result)
-
- gw_vnf_facts = vins_facts['vnfs'].get('GW')
- if not gw_vnf_facts or gw_vnf_facts['status'] == "DESTROYED":
- decon.result['failed'] = True
- decon.result['msg'] = "ViNS ID {} does not have a configured external connection.".format(validated_vins_id)
- amodule.fail_json(**decon.result)
-
- #
- # Initial validation of module arguments is complete
- #
-
- if amodule.params['state'] == 'absent':
- # ignore amodule.params['rules'] and remove all rules associated with this Compute
- pfw_facts = decon.pfw_configure(comp_facts, vins_facts, None)
- elif amodule.params['rules'] is not None:
- # manage PFW rules accodring to the module arguments
- pfw_facts = decon.pfw_configure(comp_facts, vins_facts, amodule.params['rules'])
- else:
- pfw_facts = decon._pfw_get(comp_facts['id'], vins_facts['id'])
-
- #
- # complete module run
- #
- if decon.result['failed']:
- amodule.fail_json(**decon.result)
- else:
- # prepare PFW facts to be returned as part of decon.result and then call exit_json(...)
- decon.result['facts'] = decon.decort_pfw_package_facts(comp_facts, vins_facts, pfw_facts, amodule.check_mode)
- amodule.exit_json(**decon.result)
-
-if __name__ == "__main__":
+if __name__ == '__main__':
main()
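The refactor above repeatedly moves module logic from a free-standing `main()` into a `run()` method wrapped by `@DecortController.handle_sdk_exceptions`. The real decorator lives in `decort_utils` and is not shown in this diff; the following is a minimal, hypothetical sketch of that pattern, with `SDKError` and the `result` dict standing in for the SDK exception types and Ansible result object:

```python
import functools


class SDKError(Exception):
    """Stand-in for dynamix_sdk exceptions (hypothetical)."""


class Controller:
    def __init__(self):
        self.result = {'failed': False, 'msg': ''}

    @staticmethod
    def handle_sdk_exceptions(method):
        # Wrap a module entry point so SDK errors are converted
        # into a failed module result instead of a raw traceback.
        @functools.wraps(method)
        def wrapper(self, *args, **kwargs):
            try:
                return method(self, *args, **kwargs)
            except SDKError as e:
                self.result['failed'] = True
                self.result['msg'] = str(e)
                return self.result
        return wrapper


class Demo(Controller):
    @Controller.handle_sdk_exceptions
    def run(self):
        raise SDKError('API unreachable')


print(Demo().run())  # {'failed': True, 'msg': 'API unreachable'}
```

With this shape, `main()` collapses to `Demo().run()`, which matches the `decort_pfw().run()` one-liner introduced in the diff.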
diff --git a/library/decort_rg.py b/library/decort_rg.py
index 2341d70..8bf204f 100644
--- a/library/decort_rg.py
+++ b/library/decort_rg.py
@@ -107,7 +107,6 @@ class decort_rg(DecortController):
ram='ram',
disk='disksize',
ext_ips='extips',
- net_transfer='exttraffic',
storage_policies='policies',
)
if self.amodule.params['quotas']:
@@ -407,73 +406,78 @@ class decort_rg(DecortController):
# 4) if RG exists: check desired state, desired configuration -> initiate action accordingly
# 5) report result to Ansible
-def main():
- decon = decort_rg()
- amodule = decon.amodule
- #amodule.check_mode=True
- if decon.validated_rg_id > 0:
- if decon.rg_facts['status'] in ["MODELED", "DISABLING", "ENABLING", "DELETING", "DESTROYING", "CONFIRMED"]:
- decon.error()
- elif decon.rg_facts['status'] in ("CREATED"):
- if amodule.params['state'] == 'absent':
- decon.destroy()
- elif amodule.params['state'] == "disabled":
- decon.enable()
- if amodule.params['state'] in ['present', 'enabled']:
- if (
- amodule.params['quotas']
- or amodule.params['resType']
- or amodule.params['rename'] != ""
- or amodule.params['sep_pools'] is not None
- or amodule.params['description'] is not None
- ):
- decon.update()
- if amodule.params['access']:
- decon.access()
- if amodule.params['def_netType'] is not None:
- decon.setDefNet()
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ amodule = self.amodule
+ #amodule.check_mode=True
+ if self.validated_rg_id > 0:
+ if self.rg_facts['status'] in ["MODELED", "DISABLING", "ENABLING", "DELETING", "DESTROYING", "CONFIRMED"]:
+ self.error()
+ elif self.rg_facts['status'] == "CREATED":
+ if amodule.params['state'] == 'absent':
+ self.destroy()
+ elif amodule.params['state'] == "disabled":
+ self.enable()
+ if amodule.params['state'] in ['present', 'enabled']:
+ if (
+ amodule.params['quotas']
+ or amodule.params['resType']
+ or amodule.params['rename'] != ""
+ or amodule.params['sep_pools'] is not None
+ or amodule.params['description'] is not None
+ ):
+ self.update()
+ if amodule.params['access']:
+ self.access()
+ if amodule.params['def_netType'] is not None:
+ self.setDefNet()
- elif decon.rg_facts['status'] == "DELETED":
- if amodule.params['state'] == 'absent' and amodule.params['permanently'] == True:
- decon.destroy()
- elif (amodule.params['state'] == 'present'
- or amodule.params['state'] == 'disabled'):
- decon.restore()
- elif amodule.params['state'] == 'enabled':
- decon.restore()
- decon.enable()
- elif decon.rg_facts['status'] in ("DISABLED"):
- if amodule.params['state'] == 'absent':
- decon.destroy()
- elif amodule.params['state'] == ("enabled"):
- decon.enable()
+ elif self.rg_facts['status'] == "DELETED":
+ if amodule.params['state'] == 'absent' and amodule.params['permanently']:
+ self.destroy()
+ elif (amodule.params['state'] == 'present'
+ or amodule.params['state'] == 'disabled'):
+ self.restore()
+ elif amodule.params['state'] == 'enabled':
+ self.restore()
+ self.enable()
+ elif self.rg_facts['status'] == "DISABLED":
+ if amodule.params['state'] == 'absent':
+ self.destroy()
+ elif amodule.params['state'] == 'enabled':
+ self.enable()
- else:
- if amodule.params['state'] in ('present', 'enabled'):
- if not amodule.params['rg_name']:
- decon.result['failed'] = True
- decon.result['msg'] = (
- 'Resource group could not be created because'
- ' the "rg_name" parameter was not specified.'
- )
- else:
- decon.create()
- if amodule.params['access'] and not amodule.check_mode:
- decon.access()
- elif amodule.params['state'] in ('disabled'):
- decon.error()
-
-
- if decon.result['failed']:
- amodule.fail_json(**decon.result)
- else:
- if decon.rg_should_exist:
- if decon.result['changed']:
- decon.get_info()
- decon.result['facts'] = decon.package_facts(amodule.check_mode)
- amodule.exit_json(**decon.result)
else:
- amodule.exit_json(**decon.result)
+ if amodule.params['state'] in ('present', 'enabled'):
+ if not amodule.params['rg_name']:
+ self.result['failed'] = True
+ self.result['msg'] = (
+ 'Resource group could not be created because'
+ ' the "rg_name" parameter was not specified.'
+ )
+ else:
+ self.create()
+ if amodule.params['access'] and not amodule.check_mode:
+ self.access()
+ elif amodule.params['state'] == 'disabled':
+ self.error()
+
-if __name__ == "__main__":
+ if self.result['failed']:
+ amodule.fail_json(**self.result)
+ else:
+ if self.rg_should_exist:
+ if self.result['changed']:
+ self.get_info()
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
+ else:
+ amodule.exit_json(**self.result)
+
+
+def main():
+ decort_rg().run()
+
+
+if __name__ == '__main__':
main()
diff --git a/library/decort_rg_list.py b/library/decort_rg_list.py
new file mode 100644
index 0000000..7a1350d
--- /dev/null
+++ b/library/decort_rg_list.py
@@ -0,0 +1,148 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_rg_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortRGList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ account_id=dict(
+ type='int',
+ ),
+ account_name=dict(
+ type='str',
+ ),
+ created_after_timestamp=dict(
+ type='int',
+ ),
+ created_before_timestamp=dict(
+ type='int',
+ ),
+ id=dict(
+ type='int',
+ ),
+ include_deleted=dict(
+ type='bool',
+ ),
+ lock_status=dict(
+ type='str',
+ choices=sdk_types.LockStatus._member_names_,
+ ),
+ name=dict(
+ type='str',
+ ),
+ status=dict(
+ type='str',
+ choices=(
+ sdk_types.ResourceGroupStatus._member_names_
+ ),
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=(
+ sdk_types.ResourceGroupAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+ aparam_lock_status: str | None = aparam_filter['lock_status']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.ResourceGroupAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.rg.list(
+ account_id=aparam_filter['account_id'],
+ account_name=aparam_filter['account_name'],
+ created_after_timestamp=aparam_filter['created_after_timestamp'],
+ created_before_timestamp=aparam_filter['created_before_timestamp'],
+ id=aparam_filter['id'],
+ include_deleted=aparam_filter['include_deleted'] or False,
+ lock_status=(
+ sdk_types.LockStatus[aparam_lock_status]
+ if aparam_lock_status else None
+ ),
+ name=aparam_filter['name'],
+ status=(
+ sdk_types.ResourceGroupStatus[aparam_status]
+ if aparam_status else None
+ ),
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortRGList().run()
+
+
+if __name__ == '__main__':
+ main()
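The `sorting` handling in `decort_rg_list.get_info` builds the `sort_by` argument by prefixing the resolved field name with `+` for ascending or `-` for descending order. That convention can be isolated as a small helper; the function name and field names below are illustrative, not part of the module's API:

```python
def build_sort_by(field: str, asc: bool = True) -> str:
    # Mirror the module's convention: '+' means ascending, '-' descending.
    prefix = '+' if asc else '-'
    return f'{prefix}{field}'


print(build_sort_by('name'))                    # +name
print(build_sort_by('createdTime', asc=False))  # -createdTime
```

In the module itself the field name is first translated to its API alias via `get_alias(...)` before the prefix is applied, so user-facing snake_case names map onto the SDK's field names.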
diff --git a/library/decort_security_group.py b/library/decort_security_group.py
index d785824..8b611a7 100644
--- a/library/decort_security_group.py
+++ b/library/decort_security_group.py
@@ -10,6 +10,9 @@ description: See L(Module Documentation,https://repository.basistech.ru/BASIS/de
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.decort_utils import DecortController
+from dynamix_sdk import exceptions as sdk_exceptions
+import dynamix_sdk.types as sdk_types
+
class DecortSecurityGroup(DecortController):
id: int = 0
@@ -52,18 +55,18 @@ class DecortSecurityGroup(DecortController):
options=dict(
direction=dict(
type='str',
- choices=[
- e.name for e in
- self.SecurityGroupRuleDirection
- ],
+ choices=(
+ sdk_types.TrafficDirection.
+ _member_names_
+ ),
required=True,
),
ethertype=dict(
type='str',
- choices=[
- e.name for e in
- self.SecurityGroupRuleEtherType
- ],
+ choices=(
+ sdk_types.SGRuleEthertype.
+ _member_names_
+ ),
),
id=dict(
type='int',
@@ -81,12 +84,11 @@ class DecortSecurityGroup(DecortController):
),
protocol=dict(
type='str',
- choices=[
- e.name for e in
- self.SecurityGroupRuleProtocol
- ],
+ choices=(
+ sdk_types.SGRuleProtocol._member_names_
+ ),
),
- remote_ip_prefix=dict(
+ remote_net_cidr=dict(
type='str',
),
),
@@ -101,16 +103,17 @@ class DecortSecurityGroup(DecortController):
supports_check_mode=True,
)
+ @DecortController.handle_sdk_exceptions
def run(self):
if self.aparams['id'] is not None:
self.id = self.aparams['id']
elif self.aparams['name'] is not None:
- security_group = self.security_group_find(
+ security_groups = self.api.cloudapi.security_group.list(
account_id=self.aparams['account_id'],
name=self.aparams['name'],
)
- if security_group:
- self.id = security_group['id']
+ if security_groups.data:
+ self.id = security_groups.data[0].id
if self.id:
self.get_info()
@@ -127,14 +130,25 @@ class DecortSecurityGroup(DecortController):
self.exit()
def get_info(self):
- self.facts: dict = self.security_group_get(id=self.id)
- self.facts['created_timestamp'] = self.facts.pop('created_at')
- self.facts['updated_timestamp'] = self.facts.pop('updated_at')
- for rule in self.facts['rules']:
- rule['port_range'] = {
- 'min': rule.pop('port_range_min'),
- 'max': rule.pop('port_range_max'),
- }
+ try:
+ security_group_model = self.api.cloudapi.security_group.get(
+ security_group_id=self.id
+ )
+ except sdk_exceptions.RequestException as e:
+ if (
+ e.orig_exception.response
+ and e.orig_exception.response.status_code == 404
+ ):
+ self.message(
+ self.MESSAGES.obj_not_found(
+ obj='security_group',
+ id=self.id,
+ )
+ )
+ self.exit(fail=True)
+ raise
+
+ self.facts = security_group_model.model_dump()
def check_amodule_args_for_create(self):
check_errors = False
@@ -242,16 +256,13 @@ class DecortSecurityGroup(DecortController):
return not check_errors
def create(self):
- security_groups_by_account_id = self.user_security_groups(
- account_id=self.aparams['account_id']
+ id = self.sdk_checkmode(self.api.cloudapi.security_group.create)(
+ account_id=self.aparams['account_id'],
+ name=self.aparams['name'],
+ description=self.aparams['description'],
)
- sg_names = [sg['name'] for sg in security_groups_by_account_id]
- if self.aparams['name'] not in sg_names:
- self.id = self.security_group_create(
- account_id=self.aparams['account_id'],
- name=self.aparams['name'],
- description=self.aparams['description'],
- )
+ if id:
+ self.id = id
def change(self):
self.change_state()
@@ -277,7 +288,7 @@ class DecortSecurityGroup(DecortController):
):
new_description = aparam_description
if new_name or new_description:
- self.security_group_update(
+ self.sdk_checkmode(self.api.cloudapi.security_group.update)(
security_group_id=self.id,
name=new_name,
description=new_description,
@@ -317,7 +328,9 @@ class DecortSecurityGroup(DecortController):
self.create_rule(rule=rule)
def delete(self):
- self.security_group_detele(security_group_id=self.id)
+ self.sdk_checkmode(self.api.cloudapi.security_group.delete)(
+ security_group_id=self.id,
+ )
self.facts = {}
self.exit()
@@ -326,20 +339,22 @@ class DecortSecurityGroup(DecortController):
if rule.get('port_range'):
port_range_min = rule['port_range'].get('min')
port_range_max = rule['port_range'].get('max')
- self.security_group_create_rule(
+ self.sdk_checkmode(self.api.cloudapi.security_group.create_rule)(
security_group_id=self.id,
- direction=self.SecurityGroupRuleDirection[rule['direction']],
+ traffic_direction=(
+ sdk_types.TrafficDirection[rule['direction']]
+ ),
ethertype=(
- self.SecurityGroupRuleEtherType[rule['ethertype']]
- if rule.get('ethertype') else None
+ sdk_types.SGRuleEthertype[rule['ethertype']]
+ if rule.get('ethertype') else sdk_types.SGRuleEthertype.IPV4
),
protocol=(
- self.SecurityGroupRuleProtocol[rule['protocol']]
+ sdk_types.SGRuleProtocol[rule['protocol']]
if rule.get('protocol') else None
),
port_range_min=port_range_min,
port_range_max=port_range_max,
- remote_ip_prefix=rule.get('remote_ip_prefix'),
+ remote_net_cidr=rule.get('remote_net_cidr'),
)
diff --git a/library/decort_security_group_list.py b/library/decort_security_group_list.py
new file mode 100644
index 0000000..d7bc22f
--- /dev/null
+++ b/library/decort_security_group_list.py
@@ -0,0 +1,132 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_security_group_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortSecurityGroupList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ account_id=dict(
+ type='int',
+ ),
+ created_after_timestamp=dict(
+ type='int',
+ ),
+ created_before_timestamp=dict(
+ type='int',
+ ),
+ description=dict(
+ type='str',
+ ),
+ id=dict(
+ type='int',
+ ),
+ name=dict(
+ type='str',
+ ),
+ updated_after_timestamp=dict(
+ type='int',
+ ),
+ updated_before_timestamp=dict(
+ type='int',
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=list(
+ sdk_types.SecurityGroupAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.SecurityGroupAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.security_group.list(
+ account_id=aparam_filter['account_id'],
+ created_after_timestamp=aparam_filter['created_after_timestamp'],
+ created_before_timestamp=aparam_filter['created_before_timestamp'],
+ description=aparam_filter['description'],
+ id=aparam_filter['id'],
+ name=aparam_filter['name'],
+ updated_after_timestamp=aparam_filter['updated_after_timestamp'],
+ updated_before_timestamp=aparam_filter['updated_before_timestamp'],
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortSecurityGroupList().run()
+
+
+if __name__ == '__main__':
+ main()
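Each new `*_list` module derives its `sort_by` argument from the `sorting` options by prefixing the API field name with `+` (ascending) or `-` (descending). A minimal sketch of that convention, with `build_sort_by` as a hypothetical helper (the modules inline this logic in `get_info`):

```python
def build_sort_by(field: str, asc: bool = True) -> str:
    """Encode the sort direction as a '+'/'-' prefix on the field name."""
    prefix = '+' if asc else '-'
    return f'{prefix}{field}'
```

In the modules, the user-facing field name is first translated to its API alias via `get_alias` before the prefix is applied.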
diff --git a/library/decort_storage_policy.py b/library/decort_storage_policy.py
index 5e08743..0e188f8 100644
--- a/library/decort_storage_policy.py
+++ b/library/decort_storage_policy.py
@@ -10,6 +10,8 @@ description: See L(Module Documentation,https://repository.basistech.ru/BASIS/de
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.decort_utils import DecortController
+from dynamix_sdk import exceptions as sdk_exceptions
+
class DecortStoragePolicy(DecortController):
def __init__(self):
@@ -28,18 +30,31 @@ class DecortStoragePolicy(DecortController):
supports_check_mode=True,
)
+ @DecortController.handle_sdk_exceptions
def run(self):
self.get_info()
self.exit()
def get_info(self):
- self.facts = self.storage_policy_get(id=self.id)
- self.facts['sep_pools'] = self.facts.pop('access_seps_pools')
- self.facts['iops_limit'] = self.facts.pop('limit_iops')
- self.facts['usage']['account_ids'] = self.facts['usage'].pop(
- 'accounts'
- )
- self.facts['usage']['rg_ids'] = self.facts['usage'].pop('resgroups')
+ try:
+ storage_policy_model = self.api.cloudapi.storage_policy.get(
+ id=self.id
+ )
+ except sdk_exceptions.RequestException as e:
+ if (
+ e.orig_exception.response
+ and e.orig_exception.response.status_code == 404
+ ):
+ self.message(
+ self.MESSAGES.obj_not_found(
+ obj='storage_policy',
+ id=self.id,
+ )
+ )
+ self.exit(fail=True)
+ raise
+
+ self.facts = storage_policy_model.model_dump()
def main():
diff --git a/library/decort_storage_policy_list.py b/library/decort_storage_policy_list.py
new file mode 100644
index 0000000..6259523
--- /dev/null
+++ b/library/decort_storage_policy_list.py
@@ -0,0 +1,152 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_storage_policy_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortStoragePolicyList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ account_id=dict(
+ type='int',
+ ),
+ description=dict(
+ type='str',
+ ),
+ id=dict(
+ type='int',
+ ),
+ iops_limit=dict(
+ type='int',
+ ),
+ name=dict(
+ type='str',
+ ),
+ rg_id=dict(
+ type='int',
+ ),
+ sep_id=dict(
+ type='int',
+ ),
+ sep_pool_name=dict(
+ type='str',
+ ),
+ status=dict(
+ type='str',
+ choices=(
+ sdk_types.StoragePolicyStatus._member_names_
+ ),
+ ),
+ sep_tech_status=dict(
+ type='str',
+ choices=sdk_types.SEPTechStatus._member_names_,
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=list(
+ sdk_types.StoragePolicyAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+ aparam_sep_tech_status: str | None = aparam_filter['sep_tech_status']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.StoragePolicyAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.storage_policy.list(
+ account_id=aparam_filter['account_id'],
+ description=aparam_filter['description'],
+ id=aparam_filter['id'],
+ iops_limit=aparam_filter['iops_limit'],
+ name=aparam_filter['name'],
+ rg_id=aparam_filter['rg_id'],
+ sep_id=aparam_filter['sep_id'],
+ sep_pool_name=aparam_filter['sep_pool_name'],
+ status=(
+ sdk_types.StoragePolicyStatus[aparam_status]
+ if aparam_status else None
+ ),
+ sep_tech_status=(
+ sdk_types.SEPTechStatus[aparam_sep_tech_status]
+ if aparam_sep_tech_status else None
+ ),
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortStoragePolicyList().run()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/library/decort_trunk.py b/library/decort_trunk.py
index 342616c..3a44a70 100644
--- a/library/decort_trunk.py
+++ b/library/decort_trunk.py
@@ -10,6 +10,8 @@ description: See L(Module Documentation,https://repository.basistech.ru/BASIS/de
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.decort_utils import DecortController
+from dynamix_sdk import exceptions as sdk_exceptions
+
class DecortTrunk(DecortController):
def __init__(self):
@@ -28,19 +30,31 @@ class DecortTrunk(DecortController):
supports_check_mode=True,
)
+ @DecortController.handle_sdk_exceptions
def run(self):
self.get_info()
self.exit()
def get_info(self):
- self.facts = self.trunk_get(id=self.id)
- self.facts['account_ids'] = self.facts.pop('accountIds')
- self.facts['created_timestamp'] = self.facts.pop('created_at')
- self.facts['deleted_timestamp'] = self.facts.pop('deleted_at')
- self.facts['updated_timestamp'] = self.facts.pop('updated_at')
- self.facts['native_vlan_id'] = self.facts.pop('nativeVlanId')
- self.facts['ovs_bridge'] = self.facts.pop('ovsBridge')
- self.facts['vlan_ids'] = self.facts.pop('trunkTags')
+ try:
+ trunk_model = self.api.cloudapi.trunk.get(
+ id=self.id
+ )
+ except sdk_exceptions.RequestException as e:
+ if (
+ e.orig_exception.response
+ and e.orig_exception.response.status_code == 404
+ ):
+ self.message(
+ self.MESSAGES.obj_not_found(
+ obj='trunk',
+ id=self.id,
+ )
+ )
+ self.exit(fail=True)
+ raise
+
+ self.facts = trunk_model.model_dump()
def main():
diff --git a/library/decort_trunk_list.py b/library/decort_trunk_list.py
new file mode 100644
index 0000000..b5b0e6e
--- /dev/null
+++ b/library/decort_trunk_list.py
@@ -0,0 +1,123 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_trunk_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortTrunkList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ account_ids=dict(
+ type='list',
+ elements='int',
+ ),
+ ids=dict(
+ type='list',
+ elements='int',
+ ),
+ status=dict(
+ type='str',
+ choices=sdk_types.TrunkStatus._member_names_,
+ ),
+ vlan_ids=dict(
+ type='str',
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=list(
+ sdk_types.TrunkAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.TrunkAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.trunk.list(
+ account_ids=aparam_filter['account_ids'],
+ ids=aparam_filter['ids'],
+ status=(
+ sdk_types.TrunkStatus[aparam_status]
+ if aparam_status else None
+ ),
+ vlan_ids=aparam_filter['vlan_ids'],
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortTrunkList().run()
+
+
+if __name__ == '__main__':
+ main()
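Throughout the diff, enum-backed parameters follow one pattern: `_member_names_` supplies the `choices` list for the Ansible argument spec, and indexing the enum by name (as in `sdk_types.TrunkStatus[aparam_status]`) converts the validated string back to an enum member. A self-contained sketch, using an illustrative stand-in for the real `dynamix_sdk.types` enum:

```python
from enum import Enum


class TrunkStatus(Enum):
    """Illustrative stand-in for sdk_types.TrunkStatus."""
    ENABLED = 'ENABLED'
    DISABLED = 'DISABLED'


# _member_names_ lists member names in definition order: these become
# the `choices` offered to the user in the argument spec.
choices = TrunkStatus._member_names_

# Enum[name] turns a validated choice string back into the member
# that the SDK call expects.
selected = TrunkStatus[choices[0]]
```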
diff --git a/library/decort_user.py b/library/decort_user.py
new file mode 100644
index 0000000..dfe23f9
--- /dev/null
+++ b/library/decort_user.py
@@ -0,0 +1,68 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_user
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+
+class DecortUser(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ api_methods=dict(
+ type='bool',
+ default=False,
+ ),
+ objects_search=dict(
+ type='str',
+ ),
+ resource_consumption=dict(
+ type='bool',
+ default=False,
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ self.facts = self.usermanager_whoami_result
+ self.id = self.facts['name']
+
+ user_get = self.user_get(id=self.id)
+ for key in ['emailaddresses', 'data']:
+ self.facts[key] = user_get[key]
+
+ if self.aparams['resource_consumption']:
+ self.facts.update(self.user_resource_consumption())
+
+ if self.aparams['api_methods']:
+ self.facts['api_methods'] = self.user_api_methods(id=self.id)
+
+ search_string = self.aparams['objects_search']
+ if search_string:
+ self.facts['objects_search'] = self.user_objects_search(
+ search_string=search_string,
+ )
+
+
+def main():
+ DecortUser().run()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/library/decort_user_info.py b/library/decort_user_info.py
index d3cb666..ddf4469 100644
--- a/library/decort_user_info.py
+++ b/library/decort_user_info.py
@@ -8,871 +8,37 @@ description: See L(Module Documentation,https://repository.basistech.ru/BASIS/de
'''
from ansible.module_utils.basic import AnsibleModule
-from ansible.module_utils.decort_utils import DecortController
-
-
-class DecortUserInfo(DecortController):
- def __init__(self):
- super().__init__(AnsibleModule(**self.amodule_init_args))
- self.check_amodule_args()
-
- @property
- def amodule_init_args(self) -> dict:
- return self.pack_amodule_init_args(
- argument_spec=dict(
- accounts=dict(
- type='dict',
- options=dict(
- deleted=dict(
- type='bool',
- default=False,
- ),
- filter=dict(
- type='dict',
- options=dict(
- rights=dict(
- type='str',
- choices=[
- e.value for e in self.AccountUserRights
- ],
- ),
- id=dict(
- type='int',
- ),
- name=dict(
- type='str',
- ),
- status=dict(
- type='str',
- choices=[
- e.value for e in self.AccountStatus
- ],
- ),
- zone_id=dict(
- type='int',
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- required=True,
- ),
- ),
- ),
- resource_consumption=dict(
- type='bool',
- default=False,
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=[
- e.value
- for e in self.AccountSortableField
- ],
- required=True,
- ),
- ),
- ),
- ),
- ),
- api_methods=dict(
- type='bool',
- default=False,
- ),
- audits=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- api_method=dict(
- type='str',
- ),
- status_code=dict(
- type='dict',
- options=dict(
- min=dict(
- type='int',
- ),
- max=dict(
- type='int',
- ),
- ),
- ),
- time=dict(
- type='dict',
- options=dict(
- start=dict(
- type='dict',
- options=dict(
- timestamp=dict(
- type='int',
- ),
- datetime=dict(
- type='str',
- ),
- ),
- mutually_exclusive=[
- ('timestamp', 'datetime'),
- ],
- ),
- end=dict(
- type='dict',
- options=dict(
- timestamp=dict(
- type='int',
- ),
- datetime=dict(
- type='str',
- ),
- ),
- mutually_exclusive=[
- ('timestamp', 'datetime'),
- ],
- ),
- ),
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- apply_defaults=True,
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- default=50,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=[
- e.value
- for e in self.AuditsSortableField
- ],
- required=True,
- ),
- ),
- ),
- ),
- ),
- objects_search=dict(
- type='str',
- ),
- resource_consumption=dict(
- type='bool',
- default=False,
- ),
- zones=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- deletable=dict(
- type='bool',
- ),
- description=dict(
- type='str',
- ),
- grid_id=dict(
- type='int',
- ),
- id=dict(
- type='int',
- ),
- name=dict(
- type='str',
- ),
- node_id=dict(
- type='int',
- ),
- status=dict(
- type='str',
- choices=[
- e.value for e in self.ZoneStatus
- ],
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- apply_defaults=True,
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- default=50,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=self.ZoneField._member_names_,
- required=True,
- ),
- ),
- ),
- ),
- ),
- trunks=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- ids=dict(
- type='list',
- ),
- account_ids=dict(
- type='list',
- ),
- status=dict(
- type='str',
- choices=[
- e.value for e in self.TrunkStatus
- ],
- ),
- vlan_ids=dict(
- type='list',
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- apply_defaults=True,
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- default=50,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=[
- e.value
- for e in self.TrunksSortableField
- ],
- required=True,
- ),
- ),
- ),
- ),
- ),
- storage_policies=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- account_id=dict(
- type='int',
- ),
- description=dict(
- type='str',
- ),
- id=dict(
- type='int',
- ),
- iops_limit=dict(
- type='int',
- ),
- name=dict(
- type='str',
- ),
- pool_name=dict(
- type='str',
- ),
- rg_id=dict(
- type='int',
- ),
- sep_id=dict(
- type='int',
- ),
- status=dict(
- type='str',
- choices=[
- e.value for e
- in self.StoragePolicyStatus
- ],
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- apply_defaults=True,
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- default=50,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=[
- e.value for e
- in self.StoragePoliciesSortableField
- ],
- required=True,
- ),
- ),
- ),
- ),
- ),
- security_groups=dict(
- type='dict',
- options=dict(
- filter=dict(
- type='dict',
- options=dict(
- account_id=dict(
- type='int',
- ),
- created_timestamp_max=dict(
- type='int',
- ),
- created_timestamp_min=dict(
- type='int',
- ),
- description=dict(
- type='str',
- ),
- id=dict(
- type='int',
- ),
- name=dict(
- type='str',
- ),
- updated_timestamp_max=dict(
- type='int',
- ),
- updated_timestamp_min=dict(
- type='int',
- ),
- ),
- ),
- pagination=dict(
- type='dict',
- apply_defaults=True,
- options=dict(
- number=dict(
- type='int',
- default=1,
- ),
- size=dict(
- type='int',
- default=50,
- ),
- ),
- ),
- sorting=dict(
- type='dict',
- options=dict(
- asc=dict(
- type='bool',
- default=True,
- ),
- field=dict(
- type='str',
- choices=[
- e.name for e
- in self.SecurityGroupSortableField
- ],
- required=True,
- ),
- ),
- ),
- ),
- ),
- ),
- supports_check_mode=True,
- )
-
- def check_amodule_args(self):
- """
- Additional validation of Ansible Module arguments.
- This validation cannot be implemented using
- Ansible Argument spec.
- """
-
- check_error = False
-
- match self.aparams['audits']:
- case {
- 'filter': {'time': {'start': {'datetime': str() as dt_str}}}
- }:
- if self.dt_str_to_sec(dt_str=dt_str) is None:
- self.message(self.MESSAGES.str_not_parsed(string=dt_str))
- check_error = True
- match self.aparams['audits']:
- case {
- 'filter': {'time': {'end': {'datetime': str() as dt_str}}}
- }:
- if self.dt_str_to_sec(dt_str=dt_str) is None:
- self.message(self.MESSAGES.str_not_parsed(string=dt_str))
- check_error = True
-
- aparam_trunks = self.aparams['trunks']
- if (
- aparam_trunks is not None
- and aparam_trunks['filter'] is not None
- and aparam_trunks['filter']['vlan_ids'] is not None
- ):
- for vlan_id in aparam_trunks['filter']['vlan_ids']:
- if not (
- self.TRUNK_VLAN_ID_MIN_VALUE
- <= vlan_id
- <= self.TRUNK_VLAN_ID_MAX_VALUE
- ):
- check_error = True
- self.message(
- 'Check for parameter "trunks.filter.vlan_ids" failed: '
- f'VLAN ID {vlan_id} must be in range 1-4095.'
- )
-
- if check_error:
- self.exit(fail=True)
-
- @property
- def mapped_accounts_args(self) -> None | dict:
- """
- Map the module argument `accounts` to
- arguments dictionary for the method
- `DecortController.user_accounts`.
- """
-
- input_args = self.aparams['accounts']
- if not input_args:
- return input_args
-
- mapped_args = {}
-
- mapped_args['deleted'] = input_args['deleted']
-
- mapped_args['resource_consumption'] = (
- input_args['resource_consumption']
- )
-
- input_args_filter = input_args['filter']
- if input_args_filter:
- input_args_filter_rights = input_args_filter['rights']
- if input_args_filter_rights:
- mapped_args['account_user_rights'] = (
- self.AccountUserRights(input_args_filter_rights)
- )
-
- mapped_args['account_id'] = input_args_filter['id']
-
- mapped_args['account_name'] = input_args_filter['name']
-
- input_args_filter_status = input_args_filter['status']
- if input_args_filter_status:
- mapped_args['account_status'] = (
- self.AccountStatus(input_args_filter_status)
- )
-
- mapped_args['zone_id'] = input_args_filter['zone_id']
-
- input_args_pagination = input_args['pagination']
- if input_args_pagination:
- mapped_args['page_number'] = input_args_pagination['number']
- mapped_args['page_size'] = input_args_pagination['size']
-
- input_args_sorting = input_args['sorting']
- if input_args_sorting:
- mapped_args['sort_by_asc'] = input_args_sorting['asc']
-
- input_args_sorting_field = input_args_sorting['field']
- if input_args_sorting_field:
- mapped_args['sort_by_field'] = (
- self.AccountSortableField(input_args_sorting_field)
- )
-
- return mapped_args
-
- @property
- def mapped_audits_args(self):
- """
- Map the module argument `audits` to
- arguments dictionary for the method
- `DecortController.user_audits`.
- """
-
- input_args = self.aparams['audits']
- if not input_args:
- return input_args
-
- mapped_args = {}
-
- input_args_filter = input_args['filter']
- if input_args_filter:
- mapped_args['api_method'] = input_args_filter['api_method']
-
- match input_args_filter['status_code']:
- case {'min': int() as min_status_code}:
- mapped_args['min_status_code'] = min_status_code
- match input_args_filter['status_code']:
- case {'max': int() as max_status_code}:
- mapped_args['max_status_code'] = max_status_code
-
- match input_args_filter['time']:
- case {'start': {'timestamp': int() as start_unix_time}}:
- mapped_args['start_unix_time'] = start_unix_time
- case {'start': {'datetime': str() as start_dt_str}}:
- mapped_args['start_unix_time'] = self.dt_str_to_sec(
- dt_str=start_dt_str
- )
- match input_args_filter['time']:
- case {'end': {'timestamp': int() as end_unix_time}}:
- mapped_args['end_unix_time'] = end_unix_time
- case {'end': {'datetime': str() as end_dt_str}}:
- mapped_args['end_unix_time'] = self.dt_str_to_sec(
- dt_str=end_dt_str
- )
-
- input_args_pagination = input_args['pagination']
- if input_args_pagination:
- mapped_args['page_number'] = input_args_pagination['number']
- mapped_args['page_size'] = input_args_pagination['size']
-
- input_args_sorting = input_args['sorting']
- if input_args_sorting:
- mapped_args['sort_by_asc'] = input_args_sorting['asc']
-
- input_args_sorting_field = input_args_sorting['field']
- if input_args_sorting_field:
- mapped_args['sort_by_field'] = (
- self.AuditsSortableField(input_args_sorting_field)
- )
-
- return mapped_args
-
- @property
- def mapped_zones_args(self):
- """
- Map the module argument `zones` to
- arguments dictionary for the method
- `DecortController.user_zones`.
- """
-
- input_args = self.aparams['zones']
- if not input_args:
- return input_args
-
- mapped_args = {}
-
- input_args_filter = input_args['filter']
- if input_args_filter:
- mapped_args.update(input_args_filter)
-
- input_args_filter_status = input_args_filter['status']
- if input_args_filter_status:
- mapped_args['status'] = (
- self.ZoneStatus(input_args_filter_status)
- )
-
- input_args_pagination = input_args['pagination']
- if input_args_pagination:
- mapped_args['page_number'] = input_args_pagination['number']
- mapped_args['page_size'] = input_args_pagination['size']
-
- input_args_sorting = input_args['sorting']
- if input_args_sorting:
- mapped_args['sort_by_asc'] = input_args_sorting['asc']
-
- input_args_sorting_field = input_args_sorting['field']
- if input_args_sorting_field:
- mapped_args['sort_by_field'] = (
- self.ZoneField._member_map_[input_args_sorting_field]
- )
-
- return mapped_args
-
- @property
- def mapped_trunks_args(self):
- """
- Map the module argument `trunks` to
- arguments dictionary for the method
- `DecortController.user_trunks`.
- """
-
- input_args = self.aparams['trunks']
- if not input_args:
- return input_args
-
- mapped_args = {}
-
- input_args_filter = input_args['filter']
- if input_args_filter:
- mapped_args.update(input_args_filter)
-
- input_args_filter_status = input_args_filter['status']
- if input_args_filter_status:
- mapped_args['status'] = (
- self.TrunkStatus(input_args_filter_status)
- )
-
- input_args_filter_vlan_ids = input_args_filter['vlan_ids']
- if input_args_filter_vlan_ids is not None:
- mapped_args['vlan_ids'] = ', '.join(
- map(str, input_args_filter_vlan_ids)
- )
-
- input_args_pagination = input_args['pagination']
- if input_args_pagination:
- mapped_args['page_number'] = input_args_pagination['number']
- mapped_args['page_size'] = input_args_pagination['size']
-
- input_args_sorting = input_args['sorting']
- if input_args_sorting:
- mapped_args['sort_by_asc'] = input_args_sorting['asc']
-
- input_args_sorting_field = input_args_sorting['field']
- if input_args_sorting_field:
- mapped_args['sort_by_field'] = (
- self.TrunksSortableField(input_args_sorting_field)
- )
-
- return mapped_args
-
- @property
- def mapped_storage_policies_args(self):
- """
- Map the module argument `storage_policies` to
- arguments dictionary for the method
- `DecortController.user_storage_policies`.
- """
-
- input_args = self.aparams['storage_policies']
- if not input_args:
- return input_args
-
- mapped_args = {}
-
- input_args_filter = input_args['filter']
- if input_args_filter:
- mapped_args.update(input_args_filter)
-
- input_args_filter_status = input_args_filter['status']
- if input_args_filter_status:
- mapped_args['status'] = (
- self.StoragePolicyStatus(input_args_filter_status)
- )
-
- input_args_pagination = input_args['pagination']
- if input_args_pagination:
- mapped_args['page_number'] = input_args_pagination['number']
- mapped_args['page_size'] = input_args_pagination['size']
-
- input_args_sorting = input_args['sorting']
- if input_args_sorting:
- mapped_args['sort_by_asc'] = input_args_sorting['asc']
-
- input_args_sorting_field = input_args_sorting['field']
- if input_args_sorting_field:
- mapped_args['sort_by_field'] = (
- self.StoragePoliciesSortableField(input_args_sorting_field)
- )
-
- return mapped_args
-
- @property
- def mapped_security_groups_args(self):
- """
- Map the module argument `security_groups` to
- arguments dictionary for the method
- `DecortController.user_security_groups`.
- """
-
- input_args = self.aparams['security_groups']
- if not input_args:
- return input_args
-
- mapped_args = {}
-
- input_args_filter = input_args['filter']
- if input_args_filter:
- mapped_args.update(input_args_filter)
-
- input_args_pagination = input_args['pagination']
- if input_args_pagination:
- mapped_args['page_number'] = input_args_pagination['number']
- mapped_args['page_size'] = input_args_pagination['size']
-
- input_args_sorting = input_args['sorting']
- if input_args_sorting:
- mapped_args['sort_by_asc'] = input_args_sorting['asc']
-
- input_args_sorting_field = input_args_sorting['field']
- if input_args_sorting_field:
- mapped_args['sort_by_field'] = (
- self.SecurityGroupSortableField[input_args_sorting_field]
- )
-
- return mapped_args
-
- def run(self):
- self.get_info()
- self.exit()
-
- def get_info(self):
- self.facts = self.user_whoami()
- self.id = self.facts['name']
-
- user_get = self.user_get(id=self.id)
- for key in ['emailaddresses', 'data']:
- self.facts[key] = user_get[key]
-
- if self.aparams['accounts']:
- self.facts['accounts'] = self.user_accounts(
- **self.mapped_accounts_args,
- )
-
- if self.aparams['resource_consumption']:
- self.facts.update(self.user_resource_consumption())
-
- if self.aparams['audits']:
- self.facts['audits'] = self.user_audits(**self.mapped_audits_args)
-
- if self.aparams['api_methods']:
- self.facts['api_methods'] = self.user_api_methods(id=self.id)
-
- search_string = self.aparams['objects_search']
- if search_string:
- self.facts['objects_search'] = self.user_objects_search(
- search_string=search_string,
- )
-
- if self.aparams['zones']:
- self.facts['zones'] = self.user_zones(**self.mapped_zones_args)
-
- if self.aparams['trunks']:
- self.facts['trunks'] = self.user_trunks(**self.mapped_trunks_args)
- for trunk_facts in self.facts['trunks']:
- trunk_facts['account_ids'] = trunk_facts.pop('accountIds')
- trunk_facts['created_timestamp'] = trunk_facts.pop(
- 'created_at'
- )
- trunk_facts['deleted_timestamp'] = trunk_facts.pop(
- 'deleted_at'
- )
- trunk_facts['updated_timestamp'] = trunk_facts.pop(
- 'updated_at'
- )
- trunk_facts['native_vlan_id'] = trunk_facts.pop(
- 'nativeVlanId'
- )
- trunk_facts['ovs_bridge'] = trunk_facts.pop('ovsBridge')
- trunk_facts['vlan_ids'] = trunk_facts.pop('trunkTags')
-
- if self.aparams['storage_policies']:
- self.facts['storage_policies'] = self.user_storage_policies(
- **self.mapped_storage_policies_args
- )
- for storage_policy_facts in self.facts['storage_policies']:
- storage_policy_facts['sep_pools'] = storage_policy_facts.pop(
- 'access_seps_pools'
- )
- storage_policy_facts['iops_limit'] = storage_policy_facts.pop(
- 'limit_iops'
- )
- storage_policy_facts['usage']['account_ids'] = (
- storage_policy_facts['usage'].pop('accounts')
- )
- storage_policy_facts['usage']['rg_ids'] = (
- storage_policy_facts['usage'].pop('resgroups')
- )
-
- if self.aparams['security_groups']:
- self.facts['security_groups'] = self.user_security_groups(
- **self.mapped_security_groups_args
- )
- for security_groups_facts in self.facts['security_groups']:
- for rule in security_groups_facts.get('rules', []):
- rule['port_range'] = {
- 'min': rule.pop('port_range_min'),
- 'max': rule.pop('port_range_max'),
- }
-
- security_groups_facts['created_timestamp'] = (
- security_groups_facts.pop('created_at')
- )
- security_groups_facts['created_timestamp_readable'] = (
- self.sec_to_dt_str(security_groups_facts[
- 'created_timestamp'
- ])
- )
- security_groups_facts['updated_timestamp'] = (
- security_groups_facts.pop('updated_at')
- )
- security_groups_facts['updated_timestamp_readable'] = (
- self.sec_to_dt_str(security_groups_facts[
- 'updated_timestamp'
- ])
- )
def main():
- DecortUserInfo().run()
+ module = AnsibleModule(
+ argument_spec=dict(
+ app_id=dict(type='raw'),
+ app_secret=dict(type='raw'),
+ authenticator=dict(type='raw'),
+ controller_url=dict(type='raw'),
+ domain=dict(type='raw'),
+ jwt=dict(type='raw'),
+ oauth2_url=dict(type='raw'),
+ password=dict(type='raw'),
+ username=dict(type='raw'),
+ verify_ssl=dict(type='raw'),
+ ignore_api_compatibility=dict(type='raw'),
+ ignore_sdk_version_check=dict(type='raw'),
+ api_methods=dict(type='raw'),
+ objects_search=dict(type='raw'),
+ resource_consumption=dict(type='raw'),
+ ),
+ supports_check_mode=True,
+ )
+
+ module.fail_json(
+ msg=(
+ 'The module "decort_user_info" has been renamed to "decort_user". '
+ 'Please update your playbook to use "decort_user" '
+ 'instead of "decort_user_info".'
+ ),
+ )
if __name__ == '__main__':
    main()
diff --git a/library/decort_vins.py b/library/decort_vins.py
index 1271617..9173cef 100644
--- a/library/decort_vins.py
+++ b/library/decort_vins.py
@@ -365,102 +365,106 @@ class decort_vins(DecortController):
# 4) if ViNS exists: check desired state, desired configuration -> initiate action(s) accordingly
# 5) report result to Ansible
-def main():
- decon = decort_vins()
- amodule = decon.amodule
- #
- # Initial validation of module arguments is complete
- #
- # At this point non-zero vins_id means that we will be managing pre-existing ViNS
- # Otherwise we are about to create a new one as follows:
- # - if validated_rg_id is non-zero, create ViNS @ RG level
- # - if validated_rg_id is zero, create ViNS @ account level
- #
- # When managing existing ViNS we need to account for both "static" and "transient"
- # status. Full range of ViNS statii is as follows:
- #
- # "MODELED", "CREATED", "ENABLED", "ENABLING", "DISABLED", "DISABLING", "DELETED", "DELETING", "DESTROYED", "DESTROYING"
- #
- # if cconfig_save is true, only config save without other updates
- vins_should_exist = False
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ amodule = self.amodule
+ #
+ # Initial validation of module arguments is complete
+ #
+ # At this point non-zero vins_id means that we will be managing pre-existing ViNS
+ # Otherwise we are about to create a new one as follows:
+ # - if validated_rg_id is non-zero, create ViNS @ RG level
+ # - if validated_rg_id is zero, create ViNS @ account level
+ #
+ # When managing existing ViNS we need to account for both "static" and "transient"
+ # status. Full range of ViNS statuses is as follows:
+ #
+ # "MODELED", "CREATED", "ENABLED", "ENABLING", "DISABLED", "DISABLING", "DELETED", "DELETING", "DESTROYED", "DESTROYING"
+ #
+ # if config_save is true, only save the config without other updates
+ vins_should_exist = False
- if decon.vins_id:
- vins_should_exist = True
- if decon.vins_facts['status'] in ["MODELED", "DISABLING", "ENABLING", "DELETING", "DESTROYING"]:
- # error: nothing can be done to existing ViNS in the listed statii regardless of
- # the requested state
- decon.result['failed'] = True
- decon.result['changed'] = False
- decon.result['msg'] = ("No change can be done for existing ViNS ID {} because of its current "
- "status '{}'").format(decon.vins_id, decon.vins_facts['status'])
- elif decon.vins_facts['status'] == "DISABLED":
- if amodule.params['state'] == 'absent':
- decon.delete()
- vins_should_exist = False
- elif amodule.params['state'] in ('present', 'disabled'):
- # update ViNS, leave in disabled state
- decon.action()
- elif amodule.params['state'] == 'enabled':
- # update ViNS and enable
- decon.action('enabled')
- elif decon.vins_facts['status'] in ["CREATED", "ENABLED"]:
- if amodule.params['state'] == 'absent':
- decon.delete()
- vins_should_exist = False
- elif amodule.params['state'] in ('present', 'enabled'):
- # update ViNS
- decon.action()
- elif amodule.params['state'] == 'disabled':
- # disable and update ViNS
- decon.action('disabled')
- elif decon.vins_facts['status'] == "DELETED":
- if amodule.params['state'] in ['present', 'enabled']:
- # restore and enable
- decon.action(restore=True)
- vins_should_exist = True
- elif amodule.params['state'] == 'absent':
- # destroy permanently
- if decon.amodule.params['permanently']:
- decon.delete()
- vins_should_exist = False
- elif amodule.params['state'] == 'disabled':
- decon.error()
- vins_should_exist = False
- elif decon.vins_facts['status'] == "DESTROYED":
- if amodule.params['state'] in ('present', 'enabled'):
- # need to re-provision ViNS;
- decon.create()
- vins_should_exist = True
- elif amodule.params['state'] == 'absent':
- decon.nop()
- vins_should_exist = False
- elif amodule.params['state'] == 'disabled':
- decon.error()
- else:
- # Preexisting ViNS was not found.
- vins_should_exist = False # we will change it back to True if ViNS is created or restored
- # If requested state is 'absent' - nothing to do
- if amodule.params['state'] == 'absent':
- decon.nop()
- elif amodule.params['state'] in ('present', 'enabled'):
- decon.check_amodule_argument('vins_name')
- # as we already have account ID and RG ID we can create ViNS and get vins_id on success
- decon.create()
+ if self.vins_id:
vins_should_exist = True
- elif amodule.params['state'] == 'disabled':
- decon.error()
- #
- # conditional switch end - complete module run
- #
- if decon.result['failed']:
- amodule.fail_json(**decon.result)
- else:
- # prepare ViNS facts to be returned as part of decon.result and then call exit_json(...)
- if decon.result['changed']:
- _, decon.vins_facts = decon.vins_find(decon.vins_id)
- decon.result['facts'] = decon.package_facts(amodule.check_mode)
- amodule.exit_json(**decon.result)
+ if self.vins_facts['status'] in ["MODELED", "DISABLING", "ENABLING", "DELETING", "DESTROYING"]:
+ # error: nothing can be done to existing ViNS in the listed statii regardless of
+ # the requested state
+ self.result['failed'] = True
+ self.result['changed'] = False
+ self.result['msg'] = ("No change can be done for existing ViNS ID {} because of its current "
+ "status '{}'").format(self.vins_id, self.vins_facts['status'])
+ elif self.vins_facts['status'] == "DISABLED":
+ if amodule.params['state'] == 'absent':
+ self.delete()
+ vins_should_exist = False
+ elif amodule.params['state'] in ('present', 'disabled'):
+ # update ViNS, leave in disabled state
+ self.action()
+ elif amodule.params['state'] == 'enabled':
+ # update ViNS and enable
+ self.action('enabled')
+ elif self.vins_facts['status'] in ["CREATED", "ENABLED"]:
+ if amodule.params['state'] == 'absent':
+ self.delete()
+ vins_should_exist = False
+ elif amodule.params['state'] in ('present', 'enabled'):
+ # update ViNS
+ self.action()
+ elif amodule.params['state'] == 'disabled':
+ # disable and update ViNS
+ self.action('disabled')
+ elif self.vins_facts['status'] == "DELETED":
+ if amodule.params['state'] in ['present', 'enabled']:
+ # restore and enable
+ self.action(restore=True)
+ vins_should_exist = True
+ elif amodule.params['state'] == 'absent':
+ # destroy permanently
+ if self.amodule.params['permanently']:
+ self.delete()
+ vins_should_exist = False
+ elif amodule.params['state'] == 'disabled':
+ self.error()
+ vins_should_exist = False
+ elif self.vins_facts['status'] == "DESTROYED":
+ if amodule.params['state'] in ('present', 'enabled'):
+ # need to re-provision ViNS;
+ self.create()
+ vins_should_exist = True
+ elif amodule.params['state'] == 'absent':
+ self.nop()
+ vins_should_exist = False
+ elif amodule.params['state'] == 'disabled':
+ self.error()
+ else:
+ # Preexisting ViNS was not found.
+ vins_should_exist = False # we will change it back to True if ViNS is created or restored
+ # If requested state is 'absent' - nothing to do
+ if amodule.params['state'] == 'absent':
+ self.nop()
+ elif amodule.params['state'] in ('present', 'enabled'):
+ self.check_amodule_argument('vins_name')
+ # as we already have account ID and RG ID we can create ViNS and get vins_id on success
+ self.create()
+ vins_should_exist = True
+ elif amodule.params['state'] == 'disabled':
+ self.error()
+ #
+ # conditional switch end - complete module run
+ #
+ if self.result['failed']:
+ amodule.fail_json(**self.result)
+ else:
+ # prepare ViNS facts to be returned as part of self.result and then call exit_json(...)
+ if self.result['changed']:
+ _, self.vins_facts = self.vins_find(self.vins_id)
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
-if __name__ == "__main__":
+def main():
+ decort_vins().run()
+
+
+if __name__ == '__main__':
main()
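The rewritten `run()` above folds the old module-level state machine into the controller class: for each current ViNS status it dispatches to delete/update/enable/disable/restore/create depending on the requested state. A minimal standalone sketch of that dispatch table — function name and the returned action labels are hypothetical; the real module calls `self.delete()`, `self.action()`, etc. instead of returning strings:

```python
# Illustrative sketch of the ViNS status/state dispatch in decort_vins.run().
# Statuses in transition cannot be acted on at all.
TRANSIENT = {"MODELED", "DISABLING", "ENABLING", "DELETING", "DESTROYING"}

def resolve_action(status: str, state: str, permanently: bool = False) -> str:
    if status in TRANSIENT:
        return "fail"                      # nothing can be done mid-transition
    if status == "DISABLED":
        if state == "absent":
            return "delete"
        if state in ("present", "disabled"):
            return "update"                # update, stay disabled
        if state == "enabled":
            return "update+enable"
    elif status in ("CREATED", "ENABLED"):
        if state == "absent":
            return "delete"
        if state in ("present", "enabled"):
            return "update"
        if state == "disabled":
            return "update+disable"
    elif status == "DELETED":
        if state in ("present", "enabled"):
            return "restore"
        if state == "absent":
            # only destroy permanently when explicitly requested
            return "destroy" if permanently else "nop"
        if state == "disabled":
            return "error"
    elif status == "DESTROYED":
        if state in ("present", "enabled"):
            return "create"                # re-provision from scratch
        if state == "absent":
            return "nop"
        if state == "disabled":
            return "error"
    return "nop"
```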
diff --git a/library/decort_vins_list.py b/library/decort_vins_list.py
new file mode 100644
index 0000000..bb5a3c5
--- /dev/null
+++ b/library/decort_vins_list.py
@@ -0,0 +1,141 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_vins_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortVINSList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ account_id=dict(
+ type='int',
+ ),
+ ext_net_ip=dict(
+ type='str',
+ ),
+ id=dict(
+ type='int',
+ ),
+ include_deleted=dict(
+ type='bool',
+ ),
+ name=dict(
+ type='str',
+ ),
+ rg_id=dict(
+ type='int',
+ ),
+ status=dict(
+ type='str',
+ choices=sdk_types.VINSStatus._member_names_,
+ ),
+ vnfdev_id=dict(
+ type='int',
+ ),
+ zone_id=dict(
+ type='int',
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=(
+ sdk_types.VINSForListAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.VINSForListAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.vins.list(
+ account_id=aparam_filter['account_id'],
+ ext_net_ip=aparam_filter['ext_net_ip'],
+ id=aparam_filter['id'],
+ include_deleted=aparam_filter['include_deleted'] or False,
+ name=aparam_filter['name'],
+ rg_id=aparam_filter['rg_id'],
+ status=(
+ sdk_types.VINSStatus[aparam_status]
+ if aparam_status else None
+ ),
+ vnfdev_id=aparam_filter['vnfdev_id'],
+ zone_id=aparam_filter['zone_id'],
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortVINSList().run()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/library/decort_vm.py b/library/decort_vm.py
new file mode 100644
index 0000000..90d546b
--- /dev/null
+++ b/library/decort_vm.py
@@ -0,0 +1,2510 @@
+#!/usr/bin/python
+import re
+from typing import Sequence, Any, TypeVar
+
+DOCUMENTATION = r'''
+---
+module: decort_vm
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.basic import env_fallback
+from ansible.module_utils.decort_utils import *
+
+
+DefaultT = TypeVar('DefaultT')
+
+
+class decort_vm(DecortController):
+ is_vm_stopped_or_will_be_stopped: None | bool = None
+ guest_agent_exec_result: None | str = None
+
+ def __init__(self):
+ # call superclass constructor first
+ super(decort_vm, self).__init__(AnsibleModule(**self.amodule_init_args))
+ arg_amodule = self.amodule
+
+ self.aparam_networks_has_dpdk = None
+
+ self.check_amodule_args()
+
+ self.comp_should_exist = False
+ # The following flag is used to avoid an extra (and unnecessary) get of compute details prior to
+ # packaging facts before the module completes.
+ self.skip_final_get = False
+ self.force_final_get = False
+ self.comp_id = 0
+ self.comp_info = None
+ self.rg_id = 0
+ self.aparam_image = None
+
+ validated_acc_id = 0
+ validated_rg_id = 0
+ validated_rg_facts = None
+
+ self.vm_to_clone_id = 0
+ self.vm_to_clone_info = None
+
+ if self.aparams['get_snapshot_merge_status']:
+ self.force_final_get = True
+
+ if arg_amodule.params['clone_from'] is not None:
+ self.vm_to_clone_id, self.vm_to_clone_info, _ = (
+ self._compute_get_by_id(
+ comp_id=self.aparams['clone_from']['id'],
+ )
+ )
+ self.rg_id = self.vm_to_clone_info['rgId']
+ if not self.vm_to_clone_id:
+ self.message(
+ f'Check for parameter "clone_from.id" failed: '
+ f'VM ID {self.aparams["clone_from"]["id"]} does not exist.'
+ )
+ self.exit(fail=True)
+ elif self.vm_to_clone_info['status'] in ('DESTROYED', 'DELETED'):
+ self.message(
+ f'Check for parameter "clone_from.id" failed: '
+ f'VM ID {self.aparams["clone_from"]["id"]} is in '
+ f'{self.vm_to_clone_info["status"]} state and '
+ f'cannot be cloned.'
+ )
+ self.exit(fail=True)
+
+ clone_id, clone_dict, _ = self.compute_find(
+ comp_name=self.aparams['name'],
+ rg_id=self.rg_id,
+ )
+ self.check_amodule_args_for_clone(
+ clone_id=clone_id,
+ clone_dict=clone_dict,
+ )
+ self.check_amodule_args_for_change()
+
+ if not clone_id:
+ clone_id = self.clone()
+ if self.amodule.check_mode:
+ self.exit()
+
+ self.comp_id, self.comp_info, self.rg_id = self._compute_get_by_id(
+ comp_id=clone_id,
+ need_custom_fields=True,
+ need_console_url=self.aparams['get_console_url'],
+ )
+
+ return
+
+ comp_id = arg_amodule.params['id']
+
+ # Analyze Compute name & ID, RG name & ID and build arguments to compute_find accordingly.
+ if arg_amodule.params['name'] == "" and comp_id == 0:
+ self.result['failed'] = True
+ self.result['changed'] = False
+ self.result['msg'] = "Cannot manage Compute when its ID is 0 and name is empty."
+ self.fail_json(**self.result)
+ # fail the module - exit
+
+ if not comp_id: # manage Compute by name -> need RG identity
+ if not arg_amodule.params['rg_id']: # RG ID is not set -> locate RG by name -> need account ID
+ validated_acc_id, self._acc_info = self.account_find(arg_amodule.params['account_name'],
+ arg_amodule.params['account_id'])
+ if not validated_acc_id:
+ self.result['failed'] = True
+ self.result['changed'] = False
+ self.result['msg'] = ("Current user does not have access to the account ID {} / "
+ "name '{}' or non-existent account specified.").format(arg_amodule.params['account_id'],
+ arg_amodule.params['account_name'])
+ self.fail_json(**self.result)
+ # fail the module -> exit
+ # now validate RG
+ validated_rg_id, validated_rg_facts = self.rg_find(validated_acc_id,
+ arg_amodule.params['rg_id'],
+ arg_amodule.params['rg_name'])
+ if not validated_rg_id:
+ self.result['failed'] = True
+ self.result['changed'] = False
+ self.result['msg'] = "Cannot find RG ID {} / name '{}'.".format(arg_amodule.params['rg_id'],
+ arg_amodule.params['rg_name'])
+ self.fail_json(**self.result)
+ # fail the module - exit
+
+ self.rg_id = validated_rg_id
+ arg_amodule.params['rg_id'] = validated_rg_id
+ arg_amodule.params['rg_name'] = validated_rg_facts['name']
+ self.acc_id = validated_rg_facts['accountId']
+
+ # at this point we are ready to locate Compute, and if anything fails now, then it must be
+ # because this Compute does not exist or something goes wrong in the upstream API
+ # We call compute_find with check_state=False as we also consider the case when a Compute
+ # specified by account / RG / compute name never existed and will be created for the first time.
+ self.comp_id, self.comp_info, self.rg_id = self.compute_find(comp_id=comp_id,
+ comp_name=arg_amodule.params['name'],
+ rg_id=validated_rg_id,
+ check_state=False,
+ need_custom_fields=True,
+ need_console_url=self.aparams['get_console_url'])
+
+ if self.comp_id:
+ self.comp_should_exist = True
+ self.acc_id = self.comp_info['accountId']
+ self.rg_id = self.comp_info['rgId']
+ self.check_amodule_args_for_change()
+ else:
+ if self.amodule.params['state'] != 'absent':
+ self.check_amodule_args_for_create()
+
+ return
+
+ def check_amodule_args(self):
+ """
+ Additional Ansible Module arguments validation that
+ cannot be implemented using Ansible Argument spec.
+ """
+
+ check_error = False
+
+ # Check parameter "networks"
+ aparam_nets = self.aparams['networks']
+ if aparam_nets:
+ net_types = {net['type'] for net in aparam_nets}
+ # DPDK and other networks
+ self.aparam_networks_has_dpdk = False
+ if self.VMNetType.DPDK.value in net_types:
+ self.aparam_networks_has_dpdk = True
+ if not net_types.issubset(
+ {self.VMNetType.DPDK.value, self.VMNetType.EMPTY.value}
+ ):
+ check_error = True
+ self.message(
+ 'Check for parameter "networks" failed:'
+ ' a compute cannot be connected to a DPDK network and'
+ ' a network of another type at the same time.'
+ )
+
+ if (
+ self.aparams['hp_backed'] is not None
+ and not self.aparams['hp_backed']
+ ):
+ check_error = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ 'hp_backed must be set to True to connect a compute '
+ 'to a DPDK network.'
+ )
+ for net in aparam_nets:
+ net_type = net['type']
+
+ if (
+ net['type'] not in (
+ self.VMNetType.SDN.value,
+ self.VMNetType.EMPTY.value,
+ )
+ and not isinstance(net['id'], int)
+ ):
+ check_error = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ 'Type of parameter "id" must be integer for '
+ f'{net["type"]} network type'
+ )
+
+ # MTU
+ net_mtu = net['mtu']
+ if net_mtu is not None:
+ mtu_net_types = (
+ self.VMNetType.DPDK.value,
+ self.VMNetType.EXTNET.value,
+ self.VMNetType.TRUNK.value,
+ )
+
+ # Allowed network types for set MTU
+ if net_type not in mtu_net_types:
+ check_error = True
+ self.message(
+ 'Check for parameter "networks" failed:'
+ ' MTU can be specified'
+ ' only for DPDK, EXTNET or TRUNK network'
+ ' (remove parameter "mtu" for network'
+ f' {net["type"]} with ID {net["id"]}).'
+ )
+
+ # Maximum MTU
+ MAX_MTU = 9216
+ if net_type in mtu_net_types and net_mtu > MAX_MTU:
+ check_error = True
+ self.message(
+ 'Check for parameter "networks" failed:'
+ f' MTU must be no more than {MAX_MTU}'
+ ' (change value for parameter "mtu" for network'
+ f' {net["type"]} with ID {net["id"]}).'
+ )
+
+ # EXTNET minimum MTU
+ EXTNET_MIN_MTU = 1500
+ if (
+ net_type == self.VMNetType.EXTNET.value
+ and net_mtu < EXTNET_MIN_MTU
+ ):
+ check_error = True
+ self.message(
+ 'Check for parameter "networks" failed:'
+ f' MTU for {self.VMNetType.EXTNET.value} network'
+ f' must be at least {EXTNET_MIN_MTU}'
+ ' (change value for parameter "mtu" for network'
+ f' {net["type"]} with ID {net["id"]}).'
+ )
+
+ # DPDK minimum MTU
+ DPDK_MIN_MTU = 1
+ if (
+ net_type == self.VMNetType.DPDK.value
+ and net_mtu < DPDK_MIN_MTU
+ ):
+ check_error = True
+ self.message(
+ 'Check for parameter "networks" failed:'
+ f' MTU for {self.VMNetType.DPDK.value} network'
+ f' must be at least {DPDK_MIN_MTU}'
+ ' (change value for parameter "mtu" for network'
+ f' {net["type"]} with ID {net["id"]}).'
+ )
+
+ # MAC address
+ if net['mac'] is not None:
+ if net['type'] == self.VMNetType.EMPTY.value:
+ check_error = True
+ self.message(
+ 'Check for parameter "networks.mac" failed: '
+ 'MAC-address cannot be specified for an '
+ 'EMPTY type network.'
+ )
+
+ mac_validation_result = re.match(
+ '[0-9a-f]{2}([:]?)[0-9a-f]{2}(\\1[0-9a-f]{2}){4}$',
+ net['mac'].lower(),
+ )
+ if not mac_validation_result:
+ check_error = True
+ self.message(
+ 'Check for parameter "networks.mac" failed: '
+ f'MAC-address for network ID {net["id"]} must be '
+ 'specified in quotes and in the format '
+ '"XX:XX:XX:XX:XX:XX".'
+ )
+ if net['net_prefix'] is not None:
+ if net_type not in [
+ self.VMNetType.DPDK.value, self.VMNetType.VFNIC.value
+ ]:
+ check_error = True
+ self.message(
+ 'Check for parameter "net_prefix" failed: '
+ 'net_prefix can be specified only for '
+ 'DPDK and VFNIC net types.'
+ )
+ if not (0 <= net['net_prefix'] <= 32):
+ check_error = True
+ self.message(
+ 'Check for parameter "net_prefix" failed: '
+ 'net_prefix must be in range [0, 32].'
+ )
+ if self.VMNetType.SDN.value in net_types:
+ if not net_types.issubset(
+ {
+ self.VMNetType.SDN.value,
+ self.VMNetType.EMPTY.value,
+ self.VMNetType.VFNIC.value,
+ }
+ ):
+ check_error = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ 'a compute connected to an SDN network can only be '
+ 'connected to VFNIC and EMPTY networks at the same time.'
+ )
+ aparam_custom_fields = self.aparams['custom_fields']
+ if aparam_custom_fields is not None:
+ if (
+ aparam_custom_fields['disable']
+ and aparam_custom_fields['fields'] is not None
+ ):
+ check_error = True
+ self.message(
+ 'Check for parameter "custom_fields" failed: '
+ '"fields" cannot be set if "disable" is True.'
+ )
+
+ aparam_pref_cpu_cores = self.aparams['preferred_cpu_cores']
+ if (
+ aparam_pref_cpu_cores
+ and len(set(aparam_pref_cpu_cores)) != len(aparam_pref_cpu_cores)
+ ):
+ check_error = True
+ self.message(
+ 'Check for parameter "preferred_cpu_cores" failed: '
+ 'the list must contain only unique elements.'
+ )
+
+ aparam_state = self.aparams['state']
+ new_state = None
+ match aparam_state:
+ case 'halted' | 'poweredoff':
+ new_state = 'stopped'
+ case 'poweredon':
+ new_state = 'started'
+
+ if new_state:
+ self.message(
+ msg=f'"{aparam_state}" state is deprecated and might be '
+ f'removed in newer versions. '
+ f'Please use "{new_state}" instead.',
+ warning=True,
+ )
+
+ if check_error:
+ self.exit(fail=True)
+
+ def nop(self):
+ """No operation (NOP) handler for Compute management by decort_vm module.
+ This function is intended to be called from the main switch construct of the module
+ when current state -> desired state change logic does not require any changes to
+ the actual Compute state.
+ """
+ self.result['failed'] = False
+ self.result['changed'] = False
+ if self.comp_id:
+ self.result['msg'] = ("No state change required for Compute ID {} because of its "
+ "current status '{}'.").format(self.comp_id, self.comp_info['status'])
+ else:
+ self.result['msg'] = ("No state change to '{}' can be done for "
+ "non-existent Compute instance.").format(self.amodule.params['state'])
+ return
+
+ def error(self):
+ """Error handler for Compute instance management by decort_vm module.
+ This function is intended to be called when an invalid state change is requested.
+ Invalid means that the current state does not allow any operations on the Compute, or the
+ transition from the current to the desired state is not technically possible.
+ """
+ self.result['failed'] = True
+ self.result['changed'] = False
+ if self.comp_id:
+ self.result['msg'] = ("Invalid target state '{}' requested for Compute ID {} in the "
+ "current status '{}'.").format(self.amodule.params['state'],
+ self.comp_id,
+ self.comp_info['status'])
+ else:
+ self.result['msg'] = ("Invalid target state '{}' requested for non-existent Compute name '{}' "
+ "in RG ID {} / name '{}'").format(self.amodule.params['state'],
+ self.amodule.params['name'],
+ self.amodule.params['rg_id'],
+ self.amodule.params['rg_name'])
+ return
+
+ def create(self):
+ """New Compute instance creation handler for decort_vm module.
+ This function checks for the presence of required parameters and deploys a new KVM VM
+ Compute instance with the specified characteristics into the target Resource Group.
+ The target RG must exist.
+ """
+ # the following parameters must be present: cpu, ram, image_id
+ # each of the following calls will abort if argument is missing
+ self.check_amodule_argument('cpu')
+ self.check_amodule_argument('ram')
+
+ aparam_boot = self.aparams['boot']
+ validated_bdisk_size = 0
+ boot_mode = 'bios'
+ loader_type = 'unknown'
+ if aparam_boot is not None:
+ validated_bdisk_size = (
+ self.amodule.params['boot']['disk_size'] or 0
+ )
+
+ if aparam_boot['mode'] is None:
+ self.message(
+ msg=self.MESSAGES.default_value_used(
+ param_name='boot.mode',
+ default_value=boot_mode
+ ),
+ warning=True,
+ )
+ else:
+ boot_mode = aparam_boot['mode']
+
+ if aparam_boot['loader_type'] is None:
+ self.message(
+ msg=self.MESSAGES.default_value_used(
+ param_name='boot.loader_type',
+ default_value=loader_type
+ ),
+ warning=True,
+ )
+
+ else:
+ loader_type = aparam_boot['loader_type']
+
+ image_id, image_facts = None, None
+ if self.aparam_image:
+ if (
+ self.check_amodule_argument('image_id', abort=False)
+ and self.amodule.params['image_id'] > 0
+ ):
+ # find image by image ID and account ID
+ # image_find(self, image_id, account_id, rg_id=0, sepid=0, pool=""):
+ image_id, image_facts = self.image_find(
+ image_id=self.amodule.params['image_id'],
+ account_id=self.acc_id)
+
+ if validated_bdisk_size <= image_facts['size']:
+ # adjust disk size to the minimum allowed by OS image, which will be used to spin off this Compute
+ validated_bdisk_size = image_facts['size']
+
+ # NOTE: due to a libvirt "feature", that impacts management of a VM created without any network interfaces,
+ # we create KVM VM in HALTED state.
+ # Consequently, if desired state is different from 'halted' or 'poweredoff', we should explicitly start it
+ # in the upstream code.
+ # See corresponding NOTE below for another place where this "feature" is redressed for.
+ #
+ # Once this "feature" is fixed, make sure VM is created according to the actual desired state
+ #
+ start_compute = False # change this once a workaround for the aforementioned libvirt "feature" is implemented
+ if self.amodule.params['state'] in ('halted', 'poweredoff', 'stopped'):
+ start_compute = False
+
+ if self.amodule.params['ssh_key'] and self.amodule.params['ssh_key_user'] and not self.amodule.params['ci_user_data']:
+ cloud_init_params = {'users': [
+ {"name": self.amodule.params['ssh_key_user'],
+ "ssh-authorized-keys": [self.amodule.params['ssh_key']],
+ "shell": '/bin/bash'}
+ ]}
+ elif self.amodule.params['ci_user_data']:
+ cloud_init_params = self.amodule.params['ci_user_data']
+ else:
+ cloud_init_params = None
+
+ cpu_pin = self.aparams['cpu_pin']
+ if cpu_pin is None:
+ cpu_pin = False
+
+ hp_backed = self.aparams['hp_backed']
+ if hp_backed is None:
+ hp_backed = False
+
+ numa_affinity = self.aparams['numa_affinity']
+ if numa_affinity is None:
+ numa_affinity = 'none'
+
+ chipset = self.amodule.params['chipset']
+ if chipset is None:
+ chipset = 'Q35'
+ self.message(
+ msg=f'Chipset not specified, '
+ f'default value "{chipset}" will be used.',
+ warning=True,
+ )
+
+ network_interface_naming = self.aparams['network_interface_naming']
+ if network_interface_naming is None:
+ network_interface_naming = 'ens'
+ self.message(
+ msg=self.MESSAGES.default_value_used(
+ param_name='network_interface_naming',
+ default_value=network_interface_naming
+ ),
+ warning=True,
+ )
+
+ hot_resize = self.aparams['hot_resize']
+ if hot_resize is None:
+ hot_resize = False
+ self.message(
+ msg=self.MESSAGES.default_value_used(
+ param_name='hot_resize',
+ default_value=hot_resize
+ ),
+ warning=True,
+ )
+ # if we get through here, all parameters required to create new Compute instance should be at hand
+
+ # NOTE: KVM VM is created in HALTED state and must be explicitly started
+ self.comp_id = self.kvmvm_provision(
+ rg_id=self.rg_id,
+ comp_name=self.amodule.params['name'],
+ cpu=self.amodule.params['cpu'],
+ ram=self.amodule.params['ram'],
+ boot_disk_size=validated_bdisk_size,
+ image_id=image_id,
+ description=self.amodule.params['description'],
+ userdata=cloud_init_params,
+ sep_id=self.amodule.params['sep_id'] if "sep_id" in self.amodule.params else None,
+ pool_name=self.amodule.params['pool'] if "pool" in self.amodule.params else None,
+ start_on_create=start_compute,
+ chipset=chipset,
+ cpu_pin=cpu_pin,
+ hp_backed=hp_backed,
+ numa_affinity=numa_affinity,
+ preferred_cpu_cores=self.amodule.params['preferred_cpu_cores'],
+ boot_mode=boot_mode,
+ boot_loader_type=loader_type,
+ network_interface_naming=network_interface_naming,
+ hot_resize=hot_resize,
+ zone_id=self.aparams['zone_id'],
+ storage_policy_id=self.aparams['storage_policy_id'],
+ os_version=self.aparams['os_version'],
+ )
+ self.comp_should_exist = True
+
+ # Originally we would have had to re-read comp_info after VM was provisioned
+ # _, self.comp_info, _ = self.compute_find(self.comp_id)
+
+ # However, to avoid an extra call to the compute/get API, we construct comp_info so that
+ # the calls to compute_networks and compute_data_disks below work properly.
+ #
+ # Here we are imitating comp_info structure as if it has been returned by a real call
+ # to API compute/get
+ self.comp_info = {
+ 'id': self.comp_id,
+ 'accountId': self.acc_id,
+ 'status': "ENABLED",
+ 'techStatus': "STOPPED",
+ 'interfaces': [], # new compute instance is created network-less
+ 'disks': [], # new compute instance is created without any data disks attached
+ 'tags': {},
+ 'affinityLabel': "",
+ 'affinityRules': [],
+ 'antiAffinityRules': [],
+ }
+
+ #
+ # Compute was created
+ #
+ # Setup network connections
+ if self.amodule.params['networks'] is not None:
+ self.compute_networks(
+ comp_dict=self.comp_info,
+ new_networks=self.amodule.params['networks'],
+ )
+ # Next manage data disks
+ if self.amodule.params['disks'] is not None:
+ self.compute_disks(
+ comp_dict=self.comp_info,
+ aparam_disks_dict=self.amodule.params['disks'],
+ )
+
+ self.compute_affinity(self.comp_info,
+ self.amodule.params['tag'],
+ self.amodule.params['aff_rule'],
+ self.amodule.params['aaff_rule'],
+ label=self.amodule.params['affinity_label'],)
+ # NOTE: see NOTE above regarding libvirt "feature" and new VMs created in HALTED state
+ if self.aparam_image:
+ if self.amodule.params['state'] in ('poweredon', 'started'):
+ self.compute_powerstate(self.comp_info, 'started')
+
+ if self.aparams['custom_fields'] is None:
+ custom_fields_disable = True
+ custom_fields_fields = None
+ else:
+ custom_fields_disable = self.aparams['custom_fields']['disable']
+ custom_fields_fields = self.aparams['custom_fields']['fields']
+ if not custom_fields_disable:
+ self.compute_set_custom_fields(
+ compute_id=self.comp_info['id'],
+ custom_fields=custom_fields_fields,
+ )
+
+ # read in Compute facts once more after all initial setup is complete
+ _, self.comp_info, _ = self.compute_find(
+ comp_id=self.comp_id,
+ need_custom_fields=True,
+ need_console_url=self.amodule.params['get_console_url'],
+ )
+
+ if self.compute_update_args:
+ self.compute_update(
+ compute_id=self.comp_info['id'],
+ **self.compute_update_args,
+ )
+ else:
+ self.skip_final_get = True
+
+ return
+
+ def destroy(self):
+ """Compute destroy handler for VM management by decort_vm module.
+ Note that this handler deletes the VM permanently together with all assigned disk resources.
+ """
+ self.compute_delete(comp_id=self.comp_id, permanently=True)
+ self.comp_id, self.comp_info, _ = self._compute_get_by_id(self.comp_id)
+ return
+
+ def restore(self):
+ """Compute restore handler for Compute instance management by decort_vm module.
+ Note that restoring Compute is only possible if it is in DELETED state. If called on a
+ Compute instance in any other state, the method will throw an error and abort the execution
+ of the module.
+ """
+ self.compute_restore(comp_id=self.comp_id)
+ # TODO - do we need updated comp_info to manage port forwards and size after VM is restored?
+ _, self.comp_info, _ = self.compute_find(
+ comp_id=self.comp_id,
+ need_custom_fields=True,
+ need_console_url=self.amodule.params['get_console_url'],
+ )
+ self.modify()
+ self.comp_should_exist = True
+ return
+
+ def modify(self, arg_wait_cycles=0):
+ """Compute modify handler for KVM VM management by decort_vm module.
+ This method is a convenience wrapper that calls individual Compute modification functions from
+ DECORT utility library (module).
+
+ Note that it does not modify power state of KVM VM.
+ """
+ if self.compute_update_args:
+ self.compute_update(
+ compute_id=self.comp_info['id'],
+ **self.compute_update_args,
+ )
+
+ if self.amodule.params['rollback_to'] is not None:
+ self.sdk_checkmode(self.api.cloudapi.compute.snapshot_rollback)(
+ vm_id=self.comp_info['id'],
+ label=self.amodule.params['rollback_to'],
+ )
+
+ if self.amodule.params['networks'] is not None:
+ self.compute_networks(
+ comp_dict=self.comp_info,
+ new_networks=self.aparams['networks'],
+ order_changing=self.aparams['network_order_changing'],
+ )
+
+ if self.amodule.params['disks'] is not None:
+ self.compute_disks(
+ comp_dict=self.comp_info,
+ aparam_disks_dict=self.amodule.params['disks'],
+ )
+
+ aparam_boot = self.amodule.params['boot']
+ if aparam_boot is not None:
+ aparam_disk_id = aparam_boot['disk_id']
+ if aparam_disk_id is not None:
+ for disk in self.comp_info['disks']:
+ if disk['id'] == aparam_disk_id and disk['type'] != 'B':
+ self.compute_boot_disk(
+ comp_id=self.comp_info['id'],
+ boot_disk=aparam_disk_id,
+ )
+ break
+
+ boot_disk_new_size = aparam_boot['disk_size']
+ if boot_disk_new_size:
+ self.compute_bootdisk_size(self.comp_info, boot_disk_new_size)
+
+ boot_order = aparam_boot['order']
+ if (
+ boot_order is not None
+ and self.comp_info['bootOrder'] != boot_order
+ ):
+ self.compute_set_boot_order(
+ vm_id=self.comp_id,
+ order=boot_order,
+ )
+
+ disk_redeploy = aparam_boot['disk_redeploy']
+ if disk_redeploy:
+ auto_start = False
+ if self.aparams['state'] is None:
+ if self.comp_info['techStatus'] == 'STARTED':
+ auto_start = True
+ else:
+ if self.aparams['state'] == 'started':
+ auto_start = True
+
+ disk_size = None
+ if (
+ aparam_boot is not None
+ and aparam_boot['disk_size'] is not None
+ ):
+ disk_size = aparam_boot['disk_size']
+ elif self.aparams['image_id'] is not None:
+ _, image_facts = self.image_find(
+ image_id=self.aparams['image_id'],
+ )
+ disk_size = image_facts['size']
+
+ os_version = None
+ if (
+ self.aparams['image_id'] is None
+ or self.aparams['image_id'] == self.comp_info['imageId']
+ ):
+ if self.aparams['os_version'] is None:
+ os_version = self.comp_info['os_version']
+ else:
+ os_version = self.aparams['os_version']
+ elif self.aparams['image_id'] != self.comp_info['imageId']:
+ os_version = self.aparams['os_version']
+
+ self.compute_disk_redeploy(
+ vm_id=self.comp_id,
+ storage_policy_id=self.aparams['storage_policy_id'],
+ image_id=self.aparams['image_id'],
+ disk_size=disk_size,
+ auto_start=auto_start,
+ os_version=os_version,
+ )
+
+ self.compute_resize(self.comp_info,
+ self.amodule.params['cpu'], self.amodule.params['ram'],
+ wait_for_state_change=arg_wait_cycles)
+
+ self.compute_affinity(self.comp_info,
+ self.amodule.params['tag'],
+ self.amodule.params['aff_rule'],
+ self.amodule.params['aaff_rule'],
+ label=self.amodule.params['affinity_label'])
+
+ aparam_custom_fields = self.amodule.params['custom_fields']
+ if aparam_custom_fields is not None:
+ compute_custom_fields = self.comp_info['custom_fields']
+ if aparam_custom_fields['disable']:
+ if compute_custom_fields is not None:
+ self.compute_disable_custom_fields(
+ compute_id=self.comp_info['id'],
+ )
+ else:
+ if compute_custom_fields != aparam_custom_fields['fields']:
+ self.compute_set_custom_fields(
+ compute_id=self.comp_info['id'],
+ custom_fields=aparam_custom_fields['fields'],
+ )
+
+ aparam_zone_id = self.aparams['zone_id']
+ if aparam_zone_id is not None and aparam_zone_id != self.comp_info['zoneId']:
+ self.compute_migrate_to_zone(
+ compute_id=self.comp_id,
+ zone_id=aparam_zone_id,
+ )
+
+ aparam_guest_agent = self.aparams['guest_agent']
+ if aparam_guest_agent is not None:
+ if aparam_guest_agent['enabled'] is not None:
+ if (
+ aparam_guest_agent['enabled']
+ and not self.comp_info['qemu_guest']['enabled']
+ ):
+ self.compute_guest_agent_enable(vm_id=self.comp_id)
+ elif (
+ aparam_guest_agent['enabled'] is False
+ and self.comp_info['qemu_guest']['enabled']
+ ):
+ self.compute_guest_agent_disable(vm_id=self.comp_id)
+
+ if aparam_guest_agent['update_available_commands']:
+ self.compute_guest_agent_feature_update(vm_id=self.comp_id)
+
+ aparam_guest_agent_exec = aparam_guest_agent['exec']
+ if aparam_guest_agent_exec is not None:
+ self.guest_agent_exec_result = (
+ self.compute_guest_agent_execute(
+ vm_id=self.comp_id,
+ cmd=aparam_guest_agent_exec['cmd'],
+ args=aparam_guest_agent_exec['args'],
+ )
+ )
+
+ aparam_cdrom = self.aparams['cdrom']
+ if aparam_cdrom is not None:
+ mode = aparam_cdrom['mode']
+ image_id = aparam_cdrom['image_id']
+ if (
+ mode == 'insert'
+ and self.comp_info['cdImageId'] != image_id
+ ):
+ self.compute_cd_insert(
+ vm_id=self.comp_id,
+ image_id=image_id,
+ )
+ elif mode == 'eject':
+ self.compute_cd_eject(
+ vm_id=self.comp_id,
+ )
+
+ if self.aparams['abort_cloning']:
+ self.compute_clone_abort(
+ vm_id=self.comp_id,
+ )
+
+ return
+
+ @property
+ def compute_update_args(self) -> dict:
+ result_args = {}
+
+ params_to_check = {
+ 'name': 'name',
+ 'chipset': 'chipset',
+ 'cpu_pin': 'cpupin',
+ 'hp_backed': 'hpBacked',
+ 'numa_affinity': 'numaAffinity',
+ 'description': 'desc',
+ 'auto_start': 'autoStart',
+ 'preferred_cpu_cores': 'preferredCpu',
+ 'boot.mode': 'bootType',
+ 'boot.loader_type': 'loaderType',
+ 'network_interface_naming': 'networkInterfaceNaming',
+ 'hot_resize': 'hotResize',
+ 'os_version': 'os_version',
+ }
+
+ def get_nested_value(
+ d: dict,
+ keys: Sequence[str],
+ default: DefaultT | None = None,
+ ) -> Any | DefaultT:
+ if not keys:
+ raise ValueError('keys must not be empty')
+
+ key = keys[0]
+ if key not in d:
+ return default
+ value = d[key]
+
+ if len(keys) > 1:
+ if isinstance(value, dict):
+ nested_d = value
+ return get_nested_value(
+ d=nested_d,
+ keys=keys[1:],
+ default=default,
+ )
+ if value is None:
+ return default
+ raise ValueError(
+ f'The key {key} found, but its value is not a dictionary.'
+ )
+ return value
+
+ for aparam_name, comp_field_name in params_to_check.items():
+ aparam_value = get_nested_value(
+ d=self.aparams,
+ keys=aparam_name.split('.'),
+ )
+ comp_value = get_nested_value(
+ d=self.comp_info,
+ keys=comp_field_name.split('.'),
+ )
+
+ if aparam_value is not None and aparam_value != comp_value:
+ # If disk_redeploy = True no need to update os_version.
+ # Updating os_version through compute_disk_redeploy
+ if (
+ aparam_name == 'os_version'
+ and self.aparams['boot'] is not None
+ and self.aparams['boot']['disk_redeploy']
+ ):
+ continue
+ result_args[aparam_name.replace('.', '_')] = (
+ aparam_value
+ )
+
+ return result_args
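For context, the dotted-path lookup that `compute_update_args` relies on can be exercised standalone. The sketch below mirrors the nested `get_nested_value` helper; the sample `params` dict and printed values are hypothetical, for illustration only:

```python
# Standalone sketch of the dotted-path lookup pattern used by
# compute_update_args: walk nested dicts key by key, returning a
# default when a key is missing or an intermediate value is None.
def get_nested_value(d, keys, default=None):
    if not keys:
        raise ValueError('keys must not be empty')
    key = keys[0]
    if key not in d:
        return default
    value = d[key]
    if len(keys) > 1:
        if isinstance(value, dict):
            return get_nested_value(value, keys[1:], default)
        if value is None:
            return default
        raise ValueError(
            f'The key {key} found, but its value is not a dictionary.'
        )
    return value

# Hypothetical module parameters, for illustration
params = {'boot': {'mode': 'uefi', 'loader_type': None}, 'name': 'vm01'}
print(get_nested_value(params, 'boot.mode'.split('.')))     # uefi
print(get_nested_value(params, 'boot.missing'.split('.')))  # None
```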
+
+ def package_facts(self, check_mode=False):
+ """Package a dictionary of KVM VM facts according to the decort_vm module specification.
+ This dictionary will be returned to the upstream Ansible engine at the completion of decort_vm
+ module run.
+
+ @param check_mode: boolean that tells if this Ansible module is run in check mode
+
+ @return: dictionary of KVM VM facts, containing sufficient information to manage the KVM VM in
+ subsequent Ansible tasks.
+ """
+
+ ret_dict = dict(id=0,
+ name="",
+ arch="",
+ cpu="",
+ ram="",
+ disk_size=0,
+ state="CHECK_MODE",
+ tech_status="",
+ account_id=0,
+ rg_id=0,
+ username="",
+ password="",
+ public_ips=[], # direct IPs; this list can be empty
+ private_ips=[], # IPs on ViNSes; usually, at least one IP is listed
+ nat_ip="", # IP of the external ViNS interface; can be empty.
+ tags={},
+ chipset="",
+ interfaces=[],
+ cpu_pin="",
+ hp_backed="",
+ numa_affinity="",
+ custom_fields={},
+ vnc_password="",
+ snapshots=[],
+ preferred_cpu_cores=[],
+ clones=[],
+ clone_reference=0,
+ )
+
+ if check_mode or self.comp_info is None:
+ # if in check mode (or void facts provided) return immediately with the default values
+ return ret_dict
+
+ # if not self.comp_should_exist:
+ # ret_dict['state'] = "ABSENT"
+ # return ret_dict
+
+ ret_dict['id'] = self.comp_info['id']
+ ret_dict['name'] = self.comp_info['name']
+ ret_dict['arch'] = self.comp_info['arch']
+ ret_dict['state'] = self.comp_info['status']
+ ret_dict['tech_status'] = self.comp_info['techStatus']
+ ret_dict['account_id'] = self.comp_info['accountId']
+ ret_dict['rg_id'] = self.comp_info['rgId']
+ if self.comp_info['tags']:
+ ret_dict['tags'] = self.comp_info['tags']
+ # if the VM is an imported VM, then the 'osUsers' list may be empty,
+ # so check for this case before trying to access login and password values
+ if len(self.comp_info['osUsers']):
+ ret_dict['username'] = self.comp_info['osUsers'][0]['login']
+ ret_dict['password'] = self.comp_info['osUsers'][0]['password']
+
+ if self.comp_info['interfaces']:
+ # We need a list of all ViNSes in the account, which owns this Compute
+ # to find a ViNS, which may have active external connection. Then
+ # we will save external IP address of that connection in ret_dict['nat_ip']
+
+ for iface in self.comp_info['interfaces']:
+ if iface['connType'] == "VXLAN": # This is ViNS connection
+ ret_dict['private_ips'].append(iface['ipAddress'])
+ # if iface['connId']
+ # Now we need to check if this ViNS has GW function and external connection.
+ # If it does - save public IP address of GW VNF in ret_dict['nat_ip']
+ elif iface['connType'] == "VLAN": # This is direct external network connection
+ ret_dict['public_ips'].append(iface['ipAddress'])
+
+ iface['security_group_mode'] = iface.pop('enable_secgroups')
+ iface['security_group_ids'] = iface.pop('security_groups')
+
+ ret_dict['cpu'] = self.comp_info['cpus']
+ ret_dict['ram'] = self.comp_info['ram']
+
+ ret_dict['image_id'] = self.comp_info['imageId']
+
+ ret_dict['disks'] = self.comp_info['disks']
+ for disk in ret_dict['disks']:
+ if disk['type'] == 'B':
+ # if it is a boot disk - store its size
+ ret_dict['disk_size'] = disk['sizeMax']
+
+ ret_dict['chipset'] = self.comp_info['chipset']
+
+ ret_dict['interfaces'] = self.comp_info['interfaces']
+
+ ret_dict['cpu_pin'] = self.comp_info['cpupin']
+ ret_dict['hp_backed'] = self.comp_info['hpBacked']
+ ret_dict['numa_affinity'] = self.comp_info['numaAffinity']
+
+ ret_dict['custom_fields'] = self.comp_info['custom_fields']
+
+ ret_dict['vnc_password'] = self.comp_info['vncPasswd']
+
+ ret_dict['auto_start'] = self.comp_info['autoStart']
+
+ ret_dict['snapshots'] = self.comp_info['snapSets']
+
+ ret_dict['preferred_cpu_cores'] = self.comp_info['preferredCpu']
+
+ if self.amodule.params['get_console_url']:
+ ret_dict['console_url'] = self.comp_info['console_url']
+
+ ret_dict['clones'] = self.comp_info['clones']
+ ret_dict['clone_reference'] = self.comp_info['cloneReference']
+
+ ret_dict['boot_mode'] = self.comp_info['bootType']
+ ret_dict['boot_loader_type'] = self.comp_info['loaderType']
+ ret_dict['network_interface_naming'] = self.comp_info[
+ 'networkInterfaceNaming'
+ ]
+ ret_dict['hot_resize'] = self.comp_info['hotResize']
+
+ ret_dict['pinned_to_node'] = self.comp_info['pinnedToNode']
+
+ ret_dict['affinity_label'] = self.comp_info['affinityLabel']
+ ret_dict['affinity_rules'] = self.comp_info['affinityRules']
+ ret_dict['anti_affinity_rules'] = self.comp_info['antiAffinityRules']
+
+ ret_dict['zone_id'] = self.comp_info['zoneId']
+
+ ret_dict['guest_agent'] = self.comp_info['qemu_guest']
+
+ if self.guest_agent_exec_result:
+ ret_dict['guest_agent']['exec_result'] = self.guest_agent_exec_result # noqa: E501
+
+ if self.amodule.params['get_snapshot_merge_status']:
+ ret_dict['snapshot_merge_status'] = (
+ self.comp_info['snapshot_merge_status']
+ )
+
+ ret_dict['cd_image_id'] = self.comp_info['cdImageId']
+
+ ret_dict['boot_order'] = self.comp_info['bootOrder']
+
+ ret_dict['os_version'] = self.comp_info['os_version']
+
+ ret_dict['boot_loader_metaiso'] = self.comp_info['loaderMetaIso']
+ if self.comp_info['loaderMetaIso'] is not None:
+ ret_dict['boot_loader_metaiso'] = {
+ 'device_name': self.comp_info['loaderMetaIso']['devicename'],
+ 'path': self.comp_info['loaderMetaIso']['path'],
+ }
+
+ if self.amodule.params['get_cloning_status']:
+ ret_dict['cloning_status'] = self.compute_get_clone_status(
+ vm_id=self.comp_id,
+ )
+
+ ret_dict['read_only'] = self.comp_info['read_only']
+
+ return ret_dict
+
+ def check_amodule_args_for_create(self):
+ check_errors = False
+ # Check for unacceptable parameters for a blank Compute
+ if self.aparams['image_id'] is not None:
+ self.aparam_image = True
+ for param in (
+ 'network_interface_naming',
+ 'hot_resize',
+ ):
+ if self.aparams[param] is not None:
+ check_errors = True
+ self.message(
+ f'Check for parameter "{param}" failed: '
+ 'parameter can be specified only for a blank VM.'
+ )
+
+ if self.aparams['boot'] is not None:
+ for param in ('mode', 'loader_type'):
+ if self.aparams['boot'][param] is not None:
+ check_errors = True
+ self.message(
+ f'Check for parameter "boot.{param}" failed: '
+ 'parameter can be specified only for a blank VM.'
+ )
+
+ else:
+ self.aparam_image = False
+ if (
+ self.aparams['state'] is not None
+ and self.aparams['state'] not in (
+ 'present',
+ 'poweredoff',
+ 'halted',
+ 'stopped',
+ )
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "state" failed: '
+ 'state for a blank Compute must be one of '
+ '"present", "poweredoff", "halted" or "stopped".'
+ )
+
+ for parameter in (
+ 'ssh_key',
+ 'ssh_key_user',
+ 'ci_user_data',
+ ):
+ if self.aparams[parameter] is not None:
+ check_errors = True
+ self.message(
+ f'Check for parameter "{parameter}" failed: '
+ f'"image_id" must be specified '
+ f'to set {parameter}.'
+ )
+
+ if (
+ self.aparams['sep_id'] is not None
+ and (
+ self.aparams['boot'] is None
+ or self.aparams['boot']['disk_size'] is None
+ )
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "sep_id" failed: '
+ '"image_id" or "boot.disk_size" '
+ 'must be specified to set sep_id.'
+ )
+
+ if self.aparams['rollback_to'] is not None:
+ check_errors = True
+ self.message(
+ 'Check for parameter "rollback_to" failed: '
+ 'rollback_to can be specified only for existing compute.'
+ )
+
+ if self.aparam_networks_has_dpdk and not self.aparams['hp_backed']:
+ check_errors = True
+ self.message(
+ 'Check for parameter "networks" failed:'
+ ' hp_backed must be set to True to connect a compute'
+ ' to a DPDK network.'
+ )
+
+ if self.check_aparam_zone_id() is False:
+ check_errors = True
+
+ if self.aparams['guest_agent'] is not None:
+ check_errors = True
+ self.message(
+ 'Check for parameter "guest_agent" failed: '
+ 'guest_agent can be specified only for existing VM.'
+ )
+
+ if self.aparams['get_snapshot_merge_status']:
+ check_errors = True
+ self.message(
+ 'Check for parameter "get_snapshot_merge_status" failed: '
+ 'snapshot merge status can be retrieved only for existing VM.'
+ )
+
+ aparam_networks = self.aparams['networks']
+ if aparam_networks is not None:
+ net_types = {net['type'] for net in aparam_networks}
+ if self.VMNetType.TRUNK.value in net_types:
+ if self.check_aparam_networks_trunk() is False:
+ check_errors = True
+
+ if self.aparams['cdrom'] is not None:
+ check_errors = True
+ self.message(
+ 'Check for parameter "cdrom" failed: '
+ 'cdrom can be specified only for existing compute.'
+ )
+
+ aparam_storage_policy_id = self.aparams['storage_policy_id']
+ if aparam_storage_policy_id is None:
+ check_errors = True
+ self.message(
+ msg='Check for parameter "storage_policy_id" failed: '
+ 'storage_policy_id must be specified when creating '
+ 'a new compute.'
+ )
+ elif (
+ aparam_storage_policy_id
+ not in self.rg_info['storage_policy_ids']
+ ):
+ check_errors = True
+ self.message(
+ msg='Check for parameter "storage_policy_id" failed: '
+ f'RG ID {self.rg_id} does not have access to '
+ f'storage_policy_id {aparam_storage_policy_id}.'
+ )
+
+ if self.aparams['abort_cloning'] is not None:
+ check_errors = True
+ self.message(
+ 'Check for parameter "abort_cloning" failed: '
+ 'abort_cloning can be specified only for existing compute.'
+ )
+
+ if check_errors:
+ self.exit(fail=True)
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ account_id=dict(
+ type='int',
+ default=0,
+ ),
+ account_name=dict(
+ type='str',
+ default='',
+ ),
+ description=dict(
+ type='str',
+ ),
+ boot=dict(
+ type='dict',
+ options=dict(
+ disk_id=dict(
+ type='int',
+ ),
+ disk_size=dict(
+ type='int',
+ ),
+ mode=dict(
+ type='str',
+ choices=[
+ 'bios',
+ 'uefi',
+ ],
+ ),
+ loader_type=dict(
+ type='str',
+ choices=[
+ 'windows',
+ 'linux',
+ 'unknown',
+ ],
+ ),
+ from_cdrom=dict(
+ type='int',
+ ),
+ order=dict(
+ type='list',
+ elements='str',
+ choices=[
+ e.value for e in self.VMBootDevice
+ ],
+ ),
+ disk_redeploy=dict(
+ type='bool',
+ ),
+ ),
+ ),
+ sep_id=dict(
+ type='int',
+ ),
+ pool=dict(
+ type='str',
+ ),
+ controller_url=dict(
+ type='str',
+ required=True,
+ ),
+ cpu=dict(
+ type='int',
+ ),
+ disks=dict(
+ type='dict',
+ options=dict(
+ mode=dict(
+ type='str',
+ choices=[
+ 'update',
+ 'detach',
+ 'delete',
+ 'match',
+ ],
+ default='update',
+ ),
+ objects=dict(
+ type='list',
+ elements='dict',
+ options=dict(
+ id=dict(
+ type='int',
+ required=True,
+ ),
+ pci_slot_num_hex=dict(
+ type='str',
+ ),
+ bus_num_hex=dict(
+ type='str',
+ ),
+ ),
+ required_together=[
+ ('pci_slot_num_hex', 'bus_num_hex'),
+ ],
+ ),
+ ),
+ ),
+ id=dict(
+ type='int',
+ default=0,
+ ),
+ image_id=dict(
+ type='int',
+ ),
+ name=dict(
+ type='str',
+ ),
+ networks=dict(
+ type='list',
+ elements='dict',
+ options=dict(
+ type=dict(
+ type='str',
+ required=True,
+ choices=[
+ 'VINS',
+ 'EXTNET',
+ 'VFNIC',
+ 'DPDK',
+ 'TRUNK',
+ 'SDN',
+ 'EMPTY',
+ ],
+ ),
+ id=dict(
+ type='raw',
+ ),
+ ip_addr=dict(
+ type='str',
+ ),
+ mtu=dict(
+ type='int',
+ ),
+ mac=dict(
+ type='str',
+ ),
+ security_group_ids=dict(
+ type='list',
+ elements='int',
+ ),
+ security_group_mode=dict(
+ type='bool',
+ ),
+ enabled=dict(
+ type='bool',
+ ),
+ net_prefix=dict(
+ type='int',
+ ),
+ ),
+ required_if=[
+ ('type', 'VINS', ('id',)),
+ ('type', 'EXTNET', ('id',)),
+ ('type', 'VFNIC', ('id',)),
+ ('type', 'DPDK', ('id',)),
+ ('type', 'TRUNK', ('id',)),
+ ('type', 'SDN', ('id',)),
+ ],
+ ),
+ network_order_changing=dict(
+ type='bool',
+ default=False,
+ ),
+ ram=dict(
+ type='int',
+ ),
+ rg_id=dict(
+ type='int',
+ default=0,
+ ),
+ rg_name=dict(
+ type='str',
+ default='',
+ ),
+ ssh_key=dict(
+ type='str',
+ ),
+ ssh_key_user=dict(
+ type='str',
+ ),
+ tag=dict(
+ type='dict',
+ ),
+ affinity_label=dict(
+ type='str',
+ ),
+ aff_rule=dict(
+ type='list',
+ ),
+ aaff_rule=dict(
+ type='list',
+ ),
+ ci_user_data=dict(
+ type='dict',
+ ),
+ state=dict(
+ type='str',
+ choices=[
+ 'absent',
+ 'paused',
+ 'poweredoff',
+ 'halted',
+ 'poweredon',
+ 'stopped',
+ 'started',
+ 'present',
+ ],
+ ),
+ tags=dict(
+ type='str',
+ ),
+ chipset=dict(
+ type='str',
+ choices=[
+ 'Q35',
+ 'i440fx',
+ ]
+ ),
+ cpu_pin=dict(
+ type='bool',
+ ),
+ hp_backed=dict(
+ type='bool',
+ ),
+ numa_affinity=dict(
+ type='str',
+ choices=[
+ 'strict',
+ 'loose',
+ 'none',
+ ],
+ ),
+ custom_fields=dict(
+ type='dict',
+ options=dict(
+ fields=dict(
+ type='dict',
+ ),
+ disable=dict(
+ type='bool',
+ ),
+ ),
+ ),
+ auto_start=dict(
+ type='bool',
+ ),
+ rollback_to=dict(
+ type='str',
+ ),
+ preferred_cpu_cores=dict(
+ type='list',
+ elements='int',
+ ),
+ get_console_url=dict(
+ type='bool',
+ default=False,
+ ),
+ clone_from=dict(
+ type='dict',
+ options=dict(
+ id=dict(
+ type='int',
+ required=True,
+ ),
+ force=dict(
+ type='bool',
+ default=False,
+ ),
+ snapshot=dict(
+ type='dict',
+ options=dict(
+ name=dict(
+ type='str',
+ ),
+ timestamp=dict(
+ type='int',
+ ),
+ datetime=dict(
+ type='str',
+ ),
+ ),
+ mutually_exclusive=[
+ ('name', 'timestamp', 'datetime'),
+ ],
+ ),
+ sep_pool_name=dict(
+ type='str',
+ ),
+ sep_id=dict(
+ type='int',
+ ),
+ storage_policy_id=dict(
+ type='int',
+ required=True,
+ ),
+ ),
+ ),
+ network_interface_naming=dict(
+ type='str',
+ choices=[
+ 'ens',
+ 'eth',
+ ],
+ ),
+ hot_resize=dict(
+ type='bool',
+ ),
+ zone_id=dict(
+ type='int',
+ ),
+ guest_agent=dict(
+ type='dict',
+ options=dict(
+ enabled=dict(
+ type='bool',
+ ),
+ exec=dict(
+ type='dict',
+ options=dict(
+ cmd=dict(
+ type='str',
+ required=True,
+ ),
+ args=dict(
+ type='dict',
+ default={},
+ ),
+ ),
+ ),
+ update_available_commands=dict(
+ type='bool',
+ ),
+ ),
+ ),
+ get_snapshot_merge_status=dict(
+ type='bool',
+ ),
+ cdrom=dict(
+ type='dict',
+ options=dict(
+ mode=dict(
+ type='str',
+ choices=[
+ 'insert',
+ 'eject',
+ ],
+ default='insert',
+ ),
+ image_id=dict(
+ type='int',
+ ),
+ ),
+ ),
+ storage_policy_id=dict(
+ type='int',
+ ),
+ os_version=dict(
+ type='str',
+ ),
+ get_cloning_status=dict(
+ type='bool',
+ ),
+ abort_cloning=dict(
+ type='bool',
+ ),
+ ),
+ supports_check_mode=True,
+ required_one_of=[
+ ('id', 'name'),
+ ],
+ required_by={
+ 'clone_from': 'name',
+ },
+ )
+
+ def check_amodule_args_for_change(self):
+ check_errors = False
+
+ comp_info = self.vm_to_clone_info or self.comp_info
+ comp_id = comp_info['id']
+
+ self.is_vm_stopped_or_will_be_stopped = (
+ (
+ comp_info['techStatus'] == 'STOPPED'
+ and (
+ self.amodule.params['state'] is None
+ or self.amodule.params['state'] in (
+ 'halted', 'poweredoff', 'present', 'stopped',
+ )
+ )
+ )
+ or (
+ comp_info['techStatus'] != 'STOPPED'
+ and self.amodule.params['state'] in (
+ 'halted', 'poweredoff', 'stopped',
+ )
+ )
+ )
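The stopped-state predicate above can be sketched as a pure function; this is a hypothetical standalone helper for illustration, not part of the module:

```python
# Mirrors the is_vm_stopped_or_will_be_stopped logic: a VM counts as
# "stopped or will be stopped" if it is already STOPPED and the
# requested state keeps it stopped (or no state was requested), or if
# it is running and the requested state will stop it.
STOP_STATES = ('halted', 'poweredoff', 'stopped')

def vm_stopped_or_will_be(tech_status, requested_state):
    if tech_status == 'STOPPED':
        return (
            requested_state is None
            or requested_state in STOP_STATES + ('present',)
        )
    return requested_state in STOP_STATES

print(vm_stopped_or_will_be('STOPPED', None))      # True
print(vm_stopped_or_will_be('STARTED', 'stopped')) # True
print(vm_stopped_or_will_be('STOPPED', 'started')) # False
```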
+
+ aparam_boot = self.amodule.params['boot']
+ if aparam_boot is not None:
+ aparam_disks = self.amodule.params['disks']
+ aparam_boot_disk_id = aparam_boot['disk_id']
+ comp_disk_ids = [disk['id'] for disk in self.comp_info['disks']]
+ if aparam_disks is None:
+ if (
+ aparam_boot_disk_id is not None
+ and aparam_boot_disk_id not in comp_disk_ids
+ ):
+ check_errors = True
+ self.message(
+ f'Check for parameter "boot.disk_id" failed: '
+ f'disk {aparam_boot_disk_id} is not attached to '
+ f'Compute ID {self.comp_id}.'
+ )
+ else:
+ match aparam_disks['mode']:
+ case 'update':
+ if (
+ aparam_boot_disk_id not in comp_disk_ids
+ and aparam_boot_disk_id not in aparam_disks['ids']
+ ):
+ check_errors = True
+ self.message(
+ f'Check for parameter "boot.disk_id" failed: '
+ f'disk {aparam_boot_disk_id} is not attached '
+ f'to Compute ID {self.comp_id}.'
+ )
+ case 'match':
+ if aparam_boot_disk_id not in aparam_disks['ids']:
+ check_errors = True
+ self.message(
+ f'Check for parameter "boot.disk_id" failed: '
+ f'disk {aparam_boot_disk_id} is not in '
+ f'disks.ids'
+ )
+ case 'detach' | 'delete':
+ if aparam_boot_disk_id in aparam_disks['ids']:
+ check_errors = True
+ self.message(
+ f'Check for parameter "boot.disk_id" failed: '
+ f'disk {aparam_boot_disk_id} cannot be set as boot '
+ f'disk because it is listed for detaching or deletion.'
+ )
+ elif aparam_boot_disk_id not in comp_disk_ids:
+ check_errors = True
+ self.message(
+ f'Check for parameter "boot.disk_id" failed: '
+ f'disk {aparam_boot_disk_id} is not attached '
+ f'to Compute ID {self.comp_id}.'
+ )
+
+ if self.check_aparam_boot_disk_redeploy() is False:
+ check_errors = True
+
+ new_boot_disk_size = aparam_boot['disk_size']
+ if new_boot_disk_size is not None:
+ boot_disk_size = 0
+ for disk in self.comp_info['disks']:
+ if disk['type'] == 'B':
+ boot_disk_size = disk['sizeMax']
+ break
+ else:
+ if aparam_boot is None or aparam_boot['disk_id'] is None:
+ check_errors = True
+ self.message(
+ f'Can\'t set boot disk size for Compute '
+ f'{comp_id}, because it doesn\'t '
+ f'have a boot disk.'
+ )
+
+ if new_boot_disk_size < boot_disk_size:
+ check_errors = True
+ self.message(
+ f'New boot disk size {new_boot_disk_size} is less'
+ f' than current {boot_disk_size} for Compute ID '
+ f'{comp_id}'
+ )
+
+ cd_rom_image_id = aparam_boot['from_cdrom']
+ if cd_rom_image_id is not None:
+ if not (
+ self.comp_info['techStatus'] == 'STOPPED'
+ and self.aparams['state'] == 'started'
+ ):
+ check_errors = True
+ self.message(
+ f'Check for parameter "boot.from_cdrom" failed: '
+ f'VM ID {self.comp_id} must be stopped and "state" '
+ 'must be "started" to boot from CD-ROM.'
+ )
+ _, image_info = self._image_get_by_id(
+ image_id=cd_rom_image_id,
+ )
+ if image_info is None:
+ check_errors = True
+ self.message(
+ 'Check for parameter "boot.from_cdrom" failed: '
+ f'Image ID {cd_rom_image_id} not found.'
+ )
+ elif image_info['type'] != 'cdrom':
+ check_errors = True
+ self.message(
+ 'Check for parameter "boot.from_cdrom" failed: '
+ f'Image ID {cd_rom_image_id} is not a cd-rom type.'
+ )
+
+ boot_order_list = aparam_boot['order']
+ if boot_order_list is not None:
+ boot_order_duplicates = {
+ boot_dev for boot_dev in boot_order_list
+ if boot_order_list.count(boot_dev) > 1
+ }
+ if boot_order_duplicates:
+ check_errors = True
+ self.message(
+ 'Check for parameter "boot.order" failed: '
+ 'List of boot devices has duplicates: '
+ f'{boot_order_duplicates}.'
+ )
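The duplicate detection above (repeated `list.count` calls, quadratic in list length) can also be done in a single pass with `collections.Counter`; a small sketch, with a hypothetical helper name:

```python
from collections import Counter

# Single-pass equivalent of the boot-order duplicate check: collect
# every item that occurs more than once in the list.
def find_duplicates(items):
    return {item for item, count in Counter(items).items() if count > 1}

print(find_duplicates(['hd', 'cdrom', 'hd']))  # {'hd'}
print(find_duplicates(['hd', 'cdrom']))        # set()
```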
+
+ if (
+ not comp_info['imageId']
+ and self.amodule.params['state'] in (
+ 'poweredon', 'paused', 'started',
+ )
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "state" failed: '
+ 'state for a blank Compute cannot be "started" or "paused".'
+ )
+
+ if self.amodule.params['rollback_to'] is not None:
+ if not self.is_vm_stopped_or_will_be_stopped:
+ check_errors = True
+ self.message(
+ 'Check for parameter "rollback_to" failed: '
+ 'VM must be stopped to rollback.'
+ )
+
+ vm_snapshot_labels = [
+ snapshot['label'] for snapshot in comp_info['snapSets']
+ ]
+ if self.amodule.params['rollback_to'] not in vm_snapshot_labels:
+ check_errors = True
+ self.message(
+ f'Check for parameter "rollback_to" failed: '
+ f'snapshot with label '
+ f'{self.amodule.params["rollback_to"]} does not exist '
+ f'for VM ID {comp_id}.'
+ )
+
+ params_to_check = {
+ 'chipset': 'chipset',
+ 'cpu_pin': 'cpupin',
+ 'hp_backed': 'hpBacked',
+ 'numa_affinity': 'numaAffinity',
+ 'hot_resize': 'hotResize',
+ }
+ for param_name, comp_field_name in params_to_check.items():
+ if (
+ self.aparams[param_name] is not None
+ and comp_info[comp_field_name] != self.aparams[param_name]
+ and not self.is_vm_stopped_or_will_be_stopped
+ ):
+ check_errors = True
+ self.message(
+ f'Check for parameter "{param_name}" failed: '
+ f'VM must be stopped to change {param_name}.'
+ )
+
+ if self.aparams['preferred_cpu_cores'] is not None:
+ if not self.is_vm_stopped_or_will_be_stopped:
+ check_errors = True
+ self.message(
+ 'Check for parameter "preferred_cpu_cores" failed: '
+ 'VM must be stopped to change preferred_cpu_cores.'
+ )
+
+ if (
+ self.aparam_networks_has_dpdk
+ and not comp_info['hpBacked']
+ and not self.aparams['hp_backed']
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ 'hp_backed must be set to True to connect a compute '
+ 'to a DPDK network.'
+ )
+
+ is_vm_started_or_will_be_started = (
+ (
+ comp_info['techStatus'] == 'STARTED'
+ and (
+ self.amodule.params['state'] is None
+ or self.amodule.params['state'] in (
+ 'poweredon', 'present', 'started',
+ )
+ )
+ )
+ or (
+ comp_info['techStatus'] != 'STARTED'
+ and self.amodule.params['state'] in ('poweredon', 'started')
+ )
+ )
+
+ if self.amodule.params['get_console_url']:
+ if not is_vm_started_or_will_be_started:
+ check_errors = True
+ self.message(
+ 'Check for parameter "get_console_url" failed: '
+ 'VM must be started to get console url.'
+ )
+
+ aparam_disks_dict = self.aparams['disks']
+ if aparam_disks_dict is not None:
+ aparam_disks = aparam_disks_dict.get('objects', [])
+ aparam_disks_ids = [disk['id'] for disk in aparam_disks]
+ comp_boot_disk_id = None
+ for comp_disk in self.comp_info['disks']:
+ if comp_disk['type'] == 'B':
+ comp_boot_disk_id = comp_disk['id']
+ break
+ disks_to_detach = []
+ match aparam_disks_dict['mode']:
+ case 'detach' | 'delete':
+ disks_to_detach = aparam_disks_ids
+ case 'match':
+ comp_disk_ids = {
+ disk['id'] for disk in self.comp_info['disks']
+ }
+ disks_to_detach = comp_disk_ids - set(aparam_disks_ids)
+ if (
+ comp_boot_disk_id is not None
+ and comp_boot_disk_id in disks_to_detach
+ and not self.is_vm_stopped_or_will_be_stopped
+ ):
+ check_errors = True
+ self.message(
+ f'Check for parameter "disks" failed: '
+ f'VM ID {comp_id} must be stopped to detach '
+ f'boot disk ID {comp_boot_disk_id}.'
+ )
+ if self.comp_info['snapSets'] and disks_to_detach:
+ check_errors = True
+ self.message(
+ f'Check for parameter "disks" failed: '
+ f'cannot detach disks {disks_to_detach} from '
+ f'Compute ID {self.comp_id} while snapshots exist.'
+ )
+
+ if aparam_disks_dict['mode'] in ('delete', 'detach'):
+ for disk in aparam_disks:
+ for param, value in disk.items():
+ if param != 'id' and value is not None:
+ check_errors = True
+ self.message(
+ msg='Check for parameter "disks.objects" '
+ 'failed: only disk id can be specified if '
+ 'disks.mode is "delete" or "detach".'
+ )
+ break
+
+ if (
+ (
+ self.aparams['cpu'] is not None
+ and self.aparams['cpu'] != comp_info['cpus']
+ ) or (
+ self.aparams['ram'] is not None
+ and self.aparams['ram'] != comp_info['ram']
+ )
+ ) and not (self.aparams['hot_resize'] or comp_info['hotResize']):
+ check_errors = True
+ self.message(
+ 'Check for parameters "cpu" and "ram" failed: '
+ 'Hot resize must be enabled to change CPU or RAM.'
+ )
+
+ if self.check_aparam_zone_id() is False:
+ check_errors = True
+
+ if self.check_aparam_guest_agent() is False:
+ check_errors = True
+
+ if self.check_aparam_get_snapshot_merge_status() is False:
+ check_errors = True
+
+ aparam_networks = self.aparams['networks']
+ if aparam_networks is not None:
+ vm_networks = self.comp_info['interfaces']
+ if (
+ not vm_networks
+ and not self.is_vm_stopped_or_will_be_stopped
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ 'VM must be stopped before attaching its first network.'
+ )
+ vm_networks_ids = [
+ network['netId'] for network in vm_networks
+ if network['type'] != self.VMNetType.EMPTY.value
+ ]
+ aparam_networks_ids = [
+ network['id'] for network in aparam_networks
+ if network['type'] != self.VMNetType.EMPTY.value
+ ]
+ new_networks = list(
+ set(aparam_networks_ids) - set(vm_networks_ids)
+ )
+ net_types = {net['type'] for net in aparam_networks}
+ if new_networks:
+ if not (
+ len(new_networks) == 1
+ and self.VMNetType.DPDK.value in net_types
+ ) and not self.is_vm_stopped_or_will_be_stopped:
+ check_errors = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ 'VM must be stopped to attach a non-DPDK network.'
+ )
+
+ if self.VMNetType.TRUNK.value in net_types:
+ if self.check_aparam_networks_trunk() is False:
+ check_errors = True
+
+ for network in aparam_networks:
+ if (
+ network['enabled'] is not None
+ and network['type'] not in [
+ self.VMNetType.VINS.value,
+ self.VMNetType.EXTNET.value,
+ self.VMNetType.DPDK.value,
+ self.VMNetType.SDN.value,
+ self.VMNetType.TRUNK.value,
+ ]
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "networks.enabled" failed: '
+ 'Cannot enable or disable network '
+ f'ID {network["id"]} and type {network["type"]}. '
+ 'Only networks of type VINS, EXTNET, DPDK, SDN, TRUNK '
+ 'can be enabled or disabled.'
+ )
+
+ if self.check_aparam_cdrom() is False:
+ check_errors = True
+
+ if self.check_aparam_storage_policy_id() is False:
+ check_errors = True
+
+ if self.check_aparam_image_id() is False:
+ check_errors = True
+
+ if check_errors:
+ self.exit(fail=True)
+
+ def check_amodule_args_for_clone(self, clone_id: int, clone_dict: dict):
+ check_errors = False
+ aparam_clone_from = self.aparams['clone_from']
+
+ if (
+ clone_id
+ and clone_dict['cloneReference'] != self.vm_to_clone_id
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "name" failed: '
+ f'VM with name {self.aparams["name"]} '
+ f'already exists.'
+ )
+ if (
+ self.vm_to_clone_info['techStatus'] == 'STARTED'
+ and not aparam_clone_from['force']
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "clone_from.force" failed: '
+ 'VM must be stopped or parameter "force" must be True '
+ 'to clone it.'
+ )
+
+ aparam_snapshot = aparam_clone_from['snapshot']
+ snapshot_timestamps = [
+ snapshot['timestamp']
+ for snapshot in self.vm_to_clone_info['snapSets']
+ ]
+ if aparam_snapshot is not None:
+ if (
+ aparam_snapshot['name'] is not None
+ and aparam_snapshot['name'] not in (
+ snapshot['label']
+ for snapshot in self.vm_to_clone_info['snapSets']
+ )
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "clone_from.snapshot.name" '
+ 'failed: snapshot with name '
+ f'{aparam_snapshot["name"]} does not exist for VM ID '
+ f'{self.vm_to_clone_id}.'
+ )
+ if (
+ aparam_snapshot['timestamp'] is not None
+ and aparam_snapshot['timestamp'] not in snapshot_timestamps
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "clone_from.snapshot.timestamp" '
+ 'failed: snapshot with timestamp '
+ f'{aparam_snapshot["timestamp"]} does not exist for '
+ f'VM ID {self.vm_to_clone_id}.'
+ )
+ if aparam_snapshot['datetime'] is not None:
+ timestamp_from_dt_str = self.dt_str_to_sec(
+ dt_str=aparam_snapshot['datetime']
+ )
+ if timestamp_from_dt_str not in snapshot_timestamps:
+ check_errors = True
+ self.message(
+ 'Check for parameter "clone_from.snapshot.datetime" '
+ 'failed: snapshot with datetime '
+ f'{aparam_snapshot["datetime"]} does not exist for '
+ f'VM ID {self.vm_to_clone_id}.'
+ )
+
+ if check_errors:
+ self.exit(fail=True)
+
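The clone-from-snapshot check above compares a user-supplied datetime string against snapshot epoch timestamps via a `dt_str_to_sec` helper. A minimal sketch of such a conversion follows; the accepted format string is an assumption for illustration, not necessarily the module's actual one:

```python
from datetime import datetime

def dt_str_to_sec(dt_str: str) -> int:
    # Parse a local 'YYYY-MM-DD HH:MM:SS' string and return Unix seconds,
    # so the result can be compared against snapshot 'timestamp' values.
    return int(datetime.strptime(dt_str, '%Y-%m-%d %H:%M:%S').timestamp())
```

A quick round-trip (`datetime.fromtimestamp` back to the same wall-clock string) is an easy way to sanity-check such a helper in local time.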
+ def clone(self):
+ clone_from_snapshot = self.aparams['clone_from']['snapshot']
+ snapshot_timestamp, snapshot_name, snapshot_datetime = None, None, None
+ if clone_from_snapshot:
+ snapshot_timestamp = clone_from_snapshot['timestamp']
+ snapshot_name = clone_from_snapshot['name']
+ snapshot_datetime = clone_from_snapshot['datetime']
+ clone_id = self.compute_clone(
+ compute_id=self.vm_to_clone_id,
+ name=self.aparams['name'],
+ force=self.aparams['clone_from']['force'],
+ snapshot_timestamp=snapshot_timestamp,
+ snapshot_name=snapshot_name,
+ snapshot_datetime=snapshot_datetime,
+ sep_pool_name=self.aparams['clone_from']['sep_pool_name'],
+ sep_id=self.aparams['clone_from']['sep_id'],
+ storage_policy_id=self.aparams['clone_from']['storage_policy_id'],
+ )
+ return clone_id
+
+ def check_aparam_guest_agent(self) -> bool:
+ check_errors = False
+ aparam_guest_agent = self.aparams['guest_agent']
+ if aparam_guest_agent:
+ if self.is_vm_stopped_or_will_be_stopped:
+ if aparam_guest_agent['update_available_commands']:
+ check_errors = True
+ self.message(
+ 'Check for parameter '
+ '"guest_agent.update_available_commands" failed: '
+ f'VM ID {self.comp_id} must be started to update '
+ 'available commands.'
+ )
+
+ is_guest_agent_enabled_or_will_be_enabled = (
+ (
+ self.comp_info['qemu_guest']['enabled']
+ and aparam_guest_agent['enabled'] is not False
+ )
+ or (
+ self.comp_info['qemu_guest']['enabled'] is False
+ and aparam_guest_agent['enabled']
+ )
+ )
+
+ aparam_guest_agent_exec = aparam_guest_agent['exec']
+ if aparam_guest_agent_exec is not None:
+ if self.is_vm_stopped_or_will_be_stopped:
+ check_errors = True
+ self.message(
+ 'Check for parameter "guest_agent.exec" failed: '
+ f'VM ID {self.comp_id} must be started '
+ 'to execute commands.'
+ )
+
+ if not is_guest_agent_enabled_or_will_be_enabled:
+ check_errors = True
+ self.message(
+ 'Check for parameter "guest_agent.exec" failed: '
+ f'Guest agent for VM ID {self.comp_id} must be enabled'
+ ' to execute commands.'
+ )
+
+ aparam_exec_cmd = aparam_guest_agent_exec['cmd']
+ available_commands = (
+ self.comp_info['qemu_guest']['enabled_agent_features']
+ )
+ if aparam_exec_cmd not in available_commands:
+ check_errors = True
+ self.message(
+ 'Check for parameter "guest_agent.exec.cmd" failed: '
+ f'Command "{aparam_exec_cmd}" is not '
+ f'available for VM ID {self.comp_id}.'
+ )
+
+ return not check_errors
+
+ def check_aparam_get_snapshot_merge_status(self) -> bool | None:
+ check_errors = False
+ if self.aparams['get_snapshot_merge_status']:
+ vm_has_shared_sep_disk = False
+ vm_disk_ids = [disk['id'] for disk in self.comp_info['disks']]
+ for disk_id in vm_disk_ids:
+ _, disk_info = self._disk_get_by_id(disk_id=disk_id)
+ if disk_info['sepType'] == 'SHARED':
+ vm_has_shared_sep_disk = True
+ break
+
+ if not vm_has_shared_sep_disk:
+ check_errors = True
+ self.message(
+ 'Check for parameter "get_snapshot_merge_status" failed: '
+ f'VM ID {self.comp_id} must have at least one disk with '
+ 'SEP type SHARED to retrieve snapshot merge status.'
+ )
+
+ return not check_errors
+
+ def check_aparam_cdrom(self) -> bool | None:
+ check_errors = False
+ aparam_cdrom = self.aparams['cdrom']
+ if aparam_cdrom is not None:
+ mode = aparam_cdrom['mode']
+ if self.is_vm_stopped_or_will_be_stopped:
+ check_errors = True
+ self.message(
+ 'Check for parameter "cdrom" failed: '
+ f'VM ID {self.comp_id} must be started to {mode} '
+ f'CD-ROM.'
+ )
+ image_id = aparam_cdrom['image_id']
+ match mode:
+ case 'insert':
+ if image_id is None:
+ check_errors = True
+ self.message(
+ 'Check for parameter "cdrom.image_id" failed: '
+ 'cdrom.image_id must be specified '
+ 'if cdrom.mode is "insert".'
+ )
+ else:
+ _, image_info = self._image_get_by_id(
+ image_id=image_id,
+ )
+ if image_info is None:
+ check_errors = True
+ self.message(
+ 'Check for parameter "cdrom.image_id" failed: '
+ f'Image ID {image_id} not found.'
+ )
+ elif image_info['type'] != 'cdrom':
+ check_errors = True
+ self.message(
+ 'Check for parameter "cdrom.image_id" failed: '
+ f'Image ID {image_id} is not a CD-ROM type.'
+ )
+ case 'eject':
+ if image_id is not None:
+ check_errors = True
+ self.message(
+ 'Check for parameter "cdrom.image_id" failed: '
+ f'cdrom.image_id must not be specified '
+ f'if cdrom.mode is "eject".'
+ )
+ if not self.comp_info['cdImageId']:
+ check_errors = True
+ self.message(
+ 'Check for parameter "cdrom.mode" failed: '
+ f'VM ID {self.comp_id} does not have CD-ROM '
+ 'to eject.'
+ )
+
+ return not check_errors
+
+ def check_aparam_storage_policy_id(self) -> bool:
+ check_errors = False
+
+ aparam_storage_policy_id = self.aparams['storage_policy_id']
+ if aparam_storage_policy_id is not None:
+ for disk in self.comp_info['disks']:
+ if aparam_storage_policy_id != disk['storage_policy_id']:
+ check_errors = True
+ self.message(
+ msg='Check for parameter "storage_policy_id" failed: '
+ 'storage_policy_id cannot be changed for compute '
+ f'ID {self.comp_id} disk ID {disk["id"]}'
+ )
+
+ return not check_errors
+
+ def check_aparam_boot_disk_redeploy(self) -> bool:
+ check_errors = False
+
+ disk_redeploy = self.aparams['boot']['disk_redeploy']
+ if disk_redeploy:
+ vm_has_boot_disk = False
+ for disk in self.comp_info['disks']:
+ if disk['type'] == 'B':
+ vm_has_boot_disk = True
+ break
+ if not vm_has_boot_disk:
+ check_errors = True
+ self.message(
+ 'Check for parameter "boot.disk_redeploy" failed: '
+ 'VM does not have boot disk to redeploy.'
+ )
+
+ aparam_disks = self.amodule.params['disks']
+ if aparam_disks is not None and aparam_disks['mode'] == 'match':
+ check_errors = True
+ self.message(
+ 'Check for parameter "disks.mode" failed: '
+ '"disks.mode" must not be "match" to redeploy.'
+ )
+
+ return not check_errors
+
+ def check_aparam_image_id(self) -> bool:
+ check_errors = False
+
+ aparam_image_id = self.aparams['image_id']
+ if aparam_image_id is not None:
+ if aparam_image_id != self.comp_info['imageId']:
+ if (
+ self.aparams['boot'] is None
+ or self.aparams['boot']['disk_redeploy'] is not True
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "image_id" failed: '
+ '"boot.disk_redeploy" must be set to True to change '
+ 'VM image.'
+ )
+
+ return not check_errors
+
+ def find_networks_tags_intersections(
+ self,
+ trunk_networks: list,
+ extnet_networks: list,
+ ) -> bool:
+ has_intersections = False
+
+ def parse_trunk_tags(trunk_tags_string: str):
+ trunk_tags = set()
+ for part in trunk_tags_string.split(','):
+ if '-' in part:
+ start, end = part.split('-')
+ trunk_tags.update(range(int(start), int(end) + 1))
+ else:
+ trunk_tags.add(int(part))
+ return trunk_tags
+
+ trunk_tags_dicts = []
+ for trunk_network in trunk_networks:
+ trunk_tags_dicts.append({
+ 'id': trunk_network.id,
+ 'tags_str': trunk_network.vlan_ids,
+ 'tags': parse_trunk_tags(
+ trunk_tags_string=trunk_network.vlan_ids
+ ),
+ 'native_vlan_id': trunk_network.native_vlan_id,
+ })
+
+ # find trunk tag intersections with other networks
+ for i in range(len(trunk_tags_dicts)):
+ for j in range(i + 1, len(trunk_tags_dicts)):
+ intersection = (
+ trunk_tags_dicts[i]['tags']
+ & trunk_tags_dicts[j]['tags']
+ )
+ if intersection:
+ has_intersections = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ f'Trunk tags {trunk_tags_dicts[i]["tags_str"]} '
+ f'of trunk ID {trunk_tags_dicts[i]["id"]} '
+ 'overlap with trunk tags '
+ f'{trunk_tags_dicts[j]["tags_str"]} of trunk ID '
+ f'{trunk_tags_dicts[j]["id"]}'
+ )
+ for extnet in extnet_networks:
+ if extnet['vlanId'] in trunk_tags_dicts[i]['tags']:
+ has_intersections = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ f'Trunk tags {trunk_tags_dicts[i]["tags_str"]} '
+ f'of trunk ID {trunk_tags_dicts[i]["id"]} '
+ f'overlap with tag {extnet["vlanId"]} of extnet ID '
+ f'{extnet["id"]}'
+ )
+ if extnet['vlanId'] == trunk_tags_dicts[i]['native_vlan_id']:
+ has_intersections = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ f'Trunk native vlan ID '
+ f'{trunk_tags_dicts[i]["native_vlan_id"]} of trunk ID '
+ f'{trunk_tags_dicts[i]["id"]} '
+ f'overlaps with vlan ID {extnet["vlanId"]} of extnet '
+ f'ID {extnet["id"]}'
+ )
+
+ return has_intersections
+
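The VLAN-tag parsing and overlap detection above can be exercised standalone. This sketch reuses the same `parse_trunk_tags` logic; the input strings are purely illustrative:

```python
def parse_trunk_tags(trunk_tags_string: str) -> set[int]:
    # Expand a comma-separated VLAN spec such as '100-102,200'
    # into the explicit set of tag numbers.
    trunk_tags: set[int] = set()
    for part in trunk_tags_string.split(','):
        if '-' in part:
            start, end = part.split('-')
            trunk_tags.update(range(int(start), int(end) + 1))
        else:
            trunk_tags.add(int(part))
    return trunk_tags

# Two trunks conflict when their expanded tag sets intersect.
a = parse_trunk_tags('100-102,200')
b = parse_trunk_tags('102,300')
print(a & b)  # prints "{102}"
```

Expanding ranges into sets makes every overlap check a plain set intersection, which is what `find_networks_tags_intersections` relies on.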
+ def check_aparam_networks_trunk(self) -> bool | None:
+ check_errors = False
+
+ # check if account has vm feature "trunk"
+ if not self.check_account_vm_features(vm_feature=self.VMFeature.trunk):
+ check_errors = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ f'Account ID {self.acc_id} must have feature "trunk" to use '
+ 'trunk type networks.'
+ )
+ # check if rg has vm feature "trunk"
+ if not self.check_rg_vm_features(vm_feature=self.VMFeature.trunk):
+ check_errors = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ f'RG ID {self.rg_id} must have feature "trunk" to use '
+ 'trunk type networks.'
+ )
+
+ aparam_trunk_networks = []
+ aparam_extnet_networks = []
+ for net in self.aparams['networks']:
+ if net['type'] == self.VMNetType.TRUNK.value:
+ aparam_trunk_networks.append(net)
+ elif net['type'] == self.VMNetType.EXTNET.value:
+ aparam_extnet_networks.append(net)
+
+ trunk_networks_info = []
+ # check that account has access to all specified trunks
+ for trunk_network in aparam_trunk_networks:
+ trunk_info = self.api.ca.trunk.get(
+ id=trunk_network['id']
+ )
+ trunk_networks_info.append(trunk_info)
+ if (
+ trunk_info.account_ids is None
+ or self.acc_id not in trunk_info.account_ids
+ ):
+ check_errors = True
+ self.message(
+ 'Check for parameter "networks" failed: '
+ f'Account ID {self.acc_id} does not have access to '
+ f'trunk ID {trunk_info.id}'
+ )
+
+ extnet_networks_info = []
+ for extnet_network in aparam_extnet_networks:
+ extnet_networks_info.append(
+ self.extnet_get(id=extnet_network['id'])
+ )
+ # check that trunk tags do not overlap with each other
+ # and with extnets vlan id
+ if self.find_networks_tags_intersections(
+ trunk_networks=trunk_networks_info,
+ extnet_networks=extnet_networks_info,
+ ):
+ check_errors = True
+
+ return not check_errors
+
+# Workflow digest:
+# 1) authenticate to DECORT controller & validate authentication by issuing API call - done when creating DECSController
+# 2) check if the VM with the specified id or rg_name:name exists
+# 3) if VM does not exist, check if there are enough resources to deploy it in the target account / vdc
+# 4) if VM exists: check desired state, desired configuration -> initiate action accordingly
+# 5) VM does not exist: check desired state -> initiate action accordingly
+# - create VM: check if target VDC exists, create VDC as necessary, create VM
+# - delete VM: delete VM
+# - change power state: change as required
+# - change guest OS state: change as required
+# 6) report result to Ansible
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ amodule = self.amodule
+
+ if self.comp_id:
+ if self.comp_info['status'] in ("MIGRATING", "DELETING", "DESTROYING", "ERROR", "REDEPLOYING"):
+ # cannot do anything on the existing Compute in the listed states
+ self.error() # was self.nop()
+ elif self.comp_info['status'] in ("ENABLED", "DISABLED"):
+ if amodule.params['state'] == 'absent':
+ self.destroy()
+ else:
+ if amodule.params['state'] in (
+ 'paused', 'poweredon', 'poweredoff',
+ 'halted', 'started', 'stopped',
+ ):
+ self.compute_powerstate(
+ comp_facts=self.comp_info,
+ target_state=amodule.params['state'],
+ )
+ self.modify(arg_wait_cycles=7)
+ elif self.comp_info['status'] == "DELETED":
+ if amodule.params['state'] in ('present', 'poweredon', 'started'):
+ # TODO - check if restore API returns VM ID (similarly to VM create API)
+ self.compute_restore(comp_id=self.comp_id)
+ # TODO - do we need updated comp_info to manage port forwards and size after VM is restored?
+ _, self.comp_info, _ = self.compute_find(
+ comp_id=self.comp_id,
+ need_custom_fields=True,
+ need_console_url=amodule.params['get_console_url'],
+ )
+ self.modify()
+ elif amodule.params['state'] == 'absent':
+ # self.nop()
+ # self.comp_should_exist = False
+ self.destroy()
+ elif amodule.params['state'] in (
+ 'paused', 'poweredoff', 'halted', 'stopped'
+ ):
+ self.error()
+ elif self.comp_info['status'] == "DESTROYED":
+ if amodule.params['state'] in (
+ 'present', 'poweredon', 'poweredoff',
+ 'halted', 'started', 'stopped',
+ ):
+ self.create() # this call will also handle data disk & network connection
+ elif amodule.params['state'] == 'absent':
+ self.nop()
+ self.comp_should_exist = False
+ elif amodule.params['state'] == 'paused':
+ self.error()
+ else:
+ state = amodule.params['state']
+ if state is None:
+ state = 'present'
+ # Preexisting Compute of specified identity was not found.
+ # If requested state is 'absent' - nothing to do
+ if state == 'absent':
+ self.nop()
+ elif state in (
+ 'present', 'poweredon', 'poweredoff',
+ 'halted', 'started', 'stopped',
+ ):
+ self.create() # this call will also handle data disk & network connection
+ elif state == 'paused':
+ self.error()
+
+ if self.result['failed']:
+ amodule.fail_json(**self.result)
+ else:
+ # prepare Compute facts to be returned as part of decon.result and then call exit_json(...)
+ rg_facts = None
+ if self.comp_should_exist:
+ if (
+ (self.result['changed'] and not self.skip_final_get)
+ or self.force_final_get
+ ):
+ # There were changes to the Compute - refresh Compute facts.
+ _, self.comp_info, _ = self.compute_find(
+ comp_id=self.comp_id,
+ need_custom_fields=True,
+ need_console_url=amodule.params['get_console_url'],
+ need_snapshot_merge_status=amodule.params['get_snapshot_merge_status'], # noqa: E501
+ )
+ #
+ # We no longer need to re-read RG facts, as all network info is now available inside
+ # compute structure
+ # _, rg_facts = self.rg_find(arg_account_id=0, arg_rg_id=self.rg_id)
+ self.result['facts'] = self.package_facts(amodule.check_mode)
+ amodule.exit_json(**self.result)
+
+
+def main():
+ decort_vm().run()
+
+
+if __name__ == '__main__':
+ main()
diff --git a/library/decort_vm_list.py b/library/decort_vm_list.py
new file mode 100644
index 0000000..568b77b
--- /dev/null
+++ b/library/decort_vm_list.py
@@ -0,0 +1,158 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_vm_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortVMList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ account_id=dict(
+ type='int',
+ ),
+ ext_net_id=dict(
+ type='int',
+ ),
+ ext_net_name=dict(
+ type='str',
+ ),
+ id=dict(
+ type='int',
+ ),
+ include_deleted=dict(
+ type='bool',
+ ),
+ ip_addr=dict(
+ type='str',
+ ),
+ name=dict(
+ type='str',
+ ),
+ rg_id=dict(
+ type='int',
+ ),
+ rg_name=dict(
+ type='str',
+ ),
+ status=dict(
+ type='str',
+ choices=sdk_types.VMStatus._member_names_,
+ ),
+ tech_status=dict(
+ type='str',
+ choices=sdk_types.VMTechStatus._member_names_,
+ ),
+ zone_id=dict(
+ type='int',
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=(
+ sdk_types.VMAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+ aparam_tech_status: str | None = aparam_filter['tech_status']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.VMAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.compute.list(
+ account_id=aparam_filter['account_id'],
+ ext_net_id=aparam_filter['ext_net_id'],
+ ext_net_name=aparam_filter['ext_net_name'],
+ id=aparam_filter['id'],
+ include_deleted=aparam_filter['include_deleted'] or False,
+ ip_addr=aparam_filter['ip_addr'],
+ name=aparam_filter['name'],
+ rg_id=aparam_filter['rg_id'],
+ rg_name=aparam_filter['rg_name'],
+ status=(
+ sdk_types.VMStatus[aparam_status]
+ if aparam_status else None
+ ),
+ tech_status=(
+ sdk_types.VMTechStatus[aparam_tech_status]
+ if aparam_tech_status else None
+ ),
+ zone_id=aparam_filter['zone_id'],
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortVMList().run()
+
+
+if __name__ == '__main__':
+ main()
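The `sorting` option in `get_info` above is flattened into the single `sort_by` string the list API expects; the translation is just a sign prefix. A standalone sketch (the helper name is illustrative):

```python
def build_sort_by(field: str, asc: bool = True) -> str:
    # '+' requests ascending order, '-' descending,
    # mirroring the sort_by_prefix logic in get_info above.
    prefix = '+' if asc else '-'
    return f'{prefix}{field}'

print(build_sort_by('name'))           # prints "+name"
print(build_sort_by('id', asc=False))  # prints "-id"
```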
diff --git a/library/decort_vm_snapshot.py b/library/decort_vm_snapshot.py
index ea07497..f2c4c4d 100644
--- a/library/decort_vm_snapshot.py
+++ b/library/decort_vm_snapshot.py
@@ -30,9 +30,10 @@ class DecortVMSnapshot(DecortController):
self.exit(fail=True)
self.vm_name = self.vm_facts['name']
- self.vm_snapshots = self.vm_facts['snapSets']
+ self.vm_snapshots = self.api.ca.compute.snapshot_list(
+ vm_id=self.vm_id).data
self.vm_snapshot_labels = [
- snapshot['label'] for snapshot in self.vm_snapshots
+ snapshot.label for snapshot in self.vm_snapshots
]
self.new_snapshot_label = None
@@ -102,21 +103,23 @@ class DecortVMSnapshot(DecortController):
if check_error:
self.exit(fail=True)
+ @DecortController.handle_sdk_exceptions
def run(self):
- self.get_info(first_run=True)
+ self.get_info()
self.check_amodule_args_for_change()
self.change()
self.exit()
- def get_info(self, first_run: bool = False):
- if not first_run:
- self.vm_snapshots = self.snapshot_list(
- compute_id=self.aparams_vm_id,
- )
+ def get_info(self, update_vm_snapshots: bool = False):
+ if update_vm_snapshots:
+ self.vm_snapshots = self.api.cloudapi.compute.snapshot_list(
+ vm_id=self.aparams_vm_id,
+ ).data
+
label = self.new_snapshot_label or self.aparams_label
for snapshot in self.vm_snapshots:
- if snapshot['label'] == label:
- self.facts = snapshot
+ if snapshot.label == label:
+ self.facts = snapshot.model_dump()
if self.aparams['usage']:
self.facts['stored'] = self.get_snapshot_usage()
self.facts['vm_id'] = self.aparams_vm_id
@@ -134,11 +137,11 @@ class DecortVMSnapshot(DecortController):
self.abort_merge()
def create(self):
- self.snapshot_create(
- compute_id=self.aparams_vm_id,
+ self.sdk_checkmode(self.api.cloudapi.compute.snapshot_create)(
+ vm_id=self.aparams_vm_id,
label=self.new_snapshot_label,
)
- self.get_info()
+ self.get_info(update_vm_snapshots=True)
def delete(self):
self.snapshot_delete(
@@ -149,7 +152,7 @@ class DecortVMSnapshot(DecortController):
def abort_merge(self):
self.snapshot_abort_merge(
- vm_id=self.aparams_vm_id,
+ vm_id=self.aparams_vm_id,
label=self.aparams_label,
)
self.get_info()
@@ -161,7 +164,7 @@ class DecortVMSnapshot(DecortController):
label=label,
)
return common_snapshots_usage_info['stored']
-
+
def check_amodule_args_for_change(self):
check_errors = False
@@ -171,7 +174,7 @@ class DecortVMSnapshot(DecortController):
):
check_errors = True
self.message(
- f'Check for parameter "state" failed: '
+ 'Check for parameter "state" failed: '
'Merge can be aborted only for VM in "MERGE" tech status.'
)
@@ -179,7 +182,6 @@ class DecortVMSnapshot(DecortController):
self.exit(fail=True)
-
def main():
DecortVMSnapshot().run()
diff --git a/library/decort_zone.py b/library/decort_zone.py
index d8d18fc..16e7a87 100644
--- a/library/decort_zone.py
+++ b/library/decort_zone.py
@@ -10,6 +10,8 @@ description: See L(Module Documentation,https://repository.basistech.ru/BASIS/de
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.decort_utils import DecortController
+from dynamix_sdk import exceptions as sdk_exceptions
+
class DecortZone(DecortController):
def __init__(self):
@@ -28,16 +30,29 @@ class DecortZone(DecortController):
supports_check_mode=True,
)
+ @DecortController.handle_sdk_exceptions
def run(self):
self.get_info()
self.exit()
def get_info(self):
- self.facts = self.zone_get(id=self.id)
- self.facts['grid_id'] = self.facts.pop('gid')
- self.facts['created_timestamp'] = self.facts.pop('createdTime')
- self.facts['updated_timestamp'] = self.facts.pop('updatedTime')
- self.facts['node_ids'] = self.facts.pop('nodeIds')
+ try:
+ zone_model = self.api.cloudapi.zone.get(id=self.id)
+ except sdk_exceptions.RequestException as e:
+ if (
+ e.orig_exception.response
+ and e.orig_exception.response.status_code == 404
+ ):
+ self.message(
+ self.MESSAGES.obj_not_found(
+ obj='zone',
+ id=self.id,
+ )
+ )
+ self.exit(fail=True)
+ raise e
+
+ self.facts = zone_model.model_dump()
def main():
diff --git a/library/decort_zone_list.py b/library/decort_zone_list.py
new file mode 100644
index 0000000..78d2d3f
--- /dev/null
+++ b/library/decort_zone_list.py
@@ -0,0 +1,133 @@
+#!/usr/bin/python
+
+DOCUMENTATION = r'''
+---
+module: decort_zone_list
+
+description: See L(Module Documentation,https://repository.basistech.ru/BASIS/decort-ansible/wiki/Home). # noqa: E501
+'''
+
+from typing import Any
+from ansible.module_utils.basic import AnsibleModule
+from ansible.module_utils.decort_utils import DecortController
+
+from dynamix_sdk.base import get_alias, name_mapping_dict
+import dynamix_sdk.types as sdk_types
+
+
+class DecortZoneList(DecortController):
+ def __init__(self):
+ super().__init__(AnsibleModule(**self.amodule_init_args))
+
+ @property
+ def amodule_init_args(self) -> dict:
+ return self.pack_amodule_init_args(
+ argument_spec=dict(
+ filter=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ deletable=dict(
+ type='bool',
+ ),
+ description=dict(
+ type='str',
+ ),
+ grid_id=dict(
+ type='int',
+ ),
+ id=dict(
+ type='int',
+ ),
+ name=dict(
+ type='str',
+ ),
+ node_id=dict(
+ type='int',
+ ),
+ status=dict(
+ type='str',
+ choices=sdk_types.ZoneStatus._member_names_,
+ ),
+ ),
+ ),
+ pagination=dict(
+ type='dict',
+ apply_defaults=True,
+ options=dict(
+ number=dict(
+ type='int',
+ default=1,
+ ),
+ size=dict(
+ type='int',
+ default=50,
+ ),
+ ),
+ ),
+ sorting=dict(
+ type='dict',
+ options=dict(
+ asc=dict(
+ type='bool',
+ default=True,
+ ),
+ field=dict(
+ type='str',
+ choices=(
+ sdk_types.ZoneForListAPIResultNM
+ .model_fields.keys()
+ ),
+ required=True,
+ ),
+ ),
+ ),
+ ),
+ supports_check_mode=True,
+ )
+
+ @DecortController.handle_sdk_exceptions
+ def run(self):
+ self.get_info()
+ self.exit()
+
+ def get_info(self):
+ aparam_filter: dict[str, Any] = self.aparams['filter']
+ aparam_status: str | None = aparam_filter['status']
+
+ aparam_pagination: dict[str, Any] = self.aparams['pagination']
+
+ aparam_sorting: dict[str, Any] | None = self.aparams['sorting']
+ sort_by: str | None = None
+ if aparam_sorting:
+ sorting_field = get_alias(
+ field_name=aparam_sorting['field'],
+ model_cls=sdk_types.ZoneForListAPIResultNM,
+ name_mapping_dict=name_mapping_dict,
+ )
+ sort_by_prefix = '+' if aparam_sorting['asc'] else '-'
+ sort_by = f'{sort_by_prefix}{sorting_field}'
+
+ self.facts = self.api.cloudapi.zone.list(
+ deletable=aparam_filter['deletable'],
+ description=aparam_filter['description'],
+ grid_id=aparam_filter['grid_id'],
+ id=aparam_filter['id'],
+ name=aparam_filter['name'],
+ node_id=aparam_filter['node_id'],
+ status=(
+ sdk_types.ZoneStatus[aparam_status]
+ if aparam_status else None
+ ),
+ page_number=aparam_pagination['number'],
+ page_size=aparam_pagination['size'],
+ sort_by=sort_by,
+ ).model_dump()['data']
+
+
+def main():
+ DecortZoneList().run()
+
+
+if __name__ == '__main__':
+ main()
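`get_info` above converts the filter's status string into the SDK enum with `sdk_types.ZoneStatus[...]`; this is plain `Enum` lookup by member name. Sketched here with a stand-in enum whose members mirror the `CREATED`/`DESTROYED` statuses this patch removes from `decort_utils.py`:

```python
from enum import Enum

class ZoneStatus(Enum):
    CREATED = 'CREATED'
    DESTROYED = 'DESTROYED'

aparam_status = 'CREATED'
# Indexing an Enum by name returns the member; an invalid name raises KeyError,
# which the argument_spec 'choices' list prevents from ever happening here.
status = ZoneStatus[aparam_status] if aparam_status else None
print(status)  # prints "ZoneStatus.CREATED"
```

Because `choices` in the argument spec is built from the enum's member names, the name-based lookup and the accepted module inputs stay in sync automatically.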
diff --git a/module_utils/decort_utils.py b/module_utils/decort_utils.py
index 276df8e..85396b4 100644
--- a/module_utils/decort_utils.py
+++ b/module_utils/decort_utils.py
@@ -1,9 +1,10 @@
from copy import deepcopy
from datetime import datetime
from enum import Enum
+from functools import wraps
+from importlib.metadata import version as get_package_version
import json
import re
-from functools import wraps
from typing import (
Any,
Callable,
@@ -12,11 +13,17 @@ from typing import (
Optional,
ParamSpec,
TypeVar,
+ cast,
)
import time
-import requests
from ansible.module_utils.basic import AnsibleModule, env_fallback
+from dynamix_sdk import BVSAuth, DECS3OAuth, Dynamix
+from dynamix_sdk import __name__ as SDK_PACKAGE_NAME
+from dynamix_sdk import exceptions as sdk_exceptions
+import dynamix_sdk.api._nested as _nested
+import dynamix_sdk.types as sdk_types
+import requests
import urllib3
@@ -33,153 +40,16 @@ class DecortController(object):
_acc_info: None | dict = None
rg_id: None | int = None
_rg_info: None | dict = None
+ _api: sdk_types.API | None = None
+ _usermanager_whoami_result: None | dict = None
- FIELDS_FOR_SORTING_ACCOUNT_COMPUTE_LIST = [
- 'cpus',
- 'createdBy',
- 'createdTime',
- 'deletedBy',
- 'deletedTime',
- 'id',
- 'name',
- 'ram',
- 'registered',
- 'rgId',
- 'rgName',
- 'status',
- 'techStatus',
- 'totalDisksSize',
- 'updatedBy',
- 'updatedTime',
- 'userManaged',
- 'vinsConnected',
- ]
-
- FIELDS_FOR_SORTING_ACCOUNT_DISK_LIST = [
- 'id',
- 'name',
- 'pool',
- 'sepId',
- 'shareable',
- 'sizeMax',
- 'type',
- ]
-
- FIELDS_FOR_SORTING_ACCOUNT_IMAGE_LIST = [
- 'UNCPath',
- 'desc',
- 'id',
- 'name',
- 'public',
- 'size',
- 'status',
- 'type',
- 'username',
- ]
-
- FIELDS_FOR_SORTING_ACCOUNT_RG_LIST = [
- 'createdBy',
- 'createdTime',
- 'deletedBy',
- 'deletedTime',
- 'id',
- 'milestones',
- 'name',
- 'status',
- 'updatedBy',
- 'updatedTime',
- 'vinses',
- ]
-
- FIELDS_FOR_SORTING_ACCOUNT_VINS_LIST = [
- 'computes',
- 'createdBy',
- 'createdTime',
- 'deletedBy',
- 'deletedTime',
- 'externalIP',
- 'extnetId',
- 'freeIPs',
- 'id',
- 'name',
- 'network',
- 'priVnfDevId',
- 'rgId',
- 'rgName',
- 'status',
- 'updatedBy',
- 'updatedTime',
- ]
-
- COMPUTE_TECH_STATUSES = [
- 'BACKUP_RUNNING',
- 'BACKUP_STOPPED',
- 'CLONING',
- 'DOWN',
- 'MERGE',
- 'MIGRATING',
- 'MIGRATING_IN',
- 'MIGRATING_OUT',
- 'PAUSED',
- 'PAUSING',
- 'ROLLBACK',
- 'SCHEDULED',
- 'SNAPCREATE',
- 'STARTED',
- 'STARTING',
- 'STOPPED',
- 'STOPPING',
- ]
-
- DISK_TYPES = ['B', 'D']
-
- IMAGE_TYPES = [
- 'cdrom',
- 'linux',
- 'unknown',
- 'virtual',
- 'windows',
- ]
-
- RESOURCE_GROUP_STATUSES = [
- 'CREATED',
- 'DELETED',
- 'DELETING',
- 'DESTROYED',
- 'DESTROYING',
- 'DISABLED',
- 'DISABLING',
- 'ENABLED',
- 'ENABLING',
- 'MODELED',
- 'RESTORING',
- ]
+ ANSIBLE_MODULES_VERSION = '11.0.0'
+ COMPATIBLE_SDK_MINOR_VERSION = '1.4'
VM_RESIZE_NOT = 0
VM_RESIZE_DOWN = 1
VM_RESIZE_UP = 2
- class AccountStatus(Enum):
- CONFIRMED = 'CONFIRMED'
- DELETED = 'DELETED'
- DESTROYED = 'DESTROYED'
- DESTROYING = 'DESTROYING'
- DISABLED = 'DISABLED'
-
- class AccountSortableField(Enum):
- createdTime = 'createdTime'
- deletedTime = 'deletedTime'
- id = 'id'
- name = 'name'
- status = 'status'
- updatedTime = 'updatedTime'
-
- class AccountUserRights(Enum):
- R = 'R'
- RCX = 'RCX'
- ARCXDU = 'ARCXDU'
- CXDRAU = 'CXDRAU'
-
class VMNetType(Enum):
VINS = 'VINS'
@@ -191,61 +61,6 @@ class DecortController(object):
SDN = 'SDN'
- class AuditsSortableField(Enum):
- Call = 'Call'
- Guid = 'Guid'
- ResponseTime = 'Response Time'
- StatusCode = 'Status Code'
- Time = 'Time'
-
-
- class ZoneField(Enum):
- created_timestamp = 'createdTime'
- deletable = 'deletable'
- description = 'description'
- grid_id = 'gid'
- guid = 'guid'
- id = 'id'
- name = 'name'
- node_ids = 'nodeIds'
- status = 'status'
- updated_timestamp = 'updatedTime'
-
-
- class ZoneStatus(Enum):
- CREATED = 'CREATED'
- DESTROYED = 'DESTROYED'
-
-
- class TrunkStatus(Enum):
- CREATED = 'CREATED'
- DESTROYED = 'DESTROYED'
- DESTROYING = 'DESTROYING'
- DISABLED = 'DISABLED'
- ENABLED = 'ENABLED'
- ENABLING = 'ENABLING'
- MODELED = 'MODELED'
-
-
- class TrunksSortableField(Enum):
- accountIds = 'account_ids'
- created_at = 'created_timestamp'
- created_by = 'created_by'
- deleted_at = 'deleted_timestamp'
- deleted_by = 'deleted_by'
- description = 'description'
- guid = 'guid'
- id = 'id'
- mac = 'mac'
- name = 'name'
- nativeVlanId = 'native_vlan_id'
- ovsBridge = 'ovs_bridge'
- status = 'status'
- trunkTags = 'vlan_ids'
- updated_at = 'updated_timestamp'
- updated_by = 'updated_by'
-
-
TRUNK_VLAN_ID_MIN_VALUE = 1
TRUNK_VLAN_ID_MAX_VALUE = 4095
@@ -266,22 +81,6 @@ class DecortController(object):
cdrom = 'cdrom'
- class StoragePolicyStatus(Enum):
- DISABLED = 'DISABLED'
- ENABLED = 'ENABLED'
-
-
- class StoragePoliciesSortableField(Enum):
- description = 'description'
- guid = 'guid'
- id = 'id'
- limit_iops = 'iops_limit'
- name = 'name'
- access_seps_pools = 'sep_pools'
- status = 'status'
- usage = 'usage'
-
-
class SecurityGroupState(Enum):
absent = 'absent'
present = 'present'
@@ -293,33 +92,6 @@ class DecortController(object):
update = 'update'
- class SecurityGroupRuleDirection(Enum):
- INBOUND = 'inbound'
- OUTBOUND = 'outbound'
-
-
- class SecurityGroupRuleEtherType(Enum):
- IPV4 = 'IPv4'
- IPV6 = 'IPv6'
-
-
- class SecurityGroupRuleProtocol(Enum):
- ICMP = 'icmp'
- TCP = 'tcp'
- UDP = 'udp'
-
- class SecurityGroupSortableField(Enum):
- account_id = 'account_id'
- description = 'description'
- id = 'id'
- name = 'name'
- rules = 'rules'
- created_timestamp = 'created_at'
- created_by = 'created_by'
- updated_timestamp = 'updated_at'
- updated_by = 'updated_by'
-
-
class MESSAGES:
@staticmethod
def ssl_error(url: None | str = None):
@@ -471,6 +243,8 @@ class DecortController(object):
         # when you detect an error and are about to call exit_json() or fail_json()
self.result = {'failed': False, 'changed': False, 'waypoints': "Init"}
+ self._check_sdk_version()
+
self.authenticator = arg_amodule.params['authenticator']
self.controller_url = arg_amodule.params.get('controller_url')
@@ -550,19 +324,32 @@ class DecortController(object):
self.run_phase = "Run phase: Authenticating to DECORT controller."
- if self.authenticator == "jwt":
- # validate supplied JWT on the DECORT controller
- self.validate_jwt() # this call will abort the script if validation fails
- else:
- # Oauth2 based authorization mode
- # obtain JWT from Oauth2 provider and validate on the DECORT controller
+ if self.authenticator != 'jwt':
self.obtain_jwt()
- if self.controller_url is not None:
- self.validate_jwt() # this call will abort the script if validation fails
- # self.run_phase = "Initializing DecortController instance complete."
return
+ def _check_sdk_version(self):
+ sdk_version = get_package_version(SDK_PACKAGE_NAME)
+ if not sdk_version.startswith(f'{self.COMPATIBLE_SDK_MINOR_VERSION}.'):
+ message = (
+ f'Ansible modules version: {self.ANSIBLE_MODULES_VERSION}\n'
+ f'Incompatible version of {SDK_PACKAGE_NAME} is installed. '
+ f'Installed version: {sdk_version}. '
+ f'Compatible minor version: '
+ f'{self.COMPATIBLE_SDK_MINOR_VERSION}'
+ )
+ if self.aparams['ignore_sdk_version_check']:
+ self.message(
+ msg=message,
+ warning=True,
+ )
+ else:
+ self.message(
+ msg=message,
+ )
+ self.exit(fail=True)
+
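For reviewers: the new `_check_sdk_version` gate reduces to a string-prefix check against the compatible minor release line, where `get_package_version` presumably wraps `importlib.metadata.version`. A minimal standalone sketch (the helper name below is illustrative):

```python
COMPATIBLE_SDK_MINOR_VERSION = '1.4'

def sdk_version_is_compatible(installed_version: str) -> bool:
    # The trailing dot matters: without it, '1.40.1' would also
    # pass the '1.4' prefix check.
    return installed_version.startswith(f'{COMPATIBLE_SDK_MINOR_VERSION}.')

sdk_version_is_compatible('1.4.2')   # True  - same minor line
sdk_version_is_compatible('1.5.0')   # False - different minor
sdk_version_is_compatible('1.40.1')  # False - prefix trap avoided
```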
@property
def acc_info(self) -> dict:
if self._acc_info is None:
@@ -589,6 +376,45 @@ class DecortController(object):
self._rg_info = rg_info
return self._rg_info
+ @property
+ def usermanager_whoami_result(self) -> dict:
+ if self._usermanager_whoami_result is None:
+ self._usermanager_whoami_result = self.get_whoami_result()
+ return self._usermanager_whoami_result
+
+ @property
+ def api(self) -> sdk_types.API:
+ if self._api is None:
+ if self.controller_url is None:
+                raise ValueError('Controller URL must be set to call the API')
+ try:
+ dynamix = Dynamix(
+ url=self.controller_url,
+ auth=self.jwt,
+ verify_ssl=self.verify_ssl,
+ wrap_request_exceptions=True,
+ f_decorators=[self.sdk_waypoint],
+ ignore_api_compatibility=self.aparams[
+ 'ignore_api_compatibility'
+ ],
+ )
+ except sdk_exceptions.IncompatibleAPIError as e:
+ self.message(msg=e.message)
+ self.exit(fail=True)
+ else:
+ self._api = dynamix.api
+ if dynamix.dx_build != dynamix.compatible_dx_build:
+ self.message(
+ msg=(
+ f'Incompatible Dynamix build. '
+ f'Dynamix build: {dynamix.dx_build}. '
+ f'Compatible build: {dynamix.compatible_dx_build}'
+ ),
+ warning=True,
+ )
+ self.validate_jwt()
+ return self._api
+
@staticmethod
def waypoint(orig_f: Callable[P, R]) -> Callable[P, R]:
"""
@@ -601,6 +427,18 @@ class DecortController(object):
return orig_f(self, *args, **kwargs)
return new_f
+ def sdk_waypoint(self, orig_f: Callable[P, R]) -> Callable[P, R]:
+ """
+        A decorator that appends the name of the called SDK function to
+        the string `self.result['waypoints']`.
+ """
+ @wraps(orig_f)
+ def new_f(*args, **kwargs):
+ name = orig_f.__name__.replace('__', '.')
+ self.result['waypoints'] += f' -> {name}'
+ return orig_f(*args, **kwargs)
+ return new_f
+
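The `sdk_waypoint` decorator records every SDK call in the `waypoints` trail and converts double-underscore function names back into dotted API paths. A self-contained sketch of the same mechanism (class and function names here are illustrative, not from the module):

```python
from functools import wraps

class Trail:
    def __init__(self):
        self.waypoints = 'Init'

    def sdk_waypoint(self, orig_f):
        @wraps(orig_f)
        def new_f(*args, **kwargs):
            # SDK functions are named like cloudapi__vins__list;
            # restore the dotted API path for readability
            name = orig_f.__name__.replace('__', '.')
            self.waypoints += f' -> {name}'
            return orig_f(*args, **kwargs)
        return new_f

trail = Trail()

@trail.sdk_waypoint
def cloudapi__vins__list():
    return {'data': []}

cloudapi__vins__list()
# trail.waypoints == 'Init -> cloudapi.vins.list'
```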
@staticmethod
def checkmode(orig_f: Callable[P, R]) -> Callable[P, R | None]:
"""
@@ -623,6 +461,74 @@ class DecortController(object):
return orig_f(self, *args, **kwargs)
return new_f
+ def sdk_checkmode(self, orig_f: Callable[P, R]) -> Callable[P, R | None]:
+ """
+        A decorator for SDK methods that must not be executed in
+        Ansible check mode. Instead of executing the method, a message
+        with the method name and its call arguments is added to the
+        result.
+ """
+ @wraps(orig_f)
+ def new_f(*args, **kwargs):
+ if self.amodule.check_mode:
+ self.message(
+ self.MESSAGES.method_in_check_mode(
+ method_name=orig_f.__name__.replace('__', '.'),
+ method_args=args,
+ method_kwargs=kwargs,
+ )
+ )
+ return None
+ else:
+ self.set_changed()
+ return orig_f(*args, **kwargs)
+
+ return new_f
+
+ @staticmethod
+ def handle_sdk_exceptions(f: Callable[P, R]) -> Callable[P, R]:
+ """
+ A decorator for handling dynamix_sdk exceptions.
+ """
+ @wraps(f)
+ def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
+ try:
+ return f(*args, **kwargs)
+ except sdk_exceptions.RequestException as e:
+ self = cast(DecortController, args[0])
+ orig_exception = e.orig_exception
+ match type(orig_exception):
+ case requests.exceptions.SSLError:
+ url = getattr(orig_exception.request, 'url')
+ self.message(self.MESSAGES.ssl_error(url=url))
+ case requests.exceptions.ConnectionError:
+ url = getattr(orig_exception.request, 'url')
+ self.message(msg=f'Failed to connect to "{url}": {e}.')
+ case requests.exceptions.Timeout:
+ url = getattr(orig_exception.request, 'url')
+ self.message(
+ msg=(
+ f'Timeout when trying to connect to '
+ f'"{url}": {e}.'
+ )
+ )
+ case requests.exceptions.HTTPError:
+ method = getattr(orig_exception.request, 'method', '')
+ body = getattr(orig_exception.request, 'body', '')
+ text = getattr(orig_exception.response, 'text', '')
+ self.message(
+ msg=(
+ f'HTTP Error: {orig_exception}\n'
+ f'HTTP method: {method}\n'
+ f'HTTP request body: {body}\n'
+ f'HTTP response text: {text}\n'
+ f'SDK function name: {e.func_name}\n'
+ f'SDK function parameters: {e.func_kwargs}'
+ )
+ )
+ self.exit(fail=True)
+ return wrapper
+
@staticmethod
def dt_str_to_sec(dt_str) -> None | int:
"""
@@ -703,7 +609,15 @@ class DecortController(object):
verify_ssl=dict(
type='bool',
default=True
- ),
+ ),
+ ignore_api_compatibility=dict(
+ type='bool',
+ default=False,
+ ),
+ ignore_sdk_version_check=dict(
+ type='bool',
+ default=False,
+ ),
),
required_if=[
(
@@ -802,6 +716,7 @@ class DecortController(object):
else:
return True
+ @handle_sdk_exceptions
def obtain_jwt(self):
if self.authenticator in ('oauth2', 'decs3o'):
self.jwt = self.obtain_decs3o_jwt()
@@ -809,169 +724,35 @@ class DecortController(object):
self.jwt = self.obtain_bvs_jwt()
def obtain_decs3o_jwt(self):
- """Obtain JWT from the Oauth2 DECS3O provider using application ID and application secret provided , as specified at
- class instance init method.
-
- If method fails to obtain JWT it will abort the execution of the script by calling AnsibleModule.fail_json()
- method.
-
- @return: JWT as string.
- """
-
- token_get_url = self.oauth2_url + "/v1/oauth/access_token"
- req_data = dict(grant_type="client_credentials",
- client_id=self.app_id,
- client_secret=self.app_secret,
- response_type="id_token",
- validity=3600, )
- # TODO: Need standard code snippet to handle server timeouts gracefully
- # Consider a few retries before giving up or use requests.Session & requests.HTTPAdapter
- # see https://stackoverflow.com/questions/15431044/can-i-set-max-retries-for-requests-request
-
- # catch requests.exceptions.ConnectionError to handle incorrect oauth2_url case
- try:
- token_get_resp = requests.post(token_get_url, data=req_data, verify=self.verify_ssl)
- except requests.exceptions.SSLError:
- self.message(self.MESSAGES.ssl_error(url=token_get_url))
- self.exit(fail=True)
- except requests.exceptions.ConnectionError as errco:
- self.result['failed'] = True
- self.result['msg'] = "Failed to connect to '{}' to obtain JWT access token: {}".format(token_get_url, errco)
- self.amodule.fail_json(**self.result)
- except requests.exceptions.Timeout as errti:
- self.result['failed'] = True
- self.result['msg'] = "Timeout when trying to connect to '{}' to obtain JWT access token: {}".format(
- token_get_url, errti)
- self.amodule.fail_json(**self.result)
-
- # alternative -- if resp == requests.codes.ok
- if token_get_resp.status_code != 200:
- self.result['failed'] = True
- self.result['msg'] = ("Failed to obtain JWT access token from oauth2_url '{}' for app_id '{}': "
- "HTTP status code {}, reason '{}'").format(token_get_url,
- self.amodule.params['app_id'],
- token_get_resp.status_code,
- token_get_resp.reason)
- self.amodule.fail_json(**self.result)
-
- # Common return values: https://docs.ansible.com/ansible/2.3/common_return_values.html
- self.jwt = token_get_resp.content.decode('utf8')
- return self.jwt
+ decs3o_auth = DECS3OAuth(
+ url=self.oauth2_url,
+ client_id=self.app_id,
+ client_secret=self.app_secret,
+ verify_ssl=self.verify_ssl,
+ wrap_request_exceptions=True,
+ f_decorators=[self.sdk_waypoint],
+ )
+ return decs3o_auth.get_jwt()
def obtain_bvs_jwt(self):
- """
- Obtain JWT from the Oauth2 BVS provider using
- application ID, application secret, username and password provided
-
- If method fails to obtain JWT it will abort the execution of the script
- by calling self.exit(fail=True).
-
- @return: JWT as string.
- """
- token_get_url = (
- f'{self.oauth2_url}/realms/{self.domain}/'
- f'protocol/openid-connect/token'
+ bvs_auth = BVSAuth(
+ url=self.oauth2_url,
+ domain=self.domain,
+ client_id=self.app_id,
+ client_secret=self.app_secret,
+ user_name=self.username,
+ password=self.password,
+ verify_ssl=self.verify_ssl,
+ wrap_request_exceptions=True,
+ f_decorators=[self.sdk_waypoint],
)
- request_data = {
- 'grant_type': 'password',
- 'client_id': self.app_id,
- 'client_secret': self.app_secret,
- 'username': self.username,
- 'password': self.password,
- 'response_type': 'token',
- 'scope': 'openid',
- }
+ return bvs_auth.get_jwt()
- token_get_resp = None
- try:
- token_get_resp = requests.post(
- url=token_get_url,
- data=request_data,
- verify=self.verify_ssl,
- )
- except requests.exceptions.SSLError:
- self.message(self.MESSAGES.ssl_error(url=token_get_url))
- self.exit(fail=True)
- except requests.exceptions.ConnectionError as con_error:
- self.message(
- f'Failed to connect to "{token_get_url}" '
- f'to obtain JWT access token: {con_error}'
- )
- self.exit(fail=True)
- except requests.exceptions.Timeout as timeout_error:
- self.message(
- f'Timeout when trying to connect to "{token_get_url}" '
- f'to obtain JWT access token: {timeout_error}'
- )
- self.exit(fail=True)
- if token_get_resp.status_code != 200:
- self.message(
- f'Failed to obtain JWT access token from '
- f'oauth2_url "{token_get_url}" '
- f'for app_id "{self.amodule.params["app_id"]}": '
- f'HTTP status code {token_get_resp.status_code}, '
- f'reason "{token_get_resp.reason}"'
- )
- self.exit(fail=True)
- self.jwt = token_get_resp.json()['access_token']
- return self.jwt
+ def get_whoami_result(self) -> dict:
+ return self.api.system.usermanager.whoami().model_dump()
- def validate_jwt(self, arg_jwt=None):
- """Validate JWT against DECORT controller. JWT can be supplied as argument to this method. If None supplied as
- argument, JWT will be taken from class attribute. DECORT controller URL will always be taken from the class
- attribute assigned at instantiation.
- Validation is accomplished by attempting API call that lists accounts for the invoking user.
-
- @param arg_jwt: the JWT to validate. If set to None, then JWT from the class instance will be validated.
-
- @return: True if validation succeeds. If validation fails, method aborts the execution by calling
- AnsibleModule.fail_json() method.
- """
-
- if not arg_jwt:
- # If no JWT is passed as argument to this method, we will validate JWT stored in the class
- # instance (if any)
- arg_jwt = self.jwt
-
- if not arg_jwt:
- # arg_jwt is still None - it mans self.jwt is also None, so generate error and abort the script
- self.result['failed'] = True
- self.result['msg'] = "Cannot validate empty JWT."
- self.amodule.fail_json(**self.result)
- # The above call to fail_json will abort the script, so the following return statement will
- # never be executed
- return False
-
- req_url = self.controller_url + "/restmachine/cloudapi/account/list"
- req_header = dict(Authorization="bearer {}".format(arg_jwt), )
-
- try:
- api_resp = requests.post(req_url, headers=req_header, verify=self.verify_ssl)
- except requests.exceptions.SSLError:
- self.message(self.MESSAGES.ssl_error(url=req_url))
- self.exit(fail=True)
- except requests.exceptions.ConnectionError:
- self.result['failed'] = True
- self.result['msg'] = "Failed to connect to '{}' while validating JWT".format(req_url)
- self.amodule.fail_json(**self.result)
- return False # actually, this directive will never be executed as fail_json exits the script
- except requests.exceptions.Timeout:
- self.result['failed'] = True
- self.result['msg'] = "Timeout when trying to connect to '{}' while validating JWT".format(req_url)
- self.amodule.fail_json(**self.result)
- return False
-
- if api_resp.status_code != 200:
- self.result['failed'] = True
- self.result['msg'] = ("Failed to validate JWT access token for DECORT controller URL '{}': "
- "HTTP status code {}, reason '{}', header '{}'").format(api_resp.url,
- api_resp.status_code,
- api_resp.reason, req_header)
- self.amodule.fail_json(**self.result)
- return False
-
- # If we fall through here, then everything went well.
- return True
+    def validate_jwt(self):
+        """Validate the current JWT by calling the `whoami` API method;
+        an invalid token makes the underlying SDK request fail."""
+        self.get_whoami_result()
def decort_api_call(
self,
@@ -1066,22 +847,7 @@ class DecortController(object):
format(api_resp.url, api_resp.status_code, api_resp.reason)
self.amodule.fail_json(**self.result)
return None
-
- @waypoint
- def user_whoami(self) -> dict:
- """
- Implementation of functionality of the API method
- `/system/usermanager/whoami`.
- """
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/system/usermanager/whoami',
- arg_params={},
- )
-
- return api_resp.json()
-
+
@waypoint
def user_get(self, id: str) -> dict:
"""
@@ -1099,78 +865,6 @@ class DecortController(object):
return api_resp.json()
- @waypoint
- def user_accounts(
- self,
- account_id: None | int = None,
- account_name: None | str = None,
- account_status: None | AccountStatus = None,
- account_user_rights: None | AccountUserRights = None,
- deleted: bool = False,
- page_number: int = 1,
- page_size: None | int = None,
- resource_consumption: bool = False,
- sort_by_asc=True,
- sort_by_field: None | AccountSortableField = None,
- zone_id: None | int = None,
- ) -> list[dict]:
- """
- Implementation of the functionality of API methods
- `/cloudapi/account/list`,
- `/cloudapi/account/listDeleted` and
- `/cloudapi/account/listResourceConsumption`.
- """
-
- sort_by = None
- if sort_by_field:
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field.value}'
-
- api_params = {
- 'by_id': account_id,
- 'name': account_name,
- 'acl': account_user_rights.value if account_user_rights else None,
- 'page': page_number if page_size else None,
- 'size': page_size,
- 'sortBy': sort_by,
- 'status': account_status.value if account_status else None,
- 'zone_id': zone_id,
- }
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name={
- False: '/restmachine/cloudapi/account/list',
- True: '/restmachine/cloudapi/account/listDeleted',
- }[deleted],
- arg_params=api_params,
- )
-
- accounts = api_resp.json()['data']
-
- if resource_consumption and not deleted:
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name=(
- '/restmachine/cloudapi/account/listResourceConsumption'
- ),
- arg_params={},
- )
- accounts_rc_list = api_resp.json()['data']
- accounts_rc_dict = {a['id']: a for a in accounts_rc_list}
- for a in accounts:
- a_id = a['id']
- a['resource_consumed'] = accounts_rc_dict[a_id]['Consumed']
- a['resource_reserved'] = accounts_rc_dict[a_id]['Reserved']
-
- for a in accounts:
- a['createdTime_readable'] = self.sec_to_dt_str(a['createdTime'])
- a['deletedTime_readable'] = self.sec_to_dt_str(a['deletedTime'])
- a['updatedTime_readable'] = self.sec_to_dt_str(a['updatedTime'])
- a['description'] = a.pop('desc')
- a['zone_ids'] = a.pop('zoneIds')
-
- return accounts
-
@waypoint
def user_resource_consumption(self) -> dict[str, dict]:
"""
@@ -1190,51 +884,6 @@ class DecortController(object):
'resource_reserved': api_resp_json['Reserved'],
}
- @waypoint
- def user_audits(self,
- page_size: int,
- api_method: None | str = None,
- min_status_code: None | int = None,
- max_status_code: None | int = None,
- start_unix_time: None | int = None,
- end_unix_time: None | int = None,
- page_number: int = 1,
- sort_by_asc: bool = True,
- sort_by_field: None | AuditsSortableField = None,
- ) -> dict[str, Any]:
- """
- Implementation of the functionality of API method
- `/cloudapi/user/getAudit`.
- """
-
- sort_by = None
- if sort_by_field:
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field.value}'
-
- api_params = {
- 'call': api_method,
- 'minStatusCode': min_status_code,
- 'maxStatusCode': max_status_code,
- 'timestampAt': start_unix_time,
- 'timestampTo': end_unix_time,
- 'page': page_number,
- 'size': page_size,
- 'sortBy': sort_by,
- }
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/user/getAudit',
- arg_params=api_params,
- )
-
- audits = api_resp.json()['data']
-
- for a in audits:
- a['timestamp_readable'] = self.sec_to_dt_str(a['timestamp'])
-
- return audits
-
@waypoint
def user_api_methods(self, id: str) -> dict:
"""
@@ -1270,61 +919,94 @@ class DecortController(object):
return api_resp.json()
@waypoint
- def user_zones(
+ def user_disks(
self,
- page_size: None | int = None,
- deletable: None | bool = None,
- description: None | str = None,
- grid_id: None | int = None,
+ account_id: None | int = None,
+ account_name: None | str = None,
id: None | int = None,
+ size_max_gb: None | int = None,
name: None | str = None,
- node_id: None | int = None,
- status: None | ZoneStatus = None,
+ sep_pool_name: None | str = None,
+ sep_id: None | int = None,
+ shared: None | bool = None,
+ storage_policy_id: None | int = None,
+ type: None | str = None,
+ status: None | str = None,
page_number: int = 1,
+ page_size: None | int = None,
sort_by_asc: bool = True,
- sort_by_field: None | ZoneField = None,
+ sort_by_field: None | str = None,
) -> list[dict]:
"""
Implementation of the functionality of API method
- `/cloudapi/zone/list`.
+ `/cloudapi/disks/list`.
"""
-
sort_by = None
if sort_by_field:
sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field.value}'
+ sort_by = f'{sort_by_prefix}{sort_by_field}'
- api_params = {
- 'by_id': id,
- 'gid': grid_id,
- 'name': name,
- 'description': description,
- 'status': status.value if status else None,
- 'deletable': deletable,
- 'nodeId': node_id,
- 'page': page_number if page_size else None,
- 'size': page_size,
- 'sortBy': sort_by,
- }
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/zone/list',
- arg_params=api_params,
- )
+ disks = self.api.cloudapi.disks.list(
+ account_id=account_id,
+ account_name=account_name,
+ disk_max_size_gb=size_max_gb,
+ id=id,
+ name=name,
+ sep_id=sep_id,
+ sep_pool_name=sep_pool_name,
+ shared=shared,
+ status=_nested.DiskStatus[status] if status else None,
+ storage_policy_id=storage_policy_id,
+ type=_nested.DiskType[type] if type else None,
+ page_number=page_number,
+ page_size=page_size,
+ sort_by=sort_by,
+ ).model_dump()
- zones_orig: list[dict] = api_resp.json()['data']
+ return disks['data']
- zones_result: list[dict] = []
+ @waypoint
+ def user_vins(
+ self,
+ account_id: int | None = None,
+ ext_net_ip: str | None = None,
+ id: int | None = None,
+ include_deleted: bool = False,
+ name: str | None = None,
+ rg_id: int | None = None,
+ status: str | None = None,
+ vnfdev_id: int | None = None,
+ zone_id: int | None = None,
+ page_number: int = 1,
+ page_size: None | int = None,
+ sort_by_asc: bool = True,
+ sort_by_field: None | str = None,
+ ) -> list[dict]:
+ """
+ Implementation of the functionality of API method
+ `/cloudapi/vins/list`.
+ """
+ sort_by = None
+ if sort_by_field:
+ sort_by_prefix = '+' if sort_by_asc else '-'
+ sort_by = f'{sort_by_prefix}{sort_by_field}'
- for zone_orig in zones_orig:
- zone_result = {}
+ vms = self.api.cloudapi.vins.list(
+ account_id=account_id,
+ ext_net_ip=ext_net_ip,
+ id=id,
+ include_deleted=include_deleted,
+ name=name,
+ rg_id=rg_id,
+ status=_nested.VINSStatus[status] if status else None,
+ vnfdev_id=vnfdev_id,
+ zone_id=zone_id,
+ page_number=page_number,
+ page_size=page_size,
+ sort_by=sort_by,
+ ).model_dump()
- for field in self.ZoneField:
- zone_result[field.name] = zone_orig[field.value]
-
- zones_result.append(zone_result)
-
- return zones_result
+ return vms['data']
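Both `user_disks` and `user_vins` build their `sort_by` argument the same way: an optional field name prefixed with `+` for ascending or `-` for descending order. The shared pattern, factored out as a sketch (the helper name is illustrative):

```python
def build_sort_by(field, ascending=True):
    # None means "no explicit ordering" and is passed through to the API
    if not field:
        return None
    prefix = '+' if ascending else '-'
    return f'{prefix}{field}'

build_sort_by('name')         # '+name'
build_sort_by('id', False)    # '-id'
build_sort_by(None)           # None
```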
###################################
# Compute and KVM VM resource manipulation methods
@@ -1785,7 +1467,7 @@ class DecortController(object):
ram,
boot_disk_size,
image_id,
- chipset: Literal['Q35', 'i440fx'] = 'i440fx',
+ chipset: Literal['Q35', 'i440fx'] = 'Q35',
description="",
userdata=None,
sep_id=None,
@@ -1843,6 +1525,9 @@ class DecortController(object):
'zoneId': zone_id,
'storage_policy_id': storage_policy_id,
'os_version': os_version,
+ 'cpupin': cpu_pin,
+ 'hpBacked': hp_backed,
+ 'numaAffinity': numa_affinity,
}
if description:
api_params['desc'] = description
@@ -1857,9 +1542,6 @@ class DecortController(object):
api_url = '/restmachine/cloudapi/kvmx86/create'
api_params['imageId'] = image_id
api_params['start'] = start_on_create
- api_params['cpupin'] = cpu_pin
- api_params['hpBacked'] = hp_backed
- api_params['numaAffinity'] = numa_affinity
if userdata:
api_params['userdata'] = json.dumps(userdata) # we need to pass a string object as "userdata"
@@ -1895,6 +1577,9 @@ class DecortController(object):
nets_for_change_sec_groups_dict = {}
nets_for_change_state_dict = {}
+ def get_net_key(net_type: str, net_id: int | str) -> str:
+ return f'{net_type}{net_id}'
+
# Either only attaching or only detaching networks
if not ifaces or not new_networks:
ifaces_for_delete = ifaces
@@ -1913,7 +1598,7 @@ class DecortController(object):
net_id = iface['sdn_interface_id']
else:
net_id = iface['netId']
- net_key = f'{net_type}{net_id}'
+ net_key = get_net_key(net_type=net_type, net_id=net_id)
ifaces_dict[net_key] = iface
empty_new_nets_count = 0
@@ -1924,7 +1609,7 @@ class DecortController(object):
net_id = empty_new_nets_count
else:
net_id = net['id']
- net_key = f'{net_type}{net_id}'
+ net_key = get_net_key(net_type=net_type, net_id=net_id)
# If DPDK iface MTU is new then add postfix
if net_type == self.VMNetType.DPDK.value:
@@ -1934,6 +1619,21 @@ class DecortController(object):
if iface and net_mtu != iface['mtu']:
net_key = f'{net_key}_new-mtu'
+ if net_type in [
+ self.VMNetType.VFNIC.value, self.VMNetType.DPDK.value
+ ]:
+ net_ip_addr = net['ip_addr']
+ if net_ip_addr is not None:
+ iface = ifaces_dict.get(net_key)
+ if iface and net_ip_addr != iface['ipAddress']:
+ net_key = f'{net_key}_new-ip-addr'
+
+ net_prefix = net['net_prefix']
+ if net_prefix is not None:
+ iface = ifaces_dict.get(net_key)
+ if iface and net_prefix != iface['netMask']:
+ net_key = f'{net_key}_new-net-prefix'
+
new_nets_dict[net_key] = net
# The networks that no need to be disconnected or reconnected
@@ -1984,7 +1684,10 @@ class DecortController(object):
nets_for_change_mac_dict[net_key] = net
# Adding networks for change MTU
- if net['type'] == self.VMNetType.EXTNET.value:
+ if net['type'] in (
+ self.VMNetType.EXTNET.value,
+ self.VMNetType.TRUNK.value,
+ ):
current_mtu = ifaces_dict[net_key]['mtu']
new_mtu = net['mtu']
if new_mtu and current_mtu != new_mtu:
@@ -2054,17 +1757,32 @@ class DecortController(object):
warning=True,
)
+ net_type = net['type']
+ net_id = net.get('id', 0)
+ net_key = get_net_key(net_type=net_type, net_id=net_id)
+ old_iface = ifaces_dict.get(net_key, {})
api_params = {
'computeId': vm_id,
- 'netType': net['type'],
- 'netId': net.get('id') or 0,
- 'ipAddr': net.get('ip_addr'),
- 'mtu': net.get('mtu'),
- 'mac_addr': net.get('mac'),
- 'security_groups': net.get('security_group_ids'),
- 'enable_secgroups': security_group_mode,
- 'enabled': enabled,
+ 'netType': net_type,
+ 'netId': net_id,
+ 'ipAddr': net.get('ip_addr') or old_iface.get('ipAddress'),
+ 'mtu': net.get('mtu') or old_iface.get('mtu'),
+ 'mac_addr': net.get('mac') or old_iface.get('mac'),
+ 'security_groups': (
+ net.get('security_group_ids')
+ or old_iface.get('security_groups')
+ ),
+ 'enable_secgroups': (
+ net.get('security_group_mode')
+ or old_iface.get('enable_secgroups')
+ ),
+ 'enabled': net.get('enabled') or old_iface.get('enabled'),
+ 'netMask': (
+ net.get('net_prefix')
+ or old_iface.get('net_prefix')
+ ),
}
+
if net['type'] == self.VMNetType.SDN.value:
api_params['sdn_interface_id'] = net['id']
api_params['netId'] = 0
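The re-attach parameters above now fall back to the previous interface values via `new or old`. Because `or` tests truthiness, explicitly falsy new values (`False`, `0`, an empty string) are treated as "not provided" and silently keep the old value; a minimal illustration of that trade-off:

```python
def merge_param(new_value, old_value):
    # Mirrors the `net.get(...) or old_iface.get(...)` fallback above
    return new_value or old_value

merge_param(None, 1500)    # 1500  - fall back to the existing MTU
merge_param(9000, 1500)    # 9000  - an explicit new value wins
merge_param(False, True)   # True  - caveat: explicit False is overridden
```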
@@ -2609,19 +2327,6 @@ class DecortController(object):
return api_resp.json()
- @waypoint
- @checkmode
- def compute_rollback(self, compute_id: int, snapshot_label: str):
- self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/compute/snapshotRollback',
- arg_params={
- 'computeId': compute_id,
- 'label': snapshot_label,
- },
- )
- self.set_changed()
-
@waypoint
@checkmode
def compute_clone(
@@ -2910,7 +2615,7 @@ class DecortController(object):
def compute_disk_redeploy(
self,
vm_id: int,
- storage_policy_id: int,
+ storage_policy_id: None | int,
image_id: None | int,
disk_size: None | int,
os_version: None | str,
@@ -3560,8 +3265,6 @@ class DecortController(object):
api_params['maxCPUCapacity'] = arg_quota['cpu']
if 'ext_ips' in arg_quota:
api_params['maxNumPublicIP'] = arg_quota['ext_ips']
- if 'net_transfer' in arg_quota:
- api_params['maxNetworkPeerTransfer'] = arg_quota['net_transfer']
if restype:
api_params['resourceTypes'] = restype
@@ -3629,7 +3332,6 @@ class DecortController(object):
ram='CU_M',
disk='CU_DM',
ext_ips='CU_I',
- net_transfer='CU_NP',
storage_policies='storage_policy',
)
set_key_map = dict(
@@ -3637,14 +3339,12 @@ class DecortController(object):
ram='maxMemoryCapacity',
disk='maxVDiskCapacity',
ext_ips='maxNumPublicIP',
- net_transfer='maxNetworkPeerTransfer',
storage_policies='storage_policies',
)
rg_resource_limits_dict = arg_rg_dict['resourceLimits']
for quota_type in (
- 'cpu', 'ram', 'disk', 'ext_ips',
- 'net_transfer', 'storage_policies',
+ 'cpu', 'ram', 'disk', 'ext_ips', 'storage_policies',
):
if arg_quotas:
if quota_type in arg_quotas:
@@ -3744,14 +3444,8 @@ class DecortController(object):
account_name: str = '',
account_id=0,
audits=False,
- computes_args: None | dict = None,
- disks_args: None | dict = None,
fail_if_not_found=False,
- flip_groups_args: None | dict = None,
- images_args: None | dict = None,
resource_consumption=False,
- resource_groups_args: None | dict = None,
- vinses_args: None | dict = None,
):
"""
Find account specified by account ID or name and return
@@ -3767,52 +3461,16 @@ class DecortController(object):
the call will be added to
account info dict (key `audits`).
- @param (None | dict) computes_args: If dict is
- specified, then the method `self.account_computes`
- will be called passing founded account ID
- and `**computes_args`. Result of the call will
- be added to account info dict (key `computes`).
-
- @param (None | dict) disks_args: If dict is
- specified, then the method `self.account_disks`
- will be called passing founded account ID
- and `**disks_args`. Result of the call will
- be added to account info dict (key `disks`).
-
@param (bool) fail_if_not_found: If `True` is specified, then
the method `self.amodule.fail_json(**self.result)` will be
called if account is not found.
- @param (None | dict) flip_groups_args: If dict is
- specified, then the method `self.account_flip_groups`
- will be called passing founded account ID
- and `**flip_groups_args`. Result of the call will
- be added to account info dict (key `flip_groups`).
-
- @param (None | dict) images_args: If dict is
- specified, then the method `self.account_images`
- will be called passing founded account ID
- and `**images_args`. Result of the call will
- be added to account info dict (key `images`).
-
@param (bool) resource_consumption: If `True` is specified,
then the method `self.account_resource_consumption`
         will be called passing the found account ID, and the result of
         the call will be added to the
         account info dict (key `resource_consumption`).
- @param (None | dict) resource_groups_args: If dict is
- specified, then the method `self.account_resource_groups`
- will be called passing founded account ID
- and `**resource_groups_args`. Result of the call will
- be added to account info dict (key `resource_groups`).
-
- @param (None | dict) vinses_args: If dict is
- specified, then the method `self.account_vinses`
- will be called passing founded account ID
- and `**vinses_args`. Result of the call will
- be added to account info dict (key `vinses`).
-
Returns non zero account ID and account info dict on success,
0 and empty dict otherwise (if `fail_if_not_found=False`).
"""
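The lookup contract described by this docstring (non-zero ID plus info dict on success, `0` and `{}` otherwise) can be sketched as a standalone function. The name `find_account` and the plain-list input are illustrative assumptions, not the controller's real API, and a `RuntimeError` stands in for `fail_json()`:

```python
def find_account(accounts: list[dict], name: str,
                 fail_if_not_found: bool = False) -> tuple[int, dict]:
    """Return (account_id, info) for the first account matching `name`.

    Returns (0, {}) when nothing matches and fail_if_not_found is False;
    raises in this simplified sketch instead of calling fail_json().
    """
    for account in accounts:
        if account.get('name') == name:
            return account['id'], account
    if fail_if_not_found:
        raise RuntimeError(f'Account {name!r} not found.')
    return 0, {}
```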
@@ -3898,55 +3556,14 @@ class DecortController(object):
storage_policy['storage_size_gb'] = storage_policy.pop('limit')
if resource_consumption:
- resource_consumption = self.account_resource_consumption(
- account_id=account_details['id'],
- fail_if_not_found=True
- )
- account_details['resource_consumed'] =\
- resource_consumption['Consumed']
- account_details['resource_reserved'] =\
- resource_consumption['Reserved']
-
- if resource_groups_args is not None:
- account_details['resource_groups'] =\
- self.account_resource_groups(
+ resource_consumption_dict = (
+ self.api.cloudapi.account.get_resource_consumption(
account_id=account_details['id'],
- **resource_groups_args
- )
-
- if computes_args is not None:
- account_details['computes'] =\
- self.account_computes(
- account_id=account_details['id'],
- **computes_args
- )
-
- if vinses_args is not None:
- account_details['vinses'] = self.account_vinses(
- account_id=account_details['id'],
- **vinses_args
- )
-
- if disks_args is not None:
- account_details['disks'] =\
- self.account_disks(
- account_id=account_details['id'],
- **disks_args
- )
-
- if images_args is not None:
- account_details['images'] =\
- self.account_images(
- account_id=account_details['id'],
- **images_args
- )
-
- if flip_groups_args is not None:
- account_details['flip_groups'] =\
- self.account_flip_groups(
- account_id=account_details['id'],
- **flip_groups_args
- )
+ ).model_dump()
+ )
+ resource_consumption_dict.pop('id')
+ resource_consumption_dict.pop('quotas')
+ account_details['resource_consumption'] = resource_consumption_dict
if audits:
account_details['audits'] = self.account_audits(
@@ -3984,364 +3601,6 @@ class DecortController(object):
" account specified.")
self.amodule.fail_json(**self.result)
- @waypoint
- def account_resource_groups(
- self,
- account_id: int,
- page_number: int = 1,
- page_size: None | int = None,
- rg_id: None | int = None,
- rg_name: None | str = None,
- rg_status: None | str = None,
- sort_by_field: None | str = None,
- sort_by_asc=True,
- vins_id: None | int = None,
- vm_id: None | int = None,
- ) -> list[dict]:
- """
- Implementation of functionality of the API method
- `/cloudapi/account/listRG`.
- """
-
- if rg_status and not rg_status in self.RESOURCE_GROUP_STATUSES:
- self.result['msg'] = (
- f'{rg_status} is not valid RG status for filtering'
- f' account resource groups list.'
- )
- self.amodule.fail_json(**self.result)
-
- sort_by = None
- if sort_by_field:
- if not sort_by_field in self.FIELDS_FOR_SORTING_ACCOUNT_RG_LIST:
- self.result['msg'] = (
- f'{sort_by_field} is not valid field for sorting'
- f' account resource groups list.'
- )
- self.amodule.fail_json(**self.result)
-
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field}'
-
- api_params = {
- 'accountId': account_id,
- 'name': rg_name,
- 'page': page_number if page_size else None,
- 'rgId': rg_id,
- 'size': page_size,
- 'sortBy': sort_by,
- 'status': rg_status,
- 'vinsId': vins_id,
- 'vmId': vm_id,
- }
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/account/listRG',
- arg_params=api_params,
- )
-
- resource_groups = api_resp.json()['data']
-
- for rg in resource_groups:
- rg['createdTime_readable'] = self.sec_to_dt_str(rg['createdTime'])
- rg['deletedTime_readable'] = self.sec_to_dt_str(rg['deletedTime'])
- rg['updatedTime_readable'] = self.sec_to_dt_str(rg['updatedTime'])
- rg['description'] = rg.pop('desc')
-
- return resource_groups
-
- @waypoint
- def account_computes(
- self,
- account_id: int,
- compute_id: None | int = None,
- compute_ip: None | str = None,
- compute_name: None | str = None,
- compute_tech_status: None | str = None,
- ext_net_id: None | int = None,
- ext_net_name: None | str = None,
- page_number: int = 1,
- page_size: None | int = None,
- rg_id: None | int = None,
- rg_name: None | str = None,
- sort_by_asc=True,
- sort_by_field: None | str = None,
- ) -> list[dict]:
- """
- Implementation of functionality of the API method
- `/cloudapi/account/listComputes`.
- """
-
- if compute_tech_status and (
- not compute_tech_status in self.COMPUTE_TECH_STATUSES
- ):
- self.result['msg'] = (
- f'{compute_tech_status} is not valid compute tech status'
- f' for filtering account computes list.'
- )
- self.amodule.fail_json(**self.result)
-
- sort_by = None
- if sort_by_field:
- if not sort_by_field in self.FIELDS_FOR_SORTING_ACCOUNT_COMPUTE_LIST:
- self.result['msg'] = (
- f'{sort_by_field} is not valid field for sorting'
- f' account computes list.'
- )
- self.amodule.fail_json(**self.result)
-
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field}'
-
- api_params = {
- 'accountId': account_id,
- 'computeId': compute_id,
- 'extNetId': ext_net_id,
- 'extNetName': ext_net_name,
- 'ipAddress': compute_ip,
- 'name': compute_name,
- 'page': page_number if page_size else None,
- 'rgId': rg_id,
- 'rgName': rg_name,
- 'size': page_size,
- 'sortBy': sort_by,
- 'techStatus': compute_tech_status,
- }
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/account/listComputes',
- arg_params=api_params,
- )
-
- computes = api_resp.json()['data']
-
- for c in computes:
- c['createdTime_readable'] = self.sec_to_dt_str(c['createdTime'])
- c['deletedTime_readable'] = self.sec_to_dt_str(c['deletedTime'])
- c['updatedTime_readable'] = self.sec_to_dt_str(c['updatedTime'])
-
- return computes
-
- @waypoint
- def account_vinses(
- self,
- account_id: int,
- ext_ip: None | str = None,
- page_number: int = 1,
- page_size: None | int = None,
- rg_id: None | int = None,
- sort_by_asc=True,
- sort_by_field: None | str = None,
- vins_id: None | int = None,
- vins_name: None | str = None,
- ) -> list[dict]:
- """
- Implementation of functionality of the API method
- `/cloudapi/account/listVins`.
- """
-
- sort_by = None
- if sort_by_field:
- if not sort_by_field in self.FIELDS_FOR_SORTING_ACCOUNT_VINS_LIST:
- self.result['msg'] = (
- f'{sort_by_field} is not valid field for sorting'
- f' account vins list.'
- )
- self.amodule.fail_json(**self.result)
-
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field}'
-
- api_params = {
- 'accountId': account_id,
- 'extIp': ext_ip,
- 'name': vins_name,
- 'page': page_number if page_size else None,
- 'rgId': rg_id,
- 'size': page_size,
- 'sortBy': sort_by,
- 'vinsId': vins_id,
- }
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/account/listVins',
- arg_params=api_params,
- )
-
- vinses = api_resp.json()['data']
-
- for v in vinses:
- v['createdTime_readable'] = self.sec_to_dt_str(v['createdTime'])
- v['deletedTime_readable'] = self.sec_to_dt_str(v['deletedTime'])
- v['updatedTime_readable'] = self.sec_to_dt_str(v['updatedTime'])
-
- return vinses
-
- @waypoint
- def account_disks(
- self,
- account_id: int,
- disk_id: None | int = None,
- disk_name: None | str = None,
- disk_size: None | int = None,
- disk_type: None | str = None,
- page_number: int = 1,
- page_size: None | int = None,
- sort_by_asc=True,
- sort_by_field: None | str = None,
- ) -> list[dict]:
- """
- Implementation of functionality of the API method
- `/cloudapi/account/listDisks`.
- """
-
- if disk_type and (
- not disk_type in self.DISK_TYPES
- ):
- self.result['msg'] = (
- f'{disk_type} is not valid disk type'
- f' for filtering account disk list.'
- )
- self.amodule.fail_json(**self.result)
-
- sort_by = None
- if sort_by_field:
- if not sort_by_field in self.FIELDS_FOR_SORTING_ACCOUNT_DISK_LIST:
- self.result['msg'] = (
- f'{sort_by_field} is not valid field for sorting'
- f' account disks list.'
- )
- self.amodule.fail_json(**self.result)
-
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field}'
-
- api_params = {
- 'accountId': account_id,
- 'diskId': disk_id,
- 'diskMaxSize': disk_size,
- 'name': disk_name,
- 'page': page_number if page_size else None,
- 'size': page_size,
- 'sortBy': sort_by,
- 'type': disk_type,
- }
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/account/listDisks',
- arg_params=api_params,
- )
-
- disks = api_resp.json()['data']
-
- return disks
-
- @waypoint
- def account_images(
- self,
- account_id: int,
- image_id: None | int = None,
- image_name: None | str = None,
- image_type: None | str = None,
- page_number: int = 1,
- page_size: None | int = None,
- sort_by_asc=True,
- sort_by_field: None | str = None,
- ) -> list[dict]:
- """
- Implementation of functionality of the API method
- `/cloudapi/account/listTemplates`.
- """
-
- if image_type and (
- not image_type in self.IMAGE_TYPES
- ):
- self.result['msg'] = (
- f'{image_type} is not valid image type'
- f' for filtering account image list.'
- )
- self.amodule.fail_json(**self.result)
-
- sort_by = None
- if sort_by_field:
- if not sort_by_field in self.FIELDS_FOR_SORTING_ACCOUNT_IMAGE_LIST:
- self.result['msg'] = (
- f'{sort_by_field} is not valid field for sorting'
- f' account image list.'
- )
- self.amodule.fail_json(**self.result)
-
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field}'
-
- api_params = {
- 'accountId': account_id,
- 'imageId': image_id,
- 'name': image_name,
- 'page': page_number if page_size else None,
- 'size': page_size,
- 'sortBy': sort_by,
- 'type': image_type,
- }
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/account/listTemplates',
- arg_params=api_params,
- )
-
- images = api_resp.json()['data']
-
- return images
-
- @waypoint
- def account_flip_groups(
- self,
- account_id: int,
- ext_net_id: None | int = None,
- flip_group_id: None | int = None,
- flip_group_ip: None | str = None,
- flip_group_name: None | str = None,
- page_number: int = 1,
- page_size: None | int = None,
- vins_id: None | int = None,
- vins_name: None | str = None,
- ) -> list[dict]:
- """
- Implementation of functionality of the API method
- `/cloudapi/account/listFlipGroups`.
- """
-
- api_params = {
- 'accountId': account_id,
- 'byIp': flip_group_ip,
- 'extnetId': ext_net_id,
- 'flipGroupId': flip_group_id,
- 'name': flip_group_name,
- 'page': page_number if page_size else None,
- 'size': page_size,
- 'vinsId': vins_id,
- 'vinsName': vins_name,
- }
- }
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/account/listFlipGroups',
- arg_params=api_params,
- )
-
- flip_groups = api_resp.json()['data']
-
- for fg in flip_groups:
- fg['createdTime_readable'] = self.sec_to_dt_str(fg['createdTime'])
- fg['deletedTime_readable'] = self.sec_to_dt_str(fg['deletedTime'])
- fg['updatedTime_readable'] = self.sec_to_dt_str(fg['updatedTime'])
-
- return flip_groups
-
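Several of the list methods enrich each record with `*_readable` variants of its Unix timestamps via `self.sec_to_dt_str`. That helper's exact output format is not shown in this hunk; the sketch below assumes UTC ISO-8601 text and an empty string for zero timestamps:

```python
from datetime import datetime, timezone

def add_readable_times(records: list[dict],
                       keys: tuple = ('createdTime', 'deletedTime',
                                      'updatedTime')) -> list[dict]:
    """Add '<key>_readable' next to each Unix-seconds field (in place)."""
    for record in records:
        for key in keys:
            seconds = record.get(key, 0)
            record[f'{key}_readable'] = (
                datetime.fromtimestamp(seconds, tz=timezone.utc).isoformat()
                if seconds else ''
            )
    return records
```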
@waypoint
def account_audits(self, account_id: int,
fail_if_not_found=False) -> None | list:
@@ -4755,7 +4014,6 @@ class DecortController(object):
name: None | str = None,
cpu_quota: None | int = None,
disks_size_quota: None | int = None,
- ext_traffic_quota: None | int = None,
gpu_quota: None | int = None,
public_ip_quota: None | int = None,
ram_quota: None | int = None,
@@ -4781,7 +4039,6 @@ class DecortController(object):
'gpu_units': gpu_quota,
'maxCPUCapacity': cpu_quota,
'maxMemoryCapacity': ram_quota,
- 'maxNetworkPeerTransfer': ext_traffic_quota,
'maxNumPublicIP': public_ip_quota,
'maxVDiskCapacity': disks_size_quota,
'name': name,
@@ -4819,7 +4076,6 @@ class DecortController(object):
quotas = {
'CPU quota': cpu_quota,
'disks size quota': disks_size_quota,
- 'external networks traffic quota': ext_traffic_quota,
'GPU quota': gpu_quota,
'public IP amount quota': public_ip_quota,
'RAM quota': ram_quota,
@@ -5576,31 +4832,6 @@ class DecortController(object):
if val and val < MIN_IOPS:
self.result['msg'] = (f"{arg} was set below the minimum iops {MIN_IOPS}: {val} provided")
return
-
- def disk_delete(self, disk_id, permanently, detach, reason):
- """Deletes the specified Disk.
-
- @param (int) disk_id: ID of the Disk to be deleted.
- @param (bool) permanently: tells if deletion should be permanent. If False, the Disk will be
- marked as DELETED and placed into a trash bin for a predefined period of time (usually a few
- days). Until this period passes, such a Disk can be restored by calling the corresponding
- 'restore' method.
- """
-
- self.result['waypoints'] = "{} -> {}".format(self.result['waypoints'], "disk_delete")
-
- if self.amodule.check_mode:
- self.result['failed'] = False
- self.result['msg'] = "disk_delete() in check mode: delete Disk ID {} was requested.".format(disk_id)
- return
-
- api_params = dict(diskId=disk_id,
- detach=detach,
- permanently=permanently)
- self.decort_api_call(requests.post, "/restmachine/cloudapi/disks/delete", api_params)
- # On success the above call will return here. On error it will abort execution by calling fail_json.
- self.result['failed'] = False
- self.result['changed'] = True
- return disk_id
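The check-mode guard used by `disk_delete` (and the other mutating methods) reports intent without performing the API call. A simplified sketch of that pattern — `run_or_report` and its result shape are illustrative, not the module's actual helpers:

```python
def run_or_report(check_mode: bool, action, description: str) -> dict:
    """In Ansible check mode only describe the action; otherwise perform it."""
    if check_mode:
        return {'failed': False, 'changed': False,
                'msg': f'check mode: {description} was requested.'}
    action()  # e.g. the real decort_api_call(...)
    return {'failed': False, 'changed': True, 'msg': description}
```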
def _disk_get_by_id(self, disk_id):
"""Helper function that locates Disk by ID and returns Disk facts. This function
@@ -5711,204 +4942,6 @@ class DecortController(object):
return 0, None
- def disk_create(
- self,
- accountId,
- name,
- description,
- size,
- sep_id,
- pool,
- storage_policy_id: int,
- ):
- """Provision Disk according to the specified arguments.
- Note that disks created by this method will be of type 'D' (data disks).
- If a critical error occurs, the embedded call to the API function will abort further execution
- of the script and relay the error to Ansible.
-
- @param (string) name: name to assign to the Disk.
- @param (int) size: size of the disk in GB.
- @param (int) accountId: ID of the account the disk will belong to.
- @param (int) sep_id: ID of the SEP (Storage Endpoint Provider) where the disk will be created.
- @param (string) pool: optional name of the pool where this disk will be created.
- @param (string) description: optional text description of this disk.
- @param (int) storage_policy_id: ID of the storage policy to apply to the new disk.
-
- @return: ID of the newly created Disk (in Ansible check mode 0 is returned).
- """
-
- self.result['waypoints'] = "{} -> {}".format(self.result['waypoints'], "disk_creation")
- api_params = dict(
- accountId=accountId,
- name=name,
- description=description,
- size=size,
- sep_id=sep_id,
- pool=pool,
- storage_policy_id=storage_policy_id,
- )
- api_resp = self.decort_api_call(requests.post, "/restmachine/cloudapi/disks/create", api_params)
- if api_resp.status_code == 200:
- ret_disk_id = json.loads(api_resp.content.decode('utf8'))
-
- self.result['failed'] = False
- self.result['changed'] = True
- return ret_disk_id
-
- def disk_resize(self, disk_facts, new_size):
- """Resize Disk. Only increasing disk size is allowed.
-
- @param (dict) disk_facts: dictionary with target Disk details as returned by the ../disks/get
- API call or the disk_find() method.
- @param (int) new_size: new size of the disk in GB. It must be greater than the current disk
- size for the method to succeed.
- """
-
- self.result['waypoints'] = "{} -> {}".format(self.result['waypoints'], "disk_resize")
-
- # Values that are specified via the Jinja templating engine (e.g. "{{ new_size }}") may come
- # as strings. To make sure the comparison of new values against the current disk size is done
- # correctly, we explicitly cast them to type int here.
- new_size = int(new_size)
-
- if self.amodule.check_mode:
- self.result['failed'] = False
- self.result['msg'] = ("disk_resize() in check mode: resize Disk ID {} "
- "to {} GB was requested.").format(disk_facts['id'], new_size)
- return
-
- if not new_size:
- self.result['failed'] = False
- self.result['warning'] = "disk_resize(): zero size requested for Disk ID {} - ignoring.".format(
- disk_facts['id'])
- return
-
- if new_size < disk_facts['sizeMax']:
- self.result['failed'] = True
- self.result['msg'] = ("disk_resize(): downsizing Disk ID {} is not allowed - current "
- "size {}, requested size {}.").format(disk_facts['id'],
- disk_facts['sizeMax'], new_size)
- return
-
- if new_size == disk_facts['sizeMax']:
- self.result['failed'] = False
- self.result['warning'] = ("disk_resize(): nothing to do for Disk ID {} - new size is the "
- "same as current.").format(disk_facts['id'])
- return
-
- api_params = dict(diskId=disk_facts['id'], size=new_size)
- # NOTE: we are using API "resize2", as in this module we are managing
- # disks attached to compute(s) (DSF ver.2 only)
- api_resp = self.decort_api_call(requests.post, "/restmachine/cloudapi/disks/resize2", api_params)
- # On success the above call will return here. On error it will abort execution by calling fail_json.
- self.result['failed'] = False
- self.result['changed'] = True
- disk_facts['sizeMax'] = new_size
-
- return
-
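`disk_resize`'s guard logic above — cast Jinja-templated strings to int, ignore zero, reject downsizing, skip no-ops — can be isolated as a pure function. The name and return values here are illustrative, not the module's actual code:

```python
def classify_resize(current_gb: int, requested) -> str:
    """Decide what a resize request means for a disk of current_gb GB."""
    requested = int(requested)  # Jinja-templated values may arrive as strings
    if not requested:
        return 'ignore'         # zero size requested: warn and skip
    if requested < current_gb:
        return 'error'          # downsizing is not allowed
    if requested == current_gb:
        return 'noop'           # nothing to do
    return 'resize'
```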
- def disk_limitIO(self, disk_id, limits):
- """Sets I/O limits on an already created Disk identified by its ID.
- @param (int) disk_id: ID of the Disk to limit.
- @param (dict) limits: dictionary with the limits to apply.
- @returns: nothing on success. On error this method will abort module execution.
- """
- self.result['waypoints'] = "{} -> {}".format(self.result['waypoints'], "disk_limitIO")
- api_params = dict(diskId=disk_id,
- **limits)
- self.decort_api_call(requests.post, "/restmachine/cloudapi/disks/limitIO", api_params)
- self.result['changed'] = True
- self.result['msg'] = "Specified Disk ID {} limited successfully.".format(disk_id)
- return
-
- def disk_rename(self, disk_id, name):
- """Renames disk to the specified new name.
-
- @param disk_id: ID of the Disk to rename.
- @param name: New name.
-
- @returns: nothing on success. On error this method will abort module execution.
- """
- self.result['waypoints'] = "{} -> {}".format(self.result['waypoints'], "disk_rename")
- api_params = dict(diskId=disk_id,
- name=name)
- self.decort_api_call(requests.post, "/restmachine/cloudapi/disks/rename", api_params)
- # On success the above call will return here. On error it will abort execution by calling fail_json.
- self.result['failed'] = False
- self.result['changed'] = True
- self.result['msg'] = ("Disk with id '{}' successfully renamed to '{}'.").format(disk_id, name)
- return
-
- def disk_restore(self, disk_id):
- """Restores a previously deleted Disk identified by its ID. For the restore to succeed
- the Disk must be in the 'DELETED' state.
-
- @param disk_id: ID of the Disk to restore.
-
- @returns: nothing on success. On error this method will abort module execution.
- """
-
- self.result['waypoints'] = "{} -> {}".format(self.result['waypoints'], "disk_restore")
-
- if self.amodule.check_mode:
- self.result['failed'] = False
- self.result['msg'] = "disk_restore() in check mode: restore Disk ID {} was requested.".format(disk_id)
- return
-
- api_params = dict(diskId=disk_id)
- self.decort_api_call(requests.post, "/restmachine/cloudapi/disks/restore", api_params)
- # On success the above call will return here. On error it will abort execution by calling fail_json.
- self.result['failed'] = False
- self.result['changed'] = True
- return
-
- def disk_share(self, disk_id, share=False):
- """Shares or unshares a data disk.
-
- @param (int) disk_id: ID of the Disk to share or unshare.
-
- @param (bool) share: True to share the disk, False to unshare it.
-
- @returns: nothing on success. On error this method will abort module execution.
- """
- self.result['waypoints'] = "{} -> {}".format(self.result['waypoints'], "disk_share")
- if self.amodule.check_mode:
- self.result['failed'] = False
- self.result['msg'] = "disk_share() in check mode: share Disk ID {} was requested.".format(disk_id)
- return
-
- api_params = dict(diskId=disk_id)
- if share:
- self.decort_api_call(requests.post, "/restmachine/cloudapi/disks/share", api_params)
- else:
- self.decort_api_call(requests.post, "/restmachine/cloudapi/disks/unshare", api_params)
- # On success the above call will return here. On error it will abort execution by calling fail_json.
- self.result['failed'] = False
- self.result['changed'] = True
- return
-
- @waypoint
- @checkmode
- def disk_change_storage_policy(
- self,
- disk_id: int,
- storage_policy_id: int,
- ):
- """
- Implementation of functionality of the API method
- `/cloudapi/disks/change_disk_storage_policy`.
- """
- api_params = {
- 'disk_id': disk_id,
- 'storage_policy_id': storage_policy_id,
- }
- response = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/disks/change_disk_storage_policy', # noqa: E501
- arg_params=api_params,
- )
- self.set_changed()
- return response.json()
##############################
#
@@ -6374,7 +5407,7 @@ class DecortController(object):
description,
extnet_only,
storage_policy_id: int,
- master_chipset: Literal['Q35', 'i440fx'] = 'i440fx',
+ master_chipset: Literal['Q35', 'i440fx'] = 'Q35',
lb_sysctl: dict | None = None,
zone_id: None | int = None,
):
@@ -7000,7 +6033,7 @@ class DecortController(object):
bs_id,
gr_dict,
desired_count,
- chipset: Literal['Q35', 'i440fx'] = 'i440fx',
+ chipset: Literal['Q35', 'i440fx'],
):
self.result['waypoints'] = "{} -> {}".format(self.result['waypoints'], "group_resize_count")
@@ -7083,7 +6116,6 @@ class DecortController(object):
arg_name,
chipset: Literal['Q35', 'i440fx'],
storage_policy_id: int,
- driver: str,
arg_count=1,
arg_cpu=1,
arg_ram=1024,
@@ -7113,7 +6145,6 @@ class DecortController(object):
timeoutStart = arg_timeout,
chipset=chipset,
storage_policy_id=storage_policy_id,
- driver=driver,
)
api_resp = self.decort_api_call(requests.post, api_url, api_params)
new_bsgroup_id = int(api_resp.text)
@@ -7958,25 +6989,6 @@ class DecortController(object):
self.set_changed()
return api_response.json()
- @waypoint
- @checkmode
- def snapshot_create(self, compute_id: int, label: str):
- """
- Implementation of functionality of the API method
- `/cloudapi/compute/snapshotCreate`.
- """
- self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/compute/snapshotCreate',
- arg_params={
- 'computeId': compute_id,
- 'label': label,
- },
- )
- self.set_changed()
- self.message(
- f'Snapshot {label} for VM ID {compute_id} created successfully.'
- )
@waypoint
@checkmode
@@ -8053,21 +7065,6 @@ class DecortController(object):
break
time.sleep(sleep_interval)
- @waypoint
- def snapshot_list(self, compute_id: int) -> list:
- """
- Implementation of functionality of the API method
- `/cloudapi/compute/snapshotList`.
- """
- snapshots_list_response = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/compute/snapshotList',
- arg_params={
- 'computeId': compute_id,
- }
- )
- return snapshots_list_response.json()['data']
-
@waypoint
def snapshot_usage(
self,
@@ -8148,73 +7145,6 @@ class DecortController(object):
return zone_info
- @waypoint
- def trunk_get(self, id: int) -> dict:
- """
- Implementation of functionality of the API method
- `/cloudapi/trunk/get`.
- """
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.get,
- arg_api_name='/restmachine/cloudapi/trunk/get',
- arg_params={
- 'id': id,
- },
- not_fail_codes=[404],
- )
-
- trunk_info = None
- if api_resp.status_code == 200:
- trunk_info = api_resp.json()
-
- if not trunk_info:
- self.message(
- self.MESSAGES.obj_not_found(obj='trunk', id=id)
- )
- self.exit(fail=True)
-
- return trunk_info
-
- @waypoint
- def user_trunks(
- self,
- account_ids: None | list = None,
- ids: None | list = None,
- status: None | TrunkStatus = None,
- vlan_ids: None | list = None,
- page_number: int = 1,
- page_size: None | int = None,
- sort_by_asc: bool = True,
- sort_by_field: None | TrunksSortableField = None,
- ) -> list[dict]:
- """
- Implementation of the functionality of API method
- `/cloudapi/trunk/list`.
- """
- sort_by = None
- if sort_by_field:
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field.name}'
-
- api_params = {
- 'account_ids': account_ids,
- 'ids': ids,
- 'trunk_tags': vlan_ids,
- 'status': status.value if status else None,
- 'page': page_number if page_size else None,
- 'size': page_size,
- 'sort_by': sort_by,
- }
- api_resp = self.decort_api_call(
- arg_req_function=requests.get,
- arg_api_name='/restmachine/cloudapi/trunk/list',
- arg_params=api_params,
- )
-
- trunks = api_resp.json()['data']
-
- return trunks
def check_account_vm_features(self, vm_feature: VMFeature) -> bool:
return vm_feature.value in self.acc_info['computeFeatures']
@@ -8250,273 +7180,6 @@ class DecortController(object):
return extnet_info
- @waypoint
- def storage_policy_get(self, id: int) -> dict:
- """
- Implementation of functionality of the API method
- `/cloudapi/storage_policy/get`.
- """
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.get,
- arg_api_name='/restmachine/cloudapi/storage_policy/get',
- arg_params={
- 'storage_policy_id': id,
- },
- not_fail_codes=[404],
- )
-
- storage_policy_info = None
- if api_resp.status_code == 200:
- storage_policy_info = api_resp.json()
-
- if not storage_policy_info:
- self.message(
- self.MESSAGES.obj_not_found(obj='storage_policy', id=id)
- )
- self.exit(fail=True)
-
- return storage_policy_info
-
- @waypoint
- def user_storage_policies(
- self,
- account_id: None | int = None,
- description: None | str = None,
- id: None | int = None,
- iops_limit: None | int = None,
- name: None | str = None,
- pool_name: None | str = None,
- rg_id: None | int = None,
- sep_id: None | int = None,
- status: None | StoragePolicyStatus = None,
- page_number: int = 1,
- page_size: None | int = None,
- sort_by_asc: bool = True,
- sort_by_field: None | StoragePoliciesSortableField = None,
- ) -> list[dict]:
- """
- Implementation of the functionality of API method
- `/cloudapi/storage_policy/list`.
- """
- sort_by = None
- if sort_by_field:
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field.name}'
-
- api_params = {
- 'account_id': account_id,
- 'by_id': id,
- 'name': name,
- 'status': status.value if status else None,
- 'desc': description,
- 'limit_iops': iops_limit,
- 'resgroup_id': rg_id,
- 'sep_id': sep_id,
- 'pool_name': pool_name,
- 'page': page_number if page_size else None,
- 'size': page_size,
- 'sort_by': sort_by,
- }
- api_resp = self.decort_api_call(
- arg_req_function=requests.get,
- arg_api_name='/restmachine/cloudapi/storage_policy/list',
- arg_params=api_params,
- )
-
- storage_policies = api_resp.json()['data']
-
- return storage_policies
-
- @waypoint
- def user_security_groups(
- self,
- account_id: None | int = None,
- description: None | str = None,
- id: None | int = None,
- name: None | str = None,
- created_timestamp_min: None | int = None,
- created_timestamp_max: None | int = None,
- updated_timestamp_min: None | int = None,
- updated_timestamp_max: None | int = None,
- page_number: int = 1,
- page_size: None | int = None,
- sort_by_asc: bool = True,
- sort_by_field: None | SecurityGroupSortableField = None,
- ) -> list[dict]:
- """
- Implementation of the functionality of API method
- `/cloudapi/security_group/list`.
- """
- sort_by = None
- if sort_by_field:
- sort_by_prefix = '+' if sort_by_asc else '-'
- sort_by = f'{sort_by_prefix}{sort_by_field.value}'
-
- api_params = {
- 'account_id': account_id,
- 'by_id': id,
- 'name': name,
- 'description': description,
- 'created_min': created_timestamp_min,
- 'created_max': created_timestamp_max,
- 'updated_min': updated_timestamp_min,
- 'updated_max': updated_timestamp_max,
- 'page': page_number if page_size else None,
- 'size': page_size,
- 'sort_by': sort_by,
- }
- api_resp = self.decort_api_call(
- arg_req_function=requests.get,
- arg_api_name='/restmachine/cloudapi/security_group/list',
- arg_params=api_params,
- )
-
- security_groups = api_resp.json()['data']
-
- return security_groups
-
- def security_group_find(self, account_id: int, name: str) -> None | dict:
- security_groups_by_account_id = self.user_security_groups(
- account_id=account_id,
- )
- for sg in security_groups_by_account_id:
- if sg['name'] == name:
- return sg
-
- @waypoint
- def security_group_get(self, id: int) -> dict:
- """
- Implementation of functionality of the API method
- `/cloudapi/security_group/get`.
- """
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.get,
- arg_api_name='/restmachine/cloudapi/security_group/get',
- arg_params={
- 'security_group_id': id,
- },
- not_fail_codes=[404],
- )
-
- security_group_info = None
- if api_resp.status_code == 200:
- security_group_info = api_resp.json()
-
- if not security_group_info:
- self.message(
- self.MESSAGES.obj_not_found(obj='security_group', id=id)
- )
- self.exit(fail=True)
-
- return security_group_info
-
- @waypoint
- @checkmode
- def security_group_create(
- self,
- account_id: int,
- name: str,
- description: None | str,
- ) -> int:
- """
- Implementation of functionality of the API method
- `/cloudapi/security_group/create`.
- """
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/security_group/create',
- arg_params={
- 'account_id': account_id,
- 'name': name,
- 'description': description,
- },
- )
- self.set_changed()
-
- return api_resp.json()
-
- @waypoint
- @checkmode
- def security_group_detele(self, security_group_id: int) -> bool:
- """
- Implementation of functionality of the API method
- `/cloudapi/security_group/delete`.
- """
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/security_group/delete',
- arg_params={
- 'security_group_id': security_group_id,
- },
- )
- self.set_changed()
-
- return api_resp.json()
-
- @waypoint
- @checkmode
- def security_group_update(
- self,
- security_group_id: int,
- name: None | str,
- description: None | str,
- ) -> dict:
- """
- Implementation of functionality of the API method
- `/cloudapi/security_group/update`.
- """
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/security_group/update',
- arg_params={
- 'security_group_id': security_group_id,
- 'name': name,
- 'description': description,
- },
- )
- self.set_changed()
-
- return api_resp.json()
-
- @waypoint
- @checkmode
- def security_group_create_rule(
- self,
- security_group_id: int,
- direction: SecurityGroupRuleDirection,
- ethertype: None | SecurityGroupRuleEtherType,
- protocol: None | SecurityGroupRuleProtocol,
- port_range_min: None | int,
- port_range_max: None | int,
- remote_ip_prefix: None | str,
- ) -> int:
- """
- Implementation of functionality of the API method
- `/cloudapi/security_group/create_rule`.
- """
-
- api_resp = self.decort_api_call(
- arg_req_function=requests.post,
- arg_api_name='/restmachine/cloudapi/security_group/create_rule',
- arg_params={
- 'security_group_id': security_group_id,
- 'direction': direction.value,
- 'ethertype': ethertype.value if ethertype else None,
- 'protocol': protocol.value if protocol else None,
- 'port_range_min': port_range_min,
- 'port_range_max': port_range_max,
- 'remote_ip_prefix': remote_ip_prefix,
- },
- )
- self.set_changed()
-
- return api_resp.json()
-
@waypoint
@checkmode
def security_group_detele_rule(
diff --git a/requirements-dev.txt b/requirements-dev.txt
deleted file mode 100644
index 61f70b3..0000000
--- a/requirements-dev.txt
+++ /dev/null
@@ -1,2 +0,0 @@
--r requirements.txt
-pre-commit==4.1.0
diff --git a/requirements.txt b/requirements.txt
index 557caaf..14aeab2 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,2 +1,3 @@
ansible==11.6.0
requests==2.32.3
+git+https://repository.basistech.ru/BASIS/dynamix-python-sdk.git@1.4.latest