Overview¶
Notes¶
This manual provides an overview of the basic concepts of the Nano platform and demonstrates the main features of version 1.0.
Visit the official website https://nanos.cloud/en-us/ for more information.
Author: Akumas (bokuore@github.com)
Git Repository: https://github.com/project-nano
This manual: https://nanocloud.readthedocs.io/projects/guide/en/latest/
Introduction¶
Nano is an IaaS (Infrastructure as a Service) software based on CentOS and KVM virtualization technology, which manages virtual machines using server clusters as resource pools.
Nano automates manual operations as much as possible, aiming to provide a powerful platform while keeping things simple and easy.
Nano is developed in Golang, which builds fast and reliable services with a low footprint. All Nano modules compile to standalone binaries, so no dependent libraries are required at deployment.
You can turn any server with Intel VT-x or AMD-V enabled into an IaaS platform, and begin deploying virtual machines within 3 minutes using the tiny Nano installer.
Nano comes with a user-friendly GUI out of the box, and also provides a machine-friendly REST API that integrates easily into existing products or automated scripts.
Nano is released under the MIT license, which allows free modification and personal or commercial use.
Features¶
- Resource Pool : add/remove/enable/disable resource nodes, instance scheduling, real-time resource usage & uptime monitoring, multi-layer dashboard drill-down
- Storage backend support : local disk storage/NFS shared storage
- IPv4 Address pool : instance address management, multiple address ranges, gateway and DNS configuration.
- Cloud Instances :
- Lifecycle management: create/release/start/migrate/failover
- Configuration management: modify guest name/core/memory, QoS(CPU/Disk IO/Bandwidth), extend and shrink disk, host template, reset system
- Guest operating system: reset admin password, CPU/memory usage monitoring, automated disk format/mount, modify the hostname
- Remote access: embedded HTML5 page, third-party VNC connection support, VNC connection encryption
- Batch building: instance template build/clone/upload/download, batch creating and deleting
- Data security: incremental snapshot creation/restore/manage
- Media: ISO image upload/insert/eject
- Network : address binding, recycling, and migration; gateway and DNS assignment
- Platform management : system initialization, user/user group/role management, resource visibility, network discovery, module start and stop, connection/running status detection, operation logs
- Tools : Installer
- Browser : Chrome/Firefox
- Multilingual : English/Chinese
Chapters¶
Concepts¶
A Nano platform currently consists of three modules: Core, Cell, and FrontEnd.
The Cell creates and manages virtual machines. The Core forms resource pools from multiple Cells to schedule and allocate instances on demand. The FrontEnd provides an HTML5 web portal based on the API service of the Core module.
Although all modules can be installed on a single server, it is recommended to deploy each module on a separate server for better availability in a production system, as shown in the figure below:

Nano uses both external and internal network communication.
External communication includes the web portal (TCP 5850 by default) and the API service of the Core module (TCP 5870 by default), both of which can be configured on demand.
Internal communication includes UDP packets between modules and HTTPS for data transmission. All internal ports are allocated dynamically and automatically, so there is no need to configure them.

Communication Domain¶
The modules of Nano can discover each other and complete networking automatically without any configuration.
The automatic discovery is implemented on top of the multicast protocol. A communication domain is defined by a triple consisting of a domain name, a multicast address, and a port (the default is < “Nano”: 224.0.0.226:5599 >). Modules in the same domain can discover and communicate with each other.
If you need to configure multiple clusters within one LAN, assign them different domain triples.
The valid multicast addresses range from “224.0.0.0” to “224.0.0.255”. Refer to Multicast address for details.
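When assigning your own domain triples, the multicast address must stay inside this range. A small POSIX-shell sketch (the function name is ours, not part of Nano) makes the check explicit:

```shell
# Sketch: check that a multicast address for a Nano domain triple
# falls inside the valid range 224.0.0.0 - 224.0.0.255
in_nano_mcast_range() {
  case "$1" in
    224.0.0.*)
      oct="${1#224.0.0.}"
      # last octet must be a plain number no greater than 255
      case "$oct" in ''|*[!0-9]*) return 1;; esac
      [ "$oct" -le 255 ]
      ;;
    *) return 1;;
  esac
}
in_nano_mcast_range 224.0.0.226 && echo "valid domain address"
```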
As shown in the following figure:

The Core must start first, as the stub module. If the Core stops or restarts, the modules already started will automatically attempt to resume communication.
Resource Model¶
A local Nano cluster forms an availability zone. A zone contains multiple resource pools, and each resource pool contains one or more Cell resource nodes.
A Cell can belong to only one pool. When creating or migrating an instance, the Core chooses an appropriate Cell according to the real-time load of each Cell in the specified pool.

An empty default resource pool is created after the system is installed. Remember to add a Cell node to the resource pool before creating any instance.
Images¶
Nano has two kinds of images: disk image and media image.
A disk image represents the disk data of a virtual machine, and can be used to quickly clone a batch of instances with an OS and software identical to the original.
A media image represents DVD data in ISO format, which can be loaded into an instance as a virtual CD for installing an operating system.
You can upload pre-built images to the system to save customization time. Visit the official website to download the pre-built CentOS 7 image.

System Deployment¶
Nano provides an installer for automated deployment; get the latest version from the official website or GitHub.
For new users, it is highly recommended not to adjust the configuration during the first installation; the installer will choose appropriate parameters.
Minimal System Requirements:
- Virtualization-enabled x86 servers, or nested-virtualization-enabled virtual machines (Intel VT-x/AMD-V)
- 2 cores/4 GB memory/50 GB disk/1 network interface
- CentOS 7.6(1810) Minimal
- Operating system installed with network ready
- If you use RAID or LVM, configure them before installing Nano.
It is recommended to install on a clean CentOS 7.6 system without any extra packages.
Install Modules¶
Download and unzip the package, then execute the installer and choose the modules you need to install.
Execute the following instructions:
$wget https://nanos.cloud/media/nano_installer_1.0.0.tar.gz
$tar zxfv nano_installer_1.0.0.tar.gz
$cd nano_installer
$./installer
After the Installer starts, you are asked to choose the modules to install.
For example, enter “2” to install the Cell only, or “3” to install all modules on the current server.
All modules are installed in the directory /opt/nano, with the domain identity “<nano: 224.0.0.226:5599>” by default.
It is not recommended to change the default parameters for a first-time installation, or when there is only one domain in the LAN.
The installer will ask you to enter “yes” to build a bridged network “br0” before installing the Cell.
It is recommended to remove any previous “br0” not generated by Nano before installing.
A known issue is that the SSH connection may be interrupted on some servers when the installer configures the bridged network, resulting in an installation failure. Use IPMI or iDRAC to execute the installer if you encounter this issue.
The installer will ask you to choose a network interface to work on if multiple interfaces are available on the current server.
For example, suppose the server has two network interfaces: eth0 with address “192.168.1.56/24” and eth1 with address “172.16.8.55/24”. If you want Nano to work on the segment “172.16.8.0/24”, choose “eth1”.
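The decision in the example reduces to comparing address prefixes. A minimal POSIX-shell sketch of that comparison (the helper name is ours, and it assumes /24 segments as in the example):

```shell
# Sketch: decide which interface serves the segment Nano should work on,
# using a simple /24 prefix comparison
same_24() { [ "${1%.*}" = "${2%.*}" ]; }
same_24 172.16.8.55 172.16.8.0 && echo "choose eth1"
same_24 192.168.1.56 172.16.8.0 || echo "eth0 is on a different segment"
```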
Note: when installing dependent libraries, the installer tries the built-in RPM packages first. If any error occurs, it falls back to yum, so keep your network available.
Start Modules¶
All Nano modules provide a command-line interface, invoked as: < module name > [start | stop | status | halt].
- start: starts the module service; outputs version information on success, or an error message when the start fails.
- stop: stops the service gracefully, releasing allocated resources and notifying related modules.
- status: checks whether the module is running.
- halt: terminates the service immediately.
You need to start the modules to provide services; all binaries are installed in the directory “/opt/nano” by default.
Please remember: always start the Core service first.
$cd /opt/nano/core
$./core start
$cd ../cell
$./cell start
$cd ../frontend
$./frontend start
When the FrontEnd module starts successfully, it prints a listening address such as “192.168.6.3:5870”. Open this web portal with Chrome or Firefox to manage your newly installed Nano platform.
Initial System¶
The first time you open the Nano Web portal, you will be prompted to create an initial administrator.
The password must be at least 8 characters long and contain at least one digit, one lowercase letter, and one uppercase letter.
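The policy above can be expressed as a small POSIX-shell check, useful when scripting account provisioning (the function name is ours, not part of Nano):

```shell
# Sketch of the initial-admin password rule: at least 8 characters,
# with at least one digit, one lowercase, and one uppercase letter
valid_password() {
  p=$1
  [ "${#p}" -ge 8 ] || return 1
  case "$p" in *[0-9]*) ;; *) return 1;; esac
  case "$p" in *[a-z]*) ;; *) return 1;; esac
  case "$p" in *[A-Z]*) ;; *) return 1;; esac
}
valid_password "Nano2020pass" && echo "acceptable"
```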

After you log in, you can start managing the Nano platform or creating more user accounts for your team.

The logged-in user is displayed at the bottom of each page, where you can click to log out.

Add Address Pool(optional)¶
By default, virtual machines allocated by Nano obtain IP addresses directly from the physical network through a bridged device. For those who want to manage instance IPs more precisely, an address pool may be a better solution.
An address pool contains multiple IP segments.
When an instance is created, a new IP is allocated from the pool and attached through the DHCP service on the Cell node.
When the instance is deleted, the attached IP is released back to the address pool and can be reallocated later.
Click the “Create” button in the pool list.

When an instance starts up, it fetches the gateway and DNS from the pool configuration via its DHCP client.
After a pool is created, add some usable address ranges on its detail page.

Click the “Add” button on the detail page to add a usable address range. When a new instance is created, an IP in this range will be allocated and attached to it using the DHCP service.
To avoid conflicts with an existing DHCP service on the network, please note:
- Ensure that there is no address overlap between the existing DHCP network and the Nano address range.
- Ensure that the address range is on the same subnet as the gateway and reachable. The gateway IP should not appear in the usable range. Taking the gateway IP “192.168.3.1” as an example, the address range should be “192.168.3.2~192.168.3.240/24”.
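Both rules can be sanity-checked before submitting the range. This POSIX-shell sketch (helper name ours; /24 assumed, as in the example) rejects a gateway that sits inside the range or on another subnet:

```shell
# Sketch: validate an address range against its gateway before adding
# it to the pool (gateway outside the range, same /24 subnet)
range_ok() { # usage: range_ok GATEWAY RANGE_START RANGE_END
  gw=$1 start=$2 end=$3
  # gateway and both ends of the range must share the /24 subnet
  [ "${gw%.*}" = "${start%.*}" ] && [ "${gw%.*}" = "${end%.*}" ] || return 1
  # the gateway must not fall inside the usable range
  g=${gw##*.} s=${start##*.} e=${end##*.}
  [ "$g" -lt "$s" ] || [ "$g" -gt "$e" ]
}
range_ok 192.168.3.1 192.168.3.2 192.168.3.240 && echo "range accepted"
```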

After adding, you can see the available ranges on the detail page. You can also add multiple address ranges to an address pool if necessary.

An address pool must be attached to a resource pool before taking effect. Choose an address pool in the drop-down menu of the resource pool.

Check the resource pool list to verify that the modification succeeded.

After the address pool is configured, an IP is automatically allocated and attached whenever a new instance is created.
You can view the attached IP on the instance detail page or the address pool detail page.


Note: an address pool can be associated with multiple resource pools, and you can change the DNS, gateway, or associations of the address pool in real time. However, all allocated instances must be released before unbinding the associated address pool.
Add Resource¶
Add Cell Node¶
When Nano starts for the first time, an empty resource pool named “default” is created. You must add a Cell node to have enough resources to allocate instances.
On the “Resource Pool” menu, click the “cells” button of the “default” pool.

Click the “Add” button to add a Cell node.

Select a Cell node that is available but has not joined any resource pool from the drop-down menu.

You can see that the new node is online after it is added to the pool.

Please note: you may receive a timeout warning when the Cell takes too long to join the pool, for example when using shared storage. This does not matter in most cases; try refreshing the list to check the status.
When using shared storage, ensure the backend storage is attached successfully on the detail page of the Cell node.

Once the Cell node and backend storage are ready, you can begin creating new instances.
Upload Images¶
An empty instance is of no use at all, so we need to install an operating system and software.
A disk image stores the system data of an original virtual machine, from which a batch of instances with identical OS and software can be cloned quickly.
You can install the Cloud-Init module in the disk image to automatically initialize the administrator password, expand the system disk, or format the data disk.
The Download page of the official site provides pre-built images of CentOS 7.5 Minimal (one of which is embedded with Cloud-Init).
Download an image, then click “Upload” in the “Images” menu to upload it. You can then clone from this image when creating new instances.

A media image represents DVD data in ISO format, which can be loaded into an instance as a virtual CD for installing an operating system.
Media images are usually used to build virtual machine templates; see the instance and platform chapters for more details.
Manage Instances¶
Please note: it is recommended to connect instances directly to the physical network through a bridged interface on the host server, obtaining IPs from the existing router just like non-virtual servers. This mode also helps administrators manage virtualized clusters the same way as traditional clusters.
Create Instance¶
Instances in Nano are created based on a resource pool.
When the Core receives a creation request, it evaluates the free resources and real-time load of each available Cell, and chooses the one with the lowest weighted score for creation.
The following parameters are required to create a new instance:
- Instance name: only digits, letters, and ‘-’ are allowed.
- Resource pool: the name of the resource pool that hosts the instance.
- The number of cores: the number of CPU cores allocated to the instance. It is not recommended to exceed the maximum number of physical threads of the Cell server.
- Memory: the amount of memory allocated to the instance; it cannot exceed the maximum physical memory of the host node.
- System version: Nano optimizes the hardware configuration of the instance based on the system version to provide better performance and compatibility. “Legacy System” is recommended for older OSes.
- System disk: the size of the system volume of the instance.
- Data disk: whether to mount an additional data disk on the instance.
- Auto start: if enabled, the instance starts automatically when the Cell server powers on.
- System image: whether to clone the system disk of the new instance from a pre-built image.
- CPU priority: high-priority instances are guaranteed more system resources when they are busy.
- IOPS: sets the maximum read/write limit for disk IO, unlimited by default.
- Inbound/Outbound bandwidth: sets the network bandwidth limit, unlimited by default.
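The instance-name rule in the list above can be checked before submission when creating instances from scripts. A POSIX-shell sketch (the helper name is ours, not a Nano command):

```shell
# Sketch of the naming rule: only letters, digits, and '-' are accepted
valid_name() {
  # reject empty names and names containing any other character
  case "$1" in ''|*[!A-Za-z0-9-]*) return 1;; *) return 0;; esac
}
valid_name "web-server-01" && echo "name accepted"
```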

There are three kinds of system images available: blank, pre-built, and pre-built with embedded Cloud-Init modules. The official Nano site provides two pre-built images of CentOS 7.5 Minimal for download.
Blank System¶
The default blank image means that there is no operating system or software after the instance is created, and you need to load a media image to install one.
A blank system is usually used to customize a template instance. See the chapter on building an image for more details.

Pre-built Images¶
A pre-built image is already installed with an operating system and some software, and may even contain optimizations or modifications. Such an image can be built from a template instance, or uploaded directly via the web portal.
Choose a pre-built image when creating a new instance. After creation you will have a clone identical to the original template, usable out of the box.

Pre-built Images Embedded With Cloud-init Module¶
This kind of image adds the cloud-init and cloud-utils components (which can be installed using yum on CentOS) on top of a normal pre-built one. Together with the Cloud-Init service provided by Nano, it makes it easy to automatically initialize the administrator password, expand the system disk, or format the data disk.
When using a Cloud-Init embedded image, don’t forget to turn on the CI option and configure the initialization parameters on the creation page.

Please note: if you need the data disk formatted and mounted automatically while the system disk is expanded, both a Cloud-Init embedded image and the enabled option at creation time are required.
Start/Stop And Monitor Instance¶
Users can manage the life cycle of a virtual machine. A new instance is stopped by default, and you can use the following functions.

- Start: starts the instance into the running state.
- Start With Media: boots the instance from a selected media image, usually to install an operating system or restore the system as from a LiveCD.
- Snapshot: manages snapshots created for this instance.
- Build Disk Image: builds a new disk image based on the current disk state for batch creation.
- Reset System: restores the current system data from a specified system image.
- Delete: permanently deletes the instance.
- Migrate: migrates the instance to another node if it was created on shared storage.
- Monitor: monitors the real-time resource usage of the instance.
- Detail: configures and manages the instance in more detail.
When the instance is running, the control buttons change as shown in the figure below,

from left to right:
- Remote Control: opens the secure VNC monitoring page to control the instance directly.
- Stop: simulates pressing the power button, asking the guest to close disks and then shut down. (Note: the ACPID service must be installed on a CentOS instance for this to take effect.)
- Force Stop: forced shutdown, similar to a power outage; it can lead to data inconsistency or even disk damage.
- Reboot: restarts the instance gracefully, usually without affecting the disk data (the ACPID service is required for a CentOS instance).
- Force Reboot: reboots the instance immediately; it can lead to data inconsistency or even disk damage.
- Insert/Eject Media: inserts a media image into the instance, or ejects it.
- Monitor: monitors the real-time resource usage of the instance.
- Detail: configures and manages the instance in more detail.
On the monitoring page, you can operate directly on the instance as if managing a local server. This feature does not require any software in the guest, so it is a guaranteed method of maintenance even when the network or the guest’s internal system is corrupted.

The monitoring page also provides the following utility buttons, from left to right:

- Send Ctrl+Alt+Del: for operating systems such as Windows that use it for login and restart.
- Insert Media Image: loads an image into the CD drive of the instance to install additional software or systems.
- Eject Media Image: ejects the media image from the instance.
- Shutdown: simulates pressing the power button, asking the guest to close disks and then shut down. (Note: the ACPID service must be installed on a CentOS instance for this to take effect.)
- Reboot: restarts the instance gracefully, usually without affecting the disk data (the ACPID service is required for a CentOS instance).
- Force Reboot: reboots the instance immediately; it can lead to data inconsistency or even disk damage.
Besides the embedded monitoring page, you can also access the instance with third-party VNC software, using the authentication info queried from the detail page.
Modify Configuration on the Detail Page¶
Click the “monitor” button in the control bar of an instance in the list.

View the real-time resource usage and status of the running instance on the dashboard:

When you need to reconfigure the instance, you can click the “detail” button.

Enter the detail page:

The details page provides the following functions:
- Modify the instance name.
- Modify the number of cores, memory, and other resource configurations.
- Expand or shrink the disk volume. (Shrinking only reduces the disk space occupied by the physical files, not the logical volume size of the instance. It may take a long time, so it is safe to ignore a timeout warning in most cases.)
- Change the administrator password (qemu-guest-agent is required in the guest system).
- Query the VNC authentication info.
- Modify CPU priority, disk IO, and network bandwidth limits.
Some functions require a running instance, and some require a stopped one.
Snapshots¶
Snapshots are stored states of an instance, which can be restored after data corruption or misoperation if necessary.
When an instance is stopped, click the “snapshot” button to enter the snapshot page.


You can input a name and description, then create a new snapshot to store the current instance state.

You can also restore to a specified system state or delete a snapshot.

The icon below marks the currently active snapshot.

A snapshot that is still active or depended on by others cannot be deleted.
Load Media Image¶
Administrators can load a media image into running instances to install software or operating systems.
Click the “insert” button in the control bar of a running instance.

In the pop-up dialog, select the target media image. The insertion takes effect immediately; it is as simple as inserting a DVD into a notebook.

An icon indicates that a media image is attached when the insertion succeeds:

Click the “eject” button to unload the image from the instance.

Reset System¶
When you need to restore or install a new operating system, you can reset the system directly from a disk image.
Click the “reset system” button of a stopped instance.

Select the image you want to install and click to start.

After the reset completes, the system disk of the instance is restored to the newly installed state.

Migrate Instances¶
Instance migration helps administrators manually optimize resource configuration and perform daily downtime maintenance. Note: migration is only enabled in resource pools using shared storage.
Migrate Single Instance¶
Single migration moves an instance to a specified Cell node. Click the “Migrate” button of a stopped instance in the list.

Select the target node in the pop-up dialog, confirm and wait for the migration to complete.

After the migration completes, you can see that the hosting node of the instance has changed. If you use a third-party remote control tool, please check the latest monitor address on the detail page after migration.

Migrate Whole Node¶
Whole node migration moves all instances on one node to other nodes, usually for downtime maintenance or server relocation.
Click the “Migrate” button in the Cell list.

Select the target node in the pop-up dialog, confirm and wait for the migration to complete.

Batch Processing¶
Batch processing is quite effective when managing a large number of instances.
Batch Creating¶
Batch creating is identical to normal creation, except that it creates a group of instances with similar configurations. Click the “Batch Create” button in the instance list.

Batch Deleting¶
Enter batch mode in the instance list; you can then select multiple instances and delete them at the same time.

Batch Stopping¶
Enter batch mode in the instance list; you can then select multiple instances and stop them at the same time.
Manage Platform¶
Nano provides a bunch of utility functions to simplify daily maintenance.
Dashboard¶
The landing page of the web portal provides a global usage dashboard covering both virtual and physical resources, helping you understand the real-time system load.

You can click on the dashboard to drill down into the detailed usage of resource pools, host nodes, or instances.

Add Resource Node¶
When the system load is heavy, you can add a new Cell node to increase the resources available in the pool.
First, you need to deploy and start the Cell module on a new server, and then click the “Add” button in the Cell list.

Choose the newly installed node from the drop-down menu.

When the status of the Cell node changes to online, it will be able to host new instances.

Build Template Image¶
Nano can turn the system data of any instance into a disk image, from which new instances can be cloned quickly. A cloned instance has an operating system and software identical to the original one.

To build a template image, create an empty instance without a data disk, then install the operating system and software from an uploaded ISO and the network.
When building a template instance, consider the following steps (taking CentOS 7 as an example):
- Set the default hostname and password.
- Bring up the network and enable DHCP to obtain an IP.
- Install the ACPID service to enable shutdown and restart.
- Install qemu-guest-agent to support online password modification, memory usage monitoring, etc.
- Update to the latest software using yum.
- Install cloud-init/cloud-utils if automatic initialization is required.
After the configuration is complete, shut down the instance. Click the “Build” button on the “Images” page, select the template instance from the drop-down menu, and click to create.

When the image build finishes, you can clone from it when creating a new instance.
User Management¶
Administrators can manage the access rights of users, groups, and roles.
Access control is based on menu items. A role defines the menu items that can be accessed; a group has multiple roles.
After a user logs in, the menu list is created based on the roles of the group the user belongs to. A user can only belong to one user group.
User¶
A user account is an identity to log in to the system, and also the key used to check resource ownership and visibility. Click the “New” button in the user list and input the required info to create a new account.

Note: although a new account can log in, it must be added to a user group to access menu items.
User Group¶
The user group is the core of permissions. A group can have multiple roles, and members can access all menu items of the roles belonging to the group.

In the group list, enter the list of members. Click the “Add” button and choose an existing user you want to add to the group.

After the addition, the user can access the authorized menus when logged in.
Resource Visibility¶
Instances, disk images, and media images are system resources that are only visible to their creator by default.
Through visibility settings, administrators can allow access to resources created by other users within the same group, enabling resource sharing.

Operate Logs¶
Nano records user operation logs, including login failures, to audit user operations and aid troubleshooting.

Upgrade System¶
All Nano modules are compiled binaries without external library dependencies, and configuration and data file formats are usually backward compatible.
It is highly recommended to execute the installer and select “4” to upgrade all modules automatically. The installer checks which installed modules need updating, and stops and restarts running modules automatically.
If you have a problem with the automatic upgrade, you can upgrade all modules manually.
All you need to do is stop a running module, replace its binary, and restart it. The only exception is the FrontEnd module, which also contains resource files that need to be replaced.
The following assumes all modules are installed under the path “/opt/nano”.
Download and unzip
$cd ~
$wget https://nanos.cloud/media/nano_installer_1.0.0.tar.gz
$tar zxfv nano_installer_1.0.0.tar.gz
Replace Cell
$cd /opt/nano/cell
$./cell stop
$cp ~/nano_installer/bin/cell .
$./cell start
Replace Core
$cd /opt/nano/core
$./core stop
$cp ~/nano_installer/bin/core .
$./core start
Replace FrontEnd
$cd /opt/nano/frontend
$./frontend stop
$cp ~/nano_installer/bin/frontend .
$\cp ~/nano_installer/bin/frontend_files/resource/. resource/ -Rf
$./frontend start
Network Change¶
The Core and FrontEnd modules use a configured address to provide services; when the server IP changes, you need to modify the configured IP and restart the module.
When the IP of a Cell server changes, you only need to restart the module. It will rediscover the network using the multicast protocol and rejoin the communication domain automatically.
When migrating the whole system or moving to a different network, modify the listening IP and multicast configuration first. Remember to start the Core before the Cells to finish network discovery and switching.
Failover¶
You can enable Failover in a resource pool using shared storage.
If a Cell node is lost while Failover is enabled, all instances on that node are automatically migrated to other nodes in the same pool.

If an instance has auto start enabled, the new node will automatically start it after the migration.
If the lost Cell node rejoins the pool, all instances on that Cell are cleared automatically and the Cell is disabled. The administrator needs to enable the node manually after that.
Disable Node¶
Nano selects the node with the lowest load to create a new instance by default, but users can disable a Cell node manually to prevent it from hosting new instances, which makes it easy to maintain a node or balance node load.

On a disabled node, all instances keep working without any difference. Disabled nodes can be enabled manually later to resume hosting new instances.
Multilingual¶
The web portal currently supports both Chinese and English; switch languages in the page footer.
FAQ¶
What are the installation requirements for Nano?¶
- Virtualization-enabled x86 servers, or nested-virtualization-enabled virtual machines
- 2 cores/4 GB memory/50 GB disk/1 network interface
- CentOS 7.6(1810) Minimal
- Operating system installed with network ready
Can Nano be installed on a virtual machine?¶
In theory, any product that enables nested virtualization should work. Tested products:
- VMware Workstation: test OK with Intel VT-x/AMD-V enabled
- VMware ESXi: test OK with Promiscuous Mode
- VirtualBox: test failed
Can Nano be installed on a public cloud like AWS?¶
No, most public cloud platforms do not allow virtualization.
Network/SSH disconnected when installing Nano¶
The installer configures a network bridge and restarts the network service during installation, which does not affect the network connection on most Dell servers and VMware instances.
However, some servers may experience network disconnection due to their network drivers; in that case, run the installer through the server’s IPMI or a similar remote administration interface instead of SSH.
Prompt “no default route available” when Installer or Cell starts¶
Nano requires a configured default route to work; configure one manually and restart. Assuming the default gateway of the network is 192.168.1.1, execute the command below.
$ip route add default via 192.168.1.1
Prompt “query timeout” when starting Cell¶
The Cell requires a running Core to complete self-discovery and networking. Check whether the Core module and network are running correctly, and whether the domain parameters are identical to the Core’s configuration.
All instances and images absent after upgrading from 0.9.1¶
Since the new version only shows the instances and images created by you, execute the following instructions to modify the ownership of resources and restart the modules; otherwise you will not see your instances and images. Take the user ‘nano’ and group ‘admin’ as an example:
Update ownership of images in the Core module
$sed -i 's/\"owner\": \"admin\"/\"owner\": \"nano\"/' /opt/nano/core/data/image.data
$sed -i 's/\"group\": \"manager\"/\"group\": \"admin\"/' /opt/nano/core/data/image.data
Update ownership of instances in the Cell module
$sed -i 's/\"user\": \"admin\"/\"user\": \"nano\"/' /opt/nano/cell/data/instance.data
$sed -i 's/\"group\": \"manager\"/\"group\": \"admin\"/' /opt/nano/cell/data/instance.data
Community¶
Nano is released under the MIT license, which allows free modification and personal or commercial use.
Git Repository: https://github.com/project-nano
Blueprint for REST API: https://nanoen.docs.apiary.io/
This manual: https://nanocloud.readthedocs.io/projects/guide/en/latest/
Thank you for your attention; we sincerely look forward to your joining us.