System Deployment¶
Nano ships with an installer for automated deployment. Get the latest version from the official website or from GitHub.
New users are strongly advised not to adjust the configuration during a first-time installation; the installer will choose appropriate parameters.
Minimal System Requirements:
- X86 servers with virtualization enabled, or virtual machines with nested virtualization enabled (Intel VT-x/AMD-V); a quick way to verify this is sketched after the requirements
- 2 cores/4 GB memory/50 GB disk/1 network interface
- CentOS 7.6 (1810) Minimal
- Operating system installed and network configured
- If you use RAID or LVM, configure it before installing Nano.
Installing on a clean CentOS 7.6 system without any extra packages is recommended.
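To verify that hardware virtualization is actually enabled, you can check the CPU flags before running the installer; a non-zero count means the vmx (Intel) or svm (AMD) instructions are visible to the operating system. This is a generic Linux check, not part of the Nano installer:
$egrep -c '(vmx|svm)' /proc/cpuinfo
$lscpu | grep Virtualization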
Install Modules¶
Download and extract the package, then run the installer and choose the modules you need to install.
Execute the following commands:
$wget https://nanos.cloud/media/nano_installer_1.0.0.tar.gz
$tar zxfv nano_installer_1.0.0.tar.gz
$cd nano_installer
$./installer
After the Installer starts, you are asked to choose the modules to install.
For example, enter “2” to install only the Cell module, or “3” to install all modules on the current server.
By default, all modules are installed under /opt/nano with the domain identity “<nano: 224.0.0.226:5599>”.
Changing the default parameters is not recommended for a first-time installation or when there is only one Nano domain in the LAN.
The installer asks you to enter “yes” to build the bridged network “br0” before installing the Cell.
If a “br0” bridge that was not created by Nano already exists, removing it before installation is recommended.
A known issue is that on some servers the SSH connection is interrupted when the installer configures the bridged network, resulting in an installation failure. Use IPMI or iDRAC to execute the installer if you encounter this issue.
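To see whether a bridge that was not created by Nano already exists, you can list the current bridge devices before running the installer. These are standard iproute2 and bridge-utils commands; bridge-utils may need to be installed separately on a minimal system:
$ip link show type bridge
$brctl show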
The installer will ask you to choose a network interface to work on if there are multiple network interfaces available on the current server.
For example, suppose the server has two network interfaces: eth0 with address “192.168.1.56/24” and eth1 with address “172.16.8.55/24”. If you want Nano to work on the segment “172.16.8.0/24”, choose “eth1”.
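To see which interfaces and addresses are available before answering the installer, you can print a brief overview with iproute2 (the interface names above are only examples; yours may differ):
$ip -br addr show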
Note: when installing dependencies, the installer tries the bundled RPM packages first. If any error occurs, it falls back to yum, so keep your network available.
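If you want to confirm in advance that yum can reach its repositories in case the fallback is needed, a quick check is:
$yum repolist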
Start Modules¶
All Nano modules provide a command-line interface and are invoked as: <module name> [start | stop | status | halt].
- start: starts the module service; prints version information on success or an error message on failure.
- stop: stops the service gracefully, releasing allocated resources and notifying related modules.
- status: checks whether the module is running.
- halt: terminates the service immediately.
A module must be started before it can provide services. All binaries are installed under “/opt/nano” by default.
Please remember, always start the Core service first.
$cd /opt/nano/core
$./core start
$cd ../cell
$./cell start
$cd ../frontend
$./frontend start
When the FrontEnd module starts successfully, it prints a listening address such as “192.168.6.3:5870”. Open this web portal in Chrome or Firefox to manage your newly installed Nano platform.
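If you prefer to verify from the command line first, you can use the status subcommand described above and a plain HTTP request. The address below is the example from this guide; replace it with the one printed on your server, and note that curl only confirms basic reachability of the portal:
$cd /opt/nano/frontend
$./frontend status
$curl -I http://192.168.6.3:5870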
Initialize the System¶
The first time you open the Nano Web portal, you will be prompted to create an initial administrator.
The password must be at least 8 characters long and contain at least one number, one lowercase letter, and one uppercase letter.

After you log in, you can start managing the Nano platform or creating more user accounts for your team.

The logged-in user is displayed at the bottom of each page; click it to log out.

Add Address Pool (optional)¶
By default, virtual machines allocated by Nano obtain IP addresses directly from the physical network through a bridge device. For users who want to manage instance IP addresses more precisely, an address pool may be a better solution.
An address pool contains multiple IP segments.
When an instance is created, a new IP is allocated from the pool and attached via the DHCP service on the Cell node.
When the instance is deleted, the attached IP is released back to the address pool and can be reallocated later.
Click the “Create” button in the pool list.

When an instance starts up, it fetches the gateway and DNS settings of the pool configuration via its DHCP client.
After a pool is created, add usable address ranges on its detail page.

Click the “Add” button on the detail page to add a usable address range. When a new instance is created, an IP in this range is allocated and attached to it via the DHCP service.
To avoid conflicts with an existing DHCP service on the network, note the following:
- Ensure that there is no address overlap between the existing DHCP service and the Nano address range (a quick way to probe for an existing DHCP server is sketched after this list).
- Ensure that the address range is on the same subnet as the gateway and is reachable. The gateway IP must not appear in the usable range. For example, with the gateway IP “192.168.3.1”, the address range could be “192.168.3.2~192.168.3.240/24”.
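One way to check whether another DHCP server is already answering on the target segment is an nmap broadcast probe. This assumes nmap is installed (it is not part of the minimal CentOS installation) and is run as root; replace eth1 with the interface Nano works on:
$nmap --script broadcast-dhcp-discover -e eth1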

You can see the available ranges on the detail page after adding them. You can also add multiple address ranges to an address pool if necessary.

An address pool must be attached to a resource pool before it takes effect. Choose an address pool from the drop-down menu of the resource pool.

Check the resource pool list to verify that the modification succeeded.

After the address pool is configured, an IP is automatically allocated and attached when a new instance is created.
You can view the attached IP on the instance detail page or the address pool detail page.


Note: An address pool can be associated with multiple resource pools. You can change the DNS, gateway, and associations of an address pool in real time, but any instance holding an allocated address must be released before the associated address pool can be unbound.
Add Resource¶
Add Cell Node¶
When Nano starts for the first time, an empty resource pool named “default” is created. You must add a Cell node to provide enough resources for allocating instances.
On the “Resource Pool” menu, click the “cells” button of the “default” pool.

Click the “Add” button to add a Cell node.

From the drop-down menu, select a Cell node that is available but has not joined any resource pool.

You can see that the new node is online after it is added to the pool.

Please note: when using shared storage, you may receive a timeout warning if the Cell takes too long to join the pool. In most cases this does not matter; refresh the list to check the status.
When using shared storage, also verify on the Cell node's detail page that the backend storage has attached successfully.

Once the Cell node and backend storage are ready, you can begin creating new instances.
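If the backend is an NFS share (an assumption; your deployment may use a different storage backend), you can also confirm from the Cell node's shell that the share is mounted:
$mount | grep nfs
$df -hT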
Upload Images¶
An empty instance is of no use by itself, so an operating system and software need to be installed.
Disk Images¶
A disk image stores the system data of an original virtual machine, from which a batch of instances with identical OS and software can be cloned in a short time.
You can install the Cloud-Init module in a disk image to automatically initialize the administrator password, expand the system disk, or format the data disk.
The Download page of the official site provides pre-built images of CentOS 7.5 Minimal (one of which has Cloud-Init embedded).
Download an image and click “Upload” in the “Images” menu to upload it. You can then clone from this image when creating new instances.
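If qemu-img is available on the machine where you downloaded the file, you can inspect the disk image before uploading it. The filename below is hypothetical; use the actual name of the downloaded image:
$qemu-img info centos7.5_mini.qcow2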

Media Images¶
A media image represents DVD data in ISO format, which can be loaded into an instance like a physical CD for installing an OS.
Media images are usually used to build virtual machine templates; see the Instance and Platform chapters for more details.
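Before uploading an ISO as a media image, you can confirm the file really is an ISO 9660 image with the standard file utility (the filename below is only an example):
$file CentOS-7-x86_64-Minimal-1810.iso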