CycleCloud manages compute resources that can be acquired on demand, for example, virtual machines or instances in the cloud. These resources are called nodes and are grouped into clusters. Nodes in a cluster are logically related, but each node can be configured completely independently. A node specifies the cloud provider, operating system, and machine type to use, the region or location to run in, and the software to install on the node when it starts.
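A node definition like the one described above can be sketched as a cluster template. This is a minimal, illustrative fragment, not a complete template; the cluster name, credential name, and attribute values are hypothetical, though attribute names such as `MachineType`, `Region`, and `ImageName` follow CycleCloud's template conventions:

```ini
# Hypothetical cluster template: one cluster containing a single node.
[cluster example-cluster]

  [[node head]]
  # Name of a cloud provider account configured in CycleCloud
  Credentials = my-cloud-account
  # Machine type and region/location to run in (illustrative values)
  MachineType = m4.large
  Region = us-east-1
  # Operating system image for the node
  ImageName = cycle.image.centos7
```

Each `[[node ...]]` section is an independent node definition, so nodes in the same cluster can use different machine types, images, or regions.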
Clusters make use of resources from a cloud provider account that is configured in CycleCloud. This includes the credentials used to authenticate with the provider. In addition, CycleCloud makes use of scripts and packages kept in a cloud storage locker (e.g., an S3 bucket for Amazon Web Services) to configure nodes on startup.
One important feature of CycleCloud is the ability to autoscale nodes to meet variable demand. For this purpose, clusters can contain a node array, which is a node definition used to create as many nodes as needed. Each node array can be set to a target capacity, measured in either instance count or core count. Nodes are started as needed to reach this target, and nodes that are idle for too long are automatically shut down. The target can be set directly by the user. When used with supported HPC schedulers (currently Grid Engine and HTCondor), the target core count is set according to the backlog of jobs in the queue.
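An autoscaling node array can be sketched as a template fragment like the following. This is an assumption-laden illustration: the array name and values are hypothetical, and the exact autoscale attribute names may vary by CycleCloud version:

```ini
  # Hypothetical node array: CycleCloud starts nodes as needed to
  # reach the target capacity and shuts down nodes left idle too long.
  [[nodearray execute]]
  MachineType = m4.large
  # Cap the array's capacity, here expressed in cores. With a
  # supported scheduler (Grid Engine, HTCondor), the target core
  # count is driven by the job queue rather than set by hand.
  Autoscale = true
  MaxCoreCount = 128
```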
Nodes can be configured to match an existing environment. CycleCloud uses a Cluster-Init system to install software on a node. A set of executable scripts and installation packages is uploaded to a subdirectory of the bucket or container configured as the node's locker; when a node boots, it runs each script in turn, as the root user on Linux or as Administrator on Windows. If any script fails, the node is given a status of Failed.
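Since a failing script marks the node as Failed, Cluster-Init scripts are typically written to exit nonzero on the first error. A minimal sketch of such a script follows; the directory and file names are illustrative, standing in for whatever software setup the node actually needs:

```shell
#!/bin/bash
# Hypothetical Cluster-Init script: runs as root when the node boots.
# Exit nonzero on any error so a broken step surfaces as a Failed node
# rather than a half-configured one.
set -euo pipefail

# Illustrative setup step: write an application config file.
CONFIG_DIR="${CONFIG_DIR:-/tmp/example-app}"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/app.conf" <<'EOF'
log_level = info
EOF

echo "cluster-init script completed"
```

Scripts in the locker subdirectory run in order, so numbering them (for example `01-install.sh`, `02-configure.sh`) is a common way to control sequencing.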
In This Guide