There are a few different aspects to elasticity. Firstly, you need flexibility: whatever set of resources you are operating, whether storage, compute (as in Vaultastic), networking, databases, or higher-level services, the basic characteristic is the freedom to scale up and scale down with little or no lead time.
One of the key reasons the AWS platform and services are elastic is that every operation you perform, whether it is storing more data, running more virtual machines, or scaling out networking, is simply an API operation. This means you can perform these operations in software, through the console, command-line utilities, SDKs, and so on.
All these operations complete in minutes or even seconds. For example, if you are starting new instances or databases, they will be available within minutes. If you need to reconfigure or resize them, or if you prefer vertical scaling and choose to run an application on a certain instance type, only to discover that it does not have enough memory or CPU capacity, you can resize it, again using API operations. There is no need for human intervention, or for any process that could slow down the usual requirements around scaling up or scaling down.
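Because everything is an API operation, a vertical resize boils down to a short stop/modify/start sequence driven from code. The sketch below illustrates that flow with a hypothetical in-memory `EC2Client`; the class, method names, and instance types are stand-ins for a real SDK client, not actual AWS APIs.

```python
class EC2Client:
    """Toy, in-memory stand-in for a real EC2 API client (illustrative only)."""
    def __init__(self):
        self.instances = {}  # instance_id -> {"type": ..., "state": ...}

    def run_instance(self, instance_id, instance_type):
        self.instances[instance_id] = {"type": instance_type, "state": "running"}

    def stop_instance(self, instance_id):
        self.instances[instance_id]["state"] = "stopped"

    def modify_instance_type(self, instance_id, new_type):
        # Instance type can only change while the instance is stopped.
        assert self.instances[instance_id]["state"] == "stopped"
        self.instances[instance_id]["type"] = new_type

    def start_instance(self, instance_id):
        self.instances[instance_id]["state"] = "running"


def resize(client, instance_id, new_type):
    """Vertical scaling as three API calls: stop, change type, start."""
    client.stop_instance(instance_id)
    client.modify_instance_type(instance_id, new_type)
    client.start_instance(instance_id)


client = EC2Client()
client.run_instance("i-0123", "t3.small")  # turns out to be under-sized
resize(client, "i-0123", "t3.large")       # resized with no manual process
print(client.instances["i-0123"])
```

The point is that the whole resize is software calling an API, so it can be scripted, audited, and repeated with no change-management queue in the middle.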
Secondly, there are no penalties: no upfront provisioning is required, and there are no minimums or commitments when you use these services. Lastly, we are continuously adding capacity, every single day. There are teams at AWS dedicated to continuously adding capacity and refreshing the underlying platform: the actual hardware, the software, the configuration, and all of the operational processes around delivering these services to our customers.
Taking just compute, the Vaultastic product itself operates using virtual machines on the EC2 service. You can provision any number of EC2 instances, subject to limits. Every AWS account has limits for each service, and these limits are nothing but simple protection mechanisms. Because all these operations are API operations that can also be called from software, a bug, an accident, or human error could cause customers to accidentally spin up a large number of EC2 instances or VMs.
For example, say you wanted to start 9 instances, but due to a typo you ended up creating 99 or 900 instances. The limits cap you at a small number that you can routinely provision. When you know your requirements exceed these numbers, you simply raise a change request beforehand, which lets us know that you need to provision more capacity.
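The protective role of those limits can be sketched as a simple guard: a launch request that would exceed the account's quota is rejected up front, so the "9 vs 99" typo gets blocked instead of billed. The limit value and function names below are illustrative, not real AWS APIs.

```python
ACCOUNT_INSTANCE_LIMIT = 20  # example default quota for one instance family

def launch_instances(requested, currently_running, limit=ACCOUNT_INSTANCE_LIMIT):
    """Return the new running count, or refuse a request over the limit."""
    if currently_running + requested > limit:
        raise ValueError(
            f"request for {requested} instances exceeds the account limit of "
            f"{limit}; raise a limit-increase request first"
        )
    return currently_running + requested

print(launch_instances(9, 0))    # the intended launch goes through
try:
    launch_instances(99, 0)      # the typo: rejected at the API boundary
except ValueError as err:
    print("blocked:", err)
```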
The Elastic Load Balancing service, as the name suggests, is elastic in nature. Unlike conventional load balancing, where you must manage the underlying infrastructure, the Elastic Load Balancing service is designed to automatically grow and shrink so that any amount of load can be sent to the back-end applications serving the traffic.
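Behind the load balancer's single endpoint, incoming requests are spread across whatever back-end targets are currently registered. A toy round-robin sketch of that idea (real ELB also health-checks targets and scales its own capacity):

```python
import itertools

class LoadBalancer:
    """Toy round-robin balancer: one endpoint, many back-end targets."""
    def __init__(self, targets):
        self._cycle = itertools.cycle(list(targets))

    def route(self, request):
        # Pick the next target in rotation and hand it the request.
        target = next(self._cycle)
        return target, request

lb = LoadBalancer(["app-1", "app-2", "app-3"])
for i in range(4):
    print(lb.route(f"req-{i}"))  # cycles app-1, app-2, app-3, app-1, ...
```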
Similarly, your back-end applications themselves can be part of what is called an Auto Scaling group: a mechanism where the size of your fleet grows or shrinks on demand, based on the load currently hitting your application.
The Auto Scaling group performs both scale-out and scale-in events on its own, with scale-out events adding capacity when needed. This can be based on increases in traffic, or on a schedule: if you know that certain applications are busy only at certain times of day or on certain days of the week, you can scale out during, or just ahead of, those times. Scale-in actions then automatically retire instances when you no longer need to run a large fleet.
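The decision an Auto Scaling policy makes on each evaluation can be sketched as a small function: above a busy threshold, add capacity; below an idle threshold, retire it; otherwise leave the fleet alone. The metric (average CPU %) and thresholds here are illustrative assumptions; real policies are configured as target-tracking or step rules.

```python
def desired_capacity(current, avg_cpu, min_size=2, max_size=10,
                     scale_out_at=70.0, scale_in_at=30.0):
    """Return the fleet size after one scaling evaluation (illustrative)."""
    if avg_cpu > scale_out_at:          # busy: scale out, up to max_size
        return min(current + 1, max_size)
    if avg_cpu < scale_in_at:           # idle: scale in, down to min_size
        return max(current - 1, min_size)
    return current                      # within band: no change

print(desired_capacity(4, 85.0))  # scale out to 5
print(desired_capacity(4, 12.0))  # scale in to 3
print(desired_capacity(4, 50.0))  # steady at 4
```

The `min_size`/`max_size` bounds mirror the group configuration that stops scale-in from emptying the fleet and scale-out from running away.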
Another service with elasticity built in is serverless computing, using AWS Lambda. With Lambda, you simply write the code for a function, and we take care of all of the infrastructure management and execution of that code. We also scale it to the number of invocations, which could be thousands of invocations every second.
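A minimal Lambda-style function looks like the sketch below. The `(event, context)` signature follows the Python Lambda programming model; the payload shape and the direct call at the end are local illustrations, since on the platform Lambda itself invokes the handler, once per event, at whatever concurrency the traffic demands.

```python
def handler(event, context=None):
    """Return a small response for the name carried in the event payload."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}

# Locally we call the function directly; on Lambda, each of thousands of
# concurrent invocations per second would run this same handler.
print(handler({"name": "Vaultastic"}))
```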
A higher-level service with built-in elasticity is Amazon DynamoDB. Here too, elasticity is built in: you can put unlimited data into a DynamoDB table, which is the unit of usage for the service. There is no limit to the number of items, and no limit to the total amount of data, that a DynamoDB table can hold. We routinely have customers who store billions of items and petabytes of data in tables provided by this service.
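One way to see why a table has no size ceiling: items are spread across partitions by a hash of the partition key, so growth adds partitions rather than filling a fixed box. The hash function and partition count below are illustrative, not DynamoDB's internal scheme.

```python
import hashlib

def partition_for(partition_key, num_partitions):
    """Map an item's partition key to one of num_partitions (illustrative)."""
    digest = hashlib.sha256(partition_key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Items with different keys spread across partitions; when the data grows,
# the service can raise the partition count and re-spread the keys.
for key in ("user#1", "user#2", "user#3"):
    print(key, "-> partition", partition_for(key, 8))
```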
Finally, unlike conventional storage, elastic storage on AWS does not require you to tell us, or tell a service, beforehand how much storage you need. On Amazon S3, a single object can be as large as 5 terabytes, and you can have an unlimited number of objects per bucket. The S3 APIs themselves routinely scale, every day, as we handle a very large volume of requests from our customers in terms of the number of operations per second.
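Two S3 size facts are worth pairing: a single PUT is limited to 5 GB, while an object can reach 5 TB via multipart upload. A client can choose between the two paths with a check like the one below; the chooser itself is illustrative client-side logic, not an S3 API.

```python
GB = 1024 ** 3
SINGLE_PUT_LIMIT = 5 * GB          # largest object uploadable in one PUT
MAX_OBJECT_SIZE = 5 * 1024 * GB    # largest S3 object overall (5 TB)

def upload_strategy(size_bytes):
    """Pick an upload path for an object of the given size (illustrative)."""
    if size_bytes > MAX_OBJECT_SIZE:
        raise ValueError("object exceeds the 5 TB per-object limit")
    return "multipart" if size_bytes > SINGLE_PUT_LIMIT else "single PUT"

print(upload_strategy(100 * 1024 ** 2))  # 100 MB object: single PUT
print(upload_strategy(50 * GB))          # 50 GB object: multipart
```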
In conclusion, if you are considering storage for any of these categories of workloads, whether primary storage or migration, bursting, and tiering requirements, you can take advantage of our storage services and their elasticity to meet these different kinds of requirements.
Marketing Executive at Mithi Software Technologies. Curious by nature and a keen learner, she brings readers the latest updates on Mithi and its products.