Cloud Computing, since the NIST (National Institute of Standards and Technology) definition in 2011, has marked a deep innovation in IT, which until then had known at best Hosting and ASP (Application Service Provider) over the Internet.

Among the fundamental principles of Cloud Computing we recall Multi-Tenancy (HW and SW sharing among several companies) and Rapid Elasticity (using only the resources strictly necessary at any moment, activating new ones at peak times and releasing them for other uses, or for switch-off, during lulls). Bear in mind that an idle server still consumes about 50% of the energy it uses at full load, because the processor and the memory must be kept running to be ready to respond (see “Idle Server: What Does This Mean For Your Data Center?” – Raritan).

 

The birth of “Cloud-native”

Although the peculiarities and advantages of true Cloud Computing were clear, exploiting them required a radical redesign of the software, and most companies and programmers preferred to continue with simple, traditional Hosting, calling it Cloud just to be fashionable.

The term “Cloud-native” was introduced to re-establish the difference from Hosting and ASP. Credit also goes to the CNCF (Cloud Native Computing Foundation, cncf.io), founded in 2015 to spread this new paradigm through open source, bringing together for the first time all the world’s cloud providers, including the Chinese ones.

To take full advantage of a modern Cloud platform, it is necessary to split traditional monolithic SW into relatively autonomous micro-services that collaborate by means of asynchronous messages (like mail) and standard APIs (Application Programming Interfaces), facilitating interoperability, software reuse and efficient use of resources. A workload peak on a micro-service in fact produces temporary message queues awaiting processing, without blocking the others. If a micro-service becomes a real bottleneck, it is automatically replicated as many times as necessary to keep the length of the queues, and therefore the waiting times, below a predetermined limit, as the sketch below shows. Multi-tenancy also brings the advantage of statistical compensation of activity peaks and valleys across users and companies.
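To illustrate this queue-driven replication, here is a minimal sketch of an autoscaling loop that sizes a micro-service from the length of its message queue. The names, thresholds and polling interval are assumptions made for the example, not aKite’s actual mechanism.

```python
# Minimal sketch of queue-length-based autoscaling for a micro-service.
# All names and thresholds are hypothetical, chosen for illustration.

import math
import time

MAX_QUEUE_PER_REPLICA = 100   # target: keep waiting messages per replica below this
MIN_REPLICAS = 0              # scale to zero when there is no work at all
MAX_REPLICAS = 16             # upper bound so a runaway queue cannot exhaust the budget

def desired_replicas(queue_length: int) -> int:
    """Replicate the service as many times as needed to keep queues short."""
    if queue_length == 0:
        return MIN_REPLICAS
    return min(MAX_REPLICAS, math.ceil(queue_length / MAX_QUEUE_PER_REPLICA))

def autoscale_loop(get_queue_length, set_replicas, poll_seconds: float = 5.0):
    """Poll the queue and adjust the replica count; runs forever."""
    current = -1
    while True:
        target = desired_replicas(get_queue_length())
        if target != current:       # only act when the target actually changes
            set_replicas(target)
            current = target
        time.sleep(poll_seconds)
```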

Alongside “Cloud-native”, the term “Edge Computing” also spread, indicating partial processing of data at the edge, where it is produced and used, and not only in the Cloud. Such “distributed architectures” are becoming more popular and, among other things, reduce data exchange between the Cloud and the edge, because “even bits are subject to the force of gravity”. This paradoxical phrase indicates that moving even a single bit from New York to Los Angeles takes time (the speed of light is high but not infinite) and energy to power the devices that keep the channel open.
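To make the point concrete, a back-of-the-envelope calculation of the minimum time for one bit to cross the United States; the distance and the speed of light in fiber are rounded assumptions.

```python
# Back-of-the-envelope latency for one bit New York -> Los Angeles.
# Figures are illustrative assumptions, not measurements.

distance_km = 3_940            # approx. great-circle distance NY-LA
light_in_fiber_km_s = 200_000  # roughly 2/3 of c in optical fiber

one_way_ms = distance_km / light_in_fiber_km_s * 1000
print(f"one-way: {one_way_ms:.1f} ms, round trip: {2 * one_way_ms:.1f} ms")
# -> one-way: ~19.7 ms, round trip: ~39.4 ms, before any routing overhead
```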

Computer ecology is finally spreading, with principles like minimizing the components in a system (what doesn’t exist cannot break) and using them fully, because an underused computer still consumes energy, in addition to the energy required for its production, transport and installation. An example in the IoT (Internet of Things) sector is modern video cameras with Artificial Intelligence on board, recently even within the same chip to further minimize the component count.

Another term used more and more often is “Serverless”. Literally “processing without servers”, a technical paradox meaning that programmers can concentrate on code, delegating the management of the servers and their orchestration to a SW service that automatically adjusts the resources to the workload, up to total shutdown when all activity stops for a certain time, for example at night, and automatic restart on the first call, with enormous energy savings compared to traditional data centers.
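In practice, “concentrating on code” means writing little more than a function like the following minimal sketch; the event/context signature follows a common convention (e.g. AWS Lambda-style) and is an assumption here, since the exact shape varies by provider.

```python
# Minimal sketch of the serverless model: the programmer writes only this
# function; the platform runs it on demand, replicates it under load and
# shuts everything down (scale to zero) when no calls arrive.
# The event/context signature is assumed, following a common convention.

def handler(event, context):
    # Business logic only: no server, no process, no orchestration code.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```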

An example in Retail

aKite, the SaaS for store management officially released in 2010, was the first, and is still one of the few, Cloud-native services in Retail.

The distributed architecture is necessary in order to have a POS that can also work disconnected for normal features such as sales, promotions and loyalty. Servers are removed from stores, and the efficient design allows the use of even the cheapest hardware for Front and Back Store operations.

This form of Edge Computing also has the benefit of faster sales, because the data is local. In addition, less traffic is generated on the network, and the workload is reduced on the Cloud, where resources are used very efficiently through rapid elasticity.

In a distributed system like aKite, it was natural to choose communication through asynchronous messages, so that sales data and stock levels of entire chains are updated in near real time. Messages from HQ to the stores allow POS to be updated immediately with new products, prices and customers, and to be turned off when the store closes. Even POS that are activated only on weekends or at peak times have a queue on the Cloud waiting to be emptied as soon as they are turned on, as sketched below.
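A hypothetical sketch of what happens when such a POS is switched on: it drains the per-store queue accumulated in the Cloud while it was off. The function names are illustrative, not aKite’s actual API.

```python
# Hypothetical sketch of the store-side message flow, assuming a per-store
# queue in the Cloud; fetch_pending/apply_update/ack are illustrative names.

def on_pos_startup(fetch_pending, apply_update, ack):
    """Drain the messages (new products, prices, customers...) queued in the
    Cloud while this POS was switched off, e.g. over a weekend."""
    for message in fetch_pending():
        apply_update(message)   # idempotent, so a crash mid-drain is safe
        ack(message)            # remove from the cloud queue only once applied
```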

Older-generation store SW often requires computers to be turned on even at night to exchange data with HQ. This causes energy waste, the degradation of malfunctioning applications that are never restarted, greater risks of cyber attacks, and even of fire following particularly serious failures.

I discussed energy efficiency many years ago in “aKite – Green-tailing”, estimating savings of up to 90% compared to traditional solutions with store servers. Meanwhile, Cloud energy efficiency has increased further (see “Cloud Computing Is Not the Energy Hog That Had Been Feared” – The New York Times, nytimes.com).

Now the need to be “green” is even more evident and shared among customers. Cloud-native IT is good for the balance sheet, for the carbon footprint and corporate image … and for the planet!
