Clouds have come to dominate computing and data services. At the same time, more and more devices are connected to the internet; devices hold more data, take more real-time actions based on that data, and users want more control. This all means that a single, fully centralized cloud model is not enough. We have to adopt more distributed models, known technically as Edge computing.
I have written earlier about personal AI and distributed data models. Both demonstrate the need to distribute data and its processing. Those services could still be cloud based, but for truly real-time, data-heavy applications it is important to optimize the balance between local and centralized computing and the data transfer between them. This is the case with many IoT-based services.
Wikipedia defines Edge computing as “a method of optimizing cloud computing systems by performing data processing at the edge of the network, near the source of the data. This reduces the communications bandwidth needed between sensors and the central datacenter by performing analytics and knowledge generation at or near the source of the data.”
Professor Mahadev Satyanarayanan from Carnegie Mellon University made a strong case at an AI panel at MWC2018 for Edge computing and why its time is now. He gave examples showing that personal AI applications (e.g. smart glasses acting as a personal assistant, or connected cars) cannot transfer all data to a data center and wait for instructions from there. Part of the processing must be local. The training and learning of AI will probably follow a hybrid model, where centralized learning produces ‘instructions’ that can be adapted locally in devices, e.g. to be more personal or more relevant to the local context.
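This hybrid pattern can be sketched in a few lines of code: a centrally trained model is shipped to the device, and the device adapts a copy of the parameters on data that never leaves it. This is only an illustration of the idea, not anything from the talk; the linear model, learning rate, and step count are all assumptions made for the sketch.

```python
# Sketch of hybrid learning: central training, local adaptation.
# Illustrative only -- a tiny linear model tuned with a few gradient steps.

def local_adapt(global_weights, local_data, lr=0.01, steps=100):
    """Fine-tune a copy of centrally trained weights on device-local data.

    global_weights: list of floats shipped from the central cloud.
    local_data: list of (features, target) pairs that stay on the device.
    """
    w = list(global_weights)  # adapt a copy; the global model stays intact
    for _ in range(steps):
        for x, y in local_data:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            # one gradient step of squared error on this example
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

# Example: the central model says y = 1.0 * x, but this user's own
# data follows y = 2 * x, so the local copy drifts toward 2.0.
global_w = [1.0]
personal = [([1.0], 2.0), ([2.0], 4.0), ([3.0], 6.0)]
adapted = local_adapt(global_w, personal)
print(adapted)
```

The point of the sketch is the division of labor: the expensive learning happens centrally once, while each device makes a cheap, private adjustment to its own copy.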
Professor Satyanarayanan gave four main reasons why Edge computing is crucial for future AI and ML applications:
- Bandwidth: it is often impractical or outright impossible to transfer all data fast enough.
- Latency: real-time decision making cannot wait for data transfer to and decisions from a central hub.
- Privacy: some data must stay stored locally.
- Availability: services must always work, even when there are issues with connectivity or cloud services.
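These four constraints can be read as a routing decision a device makes for each piece of data: process it at the edge, or send it to the cloud. A minimal sketch of such a decision, where the field names, link speed, and thresholds are illustrative assumptions:

```python
# Sketch: deciding per reading whether to process at the edge or in the cloud.
# The four tests mirror the four reasons above; all numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Reading:
    size_mb: float        # payload size
    deadline_ms: float    # how soon a decision is needed
    private: bool         # must the raw data stay on the device?

def route(reading, link_mbps=10.0, round_trip_ms=80.0, cloud_up=True):
    """Return 'edge' or 'cloud' for one reading."""
    if reading.private:                       # privacy: raw data stays local
        return "edge"
    if not cloud_up:                          # availability: cloud unreachable
        return "edge"
    upload_ms = reading.size_mb * 8 / link_mbps * 1000
    if upload_ms + round_trip_ms > reading.deadline_ms:
        return "edge"                         # bandwidth/latency: too slow
    return "cloud"

print(route(Reading(size_mb=0.01, deadline_ms=500, private=False)))  # cloud
print(route(Reading(size_mb=50.0, deadline_ms=100, private=False)))  # edge
```

A small sensor reading with a loose deadline goes to the cloud; a large video frame needed for a split-second decision stays at the edge.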
This is also linked to security: how, for example, can local devices protect themselves against attacks and keep operating in all situations? These devices will need firewalls and other protective solutions in the future, and may also use AI to protect themselves, as a kind of self-defense.
This development toward more distributed models will have business impacts too. It gives new companies opportunities to develop services optimized for these new needs, at a time when cloud services are consolidating around the biggest players. Professor Satyanarayanan even suggested that the leading cloud companies, like Google and Amazon, might have problems adapting to this model, and that Microsoft might be better positioned to do so, thanks to its history with PCs.
At the same time, this gives startups a real way into the AI market, where it is otherwise very hard to compete with Google, Amazon, and Facebook and their huge AI resources.
Finance and FinTech is one area where these new distributed models are already emerging. Distributed ledgers and blockchains are part of the same development: they distribute data and transaction processing and offer more user control. As FinTech services show, it is important to find an optimal model for combining global, centralized ecosystems with local services and data. Open APIs are also part of this model, since they make it possible to develop local services and applications that work with centralized services and back offices.
We will see different layers of computing and data centers:
- Local computing and small data in local devices.
- Cloudlets that offer more processing and data storage near a device (Wikipedia: “a cloudlet is a mobility-enhanced small-scale cloud datacenter that is located at the edge of the Internet”).
- Central clouds for big data, managing the ecosystem, larger-scale learning, and global processes.
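One way to picture these layers is as a summarization pipeline: each tier keeps its own raw data and passes only aggregates upward, which is exactly where the bandwidth savings come from. A toy sketch, with tier names and statistics chosen purely for illustration:

```python
# Toy sketch of the three tiers: devices keep raw data, cloudlets aggregate
# nearby devices, and the central cloud merges cloudlet summaries.
# Only small summaries travel upward, never the raw readings.

def device_summary(raw_readings):
    """Tier 1: raw data stays on the device; only a summary leaves it."""
    return {"n": len(raw_readings), "total": sum(raw_readings)}

def cloudlet_aggregate(device_summaries):
    """Tier 2: a nearby cloudlet merges summaries from local devices."""
    return {
        "n": sum(s["n"] for s in device_summaries),
        "total": sum(s["total"] for s in device_summaries),
    }

def cloud_global_mean(cloudlet_summaries):
    """Tier 3: the central cloud computes ecosystem-wide statistics."""
    n = sum(s["n"] for s in cloudlet_summaries)
    total = sum(s["total"] for s in cloudlet_summaries)
    return total / n if n else 0.0

# Two cloudlets, three devices: five raw readings never leave tier 1,
# yet the cloud can still compute the global mean.
c1 = cloudlet_aggregate([device_summary([1.0, 2.0]), device_summary([3.0])])
c2 = cloudlet_aggregate([device_summary([4.0, 5.0])])
print(cloud_global_mean([c1, c2]))  # 3.0
```

The same shape works for heavier workloads: swap the count-and-sum summaries for model updates or feature statistics and the tiers stay the same.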
At MWC2018 we could see AI models, needs, and applications from many angles, including computer science, venture capital investment, and user applications (e.g. connected cars). The message from the AI experts is that this development is happening now: companies should invest in these things now, not wait for proven business cases first. The experts also emphasized that now is the time to develop real user applications, not only conduct research; there are already enough theoretical models and research results to implement many useful applications.
Distributed models have started to emerge at the same time as the big cloud services are still winning business from proprietary legacy IT solutions. This is an area where we will see many new models and services within a few years, and probably many startups and investments too. The big players are also ready to make acquisitions. The development of connected digital services seems to be an endless rollercoaster between more centralized and more localized intelligence.