This is a debate prompted by recent issues arising from the near collapse of the banking system, and by how we have viewed market-dominant companies since. So, in responding to the questions and articles from the other day, here is my opinion, having been at the heart of those collapses, and the spiral of recovery, on both the buy and sell side.
There are two families of questions here: the impact on macroeconomics if one of these companies failed financially, and the impact on the global economy if the service these companies provide should fail.

We could ask the first question of many companies, so let's focus on the meatier issue of number two.
I'm going to go one step further and say that "too big to fail", and the test I am very familiar with, focuses today on systemic corruption and abuse allowed to take place within a totally deregulated and unpoliced environment, where the regulators at the time (not now) were as much a part of the problem.
So, to the (justified?) paranoia: all these companies putting all their systems and data into public cloud service providers, either as SaaS or by building an online data centre. I hear this theme all day.
“What if Amazon, Google or Microsoft are hacked at some fundamental level, so that the hacker can either get the whole world's data, or mount a denial-of-service attack that stops the whole service?”
This betrays a certain lack of understanding of how the modern CSP has structured its underlying infrastructure, how it scales it, and how it delivers immutable segmentation at the machine-servicing layer before anything is even abstracted to software. With the spread of containers and micro-PaaS, the reliance on widely exposed abstraction becomes even smaller. Micro-segmentation and other popular terms also speak to limiting the surface area of attack.
What's more important is that you listen to your consultants and experts on limiting your own company's exposure. Keep company PII and sensitive PII behind your own firewall, including the keys for that data's encryption. However, this is also really a placebo, forcing us to design poor performance into our cloud-native architectures. Because many companies are doing this, it is worth exploring one extra layer down.
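The "hold your own key" pattern above can be sketched as: encrypt on-prem, upload only ciphertext, and never let the key cross the firewall. The sketch below is purely illustrative (all names are mine, and the HMAC-based keystream is a stand-in; a real deployment would use AES-GCM from a vetted library and an on-prem KMS or HSM):

```python
# Illustrative "hold your own key" client-side encryption sketch.
# The key stays behind your firewall; only ciphertext goes to the CSP.
# NOTE: the HMAC-counter keystream here is a teaching placeholder, not
# production cryptography -- use AES-GCM from a vetted library instead.
import hashlib
import hmac
import secrets

ON_PREM_KEY = secrets.token_bytes(32)  # generated and kept on-premises


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream with HMAC-SHA256 in counter mode (illustrative)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]


def encrypt_for_cloud(plaintext: bytes) -> bytes:
    """Encrypt on-prem before upload; returns nonce + ciphertext."""
    nonce = secrets.token_bytes(16)
    stream = _keystream(ON_PREM_KEY, nonce, len(plaintext))
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, stream))
    return nonce + ciphertext


def decrypt_from_cloud(blob: bytes) -> bytes:
    """Decrypt data pulled back from the CSP, on-prem."""
    nonce, ciphertext = blob[:16], blob[16:]
    stream = _keystream(ON_PREM_KEY, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))
```

The point of the sketch is architectural, not cryptographic: the CSP only ever sees `encrypt_for_cloud`'s output, which is exactly the trade-off in the text, because every query against that data must now round-trip through your own network.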
Your data is not safer in your own data centre. The real issue here is an out-of-date legal approach to these problems, because IT shops are not spending time helping their legal teams understand. SaaS and CSP providers are not going to sign up for uncapped, unlimited liability for an unforeseen data breach; LET'S DISPEL that myth right here. Nor are they going to allow you a veto over their third-party subcontractors just because you signed up to wide-open provisions in your own business to win a big client.
People, cloud is like electricity: safe and easy to use, but in the long run critical to staying alive. So in your most vulnerable areas you put in a generator, maybe even a UPS.
Well, guess what: it's no different for off-prem computing (actually, it never has been). You BACK IT UP! You replicate data, with some of the replication deliberately delayed. You have a cold- and warm-start strategy, from tape if necessary.
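The deliberately delayed replication mentioned above can be sketched in a few lines. This is an in-memory toy of my own (real systems do this at the database or storage layer, e.g. lagged read replicas): the lagged copy applies each write only after a fixed number of later writes, so a corrupting write can be caught before it reaches every copy.

```python
import collections


class LaggedReplica:
    """Toy delayed-replication sketch: writes reach the replica's state
    only after `lag` further writes have arrived, giving operators a
    window to stop a bad or malicious write from propagating everywhere.
    Illustrative only; production systems use DB-level lagged replicas."""

    def __init__(self, lag: int):
        self.lag = lag
        self.pending = collections.deque()  # writes not yet applied
        self.state = {}                     # the replica's visible data

    def write(self, key, value):
        """Queue a write; apply the oldest one once it falls outside the lag window."""
        self.pending.append((key, value))
        if len(self.pending) > self.lag:
            k, v = self.pending.popleft()
            self.state[k] = v
```

With `lag=2`, a destructive third write leaves the first write applied but keeps the newest two queued, which is exactly the recovery window the text is arguing for.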
If you're re-architecting applications for the cloud, or writing new ones, build recovery and resiliency into the app.
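One minimal, common form of app-level resiliency is retrying transient cloud failures with exponential backoff and jitter. The sketch below is a generic illustration (the function name and parameters are mine, not from any particular SDK, though most cloud SDKs ship an equivalent):

```python
import random
import time


def call_with_retries(operation, attempts=5, base_delay=0.1):
    """Retry a flaky remote call with exponential backoff plus jitter.

    Illustrative sketch: `operation` is any zero-argument callable that
    may raise ConnectionError on a transient fault. The delay doubles
    each attempt, with random jitter to avoid thundering-herd retries.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure to the caller
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)
```

In a real cloud-native app you would catch your SDK's specific transient-error types, cap the total retry budget, and pair this with timeouts and circuit breakers; the backoff-with-jitter core stays the same.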
The world is generally safer in cyber land today than it ever was, though the industry as a whole faces threats to the basic premise of encryption and cipher protection; advances in machine learning of the kind behind Watson and DeepMind hint that much of it may one day be crackable.