Ransomware is no longer an abstract risk. It hits organizations in every industry: hospitals, automakers, retailers and governments alike (see my previous blog: 'From incident to bankruptcy'). The question is no longer whether you will be hit, but how you will survive an attack.

Within hours, an entire IT environment can be down, and partners and customers throughout the supply chain grind to a halt with it. That can put the jobs of thousands of employees at risk. Sometimes the government even has to step in with billions in aid, as happened recently at Jaguar Land Rover.
So the challenge is bigger than your own organization alone: how do you keep the chain afloat? How do you ensure that systems can fail over, reboot and recover? And how do you prepare for the impossible?
In my career, I have had the opportunity to test and rehearse these kinds of "disaster" scenarios with many clients. The dry run alone often proved shocking enough. But in my experience, the organizations that took this threat seriously and put it openly on the agenda ended up suffering the least.
An attack can mean the complete loss of your data center or cloud environment. You must then be prepared for a greenfield reboot: a restart as if you had to rebuild your production environment from scratch, on an empty field. Especially if your environment has been running for years, that knowledge is often no longer readily available - and, unfortunately, inadequately documented.
Every server, network, database and storage system must be reconfigured. That requires not only up-to-date backups and disaster recovery sites, but also securely stored, offline and usable copies of configurations, procedures and startup protocols - plus the experts who can actually use them. That is one of the biggest problems at Jaguar Land Rover right now: 'Who remembers how it was...?'
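To make this concrete, here is a minimal sketch of what such an offline configuration bundle could look like. The directory name, file layout and script are my own assumptions, not a prescription; the point is that configurations and runbooks exist as verifiable copies outside the environment they describe.

```python
#!/usr/bin/env python3
"""Minimal sketch: bundle configuration exports for offline storage.

Hypothetical example - 'config_exports' and the archive name are assumptions.
A greenfield reboot needs offline, verifiable copies of configurations,
procedures and startup protocols; this script packages and checksums them.
"""
import hashlib
import json
import tarfile
from datetime import datetime, timezone
from pathlib import Path

EXPORT_DIR = Path("config_exports")  # e.g. dumps of network, DB, hypervisor configs and runbooks
ARCHIVE = Path(f"offline_bundle_{datetime.now(timezone.utc):%Y%m%d}.tar.gz")

def sha256(path: Path) -> str:
    """Checksum each file so the offline copy can be verified before it is trusted."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

EXPORT_DIR.mkdir(exist_ok=True)

# Manifest with timestamps and checksums, stored alongside the exports.
manifest = {
    "created_utc": datetime.now(timezone.utc).isoformat(),
    "files": {str(p): sha256(p) for p in EXPORT_DIR.rglob("*") if p.is_file()},
}
(EXPORT_DIR / "MANIFEST.json").write_text(json.dumps(manifest, indent=2))

with tarfile.open(ARCHIVE, "w:gz") as tar:
    tar.add(EXPORT_DIR, arcname=EXPORT_DIR.name)

print(f"Wrote {ARCHIVE} - copy it to offline media.")
```

Just as important as producing the bundle is rehearsing the restore from it: an offline copy nobody has ever used is only marginally better than no copy at all.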
Such an exercise - one in which you assume total destruction - is often dismissed as excessive. But anyone who tries it once - and I have run many of these exercises - realizes how unimaginable and far-reaching it is.
A ransomware attack is not an incident; it is digital warfare. Not only your own IT, but the entire supply chain must be able to survive it.
Regular wargames also build routine and make chaos manageable. "What if" scenarios should assume the blackest case: what if a single catastrophic event - a plane crash, a fire or sabotage - wipes out your entire data center? Can you, in an emergency, shut your systems down in a controlled manner without ending up with corrupted data? The experience at Jaguar Land Rover shows that a hasty - and therefore partially uncontrolled - shutdown during the attack actually caused additional confusion and damage.
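What a controlled shutdown can look like in practice: below is a minimal sketch, assuming a hypothetical four-tier environment, in which the dependency order is recorded once and then used to stop systems in the right sequence. The service names and the stop_service stub are illustrative only; the same order, reversed, doubles as a startup sequence for a greenfield reboot.

```python
"""Minimal sketch: shut systems down in dependency order.

Hypothetical example - service names and the stop_service stub are assumptions.
The idea is to encode 'who depends on whom' once, so an emergency shutdown
follows a controlled order instead of pulling the plug and corrupting data.
"""
from graphlib import TopologicalSorter

# Each system lists what it depends on: the app tier needs the database, the database needs storage.
depends_on = {
    "web-frontend": {"app-tier"},
    "app-tier": {"database"},
    "database": {"storage"},
    "storage": set(),
}

def stop_service(name: str) -> None:
    # Placeholder: in practice this would call your orchestration or automation tooling.
    print(f"stopping {name} ... stopped")

# static_order() yields dependencies first (storage, database, ...);
# for shutdown we want the reverse: stop dependents before what they rely on.
startup_order = list(TopologicalSorter(depends_on).static_order())
for service in reversed(startup_order):
    stop_service(service)
```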
Many organizations find it difficult to imagine this worst-case scenario. During exercises, I often heard: "That won't happen, it's adequately secured." But the simple counter-question "Have you ever tested what happens if you just cut the power?" usually went unanswered. Do you dare - as an exercise - to press the red emergency button in your data center?
The current waves of ransomware show that precisely the unthinkable scenarios can become reality.
During the 9/11 attacks on the Twin Towers, EMC² customers not only lost lives, but also entire data centers housed inside the towers - which literally went down with them.
Some organizations had fallback locations outside New York and were back online within hours. Most, however, opted for a fallback close to the city itself - and got stuck because New York was shut down, power supplies were unstable and locations were inaccessible.
One customer had even housed its fallback in the second tower. The unthinkable scenario became reality, and the company ceased to exist.
EMC² immediately made hundreds of employees available to help. Together with customers, we quickly got many operations running again. Within a week, most businesses were operational enough to avert bankruptcy.
The lesson: do not count blindly on nearby infrastructure, but practice scenarios in which external help and rapid support are indispensable - and know in advance who can provide them.
Major cloud vendors have already invested heavily in continuity and security, yet incidents keep increasing. Cloud guarantees are finite, especially in supply chains with many dependencies.
One example: the hack at ticket provider Collins brought entire airports to a standstill. Falling back to manual processes proved impossible. It shows once again that a chain is only as strong as its weakest link - and attackers increasingly aim for exactly that link.
In my career in the aviation and security world, I learned one thing above all: compartmentalization is and remains the core of any security. In the physical world, that means locks, lockable zones and controlled fallback. In the digital and virtual world, it is no different: logically separated systems, fully redundant paths, and controlled shutdown and fallback scenarios.
This is the only way to prevent a single incident from crippling an entire organization or chain.
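As a minimal illustration - with hypothetical zone names and flows of my own choosing - digital compartmentalization boils down to declaring explicitly which zones may talk to each other and denying everything else by default:

```python
"""Minimal sketch: compartmentalization as an explicit, default-deny zone policy.

Hypothetical example - zone names and allowed flows are assumptions. The point
is that separation between compartments is declared explicitly, so one
compromised zone cannot reach everything else by default.
"""
# Only explicitly listed flows are allowed; everything else is denied.
ALLOWED_FLOWS = {
    ("office-it", "dmz"),
    ("dmz", "production"),
    # note: no direct path from office-it to production, and nothing reaches the backup zone
}

def is_allowed(src_zone: str, dst_zone: str) -> bool:
    return (src_zone, dst_zone) in ALLOWED_FLOWS

for flow in [("office-it", "production"), ("dmz", "production"), ("production", "backup")]:
    verdict = "allow" if is_allowed(*flow) else "deny (default)"
    print(f"{flow[0]} -> {flow[1]}: {verdict}")
```

In real environments this policy lives in firewalls, network fabric and identity systems rather than in a script, but the principle of explicit, default-deny separation between compartments is the same.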
Preparing for ransomware means: assuming the worst, designing multiple fallback processes and making your architecture antifragile.
Antifragile means: designed for disruption, but capable of coming back stronger. The key strategic choice is: what can safely remain in the central cloud and what must be truly antifragile? Which data, applications, communications and employees are crucial to survival? That insight alone provides enormous added value.
Digital resilience requires more than technology. It is also about sovereignty, decision-making and communication. Who decides? Who executes? How do you communicate when everything is down? You can only discover this by practicing. Not only with fire drills, but also with IT failure scenarios and regular cyber wargames.
Organizations that do this seriously not only survive - they emerge even stronger.
