
What a 72-Hour IT Outage Would Actually Cost You, and Why Nobody Has Run That Number


Ask your CIO what a major IT outage would cost. They will mention the SLA. Ask your CFO. They will mention cyber insurance. Ask your CISO. They will cite the last incident report. 

None of them will give you a number. Not because it doesn’t exist, but because no one has sat down and actually calculated it. 

This is that calculation. We are going to walk through every cost layer of a 72-hour enterprise IT outage: not a server blip, not a five-minute slowdown, but a genuine all-hands-on-deck infrastructure failure lasting three days. The kind that happens more often than your vendor wants to admit.

The Gartner figure of $5,600 per minute in IT downtime costs is widely cited. What isn’t cited is what it misses: the tail costs that accumulate for months after the lights come back on. We will get to those. But first, we need to understand why this number has never appeared in your board pack. 

The reason is structural. Outage costs don’t live in one place. They are distributed across the P&L, the HR budget, the legal reserve, the marketing budget, and the customer success dashboard, and nobody in your organisation is responsible for adding them up. That fragmentation is precisely what allows enterprises to systematically underinvest in IT resilience while telling themselves they have it covered. 

The Five Cost Layers Nobody Tallies Together

The first thing to understand is that a 72-hour outage doesn’t generate one bill. It generates five, and each one lands on a different desk, owned by a different person, in a different quarter. 

The first layer is direct revenue loss: halted transactions, SaaS ARR churn, e-commerce downtime. This one belongs to the CFO and CRO, and for a mid-enterprise it sits in the range of $8.1M to $14.4M for a 72-hour event. 

The second layer is employee productivity loss. Idle staff, manual workarounds, emergency all-hands meetings, and the cascading effect of teams unable to do their core work. This lives in the COO and HR budget and typically runs $1.2M to $3.8M. 

The third layer is IT incident response labour: overtime, emergency contractor rates that procurement would never approve in a normal cycle, and vendor escalation costs that your SLA didn't fully account for. This is the CIO and CTO's line, and it runs $480K to $1.1M.

The fourth layer is regulatory and legal exposure, including GDPR and HIPAA fines, SLA breach penalties, and the litigation reserve your General Counsel will quietly request. Depending on your industry and the nature of the outage, this ranges from $500K to well above $4M. 

The fifth layer is the one that never makes it into an incident review: reputational and customer trust damage. Churn uplift, PR costs, NPS recovery campaigns, and enterprise contracts lost not during the outage but in the six months after it. The 12-month tail on this layer runs $2M to $8M. 

Add all five together and the total realistic exposure for a mid-market enterprise is between $12.3M and $31.3M. 
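If you want to see how these layers roll up, here is a minimal sketch. The low and high figures are the ranges quoted above; the structure, the variable names, and the decision to model each layer as a simple range are illustrative assumptions, not a costing methodology.

```python
# Minimal sketch: summing the five cost layers of a 72-hour outage.
# The low/high figures are the mid-enterprise estimates quoted above;
# everything else (names, structure) is illustrative.

COST_LAYERS_MUSD = {
    "direct_revenue_loss":        (8.1, 14.4),   # CFO / CRO
    "employee_productivity_loss": (1.2, 3.8),    # COO / HR
    "it_incident_response":       (0.48, 1.1),   # CIO / CTO
    "regulatory_and_legal":       (0.5, 4.0),    # General Counsel
    "reputation_and_churn_tail":  (2.0, 8.0),    # 12-month tail
}

def total_exposure(layers: dict) -> tuple[float, float]:
    """Sum the low and high ends of every layer, in $M."""
    low = sum(lo for lo, _ in layers.values())
    high = sum(hi for _, hi in layers.values())
    return low, high

if __name__ == "__main__":
    low, high = total_exposure(COST_LAYERS_MUSD)
    print(f"Total realistic exposure: ${low:.1f}M to ${high:.1f}M")
    # -> Total realistic exposure: $12.3M to $31.3M
```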

Notice that four of those five cost layers are never captured in a post-incident report. They get absorbed into other budget lines, written off as one-time items, or simply never calculated. This is how organisations systematically underestimate the return on IT resilience investment. They are measuring one bill and ignoring the other four.

$5,600: average cost per minute of enterprise downtime (Gartner, 2024)
Share of IT outages that last longer than 8 hours (Uptime Institute Annual Report, 2023)
Share of enterprises that have never tested their full disaster recovery plan at production scale

The 72-Hour Escalation Map: How Costs Compound by the Hour

IT outage costs don’t scale linearly. The first eight hours are expensive. Hours 24 through 72 are catastrophic, because the nature of what you are losing changes entirely. In the first two hours, you are losing transactions. By hour 48, you are losing trust. Those are not the same thing, and they don’t have the same recovery curve.

In the first two hours, the initial alert fires, teams scramble to isolate scope, and customers begin calling support. The first SLA breach clock starts. At $5,600 per minute, two hours alone represents approximately $672,000 in direct losses before a single engineer has opened a ticket. At this stage, most organisations are still in “this might be a short one” mode. That optimism is expensive. 

Between hours two and eight, the war room opens. The CIO, CTO, CISO, and legal counsel are all pulled in. Vendor SLAs are invoked. Emergency contractors are engaged at rates that would make your procurement team wince. The internal communications team begins drafting customer notices. By hour eight, cumulative losses have reached approximately $2M, and the rate is accelerating rather than decelerating, because the organisational drag of incident management compounds over time. 

Between hours eight and 24, a straightforward infrastructure failure can become something materially worse. If customer data is affected in any way, even indirectly, the GDPR 72-hour notification window is already running. Industry media picks up the story. Key enterprise customers start sending formal written notices. Your legal team is now a full cost centre. Cumulative losses at hour 24 sit at approximately $5.8M, with the regulatory clock still running. 

Between hours 24 and 48 is the inflection point that most post-incident analyses miss entirely. The nature of your loss shifts from operational to strategic. Customers activate backup vendors, and some of those relationships will stick. Your sales pipeline freezes. Enterprise renewal conversations stall. The probability of churn in affected accounts increases by a factor of three to five. Cumulative losses at hour 48: approximately $14.4M. 

At 72 hours, the incident itself almost doesn’t matter anymore. You are managing a narrative. Analyst calls begin. Investor relations is fielding questions. The board wants a briefing. The reputational tail on a 72-hour outage averages 11 months of elevated churn and suppressed pipeline conversion. Total cumulative exposure at the 72-hour mark: $22M and climbing. 
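A rough way to visualise this compounding is to treat the checkpoints above as a cumulative curve. The sketch below uses the hour-by-hour figures quoted in this section; the straight-line interpolation between checkpoints is purely an assumption for illustration, since real losses accelerate rather than grow linearly.

```python
# Minimal sketch of the 72-hour escalation map: cumulative exposure at the
# checkpoints quoted above, with straight-line interpolation in between.
# The checkpoint figures come from the article; the interpolation is an
# illustrative assumption.

CHECKPOINTS_HOURS_MUSD = [
    (0, 0.0),
    (2, 0.672),   # ~$5,600/min x 120 min = $672,000
    (8, 2.0),
    (24, 5.8),
    (48, 14.4),
    (72, 22.0),
]

def cumulative_loss(hour: float) -> float:
    """Interpolated cumulative exposure ($M) at a given hour into the outage."""
    for (h0, c0), (h1, c1) in zip(CHECKPOINTS_HOURS_MUSD, CHECKPOINTS_HOURS_MUSD[1:]):
        if h0 <= hour <= h1:
            return c0 + (c1 - c0) * (hour - h0) / (h1 - h0)
    raise ValueError("hour must be between 0 and 72")

if __name__ == "__main__":
    for h in (2, 8, 24, 48, 72):
        print(f"hour {h:>2}: ~${cumulative_loss(h):.1f}M cumulative")
```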

“At 72 hours, the incident ends. The cost doesn’t. The average reputational tail runs for 11 months and shows up in churn data, not incident reports.” 

The Three Costs That Appear Six Months Later

Here is what almost never appears in the incident post-mortem, and yet arrives with near-certainty in the year-end numbers. These costs are invisible in incident reviews because they don’t have a direct causal timestamp. No one books them against the outage. They simply manifest. 

The first is elevated cyber insurance premiums. A 72-hour outage flags as a material incident across most carrier risk models. At your next renewal, three to ten months away, you will find your premium has increased, your coverage terms have tightened, or both. Industry data suggests organisations that have experienced a major infrastructure event see premium increases of 18 to 35% at their next renewal. For a mid-market enterprise spending $400K annually on cyber insurance, that is $72K to $140K per year in perpetuity, compounding. 

The second is IT talent attrition, and it is the cost nobody wants to discuss in a board meeting. The engineers who spent 72 hours fighting a crisis, running on adrenaline, fielding calls from frustrated executives, making impossible decisions with incomplete information, will begin receiving LinkedIn messages within weeks. Approximately 12% of the incident response team will be gone within six months. At a replacement cost of 1.5x annual salary per head for senior IT talent, an 80-person department losing 10 people represents $400K to $900K in recruitment, onboarding, and productivity ramp costs. And the institutional knowledge they take with them is not on that invoice. 

The third is board-mandated compliance remediation. After a 72-hour outage, the board will mandate action. This takes the form of third-party security audits, penetration tests, architecture reviews, new tooling requirements, and updated DR documentation that needs external validation. The average cost of this post-incident remediation programme is $250K to $1.5M. And here is the detail that stings: most of this work would have been cheaper as preventative investment than as reactive remediation. You will pay for it either way. The question is only whether you pay before or after the outage. 
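Taken together, the delayed tail can be sketched the same way. The ranges below are the figures quoted above; the $400K premium baseline is the illustrative mid-market figure used earlier, and everything else about the structure is an assumption.

```python
# Minimal sketch of the three delayed costs that surface after the incident
# review closes. Figures are the ranges quoted above; variable names and the
# $400K premium baseline are illustrative assumptions.

CYBER_PREMIUM_BASELINE = 400_000             # annual spend, illustrative mid-market figure
PREMIUM_UPLIFT = (0.18, 0.35)                # 18-35% increase at the next renewal

ATTRITION_REPLACEMENT = (400_000, 900_000)   # ~10 leavers at 1.5x salary replacement cost
REMEDIATION_PROGRAMME = (250_000, 1_500_000) # board-mandated audits, pen tests, tooling

def delayed_cost_range() -> tuple[float, float]:
    """Low/high estimate of the delayed 12-month cost tail, in dollars."""
    premium_low  = CYBER_PREMIUM_BASELINE * PREMIUM_UPLIFT[0]   # $72K/yr
    premium_high = CYBER_PREMIUM_BASELINE * PREMIUM_UPLIFT[1]   # $140K/yr
    low  = premium_low  + ATTRITION_REPLACEMENT[0] + REMEDIATION_PROGRAMME[0]
    high = premium_high + ATTRITION_REPLACEMENT[1] + REMEDIATION_PROGRAMME[1]
    return low, high

if __name__ == "__main__":
    low, high = delayed_cost_range()
    print(f"Delayed 12-month tail: ${low/1e6:.2f}M to ${high/1e6:.2f}M")
    # -> roughly $0.72M to $2.54M on top of the acute outage bill
```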

When you add these delayed costs to the acute costs, the 12-month total exposure for a mid-enterprise experiencing a 72-hour outage sits between $15M and $35M. That number is larger than most organisations’ annual IT resilience budget. It is almost certainly larger than the cost of preventing the outage in the first place.

Why No One Has Run This Number Yet

The uncomfortable truth is that this analysis is not technically difficult. The data exists in your organisation right now. Revenue figures live in the CRO’s dashboard. Headcount costs live in HR. Legal exposure is something your General Counsel can estimate in a single conversation. Insurance premiums are on a spreadsheet somewhere. 

The reason nobody has pulled it together is organisational, not analytical. It requires five people who rarely share a room to agree on a shared risk model, and it produces a number that is, frankly, alarming. Alarming numbers create uncomfortable conversations. Uncomfortable conversations create accountability. And accountability requires investment. 

There is also a subtler problem. Organisations that have experienced outages tend to round down their own history. The war room closes, the systems come back online, people go home. The total cost never gets written in one place with one number attached to it. So the institutional memory of “that bad outage two years ago” is anchored to the emotional experience of the recovery, not the financial reality of the aftermath. 

Share of enterprises that have no documented total-cost estimate for a major IT outage
4x: the average gap between what organisations estimate and what an outage actually costs
$1 : $7: the return on IT resilience investment; every dollar spent prevents seven dollars in outage costs (Ponemon Institute)

"The reason no one has run this number is that it lives in the gaps between org chart boxes, and those gaps are where your biggest financial risks always hide."

Industry Multipliers: Why Your Sector Changes Everything

The figures above are conservative mid-market baselines. Your actual exposure depends heavily on industry. Financial services and banking sit at approximately 1.9x the baseline, driven by transaction loss and regulatory exposure across FCA, PRA, and SEC frameworks. Healthcare and MedTech sit at 1.7x, where HIPAA penalties and patient safety liability compound the operational losses. SaaS and technology companies sit at 1.5x, because contractual SLA penalties and ARR churn velocity accelerate faster than in other sectors. Manufacturing sits at 1.2x, driven by production line downtime and supply chain penalties. Retail and e-commerce sit closest to the baseline, at 1.0x. 

For a financial services organisation, that 1.9x multiplier takes the mid-range 12-month exposure from $24M to approximately $45M. At that level, IT resilience stops being a technology investment and becomes a balance sheet protection strategy.
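Applying the multipliers is the simplest part of the model. The sketch below uses the sector multipliers and the $24M mid-range baseline quoted above; the sector keys and the function itself are illustrative.

```python
# Minimal sketch applying the industry multipliers to the mid-market baseline.
# Multipliers and the $24M mid-range baseline are the figures quoted above;
# names and structure are illustrative assumptions.

INDUSTRY_MULTIPLIERS = {
    "financial_services": 1.9,
    "healthcare_medtech": 1.7,
    "saas_technology":    1.5,
    "manufacturing":      1.2,
    "retail_ecommerce":   1.0,
}

BASELINE_12_MONTH_EXPOSURE_MUSD = 24.0   # mid-range of the $15M-$35M estimate

def sector_exposure(sector: str) -> float:
    """12-month exposure ($M) for a sector: baseline x industry multiplier."""
    return BASELINE_12_MONTH_EXPOSURE_MUSD * INDUSTRY_MULTIPLIERS[sector]

if __name__ == "__main__":
    for sector in INDUSTRY_MULTIPLIERS:
        print(f"{sector:<20} ~${sector_exposure(sector):.1f}M")
    # financial_services -> ~$45.6M, in line with the "approximately $45M" above
```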

Five Questions Your Board Should Be Able to Answer Today

The first question is: What is our revenue per hour for our three most IT-dependent business units? This is the foundation of every other calculation in this piece. Without it, you are estimating, and organisations that estimate tend to undercount by a factor of four. Ask your CFO and CRO to produce this figure broken down by business unit, not at the enterprise level. An aggregate figure hides the concentration risk in your most critical systems. 

The second question is: When was our DR plan last tested at full production scale, with real data volumes? Not a tabletop exercise. Not a partial simulation. A full production-scale test with current data volumes, current team composition, and a realistic failure scenario including the regulatory notification workflow. If the answer is more than 18 months ago, your DR plan reflects a system you no longer operate. 

The third question is: Do we have a documented GDPR or HIPAA notification playbook for a 72-hour data event? The GDPR 72-hour notification clock does not wait for you to establish whether an outage involved data. It starts when you become aware of a potential breach. If you do not have a documented, tested playbook, you are improvising under legal time pressure during the worst moment of your IT year. 

The fourth question is: What is our single point of failure that no one is willing to say out loud? Every enterprise has one. It is usually a legacy system nobody wants to touch, a third-party dependency that has never been evaluated for resilience, or a network architecture decision made five years ago that the current team inherited and quietly works around. Saying it out loud creates a requirement to fix it. Not saying it out loud creates the outage. 

The fifth question is: What is the total 12-month cost number for a worst-case 72-hour scenario? All five layers. All three delayed costs. Adjusted for your industry multiplier. Written in a single document, owned by a named executive, with a review date. If this number does not exist in your organisation today, it is worth two hours of your time before your next board meeting. The number will be larger than anyone in the room expects. That is the point. 

IT Resilience Is Not a Cost Centre. It Is a Hedge.

The framing of IT resilience as an operational expense is one of the most expensive misclassifications in enterprise financial thinking. When the total 12-month cost of a single 72-hour outage sits between $15M and $35M for a mid-market enterprise, the investment required to meaningfully reduce that probability should be evaluated as balance sheet protection, using the same analytical framework you would apply to any other instrument designed to reduce a known financial exposure. 

The organisations that understand this are the ones that have run the number. They have sat in a room with the CFO, the CIO, the CISO, and the General Counsel and looked at a single figure representing their worst-case downside. And then they have made a rational, financially grounded decision about how much of that exposure they are willing to carry. 

The organisations that haven’t run the number are operating on faith: faith that their SLAs are accurate, faith that their DR plan reflects their current environment, faith that the incident they haven’t had yet is evidence that they won’t have one. 

“An IT outage doesn’t end when the systems come back online. It ends when the last invoice is paid, and that is usually 14 months later.” 

The number is knowable. The question is whether your organisation is willing to look at it.

Ready to see how Zazz can transform your IT operations? Schedule a consultation with our enterprise IT specialists today. 

Author
Hemanth Kumar
VP of Development & Delivery
Hemanth Kumar is an agile delivery leader focused on driving enterprise-scale transformation through cloud-native, AI-powered, and secure digital solutions. Hemanth oversees global engineering and delivery operations, ensuring high performance, reliability, and continuous innovation for Zazz’s enterprise clients.