Weighing the Risks #28

We recently conducted a LinkedIn poll asking whether several recent events should be considered black swan events. Of the 144 respondents, 50% believe that none of the incidents should be considered a true black swan event, 36% voted for the CrowdStrike incident, 14% for the Silicon Valley Bank failure, and none for the OpenAI leadership crisis.

When we posed the same question to several generative AI tools (ChatGPT, Grok, Claude, Gemini, and Copilot), all of them agreed with the survey respondents that the OpenAI leadership crisis was not a black swan event. ChatGPT and Copilot answered that none of the events were black swan events, Gemini and Claude both determined that the Silicon Valley Bank failure was, and only Grok answered that the CrowdStrike outage qualified.

ChatGPT 4o: None of the Above

A Black Swan event needs to be highly improbable and largely unforeseen, with extreme impact. None of these incidents truly qualify under that definition. While all were impactful, they all involved risk factors or systemic issues that could be anticipated to some degree. There have been precedents in each of these domains—software failures, bank collapses, and leadership crises—making them not sufficiently rare or unpredictable to be categorized as Black Swan events.

Grok 2 mini: The CrowdStrike Outage

Given these analyses, the CrowdStrike outage stands out as the most akin to a black swan event, based on information from X posts and a general understanding of black swan events. It was unexpected, its impact was widespread across multiple sectors, and while cybersecurity threats are known, the scale of this particular outage was not anticipated. The event aligns closely with the criteria of being highly improbable, having a significant impact, and being subject to hindsight bias.

Claude 3.5 Sonnet: Silicon Valley Bank Failure

The Silicon Valley Bank failure in March 2023 most closely resembles a black swan event among the options provided. It was an outlier event with extreme impact that was rationalized after the fact, fitting the key characteristics of a black swan as defined by Nassim Nicholas Taleb.

Gemini: Silicon Valley Bank Failure

Based on the given options, the Silicon Valley Bank Failure is most likely to be considered a black swan event. It was unexpected, had a significant impact on the financial system and technology sector, and was difficult to predict.

Copilot: None of the Above

None of the incidents fully meet the criteria of a black swan event. Each had elements of predictability and did not entirely fit the definition of being an extreme outlier that was beyond the realm of normal expectations.

The variance in responses across the generative AI platforms is worth further investigation to better understand why results differ, how the models were trained, and what transparency is available for interpreting their outputs.

Black swan events have the following characteristics:

  1. The disproportionate role of high-profile, hard-to-predict, and rare events that are beyond the realm of normal expectations in history, science, finance, and technology.

  2. The non-computability of the probability of consequential rare events using scientific methods (owing to the very nature of small probabilities).

  3. The psychological biases that blind people, both individually and collectively, to uncertainty and to the substantial role of rare events in historical affairs.


Request more information on decision-making risk, DelCreo’s Risk Universe and risk assessment services.

As a reminder, here are the Risk Universe categories we leverage to understand and tackle risk:

  • External Risk

  • Governance Risk

  • Strategic Risk

  • Product Risk

  • Business Operations Risk

  • Legal & Compliance Risk

  • Financial Risk

  • Technology Risk

We leverage our understanding of risk maps and risk universes to better advise our clients on strategic business decisions and to optimize the management of risk throughout the enterprise.

Weekly Highlights

Four Key Ideas:

  • The surge in AI-related data centers will stress electricity generation and grids globally.

  • Some companies have started to obscure the real carbon footprint of AI.

  • The market for used electric vehicles is surging, with more buyers interested in used EVs.

  • The FTC is planning to crack down on companies that create fake or misleading reviews.

Strategic Risk

  • How AI Will Impact Governance, Risk And Compliance Programs

    • The adoption of AI in governance, risk, and compliance presents significant reputational risks, especially if ethical issues like bias, lack of transparency, or data privacy breaches emerge, potentially leading to public backlash and loss of trust.

    • Overreliance on AI for decision-making, without adequate human oversight and robust policies, could expose organizations to operational failures, legal liabilities, and the need for significant investments in AI-related skills to ensure effective and responsible implementation.

  • AI spending to reach $632 billion in the next 5 years

    • The rapid surge in AI spending presents significant risks for businesses, as many struggle to integrate AI effectively into their operations, leading to costly failures and potential setbacks in achieving desired outcomes.

    • AI technologies are transforming industries at a fast pace, creating competitive pressures; however, overreliance on AI investments, as seen with Microsoft and others, could lead to vulnerabilities if the anticipated returns on these technologies are not realized, especially amidst investor concerns.

  • AI’s Insatiable Need for Energy Is Straining Global Power Grids

    • The rapid expansion of data centers driven by AI's massive energy needs threatens global energy grids and renewable energy targets.

    • The surging demand for data centers may create operational bottlenecks, increased energy costs, and long-term delays in connecting facilities to power grids, challenging the scalability of AI operations and exposing companies to infrastructure constraints and market competition for energy resources.

Business Operations Risk

  • Tesla’s Steep Price Cuts Help Get the Used EV Market Humming

    • Tesla's price cuts on new electric vehicles have significantly influenced the used EV market, creating a larger supply of affordable second-hand models and reshaping dealer inventory dynamics, leading to higher demand for used EVs relative to gas-powered vehicles.

  • Apple’s Hold on the App Store Is Loosening, at Least in Europe

    • Apple's loosening control over the App Store in Europe, driven by new regulations like the Digital Markets Act, exposes the company to operational risks from increased third-party reliance and potential for ongoing disputes with developers like Epic Games and Spotify over fees and access.

Legal & Compliance Risk

  • How Tech Companies Are Obscuring AI’s Real Carbon Footprint

    • Tech companies like Amazon, Microsoft, and Meta may be obscuring their carbon footprints through unbundled renewable energy certificates (RECs), leading to potentially misleading disclosures that could result in reputational and regulatory risks as carbon accounting standards evolve.

    • The growing demand for AI is driving up energy consumption in data centers, increasing reliance on carbon-intensive infrastructure, which tech companies may inadequately address, posing risks to long-term sustainability goals and operational continuity.

  • Tech Companies Furious at New Law That Would Hold Them Accountable When Their AI Does Bad Stuff

    • California’s SB 1047 poses significant legal and compliance risks for AI companies by imposing strict safety testing, third-party audits, and accountability measures for harmful AI outputs.

    • The bill's provisions, including mandatory kill switches and whistleblower protections, increase the ethical and privacy risks for AI developers, as it requires transparency and adherence to stringent safety protocols.

  • Tech companies rally behind FTC’s crackdown on fake reviews

    • The FTC's new rule targeting fake reviews enables the agency to impose civil penalties; this rule is supported by tech companies like Google, which seek to bolster their own efforts against fake reviews and avoid liability through Section 230 protections.

    • The rule imposes stricter ethical standards on businesses to ensure authenticity in consumer reviews, prohibiting the use of AI-generated or deceptive content.

Technology Risk

  • This App is Rejecting Generative AI Altogether

    • Procreate’s decision to reject generative AI preserves the integrity of its platform, avoiding issues seen in other companies, such as user backlash over AI's perceived exploitation of intellectual property.

    • Procreate maintains a stable, dependable product experience, reducing the risk of platform instability and reputational damage associated with AI's often unfinished or unpredictable behavior, as seen in competitors like Google and Adobe.

  • Should You Make Up Personal Information When Signing Up With Websites?

    • The use of false information by users increases the risk of platform abuse, potentially undermining identity verification processes and complicating account recovery mechanisms.

  • Sam Altman’s Worldcoin Is Battling With Governments Over Your Eyes

    • Worldcoin's reliance on biometric iris-scanning technology for identity verification raises significant privacy and security concerns, particularly around the potential misuse of sensitive data and the creation of a global biometric database with limited oversight.

    • Worldcoin faces global regulatory challenges, including investigations and suspensions due to data protection concerns, raising questions about the scalability and long-term viability of its cryptocurrency-based identity verification system.