Why Dr. Jane Goodall’s Passing Made Me Choose Pessimism at the UCI Conference

The annual flagship conference at the UCI Center for Digital Transformation, a convergence of industry leaders and forward-thinkers, culminated in a stark question posed by Professor Vijay Gurbaxani: “Are you feeling optimistic or pessimistic?” For the first time, I found myself squarely in the pessimistic camp.
This shift wasn’t born solely from the day’s debates. Halfway through the event, the news broke of Dr. Jane Goodall’s passing. It was a deep, sinking feeling, a visceral, “oh no, who will be the voice for conservation and love for the planet now?” Her legacy is immense, a colossal beacon of hope and action, and her departure leaves a void that will be felt for decades. That loss, set against the backdrop of conversations dominated by the sheer scale of current socio-political, economic, and technological challenges, cemented my perspective.
This morning, seeing the numerous dedications to her, I found myself processing the anecdotes and insights gathered over a day and a half with global leaders at the UCI Center for Digital Transformation Conference. They coalesce into a complex picture of progress marred by profound ethical and structural questions, a technological acceleration running far ahead of our collective conscience.
The Stories Behind the Data
The conference opened with State Street Chief Economist Simona Mocuta’s view on the economy, urging us to look beyond high-level metrics. The low unemployment rate, for example, feels like a success story, but a deeper dive reveals that people are increasingly working multiple jobs and longer hours, ultimately leading to less overall consumer spending. This observation became a powerful lens through which to view the rest of the conference: what looks good on the surface often hides complex, sometimes troubling, human stories beneath.
The $400 Billion Footnote
Nowhere is this divergence more striking than in the conversation around Large Language Models (LLMs). The fact that these models, the engine of the current AI boom, were built on a mountain of uncompensated internet and private data is treated as a mere footnote. There is a deafening silence around the lack of acknowledgement, let alone compensation, for the countless creators whose work fuels this technology.
While one high-profile lawsuit did result in Anthropic agreeing to a multi-billion dollar settlement with authors, this pales in comparison to the hundreds of billions being poured into these tech companies. In fact, $400 billion has been invested in AI companies and infrastructure this year alone, an amount larger than the economies of many nations. This number is projected to grow to a staggering $7 trillion by 2030.
Why has this become acceptable? Why are we not aggressively pushing for transparency on energy consumption from the data centers powering this revolution? Why is the blueprint for future AI not built on an ethical foundation of paid, transparently-sourced data? This is not a technical challenge; it is a crisis of ethical priorities.
The Chasm of Expectations
The third major observation was a striking dichotomy among executives: a profound belief in AI’s transformative potential, a “never before seen” level of impact, paired with disappointment that progress over the past three years has not matched their expectations. Since ChatGPT first forced executives to consider LLMs in 2022, real business transformation has been slow, with tangible value realized only in narrow use cases.
It is not a binary outcome, of course, as Ewan Lowe, EVP of Engineering at Pimco, pointed out in the panel I moderated. We see steady, focused progress: companies sorting out data governance, hiring AI chiefs, embracing tools like Copilot for engineering efficiency, and testing the limits of “vibe coding.” Robin Gordon, newly appointed CDAO at Hippo Insurance, and Mary Burke, SVP of HR at Experian, both shed light on progress in two of the most critical factors in AI transformation: data and culture.