Beyond the Screen: Will AI Free Us, or Own Us?

By Meghna Sinha

What happens when artificial intelligence moves beyond the screen and into the very fabric of our daily lives? That's the profound question posed by Jony Ive and Sam Altman's ambitious 'AI companion' device. Their AI hardware venture, io, aims to introduce a "third core device": a seamless integration of AI into daily life, moving beyond our reliance on traditional screens. As reported by The Wall Street Journal on May 21, 2025 [1], this device is envisioned as "fully aware of a user's surroundings and life," with Altman himself reportedly hailing a prototype as "the coolest piece of technology the world will have ever seen." This groundbreaking ambition demands our immediate and critical examination.

Backed by OpenAI's reported $6.5 billion investment and an ambitious target of shipping 100 million units by late 2026, this initiative represents a profound commitment to reshaping human-AI interaction. Yet, as we consider such a transformative shift, it's crucial to examine whether this trajectory leads toward a more beneficial future for all, or inadvertently exacerbates the existing imbalance of power and wealth between the creators and users of technology.



OpenAI's Legal Battles: A Warning Sign

Our collective experience with technological advancement consistently shows that while innovation offers immense potential, its benefits are not always universally distributed. A growing concern within the industry is the observed concentration of power and wealth among a select group of technology leaders. This dynamic has, at times, contributed to outcomes diverging from utopian ideals, manifesting as pervasive data collection, algorithmic biases, and a gradual erosion of personal privacy. OpenAI's approach to data highlights these concerns.

Consider the recent controversies surrounding OpenAI's data practices, which have triggered numerous lawsuits across various fronts. OpenAI faces legal challenges primarily centered on two key areas:

  1. Copyright Infringement: Several prominent entities, including The New York Times, have sued OpenAI for allegedly using vast amounts of copyrighted material without permission to train its large language models. The New York Times' lawsuit, filed in December 2023, asserts that OpenAI and Microsoft "unlawfully copied millions of their articles" to train AI models capable of mimicking or directly quoting their journalism, potentially undermining their business model (as reported by outlets like NPR [2] and The New York Times [3]). Similarly, authors (e.g., the Authors Guild, Sarah Silverman, Michael Chabon) have filed class-action lawsuits in 2023 and 2024, alleging unauthorized training on their copyrighted books (detailed by the Authors Guild [4]). More recently, in April 2025, digital media giant Ziff Davis also sued OpenAI for alleged misuse of over 1.3 million copyrighted works (reported by The New York Times [5] and Reuters [6]). A federal judge's March 2025 decision to allow core copyright infringement claims in The New York Times' case to proceed marks a significant development for the entire AI industry (reported by NPR [7]). Many of these individual copyright lawsuits have been consolidated into a single case in the Southern District of New York (reported by The Guardian [8]).

  2. Data Scraping and Privacy Violations: Beyond copyright, OpenAI faces class-action lawsuits accusing it of mass data scraping and privacy infringements. A prominent example is the lawsuit filed by Clarkson Law Firm in June 2023, which alleges that OpenAI "stole private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge" to train ChatGPT and DALL-E (Clarkson Law Firm [9]). This lawsuit contends that personal data, often publicly available but shared for specific purposes, was repurposed for AI training, raising fundamental questions about consent and data ownership.

These instances highlight a prevailing tech industry inclination to prioritize rapid development and capability over stringent ethical considerations and explicit user consent. The proposed AI companion, by its very nature of being "constantly aware" of one's life—conversations, environment, activities—presents an unprecedented avenue for data aggregation. Without a fundamental recalibration of approach, such a device risks becoming the ultimate data vacuum, reinforcing existing surveillance capabilities and control. Given this track record, it's entirely rational to approach investments in such products with a degree of skepticism regarding their long-term societal impact and the protection of individual autonomy.

One might ask: if we would hesitate to buy from a vendor with poor product reviews, let alone one facing serious lawsuits, why should our standards for technology products be any lower?


Putting AI in the Human Loop and Demanding Full Data Agency

For decades, consumers have implicitly served as unpaid data generators for technology companies. Our engagement, interactions, and personal information have fueled the iterative improvement of opaque systems, often without informed consent or equitable compensation. This dynamic has, ironically, twisted the very concept of "human in the loop."

Originally, the term "human in the loop" (HITL) was a technical concept, fundamental to the design of advanced systems, particularly in reinforcement learning. It referred to the crucial input and feedback provided by human experts to train and refine AI models, ensuring accuracy and alignment with human values. A human in the loop was a deliberate, empowering design choice to make AI systems better and safer, with humans retaining agency and control.

Unfortunately, the current trajectory of technology development increasingly positions humans not as the intentional and empowered "loop" in a system's design, but as participants kept in the loop at the whim of the technology they serve. We aren't primarily providing feedback to refine the system for our benefit; instead, we are inadvertently, and often unknowingly, serving as the constant, free data stream that allows these systems to grow, evolve, and profit. This reverses the intended relationship, making us subservient to the technology rather than its masters.

In this burgeoning intelligence era, the foundational value proposition must evolve. We must flip the user data compensation equation and firmly place AI in the human loop. If pervasive AI companions necessitate ever more intimate data from users for their systems to advance, then users should be compensated for their data contributions, rather than bearing the financial cost of the device itself. This paradigm shift mirrors models seen in other sectors, such as clinical trials in medical research, where participants are compensated for their invaluable contributions and inherent risks. Why should the data fueling the AI revolution be treated differently?
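To make this flipped equation concrete, consider a toy back-of-the-envelope model, sketched in Python below. Every number in it is hypothetical and chosen purely for illustration; the point is the shape of the arithmetic, not a pricing proposal.

```python
# Toy model of "flipping the user data compensation equation".
# All figures are hypothetical, for illustration only.

DEVICE_PRICE = 499.00      # assumed up-front hardware cost, in USD
CREDIT_PER_HOUR = 0.05     # assumed credit per hour of ambient data shared

def months_to_offset(hours_shared_per_day: float) -> float:
    """Months until accumulated data credits cover the device price."""
    monthly_credit = hours_shared_per_day * 30 * CREDIT_PER_HOUR
    return DEVICE_PRICE / monthly_credit

for hours in (2, 8, 16):
    print(f"{hours:>2} h/day of shared data -> "
          f"device offset in {months_to_offset(hours):.1f} months")
```

Under these made-up numbers, even sixteen hours of daily ambient data takes roughly two years of credits to offset the hardware cost, which underscores how lopsided the current arrangement is: today, users pay the full price and surrender the data.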

Moreover, pervasive intelligent products must offer maximum optionality for users to genuinely control their data; a minimal code sketch of these principles follows the list:

  • Full Data Portability: Users must possess the unequivocal freedom to retrieve all their data, comprehensively and easily, should they choose to discontinue use of a product or service. This goes beyond simple account deletion; it signifies a complete and unimpeded data repatriation, akin to closing a bank account and withdrawing all assets.

  • Granular Consent Controls: There must be clear, intuitive, and granular controls over how data is collected, stored, processed, and utilized by these entities. This necessitates moving beyond convoluted menus and technical jargon, ensuring that settings are easily understood and managed.

  • Transparency by Design: The mechanisms of data collection and AI improvement should be transparent, comprehensible, and auditable, rather than remaining proprietary black boxes.
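As a thought experiment, here is a minimal Python sketch of what these three principles could look like at the API level. Everything in it (names, data categories, methods) is hypothetical; it illustrates privacy-by-default collection, explicit purpose-bound consent, and one-call export, not any vendor's actual implementation.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ConsentRecord:
    """One explicit, revocable grant: what is collected, for what purpose."""
    category: str   # e.g. "audio"
    purpose: str    # e.g. "on-device transcription"
    granted_at: str

@dataclass
class UserDataStore:
    """Hypothetical per-user store with consent gating and full export."""
    user_id: str
    consents: list = field(default_factory=list)
    records: list = field(default_factory=list)

    def has_consent(self, category: str, purpose: str) -> bool:
        return any(c.category == category and c.purpose == purpose
                   for c in self.consents)

    def collect(self, category: str, purpose: str, payload: dict) -> None:
        # Privacy by default: collection fails without an explicit grant.
        if not self.has_consent(category, purpose):
            raise PermissionError(f"no consent for {category}/{purpose}")
        self.records.append({"category": category, "purpose": purpose,
                             "payload": payload})

    def export_all(self) -> str:
        # Full data portability: everything the system holds, in one call,
        # in an open format the user can take elsewhere.
        return json.dumps({"user_id": self.user_id,
                           "consents": [asdict(c) for c in self.consents],
                           "records": self.records}, indent=2)

store = UserDataStore("user-123")
store.consents.append(ConsentRecord(
    "audio", "on-device transcription",
    datetime.now(timezone.utc).isoformat()))
store.collect("audio", "on-device transcription", {"transcript": "hello"})
print(store.export_all())  # the user leaves with all of it
```

The point of the sketch is the shape of the contract: denial is the default, every grant names a purpose, and export is as easy as collection.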

Crucially, this imperative for optionality extends to the most vulnerable users. Navigating the intricate privacy settings of contemporary smartphones or smart home devices often presents an insurmountable challenge for children or senior citizens. Layers of complex menus, specialized terminology, and fragmented controls across various applications create significant barriers. For a child, these settings are often incomprehensible, leaving them vulnerable. For a senior citizen, limited digital literacy, visual impairment, or cognitive changes can transform a basic privacy adjustment into an unmanageable obstacle. If an "AI companion" is truly to serve all of humanity, its privacy and data management interfaces must be intuitively designed, perhaps even voice-controlled, and easily comprehensible to individuals across all levels of technical proficiency. Default settings should prioritize privacy, and any sharing of data should require unambiguous, easily revocable consent, potentially with third-party verification for minors or those under guardianship.

This leads to a critical question: Why are we, as users, so willingly paying high prices for unfinished technology products, often accepting terms that grant companies vast control over our data? Why aren't we collectively negotiating for owning, preserving, and monetizing our data on our terms?


The Companion Paradox: Intimacy Demands Legal Safeguards

Ultimately, acquiring a device like the one proposed by Ive and Altman transcends the simple purchase of an electronic gadget. It represents a profound commitment, more comparable to opening a bank account, establishing a long-term telecommunications contract, buying property, or engaging with essential services in retail, healthcare, or education.

This is because it implies an ongoing, deeply interwoven relationship with the manufacturer, fundamentally impacting one's digital identity, personal autonomy, and the very fabric of daily life. If this device is truly to be a "companion," then the depth of its integration demands a robust legal framework.

Today's financial systems, telecommunications industries, and real estate markets function with a bedrock of consumer trust because they are underpinned by robust regulations and consumer protection mechanisms. As consumers, we possess established recourse when our financial assets are mishandled, our communication services are disrupted, or property transactions encounter issues. Similarly, sectors like retail, healthcare, and education, despite their current challenges and clear need for modernization in this intelligence era, have historically operated with established frameworks and expectations for consumer and user protection. We know, for instance, that medical records are subject to strict privacy laws, and educational institutions have protocols for student data. These sectors, while imperfect, have served us thus far because they acknowledge that the enduring nature and critical importance of their services necessitate a regulatory framework to safeguard consumer rights.

The artificial intelligence industry, particularly concerning pervasive AI companions, needs to mature into a similarly regulated space. Without such oversight, the power imbalance between the technology leaders developing these comprehensive devices and the users adopting them will inevitably widen. It is untenable to concede control over our most intimate data and digital experiences to entities operating largely outside a clearly defined, enforceable regulatory perimeter.

For these "AI companions" to genuinely serve humanity, they must operate within a framework that ensures:

  • Data Fiduciary Responsibility: Companies are legally obligated to act in the user's best interest regarding their data, transcending mere profit motives.

  • Auditable Compliance: Independent bodies can verify that stated privacy policies are rigorously implemented and maintained (a toy illustration of one such mechanism follows this list).

  • Clear Recourse Mechanisms: Users possess readily accessible legal avenues for redress should their data rights be infringed upon or if the device operates contrary to their well-being.
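To give "auditable compliance" some texture, the sketch below (Python, with hypothetical names) shows one possible mechanism: a hash-chained, append-only audit log of every data access, which an independent reviewer could verify without taking the vendor's word for it. It is a toy under stated assumptions, not a proposed standard.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Hypothetical append-only log: each entry hashes the previous one,
    so editing or deleting any record breaks the chain for an auditor."""

    def __init__(self) -> None:
        self.entries: list = []

    def record(self, actor: str, action: str, data_category: str) -> None:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"ts": datetime.now(timezone.utc).isoformat(),
                "actor": actor, "action": action,
                "data_category": data_category, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self) -> bool:
        """Recompute the whole chain; any tampering returns False."""
        prev = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.record("companion-device", "collected", "ambient audio")
log.record("cloud-service", "processed", "ambient audio")
assert log.verify()  # an independent auditor can run this check themselves
```

Pair a verifiable record like this with a data-fiduciary duty and clear recourse, and users would no longer have to take "we respect your privacy" on faith.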

This brings us to a crucial paradox: Despite massive investments in data centers, ever-larger models, and AI token factories, why has there not been a commensurate commitment to investing in privacy-enhancing tools, copyright protection tools, and the legal frameworks necessary to protect user rights?


The Real Opportunity: Democratizing Innovation

It is crucial to emphasize that the concern here is not with the innovative spirit itself. Ideas like an ambient AI companion hold immense transformative potential. However, the apprehension arises from the approach consistently taken with these innovations, which tends to reinforce and intensify the concentration of power and wealth.

A significant challenge is the accelerating race among a limited, often interconnected, group of developers and institutions to build the "next big thing," leaving diminished space for new entrants with alternative visions or business models. The notable reaction to models like DeepSeek, which demonstrated highly competitive performance at a fraction of the customary cost, exemplifies this issue. Such developments, while promising, can threaten established economic structures and the perceived necessity of hyper-capitalized ventures, prompting defensive responses from incumbents.

The culture within Silicon Valley, characterized by its often insular funding and support ecosystems, is deeply concerning. It intrinsically limits the diversity of technological solutions brought to market, and with it the likelihood that those solutions truly benefit all individuals, as Karen Hao eloquently details in her recent book, Empire of AI [10].

This dynamic is not unprecedented in history. As Daron Acemoglu and Simon Johnson meticulously illustrate in Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity [11], major technological breakthroughs have rarely translated directly into immediate, widespread improvements in human conditions. For instance, the agricultural advancements of the European Middle Ages, while technically transformative, saw benefits largely accrue to the feudal lords, enabling grand constructions, while peasant life often remained impoverished. Similarly, the initial century of the Industrial Revolution in England brought immense economic growth, yet the working class endured stagnant wages, arduous labor, and deteriorated living standards. It was only through decades of social and political struggle, collective action, and the implementation of regulatory frameworks that these innovations were democratized, and their benefits began to truly permeate society.

The pressing question now becomes: How do we significantly shorten these historical cycles of power and knowledge concentration? How can we fundamentally democratize the innovation process itself, ensuring that the profound advantages of artificial intelligence are broadly shared and lead to genuine societal betterment, rather than being confined to the hands of a privileged few? This, more than any technical hurdle, is the defining challenge of the AI era.


About the author: Meghna is a trailblazing AI executive and strategist with 26+ years of experience. As Co-Founder of Kai Roses Inc., she develops AI solutions and offers consulting services. Having delivered over $1B in responsible enterprise AI value at Fortune 50 firms, she actively champions responsible AI adoption as a Distinguished Fellow with AI2030 and promotes AI literacy and knowledge democratization as a Senior Industry Fellow at the UC Irvine Center for Digital Transformation. Connect with her on LinkedIn or Bluesky.

