By Meghna Sinha

Beyond the Screen: Will AI Free Us, or Own Us?


What happens when artificial intelligence moves beyond the screen and into the very fabric of our daily lives? That is the profound question posed by Jony Ive and Sam Altman's ambitious "AI companion" device. Their AI hardware venture, io, aims to introduce a "third core device": a seamless integration of AI into daily life that moves beyond our reliance on traditional screens. As reported by the Wall Street Journal on May 21, 2025, the device is envisioned as "fully aware of a user's surroundings and life," with Altman himself reportedly hailing a prototype as "the coolest piece of technology the world will have ever seen." This groundbreaking ambition demands our immediate and critical examination.

Backed by OpenAI's reported $6.5 billion investment and an ambitious target of shipping 100 million units by late 2026, this initiative represents a profound commitment to reshaping human-AI interaction. Yet, as we consider such a transformative shift, it is crucial to examine whether this trajectory leads toward a more beneficial future for all, or inadvertently exacerbates the existing imbalances of power and wealth between the creators and users of technology.



OpenAI's Legal Battles: A Warning Sign

Our collective experience with technological advancement consistently shows that while innovation offers immense potential, its benefits are not always universally distributed. A growing concern within the industry is the observed concentration of power and wealth among a select group of technology leaders. This dynamic has, at times, contributed to outcomes diverging from utopian ideals, manifesting as pervasive data collection, algorithmic biases, and a gradual erosion of personal privacy. OpenAI's approach to data highlights these concerns.

Consider the recent controversies surrounding OpenAI's data practices, which have triggered numerous lawsuits across various fronts. OpenAI faces legal challenges primarily centered on two key areas:

  1. Copyright Infringement: Several prominent entities, including The New York Times, have sued OpenAI for allegedly using vast amounts of copyrighted material without permission to train its large language models. The New York Times' lawsuit, filed in December 2023, asserts that OpenAI and Microsoft "unlawfully copied millions of their articles" to train AI models capable of mimicking or directly quoting their journalism, potentially undermining their business model (as reported by outlets including NPR and The New York Times). Similarly, authors and author organizations (e.g., the Authors Guild, Sarah Silverman, Michael Chabon) filed class-action lawsuits in 2023 and 2024 alleging unauthorized training on their copyrighted books (detailed by the Authors Guild). More recently, in April 2025, digital media giant Ziff Davis also sued OpenAI for alleged misuse of over 1.3 million copyrighted works (reported by The New York Times and Reuters). A federal judge's March 2025 decision to allow core copyright infringement claims in The New York Times' case to proceed marks a significant development for the entire AI industry (reported by NPR). Many of these individual copyright lawsuits have been consolidated into a single case in the Southern District of New York (reported by The Guardian).

  2. Data Scraping and Privacy Violations: Beyond copyright, OpenAI faces class-action lawsuits accusing it of mass data scraping and privacy infringements. A prominent example is the lawsuit filed by Clarkson Law Firm in June 2023, which alleges that OpenAI "stole private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge" to train ChatGPT and DALL-E (Clarkson Law Firm). This lawsuit contends that personal data, often publicly available but shared for specific purposes, was repurposed for AI training, raising fundamental questions about consent and data ownership.

These instances highlight a prevailing tech industry inclination to prioritize rapid development and capability over stringent ethical considerations and explicit user consent. The proposed AI companion, by its very nature of being "constantly aware" of one's life—conversations, environment, activities—presents an unprecedented avenue for data aggregation. Without a fundamental recalibration of approach, such a device risks becoming the ultimate data vacuum, reinforcing existing surveillance capabilities and control. Given this track record, it's entirely rational to approach investments in such products with a degree of skepticism regarding their long-term societal impact and the protection of individual autonomy.

One might ask: if we hesitate to engage with a vendor known for poor product reviews, let alone one facing serious lawsuits, why should our standards for technology products be any lower?


Putting AI in the Human Loop and Demanding Full Data Agency

For decades, consumers have implicitly served as unpaid data generators for technology companies. Our engagement, interactions, and personal information have fueled the iterative improvement of opaque systems, often without informed consent or equitable compensation. This dynamic has, ironically, twisted the very concept of "human in the loop."

Originally, the term "human in the loop" (HITL) was a technical concept, fundamental to the design of advanced systems, particularly in reinforcement learning. It referred to the crucial input and feedback provided by human experts to train and refine AI models, ensuring accuracy and alignment with human values. A human in the loop was a deliberate, empowering design choice to make AI systems better and safer, with humans retaining agency and control.
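To make that original, empowering sense of the term concrete, here is a minimal sketch of the classic human-in-the-loop pattern: the model defers its low-confidence decisions to a person, and that person's corrections become the next round of training data. All function names are hypothetical placeholders, not tied to any specific product or framework.

```python
# Minimal, illustrative human-in-the-loop (HITL) feedback cycle.
# Hypothetical names; stand-ins mark where a real model and reviewer would go.

def model_predict(example):
    """Stand-in for a trained model: returns (label, confidence)."""
    return ("spam", 0.55)  # placeholder prediction

def ask_human(example, proposed_label):
    """Stand-in for a human reviewer who confirms or corrects a label."""
    corrected = input(f"Model says '{proposed_label}' for {example!r}. Correct label? ")
    return corrected or proposed_label  # empty input keeps the model's label

def hitl_cycle(examples, confidence_threshold=0.8):
    """Route low-confidence predictions to a human; their corrections
    are collected as new training data, keeping people in control."""
    reviewed = []
    for ex in examples:
        label, confidence = model_predict(ex)
        if confidence < confidence_threshold:
            label = ask_human(ex, label)  # human judgment overrides the model
        reviewed.append((ex, label))
    return reviewed  # fed back into the next training round

if __name__ == "__main__":
    print(hitl_cycle(["Win a free cruise today!"]))
```

In this arrangement the human is the deliberate checkpoint in the system's design, which is precisely the relationship the next paragraph argues has been inverted.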

Unfortunately, the current trajectory of technology development increasingly positions humans not as the intentional and empowered checkpoint in a system's design, but as components kept in the loop at the whim of the technology they serve. We aren't primarily providing feedback to refine the system for our benefit; instead, we are inadvertently, and often unknowingly, serving as the constant, free data stream that allows these systems to grow, evolve, and profit. This reverses the intended relationship, making us subservient to the technology rather than its masters.

In this burgeoning intelligence era, the foundational value proposition must evolve. We must flip the user data compensation equation and firmly place AI in the human loop. If pervasive AI companions necessitate ever more intimate data from users for their systems to advance, then users should be compensated for their data contributions, rather than bearing the financial cost of the device itself. This paradigm shift mirrors models seen in other sectors, such as clinical trials in medical research, where participants are compensated for their invaluable contributions and inherent risks. Why should the data fueling the AI revolution be treated differently?

Moreover, pervasive intelligent products must offer maximum optionality for users to genuinely control their data (an illustrative sketch follows the list below):

  • Full Data Portability: Users must possess the unequivocal freedom to retrieve all their data, comprehensively and easily, should they choose to discontinue use of a product or service. This goes beyond simple account deletion; it signifies a complete and unimpeded data repatriation, akin to closing a bank account and withdrawing all assets.

  • Granular Consent Controls: There must be clear, intuitive, and granular controls over how data is collected, stored, processed, and utilized by these entities. This necessitates moving beyond convoluted menus and technical jargon, ensuring that settings are easily understood and managed.

  • Transparency by Design: The mechanisms of data collection and AI improvement should be transparent, comprehensible, and auditable, rather than remaining proprietary black boxes.
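To make those three requirements tangible, here is a speculative sketch of how portability, granular consent, and auditable transparency might surface as an API. It is purely illustrative: every class, field, and method name is hypothetical, and no existing device or vendor exposes such an interface.

```python
# Hypothetical data-agency interface: portability, consent, transparency.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ConsentSettings:
    """Granular, per-purpose consent flags the user can toggle directly."""
    allow_audio_capture: bool = False
    allow_location_logging: bool = False
    allow_model_training: bool = False  # explicit opt-in, not buried in a menu

@dataclass
class DataAgencyAccount:
    user_id: str
    consent: ConsentSettings = field(default_factory=ConsentSettings)
    audit_log: list = field(default_factory=list)

    def record(self, event: str) -> None:
        """Transparency by design: every collection event is logged and auditable."""
        self.audit_log.append((datetime.utcnow().isoformat(), event))

    def export_all_data(self) -> dict:
        """Full data portability: everything, in one machine-readable bundle."""
        return {
            "user_id": self.user_id,
            "consent": vars(self.consent),
            "audit_log": list(self.audit_log),
        }

    def close_and_withdraw(self) -> dict:
        """Like closing a bank account: hand the data back, then erase it."""
        bundle = self.export_all_data()
        self.audit_log.clear()
        return bundle
```

The point of the sketch is the shape of the contract, not the implementation: export and withdrawal are first-class operations, consent is explicit and per-purpose, and the audit trail belongs to the user rather than to the vendor.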