Thursday, March 26, 2026

AI Yi Yi

The firm I work for has been aggressively pushing employees to find ways to use generative AI -- particularly OpenAI's ChatGPT, Anthropic's Claude, and GitHub's Copilot -- as much as possible in their work. When you buy a hammer, it's usually because you have something you want to nail. With AI, it's as if the hammer purchaser now insists that everyone in the household find ways to incorporate the hammer into their daily tasks. How can a hammer help you wash those dishes? Could anything be nailed or clawed while you walk the dog? The message from the executive suite has been so consistently insistent that there seems to be a decided effort amongst them to pretend that an employee's natural curiosity about WHY one's work habits must be redesigned to accommodate what should be a tool simply doesn't exist. I figured I was never going to get an answer from any of the geniuses upstairs, so I asked AI itself: Why are companies "expecting" employees to use AI without explaining why?

Companies are pushing AI adoption without clear, detailed explanations because they are operating under intense competitive pressure to achieve rapid efficiency gains, often driven by a "fear of missing out" (FOMO) and the need to justify heavy investment costs. While many leaders frame AI as an inevitable "partner" to employees, the rush to implement tools like ChatGPT and GitHub Copilot often results in top-down mandates that lack necessary training or a clear "why" for daily operations.

Here is a breakdown of why companies are acting this way:

1. Competitive Pressure and FOMO 

The Hype Cycle: Management is bombarded with information that AI is a "game-changer," leading them to implement it quickly to avoid appearing obsolete, often without fully understanding the practical applications themselves.

Competitor Fear: Companies fear that if their competitors adopt AI faster, they will lose market leadership and revenue.

Investor Pressure: Companies are investing heavily in AI and must prove to shareholders that these investments are being used, prompting them to mandate usage to drive up adoption metrics. 

2. "Performative" or Rushed Adoption

Mistaking Efficiency for Productivity: Many organizations equate AI "usage" with "productivity," wrongly assuming that simply using the tool equals better results.

The Sunk-Cost Fallacy: Having spent massive amounts on AI licenses, leaders feel compelled to force staff to use them, regardless of whether they add immediate value, hoping the technology will improve over time. 

3. Misguided Focus on Cost Reduction 

Cover for Cost-Cutting: Some companies are using AI as a strategic justification for layoffs or to avoid hiring, replacing human headcount with automation.

Overestimating Current Capability: Leaders often rely on hype and assume AI can do more than it actually can, leading to scenarios where staff are forced to use tools that are not yet effective for their specific roles. 

4. A Disconnect Between Leadership and Staff 

The "Training Gap": A recent AWS report noted that 56% of organizations have AI plans, but only 31% of workers feel their employer provides training, demonstrating a major gap in preparing staff for the shift.

Lack of Understanding: Executives often lack awareness of the practical, day-to-day training needs of their employees, leading to a "tool push" rather than a true adoption strategy. 

5. Hope for Self-Directed Learning

Assuming Organic Adoption: Many leaders mistakenly assume that because AI tools are "easy" to use, employees will figure them out on their own, not realizing that proper implementation requires new workflows and training. 

Note: Some companies are now moving from encouragement to mandating AI, with firms like Coinbase taking disciplinary action against employees who fail to adopt AI, which has increased employee anxiety and resistance.

I couldn't have said it better myself. I was inspired to see what AI had to say on the topic by a recent video from Senator Bernie Sanders in which he chats with Anthropic's Claude AI model about the implications and portents of the AI explosion for workers and for humanity. The video's atmospheric setting evokes stark sci-fi horror, and appropriately so: Claude's candor about the threat it poses is at times chilling.

While watching Naomi Klein in conversation with author Karen Hao at British Columbia's Chan Centre for the Performing Arts about Hao's sensational 2025 investigation Empire of AI (an extremely worthwhile hour and a half that I cannot recommend highly enough), I was inspired again by Bernie's AI chat to ask Google the following:

How is it ethical to use generative AI given its profoundly deleterious environmental effects, its unacknowledged exploitation of people, the way it is hyped as being something it is not, and the race some of the worst actors in commerce are running to achieve AGI?

The results were led off by an AI summary:

AI Overview

Using generative AI ethically is challenging due to high energy/water consumption, exploitation of training data and human labor, potential for misinformation, and the competitive "AGI race" prioritizing profit over safety. Ethical use requires high transparency, opting for models trained with consent, and strict limits on usage, according to perspectives from Medium, TechTarget, and Earth.Org.

Core Ethical Issues

Environmental Impact: Training and operating AI models require massive energy and water for cooling, often straining local resources and increasing carbon emissions.

Exploitation of Labor/Data: AI systems are often trained on massive amounts of data scraped without consent, including copyrighted creative works. Additionally, they rely on "invisible labor" from low-wage workers, often in the Global South, for labeling and moderation.

Misinformation and Hype: Generative AI is frequently marketed as more autonomous or intelligent than it is, leading to inaccuracies (hallucinations) and a lack of accountability.

The AGI Race: The commercial race to achieve Artificial General Intelligence (AGI) causes companies to prioritize speed over safety, ignoring potential harms. 

Is Ethical Use Possible?

Strict Use Cases: Using AI only as a support tool, not to replace human judgment or creative work, and never to generate misinformation.

Transparency and Consent: Selecting models that offer transparency about their training data and ensure fair compensation for creators.

Data Security: Ensuring no confidential or personal information is entered into public AI tools to prevent exploitation of user data.

Regulation: Advocating for, and abiding by, new legal frameworks that require accountability for environmental impact and protection for labor.

Some argue that because of the foundational issues with how these models are developed, truly "ethical AI" does not currently exist, as noted in a WIRED article and a blog post by Matthew Cheney.

%%%%%

Should we trust AI's (copped) opinion on the question? Either AI is wrong here -- in which case it proves the point that AI cannot be used in any way that requires trust in its output -- or it's not. And if it is right in spite of itself, then the ethical use of AI is questionable for the very reasons it states.


