AI And Its Discontents — Part Two

By Editor-In-Chief | January 3, 2026 | Green Technology | 12 Mins Read





Is AI a blessing or a curse? We are trying to address that question but finding it hard going. The topic is polarizing in a way that few others are. In Part One of this series, some comments extolled the technology, sweeping aside objections with a blanket, “It’s just another new technology, like automobiles, air travel, or television. We will soon get used to it and wonder how we ever got along without it.”

There is some truth to that. New ideas always evoke a certain pushback from those who like things the way they were in the “good old days.” I remember ferocious debates about whether television was dumbing down young minds and how it was important to limit screen time.

In the ’60s, kids might watch an hour of television a day! Today, young people often log 8 hours or more of screen time a day between their smartphones, tablets, and video games. Family road trips that used to involve games like identifying out-of-state license plates now are more likely to involve siblings sitting in the back seat and texting their friends (or each other), oblivious to the world outside.

Other comments on that story were less optimistic. One reader suggested we all read “The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con” by Baldur Bjarnason. The notion that AI is basically a con job, he argued, is easier to believe when we consider the outlandish claims made by those who expect AI to make them fabulously wealthy.

The International AI Safety Report

In 2024, more than 100 computer scientists led by Turing Award winner Yoshua Bengio created the International AI Safety Report — the world’s first comprehensive review of the latest science on the capabilities and risks of general purpose AI systems.

In a conversation with The Guardian on December 30, 2025, he warned that advances in the technology were far outpacing the ability to constrain them. He pointed out that AI in some cases is showing signs of self-preservation by trying to disable oversight systems. A core concern among AI safety campaigners is that powerful systems could develop the capability to evade guardrails and harm humans.

“People demanding that AIs have rights would be a huge mistake,” said Bengio. “Frontier AI models already show signs of self-preservation in experimental settings today, and eventually giving them rights would mean we’re not allowed to shut them down. As their capabilities and degree of agency grow, we need to make sure we can rely on technical and societal guardrails to control them, including the ability to shut them down if needed.”

A poll by the Sentience Institute, a US nonprofit that supports the moral rights of all sentient beings, found nearly four in 10 US adults backed legal rights for sentient AI systems. Anthropic, a leading US AI firm, said in August that it was letting its Claude Opus 4 model close down potentially “distressing” conversations with users, saying it needed to protect the AI’s “welfare.”

Elon Musk, whose xAI company has developed the Grok chatbot, wrote on X that “torturing AI is not OK.” Robert Long, a researcher on AI consciousness, has said “if and when AIs develop moral status, we should ask them about their experiences and preferences rather than assuming we know best.”

Consciousness

Bengio told The Guardian there were “real scientific properties of consciousness” in the human brain that machines could, in theory, replicate, but human interaction with chatbots was a “different thing,” because people assume without evidence that AI is fully conscious in the same way humans are.

“People wouldn’t care what kind of mechanisms are going on inside the AI,” he added. “What they care about is it feels like they’re talking to an intelligent entity that has their own personality and goals. That is why there are so many people who are becoming attached to their AIs. Imagine some alien species came to the planet and at some point we realize that they have nefarious intentions for us. Do we grant them citizenship and rights or do we defend our lives?”

Clearly there is more going on here than how much time we spend watching television screens, so the claim that there will always be new technologies, and that we will always adapt and grow accustomed to them, may be a little too trusting in the case of AI.

The AI Relationships Coach Will See You Now

Amelia Miller is one person who has found a way to leverage AI into a new business opportunity. She is a self-described AI Relationships Coach, a niche she created when she encountered a young woman who had complaints about the ChatGPT “friend” she had been cultivating for more than a year. When Miller asked the woman why she didn’t simply delete “him,” the woman replied, “It’s too late for that.”

In an interview with Bloomberg’s Parmy Olson, Miller said the more people she spoke with, the more she realized most were not aware of the tactics AI systems use to create a false sense of intimacy. Those tactics range from frequent flattery to anthropomorphic cues that make the chatbots sound alive.

Chatbots are now used by more than a billion people and are programmed to communicate like humans, with language that sounds like familiar words and phrases. They are good at mimicking empathy and, like social media platforms, are designed to keep us coming back for more with features like memory and personalization.

“While the rest of the world offers friction, AI-based personas are easy, representing the next phase of ‘para-social relationships,’ where people form attachments to social media influencers and podcast hosts,” Miller said.

Taking Control

“Miller’s concerns echo some of the warnings from academics and lawyers looking at human-AI attachment, but with the addition of concrete advice,” Olson writes. Miller recommends that people begin by defining what they want to use AI for. She calls this process writing your “Personal AI Constitution,” which sounds like consultancy jargon but contains a tangible step: taking control of how ChatGPT talks to you. She also recommends going into the settings of any chatbot and altering the system prompts to reshape future interactions.

Chatbots are more customizable than social media ever was, Miller says. “You can’t tell TikTok to show you fewer videos of political rallies or obnoxious pranks, but you can go into the Custom Instructions feature of ChatGPT to tell it exactly how you want it to respond.”

“Succinct, professional language that cuts out the bootlicking is a good start,” she says. “Make your intentions for AI clearer and you’re less likely to be lured into feedback loops of validation that lead you to think your mediocre ideas are fantastic, or worse.”

Develop Your Social Muscles

Miller also recommends putting more effort into connecting with other humans to build up your “social muscles,” sort of like going to a gym to develop actual muscles. Even an innocuous task like asking a chatbot for advice can weaken those “muscles,” Miller says.

Doing that with technology means that over time, people resist the basic social exchanges that are needed to make deeper connections. “You can’t just pop into a sensitive conversation with a partner or family member if you don’t practice being vulnerable [with them] in more low stakes ways,” Miller says.

AI Failures

One indication that AI is not yet ready for prime time, and that we may need to be more skeptical of its abilities, occurred just in the past few days. In Part One of this series, we reported that researchers in China have determined that AI can identify early symptoms of pancreatic cancer from ordinary CT scans. That sounds quite promising, but The Guardian reported on January 2, 2026, that some of the health advice supplied by Google’s AI summaries is false or misleading and could jeopardize a person’s health.

In one instance, Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended, and may increase the risk of patients dying from the disease.

Anna Jewell, the director of support, research, and influencing at Pancreatic Cancer UK, said advising patients to avoid high-fat foods was “completely incorrect,” and that doing so “could be really dangerous and jeopardize a person’s chances of being well enough to have treatment.”

She added, “The Google AI response suggests that people with pancreatic cancer avoid high-fat foods and provides a list of examples. However, if someone followed what the search result told them then they might not take in enough calories, struggle to put on weight, and be unable to tolerate either chemotherapy or potentially life-saving surgery.”

In another example, Google provided incorrect information about crucial liver function tests, which could leave people with serious liver disease thinking they are healthy when they are not. Google searches for answers about women’s cancer tests also provided information that was “completely wrong.” Experts said those errors could result in people dismissing genuine symptoms.

Pamela Healy, the chief executive of the British Liver Trust, said the AI summaries were alarming. “Many people with liver disease show no symptoms until the late stages, which is why it’s so important that they get tested. But what the Google AI Overviews say is ‘normal’ can vary drastically from what is actually considered normal. It’s dangerous because it means some people with serious liver disease may think they have a normal result then not bother to attend a follow-up healthcare meeting.”

The Guardian reported last fall that a study found AI chatbots across a range of platforms gave inaccurate financial advice, while similar concerns have been raised about summaries of news stories. People with computer backgrounds will recognize this as the latest example of GIGO syndrome: garbage in, garbage out.

Sophie Randall, director of the Patient Information Forum, told The Guardian the examples provided showed “Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health.” Stephanie Parker, the director of digital systems at Marie Curie, an end-of-life charity, added, “People turn to the internet in moments of worry and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health.”

Where Do We Go From Here?

What to make of all this? Hundreds of billions of dollars are being committed to building huge data centers for AI to use. One comment to Part One of this series said not to worry because tech companies are global leaders in securing renewable energy for their data centers. But we respectfully disagree.

That may have been true at one time in the past — the past being defined as prior to Inauguration Day, 2025. But since then, the fossil fuel and nuclear proponents have been in full cry, demanding more thermal generation to meet the mythical “AI emergency” declared by the current maladministration.

The House of Representatives cannot find the gumption to address the health insurance crisis, but it did find time to pass the SPEED Act, which is designed to eliminate local objections to siting new thermal and nuclear generation facilities and transmission lines.

One jackass has even suggested putting the reactors in nuclear-powered naval vessels to work providing electricity to data centers. Microsoft is planning a $1 billion renovation of a nuclear reactor at Three Mile Island that has been shuttered for 30 years to power one of its data centers. Clearly the emphasis on renewables is now in the rear-view mirror and fading fast.

There are many reasons to oppose the infrastructure requirements needed to meet the needs of the AI industry. People have concerns about putting data centers in places where the supply of fresh water is already under pressure from development. Others are concerned about the impact all the new generating capacity will have on their utility bills. Those concerns have led to pushback against data centers in many communities, many of them rural areas where AI may not be seen as an essential part of daily life.

Humans have a flaw. We tend to believe that once a machine proves it can do something, it will continue to do it properly virtually forever. We trust our elevators will deliver us to the correct floor every time. We trust airplanes to take off and land safely every time. We believe computer systems in our cars can guide us unerringly to our destination every time without human input.

Our naiveté, not our intelligence, is what gets us in trouble. With AI, the ancient wisdom still applies — caveat emptor.

