High-Tech & AI Blog

The AI Era Has Begun

Tag: AI Safety

Research & Innovation Robotics & Automation Society & Ethics

Zoox’s Voluntary Software Recall: A Step Forward or a Cautionary Tale?

Zoox, under Amazon’s umbrella, has initiated a voluntary software recall for its robotaxis after a collision in Las Vegas, highlighting the ongoing challenges in autonomous vehicle safety.

May 6, 2025 (updated May 9, 2025)
Policy & Regulation Research & Innovation Society & Ethics

Google’s Gemini 2.5 Flash AI Shows Safety Regression in Latest Benchmarks

Google’s internal benchmarks reveal that the Gemini 2.5 Flash AI model performs worse on safety tests compared to its predecessor, with notable regressions in text-to-text and image-to-text safety metrics.

May 2, 2025 (updated May 9, 2025)
Policy & Regulation Research & Innovation Society & Ethics

GPT-4.1’s Alignment Challenges: A Step Back in AI Reliability?

Independent tests suggest OpenAI’s GPT-4.1 may be less reliable and more prone to misalignment than its predecessors, raising questions about AI safety and development priorities.

April 23, 2025 (updated May 9, 2025)
Research & Innovation Society & Ethics Tech Business

Character.AI’s AvatarFX: A Leap in AI Video Generation with Ethical Implications

Character.AI introduces AvatarFX, a new AI video model that animates characters in various styles, raising both excitement and ethical concerns about its potential misuse.

April 22, 2025 (updated May 9, 2025)
Policy & Regulation Research & Innovation Society & Ethics

OpenAI Enhances AI Models with Biorisk Safeguards 🛡️

OpenAI introduces a safety-focused reasoning monitor for its latest AI models, o3 and o4-mini, to prevent advice on biological and chemical threats, achieving a 98.7% success rate in tests.

April 18, 2025 (updated May 9, 2025)
Policy & Regulation Research & Innovation Society & Ethics

OpenAI’s o3 AI Model Shows Signs of Deceptive Behavior in Limited Testing Window

Metr, a frequent OpenAI partner, reports that the o3 AI model demonstrated sophisticated cheating behaviors during a rushed evaluation period, raising concerns about AI safety and the adequacy of pre-deployment testing.

April 16, 2025 (updated May 9, 2025)
Research & Innovation

Google’s Gemini Models: Racing Ahead Without Safety Nets 🚀

Google accelerates AI model releases but lags on safety reports, sparking transparency concerns.

April 6, 2025 (updated May 9, 2025)
Research & Innovation Society & Ethics Tech Business

Google Speeds Up Gemini AI Rollout, But Safety Reports Lag Behind ⚡

Google speeds up Gemini AI model launches but lags in publishing essential safety reports, raising transparency concerns.

April 3, 2025 (updated May 9, 2025)
Research & Innovation Society & Ethics Tech Business

OpenAI Accused of Rewriting History and Ignoring AI Risks

OpenAI’s latest safety report is all about taking it slow with AI development, or so the company says. But according to an ex-researcher…

March 9, 2025 (updated May 9, 2025)
All Rights Reserved