Nily AI Newsletter, Edition 1

AI models hallucinate, Gemini skips safety, and dolphins might talk?! Catch this week's weirdest, smartest AI news fast.

Nily Team · 2025-04-24

The Wild Side of AI, Safety Gaps, Dolphin Talks, and More!

Welcome to Nily AI Newsletter—your no-fluff, scroll-friendly scoop on all things AI. We know the news cycle is chaotic (and kinda overwhelming), so we made this newsletter short, sweet, and actually fun to read. For the first round of this newsletter, we’ll break it down into three bite-sized sections:

Is That How AI Works? – Wild model tests, quirky features, and headlines you should know.

Niche but Neat – Interesting news for specific audiences that's good to know.

What’s cooking in the oven – Cool projects, breakthroughs, and the "next big thing" in motion.

Let’s get into it.

Is That How AI Works?

Where are the transparency and safety evaluations for Gemini 2.5 Pro?

Google’s Gemini 2.5 Pro report just dropped—but forgot the safety section. Experts are calling it out for being vague and late, with no real update since June 2024. Oh, and Gemini 2.5 Flash? Still waiting on that report, too. Not just a Google thing, though—Meta and OpenAI are also playing hide-and-seek with their safety details. (Source)

Apparently, OpenAI's New Models Hallucinate!

OpenAI’s new models, o3 and o4-mini, are smart—but they still make stuff up. They’re better at reasoning, but also more prone to hallucinating (great for poetry, not so much for lawyers). GPT-4o with web search hits 90% accuracy, so maybe the fix is just… Googling better. Not sure what AI model hallucination means? It’s when a model like ChatGPT makes stuff up—it generates information that sounds real and confident but is actually wrong, misleading, or completely fictional. (Source)

The US has its concerns when it comes to the Chinese AI model DeepSeek! Shocking!

Congress just labeled China’s AI model DeepSeek a national security threat.

Meanwhile, researchers figured out a way to dial down censorship and toxic replies without wrecking the model—jumping response rates to sensitive questions from 32% to 96%. No retraining needed.

TL;DR: DeepSeek’s getting smarter, and the U.S. wants tighter export controls before things get spicy.

If you’re wondering what counts as a sensitive question, a “sensitive question” is a prompt or topic that an AI model might normally avoid, censor, or give a vague non-answer to, because it’s seen as politically charged, controversial, or potentially risky. (Source)

Niche but Neat

Good news for parents! Bad news for teens! Instagram Implements AI to Protect Teen Users.

Instagram is using AI to catch teens lying about their age and put their accounts in “safe mode.” Parents get a heads-up to have a talk about honesty online. This move follows Meta's new Teen Accounts on Facebook and Messenger, now enrolling 54 million teens worldwide. Instagram’s goal? Keep 97% of 13-15-year-olds safe. Of course, there might be a few mix-ups, but hey, users can tweak their settings if needed. (Source)

Turns out Europe’s pretty hooked on ChatGPT, and that’s creating regulatory challenges!

ChatGPT is blowing up in Europe and might soon be big enough to hit the EU’s "very large" platform status under the Digital Services Act. If it keeps growing, ChatGPT will have to let users skip recommendations, share data with officials, and get audited. It’s still not perfect—AI search results are a bit hit or miss compared to Google—but it’s catching up fast. If it doesn’t play by the rules, fines or a suspension could be on the table. A big moment for AI in the EU! (Source)

What’s cooking in the oven?

AI Music Models: Proof that even machines want to drop the beat!

Udio's AI model is making music that sounds just like human-made tunes. It turns noise into catchy beats, and some tracks are seriously indistinguishable from the real deal. But not everyone’s on board—some listeners aren’t ready to vibe with machine-made music. This sparks the ongoing debate about whether AI can truly replace human creativity in the arts. Meanwhile, record labels and AI startups are duking it out in legal battles that could shape the future of AI in music. (Source)

Translating squeaks, one “Eee-eee” at a time with DolphinGemma!

Google’s DolphinGemma AI is cracking the code on dolphin talk! By analyzing their sounds, it’s spotting patterns that could lead to a shared language between dolphins and humans. Teaming up with the Wild Dolphin Project, the AI uses Google Pixel phones to keep costs down and research efficient. This could seriously speed up dolphin communication research—and who knows? Maybe we’ll be chatting with Flipper soon. Big win for marine biology! (Source)

That’s a wrap on this week’s news blast!

Don’t be a stranger—drop us a comment and tell us what’s buzzing in that brilliant brain of yours. This newsletter? Yeah, it’s literally just for you. Your opinion is the VIP pass around here. We’ll be back next week, same time, same place—until then, stay curious and awesome.
