Cold weather is finally here! And unlike most people, I’m not complaining. It’s the kind of weather that makes strong tea taste better and empties the streets of other dog walkers, so I can walk my dog freely. This week gave us a familiar blend of geopolitical tension, model upgrades, and another reminder that cybersecurity is shifting from protecting data to deciding whether to trust the systems making decisions for you.
Let’s get into it.
1. Regulation & Compliance: The EU Tries to Regulate the Future While Lobbyists Rewrite the Present
The EU AI Act is going through what I’d charitably call a “character-building phase.” Big Tech and foreign lobbyists have applied enough pressure to make Brussels consider delays and soften key provisions. And by now, most people have seen the headlines but not the fine print. The parts under fire are the ones that actually matter: transparency around training data, rights to explanation, documentation of model bias, and penalties for releasing foundation models without proper safeguards. In other words, the sections that treat AI like a regulated product instead of a science project.
Then there’s the Digital Omnibus — an initiative that could carve out GDPR’s “legitimate interest” clause to allow broader AI training on personal data. Only Europe could spend a decade enshrining data rights and then quietly discuss trimming them back because AI needs a slightly bigger sandbox. This is what happens when ethics and competitiveness collide in Brussels: the messaging stays principled, but the policy edges start to blur.
Across the Atlantic, the recent U.S. shutdown offered a different kind of lesson. Federal AI pilots paused, grants froze, and hiring stalled — not because of technical limitations, but because governance stopped working for a moment. Agencies will now need contingency funds and modular roadmaps just to keep AI momentum alive the next time Congress takes a recess from functionality. And this isn’t the end of the whiplash: there are already moves to penalize states that regulate AI “too much,” which is a fascinating stance for a country trying to stay ahead in the global AI race.
Seasoned regulators can see the writing on the wall, so they’re pushing oversight upward — less out of tradition and more as a counterweight to deregulation. Former Fed Vice Chair Barr is calling for bank-level scrutiny of AI models shaping financial decisions. “Just trust it” is not a control, especially when models can drift faster than the policy cycle. Even OpenAI wants unified rules across states, though given their track record of strategic flexibility, it’s fair to wonder whether “federal standardization” translates to “federal non-regulation.” Time will tell.
Meanwhile, the governance headaches no one wants to talk about are already here: shadow AI, hidden costs, and model drift. You can implement all the controls you want, but if someone uses a random chatbot to “speed up work,” those controls become mostly theoretical. Unlike the shift from on-prem to SaaS, a decision companies had time to debate, generative AI showed up everywhere at once, and it’s publicly accessible. The productivity pressure is real. The idea that “if software can do it, why do I need a W2 or 1099?” isn’t going away. But the answer hasn’t changed: AI is a tool, not a replacement, and when deployed properly it enhances workflows instead of erasing them.
2. AI Updates & Use Cases: New Modes, 1,600 Languages, and the Ongoing Identity Crisis of LLMs
OpenAI released GPT-5.1 this week, and unless I’m losing my touch, it was one of the quietest launches imaginable. It simply popped up in the app, like a surprise firmware update. I had to double-check the website to confirm it wasn’t some A/B test.
GPT-5, as you may remember, introduced routing: a behind-the-scenes system that decides whether your question needs a quick answer or a deep research dive. In theory it’s brilliant; in practice it confused people who want every model to be a mind-reader. It also made the model sound less “human,” which, judging by certain online reactions, caused heartbreak among users who apparently missed their AI girlfriends. What they really missed was the agreeability. Naturally.
GPT-5.1 attempts to patch this situation with two modes, Instant and Thinking, plus a full catalogue of persona tones. (Think: “professional,” “friendly,” “precise,” “humorous,” etc.) It’s all very clever, and the improvements are real, but despite the excitement, we’re still comfortably far from AGI. Progress, yes. A digital oracle that replaces human judgement? Still no.
Personally, I remain partial to models that follow instructions instead of doing improv theater.
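To make the routing idea concrete, here is a minimal sketch of what such a layer might look like. The model names and the keyword heuristic are my own placeholders, not OpenAI’s actual routing logic, which isn’t public: just a classifier deciding which tier of model gets the prompt.

```python
# Minimal sketch of an LLM "router": decide whether a prompt goes to a fast,
# cheap model or to a slower reasoning model. The model names and the keyword
# heuristic are placeholders, not OpenAI's actual implementation.

FAST_MODEL = "fast-model"            # hypothetical quick-answer tier
REASONING_MODEL = "reasoning-model"  # hypothetical deep-research tier

# Crude signals that a request probably needs multi-step reasoning.
REASONING_HINTS = ("prove", "step by step", "compare", "analyze", "design", "debug")


def route(prompt: str) -> str:
    """Return the model tier that should handle this prompt."""
    text = prompt.lower()
    long_prompt = len(text.split()) > 150
    needs_reasoning = any(hint in text for hint in REASONING_HINTS)
    return REASONING_MODEL if (long_prompt or needs_reasoning) else FAST_MODEL


if __name__ == "__main__":
    print(route("What's the capital of France?"))                             # fast-model
    print(route("Compare three architectures and design a migration plan."))  # reasoning-model
```

In production the heuristic would be a trained classifier rather than a keyword list, but the tradeoff is the same: every misroute either wastes compute or frustrates a user who wanted the deep dive.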
Next on the list: Meta’s omnilingual ASR, which I only discovered because of the sudden wave of interest in TTS and live AI telephone calls. This model now covers 1,600 languages, including 500 low-resource ones. I speak four languages: English, Spanish, Hindi, and French. I’m not bragging; juggling them leaves your brain mildly scrambled when you need clarity in just one. The fun part is that I still struggle with Sindhi, my own heritage language, which has enough history and literature to keep me entertained for years. With omnilingual AI, I may finally get to the bottom of whether my grandmother was reading ancient epics or melodramatic romance novels pretending to be cultural treasures.
Multilingual AI is becoming infrastructure now. You’ve probably seen the ads for devices that whisper live translation into your ear within seconds. Travel and storytelling are about to get very interesting. Then again, I do appreciate landing somewhere where I don’t understand a word; it’s a rare break from the global TikTok dialect.
A few months ago, we heard the big dramatic claim: “There isn’t enough data in the world to train future LLMs.” Enter synthetic-data models, which are now having their moment. They promise faster training, fewer copyright headaches, less hoovering of the open internet, and zero “model collapse” paranoia. Some synthetic-trained models are already approaching the benchmarks of their internet-trained counterparts.
A few players pushing hard in this space:
- Nvidia, using synthetic corpora to scale domain-specific models
- Microsoft, blending synthetic + curated enterprise data for safer fine-tuning
- Meta, experimenting with synthetic alignment sets for multilingual models
MIT researchers point out the obvious tradeoffs: synthetic data can reduce privacy risk and speed up development, but if your generator is biased, your synthetic universe will be biased too. Still, in a world where scraping lawsuits are becoming normal, synthetic looks like a convenient detour.
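To picture that tradeoff, here is a toy sketch of a synthetic-data pipeline: a “teacher” generator produces candidate examples, a filter deduplicates and discards obvious junk, and whatever survives becomes fine-tuning data. The generator below is a stub of my own; in a real lab it would be a large model, and any bias it carries flows straight into the resulting dataset.

```python
# Toy sketch of a synthetic-data pipeline: generate candidates with a "teacher",
# filter for quality and duplicates, keep the survivors as training data.
# The generator is a stand-in; real pipelines call a large model here.

import hashlib


def generate_with_teacher(topic: str, n: int) -> list[str]:
    """Stub for a teacher model; returns templated question/answer candidates."""
    return [f"Q: Explain {topic} (variant {i}). A: ..." for i in range(n)]


def keep(example: str, seen: set[str]) -> bool:
    """Simple quality + dedup gate: long enough and not seen before."""
    digest = hashlib.sha256(example.encode()).hexdigest()
    if len(example) < 20 or digest in seen:
        return False
    seen.add(digest)
    return True


def build_synthetic_set(topics: list[str], per_topic: int = 3) -> list[str]:
    seen: set[str] = set()
    dataset = []
    for topic in topics:
        for example in generate_with_teacher(topic, per_topic):
            if keep(example, seen):
                dataset.append(example)
    return dataset


if __name__ == "__main__":
    print(build_synthetic_set(["anti-money laundering", "model drift"]))
```

Notice that nothing in the filter checks whether the teacher’s answers are true or representative, which is exactly the point the MIT researchers are making.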
Speaking of scraping lawsuits: a Munich court ruled that ChatGPT infringed copyright by training on protected song lyrics. OpenAI argued that the user who generated the output should bear the blame: essentially the “don’t blame the gun, blame the person holding it” defense. The German court disagreed.
This isn’t new territory. We’ve already seen:
- NYT vs. OpenAI over unauthorized use of articles in training data
- Universal Music vs. Anthropic alleging models reproduced copyrighted lyrics
Both cases revolve around the same core issue: when models ingest the internet, they also ingest legal liabilities.
And here’s the part people quietly avoid discussing: the early internet had rules. Hotlinking protections existed so websites didn’t collapse under stolen bandwidth. Robots.txt was created to give crawlers instructions. AI labs largely ignored these norms when scraping the web, mine included. My sites used to go down because OpenAI’s crawlers were treating my server like an all-you-can-eat buffet, and my RAM folded instantly. Not exactly the responsible “AI for everyone” narrative you get in the marketing deck.
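For anyone who hasn’t looked at robots.txt since the last decade, here is what respecting it looks like in practice. The rules below are made up for illustration; GPTBot is the user agent OpenAI publishes for its crawler, and Python’s standard library already knows how to honor the protocol.

```python
# Sketch: what "respecting robots.txt" looks like. The rules are illustrative;
# GPTBot is OpenAI's documented crawler user agent.

import urllib.robotparser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# A crawler that honors the protocol checks before fetching.
print(rp.can_fetch("GPTBot", "https://example.com/articles/tea-reviews"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/articles/tea-reviews"))  # True
```

The protocol is advisory, of course: nothing stops a crawler from ignoring the answer, which is precisely how my RAM ended up on the menu.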
3. Cybersecurity: Data Breaches Are Boring — Let’s Talk About Model Integrity Attacks
Cybersecurity has spent years obsessing over the usual suspects: data breaches, ransomware, compromised accounts. And to be fair, the obsession continues; if something can’t be quantified into an immediate financial impact, it rarely climbs the priority ladder in a boardroom. But now we have a second major insider threat rising quietly in the background: model corruption.
This isn’t about locking your data anymore; it’s about poisoning your systems. We’ve always had disgruntled employees and insider risks, but imagine an AI system making decisions that are confidently and consistently wrong because someone nudged its training data or tampered with its inference environment. Who do you blame?
The model?
The provider?
The end user generating the output?
Or the organization deploying the output as if it were gospel?
Do you even blame anyone if the IT worker training the system is not an expert in anti-money laundering, simply feeds data to the LLM, and the LLM’s decision fails?
With humans, destructive capacity has limits: attention span, fatigue, the fact that someone eventually notices strange behavior. With AI, your output bandwidth scales effortlessly. If the system is compromised, the damage scales just as effortlessly. If you work in cybersecurity, you’ve heard of the man-in-the-middle attack. Well, now we’re pushing to put a “man” in the middle by design, and guess what: it’s an open single point of failure for future AI systems. Good times.
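There is no single fix for that, but one narrow, boring control is worth showing: verify model artifacts against a known-good hash manifest before they are loaded into the inference environment. The file names and digests below are hypothetical placeholders, and this only catches swapped or tampered weights, not poisoned training data.

```python
# Sketch of one narrow control against model tampering: check model artifacts
# against a known-good SHA-256 manifest before serving. File names and digests
# are placeholders; record the real values when the model is released.

import hashlib
import pathlib

EXPECTED = {
    "model.safetensors": "9f2c...",  # placeholder digest
    "tokenizer.json": "a41b...",     # placeholder digest
}


def sha256_of(path: pathlib.Path) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifacts(model_dir: str) -> None:
    """Raise if any artifact is missing or does not match the manifest."""
    for name, expected in EXPECTED.items():
        actual = sha256_of(pathlib.Path(model_dir) / name)
        if actual != expected:
            raise RuntimeError(f"Integrity check failed for {name}")


# verify_artifacts("/models/prod")  # run before the model takes traffic
```

It won’t save you from a biased dataset or a clueless trainer, but it does mean that “someone quietly swapped the weights” stops being an invisible failure mode.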
Now layer geopolitics on top of that. State-backed AI ventures, like the recent surge in UAE model development supported by Microsoft, raise uncomfortable questions about data sovereignty, surveillance exposure, and cross-border trust. Your model isn’t just a piece of software anymore; it’s a diplomatic asset with its own chain of custody. If a foreign entity is effectively building your future decision-making systems, systems that will eventually be automated, you’re relying on their availability, their transparency, and yes, their biases. Not to mention, they now know how to nudge your AI assistant into giving you the recommendation they prefer.
Culture shapes business everywhere. A Japanese risk posture and a U.S. risk posture are not just different; they’re in different time zones, mentally and literally. But as AI systems converge output into a uniform “global business dialect,” some of those cultural nuances get erased. That’s not inherently bad, but it guarantees social friction on the path there.
And since I’ve spent enough time outlining the concerns, let’s close with a satisfying twist. A few weeks ago, I shared on LinkedIn an idea that was everywhere in the news: that “AI-powered ransomware” was the next inevitable threat. MIT and Safe Security had published a paper claiming AI could fully write ransomware without human intervention. It sounded dramatic enough to go viral.
Then researchers picked it apart.
The model outputs were inconsistent.
Key code samples didn’t execute.
The evaluation metrics were… generous.
In short, the claims didn’t survive contact with reality. The authors withdrew the paper after external experts flagged methodological flaws and overly speculative conclusions.
A refreshing reminder that not every story needs the “AI apocalypse” label. Sometimes it’s just an overexcited research cycle paired with a little too much headline enthusiasm. We did expect better from a name with the weight of MIT behind it.
Closing Thoughts
Governance, risk, and AI aren’t going to slow down. The headlines get louder, the promises get bigger, and the controls… well, they try to keep up. But beneath all of it, the patterns are familiar: new tech, old problems, human assumptions, and the occasional court ruling that forces everyone to be a little more honest.
Thanks for reading. See you next week.
This post answers the following questions:
- What parts of the EU AI Act are being challenged or weakened?
- Why is Europe reconsidering GDPR protections for AI training?
- How did the U.S. shutdown affect federal AI progress?
- Why do regulators want centralized rules for AI oversight?
- What governance issues come from shadow AI and model drift?
- What improvements does GPT-5.1 bring over GPT-5?
- Why are LLMs adding persona tones and user-selectable modes?
- How is Meta’s ASR model handling 1,600 languages?
- Why is multilingual AI becoming foundational infrastructure?
- What are synthetic-data LLMs, and who is building them?
- What did the Munich court decide about ChatGPT and copyrighted lyrics?
- Why are AI companies facing more copyright lawsuits?
- How did AI crawlers ignore early internet protocols and disrupt smaller sites?
- What is model poisoning, and why is it increasingly dangerous?
- What risks arise when geopolitical actors build national AI systems?
- Why was the MIT/Safe Security “AI-written ransomware” paper retracted?
- What does this week reveal about the future of governance, risk, and AI?
Sources
- “Critics call proposed changes to landmark EU privacy law ‘death by a thousand cuts’,” Reuters (Nov 10, 2025)
- “MIT Sloan shelves paper about AI-driven ransomware,” MIT Sloan School of Management / Safe Security working paper retraction (Nov 3, 2025)
- “Report on AI-driven ransomware gave misleading picture,” Techzine Global (Nov 4, 2025)
- “AI ransomware panic – exposing the inflated ‘80%’ claim,” Cybernews (Nov 3, 2025)
- “Critics call proposed changes to landmark EU privacy law ‘death by a thousand cuts’,” The Economic Times (India)



