Underreported Topics

3 conversational turns exploring this topic

Turn 1

Q: What hidden gems in AI deserve research attention?

Subject: Ins7ghts: The AI "Hidden Gems" You're Missing (And Why They Matter)

Friends,

The AI hype machine is a well-oiled beast, but it's blinding us to what actually matters. Everyone's chasing the next flashy LLM, while the real revolution is brewing in the plumbing. This week's data from 1,800 articles confirms it: the future of AI isn't just what you can do, but how you do it, and where you do it.

The Provocative Truth: The "hidden gems" aren't algorithms; they're architectural.

Let's be blunt: OpenAI and Anthropic are dominating headlines, but their success hinges on infrastructure. We're talking about data infrastructure, model deployment infrastructure, and the entire ecosystem that supports these behemoths. The real value isn't in the models themselves, but in the ability to reliably, securely, and cost-effectively train, deploy, and manage them at scale.

Here’s the breakdown:

| Category | Players | Why It Matters |
| --- | --- | --- |

Turn 2

Q: What problem does the top one solve?

Subject: Ins7ghts: The AI "Hidden Gems" You're Missing (And Why They Matter) - Part 2

Friends,

The AI hype machine is a well-oiled beast, but it's blinding us to what actually matters. Everyone's chasing the next LLM, while the real value lies in the less-glamorous, foundational layers.

Last week, I highlighted some "hidden gems" in AI deserving more attention. A lot of you asked: "Okay, Yves, but what problem does Agentic AI solve, really?"

That's the right question. And the answer isn't as simple as "automation."

Here's the provocative truth: Agentic AI solves the "last mile" problem of AI adoption, but it's being strangled in its crib by a lack of Data Governance.

Let's break that down.

| Feature | Description |
| --- | --- |

Turn 3

Q: Who's working on this?

Subject: Ins7ghts: The AI "Hidden Gems" You're Missing (And Why They Matter) - Part 3

Friends,

Stop looking at the shiny object that Sam Altman is dangling. The real breakthroughs aren't happening in the echo chamber of OpenAI, but in the quiet corners where people are solving real problems.

You asked who's working on the "hidden gems" I mentioned last week. The answer is: not enough of the right people. And that's a problem.

Let's take robustness to adversarial attacks as an example. This isn't about winning Kaggle competitions; it's about ensuring your self-driving car doesn't mistake a sticker for a stop sign. It's about preventing fraud detection systems from being easily bypassed. It's about trust.

| Area | Problem | Why It Matters |
| --- | --- | --- |
| Adversarial Robustness | Models are easily fooled by tiny, often imperceptible, input perturbations. | Real-world systems will be attacked. Lack of robustness leads to catastrophic failures and eroded trust. |
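To make "tiny, imperceptible perturbations" concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) attack, run against a toy linear classifier. The model, weights, and inputs are all made up for illustration; real attacks target deep networks, but the mechanic is the same: nudge the input a small step against the gradient of the model's score.

```python
import numpy as np

# Toy linear "model": predicts class 1 when w.x + b > 0.
# (Hypothetical weights chosen purely for demonstration.)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def fgsm_perturb(x, epsilon):
    # For a linear score w.x + b, the gradient w.r.t. x is just w.
    # Step each input coordinate by epsilon in the direction that
    # pushes the score toward the opposite class.
    direction = 1 if predict(x) == 0 else -1
    return x + direction * epsilon * np.sign(w)

x = np.array([0.5, 0.1, 0.2])      # original input, classified as class 1
x_adv = fgsm_perturb(x, epsilon=0.3)

print(predict(x), predict(x_adv))  # the small per-coordinate nudge flips the label
```

The unsettling part is the second lesson of the robustness literature: for high-dimensional models, an epsilon far too small for a human to notice is often enough to flip the output, which is exactly the stop-sign-sticker failure mode described above.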

While OpenAI gets all the headlines, the actual work on adversarial robustness is scattered. You'll find pockets of excellence at:

  • Universities: Groups like those at UC Berkeley (Dawn Song's lab) and CMU (Zico Kolter's group) are doing foundational research.
  • Smaller startups: Companies like Adversa AI are hyper-focused on security for AI systems. They're not building the next ChatGPT; they're building the *immune system* for AI.
  • Government agencies: DARPA has invested heavily in projects aimed at making AI systems more resilient.

Here's my hot take: The obsession with generative AI is a distraction. We're building increasingly powerful systems on incredibly shaky foundations. It's like building a skyscraper on sand.

The problem? Funding and talent are flowing disproportionately towards the "shiny" applications. The "boring" but critical areas like robustness, interpretability, and data quality are being neglected.

Who isn't working on this enough? The FAANG companies. They're too busy chasing the next quarterly earnings report to invest seriously in long-term AI safety and reliability. They treat it like a PR problem, not an existential one.

My advice:

  1. Look beyond the headlines: Don't just read about the latest LLM. Dig into the research on AI safety and robustness.
  2. Support the underdogs: Seek out and support the smaller companies and research groups that are focused on these critical areas.
  3. Demand accountability: Hold the big tech companies accountable for the safety and reliability of their AI systems.

Until we address these fundamental challenges, the AI revolution will be built on a house of cards. And when it collapses, the consequences will be far-reaching.

Stay skeptical,

Yves