August 5th - Vail, Colorado
I’m here as an “Industry Expert” for a 2-day conference. My role is to chat with a variety of analysts about trends in everyone’s hot topic of the season: Generative AI. Candidly, I’m much more excited to find the other “experts” in the hotel and pick their brains.
It’s large enterprise I’m interested in. During my career in data engineering I’ve worked with some large organizations. Fortune 500 (and 100) household names. But over the past few years I’ve focused on consulting for smaller customers. The work I do with generative AI has been for startups, communities, medium-sized agencies…
So I’m itching to pull back the curtain on enterprise AI use cases. And what luck! The opening keynote is a panel discussion on exactly that topic!
I sit down with coffee and a notebook, ready to hear about advanced use cases, complex implementations, best practices for the “AI data stack”...
…and none of it comes.
Enterprise AI Adoption is Broad and Shallow
Are large organizations getting into AI? Yes. But they’re taking it slow. The most common starting point? Company-wide subscriptions to “co-pilot” software, maybe some large-scale document search and summary tools.
Knowledge Graphs? Nope.
Custom Automation? Nope.
Agents? Nope.
I asked one of the panelists later: “When companies ask you to help implement those use cases… how many of them want to start there because it meets a specific goal (or a step toward one)? And how many just think they should do ‘something with AI’?”
“I’d say 9 out of 10 are the latter,” she said.
This is a good time to fast-forward.
August 20th - Austin, Texas
I’m sitting in a Tribe.AI event at AWS’s Austin HQ. Along with folks from Tribe, Vista, and AWS, the room is full of reps sent by their companies on that vague mission: “do ‘something’ with AI”.
Some of them are there with no more of a plan than that. Some of them have a laser-specific goal. Most are in between. One tells me “we’re mandated to release a customer-facing AI feature by Q4.” Most have some enterprise license to a co-pilot or ChatGPT, Claude, etc.
But the difference in tone here, 15 days later, is stark. In Vail, I spoke with analysts and practitioners about the state of AI adoption. Here, with Tribe, we’re trying to shape it.
Our goal over the next two days is not to send anyone away with a generic plan to “try AI out”. Instead, we’ll work with the organizations in attendance to make sure each of them leaves with a clear plan for a project that will drive a key company-level goal (like an OKR), and a scoped POC for the work to prove it out.
This is not easy! There is still a significant pull toward the simple “let’s just throw a summary generator here” and the grandiose “let’s automate the whole business!” But we get there, and I’m impressed at the progress we make in two short days.
Still, I notice that a few pessimistic trends from my prior trip carry over.
Notably, despite a push to find projects that will drive revenue, no one leaves the workshop with a plan to add a new feature they can upcharge for. One attendee shares a new feature they hope will drive more usage. Most focus on reducing cost or creating efficiency.
That’s something I’m seeing across the board, and I understand why it’s seen as a bearish sign. (I’m an optimist myself.)
So let’s get into those signs - positive and negative. Here’s a breakdown of a few trends I’ve noticed in my own work, at conferences and workshops like these, and in conversations with other practitioners.
A clear focus on Efficiency…
We just said this, but it bears repeating. For the most part, those finding ROI from Gen AI are finding it in the bottom line, not the top line.
…with Personal Productivity leading the charge.
AI-powered automations, autonomous agents, and other tools can bring tons of efficiency… but did you know that (at the time of writing) less than 15% of OpenAI’s usage comes through its API? The number of people adopting a chat assistant or co-pilot in their day-to-day work dwarfs any other Gen AI use case.
A preference for Small Bets
Expectations have cooled a bit on Gen AI as a magic bullet. Execs are realizing it might not be able to do everything they think it can. The desire to adopt is still there, but there’s a matching desire to invest as little as possible to find out.
For some organizations this means sticking to the “broad and shallow” use cases we’ve described. For those going deeper, it means a laser-tight focus - often with an internal POC to sound out an idea before dedicating funds.
No AI Data Infrastructure
A lot of folks want to know if there’s a winning “AI data stack” yet. There isn’t. Data engineering for AI is, at most organizations, still ad-hoc and unscalable. There seem to be two causes for this:
1. The aforementioned desire to make smaller bets: Organizations are still in the proving stage and don’t want to invest in robust solutions for use cases that may not stick around.
2. LLMs are great at working with long-form and unstructured data. In the past this data has been very difficult to work with in any automated way, so these data sources are almost never a part of a company’s existing data pipeline.
1 + 2 = a lot of “AI data infrastructure” basically being a Python script that pulls data directly from the source, creates embeddings, and chucks them into a vector DB.
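To make that concrete, here’s a hypothetical sketch of what that script tends to look like. The source directory, chunk size, embedding model, and vector store are all placeholder assumptions on my part, not anyone’s actual setup:

```python
# A minimal, ad-hoc "AI data pipeline": pull documents straight from the
# source, embed them, and chuck them into a vector DB. No orchestration,
# no incremental loads, no monitoring -- which is exactly the point.
from pathlib import Path

import chromadb
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
collection = chromadb.Client().create_collection("docs")  # in-memory store

for path in Path("./source_docs").glob("*.txt"):  # placeholder source
    text = path.read_text()
    # Naive fixed-size chunking -- fine for a POC, brittle at scale.
    chunks = [text[i : i + 1000] for i in range(0, len(text), 1000)]
    if not chunks:
        continue
    response = client.embeddings.create(
        model="text-embedding-3-small", input=chunks
    )
    collection.add(
        ids=[f"{path.name}-{i}" for i in range(len(chunks))],
        documents=chunks,
        embeddings=[item.embedding for item in response.data],
    )
```

That’s it. It works, right up until the use case sticks around and someone asks about re-runs, deletions, or data quality.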
A gap between Innovation and Adoption
I’ll pick on knowledge graphs for a minute here. If you follow anyone on LinkedIn who writes about AI you’ll hear us talk about Graph RAG. It’s an exciting topic! Not only do knowledge graphs help RAG work better - LLMs also make building knowledge graphs easier!
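For the curious, the “easier” half looks roughly like this hypothetical sketch - the prompt, model name, and example sentence are all my own illustrative placeholders:

```python
# Sketch: use an LLM to extract (subject, relation, object) triples from
# raw text, then load them into a graph. A real system would need entity
# resolution, schema constraints, and validation on top of this one call.
import json

import networkx as nx
from openai import OpenAI

client = OpenAI()

def extract_triples(text: str) -> list[list[str]]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Extract (subject, relation, object) triples from the text "
                "below. Reply with only a JSON array of 3-item arrays.\n\n"
                + text
            ),
        }],
    )
    return json.loads(response.choices[0].message.content)

graph = nx.DiGraph()
for subject, relation, obj in extract_triples(
    "Acme Corp acquired Widgets Inc in 2023."
):
    graph.add_edge(subject, obj, relation=relation)
```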
But I’ve started asking other devs, consultants, etc.: “Have you seen anyone using a knowledge graph in production?” So far I’ve gotten a lot of “No”s and only one lonely “Yes” that was quickly followed with “...but not at the enterprise level”.
I’ll pick on LangChain too (don’t worry, I’m a big fan). If you look through the solutions libraries in LangChain’s documentation you may start to think that every possible LLM use case has been solved already! But you’ll learn (as I have, through experience) that very few of them have been tested - or will hold up - at scale.
This is common for any new tech, but it seems accentuated here. The gap between “I’ve tried it” and “we’ve implemented it” is large.
And still plenty of Excitement!
Is this all bad news? Signs of an AI winter?
Lack of adoption, shallow use cases, no top-line ROI, small bets…!?
Don’t panic yet. Despite a general cooling of expectations, I haven’t seen a cooling of interest.
Organizations large and small are NOT stepping away from Gen AI… they just seem to be getting smarter about it. And that’s good in the long run.