How to Evaluate Generative AI Opportunities – A Framework for VCs

Andrew Carr

A successful investor has to cut through the hype by focusing on fundamentals to evaluate a company’s potential for long-term growth. But what happens when a new technology like generative AI disrupts existing frameworks for evaluation? Suddenly, it’s a lot more difficult. 

Generative AI represents a paradigm shift for many businesses. For some companies, it will enhance existing products; for others, it will enable entirely new ones. Clearly, this is an exciting area for venture capital (VC) investment. However, as with any new technology, it can be challenging for investors to cut through the hype and assess the true value of potential investments.

Through Tribe, I have had a number of advisory calls with VC firms (and even had some of our friendly neighborhood VCs over for dinner to talk Generative AI). What follows is one framework I’ve found particularly useful in helping VCs navigate opportunities in the generative AI landscape – for both future investments and current portfolio companies. It’s reductive, but it’s also a useful first step in understanding how a business fits into the broader pattern of generative AI adoption.

Toward that aim, this article will: 

  • Categorize generative AI companies into three groups – model builders, model consumers, and vertical model players
  • Give examples of how to apply this framework to existing companies
  • Show how a new company could build a defensible moat by progressing from one category to another

The goal is not to be exhaustive – putting anything into broad categories is necessarily reductive – but to give investors a framework to help recognize disruptive companies.

Model Builders

These companies have access to vast amounts of capital, compute, data, and specialized talent. They are typically leading the way when it comes to model capabilities. They often publish their findings in academic research and in polished marketing content. Many of their staff are researchers and engineers building systems to train and deploy large models. Many companies in this category have an API or provide models as a service.
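
To make the “API or models as a service” point concrete: consuming a Model Builder’s hosted model usually amounts to a metered HTTP call, which is also where the usage-based billing discussed below comes from. The sketch that follows is purely illustrative; the endpoint, request fields, and response shape are hypothetical placeholders, not any particular vendor’s API.

```python
# Illustrative sketch of consuming a hosted generative model over HTTP.
# The URL, payload fields, and response schema are hypothetical placeholders,
# not a real provider's API.
import os
import requests

def generate(prompt: str) -> str:
    response = requests.post(
        "https://api.example-model-builder.com/v1/generate",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {os.environ['MODEL_API_KEY']}"},
        json={"model": "large-model-v1", "prompt": prompt, "max_tokens": 128},
        timeout=30,
    )
    response.raise_for_status()
    # Providers typically meter requests or tokens here -- i.e., usage-based billing.
    return response.json()["text"]

if __name__ == "__main__":
    print(generate("Summarize the three categories of generative AI companies."))
```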

The path to monetization with these groups is through usage-based billing, subscriptions, and partnerships with large organizations. These three prongs of business strategy give a large surface area for growth and market capitalization. 

These groups typically rely on existing hardware and cloud providers for their computational needs, which necessitates large, structured partnerships.

The main benefit of these groups is their ability to draw value from the entire Generative AI landscape. They serve as a type of orchestration layer for the rest of the community to build on. However, they themselves rarely build products on top of their own models.

There is a meaningful risk, however, that they will be unable to sustain their pace of innovation and that competitor or open-source models will catch up to them in capability. The need for continued R&D makes these organizations attractive to talent but expensive to maintain.

Examples: OpenAI, StabilityAI, Co:here

Model Consumers

These companies typically face minimal barriers to entry. They consume models from open-source providers and Model Builders, and they can quickly iterate with customers to deliver novel user interfaces and user experiences. Historically, these have been consumer-facing products that quickly generate hype in the media.

Oftentimes, however, we find that companies in this category have low defensibility and high churn. The double-edged sword of easy entry means that at the first sign of traction, many competitors will rush to build copycat services and eat away at the original company’s growth. Additionally, there is a real risk in building your core business on someone else’s API without an exit strategy. This is different from building on the public cloud today, simply because the norms of the industry are not yet established.
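
One common way to hedge that API dependence, offered here as an illustration rather than a prescription, is to keep product code behind a thin provider-agnostic interface so an external API can later be swapped for a fine-tuned in-house model. The class and method names below are hypothetical.

```python
# Sketch of a thin abstraction over text-generation backends so product code
# never depends directly on a single external provider. Names are hypothetical.
from typing import Protocol

class TextGenerator(Protocol):
    def generate(self, prompt: str) -> str: ...

class ExternalAPIGenerator:
    """Backed by a Model Builder's hosted API (real call omitted)."""
    def generate(self, prompt: str) -> str:
        return f"[hosted-API completion for: {prompt}]"  # placeholder for the API call

class InHouseGenerator:
    """Backed by a self-hosted, fine-tuned open-source model (inference omitted)."""
    def generate(self, prompt: str) -> str:
        return f"[in-house model completion for: {prompt}]"  # placeholder for local inference

def draft_marketing_copy(generator: TextGenerator, product: str) -> str:
    # Product logic depends only on the interface, so swapping backends
    # (the "exit strategy") does not require rewriting the product.
    return generator.generate(f"Write a short tagline for {product}.")

print(draft_marketing_copy(ExternalAPIGenerator(), "a copywriting assistant"))
```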

The goal of many of these companies is to generate enough traction to gain a defensible market position through brand and marketing efforts. They can then translate this into a data moat, which allows them to fine-tune open-source or Model Builder models to higher levels of performance.

Many of these companies will follow standard monetization strategies such as subscriptions, advertising, or enterprise deals.

The benefit of these companies is the low capital cost of an initial go-to-market. The speed at which they can innovate on top of existing models allows them to experiment with novel products that users may find quite valuable. This iteration process has a natural endpoint: the company attempts to train its own models and break free from a risky dependence on external providers. In the meantime, with proprietary prompts, data for fine-tuning, and product savvy, many of these companies can make a splash very quickly in the Generative AI field. The long-term aim of these groups is likely to bring some of that expertise in-house while capturing significant market share in the process.

Examples: Jasper AI, Lensa

Vertical Model Players

These companies have access to sufficient compute, talent, data, and capital to train their own specialized models. They will then build a dedicated product on top of their own models. The models may not be as powerful as those from the Model Builders, and the speed of development may be slower than the Model Consumers, but the control over the entire Generative AI stack enables truly novel products and reduces competitor risk. 

These companies typically start with open-source models or frameworks but quickly transition to building custom systems and solutions internally for training their models. They also have product-savvy people in the company who know how to gather user feedback and create delightful products.

A potential benefit for companies in this category is the ability to build with more than a single modality without being limited to the existing models on the market. For example, if a company wants to build a novel image-compression system to save bandwidth for streaming services, or wants its models to interact with a dedicated piece of design software, it will likely need to train its own models on proprietary data and build a proprietary product around them.

Monetization here follows the Model Consumer playbook, since both groups are building end products. The pressure for growth may be somewhat higher for Vertical Model Players due to the increased capital costs required for improved defensibility.

These companies are far more defensible than some other Generative AI companies, but they can still be disrupted if a failed go-to-market attempt keeps them from capitalizing on their novel product. In general, we expect more companies to emerge in this category, with a lot of value being generated here. So let’s walk through a hypothetical example of what a highly successful vertical Generative AI play could look like.

Constructing a Defensible Moat – an Example

Imagine a company, EduSynth, seeking to disrupt post-secondary education. They understand the general population’s continued need to re-skill and learn. They want to build a personalized learning service that enables their customers to master a new subject. This idea has been attempted in the past, but with new Generative AI technologies it is more likely to succeed.

EduSynth starts by building a web interface with a number of interesting, human-designed courses. These may include courses on programming, marketing, AI development, art, or video editing. In parallel, they are training a chat-based model to summarize students’ answers and explain where they may have gone wrong.

They start from a powerful open-source model, FlanT5, and a proprietary dataset. They fine-tune the model intensively and build infrastructure to serve it to their many customers.
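
As a rough illustration of that step, fine-tuning an open-source Flan-T5 checkpoint on proprietary answer-feedback pairs might look something like the sketch below, assuming a Hugging Face transformers/datasets setup; the column names, prompt format, and hyperparameters are invented for the example.

```python
# Minimal sketch of fine-tuning Flan-T5 on a proprietary feedback dataset.
# Assumes the Hugging Face transformers + datasets libraries; the column names
# ("student_answer", "feedback") and hyperparameters are illustrative only.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Stand-in for EduSynth's proprietary data: student answers paired with
# instructor-style feedback explaining what went wrong.
raw = Dataset.from_dict({
    "student_answer": ["A for loop in Python ends with a semicolon."],
    "feedback": ["Not quite: Python uses a colon and indentation, not semicolons."],
})

def preprocess(batch):
    inputs = tokenizer(
        ["Explain what is wrong with this answer: " + a for a in batch["student_answer"]],
        truncation=True, max_length=256,
    )
    labels = tokenizer(text_target=batch["feedback"], truncation=True, max_length=128)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = raw.map(preprocess, batched=True, remove_columns=raw.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="edusynth-flan-t5", num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```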

While this is happening, they are training a speech and video model to provide comfort and personalization to their customers. All the while, they are collecting feedback on students’ learning progress, satisfaction, and general performance in their courses. 

Over time, their systems develop to have multiple custom models interacting with students at various levels of the product. These models could be suggesting new content, correcting essays, explaining ideas, and personalizing feedback directly to the students. 
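
As a rough sketch of what “multiple custom models at various levels of the product” might look like in code, consider a simple router that dispatches each student interaction to the model trained for that task. The task names and call interface below are hypothetical.

```python
# Hypothetical sketch: routing different student interactions to different
# in-house models. Task names and the call interface are illustrative.
from typing import Callable, Dict

def suggest_content(profile: str) -> str:
    return f"[content-recommendation model output for: {profile}]"

def correct_essay(essay: str) -> str:
    return f"[essay-correction model output for: {essay[:40]}...]"

def explain_concept(question: str) -> str:
    return f"[tutor/explanation model output for: {question}]"

# Each product surface calls into the model trained for its task.
MODEL_ROUTER: Dict[str, Callable[[str], str]] = {
    "recommend": suggest_content,
    "correct": correct_essay,
    "explain": explain_concept,
}

def handle_student_event(task: str, payload: str) -> str:
    return MODEL_ROUTER[task](payload)

print(handle_student_event("explain", "Why does gradient descent converge?"))
```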

EduSynth started as a Model Consumer and quickly made the transition to a Vertical Model Player while marketing, building their product, and collecting user feedback. This is a natural path for many Generative AI companies: make use of existing models early while constructing a defensible moat over time.

The Future of Generative AI

This paradigm shift is an exciting moment for AI startups. There are so many opportunities across industries for founders and investors to build new products and improve existing ones. No matter what framework you use to evaluate it, the future is bright.

Andrew Carr is a senior applied research scientist at Gretel AI, an industrial generative AI company focused on synthetic data and privacy. He has worked in machine learning and AI for 8 years in various roles at Google Brain, OpenAI, and his own legal tech startup.

Images: Generated using Midjourney
Contributors: Bailey Seybolt
