Co-authored by Gokul Kumaravelu.
Cover image produced with the help of Stable Diffusion on Hugging Face.
In the previous installment of our series on AI, we explored how we got here and why AI is an evolving intelligence ecosystem and not just a pipeline where data goes in and magical stuff comes out.
In this part, we go deeper and try to answer the hard and exciting questions about the future of humanity as AI takes charge.
Will creativity be commoditized? Will AI be Big Tech’s answer to evolving beyond ads? Or will it inspire new companies and movements, unlike anything we have seen before?
As this chapter of technology plays out, three elements will be key to helping write the AI story: the stage, the various actors, and the open questions.
Setting the stage
Before we get into how the various actors in AI’s story play their respective roles, it helps to understand the broader environment in which all this will play out, and the contours of the landscape that will shape the flow of funding, innovation, and change.
“The AI stage” will be shaped in equal measure by political, ethical, and economic factors. The last few years and months have given us some clues about how it is shaping up.
Retrospective regulation → Proactive enabling
Governments were caught off guard by the rise of Big Tech. This meant resorting to retrospective antitrust measures to negate the harmful effects of platform power. This time, however, governments are taking a more proactive approach, enabling not just AI applications but also encouraging local manufacturers to produce the hardware for AI. The US and the EU are releasing mandates and roadmaps on how they will regulate AI and its effects on their citizens, with the AI Bill of Rights, the CHIPS Act, and white papers.
Ultimately, governments, technologists, academics, and citizen communities will need to come together and open a dialogue to create a new social contract on how this powerful technology can be leveraged for the greater good. This will not happen overnight but evolve with time, breakthroughs, and new questions.
Unilateral investments → Multilateral ventures
Earlier, it would take years for the fruits of hardcore scientific research to translate into funding for scalable business and consumer use cases. With AI, however, the lines are blurring between research collectives and startups, between university incubators and venture capital firms, and between foundational model creation and product development.
Ultimately, AI founders will have to think beyond just product and GTM and consider regulations, fair use, and other ethical factors. In the next section, we look at the various roles in every layer of the AI stack and how future founders and their startups can thrive in each layer.
The actors
So who are the agents of change that will help write the story of AI for the next decade, and where will they emerge from?
In our previous part, we explored some of them individually as future coders, writers, researchers, and stewards of AI. However, the real heroes of the story will be entrepreneurs and their teams building on the bleeding edge of every stage of the AI value stack, from the hardware up to the end-point applications. Let’s start with the very first layer of the value ecosystem: the hardware.
The Hardware Layer: Driving the engine of AI
Right now, this layer is ruled by one and only one entity: NVIDIA. Its GPUs (graphics processing units) are the main processors used for machine learning, and its A100 and H100 chips are currently the gold standard of the industry.
NVIDIA’s annualised data centre revenue is currently more than the cumulative valuation of all the upstarts in this segment, such as Cerebras, SambaNova, and Graphcore. Among these, Graphcore is its closest competitor in the field of cutting-edge research.
So is there potential for new entrants to unseat the incumbent? A few factors could influence where the next challenger emerges from:
- Innovation in hardware design that increases efficiency while delivering high performance for specific applications. NVIDIA currently has a head start since it ties together software development and hardware engineering, creating some of the best image synthesis technology in the world.
- The hand of the government could play a critical role in ensuring NVIDIA doesn’t corner this entire value layer for itself. The regulatory pushback that scuttled its attempted acquisition of ARM is a clear example of this.
Whatever unfolds, one thing is clear: the companies that innovate and capture value here will be among the biggest winners in this current wave of AI. They will capture a large share of revenue in this space, much like how capital in consumer tech flows to Meta, Alphabet, Amazon, and Apple thanks to their platform dominance.
The Platform Layer: Battle of the foundational models
Once we cross the raw hardware that powers AI, the next layer of value lies in how foundational models combine data and hardware to produce outputs. Here too, imposing incumbents and well-funded unicorns like OpenAI, Stability AI, Meta, Google, and even Salesforce are setting the pace when it comes to building and monetizing large-scale, foundational models.
As foundational models get bigger, they become predictably more powerful. Hence, a race is underway to raise massive capital and build ever-larger models to gain a first-mover advantage.
Companies are spending millions of dollars on GPUs to scale and train mega models. A case in point is Microsoft’s 1 billion USD investment in OpenAI. This is expensive, but the hope is that the costs will get amortized across hundreds of apps built on top of the model.
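The claim that scale brings predictable gains comes from empirical "scaling laws", which find that test loss falls roughly as a power law in parameter count. A minimal sketch, with constants in the ballpark of those reported in the scaling-laws literature, used here purely to illustrate the shape of the curve rather than as measured values:

```python
# Illustrative power-law scaling: loss falls predictably -- but slowly --
# as parameter count N grows, which is why labs raise massive capital to
# train ever-larger models. The constants are illustrative only.
def scaling_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    return (n_c / n_params) ** alpha

# Each 10x jump in parameters buys only a modest drop in loss:
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> loss {scaling_loss(n):.3f}")
```

The flat exponent is the whole economic story: every incremental gain in quality demands an order of magnitude more compute, so capital itself becomes a moat.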
Innovation in algorithm and model architecture
While scaling and building ever-larger models has been a critical area of investment, this approach doesn’t hold all the answers.
If challengers can’t beat the incumbents on sheer scale and capital, they will have to innovate on algorithm design and model architecture. While the transformer architecture is currently the most widely employed in building foundational models, we will soon have architectures that are much better alternatives to it. Innovation in this area will need to deliver superior results on one or more of three critical parameters: cost, speed, and output quality.
We are already seeing some prominent examples of this:
- DALL-E 2: DALL-E’s successor is better not because of scale but because of algorithmic innovation. The upgraded model uses a diffusion method to generate images instead of the token-by-token prediction procedure employed earlier.
- Stability AI: It came out of nowhere and, reportedly using only 600k USD, created an open-source foundational text-to-image model that it claims is 30 times more efficient than DALL-E 2.
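To make the contrast concrete, here is a toy sketch of the diffusion idea: start from pure noise and iteratively denoise toward the data, rather than predicting an output token by token. The "denoiser" below is a hand-written stand-in for a trained neural network, so this illustrates only the sampling loop, not a real model:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.ones(16)  # stand-in for a "clean image"

def denoise_step(x, t, total):
    # A real diffusion model predicts the noise to remove at timestep t;
    # this toy version simply moves a fraction of the way to the target.
    return x + (target - x) / (total - t)

steps = 50
x = rng.standard_normal(16)  # begin with pure Gaussian noise
for t in range(steps):
    x = denoise_step(x, t, steps + 1)
# After the loop, x has converged close to the target
```

The key property, visible even in this toy, is that quality comes from many cheap refinement steps over the whole output at once, rather than from one expensive left-to-right pass.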
Monetization of foundational models
The "Attract and Extract" approach
In the first approach, large organisations that control the training of, and access to, foundational models will probably follow a three-step playbook: attract usage initially, extract value eventually.
- Step 1: Train large foundational models using open-source data.
- Step 2: Give open access to applications that run on these foundational models to gain publicity and traction.
- Step 3: Monetize via services and customization.
For example, Stability AI plans to remain open source but monetize in the future by helping other companies create custom models and scale them, besides providing the hardware infrastructure.
APIs-as-service approach
Some companies that develop foundational models may choose to provide APIs as a subscription to other companies building products on top of them. For example, OpenAI’s GPT-3 model powers some of the fastest-growing AI software tools like CopyAI and Jasper. Other specialized foundational models in the fields of medicine and life sciences will also likely adopt this route.
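From the buyer’s side, the APIs-as-a-service model amounts to paying per call for access to a hosted model. The sketch below is hypothetical: the endpoint, model name, and field names are invented for illustration and do not match any real provider’s API:

```python
import json

# Hypothetical foundational-model API: a product team pays per call.
# Endpoint and payload fields below are illustrative, not a real API.
API_URL = "https://api.example-model-provider.com/v1/completions"

def build_request(prompt: str, max_tokens: int = 64) -> dict:
    return {
        "model": "example-large-model",  # provider-hosted foundational model
        "prompt": prompt,
        "max_tokens": max_tokens,        # caps the metered, billable output
    }

payload = json.dumps(build_request("Write a product tagline for a note-taking app."))
# A real integration would POST `payload` to API_URL with an API key, e.g.:
# requests.post(API_URL, data=payload, headers={"Authorization": "Bearer <key>"})
```

The business insight is in the `max_tokens` field: because output is metered, the provider’s revenue scales directly with the success of every app built on top.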
Developments in foundational models will be exciting to watch as many different approaches (open vs. closed, pure scaling vs. algorithmic innovation) battle it out for control over the OS of AI. What will most likely happen is a landscape where players execute hybrid approaches and experiment with innovative ways of monetizing the technology.
The Applications Layer: Where users meet AI
The applications layer of AI will be the most visible and exciting space to watch. AI already powers most of the recommendation engines of OTT providers and e-commerce; however, as we will see, its impact will only grow and multiply with the coming of generative AI, or Gen-AI.
AI for businesses
- The B2B space is dominated by applications that run on established SaaS models. Value will emerge from existing and wholly new areas where AI-native SaaS businesses disrupt existing SaaS businesses. We are already seeing AI-driven applications used in marketing, customer support, and CRM engines. More will emerge in every functional vertical, such as HR, supply chain, and finance.
- Startups that help enterprises incorporate AI in their workflows will emerge to cater to specific use cases. For example, Hexo AI, an Antler India portfolio company, helps companies bring image generation into their workflows through custom APIs.
- AI chatbots could entirely take over internal functions like onboarding, knowledge management, and business intelligence/analytics.
- AI-proofing your business using automated testing, bug resilience, and phishing protection will change the way we currently look at cybersecurity.
AI for consumers
We have covered quite a bit about how generative AI will unlock the creator economy on a scale never seen before. We already see the creator’s toolbox expanding with DALL-E, MidJourney, Imagen, RunwayML (video editing), Podcast.ai, Meta’s Make-A-Video, and more. This is just the beginning, though.
- On a more mass consumer scale, AI-powered apps and chatbots could unseat modern-day search engines, content platforms, and a multitude of apps.
- The content economy will transition from using AI to recommend content to creating an almost endless stream of personalised content for every user with custom music, TV shows and articles.
- Building AI assistants for professionals like doctors, lawyers, tutors and other professions that involve repetitive knowledge work will attract hundreds of millions of dollars of investments.
Be it B2B or B2C, AI will replace many commonplace apps. We will go from “There’s an app for that” → “There’s an AI for that” across domains as users become more comfortable with intelligent applications that interact and learn with them.
The Enabler Layer: Making AI better
One of the value layers that emerges from the rise of every new technology paradigm is solutions that help with product discovery, operations, development, and management. While we have explored how AI can enable other existing functions, the players in this layer are solely focused on improving the AI ecosystem itself.
MLOps (or DevOps for AI) and ML lifecycle management
- As AI developers grow in an organization, tools will be needed to make them more productive.
- Each part of the pipeline needed to build an ML application is an opportunity area - everything from data management, collaboration, model training, and monitoring to safety, moderation, testing, and deployment. For example, Weights & Biases is an MLOps platform that provides tools for developers to build and fine-tune models.
ML ecosystems
- New holistic ML ecosystems will emerge to provide everything from foundational model access to product discovery and full-fledged communities that make it easier for developers to contribute and create in purpose-built AI playgrounds.
- The evolution of these ecosystems will be similar to how L2 protocols like Polygon have become developer ecosystems built on L1 layers like Ethereum in web3, and to what GitHub is for web2. Hugging Face is one such platform building an end-to-end, AI-focused ecosystem.
The success of AI adoption and of the companies in this layer are interlinked. The more AI apps and platforms proliferate, the bigger the enablement layer will grow, making AI apps and platforms even better, and so on.
Finally, there is one last actor to explore: The incumbents.
The Incumbents and their race to incorporate AI
Much of what we have explored till now concerns the possibilities of new AI ventures. However, value capture will remain with incumbents where they can add AI to their core products as a feature, or build new AI-native products and get them to their large existing markets faster. In fact, many incumbents, flush with capital, are well positioned to acquire or merge to gain AI capabilities.
Enterprise giants who don’t adopt AI will be left behind.
Much like the digital transformation wave that swept across the global enterprise ecosystem, the AI transformation wave will present legacy enterprises with multiple opportunities and questions.
Build vs Buy
With AI’s potential to disrupt almost every internal and external function, enterprises will have to weigh the vertical integration potential of AI against AI as a horizontal functional layer. If it’s a horizontal function that is important to the company but not core to its existence - such as, say, making creatives for its website and socials - then enterprises will buy or subscribe from a third party.
If AI could change the core user experience and help improve business metrics, then AI solutions will be developed by in-house teams. There are plenty of examples already: Notion adding AI to note-taking, Microsoft offering GitHub Copilot to coders, Adobe with Sensei, and more to follow.
Protecting vs Monetising IP
There are growing concerns over the protection of the IP and content used to train large language models. A Napster moment for AI is imminent, and conglomerates will need to take a call on how they license their IP, or whether they bar AI models from making use of it outright.
The Big Open Questions
The story of AI is being rewritten every hour, and it can be exhausting to keep up with the ceaseless learning machine. As with every revolutionary new technology, society struggles at first to keep pace with its evolution.
But addressing some critical questions early on can help us cope with, and ultimately flourish in, a new AI-powered world. We can look at these questions not just through the lens of the entrepreneur but through that of the entire ecosystem as well.
For the ecosystem
Open source vs gated access
This is the most important factor in determining the pace of innovation in this space. Should AI be made available to everyone or should it be gated for the safety of the public? Governments, industry experts and technologists need to make carefully considered regulations around this soon, rather than leave it to the public to use indiscriminately or have unelected companies dictate terms.
The question of biases
Algorithmic bias is not new. From gender bias in generative AI to racial bias in legal systems, the challenge of entrenched biases in AI is well known.
With increasing deployment, biases, falsehoods, and stereotypes can end up being amplified at scale, turning AI models into large hallucination chambers. Hence, we need to figure out how to make everything from the training data to the final output an objective representation of the real world.
The question of privacy
As models go into mainstream use across sectors, the surface area of potential attacks increases. Addressing how we protect the privacy of the people behind the data will help increase trust in the ecosystem.
The question of explainability
We are yet to agree on what makes an AI model reliable across ethical standards of privacy, security, and impartiality. It’s hard to see what’s happening under the hood in AI and how it produces its outputs. Even so, it is important to know AI’s reasoning and to agree upon standards of fairness. For example, it’s important for a recruiter to understand why an AI approved or rejected a job applicant.
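One widely used, model-agnostic way to get at this kind of “why” is permutation importance: scramble one input feature and measure how much the model’s accuracy drops. The toy “model” and data below are invented for illustration; a feature the model actually relies on shows a large accuracy drop, while an ignored one shows none:

```python
import random

random.seed(0)
# Synthetic applicants: feature 0 might be years of experience,
# feature 1 an irrelevant attribute. The hand-written "model"
# below only ever looks at feature 0.
data = [(random.random(), random.random()) for _ in range(1000)]
labels = [1 if x0 > 0.5 else 0 for x0, _ in data]

def model(x0, x1):
    return 1 if x0 > 0.5 else 0  # ignores x1 entirely

def accuracy(rows):
    return sum(model(*r) == y for r, y in zip(rows, labels)) / len(rows)

baseline = accuracy(data)
importances = []
for j in range(2):
    col = [row[j] for row in data]
    random.shuffle(col)  # break the link between feature j and the labels
    shuffled = []
    for row, value in zip(data, col):
        new_row = list(row)
        new_row[j] = value
        shuffled.append(tuple(new_row))
    # Importance = how much accuracy drops when feature j is scrambled.
    importances.append(baseline - accuracy(shuffled))
```

Techniques like this don’t open the black box, but they at least let a recruiter ask which inputs actually drove a decision.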
The question of AI Alignment
AI alignment is the practice of designing AI systems that behave in ways that are beneficial to humans or aligned with human values; in essence, it is the question of how to train and refine AI models safely and securely.
Working with certified trainers and ethics experts to teach AI through Reinforcement Learning from Human Feedback (RLHF) will be crucial to AI’s long-term future.
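At the heart of the RLHF recipe is a reward model trained on human preference pairs: when a trainer marks response A as better than response B, the model is nudged to score A above B. A minimal sketch of that pairwise objective, with scalar scores standing in for a neural reward model’s outputs:

```python
import math

# Pairwise (Bradley-Terry style) preference loss used to train RLHF
# reward models: -log(sigmoid(r_preferred - r_rejected)). The scalar
# scores here are illustrative stand-ins for a neural network's output.
def preference_loss(reward_preferred: float, reward_rejected: float) -> float:
    return -math.log(1.0 / (1.0 + math.exp(-(reward_preferred - reward_rejected))))

# Low loss when the model already ranks the human-preferred answer higher;
# high loss when it doesn't, pushing the two scores apart during training.
well_ranked = preference_loss(2.0, -1.0)
mis_ranked = preference_loss(-1.0, 2.0)
```

The trained reward model then acts as a scalable proxy for the human trainers, scoring the main model’s outputs during reinforcement learning.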
3 Questions and 1 takeaway for AI entrepreneurs
Given everything we have said above, if you are a future founder here are some questions for you to start thinking about.
Before starting up
Is AI even needed?
If you are an entrepreneur looking to introduce or impact a new space with AI, consider whether this area warrants a completely new AI-native product or whether this is a market where the incumbent could just add an AI layer.
How good is your model or product?
- If you are selecting an area where a large incumbent already exists with massive distribution, which can just add AI, is your ML version going to be 10x better?
- If not, are you going after a new audience or segment that the incumbent hasn't been able to tap into?
While starting up
What is the real durability of your part of the stack?
It's a land grab right now and value is measured by how much surface area of a value layer is captured. Future founders will need to choose their value layer carefully and build solutions that grow their footprint in one of the particular layers described above.
The sector is accelerating very fast, and founders building in the space need to be nimble, keep up with all that’s happening, and pivot to find their niche.
What is your world-beating GTM?
Your GTM strategy will be more important than almost everything else. Open-sourced foundational models, cheaper compute infrastructure, and an unstoppable community of research collectives have lowered barriers to start an AI business. The key differentiator in almost any category will be how you position your startup and how fast you go after the market.
If you liked our AI thesis so far and you are a future founder looking for answers to the above questions, reach out to us at Antler by applying to us at antler.co or write to isha.dash@antler.co and we will be happy to help!