The Future and AI, Part 1
Tags: ai, leadership, learning • Categories: Learning
Introduction
I’m in the midst of hunting for the next startup idea, and I’ve been thinking about and playing around with AI nearly every day over the last year. I’ve had a lot of conversations with fellow technologists about how the advances in AI are going to change our world, and I wanted to get some of them down on paper.
I’d love to hear critiques and comments on any of these thoughts! I’m still revising my thinking here and would love to hear different points of view.
How Will AI Improve?
It’s impossible to predict exactly how things will play out, but it’s important to at least define a point of view and think about the categories where AI may or may not improve:
- Cost. Undoubtedly, inference costs will drop. Like all new technology, costs start high and then fall quickly. Even though running AI is already relatively cheap, the cost of inference and training will continue to drop exponentially.
- Efficiency. As we’ve already seen with open source models, efficiency will improve. Running a GPT-3.5-class model on a consumer laptop was unthinkable in 2023. There’s a lot of room for improvement on the computing side of running AI models that will continue to drive us down the cost curve, even without new invention on the hardware side.
- Intelligence. Models will improve, but at what rate? It’s possible that models can’t get much better, even with increased parameter counts, because we’ve already consumed all human writing. An order-of-magnitude improvement along an exponential curve would drastically change what’s possible and continue to rewire society. An incremental improvement, however, will not change society as much as the latest generation of advances did.
- Modalities. Right now, the current models are great at text. Image generation is possible, but still hard to work with in many ways. Video generation is just coming online. I’d expect additional modalities to appear, and the quality of outputs across these modalities to increase quickly over the coming years. Compute and training data are the main limiters on progress right now, but there are no major technological hurdles in those areas.
How Will AI Be Regulated?
Open source software is inherently impossible to regulate. Look at crypto.
Although right now, the leaders in foundational models are all closed source traditional businesses, this is not what the future will look like.
Initially, when computing was just getting started, all operating systems were closed source and owned by a small set of businesses. This was also true for core technology infrastructure like databases, programming languages, and the other tools developers used to build software. Very quickly, the open-source community built alternatives to this proprietary toolchain. Now, nearly all software development runs on an open source ecosystem where the underlying infrastructure is completely free and maintained by a distributed set of volunteers with basically no central coordination.
We can already see this happening.
Meta has open sourced its models. Mistral’s models are completely open. Are they as good as OpenAI’s right now? No. Are they improving rapidly, and will they quickly match the performance of the commercial models? Yes. And even if they never quite match the closed source models, will their performance be good enough for most of the tasks people need them for? Absolutely.
As an example, try Superwhisper, which runs entirely on your MacBook without any servers. Ollama makes it easy to run an OpenAI-like system right on your computer. New open source tools that compete with the commercial foundational model companies are released every day.
I think this creates a really interesting regulatory environment. Like crypto, and unlike nuclear technologies, it’s going to be impossible to regulate AI effectively. Sure, governments can introduce regulation that decreases the rate of innovation, but they can’t stop it. And even if one particular government slows AI progress within its borders, there’s nothing to stop a sovereign country like Estonia from building a data center and becoming the hub for AI innovation.
This is all to say: in my view, regulating AI is a fool’s errand. Sure, the United States might try, but it won’t stop anything. It’ll just slow things down for a short period, or temporarily accrue value to a small set of actors attempting regulatory capture of the AI industry. More importantly, I would rather have the United States be the leader in AI than another country like China.
How Will AI Reshape Thinking?
When obvious bias is introduced into a model like Gemini, it’s easy to assume that AI will have an outsized impact on what humans think and believe.
However, if you believe there’s going to be a proliferation of AI models all with good enough performance, I don’t think this is something to be worried about. Open source models will be trained in a way where the biases are clear and many models will be trained to strip out biases.
It’s not like biases don’t already exist. Media organizations clearly have biases. Google’s ranking has its own bias. Any data source or information aggregation system has bias built in, even without machine learning or advanced AI involved. If anything, there’s a good argument to be made that AI will make biases more visible and enable the current set of tools we have to operate with less bias.[1]
I don’t think the AI bias is a change in kind; I believe it’s only a change in degree.
Here, I’m scoping "AI bias" to mean the resulting output generated by a text, image, or video model.
However, I do believe that having an always-available, super-smart, never-tired assistant or tutor, trained by the very best thinkers in the world, at your disposal will fundamentally change the human mind. It will be a great leveller: enabling someone in a poor country with a $50 laptop to access the same information as the American elite.
At the same time, I wonder about the impact on the human drive to learn. If you always felt like there was someone much smarter, better, and faster than you at literally everything, and you could never catch up, would you have the same motivation to put your nose to the grindstone and do the work required to be great at your craft? Some people have the drive to create and invent regardless of those around them, but most of us are shaped by mimetic conflict in a way that could severely alter our interests and internal drives.
How Will AI Reshape Privacy?
AI will fundamentally change how our society thinks about privacy. Surveillance capabilities will change in three ways:
New
We’ll be able to extract structured data from content in ways that were never before possible. For example, decoding what someone was thinking from brainwave data.
Cheap
There’s lots of surveillance that was technically possible in the past. For instance, you could have an army of humans watching every second of video, looking for particular actions or events. However, most surveillance of this kind did not make sense economically.
As the cost curve on AI inference continues to drop, analyzing massive amounts of content, whether text, video, audio, or any other communication, will become incredibly cheap. This will enable governments, companies, and individuals to run surveillance across more activity than ever before.
Ubiquitous
Surveillance and wide-scale analysis of various types of data is not going to be limited just to large actors like governments or mega corporations.
As we’ve seen with the open source models, AI can run effectively on consumer hardware. This is going to enable the everyday consumer to run surveillance that isn’t too different from the largest and most advanced corporations in the world.
[1] A good example here is search. Models compress data so much more than traditional search indexes that you can effectively run your own search engine locally. Each person could have their own individual Google, eliminating the monopoly on search.
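The footnote’s idea of personal, local search can be sketched in a few lines. This is a toy illustration, not a real system: a bag-of-words vector stands in for the learned embedding a local model (e.g. one served by Ollama) would produce, and the documents are made up for the example.

```python
# Toy sketch of local search over your own documents.
# The embed() function is a stand-in: a real setup would use a small
# local embedding model, but the ranking loop is the same shape.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Stand-in 'embedding': lowercase word counts (bag of words)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing keys
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, docs: list[str]) -> list[str]:
    """Rank documents by similarity to the query, best first."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)

docs = [
    "Ollama runs large language models locally",
    "The stock market closed higher today",
    "Open source models keep improving",
]
results = search("run language models locally", docs)
```

Swapping the toy `embed()` for a local embedding model is the whole trick: the model’s compressed representation of language does the work a giant search index used to do.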