On intelligence: tools and systems
#1 : tools that humans build and use → the biggest opportunity since whenever → dodging x risk with technology → automation startups explosion → systems of intelligence
Note: this was written mid-2022, at the very start of the LLM hype. It was fun brushing this up and I will try to publish some older pieces in the future too. The idea was born from pure observation of the impact recent developments in AI [will] have on the startup, venture and labor markets. The initial thesis was that if you want to work on good problems to solve, then leveraging AI has the best potential outcome (value, interesting problem, impact and many other dimensions). You still need to pick a good problem to solve though. Even in the age of LLMs the core business rules still apply: the customer will hire your product to solve their problem. So I tried to wrap it up with an overview of what kinds of problems can be tackled and an emerging new category of software, the system of intelligence.
The most important problem to work on.
Software is eating the world and AI is eating software. We saw it with the internet and mobile apps, and now machine learning is changing both how we use computers and what role they play in our lives.
Computers, and software in particular, are very powerful tools. And the funny thing about these tools is that, for the first time, it seems they will not just help us achieve certain objectives marginally more efficiently. They will actually achieve the objectives we set for them more efficiently than humans could.
I started seriously thinking about tooling for human work around 2016. That’s pretty much around the time I co-founded Juro and was thinking about what automation for the legal profession could look like. It was clear that a lot of what is possible today could be greatly improved.


After working on the machine learning component of Juro for a number of years with a bright team, it was clear that there’s still a long journey ahead. It is hard to avoid being carried away by the hype that surrounds AI today. But participating in that journey over the last 7 years gives me a strong feeling that we are making important progress. As a side note, we paused major development of Juro’s ML components pretty early on. The tech stack was not mature enough for off-the-shelf business consumption when we started, but I’m seeing some people getting good results in 2022 with the newest iteration of models in the legaltech space.
reflecting the recent changes from 2023 would mean rewriting a huge part of the article, so i leave all the dates as they were. it’s really hard to overstate how much the ml landscape has changed in the last couple of years, so i’ll leave it at that
Most of what we were trying to do back then, and what people are trying to automate today, is something that can be framed as assisted tooling. It’s clear that the problem of assisting people in doing their jobs has a long history and is by no means solved. One of the big reasons for that is the moving goalpost (a good thing if you care about people being employed and not automated away). You automate the shit out of manual calculations and you get yourself some more job positions created in the process. It seems (and I hope) that there’s still a long journey ahead, and it’s helpful to understand where we are in that journey.
One way of measuring the progress of the automation journey is comparing progress against High-Level Machine Intelligence (HLMI). In short, HLMI means a machine doing the same work that a human can do, better and cheaper. Think about a chess game against a computer and your chances of winning it; you kind of get the idea. In 2018 several smart people from Oxford and Berkeley wrote a paper on when AI will exceed human performance. They interviewed a bunch of researchers and experts in the field and got these nice curves that show the probability and time we have left until our robot overlords arrive:
this is actually a funny graph since it was created from a lot of forecasts around 2018. it would for sure look different today, but let’s be absolutely conservative here
There’s no way to correctly predict the future, but there are a couple of things that are clear (both from the paper as well as observations):
HLMI is not a binary event but a gradual improvement in machine capabilities (for example, some problems will be solved earlier than others, some only partially, etc.);
Progress is continuous and accelerating; the open question is mostly timing. In short: continuous progress, yes; when the next AI winter comes, no one knows;
A lot of industries are going to be affected by machine learning as we progress with improvements in AI. This one is for the non-believers: try to craft an argument why that would not be the case.
These changes represent a huge challenge but also an opportunity. The technology change and pace means that we have the chance to shape amazing tools to help us achieve our goals. This is not a new phenomenon, humans have been using new technologies to create tools for the whole of our history. And then used those tools to better understand and shape the world around us.
The difference today is that increasingly powerful tools are becoming available to us.
truth be told using chatgpt and similar tools still feels more like google search on steroids (and sometimes on a “tiny” bit of lsd) than something that will change the world coming monday; still it’s magic that was introduced to our world last monday and that’s mindblowing
Let’s talk tools then. It’s hard not to appreciate both how far we’ve come from stone-age tools (and how long it took: almost 2.5 million years) and the accelerating pace of technological enablement (plotting this on a graph would produce at least an exponential curve).
Shaping these tools scores very high on multiple dimensions and can be considered one of the most important problems to work on. It’s rational, commercially lucrative, can accelerate human development and it’s morally important to keep them accessible to a broad community of users. It turns out computers were just the start.
The future is already here, just distributed unevenly.
Thinking about shaping the world, I’m personally much more interested in not replacing humans with machines or algorithms. I might be a huge optimist, but I see the future as technology helping people reduce the burden of the repetitive tasks our brains have a hard time with, and giving them something they didn’t have before. And boy, do we need it.
The world is complex, unpredictable and volatile. We also have less free time. Add to that an increase in the amount of input data to sort through and digest, and we’re on a path that will make it quite hard to make progress in the world. We need help to better understand our environment. But we also need help with making decisions, and with understanding the impact of those decisions and any downstream or side effects.
i can only reiterate that both of these points are key to making progress as well as avoiding destruction. one of the fundamental x risks is that the feedback loop of human activity is becoming longer: bad effects can come later, stack up, etc. think of climate change or microplastics, or choose your own “bad thing in the world” du jour. technical and cultural progress, the thing that can counteract a lot of x risks, is also slowing down; they say most of the low-hanging fruit has been gathered
Adding an extra 10 or 50 IQ points would help with exactly that. Now, arguably for the first time in human history, we can actually gain an intelligence boost.
Just in the last couple of years we have seen a Cambrian explosion of tools. All of them are powered in one way or another by recent developments in machine learning, mostly focused on generating or creating something new:
Text (GPT-3, Chinchilla)
Image (DALL-E 2, Stable Diffusion, Imagen)
Video (Make-a-Video)
Science (AlphaFold)
Software engineering (Codex)
And we see how these and similar foundational models are being used to create companies:
DreamStudio, automating illustration and art creation, just raised $101M series A;
Jasper, automating blog post and article creation, just raised $125M series A;
Github Copilot, automating software engineering;
AI Dungeon, an AI-generated text-based adventure game.
the list has grown much more in the last year, so here’s one example of the landscape from sequoia:
let’s be real here, all of the above seems pretty much obvious to anyone who regularly used the internet during 2022/2023, so the real question is: how can you create a product, project or company that will ride this new wave of technological change?
Even a small underlying change in technology or tooling creates an opportunity for a company to be founded, move fast, and become big by utilizing that technology. There are plenty of examples of this. Even simple improvements in UX allowed companies like Typeform to pretty much become unicorns in a market with existing incumbents and a basically commoditized product (web forms).
A big underlying change in technology will create a very big opportunity. Even a medium-sized change in the technology platform (going from on-prem to cloud) allowed Salesforce to dominate the CRM market (with Oracle Siebel as the de facto number one player before Salesforce). The fundamental shift towards ML-first or ML-supported products will mean at least similarly fundamental opportunities.
there’s an ongoing debate about who is going to capture most of the value from those llms, incumbents or startups. if you voted for incumbents, you have no imagination.
The capital market recognizes that reality (and of course the hype) better than anyone. AI-related investments have 10xed in the last 10 years (by the way, you should check out the State of AI Report, a great summary/prediction of the whole AI landscape):
And the reason is pretty simple. The scope of human activities that can be addressed by machine learning is massive:
All of the above doesn’t look like a fair fight to me. It seems we’re outcompeted both in the strength and the intellect domains. The human physical strength domain was settled long ago; I don’t think anyone will, or even wants to, compete with a forklift. What about intellect? If the machines are so smart that we don’t have much chance of becoming smarter than them, what can we do? One answer: we can use the machines to augment our own capabilities.
“If you can’t beat them, join them”.
How much would you pay for an extra 50 IQ points, or the rise of Systems of Intelligence.
Let’s say we want to get a 50 IQ boost, where do we start?
There’s a misconception that intelligence is just a trait: any role-playing game allows you to tweak your level of intelligence and max it out if you want.
The reality of intelligence is much more complex. For starters, the scientific community has not been able to settle on a specific definition of intelligence. As of today the Wikipedia page for intelligence lists 10+ different capabilities that are all used in defining it.
It’s clear that there’s no right answer, but I still want to highlight two aspects of intelligence that I personally agree with and have been thinking about for quite some time now.
The first one is the idea that intelligence is not singular. In practice this presents intelligence as a combination of different activities and interactions between individual agents. Think about a child raised by wild animals: even though their brain “hardware” is the same as everyone else’s, they still can’t speak. To a very high degree we can’t decouple our ability to communicate from IQ or EQ; indeed, I find it hard to imagine anyone who can’t communicate at all having a very high IQ or EQ in any practical sense. A good description of this theory is the classic The Society of Mind by Marvin Minsky, where he builds a whole (philosophical) structure to support the argument.
The second is the idea that humans augment their intelligence by leveraging the surrounding physical world. The Extended Mind Thesis (EMT) was introduced by Andy Clark and David Chalmers. Is language a tool? Is culture? Viewed this way, anything invented by humans and then used to achieve certain objectives can be seen as a tool. In specific cases this means literally extending the capabilities of the brain’s “hardware”.
If we take both of the above assumptions as valid (and I haven’t seen strong arguments why that would not be the case), then creating tools to increase intelligence is not only feasible, we’ve done it before. There’s nothing magical about it; in fact, we can already imagine how that would look in a very simple workflow in a company.
What does a tool to increase intelligence look like?
We want to look at several aspects of work that require intelligence.
Tasks that are prerequisites of doing intelligent work:
Repetitive tasks (e.g. enter today's date and time);
Tasks that require cognitive load (e.g. addition, spell checking);
Information gathering (e.g. asking for input);
All other very boring tasks.
Tasks that relate to working with large volumes of information: things like summarizing, analyzing, synthesizing, and creating and updating models of reality and simulations (reducing the cost of making a decision).
Tasks that relate to aligning actions for human and non human agents:
Sharing information;
Aligning incentives;
Uncovering new information gathered and produced by different actors.
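To make the “prerequisite” tasks concrete, here is a minimal sketch in Python. All function names and the toy typo dictionary are illustrative inventions, not from any real product; the point is just that the boring examples above (date stamping, arithmetic, spell checking) are already trivially automatable.

```python
from datetime import datetime, timezone

def stamp_date(text: str) -> str:
    """Fill the repetitive 'enter today's date' step automatically."""
    today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return text.replace("{{date}}", today)

def total(amounts: list[float]) -> float:
    """Offload cognitive-load arithmetic, e.g. summing invoice lines."""
    return round(sum(amounts), 2)

# A toy dictionary standing in for a real spell checker.
KNOWN_TYPOS = {"recieve": "receive", "teh": "the"}

def spell_fix(text: str) -> str:
    """Naive spell checking: replace known typos word by word."""
    return " ".join(KNOWN_TYPOS.get(word, word) for word in text.split())

if __name__ == "__main__":
    print(stamp_date("Signed on {{date}}"))
    print(total([19.99, 5.01, 25.0]))
    print(spell_fix("please recieve teh goods"))
```

The interesting tasks, of course, are the ones further down the list, where a lookup table stops working and learned models take over.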
It’s not just that all of these require intelligence; the quality and speed at which they are performed also depend on intelligence. At the core of intelligence is stored information and its processing. So far we have two options for that: language and math. Unfortunately, my feeling is that complex math is much harder for humans to use than complex language. If I wrote this article as math formulas and data (which might actually be a better model for describing these ideas), the number of readers would probably drop to one in the best-case scenario. It would also be much harder for me to write!
That means that language will have to be at the core of an interface of such a tool if humans are to use it. And this is actually reflected in current developments of AI research and application. We create new text through text. Image generation happens through text. Video generation happens through text. Tesla is using language models to design better self-driving cars!
it’s kind of true on the surface but longer term i’m not sure that language alone will suffice. also this does not mean that a chat interface is the best interface!
You get the point: if we want to communicate with any kind of AI, we will have to use text until we find something better. Conveniently, text and language are also what we use to communicate between humans, which helps, and it means we have a lot of data to train the models on.
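One practical consequence of “everything through text” is that tools become composable. Here is a hedged Python sketch of that idea; the class names are hypothetical, and the bodies are trivial stand-ins for real model calls, but the shape (text in, text out) is the point.

```python
from typing import Protocol

class TextTool(Protocol):
    """Any AI tool reduced to the same shape: text in, text out."""
    def run(self, prompt: str) -> str: ...

class Summarizer:
    def run(self, prompt: str) -> str:
        # Stand-in for a real model call: keep only the first sentence.
        return prompt.split(". ")[0] + "."

class Translator:
    def run(self, prompt: str) -> str:
        # Stand-in for a real model call: tag the text as translated.
        return f"[translated] {prompt}"

def pipeline(tools: list[TextTool], text: str) -> str:
    """Because every tool speaks text, they chain together trivially."""
    for tool in tools:
        text = tool.run(text)
    return text

if __name__ == "__main__":
    print(pipeline([Summarizer(), Translator()],
                   "Language is the interface. Everything else follows."))
```

Whether a chat box is the right surface for this is a separate question; the shared text interface is what makes the composition cheap.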
In summary, we are just getting started and will see many more new ways of creating startups. The more interesting ones will take a different approach and be built on top of this new AI wave of technological change. Are you working on such a startup or project?





