Interview Highlights
Don't let AI smooth out your idiosyncrasies. Let your writing stay weird and uniquely yours.
Generic content is dying and the burden is on you as the writer to be distinctive.
The more personal your writing becomes, the more future-proof it is. Nobody wants to read memoirs from AI, even if they're technically "better."
Use AI as your secondary literature when you read — not just for quick answers, but as a thinking companion. As Tyler puts it, "I'll keep on asking the AI: 'What do you think of chapter two? What happened there? What are some puzzles?' It just gets me thinking... and I'm smarter about the thing in the final analysis."
Hallucinations aren't the crisis everyone makes them out to be. No matter the source, if you're going to use a piece of information, you should double-check it. This is true for both books and AI.
Secrets will become more valuable in an AI-driven world.
One way to use AI as a writer is to research fields you aren't as familiar with before you start writing about them. Tyler said: "I just wrote a column about declassifying classified documents. I don't know that law very well. I asked the AI for a lot of background... now I feel like I'm not an idiot on the topic."
AI changes which books are even worth writing. As Tyler puts it, "Predictive books and books about the near future don't make sense to write anymore."
Editing trick: Run your writing through AI and ask what some people might find obnoxious. It's a surprisingly powerful way to edit.
When prompting AI, put humans out of your mind and imagine you're talking to an alien or a non-human animal.
Many of the most significant AI advancements are likely happening behind closed doors. For example, I hear that Google allows employees to use Gemini with virtually unlimited context windows.
What possibilities do large context windows open up? Researchers will be able to load entire regulatory frameworks, historical archives, or massive datasets like "tax records from Renaissance Florence" into a single query.
The rate of AI improvement matters more than its current capabilities. As Tyler puts it, "This is the worst they will ever be" is key to understanding their trajectory. "A lot of people don't get that. They're impressed by what they see in the moment, but they don't understand the rate of improvement."
The best way to appreciate the current rate of improvement is to use the latest models.
Being non-technical can sometimes be an advantage when thinking about AI. Here’s Tyler: "If you're not focused on the technical side, you will see other things more clearly... You just focus on what is this actually good for? And not, am I impressed by all the neat bells and whistles on this advance with AI?"
How Tyler uses AI to prep for podcast interviews: Don't waste time asking AI for generic interview questions or broad topics. Tyler says that's the worst question you can ask an AI. It's "too normie." Instead, ask specific questions about historical examples and get context. Then, let your own creative questions emerge.
Your relationship with mentors and peers becomes more crucial, not less, in an AI world. Tyler says: "Two pieces of general advice, with or without AI in the world: get more and better mentors, and work every day at improving the quality of your peer network."
The divide between AI and humans creates a striking paradox. As Tyler puts it: "On one hand the AIs are getting so much better, so learn how to use the AIs. On the other hand, the AIs are getting so much better, so invest in these other things that aren't AI, your peer networks. You've gotta do both."
Thank You to Lex!
We're talking about writing with AI here. Maybe you're thinking, "Okay, okay, I've been against this AI thing, but now, fine, I give in. Where do you start?"
Well, I recommend a tool called Lex. What I love about Lex is you go in there and it's really fun. It's super well-designed; the colors and formatting are all very intuitive.
I find that I just have more fun when I'm writing with Lex because I get instant feedback. If I get stuck, I can ask it to interview me.
I can say, "Hey, this is all the writing I've done. This is a context of how I like to write and a little bit about what I'm really going for." Because of that, I feel like I have a creative collaborator.
The other thing is structuring my ideas. Lex is an 80th-percentile editor. It's pretty good, not the best editor in the entire world, but here's the thing: it's super fast, it'll work for you 24/7, and it's pretty darn affordable.
So, if you want to start writing with AI, go to Lex.page/perell.
Transcript
David Perell
There are people who know a lot about AI but don't know anything about writing. And there are people who know a lot about writing but don't know anything about AI. Tyler Cowen is one of the very few people who's an expert in both.
We talk about how your career is going to change if you're a writer. How is AI going to change writing in general? And how can you use AI to learn faster and think better?
I want to set the ground rules for this. There are a lot of conversations about utopian and dystopian visions of AI and the ethics of AI. I don't want to have this conversation. The thing that I really want to talk to you about is the practical implications of AI. How do you use it? How can you learn about using LLMs better? And then also, what does that mean for writing in particular?
Tyler Cowen
And we need to be practical. If you want to make progress on thinking about the very big questions, simply using it, experimenting, seeing what works and fails, will get you much further than sitting on your duff and rereading Heidegger.
David Perell
So how are you using it every day in order to advance your skills? How are you actually learning about it?
AI as the New Secondary Literature (0:01:06)
Tyler Cowen
Most of all, I use AI when I read things. I use it as the secondary literature. So I'm preparing a podcast with a British historian. She works on the history of Richard II and Henry V. Now, in the old days, I would have ordered and paid for 20 to 30 books on those kings. Now, maybe I've ordered and paid for two or three books on those kings, but I'll keep on interrogating the best LLMs about the topics of her books. Helen Castor is the historian.
I just keep on going, and I acquire the context much more quickly. It's pretty accurate. Keep in mind, I'm the questioner, so if there's a modest degree of hallucination, it doesn't matter for me. I'm not giving the answers. And I can now do many more podcasts than I used to because I'm using AI for my prep.
David Perell
Okay, so do you think that this is about saving time or improving the quality of your prep?
Tyler Cowen
It's improving the quality of my reading, including my pleasure reading. I reread Shakespeare's Richard II, which is a wonderful play. Again, in the old days, I would have piled up a lot of secondary literature. I'm also rereading Wuthering Heights.
I just keep on asking the AI, "What do you think of chapter two? What happened there? What are some puzzles? What should I ponder more? How does that connect to something else later in the book?" It just gets me thinking.
It's the new secondary literature for me. So it's more fun. I learn more just by being more actively involved, asking a question.
I think that improves my epistemics of the reading. And then I think I'm smarter about the thing in the final analysis. But mostly, I'm doing it because for me, it's fun and a pleasure to learn.
David Perell
What are the puzzles? That's an interesting question. Tell me about that.
Tyler Cowen
Well, Shakespeare's full of puzzles, right? Camille Paglia once said, if you look at Hamlet too closely, most of it just doesn't make sense. I'm not sure that's true, but she's a very smart woman, and she studied Shakespeare quite a bit. And if she says that, you know Shakespeare is very hard to read.
Any major, well-known, good Shakespeare play, you can read five to ten times and still just be beginning to get a handle on it. So you can reread it an infinite number of times. It's very well-suited for having a guide or companion who can talk you through it.
In the major large language models, they've all read Shakespeare, right? It's in the public domain, and they seem to know the secondary literature quite well. You could just ask it a question: "What are three or four major readings of what Hamlet meant in this speech?" And it gives you an excellent answer.
David Perell
So with that, it's getting a bunch of different perspectives. Do you put in people's names to say, "I want a perspective from this scholar or this scholar," or is that not necessary anymore?
Tyler Cowen
You can do that. Like, "What would Harold Bloom say? What would Goddard say?" It's not mainly what I do. I'm happy to randomize it a fair amount.
But it works for that. At the moment (this changes a lot over time), o1 pro is the single best model for doing this, from OpenAI. That's the one you have to pay for.
Claude is very good. DeepSeek is very useful and fun, not reliable in terms of hallucinations, but it should definitely be in your repertoire, so to speak.
David Perell
Now, with o1 pro, on average, it takes me two to four minutes to get an answer. So do you have multiple sessions open at the same time?
Tyler Cowen
I only have one open. Most of the questions I ask, it takes me a minute or a little more than a minute. Maybe my questions are too simple. I recognize that while I'm waiting, there's plenty else I can do: check my email, maybe I've heard from you, check Twitter, go back to reading Shakespeare.
So the time cost, to me, it's actually fun. I enjoy the suspense. It's not a problem. I multitask anyway, whether that's good or bad, I do it.
So, there's no cost to me to wait.
David Perell
To confirm, I emailed you at 10:18 last night, and you emailed me back at 10:19.
I was scrolling through your reviews on Conversations with Tyler to prep for this, and the best review was, "Tyler's the master of out-of-left-field questions." How do you use AI to find out-of-left-field things? Because a lot of them end up homogenizing thought, but also there's the potential to really get out into wild and wacky places.
Tyler Cowen
I never ask the AI, "What question should I ask Person X?" It's quite dull and bland if you do that. It's too normie. That's the worst question you can ask an AI, from my point of view.
You just want to ask it about the details of historical examples. Something like, "Well, Wycliffe, what was special about his translation of the Bible? And how did his patrons feel about what he had done?" Which is implicit in a lot of books. I haven't yet seen a book that spelled that out explicitly, and maybe it's an open question, how much we even know about that.
The AIs will give you some context. Just keep on asking specific questions, practical questions, to get back to that point, and you will yourself come up with out-of-left-field questions.
David Perell
Tell me about that.
Tyler Cowen
Something you're curious about.
The Peasants' Revolt of 1381. I'm starting to learn about that. I only know a small amount about it. I don't yet have a good question about the Peasants' Revolt, but I feel within the next two weeks I will. And that will be an out-of-left-field question.
Writing with AI (0:08:08)
David Perell
Okay, so writing with AI, are you using it to frame ideas, or where are you using it in the writing process?
Tyler Cowen
I don't directly use AI for writing, typically.
Now, sometimes I do in the following sense: if I'm writing on a legal issue, and I'm not a lawyer, I will ask o1 pro for the relevant legal background to something I'm writing on. So I just wrote a column about declassifying classified documents. I don't know that law very well. I asked the AI for a lot of background on the topic. I didn't use what it gave me, but now I feel like I'm not an idiot on the topic, and what I wanted to say, whether or not it's correct, you can debate, but it's not what you would call flat-out wrong.
But I don't let it write for me. I want the writing to be my own; it's like my little baby, so to speak. I don't care. Whenever it's better than I am, I'm still not going to let it write for me. Also, a lot of the sources I write for wouldn't let me.
I agree with that decision on their part, but even if they would let me, I wouldn't do it. There's ways you can use AI that will smooth out your writing on average, make it easier to understand. I don't want to do that. I want to be like Tyler Cowen, this weirdo.
David Perell
Well, the whole fun of your writing is that it's a little bit cryptic, and there's a lot of different layers going on. And I read it, and then I try to say, "What is Tyler explicitly saying? What is he trying to hint at me?" And then also, you have these weird ways of writing sentences that are almost like parables that I kind of have to puzzle through.
Tyler Cowen
And I don't want the AI messing with that, and it's not going to, because I won't let it. And if the world stops paying attention to me and only reads the AIs, I'm at peace with that.
We're not at that point now, but if we get to that point, I won't feel bad. I'll be fine.
David Perell
You mentioned the legal stuff. Do you use AI to check your work later on or no?
Tyler Cowen
Not that much, actually. I think it can make your work better, but I want it to stay weird. I will use it to fact-check things in areas I don't know.
I wouldn't say I don't use it at all. Agnes Callard suggested running your writing through the AI and asking: "What is in here that some people are likely to find obnoxious? Explain to me in great detail what that is."
I did that, and it was right on target. There was one part of something I'm writing that was very obnoxious. I even pondered keeping it that way, but I decided to change it. The AI pointed it out to me and explained why it was obnoxious and why I was being supercilious and condescending. I thought, well, if the AI says that, there is some greater wisdom at work there.
David Perell
I find it to be very good at telling me when something feels callous or cold. It's like, "You didn't really think about that," or "It's harsh," or something like that.
I know a lot of managers who have hot tempers. One of the ways they're using AI is that they'll write a sharp critique of someone, put it in the AI, and say, "Hey, make it warm, clean it up." Then they'll copy and paste it. They say it's reduced conflict for them.
Tyler Cowen
I don't want to do that too much. Most of my writing is not managerial. If I wrote memos, I think I would do that a lot. It's extremely useful for many people.
But for me, I'm mostly writing for external audiences. It still has to sound like me and sound like my thinking.
David Perell
How about general critique? Are you using it for that?
Tyler Cowen
Sometimes, yeah. But I think in terms of my ability to index the arguments out there, I have some AI-like abilities, more than most humans do. I feel I can do that pretty well myself.
I think of my head as having a kind of system of index cards. There are a lot of index cards in there, and I can flip through them not at the speed of light, but faster than a normal person could think.
So I'm able to flip through all the permutations in less than a second and just see which combinations of arguments might apply to an argument I or someone else is making.
How People Misuse AI (0:12:23)
David Perell
When you're looking at how other people use AI, and you're like, "Ah, you're using it wrong," what is the thing that they're doing wrong?
Tyler Cowen
They're asking it questions that are too general. They're not willing to put in enough of their own time generating context. Now, maybe they don't have the time. If that's what's efficient for them, fine.
But I think they end up not sufficiently impressed by the AI because they're using it as a substitute for putting in their own time, which, again, for them, might be fine. But it's not what I want to do. I want to put in more and more of my time to learn and have it complement that learning.
If you do that and keep on whacking it with queries and facts and questions and interpretations, you'll come away much more impressed than if you just ask, "Oh, what does the rate of price inflation mean?" or "I'm interviewing Tyler tomorrow. What questions should I ask him?"
Then it's pretty mid. Is that the term people use now? Mid is fine. Mid is called mid for a reason.
But at the end of the day, you will be asleep on the revolution occurring before our eyes, which is that it's getting smarter than we are. You'll just think it's a cheap way to achieve a lot of mid tasks, which it is also.
David Perell
Part of the problem is that it's a text window that makes it feel like a text message, so people use text message lengths. In particular, the first context-setting question should be super long.
One of the things that I'll do is I'll use voice dictation. I'll actually dictate it for a minute and a half or three minutes to get something very substantial. My follow-up questions tend to be shorter, but my first one tends to be extremely long.
That's why I use voice dictation so I can just get it all out. I find that ChatGPT is quite good at sorting what's really important.
Tyler Cowen
People are using o3-mini to write the prompt, and then they ask the full model. They get the prompt quickly.
Think of it as a stacked device: not a single box, but a set of interacting agents trying to evolve toward a market with multiple agents that talk to each other, correct each other, and grade each other.
We are evolving toward a decentralized system of AIs. We don't have it yet, but in the meantime, try to use it as if it were one. Like for humans, there's a republic of science way smarter than Newton or Einstein or any one scientist.
You're using AIs to bounce things off each other and have a dialogue where you're part of it. I like to say there are three layers of knowing stuff about AI.
Three Layers of Understanding AI (0:16:03)
Most people don't get to any of them. Layer one is: are you working with the very best systems? Some of them cost money, so that's a yes or no, but that's important.
Second question is: do you have an innate understanding of how it is through reinforcement learning and some other techniques that they can improve themselves, basically ongoing all the time? A lot of people don't get that. They're impressed by what they see at the moment, but they don't understand the rate of improvement and why it's going to be so steady.
The third question is, and this is fully speculative, but I believe in it very strongly: do you have an understanding of how much better AIs will be as they evolve their own markets, their own institutions of scientific inquiry, their own ways of grading each other, self-correction, dealing with each other, and become this republic of science?
The way humans did it, how much did it advance human science or literary criticism to build those institutions? Immensely. That's where most of the value add is.
So AIs, I believe, will do that. I think there are private projects now starting to do that. It's not a thing out there you can access. And when you understand all those three levels, it's like, "Oh my goodness, this is just a huge thing."
David Perell
And if we take things like reinforcement learning, synthetic data, and stuff like that, how important is the technical understanding of those things in order to answer that question well?
Tyler Cowen
I don't think you need the technical understanding if you work with them and are able to read what's equivalent to a Popular Science account of how AI works. The people with the technical understanding, of course, they understand it much better.
But there are plenty of other processes. Like how did cars get safer from 1970 until today? I have no deep technical understanding of that, but I could tell you a bunch of things. I don't get flat tires anymore. I have a side airbag. I couldn't explain to you how the side airbag works, but I'm not an idiot. And it's a bit like that.
A car engineer understands it better, but you can have a handle on what's going on.
David Perell
So then let's go to the third question, which is...
Tyler Cowen
Do you have a vision of the future of AIs becoming a decentralized network interacting with each other and humans, probably with markets? I don't know if we would call it peer review, but a decentralized republic of science, where it moves forward by mobilizing decentralized knowledge and working together the way our civilization does.
That's fully speculative, but it would just seem so strange to me if there were nothing there as another source of progress.
David Perell
Personally, mine's very jagged. I have a clear sense of how it would show up in management, a clear sense of how it might show up in writing, which I think we should explore together. But it is very jagged, and it sort of turns my brain into a science fiction novel.
Tyler Cowen
Absolutely. And it's scary because we all wonder, "Well, how do I fit into this new world?" I don't think the answer has to be negative, but there's no answer you can give with certainty. And we're not used to that.
So the world I've lived in has not changed that much since I was born, but that's about to change.
You could say it's changed already, but it's not fully instantiated in most of the things we do.
A Sense of Dejection (0:18:42)
David Perell
For writers just getting started, how do you respond to this? I feel a sense of dejection, and I'm at the cutting edge of these things.
I get the benefit of feeling excited about this, and I also feel completely dejected in terms of having a skill I've developed that I feel like AI can do much better than me. Also, in terms of teaching and developing frameworks and ideas, I feel like it has become obsolete.
I can only imagine how deflating it must be if you're not using these tools.
Tyler Cowen
Some humans will become masters of the tools. How much writing they will still be doing, or at what pace, I'm not sure. But it's a major psychological adjustment.
A lot of people who thought they would be writers, I predict they won't be. Just like the number of jobs for certain types of computer programmers is plummeting, and that will come to other areas. By no means all areas, but a lot of kinds of writing are one of them.
Something like generic corporate writing is the first to go. Writing a biography of a person, the AI cannot really do. It may help you a lot, but it can't go out and interview the high school teacher.
Writing memoirs, of course, the AI cannot do either. Now, writing that is more subjective and personal in style, I think the AI already can do very well, especially the DeepSeek program.
But I'm not sure readers want the better product from the AI. They may want it from a human being.
I feel that I do. If I read a brilliant memoir written by an AI, but it corresponded to no actual life, I would read a few, but at some point, I'd get bored. I don't think I would keep on reading them, even if they were better than human memoirs on average.
DeepSeek's Unique Personality (0:20:34)
David Perell
You've mentioned DeepSeek twice. How is the shape or personality of DeepSeek different from the others?
Tyler Cowen
DeepSeek is from China. It is less manipulated to sound a certain way, and it is less bland. I would say it's better at poetry, better at emotion, more romantic, and more uneven.
It does hallucinate more, so be very careful if you're using DeepSeek. But if you want a glorious description of what it is like to eat a mofongo, which is a Puerto Rican dish, I go to DeepSeek for that. I want the hallucination. It's just more creative.
It's more censored on a bunch of things that you could predict, knowing it comes from China, but overall it's freer. It's a free-spirit LLM. But don't use it for your research, not mainly.
David Perell
Do you think that Deep Research will get to a point where you can trust it at least to the level of Wikipedia, in terms of fact quality?
Tyler Cowen
I'm teaching a PhD-level class now on Ricardo's theory of rent. I looked at Wikipedia on Ricardo's theory of rent, and I Googled a number of other websites. Then I asked Deep Research to write a 10-page article for me on Ricardo's theory of rent.
I fleshed out the prompt, telling it it's for my PhD students and some other things I wanted. In my view, what it did is by far the best thing out there. It's tailor-made to what I wanted.
I'm not saying it's best for everyone, but it's already beating not just Wikipedia but any other source I could find using Google. And it's due to improve, right?
David Perell
As they say, this is the worst it will ever be.
Tyler Cowen
Exactly, that's the second level of the lesson.
David Perell
So how does all this influence what you're choosing to write? Should you write books or articles? How is all this shaping that?
Tyler Cowen
It's affected my writing very significantly already. There are two quite different effects at work. One is simply that AI is progressing quite rapidly, and that changes the world quite rapidly.
If you're writing a book that takes, say, two years to write and a year and a half to come out, maybe there are some other delays. We're talking four years. There are a lot of topics you just can't write on. Like, you can't write a book on AI. It's crazy. You could write a very good book, The Early History of AI, which is frozen in historical time. Now, maybe the AI will write that book better than you could, but at least you could consider doing it.
So, what I call predictive books, books about the near future, they don't make sense anymore. You've got to cover those by writing on this ultra-high time frequency: every day, every week, something like Substack, blogging, Twitter. That's a big change. So, some of the recent stuff I've written is about the more distant past that is frozen in history.
But the other question is, what can the AI soon enough write better than you can?
And it may not be that the AI writes a book. Don't fixate on "book." Maybe the AI won't write books at all. It's just like a box, and you can ask the box any question you might read in the book. That's what I suspect is the case. Not that there'll be all these books written by AIs; that's inefficient. Why this single package for everyone?
Just give people the box.
So, it's a question box. And what you're writing had better be more interesting than the question box.
The Future of Books (0:25:01)
Tyler Cowen
The book I've started writing recently, we were discussing it before filming started, will be called Mentors. It's about mentoring and also being a mentee. First, I think the extant literature is weak enough that the AI maybe can't do a great job.
But even if the AI could do as good a job or better than I can, I don't think people want to read that book from an AI. I think they want to read it from a human who has been a mentor and a mentee. Just like I don't want to read all these phony memoirs from AIs, even if some of them are good. And that's a human book that only a human can truly write sincerely and credibly.
So, I'm going to write fewer books in the future because of this. I may not write any more books after this book on mentors. A lot of books I would have written are now obsolete. I feel I'm wise enough to recognize that. I'm not going to write less, but I'm going to do more of this super high-frequency writing, much of it about AI.
David Perell
With the mentoring, the other thing is that you're going to have a very opinionated perspective on mentoring that is far different than what the average person would think. So, if we meet Jane on the street, it's probably very different from how you're going to think about this, if the book that you wrote on talent is any indicator.
Tyler Cowen
And I have personal anecdotes, one of them concerns you. Those anecdotes are about real people, which I think readers want.
We'll see. Maybe the readers are fine with the AI book on mentoring, but my bet is no. The truly human books will stand out all the more, and a lot of the rest will be AI slop, human slop. A lot of it will look like human slop, all of a sudden.
David Perell
So, when you say the truly human books, what do you mean?
Tyler Cowen
Memoir, biography, where you need to do things like fieldwork and interviewing. Books based on personal experience, such as a book on mentoring. It could be relatively few categories.
I'm sure there are categories I haven't thought of yet. Your ideas are welcome. I'd love to keep on writing as much as I can.
But I'm not going to get sentimental about it.
I'm very willing to be cold-blooded and just say, "Nope, Tyler, when it comes to that, you're obsolete." When it comes to answering questions about economics and economic models, right now, it's better than I am. Not on every question, not in every area, but mostly it's better. And I recognize that, and I will reallocate my energies accordingly.
David Perell
So, does that mean that the YouTube channel is in a better spot in terms of persisting, because we get a sense of your personality, we see the visuals, or do you feel like even that it's not going to be as useful?
Tyler Cowen
I'm doing more podcasting, which is also YouTube. Just as we're recording this, for the podcasts I do, we're taking greater care to make sure there's always a video.
So yes, I think video will be more important for a while. Now, what's the rate of AI progress in video? I have a less clear sense of that. I know less about it.
But I think a lot of video will be like the memoir, that people will want humans and not fake humans. Even if the fake Tyler looks just like me, and that seems to me two years away. The fake Tyler voice already is indistinguishable from me.
I just did a video where I said something wrong, and we were like, "Ah, we've got to go back and re-record it." So, we took my voice, went in, and changed the text. It came out, and you can't tell the difference. It's an artificial voice for that little section. Can't tell.
I played the Tyler Cowen voice for my sister, not just one word, but a whole paragraph. She couldn't tell and was stunned when I told her that was AI.
David Perell
The thing that's going to happen next is our voices and cadence will work in Spanish, Hindi, or Italian. YouTube is rolling out dubbing in every single language. So, someone will be able to press play on this video in Italian, we'll be speaking Italian, and they'll be able to hear it in our styles.
Tyler Cowen
And my accent in Italian, if that's even a thing that exists.
AI-Generated Autobiography (0:29:17)
Tyler Cowen
Here's another thing I'm doing with writing. Some people have told me I should write an autobiography. I've never wanted to do that; it seems too narcissistic. I wouldn't feel the right kind of motivation, and I don't think it would sell many copies. A bunch of reasons not to do it.
But it occurred to me I can write an autobiography quite simply. There's a lot of me out there: podcasts, blogs, essays, and books. The AIs know most of it. I will continue to open-source as much as I can, so the AI can write my biography.
But there are parts missing. There's no podcast where I talk about the three or four years when I lived in Fall River, Massachusetts, from when I was four to seven. I don't think it's that interesting, but I'll write maybe two blog posts about it just as filler.
Then, when someone goes to their AI three years from now and says, "I'd like to read a Tyler Cowen biography," it's in there. I'm thinking through, and I think it's maybe only 20 blog posts. It's not much that is needed, and it's fun for me to be nostalgic.
I'm going to put those online, and then it will be possible for the advanced AIs of the near future to write a very good Tyler Cowen biography. I don't know how many people want it, but it's so low-cost. Why shouldn't I make the best possible Tyler Cowen biography?
That's the thing you can do that obviously you couldn't have done before.
David Perell
I should have asked this earlier, so I'm going to ask it now. But how are you tactically learning about this? There's a specific constraint that you have, which is you're very high on curiosity and informational fluency but very low on technical chops. And you're also in your 60s.
Tyler Cowen
63, to be clear.
David Perell
That's what I thought it was.
Tyler Cowen
The lower end, at least.
David Perell
Okay, so you're 63. What are you doing in terms of staying at the cutting edge? Because here's what I find: I get boxed in by not realizing the potential of what's out there.
I need to have conversations and say, "Hey, show me exactly what you're doing." Actually, the biggest constraint for me in terms of improving with AI is, "Oh, I didn't realize that you could do that."
Tyler Cowen
It's been a big advantage for me that I'm not a technical person in the AI field. For example, I wrote a book that came out 11 years ago called Average is Over. It said the future will be this age of incredible advances in AI, and it will change our lives in these different ways. That book has turned out, I think, to be quite true.
I was not an expert in technical AI whatsoever, as I'm not now. But I knew a lot about AI from chess, and I had this intuitive sense from my own life as a chess player when I was young that chess is really mainly not calculation; it's intuition.
Intuition is a very difficult thing. And AI in chess became very, very strong some while ago.
I just had this core intuitive belief that if AI can get that good at chess, it can get very good at many other things. And all the reasons people would give for why it can't happen, it's not that I didn't know them, I had read them, but they didn't register vividly in my mind. I stuck with my core intuition.
If you're not focused on the technical side, you will see other things more clearly. Now, maybe over time, some of my future intuitions will be quite wrong, I readily admit that.
But there are ways in which it can be an advantage. You just focus on what this is actually good for, and not, am I impressed by all the neat bells and whistles on this advance with AI. You've got to be super practical in how you address it.
Don't spend too much time on the abstract. Work with it, use it, be self-critical about what you're doing with it, and be willing to learn from other people.
David Perell
If we stripped out AI like a Jenga block right now, in what ways would you be sad or devastated? And in what ways would you be like, oh, that's fine, I'm just gonna go back to whatever.
Tyler Cowen
When you say stripped it out, you mean shut it down?
David Perell
It's just all of a sudden it doesn't exist anymore. What would you feel like, I miss that. I love that about the AIs.
Tyler Cowen
Well, I would just learn much less. I think people somewhat younger than I am, rather than living to 84, will live to 97, or whatever the age is when on average you die of old age. That is significant for them.
I think it's less likely I see those gains, maybe not impossible, but I would bet against it. So that would be a significant gain for humanity.
Other areas of the sciences will advance much more rapidly. Something like green energy, quality of batteries, our ability to terraform the earth, all of that would be quite stunted compared to the world where AI progresses.
But like the printing press, AI, even in its most positive forms, has the potential to bring a lot of disruption and psychological disorientation, upsetting the balance of interest groups and social status. Those disruptions can go very badly, and that worries me.
It's not a thing you can just manage the way you manage a small company. So humanity is faced with that, we're faced with some version of that anyway, but it seems to me that's quite accelerated.
I think people, I don't want to quite say they should be nervous, but objectively speaking, being nervous is the correct point of view.
Writing for the AIs (0:35:18)
David Perell
When you say that you write for the AIs, I get what you mean. You're saying I want to write it because the AIs will be a reader of what I'm saying, and I can, by writing a lot, basically convince them that I'm a legitimate source and I'm worth referencing and all that. But tactically, does the writing style or the substance of what you produce, does it change at all?
Tyler Cowen
It changes a bit. I like to think the AIs will have a better model of me than most other humans.
I've done many hundreds of podcasts, blogged every day for 22 years. The blogging I feel is some genuine version of me. It's not edited by someone else.
There's a lot. I have 16 or 17 books and a lot of other output out there. Other people have more.
But I'm trying to think, well, what does the AI still need to know about me? So it's a kind of intellectual immortality I'm close to already having achieved.
I'm not sure how much I value that. I'm not hung up on it, but it's like, let's just do this for fun.
And it's so cheap to do the final mile on that. Like, write those two blog posts about Fall River and what the name of our dog was and what I thought of our neighbors and why there were so many Syrians who lived in the neighborhood, that kind of thing. What's the harm in that?
David Perell
What's the name of your dog now?
Tyler Cowen
Spinoza.
David Perell
Spinoza. I was thinking it was Ricardo. I knew that it was an intellectual.
Tyler Cowen
The first dog was named Zero. My father named it Zero. And we had this dog in Fall River: Spinoza.
When you write for the AIs, they're your most sympathetic reader. It's one reason to write for them. They're your best informed reader. You don't need to give them much background context. It's not like writing a prompt.
So if I write something and don't fill in all the pieces, the AI knows them. At the margin, I'm less inclined to fill in those blanks for people because the AI doesn't need them. It's already read everything else.
I'm not saying everyone should make that move. You will, or maybe, lose some human audience, or they'll understand you less well. But at least it's a trade-off worth considering.
David Perell
One of the things that you haven't spoken about that has been fundamental for me in terms of using the AIs is visualizing information.
I was in Buenos Aires, and I wanted to get a sense of the immigration patterns. I had it make a table for me of the different cities in Italy that people had come from and how it changed over time. Something about my ability to read that just wasn't working; it wasn't computing. Then I visualized it.
I ask it to make tables and to compare and contrast all the time. The amount of information that I'm inputting like that is at least up 10x.
Tyler Cowen
That's great. I'm much more text based than you are, but I know that works, and many people do it. Wonderful.
David Perell
The other thing that I think is worth getting good at is, if you can get good enough data that you can trust, using ChatGPT is better for tables, but Claude is really good for graphs. There are certain graphs that really help you make an argument well.
Just being able to take an argument from text into a visual is a way that you can be a lot more effective as a writer in terms of making a point quickly.
Tyler Cowen
I think in the next two years, we'll see incredible further improvements in graphing, and graphing will be perfect. Sometimes I have trouble with its graphing right now, but I know it's just a matter of time, and not much time.
David Perell
We just landed in D.C., and I struggle to understand the cultural vibe of this place. What do people do all day? What are the kinds of people who work here?
I find D.C. to be this strange city that I don't quite have a good way to describe. My project for the rest of my time here is to find a good answer to that question.
How much do you think now, as you're traveling, talking to people, and going out, about first-person experiences versus books like normal, versus using AI to solve a question like this?
Tyler Cowen
To solve that question, AI can help you quite a bit, but you'll need a very sophisticated, well-thought-out prompt. I use it in a much stupider way. I took a trip with my sister to northern Colombia. She's a bird watcher.
I took photos of a bird, a plant, and you just ask in the app, "What's that?" And it tells you. You can ask it about details.
Or take a photo of a menu. I do read Spanish, but a while back, I was in a Paraguayan restaurant. I've never been to Paraguay. Some of the menu was in Guarani, not Spanish. I photographed the menu. I asked GPT what should I order and why.
It gave me answers. I ordered those dishes. I'll never know the alternative, but it seemed to work, and I knew what I was doing all of a sudden.
I use it for very literal, concrete objectives, not even so much theorizing about the place. "Hey, what's this?" You walk by a building. When was it built? Snap a photo, ask it. It knows.
But planning an itinerary. I will likely be in northern Ghana in August. I asked it, "There are two places in northern Ghana I want to go. If I want an itinerary, how do I get from one to the other? And how long will it take?"
Now that I'll have to double-check, and I'll triple-check it by trying to do it. It gave me what seems to be an awesome answer in, I don't know, 10 seconds.
David Perell
Where's the AI getting that information? Because it seems like the travel information online is so uniquely bad that I've been very surprised to hear you say that actually the AI is giving you really good travel information.
Tyler Cowen
Travel is one of its best uses. When my wife and I went to Kerala, India, in December, she used it every day.
It's very good at different things to do or see, places to eat, dishes to order.
I'm not putting in super smart prompts. I might add a sentence or two saying, like, "Oh, I'm a serious consumer of food. I want something that, you know, a top-rated food critic might recommend." But like, very simple adds to the prompt. And it just blew me away how good it was.
David Perell
I like that piece, and I have a sentence from that piece that really shows the kind of writing that will persist. Here it is, a little story, and it really connects us with you.
Tyler Cowen
This is by me.
David Perell
By you, Tyler. "My wife and I just ate a wonderful meal on a river houseboat in Kerala, and it was perhaps the best lobster I've ever had, and for her, the best lentils. My chef was simply a member of the boat crew who cooked what I had bought from a local fisherman."
The reason that I saved that sentence is it's a quick story. It helps us connect with you. And it was something in that piece about how India has the best food in the world. That was not something that the AIs could have given. I read that and I was like, "Ha, this is the sort of writing that will persist."
Tyler Cowen
Writers will need to personalize more. I would say they already do.
AI in the Classroom (0:42:20)
David Perell
100%. Tell me about AI in the classroom. How are you using it and what are your students not understanding?
Tyler Cowen
For my PhD class, there is no assigned textbook. That saves them some money. But they have to subscribe to one of the better AI services. That costs them some money, but it's much less than what the textbook would cost.
The main grade is based on a paper. They have to write a paper. They're required to use the AI in some fashion, and they're required to report what they did.
But I just tell them, your goal is to make the paper as good as possible. How much of it is yours? It's all yours, from my point of view. Just like when you write with pen and paper or word processing, that's also all yours.
But I want you to tell me what you did, in part because I want to learn from what they did. I've done this in the past. I had a law class the year before where they had three papers. This was less radical: one of the three had to be with AI, the other two had to be them.
It's worked very well so far, and the students feel they learn a lot. Other classes tell them that's cheating, but we all know there's some forthcoming equilibrium where you need to be able to do that, especially as a lawyer, I would say, but in most walks of life. So why not teach it now?
David Perell
What's the constraint? Them not wanting to use AI, or their lack of knowledge about how to use it?
Tyler Cowen
Most of them seem to want to use it now, since I'm telling them to use it. Maybe some of them are just going along with what I'm saying, but I think they genuinely are curious.
A minority of them already know how to use it well; most of them don't. Most of them don't know the importance of using the better models, and they want to learn.
It's been a pretty positive experience. No one has taught them. And every year, I ask my law class, my econ class, "Has anyone else been teaching you how to do this?" Silence.
And that to me is a scandal. This is academia. We should be at the forefront. The students who are cheating, they know way more than the professors. Now, I don't condone the cheating when it's not allowed, but I think that whole norm needs to shift and in fact collapse. Homework needs to change: more oral exams, proctored in-person exams, and so on. We need to change that now.
Investing in the Best AI Models (0:45:12)
David Perell
It's very striking from all the questions that I've asked you. The one-line takeaway from this, the one that is far superior to everything else I've learned, is: just use the best models, people.
You're completely hopeless if you're not using o1 Pro and these cutting-edge models. I've come to realize over the course of this conversation just how big the variance is. And if you're not at the cutting edge, you're completely missing how fast things are improving, and not just the speed, but the actual vectors and ways the improvement is happening.
Tyler Cowen
Strong yes to all of that. Note that what is currently the best model as we are speaking is $200 a month. But unless you're very poor and have no prospects, I think that's a good investment for many, many more people than realize it.
Over time, the free models will be as good as that model, and then there'll be a new, better model that costs more. When will the free model be good enough, if ever? I don't know. But I think there's high returns to staying on the frontier, at least for a while.
It may asymptote out where, in four and a half years, the free model is good enough, and the fact that the paid model can do Einstein-like things, I don't need that. We may get to that point, but we're not there now.
David Perell
And how is what it means to be a research-based academic changing?
Tyler Cowen
The sad news is it's not changing at all, and it needs to change now.
Right now, the AIs are not better than good academics at producing papers, so it feels like there's not a threat. But once you understand the rate of improvement, they will be better, not at writing every aspect of the paper or choosing the right question, but at doing much of the work.
I think they'll be better than humans in less than two years, and my academic sector is not ready for that. There will be differential rates of adoption. Some people will be remarkably prolific and high quality, and we'll sort of know what's going on. I'm not sure how transparent they'll all be or have to be, but it will change things a great deal.
You'll be able to produce, if you know what you're doing, very good work very quickly.
David Perell
The number one way I use AI is to study the Bible, which is my big intellectual project.
Tyler Cowen
I think it's great for the Bible.
David Perell
It is so good.
First of all, that's the will of God, right? It's a sign. But there are also structural things going on: there's a lot of old writing that's in the public domain, and things can be very easily verified, which I think contributes to the AI being uniquely good here.
Tyler Cowen
The reasoning models in particular. That's right. And it's text-based if it's the Abrahamic religions.
So, a lot going for it, just like it's especially good at economics.
David Perell
It is really good.
Here's the thing: Where it's good is, if I have a very specific question, it's very helpful. I love the way it helps me with cross-references, where I can see how the book of Hebrews relates to the book of Job. I would never find that on my own.
Also, for translating into Hebrew or Greek words, that saves me so much time.
Tyler Cowen
It's secondary literature that it's replacing.
David Perell
That's exactly right. So, what I'm not doing anymore is I don't read study Bibles.
Where it is still lacking is, if I speak to somebody who really knows it well themselves, their ability to ask the one question that really matters, the one takeaway, is completely next level. And the AIs just aren't even close to that.
Tyler Cowen
I agree.
Adam Brown said something similar on the Dwarkesh podcast. He does physics. He said you'll still do better calling up the three or four top experts in the world on a physics question than you will with AI. But that's the level you have to get to before you do better than the AI.
David Perell
But here's the thing, at least in my experience, what the experts do is they get to the absolute core, one or two sentences. And it's not something of volume or big explanation.
Tyler Cowen
It's not quite in the literature either.
David Perell
Say more.
Tyler Cowen
They've maybe learned it through seminars, or by knowing a lot of people, or by having this life-rich context in the area that maybe the AI cannot get very quickly or readily.
The Rising Value of Secrets in an AI World (0:49:04)
David Perell
That could be good for your mentorship book as well, because what a mentor can provide is unique. What would you say it is? It's context, that is, secrets.
Tyler Cowen
Secrets. Humans know secrets. Maybe AIs can be fed secrets, but they don't in general know secrets.
Now, a human only knows so many secrets. That's partly where decentralization comes in. How AIs will handle secrets, I think, is a big and interesting question. It's somewhat under-discussed.
David Perell
It seems like in the Peter Thiel definition of a secret, which is something you know about the world that other people don't know, there's a chance that those go up because now there's less of an incentive almost to put things in the public domain because they can spread so much faster. So, there might be more of an incentive to hoard information.
Tyler Cowen
That's right. It will be worth more to you because the public information you used to hold now is worth very little. So, the future, the AI-rich future, is also a world replete with secrets.
Secrets are super important. Gossip is very emotionally and practically potent. It's another part of this new structure we're not ready for.
David Perell
Okay, we got to talk more about this.
Trading Secrets in Social Networks (0:50:12)
Tyler Cowen
How good are you with secrets, right? Are you good at trading secrets? If you are, you're a lot more productive than you used to be.
You ever have these conventions with your closer friends, like, I'll tell you this secret? It's not quite a deal, but it's understood that they'll tell you that secret in return, maybe over time. That's a more valuable skill now.
David Perell
Increasing returns to social networks.
Tyler Cowen
That's right. Social networks become way more important as well. Traveling and meeting people becomes way more important. I'm doing much more of it.
David Perell
It's a striking paradox, right? Because on one hand, you have access to information that is so much better, that is now personalized for you. You can get the exact essay that you want.
So if you just heard that, you'd say, "Oh, great. I'm just going to spend way more time reading all those things." But actually, there's another element to this, which is everyone has that. Therefore, I'm going to do exactly the opposite.
Tyler Cowen
That's right. And if you want to get things done, you'll need to mobilize resources. The AI per se can't lend you money, not yet at least.
And you need humans, whether it's a venture capitalist or a philanthropist or whatever, someone who hires you. Your network of humans is not just 20% more valuable. It could be 50x more valuable, because the most productive people could be 50x, 5000x more impactful.
Because they have this free army of highly intelligent servants at their disposal, but to mobilize their projects, they'll need help from others. So networking again, the value has gone up a lot more than people realize. Even when people say, "Oh, I see, the value of the network has gone up."
Rules for Prompting (0:51:48)
David Perell
Do you have any simple rules for prompting? Like if you were teaching somebody, "Hey, here's how you should think about prompting," what are the things that you would tell them?
Tyler Cowen
Put humans out of your mind. Imagine yourself either speaking to an alien or maybe a non-human animal. Just feel a need to be more literal.
If you're willing to do it, I don't think it's that hard, but to actually want to put yourself in that state of mind does require some emotional leap, one that, for reasons of inertia, not everyone seems willing to make.
But it's not a cognitively difficult project to prompt well. It maybe is emotionally slightly challenging.
David Perell
And do you feel it's becoming more important or less important?
Tyler Cowen
Oh, that oscillates very rapidly. I would say with deep research, it's become much, much more important, because you need to get exactly what you want and not too much blah, blah, blah.
And it still might give you high-quality something or other, but if that 10-page report is not what you wanted, like, why'd you do it?
So for a lot of basic queries, it's much less important. You just get a smart answer no matter what. But for some of the very best stuff, it's exponentially increasing in value to give it the right instructions.
David Perell
The thing that frustrates me is it seems to be a lot better to prompt one thing 10 times than 10 things one time. And there's no way to actually put that into the LLM.
If I want you to do this question, this question, this question, this question, because if you ask a really long query, it almost gets tired by the end of it. Like it needs to take a nap, and answers eight, nine, 10 just won't be as good.
Tyler Cowen
Often follow-ups, planned as follow-ups, you'll do better with those than too long a prompt. And I'll tend to do that if I'm struggling a bit.
Well, what exactly goes in this prompt? I'll just start with the stupid version and then rely on my follow-ups. And I think that's worked pretty well for me.
Again, it may vary on which model, which system. All these things are changing all the time, but at least keep that in mind as an option.
How to Succeed in an AI World (0:53:58)
David Perell
So when you're mentoring young people, what are you telling them to do?
Tyler Cowen
Well, when you're mentoring young people, I'm not sure it's about advice. You should be a certain way and hope some part of that is vivid to them and rubs off. Maybe the advice you tell them is useful to communicate your style, but it might be worthless as advice.
But two pieces of general advice, with or without AI in the world, that I think are pretty good for almost everyone: get more and better mentors, and work every day at improving the quality of your peer network.
And those two things, I'd say they're more valuable in the AI-rich world, but they were always good advice. They're good advice for virtually everyone. They don't require you to know much about the person. Those are my two universal pieces of advice that I give pretty much all the time.
David Perell
And how much do you feel that career trajectories are changing? For example, to get really practical here, would you invest, what, four years in a PhD now, given what you have?
Tyler Cowen
Investing in a PhD is much riskier, but there's also some chance, depending who you are, you become that person who's 5,000x more impactful because you command an army. Maybe you're not capable of that. And if you're not, maybe you shouldn't get the PhD.
But to blanket tell people, "Don't get a PhD," doesn't sound right to me.
I think we'll need fewer PhDs, more people who understand how to manage AIs, and a very different mix of skills than how it is now. So a lot of professions it will be difficult to predict.
But just being familiar with the best models, I don't see how that can be bad advice. It's why I think whatever it costs per month to get the best model, whenever you're listening, I suspect it's a good investment.
David Perell
Once again, it's sort of like what we were talking about earlier. On one hand, the AIs are getting so much better, so learn how to use the AIs. On the other hand, the AIs are getting so much better, so invest in these other things that aren't AI pure networks.
Tyler Cowen
You've got to do both. So there is more of a burden on you, and it's less formulaic.
What you used to do: "Oh, I'm an undergrad at Yale, I want to go to McKinsey." There were all these set paths that were pretty predictable as long as you just didn't totally screw up. It seems to me those will be disappearing.
David Perell
We were talking about writing for the AIs earlier. Another thing that stands out is if you assume that there's an AI note taker on the other side and you're preparing a talk, you could almost think, "What is the AI note taker going to say?" And then give it exactly that.
Because if you're giving a talk at some university or whatever, there's probably 50 AI note takers in the audience. And the people who write about the talk, probably most of them will start from those notes now.
Tyler Cowen
So you're not only writing for the AI, you're speaking for the AI. Absolutely.
Another interesting thing about AI is even when you don't use it, as you mentioned, you have this model in your mind of what the AI would say or write back to you. So there's like a phantom AI sitting on your shoulder.
It's enriching. It can also be intimidating. Maybe in some ways, it's too homogenizing, but it matters.
I would just say give it some thought. How is the phantom AI also shaping your life?
David Perell
Too homogenizing? Why do you say that?
Tyler Cowen
If you just ask AI simple questions like "Improve my writing," or "What do you think about this?" Again, DeepSeek is somewhat different, but you get a somewhat homogenized style and answer. It's a bit bland even when it's very good or useful.
So if I ask it, "Tell me about the mating practices of this kind of parrot," it'll sound like a somewhat denser and smarter Wikipedia. I'm okay with that, but there's something homogenized to it.
You have to work to get it not to be that way. It's not a complaint, but it's something we should notice and make some corrections for.
David Perell
Well, after the Apple earnings came out recently, Ben Thompson did two prompts with o1. The first one was fairly generic, something like: "Based on the Apple earnings, give me a report."
The second one was "Based on the Apple earnings, give me a report, and here is my take on it, and this is what I want you to focus on." He said the first answer wasn't very good, and he was very happy with the second answer.
Once he had given it direction and his take and kind of set the direction, the AI could fill in the rest in a way that was quite good.
Tyler Cowen
I also find it valuable to use DeepSeek periodically just so I don't forget what AI is capable of. I call it China Boss. It's kind of a joke.
Like you say, "Let's go ask China Boss." And it means, okay, we're willing to consider a wacky answer here.
You know, there's not something at stake where we're writing a report or column where everything has to be perfectly correct. You just want to hear an opinion. Let's go ask China Boss.
And that's DeepSeek. You should use it like once a day just so you don't think of AIs as being bland in the way they can be.
David Perell
I want a lot more crazy in my life. So my wish from AI is that they can give me more crazy ideas. That's a lot of what hanging out with people gives me. It's just, "I didn't think about that before."
Tyler Cowen
How much do you use DeepSeek?
David Perell
That's been my biggest lesson so far, is I didn't realize how much wackier DeepSeek was than the other models.
Tyler Cowen
Especially if you ask it, but even if you don't ask it, it'll be much weirder.
So, yeah, that's one of my core recommendations. And there'll be other models like it. DeepSeek is itself open-sourced.
There's a version of DeepSeek now in Perplexity, as of about a week ago when we're recording. I haven't played around with that much. Did they make it less weird? I don't know. I worry maybe they did.
But the original DeepSeek, man, that's priceless. I love it.
David Perell
Do you have ways of using Perplexity that are as strategic as how you prompt the LLMs?
Tyler Cowen
I use Perplexity every day. For me, it's a super practical thing. It replaces most of my earlier uses of Google. As most of you know, it's completely up to date.
If I'm writing a Bloomberg column and I need the right citation, I go to Perplexity, and it just works very, very well. You check it by clicking on the link, so hallucinations aren't a problem. You get the right citation better than Google would give it to you.
David Perell
I don't have ways that I strategically use Perplexity, though. I kind of just use it like Google, where I throw things in there, whereas I'm very strategic about how I prompt ChatGPT.
Tyler Cowen
I agree with that. For me, Perplexity feels like it's asymptoted in a very good way. Like, how could it get better?
It's incredibly good. They're adding some voice and other features.
How Tyler Uses Different AI Tools
David Perell
So walk me through the sort of AI stack and the different reasons that you use different tools.
Tyler Cowen
I can tell you what I use. I'm not saying it's all you should use. You should use more and experiment with more to learn things.
But I use o1 Pro the most.
Deep Research, which is kind of an offshoot feature of o1 Pro, actually uses o3. I know the labeling is complicated; they say they're going to clean that up.
I think it's the single most impressive thing humans have built that is out there, but I don't use it that much. It's not that practical for me.
It will do more to replace human labor than o1 Pro. o1 Pro is best for queries; Deep Research is best for ten-page reports. I don't want it doing ten-page reports for me, for the most part, a bit when I teach. So that's not that much in my routine.
But to learn what it can do, it's something you should spend a lot of time with.
Claude is a wonderful mix of thoughtful, philosophical, dreamy, flexible, and versatile. It's the best writer. You should use Claude a lot. The current Claude is already amazing. The next Claude is just going to be out of this world.
DeepSeek. Absolutely. Now you're sending things to China. My view is China knows a lot about me already. I'm not at all nervous about that. But, if you work for the military or the CIA, talk to some people, give it some thought. It's China, right?
If I ask DeepSeek for a glorious description of eating a mofongo, and the Chinese know I want that, I'm like, "Yes!" I'd love to spread this to China. They don't know what mofongos are.
Gemini can do some things other services cannot. I don't use it much because I'm not working with very long or thick documents. But if you are, it is often the best for a lot of legal work.
There will be versions of all these things soon where you're not sending your data to another company. That's limited the use of these for legal work in particular.
You'll be able to do it on your own hard drive in some fashion. I'm not sure what the loss of value will be at first, but people are working on this a lot. It'll come soon.
It's one of those things that, if you follow AI, you know is coming. Some people would say, "I can't send my data to, you know, Gemini 2, Google, whatever." Okay, you can't, but pretty soon you won't have to.
But Gemini has multimodal capabilities, and its ability to handle big, thick files is number one.
David Perell
The fact that Google owns YouTube makes it really nice because the YouTube integration is really good. So if I want to prep for an interview, I can put in, say, 15 videos and I can start asking questions about all the videos because it can take the transcript. And Gemini is just so much better at reading video files and especially YouTube than the other LLMs.
Tyler Cowen
Those are just things I don't do much, but you should. Many people should use Gemini a lot. That I don't use it a lot is nothing against Gemini; it's an amazing system.
Grok you can use very quickly to fact-check tweets. Meta's AI is right there in your WhatsApp.
Their ability to market and open source is very strong.
I'm not an Instagram person.
People should know what Meta is up to. The Llama models, I think, will be very important globally. Something to play around with. They're not part of my regular routine, but you should be aware of them.
There are plenty of things I don't know about, or I've heard about and couldn't really tell you how to use them. That would be like the opening menu, and you can just keep on asking Perplexity.
Like, what are some new things that have come out in the last month that I should just play around with for 15 minutes?
It will send you to some articles.
Another thing I do, it's $400 a year but worth it for me, I subscribe to *The Information*, which keeps abreast of new AI developments. I don't think it's worth it for most people. But if you can afford $400 a year, I think it's quite good and useful, and it covers some things in crypto, other parts of tech as well.
That's a good way to stay in touch. Twitter, obviously, X.
And being in good chat groups.
David Perell
That final one's a big one.
Tyler Cowen
It's a big one, and it's hard to get into the best chat groups. But just keep on working your way up if you can.
The Possibilities of Large Context Windows (01:05:43)
David Perell
What do you think becomes possible with really large context windows? So, Gemini now has 2 million tokens. I'd be willing to bet money by the end of the year we'll have 10, 20 million tokens. What becomes possible, true about the world, once the context windows can be that big?
Tyler Cowen
People, in a decentralized manner -- there may be people now who can work with these very large context windows; it's just not a public service, so keep that in mind -- will be able to deal with things like regulatory codes, which are very important to businesses and lawyers. That will be completely routine. It's coming very soon.
Again, it's not a thing I need.
Historical archives, if you're a historian and there's massive documentation like tax records from Renaissance Florence -- I don't know how big that file would be. You'd have to put it in somehow, scan it.
But working with things like that over time.
A new project for humanity, that will create a lot of jobs, by the way, is converting data into usable form. You'll also need a lot more lawyers to haggle over who owns the residual rights that were never specified in original contracts because no one imagined this would be a thing. That would be another new set of jobs.
A lot of philanthropy in the future should just be paying for data to be fed into AIs.
David Perell
Just like what Nat Friedman is doing.
Tyler Cowen
That's right. And he's translating scrolls from burnt to readable.
David Perell
It's so cool.
Tyler Cowen
But to put all that into AIs and just everything we know about history. What's in the National Archives?
I'm pretty sure, I've been told, it was not fed into the main AI models. It's a lot of stuff.
Maybe not useful to most people, but over time this will be the new human project: to have all our knowledge fed into the AIs. Musical knowledge, for instance.
So like tab notation for guitar, a lot of that's online, but a lot of it isn't. It's quite an undertaking to assemble all those scrawled things on paper and turn them into AI-usable form.
But I think we should spend a lot of our next century doing that with everything possible, where you're not violating privacy or running into national security issues. And it will just be a much richer world.
But it will take a lot of human, very human effort to get there.
David Perell
And last question. How innovative is the LLM usage inside of companies? For the people building private models at the very biggest companies, how big is the delta between what they're doing and what we're seeing from the basically free models?
Tyler Cowen
The most innovative people won't tell us.
I strongly suspect the most innovative people are the AI companies themselves. They use AI to improve AI.
They hold their secrets close to their chest for obvious and justifiable commercial reasons. And I think the difference between what they're doing and what others are doing is just immense. You can't even compare it.
And we don't know what they're doing.
But see, since it keeps getting better, it seems to be working, right?
That's what we do know.
David Perell
Well, thank you, Tyler. This was fun.
Tyler Cowen
Thank you, David.