The Revolutionary Potential of AI with CSAIL Professor Manolis Kellis Transcript

Welcome to MIT's Computer Science and Artificial Intelligence Lab's Alliances podcast. I'm Kara Miller. On today's show, as we kick off a new year, we've got one of the world's leading thinkers on AI, looking at what's ahead for businesses and for us.
We are just at the first baby steps. And step 0 is just start using ChatGPT, for God's sake.
Manolis Kellis, a professor of computer science at MIT and head of the MIT Computational Biology Group, talks AI regulation.
Many people are saying, look, I can trick it to do this. We have to put guardrails. And I'm like, that's ridiculous. No, the moment you shift the responsibility from the tool to the user-- I'm not going to sue the hammer company if I kill somebody with a hammer.
And how medicine and pharmaceuticals could be completely revolutionized--
This is the final frontier. It's both the most exhilarating, the hardest, and the most impactful thing that we can do. That's all coming up in just a minute. But first, a quick note about CSAIL's 2025 online classes. This year, there's a wide range, from driving innovation with AI, to machine learning in business, to human-computer interaction for user experience design. If you're interested in learning more about these classes and others as they're announced, we've got a link to the course page in our show notes.
There you can also sign up to the mailing list. So before we get started here, I'm going to give you a little backstory on this episode. I showed up to record with Manolis at MIT. We were planning to record in a little room, audio only. But Manolis was like, let's do video. I've got space in my office. So Andrew, who was recording us, and Manolis and I all picked up the equipment, threw in some cameras, and awkwardly trekked through the halls of MIT.
And if you want to see our entire uncut conversation, we have got it for you on YouTube. You can get the link in the show notes. What you're going to hear in the podcast is a trimmed down version of what we discussed. But like I said, if you want that full treatment, plus a peek at the modernist furniture in Manolis's office, you'll find it in the show notes. So down to business.
What I wanted to find out in this conversation was, given that it's been just about two years, a little bit more, since ChatGPT entered our lives, how have things changed and what should we expect going forward? A lot, Manolis says, and we'll get to that. But first, let's talk artificial general intelligence. So just last month, OpenAI CEO Sam Altman said that he was thrilled to be working towards AGI, which has long been his dream. So will we? Could we ever get to the place where computers are as smart as humans?
To that question, Manolis has his own question. Haven't we already gotten there?
And the trouble with AGI, with artificial general intelligence, is that every single time we see a machine do something, we say oh, machines can do that. That's not intelligence. Intelligence must be that other thing that humans do. And it's been like that for decades, not just years. So before, it was oh, machines can't do integrals. That must be intelligence. And then machines could do integrals better than humans. Oh, that's not intelligence anymore.
Machines can't do-- basically, for example, intelligence used to be, oh, can you multiply two giant numbers in your head? Yeah, some very intelligent people can do that. But maybe that doesn't translate to general intelligence for these people. And then machines got way better at that than humans. Oh, that's not intelligence. Chess-- machines will never beat humans at chess. And then guess what? They do and they just destroy us. Oh, that's not intelligence.
That's what the machines do. So intelligence must be something else. Intelligence must be looking at images. And now machines can look at images. And they can do that probably better than humans. And what's really funny is that we're now hitting the level where some of these capabilities are not uniquely human. So dogs can recognize images. And chimps can recognize images. So we suddenly started saying that intelligence is the things that machines do, that humans do very naturally.
And those things, guess what? They're also done very naturally by animals. So maybe that's not intelligence either. And then you get to language. And you're like, aha, language is the one thing that machines will never do. And now, I'm sorry, machines are doing language probably better than the vast majority of humans. So we're like, that's not intelligence. Something else must be.
And now intelligence is about reasoning and logic. And guess what? We have specialized systems that do reasoning and logic better than we do. And we're like, well, that's not intelligence where we understand [INAUDIBLE].
So you're saying with AGI, the bar keeps moving. So we never really get to it, because the AI isn't like you. I recognize you are a person, and it's not you. So then it's not AGI.
Exactly.
It can't get there.
And so basically the Turing test was a very well-defined thing that basically said, if, behind a wall, I can spend 10 minutes interviewing someone and can't tell if they're human or machine, then that's AGI. And I'm sorry, we've long passed that. [LAUGHS] So basically now, some people are saying, oh, but humans have emotions. And they grunt, and they feel, and they touch. Yeah, but so do chimps.
You have, on one side, the bar of animals. And on the other side, you have the bar of extraordinarily impressive single-task systems. And now you're getting to the multitask systems. And the reason why so many people are talking about AGI right now is because ChatGPT is just freaking so good at so many things.
And I think that's why they're basically saying, maybe we should be fearing AGI. And guess what? Humans can also just stand up, walk around, open a door, et cetera. And of course, humanoid robots are coming for that as well.
They are.
And we now have these extraordinary new actuation devices, new batteries, electric motors instead of the hydraulic motors. These machines are getting way better and better and better. So basically is AGI about embodied intelligence? Of course, there's some component of embodied intelligence.
And my having experienced the natural world gives me paradigms for thinking about complex abstract objects that AI doesn't have access to, because it hasn't played with dirt, and jumped off a tree, and so on and so forth, as all of us have.
Also, I think the problem with AGI as a concept of people being replaced is that it only goes as far as people want people to be replaced. If I want an attorney and I hire you-- and I don't want a robot, even if it can be just as good, and even maybe much smarter than you, because it knows every case-- and I'm like, no, but I want a person. I want a person to say-- There's also a barrier to how much people let automation or robotics or whatever be subbed in for other people.
I agree. There's one component of that where, when I go to the doctor's office, I want someone to say, hi, my name is Dr. so-and-so. That's really [INAUDIBLE] human aspect
Even if they're more knowledgeable than your doctor.
Yeah. But the place where this becomes interesting is when you don't have to choose one or the other. You get to have both. In other words, do you want a lawyer human? Do you want a lawyer machine? Or do you want a human lawyer that also has access to the best lawyer machine? I'm like, I want the third. Do I want just a human doctor, or an AI doctor, or a human doctor that has fast access to AI capabilities?
And good bedside manner.
Yeah. And this is almost starting to become the great equalizer, in the sense that AI can allow the most empathetic teachers to teach students with the most inspiration and kindness and comfort, while having AI do the most complex cognitive tasks. In other words, we don't have to choose. Do I want a smart teacher? Or do I want a kind teacher? I can have both. And I can err on the side of kindness, because that teacher will be augmented with AI.
And maybe the great equalizer will be that everyone will have a superhuman capable assistant for anything we do. And that also enables the human component, where ultimately there's some accountability. There's some responsibility. There's some basic common sense that no, I'm not going to kill 100,000 people because I misunderstood the question. There's that ability to stop and say, no, that doesn't actually make sense.
So you can't trick humans as easily as you trick machines. And part of the reason is the multimodality. So the fact that we understand from so many different sources of data, so many different paradigms, embodied and physical, and cognitive and emotional, and relationship-wise, et cetera. So I do think that AI is getting to superhuman capabilities for dozens of tasks. And I think that by almost any definition of AGI that we had through the years, we've surpassed them all.
So one could say AGI is here already. Or one could say that human replacement is the level at which AGI is really defined. And that would involve reasoning and common sense, and emotion and movement, and empathy, and all of these things. And A, of course, I'm not sure we want that. But B, at some point it starts questioning what makes us human.
When you look back on the last two years, which have been a whirlwind, what do you think in terms of the progress you've seen?
I think AI has been progressing at a tremendous pace for the last many decades. However, something fundamentally new happened in the mid-2010s. And that fundamentally new thing is representation learning. The concept that machines are no longer learning from raw data, but instead they're extracting representations from that raw data. They're learning at a more abstract level. And that transformation is a technological marvel. It's a feat of engineering prowess.
But what made the difference is that it's in the hands of everyone. And we, in the AI world, have been using such models, foundation models and representation learning, and all of that stuff, for a long time. But the difference with ChatGPT is that it was smack in your face. Look at what this can do.
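To make the idea of representation learning concrete, here is a minimal sketch, a toy autoencoder in PyTorch rather than anything from Kellis's own systems. The encoder compresses raw 64-dimensional inputs into an 8-dimensional abstract representation, and the decoder is trained to reconstruct the input from it; every dimension and data point here is invented for illustration.

```python
# A toy illustration of representation learning via an autoencoder.
# All data and dimensions are made up for the example.
import torch
import torch.nn as nn

# Stand-in "raw data": 256 samples of 64-dimensional vectors.
x = torch.randn(256, 64)

# Encoder compresses raw data into an 8-dimensional representation;
# decoder reconstructs the input from that abstract representation.
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
loss_fn = nn.MSELoss()

for step in range(200):
    z = encoder(x)        # the learned representation, not the raw data
    x_hat = decoder(z)    # reconstruction from the representation
    loss = loss_fn(x_hat, x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Downstream tasks can now work with z, the abstract representation.
print(encoder(x).shape)  # torch.Size([256, 8])
```

The point of the sketch is the shape change: downstream learning happens on the compact, abstract z, not on the raw pixels or tokens.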
So do you feel like you were not that surprised when ChatGPT came along because, to you, this is a little bit of old news?
The technologies were definitely leaning in that direction. But anyone who says they were not surprised is just lying. I think we were all surprised. And I continue to be surprised today. I understand the underlying technology more than most people. And that basically means that I will neither get overly excited about what it does, nor will I say, oh, look, I can trick it to do stupid things. Because frankly, of course I can trick it, just like I can trick my children into doing stupid things.
So basically, if you understand that it's a language model-- if you understand that it has access to a vast, vast corpus of data-- if you understand that it can abstract away from massive numbers of documents that you feed it-- and if you understand that by guiding it in the right way with the correct set of prompts, you can basically make it do extraordinary things-- then you appreciate it for what it is: a tool, not some superhuman being, some intelligent thing.
No, it's just the next level of progress that started with writing. That continued with the printing press. That continued with the internet and the world wide web. That continued with Wikipedia. That continued with social networks and the spread of information from everywhere to everywhere at once. That continued with preprints, and the fact that whatever I publish today can be read by thousands of people across the world immediately.
And that instantaneousness has now been taken not just to the level of information access, but to the level of information integration. And I would say that ChatGPT is the great integrator. It's not just a language model. You think of it as a rapid lookup of knowledge. But it's a rapid integration of a ton of documents. And if you understand it from that perspective, then it demystifies it. But it also makes it much more useful.
It's interesting, too, that you talk about it as a tool, which I think is such a good comparison. Because every tool can be used for good and for bad. Are books inherently good? Is the internet? Well, obviously, there are different things on the internet. There are different books. And so this is neither inherently amazing nor inherently terrible. It's just a tool in that progression of tools.
And you can use a hammer the wrong way. You can bang your fingers.
Of course, of course.
Really make a [INAUDIBLE].
And build a house.
Literally. And the difference is that many people are saying, look, I can trick it to do this. We have to put guardrails. And I'm like, that's ridiculous. No, the moment you shift the responsibility from the tool to the user-- I'm not going to sue the hammer company if I kill somebody with a hammer, or if I run somebody over with a garbage truck. That's not what it was meant for. And you can't put enough guardrails around everything in society to make it incapable of harm.
And in the same way, we should say the user is responsible. If you use it to create spam, if you use it to create malicious stuff, if you use it to create bombs, et cetera, you are the responsible person. Or if there is information out there in other ways that you can integrate-- Just because it's the next level of more powerful technology doesn't take the responsibility away from the ultimate user of the system.
I know you talk a lot to people in business, to people who run businesses. Obviously, businesses have tried for two years to get their arms around this technology, to have people learn more about it. And I wonder what your sense of where business-- just at a high level, then we'll dig down a little deeper-- but where people are with that? Is it mostly that people are stuck in the pilot project phase, because they're afraid to do the full thing-- because, who knows? Just give me a sense of what you're seeing.
The trouble is that ChatGPT is already so extraordinarily powerful that many companies are faced with this dilemma. Do we train it from scratch? Do we incorporate our own data? Is it ready right out of the box to just do everything we want it to do? And I think that's the dilemma. On one hand, you can basically have a system that's ready to deploy, and that can do some very, very impressive things, but with no knowledge of your corporation.
On the other hand, you have this extraordinary ability, with the foundational tools of large language models and with open-source models like Llama 3, to basically deploy massive compute, train with your own data, and then have a system that truly understands your corporation.
I would say that one of the things that we're missing, between those two alternatives, is a system that can not only answer that one question, but can show you the landscape of data and information in your own corpus, or in your own corporation, or in your own domain of knowledge that you're interested in.
Let me give you an analogy. Right now, ChatGPT sits behind a giant glass window. And there's a little slot through which you can slide questions. And it then goes zoooo, everywhere. Imagine a genie, like Aladdin's, flying around massive, massive amounts of information, gathering stuff for just you, and then giving you back a piece of paper through that hole. No one has opened that gate to let humans in.
And I think that's what corporations are missing right now. They don't want just one answer at a time. Because this is a very archaic way of communication. It's the traditional speech, which of course, has been around for 70,000 years. But it's very slow. And it's one answer at a time. And it's one bullet point at a time. And yes, maybe that's how most people consume information. But maybe the CEO of that corporation wants to actually see the entire landscape of data. And maybe that's not true of just the CEO.
Maybe every person would actually change their mindset if you asked a question, and it gave you 20 bullet points, and it highlighted, in a map of all of the documents in your corporation, the 10 documents from which it pulled those highlights. And then you have the option to basically go into that part of the landscape, and look at these documents in more detail, and understand the provenance of the information. But also look sideways and say, hey, what else was there? And what am I missing?
And also say, oh, well, I'm not touching that entire cluster of documents over there at all. Why are you not telling me about that? And I think that component is something that will transform the way that we think of data.
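As a minimal sketch of how such a document landscape could be built, here is an illustration, not the system described in the episode: embed a handful of hypothetical documents, project them to two dimensions so a human can see the whole corpus at once, and cluster them so related material forms visible regions. TF-IDF stands in for the richer latent embeddings a production system would use.

```python
# A hedged sketch of a document "landscape": embed documents, project
# to 2-D, and cluster, so untouched regions of the corpus become visible.
# Illustration only; documents and cluster count are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

documents = [
    "Q3 marketing plan for the retail division",
    "Retail campaign retrospective and lessons learned",
    "Patent filing checklist for the imaging sensor",
    "Prior art survey: imaging sensors, 2015-2020",
    "Board minutes: budget approval and hiring freeze",
]

# Embed: TF-IDF is a stand-in for learned latent embeddings.
vectors = TfidfVectorizer().fit_transform(documents)

# Project to 2-D so the whole corpus can be viewed as a map.
coords = TruncatedSVD(n_components=2).fit_transform(vectors)

# Cluster so related documents form visible regions of the map.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

for doc, (x, y), label in zip(documents, coords, labels):
    print(f"cluster {label}  ({x:+.2f}, {y:+.2f})  {doc[:45]}")
```

A cluster your queries never land in is exactly the "why are you not telling me about that?" region Kellis describes.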
And this leads to a couple of questions. But one is, it feels like you're saying yes, people are impressed by AI. Yes, people want to integrate it. But the killer app you're describing hasn't come along yet, or has not been implemented yet. Has it not come along?
It hasn't come live yet. And part of the reason why I can imagine it in such great detail is because we have built it. [LAUGHS]
We're talking about the killer app right here.
So we have basically built a system that allows you to visually see hundreds of thousands of results. Ask the AI about them. Use a lasso tool to select documents that you care about.
Give me an example of what somebody might want to know, and how this would be useful in a really concrete way.
So I run a complex lab. We have maybe 12 different teams. And they're all working on different aspects. I have created ChatGPT sessions for every one of those teams. So every new meeting's minutes and transcript go into the same ChatGPT session. And therefore it has an institutional knowledge that it's building for each of those.
So that basically means that every time I wrap up a conversation, I create that document there. A different team in my own group has its own thread. And right now, they're sitting as silos, separated from each other. So what we need is this ability to take all of those ideas into idea space, and trace the progress of one group, and create these micro-links to another group.
And you think that is, at some point, coming for companies, so that the marketing in one part of a financial company and the marketing in another part understand that they're just reinventing the wheel, because these people already did this thing, and they might as well learn from it. And they should have a meeting.
You and I were just talking about that.
We were. We were.
You're saying, hey,
Silos are gone then.
There are always people who are doing the same thing, even within MIT, which is why we want to create such a system. And I've had such great feedback from so many people in the administration who are like, wow, we need this now. And when I showed this at Harvard Business School, they said, oh, we want this yesterday. And, we want to connect with our alumni. And I work with people working on patents. And they're like, oh, I want to use this now.
Because every time I want to see prior art, I want to understand the whole landscape of prior art. I've spoken with people in the legal profession, who are basically saying, I want to find every legal case and create the landscape of all of that. And then understand what is the knowledge, what are the patterns, what are the principles, what are the prior decisions that I need to extract for every one of those cases. And I want to actually look at the data in the medical profession.
We're working with a company that's now dealing with 5 million medical records. And they want to be able to know, why was the decision made? How is the patient progressing? What are other patients from whom we can learn? And how can we connect that across these different people? And in every single one of those cases-- attribution, transparency, accountability, traceability-- they're all key. And right now, you don't have anything like that with the current systems.
So let's zoom out. It sounds like you think that yes, AI has a huge amount to offer businesses. But then a lot of that promise has not yet come to pass.
We are just at the first baby steps. And step 0 is just start using ChatGPT, for God's sake. Just get everybody a premium, paid account, and just unleash their potential-- number 1. And number 2 is start integrating some of your own data. Microsoft has some offerings, and others do, too. You can basically now get a deal for your corporation where some of your data is integrated into their system. Level 3 is create your own language models, with the data from your corporation.
Level 4 is create multimodal representations of your data. Don't just think about text alone. Think about every diagram. Think about the images. Think about videos. Think about transcripts of videos. Think about meetings and recordings of attendance and structured data.
And how you can link all of that together in multimodal representations. Which basically means that when ChatGPT is able to, for example, generate an image, how does it do it? Because they've trained it in a multimodal fashion. That basically means that when it sees an image, the parts of the image that are being recognized-- from the pixels, to the lines and the shapes, to the objects-- are represented in both language and image at the same time. So you can actually ask questions about those elements.
When I ask Google to take me somewhere with Google Maps or Apple Maps, what I get is not just turn left, turn right, turn left, turn right. That's the stage that we're at with ChatGPT now. As opposed to, hey, here's a whole map. Here's how you navigate from point A to point B.
And you can see a little bit, here's what people have been ordering at that restaurant. They've already started to add on-- This isn't just an answer to your question. Here are some questions you haven't asked, but you might want to ask.
That's exactly right. And that's the integration. That's the multimodality. That's the bringing in many different sources of data all together at your fingertips.
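The multimodal training Kellis alludes to can be sketched with a toy contrastive setup in the spirit of CLIP-style models. This is an illustration under invented dimensions and random stand-in features, not any vendor's actual training code: two encoders, one per modality, are trained so matching text-image pairs land near each other in one shared embedding space.

```python
# A minimal, hypothetical sketch of multimodal representation learning:
# text and image encoders trained contrastively into one shared space.
# Feature dimensions and batch size are made up for the example.
import torch
import torch.nn as nn
import torch.nn.functional as F

batch = 8
text_features = torch.randn(batch, 128)   # stand-in for text-derived features
image_features = torch.randn(batch, 512)  # stand-in for pixel-derived features

# Each modality gets its own encoder into the same 64-dimensional space.
text_encoder = nn.Linear(128, 64)
image_encoder = nn.Linear(512, 64)

optimizer = torch.optim.Adam(
    list(text_encoder.parameters()) + list(image_encoder.parameters()), lr=1e-3
)

for step in range(100):
    t = F.normalize(text_encoder(text_features), dim=-1)
    v = F.normalize(image_encoder(image_features), dim=-1)
    logits = t @ v.T / 0.07        # similarity of every text to every image
    targets = torch.arange(batch)  # the i-th text matches the i-th image
    # Symmetric contrastive loss: match texts to images and images to texts.
    loss = (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, a caption and its image map to nearby vectors, which is
# what lets you ask language questions about elements of an image.
```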
So everything you've talked about has been built on data. Mike Stonebraker here has talked a lot about how poor data is in many, many companies. Allowing for the possibility that in some companies it's pretty good, I wonder if you worry that in many companies-- You can't get something out if you don't put something decent in. And I just wonder if that's a concern and a limitation. AI cannot work if you just don't have much to offer, or things are completely not cleaned up, or they're a mess.
In computer science, we always say garbage in, garbage out. In other words, if your data is crap, then the answer is going to be crap. I would argue that AI might allow us to go a step beyond that. And the reason is, for example, multimodal learning. If I learn in multiple modalities at the same time, I can then recognize when one modality doesn't quite agree. And I can flag potential errors in the data.
So for example, we built this multimodal electronic health record system for medical data a few years ago. And we were able to look at the doctor notes, the prescriptions, the lab tests, the billing codes, the DRG codes, and all of that at the same time. And that revealed just how crappy any one of them was in isolation. But by putting it all together, you're suddenly building each one on top of the others.
Another component is fail fast. In other words, do checks early on, as you're ingesting the data, to basically make sure that things make sense. Ask questions about, hey, what do you think of this data, et cetera. So basically when I use ChatGPT, I always say, how do you understand this document? Actually, I disagree with your interpretation of number 4. Can you skip section 5? Can you revise section 3? Can you combine 1 and 7? And so on and so forth.
And that ability to build progressive understandings of context and data streams and modalities, et cetera, is extremely important. And of course, the last step is bringing in the visual component. If you create a landscape of all your data, then you can basically say, hey, this thing is sticking way out.
And this landscape can be built based on the latent embedding representations of your documents, and of every sentence, and of every section, and of every paragraph, and of every image, and of every diagram, and so on and so forth. And this ability to visually see outliers is something that humans are extraordinarily good at. If only we allowed our AI to be transparent enough that we could see where its thoughts are projecting.
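One way to picture the "this thing is sticking way out" check is a small outlier-flagging sketch over embedding vectors. The embeddings below are random stand-ins, and the detector choice, scikit-learn's IsolationForest, is an assumption for illustration rather than the method used in Kellis's lab.

```python
# A hedged sketch of "fail fast" data checks: embed records, then flag
# the ones that stick way out of the landscape. The embeddings here are
# random stand-ins for learned latent representations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# 200 well-behaved records plus a few corrupted ones far from the rest.
clean = rng.normal(loc=0.0, scale=1.0, size=(200, 16))
garbage = rng.normal(loc=8.0, scale=1.0, size=(5, 16))
embeddings = np.vstack([clean, garbage])

# Isolation Forest scores how easily each point separates from the bulk.
detector = IsolationForest(contamination=0.03, random_state=0).fit(embeddings)
flags = detector.predict(embeddings)  # -1 marks suspected outliers

outlier_ids = np.where(flags == -1)[0]
print(f"flagged {len(outlier_ids)} records for review: {outlier_ids}")
```

Flagged records would go to a human for review at ingestion time, before they can poison anything downstream.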
So let's dive into one area of business. I know that people coming out of your lab, some folks, go into pharma. It's a really interesting area when you think about how AI is going to be used. How do you think it will change the development of pharmaceuticals in this country, or around the world?
I think that AI has changed so, so much of the landscape of knowledge already. In so many different areas, AI has had a profound impact, and is about to have a way more profound impact.
But perhaps the area where AI will be remembered as having truly changed the human condition is probably medicine. And the reason is that it is so darn complex. In other words, as I mentioned earlier, human language is fairly simple by comparison. We have only had 70,000 years to develop language. Every child is presented a very different subset of the corpus of all human data, every human book. And yet they pick up language extremely, extremely fast.
So it's a relatively simple form of communication, from a computational complexity perspective. DNA has been evolving for 3.5 billion years. Proteins have been folding for 3.5 billion years. Chemistry has been changing and being reshaped, initially by physics for the first 10 billion years or so, but then by biology, in ways that are way more complex than physics alone could achieve, for the last 3.5 billion years. This is by far the most complex aspect of the universe.
All of physics could be described with, I don't know, maybe 50 or 100 equations, if we understand it all. Biology-- as many as there are genes. And we now have reached the limits of what science can do for understanding the human condition, for understanding aging, for understanding Alzheimer's and cancer, and cardiovascular disease and immune disorders, and so on and so forth.
But my hope and my wish and frankly, my day job is, can we leverage now all of these extraordinary capabilities for new AI systems, for new representations, for new multimodal representation learning, for extracting knowledge into knowledge graphs, into patterns of interaction, into all of these different components?
Things people couldn't have seen before, or figured out by getting the pencil out.
A lot of people are worried that AI will just replace traditional jobs that humans could do. And yeah, that's true. But humans will find new jobs that are, hopefully, much more complex. With biology, there's no hope of humans doing it alone. So this is the final frontier. It's both the most exhilarating, the hardest, and the most impactful thing that we can do.
Is your sense that much better drugs will come out of this than, let's say, have come out of the more traditional process that has been used over the last 30 or 40 years?
So the traditional process of drug development is one gene at a time. You study one protein. You figure out how the structure of that protein works. You figure out what it does. You design a compound. You go after that one protein, et cetera. And that costs about a billion dollars per drug, at least. The serendipity approach has been much more successful, of try a bunch of molecules and see what they do. And then there's a few miracle drugs that are just pure serendipity.
But all of the rational drug development has been constrained by how we teach engineering in school, which is you isolate the system. You take apart the parts. Then you go into one component. You edit that one component. You hope for the best. Biology doesn't work that way. Biology is messy. There's things happening all over the place. Part of the reason why we're having so much trouble is because Alzheimer's is not down to one protein. That's ridiculous.
Alzheimer's is extremely multifactorial, and so is cardiovascular disease, et cetera. There's a small number of rare disorders, which are genetically driven, that have a single alteration. You can fix that alteration. But the vast majority of complex traits are extraordinarily multifactorial. So what we're doing now in my group is understanding the basic foundational building blocks of these disorders.
Because, yes, there are hundreds of genes involved. And they are converging in a small number of hallmarks, in a small number of buckets of principles, of pathways, of biological processes. And by understanding the modular view of biology and medicine and disease, instead of just thinking of Alzheimer's as a monolithic disorder, let's think about the vascular component. Let's think about the lipid transport, cholesterol metabolism, neuroinflammation, microglial states, and so on and so forth.
And you can isolate each of these components, and start building therapeutics for these parts. And there are maybe going to be 20 modules, if you wish, underlying Alzheimer's. This is economically much more feasible. Why? Because if we want to design a therapy for one person, no one can afford a billion-dollar drug that serves a handful of people. But if instead you say, I'm going to build a therapy for lipid dysregulation or cholesterol transport, then that module can be reused in millions of people.
And moreover, I might build it initially for that component of Alzheimer's. But it might be reused in cardiovascular disease, and reused in metabolic disorder, and you name it. So that modular approach allows you to now think about what are the core building blocks, how are they reused across disorders. And most importantly, how are they combined in every new person. And that's the secret to personalized medicine.
We're not going to personalize medicine by creating a pill for you. We're going to personalize it by understanding which set of modules is dysregulated in each person.
So you've got LEGOs. And you're building your unique LEGO thing.
That's exactly right.
But the LEGOs are mass produced.
That's exactly right.
It sounds like a lot of this is work that you're doing. But I do wonder, do you think, when you look at pharma companies, biotech, it's a vast landscape out there, are they changing quickly? They've been doing things for a long time. People are in companies. And people might have 30, 40 years in a company.
People have long careers. They don't necessarily-- This is not just pharma. This is true of any industry. Are these big changes that you see taking the world by storm, or are they creeping in more slowly than you'd like to see?
There's a small number of pioneering efforts that are completely embracing AI, that are basically taking this multimodal representation learning-- this geometric deep learning for understanding how proteins function, for solving the structure to function problem, for understanding how chemicals can be represented in their latent spaces-- and how to map now building blocks of proteins and building blocks of chemicals, and how to put them together.
And there's a small number of billion-dollar investments into startups that are basically saying, OK, great, let's revolutionize the way that we do generative learning for protein design.
Which is scary because you're like, well, let's do something completely different. This old way-- It's always hard to start something new, because you don't exactly know what the end result will be.
I agree completely. But it's not completely new.
Yeah.
It's something that we have seen the extraordinary power of. There was an AlphaFold moment, where suddenly we could solve protein structures with these deep learning techniques almost as well as with the experimental methods. That was a tremendous moment in biology, where we could basically say, whoa, there's something fundamentally different here. And these new billion-dollar investments are building on that transformation. They're basically saying, yeah, let's now use this ability to go to the next level.
I have two quick final questions for you. One is we were talking so much about how medicine will change and your hopes about how it will change. I think to an ordinary person, they think, look, there's a crisis in primary care physicians. I call up. I'm like, I'd like a doctor's appointment. They're like, we'll fit you in 15 months. You know what I mean? And I think there's a little bit of a mismatch between all this amazing stuff that's being worked on in a lab, whether it be in academia or in business, and people's actual experience of, I have to fight the insurance company. Could you really make medicine better? It feels like a system that's so broken.
It's a fantastic question. And I want to say a couple of things first. Let's start with how spread thin we are. Everybody says oh, you got to stop. Machines are coming for our jobs. And everybody's like, I don't have time for my family. I don't have time for my kids. I don't have time for my elderly parents. I don't have time for leisure. I don't have time to eat. And [LAUGHS] how do you reconcile those two? Like, don't take my job.
They also need money to eat.
[LAUGHTER]
But basically, what I'm trying to say is that we have made extraordinary exponential progress for thousands of years. It is the same exponential we keep riding. It's not a new exponential. It's just steeper where you are, because it's an exponential. It gets steeper as you go. So we are at the tail end of this super, super steep exponential. That has led to tremendous progress. And progress is undeniable.
We all have better health, better food, better shelter, better education than at any time in the human condition, in the thousands of years that precede it. At the same time, there's more inequality. At the same time, there's more stress. At the same time, people are eating way more junk, et cetera. Now the question is, is the world getting objectively worse? Is it getting objectively better? Or are there components that are getting better as other components are getting worse?
And in my view, in this crazy sinusoid that is not going only up, but is going up and down, where some things are getting way better, and some things are getting way worse-- and the distribution is getting way wider about the level of awesomeness of different components of our lives-- maybe AI could be the solution, not the trouble. Maybe having AI assistance would allow 20 times more teachers to be able to teach than the number we have now. Maybe it will give us more time to take care of the elderly.
Maybe it will give us time to spend more time with our kids. Maybe we'll give our kids the ability to learn while having more time to do other stuff. And so on and so forth. So I think that this stretched-thin aspect of society could perhaps be dramatically improved with AI. And now, of course, the question of income inequality comes in. The question of do I have a job comes in. The question of can I feed my family comes in.
And that's a place where we have been terrible in the past, perhaps because of limited resources. But maybe in a world of dramatically increased productivity, abundance, health, cognition, education, et cetera-- maybe in such a world, there will actually be the opportunity to just provide a basic income for everyone, to provide health care as a basic human right to everyone, to provide extraordinarily high-quality education as a basic human right for everyone.
And maybe that will lift up all of society. And maybe that will actually lead to that human condition improvement, where you don't have to complain about all these terrible things that are happening at the same time, as some things are objectively getting better.
It would be a fascinating part of AI.
[LAUGHS]
Last question. We're at the beginning of a new year. Give me your sense, what is your prediction, maybe for 2025? You could go even beyond that. But is there something that you think is on the horizon that's really going to impress people, surprise people?
I will start with two quotes.
OK.
The first quote is it's very difficult to make predictions, especially about the future.
Yeah, it's a good one.
[LAUGHS]
The second quote is the best way to predict the future is to invent the future. And given those two, I will tell you that what's coming is, at least from my own work, this ability to combine the visual and the language, and the multimodality and the transparency, in a completely new way to interact with AI. And I think that will dramatically change the way we see data.
I think that data science right now is in its infancy. And when my kids-- I don't know, they sort little cards at the age of 3. They're sorting the square and the triangle, and the blue and the yellow, et cetera. And they have cards on the carpet that they're shuffling around. And I think that's a very natural human behavior, of just organizing stuff. And my extraordinary admin-- her dad says that when she was a kid, she would sort all of the cereal.
[LAUGHTER]
I can tell.
All right.
So I feel that this very natural human behavior of organizing things into stacks is something that we've lost with AI. The ability to print out little cards and sort the cards on the table, as I'm sorting ideas in my head, is something that is so natural. And I want to bring this back. And I think that the system that we have built, the Mantis system, which I'd love to show you a demo of at some point, allows you to take the human aspect back into data science.
I want this to transform the way we think about data, to make it much more human-accessible, much more human-centric, and much more enjoyable. To ask really hard questions that would otherwise take dedicated programs, and get answers in the span of milliseconds.
Yeah. Manolis Kellis, thanks so much. Thanks for letting us do this in your office. Thank you.
Truly a pleasure. Thank you.
I appreciate it.
[MUSIC PLAYING]
And before we go here, if you want to check out Manolis's creation, the Mantis system, which you just heard him reference, you can head to our show notes. There you will also find details on the 2025 slate of courses that CSAIL has coming up. I'm Kara Miller. The podcast is produced by Matt Purdy and Andrew Zukowski, with help from Audrey Woods. Join us again next time. And stay ahead of the curve.