Proteins are the workhorses that keep our cells running, and there are many thousands of types of proteins in our cells, each performing a specialized function. Researchers have long known that the structure of a protein determines what it can do.
"The net effect [of DeepSeek] should be to significantly increase the pace of AI development, since the secrets are being let out and the models are now cheaper and easier to train by more people," says Associate Professor Phillip Isola.
As the capabilities of generative AI models have grown, you've probably seen how they can transform simple text prompts into hyperrealistic images and even extended video clips.
Daniela Rus, a distinguished computer scientist and professor at the Massachusetts Institute of Technology (MIT), was inducted into the prestigious Académie Nationale de Médecine (ANM) as a foreign member on January 7, 2025. As director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Rus leads more than 1,700 researchers pioneering innovations that advance computing and improve global well-being.
By adapting artificial intelligence models known as large language models, researchers have made great progress in predicting a protein’s structure from its sequence. However, this approach hasn’t been as successful for antibodies, in part because of the hypervariability seen in this type of protein.
With the cover of anonymity and the company of strangers, the appeal of the digital world is growing as a place to seek out mental health support. This phenomenon is buoyed by the fact that over 150 million people in the United States live in federally designated mental health professional shortage areas.
One might argue that one of the primary duties of a physician is to constantly evaluate and re-evaluate the odds: What are the chances of a medical procedure’s success? Is the patient at risk of developing severe symptoms? When should the patient return for more testing? Amidst these critical deliberations, the rise of artificial intelligence promises to reduce risk in clinical settings and help physicians prioritize the care of high-risk patients.
Sara Beery, Marzyeh Ghassemi, and Yoon Kim, EECS faculty and CSAIL principal investigators, were awarded AI2050 Early Career Fellowships earlier this week for their pursuit of “bold and ambitious work on hard problems in AI.” They received this honor from Schmidt Futures, Eric and Wendy Schmidt’s philanthropic initiative that aims to accelerate scientific innovation.
Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But to establish the trustworthiness of content generated by such models, how can we really know whether a particular statement is factual, a hallucination, or just a plain misunderstanding?