The U.S. National Science Foundation (NSF) announced today an investment of more than $100 million to establish five artificial intelligence (AI) institutes, each receiving roughly $20 million over five years. One of these, the NSF AI Institute for Artificial Intelligence and Fundamental Interactions (IAIFI), will be led by MIT’s Laboratory for Nuclear Science (LNS) and become the intellectual home of more than 25 physics and AI senior researchers at MIT and Harvard, Northeastern, and Tufts universities.
One of the biggest challenges in computing is handling the onslaught of information while still being able to efficiently store and process it. A team from MIT CSAIL believes that the answer rests with something called “instance-optimized systems.”
OpenAI unveiled what was then the world’s largest language model, a text-generating tool called GPT-3 that can write creative fiction, translate legalese into plain English, and answer obscure trivia questions. It’s the latest feat of intelligence achieved by deep learning, a machine learning method patterned after the way neurons in the brain process and store information.
Researchers from CSAIL have developed a machine learning system that can either make a prediction about a task or defer the decision to an expert. Most importantly, it can adapt when and how often it defers to its human collaborator, based on factors such as its teammate’s availability and level of experience.
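The predict-or-defer idea can be illustrated with a minimal sketch. Everything here is hypothetical, not the CSAIL system’s actual design: the function name `decide`, its parameters, and the thresholding rule are invented for illustration. The sketch only captures the gist that deferral should depend both on the model’s confidence and on the human expert’s availability and accuracy.

```python
def decide(model_confidence, expert_available, expert_accuracy,
           base_threshold=0.8):
    """Return 'predict' or 'defer' (illustrative policy, not the real system).

    Defers to the human when the model is unsure, but raises the bar for
    deferral when the expert is unavailable or less accurate than the
    model's own confidence.
    """
    if not expert_available:
        # No expert on hand: the model must act on its own prediction.
        return "predict"
    # Only defer when the expert is likely to do better than the model.
    threshold = min(base_threshold, expert_accuracy)
    return "defer" if model_confidence < threshold else "predict"


# An uncertain model with a strong, available expert defers;
# the same model with no expert available predicts on its own.
print(decide(0.6, True, 0.95))   # defer
print(decide(0.6, False, 0.95))  # predict
print(decide(0.9, True, 0.95))   # predict
```

The key design choice the sketch mirrors is that deferral is adaptive: the same model confidence can lead to different decisions depending on the human teammate’s situation.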
Given the boundless treasure trove of paintings in existence, connections between works of art from different periods and places can easily go overlooked. It’s impossible for even the most knowledgeable of art critics to take in millions of paintings across thousands of years and find unexpected parallels in themes, motifs, and visual styles.
Deep learning systems are revolutionizing technology around us, from the voice recognition in your phone to autonomous vehicles that are increasingly able to see and recognize obstacles ahead. But much of this success has come through trial and error in designing the deep learning networks themselves. A group of MIT researchers recently reviewed their contributions to a better theoretical understanding of deep learning networks, providing direction for the field moving forward.
In the first study to comprehensively track how different types of brain cells respond to the mutation that causes Huntington’s disease (HD), MIT neuroscientists found that a significant cause of death for an especially afflicted kind of neuron might be an immune response to genetic material errantly released by mitochondria, the cellular components that provide cells with energy.
Researchers have published a series of papers that address shortcomings of existing meshing tools by seeking out mathematical structure in the problem. Working in collaboration with scientists at the University of Bern and the University of Texas at Austin, the researchers show how areas of math like algebraic geometry, topology, and differential geometry could improve physical simulations used in computer-aided design (CAD), architecture, gaming, and other sectors.
The MIT Stephen A. Schwarzman College of Computing announced its first two named professorships, effective July 1, awarded to Frédo Durand and Samuel Madden in the Department of Electrical Engineering and Computer Science (EECS). The named positions recognize the recipients’ outstanding achievements and the future potential of their academic careers.
Regina Barzilay, the Delta Electronics Professor in the Department of Electrical Engineering and Computer Science, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science, join Ziv Bar-Joseph of Carnegie Mellon University on a project that uses machine learning to search for Covid-19 treatments.