Image caption: Transformers process and generate inputs by “attending” to all previous inputs in each layer, which becomes expensive as the sequence length grows; in contrast, linear transformers maintain a fixed-size memory that is updated recurrently at each time step, allowing them to efficiently process and generate long sequences (Credit: The researchers).
CSAIL article

Generative AI systems like large language models rely heavily on deep learning and, in particular, transformers. Transformers use an “attention mechanism” to model interactions among inputs: they perform nonlinear pairwise comparisons between inputs and assign different weights to the tokens in a sequence, prioritizing some over others. The empirical effectiveness of this mechanism has led some in the community to claim that attention is “all you need” (the title of the original 2017 Google paper that introduced transformers).
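The pairwise comparison and weighting described above can be sketched in a few lines. This is a minimal, illustrative implementation of single-head scaled dot-product attention in plain Python (no learned projections, no batching); the function name and structure are ours, not drawn from any particular library.

```python
import math

def attention(queries, keys, values):
    """Minimal scaled dot-product attention over lists of vectors.

    For each query, compare it against every key (pairwise dot
    products), convert the scores into weights with a softmax, and
    return the weighted average of the value vectors. Note the cost:
    every query touches every key, which is what makes standard
    attention expensive as sequences grow.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Pairwise comparison: dot product of query and key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax turns raw scores into weights that prioritize some
        # tokens over others (subtracting the max for numerical stability)
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # Output is the weight-blended mix of the value vectors
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs
```

With a query that matches the first of two keys more closely, the output leans toward the first value vector, and the softmax weights always sum to one.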

Image caption: The MIT researchers developed an AI-powered simulator that generates unlimited, diverse, and realistic training data for robots. The team found that robots trained in this virtual environment, called “LucidSim,” can seamlessly transfer their skills to the real world, performing at expert levels without additional fine-tuning (Credit: Mike Grimmett/MIT CSAIL).
CSAIL article

For roboticists, one challenge towers above all others: generalization, the ability to create machines that can adapt to any environment or condition. Since the 1970s, the field has evolved from writing sophisticated programs to using deep learning, teaching robots to learn directly from human behavior. But a critical bottleneck remains: data quality. To improve, robots need to encounter scenarios that push the boundaries of their capabilities, operating at the edge of their mastery.

Image caption: The EECS Rising Stars Workshop welcomed graduate students and postdocs of historically underrepresented genders who are interested in pursuing academic careers in the field (Credit: Randall Garnick).
CSAIL article

Earlier this month, electrical engineering and computer science researchers from around the world came together at MIT for the twelfth annual Rising Stars Workshop. The event welcomed graduate students and postdocs of historically underrepresented genders who are interested in pursuing academic careers in the field.

Image caption: The "hypometric genetics" approach uses typically disregarded measurements to improve genetic discovery up to 2.8-fold (Credit: The researchers).
CSAIL article

Research scientist Yosuke Tanigawa and Professor Manolis Kellis at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel methodology in human genetics to address an often-overlooked problem: how to handle clinical measurements that fall "below the limit of quantification" (BLQ). Their approach, "hypometric genetics," recently published in the American Journal of Human Genetics, uses these typically discarded measurements to enhance genetic discovery, with significant implications for personalized genomic medicine and drug development.

Image caption: A CSAIL framework reduces bias and treats comparable individual users similarly.
CSAIL article

Two of the trickiest qualities to balance in the world of machine learning are fairness and accuracy. Algorithms optimized for accuracy may unintentionally perpetuate bias against specific groups, while those prioritizing fairness may compromise accuracy by misclassifying some data points.