The field of machine learning is traditionally divided into two main categories: “supervised” and “unsupervised” learning. In supervised learning, algorithms are trained on labeled data, where each input is paired with its corresponding output, providing the algorithm with clear guidance. In contrast, unsupervised learning relies solely on input data, requiring the algorithm to uncover patterns or structures without any labeled outputs.
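As a minimal sketch of this distinction (using scikit-learn and a toy two-dimensional dataset; the specific estimators here are illustrative choices, not the only options):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Toy 2-D dataset: two loose clusters of points.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),  # cluster A
    rng.normal(loc=[3, 3], scale=0.5, size=(50, 2)),  # cluster B
])

# Supervised: each input is paired with a label, guiding the model.
y = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict([[0.2, 0.1], [2.8, 3.1]]))

# Unsupervised: only the inputs are given; the algorithm must
# uncover the two-cluster structure on its own, with no labels.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignments:", km.labels_[:5], km.labels_[-5:])
```

The classifier learns from explicit input-output pairs, while the clustering algorithm recovers comparable structure from the inputs alone.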
Sara Beery, Marzyeh Ghassemi, and Yoon Kim, EECS faculty and CSAIL principal investigators, were awarded AI2050 Early Career Fellowships earlier this week for their pursuit of “bold and ambitious work on hard problems in AI.” They received this honor from Schmidt Futures, Eric and Wendy Schmidt’s philanthropic initiative that aims to accelerate scientific innovation.
Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But to establish the trustworthiness of content generated by such models, how can we really know whether a particular statement is factual, a hallucination, or just a plain misunderstanding?