When users query a model, ContextCite highlights the specific sources from the external context that the AI relied upon for that answer. If the AI generates an inaccurate fact, for example, users can trace the error back to its source and understand the model’s reasoning (Credit: Alex Shipps/MIT CSAIL).
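The idea of tracing an answer back to its sources can be sketched with a simple leave-one-out ablation: remove each context passage in turn and see how much the model's confidence in its response drops. This is only an illustration, not ContextCite's actual method (which learns a sparse surrogate model over many random ablations); the `toy_score` function below is a hypothetical stand-in for a real language model's log-probability.

```python
# Illustrative sketch of context attribution via leave-one-out ablation.
# NOT the ContextCite algorithm itself; all names here are hypothetical.

def attribute(sources, response_score):
    """Score each source by how much removing it lowers the model's
    confidence in a fixed response.

    sources: list of context passages.
    response_score: callable mapping a list of passages to a
        confidence score (stand-in for an LLM's log-probability).
    """
    full = response_score(sources)
    scores = []
    for i in range(len(sources)):
        ablated = sources[:i] + sources[i + 1:]
        scores.append(full - response_score(ablated))
    return scores

# Toy demo: "confidence" is just keyword overlap with the answer.
ANSWER_TERMS = {"water", "boils", "100"}

def toy_score(passages):
    words = " ".join(passages).lower().split()
    return sum(1 for w in words if w in ANSWER_TERMS)

ctx = ["Water boils at 100 C.", "The sky is blue."]
print(attribute(ctx, toy_score))  # the first source gets all the credit
```

A high score for a passage means the response depended on it, which is exactly what lets a user trace an inaccurate fact back to the source that produced it.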

Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But to establish the trustworthiness of the content these models generate, how can we really know whether a particular statement is factual, a hallucination, or just a plain misunderstanding?
