Helping nonexperts build advanced generative AI models
MosaicML, co-founded by an MIT alumnus and a professor, made deep-learning models faster and more efficient. Its acquisition by Databricks broadened that mission.
Microscope system sharpens scientists’ view of neural circuit connections
A newly described technology improves the clarity and speed of using two-photon microscopy to image synapses in the living brain.
MIT-Takeda Program wraps up with 16 publications, a patent, and nearly two dozen projects completed
The program focused on AI in health care, drawing on Takeda’s R&D experience in drug development and MIT’s deep expertise in AI.
Understanding the visual knowledge of language models
LLMs trained primarily on text can generate complex visual concepts through code with self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.
A smarter way to streamline drug discovery
The SPARROW algorithm automatically identifies the best molecules to test as potential new medicines, given the vast number of factors affecting each choice.
A new way to spot life-threatening infections in cancer patients
Leuko, founded by a research team at MIT, is giving doctors a noninvasive way to monitor cancer patients’ health during chemotherapy — no blood tests needed.
Technique improves the reasoning capabilities of large language models
Combining natural language and programming, the method enables LLMs to solve numerical, analytical, and language-based tasks transparently.
Featured video: Researchers discuss queer visibility in academia
In “Scientific InQueery,” LGBTQ+ MIT faculty and graduate students describe finding community and living their authentic lives in the research enterprise.
Nancy Kanwisher, Robert Langer, and Sara Seager named Kavli Prize Laureates
MIT scientists honored in each of the three Kavli Prize categories: neuroscience, nanoscience, and astrophysics, respectively.
Researchers use large language models to help robots navigate
The method uses language-based inputs instead of costly visual data to direct a robot through a multistep navigation task.