Training LLMs to self-detoxify their language
A new method from the MIT-IBM Watson AI Lab helps large language models steer their own responses toward safer, more ethical, value-aligned outputs.