Natural language boosts LLM performance in coding, planning, and robotics
Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.