Andrew Ng
Aug 28, 5:25 PM
Parallel agents are emerging as an important new direction for scaling up AI. AI capabilities have scaled with more training data, training-time compute, and test-time compute. Having multiple agents run in parallel is growing as a technique to further scale and improve performance.

We know from work at Baidu by my former team, and later OpenAI, that AI models’ performance scales predictably with the amount of data and training computation. Performance rises further with test-time compute, such as in agentic workflows and in reasoning models that think, reflect, and iterate on an answer. But these methods take longer to produce output. Agents working in parallel offer another path to improve results, without making users wait.

Reasoning models generate tokens sequentially and can take a long time to run. Similarly, most agentic workflows are initially implemented in a sequential way. But as LLM prices per token continue to fall — thus making these techniques practical — and product teams want to deliver results to users faster, more and more agentic workflows are being parallelized. Some examples:

- Many research agents now fetch multiple web pages and examine their texts in parallel to synthesize deeply thoughtful research reports more quickly.
- Some agentic coding frameworks allow users to orchestrate many agents working simultaneously on different parts of a code base. Our short course on Claude Code shows how to do this using git worktrees.
- A rapidly growing design pattern for agentic workflows is to have a compute-heavy agent work for minutes or longer to accomplish a task, while another agent monitors the first and gives brief updates to the user to keep them informed. From here, it’s a short hop to parallel agents that work in the background while the UI agent keeps users informed and perhaps also routes asynchronous user feedback to the other agents.
It is difficult for a human manager to take a complex task (like building a complex software application) and break it down into smaller tasks for human engineers to work on in parallel; scaling to huge numbers of engineers is especially challenging. Similarly, it is also challenging to decompose tasks for parallel agents to carry out. But the falling cost of LLM inference makes it worthwhile to use a lot more tokens, and using them in parallel allows this to be done without significantly increasing the user’s waiting time.

I am also encouraged by the growing body of research on parallel agents. For example, I enjoyed reading “CodeMonkeys: Scaling Test-Time Compute for Software Engineering” by Ryan Ehrlich and others, which shows how parallel code generation helps to explore the solution space. The mixture-of-agents architecture by Junlin Wang is a surprisingly simple way to organize parallel agents: have multiple LLMs come up with different answers, then have an aggregator LLM combine them into the final output.

There remains a lot of research as well as engineering to explore how best to leverage parallel agents, and I believe the number of agents that can work productively in parallel — like the number of humans who can work productively in parallel — will be very high. [Original text, with links: https://t.co/ElcJZyzcfw ]
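The mixture-of-agents idea described above can be sketched in a few lines: several proposer models answer independently in parallel, then an aggregator combines their drafts into one final output. The proposer and aggregator functions below are hypothetical stubs standing in for real LLM calls, not any actual API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical proposer models; each would be a separate LLM call.
def proposer_a(question: str) -> str:
    return f"A's answer to {question!r}"

def proposer_b(question: str) -> str:
    return f"B's answer to {question!r}"

def aggregate(question: str, drafts: list[str]) -> str:
    # A real aggregator LLM would synthesize one final answer;
    # here we just join the drafts to show the data flow.
    return f"final answer to {question!r}, combining: " + "; ".join(drafts)

def mixture_of_agents(question: str) -> str:
    proposers = [proposer_a, proposer_b]
    # Proposers run in parallel; the aggregator runs once at the end,
    # so latency is one proposer round plus one aggregation round.
    with ThreadPoolExecutor(max_workers=len(proposers)) as pool:
        drafts = list(pool.map(lambda p: p(question), proposers))
    return aggregate(question, drafts)

print(mixture_of_agents("What limits parallel agents?"))
```

The appeal of this design is that the only sequential bottleneck is the single aggregation step; adding more proposers widens the search over answers without lengthening the critical path.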
OpenAI
Oct 16, 3:18 AM
2 Sora 2 updates:
- Storyboards are now available on web to Pro users
- All users can now generate videos up to 15 seconds on app and web, Pro users up to 25 seconds on web
https://t.co/iINg7alWGL
BBC News (World)
Oct 16, 2:15 AM
China sacks officials over viral Arc'teryx fireworks stunt in Tibet https://t.co/djt3KYfTiX
Commentary Elon Musk News
Oct 16, 2:10 AM
RT @elonmusknews30: How many faces do you see? https://t.co/2j5aRHWTjI
Commentary Elon Musk News
Oct 16, 2:09 AM
RT @elonmusknews30: Be honest! What made you a fan of Elon Musk? A. Personality B. Leadership C. Money D. Intelligence https://t.co/…
Commentary Elon Musk News
Oct 16, 2:09 AM
RT @elonmusknews30: Leave a heart ❤️ for Elon Musk’s son https://t.co/mAT8IIRyQM
Commentary Elon Musk News
Oct 16, 2:08 AM
RT @elonmusknews30: My mom’s really sick, and it’s serious 🥺. She could use some ❤️love to lift her spirits. https://t.co/7ucDmOVXON