Using the Manus AI agent to update a GitHub repo special edition
Thank you for being a part of the journey. This is a special bonus edition of The Lindahl Letter. A new edition normally arrives every Friday. The topic under consideration for this special bonus edition is, “Using the Manus AI agent to update a GitHub repo special edition.”
Welcome to a few research notes about the future of interacting with the next generation of orchestrated AI agents. Truth has to be the cornerstone of this story. We have not skipped all the way ahead to the future Star Trek promised, where you ask the computer to do things and they happen like magic, but make no mistake: this is a reasonable step toward that future. Orchestrating agents to take actions and work in a containerized environment was the next step. It’s happening, and if you want you can skip ahead to the end of this missive and watch the replay via a link to the Manus site. That is not hyperbole. It might be buying into the hype, but a lot of people in various places online have been sharing videos and talking about this new company Manus [1][2][3]. I thought it was pretty interesting and filled out the request to see it run in action.
You can request access over at https://manus.im/ and, in my case, I was granted access and able to kick the tires on this general AI agent [4]. After getting the “You’re in!” email, creating a password, and verifying my email, I decided a good place to start would be to use one of my 3 task credits on something related to a previous project of mine. Please don’t waste your task credits. Make sure you think of something you really want to do before jumping into the task process. Some of you may recall that I had put together a Python-based code package that performs a knowledge reduction function to generate portable knowledge graphs of distinct referenceable elements [5].
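The post does not show the internals of that package, but the core idea, reducing raw statements into distinct facts that can be referenced by a stable identifier, can be sketched roughly like this. Every name below is hypothetical for illustration and is not taken from the actual KnowledgeReduce code:

```python
# Hypothetical sketch of a "knowledge reduction" step: collapse duplicate
# statements into distinct nodes addressable by a stable identifier.
# Class and method names are illustrative, not from the KnowledgeReduce repo.
import hashlib

class PortableKnowledgeGraph:
    def __init__(self):
        self.facts = {}  # fact_id -> fact record

    def add_fact(self, statement, source):
        # Derive a stable, referenceable ID from the statement text, so
        # re-adding the same fact deduplicates instead of duplicating.
        fact_id = hashlib.sha256(statement.encode("utf-8")).hexdigest()[:12]
        record = self.facts.setdefault(
            fact_id, {"statement": statement, "sources": []}
        )
        record["sources"].append(source)
        return fact_id

graph = PortableKnowledgeGraph()
fid1 = graph.add_fact("Water boils at 100 C at sea level", "textbook")
fid2 = graph.add_fact("Water boils at 100 C at sea level", "blog post")
assert fid1 == fid2            # the same statement reduces to one node
assert len(graph.facts) == 1   # two sources, one distinct fact
```

The point of the sketch is only the "distinct referenceable elements" property: duplicate facts fold into one node whose ID stays stable across exports.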
My prompt ended up asking the Manus orchestrator interface (it’s an agent) to finish and review that project. I gave Manus the following rather simple, but very direct prompt:
“I was working on this paper KnowledgeReduce: Building Stackable Knowledge Graphs and wrote this https://github.com/nelslindahlx/KnowledgeReduce/tree/main. I never got it turned into a fully functional set of Python code. Please help get the code to a complete state.”
At this point, Manus had a 10-page academic-style paper and a lot of code in a GitHub repository. Keep in mind that the system has no idea whether I am actually the author of the code that is about to be modified. In this case I am, and you can see what Manus shared back as a zip file in the same GitHub repository [6].
You can watch Manus’ computer screen from the interface and see it work through a todo list, which is really interesting. Completing the task took more than 5 hours, so you may not want to watch the entire replay. You probably just want to skip around and get a feel for what happens during the orchestration. Right now I’m watching it move from the first section of that list, examination and analysis, to missing components identification. That todo list actually looks like this:
# KnowledgeReduce Implementation Todo List
## Examination and Analysis
- [x] Examine the KnowledgeReduce paper to understand the concept
- [x] Explore the repository structure
- [x] Analyze core.py implementation
- [x] Review setup.py and dependencies
- [x] Check test files and identify expected functionality
- [x] Look at Jupyter notebooks to understand usage patterns
## Missing Components Identification
- [ ] Identify missing functionality in core.py (quality_score calculation)
- [ ] Determine additional methods needed for the KnowledgeGraph class
- [ ] Check for missing utility functions
- [ ] Identify missing import/export functionality
- [ ] Determine if visualization components need to be added
## Implementation
- [ ] Implement quality_score calculation in KnowledgeGraph class
- [ ] Add missing methods to KnowledgeGraph class
- [ ] Implement utility functions
- [ ] Add import/export functionality
- [ ] Implement visualization components if needed
## Testing
- [ ] Update existing tests
- [ ] Add new tests for implemented functionality
- [ ] Ensure all tests pass
## Documentation
- [ ] Update docstrings for all functions and classes
- [ ] Update README.md with usage instructions
- [ ] Add examples
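To make the first implementation item on that list concrete, here is a hedged sketch of what a quality_score calculation could look like. The actual formula in the KnowledgeReduce repository is not shown in this post, so the weighting below (a reliability rating plus a capped usage bonus) is purely an assumption for demonstration:

```python
# Illustrative quality_score: the real KnowledgeReduce formula is not shown
# in the post, so this reliability-plus-usage weighting is an assumption.

RELIABILITY_WEIGHTS = {"verified": 1.0, "likely": 0.7, "unverified": 0.3}

def quality_score(reliability: str, usage_count: int) -> float:
    """Score a fact by its reliability rating plus a capped usage bonus."""
    base = RELIABILITY_WEIGHTS.get(reliability, 0.0)
    bonus = min(usage_count, 10) * 0.02  # cap so popularity cannot dominate
    return round(base + bonus, 3)

print(quality_score("verified", 5))     # → 1.1
print(quality_score("unverified", 50))  # → 0.5
```

Whatever the real implementation does, this is the kind of small, well-specified gap that an agent working from a paper plus a partial repository is well positioned to fill.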
Not only can you watch the todo list to see what is happening throughout the entire process, but you can also see all the files being created during the task. The list of files updates as the task runs, which is nice if you are following along and engaging with the prompt line.
During the run of this task I did not interfere or give any additional instructions, which is something you can do by messaging Manus at any time. I just wanted to see what would happen from that initial prompt without any additional interaction. I spent far more time working on that paper and code repository than the entire Manus process took; the ratio of my hours to Manus’ was probably about 25:1. I’m probably going to use my last task credit to do the same exercise, but interact with it throughout the whole process.
You can actually watch the entire Manus replay via the following link and I highly suggest that you consider taking a look. It is really the most interesting part of the process.
I’m probably going to dig into this one again later after I spend some time thinking about the broader implications of this type of orchestrated agent interaction. Really going through the generated code to test it out is going to take some time. Aside from that deeper dive into the quality of the output, I can say that this is a really interesting way to work with these agents and models, and it is certainly going to be part of the path forward. This type of interface cannot be discounted, because it makes working with a model and augmenting your work very accessible to knowledge workers. You could see huge productivity gains from this type of effort, but it could also produce huge blocks of code that are a false start or just difficult to triage and troubleshoot. We will see what ends up happening, but I think this type of orchestrated interaction with agents completing tasks is really going to become mainstream in the next 6 months.
Several hours were spent resolving dependencies during testing, and I thought it was very interesting that the orchestrator just kept persisting toward a solution. In the end the task orchestrator ran out of context space and failed. I tried to run the whole thing again and got this message: “Due to the current high service load, tasks cannot be created. Please try again in a few minutes.” Stay tuned, I guess; you can still see all the code that was produced and the hours of replay.
Footnotes:
[1] https://www.technologyreview.com/2025/03/11/1113133/manus-ai-review/
[2] https://www.economist.com/leaders/2025/03/13/with-manus-ai-experimentation-has-burst-into-the-open
[3] https://techcrunch.com/2025/03/12/browser-use-one-of-the-tools-powering-manus-is-also-going-viral/
[4] https://manus.im/
[6] https://github.com/nelslindahlx/KnowledgeReduce/tree/main
What’s next for The Lindahl Letter?
Week 187: The intersection of technology and modernity
Week 188: How do we even catalog attention?
Week 189: How is model memory improving within chat?
Week 190: Quantum resistant encryption
Week 191: Knowledge abounds
If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. Make sure to stay curious, stay informed, and enjoy the week ahead!