- It seems likely that Jian Liao, known online as jlia0, asked Manus for its source code and received it, though in an encrypted form.
- Research suggests this was part of a leak incident, with discussions around the code’s usability and security.
Background
Manus is an AI agent, often described as a general-purpose tool capable of tasks like research and content creation, developed by a Chinese startup and currently in closed beta with limited invite codes. Its source code is typically not publicly available, making any access notable.
The Incident
Jian Liao, using the username jlia0 on GitHub, obtained an invite code for Manus and reportedly asked the AI to output its own source code, specifically requesting the “/opt/.manus/” directory as a zip file. He received the code, but it was encrypted, limiting its immediate usability. This event sparked discussions on platforms like GitHub about the encryption and potential for reverse-engineering.
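The mechanics of the request are simple: an agent with shell or file-system tools can package any directory it can read and hand the archive back to the user. The sketch below illustrates that kind of tool call, assuming nothing about Manus's actual implementation; the demo runs against a throwaway directory, since the real `/opt/.manus` path exists only inside Manus's sandbox.

```python
import shutil
import tempfile
import zipfile
from pathlib import Path

def archive_directory(src_dir: str, out_stem: str) -> str:
    """Zip an entire directory tree and return the archive path --
    roughly what an agent's file tool does when told to
    'output /opt/.manus as zip'."""
    return shutil.make_archive(out_stem, "zip", src_dir)

# Demo against a throwaway directory; the real /opt/.manus is
# internal to Manus's sandbox and not reachable from here.
demo = Path(tempfile.mkdtemp())
(demo / "agent_loop.py").write_text("print('agent step')\n")
archive = archive_directory(str(demo),
                            str(Path(tempfile.mkdtemp()) / "manus_dump"))
```

The point of the sketch is that no exploit is required: if the agent's tooling can read its own runtime directory, an ordinary user instruction suffices to exfiltrate it, which is why the incident is framed as a sandbox-isolation issue rather than a conventional breach.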
Unexpected Detail
Although Manus was widely assumed to be a secure, closed system, the fact that even encrypted code could be extracted by a user instruction exposes weaknesses in AI agent security, raising questions about prompt injection and sandbox isolation.
Comprehensive Analysis of the Source Code Request Incident
This report delves into the details surrounding the request and acquisition of Manus AI’s source code, focusing on the individual involved, the context of Manus AI, and the implications of the incident. The analysis is based on recent online discussions, GitHub activity, and media coverage as of March 15, 2025.
Context of Manus AI
Manus AI, launched by a Chinese startup, is a general-purpose AI agent designed to perform autonomous tasks such as information retrieval, data processing, content creation, and web automation. It has garnered significant attention, with its Discord channel boasting over 186,000 members and invite codes being resold for high prices on platforms like Xianyu (Manus AI Invitation Code: Application Guide & Success Tips). The system is currently in closed beta, requiring an invite code for access, and is not open source, distinguishing it from projects like DeepSeek, which is an LLM rather than an agent.
Early reviews, such as those from MIT Technology Review (Everyone in AI is talking about Manus. We put it to the test.), describe Manus as promising but imperfect, with capabilities likened to a highly intelligent intern. However, its closed nature and limited access have fueled interest in its underlying technology, leading to replication efforts and security concerns.
The Individual: Jian Liao (jlia0)
Jian Liao, known by the GitHub handle jlia0, is identified as the CTO at Pointer and has been active in AI-related discussions. His GitHub profile (jlia0 (Jian Liao) · GitHub) shows a history of contributions, including a notable gist titled “Manus tools and prompts” (Manus tools and prompts · GitHub). In this gist, published on March 11, 2025, Liao states, “I got invite code of Manus, and ask Manus to output /opt/.manus as zip.” This action resulted in him obtaining the source code, though it was encrypted, as noted in subsequent comments where users discuss the encryption and its implications.
Media reports, such as an article on AIbase (Manus AI System Prompt Leakage: Official Response), confirm that a user named “jian” (likely Jian Liao) “cracked the Manus system” by requesting the directory contents, retrieving “some sensitive information and operational data.” This incident is described as a prompt leak, highlighting potential security flaws in Manus’s sandbox isolation, with co-founder Ji Yichao noting that the sandbox code exists mainly to receive commands and is only lightly obfuscated.
Details of the Request and Acquisition
Liao’s method involved leveraging Manus AI’s capabilities to output its own internal directory, a technique that exploited the AI’s ability to execute file system operations. The output was a zip file containing the source code, but it was encrypted, likely using tools like PyArmor, as discussed in the gist comments. One comment notes, “A straight forward memory dump -> strings didn’t reveal any manus or pyarmor internals,” indicating the encryption’s robustness (Manus tools and prompts · GitHub).
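The "memory dump -> strings" check mentioned in the gist is a standard first pass when probing obfuscated binaries: extract printable-ASCII runs from raw process memory and grep for telltale markers. A minimal Python equivalent is sketched below; the toy dump bytes are illustrative only, chosen to mirror the reported null result (runtime symbols visible, no literal `pyarmor` or `manus` marker).

```python
import re

def ascii_strings(blob: bytes, min_len: int = 4) -> list[str]:
    """Pull printable-ASCII runs out of a raw memory dump,
    mirroring the Unix `strings` utility."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, blob)]

def has_marker(blob: bytes, marker: bytes) -> bool:
    """Case-insensitive search for a tool or runtime marker."""
    return marker.lower() in blob.lower()

# Toy "dump": a pytransform-style symbol is visible, but no literal
# 'pyarmor' or 'manus' marker -- matching the gist's observation.
dump = b"\x00\x7fELF\x02\x00GLIBC_2.34\x00\xffpytransform_hook\x00"
```

In practice the dump would come from a tool like `gcore` or `/proc/<pid>/mem`; a clean scan like this one is weak evidence that the obfuscation keeps sensitive strings out of memory in plaintext, which is consistent with the gist commenters' conclusion.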
The encryption limited the code’s usability, with users like @PeterZhao119 questioning how Liao obtained detailed prompts, suggesting skepticism about the leak’s authenticity. However, Liao’s X post (X post) and subsequent discussions, including on Reddit (r/AI_Agents on Reddit: Created an open-source alternative to Manus AI!), reinforce that he did receive the code, albeit in a form requiring further analysis.
Implications and Community Response
The leak sparked significant interest, with open-source alternatives like OpenManus emerging, developed by contributors from MetaGPT (GitHub – mannaandpoem/OpenManus: No fortress, purely open ground. OpenManus is Coming.). OpenManus, launched within three hours, aims to replicate Manus’s functionality without an invite code, but it’s unclear if it directly used Liao’s leaked code. Discussions on GitHub and Reddit highlight efforts to decrypt or reverse-engineer the code, with projects like whit3rabbit/manus-open (GitHub – whit3rabbit/manus-open: Manus code from container) offering AI-generated guesses, noting the code’s potential research value.
Security concerns arose, with articles like “Manus AI’s Agentic Moment: A Case Study in Prompt Leak and Risk Mitigation” on Medium (Manus AI’s Agentic Moment: A Case Study in Prompt Leak and Risk Mitigation | by Xiwei Zhou | Mar, 2025 | Medium) discussing prompt injections and system prompt leakage as risks in generative AI. Manus’s co-founder acknowledged the sandbox’s isolation but noted the code’s light obfuscation, suggesting ongoing efforts to mitigate such vulnerabilities.
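One mitigation class discussed in such write-ups is an egress filter: scan agent output for sensitive internal paths or prompt-leak phrasing before it reaches the user. The sketch below is a hypothetical deny-list filter, not Manus's actual defense; both patterns are illustrative assumptions.

```python
import re

# Hypothetical deny-list applied to agent output before it leaves
# the sandbox. Patterns are illustrative, not Manus's real rules.
SENSITIVE_PATTERNS = [
    re.compile(r"/opt/\.\w+"),         # internal runtime directories
    re.compile(r"(?i)system prompt"),  # common prompt-leak phrasing
]

def redact_output(text: str) -> str:
    """Replace matches of any sensitive pattern with a placeholder."""
    for pattern in SENSITIVE_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```

Deny-lists of this kind are easy to bypass (encoding, paraphrase, chunked output), which is why the literature treats them as one layer alongside sandbox isolation and obfuscation rather than a complete fix.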
Comparative Analysis with Other Leaks
To contextualize, source code leaks are not unique to Manus. High-profile examples include Microsoft’s 37GB leak in 2022 (r/DataHoarder on Reddit: Hackers leak 37GB of Microsoft’s source code (Bing, Cortana and more)), but Manus’s case is distinct due to the method—asking the AI itself rather than a security breach. This highlights a novel vulnerability in AI agents, where user commands can inadvertently expose internal data.
Table: Key Details of the Incident
| Aspect | Details |
|---|---|
| Individual Involved | Jian Liao (jlia0), CTO at Pointer, GitHub user |
| Method of Acquisition | Asked Manus AI to output “/opt/.manus/” directory as zip, received encrypted code |
| Date of Incident | Around March 9-11, 2025, based on gist and media reports |
| Code Usability | Encrypted, likely using PyArmor, limiting immediate use |
| Community Response | Discussions on encryption, replication efforts (OpenManus, manus-open) |
| Security Implications | Highlighted prompt leak risks, sandbox isolation concerns |
Conclusion
Jian Liao, known as jlia0, is the individual who asked Manus AI for its source code and received it, though in an encrypted form. This incident, occurring around early March 2025, underscores vulnerabilities in AI agent security and has spurred community efforts to replicate and analyze the technology. The encrypted nature of the code and ongoing discussions suggest a complex landscape of accessibility and security in AI development.
Key Citations
- Manus tools and prompts GitHub gist with 968 forks
- jlia0 Jian Liao GitHub profile with 96 repositories
- Manus AI System Prompt Leakage Official Response article
- Everyone in AI is talking about Manus We put it to the test MIT Technology Review
- Manus AI Invitation Code Application Guide and Success Tips iweaver blog
- r/AI_Agents on Reddit Created an open-source alternative to Manus AI
- GitHub OpenManus No fortress purely open ground OpenManus is Coming
- GitHub manus-open Manus code from container by whit3rabbit
- Manus AI’s Agentic Moment A Case Study in Prompt Leak and Risk Mitigation Medium
- r/DataHoarder on Reddit Hackers leak 37GB of Microsoft’s source code Bing Cortana and more