Q: What is AiPy?
A: AiPy is the LLM (large model) + Python program writing + Python program running + everything those programs can control.
Q: What is the difference between AiPy and the current Ai?
A: Current large models can only do Q&A: they answer questions, but cannot actually operate your computer to complete specific tasks for you. AiPy is a task-oriented AI system: you only need to tell it what you want done, and AiPy will get it done. Existing large models are question-and-answer oriented; AiPy is task oriented.
Q: Is AiPy a new large model, or just a wrapper around an existing large model?
A: AiPy is not a large model itself, but an application product built on top of large models: through API calls to the large model, it delivers genuine general-purpose task understanding, planning, and execution, and ultimately obtains the task results.
Q: What is the AiPy paradigm and what is Python-Use? Is it an Agent-like product? How is it different from Manus, MCP, etc.?
A: AiPy is a product we developed based on a new paradigm, Python-Use, to enable more universal and rapid utilization of large models for various tasks.
The traditional classic paradigm for large model AI Agents involves developing a large number of tool agents and then relying on their collaboration to accomplish various tasks. This approach depends on the development, deployment, and installation of more and more agents. However, from a certain perspective, developing and deploying more agents actually limits the full potential of large models. In contrast, AiPy (Python-Use), a new paradigm, takes a different approach: it's a way of "enabling AI to use Python and Python to use AI." This means that the large model understands and breaks down user tasks, then achieves automatic coding and code execution through API Calling and Packages Calling. It can also continuously improve and iterate through a feedback mechanism, ultimately enabling AI to interact with the environment and complete tasks.
Therefore, we propose the concept: "The real general AI Agent is NO Agents!" AiPy (Python-Use) implements the new paradigm of "No Agents, Code is Agent." Python uses data, Python uses computers, Python uses the network, Python uses the Internet of Things, Python uses everything, ultimately achieving true AI Think Do!
The specific code related to Python-Use has already been open-sourced: https://github.com/knownsec/aipyapp
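As a rough illustration of the feedback loop described above (this is not the aipyapp implementation; the prompts, helper names, and model name are hypothetical, and it assumes an OpenAI-compatible client and a model that replies with bare Python code), the core idea fits in a few lines:

```python
# Minimal sketch of the "No Agents, Code is Agent" loop:
# the model writes code, the code runs locally, and any error is fed back
# so the model can debug and revise its own program.
import io, traceback, contextlib
from openai import OpenAI

client = OpenAI()  # API key/endpoint taken from the environment

SYSTEM = "Break the user's task down and reply with runnable Python code only."

def run(code: str) -> str:
    """Execute generated code locally and capture its output or error."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})
        return "OUTPUT:\n" + buf.getvalue()
    except Exception:
        return "ERROR:\n" + traceback.format_exc()

def solve(task: str, max_rounds: int = 3) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": task}]
    for _ in range(max_rounds):
        code = client.chat.completions.create(
            model="deepseek-chat",  # placeholder: any capable coding model works
            messages=messages,
        ).choices[0].message.content
        result = run(code)                  # act on the environment
        if not result.startswith("ERROR"):  # task finished
            return result
        # feed the failure back so the next round can fix the program
        messages += [{"role": "assistant", "content": code},
                     {"role": "user", "content": result}]
    return result
```

Everything the generated code can touch (files, the network, installed packages) becomes a capability of the model, which is the sense in which "Code is Agent."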
Based on this concept, we believe this is the biggest difference compared to Manus, MCP, etc.! For users:
The biggest difference between AiPy and Manus is that AiPy itself is open-source and free. Users only need to bear the cost of tokens for calling APIs of large models (of course, you can also use free large models). Because it doesn't require the invocation of numerous agents, AiPy also consumes relatively fewer tokens for the same task. Another major advantage is that AiPy supports local deployment, eliminating the need for users to upload their sensitive data and documents to the cloud. This is because AiPy is only responsible for the corresponding code generation for the task, and all data processing is done locally, offering a secure and reliable advantage for handling very large files and sensitive data.
The biggest advantage of AiPy compared to MCP Server is that users don't need to rely on various custom-developed MCP Servers for different services, nor do they need to deploy, install, or use them. They also don't need to worry about the security risks posed by unreliable MCP Server providers. AiPy can achieve the invocation of various APIs and accomplish diverse functions through real-time coding. You can see the examples shown above or experience the power of AiPy for yourselves.
In summary, AiPy offers multiple deployment options, is no longer limited by the various restrictions of cloud-based hosts, and doesn't require the development, downloading, installation, or complex configuration of various tools. All you need to do is converse with the large model.
Q: Which large models does AiPy support? Does AiPy support calling local large models? Which models are recommended?
A: In theory AiPy supports any general-purpose large model: you just need to set that model's API and model information in the configuration file. We also support the Ollama and LMStudio APIs for local large models.
Because of the Python-Use paradigm, a lot of the capability depends on the large model itself, so the stronger the model's coding ability and overall capabilities, the better it performs at completing tasks. Of course, the token cost of the model's API calls also matters; from a cost-effectiveness standpoint we recommend DeepSeek, which in our testing can complete most task-execution work for very little money.
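For context (this is not AiPy's actual configuration format, whose keys may differ): whatever the provider, the configuration boils down to an API endpoint, a key, and a model name. The sketch below illustrates that with the openai client, pointing first at a hosted DeepSeek endpoint and then at a local Ollama endpoint; the model names are placeholders.

```python
from openai import OpenAI

# Hosted model: DeepSeek exposes an OpenAI-compatible endpoint.
cloud = OpenAI(base_url="https://api.deepseek.com", api_key="sk-...")

# Local model: Ollama serves an OpenAI-compatible endpoint by default;
# LMStudio does the same on http://localhost:1234/v1.
local = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

reply = local.chat.completions.create(
    model="qwen2.5-coder",  # placeholder: any model you have pulled locally
    messages=[{"role": "user", "content": "Write Python that prints the first 10 primes."}],
)
print(reply.choices[0].message.content)
```

With a local model, even the prompts never leave your machine, which matches the local-deployment advantage described above.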
Q: What can AiPy do at the moment?
A: Theoretically speaking, AiPy can do all the tasks that can be automatically scheduled through Python. However, we are still in the early stages of development, and currently recommend trying some lightweight tasks.
Q: Can AiPy call other products' and businesses' APIs? How is this implemented? Does it support local/private APIs?
A: Yes. AiPy supports calling a variety of Internet business APIs, including search, maps, trip planning, social media, weather, and other API services. These can be built in, or the generated code can use an API key you supply. As for implementation, we provide a capability called "API Calling": the large model chooses which API to call based on its understanding of the task, and you can also specify the API to use in the task prompt.
With a local deployment, AiPy also supports calling local/private APIs: you just need to write the corresponding API description and address in the configuration file.
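As a concrete, purely hypothetical illustration of what API Calling produces: for a task like "check the weather", the large model generates ordinary HTTP-request code built from the API description and address in your configuration, with you supplying the key. The endpoint, parameters, and config values below are placeholders, not a real service contract.

```python
# Hypothetical example of the kind of code AiPy generates for an API call.
import requests

API_KEY = "<your-api-key>"                       # supplied by the user
BASE_URL = "https://api.example.com/v1/weather"  # from the API description/address in the config

resp = requests.get(
    BASE_URL,
    params={"city": "Beijing", "key": API_KEY},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```

In practice you do not write this yourself; AiPy writes, runs, and if necessary debugs it as part of completing the task.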
Q: Why did AiPy choose Python over other programming languages?
A: We briefly tried AiJava and AILua: the former was too big and bulky, and the latter was too weak in capability and ecosystem, so we finally chose AiPython.
Q: Is AiPy an IDE? How is it different from Cursor, Windsurf, etc.?
A: AiPy is not an IDE. The biggest difference from AI IDE is that AiPy does not directly deliver code, but delivers task results. Of course, if your task is to deliver code, AiPy can also help you complete this task!
Q: Does AiPy have intelligence?
A: It depends on how intelligence is defined. Suppose intelligence means: making plans and generating actions based on one's state, environment, and goals, giving continuous feedback during the actions, and revising the plans to improve the actions and move closer to the goals. By this definition, an LLM alone does not have intelligence, while AiPy, given a task goal, makes plans (writes a program to complete the task), generates actions (runs the program), and continuously feeds back to improve itself (debugs and revises the program on its own) until it achieves the goal. In this sense, AiPy does have real intelligence.
Q: What is the essence of AiPy?
A: Humans use AI, AI uses Python, Python uses data, Python uses the computer, Python uses the network, Python uses IoT, Python uses everything.
Q: Is the relationship between AiPy and MCP, Agents, and Workflows one of replacement?
A: Depending on the task, AiPy will write a workflow, agent, or interface-adaptation program for the current task when needed; and if an existing MCP server or agent can be used, it will call it directly. We think they will all coexist in the future.