LLM-Zero-Prompt is an amazing new C# and Python library for interacting with LLMs
This coming June 12th, as part of the Open Agentic beta program for the JetPackForAI family of open source products, Open Agentic is thrilled to introduce LLM-Zero-Prompt, the first open-source technology from the Open Agentic team to be released, with more to come. These innovative libraries revolutionize the way you write C# or Python code to interact with large language models by using a unique zero-prompt approach. Yes, you read that correctly: LLM-Zero-Prompt takes a zero-prompt approach to writing code that interacts with large language models. The Open Agentic team identified several struggles developers face when writing code to interact with large language models:
- You spend countless hours on prompt engineering, only to find that your prompts and formats need constant revision: over time the model drifts, and the prompts that once worked begin to fail.
- A new large language model arrives that you want to use, and you have to go back, refactor your code, and re-test your prompts to make everything work with the new model.
- You switch to a different LLM within the same family, or to a different vendor, and your original hand-crafted prompts stop working, requiring refactoring and re-testing.
- Your prompts do not reliably return the expected formats, making your application brittle.
The LLM-Zero-Prompt libraries take a zero-prompt approach, empowering you to interact with large language models more efficiently and without becoming a prompt engineer: you focus on writing code, not on wrangling the LLM. Here is what you can expect your code to look like:
```csharp
// The announcement's own example, with the syntax corrected.
var isPositive = SentimentClassification("return true if this is positive",
                                         "it was a nice day at the beach").ExAsBool();
```
As you can see, it's easy; there is no need to put energy into prompting. The LLM-Zero-Prompt libraries have methods that can be invoked to perform functions such as Question, SentimentClassification, TextCompletion, and GrammarCorrection; the full list is too long to print here. To use the library, you select one of the many supported large language model connectors. For example, you select a connector for GPT-4o, choose the function you want to perform, say SentimentClassification, and define the response format you want: a string list, an array, JSON, and so on. Then you invoke the method, and the LLM-Zero-Prompt library performs the magic, returns the response, and even handles retries if required.
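To make that workflow concrete, here is a minimal sketch, assuming the library exposes a connector factory. The `Connectors.OpenAI` name and the `LlmZeroPrompt` namespace are our guesses for illustration, not confirmed API; `SentimentClassification` and `ExAsBool` come from the announcement's own example.

```csharp
// Minimal hypothetical sketch of the connector workflow. Connectors.OpenAI
// and the LlmZeroPrompt namespace are assumed names, not confirmed API;
// SentimentClassification and ExAsBool appear in the announcement itself.
using LlmZeroPrompt; // assumed namespace

// 1. Select a connector for the model you want to target (GPT-4o here);
//    it loads the matching prompt optimization pack behind the scenes.
var connector = Connectors.OpenAI("gpt-4o");

// 2. Invoke a supported function and ask for the response type you need.
bool isPositive = connector
    .SentimentClassification("it was a nice day at the beach")
    .ExAsBool();

// 3. The library builds the prompt, parses the reply into a bool, and
//    retries automatically if the model returns a malformed response.
Console.WriteLine(isPositive); // True
```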
Want to know how the magic works? Keep reading.
What we have done at Open Agentic is separate your requirement, for example SentimentClassification, from the prompt that the large language model needs in order to perform the task and return the format you require. By separating functions like SentimentClassification, and your input, from the prompt structure, format, and training of the large language model, we can create a prompt optimization pack for each type of large language model from each vendor.
The prompt optimization packs are generated, and the LLM is trained if necessary, so that the model performs the task and provides the response in the format you require. The packs are updated as needed when models drift or new models arrive, and they are loaded into the Connector you create to connect to the large language model. All of this is transparent to you, the developer and library user.
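To picture the idea, here is a conceptual sketch of what a prompt optimization pack and its connector could look like. Open Agentic has not published the actual pack format, so every name and field below is an illustrative assumption:

```csharp
using System.Collections.Generic;

// Conceptual sketch only: the real pack format is unpublished, so every
// name and field here is an assumption made for illustration.
public record PromptOptimizationPack(
    string ModelId,   // e.g. "gpt-4o"
    string Version,   // bumped when the model drifts or is retrained
    // One tuned prompt template per (function, response format) pair,
    // e.g. ("SentimentClassification", "bool") -> template text.
    IReadOnlyDictionary<(string Function, string Format), string> Templates);

public class Connector
{
    private readonly PromptOptimizationPack _pack;

    public Connector(PromptOptimizationPack pack) => _pack = pack;

    // The connector splices your input into the tuned template, so your
    // application code never contains a prompt.
    public string BuildPrompt(string function, string format, string input)
        => _pack.Templates[(function, format)].Replace("{input}", input);
}
```

Because the templates live in the pack rather than in your code, swapping models only means swapping packs.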
As new models arrive, or as existing models drift, the prompt optimization packs are updated, and the Connector automatically downloads the new packs. You can focus on writing code instead of prompting the large language model. This is a big win for your product: your code never changes; a new prompt optimization pack simply gets downloaded.
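Here is one way that automatic refresh could look in code, building on the sketch above. The `IPackRegistry` interface and its methods are hypothetical names we invented for illustration, not the announced API:

```csharp
using System.Threading.Tasks;

// Hypothetical sketch of automatic pack updates, reusing the
// PromptOptimizationPack type from the previous sketch. IPackRegistry
// and its members are invented names, not the announced API.
public interface IPackRegistry
{
    Task<string> GetLatestVersionAsync(string modelId);
    Task<PromptOptimizationPack> DownloadPackAsync(string modelId);
}

public class AutoUpdatingConnector
{
    private readonly IPackRegistry _registry;
    private PromptOptimizationPack _pack;

    public AutoUpdatingConnector(IPackRegistry registry, PromptOptimizationPack initial)
        => (_registry, _pack) = (registry, initial);

    // Check for a newer pack before use; application code stays the same,
    // only the downloaded pack changes.
    public async Task RefreshPackAsync()
    {
        var latest = await _registry.GetLatestVersionAsync(_pack.ModelId);
        if (latest != _pack.Version)
            _pack = await _registry.DownloadPackAsync(_pack.ModelId);
    }
}
```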
In the future, we will be adding an optimization method that fine-tunes the prompt optimization packs against the actual inputs you give to the LLM-Zero-Prompt libraries, enabling even finer-grained optimization. What separates the LLM-Zero-Prompt approach from others is the ability to create prompt packs in advance and update them independently of your inputs, together with the logic that obtains the response format and type you need without wrangling prompts to hit a moving target.
On release, LLM-Zero-Prompt will also provide detailed documentation and quick-start training videos focused on getting even non-developers up and running with these libraries, and it is free and open source. Open Agentic hopes LLM-Zero-Prompt will be something a community can grow around. On the 12th of June, the Open Agentic team will publish LLM-Zero-Prompt to its GitHub account, and we will let you know the location.