# AI Prompt - Single-Shot LLM Classification/Prompting
> **Please note:** this feature may have licensing implications. Please discuss with your account manager if you are unsure whether you have full access.

## Overview

Release 1.9 introduces a structured API and model to perform simple single-shot classification/prompting using state-of-the-art LLM models. Examples where these types of models can provide significant benefits:

- **Zero-knowledge classification.** Typically, classification engines (such as Sofi) require data and examples to provide any form of classification. LLMs are excellent at providing zero-knowledge classification based on their understanding of their base training data (typically billions of documents from many languages).
- **Summarisation.** LLMs are excellent at summarising large blocks of text or structured data. Single-shot classification could be used to:
  - provide a summary of all the interactions of a call;
  - summarise a large request into a shorter by-line or short description;
  - provide a concise summary of a knowledge article.
- **Prioritisation/sentiment analysis.** LLMs are good at understanding urgency or sentiment in correspondence.
- **Knowledge generation.** LLMs can take the information provided in problem records and produce a starting point for a knowledge article.

## AI Prompts

AI Prompts allow you to design and test single-shot classification prompts.

### Fields

**Name**: Display value for the AI Prompt.

**Key**: Lookup value for the AI Prompt. Used from the AI Prompt API to initiate the prompt from an action.

**LLM Model**: Link to an LLM Model configuration, which contains the information on the provider, model, and authentication tokens to be used. It can be changed to quickly test the results from different providers and models.

**Description**: Description/documentation for the prompt.

**User Prompt**: The main user/action prompt presented to the LLM. This is where the actions you are asking the LLM to take are laid out and explained. This field is a template, allowing you to include information from associated records; you can use the Pre Processing Script to retrieve the information required for the template.

**System Prompt**: The initial input or instruction given to the model to generate a response. This prompt sets the direction and parameters for the model's output. Typically, the system prompt provides the model with instructions covering:

- **Direction**: the prompt guides the LLM in generating content that is relevant to the user's request; it directs the focus of the model's response.
- **Context**: it provides the necessary context that influences the tone, style, and content of the model's output. For instance, a prompt asking for a technical explanation will yield a different style of response than a prompt asking for a story or a joke.
- **Limits**: the prompt can also set boundaries on the scope of the response, limiting what information should be included or highlighting specific details that need emphasis.

**Pre Processing Script**: Provides the ability to retrieve information from Servicely and include it in the request to the LLM. This could include information from related records, classifications, groups, etc. Script context:

- `context` (object): key/value pairs of the context passed in from the initiating script; determined by the developer.
- `options` (object): key/value pairs representing options that can be set on the model.
- `promptRec` (TableRecord): the systemAiPrompt TableRecord.
- `evaluationPhase` (string): the evaluation phase (`evaluate` or `test`); useful for performing different logic depending on whether we are executing the model or just testing.

**Post Processing Script**: Provides the ability to process the response from the LLM before returning it to the AI Prompt API call. Script context:

- `context` (object): key/value pairs of the context passed in from the initiating script; determined by the developer.
- `promptRec` (TableRecord): the systemAiPrompt TableRecord.
- `evaluationPhase` (string): the evaluation phase (`evaluate` or `test`); useful for performing different logic depending on whether we are executing the model or just testing.
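As an illustrative sketch only, the pre and post processing scripts might look like the following. In Servicely these would be the bodies of the script fields, with `context`, `options`, `promptRec`, and `evaluationPhase` supplied by the platform; they are wrapped in plain functions here so the logic can be read in isolation. The field names, the `temperature` option, the JSON response shape, and the `response` parameter (the documentation above does not specify how the raw LLM output reaches the post processing script) are all assumptions, not part of the documented API.

```javascript
// Hypothetical Pre Processing Script body: enrich the template context
// before the request is sent to the LLM.
function preProcess(context, options, promptRec, evaluationPhase) {
  if (evaluationPhase === 'test') {
    // Use fixed sample data while iterating on the prompt text.
    context.shortDescription = 'Laptop will not boot after update';
  }
  // Constrain the model output (hypothetical option name).
  options.temperature = 0;
  return context;
}

// Hypothetical Post Processing Script body: turn the raw LLM text into a
// structured value before it is returned to the AI Prompt API caller.
function postProcess(response, context, promptRec, evaluationPhase) {
  try {
    // Assumes the user prompt asked the model to answer with JSON, e.g.
    // {"category": "hardware"}
    return JSON.parse(response);
  } catch (e) {
    // Fall back to the raw text if the model did not return valid JSON.
    return { category: 'unknown', raw: response };
  }
}
```

Branching on `evaluationPhase` as shown is what the script context is designed for: the same script can load real record data when executing the model and canned data when merely testing the prompt text.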
**Test Setup Script**: Provides a mechanism to specify context for the ‘Test’ buttons on the AI Prompt form. The ‘Test Prompts Only’ and ‘Test Model’ buttons allow you to rapidly iterate over the authoring of the prompt, and the Test Setup Script allows you to set up the test environment for them.

## Testing

### ‘Test Prompts Only’ button

The ‘Test Prompts Only’ button allows you to test the system and user prompts before they are sent to the LLM. This includes loading any content via the Pre Processing Script. The text for the prompts is displayed and can be copied to the clipboard using the ‘Copy’ button.

### ‘Test Model’ button

The ‘Test Model’ button formats the request, sends it to the LLM, and then displays the results along with information on the model, latency, token usage, cost, and result.

## System LLM Models

The systemLlmModels table contains information on the available LLM models. The initial 1.9 release provides support for OpenAI and Anthropic, and provides the current set of available models from those providers. More providers will be implemented in upcoming releases.

## System LLM Usage

The System LLM Usage table tracks the model, provider, token, and cost usage of the LLM providers.

## Initiating from the API

**`ai.processAiPrompt(aiPromptKey: string, context: object): any`**

Executes the systemAiPrompt and returns the value returned by the Post Processing Script.

- `aiPromptKey`: the unique ‘Key’ field of the AI Prompt.
- `context`: a key/value object containing any context that is to be passed to the Pre Processing Script.
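A call to `ai.processAiPrompt` from an initiating script might look like the sketch below. Only the function signature comes from the documentation above; the prompt key `'classify_incident'`, the context fields, and the returned shape are hypothetical. A local stub stands in for the platform-provided `ai` object purely so the call shape can be shown (and exercised) in isolation.

```javascript
// Stub of the platform-provided `ai` object, for illustration only.
// In Servicely, `ai` is supplied by the runtime; do not define it yourself.
const ai = {
  processAiPrompt: function (aiPromptKey, context) {
    // The platform looks up the AI Prompt by its Key, runs the Pre
    // Processing Script, calls the configured LLM Model, and returns
    // whatever the Post Processing Script returns.
    return { category: 'hardware', urgency: 'high' }; // canned result
  }
};

// Hypothetical initiating script: classify an incident using an AI Prompt
// whose Key field is 'classify_incident' (an assumed key, not a shipped one).
// The context object is passed through to the Pre Processing Script.
const result = ai.processAiPrompt('classify_incident', {
  shortDescription: 'Laptop will not boot after update',
  description: 'User reports a black screen since the last patch cycle.'
});

// `result` is whatever the Post Processing Script returned.
console.log(result.category, result.urgency);
```

Because the return value is whatever the Post Processing Script produces, returning a parsed object (rather than raw LLM text) from that script keeps calling code like this simple.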