# AI Prompt - Single-Shot LLM Classification/Prompting
> **Please note:** this capability may have licensing implications. Please discuss with your account manager if you are unsure whether you have full access.

## Overview

Release 1.9 introduces a structured API and model to perform simple single-shot classification/prompting using state-of-the-art LLM models. Examples where these types of models can provide significant benefits:

- **Zero-knowledge classification** - Typically, classification engines (such as Sofi) require data and examples to provide any form of classification. LLMs are excellent at providing zero-knowledge classification based on their understanding of their base training data (typically billions of documents from many languages).
- **Summarisation** - LLMs are excellent at summarising large blocks of text or structured data. Single-shot classification could be used to:
  - provide a summary of all the interactions of a call
  - summarise a large request into a shorter by-line or short description
  - provide a concise summary of a knowledge article
- **Prioritisation/sentiment analysis** - LLMs are good at understanding urgency or sentiment in correspondence.
- **Knowledge generation** - LLMs can take the information provided in problem records and provide a starting point for a knowledge article.

## AI Prompts

AI Prompts allow you to design and test single-shot classification prompts.

### Fields

| Field | Description |
| --- | --- |
| Context | Provides the necessary context that influences the tone, style, and content of the model's output. For instance, a prompt asking for a technical explanation will yield a different style of response than a prompt asking for a story or a joke. |
| Limits | The prompt can also set boundaries
on the scope of the response, limiting what information should be included or highlighting specific details that need emphasis. |
| Pre Processing Script | Provides the ability to retrieve information from Servicely and include it in the request to the LLM. This could include information from related records, classifications, groups, etc.<br><br>Script context:<br>`context` (Object) - key/value pairs of the context passed in from the initiating script, determined by the developer.<br>`options` (Object) - key/value pairs representing options that can be set on the models.<br>`promptRec` (TableRecord) - the SystemAIPrompt TableRecord.<br>`evaluationPhase` (String) - the evaluation phase `[evaluate\|test]`; useful for performing different logic depending on whether we are executing the model or just testing. |
| Post Processing Script | Provides the ability to process the response from the LLM before returning it to the AI Prompt API call.<br><br>Script context:<br>`context` (Object) - key/value pairs of the context passed in from the initiating script, determined by the developer.<br>`promptRec` (TableRecord) - the SystemAIPrompt TableRecord.<br>`evaluationPhase` (String) - the evaluation phase `[evaluate\|test]`; useful for performing different logic depending on whether we are executing the model or just testing. |
| Test Setup Script | Provides a mechanism to specify context for the 'Test' buttons on the AI Prompt form. The 'Test Prompts Only' and 'Test Model' buttons allow you to rapidly iterate over the authoring of the prompt, and the Test Setup Script allows you to set up the test environment for them. |

## Testing

### 'Test Prompts Only' button

The 'Test Prompts Only' button allows you to test the system and user prompts before sending them to the LLM, including loading any content produced by the Pre Processing Script. The text for the prompts will be displayed and can be copied to the clipboard using the 'Copy' button.

### 'Test Model' button

The 'Test Model' button formats the request, sends it to the LLM, and then displays the results along with information on the
model, latency, token usage, cost, and result.

## System LLM Models

The SystemLLMModels table contains information on the available LLM models. The initial 1.9 release provides support for OpenAI and Anthropic, and includes the current set of available models from those providers. More providers will be implemented in upcoming releases.

## System LLM Usage

The System LLM Usage table tracks the model, provider, token, and cost usage of the LLM providers when initiated from the API.
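As an illustration of how the Context and Limits described above shape a prompt, here is a minimal sketch of composing a system prompt from a context statement and a list of limits. The function name and wording are illustrative only, not part of the Servicely API:

```javascript
// Sketch: a system prompt combines context (role, tone, subject matter)
// with limits (boundaries on what the response may contain).
function buildSystemPrompt(contextText, limits) {
  return [
    contextText,
    "Constraints:",
    ...limits.map((l) => "- " + l),
  ].join("\n");
}

const prompt = buildSystemPrompt(
  "You are a service-desk assistant classifying incoming tickets.",
  ["Answer with a single category label.", "Do not include explanations."]
);
console.log(prompt);
```

Keeping context and limits as separate inputs makes it easy to iterate on each independently, which is the workflow the 'Test Prompts Only' button supports.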
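The Pre and Post Processing Scripts run with `context`, `options`, `promptRec`, and `evaluationPhase` in scope. Here is a sketch of what such a pair might look like, written as plain functions so the shapes can be exercised outside Servicely; the record field (`name`), the option (`temperature`), and the category labels are assumptions for illustration, not the real Servicely API:

```javascript
// Hypothetical pre-processing: merge caller-supplied context with details
// from the prompt record, and tune options when running in the 'test' phase.
function preProcess(context, options, promptRec, evaluationPhase) {
  if (evaluationPhase === "test") {
    // Assumed option name: make test runs repeatable.
    options.temperature = 0;
  }
  // Expose record details for the prompt template to reference.
  return Object.assign({}, context, {
    promptName: promptRec.name, // assumed field on SystemAIPrompt
  });
}

// Hypothetical post-processing: normalise the raw LLM reply into a
// structured result before it is returned to the AI Prompt API call.
function postProcess(rawResponse, context) {
  const category = String(rawResponse).trim().toLowerCase();
  const allowed = ["incident", "request", "question"];
  return { category: allowed.includes(category) ? category : "unknown" };
}

// Local demonstration with mock data standing in for the script context.
const options = {};
const ctx = preProcess({ shortDescription: "VPN is down" }, options,
  { name: "Classify ticket" }, "test");
console.log(ctx.promptName);                            // "Classify ticket"
console.log(options.temperature);                       // 0
console.log(postProcess(" Incident \n", ctx).category); // "incident"
```

Branching on `evaluationPhase` as shown is the pattern the documentation suggests for performing different logic when executing the model versus just testing.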
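The usage table records token and cost figures per call. As a rough illustration of how a cost figure is typically derived from token counts, here is a small sketch; the model name and per-million-token prices are placeholders, not Servicely's or any provider's actual rates:

```javascript
// Placeholder pricing table, for illustration only.
const PRICES = {
  "example-model": { inputPerMillion: 3.0, outputPerMillion: 15.0 },
};

// Cost = prompt tokens at the input rate plus completion tokens at the
// output rate, each scaled from a per-million-token price.
function usageCost(model, promptTokens, completionTokens) {
  const p = PRICES[model];
  return (promptTokens / 1e6) * p.inputPerMillion +
         (completionTokens / 1e6) * p.outputPerMillion;
}

console.log(usageCost("example-model", 1000, 200));
```

Providers usually price input and output tokens differently, which is why the usage table tracks them alongside the model and provider.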