Shape the future of IBM watsonx Orchestrate


This is the IBM Automation portal for IBM watsonx Orchestrate. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Learn more about IBM watsonx Orchestrate - Use this site to find out additional information and details about the product.

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

Status: Under review
Created by: Guest
Created on: Feb 26, 2024

Caching watsonx LLM answers for better performance in RAG use cases

When customers use Assistant for conversational search or RAG patterns, with watsonx.ai or Watson Assistant's built-in LLM capabilities answering the questions, certain questions come up frequently and are asked by many users.


For example, for a question about leave entitlement in an HR bot, or about credit card terms, conditions, and features in a banking bot, the answers received from the LLM could be cached by the product, so that a costly LLM call is not needed every time and performance and response time are improved. All the typical cache-related configuration could be done by the user, with default values provided by the product.
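A minimal sketch of how such an answer cache might sit in front of the generation call, assuming a hypothetical call_llm() wrapper around the watsonx.ai / Assistant request; the names (AnswerCache, max_entries, ttl_seconds) are illustrative only, not an IBM API. It keys on the normalized question text, expires entries after a time-to-live, and evicts the least-recently-used entry when full:

import time
from collections import OrderedDict

class AnswerCache:
    def __init__(self, max_entries=1000, ttl_seconds=3600):
        # Illustrative product-provided defaults; both would be user-configurable.
        self.max_entries = max_entries
        self.ttl_seconds = ttl_seconds
        self._store = OrderedDict()  # normalized question -> (answer, timestamp)

    def _normalize(self, question):
        # Exact-match key after trivial normalization.
        return " ".join(question.lower().split())

    def get(self, question):
        key = self._normalize(question)
        entry = self._store.get(key)
        if entry is None:
            return None
        answer, ts = entry
        if time.time() - ts > self.ttl_seconds:
            del self._store[key]  # expired entry
            return None
        self._store.move_to_end(key)  # mark as recently used
        return answer

    def put(self, question, answer):
        key = self._normalize(question)
        self._store[key] = (answer, time.time())
        self._store.move_to_end(key)
        if len(self._store) > self.max_entries:
            self._store.popitem(last=False)  # evict least-recently used

def answer_question(question, cache, call_llm):
    # Serve frequently asked questions from the cache; otherwise pay
    # for one LLM round trip and cache the result for next time.
    cached = cache.get(question)
    if cached is not None:
        return cached
    answer = call_llm(question)  # costly watsonx.ai / LLM call (hypothetical wrapper)
    cache.put(question, answer)
    return answer

A semantic cache keyed on question embeddings, rather than exact text, could additionally catch paraphrased variants of the same frequently asked question.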


We see this as a need in many of the use cases we are working on.

Idea priority: Medium