Start by searching and reviewing ideas others have posted, and add a comment (private if needed), vote, or subscribe to updates on them if they matter to you.
If you can't find what you are looking for, create a new idea:
stick to one feature enhancement per idea
add as much detail as possible, including use-case, examples & screenshots (put anything confidential in Hidden details field or a private comment)
Explain the business impact and the timeline of the affected project
[For IBMers] Add customer/project name, details & timeline in Hidden details field or a private comment (only visible to you and the IBM product team).
This all helps to scope and prioritize your idea among many other good ones. Thank you for your feedback!
Specific links you will want to bookmark for future use
Learn more about IBM watsonx Orchestrate - Use this site to find out additional information and details about the product.
Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.
IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.
ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.
Although I have not tested the flow option, it may be possible by opting for an OpenAPI spec. Since we only get endpoints after ML model deployment, this idea has the potential to reduce the manual effort of writing an OpenAPI spec by hand and using it as a tool. Please share your views on this.
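Because a deployment exposes only a scoring endpoint, the OpenAPI spec that Orchestrate needs for a tool could in principle be generated from the endpoint details instead of written by hand. A minimal sketch of that idea follows; the base URL, deployment ID, path, and `input_data` payload shape are assumptions modeled on the common watsonx.ai deployments pattern, so verify them against your own deployment before use:

```python
import json


def openapi_for_deployment(base_url: str, deployment_id: str) -> dict:
    """Generate a minimal OpenAPI 3.0 spec for a deployed model's
    scoring endpoint.

    The path and request-body schema below follow the input_data shape
    commonly used by watsonx.ai online deployments; they are assumptions
    here, not a documented contract -- adapt them to your service.
    """
    path = f"/ml/v4/deployments/{deployment_id}/predictions"
    return {
        "openapi": "3.0.0",
        "info": {
            "title": f"Inference for deployment {deployment_id}",
            "version": "1.0.0",
        },
        "servers": [{"url": base_url}],
        "paths": {
            path: {
                "post": {
                    "operationId": "score",
                    "summary": "Run inference on the deployed model",
                    "requestBody": {
                        "required": True,
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "object",
                                    "properties": {
                                        "input_data": {
                                            "type": "array",
                                            "items": {
                                                "type": "object",
                                                "properties": {
                                                    "fields": {
                                                        "type": "array",
                                                        "items": {"type": "string"},
                                                    },
                                                    "values": {
                                                        "type": "array",
                                                        "items": {"type": "array"},
                                                    },
                                                },
                                            },
                                        }
                                    },
                                }
                            }
                        },
                    },
                    "responses": {"200": {"description": "Model predictions"}},
                }
            }
        },
    }


# Hypothetical usage: dump a spec you could then import as a tool.
spec = openapi_for_deployment("https://example.com", "your-deployment-id")
spec_json = json.dumps(spec, indent=2)
```

A "Use this model as a tool" button (as suggested further down this thread) would essentially be this generation step done for you at deployment time.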
Today, are you able to achieve this within our flows experience by adding an OpenAPI spec as a tool, or by defining a Python script as a tool for the same?
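For the Python-script route mentioned above, the tool body mostly reduces to building a scoring payload and POSTing it to the deployment endpoint. A hedged sketch, assuming a watsonx.ai-style `input_data` payload and bearer-token auth (both assumptions; check your endpoint's actual contract and how your tool obtains credentials):

```python
import json
import urllib.request


def build_payload(fields, values):
    """Build a scoring payload in the input_data shape commonly used by
    watsonx.ai online deployments (an assumption here, not a guarantee)."""
    return {"input_data": [{"fields": list(fields), "values": [list(values)]}]}


def score(endpoint_url: str, token: str, fields, values) -> dict:
    """POST the payload to the deployment's scoring endpoint and return
    the parsed JSON response. endpoint_url and token are placeholders;
    token acquisition (e.g. from an API key) is not shown."""
    req = urllib.request.Request(
        endpoint_url,
        data=json.dumps(build_payload(fields, values)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Registering `score` as a tool would give the agent the same capability as the OpenAPI route, at the cost of maintaining the script per model.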
We have a project (client0) where we are trying to implement a concept called Talk to Model, where models are accessible from skills (gen1) and AI agents in IBM watsonx Orchestrate.
- Created agents that collect data for model inference.
- Routing happens automatically whenever the user needs a specific trained conventional model (trained on a specific dataset) to get an inference.
- Used an OpenAPI spec to integrate the models as tools.
- In Orchestrate gen1 we had options to add external services and watsonx.ai decision models to create a skill.
- Similar features would definitely help, and a "Use this model as a tool" button in watsonx.ai after model deployment would also be awesome.
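The automatic-routing step in the list above can be illustrated with a toy keyword router that picks a deployment based on the user's ask. In Orchestrate the agent would make this choice from tool descriptions via the LLM rather than keywords; all names below are hypothetical:

```python
# Minimal keyword-based router: maps a user ask to the deployment that
# should handle it. This only illustrates the routing concept -- a real
# agent routes from tool descriptions, not hard-coded keywords.
ROUTES = {
    "churn": "deployment-churn-model",      # hypothetical deployment IDs
    "fraud": "deployment-fraud-model",
    "forecast": "deployment-demand-model",
}


def route(ask: str):
    """Return the deployment ID whose keyword appears in the ask,
    or None if no trained model matches."""
    ask_lower = ask.lower()
    for keyword, deployment in ROUTES.items():
        if keyword in ask_lower:
            return deployment
    return None
```

Whether routing should be fully automatic like this, or explicitly controlled per tool, is exactly the question raised in the reply below this comment.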
To add to the request: is the problem you're trying to solve here optimizing a response by essentially routing to the ideal model based on the type of ask? When would we expect to invoke these different models, or is the idea that they are tools we can describe and specifically control when they are utilized?