Much of the power of LLMs comes from chaining a series of separate commands together, which often works far better than combining everything into a single prompt.
For example:
  • Translate this text from [language] to [language] using Model A.
  • Edit it in [this style and tone] using Model B.
  • Fact check using the deep search tool of Model C.
  • Prepare a briefing note based on that edit using Model D.
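The steps above chain naturally as function composition. A minimal Python sketch, with every model call stubbed out so the chain itself runs (the function names and behavior here are hypothetical placeholders, not any real API):

```python
# Each function stands in for one model in the chain. In practice each
# would wrap a real API client; here they are stubs so the pipeline runs.

def translate(text: str, source: str, target: str) -> str:
    # Model A: translation step (stubbed placeholder).
    return f"[{source}->{target}] {text}"

def edit(text: str, style: str) -> str:
    # Model B: style and tone edit (stubbed placeholder).
    return f"[{style}] {text}"

def fact_check(text: str) -> str:
    # Model C: deep-search fact check (stubbed placeholder).
    return f"[checked] {text}"

def brief(text: str) -> str:
    # Model D: briefing note (stubbed placeholder).
    return f"BRIEFING: {text}"

def pipeline(text: str) -> str:
    # Each step feeds its output to the next; the chaining is the point.
    return brief(fact_check(edit(translate(text, "fr", "en"), "formal")))

print(pipeline("Bonjour"))
```

Swapping a model in or out means changing one function, which is exactly the modularity that a single combined prompt loses.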
Or:
  • Take this audio input and transcribe it with Model A.
  • Then format the resulting text with Model B.
  • Then take the resulting text and create an image with it using Model C.
And so on. Right now these workflows are best done in Python. Long term, it'd be amazing - and would empower many others - to be able to put them together in Bolt!
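As a Python illustration, the audio workflow above is the same pattern with different step types: bytes in, text through the middle, an image out. The model calls below are stubbed placeholders, not real client code:

```python
# Stubbed three-model chain: speech-to-text, formatting, image generation.
# Real implementations would call actual APIs; these stand-ins just show
# how each model's output becomes the next model's input.

def transcribe(audio_bytes: bytes) -> str:
    # Model A: speech-to-text (stubbed placeholder).
    return audio_bytes.decode("utf-8")

def format_text(text: str) -> str:
    # Model B: formatting pass (stubbed placeholder).
    return text.strip().capitalize()

def generate_image(prompt: str) -> dict:
    # Model C: image generation (stubbed placeholder returning a record).
    return {"prompt": prompt, "url": "https://example.com/image.png"}

result = generate_image(format_text(transcribe(b"  hello from the recording  ")))
print(result["prompt"])
```

Note that the types change along the chain (bytes to text to image), so the glue code's main job is making each output a valid input for the next step.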