Streaming
If `streaming` is set to `true` when making a prediction, tokens are sent as data-only server-sent events (SSE) as they become available.
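For illustration, below is a minimal sketch of consuming that event stream over raw HTTP with the `requests` library. The base URL, port, and the `/api/v1/prediction/<flow-id>` endpoint path are assumptions here; adjust them for your own deployment.

```python
import requests

def stream_prediction(base_url: str, flow_id: str, question: str) -> None:
    # Assumed endpoint path; verify against your Flowise instance
    url = f"{base_url}/api/v1/prediction/{flow_id}"
    payload = {"question": question, "streaming": True}

    with requests.post(url, json=payload, stream=True) as resp:
        resp.raise_for_status()
        # SSE data arrives as "event: ..." / "data: ..." line pairs
        for line in resp.iter_lines(decode_unicode=True):
            if line:
                print(line)

if __name__ == "__main__":
    stream_prediction("http://localhost:3000", "<flow-id>", "Tell me a joke!")
```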
Using Python/TS Library
Flowise provides two libraries:
- Python: `pip install flowise`
- TypeScript: `npm install flowise-sdk`
```python
from flowise import Flowise, PredictionData

def test_streaming():
    client = Flowise()

    # Test streaming prediction
    completion = client.create_prediction(
        PredictionData(
            chatflowId="<flow-id>",
            question="Tell me a joke!",
            streaming=True
        )
    )

    # Process and print each streamed chunk
    print("Streaming response:")
    for chunk in completion:
        # {event: "token", data: "hello"}
        print(chunk)

if __name__ == "__main__":
    test_streaming()
```

The streamed output arrives as server-sent events, for example:

```
event: token
data: Once upon a time...
```

A prediction's event stream consists of the following event types:
| Event | Description |
|---|---|
| start | The start of streaming |
| token | Emitted when the prediction is streaming new token output |
| error | Emitted when the prediction returns an error |
| end | Emitted when the prediction finishes |
| metadata | Metadata of the related flow, such as chatId and messageId. Emitted after all tokens have finished streaming and before the end event |
| sourceDocuments | Emitted when the flow returns source documents from the vector store |
| usedTools | Emitted when the flow has used tools |
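Putting the event types together, the sketch below shows one way to dispatch streamed chunks on the Python client side. It assumes each chunk parses as JSON with `event` and `data` fields, matching the `{event: "token", data: "hello"}` shape shown in the example above; the handler names and accumulation logic are illustrative only.

```python
import json

def handle_stream(completion) -> str:
    """Dispatch streamed chunks by event type and return the full answer."""
    tokens = []
    for chunk in completion:
        # Assumption: each chunk is a JSON string (or dict) with "event" and "data"
        parsed = json.loads(chunk) if isinstance(chunk, str) else chunk
        event, data = parsed.get("event"), parsed.get("data")

        if event == "token":
            tokens.append(data)            # accumulate streamed token output
        elif event == "metadata":
            print("metadata:", data)       # chatId, messageId, etc.
        elif event == "sourceDocuments":
            print("sources:", data)        # sources returned from the vector store
        elif event == "usedTools":
            print("tools:", data)          # tools used by the flow
        elif event == "error":
            raise RuntimeError(data)
        elif event == "end":
            break                          # streaming finished
    return "".join(tokens)
```

A caller could pass the `completion` object returned by `client.create_prediction(...)` in the earlier example directly to `handle_stream` and print its return value once the stream ends.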