Seeking Guidance: Truncated Debug Outputs / File Storage?

I have access to Anthropic’s Claude 100k token Large Language Model API.

I’ve been successfully stringing the calls together, but the output I get in return is truncated to 990 characters. The same happens with OpenAI’s GPT-3.5 API.

I understand that this could just be a feature of the Debug node’s output.
I’d prefer to have these outputs stored in a file. With FlowForge Cloud I should have 100MB of file storage, but I can’t find any good documentation on how to access it.

Please offer me advice on this front.
Are there resources on using FlowForge Cloud’s file storage? If not, how do I access it?

The FlowForge platform includes a File Storage service that can be used to provide persistent storage to Node-RED in two different ways:

  • A set of custom File nodes that behave the same way as the standard Node-RED File nodes
  • An optional Persistent Context store for storing context data within flows. This feature is only available for platforms running with a premium license (see the sketch after this list).
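
If the Persistent Context store is enabled on your instance, a Function node can read and write values that survive restarts using the standard Node-RED context API. A minimal sketch, assuming the store is exposed under the name "persistent" (check your instance settings for the actual store name) and using a hypothetical key llmResponses:

```javascript
// Function node: append each LLM response to a list kept in persistent context.
// "persistent" is an assumed store name -- confirm it in your instance settings.
const history = flow.get("llmResponses", "persistent") || [];

history.push({
    ts: Date.now(),    // when the response arrived
    text: msg.payload  // the (possibly long) LLM output
});

flow.set("llmResponses", history, "persistent");
return msg;
```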

For further information, visit: FlowForge File Storage • FlowForge Docs

FlowForge Cloud provides support for using the standard File nodes in flows, with some limits. The standard filesystem is not persisted between Node-RED restarts, so a custom set of nodes is used to store the files in persistent storage.

Each Node-RED instance has a quota of 100MB of file storage. A single write operation is limited to 10MB in size.
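
For the File-node route, one approach is to have a Function node set msg.filename and msg.payload, then wire it into a "write file" node whose Filename field is left blank so it takes the name from the message. A rough sketch, assuming an example file name llm-output.txt and guarding against the 10MB single-write limit mentioned above:

```javascript
// Function node placed between the LLM call and a "write file" node.
// The write file node's Filename field is left blank so msg.filename is used.
const text = typeof msg.payload === "string"
    ? msg.payload
    : JSON.stringify(msg.payload, null, 2);   // keep structured responses readable

// Guard against the 10MB single-write limit before handing off to the File node.
const TEN_MB = 10 * 1024 * 1024;
if (Buffer.byteLength(text, "utf8") > TEN_MB) {
    node.warn("Payload exceeds the 10MB write limit; skipping write");
    return null;
}

msg.filename = "llm-output.txt";   // example name; stored in the instance's file storage
msg.payload = text;
return msg;
```

To read the data back later, a "read file" node with a blank Filename field can be driven by the same msg.filename.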

Some third-party nodes access the filesystem directly. Because that data is not persisted between restarts, this can lead to unpredictable results.
