EchoStream Nodes that allow you to provide customized code also support centralized collection of the log events emitted from that code.
For example, Processor, Cross Tenant Sending and Bitmap Router Nodes all allow you to provide a Python function that, in addition to processing or bitmapping the message, can log messages into EchoStream through the use of context.logger.
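A minimal sketch of what such a function might look like is shown below. The keyword-only signature used here is an illustrative assumption, not the documented contract, and the sketch assumes that context.logger behaves like a standard Python logging.Logger.

```python
import json


def processor(*, context, message, source, **kwargs):
    # Assumption: context.logger is a standard Python logger whose output is
    # collected by EchoStream's centralized logging.
    context.logger.info("Received message from %s", source)
    try:
        body = json.loads(message)
    except ValueError:
        # Log events emitted here are retained by EchoStream for 3 days and
        # are also emitted by your Tenant's Log Emitter Node.
        context.logger.error("Message is not valid JSON; passing through unchanged")
        return message
    context.logger.debug("Parsed message: %s", body)
    return json.dumps(body)
```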
Managed Nodes also record all messages emitted to stdout in their Docker containers to the EchoStream logs.
Access to these logs is provided via the EchoStream Application, via API calls, or via the emission of the log events themselves as messages that you can process in your Tenant's processing network.
Logs are only stored in EchoStream for 3 days. If you require longer log event storage, you should create a Node that listens to the Log Emitter Node and stores the received log events in a data store external to EchoStream, as sketched below.
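The following sketch illustrates one way such a Node's function could archive log events, here by copying each event to an S3 bucket. The function signature, the bucket name, and the treatment of the message as a raw string are illustrative assumptions, not part of the EchoStream API.

```python
import uuid

import boto3

s3 = boto3.client("s3")
ARCHIVE_BUCKET = "my-log-archive-bucket"  # hypothetical bucket name


def processor(*, context, message, source, **kwargs):
    # Persist the raw log event outside EchoStream so it outlives the
    # 3-day retention window. Assumes message is the log event as a string.
    s3.put_object(
        Bucket=ARCHIVE_BUCKET,
        Key=f"echostream-logs/{uuid.uuid4()}.json",
        Body=message.encode("utf-8"),
    )
    return message
```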
Getting Log Events
You can use either the EchoStream Application or the API to get log events for a specific resource in your Tenant.
NOTE - getting log events in this manner requires that you know which resource created them!
Using the Application
Simply navigate to the resource (e.g. - Processor Node) whose log events you wish to examine and choose List Log Events. The log events that were created by that Node will be presented to you.
Using the API
To list log events using the API, construct a GraphQL query that gets the Node to list log events for (using the Query.GetNode call) and, within that call, asks for the ListLogEvents field (making sure to provide the necessary parameters).
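The sketch below shows one way to issue such a query from Python. The endpoint URL, the authentication header, and the exact arguments and sub-fields of GetNode and ListLogEvents are assumptions; consult the API reference for the real schema and required parameters.

```python
import requests

API_URL = "https://api.echo.stream/graphql"  # hypothetical endpoint
API_KEY = "<your-api-key>"                   # hypothetical auth scheme

# GetNode and ListLogEvents come from the EchoStream API; the argument names
# and the selected sub-fields below are illustrative assumptions.
QUERY = """
query ListNodeLogEvents($tenant: String!, $name: String!) {
  GetNode(tenant: $tenant, name: $name) {
    ListLogEvents {
      logEvents {
        timestamp
        message
      }
    }
  }
}
"""

response = requests.post(
    API_URL,
    json={
        "query": QUERY,
        "variables": {"tenant": "my-tenant", "name": "My Processor Node"},
    },
    headers={"Authorization": API_KEY},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```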
Processing Log Events in your Tenant's processing network
Every Tenant is automatically given a Log Emitter Node. This Log Emitter Node emits every log event that a Node in your Tenant creates.
You can also create Nodes to process log events and connect those Nodes to the Log Emitter. Some use cases for this could be:
- sending those log events to a centralized logging mechanism (e.g. - Splunk, SumoLogic, ELK, etc.) in your datacenter for analysis and processing.
- analyzing the log events in real time for errors and creating an alert in your alert management system (see the sketch after this list).
- routing all log events to a data lake or data mart for analysis.
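As an illustration of the second use case, here is a minimal sketch of a function for a Node connected to the Log Emitter Node that scans each log event for error indicators and forwards only those events downstream for alerting. The function signature, the log-event message shape, and the convention that returning None drops the event are assumptions.

```python
import json

ERROR_MARKERS = ("ERROR", "CRITICAL", "Traceback")


def processor(*, context, message, source, **kwargs):
    # Assumption: the log event is a JSON string with a "message" field;
    # fall back to the raw string if it is not.
    try:
        event = json.loads(message)
    except ValueError:
        event = {}
    text = str(event.get("message", message)) if isinstance(event, dict) else message

    if any(marker in text for marker in ERROR_MARKERS):
        # Forward the event downstream (e.g., to a Node that calls your
        # alert-management system's API).
        context.logger.warning("Error detected in log event from %s", source)
        return message

    # Assumption: returning None drops the event from further processing.
    return None
```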