ScienceLogic's PowerFlow Training: Explore Built-In Integration and Custom Automation Capabilities
PowerFlow is ScienceLogic's integration platform, designed to seamlessly extract, transform, and load data between SL1 and third-party tools. Whether you're implementing built-in integrations or creating custom automations, PowerFlow empowers you to streamline workflows, integrate systems, and enhance IT operations.

Find the Right Training for Your Needs

Explore training options based on your role and objectives:

- PowerFlow: ScienceLogic's Bi-Directional Task Execution Platform (1 hour) - Discover PowerFlow's core functionality in this introductory learning path, covering key features, configuration, navigation, and troubleshooting.
- PowerFlow Integrations: ServiceNow (4 hours) - Master PowerFlow's integration with ServiceNow. This comprehensive learning path includes all content from the introductory PowerFlow course, then dives deeper into implementing SL1 and ServiceNow integration use cases.
- PowerFlow: Software Development Kit (SDK) (1 hour) - For advanced users, this course shows how to use the PowerFlow Software Development Kit (SDK) to build custom SyncPacks for automation, system integration, and workflow enhancements.

Access Training Anytime, Anywhere

ScienceLogic University is ScienceLogic's on-demand learning portal. Log in or create an account to access these PowerFlow training options and other essential topics.
Optimising PowerFlow Integrations: Isolating Incident and CMDB Workloads

In complex IT environments, integrations like incident management and Configuration Management Database (CMDB) synchronisation are pivotal. ScienceLogic's PowerFlow platform offers robust capabilities to handle these integrations. However, to ensure optimal performance and prevent resource contention, it's crucial to configure dedicated steprunners and queues for different workloads.

This article discusses on-premises instances of PowerFlow. If you are using a SaaS-hosted instance of PowerFlow, please submit a service request via the Support Portal outlining your requirements. The relevant team will then review your request and discuss the necessary changes to be made on your SaaS instance of PowerFlow.

Understanding the Challenge

Incident management and CMDB synchronisation have distinct characteristics:

- Incident Management: Typically involves lightweight, high-frequency tasks that require rapid processing to maintain real-time responsiveness.
- CMDB Synchronisation: Often deals with bulk data operations, such as syncing large volumes of configuration items, which are resource-intensive and time-consuming.

Running both integrations on the same steprunner can lead to performance issues. For instance, a heavy CMDB sync might consume significant resources, delaying the processing of critical incident tasks.

Implementing Dedicated Steprunners and Queues

To address this, PowerFlow allows steprunners to be configured to listen to specific queues. By assigning separate queues for incident and CMDB tasks, you can isolate their processing and allocate resources appropriately.

Example Configuration

Here's how you might define dedicated steprunners in your docker-compose.override.yml:

Incident Steprunner:

    steprunner-incident:
      image: sciencelogic/is-worker:latest
      hostname: "incident-{{.Task.ID}}"
      deploy:
        resources:
          limits:
            memory: 2G
        replicas: 10
      environment:
        user_queues: 'incident_queue'
        worker_threads: 4

CMDB Steprunner:

    steprunner-cmdb:
      image: sciencelogic/is-worker:latest
      hostname: "cmdb-{{.Task.ID}}"
      deploy:
        resources:
          limits:
            memory: 4G
        replicas: 5
      environment:
        user_queues: 'cmdb_queue'
        worker_threads: 2

In this setup:

- user_queues: Assigns each steprunner to a specific queue (incident_queue or cmdb_queue), ensuring isolation of workloads.
- worker_threads: Defines how many concurrent tasks each steprunner container can process. Higher for incidents, because incident syncs are typically lightweight and frequent; lower for CMDB, to reduce memory contention, since CMDB data is often bulkier and more complex.
- deploy.resources.limits.memory: Caps how much memory each steprunner container can use. This helps prevent individual steprunners from consuming excessive memory, which is especially important when running many containers on shared infrastructure. Example: 2G for incidents (moderate), 4G for CMDB (higher due to heavier payloads).
- deploy.replicas: Specifies how many containers to run for each steprunner service. More replicas for incidents to handle high throughput; fewer for CMDB, since each task may take longer and use more resources.

Benefits of Isolation

- Performance Optimisation: Ensures that resource-heavy CMDB tasks don't impede the processing of time-sensitive incident tasks.
- Scalability: Allows independent scaling of steprunners based on the workload demands of each integration.
- Resource Management: Facilitates fine-tuned allocation of system resources, reducing the risk of bottlenecks and failures.
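Before redeploying the stack with an updated override file, a quick local sanity check can catch copy-and-paste mistakes such as two steprunners unintentionally sharing a queue or a service missing its memory limit. The snippet below is a minimal sketch, not a ScienceLogic utility: it assumes PyYAML is available, that the file is named docker-compose.override.yml as above, and that the steprunner services sit under a top-level services: key as in a standard Compose file.

    import yaml  # requires PyYAML

    OVERRIDE_FILE = "docker-compose.override.yml"  # adjust the path to your environment

    with open(OVERRIDE_FILE) as fh:
        compose = yaml.safe_load(fh) or {}

    # Services normally live under a top-level 'services:' key; fall back to the
    # document root in case the override file lists them directly.
    services = compose.get("services", compose)

    queue_owners = {}
    for name, svc in services.items():
        if not name.startswith("steprunner"):
            continue
        env = (svc or {}).get("environment") or {}
        limits = (((svc or {}).get("deploy") or {}).get("resources") or {}).get("limits") or {}
        if "memory" not in limits:
            print(f"WARNING: {name} has no memory limit set")
        # user_queues may hold a single queue or a comma-separated list
        for queue in str(env.get("user_queues", "")).split(","):
            queue = queue.strip()
            if not queue:
                continue
            if queue in queue_owners:
                print(f"NOTE: queue '{queue}' is served by both {queue_owners[queue]} and {name}")
            else:
                queue_owners[queue] = name

    print(f"Checked steprunners; {len(queue_owners)} dedicated queue(s) found.")

Once the override file checks out, redeploy the PowerFlow stack using your normal procedure so the new steprunners and queue assignments take effect.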
Monitoring and Adjustments

Regular monitoring is essential to maintain optimal performance:

- Queue Lengths: Persistent growth in queue lengths may indicate the need for additional steprunners or increased thread counts.
- Resource Utilisation: Monitor CPU and memory usage to prevent over-utilisation.
- Error Rates: High error rates might necessitate adjustments in configurations or error-handling mechanisms.
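If you prefer to watch queue depths programmatically rather than through the PowerFlow user interface, one option, assuming your deployment exposes the broker's RabbitMQ management API, is a small polling script like the sketch below. The URL, credentials, queue names, and threshold are all placeholders to adapt to your environment; treat this as an illustrative approach, not a supported ScienceLogic tool.

    import requests  # requires the requests package

    # All values below are placeholders; adjust to your PowerFlow deployment.
    RABBITMQ_API = "http://powerflow.example.com:15672/api/queues"
    AUTH = ("guest", "guest")          # broker credentials
    WATCHED = {"incident_queue", "cmdb_queue"}
    THRESHOLD = 500                    # messages; tune to your workload

    resp = requests.get(RABBITMQ_API, auth=AUTH, timeout=10)
    resp.raise_for_status()

    for queue in resp.json():
        name = queue.get("name", "")
        if name not in WATCHED:
            continue
        depth = queue.get("messages", 0)
        note = "  <-- persistent growth may warrant more replicas or threads" if depth > THRESHOLD else ""
        print(f"{name}: {depth} message(s){note}")

Run a check like this on a schedule and act on sustained growth rather than momentary spikes.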
Final Thoughts

By strategically configuring dedicated steprunners and queues for incident and CMDB integrations, you can enhance the efficiency, reliability, and scalability of your PowerFlow environment. This approach ensures that each integration operates within its optimal parameters, delivering better performance and resource utilisation.

Convert Customization to PowerFlow Jinja Template

Sometimes when syncing devices from SL1 into ServiceNow as Configuration Items there can be a mismatch: ServiceNow may list the name as the fully qualified domain name (FQDN) while SL1 uses the short name. This setting can be updated in SL1, but in some cases the SL1 team would rather see the short name than the FQDN. This can be set up on a per SL1 Device Class basis.

PowerFlow

Using the following Jinja2 if statement, the name synced to ServiceNow can be converted to use the SL1 Device Hostname instead for Microsoft SQL Server Databases. This excerpt of code goes under the attribute mappings, for name on the ScienceLogic side mapping to name on the ServiceNow side:

    {%- set output = [] -%}
    {%- if (device.device_class|trim) in ['Microsoft | SQL Server Database'] -%}
        {%- set output = device.hostname -%}
    {%- else -%}
        {%- set output = device.name -%}
    {%- endif -%}
    {{ output }}
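If you want to verify the template's branching before pasting it into a PowerFlow attribute mapping, you can render it locally with the jinja2 Python package. The sketch below is only a test harness: the sample device dictionaries are invented, and their fields (device_class, hostname, name) simply mirror the variables the template references; PowerFlow itself supplies the real device object at sync time.

    from jinja2 import Template  # requires the jinja2 package

    # Same template as above, collapsed into a single string for local testing.
    TEMPLATE = (
        "{%- set output = [] -%}"
        "{%- if (device.device_class|trim) in ['Microsoft | SQL Server Database'] -%}"
        "{%- set output = device.hostname -%}"
        "{%- else -%}"
        "{%- set output = device.name -%}"
        "{%- endif -%}"
        "{{ output }}"
    )

    # Invented sample devices for illustration only.
    samples = [
        {"device_class": "Microsoft | SQL Server Database", "hostname": "sqldb01", "name": "sqldb01.example.com"},
        {"device_class": "Linux | Red Hat", "hostname": "web01", "name": "web01.example.com"},
    ]

    template = Template(TEMPLATE)
    for device in samples:
        print(device["device_class"], "->", template.render(device=device))

The SQL Server Database sample should render the short hostname, while the other device class falls through to the full device name.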