Recent Content
This week's updates to the Doc sites at docs.sciencelogic.com
4/18/25 Doc updates to the main docs site at https://docs.sciencelogic.com/

- Updated the manual for the ServiceNow Events SyncPack version 1.2.1.

4/18/25 Doc updates to the release notes site at https://docs.sciencelogic.com/release_notes_html/

- Restorepoint released the MR20250415 hotfix to address bulk editing issues.
- Updated the release notes for the ServiceNow Events SyncPack version 1.2.1.

PowerFlow Application Immediate Failure

Hi All,

I've developed a Runbook Action within ScienceLogic that triggers a PowerFlow application when certain Runbook Automation policy conditions are met. Upon execution, it sends the following payload:

"params": {
    "configuration": configuration,
    "event_policy_name": EM7_VALUES['%_event_policy_name'],
    "event_details": EM7_VALUES,
    "organization_name": EM7_VALUES['%O'],
    "system_url": system_url,
    "queue": queue
}

These parameters are specified as inputs, and some are derived from the EM7 values. When the Runbook Automation linked to this Action executes against a specific Event Policy raised on a Device, I can see a success message in the "Event Notifications" window, and I can see that all inputs and variables are passed through in their entirety.

The real issue appears when you open PowerFlow: the application immediately fails, giving no error and no explanation as to why. I have been testing the payloads, ensuring that the configuration being referenced exists, and confirmed that all of these are in place and working fine. I then removed the "queue" value from the params body and passed only the rest of the parameters; this succeeds and the application executes without incident. Interestingly, even when manually executing the application in the UI with a custom parameter set, it fails as soon as you specify the queue.

It is important to note that we only recently created the queues on a development system, and this is the first time we are testing with them. I was hoping someone in this community might be able to shed some light on why this might be happening. Could it be that the queues are unreachable, and therefore when the custom queue is not specified the application defaults to the celery or 'priority.high' queue, which is reachable, and therefore works? If the queues are the problem, what could be done to fix it?

Any and all contributions are welcome.

Sincerely
Andre
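
A quick way to test the "unreachable queue" theory, assuming the default Docker-based PowerFlow deployment, is to confirm that the custom queue actually exists in RabbitMQ and has at least one consumer; the PowerFlow custom-queue documentation describes assigning steprunners to a queue, and a queue with no steprunner listening on it will never have its tasks picked up. A rough sketch (the container-name filter is an assumption and may differ in your environment):

# Find the RabbitMQ container on the PowerFlow node (name filter is an assumption).
RABBIT_CONTAINER=$(docker ps --filter "name=rabbit" --format '{{.Names}}' | head -n 1)

# List every queue with its message backlog and consumer count.
# A custom queue showing 0 consumers means no steprunner is servicing it.
docker exec "$RABBIT_CONTAINER" rabbitmqctl list_queues name messages consumers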

APM Monitoring for ScienceLogic

Hi All,

Has anyone considered monitoring the ScienceLogic stack with full-stack observability tools such as AppDynamics, Dynatrace, etc.? Basically, I am interested in seeing what is happening inside the SL1 appliances, such as the Data Collector (DC), Message Collector (MC), and Central Database (CDB), at the process level. Does SL1 support installing APM agents on the appliances?
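
As an interim, agent-free view, the appliances are Linux hosts, so process-level activity can at least be sampled with generic Linux tools while the APM-agent question is clarified with ScienceLogic; a minimal sketch (nothing below is SL1-specific, and no SL1 process names are assumed):

# Snapshot the top CPU consumers on an appliance, with memory and elapsed time.
ps -eo pid,ppid,pcpu,pmem,etime,comm --sort=-pcpu | head -n 20

# Refresh the snapshot every 30 seconds to spot sustained hot processes.
watch -n 30 "ps -eo pid,pcpu,pmem,comm --sort=-pcpu | head -n 15"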

As-built script for ScienceLogic configuration

Hi, does anyone know of a script that you can run to detail your ScienceLogic application's configuration, so that you can write a ScienceLogic "as-built" application document? I am not interested in the underlying servers for this particular ask, as my use case is a ScienceLogic SaaS instance.

Thanks!
Anne
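
In the absence of a packaged tool, the SL1 REST API can pull much of the inventory that typically goes into an as-built document, and it works against a SaaS instance. A minimal sketch follows; the host, credentials, and output files are placeholders, and the resource paths and limits should be verified against the API documentation for your SL1 version:

#!/bin/bash
# Pull basic inventory from the SL1 REST API for an as-built document.
# SL1_HOST, SL1_USER, and SL1_PASS are placeholders for your environment.
SL1_HOST="https://sl1.example.com"
SL1_USER="apiuser"
SL1_PASS="changeme"

# Organizations and devices are standard SL1 API resources; adjust limits as needed.
curl -sk -u "$SL1_USER:$SL1_PASS" "$SL1_HOST/api/organization?limit=1000" -o organizations.json
curl -sk -u "$SL1_USER:$SL1_PASS" "$SL1_HOST/api/device?limit=1000" -o devices.json

# Each response lists resource URIs; fetch individual entries for the detail
# (device class, collector group, credential alignment) the document needs.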

Orphaned Device Class

Hi All,

I have come across a puzzle managing some custom classes across multiple SL1 stacks, with the following scenario:

1. You create a custom device class and add it to a PowerPack.
2. You delete the PowerPack without removing the class.
3. In the Device Class Editor, the class still shows 'Yes' in the 'PowerPack' column. You can also see the linked (now deleted) PowerPack GUID in the master.definitions_dev_classes table.

Is it correct/expected behaviour that the class remains linked to the now non-existent PowerPack? If so, is there a supported way to "unlink" it so the class can be reused in a different PowerPack (without having to recreate it and end up with a new GUID)?

Many Thanks
Colin.
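
For anyone hitting the same thing, a read-only look at the table mentioned above can at least confirm what the orphaned linkage looks like before raising it with ScienceLogic support. Column names vary by SL1 version, so inspect the schema rather than assuming it; silo_mysql is assumed here to be the MySQL client wrapper present on the Database Server:

# Run on the Database Server. Read-only inspection only; engage ScienceLogic
# support before changing anything in these tables.
silo_mysql -e "DESCRIBE master.definitions_dev_classes;"
silo_mysql -e "SELECT * FROM master.definitions_dev_classes LIMIT 5\G"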

Understanding DA caching in a CUG

We have a situation where a single DA aligned with a root device caches data about component devices, which it then uses to populate metrics for both the root and the component devices. Does anyone have more detail on what happens, at a CUG level, if the collector aligned to the root device becomes unavailable? How does another collector in the CUG know that it needs to start caching data (is that via config push?), and does the whole DCM tree need to be rebuilt and aligned with a different collector?

Equally, is there any smarter way to avoid the "Device failed availability" event for the child devices, which only occurs because the cached data from the root device is not available (and not because the child devices really have an availability problem)?

Last week's updates to the Doc sites at docs.sciencelogic.com

4/11/25 Doc updates to the main docs site at https://docs.sciencelogic.com/

- Updated the following PowerPack manuals for Python 3 compliance and other features:
  - Nginx: Open Source & Plus v103: Added Guided Discovery workflow
  - Microsoft: Windows Server v118: Added 2025 device classes
- Updated the Restorepoint 5.6 User Guide to include details about appliance-related settings in the [Devices] tab (previously [Device Defaults]) on the System Settings page, and its corresponding new "Global Device Settings" section.
- Removed the "Testing the CUCM Credential" section from the Cisco: CUCM Cisco Unified Communications Manager PowerPack manual, as the "CUCM Credential Test" credential tester has been excluded from this PowerPack.
- Updated the Agents chapter of the Restorepoint 5.6 User Guide to include Config Policy information for agents.
- Updated the SL1 Recommended Upgrade Paths section to add the approved upgrade paths for SL1 12.3.3.

4/11/25 Doc updates to the release notes site at https://docs.sciencelogic.com/release_notes_html/

- Added release notes for SL1 12.3.3, which includes package updates to improve security and system performance and addresses multiple issues from previous releases.
- Updated the following PowerPack release notes for Python 3 compliance and other features:
  - Nginx: Open Source & Plus v103: Added Guided Discovery workflow
  - Microsoft: Windows Server v118: Added 2025 device classes
- Added release notes for Skylar Analytics 1.4.1.
- Added release notes for version 114 of the Cisco: CUCM Cisco Unified Communications Manager PowerPack, which includes compatibility with Python 3 for all Dynamic Applications using snippet arguments and specific run book actions.
- Added release notes for version 101 of the Couchbase PowerPack, which includes compatibility with Python 3.6 for all Dynamic Applications and the ability to leverage the snippet framework when collecting data.
- Added release notes for version 8.16.1.41 of the AP2 Halwa.01 release, which addresses two issues relating to API and Global Manager that affected the previous AP2 Halwa version 8.16.1.14 release.
- Added release notes for version 8.17.23.45 of the AP2 Ice Pop.01 release, which addresses two issues relating to API and Global Manager that affected the AP2 Ice Pop version 8.17.23.18 release.
- Added release notes for Restorepoint MR20250409, which includes updates to global alert definitions.
- Added release notes for version 2.3.2 of the Restorepoint SyncPack, which updates the default backup schedule in the SyncPack from every 15 minutes to once a day at midnight, and addresses an issue with the "Get List of Credentials from SL1" application.

How to Set Up an NFS Server for SL1 Backups

Backing up your ScienceLogic SL1 database is essential for ensuring data integrity and disaster recovery. One effective way to store backups is by setting up a Network File System (NFS) server. NFS allows you to share a directory across multiple machines, making it an ideal solution for centralized SL1 backups. This guide will walk you through the process of installing and configuring an NFS server to store SL1 backups.

Step 1: Install the NFS Server

Before setting up the NFS server, ensure that your Linux machine has the necessary NFS packages installed. If the nfs-server package is missing, you need to install it.

For RHEL, CentOS, Rocky Linux, or AlmaLinux:

sudo yum install -y nfs-utils

For Ubuntu or Debian:

sudo apt update
sudo apt install -y nfs-kernel-server

After installation, start and enable the NFS service:

sudo systemctl start nfs-server
sudo systemctl enable nfs-server

Verify the NFS server is running:

sudo systemctl status nfs-server

If it is not running, restart it:

sudo systemctl restart nfs-server

Step 2: Configure the NFS Server

Once NFS is installed, follow these steps to configure the shared directory for SL1 backups.

1. Create a Backup Directory

sudo mkdir -p /backups
sudo chmod 777 /backups

2. Set a Fixed Port for mountd

On Ubuntu/Debian, edit /etc/default/nfs-kernel-server and add:

RPCMOUNTDOPTS="--port 20048"

On RHEL/CentOS/Rocky/Oracle, edit /etc/sysconfig/nfs and add:

MOUNTD_PORT=20048

This ensures the mountd service always uses port 20048, making firewall configuration simpler and more secure.

3. Define NFS Exports

Edit the /etc/exports file to specify which clients can access the NFS share:

sudo vi /etc/exports

Add the following line, replacing <SL1 database IP> with the IP address of your SL1 database server:

/backups <SL1 database IP>(rw,sync,no_root_squash,no_all_squash)

This configuration allows the SL1 server to read and write (rw) to /backups, ensures data consistency (sync), and prevents permission issues.

4. Apply the NFS Configuration

Run the following command to apply the changes:

sudo exportfs -a

Restart the NFS service to ensure the changes take effect:

sudo systemctl restart nfs-server

Step 3: Configure Firewall Rules for NFS

If a firewall is enabled on your NFS server, you must allow NFS-related services. Run the following commands to open the necessary ports:

sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --reload

If the 'firewall-cmd' command is not found on your server:

1. Using iptables (for RHEL, CentOS, Debian, or older distributions)

If your system uses iptables, you can manually allow NFS traffic with the following commands:

sudo iptables -A INPUT -p tcp --dport 2049 -j ACCEPT   # NFS
sudo iptables -A INPUT -p tcp --dport 111 -j ACCEPT    # Portmapper
sudo iptables -A INPUT -p tcp --dport 20048 -j ACCEPT  # Fixed mountd port
sudo iptables -A INPUT -p udp --dport 2049 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 111 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 20048 -j ACCEPT

# Save the rules
sudo iptables-save | sudo tee /etc/sysconfig/iptables

To restart iptables and apply the rules:

sudo systemctl restart iptables
2. Using UFW (for Ubuntu and Debian)

If your system uses ufw (Uncomplicated Firewall), enable NFS traffic with the following steps.

Enable UFW (if it is inactive):

sudo ufw enable

Allow the NFS, RPC, and mountd ports:

sudo ufw allow 2049/tcp   # NFS
sudo ufw allow 111/tcp    # rpcbind
sudo ufw allow 111/udp
sudo ufw allow 20048/tcp  # mountd (fixed)
sudo ufw allow 20048/udp

To apply the changes:

sudo ufw reload

To check that the rules have been added:

sudo ufw status

Step 4: Verify the NFS Server

To confirm that the NFS share is accessible, use the following command:

showmount -e <NFS server IP>

If the setup is correct, you should see the backup directory listed as an export:

Export list for <NFS server IP>:
/backups <NFS client IP>

Next Steps: Mount the NFS Share on the SL1 Database Server

Now that the NFS server is set up, you need to mount the share on your SL1 database server to store backups. For step-by-step instructions on mounting an NFS share in SL1, refer to the official ScienceLogic documentation: Backup Management.
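
For orientation only, the client-side mount on the SL1 Database Server ends up looking roughly like the sketch below; the mount point is a placeholder, and the Backup Management documentation remains the authoritative procedure:

# On the SL1 Database Server: confirm the export is visible, then mount it.
# <NFS server IP> and the mount point are placeholders.
showmount -e <NFS server IP>
sudo mkdir -p /mnt/sl1-backups
sudo mount -t nfs <NFS server IP>:/backups /mnt/sl1-backups

# Verify the mount before pointing SL1 backups at it.
df -h /mnt/sl1-backups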

Week of April 7th, 2025 - Latest KB Articles and Known Issues

A set of Knowledgebase articles published last week is listed below. All KBAs can be searched via Global Search on the Support Portal and filtered by various components, such as product release.

ServiceNow Events SyncPack v1.2.1

Hello,

We are pleased to announce an update to the ServiceNow Events SyncPack. Version 1.2.1 includes two improvements that address an issue causing lock documents to be created but not cleaned up:

- Added the new "Remove Lock Documents" application, which allows for the removal of stale lock documents in Couchbase that were created by the "Process and Cache SL1 Events" application. ScienceLogic recommends that you schedule the application to run every five minutes with the default configurations. Fifty thousand lock documents are removed in every application run; this could take, for example, an hour and 30 minutes when removing one million lock documents. Running this application will not impact other applications running in PowerFlow.
- The "Process and Cache SL1 Events" application now uses a Redis lock instead of a Couchbase lock. (Case: 00507358)

This update is now available on the Support Portal downloads page.

Thank you,
Release Management
