Recent Content
Orphaned Device Class
Hi All, I have come across a puzzle managing some custom device classes across multiple SL1 stacks, with the following scenario:

1. You create a custom device class and add it to a PowerPack.
2. You delete the PowerPack without removing the class.
3. In the Device Class Editor, the class still shows 'Yes' in the 'PowerPack' column, and you can also see the linked (now deleted) PPK GUID in the master.definitions_dev_classes table.

Is this correct/expected behaviour, that the class remains linked to the now non-existent PowerPack? If so, is there a supported way to "unlink" it so the class can be reused in a different PowerPack (without having to recreate it and have a new GUID assigned)?

Many Thanks,
Colin
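For anyone wanting to confirm what is still recorded for such a class, here is a minimal, read-only sketch of inspecting the table mentioned above from the Database Server shell. The silo_mysql client name and the DESCRIBE-first approach are assumptions on my part (substitute a standard mysql client with valid credentials if needed), and column names vary by SL1 version; do not modify the table without guidance from Support.

```
# Read-only inspection only; assumes the silo_mysql wrapper (or a standard
# mysql client with credentials) is available on the SL1 Database Server.

# List the table's columns rather than guessing which one holds the
# PowerPack GUID.
silo_mysql -e "DESCRIBE master.definitions_dev_classes;"

# Look at a few rows to spot the orphaned class and its lingering PPK GUID.
silo_mysql -e "SELECT * FROM master.definitions_dev_classes LIMIT 5\G"
```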
Last week's updates to the Doc sites at docs.sciencelogic.com

4/11/25 Doc updates to the main docs site at https://docs.sciencelogic.com/

- Updated the following PowerPack manuals for Python 3 compliance and other features:
  - Nginx: Open Source & Plus v103 - Added Guided Discovery workflow
  - Microsoft: Windows Server v118 - Added 2025 device classes
- Updated the Restorepoint 5.6 User Guide to include details about appliance-related settings in the [Devices] tab (previously [Device Defaults]) on the System Settings page, and its corresponding new "Global Device Settings" section.
- Removed the "Testing the CUCM Credential" section from the Cisco: CUCM Cisco Unified Communications Manager PowerPack manual, as the "CUCM Credential Test" credential tester has been excluded from this PowerPack.
- Updated the Agents chapter of the Restorepoint 5.6 User Guide to include Config Policy information for agents.
- Updated the SL1 Recommended Upgrade Paths section to add the approved upgrade paths for SL1 12.3.3.

4/11/25 Doc updates to the release notes site at https://docs.sciencelogic.com/release_notes_html/

- Added release notes for SL1 12.3.3, which includes package updates to improve security and system performance and addresses multiple issues from previous releases.
- Updated the following PowerPack release notes for Python 3 compliance and other features:
  - Nginx: Open Source & Plus v103 - Added Guided Discovery workflow
  - Microsoft: Windows Server v118 - Added 2025 device classes
- Added release notes for Skylar Analytics 1.4.1.
- Added release notes for version 114 of the Cisco: CUCM Cisco Unified Communications Manager PowerPack, which includes compatibility with Python 3 for all Dynamic Applications using snippet arguments and specific run book actions.
- Added release notes for version 101 of the Couchbase PowerPack, which includes compatibility with Python 3.6 for all Dynamic Applications and the ability to leverage the snippet framework when collecting data.
- Added release notes for version 8.16.1.41 of the AP2 Halwa.01 release, which addresses two issues relating to the API and Global Manager that affected the previous AP2 Halwa version 8.16.1.14 release.
- Added release notes for version 8.17.23.45 of the AP2 Ice Pop.01 release, which addresses two issues relating to the API and Global Manager that affected the AP2 Ice Pop version 8.17.23.18 release.
- Added release notes for Restorepoint MR20250409, which includes updates to global alert definitions.
- Added release notes for version 2.3.2 of the Restorepoint SyncPack, which updates the default backup schedule in the SyncPack from every 15 minutes to once a day at midnight, and addresses an issue with the "Get List of Credentials from SL1" application.
How to Set Up an NFS Server for SL1 Backups

Backing up your ScienceLogic SL1 database is essential for ensuring data integrity and disaster recovery. One effective way to store backups is by setting up a Network File System (NFS) server. NFS allows you to share a directory across multiple machines, making it an ideal solution for centralized SL1 backups. This guide will walk you through the process of installing and configuring an NFS server to store SL1 backups.

Step 1: Install the NFS Server

Before setting up the NFS server, ensure that your Linux machine has the necessary NFS packages installed. If the nfs-server package is missing, you need to install it.

For RHEL, CentOS, Rocky Linux, or AlmaLinux:

```
sudo yum install -y nfs-utils
```

For Ubuntu or Debian:

```
sudo apt update
sudo apt install -y nfs-kernel-server
```

After installation, start and enable the NFS service:

```
sudo systemctl start nfs-server
sudo systemctl enable nfs-server
```

Verify the NFS server is running:

```
sudo systemctl status nfs-server
```

If it is not running, restart it:

```
sudo systemctl restart nfs-server
```

Step 2: Configure the NFS Server

Once NFS is installed, follow these steps to configure the shared directory for SL1 backups.

1. Create a Backup Directory

```
sudo mkdir -p /backups
sudo chmod 777 /backups
```

2. Set a Fixed Port for mountd

On Ubuntu/Debian, edit /etc/default/nfs-kernel-server and add:

```
RPCMOUNTDOPTS="--port 20048"
```

On RHEL/CentOS/Rocky/Oracle, edit /etc/sysconfig/nfs and add:

```
MOUNTD_PORT=20048
```

This ensures the mountd service always uses port 20048, making firewall configuration simpler and more secure.

3. Define NFS Exports

Edit the /etc/exports file to specify which clients can access the NFS share:

```
sudo vi /etc/exports
```

Add the following line, replacing <SL1 database server IP> with the IP address of your SL1 database server:

```
/backups <SL1 database server IP>(rw,sync,no_root_squash,no_all_squash)
```

This configuration allows the SL1 server to read and write (rw) to /backups, ensures data consistency (sync), and prevents permission issues.

4. Apply the NFS Configuration

Run the following command to apply the changes:

```
sudo exportfs -a
```

Restart the NFS service to ensure the changes take effect:

```
sudo systemctl restart nfs-server
```

Step 3: Configure Firewall Rules for NFS

If a firewall is enabled on your NFS server, you must allow NFS-related services. Run the following commands to open the necessary ports:

```
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --reload
```

If the firewall-cmd command is not found on your device, use one of the following alternatives.

1. Using iptables (for RHEL, CentOS, Debian, or older distributions)

If your system uses iptables, you can manually allow NFS traffic with the following commands:

```
sudo iptables -A INPUT -p tcp --dport 2049 -j ACCEPT   # NFS
sudo iptables -A INPUT -p tcp --dport 111 -j ACCEPT    # Portmapper
sudo iptables -A INPUT -p tcp --dport 20048 -j ACCEPT  # Fixed mountd port
sudo iptables -A INPUT -p udp --dport 2049 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 111 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 20048 -j ACCEPT

# Save the rules
sudo iptables-save | sudo tee /etc/sysconfig/iptables
```

To restart iptables and apply the rules:

```
sudo systemctl restart iptables
```
2. Using UFW (for Ubuntu and Debian)

If your system uses ufw (Uncomplicated Firewall), enable NFS traffic as follows.

Enable UFW (if it is inactive):

```
sudo ufw enable
```

Allow the NFS, RPC, and mountd ports:

```
sudo ufw allow 2049/tcp    # NFS
sudo ufw allow 111/tcp     # rpcbind
sudo ufw allow 111/udp
sudo ufw allow 20048/tcp   # mountd (fixed)
sudo ufw allow 20048/udp
```

To apply the changes:

```
sudo ufw reload
```

To check that the rules were added:

```
sudo ufw status
```

Step 4: Verify the NFS Server

To confirm that the NFS share is accessible, use the following command:

```
showmount -e <NFS server IP>
```

If the setup is correct, you should see /backups listed as an exported directory:

```
Export list for <NFS server IP>:
/backups <NFS client IP>
```

Next Steps: Mount the NFS Share on the SL1 Database Server

Now that the NFS server is set up, you need to mount the share on your SL1 database server to store backups. For step-by-step instructions on mounting an NFS share in SL1, refer to the official ScienceLogic Backup Management documentation; a rough sketch of what that mount typically looks like follows below.
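To make that next step concrete, here is a minimal sketch of mounting the exported share from the SL1 Database Server's side. The /mnt/sl1-backups mount point is an illustrative placeholder, not the path SL1 expects; follow the Backup Management documentation for the supported mount point and options for your backup configuration.

```
# Illustrative only: mount the /backups export on the SL1 Database Server.
# <NFS server IP> is the NFS server configured above; /mnt/sl1-backups is a
# placeholder mount point.
sudo mkdir -p /mnt/sl1-backups
sudo mount -t nfs <NFS server IP>:/backups /mnt/sl1-backups

# Confirm the share is mounted and writable.
df -h /mnt/sl1-backups
touch /mnt/sl1-backups/.write_test && rm /mnt/sl1-backups/.write_test

# Optionally make the mount persistent across reboots via /etc/fstab.
echo "<NFS server IP>:/backups /mnt/sl1-backups nfs defaults,_netdev 0 0" | sudo tee -a /etc/fstab
```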
Understanding DA caching in a CUG

We have a situation where a single DA aligned with a root device caches data about component devices, which it then uses to populate metrics for both the root and component devices. Does anyone have more detail on what happens, at a CUG level, if the collector aligned to the root device becomes unavailable? How does another collector in the CUG know that it needs to start caching data (is that via config push?), and does the whole DCM tree need to be rebuilt but aligned with a different collector? Equally, is there any smarter way to avoid the "Device failed availability" event for the child devices, which is only occurring because the cached data from the root device is not available (and not because the child devices really have an availability problem)?
Week of April 7th, 2025 - Latest KB Articles and Known Issues

A set of Knowledgebase Articles published last week is listed below. All KBAs can be searched via Global Search on the Support Portal and filtered by various components like product release.
ServiceNow Events SyncPack v1.2.1

Hello,

We are pleased to announce an update to the ServiceNow Events SyncPack. Version 1.2.1 includes two changes that address an issue that caused lock documents to be created but not cleaned up:

- Added the new "Remove Lock Documents" application, which allows for the removal of stale lock documents in Couchbase that were created by the "Process and Cache SL1 Events" application. ScienceLogic recommends that you schedule the application to run every five minutes with the default configurations. Fifty thousand lock documents are removed in every application run; this could take, for example, an hour and 30 minutes when removing one million lock documents. Running this application will not impact other applications running in PowerFlow.
- The "Process and Cache SL1 Events" application now uses a Redis lock instead of a Couchbase lock. (Case: 00507358)

This update is now available on the Support Portal downloads page.

Thank you,
Release Management
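For context on the second change, a distributed lock in Redis is typically built on an atomic SET with NX and EX flags, which either claims a key or fails if another worker already holds it. The sketch below is a generic illustration of that pattern using redis-cli; it is not the SyncPack's actual implementation, and the key name and timeout are made up for the example.

```
# Generic Redis lock pattern (illustrative only, not the SyncPack's code).
# NX = only set the key if it does not already exist;
# EX 60 = auto-expire after 60 seconds so a crashed worker cannot hold the
# lock forever.
redis-cli SET lock:process_events "worker-1" NX EX 60
# Returns OK if the lock was acquired, or (nil) if another worker holds it.

# Release the lock when done. In production you would verify the stored
# value matches your own token before deleting, usually via a small Lua script.
redis-cli DEL lock:process_events
```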
Nutanix Base Pack v106 Py3 PowerPack Release Notification

We are pleased to announce that the Nutanix Base Pack v106 Py3 PowerPack has been released. The download for this release can be found on the Support Portal under the PowerPack filename: https://support.sciencelogic.com/s/release-version/aBu0z000000XZXrCAO/nutanix-base-pack

Enhancements and Issues Addressed

The following enhancements and addressed issues are included in this release:

- The REST API endpoints for Nutanix Lifecycle Management have been updated. The current endpoints used by the PowerPack will stop working in a future Nutanix cluster upgrade, and the new endpoints return data in a different format.
- Snippet Dynamic Applications now utilize a custom steps library ("silo-low-code-steps-nutanix") and have also been updated to version 2.0.
- The following run book actions (RBA) now support Python 3:
  - Nutanix: Prism Central Classify Root Device Class
  - Nutanix: Prism Element Classify Root Device Class
- The following Dynamic Applications can now utilize the snippet framework and use snippet arguments to define, collect, and interpret data:

Host Configuration Dynamic Application(s)
  - Nutanix: Host Config & Disk Discovery

Storage Container Dynamic Applications
  - Nutanix: Storage Container Capacity and Usage Stats
  - Nutanix: Storage Container Config
  - Nutanix: Storage Container I/O Stats
  - Nutanix: Storage Containers Config

CVM Dynamic Applications
  - Nutanix: Cluster Health Summary Stats
  - Nutanix: Controller VM Discovery
  - Nutanix: CVM Config
  - Nutanix: CVM I/O Stats

Stats Dynamic Applications
  - Nutanix: Cache Usage Stats
  - Nutanix: Cluster I/O Stats
  - Nutanix: Controller I/O Stats
  - Nutanix: Hypervisor I/O Stats
  - Nutanix: Storage Capacity and Usage Stats

Block Dynamic Applications
  - Nutanix: Block Config & Host Discovery
  - Nutanix: Cluster Config & Block Discovery
  - Nutanix: Replication Stats

Host Dynamic Applications
  - Nutanix: Host Cache Stats
  - Nutanix: Host I/O Stats
  - Nutanix: Host Storage Stats
  - Nutanix: Host System Stats
  - Nutanix: Cluster Stats

VMware Aggregator Dynamic Application(s)
  - Nutanix: VMware Aggregator Config

VM Dynamic Applications
  - Nutanix: Container Workload VMs Config
  - Nutanix: Host Workload VMs Config
  - Nutanix: VM Config
  - Nutanix: VM I/O Stats
  - Nutanix: Workload VMs Discovery
  - Nutanix: Workload Group Discovery

Storage Pool Dynamic Applications
  - Nutanix: Storage Pool Capacity and Usage Stats
  - Nutanix: Storage Pool Config
  - Nutanix: Storage Pool I/O Stats
  - Nutanix: Storage Pool Group Discovery

Disk Configuration Dynamic Applications
  - Nutanix: CVM Disks Config
  - Nutanix: Disk Config
  - Nutanix: Disk I/O Stats

LCM Entities Dynamic Applications
  - Nutanix: LCM Cluster Config
  - Nutanix: LCM Config
  - Nutanix: LCM Host Config

Discovery Dynamic Applications
  - Nutanix: Prism Central Config
  - Nutanix: Prism Central LCM Config
  - Nutanix: Prism Element Config & Discovery
  - Nutanix: Prism Elements Discovery

Alert Dynamic Applications
  - Nutanix: Storage Pools Discovery
  - Nutanix: Storage Container Discovery
  - Nutanix: Storage Container Events
  - Nutanix: Storage Pool Events
  - Nutanix: Cluster Events
  - Nutanix: CVM Events
  - Nutanix: Disk Events
  - Nutanix: Host Events
  - Nutanix: VM Events
  - Nutanix: Prism Central Events

Virtual Storage Dynamic Applications
  - Nutanix: Storage Pools Discovery
  - Nutanix: Storage Pool Capacity and Usage Stats
  - Nutanix: Storage Pool Config
  - Nutanix: Storage Pool Events
  - Nutanix: Storage Container Discovery
  - Nutanix: Storage Pool I/O Stats
  - Nutanix: Storage Container Capacity and Usage Stats
  - Nutanix: Storage Container Config
  - Nutanix: Storage Container Events
  - Nutanix: Storage Container I/O Stats
  - Nutanix: Container Workload VMs Config
Block Appliance Dynamic Applications
  - Nutanix: Block Config & Host Discovery
  - Nutanix: Host Workload VMs Config
  - Nutanix: LCM Host Config
  - Nutanix: Host Cache Stats
  - Nutanix: Controller VM Discovery
  - Nutanix: Host System Stats
  - Nutanix: Host Storage Stats
  - Nutanix: Host Config & Disk Discovery
  - Nutanix: Host I/O Stats
  - Nutanix: Host Events
  - Nutanix: CVM Events
  - Nutanix: CVM Config
  - Nutanix: CVM I/O Stats
  - Nutanix: CVM Disk Config
  - Nutanix: Storage Containers Config

Workload Dynamic Applications
  - Nutanix: Workload VMs Discovery
  - Nutanix: VM Events
  - Nutanix: VM Config
  - Nutanix: VM I/O Stats
  - Nutanix: Disk Events
  - Nutanix: Disk Config
  - Nutanix: Disk I/O Stats

Cluster Dynamic Applications
  - Nutanix: Controller I/O Stats
  - Nutanix: Health Check Run Config
  - Nutanix: LCM Cluster Events
  - Nutanix: Cluster Events
  - Nutanix: Replication Stats
  - Nutanix: License Status Config
  - Nutanix: Cluster Config & Block Discovery
  - Nutanix: LCM Entities Cache
  - Nutanix: Collection Cache
  - Nutanix: Storage Capacity and Usage Stats
  - Nutanix: Workload Group Discovery
  - Nutanix: Cache Usage Stats
  - Nutanix: Cluster Health Summary Stats
  - Nutanix: Health Check Catalog Cache
  - Nutanix: Cluster I/O Stats
  - Nutanix: VMware Aggregator Config
  - Nutanix: Storage Pool Group Discovery
  - Nutanix: Cluster Stats
  - Nutanix: LCM Config
  - Nutanix: Hypervisor I/O Stats

Root Dynamic Applications
  - Nutanix: Prism Central LCM Config
  - Nutanix: Prism Elements Discovery
  - Nutanix: Prism Central Config
  - Nutanix: Prism Element Config & Discovery
  - Nutanix: Prism Central Events

Please refer to the Nutanix Base Pack v106 Py3 PowerPack File Details in the PowerPacks section of the Support Portal for all information pertaining to the Nutanix Base Pack v106 Py3 PowerPack Support Status, Minimum SL1 Version, Solution Information, and Pricing Information. The Nutanix Base Pack v106 Py3 PowerPack Release File Details also contains links to the Release Notes, Manual, and PowerPack Info Report. Issues Addressed in the Nutanix Base Pack v106 Py3 PowerPack Release can be found in the Release Notes.
SL1 Ibiza 12.3.3 Release Notification

We are pleased to announce that SL1 Ibiza 12.3.3 is now available. The release and documentation can be accessed using the following link: https://support.sciencelogic.com/s/release-file/aBtVL0000000tED0AY/1233

If you are planning to consume SL1 Ibiza 12.3.3, be advised of the following:

- The 12.3.3 release is available only as a patch; there is no ISO version.
- You can upgrade to this release directly from the following releases:
  - SL1 12.3.0 through 12.3.2
  - SL1 12.2.1.1 through 12.2.6
  - SL1 12.1.2, if all of your SL1 appliances have been converted to OL8
- NOTE: If you are on 12.1.2, you should upgrade directly to 12.3.3 without consuming the 12.2.x releases. If you are on 12.1.0.2 or 12.1.1, you can upgrade to 12.1.2, convert to OL8, and then upgrade directly to 12.3.3 without consuming the 12.2.x releases.
- 12.2.x and 12.3.0 STIG-compliant users can upgrade to this release. Users who are on an 11.x MUD system cannot upgrade directly to this release; they must first follow the approved conversion process from 11.x MUD to 12.2.1.1 STIG and then upgrade to 12.3.3 STIG. For more information, see the section on STIG Support in the SL1 Ibiza 12.3.3 release notes.
- AWS deployments that are using Aurora 3 can upgrade to this release. If you are currently deployed using Aurora 2, you can upgrade to this release, but you must perform a post-upgrade Aurora 2 to 3 conversion.
- SL1 12.3.3 is Department of Defense Information Network (DoDIN)-certified.

For more information, see the SL1 Ibiza 12.3.3 release notes.
🌟 Nexus Member Top Contributors – March 2025 🌟

Honoring the Champions of Our Community

March was a month full of vibrant conversations, insightful contributions, and outstanding support within the Nexus Community. We're thrilled to spotlight the members whose voices made the biggest impact!

🙌 Thank You to Our March Top Contributors

To everyone who jumped in to answer questions, offered thoughtful feedback, or simply shared their expertise — we see you, and we appreciate you. Your generosity and collaboration are what fuel the heart of this community.

💡 Why Your Contributions Make a Difference

Every comment, suggestion, and solution you post creates a ripple effect. You're helping others troubleshoot, discover new possibilities, and make the most of their experience with us. Your dedication continues to elevate the community for everyone involved.

👏 Shoutout to These Standout Contributors

Here are the members who truly went above and beyond in March:

- Most Page Views: teppotahkapaa, tied with Mani
- Most Replies: jamesramsden, closely followed by Colin, TexPaul & Santeri
- Most Idea Votes Given: teppotahkapaa, followed by SamualVick & Mani
- Most Idea Votes Received: bmcsween, followed by Mani

🚀 Let's Keep the Momentum Going! 🚀

We encourage everyone to stay active, share insights, and continue making this space a go-to resource. If you're new here, don't hesitate to jump in—your voice matters!

Once again, thank you for making March such a fantastic month. We can't wait to see what we accomplish together in April!

Best Regards,
The Nexus Community Team
Restorepoint RP 5.6 20250409 Release

Hello,

We are pleased to announce the release of Restorepoint RP 5.6 20250409 on April 9, 2025. This release contained the following:

- The ability for administrators to define a global alert for backup sizes. This allows the administrator to configure a defined size for backups which, if exceeded, sends an email alert to the appliance owner.
- Various updates to increase the security of the Restorepoint appliances.

Thank you,
Release Management
Mastering Terminal Security: Why TMUX Matters in Modern Enterprise Environments

In the evolving landscape of enterprise IT, security isn't a feature—it's a foundation. As organizations grow more distributed and systems become increasingly complex, securing terminal sessions accessed through SSH is a mission-critical component of any corporate security posture. One tool rising in prominence for its role in fortifying SSH access control is tmux, and it's more than just a handy utility—it's a security enabler. As part of ScienceLogic's harden-the-foundation initiative, the SL1 platform on release 12.2.1 or later introduces improved tmux session control capabilities to meet industry-leading security standards.

ScienceLogic TMUX resources:
- SL1 Release Notes
- KB Article: What is TMUX and why is it now default on SL1?
- KB Article: Unable to Copy or Paste Text in SSH Sessions
- TMUX Configuration Cheat Sheet
- Increase ITerm TMUX Window

What is TMUX?

tmux (short for terminal multiplexer) is a command-line tool that allows users to open and manage multiple terminal sessions from a single SSH connection. Think of it as a window manager for your terminal—enabling users to split screens, scroll through logs, copy/paste content, and manage persistent sessions across disconnects. tmux is now running by default when you SSH into an SL1 system. This isn't just a user experience enhancement—it's a strategic security upgrade aligned with best practices in access control and session management. (A short sketch of the everyday commands this enables appears just before the final thoughts below.)

Why TMUX Matters for Security

Security teams understand that idle or abandoned SSH sessions pose real risks—whether from unauthorized access, lateral movement, or session hijacking. The introduction of tmux into the SL1 platform adds several critical controls to mitigate these risks:

- Automatic Session Locking: Idle sessions lock automatically after 15 minutes or immediately upon unclean disconnects. This dramatically reduces the attack surface of unattended sessions.
- Session Persistence and Recovery: tmux can reattach to previous sessions on reconnect, preserving state without sacrificing security—great for admin continuity.
- Supervised Access: With tmux, authorized users can monitor or even share terminal sessions for auditing or support—without giving up full shell access.

Value for Platform Teams and Security Officers

For platform and security leaders, enabling tmux by default means:

- Stronger Compliance Posture: Session supervision, activity auditing, and inactivity timeouts align with frameworks like NIST 800-53, CIS Controls, and ISO 27001.
- Reduced Operational Risk: Dropped sessions and orphaned shells are automatically managed—minimizing both user frustration and security exposure.
- Enhanced Administrator Efficiency: Features like scroll-back search, split panes, and built-in clipboard support streamline complex workflows across systems.

In essence, tmux isn't just helping sysadmins—it's helping CISOs sleep better.

Risks of Not Using TMUX

Choosing not to enable or enforce tmux in enterprise environments comes with hidden but serious risks:

- Unsecured Idle Sessions: Without timeouts or auto-locks, sessions left open are ripe for misuse or compromise.
- Poor Session Traceability: Lack of visibility into session states and handoffs creates audit and accountability gaps.
- Reduced Resilience: A dropped SSH connection can lead to lost work, misconfigurations, or operational inefficiencies—especially in multi-user environments.

In contrast, tmux provides a clean, consistent, and secure environment for every shell session—backed by real-world enterprise needs.
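To make the workflow concrete, here is a quick sketch of the everyday tmux commands behind the features described above. These are stock tmux defaults (prefix key Ctrl-b); SL1's bundled tmux configuration may rebind keys or adjust timeouts, so treat this as general orientation rather than SL1-specific documentation.

```
# Sessions: create, list, detach, and reattach (state survives disconnects).
tmux new -s admin          # start a named session
tmux ls                    # list running sessions
tmux attach -t admin       # reattach after a dropped SSH connection
# Ctrl-b d                 # detach from the current session

# Panes: split the screen and move between panes.
# Ctrl-b %                 # split vertically
# Ctrl-b "                 # split horizontally
# Ctrl-b <arrow key>       # move between panes

# Scroll-back and copy mode: review long command output or logs.
# Ctrl-b [                 # enter copy mode; scroll with arrows/PgUp, press q to exit

# Supervised access: a second authorized user observes the same session.
tmux attach -t admin -r    # read-only attach
```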
Final Thoughts

The addition of tmux to SL1's default SSH environment reflects a broader industry trend: security is shifting left, right into the command line. For platform teams, this isn't just a convenience—it's a call to action. Enabling tmux is a simple yet powerful way to align with security policies, improve admin workflows, and fortify your infrastructure.
Check out our NEW Pro Services Blog for great SL1 Tips and Tricks!
