Recent Content
Integrating Okta Authentication with ScienceLogic PowerFlow
Enabling SSO Authentication for PowerFlow

ScienceLogic PowerFlow supports external authentication via Dex, which bridges OpenID Connect (OIDC) Identity Providers (IdPs) such as Okta to PowerFlow. This guide outlines how to configure Okta as an OIDC IdP, enabling user- and group-based login.

Important: All third-party IdP configuration should be performed by the organization's Identity and Access Management (IAM) team and must comply with corporate security policies.

SaaS Deployments: This article applies to on-premises instances of PowerFlow. For SaaS-hosted PowerFlow, submit a request via the ScienceLogic Support Portal with your Dex connector configuration.

Overview

This guide walks through the following:
- Setting up Okta as an OIDC IdP
- Making Okta's /default authorization server accessible to Dex
- Optionally including group claims
- Configuring the Dex connector
- Supporting group and non-group logins
- Testing and validation
- Summary

Prerequisites

Before beginning, ensure the following are available:
- Access to the Okta admin console (e.g. <org>.okta.com)
- A DNS name or IP for the Dex callback endpoint (e.g. https://<IP>:5556/dex/callback)
- An Okta administrator account

Step 1: Register a Web App in Okta

Start by creating a new OIDC app integration in Okta:
1. Navigate to Applications → Create App Integration.
2. Choose:
   - Sign-in method: OIDC – OpenID Connect
   - Type: Web App
3. Configure the application:
   - App Integration Name: PowerFlow
   - Grant type: Authorization Code (default)
   - Sign-in redirect URI: https://<IP>:5556/dex/callback
   - Sign-out redirect URI: https://<IP>:5556/logged-out.html
   - Controlled access: skip group assignment for now
4. Click Save and note the Client ID and Client Secret; both are required to configure the Dex connector.

Step 2: Assign Users or Groups

Click Assignments within the newly created application and assign the application to individual users or pre-created groups.

Step 3: Allow Access to Okta's /default Authorization Server

Okta's authorization server (/oauth2/default) requires policy rules to allow access by external clients like Dex and to allow the groups scope to be included in the token. To configure access:
1. Go to Security → API → Authorization Servers → default → Access Policies.
2. Add or edit a policy rule:
   - Name: Allow Dex
   - Assign to: All clients, or a specific client
   - Grant types: Implicit and Authorization Code
   - Scopes: ensure openid, email, profile, and optionally groups are allowed
3. Save and activate the rule.

Step 4: Include Group Claims in ID Tokens (Optional)

If group-based access is desired, a custom claim must be added:
1. Go to Security → API → Authorization Servers → default → Claims.
2. Click Add Claim:
   - Name: groups
   - Token: ID Token
   - Enable Always include in token
   - Value type: Groups
   - Filter: .* (to include all groups, or restrict as needed)
   - Include in: Any scope, or the groups scope
3. Save the claim.

This ensures group data is included in the tokens Dex uses to perform identity mapping.

Step 5: Configure the Dex Connector

Edit /etc/iservices/isconfig.yml and append the following:

DEX_CONNECTORS:
  - type: oidc
    id: okta
    name: Okta
    config:
      issuer: https://[instance].okta.com/oauth2/default
      clientID: [your-client-id]
      clientSecret: [your-client-secret]
      redirectURI: https://[powerflow-ip-or-host]:5556/dex/callback
      basicAuthUnsupported: true
      insecureSkipEmailVerified: true
      insecureEnableGroups: true
      userNameKey: email
      scopes:
        - openid
        - profile
        - email
        - groups

Note: Omit groups from the scopes list if you are not using group-based access. PowerFlow authenticates users based on their email address.
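Before redeploying, it can help to confirm that the issuer value in the connector configuration is reachable from the PowerFlow host and serves OIDC metadata. This is a minimal sketch, assuming outbound HTTPS access from the appliance; the OKTA_ORG value is a placeholder for your own org name.

```bash
# Fetch the issuer's OIDC discovery document (standard path for any OIDC provider,
# including Okta's /oauth2/default authorization server).
OKTA_ORG="yourcompany"   # placeholder - your Okta org/instance name
curl -s "https://${OKTA_ORG}.okta.com/oauth2/default/.well-known/openid-configuration" \
  | python3 -m json.tool | head -n 20
```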
Explanation of Key Fields

- issuer: Must match the exact Okta authorization server URL.
- redirectURI: Must match the redirect URI registered in the Okta app.
- scopes: Include groups only if using group-based claims.
- userNameKey: Determines how the username is derived (e.g., email).

Apply Configuration

1. Redeploy the Docker stack:

# Remove stack
docker stack rm iservices

# Wait for shutdown
docker service ls

# Redeploy
docker stack deploy -c /opt/iservices/scripts/docker-compose.yml iservices --resolve-image never

# Verify
docker service ls

2. Monitor the Dex logs:

docker service logs -f iservices_dexserver

Step 6: PowerFlow User and Group Configuration

In the PowerFlow UI:
1. Go to Admin Panel → Add User Group.
2. Group: match the Okta group name (e.g., Operator), or use the email address where group-based access is not configured in the IdP.
3. Assign permissions.
4. Click Create User Group.

Step 7: Testing and Validation

1. Verify authentication: access the PowerFlow login page and initiate OIDC login. Confirm redirection to Okta and successful login.
2. Monitor the Dex logs:

docker service logs -f iservices_dexserver

Check for a success message such as:

login successful: connector "OKTA", username="jane.doe@domain.com", email="jane.doe@domain.com", groups=["Operator"]

Step 8: Troubleshooting

- Groups scope not exposed: check that the policy rule under the default authorization server includes the appropriate grant types and scopes. A quick way to confirm the groups claim is actually present in the ID token is shown in the snippet after the Summary below.
- Redirect URI mismatch: check that Dex's redirectURI exactly matches the Okta app's redirect URI.

Refer to Step 7.2 above for details on reviewing the Dex logs.

Summary

Integrating Okta (or another OIDC IdP) with PowerFlow via Dex enhances security, simplifies access, and supports both user- and group-based login. By following the steps above and ensuring proper authentication setup, organizations can integrate SSO effectively while maintaining secure access controls. For assistance, contact ScienceLogic Support. For further details on PowerFlow authentication and Dex connectors, refer to the official ScienceLogic documentation.
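If group-based login is not working, one way to troubleshoot is to inspect the ID token itself and confirm the groups claim is present. The sketch below is a generic helper, not part of the PowerFlow tooling; the ID_TOKEN value is a placeholder for a token captured during testing (for example from your browser's developer tools during the Okta redirect).

```bash
# Hypothetical helper: decode the payload of a captured ID token and print its claims.
ID_TOKEN="eyJhbGciOi..."   # placeholder - paste the real token here
echo "$ID_TOKEN" | cut -d '.' -f 2 | python3 -c '
import base64, json, sys
payload = sys.stdin.read().strip()
payload += "=" * (-len(payload) % 4)   # restore base64url padding
print(json.dumps(json.loads(base64.urlsafe_b64decode(payload)), indent=2))
'
```

If the decoded payload contains a groups array with the expected Okta group names, the claim configuration from Step 4 is working and the issue is more likely on the Dex or PowerFlow side.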
ScienceLogic's PowerFlow Training: Explore Built-In Integration and Custom Automation Capabilities

PowerFlow is ScienceLogic's integration platform designed to seamlessly extract, transform, and load data between SL1 and third-party tools. Whether you're implementing built-in integrations or creating custom automations, PowerFlow empowers you to streamline workflows, integrate systems, and enhance IT operations.

Find the Right Training for Your Needs

Explore training options based on your role and objectives:

- PowerFlow: ScienceLogic's Bi-Directional Task Execution Platform (1 hour)
  Discover PowerFlow's core functionality in this introductory learning path, covering key features, configuration, navigation, and troubleshooting.
- PowerFlow Integrations: ServiceNow (4 hours)
  Master PowerFlow's integration with ServiceNow. This comprehensive learning path includes all content from the introductory PowerFlow course, then dives deeper into implementing SL1 and ServiceNow integration use cases.
- PowerFlow: Software Development Kit (SDK) (1 hour)
  For advanced users, this training course shows how to use the PowerFlow Software Development Kit (SDK) to build custom SyncPacks for automation, system integration, and workflow enhancements.

Access Training Anytime, Anywhere

ScienceLogic University is ScienceLogic's on-demand learning portal. Log in or create an account to access these PowerFlow training options and other essential topics.
Six Years of Trust—and It's All Thanks to You

We're thrilled to share some exciting news: ScienceLogic SL1 has been named a TrustRadius Top Rated product for the sixth year in a row! 🎉

This recognition means the world to us—not just because it's based entirely on customer feedback, but because it reflects something deeper: your trust. Trust in our platform, our people, and our partnership.

"Customer success is our north star. Being recognized by TrustRadius for the sixth year in a row shows that we're not just delivering technology—we're helping our customers achieve real, measurable outcomes." — Wendy Wooley, VP of Customer Experience

This year, SL1 earned Top Rated honors in eight key categories that matter most to modern IT teams:

- AIOps
- Observability
- Event Monitoring
- System Monitoring
- Network Monitoring
- IT Infrastructure Monitoring
- IT Operations Analytics
- Application Performance Management (APM)

Your feedback is our compass—it guides how we innovate, where we focus, and how we continue to evolve SL1 to meet your needs. So thank you for sharing your stories, your challenges, and your wins. We're honored to be on this journey with you.

Here's to what we've built together—and what's next.
Optimising PowerFlow Integrations: Isolating Incident and CMDB Workloads

In complex IT environments, integrations like incident management and Configuration Management Database (CMDB) synchronisation are pivotal. ScienceLogic's PowerFlow platform offers robust capabilities to handle these integrations. However, to ensure optimal performance and prevent resource contention, it's crucial to configure dedicated steprunners and queues for different workloads.

This article discusses on-premises instances of PowerFlow. If you are using a SaaS-hosted instance of PowerFlow, please submit a service request via the Support Portal outlining your requirements. The relevant team will then review your request and discuss the necessary changes to be made on your SaaS instance of PowerFlow.

Understanding the Challenge

Incident management and CMDB synchronisation have distinct characteristics:

- Incident management typically involves lightweight, high-frequency tasks that require rapid processing to maintain real-time responsiveness.
- CMDB synchronisation often deals with bulk data operations, such as syncing large volumes of configuration items, which are resource-intensive and time-consuming.

Running both integrations on the same steprunner can lead to performance issues. For instance, a heavy CMDB sync might consume significant resources, delaying the processing of critical incident tasks.

Implementing Dedicated Steprunners and Queues

To address this, PowerFlow allows steprunners to be configured to listen to specific queues. By assigning separate queues for incident and CMDB tasks, you can isolate their processing and allocate resources appropriately.

Example Configuration

Here's how you might define dedicated steprunners in your docker-compose.override.yml.

Incident steprunner:

steprunner-incident:
  image: sciencelogic/is-worker:latest
  hostname: "incident-{{.Task.ID}}"
  deploy:
    resources:
      limits:
        memory: 2G
    replicas: 10
  environment:
    user_queues: 'incident_queue'
    worker_threads: 4

CMDB steprunner:

steprunner-cmdb:
  image: sciencelogic/is-worker:latest
  hostname: "cmdb-{{.Task.ID}}"
  deploy:
    resources:
      limits:
        memory: 4G
    replicas: 5
  environment:
    user_queues: 'cmdb_queue'
    worker_threads: 2

In this setup:

- user_queues: Assigns each steprunner to a specific queue (incident_queue or cmdb_queue), ensuring isolation of workloads.
- worker_threads: Defines how many concurrent tasks each steprunner container can process. Higher for incidents, because incident syncs are typically lightweight and frequent; lower for CMDB to reduce memory contention, since CMDB data is often bulkier and more complex.
- deploy.resources.limits.memory: Caps how much memory each steprunner container can use. This helps prevent individual steprunners from consuming excessive memory, which is especially important when running many containers on shared infrastructure. Example: 2G for incidents (moderate), 4G for CMDB (higher due to heavier payloads).
- deploy.replicas: Specifies how many containers to run for each steprunner service. More replicas for incidents to handle high throughput; fewer for CMDB, since each task may take longer and use more resources.

A short sketch of redeploying and verifying the new services is shown after the next section.

Benefits of Isolation

- Performance optimisation: Ensures that resource-heavy CMDB tasks don't impede the processing of time-sensitive incident tasks.
- Scalability: Allows independent scaling of steprunners based on the workload demands of each integration.
- Resource management: Facilitates fine-tuned allocation of system resources, reducing the risk of bottlenecks and failures.
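Once the override has been merged into the compose file your deployment actually uses (following your normal PowerFlow change process), the stack can be redeployed and the new services verified. This is only a sketch; the paths and stack name assume a default on-premises PowerFlow install.

```bash
# Redeploy the stack so the new steprunner services are created
# (same redeploy command used elsewhere in PowerFlow documentation).
docker stack deploy -c /opt/iservices/scripts/docker-compose.yml iservices --resolve-image never

# Confirm the dedicated steprunners are running with the expected replica counts.
docker service ls | grep steprunner
```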
Monitoring and Adjustments

Regular monitoring is essential to maintain optimal performance:

- Queue lengths: Persistent growth in queue lengths may indicate the need for additional steprunners or increased thread counts. A quick way to spot-check queue depth is shown after the Final Thoughts below.
- Resource utilisation: Monitor CPU and memory usage to prevent over-utilisation.
- Error rates: High error rates might necessitate adjustments in configurations or error-handling mechanisms.

Final Thoughts

By strategically configuring dedicated steprunners and queues for incident and CMDB integrations, you can enhance the efficiency, reliability, and scalability of your PowerFlow environment. This approach ensures that each integration operates within its optimal parameters, delivering better performance and resource utilisation.
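PowerFlow queues are backed by RabbitMQ, so queue depth can also be inspected directly on the broker. This is a minimal sketch: the iservices_rabbitmq service name is an assumption based on a default install, and the command must be run on the node actually hosting the RabbitMQ container.

```bash
# List queue names, message counts, and consumer counts on the PowerFlow broker.
# Adjust the container name filter to match your deployment.
docker exec "$(docker ps -q -f name=iservices_rabbitmq)" \
  rabbitmqctl list_queues name messages consumers
```

Watching these counts before and after a tuning change (more replicas, more worker threads) is a simple way to confirm the adjustment had the intended effect.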
How to Generate a PowerPack Version Report in SL1 to Track Updates and Changes

To successfully merge your custom changes into a new PowerPack version, you'll need to understand how to identify the differences between versions. This process involves:

- Comparing the original PowerPack version (used to create your custom version) to your customized version, to identify custom changes.
- Comparing the original PowerPack version to the new ScienceLogic-released version, to identify new features and updates.

After understanding these deltas, you can determine whether:

- The new version already includes features that cover your customizations, or
- You need to merge and reapply custom changes onto the new version.

Let's walk through the process using an example and then detail the steps to generate and compare PowerPack reports in SL1.

Example Scenario

Suppose your team customized PowerPack version 112 and named it 112.1. ScienceLogic has since released version 115. To upgrade your custom PowerPack to a new branch (say, 115.1), you'll need to:

✅ Compare 112 vs 112.1 to identify what was customized.
✅ Compare 112 vs 115 to identify what's new from ScienceLogic.
✅ Review release notes for versions 113, 114, and 115 to spot added features or fixes.
✅ Decide which customizations are still needed and merge them into 115.1.

Step 1: Generate PowerPack Information Reports

SL1 provides a built-in report to list the contents of a PowerPack version. Here's how to generate it:

1️⃣ Ensure the PowerPack version you wish to report on is installed in your SL1 stack.
⚠️ Important: Do this in a non-production or test environment, as installing older versions may affect data or configurations.

2️⃣ Navigate to the Reports section in SL1:
- Go to Reports (Navigation Bar).
- Under Run Report > EM7 Administration, select Power-Pack Information.
- Choose the specific PowerPack version to report on.
- Select the Excel format as the output and click Generate.

3️⃣ Save the generated Excel file.

🔁 Repeat this process for each version you wish to compare (e.g., original, customized, and new versions).

Step 2: Compare PowerPack Versions Using Excel

Now that you have the reports:

1️⃣ Open both Excel files (e.g., 112.xlsx and 112.1.xlsx) in Excel.

2️⃣ If the Developer tab isn't visible:
- Click File > Options > Customize Ribbon.
- Under Main Tabs, enable Developer and click OK.

3️⃣ Under the Developer tab, click COM Add-ins. Check the Inquire add-in and enable it.

4️⃣ You should now see an Inquire tab:
- Select Compare Files.
- Choose the two files you want to compare.
- A Spreadsheet Compare window will open showing the differences.

If you prefer a command-line comparison instead of Excel, a sketch is included at the end of this article.

💡 Pro Tip:
🔹 Ignore differences in fields like ID and Edit Date – these are environment-specific and reflect the PowerPack installation date.
🔹 To reduce confusion, consider hiding or removing these columns in Excel before performing the comparison.

🔍 Instead, focus on meaningful differences, such as additional or removed objects, including:
- Dynamic Apps Summary
- Dynamic Apps Details
- Event Policies
- Device Classes
- Reports
- Dashboard Widgets
- Dashboards
- SL1 Dashboards
- ScienceLogic Libraries
- Actions
- Credentials

Also look for changes to version numbers or descriptions for these objects, indicating feature updates or enhancements. This focused comparison helps ensure you're identifying functional changes rather than irrelevant metadata.

Final Thoughts

By systematically generating and comparing PowerPack reports:

✅ You can clearly identify what customizations were made and what changes the new version introduces.
✅ This helps you confidently plan your PowerPack upgrade path and minimize risks.
✅ Review release notes for intermediate versions to avoid duplicating enhancements already included by ScienceLogic.
✅ Always perform this analysis in a non-production environment first.

With this approach, you'll be able to efficiently track PowerPack updates and changes while maintaining your critical customizations.
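For teams that prefer the command line over Excel's Inquire add-in, the same comparison can be approximated by converting each report to CSV and diffing the results. This is only a sketch of an alternative approach: it assumes the csvkit package (which provides in2csv) is installed and uses the example file names from this article; any XLSX-to-CSV converter works in its place.

```bash
# Convert each PowerPack Information report to CSV, then diff them.
# Requires csvkit (pip install csvkit); use --sheet if the export has multiple sheets.
in2csv 112.xlsx > 112.csv
in2csv 112.1.xlsx > 112.1.csv

# Review the differences; remember to ignore environment-specific fields
# such as ID and Edit Date, as noted in the Pro Tip above.
diff -u 112.csv 112.1.csv | less
```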
Best Practices for Device Discovery in SL1

Effective device discovery is a foundational step in building a robust monitoring environment in ScienceLogic SL1. This guide focuses on best practices when performing manual or guided discoveries via the SL1 user interface. For a full overview of the discovery process, refer to the official documentation:
👉 SL1 Discovery Process Documentation

Key Best Practices

1. DNS Configuration
When discovering devices by hostname, ensure that DNS is properly configured and functional on the collector. Improper DNS settings can prevent successful device resolution and discovery. (A couple of quick pre-discovery checks are shown at the end of this article.)

2. Use CIDR Notation Thoughtfully
If you're using CIDR notation to define the discovery range:
- Stick with smaller ranges, such as /24, to limit the scope.
- Large CIDR blocks can overwhelm the discovery process and slow down the collector.

3. Avoid Overloading the Collector
Attempting to discover too many devices in one session can lead to performance degradation. A general rule of thumb (on a medium-sized collector):
- SNMP: Up to 1,000 devices
- SSH (Linux): Around 500 devices
- PowerShell (Windows): Around 100 devices

💡 Tip: For large-scale discovery, distribute the workload across multiple collectors or collector groups.

4. Preview Discovery Results
Before running a full discovery session, run it with "Model Devices" deselected. This allows you to see what will be discovered without impacting device modelling or performance.

5. Test Credentials First
Always use the Credential Tester tool to validate new credentials before launching a discovery session. This ensures that:
- The collector can communicate with the target devices
- The credentials are correctly configured and accepted

Helpful Resources
🔐 Creating SL1 Credentials
🛠️ Using the Credential Tester
❗ Troubleshooting Discovery Issues
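Before launching a discovery session, a couple of quick checks from the collector's command line can save time. The hostname and subnet below are placeholders, and nmap may not be installed on every collector, so treat this as an optional sketch rather than a required step.

```bash
# Confirm the collector can resolve the hostnames you plan to discover.
dig +short switch01.example.com

# Optionally preview which hosts in a /24 respond before discovering the range
# (requires nmap; -sn performs a ping sweep without port scanning).
nmap -sn 10.10.20.0/24
```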
ScienceLogic Meraki Monitoring Best Practices

Hello all, I wanted to take a little time to share my thoughts as the Product Manager for the Meraki PowerPack. I believe we have a great solution for integration with Meraki's API, but I find that due to Meraki's focus on simpler management and monitoring, a slight shift in mindset may be required to extract the most value. Unfortunately, when I meet with some of you, I find you may be unaware of some of our best practices that would really improve your experience! A condensed version of this information can be found in the PowerPack manual.

Some context to consider as you read:

- Meraki is not the typical power-user tool you're used to, although it is adding features constantly and at a rapid pace. It is not intended to have every knob and lever. It is intended to be simple and easy.
- Meraki monitoring is entirely through the cloud API. SNMP monitoring or SSH connections into appliances is not a typical workflow, and doing so provides little benefit beyond using the REST API. Meraki really doesn't seem to want you to do this.
- Meraki's API does not expose all of the data you may expect. However, in my experience, the Meraki API is one of the best APIs out there. This is not because of breadth and depth of data, but due to Meraki's focus on being "API first", having proper documentation, and how quickly they iterate on their API's features.

ScienceLogic Meraki Best Practices

Don't expect to have all the data for everything.
Meraki does not expose everything in the API, and they don't intend tools like ScienceLogic's to, in effect, replicate their database into SL1. As Meraki abstracts some of the complexities away from the operator, reconsider what your goals are and what you want to monitor. For example, do you care about CPU utilization for an AP, or do you just care about the overall health of the AP or the network as a whole?

Don't expect per-minute collections for interfaces.
The Meraki API will not support that much data.

Don't merge devices unless you have static IP addresses.
Meraki recommends you use DHCP. Meraki also doesn't expose much information through SNMP anyway. If you merge physically discovered Meraki devices with components discovered through the API and IP addresses change, you will have a bunch of devices incorrectly merged. Perhaps discovering via hostname is an option for you, but in general it is advised to just stick with component mapping from the API.

Use Email/Webhook alerts!
The Meraki PowerPack is designed very carefully not to hammer the Meraki API and surpass the fairly generous API rate limit. In theory SL1 could make up to 800,000 API calls per day per Meraki Org, and you'd be surprised how quickly SL1 can hit that if you try to collect everything all the time. Our PowerPack is designed to scale to over 100,000 devices on a single SL1. As such, we do not attempt to collect much data that is already alerted on with the built-in Meraki Alerts. Enable Meraki Alerts and configure them to be sent into SL1 and you will effectively double your monitoring coverage of Meraki with SL1. Our PowerPack is designed to provide you visibility into the things Meraki doesn't alert you to out-of-the-box.

Simplicity is key!
I don't know about you, but I think the best software is simple software. We avoid doing as many "custom" things as we can in the Meraki PowerPack, and we rely on core features of SL1 where possible to keep the integration stable and easy to support. Unfortunately, complexity couldn't be avoided entirely.
You'll find things like RBAs to create new DCM trees for each Meraki Organization and the "Request Manager" Dynamic Application, which is a complex mechanism that schedules and limits API calls to Meraki at a level of efficiency not possible without bespoke logic. Other than those items, you'll find that the Meraki PowerPack relies heavily on stock SL1 features, such as the following:

- SL1 allows you to select which DAs align to components when they are modeled, but does not enable different alignment based on device classes. As such, you may see some DAs align to devices where we don't expect to collect data (such as Uplink collections aligning to switches and APs, although Meraki does not provide uplink data for those devices).
- You will also find that device class alignment is straightforward and simple in the Meraki PowerPack. We utilize class identifiers 1 and 2 to provide three levels of classification. If a specific model matches a class identifier, we give it that device class. If the model doesn't match entirely, but it starts with characters that give us an idea as to what kind of device it is (MS for switch, MR for AP, etc.), we will give it a generic class for that kind of device. If none of the identifiers match, we will give it a generic Meraki class from the device component tab of the discovery Dynamic Application. Adding new device classes should be easy, but you also should never have to add your own due to this three-tier approach using basic SL1 features.
- Starting in version 115 of the Meraki PowerPack, most customization is handled in the credential. Some PowerPacks may use changes in the snippet code or even use thresholds as "toggles" for certain features. The goal with the Meraki PowerPack is to allow customization in a sustainable way. In v115, you will find more options to configure which API calls to enable, SSL cert verification, and of course selective discovery, all as options in the new "Universal" Credential subtype provided for Meraki.

Be kind to the API!
Think hard about what you really need to collect and monitor. As we get requests to collect more items from Meraki, we have no choice but to ship these to you in a disabled state. If you turn on every collection in the Meraki PowerPack, and you have more than a few thousand devices in a Meraki Organization, you are likely to hit the API rate limit quickly. Think hard about what you want to achieve and turn on collections selectively. You will find a handy guide in the PowerPack manual that lists out every Dynamic Application, what devices it collects data against, and the alignment and enablement status they default to out-of-the-box.

The API rate limit is shared between tools.
You may know that Meraki limits API calls per organization, but did you know that, according to the Meraki documentation, they also apply a rate limit based on source IP regardless of the organization you're querying? This means that if you are monitoring 10 or more organizations from the same IP address, you will have a lower rate limit per organization as they all share 100 calls per second. If you are an MSP monitoring multiple customer Meraki Orgs, keep this in mind! Also, if you are monitoring the same org from multiple tools, you are sharing the rate limit between them. If you have another monitoring tool, or even another SL1 querying the same Meraki Org, you may be causing the rate limit to go into effect prematurely.
If you have any concerns, navigate to the API Analytics page in the Meraki Dashboard and you will see all of the various API tools hitting that bucket. (A quick command-line way to watch the API response headers yourself is sketched at the end of this article.)

Selective Discovery
The Meraki PowerPack allows you to limit discovery to devices and networks with certain tags. Add the tags to your credential, and devices without those tags will not be modeled in SL1.

As always, I'm happy to chat about Meraki or our other integrations, so don't hesitate to schedule time through your account manager! Do you have any tips or tricks? Share them in the comments!
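If you want to see how a given API key behaves against the rate limit, a simple option is to call the Dashboard API directly and inspect the response headers. This is a generic sketch using the public v1 organizations endpoint; the MERAKI_API_KEY value is a placeholder for a key with read access.

```bash
# List organizations visible to the key and dump the response headers.
# On HTTP 429 (rate limited), Meraki includes a Retry-After header.
export MERAKI_API_KEY="xxxxxxxx"   # placeholder - never commit real keys
curl -sSL -D - -o /dev/null \
  -H "Authorization: Bearer ${MERAKI_API_KEY}" \
  https://api.meraki.com/api/v1/organizations
```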
A Big Thanks to You—Our Amazing Customers

At ScienceLogic, our customers and partners aren't just using our tech—you're shaping what's next. You're the reason we show up every day ready to build, improve, and reimagine IT operations. So this Customer Appreciation Day, we want to pause and just say: thank you. Thanks for trusting us. Thanks for pushing us with your ideas. And thanks for being the reason we're always reaching for better.

2025: What We've Built Together

This year's been all about leveling up—together. We've focused on making IT smarter, faster, and more self-sufficient. And your need for agility, speed, and innovation? That's been our compass. From boosting observability across complex environments to introducing Agentic AI inside SL1, it's all about helping IT teams move from reactive to proactive—and now, into the world of autonomy.

Here are just a few of the things we're proud to bring to the table:

- Agentic AI that doesn't just automate—it thinks and acts on its own, giving your team more time for big-picture work.
- Next-gen observability tools that go beyond alerts and help you understand the full story—what's happening, where, why, and what to do about it.
- More customer-driven product design—because your voice guides what we build, always.

Our Progress Starts with You

Every product release, every new feature, every late-night brainstorm—it all starts with you. Your feedback, your goals, your challenges. You push us to do more, and we're better for it. Whether you've been with us from the beginning or just joined the ScienceLogic community, we're so glad you're here. Your journey shapes ours, and we're beyond grateful for that.

From all of us at ScienceLogic: thank you for being part of this ride. We're excited to keep building the future with you. Thank you for being the best part of ScienceLogic!
How to Set Up an NFS Server for SL1 Backups

Backing up your ScienceLogic SL1 database is essential for ensuring data integrity and disaster recovery. One effective way to store backups is by setting up a Network File System (NFS) server. NFS allows you to share a directory across multiple machines, making it an ideal solution for centralized SL1 backups. This guide will walk you through the process of installing and configuring an NFS server to store SL1 backups.

Step 1: Install the NFS Server

Before setting up the NFS server, ensure that your Linux machine has the necessary NFS packages installed. If the nfs-server package is missing, you need to install it.

For RHEL, CentOS, Rocky Linux, or AlmaLinux:

sudo yum install -y nfs-utils

For Ubuntu or Debian:

sudo apt update
sudo apt install -y nfs-kernel-server

After installation, start and enable the NFS service:

sudo systemctl start nfs-server
sudo systemctl enable nfs-server

Verify the NFS server is running:

sudo systemctl status nfs-server

If it is not running, restart it:

sudo systemctl restart nfs-server

Step 2: Configure the NFS Server

Once NFS is installed, follow these steps to configure the shared directory for SL1 backups.

1. Create a backup directory:

sudo mkdir -p /backups
sudo chmod 777 /backups

2. Set a fixed port for mountd.

On Ubuntu/Debian, edit /etc/default/nfs-kernel-server and add:

RPCMOUNTDOPTS="--port 20048"

On RHEL/CentOS/Rocky/Oracle, edit /etc/sysconfig/nfs and add:

MOUNTD_PORT=20048

This ensures the mountd service always uses port 20048, making firewall configuration simpler and more secure.

3. Define NFS exports.

Edit the /etc/exports file to specify which clients can access the NFS share:

sudo vi /etc/exports

Add the following line, replacing <SL1 DB server IP> with the IP address of your SL1 database server:

/backups <SL1 DB server IP>(rw,sync,no_root_squash,no_all_squash)

This configuration allows the SL1 server to read and write (rw) to /backups, ensures data consistency (sync), and prevents permission issues.

4. Apply the NFS configuration.

Run the following command to apply the changes:

sudo exportfs -a

Restart the NFS service to ensure the changes take effect:

sudo systemctl restart nfs-server

Step 3: Configure Firewall Rules for NFS

If a firewall is enabled on your NFS server, you must allow NFS-related services. Run the following commands to open the necessary ports:

sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --reload

If the firewall-cmd command is not found on your device, use one of the following alternatives.

1. Using iptables (for RHEL, CentOS, Debian, or older distributions)

If your system uses iptables, you can manually allow NFS traffic with the following commands:

sudo iptables -A INPUT -p tcp --dport 2049 -j ACCEPT    # NFS
sudo iptables -A INPUT -p tcp --dport 111 -j ACCEPT     # Portmapper
sudo iptables -A INPUT -p tcp --dport 20048 -j ACCEPT   # Fixed mountd port
sudo iptables -A INPUT -p udp --dport 2049 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 111 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 20048 -j ACCEPT

# Save the rules
sudo iptables-save | sudo tee /etc/sysconfig/iptables

To restart iptables and apply the rules:

sudo systemctl restart iptables
2. Using UFW (for Ubuntu and Debian)

If your system uses ufw (Uncomplicated Firewall), enable NFS traffic with the following commands.

Enable UFW (if it is inactive):

sudo ufw enable

Allow the NFS, RPC, and mountd ports:

sudo ufw allow 2049/tcp    # NFS
sudo ufw allow 111/tcp     # rpcbind
sudo ufw allow 111/udp
sudo ufw allow 20048/tcp   # mountd (fixed)
sudo ufw allow 20048/udp

To apply the changes:

sudo ufw reload

To check that the rules are added:

sudo ufw status

Step 4: Verify the NFS Server

To confirm that the NFS share is accessible, use the following command:

showmount -e <NFS server IP>

If the setup is correct, you should see the exported directory listed:

Export list for <NFS server IP>:
/backups <NFS client IP>

Next Steps: Mount the NFS Share on the SL1 Database Server

Now that the NFS server is set up, you need to mount the share on your SL1 database server to store backups. For step-by-step instructions on mounting an NFS share in SL1, refer to the official ScienceLogic documentation on Backup Management.
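Before configuring backups in SL1, it can be worth confirming from the database appliance that the share actually mounts. This is only a quick sanity-check sketch using a temporary mount point and a placeholder address; the persistent configuration should follow the Backup Management documentation referenced above.

```bash
# From the SL1 database appliance: mount the share temporarily and check it.
NFS_SERVER="<NFS server IP>"   # placeholder - replace with your NFS server address
sudo mkdir -p /mnt/backups
sudo mount -t nfs "${NFS_SERVER}:/backups" /mnt/backups
df -h /mnt/backups

# Unmount the temporary test mount when finished.
sudo umount /mnt/backups
```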
Mastering Terminal Security: Why TMUX Matters in Modern Enterprise Environments

In the evolving landscape of enterprise IT, security isn't a feature—it's a foundation. As organizations grow more distributed and systems become increasingly complex, securing terminal sessions accessed through SSH is a mission-critical component of any corporate security posture. One tool rising in prominence for its role in fortifying SSH access control is tmux, and it's more than just a handy utility—it's a security enabler.

As part of ScienceLogic's harden-the-foundation initiative, the SL1 platform on release 12.2.1 or later introduces improved tmux session control capabilities to meet industry-leading security standards.

ScienceLogic TMUX resources:
- SL1 Release Notes
- KB Article: What is TMUX and why is it now default on SL1?
- KB Article: Unable to Copy or Paste Text in SSH Sessions
- TMUX Configuration Cheat Sheet
- Increase ITerm TMUX Window

What is TMUX?

tmux (short for terminal multiplexer) is a command-line tool that allows users to open and manage multiple terminal sessions from a single SSH connection. Think of it as a window manager for your terminal—enabling users to split screens, scroll through logs, copy/paste content, and manage persistent sessions across disconnects.

tmux is now running by default when you SSH into an SL1 system. This isn't just a user experience enhancement—it's a strategic security upgrade aligned with best practices in access control and session management.

Why TMUX Matters for Security

Security teams understand that idle or abandoned SSH sessions pose real risks—whether from unauthorized access, lateral movement, or session hijacking. The introduction of tmux into the SL1 platform adds several critical controls to mitigate these risks:

- Automatic session locking: Idle sessions lock automatically after 15 minutes or immediately upon unclean disconnects. This dramatically reduces the attack surface of unattended sessions.
- Session persistence and recovery: tmux can reattach to previous sessions on reconnect, preserving state without sacrificing security—great for admin continuity. (A few everyday commands are shown just before the Final Thoughts below.)
- Supervised access: With tmux, authorized users can monitor or even share terminal sessions for auditing or support—without giving up full shell access.

Value for Platform Teams and Security Officers

For platform and security leaders, enabling tmux by default means:

- Stronger compliance posture: Session supervision, activity auditing, and inactivity timeouts align with frameworks like NIST 800-53, CIS Controls, and ISO 27001.
- Reduced operational risk: Dropped sessions and orphaned shells are automatically managed—minimizing both user frustration and security exposure.
- Enhanced administrator efficiency: Features like scroll-back search, split panes, and built-in clipboard support streamline complex workflows across systems.

In essence, tmux isn't just helping sysadmins—it's helping CISOs sleep better.

Risks of Not Using TMUX

Choosing not to enable or enforce tmux in enterprise environments comes with hidden but serious risks:

- Unsecured idle sessions: Without timeouts or auto-locks, sessions left open are ripe for misuse or compromise.
- Poor session traceability: Lack of visibility into session states and handoffs creates audit and accountability gaps.
- Reduced resilience: A dropped SSH connection can lead to lost work, misconfigurations, or operational inefficiencies—especially in multi-user environments.

In contrast, tmux provides a clean, consistent, and secure environment for every shell session—backed by real-world enterprise needs.
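For administrators new to tmux, a handful of everyday commands cover most of the session-persistence workflow described above. The default key bindings are shown; SL1 ships its own tmux configuration, so consult the TMUX Configuration Cheat Sheet listed earlier if a binding behaves differently on your appliance.

```bash
# List sessions on the appliance and reattach after a dropped SSH connection.
tmux ls
tmux attach -t 0        # attach to session 0 (use the name shown by "tmux ls")

# Inside a session, the default prefix is Ctrl-b:
#   Ctrl-b %   split the current window vertically
#   Ctrl-b "   split the current window horizontally
#   Ctrl-b [   enter scroll/copy mode (press q to exit)
#   Ctrl-b d   detach, leaving the session running
```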
Final Thoughts

The addition of tmux to SL1's default SSH environment reflects a broader industry trend: security is shifting left, right into the command line. For platform teams, this isn't just a convenience—it's a call to action. Enabling tmux is a simple yet powerful way to align with security policies, improve admin workflows, and fortify your infrastructure.