Recent Content
How to Generate a PowerPack Version Report in SL1 to Track Updates and Changes
3 MIN READ

To successfully merge your custom changes into a new PowerPack version, you’ll need to understand how to identify the differences between versions. This process involves:
Comparing the original PowerPack version (used to create your custom version) to your customized version – to identify custom changes.
Comparing the original PowerPack version to the new ScienceLogic-released version – to identify new features and updates.

After understanding these deltas, you can determine whether:
The new version already includes features that cover your customizations, or
You need to merge and reapply custom changes onto the new version.

Let’s walk through the process using an example and then detail the steps to generate and compare PowerPack reports in SL1.

Example Scenario

Suppose your team customized PowerPack version 112 and named it 112.1. ScienceLogic has since released version 115. To upgrade your custom PowerPack to a new branch (say, 115.1), you’ll need to:
✅ Compare 112 vs 112.1 to identify what was customized.
✅ Compare 112 vs 115 to identify what’s new from ScienceLogic.
✅ Review release notes for versions 113, 114, and 115 to spot added features or fixes.
✅ Decide which customizations are still needed and merge them into 115.1.

Step 1: Generate PowerPack Information Reports

SL1 provides a built-in report to list the contents of a PowerPack version. Here’s how to generate it:
1️⃣ Ensure the PowerPack version you wish to report on is installed in your SL1 stack. ⚠️ Important: Do this in a non-production or test environment, as installing older versions may affect data or configurations.
2️⃣ Navigate to the Reports section in SL1. Go to Reports (Navigation Bar). Under Run Report > EM7 Administration, select Power-Pack Information. Choose the specific PowerPack version to report on. Select the Excel format as the output and click Generate.
3️⃣ Save the generated Excel file.
🔁 Repeat this process for each version you wish to compare (e.g., original, customized, and new versions).

Step 2: Compare PowerPack Versions Using Excel

Now that you have the reports:
1️⃣ Open both Excel files (e.g., 112.xlsx and 112.1.xlsx) in Excel.
2️⃣ If the Developer tab isn’t visible: Click File > Options > Customize Ribbon. Under Main Tabs, enable Developer and click OK.
3️⃣ Under the Developer tab, click COM Add-ins. Check the Inquire add-in and enable it.
4️⃣ You should now see an Inquire tab. Select Compare Files. Choose the two files you want to compare. A Spreadsheet Compare window will open showing the differences. (A scripted alternative to the Inquire add-in is sketched at the end of this article.)

💡 Pro Tip:
🔹 Ignore differences in fields like ID and Edit Date – these are environment-specific and reflect the PowerPack installation date.
🔹 To reduce confusion, consider hiding or removing these columns in Excel before performing the comparison.

🔍 Instead, focus on meaningful differences, such as:
Additional or removed objects, including: Dynamic Apps Summary, Dynamic Apps Details, Event Policies, Device Classes, Reports, Dashboard Widgets, Dashboards, SL1 Dashboards, ScienceLogic Libraries, Actions, and Credentials.
Changes to version numbers or descriptions for these objects, indicating feature updates or enhancements.

This focused comparison helps ensure you’re identifying functional changes rather than irrelevant metadata.

Final Thoughts

By systematically generating and comparing PowerPack reports:
You can clearly identify what customizations were made and what changes the new version introduces. This helps you confidently plan your PowerPack upgrade path and minimize risks.
✅ Review release notes for intermediate versions to avoid duplicating enhancements already included by ScienceLogic.
✅ Always perform this analysis in a non-production environment first.

With this approach, you’ll be able to efficiently track PowerPack updates and changes while maintaining your critical customizations.
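If you prefer to script the comparison rather than use Excel’s Inquire add-in, a small Python sketch along the following lines can diff two exported reports. This is only an illustration, not ScienceLogic tooling: it assumes pandas (with openpyxl) is installed and that the sheet being compared has a "Name" column plus other descriptive columns; adjust the column names and sheet selection to match the actual report layout.

# compare_powerpack_reports.py - illustrative sketch, not ScienceLogic tooling.
# Assumes pandas + openpyxl are installed and the exported sheet has a "Name"
# column; adjust column names and sheet selection to the real report layout.
import pandas as pd

IGNORE_COLUMNS = ["ID", "Edit Date"]  # environment-specific fields to skip

def load_report(path, sheet=0):
    df = pd.read_excel(path, sheet_name=sheet)
    return df.drop(columns=[c for c in IGNORE_COLUMNS if c in df.columns])

def diff_reports(old_path, new_path, key="Name"):
    old, new = load_report(old_path), load_report(new_path)
    old_keys, new_keys = set(old[key]), set(new[key])
    print("Only in", new_path, ":", sorted(new_keys - old_keys))
    print("Only in", old_path, ":", sorted(old_keys - new_keys))
    # For objects present in both reports, flag columns whose values changed.
    shared = [c for c in old.columns if c != key and c in new.columns]
    merged = old.merge(new, on=key, suffixes=("_old", "_new"))
    for col in shared:
        changed = merged[merged[f"{col}_old"] != merged[f"{col}_new"]]
        if not changed.empty:
            print(f"Changed {col}:", sorted(changed[key]))

diff_reports("112.xlsx", "112.1.xlsx")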
Best Practices for Device Discovery in SL1

1 MIN READ

Effective device discovery is a foundational step in building a robust monitoring environment in ScienceLogic SL1. This guide focuses on best practices when performing manual or guided discoveries via the SL1 user interface. For a full overview of the discovery process, refer to the official documentation: 👉 SL1 Discovery Process Documentation

Key Best Practices

1. DNS Configuration
When discovering devices by hostname, ensure that DNS is properly configured and functional on the collector. Improper DNS settings can prevent successful device resolution and discovery.

2. Use CIDR Notation Thoughtfully
If you're using CIDR notation to define the discovery range, stick with smaller ranges, such as /24, to limit the scope. Large CIDR blocks can overwhelm the discovery process and slow down the collector. (A quick range-sizing sketch follows at the end of this article.)

3. Avoid Overloading the Collector
Attempting to discover too many devices in one session can lead to performance degradation. A general rule of thumb (on a medium-sized collector):
SNMP: Up to 1,000 devices
SSH (Linux): Around 500 devices
PowerShell (Windows): Around 100 devices
💡 Tip: For large-scale discovery, distribute the workload across multiple collectors or collector groups.

4. Preview Discovery Results
Before running a full discovery session, run it with "Model Devices" deselected. This allows you to see what will be discovered without impacting device modelling or performance.

5. Test Credentials First
Always use the Credential Tester tool to validate new credentials before launching a discovery session. This ensures that:
The collector can communicate with the target devices
The credentials are correctly configured and accepted

Helpful Resources
🔐 Creating SL1 Credentials
🛠️ Using the Credential Tester
❗ Troubleshooting Discovery Issues
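As a concrete illustration of the CIDR guidance above, the short Python sketch below uses only the standard ipaddress module to estimate how many addresses a discovery range covers and compares it to the rough per-collector figures quoted in this article. It is purely illustrative, not an SL1 feature.

# Rough pre-flight sizing check for a discovery range - illustrative only.
import ipaddress

# Rough per-collector guidance quoted above (medium-sized collector).
LIMITS = {"snmp": 1000, "ssh": 500, "powershell": 100}

def check_range(cidr, method="snmp"):
    network = ipaddress.ip_network(cidr, strict=False)
    # Exclude network and broadcast addresses for typical IPv4 ranges.
    hosts = max(network.num_addresses - 2, 1)
    verdict = "OK" if hosts <= LIMITS[method] else "consider splitting across collectors"
    print(f"{cidr}: ~{hosts} addresses via {method.upper()} (guideline {LIMITS[method]}) -> {verdict}")

check_range("10.10.0.0/24", "snmp")        # ~254 addresses: within the guideline
check_range("10.0.0.0/16", "powershell")   # ~65534 addresses: far too large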
ScienceLogic Meraki Monitoring Best Practices

6 MIN READ

Hello all, I wanted to take a little time to share my thoughts as the Product Manager for the Meraki PowerPack. I believe we have a great solution for integration with Meraki's API, but I find that due to Meraki's focus on simpler management and monitoring, a slight shift in mindset may be required to extract the most value. Unfortunately, when I meet with some of you, I find you may be unaware of some of our best practices that would really improve your experience! A condensed version of this information can be found in the PowerPack Manual.

Some context to consider as you read:
Meraki is not the typical power-user tool you're used to, although it is adding features constantly and at a rapid pace. It is not intended to have every knob and lever. It is intended to be simple and easy.
Meraki monitoring is entirely through the cloud API. SNMP monitoring or SSH connections into appliances are not a typical workflow, and doing so provides little benefit beyond using the REST API. Meraki really doesn't seem to want you to do this.
Meraki's API does not expose all of the data you may expect. However, in my experience, the Meraki API is one of the best APIs out there. This is not because of breadth and depth of data, but due to Meraki's focus on being "API first", having proper documentation, and how quickly they iterate on their API's features.

ScienceLogic Meraki Best Practices

Don't expect to have all the data for everything. Meraki does not expose everything in the API, and they don't intend tools like the one ScienceLogic provides to, in effect, replicate their database into SL1. As Meraki abstracts some of the complexities away from the operator, reconsider what your goals are and what you want to monitor. For example, do you care about CPU util for an AP, or do you just care about the overall health of the AP or the network as a whole?

Don't expect per-minute collections for interfaces. The Meraki API will not support that much data.

Don't merge devices unless you have static IP addresses. Meraki recommends you use DHCP. Meraki also doesn't expose much information through SNMP anyway. If you merge physically discovered Meraki devices with components discovered through the API and IP addresses change, you will have a bunch of devices incorrectly merged. Perhaps discovering via hostname is an option for you, but in general it is advised to just stick with component mapping from the API.

Use Email/Webhook alerts! The Meraki PowerPack is designed very carefully to not hammer the Meraki API and surpass the fairly gracious API rate limit. In theory SL1 could make up to 800,000 API calls per day per Meraki Org, and you'd be surprised how quickly SL1 can hit that if you try to collect everything all the time. Our PowerPack is designed to scale to over 100,000 devices on a single SL1. As such, we do not attempt to collect much data that is already alerted on with the built-in Meraki Alerts. Enable Meraki Alerts and configure them to be sent into SL1, and you will effectively double your monitoring coverage of Meraki with SL1. Our PowerPack is designed to provide you visibility into the things Meraki doesn't alert you to out-of-the-box.

Simplicity is key! I don't know about you, but I think the best software is simple software. We avoid doing as many "custom" things as we can in the Meraki PowerPack, and we rely on core features of SL1 where possible to keep the integration stable and easy to support. Unfortunately, complexity couldn't be avoided entirely.
You'll find things like RBAs to create new DCM trees for each Meraki Organization and the "Request Manager" Dynamic Application which is a complex mechanism that schedules and limits API calls to Meraki at a level of efficiency not possible without bespoke logic. Other than those items, you'll find that the Meraki PowerPack relies heavily on stock SL1 features like the following: SL1 allows you to select what DAs align to components when they are modeled, but does not enable different alignment based on device classes. As such, you may see some DAs align to devices that we don't expect to collect data (such as Uplink collections aligning to switches and APs although Meraki does not provide uplink data for those devices). You will also find that device class alignment is straight forward and simple in the Meraki Powerpack. We utilize class identifiers 1 and 2 to provide three levels of classification. If a specific model matches a class identifier, we give it that device class, if the model doesn't match entirely, but it starts with characters that give us an idea as to what kind of device it is (MS for switch, MR for AP, etc), we will give it a generic class for that kind of device. If none of the identifiers match, we will give it a generic Meraki class from the device component tab of the discovery Dynamic Application. Adding new device classes should easy, but you also should never have to add your own due to this three tier approach using basic SL1 features. Starting in Meraki API v115, most customization will be handled in the credential. Some Powerpacks may use changes in the snippet code or even use thresholds as "toggles" for certain features. The goal with the Meraki PowerPack is to allow customization in a sustainable way. In v115, you will find more options to configure what API calls to enable, SSL cert verification, and of course selective discovery all as options in the new "Universal" Credential subtype provided for Meraki. Be kind to the API! Think hard about what you really need to collect and monitor. As we get requests to collect more items from Meraki, we have no choice but to ship these to you in a disabled state. If you turn on every collection in the Meraki PowerPack, and you have more than a few thousands devices in a Meraki Organization, you are likely to hit the API rate limit quickly. Think hard about what you want to achieve and turn on collections selectively. You will find a handy guide in the PowerPack manual that lists out every Dynamic Application, what devices it collects data against, and the alignment and enablement status they default to out-of-the-box. The API rate limit is shared between tools. You may know that Meraki limits API calls per organization, but did you know, according to the Meraki documentation they also limit API rate limit based on source IP regardless of the Organization you're querying? This means that if you are monitoring 10 or more organizations from the same IP address, you will have a lower rate limit per organization as they all share 100 calls per second. If you are an MSP monitoring multiple customer Meraki Orgs, keep this in mind! Also, if you are monitoring the same org from multiple tools, you are sharing the rate limit between them. If you have another monitoring tool, or even another SL1 querying the same Meraki Org, you may be causing the rate limit to go into effect prematurely. 
If you have any concerns, navigate to the API Analytics page in the Meraki Dashboard and you will see all of the various API tools hitting that bucket. (A simple back-off sketch for rate-limited API calls appears at the end of this post.)

Selective Discovery – The Meraki PowerPack allows you to limit discovery to devices and networks with certain tags. Add the tags to your credential, and devices without those tags will not be modeled in SL1.

As always, I'm happy to chat about Meraki or our other integrations, so don't hesitate to schedule time through your account manager! Do you have any tips or tricks? Share them in the comments!
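On the rate-limit point above: the Meraki Dashboard API answers over-limit requests with an HTTP 429 status, so any ad hoc scripts you point at the same organization should back off politely rather than add pressure to the shared bucket. The Python sketch below is purely illustrative (it is not part of the PowerPack) and assumes the requests library plus a placeholder API key.

# Illustrative only - a polite Meraki API call that backs off on HTTP 429.
# Assumes the "requests" library; the API key is a placeholder.
import time
import requests

BASE_URL = "https://api.meraki.com/api/v1"
HEADERS = {"Authorization": "Bearer <MERAKI_API_KEY>"}  # placeholder key

def get_with_backoff(path, max_retries=5):
    for attempt in range(max_retries):
        response = requests.get(f"{BASE_URL}{path}", headers=HEADERS, timeout=30)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Honor Retry-After if the API provides it; otherwise back off gradually.
        wait = int(response.headers.get("Retry-After", attempt + 1))
        time.sleep(wait)
    raise RuntimeError(f"Gave up after {max_retries} rate-limited attempts: {path}")

# Example: list the organizations visible to this API key.
# organizations = get_with_backoff("/organizations")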
A Big Thanks to You—Our Amazing Customers

2 MIN READ

At ScienceLogic, our customers and partners aren’t just using our tech—you’re shaping what’s next. You’re the reason we show up every day ready to build, improve, and reimagine IT operations. So this Customer Appreciation Day, we want to pause and just say: thank you. Thanks for trusting us. Thanks for pushing us with your ideas. And thanks for being the reason we’re always reaching for better.

2025: What We’ve Built Together

This year’s been all about leveling up—together. We’ve focused on making IT smarter, faster, and more self-sufficient. And your need for agility, speed, and innovation? That’s been our compass. From boosting observability across complex environments to introducing Agentic AI inside SL1, it’s all about helping IT teams move from reactive to proactive—and now, into the world of autonomy. Here are just a few of the things we’re proud to bring to the table:
Agentic AI that doesn’t just automate—it thinks and acts on its own, giving your team more time for big-picture work.
Next-gen observability tools that go beyond alerts and help you understand the full story—what’s happening, where, why, and what to do about it.
More customer-driven product design—because your voice guides what we build, always.

Our Progress Starts with You

Every product release, every new feature, every late-night brainstorm—it all starts with you. Your feedback, your goals, your challenges. You push us to do more, and we’re better for it. Whether you’ve been with us from the beginning or just joined the ScienceLogic community, we’re so glad you’re here. Your journey shapes ours, and we’re beyond grateful for that. From all of us at ScienceLogic: thank you for being part of this ride. We’re excited to keep building the future with you. Thank you for being the best part of ScienceLogic!
How to Set Up an NFS Server for SL1 Backups

3 MIN READ

Backing up your ScienceLogic SL1 database is essential for ensuring data integrity and disaster recovery. One effective way to store backups is by setting up a Network File System (NFS) server. NFS allows you to share a directory across multiple machines, making it an ideal solution for centralized SL1 backups. This guide will walk you through the process of installing and configuring an NFS server to store SL1 backups.

Step 1: Install the NFS Server

Before setting up the NFS server, ensure that your Linux machine has the necessary NFS packages installed. If the nfs-server package is missing, you need to install it.

For RHEL, CentOS, Rocky Linux, or AlmaLinux:
sudo yum install -y nfs-utils

For Ubuntu or Debian:
sudo apt update
sudo apt install -y nfs-kernel-server

After installation, start and enable the NFS service:
sudo systemctl start nfs-server
sudo systemctl enable nfs-server

Verify the NFS server is running:
sudo systemctl status nfs-server

If it is not running, restart it:
sudo systemctl restart nfs-server

Step 2: Configure the NFS Server

Once NFS is installed, follow these steps to configure the shared directory for SL1 backups.

1. Create a Backup Directory
sudo mkdir -p /backups
sudo chmod 777 /backups

2. Set a Fixed Port for mountd
On Ubuntu/Debian: Edit /etc/default/nfs-kernel-server and add:
RPCMOUNTDOPTS="--port 20048"
On RHEL/CentOS/Rocky/Oracle: Edit /etc/sysconfig/nfs and add:
MOUNTD_PORT=20048
This ensures the mountd service always uses port 20048, making firewall configuration simpler and more secure.

3. Define NFS Exports
Edit the /etc/exports file to specify which clients can access the NFS share:
sudo vi /etc/exports
Add the following line, replacing <SL1_DB_IP> with the IP address of your SL1 database server:
/backups <SL1_DB_IP>(rw,sync,no_root_squash,no_all_squash)
This configuration allows the SL1 server to read and write (rw) to /backups, ensures data consistency (sync), and prevents permission issues.

4. Apply the NFS Configuration
Run the following command to apply the changes:
sudo exportfs -a
Restart the NFS service to ensure the changes take effect:
sudo systemctl restart nfs-server

Step 3: Configure Firewall Rules for NFS

If a firewall is enabled on your NFS server, you must allow NFS-related services. Run the following commands to open the necessary ports:
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --reload

If the command 'firewall-cmd' is not found on your device:

1. Using iptables (For RHEL, CentOS, Debian, or older distributions)
If your system uses iptables, you can manually allow NFS traffic with the following commands:
sudo iptables -A INPUT -p tcp --dport 2049 -j ACCEPT   # NFS
sudo iptables -A INPUT -p tcp --dport 111 -j ACCEPT    # Portmapper
sudo iptables -A INPUT -p tcp --dport 20048 -j ACCEPT  # Fixed mountd port
sudo iptables -A INPUT -p udp --dport 2049 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 111 -j ACCEPT
sudo iptables -A INPUT -p udp --dport 20048 -j ACCEPT
# Save the rules
sudo iptables-save | sudo tee /etc/sysconfig/iptables
To restart iptables and apply the rules:
sudo systemctl restart iptables
2. Using UFW (For Ubuntu and Debian)
If your system uses ufw (Uncomplicated Firewall), enable NFS traffic with the following commands.
Enable UFW (if it is inactive):
sudo ufw enable
Allow the NFS, RPC, and mountd ports:
sudo ufw allow 2049/tcp    # NFS
sudo ufw allow 111/tcp     # rpcbind
sudo ufw allow 111/udp
sudo ufw allow 20048/tcp   # mountd (fixed)
sudo ufw allow 20048/udp
To apply the changes:
sudo ufw reload
To check if the rules are added:
sudo ufw status

Step 4: Verify the NFS Server

To confirm that the NFS share is accessible, use the following command:
showmount -e <NFS server IP>
If the setup is correct, you should see the /backups directory listed as an export:
Export list for <NFS server IP>:
/backups <NFS client IP>

Next Steps: Mount the NFS Share on the SL1 Database Server

Now that the NFS server is set up, you need to mount the share on your SL1 database server to store backups. For step-by-step instructions on mounting an NFS share in SL1, refer to the official ScienceLogic documentation: Backup Management.
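Before moving on to the SL1 backup configuration, it can be worth confirming from the SL1 database server that the ports opened in Step 3 are actually reachable. The Python sketch below is only an illustrative pre-check (it assumes the fixed mountd port 20048 configured above and uses a placeholder server IP); the NFS mount itself is configured through the SL1 backup settings described in the documentation.

# Illustrative reachability pre-check for the NFS server ports.
# Run from the SL1 database server; replace the placeholder IP before running.
import socket

NFS_SERVER = "<NFS server IP>"  # placeholder
PORTS = {111: "rpcbind", 2049: "nfs", 20048: "mountd (fixed port)"}

for port, service in PORTS.items():
    try:
        with socket.create_connection((NFS_SERVER, port), timeout=3):
            print(f"{service}: TCP {port} reachable")
    except OSError as exc:
        print(f"{service}: TCP {port} NOT reachable ({exc})")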
Mastering Terminal Security: Why TMUX Matters in Modern Enterprise Environments

3 MIN READ

In the evolving landscape of enterprise IT, security isn’t a feature—it’s a foundation. As organizations grow more distributed and systems become increasingly complex, securing terminal sessions accessed through SSH is a mission-critical component of any corporate security posture. One tool rising in prominence for its role in fortifying SSH access control is tmux, and it's more than just a handy utility—it's a security enabler. As part of ScienceLogic’s "harden the foundation" initiative, the SL1 platform introduces improved tmux session control capabilities in the 12.2.1 and later releases to meet industry-leading security standards.

ScienceLogic TMUX resources:
SL1 Release Notes
KB Article: What is TMUX and why is it now default on SL1?
KB Article: Unable to Copy or Paste Text in SSH Sessions
TMUX Configuration Cheat Sheet
Increase ITerm TMUX Window

What is TMUX?

tmux (short for terminal multiplexer) is a command-line tool that allows users to open and manage multiple terminal sessions from a single SSH connection. Think of it as a window manager for your terminal—enabling users to split screens, scroll through logs, copy/paste content, and manage persistent sessions across disconnects. tmux is now running by default when you SSH into an SL1 system. This isn’t just a user experience enhancement—it’s a strategic security upgrade aligned with best practices in access control and session management.

Why TMUX Matters for Security

Security teams understand that idle or abandoned SSH sessions pose real risks—whether from unauthorized access, lateral movement, or session hijacking. The introduction of tmux into the SL1 platform adds several critical controls to mitigate these risks:
Automatic Session Locking: Idle sessions lock automatically after 15 minutes or immediately upon unclean disconnects. This dramatically reduces the attack surface of unattended sessions.
Session Persistence and Recovery: tmux can reattach to previous sessions on reconnect, preserving state without sacrificing security—great for admin continuity.
Supervised Access: With tmux, authorized users can monitor or even share terminal sessions for auditing or support—without giving up full shell access.

Value for Platform Teams and Security Officers

For platform and security leaders, enabling tmux by default means:
Stronger Compliance Posture: Session supervision, activity auditing, and inactivity timeouts align with frameworks like NIST 800-53, CIS Controls, and ISO 27001.
Reduced Operational Risk: Dropped sessions and orphaned shells are automatically managed—minimizing both user frustration and security exposure.
Enhanced Administrator Efficiency: Features like scroll-back search, split panes, and built-in clipboard handling streamline complex workflows across systems.
In essence, tmux isn't just helping sysadmins—it's helping CISOs sleep better.

Risks of Not Using TMUX

Choosing not to enable or enforce tmux in enterprise environments comes with hidden but serious risks:
Unsecured Idle Sessions: Without timeouts or auto-locks, sessions left open are ripe for misuse or compromise.
Poor Session Traceability: Lack of visibility into session states and handoffs creates audit and accountability gaps.
Reduced Resilience: A dropped SSH connection can lead to lost work, misconfigurations, or operational inefficiencies—especially in multi-user environments.
In contrast, tmux provides a clean, consistent, and secure environment for every shell session—backed by real-world enterprise needs.
Final Thoughts

The addition of tmux to SL1's default SSH environment reflects a broader industry trend: security is shifting left, right into the command line. For platform teams, this isn't just a convenience—it's a call to action. Enabling tmux is a simple yet powerful way to align with security policies, improve admin workflows, and fortify your infrastructure.
Convert Customization to PowerFlow Jinja Template

1 MIN READ

Sometimes when syncing devices from SL1 into ServiceNow as Configuration Items there can be a mismatch. ServiceNow may list the name as a fully qualified domain name (FQDN) while SL1 uses the short name. This setting can be updated in SL1, but in some cases the SL1 team would rather see the short name than the FQDN. This can be set up on a per SL1 Device Class basis.

PowerFlow

Using the following Jinja2 “if statement”, the name synced to ServiceNow can be taken from the SL1 “Device Hostname” field instead for Microsoft SQL Server Databases. This excerpt of code would go under attribute mappings for name on the ScienceLogic side mapping to name on the ServiceNow side:

{%- set output = [] -%}
{%- if (device.device_class|trim) in ['Microsoft | SQL Server Database'] -%}
{%- set output = device.hostname -%}
{%- else -%}
{%- set output = device.name -%}
{%- endif -%}
{{ output }}

Example:
How to Configure SSO Support for Global Manager

4 MIN READ

Enabling SAML-Based SSO in ScienceLogic Global Manager

ScienceLogic Global Manager (GM) is a powerful appliance designed to aggregate and display data from multiple SL1 systems, providing a centralized view of your entire infrastructure. Starting with SL1 version 12.2.1, ScienceLogic introduced support for Security Assertion Markup Language (SAML) based Single Sign-On (SSO), simplifying authentication and enhancing security. This guide walks through the process of enabling SAML-based SSO in ScienceLogic Global Manager, so that user access can be managed seamlessly and operational efficiency improved.

Why Enable SAML-Based SSO?

Enabling SSO through SAML allows users to log in once and gain access to multiple SL1 systems through the Global Manager, provided the users are already authorized to access the target systems. This streamlines Identity and Access Management (IAM), reduces password fatigue, and strengthens the organization's security posture.

Getting Started

Before beginning, ensure the following is true:
SL1 Version: 12.2.1 or later
Access Level: Administrator access to the Global Manager appliance.

Prerequisites:
ScienceLogic assumes that the "SL1: Global Manager" PowerPack has been installed and the child stacks have been discovered.
No platform version mismatch between the GM and the child SL1 stacks.
The AP2 version across all stacks must be, at minimum, Gelato v8.14.26.
The child SL1 stacks are configured to authenticate using SSO authentication.
A local administrator account must exist on each child stack that GM can use to authenticate with the child stack.
The GM SSO authentication resource must be configured to authenticate with the same Identity Provider (IdP) configured on the child SL1 stacks.
The ‘/opt/em7/nextui/nextui.conf’ file on the GM must have the following variables configured. If the GM platform is hosted by ScienceLogic, a Service Request must be raised using the ScienceLogic Support Portal here to request the addition of the environment variables:
GM_STACKS_CREDENTIAL=enabled
GM_STACKS_CACHE_TTL_MS=0
GM_SESSION_AUTH_CACHE_TTL_MS=0
GLOBAL_MANAGER_SESSION_COOKIE_CACHE_TTL_MILLIS=0
Unique SL1 Administrator accounts must exist on each child stack – these act as a global API key for users, which allows authentication on the child stack. Once a user is authenticated, the user data is loaded onto GM and the request proceeds as normal.

Step 1: Configure Basic/Snippet Credentials

a) Access the GM UI and log on using an Administrator account.
b) Navigate to the Credentials page (Manage > Credentials) and select ‘Create New’ followed by ‘Create Basic/Snippet Credential’. A dialog window will be presented; this must be completed with the details listed below for each child stack, using the Administrator credentials, to enable GM to authenticate with the child stacks.

Field: Value
Name: stack-<stack-id>-gm-auth
All Organizations: Toggled
Timeout (ms): 0
Username: <Target-Child-Stack-Admin-Username>
Password: Unique Password
Hostname/IP: <Target-Child-Stack-IP>
Port: 443

c) Perform a credential test using the Credential Tester and confirm the authentication is successful.

Step 2: Credential Alignment - GraphQL

Following the creation of the Basic Credentials, each child stack credential must be aligned using a GraphQL (GQL) mutation. The command requires supplying the ‘guid’ of the credentials created in Step 1 above. The following GQL will return all credentials created in Step 1, providing the credential names contain ‘GM’.
Access the GQL Browser by appending /gql to the GM URL, i.e. https://<GlobalManager_HOST>/gql.

Query:
query allCreds {
  credentials(search: {name: {contains: "GM"}}) {
    edges {
      node {
        id
        guid
        name
        definition
      }
    }
  }
}

Example Response: The example response shows the required ‘guid’. Make a note of the ‘guid’ associated with each credential for use in Step 4.
{
  "data": {
    "credentials": {
      "edges": [
        {
          "node": {
            "id": "41",
            "guid": "3C07AB8B0655A722712C46FA1DF821EA",
            "name": "stack_1_gm_auth",
            "definition": [..]
          }
        }
      ]
    }
  }
}

Step 3: Retrieve GM Stack ID

The following GQL will return all existing child SL1 stacks present on the GM.

Query:
query getallstacks {
  globalManagerStacks {
    edges {
      node {
        id
        name
        credential {
          guid
          name
        }
      }
    }
  }
}

Example Response: Note the ‘id’ representing the GM Stack IDs for the next step.
{
  "data": {
    "globalManagerStacks": {
      "edges": [
        [..]
        {
          "node": {
            "id": "3",
            "name": "<sl1_stack_hostname>",
            "credential": null
          }
        }
        [..]
      ]
    }
  }
}

Step 4: GraphQL Credential Mutation

The following GQL mutation will align the Basic Credential to permit GM to authenticate with the target child stacks. (A scripted variant of this step is sketched at the end of this article.)

Mutation:
mutation aligncred {
  alignGlobalManagerCredential(id: <Stack-ID>, credential: "<guid>") {
    id
    name
    credential {
      id
      name
      guid
    }
  }
}

Replace <Stack-ID> with the GM Stack ID for each child stack retrieved from Step 3, and <guid> with the credential GUID from Step 2 that is associated with the same child stack.

Example Mutation Response:
{
  "data": {
    "alignGlobalManagerCredential": {
      "id": "3",
      "name": "<child_stack_name>",
      "credential": {
        "id": "41",
        "name": "stack_1_gm_auth",
        "guid": "3C07AB8B0655A722712C46FA1DF821EA"
      }
    }
  }
}

Repeat the above mutation for the remaining child SL1 stacks discovered on GM.

Summary

Enabling SAML-based SSO in ScienceLogic Global Manager streamlines authentication, enhances security, and improves operational efficiency by allowing users to seamlessly access multiple SL1 stacks with a single login. By following the outlined steps — configuring credentials, aligning them via GraphQL, and ensuring proper authentication setup — organizations can integrate SSO effectively while maintaining secure access controls. After completing these steps, users will be able to log in once and have visibility of managed devices across multiple SL1 stacks via GM, enhancing productivity and reducing security risks. By leveraging SAML-based SSO, ScienceLogic not only simplifies access but also strengthens the overall security posture. If there are issues encountered, please contact ScienceLogic Support here. For further details related to GM setup, refer to the official ScienceLogic documentation here.
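If you have many child stacks to align, you may prefer to script Step 4 rather than paste the mutation into the GQL browser once per stack. The Python sketch below is only an illustration and rests on a few assumptions: the requests library is available, the /gql endpoint accepts basic authentication with an SL1 administrator account, and the stack-ID-to-GUID mapping is filled in by hand from Steps 2 and 3.

# Illustrative sketch only: run the Step 4 mutation for several stacks via /gql.
# Assumptions: "requests" is installed, /gql accepts basic authentication, and
# STACK_CREDENTIALS is filled in from the IDs and GUIDs gathered in Steps 2-3.
import requests

GM_URL = "https://<GlobalManager_HOST>/gql"   # placeholder
AUTH = ("<admin_user>", "<admin_password>")   # placeholder administrator account

STACK_CREDENTIALS = {
    # GM stack id : credential guid
    3: "3C07AB8B0655A722712C46FA1DF821EA",
}

MUTATION_TEMPLATE = """
mutation aligncred {{
  alignGlobalManagerCredential(id: {stack_id}, credential: "{guid}") {{
    id
    name
    credential {{ id name guid }}
  }}
}}
"""

for stack_id, guid in STACK_CREDENTIALS.items():
    query = MUTATION_TEMPLATE.format(stack_id=stack_id, guid=guid)
    response = requests.post(GM_URL, json={"query": query}, auth=AUTH, timeout=30)
    response.raise_for_status()
    print(stack_id, response.json())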
New How-To Videos: Streamline Event Management with ScienceLogic

2 MIN READ

Effective event management within ScienceLogic SL1 is key to maintaining visibility across your IT ecosystem and ensuring actionable data. When managed properly, event data can directly impact critical business outcomes, such as Mean Time to Resolution (MTTR). However, an overwhelming volume of events without proper context can create noise, making it difficult to extract the insights needed for timely action. So, how can you optimize event handling in SL1 to leverage the platform's full capabilities?

Introducing the Event Management video series! This collection of seven short videos (1-5 minutes each) covers various aspects of event management within SL1, helping you understand how to fine-tune your configuration and maximize efficiency.

Explore Video Content

Click here to view the full Event Management video series on the Support site's video page, or play the video below to get started exploring one of the videos in the series:

Video Series Overview
Event Management Overview: An introduction to events in SL1, covering how events are triggered, best practices for managing events, and strategies for maintaining the health and efficiency of your infrastructure.
Event Categories: Categorize and manage events effectively within SL1 by sorting, searching, filtering, and using role-based access control (RBAC).
Responding to Events: Acknowledge and take ownership of events, collaborate across teams, and leverage RBAC to ensure smooth event management.
Event Notification and Automation: Leverage PowerPacks to take actions on events, view essential information, run diagnostic tools, and integrate with your service desk.
Event Correlation and Masked Events: Correlate events to highlight critical ones, mask less important events, and quickly identify root causes and streamline your response.
Event Insights: Take advantage of the tools on the Event Insights page to manage alerts through correlation, deduplication, masking, and suppression.
Event Policies and More: Edit, customize, and create Event Policies, and manage events using regular expressions, auto-expiration, and suppression.

Looking for More How-To Videos?

Stay tuned for the next installment in our Event Management series, and be sure to check out other helpful tutorials on the ScienceLogic Support site's video page.
How to limit discovery in Microsoft Azure

3 MIN READ

How to Effectively Disable Azure VMs in SL1 Using VM Tags

When managing resources in a dynamic cloud environment such as Microsoft Azure, optimizing resource utilization and monitoring is crucial. ScienceLogic's Azure PowerPack provides an effective way to control monitoring of Azure Virtual Machines (VMs) using tags. This feature is particularly beneficial for organizations aiming to streamline operations and reduce costs by automating resource management based on predefined rules.

What Is VM Tagging?

VM tagging in Azure involves assigning metadata to resources in the form of key-value pairs. These tags can help identify, organize, and manage resources based on categories such as environment, department, or project. For example:
Key: Environment, Value: Development
Key: Owner, Value: IT-Support
Tagging becomes a powerful tool when integrated with automation policies to control resource behavior dynamically.

How to Add Tags to Azure Virtual Machines

Adding tags to Azure VMs is straightforward and can be done via the Azure Portal, Azure CLI, or PowerShell. Below is a step-by-step guide for the Azure Portal:
1. Log in to the Azure Portal.
2. Navigate to Virtual Machines in the left-hand menu.
3. Select the VM you want to tag from the list.
4. Click on the Tags option in the VM's menu.
5. Add key-value pairs to define your tags. For example: Key: Environment, Value: Production and Key: Owner, Value: Finance.
6. Click Save to apply the tags.

Using Azure CLI, you can also add tags with the following command:
az resource tag --tags Environment=Production Owner=Finance --name <resource-name> --resource-group <resource-group-name> --resource-type "Microsoft.Compute/virtualMachines"

Automation with ScienceLogic Azure PowerPack

The ScienceLogic Azure PowerPack introduces Run Book Automation (RBA) that uses VM tags to enable or disable data collection. Specifically, the "Disable By VM Tag" action allows administrators to disable monitoring for specific VMs based on their assigned tags. This automation is particularly useful for scenarios such as:
Disabling monitoring for development or test environments.
Optimizing monitoring costs by focusing only on production resources.
Dynamically managing resource monitoring based on organizational policies.

Configuration Steps for Disabling VMs by Tag

1. Modify the "Disable By VM Tag" Action
The first step in implementing this automation is to define the tag criteria. Here's how you can modify the parameters:
Navigate to the Action Policy Manager page in the SL1 platform.
Locate the "Microsoft Azure: Disable By VM Tag" action.
Edit the DISABLE_TAGS variable in the snippet with your desired key-value pairs. The format should be:
DISABLE_TAGS = [('Key1', 'Value1'), ('Key2', 'Value2')]
For example, to disable VMs tagged with an Environment key of "Development" or "Test":
DISABLE_TAGS = [('Environment', 'Development'), ('Environment', 'Test')]
Save the changes to apply your configuration. (A hypothetical illustration of how this tag matching behaves appears at the end of this article.)

2. Enable the Necessary Event Policy
ScienceLogic requires enabling the "Component Device Record Created" event policy to trigger the automation:
Go to the Event Policy Manager page.
Search for the "Component Device Record Created" event policy.
Set its Operational State to "Enabled."
Save your changes.

3. Activate the Run Book Automation Policy
To ensure that the automation works as intended:
Open the Automation Policy Manager.
Locate the "Microsoft Azure: Disable and Discover from IP" Run Book Automation policy.
Enable the policy and save the configuration.
4. Preserve Your Configuration
To avoid losing these settings during future updates:
Navigate to the Behavior Settings page.
Enable the "Selective PowerPack Field Protection" option.
Save the changes.

Benefits of Using "Disable By VM Tag" Automation

Cost Optimization: Reduces monitoring costs by disabling unnecessary data collection.
Operational Efficiency: Automates routine tasks, allowing teams to focus on critical operations.
Dynamic Management: Adjusts resource monitoring dynamically based on real-time needs.

Conclusion

Disabling Azure VMs by tag using ScienceLogic's Azure PowerPack is an efficient way to manage resources and control costs. By leveraging automated Run Book Actions and event policies, organizations can enforce consistent monitoring policies while minimizing manual intervention. Start leveraging the "Disable By VM Tag" feature today to enhance your Azure resource management strategy.

The Azure PowerPack can be downloaded from https://support.sciencelogic.com/s/release-version/aBu0z000000XZSICA4/microsoft-azure
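To get an intuition for how the DISABLE_TAGS list behaves, the short Python example below walks through the matching on sample tag sets. It is purely a hypothetical illustration of OR-style key/value matching; it is not the actual Run Book Action code, whose internal logic may differ.

# Hypothetical illustration of DISABLE_TAGS matching - not the actual RBA snippet.
DISABLE_TAGS = [('Environment', 'Development'), ('Environment', 'Test')]

def should_disable(vm_tags, disable_tags=DISABLE_TAGS):
    """Return True if any configured (key, value) pair matches the VM's tags."""
    return any(vm_tags.get(key) == value for key, value in disable_tags)

# Example tag sets as they might appear on Azure VMs (key/value pairs).
print(should_disable({"Environment": "Development", "Owner": "IT-Support"}))  # True
print(should_disable({"Environment": "Production", "Owner": "Finance"}))      # False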