Recent Content
How to Configure SSO Support for Global Manager
Enabling SAML-Based SSO in ScienceLogic Global Manager

ScienceLogic Global Manager (GM) is a powerful appliance designed to aggregate and display data from multiple SL1 systems, providing a centralized view of your entire infrastructure. Starting with SL1 version 12.2.1, ScienceLogic introduced support for Security Assertion Markup Language (SAML) based Single Sign-On (SSO), simplifying authentication and enhancing security. This guide walks through the process of enabling SAML-based SSO in ScienceLogic Global Manager so that user access can be managed seamlessly and operational efficiency improved.

Why Enable SAML-Based SSO?

Enabling SSO through SAML allows users to log in once and gain access to multiple SL1 systems through the Global Manager, provided the users are already authorized to access the target systems. This streamlines Identity and Access Management (IAM), reduces password fatigue, and strengthens the organization's security posture.

Getting Started

Before beginning, ensure the following is true:

- SL1 Version: 12.2.1 or later
- Access Level: Administrator access to the Global Manager appliance

Prerequisites

- The "SL1: Global Manager" PowerPack has been installed and the child stacks have been discovered.
- There is no platform version mismatch between the GM and the child SL1 stacks.
- The AP2 version across all stacks is, at a minimum, Gelato v8.14.26.
- The child SL1 stacks are configured to authenticate using SSO authentication.
- A local administrator account exists on each child stack that GM can use to authenticate with the child stack.
- The GM SSO authentication resource is configured to authenticate with the same Identity Provider (IdP) configured on the child SL1 stacks.
- The /opt/em7/nextui/nextui.conf file on the GM has the following variables configured. If the GM platform is hosted by ScienceLogic, a Service Request must be raised through the ScienceLogic Support Portal to request the addition of these environment variables:

GM_STACKS_CREDENTIAL=enabled
GM_STACKS_CACHE_TTL_MS=0
GM_SESSION_AUTH_CACHE_TTL_MS=0
GLOBAL_MANAGER_SESSION_COOKIE_CACHE_TTL_MILLIS=0

- Unique SL1 administrator accounts exist on each child stack. These act as a global API key for users, allowing authentication on the child stack. Once a user is authenticated, the user data is loaded onto GM and the request proceeds as normal.

Step 1: Configure Basic/Snippet Credentials

a) Access the GM UI and log on using an administrator account.

b) Navigate to the Credentials page (Manage > Credentials) and select "Create New", followed by "Create Basic/Snippet Credential". A dialog window is presented; complete it with the details below for each child stack, using the administrator credentials that enable GM to authenticate with the child stack:

Field: Value
Name: stack-<stack-id>-gm-auth
All Organizations: Toggled
Timeout (ms): 0
Username: <Target-Child-Stack-Admin-Username>
Password: <Unique Password>
Hostname/IP: <Target-Child-Stack-IP>
Port: 443

c) Perform a credential test using the Credential Tester and confirm that authentication is successful.

Step 2: Credential Alignment - GraphQL

Following the creation of the Basic Credential, each child stack credential must be aligned using a GraphQL (GQL) mutation. The mutation requires the guid of each credential created in Step 1. The following GQL query returns all credentials created in Step 1, provided the credential names contain "GM".
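You can run this query interactively in the GQL Browser (described next) or post it to the /gql endpoint from a terminal. Below is a minimal curl sketch, assuming the endpoint accepts POSTed JSON with basic authentication and that the appliance uses a self-signed certificate (hence -k); the host and account values are placeholders:

curl -k -u "<admin-user>:<admin-password>" \
  -H "Content-Type: application/json" \
  -X POST "https://<GlobalManager_HOST>/gql" \
  -d '{"query":"query allCreds { credentials(search: {name: {contains: \"GM\"}}) { edges { node { id guid name } } } }"}'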
Access the GQL Browser by appending /gql to the GM URL (for example, https://<GlobalManager_HOST>/gql).

Query:

query allCreds {
  credentials(search: {name: {contains: "GM"}}) {
    edges {
      node {
        id
        guid
        name
        definition
      }
    }
  }
}

Example Response:

The example response shows the required guid. Make a note of the guid associated with each credential; it is needed in Step 4.

{
  "data": {
    "credentials": {
      "edges": [
        {
          "node": {
            "id": "41",
            "guid": "3C07AB8B0655A722712C46FA1DF821EA",
            "name": "stack_1_gm_auth",
            "definition": [..]
          }
        }
      ]
    }
  }
}

Step 3: Retrieve GM Stack ID

The following GQL query returns all existing child SL1 stacks present on the GM.

Query:

query getallstacks {
  globalManagerStacks {
    edges {
      node {
        id
        name
        credential {
          guid
          name
        }
      }
    }
  }
}

Example Response:

Note the id values, which represent the GM Stack IDs needed for the next step.

{
  "data": {
    "globalManagerStacks": {
      "edges": [
        [..]
        {
          "node": {
            "id": "3",
            "name": "<sl1_stack_hostname>",
            "credential": null
          }
        }
        [..]
      ]
    }
  }
}

Step 4: GraphQL Credential Mutation

The following GQL mutation aligns the Basic Credential, permitting GM to authenticate with the target child stack.

Mutation:

mutation aligncred {
  alignGlobalManagerCredential(id: <Stack-ID>, credential: "<guid>") {
    id
    name
    credential {
      id
      name
      guid
    }
  }
}

Replace <Stack-ID> with the GM Stack ID for each child stack retrieved in Step 3, and <guid> with the credential GUID from Step 2 that is associated with the same child stack.

Example Mutation Response:

{
  "data": {
    "alignGlobalManagerCredential": {
      "id": "3",
      "name": "<child_stack_name>",
      "credential": {
        "id": "41",
        "name": "stack_1_gm_auth",
        "guid": "3C07AB8B0655A722712C46FA1DF821EA"
      }
    }
  }
}

Repeat the mutation for the remaining child SL1 stacks discovered on GM.

Summary

Enabling SAML-based SSO in ScienceLogic Global Manager streamlines authentication, enhances security, and improves operational efficiency by allowing users to seamlessly access multiple SL1 stacks with a single login. By following the outlined steps (configuring credentials, aligning them via GraphQL, and ensuring proper authentication setup), organizations can integrate SSO effectively while maintaining secure access controls. After completing these steps, users will be able to log in once and have visibility of managed devices across multiple SL1 stacks via GM, enhancing productivity and reducing security risks. By leveraging SAML-based SSO, ScienceLogic not only simplifies access but also strengthens the overall security posture. If you encounter issues, please contact ScienceLogic Support. For further details related to GM setup, refer to the official ScienceLogic documentation.
New How-To Videos: Streamline Event Management with ScienceLogic

Effective event management within ScienceLogic SL1 is key to maintaining visibility across your IT ecosystem and ensuring actionable data. When managed properly, event data can directly impact critical business outcomes, such as Mean Time to Resolution (MTTR). However, an overwhelming volume of events without proper context can create noise, making it difficult to extract the insights needed for timely action. So, how can you optimize event handling in SL1 to leverage the platform's full capabilities?

Introducing the Event Management video series! This collection of seven short videos (1-5 minutes each) covers various aspects of event management within SL1, helping you understand how to fine-tune your configuration and maximize efficiency.

Explore Video Content

Click here to view the full Event Management video series on the Support site's video page, or play the video below to get started exploring one of the videos in the series:

Video Series Overview

- Event Management Overview: An introduction to events in SL1, covering how events are triggered, best practices for managing events, and strategies for maintaining the health and efficiency of your infrastructure.
- Event Categories: Categorize and manage events effectively within SL1 by sorting, searching, filtering, and using role-based access control (RBAC).
- Responding to Events: Acknowledge and take ownership of events, collaborate across teams, and leverage RBAC to ensure smooth event management.
- Event Notification and Automation: Leverage PowerPacks to take actions on events, view essential information, run diagnostic tools, and integrate with your service desk.
- Event Correlation and Masked Events: Correlate events to highlight critical ones, mask less important events, quickly identify root causes, and streamline your response.
- Event Insights: Take advantage of the tools on the Event Insights page to manage alerts through correlation, deduplication, masking, and suppression.
- Event Policies and More: Edit, customize, and create Event Policies, and manage events using regular expressions, auto-expiration, and suppression.

Looking for More How-To Videos?

Stay tuned for the next installment in our Event Management series, and be sure to check out other helpful tutorials on the ScienceLogic Support site's video page.
How to limit discovery in Microsoft Azure

How to Effectively Disable Azure VMs in SL1 Using VM Tags

When managing resources in a dynamic cloud environment, such as Microsoft Azure, optimizing resource utilization and monitoring is crucial. ScienceLogic's Azure PowerPack provides an effective way to control Azure Virtual Machines (VMs) using tags. This feature is particularly beneficial for organizations aiming to streamline operations and reduce costs by automating resource management based on predefined rules.

What Is VM Tagging?

VM tagging in Azure involves assigning metadata to resources in the form of key-value pairs. These tags can help identify, organize, and manage resources based on categories such as environment, department, or project. For example:

- Key: Environment, Value: Development
- Key: Owner, Value: IT-Support

Tagging becomes a powerful tool when integrated with automation policies to control resource behavior dynamically.

How to Add Tags to Azure Virtual Machines

Adding tags to Azure VMs is straightforward and can be done via the Azure Portal, Azure CLI, or PowerShell. Below is a step-by-step guide for the Azure Portal:

1. Log in to the Azure Portal.
2. Navigate to Virtual Machines in the left-hand menu.
3. Select the VM you want to tag from the list.
4. Click on the Tags option in the VM's menu.
5. Add key-value pairs to define your tags. For example:
   Key: Environment, Value: Production
   Key: Owner, Value: Finance
6. Click Save to apply the tags.

Using Azure CLI, you can also add tags with the following command:

az resource tag --tags Environment=Production Owner=Finance --name <resource-name> --resource-group <resource-group-name> --resource-type "Microsoft.Compute/virtualMachines"

Automation with ScienceLogic Azure PowerPack

The ScienceLogic Azure PowerPack includes Run Book Automation (RBA) that uses VM tags to enable or disable data collection. Specifically, the "Disable By VM Tag" action allows administrators to disable monitoring for specific VMs based on their assigned tags. This automation is particularly useful for scenarios such as:

- Disabling monitoring for development or test environments.
- Optimizing monitoring costs by focusing only on production resources.
- Dynamically managing resource monitoring based on organizational policies.

Configuration Steps for Disabling VMs by Tag

1. Modify the "Disable By VM Tag" Action

The first step in implementing this automation is to define the tag criteria. Here is how you can modify the parameters:

- Navigate to the Action Policy Manager page in the SL1 platform.
- Locate the "Microsoft Azure: Disable By VM Tag" action.
- Edit the DISABLE_TAGS snippet with your desired key-value pairs. The format should be:

DISABLE_TAGS = [('Key1', 'Value1'), ('Key2', 'Value2')]

For example, to disable VMs tagged with an Environment key of "Development" or "Test":

DISABLE_TAGS = [('Environment', 'Development'), ('Environment', 'Test')]

- Save the changes to apply your configuration. (A sketch of the kind of matching this list drives appears at the end of this article.)

2. Enable the Necessary Event Policy

ScienceLogic requires enabling the "Component Device Record Created" event policy to trigger automation:

- Go to the Event Policy Manager page.
- Search for the "Component Device Record Created" event policy.
- Set its Operational State to "Enabled."
- Save your changes.

3. Activate the Run Book Automation Policy

To ensure that the automation works as intended:

- Open the Automation Policy Manager.
- Locate the "Microsoft Azure: Disable and Discover from IP" Run Book Automation policy.
- Enable the policy and save the configuration.
4. Preserve Your Configuration

To avoid losing these settings during future updates:

- Navigate to the Behavior Settings page.
- Enable the "Selective PowerPack Field Protection" option.
- Save the changes.

Benefits of Using "Disable By VM Tag" Automation

- Cost Optimization: Reduces monitoring costs by disabling unnecessary data collection.
- Operational Efficiency: Automates routine tasks, allowing teams to focus on critical operations.
- Dynamic Management: Adjusts resource monitoring dynamically based on real-time needs.

Conclusion

Disabling Azure VMs by tag using ScienceLogic's Azure PowerPack is an efficient way to manage resources and control costs. By leveraging automated Run Book Actions and event policies, organizations can enforce consistent monitoring policies while minimizing manual intervention. Start leveraging the "Disable By VM Tag" feature today to enhance your Azure resource management strategy. The Microsoft Azure PowerPack can be downloaded from https://support.sciencelogic.com/s/release-version/aBu0z000000XZSICA4/microsoft-azure
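To make the tag-matching rule concrete, here is a minimal, hypothetical Python sketch of the kind of check the DISABLE_TAGS list drives. This is an illustration only, not the PowerPack's actual Run Book Action code; the vm_tags input and function name are assumptions:

# Hypothetical illustration of DISABLE_TAGS matching; not PowerPack code.
DISABLE_TAGS = [('Environment', 'Development'), ('Environment', 'Test')]

def should_disable(vm_tags):
    """Return True if any (key, value) tag on the VM matches DISABLE_TAGS."""
    return any((key, value) in DISABLE_TAGS for key, value in vm_tags.items())

# A VM tagged as a development machine would have collection disabled:
print(should_disable({'Environment': 'Development', 'Owner': 'IT-Support'}))  # True
print(should_disable({'Environment': 'Production', 'Owner': 'Finance'}))      # False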
2 MIN READ The "Datacenter Automation Utilities", also known as DCA, PowerPack includes run book automation and action policies that assist with general-purpose activities for other installed Automation PowerPacks. Within this powerpack there are multiple RunBooks that assist you in formatting the output of the data. This includes prettifying the data for the SL1 UI or formatting the data to send to other systems such as ServiceNow. This is already configured for Out-of-the-box automations, but there are a few things to consider when using your own commands. With, the data structure must be in a specific format. The data must be passed as a dictionary, the name of the key must be “command_list_out”, the data must be a list of tuples with the tuple containing three objects. The three objects should be: 1. The name of the automation being performed The output of the command/API call/etc. The word “False” or “None”, meaning no further flags need to be passed The RBA output should look something like this for a single command: command = “df -h” (stdin, stdout, stderr) = client.exec_command(command) Output_of_command = stdout.read() EM7_RESULT = {“command_list_out”: [(‘Running df -h’, Output_of_command, None)]} In the above example: “{}” – symbolizes the data structure is a dictionary “command_list_out” – the key of the dictionary “:” – required to separate key and value pair “[]” - symbolizes the data structure is a list “()” - symbolizes the data structure is a tuple “Running df -h” – this is the description of what is being performed, this can be free text and say whatever you would like “,” – separates each index in the tuple “Output_of_command” – this is the output of the command df -h being ran against the device “None” – a requirement that says nothing else to be passed for the custom flags in the python library. The RBA output should look something like this for a multiple command: command1 = “df -h” command2 = “ping -c 15 -i 2 google.com” (stdin, stdout, stderr) = client.exec_command(command1) Output_of_command1 = stdout.read() (stdin, stdout, stderr) = client.exec_command(command2) Output_of_command2 = stdout.read() EM7_RESULT = {“command_list_out”: [(‘Running df -h’, Output_of_command1, None), (‘Running ping, Output_of_command2, None)]} In the above example: “{}” – symbolizes the data structure is a dictionary “command_list_out” – the key of the dictionary “:” – required to separate key and value pair “[]” - symbolizes the data structure is a list “()” - symbolizes the data structure is a tuple “Running df -h” – this is the description of what is being performed, this can be free text and say whatever you would like “,” – separates each index in the tuple “Output_of_command1” – this is the output of the command df -h being ran against the device “Output_of_command2” – this is the output of the command ping being ran against google.com “None” – a requirement that says nothing else to be passed for the custom flags in the python library. Note: The difference with the multiple commands versus the single commands are the extra tuple. Meaning that after the first parenthesis is closed off a comma can now be passed to start the new tuple with the new output of commands.41Views2likes0CommentsUsing Skylar RCA for Root Cause Analysis
Using Skylar RCA for Root Cause Analysis

This article assumes you already have a Skylar RCA account. If not, please contact your CSM for a 30-day trial of the product.

Step 1: Contact ScienceLogic support to obtain a copy of the OTel collector.

Step 2: Install the OTel collector as per the installation steps (see the References section below).

Step 3: Update the OTel configuration file. This is the otelcol.yaml file in the otelcol-sciencelogic-zebrium_x86_64 directory. The following fields will need to be updated:

- The include attribute in the filelog block, to match the log file location(s).
- The regex in the operators > type block. This needs to match the log file format. As a best practice, use a regular expression checker (for example, https://regex101.com/) to check your regular expression before updating the configuration file.
- The endpoint and ze_token sections in the exporters block. These need to be copied from your Skylar RCA instance.

Step 4: Before sending logs to Skylar, it is recommended that the configuration is tested with local debugging. This can be achieved by using exporters: [debug] in the service: pipelines: logs: section of the otelcol.yaml config file. Also, in the receivers: filelog: section, add the line start_at: beginning to force the collector to read logs from the beginning. This will generate a log file in the logs sub-directory.

Step 5: Restart the SciencelogicZebriumOpenTelemetryCollector service.

Step 6: Once you are happy with the debug output, modify the config file so that logs will be sent to Skylar RCA. Remember to restart the SciencelogicZebriumOpenTelemetryCollector service.

Step 7: After a few minutes, check the Ingest History on the Skylar UI (in Ingest-history) to verify data is being received. The Diagnostics menu can also provide useful information about how many log lines were received in the last 4 hours: go to the Diagnostics menu and click the "Run Now" button.

References:

Skylar Automated RCA documentation: https://docs.sciencelogic.com/latest/Content/Web_Zebrium/home_RCA.htm
Windows OTel collector: https://docs.sciencelogic.com/latest/Content/Web_Zebrium/03_Log_Collectors_Uploads/Windows_OTel.html
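As a companion to Steps 3 and 4, here is a heavily trimmed, hypothetical sketch of how these otelcol.yaml sections fit together. The paths, regex, exporter block name, endpoint, and token are placeholders; keep the section names shipped in your collector package and substitute the values from your Skylar RCA instance:

receivers:
  filelog:
    include: ["C:/app/logs/*.log"]   # placeholder: your log file location(s)
    start_at: beginning              # optional while testing: read from the start
    operators:
      - type: regex_parser           # placeholder regex: must match your log format
        regex: '^(?P<time>\S+ \S+) (?P<severity>\w+) (?P<message>.*)$'

exporters:
  debug: {}                          # local debugging before sending to Skylar
  zebrium/skylar:                    # placeholder name: use the exporter from your package
    endpoint: "https://<your-instance>/log/api/v2/ingest"  # copy from Skylar RCA
    ze_token: "<your-ze-token>"                            # copy from Skylar RCA

service:
  pipelines:
    logs:
      receivers: [filelog]
      exporters: [debug]             # switch to the Skylar exporter once verified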
Welcome to the Pro Services Blog

Hello Nexus Community Members,

Our professional services blog provides expert insights, industry trends, and practical advice to help businesses and professionals navigate challenges and seize opportunities. Stay informed with thought leadership, best practices, and strategies for success.

Meet our Bloggers

EugeneC

Based in Taiwan, Eugene has been a key member of ScienceLogic's Expert Services team within the Professional Services group since June 2019. With a strong background in solution architecture, integrations, and automation, he specializes in designing and implementing scalable, sustainable solutions tailored to customer needs. Eugene has extensive experience with ScienceLogic SL1, PowerFlow, CMDB integrations, ITSM workflows, and event-driven automation. He is also passionate about internal knowledge sharing, customer engagement, and building reusable templates to drive efficiency and innovation.

UsmanKhan

Usman is a senior consultant in the EMEA Customer Experience team at ScienceLogic, based in Reading, UK. Since joining ScienceLogic in 2019, he has focused on designing and implementing integration solutions for SL1, particularly with ITSM and CMDB platforms. Over the years, he has worked with a wide range of customers, helping them tailor SL1 to their needs and optimize their monitoring and automation strategies. In recent years, Usman has taken on a technical leadership role in the EMEA region, leading the integration of SL1 using PowerFlow to drive complex automation and streamline IT operations for enterprise customers. With a background in software development and solution architecture, he works closely with organizations to build solutions that improve efficiency and maximize the value of their SL1 investment.

LasithaL

Lasitha is a senior member of the EMEA Professional Services team, which is now part of the Customer Experience team. He is based in London, UK. Lasitha has been with ScienceLogic since 2016. Since then he has worked with over 100 customers and partners from EMEA, America, and Australia on deployment, configuration, and customization of SL1, as well as providing consultancy on various aspects of SL1 such as best practices, noise reduction, and customization. Recently he has worked on Skylar RCA implementation and SL1 integration. Lasitha is certified in ITIL and Project Management.

Qasim

Qasim Latif, Director of Solution Architecture at ScienceLogic, is a strategic leader helping organizations transform their IT operations. With deep expertise in automation, observability, and AIOps, he enables businesses across industries to optimize their dynamic technology environments with the ScienceLogic suite. Qasim works closely with customers to deliver intelligent, automated solutions that enhance efficiency, resilience, and real-time visibility. A dedicated mentor and innovation advocate, he guides organizations through complex migrations, large-scale integrations, and automation strategies, helping them stay ahead in an ever-evolving digital landscape.

Kashif

Kashif is an experienced IT professional specializing in ScienceLogic SL1, with extensive expertise in platform deployment, automation, and integrations. Over the years, he has successfully implemented and optimized SL1 solutions, configuring PowerFlow, developing custom automation, and enhancing monitoring capabilities to meet diverse business needs.
His experience includes setting up high availability (HA) and disaster recovery (DR) environments, conducting system health checks, and streamlining workflows for improved operational efficiency. He is passionate about leveraging SL1 to drive automation, improve system visibility, and deliver scalable monitoring solutions that empower organizations to manage their IT ecosystems effectively.

MarkMunford

Mark lives in Sale, Manchester (UK) with his wife and two kids. He's a keen tennis player and captains the 1st team at his club. He enjoys DIY and fixing things, and is called on by family and friends when something isn't working! He started his IT career as a Microsoft Server engineer at a carpet manufacturer and then worked for an MSP, gaining skills in all technologies by supporting, installing, and architecting solutions. He enjoyed roles as Support and Operations Manager before moving to ScienceLogic as a Customer Success Manager in 2021. His role is now focused on understanding the goals and objectives of his customers and working with them to build a plan to achieve them together. He looks forward to engaging with all of you here on Nexus!

YaserQ

Based in the UK, Yaser has been a key part of ScienceLogic's Expert Services team within the Professional Services group for over three years. Before joining ScienceLogic, Yaser worked as a Systems Engineer at a major UK telecommunications company, where he focused on designing and implementing network security and monitoring solutions for public sector organizations. With a BSc in Computer Science and certifications including CISSP and CEH, Yaser brings deep technical knowledge and practical experience to every project, helping organizations enhance their IT operations.

Looking forward to the expertise and contributions from this team for our Nexus Community Members.
New Restorepoint Training Update: Expand Your NCCM Skills with Fresh Content

ScienceLogic is excited to announce updates to the Restorepoint training series, now featuring additional content designed to help you build your Network Configuration and Change Management (NCCM) expertise. New training updates include expanded guidance on these topics:

- Deployment Guidance and Architecture: Understand best practices for deploying Restorepoint and key architecture components.
- Installation Instructions for Various Environments: Step-by-step guidance for installing Restorepoint in different deployment environments.
- Expert-Led Walkthroughs: Learn directly from Subject Matter Experts with easy-to-follow video tutorials.

What's Included in the Full Restorepoint Training Series

These new updates are now part of the recently launched Restorepoint: Compliance-Focused NCCM training series, available through your ScienceLogic University portal. In the full training series, you'll gain a comprehensive understanding of the platform and its capabilities, including:

- Restorepoint Platform Operation: Learn the basics of navigation, administration, and operational aspects of Restorepoint.
- Network Configuration Backups & Change Automation: Learn how to automate network changes, ensure compliance, and perform seamless configuration backups.
- Integrations and Plugins: Discover how Restorepoint integrates with other systems and tools to streamline your workflows.

Hear From Our Customers

Over 90 customers have already completed the Restorepoint training. Here's what they had to say about their experience:

"I now have an understanding of the Restorepoint platform, and how to apply the benefits to my work flow."
"I enjoyed the examples of how you can utilize automation and cross-platform communication."
"It was most valuable getting to see some of the SL1-to-Restorepoint interactions being carried out automatically."

How to Enroll

Join the growing number of professionals expanding their knowledge and improving their NCCM capabilities with the Restorepoint: Compliance-Focused NCCM training series. Click here to log in or register for ScienceLogic University and start your learning journey today!
Video: Getting Started with PowerPacks to Monitor, Synchronize, and Automate Your IT Environment

ScienceLogic's extensive library of pre-built integrations, known as PowerPacks, empowers you to streamline your IT workflows and ensure seamless data flow across your entire environment. By leveraging PowerPacks, you can gain comprehensive visibility into your technologies, applications, and services, while automating critical IT processes to keep pace with the ever-changing demands of your business. What's more, you have the flexibility to extend your platform by creating custom PowerPacks tailored to your specific needs.

PowerPacks: What You Need to Know

PowerPacks are essential building blocks for optimizing your SL1 platform, and here's what you should know about them:

- Importable & Exportable Packages: PowerPacks are customizable packages that you can import and export to manage data seamlessly across your IT environment.
- Default PowerPacks: Several PowerPacks come pre-installed in your SL1 platform, providing immediate value out of the box.
- Customization & Updates: You can download, install, and update additional PowerPacks to suit your organization's evolving needs from the ScienceLogic Support page.
- Sharing Capabilities: PowerPacks allow you to share custom configurations, and also download curated content from ScienceLogic to enhance your platform.

How to Install and Explore PowerPacks

Ready to dive in? Watch the video below to get step-by-step guidance on how to download, install, and explore the contents of PowerPacks. It's a great starting point for anyone looking to take full advantage of this powerful feature.

Ready to Unlock the Full Potential of Your SL1 Platform?

If you're looking to take your SL1 platform to the next level, consider customizing PowerPacks to align with your specific organizational goals. Log into the ScienceLogic Support Page, navigate to Product Downloads > SL1 Studio, and explore a suite of low-code tools and resources that will help you extend your platform and drive more value from your investments.
Looking Back – ScienceLogic's Top Five Moments of 2024

As we close the chapter on 2024, we want to extend our heartfelt gratitude to you, our customers and partners, for inspiring and driving us forward. The holiday season offers a unique moment to celebrate shared achievements, reflect on milestones, and set the stage for an even brighter future. This year has been one of growth, innovation, and collaboration, and we are thrilled to share ScienceLogic's Top Five Moments of 2024 with you.

#1: Launching Nexus – Your Gateway to the Future of Collaboration

One of our proudest moments this year was the unveiling of Nexus, ScienceLogic's cutting-edge customer community. Nexus isn't just a platform; it's a dynamic ecosystem where ideas flourish, connections strengthen, and innovation takes flight. With access to expert guidance, peer insights, and a wealth of resources, Nexus empowers you to thrive in the ever-changing IT landscape. This community is your space to lead, learn, and shape the future. If you haven't already, we invite you to register to unlock the power of community and be part of this transformative journey. Register on Nexus Today

#2: Celebrating Excellence – The 2024 Innovators Award Winners

This year, we celebrated the visionaries driving groundbreaking AIOps transformations with our 2024 Innovators Awards. These trailblazers have leveraged the ScienceLogic AI Platform to create meaningful, lasting impacts in their organizations and industries. Their stories inspire us to continue delivering solutions that empower our customers to achieve their boldest goals. Meet the Innovators

#3: SL1 Earns FedRAMP Certification – A Milestone in Public Sector Excellence

A defining moment this year was achieving "In Process" status with FedRAMP for our Government Cloud platform, now listed in the FedRAMP Marketplace. Building on our decade-long DoDIN APL certification for on-premise solutions, this step underscores our commitment to delivering secure, unified IT management for the public sector. With dual certifications and an ongoing pursuit of FedRAMP Moderate authorization, ScienceLogic continues to set the standard for compliance, security, and innovation. Learn More

#4: PowerHour – Delivering Expertise at Your Fingertips

This year, PowerHour became the go-to resource for hands-on learning and actionable insights. Designed to help you unlock the full potential of our solutions, this program empowers you to stay ahead in a fast-paced industry. Whether live or on-demand, each session delivers practical strategies and expert guidance. As we gear up for 2025, we remain committed to creating even more opportunities to equip you for success. Access 2024 On-Demand Sessions

#5: Redefining Innovation with Skylar AI

This year marked the debut of Skylar AI, our latest platform enhancement designed to push the boundaries of Autonomic IT. With Skylar RCA and Analytics now generally available, the platform empowers teams to make data-driven decisions, automate complex workflows, and deliver measurable business outcomes. Looking ahead to 2025, we're thrilled to bring Skylar Advisor to life, a groundbreaking addition that will revolutionize the IT landscape. Discover the Skylar Roadmap

As we reflect on 2024, we're reminded that every milestone is a shared victory. Your trust, partnership, and commitment inspire us to aim higher, innovate further, and deliver even greater value. Here's to another year of bold advancements, stronger relationships, and unparalleled success. Stay tuned: 2025 promises to be our most transformative year yet.
Wishing you a joyful holiday season and a prosperous New Year!

Remediation with Restorepoint (Part I)
The Basics

It's important to understand that remediation options are part of the compliance rule definition, not the policy. That means a single policy can contain rules with different remediation options. To see the remediation options, go to Compliance --> Device Policies, open up a policy, and bring up the rule editor by either creating a new rule or selecting an existing one. You will see the "Remediation" drop-down menu:

1 - Remediation Type "Manual"

The first and simplest remediation type is "Manual". This is simply a text string providing instructions to an operator who is responding to a compliance alert. For example, a simple rule that checks for the existence of a default "public" SNMP community on a Cisco IOS device could have these very simple instructions:

When a device is in violation of this rule, the remediation text will be included in the alert that gets generated. Here, in an email alert:

2 - Remediation Type "Automatic"

The second remediation type, "Automatic", lets you specify a series of commands to execute on the device. For example, to enable auto-remediation of our example "No Public SNMP Community" rule, you could run the "no snmp-server community public" IOS command:

When a device is in violation of this rule, the specified commands are automatically executed on the device, bringing it back into compliance.

3 - Remediation Type "Command"

The final remediation type, "Command", is similar to "Automatic" except that, instead of entering the commands to run on the device, you specify a previously saved Device Control script to run. In our example:

Here, the "Remove Public SNMP Community" script has previously been saved and contains the same commands we used in the "Automatic" example:

Since device controls can be created as Lua scripts instead of simple lists of commands, using the "Command" remediation type allows for more complex actions.

Summary

The goal of this article was to introduce the different remediation options in Restorepoint. Remember: you don't have to add remediation steps to every rule in a policy, and the ones you do add don't have to be of the same type. Even if you are not ready to enable automatic reconfiguration of devices in your environment, don't be afraid to add a "Manual" remediation action to your compliance rules. Coming soon, I'll post a follow-up article about using variables and Lua scripting to improve on the simple remediations we used today.
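In the meantime, here is a minimal, hypothetical Lua sketch of what a device control for the SNMP example might look like. The helper names (rp.exec and rp.log) are placeholders invented for illustration, not Restorepoint's documented scripting API:

-- Hypothetical device control: remove the default "public" SNMP community.
-- rp.exec and rp.log are placeholder helpers, not Restorepoint's real API.
local output = rp.exec("show running-config | include snmp-server community")
if output and output:find("public", 1, true) then
    -- Only reconfigure when the offending community actually exists.
    rp.exec("configure terminal")
    rp.exec("no snmp-server community public")
    rp.exec("end")
    rp.log("Removed default public SNMP community")
else
    rp.log("No public SNMP community found; nothing to do")
end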