How to Set Up an NFS Server for SL1 Backups
Backing up your ScienceLogic SL1 database is essential for ensuring data integrity and disaster recovery. One effective way to store backups is by setting up a Network File System (NFS) server. NFS allows you to share a directory across multiple machines, making it an ideal solution for centralized SL1 backups. This guide walks you through the process of installing and configuring an NFS server to store SL1 backups.

Step 1: Install the NFS Server

Before setting up the NFS server, ensure that your Linux machine has the necessary NFS packages installed. If the nfs-server package is missing, install it.

For RHEL, CentOS, Rocky Linux, or AlmaLinux:

    sudo yum install -y nfs-utils

For Ubuntu or Debian:

    sudo apt update
    sudo apt install -y nfs-kernel-server

After installation, start and enable the NFS service:

    sudo systemctl start nfs-server
    sudo systemctl enable nfs-server

Verify the NFS server is running:

    sudo systemctl status nfs-server

If it is not running, restart it:

    sudo systemctl restart nfs-server

Step 2: Configure the NFS Server

Once NFS is installed, follow these steps to configure the shared directory for SL1 backups.

1. Create a Backup Directory

    sudo mkdir -p /backups
    sudo chmod 777 /backups

2. Set a Fixed Port for mountd

On Ubuntu/Debian, edit /etc/default/nfs-kernel-server and add:

    RPCMOUNTDOPTS="--port 20048"

On RHEL/CentOS/Rocky/Oracle, edit /etc/sysconfig/nfs and add:

    MOUNTD_PORT=20048

This ensures the mountd service always uses port 20048, making firewall configuration simpler and more secure.

3. Define NFS Exports

Edit the /etc/exports file to specify which clients can access the NFS share:

    sudo vi /etc/exports

Add the following line, replacing <SL1 database server IP> with the IP address of your SL1 database server:

    /backups <SL1 database server IP>(rw,sync,no_root_squash,no_all_squash)

This configuration allows the SL1 server to read and write (rw) to /backups, ensures data consistency (sync), and prevents permission issues.
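To make the export-line syntax concrete, here is a small Python sketch. The helper name, client address, and option list are illustrative assumptions for this article, not part of NFS tooling or SL1; it simply shows how the path, client, and option fields of an /etc/exports entry fit together:

```python
# Hypothetical helper: builds one /etc/exports line for a list of client IPs.
# Substitute your SL1 database server's real address for the example IP.
def exports_line(path, clients,
                 options=("rw", "sync", "no_root_squash", "no_all_squash")):
    """Return an /etc/exports line granting each client the given options."""
    opts = ",".join(options)
    # exports syntax: <path> <client1>(opts) <client2>(opts) ...
    return " ".join([path] + [f"{ip}({opts})" for ip in clients])

line = exports_line("/backups", ["192.0.2.10"])
print(line)
# -> /backups 192.0.2.10(rw,sync,no_root_squash,no_all_squash)
```

Each additional client simply becomes another `<client>(options)` token on the same line, which is why a single export can serve several SL1 appliances.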
4. Apply the NFS Configuration

Run the following command to apply the changes:

    sudo exportfs -a

Restart the NFS service to ensure the changes take effect:

    sudo systemctl restart nfs-server

Step 3: Configure Firewall Rules for NFS

If a firewall is enabled on your NFS server, you must allow NFS-related services. Run the following commands to open the necessary ports:

    sudo firewall-cmd --permanent --add-service=mountd
    sudo firewall-cmd --permanent --add-service=rpc-bind
    sudo firewall-cmd --permanent --add-service=nfs
    sudo firewall-cmd --reload

If the firewall-cmd command is not found on your device, use one of the alternatives below.

1. Using iptables (for RHEL, CentOS, Debian, or older distributions)

If your system uses iptables, you can manually allow NFS traffic with the following commands:

    sudo iptables -A INPUT -p tcp --dport 2049 -j ACCEPT   # NFS
    sudo iptables -A INPUT -p tcp --dport 111 -j ACCEPT    # Portmapper
    sudo iptables -A INPUT -p tcp --dport 20048 -j ACCEPT  # Fixed mountd port
    sudo iptables -A INPUT -p udp --dport 2049 -j ACCEPT
    sudo iptables -A INPUT -p udp --dport 111 -j ACCEPT
    sudo iptables -A INPUT -p udp --dport 20048 -j ACCEPT

Save the rules:

    sudo iptables-save | sudo tee /etc/sysconfig/iptables

To restart iptables and apply the rules:

    sudo systemctl restart iptables

2. Using UFW (for Ubuntu and Debian)

If your system uses ufw (Uncomplicated Firewall), enable NFS traffic with the following commands.

Enable UFW (if it is inactive):

    sudo ufw enable

Allow the NFS, rpcbind, and mountd ports:

    sudo ufw allow 2049/tcp   # NFS
    sudo ufw allow 111/tcp    # rpcbind
    sudo ufw allow 111/udp
    sudo ufw allow 20048/tcp  # mountd (fixed)
    sudo ufw allow 20048/udp

To apply the changes:

    sudo ufw reload

To check that the rules were added:

    sudo ufw status

Step 4: Verify the NFS Server

To confirm that the NFS share is accessible, use the following command:

    showmount -e <NFS server IP>

If the setup is correct, you should see /backups listed as an exported directory:
    Export list for <NFS server IP>:
    /backups <NFS client IP>

Next Steps: Mount the NFS Share on SL1 Database Server

Now that the NFS server is set up, you need to mount the share on your SL1 database server to store backups. For step-by-step instructions on mounting an NFS share in SL1, refer to the official ScienceLogic documentation, Backup Management.

Mastering Terminal Security: Why TMUX Matters in Modern Enterprise Environments
In the evolving landscape of enterprise IT, security isn't a feature—it's a foundation. As organizations grow more distributed and systems become increasingly complex, securing terminal sessions accessed through SSH is a mission-critical component of any corporate security posture. One tool rising in prominence for its role in fortifying SSH access control is tmux, and it's more than just a handy utility—it's a security enabler.

As part of ScienceLogic's "harden the foundation" initiative, the SL1 platform on release 12.2.1 or later introduces improved tmux session control capabilities to meet industry-leading security standards.

ScienceLogic TMUX resources:

    SL1 Release Notes
    KB Article: What is TMUX and why is it now default on SL1?
    KB Article: Unable to Copy or Paste Text in SSH Sessions
    TMUX Configuration Cheat Sheet
    Increase ITerm TMUX Window

What is TMUX?

tmux (short for terminal multiplexer) is a command-line tool that allows users to open and manage multiple terminal sessions from a single SSH connection. Think of it as a window manager for your terminal—enabling users to split screens, scroll through logs, copy/paste content, and manage persistent sessions across disconnects.

tmux now runs by default when you SSH into an SL1 system. This isn't just a user experience enhancement—it's a strategic security upgrade aligned with best practices in access control and session management.

Why TMUX Matters for Security

Security teams understand that idle or abandoned SSH sessions pose real risks—whether from unauthorized access, lateral movement, or session hijacking. The introduction of tmux into the SL1 platform adds several critical controls to mitigate these risks:

Automatic Session Locking: Idle sessions lock automatically after 15 minutes or immediately upon unclean disconnects. This dramatically reduces the attack surface of unattended sessions.
Session Persistence and Recovery: tmux can reattach to previous sessions on reconnect, preserving state without sacrificing security—great for admin continuity.

Supervised Access: With tmux, authorized users can monitor or even share terminal sessions for auditing or support—without giving up full shell access.

Value for Platform Teams and Security Officers

For platform and security leaders, enabling tmux by default means:

Stronger Compliance Posture: Session supervision, activity auditing, and inactivity timeouts align with frameworks like NIST 800-53, CIS Controls, and ISO 27001.

Reduced Operational Risk: Dropped sessions and orphaned shells are automatically managed—minimizing both user frustration and security exposure.

Enhanced Administrator Efficiency: Features like scroll-back search, split panes, and built-in clipboard handling streamline complex workflows across systems.

In essence, tmux isn't just helping sysadmins—it's helping CISOs sleep better.

Risks of Not Using TMUX

Choosing not to enable or enforce tmux in enterprise environments comes with hidden but serious risks:

Unsecured Idle Sessions: Without timeouts or auto-locks, sessions left open are ripe for misuse or compromise.

Poor Session Traceability: Lack of visibility into session states and handoffs creates audit and accountability gaps.

Reduced Resilience: A dropped SSH connection can lead to lost work, misconfigurations, or operational inefficiencies—especially in multi-user environments.

In contrast, tmux provides a clean, consistent, and secure environment for every shell session—backed by real-world enterprise needs.

Final Thoughts

The addition of tmux to SL1's default SSH environment reflects a broader industry trend: security is shifting left, right into the command line. For platform teams, this isn't just a convenience—it's a call to action.
Enabling tmux is a simple yet powerful way to align with security policies, improve admin workflows, and fortify your infrastructure.

Convert Customization to PowerFlow Jinja Template
Sometimes when syncing devices from SL1 into ServiceNow as Configuration Items there can be a mismatch: ServiceNow may list the name as a Fully Qualified Domain Name (FQDN), while SL1 uses the short name. This setting can be updated in SL1, but in some cases the SL1 team would rather see the short name than the FQDN. This can be set up on a per SL1 Device Class basis.

PowerFlow

Using the following Jinja2 "if statement", the device name synced from SL1 can be switched to use the SL1 "Device Hostname" instead for Microsoft SQL Server Databases. This excerpt of code goes under attribute mappings, for name on the ScienceLogic side mapping to name on the ServiceNow side:

    {%- set output = [] -%}
    {%- if (device.device_class|trim) in ['Microsoft | SQL Server Database'] -%}
    {%- set output = device.hostname -%}
    {%- else -%}
    {%- set output = device.name -%}
    {%- endif -%}
    {{ output }}

How to Configure SSO Support for Global Manager
Enabling SAML-Based SSO in ScienceLogic Global Manager

ScienceLogic Global Manager (GM) is a powerful appliance designed to aggregate and display data from multiple SL1 systems, providing a centralized view of your entire infrastructure. Starting with SL1 version 12.2.1, ScienceLogic introduced support for Security Assertion Markup Language (SAML) based Single Sign-On (SSO), simplifying authentication and enhancing security. This guide walks through the process of enabling SAML-based SSO in ScienceLogic Global Manager, so that user access can be managed seamlessly and operational efficiency improved.

Why Enable SAML-Based SSO?

Enabling SSO through SAML allows users to log in once and gain access to multiple SL1 systems through the Global Manager, provided the users are already authorized to access the target systems. This streamlines Identity and Access Management (IAM), reduces password fatigue, and strengthens the organization's security posture.

Getting Started

Before beginning, ensure the following is true:

    SL1 Version: 12.2.1 or later
    Access Level: Administrator access to the Global Manager appliance.

Prerequisites:

    ScienceLogic assumes that the "SL1: Global Manager" PowerPack has been installed and the child stacks have been discovered.
    No platform version mismatch between the GM and the child SL1 stacks.
    The AP2 version across all stacks must be at minimum Gelato v8.14.26.
    The child SL1 stacks are configured to authenticate using SSO authentication.
    A local administrator account must exist on each child stack that GM can use to authenticate with the child stack.
    The GM SSO authentication resource must be configured to authenticate with the same Identity Provider (IdP) configured on the child SL1 stacks.
    The /opt/em7/nextui/nextui.conf file on the GM must have the following variables configured. If the GM platform is hosted by ScienceLogic, a Service Request must be raised using the ScienceLogic Support Portal here to request addition of the environment variables:

        GM_STACKS_CREDENTIAL=enabled
        GM_STACKS_CACHE_TTL_MS=0
        GM_SESSION_AUTH_CACHE_TTL_MS=0
        GLOBAL_MANAGER_SESSION_COOKIE_CACHE_TTL_MILLIS=0

    Unique SL1 Administrator accounts must exist on each child stack – these act as a global API key for users, allowing authentication on the child stack. Once a user is authenticated, the user data is loaded onto GM and the request proceeds as normal.

Step 1: Configure Basic/Snippet Credentials

a) Access the GM UI and log on using an Administrator account.

b) Navigate to the Credentials page (Manage > Credentials) and select 'Create New' followed by 'Create Basic/Snippet Credential'. A dialog window will be presented; this must be completed with the details listed below for each child stack, using the Administrator credentials, to enable GM to authenticate with the child stacks.

    Name:              stack-<stack-id>-gm-auth
    All Organizations: Toggled
    Timeout (ms):      0
    Username:          <Target-Child-Stack-Admin-Username>
    Password:          Unique Password
    Hostname/IP:       <Target-Child-Stack-IP>
    Port:              443

c) Perform a credential test using the Credential Tester and confirm the authentication is successful.

Step 2: Credential Alignment - GraphQL

Following the creation of the Basic Credential, each child stack credential must be aligned using a GraphQL (GQL) mutation – the mutation requires supplying the 'guid' of the credentials created in Step 1 above. The following GQL will return all credentials created in Step 1, provided the credential names contain 'GM'. Access the GQL Browser by appending /gql to the GM URL, i.e.
https://<GlobalManager_HOST>/gql – this will provide access to the GQL Browser.

Query:

    query allCreds {
      credentials(search: {name: {contains: "GM"}}) {
        edges {
          node {
            id
            guid
            name
            definition
          }
        }
      }
    }

Example Response:

The example response shows the required 'guid' – make a note of the 'guid' associated with each credential for Step 4.

    {
      "data": {
        "credentials": {
          "edges": [
            {
              "node": {
                "id": "41",
                "guid": "3C07AB8B0655A722712C46FA1DF821EA",
                "name": "stack_1_gm_auth",
                "definition": [..]
              }
            }
          ]
        }
      }
    }

Step 3: Retrieve GM Stack ID

The following GQL will return all existing child SL1 stacks present on the GM.

Query:

    query getallstacks {
      globalManagerStacks {
        edges {
          node {
            id
            name
            credential {
              guid
              name
            }
          }
        }
      }
    }

Example Response:

Note the 'id' values representing the GM Stack-IDs for the next step.

    {
      "data": {
        "globalManagerStacks": {
          "edges": [
            [..]
            {
              "node": {
                "id": "3",
                "name": "<sl1_stack_hostname>",
                "credential": null
              }
            }
            [..]
          ]
        }
      }
    }

Step 4: GraphQL Credential Mutation

The following GQL mutation will align the Basic Credential to permit GM to authenticate with the target child stacks.

Mutation:

    mutation aligncred {
      alignGlobalManagerCredential(id: <Stack-ID>, credential: "<guid>") {
        id
        name
        credential {
          id
          name
          guid
        }
      }
    }

Replace:

    <Stack-ID> with the GM Stack-ID for each child stack retrieved in Step 3.
    <guid> with the credential GUID from Step 2 that is associated with the same child stack.

Example Mutation Response:

    {
      "data": {
        "alignGlobalManagerCredential": {
          "id": "3",
          "name": "<child_stack_name>",
          "credential": {
            "id": "41",
            "name": "stack_1_gm_auth",
            "guid": "3C07AB8B0655A722712C46FA1DF821EA"
          }
        }
      }
    }

Repeat the above mutation for the remaining child SL1 stacks discovered on GM.

Summary

Enabling SAML-based SSO in ScienceLogic Global Manager streamlines authentication, enhances security, and improves operational efficiency by allowing users to seamlessly access multiple SL1 stacks with a single login.
By following the outlined steps—configuring credentials, aligning them via GraphQL, and ensuring proper authentication setup—organizations can integrate SSO effectively while maintaining secure access controls. After completing these steps, users will be able to log in once and have visibility of managed devices across multiple SL1 stacks via GM, enhancing productivity and reducing security risks. By leveraging SAML-based SSO, ScienceLogic not only simplifies access but also strengthens the overall security posture.

If any issues are encountered, please contact ScienceLogic Support here. For further details related to GM setup, refer to the official ScienceLogic documentation here.

New How-To Videos: Streamline Event Management with ScienceLogic
Effective event management within ScienceLogic SL1 is key to maintaining visibility across your IT ecosystem and ensuring actionable data. When managed properly, event data can directly impact critical business outcomes, such as Mean Time to Resolution (MTTR). However, an overwhelming volume of events without proper context can create noise, making it difficult to extract the insights needed for timely action. So, how can you optimize event handling in SL1 to leverage the platform's full capabilities?

Introducing the Event Management video series! This collection of seven short videos (1-5 minutes each) covers various aspects of event management within SL1, helping you understand how to fine-tune your configuration and maximize efficiency.

Explore Video Content

Click here to view the full Event Management video series on the Support site's video page, or play the video below to get started exploring one of the videos in the series:

Video Series Overview

    Event Management Overview: An introduction to events in SL1, covering how events are triggered, best practices for managing events, and strategies for maintaining the health and efficiency of your infrastructure.
    Event Categories: Categorize and manage events effectively within SL1 by sorting, searching, filtering, and using role-based access control (RBAC).
    Responding to Events: Acknowledge and take ownership of events, collaborate across teams, and leverage RBAC to ensure smooth event management.
    Event Notification and Automation: Leverage PowerPacks to take actions on events, view essential information, run diagnostic tools, and integrate with your service desk.
    Event Correlation and Masked Events: Correlate events to highlight critical ones, mask less important events, and quickly identify root causes and streamline your response.
    Event Insights: Take advantage of the tools on the Event Insights page to manage alerts through correlation, deduplication, masking, and suppression.
    Event Policies and More: Edit, customize, and create Event Policies, and manage events using regular expressions, auto-expiration, and suppression.

Looking for More How-To Videos?

Stay tuned for the next installment in our Event Management series, and be sure to check out other helpful tutorials on the ScienceLogic Support site's video page.

How to limit discovery in Microsoft Azure
How to Effectively Disable Azure VMs in SL1 Using VM Tags

When managing resources in a dynamic cloud environment, such as Microsoft Azure, optimizing resource utilization and monitoring is crucial. ScienceLogic's Azure PowerPack provides an effective way to control Azure Virtual Machines (VMs) using tags. This feature is particularly beneficial for organizations aiming to streamline operations and reduce costs by automating resource management based on predefined rules.

What Is VM Tagging?

VM tagging in Azure involves assigning metadata to resources in the form of key-value pairs. These tags can help identify, organize, and manage resources based on categories such as environment, department, or project. For example:

    Key: Environment, Value: Development
    Key: Owner, Value: IT-Support

Tagging becomes a powerful tool when integrated with automation policies to control resource behavior dynamically.

How to Add Tags to Azure Virtual Machines

Adding tags to Azure VMs is straightforward and can be done via the Azure Portal, Azure CLI, or PowerShell. Below is a step-by-step guide for the Azure Portal:

1. Log in to the Azure Portal.
2. Navigate to Virtual Machines in the left-hand menu.
3. Select the VM you want to tag from the list.
4. Click on the Tags option in the VM's menu.
5. Add key-value pairs to define your tags. For example:
       Key: Environment, Value: Production
       Key: Owner, Value: Finance
6. Click Save to apply the tags.

Using the Azure CLI, you can also add tags with the following command:

    az resource tag --tags Environment=Production Owner=Finance --name <resource-name> --resource-group <resource-group-name> --resource-type "Microsoft.Compute/virtualMachines"

Automation with ScienceLogic Azure PowerPack

The ScienceLogic Azure PowerPack introduces a Run Book Automation (RBA) that uses VM tags to enable or disable data collection. Specifically, the "Disable By VM Tag" action allows administrators to disable monitoring for specific VMs based on their assigned tags.
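Conceptually, the action compares each VM's tags against a list of (key, value) pairs. The following simplified Python sketch illustrates that matching idea only; it is not the PowerPack's actual snippet code, and the function name and sample VMs are invented for the example:

```python
# Simplified sketch of tag-based disable logic -- NOT the PowerPack's real
# run book snippet. DISABLE_TAGS mirrors the (key, value) pair format the
# "Disable By VM Tag" action expects.
DISABLE_TAGS = [("Environment", "Development"), ("Environment", "Test")]

def should_disable(vm_tags, disable_tags=DISABLE_TAGS):
    """Return True if any of the VM's tags matches a disable rule."""
    return any(vm_tags.get(key) == value for key, value in disable_tags)

dev_vm = {"Environment": "Development", "Owner": "IT-Support"}
prod_vm = {"Environment": "Production", "Owner": "Finance"}
print(should_disable(dev_vm))   # True  -> collection would be disabled
print(should_disable(prod_vm))  # False -> collection stays enabled
```

Because matching is exact on both key and value, a VM tagged Environment=Production is untouched even though it shares the Environment key with a disable rule.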
This automation is particularly useful for scenarios such as:

    Disabling monitoring for development or test environments.
    Optimizing monitoring costs by focusing only on production resources.
    Dynamically managing resource monitoring based on organizational policies.

Configuration Steps for Disabling VMs by Tag

1. Modify the "Disable By VM Tag" Action

The first step in implementing this automation is to define the tag criteria. Here's how you can modify the parameters:

1. Navigate to the Action Policy Manager page in the SL1 platform.
2. Locate the "Microsoft Azure: Disable By VM Tag" action.
3. Edit the DISABLE_TAGS snippet with your desired key-value pairs. The format should be:

       DISABLE_TAGS = [('Key1', 'Value1'), ('Key2', 'Value2')]

   For example, to disable VMs tagged with an Environment key of "Development" or "Test":

       DISABLE_TAGS = [('Environment', 'Development'), ('Environment', 'Test')]

4. Save the changes to apply your configuration.

2. Enable the Necessary Event Policy

ScienceLogic requires enabling the "Component Device Record Created" event policy to trigger the automation:

1. Go to the Event Policy Manager page.
2. Search for the "Component Device Record Created" event policy.
3. Set its Operational State to "Enabled."
4. Save your changes.

3. Activate the Run Book Automation Policy

To ensure that the automation works as intended:

1. Open the Automation Policy Manager.
2. Locate the "Microsoft Azure: Disable and Discover from IP" Run Book Automation policy.
3. Enable the policy and save the configuration.

4. Preserve Your Configuration

To avoid losing these settings during future updates:

1. Navigate to the Behavior Settings page.
2. Enable the "Selective PowerPack Field Protection" option.
3. Save the changes.

Benefits of Using "Disable By VM Tag" Automation

    Cost Optimization: Reduces monitoring costs by disabling unnecessary data collection.
    Operational Efficiency: Automates routine tasks, allowing teams to focus on critical operations.
    Dynamic Management: Adjusts resource monitoring dynamically based on real-time needs.

Conclusion

Disabling Azure VMs by tag using ScienceLogic's Azure PowerPack is an efficient way to manage resources and control costs. By leveraging automated Run Book Actions and event policies, organizations can enforce consistent monitoring policies while minimizing manual intervention. Start leveraging the "Disable By VM Tag" feature today to enhance your Azure resource management strategy.

The Azure PowerPack can be downloaded from https://support.sciencelogic.com/s/release-version/aBu0z000000XZSICA4/microsoft-azure

Tips for Formatting Datacenter Automation Output
The "Datacenter Automation Utilities" PowerPack, also known as DCA, includes run book automation and action policies that assist with general-purpose activities for other installed Automation PowerPacks. Within this PowerPack there are multiple run books that assist you in formatting the output of the data. This includes prettifying the data for the SL1 UI or formatting the data to send to other systems such as ServiceNow. This is already configured for out-of-the-box automations, but there are a few things to consider when using your own commands.

To work with these formatting actions, the data structure must be in a specific format: the data must be passed as a dictionary, the name of the key must be "command_list_out", and the value must be a list of tuples, with each tuple containing three objects. The three objects should be:

1. The name of the automation being performed
2. The output of the command/API call/etc.
3. The word "False" or "None", meaning no further flags need to be passed

The RBA output should look something like this for a single command:

    command = "df -h"
    (stdin, stdout, stderr) = client.exec_command(command)
    Output_of_command = stdout.read()
    EM7_RESULT = {"command_list_out": [("Running df -h", Output_of_command, None)]}

In the above example:

    {} – symbolizes the data structure is a dictionary
    "command_list_out" – the key of the dictionary
    : – required to separate the key and value pair
    [] – symbolizes the data structure is a list
    () – symbolizes the data structure is a tuple
    "Running df -h" – the description of what is being performed; this can be free text and say whatever you would like
    , – separates each index in the tuple
    Output_of_command – the output of the command df -h run against the device
    None – a requirement that says nothing else is to be passed for the custom flags in the Python library
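Because the snippet above depends on a live SSH client object, here is a self-contained variant that can be run anywhere to check the structure. The stub function and its canned output are invented for illustration; in a real run book action the output would come from client.exec_command():

```python
# Self-contained illustration of the expected EM7_RESULT structure.
# The SSH exec is stubbed with a canned string so the sketch runs anywhere.
def run_command_stub(command):
    """Stand-in for an SSH exec_command call -- returns canned df -h output."""
    return ("Filesystem      Size  Used Avail Use% Mounted on\n"
            "/dev/sda1        50G   20G   30G  40% /")

output = run_command_stub("df -h")

# Dictionary -> "command_list_out" key -> list of 3-item tuples.
EM7_RESULT = {"command_list_out": [("Running df -h", output, None)]}

# Structural checks matching the rules described above.
assert list(EM7_RESULT) == ["command_list_out"]
assert all(len(entry) == 3 for entry in EM7_RESULT["command_list_out"])
```

Swapping the stub for a real paramiko-style client leaves the EM7_RESULT shape unchanged, which is all the DCA formatting actions care about.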
The RBA output should look something like this for multiple commands:

    command1 = "df -h"
    command2 = "ping -c 15 -i 2 google.com"
    (stdin, stdout, stderr) = client.exec_command(command1)
    Output_of_command1 = stdout.read()
    (stdin, stdout, stderr) = client.exec_command(command2)
    Output_of_command2 = stdout.read()
    EM7_RESULT = {"command_list_out": [("Running df -h", Output_of_command1, None), ("Running ping", Output_of_command2, None)]}

In the above example:

    {} – symbolizes the data structure is a dictionary
    "command_list_out" – the key of the dictionary
    : – required to separate the key and value pair
    [] – symbolizes the data structure is a list
    () – symbolizes the data structure is a tuple
    "Running df -h" – the description of what is being performed; this can be free text and say whatever you would like
    , – separates each index in the tuple
    Output_of_command1 – the output of the command df -h run against the device
    Output_of_command2 – the output of the ping command run against google.com
    None – a requirement that says nothing else is to be passed for the custom flags in the Python library

Note: the difference between the multiple-command and single-command versions is the extra tuple. After the first tuple's closing parenthesis, a comma starts the next tuple with the next command's output.

Using Skylar RCA for Root Cause Analysis
This article assumes you already have a Skylar RCA account. If not, please contact your CSM for a 30-day trial of the product.

Step 1: Contact ScienceLogic support to obtain a copy of the OTel collector.

Step 2: Install the OTel collector as per the installation steps (see the References section below).

Step 3: Update the OTel configuration file. This is the otelcol.yaml file in the otelcol-sciencelogic-zebrium_x86_64 directory. The following fields will need to be updated:

    The include attribute in the filelog block, to match the log file location(s).
    The regex in the operators > type block. This needs to match the log file format. As a best practice, use a regular expression checker (for example, https://regex101.com/) to check your regular expression before updating the configuration file.
    The endpoint and ze_token sections in the exporters block. These need to be copied from your Skylar RCA instance.

Step 4: Before sending logs to Skylar, it is recommended that the configuration is tested with local debugging. This can be achieved by using exporters: [debug] in the service: pipelines: logs: section of the otelcol.yaml config file. Also, in the receivers: filelog: section, add the line start_at: beginning to force the collector to read logs from the beginning. This will generate a log file in the logs sub-directory.

Step 5: Restart the SciencelogicZebriumOpenTelemetryCollector service.

Step 6: Once you are happy with the debug output, modify the config file so that logs will be sent to Skylar RCA. Remember to restart the SciencelogicZebriumOpenTelemetryCollector service.

Step 7: After a few minutes, check the Ingest History on the Skylar UI (in Ingest-history) to verify data is being received. The Diagnostics menu can also provide useful information about how many log lines were received in the last 4 hours: go to the Diagnostics menu and click the 'Run Now' button.
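Step 3 recommends validating the log-parsing regular expression before committing it to otelcol.yaml. A quick local check in Python covers the same ground as an online checker; the sample log line, pattern, and field names below are assumptions for illustration only, so adapt them to your own log format:

```python
import re

# Example log line and pattern -- both are illustrative assumptions.
# Replace them with your real log format before relying on the pattern.
sample = "2024-05-01T12:00:00Z ERROR payment-svc Connection refused"
pattern = r"^(?P<timestamp>\S+) (?P<severity>\w+) (?P<service>\S+) (?P<message>.*)$"

match = re.match(pattern, sample)
assert match is not None, "pattern did not match the sample line"
print(match.group("severity"), "-", match.group("message"))
# -> ERROR - Connection refused
```

If the assertion fails, fix the pattern here first; a regex that cannot parse a known-good sample line will silently drop or mangle fields once it is in the collector config.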
References:

    Skylar Automated RCA documentation: https://docs.sciencelogic.com/latest/Content/Web_Zebrium/home_RCA.htm
    Windows OTel collector: https://docs.sciencelogic.com/latest/Content/Web_Zebrium/03_Log_Collectors_Uploads/Windows_OTel.html

Welcome to the Pro Services Blog
Hello Nexus Community Members,

Our professional services blog provides expert insights, industry trends, and practical advice to help businesses and professionals navigate challenges and seize opportunities. Stay informed with thought leadership, best practices, and strategies for success.

Meet our Bloggers

EugeneC

Based in Taiwan, Eugene has been a key member of ScienceLogic's Expert Services team within the Professional Services group since June 2019. With a strong background in solution architecture, integrations, and automation, he specializes in designing and implementing scalable, sustainable solutions tailored to customer needs. Eugene has extensive experience with ScienceLogic SL1, PowerFlow, CMDB integrations, ITSM workflows, and event-driven automation. He is also passionate about internal knowledge sharing, customer engagement, and building reusable templates to drive efficiency and innovation.

UsmanKhan

Usman is a senior consultant in the EMEA Customer Experience team at ScienceLogic, based in Reading, UK. Since joining ScienceLogic in 2019, he has focused on designing and implementing integration solutions for SL1, particularly with ITSM and CMDB platforms. Over the years, he has worked with a wide range of customers, helping them tailor SL1 to their needs and optimize their monitoring and automation strategies. In recent years, Usman has taken on a technical leadership role in the EMEA region, leading the integration of SL1 using PowerFlow to drive complex automation and streamline IT operations for enterprise customers. With a background in software development and solution architecture, he works closely with organizations to build solutions that improve efficiency and maximize the value of their SL1 investment.

LasithaL

Lasitha is a senior member of the EMEA Professional Services team, which is now part of the Customer Experience team. He is based in London, UK. Lasitha has been with ScienceLogic since 2016.
Since then he has worked with over 100 customers and partners from EMEA, America, and Australia on deployment, configuration, and customization of SL1, as well as providing consultancy on various aspects of SL1 such as best practices, noise reduction, and customization. Recently he has worked on Skylar RCA implementation and SL1 integration. Lasitha is certified in ITIL and Project Management.

Qasim

Qasim Latif, Director of Solution Architecture at ScienceLogic, is a strategic leader helping organizations transform their IT operations. With deep expertise in automation, observability, and AIOps, he enables businesses across industries to optimize their dynamic technology environments with the ScienceLogic suite. Qasim works closely with customers to deliver intelligent, automated solutions that enhance efficiency, resilience, and real-time visibility. A dedicated mentor and innovation advocate, he guides organizations through complex migrations, large-scale integrations, and automation strategies, helping them stay ahead in an ever-evolving digital landscape.

Kashif

Kashif is an experienced IT professional specializing in ScienceLogic SL1, with extensive expertise in platform deployment, automation, and integrations. Over the years, he has successfully implemented and optimized SL1 solutions, configuring PowerFlow, developing custom automation, and enhancing monitoring capabilities to meet diverse business needs. His experience includes setting up high availability (HA) and disaster recovery (DR) environments, conducting system health checks, and streamlining workflows for improved operational efficiency. He is passionate about leveraging SL1 to drive automation, improve system visibility, and deliver scalable monitoring solutions that empower organizations to manage their IT ecosystems effectively.

MarkMunford

Mark lives in Sale, Manchester (UK) with his wife and 2 kids. He's a keen tennis player and captains the 1st team at his club.
He enjoys DIY / fixing things and is called on by family and friends when something isn't working! He started his IT career as a Microsoft Server engineer at a carpet manufacturer and then worked for an MSP, gaining skills in all technologies, supporting, installing, and architecting solutions. He enjoyed roles as Support and Operations Manager before moving to ScienceLogic as a Customer Success Manager in 2021. His role is now focused on understanding the goals and objectives of his customers and working with them to build a plan to achieve them together. He looks forward to engaging with all of you here on Nexus!

YaserQ

Based in the UK, Yaser has been a key part of ScienceLogic's Expert Services team within the Professional Services group for over three years. Before joining ScienceLogic, Yaser worked as a Systems Engineer at a major UK telecommunications company, where he focused on designing and implementing network security and monitoring solutions for public sector organizations. With a BSc in Computer Science and certifications including CISSP and CEH, Yaser brings deep technical knowledge and practical experience to every project, helping organizations enhance their IT operations.

We look forward to the expertise and contributions from this team for our Nexus Community Members.

New Restorepoint Training Update: Expand Your NCCM Skills with Fresh Content
ScienceLogic is excited to announce updates to the Restorepoint training series, now featuring additional content designed to help you build your Network Configuration and Change Management (NCCM) expertise.

New training updates include expanded guidance on these topics:

    Deployment Guidance and Architecture: Understand best practices for deploying Restorepoint and key architecture components.
    Installation Instructions for Various Environments: Step-by-step guidance for installing Restorepoint in different deployment environments.
    Expert-Led Walkthroughs: Learn directly from Subject Matter Experts with easy-to-follow video tutorials.

What's Included in the Full Restorepoint Training Series

These new updates are now part of the recently launched Restorepoint: Compliance-Focused NCCM training series, available through your ScienceLogic University portal. In the full training series, you'll gain a comprehensive understanding of the platform and its capabilities, including:

    Restorepoint Platform Operation: Learn the basics of navigation, administration, and operational aspects of Restorepoint.
    Network Configuration Backups & Change Automation: Learn how to automate network changes, ensure compliance, and perform seamless configuration backups.
    Integrations and Plugins: Discover how Restorepoint integrates with other systems and tools to streamline your workflows.

Hear From Our Customers

Over 90 customers have already completed the Restorepoint training. Here's what they had to say about their experience:

    "I now have an understanding of the Restorepoint platform, and how to apply the benefits to my workflow."
    "I enjoyed the examples of how you can utilize automation and cross-platform communication."
    "It was most valuable getting to see some of the SL1-to-Restorepoint interactions being carried out automatically."
How to Enroll

Join the growing number of professionals expanding their knowledge and improving their NCCM capabilities with the Restorepoint: Compliance-Focused NCCM training series. Click here to log in or register for ScienceLogic University and start your learning journey today!