IP Monitoring & Diagnostics With Command Line Tools: Part 7 - Remote Agents

How to run diagnostic processes on each machine and call them remotely from a centralised system that marshals the results from many networked systems. Remote agents act on behalf of that central system and pass results back to it on demand.

Choose the right command line shell

Our examples use the BASH shell because it is widely available everywhere, but there are some caveats.

Your account has a default shell. If it is not BASH, you may see error messages when you try the examples. This is only a problem when you are executing commands directly from the keyboard; scripts can declare their own shell with a Shebang (covered below).

On macOS, the default shell changed from BASH to ZSH at release 10.15 (Catalina). The BASH open source project moved to the GPLv3 licence, which many organisations (including Apple) were unwilling to adopt. Consequently, the BASH that ships with macOS is frozen at an older version. This is easy to fix by installing a later version of BASH yourself.
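
If you have the Homebrew package manager installed, the upgrade might look like the sketch below. The install path shown is the Apple Silicon default; Intel Macs typically use /usr/local/bin/bash. Once the new shell is listed in /etc/shells you can make it your default, as described later in this article.

brew install bash
echo /opt/homebrew/bin/bash | sudo tee -a /etc/shells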

Which shell am I using?

Finding out which shell you are using is easy but not obvious. The shell was invoked by a command, and the positional variable $0 (dollar-zero) shows that command. On macOS, you might see the ZSH shell reported:

echo $0
-zsh

After typing a bash command to switch the shell, you would see this:

bash

echo $0
bash

Note that the initial shell value has a leading dash character, which marks it as a login shell. Remove the dash if you are making decisions based on the shell name.
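
A parameter expansion strips that leading dash before you test the value. For example, on an account that logged in with ZSH:

current_shell="${0#-}"
echo "$current_shell"
zsh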

You might have an older version of the shell that is missing newer features. The shells have all adopted a common way to report the current version number with the --version flag:

bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin19)
Copyright (C) 2007 Free Software Foundation, Inc.

The current stable version of BASH is 5.2.15 as of December 2022.
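
Inside a script, the BASH_VERSINFO array exposes the major version number as an integer, which is easier to test than parsing the --version text. A minimal sketch that refuses to run on anything older than BASH 4:

if [ "${BASH_VERSINFO[0]}" -lt 4 ]; then
  echo "This script needs BASH 4 or later" >&2
  exit 1
fi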

NOTE - The $SHELL special variable tells you which shell you logged in with, but it does not change when you switch shells. It looks like a good solution, but it is not.
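
You can see the difference for yourself. On a macOS account that logs in with ZSH, switching to BASH leaves $SHELL untouched while $0 reports the current shell:

bash

echo $SHELL
/bin/zsh

echo $0
bash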

Changing the default shell

Temporarily switch to a BASH shell with the bash command. Doing that every time you log in is inconvenient, so use the chsh command to change your account's default shell permanently.

On macOS, you can instead right-click your account in the Users & Groups settings and open the advanced options to change the login shell, if you prefer.

Get a list of the authorised shells from the /etc/shells file:

cat /etc/shells

Alter your account configuration and make BASH your default with this command, using the full path of an authorised shell:

chsh -s /bin/bash

Shell switching Shebangs

If you place a Shebang at the head of your shell scripts, the operating system reads it and launches the correct shell to execute the script. Your script will then always run in the correct context.

Prefix the path with a hash-exclamation to complete the Shebang on the first line:

#!/bin/bash

There are a few rare cases where the OS puts the BASH shell in a different location. Check the /etc/shells file for the appropriate value to use for the Shebang at the top of your scripts.
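
A widely used alternative is to let the env tool find BASH on the current search path, which avoids hard-coding the location:

#!/usr/bin/env bash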

Calling agents to action

There are many alternative ways to access your remote diagnostic and monitoring agents. We already covered remote calls to action using SSH and direct connections with the netcat tool in earlier articles.

Implementing HTTP web server endpoints and scheduled tasks should cover most of the remaining basic needs. These are quite simple to set up and extend with new probes later on.

Web page endpoints overview

We describe HTTP URLs as endpoints in the context of a monitoring system because they are not always web pages designed for browser viewing.

You would probably not be able to install an SSH key on third-party supplied equipment. Very often, those systems provide an HTTP web server that you can communicate with instead.

On your own machines, adding an HTTP web server is useful, even if SSH is available. A web server of your own would support custom endpoints to do anything you want.

Programming an endpoint to deliver your measurements is quite easy. I have found PHP to be very reliable but you may prefer something else. The PHP interpreter has a lot of useful extension libraries. It can also access the command line shell directly with an exec() call. The security configuration controls how much of the file system is accessible.

Use the curl or wget tools in the client to request the response from the endpoint.
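
Either tool can pull an endpoint response into a shell variable or a file ready for further processing. A sketch, using a hypothetical endpoint path on the NAS1 machine:

response=$(curl -s http://nas1.local/status.php)
echo "$response"

wget -q -O /tmp/status.txt http://nas1.local/status.php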

Your HTTP endpoints could return results in any of these forms:

• A formatted web page for viewing on a browser
• A web page with embedded micro-formatted data
• A single word or value
• A string of text
• Comma or tab separated lists of values
• Rendered image files (JPEG or PNG)
• Dynamically created PDF files
• Dynamically created SVG images with bar, line or pie charts
• Complex data structures using XML mark-up
• Complex data structures using JSON syntax
• Other file and image assets stored in the local file system

The JSON syntax is especially useful and compact for serialising object-oriented data structures.
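
If an endpoint returns JSON and the jq tool is installed on the client, individual values can be extracted cleanly. A sketch, assuming a hypothetical endpoint that reports free disk space:

curl -s http://nas1.local/status.php | jq -r '.disk.free'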

Introduction to scheduled tasks

The cron scheduler daemon will run commands, applications or scripts on a regular basis. Tasks can be scheduled by combinations of minutes, hours, day-of-week, day-of-month and month.

Use scheduled tasks to measure performance and machine status and record the results on a regular basis. Run other tasks on a daily basis to compress web log files and rotate them so each day is captured in a separate file. Weekly tasks can gather information from the stored measurements and email a report to the operations team.
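
A crontab entry has five schedule fields (minute, hour, day-of-month, month, day-of-week) followed by the command to run. The script names below are hypothetical, but the schedules match the tasks just described: a probe every five minutes, a daily log rotation at 01:00 and a weekly report on Monday mornings.

# m   h   dom mon dow  command
*/5   *   *   *   *    /usr/local/bin/probe_status.sh
0     1   *   *   *    /usr/local/bin/rotate_weblogs.sh
30    6   *   *   1    /usr/local/bin/weekly_report.sh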

The scheduler can also manage queued tasks which are stacked in a holding folder for deferred execution.

Building some supporting infrastructure around cron makes it very easy to load and unload tasks from a scheduler. We will look at that in more detail in a later article.

Solve problems before they happen

You may develop and test your scripts in your own user account. The web server and cron scheduler use different accounts, and your machines may be running different operating systems on a variety of hardware platforms. Before deploying your scripts, run some tests to inspect their target environments and pre-empt any problems.

Which machine am I using?

If you deploy standard tools to all of the machines, you may need to set up machine-specific configurations.

Inside your shell script, use the hostname command with command substitution in a case-switcher to select alternative sections of code. The asterisk case is the default for unmatched host names. This example prints the machine name and sources a specific config file for each machine. You can switch on other values too:

case $(hostname) in

  NASW)
    echo "Synology NASW disk store"
    source ./config_NASW.sh
    ;;

  NAS1)
    echo "Synology NAS1 disk store"
    source ./config_NAS1.sh
    ;;

  ADMIN.local)
    echo "Macintosh workstation"
    source ./config_macOS.sh
    ;;

  *)
    echo "Unknown machine. Default actions here"
    source ./config_catch_all.sh
    ;;
esac

What is my effective user account name?

It is not always obvious what account your shell script is running in. Any of these contexts will run the script in a different user account:

• Scheduled execution under cron.
• Called via an exec() function from PHP.
• After switching your session to a different user account with the su command.
• Running a shell script as a different user with the sudo command.

Inside your script, the whoami command will tell you the effective user account name at run-time.

Write a short test script that will determine the user account. A simple redirect into a temporary log file is perfect. Run the script under the target context:

whoami > /tmp/output.txt

Note other properties of the environment at the same time if they are likely to affect your script execution.
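
A slightly fuller test script might capture several of those properties in one go. This is only a sketch; extend it with whatever your own scripts depend on:

#!/bin/bash
# Record the run-time context for later inspection

{
  echo "User:      $(whoami)"
  echo "Host:      $(hostname)"
  echo "Directory: $(pwd)"
  echo "Path:      $PATH"
  date
} > /tmp/context.txt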

Examine the output file to see if the user account name and environment were what you expected. This will help you diagnose access control issues where the script runs in a user account that cannot reach the resources it needs because of permission problems. Then implement pre-emptive code to avoid those problems.

Note that the who am i command only tells you the original login user name and not the effective user name. Always use whoami instead. These commands are often confused with one another.

Advanced solutions

Start with simple implementations. Once your basic system is working, leverage these advanced techniques:

• Service listeners that auto-start when inetd detects a new connection to a port.
• Event watchers that are triggered when something happens, such as a file system change or a USB stick being plugged in.
• Direct socket-to-socket connections that are more sophisticated than what netcat already provides.
• Daemons that run continuously in the background performing various housekeeping tasks on your behalf.
• Queued tasks with priority settings that override the first-in, first-out nature of a queue. We will cover this with scheduled tasks later on.
• Generic SNMP/WMI management tools.
• Connectionless servers that publish information on the sub-net for machines to acquire on an opportune basis.

Conclusion

Avoid over-engineering the solution. An economical design consumes the minimum system resources needed to provide your monitoring services. Let the OS do the work for you.
