© The Author(s), under exclusive license to APress Media, LLC, part of Springer Nature 2023
C. Edge, R. TroutonApple Device Managementhttps://doi.org/10.1007/978-1-4842-9156-6_2

2. Agent-Based Management

Charles Edge1   and Rich Trouton2
(1)
Minneapolis, MN, USA
(2)
Middletown, MD, USA
 

This chapter is about agents that can run on a Mac. Agents are services, or programs, that run on devices. These agents are designed to give a systems administrator command and control over a Mac, and they typically either start a listener on the Mac or have the Mac log in to a server on a routine basis and pull down management tasks from that server. Either way, administrators gain the ability to control various aspects of computers from a centralized server: commands are pushed to the device from the server, or pulled from it, and then run on devices.

Over the past few years, Apple has steadily reduced the importance of agents on the Mac by removing capabilities from them and making them easier to disable. Agents are still an important aspect of macOS management, so it’s important to understand what an agent is, what it does, and when to use one. Device management tools use agents, security software uses agents, and a number of tools use agents to track software licensing on devices. Agents can do less and less with every passing year, but they are still necessary.

One place where “less and less” has been problematic is device management. Any time a task can be done with either an agent or MDM, use MDM unless there’s a really good reason to use an agent. The Mac isn’t quite back in the era of Desk Accessories from System 7, but the platform is in an era where user consent is more and more important for tasks that could violate user privacy – even for tasks performed on devices we can prove the organization owns.

Neither iOS nor tvOS allows for custom agents, but agent-based management is (at least for now) a critical aspect of managing macOS devices. In this chapter, we’ll review common agents designed for the Mac and what they do. In the next chapter, we’ll cover MDM, an agent-based management environment provided by Apple, and provide much more information about how MDM works. MDM has been referred to as “agentless” at times, but that really means it’s an agent provided by Apple.

Daemons and Agents

As mentioned, an agent is a process that runs on a device. Agents run persistently, and when a daemon or agent is configured, it can be flagged to restart if it stops. To see a few built-in agents, open System Settings and go to the Sharing pane. As seen in Figure 2-1, these are often used for sharing resources over a network. Let’s turn File Sharing on for just a moment.


Figure 2-1

The Sharing System Setting pane

Each of these services is backed by a LaunchDaemon or LaunchAgent that loads on the computer – for this example, we’ll start File Sharing with Windows File Sharing enabled. The first process that starts on a Mac is launchd, which is then responsible for starting, stopping, and controlling all subsequent processes based on the .plist files that define them. This includes all services required to make the operating system function. The easiest way to see this is to open Activity Monitor from /Applications/Utilities and select “All Processes, Hierarchically” from the View menu. Here, search for smbd (Figure 2-2) and note that it’s been started, runs as root, and in this example has a PID (process ID) of 194.
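You can confirm the same hierarchy from Terminal. The following quick check (using portable ps options) prints the first process started at boot, which on macOS is launchd:

```shell
# Print PID 1, the first process started at boot; on macOS this is launchd,
# the parent (directly or indirectly) of every other process on the system.
ps -p 1 -o pid=,comm=
```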


Figure 2-2

Use Activity Monitor to see what processes are running (and what processes started them)

The kernel_task starts launchd, and all other processes fall under launchd, with some nested under others as well. To see how smbd gets started, let’s look at /System/Library/LaunchDaemons/com.apple.smbd.plist. Each launchd job has a property list like this that defines how the LaunchDaemon or LaunchAgent will start. It looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>EnableTransactions</key>
        <true/>
        <key>Disabled</key>
        <true/>
        <key>Label</key>
        <string>com.apple.smbd</string>
        <key>MachServices</key>
        <dict>
                <key>com.apple.smbd</key>
                <dict>
                        <key>HideUntilCheckIn</key>
                        <true/>
                </dict>
        </dict>
        <key>ProgramArguments</key>
        <array>
                <string>/usr/sbin/smbd</string>
        </array>
        <key>Sockets</key>
        <dict>
                <key>direct</key>
                <dict>
                        <key>SockServiceName</key>
                        <string>microsoft-ds</string>
                        <key>Bonjour</key>
                        <array>
                                <string>smb</string>
                        </array>
                </dict>
        </dict>
</dict>
</plist>

In the preceding example, note that the /usr/sbin/smbd binary is what gets launched and the LaunchDaemon controls that binary. LaunchDaemons can run even without a user logged in. LaunchDaemons cannot display information through the graphical interface of the Mac, but they can provide data to apps that have graphical interfaces. The plist files are stored in the /System/Library/LaunchDaemons folder (for those provided by Apple) and /Library/LaunchDaemons (for the rest). There are also LaunchAgents, which run on behalf of a user and therefore need the user to be logged in to run. LaunchAgents can display information through the window server if they are entitled to do so. As with LaunchDaemons, LaunchAgents are controlled by property lists. Their configuration plist files are stored in the /System/Library/LaunchAgents and /Library/LaunchAgents folders, and per-user launch agents are installed in the ~/Library/LaunchAgents folder.
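To see how these pieces fit together, here is a minimal sketch of creating a custom per-user LaunchAgent from the command line. The label com.pretendco.agent and the script path it invokes are hypothetical placeholders:

```shell
# A minimal sketch: create a per-user LaunchAgent that runs a (hypothetical)
# script at login and then every 5 minutes. Label and script path are placeholders.
AGENT_DIR="${HOME}/Library/LaunchAgents"
AGENT_PLIST="${AGENT_DIR}/com.pretendco.agent.plist"
mkdir -p "$AGENT_DIR"
cat > "$AGENT_PLIST" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
        <key>Label</key>
        <string>com.pretendco.agent</string>
        <key>ProgramArguments</key>
        <array>
                <string>/usr/local/bin/pretendco-agent.sh</string>
        </array>
        <key>RunAtLoad</key>
        <true/>
        <key>StartInterval</key>
        <integer>300</integer>
</dict>
</plist>
EOF
# On macOS, load it for the current user:
# launchctl load "$AGENT_PLIST"
```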

Next, let’s look at a common graphical interface for managing LaunchDaemons and LaunchAgents, Lingon.

Use Lingon to See and Change Daemons and Agents Easily

Lingon is a tool available on the Mac App Store at https://itunes.apple.com/us/app/lingon-3/id450201424. Install Lingon to be able to quickly and easily manage LaunchDaemons and LaunchAgents. It can also be downloaded through Peter Borg’s site at www.peterborgapps.com/lingon. The version there has more features and control over system-level daemons and agents.

On first open, Lingon shows a list of non-Apple services installed on the system. In Figure 2-3, notice that you see two for Druva, one for Tunnelblick, and one for an older version of macOS Server.


Figure 2-3

The Lingon agent browser screen

Create a new one by clicking New Job. At the New Job screen shown in Figure 2-4, there are the following fields:
  • Name: The name of the script. This can be something simple like Pretendco Agent but is usually saved as com.Pretendco.agent.

  • What: The app or script to run, or even just an arbitrary command like “say hello” if the command is short and simple.

  • When: When the script or binary selected in the What field should be invoked or run.
    • At login and at load.

    • Keep running (runs all the time and restarts after a crash): Runs all the time. launchctl will watch for the process to terminate and restart it. This is usually something that persistently manages a socket or is always waiting for something to happen on a system.

    • Whenever a volume is mounted: This is similar to watching for a file to change given that it’s watching /Volumes, but when a volume mounts, the process will run.

    • Every: Runs the script or process at a regularly scheduled interval, like every 90 seconds or once an hour.

    • At a specific time: Runs the specified process at a given time on a schedule (this is similar in nature to how cron jobs worked).

    • This file is changed: Defines a path to a file so that if the LaunchDaemon notices the file has changed, the desired script will run. This is pretty common for scripting automations, such as “if a file gets placed in this directory, run it through an image converter.”

  • Save & Load: Saves the LaunchAgent or LaunchDaemon, sets the correct permissions, and attempts to load it.
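Behind the scenes, Lingon’s “When” choices map to standard launchd property list keys. As a hedged sketch (the time and path below are hypothetical), “At a specific time” corresponds to the StartCalendarInterval key, and “This file is changed” to WatchPaths, inside the job’s plist:

```xml
<!-- "At a specific time": run daily at 3:30 a.m. -->
<key>StartCalendarInterval</key>
<dict>
        <key>Hour</key>
        <integer>3</integer>
        <key>Minute</key>
        <integer>30</integer>
</dict>
<!-- "This file is changed": run when this (hypothetical) path changes -->
<key>WatchPaths</key>
<array>
        <string>/Users/Shared/Drop</string>
</array>
```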


Figure 2-4

Provide a name and location for a script or app to daemonize it

Next, click Save & Load, and you’ll be prompted that the service will run even after you close Lingon (Figure 2-5). This is because, when you save your entry, Lingon creates a LaunchDaemon or LaunchAgent and starts it.


Figure 2-5

Save your new agent or daemon

If you select a job and then choose “Copy Job to Clipboard” from the Job menu, you can open a new document and paste in the contents of what would be in the property list. By default, a new LaunchAgent is saved in ~/Library/LaunchAgents/, so you can also easily view it with cat once saved.

Now that we can create and delete LaunchAgents and LaunchDaemons, you know how to create an agent when you need one, or stop one that’s running on a host. Having described what goes into building a daemon or agent, let’s look at controlling them, so we can then show how to interface with the ones used to send management commands to macOS devices.

Controlling LaunchDaemons with launchctl

Earlier, when we showed Activity Monitor, we could have stopped the process we were looking at. Doing so means that, if the process is configured to do so, it can start up again. It’s possible to add, edit, delete, and load launchd jobs with the launchctl command. Using launchctl is pretty straightforward. In the following example, we'll disable the Shazam daemon (shazamd) to show how to control a LaunchDaemon with launchctl. First, run the following command to obtain a list of currently running launchd-initiated processes:
launchctl list
That’s going to output a few too many entries, so let’s constrain the search to those that include the string “shazam”:
launchctl list | grep shazam
You’ll now see a PID and the name of the process, similar to when looking at these in Activity Monitor. Next, go ahead and stop it, again using launchctl, but this time with the stop option and the exact name:
launchctl stop com.apple.shazamd
Once stopped, let’s verify that shazamd is no longer running:
ps aux | grep -v grep | grep shazamd
Once you have completed your tasks and want to reenable Shazam, restart it with the start option in launchctl (or simply reboot):
launchctl start com.apple.shazamd
Finally, this change is not persistent across reboots. In older versions of macOS, you could unload a job and then move its plist out of /System/Library/LaunchDaemons, but System Integrity Protection and the read-only system volume now prevent changes to that folder. Instead, disable the job so it stays off across reboots:
sudo launchctl disable system/com.apple.shazamd
To undo this later, run the same command with enable in place of disable.
If the launchd job you’re trying to manage doesn’t start, check the system log for a more specific error explaining why:
tail -F /var/log/system.log
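On newer versions of macOS, launchctl also offers modern replacements for load, unload, and list, and the unified log often has more detail than system.log. Here is a hedged sketch of those equivalents, guarded so it is a no-op on systems without launchctl; the shazamd label matches the example above:

```shell
# Modern launchctl equivalents (macOS-only; guarded so this is a no-op elsewhere).
LABEL="com.apple.shazamd"
PLIST="/System/Library/LaunchDaemons/${LABEL}.plist"
if command -v launchctl >/dev/null 2>&1; then
    sudo launchctl bootout system "$PLIST"      # stop and unload (replaces unload)
    sudo launchctl bootstrap system "$PLIST"    # load it again (replaces load)
    launchctl print "system/${LABEL}"           # detailed runtime state of the job
    # The unified log usually has more detail than system.log:
    log show --last 5m --predicate "process == \"${LABEL##*.}\""
fi
```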

For more on LaunchDaemons, see the Apple developer documentation at https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html or check launchd.info, a site where you can see additional information.

Now that we’ve looked at LaunchDaemons and LaunchAgents, let’s review what each has access to before we move on to looking at some of the commercial and open source distributions of management agents.

Deeper Inspection: What Does the App Have Access To?

Apps should be signed. Not all persistent binaries are required to be signed, but all should be, and all should also have a corresponding sandbox profile (although even Apple hasn’t gotten around to signing everything that comes bundled with the operating system). To see a detailed description of how an app was signed:
codesign -dvvvv /Applications/Firefox.app
This also gives you the bundle ID for further inspection of an app. There are a number of tools you can use to inspect signing and dig further into entitlements and sandboxing. For one, check the /usr/share/sandbox directory and the more modern /System/Library/Sandbox/Profiles/ (as well as the Versions/A/Resources directory inside each framework) for .sb files – those are the Apple sandbox profiles. Additionally, to see what entitlements each app has, use the codesign command:
sudo codesign --display --entitlements=- /Applications/Safari.app

When building sandbox profiles for your own apps, test them thoroughly before deploying them.

As of macOS 10.14, any app looking to access Location Services, Contacts, Calendars, Reminders, Photos, Camera, Microphone, Accessibility, protected areas of the hard drive, Automation services, Analytics, or Advertising data will prompt the user to approve that connection. This is TCC, surfaced in the Privacy preferences. Via the tccutil command, you can programmatically reset these decisions (but not otherwise view or augment the data) using the only verb currently supported, reset:
tccutil reset SERVICE com.smileonmymac.textexpander
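As a hedged sketch, SERVICE can be replaced with a specific service name such as Camera, or with All to clear every decision for an app. The bundle ID below is hypothetical, and the commands are guarded since tccutil exists only on macOS:

```shell
# Hypothetical examples of resetting TCC privacy decisions (macOS-only).
BUNDLE_ID="com.pretendco.app"
if command -v tccutil >/dev/null 2>&1; then
    tccutil reset Camera "$BUNDLE_ID"   # clear only the Camera decision
    tccutil reset All "$BUNDLE_ID"      # clear every TCC decision for the app
fi
```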

Third-Party Management Agents

There are a number of tools built by other organizations that let you tap into the power of the macOS command line. Organizations like Addigy, FileWave, Jamf, MobileIron, and VMware all have agents, and Munki has become a popular open source management agent for a number of reasons. We’ll start our look at agents with one of the more recent arrivals, given how it’s built: Addigy.

Addigy

Addigy is a management solution for iOS and macOS. Because Addigy was developed relatively recently, its developers make use of a number of open source components to form a management solution that can track what’s on a device (or monitor inventory), deploy new software to a device, remove software from a device, run scripts remotely, and perform other tasks. The ability to do this en masse comes from having an agent running on client systems that can talk back to a centralized management server. The Addigy agent is available by navigating to the Add Devices button in the sidebar (Figure 2-6).


Figure 2-6

Download the Addigy Agent

As seen in Figure 2-7, there are different options to install the agent (other than with MDM, which we cover in more depth throughout the rest of the book). Install with Terminal downloads a shell script that runs an installer, whereas the package option downloads a package.


Figure 2-7

Scripted or package deployment

As with many software packages today, the Addigy agent consists of a few different components. The package will install a number of LaunchDaemons and LaunchAgents according to the services you use in your environment. These services are as follows:
  • /Library/LaunchDaemons/com.addigy.agent.plist: The Addigy agent, responsible for controlling other services running on the system. This calls /Library/Addigy/go-agent with the agent option.

  • /Library/LaunchDaemons/com.addigy.collector.plist: The Collector, which maintains inventory and reports information back to the server. This calls /Library/Addigy/collector.

  • /Library/LaunchDaemons/com.addigy.lan-cache.plist: The process responsible for copying files to the local computer to be processed (e.g., to install a package). This loads /Library/Addigy/lan-cache, based on https://github.com/bntjah/lancache.

  • /Library/LaunchDaemons/com.addigy.policier.plist: The policy engine, calling Ansible to do orchestration and provisioning. After a network check, this runs /Library/Addigy/go-agent with the policier option.

  • /Library/LaunchDaemons/com.addigy.updater.plist: This is responsible for keeping the agent updated and calls /Library/Addigy/go-agent with the updater option specified.

  • /Library/LaunchDaemons/com.addigy.auditor.plist: Addigy’s audit tool, which can be used to get custom facts about the state of a host.

  • /Library/LaunchDaemons/com.addigy.watchdog.plist: Throttles processes if their CPU usage gets too high.

  • /Library/LaunchDaemons/screenconnect-92fde59311b74250.plist: Addigy’s screen connection agents.

  • /Library/LaunchAgents/screenconnect-92fde59311b74250-launch-prelogin.plist: Addigy’s screen connection agents.

  • /Library/LaunchAgents/screenconnect-92fde59311b74250-launch-onlogin.plist: Addigy’s screen connection agents.

To load or unload any of these, use the launchctl command as we did earlier in the chapter. For example, to unload and then reload the LANCache daemon:
sudo launchctl unload /Library/LaunchDaemons/com.addigy.lan-cache.plist
sudo launchctl load /Library/LaunchDaemons/com.addigy.lan-cache.plist

In addition, there are a number of supporting files located in /Library/Addigy, including auditor-facts, which holds information obtained by the auditor; /Library/Addigy/ansible/status.json, the main Ansible inventory file; and /Library/Addigy/user-job, which runs shell scripts on behalf of the user.

Larger files, such as packages, are then cached to the client systems with LANCache. To see what resources the LANCache daemon is using, use ps to view processes and then grep the output for lan-cache as follows:
sudo ps aux | grep -v grep | grep lan-cache
A similar incantation of the command can be used to view the resources being used by any of the agents we’ll cover in this chapter. If you notice a trend here: in general, we use launchctl (and the plists it reads) to find which binaries the agents call and then use each agent’s own command structure to get more details, troubleshoot, and learn how to most efficiently deploy management to devices. For example, knowing where the lan-cache binary is, we can see what peers are visible to a device by running it with the peers verb, as you can see here:
/Library/Addigy/lan-cache peers
One great aspect of LANCache is that it speeds up downloads for many clients. By caching updates on peers, downloads are faster, and organizations reduce the bandwidth required to download assets, making the Internet seem faster during a large deployment push. To set a device as a proxy for peers, use the -peer-proxy option with that binary along with -set-peer-proxy-setting as follows:
/Library/Addigy/lan-cache -peer-proxy -set-peer-proxy-setting

One of the reasons we placed the Addigy agent first is that it’s a simple, efficient, and transparent architecture. The other is, of course, that it comes first alphabetically, and when we list vendors, we try to do so alphabetically. The main components of this agent, and of the others, are a process for connecting to the server and orchestrating events, another process for downloading updates, and a final process for executing tasks and reporting. More daemons just means more logic behind the scenes and more options – but more daemons or agents also usually means more CPU consumed.

The use of LANCache is a really great feature – provided packages are validated with a checksum at installation – as it improves the experience while keeping the bandwidth required to host assets for customers low. Caching updates on client devices is not a new concept; FileWave has supported “Boosters” for well over a decade. Notice that the “agent” for every tool we cover isn’t just a single binary or script that runs in the background but a collection of a few that do various tasks. In the next section, we’ll look at the FileWave agent in more depth.

FileWave

FileWave is a management solution for iOS, macOS, and Windows. FileWave deploys software to client Macs using what’s known as a fileset, or a set of files. These filesets are referenced using a manifest on a FileWave server, and the FileWave client, or agent, looks to the server manifest for a list of any actions it needs to perform. If a fileset needs to be installed, the FileWave client is provided with a path to access the fileset using the manifest file and retrieves the files necessary for installation using a FileWave booster or distributed repository that hosts those files.

The FileWave client agent is primarily made up of an app, located at /usr/local/sbin/FileWave.app; a preference file, located at /usr/local/etc/fwcld.plist; and a control script, found at /sbin/fwcontrol. These tools log to /var/log/ using log files that begin with the name fwcld. The scripts are started up using /Library/LaunchAgents/com.filewave.fwGUI.plist and /Library/LaunchDaemons/com.filewave.fwcld.plist.

Let’s start with a pretty basic task, getting the status of the agent:
sudo /usr/local/sbin/FileWave.app/Contents/MacOS/fwcld -s
The output will be similar to the following:
***************************
**FileWave Client Status**
***************************
User ID: 2243
Current Model Number: 134
Filesets in Inventory:
1. Enroll Macs into MDM, ID 25396 (version 2) - Active
2. OSX App - Lingon, ID 846 (version 3) - Installing via Mac App Store (can take some time)
3. Firefox.app, ID 1133 (version 7) - Active
4. FileWave_macOS_Client_14.7.0_317xyz, ID 24000 (version 1) - Active
5. FileWave_macOS_Client_14.8.0_076xyz, ID 21000 (version 1) - Active
The preceding data shows the user, the filesets the device has, the versions of those filesets, and the status of each. Another task you can do with fwcld is to write custom information into a field and send it up to the server. The supported field names are custom_string_01, custom_integer_01, custom_bool_01, and custom_datetime_01, with 20 slots for each; they contain a string (a standard varchar), a number, a Boolean (0 or 1), and a date, respectively. In the following example, we’ll take some information telling us whether a login hook is installed and send it into the ninth available string value:
/usr/local/sbin/FileWave.app/Contents/MacOS/fwcld -custom_write -key custom_string_09 -value `defaults read com.apple.LoginWindow`
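Following the same -custom_write pattern, here is a hedged sketch of writing to the other field types. The slot numbers and values are hypothetical, and the commands are guarded since fwcld exists only on FileWave-managed Macs; confirm exact value formats against the FileWave manual:

```shell
# Hypothetical: write each supported custom field type back to FileWave.
FWCLD="/usr/local/sbin/FileWave.app/Contents/MacOS/fwcld"
if [ -x "$FWCLD" ]; then
    "$FWCLD" -custom_write -key custom_string_01   -value "Building 7"
    "$FWCLD" -custom_write -key custom_integer_01  -value 42
    "$FWCLD" -custom_write -key custom_bool_01     -value 1
    # Date format may vary by FileWave version; check the manual.
    "$FWCLD" -custom_write -key custom_datetime_01 -value "2023-01-01 00:00:00"
fi
```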

As seen in the preceding example, we’ve sent information about a device back to a server. We can then build automations at the server that send further instructions to the client – for example, if there’s no login hook, install one. The FileWave manual is the best guide to getting started using the command line and scripts to help manage FileWave; it can be found at www.filewave.com.

The Once Mighty Fleetsmith

Fleetsmith was acquired by Apple, and the team helped build out better APIs for built-in management options. It’s still worth mentioning in a book like this, though: it had features still not replicated by other solutions (but that an enterprising admin could build themselves), and its agent was built on open source software in ways that enterprising engineers could use to build another agent (some third-party tools have since been built similarly).

As with many of the agent-based management solutions, Fleetsmith could run as an MDM for the Mac alongside an agent, a mode Fleetsmith referred to as Fully Managed. Fully Managed devices could be remotely locked, have kernel extensions allowlisted, and be remotely erased via MDM. Initially, Fleetsmith could also run with just an agent and no MDM. The agent was downloaded in a similar way to the Addigy agent, as seen in Figure 2-8.


Figure 2-8

Download the Fleetsmith installer

Once the package was downloaded, it could be run, and a number of assets were loaded on client computers. As with many of the “agents,” Fleetsmith had three LaunchDaemons:
  • com.fleetsmith.agent.plist: Invoked the /opt/fleetsmith/bin/run-fsagent shell script, which logged to /var/log/fleetsmith and invoked the agent daemon

  • com.fleetsmith.query.plist: Started /opt/fleetsmith/bin/fsquery, a customized osquery daemon

  • com.fleetsmith.updater.plist: Started /opt/fleetsmith/bin/fsupdater, a Go daemon that kept software up to date

The fsagent process was responsible for orchestrating events on behalf of the Fleetsmith tenant. The directory /opt/fleetsmith/bin contained a number of tools invoked by the daemon and used to manage devices:
  • force-notifier.app: Took over the screen to run updates when needed.

  • fsagent: The LaunchDaemon that ran in the background.

  • fsquery: The Fleetsmith fork of osquery.

  • fsupdater: Was responsible for keeping Fleetsmith up to date.

  • osqueryi: osquery, which we’ll cover later in this chapter, was distributed to provide inventory information for Fleetsmith.

  • run-fsagent: Started the agent.

The /opt/fleetsmith/data directory stored the agent.log, downloads directory, and a store.db sqlite3 database. All of this was used as small components to accomplish the tasks servers instructed clients to perform. As an example, to manage Google Chrome in Apps (Figure 2-9), users could enable the app to be managed and then configure the settings to be pushed to the app.


Figure 2-9

Manage Google Chrome with Fleetsmith

The Fleetsmith agent then installed the Chrome app. The Fleetsmith app in /Applications showed “All your apps are up to date” (Figure 2-10), provided everything was running the latest version.


Figure 2-10

The Fleetsmith app in the menu bar

Addigy is (and Fleetsmith was) built on Go-based agents that included components from the open source community. Fleetsmith bolted on a lot of keys and certificates to further secure the communication channel and added a lot of logic on top of osquery. All of this could be done by any company and is likely to be replicated, especially given the open source solutions that can handle the MDM management stack. Perhaps one of the top names in device management is Jamf. Next, we’ll look at the jamf “binary” – which is one of the older agents but also one of the most widely distributed.

Jamf

Since the early days, when the product was called The Casper Suite, Jamf Pro has always had a binary that runs on the computer. That binary currently lives at /usr/local/jamf/bin/jamf, and it executes most of the non-MDM-based tasks that Jamf Pro sends to the device. Calling it “the agent” is an oversimplification, though; there are other components, which include
  • /usr/local/jamf/bin/jamfagent: The agent for processing user-level work and reporting on user data.

  • /Library/Application Support/JAMF/JAMF.app/Contents/MacOS/JamfDaemon.app: A bundle that contains the Jamf Pro daemon, for more global instructions (the Jamf.app is an app bundle that keeps all this together).

  • /Library/Application Support/JAMF/JAMF.app/Contents/MacOS/JamfAAD.app: For the Azure Active Directory integration.

  • /Library/LaunchDaemons/com.jamfsoftware.task.1.plist: Manages checking into Jamf Pro.

There are also some symbolic links, either for backward compatibility or to provide paths to files in various locations.

Additionally, there are a number of daemons and agents that are not compiled binaries. The daemons are the global processes. /Library/LaunchDaemons/com.jamfsoftware.startupItem.plist launches a check-in script, and the daemon /Library/LaunchDaemons/com.jamfsoftware.jamf.daemon.plist collects application usage, FileVault data, network state changes, and restricted software as well as performs actions from Self Service. To manage check-ins to the servers, /Library/LaunchDaemons/com.jamfsoftware.task.1.plist is run routinely. /Library/LaunchAgents/com.jamf.management.jamfAAD.clean.agent.plist cleans up artifacts from Azure AD IDs, and /Library/Preferences/com.jamf.management.jamfAAD.plist is used to retain preferences of Azure AD information.

All of this is logged to /var/log/jamf.log. So the binary is handling non-MDM communications back to the server but also enables you to script various tasks quickly.

Manage User Accounts with Jamf

You can add a new user with the createAccount verb. This verb provides a number of options, including a short name (-username), a full name (-realname), a password (-password), a home directory (-home), and a default shell (-shell). If you want the user to be an admin of the system, also add the -admin option. In the following, we’ll string it all together:
/usr/local/bin/jamf createAccount -username charlesedge -realname "Charles Edge" -password mysupersecretpassword -home /Users/charlesedge -shell bash -admin
Or, if you need to, you can easily delete an account using the deleteAccount verb. Here, use the -username option to define the user you’d like to remove. That username is the short name (what dscl shows) of a given user. For example, to remove the user we just created (charlesedge), run the following command:
/usr/local/bin/jamf deleteAccount -username charlesedge
You can then display a pop-up on the screen noting that you completed the action, using the displayMessage verb along with the -message option:
/usr/local/bin/jamf displayMessage -message "charlesedge has been deleted"
Once an action is complete, it’s always a good idea to perform a quick recon again to make sure everything is registered back to the server:
/usr/local/bin/jamf recon

More Automation Through the Jamf Framework

The Jamf Framework is also capable of performing a number of tasks that the developers have provided to make it easier to configure devices on your network. To get started, let’s see all of the options. As with many binaries, if you have any questions, you can use the help verb to see everything it can do:
/usr/local/bin/jamf help
If you need more information on a given verb, run the help verb followed by the one you need more information on:
/usr/local/bin/jamf help policy
You can also automate standard tasks. The following command will unmount a mounted server called mainserver:
jamf unmountServer -mountPoint /Volumes/mainserver
Or change a user’s home page in all of their web browsers:
sudo jamf setHomePage -homepage www.krypted.com
The following command can be used to fire up the SSH daemon:
sudo jamf startSSH
The following command can be used to fix the By Host files on the local machine:
sudo jamf fixByHostFiles -target 127.0.0.1
The following command can be used to run a Fix Permissions on the local machine:
sudo jamf fixPermissions /
The following can be used to flush all of the caches on your local system:
sudo jamf flushCaches -flushSystem
The following can be used to run a software update on the local system:
sudo jamf runSoftwareUpdate
The following can be used to bind to an AD environment (rather than dsconfigad) but would need all the parameters for your environment put in as flags in order to complete the binding:
sudo jamf bindAD
The jamf binary can also poll for a list of printers using the listprinters verb:
sudo jamf listprinters
The output looks like this:
MSP Lobby HP MSP_LobbyLobby lpd://192.168.12.201/ HP 6490 C5250 PS
As noted by the number of agents and daemons, there can be a bit of cruft spread throughout the system, especially on devices that have been enrolled in Jamf for some time. The removeFramework verb can therefore be used to fully clean the Jamf artifacts off a device (of course, the device can no longer check in once this is run):
/usr/local/bin/jamf removeFramework

In general, most agents provide a few options. The Jamf binary goes a bit deeper than most, which makes it one of the more advanced third-party Mac management tools available. It still wraps a lot of shell commands that administrators could send through any management tool, and some admins have chosen to build equivalents on their own, either with the assistance of open source tools or as open source tools altogether. The top open source tool for Mac management is Munki, which we’ll cover in the next section.

Munki

Munki is an open source device management framework originally developed by Greg Neagle and available via GitHub at https://github.com/munki/munki. Munki was initially designed to be similar to the Apple Software Update Server, but for third-party products. The design is elegant in its simplicity: the client downloads one or more manifests and one or more catalogs and takes its updates from those manifests and catalogs. As the project has gained traction and a greater level of maturity, a number of enhancements have been made; but the core concept remains that a client picks up a dictionary of information about the state it should be in and then takes action based on that, including installing profiles, updating default domains, and of course installing software updates.

Munki runs an agent on client computers. As with many “agents” these days, it’s split up between a number of LaunchDaemons and LaunchAgents, each built for a specific task. There are four LaunchDaemons and three LaunchAgents, as well as a number of scripts that do specific tasks. As with a few of the tools we cover, Munki comes with an app that can be used to allow users to perform a number of tasks themselves.

Munki LaunchDaemons

As is good practice, each task that the Munki client requires is a separate program, with the four tasks that require root privileges run as LaunchDaemons and three LaunchAgents handling what is visible in the Managed Software Center GUI. In this section, we’ll look at what each of the LaunchDaemons does:
  • /Library/LaunchDaemons/com.googlecode.munki.managedsoftwareupdate-check.plist: Causes managedsoftwareupdate to run approximately once an hour in the background to check for and possibly install new updates. It controls background task scheduling with the supervisor (/usr/local/munki/supervisor) and adds a delay to triggered managedsoftwareupdate events (/usr/local/munki/managedsoftwareupdate). This allows the local agent to process catalog changes and run unattended installations of software.

  • /Library/LaunchDaemons/com.googlecode.munki.managedsoftwareupdate-install.plist: Runs cached updates for Managed Software Center when user notification is required. This involves a sanity check that /private/tmp/.com.googlecode.munki.managedinstall.launchd is present; if so, managedsoftwareupdate runs with the --installwithnologout option.

  • /Library/LaunchDaemons/com.googlecode.munki.managedsoftwareupdate-manualcheck.plist: Gives Managed Software Center the ability to check the server for updates to the Munki manifest. Requires that the /private/tmp/.com.googlecode.munki.updatecheck.launchd trigger file be present.

  • /Library/LaunchDaemons/com.googlecode.munki.logouthelper.plist: Notifies users when a force_install_after_date approaches. This is done by invoking Managed Software Center, which can terminate a user session, using the /usr/local/munki/logouthelper utility.

Munki also comes with a number of LaunchAgents, which include the following:
  • /Library/LaunchAgents/com.googlecode.munki.ManagedSoftwareCenter.plist: Used to open Managed Software Center in the user context when user notification is required.

  • /Library/LaunchAgents/com.googlecode.munki.MunkiStatus.plist: Calls MunkiStatus in the Contents/Resources directory of the Managed Software Center app bundle and is used for notifications on top of the login window.

  • /Library/LaunchAgents/com.googlecode.munki.managedsoftwareupdate-loginwindow.plist: Processes user tasks at the login window. Can be triggered by /Users/Shared/.com.googlecode.munki.checkandinstallatstartup, /private/tmp/com.googlecode.munki.installatlogout, or /Users/Shared/.com.googlecode.munki.installatstartup.

The architecture of which processes run which services is pretty telling, not only about how the product works but also about how to troubleshoot it. The fact that each task has been pulled off into a separate daemon or agent speaks to preserving the security of managing endpoints with the least privileges possible, and it avoids requiring a kext to always be loaded in order to orchestrate all of these tasks. Most, though, are in support of processing the manifest, catalog, and pkginfo plist files, which we’ll cover in the next section.

Customizing a Munki Manifest

The manifest is where the Munki agent takes its instructions from. Now that we’ve looked at the components of Munki, let’s look at the format of the manifest, catalog, and pkginfo plist files, and the keys in those files that go to each client. Keep in mind that Munki was initially built to replicate what Apple did for Software Update Services, where a manifest file distributes packages to install on clients. Therefore, Munki keeps catalogs of all software available to be installed.

Over time, the scope of the project grew to include groupings of different client computers that received different manifest files and an app that allowed end users to install their own software, which we’ll cover in more detail in Chapter 11.

Manifests are standard property lists. We’ll cover manipulating property lists further in Chapter 3, but for now, think of them as simple XML files that contain a collection of key/value pairs. A manifest is, at its heart, a simple list of items to install (or verify their installation) or remove (or verify their removal). The manifest contains a list of one or more catalogs, defined using a catalogs array, along with a number of arrays that tell the Munki agent how to handle the items listed. These include the following arrays:
  • managed_installs: Munki will install these items and keep them up to date.

  • managed_uninstalls: Munki will remove these items.

  • managed_updates: Munki will update these items, if present, whether or not they were installed by Munki.

  • optional_installs: Munki will allow users to install these items optionally and keep them up to date once installed (e.g., using Managed Software Center).

  • featured_items: Items listed at the top of Managed Software Center.

Munki Managed Installs
The managed_installs key is the first and arguably the most important thing Munki processes. As mentioned, managed installs are software that is required to be deployed to a device. Once deployed, the software must be kept up to date in alignment with the catalog. You can see this in practice in the following manifest, which instructs the client computer to install Quickbooks, Slack, and Office from the production catalog:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
   <key>catalogs</key>
   <array>
      <string>production</string>
   </array>
   <key>managed_installs</key>
   <array>
      <string>Quickbooks-2019</string>
      <string>Slack-3.3.8</string>
      <string>Office-16.23</string>
   </array>
</dict>
</plist>
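Because a manifest is an ordinary property list, it doesn’t have to be hand-edited. The following sketch uses Python’s plistlib to produce the same manifest programmatically (a convenience for generating manifests, not part of Munki itself):

```python
import plistlib

# Sketch: generate the manifest above with plistlib instead of editing
# the XML by hand. Keys and values match the example manifest exactly.
manifest = {
    "catalogs": ["production"],
    "managed_installs": ["Quickbooks-2019", "Slack-3.3.8", "Office-16.23"],
}

data = plistlib.dumps(manifest)  # serialized XML plist, ready to write to disk
```

This is handy when manifests are produced by automation, such as a script fed from an asset database.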
Many environments use a production catalog and a testing catalog, where the testing catalog is populated by an automated packaging tool such as AutoPkg. Once software has been tested and validated as safe for distribution, it’s added to the production catalog. Testing machines can then install software from the testing catalog before it’s promoted to the safer production catalog. You can list multiple catalogs by adding items to the catalogs array. The following example adds a testing catalog above the production catalog. Doing so causes the Munki agent to search the testing catalog for the packages defined in the managed_installs array before trying to install those software titles or scripts from the production catalog, making for a seamless transition when the software you are testing is promoted to production.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
   <dict>
        <key>catalogs</key>
        <array>
            <string>testing</string>
            <string>production</string>
        </array>
        <key>managed_installs</key>
        <array>
            <string>Firefox-104.0.2</string>
            <string>Chrome-105.0.5195.102</string>
        </array>
    </dict>
</plist>
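This first-match precedence can be modeled in a few lines of code. The following is a hypothetical sketch; the catalog entries are invented for illustration, and the real agent also weighs versions and dependencies:

```python
# Sketch of catalog precedence: the agent searches catalogs in the order
# the manifest lists them and takes the first catalog carrying the item.
def resolve_item(name, catalog_order, catalogs):
    """Return (catalog_name, item) from the first catalog carrying the item."""
    for catalog_name in catalog_order:
        for item in catalogs.get(catalog_name, []):
            if item["name"] == name:
                return catalog_name, item
    return None, None

# Invented catalog contents for illustration only
catalogs = {
    "testing": [
        {"name": "Firefox-104.0.2", "installer": "Firefox-104.0.2.dmg"},
    ],
    "production": [
        {"name": "Firefox-104.0.2", "installer": "Firefox-104.0.1.dmg"},
        {"name": "Chrome-105.0.5195.102", "installer": "Chrome.pkg"},
    ],
}

# testing is listed first in the manifest, so its copy of Firefox wins
source, item = resolve_item("Firefox-104.0.2",
                            ["testing", "production"], catalogs)
```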

It’s usually a good practice to deploy software without version numbers or, if there are version numbers, to use only major release numbers. In the preceding example, we’ve actually pinned the point release version number for testing. This allows you to keep track of software during testing that’s destined for your production catalog. A catalog isn’t exclusively for software that Munki installed, as we’ll see next.

Updating Software That Munki Didn’t Install

There are a number of reasons to patch software that Munki didn’t install. Chief among them are security patches. But the general health of a system can also be greatly improved by treating a piece of software Munki didn’t install as you would treat other managed software. This is referred to as a managed update in Munki and is defined using a managed_updates array.

The managed_updates array is handled similarly to managed_installs but looks for a software title on the host and runs an updater only if that title is found. For example, if you don’t deploy Firefox, Chrome, or the Microsoft Edge browser, you might still want to keep those patched if you find your users install them. Running an inventory through a tool like osquery (described later in this chapter) will supply you with a list of software on the computers in your deployment and can then be used to find any software you would like to either move into your managed catalog or at least keep updated.

The following example is similar to the previous example but using managed_updates for these pieces of software installed by users outside of the Munki deployment:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
   <dict>
        <key>catalogs</key>
        <array>
            <string>production</string>
        </array>
        <key>included_manifests</key>
        <array>
            <string>accounting</string>
            <string>allusers</string>
        </array>
        <key>managed_updates</key>
        <array>
            <string>Chrome</string>
            <string>Firefox</string>
        </array>
    </dict>
</plist>

The exception to updating a package is if it’s been slated for removal on a computer: if a piece of software is scheduled for removal, it will not be updated. As deployments grow, you need more complicated logic on client systems to handle the added burden that additional groups and iterations put on an environment. This has led to nested manifests.

Nested Manifests

You can nest manifests. Much as you can do an include in an Apache configuration, you can logically group manifests. If you have a user in the accounting group, you can create a manifest just for accounting, along with a manifest that all users receive. In the following example, we’ll remove the testing catalog, add an array of manifests to include (the accounting and allusers manifests), and install Chrome as well, which wouldn’t be included for other devices:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
   <dict>
        <key>catalogs</key>
        <array>
            <string>production</string>
        </array>
        <key>included_manifests</key>
        <array>
            <string>accounting</string>
            <string>allusers</string>
        </array>
        <key>managed_installs</key>
        <array>
            <string>Chrome</string>
        </array>
    </dict>
</plist>
The preceding manifest includes two other manifests. Consider this akin to having nested groups. Manifests specifically meant to be included in other manifests should not typically include a catalog, given that the catalog is defined in the parent manifest. The following is an example of a manifest built to be included:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
   <key>managed_installs</key>
   <array>
      <string>Quickbooks-2022</string>
      <string>Slack</string>
      <string>Office-16.64</string>
   </array>
</dict>
</plist>

The preceding manifest is similar to the earlier example, defining Quickbooks, Slack, and Office but without listing the catalogs. This simple approach allows administrators to push out small changes, managing universal software and then either aligning a computer with a job function or, as the deployment grows, allowing for more complicated hierarchies. This is similar to Apple allowing for nested Software Update Servers, where you can limit software to be deployed on child servers. While the Apple technique is no longer supported, Munki has filled much of the gap for third parties and continues this tradition.
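Conceptually, the client resolves included_manifests by recursively merging each included manifest’s arrays into the parent’s. The following is a simplified, hypothetical sketch of that flattening (the allusers contents here are invented; the real client does considerably more, handling catalogs, conditions, and per-item logic):

```python
# Simplified sketch of included_manifests resolution: collect the
# managed_installs of a manifest plus everything it includes, recursively.
def flatten_installs(name, manifests, seen=None):
    seen = set() if seen is None else seen
    if name in seen:          # guard against accidental include cycles
        return []
    seen.add(name)
    manifest = manifests.get(name, {})
    installs = list(manifest.get("managed_installs", []))
    for included in manifest.get("included_manifests", []):
        installs.extend(flatten_installs(included, manifests, seen))
    return installs

manifests = {
    "client1234": {"included_manifests": ["accounting", "allusers"],
                   "managed_installs": ["Chrome"]},
    "accounting": {"managed_installs": ["Quickbooks-2022", "Slack",
                                        "Office-16.64"]},
    "allusers":   {"managed_installs": ["Munki-Tools"]},  # invented title
}

all_installs = flatten_installs("client1234", manifests)
```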

Removing Software with Munki

Managed installs get software and packages onto devices and keep software updated. Managed uninstalls remove software. This is defined in the same property lists, but with a managed_uninstalls array followed by a list of titles in the form of strings. Obviously, software must be installed in order to be uninstalled. Provided a software title that should be removed is installed, the following example builds on the previous one, keeping any software defined in the accounting and allusers manifests installed, keeping Chrome installed, but also defining that the Symantec software will be removed any time it’s encountered:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
   <dict>
        <key>catalogs</key>
        <array>
            <string>production</string>
        </array>
        <key>included_manifests</key>
        <array>
            <string>accounting</string>
            <string>allusers</string>
        </array>
        <key>managed_installs</key>
        <array>
            <string>Chrome</string>
        </array>
        <key>managed_uninstalls</key>
        <array>
            <string>Symantec</string>
        </array>
    </dict>
</plist>

The preceding approach is mostly used to retire software, plan for major updates, and pull back any software accidentally released.

Optional Software Installation

Optional software consists of software titles that users can choose to install through Managed Software Center. If a user installs an optional software title, the package is installed with administrative rights on the user’s behalf. Optional software is defined in manifests using an optional_installs array followed by a number of packages, by name.

The following example builds off of our accounting include from earlier, listing VPN, Okta, Druva, and Zoom as optional installations:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
   <key>managed_installs</key>
   <array>
      <string>Quickbooks-2019</string>
      <string>Slack</string>
      <string>Office-16</string>
   </array>
    <key>optional_installs</key>
    <array>
        <string>VPN</string>
        <string>Okta</string>
        <string>Druva</string>
        <string>Zoom</string>
    </array>
</dict>
</plist>

Any software installed via an optional install is recorded in a local manifest file that Munki also reviews, located at /Library/Managed Installs/manifests/SelfServeManifest. As you might guess, if a title is listed in both optional installs and managed installs, the package will be a required install, and Managed Software Center has the logic not to list that package as an optional install. The beauty of these types of installs is that users don’t need administrative privileges. We’ll get into packaging further in Chapter 6, but because anything can be put in a package, you can also deploy automations through Managed Software Center this way. Therefore, basic support tasks that might otherwise require administrative privileges, such as clearing print queues, installing certain printers, and clearing caches, can be performed without making a user an administrator or opening a remote control session to the computer.

If an item is installed through an optional install, it is then treated as a managed install. Because the software is optional, it can be removed through Managed Software Center; if an optional install is removed, it is treated as a managed uninstall. One special type of optional item is a featured item.
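These rules can be summarized in a small classifier. The following is a hypothetical simplification; the real logic lives in the Munki client and Managed Software Center, and the item names are from the earlier examples:

```python
# Hypothetical sketch of the rules above: managed beats optional, and a
# user's self-serve choices (SelfServeManifest) promote optional items.
def classify(item, manifest, self_serve):
    if item in manifest.get("managed_installs", []):
        return "managed_install"        # required regardless of optional status
    if item in manifest.get("optional_installs", []):
        if item in self_serve.get("managed_installs", []):
            return "managed_install"    # user opted in, now kept up to date
        if item in self_serve.get("managed_uninstalls", []):
            return "managed_uninstall"  # user removed it via the app
        return "optional"               # offered in Managed Software Center
    return "unmanaged"

manifest = {"managed_installs": ["Slack"],
            "optional_installs": ["Zoom", "Druva", "Okta"]}
self_serve = {"managed_installs": ["Zoom"], "managed_uninstalls": ["Okta"]}
```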

Featured Items

The featured_items array indicates software listed at the top of Managed Software Center in the Featured section. Featured items are a subset of optional installs, so they should be listed in both places:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
   <key>managed_installs</key>
   <array>
      <string>Quickbooks-2022</string>
      <string>Slack</string>
      <string>Office</string>
   </array>
    <key>optional_installs</key>
    <array>
        <string>VPN</string>
        <string>Okta</string>
        <string>Druva</string>
        <string>Zoom</string>
    </array>
<key>featured_items</key>
    <array>
        <string>Okta</string>
        <string>Druva</string>
        <string>Zoom</string>
    </array>
</dict>
</plist>

One of our favorite aspects of Munki admins is that most know more than anyone else has ever known about anything; therefore, there will be a lot of disagreement on this explanation of manifest files. That is fine. Now that we’ve created manifests, let’s move on to getting the first catalog created and getting some software imported into it for distribution.

Building a Repository and a Catalog of Software

Munki is a tool designed for installing software, and the catalog is the list of software titles available for installation. The catalog is stored locally at /Library/Managed Installs/catalogs and is downloaded from the server, via a web service, whenever it changes. Items are imported into the catalog using munkiimport, by default installed at /usr/local/munki/munkiimport. The munkiimport script is a Python script that acts as an assistant for importing disk images (.dmg files), packages (.pkg files), configuration profiles (which have been partially deprecated since macOS 10.15), and application bundles (.app files) into your repo.

A repository’s location, along with other global configuration options for munkiimport, is configured using the --configure option. Simply run that option and follow the interactive prompts:
/usr/local/munki/munkiimport --configure

When prompted, provide a URL for your repo, which is /usr/local/var/www/munki_repo in this demonstration. Once the repo is set, when the user runs munkiimport, imports will go to that location by default. The preferences set by the --configure option are stored in ~/Library/Preferences/com.googlecode.munki.munkiimport.plist. For our example location, the repo should be provided as file:///usr/local/var/www/munki_repo, although you could use an afp:// or smb:// mount instead or use one of the file-handler options to store your repo in an AWS or GCS file store.

Next, we’re going to create a PkgInfo property list based on a standard installer package; the PkgInfo lists the catalogs an installer is a member of, along with other metadata about the installer. In this example, we’ll create a PkgInfo for the Zoom installer we used in the manifest earlier in this chapter. PkgInfo files are stored in the pkgsinfo directory inside the munki_repo.

The PkgInfo file is generated when using munkiimport to import an installer. To import software, we’ll use munkiimport along with options that allow the script to run without prompting for that information interactively. This involves answering some basic questions about the software, including the name, the name that should be displayed when installing, the category of software, the version of the package being imported, the organization that made the software, whether the software can be installed/uninstalled in an unattended fashion, and a -c option that defines which catalogs the software should be placed into:
munkiimport ~/Desktop/zoom.pkg --name=Zoom --displayname=Zoom --description="Our conferencing software"  --category=Productivity --developer=Zoom --pkgvers=4.6.4 -c allusers --unattended_install --unattended_uninstall
Because we didn’t specify an -n option, we will still have some interactive steps to provide information about our installer. We’ll show these steps so you can better understand what’s happening behind the scenes:
Import this item? [y/n] y
Upload item to subdirectory path []: apps/zoom
Path pkgsinfo/apps/Zoom doesn't exist. Create it? [y/n] y
No existing product icon found.
Attempt to create a product icon? [y/n] y
Attempting to extract and upload icon...
Imported icons/Zoom.png.
Copying zoom.pkg to repo...
Copied zoom.pkg to pkgs/apps/zoom/zoom.
Edit pkginfo before upload? [y/n]: n
Saved pkginfo to pkgsinfo/apps/Zoom/Zoom-4.4.53590.plist.
Rebuild catalogs? [y/n] y
Rebuilding catalogs at file:///usr/local/var/www/munki_repo
Created icons/_icon_hashes.plist...

All of the preceding prompts can instead be answered with additional parameters to the munkiimport command. This shows the amount of work being done each time you run a munkiimport, even creating an icon. The one important option is to rebuild catalogs: answering yes results in new catalog files being built based on the pkginfo files.

The software itself is also then imported into the repo, and if successful, the pkginfo file will open in the editor you defined in the --configure step for your user. Now that we have a repo, a catalog, and manifests, let’s distribute the manifest to client devices that need to install software.

Distributing the Manifest File

We’ve described manifests and catalogs, but how is a device provided with a manifest? Upon installation, the Munki agent looks to a SoftwareRepoURL preference key for the main repository of manifests. If the SoftwareRepoURL preference is not defined, the Munki client will attempt to detect a Munki repo based on some common defaults. That web host should have a valid TLS certificate and serve the repo over https in order to protect against man-in-the-middle attacks. Munki is architected such that the administrator points the Munki client at the server, and the host running Munki implicitly trusts that server. Therefore, it’s not recommended to deploy Munki without https, which ensures the authenticity of catalogs being deployed. Failure to do so could cause résumé-generating events.

If no SoftwareRepoURL is defined, Munki will go through a search order looking for a repository of manifests. This follows the following search order, where $domain is a search domain for a client:
  • https://munki.$domain/repo

  • https://munki.$domain/munki_repo

  • http://munki.$domain/repo

  • http://munki.$domain/munki_repo

  • http://munki/repo

Once Munki finds a repo, there is usually a manifest for all devices at that URL. This is the site_default manifest, which is used only if a more specific manifest is not found. The URL of the site_default for a domain name of pretendco.com might then be https://munki.pretendco.com/repo/manifests/site_default. The more specific options, in order of priority, are a unique identifier for Munki known as the ClientIdentifier, a fully qualified hostname (e.g., the output of scutil --get HostName), a local hostname (e.g., the output of scutil --get LocalHostName), or the serial number. The manifest for a computer using that pretendco.com domain name from earlier, but with a hostname of client1234, might then be https://munki.pretendco.com/repo/manifests/client1234.pretendco.com.
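That priority order amounts to a list of candidate manifest URLs that the client tries in turn. The following sketch assumes the pretendco.com example above; the serial number is made up for illustration:

```python
# Sketch of the manifest lookup priority described above: most specific
# identifier first, falling back to site_default. Names here are from
# the pretendco.com example; the serial number is hypothetical.
def manifest_candidates(repo, client_id=None, hostname=None,
                        local_hostname=None, serial=None):
    """Candidate manifest URLs, most specific first, site_default last."""
    names = [client_id, hostname, local_hostname, serial, "site_default"]
    return [f"{repo}/manifests/{n}" for n in names if n]

urls = manifest_candidates("https://munki.pretendco.com/repo",
                           hostname="client1234.pretendco.com",
                           local_hostname="client1234",
                           serial="C02ABCDEFGH")  # hypothetical serial
```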

The manifest can be created manually or using a device management tool. For example, some organizations use Puppet, Chef, VMware AirWatch, or Jamf Pro to distribute the Munki manifest files and the settings that point to manifest files. While it might seem like those tools can manage the software on devices natively, it’s worth noting that they are more about state and policy management, whereas Munki is about managed software. The power of Munki is the fact that it has such a narrow set of duties. For smaller environments, managing software and leveraging some payload-free packages is often all that’s needed. For larger environments with a state management tool, Munki perfectly complements their other tools, and engineers tasked with the management of large fleets of devices are accustomed to scripting middleware for their organization’s specific needs.

Many software packages are updated every couple of weeks. Depending on how many software titles a given organization manages, it can be a challenge to maintain an extensive software catalog. Therefore, AutoPkg is often used alongside Munki to automatically build packages and put them in your testing catalog. We cover AutoPkg more in Chapter 7, when we review preparing apps for distribution. Now that we’ve covered Munki and how Munki keeps devices up to date, let’s move to a tool often used to complement Munki but built more for tracking the state of a device than for systems orchestration: osquery.

osquery

Facebook open sourced osquery, a tool it initially used to monitor servers; documentation is available at https://osquery.readthedocs.io/en/stable/. Since then, a number of developers (including those responsible for each platform internally at Facebook) have built additional capabilities for managing specific platforms. This makes osquery usable as part of the management stack for a variety of platforms, without having to learn the internals of each of those platforms. The point of osquery is to obtain information about a system.

The osquery framework is multiplatform and exposes information about a system through a simple SQL interface, so that devices can run lookups efficiently on behalf of a process that requests them. This makes otherwise costly (in terms of processing power) queries run quickly, meaning an organization can collect more data about devices in a central location at a higher frequency without impacting the performance of the device being monitored. This includes common settings used on a Mac, the daemons running, how a device is configured, and versions of installed software. But you can also go lower level and analyze running processes, view network sockets, compare file hashes, and find nearly any other fact about a device at a given time.
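To make the SQL idea concrete without installing anything, the query shape can be mocked with Python’s built-in sqlite3. The column names below match osquery’s system_info table, but the row data is invented; real osquery populates its tables virtually from the live system:

```python
import sqlite3

# Mock the osquery system_info table so the query shape can be shown
# without osquery installed. Row values here are made up.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE system_info "
           "(hostname TEXT, cpu_brand TEXT, physical_memory INTEGER)")
db.execute("INSERT INTO system_info VALUES "
           "('client1234', 'Apple M1', 17179869184)")

row = db.execute(
    "SELECT hostname, cpu_brand, physical_memory FROM system_info;"
).fetchone()
```

The same SELECT statement can be pasted into osqueryi on a Mac with osquery installed to see real values.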

When osquery is installed, the following files are deployed to the device:
  • /private/var/osquery/io.osquery.agent.plist: The launchd property list used to start the osquery daemon.

  • /private/var/osquery/osquery.example.conf: An example configuration file, to be customized by each organization running osquery.

  • /private/var/log/osquery/: The directory where log files are written, according to the parameters specified in the configuration file.

  • /private/var/osquery/lenses: Augeas lens files (thus the .aug files), used by osquery to parse configuration files into Augeas’ tree.

  • /private/var/osquery/packs: A set of queries configured with standard .conf files.

  • /opt/osquery/lib/osquery.app (moved from /usr/local/lib/osquery/ in version 3): The directory for the command tools for osquery.

  • /usr/local/bin/osqueryctl: Symlink to a control utility to wrap basic tasks, like starting the LaunchDaemon.

  • /usr/local/bin/osqueryd: The main osquery daemon, which starts the process.

  • /usr/local/bin/osqueryi: Provides a SQL interface to test queries. By default, comes with a number of built-in tables populated with more information than most can consume (more data is always a good thing).

Now that we’ve looked at the osquery components, let’s get it installed and check SQL to see what data we now have at our fingertips.

Install osquery

The osquery software package for Mac is available at osquery.io/downloads. The default package creates the files mentioned in the previous section. Then you’ll want to create a configuration file from the example:
sudo cp /var/osquery/osquery.example.conf /var/osquery/osquery.conf
When you edit this file, note that it’s a standard JSON file in which lines beginning with // are commented out. For this example, we’re going to uncomment the following lines by deleting the leading // and then change /usr/share/ to /var/ given that the packs have moved (note that the exact path to each file may differ based on the version of osquery and how it was compiled):
// "osquery-monitoring": "/usr/share/osquery/packs/osquery-monitoring.conf",
// "incident-response": "/usr/share/osquery/packs/incident-response.conf",
// "it-compliance": "/usr/share/osquery/packs/it-compliance.conf",
// "osx-attacks": "/usr/share/osquery/packs/osx-attacks.conf",
So those four lines should then read:
"osquery-monitoring": "/var/osquery/packs/osquery-monitoring.conf",
"incident-response": "/var/osquery/packs/incident-response.conf",
"it-compliance": "/var/osquery/packs/it-compliance.conf",
"osx-attacks": "/var/osquery/packs/osx-attacks.conf",
We’ll also uncomment this line in the same way, by removing the //:
//"database_path": "/var/osquery/osquery.db",
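Because of those //-style comments, the file isn’t strictly valid JSON until they’re stripped (osquery tolerates them; Python’s json module does not). The following hypothetical sketch validates an edited config, assuming comments occupy whole lines as they do in the example file:

```python
import json

# Strip whole-line // comments so the osquery config can be parsed and
# validated with the standard json module. Assumes comments are not
# embedded mid-line, matching the example file's style.
def load_osquery_conf(text):
    lines = [l for l in text.splitlines()
             if not l.lstrip().startswith("//")]
    return json.loads("\n".join(lines))

sample = """{
  // this pack is still commented out
  "options": {
    "database_path": "/var/osquery/osquery.db"
  }
}"""

conf = load_osquery_conf(sample)
```

Running a check like this before restarting the daemon catches a stray comma or quote that would otherwise keep osqueryd from loading the config.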
The osqueryd daemon runs queries on a schedule, aggregates the results of those queries, and outputs logs. The following is an example query from the configuration file. Here, we’re looking for the hostname, CPU, and memory from the system_info table. We also include the schedule for how frequently osqueryd updates the database for this query, using an interval option in seconds:
"system_info": {
  // The exact query to run.
  "query": "SELECT hostname, cpu_brand, physical_memory FROM system_info;",
  // The interval in seconds to run this query, not an exact interval.
  "interval": 3600
}
We’re not going to make any changes to any of the example queries just yet. Now that we’ve customized the configuration file, we’ll copy the LaunchDaemon to /Library/LaunchDaemons and start it:
sudo cp /var/osquery/com.facebook.osqueryd.plist /Library/LaunchDaemons/
Once you’ve copied the file, we’ll start the LaunchDaemon:
sudo launchctl load /Library/LaunchDaemons/com.facebook.osqueryd.plist
The footprint for osquery is slight. As an example of this, to remove osquery, simply stop the processes and remove /Library/LaunchDaemons/com.facebook.osqueryd.plist. Then remove all files from /private/var/log/osquery, /private/var/osquery, and /usr/local/bin/osquery, and finally use pkgutil to forget that the osquery package was installed:
pkgutil --forget com.facebook.osquery
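Those removal steps can be strung together into a script. The sketch below is a dry run that only prints the commands it would execute; on a real Mac you'd run them as root with run set to empty:

```shell
# Dry-run sketch of the osquery removal steps described above.
# With run=echo, the script only prints what it would do; set run= (empty)
# and execute as root on a real Mac to actually remove osquery.
run=echo
removal=$(
  $run launchctl unload /Library/LaunchDaemons/com.facebook.osqueryd.plist
  $run rm -f /Library/LaunchDaemons/com.facebook.osqueryd.plist
  $run rm -rf /private/var/log/osquery /private/var/osquery
  $run rm -f /usr/local/bin/osqueryi /usr/local/bin/osqueryd /usr/local/bin/osqueryctl
  $run pkgutil --forget com.facebook.osquery
)
echo "$removal"
```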

To deploy osquery en masse, edit your own templates, script any additional installation steps as a postflight script, and repackage everything for distribution. Depending on an environment's scale and complexity requirements, this can be more or less work than a purchased third-party package. Now that we have osquery running on a system, let's look at running queries with osquery.

Running osquery

The best way to understand the real value of osquery is to use osqueryi as a stand-alone tool to query facts about a device. Architecturally, anything you report on locally is then available on the server as well, or easily piped to a Security Information and Event Management (SIEM) tool. In fact, if you're threat hunting, doing research to write this book, or just obsessive-compulsive about tracking your own personal device performance, you can run osquery locally.

First, we'll make sure the osquery daemon is running. Now that everything is installed, it should already be started, but just in case, we'll use the following:
/usr/local/bin/osqueryctl start
Events and facts about devices are stored in a SQL database at /var/osquery/osquery.db (by default), and the schema for the tables in that database is documented at https://osquery.io/schema/3.3.2. The osqueryi binary can then be used to perform SQL queries. This is an interactive SQL shell and can be invoked by simply calling the file:
/usr/local/bin/osqueryi
Once in the interactive shell, just run a .SCHEMA command to see the lay of the land:
osquery>.SCHEMA

There are far more attributes tracked than we have pages to cover in this book. See https://link.springer.com/book/10.1007/978-1-4842-1955-3 for a great book on SQL queries.

For osquery specifically, use the link to the official schema to easily find information about what’s being tracked. It’s a much prettier map. Next, we’ll provide a few samples just to show the power of osquery. The first is from sample documentation, but it’s one of the most common. This query shows the USB devices that are plugged into a computer:
osquery>SELECT vendor, model FROM usb_devices;
The output would be as follows:
+------------+----------------------------------+
| vendor     | model                            |
+------------+----------------------------------+
| Apple Inc. | AppleUSBXHCI Root Hub Simulation |
| Apple Inc. | AppleUSBXHCI Root Hub Simulation |
| Apple Inc. | AppleUSBXHCI Root Hub Simulation |
| Apple Inc. | iBridge                          |
+------------+----------------------------------+
The preceding example is a standard SQL result set. It shows all USB devices on the bus. You can also use the WHERE clause to extract only those records that fulfill a specified criterion. The WHERE syntax keeps the same SELECT column FROM table structure but adds a WHERE at the end, followed by a condition. This is where queries become powerful: the condition can compare a column against a value in the data set or against another column. To show what this expands to fully:
osquery> SELECT vendor, model FROM usb_devices WHERE vendor !='Apple Inc.';
As you can see, we used single quotes around text. We could have also used double quotes. You do not need to quote numbers, but do need to quote strings. The following operators are available when using a WHERE clause:
  • = Equal

  • <> or != Not equal to

  • > Greater than

  • IN Indicates multiple potential values for a column

  • < Less than

  • >= Greater than or equal

  • <= Less than or equal

  • BETWEEN Between an inclusive range

  • LIKE Looks for a provided pattern
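osqueryi uses the SQLite SQL dialect, so you can practice these operators against a toy table anywhere Python is available. The device data in this sketch is made up for illustration:

```shell
# Practice WHERE operators against a toy stand-in for the usb_devices
# table; the syntax carries over to osqueryi. Data here is illustrative.
out=$(python3 - <<'EOF'
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE usb_devices (vendor TEXT, model TEXT)")
db.executemany("INSERT INTO usb_devices VALUES (?, ?)",
               [("Apple Inc.", "iBridge"),
                ("Western Digital", "WD Easystore USB 3.0"),
                ("Logitech", "USB Receiver")])
# != excludes a value; LIKE matches a pattern.
rows = db.execute("SELECT vendor FROM usb_devices "
                  "WHERE vendor != 'Apple Inc.' AND model LIKE '%USB%' "
                  "ORDER BY vendor").fetchall()
for (vendor,) in rows:
    print(vendor)
EOF
)
echo "$out"
```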

What would this look like in your configuration file?
{
  "usb_devices": {
    "query": "SELECT vendor, model FROM usb_devices;",
    "interval": 60
  }
}

In the preceding query, notice that we are running a standard SELECT statement. Most tasks executed against a SQL database are done with SQL statements. Think of statements as a query, an insert, a change, or a delete operation. For example, to see all data in the tables, select all of the records from a database using the SELECT statement.

Notice that this is just the name of a query (any old name will work) followed by a query, which is a standard SQL query, followed by an interval. This would run once a minute. Another option would be to list the amount of free space on Time Machine destinations once an hour:
{
  "time_machine": {
    "query": "SELECT bytes_available FROM time_machine_destinations;",
    "interval": 3600
  }
}
The ORDER BY keyword in a SQL SELECT statement is used to sort a given result set based on the contents of one or more columns of data. By default, results are in ascending order, but you can use either ASC or DESC to indicate that you’d like results sorted in ascending or descending order, respectively.
SELECT * FROM shared_folders ORDER BY name DESC;

Now that we’ve looked at queries, let’s move to how the logging and reporting functions work so we understand how drift is tracked.

Logging and Reporting

The SQL result set we looked at earlier ends up getting tracked in the osquery database as a field in JSON. Each time the query runs and the results have changed, a new row is created in the table; runs whose results are identical to the previous run log nothing new. The contents of the first run would appear as follows:
[
{"model":
"XHCI Root Hub SS Simulation","vendor":"Apple Inc."},
{"model":
"XHCI Root Hub USB 2.0 Simulation","vendor":"Apple Inc."},
{"model":
"XHCI Root Hub SS Simulation","vendor":"Apple Inc."},
{"model":
"Bluetooth USB Host Controller","vendor":"Apple Inc."}
]
Until a new device is added, no results are logged. But once I insert a USB drive, I would then see an entry that looks like the following:
[
  {"model":"WD Easystore USB 3.0","vendor":"Western Digital"}
]
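That "only log what changed" behavior can be sketched as a toy diff of two result sets. This is an illustration of the idea, not osquery's actual implementation, and the device data is made up:

```shell
# Toy sketch of differential logging: given the result sets from two
# runs of a query, emit only the rows that were added.
added=$(python3 - <<'EOF'
import json

first  = [{"model": "iBridge", "vendor": "Apple Inc."}]
second = [{"model": "iBridge", "vendor": "Apple Inc."},
          {"model": "WD Easystore USB 3.0", "vendor": "Western Digital"}]

# Serialize rows with sorted keys so dicts compare reliably.
seen = {json.dumps(r, sort_keys=True) for r in first}
diff = [r for r in second if json.dumps(r, sort_keys=True) not in seen]
print(json.dumps(diff))
EOF
)
echo "$added"
```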
There’s plenty of extensibility. Each deployment then has the option to add decorations, lenses, or additional packs. Now that we understand some basics about running these queries and automating them, let’s just do a quick check on shared folders:
osqueryi --json "SELECT * FROM shared_folders"
The output is then as follows:
[
  {"name":"CE’s Public Folder","path":"/Users/ce/Public"},
  {"name":"molly’s Public Folder","path":"/Users/molly/Public"}
]
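As a sketch of the extensibility mentioned above, a decorators block appends the results of small queries to every log line the daemon emits, which is handy for tying results back to a specific host. The exact queries here are illustrative:

```json
{
  "decorators": {
    "load": [
      "SELECT uuid AS host_uuid FROM system_info;",
      "SELECT version FROM os_version;"
    ]
  }
}
```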

This information can quickly and easily be picked up as inventory from other tools with agents, such as munki, Jamf Pro, Addigy, or Fleetsmith. As noted previously, Fleetsmith came with the ability to direct osquery information from managed clients into a server. Now that we’ve covered osquery, let’s look at another open source agent called Chef.

Chef

The purpose of osquery is to obtain information about devices. But an orchestration tool is required as well for large-scale systems administration. Chef is a tool originally built by Jesse Robbins to do server builds and is now maintained at https://chef.io. Chef uses a recipe to perform a configuration task. These recipes are organized into cookbooks.

Managing clients is harder than managing servers. Your server isn't likely to get up and walk away, doesn't have a rogue root user, and will never connect to Starbucks Wi-Fi.

—Mike Dodge, Client Platform Engineer, Facebook

The most complete list of cookbooks available for the Mac can be obtained through the Facebook Client Platform Engineering team’s GitHub account at https://github.com/facebook/IT-CPE. Reading through these should provide a good understanding of the types of things that Facebook and other IT teams do to automate systems and get up to speed on how to orchestrate various events on the Mac.

Install Chef

We don’t go into detail in this book on how to set up a Chef instance and get client systems to connect to it. That’s an entire book of its own. But we do review the Chef client in this section. To install the client, download the installer from https://downloads.chef.io/chef-client/. When you install the package, chef-apply, chef-client, chef-shell, and chef-solo will be installed in /usr/local/bin.

To clone the repo mentioned earlier from Meta/Facebook (as of the time of this writing, that repo was last updated less than three weeks ago, so it’s an active community-run asset), use the following command (which would copy it to /Users/Shared/ChefAssets):
git clone https://github.com/facebook/IT-CPE /Users/Shared/ChefAssets
Once installed, there will be a company_init.rb script at /Users/Shared/ChefAssets/chef/cookbooks/cpe_init/recipes. There’s also a /Users/Shared/ChefAssets/chef/tools/chef_bootstrap.py bootstrap script. Next, customize the chef server URL and the organization name (which should match that of the chef server), and provide any certificates necessary. The main settings are in the header of the script:
CLIENT_RB = """
log_level              :info
log_location           STDOUT
validation_client_name 'YOUR_ORG_NAME-validator'
validation_key         File.expand_path('/etc/chef/validation.pem')
chef_server_url        "YOUR_CHEF_SERVER_URL_GOES_HERE"
json_attribs           '/etc/chef/run-list.json'
ssl_ca_file            '/etc/chef/YOUR_CERT.crt'
ssl_verify_mode        :verify_peer
local_key_generation   true
rest_timeout           30
http_retry_count       3
no_lazy_load           false
"""
Additionally, look for any place that indicates MYCOMPANY and replace that with the name of the organization to personalize the installation. Also, make sure that if using chef to bootstrap a Munki installation, the correct URL is defined in SoftwareRepoURL:
# Be sure to replace all instances of MYCOMPANY with your actual company name
node.default['organization'] = 'MYCOMPANY'
prefix = "com.#{node['organization']}.chef"
node.default['cpe_launchd']['prefix'] = prefix
node.default['cpe_profiles']['prefix'] = prefix
# Install munki
node.default['cpe_munki']['install'] = false
# Configure munki
node.default['cpe_munki']['configure'] = false
# Override default munki settings
node.default['cpe_munki']['preferences']['SoftwareRepoURL'] =
  'https://munki.MYCOMPANY.com/repo'

The logs are written to /Library/Chef/Logs/first_chef_run.log when the script runs. The supporting files for chef will also be at /etc/chef, including certificates that secure communications and a client.rb file that contains the information you supplied to the bootstrap.py script. Provided it completes, you'll then have a working quickstart.json file at /Users/Shared/ChefAssets/chef and a working run-list.json file that includes any recipes you want to run. You'll also have a /var/chef/cache directory for caches.

The quickstart script can then be as simple as the following:
{
  "minimal_ohai" : true,
  "run_list": [
    "recipe[cpe_init]"
  ]
}
Cookbooks should be ordered in the run-list from least specific to most specific. That company_init.rb recipe defined the defaults for an organization with all of the CPE cookbooks provided. The cpe_init entry in the quickstart.json loads those recipes called in that init, which by default includes a platform run-list, a user run-list, and a node customization run-list. To know what anything is doing when it's being called, simply look at the depends lines and then read the resource ruby script for each, such as /Users/Shared/ChefAssets/chef/cookbooks/cpe_hosts/resources/cpe_hosts.rb. Once everything is in place, it's time to grill out with chef. Let's simply run the chef-client with -z for local mode and -j to specify your json file:
sudo chef-client -z -j /Users/Shared/ChefAssets/chef/quickstart.json

Edit a Recipe

Chef then verifies each resource in each included cookbook has been configured as defined and resolves any drift found in the current system. One of the most important things about a tool like chef is how configurable it is. Simply cat the /Users/Shared/ChefAssets/chef/cookbooks/cpe_munki/resources/cpe_munki_local.rb file to see how munki is installed. Now that chef is running, let's edit a recipe. To do so, edit the /Users/Shared/ChefAssets/chef/cookbooks/cpe_init/recipes/company_init.rb recipe in your favorite text editor and add the following lines to the bottom of the file:
node.default['cpe_autopkg']['repos'] = [
  'recipes',
  'https://github.com/facebook/Recipes-for-AutoPkg.git'
]
This adds the recipes from the Meta team to an autopkg instance running on the host. Other parts of the recipe will allow you to install autopkg and customize it, so you don’t have to do all the steps we’ll follow in a manual installation later in this book. Programmatic deployment of tools and configuration provides for a consistent experience. Once you’ve configured the change to the client init, rerun the chef-client:
sudo chef-client -z -j /Users/Shared/ChefAssets/chef/quickstart.json
These also write profiles, which you can then see in System Preferences. Meta was one of the first to publish cookbooks for Chef and an early proponent of Chef for large-scale Mac orchestration. A few others have also open sourced their cookbooks, and cookbooks can be obtained from multiple vendors, which gives administrators a number of options to choose from.

The social community of Chef administrators and how they share cookbooks makes for a good reason to look into these types of workflows. Chef is open source and there are a lot of different methodologies around its use and deployment. The examples in this chapter have mostly been developed around a model that Apple began back in Software Update Server when they provided us with a manifest URL. Mac admins have been using a similar manifest, init script, etc., to deploy settings, apps, and operating systems ever since. Some organizations have developed integrations with Chef that go beyond this and leverage a chef server.

In the preceding example, we’re providing those certificates and the chef-client to endpoints from a central location, configuring what is required for a client to be able to communicate back to a server. The steps we followed in the previous examples can be strung together into an installer package. But being able to automatically deploy one and keep clients up to date automatically makes for a much simpler experience. This is where an orchestration tool like Puppet can come in handy.

Puppet

The tools covered in the previous sections are just a few in a red ocean that includes a large number of client management tools available for the Mac. We’ve seen Puppet, Vagrant, and other open source projects used to orchestrate events on the Mac in much the same way they would orchestrate events on a large farm of Linux servers.

The Puppet installer for Mac is available at https://downloads.puppetlabs.com/mac/, and when installed using a standard software package, the puppet-agent is used to orchestrate events on Macs. A number of other binaries for puppet can be found in /opt/puppetlabs/bin/. The service can be managed using launchctl or the puppet binary. For example, if puppet is stopped, it can be started using
sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
You can configure changes to some of the ways the agent runs with the settings documented at https://puppet.com/docs/puppet/5.5/config_important_settings.html. The most important step is to sign a certificate that's then used to establish communications with the server. This is done using the puppet command-line utility followed by the cert option and then the sign verb for that option, followed by the name of the certificate to be generated, as follows:
sudo /opt/puppetlabs/bin/puppet cert sign com.puppet.pretendco8734

These need to match the server entry in the puppet.conf file. We don't want to oversimplify a full-blown puppet deployment: getting a client to connect to a server is pretty straightforward, but the real value in any of these tools comes in the form of how much time they save you once deployed. Puppet has nine configuration files, such as auth.conf and puppetdb.conf, for a reason. We won't go into each of them (especially since our publisher has an entire book on the subject available at www.apress.com/gp/book/9781430230571).

Logs are then saved to /var/log/puppetlabs/puppetserver/puppetserver.log. This walk-through follows the same general standard as Chef and Munki. But each is really built for something specific. Puppet is for immediate orchestration. Munki is for software distribution. Chef is for keeping a device in a known state. Osquery is for keeping inventory of settings and events. There’s overlap between some of the options, but if you squint enough, the basic methodology and management principles across them are, in a very oversimplified way, similar. One such similarity is that most administrators of these tools prefer to check changes in and out using a tool called git.

Use Git to Manage All the Things

Git is a version control system (or tool) that can be used to manage files, including code, so you can see changes over time. The man page indicates it's actually "the stupid content tracker." Git is good at tracking changes between files and allowing administrators to check code or files out and then check them back in when finished working. This is well suited to a workflow where you want someone else to review your changes before they get applied to a large fleet of devices. This makes git a common complement to chef, osquery, and munki deployments.

Ultimately though, git is a command with some verbs. Let’s start with the init verb, which creates an empty git repository in the working directory (or supply a path after the verb):
git init
Now let’s touch a file in that directory:
touch newfilename
Once a new file is there, with that new repo as the working directory, run git with the status verb:
git status
You now see that you're "On branch master" (we'll talk about branching later). You see "No commits yet" and hey, what's that, an untracked file! Run git with the add verb, and this time you need to specify a file or path (I'll use ., assuming your working directory is still the root of the repository):
git add .
Now let’s run the status command; again, the output should indicate that you now have a staged file (or files). Now let’s run our first commit. This takes the tracked and staged file that we just created and commits it. Until we do this, we can always revert back to the previous state of that file (which in this simple little walk-through would be for the file to no longer exist).
git commit -m "test"
Now let’s edit our file:
echo "This is an example." > newfilename
This time, let’s run git with the diff verb:
git diff
You can now see what changed between your file(s). Easy, right? Check out the logs to see what you’ve been doing to poor git:
git log
There's a commit listed there, along with an author, a date and timestamp, as well as the name of the file(s) in the commit. Now, let's run a reset to revert to our last commit. This will overwrite the changes we just made prior to doing the diff (you can specify a commit as the next argument after --hard, or leave it off to use the latest commit):
git reset --hard
This resets all files back to the way they were before you started mucking around with those poor files. OK, so we've been working off in our own little world. Next, we'll look at branches. You know how we reset all of our files in the previous command? What if we had 30 files and we just wanted to reset one? You shouldn't work in your master branch for a number of reasons. So let's look at existing branches by running git with the branch verb:
git branch
You see that you have one branch, the "∗ master" branch. To create a new branch, simply type git branch followed by the name of the branch you wish to create (in this case, it will be called myspiffychanges1):
git branch myspiffychanges1
Run git with the branch verb again, and you’ll see that below master, your new branch appears. The asterisk is always used so you know which branch you’re working in. To switch between branches, use the checkout verb along with the name of the branch:
git checkout myspiffychanges1
I could have done both of the previous steps in one command, by using the -b flag with the checkout verb:
git checkout -b myspiffychanges1
OK now, the asterisk should be on your new branch, and you should be able to make changes. Let’s edit that file from earlier. Then let’s run another git status and note that your modifications can be seen. Let’s add them to the list of tracked changes using the git add for the working directory again:
git add .
Now let’s commit those changes:
git commit -m "some changes"
And now we have two branches, a little different from one another. Let’s merge the changes into the master branch next. First, let’s switch back to the master branch:
git checkout master
And then let’s merge those changes:
git merge myspiffychanges1
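The whole walk-through so far, from init through merge, can be strung together and rehearsed in a scratch directory. The identity values below are placeholders, not real settings:

```shell
# Rehearse the init/commit/branch/merge walk-through in a scratch repo.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.name "Example Admin"        # placeholder identity
git config user.email "admin@example.com"   # placeholder identity

echo "first draft" > newfilename
git add .
git commit -q -m "test"

git checkout -q -b myspiffychanges1         # branch and switch in one step
echo "This is an example." > newfilename
git add .
git commit -q -m "some changes"

git checkout -q -                           # back to the original branch
git merge -q myspiffychanges1               # fast-forward merge
merged=$(cat newfilename)
echo "$merged"
```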
OK – so now you know how to init a project, branch, and merge. Before we go on the interwebs, let’s first set up your name. Notice in the logs that the Author field displays a name and an email address. Let’s see where that comes from:
git config --list
This is initially populated by ~/.gitconfig so you can edit that. Or let’s remove what is in that list:
git config --unset-all user.name
And then we can add a new set of information to the key we’d like to edit:
git config user.name "Charles Edge" --global
You might as well set an email address too, so people can yell at you for your crappy code some day:
git config user.email "[email protected]" --global
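If you'd rather experiment with these identity settings without touching your real ~/.gitconfig, the same commands can be scoped to a single repository with --local. The throwaway repo and email address below are placeholders:

```shell
# Set identity values scoped to one scratch repo (--local), leaving the
# user's global ~/.gitconfig untouched.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config --local user.name "Charles Edge"
git config --local user.email "admin@example.com"   # placeholder address
name=$(git config user.name)
echo "$name"
```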
Next, let's clone an existing repository onto our computer. The clone verb copies a repository into a new directory inside your current working directory:
git clone https://github.com/autopkg/autopkg
The remote verb allows you to track a remote repository locally. It takes a couple of steps: first, init a project with the appropriate name and then cd into it. Then grab the URL from GitHub and add it as a remote (here named AutoPkg) using the remote verb:
git remote add AutoPkg https://github.com/autopkg/autopkg.git
Now let's fetch a branch of that project from the AutoPkg remote, in this case, a branch called myspiffychanges1:
git fetch AutoPkg myspiffychanges1
Now we'll want to download and merge the contents of that branch:
git pull AutoPkg myspiffychanges1
And once we've made some changes, let's push our changes:
git push AutoPkg myspiffychanges1
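You can rehearse this remote/push/fetch cycle without any network access by using a local bare repository to stand in for GitHub. All names and paths in this sketch are scratch values:

```shell
# Rehearse remote/push/fetch with a local bare repo standing in for GitHub.
work=$(mktemp -d)
origin="$work/origin.git"
git init -q --bare "$origin"                # the stand-in "remote"

git init -q "$work/clone1"
cd "$work/clone1"
git config user.name "Example Admin"        # placeholder identity
git config user.email "admin@example.com"   # placeholder identity
echo "hello" > file.txt
git add . && git commit -q -m "initial"

git remote add scratch "$origin"            # like 'git remote add AutoPkg ...'
git push -q scratch HEAD:myspiffychanges1   # publish a branch to the remote
git fetch -q scratch                        # fetch it back as a tracking ref
remotes=$(git branch -r)
echo "$remotes"
```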

Now that you've deployed agents, we'll cover the concept of User-Approved MDM, a great complement to what agents can do, so there's less button mashing for our end users.

The Impact of UAMDM and Other Rootless Changes to macOS

Many of the third-party and open source tools use binaries that have been forced to evolve over the years due to the Mac becoming less like a Unix or Linux and more like iOS (which is arguably one of the safest operating systems available). Until macOS High Sierra, some MDM functions did not work as well on Macs as on institutionally owned iOS devices, because those iOS counterparts had supervision and Macs did not. User-Approved MDM (UAMDM), introduced in macOS 10.13.4, changed that: Macs owned by a company, school, or institution can now be managed in a fashion similar to supervised iOS devices, with certain management privileges placed in a special category. The use of these special management privileges requires both an MDM solution and that the solution support User-Approved MDM. As of macOS Mojave 10.14.x, these special management privileges are the following:
  • Approval of third-party kernel extension loading (less of an issue now that kernel extensions aren’t used, but the same logic now applies to system extensions and other apps that require entitlements)

  • Approval of application requests to access privacy-protected data and functionality

  • Autonomous Single App Mode

For Mac environments which had traditionally not used MDM management solutions, this meant for the first time that an MDM solution was absolutely necessary for certain management functions to happen (unless SIP is disabled). Moreover, there are two ways to mark a Mac as being user approved:
  • Enrolling the Mac in Apple's Automated Device Enrollment, or ADE, formerly called the Device Enrollment Program (DEP). Enrollment of a Mac into ADE means that Apple or an authorized reseller has designated that Mac as one owned by a company, school, or other institution. Since this Mac is now explicitly not a personally owned device, it gets UAMDM and other benefits that allow certain binaries to run in privileged ways automatically.

  • Having a human being click an approval button on the MDM profile issued by an MDM server which supports UAMDM. Notice that this cannot be scripted with graphical scripting tools as Apple blocks “synthetic clicking” on these screens to protect the privacy of end users.

The automatic granting of UAMDM to ADE-enrolled Macs means that ADE (and so MDM) is now almost a requirement for most organizations. The combination of UAMDM’s reserving of management privileges and the necessity of using MDM to employ those privileges means that using an MDM solution to manage Macs has moved from the “useful, but not essential” category to the “essential” category.

The rise of MDM management may signal the diminishment of using agents to manage Macs, but that has been a slow progression, and as seen in this chapter, agents are still quite beneficial. Still, more MDM management options become available every year, and the more a deployment can rely on Apple's built-in MDM functionality in place of a third-party agent, the more future-proofed that deployment is likely to be. While agents likely won't disappear overnight, the areas where they provide management value will shrink over time.

Rootless

The challenge with what some of these agents are doing is that they operate in ways that struggle to keep up with the rapid pace of change at Apple engineering. Given the prevalence of some of these tools, Apple maintains a group of its own tools that are exempted from many of the sandboxing requirements, part of what Apple calls rootless. Some files need to be modifiable, even if they're in a protected space. To see a listing of Apple tools that receive this exception, see /System/Library/Sandbox/rootless.conf:
cat /System/Library/Sandbox/rootless.conf

The degree to which each entry in the rootless.conf file is exempt varies; reading through the file shows the full list of these SIP exceptions.

Frameworks

Another aspect to be aware of when considering agents is the frameworks used in the agent. Frameworks are also sometimes important to consider as they’re added into apps and have to be approved for use by a user via an extension that loads the framework. A framework is a type of bundle that packages dynamic shared libraries with the resources that the library requires, including files (nibs and images), localized strings, header files, and maybe documentation. The .framework is an Apple structure that contains all of the files that make up a framework.

Frameworks are stored in the following location (where the ∗ is the name of an app or framework):
  • /Applications/∗/Contents/Frameworks

  • /Library/∗/

  • /Library/Application Support/∗/∗.app/Contents/

  • /Library/Developer/CommandLineTools/

  • /Library/Developer/

  • /Library/Frameworks

  • /Library/Printers/

  • /System/iOSSupport/System/Library/PrivateFrameworks

  • /System/iOSSupport/System/Library/Frameworks

  • /System/Library/CoreServices

  • /System/Library/Frameworks

  • /System/Library/PrivateFrameworks

  • /usr/local/Frameworks

If you just browse through these directories, you'll see so many things you can use in apps. You can easily add an import followed by the framework name in your view controllers in Swift. For example, in /System/Library/Frameworks, you'll find Foundation.framework. Foundation is pretty common as it contains a number of core classes such as NSObject, NSDate, NSString, and NSDateFormatter.

You can import this into a script using the following line:
import Foundation

As with importing frameworks/modules/whatever (depending on the language), you can then consume the methods/variables/etc. in your code (e.g., let url = NSURL(fileURLWithPath: "names.plist")).

The importance of frameworks here is that you should be able to run a command called otool to see what frameworks a given binary is dependent on in order to better understand what’s happening:
otool -L /usr/bin/lldb

Additionally, you can use an open source project called looto to see what is dependent on binaries in order to better understand how tools interact with other tools or with their own various frameworks. This is one of a number of open source tools that many administrators will need to understand at some point in order to have a well-rounded perspective on device management.

For noncompiled apps, dynamic libraries (.dylib) can be dangerous and therefore should be avoided where possible. Most Swift apps now disable the ability to load a dylib by default due to the number of security flaws that dylibs have been used to exploit.

Miscellaneous Automation Tools

There are also a number of automation tools that are easily called by agents that make planning and implementing a deployment easier by providing more flexible options to administrators for specific tasks. There are plenty of other tools described throughout the book, but these are specifically designed to help extend what agents can do.

The first tool we'll cover is outset from Joseph Chilcote, available at https://github.com/chilcote/outset/. Outset processes packages and scripts at first boot and user logins. Outset is composed of two launchd items that call loose packages or scripts in individual folders either at startup or user login. To add more tasks to the startup and login processes, add new items to the appropriate folders. Outset handles the execution.

If your Macs need to routinely run a series of startup scripts to reset user environments or computer variables, then making launchd plists may be burdensome and difficult to manage. And launchd jobs execute asynchronously, which means startup and login processes may not run in the same order every time.

The next tool is dockutil, available at https://github.com/kcrawford/dockutil. Dockutil makes it easier to manage the Dock on a Mac. Users need the right tools to do their jobs, and a thoughtfully crafted dock helps them find those tools. They need access to applications, their home folders, servers, and working directories. Dockutil adds, removes, and reorders dock items for users. The script allows an administrator to adjust dock settings to adjust the view of folders (grid, fan, list, or automatic), adjust the display of folders to show their contents or folder icons, and set folder sort order (name, date, or kind).

The last tool we’ll cover is duti, available at http://duti.org/index.html. Duti makes it easier to set default applications for document types and URL handlers/schemes. Enterprises often incorporate Macs into complex workflows that require consistent behaviors. If a workflow requires using the Firefox browser instead of Safari or using Microsoft Outlook instead of Apple’s Mail application, Andrew Mortensen’s duti can ensure the correct applications respond when opening a URL or new email message.

Note

A much more comprehensive list of these tools can be found in Appendix A.

Duti's name means "default for UTI," where UTI is what Apple calls a Uniform Type Identifier. Every file type, such as an HTML page or a Microsoft Word document, has a UTI, and developers constantly create new UTIs of their own. Duti reads and applies settings that pair applications with UTIs.
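To make the Firefox and Outlook scenario concrete, the following is a sketch of duti usage; it assumes duti is installed and uses the real bundle identifiers for Firefox and Outlook:

```shell
# Hypothetical example of setting default handlers with duti.

# Make Firefox the default handler for HTML documents (all roles)
# and for the https URL scheme.
duti -s org.mozilla.firefox public.html all
duti -s org.mozilla.firefox https

# Make Outlook the default handler for the mailto URL scheme, so new
# email messages open in Outlook instead of Apple's Mail application.
duti -s com.microsoft.Outlook mailto

# Verify: show the application currently registered for the .html extension.
duti -x html
```

Because these settings are per user, an agent would typically run duti in the user's context at login, for example, via an outset login script.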

Summary

There are a number of agent-based solutions on the market that make managing Macs en masse possible. Some of these are proprietary, and others are open source. Most management agents should be paired with a Mobile Device Management (MDM) solution, which we cover further in Chapter 4. The focus here is on the Mac, simply because we cannot install "agents" on iOS, iPadOS, and tvOS devices (without jailbreaking them).

These agents are typically used for device inventory, deploying software, keeping software up to date, managing settings, notifying users, and a number of other tasks. The term "agent" is often an oversimplification. Each "agent" usually comes with anywhere between one and five LaunchAgents and LaunchDaemons, because each task should run independently. These tasks usually invoke other tasks, preferably with native Swift frameworks but often by simply "shelling out" to a command-line tool built into macOS. As an example, you can install profiles manually using the profiles command, which any agent-only management tool will use for profile management, given that some tasks require a profile. We'll cover profiles in detail in Chapter 3.
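A brief sketch of the built-in profiles tool an agent might shell out to follows. The .mobileconfig path is hypothetical, and note that recent macOS releases increasingly restrict installing configuration profiles from the command line in favor of MDM:

```shell
# Sketch of the built-in profiles command-line tool.

# List configuration profiles installed for the current user.
profiles list

# Show details of installed profiles.
profiles show

# Install a profile from disk (path is hypothetical; requires admin
# rights, and this capability is restricted on newer macOS releases,
# where profiles must generally be delivered by MDM or user-approved).
sudo profiles install -path /tmp/settings.mobileconfig
```

This restriction is one concrete example of why agent-only management keeps losing ground to MDM, as discussed earlier in the chapter.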

More and more of these settings now prompt users for consent. Thus, we need an MDM solution both to limit the number of prompts users see and to get our management agents onto devices without too much work from front-line tech support.

Now that we’ve covered agents, we’ll dig into MDM further in Chapter 4. But first, we’ll explore profiles even further in Chapter 3, so you can get more done with both agents and MDM.
