Michael Stanclift edited this page Oct 27, 2023 · 12 revisions

The Re-Release

Gravity Sync 4 is an extensive rewrite of the application. There is not a single function, variable, or chunk of code that has gone untouched.

In the past I've always tried to make sure that anyone who wanted to update wouldn't have to do much more than occasionally reapply automation jobs to take advantage of new features. The effect was that I stayed reliant on flawed or shortsighted decisions I'd made two years ago, and every new feature had to be built on top of (or within) the structure of those decisions.

Gravity Sync 4 is a fresh start.

Beta Testers

  1. Thank you to everyone who tested 4.0 and provided any feedback.
  2. To move from the beta track back to the production version, run gravity-sync dev to toggle over.

Rethought Installation

The Gravity Sync installer has been rewritten to accommodate both new installations and upgrades. (Upgrades work only after a successful migration to 4.0; see below.) Previously it would detect existing installs and refuse to complete. There is also a dedicated update script that can be used to fix a broken deployment by executing bash /etc/gravity-sync/.gs/update.sh, but it should only be required if gravity-sync update has failed for some reason.
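A minimal repair sketch based on the steps above, guarded so it degrades gracefully on a host where Gravity Sync isn't installed (the fallback script path is taken from these notes):

```shell
# Try the normal update first; fall back to the dedicated repair script only on failure
if command -v gravity-sync >/dev/null 2>&1; then
  gravity-sync update || bash /etc/gravity-sync/.gs/update.sh
else
  echo "gravity-sync not installed"
fi
```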

Rearchitected Deployment

The architecture and installation have been simplified so that Gravity Sync is installed on the pair of Pi-hole boxes that you want to use it with. There is no longer a primary or secondary Pi-hole; both sides run replication jobs as peers. When either side detects changes to your Pi-hole configuration, they are sent to the peer.

Automation jobs are set to run at rotating, randomized times to avoid collisions. To prevent continuous re-syncing, the hashes that were previously saved only to one instance are now also sent to the peer at the end of a successful replication event.

With automation fully enabled, changes to either Pi-hole will usually be reflected on both sides within 5 minutes. However, if you're making large changes you want to monitor, or just feeling impatient, you can still manually run a replication job at any time.

If you only want to run Gravity Sync on one Pi-hole, that's still fine too. An optional "peerless" mode will detect the absence of a configuration at the remote site and bypass the hash sync. A warning will appear in the Gravity Sync console output suggesting you configure the remote side for full functionality, but it can be ignored. (This should also allow folks who replicate between more than two Pi-hole instances to have multiples syncing to a single source of truth.)

Relocated Executables

Gravity Sync previously required itself to live in the user's $HOME directory. Alternatively, if it was installed as the root user, Gravity Sync could live in any self-contained folder on the system.

In 4.0, Gravity Sync now lives in /usr/local/bin (alongside pihole) and the configuration files and job logs are contained in the /etc/gravity-sync folder. This change to the configuration folder should make containerizing Gravity Sync easier in the future. (hint, hint)

The main script has been renamed from gravity-sync.sh to gravity-sync and, due to its relocation, has the nice effect of letting you execute just gravity-sync from anywhere in the system. Said another way, you no longer have to run ./gravity-sync.sh from the Gravity Sync folder in $HOME any time you want to run it manually.

Resimplified Backups

Following the removal of the dedicated backup and restore features in 3.5, the backup functions in 4.0 are further simplified: backup files are no longer timestamped, so jobs that stop completing successfully will not keep writing new backup files on each run until the hard drive fills up.

  • All data backup operations have moved to the /tmp folders on both the local and remote Pi-hole.
  • All backup files have the extension .gsb appended to them during processing. This replaces the various .backup, .pull, .push files of previous versions.

The amount of write activity, network traffic, and overall execution time has been reduced by cutting down on the number of backup data copies that are created, especially during push operations.

Reduced Dependence

The dependency on a standalone copy of SQLite3 has been removed. Gravity Sync now uses the SQLite3 instance built into Pi-hole's FTL component (pihole-FTL sql), which also removes an impediment for folks wanting to install Gravity Sync on systems that couldn't install SQLite3.
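As a sketch of what using the embedded engine looks like, the following queries Pi-hole's gravity database through pihole-FTL rather than a standalone sqlite3 binary. The specific query is illustrative (an assumption, not something Gravity Sync itself runs), and the block is guarded so it degrades gracefully where pihole-FTL is absent:

```shell
# Use Pi-hole's embedded SQLite engine; no separate sqlite3 package required
if command -v pihole-FTL >/dev/null 2>&1; then
  pihole-FTL sql /etc/pihole/gravity.db "SELECT COUNT(*) FROM adlist;"
else
  echo "pihole-FTL not found"
fi
```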

All cron-related automation functions have been removed. The systemd-based replication function that was introduced in 3.7 carries forward, with further enhancements. This means packages like cronie are no longer needed on systems that lack a built-in crontab.

Really New

In addition to everything above there are a few new functions.

  • gravity-sync purge now uninstalls and completely removes Gravity Sync from the system. It previously only reset the installation to new.
  • gravity-sync disable will fully stop and remove all automation jobs in systemd.
  • gravity-sync monitor will let you watch the real time status of replication jobs.
  • gravity-sync config 1234 (where 1234 is the remote SSH port) will let you pass custom SSH ports to the configuration utility to aid in setup when your target Pi-hole is on another network.
  • gravity-sync auto has additional frequency options.

If DNS records and CNAME records are detected on either side, they are replicated. Previously you had to opt in to replicating these, but only because their functionality was added after the initial release of Gravity Sync in 2020.

Static DHCP Assignments are now replicated by Gravity Sync.

Required to Upgrade

There are two ways to upgrade to Gravity Sync 4.

Fresh Install

curl -sSL https://raw.githubusercontent.com/vmstan/gs-install/main/gs-install.sh | bash

Using the curl-to-bash one-liner listed above, you will deploy a new install of Gravity Sync 4 to both of your existing Pi-holes. Any current Gravity Sync configuration will be ignored.

You will need to generate a new configuration file using gravity-sync config, but because the configuration workflow has been completely redone, in most cases you will only need to provide the IP address and username/password for your remote Pi-hole. Gravity Sync will automatically detect if you have a containerized instance of Pi-hole and prompt for the name of the instance(s). Once this is provided, it will automatically detect the bind mounts in use for Pi-hole's two /etc directories. It's possible to have your new configuration generated in less than a minute.

Gravity Sync now uses a dedicated SSH key to connect to remote Pi-hole instances. This key is stored as /etc/gravity-sync/gravity-sync.rsa and should be used only for Gravity Sync. The configuration utility will create the new key file. Existing default id_rsa key associations to remote servers can be safely deleted/removed unless they're associated with some other process.
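A quick way to confirm the dedicated key works is to connect with it explicitly. The remote user and host below are placeholders (assumptions, not values from these notes), and the block is guarded so it only attempts the connection when the key file exists:

```shell
KEY=/etc/gravity-sync/gravity-sync.rsa
if [ -f "$KEY" ]; then
  # Force use of the dedicated key only; remote user/host are placeholders
  ssh -i "$KEY" -o IdentitiesOnly=yes pi@192.0.2.10 'pihole -v'
else
  echo "key not found: $KEY"
fi
```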

Migrate Existing Configuration

From your existing Gravity Sync folder, running any version from 3.1 to 3.7, run ./gravity-sync.sh update and the version 4.0 code will be staged temporarily in the legacy folder. At this point ./gravity-sync.sh becomes the migration tool that moves you to the version 4.0 format.

Run ./gravity-sync.sh again and a new Gravity Sync 4 installation will be laid down for you, with your existing configuration, logs, SSH key, and hashes migrated to the fresh install. Legacy backup files are not retained. At the end of a successful script execution the legacy Gravity Sync folder is deleted, so you may need to run cd ~ to get yourself out of the now-deleted folder.

Many of the advanced configuration options that were termed "Hidden Figures" have been removed, as they're no longer supported or the logic that made them necessary has changed. Only the configuration settings relevant to 4.0 will be transferred.

Your automation settings will default to none. You will want to run gravity-sync compare to make sure your migration was successful.

At this point you will want to perform a fresh install of Gravity Sync on your other Pi-hole to enable peer mode and hash sharing. Once that is successful, enable automation on both nodes using gravity-sync auto.
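The migration steps above can be sketched as one guarded shell sequence. The legacy install living at the default $HOME location is an assumption (yours may differ), and the guard means the block simply reports and exits where no legacy install exists:

```shell
# Migration walkthrough sketch (legacy 3.x path in $HOME is an assumption)
if [ -x "$HOME/gravity-sync/gravity-sync.sh" ]; then
  cd "$HOME/gravity-sync"
  ./gravity-sync.sh update   # stages the 4.0 code in the legacy folder
  ./gravity-sync.sh          # now acts as the migration tool
  cd "$HOME"                 # legacy folder is deleted on success
  gravity-sync compare       # confirm the migrated configuration works
else
  echo "no legacy install found"
fi
```

After this succeeds, install fresh on the peer Pi-hole and run gravity-sync auto on both nodes, as described above.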

Relevant Issues

Some relevant issues from early adopters that may be helpful during your migration.

  • #329 "gravity-sync.sh: No such file or directory"
  • #327 Multiple Peers (3+ piholes in sync)