Perforce Replication

What is replication?

Replication is the duplication of server data from one Perforce Server to another Perforce Server, ideally in real time. You can use replication to:

  • Provide warm standby servers

    A replica server can function as an up-to-date warm standby system, to be used if the master server fails. Such a replica server requires that both server metadata and versioned files are replicated.

  • Reduce load and downtime on a primary server

    Long-running queries and reports, builds, and checkpoints can be run against a replica server, reducing lock contention. For checkpoints and some reporting tasks, only metadata needs to be replicated. For reporting and builds, replica servers need access to both metadata and versioned files.

  • Provide support for build farms

    A replica with local (non-replicated) storage for client workspaces (and their respective have lists) is capable of running as a build farm.

  • Forward write requests to a central server

    A forwarding replica holds a readable cache of both versioned files and metadata, and forwards commands that write metadata or file content towards a central server.

In conjunction with a centralized authorization server (see Centralized authorization server (P4AUTH)), Perforce administrators can use the Perforce Broker (see “The Perforce Broker”) to redirect commands to replica servers, balancing load efficiently across an arbitrary number of replica servers.

Note

Most replica configurations are intended for reading of data. If you require read/write access to a remote server, use either a forwarding replica, a distributed Perforce service, or the Perforce Proxy. See Configuring a forwarding replica, “Commit-edge Architecture” and “Perforce Proxy” for details.

System requirements

  • As a general rule, all replica servers must be at the same release level as the master server, or at a later release. Any functionality that requires an upgrade for the master requires an upgrade for the replica, and vice versa.
  • All replica servers must have the same Unicode setting as the master server.
  • All replica servers must be hosted on a filesystem with the same case-sensitivity behavior as the master server’s filesystem.
  • p4 pull (when replicating metadata) does not read compressed journals. The master server must not compress journals until the replica server has fetched all journal records from older journals. Only one metadata-updating p4 pull thread may be active at one time.
  • The replica server does not need a duplicate license file.
  • The master and replica servers must have the same time zone setting.

    Note

    On Windows, the time zone setting is system-wide.

    On UNIX, the time zone setting is controlled by the TZ environment variable at the time the replica server is started.

Replication basics

Replication of Perforce servers depends upon several commands and configurables:

Each command or feature below is described along with its typical use case.

p4 pull

A command that can replicate both metadata and versioned files, and report diagnostic information about pending content transfers.

A replica server can run multiple p4 pull commands against the same master server. To replicate both metadata and file contents, you must run two p4 pull threads simultaneously: one (and only one) p4 pull (without the -u option) thread to replicate the master server’s metadata, and one (or more) p4 pull -u threads to replicate updates to the server’s versioned files.
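
For example, to report on pending file content transfers from a running replica (a minimal sketch, assuming the replica listens on replica:22222, as in the examples later in this chapter):

$ p4 -p replica:22222 pull -l -s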

p4 configure

A configuration mechanism that supports multiple servers.

Because p4 configure stores its data on the master server, all replica servers automatically pick up any changes you make.

p4 server

A configuration mechanism that defines a server in terms of its offered services. In order to be effective, the ServerID: field in the p4 server form must correspond with the server’s server.id file as defined by the p4 serverid command.

p4 serverid

A command to set or display the unique identifier for a Perforce Server. On startup, a server takes its ID from the contents of a server.id file in its root directory and examines the corresponding spec defined by the p4 server command.
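
For example, a sketch of assigning the ID Replica1 to a replica (Replica1 and replica:22222 are the example names used later in this chapter); restart the server afterwards so that the new server.id file is read:

$ p4 -p replica:22222 serverid Replica1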

p4 verify -t

Causes the replica to schedule a transfer of the contents of any damaged or missing revisions.

The command reports BAD! or MISSING! files with (transfer scheduled) at the end of the line.

For the transfer to work on a replica with lbr.replication=cache, the replica should have one or more p4 pull -u threads configured (perhaps also using the --batch=N flag.)

Server names

P4NAME
p4d -In name

Perforce Servers can be identified and configured by name.

When you use p4 configure on your master server, you can specify different sets of configurables for each named server. Each named server, upon startup, refers to its own set of configurables, and ignores configurables set for other servers.

Service users

p4d -u svcuser

A new type of user intended for authentication of server-to-server communications. Service users have extremely limited access to the depot and do not consume Perforce licenses.

To make logs easier to read, create one service user on your master server for each replica or proxy in your network of Perforce Servers.

Metadata access

p4d -M readonly
db.replication

Replica servers can be configured to automatically reject user commands that attempt to modify metadata (db.* files).

In -M readonly mode, the Perforce Server denies any command that attempts to write to server metadata. In this mode, a command such as p4 sync (which updates the server’s have list) is rejected, but p4 sync -p (which populates a client workspace without updating the server’s have list) is accepted.
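
For example, against a metadata-read-only replica (a sketch assuming the replica address replica:22222 and an illustrative depot path), the first command is rejected because it updates the have list, while the second succeeds:

$ p4 -p replica:22222 sync //depot/project/...
$ p4 -p replica:22222 sync -p //depot/project/...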

Metadata filtering

Replica servers can be configured to filter in (or out) data on client workspaces and file revisions.

You can use the -P serverId option with the p4d command to create a filtered checkpoint based on a serverId.

You can use the -T tableexcludelist option with p4 pull to explicitly filter out updates to entire database tables.

Using the ClientDataFilter:, RevisionDataFilter:, and ArchiveDataFilter: fields of the p4 server form can provide you with far more fine-grained control over what data is replicated. Use the -P serverid option with p4 pull, and specify the Name: of the server whose p4 server spec holds the desired set of filter patterns.
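
For example, a hedged sketch of coarse table filtering in a replica's metadata pull thread (the server name Replica1 and the table list are illustrative assumptions):

$ p4 configure set "Replica1#startup.1=pull -i 1 -T db.have,db.working"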

Depot file access

p4d -D readonly
p4d -D shared
p4d -D ondemand
p4d -D cache
p4d -D none
lbr.replication

Replica servers can be configured to automatically reject user commands that attempt to modify archived depot files (the “library”).

  • In -D readonly mode, the Perforce Server accepts commands that read depot files, but denies commands that write to them. In this mode, p4 describe can display the diffs associated with a changelist, but p4 submit is rejected.
  • In -D ondemand or -D shared mode (the two are synonymous), the Perforce Server accepts commands that read metadata, but does not transfer new files or remove purged files from the master. (p4 pull -u and p4 verify -t, which would otherwise transfer archive files, are disabled.) If a file is not present in the archives, commands that reference that file will fail.

    This mode must be used when a replica directly shares the same physical archives as the target, whether by running on the same machine or via network sharing. This mode can also be used when an external archive synchronization technique, such as rsync, is used for archives.

  • In -D cache mode, the Perforce Server permits commands that reference file content, but does not automatically transfer new files. Files that are purged from the target are removed from the replica when the purge operation is replicated. If a file is not present in the archives, the replica will retrieve it from the target server.
  • In -D none mode, the Perforce Server denies any command that accesses the versioned files that make up the depot. In this mode, a command such as p4 describe changenum is rejected because the diffs displayed with a changelist require access to the versioned files, but p4 describe -s changenum (which describes a changelist without referring to the depot files in order to generate a set of diffs) is accepted.

These options can also be set using lbr.replication.* configurables, described in the "Configurables" appendix of the P4 Command Reference.

Target server

P4TARGET

As with the Perforce Proxy, you can use P4TARGET to specify the master server or another replica server to which a replica server points when retrieving its data.

You can set P4TARGET explicitly, or you can use p4 configure to set a P4TARGET for each named replica server.

A replica server with P4TARGET set must have both the -M and -D options, or their equivalent db.replication and lbr.replication configurables, correctly specified.

Startup commands

startup.1

Use the startup.n (where n is an integer) configurable to automatically spawn multiple p4 pull processes on startup.
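
For example, a minimal sketch for a replica named Replica1 (the same name used in the extended example later in this chapter), spawning one metadata thread and one archive thread, each polling once per second:

$ p4 configure set "Replica1#startup.1=pull -i 1"
$ p4 configure set "Replica1#startup.2=pull -u -i 1"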

State file

statefile

Replica servers track the most recent journal position in a small text file that holds a byte offset. When you stop either the master server or a replica server, the most recent journal position is recorded on the replica in the state file.

Upon restart, the replica reads the state file and picks up where it left off; do not alter this file or its contents. (When the state file is written, a temporary file is used and moved into place, which should preserve the existing state file if something goes wrong when updating it. If the state file is empty or missing, the replica server refetches from the start of its last used journal position.)

By default, the state file is named state and it resides in the replica server’s root directory. You can specify a different file name by setting the statefile configurable.

P4Broker

The Perforce Broker can be used for load balancing, command redirection, and more. See “The Perforce Broker” for details.

Warning

Replication requires uncompressed journals. Starting the master using the p4d -jc -z command breaks replication; use the -Z flag instead to prevent journals from being compressed.

The p4 pull command

Perforce’s p4 pull command provides the most general solution for replication. Use p4 pull to configure a replica server that:

  • replicates versioned files (the ,v files that contain the deltas that are produced when new versions are submitted) unidirectionally from a master server.
  • replicates server metadata (the information contained in the db.* files) unidirectionally from a master server.
  • uses the startup.n configurable to automatically spawn as many p4 pull processes as required.

    A common configuration for a warm standby server is one in which one (and only one) p4 pull process is spawned to replicate the master server’s metadata, and multiple p4 pull -u processes are spawned to run in parallel and continually update the replica’s copy of the master server’s versioned files.

  • The startup.n configurables are processed sequentially. Processing stops at the first gap in the numerical sequence; any commands after a gap are ignored.

Although you can run p4 pull from the command line for testing and debugging purposes, it’s most useful when controlled by the startup.n configurables, and in conjunction with named servers, service users, and centrally-managed configurations.

The --batch option to the p4 pull command specifies the number of files a pull thread should process in a single request. The default value of 1 is usually adequate. For high-latency configurations, a larger value might improve archive transfer speed for large numbers of small files. (Use of this option requires that both master and replica be at version 2015.2 or higher.)

Setting the rpl.compress configurable allows you to compress journal record data that is transmitted using p4 pull.
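
For example (a sketch; the server name Replica1 and the value shown are illustrative assumptions; consult the Configurables appendix of the P4 Command Reference to choose the compression level appropriate for your connections):

$ p4 configure set Replica1#rpl.compress=1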

Note

If you are running a replica with monitoring enabled and you have not configured the monitor table to be disk-resident, you can run the following command to get more precise information about what pull threads are doing. (Remember to set monitor.lsof).

$ p4 monitor show -sB -la -L

Command output would look like this:

31701 B uservice-edge3 00:07:24 pull sleeping 1000 ms
    [server.locks/replica/49,d/pull(W)]

Server names and P4NAME

To set a Perforce server name, set the P4NAME environment variable or specify the -In command line option to p4d when you start the server. Assigning names to servers is essential for configuring replication. Assigning server names permits most of the server configuration data to be stored in Perforce itself, as an alternative to using startup options or environment values to specify configuration details. In replicated environments, named servers are a necessity, because p4 configure settings are replicated from the master server along with other Perforce metadata.

For example, if you start your master server as follows:

$ p4d -r /p4/master -In master -p master:11111

And your replica server as follows:

$ p4d -r /p4/replica -In Replica1 -p replica:22222

You can use p4 configure on the master to control settings on both the master and the replica, because configuration settings are part of a Perforce server’s metadata and are replicated accordingly.

For example, if you issue the following commands on the master server:

$ p4 -p master:11111 configure set master#monitor=2
$ p4 -p master:11111 configure set Replica1#monitor=1

After the configuration data has been replicated, the two servers have different server monitoring levels. That is, if you run p4 monitor show against master:11111, you see both active and idle processes, because for the server named master, the monitor configurable is set to 2. If you run p4 monitor show against replica:22222, only active processes are shown, because for the Replica1 server, monitor is 1.

Because the master (and each replica) is likely to have its own journal and checkpoint, it is good practice to use the journalPrefix configurable (for each named server) to ensure that their prefixes are unique:

$ p4 configure set master#journalPrefix=/master_checkpoints/master
$ p4 configure set Replica1#journalPrefix=/replica_checkpoints/replica

For more information, see:

http://answers.perforce.com/articles/KB_Article/Master-and-Replica-Journal-Setup

Server IDs: the p4 server and p4 serverid commands

You can further define a set of services offered by a Perforce server by using the p4 server and p4 serverid commands. Configuring the following servers requires the use of a server spec:

  • Commit server: central server in a distributed installation
  • Edge server: node in a distributed installation
  • Build server: replica that supports build farm integration
  • Depot master: commit server with automated failover
  • Depot standby: standby replica of the depot master
  • Standby server: read-only replica that uses p4 journalcopy
  • Forwarding standby: forwarding replica that uses p4 journalcopy

The p4 serverid command creates (or updates) a small text file named server.id. The server.id file always resides in a server’s root directory.

The p4 server command can be used to maintain a list of all servers known to your installation. It can also be used to create a unique server ID that can be passed to the p4 serverid command, and to define the services offered by any server that, upon startup, reads that server ID from a server.id file. The p4 server command can also be used to set a server’s name (P4NAME).

Service users

There are three types of Perforce users: standard users, operator users, and service users. A standard user is a traditional user of Perforce, an operator user is intended for human or automated system administrators, and a service user is used for server-to-server authentication, as part of the replication process.

Service users are useful for remote depots in single-server environments, but are required for multi-server and distributed environments.

Create a service user for each master, replica, or proxy server that you control. Doing so greatly simplifies the task of interpreting your server logs. Service users can also help you improve security, by requiring that your edge servers and other replicas have valid login tickets before they can communicate with the master or commit server. Service users do not consume Perforce licenses.

A service user can run only the following commands:

  • p4 dbschema
  • p4 export
  • p4 login
  • p4 logout
  • p4 passwd
  • p4 info
  • p4 user

To create a service user, run the command:

p4 user -f service1

The standard user form is displayed. Enter a new line to set the new user’s Type: to be service; for example:

User:      service1
Email:     services@example.com
FullName:  Service User for Replica Server 1
Type:      service

By default, the output of p4 users omits service users. To include service users, run p4 users -a.

Tickets and timeouts for service users

A newly-created service user that is not a member of any groups is subject to the default ticket timeout of 12 hours. To avoid issues that arise when a service user’s ticket ceases to be valid, create a group for your service users and assign it an extremely long timeout, or set the timeout to unlimited. On the master server, issue the following command:

p4 group service_users

Add service1 to the list of Users: in the group, and set the Timeout: and PasswordTimeout: values to a large value or to unlimited.

Group:            service_users
Timeout:          unlimited
PasswordTimeout:  unlimited
Subgroups:
Owners:
Users:
        service1

Important

Service users must have a ticket created with the p4 login command for replication to work.
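
For example (a sketch assuming the service user service1 and a master server at master:11111, as in the examples in this chapter):

$ p4 -u service1 -p master:11111 login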

Permissions for service users

On the master server, use p4 protect to grant the service user super permission. Service users are tightly restricted in the commands they can run, so granting them super permission is safe.
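
For example, a protections table entry granting super access to the service user (a sketch; service1 is the example user created above):

super user service1 * //...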

Server options to control metadata and depot access

When you start a replica that points to a master server with P4TARGET, you must specify both the -M (metadata access) and -D (depot access) options, or set the db.replication (access to metadata) and lbr.replication (access to the depot’s library of versioned files) configurables, to control which Perforce commands are permitted or rejected by the replica server.

P4TARGET

Set P4TARGET to the fully-qualified domain name or IP address of the master server from which a replica server is to retrieve its data. You can set P4TARGET explicitly, specify it on the p4d command line with the -t protocol:host:port option, or use p4 configure to set a P4TARGET for each named replica server. See below for the available protocol options.

If you specify a target, p4d examines its configuration for startup.n commands: if no valid p4 pull commands are found, p4d runs and waits for the user to manually start a p4 pull command. If you omit a target, p4d assumes the existence of an external metadata replication source such as p4 replicate. See p4 pull vs. p4 replicate for details.
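
For example, a hedged sketch of starting a replica with its target specified on the command line (the host names, ports, and the Replica1 name follow the extended example later in this chapter):

$ p4d -r /p4/replica -In Replica1 -t master:11111 -p replica:22222 -M readonly -D readonly -d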

The protocol prefix determines the transport behavior:

  • <not set> or tcp:

    Use tcp4: behavior, but if the address is numeric and contains two or more colons, assume tcp6:. If the net.rfc3484 configurable is set, allow the OS to determine which transport is used.

  • tcp4:

    Listen on/connect to an IPv4 address/port only.

  • tcp6:

    Listen on/connect to an IPv6 address/port only.

  • tcp46:

    Attempt to listen on/connect to an IPv4 address/port. If this fails, try IPv6.

  • tcp64:

    Attempt to listen on/connect to an IPv6 address/port. If this fails, try IPv4.

  • ssl:

    Use ssl4: behavior, but if the address is numeric and contains two or more colons, assume ssl6:. If the net.rfc3484 configurable is set, allow the OS to determine which transport is used.

  • ssl4:

    Listen on/connect to an IPv4 address/port only, using SSL encryption.

  • ssl6:

    Listen on/connect to an IPv6 address/port only, using SSL encryption.

  • ssl46:

    Attempt to listen on/connect to an IPv4 address/port. If this fails, try IPv6. After connecting, require SSL encryption.

  • ssl64:

    Attempt to listen on/connect to an IPv6 address/port. If this fails, try IPv4. After connecting, require SSL encryption.

P4TARGET can be the host’s name or its IP address; both IPv4 and IPv6 addresses are supported. For the listen setting, you can use the * wildcard to refer to all IP addresses, but only when you are not using CIDR notation.

If you use the * wildcard with an IPv6 address, you must enclose the entire IPv6 address in square brackets. For example, [2001:db8:1:2:*] is equivalent to [2001:db8:1:2::]/64. Best practice is to use CIDR notation, surround IPv6 addresses with square brackets, and to avoid the * wildcard.

Server startup commands

You can configure a Perforce Server to automatically run commands at startup using the p4 configure command as follows:

p4 configure set "servername#startup.n=command"

Where n represents the order in which the commands are executed: the command specified for startup.1 runs first, then the command for startup.2, and so on. The only valid startup command is p4 pull.

p4 pull vs. p4 replicate

Perforce also supports a more limited form of replication based on the p4 replicate command. This command does not replicate file content, but supports filtering of metadata on a per-table basis.

For more information about p4 replicate, see "Perforce Metadata Replication" in the Perforce Knowledge Base:

http://answers.perforce.com/articles/KB_Article/Perforce-Metadata-Replication

Enabling SSL support

To encrypt the connection between a replica server and its end users, the replica must have its own valid private key and certificate pair in the directory specified by its P4SSLDIR environment variable. Certificate and key generation and management for replica servers works the same as it does for the (master) server. See Enabling SSL support. The users' Perforce applications must be configured to trust the fingerprint of the replica server.

To encrypt the connection between a replica server and its master, the replica must be configured so as to trust the fingerprint of the master server. That is, the user that runs the replica p4d (typically a service user) must create a P4TRUST file (using p4 trust) that recognizes the fingerprint of the master Perforce Server.

The P4TRUST variable specifies the path to the SSL trust file. You must set this environment variable in the following cases:

  • for a replica that needs to connect to an SSL-enabled master server, or
  • for an edge server that needs to connect to an SSL-enabled commit server.
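
For example, a minimal sketch of establishing trust from the replica host (the ssl:master:1666 address, the service user name, and the P4TRUST path are illustrative assumptions):

$ export P4TRUST=/p4/replica/.p4trust
$ p4 -u service -p ssl:master:1666 trust -y
$ p4 -u service -p ssl:master:1666 login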

Uses for replication

Here are some situations in which replica servers can be useful.

  • For a failover or warm standby server, replicate both server metadata and versioned files by running two p4 pull commands in parallel. Each replica server requires one or more p4 pull -u instances to replicate versioned files, and a single p4 pull to replicate the metadata.

    If you are using p4 pull for both metadata and p4 pull -u for versioned files, start your replica server with p4d -t protocol:host:port -Mreadonly -Dreadonly. Commands that require read-only access to server metadata and to depot files will succeed. Commands that attempt to write to server metadata and/or depot files will fail gracefully.

    For a detailed example of this configuration, see Configuring a read-only replica.

  • To configure an offline checkpointing or reporting server, only the master server’s metadata needs to be replicated; versioned files do not need to be replicated.

    To use p4 pull for metadata-only replication, start the server with p4d -t protocol:host:port -Mreadonly -Dnone. You must specify a target. Do not configure the server to spawn any p4 pull -u commands that would replicate the depot files.

    In either scenario, commands that require read-only access to server metadata will succeed and commands that attempt to write to server metadata or attempt to access depot files will be blocked by the replica server.

Replication and protections

To apply the IP address of a replica user’s workstation against the protections table, prepend the string proxy- to the workstation’s IP address.

For instance, consider an organization with a remote development site with workstations on a subnet of 192.168.10.0/24. The organization also has a central office where local development takes place; the central office exists on the 10.0.0.0/8 subnet. A Perforce service resides in the 10.0.0.0/8 subnet, and a replica resides in the 192.168.10.0/24 subnet. Users at the remote site belong to the group remotedev, and occasionally visit the central office. Each subnet also has a corresponding set of IPv6 addresses.

To ensure that members of the remotedev group use the replica while working at the remote site, but do not use the replica when visiting the local site, add the following lines to your protections table:

list    group    remotedev     192.168.10.0/24              -//...
list    group    remotedev     [2001:db8:16:81::]/48        -//...
write   group    remotedev     proxy-192.168.10.0/24         //...
write   group    remotedev     proxy-[2001:db8:16:81::]/48   //...
list    group    remotedev     proxy-10.0.0.0/8             -//...
list    group    remotedev     proxy-[2001:db8:1008::]/32   -//...
write   group    remotedev     10.0.0.0/8                    //...
write   group    remotedev     [2001:db8:1008::]/32          //...

The first line denies list access to all users in the remotedev group if they attempt to access Perforce without using the replica from their workstations in the 192.168.10.0/24 subnet. The second line denies access in identical fashion when access is attempted from the IPv6 [2001:db8:16:81::]/48 subnet.

The third line grants write access to all users in the remotedev group if they are using the replica and are working from the 192.168.10.0/24 subnet. Users of workstations at the remote site must use the replica. (The replica itself does not have to be in this subnet; for example, it could be at 192.168.20.0.) The fourth line grants access in identical fashion when access is attempted from the IPv6 [2001:db8:16:81::]/48 subnet.

Similarly, the fifth and sixth lines deny list access to remotedev users when they attempt to use the replica from workstations on the central office’s subnets (10.0.0.0/8 and [2001:db8:1008::]/32). The seventh and eighth lines grant write access to remotedev users who access the Perforce server directly from workstations on the central office’s subnets. When visiting the local site, users from the remotedev group must access the Perforce server directly.

When the Perforce service evaluates protections table entries, the dm.proxy.protects configurable is also evaluated.

dm.proxy.protects defaults to 1, which causes the proxy- prefix to be prepended to all client host addresses that connect via an intermediary (proxy, broker, replica, or edge server), indicating that the connection is not direct.

Setting dm.proxy.protects to 0 removes the proxy- prefix and allows you to write a single set of protection entries that apply both to directly-connected clients and to those that connect via an intermediary. This is more convenient, but less secure if it matters whether a connection was made via an intermediary. If you use this setting, all intermediaries must be at release 2012.1 or higher.
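
For example (a sketch; this sets the configurable globally rather than for a single named server):

$ p4 configure set dm.proxy.protects=0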

How replica types handle requests

One way of explaining the differences between replica types is to describe how each type handles user requests; whether the server processes them locally, whether it forwards them, or whether it returns an error. The following table describes these differences.

  • Read-only commands include p4 files, p4 filelog, p4 fstat, and p4 user -o.
  • Work-in-progress commands include p4 sync, p4 edit, p4 add, p4 delete, p4 integrate, p4 resolve, p4 revert, p4 diff, p4 shelve, p4 unshelve, p4 submit, and p4 reconcile.
  • Global update commands include p4 user, p4 group, p4 branch, p4 label, p4 depot, p4 stream, p4 protect, p4 triggers, p4 typemap, p4 server, p4 configure, and p4 counter.

Replica type                              Read-only     p4 sync,      Work-in-progress   Global update
                                          commands      p4 client     commands           commands

Depot standby, standby, replica           Yes, local    Error         Error              Error
Forwarding standby, forwarding replica    Yes, local    Forward       Forward            Forward
Build server                              Yes, local    Yes, local    Error              Error
Edge server, workspace server             Yes, local    Yes, local    Yes, local         Forward
Standard server, depot master,
  commit server                           Yes, local    Yes, local    Yes, local         Yes, local

Configuring a read-only replica

To support warm standby servers, a replica server requires an up-to-date copy of both the master server’s metadata and its versioned files.

Note

Replication is asynchronous, and a replicated server is not recommended as the sole means of backup or disaster recovery. Maintaining a separate set of database checkpoints and depot backups (whether on tape, remote storage, or other means) is advised. Disaster recovery and failover strategies are complex and site-specific. Perforce Consultants are available to assist organizations in the planning and deployment of disaster recovery and failover strategies. For details, see:

http://www.perforce.com/services/consulting_overview

The following extended example configures a replica as a warm standby server for an existing Perforce Server with some data in it. For this example, assume that:

  • Your master server is named Master and is running on a host called master, using port 11111, and its server root directory is /p4/master
  • Your replica server will be named Replica1 and will be configured to run on a host machine named replica, using port 22222, and its root directory will be /p4/replica.
  • The service user name is service.

Note

You cannot define P4NAME using the p4 configure command, because a server must know its own name to use values set by p4 configure.

You cannot define P4ROOT using the p4 configure command, to avoid the risk of specifying an incorrect server root.

Master server setup

To define the behavior of the replica, you enter configuration information into the master server’s db.config file using the p4 configure set command. Configure the master server first; its settings will be replicated to the replica later.

To configure the master, log in to Perforce as a superuser and perform the following steps:

  1. To set the server named Replica1 to use master:11111 as the master server to pull metadata and versioned files, issue the command:

    $ p4 -p master:11111 configure set Replica1#P4TARGET=master:11111

    Perforce displays the following response:

    For server Replica1, configuration variable 'P4TARGET' set to 'master:11111'

    Note

    To avoid confusion when working with multiple servers that appear identical in many ways, use the -u option to specify the superuser account and -p to explicitly specify the master Perforce server’s host and port.

    These options have been omitted from this example for simplicity. In a production environment, specify the host and port on the command line.

  2. Set the Replica1 server to save the replica server’s log file using a specified file name. Keeping the log names unique prevents problems when collecting data for debugging or performance tracking purposes.

    $ p4 configure set Replica1#P4LOG=replica1Log.txt
  3. Set the Replica1 server configurable to 1, which is equivalent to specifying the -vserver=1 server startup option:

    $ p4 configure set Replica1#server=1
  4. To enable process monitoring, set Replica1’s monitor configurable to 1:

    $ p4 configure set Replica1#monitor=1
  5. To handle the Replica1 replication process, configure the following three startup.n commands. (When passing multiple items separated by spaces, you must wrap the entire set value in double quotes.)

    The first startup process sets p4 pull to poll once every second for journal data only:

    $ p4 configure set "Replica1#startup.1=pull -i 1"

    The next two settings configure the server to spawn two p4 pull threads at startup, each of which polls once per second for archive data transfers.

    $ p4 configure set "Replica1#startup.2=pull -u -i 1"
    $ p4 configure set "Replica1#startup.3=pull -u -i 1"

    Each p4 pull -u command creates a separate thread for replicating archive data. Heavily-loaded servers might require more threads, if archive data transfer begins to lag behind the replication of metadata. To determine if you need more p4 pull -u processes, read the contents of the rdb.lbr table, which records the archive data transferred from the master Perforce server to the replica.

    To display the contents of this table when a replica is running, run:

    $ p4 -p replica:22222 pull -l

    Likewise, if you only need to know how many file transfers are active or pending, use p4 -p replica:22222 pull -l -s.

    If p4 pull -l -s indicates a large number of pending transfers, consider adding more p4 pull -u startup.n commands to address the problem.

    If a specific file transfer is failing repeatedly (perhaps due to unrecoverable errors on the master), you can cancel the pending transfer with p4 pull -d -f file -r rev, where file and rev refer to the file and revision number.

  6. Set the db.replication (metadata access) and lbr.replication (depot file access) configurables to readonly:

    $ p4 configure set Replica1#db.replication=readonly
    $ p4 configure set Replica1#lbr.replication=readonly

    Because this replica server is intended as a warm standby (failover) server, both the master server’s metadata and its library of versioned depot files are being replicated. When the replica is running, users of the replica will be able to run commands that access both metadata and the server’s library of depot files.

  7. Create the service user:

    $ p4 user -f service

    The user specification for the service user opens in your default editor. Add the following line to the user specification:

    Type: service

    Save the user specification and exit your default editor.

    By default, the service user is granted the same 12-hour login timeout as standard users. To prevent the service user’s ticket from timing out, create a group with a long timeout on the master server. In this example, the Timeout: field is set to two billion seconds, approximately 63 years:

    $ p4 group service_group
    Users: service
    Timeout: 2000000000

    For more details, see Tickets and timeouts for service users.

  8. Set the service user protections to super in your protections table. (See Permissions for service users.) It is good practice to set the security level of all your Perforce Servers to at least 1 (preferably to 3, so as to require a strong password for the service user, and ideally to 4, to ensure that only authenticated service users may attempt to perform replica or remote depot transactions.)

    $ p4 configure set security=4
    $ p4 passwd
  9. Set the Replica1 configurable for the serviceUser to service.

    $ p4 configure set Replica1#serviceUser=service

    This step configures the replica server to authenticate itself to the master server as the service user; this is equivalent to starting p4d with the -u service option.

  10. If the user running the replica server does not have a home directory, or if the directory where the default .p4tickets file is typically stored is not writable by the replica’s Perforce server process, set the replica P4TICKETS value to point to a writable ticket file in the replica’s Perforce server root directory:

    $ p4 configure set "Replica1#P4TICKETS=/p4/replica/.p4tickets"

Creating the replica

To configure and start a replica server, perform the following steps:

  1. Bootstrap the replica server by checkpointing the master server, and restoring that checkpoint to the replica:

    $ p4 admin checkpoint

    (For a new setup, we can assume the checkpoint file is named checkpoint.1)

  2. Move the checkpoint to the replica server’s P4ROOT directory and replay the checkpoint:

    $ p4d -r /p4/replica -jr $P4ROOT/checkpoint.1
  3. Copy the versioned files from the master server to the replica.

    Versioned files include both text (in RCS format, ending with ,v) and binary files (directories of individual binary files, each directory ending with ,d). Ensure that you copy the text files in a manner that correctly translates line endings for the replica host’s filesystem.

    If your depots are specified using absolute paths on the master, use the same paths on the replica. (Or use relative paths in the Map: field for each depot, so that versioned files are stored relative to the server’s root.)

  4. To create a valid ticket file, use p4 login to connect to the master server and obtain a ticket on behalf of the replica server’s service user. On the machine that will host the replica server, run:

    $ p4 -u service -p master:11111 login

    Then move the ticket to the location that holds the P4TICKETS file for the replica server’s service user.
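
    For example, a hedged sketch (assuming the login above wrote to the default ~/.p4tickets, and that Replica1#P4TICKETS was set to /p4/replica/.p4tickets as in the previous section):

    $ cp ~/.p4tickets /p4/replica/.p4tickets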

At this point, your replica server is configured to contact the master server and start replication. Specifically:

  • A service user (service) in a group (service_group) with a long ticket timeout
  • A valid ticket for the replica server’s service user (from p4 login)
  • A replicated copy of the master server’s db.config, holding the following preconfigured settings applicable to any server with a P4NAME of Replica1, specifically:

    • A specified service user (named service), which is equivalent to specifying -u service on the command line
    • A target server of master:11111, which is equivalent to specifying -t master:11111 on the command line
    • Both db.replication and lbr.replication set to readonly, which is equivalent to specifying -M readonly -D readonly on the command line
    • A series of p4 pull commands configured to run when the master server starts

Starting the replica

To name your server Replica1, set P4NAME or specify the -In option and start the replica as follows:

$ p4d -r /p4/replica -In Replica1 -p replica:22222 -d

When the replica starts, all of the master server’s configuration information is read from the replica’s copy of the db.config table (which you copied earlier). The replica then spawns three p4 pull threads: one to poll the master server for metadata, and two to poll the master server for versioned files.

Note

The p4 info command displays information about replicas and service fields for untagged output as well as tagged output.

Testing the replica

Testing p4 pull

To confirm that the p4 pull commands (specified in Replica1’s startup.n configurations) are running, issue the following command:

$ p4 -u super -p replica:22222 monitor show -a
18835 R service    00:04:46 pull -i 1
18836 R service    00:04:46 pull -u -i 1
18837 R service    00:04:46 pull -u -i 1
18926 R super 00:00:00 monitor show -a

If you need to stop replication for any reason, use the p4 monitor terminate command:

$ p4 -u super -p replica:22222 monitor terminate 18837
 process '18837' marked for termination 

To restart replication, either restart the Perforce server process, or manually restart the replication command:

$ p4 -u super -p replica:22222 pull -u -i 1

If the p4 pull and/or p4 pull -u processes are terminated, read-only commands will continue to work for replica users as long as the replica server’s p4d is running.

Testing file replication

Create a new file under your workspace view:

$ echo "hello world" > myfile

Mark the file for add:

$ p4 -p master:11111 add myfile

And submit the file:

$ p4 -p master:11111 submit -d "testing replication"

Wait a few seconds for the pull commands on the replica to run, then check the replica for the replicated file:

$ p4 -p replica:22222 print //depot/myfile
//depot/myfile#1 - add change 1 (text)
hello world

If a file transfer is interrupted for any reason, and a versioned file is not present when requested by a user, the replica server silently retrieves the file from the master.

Note

Replica servers in -M readonly -D readonly mode will retrieve versioned files from master servers even if started without a p4 pull -u command to replicate versioned files to the replica. Such servers act as "on-demand" replicas, as do servers running in -M readonly -D ondemand mode or with their lbr.replication configurable set to ondemand.

Administrators: be aware that creating an on-demand replica of this sort can still affect server performance or resource consumption, for example, if a user enters a command such as p4 print //..., which reads every file in the depot.

Verifying the replica

When you copied the versioned files from the master server to the replica server, you relied on the operating system to transfer the files. To determine whether data was corrupted in the process, run p4 verify on the replica server:

$ p4 verify //...

Any errors that are present on the replica but not on the master indicate corruption of the data in transit or while being written to disk during the original copy operation. (Run p4 verify on a regular basis, because a failover server’s storage is just as vulnerable to corruption as a production server.)

Using the replica

You can perform all normal operations against your master server (p4 -p master:11111 command). To reduce the load on the master server, direct reporting (read-only) commands to the replica (p4 -p replica:22222 command). Because the replica is running in -M readonly -D readonly mode, commands that read both metadata and depot file contents are available, and reporting commands (such as p4 annotate, p4 changes, p4 filelog, p4 diff2, p4 jobs, and others) work normally. However, commands that update the server’s metadata or depot files are blocked.
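
For example (a sketch; the depot paths are illustrative):

$ p4 -p replica:22222 changes -m 5 //depot/project/...
$ p4 -p replica:22222 filelog //depot/project/myfile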

Commands that update metadata

Some scenarios are relatively straightforward: consider a command such as p4 sync. A plain p4 sync fails, because whenever you sync your workspace, the Perforce Server must update its metadata (the "have" list, which is stored in the db.have table). Instead, use p4 sync -p to populate a workspace without updating the have list:

$ p4 -p replica:22222 sync -p //depot/project/...@1234

This operation succeeds because it does not update the server’s metadata.

Some commands affect metadata in more subtle ways. For example, many Perforce commands update the last-update time that is associated with a specification (for example, a user or client specification). Attempting to use such commands on replica servers produces errors unless you use the -o option. For example, p4 client (which updates the Update: and Access: fields of the client specification) fails:

$ p4 -p replica:22222 client replica_client
Replica does not support this command.

However, p4 client -o works:

$ p4 -p replica:22222 client -o replica_client
(client spec is output to STDOUT)

If a command is blocked due to an implicit attempt to write to the server’s metadata, consider workarounds such as those described above. (Some commands, like p4 submit, always fail, because they attempt to write to the replica server’s depot files; these commands are blocked by the -D readonly option.)

Using the Perforce Broker to redirect commands

You can use the Perforce Broker with a replica server to redirect read-only commands to replica servers. This approach enables all your users to connect to the same protocol:host:port setting (the broker). In this configuration, the broker is configured to transparently redirect key commands to whichever Perforce Server is appropriate to the task at hand.

For an example of such a configuration, see "Using P4Broker With Replica Servers" in the Perforce Knowledge Base:

http://answers.perforce.com/articles/KB_Article/Using-P4Broker-With-Replica-Servers

For more information about the Perforce Broker, see “The Perforce Broker”.

Upgrading replica servers

It is best practice to upgrade any server instance replicating from a master server first. If replicas are chained together, start at the replica that is furthest downstream from the master, and work upstream towards the master server. Keep downstream replicas stopped until the server immediately upstream is upgraded.

Note

A significant change in release 2013.3 affects how metadata is stored in db.* files; despite this change, the database schema and the format of the checkpoint and journal files remain unchanged between 2013.2 and 2013.3.

Consequently, in this one case (of upgrades between 2013.2 and 2013.3), it is sufficient to stop the replica until the master is upgraded, but the replica (and any replicas downstream of it) must be upgraded to at least 2013.2 before a 2013.3 master is restarted.

When upgrading between 2013.2 (or lower) and 2013.3 (or higher), it is recommended to wait for all archive transfers to end before shutting down the replica and commencing the upgrade. You must manually delete the rdb.lbr file in the replica server’s root before restarting the replica.

For more information, see "Upgrading Replica Servers" in the Perforce Knowledge Base:

http://answers.perforce.com/articles/KB_Article/Upgrading-Replica-Servers/

Configuring a forwarding replica

A forwarding replica offers a blend of the functionality of the Perforce Proxy with the improved performance of a replica. The following considerations are relevant:

The Perforce Proxy is easier to configure and maintain, but caches only file content; it holds no metadata. A forwarding replica caches both file content and metadata, and can therefore process many commands without requesting additional data from the master server. This behavior enables a forwarding replica to offload more tasks from the master server and provides improved performance. The trade-off is that a forwarding replica requires more machine provisioning and administrative attention than a proxy.

A read-only replica rejects commands that update metadata; a forwarding replica does not reject such commands, but forwards them to the master server for processing, and then waits for the metadata update to be processed by the master server and returned to the forwarding replica. Although users connected to the forwarding replica cannot write to the replica’s metadata, they nevertheless receive a consistent view of the database.

If you are auditing server activity, each of your forwarding replica servers must have its own P4AUDIT log configured.
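
For example, a hedged sketch (the server name fwd-replica matches the example below; the log path is an assumption):

$ p4 configure set fwd-replica#P4AUDIT=/p4/fwd-replica/audit.log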

Configuring the master server

The following example assumes an environment with a regular server named master, and a forwarding replica server named fwd-replica on a host named forward.

  1. Start by configuring a read-only replica for warm standby; see Configuring a read-only replica for details. (Instead of Replica1, use the name fwd-replica.)
  2. On the master server, configure the forwarding replica as follows:

    $ p4 server fwd-1667

    The following form is displayed:

    ServerID:       fwd-1667
    Name:           fwd-replica
    Type:           server
    Services:       forwarding-replica
    Address:        tcp:forward:1667
    Description:
            Forwarding replica pointing to master:1666

Configuring the forwarding replica

  1. On the replica machine, assign the replica server a serverID:

    $ p4 serverid fwd-1667

    When the replica server with the serverID: of fwd-1667 (which was previously assigned the Name: of fwd-replica) pulls its configuration from the master server, it will behave as a forwarding replica.

  2. On the replica machine, restart the replica server:

    $ p4 admin restart

Configuring a build farm server

Continuous integration and other similar development processes can impose a significant workload on your Perforce infrastructure. Automated build processes frequently access the Perforce server to monitor recent changes and retrieve updated source files; their client workspace definitions and associated have lists also occupy storage and memory on the server. With a build farm server, you can offload the workload of the automated build processes to a separate machine, and ensure that your main Perforce server’s resources are available to your users for their normal day-to-day tasks.

Note

Build farm servers were implemented in Perforce server release 2012.1. With the implementation of edge servers in 2013.2, we now recommend that you use an edge server instead of a build farm server. As discussed in “Commit-edge Architecture”, edge servers offer all the functionality of build farm servers and yet offload more work from the main server and improve performance, with the additional flexibility of being able to run write commands as part of the build process.

A Perforce Server intended for use as a build farm must, by definition:

  • Permit the creation and configuration of client workspaces
  • Permit those workspaces to be synced

One issue with implementing a build farm rather than a read-only replica is that under Perforce, both of those operations involve writes to metadata: in order to use a client workspace in a build environment, the workspace must contain some information (even if nothing more than the client workspace root) specific to the build environment, and in order for a build tool to efficiently sync a client workspace, a build server must be able to keep some record of which files have already been synced.

To address these issues, build farm replicas host their own local copies of certain metadata: in addition to the Perforce commands supported in a read-only replica environment, build farm replicas support the p4 client and p4 sync commands when applied to workspaces that are bound to that replica.

If you are auditing server activity, each of your build farm replica servers must have its own P4AUDIT log configured.

Configuring the master server

The following example assumes an environment with a regular server named master, and a build farm replica server named buildfarm1 on a host named builder.

  1. Start by configuring a read-only replica for warm standby; see Configuring a read-only replica for details. (That is, create a read-only replica named buildfarm1.)
  2. On the master server, configure the master server as follows:

    $ p4 server master-1666

    The following form is displayed:

# A Perforce Server Specification.
#
#  ServerID:    The server identifier.
#  Type:        The server type: server/broker/proxy.
#  Name:        The P4NAME used by this server (optional).
#  Address:     The P4PORT used by this server (optional).
#  Description: A short description of the server (optional).
#  Services:    Services provided by this server, one of:
#          standard: standard Perforce server
#          replica: read-only replica server
#          broker: p4broker process
#          proxy: p4p caching proxy
#          commit-server: central server in a distributed installation
#          edge-server: node in a distributed installation
#          forwarding-replica: replica which forwards update commands
#          build-server: replica which supports build automation
#          P4AUTH: server which provides central authentication
#          P4CHANGE: server which provides central change numbers
#
# Use 'p4 help server' to see more about server ids and services.

ServerID:       master-1666
Name:           master-1666
Type:           server
Services:       standard
Address:        tcp:master:1666
Description:
        Master server - regular development work
  3. Create the master server’s server.id file. On the master server, run the following command:

    $ p4 -p master:1666 serverid master-1666
  4. Restart the master server.

    On startup, the master server reads its server ID of master-1666 from its server.id file. It takes on the P4NAME of master and uses the configurables that apply to a P4NAME setting of master.

Configuring the build farm replica

  1. On the master server, configure the build farm replica server as follows:

    $ p4 server builder-1667

    The following form is displayed:

    ServerID:       builder-1667
    Name:           builder-1667
    Type:           server
    Services:       build-server
    Address:        tcp:builder:1667
    Description:
            Build farm - bind workspaces to builder-1667
            and use a port of tcp:builder:1667
  2. Create the build farm replica server’s server.id file. On the replica server (not the master server), run the following command

    $ p4 -p builder:1667 serverid builder-1667
  3. Restart the replica server.

    On startup, the replica build farm server reads its server ID of builder-1667 from its server.id file.

    Because the server registry is automatically replicated from the master server to all replica servers, the restarted build farm server takes on the P4NAME of buildfarm1 and uses the configurables that apply to a P4NAME setting of buildfarm1.

    In this example, the build farm server also acknowledges the build-server setting in the Services: field of its p4 server form.

Binding workspaces to the build farm replica

At this point, there should be two servers in operation: a master server named master, with a server ID of master-1666, and a build-server replica named buildfarm1, with a server ID of builder-1667.

  1. Bind client workspaces to the build farm server.

    Because this server is configured to offer the build-server service, it maintains its own local copy of the list of client workspaces (db.domain and db.view.rp) and their respective have lists (db.have.rp).

    On the replica server, create a client workspace with p4 client:

    $ p4 -c build0001 -p builder:1667 client build0001

    When creating a new workspace on the build farm replica, you must ensure that your current client workspace has a ServerID that matches that required by builder:1667. Because workspace build0001 does not yet exist, you must manually specify build0001 as the current client workspace with the -c clientname option and simultaneously supply build0001 as the argument to the p4 client command. For more information, see:

    http://answers.perforce.com/articles/KB_Article/Build-Farm-Client-Management

    When the p4 client form appears, set the ServerID: field to builder-1667.

  2. Sync the bound workspace

    Because the client workspace build0001 is bound to builder-1667, users on the master server are unaffected, but users on the build farm server can not only edit its specification, they can also sync it:

    $ export P4PORT=builder:1667
    $ export P4CLIENT=build0001
    $ p4 sync

    The replica’s have list is updated, and does not propagate back to the master. Users of the master server are unaffected.

In a real-world scenario, your organization’s build engineers would re-configure your site’s build system to use the new server by resetting their P4PORT to point directly at the build farm server. Even in an environment in which continuous integration and automated build tools create a client workspace (and sync it) for every change submitted to the master server, performance on the master would be unaffected.

In a real-world scenario, performance on the master would likely improve for all users, as the number of read and write operations on the master server’s database would be substantially reduced.

If there are database tables that you know your build farm replica does not require, consider using the -F and -T filter options to p4 pull. Also consider specifying the ArchiveDataFilter:, RevisionDataFilter: and ClientDataFilter: fields of the replica’s p4 server form.

If your automation load should exceed the capacity of a single machine, you can configure additional build farm servers. There is no limit to the number of build farm servers that you may operate in your installation.

Configuring a replica with shared archives

Normally, a Perforce replica service retrieves its metadata and file archives on the user-defined pull interval, for example p4 pull -i 1. When the lbr.replication configurable is set to ondemand or shared (the two are synonymous), metadata is retrieved on the pull interval and archive files are retrieved only when requested by a client; new files are not automatically transferred, nor are purged files removed.

When a replica server is configured to directly share the same physical archive files as the master server, whether the replica and master are running on the same machine or via network shared storage, the replica simply accesses the archives directly without requiring the master to send the archive files to the replica. This can form part of a High Availability configuration.

Warning

When archive files are directly shared between a replica and master server, the replica must have lbr.replication set to ondemand or shared, or archive corruption may occur.

To configure a replica to share archive files with a master, perform the following steps:

  1. Ensure that the clocks for the master and replica servers are synchronized.

    Nothing needs to be done if the master and replica servers are hosted on the same machine, since they share the same system clock.

    Synchronizing clocks is a system administration task that typically involves using a Network Time Protocol client to synchronize an operating system’s clock with a time server on the Internet, or a time server you maintain for your own network.

    See http://support.ntp.org/bin/view/Support/InstallingNTP for details.

  2. If you have not already done so, configure the replica server as a forwarding replica.

    See Configuring a read-only replica.

  3. Set lbr.replication.

    For example: p4 configure set REP13-1#lbr.replication=ondemand

  4. Restart the replica, specifying the share archive location for the replica’s root.
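    A minimal sketch of this step, assuming a replica host reachable as replica on port 1668 and shared archives mounted at /mnt/p4shared (the host name, port, and path are all hypothetical; on Windows, restart the Perforce service with the updated root instead):

    $ p4 -p replica:1668 admin stop        # stop the running replica
    $ p4d -r /mnt/p4shared -p 1668 -d      # restart it with the shared location as its root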

Once these steps have been completed, the following conditions are in effect:

  • archive file content is only retrieved when requested, and those requests are made against the shared archives.
  • no entries are written to the rdb.lbr librarian file during replication.
  • commands that would schedule the transfer of file content, such as p4 pull -u and p4 verify -t, are rejected:

    $ p4 pull -u
    This command is not used with an ondemand replica server.
    $ p4 verify -t //depot/...
    This command is not used with an ondemand replica server.
  • if startup configurables, such as startup.N=pull -u, are defined, the replica server attempts to run such commands. Since the attempt to retrieve archive content is rejected, the replica’s server log will contain the corresponding error:

    Perforce server error:
            2014/01/23 13:02:31 pid 6952 service-od@21131 background 'pull -u -i 10'
            This command is not used with an ondemand replica server.

Filtering metadata during replication

As part of an HA/DR solution, you typically want to ensure that all the metadata and all the versioned files are replicated. In most other use cases, particularly build farms and forwarding replicas, replicating everything transfers a great deal of redundant data.

It is often advantageous to configure your replica servers to filter in (or out) data on client workspaces and file revisions. For example, developers working on one project at a remote site do not typically need to know the state of every client workspace at other offices where other projects are being developed, and build farms don’t require access to the endless stream of changes to office documents and spreadsheets associated with a typical large enterprise.

The simplest way to filter metadata is by using the -T tableexcludelist option with the p4 pull command. If you know, for example, that a build farm has no need to refer to any of your users' have lists or the state of their client workspaces, you can filter out db.have and db.working entirely with p4 pull -T db.have,db.working.
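If the metadata pull runs as a startup thread (as in the examples later in this section), the same filter can be expressed as a startup configurable. The server ID buildfarm-1670 and the one-second interval below are purely illustrative:

$ p4 configure set "buildfarm-1670#startup.1=pull -i 1 -T db.have,db.working"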

Excluding entire database tables is a coarse-grained way of managing the amount of data passed between servers: it requires some knowledge of which tables are most likely to be referred to during Perforce command operations, and it offers no control over which versioned files are replicated.

You can gain much more fine-grained control over what data is replicated by using the ClientDataFilter:, RevisionDataFilter:, and ArchiveDataFilter: fields of the p4 server form. These options enable you to replicate (or exclude from replication) those portions of your server’s metadata and versioned files that are of interest at the replica site.

Example 1. Filtering out client workspace data and files.

If workspaces for users in each of three sites are named with site[123]-ws-username, a replica intended to act as partial backup for users at site1 could be configured as follows:

ServerID:       site1-1668
Name:           site1-1668
Type:           server
Services:       replica
Address:        tcp:site1bak:1668
Description:
        Replicate all client workspace data, except the states of
        workspaces of users at sites 2 and 3.
        Automatically replicate .c files in anticipation of user
        requests. Do not replicate .mp4 video files, which tend
        to be large and impose high bandwidth costs.
ClientDataFilter:
        -//site2-ws-*
        -//site3-ws-*
RevisionDataFilter:
ArchiveDataFilter:
        //....c
        -//....mp4

When you start the replica, your p4 pull metadata thread must specify the ServerID associated with the server spec that holds the filters:

$ p4 configure set "site1-1668#startup.1=pull -i 30 -P site1-1668"

In this configuration, only those portions of db.have that are associated with site1 are replicated; all metadata concerning workspaces associated with site2 and site3 is ignored.

All file-related metadata is replicated. All files in the depot are replicated, except for those ending in .mp4. Files ending in .c are transferred automatically to the replica when submitted.

To further illustrate the concept, consider a build farm replica scenario. The ongoing work of the organization (whether it be code, business documents, or the latest video commercial) can be stored anywhere in the depot, but this build farm is dedicated to building releasable products, and has no need to have the rest of the organization’s output at its immediate disposal:

Example 2. Replicating metadata and file contents for a subset of a depot.

Releasable code is placed into //depot/releases/... and automated builds are based on these changes. Changes to other portions of the depot, as well as the states of individual workers' client workspaces, are filtered out.

ServerID:       builder-1669
Name:           builder-1669
Type:           server
Services:       build-server
Address:        tcp:built:1669
Description:
        Exclude all client workspace data
        Replicate only revisions in release branches
ClientDataFilter:
        -//...
RevisionDataFilter:
        -//...
        //depot/releases/...
ArchiveDataFilter:
        -//...
        //depot/releases/...

To seed the replica you can use a command like the following to create a filtered checkpoint:

$ p4d -r /p4/master -P builder-1669 -jd myCheckpoint

The filters specified for builder-1669 are used in creating the checkpoint. You can then continue to update the replica using the p4 pull command.
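To load the filtered checkpoint, copy it to the replica host and replay it into the replica's root before starting the replica. The root path /p4/replica below is hypothetical; substitute your replica's actual P4ROOT:

$ p4d -r /p4/replica -jr myCheckpoint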

When you start the replica, your p4 pull metadata thread must specify the ServerID associated with the server spec that holds the filters:

$ p4 configure set "builder-1669#startup.1=pull -i 30 -P builder-1669"

The p4 pull thread that pulls metadata for replication filters out all client workspace data (including the have lists) of all users.

The p4 pull -u thread(s) ignore all changes on the master except those that affect revisions in the //depot/releases/... branch, which are the only ones of interest to a build farm. The only metadata that is available is that which concerns released code. All released code is automatically transferred to the build farm before any requests are made, so that when the build farm performs a p4 sync, the sync is performed locally.
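Once the filtered replica is running, you can confirm that it is keeping up with the master by running read-only status checks against the replica; the exact output format varies by release:

$ p4 pull -lj        # report the state of journal (metadata) replication
$ p4 pull -ls        # summarize pending archive file transfers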

Verifying replica integrity

Tools to ensure data integrity in multi-server installations are accessed through the p4 journaldbchecksums command, and their behavior is controlled by three configurables: rpl.checksum.auto, rpl.checksum.change, and rpl.checksum.table.

When you run p4 journaldbchecksums against a specific database table (or the set of tables associated with one of the levels predefined by the rpl.checksum.auto configurable), the upstream server writes a journal note containing table checksum information. Downstream replicas, upon receiving this journal note, then proceed to verify these checksums and record their results in the structured log for integrity-related events.
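For example, an administrator might spot-check a single table on the upstream server; the table chosen here is arbitrary, and downstream replicas verify the resulting journal note automatically (see p4 help journaldbchecksums for the options supported by your release):

$ p4 journaldbchecksums -t db.user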

These checks are also performed whenever the journal is rotated. In addition, newly defined triggers allow you to take some custom action when journals are rotated. For more information, see the section "Triggering on journal rotation" in Helix Versioning Engine Administrator Guide: Fundamentals.

Administrators who have one or more replica servers deployed should enable structured logging for integrity events, set the rpl.checksum.* configurables for their replica servers, and regularly monitor the logs for integrity events.

Configuration

Structured server logging must be enabled on every server, with at least one log recording events of type integrity, for example:

$ p4 configure set serverlog.file.8=integrity.csv

After you have enabled structured server logging, set the rpl.checksum.auto, rpl.checksum.change, and rpl.checksum.table configurables to the desired levels of integrity checking. Best practice for most sites is a balance between performance and log size:

p4 configure set rpl.checksum.auto=1 (or 2 for additional verification that is unlikely to vary between an upstream server and its replicas.)

p4 configure set rpl.checksum.change=2 (this setting checks the integrity of every changelist, but only writes to the log if there is an error.)

p4 configure set rpl.checksum.table=1 (this setting instructs replicas to verify table integrity on scan or unload operations, but only writes to the log if there is an error.)

Valid settings for rpl.checksum.auto are:

rpl.checksum.auto   Database tables checked with every journal rotation

0    No checksums are performed.

1    Verify only the most important system and revision tables:
     db.archmap, db.config, db.depot, db.group, db.groupx, db.integed, db.integtx, db.ldap, db.protect, db.rev, db.revcx, db.revdx, db.revhx, db.revtx, db.stream, db.trigger, db.user.

2    Verify all database tables from level 1, plus:
     db.bodtext, db.bodtextcx, db.bodtexthx, db.counters, db.excl, db.fix, db.fixrev, db.ixtext, db.ixtexthx, db.job, db.logger, db.message, db.nameval, db.property, db.remote, db.revbx, db.review, db.revsx, db.revux, db.rmtview, db.server, db.svrview, db.traits, db.uxtext.

3    Verify all metadata, including metadata that is likely to differ, especially when comparing an upstream server with a build-farm or edge-server replica.

Valid settings for rpl.checksum.change are:

rpl.checksum.change   Verification performed with each changelist

0    Perform no verification.

1    Write a journal note when a p4 submit, p4 fetch, p4 populate, p4 push, or p4 unzip command completes. The value of the rpl.checksum.change configurable will determine the level of verification performed for the command.

2    Replica verifies the changelist summary, and writes to integrity.csv if the changelist does not match.

3    Replica verifies the changelist summary, and writes to the integrity log even when the changelist does match.

Valid settings for rpl.checksum.table are:

rpl.checksum.table   Level of table verification performed

0    Table-level checksumming only.

1    When a table is unloaded or scanned, journal notes are written. These notes are processed by the replica and are logged to integrity.csv if the check fails.

2    When a table is unloaded or scanned, journal notes are written, and the results of journal note processing are logged even if the results match.

For more information, see p4 help journaldbchecksums.
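To review the integrity events that replicas have recorded, you can read the structured log directly on the replica host or, assuming your release supports the p4 logparse command, query it by log name; integrity.csv is the log configured above:

$ p4 logparse integrity.csv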

Warnings, notes, and limitations

The following warnings, notes, and limitations apply to all configurations unless otherwise noted.

  • On master servers, do not reconfigure these replica settings while the replica is running:

    • P4TARGET
    • dm.domain.accessupdate
    • dm.user.accessupdate
  • Be careful not to inadvertently write to the replica’s database. This might happen by using an -r option without specifying the full path (and mistakenly specifying the current path), by removing db files in P4ROOT, and so on. For example, when using the p4d -r . -jc command, make sure you are not currently in the root directory of the replica or standby in which p4 journalcopy is writing journal files.
  • Large numbers of Perforce password (P4PASSWD) invalid or unset errors in the replica log indicate that the service user has not been logged in or that the P4TICKETS file is not writable.

    In the case of a read-only directory or P4TICKETS file, p4 login appears to succeed, but p4 login -s generates the "invalid or unset" error. Ensure that the P4TICKETS file exists and is writable by the replica server. A quick way to check the service user's ticket and the tickets file is sketched after this list.

  • Client workspaces on the master and replica servers cannot overlap. Users must be certain that their P4PORT, P4CLIENT, and other settings are configured to ensure that files from the replica server are not synced to client workspaces used with the master server, and vice versa.
  • Each replica server maintains its own table of users; by default, the p4 users command shows only users who have used that particular replica server. (To see the master server’s list of users, use p4 users -c).

    The advantage of having a separate user table (stored on the replica in db.user.rp) is that after having logged in for the first time, users can continue to use the replica without having to repeatedly contact the master server.

  • All server IDs must be unique. The examples in the section Configuring a build farm server illustrate the use of manually-assigned names that are easy to remember, but in very large environments, there may be more servers in a build farm than is practical to administer or remember. Use the command p4 server -g to create a new server specification with a numeric Server ID. Such a Server ID is guaranteed to be unique.

    Whether manually-named or automatically-generated, it is the responsibility of the system administrator to ensure that the Server ID associated with a server’s p4 server form corresponds exactly with the server.id file created (and/or read) by the p4 serverid command.

  • Users of P4V and forwarding replicas are urged to upgrade to P4V 2012.1 or higher. Perforce applications older than 2012.1 that attempt to use a forwarding replica can, under certain circumstances, require the user to log in twice to obtain two tickets: one for the first read (from the forwarding replica), and another for the first write attempt (forwarded to the master). This confusing behavior is resolved if P4V 2012.1 or higher is used.
  • Although replicas can be chained together as of Release 2013.1 (that is, a replica’s P4TARGET can be another replica, as well as a central server), it is the administrator’s responsibility to ensure that no loops are inadvertently created in this process. Certain multi-level replication scenarios are permissible, but pointless; for example, a forwarding replica of a read-only replica offers no advantage because the read-only replica will merely reject all writes forwarded to it. Please contact Perforce technical support for guidance if you are considering a multi-level replica installation.
  • The rpl.compress configurable controls whether compression is used on the master-replica connection(s). This configurable defaults to 0. Enabling compression can provide notable performance improvements, particularly when the master and replica servers are separated by significant geographic distances.

    Enable compression with: p4 configure set fwd-replica#rpl.compress=1
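Returning to the earlier note about service user tickets: a minimal check sequence, in which the service user name svc_replica, the master address master:1666, and the tickets path are all illustrative, might look like this:

$ export P4TICKETS=/p4/replica/.p4tickets     # tickets file used by the replica (hypothetical path)
$ p4 -u svc_replica -p master:1666 login      # log the service user in to the master
$ p4 -u svc_replica -p master:1666 login -s   # verify that a valid ticket is now held
$ ls -l /p4/replica/.p4tickets                # confirm the file is writable by the replica's OS user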