p4 journaldbchecksums
Synopsis
Write journal notes with table checksums.
Syntax
p4 [g-opts] journaldbchecksums [-t tableincludelist | -T tableexcludelist] [-l level]
p4 [g-opts] journaldbchecksums -u filename -t tablename [-v version] [-z]
p4 [g-opts] journaldbchecksums -s -t tablename [-b blocksize] [-v version]
p4 [g-opts] journaldbchecksums -c changelist
Description
The p4 journaldbchecksums command provides a set of tools for ensuring data integrity across a distributed or replicated installation. The Perforce service automatically performs an integrity check whenever you use the p4 admin checkpoint or p4 admin journal commands, or when you use p4 journaldbchecksums to perform an integrity check manually.
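For example, assuming super access and the logging prerequisite described below, a manual check of all tables at the default verification level is a single command:

p4 journaldbchecksums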
To use this command, structured logging (see p4 logparse) must be enabled, and at least one structured log must be capturing events of type integrity.
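As a sketch of that prerequisite, the following enables a structured log named integrity.csv, which captures events of type integrity (the log index 5 is arbitrary; use any unused serverlog.file slot):

p4 configure set serverlog.file.5=integrity.csv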
When an integrity check is performed, the Perforce service writes records to the journal that contain the checksums of the specified tables (or, if no tables are specified, of all tables). Replica servers, upon receiving these records, compare these checksums with those computed against their own database tables, as they would with p4 dbstat. The results of the comparisons are written to the replica’s log.
You can control which tables are checked, either by including and excluding individual tables with the -t and -T options, or by using one of three levels of verification.
Verification levels are controlled by the rpl.checksum.auto configurable or the -l level option.
- Level 1 corresponds to the most important system and revision tables.
- Level 2 includes all of level 1 as well as certain metadata that is not expected to differ between replicas.
- Level 3 includes all metadata, including metadata that is likely to differ between replicas, particularly on build farms and edge servers.
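For example, to checksum only the most important system and revision tables, run a level 1 check:

p4 journaldbchecksums -l 1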
When checking individual changelists and individual tables, the rpl.checksum.change and rpl.checksum.table configurables control when events are written to the log.
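For example, to checksum a single submitted changelist, or only a specific set of tables (the changelist number and table names here are illustrative):

p4 journaldbchecksums -c 12345
p4 journaldbchecksums -t "db.have db.rev"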
For more information, including a list of database tables associated with each level of verification, see Helix Versioning Engine Administrator Guide: Multi-site Deployment.
Options
Option | Description
---|---
-b blocksize | When scanning tables, scan blocksize records per block. The default is 5,000. For each block, the server computes a block checksum and writes it as a journal note. Replica servers automatically verify these blocks when processing these notes. This option can be used with large tables on a production system, as the table is unlocked between each block. Inspecting the results of the block verifications will reveal the location of damage that affects only part of a database table.
-c changelist | Compute a checksum for an individual submitted changelist. The checksum is written as a journal note, and replica servers automatically verify the checksum of the change when they process these notes.
-l level | Specify a level for checksumming; each level corresponds to a larger set of tables. These levels correspond to the levels used by the rpl.checksum.auto configurable.
-s | Scan the specified database table.
-t tableincludelist | Specify the table(s) for which to compute checksums. To specify multiple tables, double-quote the list and separate the table names with spaces. The table names must start with "db.".
-T tableexcludelist | Compute checksums for all tables except those listed.
-u filename | Unload the specified database table to a file. This command also writes a journal note that documents this action, and instructs replica servers to automatically unload the same table to the same file when processing these notes.
-v version | When unloading or scanning tables, specify the server version number to use. If no server version number is specified, the current server version is used.
-z | Compress the file when unloading a table.
g-opts | See “Global Options”.
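For example, a block-wise scan of a large table on a production system, followed by a compressed unload of the same table (the table name, block size, and output file name are illustrative):

p4 journaldbchecksums -s -t db.have -b 10000
p4 journaldbchecksums -u db.have.dump -t db.have -z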
Usage Notes
Can File Arguments Use Revision Specifier? | Can File Arguments Use Revision Range? | Minimal Access Level Required
---|---|---
N/A | N/A | super
- For more about administering Perforce in distributed or replicated environments, see Helix Versioning Engine Administrator Guide: Multi-site Deployment.