ple, it only provides a high-level methodology instead of the exact solution to the user's question.
3: It means the answer is helpful but not written by an AI Assistant. It addresses all the basic asks from the user. It is complete and self-contained, with the drawback that the response is not written from an AI assistant's perspective but from other people's perspective. The content looks like an excerpt from a blog post, web page, or web search results. For example, it contains personal experience or opinion, mentions a comments section, or sharing on social media, etc.
4: It means the answer is written from an AI assistant's perspective with a clear focus on addressing the instruction. It provides a complete, clear, and comprehensive response to the user's question or instruction without missing or irrelevant information. It is well organized, self-contained, and written in a helpful tone. It has minor room for improvement, e.g., being more concise and focused.
5: It means it is a perfect answer from an AI Assistant. It has a clear focus on being a helpful AI Assistant, where the response looks like it was intentionally written to address the user's question or instruction without any irrelevant sentences. The answer provides high quality content, demonstrating expert knowledge in the area, is very well written, logical, easy to follow, engaging and insightful.
User: <INSTRUCTION_HERE> <response><RESPONSE_HERE></response>
Please first briefly describe your reasoning (in less than 100 words), and then write "Score: <rating>" in the last line. Answer in the style of an AI Assistant, with knowledge from web search if needed. To derive the final score based on the criteria, let's think step-by-step.
Figure 5: LLM-as-a-Judge prompt taken from Li et al. [2023a].
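As a rough illustration (not from Li et al.), the sketch below shows how such a judge prompt might be filled in and how the final "Score: <rating>" line could be parsed from a reply. The template variable and function names are invented for the example, and the judge model's reply is assumed to be available as plain text.

    import re

    # Hypothetical template holding the judge prompt shown above, with the
    # <INSTRUCTION_HERE> and <RESPONSE_HERE> placeholders left in place.
    JUDGE_PROMPT_TEMPLATE = (
        "... User: <INSTRUCTION_HERE> <response><RESPONSE_HERE></response> "
        "Please first briefly describe your reasoning (in less than 100 words), "
        "and then write \"Score: <rating>\" in the last line."
    )

    def build_judge_prompt(instruction, response):
        # Substitute the user instruction and candidate response into the template.
        prompt = JUDGE_PROMPT_TEMPLATE.replace("<INSTRUCTION_HERE>", instruction)
        return prompt.replace("<RESPONSE_HERE>", response)

    def parse_score(judge_reply):
        # The prompt asks the judge to finish with "Score: <rating>" on the last line.
        match = re.search(r"Score:\s*([1-5])\s*$", judge_reply.strip())
        return int(match.group(1)) if match else None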
Data Sanitization Techniques
A Net 2000 Ltd. White Paper

Abstract
Data Sanitization is the process of making sensitive information in non-production databases safe for wider visibility. This White Paper is an overview of various techniques which can be used to sanitize sensitive production data in test and development databases. An initial discussion of the primary motivations for data sanitization is given. The remainder of the paper is devoted to a generic survey of the various masking techniques and their individual benefits and drawbacks.

Some keywords which may assist you in finding this document online are: Data Sanitization, Data Sanitisation, Data Masking, Data Obfuscation, Data Security, Data Cleansing, Data Hiding, Data Protection Act 1998, Hide Data, Disguise Data, Sanitize Data, Sanitise Data, Gramm-Leach-Bliley Act (GLBA), Data Privacy, Directive 95/46/EC of the European Parliament.

Author: Dale Edgar
Net 2000 Ltd.
[email protected]
http://www.Net2000Ltd.com
Copyright © Net 2000 Ltd. 2003-2004

Table of Contents
Why Sanitize Information in Test and Development Databases?
Protecting Valuable Information
Legal Obligations
Data Sanitization Techniques
Technique: NULL'ing Out
Technique: Masking Data
Technique: Substitution
Technique: Shuffling Records
Technique: Number Variance
Technique: Gibberish Generation
Technique: Encryption/Decryption
Summary

Data Sanitization Techniques
Data Sanitization is the process of disguising sensitive information in test and development databases by overwriting it with realistic looking but false data of a similar type.

Why Sanitize Information in Test and Development Databases?
The data in testing environments should be sanitized in order to protect valuable business information and also because there is, in most countries, a legal obligation to do so.

Protecting Valuable Information
Fundamentally there are two types of security. The first type is concerned with the integrity of the data. In this case the modification of the records is strictly controlled. For example, you may not wish an account to be credited or debited without specific controls and auditing. This type of security is not a major concern in test and development databases. The data can be modified at will without any business impact. The second type of security is the protection of the information content from inappropriate visibility.
Names, addresses, phone numbers and credit card details are good examples of this type of data. Unlike the protection from updates, this type of security requires that access to the information content is controlled in every environment.

Legal Obligations
The legal requirements for Data Sanitization vary from country to country and most countries now have regulations of some form. Here are some examples:

United States
The Gramm-Leach-Bliley Act requires institutions to protect the confidentiality and integrity of personal consumer information. The Right to Financial Privacy Act of 1978 creates statutory Fourth Amendment protection for financial records, and there are a host of individual state laws.

The European Union
Directive 95/46/EC of the European Parliament provides strict guidelines regarding individual rights to data privacy, and the responsibilities of data holders to guard against misuse.

The United Kingdom
The United Kingdom Data Protection Act of 1998 extends the European Parliament directive and places further statutory obligations on the holders of personal, private or sensitive data.

As with most things legal, the details are open to argument. In reality though, if data for which your organization is responsible gets loose and appropriate steps were not taken to prevent that release, then your organization's lawyers could well find themselves in court trying to put their best spin on the matter. However large the legal liabilities are, they could seem trivial in comparison to the losses associated with the catastrophic loss of business confidence caused by a large scale privacy breach. Any organization that outsources test and development operations needs to be very conscious of the specific laws regulating the transmission of information across national borders.

Data Sanitization Techniques
Test and development teams need to work with databases which are structurally correct functional copies of the live environments. However, they do not necessarily need to be able to view security-sensitive information. For test and development purposes, as long as the data looks real, the actual record content is usually irrelevant. There are a variety of Data Sanitization techniques available – the pros and cons of some of the most useful are discussed below.
Technique: NULL'ing Out
Simply deleting a column of data by replacing it with NULL values is an effective way of ensuring that it is not inappropriately visible in test environments. Unfortunately, it is also one of the least desirable options from a test database standpoint. Usually the test teams need to work on the data, or at least a realistic approximation of it. For example, it is very hard to write and test customer account maintenance forms if the customer name, address and contact details are all NULL values.
Verdict: The NULL'ing Out technique is useful in certain specific circumstances but rarely useful as the entire Data Sanitization strategy.
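As a rough illustration (not taken from the paper), the following sketch NULLs out a sensitive column using Python's built-in sqlite3 module. The in-memory database stands in for the test copy being sanitized, and the table and column names are invented for the example.

    import sqlite3

    # Illustrative only: an in-memory database stands in for the test copy
    # being sanitized, and the table and column names are invented.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (name TEXT, phone_number TEXT)")
    conn.execute("INSERT INTO customers VALUES ('Alice Example', '555-0100')")

    # The sanitization step itself: overwrite the sensitive column with NULLs.
    conn.execute("UPDATE customers SET phone_number = NULL")
    conn.commit()

    print(conn.execute("SELECT * FROM customers").fetchall())
    # [('Alice Example', None)]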
Technique: Masking Data
Masking data means replacing certain fields with a mask character (such as an X). This effectively disguises the data content while preserving the same formatting on front end screens and reports. For example, a column of credit card numbers might look like:

4346 6454 0020 5379
4493 9238 7315 5787
4297 8296 7496 8724

and after the masking operation the information would appear as:

4346 XXXX XXXX 5379
4493 XXXX XXXX 5787
4297 XXXX XXXX 8724

The masking characters effectively remove much of the sensitive content from the record while still preserving the look and feel. Take care to ensure that enough of the data is masked to preserve security. It would not be hard to regenerate the original credit card number from a masking operation such as 4297 8296 7496 87XX, since the numbers are generated with a specific and well known checksum algorithm. Also, care must be taken not to mask out potentially required information. A masking operation such as XXXX XXXX XXXX 5379 would strip the card issuer details from the credit card number. This may, or may not, be desirable.
Verdict: If the data is in a specific, invariable format, then Masking is a powerful and fast Data Sanitization option. If numerous special cases must be dealt with, then masking can be slow, extremely complex to administer and can potentially leave some data items inappropriately masked.
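A small sketch of the masking pattern shown above, assuming the card number arrives as space-separated groups of four digits; in a real database this would typically be done in SQL or by a dedicated masking tool.

    def mask_card_number(card, mask_char="X"):
        # Keep the first and last groups of four digits and mask the middle
        # groups, preserving the spacing so screens and reports still line up.
        groups = card.split()
        middle = [mask_char * len(group) for group in groups[1:-1]]
        return " ".join([groups[0]] + middle + [groups[-1]])

    print(mask_card_number("4346 6454 0020 5379"))   # 4346 XXXX XXXX 5379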
Technique: Substitution
This technique consists of randomly replacing the contents of a column of data with information that looks similar but is completely unrelated to the real details. For example, the surnames in a customer database could be sanitized by replacing the real last names with surnames drawn from a largish random list. Substitution is very effective in terms of preserving the look and feel of the existing data. The downside is that a largish store of substitutable information must be maintained for each column to be substituted. For example, to sanitize surnames by substitution, a list of random last names must be available. Then to sanitize telephone numbers, a list of phone numbers must be available. Frequently, the ability to generate known invalid data (phone numbers that will never work) is a nice-to-have feature. Substitution data can sometimes be very hard to find in large quantities. For example, if a million random street addresses are required, then just obtaining the substitution data can be a major exercise in itself.
Verdict: Substitution is quite powerful, reasonably fast and preserves the look and feel of the data. Finding the required random data to substitute and developing the procedures to accomplish the substitution can be a major effort.
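The following sketch illustrates substitution in Python; the surname list is a small stand-in for the largish store of substitution data discussed above, and the customer rows are invented.

    import random

    # Illustrative substitution list; in practice a largish store of realistic
    # surnames would be maintained for each column to be substituted.
    SURNAMES = ["Archer", "Bennett", "Carver", "Dawson", "Ellery", "Foster"]

    def substitute_surname(real_surname):
        # The real value is ignored entirely; the replacement is unrelated to it.
        return random.choice(SURNAMES)

    customers = [("Alice", "Smith"), ("Bob", "Jones")]
    print([(first, substitute_surname(last)) for first, last in customers])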
Technique: Shuffling Records
Shuffling is similar to substitution except that the substitution data is derived from the column itself. Essentially, the data in a column is randomly moved between rows until there is no longer any reasonable correlation with the remaining information in the row. There is a certain danger in the shuffling technique. It does not prevent people from asking questions like "I wonder if so-and-so is on the supplier list?" In other words, the original data is still present and sometimes meaningful questions can still be asked of it. Another consideration is the algorithm used to shuffle the data. If the shuffling method can be determined, then the data can be easily "unshuffled". For example, if the shuffle algorithm simply ran down the table swapping the column data in between every group of two rows, it would not take much work from an interested party to revert things to their unshuffled state. Shuffling is rarely effective when used on small amounts of data. For example, if there are only 5 rows in a table it probably will not be too difficult to figure out which of the shuffled data really belongs to which row. On the other hand, if a column of numeric data is shuffled, the sum and average of the column still work out to the same amount. This can sometimes be useful.
Verdict: Shuffle rules are best used on large tables and leave the look and feel of the data intact. They are fast and relatively simple to implement since no new data needs to be found, but great care must be taken to use a sophisticated algorithm to randomise the shuffling of the rows.
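A brief sketch of column shuffling in Python; the rows are invented, and a production routine would operate on the database itself and use a stronger, well-seeded source of randomness than this example.

    import random

    # Invented rows: shuffling the balance column on its own breaks the link
    # between a customer and their balance while keeping sums and averages intact.
    rows = [("Alice", 4200.00), ("Bob", 150.75), ("Carol", 990.10), ("Dave", 73.25)]

    balances = [balance for _, balance in rows]
    random.shuffle(balances)   # a production routine would use a stronger shuffle

    shuffled = [(name, balance) for (name, _), balance in zip(rows, balances)]
    print(shuffled)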
Technique: Number Variance
The Number Variance technique is useful on numeric data. Simply put, the algorithm involves modifying each number value in a column by some random percentage of its real value. This technique has the nice advantage of providing a reasonable disguise for the numeric data while still keeping the range and distribution of values in the column within viable limits. For example, a column of sales data might have a random variance of 10% placed on it. Some values would be higher, some lower, but all would be not too far from their original range.
Verdict: The number variance technique is occasionally useful and can prevent attempts to correlate true records using known numeric data. This type of Data Sanitization really does need to be used in conjunction with other options though.
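A minimal sketch of the number variance idea, applying a random variance of up to 10% to each value; the sales figures are invented.

    import random

    def vary(value, max_variance=0.10):
        # Nudge each value by a random percentage of itself (here up to +/-10%)
        # so the figure is disguised but stays within a plausible range.
        factor = 1 + random.uniform(-max_variance, max_variance)
        return round(value * factor, 2)

    sales = [1200.00, 845.50, 99.99]   # invented sales figures
    print([vary(amount) for amount in sales])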
Technique: Gibberish Generation
In general, when sanitizing data, one must take great care to remove all embedded references to the real data. For example, it is pointless to carefully remove real customer names and addresses while leaving them intact in stored copies of correspondence in another table. This is especially true if the original record can be determined via a simple join on a unique key. Sanitizing "formless", non-specific data such as letters, memos and notes is one of the hardest techniques in Data Sanitization. Usually these types of fields are just substituted with a random quantity of equivalently sized gibberish or random words. If real looking data is required, either an elaborate substitution exercise must be undertaken or a few carefully hand-built examples must be judiciously substituted to provide some representative samples.
Verdict: Occasionally it is useful to be able to substitute quantities of random text. Gibberish Generation is useful when needed but is not a very widely applicable technique.
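A rough sketch of gibberish generation, replacing a free-text note with an equivalently sized run of random words; the note shown is invented.

    import random
    import string

    def gibberish(length):
        # Build an equivalently sized run of random lower-case "words"
        # of between 2 and 10 letters each.
        words, remaining = [], length
        while remaining > 0:
            size = min(remaining, random.randint(2, 10))
            words.append("".join(random.choices(string.ascii_lowercase, k=size)))
            remaining -= size + 1   # allow for the separating space
        return " ".join(words)[:length]

    note = "Customer rang to complain about the late delivery of order 10045."
    print(gibberish(len(note)))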
Technique: Encryption/Decryption
This technique offers the option of leaving the data in place and visible to those with the appropriate key while remaining effectively useless to anybody without the key. This would seem to be a very good option – yet, as with all techniques, it has its strengths and weaknesses. The big plus is that the real data is available to anybody with the key – for example, administration personnel might be able to see the personal details on their front end screens but no one else would have this capability. This "optional" visibility is also this technique's biggest weakness. The encryption password only needs to escape once and all of the data is compromised. Of course, you can change the key and regenerate the test instances – but stored or saved copies of the data are immediately available under the old password. Encryption also destroys the formatting and look and feel of the data. Encrypted data rarely looks meaningful; in fact, it usually looks like binary data. This sometimes leads to NLS character set issues when manipulating encrypted varchar fields. Certain types of encryption impose constraints on the data format as well. For example, the Oracle Obfuscation toolkit requires that all data to be encrypted should have a length which is a multiple of 8 characters. In effect, this means that the fields must be extended with a suitable padding character which must then be stripped off at decryption time. The strength of the encryption is also an issue. Some encryption is more secure than others. According to the experts, most encryption systems can be broken – it is just a matter of time and effort. In other words, not very much will keep the national security agencies of largish countries from reading your files should they choose to do so. This may not be a big worry if the requirement is to protect proprietary business information.
Verdict: The security is dependent on the strength of the encryption used. It may not be suitable for high security requirements or where the encryption key cannot be secured. Encryption also destroys the look and feel of the sanitized data. The big plus is the selective access it presents.
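As an illustration only, the sketch below encrypts and decrypts a single field with the third-party Python cryptography package (Fernet, a symmetric scheme); it is not the Oracle toolkit mentioned above, and it assumes the package is installed. Note how the encrypted token no longer resembles the original value, which is the loss of look and feel described in the verdict.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # anyone holding this key can recover the data
    fernet = Fernet(key)

    surname = "Smith"
    token = fernet.encrypt(surname.encode())
    print(token)                       # binary-looking output: the look and feel is lost

    print(fernet.decrypt(token).decode())   # "Smith" again, but only with the key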
Summary
Given the legal and business operating environment of today, most test and development databases will require some form of Data Sanitization. There are a variety of techniques available and usually several will be required as the format, size and structure of the data dictates. One key issue not discussed above is repeatability. When designing the Data Sanitization routines it should be realized that they will eventually become a production process – even if the data is only destined for test environments. In other words, the data will need to be sanitized each and every time a test database is refreshed from production. This means that Data Sanitization routines that are easy to run and simple to maintain will soon recover any extra development effort or costs.

About the Author
Dale Edgar, in his many years of experience as a DBA, has built numerous test and development environments. He is one of the creative influences behind the Data Masker - software which provides an automated solution to the sanitization of data in test and development environments. Dale can be reached at [email protected]
Title: A Framework of Principles for the Development of Policies, Strategies and Standards for the Long-term Preservation of Digital Records
Status: Final (public)
Version: 1.2
Submission Date: June 2005
Release Date: March 2008
Author: The InterPARES 2 Project
Writer(s): Luciana Duranti, Jim Suderman and Malcolm Todd
Project Unit: Policy Cross-domain
URL: http://www.interpares.org/display_file.cfm?doc=ip2(pub)policy_framework_document.pdf

Table of Contents
INTRODUCTION
STRUCTURE OF THE PRINCIPLES
PRINCIPLES FOR RECORDS CREATORS
(C1) Digital objects must have a stable content and a fixed documentary form to be considered records and to be capable of being preserved over time. (P5)
(C2) Record creation procedures should ensure that digital components of records can be separately maintained and reassembled over time. (P4)
(C3) Record creation and maintenance requirements should be formulated in terms of the purposes the records are to fulfil, rather than in terms of the available or chosen record-making or recordkeeping technologies. (P6)
(C4) Record creation and maintenance policies, strategies and standards should address the issues of record reliability, accuracy and authenticity expressly and separately. (P2)
(C5) A trusted record-making system should be used to generate records that can be presumed reliable.
(C6) A trusted recordkeeping system should be used to maintain records that can be presumed accurate and authentic. (P11, P12)
(C7) Preservation considerations should be embedded in all activities involved in record creation and maintenance if a creator wishes to maintain and preserve accurate and authentic records beyond its operational business needs. (P7)
(C8) A trusted custodian should be designated as the preserver of the creator's records. (P1)
(C9) All business processes that contribute to the creation and/or use of the same records should be explicitly documented. (P10)
(C10) Third-party intellectual property rights attached to the creator's records should be explicitly identified and managed in the record-making and recordkeeping systems. (P8)
(C11) Privacy rights and obligations attached to the creator's records should be explicitly identified and protected in the record-making and recordkeeping systems. (P9)
(C12) Procedures for sharing records across different jurisdictions should be established on the basis of the legal requirements under which the records are created. (P13)
(C13) Reproductions of a record made by the creator in its usual and ordinary course of business and for its purposes and use, as part of its recordkeeping activities, have the same effects as the first manifestation, and each is to be considered at any given time the record of the creator. (P3)
PRINCIPLES FOR RECORDS PRESERVERS
(P1) A designated records preserver fulfils the role of trusted custodian. (C8)
(P2) Records preservation policies, strategies and standards should address the issues of record accuracy and authenticity expressly and separately. (C4)
(P3) Reproductions of a creator's records made for purposes of preservation by their trusted custodian are to be considered authentic copies of the creator's records. (C13)
(P4) Records preservation procedures should ensure that the digital components of records can be separately preserved and reassembled over time. (C2)
(P5) Authentic copies should be made for preservation purposes only from the creator's records; that is, from digital objects that have a stable content and a fixed documentary form. (C1)
(P6) Preservation requirements should be articulated in terms of the purpose or desired outcome of preservation, rather than in terms of the specific technologies available. (C3)
(P7) Preservation considerations should be embedded in all activities involved in each phase of the records lifecycle if their continuing authentic existence over the long term is to be ensured. (C7)
(P8) Third-party intellectual property rights attached to the creator's records should be explicitly identified and managed in the preservation system. (C10)
(P9) Privacy rights and obligations attached to the creator's records should be explicitly identified and protected in the preservation system. (C11)
(P10) Archival appraisal should identify and analyze all the business processes that contribute to the creation and/or use of the same records. (C9)
(P11) Archival appraisal should assess the authenticity of the records. (C6)
(P12) Archival description should be used as a collective authentication of the records in an archival fonds. (C6)
(P13) Procedures for providing access to records created in one jurisdiction to users in other jurisdictions should be established on the basis of the legal environment in which the records were created. (C12)

A Framework of Principles for the Development of Policies, Strategies and Standards for the Long-term Preservation of Digital Records1

Introduction
The InterPARES research projects have examined the creation, maintenance and preservation of digital records. A major finding of the research is that, to preserve trustworthy digital records (i.e., records that can be demonstrated to be reliable, accurate and authentic), records creators must create them in such a way that it is possible to maintain and preserve them. This entails that a relationship between a records creator2 and its designated preserv
er3 must begin at the time the records are created.4 The InterPARES 1 research (1999-2001) was undertaken from the viewpoint of the preserver. Three central findings emerged from it: 1) there are several requirements that should be in place in any recordkeeping environment aiming to create reliable and accurate digital records and to maintain authentic records;5 2) it is not possible to preserve digital records but only the ability to reproduce them;6 and 3) the preserver needs to be involved with th
e records from the beginning of their lifecycle to be able to assert that the copies that will be selected for permanent preservation are indeed authentic copies of the creator’s records. The InterPARES 2 research (2002-2006) took the records creator’s perspective. The researchers carried out case studies of records creation and maintenance in the arts, sciences and e-government; they modeled the many functions that make up records creation and maintenance and records preservation according to both t
he lifecycle and the continuum models; they reviewed and compared legislation and government policies from a number of different countries and at different levels of government, from the national to the municipal; they analyzed many metadata initiatives and developed a tool to identify the strengths and weaknesses of existing metadata schemas in relation to questions of reliability, accuracy and authenticity; and, once again, they studied the concept of trustworthiness and its components, reliability,
accuracy and authenticity and how it is understood, not just in the traditional legal and administrative environments, but in the arts, in the sciences and in the developing areas of e-government. 1 The term initially used in the InterPARES Project is "electronic records." In fact, the book resulting from InterPARES 1 is named The Long-term Preservation of Authentic Electronic Records: Findings of the InterPARES Project (Luciana Duranti, ed.; San Mini
ato, Archilab, 2005), and the formal title of InterPARES 2 carries that terminology forward. However, in the course of the research, the term “electronic record” began to be gradually replaced by the term “digital record,” which has a less generic meaning, and by the end of the research cycle, the research team had developed separate definitions for the two terms and decided to use the latter as the one that better describes the object of InterPARES research. The definition for “electronic record” reads:
“An analogue or digital record that is carried by an electrical conductor and requires the use of equipment to be intelligible by a person.” The definition for “digital record” reads: “A record whose content and form are encoded using discrete numeric values (such as the binary values 0 and 1) rather than a continuous spectrum of values (such as those generated by an analogue system).” See the InterPARES 2 Terminology Database, available at http://www.interpares.org/ip2/ip2_terminology_db.cfm. 2 Recor
ds creator is the physical or juridical person (i.e., a collection or succession of physical persons, such as an organization, a committee, or a position) who makes or receives and sets aside the records for action or reference. As such, the term includes all officers who work for a juridical person, such as records managers, records keepers and preservers. 3 Records preserver is a generic term that refers more to the function than to the professional designation of the physical or juridical person in q
uestion. Thus, the preserver might be a unit in an organization, a stand-alone institution, an archivist or anyone else who has as primary responsibility the long-term preservation of records. 4 Records are created when they are made or received and set aside or saved for action or reference. 5 See Authenticity Task Force (2002). “Appendix 2: Requirements for Assessing and Maintaining the Authenticity of Electronic Records,” in The Long-term Preservation of Authentic Electronic Records: Findings of the
InterPARES Project, Luciana Duranti, ed. (San Miniato, Italy: Archilab, 2005), 204–219. PDF version available at http://www.interpares.org/book/interpares_book_k_app02.pdf. 6 See Kenneth Thibodeau et al., "Part Three – Trusting to Time: Preserving Authentic Records in the Long Term: Preservation Task Force Report," ibid, 99–116. PDF version available at http://www.interpares.org/book/interpares_book_f_part3.pdf.
The case studies showed that record creation in the digital environment is almost never guided by considerations of preservation over the long term. As a result, the reliability, accuracy and authenticity of digital records can either not be established in the first place or not be demonstrated over periods of time relevant to the "business"7 requirements for the records. These records cannot therefore support the creator's accountability requirements, nor can th
ey be effectively relied upon either by the creator for reference or later action or by external users as sources. Furthermore, they cannot be understood within an historical context, thereby undermining the traditional role of preserving organizations such as public archival institutions. The research undertaken in records and information-related legislation showed that no level of government in any country to date has taken a comprehensive view of the records lifecycle, and that, in some cases, leg
islation has established significant barriers to the effective preservation of digital records over the long term, most notably that regarding copyright. It was the responsibility of the InterPARES 2 Policy Cross-domain research team (hereinafter “the Policy team”) to determine whether it was possible to establish a framework of principles that could guide the creation of policies, strategies and standards, and that would be flexible enough to be useful in differing national environments, and consiste
nt enough to be adopted in its entirety as a solid basis for any such document. In particular, such a framework had to balance different cultural, social and juridical perspectives on the issues of access to information, data privacy and intellectual property. The findings of the InterPARES 1 research were confirmed by the research conducted by the InterPARES 2 Policy team, which further concluded that it is possible to develop such a framework of principles to support record creation, maintenance an
d preservation, regardless of jurisdiction. This document, in combination with other products of the Project, especially the “Chain of Preservation model,”8 reflects this conclusion, while emphasizing the need to make explicit the nature of the relationship between records creators and preservers. The Policy team developed two complementary sets of principles, one for records creators and one for records preservers, which are intended to support the establishment of the relationship between creators
and preservers by demonstrating the nature of that relationship.9 The principles for records creators are directed to the persons responsible for developing policies and strategies for the creation, maintenance and use of digital records within any kind of organization, and to national and international standards bodies. The principles for records preservers are directed to the persons responsible for developing policies and strategies for the long-term preservation of digital records within administra
tive units or institutions that have as their core mandate the preservation of the bodies of records created by persons, administrative units or organizations external to them, selected for permanent preservation under their jurisdiction for reasons of legal, administrative or historical accountability. They are therefore intended for administrative units (e.g., a bank, a city or a university archives) or institutions (e.g., a community archives or a state archives) with effective knowledge of records
and records preservation. 7 The term "business" is used in its most general sense, since the object of the InterPARES research includes works of art and scientific data as well as standard types of business records. 8 The model is available at http://www.interpares.org/ip2/ip2_models.cfm. 9 The initial draft of the principles relied heavily on the contributions of three research assistants: Fiorella Foscarini, Emily O'Neill and Sherry Xie.

Structure of the Principles
The principles are similarly presented, with the principle statement followed by an explanatory narrative, sometimes with illustrative examples. The principles are more often phrased as recommendations ("should") rather than imperatives ("must"), because some of them might not be relevant to some records creators or preservers. Each principle statement is followed by an indication of the corresponding principle in the other set (C stands for Creator, P stands for Preserver; the number is the principle number in the C or the P set). The reason why the principle numbers do not correspond in the two sets (C1=P1) is that the principles are listed in each set in order of relative importance.

Principles for Records Creators
(C1)
Digital objects must have a stable content and a fixed documentary form to be considered records and to be capable of being preserved over time. (P5) The InterPARES Project has defined a record as “a document made or received in the course of a practical activity as an instrument or a by-product of such activity, and set aside for action or reference,”10 adopting the traditional archival definition. This definition implies that, to be considered as a record, a digital object generated by the creator mu
st first be a document; that is, must have stable content and fixed documentary form. Only digital objects possessing both are capable of serving the record’s memorial function. The concept of stable content is self-explanatory, as it simply refers to the fact that the data and the information in the record (i.e., the message the record is intended to convey) are unchanged and unchangeable. This implies that data or information cannot be overwritten, altered, deleted or added to. Thus, if one has a s
ystem that contains fluid, ever-changing data or information, one has no records in such a system until one decides to make one and to save it with its unalterable content. The concept of fixed form is more complex. A digital object has a fixed form when its binary content is stored so that the message it conveys can be rendered with the same documentary presentation it had on the screen when first saved. Because the same documentary presentation of a record can be produced by a variety of digital fo
rmats or presentations,11 fixed form does not imply that the bitstreams must remain intact over time. It is possible to change the way a record is contained in a computer file without changing the record; for example, if a digital object generated in ‘.doc’ format is later saved in ‘.pdf’ format, the way it manifests itself on the screen—its documentary presentation, or “documentary form”—has not changed, so one can say that the object has a fixed form. One can also produce digital information that c
an take several different documentary forms. This means that the same content can be presented on the screen in several different ways, the various types of graphs available in spreadsheet software being one example. In this case, each presentation of such a digital object in the limited series of possibilities allowed by the system is to be considered as a different view of the same record having stable content and fixed form. In addition, one has to consider the concept of “bounded variability,” whic
h refers to changes to the form and/or content of a digital record that are limited and controlled by fixed rules, so that the same query, request or interaction always generates the same result.12 In such cases, variations in the record’s form and content are either caused by technology, such as different operating systems or applications used to access the document, or by the intention of the author or writer of the document. Where content is concerned, the same query will always return the same sub
set, while, as mentioned, its presentation might vary within an allowed range, such as image magnification. In consideration of the fact that what causes these variations also limits them, they are not considered to be violations of the requirements of stable content and fixed form. 10 See InterPARES 2 Terminology Database, op. cit. 11 Digital format is defined as “The byte-serialized encoding of a digital object that defines the syntactic and semantic
rules for the mapping from an information model to a byte stream and the inverse mapping from that byte stream back to the original information model” (InterPARES 2 Terminology Database, op. cit.). In most contexts, digital format is used interchangeably with digital file- related concepts such as file format, file wrapper, file encoding, etc. However, there are some contexts, “such as the network transport of formatted content streams or consideration of content streams at a level of granularity finer
than that of an entire file, where specific reference to "file" is inappropriate" (Stephen L. Abrams (2005), "Establishing a Global Digital Format Registry," Library Trends 54(1): 126. Available at http://muse.jhu.edu/demo/library_trends/v054/54.1abrams.pdf). 12 See Luciana Duranti and Kenneth Thibodeau (2006). "The Concept of Record in Interactive, Experiential and Dynamic Environments: the View of InterPARES," Archival Science 6(1): 13-68.
Organizations should establish criteria for determining which digital objects need to be maintained as records and what methods should be employed to fix their form and content if they are fluid when generated. The criteria should be based on business needs but should respect as well the requirements of legal, administrative and historical accountability.
(C2) Record creation procedures should ensure that digital components of records can be separately maintained and reassembled over time.
(P4) Every digital record is composed of one or more digital components. A digital component is a digital object that is part of one or more digital records, including any metadata necessary to order, structure or manifest content, and that requires a given preservation action. For example, an e-mail that includes a picture and a digital signature will have at least four digital components (the header, the text, the picture and the digital signature). Reports with attachments in different formats wil
l consist of more than one digital component, whereas a report with its attachments saved in one PDF file will consist of only one digital component. Although digital components are each stored separately, each digital component exists in a specific relationship to the other digital components that make up the record. Preservation of digital records requires that all the digital components of a record be consistently identified, linked and stored in a way that they can be retrieved and reconstituted
into a record having the same documentary presentation it manifested when last closed. Each digital component requires one or more specific methods for decoding the bitstream and for presenting it for use over time. The bitstream can be altered, as a result of conversion for example, as long as it continues to be able to fulfil its original role in the reproduction of the record. All digital components must be able to work together after they are altered; therefore, all changes need to be assessed by t
he creator for the effects they may have on the record. Organizations should establish policies and procedures that stipulate the identification of digital components at the creation stage and that ensure they can be maintained, transmitted, reproduced, upgraded and reassembled over time.
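As an illustration of the idea of digital components (not part of the InterPARES framework itself), the sketch below uses Python's standard email library to build a small e-mail and list the separately stored parts that would each need to be identified, linked and reassembled; the addresses and attachment are invented.

    from email.message import EmailMessage

    # Invented example: a message with header fields, a text body and an
    # attached image, i.e. several digital components stored separately.
    msg = EmailMessage()
    msg["From"] = "[email protected]"
    msg["To"] = "[email protected]"
    msg["Subject"] = "Quarterly report"
    msg.set_content("Please find the report attached.")
    msg.add_attachment(b"...binary image data...", maintype="image",
                       subtype="png", filename="chart.png")

    # Each MIME part is one component that must remain identifiable and linked
    # if the record is to be reassembled with its original presentation.
    for part in msg.walk():
        print(part.get_content_type(), part.get_filename())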
(C3) Record creation and maintenance requirements should be formulated in terms of the purposes the records are to fulfil, rather than in terms of the available or chosen record-making or recordkeeping technologies. (P6)
Digital records rely, by definition, on computer technology and any instance of a record exists within a specific technological environment. For this reason, it may seem useful to establish record creation and maintenance requirements in terms of the technological characteristics of the records or the technological applications in which the records may reside. However, not only do technologies change, sometimes very frequently, but they are also governed by proprietary considerations established
and modified at will by their developers. Both these factors can significantly affect the accessibility of records over time. For these reasons, references to specific technologies should not be included in records policies, strategies and standards governing the creation and maintenance of an organization’s records. Only the business requirements and obligations that the records are designed to support should be explicitly kept in consideration at such a high regulatory level. At the level of impleme
ntation, the characteristics of specific technologies should be taken into account to support the established business requirement and make possible its realization. Technological solutions to record creation and maintenance are dynamic, meaning that they will evolve as the technology evolves. New technologies will enable new ways of creating records that meet an organization’s business requirements. The rapid adoption of Web technologies to support business communication and transaction illustrates
this. Specific activities for maintaining records will therefore require continuing adaptation to new situations, drawing on expertise from a number of disciplines. To extend the example of the use of Web technologies, organizations creating and maintaining transactional records in a mainframe environment need to draw on knowledge of the new Web technologies from both connec
tivity (i.e., how to connect the mainframe to the Web) and security standpoints (i.e., how to protect the records from remote, Web-based attacks). As new technologies are used to create records, reference to new archival knowledge will continue to be required. Technological solutions need to be specific to be effective. Although the general theory and methodology of digital preservation applies to all digital records, the maintenance solutions for different types of records require different methods.
Therefore, they should be based on the specific juridical-administrative context in which the records are created and maintained, the mandate, mission or goals of their creator, the functions and activities in which the records participate and the technologies employed in their creation to ensure the best solutions are adopted for their maintenance. Record policies that are expressed in terms of business requirements rather than technologies will need to be periodically updated as the organization’s
business requirements change, rather than as the technology changes. It is the role of a specific action plan to identify appropriate technological solutions for the maintenance of specific aggregations of records. The identified solutions must be monitored with regard to the possible need for modifying and updating. This requires the records creating body to be aware of new research developments in the archival and records management fields and to collaborate with interdisciplinary efforts to develo
p appropriate methods for the management of digital records. (C4) Record creation and maintenance policies, strategies and standards should address the issues of record reliability, accuracy and authenticity expressly and separately. (P2) In the management of digital records, reliability, accuracy and authenticity are three vital considerations for any organization that wishes to sustain its business competitiveness and to comply with legislative and regulatory requirements. These considerations shoul
d be directly and separately addressed in records policies and promulgated throughout the organization. The concept of reliability refers to the authority and trustworthiness of a record as a representation of the fact(s) it is about; that is, to its ability to stand for what it speaks of. In other words, reliability is the trustworthiness of a record’s content. It can be inferred from two things: the degree of completeness of a record’s documentary form and the degree of control exercised over the pr
ocedure (or workflow) in the course of which the record is generated. Reliability is then exclusively linked to a record’s authorship and is the sole responsibility of the individual or organization that makes the record. Because, by definition, the content of a reliable record is trustworthy, and trustworthy content is, in turn, predicated on accurate data, it follows that a reliable record is also an accurate record. An accurate record is one that contains correct, precise and exact data. Accuracy of
a record may also indicate the absoluteness of the data it reports or its perfect or exclusive pertinence to the matter in question. The accuracy of a record is assumed when the record is created and used in the course of business processes to carry out business functions, based on the assumption that inaccurate records harm business interests. However, when records are transmitted across systems, refreshed, converted or migrated for continuous use, or the technology in which the record resides is up
graded, the data contained in the record must be verified to ensure their accuracy was not harmed by technical or human errors occurring in the transmission or transformation processes. The accuracy of the data must also be verified when records are created by importing data from other records systems. This verification of accuracy is the responsibility of the physical or juridical person receiving the data; however, such person is not responsible for the correctness of the data value, for which the se