PRIDE Requirements and Success Factors
Work Package 2 of Telematics for Libraries project PRIDE (LB 5624)
Authorisation and payment mechanisms in distributed networked environments usually employ cryptographic techniques, which provide the following basic services: authentication, privacy, integrity, and non-repudiation.
Authentication is any process through which one proves and verifies certain information. Sometimes one may want to verify the origin of a document, the identity of the sender, the time and date a document was sent and/or signed, the identity of a computer or user, and so on. A digital signature is a cryptographic means through which many of these may be verified (see the Cryptography subsection for more details on digital signatures).
Privacy is achieved by encrypting the messages that are exchanged across a network. Although privacy is especially important for payment issues (e.g., for the transmission of credit card numbers), it is in fact worth more general consideration.
Integrity is ensured by digital signatures, which make it easy to detect if the message has been tampered with while in transit.
Non-repudiation is the property of a cryptosystem that allows evidence of actions performed by users to be collected, so that a party cannot later deny having performed an action.
While cryptography throughout its history has been mostly concerned with keeping communications private, today's cryptography is more than just that. Generally speaking, it may be summed up as the study of techniques and applications that depend on the existence of difficult problems. Cryptography attempts to make sure that certain tasks (such as decrypting a message without the key or reversing a hash function) are computationally infeasible.
Secret-key cryptography is a mechanism for encryption and decryption of information with a key. The same key is used for both encryption and decryption; that is why this technique is also called symmetric cryptography. The key must obviously be kept secret, for anyone who knows the key can decrypt the message.
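As an illustration, the sketch below implements a toy symmetric cipher in Python: a keystream is derived from the shared key by repeated hashing and XORed with the message. This is purely pedagogical, not a standardised algorithm such as DES, and the key and nonce values are invented:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream by hashing key, nonce and a counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; the same call both encrypts and decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))

key = b"shared secret key"          # both parties must know this key
ciphertext = xor_crypt(key, b"nonce-01", b"Attack at dawn")
recovered = xor_crypt(key, b"nonce-01", ciphertext)
```

Note that anyone who learns the key can run the same decryption, which is exactly why the key must be kept secret.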
Key agreement is a method whereby two parties, without prior arrangements, exchange messages in such a way that they agree upon a secret key that is known only to them. Key agreement can be achieved with a public-key algorithm (see below), or with other methods.
In public-key cryptography systems, different keys are used for encryption and decryption. Keys are always generated in pairs. A message encrypted with one of the keys can only be decrypted with the other key from the same pair. Typically, in a public-key cryptosystem each user has his/her own key pair. One of the keys of the pair is kept secret (and is called the private key); the other one is widely distributed (the public key). Private messages are encrypted with the recipient's public key, so that only the recipient, who knows the matching private key, can decrypt them. The key pairs are generated so that it is not feasible to derive the private key from its public counterpart.
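The asymmetric relationship between the two keys can be demonstrated with textbook RSA on deliberately tiny primes (61 and 53). This is completely insecure, but it shows that what the public exponent encrypts, only the private exponent can decrypt:

```python
# Toy RSA key pair with tiny textbook primes -- for illustration only.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent (modular inverse; Python 3.8+)

message = 65                    # a message encoded as an integer < n
cipher = pow(message, e, n)     # anyone can encrypt with the public key (e, n)
plain = pow(cipher, d, n)       # only the private key (d, n) decrypts
```

With real key sizes (1024 bits and up) deriving d from (e, n) requires factoring n, which is the "difficult problem" the scheme rests on.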
Public-key cryptography makes key management much easier than secret-key cryptography, because it does not involve unprotected negotiation of secret keys. However, asymmetric cryptography algorithms are more computationally intensive. To overcome this obstacle, in practice messages are encrypted with conventional symmetric algorithms, while the symmetric keys themselves are randomly selected and encrypted using public-key technology. For example, the following steps are usually taken when sending a secret message: the sender generates a random session key; the message is encrypted with a symmetric algorithm under that session key; the session key is encrypted with the recipient's public key; and both encrypted items are sent together. The recipient first recovers the session key with his/her private key, then decrypts the message.
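A sketch of this hybrid scheme, combining a toy XOR stream cipher with textbook RSA (both purely illustrative, with invented values throughout):

```python
import hashlib
import secrets

# --- symmetric part: toy XOR stream cipher keyed via SHA-256 (illustrative) ---
def xor_cipher(key: bytes, data: bytes) -> bytes:
    ks = b""
    counter = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, ks))

# --- asymmetric part: textbook RSA with toy primes (illustrative) ---
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # recipient's private exponent

# Sender: pick a random session key, encrypt the message symmetrically,
# and encrypt the session key with the recipient's public key (e, n).
session_key = secrets.randbelow(n - 2) + 2
ciphertext = xor_cipher(session_key.to_bytes(4, "big"), b"confidential report")
wrapped_key = pow(session_key, e, n)

# Recipient: recover the session key with the private key, then decrypt.
recovered_key = pow(wrapped_key, d, n)
plaintext = xor_cipher(recovered_key.to_bytes(4, "big"), ciphertext)
```

Only the short session key goes through the expensive asymmetric operation; the bulk of the data is handled by the fast symmetric cipher.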
To check the integrity of information, a cryptographic technique known as one-way hash functions is employed. These are relatively easy-to-compute functions that, for any given input message, return a fixed-size bit string (hash value), such that it is computationally infeasible to find another message that would produce the same result. Hash values (also called digests) are small (much smaller than the average message), so it is easy to store them securely. Since it is practically impossible to alter information in such a manner that its hash value remains the same, hash functions provide a way to check the integrity of information.
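Using SHA-256 as a concrete example of a one-way hash function, the integrity check works as follows (the document contents are invented):

```python
import hashlib

document = b"Pay 100 EUR to Alice"
digest = hashlib.sha256(document).hexdigest()   # fixed-size fingerprint, stored securely

# Any change to the document changes the digest.
tampered = b"Pay 900 EUR to Alice"
assert hashlib.sha256(tampered).hexdigest() != digest

# An unchanged document reproduces the stored digest exactly.
assert hashlib.sha256(document).hexdigest() == digest
```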
Another application of one-way hash functions is digital signatures. A digital signature is a means of proving the identity of the message's author. In a public-key cryptosystem, the digital signature is computed as a hash value of the message, encrypted with the sender's private asymmetric key. To check the signature, the same hash function is applied to the body of the message and the result is compared with the original signature, decrypted with the sender's public key (which is well known).
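A minimal sketch of this sign-then-verify flow, again with textbook RSA on toy primes (insecure, and omitting the padding standards a real scheme requires):

```python
import hashlib

# Textbook RSA signing with toy primes -- illustrative only.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def digest(message: bytes) -> int:
    # Reduce the SHA-256 hash modulo n so it fits the toy modulus
    # (a real scheme uses a full-size modulus and a padding standard).
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

message = b"I agree to the terms."
# Sign: encrypt the message digest with the sender's private key.
signature = pow(digest(message), d, n)
# Verify: decrypt the signature with the public key and compare
# against a freshly computed digest of the received message.
valid = pow(signature, e, n) == digest(message)
```

Because only the holder of d could have produced a signature that verifies under (e, n), the check establishes both authorship and integrity.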
This is a brief survey of existing cryptographic algorithms. It is compiled from RSA Laboratories' Frequently Asked Questions about Today's Cryptography.
DES, the Data Encryption Standard (FIPS 46-1), describes the data encryption algorithm (DEA). The DEA is also defined in the ANSI standard X3.92. It is a symmetric cryptosystem with a key length of 56 bits. Due to the fixed key size, DEA is gradually becoming weaker against brute-force attacks.
A brief review of current standardisation processes in cryptography is presented below:
Several organizations are involved in defining standards related to aspects of cryptography and its application.
ANSI: The American National Standards Institute (ANSI) has a broadly based standards program, and some of the groups within its Financial Services area (Committee X9) establish standards related to cryptographic algorithms. Examples include X9.17 (key management: wholesale), X9.19 (message authentication: retail), and X9.30 (Public-Key Cryptography). Information can be found at <URL:http://www.x9.org> .
IEEE: The Institute of Electrical and Electronic Engineers (IEEE) has a broadly based standards program, including P1363 (Public-Key Cryptography). Information can be found at <URL:http://www.ieee.org> .
IETF: The Internet Engineering Task Force (IETF) is the defining body for Internet protocol standards. Its security area working groups specify means for incorporating security into the Internet's layered protocols. Examples include IP layer security (IPSec), transport layer security (TLS), Domain Name System security (DNSsec) and Generic Security Service API (GSS-API). Information can be found at <URL:http://www.ietf.org> .
ISO and ITU: The International Organization for Standardization and the International Electrotechnical Commission (ISO/IEC), together with the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T), have broadly based standards programs (many of which are collaborative between the organisations), which include cryptographically related activities. Example results are ITU-T Recommendation X.509, which defines facilities for public-key certification, and the ISO/IEC 9798 document series, which defines means for entity authentication. ITU information can be found at <URL:http://www.itu.ch>, and ISO information at <URL:http://www.iso.ch> .
NIST: The U.S. National Institute of Standards and Technology (NIST)'s Information Technology Laboratory produces a series of information processing specifications (Federal Information Processing Standards (FIPS)), several of which are related to cryptographic algorithms and usage. Examples include FIPS PUB 46-2 (Data Encryption Standard (DES)) and FIPS PUB 186 (Digital Signature Standard (DSS)). Information is available at <URL:http://www.nist.gov> .
Open Group: The Open Group produces a range of standards, some of which are related to cryptographic interfaces (APIs) and infrastructure components. Examples include Common Data Security Architecture (CDSA) and Generic Crypto Service API (GCS-API). Information can be found at <URL:http://www.opengroup.org> .
PKCS: RSA Laboratories is responsible for the development of the Public-Key Cryptography Standards (PKCS) series of specifications, which define common cryptographic data elements and structures. Information can be found at <URL:http://www.rsa.com/rsalabs/pubs/> .
A public-key infrastructure (PKI) consists of protocols, services, and standards supporting application of public-key cryptography. PKI includes services and protocols for managing public keys, often through the use of Certification Authority (CA) and Registration Authority (RA) components. The most basic services likely to be found in a PKI are key registration, certificate revocation, key selection, and trust evaluation. Below is a list of services that a PKI may include.
Registration. In order for a user to join the Public Key Infrastructure (PKI) environment, s/he must register with a certifying Trusted Third Party (TTP) belonging to the PKI. The primary goal of this service is to establish a reliable, unique binding between a user and her/his public key.
Digital Signatures. In order to satisfy the message authentication, message integrity and non-repudiation of origin user requirements, the PKI should offer digital signature services.
Encryption. Encryption is a basic service providing the cryptographic functions for protection of message confidentiality in a computer network.
Time stamping. Time stamping is the process of attaching the date and time to a document in order to prove that it existed at a particular moment in time.
Non-repudiation. Non-repudiation involves the generation, accumulation, retrieval, and interpretation of evidence that a particular party processed a particular data item. The evidence must be capable of convincing an independent third party, potentially at a much later time, as to the validity of a claim.
Key Management. Key management is a principal service within Public Key Infrastructure architecture. This service deals primarily with the handling of cryptographic keys in a proper, efficient, scaleable and secure way. It includes key pair generation, key archiving, key backup, key pair renewal, and some other functions.
Certificate Management. A digital certificate is an electronic token ensuring the binding between an entity and its public key. The functions of this service cover generation, distribution, storage, retrieval, and revocation of digital certificates.
Information Repository. This service maintains the collection of data critical for the operation of the TTP system. It states the general means and fashion for storing, archiving and maintaining several types of data ranging from organisation's legal requirements, to system recovery needs.
Directory Services. In order to interact, a member of PKI must have access to the useful information about other PKI members. This is achieved by the use of Directory Services.
Camouflaging communications. A camouflaging communication not only provides data confidentiality, but also hides the very fact of communication. This is achieved by adding dummy messages into the data stream enabling TTPs and users to hide real data transfers, both in terms of their occurrence and frequency.
Authorisation. The PKI should provide requesting entities with the right to delegate access rights at will to other PKI entities. This means that a PKI user who possesses a resource may grant another PKI user the right to access this resource. TTPs should ensure the granting of rights, including the ability to access specific information or resources.
Audit. An auditing service is required in order to ensure that certain operational, procedural, legal, quality and other requirements are complied with, so that trust is enhanced.
Quality assurance and trust enhancement services. Potential users of PKI services will require products and services of a given quality to be delivered, or be available, by a given time and at a price which reflects value for money. In order to achieve this level of quality, quality assurance of PKI services is required.
Customer oriented services. This group of PKI services includes services that directly involve users or require some contact, dealing or bargaining with the end user, such as legal arrangements and payment negotiations between a user and a TTP.
TTP to TTP interoperability. In a world-wide environment it is unlikely that all users will be connected to the same TTP. Interoperability services are concerned with the issues involved in establishing a network of TTPs, possibly operated by different companies with different policies and different domain specialisations.
The PKCS family of standards is aimed at providing a basis for interoperability between applications using public-key cryptography techniques. These standards evolved from the following design goals:
"PKCS describes the syntax for messages in an abstract manner, and gives complete details about algorithms. However, it does not specify how messages are to be represented, though BER is the logical choice. Thus PKCS implementations are free to exchange messages in any manner, depending on character set, record size constraints, and the like, as long as the abstract meaning of the messages can be preserved from sender to recipient." 
The PKCS standards are developed by RSA Laboratories. More information on PKCS and their compatibility with existing standards, such as PEM and X.509, can be found in .
PKCS #1 describes a method for encrypting data using the RSA public-key cryptosystem and a syntax for RSA public and private keys. The public-key syntax is identical to that in both X.509 and PEM. Thus X.509/PEM RSA keys can be used in PKCS #1.
PKCS #1 also defines three signature algorithms, based on MD2, MD4, and MD5.
PKCS #3 describes a method for implementing Diffie-Hellman key agreement, whereby two parties, without any prior arrangements, can agree upon a secret key that is known only to them (and, in particular, is not known to an eavesdropper listening to the dialogue by which the parties agree on the key). This secret key can then be used, for example, to encrypt further communications between the parties.
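The Diffie-Hellman exchange can be sketched in a few lines; the prime and generator below are toy values for illustration, not parameters a PKCS #3 implementation would actually use:

```python
import secrets

# Diffie-Hellman key agreement.  Toy parameters: real deployments use
# a large standardised prime (1024 bits or more).
p = 2**64 - 59          # a prime modulus (far too small for real use)
g = 5                   # a generator

a = secrets.randbelow(p - 2) + 1   # Alice's secret value, never transmitted
b = secrets.randbelow(p - 2) + 1   # Bob's secret value, never transmitted

A = pow(g, a, p)        # Alice sends g^a mod p over the open network
B = pow(g, b, p)        # Bob sends g^b mod p over the open network

# Each party combines its own secret with the other's public value;
# both arrive at the same shared secret g^(a*b) mod p.
alice_secret = pow(B, a, p)
bob_secret = pow(A, b, p)
assert alice_secret == bob_secret
```

An eavesdropper sees only g, p, A and B; recovering the shared secret from these is the discrete logarithm problem, believed to be computationally infeasible for large p.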
The intended application of PKCS #3 is in protocols for establishing secure connections, such as those proposed for OSI transport and network layers.
PKCS #5 describes a method for encrypting an octet string with a secret key derived from a password. The result of the method is an octet string. Although PKCS #5 can be used to encrypt arbitrary octet strings, its intended primary application to public-key cryptography is for encrypting private keys when transferring them from one computer system to another, as described in PKCS #8.
PKCS #5 defines two key-encryption algorithms, which employ DES secret-key encryption in cipher-block chaining mode, where the secret key is derived from a password with the MD2 or MD5 message-digest algorithm.
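PKCS #5 version 2.0 later generalised this idea into PBKDF2, which is available in the Python standard library. The salt and iteration count below are illustrative choices:

```python
import hashlib

# Derive an encryption key from a password and salt (PBKDF2, PKCS #5 v2.0).
password = b"correct horse battery staple"
salt = b"\x8f\x12\xa4\x01\x33\x9e\x54\x07"   # random per-key salt, stored alongside
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=16)

# The same password and salt always reproduce the same key ...
assert key == hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=16)
# ... while a different password yields an unrelated key.
assert key != hashlib.pbkdf2_hmac("sha256", b"wrong password", salt, 100_000, dklen=16)
```

The high iteration count deliberately slows the derivation down, which hampers offline password-guessing attacks.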
PKCS #6 describes a syntax for extended certificates. An extended certificate consists of an X.509 public-key certificate and a set of attributes, collectively signed by the issuer of the X.509 public-key certificate. Thus the attributes and the enclosed X.509 public-key certificate can be verified with a single public-key operation, and an ordinary X.509 certificate can be extracted if needed, e.g., for Privacy-Enhanced Mail.
PKCS #7 describes a general syntax for data that may have cryptography applied to it. The syntax admits recursion, so that, for example, one digital envelope can be nested inside another, or one party can sign some previously enveloped digital data.
PKCS #7 is compatible with Privacy-Enhanced Mail (PEM) in that signed-data and signed-and-enveloped-data content, constructed in a PEM-compatible mode, can be converted into PEM messages without any cryptographic operations. PEM messages can similarly be converted into the signed-data and signed-and-enveloped data content types. The values produced according to PKCS #7 are intended to be BER-encoded, which means that the values would typically be represented as octet strings. PKCS #7 does not address mechanisms for encoding octet strings as (say) strings of ASCII characters or other techniques for enabling reliable transmission by re-encoding the octet string. RFC 1421 suggests one possible solution to this problem.
PKCS #8 describes a syntax for private-key information. Private-key information includes a private key for some public-key algorithm and a set of attributes. PKCS #8 also describes a syntax for encrypted private keys. A password-based encryption algorithm (e.g., one of those described in PKCS #5) could be used to encrypt the private-key information.
The intention of including a set of attributes is to provide a simple way for a user to establish trust in information such as a distinguished name or a top-level certification authority's public key. While such trust could also be established with a digital signature, encryption with a secret key known only to the user is just as effective and possibly easier to implement.
PKCS #9 defines selected attribute types for use in PKCS #6 extended certificates, PKCS #7 digitally signed messages, and PKCS #8 private-key information.
PKCS #10 describes a syntax for certification requests. A certification request consists of a distinguished name, a public key, and optionally a set of attributes, collectively signed by the entity requesting certification. Certification requests are sent to a certification authority, who transforms the request to an X.509 public-key certificate, or a PKCS #6 extended certificate.
This standard specifies an application programming interface (API), called "Cryptoki," to devices which hold cryptographic information and perform cryptographic functions. Cryptoki follows a simple object-based approach, addressing the goals of technology independence (any kind of device) and resource sharing (multiple applications accessing multiple devices), presenting to applications a common, logical view of the device called a "cryptographic token". A number of cryptographic algorithms are supported; in addition, new mechanisms can easily be added later without changing the general interface. Additional mechanisms may be published from time to time in separate documents. It is possible for token vendors to define their own mechanisms, although registration through the PKCS process is preferable.
Cryptoki is intended to complement (not compete with) such interfaces as "Generic Security Services Application Programming Interface" (RFCs 1508 and 1509) and "Generic Cryptographic Service API" (GCS-API) from X/Open.
PKCS #12 describes a transfer syntax for personal identity information, including private keys, certificates, miscellaneous secrets, and extensions. Machines, applications, browsers, Internet kiosks, and so on, that support this standard, will allow a user to import, export, and exercise a single set of personal identity information. Direct transfer of personal information is supported under several privacy and integrity modes.
This standard can be viewed as building on PKCS #8 by including essential but ancillary identity information along with private keys and by instituting higher security through public-key privacy and integrity modes.
The focus of this standard is electronic identification. Its purpose is to promote interoperability between applications, hosts and cryptographic tokens with respect to security-related information stored on tokens. For example, the holder of a token containing a digital certificate should be able to present the token to any application running on any host connected to any smart card reader and successfully use it to present the contained certificate to the application. In order to reach this purpose, a file and directory format for storing security-related information on cryptographic tokens (IC Cards, memory cards, files, etc) is specified. The format builds on the PKCS#11 standard.
This standard currently exists only as a draft. The first official version of the standard is planned for release in February/March 1999.
ITU-T Recommendation X.509 specifies the authentication service for X.500 directories, as well as the widely adopted X.509 certificate syntax. Directory authentication in X.509 can be carried out using either secret-key techniques or public-key techniques. The latter is based on public-key certificates. The standard does not specify a particular cryptographic algorithm, although an informative annex of the standard describes the RSA algorithm.
The introduction below was taken from <URL:http://gee.cs.oswego.edu/dl/java/docs12/guide/security/cert3.html> .
All X.509 certificates have the following data, in addition to the signature:

Version: identifies which version of the X.509 standard applies to this certificate, which affects what information can be specified in it. Thus far, three versions are defined.

Serial Number: the entity that created the certificate is responsible for assigning it a serial number to distinguish it from other certificates it issues. This information is used in numerous ways; for example, when a certificate is revoked its serial number is placed in a Certificate Revocation List (CRL).

Signature Algorithm Identifier: identifies the algorithm used by the CA to sign the certificate.

Issuer Name: the X.500 name of the entity that signed the certificate. This is normally a CA. Using this certificate implies trusting the entity that signed this certificate. (Note that in some cases, such as root or top-level CA certificates, the issuer signs its own certificate.)

Validity Period: each certificate is valid only for a limited amount of time. This period is described by a start date and time and an end date and time, and can be as short as a few seconds or almost as long as a century. The validity period chosen depends on a number of factors, such as the strength of the private key used to sign the certificate or the amount one is willing to pay for a certificate. This is the expected period for which entities can rely on the public value, if the associated private key has not been compromised.

Subject Name: the name of the entity whose public key the certificate identifies. This name uses the X.500 standard, so it is intended to be unique across the Internet. This is the Distinguished Name (DN) of the entity.

Subject Public Key Information: the public key of the entity being named, together with an algorithm identifier which specifies which public-key cryptosystem this key belongs to and any associated key parameters.
X.509 Version 1 has been available since 1988, is widely deployed, and is the most generic.
X.509 Version 2 introduced the concept of subject and issuer unique identifiers to handle the possibility of reuse of subject and/or issuer names over time. Most certificate profile documents strongly recommend that names not be reused, and that certificates should not make use of unique identifiers. Version 2 certificates are not widely used.
X.509 Version 3 is the most recent (1996) and supports the notion of extensions, whereby anyone can define an extension and include it in the certificate. Some common extensions in use today are: KeyUsage (limits the use of the keys to particular purposes such as "signing-only") and AlternativeNames (allows other identities to also be associated with this public key, e.g. DNS names, Email addresses, IP addresses). Extensions can be marked critical to indicate that the extension should be checked and enforced/used. For example, if a certificate has the KeyUsage extension marked critical and set to "keyCertSign", then if this certificate is presented during SSL communication, it should be rejected, as the certificate extension indicates that the associated private key should only be used for signing certificates and not for SSL use.
The X.509 specification is written in ASN.1.
Access control systems typically operate in terms of access control lists (ACL), which associate each user with a number of actions he or she is allowed to perform. ACLs may also keep additional information, for example, which actions must be monitored for which users. ACLs often allow for user grouping to reduce administration overheads.
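A minimal sketch of such an ACL with group support (all user, group and resource names are hypothetical):

```python
# Each resource maps users (or groups) to the set of actions they may perform.
acl = {
    "catalogue": {"alice": {"read", "write"}, "staff": {"read"}},
    "orders":    {"alice": {"read"}},
}
# Grouping users reduces administration overheads: granting to "staff"
# covers every member at once.
groups = {"staff": {"bob", "carol"}}

def allowed(user: str, resource: str, action: str) -> bool:
    entries = acl.get(resource, {})
    if action in entries.get(user, set()):
        return True
    # Fall back to permissions granted to any group the user belongs to.
    return any(action in entries.get(g, set())
               for g, members in groups.items() if user in members)

assert allowed("alice", "catalogue", "write")
assert allowed("bob", "catalogue", "read")      # via the "staff" group
assert not allowed("bob", "catalogue", "write")
```

Extra per-entry fields (for example a "monitor" flag) can be attached in the same structure to record which actions must be audited for which users.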
In this model, control is highly centralised: only one person or organisation administers and enforces the access control requirements. However, it is not always easy to centralise policy control. Akenti <URL:http://www-itg.lbl.gov/Akenti/docs/overview.html> provides a way to express and to enforce an access control policy without requiring a central enforcer and administrative authority. The system's architecture is intended to provide scalable security services in highly distributed network environments.
"Akenti was intended
The following is a citation from the overview of Akenti.
The resource that Akenti controls may be information, processing or communication capabilities, or a physical system such as a scientific instrument. Access can be the ability to obtain information from the resource (e.g., "read" access), to modify the resource (e.g., "write" access), or to cause that resource to perform certain functions (e.g., changing instrument control set points).
The approach makes use of:
X.509 Certificate Authorities are used to issue and digitally sign the identity certificates for the user. The certificates are stored in LDAP servers. The users manage their certificates with standard programs that implement PKI, such as the Netscape browser.
Use-condition certificates are signed documents, remotely created and stored by resource stakeholders that specify the conditions for access to the resource. They include combinations of required attributes and values, name of the resource, and permitted actions.
Attribute certificates are signed documents, remotely created and stored, that specify that a user possesses a specific attribute (for example, membership in a named group, completion of a certain training course, or membership in an organisation).
Currently, Akenti is used by an Apache Web server that implements a hierarchical policy model and runs over SSL. This server is being used to provide access control of experimental results that are generated by members of the DOE 2000 Diesel Combustion Collaboratory.
Akenti access control has been added to a CORBA ORB (Orbix with SSL).
Over the past few years there has been a growing amount of business conducted over the Internet -- this form of business is called electronic commerce or e-commerce. E-commerce comprises online banking, online brokerage accounts, and Internet shopping, to name a few of the many applications. As more and more business is conducted over the Internet, the need for protection against fraud, theft and corruption of vital information increases. One cryptographic solution to this problem is to encrypt the credit card number (or other private information) when it is entered on-line, another is to secure the entire session.
Electronic money (also called electronic cash or digital cash) is a term that is still fairly vague and undefined. It refers to transactions carried out electronically with a net result of funds transferred from one party to another. Electronic money may be either debit or credit. Digital cash is basically another currency, and digital cash transactions can be visualised as a foreign exchange market. This is because we need to convert an amount of money to digital cash before we can spend it. The conversion process is analogous to purchasing foreign currency.
Pioneer work on the theoretical foundations of digital cash was carried out by Chaum. Digital cash in its precise definition may be anonymous or identified. Anonymous schemes do not reveal the identity of the customer and are based on the blind signature schemes described below. Identified spending schemes always reveal the identity of the customer and are based on more general forms of signature schemes. Anonymous schemes are the electronic analogue of cash, while identified schemes are the electronic analogue of a debit or credit card. There are intermediate approaches as well: payments can be anonymous with respect to the merchant but not the bank, or anonymous to everyone but traceable (a sequence of purchases can be related, but not linked directly to the spender's identity).
A technique known as blind signatures is employed to implement anonymous payment systems [13].
The basic concept of an untraceable payments system is that the bank will sign anything with its private key, and anything so signed is worth a fixed amount. A single note is formed by the payer, blinded, signed by the bank, stripped of the blinding factor by the payer, provided to the payee, and cleared by the bank.
Chaum demonstrated the implementation of this concept using RSA signatures.
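The protocol can be sketched with textbook RSA on toy primes (illustrative only; the note identifier is invented). Blinding multiplies the note by r^e, so the bank's signature on the blinded value, once divided by r, is a valid signature on the note itself, yet the bank never saw the note:

```python
import hashlib
import secrets
from math import gcd

# Textbook RSA blind signature (after Chaum) with toy primes.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # the bank's private exponent

note = int.from_bytes(hashlib.sha256(b"one-dollar-note#42").digest(), "big") % n

# Payer: blind the note with a random factor r (invertible mod n).
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (note * pow(r, e, n)) % n

# Bank: signs the blinded value without ever seeing the note itself.
blind_sig = pow(blinded, d, n)      # equals note^d * r  (mod n)

# Payer: strips the blinding factor, leaving the bank's signature on the note.
signature = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the signature against the bank's public key (e, n).
assert pow(signature, e, n) == note
```

When the note is later deposited, the bank can verify its own signature but cannot link the note back to the withdrawal, which is what makes the payment untraceable.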
Micropayments are payments of small sums of money, generally in denominations smaller than those in which physical currency is available. It is envisioned that sums of as little as 1/1000th of a cent may someday be used to pay for content access or for small quantities of network resources. Conventional electronic payment systems require too much computation to handle such sums with acceptable efficiency. Micropayment systems enable payments of this size to be achieved in a computationally lightweight manner, generally by sacrificing some degree of security.
Visa and MasterCard have jointly developed the Secure Electronic Transaction (SET) protocol as a method for secure, cost effective bankcard transactions over open networks. SET includes protocols for purchasing goods and services electronically, requesting authorisation of payment, and requesting "credentials" (i.e. certificates) binding public keys to identities, among other services. SET supports DES for bulk data encryption and RSA for signatures and public-key encryption of data encryption keys and bankcard numbers. SET uses 1024-bit keys for all operations except for certification signing by the Root Certification Authority, when 2048-bit keys are utilised. The SHA-1 algorithm is employed for hashes. The SET standard is published as an open specification. Information on SET can be found at <URL:http://www.visa.com/cgi-bin/vee/nt/ecomm/set/intro.html?2+0>.
A smart card is a card similar in shape and size to a credit card, with a chip embedded in it. The chip can process different kinds of information, and therefore various industries use smart cards in different ways. The typical smart card has an 8-bit microprocessor, 256 bytes of RAM, 8K bytes of non-volatile memory, and 20K bytes of ROM. Smart card chip design has concentrated on security features rather than on traditional speed, functionality, and capacity features.
Different types of cards in use today are contact, contactless and combination cards. Contact smart cards must be inserted into a smart card reader. These cards have a contact plate on the face which makes an electrical connection for reads from, and writes to, the chip when the card is inserted into the reader. Contactless smart cards have an antenna coil, as well as a chip, embedded within the card. The internal antenna allows the card to communicate with, and draw power from, a receiving antenna at the transaction point. Close proximity is required for such transactions, which can decrease transaction time while increasing convenience. A combination card functions as both a contact and a contactless smart card.
ISO developed standards (ISO 7816) for integrated circuit cards with contacts. These specifications focused on interoperability at the physical, electrical, and data-link protocol levels. In 1996, Europay, MasterCard, and VISA (EMV) defined an industry-specific smart card specification that adopted the ISO 7816 standards and defined some additional data types and encoding rules for use by the financial services industry. ISO 7816 has six parts. Some have been completed; others are currently in draft stage.
Books on smart card hardware and software are available. Smart Card Developer's Kit includes a smart card that can be used to familiarise the reader with this technology. More information is available at <URL:http://www.scdk.com> .
The term biometrics applies to a broad range of electronic techniques that employ the physical characteristics of human beings as a means of authentication. A number of biometric techniques have been proposed for use with computer systems. These include (among a wide variety of others) fingerprint readers, iris scanners, face imaging devices, hand geometry readers, and voice readers. Usage of biometric authentication techniques is often recommended in conjunction with other user authentication methods, rather than as a single, exclusive method.
Fingerprint readers are likely to become a common form of biometric authentication device in the coming years. To identify herself to a server using a fingerprint reader, a user places her finger on a small reading device. This device measures various characteristics of the patterns associated with the fingerprint of the user, and typically transmits these measurements to a server. The server compares the measurements taken by the reader against a registered set of measurements for the user. The server authenticates the user only if the two sets of measurements correspond closely to one another. One significant characteristic of this and other biometric technologies is that matching must generally be determined on an approximate basis, with parameters tuned appropriately to make the occurrence of false positive matches or false negative rejections acceptably infrequent.
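The approximate-matching idea above can be sketched in a few lines of Python. The feature vectors and the threshold value are purely illustrative assumptions; a real biometric system would use far richer templates and a carefully tuned decision procedure:

```python
import math

def euclidean_distance(a, b):
    """Distance between two fingerprint feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def authenticate(measured, enrolled, threshold):
    """Accept the user only if the fresh measurement is close enough
    to the registered template.  Loosening the threshold reduces
    false rejections but increases false acceptances, and vice versa."""
    return euclidean_distance(measured, enrolled) <= threshold

# Hypothetical 4-element feature vectors, for illustration only.
template = [0.82, 0.31, 0.55, 0.67]   # registered at enrolment
fresh    = [0.80, 0.33, 0.54, 0.69]   # same finger, slightly noisy reading
other    = [0.10, 0.90, 0.20, 0.40]   # a different finger

print(authenticate(fresh, template, threshold=0.1))   # True
print(authenticate(other, template, threshold=0.1))   # False
```

The key point is that, unlike a password check, the comparison is never exact equality: the threshold parameter is what trades false positives against false negatives.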
The following categories of software are distinguished when speaking about software distribution policies .
Of all the above categories, only proprietary software is subject to copy protection issues.
There are two basic methods of software-based copy protection: registration on installation and the "try and then buy" method.
The registration on installation procedure prompts the user to input information (name/organisation etc.) during installation, encrypts and embeds this information in the installed program and, where possible, on the installation media. It then displays this information prominently each time the software is used. While it is easy to make a copy and share it with others, it may be embarrassing to run a program with someone else's name on it. This type of copy protection is called "copy discouragement".
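The embed-and-display mechanism can be sketched as follows. The Base64 obfuscation here is a stand-in assumption for the encryption the text mentions (a shipping product would use a proper cipher), and the function and field names are hypothetical:

```python
import base64

def embed_registration(name, organisation):
    """At install time: lightly obfuscate the owner details before
    writing them into the installed program and, where possible,
    onto the installation media.  (Base64 is a placeholder for the
    real encryption step.)"""
    blob = f"{name}|{organisation}".encode("utf-8")
    return base64.b64encode(blob).decode("ascii")

def show_banner(embedded):
    """At every start-up: recover the owner details and display them
    prominently, so a shared copy still names its registered owner."""
    name, organisation = base64.b64decode(embedded).decode("utf-8").split("|")
    print(f"This copy is registered to {name} ({organisation})")

token = embed_registration("A. User", "Example Library")
show_banner(token)   # prints: This copy is registered to A. User (Example Library)
```

Nothing here prevents copying; the deterrent is purely social, which is exactly why the text calls it "copy discouragement" rather than copy protection.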
The "try and then buy" approach is used when authors wish to encourage copying of data or programs. Although copying is not restricted, software distributed in this fashion is often not fully functional: some features may be disabled, the program may stop working after a short period of time, or it may nag the user with message boxes. Potential customers install the package and can immediately see a demonstration of what is available for sale. If they find that the application covers their needs and want to purchase it, they can contact a central location and pay for an access code or an activation key that unlocks the disabled features. Having registered the software, users also become entitled to technical support, updates, manuals, etc.
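The activation-key exchange can be sketched as below. The HMAC construction and the vendor secret are illustrative assumptions, not anything the text prescribes; the point is only that the vendor can derive a key the installed demo can verify without contacting the vendor again:

```python
import hmac
import hashlib

VENDOR_SECRET = b"hypothetical-vendor-secret"   # known only to the vendor

def issue_key(customer: str) -> str:
    """Run at the vendor's central location after payment:
    derive an activation key bound to this customer's name."""
    return hmac.new(VENDOR_SECRET, customer.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

def unlock(customer: str, key: str) -> bool:
    """Run inside the installed demo: enable the disabled features
    only when the key matches the registering customer."""
    return hmac.compare_digest(issue_key(customer), key)

key = issue_key("A. User")
print(unlock("A. User", key))    # True  -> full version enabled
print(unlock("B. Other", key))   # False -> stays in demo mode
```

A real product would of course hide the secret from the client (for example by having the server sign, rather than MAC, the customer name), but the unlock flow is the same.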
Some software products are supplied with a hardware device called a dongle. A dongle must be connected to the computer while the program is running. Some programs attempt to query the dongle at start-up and terminate if it does not respond or responds incorrectly. Other programs use functions of the dongle, such as decryption of data. The disadvantages of dongles are their high cost and bulkiness; they are also known to cause hardware conflicts.
A cheaper alternative for copy protection is a master disk. If that disk is not inserted, the program will not run. The disk is created using special equipment so a user cannot duplicate it.
Another solution for enforcement of software distribution and usage policies is the installation of a special chip on the computer's motherboard during manufacturing, or on an interface card. Such chips can provide metering services or "pay per use" functionality. This special chip generally consists of:
Internet standards and recommendations are maintained by the Internet Engineering Task Force (IETF). These documents are known as RFCs (Requests For Comments). RFCs and Internet drafts can be downloaded by anonymous FTP from ds.internic.net and its mirrors.
X.509 specifies a general architectural model for certificate management, with many aspects left undefined (specific protocols, data formats, etc.). The IETF is building a set of standards around X.509, called the Internet Public Key Infrastructure (PKIX), in order to support trusted electronic services on the Internet. The standards have been published as a multi-part Internet draft .
The development of security systems based on X.509 involves significant overhead because of the support required for ASN.1 and a global directory infrastructure. Simple Public Key Infrastructure (SPKI) is an IETF effort to create a more efficient certificate technology. To improve the efficiency and flexibility of digital certificates, SPKI defines a flexible certificate syntax and several specialised certificate types (name certificate, authorisation certificate, access control list). SPKI has been published as an Internet draft consisting of four parts:
"Simple Public Key Certificate" (draft-ietf-spki-cert-structure-05.txt),
"SPKI Requirements" (draft-ietf-spki-cert-req-01.txt),
"SPKI Certificate Theory" (draft-ietf-spki-cert-theory-02.txt),
"SPKI Examples" (draft-ietf-spki-cert-examples-01.txt).
RFC 2065 defines security extensions to the Internet Domain Name System. It defines procedures for secure name resolution, which employ digital signatures to provide data integrity and authentication. Additionally, it specifies a way for certified public keys to be stored within DNS entries.
The IETF has specified a number of standards for securing e-mail communications.
Privacy Enhancement for electronic Mail (PEM), described in RFCs 1421-1424, was historically the first application of X.509. Attempts to implement PEM failed because the initial version of X.509 was too strictly tied to the hierarchical structure of X.500. The development of PEM has led to the PKIX standards.
Secure MIME (S/MIME), specified in RFCs 2311 and 2312, is another standard for securing Internet e-mail. S/MIME defines security services for MIME , following the syntax given in PKCS #7 for digital signatures and encryption. S/MIME has been endorsed by a number of leading networking and messaging vendors, including ConnectSoft, Frontier, FTP Software, Qualcomm, Netscape, Lotus, Wollongong, Banyan, NCD, SecureWare, VeriSign, Microsoft, and Novell. See  for more details on S/MIME.
SSL is a protocol, originally developed by Netscape Communications Corporation , designed to provide a secure communication channel between two applications in a client-server interaction. It is layered directly above TCP or another transport service. The data passed through an SSL connection is encrypted with a symmetric encryption algorithm such as DES or RC4. Public-key cryptographic techniques are used for authentication and session-key establishment between the communicating applications.
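The layering described above (TCP underneath, certificate-based server authentication, a negotiated symmetric cipher on top) can be sketched with Python's standard-library `ssl` module. The host name and the exact request are illustrative assumptions, and modern implementations negotiate ciphers such as AES rather than the DES/RC4 of the original protocol:

```python
import socket
import ssl

def fetch_over_ssl(host: str, port: int = 443) -> bytes:
    """Open a plain TCP connection, then layer SSL/TLS on top of it.
    The server is authenticated via its certificate chain, and all
    application data is protected by the negotiated symmetric cipher."""
    context = ssl.create_default_context()        # loads trusted CA certificates
    with socket.create_connection((host, port)) as tcp:
        with context.wrap_socket(tcp, server_hostname=host) as tls:
            # tls.version() and tls.cipher() report the negotiated
            # protocol version and symmetric cipher suite.
            request = b"HEAD / HTTP/1.0\r\nHost: " + host.encode("ascii") + b"\r\n\r\n"
            tls.sendall(request)
            return tls.recv(256)

# Example usage (requires network access):
#   print(fetch_over_ssl("example.org"))
```

Note that the application code sees an ordinary socket-like object: the encryption and handshake are entirely handled by the SSL layer, which is what allows SSL to sit transparently beneath protocols such as HTTP.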
Arrangements must be made for SSL channels to pass through firewalls and other proxies, which may also provide routes for unwelcome visitors. Since the channel is encrypted, there is no way to monitor what is passing through the firewall.
SSL provides confidentiality and authentication of request and response messages. It can be used to exchange certificates to authenticate the server and client machines; however, this assumes the presence of (commercial) third-party Certificate Authorities, which may not be appropriate within the UK. No record is kept of each authentication, so non-repudiation is not possible. The main problems with SSL are the low level of security available in the export version and the difficulty of interacting with application-specific intermediaries such as proxies and caches. Despite its strong commercial support, SSL is not a complete solution to web security.
SSL has been adopted by the IETF under the name Transport Layer Security (TLS). The standard was recently published as the Internet draft draft-ietf-tls-protocol-05.txt.
RFC 1825 specifies security mechanisms and security services for the Internet Protocol (v.4 and v.6). The set of services includes: authentication and non-repudiation of IP datagram origin, IP data integrity, and IP data encryption. In order to provide these services, the standard defines two mechanisms: the IP Authentication Header (authentication and integrity, but no encryption) and the IP Encapsulating Security Payload (all services). The information required for generation and verification of secured IP datagrams (such as keys and algorithm IDs) forms a "Security Association". The security association is uniquely identified by the Security Parameter Index and the destination host address in the IP datagram header.
The standard specifies default cryptographic algorithms (keyed MD5, DES CBC) to ensure interoperability in the global Internet. However, key management protocols are left undefined.
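The Security Association lookup and the keyed-MD5 check can be sketched as below. The SA fields shown are a small illustrative subset, and the authenticator is simplified to MD5(key || data || key); the real RFC 1828 construction adds key padding:

```python
import hashlib
from dataclasses import dataclass

@dataclass
class SecurityAssociation:
    """Illustrative subset of the parameters shared by two endpoints."""
    spi: int          # Security Parameter Index
    dest: str         # destination host address
    auth_key: bytes   # key for the authentication algorithm

# The receiver keeps a table of SAs, indexed exactly as RFC 1825
# describes: by Security Parameter Index plus destination address.
sa_table = {
    (0x1001, "192.0.2.7"):
        SecurityAssociation(0x1001, "192.0.2.7", b"shared-key"),
}

def verify_datagram(spi, dest, payload, received_digest):
    """Look up the SA for this (SPI, destination) pair and check a
    simplified keyed-MD5 authenticator over the payload."""
    sa = sa_table.get((spi, dest))
    if sa is None:
        return False   # no matching SA: the datagram must be dropped
    digest = hashlib.md5(sa.auth_key + payload + sa.auth_key).digest()
    return digest == received_digest

good = hashlib.md5(b"shared-key" + b"hello" + b"shared-key").digest()
print(verify_datagram(0x1001, "192.0.2.7", b"hello", good))   # True
print(verify_datagram(0x9999, "192.0.2.7", b"hello", good))   # False
```

Because the SPI travels in the clear in the datagram header, the receiver can locate the right keys and algorithms before doing any cryptography, which is what makes the per-association key management workable.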
The Generic Security Services API (GSS-API), described in RFC 1508, RFC 1509, and RFC 2078, is an abstract specification of an interface to security services. GSS-API functionality provides data origin authentication and non-repudiation, data integrity, and data confidentiality. The services and primitives are specified in a way that makes them independent of particular programming languages and cryptographic protocols. Mappings of GSS-API to particular computing environments are expected to be defined in complementary specifications.
PGP (Pretty Good Privacy) is an RSA-based public-key cryptosystem, available in source code for various UNIX flavours, Windows 95/NT, and MacOS. It uses IDEA for message transfers. The International PGP Home Page is at <URL:http://www.pgpi.com> ; software can be downloaded from <URL:ftp://ftp.no.pgpi.com/pub/pgp> . PGP is free for non-commercial use; for commercial use, an IDEA licence is required. See the PGP distribution for details.
The SSLeay package contains libraries that implement SSL, DES, RC2, RC4, Blowfish, IDEA, MD2, MD5, SHA, SHA-1, MDC2, RSA, DSA, and Diffie-Hellman algorithms. X.509 v3 encoding/decoding is also included. The package is available from <URL:ftp://ftp.psy.uq.oz.au/pub/Crypto/> and its mirrors, such as <URL:ftp://src.doc.ic.ac.uk/Mirrors/ftp.psy.uq.oz.au/pub/Crypto/> .
1999-01-22 | PRIDE Requirements and Success Factors