
Table of Contents

ACID
Kerckhoffs' Principle
Shannon's Criteria
Classical Cryptography
Symmetric Cryptography
Block Ciphers
    DES ("Data Encryption Standard")
        Use and export of DES
        DES modes
        DES strength
        Improving the Security of DES
            Double DES
            Triple DES
    IDEA
        Key schedule
    AES
    Using block ciphers
        Electronic codebook (ECB)
        Cipher-block chaining (CBC)
        Output feedback (OFB)
        Counter (CTR)
    One-time pad
Asymmetric Cryptography
    Diffie-Hellman
    RSA
        Key generation
        Encryption
        Decryption
Secure Hashing
    MD-5
        Algorithm
        Vulnerability
    SHA-1
Cryptographic Checksums
    HMAC
Digital Signatures
    Properties to verify
    Arbitrated signature
    Direct signature
    DSA/DSS
Certificate Authority
    X.509
        Structure of a certificate
    Sample X.509 certificates
    Self-signed certificate
Key Distribution Centre (KDC)
Blind signature
    Blind RSA signatures
    Dangers of blind signing
    Zero-knowledge
        Abstract example
        Definition
        Practical example
        Variants of zero-knowledge
        Applications
Digital cash
    How does Digital Cash work?
    Key Properties of a Private Digital Cash System
Secure communication
    Secure channel
        Secure channels in the real world
Trusted computing base
    Definition and characterization
    Properties of the TCB
        Predicated upon the security policy: TCB is in the eye of the consultant
        A prerequisite to security
        Software parts of the TCB need to protect themselves
        Trusted vs. trustworthy
        TCB size
    Access control matrix
    Access control list
    Capability list
    Discretionary access control (DAC)
    Mandatory access control (MAC)
Authentication models
    Kerberos
        Protocol
            User Client-based Logon
            Client Authentication
            Client Service Authorization
            Client Service Request
Firewall
Secure communication
    SSL/TLS
        Cipher suite
        Application
        Security
        How it works
        TLS handshake in detail
            Simple TLS handshake
            Client-authenticated TLS handshake
            Resumed TLS handshake
    SSH
    IPSec
    PGP
        Compatibility
        Digital signatures
        Web of trust
        Certificates
        Security quality

ACID
Confidentiality - the degree to which a service or piece of information is protected against being read by intruders.
Integrity - the degree to which a service or piece of information is protected against modification or corruption by intruders.
Authenticity - the degree to which a service or piece of information is genuine, i.e. protected against impersonation by intruders.
Availability - the degree to which a service or piece of information is protected against denial of provision or access caused by intruders.

Kerckhoffs' Principle
To evaluate the security of a cryptographic technique, we must assume that it is known to any potential adversary.

Shannon's Criteria
1. Amount of secrecy. Perfect system: the adversary learns nothing after intercepting any amount of encrypted data.

2. Key size. It is desirable for the key to be as small as possible (transmission without interception, memorization).

3. Complexity of the encryption and decryption operations. It should be minimal.

4. Error propagation. A transmission error may cause many errors in the text after decryption.

5. Message expansion. It is undesirable for the text to grow in size after encryption.

Classical Cryptography
Pre-computational cryptography consisted of a set of character substitution and transposition methods that could be carried out by hand (or even mentally) by the sender and the recipient of a message. The appearance of specialized machines and, later, of computers brought about a significant evolution of cryptographic techniques.

Symmetric Cryptography

Encryption consists of applying an algorithm to data so that it becomes unreadable; to recover the original data it is necessary to know the decryption algorithm. The basic applications of cryptography are confidentiality (guaranteeing that only authorized parties can read the data) and authentication/integrity (guaranteeing that the data has the correct origin and was not altered between origin and destination). In practice, keys are used together with the algorithms: even if the algorithms are known, the correct key is still required. Symmetric cryptography, also known as traditional cryptography, uses a single key that serves both to encrypt and to decrypt. Public-key cryptography (more recent) uses one key to encrypt and another key to decrypt. No mechanism is 100% effective: from a purely theoretical standpoint it is immediate that any key can be broken by brute force (assuming one has a copy of the same message in both its original and encrypted forms, and the algorithm is known, it suffices to try every possible key until one works). The time needed to break a key by brute force depends on the number of possible keys (the number of key bits) and on the execution time of the algorithm. The big problem with this approach is that the processing power of equipment has been doubling every 18 months, so every 18 months one more bit must be added to the keys.
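As a rough illustration of the brute-force argument above, the sketch below estimates how long an exhaustive key search would take for a given key length; the tested_per_second figure is an arbitrary assumption, not a measurement.

```python
def brute_force_years(key_bits: int, tested_per_second: float = 1e9) -> float:
    """Worst-case time to try every key of the given length, in years."""
    keys = 2 ** key_bits                  # number of possible keys
    seconds = keys / tested_per_second    # exhaustive search time
    return seconds / (3600 * 24 * 365)

for bits in (40, 56, 64, 128):
    print(f"{bits}-bit key: about {brute_force_years(bits):.3g} years")
```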

Block Ciphers
DES ("Data Encryption Standard")

One of the most widely used encryption systems today is the Data Encryption Standard (DES), developed in the 1970s and patented by researchers at IBM. The DES was an outgrowth of another IBM cipher known as Lucifer. IBM made the DES available for public use, and the federal government issued Federal Information Processing Standard Publication (FIPS PUB) Number 46 in 1977 describing the system. Since that time, the DES has been periodically reviewed and reaffirmed, most recently on December 30, 1993, as FIPS PUB 46-2, which remained in force until 1998. It has also been adopted as an American National Standard (X3.92-1981/R1987). The DES performs a series of bit permutation, substitution, and recombination operations on blocks containing 64 bits of data and 56 bits of key (eight 7-bit characters). The 64 bits of input are permuted initially, and are then input to a function using static tables of permutations and substitutions (called S-boxes). The bits are permuted in combination with 48 bits of the key in each round. This process is iterated 16 times (rounds), each time with a different set of tables and different bits from the key. The algorithm then performs a final permutation, and 64 bits of output are produced. The algorithm is structured in such a way that changing any bit of the input has a major effect on almost all of the output bits. Indeed, the output of the DES function appears so unrelated to its input that the function is sometimes used as a random number generator.
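A minimal sketch of a single DES block encryption, assuming the third-party PyCryptodome package (Crypto.Cipher.DES) is available; the key and plaintext are arbitrary 8-byte examples.

```python
from Crypto.Cipher import DES   # assumes PyCryptodome is installed

key = b"8bytekey"               # 64 bits on the wire, 56 effective key bits
cipher = DES.new(key, DES.MODE_ECB)

block = b"ABCDEFGH"             # DES operates on 64-bit (8-byte) blocks
ciphertext = cipher.encrypt(block)
assert DES.new(key, DES.MODE_ECB).decrypt(ciphertext) == block
print(ciphertext.hex())
```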

Although there is no standard UNIX program that performs encryption using the DES, some vendors' versions of UNIX include a program called des which performs DES encryption. (This command may not be present in international versions of the operating system, as described in the next section.)

Use and export of DES

The DES was mandated as the encryption method to be used by all federal agencies in protecting sensitive but unclassified information.[15] The DES is heavily used in many financial and communication exchanges. Many vendors make DES chips that can encode or decode information fast enough to be used in data-encrypting modems or network interfaces. Note that the DES is not (and has never been) certified as an encryption method that can be used with U.S. Department of Defense classified material. Export control rules restrict the export of hardware or software implementations of the DES, even though the algorithm has been widely published and implemented many times outside the United States. If you have the international version of UNIX, you may find that your system lacks a des command. If you find yourself in this position, don't worry; good implementations of the DES can be obtained via anonymous FTP from almost any archive service, including the Usenet comp.sources archives. For more information about export of cryptography, see "Encryption and U.S. Law," later in this chapter.

DES modes

FIPS PUB 81 explains how the DES algorithm can be used in four modes: Electronic Code Book (ECB), Cipher Block Chaining (CBC), Cipher Feedback (CFB) and Output Feedback (OFB). Each mode has particular advantages in some circumstances, such as when transmitting text over a noisy channel, or when it is necessary to decrypt only a portion of a file. The following provides a brief discussion of these four methods; consult FIPS PUB 81 or a good textbook on cryptography for details.

ECB Mode. In electronic code book (ECB) mode, each block of the input is encrypted using the same key, and the output is written as a block. This method performs simple encryption of a message, a block at a time. This method may not indicate when portions of a message have been inserted or removed. It works well with noisy transmission channels: alteration of a few bits will corrupt only a single 64-bit block.

CBC Mode. In cipher block chaining (CBC) mode, the plaintext is first XOR'ed with the encrypted value of the previous block. Some known value (usually referred to as the initialization vector, or IV) is used for the first block. The result is then encrypted using the key.

Unlike ECB mode, long runs of repeated characters in the plaintext will be masked in the output. CBC mode is the default mode for Sun Microsystems' des program.

CFB Mode. In cipher feedback (CFB) mode, the output is fed back into the mechanism. After each block is encrypted, part of it is shifted into a shift register. The contents of this shift register are encrypted with the user's key value using (effectively) ECB mode, and this output is XOR'd with the data stream to produce the encrypted result. This method is self-synchronizing, and enables the user to decrypt only a portion of a large database by starting a fixed distance before the start of the desired data.

OFB Mode. In output feedback (OFB) mode, the output is also fed back into the mechanism. A register is initialized with some known value (again, the IV). This register is then encrypted with (effectively) ECB mode using the user's key. The result of this is used as the key to encrypt the data block (using an XOR operation), and it is also stored back into the register for use on the next block. The algorithm effectively generates a long stream of key bits that can be used to encrypt/decrypt communication streams, with good tolerance for small bit errors in the transmission. This mode is almost never used in UNIX-based systems.

All of these modes require that byte and block boundaries remain synchronized between the sender and recipient. If information is inserted or removed from the encrypted data stream, it is likely that all of the following data from the point of modification can be rendered unintelligible.

DES strength

Ever since DES was first proposed as a national standard, some people have been suspicious of the algorithm. DES was based on a proprietary encryption algorithm developed by IBM called Lucifer, which IBM had submitted to the National Bureau of Standards (NBS)[16] for consideration as a national cryptographic standard. But whereas Lucifer had a key that was 112 bits long, the DES key was shortened to 56 bits at the request of the National Security Agency. The NSA also requested that certain changes be made in the algorithm's S-boxes. Many people suspected that the NSA had intentionally weakened the Lucifer algorithm, so that the final standard adopted by NBS would not pose a threat to the NSA's ongoing intelligence collection activities. But nobody had any proof.

Today the DES is more than 20 years old, and the algorithm is definitely showing its age. Recently Michael Wiener, a researcher at Bell-Northern Research, published a paper detailing how to build a machine capable of decrypting messages encrypted with the DES by conducting an exhaustive key search. Such a machine could be built for a few million dollars, and could break any DES-encrypted message in about a day. We can reasonably assume that such machines have been built by both governments and private industry.

In June 1994, IBM published a paper describing the design criteria of the DES. The paper claims that the choices of the DES key size, S-boxes, and number of rounds were a direct result of the conflicting goals of making the DES simple enough to fit onto a single chip with 1972 chip-making technology, and the desire to make it resistant to differential cryptanalysis. These two papers, coupled with many previously published analyses, appear to have finally settled a long-running controversy as to whether or not the NSA had intentionally built weaknesses into the DES. The NSA didn't build a back door into DES that would have allowed it to forcibly decrypt any DES-encrypted transmission: it didn't need to. Messages encrypted with DES can be forcibly decrypted simply by trying every possible key, given the appropriate hardware.

Improving the Security of DES

You can improve the security of DES by performing multiple encryptions, known as superencryption. The two most common ways of doing this are with double encryption (Double DES) and with triple encryption (Triple DES). While Double DES appears to add significant security, research has found some points of attack, and therefore experts recommend Triple DES for applications where single DES is not adequate.

Double DES

In Double DES, each 64-bit block of data is encrypted twice with the DES algorithm, first with one key, then with another: encrypt with key 1, then encrypt with key 2.

plaintext -> E(key1) -> E(key2) -> ciphertext

Double DES is not significantly more secure than single DES. In 1981, Ralph Merkle and Martin Hellman published an article[17] in which they outlined a so-called "meet-in-the-middle attack." [17] R. C. Merkle and M. Hellman, "On the Security of Multiple Encryption," Communications of the ACM, Volume 24, Number 7, July 1981, pp. 465-467. The meet-in-the-middle attack is a known-plaintext attack which requires that an attacker have both a known piece of plaintext and a block of that same text that has been encrypted. (These pieces are surprisingly easy to get.) The attack requires storing 2^56 intermediate results when trying to crack a message that has been encrypted with Double DES (a total of 2^59 bytes), but it reduces the number of different keys you need to check from 2^112 to 2^57. "This is still considerably more memory storage than one could comfortably comprehend, but it's enough to convince the most paranoid of cryptographers that double encryption is not worth anything," writes Bruce Schneier in his landmark volume, Applied Cryptography.

In other words, because a message encrypted with DES can be forcibly decrypted by an attacker performing an exhaustive key search today, an attacker might also be able to forcibly decrypt a message encrypted with Double DES using a meet-in-the-middle attack at some point in the future.

Triple DES

The dangers of the Merkle-Hellman meet-in-the-middle attack can be circumvented by performing three block encryption operations. This method is called Triple DES. In practice, the most common way to perform Triple DES is: encrypt with key1, decrypt with key2, encrypt with key3. The advantage of this technique is that it can be backward compatible with single DES, simply by setting all three keys to the same value. To decrypt, reverse the steps: decrypt with key3, encrypt with key2, decrypt with key1. For many applications, you can use the same key for both key1 and key3 without creating a significant vulnerability.

Triple DES appears to be roughly as secure as single DES would be if it had a 112-bit key. How secure is this really? Suppose you had an integrated circuit which could perform one million Triple DES encryptions per second, and you built a massive computer containing one million of these chips to forcibly try all Triple DES keys. This computer, capable of testing 10^12 encryptions per second, would require:

2^112 = 5.19 x 10^33 encryption operations
5.19 x 10^33 encryption operations / 10^12 operations/sec = 5.19 x 10^21 sec = 1.65 x 10^14 years

This is more than 16,453 times the currently estimated age of the universe (approximately 10^10 years). Apparently, barring new discoveries uncovering fundamental flaws or weaknesses in the DES algorithm, or new breakthroughs in the field of cryptanalysis, Triple DES is the most secure private key encryption algorithm that humanity will ever need (although niche opportunities may exist for faster algorithms).
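The encrypt-decrypt-encrypt construction described above can be sketched directly from three single-DES operations (again assuming PyCryptodome); setting all three keys equal makes it collapse to single DES, which is the backward-compatibility property mentioned in the text.

```python
from Crypto.Cipher import DES   # assumes PyCryptodome

def ede_encrypt(k1: bytes, k2: bytes, k3: bytes, block: bytes) -> bytes:
    # Triple DES, EDE form: C = E_k3(D_k2(E_k1(P)))
    step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
    step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
    return DES.new(k3, DES.MODE_ECB).encrypt(step2)

def ede_decrypt(k1: bytes, k2: bytes, k3: bytes, block: bytes) -> bytes:
    # Reverse the steps: P = D_k1(E_k2(D_k3(C)))
    step1 = DES.new(k3, DES.MODE_ECB).decrypt(block)
    step2 = DES.new(k2, DES.MODE_ECB).encrypt(step1)
    return DES.new(k1, DES.MODE_ECB).decrypt(step2)

k1, k2, k3, block = b"key1key1", b"key2key2", b"key3key3", b"plain64b"
assert ede_decrypt(k1, k2, k3, ede_encrypt(k1, k2, k3, block)) == block
# With k1 == k2 == k3 the result equals single DES with that key.
assert ede_encrypt(k1, k1, k1, block) == DES.new(k1, DES.MODE_ECB).encrypt(block)
```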

IDEA

IDEA operates on 64-bit blocks using a 128-bit key, and consists of a series of eight identical transformations (a round) and an output transformation (the half-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from different groups (modular addition, modular multiplication, and bitwise exclusive OR (XOR)) which are algebraically "incompatible" in some sense. In more detail, these operators, which all deal with 16-bit quantities, are:

Bitwise exclusive OR (usually denoted by a circled plus).
Addition modulo 2^16 (usually denoted by a boxed plus).
Multiplication modulo 2^16 + 1, where the all-zero word (0x0000) is interpreted as 2^16 (usually denoted by a circled dot).

After the eight rounds comes a final "half-round", the output transformation.

Key schedule

Each round uses six 16-bit sub-keys, while the half-round uses four, a total of 52 for 8.5 rounds. The first eight sub-keys are extracted directly from the key, with K1 from the first round being the lower sixteen bits; further groups of eight keys are created by rotating the main key left 25 bits between each group of eight. This means that it is rotated less than once per round, on average, for a total of six rotations.
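A sketch of the key schedule as described above, treating the 128-bit key as a Python integer. The rotation amount (25 bits) and the total of 52 sub-keys follow the text; the MSB-first word ordering is an assumption, since references number the sub-keys differently.

```python
MASK128 = (1 << 128) - 1

def idea_subkeys(key):
    """Derive the 52 16-bit IDEA sub-keys from a 128-bit key."""
    subkeys = []
    k = key & MASK128
    while len(subkeys) < 52:
        # take eight 16-bit words from the current key value (MSB first, by assumption)
        for i in range(8):
            if len(subkeys) == 52:
                break
            subkeys.append((k >> (112 - 16 * i)) & 0xFFFF)
        # rotate the 128-bit key left by 25 bits before extracting the next group
        k = ((k << 25) | (k >> (128 - 25))) & MASK128
    return subkeys

keys = idea_subkeys(0x0123456789ABCDEF0123456789ABCDEF)
print(len(keys), [hex(k) for k in keys[:8]])
```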

AES
The Data Encryption Standard came to be considered too weak because of its small key size and advances in processor power, and the Advanced Encryption Standard (AES) was adopted as its successor. AES takes an input block of a fixed size, 128 bits, and produces a corresponding output block of the same size. The transformation requires a second input, which is the secret key. The secret key can be of different sizes depending on the cipher used; AES supports three key sizes: 128, 192 and 256 bits.
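A short sketch showing the three AES key sizes operating on a single 128-bit block, again assuming PyCryptodome is available.

```python
from Crypto.Cipher import AES            # assumes PyCryptodome
from Crypto.Random import get_random_bytes

block = b"16-byte block..!"               # AES always works on 128-bit blocks
for key_len in (16, 24, 32):              # AES-128, AES-192, AES-256
    key = get_random_bytes(key_len)
    ct = AES.new(key, AES.MODE_ECB).encrypt(block)
    assert AES.new(key, AES.MODE_ECB).decrypt(ct) == block
    print(f"AES-{key_len * 8}: {ct.hex()}")
```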

Using block ciphers


Electronic codebook (ECB)

The simplest of the encryption modes is the electronic codebook (ECB) mode. The message is divided into blocks and each block is encrypted separately.

The disadvantage of this method is that identical plaintext blocks are encrypted into identical ciphertext blocks; thus, it does not hide data patterns well. In some senses, it doesn't provide serious message confidentiality, and it is not recommended for use in cryptographic protocols at all.

Cipher-block chaining (CBC)

Each block of plaintext is XORed with the previous ciphertext block before being encrypted. This way, each ciphertext block is dependent on all plaintext blocks processed up to that point. Also, to make each message unique, an initialization vector must be used in the first block.

If the first block has index 1, the mathematical formula for CBC encryption is

C_i = E_K(P_i XOR C_(i-1)), with C_0 = IV,

while the mathematical formula for CBC decryption is

P_i = D_K(C_i) XOR C_(i-1), with C_0 = IV.
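The two formulas above can be exercised directly by driving a raw block cipher by hand; the sketch below uses AES in ECB mode as the block primitive E_K (assuming PyCryptodome) and checks that decryption inverts encryption.

```python
from Crypto.Cipher import AES            # assumes PyCryptodome is available
from Crypto.Random import get_random_bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(key, iv, blocks):
    ecb = AES.new(key, AES.MODE_ECB)     # raw block cipher E_K
    prev, out = iv, []
    for p in blocks:
        c = ecb.encrypt(xor(p, prev))    # C_i = E_K(P_i XOR C_(i-1))
        out.append(c)
        prev = c
    return out

def cbc_decrypt(key, iv, blocks):
    ecb = AES.new(key, AES.MODE_ECB)
    prev, out = iv, []
    for c in blocks:
        out.append(xor(ecb.decrypt(c), prev))  # P_i = D_K(C_i) XOR C_(i-1)
        prev = c
    return out

key, iv = get_random_bytes(16), get_random_bytes(16)
msg = [b"A" * 16, b"B" * 16]
assert cbc_decrypt(key, iv, cbc_encrypt(key, iv, msg)) == msg
```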

CBC has been the most commonly used mode of operation. Its main drawbacks are that encryption is sequential (i.e., it cannot be parallelized), and that the message must be padded to a multiple of the cipher block size. One way to handle this last issue is through the method known as ciphertext stealing. Note that a one-bit change in a plaintext block affects all following ciphertext blocks. A plaintext block can be recovered from just two adjacent blocks of ciphertext. As a consequence, decryption can be parallelized, and a one-bit change to the ciphertext causes complete corruption of the corresponding block of plaintext, and inverts the corresponding bit in the following block of plaintext.

Output feedback (OFB)

The output feedback (OFB) mode makes a block cipher into a synchronous stream cipher. It generates keystream blocks, which are then XORed with the plaintext blocks to get the ciphertext. Just as with other stream ciphers, flipping a bit in the ciphertext produces a flipped bit in the plaintext at the same location. This property allows many error correcting codes to function normally even when applied before encryption. Because of the symmetry of the XOR operation, encryption and decryption are exactly the same:

C_j = P_j XOR O_j,  P_j = C_j XOR O_j,  where O_j = E_K(O_(j-1)) and O_0 = IV.

Each output feedback block cipher operation depends on all previous ones, and so cannot be performed in parallel. However, because the plaintext or ciphertext is only used for the final XOR, the block cipher operations may be performed in advance, allowing the final step to be performed in parallel once the plaintext or ciphertext is available. It is possible to obtain an OFB mode keystream by using CBC mode with a constant string of zeroes as input. This can be useful, because it allows the usage of fast hardware implementations of CBC mode for OFB mode encryption. Using OFB mode with limited feedback, as in CFB mode, reduces the average cycle length by a factor of 2^32 or more. A mathematical model proposed by Davies and Parkin and substantiated by experimental results showed that only with full feedback can an average cycle length near the obtainable maximum be achieved. For this reason, support for limited feedback was removed from the specification of OFB.

Counter (CTR)

Like OFB, counter mode turns a block cipher into a stream cipher. It generates the next keystream block by encrypting successive values of a "counter". The counter can be any function which produces a sequence which is guaranteed not to repeat for a long time, although an actual counter is the simplest and most popular. The usage of a simple deterministic input function used to be controversial; critics argued that "deliberately exposing a cryptosystem to a known systematic input represents an unnecessary risk." By now, CTR mode is widely accepted, and problems resulting from the input function are recognized as a weakness of the underlying block cipher instead of the CTR mode. Nevertheless, there are specialized attacks, such as a hardware fault attack, that are based on the usage of a simple counter function as input. CTR mode has similar characteristics to OFB, but also allows a random access property during decryption. CTR mode is well suited to operation on a multi-processor machine, where blocks can be encrypted in parallel. Note that the nonce plays the same role here as the initialization vector (IV) in the other modes. The IV/nonce and the counter can be concatenated, added, or XORed together to produce the actual unique counter block for encryption.
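A sketch of the counter construction described above: the keystream block for position i is E_K(nonce || counter_i), and encryption and decryption are the same XOR operation. AES (PyCryptodome) is assumed as the block cipher; the nonce/counter split into 8 bytes each is one of several possible layouts.

```python
from Crypto.Cipher import AES            # assumes PyCryptodome
from Crypto.Random import get_random_bytes

def ctr_xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt: XOR the data with E_K(nonce || counter)."""
    ecb = AES.new(key, AES.MODE_ECB)
    out = bytearray()
    for i in range(0, len(data), 16):
        counter_block = nonce + (i // 16).to_bytes(8, "big")  # 8-byte nonce + 8-byte counter
        keystream = ecb.encrypt(counter_block)
        chunk = data[i:i + 16]
        out.extend(b ^ k for b, k in zip(chunk, keystream))
    return bytes(out)

key, nonce = get_random_bytes(16), get_random_bytes(8)
msg = b"counter mode works on data of any length, no padding needed"
assert ctr_xcrypt(key, nonce, ctr_xcrypt(key, nonce, msg)) == msg
```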

One-time pad

In cryptography, the one-time pad (OTP), or single-key cipher, is an encryption algorithm in which the plaintext is combined with a random key, or pad, that is as long as the plaintext and is used only once. A modular addition (for example XOR) is used to combine the plaintext with the pad. If the key is truly random, never reused, and kept secret, the one-time pad can be unbreakable. It has also been proven that any theoretically unbreakable cipher must use keys with the same requirements as OTP keys. The key normally consists of a random stream of numbers, each of which indicates the number of places in the alphabet (or number stream, if the plaintext message is in numeric form) by which the corresponding letter or number in the plaintext message should be shifted. For messages in the Latin alphabet, for example, the key will consist of a random string of numbers from 0 to 25; for binary messages the key will consist of a random string of 0s and 1s; and so on. The "pad" part of the name comes from early implementations in which the keys were distributed as a pad of paper, so that the top sheet could easily be torn off and destroyed after use. To make it easier to conceal, the pad was sometimes very small.
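For a binary message the one-time pad is just a XOR with a truly random, never-reused key of the same length; a small sketch using only the standard library:

```python
import secrets

def otp(message: bytes, pad: bytes) -> bytes:
    """XOR the message with the pad; the same call encrypts and decrypts."""
    assert len(pad) >= len(message), "the pad must be at least as long as the message"
    return bytes(m ^ p for m, p in zip(message, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))   # random, used once, kept secret
ciphertext = otp(message, pad)
assert otp(ciphertext, pad) == message
```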

Asymmetric Cryptography
Asymmetric cryptography is an encryption method that uses a pair of keys: a public key and a private key. The public key is distributed freely to all correspondents, by e-mail or other means, while the private key must be known only by its owner. In an asymmetric encryption algorithm, a message encrypted with the public key can only be decrypted by its corresponding private key. Public-key algorithms can be used for authenticity and for confidentiality. For confidentiality, the public key is used to encrypt messages, so that only the owner of the private key can decrypt them. For authenticity, the private key is used to encrypt messages, which guarantees that only the owner of the private key could have encrypted the message that is decrypted with the public key.

Diffie-Hellman

1. It is used for key exchange between users;
2. It is based on the discrete logarithm operation;
3. The discrete logarithm is based on a primitive root;
4. It requires a certification authority (a trusted public key).

Steps

1. Given a prime n and a primitive root a modulo n, both known to the parties to the connection, in this case Alice and Bob;
2. Bob and Alice generate random numbers Xa and Xb, respectively, with Xa and Xb smaller than n; these numbers play the role of the private keys of an asymmetric method;
3. Bob and Alice compute the public keys Ya = a^Xa (mod n) and Yb = a^Xb (mod n), respectively;
4. Alice and Bob exchange the public keys (numbers);
5. Bob computes K = Yb^Xa (mod n) = a^(Xb*Xa) (mod n), and Alice computes K = Ya^Xb (mod n) = a^(Xa*Xb) (mod n);
6. They therefore hold the same secret key K. Note that K is taken as the smallest possible positive integer, i.e. 0 < K < n, and this is always possible, since the Euclidean division theorem guarantees that such a K exists and is unique for 0 < K < n.
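A numeric sketch of the steps above, using the small textbook values n = 23 and primitive root a = 5 (real deployments use primes of thousands of bits):

```python
from secrets import randbelow

n, a = 23, 5                      # public: prime modulus and primitive root

x_bob = 1 + randbelow(n - 1)      # Bob's private exponent (Xa in the text)
x_alice = 1 + randbelow(n - 1)    # Alice's private exponent (Xb in the text)

y_bob = pow(a, x_bob, n)          # Ya = a^Xa mod n, sent over the open channel
y_alice = pow(a, x_alice, n)      # Yb = a^Xb mod n, sent over the open channel

k_bob = pow(y_alice, x_bob, n)    # K = Yb^Xa mod n
k_alice = pow(y_bob, x_alice, n)  # K = Ya^Xb mod n
assert k_bob == k_alice           # both sides hold the same shared secret
```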

RSA
RSA involves a pair of keys: a public key, which may be known by everyone, and a private key, which must be kept secret. Every message encrypted with the public key can only be decrypted with the corresponding private key. RSA cryptography is at work throughout the Internet, for example in e-mail messages, in online purchases and in whatever else you can imagine; all of it is encoded and decoded with RSA.

Key generation

In RSA, the keys are generated as follows:

1. Choose at random two large prime numbers p and q, of the order of at least 10^100.
2. Compute n = p * q.
3. Compute Euler's totient function of n: phi(n) = (p - 1)(q - 1).
4. Choose an integer e such that 1 < e < phi(n), with e and phi(n) coprime.
5. Compute d such that d * e ≡ 1 (mod phi(n)), i.e. d is the multiplicative inverse of e modulo phi(n).

In step 1 the numbers can be tested probabilistically for primality. In step 5 the extended Euclidean algorithm is used, together with the notion of multiplicative inverse, which comes from modular arithmetic.

Finally we have:
The public key: the pair of numbers (n, e).
The private key: the pair of numbers (n, d).

Encryption

To transform a message m, where m < n, into an encrypted message c using the recipient's public key (n, e), it suffices to perform a modular exponentiation:

c = m^e mod n

The encrypted message can then be transmitted over an insecure channel to the receiver.

Decryption

To recover the message m from the encrypted message c using the receiver's private key (n, d), it suffices to perform another modular exponentiation:

m = c^d mod n
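A numeric sketch of the whole scheme with the classic toy parameters p = 61, q = 53 (real keys use primes hundreds of digits long); pow(e, -1, phi) computes the modular inverse and requires Python 3.8 or later.

```python
p, q = 61, 53                      # two (toy) primes
n = p * q                          # n = 3233, part of both keys
phi = (p - 1) * (q - 1)            # Euler's totient: 3120

e = 17                             # public exponent, coprime with phi
d = pow(e, -1, phi)                # private exponent: inverse of e mod phi (= 2753)

m = 65                             # message, must satisfy m < n
c = pow(m, e, n)                   # encryption: c = m^e mod n
assert pow(c, d, n) == m           # decryption: m = c^d mod n
print(f"public key (n={n}, e={e}), private key (n={n}, d={d}), ciphertext {c}")
```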

Secure Hashing
A typical hash function takes as input a message of variable length and produces a fixed-length block that represents the contents of the message. The function must be such that the slightest change to the message produces a change in the output block. On the other hand, the probability of two different messages producing the same block must be practically zero. The best known hash algorithms are: "Message-Digest" (MD2, MD4 and MD5) (RFC 1320), which accepts messages of any size and produces a 128-bit block (the "digest"); the message is first divided into 512-bit blocks, which are then processed. SHA ("Secure Hash Algorithm") accepts messages shorter than 2^64 bits and produces a 160-bit digest. It is based on MD4; the fact that it generates 32 more bits than MD4 makes it, in principle, more secure.

A hash function by itself does not provide authentication: anyone can create a message and generate the corresponding message digest, or hash code. To obtain authentication it is necessary to use some encryption mechanism as well, conventional or not:
The message/hash-code pair is encrypted. In this case there is also confidentiality.
Only the hash code is encrypted. If secret-key encryption is used, this becomes equivalent to a cryptographic checksum. If public-key encryption is used, we additionally have a digital signature (*).
Another possible option is to use a secret value that is concatenated with the message for the purpose of computing the hash code, but is not sent.

(*) this type of signature can be attacked as follows (the "birthday attack"): entity X intends to commit fraud, so it produces a set of variations of the legitimate message. It also produces a set of variations of the fraudulent message. It then searches the two sets for a pair that produces the same hash code, and sends the legitimate copy to entity A for signing. Entity A appends the digital signature, that is, it computes the hash code and encrypts it with its secret key. Entity X can now use this signature for the illegitimate message: since the hash codes are the same, the signatures are equal.

The difficulty of carrying out this type of fraud depends on the number of bits of the hash code: for m bits it is necessary to start by generating two sets of 2^(m/2) messages; in that situation the probability of finding a suitable pair (same hash code) is 0.5 (50%).
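The 2^(m/2) estimate can be seen experimentally by birthday-searching for a collision on a deliberately truncated hash (here the first 4 bytes of SHA-256, i.e. m = 32, so roughly 2^16 attempts are expected); the messages are arbitrary counters, not real documents.

```python
import hashlib
from itertools import count

def truncated_hash(message: bytes, nbytes: int = 4) -> bytes:
    """First nbytes of SHA-256: a toy hash of m = 8*nbytes bits."""
    return hashlib.sha256(message).digest()[:nbytes]

seen = {}
for i in count():
    msg = f"variation {i}".encode()   # every message is distinct
    h = truncated_hash(msg)
    if h in seen:
        print(f"collision after {i + 1} messages: {seen[h]!r} and {msg!r} -> {h.hex()}")
        break
    seen[h] = msg
```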

MD-5

MD5 (Message-Digest algorithm 5) is a one-way 128-bit hash algorithm developed by RSA Data Security, Inc., described in RFC 1321, and used by peer-to-peer (P2P) software, integrity checking and logins. It was developed to succeed MD4, which had some security problems. Because it is a one-way algorithm, an MD5 hash cannot be transformed back into the password (or text) that gave rise to it. Verification is therefore done by comparing two hashes (one from the database and the other from the login attempt). MD5 is also used to check the integrity of a file through, for example, the md5sum program, which creates the hash of a file. This can be very useful for downloads of large files and for P2P programs, which build a file out of pieces and are subject to file corruption. MD5 is in the public domain for general use. From a message of arbitrary size it generates a 128-bit hash value; with this algorithm it is computationally infeasible to find two messages that generate the same value, or to reproduce a message from its digest. The MD5 algorithm is used as an integrity mechanism in several Internet standard protocols (RFC 1352, RFC 1446, etc.), as well as by CERT and CIAC.

Algorithm

MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into chunks of 512-bit blocks (sixteen 32-bit little-endian integers); the message is padded so that its length is divisible by 512. The padding works as follows: first a single bit, 1, is appended to the end of the message. This is followed by as many zeros as are required to bring the length of the message up to 64 bits fewer than a multiple of 512. The remaining bits are filled up with a 64-bit integer representing the length of the original message, in bits.

The main MD5 algorithm operates on a 128-bit state, divided into four 32-bit words, denoted A, B, C and D. These are initialized to certain fixed constants. The main algorithm then operates on each 512-bit message block in turn, each block modifying the state. The processing of a message block consists of four similar stages, termed rounds; each round is composed of 16 similar operations based on a non-linear function F, modular addition, and left rotation. There are four possible functions F; a different one is used in each round:

F(B,C,D) = (B AND C) OR ((NOT B) AND D)
G(B,C,D) = (B AND D) OR (C AND (NOT D))
H(B,C,D) = B XOR C XOR D
I(B,C,D) = C XOR (B OR (NOT D))

where XOR, AND, OR and NOT denote the corresponding bitwise operations.
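The padding step described above can be written out directly as a small sketch (the compression rounds themselves are not implemented here):

```python
def md5_pad(message: bytes) -> bytes:
    """Apply the MD5 padding rule: a 1 bit, zeros, then the 64-bit little-endian length."""
    bit_length = (len(message) * 8) & ((1 << 64) - 1)   # original length in bits, mod 2^64
    padded = message + b"\x80"                          # a single 1 bit followed by seven 0 bits
    padded += b"\x00" * ((56 - len(padded) % 64) % 64)  # zeros up to 64 bits short of a block
    padded += bit_length.to_bytes(8, "little")          # length field completes the block
    return padded

assert len(md5_pad(b"")) == 64
assert len(md5_pad(b"a" * 56)) == 128          # 56 bytes already overrun the length-field area
assert all(len(md5_pad(b"x" * i)) % 64 == 0 for i in range(200))
```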

Vulnerability

Because MD5 makes only one pass over the data, if two prefixes with the same hash can be constructed, a common suffix can be appended to both to make a collision more likely. In this way it is possible for two different strings to produce the same hash. This does not guarantee that the original password can be obtained from a specific hashed password, but it does make it possible to recover some passwords from a large set of hashed passwords.

SHA-1
The Secure Hash Algorithm is one of a number of cryptographic hash functions published by the National Institute of Standards and Technology as a U.S. Federal Information Processing Standard. There are currently three generations of Secure Hash Algorithm: SHA-1 is the original 160-bit hash function. Resembling the earlier MD5 algorithm, this was designed by the National Security Agency (NSA) to be part of the Digital Signature Algorithm. Originally just called "SHA", it was withdrawn shortly after publication due to an undisclosed "significant flaw" and replaced by the slightly revised version SHA-1. The original withdrawn algorithm is now known by the retronym SHA-0. SHA-2 is a family of two similar hash functions, with different block sizes, known as SHA-256 and SHA-512. They differ in the word size; SHA-256 uses 32-bit words whereas SHA-512 uses 64-bit words. There are also truncated versions of each, known as SHA-224 and SHA-384. These were also designed by the NSA. SHA-3 is a future hash function standard still in development. It is being chosen in a public review process from non-government designers. An ongoing NIST hash function competition is scheduled to end with the selection of a winning function, which will be given the name SHA-3, in 2012.

The corresponding standards have been FIPS PUB 180 (original SHA), FIPS PUB 180-1 (SHA-1), FIPS PUB 180-2 (SHA-1, SHA-256, SHA-384, and SHA-512), and FIPS PUB 180-3 (SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512).

SHA-1 produces a 160-bit digest from a message with a maximum length of (2^64 - 1) bits. SHA-1 is based on principles similar to those used by Ronald L. Rivest of MIT in the design of the MD4 and MD5 message digest algorithms, but has a more conservative design.
Algorithm       Output size  Internal state  Block size  Max message     Word size  Rounds  Operations                  Collisions found
                (bits)       (bits)          (bits)      size (bits)     (bits)
SHA-0           160          160             512         2^64 - 1        32         80      +, and, or, xor, rot        Yes
SHA-1           160          160             512         2^64 - 1        32         80      +, and, or, xor, rot        None (2^63 attack)
SHA-256/224     256/224      256             512         2^64 - 1        32         64      +, and, or, xor, shr, rot   None
SHA-512/384     512/384      512             1024        2^128 - 1       64         80      +, and, or, xor, shr, rot   None

Cryptographic Checksums

This technique involves the use of a secret key: together with an algorithm, it is applied to the message and produces a fixed-size block of data known as a cryptographic checksum or Message Authentication Code (MAC).

MAC = checksum(MESSAGE, SECRET KEY)

The MAC is sent together with the message from entity A to entity B; entity B also generates a MAC and checks whether it matches the one that arrived with the message. Since only A and B know the secret key, B knows that the message came from A and was not tampered with. As always, someone can resort to brute force to try to obtain the secret key. Assuming confidentiality is not implemented, there is public access to the message M and to the corresponding MAC, so it suffices to execute MAC1 = checksum(MESSAGE, KEY) for every possible key, trying to obtain MAC = MAC1. With k the number of bits of the KEY, there are 2^k possible keys in total, and this is also the number of MACs obtained; however, with n the number of bits of the MAC, if n < k then only 2^n < 2^k different MACs will be produced. Under these conditions, applying brute force will yield 2^(k-n) candidate keys. It is then necessary to take another pair (M, MAC) and apply the same technique; now only the 2^(k-n) keys obtained in the previous step are tested, yielding 2^(k-2n) candidate keys. On average, k/n such steps are needed to obtain a unique key. A common implementation of a cryptographic checksum is FIPS PUB 113, or ANSI X9.17. X9.17 is based on DES in CBC mode, with a zero initialization vector. This algorithm is also known as DAA ("Data Authentication Algorithm"); it uses a 56-bit DES key and produces a 64-bit result. In this algorithm the MAC is also called a DAC ("Data Authentication Code"). The DAC may use fewer than the 64 bits produced by the DAA; in that case at least the leftmost 16 bits are used.

HMAC

In cryptography, HMAC (Hash-based Message Authentication Code) is a specific construction for calculating a message authentication code (MAC) involving a cryptographic hash function in combination with a secret key. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message. Any iterative cryptographic hash function, such as MD5 or SHA-1, may be used in the calculation of an HMAC; the resulting MAC algorithm is termed HMAC-MD5 or HMAC-SHA1 accordingly. The cryptographic strength of the HMAC depends upon the cryptographic strength of the underlying hash function, the size of its hash output length in bits, and on the size and quality of the cryptographic key. An iterative hash function breaks up a message into blocks of a fixed size and iterates over them with a compression function. For example, MD5 and SHA-1 operate on 512-bit blocks. The size of the output of HMAC is the same as that of the underlying hash function (128 or 160 bits in the case of MD5 or SHA-1, respectively), although it can be truncated if desired. Let:

H() be a cryptographic hash function
K be a secret key padded to the right with extra zeros to the block size of the hash function
m be the message to be authenticated
|| denote concatenation
⊕ denote exclusive or (XOR)
opad be the outer padding (the byte 0x5c repeated to one block length)
ipad be the inner padding (the byte 0x36 repeated to one block length)

Then HMAC(K,m) is mathematically defined by

HMAC(K,m) = H((K ⊕ opad) || H((K ⊕ ipad) || m))
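The definition above can be implemented literally and checked against the standard library's hmac module; the 64-byte block size applies to both MD5 and SHA-1.

```python
import hashlib
import hmac

def hmac_sha1(key: bytes, msg: bytes) -> bytes:
    """HMAC built directly from the definition above (SHA-1, block size 64 bytes)."""
    block = 64
    if len(key) > block:
        key = hashlib.sha1(key).digest()   # long keys are hashed first
    key = key.ljust(block, b"\x00")        # then padded with zeros to the block size
    opad = bytes(b ^ 0x5C for b in key)
    ipad = bytes(b ^ 0x36 for b in key)
    return hashlib.sha1(opad + hashlib.sha1(ipad + msg).digest()).digest()

key, msg = b"secret", b"message"
assert hmac_sha1(key, msg) == hmac.new(key, msg, hashlib.sha1).digest()
```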

Digital Signatures
Recommended reading: [www.pjvenda.net/papers/acrypto/public-keysecurity-paper.pdf]. A digital signature is a specific type of MAC that results from asymmetric cryptosystems (RSA, for example) and is used to protect information. To sign a message, a message digest (MD) function is used to process the document, producing a small piece of data called a hash. MD functions resemble checksums in that they do not take a key as part of their input. In fact, the data to be "digested" is fed in and the MD algorithm generates a 128-bit or 160-bit hash (depending on the algorithm; examples are MD4, MD5 and Snefru). Once a message digest has been computed, the resulting hash is encrypted with a private key. The result of this whole procedure is called the digital signature of the information. The digital signature is a guarantee that the document is a true and correct copy of the original. The reason for using message digest functions is directly related to the size of the data block that must be encrypted to obtain the signature. Indeed, encrypting long messages can take a long time, whereas encrypting hashes, which are small fixed-size blocks of data generated by the MD function, makes the processing more efficient. However, the mere presence of a digital signature on a document means nothing by itself. Digital signatures, like conventional ones, can be forged. The difference is that a digital signature can be mathematically verified. Given a document and its digital signature, its integrity and authenticity can easily be checked. First, the MD function is executed (using the same MD algorithm that was applied to the document at the origin), thus obtaining a hash for that document; afterwards, the digital signature is decrypted with the sender's public key. The decrypted digital signature must produce the same hash generated by the MD function executed previously. If these values are equal, it is established that the document was not modified after it was signed; otherwise the document, the signature, or both were altered. Unfortunately, the digital signature can only say that the document was modified, not what was modified or by how much. The whole process of generating and verifying a digital signature can be carried out, for example, with the RSA public-key encryption algorithm.

For a tampered document or signature to go undetected, the attacker must have access to the private key of whoever signed the document. What makes digital signatures different from MACs is that while the latter require private keys for verification, digital signatures can be verified using public keys. A digital signature is also valuable because information can be signed on one computer system and its authenticity proven later without worrying about the security of the system that stores it.
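A bare-bones sketch of the sign/verify flow described above, reusing the toy RSA parameters from the RSA section and SHA-256 as the message digest; real signature schemes add a padding standard (e.g. PKCS#1) on top of this.

```python
import hashlib

# toy RSA key pair (see the RSA section); far too small for real use
n, e, d = 3233, 17, 2753

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)                 # "encrypt" the hash with the private key

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest    # recover the hash with the public key

doc = b"transfer 100 euros to account 42"
sig = sign(doc)
assert verify(doc, sig)
assert not verify(b"transfer 900 euros to account 42", sig)
```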

Properties to verify
Authenticity - the signer is uniquely identified by the signature.
Unforgeability - the signature was made by the signer themselves, deliberately.
Integrity - a valid signature on a document guarantees that the document cannot be altered without detection.
Non-reuse - the signature, or part of the document, cannot be reused in another document.
Non-repudiation - the signer cannot deny their signature.

Arbitrated signature
It involves an arbiter who acts as a trusted mediator. Everyone trusts the arbiter, who may or may not see the message. Arbitrated signatures are usually based on symmetric cryptography. The arbiter is a single point of failure.

Properties

authenticity
unforgeability
integrity
non-reuse
non-repudiation

Direct signature
It involves only the sender and the receiver. The receiver must have access to the sender's public key (to verify the signature). It is usually based on asymmetric cryptography.

Properties
authenticity
unforgeability
integrity
non-repudiation
it does not guarantee non-reuse

DSA/DSS
DSA is a public algorithm, part of the Digital Signature Standard (DSS). It is recommended for applications that require a digital signature in place of a handwritten one. It cannot be used for encryption; it is a signature-only algorithm. The message is fed to the SHA algorithm, which generates a hash of the message. This hash is fed to the DSA algorithm using the sender's private key; the result is the signed message. The recipient feeds the received message to SHA to generate the hash, uses the sender's public key to decrypt the digital signature, and compares the two hashes. If they are the same, the message is authentic. It uses keys from 512 to 1024 bits.

Certificate Authority
A certificate authority or certification authority (CA) is an entity that issues digital certificates for use by other parties. It is an example of a trusted third party. CAs are characteristic of many public key infrastructure (PKI) schemes. Commercial CAs charge to issue certificates that will automatically be trusted by most web browsers (Mozilla maintains a list of at least 36 trusted root CAs, though multiple commercial CAs or their resellers may share the same trusted root). The number of web browsers and other devices and applications that trust a particular certificate authority is referred to as ubiquity. Aside from commercial CAs, some providers issue digital certificates to the public at no cost. Large institutions or government entities may have their own CAs.

A CA issues digital certificates that contain a public key and the identity of the owner. When an end user accesses a site, the web browser (e.g. Mozilla Firefox or Microsoft Internet Explorer) validates the certificate presented for that URL against its trusted CAs, thereby confirming the site's public key. The matching private key is not similarly made available publicly, but kept secret by the end user who generated the key pair. The certificate is also a confirmation or validation by the CA that the public key contained in the certificate belongs to the person, organization, server or other entity noted in the certificate. A CA's obligation in such schemes is to verify an applicant's credentials, so that users and relying parties can trust the information in the CA's certificates. CAs use a variety of standards and tests to do so. In essence, the Certificate Authority is responsible for saying "yes, this person is who they say they are, and we, the CA, verify that". If the user trusts the CA and can verify the CA's signature, then he can also verify that a certain public key does indeed belong to whoever is identified in the certificate.

X.509
In the X.509 system, a Certification Authority issues a certificate binding a public key to a particular Distinguished Name in the X.500 tradition, or to an Alternative Name such as an e-mail address or a DNS entry. An organization's trusted root certificates can be distributed to all employees so that they can use the company PKI system. Browsers such as Internet Explorer, Netscape/Mozilla, Opera, Safari and Chrome come with root certificates pre-installed, so SSL certificates from larger vendors will work instantly; in effect the browsers' developers determine which CAs are trusted third parties for the browsers' users. X.509 also includes standards for certificate revocation list (CRL) implementations, an often neglected aspect of PKI systems. The IETF-approved way of checking a certificate's validity is the Online Certificate Status Protocol (OCSP). Firefox 3 enables OCSP checking by default, along with versions of Windows including Vista and later.

Structure of a certificate
The structure of an X.509 v3 digital certificate is as follows:

Certificate
 - Version
 - Serial Number
 - Algorithm ID
 - Issuer
 - Validity
   - Not Before
   - Not After
 - Subject
 - Subject Public Key Info
   - Public Key Algorithm
   - Subject Public Key
 - Issuer Unique Identifier (Optional)
 - Subject Unique Identifier (Optional)
 - Extensions (Optional)
   - ...
Certificate Signature Algorithm
Certificate Signature

Issuer and subject unique identifiers were introduced in Version 2, Extensions in Version 3. Nevertheless, the Serial number must be unique for each certificate issued by a specific CA (as mentioned in RFC 2459).
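As a hedged sketch of how these fields can be read programmatically, the snippet below parses a certificate with the Python "cryptography" package; the file name server.pem is a placeholder.

```python
# Reading the main X.509 v3 fields listed above from a PEM file using the
# third-party "cryptography" package; "server.pem" is a placeholder file name.
from cryptography import x509

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Version:      ", cert.version)
print("Serial number:", cert.serial_number)
print("Issuer:       ", cert.issuer.rfc4514_string())
print("Not before:   ", cert.not_valid_before)
print("Not after:    ", cert.not_valid_after)
print("Subject:      ", cert.subject.rfc4514_string())
print("Public key:   ", type(cert.public_key()).__name__)
print("Signature alg:", cert.signature_algorithm_oid)
for ext in cert.extensions:          # the optional v3 extensions
    print("Extension:    ", ext.oid)
```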

Sample X.509 certificates

See [http://en.wikipedia.org/wiki/X.509].

Self-signed certificate
A self-signed certificate is an identity certificate that is signed by its own creator. That is, the person that created the certificate also signed off on its legitimacy. In typical public key infrastructure (PKI) arrangements, that a particular public key certificate is valid (i.e., contains correct information) is attested by a digital signature from a certificate authority (CA). Users, or their software on their behalf, check that the private key used to sign some certificate matches the public key in the CA's certificate. Since CA certificates are often signed by other, "higher ranking," CAs, there must necessarily be a highest CA, which provides the ultimate in attestation authority in that particular PKI scheme. Obviously, the highest-ranking CA's certificate can't be attested by some other higher CA (there being none), and so that certificate can only be "self-signed." Such certificates are also termed root certificates. Clearly, the lack of mistakes or corruption in the issuance of such certificates is critical to the operation of its associated PKI; they should be, and generally are, issued with great care. In a web of trust certificate scheme there is no central CA, and so identity certificates for each user can be self-signed. In this case, however, it has additional signatures from other users which are evaluated to determine whether a certificate should be accepted as correct. So, if users Bob, Carol, and Edward have signed Alice's certificate, user David may decide to trust that the public key in the certificate is Alice's (all these worthies having agreed by their signatures on that claim). But, if only user Bob has signed, David might (based on his knowledge of Bob) decide to take additional steps in evaluating Alice's certificate. On the other hand, Edward's signature alone on the certificate may by itself be enough for David to trust that he has Alice's public key (Edward being known to David to be a reliably careful and trustworthy person). There is of course, a potentially difficult regression here, as how can David know that Bob, Carol, or Edward have signed any certificate at all unless he
knows their public keys (which of course came to him in some sort of certificate)? In the case of a small group of users who know one another in advance and can meet in person (e.g., a family), users can sign one another's certificates when they meet as a group, but this solution does not scale to larger settings. This problem is solved by fiat in X.509 PKI schemes as one believes (i.e., trusts) the root certificate by definition. The problem of trusting certificates is real in both approaches, but less easily lost track of by users in a Web of Trust scheme.
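Root and other self-signed certificates are produced with standard tooling; as a rough sketch (not a recommendation for production use), the snippet below creates one with the Python "cryptography" package, using an invented common name and a one-year validity period.

```python
# Generating a self-signed certificate (issuer == subject, signed with its own key)
# using the "cryptography" package; the name and validity period are made up.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"example.test")])
now = datetime.datetime.utcnow()

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                   # subject and issuer are the same entity
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())           # self-signed: signed with its own private key
)

with open("selfsigned.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```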

Centro de distribuição de chaves (KDC)


A key distribution center (KDC) is part of a cryptosystem intended to reduce the risks inherent in exchanging keys. KDCs often operate in systems within which some users may have permission to use certain services at some times and not at others. For instance, an administrator may have established a policy that only certain users may use the tape backup facility. (Perhaps the administrator has concerns that unrestricted use might result in someone smuggling out a tape containing important information; but the precise reason does not matter for the purpose of explaining the functioning of the key distribution center.) Many operating systems can control access to the tape facility via a 'system service'. If that system service further restricts the tape drive to operate on behalf only of users who can submit a service-granting ticket when they wish to use it, there remains only the task of distributing such tickets to the appropriately permitted users. If the ticket consists of (or includes) a key, we can then term the mechanism which distributes it a KDC. Usually, in such situations, the KDC itself also operates as a system service.

Assinatura cega
A blind signature, is a form of digital signature in which the content of a message is disguised before it is signed. The resulting blind signature can be publicly verified against the original, unblinded message in the manner of a regular digital signature. Blind signatures are typically employed in privacy-related protocols where the signer and message author are different parties. Examples include cryptographic election systems and digital cash schemes. An often-used analogy to the cryptographic blind signature is the physical act of enclosing a message in a special write-through-capable envelope, which is then sealed and signed by a signing agent. Thus, the signer does not view the message content, but a third party can later verify the signature and know that the signature is valid within the limitations of the underlying signature scheme. Blind signatures can also be used to provide unlinkability, which prevents the signer from linking the blinded message it signs to a later un-blinded version that it may be called upon to verify. In this case, the signer's response is first "un-blinded" prior to verification in such a way that the signature remains valid for the un-blinded message. This can be useful in schemes where anonymity is required.

Blind signature schemes can be implemented using a number of common public key signing schemes, for instance RSA and DSA. To perform such a signature, the message is first "blinded", typically by combining it in some way with a random "blinding factor". The blinded message is passed to a signer, who then signs it using a standard signing algorithm. The resulting message, along with the blinding factor, can be later verified against the signer's public key. In some blind signature schemes, such as RSA, it is even possible to remove the blinding factor from the signature before it is verified. In these schemes, the final output (message/signature) of the blind signature scheme is identical to that of the normal signing protocol. Blind signature schemes exist for many public key signing protocols. Some examples are provided below. In each example, the message to be signed is contained in the value m. m is considered to be some legitimate input to the signature function. As an analogy, consider that Alice has a letter which should be signed by an authority (say Bob), but Alice does not want to reveal the content of the letter to Bob. She can place the letter in an envelope lined with carbon paper and send it to Bob. Bob will sign the outside of the carbon envelope without opening it and then send it back to Alice. Alice can then open it to find the letter signed by Bob, but without Bob having seen its contents. More formally a blind signature scheme is a cryptographic protocol that involves two parties, a user Alice that wants to obtain signatures on her messages, and a signer Bob that is in possession of his secret signing key. At the end of the protocol Alice obtains a signature on m without Bob learning anything about the message. This intuition of not learning anything is hard to capture in mathematical terms. The usual approach is to show that for every (adversarial) signer, there exists a simulator that can output the same information as the signer. This is similar to the way zero-knowledge is defined in zeroknowledge proof systems.

Blind RSA signatures


One of the simplest blind signature schemes is based on RSA signing. A traditional RSA signature is computed by raising the message m to the secret exponent d modulo the public modulus N. The blind version uses a random value r, such that r is relatively prime to N (i.e. gcd(r, N) = 1). r is raised to the public exponent e modulo N, and the resulting value r^e mod N is used as a blinding factor. The author of the message computes the product of the message and blinding factor, i.e.

m' = m * r^e mod N

and sends the resulting value m' to the signing authority. Because r is a random value and the mapping r -> r^e mod N is a permutation, it follows that r^e mod N is random too. This implies that m' does not leak any information about m. The signing authority then calculates the blinded signature s' as:

s' = (m')^d mod N

s' is sent back to the author of the message, who can then remove the blinding factor to reveal s, the valid RSA signature of m:

s = s' * r^(-1) mod N

This works because RSA keys satisfy the equation r^(e*d) = r mod N, and thus

s = s' * r^(-1) = (m')^d * r^(-1) = m^d * r^(e*d) * r^(-1) = m^d * r * r^(-1) = m^d mod N,

hence s is indeed the signature of m.
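The following toy Python sketch walks through exactly this blinding, signing and unblinding arithmetic. The key (N, e, d), the blinding factor r and the message m are made-up textbook values; a real implementation would use large keys, hashing and padding.

```python
# Toy blind RSA signature with small illustrative numbers (not secure).
# Requires Python 3.8+ for pow(r, -1, N) (modular inverse).
from math import gcd

N, e, d = 3233, 17, 2753        # textbook RSA key: N = 61 * 53, e*d = 1 mod phi(N)
m = 1234                        # "message" (already hashed/encoded in a real scheme)

# Author: pick r coprime to N and blind the message
r = 7
assert gcd(r, N) == 1
m_blinded = (m * pow(r, e, N)) % N

# Signer: signs the blinded message with the private exponent d
s_blinded = pow(m_blinded, d, N)

# Author: unblind by multiplying with r^(-1) mod N
s = (s_blinded * pow(r, -1, N)) % N

# The result equals the ordinary RSA signature of m, and verifies with e
assert s == pow(m, d, N)
assert pow(s, e, N) == m
print("blind signature of m:", s)
```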

Dangers of blind signing


RSA is subject to the RSA blinding attack, through which it is possible to be tricked into decrypting a message by blind signing another message. Since the signing process is equivalent to decrypting with the signer's secret key, an attacker can provide a blinded version of a message m encrypted with the signer's public key, m', for them to sign. The encrypted message would usually be some secret information which the attacker observed being sent encrypted under the signer's public key and which the attacker wants to learn. The attacker blinds this ciphertext with a random r and submits

m'' = m' * r^e mod N = (m^e mod N) * r^e mod N = (m*r)^e mod N

for signing, where m' = m^e mod N is the encrypted version of the message. When this blinded message is signed, the cleartext m is easily extracted:

s' = (m'')^d mod N = ((m*r)^e)^d mod N = (m*r)^(e*d) mod N = m*r mod N  (since e*d = 1 mod φ(N))

Note that φ(N) refers to Euler's totient function. The message is now easily obtained:

m = s' * r^(-1) mod N.

This attack works because in this blind signature scheme the signer signs the message directly. By contrast, in an unblinded signature scheme the signer would typically use a padding scheme (e.g. signing the result of a cryptographic hash function applied to the message rather than the message itself); since the signer does not know the actual message, any padding scheme would produce an incorrect value when unblinded. Because of this multiplicative property of RSA, the same key should never be used for both encryption and signing purposes.
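A minimal Python demonstration of this attack, reusing the toy key from the previous sketch (made-up numbers, textbook RSA with no padding), is shown below.

```python
# Toy demonstration of the RSA blinding attack (same made-up key as above).
from math import gcd

N, e, d = 3233, 17, 2753
secret = 999                         # plaintext the attacker wants to recover
c = pow(secret, e, N)                # ciphertext observed on the wire (m' above)

# Attacker blinds the ciphertext and asks the signer to blind-sign it
r = 11
assert gcd(r, N) == 1
blinded = (c * pow(r, e, N)) % N

signed = pow(blinded, d, N)          # signer applies d directly, without padding

# Unblinding yields the original plaintext, not a signature
recovered = (signed * pow(r, -1, N)) % N
assert recovered == secret
print("recovered plaintext:", recovered)
```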

Zero-knowledge
A zero-knowledge proof or zero-knowledge protocol is an interactive method for one party to prove to another that a (usually mathematical) statement is true, without revealing anything other than the veracity of the statement.

Abstract example
It is common practice to label the two parties in a zero-knowledge proof as Alice (the prover of the statement) and Bob (the verifier of the statement). In this story, Alice has uncovered the secret word used to open a magic door in a cave. The cave is shaped like a circle, with the entrance on one side and the magic door blocking the opposite side. Bob says he'll pay her for the secret, but not until he's sure that she really knows it. Alice says she'll tell him the secret, but not until she receives the money. They devise a scheme by which Alice can prove that she knows the word without telling it to Bob. First, Bob waits outside the cave as Alice goes in. We label the left and right paths from the entrance A and B. She randomly takes either path A or B. Then, Bob enters the cave and shouts the name of the path he wants her to use to return, either A or B, chosen at random. Providing she really does know the magic word, this is easy: she opens the door, if necessary, and returns along the desired path. Note that Bob does not know which path she has gone down. However, suppose she did not know the word. Then, she would only be able to return by the named path if Bob were to give the name of the same path that she had entered by. Since Bob would choose A or B at random, he would have a 50% chance of guessing correctly. If they were to repeat this trick many times, say 20 times in a row, her chance of successfully anticipating all of Bob's requests would become vanishingly small. Thus, if Alice reliably appears at the exit Bob names, he can conclude that she is very likely to know the secret word.

Definition
A zero-knowledge proof must satisfy three properties:
1. Completeness: if the statement is true, the honest verifier (that is, one following the protocol properly) will be convinced of this fact by an honest prover.
2. Soundness: if the statement is false, no cheating prover can convince the honest verifier that it is true, except with some small probability.
3. Zero-knowledge: if the statement is true, no cheating verifier learns anything other than this fact. This is formalized by showing that every cheating verifier has some simulator that, given only the statement to be proven (and no access to the prover), can produce a transcript that "looks like" an interaction between the honest prover and the cheating verifier.

The first two of these are properties of more general interactive proof systems. The third is what makes the proof zero-knowledge. Zero-knowledge proofs are not proofs in the mathematical sense of the term because there is some small probability, the soundness error, that a cheating prover will be able to convince the verifier of a false statement. In other words, they are probabilistic rather than deterministic. However, there are techniques to decrease the soundness error to negligibly small values. A formal definition of zero-knowledge has to use some computational model, the most common one being that of a Turing machine. Let P, V, and S be Turing machines. An interactive proof system (P, V) for a language L is zero-knowledge if for any probabilistic polynomial time (PPT) verifier V* there exists an expected PPT simulator S such that, for every x in L and every auxiliary input z,

View_{V*}[ P(x) <-> V*(x, z) ] = S(x, z),

that is, the simulator's output is distributed exactly like the view of V* in a real interaction with P.

The prover P is modeled as having unlimited computation power (in practice, P usually is a probabilistic Turing machine). Intuitively, the definition states that an interactive proof system (P, V) is zero-knowledge if for any verifier V* there exists an efficient simulator S that can reproduce the conversation between P and V* on any given input. The auxiliary string z in the definition plays the role of prior knowledge. The definition implies that V* cannot use any prior knowledge string z to mine information out of its conversation with P, because we demand that if S is also given this prior knowledge then it can reproduce the conversation between V* and P just as before. The definition given is that of perfect zero-knowledge. Computational zero-knowledge is obtained by requiring that the views of the verifier and the simulator are only computationally indistinguishable, given the auxiliary string.

Practical example
We can extend these ideas to a more realistic cryptography application. In this scenario, Alice knows a Hamiltonian cycle for a large graph G. Bob knows G but not the cycle (e.g., Alice has generated G and revealed it to him). Alice will prove that she knows the cycle without revealing it. A Hamiltonian cycle in a graph is just one way to implement a zero-knowledge proof; in fact any NP-complete problem can be used, as well as some other difficult problems such as factoring. However, Alice does not want to simply reveal the Hamiltonian cycle or any other information to Bob; she wishes to keep the cycle secret (perhaps Bob is interested in buying it but wants verification first, or maybe Alice is the only one who knows this information and is proving her identity to Bob). To show that Alice knows this Hamiltonian cycle, she and Bob play several rounds of a game.

At the beginning of each round, Alice creates H, an isomorphic graph to G (i.e. H is just like G except that all the vertices have different names). Since it is trivial to translate a Hamiltonian cycle between isomorphic graphs with known isomorphism,
if Alice knows a Hamiltonian cycle for G she also must know one for H. Alice commits to H. She could do so by using a cryptographic commitment scheme. Alternatively, she could number the vertices of H, then for each edge of H write a small piece of paper containing the two vertices of the edge and then put these pieces of paper upside down on a table. The purpose of this commitment is that Alice is not able to change H while at the same time Bob has no information about H. Bob then randomly chooses one of two questions to ask Alice. He can either ask her to show the isomorphism between H and G (see graph isomorphism problem), or he can ask her to show a Hamiltonian cycle in H. If Alice is asked to show that the two graphs are isomorphic, she first uncovers all of H (e.g. by turning all pieces of papers that she put on the table) and then provides the vertex translations that map G to H. Bob can verify that they are indeed isomorphic. If Alice is asked to prove that she knows a Hamiltonian cycle in H, she translates her Hamiltonian cycle in G onto H and only uncovers the edges on the Hamiltonian cycle. This is enough for Bob to check that H does indeed contain a Hamiltonian cycle.
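As a rough, self-contained Python sketch of one such round (prover and verifier are collapsed into one function for brevity; the hash-based commitments, the tiny example graph and the encodings are illustrative choices, not part of any standard protocol):

```python
# One round of the Hamiltonian-cycle zero-knowledge protocol described above.
import hashlib, os, random

def commit(data: bytes):
    """Hash commitment: returns (commitment, opening nonce)."""
    nonce = os.urandom(16)
    return hashlib.sha256(nonce + data).hexdigest(), nonce

def check(comm, nonce, data: bytes):
    return hashlib.sha256(nonce + data).hexdigest() == comm

def zk_round(n, G, cycle):
    """Prover knows `cycle`, a Hamiltonian cycle (vertex list) of the n-vertex
    graph G (a set of frozenset edges). Prover and verifier are merged here."""
    # Prover: pick a random relabelling pi and build the isomorphic graph H = pi(G)
    pi = list(range(n))
    random.shuffle(pi)
    H = {frozenset((pi[u], pi[v])) for (u, v) in map(tuple, G)}
    edges = sorted(tuple(sorted(e)) for e in H)
    comms = [commit(repr(e).encode()) for e in edges]   # one commitment per edge of H
    published = [c for c, _ in comms]                   # what the verifier sees

    # Verifier: random challenge
    if random.choice(["isomorphism", "cycle"]) == "isomorphism":
        # Prover opens every commitment and reveals pi; verifier recomputes pi(G)
        opened_ok = all(check(published[i], comms[i][1], repr(edges[i]).encode())
                        for i in range(len(edges)))
        same_graph = {frozenset((pi[u], pi[v])) for (u, v) in map(tuple, G)} == \
                     {frozenset(e) for e in edges}
        return opened_ok and same_graph
    else:
        # Prover opens only the commitments lying on the permuted cycle
        cyc = {tuple(sorted((pi[cycle[i]], pi[cycle[(i + 1) % n]]))) for i in range(n)}
        opened = [(i, comms[i][1]) for i, e in enumerate(edges) if e in cyc]
        opened_ok = all(check(published[i], nonce, repr(edges[i]).encode())
                        for i, nonce in opened)
        # Verifier: n opened edges, every vertex touched exactly twice
        # (a full verifier would also check they form one single connected cycle).
        degree = {}
        for i, _ in opened:
            for v in edges[i]:
                degree[v] = degree.get(v, 0) + 1
        return (opened_ok and len(opened) == n and len(degree) == n
                and all(d == 2 for d in degree.values()))

# Tiny made-up example: the square 0-1-2-3 plus the chord 0-2; 0-1-2-3 is Hamiltonian.
G = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]}
print(all(zk_round(4, G, [0, 1, 2, 3]) for _ in range(20)))   # expected: True
```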

Completeness: During each round, Alice does not know which question she will be asked until after giving Bob H. Therefore, in order to be able to answer both, H must be isomorphic to G and she must have a Hamiltonian cycle in H. Because only someone who knows a Hamiltonian cycle in G would always be able to answer both questions, Bob (after a sufficient number of rounds) becomes convinced that Alice does know this information. Zero-Knowledge: Alice's answers do not reveal the original Hamiltonian cycle in G. Each round, Bob will learn only H's isomorphism to G or a Hamiltonian cycle in H. He would need both answers for a single H to discover the cycle in G, so the information remains unknown as long as Alice can generate a unique H every round. If Alice does not know of a Hamiltonian Cycle in G, but somehow knew in advance what Bob would ask to see each round then she could cheat. For example, if Alice knew ahead of time that Bob would ask to see the Hamiltonian Cycle in H then she could generate a Hamiltonian cycle for an unrelated graph. Similarly, if Alice knew in advance that Bob would ask to see the isomorphism then she could simply generate an isomorphic graph H (in which she also does not know a Hamiltonian Cycle). Bob could simulate the protocol by himself (without Alice) because he knows what he will ask to see. Therefore, Bob gains no information about the Hamiltonian cycle in G from the information revealed in each round.

Soundness: If Alice does not know the information, she can guess which question Bob will ask and generate either a graph isomorphic to G or a Hamiltonian cycle for an unrelated graph, but since she does not know a Hamiltonian cycle for G she cannot do both. With this guesswork, her chance of fooling Bob is 2^(-n), where n is the number of rounds. For all realistic purposes, it is infeasibly difficult to defeat a zero-knowledge proof with a reasonable number of rounds in this way.

Variants of zero-knowledge
Different variants of zero-knowledge can be defined by formalizing the intuitive concept of what is meant by the output of the simulator "looking like" the execution of the real proof protocol in the following ways:

- We speak of perfect zero-knowledge if the distributions produced by the simulator and the proof protocol are distributed exactly the same. This is for instance the case in the first example above.
- Statistical zero-knowledge means that the distributions are not necessarily exactly the same, but they are statistically close, meaning that their statistical difference is a negligible function.
- We speak of computational zero-knowledge if no efficient algorithm can distinguish the two distributions.

Applications Research in zero-knowledge proofs has been motivated by authentication systems where one party wants to prove its identity to a second party via some secret information (such as a password) but doesn't want the second party to learn anything about this secret. This is called a "zero-knowledge proof of knowledge". However, a password is typically too small or insufficiently random to be used in many schemes for zero-knowledge proofs of knowledge. A zero-knowledge password proof is a special kind of zero-knowledge proof of knowledge that addresses the limited size of passwords. One of the most fascinating uses of zero-knowledge proofs within cryptographic protocols is to enforce honest behavior while maintaining privacy. Roughly, the idea is to force a user to prove, using a zero-knowledge proof, that its behavior is correct according to the protocol. Because of soundness, we know that the user must really act honestly in order to be able to provide a valid proof. Because of zero knowledge, we know that the user does not compromise the privacy of its secrets in the process of providing the proof. This application of zero-knowledge proofs was first used in the ground-breaking paper of Goldreich, Micali, and Wigderson on secure multiparty computation.

Dinheiro digital
Essentially, digital cash mimics the functionality of paper cash. More technically, digital cash is a payment message bearing a digital signature which functions as a medium of exchange or store of value. Paper currency and coins represent value because they are backed by a trusted third party, the government and the banking industry. Digital coins will also represent value because they are backed by a trusted third party, usually a bank that is willing to convert digital cash to physical cash.

How does Digital Cash work?

There are a number of electronic cash protocols. To a degree, all digital cash schemes operate in the following manner. A user installs a "cyber wallet" on a computer. Money can be put in the wallet by deciding how much is needed and then sending an encrypted message to the bank asking for this amount to be deducted from the user's account. The bank reads the message with private key decryption and verifies that it has been digitally signed in order to identify the user. The bank then generates "serial numbers", encrypts the message, signs it with its digital signature and returns it. The user is now entitled to use the message (coin or token) to spend it at merchant sites. Merchants receive e-cash during a transaction and see that it has been authorized by a bank. They then contact the bank to make sure the coins have not been spent somewhere else, and the amount is credited to the merchant's account.
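The toy model below captures only the serial-number and double-spending check from this flow, reusing the textbook RSA key from the blind-signature sketch (all values invented, no blinding or padding, not secure). A real scheme would blind the coin so that the bank cannot link it back to the withdrawal.

```python
# Toy withdraw/spend/deposit flow: the bank signs a coin's serial number with a
# textbook RSA key and refuses to accept the same serial number twice.
import secrets

N, e, d = 3233, 17, 2753          # bank's toy RSA key (illustrative only)

class ToyBank:
    def __init__(self):
        self.spent = set()         # serial numbers already deposited

    def withdraw(self, amount):
        serial = secrets.randbelow(N)
        return {"amount": amount, "serial": serial, "sig": pow(serial, d, N)}

    def deposit(self, coin):
        if pow(coin["sig"], e, N) != coin["serial"]:
            return "rejected: bad signature"
        if coin["serial"] in self.spent:
            return "rejected: coin already spent"
        self.spent.add(coin["serial"])
        return f"credited {coin['amount']}"

bank = ToyBank()
coin = bank.withdraw(10)          # the user's cyber wallet obtains a signed coin
print(bank.deposit(coin))         # merchant deposits it: credited
print(bank.deposit(coin))         # second deposit of the same coin: rejected
```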

Key Properties of a Private Digital Cash System


The use of digital cash is not dependent on any physical location, and can be transferred between the physical world and virtual world of the Internet Smart card integration with
computer networks have been proposed to offer this functionality. Real cash is limited by its physical form .Cash represented by streams of 0's and 1's can take advantage of its electronic nature, and permeate through networks and digital sale devices at light-speed, worldwide. Ideal properties: Secure. The transaction protocol must ensure that a high-level security is maintained through sophisticated encryption techniques. For instance, Alice should be able to pass digital cash to Bob without either of them, or others, able to alter or reproduce the electronic token. Anonymous. Anonymity assures the privacy of a transaction on multiple levels. Beyond encryption, this optional intractability feature of digital cash promises to be one of the major points of competition as well as controversy between the various providers. Transactional privacy will also be at the heart of the government's attack on digital cash because it is that feature which will most likely render current legal tender irrelevant. Both Alice and Bob should have the option to remain anonymous in relation to the payment. Furthermore, at the second level, they should have the option to remain completely invisible to the mere existence of a payment on their behalf. Portable. The security and use of the digital cash is not dependent on any physical location. The cash can be transferred through computer networks and off the computer network into other storage devices. Alice and Bob should be able to walk away with their digital cash and transport it for use within alternative delivery systems, including noncomputer-network delivery channels. Digital wealth should not be restricted to a unique, proprietary computer network. Two-way. The digital cash can be transferred to other users. Essentially, peer-to-peer payments are possible without either party required to attain registered merchant status as with today's card-based systems. Alice, Bob, Carol, and David share an elaborate dinner together at a trendy restaurant and Alice pays the bill in full. Bob, Carol, and David each should then be able to transfer one-fourth of the total amount in digital cash to Alice. Off-line capable. The protocol between the two exchanging parties is executed off-line, meaning that neither is required to be host-connected in order to process. Availability must be unrestricted. Alice can freely pass value to Bob at any time of day without requiring third-party authentication. Wide acceptability. The digital cash is well-known and accepted in a large commercial zone. Primarily a brand issue, this feature implies recognition of and trust in the issuer. With several digital cash providers displaying wide acceptability, Alice should be able to use her preferred unit in more than just a restricted local setting.

Comunicação segura
When two entities are communicating with each other and do not want a third party to listen in, they want to pass on their message in such a way that nobody else can understand it. This is known as communicating in a secure manner, or secure communication. Secure communication includes means by which people can share information with varying degrees of certainty that third parties cannot know what was said. Other than face-to-face conversation with no possible eavesdropper, it is probably safe to say that no communication is guaranteed secure in this sense, although practical limitations such as legislation, resources, technical issues (interception and encryption), and the sheer volume of communication are limiting factors to surveillance. The purpose of this section is to describe the various means by which security is sought and compromised, the differing kinds of security possible, and the current means and their degree of security readily available. With many communications taking place over long distance and mediated by technology, and increasing awareness of the importance of interception issues, technology and its compromise are at the heart of this debate. For this reason, this section focuses on communications mediated or intercepted by technology. See also Trusted Computing, an approach under present development that achieves security in general at the potential cost of compelling obligatory trust in corporate and governmental bodies.

Canal seguro
A secure channel is a way of transferring data that is resistant to interception and tampering. A confidential channel is a way of transferring data that is resistant to interception, but not necessarily resistant to tampering. An authentic channel is a way of transferring data that is resistant to tampering but not necessarily resistant to interception.

Secure channels in the real world
There are no perfectly secure channels in the real world. There are, at best, only ways to make insecure channels (e.g., couriers, homing pigeons, diplomatic bags, etc.) less insecure: padlocks (between courier wrists and a briefcase), loyalty tests, security investigations, and guns for courier personnel, diplomatic immunity for diplomatic bags, and so forth. In 1976, two researchers proposed a key exchange technique (now named after them), Diffie-Hellman key exchange (D-H). This protocol allows two parties to generate a key known only to them, under the assumption that a certain mathematical problem (e.g., the Diffie-Hellman problem in their proposal) is computationally infeasible (i.e., very hard) to solve, and that the two parties have access to an authentic channel. In short, an eavesdropper (conventionally termed 'Eve'), who can listen to all messages exchanged by the two parties but who cannot modify the messages, will not learn the exchanged key.

Such a key exchange was impossible with any previously known cryptographic schemes based on symmetric ciphers, because with these schemes it is necessary that the two parties exchange a secret key at some prior time, hence they require a confidential channel at that time, which is just what we are attempting to build. It is important to note that most cryptographic techniques are trivially breakable if keys are not exchanged securely or, if they actually were so exchanged, if those keys become known in some other way (burglary or extortion, for instance). An actually secure channel will not be required if an insecure channel can be used to securely exchange keys, and if burglary, bribery, or threats aren't used. The eternal problem has been, and of course remains even with modern key exchange protocols, how to know when an insecure channel worked securely (or alternatively, and perhaps more importantly, when it did not), and whether anyone has actually been bribed or threatened or simply lost a notebook (or a notebook computer) with key information in it. These are hard problems in the real world and no solutions are known, only expedients, jury rigs, and workarounds.
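As a reminder of the mechanics referred to above, here is a minimal Diffie-Hellman sketch in Python with deliberately tiny, made-up parameters; real deployments use groups of 2048 bits or more (or elliptic curves), and still need an authentic channel to defeat man-in-the-middle attacks.

```python
# Minimal Diffie-Hellman sketch over a toy group (illustration only).
import secrets

p, g = 23, 5                      # textbook-sized parameters, not secure

a = secrets.randbelow(p - 2) + 1  # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1  # Bob's secret exponent

A = pow(g, a, p)                  # sent to Bob over the (authentic) channel
B = pow(g, b, p)                  # sent to Alice over the (authentic) channel

# Both ends derive the same shared secret; Eve sees only p, g, A and B.
assert pow(B, a, p) == pow(A, b, p)
print("shared secret:", pow(B, a, p))
```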

Trusted computing base


The trusted computing base (TCB) of a computer system is the set of all hardware, firmware, and/or software components that are critical to its security, in the sense that bugs or vulnerabilities occurring inside the TCB might jeopardize the security properties of the entire system. By contrast, parts of a computer system outside the TCB must not be able to misbehave in a way that would leak any more privileges than are granted to them in accordance to the security policy. The careful design and implementation of a system's trusted computing base is paramount to its overall security. Modern operating systems strive to reduce the size of the TCB so that an exhaustive examination of its code base (by means of manual or computer-assisted software audit or program verification) becomes feasible.

Definition and characterization


In the classic paper Authentication in Distributed Systems: Theory and Practice Lampson et al. define the trusted computing base of a computer system as simply a small amount of software and hardware that security depends on and that we distinguish from a much larger amount that can misbehave without affecting security. This definition, while clear and convenient, is neither theoretically exact nor intended to be, as e.g. a network server process under a UNIX-like operating system might fall victim to a security breach and compromise an important part of the system's security, yet is not part of the operating system's TCB. The Orange Book, another classic computer security literature reference, therefore provides a more formal definition of the TCB of a computer system, as the totality of protection mechanisms within it, including hardware, firmware, and software, the combination of which is responsible for enforcing a computer security policy.

The Orange Book further explains that [t]he ability of a trusted computing base to enforce correctly a unified security policy depends on the correctness of the mechanisms within the trusted computing base, the protection of those mechanisms to ensure their correctness, and the correct input of parameters related to the security policy. In other words, a given piece of hardware or software is a part of the TCB if and only if it has been designed to be a part of the mechanism that provides its security to the computer system. In operating systems, this typically consists of the kernel (or microkernel) and a select set of system utilities (for example, setuid programs and daemons in UNIX systems). In programming languages that have security features designed in such as Java and E, the TCB is formed of the language runtime and standard library.

Properties of the TCB


Predicated upon the security policy: TCB is in the eye of the consultant It should be pointed out that as a consequence of the above Orange Book definition, the boundaries of the TCB depend closely upon the specifics of how the security policy is fleshed out. In the network server example above, even though, say, a Web server that serves a multi-user application is not part of the operating system's TCB, it has the responsibility of performing access control so that the users cannot usurp the identity and privileges of each other. In this sense, it definitely is part of the TCB of the larger computer system that comprises the UNIX server, the user's browsers and the Web application; in other words, breaching into the Web server through e.g. a buffer overflow may not be regarded as a compromise of the operating system proper, but it certainly constitutes a damaging exploit on the Web application. This fundamental relativity of the boundary of the TCB is exemplifed by the concept of the target of evaluation (TOE) in the Common Criteria security process: in the course of a Common Criteria security evaluation, one of the first decisions that must be made is the boundary of the audit in terms of the list of system components that will come under scrutiny. A prerequisite to security Systems that don't have a trusted computing base as part of their design do not provide security of their own: they are only secure insofar as security is provided to them by external means (e.g. a computer sitting in a locked room without a network connection may be considered secure depending on the policy, regardless of the software it runs). This is because, as David J. Farber et al. put it, [i]n a computer system, the integrity of lower layers is typically treated as axiomatic by higher layers. As far as computer security is concerned, reasoning about the security properties of a computer system requires being able to make sound assumptions about what it can, and more importantly, cannot do; however, barring any reason to believe otherwise, a computer is able to do everything that a general Von Neumann machine can. This obviously includes operations that would be deemed
contrary to all but the simplest security policies, such as divulging an email or password that should be kept secret; however, barring special provisions in the architecture of the system, there is no denying that the computer could be programmed to perform these undesirable tasks. These special provisions that aim at preventing certain kinds of actions from being executed, in essence, constitute the trusted computing base. For this reason, the Orange Book (still a reference on the design of secure operating systems design as of 2007) characterizes the various security assurance levels that it defines mainly in terms of the structure and security features of the TCB. Software parts of the TCB need to protect themselves As outlined by the aforementioned Orange Book, software portions of the trusted computing base need to protect themselves against tampering to be of any effect. This is due to the von Neumann architecture implemented by virtually all modern computers: since machine code can be processed as just another kind of data, it can be read and overwritten by any program barring special memory management provisions that subsequently have to be treated as part of the TCB. Specifically, the trusted computing base must at least prevent its own software from being written to. In many modern CPUs, the protection of the memory that hosts the TCB is achieved by adding in a specialized piece of hardware called the memory management unit (MMU), which is programmable by the operating system to allow and deny access to specific ranges of the system memory to the programs being run. Of course, the operating system is also able to disallow such programming to the other programs. This technique is called supervisor mode; compared to more crude approaches (such as storing the TCB in ROM, or equivalently, using the Harvard architecture), it has the advantage of allowing the securitycritical software to be upgraded in the field, although allowing secure upgrades of the trusted computing base poses bootstrap problems of its own. Trusted vs. trustworthy As stated above, trust in the trusted computing base is required to make any progress in ascertaining the security of the computer system. In other words, the trusted computing base is trusted first and foremost in the sense that it has to be trusted, and not necessarily that it is trustworthy. Real-world operating systems routinely have security-critical bugs discovered in them, which attests of the practical limits of such trust. The alternative is formal software verification, which uses mathematical proof techniques to show the absence of bugs. Researchers at NICTA and its spinout Open Kernel Labs have recently performed such a formal verification of seL4, a member of the L4 microkernel family, proving functional correctness of the C implementation of the kernel. This makes seL4 the first operating-system kernel which closes the gap between trust and trustworthiness, assuming the mathematical proof and the compiler are free from error.

TCB size
Due to the aforementioned need to apply costly techniques such as formal verification or manual review, the size of the TCB has immediate consequences on the economics of the TCB assurance process, and on the trustworthiness of the resulting product (in terms of the mathematical expectation of the number of bugs not found during the verification or review). In order to reduce costs and security risks, the TCB should therefore be kept as small as possible. This is a key argument in the debate opposing microkernel proponents and monolithic kernel aficionados. The Coyotos kernel, for example, is of the microkernel kind for this reason, despite the possible performance issues that this choice entails.

Matriz de controlo de acesso


An Access Control Matrix or Access Matrix is an abstract, formal security model of protection state in computer systems that characterizes the rights of each subject with respect to every object in the system. According to the model, the protection state of a computer system can be abstracted as a set of objects O, that is the set of entities that need to be protected (e.g. processes, files, memory pages), and a set of subjects S, that consists of all active entities (e.g. users, processes). Further there exists a set of rights R of the form r(s, o), where s ∈ S, o ∈ O and r(s, o) ⊆ R. A right thereby specifies the kind of access a subject is allowed to perform with regard to an object.
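A minimal sketch of this model as a data structure; the subjects, objects and rights below are invented examples.

```python
# The access matrix as a nested dictionary: rows are subjects S, columns are
# objects O, and each cell holds the set of rights r(s, o).
matrix = {
    "alice": {"report.txt": {"read", "write"}, "backup": {"use"}},
    "bob":   {"report.txt": {"read"}},
}

def allowed(subject, obj, right):
    """Check whether `right` is in the cell r(subject, obj)."""
    return right in matrix.get(subject, {}).get(obj, set())

print(allowed("alice", "report.txt", "write"))   # True
print(allowed("bob", "report.txt", "write"))     # False
```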

Lista de controlo de acesso


An access control list (ACL), with respect to a computer file system, is a list of permissions attached to an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects. Each entry in a typical ACL specifies a subject and an operation. For instance, if a file has an ACL that contains (Alice, delete), this would give Alice permission to delete the file.

Listas de capacidade
A C-list is an array of capabilities, usually associated with a process and maintained by the kernel. The program running in the process does not manipulate capabilities directly, but refers to them via C-list indexes: integers indexing into the C-list. The file descriptor table in Unix is an example of a C-list. Unix processes do not manipulate file descriptors directly, but refer to them via file descriptor numbers, which are C-list indexes. In the KeyKOS and EROS operating systems, a process's capability registers constitute a C-list.
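ACLs and capability lists can be seen as the two ways of slicing the access control matrix above: an ACL stores a column with each object, while a C-list stores a row with each subject. A small illustrative sketch (names invented):

```python
# ACLs and capability lists as decompositions of the same access matrix.
matrix = {
    "alice": {"report.txt": {"read", "write"}, "tape0": {"use"}},
    "bob":   {"report.txt": {"read"}},
}

def acl(obj):
    """Column of the matrix: which subjects may do what to `obj`."""
    return {s: rights[obj] for s, rights in matrix.items() if obj in rights}

def clist(subject):
    """Row of the matrix: the capabilities held by `subject`."""
    return matrix.get(subject, {})

print(acl("report.txt"))   # e.g. {'alice': {'read', 'write'}, 'bob': {'read'}}
print(clist("alice"))      # e.g. {'report.txt': {'read', 'write'}, 'tape0': {'use'}}
```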

Controlo de acesso discricionário (DAC)


A discretionary access control (DAC) is a kind of access control defined by the Trusted Computer System Evaluation Criteria "as a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject (unless restrained by mandatory access control)". However, the meaning of the term in practice is not as clear-cut as the definition given in the TCSEC standard. For example, the term is commonly used in contexts that assume that, under DAC, every object has an owner that controls the permissions to access the object, probably because many systems do implement DAC using the concept of an owner. But the TCSEC definition does not say anything about owners, so technically an access control system doesn't have to have a concept of owner to meet the TCSEC definition of DAC. As another example, capability systems are sometimes described as providing discretionary controls because they permit subjects to transfer their access to other subjects, even though capability-based security is fundamentally not about restricting access "based on the identity of subjects". (Capability systems do not, in general, allow permissions to be passed "to any other subject"; the subject wanting to pass its permissions must first have access to the receiving subject, and subjects do not generally have access to all subjects in the system.) Discretionary access control is commonly defined in opposition to mandatory access control (sometimes termed non-discretionary access control). Occasionally a system as a whole is said to have "discretionary" or "purely discretionary" access control as a way of saying that the system lacks mandatory access control. On the other hand, systems can be said to implement both MAC and DAC simultaneously, where DAC refers to one category of access controls that subjects can transfer among each other, and MAC refers to a second category of access controls that imposes constraints upon the first.

Controlo de acesso mandatário


Mandatory access control (MAC) refers to a type of access control by which the operating system constrains the ability of a subject or initiator to access or generally perform some sort of operation on an object or target. In practice, a subject is usually a process or thread; objects are constructs such as files, directories, TCP/UDP ports, shared memory segments, etc. Subjects and objects each have a set of security attributes. Whenever a subject attempts to access an object, an authorization rule enforced by the operating system kernel examines these security attributes and decides whether the access can take place. Any operation by any subject on any object will be tested against the set of authorization rules (aka policy) to determine if the operation is allowed. A database management system, in its access control mechanism, can also apply mandatory
access control. In this case, the objects are tables, views, procedures, etc. With mandatory access control, this security policy is centrally controlled by a security policy administrator; users do not have the ability to override the policy and, for example, grant access to files that would otherwise be restricted. By contrast, discretionary access control (DAC), which also governs the ability of subjects to access objects, allows users the ability to make policy decisions and/or assign security attributes. (The traditional Unix system of users, groups, and read-write-execute permissions is an example of DAC.) MACenabled systems allow policy administrators to implement organization-wide security policies. Unlike with DAC, users cannot override or modify this policy, either accidentally or intentionally. This allows security administrators to define a central policy that is guaranteed (in principle) to be enforced for all users. Historically and traditionally, MAC has been closely associated with multi-level secure (MLS) systems. The Trusted Computer System Evaluation Criteria(TCSEC), the seminal work on the subject which is often referred to as the "Orange Book", defines MAC as "a means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (i.e., clearance) of subjects to access information of such sensitivity". Early implementations of MAC such as Honeywell's SCOMP, USAF SACDIN, NSA Blacker, and Boeing's MLS LAN focused on MLS to protect military-oriented security classification levels with robust enforcement. Originally, the term MAC denoted that the access controls were not only guaranteed in principle, but in fact. Early security strategies enabled enforcement guarantees that were dependable in the face of national lab level attacks. More recently, with the advent of implementations such as SELinux (incorporated into Linux kernels from 2.6) and Mandatory Integrity Control (incorporated into Windows Vista and newer), MAC has started to become more mainstream and is evolving out of the MLS niche. These more recent MAC implementations have recognized that the narrow TCSEC definition, focused as it was on MLS, is too specific for general use. These implementations provide more depth and flexibility than earlier MLS-focused implementations, allowing (for example) administrators to focus on issues such as network attacks and malware without the rigor or constraints of MLS systems.
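A highly simplified sketch of a label-based mandatory check in the multi-level-security style mentioned above ("no read up, no write down", after Bell-LaPadula); the levels are invented, and real MAC systems such as SELinux enforce far richer, centrally administered policies.

```python
# Simplified MLS-style mandatory check: "no read up, no write down".
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def mac_allows(subject_clearance, object_label, operation):
    s, o = LEVELS[subject_clearance], LEVELS[object_label]
    if operation == "read":
        return s >= o          # no read up
    if operation == "write":
        return s <= o          # no write down
    return False

print(mac_allows("secret", "confidential", "read"))    # True
print(mac_allows("secret", "top-secret", "read"))      # False (read up)
print(mac_allows("secret", "confidential", "write"))   # False (write down)
```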

Modelos de autenticação
Slides

Kerberos
Kerberos is an authentication protocol which allows nodes communicating over a non-secure network to prove their identity to one another in a secure manner. Its designers aimed primarily at a client-server model, and it provides mutual authentication: both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks. Kerberos builds on symmetric key cryptography and requires a trusted third party. Extensions to Kerberos can provide for the use of public-key cryptography during certain phases of authentication. Kerberos uses as its basis the symmetric Needham-Schroeder protocol. It makes use of a trusted third party, termed a key distribution center (KDC), which consists of two logically separate parts: an Authentication Server (AS) and a Ticket Granting Server (TGS). Kerberos works on the basis of "tickets" which serve to prove the identity of users. The KDC maintains a database of secret keys; each entity on the network, whether a client or a server, shares a secret key known only to itself and to the KDC. Knowledge of this key serves to prove an entity's identity. For communication between two entities, the KDC generates a session key which they can use to secure their interactions. The security of the protocol relies heavily on participants maintaining loosely synchronized time and on short-lived assertions of authenticity called Kerberos tickets.

Protocolo
The following is an intuitive description. The client authenticates itself to the Authentication Server and receives a ticket. (All tickets are time-stamped.) It then contacts the Ticket Granting Server, and using the ticket it demonstrates its identity and asks for a service. If the client is eligible for the service, then the Ticket Granting Server sends another ticket to the client. The client then contacts the Service Server, and using this ticket it proves that it has been approved to receive the service. A simplified and more detailed description of the protocol follows. The following abbreviations are used:
AS = Authentication Server
SS = Service Server
TGS = Ticket-Granting Server
TGT = Ticket-Granting Ticket

The client authenticates to the AS once using a long-term shared secret (e.g. a password) and receives a TGT from the AS. Later, when the client wants to contact some SS, it can
(re)use this ticket to get additional tickets from TGS, for SS, without resorting to using the shared secret. These tickets can be used to prove authentication to SS. The phases are detailed below. User Client-based Logon 1. A user enters a username and password on the client machine. 2. The client performs a one-way function (hash usually) on the entered password, and this becomes the secret key of the client/user. Client Authentication 1. The client sends a cleartext message of the user ID to the AS requesting services on behalf of the user. (Note: Neither the secret key nor the password is sent to the AS.) The AS generates the secret key by hashing the password of the user found at the database (e.g. Active Directory in Windows Server). 2. The AS checks to see if the client is in its database. If it is, the AS sends back the following two messages to the client: o Message A: Client/TGS Session Key encrypted using the secret key of the client/user. o Message B: Ticket-Granting Ticket (which includes the client ID, client network address, ticket validity period, and the client/TGS session key) encrypted using the secret key of the TGS. 3. Once the client receives messages A and B, it decrypts message A to obtain the Client/TGS Session Key. This session key is used for further communications with the TGS. (Note: The client cannot decrypt Message B, as it is encrypted using TGS's secret key.) At this point, the client has enough information to authenticate itself to the TGS. Client Service Authorization 1. When requesting services, the client sends the following two messages to the TGS: o Message C: Composed of the TGT from message B and the ID of the requested service. o Message D: Authenticator (which is composed of the client ID and the timestamp), encrypted using the Client/TGS Session Key. 2. Upon receiving messages C and D, the TGS retrieves message B out of message C. It decrypts message B using the TGS secret key. This gives it the "client/TGS session key". Using this key, the TGS decrypts message D (Authenticator) and sends the following two messages to the client: o Message E: Client-to-server ticket (which includes the client ID, client network address, validity period and Client/Server Session Key) encrypted using the service's secret key. o Message F: Client/server session key encrypted with the Client/TGS Session Key.

Client Service Request 1. Upon receiving messages E and F from TGS, the client has enough information to authenticate itself to the SS. The client connects to the SS and sends the following two messages: o Message E from the previous step (the client-to-server ticket, encrypted using service's secret key). o Message G: a new Authenticator, which includes the client ID, timestamp and is encrypted using client/server session key. 2. The SS decrypts the ticket using its own secret key to retrieve the Client/Server Session Key. Using the sessions key, SS decrypts the Authenticator and sends the following message to the client to confirm its true identity and willingness to serve the client: o Message H: the timestamp found in client's Authenticator plus 1, encrypted using the Client/Server Session Key. 3. The client decrypts the confirmation using the Client/Server Session Key and checks whether the timestamp is correctly updated. If so, then the client can trust the server and can start issuing service requests to the server. 4. The server provides the requested services to the client.
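The sketch below is a toy walk-through of the AS and TGS exchanges above, using Fernet symmetric encryption from the Python "cryptography" package as a stand-in for Kerberos's encryption. Names, addresses and lifetimes are invented, and real Kerberos message formats and key derivation differ.

```python
# Toy AS/TGS exchange: tickets and authenticators as Fernet-encrypted JSON.
import json, time
from cryptography.fernet import Fernet

# Long-term secrets known to the KDC (and to their respective owners)
client_key = Fernet.generate_key()   # derived from the user's password in real Kerberos
tgs_key = Fernet.generate_key()

def as_exchange(user_id):
    """AS: returns message A (session key for the client) and message B (the TGT)."""
    session_key = Fernet.generate_key()
    msg_a = Fernet(client_key).encrypt(session_key)
    tgt = Fernet(tgs_key).encrypt(json.dumps({
        "client": user_id, "addr": "10.0.0.5",
        "valid_until": time.time() + 8 * 3600,
        "session_key": session_key.decode(),
    }).encode())
    return msg_a, tgt

# Client: decrypt message A with its own long-term key; it cannot read the TGT.
msg_a, tgt = as_exchange("alice")
session_key = Fernet(client_key).decrypt(msg_a)

# Client -> TGS: the TGT plus an authenticator encrypted under the session key.
authenticator = Fernet(session_key).encrypt(
    json.dumps({"client": "alice", "timestamp": time.time()}).encode())

# TGS: recover the session key from the TGT, then check the authenticator.
ticket = json.loads(Fernet(tgs_key).decrypt(tgt))
auth = json.loads(Fernet(ticket["session_key"].encode()).decrypt(authenticator))
assert auth["client"] == ticket["client"] and time.time() < ticket["valid_until"]
print("TGS accepts", auth["client"], "- it would now issue a service ticket")
```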

Firewall
A firewall is simply a system designed to prevent unauthorised access to or from a private network. Firewalls can be implemented in hardware, in software, or in a combination of both. Firewalls are frequently used to prevent unauthorised Internet users from accessing private networks connected to the Internet. All data entering or leaving the intranet pass through the firewall, which examines each packet and blocks those that do not meet the specified security criteria. Generally, firewalls are configured to protect against unauthenticated interactive logins from the outside world. This helps prevent "hackers" from logging into machines on your network. More sophisticated firewalls block traffic from the outside to the inside, but permit users on the inside to communicate a little more freely with the outside. Firewalls are also essential since they can provide a single block point where security and audit can be imposed. Firewalls provide an important logging and auditing function; often they provide summaries to the administrator about the type and volume of traffic that has been processed through them. This is an important point: providing this block point can serve the same purpose on your network as an armed guard does for physical premises. Theoretically, there are two types of firewalls:
1. Network layer
2. Application layer
They are not as different as you may think, as described below.

Which is which depends on what mechanisms the firewall uses to pass traffic from one security zone to another. The International Standards Organization (ISO) Open Systems Interconnect (OSI) model for networking defines seven layers, where each layer provides services that higher-level layers depend on. The important thing to recognize is that the lower-level the forwarding mechanism, the less examination the firewall can perform.
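As a sketch of the network-layer case, the filter below decides using only packet header fields (addresses and ports); the rule set is an invented example.

```python
# Network-layer (packet-filter) decision: only header fields are examined,
# never application content.
from ipaddress import ip_address, ip_network

RULES = [
    # (source prefix, destination prefix, destination port or None, action)
    ("0.0.0.0/0",      "192.168.1.10", 80,   "allow"),  # web server reachable from anywhere
    ("192.168.1.0/24", "0.0.0.0/0",    None, "allow"),  # inside hosts may connect outwards
    ("0.0.0.0/0",      "0.0.0.0/0",    None, "deny"),   # default deny
]

def decide(src, dst, dport):
    """Return the action of the first matching rule."""
    for src_net, dst_net, port, action in RULES:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and (port is None or port == dport)):
            return action
    return "deny"

print(decide("203.0.113.7", "192.168.1.10", 80))   # allow
print(decide("203.0.113.7", "192.168.1.10", 22))   # deny
```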
More material:
http://www.ipg.pt/user/~sduarte/rc/trabalhos/Firewalls/Arquitecturas/arquitecturas_do_firewall.htm
http://www.invir.com/int-sec-firearc.html
http://penta.ufrgs.br/redes296/firewall/screensub.html

Comunicação segura
SSL/TLS
Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols that provide security for communications over networks such as the Internet. TLS and SSL encrypt the segments of network connections at the Application Layer to ensure secure end-to-end transit at the Transport Layer. The TLS protocol allows client/server applications to communicate across a network in a way designed to prevent eavesdropping and tampering. TLS provides endpoint authentication and communications confidentiality over the Internet using cryptography. TLS supports, for example, RSA keys of 1024 and 2048 bits. In typical end-user/browser usage, TLS authentication is unilateral: only the server is authenticated (the client knows the server's identity), but not vice versa (the client remains unauthenticated or anonymous). TLS also supports the more secure bilateral connection mode (typically used in enterprise applications), in which both ends of the "conversation" can be assured with whom they are communicating (provided they diligently scrutinize the identity information in the other party's certificate). This is known as mutual authentication, or 2SSL. Mutual authentication requires that the TLS client side also hold a certificate (which is not usually the case in the end-user/browser scenario), unless TLS-PSK, the Secure Remote Password (SRP) protocol, or some other protocol that can provide strong mutual authentication in the absence of certificates is used. Typically, the key information and certificates necessary for TLS are handled in the form of X.509 certificates, which define required fields and data formats. SSL operates in a modular fashion. It is extensible by design, with support for forward and backward compatibility and negotiation between peers.

Cipher suite
When a TLS or SSL connection is established, the client and server negotiate a CipherSuite, exchanging CipherSuite codes in the client hello and server hello messages, which specifies a combination of cryptographic algorithms to be used for the connection. The key exchange and authentication algorithms are typically public key algorithms, or, as in TLS-PSK, pre-shared keys may be used. The message authentication codes are made up from cryptographic hash functions using the HMAC construction for TLS, and a non-standard pseudorandom function for SSL.

Aplicação
In application design, TLS is usually implemented on top of any of the Transport Layer protocols, encapsulating the application-specific protocols such as HTTP, FTP, SMTP, NNTP, and XMPP. Historically it has been used primarily with reliable transport protocols such as the Transmission Control Protocol (TCP). However, it has also been implemented with datagram-oriented transport protocols, such as the User Datagram Protocol (UDP) and the Datagram Congestion Control Protocol (DCCP), usage which has been standardized independently using the term Datagram Transport Layer Security (DTLS). A prominent use of TLS is for securing World Wide Web traffic carried by HTTP to form HTTPS. Notable applications are electronic commerce and asset management. Increasingly, the Simple Mail Transfer Protocol (SMTP) is also protected by TLS (RFC 3207). These applications use public key certificates to verify the identity of endpoints. An increasing number of client and server products support TLS natively, but many still lack support. As an alternative, users may wish to use standalone TLS products like Stunnel. Wrappers such as Stunnel rely on being able to obtain a TLS connection immediately, by simply connecting to a separate port reserved for the purpose. For example, by default the TCP port for HTTPS is 443, to distinguish it from HTTP on port 80. TLS can also be used to tunnel an entire network stack to create a VPN, as is the case with OpenVPN. Many vendors now marry TLS's encryption and authentication capabilities with authorization. There has also been substantial development since the late 1990s in creating client technology outside of the browser to enable support for client/server applications. When compared against traditional IPsec VPN technologies, TLS has some inherent advantages in firewall and NAT traversal that make it easier to administer for large remote-access populations. TLS is also a standard method to protect Session Initiation Protocol (SIP) application signaling. TLS can be used to provide authentication and encryption of the SIP signaling associated with VoIP and other SIP-based applications.

Segurança
TLS/SSL have a variety of security measures:

- The client may use the certificate authority's (CA's) public key to validate the CA's digital signature on the server certificate. If the digital signature can be verified, the client accepts the server certificate as a valid certificate issued by a trusted CA.
- The client verifies that the issuing CA is on its list of trusted CAs.
- The client checks the server certificate's validity period. The authentication process stops if the current date and time fall outside of the validity period.
- Protection against a downgrade of the protocol to a previous (less secure) version or a weaker cipher suite.
- Numbering all the Application records with a sequence number, and using this sequence number in the message authentication codes (MACs).
- Using a message digest enhanced with a key (so only a key-holder can check the MAC). The HMAC construction used by most TLS cipher suites is specified in RFC 2104 (SSLv3 used a different hash-based MAC).
- The message that ends the handshake ("Finished") sends a hash of all the exchanged handshake messages seen by both parties.

The pseudorandom function splits the input data in half and processes each half with a different hashing algorithm (MD5 and SHA-1), then XORs the results together. This provides protection even if one of these algorithms is found to be vulnerable (this applies to TLS only). SSL v3 improved upon SSL v2 by adding SHA-1-based ciphers and support for certificate authentication. Additional improvements in SSL v3 include better handshake protocol flow and increased resistance to man-in-the-middle attacks.
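As a rough illustration of the split-and-XOR construction described above, the following sketch implements a TLS 1.0-style pseudorandom function (P_MD5 over one half of the secret XORed with P_SHA-1 over the other half, in the spirit of RFC 2246). The input values are made up for illustration; this is not a drop-in TLS implementation.

```python
import hmac
import hashlib

def p_hash(digestmod, secret, seed, length):
    """RFC 2246 P_hash expansion: chain HMACs until enough output is produced."""
    out = b""
    a = seed                                                    # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, digestmod).digest()             # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, digestmod).digest()   # HMAC(secret, A(i) + seed)
    return out[:length]

def tls10_prf(secret, label, seed, length):
    """Split the secret, expand one half with MD5 and the other with SHA-1, XOR the streams."""
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[-half:]
    md5_part = p_hash(hashlib.md5, s1, label + seed, length)
    sha1_part = p_hash(hashlib.sha1, s2, label + seed, length)
    return bytes(a ^ b for a, b in zip(md5_part, sha1_part))

# Example with illustrative inputs: derive 48 bytes of key material.
km = tls10_prf(b"pre_master_secret", b"master secret", b"client_random" + b"server_random", 48)
print(km.hex())
```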

A vulnerability in the renegotiation procedure was discovered in August 2009 that can lead to plaintext injection attacks against SSLv3 and all current versions of TLS. For example, it allows an attacker who can hijack an https connection to splice their own requests into the beginning of the conversation the client has with the web server. The attacker cannot actually decrypt the client-server communication, so it is different from a typical man-in-the-middle attack. A short-term fix is for web servers to stop allowing renegotiation, which typically will not require other changes unless client certificate authentication is used. To fix the vulnerability, a renegotiation indication extension was proposed for TLS. It requires the client and server to include and verify information about previous handshakes in any renegotiation handshake. When a user does not pay attention to their browser's indication that the session is secure (typically a padlock icon), the vulnerability can be turned into a true man-in-the-middle attack. This extension has become a proposed standard and has been assigned the number RFC 5746.

There are some attacks against the implementation rather than the protocol itself:
- Most CAs do not explicitly set basicConstraints CA=FALSE for leaf nodes, and many browsers and other SSL implementations (including IE, Konqueror, OpenSSL, etc.) do not check the field. This can be exploited for a man-in-the-middle attack on all potential SSL connections.
- Some implementations (including older versions of Microsoft Cryptographic API, Network Security Services, and GnuTLS) stop reading any characters that follow the null character in the name field of the certificate, which can be exploited to fool the client into reading the certificate as if it were one that came from the authentic site; e.g. paypal.com\0.badguy.com would be mistaken for the site of paypal.com rather than badguy.com.

SSL v2 is flawed in a variety of ways:
- Identical cryptographic keys are used for message authentication and encryption.
- MACs are unnecessarily weakened in the "export mode" required by U.S. export restrictions (symmetric key length was limited to 40 bits in Netscape and Internet Explorer).
- SSL v2 has a weak MAC construction and relies solely on the MD5 hash function.
- SSL v2 does not have any protection for the handshake, meaning a man-in-the-middle downgrade attack can go undetected.
- SSL v2 uses the TCP connection close to indicate the end of data. This means that truncation attacks are possible: the attacker simply forges a TCP FIN, leaving the recipient unaware of an illegitimate end-of-data message (SSL v3 fixes this problem by having an explicit closure alert).

- SSL v2 assumes a single service and a fixed-domain certificate, which clashes with the standard feature of virtual hosting in web servers. This means that most websites are practically prevented from using SSL. TLS Server Name Indication (SNI) fixes this but is not yet widely deployed in web servers.

SSL v2 is disabled by default in Internet Explorer 7, Mozilla Firefox 2 and Mozilla Firefox 3, and Safari. After it sends a TLS ClientHello, if Mozilla Firefox finds that the server is unable to complete the handshake, it will attempt to fall back to using SSL 3.0 with an SSL 3.0 ClientHello in SSL v2 format to maximize the likelihood of successfully handshaking with older servers. Support for SSL v2 (and weak 40-bit and 56-bit ciphers) has been removed completely from Opera as of version 9.5.

How it works

A TLS client and server negotiate a stateful connection by using a handshaking procedure. During this handshake, the client and server agree on various parameters used to establish the connection's security.

The handshake begins when a client connects to a TLS-enabled server requesting a secure connection, and presents a list of supported CipherSuites (ciphers and hash functions). From this list, the server picks the strongest cipher and hash function that it also supports and notifies the client of the decision. The server sends back its identification in the form of a digital certificate. The certificate usually contains the server name, the trusted certificate authority (CA), and the server's public encryption key. The client may contact the server that issued the certificate (the trusted CA as above) and confirm that the certificate is authentic before proceeding. In order to generate the session keys used for the secure connection, the client encrypts a random number (RN) with the server's public key (PbK), and sends the result to the server. Only the server should be able to decrypt it (with its private key (PvK)): this is the one fact that makes the keys hidden from third parties, since only the server and the client have access to this data. The client knows PbK and RN, and the server knows PvK and (after decryption of the client's message) RN. A third party may only know RN if PvK has been compromised. From the random number, both parties generate key material for encryption and decryption.

This concludes the handshake and begins the secured connection, which is encrypted and decrypted with the key material until the connection closes. If any one of the above steps fails, the TLS handshake fails, and the connection is not created.
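The following sketch illustrates the key-exchange idea just described: the client encrypts a random secret under the server's public key, the server recovers it with its private key, and both sides derive identical key material from it. It uses the third-party cryptography package, and OAEP padding plus HKDF stand in here for the PKCS#1 v1.5 padding and TLS pseudorandom function a real handshake would use; all names and values are illustrative.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Server key pair (in real TLS the public key arrives inside the server certificate).
server_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_public = server_private.public_key()

# Client side: pick a random secret (RN) and encrypt it with the server's public key (PbK).
random_secret = os.urandom(48)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
encrypted = server_public.encrypt(random_secret, oaep)

# Server side: only the holder of the private key (PvK) can recover the secret.
recovered = server_private.decrypt(encrypted, oaep)
assert recovered == random_secret

# Both sides derive the same key material from the shared secret
# (HKDF stands in for the TLS PRF; the info field stands in for the hello randoms).
key_material = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
                    info=b"client_random|server_random").derive(random_secret)
print(key_material.hex())
```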

TLS handshake in detail


The TLS protocol exchanges records, which encapsulate the data to be exchanged. Each record can be compressed, padded, appended with a message authentication code (MAC), or encrypted, all depending on the state of the connection. Each record has a content type field that specifies the record, a length field, and a TLS version field. When the connection starts, the record encapsulates another protocol - the handshake messaging protocol - which has content type 22.
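Since each record starts with a fixed five-byte header (content type, protocol version, length), a minimal parser for that header can be sketched as follows. This illustrates the layout described above; it is not a complete TLS record-layer implementation, and the sample bytes are made up.

```python
import struct

# Record-layer content types mentioned in this section.
CONTENT_TYPES = {20: "change_cipher_spec", 21: "alert", 22: "handshake", 23: "application_data"}

def parse_record_header(data: bytes) -> dict:
    """Read the 5-byte TLS record header: type (1 byte), version (2 bytes), length (2 bytes)."""
    content_type, major, minor, length = struct.unpack("!BBBH", data[:5])
    return {
        "type": CONTENT_TYPES.get(content_type, content_type),
        "version": (major, minor),   # e.g. (3, 1) is TLS 1.0, (3, 3) is TLS 1.2
        "length": length,            # number of payload bytes that follow the header
    }

# Example: header of a TLS 1.2 handshake record carrying 512 payload bytes.
print(parse_record_header(b"\x16\x03\x03\x02\x00"))
# {'type': 'handshake', 'version': (3, 3), 'length': 512}
```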
Simple TLS handshake

A simple connection example, illustrating a handshake where the server is authenticated by its certificate (but not the client), follows; a client-side code sketch of this exchange appears after the list.
1. Negotiation phase:
   - The client sends a ClientHello message specifying the highest TLS protocol version it supports, a random number, a list of suggested CipherSuites, and suggested compression methods.
   - The server responds with a ServerHello message, containing the chosen protocol version, a random number, CipherSuite, and compression method from the choices offered by the client. The server may also send a session ID as part of the message to perform a resumed handshake.
   - The server sends its Certificate message (depending on the selected cipher suite, this may be omitted by the server).
   - The server sends a ServerHelloDone message, indicating it is done with handshake negotiation.
   - The client responds with a ClientKeyExchange message, which may contain a PreMasterSecret, public key, or nothing. (Again, this depends on the selected cipher.)
   - The client and server then use the random numbers and PreMasterSecret to compute a common secret, called the "master secret". All other key data for this connection is derived from this master secret (and the client- and server-generated random values), which is passed through a carefully designed "pseudorandom function".
2. The client now sends a ChangeCipherSpec record, essentially telling the server, "Everything I tell you from now on will be authenticated (and encrypted if encryption was negotiated)." The ChangeCipherSpec is itself a record-level protocol with content type of 20. Finally, the client sends an authenticated and encrypted Finished message, containing a hash and MAC over the previous handshake messages. The server will attempt to decrypt the client's Finished message, and verify the hash and MAC. If the decryption or verification fails, the handshake is considered to have failed and the connection should be torn down.
3. Finally, the server sends a ChangeCipherSpec, telling the client, "Everything I tell you from now on will be authenticated (and encrypted if encryption was negotiated)." The server sends its authenticated and encrypted Finished message. The client performs the same decryption and verification.
4. Application phase: at this point, the "handshake" is complete and the application protocol is enabled, with content type of 23. Application messages exchanged between client and server will also be authenticated and optionally encrypted exactly like in their Finished message.
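From an application's point of view, all of the above is usually hidden behind a library call. A minimal client-side sketch with Python's standard ssl module follows; the host name is an illustrative assumption, and the library performs the ClientHello/ServerHello exchange, certificate validation, key exchange and Finished messages internally.

```python
import socket
import ssl

context = ssl.create_default_context()  # system trust store, sensible protocol defaults
with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        # Handshake is done once wrap_socket returns; inspect the negotiated parameters.
        print("Negotiated:", tls_sock.version(), tls_sock.cipher())
        # Application phase: records are now authenticated and encrypted.
        tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        print(tls_sock.recv(200))
```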
Client-authenticated TLS handshake

The following full example shows a client being authenticated (in addition to the server, as above) via TLS, using certificates exchanged between both peers; a client-side code sketch follows the list.
1. Negotiation phase:
   - The client sends a ClientHello message specifying the highest TLS protocol version it supports, a random number, and a list of suggested cipher suites and compression methods.
   - The server responds with a ServerHello message, containing the chosen protocol version, a random number, cipher suite, and compression method from the choices offered by the client. The server may also send a session id as part of the message to perform a resumed handshake.
   - The server sends its Certificate message (depending on the selected cipher suite, this may be omitted by the server).
   - The server requests a certificate from the client, so that the connection can be mutually authenticated, using a CertificateRequest message.
   - The server sends a ServerHelloDone message, indicating it is done with handshake negotiation.
   - The client responds with a Certificate message, which contains the client's certificate.
   - The client sends a ClientKeyExchange message, which may contain a PreMasterSecret, public key, or nothing. (Again, this depends on the selected cipher.) This PreMasterSecret is encrypted using the public key of the server certificate.
   - The client sends a CertificateVerify message, which is a signature over the previous handshake messages using the private key of the client's certificate. This signature can be verified using the public key of the client's certificate. This lets the server know that the client has access to the private key of the certificate and thus owns the certificate.
   - The client and server then use the random numbers and PreMasterSecret to compute a common secret, called the "master secret". All other key data for this connection is derived from this master secret (and the client- and server-generated random values), which is passed through a carefully designed "pseudorandom function".
2. The client now sends a ChangeCipherSpec record, essentially telling the server, "Everything I tell you from now on will be authenticated (and encrypted if encryption was negotiated)." The ChangeCipherSpec is itself a record-level protocol with content type of 20, not 22. Finally, the client sends an encrypted Finished message, containing a hash and MAC over the previous handshake messages. The server will attempt to decrypt the client's Finished message, and verify the hash and MAC. If the decryption or verification fails, the handshake is considered to have failed and the connection should be torn down.
3. Finally, the server sends a ChangeCipherSpec, telling the client, "Everything I tell you from now on will be authenticated (and encrypted if encryption was negotiated)." The server sends its own encrypted Finished message. The client performs the same decryption and verification.
4. Application phase: at this point, the "handshake" is complete and the application protocol is enabled, with content type of 23. Application messages exchanged between client and server will also be encrypted exactly like in their Finished message.
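The client side of such a mutually authenticated connection can be sketched with the same standard ssl module. The certificate and key file names and the host are assumptions made for illustration; the client loads its own certificate and key so it can answer the server's CertificateRequest.

```python
import socket
import ssl

# Trust the CA that issued the server certificate, and present our own client certificate.
context = ssl.create_default_context(cafile="server_ca.pem")
context.load_cert_chain(certfile="client_cert.pem", keyfile="client_key.pem")

with socket.create_connection(("intranet.example.com", 8443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="intranet.example.com") as tls_sock:
        # If the server rejects our certificate, the handshake fails and an SSLError is raised.
        print("Mutually authenticated session:", tls_sock.cipher())
```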

Resumed TLS handshake

Public key operations (e.g., RSA) are relatively expensive in terms of computational power. TLS provides a secure shortcut in the handshake mechanism to avoid these operations. In an ordinary full handshake, the server sends a session id as part of the ServerHello message. The client associates this session id with the server's IP address and TCP port, so that when the client connects again to that server, it can use the session id to shortcut the handshake. In the server, the session id maps to the cryptographic parameters previously negotiated, specifically the "master secret". Both sides must have the same "master secret" or the resumed handshake will fail (this prevents an eavesdropper from using a session id). The random data in the ClientHello and ServerHello messages virtually guarantees that the generated connection keys will be different from those of the previous connection. In the RFCs, this type of handshake is called an abbreviated handshake; it is also described in the literature as a restart handshake.
1. Negotiation phase:
   - The client sends a ClientHello message specifying the highest TLS protocol version it supports, a random number, and a list of suggested cipher suites and compression methods. Included in the message is the session id from the previous TLS connection.
   - The server responds with a ServerHello message, containing the chosen protocol version, a random number, cipher suite, and compression method from the choices offered by the client. If the server recognizes the session id sent by the client, it responds with the same session id. The client uses this to recognize that a resumed handshake is being performed. If the server does not recognize the session id sent by the client, it sends a different value for its session id. This tells the client that a resumed handshake will not be performed. At this point, both the client and server have the "master secret" and random data to generate the key data to be used for this connection.
2. The client now sends a ChangeCipherSpec record, essentially telling the server, "Everything I tell you from now on will be encrypted." The ChangeCipherSpec is itself a record-level protocol with content type of 20, not 22. Finally, the client sends an encrypted Finished message, containing a hash and MAC over the previous handshake messages. The server will attempt to decrypt the client's Finished message, and verify the hash and MAC. If the decryption or verification fails, the handshake is considered to have failed and the connection should be torn down.

3. Finally, the server sends a ChangeCipherSpec, telling the client, "Everything I tell you from now on will be encrypted." The server sends its own encrypted Finished message. The client performs the same decryption and verification.
4. Application phase: at this point, the "handshake" is complete and the application protocol is enabled, with content type of 23. Application messages exchanged between client and server will also be encrypted exactly like in their Finished message.

Apart from the performance benefit, resumed sessions can also be used for single sign-on, since it is guaranteed that both the original session and any resumed session originate from the same client. This is of particular importance for the FTP over TLS/SSL protocol, which would otherwise suffer from a man-in-the-middle attack in which an attacker could intercept the contents of the secondary data connections.
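With Python's standard ssl module (3.6 or later), session resumption can be sketched as follows. The host is illustrative, and whether the abbreviated handshake actually occurs depends on the server retaining the session state.

```python
import socket
import ssl

context = ssl.create_default_context()
host = "www.example.com"

# Full handshake: keep the resulting session object (carries the negotiated parameters).
with socket.create_connection((host, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=host) as tls:
        saved_session = tls.session

# Later connection: offer the saved session to request an abbreviated handshake.
with socket.create_connection((host, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=host, session=saved_session) as tls:
        print("Session reused:", tls.session_reused)
```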

SSH
Secure Shell or SSH is a network protocol that allows data to be exchanged using a secure channel between two networked devices. Used primarily on Linux and Unix based systems to access shell accounts, SSH was designed as a replacement for Telnet and other insecure remote shells, which send information, notably passwords, in plaintext, rendering them susceptible to packet analysis. The encryption used by SSH provides confidentiality and integrity of data over an insecure network, such as the Internet. SSH uses public-key cryptography to authenticate the remote computer and allow the remote computer to authenticate the user, if necessary. SSH is typically used to log into a remote machine and execute commands, but it also supports tunneling, forwarding TCP ports and X11 connections; it can transfer files using the associated SFTP or SCP protocols. SSH uses the client-server model. The standard TCP port 22 has been assigned for contacting SSH servers. An SSH client program is typically used for establishing connections to an SSH daemon accepting remote connections. Both are commonly present on most modern operating systems, including Mac OS X, Linux, FreeBSD, Solaris and OpenVMS. Proprietary, freeware and open source versions of various levels of complexity and completeness exist.
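A minimal sketch of this client/server usage with the third-party paramiko library (assumed to be installed; host, user name and key path are illustrative) might look like this: the client verifies the server's host key, authenticates with a public key, and runs a command over the encrypted channel.

```python
import os
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()  # authenticate the server against known host keys

# Public-key authentication of the user; connection goes to the standard TCP port 22.
client.connect("server.example.com", port=22, username="alice",
               key_filename=os.path.expanduser("~/.ssh/id_ed25519"))

# Execute a remote command; stdout/stderr come back over the secure channel.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())
client.close()
```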

IPSec
The IP Security Protocol (better known by its acronym, IPSec) is an extension of the IP protocol intended to become the standard method for providing user privacy (increasing the confidentiality of the information a user supplies to an Internet site, such as a bank), data integrity (guaranteeing that the content that arrives at its destination is the same as at the origin), and authenticity of information, i.e. protection against identity spoofing (the guarantee that a person is who they claim to be), when transferring information across IP networks over the Internet. IPSec is a protocol that operates at the network layer (layer 3) of the OSI model. Other Internet security protocols, such as SSL and TLS, operate from the transport layer (layer 4) up to the application layer (layer 7). This makes IPsec more flexible, since it can be used to protect both TCP and UDP, but it also increases its complexity and processing overhead, because it cannot rely on TCP (layer 4 of the OSI model) to handle reliability and fragmentation.

PGP
Pretty Good Privacy (PGP) is a data encryption and decryption computer program that provides cryptographic privacy and authentication for data communication. PGP is often used for signing, encrypting and decrypting e-mails to increase the security of e-mail communications. It was created by Philip Zimmermann in 1991.

PGP and similar products follow the OpenPGP standard (RFC 4880) for encrypting and decrypting data. PGP encryption uses a serial combination of hashing, data compression, symmetric-key cryptography, and, finally, public-key cryptography; each step uses one of several supported algorithms. Each public key is bound to a user name and/or an e-mail address. The first version of this system was generally known as a web of trust to contrast with the X.509 system which uses a hierarchical approach based on certificate authority and which was added to PGP implementations later. Current versions of PGP encryption include both options through an automated key management server.
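The hybrid scheme described above (compress the plaintext, encrypt it with a one-time symmetric session key, then encrypt that session key with the recipient's public key) can be sketched as follows. This uses the third-party cryptography package plus zlib, with AES-GCM and RSA-OAEP as illustrative algorithm choices; it is not the OpenPGP packet format.

```python
import os
import zlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair; in PGP the public key would be bound to a name/e-mail address.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_pub = recipient_key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

message = b"Meet at noon."

# Sender: compress, encrypt with a fresh session key, then wrap the session key.
compressed = zlib.compress(message)
session_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, compressed, None)
wrapped_key = recipient_pub.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt and decompress.
key = recipient_key.decrypt(wrapped_key, oaep)
plaintext = zlib.decompress(AESGCM(key).decrypt(nonce, ciphertext, None))
assert plaintext == message
```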

Compatibility
As PGP evolves, PGP systems that support newer features and algorithms are able to create encrypted messages that older PGP systems cannot decrypt, even with a valid private key. Thus, it is essential that partners in PGP communication understand each other's capabilities or at least agree on PGP settings.

Digital signatures
PGP supports message authentication and integrity checking. The latter is used to detect whether a message has been altered since it was completed (the message integrity property), and the former to determine whether it was actually sent by the person or entity claimed to be the sender (a digital signature). In PGP, these are used by default in conjunction with encryption, but can be applied to plaintext as well. The sender uses PGP to create a digital signature for the message with either the RSA or DSA signature algorithm. To do so, PGP computes a hash (also called a message digest) from the plaintext, and then creates the digital signature from that hash using the sender's private key.
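A sketch of this hash-then-sign flow with the third-party cryptography package follows; RSA-PSS with SHA-256 is an illustrative choice rather than PGP's exact signature encoding.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
message = b"I wrote this."

# Sender: the library hashes the message (the "message digest") and signs that hash
# with the private key.
signature = sender_key.sign(message, pss, hashes.SHA256())

# Receiver: verification with the public key raises InvalidSignature if either the
# message or the signature was altered.
sender_key.public_key().verify(signature, message, pss, hashes.SHA256())
print("signature verified")
```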

Web of trust
Both when encrypting messages and when verifying signatures, it is critical that the public key used to send messages to someone or some entity actually does 'belong' to the intended recipient. Simply downloading a public key from somewhere is not overwhelming assurance of that association; deliberate (or accidental) impersonation is possible. From its first versions, PGP has included provisions for distributing a user's public keys in an 'identity certificate', which is constructed cryptographically so that any tampering (or accidental garbling) is readily detectable. But merely making a certificate that is impossible to modify without detection is insufficient: it can prevent corruption only after the certificate has been created, not before. Users must also ensure by some means that the public key in a certificate actually does belong to the person or entity claiming it. From its first release, PGP products have included an internal certificate 'vetting scheme' to assist with this: a trust model which has been called a web of trust. A given public key (or more specifically, information binding a user name to a key) may be digitally signed by a third-party user to attest to the association between someone (actually a user name) and the key. There are several levels of confidence which can be included in such signatures.

Although many programs read and write this information, few (if any) include this level of certification when calculating whether to trust a key. The web of trust protocol was first described by Zimmermann in 1992 in the manual for PGP version 2.0: As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys. The web of trust mechanism has advantages over a centrally managed public key infrastructure scheme such as that used by S/MIME but has not been universally used. Users have been willing to accept certificates and check their validity manually or to simply accept them. No satisfactory solution has been found for the underlying problem.

Certificates
In the (more recent) OpenPGP specification, trust signatures can be used to support creation of certificate authorities. A trust signature indicates both that the key belongs to its claimed owner and that the owner of the key is trustworthy to sign other keys at one level below their own. A level 0 signature is comparable to a web of trust signature since only the validity of the key is certified. A level 1 signature is similar to the trust one has in a certificate authority because a key signed to level 1 is able to issue an unlimited number of level 0 signatures. A level 2 signature is highly analogous to the trust assumption users must rely on whenever they use the default certificate authority list (like those included in web browsers); it allows the owner of the key to make other keys certificate authorities. PGP versions have always included a way to cancel ('revoke') identity certificates. A lost or compromised private key will require this if communication security is to be retained by that user. This is, more or less, equivalent to the certificate revocation lists of centralized PKI schemes. Recent PGP versions have also supported certificate expiration dates. The problem of correctly identifying a public key as belonging to a particular user is not unique to PGP. All public key / private key cryptosystems have the same problem, if in slightly different guise, and no fully satisfactory solution is known. PGP's original scheme, at least, leaves the decision whether or not to use its endorsement/vetting system to the user, while most other PKI schemes do not, requiring instead that every certificate attested to by a central certificate authority be accepted as correct.

Security quality
To the best of publicly available information, there is no known method which will allow a person or group to break PGP encryption by cryptographic or computational means. Indeed, in 1996, cryptographer Bruce Schneier characterized an early version as being "the closest you're likely to get to military-grade encryption." Early versions of PGP have been found to have theoretical vulnerabilities, so current versions are recommended. In addition to protecting data in transit over a network, PGP encryption can also be used to protect data in long-term storage such as disk files.

The cryptographic security of PGP encryption depends on the assumption that the algorithms used are unbreakable by direct cryptanalysis with current equipment and techniques. For instance, in the original version, the RSA algorithm was used to encrypt session keys; RSA's security depends upon the one-way function nature of mathematical integer factoring. Likewise, the secret key algorithm used in PGP version 2 was IDEA, which might, at some future time, be found to have a previously unsuspected cryptanalytic flaw. Specific instances of current PGP or IDEA insecurities, if they exist, are not publicly known. As current versions of PGP have added additional encryption algorithms, the degree of their cryptographic vulnerability varies with the algorithm used. In practice, none of the algorithms in current use is publicly known to have cryptanalytic weaknesses. New versions of PGP are released periodically, and vulnerabilities that developers are aware of are progressively fixed.

Any agency wanting to read PGP messages would probably use easier means than standard cryptanalysis, e.g. rubber-hose cryptanalysis or black-bag cryptanalysis, i.e. installing some form of trojan horse or keystroke-logging software or hardware on the target computer to capture encrypted keyrings and their passwords. The FBI has already used this attack against PGP in its investigations. However, any such vulnerabilities apply not just to PGP but to all encryption software. In 2003, an incident involving seized Psion PDAs belonging to members of the Red Brigades indicated that neither the Italian police nor the FBI were able to decrypt PGP-encrypted files stored on them. Evidence suggests that as of 2007, British police investigators were unable to break PGP, so instead they resorted to using RIPA legislation to demand the passwords/keys. In November 2009 a British citizen was convicted under RIPA legislation and jailed for 9 months for refusing to provide police investigators with encryption keys to PGP-encrypted files.
