Thursday, September 1, 2011

Cryptography

INTRODUCTION

Cryptography is the science of devising methods that allow information to be sent in a secure form, such that the only person able to retrieve it is the intended recipient.



CRYPTOGRAPHY

Process of Encryption & Decryption


A message being sent is known as plain text. The message is coded using a cryptographic algorithm; this process is called encryption. An encrypted message is known as cipher text, and it is turned back into plain text by the process of decryption.
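As a minimal sketch of this round trip, the toy Python example below XORs the plain text with a repeating key (this is only an illustration, not a secure cipher): the first call produces cipher text, and the second call, with the same key, restores the plain text.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR every byte of the data with the corresponding byte of the repeating key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plain_text = b"ATTACK AT DAWN"
key = b"secret"

cipher_text = xor_cipher(plain_text, key)   # encryption
recovered = xor_cipher(cipher_text, key)    # decryption

print(cipher_text)   # unreadable bytes
print(recovered)     # b'ATTACK AT DAWN'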

SYMMETRIC CRYPTOGRAPHY

Symmetric algorithms have one key that is used both to encrypt and decrypt the message, hence the name. Now what is a key?

KEY: A key is a value that causes a cryptographic algorithm to run in a specific manner and produce a specific cipher text as output. The size of a key is usually measured in bits; the bigger the key, the more secure the algorithm.

In symmetric cryptography, the two parties that exchange messages use the same algorithm; only the key is changed from time to time. The same plain text encrypted with a different key results in a different cipher text. The encryption algorithm is available to the public and hence should be strong and well tested: the more powerful the algorithm, the less likely it is that an attacker will be able to decrypt the resulting cipher text.
Symmetric cryptography provides a means of satisfying the requirements of message content security, because the content can’t be read without the secret key. There remains a risk of exposure, because neither party can be sure that the other party has not exposed the secret key to a third party.
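A short sketch of this shared-key property using the third-party Python cryptography package (the choice of package and cipher here is an assumption for illustration): both parties must hold the same key, and any other key fails to decrypt.

# Symmetric encryption sketch using the 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # the shared secret key
cipher = Fernet(key)

cipher_text = cipher.encrypt(b"meet at noon")
print(cipher.decrypt(cipher_text))     # b'meet at noon' -- the same key decrypts

other = Fernet(Fernet.generate_key())  # a different key
try:
    other.decrypt(cipher_text)
except InvalidToken:
    print("decryption fails without the shared key")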


Key Management

A major difficulty with symmetric schemes is that the secret key has to be possessed by both parties, and hence has to be transmitted from whoever creates it to the other party. Moreover, if the key is compromised, all of the message transmission security measures are undermined. The steps taken to provide a secure mechanism for creating and passing on the secret key are referred to as key management.




A widely used algorithm for symmetric cryptography is the Data Encryption Standard (DES), which came about in response to requests for an algorithm meeting the following criteria:

• It provides a high level of security
• Its security depends on keys, not on the secrecy of the algorithm
• Its security is capable of being evaluated
• The algorithm is completely specified and easy to understand
• It is efficient to use and adaptable
• It must be available to all users
• It must be exportable

Data Encryption Algorithm
DEA is a symmetric block-cipher algorithm with a key length of 64 bits (of which 56 bits are actually used) and a block size of 64 bits. DEA has 16 rounds, meaning the main algorithm is repeated 16 times to produce the cipher text. The effort needed to break the cipher grows rapidly with the number of rounds, so increasing the number of rounds increases the security of the algorithm.


How DES Works in Detail
DES is a block cipher, meaning it operates on plaintext blocks of a given size (64 bits) and returns cipher text blocks of the same size. Thus DES defines a permutation among the 2^64 (read this as: "2 to the 64th power") possible arrangements of 64 bits, each of which may be either 0 or 1. Each block of 64 bits is divided into two blocks of 32 bits each, a left half block L and a right half R. (This division is only used in certain operations.)
Example: Let M be the plain text message M = 0123456789ABCDEF, where M is in hexadecimal (base 16) format. Rewriting M in binary format, we get the 64-bit block of text:
M = 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
L = 0000 0001 0010 0011 0100 0101 0110 0111
R = 1000 1001 1010 1011 1100 1101 1110 1111
The first bit of M is "0". The last bit is "1". We read from left to right.
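A quick Python sketch of this conversion, expanding the hexadecimal message into its 64-bit binary form and splitting it into the halves L and R shown above:

M = "0123456789ABCDEF"
bits = bin(int(M, 16))[2:].zfill(64)   # the 64-bit binary form of M

L, R = bits[:32], bits[32:]            # left and right 32-bit halves
print(L)   # 00000001001000110100010101100111
print(R)   # 10001001101010111100110111101111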
DES operates on the 64-bit blocks using a key size of 56 bits. The keys are actually stored as being 64 bits long, but every 8th bit in the key is not used (i.e. bits numbered 8, 16, 24, 32, 40, 48, 56, and 64). However, we will nevertheless number the bits from 1 to 64, going left to right, in the following calculations. But, as you will see, the eight bits just mentioned are eliminated when we create the subkeys.
Example: Let K be the hexadecimal key K = 133457799BBCDFF1. This gives us as the binary key (setting 1 = 0001, 3 = 0011, etc., and grouping together every eight bits, of which the last one in each group will be unused):
K = 00010011 00110100 01010111 01111001 10011011 10111100 11011111 11110001
The DES algorithm uses the following steps:
Step 1: Create 16 subkeys, each of which is 48 bits long.
The 64-bit key is permuted according to the following table, PC-1. Since the first entry in the table is "57", this means that the 57th bit of the original key K becomes the first bit of the permuted key K+. The 49th bit of the original key becomes the second bit of the permuted key. The 4th bit of the original key is the last bit of the permuted key. Note only 56 bits of the original key appear in the permuted key.


PC-1

57 49 41 33 25 17 9
1 58 50 42 34 26 18
10 2 59 51 43 35 27
19 11 3 60 52 44 36
63 55 47 39 31 23 15
7 62 54 46 38 30 22
14 6 61 53 45 37 29
21 13 5 28 20 12 4

Example: From the original 64-bit key
K = 00010011 00110100 01010111 01111001 10011011 10111100 11011111 11110001
We get the 56-bit permutation
K+ = 1111000 0110011 0010101 0101111 0101010 1011001 1001111 0001111
Next, split this key into left and right halves, C0 and D0, where each half has 28 bits.
Example: From the permuted key K+, we get
C0 = 1111000 0110011 0010101 0101111
D0 = 0101010 1011001 1001111 0001111
With C0 and D0 defined, we now create sixteen blocks Cn and Dn, 1 <= n <= 16. Each pair of blocks Cn and Dn is formed from the previous pair Cn-1 and Dn-1, respectively, for n = 1, 2, ..., 16, using the following schedule of "left shifts" of the previous block. To do a left shift, move each bit one place to the left, except for the first bit, which is cycled to the end of the block.

Number of left shifts per iteration (iterations 1 through 16): 1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1.
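A small Python sketch of the subkey preparation so far, using the PC-1 table and the shift schedule above (this covers only the first part of subkey creation; full DES goes on to apply a second permutation, PC-2, to each shifted pair):

PC1 = [57, 49, 41, 33, 25, 17,  9,
        1, 58, 50, 42, 34, 26, 18,
       10,  2, 59, 51, 43, 35, 27,
       19, 11,  3, 60, 52, 44, 36,
       63, 55, 47, 39, 31, 23, 15,
        7, 62, 54, 46, 38, 30, 22,
       14,  6, 61, 53, 45, 37, 29,
       21, 13,  5, 28, 20, 12,  4]

SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

K = "0001001100110100010101110111100110011011101111001101111111110001"

K_plus = "".join(K[i - 1] for i in PC1)   # apply PC-1 (bits are numbered from 1)
C, D = K_plus[:28], K_plus[28:]           # C0 and D0
print(C)   # 1111000011001100101010101111
print(D)   # 0101010101100110011110001111

for n, s in enumerate(SHIFTS, start=1):
    C = C[s:] + C[:s]                     # circular left shift by s positions
    D = D[s:] + D[:s]
    # C and D now hold Cn and Dn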



Asymmetric Key Cryptography

Key

It's easiest to think of keys in a conceptual way. First, visualize a cipher as a machine. To run the machine, you need to stick a key in it. You can stuff plaintext in one side and get cipher-text out the other side. You can run the cipher in reverse to convert cipher-text to plaintext.

Cipher

To protect your information from curious eyes, you need to take extra precautions. A common way to protect information is to encrypt it at the sending end and decrypt it at the receiving end. Encryption is the process of taking data, called plaintext, and mathematically transforming it into an unreadable mess, called cipher-text. Decryption takes the cipher-text and transforms it back into plaintext.

Asymmetric Ciphers

• The shortcomings of symmetric ciphers are addressed by asymmetric ciphers, also called public key ciphers.
• These ciphers actually involve a public key, which can be freely distributed, and a private key, which is secret.
• These keys are always generated in matching pairs.
• Public keys really are public; you can publish them in a newspaper or write them in the sky.
• No one can violate your privacy or impersonate you without your private key.
• The mechanism for distributing public keys, however, is a big challenge.
• Data encrypted using the public key can be decrypted using the private key. No other key will decrypt the data, and the private key will decrypt only data that was encrypted using the matching public key.
• In some cases, the reverse of the process also works; data encrypted with the private key can be decrypted with the public key.

For example, if Marian wants to send a message to Robin Hood, she can encrypt it using Robin Hood's public key. Only the matching private key, which should be known only to Robin Hood, can be used to decrypt the message.
The Sheriff can intercept this message, but it doesn't do him any good because the message can be decrypted only with Robin Hood's private key. And as long as Robin Hood keeps his private key secret, he can give his public key to anyone who wants it, even the Sheriff. With the public key, the Sheriff can send Robin messages (if he wants), but can't decode anything that others send.
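A hedged sketch of this exchange in Python, using RSA from the third-party cryptography package (the package, key size, and padding choices here are assumptions made for illustration):

# Asymmetric encryption sketch using the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Robin Hood generates a key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Marian encrypts with the public key...
cipher_text = public_key.encrypt(b"The Sheriff rides at dawn", oaep)

# ...and only the holder of the matching private key can decrypt it.
print(private_key.decrypt(cipher_text, oaep))  # b'The Sheriff rides at dawn'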





Asymmetric Key Algorithms
Historically, distributing the keys has always been the weakest link in most cryptosystems. No matter how strong a cryptosystem was, if an intruder could steal the key, the system was worthless. Cryptologists always took for granted that the encryption key and decryption key were the same (or easily derived from one another). But the key had to be distributed to all users of the system. Thus, it seemed as if there was an inherent built-in problem. Keys had to be protected from theft, but they also had to be distributed, so they could not just be locked up in a bank vault.

In 1976, two researchers at Stanford University, Diffie and Hellman (1976), proposed a radically new kind of cryptosystem, one in which the encryption and decryption keys were different, and the decryption key could not feasibly be derived from the encryption key. In their proposal, the (keyed) encryption algorithm, E, and the (keyed) decryption algorithm, D, had to meet three requirements. These requirements can be stated simply as follows:

1. D (E (P)) = P.
2. It is exceedingly difficult to deduce D from E.
3. E cannot be broken by a chosen plaintext attack.

The first requirement says that if we apply D to an encrypted message, E (P), we get the original plaintext message, P, back. Without this property, the legitimate receiver could not decrypt the cipher text. The second requirement speaks for itself. The third requirement is needed because, as we shall see in a moment, intruders may experiment with the algorithm to their hearts' content. Under these conditions, there is no reason that the encryption key cannot be made public.
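Requirement 1 can be illustrated with a toy numeric example in the style of RSA, one well-known cipher that satisfies these requirements (the tiny textbook numbers below are for illustration only; real keys are hundreds of digits long):

# Toy RSA-style example of D(E(P)) = P.
p, q = 61, 53
n = p * q                            # 3233, part of both keys
e = 17                               # public (encryption) exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private (decryption) exponent, 2753 (Python 3.8+)

P = 65                               # the plaintext, encoded as a number smaller than n
C = pow(P, e, n)                     # E(P) = 2790
print(pow(C, d, n))                  # D(E(P)) = 65, the original P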

The two main branches of public key cryptography are:

1. Public key encryption — a message encrypted with a user's public key cannot be decrypted by anyone except the user possessing the corresponding private key. This is used to ensure confidentiality.

2. Digital signatures — a message signed with a user's private key can be verified by anyone who has access to the user's public key, thereby proving that the user signed it and that the message has not been tampered with. This is used to ensure authenticity.
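A hedged sketch of this second branch using RSA signatures from the third-party Python cryptography package (the message and parameter choices are illustrative assumptions): the private key signs, the public key verifies, and verification fails if either the message or the signature is altered.

# Digital signature sketch using the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I agree to the contract terms."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

signature = private_key.sign(message, pss, hashes.SHA256())

try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: message is authentic and untampered")
except InvalidSignature:
    print("signature check failed")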

• An analogy for public-key encryption is that of a locked mailbox with a mail slot. The mail slot is exposed and accessible to the public; its location (the street address) is in essence the public key. Anyone knowing the street address can go to the door and drop a written message through the slot; however, only the person who possesses the key can open the mailbox and read the message.

• An analogy for digital signatures is the sealing of an envelope with a personal wax seal. The message can be opened by anyone, but the presence of the seal authenticates the sender. A central problem for public-key cryptography is proving that a public key is authentic, and has not been tampered with or replaced by a malicious third party. The usual approach to this problem is to use a public-key infrastructure (PKI), in which one or more third parties, known as certificate authorities, certify ownership of key pairs.

Goals of Computer Security

The principles and goals of computer security include the following features:

• To protect computer assets from human error, natural disasters, and physical and electronic maliciousness.
• To keep privileged and private information confidential and available only to authorized people. This is generally accomplished by identifying the individual requesting access with a login ID, authenticating their identity with a password, configuring computer access controls to match authorization rules (i.e. limiting login IDs to particular files), and encrypting data which may travel outside the computer access controls (over the network, for example). Note that most desktop computers do not require a login ID and password, so all access control for confidentiality depends upon the operator's actions rather than computer-controlled mechanisms. In particular, if a program is run on most desktops, it turns control of the desktop over to the author of the program... for better or worse.
• To keep data intact. This is done primarily through the same mechanisms that keep the data confidential with additional access controls to limit who can change the data and, perhaps, mechanisms to alert that the data has been changed. Data that travels outside the computer (over the network, for example) can be protected by cryptographic methods which make it difficult or impossible for the data to be modified without detection. As with confidentiality, most desktop computers depend upon operator actions rather than computer controlled access control to ensure data integrity.
• To keep services available. A variety of things come into play here. Access controls limit the number of avenues of potential attack. Redundant power and hardware limit the effect of failures. Backups serve as a recovery mechanism when the inevitable failure occurs. Capacity planning, in concert with access controls, helps prevent service overloads. Constant monitoring provides trend data and ongoing operational status, which in turn improve response to events or changes in use.

The basic GOALS are described below:

1. Confidentiality
Protecting information from being read or copied by people who are not authorized by the information's owner to read or copy it. This includes pieces of information from which the confidential information can be inferred. A breach of this goal is a failure of confidentiality or privacy, that is, an inappropriate disclosure. Privacy of personal information or of data about individuals is a significant concern, as is that of sensitive corporate data (such as trade secrets and proprietary information) and government classified data.
Threats to Confidentiality:
• Interception/wiretapping (sniffers)
– Used to be commonly installed after a system break-in
– Can capture passwords, sensitive info
• Illicit copying (proprietary information, etc.)
– Copied company documents, plans
– Copied source code for proprietary software

2. Data integrity
Protecting information, including programs, backup tapes, file creation times, documentation, etc., from being deleted or altered without the permission of the information's owner. Integrity means different things in context: sometimes it means that data or programs should be returned exactly as they were when they were recorded, or that modifications to data or programs should be made only by authorized persons, or by authorized programs, or in certain ways, or that the quality of data should be maintained. Data and programs should meet the demands of the way in which they are to be used.

Threats to Integrity
• Modification
– Changing data values (database)
– Changing programs (viruses, backdoors, Trojan horses, game cheats)
– Changing hardware (hardware key capture)
– Can be accidental corruption (interrupted DB transaction)
• Fabrication
– Spurious transactions
– Replay attacks


3. Availability

Ensuring that computer services are not degraded or made unavailable without authorization.
A failure of availability is also known as denial of service. Partial denial of service is lack of capacity or unacceptable responsiveness. Computer users expect programs and data to be available on demand to meet computing needs. Applications such as power generation, stock trading, and even airplane cockpit navigation and aspects of medical care have become so dependent on computing that loss of availability becomes a serious threat to life or society. Even on a less dramatic level, people have become dependent on computers in aspects of everyday life, and so maintaining expected availability of computers is probably the most important of the three goals of computer security.

Threats to Availability
• Denial of Service (DoS)
– Commonly thought of as network/system flooding
– Can be more basic: disrupting power
– Deleting files
– Hardware destruction (fire, tornado, etc.)

COMPUTER SECURITY

INTRODUCTION TO COMPUTER SECURITY

Security is the process of protecting against threats to computing systems. A threat is an event that can cause harm to computers, data or programs, or computations.

• A failure of computer security occurs because of a vulnerability, or weakness, in a computing system. A threat agent (a person, event, or circumstance) exploits the vulnerability.

• Computer security involves protecting against failures of availability, integrity or correctness, and confidentiality or privacy.

• A failure of availability is also known as denial of service. Partial denial of service is lack of capacity or unacceptable responsiveness.



Computer users expect programs and data to be available on demand to meet computing needs.
Applications such as power generation, stock trading, and even airplane cockpit navigation and aspects of medical care have become so dependent on computing that loss of availability becomes a serious threat to life or society. Even on a less dramatic level, people have become dependent on computers in aspects of everyday life, and so maintaining expected availability of computers is probably the most important of the three goals of computer security.


PRINCIPLES OF COMPUTER SECURITY

Computer security has several major principles that it strives to uphold:

1. Confidentiality: Protecting information from being read or copied by people who are not authorized by the information's owner to read or copy it. This includes pieces of information from which the confidential information can be inferred.

2. Data Integrity: Protecting information including programs, backup tapes, file creation times, documentation, etc. from being deleted or altered without the permission of the information's owner.

3. Availability: Ensuring that computer services are not degraded or made unavailable without authorization.


TECHNIQUES FOR COMPUTER SECURITY

The principles of information security are upheld using three main techniques:

1. Prevention:
Stopping a security breach from happening, often by identifying vulnerabilities in a system and putting in safeguards. Examples of this technique include access control (passwords), firewalls, and encryption. It is often impossible to completely prevent security breaches.

2. Detection: Discovering that a security breach has occurred or is occurring (detection), identifying the nature of the attack (localization), and determining the identity and whereabouts (identification) and nature of the perpetrators (assessment). Examples of this technique include intrusion detection systems, system logs, and digital watermarking.
Detection enables response.

3. Response: Mitigating the consequences of the security breach, or deterring attacks, usually by punishment. Examples include insurance and prosecution.



NEED FOR COMPUTER SECURITY


Confidentiality (secrecy)

Data is kept secret from those without the proper credentials, even if that data travels through an insecure medium. In practice, this means potential attackers might be able to see garbled data that is essentially “locked,” but they should not be able to unlock that data without the proper information. In classic cryptography, the encryption (scrambling) algorithm was the secret. In modern cryptography, that isn’t feasible. The algorithms are public, and cryptographic keys are used in the encryption and decryption processes. The only thing that needs to be secret is the key. In addition, as we will demonstrate a bit later, there are common cases in which not all keys need to be kept secret.

Integrity (anti-tampering)

The basic idea behind data integrity is that there should be a way for the recipient of a piece of data to determine whether any modifications are made over a period of time. For example, integrity checks can be used to make sure that data sent over a wire isn’t modified in transit. Plenty of well-known checksums exist that can detect and even correct simple errors. However, such checksums are poor at detecting skilled intentional modifications of the data. Several cryptographic checksums do not have these drawbacks if used properly.
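A simple sketch of a cryptographic checksum in Python (standard library hashlib, assuming SHA-256 as the hash): even a one-character change to the data produces a completely different digest, so the recipient can detect modification.

import hashlib

original = b"pay Bob 100 dollars"
received = b"pay Bob 900 dollars"                     # tampered with in transit

digest_sent = hashlib.sha256(original).hexdigest()    # sent along with the data
digest_check = hashlib.sha256(received).hexdigest()   # recomputed by the recipient

print(digest_sent == digest_check)   # False -> the data was modified

In practice a keyed checksum such as an HMAC is used, so that an attacker who alters the data cannot simply recompute a matching checksum.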

Non-repudiation

Cryptography can enable one person A to prove that a message he received from B actually came from B. B can essentially be held accountable when she sends A such a message, as she cannot deny (repudiate) that she sent it. In the real world, you have to assume that an attacker does not compromise particular cryptographic keys. The SSL protocol does not support non-repudiation, but it is easily added by using digital signatures. These simple services can be used to stop a wide variety of network attacks.

Snooping (passive eavesdropping)
An attacker watches network traffic as it passes and records interesting data, such as credit card information.

Tampering
An attacker monitors network traffic and maliciously changes data in transit (for example, an attacker may modify the contents of an email message).

Spoofing
An attacker forges network data so that it appears to come from a different network address than the one it actually comes from. This sort of attack can be used to thwart systems that authenticate based on host information (e.g. an IP address).

Hijacking
Once a legitimate user authenticates, a spoofing attack can be used to “hijack” the connection.

Capture-replay
In some circumstances, an attacker can record and replay network transactions to ill effect. For example, say that you sell a single share of stock while the price is high. If the network protocol is not properly designed and secured, an attacker could record that transaction, then replay it later when the stock price has dropped, and do so repeatedly until all your stock is gone.
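One hedged sketch of a common defence (not something prescribed by the text): have the server attach a fresh nonce to each transaction and refuse to process any nonce it has already seen.

# Replay-protection sketch: each transaction carries a unique nonce,
# and the server rejects any nonce it has seen before.
import uuid

seen_nonces = set()

def process_transaction(order: dict) -> bool:
    nonce = order["nonce"]
    if nonce in seen_nonces:
        return False            # replayed transaction, reject it
    seen_nonces.add(nonce)
    # ... execute the sale here ...
    return True

sell_order = {"action": "sell", "shares": 1, "nonce": str(uuid.uuid4())}
print(process_transaction(sell_order))  # True: first copy is accepted
print(process_transaction(sell_order))  # False: replayed copy is rejected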



AREAS OF SECURITY
These attacks are just as easy to mount if you are on the same local network as one of the endpoints. Talented high school students who can use other people's software to break into machines can easily use those tools to attack real systems. Moreover, authentication information is often among the information "snooped" off a network.

Identification and authentication
Identification is typically performed by logging in or entering a username. But after entering a name, a user may be asked to prove it, so that the system can be certain that one user is not trying to impersonate another. Authentication techniques can combine two or more approaches.
User passwords are commonly employed. Password guessing attacks use computers and actual dictionaries or large word lists to try likely passwords. Brute force attacks generate and try all possible passwords. To block these attacks, users should choose strong passwords.
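A toy sketch of a dictionary attack in Python (the word list and hashing scheme are illustrative assumptions; real attackers use far larger dictionaries, and real systems salt and stretch their password hashes):

import hashlib

stored_hash = hashlib.sha256(b"sunshine").hexdigest()      # a weak, guessable password
word_list = ["password", "letmein", "sunshine", "dragon"]  # tiny stand-in dictionary

for candidate in word_list:
    # Hash each candidate and compare it with the stored hash.
    if hashlib.sha256(candidate.encode()).hexdigest() == stored_hash:
        print("password guessed:", candidate)
        break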
Physical characteristics can be determined by biometric devices. In addition to fingerprints, voice recognition, retina patterns, and pictures are used.

Access control
The system uses a validated user identity to limit the actions the user can perform. An access control policy is a series of acceptable triples (user, object, action), such as (system administrator, password file, modify), meaning that the user "system administrator" is allowed to perform the action "modify" on the object "password file." An access control list (ACL) is a set of these triples. Access control lists can be represented as a two-dimensional matrix, as a set of rules, or in other ways.
Before permission to access an object is allowed, a reference monitor (also known as a reference validation mechanism or access control mechanism) checks that the access is allowable. A reference monitor must be complete (invoked to validate every reference permission), correct (made to implement the intended access control policy exactly), and tamperproof (unable to be disabled).
Reference monitors can simply process a representation of the access control policy in list or table form. Alternatively, they can process capabilities, which are revalidated access "tickets." The access control system gives a user a capability to perform a certain access on a particular object, and the user later presents the capability to a reference monitor, which will inspect the capability and allow the access. Capabilities are useful in networked and distributed systems, in which access control may be done at one point and actions on objects may be done elsewhere.
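A minimal sketch of these ideas in Python (the users and objects below are hypothetical): the policy is a set of (user, object, action) triples, and a reference-monitor-style function is consulted before every access.

# Access control list as a set of (user, object, action) triples.
acl = {
    ("system administrator", "password file", "modify"),
    ("alice", "payroll database", "read"),
}

def reference_monitor(user: str, obj: str, action: str) -> bool:
    # Complete: called for every access; correct: implements exactly the ACL above.
    return (user, obj, action) in acl

print(reference_monitor("system administrator", "password file", "modify"))  # True, allowed
print(reference_monitor("alice", "password file", "modify"))                 # False, denied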

Security of Programs
Computer programs are both part of the protection and part of the things protected in computer security. Programs implement access controls and other technical security controls. But those same programs must be protected against accesses that would modify or disable their ability to protect. And those programs must be implemented correctly.
Correctness, completeness, and exactness
A computer program is correct if it meets the requirements for which it was designed. A program is complete if it meets all requirements. Finally, a program is exact if it performs only those operations specified by the requirements. Computer security requires correct, complete, and exact programs, and nothing more. However, every program has inevitable side effects.
For example, a program inevitably assigns values to internal variables, uses computing time, and causes entries to be generated in audit logs. Although side effects seem benign, they can be used maliciously to leak information. The exactness requirement really concerns only those significant operations specified by requirements, but in security almost any side effect can be significant. Determining which additional actions are security relevant is difficult, if not impossible.
Correctness and completeness can be determined to some degree by careful testing, although with large or complex systems it may be infeasible to test all possible situations. It is difficult to test security systems appropriately, because they can be large and complex, and because it is hard to simulate all the environments and approaches by which systems can be attacked.

Malicious code
Computing is so fast and complex that users cannot know everything a program is doing. Programs can be modified or replaced by hostile forms, with the replacements seeming outwardly the same as the originals. The general term "malicious code" covers Trojan horses, viruses, worms, and trapdoors. Malicious code has been present in computing systems since the 1960s, and it is increasingly prevalent and serious in impact. Unfortunately, there are no known complete forms of protection against malicious code. A Trojan horse is a program that has an undocumented function in addition to an apparent function. For example, a program may ostensibly format and display electronic mail messages while also covertly transmitting sensitive data.

A virus is a program that replicates and transfers itself to another computing system. When executed, each copy can also replicate, so that the infection spreads at a geometric rate.
A virus typically inserts its replicated copy into another executable program so that when the other program is executed, so is the copy of the virus. Viruses often perform covert malicious actions.
A worm is a program that, like a virus, seeks to replicate and spread. However, the goal of the worm is only to spread and consume resources. The malicious effect of the worm is denial of service by exhaustion of resources.

A trapdoor is an undocumented entry point into a program. The trapdoor is inserted by a programmer to allow discreet access to a program, possibly with exceptional privileges. A user who had legitimate access at one time might have installed the trapdoor as a means of obtaining access in the future.

All these forms of malicious code are serious security threats for several reasons. First, malicious code can be relatively small, so that it is not readily detected. Second, its actions can be concealed: if a program fails to perform as it did, the change is evident, but an attacker can cause the change to be subtle, delayed, or sporadic, making it very difficult to detect, let alone diagnose and correct. The covert effect of malicious code can be almost anything: it can delete files, transmit messages or files, modify documents or data files, and block a user from accessing a computer system. The attack can be transmitted in pieces that activate only when the entire attack has been delivered.
Finally, protecting against malicious code is difficult: The only known totally effective countermeasure is not to accept any executable items from anyone else, a solution that is scarcely acceptable for current networking and information sharing environments.

Security of code
It is infeasible for a user to determine that a program is secure. The user has little evidence on which to base an opinion, an insecure program may intentionally hide its weaknesses, and many users have little control even over the sources from which programs are derived. Even well-intentioned programmers can fail. Beyond principles of good software



Database Security
A database is a collection of records containing fields, organized in such a way that a single user can be allowed access to none, some, or all of the data. Typically the data are shared among several users, although not every user will have access to every item of data. A database is accessed by a database management system that performs the user interface to the database.
Integrity is a much more encompassing issue for databases than for general applications programs, because of the shared nature of the data. Integrity has many interpretations, such as assurance that data are not inadvertently overwritten, lost, or scrambled.