INTRODUCTION
Cryptography is the science of devising methods that allow information to be sent in a secure form, such that the intended recipient is the only person able to retrieve it.
CRYPTOGRAPHY
Process of Encryption and Decryption
A message being sent is known as plain text. The message is encoded using a cryptographic algorithm; this process is called encryption. An encrypted message is known as cipher text, and it is turned back into plain text by the process of decryption.
SYMMETRIC CRYPTOGRAPHY
Symmetric algorithms have one key that is used both to encrypt and decrypt the message, hence the name. Now what is a key?
KEY: A key is a value that causes a cryptographic algorithm to run in a specific manner and produce a specific cipher text as output. Key size is usually measured in bits; in general, the larger the key, the harder it is to defeat the algorithm by exhaustive search.
In symmetric cryptography, the two parties that exchange messages use the same algorithm. Only the key is changed from time to time. The same plain text with a different key results in a different cipher text. The encryption algorithm is available to the public, and hence should be strong and well tested: the more powerful the algorithm, the less likely it is that an attacker will be able to decrypt the resulting cipher text.
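To make the role of the key concrete, here is a minimal sketch in Python (illustrative only: repeating-key XOR is not a secure cipher). It shows the defining property of symmetric cryptography described above: the same key both encrypts and decrypts, and the same plain text under a different key yields a different cipher text.

    def xor_cipher(data: bytes, key: bytes) -> bytes:
        # XOR each byte with the repeating key; XOR is its own inverse,
        # so this one function both encrypts and decrypts.
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    plaintext = b"attack at dawn"
    c1 = xor_cipher(plaintext, b"key-one")
    c2 = xor_cipher(plaintext, b"key-two")
    assert c1 != c2                                  # different key, different cipher text
    assert xor_cipher(c1, b"key-one") == plaintext   # the same key decrypts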
Symmetric cryptography provides a means of satisfying the requirements of message content security, because the content can’t be read without the secret key. There remains a risk of exposure, because neither party can be sure that the other party has not exposed the secret key to a third party.
Key Management
A major difficulty with symmetric schemes is that the secret key has to be possessed by both parties, and hence has to be transmitted from whoever creates it to the other party. Moreover, if the key is compromised, all of the message transmission security measures are undermined. The steps taken to provide a secure mechanism for creating and passing on the secret key are referred to as key management.
The algorithm used for symmetric cryptography is the Data Encryption Standard (DES), which was designed to satisfy the following criteria:
• Provides a high level of security
• The security depends on keys, not the secrecy of the algorithm
• The security is capable of being evaluated
• The algorithm is completely specified and easy to understand
• It is efficient to use and adaptable
• Must be available to all users
• Must be exportable
Data Encryption Algorithm
DEA is a symmetric, block-cipher algorithm with a 64-bit key (of which 56 bits are effective) and a block size of 64 bits. DEA has 16 rounds, meaning the main algorithm is repeated 16 times to produce the cipher text. The time required to find a key by brute force grows exponentially with the key length, and increasing the number of rounds further strengthens the cipher against analytical shortcut attacks.
How DES Works in Detail
DES is a block cipher, meaning it operates on plaintext blocks of a given size (64 bits) and returns cipher text blocks of the same size. Thus DES defines a permutation among the 2^64 (read: "2 to the 64th power") possible arrangements of 64 bits, each of which may be either 0 or 1. Each block of 64 bits is divided into two 32-bit halves: a left half L and a right half R. (This division is only used in certain operations.)
Example: Let M be the plain text message M = 0123456789ABCDEF, where M is in hexadecimal (base 16) format. Rewriting M in binary format, we get the 64-bit block of text:
M = 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111
L = 0000 0001 0010 0011 0100 0101 0110 0111
R = 1000 1001 1010 1011 1100 1101 1110 1111
The first bit of M is "0". The last bit is "1". We read from left to right.
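The hex-to-binary conversion above is easy to reproduce. A small Python sketch using the example values from the text:

    M = "0123456789ABCDEF"
    bits = bin(int(M, 16))[2:].zfill(64)   # 64-bit binary form of M
    L, R = bits[:32], bits[32:]            # left and right 32-bit halves
    print(L)  # 00000001001000110100010101100111
    print(R)  # 10001001101010111100110111101111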
DES operates on the 64-bit blocks using a 56-bit key. The key is actually stored as 64 bits, but every 8th bit is not used (i.e., bits numbered 8, 16, 24, 32, 40, 48, 56, and 64). We will nevertheless number the bits from 1 to 64, going left to right, in the following calculations; as you will see, those eight bits are eliminated when we create the subkeys.
Example: Let K be the hexadecimal key K = 133457799BBCDFF1. This gives us as the binary key (setting 1 = 0001, 3 = 0011, etc., and grouping together every eight bits, of which the last one in each group will be unused):
K = 00010011 00110100 01010111 01111001 10011011 10111100 11011111 11110001
The DES algorithm uses the following steps:
Create 16 subkeys, each of which is 48 bits long.
The 64-bit key is permuted according to the following table, PC-1. Since the first entry in the table is "57", this means that the 57th bit of the original key K becomes the first bit of the permuted key K+. The 49th bit of the original key becomes the second bit of the permuted key. The 4th bit of the original key is the last bit of the permuted key. Note that only 56 bits of the original key appear in the permuted key.
PC-1
57 49 41 33 25 17 9
1 58 50 42 34 26 18
10 2 59 51 43 35 27
19 11 3 60 52 44 36
63 55 47 39 31 23 15
7 62 54 46 38 30 22
14 6 61 53 45 37 29
21 13 5 28 20 12 4
Example: From the original 64-bit key
K = 00010011 00110100 01010111 01111001 10011011 10111100 11011111 11110001
We get the 56-bit permutation
K+ = 1111000 0110011 0010101 0101111 0101010 1011001 1001111 0001111
Next, split this key into left and right halves, C0 and D0, where each half has 28 bits.
Example: From the permuted key K+, we get
C0 = 1111000 0110011 0010101 0101111
D0 = 0101010 1011001 1001111 0001111
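The PC-1 step can be expressed directly in code. A sketch reproducing the example above (bit positions are 1-based, reading the original key left to right):

    PC1 = [57, 49, 41, 33, 25, 17,  9,
            1, 58, 50, 42, 34, 26, 18,
           10,  2, 59, 51, 43, 35, 27,
           19, 11,  3, 60, 52, 44, 36,
           63, 55, 47, 39, 31, 23, 15,
            7, 62, 54, 46, 38, 30, 22,
           14,  6, 61, 53, 45, 37, 29,
           21, 13,  5, 28, 20, 12,  4]

    K = bin(int("133457799BBCDFF1", 16))[2:].zfill(64)
    K_plus = "".join(K[p - 1] for p in PC1)   # apply PC-1 (drops the 8 unused bits)
    C0, D0 = K_plus[:28], K_plus[28:]
    print(C0)  # 1111000011001100101010101111
    print(D0)  # 0101010101100110011110001111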
With C0 and D0 defined, we now create sixteen blocks Cn and Dn, 1<=n<=16. Each pair of blocks Cn and Dn is formed from the previous pair Cn-1 and Dn-1, respectively, for n = 1, 2, ..., 16, using the following schedule of "left shifts" of the previous block. To do a left shift, move each bit one place to the left, except for the first bit, which is cycled to the end of the block.
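The shift-schedule table itself does not appear above; in the published DES standard, the rotation is by one position in rounds 1, 2, 9, and 16, and by two positions in every other round. A sketch assuming that standard schedule, continuing from the C0 and D0 computed above:

    SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

    def rotate_left(block: str, n: int) -> str:
        # Cycle the first n bits of the 28-bit block around to the end.
        return block[n:] + block[:n]

    C0 = "1111000011001100101010101111"
    D0 = "0101010101100110011110001111"
    C, D = [C0], [D0]
    for s in SHIFTS:
        C.append(rotate_left(C[-1], s))
        D.append(rotate_left(D[-1], s))
    print(C[1])         # 1110000110011001010101011111
    assert C[16] == C0  # the 28 total shifts bring each half full circle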
Asymmetric Key Cryptography
Key
It's easiest to think of keys in a conceptual way. First, visualize a cipher as a machine. To run the machine, you need to stick a key in it. You can stuff plaintext in one side and get cipher-text out the other side. You can run the cipher in reverse to convert cipher-text to plaintext.
Cipher
To protect your information from curious eyes, you need to take extra precautions. A common way to protect information is to encrypt it at the sending end and decrypt it at the receiving end. Encryption is the process of taking data, called plaintext, and mathematically transforming it into an unreadable mess, called cipher-text. Decryption takes the cipher-text and transforms it back into plaintext.
Asymmetric Ciphers
• The shortcomings of symmetric ciphers are addressed by asymmetric ciphers, also called public key ciphers.
• These ciphers actually involve a public key, which can be freely distributed, and a private key, which is secret.
• These keys are always generated in matching pairs.
• Public keys really are public; you can publish them in a newspaper or write them in the sky.
• No one can violate your privacy or impersonate you without your private key.
• The mechanism for distributing public keys, however, is a big challenge.
• Data encrypted using the public key can be decrypted using the private key. No other key will decrypt the data, and the private key will decrypt only data that was encrypted using the matching public key.
• In some cases, the reverse of the process also works; data encrypted with the private key can be decrypted with the public key.
For example, if Marian wants to send a message to Robin Hood, she can encrypt it using Robin Hood's public key. Only the matching private key, which should be known only to Robin Hood, can be used to decrypt the message.
The Sheriff can intercept this message, but it doesn't do him any good because the message can be decrypted only with Robin Hood's private key. And as long as Robin Hood keeps his private key secret, he can give his public key to anyone who wants it, even the Sheriff. With the public key, the Sheriff can send Robin messages (if he wants), but can't decode anything that others send.
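The Marian/Robin Hood exchange can be illustrated with textbook RSA. The sketch below uses deliberately tiny primes so the arithmetic is visible; real keys are thousands of bits long and use padding schemes, so treat this purely as an illustration of the public/private key relationship:

    # Robin Hood's key pair (toy numbers):
    p, q = 61, 53
    n = p * q        # 3233, the public modulus
    e = 17           # public exponent: (n, e) is published
    d = 2753         # private exponent: e*d == 1 mod (p-1)*(q-1); kept secret

    message = 65                        # a message encoded as a number < n
    ciphertext = pow(message, e, n)     # Marian encrypts with the public key
    recovered = pow(ciphertext, d, n)   # only the private key decrypts it
    assert recovered == message         # the Sheriff, lacking d, is stuck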
Asymmetric Key Algorithms
Historically, distributing the keys has always been the weakest link in most cryptosystems. No matter how strong a cryptosystem was, if an intruder could steal the key, the system was worthless. Cryptologists always took for granted that the encryption key and decryption key were the same (or easily derived from one another). But the key had to be distributed to all users of the system. Thus, it seemed as if there was an inherent built-in problem. Keys had to be protected from theft, but they also had to be distributed, so they could not just be locked up in a bank vault.
In 1976, two researchers at Stanford University, Diffie and Hellman (1976), proposed a radically new kind of cryptosystem, one in which the encryption and decryption keys were different, and the decryption key could not feasibly be derived from the encryption key. In their proposal, the (keyed) encryption algorithm, E, and the (keyed) decryption algorithm, D, had to meet three requirements. These requirements can be stated simply as follows:
1. D (E (P)) = P.
2. It is exceedingly difficult to deduce D from E.
3. E cannot be broken by a chosen plaintext attack.
The first requirement says that if we apply D to an encrypted message, E (P), we get the original plaintext message, P, back. Without this property, the legitimate receiver could not decrypt the cipher text. The second requirement speaks for itself. The third requirement is needed because, as we shall see in a moment, intruders may experiment with the algorithm to their hearts' content. Under these conditions, there is no reason that the encryption key cannot be made public.
The two main branches of public key cryptography are:
1. Public key encryption — a message encrypted with a user's public key cannot be decrypted by anyone except the user possessing the corresponding private key. This is used to ensure confidentiality.
2. Digital signatures — a message signed with a user's private key can be verified by anyone who has access to the user's public key, thereby proving that the user signed it and that the message has not been tampered with. This is used to ensure authenticity (a short sketch follows the analogies below).
• An analogy for public-key encryption is that of a locked mailbox with a mail slot. The mail slot is exposed and accessible to the public; its location (the street address) is in essence the public key. Anyone knowing the street address can go to the door and drop a written message through the slot; however, only the person who possesses the key can open the mailbox and read the message.
• An analogy for digital signatures is the sealing of an envelope with a personal wax seal. The message can be opened by anyone, but the presence of the seal authenticates the sender. A central problem for public-key cryptography is proving that a public key is authentic, and has not been tampered with or replaced by a malicious third party. The usual approach to this problem is to use a public-key infrastructure (PKI), in which one or more third parties, known as certificate authorities, certify ownership of key pairs.
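The signing direction mentioned in branch 2 is the same arithmetic run in reverse. Reusing the toy parameters from the earlier sketch (again purely illustrative; real systems sign a cryptographic hash of the message):

    n, e, d = 3233, 17, 2753   # toy key pair from the previous sketch

    digest = 123                            # stand-in for a hash of the message
    signature = pow(digest, d, n)           # signed with the PRIVATE key
    assert pow(signature, e, n) == digest   # anyone can verify with the PUBLIC key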
Goals of Computer Security
The principles and goals of computer security include the following:
• To protect computer assets from Human errors, natural disasters, physical and electronic maliciousness.
• To keep privileged and private information confidential and available only to authorized people. This is generally accomplished by identifying the individual requesting access with a login ID, authenticating that identity with a password, configuring computer access controls to match authorization rules (e.g., limiting login IDs to particular files), and encrypting data that may travel outside the computer's access controls (over the network, for example). Note that most desktop computers do not require a login ID and password, so all access control for confidentiality depends on the operator's actions rather than computer-controlled mechanisms. In particular, running a program on most desktops turns control of the desktop over to the author of that program... for better or worse.
• To keep data intact. This is done primarily through the same mechanisms that keep the data confidential with additional access controls to limit who can change the data and, perhaps, mechanisms to alert that the data has been changed. Data that travels outside the computer (over the network, for example) can be protected by cryptographic methods which make it difficult or impossible for the data to be modified without detection. As with confidentiality, most desktop computers depend upon operator actions rather than computer controlled access control to ensure data integrity.
• To keep services available. A variety of things come into play here. Access controls limit the number of avenues of potential attack. Redundant power and hardware limit the effect of failures. Backups serve as a recovery mechanism when the inevitable failure occurs. Capacity planning, in concert with access controls, helps prevent service overloads. Constant monitoring provides trend data and ongoing operational status, which in turn improve response to events or changes in use.
The basic GOALS are described below:
1. Confidentiality
Protecting information from being read or copied by people who are not authorized by the information's owner to read or copy it. This includes pieces of information from which the confidential information can be inferred. It includes failure of confidentiality or privacy or inappropriate disclosure. Privacy of personal information or of data about individuals is a significant concern, as is that of sensitive corporate data (such as trade secrets and proprietary information) and government classified data.
Threats to Confidentiality:
• Interception/Wiretapping (sniffers)
– Used to be commonly installed after a system break-in
– Can capture passwords, sensitive info
• Illicit copying (proprietary information, etc.)
– Copied company documents, plans
– Copied source code for proprietary software
2. Data integrity
Protecting information (including programs, backup tapes, file creation times, documentation, etc.) from being deleted or altered without the permission of the information's owner. Integrity means different things in different contexts: sometimes it means that data or programs should be returned exactly as they were when they were recorded; sometimes that modifications to data or programs should be made only by authorized persons, or by authorized programs, or in certain ways; and sometimes that the quality of data should be maintained. Data and programs should meet the demands of the way in which they are to be used.
Threats to Integrity
• Modification
– Changing data values (database)
– Changing programs (viruses, backdoors, Trojan horses, game cheats)
– Changing hardware (hardware key capture)
– Can be accidental corruption (interrupted DB transaction)
• Fabrication
– Spurious transactions
– Replay attacks
3. Availability
Ensuring that computer services are not degraded or made unavailable without authorization.
A failure of availability is also known as denial of service. Partial denial of service is lack of capacity or unacceptable responsiveness. Computer users expect programs and data to be available on demand to meet computing needs. Applications such as power generation, stock trading, and even airplane cockpit navigation and aspects of medical care have become so dependent on computing that loss of availability becomes a serious threat to life or society. Even on a less dramatic level, people have become dependent on computers in aspects of everyday life, and so maintaining expected availability of computers is probably the most important of the three goals of computer security.
Threats to Availability
• Denial of Service (DoS)
– Commonly thought of as network/system flooding
– Can be more basic: disrupting power
– Deleting files
– Hardware destruction (fire, tornado, etc.)
COMPUTER SECURITY
INTRODUCTION TO COMPUTER SECURITY
Security is the process of protecting against threats to computing systems. A threat is an event that can cause harm to computers, data or programs, or computations.
A failure of computer security occurs because of a vulnerability, or weakness, in a computing system. A threat agent (a person, event, or circumstance) exploits the vulnerability.
Computer Security involves protecting against failures of availability, integrity or correctness, and confidentiality or privacy.
A failure of availability is also known as denial of service. Partial denial of service is lack of capacity or unacceptable responsiveness.
Computer users expect programs and data to be available on demand to meet computing needs.
Applications such as power generation, stock trading, and even airplane cockpit navigation and aspects of medical care have become so dependent on computing that loss of availability becomes a serious threat to life or society. Even on a less dramatic level, people have become dependent on computers in aspects of everyday life, and so maintaining expected availability of computers is probably the most important of the three goals of computer security.
PRINCIPLES OF COMPUTER SECURITY
Computer security has several major principles that it strives to uphold:
1. Confidentiality: Protecting information from being read or copied by people who are not authorized by the information's owner to read or copy it. This includes pieces of information from which the confidential information can be inferred.
2. Data Integrity: Protecting information including programs, backup tapes, file creation times, documentation, etc. from being deleted or altered without the permission of the information's owner.
3. Availability: Ensuring that the computer services are not degraded or made unavailable without authorizations.
TECHNIQUES FOR COMPUTER SECURITY
The principles of information security are upheld using three main techniques:
1. Prevention:
Stopping a security breach from happening, often by identifying vulnerabilities in a system and putting safeguards in place. Examples of this technique include access control (passwords), firewalls, and encryption. It is often impossible to completely prevent security breaches.
2. Detection: Discovering that a security breach has occurred or is occurring (detection); identifying the nature of the attack (localization); determining the identity and whereabouts of the perpetrators (identification); and assessing the nature of the attack (assessment). Examples of this technique include intrusion detection systems, system logs, and digital watermarking.
Detection enables response.
3. Response: Mitigating the consequences of the security breach, or deterring future attacks, usually by punishment. Examples include insurance and prosecution.
NEED FOR COMPUTER SECURITY
Confidentiality (secrecy)
Data is kept secret from those without the proper credentials, even if that data travels through an insecure medium. In practice, this means potential attackers might be able to see garbled data that is essentially “locked,” but they should not be able to unlock that data without the proper information. In classic cryptography, the encryption (scrambling) algorithm was the secret. In modern cryptography, that isn’t feasible: the algorithms are public, and cryptographic keys are used in the encryption and decryption processes. The only thing that needs to be secret is the key. In addition, there are common cases in which not all keys need to be kept secret.
Integrity (anti-tampering)
The basic idea behind data integrity is that there should be a way for the recipient of a piece of data to determine whether the data has been modified since it was sent. For example, integrity checks can be used to make sure that data sent over a wire isn’t modified in transit. Plenty of well-known checksums exist that can detect, and even correct, simple errors.
However, such checksums are poor at detecting skilled intentional modifications of the data. Several cryptographic checksums do not have these drawbacks if used properly.
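The difference between a plain checksum and a keyed cryptographic checksum can be shown with Python's standard library. The key below is a placeholder; the point is that an attacker who tampers with the data can trivially recompute an unkeyed CRC, but cannot produce a valid HMAC without the shared secret:

    import hashlib
    import hmac
    import zlib

    key = b"shared-secret"      # placeholder key known only to the two parties
    data = b"pay 100 to alice"

    tag = hmac.new(key, data, hashlib.sha256).digest()   # keyed checksum

    tampered = b"pay 900 to alice"
    # An unkeyed checksum offers no protection: the attacker just recomputes it.
    print(zlib.crc32(data), zlib.crc32(tampered))
    # A keyed checksum does: without the key, no valid tag can be forged.
    forged_tag = hmac.new(b"attacker-guess", tampered, hashlib.sha256).digest()
    assert not hmac.compare_digest(forged_tag, tag)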
Non-repudiation
Cryptography can enable one person, A, to prove that a message received from B actually came from B. B can essentially be held accountable for such a message, as B cannot deny (repudiate) having sent it. In the real world, you have to assume that an attacker has not compromised the particular cryptographic keys involved. The SSL protocol does not itself provide non-repudiation, but it is easily added by using digital signatures. These simple services can be used to stop a wide variety of network attacks.
Snooping (passive eavesdropping)
An attacker watches network traffic as it passes and records interesting data, such as credit card information.
Tampering
An attacker monitors network traffic and maliciously changes data in transit (for example, an attacker may modify the contents of an email message).
Spoofing
An attacker forges network data, appearing to come from a different network address than he actually comes from. This sort of attack can be used to thwart systems that authenticate based on host information (e.g. an IP address).
Hijacking
Once a legitimate user authenticates, a spoofing attack can be used to “hijack” the connection.
Capture-replay
In some circumstances, an attacker can record and replay network transactions to ill effect. For example, say that you sell a single share of stock while the price is high. If the network protocol is not properly designed and secured, an attacker could record that transaction, then replay it later when the stock price has dropped, and do so repeatedly until all your stock is gone.
AREAS OF SECURITY
Attacks are just as easy if the attacker is on the same local network as one of the endpoints. Even talented high school students who use other people's software to break into machines can manage to use these tools to attack real systems. Moreover, authentication information is usually among the information that can be "snooped" off a network.
Identification and authentication
Identification is typically performed by logging in or entering a username. After entering a name, a user may be asked to prove it, so that the system can be certain that one user is not trying to impersonate another. Authentication is typically based on something the user knows (a password), something the user has (a token), or something the user is (a physical characteristic); stronger techniques combine two or more of these approaches.
User passwords are commonly employed. Password guessing attacks use computers and actual dictionaries or large word lists to try likely passwords. Brute force attacks generate and try all possible passwords. To block these attacks, users should choose strong passwords.
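A rough calculation shows why strong passwords matter: the brute-force search space grows exponentially with length and character variety. The guessing rate below is an assumption for illustration, not a measured figure:

    RATE = 1_000_000_000  # assumed guesses per second for a hypothetical attacker

    for charset, length in [(26, 6), (62, 8), (95, 12)]:
        guesses = charset ** length
        years = guesses / RATE / (3600 * 24 * 365)
        print(f"{charset}^{length} = {guesses:.1e} passwords, ~{years:.1e} years to exhaust")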
Physical characteristics can be determined by biometric devices. In addition to fingerprints, voice recognition, retina patterns, and pictures are used.
Access control
The system uses a validated user identity to limit the actions the user can perform. An access control policy is a series of acceptable triples (user, object, action), such as (system administrator, password file, modify), meaning that the user "system administrator" is allowed to perform the action "modify" on the object "password file." An access control list (ACL) is a set of these triples. Access control lists can be represented as a two-dimensional matrix, as a set of rules, or in other ways.
Before permission to access an object is allowed, a reference monitor (also known as a reference validation mechanism or access control mechanism) checks that the access is allowable. A reference monitor must be complete (invoked to validate every reference permission), correct (made to implement the intended access control policy exactly), and tamperproof (unable to be disabled).
Reference monitors can simply process a representation of the access control policy in list or table form. Alternatively, they can process capabilities, which are revalidated access "tickets." The access control system gives a user a capability to perform a certain access on a particular object, and the user later presents the capability to a reference monitor, which will inspect the capability and allow the access. Capabilities are useful in networked and distributed systems, in which access control may be done at one point and actions on objects may be done elsewhere.
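The (user, object, action) triples above map naturally onto a small reference-monitor sketch. The users and objects here are hypothetical examples:

    # The ACL is a set of permitted (user, object, action) triples.
    ACL = {
        ("system administrator", "password file", "modify"),
        ("alice", "report.doc", "read"),
    }

    def reference_monitor(user: str, obj: str, action: str) -> bool:
        # Complete: every access request is validated here.
        # Correct: it implements exactly the policy encoded in the ACL.
        return (user, obj, action) in ACL

    assert reference_monitor("system administrator", "password file", "modify")
    assert not reference_monitor("alice", "password file", "modify")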
Security of Programs
Computer programs are both part of the protection and part of the things protected in computer security. Programs implement access controls and other technical security controls. But those same programs must be protected against accesses that would modify or disable their ability to protect. And those programs must be implemented correctly.
Correctness, completeness, and exactness
A computer program is correct if it meets the requirements for which it was designed. A program is complete if it meets all requirements. Finally, a program is exact if it performs only those operations specified by the requirements. Computer security requires programs that are correct, complete, and exact, and nothing more; yet every program has inevitable side effects.
For example, a program inevitably assigns values to internal variables, uses computing time, and causes entries to be generated in audit logs. Although side effects seem benign, they can be used maliciously to leak information. The exactness requirement really concerns only those significant operations specified by requirements, but in security almost any side effect can be significant. Determining which additional actions are security relevant is difficult, if not impossible.
Correctness and completeness can be determined to some degree by careful testing, although with large or complex systems it may be infeasible to test all possible situations. It is difficult to test security systems appropriately, because they can be large and complex, and because it is hard to simulate all the environments and approaches by which systems can be attacked.
Malicious code
Computing is so fast and complex that users cannot know everything a program is doing. Programs can be modified or replaced by hostile forms, with the replacements seeming outwardly the same as the originals. The general term "malicious code" covers Trojan horses, viruses, worms, and trapdoors. Malicious code has been present in computing systems since the 1960s, and it is increasingly prevalent and serious in impact. Unfortunately, there are no known complete forms of protection against malicious code. A Trojan horse is a program that has an undocumented function in addition to an apparent function. For example, a program may ostensibly format and display electronic mail messages while also covertly transmitting sensitive data.
A virus is a program that replicates and transfers itself to another computing system. When executed, each copy can also replicate, so that the infection spreads at a geometric rate.
A virus typically inserts its replicated copy into another executable program so that when the other program is executed, so is the copy of the virus. Viruses often perform covert malicious actions.
A worm is a program that, like a virus, seeks to replicate and spread. However, the goal of the worm is only to spread and consume resources. The malicious effect of the worm is denial of service by exhaustion of resources. A trapdoor is an undocumented entry point into a program. The trapdoor is inserted by a programmer to allow discreet access to a program, possibly with exceptional privileges. A user who had legitimate access at one time might have installed the trapdoor as a means of obtaining access in the future. All these forms of malicious code are serious security threats for several reasons. First, malicious code can be relatively small, so that it is not readily detected. Second, its actions can be concealed: If a program fails to perform as it did, the change is evident, but an attacker can cause the change to be subtle, delayed, or sporadic, making it very difficult to detect, let alone diagnose and correct. The covert effect of malicious code can be almost anything: It can delete files, transmit messages or files, modify documents or data files, and block a user from accessing a computer system. The attack can be transmitted in pieces that activate only when the entire attack has been delivered.
Finally, protecting against malicious code is difficult: The only known totally effective countermeasure is not to accept any executable items from anyone else, a solution that is scarcely acceptable for current networking and information sharing environments.
Security of code
It is infeasible for a user to determine that a program is secure: the user has little evidence on which to base an opinion, an insecure program may intentionally hide its weaknesses, and many users have little control even over the sources from which their programs are obtained. Even well-intentioned programmers can fail.
Database Security
A database is a collection of records containing fields, organized in such a way that a single user can be allowed access to none, some, or all of the data. Typically the data are shared among several users, although not every user will have access to every item of data. A database is accessed by a database management system that performs the user interface to the database.
Integrity is a much more encompassing issue for databases than for general applications programs, because of the shared nature of the data. Integrity has many interpretations, such as assurance that data are not inadvertently overwritten, lost, or scrambled.
Trojan Horse
The most important difference between a trojan horse and a virus is that trojans do not spread themselves. Trojan horses disguise themselves as valuable and useful software available for download on the Internet, and most people are fooled by this ploy into downloading the malware disguised as some other application. The name comes from the mythical "Trojan Horse" that the ancient Greeks set upon the city of Troy.
A trojan horse is typically separated into two parts – a server and a client. It’s the client that is cleverly disguised as significant software and positioned in peer-to-peer file sharing networks, or unauthorized download websites. Once the client Trojan executes on your computer, the attacker, i.e. the person running the server, has a high level of control over your computer, which can lead to destructive effects depending on the attacker’s purpose.
A trojan horse virus can spread in a number of ways. The most common means of infection is through email attachments. The developer of the virus usually uses various spamming techniques in order to distribute the virus to unsuspecting users. Another method used by malware developers to spread their trojan horse viruses is via chat software such as Yahoo Messenger and Skype. Another method used by this virus in order to infect other machines is through sending copies of itself to the people in the address book of a user whose computer has already been infected by the virus.
Trojans are executable programs, which means that when you open the file, it will perform some action(s). In Windows, executable programs have file extensions like "exe", "vbs", "com", "bat", etc. Some actual trojan filenames include: "dmsetup.exe" and "LOVE-LETTER-FOR-YOU.TXT.vbs"
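A simple filename check catches the "double extension" disguise used by files like LOVE-LETTER-FOR-YOU.TXT.vbs. This is only a first-line heuristic, not a substitute for anti-virus software:

    EXECUTABLE_EXTENSIONS = (".exe", ".vbs", ".com", ".bat")

    def looks_executable(filename: str) -> bool:
        # Only the FINAL extension matters to Windows, so "name.TXT.vbs"
        # is a script even though it looks like a text file.
        return filename.lower().endswith(EXECUTABLE_EXTENSIONS)

    assert looks_executable("dmsetup.exe")
    assert looks_executable("LOVE-LETTER-FOR-YOU.TXT.vbs")
    assert not looks_executable("notes.txt")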
Types of Trojan Horse Viruses:
Trojan horses have developed to a remarkable level of cleverness, which makes each one radically different from the others. For a clearer understanding, we have classified them into the following types:
Remote Access Trojans:
Remote Access Trojans are the most frequently available trojans. These give an attacker absolute control over the victim’s computers. The attacker can go through the files and access any personal information about the user that may be stored in the files, such as credit card numbers, passwords, and vital financial documents.
Password Sending Trojans:
The intention of a Password Sending Trojan is to copy all the cached passwords and look for other passwords as you key them into your computer, and send them to particular email addresses. These actions are performed without the awareness of the users. Passwords for restricted websites, messaging services, FTP services and email services come under direct threat with this kind of trojan.
Key Loggers:
Keylogger trojans log the victim's keystrokes and then send the log files to the attacker, who searches them for passwords or other sensitive data. Most keyloggers offer both online and offline recording, and they can be configured to send the log file to a specific email address on a daily basis.
Destructive Trojans:
The only purpose of Destructive Trojans is to destroy and delete files from the victims’ computers. They can automatically delete all the core system files of the computer. The destructive trojan could be controlled by the attacker or could be programmed to strike like a logic bomb, starting on a particular day or at specific time.
Denial of Service (DoS) Attack Trojans:
The core design intention behind Denial of Service (DoS) Attack Trojan is to produce a lot of internet traffic on the victim’s computer or server, to the point that the Internet connection becomes too congested to let anyone visit a website or download something. An additional variation of DoS Trojan is the Mail-Bomb Trojan, whose key plan is to infect as many computers as possible, concurrently attacking numerous email addresses with haphazard subjects and contents that cannot be filtered.
Proxy/Wingate Trojans:
Proxy/Wingate Trojans convert the victim’s computer into a Proxy/Wingate server. That way, the infected computer is accessible to the entire globe to be used for anonymous access to a variety of unsafe Internet services. The attacker can register domains or access pornographic websites with stolen credit cards or do related illegal activities without being traced.
FTP Trojans:
FTP trojans are possibly the simplest, and are outdated. The only action they perform is to open port 21 (the port for FTP transfers) and let anyone connect to your computer via the FTP protocol. Advanced versions are password-protected, so only the attacker can connect to your computer.
Software Detection Killers:
Software Detection Killers kill popular antivirus/firewall programs that guard your computer to give the attacker access to the victim’s machine.
Note: A Trojan could have any one or a combination of the above mentioned functionalities.
The best way to prevent a trojan horse from entering and infecting your computer is never to open email attachments or files that have been sent by unknown senders. However, not every file you receive is guaranteed to be virus-free, so a good way of protecting your PC against malicious programs like these is to install an antivirus program and keep it up to date.
How do I get rid of trojans?!?
Here are your options; none of them is perfect. I strongly suggest you read through all of them before rushing out and trying to run some program blindly. Remember: that's how you got into this trouble in the first place. Good luck!
1. Clean Re-installation:
Although arduous, this will always be the only sure way to eradicate a trojan or virus. Back up your entire hard disk, reformat the disk, re-install the operating system and all your applications from original CDs, and finally, if you're certain they are not infected, restore your user files from the backup. If you are not up to the task, you can pay for a professional repair service to do it.
2. Anti-Virus Software:
Some of these can handle most of the well-known trojans, but none are perfect, no matter what their advertising claims. You absolutely MUST make sure you have the very latest update files for your programs, or else they will miss the latest trojans. Compared to traditional viruses, today's trojans evolve much quicker and come in many seemingly innocuous forms, so anti-virus software is always going to be playing catch-up. Also, if they fail to find every trojan, anti-virus software can give you a false sense of security, such that you go about your business not realizing that you are still dangerously compromised. There are many products to choose from, but the following are generally effective: AVP, PC-cillin, and McAfee VirusScan. All are available for immediate download, typically with a 30-day free trial. For a more complete review of all major anti-virus programs, including specific configuration suggestions for each, see the HackFix Project's anti-virus software page [all are ext. links]. When you are done, make sure you've updated Windows with all security patches [ext. link].
3. Anti-Trojan Programs:
These programs are the most effective against trojan horse attacks, because they specialize in trojans instead of general viruses. A popular choice is The Cleaner, $30 commercial software with a 30 day free trial. To use it effectively, you must follow hackfix.org's configuration suggestions [ext. link]. When you are done, make sure you've updated Windows with all security patches [ext. link], then change all your passwords because they may have been seen by every "hacker" in the world.
4. IRC Help Channels:
If you're the type that needs some hand-holding, you can find trojan/virus removal help on IRC itself, such as EFnet #dmsetup or DALnet #NoHack. These experts will try to figure out which trojan(s) you have and offer you advice on how to fix it. The previous directions were in fact adapted from advice given by EFnet #dmsetup. (See our networks page if you need help connecting to those networks.)
Denial-of-service attack (DOS)
Goal of an Attacker:
• Reduce the availability of a system to legitimate users, so that the system is unable to provide the services it is supposed to provide.
• Deny you the use of your own resources.
• Hence the name: Denial of Service, or DoS.
Denial-of-service attack:
A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of the concerted efforts of a person or people to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely. Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root nameservers. The term is generally used with regard to computer networks, but is not limited to this field; for example, it is also used in reference to CPU resource management. [1]
One common method of attack involves saturating the target (victim) machine with external communications requests, such that it cannot respond to legitimate traffic, or responds so slowly as to be rendered effectively unavailable. In general terms, DoS attacks are implemented by forcing the targeted computer(s) to reset, by consuming their resources so that they can no longer provide their intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.
Denial-of-service attacks are considered violations of the IAB's Internet proper use policy, and also violate the acceptable use policies of virtually all Internet Service Providers. They also commonly constitute violations of the laws of individual nations.
In a denial-of-service (DoS) attack, an attacker attempts to prevent legitimate users from accessing information or services. By targeting your computer and its network connection, or the computers and network of the sites you are trying to use, an attacker may be able to prevent you from accessing email, websites, online accounts (banking, etc.), or other services that rely on the affected computer.
Symptoms and Manifestations:
The United States Computer Emergency Readiness Team (US-CERT) defines symptoms of denial-of-service attacks to include:
• Unusually slow network performance (opening files or accessing web sites)
• Unavailability of a particular web site
• Inability to access any web site
• Dramatic increase in the number of spam emails received—(this type of DoS attack is considered an e-mail bomb)[3]
Denial-of-service attacks can also lead to problems in the network 'branches' around the actual computer being attacked. For example, the bandwidth of a router between the Internet and a LAN may be consumed by an attack, compromising not only the intended computer, but also the entire network.
If the attack is conducted on a sufficiently large scale, entire geographical regions of Internet connectivity can be compromised without the attacker's knowledge or intent by incorrectly configured or flimsy network infrastructure equipment.
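One practical way to watch for these symptoms is to probe a site you depend on at regular intervals and log the response times: a sustained jump in latency, or a run of failures, is a hint (not proof) that the site is under heavy load or attack. A minimal sketch follows, in the spirit of tools like httping; the URL, timeout, and interval are illustrative choices, not recommendations.

    # Minimal availability/latency probe using only the standard library.
    import time
    import urllib.request

    URL = "https://example.com/"   # illustrative target

    while True:
        start = time.monotonic()
        try:
            with urllib.request.urlopen(URL, timeout=5) as resp:
                elapsed = time.monotonic() - start
                print(f"{resp.status} in {elapsed:.3f}s")
        except Exception as exc:
            print(f"FAILED after {time.monotonic() - start:.3f}s: {exc}")
        time.sleep(10)             # probe every 10 seconds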
Methods of attack:
A "denial-of-service" attack is characterized by an explicit attempt by attackers to prevent legitimate users of a service from using that service. Attacks can be directed at any network device, including attacks on routing devices and web, electronic mail, or Domain Name System servers.
A DoS attack can be perpetrated in a number of ways. The five basic types of attack are:
1. Consumption of computational resources, such as bandwidth, disk space, or processor time.
2. Disruption of configuration information, such as routing information.
3. Disruption of state information, such as unsolicited resetting of TCP sessions.
4. Disruption of physical network components.
5. Obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.
A DoS attack may include execution of malware intended to:
• Max out the processor's usage, preventing any work from occurring.
• Trigger errors in the microcode of the machine.
• Trigger errors in the sequencing of instructions, so as to force the computer into an unstable state or lock-up.
• Exploit errors in the operating system, causing resource starvation and/or thrashing, i.e. to use up all available facilities so no real work can be accomplished.
• Crash the operating system itself.
ICMP flood:
A smurf attack is one particular variant of a flooding DoS attack on the public Internet. It relies on misconfigured network devices that allow packets to be sent to all computer hosts on a particular network via the broadcast address of the network, rather than a specific machine. The network then serves as a smurf amplifier. In such an attack, the perpetrators will send large numbers of IP packets with the source address faked to appear to be the address of the victim. The network's bandwidth is quickly used up, preventing legitimate packets from getting through to their destination.[4] To combat Denial of Service attacks on the Internet, services like the Smurf Amplifier Registry have given network service providers the ability to identify misconfigured networks and to take appropriate action such as filtering.
Ping flood is based on sending the victim an overwhelming number of ping packets, usually using the "ping" command from unix-like hosts (the -t flag on Windows systems has a far less malignant function). It is very simple to launch, the primary requirement being access to greater bandwidth than the victim.
SYN flood sends a flood of TCP/SYN packets, often with a forged sender address. Each of these packets is handled like a connection request, causing the server to spawn a half-open connection, by sending back a TCP/SYN-ACK packet, and waiting for a packet in response from the sender address. However, because the sender address is forged, the response never comes. These half-open connections saturate the number of available connections the server is able to make, keeping it from responding to legitimate requests until after the attack ends.
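Because each spoofed SYN leaves the server holding a half-open connection, defenders can spot a SYN flood by counting sockets stuck in the SYN_RECV state. The sketch below uses the third-party psutil package to do the tally (an assumption of convenience; `netstat` or `ss` would show the same information), and the alert threshold is an arbitrary illustrative number.

    # Count TCP sockets in the half-open SYN_RECV state.
    # Requires the third-party psutil package; may need elevated privileges.
    import psutil

    THRESHOLD = 200  # illustrative; tune for your server's normal load

    half_open = sum(1 for c in psutil.net_connections(kind="tcp")
                    if c.status == "SYN_RECV")
    print(f"{half_open} half-open connections")
    if half_open > THRESHOLD:
        print("possible SYN flood - consider enabling SYN cookies")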
Teardrop Attacks:
A Teardrop attack involves sending mangled (invalid) IP fragments with overlapping, oversized payloads to the target machine. This can crash various operating systems due to a bug in their TCP/IP fragmentation reassembly code.[5] The Windows 3.1x, Windows 95 and Windows NT operating systems, as well as versions of Linux prior to 2.0.32 and 2.1.63, are vulnerable to this attack.
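The bug behind Teardrop was reassembly code that trusted fragment offsets. A robust reassembler must reject overlapping fragments; the sketch below shows just that core check on (offset, length) pairs. It is a deliberate simplification of real IP reassembly, which also has to handle gaps, timeouts, and total-size limits.

    # Detect overlapping IP fragments from (offset, length) pairs, in bytes.
    def has_overlap(fragments):
        spans = sorted(fragments)          # sort by offset
        prev_end = 0
        for offset, length in spans:
            if offset < prev_end:          # starts before the previous fragment ended
                return True
            prev_end = offset + length
        return False

    # A Teardrop-style pair: the second fragment starts inside the first.
    print(has_overlap([(0, 36), (24, 4)]))   # True  -> drop the datagram
    print(has_overlap([(0, 36), (36, 24)]))  # False -> contiguous, OK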
Peer-to-peer attacks:
Attackers have found a way to exploit a number of bugs in peer-to-peer servers to initiate DDoS attacks. The most aggressive of these peer-to-peer DDoS attacks exploits DC++ (a free and open-source, peer-to-peer file-sharing client used to connect to the Direct Connect network). Peer-to-peer attacks are different from regular botnet-based attacks (a botnet being a collection of software agents, or robots, that run autonomously and automatically): there is no botnet, and the attacker does not have to communicate with the clients it subverts. Instead, the attacker acts as a 'puppet master', instructing clients of large peer-to-peer file-sharing hubs to disconnect from their network and connect to the victim’s website instead.
As a result, several thousand computers may aggressively try to connect to a target website. While a typical web server can handle a few hundred connections per second before performance begins to degrade, most web servers fail almost instantly under five or six thousand connections per second, and with a moderately big peer-to-peer attack a site could be hit with up to 750,000 connections in short order. The targeted web server is plugged up by the incoming connections.
While peer-to-peer attacks are easy to identify with signatures, the large number of IP addresses that need to be blocked (often over 250,000 during the course of a big attack) means that this type of attack can overwhelm mitigation defenses. Even if a mitigation device can keep blocking IP addresses, there are other problems to consider: the connection is opened on the server side for a brief moment before the identifying signature comes through, and only then can the signature be detected and the connection torn down. Even tearing down connections takes server resources and can harm the server.
This method of attack can be prevented by specifying in the peer-to-peer protocol which ports are allowed and which are not. If port 80 is not allowed, the possibilities for attacks on websites can be very limited.
Permanent denial-of-service attacks:
A permanent denial-of-service (PDoS), also known loosely as phlashing,[6] is an attack that damages a system so badly that it requires replacement or reinstallation of hardware.[7] Unlike the distributed denial-of-service attack, a PDoS attack exploits security flaws which allow remote administration on the management interfaces of the victim's hardware, such as routers, printers, or other networking hardware. The attacker uses these vulnerabilities to replace a device's firmware with a modified, corrupt, or defective firmware image—a process which when done legitimately is known as flashing. This therefore "bricks" the device, rendering it unusable for its original purpose until it can be repaired or replaced.
The PDoS is a purely hardware-targeted attack, which can be much faster and requires fewer resources than using a botnet in a DDoS attack. Because of these features, and the potential and high probability of security exploits on Network Enabled Embedded Devices (NEEDs), this technique has come to the attention of numerous hacker communities. PhlashDance is a tool created by Rich Smith (an employee of Hewlett-Packard's Systems Security Lab) and used to detect and demonstrate PDoS vulnerabilities at the 2008 EUSecWest Applied Security Conference in London.[8]
Application level floods
On IRC, IRC floods are a common electronic warfare weapon.
Various DoS-causing exploits, such as buffer overflows, can cause software running on a server to become confused and fill the disk space or consume all available memory or CPU time.
Other kinds of DoS rely primarily on brute force, flooding the target with an overwhelming flux of packets, oversaturating its connection bandwidth or depleting the target's system resources. Bandwidth-saturating floods rely on the attacker having higher bandwidth available than the victim; a common way of achieving this today is via Distributed Denial of Service, employing a botnet. Other floods may use specific packet types or connection requests to saturate finite resources by, for example, occupying the maximum number of open connections or filling the victim's disk space with logs.
A "banana attack" is another particular type of DoS. It involves redirecting outgoing messages from the client back onto the client, preventing outside access, as well as flooding the client with the sent packets.
An attacker with access to a victim's computer may slow it until it is unusable or crash it by using a fork bomb.
Nuke
A Nuke is an old denial-of-service attack against computer networks consisting of fragmented or otherwise invalid ICMP packets sent to the target, achieved by using a modified ping utility to repeatedly send this corrupt data, thus slowing down the affected computer until it comes to a complete stop.
A specific example of a nuke attack that gained some prominence is WinNuke, which exploited a vulnerability in the NetBIOS handler in Windows 95. A string of out-of-band data was sent to TCP port 139 of the victim's machine, causing it to lock up and display a Blue Screen of Death (BSOD).
Distributed attack
A distributed denial of service attack (DDoS) occurs when multiple systems flood the bandwidth or resources of a targeted system, usually one or more web servers. These systems are compromised by attackers using a variety of methods.
Malware can carry DDoS attack mechanisms; one of the better-known examples of this was MyDoom. Its DoS mechanism was triggered on a specific date and time. This type of DDoS involved hardcoding the target IP address prior to release of the malware and no further interaction was necessary to launch the attack.
A system may also be compromised with a trojan, allowing the attacker to download a zombie agent (or the trojan may contain one). Attackers can also break into systems using automated tools that exploit flaws in programs that listen for connections from remote hosts. This scenario primarily concerns systems acting as servers on the web.
Stacheldraht is a classic example of a DDoS tool. It utilizes a layered structure where the attacker uses a client program to connect to handlers, which are compromised systems that issue commands to the zombie agents, which in turn facilitate the DDoS attack. Agents are compromised via the handlers by the attacker, using automated routines to exploit vulnerabilities in programs that accept remote connections running on the targeted remote hosts. Each handler can control up to a thousand agents.[9]
These collections of compromised systems are known as botnets. DDoS tools like Stacheldraht still use classic DoS attack methods centered on IP spoofing and amplification, like smurf attacks and fraggle attacks (these are also known as bandwidth consumption attacks). SYN floods (also known as resource starvation attacks) may also be used. Newer tools can use DNS servers for DoS purposes.
Simple attacks such as SYN floods may appear with a wide range of source IP addresses, giving the appearance of a well-distributed DDoS. These flood attacks do not require completion of the TCP three-way handshake and attempt to exhaust the destination SYN queue or the server bandwidth. Because the source IP addresses can be trivially spoofed, an attack could come from a limited set of sources, or may even originate from a single host. Stack enhancements such as SYN cookies may be effective mitigation against SYN queue flooding; however, complete bandwidth exhaustion may require the involvement of upstream providers.
Unlike MyDoom's DDoS mechanism, botnets can be turned against any IP address. Script kiddies use them to deny the availability of well known websites to legitimate users.[2] More sophisticated attackers use DDoS tools for the purposes of extortion — even against their business rivals.[10]
It is important to note the difference between a DDoS and DoS attack. If an attacker mounts an attack from a single host it would be classified as a DoS attack. In fact, any attack against availability would be classed as a Denial of Service attack. On the other hand, if an attacker uses a thousand systems to simultaneously launch smurf attacks against a remote host, this would be classified as a DDoS attack.
The major advantages to an attacker of using a distributed denial-of-service attack are that multiple machines can generate more attack traffic than one machine, multiple attack machines are harder to turn off than one attack machine, and that the behavior of each attack machine can be stealthier, making it harder to track down and shut down. These attacker advantages cause challenges for defense mechanisms. For example, merely purchasing more incoming bandwidth than the current volume of the attack might not help, because the attacker might be able to simply add more attack machines.
Reflected attack
A distributed reflected denial of service attack (DRDoS) involves sending forged requests of some type to a very large number of computers that will reply to the requests. Using Internet protocol spoofing, the source address is set to that of the targeted victim, which means all the replies will go to (and flood) the target.
ICMP Echo Request attacks (Smurf Attack) can be considered one form of reflected attack, as the flooding host(s) send Echo Requests to the broadcast addresses of mis-configured networks, thereby enticing many hosts to send Echo Reply packets to the victim. Some early DDoS programs implemented a distributed form of this attack.
Many services can be exploited to act as reflectors, some harder to block than others.[11] DNS amplification attacks involve a newer mechanism that increases the amplification effect, using a much larger list of DNS servers than seen earlier.[12]
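The power of such reflectors comes from the size ratio between request and reply. The back-of-the-envelope calculation below uses illustrative numbers (a ~60-byte spoofed query and a ~3,000-byte answer are plausible but assumed figures, not measurements) to show how modest attacker bandwidth becomes a large flood at the victim.

    # Rough arithmetic for a reflection/amplification attack.
    # All figures are illustrative assumptions, not measurements.
    request_bytes  = 60      # spoofed DNS query sent by the attacker
    response_bytes = 3000    # large answer the reflector sends to the victim
    attacker_bps   = 10e6    # 10 Mbit/s of attacker upstream bandwidth

    factor     = response_bytes / request_bytes
    victim_bps = attacker_bps * factor
    print(f"amplification factor ~ {factor:.0f}x")          # ~50x
    print(f"traffic at victim ~ {victim_bps / 1e6:.0f} Mbit/s")  # ~500 Mbit/s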
Degradation-of-service attacks
"Pulsing" zombies are compromised computers that are directed to launch intermittent and short-lived floodings of victim websites with the intent of merely slowing it rather than crashing it. This type of attack, referred to as "degradation-of-service" rather than "denial-of-service", can be more difficult to detect than regular zombie invasions and can disrupt and hamper connection to websites for prolonged periods of time, potentially causing more damage than concentrated floods.[13][14] Exposure of degradation-of-service attacks is complicated further by the matter of discerning whether the attacks really are attacks or just healthy and likely desired increases in website traffic.[15
Unintentional denial of service
Also known as VIPDoS.
This describes a situation where a website ends up denied, not due to a deliberate attack by a single individual or group of individuals, but simply due to a sudden enormous spike in popularity. This can happen when an extremely popular website posts a prominent link to a second, less well-prepared site, for example, as part of a news story. The result is that a significant proportion of the primary site's regular users — potentially hundreds of thousands of people — click that link in the space of a few hours, having the same effect on the target website as a DDoS attack.
An example of this occurred when Michael Jackson died in 2009: websites such as Google and Twitter slowed down or even crashed. Many sites' servers thought the requests were from a virus or spyware trying to cause a denial-of-service attack, warning users that their queries looked like “automated requests from a computer virus or spyware application”.
News sites and link sites — sites whose primary function is to provide links to interesting content elsewhere on the Internet — are most likely to cause this phenomenon. The canonical example is the Slashdot effect. Sites such as Digg, the Drudge Report, Fark, Something Awful, and the webcomic Penny Arcade have their own corresponding "effects", known as "the Digg effect", being "drudged", "farking", "goonrushing" and "wanging", respectively.
Routers have also been known to create unintentional DoS attacks, as both D-Link and Netgear routers have created NTP vandalism by flooding NTP servers without respecting the restrictions of client types or geographical limitations.
Similar unintentional denials of service can also occur via other media, e.g. when a URL is mentioned on television. If a server is being indexed by Google or another search engine during peak periods of activity, or does not have a lot of available bandwidth while being indexed, it can also experience the effects of a DoS attack.
Legal action has been taken in at least one such case. In 2006, Universal Tube & Rollform Equipment Corporation sued YouTube: massive numbers of would-be youtube.com users accidentally typed the tube company's URL, utube.com. As a result, the tube company ended up having to spend large amounts of money on upgrading their bandwidth.[16]
Denial-of-Service Level II
The goal of a DoS Level II (possibly DDoS) attack is to trigger a defense mechanism that blocks the network segment from which the attack originated. In the case of a distributed attack, or of IP header modification (depending on the kind of security behavior), this will fully cut off the attacked network from the Internet, but without a system crash.
Blind denial of service
In a blind denial-of-service attack, the attacker has a significant advantage. To receive traffic from the victim, an attacker would have to either subvert the routing fabric or use his own IP address; either provides an opportunity for the victim to track the attacker and/or filter out his traffic. In a blind attack, the attacker instead uses forged IP addresses, making it extremely difficult for the victim to filter out those packets. The TCP SYN flood attack is an example of a blind attack. Designers should make every attempt possible to prevent blind denial-of-service attacks.[17]
Incidents
• In February 2001, the Irish Government's Department of Finance server was hit by a denial-of-service attack carried out as part of a student campaign from NUI Maynooth. The Department officially complained to the University authorities, and a number of students were disciplined.
• In February 2007, more than 10,000 online game servers for games such as Return to Castle Wolfenstein, Halo, Counter-Strike and many others were attacked by the "RUS" hacker group. The DDoS attack was mounted from more than a thousand computers located in republics of the former Soviet Union, mostly Russia, Uzbekistan and Belarus. Minor attacks continue to this day.
• On June 25, 2009, the day Michael Jackson died, the spike in searches related to Michael Jackson was so big that Google News initially mistook it for an automated attack. As a result, for about 25 minutes, some people searching Google News saw a "We're sorry" page before finding the articles they were looking for.[23]
• On August 6, 2009, several social networking sites, including Twitter, Facebook, LiveJournal, and Google blogging pages, were hit by DDoS attacks apparently aimed at the Georgian blogger "Cyxymu". Although Google came through with only minor setbacks, these attacks left Twitter crippled for hours; Facebook eventually restored service, although some users still experienced trouble. Twitter's site latency continued to improve, though some web requests continued to fail.
Performing DoS-attacks
A wide array of programs is used to launch DoS attacks. Most of these programs are completely focused on performing DoS attacks, while others are also true packet injectors, able to perform other tasks as well.
Examples of such tools are hping, Java socket programming, httping, and many more. Such tools are intended for benign use, but they can also be utilized in launching attacks on victim networks. In addition to these tools, a vast number of underground tools are used by attackers.
Note: hping is a command-line-oriented TCP/IP packet assembler/analyzer. Its interface is inspired by the ping(8) Unix command, but hping isn't only able to send ICMP echo requests: it supports the TCP, UDP, ICMP and raw-IP protocols, has a traceroute mode, the ability to send files over a covert channel, and many other features.
Prevention and response
1. Firewalls
Firewalls have simple rules to allow or deny protocols, ports, or IP addresses. Some DoS attacks are too complex for today's firewalls; e.g., if there is an attack on port 80 (web service), firewalls cannot prevent that attack because they cannot distinguish good traffic from DoS attack traffic. Additionally, firewalls sit too deep in the network hierarchy: routers may be affected even before the firewall sees the traffic. Nonetheless, firewalls can effectively prevent users from launching simple flooding-type attacks from machines behind the firewall.
Some stateful firewalls, like OpenBSD's pf, can act as a proxy for connections: the handshake is validated with the client instead of simply forwarding the packet to the destination. pf is available for other BSDs as well. In that context, this feature is called "synproxy".
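To make "simple rules" concrete, here is a toy first-match packet filter in Python. It is purely illustrative – not how pf or any real firewall is implemented – and the rule set is an assumption for the example; real filters also track connection state, as the synproxy description above notes.

    # Toy first-match filter over (protocol, destination port, source IP).
    # None in a rule field means "match anything".
    RULES = [
        ("deny",  "tcp", 23,   None),           # block telnet everywhere
        ("allow", "tcp", 80,   None),           # allow web traffic
        ("deny",  None,  None, "203.0.113.7"),  # block one misbehaving host
    ]
    DEFAULT = "deny"                             # default-deny posture

    def decide(proto, dst_port, src_ip):
        for action, r_proto, r_port, r_ip in RULES:
            if ((r_proto is None or r_proto == proto) and
                (r_port  is None or r_port  == dst_port) and
                (r_ip    is None or r_ip    == src_ip)):
                return action                    # first matching rule wins
        return DEFAULT

    print(decide("tcp", 80, "198.51.100.5"))     # allow
    print(decide("tcp", 23, "198.51.100.5"))     # deny

The example also shows why port-80 floods are hard for firewalls: a rule that allows web traffic allows the attack traffic too, since the rule cannot tell them apart.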
2. Switches
Most switches have some rate-limiting and ACL capability. Some switches provide automatic and/or system-wide rate limiting, traffic shaping, delayed binding (TCP splicing), deep packet inspection and bogon filtering (bogus-IP filtering) to detect and remediate denial-of-service attacks through automatic rate filtering and WAN link failover and balancing.
These schemes work as long as the DoS attack is something that can be prevented by using them. For example, a SYN flood can be prevented using delayed binding or TCP splicing; similarly, content-based DoS can be prevented using deep packet inspection, and attacks originating from or destined for dark addresses can be prevented using bogon filtering. Automatic rate filtering works as long as the rate thresholds have been set correctly and granularly, and WAN-link failover works as long as both links have a DoS/DDoS prevention mechanism.
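Rate limiting of the kind these devices perform is usually some variant of a token bucket: tokens accumulate at the permitted rate, each packet spends one, and a packet that finds the bucket empty is dropped. A minimal sketch follows; the rate and burst parameters are illustrative.

    # Minimal token-bucket rate limiter.
    import time

    class TokenBucket:
        def __init__(self, rate, burst):
            self.rate = rate              # tokens added per second
            self.capacity = burst         # maximum bucket size (burst allowance)
            self.tokens = burst
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # Refill in proportion to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True               # forward the packet
            return False                  # drop: rate exceeded

    bucket = TokenBucket(rate=100, burst=20)   # 100 packets/s, bursts of up to 20
    print(bucket.allow())                      # True while tokens remain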
3. Routers
Similar to switches, routers have some rate-limiting and ACL capability. They, too, are manually configured. Most routers can be easily overwhelmed under a DoS attack, and adding rules to take flow statistics out of the router during an attack further slows it down and complicates the matter. Cisco IOS has features that help prevent flooding.[29]
4. Application front end hardware
Application front end hardware is intelligent hardware placed on the network before traffic reaches the servers. It can be used on networks in conjunction with routers and switches, and it analyzes data packets as they enter the system, identifying them as priority, regular, or dangerous. There are more than 25 bandwidth management vendors, and hardware acceleration is key to bandwidth management. Look for granularity of bandwidth management, hardware acceleration, and automation when selecting an appliance.
5. IPS based prevention
Intrusion-prevention systems (IPS) are effective if the attacks have signatures associated with them. However, the trend among the attacks is to have legitimate content but bad intent. Intrusion-prevention systems which work on content recognition cannot block behavior-based DoS attacks.
An ASIC based IPS can detect and block denial of service attacks because they have the processing power and the granularity to analyze the attacks and act like a circuit breaker in an automated way. An application-specific integrated circuit (ASIC) is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use. For example, a chip designed solely to run a cell phone is an ASIC. Intermediate between ASICs and industry standard integrated circuits, like the 7400 or the 4000 series, are application specific standard products (ASSPs).
A rate-based IPS (RBIPS) must analyze traffic granularly and continuously monitor the traffic pattern and determine if there is traffic anomaly. It must let the legitimate traffic flow while blocking the DoS attack traffic.
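A very simplified version of what a rate-based IPS does is shown below: keep a running baseline of the per-second request rate and flag intervals that exceed it by a wide margin. The smoothing factor, threshold multiplier, and sample data are illustrative assumptions; a real RBIPS models traffic far more carefully.

    # Flag per-second request counts far above an exponential moving average.
    def detect(counts, alpha=0.1, multiplier=5.0):
        """counts: per-second request counts. Yields (index, count) anomalies."""
        baseline = counts[0]
        for i, c in enumerate(counts):
            if baseline > 0 and c > multiplier * baseline:
                yield i, c                                     # far above baseline
            else:
                baseline = (1 - alpha) * baseline + alpha * c  # update baseline

    samples = [95, 102, 99, 110, 104, 2500, 2600, 97]
    for i, c in detect(samples):
        print(f"second {i}: {c} req/s looks like an attack burst")

Note that the baseline is deliberately not updated during an anomaly, so a sustained flood cannot "teach" the detector that the attack rate is normal.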
6. Prevention via proactive testing
Test platforms such as Mu Dynamics' Service Analyzer are available to perform simulated denial-of-service attacks that can be used to evaluate defensive mechanisms such as IPS and RBIPS, as well as popular denial-of-service mitigation products from Arbor Networks. An example of proactive testing of denial-of-service throttling capabilities in a switch was performed in 2008: the Juniper EX 4200 switch, with integrated denial-of-service throttling, was tested by Network Test, and the resulting review was published in Network World.
7. Blackholing/Sinkholing
With blackholing, all the traffic sent to the attacked DNS name or IP address is routed to a "black hole" (a null interface, a non-existent server, ...). To be more efficient and avoid affecting network connectivity, blackholing can be managed by the ISP.[30]
Sinkholing routes the traffic to a valid IP address that analyzes it and rejects bad packets. Sinkholing is not efficient against the most severe attacks.
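Example (Python): the contrast between the two approaches can be sketched in a few lines. The source-address blocklist used as the "analysis" step here is a deliberately crude stand-in for the far richer traffic inspection a real sinkhole performs; all addresses come from documentation ranges.

BLOCKLIST = {"198.51.100.7"}   # example attacker addresses (illustrative)

def blackhole(packet):
    """Blackholing: everything sent to the attacked address is silently dropped."""
    return None                # nothing is forwarded, good traffic included

def sinkhole(packet):
    """Sinkholing: traffic is routed to an analyzer that rejects bad packets."""
    if packet["src"] in BLOCKLIST:
        return None            # reject traffic judged malicious
    return packet              # forward legitimate traffic to the real server

good = {"src": "192.0.2.10", "payload": b"GET / HTTP/1.1"}
assert blackhole(good) is None       # blackholing also loses legitimate traffic
assert sinkhole(good) is good        # sinkholing preserves it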
8. Clean pipes
All traffic is passed through a "cleaning center" via a proxy, which separates "bad" traffic (DDoS and also other common Internet attacks) and sends only good traffic on to the server. The provider needs central connectivity to the Internet to manage this kind of service.[31]
Prolexic and Verisign are examples of providers of this service.
Denial-of-service attacks and the law
In the Police and Justice Act 2006, the United Kingdom specifically outlawed denial-of-service attacks and set a maximum penalty of 10 years in prison.[34]
Firewalls
Introduction to Internet Firewalls:
Firewalls are an excellent tool for securing a network. A firewall is a system designed to prevent unauthorized access to or from a private network; it basically limits access to a network from another network. A firewall can be implemented in hardware, software, or a combination of both, and it either denies or allows outgoing traffic (known as egress filtering) and incoming traffic (known as ingress filtering).
In an organizational setup, firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria. A firewall should be the first line of defense in protecting the availability, integrity, and confidentiality of data in the computing environment. While a company may use packet-filtering routers for perimeter defense and host-based firewalls as an additional line of defense, in the home environment, the personal firewall plays a key role by defending the network and individual host perimeters.
Firewall software monitors your computer for suspicious activity while you are online. Inbound intruders are stopped before they can get in, and sensitive information and Trojan horses are stopped before they can get out. Furthermore, a record of the attack is created, including the IP address the attack came from. This can help the ISP figure out where the attack originated so the hackers can be tracked down. Overall, it is important to be smart about hackers: realizing that you are vulnerable to their attacks is an important first step. Somebody who really wants into your computer may still find a way to do it, but the point is to make it as difficult as possible and to send those who are just looking for an opportunity on to an easier target.
A firewall is defined as a system designed to prevent unauthorized access to or from a private network. Firewalls can be implemented in both hardware and software. All messages communicating with the intranet pass through the firewall, which inspects and blocks all messages that do not meet the security stipulations.
The fundamental principle is to give the administrator a single point where the preferred policies can be enforced. This single point of control allows the administrator to conceal characteristics of a private network and protect it.
Uses of Firewall:
Protect machines on the network from hackers attempting to log into them.
Provide a single access point where security and auditing can be imposed.
Act as an effective phone tap and tracing tool.
Provide an important logging and auditing function.
Provide information about the nature of traffic and the number of attempts made to break into the network.
Firewall Limitations:
Firewalls cannot protect against attacks that do not go through the firewall. A prerequisite for a firewall to work is that it be part of a consistent overall organizational security architecture.
A firewall can't protect the network against a traitor inside the network environment. Although an industrial spy might export information through your firewall, the traitor is just as likely to export it through a telephone, a fax machine, or a floppy disk. Firewalls also cannot protect against social engineering.
Lastly, firewalls cannot protect against tunneling over most application protocols to trojaned or poorly written clients. Tunneling bad things over HTTP, SMTP, and other protocols is quite common.
Functionality of Firewalls:
1. Packet Filtering:
For each packet received, the packet filter makes a permit/deny decision. The filtering rules are based on the packet header information, which consists of the IP source address, the IP destination address, the encapsulated protocol, the TCP/UDP source port, the TCP/UDP destination port, and the ICMP message type.
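Example (Python): as a rough illustration, a packet filter can be modeled as a first-match walk over a rule table keyed on those header fields. The rule format and the sample addresses below are assumptions made for the sketch, not any real router syntax.

RULES = [
    # (header-field pattern, action); "*" matches any value
    ({"src": "*", "dst": "10.0.0.5", "proto": "tcp", "dport": 80}, "permit"),
    ({"src": "*", "dst": "*", "proto": "icmp", "dport": "*"}, "deny"),
]

def matches(rule, packet):
    return all(v == "*" or packet.get(k) == v for k, v in rule.items())

def filter_packet(packet):
    """Return the action of the first matching rule; deny by default."""
    for rule, action in RULES:
        if matches(rule, packet):
            return action
    return "deny"

print(filter_packet({"src": "192.0.2.1", "dst": "10.0.0.5",
                     "proto": "tcp", "dport": 80}))   # permit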
2. Application level gateway:
An application level gateway is a proxy that is installed on the gateway for each desired application. It does not allow direct exchange of packets; if a particular application does not have a proxy on the gateway, the service is not forwarded across the firewall.
3. Circuit level gateway: A circuit level gateway is a specific function that can be performed by an application level gateway. It does not perform any additional packet processing or filtering; it simply copies bytes back and forth between the inside and outside connections. It is often used for outgoing connections.
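Example (Python): a minimal sketch of that byte-copying behavior, assuming the connection has already been authorized by the gateway. The bytes are relayed untouched, with no further inspection.

import socket
import threading

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until the source side closes."""
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        dst.close()

def splice(inside: socket.socket, outside: socket.socket) -> None:
    """Relay in both directions, one thread per direction, no inspection."""
    threading.Thread(target=relay, args=(inside, outside), daemon=True).start()
    relay(outside, inside)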
Basic Types of Firewalls:
There are two types of firewalls:
Network layer
Application layer
Network layer firewalls:
These firewalls use the source address, destination address, and ports in individual IP packets to make their decisions. A simple router, by contrast, is not able to make decisions about the nature and destination of a packet. The distinguishing characteristic of network layer firewalls is that they route traffic directly through them. They are very fast and tend to be very transparent to users.
Application layer firewalls:
They are hosts running proxy servers, which permit no traffic directly between networks and which perform intricate logging and auditing of traffic passing through them. Modern application layer firewalls are often fully transparent.
Network layer firewalls are becoming increasingly aware of the information going through them, while application layer firewalls are becoming increasingly transparent. The end result will be a fast packet-screening system that logs and audits data as it passes through.
Personal Firewalls
Personal firewalls are meant to protect desktop PCs and small networks connected to the Internet. A personal firewall is a software program used to guard and protect a computer or a network while it is connected to the Internet. Generally, home and small networks use personal firewalls because they are relatively inexpensive and usually easy to install. A personal firewall enforces the security policies of a computer or a network by intercepting and examining the data transported (data packets) over the network. The security mechanism of a personal firewall works in one of two ways: either it allows all data packets to enter the network except those meeting specified criteria (the restricted ones), or it denies all data packets except those that are explicitly allowed. Experts recommend the latter, deny-by-default stance as better for the security of a network.
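Example (Python): the two stances can be sketched as follows; the allow and restrict lists are illustrative assumptions. Note how an unexpected service slips past the default-allow stance but not the recommended deny-by-default one.

ALLOWED = {("tcp", 80), ("tcp", 443), ("udp", 53)}   # deny-by-default whitelist
RESTRICTED = {("tcp", 23)}                           # allow-by-default blacklist

def default_deny(proto, port):
    """Deny everything except packets explicitly allowed (recommended)."""
    return (proto, port) in ALLOWED

def default_allow(proto, port):
    """Allow everything except packets explicitly restricted."""
    return (proto, port) not in RESTRICTED

# Telnet (tcp/23) is stopped either way, but an unexpected service on
# tcp/8080 slips through default-allow while default-deny still blocks it.
assert not default_deny("tcp", 8080) and default_allow("tcp", 8080)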
While simple personal firewall solutions are administered by users themselves, in a small network they are administered by a central security management system that implements a network-wide security policy. The primary aim of a personal firewall is to close any loopholes that remain in a network and in known virus scanners so as to provide full protection to the computers in the network. When a data packet moves out of the network, it carries the IP address of the system/network with it. Personal firewalls, with the help of NAT (network address translation), substitute a different IP address inside the outgoing Internet data packets so that the original IP address cannot be traced.
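Example (Python): a minimal sketch of that address substitution. Outgoing packets get the firewall's public address and a fresh port, and a translation table maps replies back to the internal host. All addresses and port numbers here are illustrative.

PUBLIC_IP = "203.0.113.1"   # the firewall's outside address (documentation range)
table = {}                  # public port -> (internal address, internal port)
next_port = 40000

def translate_out(packet):
    """Rewrite the source of an outgoing packet to hide the internal address."""
    global next_port
    table[next_port] = (packet["src"], packet["sport"])
    packet["src"], packet["sport"] = PUBLIC_IP, next_port
    next_port += 1
    return packet

def translate_in(packet):
    """Map a reply arriving at the public address back to the internal host."""
    packet["dst"], packet["dport"] = table[packet["dport"]]
    return packet

out = translate_out({"src": "192.168.1.20", "sport": 5555,
                     "dst": "8.8.8.8", "dport": 53})
reply = {"src": "8.8.8.8", "sport": 53, "dst": PUBLIC_IP, "dport": out["sport"]}
print(translate_in(reply))  # dst is 192.168.1.20:5555 again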
Features and Benefits:
In recent years, broadband and other faster Internet connections have become widely available, which has led to the need for software firewalls that can be implemented and maintained by average users. Currently, many software vendors are competing for the home and small-network market and trying to package as many features as possible into their products. Below is a list, with explanations, of some of the main features that personal firewall vendors offer.
Inbound and Outbound Packet Filtering: Filtering incoming data packets according to the security policies (created by the users or administrator) is the main function of a firewall. Data packets can be filtered using any of their attributes, such as protocol, source address and port number, and destination address and port number. Filtering outgoing packets is an equally important feature of personal firewalls.
Stealth Mode:
Before attempting to penetrate a system protected by a personal firewall, an intruder usually tries to identify the target system and create a footprint of it. They may also scan it for open ports and information such as OS type and application versions. If an intruder is unable to find the system, he will not be able to penetrate it. Stealth mode does not mean that the machine's IP address is invisible; rather, it makes the machine's most vulnerable entry points invisible to the tools that intruders use to seek out targets, essentially blocking any port that is not in use.
Support Custom Rules:
This feature allows the user to customize the security policy beyond the rules that come with the personal firewall. A user can write a security policy to block data packets by IP address, port number, or protocol, or can define custom ports and protocols for applications such as video conferencing and Voice over IP.
Ad Blocking:
This feature blocks unwanted advertisements from displaying in the user's Web browser. There are several different types of ads used by Web sites, including pop-up ads, animated ads, skyscraper ads, and banner ads. Some personal firewalls allow the user to change the filtering rules for the different types of ads.
Content filtering:
Also referred to as "parental control", this feature gives the ability to block Web sites based on their content. Filtering can be based on a database listing such sites, a user-created list of sites, or a list of keywords found in Web pages.
Cookie Control:
A cookie is a small text file that a Web site places on a computer; it can contain personal information such as name, address, phone number, and password. Cookies can last for the duration of the current Internet session, or they can be persistent and reside on the computer indefinitely. There is also a type called a third-party cookie, which can be placed on a computer to record information about the user's Internet surfing habits. The cookie control feature allows the user to block these cookies from being placed on the computer; some vendors allow the user to distinguish between the types of cookies being blocked.
Mobile Code Protection:
Mobile code is active or executable code embedded in Web pages or HTML email, such as Java applets, ActiveX controls, and plug-ins. Mobile code can sometimes be malicious, with the ability to copy files, steal passwords, or wipe out hard drives. This feature blocks the mobile code from executing and gives an alert asking the user whether they want the code to execute.
Intrusion Detection:
From the perspective of a home or small-office user, intrusion detection is the process of monitoring the events occurring within the computer system or network and analyzing them for signs of intrusion. If an intruder gets past the firewall, this feature gives an alert to the user that something suspicious is going on.
Intruder Tracking:
When an intrusion threat is detected, this feature identifies the source of the intrusion attempt. Some firewalls even display a map showing the approximate geographic location of the intruder.
Logging:
This feature creates a log file that lists the data packet transmissions blocked by the firewall. Information in this log file includes whether the transmission was inbound or outbound, the date and time the block occurred, the source IP address and port number, the destination IP address and port number, and the transport protocol, such as TCP, UDP, ICMP, or IGMP.
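Example (Python): a sketch of what one such log record might look like, covering the fields listed above. The exact field names and layout vary by product, so this format is only an assumption.

import datetime

def log_blocked(direction, src, sport, dst, dport, proto):
    """Format one blocked-transmission record with the fields listed above."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    return f"{stamp} BLOCKED {direction} {src}:{sport} -> {dst}:{dport} {proto}"

print(log_blocked("inbound", "198.51.100.7", 4242, "10.0.0.5", 139, "TCP"))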
Email Checking:
Email messages can carry attachments containing viruses, worms, and other malicious code. Only certain types of attachments can contain malicious code, and these can be identified by their filename extensions. This feature checks incoming email for attachments with file extensions that could be malicious; an alert is usually given and the attachment is quarantined.
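Example (Python): a minimal sketch of the extension check. The blocklist below is an illustrative assumption, as real products ship and update their own lists.

import os

RISKY_EXTENSIONS = {".exe", ".scr", ".vbs", ".pif", ".bat", ".js"}

def quarantine_needed(filename: str) -> bool:
    """Flag attachments whose extension could carry executable code."""
    _, ext = os.path.splitext(filename.lower())
    return ext in RISKY_EXTENSIONS

print(quarantine_needed("holiday_photos.jpg"))  # False
print(quarantine_needed("invoice.pdf.exe"))     # True: alert and quarantine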
Application Authentication:
A major threat to a computer system is a Trojan horse, since it is easy to download malicious software without knowing it. Some Trojan horse applications can take on the same name, size, and directory structure as a program that is permitted to access the Internet. To combat this problem, a hashing algorithm is used to create a digital signature each time a program is executed, and this is compared to the previously stored digital signature of the same program. If the digital signatures are not equal, the user is alerted. Some firewall software even includes the components associated with a program's main executable file, such as DLL files, in the digital signature.
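Example (Python): a minimal sketch of the idea, hashing the program file at launch and comparing against the digest stored when the rule was created. SHA-256 is used here as an illustrative choice of hashing algorithm.

import hashlib

def digest(path: str) -> str:
    """Hash the program file in chunks so large binaries are handled cheaply."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_authentic(path: str, stored_digest: str) -> bool:
    """False means the binary changed since it was authorized: alert the user."""
    return digest(path) == stored_digest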
Internet Connection Sharing (ICS) Support:
Internet Connection Sharing software is used when multiple computers on a home or small network connect to the Internet through one computer, called a gateway, that is connected to the Internet. This feature allows the firewall software to work in conjunction with ICS software to filter data packets flowing through the gateway computer.
Choosing a Firewall for Home and Small Office:
There are certain key criteria that should be considered when selecting a personal software firewall for home and small networks. Users should identify the criteria that are important to them and then find the personal firewall product that best meets those criteria. Some of the key criteria are:
Effectiveness of security protection - How well the firewall protects against intrusion, Trojans, and denial of service, and how well it controls outbound traffic.
Effectiveness of intrusion detection - How effectively the firewall software alerts the user when the system is being attacked.
Effectiveness of reaction - Whether the software can discover the identity of the attacker and how well it blocks attacks.
Cost - The price of the firewall and the cost of setting it up can be an important criterion for small organizations.
Software Applications
Software may be applied in any situation for which a prespecified set of procedural steps (i.e., an algorithm) has been defined (notable exceptions to this rule are expert system software and neural network software). Information content and determinacy are important factors in determining the nature of a software application. Content refers to the meaning and form of incoming and outgoing information.
For example, many business applications use highly structured input data (a database) and produce formatted "reports." Software that controls an automated machine (e.g., a numerical control) accepts discrete data items with limited structure and produces individual machine commands in rapid succession.
Information determinacy refers to the predictability of the order and timing of information. An engineering analysis program accepts data that have a predefined order, executes the analysis algorithm(s) without interruption, and produces resultant data in report or graphical format. Such applications are determinate. A multiuser operating system, on the other hand, accepts inputs that have varied content and arbitrary timing, executes algorithms that can be interrupted by external conditions, and produces output that varies as a function of environment and time. Applications with these characteristics are indeterminate.
It is somewhat difficult to develop meaningful generic categories for software applications. As software complexity grows, neat compartmentalization disappears. The following software areas indicate the breadth of potential applications:
System software:
System software is a collection of programs written to service other programs. Some system software (e.g., compilers, editors, and file management utilities) processes complex but determinate information structures. Other systems applications (e.g., operating system components, drivers, telecommunications processors) process largely indeterminate data. In either case, the system software area is characterized by heavy interaction with computer hardware; heavy usage by multiple users; concurrent operation that requires scheduling, resource sharing, and sophisticated process management; complex data structures; and multiple external interfaces.
Real-time software:
Software that monitors/analyzes/controls real-world events as they occur is called real time.
Elements of real-time software include a data gathering component that collects and formats information from an external environment, an analysis component that transforms information as required by the application, a control/output component that responds to the external environment, and a monitoring component that coordinates all other components so that real-time response (typically ranging from 1 millisecond to 1 second) can be maintained.
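Example (Python): a minimal sketch of those four components arranged as one loop with a fixed response deadline. The 100-millisecond budget and the stub functions are illustrative assumptions.

import time

DEADLINE = 0.100   # seconds: the assumed response-time budget for one cycle

def gather():    return 42     # data gathering: read and format external input
def analyze(x):  return x * 2  # analysis: transform as the application requires
def respond(y):  pass          # control/output: act on the external environment

def monitor_loop(cycles):
    """Monitoring component: run the pipeline and watch the deadline."""
    for _ in range(cycles):
        start = time.monotonic()
        respond(analyze(gather()))
        elapsed = time.monotonic() - start
        if elapsed > DEADLINE:
            print(f"deadline missed: {elapsed * 1000:.1f} ms")
        time.sleep(max(0.0, DEADLINE - elapsed))

monitor_loop(3)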
Business software:
Business information processing is the largest single software application area. Discrete "systems" (e.g., payroll, accounts receivable/payable, inventory) have evolved into management information system (MIS) software that accesses one or more large databases containing business information. Applications in this area restructure existing data in a way that facilitates business operations or management decision making. In addition to conventional data processing applications, business software applications also encompass interactive computing (e.g., point of sale transaction processing).
Engineering and scientific software:
Engineering and scientific software have been characterized by "number crunching" algorithms. Applications range from astronomy to volcanology, from automotive stress analysis to space shuttle orbital dynamics, and from molecular biology to automated manufacturing. However, modern applications within the engineering/scientific area are moving away from conventional numerical algorithms. Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.
Embedded software:
Intelligent products have become commonplace in nearly every consumer and industrial market. Embedded software resides in read-only memory and is used to control products and systems for the consumer and industrial markets. Embedded software can perform very limited and esoteric functions (e.g., keypad control for a microwave oven) or provide significant function and control capability (e.g., digital functions in an automobile such as fuel control, dashboard displays, and braking systems).
Personal computer software:
The personal computer software market has burgeoned over the past two decades. Word processing, spreadsheets, computer graphics, multimedia, entertainment, database management, personal and business financial applications, external network, and database access are only a few of hundreds of applications.
Web-based software:
The Web pages retrieved by a browser are software that incorporates executable instructions (e.g., CGI, HTML, Perl, or Java), and data (e.g. hypertext and a variety of visual and audio formats). In essence, the network becomes a massive computer providing an almost unlimited software resource that can be accessed by anyone with a modem.
Artificial intelligence software:
AI software makes use of nonnumerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Expert systems, also called knowledge-based systems, pattern recognition (image and voice), artificial neural networks, theorem proving, and game playing are representative of applications within this category.